- PhD position: Global soil mapping with process-informed machine learning. Faculty: Faculty of Geosciences. Department: Department of Physical Geography. Hours per week: 36 to 40. Application deadline
- Mathematics (Inverse Problems), Computer Science (Machine Learning, Efficient Algorithms and High-Performance Computing), and Physics (Image Formation Modelling). Your project is part of the NXTGen High-tech
- 2x PhD positions in the Mathematical Foundations of Machine Learning on Graphs and Networks. The Discrete Mathematics and Mathematical Programming (DMMP) group
- A degree in AI, Computing Science, Mathematics, or Data Science. Strong coding, communication, and organizational skills. Demonstrable experience with machine learning packages (e.g., PyTorch). Completed academic courses in AI or machine learning. We consider it an advantage if you bring experience with
- Supervision signals (e.g., labels in a downstream task or symbolic constraints). You will perform machine learning research, developing a framework for learning interpretable and robust concepts with
- Modelling (e.g., agent-based Bayesian models, cognitive learning models, machine learning). Experience with annotation software such as ELAN and PRAAT. Existing peer-reviewed journal publications and conference
- To learn more about the project, and perhaps our group, feel free to browse our webpages. About our department: QCE department. About our group: Computer Engineering Lab.
- Website: https://www.academictransfer.com/en/jobs/358703/phd-in-scalable-safe-ai-for-sem… Requirements: a master's degree in AI, Machine Learning, Data Science, Computer Science, or a
- discipline. Experience with the deep learning framework PyTorch or similar. Strong background in machine learning, image or signal processing. Knowledge of SotA models for multi-modality and scene understanding