Employers
- Forschungszentrum Jülich
- Oak Ridge National Laboratory
- University of Innsbruck, Institute of Computer Science
- University of Utah
- CNRS
- Ecole Centrale de Lyon
- Lawrence Berkeley National Laboratory
- National Renewable Energy Laboratory (NREL)
- Northeastern University
- Singapore-MIT Alliance for Research and Technology
- The University of North Carolina at Chapel Hill
Results
- …engineering; Formal methods, models, and languages; Interactive and cognitive systems; Distributed systems, parallel computing, and networks. The successful candidate will work closely with teams specializing…
- …or deployment at scale. A proven track record of high-quality research contributions published in top-tier machine learning conferences or journals. Proficiency in high-performance computing, distributed and…
- …tight AI-simulation coupling. What is Required: PhD in Physics, Chemistry, Computational Science, Data Science, Computer Science, Applied Mathematics, or a related numerical field. Programming experience…
- …systems projects. We are developing the Apollo application development and computing environment. We have coordinated several EU projects on distributed and parallel systems, including the edutain@grid…
- …as well as the entire PhD course and research program are held in English only. There is no need to learn German for these positions. Preferred Skills: Basic Computer Science • Distributed systems (Cloud…
- The University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | 2 months ago:
  …and Experience: Distributed parallel training and parameter-efficient tuning. Familiarity with multi-modal foundation models, HITL techniques, and prompt engineering. Experience with LLM fine-tuning…
- …Statistical Physics, Genome Annotation, and/or related fields. Practical experience with High Performance Computing systems as well as parallel/distributed programming. Very good command of written and spoken…
- …willingness to learn: High-performance computing (distributed systems, profiling, performance optimization), Training large AI models (PyTorch/JAX/TensorFlow, parallelization, mixed precision), Data analysis… (a minimal mixed-precision training sketch follows this list)
- …Demonstrated experience developing and running computational tools for high-performance computing environments, including distributed parallelism for GPUs. Demonstrated experience in common scientific programming…
- …for an accurate simulation of time-dependent flows, enabling sensitive applications such as aeroacoustics. Furthermore, the high scalability on massively parallel computers can lead to advantageous turn-around…
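
Several of the excerpts above ask for experience with mixed-precision training of large AI models in PyTorch. As a minimal, illustrative sketch only (the model, sizes, and training step below are hypothetical and not taken from any listing), a typical PyTorch automatic-mixed-precision step looks like this:

    import torch
    from torch import nn

    # Hypothetical toy model; any nn.Module would do. Assumes a CUDA device is available.
    model = nn.Linear(512, 10).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()  # scales the loss so fp16 gradients do not underflow

    def train_step(batch: torch.Tensor, labels: torch.Tensor) -> float:
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():  # run the forward pass in reduced precision where safe
            loss = nn.functional.cross_entropy(model(batch), labels)
        scaler.scale(loss).backward()    # backward pass on the scaled loss
        scaler.step(optimizer)           # unscale gradients, then take the optimizer step
        scaler.update()                  # adapt the loss scale for the next iteration
        return loss.item()

For the multi-GPU work these listings describe, the same step is usually wrapped in torch.nn.parallel.DistributedDataParallel, which adds gradient all-reduce across ranks without changing the training-step code.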