- …, Statistical Physics, Genome Annotation, and/or related fields; practical experience with High Performance Computing systems as well as parallel/distributed programming; very good command of written and spoken …
- …cutting-edge Machine Learning applications on the Exascale computer JUPITER. Your work will include: developing, implementing, and refining ML techniques suited for the largest scale; parallelizing model training and optimizing the execution; user support in …
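The posting above mentions parallelizing model training at the largest scale. Purely as an illustrative sketch, and not the project's actual code, data-parallel training with PyTorch's DistributedDataParallel (one process per GPU, launched e.g. via torchrun) could look like the following; the model, sizes, and loss are placeholders.

```python
# Minimal data-parallel training sketch (illustrative only).
# Assumes one process per GPU, launched e.g. via `torchrun --nproc_per_node=4 train.py`.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Initialise the process group; torchrun sets RANK/WORLD_SIZE/LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real workload would define its own architecture.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()      # dummy loss for illustration
        optimizer.zero_grad()
        loss.backward()                    # gradients are all-reduced across ranks
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```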
- …willingness to learn: high-performance computing (distributed systems, profiling, performance optimization); training large AI models (PyTorch/JAX/TensorFlow, parallelization, mixed precision); data analysis …
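The "mixed precision" item above refers to running most of the forward/backward pass in reduced precision while keeping numerically sensitive operations in float32. A minimal sketch using PyTorch's automatic mixed precision (one of the frameworks listed) is shown below; the model and data are placeholders and not part of the posting.

```python
# Minimal mixed-precision training loop sketch (placeholder model and data).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(512, 512).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    x = torch.randn(64, 512, device=device)
    optimizer.zero_grad()
    # Run the forward pass in float16 where safe; sensitive ops stay in float32.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = model(x).pow(2).mean()
    # Scale the loss to avoid underflow of float16 gradients, then step and rescale.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```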
- …results. Machine Learning skills to automate the comparison process. Unbiased approach to different theoretical models. Experience in HPC system usage and parallel/distributed computing. Knowledge of GPU-based …
- …hydrodynamics and/or N-body simulations in the star and planet formation context; experience with HPC system usage and parallel/distributed computing; knowledge of GPU-based programming would be …
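For context on the N-body simulations mentioned above, the sketch below illustrates the kind of computation involved: direct-summation gravitational accelerations in NumPy with softening. This is an illustration only; production star/planet-formation codes use tree or particle-mesh algorithms, typically on GPUs, and the function name and parameters here are hypothetical.

```python
# Illustrative direct-summation gravitational accelerations (O(N^2), NumPy only).
# Real star/planet-formation codes use tree or particle-mesh methods, often on GPUs.
import numpy as np

def accelerations(pos, mass, G=1.0, softening=1e-3):
    """Return the gravitational acceleration on each of N particles.

    pos  : (N, 3) array of positions
    mass : (N,)   array of masses
    """
    # Pairwise separation vectors r_ij = pos_j - pos_i, shape (N, N, 3).
    dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Softened squared distances avoid the singularity as r -> 0.
    r2 = np.sum(dx**2, axis=-1) + softening**2
    inv_r3 = r2 ** (-1.5)
    np.fill_diagonal(inv_r3, 0.0)          # no self-interaction
    # a_i = G * sum_j m_j * r_ij / |r_ij|^3
    return G * np.einsum("ij,ijk->ik", mass[np.newaxis, :] * inv_r3, dx)

# Tiny usage example with random particles.
rng = np.random.default_rng(0)
pos = rng.standard_normal((100, 3))
mass = rng.uniform(0.5, 1.5, size=100)
acc = accelerations(pos, mass)
print(acc.shape)  # (100, 3)
```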