Employer
- Forschungszentrum Jülich
- Technical University of Munich
- DAAD
- Leibniz
- Nature Careers
- Fraunhofer-Gesellschaft
- Max Planck Institute for Multidisciplinary Sciences, Göttingen
- Academic Europe
- Heidelberg University
- Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Helmholtz Association
- Humboldt-Universität zu Berlin
- Max Planck Institute for Innovation and Competition, Munich
- Max Planck Institute for Radio Astronomy, Bonn
- Max Planck Institute for Solar System Research, Göttingen
- Max Planck Institute of Animal Behavior, Radolfzell / Konstanz
- Max Planck Institute of Geoanthropology, Jena
- NEC Laboratories Europe GmbH
- University of Tübingen
- …is of advantage: knowledge of parallel programming and HPC architectures, including accelerators (e.g., GPUs); experience in modelling and simulation, ideally in the field of energy systems; experience…
- …managing supercomputer resources; strong skills in algorithm development for large sparse matrices; excellence in programming GPU accelerators from all major vendors; very good command of written and spoken…
- …E13), up to 5 years; international collaboration to build a large radiotherapy dataset; dedicated GPU infrastructure; strong collaborations within TUM's AI ecosystem; high-impact publication potential…
- …program embedded in a large-scale, nationally funded research consortium with access to unique multimodal clinical datasets; state-of-the-art GPU infrastructure for training and fine-tuning large…
- …physics, mathematics or any related field. What we offer: state-of-the-art on-site high-performance/GPU compute facilities; competitive research in an inspiring, world-class environment; a wide range of offers to help you…
- …containers (Docker/Singularity/Podman/Kubernetes); experience with Ethernet, InfiniBand and RDMA network technologies; CPU/GPU/memory/RAID/storage/data-center technologies; knowledge of current technological…
- …approaches, the application of meta-learning, and the integration of convex optimization layers; increase inference efficiency (e.g., GPU acceleration) and assess the applicability domain of learned algorithms…
- …commonly used on Unix systems; additional languages or experience with libraries for utilizing GPU hardware efficiently (e.g., CUDA) are a plus; experience in AI programming with, e.g., PyTorch(-DDP…
- Your profile: preferably a doctoral degree, but MSc holders are also encouraged to apply; expert knowledge in one or several of the following: high-performance computing, GPU computing, array computing with JAX…