- … Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional qualifications: experience with multi-GPU model training and large-scale inference; familiarity with modern AI …
- … state-of-the-art equipment within DNA and RNA sequencing, laboratory automation, CPU and GPU compute resources, proteomics, metabolomics, and advanced microscopy. This position offers an excellent opportunity …
- … programming (shared and distributed memory, GPU programming, etc.); demonstrated experience with distributed-memory MPI programming; experience with collaborative software design, development, and testing …
- … programming; LAMP-stack design and implementation experience; knowledge of GPU and FPGA cluster management; experience with federal research compliance and security requirements; background in AI/ML computing …
- … disease insights. The lab has state-of-the-art computing capabilities, with an in-house cluster serving 80 CPU cores and 1.5 TB of RAM, as well as a newly acquired NVIDIA DGX box with eight H100 GPUs and 224 …
- … Expertise in data and model parallelism for distributed training on large GPU-based machines is essential. Candidates with experience using diffusion-based or other generative AI methods as …
- … Programming & software development: proficiency in Python, PyTorch, JAX, or other ML frameworks. Computing: some experience with large-scale datasets, parallel computing, and GPUs/TPUs. Algorithm development …
- … software aspects of large-scale AI systems. Areas of interest may include, but are not limited to: advanced accelerator chip technologies, such as GPUs or other specialized chips for large-scale AI …
- … (E13) up to 5 years; international collaboration to build a large radiotherapy dataset; dedicated GPU infrastructure; strong collaborations within TUM's AI ecosystem; high-impact publication potential …
- … telemetry systems) into immersive environments. Optimize XR applications for performance, including CPU/GPU profiling, draw-call reduction, shader optimization, memory management, and LOD systems. Develop …