Employer
- Nature Careers
- Oak Ridge National Laboratory
- Argonne
- CNRS
- Duke University
- Technical University of Munich
- New York University Abu Dhabi
- Stanford University
- Aarhus University
- Harvard University
- Max Planck Institute for Multidisciplinary Sciences, Göttingen
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- Technical University of Denmark
- University of Luxembourg
- University of Miami
- University of Nebraska Medical Center
- University of North Carolina at Chapel Hill
- AI4I
- Brookhaven National Laboratory
- Chalmers University of Technology
- ELETTRA - SINCROTRONE TRIESTE S.C.P.A.
- ETH Zürich
- Eindhoven University of Technology (TU/e)
- FAPESP - São Paulo Research Foundation
- Flanders Institute for Biotechnology
- Forschungszentrum Jülich
- Helmholtz-Zentrum Dresden-Rossendorf - HZDR - Helmholtz Association
- ICN2
- Max Planck Institute for Solar System Research, Göttingen
- Max Planck Institute of Animal Behavior, Radolfzell / Konstanz
- McGill University
- Nagoya University
- Northeastern University
- Sandia National Laboratories
- Texas A&M University
- University of Basel
- University of Central Florida
- University of Jyväskylä
- University of New Hampshire
- University of Turku
- University of Utah
- Utrecht University
- VIB
- "… scientists and engineers are accustomed to. Moreover, the vast majority of the performance associated with these reduced-precision formats resides in special hardware units such as tensor cores on NVIDIA GPUs …"
- "… disease insights. The lab has state-of-the-art computing capabilities, with an in-house cluster serving 80 CPU cores and 1.5 TB of RAM, as well as a newly acquired NVIDIA DGX box with eight H100 GPUs and 224 …"
- "…). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications: Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI …"
- "…). Expertise in data and model parallelism for distributed training on large GPU-based machines is essential. Candidates with experience using diffusion-based or other generative AI methods as …"
- "… managing supercomputer resources. Strong skills in algorithm development for large sparse matrices. Excellence in programming GPU accelerators from all major vendors. Very good command of written and spoken …"
- "… E13) up to 5 years. International collaboration to build a large radiotherapy dataset. Dedicated GPU infrastructure. Strong collaborations within TUM’s AI ecosystem. High-impact publication potential …"
- "… projects at CASS. The center fellows will have access to a 70,000-core InfiniBand cluster (Jubail) dedicated to the science division, several GPU-based clusters at NYUAD, and other supercomputer facilities …"
- "… mathematicians, and domain scientists. Develop software that integrates machine learning and numerical techniques targeting heterogeneous architectures (GPUs and accelerators), including DOE leadership-class …"
- "… finite-element models (e.g., Poisson, linear elasticity, large-deformation soft tissue) for real-time execution on AR devices and GPUs. Implement these models within open-source frameworks such as SOFA …"
- "… engineering. The work involves simulations for quantum error correction and mid-circuit operations, and will require both low-level optimization skills (e.g., SIMD, GPU, FPGA) and an understanding of quantum …"