- … 100% funding per SNSF guidelines (~CHF 90'000/year); access to modern GPU clusters and confidential-computing infrastructure; collaboration with leading researchers in AI & HPC systems and digital health …
- … Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional qualifications: experience with multi-GPU model training and large-scale inference; familiarity with modern AI …
- … experience in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP; experience with scientific computing and software development on HPC systems; ability to conduct …
- … programming (shared and distributed memory, GPU programming, etc.); demonstrated experience with distributed-memory MPI programming; experience with collaborative software design, development, and testing …
- … variety of computational devices (e.g., CPUs and GPUs) while ensuring overall consistency and performance; contribute to identifying new CSE application domains, such as condensed-matter systems, quantum …
- … with OFDM modulation required. Skills: programming skills in MATLAB and/or Python required; experience with wireless testbeds desirable; some familiarity with GPU programming desirable (to support …
- … /TimeSformer, CLIP/BLIP, or similar) in PyTorch, including scalable training on GPUs and reproducible experimentation; demonstrated experience building explainable models (e.g., concept bottlenecks, prototype …
- … scientists and engineers are accustomed to. Moreover, the vast majority of the performance associated with these reduced-precision formats resides in special hardware units such as tensor cores on NVIDIA GPUs …
- … managing supercomputer resources; strong skills in algorithm development for large sparse matrices; excellence in programming GPU accelerators from all major vendors; very good command of written and spoken …
- … disease insights. The lab has state-of-the-art computing capabilities, with an in-house cluster serving 80 CPU cores and 1.5 TB of RAM, as well as a newly acquired NVIDIA DGX box with eight H100 GPUs and 224 …