Employer
- Oak Ridge National Laboratory
- Argonne
- Nature Careers
- CNRS
- Duke University
- Technical University of Munich
- New York University Abu Dhabi
- Stanford University
- Aarhus University
- Harvard University
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- Technical University of Denmark
- Texas A&M University
- University of Luxembourg
- University of Miami
- University of North Carolina at Chapel Hill
- AI4I
- Brookhaven National Laboratory
- Chalmers University of Technology
- Dublin City University
- ELETTRA - SINCROTRONE TRIESTE S.C.P.A.
- ETH Zürich
- Eindhoven University of Technology (TU/e)
- FAPESP - São Paulo Research Foundation
- Forschungszentrum Jülich
- Helmholtz-Zentrum Dresden-Rossendorf - HZDR - Helmholtz Association
- Max Planck Institute for Solar System Research, Göttingen
- Max Planck Institute of Animal Behavior, Radolfzell / Konstanz
- McGill University
- Nagoya University
- Northeastern University
- Sandia National Laboratories
- University of Basel
- University of Central Florida
- University of Jyväskylä
- University of Liverpool
- University of Nebraska Medical Center
- University of New Hampshire – Main Campus
- University of Turku
- University of Utah
- Université Côte d'Azur
- Utrecht University
- VIB
- advanced compilation techniques for scientific and AI applications on heterogeneous GPU clusters. Research topics include scheduling, memory management, communication–computation overlap, and performance
- computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared- and distributed-memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance
- ). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications: Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI
- and GPU-accelerated tools for circuit and system design optimization, addressing challenges in physical design, timing analysis, and large-scale hardware design automation. The researcher will
- simulation methods, GPU-accelerated computations, several programming languages, and presenting results to broad technical and non-technical audiences. Additionally, the candidate will develop theory and
- 100% funding per SNSF guidelines (~CHF 90'000/year). Access to modern GPU clusters and confidential-computing infrastructure. Collaboration with leading researchers in AI & HPC systems and digital health
- variety of computational devices (e.g., CPUs and GPUs) while ensuring overall consistency and performance. - Contribute to identifying new CSE application domains, such as condensed matter systems, quantum
- in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct
- environments. Experience with parallel computing environments and HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming
- frameworks (preferably PyTorch). Use of Linux GPU servers via the command line. Written and spoken scientific English. It would be a plus to have familiarity with: GIS and remote sensing. Internal Application form(s