Employer
- University of North Carolina at Chapel Hill
- Argonne
- New York University Abu Dhabi
- Princeton University
- Université de Moncton
- Yale University
- Forschungszentrum Jülich
- Imperial College London
- Jane Street Capital
- Mohammed VI Polytechnic University
- National University of Singapore
- University of Luxembourg
- Aalto University
- Brookhaven Lab
- Carnegie Mellon University
- Columbia University
- Duke University
- Embry-Riddle Aeronautical University
- Empa
- European Magnetism Association EMA
- European Space Agency
- Georgia State University
- Heriot Watt University
- King's College London
- Linköping University
- Manchester Metropolitan University
- Monash University
- Nanyang Technological University
- New York University
- Northeastern University
- Oak Ridge National Laboratory
- Singapore Institute of Technology (SIT)
- Shanghai Jiao Tong University
- Simons Foundation
- Stanford University
- Stony Brook University
- Technical University of Munich
- The Ohio State University
- The University of Alabama
- The University of Arizona
- University of Southampton
- University of Glasgow
- University of Houston Central Campus
- University of Maryland, Baltimore
- University of Minnesota
- University of Minnesota Twin Cities
- University of New Hampshire – Main Campus
- University of North Texas at Dallas
- University of Oxford
- University of South Carolina
- University of Texas at Arlington
- University of Texas at Austin
- VU Amsterdam
- …/GPUs. These devices provide massive spatial parallelism and are well-suited for dataflow programming paradigms. However, optimizing and porting code efficiently to these architectures remains a key…
- …and work together to train models, architect systems, and run trading strategies. We work with petabytes of data, a computing cluster with hundreds of thousands of cores, and a growing GPU cluster…
- …astrophysical free boundaries. Responsibilities include running high-resolution GPU-accelerated simulations on exascale computing systems, developing and applying geometric measure theory tools to quantify…
- …and be willing to share their knowledge through tutorials, consultations, and teaching. Preferred qualifications: experience with teaching or tutorial creation; experience with Bash, Linux, GPU, or high…
- …artificial intelligence (AI) (CPUs, GPUs, AI accelerators, etc.) require high power and optimized power distribution networks (PDNs) to improve power efficiency and preserve its…
- …Desirable criteria: experience working with generative models or large language models; experience with GPU-based model training or cloud computing; knowledge of synthetic biology or regulatory sequence design…
- …results. Machine learning skills to automate the comparison process. An unbiased approach to different theoretical models. Experience with HPC system usage and parallel/distributed computing. Knowledge of GPU-based…
- …and planet formation context. Experience in the field with HPC system usage and parallel/distributed computing. Knowledge of GPU-based programming would be considered an asset. A proven record of publication…
- …learning frameworks such as PyTorch, JAX, or TensorFlow. Experience with C++ and GPU programming. A strong growth mindset, attention to scientific rigor, and the ability to thrive in an interdisciplinary…