Employer
- University of North Carolina at Chapel Hill
- Argonne
- European Space Agency
- New York University Abu Dhabi
- Princeton University
- Université de Moncton
- Duke University
- Forschungszentrum Jülich
- Imperial College London
- Mohammed VI Polytechnic University
- National University of Singapore
- University of Luxembourg
- Yale University
- Aalto University
- Brookhaven Lab
- Carnegie Mellon University
- Columbia University
- Embry-Riddle Aeronautical University
- Empa
- European Magnetism Association EMA
- Georgia State University
- Harvard University
- Heriot-Watt University
- Jane Street Capital
- King's College London
- Linköping University
- Manchester Metropolitan University
- Monash University
- Nanyang Technological University
- Nature Careers
- New York University
- Northeastern University
- Oak Ridge National Laboratory
- Singapore Institute of Technology (SIT)
- Shanghai Jiao Tong University
- Simons Foundation
- Stanford University
- Stony Brook University
- Technical University of Denmark
- Technical University of Munich
- The Ohio State University
- The University of Arizona
- University of Southampton
- University of Glasgow
- University of Houston Central Campus
- University of Lund
- University of Maryland, Baltimore
- University of Minnesota
- University of Minnesota Twin Cities
- University of New Hampshire – Main Campus
- University of North Texas at Dallas
- University of Oxford
- University of South Carolina
- University of Texas at Arlington
- University of Texas at Austin
- VIB
- VU Amsterdam
-
GPUs. These devices provide massive spatial parallelism and are well-suited for dataflow programming paradigms. However, optimizing and porting code efficiently to these architectures remains a key
-
artificial intelligence (AI) (CPUs, GPUs, AI accelerators, etc.) require high power and optimized power distribution networks (PDNs) to improve power efficiency and preserve its
-
. Desirable criteria: experience working with generative models or large language models; experience with GPU-based model training or cloud computing; knowledge of synthetic biology or regulatory sequence design
-
results. Machine learning skills to automate the comparison process. Unbiased approach to different theoretical models. Experience in HPC system usage and parallel/distributed computing. Knowledge of GPU-based
-
and planet formation context; experience in the field with HPC system usage and parallel/distributed computing; knowledge of GPU-based programming would be considered an asset; proven record of publication
-
learning frameworks such as PyTorch, JAX, or TensorFlow. Experience with C++ and GPU programming. A strong growth mindset, attention to scientific rigor, and the ability to thrive in an interdisciplinary
-
(HPC) platforms used in machine learning, big data and artificial intelligence (AI)-based applications (CPUs, GPUs, AI accelerators, etc.) have high power demands and require optimized power distribution networks (PDNs
-
vision systems (e.g., NVIDIA Jetson Nano); real-time processing and GPU acceleration; experience working on industry R&D projects. Key competencies: able to build and maintain strong working relationships with
-
that address real-world challenges and deliver positive business outcomes. The Institute for Insight is equipped with a computer cluster that includes multiple GPUs, designed for big data analytics for both