Employer
- Nature Careers
- Argonne
- University of North Carolina at Chapel Hill
- European Space Agency
- New York University Abu Dhabi
- Oak Ridge National Laboratory
- Technical University of Munich
- Duke University
- Stony Brook University
- Technical University of Denmark
- University of Luxembourg
- University of South Carolina
- Yale University
- University of Oxford
- Brookhaven Lab
- Durham University
- Embry-Riddle Aeronautical University
- Emory University
- Empa
- European Magnetism Association EMA
- Harvard University
- Imperial College London
- Mohammed VI Polytechnic University
- Max Planck Institute for Multidisciplinary Sciences, Göttingen
- New York University
- Northeastern University
- Shanghai Jiao Tong University
- Stanford University
- The Ohio State University
- The University of Arizona
- University of Helsinki
- University of Antwerp
- University of Colorado
- University of Minnesota
- University of North Texas at Dallas
- University of Texas at Arlington
- VIB
Field
- … training NLP/deep learning models on GPUs (with frameworks such as PyTorch, TensorFlow); demonstrated experience with state-of-the-art NLP models (DeepSeek, Llama, Mistral, GPT-4, BERT, etc.); demonstrated …
- … state-of-the-art foundation models and large vision-language models; experience in large-scale deep learning systems and/or large foundation models, and the ability to train models using GPU/TPU parallelization …
- … profiler. Experience with GPUs is a bonus. Of course, you need fluency in written and spoken English to communicate your ideas in this interdisciplinary project. Note that we expect candidates either …
- … optimisation, distributed-parallel GPU optimisation (e.g. pagmo2), Taylor-based numerical integration of ODEs (e.g. heyoka), differential algebra and high-order automated differentiation (audi), quantum …