Employer
- Nature Careers
- University of North Carolina at Chapel Hill
- Argonne
- European Space Agency
- NEW YORK UNIVERSITY ABU DHABI
- Oak Ridge National Laboratory
- Technical University of Munich
- Duke University
- Stony Brook University
- Technical University of Denmark
- University of Luxembourg
- University of South Carolina
- Yale University
- The University of Edinburgh
- Brookhaven Lab
- Durham University
- Embry-Riddle Aeronautical University
- Emory University
- Empa
- European Magnetism Association EMA
- Harvard University
- Imperial College London
- MOHAMMED VI POLYTECHNIC UNIVERSITY
- Max Planck Institute for Multidisciplinary Sciences, Göttingen
- New York University
- Northeastern University
- Shanghai Jiao Tong University
- Stanford University
- The Ohio State University
- The University of Arizona
- UNIVERSITY OF HELSINKI
- University of Antwerp
- University of Colorado
- University of Minnesota
- University of Minnesota Twin Cities
- University of North Texas at Dallas
- University of Oxford
- University of Texas at Arlington
- VIB
- … developing and implementing very large deep learning models. Familiarity with high-performance computing environments (e.g., HPC clusters, GPUs, cloud resources) and managing Linux-based hardware systems …
- … conferences. Qualifications: PhD in computer science with file systems and GPU architecture experience. Proven ability to articulate research work and findings in peer-reviewed proceedings. Knowledge of systems …
- University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | 3 months ago
  … of data scientists/clinicians and working with unique datasets from multiple academic medical centers (e.g. UNC, UCSF, Mayo Clinic, Memorial Sloan Kettering, etc.). Lab-dedicated GPU workstations/servers and …
- University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | 3 days ago
  … of data scientists/clinicians and working with unique datasets from multiple academic medical centers (e.g. UNC, UCSF, Mayo Clinic, Memorial Sloan Kettering, etc.). Lab-dedicated GPU workstations/servers and …
- … training NLP/deep learning models on GPUs (with frameworks such as PyTorch or TensorFlow). Demonstrated experience with state-of-the-art NLP models (DeepSeek, Llama, Mistral, GPT-4, BERT, etc.). Demonstrated …
- … state-of-the-art foundation models and large vision-language models. Experience in large-scale deep learning systems and/or large foundation models, and the ability to train models using GPU/TPU parallelization …
- … profiler. Experience with GPUs is a bonus. Of course, you need fluency in written and spoken English to communicate your ideas in this interdisciplinary project. Note that we expect from candidates either …
- … optimisation, distributed parallel GPU optimisation (e.g. pagmo2), Taylor-based numerical integration of ODEs (e.g. heyoka), differential algebra and high-order automated differentiation (audi), quantum …