Employer
- Oak Ridge National Laboratory
- Argonne
- Duke University
- Stanford University
- Harvard University
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- University of Miami
- University of Nebraska Medical Center
- University of North Carolina at Chapel Hill
- Brookhaven National Laboratory
- Northeastern University
- Sandia National Laboratories
- Texas A&M University
- University of Central Florida
- University of New Hampshire – Main Campus
- University of Utah
Job listing excerpts (each truncated by the search preview):

- "… environments. Experience with parallel computing environments and HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming …"
- "… simulation methods, GPU-accelerated computations, several programming languages, and presenting results to wide technical and non-technical audiences. Additionally, the candidate will develop theory and …"
- "… Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional qualifications: experience with multi-GPU model training and large-scale inference; familiarity with modern AI …"
- "… in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct …"
- "… United States of America. Subject areas: Engineering / Computational Science and Engineering, Machine Learning, Quantum Science and Engineering. Application deadline: (posted 2026/03/05 05:00 AM United Kingdom Time, listed …"
- "… are in compliance with the necessary trainings (both at the lab and at the institutional level). Minimum education and experience: a PhD degree in Computer Science, Electrical/Computer Engineering, or a …"
- "… Knowledge of floating-point arithmetic and mixed/reduced-precision computing techniques. Experience with programming GPUs and/or other accelerators. Proficiency in mathematical reasoning and numerical analysis …"
- "… simulated and measured results to assess quantities of interest. Interface with world-class exascale computing clusters. Work with a dynamic team of researchers, developers, experimentalists, and model …"
- "… learning architectures for scientific or high-performance computing applications. Background in software performance evaluation, profiling, and optimization on CPUs and GPUs. Knowledge of common numerical …"
- "… geophysical sciences, computer science, or machine learning, with 0 to 2 years of experience. Knowledge of deep learning, PyTorch/JAX, and scaling deep learning models to large GPU-based machines. Technical …"