Refine Your Search
- Listed
- Program
- Employer
- Oak Ridge National Laboratory
- Argonne National Laboratory
- Harvard University
- Duke University
- Rutgers University
- Stanford University
- Texas A&M University
- New York University
- Princeton University
- SUNY Polytechnic Institute
- University of Miami
- University of North Carolina at Chapel Hill
- Barnard College
- Brookhaven National Laboratory
- Center for Devices and Radiological Health (CDRH)
- Florida Atlantic University
- Jane Street Capital
- National Aeronautics and Space Administration (NASA)
- National Renewable Energy Laboratory (NREL)
- Northeastern University
- SUNY University at Buffalo
- Sandia National Laboratories
- The California State University
- University at Buffalo
- University of Central Florida
- University of Idaho
- University of Maryland, Baltimore
- University of Nebraska Medical Center
- University of New Hampshire – Main Campus
- University of Texas at Austin
- University of Utah
- (21 more employers not shown)
- Field
- computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared and distributed memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance
- and GPU-accelerated tools for circuit and system design optimization, addressing challenges in physical design, timing analysis, and large-scale hardware design automation. The researcher will
- simulation methods, GPU-accelerated computations, several programming languages, and presenting results to wide technical and non-technical audiences. Additionally, the candidate will develop theory and
- in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct
- computing environment that includes GPU clusters, large-memory servers, and an NVIDIA DGX B200 system. These resources support the training of large multimodal models involving audio, video, language
- scientists and engineers are accustomed to. Moreover, the vast majority of the performance associated with these reduced precision formats resides on special hardware units such as tensor cores on NVIDIA GPUs
- with OFDM modulation required. Skills: Programming skills in MATLAB and/or Python required, experience with wireless testbeds desirable, some familiarity with GPU programming desirable (to support
- ). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications: Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI
- environments. Experience with parallel computing and HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming
- /TimeSformer, CLIP/BLIP or similar) in PyTorch, including scalable training on GPUs and reproducible experimentation. Demonstrated experience building explainable models (e.g., concept bottlenecks, prototype