Employer
- Oak Ridge National Laboratory
- Argonne
- Duke University
- Stanford University
- Harvard University
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- University of Miami
- University of Nebraska Medical Center
- University of North Carolina at Chapel Hill
- Brookhaven National Laboratory
- Northeastern University
- Sandia National Laboratories
- Texas A&M University
- University of Central Florida
- University of New Hampshire – Main Campus
- University of Utah
- … Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional qualifications: experience with multi-GPU model training and large-scale inference; familiarity with modern AI …
- … environments. Experience with parallel computing environments and HPC in a Linux environment; experience with surrogate modeling; experience with data analytics techniques; familiarity with C++ and GPU programming …
- … in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems; ability to conduct …
- … programming (shared and distributed memory, GPU programming, etc.). Demonstrated experience with distributed-memory MPI programming; experience with collaborative software design, development, and testing …
- … with OFDM modulation required. Programming skills in MATLAB and/or Python required; experience with wireless testbeds desirable; some familiarity with GPU programming desirable (to support …
- … /TimeSformer, CLIP/BLIP, or similar) in PyTorch, including scalable training on GPUs and reproducible experimentation. Demonstrated experience building explainable models (e.g., concept bottlenecks, prototype …
- … scientists and engineers are accustomed to. Moreover, most of the performance of these reduced-precision formats comes from special hardware units such as Tensor Cores on NVIDIA GPUs …
- … disease insights. The lab has state-of-the-art computing capabilities: an in-house cluster with 80 CPU cores and 1.5 TB of RAM, as well as a newly acquired NVIDIA DGX system with eight H100 GPUs and 224 …
- … Expertise in data and model parallelism for distributed training on large GPU-based machines is essential. Candidates with experience using diffusion-based or other generative AI methods as …