Employer
- Oak Ridge National Laboratory
- Argonne
- Duke University
- Stanford University
- Harvard University
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- University of Miami
- University of Nebraska Medical Center
- University of North Carolina at Chapel Hill
- Brookhaven National Laboratory
- Northeastern University
- Sandia National Laboratories
- Texas A&M University
- University of Central Florida
- University of New Hampshire
- University of Utah
- …and GPU-accelerated tools for circuit and system design optimization, addressing challenges in physical design, timing analysis, and large-scale hardware design automation. The researcher will…
- …simulation methods, GPU-accelerated computations, several programming languages, and presenting results to broad technical and non-technical audiences. Additionally, the candidate will develop theory and…
- …in GPU programming using one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct…
- …environments. Experience with parallel computing environments and HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming…
- …with OFDM modulation required. Skills: programming skills in MATLAB and/or Python required; experience with wireless testbeds desirable; some familiarity with GPU programming desirable (to support…
- …/TimeSformer, CLIP/BLIP, or similar) in PyTorch, including scalable training on GPUs and reproducible experimentation. Demonstrated experience building explainable models (e.g., concept bottlenecks, prototype…
- …scientists and engineers are accustomed to. Moreover, the vast majority of the performance of these reduced-precision formats comes from special hardware units such as tensor cores on NVIDIA GPUs… (see the sketch after this list)
- …programming (shared and distributed memory, GPU programming, etc.). Demonstrated experience with distributed-memory MPI programming. Experience with collaborative software design, development, and testing…
- …disease insights. The lab has state-of-the-art computing capabilities, with an in-house cluster serving 80 CPU cores and 1.5 TB of RAM, as well as a newly acquired NVIDIA DGX box with eight H100 GPUs and 224…
- …). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications: Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI…
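One excerpt above notes that most of the speedup from reduced-precision formats comes from special hardware units such as tensor cores on NVIDIA GPUs. A minimal sketch of what that looks like in practice, assuming a CUDA-capable GPU and a recent PyTorch build (the matrix size and function name are illustrative, not taken from any listing):

```python
import torch

def matmul_demo(n: int = 4096) -> None:
    """Run the same matmul in fp32 and, via autocast, in fp16."""
    if not torch.cuda.is_available():
        print("No CUDA device found; this sketch needs a GPU.")
        return

    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")

    # Full-precision (fp32) baseline.
    c_fp32 = a @ b

    # Reduced precision: autocast casts the matmul inputs to fp16, which
    # recent NVIDIA GPUs dispatch to tensor cores.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        c_fp16 = a @ b

    # The reduced-precision result differs slightly from the fp32 baseline.
    print("max abs difference:", (c_fp32 - c_fp16.float()).abs().max().item())

if __name__ == "__main__":
    matmul_demo()
```

The sketch only changes the precision of the matmul inputs under autocast; the source tensors stay in fp32, which is why the direct comparison against the fp32 baseline is straightforward.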