Refine Your Search
Employer
- Oak Ridge National Laboratory
- Argonne National Laboratory
- Harvard University
- SUNY Polytechnic Institute
- Stanford University
- University of Nebraska Medical Center
- University of North Carolina at Chapel Hill
- Yale University
- Brookhaven National Laboratory
- Duke University
- Embry-Riddle Aeronautical University
- Northeastern University
- Rutgers University
- Texas A&M University
- The University of Arizona
- University of New Hampshire – Main Campus
- University of Utah
- …scientists and engineers are accustomed to. Moreover, the vast majority of the performance associated with these reduced precision formats resides on special hardware units such as tensor cores on NVIDIA GPUs… (illustrated in the sketch after this list)
- …communication skills. First-author publications at NeurIPS, ICLR, ICML, AAAI, KDD, or IJCAI. Experience working with large-scale, noisy, or real-world datasets. Experience with GPU-based training and high-performance…
- …CPU and GPU based HPC systems. Exploration of the capabilities of DPU/IPU SmartNICs to support network security isolation, platform level root-of-trust, and secure platform management/partitioning…
- …(Xilinx Vitis/Vivado, Intel Quartus, HLS tools); HPC environments or GPU-accelerated computing; on-detector firmware or data acquisition systems; familiarity with HEP data formats and reconstruction…
- …-atmosphere dynamics. We will build an AI-enabled modeling system that couples a GPU-optimized ocean model with a biogeochemical module and AI-based, kilometer-scale atmospheric forecasts. This system will…
- …tracking), dataset curation, HPC/GPU programming, blockchain for secure data, C-family languages, and embodied AI/robotics are a plus. Experience with general network resilience, cellular automata…
- …Experience with graph-based data analysis or anomaly detection methods. Exposure to high-performance or GPU-based computing environments. Demonstrated ability to contribute to publications or technical reports…
- …or OpenMP. Experience in heterogeneous programming (i.e., GPU programming) and/or developing, debugging, and profiling massively parallel codes. Experience with using high performance computing (HPC)…
- …models. Experience in large-scale deep learning systems and/or large foundation models, and the ability to train models using GPU/TPU parallelization. Experience in multi-modality data analysis (e.g., image…
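The first excerpt above notes that most of the speed-up from reduced-precision formats comes from dedicated hardware such as NVIDIA tensor cores. As a minimal illustration (not taken from any posting; it assumes PyTorch and a CUDA-capable GPU, and the matrix sizes and use of torch.autocast are arbitrary choices), the sketch below shows the kind of mixed-precision matrix multiply that can be dispatched to tensor cores:

```python
# Hypothetical sketch: mixed-precision matmul in PyTorch (assumes a CUDA GPU).
# Under autocast, eligible ops such as matmul run in float16, which is what lets
# the GPU schedule them on tensor cores instead of the general-purpose FP32 units.
import torch

a = torch.randn(4096, 4096, device="cuda")  # inputs allocated in float32
b = torch.randn(4096, 4096, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = a @ b  # computed in float16 and eligible for tensor-core execution

print(c.dtype)  # torch.float16
```

float16 is only one such format; bfloat16 and TF32 follow the same pattern of trading precision for throughput on the same hardware units.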