Employer
- Oak Ridge National Laboratory
- Argonne
- Duke University
- Harvard University
- Rutgers University
- Stanford University
- New York University
- Princeton University
- SUNY Polytechnic Institute
- Texas A&M University
- University of Miami
- University of North Carolina at Chapel Hill
- Barnard College
- Brookhaven National Laboratory
- Center for Devices and Radiological Health (CDRH)
- Florida Atlantic University
- Jane Street Capital
- National Renewable Energy Laboratory (NREL)
- Northeastern University
- SUNY University at Buffalo
- Sandia National Laboratories
- The California State University
- University at Buffalo
- University of Central Florida
- University of Idaho
- University of Maryland, Baltimore
- University of Nebraska Medical Center
- University of New Hampshire – Main Campus
- University of Texas at Austin
- University of Utah
- hardware architects to establish how agentic AI and these languages co-design with heterogeneous HPC systems (CPUs, GPUs, PIM, AI accelerators). Study performance and portability tradeoffs, leveraging
- or TensorFlow. Advanced programming and high-performance computing skills, including proficiency in Python and/or C/C++, experience with GPU acceleration, and the ability to develop, test, and maintain research
- on small test clusters. Test computational performance and resolve technical challenges on significantly larger models of selected quantum materials. Work on speeding up Krylov solvers on GPUs. Demonstrate
- Experience with HPC (GPUs preferred) Related Skills and Other Requirements Ability to work at the interface of AI and science/engineering problems Ability to lead, develop, and contribute to multiple projects
- , engineering, physical science or related technical discipline. Experience: Expertise in developing and training AI models Proficiency in Python Experience with HPC (GPUs preferred) Related Skills and Other
- computing environments, and GPU programming. Necessary skills include knowledge of data processing using software (e.g., Matlab, R, IDL) and/or statistical/mathematical programming languages (e.g., R, Matlab
- computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared and distributed memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance
- ). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI
- and GPU-accelerated tools for circuit and system design optimization, addressing challenges in physical design, timing analysis, and large-scale hardware design automation. The researcher will
- simulation methods, GPU-accelerated computations, several programming languages, and presenting results to wide technical and non-technical audiences. The candidate will also develop theory and