Employer
- Oak Ridge National Laboratory
- Argonne
- Duke University
- Harvard University
- SUNY Polytechnic Institute
- University of Miami
- University of Nebraska Medical Center
- Brookhaven National Laboratory
- Northeastern University
- Rutgers University
- Sandia National Laboratories
- Stanford University
- Texas A&M University
- University of New Hampshire – Main Campus
- University of North Carolina at Chapel Hill
- University of Utah
- Requisition Id 16166 Overview: The Programming Systems Group at ORNL seeks a forward‑leaning Postdoctoral Researcher to advance research at the nexus of Agentic AI, high‑productivity programming …
- … languages; experience with GPU programming (e.g., CUDA) is highly desirable. Background in optimization, image-guided radiotherapy, medical imaging, or computational modeling. Experience with treatment …
- … computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared- and distributed-memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance …
- … University. Dr. Cheng Peng invites applications for a postdoctoral position supported by a Laboratory Directed Research and Development (LDRD) program. This appointment spans a 2-year timeframe. The …
- … computing environments, and GPU programming. Necessary skills include knowledge of data processing using software (e.g., Matlab, R, IDL) and/or statistical/mathematical programming languages (e.g., R, Matlab …
- … environments. Experience with parallel computing environments and HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming …
- … simulation methods, GPU-accelerated computations, several programming languages, and presenting results to wide technical and non-technical audiences. Additionally, the candidate will develop theory and …
- … and GPU-accelerated tools for circuit and system design optimization, addressing challenges in physical design, timing analysis, and large-scale hardware design automation. The researcher will …
- … in top-tier machine learning/AI conferences and/or leading scientific journals. Excellent programming skills and hands-on experience with leading machine learning frameworks (e.g., TensorFlow, PyTorch …
- … in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct …