Refine Your Search
Employer
- Nature Careers
- Oak Ridge National Laboratory
- Argonne
- Fraunhofer-Gesellschaft
- Nanyang Technological University
- California Institute of Technology
- ETH Zurich
- Forschungszentrum Jülich
- National University of Singapore
- Technical University of Denmark
- University of Cincinnati
- University of Dayton
- University of Washington
- AIT Austrian Institute of Technology
- Central China Normal University
- Cold Spring Harbor Laboratory
- European Magnetism Association EMA
- Free University of Berlin
- Johns Hopkins University
- King Abdullah University of Science and Technology
- Lawrence Berkeley National Laboratory
- Los Alamos National Laboratory
- Meta/Facebook
- NTNU - Norwegian University of Science and Technology
- Northeastern University
- Singapore Institute of Technology
- Stanford University
- Technical University of Munich
- Texas A&M University
- The Chinese University of Hong Kong
- The Ohio State University
- University of Helsinki
- University of Arkansas
- University of California Davis
- University of California Davis Health System
- University of California Irvine
- University of Glasgow
- University of Maryland, Baltimore County
- University of Minnesota
- University of North Carolina at Chapel Hill
- University of North Carolina at Greensboro
- University of North Texas at Dallas
- University of Oxford
- University of Pittsburgh
- Washington University in St. Louis
- PyTorch, CUDA, Docker, vLLM, TGI 6. Familiarity with platforms AWS, GCP, Azure, GitHub, HuggingFace, Spark/Airflow 7. Familiarity with Large Language Models and/or NLP will be advantageous 8
- on the platform CUDA for parallelization of the computation over several GPUs’ cores, and has interfaces with Matlab and Python for ease of use. However, powerful as it is, MagTense is at present limited in its
- lattice field theory and numerical methods, with experience in HPC programming (e.g., C++, Python, MPI, OpenMP, CUDA) and parallel computing environments. - Experience in performance analysis, debugging
- LLM training, Bright Cluster Manager, Pyxis/enroot, CUDA, system and storage benchmarking, DataDirect Networks (DDN) SFA high-performance storage systems. Working Conditions: This is a hybrid position, in
- training in the second area are encouraged to highlight this in their application. Experience with high performance computing and GPU acceleration tools (e.g. CUDA) and deep learning frameworks, such as
- • Familiarity with operating HPC clusters (e.g., bash, Python) Preferred Qualifications • HPC programming skills (e.g., modern Fortran or C/C++) • Parallel programming skills (e.g., OpenMP, MPI, OpenACC, CUDA
- knowledge of programming, software development. Proficiency in at least one programming language such as Python, Fortran, C++, or CUDA, with the ability to learn others as needed. Familiarity with tools
- Linux kernel internals, computation accelerators (e.g., GPU computing, CUDA), MPI, and OpenMP. Highly resourceful and adept at juggling multiple simultaneous projects. Must demonstrate ability to work
- closure modeling and/or high performance computing environments (MPI, CUDA) • Expertise in software development and computing tools (C/C++, Python, Git, parallel computing, etc.) • Experience with deep
- , MATLAB, Git, debugging, and modern software engineering practices. Experience with GPU computing (e.g., CUDA, HIP), parallel computing (e.g., MPI, Actor Model). Familiarity with containerization (e.g