Employer
- Argonne
- Central China Normal University
- European Magnetism Association (EMA)
- Los Alamos National Laboratory
- Northeastern University
- Oak Ridge National Laboratory
- Technical University of Denmark
- University of Helsinki
- University of California, Irvine
- University of North Carolina at Chapel Hill
- University of North Texas at Dallas
- implemented in the Fortran programming language; it relies on the CUDA platform for parallelization of the computation over several GPU cores and has interfaces with Matlab and Python for ease of use
- techniques. Preferred Qualifications: Knowledge of HPC matrix, tensor, and graph algorithms. Knowledge of GPU programming with CUDA and HIP. Knowledge of distributed algorithms using MPI and other frameworks such as
- C++/Python/CUDA programming for real-time image processing. Experience in MRI pulse sequence programming, ideally on Siemens MRI platforms. Experience in MRI image reconstruction; motion and distortion
- on the CUDA platform for parallelization of the computation over several GPU cores, and has interfaces with Matlab and Python for ease of use. However, powerful as it is, MagTense is at present limited in its
- training in the second area are encouraged to highlight this in their application. Experience with high-performance computing and GPU acceleration tools (e.g., CUDA) and deep learning frameworks, such as
- lattice field theory and numerical methods, with experience in HPC programming (e.g., C++, Python, MPI, OpenMP, CUDA) and parallel computing environments. Experience in performance analysis, debugging
- Familiarity with operating HPC clusters (e.g., bash, Python). Preferred Qualifications: • HPC programming skills (e.g., modern Fortran or C/C++) • Parallel programming skills (e.g., OpenMP, MPI, OpenACC, CUDA
- closure modeling and/or high-performance computing environments (MPI, CUDA) • Expertise in software development and computing tools (C/C++, Python, git, parallel computing, etc.) • Experience with deep
- computer science or related computational engineering disciplines. Experience with simulation frameworks for complex computer systems and architectures. Some knowledge of accelerator (CUDA, SYCL, HIP) and scientific
- in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct