Qualifications: Familiarity with machine learning interatomic potentials, CPU and GPU parallelization, knowledge of LAMMPS and molecular dynamics, experience with first-principles calculations of dielectric and
-
of cores, and a growing GPU cluster containing thousands of high-end GPUs. Depending on the day, we might be diving deep into market data, tuning hyperparameters, debugging distributed training performance
-
Experience with HPC (GPUs preferred). Related Skills and Other Requirements: Ability to work at the interface of AI and science/engineering problems; ability to lead, develop, and contribute to multiple projects
-
, engineering, physical science, or related technical discipline. Experience: Expertise in developing and training AI models; proficiency in Python; experience with HPC (GPUs preferred). Related Skills and Other
-
IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava | Czech | 14 days ago
deployment, · knowledge of GPU computing and large-scale training, · experience working in an HPC environment, · experience with data annotation pipelines or synthetic data generation. We offer: · work in a
-
computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared and distributed memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance
-
computing environments, and GPU programming. Necessary skills include knowledge of data processing using software (e.g., MATLAB, R, IDL) and/or statistical/mathematical programming languages (e.g., R, MATLAB
-
Max Planck Institute of Animal Behavior, Radolfzell / Konstanz | Konstanz, Baden-Württemberg | Germany | about 1 month ago
Behavior (CASCB), providing access to shared infrastructure, high-speed data networks, and central technical services. Computational needs will be met through multiple layers of resources: local GPU-equipped
-
advanced compilation techniques for scientific and AI applications on heterogeneous GPU clusters. Research topics include scheduling, memory management, communication–computation overlap, and performance
-
to diverse academic and industrial audiences. Proficiency in Python and deep learning frameworks such as PyTorch. Experience with Linux environments and GPU cluster management is essential. Competent in