- computing environments, and GPU programming. Necessary skills include knowledge of data processing using software (e.g., Matlab, R, IDL) and/or statistical/mathematical programming languages (e.g., R, Matlab
- computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared- and distributed-memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance
- advanced compilation techniques for scientific and AI applications on heterogeneous GPU clusters. Research topics include scheduling, memory management, communication–computation overlap, and performance
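The communication–computation overlap named in that listing can be illustrated with a toy pipeline: start the transfer of the next chunk before computing the current one, so the two phases run concurrently. This is a minimal sketch in pure Python, with `time.sleep` standing in for a host-to-device copy or MPI receive and for a kernel launch; the function names and timings are illustrative assumptions, not from any listing.

```python
import concurrent.futures
import time

def transfer(chunk):
    # Stand-in for a data transfer (e.g., host-to-device copy); hypothetical.
    time.sleep(0.01)
    return chunk

def compute(chunk):
    # Stand-in for a compute kernel; here just a reduction.
    time.sleep(0.01)
    return sum(chunk)

def pipelined(chunks):
    """Overlap the transfer of chunk i+1 with the compute of chunk i."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(transfer, chunks[0])
        for nxt in chunks[1:]:
            data = pending.result()               # wait for the in-flight transfer
            pending = pool.submit(transfer, nxt)  # start the next transfer...
            results.append(compute(data))         # ...while computing this chunk
        results.append(compute(pending.result()))
    return results
```

On a GPU the same double-buffering pattern would use asynchronous copies and streams rather than threads, but the scheduling idea is identical.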
- Documented experience in large-scale data management, high-performance computing systems, GPU acceleration, and parallel file systems; ability to communicate fluently in English, both spoken and written
- for Neural Rendering for Computer Graphics and Real-Time Rendering. By using ANNs, coded for high performance on cross-vendor GPUs, we aim to create new techniques for global illumination and material models
- -mode taxonomies). Implement and maintain high-quality research codebases (PyTorch/HF), experiment tracking, and compute workflows (multi-GPU, HPC/cluster), ensuring reproducibility and documentation
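The reproducibility and experiment-tracking requirement above can be sketched in miniature: derive a stable run ID from a canonicalized config and seed the run from it, so identical settings always map to the same tracked experiment. `experiment_id` and `seeded_run` are hypothetical helpers for illustration, not part of PyTorch or Hugging Face; real trackers such as MLflow or W&B do far more.

```python
import hashlib
import json
import random

def experiment_id(config: dict) -> str:
    """Stable fingerprint of an experiment config: canonical JSON, hashed.
    Key order in the dict does not change the ID."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

def seeded_run(config: dict) -> list:
    """Make a toy 'experiment' deterministic by seeding from its config."""
    random.seed(experiment_id(config))
    return [random.random() for _ in range(3)]
```

Because the ID is derived from the config and the RNG is seeded from the ID, rerunning the same config reproduces the same numbers, which is the core of the workflow such listings ask for.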
- that combine parallel architectures (i.e., GPUs or accelerator boards, clusters) and numerical algorithms suited to such architectures with the goal of improving the speed of convergence and the stability
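A minimal sketch of what "numerical algorithms suited to such architectures" means: in a Jacobi sweep, every component of the new iterate reads only the previous iterate, so all updates are independent and map naturally onto a GPU; Gauss-Seidel, by contrast, chains updates sequentially. The dense pure-Python version below is only for clarity, assuming a small diagonally dominant system.

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep for A x = b. Each new component depends only on
    the old iterate x, so all n updates could run in parallel."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def jacobi(A, b, x0, iters=50):
    """Iterate Jacobi sweeps; converges when A is strictly diagonally dominant."""
    x = x0
    for _ in range(iters):
        x = jacobi_step(A, b, x)
    return x
```

For example, solving 4x + y = 1, x + 3y = 2 converges to (1/11, 7/11); the per-component independence is exactly the structure such positions exploit on accelerators.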
- to diverse academic and industrial audiences. Proficiency in Python and deep learning frameworks such as PyTorch. Experience with Linux environments and GPU cluster management is essential. Competent in
- models on GPU infrastructure (SSH access) and distributed computing environments. Strong problem-solving, documentation, and communication skills across technical and non-technical contexts. Ability
- ). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications: Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI