- 100% funding per SNSF guidelines (~CHF 90'000/year). Access to modern GPU clusters and confidential-computing infrastructure. Collaboration with leading researchers in AI & HPC systems and digital health.
- …variety of computational devices (e.g. CPUs and GPUs) while ensuring overall consistency and performance. Contribute to identifying new CSE application domains, such as condensed matter systems, quantum …
- …in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct …
- …or TensorFlow. Practical background in training and validating models on GPU-based and distributed computing environments. Working knowledge of containerization tools and orchestration platforms (e.g. Docker …
- …datasets and run experiments on HPC infrastructure (GPU clusters, SLURM). Strong written communication skills for technical documentation, reporting, and research outputs. Ability to work independently and …
- …computing environment that includes GPU clusters, large-memory servers, and an NVIDIA DGX B200 system. These resources support the training of large multimodal models involving audio, video, language …
- …environments. Experience with parallel computing environments and with HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming …
- …them. Research Computing & AI Enablement: work with the Architecture team to build scalable HPC and GPU-enabled environments on IaaS cloud sites, along with other specialized hosted solutions. Work …
- …forward-looking, and varied research fields and projects, with numerous development opportunities. Modern hardware and infrastructure at the workplace, from compute and GPU servers to supercomputers …
- …Bayesian, hierarchical, time-series), experience with sensor-based data (such as eye tracking, EEG, and heart rate), and proficiency in computational workflows, including distributed and GPU-based systems.