- … compensation package with comprehensive health and welfare benefits. A supportive team environment that promotes collaboration and knowledge sharing. Access to world-class computational infrastructure, GPU-based …
- … collaborative, international team. We offer Cutting-Edge Resources: access to state-of-the-art compute and GPU infrastructure, including H100 and B300 GPU clusters. Innovation: the opportunity …
- … Vehicular Communications. Experience in system modeling and simulation of communication systems. Strong programming skills in MATLAB are required; experience with Python, C/C++, or GPU-based computing is an …
- … community. We provide the resources to match your ambition: Industrial-Scale Computing: exclusive access to massive GPU clusters and high-performance computing. Guaranteed Talent Pipeline: generous …
- …-node GPU training and inference pipelines for foundational models. You'll also develop tools for ingesting, transforming, and integrating large, heterogeneous microscopy image datasets, including writing …
- … / computer vision and pattern recognition, including but not limited to biomedical applications. Strong interest in applied machine learning, including but not limited to deep learning. Experience utilising GPU …
- … finite-element models (e.g., Poisson, linear elasticity, large-deformation soft tissue) for real-time execution on AR devices and GPUs. Implement these models within open-source frameworks such as SOFA …
- …-term project. We are looking for a software engineer to develop new features and extend the capabilities of a real-time neural data processing and decoding platform. This includes optimizing GPU …
- … role: We are seeking a highly motivated PhD student to perform fundamental research and to conceive truly sparse solutions (on both CPU and GPU) for dynamic sparse training, aiming to cut the training …
- … of advanced language models and derived use cases by focusing on one or more of the following topics in their PhD project: training and inference of ML models on GPU clusters; method development for scalable …