molecular dynamics simulations and was specially designed for parallelisation on GPUs. It is open source and licensed under the LGPL. Details can be found on the website https://halmd.org
-
and train CNN and SNN models utilizing frameworks such as Keras, PyTorch, and SNNtorch. Implement GPU acceleration through CUDA to enable efficient neural network training. Apply hardware-aware design
-
Max Planck Institute for Intelligent Systems, Tübingen site, Tübingen | Bingen am Rhein, Rheinland Pfalz | Germany | 2 months ago
— operates a state-of-the-art GPU cluster with more than 1200 GPUs, serving as a critical backbone for advancing ground-breaking research in AI. Possible tasks include: Build, administer, optimize, and
-
, TrustLLM and EuroLingua-GPT, in which large foundation models are trained from scratch on the basis of several million GPU hours and several thousand GPUs. The distinctive feature of the work at the FMR-Lab
-
on them. Work on the design, development, and operation of GPU and compute cluster systems together with an interactive team. Serve as the first point of contact for users for help or problem analysis and
-
experiment operation in 2028. Emphasis is placed on applying the software in real time, making use of massive parallelism on the CPU and/or the GPU. Your profile: From the applicant, we expect a
-
performance on neuromorphic hardware. What you will do: Develop and train CNN and SNN models using frameworks like Keras, PyTorch, and SNNtorch. Implement GPU acceleration using CUDA for efficient training
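The "GPU acceleration using CUDA" task in this listing typically boils down to moving the model and the training batch to a CUDA device in PyTorch. A minimal sketch of one such training step follows; the toy CNN, layer sizes, and fake data are illustrative assumptions, not part of the advertised project.

```python
import torch
import torch.nn as nn

# Pick a CUDA GPU when one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical toy CNN; real models (e.g. SNNs built with snntorch)
# would replace this, but the device-placement pattern is the same.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # small convolutional layer
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),                 # 10-class output head
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch of 28x28 grayscale images and labels, created on the device.
x = torch.randn(4, 1, 28, 28, device=device)
y = torch.randint(0, 10, (4,), device=device)

# One training step: forward pass, loss, backward pass, parameter update.
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Because both the parameters and the batch live on the same device, the identical script runs on CPU or GPU; only the `device` selection changes.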
-
researcher with a proven track record in areas relevant to auto-tuning, focusing on ML-driven compiler optimization, transfer learning, and programming for heterogeneous systems across CPUs, GPUs, and
-
, parallel/distributed computing, as well as diverse architectures and an understanding of their impact on application performance. Knowledge of GPU-based programming and modelling of scientific simulations