- … and optimization strategies for large-scale or streaming data. Develop parallelized and GPU-accelerated learning modules, ensuring scalability and performance efficiency. Build and maintain robust data …
- Work location: Zürich or Lugano. Topic B: Simultaneous tree traversal with the producer-consumer pattern on GPUs. Abstract: Simultaneous tree traversal, also referred to as dual-tree traversal, can be applied …
- … robotics simulation environments across multiple platforms, including NVIDIA Isaac Sim, Gazebo, and MuJoCo, and other relevant simulators for different applications and robotics platforms. Design modular and …
- … perspectives of our people. By embracing diversity, we believe science can achieve its fullest potential. THE ROLE: You will be working in a multi-disciplinary group, where people with different backgrounds …
- The researcher will be provided access to state-of-the-art supercomputing facilities with advanced GPU and data-storage capabilities. Additionally, opportunities will be available for collaborations …
- … the future. Here's how you'll make a difference: Collaborative research centers (SFBs) are the "Champions League" of DFG-funded projects (Deutsche Forschungsgemeinschaft). They span up to 12 years …
- … can support prediction of many different clinical outcomes at once. To fuel your models, you will have access to one of the largest multicentre ICU resources to date (~1M patients, ~33B clinical events) …
- … at ESRF with the open-source PyNX software suite, which exploits graphics processing units (GPUs) for accelerated reconstructions, enabling online data analysis. You will drive software development to exploit …
- … that outperforms highly optimized code written by expert programmers and can target different hardware architectures (multicore, GPUs, FPGAs, and distributed machines). In order to have the best performance …