… tasks across distributed infrastructures. A key aspect of the position involves integrating and exposing hardware accelerators, such as GPUs and FPGAs, in a seamless and portable way. This includes designing execution logic and resource-scheduling strategies that make efficient use of available …
… environment, which brings together more than 400 researchers across disciplines. The collaboration provides access to substantial computational resources (GPU nodes), advanced high-throughput instruments (including a FACS, mass …
… management, high-performance computing systems, GPU acceleration, and parallel file systems
* Documented experience with container and cloud technologies such as Docker, Helm, and Kubernetes
* Ability …
… frameworks (e.g., PyTorch). Engineering skills: GPU/cluster training, experiment tracking, data engineering. Ability to formulate research questions and run empirical studies at scale. *For students with …
… engineering. The work involves simulations for quantum error correction and mid-circuit operations, and will require both low-level optimization skills (e.g., SIMD, GPU, FPGA) and an understanding of quantum …