- … Experience managing systems utilizing GPU (NVIDIA and AMD) clusters for AI/ML and/or image processing. Knowledge of networking fundamentals including TCP/IP, traffic analysis, common protocols, and network …
- … especially on GPU infrastructure enhancements and improvements as part of Yale's comprehensive campus investment in AI. As an experienced subject matter expert, you will help lead the system design …
- … theoretical physics, whose responsibilities relate to distributed systems and the GPU optimization of AI algorithms. We expect the team to grow in size considerably over the next few years, and are looking …
- … TrustLLM and EuroLingua-GPT, in which large foundation models are trained from scratch on the basis of several million GPU hours and several thousand GPUs. The distinctive feature of the work at the FMR-Lab …
- … quantitative genetics, machine learning, bioinformatics, and population genetics, and their applications in an agricultural setting. A modern dedicated computational infrastructure (CPUs & GPUs). Well-developed in …
- … state-of-the-art high-performance computing (HPC) infrastructure. In this position, you'll leverage your expertise to support and manage CSHL's AI-driven compute cluster powered by NVIDIA H100 GPUs, empowering …
- … software development with GPUs. Experience with visualization of large data sets. An understanding of how to make results easily available using common web interfaces. Required Documents: Resume, Cover Letter …
- … Computational Infrastructure: Deploy and maintain high-performance computing environments (GPU clusters, cloud services) for large-scale image-text experimentation. Data Engineering: Establish workflows …
- … of parallel computing (GPUs) to speed solution within the optimisation process. Funding Notes: 1st or 2:1 degree in Engineering, Materials Science, Physics, Chemistry, Applied Mathematics, or other relevant …
- … management of a state-of-the-art medical imaging research data center. The environment includes a robust multi-CPU/GPU architecture with virtual and physical servers, supporting advanced parallel computation …