- …implementing input sanitization, and contributing to AI‑safety research. Utilizing GPU/TPU resources, mixed‑precision training, and distributed training frameworks such as DeepSpeed or ZeRO. Prior work…
- …University is home to Türkiye’s largest GPU cluster, providing advanced infrastructure for leading-edge AI research. The Department of Computer Science and Engineering at Koç University has world-renowned…
- …home to HiPerGator, one of the most powerful high-performance computers at a US public university (https://www.rc.ufl.edu/about/hipergator/), and recently added the new NVIDIA AI GPU SuperPod (https…
- …Knowledge of scaling and optimising software to take advantage of GPU/HPC infrastructure. Desirable: B1 Knowledge of Trusted Research Environments outwith or within an HPC environment. Skills Essential: C1…
- …storage and archiving solutions to collaboration and analytics tools. ARC also delivers Baskerville, a leading GPU-accelerated National Compute Resource (NCR), and supports researchers using specialist…
- …following technology areas: hardware/software co-design, performance optimization with heterogeneous and alternative computing systems (CPU/GPU/NPU, etc.), FPGA design, high-performance computing (HPC…
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava | Czech Republic | 3 months ago
  …on highly scalable parallel applications with a focus on: development and implementation of parallel applications, GPU acceleration of applications, application optimization (improving scalability, vectorization…
- …SecureData4Health (SD4H) OpenStack cloud infrastructure. It currently includes 15,000 vCPUs, 60 petabytes of storage, and 30 GPUs, and is growing as additional academic research projects join. The Software Infrastructure…
- …networking technologies such as InfiniBand. Working knowledge of GPU technologies like CUDA and OpenCL. Experience with distributed computing job schedulers (e.g., Slurm, PBS). Familiarity with…
- …infrastructures, improving system performance, scalability, and efficiency by optimizing resource usage (e.g., GPUs, CPUs, energy consumption). Researchers and students will explore innovative approaches to reduce…