- containerisation, orchestration platforms, and CI/CD practices. Benchmark, profile, and optimise AI applications across software and hardware layers to maximise GPU cluster efficiency (see the profiling sketch after this list). Build research community
- infrastructures, improving system performance, scalability, and efficiency by optimizing resource usage (e.g., GPUs, CPUs, energy consumption). Researchers and students will explore innovative approaches to reduce
- ) for reproducible research workflows. Support optimising GPU-accelerated workloads (e.g., PyTorch, TensorFlow), including multi-GPU scaling and distributed training (see the distributed-training sketch after this list). Develop training materials, documentation, and
- learning frameworks such as PyTorch, JAX, or TensorFlow. Experience with C++ and GPU programming. A strong growth mindset, attention to scientific rigor, and the ability to thrive in an interdisciplinary
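
To illustrate the benchmarking and profiling duties listed above, the following is a minimal, illustrative sketch only, not part of the role description: it uses `torch.profiler` to show which operators dominate GPU time in a single training step. The model, layer sizes, and batch size are placeholders chosen for the example.

```python
# Sketch: profile one forward/backward/optimiser step of a toy PyTorch model.
# Assumes PyTorch is installed; falls back to CPU profiling if no CUDA GPU is present.
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
data = torch.randn(64, 1024, device=device)
target = torch.randint(0, 10, (64,), device=device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True) as prof:
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()

# Print the operators that dominate runtime; on GPU, sort by CUDA time.
sort_key = "cuda_time_total" if device == "cuda" else "cpu_time_total"
print(prof.key_averages().table(sort_by=sort_key, row_limit=10))
```

The printed table is a typical starting point for the software-layer side of the optimisation work: it shows which kernels or operators to target before moving on to hardware-level tuning.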
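
And to illustrate what multi-GPU scaling and distributed training typically involve, here is a minimal sketch of PyTorch `DistributedDataParallel`. The script name, toy model, and random data are placeholders; it assumes a node with NCCL-capable GPUs and a launch via `torchrun --nproc_per_node=<num_gpus> ddp_sketch.py`.

```python
# Sketch: one-node data-parallel training with DistributedDataParallel (DDP).
# torchrun starts one process per GPU and sets RANK, LOCAL_RANK, and WORLD_SIZE.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = nn.Linear(1024, 10).to(device)
    ddp_model = DDP(model, device_ids=[local_rank])  # gradients are all-reduced across GPUs
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(10):  # toy training loop on random data
        data = torch.randn(64, 1024, device=device)
        target = torch.randint(0, 10, (64,), device=device)
        optimizer.zero_grad()
        loss_fn(ddp_model(data), target).backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process trains on its own GPU while DDP averages gradients after every backward pass, which is the basic mechanism behind scaling a PyTorch workload across a multi-GPU node or cluster.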