- …high-performance computing, including parallel or GPU programming (MPI, OpenMP, CUDA, Kokkos, etc.); familiarity with modern software development practices, including debugging, profiling, and version control…
- …of GPUs and/or time in either training or inference procedures, which pose considerable challenges to both academia and industry for widespread access and deployment. In particular, the sampling process of…
- …of research computing at LSE. Your expertise will be key in future-proofing our research hardware environment, ensuring high availability, scalability and security across HPC clusters; GPU acceleration, high…
- …Experience designing and operating massive-scale GPU and combined CPU/GPU workloads across these services. You will design and debug platforms and work closely with researchers as you co-design solutions that will…
- …an "infrastructure as code" approach to systems automation. You'll be working across a range of predominantly Linux-based systems, including HPC and GPU-accelerated compute, large-scale and high-performance storage, and…
- …including recent developments; Significant experience with the development of custom modules using GPU-accelerated APIs for deep learning (e.g., PyTorch; a minimal sketch follows this list); and Publications in top-tier venues in Machine Learning and/or Signal…
- …theoretical physics, whose responsibilities relate to distributed systems and the GPU optimization of AI algorithms. We expect the team to grow in size considerably over the next few years, and are looking…
- …nature of electroweak symmetry breaking and mass generation in the Standard Model. We developed state-of-the-art (open-source) software running on GPU- and CPU-based supercomputing architectures, and…
- …They will have strong Python programming skills; familiarity with Deep Learning, including PyTorch, would convey a significant advantage. They will have access to our in-house GPU-enabled High Performance…
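
Several of the excerpts above ask for experience developing custom modules with GPU-accelerated deep learning APIs such as PyTorch. The sketch below is purely illustrative and not drawn from any of the postings: a minimal, hypothetically named custom PyTorch module that runs on a GPU when one is available.

```python
# Illustrative only: a small custom PyTorch module of the kind the
# listings above ask candidates to be able to build. The class name
# and dimensions are hypothetical.
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Linear projection gated by a sigmoid, plus a residual connection."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.proj(x) * torch.sigmoid(self.gate(x))

# Move the module and its input to a GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
block = GatedResidualBlock(dim=64).to(device)
out = block(torch.randn(8, 64, device=device))
print(out.shape)  # torch.Size([8, 64])
```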