- … provide a performance or efficiency advantage, and determine scenarios where conventional AI accelerators (such as embedded GPUs or FPGA-based accelerators) remain more appropriate due to data …
- … small cluster computers, high-end GPU nodes and specially configured workstations on which experiments are conducted. You will be working in a team of junior and senior system administrators. Naturally …
- … C++ and Python programming languages. Experience in open-source projects, GPU programming, distributed computing and cloud computing is considered a strong asset. The position of Research Fellow at …
- … System Network Engineer to join our department, with a specific knack for networking. The infrastructure combines accelerated computing and GPU clusters, open-source platforms-as-a-service, and fast …
- … calculated using our Software Energy Lab, which has multiple test machines with GPUs and, in the future, AI accelerators. Development teams currently lack guidance on how to create sustainable systems. You …
- … optimisation, distributed-parallel GPU optimisation (e.g. pagmo2), Taylor-based numerical integration of ODEs (e.g. heyoka), differential algebra and high-order automated differentiation (audi), quantum …