- … /GPUs. These devices provide massive spatial parallelism and are well-suited for dataflow programming paradigms. However, optimizing and porting code efficiently to these architectures remains a key …
- … provide a performance or efficiency advantage, and determine scenarios where conventional AI accelerators (such as embedded GPUs or FPGA-based accelerators) remain more appropriate due to data …
- … C++ and Python programming languages. Experience in open-source projects, GPU programming, distributed computing, and cloud computing is considered a strong asset. The position of Research Fellow at …
- … optimisation, distributed-parallel-GPU optimisation (e.g. pagmo2), Taylor-based numerical integration of ODEs (e.g. heyoka), differential algebra and high-order automated differentiation (audi), quantum …
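For context on one of the libraries named in the last excerpt: pagmo2 performs distributed-parallel optimisation through an island model, exposed in Python via the pygmo package. A minimal sketch, assuming pygmo is installed; the test problem, algorithm, and population sizes below are illustrative choices, not taken from the posting:

    import pygmo as pg

    # A standard 10-dimensional Rosenbrock test problem.
    prob = pg.problem(pg.rosenbrock(dim=10))

    # Self-adaptive differential evolution, 100 generations per evolve() call.
    algo = pg.algorithm(pg.sade(gen=100))

    # Island-model archipelago: 8 islands evolving the same problem in parallel.
    archi = pg.archipelago(n=8, algo=algo, prob=prob, pop_size=20)
    archi.evolve()   # launch asynchronous evolution on all islands
    archi.wait()     # block until every island has finished

    # Best objective value found across the archipelago.
    print(min(isl.get_population().champion_f[0] for isl in archi))

Each island evolves its own population in a separate thread of execution; this island model is the mechanism behind the "distributed-parallel" optimisation the excerpt refers to.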