- …models on one or more GPUs and the ability to work with existing codebases to set up training runs. Research interest in one or more of the following areas: probabilistic machine learning, time series…
- …containerisation, orchestration platforms, and CI/CD practices. Benchmark, profile, and optimise AI applications across software and hardware layers to maximise GPU cluster efficiency. Build research community…
- …models, LLMs, and Transformer architectures. Excellent programming skills in PyTorch/JAX and experience working with GPUs and high-performance clusters. Strong mathematical skills with excellent…
- …influence the technological trajectory of the ecosystem. The core responsibilities of this position include developing and owning the overall SoC specifications and architecture, encompassing CPU, GPU, memory…
- Inria, the French national research institute for the digital sciences | Pau, Aquitaine | France | 3 months ago
  …methods (SFEM) offer superior accuracy per degree of freedom and are naturally suited to HPC architectures (CPU/GPU clusters). Two main Galerkin formulations exist: Continuous Galerkin (CG-SFEM): Memory…
- …learning, multicore and GPU programming, and highly parallel systems. Good knowledge of one or more of the following programming languages/environments: C/C++, Python, PyTorch (or similar), and CUDA. Place…
- …precision algorithms for CPUs and GPUs. Performance engineering and analysis, including application profiling and benchmarking to identify performance bottlenecks. Verification and validation of the developed…
- …communications and sensing, and a GPU Lab for training advanced machine learning models. IDLab is part of both the University of Antwerp and the research centre imec. Position: As a graduate teaching & research…
- …- and hardware-oriented, reliable programming of parallel systems ranging from embedded multi- and many-core processors to large-scale heterogeneous systems (CPUs, GPUs, accelerators, etc.). The research…
- …supporting new research and engineering using ORNL’s Frontier exascale supercomputer for its dense GPU-based HPC resources to train and deploy models and create large-scale production datasets for high-impact…
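The Inria listing above cites the accuracy per degree of freedom of spectral methods. As a minimal sketch of that spectral-accuracy idea (using a Chebyshev collocation discretization rather than the Galerkin formulation the posting describes; the test problem, grid size, and function names are illustrative assumptions, not from the listing), a few lines of NumPy solve a 1D Poisson problem to high accuracy with only 17 nodes:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and points on [-1, 1]
    # (standard construction, cf. Trefethen, "Spectral Methods in MATLAB")
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)        # Chebyshev points
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    c = c * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T                     # X[i, j] = x[i]
    dX = X - X.T                                     # dX[i, j] = x[i] - x[j]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D = D - np.diag(D.sum(axis=1))                   # diagonal: negative row sums
    return D, x

# Solve -u'' = pi^2 sin(pi x) on [-1, 1] with u(-1) = u(1) = 0;
# the exact solution is u(x) = sin(pi x).
N = 16
D, x = cheb(N)
D2 = D @ D
A = -D2[1:N, 1:N]                                    # restrict to interior nodes
f = np.pi**2 * np.sin(np.pi * x[1:N])
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(A, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))          # near machine precision
```

With only 17 nodes the error is already far below what a comparable low-order finite-difference grid achieves, which is the "superior accuracy per degree of freedom" the posting refers to; a production CG-SFEM code would assemble many such high-order elements and distribute them across CPU/GPU clusters.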