Inria, the French national research institute for the digital sciences | Pau, Aquitaine | France | 3 months ago
://ffaucher.gitlab.io/hawen-website/ ), by enriching its feature set and improving its parallel performance on graphics processing units (GPUs). The Hawen software simulates wave propagation in various media, including
-
Post-doctoral Researcher in Multimodal Foundation Models for Brain Cancer & Neuro-degenerative Diseases
translation models and medical image analysis are considered important assets. Established expertise in key machine & deep learning frameworks and toolsets. Experience in GPU computing, HPC, Containers & Image
-
computer vision and pattern recognition, including but not limited to biomedical applications. Strong interest in applied machine learning, including but not limited to deep learning. Experience utilising GPU
-
Inria, the French national research institute for the digital sciences | Villers les Nancy, Lorraine | France | about 1 month ago
have access to major national HPC facilities (Grid'5000, Jean Zay, GENCI allocations), including large-scale GPU resources. Biomolecular function is driven by both structure and dynamics. Understanding
-
typically work with datasets of up to several TBs, depending on the case study); - Autonomy to conduct independent analysis and research on our (GPU/CPU) servers; familiarity with coding frameworks in machine
-
l'institut du thorax, INSERM, CNRS, Nantes Université | Nantes, Pays de la Loire | France | about 2 months ago
Devices" ).
• Bring various improvements to the synthetic model (vasculature shape / aneurysm / background noise modelling)
• Numerical simulations will be performed on a GPU HPC cluster.
• Programming in
-
Inria, the French national research institute for the digital sciences | Saint Martin, Midi Pyrenees | France | 2 months ago
, embeddings with transformers, training with flow matching) and high performance computing (e.g. handling large-scale parallel simulators, multi-node and GPU training on large supercomputers). When considering