- … Conduct experimental studies using GPU-enabled computing resources for model training, inference, and simulation-based evaluation. Support rapid prototyping and iteration of research ideas, from concept …
- … optimizing, and deploying AI models on HPC and GPU-based systems. Provide guidance on performance optimization, scaling, and efficient resource utilization. Contribute to architectural and design decisions in …
- AUSTRALIAN NATIONAL UNIVERSITY (ANU) | Canberra, Australian Capital Territory | Australia | about 1 month ago
  … that supports this project has an expected end date of 30 June 2028. This role gives you hands-on access to Australia's national supercomputing infrastructure, including world-class HPC clusters and large-scale GPU …
- … detector plus Falcon 4i, and the other with a Selectris energy filter and Falcon 4i. The Crick Institute has excellent High Performance Computing resources, dedicated high-speed data storage, and CPU and GPU …
- … model. Further your knowledge of quantitative modeling and financial analysis. Engage with emerging technologies (e.g. cloud/grid computing, GPU computing, FPGA) in the Fintech field. Benefit from …
- … Documented experience in large-scale data management, high-performance computing systems, GPU acceleration, and parallel file systems. Ability to communicate fluently in English, both spoken and written …
- … models on GPU infrastructure (SSH access) and distributed computing environments. Strong problem-solving, documentation, and communication skills across technical and non-technical contexts. Ability …
- … The role involves the design, implementation, and testing of GPU compute kernels, and associated host code, for the CHR real-time pipeline. Particular challenges include high-throughput beamforming via a …
- … running on CPU and GPU, this PhD thesis aims to characterize the dynamics of these shock waves, their long-term evolution, and their observational signature. The thesis is funded by a scholarship …
- … -mode taxonomies). Implement and maintain high-quality research codebases (PyTorch/HF), experiment tracking, and compute workflows (multi-GPU, HPC/cluster), ensuring reproducibility and documentation …