- … academic setting. Proficient in at least two programming languages used in research (e.g., Python, C++, Fortran); experience with programming paradigms used in HPC (e.g., MPI, GPU programming); experience …
- … strategies for large-scale or streaming data. Develop parallelized and GPU-accelerated learning modules, ensuring scalability and performance efficiency. Build and maintain robust data pipelines for high …
- … of predicting electronic, structural, and thermal quantities while leveraging underlying symmetries for computational efficiency. There will be a significant computational component in deploying multi-GPU codes …
- … optimized code written by expert programmers that can target different hardware architectures (multicore, GPUs, FPGAs, and distributed machines). In order to have the best performance (fastest execution) for a …
- … research, research groups require access to high-performance data storage for data-intensive processing, visualization for data analysis, GPUs, access to cloud services, and research infrastructure skills …
- Proficiency in Python and machine learning frameworks such as PyTorch, TensorFlow, or JAX. Experience with HPC and GPU-accelerated computing. Familiarity with foundation models / LLMs; interest in reproducible …
- … systems, and the Zero-G Lab, a unique facility designed to emulate proximity operations under space-like conditions. In addition, CVI2 provides high-performance GPU computing resources that support the design and training of advanced AI models. The research agenda of CVI2 focuses on cutting-edge topics such as 3D understanding and …
- … hardware architects to establish how agentic AI and these languages co-design with heterogeneous HPC systems (CPUs, GPUs, PIM, AI accelerators). Study performance and portability trade-offs, leveraging …
- AUSTRALIAN NATIONAL UNIVERSITY (ANU) | Canberra, Australian Capital Territory | Australia | about 2 months ago
  … that supports this project has an expected end date of 30 June 2028. This role gives you hands-on access to Australia's national supercomputing infrastructure, including world-class HPC clusters, large-scale GPU …