Employer
- INESC TEC
- Princeton University
- SINGAPORE INSTITUTE OF TECHNOLOGY (SIT)
- National University of Singapore
- FCiências.ID
- Humboldt-Universität zu Berlin
- INESC ID
- Lawrence Berkeley National Laboratory
- Nanyang Technological University
- University of Arkansas
- University of Idaho
- University of Minho
- University of Oslo
- Université Catholique de Louvain (UCL)
- …skills (Python preferred), with familiarity with GPU or distributed computing environments. • Experience with biomedical or neuroimaging data is advantageous but not required. • Excellent analytical, writing…
- …communicate results clearly in writing and presentations. Desired qualifications: knowledge of GPU architecture and GPU programming; interest or experience in distributed training on large scientific datasets…
- …of the ROBERTA research project, with the aim of exploring the potential of GPU programming for treatment planning using randomized optimization approaches, and of developing optimization models and…
- …scientific software development. Proficiency in C/C++ and Python, with experience in HPC environments (e.g., MPI/OpenMP; GPU experience a plus). Record of peer-reviewed publications appropriate to career stage…
- …PyTorch/TensorFlow); experience working with cluster/GPU computing resources is desirable; excellent verbal and written communication skills and the ability to work effectively in a collaborative team…
- …development skills; model deployment (e.g., ONNX, TensorRT); edge computing or embedded vision systems (e.g., NVIDIA Jetson Nano); real-time processing and GPU acceleration; experience working on industry R&D…
- …made at the Postdoctoral Research Associate rank. The AI Postdoctoral Research Fellow will have access to the AI Lab GPU cluster (300 H100s). Candidates should have recently received or be about to…
- …of today’s heterogeneous hardware (multicore CPUs, GPUs, SmartNICs, disaggregated datacenters). We explore: SmartNICs and P4 switches for offloading intelligence from hosts; device-to-device communication…
- …part of the core PLI team, which includes top-tier faculty, research fellows, scientists, software engineers, postdocs, and graduate students. Fellows will have access to the AI Lab GPU cluster (300…
- …energy consumption, and accuracy. Training deep learning models, especially LLMs, faces critical challenges that compromise the optimal use of GPUs. These bottlenecks result in poor computational…