- …GPU utilization and allocate computing resources efficiently across users. c) create and manage user accounts for faculty and students; troubleshoot user issues; and design and deliver…
- …to support computations on GPU hardware with various types of Finite Element methods. This work is embedded in a research project on structure-preserving Finite Element methods for multiphase flows…
- …of today’s heterogeneous hardware (multicore CPUs, GPUs, SmartNICs, disaggregated datacenters). We explore: SmartNICs & P4 switches for offloading intelligence from hosts; device-to-device communication…
- …Training LLMs, large-scale deep learning systems, and/or large foundation models using GPU/TPU parallelization while setting up the environment/system network under various constraints, such as limited…
- …Engineers. Serve as liaison with Princeton Research Computing staff on GPU-cluster-related issues. Professional Development: Learn the underlying science, mathematics, statistics, data analysis, and algorithms…
- …scientific computing, etc.). Strong scientific computing background, with experience of different architectures (e.g., CPUs/GPUs) and their use in high-performance computing through shared or distributed…
- …modules, and monitor training progress. Display performance metrics (e.g., inference time, GPU utilization, throughput, ROI impact) in real time. System Integration: Work with the research team to connect AI…
- …tasks. Application Instructions: To apply, please submit your resume with your GPA clearly listed at the top. Applicants should also include 2–3 professional or academic references. What We Provide: Hands-on experience in a dynamic and immersive research…
- …optimization, with rigorous theoretical analysis. The ideal candidate has strong machine learning and AI expertise and is comfortable with, or eager to learn, large-scale multi-GPU experimentation…