- Take advantage of unparalleled computing resources in the academic environment by optimizing AI/ML models, including scaling models across a large set of GPUs; building or optimizing LLMs to tackle new, complex tasks; developing new models of brain circuits and function; and learning software engineering.
- Experience with cloud computing platforms (e.g., AWS, GCP, Azure).

Additional Qualifications

- Experience with multi-GPU model training and large-scale inference.
- Familiarity with modern AI environments and tools.
- Prior experience … support.
- Deep understanding of modern machine learning architectures and optimization techniques.
- Proficiency in deep learning frameworks (PyTorch).
- Experience with the Nvidia GPU stack and HPC technologies.
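For illustration only: "multi-GPU model training" with PyTorch most commonly means data-parallel training via DistributedDataParallel. The sketch below is a minimal, hypothetical example and is not part of the posting; the model, batch size, and step count are placeholder assumptions.

```python
# Minimal multi-GPU data-parallel training sketch (hypothetical example).
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real project would load an LLM or research model.
    model = torch.nn.Linear(512, 512).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # placeholder training steps on synthetic data
        x = torch.randn(32, 512, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()   # DDP all-reduces gradients across GPUs here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Scaling this same pattern to a large set of GPUs across nodes changes only the launcher invocation (one torchrun per node, or a scheduler such as Slurm), not the training loop itself.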