- A unique opportunity to engage in transformational research that advances AI-ready scientific data, optimized workflows, and distributed intelligence across the computing continuum.
- Demonstrated experience developing and running computational tools in high-performance computing environments, including distributed parallelism for GPUs. Demonstrated experience in common scientific programming languages.
- for Science @ Scale: pretraining, instruction tuning, continued pretraining, Mixture-of-Experts; distributed training/inference (FSDP, DeepSpeed, Megatron-LM, tensor/sequence parallelism); scalable evaluation.
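The distributed-training frameworks named above (FSDP, DeepSpeed, Megatron-LM) all rest on the same collective-communication primitive: an all-reduce that averages gradients across workers so every replica applies the same update. A minimal pure-Python sketch of that averaging step, simulating the workers in-process (the function name and values are illustrative, not any framework's API):

```python
def all_reduce_mean(worker_grads):
    """Average per-parameter gradients across workers (simulated all-reduce)."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n_workers
            for i in range(n_params)]

# Each worker computes gradients on its own data shard...
grads = [
    [0.2, -1.0, 0.5],   # worker 0
    [0.4, -0.6, 0.1],   # worker 1
]

# ...then the all-reduce gives every worker the identical averaged gradient.
avg = all_reduce_mean(grads)
print(avg)
```

In a real job this call is replaced by a hardware-aware collective (e.g. NCCL ring all-reduce), and FSDP additionally shards the parameters themselves; the averaging semantics shown here are unchanged.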