- Optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism). Develop high-performance inference engines, improving latency, throughput, and memory usage.
- Contributions to well-known open-source projects or a personal portfolio of impactful open-source research code. Experience with large-scale distributed training and high-performance computing (HPC) environments.
- Description: This PhD project bridges computational neuroscience and machine learning to study the mechanisms of active forgetting (or unlearning) through the lens of both biological and artificial systems.