- Optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism) and develop high-performance inference engines, improving latency, throughput, and memory efficiency (a data-parallel sketch follows this list).
- Contributions to well-known open-source projects or a personal portfolio of impactful open-source research code; experience with large-scale distributed training and high-performance computing (HPC) environments.
- Institute of Health, with over 12,000 square meters of laboratories; today it comprises more than 400 researchers distributed among 33 research groups working on basic, applied, and clinical research.
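The parallelism strategies named in the first listing can be illustrated with a short data-parallel training loop. The sketch below is a minimal example using PyTorch's DistributedDataParallel; the framework choice, the toy model, and every hyperparameter are assumptions for illustration only, not requirements taken from the posting, and it presumes a launch via `torchrun` on one or more GPUs.

```python
# Minimal data-parallel training sketch (illustrative only; model, data, and
# hyperparameters are placeholders, not taken from any job posting).
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model replicated on every rank; DDP synchronizes gradients.
    model = nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # Synthetic dataset; DistributedSampler shards it across ranks.
    data = TensorDataset(torch.randn(4096, 1024), torch.randn(4096, 1024))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # DDP all-reduces gradients during backward
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=4 ddp_sketch.py`, each rank trains on its own shard of the data while gradients are averaged across ranks during `backward()`; tensor and pipeline parallelism instead split the model itself across devices and are not shown here.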