- …optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism). Develop high-performance inference engines, improving latency, throughput, and memory… (see the data-parallel sketch after these excerpts)
- …to well-known open-source projects or a personal portfolio of impactful open-source research code. Experience with large-scale distributed training and high-performance computing (HPC) environments.
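The first excerpt names data parallelism among the parallelization strategies. Below is a minimal, self-contained sketch of synchronous data-parallel training; it assumes PyTorch with its DistributedDataParallel wrapper and the gloo backend, and the toy model, synthetic data, and hyperparameters are illustrative placeholders, not drawn from any listing.

```python
# Minimal single-node data-parallel training sketch (assumes PyTorch is installed;
# the model, data, and hyperparameters are placeholders, not from any listing).
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def run(rank: int, world_size: int) -> None:
    # Each process handles one shard of the data; gradients are all-reduced by DDP.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Toy model and synthetic dataset, partitioned across ranks by the sampler.
    model = DDP(torch.nn.Linear(32, 1))
    data = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(data, num_replicas=world_size, rank=rank)
    loader = DataLoader(data, batch_size=64, sampler=sampler)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shard assignment each epoch
        for x, y in loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)
            loss.backward()  # DDP overlaps gradient all-reduce with backward
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(run, args=(world_size,), nprocs=world_size)
```

Each rank sees a distinct data shard, while DistributedDataParallel keeps replicas in sync by averaging gradients during the backward pass; tensor and pipeline parallelism, also mentioned in the excerpt, instead split the model itself across devices and are not shown here.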