- Optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism). Develop high-performance inference engines, improving latency, throughput, and memory efficiency (see the data-parallel sketch after this list).
- Contributions to well-known open-source projects or a personal portfolio of impactful open-source research code. Experience with large-scale distributed training and high-performance computing (HPC) environments.
- of the project rests on the fact that activity-recording data are collected and integrated into the model from multiple experimental sources, in the hope of exploiting the full power of computational modelling to span
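To illustrate the data-parallel piece of the first listing, here is a minimal sketch using PyTorch's DistributedDataParallel. The linear model, synthetic batches, hyperparameters, and the `torchrun` launch are illustrative assumptions, not details from any posting above.

```python
# Minimal data-parallel training sketch with PyTorch DistributedDataParallel.
# Assumes launch via `torchrun --nproc_per_node=<gpus> ddp_sketch.py`, which
# sets the env vars that init_process_group reads. Model/data are placeholders.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # one process per GPU
    rank = dist.get_rank()
    device = rank % torch.cuda.device_count()
    torch.cuda.set_device(device)

    model = torch.nn.Linear(1024, 1024).to(device)  # placeholder model
    ddp_model = DDP(model, device_ids=[device])     # replicates model per rank
    opt = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device=device)    # each rank gets its own shard
        loss = ddp_model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()            # gradients are all-reduced (averaged) across ranks
        opt.step()
        if rank == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with `torchrun --nproc_per_node=<gpus> ddp_sketch.py`, each process holds a full model replica; DDP overlaps the gradient all-reduce with the backward pass, which is the basic lever for scaling throughput with more GPUs.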