- Optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism); develop high-performance inference engines, improving latency, throughput, and memory efficiency.
- Methods to accelerate the discovery and optimization of novel materials, and actively develop large-scale materials models (AI for Science) to transform the R&D process through AI-driven paradigms.
- -Based Generative Models: How can we fundamentally redesign generation processes for superior efficiency, controllability, and quality? We are exploring diffusion models, flow matching, and other parallel approaches.