Employer
- University of Washington
- Northeastern University
- Nature Careers
- University of Glasgow
- New York University Abu Dhabi
- UiT The Arctic University of Norway
- University of Texas at Dallas
- Brookhaven Lab
- Durham University
- European Space Agency
- FCiências.ID
- Harvard University
- Humboldt-Stiftung Foundation
- Johns Hopkins University
- Monash University
- NIST
- Oak Ridge National Laboratory
- SUNY University at Buffalo
- University at Albany, State University of New York
- University of California
- University of California Davis
- University of California, San Diego
- University of Colorado
- University of Pennsylvania
- University of the Pacific
- Université Côte d'Azur
- Zintellect
- … programming; experience programming distributed systems; experience developing parallel and distributed file systems (e.g., Lustre, GPFS, Ceph). Advanced experience with high-performance computing and/or …
- … for a given Tiramisu program, many code optimizations should be applied. Optimizations include vectorization (using hardware vector instructions) and parallelization (running loop iterations in parallel) …
- University at Albany, State University of New York | Albany, New York | United States | about 18 hours ago
  … transformative paradigm. By taking advantage of qubits' ability to exist in multiple states simultaneously and exploiting the probabilistic and delocalized behaviors of quantum systems, quantum computing promises …
- … Mixture-of-Experts; distributed training/inference (e.g. FSDP, DeepSpeed, Megatron-LM, tensor/sequence parallelism); scalable evaluation pipelines for reasoning and agents. Federated & collaborative …
- … /Fortran); shared- and distributed-memory programming tools (e.g. OpenMP, MPI); accelerator programming (e.g. CUDA, OpenCL, SYCL); machine-learning libraries such as TensorFlow or PyTorch; serial and parallel …
- … four years. The nominal length of the PhD programme is three years; the fourth year is distributed as 25 % per year and consists of teaching and other duties. The objective of the position is that …
- … parallel processing, distributed computing, and resource-management techniques for efficient resource utilization. Resource allocation: oversee the allocation of computational resources, ensuring scalability …
- … The position is for a period of four years. The nominal length of the PhD programme is three years; the fourth year is distributed as 25 % per year and consists of teaching and other duties. The objective …