- …modern high performance computation facilities and parallel computing clusters (CPU and GPU). Excellent publication record and demonstrated conference presentation skills. Demonstrated ability to operate…
- …have access to state-of-the-art facilities and major cyberinfrastructure investments, including the Advanced Research Computing Center (ARCC), the NCAR-Wyoming Supercomputing Center (NWSC), GPU computing…
- …communication skills. First-author publications at NeurIPS, ICLR, ICML, AAAI, KDD, or IJCAI. Experience working with large-scale, noisy, or real-world datasets. Experience with GPU-based training and high-performance…
- …Training LLMs, large-scale deep learning systems, and/or large foundation models using GPU/TPU parallelization while setting up the environment/system network under various constraints, such as limited…
- …resources, including the Texas Advanced Computing Center (TACC), and multiple HPCs on campus, including some GPU-heavy clusters. University of Texas at Arlington Research Institute (UTARI) Center…
- …for candidates with experience in ML model deployment, workflow orchestration, and high-throughput data processing, as well as experience working with large biological datasets in scalable GPU-based computing environments. What we provide: A competitive compensation package, with comprehensive health and welfare benefits. A supportive team environment that promotes…
- …advanced C++ development, including multi-threading, CMake, and Eigen. Familiarity with GPU-based computing concepts such as CUDA. Understanding of industrial communication protocols such as Ethernet, USB…
- …to the development of advanced language models and derived use cases by focusing on one or more of the following topics in their PhD project: Training and inference of ML models on GPU clusters. Method development…