- …computing (HPC) systems, including GPUs, and programming, such as using CUDA, MPI, AI/ML/DL, and advanced debuggers and performance analyzers. Familiarity with working on open-source projects. About UF…
- …optimized code written by expert programmers that can target different hardware architectures (multicore, GPUs, FPGAs, and distributed machines). In order to achieve the best performance (fastest execution) for a…
- …Experience managing systems utilizing GPU (NVIDIA and AMD) clusters for AI/ML and/or image processing. Knowledge of networking fundamentals, including TCP/IP, traffic analysis, common protocols, and network…
- …architectures. This includes, among others: (a) design and implementation of machine learning and GenAI models, (b) efficient training and inference on GPU-based systems, (c) fine-tuning and optimization of large…
- …Engineers. Serve as liaison with Princeton Research Computing staff on GPU cluster-related issues. Professional Development: learn the underlying science, mathematics, statistics, data analysis, and algorithms…
- …://shimadzuinstitute.org/ Faculty members have excellent access to computational resources, including the Texas Advanced Computing Center (TACC) and multiple HPCs on campus, including some GPU-heavy clusters as…
- …Evaluates and selects appropriate foundation models (open-source vs. proprietary) and hosting strategies (Azure AI Foundry, AWS Bedrock, local GPU/TPU), directly influencing the University's cloud spend and…
- …these resources through a cloud-native Kubernetes environment integrating large-scale CPU and GPU resources, Ceph object storage, BinderHub, Coffea-Casa, Dask, and ServiceX. This platform supports more than 500…
- …pipelines for complex decision-making. Conducting adversarial testing, implementing input sanitization, and contributing to AI-safety research. Utilizing GPU/TPU resources, mixed-precision training, and…
- …PyTorch) for ML applications; training, evaluation, and deployment of models; use of GPU-based servers and modern IT infrastructure for training and inference; application of classical ML methods (e.g.…