- of the following technology areas: hardware/software co-design, performance optimization with heterogeneous and alternative computing systems (CPU/GPU/NPU/etc.), FPGA design, high-performance computing
- Infrastructure - Collaborate with research computing teams and HPC centers to architect AI infrastructure solutions that support both administrative and research computing needs, including GPU-accelerated
- , including Large Language Models (LLMs), agent-based systems, and Retrieval-Augmented Generation (RAG). Practical expertise in training and optimizing neural networks on high-performance (GPU-enabled
- -based HPC services, this role will involve supporting SAS researchers who use Penn's new PARCC (Penn Advanced Research Computing Center) centralized HPC services, including both CPU and GPU cutting-edge
- inference Develop distributed model training and inference architectures leveraging GPU-based compute resources Implement serverless and containerized solutions using Docker, Kubernetes, and cloud-native
- , and interacting with pilots and passengers. Operates and becomes familiar with ground support equipment, such as the aircraft tug, ground power unit (GPU), lavatory service cart, de-icing cart, forklift
- computing (HPC) systems, including GPUs, and programming, such as using CUDA, MPI, AI/ML/DL, and advanced debuggers and performance analyzers. Familiarity with working on open-source projects. About UF
- optimized code written by expert programmers and can target different hardware architectures (multicore, GPUs, FPGAs, and distributed machines). In order to have the best performance (fastest execution) for a
- ). Experience managing systems utilizing GPU (NVIDIA and AMD) clusters for AI/ML and/or image processing. Knowledge of networking fundamentals including TCP/IP, traffic analysis, common protocols, and network
- architectures. This includes, among others: (a) design and implementation of machine learning and GenAI models, (b) efficient training and inference on GPU-based systems, (c) fine-tuning and optimization of large