- …FAISS/embedding retrieval, LLM-based parsing, RAG-style pipelines, and GPU/HPC training. Familiarity with 3D data processing, or willingness to learn quickly. Publications, thesis work, or demonstrable…
- …(Jubail) dedicated to the science division, several GPU-based clusters at NYUAD, and other supercomputer facilities through the CASS network. NYUAD also has guaranteed observing time on the Green Bank…
- …suitable for high-I/O and large-memory workloads. Mass data storage: 2.5 PB of networked storage, plus an additional 150 TB of high-speed SSD storage for fast data access. GPU supercomputing: a GPU server with…
- …signal processing and/or survey datasets. ML and AI techniques and applications. HPC and orchestration of scientific data-processing workflows. Parallel computing (GPU and CPU). Good software engineering…
- …stakeholders as required. Person Specification. Important note: it is the University's policy to use the person specification as a key tool for short-listing. Candidates should evidence that they meet ALL…
- …TensorFlow) with several years of practice. Experience in maintaining high-quality code on GitHub. Experience in running and managing experiments using GPUs. Ability to visualize experimental results and learning…
- …Responsibilities: provide computational support for structural biology research involving cryo-EM, X-ray crystallography, and molecular modeling. Maintain and manage specialized Linux-based GPU computing systems…
- …framework and implemented on our research scanners for pre-clinical trials and validation at the University of Copenhagen using GPU processing. Qualifications: candidates should have a PhD degree in electrical…
- …project is to develop a high-performance computing framework for mass-spectrometry proteomics to enhance efficient processing and interpretation of large datasets using deep-learning algorithms and GPU…
- …and reproducible research practices. Desirable criteria: experience working with generative models or large language models; experience with large-scale GPU-based model training and cloud computing.