- … aggressive vehicle maneuvers; embedded systems and real-time applications; constructing GPU-based data processing pipelines for sensor data; familiarity with image and point cloud processing algorithms. Why NREC …
- … computer science or a science or engineering field. Expertise working with high-performance computing systems, GPU programming, machine learning, and/or full-stack development. Experience teaching best practices in software …
- … as well as the newly launched Center for Generative AI and its associated GPU cluster consisting of 600 GH200 nodes. Qualifications: all candidates must hold a Ph.D. or equivalent degree in Chemistry, Molecular Biology …
- … learning, AI engineering, AI infrastructure, hybrid cloud computing, and parallel programming with GPUs, to work at the Institute for Artificial Intelligence and Data Science (IAD). As a Senior Research …
- … hybrid cloud computing, and parallel programming with GPUs. As a Junior Research Engineer, you should have some basic understanding and experience in the development of scalable AI systems and deployment …
- … for the Foundations of Machine Learning (IFML) and the Good Systems initiative, as well as the newly launched Center for Generative AI and its associated GPU cluster consisting of 600 GH200 nodes. All positions are subject …
- … that combine parallel architectures (e.g., GPUs, accelerator boards, clusters) and numerical algorithms suited to such architectures, with the goal of improving the speed of convergence and the stability …
- … methodologies, including data and model parallelism. Proficiency in GPU programming and kernel-based programming for deep learning, including the ability to optimize deep learning algorithms at a low level, is a …
- … heterogeneous sets of digital processors (FPGAs, CPUs, GPUs, and potentially ASICs). Moreover, airborne radar places limitations on the size, weight, and power (SWaP) of the high-performance embedded computing (HPEC) systems that must …
- … foundation models and large vision-language models. Experience in large-scale deep learning systems and/or large foundation models, and the ability to train models using GPU/TPU parallelization. Experience …