/GPU environments. Provide consultative support and training to researchers using BRC AI/ML tools and pipelines. Perform related duties and responsibilities as assigned or requested. Qualifications REQUIRED
-
development skills Model deployment (e.g., ONNX, TensorRT) Edge computing or embedded vision systems (e.g., NVIDIA Jetson Nano) Real-time processing and GPU acceleration Experience working on industry R&D
-
(URCF) at Drexel University is building a new shared computing platform focused on GPU-accelerated workloads, particularly AI model training. The system includes GPU and CPU compute nodes with Nvidia H200
-
heterogeneous (CPU/GPU) computing models. Collaborate with physicists, computer scientists, mathematicians and engineers across LBNL divisions to define software requirements, implement robust solutions, and
-
that serve the entire campus community. You will bridge the gap between high-performance hardware and practical user applications, ensuring that our AI infrastructure, from GPU infrastructure to sovereign data
-
experiments, particularly ATLAS and DUNE. Contribute to the architecture and core development of the Phlex framework, emphasizing scalable, multi-threaded, and heterogeneous (CPU/GPU) computing models
-
well as large-scale GPU computing facilities for deep learning. Our Lab aims to hire a Research Fellow to lead a research project on Real-World Deepfake Detection and Image Forgery Localization. The role will
-
, multi-modal data, and GPU-accelerated machine learning for materials science. Information We are seeking two highly motivated postdoctoral researchers to join the Horizon Europe project SIMU-LINGUA, a
-
Job Code: 0005. Employee Class: Civil Service. About the Job: The successful applicant will assist in the adaptation of the PPMstar code to run well on GPU-accelerated