- … cluster supports codes with distributed (MPI), shared-memory (OpenMP), and GPU (CUDA, NVIDIA A100) parallelization, and has high-speed scratch storage and InfiniBand interconnect systems. Teaching The College …
- … Experience with graph-based data analysis or anomaly detection methods. Exposure to high-performance or GPU-based computing environments. Demonstrated ability to contribute to publications or technical reports …
- … tracking), dataset curation, HPC/GPU programming, blockchain for secure data, C-family languages, and embodied AI/robotics are a plus. Experience with general network resilience, cellular automata …
- … vision systems (e.g., NVIDIA Jetson Nano); real-time processing and GPU acceleration; experience working on industry R&D projects. Key Competencies: able to build and maintain strong working relationships with …
- University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | 3 days ago
  … The postdoctoral scholar will be expected to improve on existing GPU-accelerated ocean models, develop laboratory experiments (in the Joint Fluids Lab at UNC), analyze results, and publish in peer-reviewed journals …
- … (Jubail) dedicated to the science division, several GPU-based clusters at NYUAD, and other supercomputer facilities through the CASS network. NYUAD also has guaranteed observing time on the Green Bank …
- … the College of Engineering. The UNLV GPU cluster (named RebelX) is also available for AI research and education. Detailed information about the CEEC Department can be found at: http://www.unlv.edu/ceec …
- … of the Empire AI Consortium, researchers have access to state-of-the-art computational infrastructure, including large-scale GPU clusters and high-performance computing resources. The Institute has …
- … gun, associated diagnostics resources, and well-equipped high-speed and reacting-flow laboratories. The National Center for Supercomputing Applications houses the most performant GPU-based systems and …
- … contact, as identified by AFRL through recent past efforts. This includes the implementation of relevant algorithms and solvers for distributed GPU computing within the JAX Python library. Qualifications …