- …experience writing in C++ and Python, using libraries including CUDA, ROS, ROS2, PyTorch, etc. You will be able to create and debug large software projects, and can write interfaces to integrate sensors…
- Massachusetts Institute of Technology (MIT) | Cambridge, Massachusetts | United States | 29 days ago
  …research experience; strong proficiency in Python and C++, with deep familiarity with AI/ML frameworks (PyTorch, TensorFlow, JAX); hands-on experience with GPU programming models (e.g., CUDA, HIP, or OpenCL)…
- Massachusetts Institute of Technology | Cambridge, Massachusetts | United States | about 1 month ago
  …models (e.g., CUDA, HIP, or OpenCL); experience with performance profiling and benchmarking tools on Linux-based High-Performance Computing systems; excellent communication skills; ability to collaborate…
- …Cloud Service Deployment). Desired: Experience with High-Performance Computing or GPU programming (CUDA). Specialized knowledge of Neural Rendering (NeRF/3DGS) or Satellite Photogrammetry. Demonstrated…
- …at least two related publications. Proficiency with core HPC programming languages and paradigms, including C/C++, Fortran, and MPI. Proficiency in GPU-accelerated HPC programming, with emphasis on CUDA and…
- …Python, with experience in modern software development environments (Linux, Docker, Cloud Service Deployment). Desired: Experience with High-Performance Computing or GPU programming (CUDA). Specialized…
- …contributions, and addressing bugs that arise as the platform is used in active research settings. The platform is built in C++ with CUDA-based computation running on NVIDIA GPUs, and is being developed both…
- …at the Secret level or higher and may be subject to a government background investigation to upgrade clearance eligibility, if required. Preferred skills/experience areas include: Python, C++, CUDA, time-series…
- …potential use of Rust, CUDA, C/C++, Bash/Zsh, and Haskell while collaborating closely with students, professionals, and external partners. The role offers significant opportunities to contribute to system…
- …Large Language Model (LLM) GPU cluster to ensure stable and reliable operation of training tasks; (b) handle GPU node failures, IB network anomalies, CUDA/NCCL errors, and Kubernetes scheduling failures; perform…