to begin September 1, 2025. We will consider strong candidates in any research area but will prioritize Distributed and Parallel Computing. A PhD in computer science or a related area is required.

management, cache optimization, and vectorization techniques. Strong understanding of algorithms and data structures, especially those suitable for parallel processing and distributed computing. Understanding …
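
The cache-optimization and vectorization skills this listing names are easier to picture with a concrete example. The following is a minimal editorial sketch, not taken from any posting: a cache-blocked (tiled) matrix transpose in NumPy, where the tile size and matrix shape are illustrative assumptions.

```python
import numpy as np

def transpose_blocked(a: np.ndarray, block: int = 64) -> np.ndarray:
    """Tiled transpose: each block x block tile is read and written while
    it still fits in cache, instead of striding across the whole matrix."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            # NumPy performs a vectorized copy of one cache-friendly tile.
            out[j:j + block, i:i + block] = a[i:i + block, j:j + block].T
    return out

a = np.arange(512 * 512, dtype=np.float64).reshape(512, 512)
assert np.array_equal(transpose_blocked(a), a.T)
```

Tiling trades one long, cache-hostile stride for many short, cache-resident copies; the same idea underlies blocked matrix multiplication.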

The University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | about 9 hours ago
and Experience: Distributed parallel training and parameter-efficient tuning. Familiarity with multi-modal foundation models, HITL techniques, and prompt engineering. Experience with LLM fine-tuning …
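
Parameter-efficient tuning, which this listing asks about, usually means freezing the pretrained weights and training only a small low-rank add-on. Here is a minimal LoRA-style sketch in PyTorch, offered purely as an illustration; the layer sizes, rank, scale, and learning rate are assumptions, not details from the posting.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update: W·x + scale·B·A·x."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(512, 512))
trainable = [p for p in layer.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-3)  # only A and B are updated
loss = layer(torch.randn(8, 512)).pow(2).mean()
loss.backward()
opt.step()
```

Because B starts at zero, the adapter initially leaves the model's behavior unchanged, and only a tiny fraction of the parameters ever receives gradients.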

programming; Experience programming distributed systems; Experience with parallel and distributed File Systems (e.g., Lustre, GPFS, Ceph) development. Advanced experience with high-performance computing and/or …
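
The parallel file systems this listing names (Lustre, GPFS, Ceph) are built for the pattern where every MPI rank writes a disjoint slice of one shared file. Below is a minimal mpi4py sketch of that pattern, assuming a working MPI installation; the file name and chunk size are arbitrary choices for illustration.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank fills a buffer with its own rank id and writes it at a
# non-overlapping byte offset in a single shared file.
chunk = np.full(1024, rank, dtype=np.int32)
fh = MPI.File.Open(comm, "shared.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * chunk.nbytes, chunk)  # collective write
fh.Close()
```

Launched with, e.g., `mpiexec -n 4 python write_shared.py`, the four ranks produce one 16 KiB file with no coordination beyond the byte offsets.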

…, AMD uProf, or Omniperf. Debugging experience with distributed-memory parallel applications. Experience with containers (Docker, Podman, Shifter, or similar) and modern software practices such as Git

reports to the Director of Advanced Research Computing, Security, and Information Management (ARCSIM) and will help support the needs of the UMaine research community and its collaborators by enabling and …

Mathematics, or a related field, awarded within the last five years. Programming experience in one or more of Python, C++, Fortran, or Julia. Knowledge of high-performance and parallel computing. Experience …