- …train, and evaluate AI systems that plan, reason, and take actions to accelerate discovery across domains (materials, chemistry, climate, fusion, biology, and more). NCCS operates the Frontier exascale…
- …batch schedulers (e.g., SLURM, PBS, LSF) and parallel file systems (Lustre, GPFS/Spectrum Scale). Experience implementing and managing automation and configuration management frameworks (Ansible, Puppet…
- …systems. Expertise with batch schedulers (SLURM, PBS, LSF) and parallel file systems (Lustre, GPFS/Spectrum Scale). Proven ability to lead technical projects from concept through implementation, balancing…
- …Demonstrated experience developing and running computational tools for high-performance computing environments, including distributed parallelism for GPUs. Demonstrated experience in common scientific programming…
- …user support. Familiarity with scientific software, Linux systems, and parallel computing frameworks. Special Requirements: Visa sponsorship is not available for this position. This position requires…
- …leading peer-reviewed journals and conferences. Researching and developing parallel/scalable uncertainty visualization algorithms using HPC resources. Collaboration with domain scientists for demonstration…
- …strategic management and strict adherence to security protocols. We are looking for candidates with extensive experience in either classified HPC data center operations, architecture, parallel computing…
- …with environment, safety, health and quality program requirements. Maintain strong dedication to the implementation and perpetuation of values and ethics. Deliver ORNL’s mission by aligning behaviors…
- …learning algorithms in PyTorch. Expertise in object-oriented programming and scripting languages. Parallel algorithm and software development using the message-passing interface (MPI), particularly as…