235 results for "parallel-processing-bioinformatics" (multiple positions) at Oak Ridge National Laboratory
- …of computational scientists, computer scientists, experimentalists, and engineers/physicists conducting basic and applied research in support of the Laboratory's missions. Participate in the development of multi…
- …client deployments. Ensure the secure and effective operation of computing systems through compliance with ORNL classified procedures, IT Internal Operating Procedures, and Cyber Security best practices…
- …also include the following: pre-procurement planning with customers, pre-proposal conferences, facilitating concurrent reviews, leadership of subcontract negotiations, participation on process…
- …1) characterizing pellet fueling dynamics using multiple observables, 2) measuring and predicting the dynamic and cumulative impact on long-pulse particle balance and plasma performance, and 3) developing real-time…
- …Duties/Responsibilities: Conduct research on processes that arise from the confinement of high-temperature plasmas, for example in the context of radiative energy from fusion-relevant research devices such as tokamaks…
- …vision transformer or large vision AI model; expertise in high-performance computing; expertise in image and spatiotemporal data processing; expertise in federated learning on large computing clusters…
- …opportunity by fostering a respectful workplace – in how we treat one another, work together, and measure success. Basic Qualifications: a Ph.D. in electrical engineering or a related field; a minimum of two years…
- …ensuring the seamless commissioning and operation of testbed systems, enhancing productivity, and minimizing downtime. Major Duties/Responsibilities: Assembly & Maintenance: assemble, maintain, repair, and…
- …workplace – in how we treat one another, work together, and measure success. Basic Qualifications: a BS degree in computer science, computer engineering, information technology, information systems, science…
- …for Science @ Scale: pretraining, instruction tuning, continued pretraining, Mixture-of-Experts; distributed training/inference (FSDP, DeepSpeed, Megatron-LM, tensor/sequence parallelism); scalable evaluation…