Employer
- SUNY University at Buffalo
- Brookhaven Lab
- California Institute of Technology
- Cornell University
- Duke University
- Harvard University
- University of California
- University of California Davis
- University of California, San Diego
- University of Colorado
- University of Pennsylvania
- University of Utah
- Brookhaven National Laboratory
- Career Education Corporation
- Colorado State University
- Colorado Technical University
- George Mason University
- Hofstra University
- Indiana University
- Lawrence Berkeley National Laboratory
- Meta/Facebook
- NIST
- Rutgers University
- The University of Arizona
- University of Delaware
- University of Florida
- University of Massachusetts Boston
- University of Oklahoma
- University of Wisconsin-Madison
- Yale University
Field
- services, distributed web authentication, LDAP, computing account management, and other similar technologies, as well as auditing software, centralized antivirus management, intrusion detection systems…
- …Workflows: Collaborate with computational scientists to optimize parallel computing workflows and port research tools to high-end computing platforms. Automate Developer Workflows: Design tools and frameworks…
- …) and parallel computing, with a focus on cost-efficient and scalable model deployment. Skilled in working with medium-to-large-scale multicore and heterogeneous (CPU + GPU) clusters. Excellent verbal and…
- Posted: 14-May-25 · Location: Philadelphia, Pennsylvania · Type: Full-time · Categories: Academic/Faculty; Computer/Information…
- …simulation, AI-based detector design optimization, streaming computing model development, production, distributed computing and workflow management, software infrastructure, particle ID, tracking…
- …with the architecture and performance characteristics of distributed computing and data handling systems. Extensive knowledge in computer science or a related field, demonstrated through education or…
- …exploring them. Basic data preprocessing, feature engineering, and model evaluation, or a strong willingness to gain hands-on experience. Eagerness to learn HPC concepts, including parallel computing…
- …based on MPI. Experience working with the architecture and performance characteristics of distributed computing and data handling systems. Extensive knowledge in computer science or a related field…
- …clusters, including CPU and GPU architectures; proficiency with job schedulers (e.g., Slurm); knowledge of parallel and distributed computing principles; understanding of data security and compliance…
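Several of the postings above ask for proficiency with job schedulers such as Slurm and with MPI-based parallel computing. As a rough illustration only (the partition name, module name, and executable here are hypothetical and site-specific), a minimal Slurm batch script submitting an MPI job might look like:

```shell
#!/bin/bash
# Hypothetical sbatch script: requests 2 nodes x 4 tasks for 10 minutes.
#SBATCH --job-name=mpi-demo
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00
#SBATCH --partition=compute      # partition names vary by cluster

module load openmpi              # environment-module name varies by site
srun ./my_mpi_app                # srun launches one MPI rank per task
```

Submitted with `sbatch script.sh`; `squeue -u $USER` then shows the queued job. This is a sketch of the scheduler workflow the listings reference, not a prescription for any particular cluster.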