Employer
- Forschungszentrum Jülich
- NEW YORK UNIVERSITY ABU DHABI
- University of California, Merced
- Argonne
- Chongqing University
- DURHAM UNIVERSITY
- European Space Agency
- Hong Kong Polytechnic University
- Humboldt-Stiftung Foundation
- IMT
- Leibniz
- Linköping University
- Manchester Metropolitan University
- Monash University
- National University of Singapore
- Nature Careers
- Northeastern University
- The University of Chicago
- University of Massachusetts
- Western Norway University of Applied Sciences
- … optimization, with experience in adaptive routing and SDN technologies. Proficiency in programming languages such as Python and C/C++, and experience with parallel computing frameworks. Effective written and oral …
- … the emergence of edge computing, data storage will become more geo-distributed to account for performance or regulatory constraints. One challenge is to maintain an up-to-date view of available content in such a …
- You will be working in a larger research and development project in parallel, distributed, and heterogeneous computing. The project work can, under certain circumstances, be combined with an internal …
- … insights from related small models, and employ distributed optimisation and parallel computing to investigate knowledge transfer and interaction between small and large models; (b) assist in general …
- … frameworks (e.g. PyTorch, TensorFlow) and relevant libraries. Practical experience in scalable data processing, including the use of parallel computing, cloud platforms, and distributed systems for efficient …
- … working with high-performance computers (e.g., parallelizing and distributing code). Experience in distributed data management and workflow systems. Preferred competencies: the ability to work independently and …
- PhD degree in Computer Science, Physics, or a related field. Experience with parallel programming models. Strong programming skills in C/C++ and/or Python. Knowledge of distributed-memory programming with …
- Identify new applications for machine learning in science, engineering, and technology. Develop, implement, and refine ML techniques. Implement parallel ML training on high-performance computers. Engage in …
- … hardware architectures (multicore, GPUs, FPGAs, and distributed machines). To achieve the best performance (fastest execution) for a given Tiramisu program, many code optimizations must be applied, including vectorization (using hardware vector instructions) and parallelization (running loop iterations in parallel) …