- …the emergence of edge computing, data storage will become more geo-distributed to account for performance or regulatory constraints. One challenge is to maintain an up-to-date view of available content in such a …
- …insights from related small models, and employ distributed optimisation and parallel computing to investigate knowledge transfer and interaction between small and large models; (b) assist in general …
- PhD degree in Computer Science, Physics, or a related field; experience with parallel programming models; strong programming skills in C/C++ and/or Python; knowledge of distributed-memory programming with …
- Post-doctoral position (M/F): Exascale port of a 3D sparse PIC simulation code for plasma modelling. …, C. (2023). Sparse-grid approach to accelerating the PIC method (PhD thesis, Toulouse 3; title translated from French). Participate in the Exascale porting of the 3D sparse-PIC code, including distributed parallelism …
- Identify new applications for machine learning in science, engineering, and technology; develop, implement, and refine ML techniques; implement parallel ML training on high-performance computers; engage in …
- …for a given Tiramisu program, many code optimizations should be applied. Optimizations include vectorization (using hardware vector instructions) and parallelization (running loop iterations in parallel) …
- …hardware architectures (multicore, GPUs, FPGAs, and distributed machines). In order to have the best performance (fastest execution) for a given Tiramisu program, many code optimizations should be applied …
- …into algorithms for the Lovász Local Lemma and related problems in distributed and parallel models. Research will primarily be conducted in collaboration with Dr. Davies-Peck, but successful applicants will also be …
- …You have experience in parallel computing, automatic performance tuning, and optimization of advanced applications on parallel and distributed systems. An excellent scientific track record, proven through …