Employer
- New York University Abu Dhabi
- University of Glasgow
- Blekinge Institute of Technology
- Australian National University (ANU)
- Auburn University
- Cornell University
- Delft University of Technology (TU Delft)
- Duke University
- ETH Zurich
- Harvard University
- IFREMER - Institut Français de Recherche pour l'Exploitation de la MER
- Lunds universitet
- Monash University
- University of Lund
- University of Texas at Dallas
- … optimized code written by expert programmers and can target different hardware architectures (multicore, GPUs, FPGAs, and distributed machines). In order to have the best performance (fastest execution) for a …
- … at one time. In non-stationary environments, on the other hand, the same algorithms cannot be applied, as the underlying data distributions change constantly and the same models are no longer valid. Hence, we need …
- … of the following subjects: scalable data management, systems for machine learning, distributed and parallel systems, or cloud-based systems. We are especially interested in researchers who build working systems and …
- … in deep learning at scale; familiarity with the “alphabet soup” of distributed computing (DP, TP, SP, CP, EP); experience with production environments, including Git-based workflows; experience working …
- IFREMER - Institut Français de Recherche pour l'Exploitation de la MER | Brest, Bretagne | France | 14 days ago
  … Research Framework Programme? Not funded by an EU programme. Reference Number: 2026-1852/1. Is the job related to a staff position within a Research Infrastructure? No. Offer Description: Deadline for applications …
- … Large‑scale optimization and machine learning: Stochastic and/or (non‑)convex optimization methods, first‑order methods, variance reduction, distributed and parallel optimization, federated learning …
- … including workload schedulers, storage systems, and distributed compute nodes. Applies analytical methods to evaluate system performance, identify bottlenecks, and implement corrective actions to improve …
- … tools (Open XDMoD), distributed and parallel file systems (CephFS, NFS, Lustre, BeeGFS), and virtualization platforms (OpenStack). Rewards and Benefits: This position is located in Ithaca, New York …