Employer
- New York University Abu Dhabi
- University of Glasgow
- Blekinge Institute of Technology
- Australian National University (ANU)
- Auburn University
- Centrale Lille Institut
- Cornell University
- Delft University of Technology (TU Delft)
- Duke University
- ETH Zurich
- Harvard University
- Helmholtz-Zentrum Hereon
- IFREMER - Institut Français de Recherche pour l'Exploitation de la MER
- Inria, the French national research institute for the digital sciences
- Lund University
- Macquarie University
- Monash University
- University of Oslo
- University of Texas at Dallas
- Université Côte d'Azur
- …the bubble size and spatial distribution, making it possible to induce and study different flow regimes (from homogeneous to highly heterogeneous) and to observe the transitions between them. These controlled…
- …for example: large-scale optimization and machine learning: stochastic and/or (non-)convex optimization methods, first-order methods, variance reduction, distributed and parallel optimization, federated learning…
- …including workload schedulers, storage systems, and distributed compute nodes. Applies analytical methods to evaluate system performance, identify bottlenecks, and implement corrective actions to improve…
- …tools (Open XDMoD), distributed and parallel file systems (CephFS, NFS, Lustre, BeeGFS), and virtualization platforms (OpenStack). Rewards and benefits: this position is located in Ithaca, New York…
- …hardware architectures (multicore, GPUs, FPGAs, and distributed machines). To obtain the best performance (fastest execution) for a given Tiramisu program, many code optimizations must be applied…
- …algorithms for parallel/distributed AI/ML; hardware-aware and resource-efficient partitioning for parallel/distributed AI/ML; optimization of process-to-process communication in parallel/distributed AI/ML…
- …modern high-performance computation facilities and parallel computing clusters (CPU and GPU). Excellent publication record and demonstrated conference presentation skills. Demonstrated ability to operate…
- …of analytical reports to achieve client, program, and business objectives for resource optimization. Work closely with internal technology teams and vendors to deliver tools and system solutions. Lead the…
- …conduct world-class applied research. We change and make a difference. Do you want to become one of us? Work description: the PhD positions are part of the research portfolio within Mechanical Engineering…