- …hardware co-design of neural architectures, memory hierarchies, and communication systems; model compression and optimization (pruning, quantization, knowledge distillation) for edge deployment; FPGA- and ASIC…
- About the Project: An exciting PhD project on the effects of heat transfer on transitional compressible boundary layers will be carried out under the UK Hypersonics Doctoral Network, which has been…
- …Joerg Hoffmann at the University of Saarland. 2) 1-2 PhD students working with Prof. Hendrik Blockeel and/or Prof. Jesse Davis on the topic of developing novel approaches for learning, compressing, and…
- …(e.g., model compression/simplification and hardware-aware optimization). We are also interested in how resource efficiency interacts with broader sustainability aspects of machine learning, such as…
- The Rolls-Royce UltraFan has completely changed the architecture of the compression system. This has opened up the design space, meaning that new technologies that can improve performance and reduce fuel burn…
- …and interpretability, analogous to RAG (Retrieval-Augmented Generation) in LLMs; investigating methods for improving AI model sustainability, e.g. model compression techniques (such as quantization and…
- …and servers, such as gradient compression, asynchronous training, reduced synchronization frequency, semantic communication, and the design of new application- and transport-layer protocols; data management…
- …statistics. This PhD project falls under the collaboration between Research Thrust RT2 on physics-based models and Research Thrust RT3 on representation, compression, learning, and inference. For long-distance…
- …on constrained platforms using techniques such as model compression, quantization, and hardware-aware neural network design; investigating mechanisms that protect the integrity and reliability of deployed AI…
- …model pre-training and multimodal adaptation to architectures and compression for edge deployment, while targeting real-world validation in domains like HealthTech, smart industry, and autonomous mobility…