- frameworks to maximise compression ratios. We will use predictor models which estimate projections or slices, storing only the differences between the prediction and the original data. Because the errors are small and
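The prediction-based scheme this listing describes can be sketched minimally. The choice of predictor (previous value) and all names below are illustrative assumptions, not the project's actual design:

```python
import numpy as np

# Minimal sketch of predictive (residual) compression, assuming a simple
# "previous value" predictor; predictor and names are illustrative only.

def encode(data: np.ndarray) -> tuple[float, np.ndarray]:
    """Keep the first sample plus the residuals against the prediction."""
    prediction = data[:-1]             # predict x[i] from x[i-1]
    residuals = data[1:] - prediction  # small when the data varies smoothly
    return float(data[0]), residuals

def decode(first: float, residuals: np.ndarray) -> np.ndarray:
    out = np.empty(residuals.size + 1)
    out[0] = first
    for i, r in enumerate(residuals, start=1):
        out[i] = out[i - 1] + r        # prediction + stored difference
    return out

x = np.linspace(0.0, 1.0, 8) ** 2      # smooth test signal
first, res = encode(x)
assert np.allclose(decode(first, res), x)  # lossless round trip
```

Because the residuals are much smaller than the raw values, they can be stored with fewer bits or quantized within an error bound, which is where the compression gain comes from.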
- Inria, the French national research institute for the digital sciences | Palaiseau, Île-de-France | France | 30 days ago
  split over many computing nodes. An important consideration in our context is that, unlike classical data streams, the data is not i.i.d. on the nodes, but stems from the domain partitioning imposed by
- to the Wallenberg AI, Autonomous Systems and Software Program (WASP). This enables interaction with researchers from different research fields and access to various scientific and technical expertise. Data-driven
- We are recruiting highly motivated and enthusiastic doctoral students in the field of synthesis, characterisation and/or modelling of next generation materials for hydrogen storage and compression
- encompasses model compression and optimization for edge deployment on UAV-mounted processors to support real-time inference. The candidate will collaborate with industrial partners for real-world data
- (e.g., model compression/simplification and hardware-aware optimization). We are also interested in how resource-efficiency interacts with broader sustainability aspects of machine learning such as
- and interpretability, analogous to RAG (Retrieval-Augmented Generation) in LLMs. Investigating methods for improving AI model sustainability, e.g. model compression techniques (such as quantization and
- on constrained platforms using techniques such as model compression, quantization, and hardware-aware neural network design. Investigating mechanisms that protect the integrity and reliability of deployed AI
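As one concrete instance of the compression and quantization techniques several of these listings mention, here is a hedged sketch of symmetric post-training int8 weight quantization. It is a generic illustration, not the method of any specific project:

```python
import numpy as np

# Hedged sketch of symmetric post-training weight quantization to int8;
# a generic illustration, not any listed project's actual pipeline.

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map weights to int8 so the largest magnitude lands on +/-127."""
    scale = max(float(np.max(np.abs(w))) / 127.0, 1e-12)
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(dequantize(q, s) - w))
assert err <= s / 2 + 1e-6  # rounding error is at most half a step
```

Storing int8 instead of float32 gives a 4x size reduction; the per-tensor scale is the only extra metadata, which is why this scheme is a common baseline for edge deployment.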