34 Postdoctoral positions matching "PhD position in data modeling" at the University of Oxford, United Kingdom
- We are seeking a motivated and talented experimentalist for a full-time Postdoctoral Research Assistant in Modelling of Quantum Computing Control Systems within Professor Ares’ and Professor
- , and enabling data-driven improvements in patient care. You will have opportunities to apply foundation models, including large language models (LLMs), to real-world clinical data. You will work with well
- Institute). The position is fixed-term for 36 months and will provide opportunities to work on aircraft icing modelling and experimental campaigns. Ice crystal icing is one of the least well characterised
- Professor Chris Russell. This is an exciting opportunity for you to work at the cutting edge of AI, contributing to a major shift in how we understand and apply foundation models. The position is full-time
- on and defensive mechanisms for safe multi-agent systems, powered by LLMs and VLMs. Candidates should possess a PhD (or be near completion) in Machine Learning or a highly related discipline. You
- (LiBs). You will be responsible for: • Developing models and simulations of the electrode fabrication process, sensors, and actuators. • Developing a demonstrator of a soft sensing system that
- interpretation of atmospheric circulation in high-resolution reanalysis data, idealised model simulations and a state-of-the-art weather forecasting system. The post-holder will have the opportunity to teach
- –£46,913 per annum. This is a full-time, fixed-term position for 2 years. We are seeking an enthusiastic cardiovascular immunologist or an expert in immunology and/or vascular biology to join Professor
- especially suitable for someone with strong formal reasoning and data analysis skills who is considering progression to a PhD or further postdoctoral research in AI ethics, social choice theory
- with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly