143 high-performance computing postdoctoral research jobs at University of Oxford, United Kingdom
- subject. You should have a high level of competence in biochemistry and structural biology, as well as relevant experience demonstrated by first-author publications in high-profile journals. The ideal
- to less experienced members of the research group, including postdocs, research assistants, technicians, and PhD and project students. In this post you will manage your own academic research and
- Raman’s cardiovascular research team. This role is embedded within a cutting-edge programme focused on integrating high-dimensional datasets, including advanced cardiac MRI (oxygen-sensitive, metabolic, and
- of a wider programme of work to establish that membraneless organelles, biological liquid droplets, are effectively regions of organic solvent suspended inside cells, and that the properties of each
- 2022, PMID: 36462505). The research will be conducted in a friendly and supportive atmosphere, with access to outstanding facilities and within a vibrant postdoc community. The applicant should hold, or
- mentoring junior researchers and collaborating with faculty, DPhil students, and postdocs across engineering, computer science, government, and law disciplines. The role is full time, fixed term for 2 years
- for applications is 12:00 on Monday 7th July 2025. Interviews will be held as soon as possible thereafter. At the Dunn School we are committed to supporting the professional and career development of our postdocs
- on single-agent settings. We are seeking a highly motivated postdoc to conduct research into this fast-moving area. Directions may include investigating quality evaluation methods for multi-agent systems
- Machine Learning, Statistics, Computer Science, or a closely related discipline. They will demonstrate an ability to publish, including the ability to produce high-quality academic writing. They will have the
- with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly