Ensuring the Safe Deployment of Generative AI for Healthcare

Location: Bedford Park, South Australia
Deadline: 31 May 2025

Are you passionate about AI and want to make a difference in healthcare? Join our team to undertake a groundbreaking PhD project investigating how generative AI tools (such as ChatGPT and Gemini) can be safely used in medicine — and how we can prevent their misuse to spread health disinformation or put patients at risk.

You’ll contribute to one of the world’s leading programs studying large language models in health, supported by a multidisciplinary team of data scientists, clinicians, ethicists, and consumer advisors. Our team’s past work has been published in top journals (e.g. BMJ, JAMA Internal Medicine) and has influenced policy debates on AI safety.

Your project could include:

  • Auditing AI tools for health misinformation or unsafe behaviour
  • Designing risk mitigation and transparency frameworks
  • Building tools to detect or prevent unsafe AI outputs
  • Exploring regulatory gaps and proposing solutions

This is an ideal opportunity for candidates with interests in machine learning, public health, ethics, regulation, or safety-critical systems.
