Bayesian Uncertainty Estimation for Robust Single- and Multi-View Learning in CV and NLP

Location: Melbourne, Victoria

Background and Motivation

Modern deep learning models have achieved remarkable success in computer vision and natural language processing. However, they typically produce overconfident predictions and lack reliable mechanisms to quantify uncertainty. This limitation becomes particularly problematic in high-stakes applications, such as healthcare diagnosis, autonomous systems, and scientific discovery.

Bayesian approaches provide a principled framework for modeling uncertainty by capturing posterior distributions over model parameters or predictions. Despite recent progress in approximate Bayesian deep learning (e.g., Monte Carlo dropout, deep ensembles, Laplace approximations, and variational inference), several challenges remain (a minimal Monte Carlo dropout sketch follows the list below):

  • Scalability: Many Bayesian inference methods are computationally expensive for modern large models.

  • Incomplete Uncertainty Modeling: Most methods focus on single-modal data and fail to account for uncertainty arising from multi-view or multimodal interactions.

  • Distribution Shifts and Missing Modalities: In real-world settings, modalities may be missing or corrupted, making uncertainty estimation unreliable.

  • Calibration Across Modalities: Existing models often produce poorly calibrated uncertainty when integrating multiple modalities.

  • Decision-Making Under Uncertainty: Current frameworks rarely translate uncertainty estimates into robust downstream decisions.

Addressing these issues is critical for building trustworthy multimodal AI systems.
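
As a concrete illustration of one of the approximate methods named above, the following minimal sketch applies Monte Carlo dropout to a toy classifier: dropout stays active at test time and the predictive distribution is averaged over stochastic forward passes, with predictive entropy as a simple uncertainty score. The architecture, layer sizes, and names here are illustrative placeholders, not part of this project.

    import torch
    import torch.nn as nn

    # Monte Carlo dropout sketch: keep dropout stochastic at inference
    # time and average softmax outputs over several forward passes.
    class MCDropoutClassifier(nn.Module):
        def __init__(self, in_dim=32, hidden=64, n_classes=10, p=0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
                nn.Linear(hidden, n_classes),
            )

        def forward(self, x):
            return self.net(x)

    @torch.no_grad()
    def mc_dropout_predict(model, x, n_samples=50):
        model.train()  # keep dropout active during prediction
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, n_classes)
        mean = probs.mean(dim=0)
        # Predictive entropy of the averaged distribution as a
        # simple total-uncertainty score per input.
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
        return mean, entropy

    model = MCDropoutClassifier()
    mean_probs, uncertainty = mc_dropout_predict(model, torch.randn(8, 32))

Deep ensembles follow the same predict-then-average pattern, with the dropout samples replaced by independently trained networks.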

Research Objectives

The goal of this PhD project is to develop scalable Bayesian uncertainty estimation frameworks for single- and multi-view learning that are robust under distribution shift and missing modalities.

The key objectives include:

  • Develop scalable Bayesian deep learning methods for uncertainty estimation in modern neural architectures.

  • Design principled uncertainty modeling frameworks for multi-view/multimodal learning.

  • Model uncertainty propagation across modalities in fusion architectures (see the fusion sketch after this list).

  • Develop robust learning methods under missing modalities and distribution shift.

  • Design uncertainty-aware decision frameworks for downstream tasks.
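
As a hedged illustration of what uncertainty propagation across modalities can mean in the simplest case, the sketch below fuses per-view Gaussian predictions by precision weighting (a product-of-experts combination), so a noisier view automatically contributes less. The two-view setup and all names are illustrative assumptions, not the project's eventual method.

    import torch

    # Product-of-experts fusion of per-view Gaussian predictions
    # N(mu_i, var_i): the fused precision is the sum of the view
    # precisions, and the fused mean is precision-weighted.
    def fuse_gaussian_views(mus, variances):
        precisions = [1.0 / v for v in variances]
        fused_var = 1.0 / sum(precisions)
        fused_mu = fused_var * sum(p * m for p, m in zip(precisions, mus))
        return fused_mu, fused_var

    # Two hypothetical views predicting the same scalar target:
    mu_img, var_img = torch.tensor(2.0), torch.tensor(0.1)  # confident view
    mu_txt, var_txt = torch.tensor(3.0), torch.tensor(1.0)  # uncertain view
    mu, var = fuse_gaussian_views([mu_img, mu_txt], [var_img, var_txt])
    # mu ~= 2.09 is pulled toward the confident view, and
    # var ~= 0.09 is smaller than either input variance.

Under this scheme a missing modality corresponds to infinite variance (zero precision), so it simply drops out of both sums, which hints at one route to robustness under missing views.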

Expected Contributions

This PhD project is expected to contribute:

  • Scalable Bayesian uncertainty estimation methods for deep neural networks.

  • A principled Bayesian framework for multimodal uncertainty modeling.

  • Robust learning algorithms under missing modalities and distribution shifts.

  • New uncertainty-aware decision frameworks.

  • Open-source toolkits for multimodal uncertainty estimation.

Expected Outcomes

Academic outputs may include publications in:

  • NeurIPS

  • ICLR

  • ICML

  • CVPR / ICCV

  • ACL / EMNLP

  • IEEE TPAMI / JMLR

The project will also produce:

  • Open-source implementations.

  • Benchmark datasets for multimodal uncertainty estimation.


