-
… this goal, we expect to explore the following directions in depth: 1) investigate privacy-enhancing techniques (e.g., differential privacy and/or secure multi-party computation) and design …
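As a hedged illustration of the differential-privacy direction mentioned in this listing, here is a minimal sketch of the Laplace mechanism applied to a counting query, assuming L1 sensitivity 1; the function names, data, and epsilon value are illustrative, not part of the project.

```python
import numpy as np

def dp_count(values, predicate, epsilon: float) -> float:
    """epsilon-DP count query: a counting query has L1 sensitivity 1,
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a noisy count of ages over 40 at epsilon = 0.5.
ages = [23, 45, 31, 62, 54, 29, 41]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))
```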
-
… I am seeking PhD candidates interested in designing Learning Analytics or similar reflection interfaces that automatically highlight design elements of data visualisations and generate …
-
… Strait Islander peoples. This ground-breaking course is a transformational leadership program for Indigenous Australians, designed to strengthen Australia’s Indigenous workforce in public, private and …
-
… , Melbourne. We are seeking PhD candidates interested in developing methods to support the formative assessment and improvement of collocated teamwork by making multimodal activity traces visible and available …
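As an assumed illustration of what "making multimodal activity traces visible" can mean in practice, a common first step is aggregating timestamped speech events into per-participant speaking time; the trace format below is a hypothetical one, not this project's actual data model.

```python
from collections import defaultdict

# Hypothetical trace format: (participant, start_seconds, end_seconds).
speech_events = [
    ("A", 0.0, 4.2), ("B", 4.5, 9.0), ("A", 9.2, 11.0), ("C", 11.5, 20.0),
]

def speaking_time(events):
    """Total speaking time per participant: a simple visible trace summary."""
    totals = defaultdict(float)
    for who, start, end in events:
        totals[who] += end - start
    return dict(totals)

print(speaking_time(speech_events))  # {'A': 6.0, 'B': 4.5, 'C': 8.5}
```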
-
… tools that would allow individuals to understand their personal risk at any point in their lives. We plan to offer appropriate health-care strategies that combine guidance and maintenance to provide a …
-
… older than 18 without revealing any further personal details. PETs are powerful tools with the potential to solve major privacy issues encountered today. They can help to minimise …
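A minimal sketch of the idea behind proving an "over 18" predicate without disclosing a birthdate, under the assumption of a trusted issuer that attests to the predicate rather than the raw attribute; real PETs would use anonymous credentials or zero-knowledge proofs, and every name here is illustrative.

```python
import hmac, hashlib

ISSUER_KEY = b"issuer-secret"  # held by the credential issuer

def issue_credential(user_id: str, over_18: bool) -> bytes:
    """Issuer attests to the predicate only, never to the birthdate."""
    claim = f"{user_id}:over18={over_18}".encode()
    return hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()

def verify_credential(user_id: str, tag: bytes) -> bool:
    """Verifier learns a single bit (over 18), nothing more."""
    expected = issue_credential(user_id, True)
    return hmac.compare_digest(tag, expected)

tag = issue_credential("alice", over_18=True)
print(verify_credential("alice", tag))  # True; birthdate never disclosed
```

Note that this symmetric sketch requires the verifier to share the issuer key; deployed systems replace the HMAC with digital signatures or zero-knowledge proofs so that anyone can verify.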
-
… , due to the difficulty of designing models that can condition on the context, coupled with the difficulty of creating workable abstractions of the context. In our work, we have made progress on using …
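One workable, if crude, abstraction of context, offered purely as an assumed illustration rather than this project's method, is a fixed-length feature vector concatenated to the model input so that a standard predictor can condition on it.

```python
import numpy as np

def featurise_context(context_tokens, vocab):
    """Crude context abstraction: a bag-of-words count vector."""
    vec = np.zeros(len(vocab))
    for tok in context_tokens:
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

def predict(x, context_vec, w):
    """A linear model that conditions on context by concatenation."""
    z = np.concatenate([x, context_vec])
    return 1.0 / (1.0 + np.exp(-w @ z))  # sigmoid score

vocab = {"meeting": 0, "deadline": 1, "holiday": 2}
ctx = featurise_context(["deadline", "meeting", "deadline"], vocab)
x = np.array([0.3, -1.2])
w = np.random.default_rng(0).normal(size=x.size + len(vocab))
print(predict(x, ctx, w))
```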
-
… application, AML has attracted considerable attention in recent years. However, the underlying theoretical foundation of AML remains unclear, as does how to design effective and efficient attack and …
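To make the attack side of adversarial machine learning concrete, here is a minimal sketch of the fast gradient sign method (FGSM) against a logistic model, with the input gradient derived by hand; the model, weights, and epsilon are illustrative assumptions, not the project's setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: for logistic loss, the gradient w.r.t. the input is
    (p - y) * w, so the attack perturbs x by eps * sign of that."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.1
x, y = np.array([0.5, 0.5]), 1.0   # correctly classified positive input
x_adv = fgsm(x, y, w, b, eps=0.3)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # confidence drops
```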
-
… designing and implementing new algorithms that produce visual aids to help people reason with causal Bayesian networks, as well as planning and conducting exploratory usability studies to assess …
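For readers unfamiliar with causal Bayesian networks, a minimal sketch of the kind of reasoning such visual aids must support, on an assumed two-node network Rain → WetGrass, contrasting observation with the do() intervention; the probabilities are made up for illustration.

```python
# Two-node causal Bayesian network: Rain -> WetGrass.
p_rain = 0.3
p_wet_given_rain = {1: 0.9, 0: 0.2}

# Observing wet grass: Bayes' rule raises the belief in rain.
p_wet = p_wet_given_rain[1] * p_rain + p_wet_given_rain[0] * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain[1] * p_rain / p_wet

# Intervening do(WetGrass=1) (e.g., hosing the lawn) cuts the incoming
# edge, so belief in rain is unchanged: the observe/intervene distinction.
p_rain_do_wet = p_rain

print(round(p_rain_given_wet, 3))  # ~0.659
print(p_rain_do_wet)               # 0.3
```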
-
… privacy-enhancing techniques such as secure multi-party computation, homomorphic encryption, differential privacy, and trusted execution environments to design algorithms and protocols that secure ML models within …
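As one concrete example of the secure multi-party computation ingredient named in this listing, here is a minimal sketch of 2-of-2 additive secret sharing over a prime field, assuming honest-but-curious parties; the field size and values are illustrative choices.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime field, illustrative choice

def share(x: int):
    """Split x into two additive shares; each share alone reveals nothing."""
    s1 = secrets.randbelow(P)
    s2 = (x - s1) % P
    return s1, s2

def reconstruct(s1: int, s2: int) -> int:
    return (s1 + s2) % P

# Shares add linearly, so each party can sum its own shares locally
# (e.g., shares of model weights) and only the final sum is revealed.
a1, a2 = share(42)
b1, b2 = share(100)
print(reconstruct((a1 + b1) % P, (a2 + b2) % P))  # 142
```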