"A picture is worth a thousand words"... or so the saying goes. How much information can we extract from an image of an insect on a flower? What species is the insect? What species is the flower? Where was the photograph taken? And at what time of the year? What time of the day? What was the...
-
responsible for overseeing the development of a new function with the vision to create a modern content engine room that drives engagement internationally. The appointee will lead a team of specialists
-
optimisation of our enterprise and research computing infrastructure. In this pivotal role, you’ll drive excellence across storage, backup, and virtualisation systems, ensuring resilience, scalability, and
-
vision and pattern recognition methods will be utilised to automate the process of fingertip detection. These methods will be trained to learn patterns from fingertip features and detect them using object
-
of providing innovative IT solutions for Monash University. We are spearheading significant technological, service, and organisational reforms to create a unified IT function that aligns with our vision
-
-reality application development; modern AI techniques (such as computer vision or large multimodal language models); and/or human-computer interaction. Our industry partners are developing software
-
PhD Scholarship – Feasibility, Acceptability and Utility of a Person-Centred Behaviour Change Program to Slow Cognitive Decline in Older Adults. Job No.: 677725. Location: Clayton campus. Employment
-
the area of end-to-end modular autonomous driving using computer vision and deep learning methods. This includes developing an efficient and interpretable image processing, vision-based perception and
-
analysis, contextual analysis, audio feature extraction, and machine learning models to identify and assess potentially dangerous content. Similarly, computer vision models are implemented to analyse images
-
accepted by the intended users due to their limited capabilities to sustain long-term interactions. In this project we propose to develop compositional vision-language models for social robots, enabling them