- Job Purpose: The primary responsibility of this role is to deliver on an industry innovation research project where you will be part of the research team to develop a Multimodal AI for Fire …
- … reasoning, perception–language grounding, and decision-making in robotic systems. The engineer will interface VLMs with navigation, mapping, and control pipelines to support end-to-end system experimentation …
- … develop learning-enabled and perception-driven robotic systems that operate in complex, unstructured environments, with particular focus on: 1) robot learning and embodied AI; 2) multimodal perception and …
- … we believe science can achieve its fullest potential. THE ROLE: During your internship you will work on a project in the Event-Driven Perception for Robotics (https://edpr.iit.it/) group, coordinated by …
- … environments. Vision–Language–Action Models: practical experience with multimodal models combining vision, language, and action for embodied agents, robotics, or autonomous driving applications. Perception and …
- Key Responsibilities: Develop and implement perception and control algorithms for robotic arms and embodied AI systems. Assist in integrating multimodal AI models (vision, language, force sensors) with …
- … into a mental image s(t) through perception. Then alternative futures s(t + n) and s′(t + n) can be obtained. Past events s(t − m) can be reenacted or changed to produce “what if” scenarios (s′) to learn …
- Candidates should demonstrate technical proficiency in one or more of the following areas: embodied AI and robot learning; vision–language–action (VLA) or multimodal AI modeling; perception, control …
- … research project investigating the computational and neural principles by which multimodal information is encoded and combined in human perception. The research will combine computational modelling with …
- … [twelve months] Duties: The appointees will assist the project leader in the research project “Towards multimodal seamless human-robot collaboration: few-shot perception and spatial skill learning with …