-
on computer vision. The role involves developing and advancing novel algorithms for emerging challenges in computer vision, including continual learning and few-shot learning. The candidate is also expected
-
advance research in computer vision, machine learning, and/or robotics for the digitalization, monitoring, and automation of civil infrastructure. The role will focus on developing innovative methodologies
-
Disease Programme (SPARKLE) - Theme 5: Neurotech Intervention, we establish a sub-project on “Human Gait Detection via Acoustic-enabled Footstep Tracking” under NTU-CCDS. In line with NTU’s vision and
-
record of publications in Computer Vision, Natural Language Processing, or Multimodal AI research. Excellent programming skills and familiarity with Python and PyTorch. Excellent mathematical literacy and
-
. This role will contribute to NTU’s mission of driving transformative research in artificial intelligence and robotics by developing force-integrated Vision-Language-Action models that enable seamless human
-
computer vision and machine learning. To produce research reports and/or publications as required by the funding body or for dissemination to the wider academic community. To provide guidance and support to
-
Engineering, Mechatronics, Computer Science, etc. Strong background in AI, vision-language models, end-to-end autonomous driving, deep learning, computer vision, robotics, and automation. Candidates having
-
who place patients at the centre of exemplary care. Guided by its vision to redefine medicine and transform healthcare, LKCMedicine advances research through impactful discoveries with national and
-
foundation models for general manipulation based on vision-language-action models. Develop a learning from human demonstration (LfD) framework for vision-language-action models. Develop multi-embodiment
-
: PhD degree in Computer Science, Electrical Engineering, or a closely related field. Strong research background in computer vision and deep learning. Solid experience with multimodal learning, segmentation