- … on computer vision. The role involves developing and advancing novel algorithms for emerging challenges in computer vision, including continual learning and few-shot learning. The candidate is also expected …
- … learning-based computer vision algorithms and software for object detection, classification, and segmentation. Key Responsibilities: Participate in and manage the research project together with the PI and Co-PI …
- … advance research in computer vision, machine learning, and/or robotics for the digitalization, monitoring, and automation of civil infrastructure. The role will focus on developing innovative methodologies …
- … Disease Programme (SPARKLE) - Theme 5: Neurotech Intervention, we establish a sub-project on “Human Gait Detection via Acoustic-enabled Footstep Tracking” under NTU-CCDS. In line with NTU’s vision and …
- … record of publications in Computer Vision, Natural Language Processing, or Multimodal AI research. Excellent programming skills and familiarity with Python and PyTorch. Excellent mathematical literacy and …
- … This role will contribute to NTU’s mission of driving transformative research in artificial intelligence and robotics by developing force-integrated Vision-Language-Action models that enable seamless human …
- … computer vision and machine learning. To produce research reports and/or publications as required by the funding body or for dissemination to the wider academic community. To provide guidance and support to …
- … Engineering, Mechatronics, Computer Science, etc. Strong background in AI, Vision-Language Models, end-to-end autonomous driving, deep learning, computer vision, robotics, and automation. Candidates having …
- … businesses in their sustainability transition. We are looking for a motivated, entrepreneurial researcher who is not only technically strong but also passionate about developing a long-term research vision …
- … (Kubernetes), serverless computing, and REST API development. Proficient in Python, with basic experience in machine learning or computer vision libraries; familiarity with Vision-Language Models (e.g., CLIP …