16 machine-learning Fellowship positions at Hong Kong Polytechnic University in Hong Kong
- and Computer Engineering or a related discipline or an equivalent qualification and must have no more than five years of post-qualification experience at the time of application, including one to two
- the University and Boeing research agreement. They will be required to: (a) develop machine learning-based image processing algorithms for surface condition recognition, defect detection, and digital feature
- and energy materials. Preference will be given to those with knowledge of computer programming, AI and/or machine learning. Applicants are invited to contact Prof. Jianguo Lin at telephone number 2766
- challenge issues, using advanced machine learning models and necessary techniques; (d) evaluate and validate the performance of proposed methods and algorithms through theoretical analysis; (e) maintain
- -inspired learning algorithms for efficient, robust and scalable pattern recognition; (b) assist in general management of the project; and (c) perform any other duties as assigned by the project leader
- or an equivalent qualification. For all posts, applicants should also: (a) have solid experience in electric machines, finite element methods and theory of electromagnetic fields; and (b) be able to complete
- the research project - “Advancing generative AI algorithms for haptic rendering: bridging human-computer and human-robot interactions”. Qualifications: Applicants should have a doctoral degree or an equivalent
- extensive experience in conducting research; (c) have good computer literacy, including MS Excel and Chinese word processing; (d) have experience in writing research papers and proposals; (e) have a
- development platforms capable of ingesting, curating, and continuously learning from multi-center hepatocellular carcinoma datasets; (c) create innovative algorithms/systems based on the hepatocellular
- pre-training, post-training and reinforcement learning for Large Language Models (LLMs); (c) lead the pre-training of both Dense and MoE LLMs, optimising for performance, scalability, model structure and