-
learned through publications in international scientific conferences and in at least three peer-reviewed journals. Participate actively in the EDISON project in direct connection with SINTEF and industry
-
SFI FAST: PhD position in Microstructure/texture evolution during extrusion of scrap-based Aluminium
(such as machine learning techniques). Personal characteristics: In the evaluation of which candidate is best qualified for the PhD position, emphasis will be placed on education, experience and
-
Fields: Computer science » Computer systems; Computer science » Programming; Technology » Communication technology; Technology » Telecommunications technology. Researcher Profile: First Stage Researcher (R1). Position: PhD
-
competence-building environment (training researchers, contributing to university courses, and industry upskilling) and a consortium including SimulaMet, SINTEF Digital, NTNU, Aalborg University, Okinawa
-
Digital. The research focuses on advanced signal analysis and machine learning methods that enable robust operation and service continuity in future wireless networks under challenging radio conditions. As
-
Norwegian University of Science and Technology (NTNU) for general criteria for the position. Preferred selection criteria: experience with machine learning or other relevant AI technologies; Scandinavian language skills; previous experience from industry or research in engineer-to-order manufacturing. This can be proven by the attendance
-
the semantic foundation that enables AI systems to reason more coherently about ship designs, reduces ambiguity in the data available to machine-learning systems, and supports explainability by grounding AI outputs in a known structure. This