-
This evolution calls for new methodologies capable of effectively representing and compressing data in infinite-dimensional settings. In this project, we aim to address this challenge by developing a theoretical
-
are transparent, interpretable and aligned with human needs and values. You will focus on developing, testing and reviewing a methodology to make AI systems transparent and explainable. The goal is to empower
-
Deep Learning (CIDL), part of the Leiden Institute of Advanced Computer Science (LIACS). As a team, we develop cutting-edge techniques for advanced computational imaging systems, combining expertise from
-
participants of the Netherlands Twin Register, integrating genetic and psychological data where relevant. Beyond algorithm development, you will also address methodological challenges such as data quality, bias
-
students to become experts in a specific domain of choice. This vacancy is explicitly targeted at candidates interested in algorithmic biases and developing methodological approaches to tackle this challenge
-
to source localization based on microphone arrays or distributed sensors. This PhD project will focus on the development of novel methods and algorithms for airborne noise source localization in generic urban
-
Develop system architecture and training strategy to enable the FM to learn from heterogeneous MRI data in terms of data source, purpose and physical location in the scanner; Develop efficient techniques
-
tagging algorithm development as well as physics data analysis, with a focus on Higgs boson physics, top quark physics, and searches for new physics signatures. This is what you will do: After the discovery
-
or incomplete. Your tasks will include: Developing and benchmarking ML/AI algorithms tailored to low-data regimes, e.g. few-shot learning, transfer learning or data-efficient representation learning