- motivated post-doctoral associate with a strong background in game theory, control systems, and/or learning theory to join the research team of Prof. Muhammad Umar B. Niazi. The position focuses on the design
- motivated post-doctoral associate with a strong background in control systems and machine learning to join the research team of Prof. M. Umar B. Niazi. The position focuses on the development of digital twins
- particular focus on applications relevant to the Arab world. The successful applicant will join a multidisciplinary research team working at the intersection of machine learning, algorithmic fairness, human-computer interaction, and responsible AI. The project aims to investigate how bias emerges in data pipelines and AI
- methodology will involve the development of mathematical models for signal transmission and reception, derivation of fundamental performance limits, algorithmic-level system design, and performance evaluation via computer simulations and/or