-
at Chalmers here. Application procedure The application should be written in English and be attached as PDF-files, as below. Maximum size for each file is 40 MB. Please note that the system does not support Zip
-
The aim is to unravel the anthropogenic and natural processes, and their relative importance, in triggering shallow landslides in sensitive clays. The focus will be on developing computational models
-
a quantum computer based on superconducting circuits. You will be part of the Quantum Computing group in the Quantum Technology Laboratory (QTL) division of the Microtechnology and Nanoscience (MC2
-
to the future of clean energy? Since the 1970s, nuclear power has played a key role in reducing carbon emissions and decreasing reliance on fossil fuels. Today, with over 400 reactors in operation worldwide and
-
, the applicant should have: a master's level degree corresponding to at least 240 higher education credits in a relevant field. This includes educational sciences, teacher education, natural sciences, engineering
-
automotive software engineering. About the research project This project explores how neural-symbolic approaches can enhance the trustworthiness of generative AI models (e.g., Large Language Models (LLMs) and