- …expertise in HCI and education, including adaptive gamification, engagement, learning analytics, and the design of motivational affordances in education. As part of the project, the PhD student will work with…
- …collaborate closely with experimental partners (ICCF, IJL, IC2MP, and Syensqo) to validate computational predictions, ensuring the development of catalysts that are both highly active and stable under harsh…
- …neural population dynamics recorded by experimental partners; collaborate with project partners; participate in the scientific activities of the team and the scientific consortium; study learning mechanisms and…
- …different nationalities. Partnerships exist with 150 companies, and our research groups collaborate with more than XX countries throughout the world. Its exceptional instrumental platforms are spread over 4…
- …quantitative and machine learning approaches; developing predictive models linking nuclear features to future cell fate; interacting with collaborators in imaging, computational biology, and developmental…
- …resources of CESAM, including its Machine Learning and Deep Learning hub, and close collaborations with ONERA. The successful candidate will work in a multidisciplinary environment bringing together researchers…
- …conditions (Burgard et al., 2022). The application of deep learning to this problem has yielded promising results (Rosier et al., 2023; Burgard et al., 2023). Further development and refinement…
- …will work on this project under the supervision of Marie Kerjean and may collaborate with other members of the ANR Diplo project. If the successful candidate wishes, the results obtained may be…
- Description: Within the ANR HEBBIAN contract, the objective is to adapt the bio-inspired Hebbian learning models recently proposed by one of the ANR partners (Frédéric Lavigne) to account for data…
- …Deployment Strategies. Model Compression: investigate techniques such as quantization, pruning, and knowledge distillation to reduce the computational and memory footprint of deep learning models without…
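To illustrate two of the compression techniques named in that last listing, here is a minimal NumPy sketch of magnitude pruning followed by uniform symmetric 8-bit quantization of a single weight matrix. The matrix, the 50% sparsity target, and the layer setup are all hypothetical stand-ins, not anything specified by the project itself:

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a trained network.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

# Magnitude pruning: zero out the 50% of weights with the smallest |value|.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Uniform symmetric 8-bit quantization: one float scale maps weights to int8.
scale = np.abs(pruned).max() / 127.0
quantized = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale  # reconstruction for error check

sparsity = np.mean(pruned == 0.0)
print(f"sparsity after pruning: {sparsity:.2f}")
print(f"max quantization error: {np.abs(dequantized - pruned).max():.6f}")
```

The int8 tensor plus its scale is what would actually be stored or deployed; the reconstruction error stays below half the quantization step, which is the usual trade-off these listings allude to when they ask for compression "without" accuracy loss.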