Search result excerpts:

- "…multi-technique approach, using ion beam imaging/spectroscopy, for example. In this context, the postdoctoral associate will be in charge of developing an operando/in situ cell compatible with ion beam analysis…"
- "Project title: Multi-Modal Large Language Model for Medical Image Analysis. Research period: 2 years. Abstract: The proposed research project aims to develop a novel multi-modal large language model (MLLM)…"
- "…noise from mechanical/vibratory sources. Two methodologies for monitoring the state of health of a structure, based on an experimental modal analysis of the structure, have been proposed: the detection…"
- "…-based environments for high-performance data analysis. Knowledge of biological network inference, causal modeling, and graph-based AI approaches. Experience in multi-modal data fusion, representation…"
- "…fundamental challenges in multimodal representation learning by developing novel approaches to align distinct embedding spaces from speech and sign language modalities. Sign languages encode information through…"
- "…innovative methods for processing and analyzing 7 Tesla MRI images of different modalities and formats (NIfTI, DICOM, etc.) using machine learning and artificial intelligence techniques. These methods will be…"
- "…The successful applicant will integrate multi-modal live imaging and omics data using AI-based pipelines to identify and refine early disease phenotypes, laying the groundwork for therapeutic intervention…"
- "…on the analysis of different characteristics of the skin and body fluids, such as sweat and blood. Many of these methods are challenged by quality (accuracy/precision of measurement), power consumption, and usability…"
- "…multi-modal, and longitudinal data with detailed information about mothers, their pregnancies, and the long-term development of their children. You will be part of a dynamic and highly motivated research…"
- "…Particular emphasis will be placed on developing and applying emergent system identification and modal decomposition techniques, as well as statistical analysis of large datasets. The candidate will disseminate…"