- 301.975.4579. Description: As the demand for high-resolution, high-content imaging increases, the cost and challenges of acquiring, storing, processing, and analyzing today's very large imaging data sets are even …
- … over time. Stem cell populations can be highly heterogeneous and can exhibit complex responses. Using quantitative imaging data on large numbers of live cells over time, we can construct potential …
- … computing. As the code has scaled up, we have also faced challenges in visualizing and analyzing large data sets. The visualization tool used with FDS is called Smokeview. We are seeking a computer scientist …
- Description: We work with scientists in other NIST laboratories to develop tools for computer simulation and analysis of magnetic systems at the nanometer scale. Model verification is achieved by comparison …
- … sophisticated potential energy functions and adequate sampling to reveal the associated, intricate molecular details. In addition to being centrally important, high-quality experimental data (free energy …
- … of hydrogen-safe infrastructure. As with most environmental degradation problems, industry-specific testing has been prioritized, leading to phenomenological standards that are adjusted as new data or new …
- … differentiation, data describing the changes in gene expression at the single-cell level are needed. In this project, quantitative live-cell imaging and image analysis will be used to follow gene expression …
- 301.975.3507. Description: Recent developments in Artificial Intelligence (AI) have allowed machine learning models to solve certain complex problems in natural language processing and other areas at large scales …
- … cycle mass spectrometers have made this analysis possible, there are still looming problems related to the inherently large search space and to comparing results temporally or between laboratories …
- … pixelated detection system. Taking full advantage of this cutting-edge technology will require the development of new methods for collecting, processing, and interpreting the large amounts of data we now have …