- design, development, and optimization of scalable AI inference pipelines. Implement and experiment with LLMs, GNNs, multi-modal AI, and vision models. Apply techniques such as quantization and pruning (both are sketched in the first example after this list).
- methods for causal inference in observational data (one such method, inverse propensity weighting, is sketched after this list), is strongly preferred. Using various existing large datasets with rich information for knowledge synthesis and triangulation over the course of the
- semantics can be inferred. At the end of the processing pipeline, a component will aggregate the accumulated knowledge about the discovered attack surface. The ultimate objective is to provide useful and
- MLOps, model deployment, or scalable data pipelines. Knowledge of advanced ML model serving frameworks (TorchServe, TensorFlow Serving, Triton Inference Server; a minimal Triton client example follows this list). Familiarity with AI risk assessment
- ). The goals are to develop new computational methods that allow the scientific inference of explainable attributes to describe human behavior and self-reports, as well as to make progress in the computational
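
The first listing names quantization and pruning only in passing. The following is a minimal, illustrative sketch of both in PyTorch; the framework choice, the toy model, and the 30% sparsity level are assumptions, not details from the posting.

```python
# Minimal sketch of post-training dynamic quantization and magnitude pruning in
# PyTorch. The framework, the toy model, and the 30% sparsity level are
# assumptions for illustration; the posting does not specify them.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in model for one stage of an inference pipeline.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Dynamic quantization: Linear weights are stored and computed in int8 at inference.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Unstructured L1 pruning: zero out the 30% smallest-magnitude weights per Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

x = torch.randn(1, 128)
print(quantized(x).shape, model(x).shape)  # both should be torch.Size([1, 10])
```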
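
The second listing asks for methods for causal inference in observational data without naming one. Below is a hedged sketch of inverse propensity weighting (IPW) on synthetic data; the data-generating process, the logistic-regression propensity model, and the true effect of 2.0 are all illustrative assumptions.

```python
# Hedged sketch of one standard method, inverse propensity weighting (IPW),
# on synthetic data. The data-generating process, the logistic-regression
# propensity model, and the true effect of 2.0 are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                           # observed confounders
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-x[:, 0])))   # treatment depends on x[:, 0]
y = 2.0 * t + x[:, 0] + rng.normal(size=n)            # outcome: true treatment effect is 2.0

# Estimate propensity scores e(x) = P(T = 1 | X), then reweight treated and
# control outcomes by 1/e(x) and 1/(1 - e(x)) respectively.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"IPW estimate of the average treatment effect: {ate:.2f} (true value: 2.0)")
```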
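
The fourth listing names TorchServe, TensorFlow Serving, and Triton Inference Server. The sketch below shows a minimal HTTP client request against Triton, assuming a server is already running and serving a suitable model; the URL, model name, tensor names, and input shape are placeholders, not details from the posting.

```python
# Client-side sketch for Triton Inference Server (one of the frameworks named
# above). It assumes a Triton server is already running on localhost:8000 and
# serving a model named "resnet50" with tensors "input__0"/"output__0"; those
# names, the shape, and the URL are placeholders, not details from the posting.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single-image FP32 request matching the (assumed) model input shape.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)
out = httpclient.InferRequestedOutput("output__0")

# Send the request and read the result back as a NumPy array.
response = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(response.as_numpy("output__0").shape)
```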