- ., StableDiffusion) and large language models (LLMs) based on the transformer architecture [6] (e.g., ChatGPT). In general, the above generative models need a considerable amount of computational resources in terms
- High-Performance Computing is entering a revolutionary phase characterised by Exascale capabilities, with step-changes in technology enabling numerically intensive processes to answer outstanding
- analysed in a standard web browser. HiPIMS computations can be executed on local GPUs or through cloud-based GPU services, empowering users to conduct large-scale fast flood simulations without worrying
- multimodal machine learning, large language models, and fairness and uncertainty evaluations. The PhD student will benefit from: state-of-the-art AI computing resources for large-scale model training, including
- for collaboration inside and outside of the University. It has access to extensive dedicated computing resources (GPU, large storage). The successful applicant will work under the supervision of Prof. Hain. Please
- -fidelity simulation data will be used to build the machine-learning model, which will then be embedded into a GPU-accelerated blade-element momentum solver that will be made open-source to enable direct impact with
- us to run large numerical simulations with billions of grid points on mixed computer architectures, including CPU and GPU machines. A current project is preparing the code set for the next generation of
- national high-performance computing facilities (both CPU- and GPU-based) to conduct large-scale simulations efficiently, working closely with experimental collaborators to validate computational predictions