22 Apr 2026 Job Information Organisation/Company Aalborg Universitet Department The Technical Faculty of IT and Design, Department of Computer Science, Section for Distributed, Embedded and
-
direction is the generalization to parallel GPU computations. Task 4: Correctness evaluation of quantum communication systems and quantum protocols. This task will include approaches such as statistical model
-
the subject. The following, among others, will be considered very highly relevant subjects: Operating Systems, Networks, Infrastructure Management, Internet and Distributed Systems, Concurrency and Parallelism
-
Job Requirements: Preferably a Bachelor’s degree in Computer Science or related disciplines from a reputable university. Good knowledge of parallel and distributed computing, and algorithm design
-
the EU Research Framework Programme? Horizon Europe - ERC Is the Job related to staff position within a Research Infrastructure? No Offer Description Postdoc position in soft optical fiber-based artificial
-
28 Mar 2026 Job Information Organisation/Company SINGAPORE INSTITUTE OF TECHNOLOGY (SIT) Research Field Computer science Engineering Researcher Profile Recognised Researcher (R2) First Stage
-
through the EU Research Framework Programme? Not funded by an EU programme Is the Job related to staff position within a Research Infrastructure? No Offer Description Environmental and scientific context
-
will encourage their implementation, especially by using parallel computing. * Assigned department Existing departments [Work location] * Address 606-8501 Kyoto Minato Laboratory, Graduate School
-
optimization in distributed systems. The work also involves modern compiler infrastructures, with emphasis on MLIR, and contributions to LLVM and the OpenMP standard. Applicants must hold a PhD in Computer
-
computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared and distributed memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance