23 Postdoctoral positions at Brookhaven Lab
- …associate position with a focus on natural language processing (NLP). This extremely fast-moving and competitive field has produced innovations with highly visible impact in industry, education, and public…
- …-based machine learning and natural language processing. Strong research experience (e.g., evidenced by publication record). Excellent programming and computer science skills. Security clearance…
- …discipline is required within the last 5 years. Candidates must have excellent written and oral communication skills, be self-motivated, and be able to work both independently and as part of a multi-institutional team. A…
- Required Knowledge, Skills, and Abilities: PhD in Accelerator Physics or a related field; in-depth working knowledge of accelerator design codes such as BMAD, MADX, or ELEGANT; working knowledge of programming…
- Excellent programming skills in various platforms and languages (e.g., Matlab, Python); ability to work independently and collaboratively; clear and concise verbal and written communication and presentation…
- …experience using and advancing data analytics and modeling, which may include using FEFF, Python, or other modeling/programming languages. Your experience is demonstrated through publications, GitHub…
- …or academic presentations. Ability to communicate clearly and concisely in English, both verbally and in writing. Self-motivated and able to work both independently and as part of a team. Preferred Knowledge, Skills…
- …to take a leading role in studying the origin of Lambda hyperon transverse polarization in electron scattering experiments, such as the CLAS12 experiment at Jefferson Lab’s 12 GeV program and the ePIC…
- …The program involves close collaborations with experts in theory and data science and will benefit from frequent interactions with principal investigators at the National Synchrotron Light Source II (NSLS-II)…
- …of existing ones for scientific applications; (ii) Large Language Models (LLMs) and multi-modal Foundation Models; (iii) Large vision-language models (VLMs) and computer vision techniques; and (iv) techniques…