Research Scientist

San Francisco, California, USA

Full-time

Onsite

$200K – $370K

About the Role

As a Research Scientist here, you will develop innovative machine learning techniques and advance the research agenda of the team you work on, while also collaborating with peers across the organization. We are looking for people who want to discover simple, generalizable ideas that work well even at large scale and that form part of a broader research vision unifying the entire company.

We expect you to:

  • Have a track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects

  • Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects

  • Be excited about OpenAI’s approach to research 

Nice to have: 

  • Interest in and thoughtfulness about the impacts of AI technology

  • Past experience creating high-performance implementations of deep learning algorithms

Below are some of our current teams and the work you may do on them. 

  • Algorithms: Conduct exploratory research and drive algorithmic and architectural advances on the critical path to AGI. The Algorithms team has been responsible for a number of high-profile OpenAI releases, including GPT-2, Image GPT, Jukebox, and DALL-E.

  • Alignment: Fine-tune large language models to perform tasks in accordance with the user's intentions. Explore learning from human feedback and assisting humans in evaluating AI. Recent projects: InstructGPT, book summarization.

  • Applied AI Research: Develop innovative solutions and conduct open-ended research to solve real-world problems. Care about applications driven by user feedback and long-term research with significant impact on products, while maintaining a high product safety standard. Recent project: Text and Code Embeddings in the OpenAI API.

  • Code Generation: Research and develop AI programmers, the neural models that write, debug, and improve computer programs. Creating AI that can solve hard symbolic reasoning problems is one of the most difficult problems in modern deep learning, and you can attack it head-on. Seek new ways for increasingly powerful AI systems to interact with the world through a very general interface: computer code. Recent project: Codex, which powered the creation of GitHub Copilot.

  • Language: Develop GPT advancements by exploring and forecasting future capabilities and resource needs. Seek a deep understanding of how our models scale across multiple orders of magnitude to optimize for best-case performance at the largest scale. Value impact over novelty: the team has found that outstanding results come from solid engineering with good design, implementation, and benchmarking.

  • Mathgen: Help models learn to solve problems stated in informal natural language and prove theorems in formal languages like Lean. Large language models are known to make false claims that sound plausible, while mathematics requires the utmost rigor, so training models that can reason robustly is a critical step on the path to AGI. Recent projects: Training Verifiers and Solving Olympiad Problems.

  • Policy Research: Measure the risks, benefits, and overall impact of our technology on the world with an eye towards informing our policies and those of other institutions. Develop novel ways of characterizing model properties, work with internal and external partners to measure our economic impact, and build novel mitigations that help us safely deploy models in an iterative fashion. Conduct far-sighted research on ideal governance and regulation of powerful AI systems.

  • Scaling: Build the model training software stack, solving problems at all layers of the stack including iteration speed, observability, compute efficiency, correctness, and fault detection and recovery. Scaling owns the engineering and research required to harness custom-built hyperscale supercomputers, the latest algorithmic improvements, and massive datasets to train AI models of unprecedented capability.

  • Science of Deep Learning: Explore and understand the dynamics of training large models to guide both the training of current models and the trajectory of the next ones.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Compensation, Benefits and Perks

The annual salary range for this role is $200,000 – $370,000. Total compensation also includes generous equity and benefits.

  • Medical, dental, and vision insurance for you and your family

  • Mental health and wellness support

  • 401(k) plan with 4% matching

  • Unlimited time off and 18+ company holidays per year

  • Paid parental leave (20 weeks) and family-planning support

  • Annual learning & development stipend ($1,500 per year)

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records. 
