OpenAI Launches Safety Fellowship to Fund External AI Research
OpenAI is extending its safety efforts beyond its own walls with a new Safety Fellowship that will fund external researchers to investigate AI risks. The OpenAI Safety Fellowship will run for six months, from September 2026 to February 2027, according to a news announcement, broadening the company's engagement in alignment and safety work. The initiative comes as AI companies face growing scrutiny over how they manage the risks associated with rapidly advancing systems.
The program is open to researchers, engineers, and practitioners from outside the company. Participants will receive stipends, access to OpenAI models, and technical support to conduct research in areas such as robustness, privacy, agent oversight, and misuse prevention. Fellows are expected to produce outputs such as research papers, benchmarks, or datasets.
OpenAI said the fellowship is intended to "support high-impact research on the safety and alignment of advanced AI systems" and to expand the number of people working on technical safety challenges. The program reflects a wider trend among major AI developers of funding external research through fellowships, residencies, and academic partnerships.
For example, Anthropic, a rival AI company focused on safety, runs a similar fellows program that supports independent researchers working on alignment, interpretability, and AI security. The program provides funding, mentorship, and compute resources, with participants typically producing publicly available research.
Google and its DeepMind unit operate a range of student researcher and fellowship programs that place participants on research teams for several months. These programs cover a broad range of AI topics, including safety-related work, though they are not always explicitly branded as alignment-focused.
Microsoft and Meta have also expanded funding for external AI research through academic partnerships, grants, and residency-style programs, often aimed at advancing work on responsible AI and system reliability.
Together, these initiatives form a growing ecosystem of externally funded research tied to major AI labs.
OpenAI said the priority areas for its fellowship include "agentic oversight" and "high-severity misuse domains," reflecting concerns about systems capable of taking multi-step actions with limited human intervention. Recent advances in AI capabilities have enabled systems to perform more complex tasks, including coding, research assistance, and workflow automation. This has shifted some safety concerns from harmful outputs toward the potential for unintended or harmful actions taken by autonomous or semi-autonomous systems.
The growth of fellowship programs comes amid increasing demand for AI safety researchers, a relatively small but expanding field. Companies are offering competitive compensation and access to computing resources to attract talent as they compete to develop more advanced models. At the same time, governments and regulators are putting increasing pressure on AI developers to demonstrate that their systems can be deployed safely and reliably.
While external programs may broaden participation in safety work, they do not replace internal decision-making processes at AI companies. Researchers participating in fellowships typically do not have direct authority over product releases. Their work is generally advisory, focused on identifying risks and proposing mitigation strategies. Responsibility for deploying AI systems remains with the companies that build and operate them.
OpenAI said the fellowship is part of a broader effort to support research and improve understanding of AI risks, but it did not provide details on how findings from the program would be incorporated into product decisions.
The first cohort of the OpenAI Safety Fellowship is expected to be selected later this year. For more information, visit the OpenAI website.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].