OpenAI and Microsoft pledge to create safe AI

The two tech giants have joined the UK’s international coalition to safeguard global AI development in a bid to improve public trust in the rapidly growing technology.

Leading tech firms OpenAI and Microsoft are the latest to join an initiative spearheaded by the UK’s AI Security Institute (AISI), aimed at building public trust and confidence in AI as it rewires public services.

Announced by Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan as the AI Impact Summit in India draws to a close today (Friday 20 February), the news bolsters the work of AISI’s Alignment Project, which was first announced last summer.

Some £27 million will now be made available through the fund to support research ensuring AI systems work as they’re supposed to, with £5.6 million coming from OpenAI and additional support from Microsoft and others.

Today also sees the first Alignment Project grants awarded to 60 projects from across eight countries, with a second round due to open this summer.

AI alignment refers to the attempt to steer advanced AI systems to act reliably, without unintentional or harmful behaviours. 

It involves developing methods that prevent such unsafe behaviours as AI systems become more capable.

Progress on alignment will boost confidence and trust in AI, ultimately supporting the adoption of systems that are increasing productivity, slashing medical scan times for patients, and unlocking new jobs for communities up and down the country.

Without continued progress in alignment research, increasingly powerful AI models could act in ways that are difficult to anticipate or control, which could pose challenges for global safety and governance.

UK Deputy Prime Minister, David Lammy, said: “AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset. 

“We’ve built strong safety foundations which have put us in a position where we can start to realise the benefits of this technology. The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort.”

UK AI Minister, Kanishka Narayan, said: “We can only unlock the full power of AI if people trust it – that’s the mission driving all of us. Trust is one of the biggest barriers to AI adoption, and alignment research tackles this head-on.

“With fresh backing from OpenAI and Microsoft, we’re supporting work that’s crucial to ensuring AI delivers its huge benefits safely, confidently and for everyone.”

Alignment is crucial for the security of advanced AI systems and their long-term adoption across all walks of life.

The aim is to ensure AI models operate “as they should do”, even as their capabilities rapidly evolve.

With the rise of AI systems that can perform increasingly complex tasks, there is a growing global consensus that AI alignment is one of the most urgent technical challenges of our era.

Besides OpenAI and Microsoft, AISI’s Alignment Project is supported by an international coalition, including: 

• Canadian Institute for Advanced Research (CIFAR)
• Australian Department of Industry, Science and Resources’ AI Safety Institute
• Schmidt Sciences
• Amazon Web Services (AWS)
• Anthropic
• AI Safety Tactical Opportunities Fund
• Halcyon Futures
• Safe AI Fund
• Sympatico Ventures
• Renaissance Philanthropy
• UK Research and Innovation (UKRI)
• Advanced Research and Invention Agency (ARIA)

The project is guided by an expert advisory board whose members include Yoshua Bengio, Zico Kolter, Shafi Goldwasser, and Andrea Lincoln.

Mia Glaese, VP of Research at OpenAI, said: “As AI systems become more capable and more autonomous, alignment has to keep pace.

“The hardest problems won’t be solved by any one organisation working in isolation – we need independent teams testing different assumptions and approaches.

“Our support for the UK AI Security Institute’s Alignment Project complements our internal alignment work and helps strengthen a broader research ecosystem focused on keeping advanced systems reliable and controllable as they’re deployed in more open-ended settings.”
