The British government’s efforts to stay abreast of artificial intelligence are moving forward apace, with the appointment of top industry advisors to its AI taskforce announced on Thursday.
Housed within the Department for Science, Innovation and Technology, the AI taskforce was first announced in June with £100 million ($125 million) in funding.
Now the newly renamed Frontier AI Taskforce, which chairman Ian Hogarth described as a “start-up inside government”, has an advisory board consisting of AI research and national security experts.
Members include Turing Award winner Yoshua Bengio, former OpenAI researcher Paul Christiano, and director of the UK’s security agency GCHQ, Anne Keast-Butler.
📢An elite team of high-profile AI specialists have been recruited as advisors for the Frontier AI Taskforce.
— Department for Science, Innovation and Technology (@SciTechgovuk) September 7, 2023
“We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all,” Bengio said.
“With the upcoming global AI Safety Summit and the Frontier AI Taskforce, the UK government has taken greatly needed leadership in advancing international coordination on AI, especially on the question of risks and safety.”
The taskforce’s first progress report, also released on Thursday, noted a number of other appointments and collaborations, including the recruitment of Downing Street staffer Ollie Ilott to be the program’s director. Hogarth said Ilott brought a “General Groves” energy to proceedings, referencing the man who oversaw the creation of the Manhattan Project.
UK Government collaborates with AI industry
Since its inception, the government taskforce has stated that it will collaborate with AI behemoths including Google’s DeepMind (Bard, PaLM-2), OpenAI (ChatGPT, GPT-4), and Anthropic (Claude AI, Constitutional AI).
On Thursday, the government said in its announcement that those companies have committed to giving researchers deep access to their AI models.
Partnerships with several leading technical organizations were also announced, with researchers from various firms and non-profits joining the project to help assess the risks and opportunities of AI.
One of those is Heidy Khlaaf, engineering director at consulting firm Trail of Bits, who will lead work on the risks at the intersection of cybersecurity and frontier AI systems.
“Having worked on evaluating code synthesis models, regardless of their capabilities or correctness, it’s been unclear how well they aid in offensive cyber capabilities and increase security risks,” she wrote on X, formerly known as Twitter. “Excited to announce our work for the UK AI Taskforce to assess exactly that!”
Meanwhile, Saffron Huang and Divya Siddarth, co-founders of the Collective Intelligence Project, will develop ways to measure whether AI is being built for the collective good.