AI regulation starts to take shape in US and UK

US Vice-President Kamala Harris applauds as US President Joe Biden signs an executive order after delivering remarks on advancing the safe, secure and trustworthy development and use of artificial intelligence, in the East Room of the White House in Washington, DC, on October 30, 2023. Photo: AFP

Published Nov 7, 2023

The world’s attention has been captivated by the promises and perils of artificial intelligence. From its potential to solve health-care puzzles and bridge educational gaps to concerns about its impact on warfare and misinformation, AI’s impact is undeniable.

A landmark event, the “AI Safety Summit”, hosted by the UK at the historic Bletchley Park, gathered luminaries from various fields to discuss the challenges and opportunities presented by AI.

The summit, which unfolded over two days, aimed to foster a shared understanding of the risks posed by cutting-edge AI and the need for international collaboration to address them. It also sought to identify measures that organisations can take to enhance AI safety. The list of participants included top government officials, industry leaders and thought leaders, with a last-minute addition of tech magnate Elon Musk.

The exclusive nature of the event, with limited guest lists and private discussions, has led to the emergence of additional events and discussions outside the summit. The “AI Fringe” conference, organised by a PR firm, Milltown Partners, has expanded the conversation, allowing a broader audience to engage with the subject matter.

Furthermore, a group of 100 trade unions and rights campaigners expressed dissatisfaction with the exclusivity of the summit, stating that their voices were excluded from the conversation. They urged the government to involve a more diverse range of stakeholders.

The broader AI landscape has also witnessed significant developments. These include the UK government’s plan to establish an AI safety institute, a paper published by academics on managing AI risks, and the UN’s creation of a task force to explore AI implications. In the US, President Joe Biden has issued an executive order on generative AI to address safety, privacy and equity concerns.

One of the central debates in the AI community concerns the idea that AI poses an “existential risk”, that is, a threat to humanity’s very existence. Some argue that this framing is exaggerated and diverts attention from more immediate AI-related challenges, such as misinformation, while others emphasise the importance of considering catastrophic risks even in the near term.

The summit’s overarching goal is to establish the UK as a leader in safe AI, attracting investments and job opportunities. However, it also raises concerns about “regulatory capture”, where industry giants play a prominent role in framing AI risks and protections. The balance between government oversight and corporate influence remains a critical point of discussion.

In a parallel development across the Atlantic, Biden’s executive order on generative AI outlines eight key objectives for regulating the technology effectively and responsibly.

The goals include setting new standards for AI safety and security, protecting user privacy, promoting equity and civil rights, ensuring consumer protection, supporting workers, fostering innovation and competition, advancing US leadership in AI technology and promoting the responsible government use of AI.

Several government agencies have been assigned specific tasks to achieve the objectives. They are responsible for creating standards to safeguard against AI’s potential misuse in engineering biological materials, developing best practices for content authentication and enhancing cybersecurity measures. The National Institute of Standards and Technology (NIST) will play a crucial role in assessing AI models before their public release, while other agencies focus on addressing AI’s potential threats to infrastructure and cybersecurity.

The executive order also requires developers of large AI models, such as OpenAI’s GPT and Meta’s Llama 2, to share safety test results. To protect users’ privacy, the White House is urging Congress to pass data privacy regulations and develop “privacy-preserving” techniques.

One notable aspect of the executive order is its emphasis on preventing AI discrimination. This includes addressing algorithmic discrimination and ensuring fairness in AI applications, particularly in areas like sentencing, parole and surveillance. Government agencies are also directed to provide guidelines for preventing AI from exacerbating discrimination in housing, federal benefits programmes and contracts.

The order recognises the potential for AI to disrupt jobs and requires a report on its impact on the labour market. It aims to encourage more individuals to work in the AI field and provides resources to students and researchers through the National AI Research Resource.

While the executive order is a significant step towards establishing standards for generative AI, it is important to note that it is not permanent law and is subject to change with the administration. Lawmakers are working on permanent legislation to regulate AI.

Overall, the developments in the UK and the US reflect a growing recognition by powerful nation states of the potential impact of machine learning technology. Both countries are important to watch, as they often lay down a path for others to follow. It remains to be seen whether these tentative regulatory steps will produce meaningful results.

All this governmental hubbub may yet prove ineffectual, as has so far been the case with climate change.

James Browning is a freelance tech writer and music journalist.

BUSINESS REPORT