U.S. tech giants including Microsoft Corporation, Alphabet Inc., and OpenAI are reportedly preparing to publicly commit to AI safety measures on Friday in response to an appeal from the White House.
The administration of President Joe Biden had previously admonished these firms about the critical need for assurances that AI technologies will not result in harm.
The non-compulsory aspect of these declarations highlights the limits of the Biden administration’s influence in steering the most sophisticated AI models away from potential misuse, according to a Thursday Bloomberg report that quoted people familiar with the plans.
Members of Congress have been holding knowledge-gathering sessions to deepen their understanding of AI before drafting any legislation. Reaching consensus on enforceable regulation remains uncertain, per Bloomberg's reporting.
The White House checklist of commitments, set to be unveiled on Friday, is expected to echo promises made by leading AI companies that attended a discussion in May with Vice President Kamala Harris.
Harris, along with other senior White House officials, stressed the firms’ obligation to guarantee the security of their tech products during this meeting.
Jeff Zients, the White House chief of staff, expressed unease over the slow progress of regulatory procedures, remarking in a podcast interview a month ago, “We cannot afford to wait a year or two.”
The commitment document, which may undergo changes before Friday, is expected to propose eight pledges centered on safety, security, and societal responsibility. These include allowing independent experts to test models for potentially harmful behavior, a practice known as “red-teaming”; sharing trust and safety information with government bodies and other firms; watermarking audio and visual content to make AI-generated material easier to identify; boosting investment in cybersecurity; encouraging third parties to discover security vulnerabilities; prioritizing research into the societal risks posed by AI; and using the most advanced AI systems, known as frontier models, to address major societal challenges.
There’s been a global call by governments for AI governance, similar to the accords in place to avert nuclear conflict. The Group of Seven nations, for instance, agreed earlier this year in Hiroshima, Japan, to synchronize their approach toward this technology, and an international AI summit is slated to be held in the U.K. before year-end.
Yet, such efforts lag behind the swift progress of AI developments, which are fueled by fierce competition between business rivals and anxiety that Chinese breakthroughs could surpass those in the West. This has led Western leaders to resort to urging companies to self-regulate, at least for now.
Alexandr Wang, founder of the AI startup Scale AI, recently delivered a stark warning to the Pentagon, urging immediate action on managing military data and investing in emerging technologies. Wang likened China’s investment in AI to the Apollo program.
Produced in association with Benzinga