The heads of Google, Microsoft, and two other companies working on artificial intelligence met with Vice President Kamala Harris on Thursday as the Biden administration launched initiatives designed to ensure that the rapidly evolving technology improves lives without jeopardising people’s rights and safety.
In a short appearance at the meeting in the Roosevelt Room of the White House, President Joe Biden expressed the hope that the group might “educate us” on what is most important for defending and advancing society.
Biden told the CEOs, “What you’re doing has enormous potential and enormous danger,” according to a video posted to his Twitter account.
The popularity of the AI chatbot ChatGPT (even Biden has tried it out, White House officials said) has sparked a surge of commercial investment in AI tools that can produce convincingly human-like writing as well as original artwork, music, and computer code.
However, the ease with which the technology can pass for human has pushed governments around the world to consider how it could take away people’s jobs, deceive citizens, and spread misinformation.
The Democratic administration pledged a $140 million investment to establish seven new AI research institutes.
Additionally, guidelines on how federal agencies can use AI tools are anticipated to be released in the coming months by the White House Office of Management and Budget. Top AI developers have also independently agreed to take part in an open evaluation of their systems in August at the DEF CON hacker convention in Las Vegas.
Adam Conner of the liberal Center for American Progress said the White House must go further, since the AI systems these companies are building are being incorporated into hundreds of consumer applications.
“We’re at a turning point where, as in other digital regulatory areas like privacy or regulating big online platforms, the next few months will really define whether we take the initiative on this or relinquish leadership to other regions of the globe,” Conner said.
The meeting was convened so that Harris and administration officials could discuss the dangers of current AI development with Google’s Sundar Pichai, Microsoft’s Satya Nadella, and the CEOs of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.
After the meeting behind closed doors, Harris said in a statement that she spoke to the executives and explained that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products.”
The emergence of new “generative AI” tools such as ChatGPT has heightened ethical and societal concerns about automated systems trained on massive troves of data.
Some companies, including OpenAI, have kept secret the data on which their AI systems were trained. That has made it harder to understand why a chatbot produces inaccurate or biased answers to questions, or to address concerns about whether it is plagiarising copyrighted works.
Companies worried about being held liable for something in their training data may lack incentives to track it rigorously in a way that would be beneficial “in terms of some of the concerns around consent, privacy, and licensing,” said Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face.
She said, “From what I understand of tech culture, that just isn’t done.”
Some have called for transparency rules to push AI providers to open their systems to greater external scrutiny. But because AI systems are built on top of foundation models, adding transparency after the fact will not be simple.
Governments will ultimately have to decide whether to throw away all of the work that has already been done, Mitchell said. “Naturally, I kind of anticipate that the decisions will favour corporations and support the fact that it has already been done, at least in the United States,” she said, because the effects would be enormous if all these businesses were forced to discard that work and start again.
While the White House indicated a collaborative approach with the industry on Thursday, U.S. government agencies like the Federal Trade Commission, which upholds consumer protection and antitrust laws, are also paying closer attention to businesses that develop or use AI.
The European Union, where negotiators are finalising AI legislation that may propel the 27-nation union to the forefront of the worldwide movement to define standards for the technology, might also impose stricter limits on the corporations.
The EU’s initial 2021 proposal for AI regulations focused on reining in high-risk applications that endanger people’s rights or safety, such as live facial recognition or government social scoring systems that judge individuals based on their behaviour. Chatbots barely got a mention.
However, reflecting how quickly the technology has advanced, negotiators in Brussels have been frantically revising their proposals to account for general-purpose AI systems like those built by OpenAI. A new partial draft of the legislation obtained by The Associated Press would require so-called foundation AI models to disclose the copyrighted material used to train the systems.
Even though a European Parliament committee is scheduled to vote on the proposal next week, the AI Act may not take effect for several years.
Italy temporarily blocked ChatGPT over a breach of Europe’s strict privacy rules, and the UK’s competition authority said on Thursday that it is opening a review of the AI market.
Putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to probe them for hazards, though it is unlikely to be as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.
Companies the White House says have committed to participate include Google, Microsoft, OpenAI, and Anthropic, as well as Hugging Face, chipmaker Nvidia, and Stability AI, known for its image generator Stable Diffusion.
“This would be a way for very skilled and creative people to do it in one kind of big burst,” said Frase.

