OpenAI CEO to Testify in US Senate amid Rising Concerns over AI Technology

UTC by Darya Rudz · 4 min read
Photo: TechCrunch / Flickr

Last week, the OpenAI CEO attended a meeting at the White House devoted to the regulation of AI. According to Altman, White House representatives and OpenAI executives are “on the same page” in their vision.

OpenAI CEO Sam Altman is preparing to testify before the US Senate next week, his first testimony before Congress. On Monday, Altman will join members of the House of Representatives for dinner, and on Tuesday, he will testify before the Senate Judiciary Subcommittee on Privacy, Technology & the Law. During the hearing, titled “Oversight of AI: Rules for Artificial Intelligence”, Altman will speak on what laws might need to be introduced to protect Americans as the use of artificial intelligence (AI) spreads rapidly across every industry.


Connecticut Democratic Senator Richard Blumenthal, who heads the Senate panel on privacy and technology, commented:

“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls. This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology. I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

Besides the OpenAI CEO, other experts testifying before the Senate on Tuesday include Christina Montgomery, chief privacy and trust officer at IBM, and Gary Marcus, professor emeritus at New York University.

AI Boom and Regulatory Concerns

OpenAI is the company behind ChatGPT, an AI-powered chatbot designed to mimic human conversation and provide engaging, informative responses to a wide range of queries. ChatGPT has created a lot of buzz in the media and triggered a surge of new interest and investment in artificial intelligence. Notably, the technology has attracted not only investors but also regulators, who have raised concerns about its risks.

According to Sam Altman, artificial intelligence, “the greatest technology humanity has yet developed”, will reshape society as we know it. However, alongside its potential to drastically improve our lives, AI comes with real dangers. The biggest include threats to consumer privacy, biased programming, danger to humans, and unclear legal regulation.

“We’ve got to be careful here. I think people should be happy that we are a little bit scared of this,” stated Sam Altman.

Further, speaking on the tech models used by ChatGPT, Altman added:

“I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”

To mitigate the risks associated with artificial intelligence, Altman argues, regulators and society need to be actively involved with the technology.

So far, several different approaches to AI regulation have been applied in various parts of the world. For example, in 2021, the European Union introduced the Artificial Intelligence Act (AIA), which proposes a risk-based approach to guide the use of AI in both the private and public sectors. It bans the use of AI in critical services that could threaten livelihoods or encourage destructive behavior but permits AI in other sensitive sectors on condition of maximum safety and efficacy checks by regulators.

In contrast, Canada’s modified risk-based approach allows the use of AI even in critical areas, but developers must create a mitigation plan to reduce risks and increase transparency when deploying AI in high-risk systems.

In the US, there is no comprehensive regulation yet, but last year, 15 states and localities proposed or passed legislation concerning AI. New York City was the first to introduce an AI law aimed at preventing AI bias in the hiring process. Meanwhile, Colorado and Vermont established task forces to study AI applications at the state level.
