CEO of Company Behind ChatGPT Is ‘Nervous’ About AI Risks: ‘If This Technology Goes Wrong, It Can Go Quite Wrong’ [Video] | lovebscott.com

The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator Dall-E 2, said “regulation of AI is essential” as he testified in his first appearance in front of the US Congress.

via: Complex

Sam Altman, who serves as CEO of the company that developed ChatGPT, was questioned by senators about the potential risks associated with the tech’s continued development. One notable moment, as seen in the clip above, saw Altman pondering what he says are his “worst fears” about the future.

“My worst fears are that we cause significant—we [as in] the field, the technology, the industry—cause significant harm to the world,” he said. “I think that could happen a lot of different ways.”

Continuing, Altman pointed to these concerns as being a key reason why he was speaking with lawmakers on Tuesday and why he aims to continue doing so.

“I think if this technology goes wrong, it can go quite wrong,” he added. “And we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.”

Elsewhere, Altman pushed back on the assertion that this issue mirrors the past rise of social media, arguing that AI and related technology are simply not comparable and “thus requires a different response.” Altman also faced specific questions about upcoming elections, an area he says is among his chief concerns at the moment.

“I do think some regulation would be quite wise on this topic,” he said on Tuesday. “Someone mentioned earlier, it’s something we really agree with, that people need to know if they’re talking to an AI [or] if content that they’re looking at might be generated or might not. I think it’s a great thing to do, is to make that clear. I think we also need rules [and] guidelines about what’s expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about. So I’m nervous about it.”

See more from Altman and the larger committee hearing above.

These remarks follow similarly cautionary warnings from others, including a pioneer in the field and former Google employee who’s often referred to as the “Godfather of AI.”
