Google’s Sundar Pichai to CBS: The development of artificial intelligence should be regulated like nuclear weapons

Google CEO Sundar Pichai emphasizes responsibility at every stage of AI development.

Concerns related to artificial intelligence keep Google CEO Sundar Pichai awake at night, he admitted to CBS on 60 Minutes.

“[Artificial intelligence] can be very harmful if misused. We don’t have all the answers yet, and the technology is evolving rapidly. Will it keep me up at night? Absolutely,” Pichai said on the show.

As the technology develops and becomes more widespread, artificial intelligence will need international regulation, according to Pichai, because the issue affects all countries. He suggests that the field calls for cooperation similar to what has been established for the control and regulation of nuclear weapons.

Society does not seem quite ready for the rapid development of artificial intelligence, says Pichai. In his view, the pace of change in human societies generally does not match the speed at which artificial intelligence is currently advancing. One danger of this rapid development is that spreading false information becomes easier and more widespread.

“Compared to other technologies, people’s concerns have arisen at an earlier stage. So I’m optimistic. Serious discussions about the effects have begun,” Pichai says nonetheless.

Google released its own Bard AI, based on the Lamda language model, for limited trial use earlier this year.

“I think the way we’ve released it, which is experimental and limited, makes it safe. We all have to be responsible at every step,” says Pichai.

Bard’s release has been seen as following on the heels of the launch of ChatGPT, developed by OpenAI. The new version of ChatGPT, released at the end of last year, kicked off the race on the artificial intelligence front. Google’s rival Microsoft has invested billions of dollars in OpenAI’s technology.

In the interview, Pichai warns about the competitive situation in the development of artificial intelligence.

“If you only focus on who is first, you may not see all the danger points or disadvantages,” says Pichai.

According to Pichai, gradually bringing the technology into open use helps address potential security issues before new, more advanced versions are released.

Last summer, Lamda made headlines when former Google engineer Blake Lemoine claimed that the artificial intelligence had become conscious of its own existence and feelings, which he said should be respected. Google strongly denied the claims and fired Lemoine.
