OpenAI delays arrival of ChatGPT's advanced voice mode

OpenAI has delayed the arrival of the advanced voice mode that brings the GPT-4o language model to the chatbot and enables a more natural conversation, with “emotions and non-verbal signals”, in order to ensure that it meets the company's “high standards of security and reliability”.

GPT-4o, introduced in May, was designed to offer a “more natural” human-machine interaction: it can understand a combination of text, audio and image inputs and generate responses in those same modalities with great speed.

It also includes an advanced Voice Mode, which lets users choose from a series of voices to personalize their interaction with the chatbot. It is precisely this feature that has generated controversy and led to the withdrawal of the Sky voice for resembling that of actress Scarlett Johansson, who voiced an artificial intelligence assistant in the 2013 film Her.

This mode was going to be tested with a small group of users in July, but OpenAI has decided to delay its start because it needs “one more month” to reach the standard it has set, as the company reported in a statement published on X (formerly Twitter).

During this time, OpenAI will improve the model’s ability to detect and reject inappropriate content, as well as the overall user experience. It will also work on the infrastructure needed to support large-scale, real-time use.

The roadmap calls for launching this voice mode in the fall for Plus subscribers, although the timing may change. Additionally, the company plans to incorporate new video and screen-sharing capabilities.

“ChatGPT’s advanced voice mode can understand and respond with emotions and non-verbal cues, bringing us closer to natural conversations in real time with AI”, says OpenAI. The timeframe for its availability, the company adds, will depend on whether it meets its “high standards of security and reliability”.

By Editor
