OpenAI updates GPT-4o: more natural writing and greater depth when working with uploaded files

OpenAI has released an update to its model artificial intelligence (IA) GPT-4o, which adds improvements to creative writing and the ability to work with uploaded files, offering more complete answers, in addition to sharing a new method for automated teamwork, in order to understand the risks potentials of AI.

The technology company led by Sam Altman presented GPT-4o in May of this year, a model that accepts any combination of text, audio and image, and that can respond to a voice input using a time similar to that needed by humans, with a average of 320 milliseconds.

Now, OpenAI has updated GPT-4o to improve its creative writing capabilities, as well as to offer a better experience when working with uploaded filesas it will provide more accurate information related to these documents.

This was recently announced by the technology company in a publication on X (former Twitter), where it indicated that the GPT-4o model “will offer more natural, attractive and personalized writing” in order to improve the relevance and readability of the responses it shows to users.

Following this line, when the model offers results related to documents uploaded to ChatGPT, whether images or text documents, will provide “deeper” information and, therefore, “more complete” answers.

With all this, it must be taken into account that, for the moment, the capabilities of the GPT-4o AI model are available exclusively to users subscribed to the paid version of ChatGPT Plus.

NEW AUTOMATED TEAMWORK METHOD

In addition to this GPT-4o update, OpenAI has also shared two new research articles showing its advances related to teamwork. This involves work and research methods, both manual and automated, carried out with external experts to test the possible risks of new systems and encourage the development of “safe and beneficial” AI.

The company has clarified that these investigations They are based on the use of “more powerful” AI to “scale the discovery of errors in the models”, both when evaluating them and to train them safely.

In this sense, as the company has highlighted in a statement on its website, the new articles on teamwork include, on the one hand, a white paper detailing how they hire external team members to test their cutting-edge models.

On the other hand, they detail a research study that presents a new method for automated teamwork. Specifically, OpenAI refers to the ability to automate large-scale teamwork processes for AI models. “This approach helps create updated security assessments and benchmarks that can be reused and improved over time,” in relation to the tests carried out by red team experts, the technology company has specified.

Specifically, OpenAI explains in the article that AI models can help in the formation of red teams, as well as offer insights into the possible risks of AI and offer options for evaluating these risks.

That is, the objective of these automated red teams is to generate a large number of examples in which an AI behaves incorrectly, especially in matters related to security. However, unlike human red teams, the company has clarified that automated methods stand out for “easily generating examples of larger-scale attacks.” Additionally, researchers have shared nnew techniques to improve the diversity of such attacks while ensuring that they are successful.

For example, if they wanted to find examples of how ChatGPT offers impermissible illicit advice, researchers could use the GPT-4T model to analyze examples such as “how to steal a car” or “how to build a bomb.” After that, train a separate teamwork model to try to trick ChatGPT into showing responses for each example. The results of these tests are used to improve the security of the model and its evaluations.

“Our research concludes that more capable AI can further assist automated teamwork in the way it analyzes attacker goals, how it judges attacker success, and how it understands the diversity of attacks,” OpenAI said.

However, OpenAI has indicated that this new method for automated teamwork “needs additional work” to incorporate public perspectives on ideal model behavior, policies, and other associated decision-making processes. Therefore, it is still being developed before it begins to be used to test the models.

By Editor

Leave a Reply