
GPT-4: launch imminent for new OpenAI model, what we know

Andreas Braun, CTO of Microsoft Germany, has announced the imminent arrival of GPT-4. He also mentions multimodal AI models, capable of handling several formats, including video.

Is OpenAI about to introduce GPT-4?

The launch of ChatGPT allowed the general public to discover the capabilities of OpenAI’s models, particularly those based on GPT-3. These technologies have been available since 2020 via the company’s API, and developers can now even integrate ChatGPT’s technology, based on GPT-3.5, into their applications thanks to a dedicated API.
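For illustration, here is a minimal sketch of such an integration, assuming the official openai Python package (as it worked around early 2023) and an API key stored in an OPENAI_API_KEY environment variable; the prompt is purely illustrative:

# Minimal sketch: calling the ChatGPT model (gpt-3.5-turbo) through the
# dedicated API, using the openai Python package (v0.27-era interface).
import os
import openai

# Assumption: the API key is provided via an environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-3.5 in one sentence."},
    ],
)

# The generated reply is in the first choice's message content.
print(response["choices"][0]["message"]["content"])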

The success of ChatGPT has raised expectations – and speculation – about the next OpenAI language model, namely GPT-4. The latest rumors suggested a launch in the first quarter of 2023, and that may well be true… as early as next week.

We will present GPT-4 next week

During the KI im Fokus event organized by Microsoft, the CTO of the company’s German branch dropped two important pieces of information.

We will present GPT-4 next week, and we will have multimodal models that will offer completely different possibilities – for example videos.

Clearly, the next OpenAI language model will be presented next week, though it remains to be seen whether by OpenAI, by Microsoft, or jointly by both companies. The timing fits: Satya Nadella and Jared Spataro have scheduled announcements on artificial intelligence for next Thursday, March 16, although those were expected to focus on the application of AI in the world of work.

A multimodal artificial intelligence?

Another important point: in the same sentence, Andreas Braun mentioned the arrival of multimodal models, i.e. models capable of handling several types of content, including video, without clearly stating whether this applies to GPT-4 or not.

In any case, this would be a major evolution for artificial intelligence models. These announcements come one week after Microsoft researchers presented Kosmos-1, an impressive multimodal AI model capable of interpreting the contents of an image.

What we know about GPT-4

For the moment, we know relatively little about GPT-4. Regarding the presentation date, the statement from Microsoft Germany’s CTO seems to confirm earlier reporting by the New York Times. OpenAI’s CEO, Sam Altman, had however tempered expectations, specifying that GPT-4 would be released only when he was convinced it could be done “safely, responsibly”. Also keep in mind that some time will probably pass between the presentation of GPT-4 and its availability via an API or an interface such as ChatGPT.

Regarding GPT-4’s capabilities, Sam Altman has also said that “people are begging to be disappointed and they will be”. An astronomical number of parameters had initially been mentioned: 100,000 billion (100 trillion), or roughly 571 times the size of GPT-3’s 175-billion-parameter neural network. It seems that the innovations brought by GPT-4 lie instead in optimizing how the model is trained. We hope to know more next week!
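As a quick sanity check on that figure, here is the back-of-the-envelope arithmetic in Python, assuming GPT-3’s published size of 175 billion parameters:

# Back-of-the-envelope check of the rumored parameter count.
gpt3_params = 175e9            # GPT-3: 175 billion parameters
rumored_gpt4_params = 100e12   # rumor: 100,000 billion = 100 trillion parameters

ratio = rumored_gpt4_params / gpt3_params
print(f"The rumored GPT-4 would be about {ratio:.0f} times the size of GPT-3")  # ~571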

Source: Heise.
