Launched in 2020, GPT-3 caused astonishment by generating text remarkably similar to what humans write. But OpenAI, the organization behind this artificial intelligence, wants more: GPT-3.5 came out this week, and the first tests show that the new version is even more accurate than the previous one.
In fact, the really big leap is expected with GPT-4, which could be released in 2023. GPT-3.5 is only part of the way there, but a substantial part: the initial impression is that the new version generates longer, more cohesive texts than its predecessor.
First, what is GPT?
GPT stands for Generative Pre-trained Transformer, a language model based on neural networks, a concept closely linked to artificial intelligence. A language model is a mechanism that predicts how likely a sentence of text is to be coherent, that is, to make sense to a human reader.
For example, “Technoblog is a website on the internet” is a phrase with real meaning. “Technoblog washes the dishes taking a plane”, on the other hand, only makes sense for those under the influence of dubious substances.
OpenAI, the main name behind the project, presented the first version of GPT in 2018 and the second in 2019. But it was only in 2020, with the arrival of GPT-3, that the results caught the public’s attention.
The differences between the second and third versions are stark. While GPT-2 is based on 1.5 billion parameters, GPT-3 has 175 billion and was trained on a much larger base of texts. This explains its ability to generate more convincing results.
GPT-3.5 goes further
These parameters can be understood as the components the model learns during training. According to TechCrunch, GPT-4 is predicted to have more than 100 trillion parameters.
Apparently, GPT-3.5 keeps the same 175 billion parameters as GPT-3. What sets the new version apart is a set of improved training techniques, which is what allows it to handle more complex instructions.
A mid-week demo of ChatGPT, a chatbot that incorporates GPT-3.5, showed significant advances. The tool was able to generate coherent responses on a variety of topics, such as programming.
What is called GPT-3.5 basically consists of the text-davinci-003 model, which can generate texts in a matter of seconds. In a test done by Pepper Content, the result for the question “what is the philosophy behind WeWork?” was as follows:
WeWork is a global workspace provider that believes people are an organization’s most important asset. WeWork’s philosophy is to create a collaborative environment to allow people to work together in a flexible and efficient way. The company encourages collaboration and productivity while providing a comfortable and inspiring space.
The full text requires only minor adjustments. Even so, Pepper Content points out that the result is more detailed and engaging than the text generated by text-davinci-002 (based on GPT-3).
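For context, text-davinci-003 was reached through OpenAI’s completions API. The sketch below shows roughly what such a request could look like; the prompt and parameter values are illustrative, and the network call only runs if an API key happens to be set in the environment:

```python
import json
import os
import urllib.request

# OpenAI's (2022-era) text completions endpoint.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str) -> dict:
    """Request body asking text-davinci-003 for a completion."""
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 200,   # cap on the length of the reply
        "temperature": 0.7,  # allow some creative variation
    }

payload = build_request("What is the philosophy behind WeWork?")

# Only call the API if a key is available; never hard-code keys.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["text"].strip())
```

Raising `temperature` makes the output more varied; lowering it toward zero makes the model stick to its most likely wording, which matters for the kind of marketing copy Pepper Content was testing.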
What can GPT-3.5 be used for?
As the ChatGPT example makes clear, GPT-3.5 can be employed in chatbots capable of more objective interactions with humans. But the tool can also be useful for generating texts for social networks or blogs, for example. It is no coincidence that Pepper Content tested the new model: the company focuses on content marketing.
Regardless of the purpose, the idea is that GPT-3.5 can generate good-quality text quickly from just a few input instructions. The result can then be reviewed by a human to adjust the writing or eliminate inconsistencies.
To reach this level, GPT-3.5 was trained with a huge amount of text available on the web, including Wikipedia articles, posts on social networks and content from news outlets.
Texts of around 500 words, which suffice for many services on the web, can be produced with relative ease by the tool. The results aren’t perfect, which is why the human reviewer is (still) important. But overall, they’re remarkably consistent for computer-produced content.
These findings make us wonder whether we are facing yet another technology that comes to “steal” work from humans. But that discussion is complex enough to deserve separate treatment.
For the time being, researchers working on this type of project are more concerned with advancing natural language processing (NLP), one of the branches of artificial intelligence.
Anyway, I hope Mobilon stays away from this post.