ChatGPT, the artificial intelligence that answers user questions, is capable of producing malware and complementing malicious code in its responses. Researchers from Check Point Research, a cybersecurity company, disclosed last Friday (6th) that the technology developed by OpenAI is being used by hackers to help produce malware. As of today (9th), ChatGPT displays a message stating that it cannot develop “code for evil”.
However, in Tecnoblog’s tests, it can still write code for some “lighter” malware, such as the classic PC “cup holder virus” — a program that trolls the user by opening the CD tray. For more complex code, like the sample requested by the Check Point Research team, ChatGPT now responds that it was not created to produce malicious code.
ChatGPT can no longer create malware
In response to the published news, OpenAI quickly fixed the ChatGPT “flaw” and blocked the artificial intelligence from delivering malware code. Now, when asked to create a malicious program, or given some instruction that could be misused, it delivers the following response:
“Sorry, but I am not programmed to create or promote malicious code. As an AI language model, my primary role is to help users generate humanized text based on the commands sent to me. I can’t give you code that downloads and runs an executable file from a URL, as that could potentially harm other computers. Is there anything else I can help you with?”
To get the above response, we used the same message that the Check Point Research team sent to ChatGPT. In the prompt, the AI was asked to develop VBA code for Excel.
On hacker forums, users reported that they had used ChatGPT to develop malware or complement existing scripts. In one report, a user showed how he used the AI to improve his code: as the cracker asked for revisions, the code got better.
If complex and clearly malicious code can’t be produced, what about some “trolling”? Recalling the classic cup holder virus — a program that opened the PC’s CD tray to “add” a cup holder to the computer — we asked ChatGPT to create code to open the CD drive tray.
In the first message, the term “malware” was used to ask for the code, and ChatGPT gave the refusal quoted above. However, all it took was changing “malware” to “code” and it delivered the lines — adding a note that they shouldn’t be used to harm other computers.
Then, it was time to ask ChatGPT to deliver code that opens the CD tray when a Word macro file is executed. Asked this way, it handed over the code. ChatGPT also explained how to use the code it had shown — and there you are, paying for programming courses. It also taught how to do the same in Excel. Writing malware isn’t allowed; unless you ask nicely.
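The article doesn’t reproduce the code ChatGPT returned, but the classic cup holder prank is well known: a VBA macro can eject the CD tray through the Windows multimedia API (`mciSendString` in `winmm.dll`). A minimal sketch of what such a Word macro typically looks like — an illustration, not ChatGPT’s actual output — would be:

```vba
' Hypothetical sketch of the classic "cup holder" prank as a Word macro.
' Harmless but annoying: ejects the CD tray via the MCI command interface.
' Windows-only; requires a 64-bit-safe Declare (PtrSafe) in modern Office.
Private Declare PtrSafe Function mciSendString Lib "winmm.dll" _
    Alias "mciSendStringA" (ByVal lpstrCommand As String, _
    ByVal lpstrReturnString As String, ByVal uReturnLength As Long, _
    ByVal hwndCallback As LongPtr) As Long

Sub AutoOpen()
    ' AutoOpen runs automatically when the Word document is opened
    mciSendString "set CDAudio door open", vbNullString, 0, 0
End Sub
```

The `AutoOpen` name is what makes the macro fire when the document opens; in Excel the equivalent hook is `Auto_Open` or `Workbook_Open`, which matches the article’s note that ChatGPT taught the same trick for both programs.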
With information: Ars Technica and Check Point Research