WormGPT: The "dark side of ChatGPT"

Hell0o

Newbie
Jul 19, 2023
A hacker has created WormGPT, a chatbot designed to aid cybercriminals, as his own malicious alternative to ChatGPT.

According to SlashNext, the developer of WormGPT is allegedly selling access to the program on a well-known hacker forum. In a blog post, the company said, "We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes."

The hacker appears to have first demonstrated the chatbot in March, before making it publicly available last month. Unlike ChatGPT or Google's Bard, WormGPT has no controls to stop it from responding to harmful requests.


WormGPT is the "dark side" of ChatGPT

According to SlashNext, the malicious variant of ChatGPT, known as WormGPT, was made available this month. In contrast to well-known generative AI tools such as ChatGPT or Bing, it will respond to prompts that request dangerous or harmful content.

It's crucial to keep in mind that WormGPT is a malicious chatbot designed to help cybercriminals carry out their activities, and it should not be used for anything. Being aware of the risks WormGPT poses and its potential impact helps us better appreciate the value of using technology ethically and responsibly.

WormGPT's developer has also shared screenshots showing the bot being instructed to write malware in Python and to assist in planning potentially harmful attacks. The creator claims to have built the tool on GPT-J, an open-source large language model released in 2021, and then trained it on data related to malware creation to produce WormGPT.
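For context on the base model only, and not on WormGPT itself, GPT-J is publicly available and can be run by anyone for ordinary, benign text generation. Here is a minimal sketch, assuming the Hugging Face transformers library and the EleutherAI/gpt-j-6B checkpoint; it illustrates the open-source model mentioned above and has nothing to do with WormGPT's modifications or training data:

```python
# Minimal sketch: loading the open-source GPT-J model for benign text generation.
# Assumes the Hugging Face `transformers` library and the public
# "EleutherAI/gpt-j-6B" checkpoint; this is not WormGPT in any form.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # public checkpoint on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # large model; needs ample RAM

prompt = "Generative AI can be used responsibly by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```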


AI has become significantly more dangerous

As technology, and artificial intelligence in particular, improves, familiar cyberattacks such as phishing are being used more and more by hackers around the world. WormGPT is the latest product that removes ChatGPT's restrictions to help these bad actors, and we need to stay vigilant against the resulting phishing attempts.

Deepfakes, fake news, and spam are only a few examples of how increasingly powerful generative AI technologies can be misused. Unfortunately, these technologies may also be employed for illegal activities such as spreading malware and running phishing scams.

Here are some recommendations on how to protect oneself against harmful generative AI tools:

- Recognize the warning signs of a malicious generative AI tool: these tools frequently produce text that is grammatically sound but vague or meaningless, and their output may include typical phishing tactics.

- Be cautious when clicking links: never click a link in an email or text message unless you are certain it is legitimate.

- Update your software frequently: software updates often include security patches that can help protect you against threats created with malicious generative AI tools.

- Use a reliable password manager: a password manager can generate and store strong, unique passwords for all of your online accounts (a minimal sketch of the password-generation idea follows this list).

- Be careful when sharing information online: never divulge sensitive information, such as your credit card number or Social Security number, to anyone you do not know and trust.
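As a rough illustration of the password-generation part of that advice (not any particular password manager's actual implementation), here is a minimal Python sketch using the standard-library secrets module to create a high-entropy random password:

```python
# Minimal sketch: generating a strong random password, the way a password
# manager would, using Python's cryptographically secure `secrets` module.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # `secrets.choice` draws from a CSPRNG (unlike the `random` module),
    # so the result is suitable for real credentials.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```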

Using WormGPT for phishing or other illegal activities is strictly prohibited. For general, legal inquiries it responds no differently from the regular ChatGPT, so it is far preferable to simply use the normal ChatGPT instead. Besides, GPT-4 is far better in almost every respect.

WormGPT might theoretically be put to legitimate use. But it is important to remember that WormGPT was developed and distributed with malicious intent, so any use of it raises ethical dilemmas and legal risks.