
How ChatGPT—and Bots Like It—Can Spread Malware

AI-powered applications can be abused, and it's up to you to protect yourself against malware and other scams.
Joshua Koszalkowski
Contributing Writer

Tools such as ChatGPT and Midjourney are pushing the AI landscape forward faster than ever, generating images and text in seconds from natural language prompts. But the same applications that produce impressive results can also be abused to engineer malware and other unpleasant scams.

Why it matters: AI applications are already being used to write malware and phishing emails. Even when creators of tools like ChatGPT build safeguards against these activities, the guardrails can be sidestepped to produce similar threats. It’s important to guard against them and take control of your own security.

  • AI can generate text, audio, or video that sounds highly believable, especially when it mimics someone you know, such as a boss or colleague.
  • Modern browsers protect against many phishing and scam attacks, so keep them up to date, along with your operating system and apps.
  • While the technology has evolved, attackers still rely on many of the same techniques: pressuring you to act urgently on something that (usually) feels off. Take your time to vet emails and messages, confirm they came from the sender they claim to, and watch for obvious red flags.

Go Deeper →
