Criminals are using artificial intelligence to create more personalized phishing attacks and fool voice-recognition technology. AI is also being used to produce software that breaks into corporate networks in novel ways, disguises appearance and functionality to avoid detection, and smuggles data out of company systems using “normal” processes.
Why it matters: Criminals will also use AI to "rewrite code," said National Security Agency cybersecurity chief Rob Joyce. The result will be smarter account takeovers and phishing as a service, in which criminals hire AI-savvy specialists to find and exploit vulnerabilities. The defense against such attacks will also be fueled by AI, which can scan network traffic logs for anomalies, speed up routine programming tasks, and seek out known and unknown vulnerabilities.
- Some companies, such as Microsoft and software analysis firm Veracode, have released AI tools to defend against such threats.
- Experts warn the architecture of the internet’s main protocols and the layering of flawed programs on top of one another have given criminals the upper hand. They say “security by obscurity” is not enough, as sooner or later attackers will find flaws and exploit them.
- Scammers cloned Zscaler founder Jay Chaudhry’s voice in an attempt to steal from his firm, he revealed at the RSA Conference.