Cyberattackers are using generative AI to draft polished spam, create malicious code, and write persuasive phishing lures. They are also learning how to turn AI systems themselves into points of compromise. Recent findings highlight this shift: researchers from Columbia University and the University of Chicago studied malicious email traffic collected over three years, and Barracuda Research has tracked attackers exploiting weaknesses in AI assistants and tampering with AI-driven security tools.

AI in email-based attacks

Messages …
The post How attackers poison AI tools and defenses appeared first on Help Net Security.