Case In Point: Lessons for the Proactive Manager
Volume 17 Issue 10 | October 2025
For National Cybersecurity Awareness Month (and Halloween), I asked our Director of Institutional Compliance & Privacy, Kristin Roberts, to tell us about something that is keeping cybersecurity professionals up at night.
One thing that is certainly unnerving right now is cybercriminals' increasing use of Artificial Intelligence (AI) to make their social engineering attacks more believable and effective.
Malicious actors are taking advantage of Generative AI to scam people with phishing emails, spoofed voice calls, and fraudulent videos of real people. Notably, our annual cybersecurity training focused on this topic as well, a sign that it is a top concern across the industry right now.
Generative AI (GenAI) tools such as ChatGPT and Gemini can create text, images, audio, and video. By learning from real photos, sound clips, videos, and public statements, GenAI can mimic a person's look, voice, mannerisms, and communication style. The telltale signs of phishing, such as poor spelling and grammar, are disappearing, replaced by fictitious yet believable "deepfake" messages from your boss, a coworker, or a family member.
Additionally, tools like FraudGPT and WormGPT are used to generate phishing emails, fake websites, and malicious code, and even to detect vulnerabilities. XXXGPT can produce malware, including remote access trojans (RATs), cryptostealers, and keyloggers, and BlackMamba can rewrite its own code to evade antivirus software. It's spooky out there on the dark web.
Just as cybercriminals are leveraging AI more than ever to craft convincing social engineering attacks and malicious code, defenders can use AI to detect suspicious activity, predict threats, and automate responses at a speed and scale that human security teams could not match on their own. By analyzing large volumes of data and identifying threats in real time, AI can provide proactive, scalable protection. As more GenAI-driven cyber threats emerge, consider how AI could automate and strengthen your own cyber defense processes.
Thank you, Kristin, for enlightening us about a scary aspect of AI. Again, we invite you to review the events across higher education with a view toward proactively managing similar risks at your institution. As always, we welcome your comments and suggestions.

Kevin Robinson
Vice President
Institutional Compliance & Security