Generative AI is supercharging both sides of cybersecurity. Attackers are using it to write highly convincing phishing messages, clone voices, and scale “smishing” (texts) and “vishing” (voice) scams far beyond what we’ve seen before.
The good news: defenders benefit from AI, too, through faster detection, smarter response, and better user awareness.
What to watch for:
- Polished, personal emails or texts that create urgency (e.g., “Act now!”) or request MFA codes, wire transfers, or gift cards.
- Voice cloning calls that sound like a colleague, leader, or family member asking for money or credentials, often with intense urgency.
- Deepfake video or audio in meetings or messages. Clues include subtle lip‑sync issues, odd pauses, or mismatched lighting.
How to stay safe:
- Slow down and verify. If a message or call asks for money, data, or codes, call the sender back on a number you already trust.
- Use MFA everywhere and never share one‑time codes.
- Be mindful of your voice online. Just a few recorded words can be enough to convincingly clone a voice.
Call to action:
Explore available training resources to learn more. If you have any questions, contact Technology Support at (410) 313‑7004 (option 4) or abuse@hcpss.org.
Learn more (trusted references):
- FBI IC3 PSA: Smishing and AI voice (vishing) campaigns, what to look for and how to respond. (Read the PSA)
- FTC Consumer Advice: How to spot and handle AI voice cloning scams. (Fighting Back Against Harmful Voice Cloning)
- Microsoft Security, Cyber Signals: AI‑powered deception trends and defenses. (Issue 9: Emerging Fraud Threats and Countermeasures)
- Microsoft 365 Tips: What voice‑cloning scams are and practical ways to protect yourself. (Voice Cloning Scams Explained)
- Optional (reporting beyond HCPSS): You can also file a report with the FBI at IC3.gov. (Internet Crime Complaint Center)