Why We Should Be Slightly Alarmed by AI—and What We Can Do About It

Introduction
Imagine a trusted voice suddenly alerting you to an imminent threat. Would you believe it? In 2025, a convincing fake could easily incite panic, and that is the unsettling power of modern AI. The original Whale’s Tales essay posed a critical question: why does AI seem so frightening right now? The short answer is that synthetic voices and images are becoming so convincing that our natural alarm bells no longer ring. The longer answer involves jobs, elections, security, and even the planet’s resources. This article walks through these risks in plain language and, more importantly, offers concrete steps you can take today.
The Deepfake Moment Is Here
- In January 2024, thousands of New Hampshire voters received a phone call purportedly from President Biden instructing them to skip the primary. It was a deepfake. Regulators swiftly banned AI voice robocalls nationwide and imposed fines on those involved. While this incident didn’t trigger a disaster, it demonstrated how quickly misinformation can spread through public channels.
- The takeaway is that deepfakes are not just feasible—they are inexpensive, quick, and increasingly realistic, making them more convincing, especially when we are busy or stressed. Simple verification habits can go a long way (more on that later).
How AI Fuels Everyday Scams
Scammers used to reveal themselves through typos and awkward phrasing. Generative AI has erased those tell-tale signs. The FBI’s Internet Crime Complaint Center reported approximately $16.6 billion in losses in 2024, a significant rise from the previous year. Investigators caution that generative AI lets criminals craft more persuasive lures, clone voices, and mass-produce scams that once took real effort.
Voice cloning is particularly perilous as it undermines trust. The FTC has repeatedly warned about family emergency scams relying on cloned voices pleading for money. If an urgent and emotional call reaches you, hang up and return the call using a known number. Consider setting up a family code word for emergencies.
Democracy in Jeopardy
AI doesn’t just deceive individuals; it can amplify confusion on a mass scale. Even major platforms stumble. In 2024, Google’s AI Overviews produced embarrassing responses, including the now-infamous suggestion to add glue to pizza sauce to keep the cheese from sliding. Although corrected, the incident is a reminder of how easily automated summaries can misread sources and spread inaccuracies at speed.
Countermeasures are on the horizon. A coalition of industry players is integrating tamper-evident provenance into photos, audio, and video using the C2PA standard, with organizations like Adobe, Google, Meta, and Amazon piloting “Content Credentials.” While provenance won’t eliminate all fakes, it raises the stakes of deception and provides investigators with a tracking method.
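To make provenance checking concrete, here is a minimal sketch of what inspecting Content Credentials can look like. It assumes the open-source c2patool command-line utility is installed; the exact JSON layout it prints can vary by version, so treat the field names below as illustrative.

```python
import json
import subprocess

def inspect_content_credentials(path: str) -> dict | None:
    """Read C2PA Content Credentials from a media file, if present.

    Assumes the open-source `c2patool` CLI is installed; it prints a
    file's C2PA manifest store as JSON. Returns None when no manifest
    is found or the tool is unavailable.
    """
    try:
        result = subprocess.run(
            ["c2patool", path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        return None

manifest = inspect_content_credentials("photo.jpg")
if manifest is None:
    # Missing provenance is a clue, not proof of fakery.
    print("No Content Credentials found; verify through other signals.")
else:
    # Field names reflect current c2patool output and may change.
    for m in manifest.get("manifests", {}).values():
        print("Signed claim from:", m.get("claim_generator", "unknown"))
```

As the rest of this article stresses, absent or stripped credentials prove nothing on their own; provenance is one signal among several.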
AI and the Future of Work: Hype, Hope, and Challenges
Will AI replace jobs, or simply transform them? Both scenarios are possible, but the timing and scale remain important.
- A well-cited analysis by Goldman Sachs suggested that generative AI could expose around 300 million full-time jobs globally to automation while simultaneously enhancing productivity and growth. Exposure does not equate to elimination, but the numbers warrant consideration.
- According to the International Labour Organization, generative AI is more likely to automate specific tasks within roles rather than eliminate complete jobs. Clerical positions are particularly at risk, disproportionately affecting women in high- and middle-income countries. Policy decisions will significantly influence outcomes.
- More recent labor data through 2025 indicates no immediate, economy-wide shock from generative AI. A Yale and Brookings analysis noted that the U.S. labor market has remained stable, though certain sectors are feeling pressure.
What should we make of these mixed signals? AI could serve as a productivity booster while simultaneously causing significant disruptions. The near-term risks are concentrated in task-heavy, routine-based jobs (like high-volume customer support or standard document preparation). Over time, the greater risk is that organizations might use AI to reduce skill levels or narrow entry-level pathways, making it difficult for newcomers to enter the job market. The opportunity is to leverage AI to enhance human judgment rather than merely cut costs.
Security Risks and Autonomous Weapons
The worry that a convincing fake could incite a crisis isn’t just fiction. National security experts have long warned that AI could exacerbate miscalculations in tense situations. Concurrently, militaries are incorporating more autonomy into their operations. U.S. Defense Department policy still requires that autonomous and semi-autonomous weapons be designed so commanders and operators can exercise appropriate levels of human judgment over the use of force. While international negotiations around autonomous weapons continue, binding global regulations remain elusive.
What the rules look like depends heavily on politics. In 2025, the Trump administration shifted national AI policy, rescinding the previous executive order’s emphasis on AI safety testing, content provenance, and reporting requirements in favor of accelerating AI development. States are stepping in; California, for instance, enacted an AI safety disclosure law in September 2025. Whatever your perspective, the regulatory landscape is evolving rapidly and not always in a straight line.
The Environmental Impact of AI
Powerful models require massive computational resources, which in turn demand significant energy and water. Although data centers’ direct water use constitutes a minor share of total consumption nationally, local impacts can be severe, especially in drought-affected areas. Analysts predict that U.S. data center water use could potentially double or quadruple between 2023 and 2028, with the indirect water usage for power generation often overshadowing on-site cooling. Major cloud providers are experimenting with designs to reduce water usage while pledging to be water-positive. Progress is underway, but transparency and site selection are crucial.
The Reality Check: Panic is Optional
AI can and does make blunders. If you’ve heard an AI-generated narrator mispronounce a band name or reduce a poem to a monotonous tone, you know the feeling. As technologies improve, however, the old mistakes will become harder to detect. The best response isn’t resignation; it’s upgrading our norms and tools to ensure truth has a fighting chance.
Practical Steps You Can Take Today
For Everyone
- Pause Before Sharing: If a post incites fear or outrage, take it as a signal to verify first. Use reverse image search, check the date, and consult trusted sources.
- Verify Voices and Videos: If you receive a concerning call or video message, disconnect and return the call using a known number. Consider a family code word for emergencies.
- Look for Content Credentials: Some media now feature C2PA “Content Credentials.” If your tools allow, inspect the metadata. Treat missing provenance as a clue rather than proof.
- Use Two-Factor Authentication and a Password Manager: As AI lowers the cost of phishing, strong security measures can prevent many attacks. (A short sketch after this list shows how the rotating codes behind 2FA work.)
- Be Cautious With Money Requests: Romance, crypto, and investment scams increasingly use polished AI-generated text and voice. If it sounds too good to be true, it likely is.
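For the curious, here is a minimal sketch of how the rotating one-time codes behind most 2FA apps are generated and checked. It uses the pyotp library purely for illustration; any standard TOTP implementation behaves the same way.

```python
import pyotp

# A shared secret is established once, e.g., via the QR code your
# authenticator app scans. Here we generate one fresh for the demo.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # six-digit code that rotates every 30 seconds

code = totp.now()
print("Current one-time code:", code)

# The server holds the same secret and verifies the submitted code.
# A phished password alone is useless without this rotating code.
print("Valid:", totp.verify(code))
```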
For Teams and Organizations
- Implement a Risk Framework: NIST’s Generative AI profile builds on the AI Risk Management Framework and provides actionable controls for development and deployment. Use it to pose insightful questions and document decisions.
- Train for Deepfake Resilience: Conduct tabletop exercises focused on fake CEO voices requesting wire transfers or fabricated crisis videos. Have clear protocols for routing suspect media to verification teams quickly.
- Layer Defenses Against Voice Fraud: Some organizations are developing challenge-response methods and “liveness” scoring to flag suspicious calls. Combine machine evaluations with human response options (a minimal sketch of the idea follows this list).
- Prioritize Provenance and Watermarking: Whenever possible, use tools that embed provenance metadata and detect tampering. Don’t depend solely on one signal; combine provenance checks with behavioral and contextual assessments.
- Communicate With Employees: Be transparent about internal AI deployment, goals, privacy considerations, and performance expectations. Clarity helps mitigate fear and misuse.
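As an illustration of the challenge-response idea, here is a small sketch of how a call-screening flow might combine a spoken challenge with a model-produced liveness score. The word list, thresholds, and helper names are all hypothetical; a real deployment would plug in speech recognition and an anti-spoofing model.

```python
import random

# Hypothetical word list; a real system would draw from a larger pool.
CHALLENGE_WORDS = ["harbor", "velvet", "cobalt", "meadow", "lantern"]

def issue_challenge() -> str:
    """Ask the caller to repeat a phrase chosen at call time.
    A pre-recorded or pre-generated clone cannot know it in advance."""
    return " ".join(random.sample(CHALLENGE_WORDS, 3))

def score_call(transcript: str, challenge: str, liveness: float) -> str:
    """Combine the challenge check with a liveness score from an
    anti-spoofing model. Thresholds are illustrative, not tuned."""
    challenge_ok = all(w in transcript.lower() for w in challenge.split())
    if challenge_ok and liveness >= 0.8:
        return "proceed"
    if challenge_ok and liveness >= 0.5:
        return "escalate_to_human"   # machine is unsure: route to a person
    return "block_and_call_back"     # fail closed; verify on a known number

print("Example challenge:", issue_challenge())
challenge = "harbor cobalt meadow"
print(score_call("uh, harbor cobalt meadow", challenge, liveness=0.62))
# -> escalate_to_human
```

The design point is the routing, not the thresholds: a failed or ambiguous check should fall back to a human and a call-back on a known number, never to a machine-only approval.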
Policy Signals to Monitor
- Europe’s AI Act: This law entered into force in August 2024, with obligations rolling out through 2025 and 2026, including regulations for general-purpose AI. If you operate in the EU, the first compliance deadlines have already arrived.
- U.S. Policy and State-Level Actions: In 2025, U.S. policy shifted from mandatory safety reporting to a focus on growth and competitiveness. Expect more actions from states and sector regulators (e.g., the FCC on robocalls), and watch Congress for targeted legislation regarding deepfakes and critical infrastructure.
A Note on AI’s Creative Frontier
AI video systems have made significant advances, exciting filmmakers and educators alike. However, these realistic audio and visual capabilities make it harder to differentiate real from fake. Leading tools are now including watermarks and metadata to help users and platforms distinguish AI-generated content from original footage. While these enhancements assist, anticipate an ongoing struggle between detection and evasion. Approach unusual content with a healthy dose of skepticism and prioritize sources that provide provenance.
What to Fear, What to Fix, and What to Build
Why should AI unnerve us—just a little? Because it can disrupt human trust at machine speed, and the incentives surrounding its development might lead to cost-cutting rather than implementing necessary protections. But merely being afraid doesn’t constitute a viable strategy. What’s effective is a blend of habits, tools, and regulations that increase the cost of deception while making verification more accessible. The way forward involves:
- On a Personal Level: Slow down, verify, and refrain from rushing to transfer money or share information.
- On an Organizational Level: Establish clear risk frameworks, conduct drills, and integrate provenance into your content workflow.
- On a Policy Level: Advocate for smart, targeted regulations that minimize harm without stifling innovation.
By taking these steps, we can harness the benefits of an incredible new tool while safeguarding our essential human faculties.
FAQs
Q1: How can I spot a deepfake video or voice?
A: Look for small discrepancies such as mismatched lighting, unnatural blinking, sync issues in lip movements, or room acoustics that don’t match the environment. Always check the source account and seek confirmation from several reputable outlets. If you’re still uncertain, use reverse image or frame searches and check for C2PA content credentials when available. When in doubt, refrain from sharing.
Q2: Will AI take my job?
A: AI is more likely to automate specific tasks rather than entire jobs in the near term. However, some positions may shrink or evolve. Clerical roles are particularly vulnerable; professional roles will adjust as routine tasks get automated. Upskilling and redesigning jobs can enhance your position in the job market. Recent research indicates no broad employment shock presently, though trends vary across sectors.
Q3: Are watermarks and content credentials enough to prevent deepfakes?
A: Not alone. Watermarks, C2PA metadata, and visible content labels complicate deception and improve traceability, but malicious actors can strip or mimic these signals. Use provenance alongside multiple verification methods, including source credibility and independent confirmation.
Q4: What is the EU AI Act, and does it impact U.S. companies?
A: The EU AI Act is a comprehensive law categorizing AI systems by risk and imposing obligations accordingly. It went into effect in August 2024, with notable obligations beginning in 2025 and 2026. If you provide AI-driven products or services in the EU, you may need to comply with these rules, irrespective of your location.
Q5: What specific changes did the FCC make regarding robocalls?
A: In February 2024, the FCC ruled that AI-generated voices in robocalls are subject to federal robocall laws, enhancing regulatory and consumer efforts to tackle offenders. Although this doesn’t eliminate scams, it elevates the legal risks for perpetrators.
Key Sources for Further Reading
- FCC’s ban on AI voice robocalls and enforcement initiatives.
- Alerts from the FBI and IC3 on trends in AI-enabled fraud.
- NIST’s Generative AI Profile for AI Risk Management Framework.
- The timeline and scope of the EU AI Act.
- Reports from ILO and Goldman Sachs regarding job automation risks.
- Water and energy consumption in data centers.
The Bottom Line
AI should evoke a reasonable level of concern because of its rapid evolution and its integration into our daily lives—be it in learning, shopping, working, or voting. Yet, we are not powerless. By adopting verification habits, utilizing intelligent tools, and advocating for sensible regulations, we can harness AI’s advantages while safeguarding what matters most: truth, trust, and our human judgment.
Thank You for Reading this Blog and See You Soon! 🙏 👋
Let's connect 🚀