    Anthropic’s AI Won’t Help with Surveillance? Here’s Why the Feds Aren’t Happy

    Anthropic is working overtime to brand itself as the chill, responsible one in AI, though not without ruffling some powerful feathers. 

    According to a Semafor report, the firm’s strict no-go rules on surveillance have put it at odds with the Trump administration and federal law enforcement agencies, who aren’t thrilled about being told “no” by a chatbot company.

    Here’s the deal: Anthropic’s usage policy flat-out bans its AI from being used for criminal justice, censorship, or surveillance. (Via: Gizmodo)

    That means no analyzing someone’s emotional state, no tracking a person’s movements, no censoring government critics. 

    This has allegedly frustrated agencies like the FBI, Secret Service, and ICE, all of whom have been exploring AI tools to supercharge their surveillance capabilities.

    Anthropic even offers the federal government access to its Claude tools for just $1, but unlike competitors such as OpenAI, it leaves fewer loopholes. 

    OpenAI, for example, only restricts “unauthorized monitoring,” which could leave wiggle room for “legal” surveillance. Anthropic, by contrast, is the strict parent shutting down the party early.

    That doesn’t mean Claude is off-limits to Uncle Sam entirely. 

    The company built ClaudeGov, a special version for the intelligence community, which has received “High” FedRAMP authorization, meaning it’s cleared for sensitive workloads like cybersecurity. Still, domestic surveillance remains a red line.

    One administration official complained that Anthropic’s policy “makes a moral judgment” about how law enforcement operates. 

    Which, yes, it does. But it’s also a legal shield, as much about liability as ethics. 

    If the government is annoyed that it can’t use Claude to automate surveillance, the real headline might be that the government wants to automate surveillance in the first place.

    Anthropic’s principled stance comes as part of its broader PR play. Earlier this month, it became the only major AI firm to back California’s proposed AI safety bill, which could force companies to prove their models aren’t ticking time bombs.

    And yet, the halo is a little tarnished: the company just agreed to a $1.5 billion settlement for pirating millions of books and papers to train Claude, while its valuation ballooned to nearly $200 billion.

    So yes, Anthropic is trying to be the “good guy” in AI. Just don’t ask the authors it underpaid, or the FBI.

    Is Anthropic’s refusal to allow AI surveillance a principled stand for civil liberties, or does it create unnecessary obstacles for legitimate law enforcement activities? Should AI companies have the right to set moral boundaries on how government agencies use their technology, or should these decisions be left to courts and legislators?



    Ronil is a computer engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications such as MakeUseOf, TechJunkie, and GreenBot. When not working, you’ll find him at the gym setting a new PR.




