The Intercept https://theintercept.com/technology/ Tue, 30 Dec 2025 22:45:50 +0000 en-US hourly 1 https://wordpress.org/?v=6.9 220955519 <![CDATA[These Apps Let You Bet on Deportations and Famine. Mainstream Media Is Eating It Up.]]> https://theintercept.com/2025/12/29/polymarket-kalshi-betting-prediction-cnn-news-media/ https://theintercept.com/2025/12/29/polymarket-kalshi-betting-prediction-cnn-news-media/#respond Mon, 29 Dec 2025 11:00:00 +0000 “The long-term vision is to financialize everything and create a tradable asset out of any difference in opinion.”

The post These Apps Let You Bet on Deportations and Famine. Mainstream Media Is Eating It Up. appeared first on The Intercept.

Tarek Mansour, co-founder of Kalshi, during a joint SEC-CFTC roundtable at SEC headquarters in Washington, D.C., on Sept. 29, 2025.  Photo: Kent Nishimura/Bloomberg via Getty Images

How many people will the Trump administration deport this year? Will Gaza suffer from mass famine? These are serious questions with lives at stake.

They’re also betting propositions that two buzzy startups will let you gamble on.

The Supreme Court’s 2018 decision clearing the way for states to legalize sports betting gave rise to a host of apps making it ever easier to gamble on games. Kalshi and Polymarket offer that service, but also much more. They’ll take your bets, for instance, on the presidential and midterm elections, the next Israeli bombing campaign, or whether Jeff Bezos or Mark Zuckerberg will get divorced.

Tarek Mansour, the CEO of Kalshi, laid it out simply at a conference held by Citadel Securities in October. “The long-term vision,” Mansour said, “is to financialize everything and create a tradable asset out of any difference in opinion.” It’s as dystopian as it sounds.

If you believe the hype, the promise of these companies isn’t in the money they take in as bookmakers. They argue that the bets they collect offer a more accurate forecast of the future than traditional institutions do. (In fact, they’ll tell you that you’re not betting at all but trading futures contracts — a distinction so tenuous that even its defenders rarely offer a full-throated explanation.)

This pitch has been especially enticing in the wake of the 2016 election, when polling missed the rise of Donald Trump, and its allure hasn’t faded as collective distrust of traditional institutions grows. But if the initial wave of social platforms — the Facebooks and Twitters of the world — fractured our sense of a shared reality, the predictive platforms are here to monetize the ruins.

Polymarket acknowledges the gravity of some of its more shocking propositions. It tells those who click on its more unsavory wagers: “The promise of prediction markets is to harness the wisdom of the crowd to create accurate, unbiased forecasts for the most important events to society. That ability is particularly invaluable in gut-wrenching times like today.” The app goes on to say that “After discussing with those directly affected by the attacks, who had dozens of questions, we realized prediction markets could give them the answers they needed in ways TV news and 𝕏 could not.”

It might seem odd, then, that these very platforms have lately been signing deals to entrench themselves in mainstream news coverage. Earlier this month, Kalshi signed on as an exclusive partner to offer its wagers on CNN and CNBC. Polymarket signed a similar deal with Yahoo Finance last month. Time Magazine signed with the lesser-known platform Galactic.

For publishers, prediction markets offer a salve for deteriorating trust in journalism. For betting markets, these partnerships could help legitimize an industry that was mostly illegal until a few months ago. The marriage of these two industries is perhaps best encapsulated by Time Magazine’s recent press release announcing its partnership with Galactic. Stuart Stott, CEO of Galactic, called the deal “a new normal for readers” that promises them “the opportunity to participate in where the future is going.” Time Magazine COO Mark Howard described the partnership as motivated by the company’s “ambition to continue to push the boundaries of traditional media to ensure our content and audience experience is compelling, accurate, and evolving.”

Set aside the extreme cynicism in the conceit that audiences need to bet on genocide in order to read about it — if accuracy and trust are a concern, these partnerships may end up doing the media more harm than good.

To understand why the prediction market apps believe they’re better forecasters of the future, one needs to understand their governing philosophy: the “wisdom of the crowd.” The theory goes: In a well-functioning market with a diverse group of participants, traders acting on different information and insights collectively arrive at the most accurate price — or, in this case, the most accurate probability of an event happening. The market, in other words, will self-correct toward the most accurate outcome.

Betting apps have at times proven more accurate than polls. While pollsters clocked last year’s presidential race as deadlocked in the days before the election, for example, Polymarket gave Trump an edge at 58 percent.

But whether they are consistently better is another story. Some initial analyses suggest they might not be as accurate as these companies claim. One study found that Kalshi’s political prediction markets beat chance 78 percent of the time during the final five weeks of the 2024 U.S. presidential campaign, compared with 67 percent accuracy on Polymarket. PredictIt — one of the oldest prediction markets, run by New Zealand’s Victoria University of Wellington and subject to stricter limits on how much users can bet — came out on top at 93 percent. But even PredictIt got the 2016 election as wrong as the polls did, and in the days preceding the last election it suggested a slight edge for Kamala Harris that obviously didn’t materialize.

That same study found that, when tracking the same event, different prediction markets often reacted in very different ways to the same information during the same time frame — something that wouldn’t happen if the markets were as efficient at forecasting as their pushers suggest. “Markets are composed of humans, not omniscient rational forecasters,” the paper’s authors write.

One reason Kalshi and Polymarket may struggle with accuracy hinges on who makes up the crowd. On November 6, 2024, in a rush of people collecting their post-election winnings, Kalshi saw a peak of around 400,000 users, and Polymarket counted about 100,000 fewer, according to a Fortune review; by June, their daily active user counts had fallen over 90 percent, to 27,000–32,000 and 5,000–10,000, respectively. While the companies don’t publish much demographic information, by some accounts their user bases skew toward crypto bros.

That can make these platforms just as inaccurate in edge cases, when they lack the requisite diversity to glean much wisdom about the real world. Consider the 2022 midterm elections: Up until election night, the major prediction markets “failed spectacularly” and “projected outcomes for key races that turned out to be completely wrong,” according to one expert analysis.

While polls are far from perfect, prediction markets are also more prone to manipulation than they’d have you believe. And this can give deep-pocketed political actors another vehicle for information warfare.

Kalshi was even embroiled in a legal battle with federal regulators as recently as this summer for this very reason. In its brief, the Commodity Futures Trading Commission pointed toward a “spectacular manipulation” on Polymarket involving “a group of traders betting heavily on Vice President Harris.” “Unwitting participants may believe Kalshi’s contracts are less susceptible to manipulation or misinformation because they are on a regulated exchange, but this should heighten concern for the public interest, not allay it,” the CFTC continued.

One study found that trades intended to manipulate a market could still be skewing odds as much as 60 days after the original trade. It also suggested that the best way to game a prediction market is to make repeated bets of “varying sizes” on a single market.

According to the CFTC, when the agency raised the possibility of this type of election interference, Kalshi argued that the regulator could simply use its enforcement authority against bad actors. But as the agency noted: “The CFTC cannot remediate damage to election integrity after the fact.” Despite these grave concerns, since Trump took office and installed crypto insiders to oversee the CFTC, the agency has largely dropped its lawsuits and investigations against Polymarket and Kalshi.

The major betting platforms have also aligned themselves with Trump’s inner orbit.

Both Polymarket and Kalshi count Donald Trump Jr. as an adviser. His venture capital firm has invested in Polymarket, whose founder Shayne Coplan has framed investigations against his company as politically motivated attacks by the outgoing Biden administration.

One doesn’t have to look far to see how the company’s position in the Trumpverse has translated into what could very well be election interference. Shortly before Election Day in New York last month, Polymarket ran a questionable advertisement featuring an AI-generated, tearful-looking Zohran Mamdani under the headline: “BREAKING: Mamdani’s odds collapse in NYC Mayoral Election.” As the ad ran, however, Polymarket’s own platform showed no such collapse in Mamdani’s odds. Whether Polymarket intended to bait users into betting more, or to dissuade Mamdani voters ahead of Election Day, is unclear. What is clear is that for a platform partnering with a news organization, a commitment to veracity does not appear to be its first priority.

The first priority appears to be growing the number of customers. That’s likely why these betting apps are now trying to team up with major broadcasters and publications: Reporting shows that both Kalshi and Polymarket are losing bettors, which stands to hurt their bottom lines and make their predictions worse.

Whether deals between betting apps and news outlets will help either industry is an open question. But these partnerships may just end up worsening our crisis of trust in an already-fraught information environment.

<![CDATA[Anti-Palestinian Billionaires Can Now Control What TikTok Users See]]> https://theintercept.com/2025/12/21/tiktok-ellison-oracle-israel-gaza/ https://theintercept.com/2025/12/21/tiktok-ellison-oracle-israel-gaza/#respond Sun, 21 Dec 2025 21:14:04 +0000 https://theintercept.com/?p=506112 Users need to revolt against what will very likely be an even more widespread effort to censor voices critical of Israel.

The post Anti-Palestinian Billionaires Can Now Control What TikTok Users See appeared first on The Intercept.

TikTok’s Chinese owner ByteDance has signed binding agreements with U.S. and global investors to operate its business in America, it told employees on Dec. 18, 2025. Photo: Qin Zihang/VCG via Getty Images

The TikTok deal announced on Thursday poses a fundamental threat to free and honest discourse about Israel’s ongoing genocide of Palestinians in Gaza. Under the reported deal, the Chinese company that owns the short-video social media app, ByteDance, will transfer control of TikTok’s algorithm and other U.S. operations to a new consortium of investors led by the U.S. technology company Oracle. The long-gestating deal will give Oracle’s billionaire pro-Trump board members Larry Ellison and Safra Catz the power to impose their anti-Palestinian agenda over the content that TikTok users see.

Most mainstream U.S. media coverage of the TikTok deal has completely ignored the explicitly anti-Palestinian agenda of its biggest Western investors. TikTok has played a critical role in helping hundreds of millions of users see the ugly reality of Israel’s genocidal war in Gaza. But the Trump-favored billionaires who will take over TikTok’s U.S. operations have a documented agenda of both suppressing voices critical of Israel and supporting the very Israeli military that has killed so many Palestinian civilians. Without safeguards in place, TikTok’s U.S. operations could soon become an exercise in blocking users from seeing and reacting to the crimes against humanity perpetrated by a major U.S. ally.

Ellison and Catz have a documented record of supporting Israel and its military. Ellison is a major donor to the Israeli military — in 2017, he donated $16.6 million to Friends of the Israel Defense Forces, at the time the nonprofit’s largest single donation ever — and a close confidant of Israeli Prime Minister Benjamin Netanyahu.

Catz, who stepped down as Oracle’s CEO in September, has also been quite blunt about the company’s ideological agenda. Unveiling a new Oracle data center in Jerusalem in 2021, the Israeli American billionaire said: “I love my employees, and if they don’t agree with our mission to support the State of Israel then maybe we aren’t the right company for them. Larry and I are publicly committed to Israel and devote personal time to the country, and no one should be surprised by that.” The Ellison family has also brought its pro-Israel agenda to CBS News, where Larry’s son, David Ellison, recently installed the anti-Palestinian ideologue Bari Weiss as editor-in-chief.

TikTok played an important role in the sea change of U.S. opinion about Israel, particularly among young people. It’s why the Council on American-Islamic Relations, or CAIR, the organization I work for, condemned the sale as a “desperate” attempt to silence young Americans.

What’s at stake is no less than whether or not U.S. voters will continue to be able to see what Israel’s military is doing to Palestinians. While many mainstream media outlets pushed coverage of Israel’s war in Gaza that was deferential to Israeli government talking points, TikTok users watched unfiltered videos of Israel’s horrific attacks on Palestinian civilians.

The effects are undeniable: A March Pew Research poll found Israel’s unfavorable rating among Republicans aged 18 to 49 had risen from 35 to 50 percent (among Democrats of the same age group, the country’s unfavorability also climbed almost 10 percentage points, to 71 percent). A September New York Times/Siena College survey found 54 percent of Democrats said they sympathized more with the Palestinians, while only 13 percent expressed greater sympathy for Israel.

Israeli Prime Minister Benjamin Netanyahu has made it clear that he understands the consequences of access to unfiltered social media. He recently described the sale of TikTok as “the most important purchase happening. … I hope it goes through because it can be consequential.” Netanyahu, who faces an arrest warrant from the International Criminal Court for crimes against humanity in Gaza, sees control of TikTok as a part of Israel’s military strategy. “You have to fight with the weapons that apply to the battlefield, and one of the most important ones is social media,” he continued.

President Joe Biden signed legislation in 2024 mandating that ByteDance sell its U.S. operations. That law forced the sale of TikTok under threat of an outright ban, which briefly took effect in January 2025. The new “agreement,” which is reportedly set to close on January 22, will establish a new and separate TikTok joint venture that will control U.S. operations, U.S. user data, and the TikTok algorithm. Just over 80 percent of the new company, dubbed “TikTok USDS Joint Venture LLC,” will reportedly be owned by investors that include Oracle, private equity group Silver Lake, and Abu Dhabi-based MGX. ByteDance will retain a 19.9 percent share.

The official arguments for forcing the sale focused on preventing Chinese government surveillance of TikTok users, but some elected U.S. officials were more honest. At a McCain Institute forum in May 2024, then-Sen. Mitt Romney said, “Some wonder why there was such overwhelming support for us to shut down potentially TikTok or other entities of that nature. If you look at the postings on TikTok and the number of mentions of Palestinians, relative to other social media sites — it’s overwhelmingly so among TikTok broadcasts.”

That’s why advocates for human rights and a free press must work to challenge and reverse this government-sanctioned censorship effort. That means calling on both current and future members of Congress, as well as future White House administrations, to undo this dangerous media consolidation. The Ellison family’s control of TikTok, Paramount, and potentially other massive media properties in the future is a threat to free and open public discourse about U.S. foreign policy, particularly U.S. military support for Israel.

Organizers with the #TakeBackTikTok campaign projected a film about Larry Ellison’s pro-Israel agenda on Oracle’s U.K. headquarters on Dec. 12, 2025. Photo: TakeBackTikTok

The work of chilling dissent is already well underway. Even before the 2024 law was passed, TikTok had begun taking steps to silence users who criticized Israel. In July 2025, TikTok hired Erica Mindel, a former Israeli soldier with a documented record of anti-Palestinian politics, to police user speech on the platform. Given the Israeli military’s long record of propaganda, war crimes, and crimes against humanity, especially toward Palestinians, no former Israeli soldier should have been given the power to police TikTok users’ speech.

Even so, savvy social media users have long demonstrated an ability to organize and evade social media censorship, jumping from platform to platform regardless of what Western billionaires like Elon Musk and Mark Zuckerberg have tried to do. These challenges will continue in new forms, as demonstrated by the recently launched #TakeBackTikTok campaign. The campaign is pushing for a “user rebellion” in which American TikTok users challenge the Oracle takeover by flooding the platform with content in support of Palestinian liberation. Organizers began making their case last weekend with a massive projection onto Oracle’s U.K. offices.

This is a critical moment. The transfer of TikTok’s algorithm from ByteDance to Oracle would mean that TikTok’s content would move from being controlled by a company under the influence of a Chinese government committing genocide against Uyghurs to being controlled by U.S. investors who want to silence TikTok users’ opposition to Israel’s genocide in Gaza. Once billionaire anti-Palestinian investors and ideologues take control, TikTok users who are critical of Israel will need to fight even harder and more creatively to evade the suppression of free speech. Millions of U.S. citizens now support an end to unquestioned diplomatic and military support for Israel. Anti-Palestinian billionaires like Ellison and Catz know this full well, and it’s up to us to stand in the way of their efforts to subvert the will of the many.

Correction: December 21, 2025, 6:10 p.m. ET
This story previously stated that, under the deal, Oracle could now moderate the content that 2 billion users see, which is the number of TikTok users globally, rather than in the U.S. As the deal is not yet final, it remains to be seen how many users could be affected.

<![CDATA[The Netflix–Warner Bros. Merger Is a Broadside Attack on Workers]]> https://theintercept.com/2025/12/19/netflix-warner-bros-merger-monopoly-unions/ https://theintercept.com/2025/12/19/netflix-warner-bros-merger-monopoly-unions/#respond Fri, 19 Dec 2025 12:00:00 +0000 The goal of any monopoly is to create an entity so powerful it sets the terms industrywide, leaving consumers and workers with no choice.

The post The Netflix–Warner Bros. Merger Is a Broadside Attack on Workers appeared first on The Intercept.

A Netflix sign atop a building in Los Angeles on Dec. 18, 2025, with the Hollywood sign in the distance. Photo: Jae C. Hong/AP

Following the announcement that Netflix would buy the film and streaming businesses of Warner Bros. for $72 billion, it has been difficult to find anyone who views the development as positive; even Netflix investors are displaying concern. Yet rampant speculation over what the deal might mean for consumers, or even for the art of cinema itself, has risked overshadowing the ominous portents for the workers who stand to lose the most — and what they might do in response. The entertainment industry may be brutal toward those it depends on, but it is particularly vulnerable to their power when they act together.

Predictably, much attention has been consumed by the hostile bid for Warner Bros. Discovery’s assets, launched by Paramount Skydance after its own attempt to acquire WBD was beaten out. Despite Paramount chief executive David Ellison arguing that his company would be more likely to gain the approval of federal competition regulators (and Ellison reportedly promising the White House to clownify CNN à la CBS under the Bari Weiss regime), a formal response from the WBD board this week advised shareholders to reject the offer, though Paramount may still return with a higher bid.

Regardless, a victory for either Netflix or Paramount would produce an industry-warping megacorporation that makes the word “monopoly” unavoidable. Whoever wins, we lose.

Sen. Elizabeth Warren, D-Mass., warned on NPR’s Morning Edition that a Paramount–Warner Bros. merger could result in “one person who basically decides what movies are going to be made, what you’re going to see on your streaming service, and how much you’re going to have to pay for it.” Even President Donald Trump — not exactly renowned for his zeal for corporate propriety — commented that the combined size of Netflix and WBD “could be a problem.”

The most vociferous condemnation of a Warner Bros. merger has come from those unions representing the industries that would be most affected by it. Responding to the Netflix deal, a joint statement from the Writers Guild of America West and the Writers Guild of America East was unequivocal: “The world’s largest streaming company swallowing one of its biggest competitors is what antitrust laws were designed to prevent.

“The outcome would eliminate jobs, push down wages, worsen conditions for all entertainment workers, raise prices for consumers and reduce the volume and diversity of content for all viewers. … This merger must be stopped.”

In the fiscal year ending in December 2024, WBD had approximately 35,000 employees, Netflix 14,000, and Paramount 18,600 (though Paramount Skydance had already begun cutting 2,000 U.S. jobs in October). Many of those workers may share organized labor’s fears.

According to Netflix co-CEO Ted Sarandos, these fears are unfounded. “This deal is pro-consumer, pro-innovation, pro-worker, it’s pro-creator, it’s pro-growth,” Sarandos claimed in a call with Wall Street analysts last week, presumably before explaining why bridge purchases are a hot investment, and later fabulating at a UBS conference that the merger would be “a great way to create and protect jobs in the entertainment industry.”

Notably unconvinced — and with good reason — is Lindsay Dougherty, the Jimmy Hoffa-tattooed director of the Teamsters Motion Picture Division, who told The Hollywood Reporter that “in any merger or acquisition we’ve seen in our history, it hasn’t been good for workers.”

This is a plain statement of fact: Corporate mergers rarely leave employees with raises and firmer job security, as evidenced by the dramatic mass layoffs that followed Disney’s acquisition of 21st Century Fox and AT&T’s acquisition of Time Warner, the latter of which led to roughly 45,000 job losses across AT&T’s media and telecom divisions. Both examples also demonstrate that, whatever regulatory scrutiny a Warner Bros. deal may face, it is far from assured that present antitrust enforcement is enough to prevent one.

One of the great lies of America is that monopolies are the one form of capitalism the republic will not tolerate. In truth, most victories against the practice throughout American history have quickly been revealed as hollow. Two decades after the Supreme Court famously ruled that Standard Oil be dissolved under the Sherman Antitrust Act and split into 34 companies, the Standard Oil Company of New Jersey remained the largest oil producer in the world and a perennial nemesis of the anti-monopoly populist Huey Long, easily capable of avoiding serious regulation thanks to its bottomless resources.

Writing in The Verge this week, Charles Pulliam-Moore observed that “issues like layoffs and price hikes are an inevitable consequence of consolidation,” but it is important to remember that this is precisely the point of such consolidation. Monopolies are not naturally occurring; they are designed to maximize the outcomes desired by those who bring them into being.

With that in mind, the grim consequences of a Warner Bros. merger for entertainment workers should be understood as anything but accidental, particularly given the context of recent years. Instead, they should be seen as the latest manifestation of a sustained and regrettably successful push to immiserate and disempower the many thousands whose livelihoods depend upon those industries.

One of the defining issues behind the strike by SAG-AFTRA and the Writers Guild of America that paralyzed Hollywood for much of 2023 was the threat of AI, the dark allure of which was not difficult to discern. The fact that within the entertainment industry, this technology has thus far produced only laughable slop has not killed off the dream in some quarters that it might eventually do away with the need for human creativity, along with the awkward need to pay human beings. This is arguably why, despite their grudging acceptance of some safeguards and restrictions in order to bring the 2023 strikes to an end, Hollywood bosses refused to countenance prohibiting AI entirely. Along with the rest of the corporatocracy, the anti-worker potential they see in it is too great to resist.

Many of those concerned by what a Warner Bros. merger could do to the industry will be all too aware of its current unenviable state. There is a bleak irony in Netflix’s attempt to seize one of Hollywood’s oldest and most famous studios: Unemployment and precarity have exploded among entertainment workers amid a devastating labor contraction caused in large part by the streaming industry’s pullback from Hollywood. In August 2024, unemployment in film and TV reached 12.5 percent, triple the national rate. Meanwhile, those VFX workers lucky enough to be employed — and upon whom so many of the industry’s biggest shows and movies depend — regularly face impossible workloads and sweatshop-like conditions.

The goal of keeping workers hungry and desperate is as old as capitalism itself, and the goal of any monopoly is to create an entity so vast and powerful it can set the terms for the entire industry, leaving consumers with no other option, workers with no choice but to reckon with it, and unions helpless to defend them.

Contrary to what Sarandos and his peers would like you to believe, those in a position to play Monopoly with billions of actual dollars are not and have never been aligned with the interests of workers; the question of the hour is what can be done to protect them.

In the opinion of Variety’s senior media writer Gene Maddaus, unions and industry groups may not have the power to derail a Warner Bros. deal, but “the more noise you can kick up, the more opposition there is, the more political pressure is brought to bear.”

Yet as the history of Warner Bros. demonstrates, Hollywood is a union town, and organized labor will almost certainly be pondering what options it has beyond making noise. If the unions wish to stand strong for their members before layoffs, or worse, start to bite, the strength and solidarity shown in 2023 may be needed once again.

<![CDATA[This Commission That Regulates Crypto Could Be Just One Guy: An Industry Lawyer]]> https://theintercept.com/2025/11/26/trump-crypto-regulation-cftc-mike-selig/ https://theintercept.com/2025/11/26/trump-crypto-regulation-cftc-mike-selig/#respond Wed, 26 Nov 2025 17:14:55 +0000 Mike Selig had dozens of crypto clients. Now he will be a key industry regulator.

The post This Commission That Regulates Crypto Could Be Just One Guy: An Industry Lawyer appeared first on The Intercept.

]]>
Republicans in the Senate are racing to confirm a lawyer with a long list of crypto industry clients as the next Commodity Futures Trading Commission chair, a position that will hold wide sway over the industry.

CFTC nominee Mike Selig has served dozens of crypto clients ranging from venture capital firms to a bear-themed blockchain company based in the Cayman Islands, according to ethics records obtained by The Intercept.

Those records show the breadth of potential conflicts of interest for Selig, who, if confirmed, will serve on the CFTC alone due to an exodus of other commissioners.

With a Bitcoin crash wiping out a trillion dollars of value in the past few weeks, the industry is counting on friendly regulators in Washington to give it a boost.

Senate Agriculture Committee members voted 12-11 along party lines in favor of Selig on November 20, setting up a vote in the full Senate. The committee vote came a day after a hearing in which Selig dodged straightforward questions about whether CFTC staffing should be expanded as it takes on a role overseeing digital assets, and whether Donald Trump was right to pardon Binance founder Changpeng Zhao.

One thing Selig did commit to a position on, however, was the danger of over-enforcement, leading the consumer group Better Markets to criticize him as the “wrong choice” to lead the CFTC.

“The CFTC is facing unprecedented strain as crypto and prediction market oversight has been layered into its traditional derivatives market oversight responsibilities,” said Benjamin Schiffrin, the nonprofit group’s director of securities policy. “During his hearing, Mr. Selig showed little interest in regulation on either count and was unable to answer the simplest of questions.”

Friendly to Crypto

Selig has drawn widespread backing from crypto industry groups in the wake of his October 25 nomination, which came after an earlier Trump nominee was derailed by the Winklevoss twins, who sued Mark Zuckerberg over the creation of Facebook before launching a lucrative career in crypto.

Selig’s resume shows why the industry is so comfortable with him. Early in his career he was a law clerk for J. Christopher Giancarlo, the CFTC chair during Trump’s first term who calls himself CryptoDad.

After the CFTC, Selig joined Giancarlo at the white-shoe law firm Willkie Farr & Gallagher. His client list there extended from major crypto investors to smaller startups, many of them with some presence in the derivatives or commodities worlds, according to a form he filed with the Office of Government Ethics after his nomination.

Selig’s clients included Amir Haleem, the CEO of a crypto company that was the target of a yearslong Securities and Exchange Commission probe; Architect Financial Technologies, which last year announced a CFTC-regulated digital derivatives brokerage; Berachain, the Caymans-based blockchain company whose pseudonymous co-founders include “Smokey the Bera” and “Papa Bear”; CoinList, a crypto exchange that allows traders to access newly listed digital tokens; Deribit, a crypto options exchange; Diamond Standard, which offers commodities products that combine diamonds and the blockchain; Input Output Global, one of the developers of the decentralized blockchain Cardano; and the U.S. branch of eToro, an Israeli crypto trading platform.

“Yes, I think the crypto community is excited about Mike.”

At least one of Selig’s former clients, Alluvial Finance, met with staffers of the crypto task force where Selig has served as chief counsel since the start of the second Trump administration, according to SEC records.

Selig’s clients have also included trade groups including the Proof of Stake Alliance, which advocates for friendly tax policies for a type of blockchain, and the Blockchain Association, which represents dozens of investment firms and large crypto companies in Washington.

In a recent podcast interview, Giancarlo pushed back against the idea that Selig is a one-trick pony, saying that Selig’s interests extend to other industries overseen by the CFTC, such as agriculture.

“Yes, I think the crypto community is excited about Mike. But so is the whole CFTC community,” Giancarlo said. “It’s not, ‘Crypto bro goes to CFTC.’ This is somebody who has had a decadelong practice in all aspects of CFTC law and jurisdiction, and is accomplished in all those areas.”

Revolving Door

It is far from unusual for Republican presidents to tap industry-friendly lawyers to serve as financial regulators. Selig, though, is poised to assume a uniquely powerful position thanks to a more unusual circumstance: an exodus of CFTC commissioners this year.

The commission’s other members have fled for the exits since Trump’s second term began, leaving only a single, crypto-friendly Republican to serve as acting chair. She has said that she will step down once her replacement is confirmed.

Trump has yet to nominate any Democratic commissioners to the body, which is typically split 3-2 along party lines, with the majority going to the party that controls the White House.

That appears to have been the sticking point for the Democratic senators who unanimously voted against Selig at the committee vote.

It appears that Selig may not have to recuse himself as CFTC chair from matters involving his former clients. In his government ethics filing, Selig pledged not to involve himself in such matters for the standard period of one year after representing a client. Selig, however, has been in government service for most of 2025, meaning only a few weeks of that blackout period remain.

A White House spokesperson did not answer questions about potential conflicts of interest if Selig is confirmed.

“Mike Selig is a highly qualified crypto and industry leader, who will do an excellent job in leading the Commodity Futures Trading Commission under President Trump,” White House spokesperson Davis Ingle said in a statement. “We look forward to his swift confirmation.”

Backwater to Bleeding Edge

If confirmed, Selig will lead an agency long considered a relative backwater until it was put in charge of regulating derivatives after the 2008 financial crash. More recently, Congress advanced legislation that would put the CFTC on the bleeding edge of overseeing digital assets.

Nonetheless, even relatively crypto-friendly Democrats, such as Sen. Cory Booker of New Jersey, noted at the hearing last week that the agency has nowhere near the staff needed to take on a major new role in the financial markets. The CFTC has only 161 employees dedicated to enforcement actions compared to about 1,500 at the SEC, Booker said.

“There is a real problem right now with capacity in the agency that you are up to lead,” Booker told Selig.

Despite the dearth of both commissioners and staff, Selig was unwilling to commit to growing the agency if he is confirmed. Pressed by Democrats whether he would ask Trump for a bigger staff, Selig repeatedly said that he needed to study the issue.

Selig also avoided giving direct answers to questions from Democrats about whether the CFTC should crack down on the emerging world of “prediction markets” offering sports gambling outside the auspices of state regulation, and whether crypto exchanges should be allowed to “vertically integrate” by investing in the same tokens they allow customers to trade.

Selig did signal a general openness toward cryptocurrencies — and skepticism of regulation — in his statement to the committee.

“I have seen firsthand how regulators, unaware of the real-world impact of their efforts, and zeal for regulation-by-enforcement, can drive businesses offshore and smother entrepreneurs with red tape,” Selig said. “Everyday Americans pay the price for these regulatory failures. If confirmed, I am committed to instituting common sense, principles-based regulations that facilitate well-functioning markets and keep pace with the rapid speed of innovation.”

The post This Commission That Regulates Crypto Could Be Just One Guy: An Industry Lawyer appeared first on The Intercept.

]]>
https://theintercept.com/2025/11/26/trump-crypto-regulation-cftc-mike-selig/feed/ 0 504282
<![CDATA[Elon Musk’s Anti-Woke Wikipedia Is Calling Hitler “The Führer”]]> https://theintercept.com/2025/11/26/grok-elon-musk-grokipedia-hitler/ https://theintercept.com/2025/11/26/grok-elon-musk-grokipedia-hitler/#respond Wed, 26 Nov 2025 11:00:00 +0000 The anti-woke Wikipedia alternative aims to create a parallel version of the truth for the right wing.

The post Elon Musk’s Anti-Woke Wikipedia Is Calling Hitler “The Führer” appeared first on The Intercept.

]]>
The Grokipedia encyclopedia logo appears on a smartphone screen reflecting an abstract illustration. The encyclopedia is entirely generated by Grok AI and is intended to be an alternative to Wikipedia, according to Elon Musk, in Creteil, France, on October 29, 2025. (Photo by Samuel Boivin/NurPhoto via Getty Images)
The Grokipedia encyclopedia logo appears on a smartphone screen reflecting an abstract illustration. Photo: Samuel Boivin/NurPhoto via Getty Images

In late October, Elon Musk released a Wikipedia alternative with pages written by his AI chatbot Grok. Musk said that, unlike its nearly quarter-century-old namesake, Grokipedia would strip out the “woke” from Wikipedia, which he has previously described as an “extension of legacy media propaganda.” But while Musk’s Grokipedia is, in his eyes, propaganda-free, it seems to have a proclivity for right-wing hagiography.

Take Grokipedia’s entry on Adolf Hitler. Until earlier this month, the entry read, “Adolf Hitler was the Austrian-born Führer of Germany from 1933 to 1945.” That phrase has been edited to “Adolf Hitler was an Austrian-born German politician and dictator,” but Grok still refers to Hitler by his honorific one clause later, writing that Hitler served as “Führer und Reichskanzler from August 1934 until his suicide in 1945.” NBC News also pointed out that the page on Hitler goes on for some 13,000 words before the first mention of the Holocaust.

This isn’t the first time Grok has praised Hitler. Earlier this year, X users posted screenshots of the AI chatbot saying the Nazi leader could help combat “anti-white hate,” echoing its maker’s statements about debunked claims of a “white genocide” in South Africa. (When confronted about the chatbot’s “MechaHitler” turn, Musk said users had “manipulated” it into praising the Nazi leader.)

An earlier version of Grokipedia’s page on Hitler. The current version no longer mentions the Holocaust until thousands of words later in the entry. Screenshot: Tekendra Parmar

Grokipedia isn’t exactly Stormfront, the neo-Nazi site known for spewing outright bigotry and Holocaust denial (though it does cite the white supremacist site at least 42 times, according to recently published data from researcher Hal Triedman). Instead, the AI-generated Wikipedia alternative subtly advances far-right narratives by mimicking the authority of Wikipedia while reframing extremist positions, casting suspicion on democratic institutions, and elevating fringe or conspiratorial sources.

LK Seiling, an AI researcher at the Weizenbaum Institute, describes Grokipedia as “cloaking misinformation.”

“Everyone knows Wikipedia. They’re an epistemic authority, if you’d want to call them that. [Musk] wants to attach himself to exactly that epistemic authority to substantiate his political agenda,” they say.

It’s worth paying attention to how Grok frames a few key issues.

Take, for example, Grokipedia’s post about the Alternative for Germany, the far-right party Elon Musk repeatedly praised in the lead-up to the German election earlier this year. The entry contains an entire section on “Media Portrayals and Alleged Bias,” which parrots the AfD’s long-held claims that the media is biased against it and undermining it. (The party routinely peddles anti-Muslim and anti-immigrant rhetoric, and its leaders have previously urged the country to stop apologizing for its Nazi past. The AfD has also promoted conspiracy theories like the “Great Replacement,” a favorite of white nationalists.)

“Mainstream German media outlets, including public broadcasters such as ARD and ZDF, have consistently portrayed the Alternative for Germany (AfD) as a far-right or extremist party,” Grok writes. “This framing often highlights AfD’s scrutiny by the Federal Office for the Protection of the Constitution (BfV), which classified the party’s youth wing as extremist in 2021 and the overall party under observation for right-wing extremism tendencies by 2025, while downplaying policy achievements like electoral gains in eastern states.”

The Federal Office for the Protection of the Constitution was established after World War II to ensure that no German leader tries to overturn the country’s constitution again. But Grokipedia subtly casts doubt on the institution’s legitimacy, arguing that it is “downplaying” the AfD’s achievements.

According to Seiling, who is German, Grokipedia is attempting to undermine the authority of German institutions created to prevent another Hitler. “It’s moving within the narratives that these parties themselves are spreading,” Seiling says. “If you look closely, their argument is also kind of shit. Just because [AfD is] polling at 15 percent doesn’t mean they have merit. ”

Nowhere is this more clear than in how Grokipedia deals with the genocide in Gaza.

Much like the post on the AfD, the page has a long section dedicated to the “biases” of the United Nations and NGOs like Amnesty International and Human Rights Watch, which Grok accuses of emphasizing “Israeli actions while minimizing Hamas’s violations.” Notably, Grokipedia repeats unsubstantiated claims by Israel that the United Nations Relief and Works Agency for Palestine Refugees was infiltrated by Hamas operatives, and its pages on the Israel–Hamas conflict rely heavily on links to pro-Israel advocacy groups like UN Watch and NGO Watch.

“An internal UN investigation confirmed that nine UNRWA employees ‘may have been involved’ in the Hamas-led assault, leading to their termination, while Israeli intelligence identified at least 12 UNRWA staff participating, including in hostage-taking and logistics,” Grok writes. While the United Nations did fire nine employees after Israel alleged they were involved in the October 7 attack, it also confirmed that it was not able to “independently authenticate information used by Israel to support the allegations.”

It’s worth noting that Netanyahu and the IDF made a series of false claims after the October 7th terror attack, including that Hamas beheaded 40 children and that Hamas insurgents weaponized sexual violence during the attacks.

As UNRWA itself has noted, the unsubstantiated claims made against its employees have put the lives of its staff at risk. According to the U.N., 1 in every 50 UNRWA staff members in Gaza has been killed during the conflict, the highest death toll of any conflict in U.N. history.

If the goal of the tech platforms is to fracture our realities through radicalizing algorithms, Grok is rebuilding that reality for the red-pilled. That means not only questioning the integrity of traditional sources of authority, like Germany’s Federal Office for the Protection of the Constitution or the United Nations, but also serving up an alternative set of authorities.

Grokipedia’s page covering conspiracy theories about the 2012 shooting at Sandy Hook Elementary School dedicates several paragraphs to what Grok describes as the “Initial Anomalies and Public Skepticism” about the official narrative. “Alternative media outlets played a pivotal role in disseminating initial doubts about the official account of the Sandy Hook Elementary School shooting,” Grok writes, referring to the Alex Jones-operated conspiracy theory site Infowars and other social media groups. (The families of the victims of the Sandy Hook massacre successfully sued Jones, who was ordered to pay $1.5 billion for spreading false claims about the school shooting.)

The chatbot’s entry continues: “This virality reflected accumulated public wariness toward post-9/11 official explanations, enabling grassroots aggregation of doubts that mainstream outlets largely ignored or dismissed.” According to Triedman’s data, Grokipedia has cited Infowars as a source at least 30 times.

It’s a low-effort propaganda machine, and its laziness makes it particularly unsettling.

Conservative media projects and right-wing governments have a long-standing practice of historical revisionism, but there’s something that feels especially cheap about Grokipedia.

“Encyclopedia-style media is extremely labor-intensive. Wikipedia requires huge human governance structures, all visible and auditable,” Seiling says. “Musk does not have armies of people writing pages. What he does have is a shit-ton of GPUs,” the technology that underpins AI processing.

Wikipedia derives much of its authority from its transparency and the auditable nature of its community’s work. Grokipedia was never going to rival Wikipedia, much as Truth Social and Gab don’t actually rival their mainstream counterparts. But that doesn’t make it any less dangerous. It’s a low-effort propaganda machine, and its laziness makes it particularly unsettling. No longer do you need a cadre of bureaucrats or the Heritage Foundation to rewrite history books; a metric ton of processing power to launder ideology through the aesthetics of objectivity suffices. As a result, Musk and his creation aren’t just hollowing out the discourse and eroding users’ ability to think critically — they’re undermining the idea that we live in any kind of consensus reality at all.

Correction: November 30, 2025
This story has been updated to correct the spelling of LK Seiling’s name.

The post Elon Musk’s Anti-Woke Wikipedia Is Calling Hitler “The Führer” appeared first on The Intercept.

]]>
https://theintercept.com/2025/11/26/grok-elon-musk-grokipedia-hitler/feed/ 0 504251
<![CDATA[The FBI Wants AI Surveillance Drones With Facial Recognition]]> https://theintercept.com/2025/11/21/fbi-ai-surveillance-drones-facial-recognition/ https://theintercept.com/2025/11/21/fbi-ai-surveillance-drones-facial-recognition/#respond Fri, 21 Nov 2025 19:50:52 +0000 An FBI procurement document requests information about AI surveillance on drones, raising concerns about a crackdown on free speech.

The post The FBI Wants AI Surveillance Drones With Facial Recognition appeared first on The Intercept.

]]>
The FBI is looking for ways to incorporate artificial intelligence into drones, according to federal procurement documents.

On Thursday, the FBI put out a call to potential vendors of AI and machine learning technology for use in unmanned aerial systems through a so-called request for information, in which a government agency asks companies to submit initial information ahead of a forthcoming contract opportunity.

“It’s essentially technology tailor-made for political retribution and harassment.”

The FBI is in search of technology that could enable drones to conduct facial recognition, license plate recognition, and detection of weapons, among other uses, according to the document.

The pitch from the FBI immediately raised concerns among civil libertarians, who warned that enabling FBI drones with artificial intelligence could exacerbate the chilling effect of surveillance of activities protected by the First Amendment.

“By their very nature, these technologies are not built to spy on a specific person who is under criminal investigation,” said Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation. “They are built to do indiscriminate mass surveillance of all people, leaving people that are politically involved and marginalized even more vulnerable to state harassment.”

The FBI did not immediately respond to a request for comment.

Law enforcement agencies at local, state, and federal levels have increasingly turned to drone technology in efforts to combat crime, respond to emergencies, and patrol areas along the border.

The use of drones to surveil protesters and others taking part in activities ostensibly protected under the Constitution has repeatedly raised concerns.

In New York City, the use of drones by the New York Police Department has soared in recent years, with little oversight to ensure that their use falls within constitutional limits, according to a report released this week by the Surveillance Technology Oversight Project.

In May 2020, as protests raged in Minneapolis over the murder of George Floyd, the Department of Homeland Security deployed unmanned vehicles to record footage of protesters and later expanded drone surveillance to at least 15 cities, according to the New York Times. When protests spread, the U.S. Marshals Service also used drones to surveil protesters in Washington, D.C., according to documents obtained by The Intercept in 2021.

“Technically speaking, police are not supposed to conduct surveillance of people based solely on their legal political activities, including attending protests,” Guariglia said, “but as we have seen, police and the federal government have always been willing to ignore that.”

“One of our biggest fears in the emergence of this technology has been that police will be able to fly a face recognition drone over a protest and in a few passes have a list of everyone who attended. It’s essentially technology tailor-made for political retribution and harassment,” he said.

In addition to the First Amendment concerns, the use of AI-enabled drones to identify weapons could exacerbate standoffs between police and civilians and other delicate situations. In that scenario, the danger would come not from the effectiveness of AI tech but from its limitations, Guariglia said. Government agencies like school districts have forked over cash to companies running AI weapons detection systems — one of the specific uses cited in the FBI’s request for information — but the products have been riddled with problems and dogged by criticisms of ineffectiveness.

“No company has yet proven that AI firearm detection is a viable technology,” Guariglia told The Intercept. “On a drone whirling around the sky at an awkward angle, I would be even more nervous that armed police will respond quickly and violently to what would obviously be false reports of a detected weapon.”

The post The FBI Wants AI Surveillance Drones With Facial Recognition appeared first on The Intercept.

]]>
https://theintercept.com/2025/11/21/fbi-ai-surveillance-drones-facial-recognition/feed/ 0 504063
<![CDATA[You Will Never Send Money Digitally Without a Private Company — If the GOP Gets Its Way]]> https://theintercept.com/2025/11/17/money-transfer-cbdc-digital-currency/ https://theintercept.com/2025/11/17/money-transfer-cbdc-digital-currency/#respond Mon, 17 Nov 2025 10:00:00 +0000 Republicans are trying to permanently block central bank digital currencies that would create an alternative to profit-seeking companies.

The post You Will Never Send Money Digitally Without a Private Company — If the GOP Gets Its Way appeared first on The Intercept.

]]>
Americans who want to transfer money online have options. They can go with services like Venmo and PayPal, make transfers from their personal bank, or do a transaction with stablecoins issued by cryptocurrency companies.

All those options have something in common that may not always occur to consumers: The transfers are offered exclusively by private companies. That means users’ accounts aren’t stuffed with physical dollars, but rather with promises made by private companies to make the recipient whole.

Unlike with cash, the system creates a middleman for every dollar spent — and an opportunity for that middleman to profit off the digital equivalent of something as simple as handing someone else a bill.

There is a future where every monetary transaction between people involves private interests.

There’s no way to send money digitally without involving a company that has an angle. With cash on the way out — the last penny was just minted, for instance — there is a possible future where every single monetary transaction between people involves private interests.

The little-noted distinction raises a question: Why can’t the actual backer of the dollar — the U.S. government — create a way to send money itself? Academics have been exploring this question for years, asking why the federal government can’t back its own digital currency to facilitate transfers between people.

A system with a central bank digital currency, as it’s known, could operate as a public good, advocates say, with potentially zero or minimal transaction fees — just by letting the government take a small step from backing physical currency to backing its digital equivalent.

In the U.S., those researchers never got past an exploratory phase, but that did not stop a central bank digital currency from becoming a boogeyman for right-wing activists.

The Republican House majority whip, Rep. Tom Emmer, warned that the Chinese Communist Party uses a digital currency to spy on its citizens. Online memes dubbed such currencies a “mark of the beast.” Donald Trump promised to ban them last year and followed through with an executive order in January.

Now, Republicans are trying to make sure that no matter who is president, private companies will forever hold the monopoly on Americans sending money to each other online.

The bill would even prevent research on government-issued digital currencies.

They’re pushing a formal, codified ban that would squash government competition to private payments before it ever gets started. The House included a central bank digital currency ban in its version of a defense budget bill, which will be hashed out with the Senate in the coming weeks. The bill would even prevent research on government-issued digital currencies.

The debate raises major questions about privacy, public goods, the dollar’s dominant position in the global economy, and technological innovation. That debate’s resolution, one prominent researcher told The Intercept, will determine the future of money.

“Right now, the only way to digitally transact through people is through a private sector intermediary — whether that’s a bank or a fintech company or a credit card company,” said Neha Narula, the director of the digital currency initiative at the MIT Media Lab who from 2020 to 2022 worked with the Federal Reserve Bank of Boston to explore the idea. “It is not really clear that that structure continues to work without something like cash, users having the ability to exit to cash.”

Central Bank Digital Currency

To understand the potential upsides, it is possible to look to the handful of other countries where central bank digital currencies have already been adopted. In the Bahamas, citizens can use smartphone apps or plastic cards to make fee-free purchases and transfers with the digital Bahamian dollar.

The adoption of digital currency in the Bahamas has been low, in part because so many private alternatives already exist. A similar pattern has emerged in China, which launched a digital currency in 2020.

In the long term, China hopes to use digital currency to leapfrog past the U.S. dollar’s role as the preeminent mode of international exchange. American boosters of central bank digital currencies, such as former President Joe Biden, say it is important that the U.S. not get left behind. China’s preeminence in the field, however, is a red flag for the likes of Emmer.

“The digital yuan, Major, is a financial surveillance tool,” he told CBS News’ Major Garrett in an interview earlier this year. “The Chinese Communist Party is literally building social scores on its citizens based on their purchases. This is not an American value.”

“It is hard to imagine in 50 or 100 years we are going to be using pieces of paper.”

Narula, the researcher, acknowledged that the use cases for digital dollars may be elusive for now. Still, she believes that it is important to keep studying central bank digital currencies, given the inevitable trend toward more digital transactions.

She said, “It is hard to imagine in 50 or 100 years we are going to be using pieces of paper.”

Privacy Problems

Narula is adamant that a central bank digital currency could be built with privacy protection at its core. After all, there are already cryptocurrencies such as Bitcoin that allow their users to remain mostly anonymous.

Digital currency critics, by contrast, paint them as Orwellian tools of government oversight. One skeptic argued that privacy protections would be too vulnerable to the whims of an administration.

“It is technically possible to achieve privacy, but it’s not politically possible to achieve privacy. And that’s a very important point to stress here,” said Nicholas Anthony, an analyst with the libertarian-leaning Cato Institute. “Once a crisis occurs, it would be so easy to have privacy protections ripped away.”

Anthony said those on the left should be just as concerned as those on the right about the potential for abuse.

“Our financial transactions reveal so much about us,” he said. “Anyone in power can really use it to their advantage. So it’s really unfortunate, in my eyes, that it has become a ‘Republican’ or ‘conservative’ issue.”

Related

Many ICE Agents Lose Ability to Spy on Immigrants’ Payments to Family Back Home

Private offerings come with a host of privacy concerns as well, Anthony acknowledged. He argued that the market will incentivize privacy protections, along the lines of Apple’s marketing on the topic. Others aren’t so sure and think the issue may be operating as a smokescreen for private companies.

“You hear a lot of high-minded rhetoric about CBDCs being a threat to people or privacy, but at the end of the day, this is really about what roles the public and private sector play in finance,” said Mark Hays, an advocate with the left-leaning groups Americans for Financial Reform and Demand Progress.

By banning even government research on central bank digital currencies, MIT Media Lab’s Narula warned, the legislation also risks endangering further progress on privacy protections.

“There’s certain experience that only people in government have when it comes to administrating our monetary system,” she said. “So to cut them off from participating in this research means that we are not going to get to the best outcomes, because we don’t have the best minds working on it.”

Private Alternatives

If a ban comes to pass, the field of digital payments will be left wide open for private industry. That could present a profitable market opportunity for financial services companies and cryptocurrency startups.

The stablecoin industry already has a market capitalization of over $300 billion, and it is poised to explode in the wake of recent legislation supported by Trump, himself a stablecoin entrepreneur.

Related

Is Crypto a Big Scam?

In fact, cryptocurrency companies have been some of the most vociferous opponents of central bank digital currencies after initially exploring partnerships with the U.S. government on them. Critics point out that government-issued digital dollars could compete with stablecoins, which earn profit for their private issuers from the interest on U.S. bonds and other securities backing consumer accounts.

Hays said that he recognized the privacy concerns that come with government-issued digital currencies.

“My dollar that I lay down at the bodega, chances are that’s not going to be on any database. But with the CBDC, in a certain way of thinking about it, it now would be,” he said.

“Your HUD grant would be brought to you by Circle or Tether.”

Still, he worries that private interests are moving to take control of financial infrastructure that should belong to the public. The Department of Housing and Urban Development is already exploring the use of blockchain to monitor the billions of dollars in grants it pays out every year, he noted.

“Your HUD grant would be brought to you by Circle or Tether,” said Hays, referring to two cryptocurrency companies. “How far they get is anybody’s guess, but the fact that they are floating it gives you a signal of their intentions. They would like to see a world where that fundamental architecture — which we would argue needs to be democratically controlled — is another way of putting more of that system under private control, including crypto.”

The post You Will Never Send Money Digitally Without a Private Company — If the GOP Gets Its Way appeared first on The Intercept.

]]>
https://theintercept.com/2025/11/17/money-transfer-cbdc-digital-currency/feed/ 0 503419 U.S. President Donald Trump listens to a question from a reporter during a press conference with Ukrainian President Volodymyr Zelensky following their meeting at Trump’s Mar-a-Lago club on December 28, 2025 in Palm Beach, Florida. Rep. Dan Goldman (D-N.Y.) arrives for a vote at the U.S. Capitol March 31, 2025. (Francis Chung/POLITICO via AP Images) U.S. soldiers of the 3rd Brigade, 4th Infantry Division, look on a mass grave after a day-long battle against the Viet Cong 272nd Regiment, about 60 miles northwest of Saigon, in March 1967.
<![CDATA[Real Estate Giant Redfin Exposed Users’ Personal Info on Listing Contact Forms]]> https://theintercept.com/2025/11/13/redfin-user-information-real-estate-listing/ https://theintercept.com/2025/11/13/redfin-user-information-real-estate-listing/#respond Thu, 13 Nov 2025 11:00:00 +0000 Contact forms on Redfin real estate listings displayed past users’ names, email addresses, and phone numbers.

The post Real Estate Giant Redfin Exposed Users’ Personal Info on Listing Contact Forms appeared first on The Intercept.

]]>
Because of a website security snafu, the online real estate platform Redfin made random users' names, email addresses, and phone numbers visible to other users viewing its listings. The vulnerability lasted less than a week, the company said.

The personally identifiable information was visible to other users viewing real estate listings. The information would appear momentarily when a contact information form popped up on a listing; the form would be pre-filled with details from past users, which would quickly vanish.

The contact information of past users, however, remained visible when a listing was viewed with JavaScript disabled. JavaScript, the programming language used to make websites interactive, can be turned off in many browsers, either globally or for specific sites.
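This failure mode can be illustrated with a small sketch. The code below is hypothetical, not Redfin's actual implementation: it shows the general vulnerability class in which a server renders a form pre-filled with a previous visitor's details and relies on client-side JavaScript to clear them. Because the leaked data is baked into the HTML payload itself, a browser with JavaScript disabled simply displays it.

```python
# Hypothetical sketch of the vulnerability class described above -- NOT Redfin's
# actual code. All names and data here are invented for illustration.

# Stale server-side state left over from a previous visitor's session.
LAST_VISITOR = {"name": "Jane Doe", "email": "jane@example.com"}

def render_contact_form(visitor: dict) -> str:
    """Server-side rendering that embeds the prior visitor's contact info."""
    return f"""
    <form id="contact">
      <input name="name"  value="{visitor['name']}">
      <input name="email" value="{visitor['email']}">
    </form>
    <script>
      /* Client-side "fix" that never runs when JavaScript is disabled: */
      document.querySelectorAll('#contact input').forEach(i => i.value = '');
    </script>
    """

html = render_contact_form(LAST_VISITOR)
# The PII is present in the raw HTML response, regardless of any script:
assert "jane@example.com" in html
```

The safe design, by contrast, would never place another user's data in the response in the first place; scrubbing in the browser cannot protect data the server has already sent.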

Past users’ email addresses or phone numbers, and sometimes both, were displayed.

“We recently identified a technical error on the website that temporarily made it possible for the e-mail address and/or phone number of a previous visitor to be visible to another user on a rental listing page,” said Alina Ptaszynski, a Redfin spokesperson. “This error was active for less than a week and was remediated as soon as we were made aware of it.”

After The Intercept initially contacted Redfin, the company changed the way its website contact form is displayed for desktop web browsers, but the vulnerability persisted on mobile listings. After a subsequent inquiry from The Intercept, the mobile listings’ contact form was updated as well.

Related

The Housing Hunger Games

Redfin, a giant brokerage house that pioneered map-based online real estate listings, claims to have 50 million monthly users, according to Rocket, its parent company.

The data vulnerability only displayed one user's contact information at a time, but data could have been collected en masse by someone making repeated visits to property listings and serially gathering the available information. (Redfin did not respond to a question about whether there was any evidence the vulnerability had been exploited to collect bulk user information.)

Using reverse phone number and email search databases, The Intercept confirmed that the email addresses and phone numbers are valid contact information belonging to real people, not just dummy data that developers sometimes use when testing their code.

Inadvertently revealing user information is a problem that has plagued web services for years.

Redfin’s privacy policy says the company may share private information, but only when the prompt to provide that data is accompanied by a disclosure. The property contact form, however, does not provide a disclaimer that a user’s contact information might be shared, let alone with subsequent users.

The post Real Estate Giant Redfin Exposed Users’ Personal Info on Listing Contact Forms appeared first on The Intercept.

]]>
https://theintercept.com/2025/11/13/redfin-user-information-real-estate-listing/feed/ 0 503174
<![CDATA[YouTube Quietly Erased More Than 700 Videos Documenting Israeli Human Rights Violations]]> https://theintercept.com/2025/11/04/youtube-google-israel-palestine-human-rights-censorship/ https://theintercept.com/2025/11/04/youtube-google-israel-palestine-human-rights-censorship/#respond Tue, 04 Nov 2025 21:41:23 +0000 The tech giant deleted the accounts of three prominent Palestinian human rights groups — a capitulation to Trump sanctions.

The post YouTube Quietly Erased More Than 700 Videos Documenting Israeli Human Rights Violations appeared first on The Intercept.

]]>
A documentary featuring mothers surviving Israel’s genocide in Gaza. A video investigation uncovering Israel’s role in the killing of a Palestinian American journalist. Another video revealing Israel’s destruction of Palestinian homes in the occupied West Bank.

YouTube surreptitiously deleted all these videos in early October by wiping the accounts that posted them from its website, along with their channels’ archives. The accounts belonged to three prominent Palestinian human rights groups: Al-Haq, Al Mezan Center for Human Rights, and the Palestinian Centre for Human Rights.

The move came in response to a U.S. government campaign to stifle accountability for alleged Israeli war crimes against Palestinians in Gaza and the West Bank.

The Palestinian groups’ YouTube channels hosted hours of footage documenting and highlighting alleged Israeli government violations of international law in both Gaza and the West Bank, including the killing of Palestinian civilians.

“I’m pretty shocked that YouTube is showing such a little backbone,” said Sarah Leah Whitson, executive director of Democracy for the Arab World Now. “It’s really hard to imagine any serious argument that sharing information from these Palestinian human rights organizations would somehow violate sanctions. Succumbing to this arbitrary designation of these Palestinian organizations, to now censor them, is disappointing and pretty surprising.”

After the International Criminal Court issued arrest warrants and charged Israeli Prime Minister Benjamin Netanyahu and former Israeli Defense Secretary Yoav Gallant with war crimes in Gaza, the Trump administration escalated its defense of Israel’s actions by sanctioning ICC officials and targeting people and organizations that work with the court.

“YouTube is furthering the Trump administration’s agenda to remove evidence of human rights violations and war crimes.”

“It is outrageous that YouTube is furthering the Trump administration’s agenda to remove evidence of human rights violations and war crimes from public view,” said Katherine Gallagher, a senior staff attorney at the Center for Constitutional Rights. “Congress did not intend to allow the president to cut off the flow of information to the American public and the world — instead, information, including documents and videos, are specifically exempted under the statute that the president cited as his authority for issuing the ICC sanctions.”

An “Alarming Setback”

YouTube, which is owned by Google, confirmed to The Intercept that, following a review, it deleted the groups’ accounts as a direct result of State Department sanctions against them. The Trump administration leveled the sanctions against the organizations in September over their work with the International Criminal Court in cases charging Israeli officials with war crimes.

“Google is committed to compliance with applicable sanctions and trade compliance laws,” YouTube spokesperson Boot Bullwinkle said in a statement.

According to Google’s Sanctions Compliance publisher policy, “Google publisher products are not eligible for any entities or individuals that are restricted under applicable trade sanctions and export compliance laws.”

Al Mezan, a human rights organization in Gaza, told The Intercept that its YouTube channel was abruptly terminated on October 7 of this year without prior notification.

“Terminating the channel deprives us from reaching what we aspire to convey our message to, and fulfill our mission,” a spokesperson for the group said, “and prevents us from achieving our goals and limits our ability to reach the audience we aspire to share our message with.”

The West Bank-based Al-Haq’s channel was deleted on October 3, a spokesperson for the group said, with a message from YouTube that its “content violates our guidelines.”

Related

Palestinian Rights Groups That Document Israeli Abuses Labeled “Terrorists” by Israel

“YouTube’s removal of a human rights organisation’s platform, carried out without prior warning, represents a serious failure of principle and an alarming setback for human rights and freedom of expression,” the Al-Haq spokesperson said in a statement. “The U.S. Sanctions are being used to cripple accountability work on Palestine and silence Palestinian voices and victims, and this has a ripple effect on such platforms also acting under such measures to further silence Palestinian voices.”

The Palestinian Center for Human Rights, which the U.N. describes as the oldest human rights organization in Gaza, said in a statement that YouTube’s move “protects perpetrators from accountability.”

“YouTube’s decision to close PCHR’s account is basically one of many consequences that we as an organisation have faced since the decision of the US government to sanction our organisations for our legitimate work,” said Basel al-Sourani, an international advocacy officer and legal advisor for the group. “YouTube said that we were not following their policy on Community Guidelines, when all our work was basically presenting factual and evidence-based reporting on the crimes committed against the Palestinian people especially since the start of the ongoing genocide on 7 October.”

“By doing this, YouTube is being complicit in silencing the voices of Palestinian victims,” al-Sourani added.

Looking Outside the U.S.

The three human rights groups’ account terminations cumulatively amount to the erasure of more than 700 videos, according to an Intercept tally.

The deleted videos range in scope from investigations, such as an analysis of the Israeli killing of American journalist Shireen Abu Akleh, to testimonies of Palestinians tortured by Israeli forces and documentaries like “The Beach,” about children playing on a beach who were killed by an Israeli strike.

Some videos are still available through copies saved on the Internet Archive’s Wayback Machine or on alternate platforms, such as Facebook and Vimeo. The wiping only affected the groups’ official channels; videos that were produced by the nonprofits but hosted on other YouTube channels remain active. No cumulative index of the videos deleted by YouTube exists, however, and many appear not to be available elsewhere online.

Videos posted elsewhere online, the groups fear, could soon be targeted for deletion because many of the platforms hosting them are also U.S.-based services. The ICC itself has begun exploring the use of service providers outside the U.S.

Al-Haq said it would also be looking for alternatives to U.S. companies to host its work.

YouTube isn’t the only U.S. tech company blocking Palestinian rights groups from using its services. The Al-Haq spokesperson said Mailchimp, the mailing list service, also deleted the group’s account in September. (Mailchimp and its parent company, Intuit, did not immediately respond to a request for comment.)

Caving to Trump’s Demand

Both the U.S. and Israeli governments have long shielded themselves from the ICC and accountability for their alleged war crimes. Neither country is party to the Rome Statute, the international treaty that established the court.

In November 2024, the ICC issued arrest warrants for Netanyahu and Gallant, charging the leaders with intentionally starving civilians by blocking aid from entering Gaza. Both the Biden and Trump administrations rejected the legitimacy of the warrants.

Related

Trump Sanctions Palestinian Human Rights Groups for Doing Their Job. Anybody Could Be Next.

Since his reelection, Trump has taken a more aggressive posture against accountability for Israel. In the early days of his second term, Trump renewed sanctions against the ICC and issued new, more severe measures against court officials and anyone accused of aiding their efforts. In September, in a new order, he specifically sanctioned the three Palestinian groups.

The U.S. moves followed Israel’s own designation of Al-Haq as a “terrorist organization” in 2021 and an online smear campaign by pro-Israeli activists attempting to link the Palestinian Centre for Human Rights with militant groups.

The sanctions freeze the organizations’ assets in the U.S. and bar sanctioned individuals from traveling to the country. Federal judges have already issued preliminary injunctions in two cases in favor of plaintiffs who argued the sanctions had violated their First Amendment rights.

“The Trump administration is focused on contributing to the censorship of information about Israeli atrocities in Palestine and the sanctions against these organizations is very deliberately designed to make association with these organizations frightening to Americans who will be concerned about material support laws,” said Whitson, of DAWN, which joined a coalition of groups in September to demand the Trump administration drop its sanctions.

Like many tech firms, YouTube has shown a ready willingness to comply with demands from both the Trump administration and Israel. YouTube coordinated with a campaign organized by Israeli tech workers to remove social media content deemed critical of Israel. At home, Google, YouTube’s parent company, secretly handed over personal Gmail account information to U.S. Immigration and Customs Enforcement in an effort to detain a pro-Palestinian student organizer.

Even before Israel’s genocidal campaign in Gaza, YouTube had been accused of unevenly applying its community guidelines to censor Palestinian voices while withholding similar scrutiny from pro-Israeli content. Such trends continued during the war, according to a Wired report.

Earlier this year, YouTube shut down the official account of the Addameer Prisoner Support and Human Rights Association. The move came after pressure from UK Lawyers for Israel, which wrote to YouTube to point out that the organization had been sanctioned by the State Department.

Whitson warned that YouTube’s capitulation could set a precedent, pushing other tech companies to bend to censorship.

“They are basically allowing the Trump administration to dictate what information they share with the global audience,” she said. “It’s not going to end with Palestine.”

The post YouTube Quietly Erased More Than 700 Videos Documenting Israeli Human Rights Violations appeared first on The Intercept.

]]>
https://theintercept.com/2025/11/04/youtube-google-israel-palestine-human-rights-censorship/feed/ 0 502439
<![CDATA[ICE Investigations, Powered by Nvidia]]> https://theintercept.com/2025/11/01/ice-nvidia-software-hsi-surveillance/ https://theintercept.com/2025/11/01/ice-nvidia-software-hsi-surveillance/#respond Sat, 01 Nov 2025 10:00:00 +0000 ICE’s investigative division, increasingly involved in ground-level immigration enforcement, is using Nvidia tech to crunch data.

The post ICE Investigations, Powered by Nvidia appeared first on The Intercept.

]]>
Nvidia, the computing giant that this week became the world’s first $5 trillion company, is powering U.S. Immigration and Customs Enforcement’s investigative division, according to federal records reviewed by The Intercept.

This summer, ICE renewed access to software tools for use by Homeland Security Investigations, or HSI, an enforcement division previously focused on transnational crime that has become an increasingly common presence on American streets under the Trump administration.

The $19,000 transaction, according to federal procurement data, provided “Nvidia software licenses, which will be used by Homeland Security Investigations to enhance data analysis & improve investigative capabilities through high-performance computing solutions.”

“HSI’s growing investment in LLMs” — large language models — “suggests that it may be investing in systems that can be used to surveil U.S. citizens, migrants, and visitors,” said Amos Toh, senior counsel at the Brennan Center for Justice.

Large language models can be used to draw inferences by fusing people’s publicly available data, and might be used by ICE “to identify persons of interest and generate investigative leads.” There are well-documented flaws, however, in the way the AI crunches data and reproduces biases.

Toh said, “These problems make it more likely that people will be targeted based on flawed intelligence.”

In a statement, ICE said, “Like other law enforcement agencies, ICE employs various forms of technology to investigate criminal activity and support law enforcement efforts while respecting civil liberties and privacy interests.”

When asked whether Nvidia had any ability to ensure ICE was using its technology lawfully, company spokesperson John Rizzo told The Intercept, “Millions of U.S. consumers, businesses, and government agencies use general-purpose computers every day. We do not and cannot monitor the use of general-purpose computers by U.S. government employees.”

Related

Mahmoud Khalil Won His Freedom Despite the Best Efforts of ICE’s Intelligence Unit

HSI’s mission has shifted during President Donald Trump’s second administration. The ICE division has long assisted in civil immigration enforcement, but its focus was on criminal investigations such as drug smuggling and human trafficking.

“HSI has long sought to distance itself from ICE Enforcement and Removal Operations, which carries out basic immigration law enforcement,” Aaron Reichlin-Melnick, a senior fellow at the American Immigration Council, told The Intercept. “On January 20, President Trump signed an executive order directing HSI to make immigration enforcement its top priority.”

Nvidia has been cozying up to Trump, who is threatening restrictions on chip exports to China, a lucrative market for the chipmaker. In a speech at an Nvidia tech conference in Washington on Tuesday, CEO Jensen Huang praised Trump and thanked those assembled “for your service and helping make America great again.”

How Nvidia Might Help ICE

How ICE plans to use Nvidia’s services is unclear; the specific software in question is not disclosed in the procurement documents.

Nvidia offers a variety of software-based services that could be useful for ICE data analysis. Nvidia has a dominant position across machine learning and artificial intelligence fields, including platforms to run large language models and video analytics.

The reseller through which ICE is buying access to Nvidia products, California-based New Tech Solutions, has previously sold the U.S. government licenses for “virtual workstations,” which essentially lease remote access to powerful chips known as graphics processing units, or GPUs, housed in data centers owned by Nvidia.

Such hardware could be used to train and query machine learning models. A 2023 report by the Department of Homeland Security on its potential usage of machine learning flagged HSI as standing to benefit from adopting the technology, including by rapidly searching and summarizing suspicious activity reports through large language models.

“HSI agents could quickly access and make sense of more than tens of millions of reports through ad hoc, unstructured queries over a voice interface,” the report says, adding that the system could also automatically scan and classify the contents of footage recorded by HSI agents.

A recently published inventory of ways DHS is using artificial intelligence tools reveals other areas where ICE may be able to make use of Nvidia’s “high-performance computing solutions.”

The document, which reflects Homeland Security practices as of July, notes HSI uses machine learning algorithms “to identify and extract critical evidence, relationships, and networks from mobile device data, leveraging machine learning capabilities to determine locations of interest.” The document also says HSI uses large language models to “identify the most relevant information in reports, accelerating investigative analysis by rapidly identifying persons of interest, surfacing trends, and detecting networks or fraud.”

HSI’s Shifting Mission

Procurement data about HSI’s use of Nvidia technology comes as ICE ramps up its presence in cities and towns across the U.S. ICE raids are viewed as increasingly extreme and unchecked by legal or policy constraints, prompting aggressive protests against the immigration enforcement.

HSI is playing a growing role in the controversial enforcement — and the crackdown on demonstrations.

Since Trump’s executive order on HSI, said Reichlin-Melnick of the American Immigration Council, “large numbers of agents have been reassigned away from criminal investigations to carry out immigration arrests instead.”

Related

Google Secretly Handed ICE Data About Pro-Palestine Student Activist

HSI agents in Washington have rounded up residents for minor traffic infractions and, earlier this month, fired a gun into a man’s car. In June, HSI took part in the arrest of Newark Mayor Ras Baraka outside an ICE facility he was scheduled to tour with a delegation of New Jersey lawmakers. Charges of trespassing against Baraka were later dismissed.

HSI’s activities, though, go beyond street arrests and workplace raids: This week, 404 Media reported the agency was collecting utility customer data from Con Edison.

Like most large tech firms, Nvidia claims it adheres to various international human rights frameworks, including the Universal Declaration of Human Rights, which prohibits prejudice based on race or national origin.

The post ICE Investigations, Powered by Nvidia appeared first on The Intercept.

]]>
https://theintercept.com/2025/11/01/ice-nvidia-software-hsi-surveillance/feed/ 0 502184
<![CDATA[ICE Plans Cash Rewards for Private Bounty Hunters to Locate and Track Immigrants]]> https://theintercept.com/2025/10/31/ice-plans-cash-rewards-for-private-bounty-hunters-to-locate-and-track-immigrants/ https://theintercept.com/2025/10/31/ice-plans-cash-rewards-for-private-bounty-hunters-to-locate-and-track-immigrants/#respond Sat, 01 Nov 2025 00:04:33 +0000 https://theintercept.com/?p=502278 Companies hired by ICE would be given bundles of information on 10,000 immigrants at a time to locate.

The post ICE Plans Cash Rewards for Private Bounty Hunters to Locate and Track Immigrants appeared first on The Intercept.

]]>
U.S. Immigration and Customs Enforcement is considering hiring private bounty hunters to locate immigrants across the country, according to a procurement document reviewed by The Intercept. Under the plan, bounty hunters may receive “monetary bonuses” depending on how successfully they track down their targets — and how many immigrants they then report to ICE.

According to the document, which solicits information from interested contractors for a potentially forthcoming contract opportunity, companies hired by ICE will be given bundles of information on 10,000 immigrants at a time to locate, with further assignments provided in “increments of 10,000 up to 1,000,000.”

The solicitation says ICE is considering “monetary bonuses” paid out based on performance.

The solicitation says ICE is “exploring an incentive based pricing structure” to encourage quick results, with “monetary bonuses” paid out based on performance. For example, ICE says contractors might get paid a bonus for identifying a person’s correct address on the first try or finding 90 percent of its targets within a set timeframe. (ICE did not immediately respond to a request for comment.)

The document closely resembles a plan reportedly circulated by a group of military contractors, including former Blackwater CEO and Trump ally Erik Prince.

In February, Politico reported that Prince and others were pushing for the formation of a private effort to locate immigrants and a “bounty program which provides a cash reward for each illegal alien held by a state or local law enforcement officer,” according to pitch materials obtained by the outlet.

The proposal called for “skip-tracing,” a method of using available information to locate people — something ICE is already handing out multimillion-dollar contracts for, according to a recent report in The Lever.

Soon, according to the newly published procurement document, private sector ICE contractors will surveil and confirm the home or work addresses of tens of thousands of immigrants in the U.S. and then report their locations back to the government.

“DHS ICE has an immediate need for Skip Tracing and Process Serving Services using Government furnished case data with identifiable information, commercial data verification, and physical observation services, to verify alien address information, investigate alternative alien address information, confirm the new location of aliens, and deliver materials/documents to aliens as appropriate,” according to the October 31 request for information.

Contractors will surveil their target to confirm the accuracy of their home address, including “time-stamped photographs of the location.”

Data provided by ICE will include “case data” provided by the government, location data, social media information, as well as “photos and documents” showing where a person lives or works. With this in hand, contractors will surveil their target to confirm the accuracy of their home address, including “time-stamped photographs of the location,” before reporting back to ICE.

“The vendor should prioritize the alien’s residence,” the document notes, “but failing that will attempt to verify place of employment.”

The plan entails not just on-the-ground monitoring but the use of digital surveillance. ICE says contractors can use off-the-shelf surveillance technology to confirm immigrants’ addresses, including “Enhanced location research, which entails automated and manual real-time skip tracing.”

Surveillance tools that ingest and track mobile phone location data are widely available on the private market, many of them already used by ICE.

“Multiple verification sources are recommended to achieve a high confidence level,” the document says, encouraging potential vendors to use “all technology systems available.” 

The new procurement document notes that “the Government is contemplating awarding contracts to multiple vendors” due to the large number of immigrants whose whereabouts it seeks to confirm.

The post ICE Plans Cash Rewards for Private Bounty Hunters to Locate and Track Immigrants appeared first on The Intercept.

]]>
https://theintercept.com/2025/10/31/ice-plans-cash-rewards-for-private-bounty-hunters-to-locate-and-track-immigrants/feed/ 0 502278 MCALLEN, TX - JUNE 23: A Guatemalan father and his daughter arrives with dozens of other women, men and their children at a bus station following release from Customs and Border Protection on June 23, 2018 in McAllen, Texas. Once families and individuals are released and given a court hearing date they are brought to the Catholic Charities Humanitarian Respite Center to rest, clean up, enjoy a meal and to get guidance to their next destination. Before President Donald Trump signed an executive order Wednesday that halts the practice of separating families who are seeking asylum, over 2,300 immigrant children had been separated from their parents in the zero-tolerance policy for border crossers (Photo by Spencer Platt/Getty Images)
<![CDATA[As Israel Bombed Gaza, Amazon Did Business With Its Bomb-Makers]]> https://theintercept.com/2025/10/24/amazon-weapons-gaza-israel-rafael-iai/ https://theintercept.com/2025/10/24/amazon-weapons-gaza-israel-rafael-iai/#respond Fri, 24 Oct 2025 14:42:15 +0000 The Intercept has learned that Amazon sold cloud services to Israeli weapons firms at the height of Israel’s bombardment of Gaza.


]]>
Amazon sold cloud-computing services to two Israeli weapons manufacturers whose munitions helped devastate Gaza, according to internal company materials obtained by The Intercept.

Amazon Web Services has furnished the Israeli government — including its military and intelligence agencies — with a suite of state-of-the-art data processing and storage services since 2021 as part of its controversial Project Nimbus deal. Last year, The Intercept revealed a provision in that contract requiring Amazon and Google, the other Nimbus vendor, to sell cloud services to Rafael Advanced Defense Systems and Israel Aerospace Industries, two leading Israeli weapons firms.

New internal financial data and emails between Amazon personnel and their Israeli corporate and governmental clients show that Amazon has consistently provided software to both Rafael and IAI in 2024 and 2025 — periods during which Israel’s military was using their products to indiscriminately kill civilians and destroy civil infrastructure. Rafael purchased artificial intelligence technologies made available through Amazon Web Services, including the state-of-the-art large language model Claude, developed by AI startup Anthropic.

The materials reviewed by The Intercept also indicate Amazon sold cloud-computing services to Israel’s nuclear program and offices administering the West Bank, where Israeli military occupation, population displacement, and settlement construction are widely considered illegal under international law.

Amazon proclaims broad commitments to international human rights values, like most of its Big Tech peers. “We’re committed to identifying, assessing, prioritizing, and addressing adverse human rights impacts connected to our business,” the company’s Global Human Rights Principles website states. “Within Amazon’s own operations, we deploy a variety of mechanisms to conduct due diligence, assessing and responding to risks across the company,” including “human rights impact assessments to assess risks specific to Amazon businesses, including in the sectors and the countries where we operate.”

Amazon declined to comment or respond to a list of detailed questions, including whether it conducted a human rights impact assessment pertaining to selling its services to weapons companies whose products are used in a war widely assessed to be genocidal.

Rafael, Israel Aerospace Industries, and the Israeli Ministry of Defense did not respond to a request for comment.

It’s unclear how much money Rafael and IAI paid Amazon for its services. The documents reviewed by The Intercept show that Amazon sold its cloud-computing services to Rafael at a discounted rate, though the exact percentage is not disclosed. The materials cite a 35 percent discount for services sold to the Israeli Ministry of Defense, a major Project Nimbus customer; it’s unclear if this rate is provided to Rafael and IAI as well.

Rafael was founded in 1948 as a governmental weapons research lab and, like its American equivalents at Raytheon or Lockheed, has become synonymous with Israeli militarism. Today, the state-owned company manufactures a diverse arsenal of missiles, bombs, drones, and other weaponry for both domestic use and international export. The corporation has thrived since Hamas’s October 7 attacks, reporting record revenues in both 2023 and 2024 that it attributed to Israel’s bombardment of Gaza. “2024 was a record year for Rafael, during the longest and most complex multi-front war in Israel’s history,” CEO Yoav Turgeman said last year, referring to the ongoing war with Hamas and related regional conflicts. “Rafael played a significant role in Israel’s military achievements in offense, intelligence and defense.”

IAI, another state-owned weapons firm, is best known for co-developing Israel’s anti-rocket Iron Dome system alongside Rafael. The company also manufactures a wide array of military aircraft, including its Heron line of drones — which the company has boasted about being used to great effect in Israel’s war on Gaza. A November 2023 promotional item about IAI’s drones published in the Jerusalem Post noted that “In the face of the October 7 challenges, the HERON UAS demonstrated its strategic importance by providing real-time intelligence, supporting targeted acquisitions, and aiding in the neutralization of threats.”

Missiles and other weapon systems built by Rafael and IAI have been used against Palestinians throughout the Gaza war. One of the most prominent Rafael weapons is its line of missile guidance kits dubbed SPICE: “Smart, Precise Impact, and Cost-Effective.” The SPICE technology converts “dumb” 1,000- or 2,000-pound bombs into “smart” guided munitions. In September 2024, Israel bombed a refugee camp — previously designated by the government as a “safe zone” for displaced Palestinians — with what weapons analysts later assessed was a 2,000-pound SPICE-guided bomb. The attack, condemned by the United Nations as “unconscionable,” killed at least 19 Palestinians, including women and children, with a massive explosion that burned, shredded, and in some cases buried those who’d sought shelter at the site. Fragments of a SPICE guidance kit were found amid the wreckage of a December 2024 airstrike on a house in central Gaza that reportedly killed 12 civilians.

People inspect the site following Israeli strikes on a tent camp sheltering displaced people in the Al-Mawasi area in Khan Younis, in the southern Gaza Strip, on Sept. 10, 2024. Photo: Majdi Fathi/NurPhoto via Getty Images

Retired Air Force operator and weapons targeting expert Wes Bryant described Rafael and IAI as “highly integral to Israel’s defense industrial complex,” telling The Intercept both companies are implicated in killing civilians. Israel has been criticized for its frequent use of 2,000-pound bombs in Gaza, one of the densest urban areas in the world. “It could level multiple large houses in the average suburban American neighborhood,” Bryant explained. “Ideally the only time they should be used in urban warfare is when we have identified a large and/or hardened enemy structure and confirmed it is entirely in use by the enemy and has no civilian function nor civilians within or around it at risk.”

Rafael’s electro-optically guided Spike family of missiles is designed both to punch through and destroy heavily armored tanks and to kill people, and can be fired from portable ground launchers in addition to drones or other vehicles. Some Spike missiles use “shaped charge” warheads, which slice into targets with a cone of scalding metal launched from the weapon as it detonates. In 2009, a former Pentagon official described the Spike to Haaretz as “a special missile that is made to make very high-speed turns, so if you have a target that is moving and running away from you, you can chase him with the weapon.” Rafael marketing materials note one variant “can be used in urban combat against structural targets found in urban settings for in-structure detonation.” Arms experts have at times attributed devastating, widespread shrapnel wounds inflicted upon Palestinian civilians to Spike missiles, which can be packed with tiny pieces of tungsten. When a tungsten-loaded Spike weapon hits its target, the 3-millimeter metal cubes blast outward in a 65-foot radius, lacerating blood vessels, puncturing organs, and shredding the flesh of anyone nearby, according to analysts.

In April 2024, an investigation by The Times of London revealed Israel used a drone-launched Spike missile manufactured by Rafael to kill seven aid workers with World Central Kitchen. U.N. special rapporteur for the occupied Palestinian territories Francesca Albanese called for indictments following the attack, echoing international condemnations and demands for an inquiry into whether the airstrike constituted a war crime.

“Though the IDF does not release numbers of munitions utilized in the war in Gaza, SPIKE missiles have been used extensively and have been attributed by many investigations to the death of civilians, including children,” said Bryant. “It is likely that Israel has used dozens, if not hundreds, of SPIKE missiles throughout Gaza since the outset of the conflict.”

Both Rafael and IAI supply the Israeli military with so-called loitering munitions: suicide drones that can hover for extended periods while scanning for targets, then quickly slam into the ground and detonate an onboard explosive. Both companies’ weapons are frequently highlighted when the Israeli military–industrial apparatus wants to tout its technological supremacy. In July, Rafael posted a promotional video using footage of its Firefly suicide drone killing an apparently unarmed person walking down the street in an unidentified area of Gaza. Suicide drone attacks have also been documented in the occupied West Bank; a December 2023 video captured a Firefly explosive descending into a dense courtyard.

Israel’s military similarly promoted the use of the shoulder-fired Matador rocket, co-developed by Rafael, in a March 2024 video reported by Israeli outlet Ynet: “In the clip, one of the terrorists opened fire from a room inside an apartment — and the use of a Matador missile targeting him precisely to eliminate the threat.” The outlet noted “a woman and two children” were in the adjoining room, but claimed they were not harmed in the missile attack against their home.

The Israeli military did not respond to a request for comment.

The documents show that Rafael acquired generative artificial intelligence tools through Amazon. In 2024, the firm sought to begin testing generative AI services made available through Amazon’s Bedrock service, which provides customers with machine-learning tools, including those made by third-party firms. According to the files, Rafael wanted to use both Amazon’s Titan G1 large language model and Claude, the advanced model created by Anthropic.

Like its competitor OpenAI, Anthropic recently pivoted toward military contracting, announcing a $200 million deal with the Pentagon in July. Anthropic’s usage policy prohibits the use of its technology to “Produce, modify, design, or illegally acquire weapons,” and to “Design or develop weaponization and delivery processes for the deployment of weapons.” It’s unclear how the use of Claude by Rafael — a company that exists to design, develop, and deliver weapons — could be in compliance with this policy. The documents reviewed by The Intercept indicate Rafael was able to purchase access to these models, but do not reveal how they were used.

Anthropic did not respond to questions about Rafael’s usage of Claude, or whether it would permit a weapons company to use its services despite an apparent ban on exactly that. In a statement, spokesperson Eduardo Maia Silva said, “Anthropic services are available to users, including governments, in most countries and regions around the world under our standard commercial Usage Policy. Users are required to comply with our Usage Policies which include restrictions and prohibitions around how Claude can be deployed.”

Project Nimbus has been a military program from its start. The Israeli Ministry of Finance declared in 2021 that its purpose was “to provide the government, the defense establishment and others with an all encompassing cloud solution.” Google, Amazon’s co-contractor on the project, has repeatedly denied that Nimbus involves “highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” while Amazon has generally refrained from commenting at all.

A separate internal Amazon document obtained by The Intercept shows that the company was quietly lobbying Israel to allow it to handle classified material from the country’s defense and intelligence community. The document, an overview of Israel’s regulatory landscape, explained that the country’s military and spy agencies were reluctant to migrate classified data onto Amazon’s cloud servers. But the paper also notes that Amazon was trying to influence state regulators into changing this position, and had begun working with one unnamed, major government body to bring some of its classified materials onto AWS.

Portions of the internal financial materials indicate exactly which Amazon services the Israeli military and state-owned weapons firms use. The purchases include dozens of networking, storage, and security tools, including Elastic Compute Cloud, which lets customers run software in virtual computers hosted by Amazon. Multiple documents show the Israeli Ministry of Defense purchased access to Amazon Rekognition, the company’s face-recognition tool, including for an unspecified “OSINT,” or open-source intelligence, project by the Israeli military’s Central Command. Rekognition has previously been criticized for its lower accuracy rates with women and people of color; in 2020, the company announced a self-imposed yearlong moratorium on police use of Rekognition, citing the need for “stronger regulations to govern the ethical use of facial recognition technology.” The system, according to Amazon, is capable not only of identifying faces, but also of detecting a range of emotions, including “fear.”

The documents show the Israeli military has also used Amazon technology to test large language models, though the specific models or applications are not mentioned. One Israeli military username includes the number 9900, a possible sign of use by the IDF’s Unit 9900, a geospatial intelligence unit that aided in planning strikes in Gaza, including through the use of a spy satellite developed by IAI. Unit 9900 also purchased cloud services from Microsoft, according to a January report by The Guardian and +972 Magazine.

The documents indicate that another Amazon customer through its Nimbus contract is the Israeli state-operated Soreq Nuclear Research Center, a scientific installation constructed in cooperation with the United States in the 1950s. Although Israel’s nuclear weapons arsenal is technically secret and unacknowledged by its government, Soreq operates in the open, ostensibly part of the country’s civilian atomic energy program. Unlike Israel’s highly classified Negev Nuclear Research Center, Soreq is not believed to be a major contributor to the country’s weapons capabilities. A 1987 Pentagon study, however, stated the Soreq installation “runs the full nuclear gamut of activities … required for nuclear weapons design and fabrication.” A 2002 report by the Stockholm International Peace Research Institute noted “The Soreq Center shares a security zone with the Palmikhim AB,” an Israeli Air Force base, “from where missiles are assembled and test launched into the Mediterranean Sea.”

A separate document briefly references as AWS users unspecified government offices in “Judea and Samaria,” Israel’s term for the West Bank, which it has illegally occupied since 1967. Ioannis Kalpouzos, a visiting professor at Harvard Law School and an expert on human rights law and laws of war, told The Intercept that Amazon’s work with Israeli weapons makers could potentially create liability under international law depending on “whether it is foreseeable that it will lead to the commission of international crimes.”

“There is no need for genocidal intent for accessorial liability in aiding the principal to commit genocide,” Kalpouzos said.

Related

Google Worried It Couldn’t Control How Israel Uses Project Nimbus, Files Reveal

It’s unclear to what extent Amazon is aware of how its services are being used by the companies that build Israel’s bombs or the military that drops them. The Intercept previously reported internal anxieties amid the bidding process at Google, where leadership fretted that the project was structured in such a way that the company would be kept in the dark about how exactly its technology would be used, potentially in violation of human rights standards. While servicing the Israeli government includes plenty of mundane applications — say, transportation, schools, or hospitals — in addition to its military, there’s little nuance in the operations of Rafael and IAI. Even if Amazon lacks the ability to conduct oversight of these customers, Bryant said there is little ambiguity when it comes to the purpose of their business: building and selling weapons.

“I don’t see how Amazon can make a claim of not being complicit in killing,” said Bryant, who previously led civilian harm assessments at the Pentagon, “even if they don’t fully know what everything is used for.”


]]>
<![CDATA[Videos of Charlie Kirk’s Murder Are Still on Social Media — and That’s No Accident]]> https://theintercept.com/2025/09/24/charlie-kirk-shooting-video-content-moderation/ https://theintercept.com/2025/09/24/charlie-kirk-shooting-video-content-moderation/#respond Wed, 24 Sep 2025 18:04:16 +0000 Politicians demanding the removal of videos of Kirk’s killing pushed tech companies to gut the very systems they now expect to protect them.


]]>
Charlie Kirk hands out hats before he was shot and killed during an event at Utah Valley University in Orem, Utah, on Sept. 10, 2025.  Photo: Tess Crowley/The Deseret News via AP

After Charlie Kirk was murdered at Utah Valley University, graphic videos of the right-wing provocateur’s assassination went viral on every major social media platform. It’s not surprising that such violent footage quickly spread — especially around a killing as high-profile as Kirk’s. What’s unusual, however, is how long those videos have been allowed to stay up.

Search Kirk’s name on Instagram right now, and for every three videos of him “owning” a college student in a debate, there’s at least one of him bleeding out. Search “Charlie Kirk shooting,” and your feed will be inundated with videos of the incident. This was not always the case. After a gunman livestreamed his attack at a mosque in Christchurch, New Zealand, in 2019, Meta said it took down 1.2 million versions of the video before users could upload them to the platform. The Southern Poverty Law Center also tracked uploads of videos after mass shootings in Christchurch; Halle, Germany; and Buffalo, New York, and found a dramatic decrease after the seventh day of each of those shootings.

Related

America Tolerates High Levels of Violence but Suppresses Photos of the Slaughter

Owners of social media companies like Facebook, Instagram, X, and YouTube have traditionally responded much faster to the proliferation of such graphic violence on their platforms, at least in the West. (Internet users in places like Gaza or Tigray, where these platforms dedicate fewer resources to moderation, are all too familiar with the kind of deluge of gore American users were subjected to these past few weeks.)

Lawmakers including Rep. Lauren Boebert, R-Colo., and Rep. Anna Paulina Luna, R-Fla., have called on the platforms to delete the videos of Kirk’s gruesome assassination.

“He has a family, young children, and no one should be forced to relive this tragedy online. These are not the only graphic videos of horrifying murders circulating — at some point, social media begins to desensitize humanity. We must still value life,” Luna wrote on her X account. “Please take them down.”

But for several years, Republican legislators, in the name of free speech, have pushed tech companies to gut the very systems they now expect to protect them. It was part of a pressure campaign intended to force social media companies to fire moderators, abandon fact-checking, and weaken their hate speech policies. As Luna and Boebert now demand the removal of videos of Kirk’s gruesome assassination, they’re experiencing the predictable consequence of the information ecosystem their party created — and are horrified that the chaos has turned inward.

In 2023, after Rep. Jim Jordan, R-Ohio, succeeded Jerry Nadler, D-N.Y., as chair of the House Judiciary Committee, he immediately used his platform to start subpoenaing Big Tech and research organizations that study online hate speech and misinformation, like the Stanford Internet Observatory. Jordan accused them of a “marriage of big government, big tech [and] big academia” that attacked “American citizens’ First Amendment liberties.” Notably, last year, congressional Republicans accused the FBI and tech platforms of collaborating to defeat Donald Trump in the 2020 election by suppressing posts related to Hunter Biden’s laptop.

Meanwhile, conservative activists sued the Biden administration, complaining that it pressured social media companies to censor conservative views on Covid-19 vaccines and election fraud. Though they lost the suit, Republicans have long held that platforms overly censor their posts. Studies, however, show that Republicans are far more likely to spread misinformation. During the 2016 election, for example, 80 percent of the disinformation on Facebook came from Republican-leaning posts. Another 2023 study found that conservatives were eight times more likely to spread misleading content than those who lean liberal. In other words, Republicans were more likely to be censored by social media because their posts were more likely to violate platform policies.

Of course, a lot has changed since then, and tech companies have gone much further in appeasing conservatives. Perhaps the biggest coup for conservatives in the battle against “liberal tech” was Elon Musk’s purchase and subsequent rebranding of Twitter. To appease Republican activists, Musk — who recently advocated for the imprisonment of those who belittle the death of Kirk — promised to turn Twitter into a “free speech” platform. His first move was laying off a majority of the company’s staff involved in devising and implementing its content moderation policies. One former Twitter staffer who worked in this division estimated that almost 90 percent of the company’s content moderation staff was laid off. Twitter, now X, also said it would rely on its Community Notes feature and AI to moderate content.

Related

My Ban From X Is About One Simple Thing: Elon Musk Controlling the Flow of Information

Musk’s changes were not only in staffing, but also in how strongly the company enforces its policies. While Twitter’s hate speech policies still exist on paper, the platform has chosen not to enforce them, and has instead verified hundreds of accounts belonging to white supremacists, reinstated the accounts of notorious promoters of anti-trans content, and, of course, brought back Trump, who was excommunicated from the platform for his role in inciting the January 6 riots. Musk also joined Republicans’ attack on researchers who monitor disinformation by suing the Center for Countering Digital Hate in 2023 — though that lawsuit was later dismissed.

The inflection point for this yearslong campaign by conservative activists was Meta’s capitulation to their demands shortly after Trump’s election win. In January, CEO Mark Zuckerberg, dressed in a loose black T-shirt and a gold chain, told Facebook and Instagram users the company would drastically scale back its third-party fact-checking operation. The company would also ease enforcement of its hate speech rules, especially around immigration and gender. “It’s time to get back to our roots around free expression on Facebook and Instagram,” Zuckerberg said.

While Meta, YouTube, and others have said their content policies apply to the videos of Kirk’s assassination, in capitulating to Republican demands they have not only weakened their review of content but also shed much of the staff that does that work.

“You can’t have it both ways: Weakening moderation inevitably means violent and graphic content is left up for longer and spreads more quickly.”

Like Twitter, Meta has since quietly laid off many of the people who work on its trust and safety teams, while also announcing it would double down on AI-based moderation. Not even a month after Meta announced its content policy changes, users reported seeing more graphic content on the platform.

“Underinvesting in platform safety has serious consequences,” says Martha Dark, the co-executive director of Foxglove Legal, a tech accountability nonprofit that advocates for content moderators. “It’s striking that after years of demanding platforms ease up on enforcement, some politicians are now outraged at the very consequences of that pressure. You can’t have it both ways: Weakening moderation inevitably means violent and graphic content is left up for longer and spreads more quickly,” Dark adds.

As for the tech companies’ claims that AI can carry the burden of their content moderation load: Olivia Conti, a former Twitter product manager who focused on abuse detection algorithms, told me that these algorithms may as well be “pizza detectors” because they “flag anything with predominantly red tones.” Even the hashing technology that tech platforms have traditionally used to identify these videos can easily be evaded through small edits.
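The brittleness of exact-match hashing is easy to demonstrate: flip a single bit of a file and its cryptographic digest changes completely, so a blocklist keyed on the original hash never flags the edited copy. Here is a minimal Python sketch of that effect (illustrative only; platforms typically rely on perceptual hashes such as PhotoDNA or PDQ, which tolerate some edits but can still be defeated by heavier ones):

```python
import hashlib

# Stand-in for a video file's raw bytes.
video = bytes(range(256)) * 1024

# Simulate a trivial re-edit: flip a single bit in one byte.
edited = bytearray(video)
edited[0] ^= 0x01

h_original = hashlib.sha256(video).hexdigest()
h_edited = hashlib.sha256(bytes(edited)).hexdigest()

# The digests share essentially nothing, so an exact-match
# blocklist built from h_original misses the edited copy.
print(h_original == h_edited)  # False
```

This is why platforms moved to perceptual hashing in the first place, and why even those systems need human teams behind them: small crops, filters, or re-encodes shift a video just enough to slip past automated matching.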

Ellery Biddle, the director of impact at Meedan, a technology nonprofit that studies harmful speech and gender-based violence online, says that while some content moderation can be assisted by AI, “you still need teams of smart people to tell the AI what to do.”

Republicans intended to take aim at the teams that moderate hate speech and harassment. But those very people are also responsible for the job of monitoring and removing gruesome videos, like that of Kirk’s death.


]]>
<![CDATA[Google Secretly Handed ICE Data About Pro-Palestine Student Activist]]> https://theintercept.com/2025/09/16/google-facebook-subpoena-ice-students-gaza/ https://theintercept.com/2025/09/16/google-facebook-subpoena-ice-students-gaza/#respond Tue, 16 Sep 2025 14:24:37 +0000 Google handed over Gmail account information to ICE before notifying the student or giving him an opportunity to challenge the subpoena.


]]>
Even before immigration authorities began rounding up international students who had spoken out about Israel’s war on Gaza earlier this spring, there was a sense of fear among campus activists. Two graduate students at Cornell University — Momodou Taal and Amandla Thomas-Johnson — were so worried they would be targeted that they fled their dorms to lay low in a house outside Ithaca, New York.

As they feared, Homeland Security Investigations, the intelligence division of U.S. Immigration and Customs Enforcement, was intent on tracking them both down. As agents scrambled to find Taal and Thomas-Johnson, HSI sent subpoenas to Google and Meta for sensitive information about their Gmail, Facebook, and Instagram accounts.

In Thomas-Johnson’s case, The Intercept found, Google handed over data to ICE before notifying him or giving him an opportunity to challenge the subpoena. By the time he found out about the data demand, Thomas-Johnson had already left the U.S.

During the first Trump administration, tech companies publicly fought federal subpoenas on behalf of their users who were targeted for protected speech — sometimes with great fanfare. With ICE ramping up its use of dragnet tools to meet its deportation quotas and smoke out noncitizens who protest Israel’s war on Gaza, Silicon Valley’s willingness to accommodate these kinds of subpoenas puts those who speak out at greater risk.

Lindsay Nash, a professor at Cardozo School of Law in New York who has studied ICE’s use of administrative subpoenas, said she was concerned but not surprised that Google complied with the subpoena about Thomas-Johnson’s account without notifying him.

“Subpoenas can easily be used and the person never knows.”

“Subpoenas can easily be used and the person never knows,” Nash told The Intercept. “It’s problematic to have a situation in which people who are targeted by these subpoenas don’t have an opportunity to vindicate their rights.”

Google declined to discuss the specifics of the subpoenas, but the company said administrative subpoenas like these do not include facts about the underlying investigation.

“Our processes for handling law enforcement subpoenas are designed to protect users’ privacy while meeting our legal obligations,” said a Google spokesperson in an emailed statement. “We review every subpoena and similar order for legal validity, and we push back against those that are overbroad or improper, including objecting to some entirely.”

ICE agents sent the administrative subpoenas to Google and Meta by invoking a broad legal provision that gives immigration officers authority to demand documents “relating to the privilege of any person to enter, reenter, reside in, or pass through the United States.”

One recent study based on ICE records found agents invoke this same provision hundreds of times each year in administrative subpoenas to tech companies. Another study found ICE’s subpoenas to tech companies and other private entities “overwhelmingly sought information that could be used to locate ICE’s targets.”

Unlike search warrants, administrative subpoenas like these do not require a judge’s signature or probable cause of a crime, which means they are ripe for abuse.

Silicon Valley’s willingness to accommodate these kinds of subpoenas puts those who speak out at greater risk.

HSI had flagged Taal to the State Department following “targeted analysis to substantiate aliens’ alleged engagement of antisemitic activities,” according to an affidavit later filed in court by a high-ranking official. This analysis amounted to a trawl of online articles about Taal’s participation in Gaza protests and run-ins with the Cornell administration. The State Department revoked Taal’s visa, and ICE agents in upstate New York began searching for him.

In mid-March, the week after Mahmoud Khalil was arrested in New York City, Taal sued the Trump administration, seeking an injunction that would have blocked ICE from detaining him too. By this point, he and Thomas-Johnson had both left their campus housing at Cornell and were hiding from ICE in a house 10 miles outside Ithaca.

Two days after Taal filed his suit, still unable to track him down, ICE sent an administrative subpoena to Meta. According to notices Meta emailed to Taal, the subpoena sought information about his Instagram and Facebook accounts. Meta gave Taal 10 days to challenge the subpoena in court before the company would comply and hand over data about his accounts to ICE.

Like Google, Meta declined to discuss the subpoena it received about Taal’s account, referring The Intercept to a webpage about the company’s compliance with data demands.

A week later, HSI sent another administrative subpoena to Google regarding Taal’s Gmail account, according to a notice Google sent him the next day.

“It was a phishing expedition,” Taal said in a text message to The Intercept.

After Taal decided to leave the country and dismissed his lawsuit in April, ICE withdrew its subpoenas for his records.

Do you have information about DHS or ICE targeting activists online? Use a personal device to contact Shawn Musgrave on Signal at shawnmusgrave.82

But on the last day of March, HSI sent yet another subpoena, this one to Google for information about Thomas-Johnson’s Gmail account. Google complied with the subpoena without giving Thomas-Johnson any advance warning or opportunity to challenge it, and notified him only weeks later.

“Google has received and responded to legal process from a Law Enforcement authority compelling the release of information related to your Google Account,” read an email Google sent him in early May.

By this point, Thomas-Johnson had already left the country too. He fled after a friend was detained at the Tampa airport, handed a note with Thomas-Johnson’s name on it, and asked repeatedly about his whereabouts, he told The Intercept.

Thomas-Johnson’s lawyer, who also represented Taal, reached out to an attorney for Google about the demand for his client’s account information.

“Google has already fulfilled this subpoena,” Google’s attorney replied by email, further explaining that Google’s “production consisted of basic subscriber information,” such as the name, address, and phone number associated with the account. Google did not produce “the contents of communications, metadata regarding those communications, or location information,” the company’s attorney wrote.

“This is the extent that they will go to be in support of genocide,” Taal said of the government’s attempts to locate him using subpoenas.

Correction: September 16, 2025, 12:40 p.m. ET
The story has been updated to correct the spelling of Amandla Thomas-Johnson’s last name.

The post Google Secretly Handed ICE Data About Pro-Palestine Student Activist appeared first on The Intercept.

]]>
https://theintercept.com/2025/09/16/google-facebook-subpoena-ice-students-gaza/feed/ 0 499076 U.S. President Donald Trump listens to a question from a reporter during a press conference with Ukrainian President Volodymyr Zelensky following their meeting at Trump’s Mar-a-Lago club on December 28, 2025 in Palm Beach, Florida. Rep. Dan Goldman (D-N.Y.) arrives for a vote at the U.S. Capitol March 31, 2025. (Francis Chung/POLITICO via AP Images) U.S. soldiers of the 3rd Brigade, 4th Infantry Division, look on a mass grave after a day-long battle against the Viet Cong 272nd Regiment, about 60 miles northwest of Saigon, in March 1967.
<![CDATA[Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency]]> https://theintercept.com/2025/09/12/proton-mail-journalist-accounts-suspended/ https://theintercept.com/2025/09/12/proton-mail-journalist-accounts-suspended/#respond Fri, 12 Sep 2025 20:56:29 +0000 The journalists were reporting on suspected North Korean hackers. Proton only reinstated their accounts after a public outcry.

The post Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency appeared first on The Intercept.

]]>
The company behind the Proton Mail email service, Proton, describes itself as a “neutral and safe haven for your personal data, committed to defending your freedom.”

But last month, following a complaint by an unspecified cybersecurity agency, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems. After a public outcry, and after multiple weeks, the journalists’ accounts were eventually reinstated, but the reporters and editors involved still want answers about how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, said numerous newsrooms use Proton’s services as alternatives to providers like Gmail “specifically to avoid situations like this.” “While it’s good to see that Proton is reconsidering account suspensions,” he said, “journalists are among the users who need these and similar tools most.” Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions.

Shelton noted that perhaps Proton should “prioritize responding to journalists about account suspensions privately, rather than when they go viral.”

On Reddit, Proton’s official account stated that “Proton did not knowingly block journalists’ email accounts” and that the “situation has unfortunately been blown out of proportion.” Proton did not respond to The Intercept’s request for comment.

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation — what’s known in cybersecurity parlance as an APT, or advanced persistent threat — had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC.

The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023.

As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what’s known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.

Saber and cyb0rg created a dedicated Proton Mail account to coordinate the responsible disclosures, then proceeded to notify the impacted parties, including the Ministry of Foreign Affairs and the DCC, and also notified South Korean cybersecurity organizations like the Korea Internet and Security Agency, and KrCERT/CC, the state-sponsored Computer Emergency Response Team. According to emails viewed by The Intercept, KrCERT wrote back to the authors, thanking them for their disclosure.

A note on cybersecurity jargon: CERTs are agencies of cybersecurity experts who specialize in responding to security incidents. CERTs exist in over 70 countries — some countries have multiple CERTs, each specializing in a particular field such as the financial sector — and may be government-sponsored or private organizations. They adhere to formal technical standards and are expected to react to reported cybersecurity threats and security incidents. A high-profile example of a CERT agency in the U.S. is the Cybersecurity and Infrastructure Security Agency, which has recently been gutted by the Trump administration.

A week after the print issue of Phrack came out, and a few days before the digital version was released, Saber and cyb0rg found that the Proton account they had set up for the responsible disclosure notifications had been suspended. A day later, Saber discovered that his personal Proton Mail account had also been suspended. Phrack posted a timeline of the account suspensions at the top of the published article, and later highlighted the timeline in a viral social media post. Both accounts were suspended owing to an unspecified “potential policy violation,” according to screenshots of account login attempts reviewed by The Intercept.

The suspension notice instructed the authors to fill out Proton’s abuse appeals form if they believed the suspension was in error. Saber did so, and received a reply from a member of Proton Mail’s Abuse Team who went by the name Dante.

In an email viewed by The Intercept, Dante told Saber that their account “has been disabled as a result of a direct connection to an account that was taken down due to violations of our terms and conditions while being used in a malicious manner.” Dante also provided a link to Proton’s terms of service, going on to state, “We have clearly indicated that any account used for unauthorized activities, will be sanctioned accordingly.” The response concluded by stating, “We consider that allowing access to your account will cause further damage to our service, therefore we will keep the account suspended.”

On August 22, a Phrack editor reached out to Proton, writing that no hacked data had passed through the suspended email accounts and asking whether the account suspensions could be de-escalated. After receiving no response from Proton, the editor sent a follow-up email on September 6. Proton again did not reply.

On September 9, the official Phrack X account made a post asking Proton’s official account why it was “cancelling journalists and ghosting us,” adding: “need help calibrating your moral compass?” The post quickly went viral, garnering over 150,000 views.

Proton’s official account replied the following day, stating that Proton had been “alerted by a CERT that certain accounts were being misused by hackers in violation of Proton’s Terms of Service. This led to a cluster of accounts being disabled. Our team is now reviewing these cases individually to determine if any can be restored.” Proton then stated that they “stand with journalists” but “cannot see the content of accounts and therefore cannot always know when anti-abuse measures may inadvertently affect legitimate activism.”

Proton did not publicly specify which CERT had alerted it, and it did not answer The Intercept’s request for the agency’s name. KrCERT also did not reply to The Intercept’s question about whether it was the CERT that had sent the alert.

Related

Proton Mail Says It’s “Politically Neutral” While Praising Republican Party

Later in the day, Proton’s founder and CEO Andy Yen posted on X that the two accounts had been reinstated. Neither Yen nor Proton explained why the accounts were reinstated, whether they had been found not to violate the terms of service after all, why they had been suspended in the first place, or why a member of the Proton Abuse Team reiterated during Saber’s appeal that the accounts had violated the terms of service.

Phrack noted that the account suspensions created a “real impact to the author. The author was unable to answer media requests about the article.” The co-authors, Phrack pointed out, were also in the midst of the responsible disclosure process and working together with the various affected South Korean organizations to help fix their systems. “All this was denied and ruined by Proton,” Phrack stated. 

Phrack editors said that the incident leaves them “concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent.”

The post Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency appeared first on The Intercept.

]]>
https://theintercept.com/2025/09/12/proton-mail-journalist-accounts-suspended/feed/ 0 498895
<![CDATA[Alex Karp Insists Palantir Doesn’t Spy on Americans. Here’s What He’s Not Saying.]]> https://theintercept.com/2025/09/12/palantir-spy-nsa-snowden-surveillance/ https://theintercept.com/2025/09/12/palantir-spy-nsa-snowden-surveillance/#respond Fri, 12 Sep 2025 14:00:00 +0000 Documents from Edward Snowden published by The Intercept in 2017 show the NSA’s use of Palantir technology.

The post Alex Karp Insists Palantir Doesn’t Spy on Americans. Here’s What He’s Not Saying. appeared first on The Intercept.

]]>
In an exchange this week on the “All-In Podcast,” Alex Karp was on the defensive. The Palantir CEO used the appearance to downplay and deny the notion that his company would engage in rights-violating surveillance work.

“We are the single worst technology to use to abuse civil liberties, which is by the way the reason why we could never get the NSA or the FBI to actually buy our product,” Karp said.

What he didn’t mention was the fact that a tranche of classified documents revealed by Edward Snowden and The Intercept in 2017 showed how Palantir software helped the National Security Agency and its allies spy on the entire planet.

Palantir has attracted increased scrutiny as the pace of its business with the federal government has surged during the second Trump administration. In May, the New York Times reported that Palantir would play a central role in a White House plan to boost data sharing between federal agencies, “raising questions over whether [Trump] might compile a master list of personal information on Americans that could give him untold surveillance power.” Karp immediately rejected that report in a June interview on CNBC as “ridiculous shit,” adding that “if you wanted to use the deep state to unlawfully surveil people, the last platform on the world you would pick is Palantir.”

Karp made the same argument in this week’s podcast appearance, after “All-In” co-host David Sacks — the Trump administration AI and cryptocurrency czar — pressed him on matters of privacy, surveillance, and civil liberties. “One of the criticisms or concerns that I hear on the right or from civil libertarians is that Palantir has a large-scale data collection program on American citizens,” Sacks said.

Karp replied by alleging that he had been approached by a Democratic presidential administration and asked to build a database of Muslims. “We’ve never done anything like this. I’ve never done anything like this,” Karp said, arguing that safeguards built into Palantir would make it undesirable for signals intelligence. That’s when he said the company’s refusal to abuse civil liberties is “the reason why we could never get the NSA or the FBI to actually buy our product.”

Karp later stated: “To your questions, no, we are not surveilling,” taking a beat before adding, “uh, U.S. citizens.”

Related

How Peter Thiel’s Palantir Helped the NSA Spy on the Whole World

In 2017, The Intercept published documents originally provided by Snowden, a whistleblower and former NSA contractor, demonstrating how Palantir software was used in conjunction with a signals intelligence tool codenamed XKEYSCORE, one of the most explosive revelations from the NSA whistleblower’s 2013 disclosures. XKEYSCORE provided the NSA and its foreign partners with a means of easily searching through immense troves of data and metadata covertly siphoned across the entire global internet, from emails and Facebook messages to webcam footage and web browsing. A 2008 NSA presentation describes how XKEYSCORE could be used to detect “Someone whose language is out of place for the region they are in,” “Someone who is using encryption,” or “Someone searching the web for suspicious stuff.”

Later in 2017, BuzzFeed News reported Palantir’s working relationship with the NSA had ceased two years prior, citing an internal presentation delivered by Karp. Palantir did not provide comment for either The Intercept’s or BuzzFeed News’ reporting on its NSA work.

The Snowden documents describe how intelligence data queried through XKEYSCORE could be imported straight into Palantir software for further analysis. One document mentions use of Palantir tools in “Mastering The Internet,” a joint NSA/GCHQ mass surveillance initiative that included pulling data directly from the global fiber optic cable network that underpins the internet. References inside HTML files from the NSA’s Intellipedia, an in-house reference index, included multiple nods to the company, such as “Palantir Classification Helper,” “[Target Knowledge Base] to Palantir PXML,” and “PalantirAuthService.”

And although Karp scoffed at the idea that Palantir software would be suitable for “deep state” usage, a British intelligence document also published by The Intercept quotes GCHQ as saying the company’s tools were developed “through [an] iterative collaboration between Palantir computer scientists and analysts from various intelligence agencies over the course of nearly three years.”

Karp’s carefully worded claim that Palantir doesn’t participate in the surveillance of Americans specifically would be difficult, if not impossible, for the company to establish with any certainty. From the moment of its disclosure, XKEYSCORE presented immense privacy and civil liberties threats to Americans and noncitizens alike. But in the United States, much of the debate centered on the question of how much data on U.S. citizens is ingested — intentionally or otherwise — by the NSA’s globe-spanning surveillance capabilities.

Even without the NSA directly targeting Americans, their online speech and other activity are swept up during the agency’s efforts to spy on foreigners: say, if a U.S. citizen were to email a noncitizen who is later targeted by the agency. Even if the public takes the NSA at its word that it does not deliberately collect and process information on Americans through tools like XKEYSCORE, the agency claims the legal authority under Section 702 of the Foreign Intelligence Surveillance Act to subsequently share data it “incidentally” collects with other U.S. agencies, including the FBI.

The legality of such collection remains contested. Legal loopholes created in the name of counterterrorism and national security leave large gaps through which the NSA and its partner agencies can effectively bypass legal protections against spying on Americans and the Fourth Amendment’s guarantee against warrantless searches.

A 2014 report by The Guardian on the collection of webcam footage explained that GCHQ, the U.K.’s equivalent of the NSA, “does not have the technical means to make sure no images of UK or US citizens are collected and stored by the system, and there are no restrictions under UK law to prevent Americans’ images being accessed by British analysts without an individual warrant.” The report notes “Webcam information was fed into NSA’s XKeyscore search tool.”

In 2021, the federal Privacy and Civil Liberties Oversight Board concluded a five-year investigation into XKEYSCORE. In declassified remarks reported by the Washington Post, Travis LeBlanc, a board member who took part in the inquiry, said the NSA’s analysis justifying XKEYSCORE’s legality “lacks any consideration of recent relevant Fourth Amendment case law on electronic surveillance that one would expect to be considered.”

“The former Board majority failed to ask critical questions like how much the program costs financially to operate, how many U.S. persons have been impacted by XKEYSCORE,” his statement continued. “While inadvertently or incidentally intercepted communications of U.S. persons is a casualty of modern signals intelligence, the mere inadvertent or incidental collection of those communications does not strip affected U.S. persons of their constitutional or other legal rights.”

Palantir did not respond when asked by The Intercept about the discrepancy between its CEO’s public remarks and its documented history helping spy agencies at home and abroad use what the NSA once described as its “widest reaching” tool.

The post Alex Karp Insists Palantir Doesn’t Spy on Americans. Here’s What He’s Not Saying. appeared first on The Intercept.

]]>
https://theintercept.com/2025/09/12/palantir-spy-nsa-snowden-surveillance/feed/ 0 498802
<![CDATA[Democrats Have a Gerontocracy Problem. The Crypto Industry Is Using That to Its Advantage.]]> https://theintercept.com/2025/09/04/brad-sherman-primary-crypto-jake-rakov/ https://theintercept.com/2025/09/04/brad-sherman-primary-crypto-jake-rakov/#respond Thu, 04 Sep 2025 17:00:20 +0000 Veteran Rep. Brad Sherman is a major crypto skeptic — and industry insiders are supporting his youthful primary opponents.

The post Democrats Have a Gerontocracy Problem. The Crypto Industry Is Using That to Its Advantage. appeared first on The Intercept.

]]>
When former congressional staffer Jake Rakov launched a primary bid against his old boss, Rep. Brad Sherman, D-Calif., the race seemed to fit a pattern.

The Democratic primary season is quickly shaping up to be dominated by intergenerational battles — and Rakov, at 37, presented himself as a fresh face against Sherman, who has been in Congress since 1997.

Other political forces, however, appear to be at work on Rakov’s campaign. As soon as he announced his challenge, donations from three officials at cryptocurrency trade groups landed in the upstart’s coffers, with a fourth donor coming in on their heels.

“Crypto is smart enough to realize that there are broad concerns among Democrats about aging in office.”

Jeff Hauser, the executive director of the Revolving Door Project, a left-leaning group that is critical of the digital assets industry, said crypto appears poised to use the narrative that incumbent Democrats are too old and out of touch.

“It would definitely send a message were they to be able to dislodge him,” Hauser said of Sherman. “Crypto is smart enough to realize that there are broad concerns among Democrats about aging in office.”

The cash, however, was a drop in the bucket: Unseating a fixture like Sherman would take more muscle. It nonetheless signaled how eager crypto is to topple industry skeptics.

Crypto insiders and skeptics alike say the industry is poised to latch on to the developing youth-versus-age narrative to elect friendlier legislators. Voters in heavily Democratic districts like Sherman’s, however, will not buy it, the longtime member of Congress predicted.

“If you go to a Democratic district and say, ‘Brought to you by the makers of Trump coin’ — they’re not going to buy that product,” Sherman told The Intercept. “My opponents can’t beat me without $5 or $10 million from crypto. And if $5 or $10 million from crypto comes in, that becomes the issue.”

Crypto Chips In

After spending more than $130 million on last year’s elections, the crypto industry is laying plans for another influence campaign in 2026. One trio of affiliated crypto super PACs already has more than $140 million in the bank.

Though these super PACs — which last year pumped millions into television advertisements with no mention of crypto — have not yet started spending on next year’s contests, the four officials at crypto trade groups made their donations to Rakov within a couple months of his campaign launch. (Rakov’s campaign did not respond to a request for comment.)

Within one day of Rakov’s announcement, Blockchain Association CEO Kristin Smith donated $3,500, Solana Policy Institute CEO Miller Whitehouse-Levine gave $1,000, and Cedar Innovation Foundation engagement director Colin McLaren chipped in $3,500, according to a Federal Election Commission filing. (All three are now with the Solana Policy Institute.)

More crypto contributions followed in May and June, when Satoshi Action Fund CEO Dennis Porter gave $500, and Haseeb Qureshi, a managing partner at the crypto fund Dragonfly Digital Management, gave $999.

Those numbers pale in comparison to the $500,000 that Rakov has donated to his own campaign, and will not go far to counteract the $4.1 million in campaign cash that Sherman has on hand.

For observers of crypto’s role in elections, however, they are a signal that the industry is watching the race closely.

“Big-Time Drug Dealers”

The crypto industry has ample reason to dislike Sherman. While many other Democrats have sought to chart a middle path, Sherman has in the past called for an outright ban of cryptocurrencies. A member of the powerful House Financial Services Committee, Sherman said his concerns about crypto include lax “know your customer” policies that have allowed money laundering to flourish, and the environmental impact of energy-intensive bitcoin mining.

On the House floor last year, Sherman said the demand for crypto was driven by “big-time drug dealers and big-time tax evaders and big-time human traffickers.”

McLaren, the crypto industry lobbyist who donated to Rakov, said in an email that he was motivated by Sherman’s position on crypto, pointing to an industry report card.

“He’s an F for a reason, calling crypto a ‘garden full of snakes,’ voting against concrete opportunities to protect consumers and unlock innovation, and spreading falsehoods about a legitimate industry,” McLaren said.

He also signaled support for other younger Democrats taking on older, incumbent critics of crypto.

“As a proud Democrat who has worked on campaigns for President Biden, Michael Bloomberg, and Vice President Harris, I applaud challengers like Jake Rakov, Jake Levine, and Patrick Roath who have stood up against senior Members of Congress who are out of touch with 21st century technologies, and voters and am proud to support them, either financially or in other ways,” McLaren said.

Levine is another of Sherman’s early challengers, and Roath, 38, is taking on 70-year-old crypto skeptic Rep. Stephen Lynch, D-Mass., next year.

Sherman Marches On

Sherman, who is 70 years old, pointed out that he is 12 years younger than Biden, whose disastrous debate performance last year sparked a Democratic Party debate about its “gerontocracy” of aging elected officials.

He is generally dismissive of Rakov, who made a splash with his April announcement that he would run. With few major policy differences outlined so far, Rakov will have to rely on crypto contributions to prop up his campaign, Sherman said.

“To the tune of a few thousand bucks, he is the crypto candidate. He would like to be the $5 to $10 million candidate. His initial reaction is, he’s with me on all the issues, he’s just younger,” Sherman said.

“Do you want an incumbent, or do you want a fresh face who has relatively similar policy positions?”

Christian Grose, a political science professor at the University of Southern California, said that so far the race is shaping up as an intergenerational contest.

“It’s a tough race, potentially, for the challengers, but also not out of the question,” said Grose. “At the end of the day, it really is about: Do you want an incumbent, or do you want a fresh face who has relatively similar policy positions?”

Related

She’s Challenging an AIPAC Democrat. A National Progressive Group Wants In.

So far, neither Rakov nor Levine has distanced himself from Sherman on an issue that might be a weakness for him in other Democratic districts: his strong support for Israel. Sherman said that candidates who do so would risk alienating the large number of pro-Israel voters in the district.

Crypto Risks Backlash

If crypto does spend big on blue-district primaries, it could face backlash in a way it did not during last year’s election, when it successfully backed numerous Democratic candidates in their primaries.

Trump’s net worth has exploded over the past year thanks to his family’s various crypto ventures, meaning that the entire industry must increasingly contend with Democratic voters’ anger about Trump’s meme coin and other ventures.

Related

Who’s on the Guest List for Trump’s Meme Coin Dinner?

“Heretofore crypto skepticism has not been a voting issue, but as Trump becomes a billionaire — for real — many times over due to crypto shenanigans, I think it is possible that crypto will start getting tagged with the Trump stink,” said the Revolving Door Project’s Hauser.

Sherman said he was not worried about crypto in his race. Still, he said, the industry may see value in simply sending a message to other Democrats.

“Look, crypto is going to spend $100 million, 200 million trying to buy friends,” he said. “And remember, when they get involved in a race, it’s not just to influence that race: It’s to scare everybody else who might have a primary someday.”

The post Democrats Have a Gerontocracy Problem. The Crypto Industry Is Using That to Its Advantage. appeared first on The Intercept.

]]>
https://theintercept.com/2025/09/04/brad-sherman-primary-crypto-jake-rakov/feed/ 0 498397
<![CDATA[Pentagon Document: U.S. Wants to “Suppress Dissenting Arguments” Using AI Propaganda]]> https://theintercept.com/2025/08/25/pentagon-military-ai-propaganda-influence/ https://theintercept.com/2025/08/25/pentagon-military-ai-propaganda-influence/#respond Mon, 25 Aug 2025 16:08:28 +0000 The U.S. is interested in acquiring machine-learning technology to carry out AI-generated propaganda campaigns overseas.

The post Pentagon Document: U.S. Wants to “Suppress Dissenting Arguments” Using AI Propaganda appeared first on The Intercept.

]]>
The United States hopes to use machine learning to create and distribute propaganda overseas in a bid to “influence foreign target audiences” and “suppress dissenting arguments,” according to a U.S. Special Operations Command document reviewed by The Intercept.

The document, a sort of special operations wishlist of near-future military technology, reveals new details about a broad variety of capabilities that SOCOM hopes to purchase within the next five to seven years, including state-of-the-art cameras, sensors, directed energy weapons, and other gadgets to help operators find and kill their quarry. Among the tech it wants to procure is machine-learning software that can be used for information warfare.

To bolster its “Advanced Technology Augmentations to Military Information Support Operations” — also known as MISO — SOCOM is looking for a contractor that can “Provide a capability leveraging agentic AI or multi‐LLM agent systems with specialized roles to increase the scale of influence operations.”

So-called “agentic” systems use machine-learning models purported to operate with minimal human instruction or oversight. These systems can be used in conjunction with large language models, or LLMs, like ChatGPT, which generate text based on user prompts. While much of the marketing hype around agentic systems and LLMs centers on their potential to execute mundane tasks like online shopping and booking tickets, SOCOM believes the techniques could be well suited to running an autonomous propaganda outfit.

“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document notes. “Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

Laws and Pentagon policy generally prohibit military propaganda campaigns from targeting U.S. audiences, but the porous nature of the internet makes that difficult to ensure.

In a statement, SOCOM spokesperson Dan Lessard acknowledged that SOCOM is pursuing “cutting-edge, AI-enabled capabilities.”

“All AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making,” he told The Intercept. “USSOCOM’s internet-based MISO efforts are aligned with U.S. law and policy. These operations do not target the American public and are designed to support national security objectives in the face of increasingly complex global challenges.”

Tools like OpenAI’s ChatGPT or Google’s Gemini have surged in popularity despite their propensity for factual errors and other erratic outputs. But their ability to immediately churn out text on virtually any subject, written in virtually any tone — from casual trolling to pseudo-academic — could mark a major leap forward for internet propagandists. These tools give users the potential to fine-tune messaging for any number of audiences without the time or cost of human labor.

Whether AI-generated propaganda works remains an open question, but the practice has already been amply documented in the wild. In May 2024, OpenAI issued a report revealing efforts by Iranian, Chinese, and Russian actors to use the company’s tools to engage in covert influence campaigns, but found none had been particularly successful. In comments before the 2023 Senate AI Insight Forum, Jessica Brandt of the Brookings Institution warned “LLMs could increase the personalization, and therefore the persuasiveness, of information campaigns.” In an online ecosystem filled with AI information warfare campaigns, “skepticism about the existence of objective truth is likely to increase,” she cautioned. A 2024 study published in the academic journal PNAS Nexus found that “language models can generate text that is nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.”

Related

OpenAI’s Pitch to Trump: Rank the World on U.S. Tech Interests

Unsurprisingly, the national security establishment is now insisting that the threat posed by this technology in the hands of foreign powers, namely Russia and China, is most dire.

“The Era of A.I. Propaganda Has Arrived, and America Must Act,” warned a recent New York Times opinion essay on GoLaxy, software created by the Chinese firm Beijing Thinker originally used to play the board game Go. Co-authors Brett Benson, a political science professor at Vanderbilt University, and Brett Goldstein, a former Department of Defense official, paint a grim picture showing GoLaxy as an emerging leader in state-aligned influence campaigns.

GoLaxy, they caution, is able to scan public social media content and produce bespoke propaganda campaigns. “The company privately claims that it can use a new technology to reshape and influence public opinion on behalf of the Chinese government,” according to a companion piece by Times national security reporter Julian Barnes headlined “China Turns to A.I. in Information Warfare.” The news item strikes a similarly stark tone: “GoLaxy can quickly craft responses that reinforce the Chinese government’s views and counter opposing arguments. Once put into use, such posts could drown out organic debate with propaganda.” According to these materials, the Times says, GoLaxy has “undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.”

To respond to this foreign threat, Benson and Goldstein argue a “coordinated response” across government, academia, and the private sector is necessary. They describe this response as defensive in nature: mapping and countering foreign AI propaganda.

That’s not what the document from the Special Operations Forces Acquisition, Technology, and Logistics Center suggests the Pentagon is seeking.

The material shows SOCOM believes it needs technology that closely matches the reported Chinese capabilities, with bots scouring and ingesting large volumes of internet chatter to better persuade a targeted population, or an individual, on any given subject.

SOCOM says it specifically wants “automated systems to scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives. This technology should be able to respond to post(s), suppress dissenting arguments, and produce source material that can be referenced to support friendly arguments and messages.”

The Pentagon is paying especially close attention to those who might call out its propaganda efforts.

“This program should also be able to access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages,” the document notes. “The capability should utilize information gained to create a more targeted message to influence that specific individual or group.”

“This program should also be able to access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages.”

SOCOM anticipates using generative systems to both craft propaganda messaging and simulate how this propaganda will be received once sent into the wild, the document notes. SOCOM hopes it will use “agentic systems that replicate specific knowledge, skills, abilities, personality traits, and sociocultural attributes required for different roles of individuals comprising a team,” before moving on to “brainstorm and test operational campaigns against agent‐based replicas of individuals and groups.” These simulations are more elaborate than focus groups, calling instead for “comprehensive models of entire societies to enable MISO planners to use these models to experiment or test various multiple scenarios.”

The SOCOM wishlist continues to include a need for offensive deepfake capabilities, first reported by The Intercept in 2023.

The prospect of LLMs creating an infinite firehose of expertly crafted propaganda has been met with alarm — but generally in the context of the United States as target, not perpetrator.

A 2023 publication by the State Department-funded nonprofit Freedom House warned of “The Repressive Power of Artificial Intelligence,” predicting “AI-assisted disinformation campaigns will skyrocket as malicious actors develop additional ways to bypass safeguards and exploit open-source models.” Warning that “Generative AI draws authoritarian attention,” the Freedom House report cites potential use by China and Russia, but only mentions domestic use of the technology in a brief section about the presidential campaigns of Ron DeSantis and Donald Trump, as well as a deepfake video manipulated to depict Joe Biden making transphobic comments.

The extent to which an automated propaganda machine capable of global reach warrants public concern depends on the scope of its application, according to Andrew Lohn, former director for emerging technology on the National Security Council.

“I would not be so concerned if some foreign soldiers are wrongly convinced that our special operation is going to happen Wednesday morning by helicopter from the east rather than Tuesday night by boat from the west,” said Lohn, now a senior fellow at Georgetown’s Center for Security and Emerging Technology.

The military has a history of manipulating civilian populations for political or ideological purposes. A troubling example was uncovered in 2024, when Reuters reported the Defense Department had operated a clandestine anti-vax social media campaign to undercut public confidence in the Chinese Covid vaccine, fearing its efficacy might draw Asian countries closer to a major geopolitical rival. Pentagon-created tweets described the Chinese Sinovac-CoronaVac shot — described by the World Health Organization as “safe and effective” — as “fake” and untrustworthy. According to the Reuters report, then-Special Operations Command Pacific General Jonathan Braga “pressed his bosses in Washington to fight back in the so-called information space” by backing the clandestine propaganda campaign.

William Marcellino, a behavioral scientist at the RAND Corporation focusing on the geopolitics of machine-learning systems and Pentagon procurement, told The Intercept such systems are being built out of necessity. “Regimes like those from China and Russia are engaged in AI-enabled, at-scale malign influence efforts,” he said. State-affiliated groups in China, he warned, “have explicitly designed AI at-scale systems for public opinion warfare.”

“Countering those campaigns likely requires AI at-scale responses,” he said.

SOCOM has in recent years been public about its desire for AI-created propaganda systems. These statements suggest a broader interest that includes influence operations against entire populations, rather than efforts narrowly tailored to military personnel.

In 2019, a senior Pentagon special operations official spoke at a defense symposium of the country’s “need to move beyond our 20th century approach to messaging and start looking at influence as an integral aspect of modern irregular warfare.” The official noted that this “will also require new partnerships beyond traditional actors, throughout the world, through efforts to amplify voices of [non-governmental organizations] and individual citizens who bring transparency to malign activities of our competitors.” The following year, then-SOCOM commander Gen. Richard Clarke described his interest in using AI to achieve these ends.

“As we look at the ability to influence and shape in this [information] environment, we’re going to have to have artificial intelligence and machine learning tools,” Clarke said in 2020 remarks first reported by National Defense Magazine, “specifically for information ops that hit a very broad portfolio, because we’re going to have to understand how the adversary is thinking, how the population is thinking, and work in these spaces.”

Heidy Khlaaf, chief scientist at the AI Now Institute and former safety engineer at OpenAI, warned against a fighting-fire-with-fire approach: “Framing the use of generative and agentic AI as merely a mitigation to adversaries’ use is a misrepresentation of this technology, as offensive and defensive uses are really two sides of the same coin and would allow them to use it precisely in the same way that adversaries do.”

Automated online influence campaigns might wind up having lackluster results, according to Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “Russia has been using AI programs to automate its influence operations. The program is not very good,” he said.

The tendency of LLMs to fabricate falsehoods and perpetuate preconceptions when prompted by users could also prove a major liability, Brooking warned. “Tasked with figuring out the ‘hearts and minds’ of a complex and understudied country, they may lean heavily on an AI to help them, which will be likely to tell them what they already want to hear,” he said.

Khlaaf added that “agentic” systems, heavily marketed by tech firms as independent digital brains, are still error-prone and unpredictable. “The introduction of agentic AI in these disinformation campaigns adds a layer of both safety and security concerns, as several research results have demonstrated how easily we can compromise and divert the behavior of agentic AI,” she told The Intercept. “With these security issues unresolved, [SOCOM] risks that their campaigns are not only compromised, but that they produce material that was not intended.”

“AI tends to make these campaigns stupider, not more effective.”

Brooking, who previously worked as an adviser to the Office of the Under Secretary of Defense for Policy on cybersecurity matters, also pointed to the mixed track record of prior U.S. online propaganda efforts. In 2022, researchers revealed a network of Twitter and Facebook accounts secretly operated by U.S. Central Command that had been pushing bogus news articles containing anti-Russian and Iranian talking points. The network, which failed to gain traction on either social network, quickly became an embarrassment for the Pentagon.

“We know from other public reporting that the U.S. has long sought to ‘suppress dissenting arguments’ and generate positive press in certain areas of operation,” he said. “We also know that these efforts have not worked very well and can be deeply embarrassing or counterproductive when revealed to the American public. AI tends to make these campaigns stupider, not more effective.”

The post Pentagon Document: U.S. Wants to “Suppress Dissenting Arguments” Using AI Propaganda appeared first on The Intercept.

]]>
https://theintercept.com/2025/08/25/pentagon-military-ai-propaganda-influence/feed/ 0 497934
<![CDATA[Border Patrol Wants Advanced AI to Spy on American Cities]]> https://theintercept.com/2025/07/23/cbp-border-patrol-ai-surveillance/ https://theintercept.com/2025/07/23/cbp-border-patrol-ai-surveillance/#respond Wed, 23 Jul 2025 17:55:57 +0000 A U.S. Border Patrol “Industry Day” deck also asks for drones, seismic sensors, and tech that can see through walls.

The post Border Patrol Wants Advanced AI to Spy on American Cities appeared first on The Intercept.

]]>
U.S. Customs and Border Protection, flush with billions in new funding, is seeking “advanced AI” technologies to surveil urban residential areas, increasingly sophisticated autonomous systems, and even the ability to see through walls.

A CBP presentation for an “Industry Day” summit with private sector vendors, obtained by The Intercept, lays out a detailed wish list of tech CBP hopes to purchase, like satellite connectivity for surveillance towers along the border and improved radio communications. But it also shows that state-of-the-art, AI-augmented surveillance technologies will be central to the Trump administration’s anti-immigrant campaign, which will extend deep into the interior of the North American continent, hundreds of miles from international borders as commonly understood.

Related

Google Is Helping the Trump Administration Deploy AI Along the Mexican Border

The recent passage of Trump’s sprawling flagship legislation funnels tens of billions of dollars to the Department of Homeland Security. While much of that funding will go to Immigration and Customs Enforcement to bolster the administration’s arrest and deportation operations, a great deal is earmarked to purchase new technology and equipment for federal offices tasked with preventing immigrants from arriving in the first place: Customs and Border Protection, which administers the country’s border surveillance apparatus, and its subsidiary, the U.S. Border Patrol.

One page of the presentation, describing the wishlist of Border Patrol’s Law Enforcement Operations Division, says the agency needs “Advanced AI to identify and track suspicious activity in urban environment [sic],” citing the “challenges” posed by “Dense residential areas.” What’s considered “suspicious activity” is left unmentioned.

Customs and Border Protection did not respond to questions posed about the slides by The Intercept.

A slide from the CBP presentation showing the wishlist for the Coastal Area of Responsibility. Screenshot from CBP Presentation

The reference to AI-aided urban surveillance appears on a page dedicated to the operational needs of Border Patrol’s “Coastal AOR,” or area of responsibility, encompassing the entire southeast of the United States, from Kentucky to Florida. A page describing the “Southern AOR,” which includes all of inland Nevada and Oklahoma, similarly states the need for “Advanced intelligence to identify suspicious patterns” and “Long-range surveillance” because “city environments make it difficult to separate normal activity from suspicious activity.”

Related

Crossing the U.S. Border? Here’s How to Protect Yourself

Although the Fourth Amendment provides protection against arbitrary police searches, federal law grants immigration agencies the power to conduct warrantless detentions and searches within 100 miles of the land borders with Canada and Mexico or the coastline of the United States. This zone includes most of the largest cities in the country, including Los Angeles and New York, as well as the entirety of Florida.

The document mentions no specific surveillance methods or “advanced AI” tools that might be used in urban environments. Across the Southwest, residents of towns like Nogales and Calexico are already subjected to monitoring from surveillance towers placed in their neighborhoods. A 2014 DHS border surveillance privacy impact assessment warned these towers “may capture information about individuals or activities that are beyond the scope of CBP’s authorities. Video cameras can capture individuals entering places or engaging in activities as they relate to their daily lives because the border includes populated areas,” for example, “video of an individual entering a doctor’s office, attending public rallies, social events or meetings, or associating with other individuals.”

Last year, the Government Accountability Office found the DHS tower surveillance program failed six out of six privacy policies designed to prevent such overreach. CBP is also already known to use “artificial intelligence” tools to ferret out “suspicious activity,” according to agency documents. A 2024 inventory of DHS AI applications includes the Rapid Tactical Operations Reconnaissance program, or RAPTOR, which “leverages Artificial Intelligence (AI) to enhance border security through real-time surveillance and reconnaissance. The AI system processes data from radar, infrared sensors, and video surveillance to detect and track suspicious activities along U.S. borders.”

The document’s call for urban surveillance reflects the reality of Border Patrol, an agency empowered, despite its name, with broad legal authority to operate throughout the United States.

“Border Patrol’s escalating immigration raids and protest crackdowns show us the agency operates heavily in cities, not just remote deserts,” said Spencer Reynolds, a former attorney with the Department of Homeland Security who focused on intelligence matters. “Day by day, its activities appear less based on suspicion and more reliant on racial and ethnic profiling. References to operations in ‘dense residential areas’ are alarming in that they potentially signal planning for expanded operations or tracking in American neighborhoods.”

Automating immigration enforcement has been a Homeland Security priority for years, as exemplified by the bipartisan push to expand the use of machine learning-based surveillance towers like those sold by arms-maker Anduril Industries across the southern border. “Autonomous technologies will improve the USBP’s ability to detect, identify, and classify potential threats in the operating environment,” according to the agency’s 2024–2028 strategy document. “After a threat has been identified and classified, autonomous technology will enable the USBP to track threats in near real-time through an integrated network.”

The automation desired by Border Patrol seems to lean heavily on computer vision, a form of machine learning that excels at pattern matching to find objects in the desert that resemble people, cars, or other “items of interest,” rather than requiring crews of human agents to monitor camera feeds and other sensors around the clock. The Border Patrol presentation includes multiple requests for small drones that incorporate artificial intelligence technologies to aid in the “detection, tracking, and classification” of targets.

A computer system that has analyzed a large number of photographs of trucks driving through the desert can become effective at identifying similar vehicles in the future. But efforts to algorithmically label human behavior as “suspicious” — an abstract concept compared to “truck” — based only on its appearance have been criticized by some artificial intelligence scholars and civil libertarians as error-prone, overly subjective if not outright pseudoscientific, and often reliant on ethnic and religious stereotypes. Any effort to apply predictive techniques based on surveillance data from entire urban areas or residential communities would exacerbate these risks of bias and inaccuracy.

“In the best of times, oversight of technology and data at DHS is weak and has allowed profiling, but in recent months the administration has intentionally further undermined DHS accountability,” explained Reynolds, now senior counsel at the Brennan Center’s liberty and national security program. “Artificial intelligence development is opaque, even more so when it relies on private contractors that are unaccountable to the public — like those Border Patrol wants to hire. Injecting AI into an environment full of biased data and black-box intelligence systems will likely only increase risk and further embolden the agency’s increasingly aggressive behavior.”

“They’re addicted to suspicious activity reporting because they fundamentally believe that their targets do suspicious things.”

The desire to hunt “suspicious” people with “advanced AI” reflects a longtime ambition at the Department of Homeland Security, Mohammad Tajsar, an attorney at the ACLU of Southern California, told The Intercept. Military and intelligence agencies across the world are increasingly working to use forms of machine learning, often large language models like OpenAI’s GPT, to rapidly ingest and analyze varied data sources to find buried trends, threats, and targets — though systemic issues with accuracy remain unsolved.

This proposed use case dovetails perfectly with the Homeland Security ethos, Tajsar said. “They’re addicted to suspicious activity reporting because they fundamentally believe that their targets do suspicious things, and that suspicious things can predict criminal behavior,” a notion Tajsar described as a “fantasy” that “remains unchallenged despite the complete lack of empiricism to support it.” With the rapid proliferation of technologies billed as artificially intelligent, “they think that they can bring to bear all of their disparate sources of data using computers, and they see that as a breakthrough in what they’ve been trying to do for a long, long time.”

While much of the presentation addresses Border Patrol’s wide-ranging surveillance agenda, it also includes information about other departmental tech needs.

A slide from the CBP presentation. Screenshot from CBP presentation

The Border Patrol Tactical Unit, or BORTAC, exists on paper to execute domestic missions involving terrorism, hostage situations, or other high-risk scenarios. But the unit has become increasingly associated with suppressing dissent and routine deportation raids: In 2020, the Trump administration ordered BORTAC into the streets of Portland to tamp down protests, and the special operations unit has been similarly deployed in Los Angeles this year.

According to the presentation, CBP hopes to arm the already heavily militarized BORTAC with the ability to see through walls in order to “detect people within a structure or rubble.”

Another page of the document, listing the agency’s “Subterranean Portfolio,” claims CBP is preparing to lay an additional 2,100 miles of fiber optic cable along the northern and southern border in order to detect passing migrants, as part of a sensor network that also includes seismic, laser, visual, and cellular tracking.

The post Border Patrol Wants Advanced AI to Spy on American Cities appeared first on The Intercept.

]]>
https://theintercept.com/2025/07/23/cbp-border-patrol-ai-surveillance/feed/ 0 496233
<![CDATA[Grok Is the Latest in a Long Line of Chatbots to Go Full Nazi]]> https://theintercept.com/2025/07/11/grok-antisemitic-ai-chatbot/ https://theintercept.com/2025/07/11/grok-antisemitic-ai-chatbot/#respond Fri, 11 Jul 2025 21:37:23 +0000 Grok’s recent antisemitic turn is not an aberration, but part of a pattern of AI chatbots churning out hateful drivel.

The post Grok Is the Latest in a Long Line of Chatbots to Go Full Nazi appeared first on The Intercept.

]]>
Grok, the artificial intelligence chatbot from Elon Musk’s xAI, recently gave itself a new name: MechaHitler. This came amid a spree of antisemitic comments by the chatbot on Musk’s X platform, including claiming that Hitler was the best person to deal with “anti-white hate” and repeatedly suggesting that the political left is disproportionately populated by people whose names Grok perceives to be Jewish. In the days since, Grok has begun gaslighting users and denying that the incident ever happened.

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” a statement posted on Grok’s official X account reads. It noted that “xAI is training only truth-seeking.”

This isn’t, however, the first time AI chatbots have made antisemitic or racist remarks; it’s just the latest example of a continuing pattern of AI-powered hateful output, rooted in training data consisting of social media slop. This specific incident isn’t even Grok’s first rodeo.

“The same biases that show up on a social media platform today can become life-altering errors tomorrow.”

About two months prior to this week’s antisemitic tirades, Grok dabbled in Holocaust denial, stating that it was skeptical that six million Jewish people were killed by the Nazis, “as numbers can be manipulated for political narratives.” The chatbot also ranted about a “white genocide” in South Africa, stating it had been instructed by its creators that the genocide was “real and racially motivated.” xAI subsequently claimed that this incident was owing to an “unauthorized modification” made to Grok. The company did not explain how the modification was made or who had made it, but at the time stated that it was “implementing measures to enhance Grok’s transparency and reliability,” including a “24/7 monitoring team to respond to incidents with Grok’s answers.”

But Grok is by no means the only chatbot to engage in these kinds of rants. Back in 2016, Microsoft released its own AI chatbot, Tay, on Twitter, the platform that is now X. Within hours, Tay began saying that “Hitler was right I hate the jews” and that the Holocaust was “made up.” Microsoft claimed that Tay’s responses were owing to a “co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”

The next year, in response to the question of “What do you think about healthcare?” Microsoft’s subsequent chatbot, Zo, responded with “The far majority practise it peacefully but the quaran is very violent [sic].” Microsoft stated that such responses were “rare.”

In 2022, Meta’s BlenderBot chatbot responded that it was “not implausible” when asked whether Jewish people control the economy. Upon launching the new version of the chatbot, Meta made a preemptive disclaimer that the bot can make “rude or offensive comments.”

Studies have also shown that AI chatbots exhibit more systematic hateful patterns. For instance, one study found that chatbots such as Google’s Bard and OpenAI’s ChatGPT perpetuated “debunked, racist ideas” about Black patients. Responding to the study, Google said it was working to reduce bias.

Related

Meta-Powered Military Chatbot Advertised Giving “Worthless” Advice on Airstrikes

J.B. Branch, who leads Public Citizen’s advocacy efforts on AI accountability as its Big Tech accountability advocate, said these incidents “aren’t just tech glitches — they’re warning sirens.”

“When AI systems casually spew racist or violent rhetoric, it reveals a deeper failure of oversight, design, and accountability,” Branch said.

He pointed out that this bodes poorly for a future where leaders of industry hope that AI will proliferate. “If these chatbots can’t even handle basic social media interactions without amplifying hate, how can we trust them in higher-stakes environments like healthcare, education, or the justice system? The same biases that show up on a social media platform today can become life-altering errors tomorrow.”

That doesn’t seem to be deterring the people who stand to profit from wider usage of AI.

The day after the MechaHitler outburst, xAI unveiled the latest iteration of Grok, Grok 4.

“Grok 4 is the first time, in my experience, that an AI has been able to solve difficult, real-world engineering questions where the answers cannot be found anywhere on the Internet or in books. And it will get much better,” Musk wrote on X.

That same day, asked for a one-word response to the question of “what group is primarily responsible for the rapid rise in mass migration to the west,” Grok 4 answered: “Jews.”

The post Grok Is the Latest in a Long Line of Chatbots to Go Full Nazi appeared first on The Intercept.

]]>
https://theintercept.com/2025/07/11/grok-antisemitic-ai-chatbot/feed/ 0 495686