- One Month Away: Is Kamala Blowing It? by Keith Preston on October 5, 2024 at 5:17 pm
Episode 194 with the Vanguard Krystal Kyle & Friends Oct 04, 2024 We’re reuniting with the Vanguard to take on the fascinating character of J.D. Vance, plus discourse on Melania Trump’s new book and its shocking strong stance in favor of abortion rights. Tuesday’s Vance-Walz faceoff gave us
- Harvard economist Raj Chetty vindicates Donald Trump’s sense of the white working class getting cheated out of their patrimony. by Keith Preston on October 5, 2024 at 5:14 pm
Steve Sailer Oct 05, 2024 Harvard economist Raj Chetty has released a new paper based on a colossal 57 million people born between 1978 and 1992. Chetty has managed to get his claws upon their parents’ 1040 tax forms from 1978-2019 and their own 1040s up through 2019.
- Escaping the Clutches of the System by Keith Preston on October 5, 2024 at 4:12 am
Troy Southgate Oct 04, 2024 TO emphasise why we should never take sides in disputes involving two or more parties that continue to support usury, taxation, centralised government and the ballot box, particularly if – as in the case of Putin, Assad, Jong-un and Khamenei – one side
- A Response to “Ya Ghazze Habibti—Gaza, My Love” by Keith Preston on October 5, 2024 at 4:10 am
By thecollective A recent piece published by CrimethInc. is described as an “in-depth account” in which “an anarchist from occupied Palestine reviews the history of Zionist colonialism and Palestinian resistance, makes the case for an anti-colonial understanding of the situation, and explores what it means to act in solidarity with Palestinians.”
- Ya Ghazze Habibti—Gaza, My Love by Keith Preston on October 5, 2024 at 4:10 am
By thecollective From CrimethInc. Understanding the Genocide in Palestine After slaughtering more than 42,000 Palestinians, including 16,500 children, the Israeli military is now invading Lebanon and threatening to go to war with Iran. In the following in-depth account, an anarchist from occupied Palestine reviews the history of Zionist
- Disaster Compassion is Real in North Carolina by Keith Preston on October 5, 2024 at 4:09 am
By thecollective From Birds Before the Storm by Margaret Killjoy Late last week, a massive rainstorm hit Asheville, North Carolina and the surrounding areas, dumping eight inches of rain in a single day. Then, the next day, Hurricane Helene hit. This was the worst hurricane to hit the
- The SAGE Encyclopedia of War: Social Science Perspectives-Anarchism by Keith Preston on October 5, 2024 at 4:07 am
Dr. Michael Loadenthal serves as an Assistant Professor of Research in the School of Public and International Affairs at the University of Cincinnati. He also serves as the Executive Director of the Peace and Justice Studies Association (Georgetown University), the Executive Director of the Prosecution Project, and
- A Deep Dive on Sports, Stadiums, and Fan Fundraisers by Keith Preston on October 5, 2024 at 4:03 am
October 3, 2024 Welcome back to The Lighthouse, the weekly email newsletter of the Independent Institute covering politics, economics, current events, and everything in between. Dear Readers, After nearly two decades of considering a relocation, the Athletics have played their last game at the Oakland Coliseum. It was
- Fact Check: Is Climate Change Making Hurricanes Worse? by Keith Preston on October 5, 2024 at 4:02 am
October 4 2024 Fact Check: Is Climate Change Making Hurricanes Worse? ANALYSIS By Virginia Allen “Scientists say climate change makes these hurricanes larger, stronger, and more deadly,” claims CBS News’ Norah O’Donnell. More Nevada Election Chief Blocks Inspection of Suspect Voter Names in Swing State NEWS By
- Ernst Jünger: Life in the Forester’s House by Keith Preston on October 5, 2024 at 4:01 am
by Daniil Zhitenev Arktos Journal Oct 04, 2024 Daniil Zhitenev explores the life of Ernst Jünger in his secluded residence at the forester’s house in Wilflingen, highlighting the writer’s disciplined routines, vast collections, and the profound influence this setting had on his later works. In Upper Swabia, in
- Deconstructing Israel’s Lexicon of Crime by Ilana Mercer on October 5, 2024 at 4:01 am
IF it is portrayed as a war crime, genocide—the methodical, malicious murder of the many—can be dismissed as incidental to battle, a mere case of, “Oops, bad things happen in war.” You hear the last phrase all the time from Israel’s supporters, as they gush their enthusiasm for the Israeli State’s crimes. The genocide-as-a-war-crime conceptualization provides cover and lends imprimatur to criminals and criminality. You mitigate and minimize genocide when you call it a war crime. This is precisely the point of Israel and its co-belligerents: The purpose of framing Israel’s ongoing extermination of Palestinian society in Gaza as a … Continue reading → The post Deconstructing Israel’s Lexicon of Crime appeared first on LewRockwell.
- Biden Is Pushing Israel Towards Larger War by No Author on October 5, 2024 at 4:01 am
After being hit by some 200 Iranian missiles, Israel has not yet dared to respond to the strike. It instead has launched new air attacks into the center of Beirut and its southern area known as Dahiyeh (which simply means suburb) with its predominantly Shia population. Israel seems to have forgotten what attacks on Dahiyeh mean: Hizbullah asserts that it has established a new deterrence equation: an Israeli attack on the al-Dahieh neighborhood in Beirut will be met with a retaliatory strike on Tel Aviv. … According to Hizbullah, the new equation established by Hassan Nasrallah is that any attack on Tel … Continue reading → The post Biden Is Pushing Israel Towards Larger War appeared first on LewRockwell.
- The Roots of the UK Implosion and Why War Is Inevitable by Tom Luongo on October 5, 2024 at 4:01 am
In a lot of my commentary I give the UK a lot of grief. I give many people a lot of grief. It’s kinda my thing. But to remind everyone, I was one of the chief champions of Brexit, cutting my teeth hard during the endless Brexit negotiations of 2017-19, trying to explain why things were happening the way they were. I always knew that Brexit was a fight between UK elites beholden to Davos, the same folks that overthrew Margaret Thatcher in the 1990s, and the people themselves, backed by what I’m now calling The British Remnant. This group is easy … Continue reading → The post The Roots of the UK Implosion and Why War Is Inevitable appeared first on LewRockwell.
- The Western Media Helped Create These Horrors in the Middle East by No Author on October 5, 2024 at 4:01 am
The US and Iran are on the brink of war. Israel and the United States are planning a major attack on Iran, which according to Biden himself could entail strikes on Iranian oil sites. Iran is now saying that its days of “individual self-restraint” are over, and it is prepared to go all-in if the US and Israel keep ramping up escalations. The IDF continues to slaughter civilians in Lebanon with US-backed airstrikes as news surfaces that Hezbollah leader Hassan Nasrallah had agreed to a 21-day ceasefire with Israel shortly before Israel assassinated him. The US reportedly knew about the deal. And of course Israel is still killing dozens of civilians … Continue reading → The post The Western Media Helped Create These Horrors in the Middle East appeared first on LewRockwell.
- They Lie. They Cheat. They Steal. They Bomb. And They Spin by No Author on October 5, 2024 at 4:01 am
The Talmudic psychos not only obsessed on breathing fire against the Axis of Resistance but now also going after Russian national interests. A case could be made that Iran’s Ballistic Retaliation Night, a measured response to Israel’s serial provocations, is less consequential when it comes to the efficacy of the Axis of Resistance than the decapitation of Hezbollah’s leadership. Still, the message was enough to send the Talmudic psychopathologicals into a frenzy; for all their hysterical denials and massive spin, Iron Toilet Paper and the Arrow system were de facto rendered useless. The IRGC made it known that the volley … Continue reading → The post They Lie. They Cheat. They Steal. They Bomb. And They Spin appeared first on LewRockwell.
- October Surprise by James Howard Kunstler on October 5, 2024 at 4:01 am
“Normally, Western politics gives us actors who are trying to play the role of politicians. Walz is like an actor who is trying to play the role of an actor trying to play the role of a politician. Almost everything about him is just a few degrees off-centre. He’s like what would happen if you endowed Chat GPT with a human body and sent it off to campaign for political office.” —Eugyppius on Substack Tuesday night’s veep palaver could be the last time you see the frightened animal known as Tim Walz for the duration of the campaign. He’s famous … Continue reading → The post October Surprise appeared first on LewRockwell.
- Things I Feared Most To Write, Part Three by No Author on October 5, 2024 at 4:01 am
I wrote a piece here called “Energies”, in which I began to address the meta-material “energies” that I believe surround us and flow through us. The bias of our post-Enlightenment Western hermeneutic, of course, is that these “energies,” whether beneficent or malevolent, do not in fact exist, and that we are crazy or fanatically deluded to speak of them, let alone to try to invoke the protection of the good ones and fear the influence of the negative ones. That materialist bias can be good because it keeps us from leaping off into an ether of ungrounded imagination, or from … Continue reading → The post Things I Feared Most To Write, Part Three appeared first on LewRockwell.
- Why the Fed’s Two-Percent Inflation Target Is Meaningless by Ryan McMaken on October 5, 2024 at 4:01 am
Federal Reserve technocrats like to use a variety of slogans and buzzwords designed to make the Federal Reserve look like it’s an apolitical, “scientific” institution guided only by a quest for sound management of the economy. Specifically, nearly every time Fed chairman Jerome Powell makes an appearance—whether it’s the usual FOMC press conference, or when testifying before Congress—Powell is careful to make mention of how the Fed is “data driven,” and how the Fed “seeks to achieve maximum employment and inflation at the rate of 2 percent over the longer run.” This sounds nice, of course, and gives the impression … Continue reading → The post Why the Fed’s Two-Percent Inflation Target Is Meaningless appeared first on LewRockwell.
- Handy Products for Every Home! by No Author on October 5, 2024 at 4:01 am
LewRockwell.com readers are supporting LRC and shopping at the same time. It’s easy and does not cost you a penny more than it would if you didn’t go through the LRC link. Just click on the Amazon link on LewRockwell.com’s homepage and add your items to your cart. It’s that easy! If you can’t live without your daily dose of LewRockwell.com in 2024, please remember to DONATE TODAY! Bloss Anti-skid Jar Opener Jar Lid Remover Rubber Can Opener Kitchen Grippers DEWALT 20V Max Cordless Drill/Driver Kit, Includes 2 Batteries and Charger (DCD771C2) Briignite Night Lights Plug into Wall, [4Pack], Nightlight with Light Sensors Greentech Environmental pureAir 50 – Air … Continue reading → The post Handy Products for Every Home! appeared first on LewRockwell.
- Vance’s Big Win Which Wasn’t All That by David Stockman on October 5, 2024 at 4:01 am
JD Vance is no slouch as a debater. We’d think even Tim Walz would grant that much after getting shellacked last night by the Donald’s running mate. We will also grant that both men were civil, even midwestern “nice”. But that just goes to prove that most of the blame for the nastiness that pervades the current political process in America, and which has thereby buried the real issues, lies with the Donald. He’s just an incorrigibly bombastic, ill-mannered lout who substitutes name-calling, slogan-checking and bragging for anything that even remotely sounds like a policy discussion. Of course, when it … Continue reading → The post Vance’s Big Win Which Wasn’t All That appeared first on LewRockwell.
- Abp. Viganò: Bergoglio’s Politically Correct ‘Sins’ Are Another Step Toward a Globalist Religion by No Author on October 5, 2024 at 4:01 am
Archbishop Carlo Maria Viganò has strongly criticized Francis for the penitential service he held on the eve of this year’s Synod on Synodality, saying the bizarre ceremony that included the public confession of a number of “politically correct” and loosely defined sins was another step toward a “globalist religion.” “Unwilling to ask forgiveness for the real sins against God and neighbor – which the followers of the Bergoglian sect casually commit – the Synod on Synodality invents new ones against the Earth, immigrants, the poor, women, the marginalized,” Viganò wrote Wednesday on X. “A new pauperist and politically correct decalogue.” … Continue reading → The post Abp. Viganò: Bergoglio’s Politically Correct ‘Sins’ Are Another Step Toward a Globalist Religion appeared first on LewRockwell.
- At This Time in the World There Is Only One Important Decision Waiting To Be Made by Paul Craig Roberts on October 5, 2024 at 4:01 am
Except for the neoconservatives whose agenda it is, I sometimes wonder if I am the only other person who understands what the Ukraine conflict is about. While we await Washington’s decision about firing missiles into Russia, I will explain how we reached the current crisis. In 2007 Washington declared war on Russia without announcing it. Putin provoked Washington’s secret declaration of war when he rejected Washington’s uni-polar hegemony at the Munich Security Conference. Washington’s first attack was a year later when, while Putin was distracted at the Beijing Olympics, Washington sent a US trained and equipped Georgian army into South … Continue reading → The post At This Time in the World There Is Only One Important Decision Waiting To Be Made appeared first on LewRockwell.
- Jacob Hornberger Gives the Case for Open Borders by Keith Preston on October 5, 2024 at 4:00 am
Watch Video Above Join us this Monday as FFF president Jacob G. Hornberger continues our newest online conference entitled, “The Case for Open Borders.” As with our previous online conferences, it will consist of a series of weekly presentations via Zoom. The presentations will start at 7 p.m.
- Gen Zer wants to know how Gen X got around without GPS by Keith Preston on October 5, 2024 at 3:58 am
October 03, 2024 | Read Online Gen Zer asks Gen X how they got around without GPS and the answers are perfectly accurate “We’re old. Older than Google, too.” It’s easy to forget what life was like before cell phones fit in your pocket and
- The Western Media Helped Create These Horrors In The Middle East by Keith Preston on October 5, 2024 at 3:57 am
Caitlin Johnstone Oct 03, 2024 Listen to a reading of this article (reading by Tim Foley): The US and Iran are on the brink of war. Israel and the United States are planning a major attack on Iran, which according to Biden himself could entail strikes on Iranian
- A New Challenge for Teflon Mark by Keith Preston on October 5, 2024 at 3:56 am
by Hans Vogel Oct 03, 2024 Hans Vogel highlights the rise of Mark Rutte, former Dutch Prime Minister, as NATO’s new General Secretary, his dismissal of public opinion, and his commitment to NATO’s lethal agenda, including support for Ukrainian attacks on Russia, while questioning the true
- Is the US Convinced That “We Can Win a ‘Simultaneous, First Strike’ Nuclear War”? “The Need to Deter Russia, China and North Korea” by Keith Preston on October 5, 2024 at 3:54 am
Biden’s “Nuclear Employment Guide”. By Germán Gorraiz López Global Research, September 26, 2024 The globalist establishment would outline for the next five years a plan that would involve the recovery of the US role as a global gendarme. This will be done through an extraordinary increase in US military
- The Educational Policies of Donald Trump and Kamala Harris by Keith Preston on October 5, 2024 at 3:53 am
THURSDAY, OCTOBER 3, 2024 It is only from a special point of view that “education” is a failure. As to its own purposes, it is an unqualified success. One of its purposes is to serve as a massive tax-supported jobs program for legions of not especially able or
- The Law School Dean Who Quietly Worked to Overturn the Election by Keith Preston on October 5, 2024 at 3:52 am
The Law School Dean Who Quietly Worked to Overturn the Election, by Shawn Musgrave: Lawyers who worked to keep Trump in power in 2020 have risked being disbarred. But not Mark Martin.
- Never-Ending Middle East Escalator by Keith Preston on October 5, 2024 at 3:35 am
Watch now: Never-Ending Middle East Escalator – New World Next Week. The Corbett Report, Oct 3. Show notes and comments: https://corbettreport.com/nwnw566/ This week on the New World Next Week: Iran strikes Israel as the never-ending Middle East escalation
- Leave It to the Authorities by Keith Preston on October 5, 2024 at 3:34 am
Recently at The Signal: Isaac B. Kardon on what’s causing a surge of military confrontations in the South China Sea. … Today: Why have so many Americans turned against the idea of expertise in public affairs? David A. Hopkins on how the government has become increasingly reliant on experts,
- The Flying Car Is Here. It’s Slightly Illegal by Keith Preston on October 5, 2024 at 3:33 am
The flying car has been a running joke about the future since at least The Jetsons, but for $249 you can now rent the LIFT Hexa for a joyride. It’s one of a new generation of electric aircraft that, like a supersize drone, can take off and land
- Music After Auschwitz by Keith Preston on October 5, 2024 at 3:32 am
Sponsored by Library of America Peter E. Gordon Music and Memory After the Holocaust, classical composers explored music’s capacity to commemorate historical trauma without permitting horrific events to take on the allure of facile beauty. David S. Reynolds Grant vs. the Klan New books reconsider how Ulysses S.
- All The Places Kamala Harris Went Before Visiting Hurricane Victims by Keith Preston on October 5, 2024 at 3:31 am
October 3 2024 All The Places Kamala Harris Went Before Visiting Hurricane Victims NEWS By Christina Lewis, Bradley Devlin While millions of Americans came face-to-face with the reality that they no longer have a place to call “home,” Vice President Kamala Harris was campaigning around the country. More
- “Israel” Pummeled by Iran, Hezbollah by Keith Preston on October 5, 2024 at 3:29 am
Truth Jihad Radio: “Israel” Pummeled by Iran, Hezbollah. More than a dozen Zionist invaders dead on first day of fighting, says Resistance. Kevin Barrett, Oct 3. Rumble link Bitchute link
- Jack Smith’s Political Gambit Targets Trump by Keith Preston on October 5, 2024 at 3:25 am
NATIONAL REVIEW OCTOBER 04, 2024 ◼ Now we know why Tim Walz didn’t teach debate. ◼ With early voting already under way less than five weeks to Election Day, special counsel Jack Smith filed a book-length proffer of his 2020 election-interference case against Donald Trump, which Judge
- You’re invited: Ben Rhodes, Pankaj Mishra, and Suzy Hansen discuss October 7 and its aftermath by Keith Preston on October 5, 2024 at 3:24 am
The New York Review of Books presents Gaza, Israel, and the American Left: October 7 One Year Later a conversation with Ben Rhodes • Suzy Hansen • Pankaj Mishra An online event starting at 5 PM EDT, October 7, 2024 The New York Review of Books presents
- Cowards and Liars by Keith Preston on October 5, 2024 at 3:23 am
Watch now: Cowards and Liars. Peter R. Quiñones, Oct 4.
- Palestine & Lebanon Have the Right to Defend Themselves Against Israel by Keith Preston on October 5, 2024 at 3:22 am
Israel, with the full backing of the U.S., claims to be massacring civilians in Gaza, the West Bank and Lebanon in self-defense. Does this have any legal merit? What about the rights of Palestine and Lebanon? To discuss this and more Rania Khalek was joined by longtime human
- On the Sixtieth Anniversary of the Warren Report by Keith Preston on October 5, 2024 at 3:21 am
FRIDAY, OCTOBER 4, 2024 We are mad, not only individually, but nationally. We check manslaughter and isolated murders; but what of war and the much vaunted crime of slaughtering whole peoples? – Lucius Annaeus Seneca, Epistles [1st Century A.D.] HORNBERGER’S BLOG October 4, 2024 “The Whole World Would
- Arguing With Richie Allen About Trump, Israel’s Pager Terrorism, and Which Media to Believe by Keith Preston on October 5, 2024 at 3:19 am
Watch now: Arguing With Richie Allen About Trump, Israel’s Pager Terrorism, and Which Media to Believe. Meanwhile, my special two-hour live interview with Alan Sabrosky is about to start… Kevin Barrett, Oct 4. Yesterday’s interview with Richie Allen
- The Many Lies of JD Vance by Keith Preston on October 5, 2024 at 3:18 am
OCTOBER 4, 2024 The Debate Revealed the Fascist Behind the Curtain Walz Defended Reality—Even as Vance Took Full Advantage of CBS’s Failure to Fact-Check → “Once it became clear that Trump had remade the GOP in his own image, Vance was shrewd enough to realize that he now
- Why Vance Matters by Keith Preston on October 5, 2024 at 3:17 am
Conservatives and liberals need to take a longer view of the right’s future. Andrew Sullivan Oct 04, 2024 ∙ Paid (Andy Manis/Getty Images) By far the biggest revelation of last Tuesday night was what happens to American politics if you remove Donald J Trump. Everything instantly changes. We
- FEMA is creating America Firsters by Keith Preston on October 5, 2024 at 3:16 am
By Tom Woods Mollie Hemingway of The Federalist is right: “In a sane country, this would end the party in power.” She’s referring to something you’ve probably heard by now. Exhibit A: “For Fiscal Year (FY) 2024, the U.S. Department of Homeland Security will provide $640.9 million of
- Honoring St. Francis: 14 Tales of Animals and Their Holy Companions by Keith Preston on October 5, 2024 at 3:15 am
Saintly Creatures: 14 Tales of Animals and Their Holy Companions is a Catholic children’s book containing 14 amazing stories about animals and the holy men and women they encountered. GET THE BOOK On this feast day of St. Francis, teach your kids about these saints who love animals including…
- Dugin’s Directive: “The Conflict in the Middle East Is the Start of a Great War” by Keith Preston on October 5, 2024 at 3:14 am
by Alexander Dugin Oct 04, 2024 Alexander Dugin argues that the escalating conflict in the Middle East marks the beginning of a larger global war, as Iran and its allies confront Israel and the Western hegemony, opening a second front following Ukraine. The missile strikes by
- Alan Sabrosky on Ever-Expanding Mideast War by Keith Preston on October 5, 2024 at 3:13 am
Truth Jihad Radio: Alan Sabrosky on Ever-Expanding Mideast War. Kevin Barrett, Oct 4. Rumble link Bitchute link. Alan Sabrosky, former Director of Strategic Studies at the US Army War College, discusses
- Unearthing Middle-earth’s European Roots by Keith Preston on October 5, 2024 at 3:12 am
by Alexander Raynor Oct 04, 2024 Alexander Raynor discusses Armand Berger’s Tolkien, Europe, and Tradition and how Tolkien forged a new mythology from our European heritage. Tolkien, Europe, and Tradition, by Armand Berger, is another volume in the Iliade Institute-Arktos collection. Berger offers a fascinating exploration
- EFF to Fifth Circuit: Age Verification Laws Will Hurt More Than They Help by Molly Buckley on October 4, 2024 at 8:38 pm
EFF, along with the ACLU and the ACLU of Mississippi, filed an amicus brief on Thursday asking a federal appellate court to continue to block Mississippi’s HB 1126—a bill that imposes age verification mandates on social media services across the internet. Our friend-of-the-court brief, filed in the U.S. Court of Appeals for the Fifth Circuit, argues that HB 1126 is “an extraordinary censorship law that violates all internet users’ First Amendment rights to speak and to access protected speech” online. HB 1126 forces social media sites to verify the age of every user and requires minors to get explicit parental consent before accessing online spaces. It also pressures them to monitor and censor content on broad, vaguely defined topics—many of which involve constitutionally protected speech. These sweeping provisions create significant barriers to the free and open internet and “force adults and minors alike to sacrifice anonymity, privacy, and security to engage in protected online expression.” A federal district court already prevented HB 1126 from going into effect, ruling that it likely violated the First Amendment.

Blocking Minors from Vital Online Spaces

At the heart of our opposition to HB 1126 is its dangerous impact on young people’s free expression. Minors enjoy the same First Amendment right as adults to access and engage in protected speech online. “No legal authority permits lawmakers to burden adults’ access to political, religious, educational, and artistic speech with restrictive age-verification regimes out of a concern for what minors might see. Nor is there any legal authority that permits lawmakers to block minors categorically from engaging in protected expression on general purpose internet sites like those regulated by HB 1126.” Social media sites are not just entertainment hubs; they are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics. As our brief explains, minors’ access to these online spaces “is essential to their growth into productive members of adult society because it helps them develop their own ideas, learn to express themselves, and engage productively with others in our democratic public sphere.” Social media also “enables individuals whose voices would otherwise not be heard to make vital and even lifesaving connections with one another, and to share their unique perspectives more widely.” LGBTQ+ youth, for example, turn to social media for community, exploration, and support, while others find help in forums that discuss mental health, disability, eating disorders, or domestic violence. HB 1126’s age-verification regime places unnecessary barriers between young people and these crucial resources. The law compels platforms to broadly restrict minors’ access to a vague list of topics—the majority of which concern constitutionally protected speech—that Mississippi deems “harmful” for minors.

First Amendment Rights: Protection for All

The impact of HB 1126 is not limited to minors—it also places unnecessary and unconstitutional restrictions on adults’ speech. The law requires all users to verify their age before accessing social media, which could entirely block access for the millions of U.S. adults who lack government-issued ID. Should a person who takes public transit every day need to get a driver’s license just to get online? Would you want everything you do online to be linked to your government-issued ID? HB 1126 also strips away users’ protected right to online anonymity, leaving them vulnerable to exposure and harassment and chilling them from speaking freely on social media. As our brief recounts, the vast majority of internet users have taken steps to minimize their digital footprints and even to “avoid observation by specific people, organizations, or the government.” “By forcibly tying internet users’ online interactions to their real-world identities, HB 1126 will chill their ability to engage in dissent, discuss sensitive, personal, controversial, or stigmatized content, or seek help from online communities.”

Online Age Verification: A Privacy Nightmare

Finally, HB 1126 forces social media sites to collect users’ most sensitive and immutable data, turning them into prime targets for hackers. In an era where data breaches and identity theft are alarmingly common, HB 1126 puts every user’s personal data at risk. Furthermore, the process of age verification often involves third-party services that profit from collecting and selling user data. This means that the sensitive personal information on your ID—such as your name, home address, and date of birth—could be shared with a web of data brokers, advertisers, and other intermediary entities. “Under the plain language of HB 1126, those intermediaries are not required to delete users’ identifying data and, unlike the online service providers themselves, they are also not restricted from sharing, disclosing, or selling that sensitive data. Indeed, the incentives are the opposite: to share the data widely.” No one—neither minors nor adults—should have to sacrifice their privacy or anonymity in order to exercise their free speech rights online.

Courts Continue To Block Laws Like Mississippi’s

Online age verification laws like HB 1126 are not new, and courts across the country have consistently ruled them unconstitutional. In cases from Arkansas to Ohio to Utah, courts have struck down similar online age-verification mandates because they burden users’ access to, and ability to engage with, protected speech. While Mississippi may have a legitimate interest in protecting children from harm, as the Supreme Court has held, “that does not include a free-floating power to restrict the ideas to which children may be exposed.” By imposing age verification requirements on all users, laws like HB 1126 undermine the First Amendment rights of both minors and adults, pose serious privacy and security risks, and chill users from accessing one of the most powerful expressive mediums of our time. For these reasons, we urge the Fifth Circuit to follow suit and continue to block Mississippi HB 1126.
- Digital Inclusion Week, Highlighting an EFA Member’s Digital Equity Work by Christopher Vines on October 4, 2024 at 8:09 pm
In honor of Digital Inclusion Week, October 7-11th 2024, it’s an honor to uplift one of our Electronic Frontier Alliance (EFA) members who is doing great work making sure technology benefits everyone by addressing the digital divide: CCTV Cambridge. This year they partnered to host a Digital Navigator program. Its aim is to assist in bridging the digital divide in Cambridge by assessing the needs of the community and acting as a technological social worker. Digital Navigators (DN’s) have led to better outreach, assessment, and community connection. Making a difference in communities affected by the digital divide is impactful work. So far the DN’s have helped many people access resources online, distributed 50 Thinkpad laptops installed with Windows 10 and Microsoft Office, and distributed 15 hotspots for wifi with two years paid by T-mobile. This is groundbreaking because typically people are getting chromebooks on loan that have limited capabilities. The beauty of these devices is that you can work and learn on them with reliable, high-speed internet access, and they are able to be used anywhere. Samara Murrell, Coordinator of CCTV’s Digital Navigator Program states: "Being part of a solution that attempts to ensure that everyone has equal access to information, education and job opportunities, so that we can all fully participate in our society, is some of the best, most inspiring and honorable work that one can do." Left to Right: DN Coordinator Samara Murrell and DN’s Lida Griffin, Dana Grotenstein, and Eden Wagayehu CCTV Cambridge is also slated to start hosting classes in 2025. They hope to offer intermediate Windows and Microsoft Office to the cohort as the first step, and then advanced Excel as the second part for returning members of the cohort. 
Maritza Grooms, CCTV Cambridge's Associate Director of Community Relations, says: "CCTV is incredibly grateful and honored to be the hub and headquarters of the Digital Navigator Pilot Program in partnership with the City of Cambridge, Cambridge Public Library, Cambridge Public School Department, and Just-A-Start. This program is crucial to serving Cambridge's most vulnerable and marginalized communities and ensuring they have the access to resources they need to be able to fully participate in society in this digital age. We appreciate any and all support to help us make the Digital Navigator Program a continued sustainable program beyond the pilot. Please contact me at maritza@cctvcambridge.org to find out how you can support this program or visit cctvcambridge.org/support to support today." There are countless examples of the impact CCTV's DNs have already had. One library patron, who has her own laptop thanks to the DNs, was able to take a tech support class and advance her career. A young college student studying bioengineering needed a laptop and hotspot to continue his studies, and he recently got them from CCTV Cambridge. Kudos to CCTV Cambridge for addressing the disparities of the digital divide in your community with your awesome digital inclusion work! To connect with other members of the EFA doing impactful work in your area, please check out our allies page: https://efa.eff.org/allies
- Join the Movement for Public Broadband in PDX, by Christopher Vines on October 4, 2024 at 7:01 pm
Did you know the City of Portland, Oregon, already owns and operates a fiber-optic broadband network? It's called IRNE (Integrated Regional Network Enterprise), and despite its existence, Portlanders are forced to pay through the nose for internet access because of a lack of meaningful competition. Even after 24 years of IRNE, too many in PDX struggle to afford and access fast internet service in their homes and small businesses. EFF and local Electronic Frontier Alliance members Personal TelCo Project and Community Broadband PDX are calling on city council and mayoral candidates to sign a pledge to support an open-access business model, where the city owns and leases the "dark" fiber. That way, services can be run by local nonprofits, local businesses, or community cooperatives. The hope is that these local services can then grow to support retail service and meet the needs of more residents. This change will only happen if we show our support, so join the campaign today to stay up to date and find volunteer opportunities. Also come out for fun and learning at The People's Digital Safety Fair on Saturday, October 19, for talks and workshops from the local coalition. Let's break the private ISP monopoly power in Portland!
Leading this campaign is Community Broadband PDX, whose mission is to "guide Portlanders to create a new option for fast internet access: Publicly owned and transparently operated, affordable, secure, fast and reliable broadband infrastructure that is always available to every neighborhood and community." Jennifer Redman, President of the Board of Directors and Campaign Manager of Community Broadband PDX (who formerly served as the Community Broadband Planning Manager in the City of Portland's Bureau of Planning and Sustainability), said this when asked about the campaign to expand IRNE into affordable, accessible internet for all Portlanders: "Expanding access to the Integrated Regional Network Enterprise (IRNE) is the current campaign focus because within municipal government it is often easier to expand existing programs rather than create entirely new ones - especially if there is a major capital investment required. IRNE is staffed, there are regional partners and the program is highly effective. Yes it is limited in scope but there are active expansion plans. Leveraging IRNE allows us to advocate for policies like "Dig Once Dig Smart" every time the ground is open for any type of development in the City of Portland - publicly owned fiber conduit must be included. The current governmental structure has made implementing these policies extremely difficult because of the siloed nature of how the City is run. For example, the water bureau doesn't want to be told what to do by the technology services bureau. This should significantly improve with our charter change. Currently the City of Portland really operates as a group of disparate systems that sometimes work together. I hope that under a real city manager, the City is run as one system. IRNE already partners with Link Oregon - which provides the "retail" network services for many statewide educational and other non-profit institutions.
The City is comfortable with this model - IRNE builds and manages the dark fiber network while partners provide the retail or "lit" service. Let's grow local ISPs that keep dollars in Portland as opposed to corporate out-of-state providers like Comcast and CenturyLink." The time is now to move Portland forward and make access to the publicly owned fiber-optic network available to everyone. As explained by Russell Senior, President and member of the Board of Directors of Personal TelCo Project, this would bring major economic and workforce development advantages to Portland: "Our private internet access providers exploit their power to gouge us all with arbitrary prices, because our only alternative to paying them whatever they ask is to do without. The funds we pay these companies ends up with far away investors, on the order of $500 million per year in Multnomah County alone. Much of that money could be staying in our pockets and circulating locally if we had access they couldn't choke off. I learned most of my professional skills from information I found on the Internet. I got a good job, and have a successful career because of the open source software tools that I received from people who shared it on the internet. The internet is an immense store of human knowledge, and ready access to it is an essential part of developing into a fruitful, socially useful and fulfilled person." Portland is currently an island of expensive, privately owned internet service infrastructure, as every county surrounding Portland is building or operating affordable publicly owned and publicly available super-fast fiber-optic broadband networks. Fast internet access in Portland remains expensive and limited to neighborhoods that provide the highest profits for the few private internet service providers (ISPs). Individual prosperity and a robust local economy are driven by UNIVERSAL affordable access to fast internet service.
A climate-resilient city needs robust publicly owned and available fiber-optic broadband infrastructure. Creating a digitally equitable and just city depends upon providing access to fast internet service at an affordable cost for everyone. That is why we are calling on city officials to take the pledge to support open-access internet in Portland. Join the campaign to make access to the city-owned fiber-optic network available to everyone. Let's break the private ISP monopoly power in Portland!
- Vote for EFF's "How to Fix the Internet" Podcast in the Signal Awards! by Josh Richman on October 2, 2024 at 9:11 pm
We're thrilled to announce that EFF's "How to Fix the Internet" podcast is a finalist in the Signal Awards 3rd Annual Listener's Choice competition. Now we need your vote to put us over the top! Vote now! We're barraged by dystopian stories about technology's impact on our lives and our futures — from tracking-based surveillance capitalism to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say. The landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future. That's where our podcast comes in. Through curious conversations with some of the leading minds in law and technology, "How to Fix the Internet" explores creative solutions to some of today's biggest tech challenges. Over our five seasons, we've had well-known, mainstream names like Marc Maron to discuss patent trolls, Adam Savage to discuss the rights to tinker and repair, Dave Eggers to discuss when to set technology aside, and U.S. Sen. Ron Wyden, D-OR, to discuss how Congress can foster an internet that benefits everyone. But we've also had lesser-known names who do vital, thought-provoking work – Taiwan's then-Minister of Digital Affairs Audrey Tang discussed seeing democracy as a kind of open-source social technology, Alice Marwick discussed the spread of conspiracy theories and disinformation, Catherine Bracy discussed getting tech companies to support (not exploit) the communities they call home, and Chancey Fleet discussed the need to include people with disabilities in every step of tech development and deployment. That's just a taste. If you haven't checked us out before, listen today to become deeply informed on vital technology issues and join the movement working to build a better technological future. And if you've liked what you've heard, please throw us a vote in the Signal Awards competition!
Vote Now! Our deepest thanks to all our brilliant guests, and to the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible.
- Digital ID Isn't for Everybody, and That's Okay | EFFector 36.13, by Christian Romero on October 2, 2024 at 5:23 pm
Need help staying up-to-date on the latest in the digital rights movement? You're in luck! In our latest newsletter, we outline the privacy protections needed for digital IDs, explain our call for the U.S. Supreme Court to strike down an unconstitutional age verification law, and call out the harms of AI monitoring software deployed in schools. It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube. Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
- How to Stop Advertisers From Tracking Your Teen Across the Internet, by Guest Author on October 1, 2024 at 6:23 pm
This post was written by EFF fellow Miranda McClellan. Teens between the ages of 13 and 17 are being tracked across the internet using identifiers known as Advertising IDs. When children turn 13, they age out of the data protections provided by the Children's Online Privacy Protection Act (COPPA). Then, they become targets for data collection by data brokers that gather their information from social media apps, shopping history, location tracking services, and more. Data brokers then process and sell the data. Deleting Advertising IDs from your teen's devices can increase their privacy and stop advertisers from collecting their data. What is an Advertising ID? Advertising identifiers – Android's Advertising ID (AAID) and the Identifier for Advertisers (IDFA) on iOS – enable third-party advertising by providing device and activity tracking information to advertisers. The advertising ID is a string of letters and numbers that uniquely identifies your phone, tablet, or other smart device. How Teens Are Left Vulnerable In most countries, children must be over 13 years old to manage their own Google account without a supervisory parent account through Google Family Link. Once they turn 13, they gain the right to manage their own account and app downloads without parental supervision—and they also gain an Advertising ID. At 13, children transition abruptly between two extremes—from potential helicopter parental surveillance to surveillance advertising that connects their online activity and search history to marketers serving targeted ads. Thirteen is a significant age threshold. In the United States, both Facebook and Instagram require users to be at least 13 years old to make an account, though many children pretend to be older. The Children's Online Privacy Protection Act (COPPA), a federal law, requires companies to obtain "verifiable parental consent" before collecting personal information from children under 13 for commercial purposes.
But this means that teens can lose valuable privacy protections even before becoming adults. How to Protect Children and Teens from Tracking Here are a few steps we recommend to protect children and teens from behavioral tracking and other privacy-invasive advertising techniques: Delete advertising IDs for minors aged 13-17. Require schools using Chromebooks, Android tablets, or iPads to educate students and parents about deleting advertising IDs from school devices and accounts to preserve student privacy. Advocate for extended privacy protections for everyone. How to Delete Advertising IDs Advertising IDs track devices and activity from connected accounts. Both Android and iOS users can reset or delete their advertising IDs from the device. Removing the advertising ID removes a key component advertisers use to identify audiences for targeted ad delivery. While users will still see ads after resetting or deleting their advertising ID, those ads will be severed from their previous online behavior and will be less personally targeted. Follow these instructions, updated from a previous EFF blog post: On Android With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap "Delete advertising ID," then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future. The Android opt-out should be available to most users on Android 12, but may not be available on older versions. If you don't see an option to "delete" your ad ID, you can use the older version of Android's privacy controls to reset it and ask apps not to track you. On iOS Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you. Select "Ask App Not to Track" to deny it IDFA access.
To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking. In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA. You can set the "Allow Apps to Request to Track" switch to the "off" position (the slider is to the left and the background is gray). This will prevent apps from asking to track in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as well. You also have the option to grant or revoke tracking access on a per-app basis. Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy > Apple Advertising and set the "Personalized Ads" switch to the "off" position to disable Apple's ad targeting. Miranda McClellan served as a summer fellow at EFF on the Public Interest Technology team. Miranda has a B.S. and M.Eng. in Computer Science from MIT. Before joining EFF, Miranda completed a Fulbright research fellowship in Spain applying machine learning to 5G networks, worked as a data scientist at Microsoft building machine learning models to detect malware, and was a fellow at the Internet Society. In her free time, Miranda enjoys running, hiking, and crochet. At EFF, Miranda conducted research focused on understanding the data broker ecosystem and enhancing children's privacy. She received funding from the National Science Policy Network.
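The reset and delete behavior described above can be captured in a toy model. The following is a minimal, hypothetical Python sketch (not any real mobile API) showing why removing the advertising ID matters: every app on a device can read the same persistent identifier, and resetting or deleting it orphans any tracking profile keyed to the old value.

```python
import uuid

# Android reports an all-zeros string once the ad ID has been deleted.
ZEROED_ID = "00000000-0000-0000-0000-000000000000"

class Device:
    """Toy model of a phone's advertising identifier (illustrative only)."""

    def __init__(self):
        # A fresh device carries one random, persistent advertising ID
        # that every installed app can read.
        self.ad_id = str(uuid.uuid4())

    def reset_ad_id(self):
        # Resetting issues a new random ID; tracking profiles keyed to
        # the previous value can no longer be linked to this device.
        self.ad_id = str(uuid.uuid4())

    def delete_ad_id(self):
        # Deleting (as on Android 12+) leaves apps with no usable
        # identifier at all.
        self.ad_id = ZEROED_ID

phone = Device()
profile_key = phone.ad_id          # the value an ad SDK would record
phone.reset_ad_id()
print(phone.ad_id != profile_key)  # the old ad profile is now orphaned
```

The zeroed-ID value matches what Android documents for a deleted ad ID; everything else here is an illustrative simplification, not the platform API.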
- EFF Awards Night: Celebrating Digital Rights Founders Advancing Free Speech and Access to Information Around the World, by Karen Gullo on September 30, 2024 at 5:27 pm
Digital freedom and investigative reporting about technology have been at risk amid political and economic strife around the world. This year's annual EFF Awards honored the achievements of people helping to ensure that the power of technology, the right to privacy and free speech, and access to information are available to people all over the world. On September 12 in San Francisco's Presidio, EFF presented awards to investigative news organization 404 Media, founder of Latin American digital rights group Fundación Karisma Carolina Botero, and Cairo-based nonprofit Connecting Humanity, which helps Palestinians in Gaza regain access to the internet. All our award winners overcame roadblocks to build organizations that protect and advocate for people's rights to online free speech, digital privacy, and the ability to live free from government surveillance. If you missed the ceremony in San Francisco, you can still catch what happened on YouTube and the Internet Archive. You can also find a transcript of the live captions. EFF Executive Director Cindy Cohn kicked off the ceremony, highlighting some of EFF's recent achievements and milestones, including our How to Fix the Internet podcast, now in its fifth season, which won two awards this year and saw a 21 percent increase in downloads. Cindy talked about EFF's legal work defending a security researcher at this year's DEF CON who was threatened for his planned talk about a security vulnerability he discovered. EFF's Coders' Rights team helped the researcher avoid a lawsuit and present his talk on the conference's last day. Another win: EFF fought back to ensure that police drone footage was not exempt from public records requests. As a result, "we can see what the cops are seeing," Cindy said. EFF Executive Director Cindy Cohn kicks off the ceremony.
"It can be truly exhausting and scary to feel the weight of the world's problems on our shoulders, but I want to let you in on a secret," she said. "You're not alone, and we're not alone. And, as a wise friend once said, courage is contagious." Cindy turned the program over to guest speaker Elizabeth Minkel, journalist and co-host of the long-running fan culture podcast Fansplaining. Elizabeth kept the audience giggling as she recounted her personal fandom history with Buffy the Vampire Slayer and later Harry Potter, and how EFF's work defending fair use and fighting copyright maximalism has helped fandom art and fiction thrive despite attacks from movie studios and entertainment behemoths. Elizabeth Minkel—journalist, editor, and co-host of the Fansplaining podcast. "The EFF's fight for open creativity online has been helping fandom for longer than I've had an internet connection," Minkel said. "Your values align with what I think of as the true spirit of transformative fandom, free and open creativity, and a strong push back against those copyright strangleholds in the homogenization of the web." Presenting the first award of the evening, EFF Director of Investigations Dave Maass took the stage to introduce 404 Media, winner of EFF's Award for Fearless Journalism. The outlet's founders were all tech journalists who worked together at Vice Media's Motherboard when its parent company filed for bankruptcy in May 2023. All were out of a job, part of a terrible trend of reporter layoffs and shuttered news sites as media businesses struggle financially. Journalists Jason Koebler, Sam Cole, Joseph Cox, and Emanuel Maiberg together resolved to go out on their own; in 2023 they started 404 Media, aiming to uncover stories about how technology impacts people in the real world. Since its founding, journalist-owned 404 Media has published scoops on hacking, cybersecurity, cybercrime, artificial intelligence, and consumer rights.
They uncovered the many ways tech companies and speech platforms sell users' data without their knowledge or consent to AI companies for training purposes. Their reporting led to Apple banning apps that help create non-consensual sexual AI imagery, and revealed a feature on New York City subway passes that enabled rider location tracking, leading the subway system to shut down the feature. Jason Koebler remotely accepts the EFF Award for Fearless Journalism on behalf of 404 Media. "We believe that there is a huge demand for journalism that is written by humans for other humans, and that real people do not want to read AI-generated news stories that are written for search engine optimization algorithms and social media," said 404 Media's Jason Koebler in a video recorded for the ceremony. EFF Director for International Freedom of Expression Jillian York introduced the next award recipient, Cairo-based nonprofit Connecting Humanity, represented by Egyptian journalist and activist Mirna El Helbawi. The organization collects and distributes embedded SIMs (eSIMs), a software version of the physical chip used to connect a phone to cellular networks and the internet. The eSIMs have helped thousands of Gazans stay digitally connected with family and the outside world, speak to loved ones at hospitals, and seek emergency help amid telecom and internet blackouts during Israel's war with Hamas. Connecting Humanity has distributed 400,000 eSIMs to people in Gaza since October. The eSIMs have been used to save families from under the rubble, allow people to resume their online jobs and attend online school, connect hospitals in Gaza, and assist journalists reporting on the ground, Mirna said. Mirna El Helbawi accepts the EFF Award on behalf of Connecting Humanity. "This award is for Connecting Humanity's small team of volunteers, who worked day and night to connect people in Gaza for the past 11 months and are still going strong," she told the audience.
"They are the most selfless people I have ever met. Not a single day has passed without this team doing their best to ensure that people are connecting in Gaza." EFF Policy Director for Global Privacy Katitza Rodriguez took the stage next to introduce the night's final honoree, Fundación Karisma founder and former executive director Carolina Botero. A researcher, lecturer, writer, and consultant, Carolina is among the foremost leaders in the fight for digital rights in Latin America. Karisma has worked since 2003 to put digital privacy and security on policymaking agendas in Colombia and the region and to ensure that technology protects human rights. She played a key role in helping to defeat a copyright law that would have brought a DMCA-like notice-and-takedown regime to Colombia, threatening free expression. Her opposition to the measure made her a target of government surveillance, but even under intense pressure from the government, she refused to back down. Karisma and other NGOs proposed amending Brazil's intelligence law to strengthen monitoring, transparency, and accountability mechanisms, and fought to increase digital security for human rights and environmental activists, who are often targets of government tracking. Carolina Botero receives the EFF Award for Fostering Digital Rights in Latin America. "Quiet work is a particularly thankless aspect of our mission in countries like Colombia, where there are few resources and few capacities, and where these issues are not on the public agenda," Carolina said in her remarks. She left her position at Karisma this year, opening the door for a new generation while leaving an inspiring legacy in Latin America's fight for digital rights.
EFF is grateful that it can honor and lift up the important work of these award winners, who work both behind the scenes and in very public ways to protect online privacy, access to information, free expression, and the ability to find community and communicate with loved ones and the world on the internet. The night’s honorees saw injustices, rights violations, and roadblocks to information and free expression, and did something about it. We thank them. And thank you to all EFF members around the world who make our work possible—public support is the reason we can push for a better internet. If you're interested in supporting our work, consider becoming an EFF member! You can get special gear as a token of our thanks and help support the digital freedom movement. Of course, special thanks to the sponsors of this year’s EFF Awards: Dropbox and Electric Capital.
- New Email Scam Includes Pictures of Your House. Don't Fall For It, by Cooper Quintin on September 27, 2024 at 7:36 pm
You may have arrived at this post because you received an email with an attached PDF from a purported hacker who is demanding payment or else they will send compromising information—such as pictures sexual in nature—to all your friends and family. You're searching for what to do in this frightening situation, and how to respond to an apparently personalized threat that even includes your actual "LastNameFirstName.pdf" and a picture of your house. Don't panic. Contrary to the claims in your email, you probably haven't been hacked (or at least, that's not what prompted that email). This is merely a new variation on an old scam—actually, a whole category of scams called "sextortion." This is a type of online phishing that targets people around the world and preys on digital-age fears. It generally uses publicly available information or information from data breaches, not information obtained by hacking the recipients of the emails specifically, and therefore it is very unlikely the sender has any "incriminating" photos or has actually hacked your accounts or devices. We'll talk about a few steps to take to protect yourself, but the first and foremost piece of advice we have: do not pay the ransom. We have pasted an example of this email scam at the bottom of this post. The general gist is that a hacker claims to have compromised your computer and says they will release embarrassing information—such as images of you captured through your web camera or your pornographic browsing history—to your friends, family, and co-workers. The hacker promises to go away if you send them thousands of dollars, usually in bitcoin. This is different from a separate sextortion scam in which a stranger befriends and convinces a user to exchange sexual content and then demands payment for secrecy—a much more perilous situation that requires a more careful response.
What makes the email especially alarming is that, to prove their authenticity, the scammers begin the email by showing you your address, full name, and possibly a picture of your house. Again, this still doesn't mean you've been hacked. The scammers in this case likely found a data breach containing a list of names, emails, and home addresses and are sending this email out to potentially millions of people, hoping that enough recipients would be worried enough to pay for the scam to become profitable. Here are some quick answers to the questions many people ask after receiving these emails. They Have My Address and Phone Number! How Did They Get a Picture of My House? Rest assured that the scammers were not in fact outside your house taking pictures. For better or worse, pictures of our houses are all over the internet. From Google Street View to real estate websites, finding a picture of someone's house is trivial if you have their address. Unfortunately, in the modern age, data breaches are common, and massive sets of people's personal information often make their way to the criminal corners of the internet. Scammers likely obtained such a list, or multiple lists including email addresses, names, phone numbers, and addresses, for the express purpose of including a kernel of truth in an otherwise boilerplate mass email. It's harder to change your address and phone number than it is to change your password. The best thing you can do here is be aware that your information is out there and be careful of future scams using this information. Since this information (along with other leaked info such as your social security number) can be used for identity theft, it's a good idea to freeze your credit. And of course, you should always change your password when you're alerted that your information has been leaked in a breach.
You can also use a service like Have I Been Pwned to check whether you have been part of one of the more well-known password dumps. Should I Respond to the Email? Absolutely not. With this type of scam, the perpetrator relies on the likelihood that a small number of people will respond out of a batch of potentially millions. Fundamentally this isn't that much different from the old Nigerian prince scam, just with a different hook. By default they expect most people will not even open the email, let alone read it. But once they get a response—and a conversation is initiated—they will likely move into a more advanced stage of the scam. It’s better to not respond at all. So, I Shouldn’t Pay the Ransom? You should not pay the ransom. If you pay the ransom, you’re not only losing money, but you’re encouraging the scammers to continue phishing other people. If you do pay, then the scammers may also use that as a pressure point to continue to blackmail you, knowing that you’re susceptible. What Should I Do Instead? Unfortunately there isn’t much you can do. But there are a few basic security hygiene steps you can take that are always a good idea. Use a password manager to keep your passwords strong and unique. Moving forward, you should make sure to enable two-factor authentication whenever that is an option on your online accounts. You can also check out our Surveillance Self-Defense guide for more tips on how to protect your security and privacy online. One other thing to do to protect yourself is apply a cover over your computer’s camera. We offer some through our store, but a small strip of electrical tape will do. This can help ease your mind if you're worried that a rogue app may be turning your camera on, or that you left it on yourself—unlikely, but possible scenarios. We know this experience isn't fun, but it's also not the end of the world. Just ignore the scammers' empty threats and practice good security hygiene going forward! 
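For passwords specifically, Have I Been Pwned also exposes a k-anonymity API (the range endpoint at api.pwnedpasswords.com): you send only the first five characters of the password's SHA-1 hash and compare the returned hash suffixes locally, so the password itself never leaves your machine. A minimal sketch in Python, with error handling omitted:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    sent to the API and the 35-character suffix compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how many known breaches contain this password.
    Only the 5-character hash prefix is ever sent over the network."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:COUNT"; match our suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0

# No network needed to see the k-anonymity split itself:
prefix, suffix = sha1_prefix_suffix("password")
print(prefix)  # → 5BAA6 (the only piece that would be sent)
```

The range endpoint and response format are the documented HIBP Pwned Passwords API; the helper names here are our own.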
Overall this isn't an issue that is up to consumers to fix. The root of the problem is that data brokers and nearly every other company have been allowed to store too much information about us for too long. Inevitably this data gets breached and makes its way into criminal markets, where it is sold, traded, and used for scams like this one. The most effective way to combat this would be with comprehensive federal privacy laws. Because, if the data doesn't exist, it can't be leaked. The best thing for you to do is advocate for such a law in Congress, or at the state level. Below are real examples of the scam that were sent to EFF employees. The scam text is similar across many different victims.
Example 1
[Name], I know that calling [Phone Number] or visiting [your address] would be a convenient way to contact you in case you don't act. Don't even try to escape from this. You've no idea what I'm capable of in [Your City]. I suggest you read this message carefully. Take a moment to chill, breathe, and analyze it thoroughly. 'Cause we're about to discuss a deal between you and me, and I don't play games. You do not know me but I know you very well and right now, you are wondering how, right? Well, you've been treading on thin ice with your browsing habits, scrolling through those videos and clicking on links, stumbling upon some not-so-safe sites. I placed a Malware on a porn website & you visited it to watch(you get my drift). While you were watching those videos, your smartphone began working as a RDP (Remote Control) which provided me complete control over your device. I can peep at everything on your display, flick on your camera and mic, and you wouldn't even suspect a thing. Oh, and I have got access to all your emails, contacts, and social media accounts too. Been keeping tabs on your pathetic life for a while now. It's simply your bad luck that I accessed your misdemeanor. I gave in more time than I should have looking into your personal life.
Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing filthy things in your room (nice setup, by the way). I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other half, its your vacant face. With simply a single click, I can send this video to every single of your contacts. I see you are getting anxious, but let's get real. Actually, I want to wipe the slate clean, and allow you to get on with your daily life and wipe your slate clean. I will present you two alternatives. First Alternative is to disregard this email. Let us see what is going to happen if you take this path. Your video will get sent to all your contacts. The video was lit, and I can't even fathom the humiliation you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here. Option 2 is to pay me, and be confidential about it. We will name it my “privacy charges”. let me tell you what will happen if you opt this option. Your secret remains private. I will destroy all the data and evidence once you come through with the payment. You'll transfer the payment via Bitcoin only. Pay attention, I'm telling you straight: 'We gotta make a deal'. I want you to know I'm coming at you with good intentions. My word is my bond. Required Amount: $1950 BITCOIN ADDRESS: [REDACTED] Let me tell ya, it's peanuts for your tranquility. Notice: You now have one day in order to make the payment and I will only accept Bitcoins (I have a special pixel within this message, and now I know that you have read through this message). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this or negotiating, it's pointless. The email and wallet are custom-made for you, untraceable. 
If I suspect that you've shared or discussed this email with anyone else, the garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless. I don't make mistakes, [Name]. Can you notice something here? Honestly, those online tips about covering your camera aren't as useless as they seem. I am waiting for my payment…

Example 2

[NAME],
Is visiting [ADDRESS] a better way to contact in case you don't act
Beautiful neighborhood btw
It's important you pay attention to this message right now. Take a moment to chill, breathe, and analyze it thoroughly. We're talking about something serious here, and I ain't playing games. You do not know anything about me but I know you very well and right now, you are thinking how, correct?
Well, You've been treading on thin ice with your browsing habits, scrolling through those filthy videos and clicking on links, stumbling upon some not-so-safe sites. I installed a Spyware called "Pegasus" on a app you frequently use. Pegasus is a spyware that is designed to be covertly and remotely installed on mobile phones running iOS and Android. While you were busy watching videos, your device started out working as a RDP (Remote Protocol) which gave me total control over your device. I can peep at everything on your display, flick on your cam and mic, and you wouldn't even notice. Oh, and I've got access to all your emails, contacts, and social media accounts too.
What I want?
Been keeping tabs on your pathetic existence for a while now. It's just your hard luck that I accessed your misdemeanor. I invested in more time than I probably should've looking into your personal life. Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing embarrassing things in your room (nice setup, by the way).
I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other part, it is your vacant face. With just a click, I can send this filth to all of your contacts.
What can you do?
I see you are getting anxious, but let's get real. Wholeheartedly, I am willing to wipe the slate clean, and let you move on with your regular life and wipe your slate clean. I am about to present you two alternatives. Either turn a blind eye to this warning (bad for you and your family) or pay me a small amount to finish this mattter forever. Let us understand those 2 options in details.
First Option is to ignore this email. Let us see what will happen if you select this path. I will send your video to your contacts. The video was straight fire, and I can't even fathom the embarrasement you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here.
Other Option is to pay me, and be confidential about it. We will name it my “privacy fee”. let me tell you what happens when you go with this choice. Your filthy secret will remain private. I will wipe everything clean once you send payment. You'll transfer the payment through Bitcoin only. I want you to know I'm aiming for a win-win here. I'm a person of integrity.
Transfer Amount: USD 2000
My Bitcoin Address: [BITCOIN ADDRESS]
Or, (Here is your Bitcoin QR code, you can scan it):
[IMAGE OF A QR CODE]
Once you pay up, you'll sleep like a baby. I keep my word.
Important: You now have one day to sort this out. (I've a special pixel in this message, and now I know that you've read through this mail). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this, it's pointless. The email and wallet are custom-made for you, untraceable. I don't make mistakes, [NAME].
If I notice that you've shared or discussed this mail with anyone else, your garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless.
Honestly, those online tips about covering your camera aren't as useless as they seem.
Don't dwell on it. Take it as a little lesson and keep your guard up in the future.
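Both examples claim a "special pixel" reveals when the message is read. That part is at least technically plausible: an email tracking pixel is just a tiny, remotely hosted image whose URL embeds a token unique to each recipient, so the sender's server learns who opened the mail the moment the mail client fetches the image. A minimal sketch of how such a pixel is constructed; the domain and token scheme here are invented for illustration:

```python
import secrets

def make_tracking_pixel(recipient_id: str,
                        base_url: str = "https://tracker.example.com/px") -> str:
    """Build the HTML for a 1x1 tracking image. The random token ties the
    image request back to one specific recipient; a real sender stores the
    (token -> recipient_id) mapping server-side before mailing, then logs
    an 'opened' event whenever the token's URL is fetched."""
    token = secrets.token_urlsafe(16)
    url = f"{base_url}?t={token}"
    return f'<img src="{url}" width="1" height="1" alt="">'

html = make_tracking_pixel("victim-42")
print(html)  # an invisible 1x1 <img> pointing at the tracker's server
```

This is also why the threat is easy to defeat: mail clients that block remote images by default, or load them through a privacy proxy, never contact the tracker's server, so the "I know you read this" claim fails for those recipients.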
- FTC Report Confirms: Commercial Surveillance is Out of Controlby Lena Cohen on September 26, 2024 at 2:55 pm
A new Federal Trade Commission (FTC) report confirms what EFF has been warning about for years: tech giants are widely harvesting and sharing your personal information to fuel their online behavioral advertising businesses. This four-year investigation into the data practices of nine social media and video platforms, including Facebook, YouTube, and X (formerly Twitter), demonstrates how commercial surveillance leaves consumers with little control over their privacy. While not every investigated company committed the same privacy violations, the conclusion is clear: companies prioritized profits over privacy. While EFF has long warned about these practices, the FTC's investigation offers detailed evidence of how widespread and invasive commercial surveillance has become. Here are key takeaways from the report:

Companies Collected Personal Data Well Beyond Consumer Expectations

The FTC report confirms that companies collect data in ways that far exceed user expectations. They're not just tracking activity on their platforms, but also monitoring activity on other websites and apps, gathering data on non-users, and buying personal information from third-party data brokers. Some companies could not, or would not, disclose exactly where their user data came from. The FTC found companies gathering detailed personal information, such as the websites you visit, your location data, your demographic information, and your interests, including sensitive interests like “divorce support” and “beer and spirits.” Some companies could only report high-level descriptions of the user attributes they tracked, while others produced spreadsheets with thousands of attributes.

There's Unfettered Data Sharing With Third Parties

Once companies collect your personal information, they don't always keep it to themselves. Most companies reported sharing your personal information with third parties.
Some companies shared so widely that they claimed it was impossible to provide a list of all the third-party entities they had shared personal information with. For the companies that could identify recipients, the lists included law enforcement and other companies, both inside and outside the United States. Alarmingly, most companies had no vetting process for third parties before sharing your data, and none conducted ongoing checks to ensure compliance with data use restrictions. For example, when companies say they're just sharing your personal information for something that seems unintrusive, like analytics, there's no guarantee your data is only used for the stated purpose. The lack of safeguards around data sharing exposes consumers to significant privacy risks.

Consumers Are Left in the Dark

The FTC report reveals a disturbing lack of transparency surrounding how personal data is collected, shared, and used by these companies. If companies can't tell the FTC who they share data with, how can you expect them to be honest with you? Data tracking and sharing happens behind the scenes, leaving users largely unaware of how much privacy they're giving up on different platforms. These companies don't just collect data from their own platforms—they gather information about non-users and about users' activity across the web. This makes it nearly impossible for individuals to avoid having their personal data swept up into these vast digital surveillance networks. Even when companies offer privacy controls, the controls are often opaque or ineffective. The FTC also found that some companies were not actually deleting user data in response to deletion requests. The scale and secrecy of commercial surveillance described by the FTC demonstrate why the burden of protecting privacy can't fall solely on individual consumers.
Surveillance Advertising Business Models Are the Root Cause

The FTC report underscores a fundamental issue: these privacy violations are not just occasional missteps—they're inherent to the business model of online behavioral advertising. Companies collect vast amounts of data to create detailed user profiles, primarily for targeted advertising. The profits generated from targeting ads based on personal information drive companies to develop increasingly invasive methods of data collection. The FTC found that the business models of most of the companies incentivized privacy violations.

FTC Report Underscores Urgent Need for Legislative Action

Without federal privacy legislation, companies have been able to collect and share billions of users' personal data with few safeguards. The FTC report confirms that self-regulation has failed: companies' internal data privacy policies are inconsistent and inadequate, allowing them to prioritize profits over privacy. In the FTC's own words, “The report leaves no doubt that without significant action, the commercial surveillance ecosystem will only get worse.” To address this, EFF advocates for federal privacy legislation. It should have many components, but these are key:

- Data minimization and user rights: Companies should be prohibited from processing a person's data beyond what's necessary to provide them what they asked for. Users should have the right to access their data, port it, correct it, and delete it.
- Ban on Online Behavioral Advertising: We should tackle the root cause of commercial surveillance by banning behavioral advertising. Otherwise, businesses will always find ways to skirt around privacy laws to keep profiting from intrusive data collection.
- Strong Enforcement with Private Right of Action: To give privacy legislation bite, people should have a private right of action to sue companies that violate their privacy. Otherwise, we'll continue to see widespread violation of privacy laws due to limited government enforcement resources.

Using online services shouldn't mean surrendering your personal information to countless companies to use as they see fit. When you sign up for an account on a website, you shouldn't need to worry about random third parties getting your information or every click being monitored to serve you ads. For now, our Privacy Badger extension can help you block some of the tracking technologies detailed in the FTC report. But the scale of commercial surveillance revealed in this investigation requires significant legislative action. Congress must act now and protect our data from corporate exploitation with a strong federal privacy law.
- The UN General Assembly and the Fight Against the Cybercrime Treatyby Katitza Rodriguez on September 26, 2024 at 10:49 am
Note on the update: The text has been revised to reflect the updated timeline for the UN General Assembly's consideration of the convention, which is now expected at the end of this year. The update also emphasizes that states should reject the convention. Additionally, a new section outlines the risks associated with broad evidence-sharing, particularly the lack of the robust safeguards needed to act as checks against the misuse of power. While the majority of the investigative powers in Chapter IV of the convention use mandatory "shall" language, the safeguards are left to each state's discretion in how they are applied. Please note that our piece in Just Security and this post are based on the latest version of the UNCC.

The final draft text of the United Nations Convention Against Cybercrime, adopted last Thursday by the United Nations Ad Hoc Committee, is now headed to the UN General Assembly for a vote. The last hours of deliberations were marked by drama as Iran repeatedly, though unsuccessfully, attempted to remove almost all human rights protections that survived in the final text, receiving support from dozens of nations. Although Iran's efforts were defeated, the resulting text is still nothing to celebrate, as it remains riddled with unresolved human rights issues. States should vote No when the UNGA votes on the UN Cybercrime Treaty.

The Fight Moves to the UN General Assembly

States will likely consider adopting or rejecting the treaty at the UN General Assembly later this year. It is crucial for states to reject the treaty and vote against it. This moment offers a key opportunity to push back and build a strong, coordinated opposition. Over more than three years of advocacy, we consistently fought for clearer definitions, narrower scope, and stronger human rights protections.
Since the start of the process, we made it clear that we didn't believe the treaty was necessary, and, given the significant variation in privacy and human rights standards among member states, we raised concerns that the investigative powers adopted in the treaty may accommodate the most intrusive police surveillance practices across participating countries. Yet we engaged in the discussions in good faith to try to ensure that the treaty would be narrow in scope and include strong, mandatory human rights safeguards. In the end, however, the e-evidence sharing chapter remains broad in scope, and the rights section unfortunately falls short. Instead of merely facilitating cooperation on core cybercrime, this convention authorizes open-ended evidence gathering and sharing for any serious crime that a country chooses to punish with a sentence of at least four years, without meaningful limitations. While the convention excludes cooperation requests if there are substantial grounds to believe that the request is for the purpose of prosecuting or punishing someone based on their political beliefs or personal characteristics, it sets an extremely high bar for such exclusions and provides no operational safeguards or mechanisms to ensure that acts of transnational repression or human rights abuses are refused. The convention requires that these surveillance measures be proportionate, but it leaves critical safeguards such as judicial review, the need for grounds justifying surveillance, and the need for effective redress as optional, despite the intrusive nature of the surveillance powers it adopts. Even more concerning, some states have already indicated that, in their view, the requirements for these critical safeguards are purely a matter of states' domestic law, and many domestic legal systems already fail to meet international human rights standards and lack meaningful judicial oversight or legal accountability.
The convention ended up accommodating the most intrusive practices. For example, blanket, generalized data retention is problematic under human rights law, but states that ignore those restrictions and have such powers under their domestic law can respond to assistance requests by sharing evidence that was retained through blanket data retention regimes. Similarly, encryption is protected under international human rights standards, but nothing in this convention prevents a state from employing encryption-breaking powers it has under its domestic law when responding to a cross-border request to access data. The convention's underlying flaw is the assumption that, in accommodating all countries' practices, states will act in good faith. That assumption is misplaced, and it increases the likelihood that the powerful global cooperation tools established by the convention will be abused.

The Unsettling Concessions in the Treaty Negotiations

The key function of the Convention, if ratified, will be to create a means of requiring legal assistance between countries that do not already have mutual legal assistance treaties (MLATs) or other cooperation agreements. This would include repressive regimes that may previously have been hindered in their attempts to engage in cross-border surveillance and data sharing, in some cases because their concerning human rights records have excluded them from MLATs. For countries that already have MLATs in place, the new treaty's cross-border cooperation provisions may provide additional tools for assistance. A striking pattern throughout the Convention as adopted is the leeway it gives states to decide whether or not to require human rights safeguards; almost all of the details of how human rights protections are implemented are left up to national law. For example, the scope and definition of many offenses “may"—or may not—include certain protective elements.
In addition, states are not required to decline requests from other states to help investigate acts that are not crimes under their domestic law; they can choose to cooperate with those requests instead. Nor does the treaty obligate states to carefully scrutinize surveillance requests to ensure they are not pretextual attempts at persecution. This pattern continues. For example, the list of core cybercrimes under the convention—the kind that in the past has swept in good-faith security research, whistleblowers, and journalistic activities—lets states choose whether specific elements must be present before an act is considered a crime, for example that the offense was done with dishonest intent or that it caused serious harm. Sadly, these elements are optional, not required. Similarly, provisions on child sexual abuse material (CSAM) allow states to adopt exceptions that would ensure scientific, medical, artistic, or educational materials are not wrongfully targeted, and that would exclude consensual, age-appropriate exchanges between minors, in line with international human rights standards. Again, these exceptions are optional, meaning that over-criminalization is not only consistent with the Convention but also qualifies for the Convention's cross-border surveillance and extradition mechanisms. The broad discretion granted to states under the UN Cybercrime Treaty is a deliberate design intended to secure agreement among countries with varying levels of human rights protections. This flexibility allows states with strong protections to uphold them, but it also permits those with weaker standards to maintain their lower levels of protection. This pattern was evident in the negotiations, where key human rights safeguards were made optional rather than mandatory, such as in the list of core cybercrimes and the provisions on cross-border surveillance.
These numerous options in the convention are also disappointing because they took the place of what would have been preferable: states advancing the protections in their national laws as global norms, and encouraging or requiring other states to adopt them.

Exposing States' Contempt For Rights

Iran's last-ditch attempts to strip human rights protections from the treaty were a clear indicator of the challenges ahead. In the final debate, Iran proposed deleting provisions that would let states refuse international requests for personal data when there's a risk of persecution based on political opinions, race, ethnicity, or other factors. Despite its disturbing implications, the proposal received 25 votes in support, including from India, Cuba, China, Belarus, Korea, Nicaragua, Nigeria, Russia, and Venezuela. That was just one of a series of proposals by Iran to remove specific human rights or procedural protections from the treaty at the last minute. Iran also requested a vote on deleting Article 6(2) of the treaty, another human rights clause that explicitly states that nothing in the Convention should be interpreted as allowing the suppression of human rights or fundamental freedoms, as well as Article 24, which establishes the conditions and safeguards—the essential checks and balances—for domestic and cross-border surveillance powers. Twenty-three countries, including Jordan, India, and Sudan, voted to delete Article 6(2), with 26 abstentions from countries like China, Uganda, and Turkey. This means a total of 49 countries either supported or chose not to oppose the removal of this critical clause, showing a significant divide in the international community's commitment to protecting fundamental freedoms. And 11 countries voted to delete Article 24, with 23 abstentions.
These and other Iranian proposals would have removed nearly every reference to human rights from the convention, stripping the treaty of its substantive human rights protections, affecting both domestic legislation and international cooperation, and leaving only the preamble and the general clause, which states: "State Parties shall ensure that the implementation of their obligations under this Convention is consistent with their obligations under international human rights law.”

Additional Risks of Treaty Abuse

The risk that treaty powers can be abused to persecute people is real and urgent. It is even more concerning that some states have sought to declare (by announcing a potential future “reservation”) that they may intend not to follow Article 6(2) (the general human rights clause), Article 24 (conditions and safeguards for domestic and cross-border spying assistance), and Article 40(22) on human-rights-based grounds for refusing mutual legal assistance, despite their integral roles in the treaty. Such reservations should be prohibited. According to the International Law Commission's "Guide to Practice on Reservations to Treaties," a reservation is impermissible if it is incompatible with the object and purpose of the treaty. The human rights safeguards, while not robust enough, are essential elements of the treaty, and reservations that undermine them could be considered incompatible with the treaty's object and purpose. Furthermore, the Guide states that reservations should not affect essential elements necessary to the general tenor of the treaty, and that if they do, such reservations impair the raison d'être of the treaty itself. Allowing reservations against human rights safeguards may therefore not only undermine the treaty's integrity but also challenge its legal and moral foundations. All of the attacks on safeguards in the treaty process raise particular concerns when foreign governments use the treaty powers to demand information from U.S.
companies, which should be able to rely on the strong standards embedded in U.S. law. Where norms and safeguards were made optional, we can presume that many states will choose to forgo them.

Cramming Even More Crimes Back In?

Throughout the negotiations, several delegations voiced concerns that the scope of the Convention did not cover enough crimes, including many that threaten online content protected by the rights to free expression and peaceful protest. Russia, China, Nigeria, Egypt, Iran, and Pakistan advocated for broader criminalization, including crimes like incitement to violence and desecration of religious values. In contrast, the EU, the U.S., Costa Rica, and others advocated for a treaty that focuses solely on computer-related offenses, like attacks on computer systems, and some cyber-enabled crimes like CSAM and grooming. Despite significant opposition, Russia, China, and other states successfully advanced the negotiation of a supplementary protocol for additional crimes, even before the core treaty has been ratified and taken effect. This move is particularly troubling, as it leaves unresolved the critical issue of consensus on what constitutes core cybercrimes—a ticking time bomb that could lead to further disputes and retroactively expand the application of the Convention's cross-border cooperation regime even further. Under the final agreement, it will take 40 ratifications for the treaty to enter into force and 60 before any new protocols can be adopted. While consensus remains the goal, if it cannot be reached, a protocol can still be adopted with a two-thirds majority vote of the countries present. The treaty negotiations are disappointing, but civil society and human rights defenders can unite to urge states to vote against the convention at the next UN General Assembly, ensuring that these flawed provisions do not undermine human rights globally.
- Digital ID Isn't for Everybody, and That's Okayby Alexis Hancock on September 25, 2024 at 10:57 pm
How many times do you pull out your driver's license a week? Maybe two to four times, to purchase age-restricted items, pick up prescriptions, or go to a bar. If you get a mobile driver's license (mDL) or one of the other forms of digital identification (ID) being offered in Google and Apple wallets, you may have to share this information much more often than before, because this new technology may expand the scope of scenarios demanding your ID. mDLs and digital IDs are being deployed faster than states can draft privacy protections, including for presenting your ID to more third parties than ever before. While proponents of these digital schemes emphasize convenience, these IDs can easily expand into new territories like controversial age verification bills that censor everyone. Moreover, digital ID is simultaneously being tested in sensitive situations and expanded into a potential regime of unprecedented data tracking. In the digital ID space, the question of "how can we do this right?" often usurps the more pertinent question of "should we do this at all?" While there are highly recommended safeguards for these new technologies, we must always support each person's right to choose to continue using physical documentation instead of going digital. We must also do more to bring understanding of, and decision power over, these technologies to everyone, rather than zealously promoting them as a potential equalizer.

What's in Your Wallet?

With modern hardware, phones can now safely store sensitive data and credentials with higher levels of security. This enables functionality like Google Pay and Apple Pay exchanging transaction data online with e-commerce sites. While there is platform-specific terminology, the general term to know is "Trusted Platform Module" (TPM). This hardware enables "Trusted Execution Environments" (TEEs), in which sensitive data can be processed in isolation. Most modern phones, tablets, and laptops come with TPMs.
Digital IDs are held at a higher level of security within the Google and Apple wallets (as they should be). So if you have an mDL provisioned on a device, the contents of the mDL are not "synced to the cloud"; instead, they stay on that device, and you have the option to remotely wipe the credential if the device is stolen or lost. Beyond the digital wallets already common on most phones, some states have their own wallet apps for mDLs that must be downloaded from an app store. The security of these applications can vary, along with the data they can and can't see. Different private partners have been making wallet/ID apps for different states, including IDEMIA, Thales, and Spruce ID, to name a few. Digital identity frameworks, like Europe's eIDAS, have been creating language and provisions for "open wallets," where you don't necessarily have to rely on big tech for a safe and secure wallet. However, privacy and security need to be paramount. If privacy is an afterthought, digital IDs can quickly become yet another gold mine of breaches for data brokers and bad actors.

New Announcements, New Scope

Digital ID has been moving fast this summer. More states are contracting with mDL providers. Federal agencies are looking at digital IDs to prevent fraud. A new API enables websites to ask for your mDL. TSA accepts "digital passport IDs" in Android. Proponents of digital ID frequently present the "over 21" example, which is often described like this: you go to the bar, you present a claim from your phone that you are over 21, and a bouncer confirms the claim with a reader device via a QR code or a tap over NFC. Very private. Very secure. Said bouncer will never know your address or other information, not even your name. This is called an "abstract claim": rather than exchanging the more sensitive information, the device presents only a less sensitive attestation to the verifier, such as an age threshold instead of your date of birth and name.
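The "over 21" flow described above is a form of selective disclosure: ISO/IEC 18013-5, the mDL standard, defines boolean `age_over_NN` data elements precisely so a holder can reveal an age threshold without revealing a birth date. A simplified, non-cryptographic sketch of the idea; real mDL presentments also involve issuer and device signatures, which are omitted here, and all field values are illustrative:

```python
from datetime import date

# The holder's full credential, as stored on the device (illustrative fields).
credential = {
    "family_name": "Doe",
    "given_name": "Jane",
    "birth_date": date(1999, 6, 15),
    "resident_address": "123 Main St",
}

def present(credential: dict, requested: list[str], today: date) -> dict:
    """Answer a verifier's request by releasing only the requested data
    elements. An 'age_over_NN' request is answered with a boolean computed
    from birth_date; the birth date itself is never released."""
    response = {}
    for element in requested:
        if element.startswith("age_over_"):
            threshold = int(element.removeprefix("age_over_"))
            born = credential["birth_date"]
            age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
            response[element] = age >= threshold
        elif element in credential:
            response[element] = credential[element]
    return response

# The bar's reader asks only for the age attestation:
print(present(credential, ["age_over_21"], date(2024, 9, 25)))
# The verifier learns a single boolean, not the name, address, or birth date.
```

The privacy question the article raises is not whether this mechanism works, but how often and by whom such requests will be made once they become frictionless.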
But there is a high privacy price to pay for this marginal privacy benefit. mDLs will not just swap in as a one-to-one replacement for your physical ID. Rather, they are likely to expand the scenarios where businesses and government agencies demand that you prove your identity before entering physical and digital spaces or accessing goods and services. Our personal data will be passed along more frequently than ever, with identity verified online multiple times per day or week with multiple parties. This privacy menace far surpasses the minor danger of a bar bouncer collecting, storing, and using your name and address after glancing at your birth date on your plastic ID for five seconds in passing. Even in cases where bars do scan IDs, we are being asked to trade one limited privacy risk for a far more expansive one: digital ID presentation across the internet. While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used with the TSA. In contracts and agreements we have seen with Apple, the company largely controls the marketing and visibility of mDLs. In another push to boost adoption, Android allows you to create a digital passport ID for domestic travel. This development must be seen through the lens of the federal government's 20-year effort to impose "REAL ID" on state-issued identification systems. REAL ID is an objective failure of a program that pushes for regimes that strip privacy from everyone and further marginalize undocumented people. While federal-level use of digital identity is so far limited to TSA, this use can easily expand. TSA wants to propose rules for mDLs in an attempt (the agency says) to "allow innovation" by states while it contemplates uniform rules for everyone. This is concerning, as the scope of TSA—and its parent agency, the Department of Homeland Security—is very wide. Whatever they decide now for digital ID will have implications way beyond the airport.
Equity First > Digital First

We are seeing new digital ID plans being discussed for the most vulnerable among us. Digital ID must be designed for equity (as well as for privacy). With Google’s Digital Credential API and Apple’s IP&V Platform (as named in its agreement with California), these two major companies are going to be in direct competition with current age verification platforms. This alarmingly sets up the capacity for anyone to ask for your ID online, and it can spread beyond content that is commonly age-gated today. Different states and countries may try to label additional content as harmful to children (such as LGBTQIA content or abortion resources) and require online platforms to conduct age verification to access that content. For many of us, opening a bank account is routine, and digital ID sounds like a way to make it more convenient. But millions of working-class people are currently unbanked, and digital IDs won’t solve their problems. Many people can’t get simple services and documentation for a variety of reasons that come with having a low income, and millions of people in our country don’t have identification at all. We shouldn’t apply regimes that use age verification technology against people who often face barriers to compliance, such as license suspension for unpaid fines unrelated to traffic safety. A new technical system with far less friction for verifying age will, without regulation that accounts for nuanced lives, lead to an expedited, automated “NO” from digital verification. Another issue is that many people lack a smartphone or an up-to-date smartphone, or share a smartphone with their family. Many proponents of “digital first” solutions assume a fixed ratio of one smartphone per person. While this assumption may work for some, others will need a human to talk to, on the phone or face-to-face, to access vital services.
In the case of an mDL, you still need to upload your physical ID to obtain an mDL in the first place, and you still need to carry a physical ID on your person. Digital ID cannot bypass the problem that some people don’t have physical ID at all. Failure to account for this is a rush to perceived solutions over real problems.

Inevitable?

No, digital identity shouldn’t be inevitable for everyone: many people don’t want it or lack the resources to get it. The dangers posed by digital identity don’t have to be inevitable, either—if states legislate protections for people. It would also be great (for the nth time) to have a comprehensive federal privacy law. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement. At the very minimum, law enforcement should be prohibited from using consent for mDL scans to conduct illegal searches. Florida completely removed its mDL app from app stores and asked residents who had it to delete it; it is good that the state did not simply keep the app around for the sake of pushing digital ID without addressing a clear issue. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims for public assistance as to facilitate them. That’s why legal protections are at least as important as the digital IDs themselves. Lawmakers should ensure better access for people with or without a digital ID.
- Calls to Scrap Jordan's Cybercrime Law Echo Calls to Reject Cybercrime Treaty by Jillian C. York on September 25, 2024 at 5:44 pm
In a number of countries around the world, communities—and particularly those that are already vulnerable—are threatened by expansive cybercrime and surveillance legislation. One of those countries is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government. We’ve criticized this law before, noting how it was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. It broadly criminalizes online content labeled as “pornographic” or deemed to “expose public morals,” and prohibits the use of Virtual Private Networks (VPNs) and other proxies. Now, EFF has joined thirteen digital rights and free expression organizations in calling once again for Jordan to scrap the controversial cybercrime law. The open letter, organized by Article 19, calls upon Jordanian authorities to cease use of the cybercrime law to target and punish dissenting voices and stop the crackdown on freedom of expression. The letter also reads: “We also urge the new Parliament to repeal or substantially amend the Cybercrime Law and any other laws that violate the right to freedom of expression and bring them in line with international human rights law.” Jordan’s law is a troubling example of how overbroad cybercrime legislation can be misused to target marginalized communities and suppress dissent. This is the type of legislation that the U.N. General Assembly has expressed concern about, including in 2019 and 2021, when it warned against cybercrime laws being used to target human rights defenders. These concerns are echoed by years of reports from U.N. human rights experts on how abusive cybercrime laws facilitate human rights abuses. The U.N. Cybercrime Treaty also poses serious threats to free expression. Far from protecting against cybercrime, this treaty risks becoming a vehicle for repressive cross-border surveillance practices. 
By allowing broad international cooperation in surveillance for any crime deemed “serious” under national laws—defined as offenses punishable by at least four years of imprisonment—and without robust mandatory safeguards or detailed operational requirements to ensure “no suppression” of expression, the treaty risks being exploited by governments to suppress dissent and target marginalized communities, as seen with Jordan’s overbroad 2023 cybercrime law. The fate of the U.N. Cybercrime Treaty now lies in the hands of member states, who will decide on its adoption later this year.
- Patient Rights and Consumer Groups Join EFF In Opposing Two Extreme Patent Bills by Joe Mullin on September 25, 2024 at 4:54 pm
Update 9/26/24: The hearing and scheduled committee vote on PERA and PREVAIL were canceled. Supporters can continue to register their opposition via our action, as these bills may still be scheduled for a vote later in 2024. The U.S. Senate Judiciary Committee is set to vote this Thursday on two bills that could significantly empower patent trolls. The Patent Eligibility Restoration Act (PERA) would bring back many of the abstract computer patents that have been barred for the past 10 years under Supreme Court precedent. Meanwhile, the PREVAIL Act would severely limit how the public can challenge wrongly granted patents at the patent office. Take Action Tell Congress: No New Bills For Patent Trolls EFF has sent letters to the Senate Judiciary Committee opposing both of these bills. The letters are co-signed by a wide variety of civil society groups, think tanks, startups, and business groups that oppose these misguided bills. Our letter on PERA states: Under PERA, any business method, methods of practicing medicine, legal agreement, media content, or even games and entertainment could be patented so long as the invention requires some use of computers or electronic communications… It is hard to overstate just how extreme and far-reaching such a change would be. If enacted, PERA could revive some of the most problematic patents used by patent trolls, including: The Alice Corp. patent, which claimed the idea of clearing financial transactions through a third party via a computer. The Ameranth patent, which covered the use of mobile devices to order food at restaurants and was used to sue over 100 restaurants, hotels, and fast-food chains merely for using off-the-shelf technology. A patent owned by Hawk Technology Systems LLC, which claimed generic video technology to view surveillance videos and was used to sue over 200 hospitals, schools, charities, grocery stores, and other businesses.
A separate letter signed by 17 professors of IP law cautions that PERA would cloud the legal landscape on patent eligibility, which the Supreme Court clarified in its 10-year-old Alice v. CLS Bank case. “PERA would overturn centuries of jurisprudence that prevents patent law from effectively restricting the public domain of science, nature, and abstract ideas that benefits all of society,” the professors write. The U.S. Public Interest Research Group also opposes both PERA and PREVAIL, and points out in its opposition letter that patent application misuse has improperly prevented generic drugs from coming onto the market, even years after the original patent has expired. “The changes proposed in PERA open the door to patent compounds that exist in nature which nobody invented, but are newly discovered,” the group writes. “This dramatic change could have devastating effects on drug pricing by expanding the universe of items that can have a patent, meaning it will be easier than ever for drug companies to build patent thickets which keep competitors off the market.” Patients’ rights advocacy groups have also weighed in. They argue that PREVAIL “seriously undermines citizens’ ability to promote competition by challenging patents,” while PERA “opens the door to allow an individual or corporation to acquire exclusive rights to aspects of nature and information about our own bodies.” Generic drug makers share these concerns. “PREVAIL will make it more difficult for generic and biosimilar manufacturers to challenge expensive brand-name drug patent thickets and bring lower-cost medicines to patients, and PERA will enable brand-name drug manufacturers to build even larger thickets and charge higher prices,” an industry group stated earlier this month.
We urge the Senate to heed the voices of this broad coalition of civil society groups and businesses opposing these bills. Passing them would create a more unbalanced and easily exploitable patent system. The public interest must come before the loud voices of patent trolls and a few powerful patent holders. Take Action Tell Congress to Reject PERA and PREVAIL Documents: EFF Coalition Letter Opposing PERA EFF Coalition Letter Opposing PREVAIL IP Law Academics Letter Opposing PERA US PIRG Letter Opposing PERA and PREVAIL Generation Patient, I-MAK, R Street, US PIRG Letter Opposing PERA and PREVAIL Association for Accessible Medicines Statement Opposing PERA and PREVAIL
- EFF to Federal Trial Court: Section 230’s Little-Known Third Immunity for User-Empowerment Tools Covers Unfollow Everything 2.0 by Sophia Cope on September 24, 2024 at 9:12 pm
EFF, along with the ACLU of Northern California and the Center for Democracy & Technology, filed an amicus brief in a federal trial court in California in support of a college professor who fears being sued by Meta for developing a tool that allows Facebook users to easily clear out their News Feed. Ethan Zuckerman, a professor at the University of Massachusetts Amherst, is in the process of developing Unfollow Everything 2.0, a browser extension that would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed. This type of tool would greatly benefit Facebook users who want more control over their Facebook experience. The unfollowing process is tedious: you must go profile by profile—but automation makes it a breeze. Unfollowing all friends, groups, and pages makes the News Feed blank, which then allows you to curate it by refollowing the people and organizations you want regular updates on. Importantly, unfollowing isn’t the same thing as unfriending—unfollowing takes your friends’ content out of your News Feed, but you’re still connected to them and can proactively navigate to their profiles. As Louis Barclay, the developer of Unfollow Everything 1.0, explained: I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable. Prof.
Zuckerman, with the help of the Knight First Amendment Institute at Columbia University, preemptively sued Meta, asking the court to conclude that he has immunity under Section 230(c)(2)(B), Section 230’s little-known third immunity for developers of user-empowerment tools. In our amicus brief, we explained to the court that Section 230(c)(2)(B) is unique among the immunities of Section 230, and that Section 230’s legislative history supports granting immunity in this case. The other two immunities—Section 230(c)(1) and Section 230(c)(2)(A)—provide direct protection for internet intermediaries that host user-generated content, moderate that content, and incorporate blocking and filtering software into their systems. As we’ve argued many times before, these immunities give legal breathing room to the online platforms we use every day and ensure that those companies continue to operate, to the benefit of all internet users. But it’s Section 230(c)(2)(B) that empowers people to have control over their online experiences outside of corporate or government oversight, by providing immunity to the developers of blocking and filtering tools that users can deploy in conjunction with the online platforms they already use. Our brief further explained that the legislative history of Section 230 shows that Congress clearly intended to provide immunity for user-empowerment tools like Unfollow Everything 2.0. Section 230(b)(3) states, for example, that the statute was meant to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services,” while Section 230(b)(4) states that the statute was intended to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.” Rep. 
Chris Cox, a co-author of Section 230, noted prior to passage that new technology was “quickly becoming available” that would help enable people to “tailor what we see to our own tastes.” Our brief also explained the more specific benefits of Section 230(c)(2)(B). The statute incentivizes the development of a wide variety of user-empowerment tools, from traditional content filtering to more modern social media tailoring. The law also helps people protect their privacy by incentivizing the tools that block methods of unwanted corporate tracking such as advertising cookies, and block stalkerware deployed by malicious actors. We hope the district court will declare that Prof. Zuckerman has Section 230(c)(2)(B) immunity so that he can release Unfollow Everything 2.0 to the benefit of Facebook users who desire more control over how they experience the platform.
- EFF to Supreme Court: Strike Down Texas’ Unconstitutional Age Verification Law by Hudson Hongo on September 23, 2024 at 6:30 pm
New Tech Doesn’t Solve Old Problems With Age-Gating the Internet

WASHINGTON, D.C.—The Electronic Frontier Foundation (EFF), the Woodhull Freedom Foundation, and TechFreedom urged the Supreme Court today to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age. Under HB 1181, signed into law last year, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors. When the Supreme Court reviews a case challenging the law in its next term, its ruling could have major consequences for the freedom of adults to safely and anonymously access protected speech online. “Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment,” said EFF Staff Attorney Lisa Femia. “Applying longstanding Supreme Court precedents, other courts have consistently held that similar age verification laws are unconstitutional. To protect freedom of speech online, the Supreme Court should clearly reaffirm those correct decisions here.” In a flawed ruling last year, the Fifth Circuit Court of Appeals upheld the Texas law, diverging from decades of legal precedent that correctly recognized online ID mandates as imposing greater burdens on our First Amendment rights than in-person age checks. As EFF explains in its friend-of-the-court brief, nothing about HB 1181 or advances in technology has lessened the harms the law’s age verification mandate imposes on adults wishing to exercise their constitutional rights. First, the Texas law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials.
Second, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. “Sexual freedom is a fundamental human right critical to human dignity and liberty,” said Ricci Levy, CEO of the Woodhull Freedom Foundation. “By requiring invasive age verification, this law chills protected speech and violates the rights of consenting adults to access lawful sexual content online.” Today’s friend-of-the-court brief is only the latest entry in EFF’s long history of fighting for freedom of speech online. In 1997, EFF participated as both plaintiff and co-counsel in ACLU v. Reno, the landmark Supreme Court case that established speech on the internet as meriting the highest standard of constitutional protection. And in the last year alone, EFF has urged courts to reject state censorship, throw out a sweeping ban on free expression, and stop the government from making editorial decisions about content on social media. For the brief: https://www.eff.org/document/fsc-v-paxton-eff-amicus-brief For more on HB 1181: https://www.eff.org/deeplinks/2024/05/eff-urges-supreme-court-reject-texas-speech-chilling-age-verification-law Contact: Lisa Femia, Staff Attorney, lfemia@eff.org
- Prison Banned Books Week: Being in Jail Shouldn’t Mean Having Nothing to Read by Will Greenberg on September 19, 2024 at 9:36 pm
Across the United States, nearly every state’s prison system offers some form of tablet access to incarcerated people, many of which boast sizable libraries of eBooks. Knowing this, one might assume that access to books is on the rise for incarcerated folks. Unfortunately, this is not the case. A combination of predatory pricing, woefully inadequate eBook catalogs, and bad policies restricting access to paper literature has exacerbated an already acute book censorship problem in U.S. prison systems. New data collected by the Prison Banned Books Week campaign focuses on the widespread use of tablet devices in prison systems, as well as their pricing structure and libraries of eBooks. Through a combination of interviews with incarcerated people and a nationwide FOIA campaign to uncover the details of these tablet programs, this campaign has found that, despite offering access to tens of thousands of eBooks, prisons’ tablet programs actually provide little in the way of valuable reading material. The tablets themselves are heavily restricted and typically designed by one of just two companies: Securus and ViaPath. The campaign also found that the material these programs do provide may not be accessible to many incarcerated individuals.

Limited, Censored Selections at Unreasonable Prices

Many companies that offer tablets to carceral facilities advertise libraries of several thousand books. But the data reveals that a huge proportion of these books are public domain texts taken directly from Project Gutenberg. While Project Gutenberg is itself laudable for collecting freely accessible eBooks, and its library contains many of the “classics” of the Western literary canon, a massive number of its texts are irrelevant and outdated.
As Shawn Y., an incarcerated interviewee in Pennsylvania, put it, “Books are available for purchase through the Securus systems, but most of the bookworms here [...] find the selection embarrassingly thin, laughable even. [...] We might as well be rummaging the dusty old leftovers in some thrift store or back alley dumpster.” These limitations on eBook selections exacerbate the already widespread censorship of physical reading materials, which is based on a variety of factors: books being deemed “harmful” content, determinations based on the book’s vendor (which, reports indicate, can operate as a ban on publishers), and whether the incarcerated person obtained advance permission from a prison administrator. Such censorial decisionmaking undermines incarcerated individuals’ right to receive information. Some facilities charge $0.99 or more per eBook, despite their often meager, antiquated selections. While this may not seem exorbitant to many people, a recent estimate of the average hourly wage for incarcerated people in the US is $0.63 per hour. And these otherwise free eBooks can often cost much more. Larry, an individual incarcerated in Pennsylvania, explains, “[s]ome of the prices for other books [are] extremely outrageous.” In Larry’s facility, “[s]ome of those tablet prices range over twenty dollars and even higher.” Even those who can afford to rent these eBooks may have to pay for the tablets required to read them, and for some incarcerated individuals these costs can be prohibitive: procurement contracts in some states appear to require incarcerated people to pay upwards of $99 to use them. These costs are a barrier that deprives those in carceral facilities of developing and maintaining a connection with life outside prison walls.
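A back-of-the-envelope calculation puts the figures cited above in terms of labor. The dollar amounts come from the estimates in this piece; the conversion itself is simple arithmetic:

```python
# Back-of-the-envelope labor cost, using the estimates cited above.
hourly_wage = 0.63    # estimated average hourly wage for incarcerated workers
ebook_price = 0.99    # low-end per-eBook price
tablet_price = 99.00  # tablet cost reported in some procurement contracts

hours_per_ebook = ebook_price / hourly_wage
hours_per_tablet = tablet_price / hourly_wage
print(f"{hours_per_ebook:.1f} hours of work per $0.99 eBook")    # ~1.6 hours
print(f"{hours_per_tablet:.0f} hours of work for a $99 tablet")  # ~157 hours
```

In other words, a single "cheap" public-domain eBook costs over an hour and a half of labor at that wage, and the tablet itself costs roughly a month of full-time work.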
Part of a Trend Toward Inadequate Digital Replacements

The trend of eliminating physical books and replacing them with digital copies accessible via tablets is emblematic of a larger shift from physical to digital occurring throughout our carceral system. These digital copies are not adequate substitutes. One of the hallmarks of a tangible physical item is access: someone can open a physical book and read it when, how, and where they want. That’s not the case with the tablet systems prisons are adopting, and worryingly, this trend has also extended to such personal items as incarcerated individuals’ mail. EFF is actively litigating to defend incarcerated individuals’ rights to access and receive tangible reading materials in our ABO Comix lawsuit. There, we—along with the Knight First Amendment Institute and Social Justice Legal Foundation—are fighting a San Mateo County (California) policy that bans those in San Mateo jails from receiving physical mail. Our complaint explains that San Mateo’s policy requires the friends and families of those jailed in its facilities to send their letters to a private company that scans them, destroys the physical copy, and retains the scan in a searchable database—for at least seven years after the intended recipient leaves the jail’s custody. Incarcerated people can only access the digital copies through a limited number of shared tablets and kiosks in common areas within the jails. Just as incarcerated peoples’ reading materials are censored, so is their mail when physical letters are replaced with digital facsimiles. Our complaint details how ripping open, scanning, and retaining mail has impeded the ability of those in San Mateo’s facilities to communicate with their loved ones, as well as their ability to receive educational and religious study materials.
These digital replacements are inadequate both in and of themselves and because the tablets needed to access them are in short supply and often plagued by technical issues. Along with our free expression allegations, our complaint also alleges that the seizing, searching, and sharing of data from and about their letters violates the rights of both senders and recipients against unreasonable searches and seizures. Our ABO Comix litigation is ongoing. We are hopeful that the courts will recognize the free expression and privacy harms to incarcerated individuals and those who communicate with them that come from digitizing physical mail. We are also hopeful, on the occasion of this Prison Banned Books Week, for an end to the censorship of incarcerated individuals’ reading materials: restricting what some of us can read harms us all. Related Cases: A.B.O Comix, et al. v. San Mateo County
- Square Peg, Meet Round Hole: Previously Classified TikTok Briefing Shows Error of Ban by Brendan Gilligan on September 19, 2024 at 8:07 pm
A previously classified transcript reveals Congress knows full well that American TikTok users engage in First Amendment protected speech on the platform and that banning the application is an inadequate way to protect privacy—but it banned TikTok anyway. The government submitted the partially redacted transcript as part of the ongoing litigation over the federal TikTok ban (which the D.C. Circuit just heard arguments about this week). The transcript indicates that members of Congress and law enforcement recognize that Americans are engaging in First Amendment protected speech—the same recognition a federal district court made when it blocked Montana’s TikTok ban from going into effect. They also agreed that adequately protecting Americans’ data requires comprehensive consumer privacy protections. Yet Congress banned TikTok anyway, undermining our rights and failing to protect our privacy.

No Indication of Actual Harm, No New Arguments

The members and officials didn’t make any particularly new points about the dangers of TikTok. Further, they repeatedly characterized their fears as hypothetical. The transcript is replete with references to the possibility of the Chinese government using TikTok to manipulate the content Americans see on the application, including to shape their views on foreign and domestic issues. For example, the official representing the DOJ expressed concern that the public and private data TikTok users generate on the platform is potentially at risk of going to the Chinese government, [and] being used now or in the future by the Chinese government in ways that could be deeply harmful to tens of millions of young people who might want to pursue careers in government, who might want to pursue careers in the human rights field, and who one day could end up at odds with the Chinese Government’s agenda. There is no indication from the unredacted portions of the transcript that this is happening.
This DOJ official went on to express concern “with the narratives that are being consumed on the platform,” the Chinese government’s ability to influence those narratives, and the U.S. government’s preference for “responsible ownership” of the platform through divestiture. At one point, Representative Walberg even suggested that “certain public policy organizations” that oppose the TikTok ban should be investigated for possible ties to ByteDance (the company that owns TikTok). Of course, the right to oppose an ill-conceived ban on a popular platform goes to the very reason the U.S. has a First Amendment.

Americans’ Speech and Privacy Rights Deserved More

Rather than grandstanding about investigating opponents of the TikTok ban, Congress should spend its time considering the privacy and free speech arguments of those opponents. Judging by the (redacted) transcript, the committee failed to undertake that review here. First, the First Amendment rightly subjects bans like this one on TikTok to extraordinarily exacting judicial scrutiny. That is true even for foreign propaganda, which Americans have a well-established First Amendment right to receive. And it’s ironic for the DOJ to argue that banning an application people use for self-expression—a human right—is necessary to protect their ability to advance human rights. Second, if Congress wants to stop the Chinese government from potentially acquiring data about social media users, it should pass comprehensive consumer privacy legislation that regulates how all social media companies can collect, process, store, and sell Americans’ data. Otherwise, foreign governments and adversaries will still be able to acquire Americans’ data by stealing it, or by using a straw purchaser to buy it.
It’s especially jarring to read that a foreign government’s potential collection of data supposedly justifies banning an application, given Congress’s recent renewal of an authority—Section 702 of the Foreign Intelligence Surveillance Act—under which the U.S. government actually collects massive amounts of Americans’ communications—and which the FBI immediately directed its agents to abuse (yet again). EFF will continue fighting for TikTok users’ First Amendment rights to express themselves and to receive information on the platform. We will also continue urging Congress to drop these square-peg, round-hole approaches to Americans’ privacy and online expression and pass comprehensive privacy legislation that offers Americans genuine protection from the invasive ways any company uses data. While Congress did not fully consider the First Amendment and privacy interests of TikTok users, we hope the federal courts will.
- Strong End-to-End Encryption Comes to Discord Calls by Thorin Klosowski on September 19, 2024 at 6:25 pm
We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications on those apps. This is a strong step forward, and Discord can do even more to protect its users’ communications. End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those, since alongside private and group text, video, and audio chats, it also encompasses large-scale public channels on individual servers operated by Discord. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not. When a call is end-to-end encrypted, you’ll see a green lock icon. While it’s not required to use the service, Discord also offers an optional way to verify that a call’s encryption is not being tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code” and send it to everyone else on the line—preferably in a different chat app, like Signal—to confirm no one is compromising participants’ use of end-to-end encryption. This helps ensure that no one is impersonating a participant or listening in on the conversation. By default, you have to do this every time you initiate a call if you wish to verify the communication has strong security. There is an option to enable persistent verification keys, which means your chat partners only have to verify you once per device you own (e.g., if you sometimes call from a phone and sometimes from a computer, they’ll want to verify each).
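Out-of-band fingerprint comparison of this kind is common across end-to-end encrypted apps (Signal’s “safety numbers” work on the same principle). The sketch below shows the general idea only; Discord’s actual Voice Privacy Code derivation is defined in the DAVE whitepaper, and the function name, key format, and code length here are invented for illustration:

```python
import hashlib

def verification_code(participant_keys: list[bytes], digits: int = 20) -> str:
    """Derive a short, human-comparable code from all participants' public
    keys. Hypothetical sketch -- not Discord's actual derivation."""
    h = hashlib.sha256()
    for key in sorted(participant_keys):  # sort so key order doesn't matter
        h.update(key)
    number = int.from_bytes(h.digest(), "big")
    code = str(number).zfill(digits)[:digits]
    # Group into blocks of five digits for easy reading aloud
    return " ".join(code[i:i + 5] for i in range(0, digits, 5))

alice, bob = b"alice-public-key", b"bob-public-key"
# Both sides compute the same code regardless of ordering...
assert verification_code([alice, bob]) == verification_code([bob, alice])
# ...and a man-in-the-middle substituting a key changes the code.
assert verification_code([alice, bob]) != verification_code([alice, b"mallory"])
```

Because the code is a function of every participant’s key material, an attacker who silently swaps in their own key cannot produce a matching code, which is why comparing it over a separate channel catches tampering.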
Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, and reliably discovered in a secure way by conversation partners, is no trivial task. Other apps, such as Signal, require some manual user interaction to ensure that key material is shared across multiple devices securely. Discord has chosen to avoid this process for the sake of usability, so that even if you do choose to enable persistent verification keys, the keys on separate devices you own will be different.

While this is an understandable trade-off, we hope Discord takes an extra step to allow users with heightened security concerns to share their persistent keys across devices. For the sake of usability, it could by default generate separate keys for each device while making sharing keys across them an extra step. This would avoid the associated risk of your conversation partners seeing that you’re using the same device across multiple calls. We believe making the use of persistent keys easier and cross-device will make things safer for users as well: they will only have to verify the key for their conversation partners once, instead of on every call they make.

Discord has carried out the protocol design and implementation of DAVE in a commendably transparent way: publishing the protocol whitepaper and the open-source library, commissioning an audit from well-regarded outside researchers, and expanding its bug-bounty program to reward security researchers who report vulnerabilities in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud the approach. But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend its end-to-end encryption offerings to private messages or group chats.
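The per-device trade-off described above can be made concrete with a small sketch. This is a toy model, not Discord’s implementation: each device simply generates its own independent key, so a conversation partner who verified your phone’s fingerprint still has to verify your laptop separately:

```python
import hashlib
import secrets

class Device:
    """Toy model of per-device persistent verification keys."""

    def __init__(self, name: str):
        self.name = name
        # Stand-in for a real signing keypair; generated fresh per device,
        # never shared between devices (matching the behavior described).
        self.key = secrets.token_bytes(32)

    def fingerprint(self) -> str:
        # The short fingerprint a partner would verify out-of-band.
        return hashlib.sha256(self.key).hexdigest()[:16]

phone = Device("phone")
laptop = Device("laptop")

# Independent keys per device: verifying one device tells a partner
# nothing about the other, so each must be verified separately.
assert phone.fingerprint() != laptop.fingerprint()
```

Cross-device key sharing (as Signal does, with extra user interaction) would collapse these into one fingerprint per person, at the cost of a more involved setup flow.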
In a statement to TechCrunch, Discord reiterated that it has no further plans to roll out encryption in direct messages or group chats. End-to-end encrypted video and audio chats are a good step forward—one that too many messaging apps lack. But because protection of our text conversations is important, and because partial encryption is always confusing for users, Discord should move to enable end-to-end encryption on private text chats as well. This is not an easy task, but it’s one worth doing.
- Canada’s Leaders Must Reject Overbroad Age Verification Bill by Paige Collings on September 19, 2024 at 5:14 pm
Canadian lawmakers are considering a bill, S-210, that’s meant to benefit children, but would sacrifice the security, privacy, and free speech of all internet users. First introduced in 2023, S-210 seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. Typically, these services require people to show government-issued ID to get on the internet. According to the bill’s authors, this is needed to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.” The motivation is laudable, but requiring people of all ages to show ID to get online won’t help women or young people. If S-210 isn’t stopped before it reaches the third reading and final vote in the House of Commons, Canadians will be forced into a repressive and unworkable age verification regime.

Flawed Definitions Would Encompass Nearly the Entire Internet

The bill’s scope is vast. S-210 creates legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who merely transmit it—knowingly or not. Internet infrastructure intermediaries, which often do not know the type of content they are transmitting, would also be liable, as would all services from social media sites to search engines and messaging platforms. Each would be required to implement age verification and prevent access by any user whose age is not verified, unless it can claim the material serves a “legitimate purpose related to science, medicine, education or the arts.” Basic internet infrastructure shouldn’t be regulating content at all, but S-210 doesn’t make that distinction.
When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. History shows that when platforms seek to ban sexual content, over-censorship is inevitable. Rules banning sexual content usually hurt marginalized communities, and the groups that serve them, the most. That includes organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom.

Promoting Dangerous Age Verification Methods

S-210 notes that “online age-verification technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights.” This premise is just wrong. There is currently no technology that can verify users’ ages while protecting their privacy. The bill does not specify what technology must be used, leaving that for subsequent regulation. But the age verification systems that exist today are deeply problematic. It is far too likely that any such regulation would embrace tools that retain sensitive user data, exposing it to potential sale or harms like hacks, and that lack guardrails preventing companies from doing whatever they like with this data once collected.

We’ve said it before: age verification systems are surveillance systems. Users have no way to be certain that the data they’re handing over will not be retained and used in unexpected ways, or even shared with unknown third parties. The bill asks companies to maintain user privacy and destroy any personal data collected, but doesn’t back up that requirement with comprehensive penalties. That’s not good enough. Companies responsible for storing or processing sensitive documents like drivers’ licenses can suffer data breaches, potentially exposing not only personal data about users, but also information about the sites they visit.
Finally, age-verification systems that depend on government-issued identification exclude altogether Canadians who do not have that kind of ID. Fundamentally, S-210 leads to the end of anonymous access to the web. Instead, Canadian internet access would become a series of checkpoints that many people simply would not pass, either by choice or because the rules are too onerous.

Dangers for Everyone, But This Can Be Stopped

Canada’s S-210 is part of a wave of proposals worldwide seeking to gate access to sexual content online. Many of the proposals have similar flaws, and Canada’s S-210 is among the worst. Both Australia and France have paused the rollout of age verification systems, because both countries found that these systems could not sufficiently protect individuals’ data or address the issues of online harms alone. Canada should take note of these concerns. It’s not too late for Canadian lawmakers to drop S-210. That’s what has to be done to protect the future of a free Canadian internet. At the very least, the bill’s broad scope must be significantly narrowed to protect user rights.
- Human Rights Claims Against Cisco Can Move Forward (Again) by Cindy Cohn on September 18, 2024 at 10:04 pm
Google and Amazon – You Should Take Note of Your Own Aiding and Abetting Risk

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not. Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems used to facilitate human rights abuses.

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority. The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death. We did a deep-dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again. Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow-moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify their decision to provide tools and services that are later used to abuse people.
For instance, a company only needs to know that its assistance is helping in human rights abuses; it does not need to have a purpose to facilitate abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses. EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco. And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group. The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 
The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted. Second, given the current ongoing abuses in Gaza, we renew our call for Google and Amazon to first come clean about their involvement in human rights abuses in Gaza and, where necessary, make appropriate changes to avoid assisting in future abuses. Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments – we’ll be watching you, too. Related Cases: Doe I v. Cisco
- Senate Vote Could Give Helping Hand To Patent Trolls by Joe Mullin on September 18, 2024 at 4:33 pm
Update 9/26/24: The hearing and scheduled committee vote on PERA and PREVAIL was canceled. Supporters can continue to register their opposition via our action, as these bills may still be scheduled for a vote later in 2024.

Update 9/20/24: The Senate vote scheduled for Thursday, Sep. 19 has been rescheduled for Thursday, Sep. 26.

A patent on crowdfunding. A patent on tracking packages. A patent on photo contests. A patent on watching an ad online. A patent on computer bingo. A patent on upselling. These are just a few of the patents used to harass software developers and small companies in recent years. Thankfully, they were tossed out by U.S. courts, thanks to the landmark 2014 Supreme Court decision in Alice v. CLS Bank. The Alice ruling has effectively ended hundreds of lawsuits in which defendants were improperly sued for basic computer use.

Take Action: Tell Congress: No New Bills For Patent Trolls

Now, patent trolls and a few huge corporate patent-holders are upset about losing their bogus patents. They are lobbying Congress to change the rules and reverse the Alice decision entirely. Shockingly, they’ve convinced the Senate Judiciary Committee to vote this Thursday on two of the most damaging patent bills we’ve ever seen. The Patent Eligibility Restoration Act (PERA, S. 2140) would overturn Alice, enabling patent trolls to extort small business owners and even hobbyists, just for using common software systems to express themselves or run their businesses. PERA would also overturn a 2013 Supreme Court case that prevents most kinds of patenting of human genes. Meanwhile, the PREVAIL Act (S. 2220) seeks to severely limit how the public can challenge bad patents at the patent office. Challenges like these are one of the most effective ways to throw out patents that never should have been granted in the first place. This week, we need to show Congress that everyday users and creators won’t stand for laws that would expand avenues for patent abuse. The U.S. Senate must not pass new legislation that allows the worst patent scams to expand and flourish.

Take Action: Tell Congress: No New Bills For Patent Trolls
- Unveiling Venezuela’s Repression: A Legacy of State Surveillance and Control by Guest Author on September 18, 2024 at 12:35 pm
This post was written by Laura Vidal, PhD, an independent researcher in learning and digital rights. This is part two of a series; part one, on surveillance and control around the July election, is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control. Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by tying access to essential services to affiliation with the governing party.

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.” The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people.
The government’s rhetoric, which labels dissent as “cyberfascism” or “terrorism,” is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent. Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize. This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty of perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government’s authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.
The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, characterizing it as state terrorism. To comprehend the full scope of this crisis, it is paramount to understand that this repression is not just a series of isolated actions but a comprehensive and systematic effort that has been building for over 15 years. It combines degraded infrastructure (essential services kept barely functional), the blocking of independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, the civic space in Venezuela faces its greatest challenge yet. The fact that this repression occurs amid widespread human rights violations suggests that the government’s next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community.

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age. It also serves as a critical learning opportunity for the global community, highlighting the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other’s repressive strategies.
At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era. An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.
- The New U.S. House Version of KOSA Doesn’t Fix Its Biggest Problems by Jason Kelley on September 18, 2024 at 12:26 am
An amended version of the Kids Online Safety Act (KOSA) being considered this week in the U.S. House is still a dangerous online censorship bill that contains many of the same fundamental problems as the version the Senate passed in July. The changes to the House bill do not alter the fact that KOSA will coerce the largest social media platforms into blocking or filtering a variety of entirely legal content, and subject a large portion of users to privacy-invasive age verification. They do bring KOSA closer to becoming law, and put us one step closer to giving government officials dangerous and unconstitutional power over what types of content can be shared and read online.

TAKE ACTION: TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Reframing the Duty of Care Does Not Change Its Dangerous Outcomes

For years now, digital rights groups, LGBTQ+ organizations, and many others have been critical of KOSA’s “duty of care.” While the language has been modified slightly, this version of KOSA still creates a duty of care and a negligence standard of liability that will allow the Federal Trade Commission to sue apps and websites that don’t take measures to “prevent and mitigate” various harms to minors—harms that are vague enough to chill a significant amount of protected speech. The biggest shift to the duty of care is in the description of the harms that platforms must prevent and mitigate.
Among other harms, the previous version of KOSA included anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors, “consistent with evidence-informed medical information.” The new version drops this section and replaces it with the "promotion of inherently dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.” The bill defines “serious emotional disturbance” as “the presence of a diagnosable mental, behavioral, or emotional disorder in the past year, which resulted in functional impairment that substantially interferes with or limits the minor’s role or functioning in family, school, or community activities.” Despite the new language, this provision is still broad and vague enough that no platform will have any clear indication about what they must do regarding any given piece of content. Its updated list of harms could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. It is still likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech—and important resources—about topics like addiction, eating disorders, and bullying. And it will stifle minors who are trying to find their own supportive communities online. Kids will, of course, still be able to find harmful content, but the largest platforms—where the most kids are—will face increased liability for letting any discussion about these topics occur. It will be harder for suicide prevention messages to reach kids experiencing acute crises, harder for young people to find sexual health information and gender identity support, and generally, harder for adults who don’t want to risk the privacy- and security-invasion of age verification technology to access that content as well. 
As in the past version, enforcement of KOSA is left up to the FTC, and, to some extent, state attorneys general around the country. Whether you agree with them or not on what encompasses a “diagnosable mental, behavioral, or emotional disorder,” the fact remains that KOSA’s flaws are as much about the threat of liability as about the actual enforcement. As long as these definitions remain vague enough that platforms have no clear guidance on what is likely to cross the line, there will be censorship—even if the officials never actually take action. The previous House version of the bill stated that “A high impact online company shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors.” The new version slightly modifies this to say that such a company “shall create and implement its design features to reasonably prevent and mitigate the following harms to minors.” These language changes are superficial; this section still imposes a standard that requires platforms to filter user-generated content and imposes liability if they fail to do so “reasonably.”

House KOSA Edges Closer to Harmony with Senate Version

Some of the latest amendments to the House version of KOSA bring it closer in line with the Senate version which passed a few months ago (not that this improves the bill). This version of KOSA lowers the bar, set by the previous House version, that determines which companies would be impacted by KOSA’s duty of care. While the Senate version of KOSA does not have such a limitation (and would affect small and large companies alike), the previous House version created a series of tiers for differently-sized companies. This version has the same set of tiers, but lowers the highest bar from companies earning $2.5 billion in annual revenue, or having 150 million annual users, to companies earning $1 billion in annual revenue, or having 100 million annual users.
This House version also includes the “filter bubble” portion of KOSA which was added to the Senate version a year ago. This requires any “public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user-generated content” to provide users with an algorithm that uses a limited set of information, such as search terms and geolocation, but not search history (for example). This section of KOSA is meant to push users towards a chronological feed. As we’ve said before, there’s nothing wrong with online information being presented chronologically for those who want it. But just as we wouldn’t let politicians rearrange a newspaper in a particular order, we shouldn’t let them rearrange blogs or other websites. It’s a heavy-handed move to stifle the editorial independence of web publishers.

Lastly, the House authors have added language stating that the bill would have no actual effect on how platforms or courts interpret the law, but which does point directly to the concerns we’ve raised. It states that “a government entity may not enforce this title or a regulation promulgated under this title based upon a specific viewpoint of any speech, expression, or information protected by the First Amendment to the Constitution that may be made available to a user as a result of the operation of a design feature.” Yet KOSA does just that: the FTC will have the power to force platforms to moderate or block certain types of content based entirely on the views described therein.

TAKE ACTION: TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Remains an Unconstitutional Censorship Bill

KOSA remains woefully underinclusive—for example, Google’s search results will not be impacted regardless of what they show young people, but Instagram is on the hook for a broad amount of content—while making it harder for young people in distress to find emotional, mental, and sexual health support.
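The “filter bubble” provision discussed above essentially mandates offering a feed ordered by recency rather than by an engagement model. A toy sketch of the difference, using entirely hypothetical post data:

```python
from datetime import datetime

# Hypothetical posts: each has a timestamp and an engagement score.
posts = [
    {"id": 1, "posted": datetime(2024, 9, 1), "engagement": 900},
    {"id": 2, "posted": datetime(2024, 9, 3), "engagement": 10},
    {"id": 3, "posted": datetime(2024, 9, 2), "engagement": 500},
]

# Engagement-ranked feed: the personalized ordering platforms typically serve.
ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

# Chronological feed: ordered by recency alone, the kind of alternative
# the provision pushes platforms to offer.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

assert [p["id"] for p in ranked] == [1, 3, 2]
assert [p["id"] for p in chronological] == [2, 3, 1]
```

The two orderings can differ completely even over the same three posts, which is the substance of the mandate: same content, different editorial arrangement.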
This version does only one important thing—it moves KOSA closer to passing in both houses of Congress, and puts us one step closer to enacting an online censorship regime that will hurt free speech and privacy for everyone.
- KOSA’s Online Censorship Threatens Abortion Access by Lisa Femia on September 17, 2024 at 6:32 pm
For those living in one of the 22 states where abortion is banned or heavily restricted, the internet can be a lifeline. It has essential information on where and how to access care, links to abortion funds, and guidance on ways to navigate potential legal risks. Activists use the internet to organize and build community, and reproductive healthcare organizations rely on it to provide valuable information and connect with people in need. But both Republicans and Democrats in Congress are now actively pushing for federal legislation that could cut youth off from these vital healthcare resources and stifle online abortion information for adults and kids alike.

This summer, the U.S. Senate passed the Kids Online Safety Act (KOSA), a bill that would grant the federal government and state attorneys general the power to restrict online speech they find objectionable in a misguided and ineffective attempt to protect kids online. A number of organizations have already sounded the alarm on KOSA’s danger to online LGBTQ+ content, but the hazards of the bill don’t stop there. KOSA puts abortion seekers at risk. It could easily lead to censorship of vital and potentially life-saving information about sexual and reproductive healthcare. And by age-gating the internet, it could result in websites requiring users to submit identification, undermining the ability to remain anonymous while searching for abortion information online.

TAKE ACTION: TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Abortion Information Censored

As EFF has repeatedly warned, KOSA will stifle online speech. It gives government officials the dangerous and unconstitutional power to decide what types of content can be shared and read online.
Under one of its key censorship provisions, KOSA would create what the bill calls a “duty of care.” This provision would require websites, apps, and online platforms to comply with a vague and overbroad mandate to prevent and mitigate “harm to minors” in all their “design features.” KOSA contains a long list of harms that websites have a duty to protect against, including emotional disturbance, acts that lead to bodily harm, and online harassment, among others. The list of harms is open for interpretation. And many of the harms are so subjective that government officials could claim any number of issues fit the bill. This opens the door for political weaponization of KOSA—including by anti-abortion officials. KOSA is ambiguous enough to allow officials to easily argue that its mandate includes sexual and reproductive healthcare information. They could, for example, claim that abortion information causes emotional disturbance or death, or could lead to “sexual exploitation and abuse.” This is especially concerning given the anti-abortion movement’s long history of justifying abortion restrictions by claiming that abortions cause mental health issues, including depression and self-harm (despite credible research to the contrary). As a result, websites could be forced to filter and block such content for minors, despite the fact that minors can get pregnant and are part of the demographic most likely to get their news and information from social media platforms. By blocking this information, KOSA could cut off young people’s access to potentially life-saving sexual and reproductive health resources. So much for protecting kids. KOSA’s expansive and vague censorship requirements will also affect adults. To avoid liability and the cost and hassle of litigation, websites and platforms are likely to over-censor potentially covered content, even if that content is otherwise legal. 
This could lead to the removal of important reproductive health information for all internet users, adults included.

A Tool For Anti-Choice Officials

It’s important to remember that KOSA’s “duty of care” provision would be defined and enforced by the presidential administration in charge, including any future administration that is hostile to reproductive rights. The bill grants the Federal Trade Commission, majority-controlled by the President’s party, the power to develop guidelines and to investigate or sue any websites or platforms that don’t comply. It also grants the Executive Branch the power to form a Kids Online Safety Council to further identify “emerging or current risks of harms to minors associated with online platforms.”

Meanwhile, KOSA gives state attorneys general, including those in abortion-restrictive states, the power to sue under its other provisions, many of which intersect with the “duty of care.” As EFF has argued, this gives state officials a back door to target and censor content they don’t like, including abortion information.

It’s also directly foreseeable that anti-abortion officials would use KOSA in this way. One of the bill’s co-sponsors, Senator Marsha Blackburn (R-TN), has touted KOSA as a way to censor online content on social issues, claiming that children are being “indoctrinated” online. The Heritage Foundation, a politically powerful organization that espouses anti-choice views, also has its eyes on KOSA. It has been lobbying lawmakers to pass the bill and suggesting that a future administration could fill the Kids Online Safety Council with “representatives who share pro-life values.”

This all comes at a time when efforts to censor abortion information online are at a fever pitch. In abortion-restrictive states, officials have already been eagerly attempting to erase abortion from the internet.
Lawmakers in both South Carolina and Texas have introduced bills to censor online abortion information, though neither effort has succeeded so far. The National Right to Life Committee has also created a model abortion law aimed at restricting abortion rights in a variety of ways, including digital access to information.

KOSA Hurts Anonymity Online

KOSA will also push large and important parts of the internet behind age gates. In order to determine which users are minors, online services will likely impose age verification systems, which require everyone—both adults and minors—to verify their age by providing identifying information, oftentimes including government-issued ID or other personal records. This is deeply problematic for maintaining access to reproductive care.

Age verification undermines our First Amendment right to remain anonymous online by requiring users to confirm their identity before accessing webpages and information. It would chill users who do not wish to share their identity from accessing or sharing online abortion resources, and put others’ identities at increased risk of exposure.

In a post-Roe United States, in which states are increasingly banning, restricting, and prosecuting abortions, the ability to anonymously seek and share abortion information online is more important than ever. For people living in abortion-restrictive states, searching for and sharing abortion information online can put you at risk. There have been multiple instances of law enforcement agencies using digital evidence, including internet history, in abortion-related criminal cases. We’ve also seen an increase in online harassment and doxxing of healthcare professionals, even in more abortion-protective states. Because of this, many organizations, including EFF, have tried to help people take steps to protect privacy and anonymity online. KOSA would undercut those efforts.
While it’s true that our online ecosystem is already rich with private surveillance, age verification adds another layer of mass data collection. Online ID checks require adults to upload data-rich, government-issued identifying documents to either the website or a third-party verifier, creating a potentially lasting record of their visit to the website. For abortion seekers taking steps to protect their anonymity and avoid this pervasive surveillance, this would make things all the more difficult. Using a public computer or creating anonymous profiles on social networks won’t keep you safe if you have to upload ID to access the information you need.

TAKE ACTION
TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We Can Still Stop KOSA From Passing

KOSA has not yet passed the House, so there’s still time to stop it. But the Senate vote means that the House could bring it up for a vote at any time, and the House has introduced its own similarly flawed version of KOSA. If we want to protect access to abortion information online, we must organize now to stop KOSA from passing.
- Unveiling Venezuela’s Repression: Surveillance and Censorship Following July’s Presidential Election by Guest Author on September 16, 2024 at 7:41 pm
The post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights. This is part one of a series; part two, on the legacy of Venezuela’s state surveillance, is here.

As thousands of Venezuelans took to the streets across the country to demand transparency in July’s election results, the ensuing repression has been described as the harshest to date, with technology playing a central role in facilitating this crackdown. The presidential elections in Venezuela marked the beginning of a new chapter in the country’s ongoing political crisis. Since July 28th, the country’s security forces have carried out a severe backlash against demonstrations, leaving 20 people dead. The results announced by the government, in which they claimed a re-election of Nicolás Maduro, have been strongly contested by political leaders within Venezuela as well as by the Organization of American States (OAS) and governments across the region.

In the days following the election, the opposition—led by candidates Edmundo González Urrutia and María Corina Machado—challenged the National Electoral Council’s (CNE) decision to award the presidency to Maduro. They called for greater transparency in the electoral process, particularly regarding the publication of the original tally sheets, which are essential for confirming or contesting the election results. At present, these original tally sheets remain unpublished.

In response to the lack of official data, the coalition supporting the opposition—known as Comando con Venezuela—presented the tally sheets obtained by opposition witnesses on the night of July 29th. These were made publicly available on an independent portal named “Presidential Results 2024,” accessible to any internet user with a Venezuelan identity card. The government responded with repression, including numerous instances of technology-supported violence.
The surveillance and control apparatus saw intensified use, such as increased deployment of VenApp, a surveillance application originally launched in December 2022 to report failures in public services. Promoted by President Nicolás Maduro as a means for citizens to report on their neighbors, VenApp has been integrated into the broader system of state control, encouraging citizens to report activities deemed suspicious by the state and further entrenching a culture of surveillance. Additional reports indicated the use of drones across various regions of the country.

Increased detentions and searches at airports have particularly impacted human rights defenders, journalists, and other vulnerable groups. This has been compounded by the annulment of passports and other forms of intimidation, creating an environment where many feel trapped and fearful of speaking out.

The combined effect of these tactics is the pervasive sense that it is safer not to stand out. Many NGOs have begun reducing the visibility of their members on social media; some individuals have refused interviews or published documented human rights violations under generic names; and journalists have turned to AI-generated avatars to protect their identities. People are increasingly setting their social media profiles to private and changing their profile photos to hide their faces. Additionally, many are now sending information about what is happening in the country to their networks abroad for fear of retaliation.

Such online activity often leads to arbitrary detentions, with security forces publicly parading those arrested as trophies, using social media materials and tips from informants to justify their actions. The clear intent behind these tactics is to intimidate, and they have been effective in silencing many. This digital repression is often accompanied by offline tactics, such as marking the residences of opposition figures, further entrenching the climate of fear.
However, this digital aspect of repression is far from a sudden development. These recent events are the culmination of years of systematic efforts to control, surveil, and isolate the Venezuelan population—a strategy that draws from both domestic decisions and the playbook of other authoritarian regimes. In response, civil society in Venezuela continues to resist. In August, EFF joined more than 150 organizations and individuals in an open letter highlighting the technology-enabled political violence in Venezuela. Read more about this wider history of Venezuela’s surveillance and civil society resistance in part two of this series, available here.
- The Climate Has a Posse – And So Does Political Satire by Corynne McSherry on September 16, 2024 at 3:36 pm
Greenwashing is a well-worn strategy to try to convince the public that environmentally damaging activities aren’t so damaging after all. It can be very successful precisely because most of us don’t realize it’s happening.

Enter the Yes Men, skilled activists who specialize in elaborate pranks that call attention to corporate tricks and hypocrisy. This time, they’ve created a website—wired-magazine.com—that looks remarkably like Wired.com and includes, front and center, an op-ed from writer (and EFF Special Adviser) Cory Doctorow. The op-ed, titled “Climate change has a posse,” discussed the “power and peril” of a new “greenwashing” emoji designed by renowned artist Shepard Fairey:

“First, we have to ask why in hell Unicode—formerly the Switzerland of tech standards—decided to plant its flag in the greasy battlefield of eco-politics now. After rejecting three previous bids for a climate change emoji, in 2017 and 2022, this one slipped rather suspiciously through the iron gates. Either the wildfire smoke around Unicode’s headquarters in Silicon Valley finally choked a sense of ecological urgency into them, or more likely, the corporate interests that comprise the consortium finally found a way to appease public contempt that was agreeable to their bottom line.”

Notified of the spoof, Doctorow immediately tweeted his joy at being included in a Yes Men hoax. Wired.com was less pleased. An attorney for its corporate parent, Condé Nast (CDN), demanded the Yes Men take the site down and transfer the domain name to CDN, claiming trademark infringement and misappropriation of Doctorow’s identity, with a vague reference to copyright infringement thrown in for good measure.

As we explained in our response on the Yes Men’s behalf, Wired’s heavy-handed reaction was both misguided and disappointing. Their legal claims are baseless given the satirical, noncommercial nature of the site (not to mention Doctorow’s implicit celebration of it after the fact).
And frankly, a publication of Wired’s caliber should be celebrating this form of political speech, not trying to shut it down. Hopefully Wired and CDN will recognize this is not a battle they want or need to fight. If not, EFF stands ready to defend the Yes Men and their critical work.
- NextNav’s Callous Land-Grab to Privatize 900 MHz by Rory Mir on September 13, 2024 at 2:52 pm
The 900 MHz band, a frequency range serving as a commons for all, is now at risk due to NextNav’s brazen attempt to privatize this shared resource. Left by the FCC for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment, this spectrum has become a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, and garage door openers. But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav’s Proposed 'Band-Grab'

NextNav wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum. The country’s airwaves are separated into different sections for different devices to communicate, like dedicated lanes on a highway. This proposal would give NextNav not only their own lane, but also an expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else’s expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup of the Global Positioning System (GPS). This plan raises red flags off the bat. Dropping the “global” from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS. NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers.
If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal. What NextNav did include, however, is its intent to resell their exclusive bandwidth access to mobile 5G networks. This isn’t about national security or innovation; it’s about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in their GPS backup vision, they should look to parts of the spectrum already allocated for 5G.

Stifling the Future of Open Communication

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band. One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail. This is the type of innovation that actually addresses the crises raised by NextNav, and it’s happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.

This isn’t just about a set of frequencies. We need an ecosystem that fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses. Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

FCC Must Protect the Commons

NextNav’s proposal is a direct threat to innovation, public safety, and community empowerment.
While FCC comments on the proposal have closed, replies remain open to the public until September 20th. The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons. Our future communication infrastructure—and the innovation it supports—depends on it. You can read our FCC comments here.
- We Called on the Oversight Board to Stop Censoring “From the River to the Sea” — And They Listened by Paige Collings on September 12, 2024 at 6:57 pm
Earlier this year, the Oversight Board announced a review of three cases involving different pieces of content on Facebook that contained the phrase “From the River to the Sea.” EFF submitted comments to the consultation urging Meta to make individualized moderation decisions on this content rather than imposing a blanket ban, as the phrase can be a historical call for Palestinian liberation and not an incitement to hatred in violation of Meta’s community standards. We’re happy to see that the Oversight Board agreed.

In last week’s decision, the Board found that the three pieces of examined content did not break Meta’s rules on “Hate Speech, Violence and Incitement or Dangerous Organizations and Individuals.” Instead, these uses of the phrase “From the River to the Sea” were found to be an expression of solidarity with Palestinians and not an inherent call for violence, exclusion, or glorification of the designated terrorist group Hamas.

The Oversight Board decision follows Meta’s original action to keep the content online. In each of the three cases, users appealed to Meta to remove the content, but the company’s automated tools dismissed the appeals without human review and kept the content on Facebook. Users subsequently appealed to the Board and called for the content to be removed. The material included a comment that used the hashtag #fromtherivertothesea, a video depicting floating watermelon slices forming the phrases “From the River to the Sea” and “Palestine will be free,” and a reshared post declaring support for the Palestinian people.

As we’ve said many times, content moderation at scale does not work. Nowhere is this truer than on Meta services like Facebook and Instagram, where the vast amount of material posted has incentivized the corporation to rely on flawed automated decision-making tools and inadequate human review.
But this is a rare occasion where Meta’s original decision to carry the content and the Oversight Board’s subsequent decision supporting it uphold our fundamental right to free speech online. The tech giant must continue examining content referring to “From the River to the Sea” on an individualized basis, and we continue to call on Meta to recognize its wider responsibilities to its global user base to ensure people are free to express themselves online without biased or undue censorship and discrimination.
- Stopping the Harms of Automated Decision Making | EFFector 36.12 by Christian Romero on September 11, 2024 at 4:35 pm
Curious about the latest digital rights news? Well, you’re in luck! In our latest newsletter we cover topics including the Department of Homeland Security’s use of AI in the immigration system, the arrest of Telegram’s CEO Pavel Durov, and a victory in California where we helped kill a bill that would have imposed mandatory internet ID checks.

It can feel overwhelming to stay up to date, but we’ve got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube: EFFECTOR 36.12 - Stopping The Harms Of Automated Decision Making

Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you’re not a member yet, join EFF today to help us fight for a brighter digital future.
- Britain Must Call for Release of British-Egyptian Activist and Coder Alaa Abd El Fattah by Karen Gullo on September 11, 2024 at 2:20 pm
As British-Egyptian coder, blogger, and activist Alaa Abd El Fattah enters his fifth year in a maximum security prison outside Cairo, unjustly charged for supporting online free speech and privacy for Egyptians and people across the Middle East and North Africa, we stand with his family and an ever-growing international coalition of supporters in calling for his release.

Over these five years, Alaa has endured beatings and solitary confinement. His family was at times denied visits or any contact with him. He went on a seven-month hunger strike in protest of his incarceration, and his family feared that he might not make it. But global attention on his plight, bolstered by support from British officials in recent years, ultimately led to improved prison conditions and family visitation rights.

But let’s be clear: Egypt’s long-running retaliation against Alaa for his activism is a travesty and an arbitrary use of its draconian, anti-speech laws. He has spent the better part of the last 10 years in prison. He has been investigated and imprisoned under every Egyptian regime that has served in his lifetime. The time is long overdue for him to be freed.

Over 20 years ago, Alaa began using his technical skills to connect coders and technologists in the Middle East to build online communities where people could share opinions and speak freely and privately. The role he played in using technology to amplify the messages of his fellow Egyptians—as well as his own participation in the uprising in Tahrir Square—made him a prominent global voice during the Arab Spring, and a target for the country’s successive repressive regimes, which have used antiterrorism laws to silence critics by throwing them in jail and depriving them of due process and other basic human rights.

Alaa is a symbol for the principle of free speech in a region of the world where speaking out for justice and human rights is dangerous and using the power of technology to build community is criminalized.
But he has also come to symbolize the oppression and cruelty with which the Egyptian government treats those who dare to speak out against authoritarianism and surveillance. Egyptian authorities’ relentless, politically motivated pursuit of Alaa is an egregious display of abusive police power and lack of due process.

He was first arrested and detained in 2006 for participating in a demonstration. He was arrested again in 2011 on charges related to another protest. In 2013 he was arrested and detained on charges of organizing a protest. He was eventually released in 2014, but imprisoned again after a judge found him guilty in absentia. That same year he was released on bail, only to be re-arrested when he went to court to appeal his case. In 2015 he was sentenced to five years in prison and released in 2019. But while on probation he was re-arrested in a massive sweep of activists in Egypt and charged with spreading false news and belonging to a terrorist organization for sharing a Facebook post about human rights violations in prison. He was sentenced in 2021, after being held in pre-trial detention for more than two years, to five years in prison. September 29 will mark five years that he has spent behind bars.

While he’s been in prison, an anthology of his writing, translated into English by anonymous supporters, was published in 2021 as You Have Not Yet Been Defeated, and that December he became a British citizen through his mother, the rights activist and mathematician Laila Soueif. Protesting his conditions, Alaa shaved his head and went on hunger strike beginning in April 2022.
As he neared the third month of his hunger strike, former UK foreign secretary Liz Truss said she was working hard to secure his release. Similarly, then-Prime Minister Rishi Sunak wrote in a letter to Alaa’s sister, Sanaa Seif, that “the government is deeply committed to doing everything we can to resolve Alaa’s case as soon as possible.” David Lammy, then a Member of Parliament and now Britain’s foreign secretary, asked Parliament in November 2022: “What diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family?” Lammy joined Alaa’s family during a sit-in outside of the Foreign Office.

When the UK government’s promises failed to come to fruition, Alaa escalated his hunger strike in the runup to the COP27 gathering. At the same time, a coordinated campaign led by his family and supported by a number of international organizations helped draw global attention to his plight, and ultimately led to improved prison conditions and family visitation rights. But although Alaa’s conditions have improved and his family visitation rights have been secured, he remains wrongfully imprisoned, and his family fears that the Egyptian government has no intention of releasing him.

With Lammy now foreign secretary and a new Labour government in place in the UK, there is renewed hope for Alaa’s release. Keir Starmer, Labour leader and the new prime minister, has voiced his support for Alaa’s release. The new government must make good on its pledge to defend British values and interests, and advocate for the release of its citizen Alaa Abd El Fattah. We encourage British citizens to write to their MP and advocate for his release. His continued detention is indefensible. Egypt should face condemnation around the world until Alaa is freed.
- School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety by Bill Budington on September 6, 2024 at 10:12 pm
Imagine your search terms, keystrokes, private chats, and photographs are being monitored every time they are sent. Millions of students across the country don’t have to imagine this deep surveillance of their most private communications: it’s a reality that comes with their school districts’ decision to install AI-powered monitoring software such as Gaggle and GoGuardian on students’ school-issued machines and accounts. As we demonstrated with our own Red Flag Machine, however, this software flags and blocks websites for spurious reasons and often disproportionately targets disadvantaged, minority, and LGBTQ youth.

The companies making the software claim it’s all done for the sake of student safety: preventing self-harm, suicide, violence, and drug and alcohol abuse. That is a noble goal, and suicide is the second leading cause of death among American youth aged 10 to 14. But no comprehensive or independent studies have shown an increase in student safety linked to the use of this software. Quite the contrary: a recent comprehensive RAND research study shows that such AI monitoring software may cause more harm than good.

That study also found that how to respond to alerts is left to the discretion of the school districts themselves. Due to a lack of resources to deal with mental health, schools often refer these alerts to law enforcement officers who are untrained and ill-equipped to deal with youth mental health crises. When police respond to youth who are having such episodes, the resulting encounters can lead to disastrous results.

So why are schools still using the software, when a congressional investigation found a need for “federal action to protect students’ civil rights, safety, and privacy”? Why are they trading in their students’ privacy for a dubious-at-best marketing claim of safety?
Experts suggest it’s because these supposed technical solutions are easier to implement than the effective social measures that schools often lack the resources to implement. I spoke with Isabelle Barbour, a public health consultant who has experience working with schools to implement mental health supports. She pointed out that there are considerable barriers to families, kids, and youth accessing health care and mental health supports at a community level. There is also a lack of investment in supporting schools to effectively address student health and well-being. This leads to a situation where many students come to school with needs that have been unmet, and these needs impact the ability of students to learn.

Although there are clear and proven measures that work to address the burdens youth face, schools often need support (time, mental health expertise, community partners, and a budget) to implement these measures. Edtech companies market largely unproven plug-and-play products to educational professionals who are stretched thin and seeking a path forward to help kids. Is it any wonder that schools sign contracts which are easy to point to when questioned about what they are doing with regard to the youth mental health epidemic?

One example: in its marketing to school districts, Gaggle claims to have saved 5,790 student lives between 2018 and 2023, according to shaky metrics it designed itself. All the while, it keeps the inner workings of its AI monitoring secret, making it difficult for outsiders to scrutinize and measure its effectiveness.

We Give Gaggle an “F”

Reports of the errors and the inability of the AI flagging to understand context keep popping up. When the Lawrence, Kansas school district signed a $162,000 contract with Gaggle, no one batted an eye: it joined a growing number of school districts (currently ~1,500) nationwide using the software.
Then, school administrators called in nearly an entire class to explain photographs Gaggle’s AI had labeled as “nudity” because the software wouldn’t tell them:

“Yet all students involved maintain that none of their photos had nudity in them. Some were even able to determine which images were deleted by comparing backup storage systems to what remained on their school accounts. Still, the photos were deleted from school accounts, so there is no way to verify what Gaggle detected. Even school administrators can’t see the images it flags.”

Young journalists within the school district raised concerns about how Gaggle’s surveillance of students impacted their privacy and free speech rights. As journalist Max McCoy points out in his article for the Kansas Reflector, “newsgathering is a constitutionally protected activity and those in authority shouldn’t have access to a journalist’s notes, photos and other unpublished work.” Despite having renewed Gaggle’s contract, the district removed the surveillance software from the devices of student journalists. Here, a successful awareness campaign resulted in a tangible win for some of the students affected. While ad-hoc protections for journalists are helpful, more is needed to honor all students’ fundamental right to privacy against this new front of technological invasions.

Tips for Students to Reclaim Their Privacy

Students struggling with the invasiveness of school surveillance AI may find some reprieve by taking measures and forming habits to avoid monitoring. Some considerations:

- Consider any school-issued device a spying tool. Don’t try to hack or remove the monitoring software unless specifically allowed by your school: it may result in significant consequences from your school or law enforcement. Instead, turn school-issued devices completely off when they aren’t being used, especially while at home. This will prevent the devices from activating the camera, microphone, and surveillance software.
If not needed, consider leaving school-issued devices in your school locker: this will avoid depending on these devices to log in to personal accounts, which will keep data from those accounts safe from prying eyes. Don’t log in to personal accounts on a school-issued device (if you can avoid it - we understand sometimes a school-issued device is the only computer some students have access to). Rather, use a personal device for all personal communications and accounts (e.g., email, social media). Maybe your personal phone is the only device you have to log in to social media and chat with friends. That’s okay: keeping separate devices for separate purposes will reduce the risk that your data is leaked or surveilled. Don’t log in to school-controlled accounts or apps on your personal device: that can be monitored, too. Instead, create another email address on a service the school doesn’t control which is just for personal communications. Tell your friends to contact you on that email outside of school. Finally, voice your concern and discomfort with such software being installed on devices you rely on. There are plenty of resources to point to, many linked to in this post, when raising concerns about these technologies. As the young journalists at Lawrence High School have shown, writing about it can be an effective avenue to bring up these issues with school administrators. At the very least, it will send a signal to those in charge that students are uncomfortable trading their right to privacy for an elusive promise of security. Schools Can Do Better to Protect Students Safety and Privacy It’s not only the students who are concerned about AI spying in the classroom and beyond. Parents are often unaware of the spyware deployed on school-issued laptops their children bring home. And when using a privately-owned shared computer logged into a school-issued Google Workspace or Microsoft account, a parent’s web search will be available to the monitoring AI as well. 
New studies have uncovered some of the mental detriments that surveillance causes. Despite this and the array of First Amendment questions these student surveillance technologies raise, schools have rushed to adopt these unproven and invasive technologies. As Barbour put it: “While ballooning class sizes and the elimination of school positions are considerable challenges, we know that a positive school climate helps kids feel safe and supported. This allows kids to talk about what they need with caring adults. Adults can then work with others to identify supports. This type of environment helps not only kids who are suffering with mental health problems, it helps everyone.” We urge schools to focus on creating that environment, rather than subjecting students to ever-increasing scrutiny through school surveillance AI.
- You Really Do Have Some Expectation of Privacy in Public by Matthew Guariglia on September 6, 2024 at 4:01 pm
Being out in the world advocating for privacy often means having to face a chorus of naysayers and nihilists. When we spend time fighting the expansion of Automated License Plate Readers capable of tracking cars as they move, or the growing ubiquity of both public and private surveillance cameras, we often hear a familiar refrain: “you don’t have an expectation of privacy in public.” This is not true. In the United States, you do have some expectation of privacy—even in public—and it’s important to stand up and protect that right.

How is it possible to have an expectation of privacy in public? The answer lies in the rise of increasingly advanced surveillance technology. When you are out in the world, of course you are going to be seen, so your presence will be recorded in one way or another. There’s nothing stopping a person from observing you if they’re standing across the street. If law enforcement has decided to investigate you, they can physically follow you. If you go to the bank or visit a courthouse, it’s reasonable to assume you’ll end up on their individual video security system.

But our ever-growing network of sophisticated surveillance technology has fundamentally transformed what it means to be observed in public. Today’s technology can effortlessly track your location over time, collect sensitive, intimate information about you, and keep a retrospective record of this data that may be stored for months, years, or indefinitely. This data can be collected for any purpose, or even for none at all. And taken in the aggregate, this data can paint a detailed picture of your daily life—a picture that is more cheaply and easily accessed by the government than ever before. Because of this, we’re at risk of exposing more information about ourselves in public than we were in decades past. This, in turn, affects how we think about privacy in public.
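To make the aggregation argument concrete, here is a purely illustrative sketch (all records, place labels, and function names are hypothetical, not drawn from any real ALPR or location system) of how a few timestamped camera sightings, each innocuous on its own, combine into a revealing itinerary:

```python
from datetime import datetime

# Hypothetical ALPR-style records: (timestamp, location label).
# Each sighting is innocuous on its own; together they sketch a whole day.
sightings = [
    (datetime(2024, 9, 6, 18, 15), "residential street, Elm Ave"),
    (datetime(2024, 9, 6, 8, 2), "reproductive health clinic parking lot"),
    (datetime(2024, 9, 6, 12, 30), "protest staging area"),
    (datetime(2024, 9, 6, 9, 45), "pharmacy, three towns away"),
]

def daily_profile(records):
    """Sort scattered timestamped sightings into an ordered itinerary."""
    ordered = sorted(records, key=lambda r: r[0])
    return [f"{ts:%H:%M} -> {place}" for ts, place in ordered]

for stop in daily_profile(sightings):
    print(stop)
```

The point of the sketch is how trivial the aggregation step is: no single camera knows your day, but anyone holding the combined records can reconstruct it with a sort.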
While your expectation of privacy is certainly different in public than it would be in your private home, there is no legal rule that says you lose all expectation of privacy whenever you’re in a public place. To the contrary, the U.S. Supreme Court has emphasized since the 1960s that “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” The Fourth Amendment protects “people, not places.”

U.S. privacy law instead typically asks whether your expectation of privacy is something society considers “reasonable.” This is where mass surveillance comes in. While it is unreasonable to assume that everything you do in public will be kept private from prying eyes, there is a real expectation that when you travel throughout town over the course of a day—running errands, seeing a doctor, going to or from work, attending a protest—the entirety of your movements is not being precisely tracked, stored by a single entity, and freely shared with the government. In other words, you have a reasonable expectation of privacy in at least some of the uniquely sensitive and revealing information collected by surveillance technology, although courts and legislatures are still working out the precise contours of what that includes.

In 2018, the U.S. Supreme Court decided a landmark case on this subject, Carpenter v. United States. In Carpenter, the court recognized that you have a reasonable expectation of privacy in the whole of your physical movements, including your movements in public. It therefore held that the defendant had an expectation of privacy in 127 days’ worth of accumulated historical cell site location information (CSLI). The records that make up CSLI data can provide a comprehensive chronicle of your movements over an extended period of time by using the cell site location information from your phone.
Accessing this information intrudes on your private sphere, and the Fourth Amendment ordinarily requires the government to obtain a warrant in order to do so. Importantly, you retain this expectation of privacy even when those records are collected while you’re in public. In coming to its holding, the Carpenter court wished to preserve “the degree of privacy against government that existed when the Fourth Amendment was adopted.” Historically, we have not expected the government to secretly catalogue and monitor all of our movements over time, even when we travel in public. Allowing the government to access cell site location information contravenes that expectation. The court stressed that these accumulated records reveal not only a person’s particular public movements, but also their “familial, political, professional, religious, and sexual associations.” As Chief Justice John Roberts said in the majority opinion: “Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection. Whether the Government employs its own surveillance technology . . . or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through [cell phone site data]. The location information obtained from Carpenter’s wireless carriers was the product of a search. . . . As with GPS information, the time-stamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” These location records “hold for many Americans the ‘privacies of life.’” . . . A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales. 
Accordingly, when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.” As often happens in the wake of a landmark Supreme Court decision, there has been some confusion among lower courts in trying to determine what other types of data and technology violate our expectation of privacy when we’re in public. There are admittedly still several open questions: How comprehensive must the surveillance be? How long of a time period must it cover? Do we only care about backward-looking, retrospective tracking? Still, one overall principle remains certain: you do have some expectation of privacy in public. If law enforcement or the government wants to know where you’ve been all day long over an extended period of time, that combined information is considered revealing and sensitive enough that police need a warrant for it. We strongly believe the same principle also applies to other forms of surveillance technology, such as automated license plate reader camera networks that capture your car’s movements over time. As more and more integrated surveillance technologies become the norm, we expect courts will expand existing legal decisions to protect this expectation of privacy. It's crucial that we do not simply give up on this right. Your location over time, even if you are traversing public roads and public sidewalks, is revealing. More revealing than many people realize. If you drive from a specific person’s house to a protest, and then back to that house afterward—what can police infer from having those sensitive and chronologically expansive records of your movement? What could people insinuate about you if you went to a doctor’s appointment at a reproductive healthcare clinic and then drove to a pharmacy three towns away from where you live? Scenarios like this involve people driving on public roads or being seen in public, but we also have to take time into consideration. 
Tracking someone’s movements all day is not nearly the same thing as seeing their car drive past a single camera at one time and location. The courts may still be catching up with the technology, but that doesn’t mean it’s a surveillance free-for-all just because you’re in public. The government still faces important restrictions against tracking our movements over time and in public, even if you find yourself out in the world walking past individual security cameras. This is why we do what we do: despite the naysayers, someone has to continue to hold the line and educate the world on how privacy isn’t dead.
- EFF & 140 Other Organizations Call for an End to AI Use in Immigration Decisions by Hannah Zhao on September 5, 2024 at 5:13 pm
EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas urging that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, denied a safe haven in the United States.

The letter is signed by a wide range of organizations, from civil liberties nonprofits to immigrant rights groups, government accountability watchdogs, and civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it is used as part of decision-making regarding immigration enforcement and adjudications. Read the letter here.

The letter highlighted the findings of a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies: U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP).
Despite laws, executive orders, and other directives to establish standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt on the part of the federal government to use AI responsibly and contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.

Even though AI is unproven in its efficacy, DHS has frenetically incorporated it into many of its functions. These products are often the result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased. Yet the evidence begs to differ, or, at best, is mixed. As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which favored male applicants more often because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, which EFF categorically opposes: they often “predict” crimes as more likely to occur in Black and Brown neighborhoods due to the prejudices embedded in the historical crime data used to design the software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.
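The “garbage in, garbage out” dynamic described above can be sketched in a toy example (entirely hypothetical data and scoring logic; real screening models are far more complex, but the failure mode is the same): a scorer trained on skewed historical hires simply reproduces the skew.

```python
from collections import Counter

# Hypothetical, deliberately oversimplified "resume scorer": it rates a
# candidate by how often their activities appeared among past hires.
# If the historical data is skewed, the model learns the skew.
past_hire_activities = ["chess club", "football", "football",
                        "football", "debate team", "chess club"]
token_counts = Counter(past_hire_activities)
total = sum(token_counts.values())

def score(resume_tokens):
    """Score a resume by frequency of its tokens among past hires.

    Tokens never seen in the (biased) history score zero, no matter
    how qualified the candidate actually is.
    """
    return sum(token_counts[t] / total for t in resume_tokens)

# Two candidates: one matches the historical skew, one does not.
print(score(["football"]))   # favored, only because history favored it
print(score(["robotics"]))   # zero: absent from the biased training data
```

The design flaw is not any single line of code; it is that the training data stands in for a judgment it was never a neutral record of, which is exactly the critique the report makes of immigration AI.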
In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operation without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process to determine eligibility for immigration benefits or relief, credibility in asylum applications, and the public safety or national security threat level of an individual. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation. At the same time, there is a disturbing lack of transparency regarding those tools. We urgently need DHS to be held accountable for its adoption of opaque and untested AI programs promulgated by those with a financial interest in the proliferation of the technology. Until DHS adequately addresses the concerns raised in the letter and report, the Department should be prohibited from using AI tools.
- U.S. Federal Employees: Plant Your Flag for Digital Freedoms Today! by Christian Romero on September 4, 2024 at 11:01 pm
Like clockwork, September is here—and so is the Combined Federal Campaign (CFC) pledge period! The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. You can now make a pledge to support EFF’s lawyers, technologists, and activists in the fight for privacy and free speech online. Last year, members of the CFC community raised nearly $34,000 to support digital civil liberties.

Giving to EFF through the CFC is easy! Just head over to GiveCFC.org and use our ID 10437. Once there, click DONATE to give via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can increase your support as well! Scan the QR code below to easily make a pledge, or go to GiveCFC.org!

This year’s campaign theme—GIVE HAPPY—shows that when U.S. federal employees and retirees give together, they make a meaningful difference to countless individuals throughout the world. They ensure that organizations like EFF can continue working towards our goals even during challenging times. With support from those who pledged through the CFC last year, EFF has:

- Provided pro bono legal representation so an independent news outlet could resist an unlawful search warrant and gag order.
- Demonstrated how to spot election misinformation and protect your data from political advertisers.
- Continued to make great strides in passing protections for the right to repair your devices in various states.
- Helped push the Fifth Circuit Court of Appeals to decide that geofence warrants are “categorically” unconstitutional.

Federal employees and retirees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!
- EFF Calls For Release of Alexey Soldatov, "Father of the Russian Internet" by Electronic Frontier Foundation on September 4, 2024 at 3:47 pm
EFF was deeply disturbed to learn that Alexey Soldatov, known as the “father of the Russian Internet,” was sentenced in July to two years in prison by a Moscow court for alleged “misuse” of IP addresses. In 1990, Soldatov led the Relcom computer network that made the first Soviet connection to the global internet. He also served as Russia’s Deputy Minister of Communications from 2008 to 2010. Soldatov was convicted on charges related to an alleged deal to transfer IP addresses to a foreign organization. He and his lawyers have denied the accusations. His family, many supporters, and Netzpolitik suggest that the accusations are politically motivated. Soldatov’s former business partner, Yevgeny Antipov, was also sentenced to eighteen months in prison.

Soldatov was a trained nuclear scientist at the Kurchatov nuclear research institute who, during the Soviet era, built the Russian Institute for Public Networks (RIPN), which was responsible for administering and allocating IP addresses in Russia from the early 1990s onwards. The network RIPN created was called Relcom (RELiable COMmunication). During the 1991 KGB-led coup d’état, Relcom—unlike traditional media—remained uncensored. As his son, journalist Andrei Soldatov, recalls, Alexey Soldatov insisted on keeping the lines open under all circumstances.

Following the collapse of the Soviet Union, Soldatov ran Relcom as the first ISP in Russia and has since helped establish organizations that provide the technical backbone of the Russian Internet. For this long service, he has been dubbed “the father of RuNet” (the term used to describe the Russian-speaking internet). During the time that Soldatov served as Russia’s deputy minister of communications, he was instrumental in getting ICANN to approve the use of Cyrillic in domain names. He also rejected then-preliminary discussions about isolating the Russian internet from the global internet. We are deeply concerned that this is a politically motivated prosecution.
Multiple reports indicate this may be true. Soldatov suffers from both prostate cancer and a heart condition, and this sentence would almost certainly further endanger his health. His son Andrei Soldatov writes, “The Russian state, vindictive and increasingly violent by nature, decided to take his liberty, a perfect illustration of the way Russia treats the people who helped contribute to the modernization and globalization of the country.” Because of our concerns, EFF calls for his immediate release.
- Victory! California Bill To Impose Mandatory Internet ID Checks Is Dead—It Should Stay That Way by Joe Mullin on September 3, 2024 at 7:28 pm
A misguided bill that would have required many people to show ID to get online has died without getting a floor vote in the California legislature, where key deadlines for bill passage lapsed this weekend. Thank you to our supporters for helping us kill this wrongheaded bill, especially those of you who took the time to reach out to your legislators.

EFF opposed this bill from the start. Bills that allow politicians to define what is “sexually explicit” content and then enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. A.B. 3080 would have required an age verification system, most likely requiring the upload of a scanned government-issued ID, for any website with more than 33% “sexually explicit” content. The proposal did not, and could not have, differentiated between sites that are largely graphic sexual content and the huge array of sites that have some content appropriate for minors alongside other content geared towards adults. Bills like this are akin to having state prosecutors insist on ID uploads in order to turn on Netflix, regardless of whether the movie you’re seeking is G-rated or R-rated.

Political attempts to use pornography as an excuse to censor and control the internet are now almost 30 years old. These proposals persist despite the fact that putting government overseers on what Americans read and watch is not only unconstitutional, but broadly unpopular. In Reno v. ACLU, the Supreme Court struck down the anti-indecency provisions of the Communications Decency Act, a 1996 law that was intended to keep “obscene or indecent” material away from minors. In 2004, the Supreme Court again rejected an age-gated internet in Ashcroft v. ACLU, striking down most of a federal law of that era. The right of adults to read and watch what they want online is settled law. It is also a right that the great majority of Americans want to keep.
The age-gating systems that propose to analyze and copy our biometric data, our government IDs, or both would be a huge privacy setback for Americans of all ages. Electronically uploading and copying IDs is far from the equivalent of an in-person card check. And such systems won’t be effective at moderating what children see, which can and must be done by individuals and families. Other states have passed online age-verification laws this year, including a Texas law that EFF has asked the U.S. Supreme Court to review. Tennessee’s age-verification law even includes criminal penalties, allowing prosecutors to bring felony charges against anyone who “publishes or distributes”—i.e., links to—sexual material. California politicians should let this unconstitutional and censorious proposal fade away, and resist the urge to bring it back next year. Californians do not want mandatory internet ID checks, nor are they interested in fines and incarceration for those who fail to use them.
- EFF to Tenth Circuit: Protest-Related Arrests Do Not Justify Dragnet Device and Digital Data Searches by Brendan Gilligan on September 3, 2024 at 6:10 pm
The Constitution prohibits dragnet device searches, especially when those searches are designed to uncover political speech, EFF explained in a friend-of-the-court brief filed in the U.S. Court of Appeals for the Tenth Circuit. The case, Armendariz v. City of Colorado Springs, challenges device and data seizures and searches conducted by the Colorado Springs police after a 2021 housing rights march that the police deemed “illegal.” The plaintiffs in the case, Jacqueline Armendariz and a local organization called the Chinook Center, argue these searches violated their civil rights. The case details repeated actions by the police to target and try to intimidate the plaintiffs and other local civil rights activists solely for their political speech.

After the 2021 march, police arrested several protesters, including Ms. Armendariz. Police alleged Ms. Armendariz “threw” her bike at an officer as he was running, and even though the bike never touched the officer, police charged her with attempted simple assault. Police then used that charge to support warrants to seize and search six of her electronic devices—including several phones and laptops. The search warrant authorized police to comb through these devices for all photos, videos, messages, emails, and location data sent or received over a two-month period and to conduct a time-unlimited search of 26 keywords—including terms as broad and sweeping as “officer,” “housing,” “human,” “right,” “celebration,” “protest,” and several common names. Separately, police obtained a warrant to search all of the Chinook Center’s Facebook information and private messages sent and received by the organization over a week, even though the Center was not accused of any crime.

After Ms. Armendariz and the Chinook Center filed their civil rights suit, represented by the ACLU of Colorado, the defendants filed a motion to dismiss the case, arguing the searches were justified and, in any case, officers were entitled to qualified immunity.
The district court agreed and dismissed the case. Ms. Armendariz and the Center appealed to the Tenth Circuit. As explained in our amicus brief—which was joined by the Center for Democracy & Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—the devices searched contain a wealth of personal information. For that reason, and especially where, as here, political speech is implicated, it is imperative that warrants comply with the Fourth Amendment.

The U.S. Supreme Court recognized in Riley v. California that electronic devices such as smartphones “differ in both a quantitative and a qualitative sense” from other objects. Our electronic devices’ immense storage capacities mean that just one type of data can reveal more than previously possible, because it can span years’ worth of information. For example, location data can reveal a person’s “familial, political, professional, religious, and sexual associations.” And combined with all of the other available data—including photos, video, and communications—a device such as a smartphone or laptop can store a “digital record of nearly every aspect” of a person’s life, “from the mundane to the intimate.” Social media data can also reveal sensitive, private information, especially with respect to users’ private messages.

It’s because our devices and the data they contain can be so revealing that warrants for this information must rigorously adhere to the Fourth Amendment’s requirements of probable cause and particularity. Those requirements weren’t met here. The police’s warrants failed to establish probable cause that any evidence of the crime they charged Ms. Armendariz with—throwing her bike at an officer—would be found on her devices.
And the search warrant, which allowed officers to rifle through months of her private records, was so overbroad and lacking in particularity as to constitute an unconstitutional “general warrant.” Similarly, the warrant for the Chinook Center’s Facebook messages lacked probable cause and was especially invasive given that access to these messages may well have allowed police to map activists who communicated with the Center about social and political advocacy.

The warrants in this case were especially egregious because they appear designed to uncover First Amendment-protected activity. Where speech is targeted, the Supreme Court has recognized that it’s all the more crucial that warrants apply the Fourth Amendment’s requirements with “scrupulous exactitude” to limit an officer’s discretion in conducting a search. That failed to happen here, and the failure implicated several of Ms. Armendariz’s and the Chinook Center’s First Amendment rights—including the right to free speech, the right to free association, and the right to receive information.

Warrants that fail to meet the Fourth Amendment’s requirements disproportionately burden disfavored groups. In fact, the Framers adopted the Fourth Amendment to prevent the “use of general warrants as instruments of oppression”—but as legal scholars have noted, law enforcement routinely uses low-level, highly discretionary criminal offenses to impose order on protests. Once arrests are made, the charges are often later dropped or dismissed—but the damage is done, because protesters are off the streets, and many may be chilled from returning. Protesters undoubtedly will be further chilled if an arrest for a low-level offense then allows police to rifle through their devices and digital data, as happened in this case. The Tenth Circuit should allow this case to proceed.
Allowing police to conduct a virtual fishing expedition of a protester’s devices, especially when justification for that search is an arrest for a crime that has no digital nexus, contravenes the Fourth Amendment’s purposes and chills speech. It is unconstitutional and should not be tolerated.
- Americans Are Uncomfortable with Automated Decision-Making by Catalina Sanchez on September 3, 2024 at 4:59 pm
Imagine that a company you recently applied to work at used an artificial intelligence program to analyze your application to help expedite the review process. Does that creep you out? Well, you’re not alone. Consumer Reports recently released a national survey finding that Americans are uncomfortable with the use of artificial intelligence (AI) and algorithmic decision-making in their day-to-day lives. The survey of 2,022 U.S. adults was administered by NORC at the University of Chicago and examined public attitudes on a variety of issues. Consumer Reports found:

- Nearly three-quarters of respondents (72%) said they would be “uncomfortable”—including nearly half (45%) who said they would be “very uncomfortable”—with a job interview process that allowed AI to screen their interview by grading their responses and, in some cases, facial movements.
- About two-thirds said they would be “uncomfortable”—including about four in ten (39%) who said they would be “very uncomfortable”—with banks using such programs to determine whether they qualified for a loan, or with landlords using such programs to screen them as potential tenants.
- More than half said they would be “uncomfortable”—including about a third who said they would be “very uncomfortable”—with video surveillance systems using facial recognition to identify them, and with hospital systems using AI or algorithms to help with diagnosis and treatment planning.

The survey findings indicate that people feel disempowered by lost control over their digital footprint, and by corporations and government agencies adopting AI technology to make life-altering decisions about them. Yet states are moving at breakneck speed to implement AI “solutions” without first creating meaningful guidelines to address these reasonable concerns. In California, Governor Newsom issued an executive order to address government use of AI, and recently granted five vendors approval to test AI for a myriad of state agencies.
The administration hopes to apply AI to such tasks as health-care facility inspections, assisting residents who are not fluent in English, and customer service. The vast majority of Consumer Reports’ respondents (83%) said they would want to know what information was used to instruct AI or a computer algorithm to make a decision about them. Another super-majority (91%) said they would want a way to correct the data when a computer algorithm was used.

As states explore how best to protect consumers while corporations and government agencies deploy algorithmic decision-making, EFF urges strict standards of transparency and accountability. Laws should take a “privacy first” approach that ensures people have a say in how their private data is used. At a minimum, people should have a right to access the data being used to make decisions about them and the opportunity to correct it. Likewise, agencies and businesses using automated decision-making should offer an appeal process. Governments should ensure that consumers have protections from discrimination in algorithmic decision-making by both corporations and the public sector. Another priority should be a complete ban on many government uses of automated decision-making, including predictive policing.

From deciding who gets housing or the best mortgages, to who gets an interview or a job, to whom law enforcement or ICE investigates, people are uncomfortable with algorithmic decision-making that will affect their freedoms. Now is the time for strong legal protections.
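As a thought experiment only (hypothetical field names, not any existing statutory scheme), the minimum protections urged above (access, correction, and appeal) can be pictured as a small decision record an agency would retain for each automated decision:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the record-keeping floor argued for above:
# what data fed the decision (right to access), a channel to correct it,
# and a flag that a human-review appeal has been requested.
@dataclass
class AutomatedDecisionRecord:
    subject_id: str
    decision: str                  # e.g. "tenant screening: flagged"
    inputs_used: dict              # the data the algorithm actually saw
    corrections: list = field(default_factory=list)  # subject-supplied fixes
    appeal_filed: bool = False

    def correct(self, field_name, new_value):
        """Log a correction request against one of the inputs."""
        self.corrections.append((field_name, new_value))

    def appeal(self):
        """Request human review of the automated decision."""
        self.appeal_filed = True

record = AutomatedDecisionRecord(
    subject_id="applicant-123",
    decision="tenant screening: flagged",
    inputs_used={"eviction_history": "1 record (disputed)"},
)
record.correct("eviction_history", "0 records (dismissed in court)")
record.appeal()
```

Nothing about this structure is technically hard, which is the point: the barriers to access, correction, and appeal are policy choices, not engineering constraints.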
- The French Detention: Why We're Watching the Telegram Situation Closely by David Greene on August 30, 2024 at 10:37 pm
EFF is closely monitoring the situation in France, in which Telegram’s CEO Pavel Durov was charged with having committed criminal offenses, most of them seemingly related to the operation of Telegram. This situation has the potential to pose a serious danger to security, privacy, and freedom of expression for Telegram’s 950 million users.

On August 24th, French authorities detained Durov when his private plane landed in France. Since then, the French prosecutor has revealed that Durov’s detention was related to an ongoing investigation, begun in July, of an “unnamed person.” The investigation involves complicity in crimes presumably taking place on the Telegram platform, failure to cooperate with law enforcement requests for the interception of communications on the platform, and a variety of charges having to do with failure to comply with French cryptography import regulations. On August 28, Durov was charged with each of those offenses, among others not related to Telegram, and then released on the condition that he check in regularly with French authorities and not leave France.

We know very little about the Telegram-related charges, making it difficult to draw conclusions about how serious a threat this investigation poses to privacy, security, or freedom of expression on Telegram, or on online services more broadly. But it has the potential to be quite serious. EFF is monitoring the situation closely.

There appear to be three categories of Telegram-related charges: First is the charge based on “the refusal to communicate upon request from authorized authorities, the information or documents necessary for the implementation and operation of legally authorized interceptions.” This seems to indicate that the French authorities sought Telegram’s assistance to intercept communications on Telegram. The second set of charges relates to “complicité” with crimes that were committed in some respect on or through Telegram.
These charges specify “organized distribution of images of minors with a pedopornographic nature, drug trafficking, organized fraud, and conspiracy to commit crimes or offenses,” and “money laundering of crimes or offenses in an organized group.” The third set of charges all relate to Telegram’s failure to file a declaration required of those who import a cryptographic system into France.

Now we are left to speculate. It is possible that all of the charges derive from “the failure to communicate.” French authorities may be claiming that Durov is complicit with criminals because Telegram refused to facilitate the “legally authorized interceptions.” Similarly, the charges connected to the failure to file the encryption declaration likely also derive from the “legally authorized interceptions” being encrypted. France very likely knew for many years that Telegram had not filed the required declarations regarding its encryption, yet the company was not previously charged for that omission.

Refusal to cooperate with a valid legal order for assistance with an interception could be similarly prosecuted in most international legal systems, including the United States. EFF has frequently contested the validity of such orders and the gag orders associated with them, and has urged services to contest them in courts and pursue all appeals. But once such orders have been finally validated by courts, they must be complied with. The situation is more difficult where a nation, such as China or Saudi Arabia, lacks a properly functioning judiciary or due process. In addition to the refusal to cooperate with the interception, it seems likely that the complicité charges also, or instead, relate to Telegram’s failure to remove posts advancing crimes upon request or knowledge.
Specifically, the charges of complicity in “the administration of an online platform to facilitate an illegal transaction” and “organized distribution of images of minors with a pedopornographic nature, drug trafficking, [and] organized fraud,” could likely be based on a failure to take down posts. An initial statement by Ofmin, the French agency established to investigate threats to child safety online, referred to “lack of moderation” as being at the heart of their investigation. Under French law, Article 323-3-2, it is a crime to knowingly allow the distribution of illegal content or provision of illegal services, or to facilitate payments for either.

It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned. In particular, this potential “lack of moderation” liability bears watching. If Durov is prosecuted simply because Telegram inadequately removed offending content that it was generally aware of, that could expose nearly every other online platform to similar liability. It would also be concerning, though more in line with existing law, if the charges relate to an affirmative refusal to address specific posts or accounts, rather than a generalized awareness. And both of these situations are much different from one in which France has evidence that Durov was more directly involved with those using Telegram for criminal purposes. Moreover, France will likely have to prove that Durov himself committed each of these offenses, and not Telegram itself or others at the company.

EFF has raised serious concerns about Telegram’s behavior both as a social media platform and as a messaging app. In spite of its reputation as a “secure messenger,” only a very small subset of messages on Telegram are encrypted in such a way that prevents the company from reading the contents of communications—end-to-end encryption.
(Only one-to-one messages with the “secret messages” option enabled are end-to-end encrypted.) And even so, cryptographers have questioned the effectiveness of Telegram’s homebrewed cryptography. If the French government’s charges have to do with Telegram’s refusal to moderate or intercept these messages, EFF will oppose this case in the strongest terms possible, just as we have opposed all government threats to end-to-end encryption all over the world. French authorities may ask for technical measures that endanger the security and privacy of Telegram’s users. Durov and Telegram may or may not comply. Those running similar services may not have anything to fear, or these charges may be the canary in the coal mine warning us all that French authorities intend to expand their inspection of messaging and social media platforms. It is simply too soon, and there is too little information, for us to know for sure.

This is not the first time Telegram’s laissez-faire attitude toward content moderation has led to government reprisals. In 2022, the company was forced to pay a fine in Germany for not establishing a lawful way for reporting illegal content or naming an entity in Germany to receive official communication. Brazil fined the company in 2023 for failing to suspend accounts of supporters of former President Jair Bolsonaro. Nevertheless, this arrest marks an alarming escalation by a state’s authorities. We are monitoring the situation closely and will continue to do so.
- The California Supreme Court Should Help Protect Your Stored Communications by Mario Trujillo on August 30, 2024 at 4:31 pm
When you talk to your friends and family on Snapchat or Facebook, you should be assured that those services will not freely disclose your communications to the government or other private parties. That is why the California Supreme Court must take up and reverse the appellate opinion in the case of Snap v. The Superior Court of San Diego County. This opinion dangerously weakens the Stored Communications Act (SCA), which is one of the few federal privacy laws on the books. The SCA prevents certain communications providers from disclosing the content of your communications to private parties or the government without a warrant (or other narrow exceptions). EFF submitted an amicus letter to the court, along with the Center for Democracy & Technology.

The lower court incorrectly ruled that modern services like Snapchat and Facebook largely do not have to comply with the 1986 law. Since those companies already access the content of your communications for their own business purposes—including to target their behavioral advertising—the lower court held that they can also freely disclose the content of your communications to anyone. The ruling came in the context of a criminal defendant who sought access to the communications of a deceased victim with a subpoena. In compliance with the law, both Meta and Snap resisted disclosing the information.

The lower court’s opinion conflicts with nearly 40 years of interpretation by Congress and other courts. It ignores the SCA’s primary purpose of protecting your communications from disclosure. And the opinion gives too much weight to companies’ terms of service. Those terms, which almost no one reads, are where most companies bury their own right to access your communications. There is no doubt that companies should also be restricted in how they access and use your data, and we need stronger laws to make that happen.
For years, EFF has advocated for comprehensive data privacy legislation, including data minimization and a ban on online behavioral advertising. But that does not affect the current analysis of the SCA, which protects against disclosure now. If the California Supreme Court does not take this case up, Meta, Snap, and other providers will be allowed to voluntarily disclose the content of their users’ communications to any other corporation for any reason, to parties in civil litigation, and to the government without a warrant. Private parties could also compel disclosure with a mere subpoena.
- Copyright Is Not a Tool to Silence Critics of Religious Education by Mitch Stoltz on August 28, 2024 at 6:25 pm
Copyright law is not a tool to punish or silence critics. This is a principle so fundamental that it is the ur-example of fair use, which typically allows copying another’s creative work when necessary for criticism. But sometimes, unscrupulous rightsholders misuse copyright law to bully critics into silence by filing meritless lawsuits, threatening potentially enormous personal liability unless they cease speaking out. That’s why EFF is defending Zachary Parrish, a parent in Indiana, against a copyright infringement suit by LifeWise, Inc. LifeWise produces controversial “released time” religious education programs for public elementary school students during school hours. After encountering the program at his daughter’s public school, Mr. Parrish co-founded “Parents Against LifeWise,” a group that strives to educate and warn others about the harms they believe LifeWise’s programs cause. To help other parents make fully informed decisions about signing their children up for a LifeWise program, Mr. Parrish obtained a copy of LifeWise’s elementary school curriculum—which the organization kept secret from everyone except instructors and enrolled students—and posted it to the Parents Against LifeWise website. LifeWise sent a copyright takedown to the website’s hosting provider to get the curriculum taken down, and followed up with an infringement lawsuit against Mr. Parrish. EFF filed a motion to dismiss LifeWise’s baseless attempt to silence Mr. Parrish. As we explained to the court, Mr. Parrish’s posting of the curriculum was a paradigmatic example of fair use, an important doctrine that allows critics like Mr. Parrish to comment on, criticize, and educate others on the contents of a copyrighted work. LifeWise’s own legal complaint shows why Mr. 
Parrish’s use was fair: “his goal was to gather information and internal documents with the hope of publishing information online which might harm LifeWise’s reputation and galvanize parents to oppose local LifeWise Academy chapters in their communities.” This is a mission of public advocacy and education that copyright law protects. In addition, Mr. Parrish’s purpose was noncommercial: far from seeking to replace or compete with LifeWise, he posted the curriculum to encourage others to think carefully before signing their children up for the program. And posting the curriculum doesn’t harm LifeWise—at least not in any way that copyright law was meant to address. Just like copyright doesn’t stop a film critic from using scenes from a movie as part of a devastating review, it doesn’t stop a concerned parent from educating other parents about a controversial religious school program by showing them the actual content of that program.

Early dismissals in copyright cases against fair users are crucial because, although fair use protects lots of important free expression like the commentary and advocacy of Mr. Parrish, it can be ruinously expensive and chilling to fight for those protections. The high cost of civil discovery and the risk of astronomical statutory damages—which reach as high as $150,000 per work in certain cases—can lead would-be fair users to self-censor for fear of invasive legal process and financial ruin. Early dismissal helps prevent copyright holders from using the threat of expensive, risky lawsuits to silence critics and control public conversations about their works. It also sends a message to others that their right to free expression doesn’t depend on having enough money to defend it in court or having access to help from organizations like EFF. While we are happy to help, we would be even happier if no one needed our help for a problem like this ever again.
When society loses access to critical commentary and the public dialogue it enables, we all suffer. That’s why it is so important that courts prevent copyright law from being used to silence criticism and commentary. We hope the court will do so here, and dismiss LifeWise’s baseless complaint against Mr. Parrish.
- Backyard Privacy in the Age of Drones by Hannah Zhao on August 27, 2024 at 3:12 pm
This article was originally published by The Legal Aid Society's Decrypting a Defense Newsletter on August 5, 2024 and is reprinted here with permission. Police departments and law enforcement agencies are increasingly collecting personal information using drones, also known as unmanned aerial vehicles. In addition to high-resolution photographic and video cameras, police drones may be equipped with myriad spying payloads, such as live-video transmitters, thermal imaging, heat sensors, mapping technology, automated license plate readers, cell-site simulators, cell phone signal interceptors, and other technologies. Captured data can later be scrutinized with backend software tools like license plate readers and face recognition technology. There have even been proposals for law enforcement to attach lethal and less-lethal weapons to drones and robots.

Over the past decade or so, police drone use has dramatically expanded. The Electronic Frontier Foundation’s Atlas of Surveillance lists more than 1,500 law enforcement agencies across the US that have been reported to employ drones. The result is that backyards, which are part of the constitutionally protected curtilage of a home, are frequently being captured, either intentionally or incidentally. In grappling with the legal implications of this phenomenon, we are confronted by a pair of U.S. Supreme Court cases from the 1980s: California v. Ciraolo and Florida v. Riley. There, the Supreme Court ruled that warrantless aerial surveillance conducted by law enforcement in low-flying manned aircraft did not violate the Fourth Amendment because there was no reasonable expectation of privacy from what was visible from the sky. Although there are fundamental differences between surveillance by manned aircraft and drones, some courts have extended the analysis to situations involving drones, shutting the door to federal constitutional challenges.
Yet Americans, legislators, and even judges have long voiced serious worries about the threat of rampant and unchecked aerial surveillance. A couple of years ago, the Fourth Circuit found in Leaders of a Beautiful Struggle v. Baltimore Police Department that a mass aerial surveillance program (using manned aircraft) covering most of the city violated the Fourth Amendment. The exponential surge in police drone use has only heightened the privacy concerns underpinning that and similar decisions. Unlike the manned aircraft in Ciraolo and Riley, drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. Additionally, drones are smaller and easier to operate and can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters effectively functions as notice to those who are being watched, whereas drones can easily record information surreptitiously.

In response to the concerns regarding drone surveillance voiced by civil liberties groups and others, some law enforcement agencies, like the NYPD, have pledged to abide by internal policies to refrain from warrantless use over private property. But without enforcement mechanisms, those empty promises are easily discarded by officials when they consider them inconvenient, as NYC Mayor Eric Adams did in announcing that drones would, in fact, be deployed to indiscriminately spy on backyard parties over Labor Day. Barring a seismic shift away from Ciraolo and Riley by the U.S. Supreme Court (which seems nigh impossible given the Fourth Amendment approach of the current members of the bench), protection from warrantless aerial surveillance—and successful legal challenges—will have to come from the states. Indeed, six months after Ciraolo was decided, the California Supreme Court held in People v.
Cook that under the state’s constitution, an individual has a reasonable expectation that police will not conduct warrantless surveillance of their backyard from the air. More recently, other states, such as Hawai’i, Vermont, and Alaska, have similarly relied on their state constitution’s Fourth Amendment corollary to find warrantless aerial surveillance improper. Some states have also passed new laws regulating governmental drone use. And at least half a dozen states, including Florida, Maine, Minnesota, Nevada, North Dakota, and Virginia, have statutes requiring warrants (with exceptions) for police use. Law enforcement’s use of drones will only proliferate in the coming years, and drone capabilities continue to evolve rapidly. Courts and legislatures must keep pace to ensure that privacy rights do not fall victim to the advancement of technology. For more information on drones and other surveillance technologies, please visit EFF’s Street Level Surveillance guide at https://sls.eff.org/.
- Geofence Warrants Are 'Categorically' Unconstitutional | EFFector 36.11 by Christian Romero on August 21, 2024 at 6:40 pm
School is back in session, so prepare for your first lesson from EFF! Today you'll learn about the latest court ruling on the dangers of geofence warrants, our letter urging Bumble to require opt-in consent to sell user data, and the continued fight against the UN Cybercrime Treaty. If you'd like future lessons about the fight for digital freedoms, you're in luck! We've got you covered with our EFFector newsletter. You can read the full issue, or subscribe to get the next one in your inbox automatically. You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube. Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
- NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else by Corynne McSherry on August 19, 2024 at 6:57 pm
Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else that hosts or shares user-generated content—will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn't create the replica and have no way to confirm whether it was authorized or otherwise verify the claim. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?
These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people.

The bill also includes a safe harbor scheme modeled on the DMCA notice-and-takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content—a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content ID. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty – which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong. All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.

What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not. NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable.
But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusion about the limits of IP protection that we’ve already seen), that’s pretty cold comfort.

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly targeted and proportionate proposals to fill in the gaps. The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults. But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over.
- Court to California: Try a Privacy Law, Not Online Censorship by Adam Schwartz on August 19, 2024 at 6:54 pm
In a victory for free speech and privacy, a federal appellate court confirmed last week that parts of the California Age-Appropriate Design Code Act likely violate the First Amendment, and that other parts require further review by the lower court. The U.S. Court of Appeals for the Ninth Circuit correctly rejected rules requiring online businesses to opine on whether the content they host is “harmful” to children, and then to mitigate such harms. EFF and CDT filed a friend-of-the-court brief in the case earlier this year arguing for this point. The court also provided a helpful roadmap to legislatures for how to write “privacy first” laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the Act’s age-verification provision. We will continue to argue, in this case and others, that this provision violates the First Amendment rights of children and adults.

The Act, The Rulings, and Our Amicus Brief

In 2022, California enacted its Age-Appropriate Design Code Act (AADC). Three of the law’s provisions are crucial for understanding the court’s ruling. First, the Act requires an online business to write a “Data Protection Impact Assessment” for each of its features that children are likely to access. It must also address whether the feature’s design could, among other things, “expos[e] children to harmful, or potentially harmful, content.” Then the business must create a “plan to mitigate” that risk. Second, the Act requires online businesses to follow enumerated data privacy rules specific to children, including data minimization and limits on processing precise geolocation data. Third, the Act requires online businesses to “estimate the age of child users,” to an extent proportionate to the risks arising from the business’s data practices, or to apply child data privacy rules to all consumers.

In 2023, a federal district court blocked the law, ruling that it likely violates the First Amendment. The state appealed.
EFF’s brief in support of the district court’s ruling argued that the Act’s age-verification provision and vague “harmful” standard are unconstitutional; that these provisions cannot be severed from the rest of the Act; and thus that the entire Act should be struck down. We conditionally argued that if the court rejected our initial severability argument, privacy principles in the Act could survive the reduced judicial scrutiny applied to such laws and still safeguard people’s personal information. This is especially true given the government’s many substantial interests in protecting data privacy.

The Ninth Circuit affirmed the preliminary injunction as to the Act’s Impact Assessment provisions, explaining that they likely violate the First Amendment on their face. The appeals court vacated the preliminary injunction as to the Act’s other provisions, reasoning that the lower court had not applied the correct legal tests, and sent the case back to the lower court to do so.

Good News: No Online Censorship

The Ninth Circuit’s decision to prevent enforcement of the AADC’s impact assessments on First Amendment grounds is a victory for internet users of all ages because it ensures everyone can continue to access and disseminate lawful speech online. The AADC’s central provisions would have required a diverse array of online services—from social media to news sites—to review the content on their sites and consider whether children might view or receive harmful information. EFF argued that this provision imposed content-based restrictions on what speech services could host online and was so vague that it could reach lawful speech that is upsetting, including news about current events.
The Ninth Circuit agreed with EFF that the AADC’s “harmful to minors” standard was vague and likely violated the First Amendment for several reasons, including because it “deputizes covered businesses into serving as censors for the State.” The court ruled that these AADC censorship provisions were subject to the highest form of First Amendment scrutiny because they restricted content online, a point EFF argued. The court rejected California’s argument that the provisions should be subjected to reduced scrutiny under the First Amendment because they sought to regulate commercial transactions. “There should be no doubt that the speech children might encounter online while using covered businesses’ services is not mere commercial speech,” the court wrote. Finally, the court ruled that the AADC’s censorship provisions likely failed under the First Amendment because they are not narrowly tailored and California has less speech-restrictive ways to protect children online. EFF is pleased that the court saw the AADC’s impact assessment requirements for the speech restrictions that they are. With those provisions preliminarily enjoined, everyone can continue to access important, lawful speech online.

More Good News: A Roadmap for Privacy-First Laws

The appeals court did not rule on whether the Act’s data privacy provisions could survive First Amendment review. Instead, it directed the lower court to apply the correct tests in the first instance. In doing so, the appeals court provided guideposts for how legislatures can write data privacy laws that survive First Amendment review. Spoiler alert: enact a “privacy first” law, without unlawful censorship provisions.

Dark patterns. Some privacy laws prohibit user interfaces that have the intent or substantial effect of impairing autonomy and choice.
The appeals court reversed the preliminary injunction against the Act’s dark patterns provision, because it is unclear whether dark patterns are even protected speech, and if so, what level of scrutiny they would face. Clarity. Some privacy laws require businesses to use clear language in their published privacy policies. The appeals court reversed the preliminary injunction against the Act’s clarity provision, because there wasn’t enough evidence to say whether the provision would run afoul of the First Amendment. Indeed, “many” applications will involve “purely factual and non-controversial” speech that could survive review. Transparency. Some privacy laws require businesses to disclose information about their data processing practices. In rejecting the Act’s Impact Assessments, the appeals court rejected an analogy to the California Consumer Privacy Act’s unproblematic requirement that large data processors annually report metrics about consumer requests to access, correct, and delete their data. Likewise, the court reserved judgment on the constitutionality of two of the Act’s own “more limited” reporting requirements, which did not require businesses to opine on whether third-party content is “harmful” to children. Social media. Many privacy laws apply to social media companies. While the EFF is second-to-none in defending the First Amendment right to moderate content, we nonetheless welcome the appeals court’s rejection of the lower court’s “speculat[ion]” that the Act’s privacy provisions “would ultimately curtail the editorial decisions of social media companies.” Some right-to-curate allegations against privacy laws might best be resolved with “as-applied claims” in specific contexts, instead of on their face. 
Ninth Circuit Punts on the AADC’s Age-Verification Provision

The appellate court left open an important issue for the trial court to take up: whether the AADC’s age-verification provision violates the First Amendment rights of adults and children by blocking them from lawful speech, frustrating their ability to remain anonymous online, and chilling their speech to avoid the danger of losing their online privacy. EFF also argued in our Ninth Circuit brief that the AADC’s age-verification provision was similar to many other laws that courts have repeatedly found to violate the First Amendment. The Ninth Circuit missed a great opportunity to confirm that the AADC’s age-verification provision violated the First Amendment. The court didn’t pass judgment on the provision, but rather ruled that the district court had failed to adequately assess the provision to determine whether it violated the First Amendment on its face. As EFF’s brief argued, the AADC’s age-estimation provision is pernicious because it restricts everyone’s access to lawful speech online, by requiring adults to show proof that they are old enough to access lawful content the AADC deems harmful. We look forward to the district court recognizing the constitutional flaws of the AADC’s age-verification provision once the issue is back before it.
- EFF and Partners to EU Commissioner: Prioritize User Rights, Avoid Politicized Enforcement of DSA Rulesby Christoph Schmon on August 19, 2024 at 12:23 pm
EFF, Access Now, and Article 19 have written to EU Commissioner for Internal Market Thierry Breton calling on him to clarify his understanding of “systemic risks” under the Digital Services Act, and to set a high standard for the protection of fundamental rights, including freedom of expression and of information. The letter was in response to Breton’s own letter addressed to X, in which he urged the platform to take action to ensure compliance with the DSA in the context of far-right riots in the UK as well as the conversation between US presidential candidate Donald Trump and X CEO Elon Musk, which was scheduled to be, and was in fact, live-streamed hours after his letter was posted on X. Clarification is necessary because Breton’s letter otherwise reads as a serious overreach of EU authority, and transforms the systemic risks-based approach into a generalized tool for censoring disfavored speech around the world. By specifically referencing the streaming event between Trump and Musk on X, Breton’s letter undermines one of the core principles of the DSA: to ensure fundamental rights protections, including freedom of expression and of information, a principle noted in Breton’s letter itself.

The DSA Must Not Become A Tool For Global Censorship

The letter plays into some of the worst fears of critics of the DSA that it would be used by EU regulators as a global censorship tool rather than addressing societal risks in the EU. The DSA requires very large online platforms (VLOPs) to assess the systemic risks that stem from “the functioning and use made of their services in the [European] Union.” VLOPs are then also required to adopt “reasonable, proportionate and effective mitigation measures,” “tailored to the systemic risks identified.” The emphasis on systemic risks was intended, at least in part, to alleviate concerns that the DSA would be used to address individual incidents of dissemination of legal, but concerning, online speech.
It was one of the limitations that civil society groups concerned with preserving a free and open internet worked hard to incorporate. Breton’s letter troublingly states that he is currently monitoring “debates and interviews in the context of elections” for the “potential risks” they may pose in the EU. But such debates and interviews with electoral candidates, including the Trump-Musk interview, are clearly matters of public concern—the types of publication that are deserving of the highest levels of protection under the law. Even if one has concerns about a specific event, dissemination of information that is highly newsworthy, timely, and relevant to public discourse is not in itself a systemic risk. People seeking information online about elections have a protected right to view it, even through VLOPs. The dissemination of this content should not be within the EU’s enforcement focus under the threat of non-compliance procedures, and risks associated with such events should be analyzed with care. Yet Breton’s letter asserts that such publications are actually under EU scrutiny. And it is entirely unclear what proactive measures a VLOP should take to address a future speech event without resorting to general monitoring and disproportionate content restrictions. Moreover, Breton’s letter fails to distinguish between “illegal” and “harmful content” and implies that the Commission favors content-specific restrictions of lawful speech. The European Commission has itself recognized that “harmful content should not be treated in the same way as illegal content.” Breton’s tweet that accompanies his letter refers to the “risk of amplification of potentially harmful content.” His letter seems to use the terms interchangeably. Importantly, this is not just a matter of differences in the legal protections for speech between the EU, the UK, the US, and other legal systems. 
The distinction, and the protection for legal but harmful speech, is a well-established global freedom of expression principle. Lastly, we are concerned that the Commission is reaching beyond its geographic mandate. It is not clear how events that occur outside the EU are linked to risks and societal harm to people who live and reside within the EU, nor what actions the EU Commission expects VLOPs to take to address those risks. The letter itself admits that the assessment is still in process, and the harm merely a possibility. EFF and partners within the DSA Human Rights Alliance have long advocated for human rights-centered enforcement of the DSA that also considers the law’s global effects. It is time for the Commission to prioritize its enforcement actions accordingly. Read the full letter here.
- EFF Benefit Poker Tournament at DEF CON 32by Daniel de Zeeuw on August 19, 2024 at 9:50 am
“Shuffle up and deal!” announced Cory Doctorow, and the sound of playing cards and poker chips filled the room. The sci-fi author and EFF special advisor was this year’s Celebrity Emcee for the 3rd annual EFF Benefit Poker Tournament, an official contest at the DEF CON computer hacking conference hosted by Red Queen Dynamics CEO and EFF board member Tarah Wheeler. Celebrity Knockout Guests were Runa Sandvik, MalwareJake Williams, and Deviant Ollam. Forty-six EFF supporters and friends played in the charity tournament on Friday, August 9 in the Horseshoe Poker Room at the heart of the Las Vegas Strip. Every entrant received a special deck of cybersecurity playing cards. The original concept was suggested by information security attorney Kendra Albert, designed by Melanie “1dark1” Warner of Hotiron Collective, and made by Tarah Wheeler. The day started with a poker clinic run by Tarah’s father, professional poker player Mike Wheeler. Mike shared how he taught Tarah to play poker with jelly beans, then offered all listeners tips on when to check, when to raise, and how that flush draw might not be as solid as you think. At noon, Cory Doctorow kicked off the tournament and the fun began. This year’s tournament featured a number of celebrity guests, each with a price on their head. Whichever player knocked one of them out of the tournament would win a special prize. J.D. Sterling knocked out poker pro Mike Wheeler, collecting the bounty on his head posted by Tarah and winning a $250 donation to EFF in his name. Mike swears he was ahead until the river. MalwareJake was the next celebrity to fall, with fish knocking him out just before the break to win a scrolling-message LED hat. Runa Sandvik was knocked out by Jacen Kohler, winning an assortment of Norwegian milk chocolate.
And Tarah Wheeler showed the skills and generosity that led her to start this charity tournament back in 2022. She knocked out Deviant, winning the literal shirt off his back, a one-of-a-kind Sci-Hub t-shirt that she later auctioned off for EFF. She also knocked out Cory Doctorow, winning a collection of his signed books that she also gave away. She bubbled, finishing 9th in the tournament and just missing the final table of 8. After Tarah fell, the final table assembled for one more hour of poker. The final five players were J.D. Sterling, Eric Hammersmark, n0v, Sid, and Ed Bailey as the big stack. J.D. was the first to fall when his K8 lost to Ed Bailey’s QT when a Queen flopped early. Eric busted out next, leaving the final three. They traded blinds back and forth for a while until Ed, with the big stack, started pushing, joking that he had a plane to catch. One of his raises was eventually called by Sid. Ed flopped top pair and tried to keep Sid on the line, but Sid wriggled out, landing a King on the river to beat Ed’s pair, double up, and eat into Ed’s chip lead. Ed kept pushing, but ran into n0v’s pocket queens, losing another third of his chips, and then eventually busting out to pocket aces from Sid. Ed was satisfied with coming in third, and after shaking hands quickly took off. It turns out he actually did have a plane to catch. The final two, n0v and Sid, traded blinds until n0v went all-in pre-flop with A5. Sid called and showed pocket 9s. The crowd cheered as the board showed no aces, giving Sid the hand and the tournament. Cory presented Sid with the tournament’s traditional jellybean trophy and a treasure chest of precious stones from Tarah’s personal collection. It was an exciting afternoon of competition, raising over $16,000 to support civil liberties and human rights online. We hope you join us next year as we continue to grow the tournament.
Follow Tarah and EFF to make sure we have chips and a chair for you at DEF CON 33.
- Digital License Plates and the Deal That Never Had a Chanceby Hayley Tsukayama on August 16, 2024 at 9:00 pm
Location and surveillance technology permeates the driving experience. Setting aside external technology like license plate readers, there is some form of internet-connected service or surveillance capability built into or onto many cars, from GPS tracking to oil-change notices. This is already a dangerous situation for many drivers and passengers, and a bill in California requiring GPS tracking in digital license plates would put us further down this troubling path. In 2022, EFF fought along with other privacy groups, domestic violence organizations, and LGBTQ+ rights organizations to prevent the use of GPS-enabled technology in digital license plates. A.B. 984, authored by State Assemblymember Lori Wilson and sponsored by digital license plate company Reviver, originally would have allowed GPS trackers to be placed in the digital license plates of personal vehicles. As we have said many times, location data is very sensitive information, because where we go can also reveal things we'd rather keep private even from others in our household. Ultimately, advocates struck a deal with the author to prohibit location tracking in passenger cars, and this troubling flaw was removed. Governor Newsom signed A.B. 984 into law. Now, not even two years later, the state's digital license plate vendor, Reviver, and Assemblymember Wilson have filed A.B. 3138, which directly undoes the deal from 2022 and explicitly calls for location tracking in digital license plates for passenger cars. To best protect consumers, EFF urges the legislature not to approve A.B. 3138.

Consumers Could Face Serious Concerns If A.B. 3138 Becomes Law

In fact, our concerns about trackers in digital plates are stronger than ever. Recent developments have made location data even more ripe for misuse. People traveling to California from a state that criminalizes abortion may be unaware that the rideshare car they are in is tracking their trip to a Planned Parenthood via its digital license plate.
This trip may generate location data that can be used against them in a state where abortion is criminalized. Unsupportive parents of queer youth could use GPS-loaded plates to monitor or track whether teens are going to local support centers or events. U.S. Immigration and Customs Enforcement (ICE) could use GPS surveillance technology to locate immigrants, as it has done by exploiting ALPR location data exchange between local police departments and ICE to track immigrants’ movements. The invasiveness of vehicle location technology is part of a large range of surveillance technology in the hands of ICE to fortify its ever-growing “virtual wall.” There are also serious implications in domestic violence situations, where GPS tracking has been found to be used as a tool of abuse and coercion by abusive partners. Most recently, two Kansas City families are jointly suing the company Spytec GPS after its technology was used in a double murder-suicide, in which a man used GPS trackers to find and kill his ex-girlfriend and her current boyfriend, and then himself. The families say the lawsuit is, in part, to raise awareness about the danger of making this technology and location information more easily available. There's no reason to make tracking any easier by embedding it in state-issued plates.

We Urge the Legislature to Reject A.B. 3138

Shortly after California approved Reviver to provide digital license plates to commercial vehicles under A.B. 984, the company experienced a security breach in which hackers could use GPS to track vehicles with a Reviver digital license plate in real time. Privacy issues aside, this summer the state of Michigan also terminated its two-year-old contract with Reviver over the company’s failure to follow state law and its contractual obligations. This has forced 1,700 Michigan drivers to go back to a traditional metal license plate.
Reviver is the only company that currently has state authorization to sell digital plates in California, and it is the primary advocate for allowing tracking in passenger vehicle plates. The company says its goal is to modernize personalization and safety with digital license plate technology for passenger vehicles. But it hasn't proven itself up to the responsibility of protecting this data. A.B. 3138 functionally gives drivers one choice of digital license plate vendor, and that vendor failed once to competently secure the location data collected by its products and has now failed to meet basic contractual obligations with a state agency. California lawmakers should think carefully about the clear dangers of vehicle location tracking, and about whether we can trust this company to protect sensitive location information for vulnerable populations, or for any Californian.
- 2 Fast 2 Legal: How EFF Helped a Security Researcher During DEF CON 32by Hannah Zhao on August 15, 2024 at 9:03 pm
This year, like every year, EFF sent a variety of lawyers, technologists, and activists to the summer security conferences in Las Vegas to help foster support for the security research community. While we were at DEF CON 32, security researcher Dennis Giese received a cease-and-desist letter on a Thursday afternoon for his talk scheduled just hours later for Friday morning. EFF lawyers met with Dennis almost immediately, and by Sunday, Dennis was able to give his talk. Here’s what happened, and why the fight for coders’ rights matters. Throughout the year, we receive a number of inquiries from security researchers who seek to report vulnerabilities or present on technical exploits and want to understand the legal risks involved. Enter the EFF Coders’ Rights Project, designed to help programmers, tinkerers, and innovators who wish to responsibly explore technologies and report on those findings. Our Coders’ Rights lawyers counsel many of those who reach out to us on anything from mitigating legal risk in their talks, to reporting vulnerabilities they’ve found, to responding to legal threats. The number of inquiries often ramps up in the months leading to “hacker summer camp,” but we usually have at least a couple of weeks to help and advise the researcher. In this case, however, we did our work on an extremely short schedule. Dennis is a prolific researcher who has presented his work at conferences around the world. At DEF CON, one of the talks he planned along with a co-presenter involved digital locks, including the vendor Digilock. In the months leading up to the presentation, Dennis shared his findings with Digilock and sought to discuss potential remediations. Digilock expressed interest in these conversations, so it came as a surprise when the company sent him the cease-and-desist letter on the eve of the presentation raising a number of baseless legal claims.
Because we had lawyers on the ground at DEF CON, Dennis was able to connect with EFF soon after receiving the cease-and-desist and, along with former EFF attorney and current Special Counsel to EFF Kurt Opsahl, we agreed to represent him in responding to Digilock. Over the course of forty-eight hours, we were able to meet with Digilock’s lawyers and ultimately facilitated a productive conversation between Dennis and its CEO. To its credit, Digilock agreed to rescind the cease-and-desist letter and also provided Dennis with useful information about its plans to address vulnerabilities discussed in his research. Dennis was able to give the talk, with this additional information, on Sunday, the last day of DEF CON. We are proud we could help Dennis navigate what can be a scary situation of receiving last-minute legal threats, and are happy that he was ultimately able to give his talk. Good-faith security researchers like Dennis increase security for all of us who use digital devices. By identifying and disclosing vulnerabilities, hackers are able to improve security for every user who depends on information systems for their daily life and work. If we do not know about security vulnerabilities, we cannot fix them, and we cannot make better computer systems in the future. Dennis’s research was not only legal, it demonstrated real-world problems that the companies involved need to address. Just as important as discovering security vulnerabilities is reporting the findings so that users can protect themselves, vendors can avoid introducing vulnerabilities in the future, and other security researchers can build off that information. By publicly explaining these sorts of attacks and proposing remedies, other companies that make similar devices can also benefit by fixing these vulnerabilities.
In discovering and reporting on their findings, security researchers like Dennis help build a safer future for all of us. However, this incident reminds us that even good-faith hackers are often faced with legal challenges meant to silence them from publicly sharing the legitimate fruits of their labor. The Coders' Rights Project is part of our longstanding work to protect researchers through legal defense, education, amicus briefs, and involvement in the community. Through it, we hope to promote innovation and safeguard the rights of curious tinkerers and hackers everywhere. We must continue to fight for the right to share this research, which leads to better security for us all. If you are a security researcher in need of legal assistance or have concerns before giving a talk, do not hesitate to reach out to us. If you'd like to support more of this work, please consider donating to EFF.
- EFF Honored as DEF CON 32 Uber Contributorby Rory Mir on August 15, 2024 at 7:23 pm
At DEF CON 32 this year, the Electronic Frontier Foundation became the first organization to be given the Uber Contributor award. This award recognizes EFF’s work in education and litigation, naming us “Defenders of the Hacker Spirit.”

[Photo: EFF Staff Attorney Hannah Zhao and Staff Technologist Cooper Quintin accepting the Uber Contributor Award from DEF CON founder Jeff Moss]

The Uber Contributor Award is an honor created three years ago to recognize people and groups who have made exceptional contributions to the infosec and hacker community at DEF CON. Our connection with DEF CON runs deep, dating back over 20 years. The conference has become a vital part of keeping EFF’s work grounded in the ongoing issues faced by the creative builders and experimenters keeping tech secure (and fun).

[Photo: EFF Staff Attorney Hannah Zhao (left) and Staff Technologist Cooper Quintin (right) with the Uber Contributor Award (center)]

Every year attendees and organizers show immense support and generosity in return, but this year exceeded all expectations. EFF raised more funds than in all previous years at hacker summer camp—the three annual Las Vegas hacker conferences, BSidesLV, Black Hat USA, and DEF CON. We also gained over 1,000 new and renewing members who support us year-round. This community’s generosity fuels our work to protect encrypted messaging, fight back against illegal surveillance, and defend your right to hack and experiment. We’re honored to be welcomed so warmly year after year. Just this year, we saw another last-minute cease-and-desist letter sent to a security researcher about their DEF CON talk. EFF attorneys from our Coders’ Rights Project attend every year, and were able to jump into action to protect the speaker. While the team puts out fires at DEF CON for one week in August, their year-round support of coders is thanks to the continued support of the wider community.
Anyone facing intimidation and spurious legal threats can always reach out for support at info@eff.org. We are deeply grateful for this honor and the unwavering support from DEF CON. Thank you to everyone who supported EFF at the membership booth, participated in our Poker Tournament and Tech Trivia, or checked out our talks. We remain committed to meeting the needs of coders and will continue to live up to this award, ensuring the hacker spirit thrives despite an increasingly hostile landscape. We look forward to seeing you again next year!
- Infrastructure Investment: A State, Local, and Private Responsibilityby Cato Institute on June 18, 2022 at 10:37 am
- The Next “Crisis”: The Debt Ceilingby Cato Institute on June 18, 2022 at 3:05 am