Expanding E-Verify is a Privacy Disaster in the Making

E-Verify is a massive federal data system used to verify the eligibility of job applicants to work in the United States. The U.S. Department of Homeland Security (DHS), U.S. Citizenship and Immigration Services (USCIS), and the U.S. Social Security Administration (SSA) administer E-Verify. Until now, the federal government has not required private employers to use E-Verify, and only a few states have required it. However, a proposed bill in Congress, the Legal Workforce Act (HR 3711), aims to make E-Verify use mandatory nationwide despite all the very real privacy and accuracy issues associated with the data system.

EFF recently joined human rights and workers’ rights organizations from across the United States in sending a letter to Congress pointing out the flaws of E-Verify.

Instead of learning from the recent Equifax data breach that concentrated sensitive information creates an attractive target for data thieves, our elected representatives want to compel a massive increase in the use of yet another data system that can be breached. To use E-Verify, employers must collect and transmit sensitive information, such as Social Security and passport numbers.

And a data breach isn’t the only concern with such a data system: there’s also the likelihood of data errors that can prevent many Americans from obtaining jobs. Even worse, E-Verify is likely to have an unfair disparate impact on women, who are more likely to change their names due to marriage or divorce. Additionally, a Government Accountability Office (GAO) report [.pdf page 19] found that E-Verify leads to more denials for work-eligible people born outside the United States and can “create the appearance of discrimination.” The GAO report also stated that these errors would increase dramatically if E-Verify were made mandatory.

Instead of recognizing the problematic nature of E-Verify, the White House is pushing to make it mandatory in its negotiations with Congress concerning legislative protection for Deferred Action for Childhood Arrivals (DACA) recipients. If successful, this would jeopardize Americans’ collective security and privacy. Not to mention that this expanded database may find uses beyond employment verification, and end up as another tool in an already impressive law enforcement surveillance arsenal.

As we have in the past, EFF will continue to do everything in our power to fight against the mandatory usage of E-Verify. It was a bad idea then and it’s a bad idea now.

Whistleblower Protections in USA Liberty Act Not Enough

The USA Liberty Act fails to safeguard whistleblowers, whether federal employees or contractors, because of a total lack of protection from criminal prosecution. These shortcomings, which exist in other whistleblower protection laws as well, shine a light on the much-needed reform of the Espionage Act, a law that has been used to stifle anti-war speech and punish political dissent.

The recent House bill, which seeks to reauthorize a massive government surveillance tool, extends whistleblower protections to contract employees, a group that currently has no such protection.

The Liberty Act attempts to bring parity between intelligence community employees and contract employees by amending Section 1104 of the National Security Act of 1947.

According to the act, employees for the CIA, NSA, Defense Intelligence Agency, Office of the Director of National Intelligence, National Geospatial-Intelligence Agency, and National Reconnaissance Office are protected from certain types of employer retaliation when reporting evidence of “a violation of any federal law, rule, or regulation,” or “mismanagement, a gross waste of funds, an abuse of authority, or a substantial and specific danger to public health or safety.” Employees working at agencies the President deems have a “primary function” of conducting foreign intelligence or counterintelligence are also covered by these protections.

Covered employees can’t be fired. They can’t be demoted. They can’t receive lower pay or benefits or be reassigned. In fact, no retaliatory “personnel actions” whatsoever can be ordered, including decisions about promotions or raises.

But employees are only protected from retaliation in the workplace. Entirely missing from Section 1104 of the National Security Act of 1947 are protections from criminal prosecution. That’s because the government treats whistleblowers differently from what it calls leakers. According to federal law, government employees who make protected disclosures to approved government officials are whistleblowers, and they have protections; employees who deliver confidential information to newspapers are leakers. Leakers do not have protections.

Extending these whistleblower protections to contractors—while positive—is just an extension of the incomplete protections our federal employees currently receive. And, as written, the Liberty Act only protects contract employees from retaliation by the government agency they contract with, not their direct employer. Contract employees work directly for private companies—like Lockheed Martin—that have contracts with the federal government for specific projects. The available data is unclear, but a 2010 investigation by The Washington Post revealed that “1,931 private companies work on programs related to counterterrorism, homeland security and intelligence in about 10,000 locations across the United States.”

The problems continue. Currently, neither the Liberty Act nor Section 1104 specifies how whistleblower protections are enforced.

Let’s say a contractor with Booz Allen Hamilton—the same contracting agency Edward Snowden briefly worked for when he confirmed widespread government surveillance to The Guardian in 2013—believes she has found evidence of an abuse of authority. According to the Liberty Act, she can present that evidence to a select number of individuals, which includes Director of National Intelligence Daniel Coats, Acting Inspector General of the Intelligence Community Wayne Stone, and any of the combined 38 members of the House of Representatives Permanent Select Committee on Intelligence and the U.S. Senate Select Committee on Intelligence. And, according to the Liberty Act, she will be protected from agency employer retaliation.


If the agency retaliates against the contractor anyway, the Liberty Act does not explain how she can fight back. There is no mention of appeals. There are no instructions for filing complaints. The bill—and the original National Security Act of 1947—has no bite.

The Liberty Act makes a good show of extending whistleblower protections to a separate—and steadily growing—class of employee. But the protections themselves are lacking. Contractors who offer confidential information to the press—like Reality Winner, who allegedly sent classified information to The Intercept—are still vulnerable under a World War I era law called The Espionage Act.

As we wrote, the Espionage Act has a history mired in xenophobia, with an ever-changing set of justifications for its use. University of Texas School of Law professor Stephen Vladeck lambasted the law in a 2016 opinion piece for The New York Daily News:

“Among many other shortcomings, the Espionage Act’s vague provisions fail to differentiate between classical spying, leaking, and whistleblowing; are hopelessly overbroad in some of the conduct they prohibit (such as reading a newspaper story about leaked classified information); and fail to prohibit a fair amount of conduct that reasonable people might conclude should be illegal, such as discussing classified information in unclassified settings.”

Whistleblower protections, present in the National Security Act of 1947 and extended in the Liberty Act, are weakened by the U.S. government’s broad interpretation of the Espionage Act. Though the law was intended to stop spies and potential state sabotage, it has been used to buttress McCarthyism and to sentence a former presidential candidate to 10 years in prison. Today, it is used to charge individuals who bring confidential information to newspapers and publishing platforms.

Whistleblower protections across the entire intelligence community are lacking. Rather than merely extending today’s inadequate protections to contractors, Congress should strengthen protections for contractors and employees alike.

Improve whistleblower protections. Reform the Espionage Act.

Q&A with Professor Xiaoxing Xi, Victim of Unjust Surveillance

Professor Xiaoxing Xi, a physics professor at Temple University, was the subject of government surveillance under a FISA order. During September’s Color of Surveillance Hill briefing, Professor Xi told the story of the devastating impact government surveillance has had on his life.




Professor Xi faced a prosecution that was later dropped because there was no evidence that he had engaged in any wrongdoing. Since enduring this invasive surveillance, he has become an outspoken advocate against race-based surveillance and prosecution.

We asked Professor Xi to elaborate on the surveillance against him and the effect it had on him, his family, and his scientific work. 

Q: People assume their private communications are not visible to others, but it’s become more and more clear that the government is surveilling countless Americans. How did you feel when you learned that the government had been reading your private emails, listening to your private phone calls, and conducting electronic surveillance? 

It was frightening. I knew from the beginning that their charges against me were completely wrong, but we were fearful till the end that they might twist something I wrote in my emails or something I said over the phone to send me to jail. I also felt like I was being violated. When you lose your privacy, it’s like being forced to walk around naked.

Q: Does knowing you had been surveilled cause you concern now, years later? Do you still worry you’re under surveillance?

Yes, my whole family are still seriously concerned about our emails being read and phone calls being listened to. People tell us that it is very unlikely we are still being surveilled, and they are probably right. Once violated, it is very difficult to shake off the fear. We watch every word we write and say, so that we don’t give them excuses to “pick bones out of an egg,” and life is very stressful like this.

Q: Your children were still young when this happened, especially your daughter. How did your family feel about all this? How were they affected?

They were shaken by guns being pointed at them and seeing me snatched away in handcuffs. Everyone was traumatized by this experience, like the sky was falling upon us. My wife was very courageous, trying to shield the children from the harm, even though she herself was under tremendous stress. My elder daughter was a chemistry major, and now she works in a civil rights organization trying to raise the awareness of people about the injustices immigrants face. My younger daughter tries to go about her life like nothing has happened, but we worry about the long term effect on her.

Q: How has your scientific work been affected by this horrible and unjust surveillance and prosecution?

It damaged my scientific research significantly. My reputation is now tainted and the opportunities for me to advance in the scientific community are more limited. My current research group is just a tiny fraction of what I used to have. In addition, I worry about routine academic activities being misconstrued by the government and I am scared to put my name on forms required for obtaining funding and managing research. 

Add your voice. Join EFF in speaking out against mass surveillance. 

Speak Out

Victory! California Just Reformed Its Gang Databases and Made Them More Accountable

Gov. Jerry Brown has signed A.B. 90, a bill that EFF advocated for to bring additional accountability and transparency to the various shared gang databases maintained by the State of California. The much-needed reform passed thanks to a campaign organized by a broad coalition of civil liberties organizations, including the Youth Justice Coalition, the National Immigration Law Center, and the Urban Peace Institute.

Why Reform Was Desperately Needed

The California State Auditor found the state’s gang databases to be riddled with errors, containing records on individuals who should never have been included in the first place and information that should have been purged long ago. The investigation also found that the system lacks basic oversight safeguards, and went so far as to say that due to the inaccurate information in the database, its crime-fighting value was “diminished.”

What the Reform Brings

The legislation brings a broad package of reforms. It codifies new standards and regulations for operating a shared gang database, including audits for accuracy and proper use. The bill also creates a new technical advisory committee made up of all stakeholders—including criminal defense representatives, civil rights and immigration experts, gang-intervention specialists, and a person personally impacted by being labeled a gang member—rather than just representatives from law enforcement. Further, the legislation ensures that the criteria for inclusion in the database, and how long the information is retained, are supported by empirical research.

Today, California has passed common-sense reforms that were desperately needed to protect its residents’ civil liberties. Californians should be proud.

58 Human Rights and Civil Liberties Organizations Demand an End to the Backdoor Search Loophole

EFF and 57 other organizations, including the American Civil Liberties Union, R Street, and the NAACP, spoke out against warrantless searches of American citizens’ data in a joint letter this week demanding reform of the so-called “backdoor search” loophole that exists for data collected under Section 702.

The backdoor search loophole allows federal government agencies, including the FBI and CIA, to search through data collected on American citizens without a warrant.


The data is first collected by the intelligence community under Section 702 of the FISA Amendments Act of 2008, which provides rules for sweeping up the communications of foreign individuals outside the United States. However, the U.S. government also uses Section 702 to collect the communications of countless American citizens and store them in a database accessible to several agencies.

EFF and many others believe this type of mass collection alone is unconstitutional. The backdoor search loophole infringes American rights further—allowing agencies to warrantlessly search through 702-collected data by using search terms that describe U.S. persons. These terms could include names, email addresses, and more.   

This practice needs to end. And a proposal before Congress to require warrants on backdoor searches used only in criminal investigations—as recently reported by the New York Times—does not go far enough.

As EFF, and several other organizations, said in an Oct. 3 letter:

“Applying a warrant requirement only to searches of Section 702 data involving ‘criminal suspects,’ is not an adequate solution to this problem. Most fundamentally, it ignores the fact that the Fourth Amendment’s warrant requirement is not limited to criminal or non-national security related cases.”

Further, carving out a warrant requirement solely for criminal investigations ignores the broader umbrella under which the FBI conducts many searches—that of “foreign intelligence.” Because the FBI conducts investigations with both criminal and foreign intelligence elements, the agency could predictably bypass a backdoor-search warrant requirement by ascribing its searches to foreign intelligence matters rather than criminal ones.

Warrantless searches of American communications may especially impact communities that frequently speak with family outside of the United States or that have historically faced unjust surveillance. As we wrote: “Existing policies make it far too easy for the government to engage in searches that disproportionately target Muslim Americans and immigrants with overseas connections based merely on the assertion of a nebulous ‘foreign intelligence’ purpose.”

These searches are happening. In 2016, the CIA and NSA reported they conducted 30,000 searches for information about U.S. persons. That number does not include metadata searches by the CIA, a related problem that can also be fixed by Congress before Section 702 sunsets in December.

Backdoor searches of 702-collected data about U.S. citizens and residents should require a warrant based on probable cause. Congress can protect the rights of countless Americans by closing this loophole.

Read the full letter. 



LinkNYC Improves Privacy Policy, Yet Problems Remain

Since first appearing on the streets of New York City in 2016, LinkNYC’s free public Wi-Fi kiosks have prompted controversy. The initial version of the kiosks’ privacy policy was particularly invasive: it allowed LinkNYC to store users’ browsing history and time spent on particular websites, and it lacked clarity about how LinkNYC would handle government demands for user data, among other issues. While CityBridge, the private consortium administering the network, has thankfully incorporated welcome changes to its use policy, several problems unfortunately remain.

What is LinkNYC? 

The LinkNYC system, announced by the Mayor’s office in November 2014 after inviting competitive bids from private industry, includes over 1,000 public kiosks spread across all five boroughs of New York City. Each kiosk offers free high-speed Wi-Fi, phone calls, a charging station for mobile devices, and a built-in tablet capable of accessing various city services, such as emergency services, maps, and directions. Funded by advertisers who pay for time on the two 55” displays on either side of each kiosk, the system requires no payment from users or taxpayers.

On the one hand, the spring 2017 revisions to the CityBridge privacy policy substantially improved it over the original 2016 version, which allowed nearly limitless retention of user data, including browsing history. In particular, the adoption of limits on the time during which CityBridge will retain user data, and the commitment not to track browsing histories for users who use their own devices, render the LinkNYC service today far more respectful of privacy than it was when the system was first launched.

Even after its 2017 policy changes, LinkNYC still collects what it describes as “Technical Information,” such as IP addresses, anonymized MAC addresses, device types, device identifiers, and more, and retains it for up to 60 days. Additionally, the LinkNYC kiosks have cameras that store footage for up to 7 days.
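The privacy policy does not say how MAC addresses are “anonymized,” and the details matter. A sketch, with everything here hypothetical rather than drawn from CityBridge: simply hashing a MAC address is weak, because the address space is small enough to brute-force, whereas a keyed hash with a periodically rotated secret resists that and limits long-term tracking.

```python
import hmac
import hashlib

def anonymize_mac(mac: str, secret_key: bytes) -> str:
    """Return a keyed-hash pseudonym for a MAC address.

    A plain hash like sha256(mac) is NOT anonymous: the 48-bit MAC
    space, narrowed further by known vendor prefixes, can be
    brute-forced to recover the original address. HMAC with a secret
    key prevents that, and rotating the key periodically breaks
    long-term linkability of the pseudonyms.
    """
    # Normalize formatting so "AA-BB-..." and "aa:bb:..." map together.
    canonical = mac.lower().replace("-", ":").encode()
    digest = hmac.new(secret_key, canonical, hashlib.sha256).hexdigest()
    return digest[:16]  # truncated pseudonym is enough for analytics

# Hypothetical key, ideally rotated on the retention schedule (60 days).
key = b"rotate-me-every-60-days"
pseudonym = anonymize_mac("AA:BB:CC:DD:EE:FF", key)
```

Whether an operator does anything like this, or merely strips a few digits, is exactly the kind of detail a public process could force into the open.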

Despite the positive privacy-minded revisions, the policy still fails to provide a pathway for public participation and includes no reference to remedies for potential violations. The story underlying the activation of LinkNYC cameras provides an illustration of these overlapping concerns. 

Opaque Processes Precluding Public Participation in Policy

Beyond the welcome updates to the CityBridge privacy policy, there appears neither any process allowing public participation in the governance of the kiosk system, nor a redress mechanism for potential violations. 

The process of setting policy remains thoroughly opaque. Even CityBridge’s welcome spring 2017 revisions prove the point: our colleagues at the NYCLU wrote a letter to the Mayor’s office, which then negotiated with the private companies that together make up the CityBridge consortium. Thankfully, that closed-door process between the Mayor’s office and the companies produced a better policy than its predecessor.

Going forward, however, there is no means for New Yorkers to participate in decisions about how data from Link kiosks will be used, with whom it will be shared, for how long it will be retained, or whether the parameters under which it is initially collected might conceivably expand in the future. 

Separate from privacy and freedom of expression, but closely related to them, are principles of transparency and public process. Without opportunities to participate in the construction of policies that affect them, New Yorkers and visitors who use the kiosks are reduced to the position of being sources of data, rather than users with needs to be served. 

Unspecified Remedies and Redress

The transparency and public-process problem described above looks forward. There is a parallel problem looking backward: in the wake of any potential violation or breach of the CityBridge policies, there is no process for seeking redress.

Data breaches are a reality that we must consider. 2016 was a record year for data breaches, reflecting an increase of 40% from the year before, according to the nonprofit Identity Theft Resource Center. CityBridge fortunately collects and stores much less information than it did when the LinkNYC system first launched, which mitigates the potential impact of a data breach and renders the possibility far less threatening than other recent examples.

Beyond the event of a data breach is the possibility of CityBridge data being misused. Even if a CityBridge data center is never hacked by a malicious actor or foreign intelligence agency, what happens if a LinkNYC employee sells user information to third parties in violation of the consortium’s commitments?

LinkNYC’s failure to create a process, acknowledge any rights, or permit any remedies for potential violations all remain problematic. As noted by Rethink Link NYC, a local community organization in the Electronic Frontier Alliance, “Even the best privacy policy is worthless without oversight or accountability.”

Those Cameras Prove the Point

In addition to collecting and retaining some data about users’ interactions with the kiosks, the LinkNYC towers also include sensors and cameras. Apart from their role in normalizing everyday surveillance, the story underlying their activation demonstrates the lack of process undermining public accountability.

An initial 2016 disclosure that the kiosks “may” have cameras somehow evolved—without a public mandate, and without any public process—into a 2017 policy allowing the cameras to operate and record users. CityBridge retains the footage for 7 days.

The emergence of constant surveillance through a program ostensibly extending public services, without any apparent public oversight, suggests the need to be vigilant when programs that claim to make cities “smart” fail to respect privacy. 

While the privacy policy has improved, it still allows CityBridge to disclose retained video data to “…improve the services,” which could include any number of uses invisible to end users. It also isn’t clear what kinds of law enforcement requests will cause the footage to be disclosed, or what process CityBridge will undertake to resist requests that undermine speech, dissent, other constitutional rights, or the rights of discrete minority groups.

Some New Yorkers, such as activists working with Rethink Link NYC, and opera singer Judith Barnes, have creatively drawn attention to these concerns, and others they have raised beyond those shared by EFF. For instance, Rethink Link NYC argues that “the city is getting paid because third-parties (unaccountable to this privacy policy) are getting unprecedented information from passers-by,” and notes that “bridging the digital divide doesn’t require turning the whole city into one massive corporate surveillance network.”

We encourage everyone concerned about digital rights to raise their voices, and applaud these New Yorkers for taking creative action to raise awareness among their neighbors and visitors.

We also appreciate the efforts that CityBridge and the City of New York have made to improve LinkNYC’s privacy policies and practices. That said, we invite them to do more by providing a pathway for public participation and constructing remedies for potential breaches or misuse.



iOS 11’s Misleading “Off-ish” Setting for Bluetooth and Wi-Fi is Bad for User Security

Turning off your Bluetooth and Wi-Fi radios when you’re not using them is good security practice (not to mention good for your battery usage). When you consider Bluetooth’s known vulnerabilities, it’s especially important to make sure your Bluetooth and Wi-Fi settings are doing what you want them to. The iPhone’s newest operating system, however, makes it harder for users to control these settings.

On an iPhone, users might instinctively swipe up to open Control Center and toggle Wi-Fi and Bluetooth off from the quick settings. Each icon switches from blue to gray, leading a user to reasonably believe the radios have been turned off—in other words, fully disabled. In iOS 10, that was true. In iOS 11, however, the same setting change no longer actually turns Wi-Fi or Bluetooth “off.”

Instead, what actually happens in iOS 11 when you toggle your quick settings to “off” is that the phone will disconnect from Wi-Fi networks and some devices, but remain on for Apple services. Location Services is still enabled, Apple devices (like Apple Watch and Pencil) stay connected, and services such as Handoff and Instant Hotspot stay on. Apple’s UI fails to even attempt to communicate these exceptions to its users.

It gets even worse. When you toggle these settings in the Control Center to what is best described as “off-ish,” they don’t stay that way. Wi-Fi will turn back on fully if you drive or walk to a new location, and both Wi-Fi and Bluetooth will turn back on at 5:00 AM. None of this is clearly explained to users or left to them to choose, which leaves even security-aware users vulnerable.

The only way to turn off the Wi-Fi and Bluetooth radios is to enable Airplane Mode or navigate into Settings and go to the Wi-Fi and Bluetooth sections.

When a phone is designed to behave in a way other than what the UI suggests, it results in both security and privacy problems. A user has no visual or textual clues to understand the device’s behavior, which can result in a loss of trust in operating system designers to faithfully communicate what’s going on. Since users rely on the operating system as the bedrock for most security and privacy decisions, no matter what app or connected device they may be using, this trust is fundamental.

In an attempt to keep you connected to Apple devices and services, iOS 11 compromises users’ security. Such a loophole in connectivity can potentially leave users open to new attacks. Closing this loophole would not be a hard fix for Apple to make. At a bare minimum, Apple should make the Control Center toggles last until the user flips them back on, rather than overriding the user’s choice early the next morning. It’s simply a question of communicating better to users, and giving them control and clarity when they want their settings off—not “off-ish.”

No Airport Biometric Surveillance

Facial recognition, fingerprinting, and retina scans—all of these and more could be extracted from travelers by the government at checkpoints throughout domestic airports. Please join EFF in opposing the dangerous new bill, sponsored by Senator Thune (R-SD), which would authorize this expanded biometric surveillance.

The TSA Modernization Act (S. 1872) would authorize the U.S. Transportation Security Administration and U.S. Customs and Border Protection (CBP) to deploy “biometric technology to identify passengers” throughout our nation’s airports, including at “checkpoints, screening lanes, [and] bag drop and boarding areas.”

The bill would make a bad situation worse. Today, CBP subjects travelers on certain outgoing international flights to facial recognition screening. The bill would expand biometric screening to domestic flights (not just international flights), and would increase the frequency with which a traveler is subjected to biometric screening (not just once per trip).

Facial recognition is a unique threat to our privacy, because our faces are easy to capture and hard to change. Also, facial recognition has significant accuracy problems, especially for nonwhite travelers, who apparently are underrepresented in algorithmic training data. When the government gathers sensitive biometric information, data thieves might steal it and government employees might misuse it. Finally, the government might expand the ways it uses its biometric system from just identifying travelers to also screening them against problematic databases. Arrest warrant databases, for example, are riddled with error, and include many people accused of minor offenses. That’s why EFF has repeatedly opposed biometric surveillance at airports.
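To see why even small error rates matter at airport scale, a back-of-the-envelope base-rate calculation helps. All numbers below are hypothetical illustrations, not figures from CBP or the bill:

```python
# Base-rate arithmetic: when the watchlisted population is tiny
# relative to daily traveler volume, even an accurate face-matching
# system flags mostly innocent people. All numbers are hypothetical.
travelers_per_day = 2_000_000   # rough daily U.S. air travelers
watchlisted = 200               # hypothetical persons of interest flying
false_match_rate = 0.001        # 0.1% of innocent travelers misflagged
true_match_rate = 0.95          # 95% of watchlisted travelers caught

false_alarms = (travelers_per_day - watchlisted) * false_match_rate
true_hits = watchlisted * true_match_rate

# Share of flagged travelers who are actually on the list.
precision = true_hits / (true_hits + false_alarms)
print(f"innocent travelers flagged per day: {false_alarms:.0f}")
print(f"share of flags that are correct:    {precision:.1%}")
```

Under these assumptions, roughly 2,000 innocent travelers would be flagged every day, and fewer than one in ten flags would be correct. Accuracy disparities across demographic groups mean those false alarms would not be distributed evenly.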

This bill may be inspired by a plan, announced earlier this year by a CBP official, to collect travelers’ biometric information throughout airports: “Why not look to drive the innovation across the entire airport experience? . . . We want to make it available for every transaction in the airport where you have to show an ID today.”

CBP’s alarming vision of pervasive biometric surveillance at airports cuts against the rights to privacy, travel, and association that are hallmarks of our democratic country. The invasive data collection proposed in Sen. Thune’s new bill would invade the biometric privacy of countless innocent Americans and foreigners.

Join EFF in speaking out against this bill by emailing your Senator today.

Will the Equifax Data Breach Finally Spur the Courts (and Lawmakers) to Recognize Data Harms?

This summer, 143 million Americans had their most sensitive information breached, including their names, addresses, Social Security numbers (SSNs), and dates of birth. The breach occurred at Equifax, one of the three major credit reporting agencies that conduct the credit checks relied on by many industries, including landlords, car lenders, phone and cable service providers, and banks that offer credit cards, checking accounts, and mortgages. Misuse of this information can be financially devastating. Worse still, if a criminal uses stolen information to commit fraud, it can lead to the arrest and even prosecution of an innocent data breach victim.

Given the scope and seriousness of the risk that the Equifax breach poses to innocent people, and the anxiety that these breaches cause, you might assume that legal remedies would be readily available to compensate those affected. You’d be wrong.

While several lawsuits have already been filed against Equifax, the pathway for those cases to provide real help to victims is far from clear. That’s because even as the number and severity of data breaches increase, the law remains too narrowly focused on people who have suffered financial losses directly traceable to a breach.

The law consistently fails to recognize other sorts of harms to victims. In some cases this arises in the context of threshold “standing” to sue—a requirement that plaintiffs prove harm (lawyers call it “injury in fact”) just to get in the door in federal court. In other cases the problem arises within the claim itself, where “harm” is a legal element that must be proven for a plaintiff to win. Either way, judges are too often failing to ensure that data breach victims have legal remedies.

The consequences of this failure are two-fold. First, there’s the direct problem that the courthouse door is closed to hundreds of millions of people who face real risk and the accompanying reasonable fears about the misuse of their information. Second, but perhaps even more important, the lack of legal accountability means that the companies that hold our sensitive data continue to have insufficient incentives to take the steps necessary to protect us against the next breach. 

Effective computer security is hard, and no system will be free of bugs and errors. 

But in the Equifax hack, as in so many others, the breach resulted from a known security vulnerability. A patch to fix the vulnerability had been available for two months, but Equifax failed to implement it even though the vulnerability was being actively exploited. This wasn’t the first time Equifax had failed to take computer security seriously.

Even if increasing liability only accomplished an increased incentive to patch known security problems, that alone would protect millions of people.

The High Bar to Harm

While there are exceptions, too often courts dismiss data breach lawsuits based on a cramped view of what constitutes “harm.” These courts mistakenly require actual or imminent loss of money due to the misuse of information that is directly traceable to a single security breach.

Yet outside of data breach cases, courts routinely handle cases where damages aren’t just a current loss of money or property. The law has long recognized harms such as the infliction of emotional distress, assault, and damage to reputation and future business dealings.1 Victims of medical malpractice and toxic exposures can receive current compensation for potential future pain and suffering. As two law professors, EFF Advisory Board member Daniel J. Solove and Danielle Keats Citron, noted in comparing data breach cases to the recent claims of emotional distress brought by Terry Bollea (Hulk Hogan) against Gawker: “Why does the embarrassment over a sex video amount to $115 million worth of harm but the anxiety over the loss of personal data (such as a Social Security number and financial information) amount to no harm?”

For harms that can be difficult to quantify, some specific laws (e.g. copyright, wiretapping) provide for “statutory damages,” which set an amount per infraction.2

The recent decision dismissing the cases arising from the 2014-2015 Office of Personnel Management (OPM) hack is a good example of these “data breach blinders.” The court required that the plaintiffs—mostly government employees—demonstrate that they faced a certain, impending, and substantial risk that the stolen information would be misused against them, and that they be able to trace any harm they alleged to the actual breach. The fact that data sufficient to impersonate the plaintiffs was stolen, and stolen due to OPM’s negligence, was not sufficient. The court then disappointingly found that the fact that the Chinese government—as opposed to ordinary criminals—is suspected of having stolen the information counted against the plaintiffs in demonstrating likely misuse.

The ruling is especially troubling because we know that it can take years before the harms of a breach are realized. Criminals often trade our information back and forth before acting on it; indeed, there are entire online forums devoted to this exchange. Stolen credentials can be used to set up a separate persona that incurs debts, commits crimes, and more for quite a long time before the victim is aware of it. And it can be difficult, if not impossible, to trace a credit problem or criminal misuse back to any particular breach.

How are you to prove that the bad data that torpedoed your mortgage application came from the breaches at Equifax as opposed to the OPM, Target, Anthem, or Yahoo breaches, just to name a few?  

What the Future Holds

When data is being declared the ‘oil of the digital era’ and millions in venture capital funding await those who can exploit it, it’s time to reevaluate how to think of data breaches and misuse, and how we restore access to the courts for those impacted by them. 

Simply shrugging shoulders, as the OPM judge did, is not sufficient. Courts need to start applying what they already know in awarding emotional distress damages, reputational damages, and prospective business advantage damages to data breach cases, along with the recognition of current harm due to future risks, as in medical malpractice and pollution cases. If the fear caused by an assault can be actionable, so should the fear caused by the loss of enough personal data for a criminal to take out a mortgage in your name. These lessons can and should be brought to bear to help data breach victims get into the courthouse door and all the way to the end of the case.

If the political will is there, legislatures, both federal and state, can step up and create incentives for greater security and a much steeper downside for companies that fail to take the necessary steps to protect our data. 

The standing problem requires innovation in crafting claims, but even the Supreme Court in the recent Spokeo decision recognized that intangible harms can still be harms under the Constitution, and Congress can make that intention even more clear with proper legislative language. Alternately, as in copyright or wiretapping cases where the damages are hard to quantify, Congress can use techniques like statutory damages to ensure that those harmed receive compensation. Making such remedies clearly available in data misuse and breach cases is worthy of careful consideration. So far, the federal bills being floated in response to the Equifax breach and earlier breaches neither remove these obstacles to victims bringing legal claims nor ensure a private right of action.

Similarly, outside of the shadow of federal standing requirements, state legislatures can consider models of specific state law protections like California’s Lemon Law, formally known as the Song-Beverly Consumer Warranty Act. The Lemon Law provides specific extra remedies for those purchasing a new car that needs significant repairs. States should be able to recognize that data breach situations are special and may similarly require special remedies. Options include giving victims easier (and free) ways to clean up their credit, rather than just the standard, insufficient credit-monitoring schemes.

By looking at various options, Congress and state legislatures could spur a race to the top on computer security and create real consequences for those who choose to linger on the bottom.

Of course, shoring up our legal remedies isn’t the only avenue for incentivizing companies to protect our data better. Government agencies like the Federal Trade Commission and state attorneys general have a role to play, as does public pressure and media attention.  

One thing is for sure: as long as the consequences for neglecting to protect user data are weak, data breaches like the Equifax breach will continue to occur. Worse, it will become increasingly difficult for victims to demonstrate which breach caused their credit rate to drop, their job prospects to dim, or their hopes for a mortgage to be dashed. It’s long past time for us to rethink the approach to harm in data breach cases.



  • 1. Most of the ideas here come from a terrific forthcoming law review article: Daniel J. Solove & Danielle Keats Citron, Risk and Anxiety: A Theory of Data Breach Harms, 96 Tex. L. Rev. (forthcoming 2017) (manuscript at 12), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2885638.
  • 2. While we have been sharply critical of the mismatch between statutory damages and harm in copyright law, the idea itself is worthwhile in situations where harm is difficult to prove.

California is Close to Bringing Transparency and Accountability to Gang Databases

In April 2017, Terry Spears shared his story with San Diego’s local public media station KPBS on what it’s like to be listed in the CalGang database. Even though Mr. Spears says he has never been in a gang, it hasn’t stopped law enforcement from harassing him, and he once had his car seized for two weeks, disrupting his livelihood. He’s not the only one. California has several shared gang databases, the biggest of which is CalGang, and they are in desperate need of reform.

Fortunately, Governor Brown can sign a bill today, A.B. 90, that will go far in solving these problems.

As we explained in our earlier blogpost about A.B. 90, a 2016 California State Auditor’s report on California’s gang database was damning. It detailed how the CalGang database is riddled with errors and unsubstantiated information. It contains records on individuals that should never have been included in the database as well as records that should have long since been purged. And the system lacks basic oversight safeguards. The report went as far as saying that due to the inaccurate information in the database, its crime-fighting value was “diminished.”

With the engagement of a broad coalition of civil liberties organizations—such as Youth Justice Coalition, National Immigration Law Center, and Urban Peace Institute, among others—much-needed reform was passed last year. However, that bill (A.B. 2298) was written prior to the California Auditor publishing its findings and therefore did not anticipate many of the important problems identified by the audit. Further work is thus needed to ensure that law enforcement agencies follow through on last year’s reforms, and that we build on them to prevent future abuses.

A.B. 90 has passed the California Senate and Assembly and is currently awaiting Gov. Jerry Brown’s signature. As we argued in our letter to Gov. Brown:

A.B. 90 enhances accountability of the CalGang and similar databases by codifying new standards and regulations for operating a shared gang database, including audits for accuracy and proper use. The bill would also create a new technical advisory committee comprised of all stakeholders—including criminal defense representatives, civil rights and immigration experts, a gang-intervention specialist, and a person personally impacted by being labeled a gang member—as opposed to just representatives from law enforcement.

Further, the legislation would ensure that CalGang database administration is informed by empirical research when developing criteria for including Californians in the database, and that the information is not retained indefinitely.

A.B. 90 builds upon and brings additional common-sense reforms to ensure that Californians can hold law enforcement accountable when they are unfairly targeted and listed in opaque databases.

Take Action

Tell Gov. Brown to sign A.B. 90 into law

Apple does right by users and advertisers are displeased

With the new Safari 11 update, Apple takes an important step to protect your privacy, specifically by limiting how your browsing habits are tracked and shared with parties other than the sites you visit. In response, Apple is being criticized by the advertising industry for “destroying the Internet’s economic model.” While the advertising industry is trying to shift the conversation to what it calls the economic model of the Internet, the conversation must instead focus on the indiscriminate tracking of users and the violation of their privacy.

When you browse the web, you might think that your information lives only with the sites you choose to visit. However, many sites load elements that share your data with third parties. First-party cookies are set by the domain you are visiting, allowing sites to recognize you from your previous visits but not to track you across other sites. For example, if you first visit examplemedia.com and then socialmedia.com, each visit would be known only to the respective site. In contrast, third-party cookies are set by domains other than the one you are visiting, and were created to circumvent the original design of cookies. In this case, when you visit examplemedia.com and it also loads tracker.socialmedia.com, socialmedia.com can track you on every site you visit where its tracker is loaded.
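The cross-site mechanics above can be sketched in a few lines. This is a toy model, not browser code: the domains and cookie IDs are invented, but the keying of cookies by the domain that set them, and the resulting cross-site visibility for an embedded tracker, work the same way in real browsers.

```python
# Toy model of cross-site tracking via third-party cookies.
# Cookies are keyed by the domain that SET them, not the page domain,
# so an embedded tracker sees the same cookie on every site that loads it.

cookie_jar = {}   # domain -> cookie value stored in this browser
tracker_log = {}  # what tracker.socialmedia.com learns, per cookie ID

def visit(page_domain, embedded_third_parties=()):
    # First-party cookie: set by the domain in the address bar.
    cookie_jar.setdefault(page_domain, f"id-for-{page_domain}")
    # Third-party cookies: the same cookie is sent back to the tracker
    # on every site that embeds it, linking the visits together.
    for tp in embedded_third_parties:
        cid = cookie_jar.setdefault(tp, f"id-for-{tp}")
        tracker_log.setdefault(cid, []).append(page_domain)

visit("examplemedia.com", ["tracker.socialmedia.com"])
visit("shop.example",     ["tracker.socialmedia.com"])
visit("news.example")  # no tracker embedded: this visit stays private

# The tracker now links two unrelated sites to one browser:
print(tracker_log["id-for-tracker.socialmedia.com"])
# → ['examplemedia.com', 'shop.example']
```

Note that news.example never appears in the tracker’s log: blocking the third-party request, as Safari and tools like Privacy Badger do, is enough to keep a visit out of the profile.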

Websites commonly use third-party tracking to allow analytics services, data brokerages, and advertising companies to set unique cookies. This data is aggregated into individual profiles and fed into a real-time auction process where companies bid for the right to serve an ad to a user when they visit a page. This mechanism can be used for general behavioral advertising but also for “retargeting.” In the latter case, the vendor of a product viewed on one site buys the chance to target the user later with ads for the same product on other sites around the web. As a user, you should be able to expect that you will be treated with respect and that your personal browsing habits will be protected. When websites share your behavior without your knowledge, that trust is broken.
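The auction step described above can be sketched as a simple second-price auction, a common design in ad exchanges. The bidder names and prices here are invented for illustration; real exchanges run variants of this in milliseconds, driven by the cookie-based profile.

```python
# Minimal sketch of a real-time bidding auction (second-price variant):
# the highest bidder wins the impression but pays the second-highest bid.

def run_auction(bids):
    """bids: advertiser -> bid in dollars. Returns (winner, price_paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder there is no second price; they pay their own bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# A retargeter that knows (via its cookie) that this browser viewed a
# jacket will typically outbid generic campaigns for the impression.
bids = {"retargeter": 1.20, "brand_campaign": 0.80, "generic": 0.10}
winner, price = run_auction(bids)
print(winner, price)  # → retargeter 0.8
```

This is why retargeted ads follow you around the web: the vendor’s bidder values your impression far more than anyone else once its cookie ties you to a viewed product.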

Safari has been blocking third-party cookies by default since Safari 5.1, released in 2010, a policy that has been key to Apple’s emerging identity as a defender of user privacy. Safari distinguished these third-party cookies from those placed on our machines by first parties – sites we visit intentionally. From 2011 onwards, advertising companies have been devising ways to circumvent these protections. One of the biggest retargeters, Criteo, even acquired a patent on a technique to subvert this protection.1 Criteo, however, was not the first company to circumvent Safari’s user protection. In 2012, Google paid $22.5 million to settle an action by the FTC after it used another workaround to track Safari users with cookies from the DoubleClick ad network. Safari had an exception to the third-party ban for submission forms where the user entered data deliberately (e.g. to sign up). Google exploited this loophole to set a unique cookie when Safari users visited sites participating in Google’s advertising network.

The new Safari update, with Intelligent Tracking Prevention, closes loopholes around third-party cookie-blocking by using machine learning to distinguish the sites a user has a relationship with from those they don’t, and treating their cookies differently. When you visit a site, any cookies it sets can be used in a third-party context for twenty-four hours; during that window they can still be used to track you, but afterward they can be used only for login, not for tracking. This means that sites you visit regularly are not significantly affected. The companies hit hardest will be ad companies unconnected with any major publisher.
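The time-based rule can be sketched as a small policy function. This is an approximation of the behavior described above, not Apple’s actual implementation, and the `purpose` labels are our own shorthand.

```python
# Sketch of Intelligent Tracking Prevention's 24-hour rule (approximate):
# within 24 hours of your last first-party interaction with a site, its
# cookies work normally in third-party contexts; after that, they may be
# used only for login, not for tracking.

DAY = 24 * 60 * 60  # seconds

def third_party_cookie_allowed(now, last_first_party_interaction, purpose):
    """purpose is 'tracking' or 'login' (our shorthand, not Apple's API)."""
    age = now - last_first_party_interaction
    if age <= DAY:
        return True               # full third-party use, tracking included
    return purpose == "login"     # afterward: login only

t0 = 0  # time of your last visit to the site, in seconds
print(third_party_cookie_allowed(t0 + 3600, t0, "tracking"))     # → True
print(third_party_cookie_allowed(t0 + 2 * DAY, t0, "tracking"))  # → False
print(third_party_cookie_allowed(t0 + 2 * DAY, t0, "login"))     # → True
```

The effect is that a social network you log into daily keeps working, while a tracker you never intentionally visit loses its ability to follow you after a day.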

At EFF we understand the need for sites to build a successful business model, but this should not come at the expense of people’s privacy. This is why we launched initiatives like the EFF DNT Policy and tools like Privacy Badger. These initiatives and tools target tracking, not advertising. Rather than attacking Apple for serving their users, the advertising industry should treat this as an opportunity to change direction and develop advertising models that respect (and not exploit) users.

Apple has been a powerful force in user privacy on a mass scale in recent years, as reflected by their support for encryption, the intelligent processing of user data on device rather than in the cloud, and limitations on ad tracking on mobile and desktop. By some estimates, Apple handles 30% of all pages on mobile. Safari’s innovations are not the silver bullet that will stop all tracking, but by stepping up to protect their users’ privacy Apple has set a challenge for other browser developers. When the user’s privacy interests conflict with the business models of the advertising technology complex, is it possible to be neutral? We hope that Mozilla, Microsoft and Google will follow Apple, Brave and Opera’s lead.

  • 1. In order to present itself as a first party, Criteo had its host websites include code on internal links that redirected when clicked. So if you click on a link to jackets in a clothes store, your click brings you for an instant to Criteo before forwarding you on to your intended destination. This trick makes Criteo appear as a first party to your browser, and it pops up a notification stating that by clicking on the page you consent to it storing a cookie. Once Safari accepted a first-party cookie from a site, that site was allowed to set cookies even when it was a third party, so Criteo could then retarget you elsewhere. Other companies (AdRoll, for example) used the same trick.

Crypto is For Everyone—and American History Proves It

Over the last year, law enforcement officials around the world have been pressing hard on the notion that without a magical “backdoor” to access the content of any and all encrypted communications by ordinary people, they’ll be totally incapable of fulfilling their duties to investigate crime and protect the public. EFF and many others have pushed back—including launching the SaveCrypto petition with our friends, which this week reached 100,000 signatures, forcing a response from President Obama.

Read more…

Here’s What Public Safety Agencies Think About Automated License Plate Recognition 

Law enforcement agencies around the country have been all too eager to adopt mass surveillance technologies, but sometimes they have put little effort into ensuring the systems are secure and the sensitive data they collect on everyday people is protected.

Read more…

How To Find and Delete Everything You’ve Ever Said to Google Now

Google likes to keep all of your voice searches on its servers so it can more easily learn to recognize your voice, understand what you might be looking for in the future, and, of course, serve you ads. If you want to review this archive of Google Now searches and clear it out, here’s what to do.

Read more…

Google Swears Android Auto Isn’t Spying On You (That Much)

The era of car computers is upon us, and it’s a little scary from a privacy perspective. Look no further than the recent controversy over how much data Google is collecting about drivers using Android Auto. We know this much: Google is probably collecting more data than you realize.

Read more…

Private Medical Data of Over 1.5 Million People Wound Up Exposed to Everyone Online 

Police injury reports, drug tests, detailed doctor visit notes, social security numbers—all were inexplicably unveiled on a public subdomain of Amazon Web Services. Welcome to the next big data breach horrorshow. Instead of hackers, it’s old-fashioned neglect that exposed your most sensitive information.

Read more…
