
Privacy


Republican DACA Bill Would Expand Use of Drones, Biometrics

EPIC - 3 hours 4 min ago

The Secure and Succeed Act (S. Amdt. 1959 to H.R. 2579), sponsored by several Republican Senators, would link DACA with high-tech border surveillance. Customs and Border Protection would use facial recognition and other biometric technologies to inspect travelers, both US citizens and non-citizens, at airports. The bill also establishes "Operation Phalanx," which instructs the Department of Defense—a military agency—to conduct domestic surveillance using drones. EPIC has pursued many FOIA cases on border surveillance involving biometrics, drones, and airport body scanners. In a letter to the House, EPIC warned that "many of the techniques that are proposed to enhance border surveillance have direct implications for the privacy of American citizens."

Categories: Privacy

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

EFF News - Tue, 2018-02-20 19:30

In the coming decades, artificial intelligence (AI) and machine learning technologies are going to transform many aspects of our world. Much of this change will be positive; the potential for benefits in areas as diverse as health, transportation and urban planning, art, science, and cross-cultural understanding is enormous. We've already seen things go horribly wrong with simple machine learning systems; but increasingly sophisticated AI will usher in a world that is strange and different from the one we're used to, and there are serious risks if this technology is used for the wrong ends.

Today EFF is co-releasing a report with a number of academic and civil society organizations on the risks from malicious uses of AI and the steps that should be taken to mitigate them in advance.

At EFF, one area of particular concern has been the potential interactions between computer insecurity and AI. At present, computers are inherently insecure, and this makes them a poor platform for deploying important, high-stakes machine learning systems. It's also the case that AI might have implications for computer [in]security that we need to think about carefully in advance. The report looks closely at these questions, as well as the implications of AI for physical and political security. You can read the full document here.

Categories: Privacy

EPIC Amicus: Supreme Court to Hear Arguments in Wiretap Act Case

EPIC - Tue, 2018-02-20 17:15

The Supreme Court will hear arguments this week in Dahda v. United States, a case concerning the federal Wiretap Act and the suppression of evidence obtained following an invalid wiretap order. The Wiretap Act requires exclusion of evidence obtained as a result of an invalid order, but a lower court denied suppression in the case even though the order was unlawfully broad. In an amicus brief, EPIC wrote that "it is not for the courts to create textual exceptions" to federal privacy laws. EPIC explained that Congress enacted strict and unambiguous privacy provisions in the Wiretap Act. "If the government wishes a different outcome," EPIC wrote, "then it should go to Congress to revise the statute." EPIC routinely participates as amicus curiae in privacy cases before the Supreme Court, most recently in Byrd v. United States (suspicionless searches of rental cars) and Carpenter v. United States (warrantless searches of cellphone location records).

Categories: Privacy

Did Congress Really Expect Us to Whittle Our Own Personal Jailbreaking Tools?

EFF News - Tue, 2018-02-20 14:46

In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), and profoundly changed the relationship of Americans to their property.

Section 1201 of the DMCA bans the bypassing of "access controls" for copyrighted works. Originally, this meant that even though you owned your DVD player, and even though it was legal to bring DVDs home with you from your European holidays, you weren't allowed to change your DVD player so that it would play those out-of-region DVDs. DVDs were copyrighted works, the region-checking code was an access control, and so even though you owned the DVD, and you owned the DVD player, and even though you were allowed to watch the disc, you weren't allowed to modify your DVD player to play your DVD (which you were allowed to watch).

Experts were really worried about this: law professors, technologists and security experts saw that soon we'd have software—that is, copyrighted works—in all kinds of devices, from cars to printer cartridges to voting machines to medical implants to thermostats. If Congress banned tinkering with the software in the things you owned, it would tempt companies to use that software to create "private laws" that took away your rights to use your property in the way you saw fit. For example, it's legal to use third party ink in your HP printer, but once HP changed its printers to reject third-party ink, they could argue that anything you did to change them back was a violation of the DMCA.

Congress's compromise was to order the Library of Congress and the Copyright Office to hold hearings every three years, in which the public would be allowed to complain about ways in which these locks got in the way of their legitimate activities. Corporations weigh in about why their business interests outweigh your freedom to use your property for legitimate ends, and then the regulators deliberate and create some temporary exemptions, giving the public back the right to use their property in legal ways, even if the manufacturers of their property don't like it.

If it sounds weird that you have to ask the Copyright Office for permission to use your property, strap in, we're just getting started.

Here's where it gets weird: DMCA 1201 allows the Copyright Office to grant "use" exemptions, but not "tools" exemptions. That means that if the Copyright Office likes your proposal, they can give you permission to jailbreak your gadgets to make some use (say, install third-party apps on your phone, or record clips from your DVDs to use in film studies classes), but they can't give anyone the right to give you the tool needed to make that use (law professor and EFF board member Pam Samuelson argues that the Copyright Office can go farther than this, at least some of the time, but the Copyright Office disagrees).

Apparently, fans of DMCA 1201 believe that the process for getting permission to use your own stuff should go like this:

1. A corporation sells you a gadget that disallows some activity, or they push a software update to a gadget you already own to take away a feature it used to have;

2. You and your lawyers wait up to three years, then you write to the Copyright Office explaining why you think this is unfair;

3. The corporation that made your gadget tells the Copyright Office that you're a whiny baby who should just shut up and take it;

4. You write back to the Copyright Office to defend your use;

5. Months later, the Library of Congress gives you a limited permission to use your property (maybe);

And then...

6. You get a degree in computer science, and subject your gadget to close scrutiny to find a flaw in the manufacturer's programming;

7. Without using code or technical information from anyone else (including other owners of the same gadget) you figure out how to exploit that flaw to let you use your device in the way the government just said you could;

8. Three years later, you do it again.

Now, in practice, that's not how it works. In practice, people who want to use their own property in ways that the Copyright Office approves of just go digging around on offshore websites, looking for software that lets them make that use. (For example, farmers download alternative software for their John Deere tractors from websites they think might be maintained by Ukrainian hackers, though no one is really sure). If that software bricks their device, or steals their personal information, they have no remedy, no warranty, and no one to sue for cheating them.

That's the best case.

But often, the Library of Congress makes it even harder to make the uses they're approving. In 2015, they granted car owners permission to jailbreak their cars in order to repair them—but they didn't give mechanics the right to jailbreak the cars they were fixing. That ruling means that you, personally, can fix your car, provided that 1) you know how to fix a car; and 2) you can personally jailbreak the manufacturer's car firmware (in addition to abiding by the other snares in the final exemption language).

In other cases, the Copyright Office limits the term of the exemption as well as the scope: in the 2015 ruling, the Copyright Office gave security researchers the right to jailbreak systems to find out whether they were secure enough to be trusted, but not industrial systems (whose security is very important and certainly needs to be independently verified by those systems' owners!) and they also delayed the exemption's start for a full year, meaning that security researchers would only get two years to do their jobs before they'd have to go back to the Copyright Office and start all over again.

This is absurd.

Congress crafted the exemptions process to create an escape valve on the powerful tool it was giving to manufacturers with DMCA 1201. But even computer scientists don't hand-whittle their own software tools for every activity: like everyone else, they rely on specialized toolsmiths who make software and hardware that is tested, warranted, and maintained by dedicated groups, companies and individuals. The idea that every device in your home will have software that limits your use, and you can only get those uses back by first begging an administrative agency and then gnawing the necessary implement to make that use out of the lumber of your personal computing environment is purely absurd.

The Copyright Office is in the middle of a new rulemaking, and we've sent in requests for several important exemptions, but we're not kidding ourselves here: as important as it is to get the US government to officially acknowledge that DMCA 1201 locks up legitimate activities, and to protect end users, without the right to avail yourself of tools, the exemptions don't solve the whole problem.

That's why we're suing the US government to invalidate DMCA 1201. DMCA 1201 wasn't fit for purpose in 1998, and it has shown its age and contradictions more with each passing year.

Categories: Privacy

Supreme Court Leaves Data Breach Decision In Place

EPIC - Tue, 2018-02-20 13:40

The Supreme Court has denied a petition for a writ of certiorari in Carefirst, Inc. v. Attias, a case concerning standing to sue in data breach cases. Consumers had sued health insurer Carefirst after faulty security practices allowed hackers to obtain 1.1 million customer records. EPIC filed an amicus brief backing the consumers, arguing that if "companies fail to invest in reasonable security measures, then consumers will continue to face harm from data breaches." The federal appeals court agreed with EPIC and held that consumers may sue companies that fail to safeguard their personal data. Carefirst appealed the decision, but the Supreme Court chose not to take the case. EPIC regularly files amicus briefs defending standing in consumer privacy cases, most recently in Eichenberger v. ESPN, where the Ninth Circuit also held for consumers, as well as Gubala v. Time Warner Cable and In re SuperValu Customer Data Security Breach Litigation.

Categories: Privacy

"FREE from Chains!": Eskinder Nega is Released from Jail

EFF News - Fri, 2018-02-16 20:13

Eskinder Nega, one of Ethiopia's most prominent online writers, winner of the Golden Pen of Freedom in 2014, the International Press Institute's World Press Freedom Hero for 2017, and PEN International's 2012 Freedom to Write Award, has finally been set free.

Eskinder is greeted by well-wishers on his release. Picture by Befekadu Hailu

Eskinder has been detained in Ethiopian jails since September 2011. He was accused and convicted of violating the country's Anti-Terrorism Proclamation, primarily by virtue of his warnings in online articles that if Ethiopia's government continued on its authoritarian path, it might face an Arab Spring-like revolt.

Ethiopia's leaders refused to listen to Eskinder's message. Instead, they decided the solution was to silence the messenger. Now, within the last few months, that refusal to engage with the challenges of democracy has led to the inevitable result. For two years, protests against the government have risen in frequency and size. Ethiopia's Prime Minister, Hailemariam Desalegn, sought to reduce tensions by introducing reforms and releasing political prisoners like Eskinder. Despite thousands of prisoner releases, and the closure of one of the country's more notorious detention facilities, the protests continue. A day after Eskinder's release, Desalegn was forced to resign from his position. A day after that, the government declared a new state of emergency.

Even as they came face-to-face with the consequences of suppressing critics like Eskinder, the Ethiopian authorities pushed back against the truth. Eskinder's release was delayed for days, after prison officials repeatedly demanded that Eskinder sign a confession that falsely claimed he was a member of Ginbot 7, an opposition party that is banned as a terrorist organization within Ethiopia.

Eventually, following widespread international and domestic pressure, Eskinder was released without concession.

Eskinder, who was in jail for nearly seven years, joins a world whose politics and society have been transformed since his arrest. His predictions about the troubles Ethiopia would face if it silenced free expression may have come true, but his views were not perfect. He was, and will be again, an online writer, not a prophet. The promise of the Arab Spring that he identified has descended into its own authoritarian crackdowns. The technological tools he used to bypass Ethiopia's censorship and speak to a wider public are now just as often used by dictators to silence dissenting voices. But that means we need more speakers like Eskinder, not fewer. And those speakers should be carefully listened to, not forced into imprisonment and exile.

Categories: Privacy

New National Academy of Sciences Report on Encryption Asks the Wrong Questions

EFF News - Fri, 2018-02-16 16:04

The National Academy of Sciences (NAS) released a much-anticipated report yesterday that attempts to influence the encryption debate by proposing a “framework for decisionmakers.” At best, the report is unhelpful. At worst, its framing makes the task of defending encryption harder.

The report collapses the question of whether the government should mandate “exceptional access” to the contents of encrypted communications into the question of how the government could accomplish this mandate. We wish the report gave as much weight to the benefits of encryption and the risks that exceptional access poses to everyone’s civil liberties as it does to the needs—real and professed—of law enforcement and the intelligence community.

From its outset two years ago, the NAS encryption study was not intended to reach any conclusions about the wisdom of exceptional access, but instead to “provide an authoritative analysis of options and trade-offs.” This would seem to be a fitting task for the National Academy of Sciences, which is a non-profit, non-governmental organization, chartered by Congress to provide “objective, science-based advice on critical issues affecting the nation.” The committee that authored the report included well-respected cryptographers and technologists, lawyers, members of law enforcement, and representatives from the tech industry. It also held two public meetings and solicited input from a range of outside stakeholders, EFF among them.

EFF’s Seth Schoen and Andrew Crocker presented at the committee’s meeting at Stanford University in January 2017. We described what we saw as “three truths” about the encryption debate: First, there is no substitute for “strong” encryption, i.e. encryption without any intentionally included method for any party (other than the intended recipient/device holder) to access plaintext to allow decryption on demand by the government. Second, an exceptional access mandate will help law enforcement and intelligence investigations in certain cases. Third, “strong” encryption cannot be successfully fully outlawed, given its proliferation, the fact that a large proportion of encryption systems are open-source, and the fact that U.S. law has limited reach on the global stage. We wish the report had made a concerted attempt to grapple with that first truth, instead of confining its analysis to the second and third.

We recognize that the NAS report was undertaken in good faith, but the trouble with the final product is twofold.

First, its framing is hopelessly slanted. Not only does the report studiously avoid taking a position on whether compromising encryption is a good idea, its “options and tradeoffs” are all centered around the stated government need of “ensuring access to plaintext.” To that end, the report examines four possible options: (1) taking no legislative action, (2) providing additional support for government hacking and other workarounds, (3) a legislative mandate that providers provide government access to plaintext, and (4) mandating a particular technical method for providing access to plaintext.

EFF raised concerns that encryption does not just support free expression, it is free expression.

But all of these options, including “no legislative action,” treat government agencies’ stated need for access to plaintext as the only goal worth study, with everything else as a tradeoff. For example, from EFF’s perspective, the adoption of encryption by default is one of the most positive developments in technology policy in recent years because it permits regular people to keep their data confidential from eavesdroppers, thieves, abusers, criminals, and repressive regimes around the world. By contrast, because of its framing, the report discusses these developments purely in terms of criminals “who may unknowingly benefit from default settings” and thereby evade law enforcement.

By approaching the question only as one of how to deliver plaintext to law enforcement, rather than approaching the debate more holistically, the NAS does us a disservice. The question of whether encryption should or shouldn’t be compromised for “exceptional access” should not be treated as one of several in the encryption debate: it is the question.

Second, although it attempts to recognize the downsides of exceptional access, the report’s discussion of the possible risks to civil liberties is notably brief. In the span of only three pages (out of nearly a hundred), it acknowledges the importance of encryption to supporting values such as privacy and free expression. Unlike the interests of law enforcement, which are represented in every section, the risks to civil liberties posed by exceptional access are treated as just one more tradeoff, confined to a single stand-alone discussion.

To emphasize the report’s focus, the civil liberties section ends with the observation that criminals and terrorists use encryption to “take actions that negatively impact the security of law-abiding individuals.” This ignores the possibility that encryption can both enhance civil liberties and preserve individual safety. That’s why, for example, experts on domestic violence argue that smartphone encryption protects victims from their abusers, and that law enforcement should not seek to compromise smartphone encryption in order to prosecute these crimes.

Furthermore, the simple act of mandating that providers break encryption in their products is itself a significant civil liberties concern, totally apart from privacy and security implications that would result. Specifically, EFF raised concerns that encryption does not just support free expression, it is free expression. Notably absent is any examination of the rights of developers of cryptographic software, particularly given the role played by free and open source software in the encryption ecosystem. It ignores the legal landscape in the United States—one that strongly protects the principle that code (including encryption) is speech, protected by the First Amendment.

The report also underplays the international implications of any U.S. government mandate for U.S.-based providers. Currently, companies resist demands for plaintext from regimes whose respect for the rule of law is dubious, but that will almost certainly change if they accede to similar demands from U.S. agencies. In a massive understatement, the report notes that this could have “global implications for human rights.” We wish that the NAS had given this crucial issue far more emphasis and delved more deeply into the question, for instance, of how Apple could plausibly say no to a Chinese demand to wiretap a Chinese user’s FaceTime conversations while providing that same capacity to the FBI.

In any tech policy debate, expert advice is valuable not only to inform how to implement a particular policy but whether to undertake that policy in the first place. The NAS might believe that as the provider of “objective, science-based advice,” it isn’t equipped to weigh in on this sort of question. We disagree.

Categories: Privacy

House Draft Data Security Bill Preempts Stronger State Safeguards

EPIC - Fri, 2018-02-16 14:45

Rep. Luetkemeyer (R-MO) and Rep. Maloney (D-NY) circulated a draft bill, the "Data Acquisition and Technology Accountability and Security Act," that would set federal requirements for companies collecting personal data and require prompt breach notification. The Federal Trade Commission, which has often failed to pursue important data breach cases, and state Attorneys General would both be responsible for enforcing the law. The law would only trigger liability if the personal data breached is "reasonably likely to result in identity theft, fraud, or economic loss" and would preempt stronger state data breach laws. Earlier this week, EPIC President Marc Rotenberg testified before the House, calling for comprehensive data privacy legislation that would preserve stronger state laws. Last fall, EPIC testified at a Senate hearing on the Equifax breach, calling it one of the worst in U.S. history.

Categories: Privacy

Mueller Indicts Russian Nationals, Entities for Election Interference

EPIC - Fri, 2018-02-16 14:20

Special Counsel Robert Mueller has indicted thirteen Russian nationals and three Russian entities for interfering in the 2016 U.S. presidential election. "Beginning as early as 2014" the defendants began operations "to interfere with the U.S. political system" and "sow discord," the indictment explains. They also posed as U.S. persons online, reaching "significant numbers of Americans" on social media. EPIC first sought details of the Russians' "multifaceted" influence campaign in January 2017, pursuing release of the complete Intelligence Community assessment on Russian meddling. EPIC President Marc Rotenberg recently highlighted the role of the Russian Internet Research Agency, named in the Mueller indictment, explaining, "Facebook sold advertising to Russian troll farms working to undermine the American political process." EPIC launched a new project on Democracy and Cybersecurity in early 2017 to help preserve democratic institutions.

Categories: Privacy

EFF and MuckRock Are Filing a Thousand Public Records Requests About ALPR Data Sharing

EFF News - Fri, 2018-02-16 13:28

EFF and MuckRock have launched a new public records campaign to reveal how much data law enforcement agencies have collected using automated license plate readers (ALPRs) and are sharing with each other.

Over the next few weeks, the two organizations are filing approximately 1,000 public records requests with agencies that have deals with Vigilant Solutions, one of the nation’s largest vendors of ALPR surveillance technology and software services. We’re seeking documentation showing who’s sharing ALPR data with whom. We are also requesting information on how many plates each agency scanned in 2016 and 2017 and how many of those plates were on predetermined “hot lists” of vehicles suspected of being connected to crimes.

You can see the full list of agencies and track the progress of each request through the Street-Level Surveillance: ALPR Campaign page on MuckRock.

As Easy As Adding a Friend on Facebook

“Joining the largest law enforcement LPR sharing network is as easy as adding a friend on your favorite social media platform.”

That’s a direct quote from Vigilant Solutions in its promotional materials for its ALPR technology. Through its LEARN system, Vigilant Solutions has made it possible for government agencies—particularly sheriff’s offices and police departments—to grant 24-7, unrestricted database access to hundreds of other agencies around the country.

ALPRs are camera systems that scan every license plate that passes in order to create enormous databases of where people drive and park their cars both historically and in real time. Collected en masse by ALPRs mounted on roadways and vehicles, this data can reveal sensitive information about people, such as where they work, socialize, worship, shop, sleep at night, and seek medical care or other services. ALPR allows your license plate to be used as a tracking beacon and a way to map your social networks.
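As a rough illustration of how plate reads accumulate into exactly this kind of location history and how "hot list" checks work, here is a minimal sketch. The data structures and names below are our own assumptions for illustration, not any vendor's actual schema.

```typescript
// Illustrative sketch only: how ALPR reads could accumulate into a searchable
// location history and be checked against a "hot list". The structures and
// names are hypothetical, not a real vendor schema.

interface PlateRead {
  plate: string;
  timestamp: Date;
  location: { lat: number; lon: number };
  cameraId: string;
}

const history = new Map<string, PlateRead[]>();  // plate -> every sighting
const hotList = new Set<string>(["ABC1234"]);    // plates flagged in advance

function ingest(read: PlateRead): void {
  const sightings = history.get(read.plate) ?? [];
  sightings.push(read);
  history.set(read.plate, sightings);            // every pass is retained

  if (hotList.has(read.plate)) {
    console.log(`Hot-list hit: ${read.plate} at ${read.timestamp.toISOString()}`);
  }
}

// A single query then reconstructs a driver's movements over time.
function movements(plate: string): PlateRead[] {
  return history.get(plate) ?? [];
}
```

Nothing in this sketch expires old reads; that is the point. Once retained and shared, each scan becomes a permanent data point in someone's travel history.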

Here’s the question: who is on your local police department’s and sheriff’s office’s ALPR friend lists?

Perhaps you live in a “sanctuary city.” There’s a very real chance local police are sharing ALPR data with Immigration & Customs Enforcement, Customs & Border Protection, or one of their subdivisions.

Perhaps you live thousands of miles from the South. You’d be surprised to learn that scores of small towns in rural Georgia have round-the-clock access to your ALPR data. This includes towns like Meigs, which serves a population of 1,000 and did not even have full-time police officers until last fall.

In 2017, EFF and the Center for Human Rights and Privacy filed records requests with several dozen law enforcement agencies in California. We found that police departments were routinely sharing ALPR data with a wide variety of agencies that may be difficult to justify. Police often shared with the DEA, FBI, and U.S. Marshals—but they also shared with federal agencies with a less clear interest, such as the U.S. Forest Service, the U.S. Department of Veteran Affairs, and the Air Force base at Fort Eustis. California agencies were also sharing with public universities on the East Coast, airports in Tennessee and Texas, and agencies that manage public assistance programs, like food stamps and indigent health care. In some cases, the records indicate the agencies were sharing with private actors.

Meanwhile, most agencies are connected to an additional network called the National Vehicle Locator System (NVLS), which shares sensitive information with more than 500 government agencies, the identities of which have never been publicly disclosed.

Here are the data sharing documents we obtained in 2017, which we are seeking to update with our new series of requests.

We hope to create a detailed snapshot of the ALPR mass surveillance network linking law enforcement and other government agencies nationwide. Currently, the only entity that has the definitive list is Vigilant Solutions, which, as a private company, is not subject to state or federal public record disclosure laws. So far, the company has not volunteered this information, despite reaping many millions in tax dollars.

Until they do, we’ll keep filing requests.

For more information on ALPRs, visit EFF’s Street-Level Surveillance hub.

Categories: Privacy

The Secure and Succeed Act Is [Still] Bad For Immigrants and Americans Alike

CDT - Fri, 2018-02-16 11:53

Senator Chuck Grassley (R-Iowa) recently introduced The Secure and Succeed Act of 2018 (“Secure Act”), an almost 600-page omnibus immigration bill. The Secure Act has many co-sponsors, including Senator John Cornyn (R-TX), whose impact on the bill is readily apparent. The Secure Act mirrors Cornyn’s Building America’s Trust Act, which was introduced in August 2017. Cornyn’s bill has since been duplicated in multiple legislative proposals, including this recent Grassley addition to the immigration debate. The White House endorsed the Secure Act on February 14. The bill addresses the future of Dreamers, limitations on legal immigration, new immigration enforcement measures, and border security.

This blog focuses on border security. CDT would welcome measured proposals to address border security challenges, but this legislation fails to deliver. Two of the more troubling types of provisions include: 1) discriminatory and invasive screening of visa applicants and visa holders and 2) inflexible and intrusive border security mandates. Yesterday, this was one of the four immigration bills that received a vote in the Senate, and thankfully it failed 39 to 60. As Congress goes back to the drawing board, legislators should avoid returning to the Secure Act or the Building America’s Trust Act for inspiration.

Discriminatory and Invasive Screening of Visa Holders and Applicants
Social Media Screening
The Secure Act calls on the Department of Homeland Security (DHS) to, “to the greatest extent practicable, and in a risk-based manner and on an individualized basis, review the social media accounts of visa applicants who are citizens of, or who reside in, high-risk countries.” High-risk countries are identified by their level of cooperation with the United States in combating terrorism and “any other criteria the [Secretary of Homeland Security] determines appropriate.” Sec. 1736. Targeting individuals on the basis of nationality is discriminatory, an ineffective means of identifying security threats, and will likely burden nationals of Muslim countries.

CDT has strenuously and repeatedly argued against the collection of social media identifiers. As a matter of security, this type of screening will fail to yield desired results. Wrongdoers would simply abstain from engaging with social media, or provide sanitized accounts for review. Instead of bolstering security, this type of screening would waste resources and burden travelers. Reviewing the social media information of visa applicants would be invasive, chill free speech and association, and would burden the right to anonymity. Social media information is idiosyncratic and context-dependent, and the risk of drawing negative, mistaken inferences from the data is great. This screening would also implicate Americans’ freedoms. Americans’ social media information could get swept up in a visa review, and they may find themselves facing similar scrutiny when they travel abroad if this practice proliferates.

Finally, while social media review of this kind is unfortunately already taking place, this language is particularly troubling because it could be interpreted to authorize DHS to demand social media passwords, an alarming departure from existing practice. This conclusion is drawn for two reasons. First, this section fails to specify that the review is of public social media information. Second, in the subsequent section of the bill, DHS is called to complete ‘open source screening’ of visa applicants. If DHS were to only review public social media information, this would surely qualify as ‘open source’ and there would therefore be no need for a separate social media section. It’s not clear if this is simply a case of poor drafting, or an intentionally vague provision. Regardless, the collection of social media passwords would be incredibly invasive, create a significant cybersecurity threat, and if adopted by the US would certainly be adopted by other countries and applied to Americans seeking entry. Under either interpretation, Congress should not adopt social media screening of visa applicants and holders.

Continuous Screening
Sec. 1733 calls for U.S. Customs and Border Protection (CBP) to, “in a risk-based manner, continuously screen individuals issued any visa and [nationals of Visa Waiver countries] who are present, or expected to arrive within 30 days, in the United States, against the appropriate criminal, national security, and terrorism databases maintained by the Federal Government.” Because this Administration has repeatedly singled out nationals of Muslim countries, it’s unfortunately all too likely to apply this provision in a discriminatory manner. This section authorizes continuous screening once individuals are in the United States, inviting monitoring simply because these individuals are not U.S. persons. The United States has long supported legal immigration because immigrants have been shown to be on the whole law-abiding members of society and do not deserve ongoing and unwarranted scrutiny. Foreign status is not an appropriate justification for constant surveillance of individuals in the U.S., and this is not activity Congress should endorse.

Inflexible and Intrusive Border Security Mandates
Mandated Increase in Hours of Drone Surveillance
The Secure Act demands air and marine operations consisting of no fewer than 24 hours of drone flight operations, five days a week. There are three major problems with this plan. First, CBP asserts that its border enforcement authorities extend as far as 100 miles from the physical border, a geography that contains over 200 million Americans. Expanding CBP’s drone program absent severe geographic restrictions would subject millions of Americans to invasive, continuous drone surveillance. Second, this program is vulnerable to mission creep. CBP has demonstrated a willingness to loan its drones to local law enforcement, and an internal review of the CBP drone program revealed that CBP uses its drones “in support of other federal, state, or local law enforcement activities.” Third, mandating a specific number of flight hours is an ineffective strategy. It fails to take into consideration the fact that mission needs and technologies change, and it ignores the troubled history of CBP’s existing drone program. A 2015 review of the program by the DHS Inspector General determined that “CBP drones are dubious achievers” and they are very expensive. This history strongly cautions against relying on drones.

Mandated Deployment of Capabilities to Specific Locations
The Secure Act details the types of capabilities that must be used at the various divisions of the border. For example, for the Yuma Sector, the bill demands that “mobile vehicle-mounted and man-portable surveillance capabilities” and “man-portable unmanned aerial vehicles” be deployed. Prematurely committing specific capabilities for particular regions of the border is bad strategy because the needs on the ground will change. It is cumbersome to change laws, and a plan this specific shouldn’t be legislated.

Further, some of the capabilities described are invasive and their use shouldn’t be mandated absent use restrictions. For example, unmanned aerial vehicles, or drones, will be deployed to more sectors of the border. The associated problems were discussed above. In addition, stingrays, which trick cell phones into connecting to a device rather than to a cell tower, qualify under the bill as “vehicle-mounted and man-portable surveillance capabilities,” and are a capability to be deployed to several regions of the border. Stingrays are a powerful and controversial piece of surveillance technology. Once a device connects to a stingray, the operator can determine the device’s location and identifying data, and can review the content of phone calls and text messages. CBP has in its possession 33 stingrays, and a recent Congressional review of the technology recommended that “Congress [] pass legislation to establish a clear, nationwide framework for when and how geolocation information can be accessed and used.” Regulation of CBP’s use of Stingrays consists of a DHS policy, which only informs CBP practice during criminal investigations. For criminal investigations, there is a presumption that the operator first needs to get a warrant based on probable cause. However, this guidance does not apply during immigration enforcement or border patrol activities, which again extend 100 miles from the border. Arming CBP with surveillance technologies like drones and stingrays, and endorsing their continued use, invites invasive surveillance of Americans and immigrants. CBP’s use of these technologies without restrictions poses a huge threat to the civil liberties of immigrants and Americans alike.

Given the scope of these problems, Congress should not adopt any of these ineffective and invasive programs.

 

Categories: Privacy

Federal Judge Says Embedding a Tweet Can Be Copyright Infringement

EFF News - Thu, 2018-02-15 21:12

Rejecting years of settled precedent, a federal court in New York has ruled [PDF] that you could infringe copyright simply by embedding a tweet in a web page. Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.

This case began when Justin Goldman accused online publications, including Breitbart, Time, Yahoo, Vox Media, and the Boston Globe, of copyright infringement for publishing articles that linked to a photo of NFL star Tom Brady. Goldman took the photo, someone else tweeted it, and the news organizations embedded a link to the tweet in their coverage (the photo was newsworthy because it showed Brady in the Hamptons while the Celtics were trying to recruit Kevin Durant). Goldman said those stories infringe his copyright.

Courts have long held that copyright liability rests with the entity that hosts the infringing content—not someone who simply links to it. The linker generally has no idea that it’s infringing, and isn’t ultimately in control of what content the server will provide when a browser contacts it. This “server test,” originally from a 2007 Ninth Circuit case called Perfect 10 v. Amazon, provides a clear and easy-to-administer rule. It has been a foundation of the modern Internet.

Judge Katherine Forrest rejected the Ninth Circuit’s server test, based in part on a surprising approach to the process of embedding. The opinion describes the simple process of embedding a tweet or image—something done every day by millions of ordinary Internet users—as if it were a highly technical process done by “coders.” That process, she concluded, put publishers, not servers, in the driver’s seat:

[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result.

She also argued that Perfect 10 (which concerned Google’s image search) could be distinguished because in that case the “user made an active choice to click on an image before it was displayed.” But that was not a detail that the Ninth Circuit relied on in reaching its decision. The Ninth Circuit’s rule—which looks at who actually stores and serves the images for display—is far more sensible.
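For readers unfamiliar with what “embedding” actually does at a technical level, here is a minimal sketch of in-line linking. The function name and URL are hypothetical placeholders; the point is that the publisher’s page stores only a reference, and the reader’s browser fetches the content directly from the third party’s server.

```typescript
// Minimal browser-side sketch of in-line linking / embedding.
// The publisher's HTML contains only a URL; the reader's browser requests the
// image straight from the third-party host (e.g. Twitter's servers). The
// publisher never copies or serves the file itself. The URL is a placeholder.

function embedRemoteImage(container: HTMLElement, remoteUrl: string): void {
  const img = document.createElement("img");
  img.src = remoteUrl;          // content remains on the remote server
  img.alt = "Media hosted by a third party";
  container.appendChild(img);   // only a reference is added to the page
}

const article = document.getElementById("article-body");
if (article) {
  embedRemoteImage(article, "https://example-cdn.invalid/photo.jpg");
}
```

Under the server test, liability follows the party that actually stores and transmits the file—the remote host—which is exactly the distinction the new ruling discards.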

If this ruling is appealed (there would likely need to be further proceedings in the district court first), the Second Circuit will be asked to consider whether to follow Perfect 10 or Judge Forrest’s new rule. We hope that today’s ruling does not stand. If it did, it would threaten the ubiquitous practice of in-line linking that benefits millions of Internet users every day.

Related Cases: Perfect 10 v. Google

Categories: Privacy

The False Teeth of Chrome's Ad Filter

EFF News - Thu, 2018-02-15 21:00

Today Google launched a new version of its Chrome browser with what they call an "ad filter"—which means that it sometimes blocks ads but is not an "ad blocker." EFF welcomes the elimination of the worst ad formats. But Google's approach here is a band-aid response to the crisis of trust in advertising that leaves massive user privacy issues unaddressed. 

Last year, a new industry organization, the Coalition for Better Ads, published user research investigating ad formats responsible for "bad ad experiences." The Coalition examined 55 ad formats, of which 12 were deemed unacceptable. These included various full page takeovers (prestitial, postitial, rollover), autoplay videos with sound, pop-ups of all types, and ad density of more than 35% on mobile. Google is supposed to check sites for the forbidden formats and give offenders 30 days to reform or have all their ads blocked in Chrome. Censured sites can purge the offending ads and request reexamination. 
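As a rough illustration of the one quantitative rule in that list, the 35% mobile ad-density limit, here is a toy check. The measurement method below is our assumption for illustration, not the Coalition's actual methodology.

```typescript
// Toy illustration of the 35% mobile ad-density rule described above.
// How the Coalition actually measures density is not specified here; this
// simply compares total ad height to total content height on a page.

function exceedsMobileAdDensity(adHeightsPx: number[], contentHeightPx: number): boolean {
  const adTotal = adHeightsPx.reduce((sum, h) => sum + h, 0);
  return adTotal / contentHeightPx > 0.35;
}

// Example: 600px of ads on a 1500px-tall article -> 40% density, over the limit.
console.log(exceedsMobileAdDensity([250, 250, 100], 1500)); // true
```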

The Coalition for Better Ads Lacks a Consumer Voice

The Coalition involves giants such as Google, Facebook, and Microsoft, ad trade organizations, and adtech companies and large advertisers. Criteo, a retargeter with a history of contested user privacy practices, is also involved, as is content marketer Taboola. Consumer and digital rights groups are not represented in the Coalition.

This industry membership explains the limited horizon of the group, which ignores the non-format factors that annoy and drive users to install content blockers. While people are alienated by aggressive ad formats, the problem has other dimensions. Whether it’s the use of ads as a vector for malware, the consumption of mobile data plans by bloated ads, or the monitoring of user behavior through tracking technologies, users have a lot of reasons to take action and defend themselves.

But these elements are ignored. Privacy, in particular, figured neither in the tests commissioned by the Coalition, nor in their three published reports that form the basis for the new standards. This is no surprise given that participating companies include the four biggest tracking companies: Google, Facebook, Twitter, and AppNexus. 

Stopping the "Biggest Boycott in History"

Some commentators have interpreted ad blocking as the "biggest boycott in history" against the abusive and intrusive nature of online advertising. Now the Coalition aims to slow the adoption of blockers by enacting minimal reforms. Pagefair, an adtech company that monitors adblocker use, estimates 600 million active users of blockers. Some see no ads at all, but most users of the two largest blockers, AdBlock and Adblock Plus, see ads "whitelisted" under the Acceptable Ads program. These companies leverage their position as gatekeepers to the user's eyeballs, obliging Google to buy back access to the "blocked" part of their user base through payments under Acceptable Ads. This is expensive (a German newspaper claims a figure as high as 25 million euros) and is viewed with disapproval by many advertisers and publishers. 

Industry actors now understand that adblocking’s momentum is rooted in the industry’s own failures, and the Coalition is a belated response to this. While nominally an exercise in self-regulation, the enforcement of the standards through Chrome is a powerful stick. By eliminating the most obnoxious ads, they hope to slow the growth of independent blockers.

What Difference Will It Make?

Coverage of Chrome's new feature has focused on the impact on publishers, and on doubts about the Internet’s biggest advertising company enforcing ad standards through its dominant browser. Google has sought to mollify publishers by stating that only 1% of sites tested have been found non-compliant, and has heralded the changed behavior of major publishers like the LA Times and Forbes as evidence of success. But if so few sites fall below the Coalition's bar, it seems unlikely to be enough to dissuade users from installing a blocker. Eyeo, the company behind Adblock Plus, has a lot to lose should this strategy be successful. Eyeo argues that Chrome will only "filter" 17% of the 55 ad formats tested, whereas 94% are blocked by Adblock Plus.

User Protection or Monopoly Power?

The marginalization of egregious ad formats is positive, but should we be worried by this display of power by Google? In the past, browser companies such as Opera and Mozilla took the lead in combating nuisances such as pop-ups, which was widely applauded. Those browsers were not active in advertising themselves. The situation is different with Google, the dominant player in the ad and browser markets.

Google exploiting its browser dominance to shape the conditions of the advertising market raises some concerns. It is notable that the ads Google places on videos in YouTube ("instream pre-roll") were not user-tested and are exempted from the prohibition on "auto-play ads with sound." This risk of a conflict of interest distinguishes the Coalition for Better Ads from, for example, Chrome's monitoring of sites associated with malware and related user protection notifications.

There is also the risk that Google may change position with regard to third-party extensions that give users more powerful options. Recent history justifies such concern: Disconnect and Ad Nauseam have been excluded from the Chrome Store for alleged violations of the Store’s rules. (Ironically, Adblock Plus has never experienced this problem.)

Chrome Falls Behind on User Privacy 

This move from Google will reduce the frequency with which users run into the most annoying ads. Regardless, it fails to address the larger problem of tracking and privacy violations. Indeed, many of the Coalition’s members were active opponents of Do Not Track at the W3C, which would have offered privacy-conscious users an easy opt-out. The resulting impression is that the ad filter is really about the industry trying to solve its adblocking problem, not about addressing users' concerns.

Chrome, together with Microsoft Edge, is now among the last major browsers that do not offer integrated tracking protection. Firefox introduced this feature last November in Quantum, enabled by default in "Private Browsing" mode with the option to enable it universally. Meanwhile, Apple's Safari browser has Intelligent Tracking Prevention, Opera ships with an ad/tracker blocker for users to activate, and Brave has user privacy at the center of its design. It is a shame that Chrome's user security and safety team, widely admired in the industry, is empowered only to offer protection against outside attackers, but not against commercial surveillance conducted by Google itself and other advertisers. If you are using Chrome (1), you need EFF's Privacy Badger or uBlock Origin to fill this gap.

(1) This article does not address other problematic aspects of Google services. When users sign into Gmail, for example, their activity across other Google products is logged. Worse yet, when users are signed into Chrome their full browser history is stored by Google and may be used for ad targeting. This account data can also be linked to Doubleclick's cookies. The storage of browser history is part of Sync (enabling users access to their data across devices), which can also be disabled. If users desire to use Sync but exclude the data from use for ad targeting by Google, this can be selected under ‘Web And App Activity’ in Activity controls. There is an additional opt-out from Ad Personalization in Privacy Settings.

Categories: Privacy

Customs and Border Protection's Biometric Data Snooping Goes Too Far

EFF News - Thu, 2018-02-15 20:21

The U.S. Department of Homeland Security (DHS), Customs and Border Protection (CBP) Privacy Office, and Office of Field Operations recently invited privacy stakeholders—including EFF and the ACLU of Northern California—to participate in a briefing and update on how the CBP is implementing its Biometric Entry/Exit Program.

As we’ve written before, biometrics systems are designed to identify or verify the identity of people by using their intrinsic physical or behavioral characteristics. Because biometric identifiers are by definition unique to an individual person, government collection and storage of this data poses unique threats to privacy and security of individual travelers.

EFF has many concerns about the government collecting and using biometric identifiers, and specifically, we object to the expansion of several DHS programs subjecting Americans and foreign citizens to facial recognition screening at international airports. EFF appreciated the opportunity to share these concerns directly with CBP officers and we hope to work with CBP to allow travelers to opt-out of the program entirely.

You can read the full letter we sent to CBP here.

Categories: Privacy

Law Enforcement Use of Face Recognition Systems Threatens Civil Liberties, Disproportionately Affects People of Color: EFF Report

EFF News - Thu, 2018-02-15 10:45

Independent Oversight, Privacy Protections Are Needed

San Francisco, California—Face recognition—fast becoming law enforcement’s surveillance tool of choice—is being implemented with little oversight or privacy protections, leading to faulty systems that will disproportionately impact people of color and may implicate innocent people for crimes they didn’t commit, says an Electronic Frontier Foundation (EFF) report released today.

Face recognition is rapidly creeping into modern life, and face recognition systems will one day be capable of capturing the faces of people, often without their knowledge, walking down the street, entering stores, standing in line at the airport, attending sporting events, driving their cars, and utilizing public spaces. Researchers at the Georgetown Law School estimated that one in every two American adults—117 million people—is already in law enforcement face recognition systems.

This kind of surveillance will have a chilling effect on Americans’ willingness to exercise their rights to speak out and be politically engaged, the report says. Law enforcement has already used face recognition at political protests, and may soon use face recognition with body-worn cameras, to identify people in the dark, and to project what someone might look like from a police sketch or even a small sample of DNA.

Face recognition employs computer algorithms to pick out details about a person’s face from a photo or video to form a template. As the report explains, police use face recognition to identify unknown suspects by comparing their photos to images stored in databases and to scan public spaces to try to find specific pre-identified targets.

But no face recognition system is 100 percent accurate, and false positives—when a person’s face is incorrectly matched to a template image—are common. Research shows that face recognition misidentifies African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. And because of well-documented racially biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African-Americans, Latinos, and immigrants.

For both reasons, inaccuracies in face recognition systems will disproportionately affect people of color.
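To make the template-comparison step described above concrete, here is a simplified sketch. Real systems derive templates with learned models and tuned pipelines; the names, vector representation, and threshold below are hypothetical. The core point is that a similarity threshold decides what counts as a "match," and where that threshold is set trades false positives against missed matches.

```typescript
// Simplified, hypothetical sketch of the matching step in face recognition.
// Templates are represented here as plain numeric vectors; a cosine-similarity
// threshold decides "match". A looser threshold returns more candidates but
// also more false positives - the failure mode discussed above.

type Template = number[];

function cosineSimilarity(a: Template, b: Template): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: Template) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function findMatches(
  probe: Template,
  database: { id: string; template: Template }[],
  threshold = 0.8                 // hypothetical operating point
): string[] {
  return database
    .filter(entry => cosineSimilarity(probe, entry.template) >= threshold)
    .map(entry => entry.id);
}
```

Because the databases being searched already over-represent certain communities, every false positive produced by a sketch like this falls disproportionately on those same communities.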

“The FBI, which has access to at least 400 million images and is the central source for facial recognition identification for federal, state, and local law enforcement agencies, has failed to address the problem of false positives and inaccurate results,” said EFF Senior Staff Attorney Jennifer Lynch, author of the report. “It has conducted few tests to ensure accuracy and has done nothing to ensure its external partners—federal and state agencies—are not using face recognition in ways that allow innocent people to be identified as criminal suspects.”

Lawmakers, regulators, and policy makers should take steps now to limit face recognition collection and subject it to independent oversight, the report says. Legislation is needed to place meaningful checks on government use of face recognition, including rules limiting retention and sharing, requiring notification when face prints are collected, ensuring robust security procedures to prevent data breaches, and establishing legal processes governing when law enforcement may collect face images from the public without their knowledge, the report concludes.

“People should not have to worry that they may be falsely accused of a crime because an algorithm mistakenly matched their photo to a suspect. They shouldn’t have to worry that their data will end up in the hands of identity thieves because face recognition databases were breached. They shouldn’t have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities,” said Lynch. “Without meaningful legal protections, this is where we may be headed.”

For the report:

Online version: https://www.eff.org/wp/law-enforcement-use-face-recognition

PDF version: https://www.eff.org/files/2018/02/15/face-off-report-1b.pdf

One pager on facial recognition: https://www.eff.org/document/facial-recognition-one-pager

Contact: Jennifer Lynch, Senior Staff Attorney, jlynch@eff.org

Categories: Privacy

Court Dismisses Playboy's Lawsuit Against Boing Boing (For Now)

EFF News - Wed, 2018-02-14 19:48

In a win for free expression, a court has dismissed a copyright lawsuit against Happy Mutants, LLC, the company behind acclaimed website Boing Boing. The court ruled [PDF] that Playboy’s complaint—which accused Boing Boing of copyright infringement for linking to a collection of centerfolds—had not sufficiently established its copyright claim. Although the decision allows Playboy to try again with a new complaint, it is still a good result for supporters of online journalism and sensible copyright.

Playboy Entertainment’s lawsuit accused Boing Boing of copyright infringement for reporting on a historical collection of Playboy centerfolds and linking to a third-party site. In a February 2016 post, Boing Boing told its readers that someone had uploaded scans of the photos, noting they were “an amazing collection” reflecting changing standards of what is considered sexy. The post contained links to an imgur.com page and YouTube video—neither of which were created by Boing Boing.

EFF, together with co-counsel Durie Tangri, filed a motion to dismiss [PDF] on behalf of Boing Boing. We explained that Boing Boing did not contribute to the infringement of any Playboy copyrights by including a link to illustrate its commentary. The motion noted that another judge in the same district had recently dismissed a case where Quentin Tarantino accused Gawker of copyright infringement for linking to a leaked script in its reporting.

Judge Fernando M. Olguin’s ruling quotes the Tarantino decision, noting that:

An allegation that a defendant merely provided the means to accomplish an infringing activity is insufficient to establish a claim for copyright infringement. Rather, liability exists if the defendant engages in personal conduct that encourages or assists the infringement.

Given this standard, the court was “skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement.”

From the outset of this lawsuit, we have been puzzled as to why Playboy, once a staunch defender of the First Amendment, would attack a small news and commentary website. Today’s decision leaves Playboy with a choice: it can try again with a new complaint or it can leave this lawsuit behind. We don’t believe there’s anything Playboy could add to its complaint that would meet the legal standard. We hope that it will choose not to continue with its misguided suit.

Related Cases: Playboy Entertainment Group v. Happy Mutants

Categories: Privacy

Congressional Task Force Releases Report on Election Security

EPIC - Wed, 2018-02-14 15:18

The Congressional Task Force on Election Security today released its final report detailing vulnerabilities in U.S. election systems. The report includes many recommendations, such as purchasing voting systems with paper ballots, post-election audits, and funding for IT support. The report also proposes a national strategy to counter efforts to undermine democratic institutions. Election experts have said that Congress has not done enough to safeguard the mid-term elections. In early 2017, EPIC launched the Project on Democracy and Cybersecurity. EPIC is currently pursuing several FOIA cases concerning Russian interference with the 2016 election, including EPIC v. FBI (cyberattack victim notification), EPIC v. ODNI (Russian hacking), EPIC v. IRS (release of Trump's tax returns), and EPIC v. DHS (election cybersecurity).

Categories: Privacy

Will Canada Be the New Testing Ground for SOPA-lite? Canadian Media Companies Hope So

EFF News - Wed, 2018-02-14 13:33

A consortium of media and distribution companies calling itself “FairPlay Canada” is lobbying for Canada to implement a fast-track, extrajudicial website blocking regime in the name of preventing unlawful downloads of copyrighted works. The proposal is currently being considered by the Canadian Radio-television and Telecommunications Commission (CRTC), an agency roughly analogous to the Federal Communications Commission (FCC) in the U.S.

The proposal is misguided and flawed. We’re still analyzing it, but below are some preliminary thoughts.

The Proposal

The consortium is requesting the CRTC establish a part-time, non-profit organization that would receive complaints from various rightsholders alleging that a website is “blatantly, overwhelmingly, or structurally engaged” in violations of Canadian copyright law. If the sites were determined to be infringing, Canadian ISPs would be required to block access to these websites. The proposal does not specify how this would be accomplished.

The consortium proposes some safeguards in an attempt to show that the process would be meaningful and fair. It proposes that affected websites, ISPs, and members of the public be allowed to respond to any blocking request. It also suggests that no blocking request would be implemented unless a recommendation to block were adopted by the CRTC, and that any affected party would have the right to appeal to a court.

FairPlay argues the system is necessary because unlawful downloads are, it claims, destroying the Canadian creative industry and harming Canadian culture.

(Some of) The Problems

As Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, points out, Canada had more investment in film and TV production last year than at any other time in history. And it’s not just investment in creative industries that is seeing growth: legal means of accessing creative content are also growing, as Bell itself recognized in a statement to financial analysts. Contrary to the argument pushed by the content industry and other FairPlay backers, investment and lawful film and TV services are growing, not shrinking. The Canadian film and TV industries don’t need website-blocking.

The proposal would require service providers to “disappear” certain websites, endangering Internet security and sending a troubling message to the world: it’s okay to interfere with the Internet, even effectively blacklisting entire domains, as long as you do it in the name of IP enforcement. Of course, blacklisting entire domains can mean turning off thousands of underlying websites that may have done nothing wrong. The proposal doesn’t explain how blocking is to be accomplished, but when such plans have been raised in other contexts, we’ve noted the significant concerns we have about various technological ways of “blocking” that wreak havoc on how the Internet works.

And we’ve seen how harmful mistakes can be. For example, back in 2011, the U.S. government seized the domain names of two popular websites based on unsubstantiated allegations of copyright infringement. The government held those domains for over 18 months. As another example, one company named a whopping 3,343 websites in a lawsuit as infringing its trademarks and copyrights. With no one opposing, the company was able to get an order requiring domain name registrars to seize those domains. Only after many defendants had their legitimate websites seized did the court realize that the rightsholder’s statements about many of the websites were inaccurate. Although the proposed system would involve blocking (however that is accomplished) rather than seizing domains, the problem is clear: mistakes are made, and they can have long-lasting effects.

But beyond blocking for copyright infringement, we’ve also seen that once a system is in place to take down one type of content, it will only lead to calls for more blocking, including that of lawful speech. This raises significant freedom of expression and censorship concerns.

We’re also concerned about what’s known as “regulatory capture” with this type of system: the tendency of a regulator to align its interests with those of the regulated. Here, the system would be initially funded by rightsholders, would be staffed “part-time” by those with “relevant experience,” and would get work only when rightsholders view it as a valuable system. These sorts of structural features tend to produce regulatory capture. An impartial judiciary that sees cases and parties from across the political, social, and cultural spectrum helps avoid this pitfall.

Finally, we’re also not sure why this proposal is needed at all. Canada already has some of the strongest anti-piracy laws in the world. The proposal just adds complexity and strips away some of the protections that a court affords those who may be involved in legitimate business (even if the content owners don’t like those businesses).

These are just some of the concerns raised by this proposal. Professor Geist’s blog highlights more, and in more depth.

What you can do

The CRTC is now accepting public comment on the proposal, and has already received over 4,000 comments. The deadline is March 1, although an extension has been sought. We encourage any interested members of the public to submit comments to let the Commission know your thoughts. Please note that all comments are made public, and require certain personal information to be included.

Categories: Privacy

Let's Encrypt Hits 50 Million Active Certificates and Counting

EFF News - Wed, 2018-02-14 13:02

In yet another milestone on the path to encrypting the web, Let’s Encrypt has now issued over 50 million active certificates. Depending on your definition of “website,” this suggests that Let’s Encrypt is protecting between about 23 million and 66 million websites with HTTPS (more on that below). Whatever the number, it’s growing every day as more and more webmasters and hosting providers use Let’s Encrypt to provide HTTPS on their websites by default.

Source: https://letsencrypt.org/stats/ as of February 14, 2018

Let’s Encrypt is a certificate authority, or CA. CAs like Let’s Encrypt are crucial to secure, HTTPS-encrypted browsing. They issue and maintain digital certificates that help web users and their browsers know they’re actually talking to the site they intended to.

One of the things that sets Let’s Encrypt apart is that it issues these certificates for free. And, with the help of EFF’s Certbot client and a range of other automation tools, it’s easy for webmasters of varying skill and resource levels to get a certificate and implement HTTPS. In fact, HTTPS encryption has become an automatic part of many hosting providers’ offerings.
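
To make concrete what a certificate buys you, here is a minimal sketch in Python (our own illustration, not part of this post or the Certbot tooling) of the check a browser effectively performs when connecting to an HTTPS site: validate the certificate chain against trusted CAs, verify the hostname, and read who issued the certificate and when it expires. The hostname is just an example.

```python
# Minimal sketch: perform the same certificate validation a browser does,
# then print who issued the certificate and when it expires.
import socket
import ssl

def inspect_certificate(hostname: str, port: int = 443) -> dict:
    """Connect over TLS, validate the chain against the system's trusted CAs,
    verify the hostname, and return the peer certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    cert = inspect_certificate("letsencrypt.org")  # example hostname
    issuer = dict(pair[0] for pair in cert["issuer"])
    print("Issued by:", issuer.get("organizationName"))
    print("Expires:  ", cert["notAfter"])
```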

50 million active certificates represents the number of certificates that are currently valid and have not expired. (Sometimes we also talk about “total issuance,” which refers to the total number of certificates ever issued by Let’s Encrypt. That number is around 217 million now.) Relating these numbers to names of “websites” is a bit complicated. Some certificates, such as those issued by certain hosting providers, cover many different sites. Yet some certificates are also redundant with others, so there may be a handful of active certificates all covering precisely the same names.

One way to count is by “fully qualified domains active”—in other words, different names covered by non-expired certificates. This is now at 66 million. This metric can overcount sites; while most people would say that eff.org and www.eff.org are the same website, they count as two different names here.

Another way to count the number of websites that Let’s Encrypt protects is by looking at “registered domains active,” of which Let’s Encrypt currently has about 26 million. This refers to the number of different registered domains among non-expired certificates. In this case, supporters.eff.org and www.eff.org would be counted as one name. In cases where pages under the same registered domain are run by different people with different content, this metric may undercount different sites.
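
As a rough illustration of why the two metrics diverge, here is a toy sketch in Python (the names are made up; a real measurement would use the Public Suffix List rather than this naive last-two-labels rule):

```python
# Toy example: the same set of certificate names counted two ways.
names_on_active_certs = [
    "eff.org",
    "www.eff.org",
    "supporters.eff.org",
    "letsencrypt.org",
]

# "Fully qualified domains active": every distinct name counts separately.
fqdns_active = set(names_on_active_certs)

# "Registered domains active": collapse each name to its registrable domain.
# Naive rule for illustration only; names like example.co.uk need the
# Public Suffix List to be handled correctly.
registered_domains_active = {".".join(n.split(".")[-2:]) for n in fqdns_active}

print(len(fqdns_active))               # 4 sites by the FQDN metric
print(len(registered_domains_active))  # 2 sites by the registered-domain metric
```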

No matter how you slice it, Let’s Encrypt is one of the largest CAs. And it has grown largely by giving websites their first-ever certificate rather than by grabbing websites from other CAs. That means that, as Let’s Encrypt grows, the number of HTTPS-protected websites on the web tends to grow too. Every website protected is one step closer to encrypting the entire web, and milestones like this remind us that we are on our way to achieving that goal together.

Categories: Privacy

The Revolution and Slack

EFF News - Wed, 2018-02-14 12:44

UPDATE (2/16/18): We have corrected this post to more accurately reflect the limits of Slack's encryption of user data at rest. We have also clarified that granular retention settings are only available on paid Slack workspaces.

The revolution will not be televised, but it may be hosted on Slack. Community groups, activists, and workers in the United States are increasingly gravitating toward the popular collaboration tool to communicate and coordinate efforts. But many of the people using Slack for political organizing and activism are not fully aware of the ways Slack falls short in serving their security needs. Slack has yet to support this community in its default settings or in its ongoing design.  

We urge Slack to recognize the community organizers and activists using its platform and take more steps to protect them. In the meantime, this post provides context and things to consider when choosing a platform for political organizing, as well as some tips about how to set Slack up to best protect your community.

The Mismatch

Slack is designed as an enterprise system for business settings. That creates a sometimes dangerous mismatch between the needs of the customers Slack aims to serve and the needs of the important, often targeted community groups and activists who are also using it.

Two things that EFF tends to recommend for digital organizing are 1) using encryption as extensively as possible, and 2) self-hosting, so that a governmental authority has to get a warrant for your premises in order to access your information. The central thing to understand about Slack (and many other online services) is that it fulfills neither of these things. This means that if you use Slack as a central organizing tool, Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace.

We know that for many, especially small organizations, self-hosting is not a viable option, and using strong encryption consistently is hard. Meanwhile, Slack is easy, convenient, and useful. Organizations have to balance their own risks and benefits. Regardless of your situation, it is important to understand the risks of organizing on Slack.

First, The Good News

Slack follows several best practices in standing up for users. Slack does require a warrant for content stored on its servers. Further, it promises not to voluntarily provide information to governments for surveillance purposes. Slack also promises to require the FBI to go to court to enforce gag orders issued with National Security Letters, a troubling form of subpoena. Additionally, federal law prohibits Slack from handing over content (but not metadata like membership lists) in response to civil subpoenas.

Slack also stores your data in encrypted form when it’s at rest. This protects against someone walking into one of the data centers Slack uses and stealing a hard drive. But Slack does not claim to encrypt that data while it is in memory, so encryption at rest does not protect against attacks on Slack’s systems or data breaches. It is also no help if you are worried about governments or other entities putting pressure on Slack to hand over your information.
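
To see why encryption at rest has these limits, here is a simplified, hypothetical sketch in Python (using the third-party cryptography library; this is our own illustration, not Slack’s actual design): the service encrypts data before writing it to disk, but because the service also holds the key, anyone who can act as the service, or compel or compromise it, can still read the plaintext.

```python
# Simplified model of service-side encryption at rest (not Slack's real design).
from cryptography.fernet import Fernet

service_key = Fernet.generate_key()   # key held by the service, not by users
store = Fernet(service_key)

# The service encrypts messages before writing them to disk.
ciphertext = store.encrypt(b"organizing meeting moved to 9am")

# A stolen hard drive yields only ciphertext...
print(ciphertext)

# ...but the running service, or anyone it is compelled to assist, can
# decrypt because it holds the key. End-to-end encryption differs precisely
# because the service never has the key.
print(store.decrypt(ciphertext))
```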

Risks With Slack In Particular

And now the downsides. These are things that Slack could change, and EFF has called on them to do so.

Slack can turn over content to law enforcement in response to a warrant. Slack’s servers store everything you do on its platform. Since Slack can read this information on its servers—that is, since it’s not end-to-end encrypted—Slack can be forced to hand it over in response to law enforcement requests. Slack does require warrants to turn over content, and can resist warrants it considers improper or overbroad. But if Slack complies with a warrant, users’ communications are readable on Slack’s servers and available for it to turn over to law enforcement.

Slack may fail to notify users of government information requests. When the government comes knocking on a website’s door for user data, that website should, at a minimum, provide users with timely, detailed notice of the request. Slack’s policy in this regard is lacking. Although it states that it will provide advance notice to users of government demands, it allows for a broad set of exceptions to that standard. This is something that Slack could and should fix, but it refuses to even explain why it has included these loopholes.

Slack content can make its way into your email inbox. Signing up for a Slack workspace also signs you up, by default, for email notifications when you are directly mentioned or receive a direct message. These email notifications can include the content of those mentions and messages. If you expect sensitive messages to stay in the Slack workspace where they were written and shared, this might be an unpleasant surprise. With these defaults in place, you have to trust not only Slack but also your email provider with your own and others’ private content.

Risks With Third-Party Platforms in General

Many of the risks that come with using Slack are also risks that come with using just about any third-party online platform. Most of these are problems with the law that we all must work on to fix together. Nevertheless, organizers must consider these risks when deciding whether Slack or any other online third-party platform is right for them.

Much of your sensitive information is not subject to a warrant requirement. While a warrant is required for content, some of the most sensitive information held by third-party platforms—including the identities and locations of the people in a Slack workspace—is considered “non-content” and is not currently protected by the warrant requirement, federally or in most states. If the identities of your organization’s members are sensitive, consider whether Slack or any other online third party is right for you.

Companies can be legally prevented from giving users notice. While Slack and many other platforms have promised to require the FBI to justify controversial National Security Letter gags, these gags may still be enforced in many cases. In addition, many warrants and other legal process contain different kinds of gags ordered by a court, leaving companies with no ability to notify you that the government has seized your data.

Slack workspaces are subject to civil discovery. Government is not the only entity that could seek information from Slack or other third parties. Private companies and other litigants have sought, and obtained, information from hosts ranging from Google to Microsoft to Facebook and Twitter. While federal law prevents them from handing over customer content in civil discovery, it does not protect “non-content” records, such as membership identities and locations.

A group is only as trustworthy as its members. Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on. Establishing trusted admins or moderators to facilitate these agreements can also be beneficial.

Making Slack as Secure as Possible

If using Slack is still right for you, you can take steps to harden your security settings and make your closed workspaces as private as possible.

By default, Slack retains all the messages in a workspace or channel (including direct messages) for as long as the workspace exists. The same goes for any files submitted to the workspace. The lowest-hanging privacy fruit is to change the workspace’s retention settings: workspace admins can set shorter retention periods, which means less content is available for government requests or legal inquiries. Unfortunately, this kind of granular retention control is currently only available on paid workspaces.

Users can also address the email-leaking concern described above by minimizing email notification settings. This works best if all of the members of a group agree to do it, since email notifications can expose multiple users’ messages. 

The privacy of a Slack workspace also relies on the security of individual members’ accounts. Setting up two-factor authentication adds an extra layer of security to an account, and admins even have the option of making two-factor authentication mandatory for all members of a workspace.

However, no settings tweak can completely mitigate the concerns described above. We strongly urge Slack to step up and protect the high-risk groups that are using it alongside its enterprise customers. And all of us must stand together to push for changes to the law.

Technology should stand with those who wish to make change in our world. Slack has made a great tool that can help, and it’s time for Slack to step up with its policies.

Categories: Privacy