
Privacy


Supreme Court Upholds Patent Office Power to Invalidate Bad Patents

EFF News - Tue, 2018-04-24 19:19

In one of the most important patent decisions in years, the Supreme Court has upheld the power of the Patent Office to review and cancel issued patents. This power to take a “second look” is important because, compared to courts, administrative avenues provide a much faster and more efficient means for challenging bad patents. If the court had ruled the other way, it would have struck down various Patent Office procedures and might even have resurrected many bad patents. Today’s decision [PDF] in Oil States Energy Services, LLC v. Greene’s Energy Group, LLC is a big win for those who want a more sensible patent system.

Oil States challenged the inter partes review (IPR) procedure before the Patent Trial and Appeal Board (PTAB). The PTAB is a part of the Patent Office and is staffed by administrative patent judges. Oil States argued that the IPR procedure is unconstitutional because it allows an administrative agency to decide a patent’s validity, rather than a federal judge and jury.

Together with Public Knowledge, Engine Advocacy, and the R Street Institute, EFF filed an amicus brief [PDF] in the Oil States case in support of IPRs. Our brief discussed the history of patents being used as a public policy tool, and how Congress has long controlled how and when patents can be canceled. We explained how the Constitution sets limits on granting patents, and how IPR is a legitimate exercise of Congress’s power to enforce those limits.

Our amicus brief also explained why IPRs were created in the first place. The Patent Office often does a cursory job reviewing patent applications, with examiners spending an average of about 18 hours per application before granting 20-year monopolies. IPRs allow the Patent Office to make sure it didn’t make a mistake in issuing a patent. The process also allows public interest groups to challenge patents that harm the public, like EFF’s successful challenge to Personal Audio’s podcasting patent. (Personal Audio has filed a petition for certiorari asking the Supreme Court to reverse, raising some of the same grounds argued by Oil States. That petition will likely be decided in May.)

The Supreme Court upheld the IPR process in a 7-2 decision. Writing for the majority, Justice Thomas explained:

Inter partes review falls squarely within the public rights doctrine. This Court has recognized, and the parties do not dispute, that the decision to grant a patent is a matter involving public rights—specifically, the grant of a public franchise. Inter partes review is simply a reconsideration of that grant, and Congress has permissibly reserved the PTO’s authority to conduct that reconsideration. Thus, the PTO can do so without violating Article III.

Justice Thomas noted that IPRs essentially serve the same interest as initial examination: ensuring that patents stay within their proper bounds.

Justice Gorsuch, joined by Chief Justice Roberts, dissented. He argued that only Article III courts should have the authority to cancel patents. If that view had prevailed, it likely would have struck down IPRs, as well as other proceedings before the Patent Office, such as covered business method review and post-grant review. It would also have left the courts with difficult questions regarding the status of patents already found invalid in IPRs. 

In a separate decision [PDF], in SAS Institute v. Iancu, the Supreme Court ruled that, if the PTAB institutes an IPR, it must decide the validity of all challenged claims. EFF did not file a brief in that case. While the petitioner had tenable arguments under the statute (indeed, it won), the result seems to make the PTAB’s job harder and creates a variety of problems (what is supposed to happen with partially-instituted IPRs currently in progress?). Since it is a statutory decision, Congress could amend the law. But don’t hold your breath for a quick fix.

Now that IPRs have been upheld, we may see a renewed push from Senator Coons and others to gut the PTAB’s review power. That would be a huge step backwards. As Justice Thomas explained, IPRs protect the public’s “paramount interest in seeing that patent monopolies are kept within their legitimate scope.” We will defend the PTAB’s role serving the public interest.

Categories: Privacy

Stop Egypt’s Sweeping Ridesharing Surveillance Bill

EFF News - Tue, 2018-04-24 18:11

The Egyptian government is currently debating a bill which would compel all ride-sharing companies to store any Egyptian user data within Egypt. It would also create a system that would let the authorities have real-time access to their passenger and trip information. If passed, companies such as Uber and its Dubai-based competitor Careem would be forced to grant unfettered direct access to their databases to unspecified security authorities. Such a sweeping surveillance measure is particularly ripe for abuse in a country known for its human rights violations, including attempts to use surveillance against civil society. The bill is expected to pass a final vote before Egypt’s House on May 14th or 15th.

Article 10 of the bill requires companies to relocate their servers containing all Egyptian users’ information to within the borders of the Arab Republic of Egypt. Compelled data localization has frequently served as an excuse for enhancing a state’s ability to spy on its citizens.  

Even more troubling, article 9 of the bill forces these same ride-sharing companies to electronically link their local servers directly to unspecified authorities, from police to intelligence agencies. Direct access to a server would provide the Egyptian government unrestricted, real-time access to data on all riders, drivers, and trips. Under this provision, the companies themselves would have no ability to monitor the government’s use of their network data.

Effective computer security is hard, and no system will be free of bugs and errors.  As the volume of ride-sharing usage increases, risks to the security and privacy of ridesharing databases increase as well. Careem just admitted on April 23rd that its databases had been breached earlier this year. The bill’s demand to grant the Egyptian government unrestricted server access greatly increases the risk of accidental catastrophic data breaches, which would compromise the personal data of millions of innocent individuals. Careem and Uber must focus on strengthening the security of their databases instead of granting external authorities unfettered access to their servers.

Direct access to the databases of any company without adequate legal safeguards undermines the privacy and security of innocent individuals, and is therefore incompatible with international human rights obligations. For any surveillance measure to be legal under international human rights standards, it must be prescribed by law. It must be “necessary” to achieve a legitimate aim and “proportionate” to the desired aim. These requirements are vital in ensuring that the government does not adopt surveillance measures which threaten the foundations of a democratic society.

The European Court of Human Rights, in Zakharov v. Russia, made clear that direct access to servers is prone to abuse:

“...a system which enables the secret services and the police to intercept directly the communications of each and every citizen without requiring them to show an interception authorisation to the communications service provider, or to anyone else, is particularly prone to abuse.”                                                                                             

Moreover, the Court of Justice of the European Union (CJEU) has also discussed the importance of having an independent authorization prior to government access to electronic data. In Tele2 Sverige AB v. Post, the court held:

“it is essential that access of the competent national authorities to retained data should, as a general rule, (...) be subject to a prior review carried out either by a court or by an independent administrative body, and that the decision of that court or body should be made following a reasoned request by those authorities submitted...”.

Unrestricted direct access to the data of innocent individuals using ridesharing apps, by its very nature, eradicates any consideration of proportionality and due process. Egypt must turn back from the dead-end path of unrestricted access, and uphold its international human rights obligations. Sensitive data demands strong legal protections, not an all-access pass. Hailing a rideshare should never include blanket access for your government to follow you. We hope Egypt’s House of Representatives rejects the bill.

Categories: Privacy

EPIC to Congress: Enhanced Surveillance at Border Will Impact Rights of U.S. Citizens

EPIC - Tue, 2018-04-24 17:50

EPIC has sent a statement to the House Homeland Security Committee in advance of a hearing with the Commissioner of Customs and Border Protection. EPIC urged the Committee to ask the CBP Commissioner about the collection of biometric data at US airports. EPIC described the growing use of facial recognition systems that capture the images of US travelers. EPIC also pointed to a recent study that found racial disparities with the technique. EPIC is currently seeking records from the federal agency concerning the accuracy of facial recognition. EPIC also recommended the Committee examine how CBP will comply with state laws prohibiting warrantless aerial surveillance when deploying drones at the border. As a result of an earlier FOIA lawsuit, EPIC found that the CBP is deploying drones with facial recognition technology without warrant authority.

Categories: Privacy

California Bill Would Guarantee Free Credit Freezes in 15 Minutes

EFF News - Tue, 2018-04-24 15:09

 

After the shocking news of the massive Equifax data breach, which has now ballooned to jeopardize the privacy of nearly 148 million people, many Americans are rightfully scared and struggling to figure out how to protect themselves from the misuse of their personal information.

To protect against credit fraud, many consumer rights and privacy organizations recommend placing a ‘credit freeze’ with the credit bureaus. When criminals seek to use breached data to borrow money in the name of a breach victim, the potential lender normally runs a credit check with a credit bureau. If there’s a credit freeze in place, then it’s harder to obtain the loan.

But placing a credit freeze can be cumbersome, time-consuming, and costly. The process can also vary across states. It can be an expensive time-suck if a consumer wants to place a freeze across all credit bureaus and for all family members.

Fortunately, California now has an opportunity to dramatically streamline the credit freeze process for its residents, thanks to a state bill introduced by Sen. Jerry Hill, S.B. 823. EFF is proud to support it.

The bill will allow Californians to place, temporarily lift, and remove credit freezes easily and at no charge. Credit reporting agencies will be required to carry out the request in 15 minutes or less if the consumer uses the company’s website or mobile app.

The response time for written requests has been cut as well from three days to just 24 hours. Additionally, credit reporting agencies must offer consumers the option of passing along credit freeze requests to other credit reporting agencies, saving Californians time and reducing the likelihood of the misuse of their information. 

You can read our support letter for the bill here.

Free and convenient credit freezes are becoming even more important as many consumer credit reporting agencies are pushing their inferior “credit lock” products. These products don’t offer the same protections built into credit freezes by law, and to use some of them, consumers have to agree to have their personal information be used for targeted ads.

The bill has passed the California Senate and will soon be heading to the Assembly for a vote. EFF endorses this effort to empower consumers to protect their sensitive information.

Categories: Privacy

CDT Urges Court to Uphold Fourth Amendment Protections for Email Content

CDT - Tue, 2018-04-24 11:45

Recently, CDT joined the Electronic Frontier Foundation, National Association of Criminal Defense Lawyers, and the Brennan Center for Justice in a brief to argue that a user’s Fourth Amendment rights in email content do not expire when an email service provider terminates a user’s account pursuant to its terms of service. The government must still obtain a warrant prior to searching that user’s email account. The case is United States v. Ackerman, in which a district court determined – based on those facts – that a warrant was unnecessary to access email content because termination of the account vitiated the account holder’s reasonable expectation of privacy in his email. The case was appealed and we filed an amicus brief opposing this holding.

The Facts

Walter Ackerman owned an email account through America Online (AOL). AOL had an automated filter designed to prevent the transmission of child pornography by comparing hashes of files sent through its network to hashes of known child porn. When Ackerman sent one particular email to a third person, the filter identified one of the images attached to the email as pornography. AOL intercepted the email, shut down Ackerman’s account, and forwarded the email to the National Center for Missing and Exploited Children (NCMEC). NCMEC opened the file without a warrant. Ackerman argued that NCMEC was a government agent and as such needed a warrant to search his email and attachment. Writing for a panel of the 10th Circuit, Judge Gorsuch (now Supreme Court Justice Gorsuch) agreed and the case was remanded for a determination of whether or not Ackerman had a reasonable expectation of privacy in his email (United States v. Ackerman, 831 F.3d 1292 (10th Cir. 2016)).
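AOL’s actual filtering system is not public, but the general technique, matching a hash of each outgoing file against a database of hashes of known prohibited files, is simple to sketch. The Python fragment below is purely illustrative; the function names and placeholder digests are ours, not AOL’s.

```python
import hashlib

# Hypothetical blocklist: hex digests of known prohibited files. A real
# filter would consult a large, curated database (and often a perceptual
# hash rather than an exact cryptographic one, so re-encoded copies match).
KNOWN_BAD_HASHES = {
    "placeholder_digest_1",
    "placeholder_digest_2",
}

def file_digest(data: bytes) -> str:
    """Return a hex digest that identifies the file's exact contents."""
    return hashlib.sha256(data).hexdigest()

def should_flag(attachment: bytes) -> bool:
    """Flag an attachment whose digest matches a known prohibited file."""
    return file_digest(attachment) in KNOWN_BAD_HASHES
```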

On remand, the district court dismissed Ackerman’s motion to suppress the evidence, finding that he did not have an objectively reasonable expectation of privacy in his email after the account service was terminated. The district court specifically pointed to AOL’s terms of service (TOS) which admonished users not to “participate in, facilitate, or further illegal activities.” They further stated that “[t]o prevent violations and enforce this TOS and remediate any violations, we can take any technical, legal, or other actions that we deem, in our sole discretion, necessary and appropriate without notice to you.” Siding with the government, the lower court held that as a result of this claimed right of access, Ackerman no longer had a reasonable expectation of privacy once his account was terminated.

The Law

It is CDT’s view that the district court’s holding undermines widely recognized Fourth Amendment protections for email content. Email and other electronic communications have become so pervasive that many would “consider them to be essential means or necessary instruments for self-expression, even self-identification” (City of Ontario v. Quon, 560 U.S. 746, 760 (2010)). The Fourth Amendment protects the content of email because it “is the technological scion of tangible mail, and it plays an indispensable part in the Information Age” (United States v. Warshak, 631 F.3d 266, 286 (6th Cir. 2010)). Warshak affirmed Fourth Amendment protections for email content holding that a “subscriber enjoys a reasonable expectation of privacy in the content of emails” stored, sent, or received through an internet service provider, and as such the government must have a search warrant before it can compel them to turn over the contents of a subscriber’s emails.

Oddly, the government agreed that Ackerman initially had an expectation of privacy in his email stored by AOL. But the government then argued that this expectation was extinguished when AOL terminated his account for violating AOL’s TOS. The district court agreed, holding that ISPs can unilaterally corrupt individuals’ Fourth Amendment rights merely by terminating a user’s account for a violation of a private contractual term. This holding is illogical: either the defendant had a reasonable expectation of privacy in his email, as the government conceded, or AOL’s TOS prevented him from ever forming an objectively reasonable expectation of privacy. Both cannot be true.

The Stakes

If the district court’s ruling is allowed to stand, email protections would be dealt a huge blow in the 10th Circuit. Under the court’s rationale, a service provider’s unilateral actions could vitiate any email user’s reasonable expectation of privacy in their entire account – likely comprising thousands of emails describing sensitive and intimate details of that user’s life. While in this case the account holder violated the TOS by distributing child pornography, non-criminal and less serious activity can violate a provider’s TOS. For example, for some platforms “intimidation” violates the provider’s TOS. Surely the user’s Fourth Amendment rights against a governmental search should not rest on a private company’s interpretation of a vague TOS, and its decision as to whether to terminate an account to enforce the TOS.

Generally, terms of service governing a consumer’s use of a private email service should not determine the scope of their Fourth Amendment protections in their private email. This isn’t to say that TOS can never inform the scope of rights afforded other online accounts. If you use a service to disseminate information to the public, the TOS for that service will state that your content is publicly accessible. By agreeing to those terms you will have a limited expectation of privacy in the information you post publicly, or none at all. However, those simply aren’t the facts of this case.

Most consumers want service providers to have some access to their accounts to filter out spam, to ensure strong cybersecurity, and to monitor for illicit content like child pornography. Access to perform these kinds of services, to which consumers routinely agree in TOS, should not destroy their expectation of privacy in their email content. If it did, Fourth Amendment protections for email content would cease to exist, because most consumers regard such services as essential to their use of email. Service providers might remove that access so customers could claim Fourth Amendment protections, but customers would then lose essential services.

For all of these reasons, we urged the 10th Circuit to reverse the district court’s decision, and to rule that termination of an account because of a violation of terms of service does not vitiate a consumer’s reasonable expectation of privacy in private email.

Categories: Privacy

EPIC to Senate: Weaknesses in Cybersecurity Threaten Both Consumers and Democratic Institutions

EPIC - Tue, 2018-04-24 10:40

EPIC submitted a statement to the Senate Homeland Security Committee in advance of a hearing on "Cyber Threats Facing America." Last year, the White House National Security Strategy report set out the administration's goals for global policy. EPIC supports several of the goals in the National Strategy report, including enhanced cybersecurity, support for democratic institutions, and protection of human rights. EPIC wrote to the Senate Committee to seek assurances that those goals will remain priorities for this administration. Quoting former world chess champion Garry Kasparov, EPIC also said "perhaps it is a firewall and not a border wall that the United States needs to safeguard our national interests at this moment in time."

Categories: Privacy

Get to Know CDT’s Fellows: Mark Raymond

CDT - Tue, 2018-04-24 09:30

Mark Raymond is the Wick Cary Assistant Professor of International Security at the University of Oklahoma. He is also one of CDT’s non-resident Fellows, engaging with our policy teams to provide valuable insight from his research. In this Q & A we get to learn more about Mark and his current work.

What is your current research focus?

My research deals with the politics of global rule-making, especially as it pertains to cybersecurity and internet governance issues. So, for example, I have written about multistakeholder governance approaches; and I have also written about the challenges entailed by the highly decentralized nature of Internet governance arrangements. While organizations like ICANN and deliberations in the UN receive a lot of the attention, alongside national legal and regulatory frameworks, many consequential decisions pertaining to internet issues are made by firms or by state and local governments in countries around the world, as well as by individual judges dealing with court cases. The problem is that decisions made in one jurisdiction can often have negative unintended effects on citizens, firms, and users in other jurisdictions. Given the rapid adoption of Internet of Things (IoT) technologies, including in critical infrastructure systems, these questions are only becoming more complex – and more important! My research tries to identify these kinds of challenges, and to provide solutions.

What is the most pressing internet policy question of today?

I’m not sure it’s possible or productive to identify one specific question. But there is a general tendency, especially in the United States given its historical role with respect to the internet, to get too focused on using national law and regulation to solve internet policy problems. Or, even worse in some ways, to think that technological problems must have technological solutions. People are pretty creative. If the underlying problem is about human values and interests, people will innovate around technological solutions. In general, I think it’s wise to focus on identifying the underlying human behavior problems and focus on those. And in doing so, we need to focus on how to solve them in a globalized world with highly decentralized governance arrangements that include public and private sector actors.

What issues do you think more students should be studying?

Global governance and policy issues, very much including ethics. Figuring out how to put limits around the affordances of internet technologies, and learning how to maximize their benefits, is going to be the key to continued human flourishing. Ultimately, that is a set of social science questions. There are no right answers to these questions, and they won’t ever go away because we can’t “solve” them once and for all. Governance and policy are about ongoing management of issues in the public interest.

Categories: Privacy

Net Neutrality Did Not Die Today

EFF News - Mon, 2018-04-23 17:01

When the FCC’s “Restoring Internet Freedom Order,” which repealed net neutrality protections the FCC had previously issued, was published on February 22nd, it was interpreted by many to mean it would go into effect on April 23. That’s not true, and we still don’t know when the previous net neutrality protections will end.

On the Federal Register’s website, the official daily journal of the United States Federal Government where all proposed and adopted rules are published, the so-called “Restoring Internet Freedom Order” has an “effective date” of April 23. But that date only applies to a few cosmetic changes. The majority of the rules governing the Internet, including the prohibitions on blocking, throttling, and paid prioritization, remain in place.

Before the FCC’s end to those protections can take effect, the Office of Management and Budget has to approve the new order, which it hasn’t done. Once that happens, we’ll get another notice in the Federal Register. And that’s when we’ll know for sure when the ISPs will be able to legally start changing their actions.

If your Internet experience hasn’t changed today, don’t take that as a sign that ISPs aren’t going to start acting differently once the rule actually does take effect;  for example, Comcast changed the wording on its net neutrality pledge almost immediately after last year’s FCC vote.

Net neutrality protections didn’t end today, and you can help make sure they never do. Congress can still stop the repeal from going into effect by using the Congressional Review Act (CRA) to overturn the FCC’s action. All it takes is a simple majority vote held within 60 legislative working days of the rule being published. The Senate is only one vote short of the 51 votes necessary to stop the rule change, but there is a lot more work to be done in the House of Representatives. See where your members of Congress stand and voice your support for the CRA here.

Take Action

Save the net neutrality rules

Categories: Privacy

Stupid Patent of the Month: Suggesting Reading Material

EFF News - Mon, 2018-04-23 16:49

Online businesses—like businesses everywhere—are full of suggestions. If you order a burger, you might want fries with that. If you read Popular Science, you might like reading Popular Mechanics. Those kinds of suggestions are a very old part of commerce, and no one would seriously think it’s a patentable technology.

Except, apparently, for Red River Innovations LLC, a patent troll that believes its patents cover the idea of suggesting what people should read next. Red River filed a half-dozen lawsuits in East Texas throughout 2015 and 2016. Some of those lawsuits were against retailers like home improvement chain Menards, clothier Zumiez, and cookie retailer Ms. Fields. Those stores all got sued because they have search bars on their websites.

In some lawsuits, Red River claimed the use of a search bar infringed US Patent No. 7,958,138. For example, in a lawsuit against Zumiez, Red River claimed [PDF] that “after a request for electronic text through the search box located at www.zumiez.com, the Zumiez system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text, as described and claimed in the ’138 Patent.” In that case, the “reading material” is text like product listings for jackets or skateboard decks.

In another lawsuit, Red River asserted a related patent, US Patent No. 7,526,477, which is our winner this month. The ’477 patent describes a system of electronic text searching, where the user is presented with “related concepts” to the text they’re already reading. The examples shown in the patent display a kind of live index, shown to the right of a block of electronic text. In a lawsuit against Infolinks, Red River alleged [PDF] infringement because “after a request for electronic text, the InText system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text.”   

Suggesting and providing reading material isn’t an invention, but rather an abstract idea. The final paragraph of the ’477 patent’s specification makes it clear that the claimed method could be practiced on just about any computer. Under the Supreme Court’s decision in Alice v. CLS Bank, an abstract idea doesn’t become eligible for a patent merely because you suggest performing it with a computer. But hiring lawyers to make this argument is an expensive task, and it can be daunting to do so in a faraway locale, like the East Texas district where Red River has filed its lawsuits so far. That venue has historically attracted “patent troll” entities that see it as favorable to their cases.

The ’477 patent is another of the patents featured in Unified Patents’ prior art crowdsourcing project Patroll. If you know of any prior art for the ’477 patent, you can submit it (before April 30) to Unified Patents for a possible $2,000 prize.

The good news for anyone being targeted by Red River today is that it’s not going to be as easy to drag businesses from all over the country into a court of its choice. The Supreme Court’s TC Heartland decision, combined with a Federal Circuit case called In re Cray, means that patent owners have to sue in a venue where defendants actually do business.

It’s also a good example of why fee-shifting in patent cases, and upholding the case law of the Alice decision, are so important. Small companies using basic web technologies shouldn’t have to go through a multi-million dollar jury trial to get a chance to prove that a patent like the ’477 is abstract and obvious.

Categories: Privacy

Paid Prioritization: We Have Solved This Problem Before

CDT - Mon, 2018-04-23 16:20

Net neutrality does not end today. Although today does mark 60 days since the publication of the FCC’s order repealing its own rules, that repeal (due to some obscure and protracted administrative procedure) has not yet taken effect. Keep this in mind if you read or hear any arguments pointing out that ISPs haven’t ruined the internet, even without the net neutrality rules. For now, they still exist. And if the current effort to shut down the repeal through the Congressional Review Act (CRA) succeeds, the net neutrality protections will survive even longer.* But that doesn’t mean the debate is standing still. Instead, opponents of the rules are using the recent and repeated regulatory swings (that they caused) as justification for a legislative compromise. Specifically, some in the telecom industry have argued for watered-down consumer protections, most recently on the subject of paid prioritization.

Although it has been a key tenet of the net neutrality discussion for years, paid prioritization has recently become a more prominent focal point. Commonly spoken of in terms of “fast lanes,” paid prioritization is when online companies pay ISPs to give their data traffic preferential treatment. It allows ISPs to double charge by charging both the customer for service and edge providers to reach customers, and lets well-funded companies buy an advantage over their competitors. Because the value (and therefore the price) of paid prioritization increases as networks become more congested, it also rewards ISPs for letting their networks become clogged rather than upgrading their capacity.

Last week, the House Energy and Commerce Subcommittee on Communications and Technology held a hearing on the subject, ostensibly to “have a realistic discussion” about it and to develop a “nuanced approach.” This language fits nicely with the industry’s calls for compromise legislation, but conveniently discounts the decades-long discussion that led up to the 2015 Open Internet Order (OIO).

In some ways, the focus on paid prioritization represents progress. (It even sells hamburgers!) Practices like blocking websites or applications or throttling certain net traffic have become so universally disapproved that they have faded from the debate. Most ISPs either have no interest in blocking or throttling or they have given up fighting for the ability to do so, and even the current ISP-friendly legislative proposals would prohibit these practices. Paid prioritization, however, remains a core source of disagreement.

Unfortunately, ISPs and their advocates have tried to confuse the issue to hide the negative effects and incentives paid prioritization creates. They have claimed that banning paid prioritization jeopardizes telemedicine applications and autonomous vehicle safety and would inhibit emergency first responders and 911 systems. They have claimed that content delivery networks (CDNs) do the same thing as paid prioritization. They have talked about beneficial network traffic management techniques and paid prioritization as though they are one and the same. They have argued that small businesses would benefit from paid prioritization. They have claimed that paid prioritization would somehow lower the cost of internet access and have even used TSA PreCheck as a positive example of paid prioritization. But these claims are either misleading, ridiculous, or just plain wrong.

The net neutrality rules created by the OIO banned paid prioritization because its potential for harm to innovation and competition at the edges of the internet was “overwhelming.” The rules (which the current FCC has voted to repeal) applied only to broadband internet access service (BIAS) and did not apply to “specialized” or non-BIAS services, such as telemedicine applications or autonomous vehicle support. The rules also created exceptions for emergency services. So, under the OIO, ISPs would still be able to offer paid prioritization for the use cases they list, because those services do not constitute broadband internet access.

The arguments about CDNs and network management amount to semantic sleight-of-hand. CDNs allow companies to store information, like the files that make up websites or the music and movie files for streaming, closer to end users. This decentralized distribution makes for a better, faster experience by minimizing the distance and number of network segments between the user and the information. Prioritization, on the other hand, involves giving favorable treatment to some traffic as it crosses a network. For instance, an ISP can prioritize the traffic from an affiliate’s video streaming service by letting those packets jump the queue at the ISP’s routers, or by creating a separate queue just for the affiliate’s traffic.

Beyond the structural differences between paid prioritization and CDNs, they also have different effects on both network function and competition. Not only do CDNs offer more efficient delivery for their customers, they also reduce traffic loads between distant parts of the internet, improving speeds for everyone else. There is no limit to how many companies can benefit from CDNs, nor do CDNs create a disadvantage for non-customers; no traffic is made slower by CDN usage. Paid prioritization, however, cannot benefit everyone; by definition, it is impossible to prioritize everyone. By the same token, paid prioritization necessarily disadvantages all those who do not, or cannot pay for preferential treatment.

Supporters also try to blur the line between paid prioritization and reasonable network traffic management. Traffic management consists of several techniques by which network operators like ISPs can improve the overall functionality of their network. For instance, operators may be able to provide a better quality of experience for subscribers using real-time video applications by prioritizing that traffic over less time-sensitive traffic like email or software updates. Done properly, no one’s quality of experience is degraded and all similar kinds of traffic enjoy the same treatment. The network works better and no one loses.
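As a purely illustrative sketch of that distinction (and not any ISP’s actual implementation), the toy Python scheduler below dequeues time-sensitive traffic ahead of less urgent traffic. Real routers rely on hardware queues and far more sophisticated schemes, such as weighted fair queuing, but the basic idea is the same.

```python
import heapq

# Toy traffic classes: lower number = higher priority.
PRIORITY = {"realtime_video": 0, "web": 1, "email": 2, "software_update": 3}

class TrafficScheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0  # preserves arrival order within a class

    def enqueue(self, packet, traffic_class):
        heapq.heappush(self._queue,
                       (PRIORITY[traffic_class], self._counter, packet))
        self._counter += 1

    def dequeue(self):
        """Send the highest-priority packet that is waiting."""
        _, _, packet = heapq.heappop(self._queue)
        return packet

sched = TrafficScheduler()
sched.enqueue("email chunk", "email")
sched.enqueue("video frame", "realtime_video")
print(sched.dequeue())  # the video frame goes first
```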

The protections against blocking, throttling, and unreasonable discrimination in the OIO each had exceptions for reasonable network management. The rule against paid prioritization, however, did not. According to the Order, paid prioritization, by definition, is not a network management practice because it “does not primarily have a technical network management purpose.” Although (unpaid) prioritization can be a network traffic management technique, it takes on a completely different character when compensation is part of the deal, creating perverse incentives for ISPs and distorting competition online. This is why it’s so important to distinguish paid prioritization from everything else and not fall for the trickery of using paid prioritization and other, harmless terms interchangeably.

The claims that paid prioritization could somehow give small businesses an advantage are almost laughable. Paid prioritization is all about buying an advantage; how can small businesses hope to out-spend their deep-pocketed competitors? Equally ludicrous are the claims that ISPs would somehow drop broadband subscription prices if they could charge for prioritized treatment. As we’ve already said, paid prioritization monetizes network congestion, giving ISPs a way to charge more for getting around traffic jams that they create. In this light, Congresswoman Blackburn’s comparison of paid prioritization to the TSA PreCheck program is somewhat accurate, but it’s also illustrative of the perverse incentives it creates for ISPs.

The conversation about paid prioritization is far from over, and you can be sure that efforts to confuse the issue will continue. Just remember this: the problems with paid prioritization all stem from the “paid” aspect. Whatever other aspects of prioritization ISPs may talk about, getting paid is what they want. But net neutrality cannot coexist with paid prioritization of web traffic; real net neutrality protections must prohibit paid prioritization. The 2015 Open Internet Order did this, while also allowing flexibility to perform reasonable network management and to support limited-purpose “specialized” services like telemedicine. That sounds like a compromise to me.

* There are also two court cases pending: one to strike down the 2015 rules is stalled in front of the Supreme Court, and one to strike down the 2018 repeal of the rules is gearing up for briefing. The outcome of either of these could alter the existing rule set. To add to the complexity, litigation against the various state initiatives to put net neutrality protections in place will emerge as soon as the repeal takes effect.

Categories: Privacy

EPIC Sues FTC for Release of Facebook's Audits

EPIC - Fri, 2018-04-20 17:45

EPIC has filed a Freedom of Information Act lawsuit to obtain the release of the unredacted Facebook Assessments from the FTC. The FTC Consent Order required Facebook to provide to the FTC biennial assessments conducted by an independent auditor. In March, EPIC filed a Freedom of Information Act request for the 2013, 2015, and 2017 Facebook Assessments and related records. EPIC's FOIA request drew attention to a version of the 2017 report available at the FTC website, but that version is heavily redacted. EPIC is now suing for the release of the unredacted report. EPIC has an extensive open government practice and has previously obtained records from many federal agencies. The case is EPIC v. FTC, No. 18-942 (D.D.C. filed April 20, 2018).

Categories: Privacy

EPIC Obtains Partial Release of 2017 Facebook Audit

EPIC - Fri, 2018-04-20 17:10

EPIC has obtained a redacted version of the 2017 Facebook Assessment required by the 2012 Federal Trade Commission Consent Order. The Order required Facebook to conduct biennial assessments from a third-party auditor of Facebook's privacy and security practices. In March, EPIC filed a Freedom of Information Act request for the 2013, 2015, and 2017 Facebook Assessments as well as related records. The 2017 Facebook Assessment, prepared by PwC, stated that "Facebook's privacy controls were operating with sufficient effectiveness" to protect the privacy of users. This assessment was prepared after Cambridge Analytica harvested the personal data of 87 million Facebook users. In a statement to Congress for the Facebook hearings last week, EPIC noted that FTC Commissioners represented that the Consent Order protected the privacy of hundreds of millions of Facebook users in the United States and Europe.

Categories: Privacy

We’re in the Uncanny Valley of Targeted Advertising

EFF News - Fri, 2018-04-20 14:22

Mark Zuckerberg, Facebook’s founder and CEO, thinks people want targeted advertising. The “overwhelming feedback,” he said multiple times during his congressional testimony, was that people want to see “good and relevant” ads. Why then are so many Facebook users, including lawmakers in the U.S. Senate and House, so fed up and creeped out by the uncannily on-the-nose ads? Targeted advertising on Facebook has gotten to the point that it’s so “good,” it’s bad—for users, who feel surveilled by the platform, and for Facebook, which is rapidly losing its users’ trust. But there’s a solution, which Facebook must prioritize: stop collecting data from users without their knowledge or explicit, affirmative consent.

It should never be the user’s responsibility to have to guess what’s happening behind the curtain.

Right now, most users don’t have a clear understanding of all the types of data that Facebook collects or how it’s analyzed and used for targeting (or for anything else). While the company has heaps of information about its users to comb through, if you as a user want to know why you’re being targeted for an ad, for example, you’re mostly out of luck. Sure, there’s a “why was I shown this” option on an individual ad, but each generally reveals only bland categories like “Over 18 and living in California”—and to get an even semi-accurate picture of all the ways you can be targeted, you’d have to click through various sections, one at a time, on your “Ad Preferences” page.

Text from Facebook explaining why an ad has been shown to the user

Even more opaque are categories of targeting called “Lookalike audiences.” Because Facebook has so many users—over 2 billion per month—it can automatically take a list of people supplied by advertisers, such as current customers or people who like a Facebook page, and then do behind-the-scenes magic to create a new audience of similar users to beam ads at.

Facebook does this by identifying “the common qualities” of the people in the uploaded list, such as their related demographic information or interests, and finding people who are similar to (or “look like”) them, to create an all-new list. But those comparisons are made behind the curtain, so it’s impossible to know what data, specifically, Facebook is using to decide you look like another group of users. And to top it off: much of what’s being used for targeting generally isn’t information that users have explicitly shared—it’s information that’s been actively—and silently—taken from them.
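Facebook has not published how lookalike matching actually works. As a rough, hypothetical illustration of similarity-based audience expansion in general, the Python sketch below ranks users by how close their feature vectors sit to the average profile of an advertiser’s seed list; the features and numbers are invented for the example.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def lookalike_audience(seed_vectors, all_users, k=3):
    """Return the k users whose features are closest to the seed list's average.

    seed_vectors: feature vectors for the advertiser's uploaded list
    all_users: dict mapping user id -> feature vector
    """
    n = len(seed_vectors)
    centroid = [sum(v[i] for v in seed_vectors) / n
                for i in range(len(seed_vectors[0]))]
    ranked = sorted(all_users.items(),
                    key=lambda item: cosine(item[1], centroid),
                    reverse=True)
    return [user_id for user_id, _ in ranked[:k]]

# Invented example: features might encode interests, demographics, activity.
seeds = [[1.0, 0.2, 0.0], [0.9, 0.3, 0.1]]
users = {"u1": [0.95, 0.25, 0.05], "u2": [0.0, 1.0, 0.9], "u3": [0.8, 0.1, 0.0]}
print(lookalike_audience(seeds, users, k=2))  # ['u1', 'u3']
```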

Telling the user that targeting data is provided by a third party like Acxiom doesn’t give any useful information about the data itself, instead bringing up more unanswerable questions about how data is collected

Just as vague is targeting using data provided by third-party “data brokers.” In March, Facebook announced it would discontinue one aspect of this data sharing, called partner categories, in which data brokers like Acxiom and Experian combine their own massive datasets with Facebook’s to target users. This is the kind of change Facebook has touted to “help improve people’s privacy”—but it won’t have a meaningful impact on our knowledge of how data is collected and used.

As a result, the ads we see on Facebook—and other places online where behaviors are tracked to target users—creep us out. Whether they’re for shoes that we’ve been considering buying to replace ours, for restaurants we happened to visit once, or even for toys that our children have mentioned, the ads can indicate a knowledge of our private lives that the company has consistently failed to admit to having, and moreover, knowledge that was supplied via Facebook’s AI, which makes inferences about people—such as their political affiliation and race—that’s clearly out of many users’ comfort zones. This AI-based ad targeting on Facebook is so obscured in its functioning that even Zuckerberg thinks it’s a problem. “Right now, a lot of our AI systems make decisions in ways that people don't really understand,” he told Congress during his testimony. “And I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don't understand how they're making decisions.”

But we don’t have 10 or 20 years. We’ve entered an uncanny valley of opaque algorithms spinning up targeted ads that feel so personal and invasive that both the House and the Senate mentioned the spreading myth that the company wiretaps its users’ phones. It’s understandable that users have come to conclusions like this, given the creeped-out feelings they rightfully experience. The concern that you’re being surveilled persists, essentially, because you are being surveilled—just not via your microphone. Facebook seems to possess an almost human understanding of us. Like the unease and discomfort people sometimes experience interacting with a not-quite-human-like robot, being targeted highly accurately by machines based on private, behavioral information that we never actively gave out feels creepy, uncomfortable, and unsettling.

The trouble isn’t that personalization is itself creepy. When AI is effective it can produce amazing results that feel personalized in a delightful way—but only when we actively participate in teaching the system what we like and don't like. AI-generated playlists, movie recommendations, and other algorithm-powered suggestions work to benefit users because the inputs are transparent and based on information we knowingly give those platforms, like songs and television shows we like. AI that feels accurate, transparent, and friendly can bring users out of the uncanny valley to a place where they no longer feel unsettled, but instead, assisted.

But apply a similar level of technological prowess to other parts of our heavily surveilled, AI-infused lives, and we arrive in a world where platforms like Facebook creepily, uncannily, show us advertisements for products we only vaguely remember considering purchasing or people we had only just met once or just thought about recently—all because the amount of data being hoovered up and churned through obscure algorithms is completely unknown to us.

Unlike the feeling that a friend put together a music playlist just for us, Facebook’s hyper-personalized advertising—and other AI that presents us with surprising, frighteningly accurate information specifically relevant to us—leaves us feeling surveilled, but not known. Instead of feeling wonder at how accurate the content is, we feel like we’ve been tricked.

To keep us out of the uncanny valley, advertisers and platforms like Facebook must stop compiling data about users without their knowledge or explicit consent. Zuckerberg multiple times told Congress that “an ad-supported service is the most aligned with [Facebook’s] mission of trying to help connect everyone in the world.” As long as Facebook’s business model is built around surveillance and offering access to users’ private data for targeting purposes to advertisers, it’s unlikely we’ll escape the discomfort we get when we’re targeted on the site. Steps such as being more transparent about what is collected, though helpful, aren’t enough. Even if users know what Facebook collects and how they use it, having no way of controlling data collection, and more importantly, no say in the collection in the first place, will still leave us stuck in the uncanny valley.

Even Facebook’s “helpful” features, such as reminding us of birthdays we had forgotten, showing pictures of relatives we’d just been thinking of (as one senator mentioned), or displaying upcoming event information we might be interested in, will continue to occasionally make us feel like someone is watching. We'll only be amazed (and not repulsed) by targeted advertising—and by features like this—if we feel we have a hand in shaping what is targeted at us. But it should never be the user’s responsibility to have to guess what’s happening behind the curtain.

While advertisers must be ethical in how they use tracking and targeting, a more structural change needs to occur. For the sake of the products, platforms, and applications of the present and future, developers must not only be more transparent about what they’re tracking, how they’re using those inputs, and how AI is making inferences about private data. They must also stop collecting data from users without their explicit consent. With transparency, users might be able to make their way out of the uncanny valley—but only to reach an uncanny plateau. Only through explicit affirmative consent—where users not only know but have a hand in deciding the inputs and the algorithms that are used to personalize content and ads—can we enjoy the “future that we all want to build,” as Zuckerberg put it.

Arthur C. Clarke said famously that “any sufficiently advanced technology is indistinguishable from magic”—and we should insist that the magic makes us feel wonder, not revulsion. Otherwise, we may end up stuck on the uncanny plateau, becoming increasingly distrustful of AI in general, and instead of enjoying its benefits, fear its unsettling, not-quite-human understanding.  

Categories: Privacy

Minnesota Supreme Court Ruling Will Help Shed Light on Police Use of Biometric Technology

EFF News - Fri, 2018-04-20 12:43

A decision by the Minnesota Supreme Court on Wednesday will help the public learn more about how law enforcement uses privacy-invasive biometric technology.

The decision in Webster v. Hennepin County is mostly good news for the requester in the case, who sought the public records as part of a 2015 EFF and MuckRock campaign to track mobile biometric technology use by law enforcement across the country. EFF filed a brief in support of Tony Webster, arguing that the public needed to know more about how officials use these technologies.

Across the country, law enforcement agencies have been adopting technologies that allow cops to identify subjects by matching their distinguishing physical characteristics to giant repositories of biometric data. This could include images of faces, fingerprints, irises, or even tattoos. In many cases, police use mobile devices in the field to scan and identify people during stops. However, police may also use this technology when a subject isn’t present, such as grabbing images from social media, CCTV, or even lifting biological traces from seats or drinking glasses.

Webster’s request to Hennepin County officials sought a variety of records, and included a request for the agencies to search officials’ email messages for keywords related to biometric technology, such as “face recognition” and “iris scan.”

Officials largely ignored the request and when Webster brought a legal challenge, they claimed that searching their email for keywords would be burdensome and that the request was improper under the state’s public records law, the Minnesota Government Data Practices Act.

Webster initially prevailed before an administrative law judge, who ruled that the agencies had failed to comply with the Data Practices Act in several respects. The judge also ruled that requesting a search of email records for keywords was proper under the law and was not burdensome.

County officials appealed that decision to a state appellate court. That court agreed that Webster’s request was proper and not burdensome. But it disagreed that the agencies had violated the Data Practices Act by not responding to Webster’s request or that they had failed to set up their records so that they could be easily searched in response to records requests.

Webster appealed to the Minnesota Supreme Court, which on Wednesday agreed with him that the agencies had failed to comply with the Data Practices Act by not responding to his request. The court, however, agreed with the lower appellate court that county officials did not violate the law in how they had configured their email service or arranged their records systems.

In a missed opportunity, however, the court declined to rule on whether searching for emails by keywords was appropriate under the Data Practices Act and not burdensome. The court claimed that it didn’t have the ability to review that issue because Webster had prevailed in the lower court and county officials failed to properly raise the issue.

Although this means that the lower appellate court’s decision affirming that email keyword searches are proper and not burdensome still stands, it would have been nice if the state’s highest court weighed in on the issue.

EFF is nonetheless pleased with the court’s decision as it means Webster can finally access records that document county law enforcement’s use of biometric technology. We would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.

For more on biometric identification, such as face recognition, check out EFF’s Street-Level Surveillance project.

Categories: Privacy

Senator Blumenthal Calls On FTC To Enforce Consent Order Against Facebook

EPIC - Fri, 2018-04-20 10:25

Senator Richard Blumenthal (D-CT) has called for "monetary penalties that provide redress for consumers and stricter oversight" in a letter to the Federal Trade Commission. Senator Blumenthal focused on the FTC's 2011 Consent Order that EPIC, and a coalition of consumer groups obtained, after preparing a detailed complaint in 2009. Referring to the Cambridge Analytica scandal, Senator Blumenthal wrote that "three of the FTC's claims concerned the misrepresentation of verification and privacy preferences of third-party apps." Senator Blumenthal also raised questions about the FTC's monitoring of the consent order, noting that "even the most rudimentary oversight would have uncovered these problematic terms of service." And the Senator stated, "The Cambridge Analytica matter also calls into question Facebook's compliance with the consent decree's requirements to respect privacy settings and protect private information." EPIC and other consumer groups recently urged the FTC to reopen the investigation. The FTC has confirmed that an investigation of Facebook is now underway.

Categories: Privacy

Dear Canada: Accessing Publicly Available Information on the Internet Is Not a Crime

EFF News - Thu, 2018-04-19 23:00

Canadian authorities should drop charges against a 19-year-old Canadian accused of “unauthorized use of a computer service” for downloading thousands of public records hosted and available to all on a government website. The whole episode is an embarrassing overreach that chills the right of access to public records and threatens important security research.

At the heart of the incident, as reported by CBC news this week, is the Nova Scotian government’s embarrassment over its own failure to protect the sensitive data of 250 people who used the province’s Freedom of Information Act (FOIA) to request their own government files. These documents were hosted on the government web server that also hosted public records containing no personal information. Every request hosted on the server contained very similar URLs, which differed only in a single document ID number at the end of the URL. The teenager took a known ID number, and then, by modifying the URL, retrieved and stored all of the FOIA documents available on the Nova Scotia FOIA website.

Beyond the absurdity of charging someone with downloading public records that were available to anyone with an Internet connection, if anyone is to blame for this mess, it’s Nova Scotia officials. They set up their public records server insecurely, permitting public access to others’ private information. Officials should accept responsibility for failing to secure such sensitive data rather than ginning up a prosecution. The fact that the government was publishing documents that contained sensitive data on a public website without any passwords or access controls demonstrates their own failure to protect the private information of individuals. Moreover, it does not appear that the site even deployed minimal technical safeguards to exclude widely-known indexing tools such as Google search and the Internet Archive from archiving all the records published on the site, as both appear to have cached some of the documents.
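For reference, that kind of safeguard is trivial to express. A conventional robots.txt file tells well-behaved crawlers what not to index, and Python’s standard library can check it; the file contents and URLs below are hypothetical.

```python
from urllib import robotparser

# A tiny, hypothetical robots.txt asking all crawlers to skip /foia/ responses.
ROBOTS_TXT = """\
User-agent: *
Disallow: /foia/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks before fetching (placeholder URLs).
print(rp.can_fetch("*", "https://records.example.gov/foia/response/1234"))  # False
print(rp.can_fetch("*", "https://records.example.gov/about"))               # True
```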

The lack of any technical safeguards shielding the Freedom of Information responses from public access would make it difficult for anyone to know that they were downloading material containing private information, much less provide any indication that such activity was “without authorization” under the criminal statute. According to the report, more than 95% of the 7,000 Freedom of Information responses in question included redactions for any information properly excluded from disclosure under Nova Scotia’s FOI law. Freedom of Information laws are about furthering public transparency, and information released through the FOI process is typically considered to be public to everyone.

But beyond the details of this case, automating access to publicly available freedom of information requests is not conduct that should be criminalized: Canadian law criminalizes unauthorized use of  computer systems, but these provisions are only intended to be applied when the use of the service is both unauthorized and carried out with fraudulent intent. Neither element should be stretched to meet the specifics in this case. The teenager in question believed he was carrying out a research and archiving role, preserving the results of freedom of information requests. And given the setup of the site, he likely wasn’t aware that a few of the documents contained personal information. If true, he would not have had any fraudulent intent.

“The prosecution of this individual highlights a serious problem with Canada’s unauthorized intrusion regime,” Tamir Israel, Staff Lawyer at CIPPIC, told us. “Even if he is ultimately found innocent, the fact that these provisions are sufficiently ambiguous to lay charges can have a serious chilling effect on innovation, free expression and legitimate security research.”

The deeper problem with this case is that it highlights how concerns about computer crime can lead to absurd prosecutions. The law the Canadian police are using to prosecute the teen was implemented after Canada signed the Budapest Cybercrime Convention. The convention’s original intent was to punish those who break into protected computers to steal data or cause damage.

Criminalizing access to publicly available data over the Internet twists the Cybercrime Convention’s purpose. Laws that offer the possibility of imposing criminal liability on someone simply for engaging with freely available information on the web pose a continuing threat to the openness and innovation of the Internet. They also threaten legitimate security research. As technology law professor Orin Kerr describes it, publicly posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

Canada should follow the lead of the United States federal court’s decision in Sandvig v. Sessions, which made clear that using automated tools to access freely available information is not a computer crime. As the court wrote:

“Scraping is merely a technological advance that makes information collection easier; it is not meaningfully different from using a tape recorder instead of taking written notes, or using the panorama function on a smartphone instead of taking a series of photos from different positions.”

The same is true in the case of the Canadian teen.

We've long defended the use of “automated scraping,” the process of using web crawlers or bots (applications that run automated tasks over the Internet) to extract content and data from a website. Scraping underpins a wide range of valuable tools and services that Internet users, programmers, journalists, and researchers around the world rely on every day, to the benefit of the broader public.
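As a rough illustration of what that process involves, the sketch below fetches a single page and lists the links it contains, using only Python’s standard library. The target URL is a placeholder, and real-world scrapers typically add politeness measures such as rate limiting, error handling, and robots.txt checks.

    # Minimal scraping sketch: fetch one page and list the links it contains.
    # The URL is a placeholder; production scrapers add rate limiting,
    # robots.txt checks, and more robust error handling.
    import urllib.request
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collects href attributes from anchor tags as the page is parsed."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    with urllib.request.urlopen("https://example.org/") as resp:
        html = resp.read().decode("utf-8", errors="replace")

    extractor = LinkExtractor()
    extractor.feed(html)
    for link in extractor.links:
        print(link)

The same handful of lines, pointed at a list of article pages instead of a homepage, is the backbone of archiving crawlers, news aggregators, and the research projects described below.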

The value of automated scraping goes well beyond curious teenagers seeking access to freedom of information requests. The Internet Archive has long been scraping public portions of the world wide web and preserving them for future researchers. News aggregation tools, including Google’s Crisis Map, which aggregated critical information about California’s October 2016 wildfires, involve scraping. ProPublica journalists used automated scrapers to investigate Amazon’s algorithm for ranking products by price and uncovered that Amazon’s pricing algorithm was hiding the best deals from many of its customers. The researchers who studied racial discrimination on Airbnb also used bots, and found that guests with distinctively African American names were 16 percent less likely to be accepted than identical guests with distinctively white names.

Charging the Canadian teen with a computer crime for what amounts to scraping publicly available online content has severe consequences for him and the broader public. As a result of the charges against him, the teen is banned from using the Internet and is concerned he may not be able to complete his education.

More broadly, the prosecution is a significant deterrent to anyone who wants to use common tools such as scraping to collect public government records from websites, because the government’s own failure to adequately protect private information can now be leveraged into criminal charges against journalists, activists, or anyone else seeking to access public records.

Even if the teen is ultimately vindicated in court, this incident calls for a re-examination of Canada’s unauthorized intrusion regime and law enforcement’s use of it. The law was not intended for cases like this, and it should never have been invoked against an innocent Internet user.

A Tale of Two Poorly Designed Cross-Border Data Access Regimes

EFF News - Thu, 2018-04-19 22:57

On Tuesday, the European Commission published two legislative proposals that could further cement an unfortunate trend towards privacy erosion in cross-border state investigations. Building on a foundation first established by the recently enacted U.S. CLOUD Act, these proposals compel tech companies and service providers to ignore critical privacy obligations in order to facilitate easy access when facing data requests from foreign governments. These initiatives collectively signal the increasing willingness of states to sacrifice privacy as a way of addressing pragmatic challenges in cross-border access that could be better solved with more training and streamlined processes.

The EU proposals (which consist of a Regulation and a Directive) apply to a broad range of companies [1] that offer services in the Union and that have a “substantial connection” to one or more Member States. [2] Practically, that means companies like Facebook, Twitter, and Google, though not based in the EU, would still be affected by these proposals. The proposals create a number of new data disclosure powers and obligations, including:

  • European court orders that compel internet companies and service providers to preserve data they already stored at the time the order is received (European preservation orders);
  • European court orders for content and ‘transactional’ data [3] for investigation of a crime that carries a custodial sentence of at least 3 years (European production orders for content data);
  • European orders for some metadata defined as “access data” (IP addresses, service access times) and customer identification data (including name, date of birth, billing data and email addresses) that could be issued for any criminal offense (European production orders for access and subscriber data); [4]
  • An obligation for some service providers to appoint an EU legal representative who will be responsible for complying with data access demands from any EU Member State;
  • The package of proposals does not address real-time access to communications (in contrast to the CLOUD Act).

Who Is Affected and How?

Such orders would affect Google, Facebook, Microsoft, Twitter, instant messaging services, voice over IP, apps, Internet Service Providers, and e-mail services, as well as cloud technology providers, domain name registries, registrars, privacy and proxy service providers, and digital marketplaces.

Moreover, tech companies and service providers would have to comply with law enforcement orders for data preservation and delivery within 10 days or, in the case of an imminent threat to the life or physical integrity of a person or to critical infrastructure, within just six hours. Complying with these orders would be costly and time-consuming.

Alarmingly, the EU proposals would compel affected companies (which include diverse entities ranging from small ISPs and burgeoning startups to multibillion dollar global corporations) to develop extensive resources and expertise in the nuances of many EU data access regimes. A small regional German ISP will need the capacity to process demands from France, Estonia, Poland, or any other EU member state in a manner that minimizes legal risks. Ironically, the EU proposals are presented as beneficial to businesses and service providers on the basis that they provide ‘legal certainty and clarity’. In reality, they do the opposite, forcing these entities to devote resources to understanding the law of each member state. Even worse, the proposal would immunize businesses from liability in situations where good-faith compliance with a data request might conflict with EU data protection laws. This creates a powerful incentive to err on the side of compliance with a data demand at a cost to privacy. There is no comparable immunity from the heavy fines that could be levied for ignoring a data access request on the basis of good-faith compliance with EU data protection rules.

No such liability limitation is available to companies and service providers subject to non-EU privacy protections. In some instances, companies would be forced to choose between complying with EU data demands issued under EU standards and complying with legal restrictions on data exposure imposed by other jurisdictions. For example, mechanisms requiring service providers to disclose customer identification data on the basis of a prosecutorial demand could conflict with Canada’s data protection regime. The Personal Information Protection and Electronic Documents Act (PIPEDA), a Canadian privacy law, has been held to prevent service providers from identifying customers associated with anonymous online activity in the absence of a court order. As the European proposals purport to apply to domain name registries as well, these mechanisms could also interfere with efforts at ICANN to protect anonymity in website registration by shielding customer registration information.

The EU package could also compel U.S.-based providers to violate the Stored Communications Act (SCA), which prevents the disclosure of stored communications content in the absence of a court order. [5] The recent U.S. CLOUD Act created a new mechanism for bypassing these safeguards, allowing certain foreign nations (if the United States enters into an “executive agreement” with them under the CLOUD Act) to compel data production from U.S.-based providers without following U.S. law or getting an order from a U.S. judge. However, the United States has not entered into any such agreement with the EU or any EU member state at this stage, and the European package would require compliance even in the absence of one.

No Political Will to Fix the MLAT Process

The unfortunate backdrop to this race to subvert other states’ privacy standards is a regime that already exists for navigating cross-border data access. The Mutual Legal Assistance Treaty (MLAT) system creates global mechanisms by which one state can access data hosted in another while still complying with privacy safeguards in both jurisdictions. The MLAT system is in need of reform, as the volume of cross-border requests in modern times has strained some of its procedural mechanisms to the point where delays in responses can be significant. However, the fundamental basis of the MLAT regime remains sound and the pragmatic flaws in its implementation are far from insurmountable. Instead of reforming the MLAT regime in a way that would retain the current safeguards it respects, the European Commission and the United States seem to prefer to jettison these safeguards.

Perhaps ironically, much of the delay within the MLAT system arises from a lack of expertise among state agencies and officials in the data access laws of foreign states. Developing such expertise would allow state agencies to formulate foreign data access requests faster and more efficiently. It would also allow state officials to process incoming requests with greater speed. The EU proposals seek to bypass this requirement by effectively privatizing the legal assessment process: instead of a real judge making real judgments, service providers will need to decide whether foreign requests are properly formulated under foreign laws. Yet judicial authorities and state agencies are far better placed to make these assessments, not only from a resource management perspective, but also from a legitimacy perspective.

Contrary to this trend, European courts have continued to assert their own domestic privacy standards when protecting EU individuals’ data from access by foreign state agencies. Late last week, an Irish court questioned whether U.S. state agencies (particularly the NSA and FBI, which are granted broad powers under the U.S. Foreign Intelligence Surveillance Court) are sufficiently restrained in their ability to access EU individuals’ data. The matter was referred to the EU’s highest court, and an adverse finding could prevent global communications platforms from exporting EU individuals’ data to the U.S. Such a finding could even prevent those same platforms from complying with some U.S. data demands regarding EU individuals’ data if additional privacy safeguards and remedies are not added. It is not yet clear what role such restrictions might ultimately play in any EU-U.S. agreement that might be negotiated under the U.S. CLOUD Act.

Ultimately, both the U.S. CLOUD Act and the EU proposals are a missed opportunity to work towards a cross-border data access regime that facilitates efficient law enforcement access while respecting privacy, due process, and freedom of expression.

Conclusion

Unlike the last-minute rush to approve the U.S. CLOUD Act, there is still a long way to go before finalizing the EU proposals. Both documents need to be reviewed by the European Parliament and the Council of the European Union, and be subject to amendments. Once approved by both institutions, the regulation will become immediately enforceable as law in all Member States simultaneously, and it will override all national laws dealing with the same subject matter. The directive, however, will need to be transposed into national law.

We call on EU policy-makers to avoid the privatization of law enforcement and work instead to enhance judicial cooperation within and outside the European Union.

  • 1. Specifically listed are: providers of electronic communications services, social networks, online marketplaces, hosting service providers, and Internet infrastructure providers such as IP address and domain name registries. See Article 2, Definitions.
  • 2. A substantial connection is defined in the regulation as having an establishment in one or more Member States. In the absence of an establishment in the Union, a substantial connection can be established by the existence of a significant number of users in one or more Member States, or the targeting of activities towards one or more Member States (including factors such as the use of a language or a currency generally used in a Member State, the availability of an app in the relevant national app store, providing local advertising or advertising in the language used in a Member State, or making use of information originating from persons in Member States in the course of its activities, among others). See Article 3, Scope of the Regulation.
  • 3. Transactional data is “generally pursued to obtain information about the contacts and whereabouts of the user and may be served to establish a profile of an individual concerned”. The regulation describes transactional data as “the source and destination of a message or another type of interaction, data on the location of the device, date, time, duration, size, route, format, the protocol used and the type of compression, unless such data constitutes access data.”
  • 4. The draft regulation states that access data is “typically recorded as part of a record of events (in other words a server log) to indicate the commencement and termination of a user access session to a service. It is often an individual IP address (static or dynamic) or other identifier that singles out the network interface used during the access session.”
  • 5. Most large U.S. providers insist on a warrant based on probable cause to disclose content, although the SCA allows disclosure on a weaker standard in some cases.

A Little Help for Our Friends

EFF News - Thu, 2018-04-19 21:01

In periods like this one, when governments seem to ignore the will of the people as easily as companies violate their users’ trust, it’s important to draw strength from your friends. EFF is glad to have allies in the online freedom movement like the Internet Archive. Right now, donations to the Archive will be matched automatically by the Pineapple Fund.

Founded 21 years ago by Brewster Kahle, the Internet Archive has a mission to provide free and universal access to knowledge through its vast digital library. Their work has helped capture the massive—yet now too often ephemeral—proliferation of human creativity and knowledge online. Popular tools like the Wayback Machine have allowed people to do things like view deleted and altered webpages and recover public statements to hold officials accountable.

EFF and the Internet Archive have stood together in a number of digital civil liberties cases. We fought back when the Archive became the recipient of a National Security Letter, a tool often used by the FBI to force Internet providers and telecommunications companies to turn over the names, addresses, and other records about their customers, and frequently accompanied by a gag order. EFF and the Archive have worked together to fight threats to free expression, online innovation, and the free flow of information on the Internet on numerous occasions. We have even collaborated on community gatherings like EFF’s own Pwning Tomorrow speculative fiction launch and the recent Barlow Symposium exploring EFF co-founder John Perry Barlow’s philosophy of the Internet.

EFF co-founder John Perry Barlow with the Internet Archive’s Brewster Kahle.

This month, the Bitcoin philanthropist behind the Pineapple Fund is challenging the world to support the Internet Archive and the movement for online freedom. The Pineapple Fund will match up to $1 million in donations to the Archive through April 30. (EFF was also the grateful recipient of a $1 million Pineapple Fund grant in January of this year.) If you would like to support the future of libraries and preserve online knowledge for generations to come, consider giving to the Internet Archive today. We salute the Internet Archive for supporting privacy, free expression, and the open web.

Patent Office Throws Out GEMSA’s Stupid Patent on a GUI For Storage

EFF News - Thu, 2018-04-19 18:14

The Patent Trial and Appeal Board has issued a ruling [PDF] invalidating claims from US Patent No. 6,690,400, which had been the subject of the June 2016 entry in our Stupid Patent of the Month blog series. The patent owner, Global Equity Management (SA) Pty Ltd. (GEMSA), responded to that post by suing EFF in Australia. Eventually, a U.S. court ruled that EFF’s speech was protected by the First Amendment. Now the Patent Office has found key claims from the ’400 patent invalid.

The ’400 patent described its “invention” as “a Graphic User Interface (GUI) that enables a user to virtualize the system and to define secondary storage physical devices through the graphical depiction of cabinets.” In other words, virtual storage cabinets on a computer. eBay, Alibaba, and Booking.com filed a petition for inter partes review arguing that claims from the ’400 patent were obvious in light of the Partition Magic 3.0 User Guide (1997) from PowerQuest Corporation. Three administrative patent judges from the Patent Trial and Appeal Board (PTAB) agreed.

The PTAB opinion notes that Partition Magic’s user guide teaches each part of the patent’s Claim 1, including the portrayal of a “cabinet selection button bar,” a “secondary storage partitions window,” and a “cabinet visible partition window.” This may be better understood through diagrams from the opinion. The first diagram below reproduces a figure from the patent labeled with claim elements. The second is a figure from Partition Magic, labeled with the same claim elements.

GEMSA argued that the ’400 patent was non-obvious because the first owner of the patent, a company called Flash Vos, Inc., “moved the computer industry a quantum leap forward in the late 90’s when it invented Systems Virtualization.” But the PTAB found that “Patent Owner’s argument fails because [it] has put forth no evidence that Flash Vos or GEMSA actually had any commercial success.”

The constitutionality of inter partes review is being challenged in the Supreme Court in the Oil States case. (EFF filed an amicus brief in that case in support of the process.) A decision is expected in Oil States before the end of June. The successful challenge to GEMSA’s patent shows the importance of inter partes review. GEMSA had sued dozens of companies alleging infringement of the ’400 patent. GEMSA can still appeal the PTAB’s ruling. If the ruling stands, however, it should end those suits as to this patent.

Related Cases: EFF v. Global Equity Management (SA) Pty Ltd

New York Judge Makes the Wrong Call on Stingray Secrecy

EFF News - Thu, 2018-04-19 15:15

A New York judge has ruled that the public and the judiciary shouldn’t second-guess the police when it comes to secret snooping on the public with intrusive surveillance technologies.

He couldn’t be more wrong. 

A core part of EFF’s mission is questioning the decisions of our law enforcement and intelligence agencies over digital surveillance. We’ve seen too many cases where police have abused databases, hidden the use of invasive technologies, targeted people exercising their First Amendment rights, disparately burdened immigrants and people of color, and captured massive amounts of unnecessary information on innocent people. 

We’re outraged about New York Judge Shlomo Hager’s recent ruling against the New York Civil Liberties Union in a public records case. The judge upheld the New York Police Department’s decision to withhold records about its purchases of cell-site simulator equipment (colloquially known as Stingrays), including the names of surveillance products and how much they cost taxpayers. 

As the judge said in the hearing [PDF]: 

The case law is clear … "It is bad law and bad policy to second-guess the predictive judgments made by the government’s intelligence agencies" … Therefore, this Court will defer to Detective Werner, as well as to Inspector Gregory Antonsen’s expertise, that disclosure of the names of the StingRay devices, as well as the prices, would pose a substantial threat and would reveal the nonroutine information to bad actors that would use it to evade detection.

We wholeheartedly disagree. Holding police accountable and shining light on the criminal justice system is absolutely good law, good policy, and good for community relations. Questioning authority is one of the most important ways to defend democracy.

Up until a few years ago, a lot of law enforcement agencies around the country went to extreme lengths to hide the existence of cell-site simulators. These devices mimic cell towers in order to connect to people’s phones. Police would reject public records requests about this technology, while prosecutors would sometimes drop cases rather than let information come to light. One of the main vendors, Harris Corp., even had agencies sign non-disclosure agreements.

Transparency advocates sued and the technology’s capabilities began to surface. Police departments were using the technology to track phones without a warrant. They were sucking up data on thousands of innocent phone owners with each use. They were surveilling protesters. The technology reportedly interferes with cellphone coverage, which disparately impacts people of color because police much more frequently deploy cell-site simulators in their neighborhoods. 

In California, legislators were so outraged by the secrecy that they passed a law requiring any agency using a cell-site simulator to publish a privacy and usage policy online and hold public meetings before acquiring the technology. California also passed a law requiring a warrant before police can use a cell-site simulator as well as mandating annual public disclosures about these warrants. 

What’s good enough for California should be good for New York. Transparency in New York City about high-tech spying is especially important, given the NYPD’s track record of civil liberties violations—including illegal surveillance of Muslims and the practice of “testilying.” 

The argument that transparency is going to put more information in the hands of criminals is a weak diversion. By that logic, nothing law enforcement does should be open to public scrutiny, and we should resign ourselves to an Orwellian America monitored only by secret police. That argument failed to hold water in California. In the years since California legislators mandated greater transparency about acquisition and use of cell-site simulators, there is no evidence that these laws contributed to any crime. In recent years, many other agencies have handed over documents about cell-site simulators with little objection. 

The judge’s misguided ruling is a reminder that we must seek transparency through all available means. That’s why we support efforts in the New York City Council to pass the Public Oversight of Surveillance Technology (POST) Act. This measure would require the NYPD to publish a use policy for each electronic surveillance technology it has or seeks to use in the future. We’re also supporting a variety of measures across the country that would require even stronger oversight of spy tech, including a public process before equipment is acquired. Already, Santa Clara County, Davis, and Berkeley in California have passed such ordinances.

The time for secrecy over cell-site simulators has passed. The Stingray is out of the bag, and we’re going to keep fighting to make sure it remains in the open.

Learn more about cell-site simulators at EFF’s Street-Level Surveillance project.
