GDPR Privacy Regulation

GDPR Article 35 and Article 25 Square Off

For those not buried in the details of the European General Data Protection Regulation (GDPR), there is often confusion about the differences between Data Protection Impact Assessments (Article 35) and Data Protection by Design and Default (Article 25). Many people assume that DPIAs, as the impact assessments are called, are synonymous with Data Protection by Design and Default. In this article, I highlight some of the key differences.

Article 35 Data Protection Impact Assessments

  • Applies to: processing of personal data that is likely to pose a high risk to individuals, especially where there is automated processing, large-scale processing of special categories of data, or systematic monitoring of public spaces
  • Requires: documentation of the measures to address the risk and demonstrate compliance with the regulation
  • When: prior to processing

Article 25 Data Protection by Design and Default

  • Applies to: all processing of personal data
  • Requires: implementing appropriate technical and organizational measures designed to implement data protection principles and only process the personal data necessary for the specific purposes
  • When: at the time of determination of the means of processing AND at the time of processing

I’ve obviously summarized the language of the articles, but only to highlight the differences. So let me dive a little deeper. First off, you’ll notice the first key distinction concerns applicability. DPIAs are only necessary for high-risk processing, whereas Data Protection by Design (and Default) applies to ALL processing of personal data. Of course, to get to a DPIA, most organizations rely on some threshold analysis which suggests whether or not the processing is high risk. This is not necessary for Data Protection by Design because it applies to all processing.

The second key distinction is that DPIAs are about documenting your measures and compliance, whereas Data Protection by Design is about implementing measures. An Article 35 DPIA is about proving you’re complying, whereas Article 25 Data Protection by Design and Default is about trying to comply (i.e., the measures are “designed” to implement data protection principles). Presumably, if you’ve designed data protection into your processing, the DPIA is about ensuring that you’ve formally documented it (with all your i’s dotted and t’s crossed).

An example might help. If you’re planning on collecting contact information from potential customers at a concert, you might implement an organizational measure (a policy) that tells your employees to ensure they tell potential customers what their data will be used for. That is a measure designed to comply with the data protection principle of transparency. Will some of your people forget to tell them? Perhaps. Perfection is not the goal. Now change this up: you’re planning on video recording individuals at the concert and doing demographic analysis of the ethnicities in attendance. Now you fall under the systematic monitoring clause of Article 35 (and the special categories clause as well). You have to document how you’re complying with the regulation, including all the technical and organizational measures. Maybe only three employees have access to the data. Maybe you’re doing this under the lawful basis of a task carried out in the public interest. Maybe you had notice printed on the back of the ticket before everyone entered. Document. Document. Document is what DPIAs are all about.

The final key distinction is about timing. A DPIA must be done any time prior to processing. The idea here is that if you don’t have the documentation or can’t prove you’re complying with the regulation, that should stop you from processing personal data at high risk to individuals (or at least give you pause). Because Data Protection by Design is about implementing measures rather than documenting them, it must be done (1) when you determine what processing you’re going to do and (2) at the time of processing. The reason is that a measure may have different effects at different times. For instance, one measure (in accordance with the data minimization principle) might be to exclude collection of certain information, say ethnicity, when asking for contact information. This might be implemented on the form being used to collect data by not having an ethnicity field. Since we create the form at the “time of determination of the means of processing,” we’re implementing that measure at that time. Another measure might be to audit the forms to make sure employees aren’t secretly marking codes next to minority names and contact information. That measure would obviously be at the “time of the processing itself.”

GDPR Privacy Regulation

Lawful Basis under GDPR: Performance of a Contract

The newly enacted General Data Protection Regulation (GDPR) in the European Union provides for six lawful bases for processing personal data. Just as a baseline for readers who may not be familiar with the GDPR: in general, processing is prohibited unless you have a lawful basis. Article 6 of the regulation provides the list of bases:

  1. Consent
  2. Necessary for performance of a contract
  3. Necessary for compliance with the law
  4. Necessary to protect vital interests of the data subject
  5. Necessary for task carried out in the public interest
  6. Necessary for a legitimate interest of the controller or third party

The most common justification by organizations is probably (6) legitimate interest. The easiest example of this would be fraud prevention. An organization has a legitimate interest in preventing fraud from occurring. Of course, the balancing test for legitimate interest must still be carried out. You can’t justify doing just anything you want on the basis of fraud prevention.

The basis which garners the most press and most debate is consent. In fact, the regulation devotes an entire article to what constitutes valid consent. The Article 29 Working Party, the official EU advisory group on data protection, also published a 30-page guide to consent. Consent is essentially a last resort for organizations wanting to use data. If you can’t find a valid basis under the other five, consent is your only option.

Bases 3, 4 and 5 are fairly narrow and of limited general-purpose use, available only in certain circumstances.

Which leaves us with #2, performance of a contract, the subject of this post. In full, the text of the regulation on this reads: “processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract.”

Unfortunately, many read this option too broadly as, simply, part of a “contract.” In other words, their feeling is that anything put into the contract makes this a valid basis for processing. But let’s look a little closer.

“processing is necessary for the performance of a contract to which the data subject is party”

First off, it’s clear the data subject must be a party to the contract. It can’t be a contract between two organizations concerning the data subject (or their information). What about the other part of that sentence, “necessary for the performance of a contract”? While performance is not defined under the Principles of European Contract Law (PECL), non-performance is, in Article 1:301:

“‘non-performance’ denotes any failure to perform an obligation under the contract, whether or not excused, and includes delayed performance, defective performance and failure to co-operate in order to give full effect to the contract.”

Chapter 7 of the PECL goes on to describe, in more detail, issues of performance. One can deduce from this counter-definition that performance means completion of an obligation under the contract (in a timely, non-defective and cooperative way). Article 6:101 describes that a statement in a contract gives rise to an obligation of a party if the other party reasonably expected it to give rise to that obligation, taking into account (a) the apparent importance of the statement to the other party; (b) whether the party was making the statement in the course of business; and (c) the relative expertise of the parties. Clause (a) is crucial in the analysis of the lawful basis of performance of a contract under GDPR.

In order for “performance of a contract” to be the lawful basis, the processing of data must be necessary to fulfill an obligation, under the contract, of the controller which is important to the data subject.

Let’s look at a clear example: I hire you to design and print business cards for me. Without my name and contact information, it would be impossible for you to fulfill your obligation under the contract. If you’re not allowed to use that information, I’ve set you up for failure and have arguably non-performed my own obligations through failure to co-operate. Processing of that data is necessary for your performance.

Let’s look at one more common one: payment processing. You hire my consulting firm to provide privacy by design training, and the firm expects payment for that service. In receiving that payment, I’m in receipt of your personal information, which may be supplied to a payment processor, used to create an invoice, etc. From the contract’s perspective, the obligations are that you pay the firm and that the firm provides training services. Processing the payment is for the firm’s benefit (i.e., in its legitimate interest in facilitating payment), not to fulfill an obligation to you.

The bottom line is, just because there is a contract doesn’t mean the lawfulness of processing is based on performance of that contract. The processing has to support, and be necessary to perform, your obligations under the contract.

Ethereum Privacy Security

When is a hack a hack?

This was cross-posted from LinkedIn.


The recent kerfuffle around Ethereum and the #DAO “hack” is just another in a long list of events which illustrate the difficulty of defining the term “hacking.” For those unfamiliar with Ethereum and the DAO, a little background. Ethereum is a blockchain technology which expanded on the idea of Bitcoin to allow for a more programmable blockchain. For simplicity’s sake, think of Ethereum as a giant distributed virtual computer running on thousands or millions of other computers. The incentive to run this computer is paid in the form of ether (which can be traded for Bitcoin or other forms of money, directly or indirectly). The DAO is a program created to run on this computer that acted like a giant venture capital firm, but without any partners or anybody else at the helm. Anybody who contributed ether to the DAO was able to help determine the investments the DAO made. All of this was done through code: snippets of computer programs written in Solidity, Ethereum’s smart contract language of choice. The DAO is actually a specific instance of a generic form of DAO, or Decentralized Autonomous Organization (Ethereum has also called them Democratic Autonomous Organizations). In the height of hubris, the first DAO called itself the DAO, something akin to the first corporation calling itself “The Corporation.”

Don’t worry if your head is spinning; it’s a lot to take in and a paradigm shift for sure. I’ve left audiences in a collective coma talking about the future of DAOs. Suffice it to say, if half the words in the preceding paragraph were befuddling, you should start learning, and fast. This is the future and it’s coming faster than you think.

Regardless, what happened next in the story of the DAO is nothing short of extraordinary. People started throwing money at the DAO: millions of dollars, something north of $150 million at one point. Then disaster struck. Remember, the DAO is just a computer program running on a distributed computer. Someone realized they could send some instructions to the computer program and simply direct all that money to themselves. It was elegant and simple. Poof: $60 million in ether was drained from the DAO. The Ethereum crowd was in shock. Their shining example of the future had just been hacked. Or had it? The hacker claimed the program acted as it was programmed to act. He was just able to interact with that program in such a way that earned him $60 million. Now Ethereum is facing an existential crisis. The whole point of a DAO is an unstoppable, immutable program, but now that all this money has gone bye-bye, they want to stop that program and may fork the Ethereum blockchain to do so (or make a change to the underlying infrastructure to do so). But Ethereum’s crisis is not the subject of this article. The subject is hacking. You see, this is the first case where hacking may not really be hacking. In fact, every case may be the same.
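The flaw the attacker exploited is now widely known as a reentrancy bug: the DAO contract sent ether out before updating the caller’s recorded balance, so the recipient’s own code could call back into the withdrawal function and get paid again. The real contract was written in Solidity; what follows is a deliberately simplified Python simulation of that ordering mistake, with all class and function names hypothetical:

```python
# Simplified simulation of a DAO-style reentrancy flaw (names hypothetical).
# The bug: the "contract" pays out BEFORE zeroing the member's balance, so a
# malicious recipient can re-enter withdraw() while its balance is still set.

class VulnerableDAO:
    def __init__(self, balances):
        self.balances = dict(balances)       # member -> recorded ether stake
        self.total = sum(balances.values())  # ether actually held

    def withdraw(self, member, receive_callback):
        amount = self.balances.get(member, 0)
        if amount > 0 and self.total >= amount:
            self.total -= amount
            receive_callback(amount)         # external call: attacker code runs here
            self.balances[member] = 0        # too late; already re-entered above

class Attacker:
    def __init__(self, dao, member):
        self.dao, self.member, self.stolen = dao, member, 0

    def receive(self, amount):
        self.stolen += amount
        # Re-enter while our recorded balance has not yet been zeroed.
        if self.dao.total >= self.dao.balances[self.member]:
            self.dao.withdraw(self.member, self.receive)

    def drain(self):
        self.dao.withdraw(self.member, self.receive)

dao = VulnerableDAO({"attacker": 10, "honest": 90})
atk = Attacker(dao, "attacker")
atk.drain()
print(atk.stolen)  # far more than the attacker's 10-ether stake
```

The fix, now canonized in Solidity practice as the “checks-effects-interactions” pattern, is simply to zero the balance before making the external call.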

Computers do what you tell them to do

In the United States, the principal anti-hacking law is the Computer Fraud and Abuse Act (CFAA). However, much has been made about the ambiguity of the law. The law criminalizes someone who “intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains … (C) information from any protected computer.” A protected computer is defined so broadly that it covers just about any computer attached to the Internet. The act was used in the prosecution of Aaron Swartz, who downloaded massive numbers of articles from JSTOR. As a Harvard researcher, he was entitled to access those files, though not in the manner he did (a potential violation of the JSTOR terms of service). While it has been surmised that his intent was to upload all the articles for free access, he never did so, having been arrested before he could. Regardless, that would have been a violation of copyright law, not the CFAA. The question here is whether violating a site’s terms of service “exceeds authorized access” and is thus a federal felony.

Another notorious example is Lori Drew. She was prosecuted for creating a fake MySpace page and using that page to court, then taunt, a teenage girl, who later committed suicide. Again, a violation of MySpace’s terms of service, and again, a federal felony.

Finally, there is the case of Andrew “Weev” Auernheimer. Weev accessed an AT&T website used by iPad owners to register their devices. When the website was accessed with a device’s ID number, if the owner had previously registered, it displayed the email address they registered with. Weev wrote a script that cycled through ID numbers and grabbed the email addresses. In other words, he accessed a publicly facing website and simply incremented the ID numbers.
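The underlying weakness, sequential guessable identifiers with no authentication check, is what security practitioners now call an insecure direct object reference. As a minimal sketch of the technique described above, using a purely hypothetical in-memory registry in place of any real service:

```python
# Hypothetical sketch of ID enumeration: the dict stands in for an
# unauthenticated web endpoint that returns an email when queried with a
# sequential device ID. No real service or data is involved.

registry = {  # hypothetical device-ID -> registered email
    1001: "alice@example.com",
    1002: "bob@example.com",
    1004: "carol@example.com",
}

def lookup(device_id):
    """Stand-in for the unauthenticated lookup the website performed."""
    return registry.get(device_id)

# The "script" is nothing more than a loop over predictable IDs.
harvested = [email for i in range(1000, 1010) if (email := lookup(i))]
print(harvested)
```

Note that nothing here is a break-in in the traditional sense; every request looks like an ordinary, well-formed lookup, which is precisely what makes the legal question so murky.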

Neither of the people in the previous two cases is a shining example of a model citizen. Swartz, by contrast, is more of a Robin Hood character than a swashbuckling criminal. But the question remains: is what they did (on a technical basis) so heinous? If I were to create a website with a link on the front page that says “You are not authorized to click this button,” and you clicked it, and it provided information on a second page, you’d now be a criminal. Does that seem right?

While hacking is defined on a technical basis (the unauthorized access, or exceeding of authorized access, of a computer), the criminality seems based more on the results, motives or intent. Clearly a case for prosecutorial discretion. No sane prosecutor would contemplate putting you on trial for clicking that button, but Weev was a “bad” person. The prosecutor in that case said, “His entire adult life has been dedicated to taking advantage of others, using his computer expertise to violate others’ privacy, to embarrass others, to build his reputation on the backs of those less skilled than he.” In this case, Weev wasn’t trying to spam the email addresses or gain financially; he was out to embarrass AT&T for its bad security.

You don’t have to be a jerk to be scared of the law

But what about security researchers? White hat hackers whose job it is to expose security vulnerabilities, with the aim of benefiting society by making it more secure. They are scared: scared of prosecution by an overzealous prosecutor, or of an overly defensive company making a federal case out of a genuine desire to do good. Rather than shore up their security, many companies choose to hide behind the law, going after security researchers rather than improving their own products or spending the resources up front to build security in.

While I don’t have a good suggestion for codifying a law that punishes evildoers while not punishing saints, I do know that the current state is not sustainable. The criminality should lie in the results, not the mechanism.

Which brings us back to Ethereum and the DAO. Ethereum is an experiment. It portends a future state of truly revolutionary computing. The DAO was an experiment. As with any start-up, it’s hard to spend money on security when you’re trying to build your product. But as the DAO shows, security can’t be an afterthought, even when you’re just experimenting.




1st Amendment Privacy Regulation

Agency Information Collection Activities: Arrival and Departure Record (Forms I-94 and I-94W) and Electronic System for Travel Authorization

June 5th, 2016

U.S. Customs and Border Protection
Attn: Paperwork Reduction Act Officer
Regulations and Rulings
Office of Trade
90 K Street NE.
10th Floor
Washington, DC 20229-1177.

I am writing in response to the notice published in the Federal Register on June 23, 2016, entitled “Agency Information Collection Activities: Arrival and Departure Record (Forms I-94 and I-94W) and Electronic System for Travel Authorization.”

I am responding to the question of “whether the collection of information is necessary for the proper performance of the functions of the agency, including whether the information shall have practical utility.”

The proposed changes to the I-94W and I-94 forms, albeit small, have potentially grave ramifications for the fundamental ideals upon which the United States is founded, and practically will result in no net improvement to the security of the country.

Constitutional Problems – Chilling effect on speech

In 1996, a three judge panel from the Eastern District of Pennsylvania declared the Communications Decency Act unconstitutional. Judge Dalzell, writing the opinion of court, declared: “[T]he Internet may fairly be regarded as a never-ending worldwide conversation. The Government may not, through the CDA, interrupt that conversation. As the most participatory form of mass speech yet developed, the Internet deserves the highest protection from governmental intrusion (emphasis added).”
The Internet, in its present form, is used by billions of individuals around the world to communicate with each other. Whether for business, pleasure, entertainment, enlightenment or political discourse, social media on the Internet is perhaps the principal forum today by which people of diverse cultures, countries and mindsets interact on a daily basis. Ostensibly, the objective of the form change is to identify the social media profiles of visitors to the United States. Those profiles will be reviewed and analyzed, whether by automated or manual means. Potentially, individuals whose social media profiles indicate they are in some way threatening to the United States will be prohibited from entry, or their entry will be more closely scrutinized.
What is more likely is the following outcome:
(1) Individuals with controversial writings will choose not to visit the United States, reducing the diversity of ideas and discussion on those topics (within the geographic United States).
(2) Individuals with controversial thoughts will scrutinize their social media presence and avoid discussions on those thoughts on what Judge Dalzell called “a never-ending worldwide conversation.” This will reduce the diversity of ideas and discussions on those topics (on the Internet).

The chilling effect is not just on foreign nationals but negatively affects the ability of United States citizens to listen to and discuss controversial topics with foreigners abroad. In 1965, the Supreme Court in Lamont v. Postmaster General, 381 U.S. 301, struck down section 305 of the Postal Service and Federal Employees Salary Act because it required the Postmaster General to detain foreign mailings of communist political propaganda unless the addressee affirmatively acknowledged their acceptance of and desire to receive such material. The Supreme Court recognized that this would reduce the recipient’s unfettered access to constitutionally protected speech, and thus the act was unconstitutional. The courts have consistently ruled that acts of government that have a chilling effect on speech, even when they impose no direct prohibition, are nevertheless unconstitutional. This change to forms I-94 and I-94W will have a similar effect.

As to the necessity of the proposed change to the function of the agency, an unconstitutional act can never be necessary.

Practical Utility of the proposed change

Selection bias is defined as “selection of individuals, groups or data for analysis in such a way that proper randomization is not achieved, thereby ensuring that the sample obtained is not representative of the population intended to be analyzed.” The simple fact is that those attempting to enter the United States to perform terrorist acts are simply not going to list their Jihadi forum screennames on the I-94 forms. Those filling out this optional section are most likely to be people who believe the mundanity of their social media presence leaves them immune from any issue with entering the U.S. This will result in three practical problems:
(1) While Facebook, Twitter and a few others constitute the biggest players in social media, there are thousands upon thousands of smaller social media sites catering to every niche, minority and social group. Further, many people maintain multiple identities on different platforms. Any collection of information will, no doubt, be incomplete.
(2) Large amounts of data from visitors who pose no threat will be collected, resulting in wasted effort and resources by the government to review that data, whether by automated or manual means.
(3) Since many of the most threatening visitors or potential visitors will provide no information, or sanitized information only, the people this is most likely to stop are those whose social media posts or connections are taken out of context, or who, while not representing a threat to the U.S., have controversial views. This will result in investigatory effort spent on, and appeals from, individuals who have been wrongly denied entry. Additionally, for those who are denied entry, it will result in a chilling effect and an inability for those in the U.S. to interact with, learn from and discuss topics with the denied party.

The net result is that the proposed change is likely subject to a claim of unconstitutionality and, practically, will not achieve the desired ends.


R. Jason Cronk, Esq.
Florida Bar #90009