The argument over end-to-end encryption is heating up as work on TLSv1.3 progresses in the IETF. The naysayers, however, are also out in force, arguing that end-to-end encryption is a net negative. What is the line of argument? According to a recent article in CircleID, it seems to be something like this:
- Governments have a right to asymmetrical encryption capabilities in order to maintain order. In other words, governments have the right to ensure that all private communication is ultimately readable by the government for any lawful purpose.
- Standards bodies that enable end-to-end encryption, and thereby prevent this absolute governmental good, endanger society. The leaders of such standards bodies may, in fact, be prosecuted for their role in subverting government power.
The idea of end-to-end encryption is recast as a form of extremism, a radical idea that should not be supported by the network engineering community. Is end-to-end encryption really extremist? Is it really a threat to the social order?
Let me begin here: this is not just a technical issue. There are two opposing worldviews in play. Engineers don't often study worldviews or philosophy, so these questions tend to get buried in a lot of heated rhetoric.
In the first, people are infinitely malleable and will be, or should be, shaped into a particular moral mold by someone, with the government being the most logical candidate. In this view, the government must always have asymmetry; if any individual citizen, or any group of citizens, can stand against the government, then the government is under direct existential threat. By implication, if government is the founding order of a society, then society itself is at risk.
In the second, the government arises out of the moral order of the people themselves. In this view, the people have the right to subvert the government; this subversion is only a problem if the people are ethically or morally incompetent in a way that causes such undermining to destroy the society. However, the catch in this view is this: because the government grows out of the people, the undermining of the government is the least of your worries. For if the society is immoral, the government, being made up of people drawn from that society, will be immoral as a matter of course. To believe a moral government can be drawn from an immoral population is, in this view, the height of folly.
What we are doing in our modern culture is trying to have it both ways. We want the government to provide the primary ordering of our society, but we want the people to be sovereign in their rights as well. Leaving aside the question of who is right, this is a worldview issue that cannot be solved on technical grounds. How do we want our society ordered? Do we want it grounded in individuals who have self-discipline and constraint, or in government power to control and care for individuals who do not have self-discipline and constraint? The question is truly just that stark.
Now, to the second point: what of the legal basis laid out in the CircleID article? The author points to a settlement around the 3G standard in which one participant claimed its business was harmed because its location tracking software was not considered for the standard, primarily because the members of the standards body did not want to enable user tracking in this way. The company claimed the members of the standards body had acted in concert, amounting to a conspiracy, and hence that the actions of the standards body fell under antitrust law.
Since there was a settlement, there was no actual ruling, and I'm not a lawyer, but the issues in the case of encryption technology seem different from those considered in the case pointed to above (TruePosition, Inc. v. LM Ericsson Telephone Co., No. 11-4574, E.D. Pa. Oct. 4, 2012). Assume someone uses a piece of software that implements an encryption standard in the commission of a crime. Turn the software into a car, and the argument would need to look something like this:
Since the car used for the crime depended on tires that were made by a particular company for general commercial use, which depended on the specifications set out by a standards body made up of a number of tire manufacturers in order to allow for interoperability between the various manufacturers in the market, the standards body is responsible for the crime.
I'm just not certain this would be a very compelling argument; you need to move responsibility from the criminal to the manufacturer, and then from the manufacturer to the standards body. So you would need to prove that the manufacturer created the product primarily for use in a criminal enterprise, and then that the standards body created the standard primarily to allow the successful manufacture of (interoperable) software designed for criminal use. This is a very different line of reasoning from the one used in the case given above.
For the argument against end-to-end encryption to stand, two things must happen. First, we must decide that we want the kind of society where we are essentially wards of an all-knowing state. Second, we must build some sort of legal theory that transfers criminal liability from the criminal to the manufacturer, and from the manufacturer to the standards bodies the manufacturer participates in. I am not certain how such a legal theory might work, but I am quite certain the unintended consequences of such a theory would be disastrous in many ways we cannot now imagine.
Written by Russ White, Network Architect at LinkedIn