Encryption standards and Bullrun

The Enigma system had been developed by a small group of inventors who believed the messages it encoded to be practically undecipherable. They turned out to be anything but when a much larger group of analysts applied themselves to cracking them. Solving a puzzle is so much more motivating than creating one! The modern encryption methods used by most businesses are open standards. They are designed so that understanding how they work is of no help in deciphering a message without knowing the right key, and they can be scrutinised by all the mathematical brains in the world who care to look. The fact that the inner operations of the methods are common knowledge makes them no less secure. On the contrary, opening them up to analysis before they enter general use greatly reduces the chance that ways to attack them will be discovered in the future.
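
The principle can be made concrete in a few lines of Python. The sketch below assumes the open-source cryptography package (any mainstream library would illustrate the same point). The Fernet scheme it uses is fully documented in public, yet knowing how it works is of no help without the key.

```python
# pip install cryptography  -- an assumed dependency for this sketch
from cryptography.fernet import Fernet

# The Fernet scheme (AES plus an authentication tag) is an open,
# publicly documented standard; only the key below is secret.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"meet at dawn")
print(token)                  # gibberish to anyone without the key
print(cipher.decrypt(token))  # b'meet at dawn'
```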

The organisation that evaluates and standardises the encryption methods generally used in global commerce, and probably by most governments as well, is a U.S. government agency called the National Institute of Standards and Technology (NIST). In 2013, Edward Snowden revealed the existence of a secret National Security Agency (NSA) programme known as Bullrun, whose aim was to undermine the effectiveness of commonly used encryption methods. Among the tactics employed were attempts to subvert NIST standards. It is alleged that, over several years in the mid-2000s, the NSA took control of the authorship of a NIST standard that specified a supposedly secure method of generating random numbers called Dual_EC_DRBG. Unpredictable random numbers are the raw material from which many encryption keys are made. In reality, the NSA is claimed to have had a secret way of predicting which numbers the method would generate, so messages encrypted using techniques based on this random number generator would have been open to decoding by the U.S. authorities.
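
The danger is easy to demonstrate in miniature. The Python sketch below is emphatically not the Dual_EC_DRBG mechanism itself, which involved elliptic-curve mathematics; it merely illustrates that a key drawn from a generator whose internal state an attacker can predict is no secret at all. The seed value is, of course, hypothetical.

```python
import random   # deterministic generator, used here as the "weak" source
import secrets  # cryptographically secure source (Python 3.9+ for randbytes)

# Hypothetical scenario: the generator's seed (its internal state) is
# known to the attacker -- the essence of the alleged backdoor.
seed = 1234567890
weak_key = random.Random(seed).randbytes(16)   # victim's "secret" key

# The attacker replays the generator and recovers the identical key.
recovered = random.Random(seed).randbytes(16)
assert recovered == weak_key

# A cryptographically secure source offers no such replay shortcut.
strong_key = secrets.token_bytes(16)
```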

In the event, the NSA’s attempt to introduce an ineffective encryption method had already attracted attention well before the Snowden revelations. Question marks over the mathematics the standard was based on were raised even before it was adopted in June 2006. By the summer of 2007, details of how to crack it had been published, and November 2007 saw the first suggestion that the flawed method had been deliberately planted by the American security services. On the one hand, this is a tremendous vindication of the theory that open standards lead to secure encryption. On the other hand, it is telling that the standard continued to be available in a number of important software products even after it had been demonstrated to be ineffective. In one of them it was even the default option, which meant anyone who was not up on encryption would be likely to end up using it.

The Bullrun documentation that is in the public domain claims that the NSA has the capacity to defeat a great deal of the encryption in common use on the Internet. Exactly what this entails remains a secret, partly because Snowden did not have access to all the information himself and partly because the NSA has persuaded journalists not to reveal further details for reasons of national security. However, in the light of the Dual_EC_DRBG story, it seems probable that most of the Bullrun capabilities result from software that either uses outdated encryption methods or that has some other kind of flaw the NSA can take advantage of.

Although it is certainly possible that the NSA has discovered secret mathematical means of cracking commonly used encryption standards, this is most likely to be true of standards that are no longer quite state-of-the-art. It would be genuinely astounding if the NSA were so far ahead of the curve that they had managed to defeat encryption methods that most mathematicians still regard as watertight. Because of their open nature, most of the world retains its trust in the best of the NIST standards, even in the light of the revelation that there have been active attempts at sabotage.

The advice that encryption methods should always be subject to open review is repeated like a mantra by many security experts. Surprisingly, however, the U.S. security services choose to ignore it at least some of the time. The National Security Agency maintains a Suite A of encryption methods that are kept secret, while Suite B consists of standard, publicly documented NIST methods such as AES. As the choice of letters betrays, the NSA regards Suite A as the more secure option and employs it for especially sensitive information. That they use Suite B at all is a consequence of the fact that Suite A encryption can only be used for communication between organisations that can be trusted to keep the hardware and software that perform it under lock and key. Otherwise, the computers could be analysed to reveal how the methods work.

While it is true that an attacker attempting to eavesdrop on Suite A communications would have to figure out how the encryption method worked before she could start trying to crack it, this hardly seems to make the task all that much more difficult. After all, mathematicians all round the world have tried to attack the well-known Suite B methods, and none has succeeded; otherwise, the methods would have been removed from the canon. Even if the National Security Agency does have an unmatched team of world-class experts, it remains possible that one or more of the Suite A methods has a weakness none of them has noticed. It seems surprising that they value secrecy over the additional assurance offered by open review.

As exemplified in the discussion of the San Luca code above, an encryption method whose inner workings are understood can be hacked by simply testing all possible keys until one is found that decrypts a code to sensible text rather than nonsense. Like most encryption techniques, the standard NIST methods are not immune to this sort of brute-force attack. The problem is addressed by allowing such a large number of possible keys that trying them all out one by one would take an impracticably long time.
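
The idea can be seen in miniature in the Python sketch below, which brute-forces a toy XOR cipher whose key space has been shrunk to just 26 × 26 = 676 keys. A modern 128-bit key offers roughly 3.4 × 10^38 possibilities, which is why the same loop is hopeless against a proper standard.

```python
import string
from itertools import product

ALPHABET = string.ascii_lowercase

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: the same routine encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Toy cipher with a deliberately tiny key space: two lowercase letters.
ciphertext = xor_cipher(b"open standards", b"qx")

def looks_sensible(text: bytes) -> bool:
    # Crude stand-in for "sensible text rather than nonsense".
    return all(chr(b) in ALPHABET + " " for b in text)

# The brute-force loop: try every key, keep those yielding readable text.
candidates = [a + b for a, b in product(ALPHABET, repeat=2)
              if looks_sensible(xor_cipher(ciphertext, (a + b).encode()))]
print(candidates)  # 'qx' is among the few survivors
```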

However, the power and speed of computer hardware are constantly increasing; one common estimate is that they double every two years. A brute-force attack that is infeasible now could come within the reach of attackers in several years’ time. Encryption guidelines published in 2016 by the German Federal Office for Information Security state unequivocally that they only apply until the end of 2022. This means that any systems intended to operate past that point have to allow for the eventuality that the encryption methods with which they were originally designed to run might have to be replaced by new ones. Making this possible is a sensible precaution anyway, because nobody can predict when a breakthrough in mathematics might reveal a previously undiscovered means of cracking a currently secure encryption method.
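
A back-of-the-envelope calculation shows the time scales involved. The trial rate assumed below, a trillion keys per second, is purely illustrative; the striking point is how little difference even decades of hardware doubling make to a 128-bit key.

```python
# Assumed figures, for illustration only.
trials_per_second = 10**12
seconds_per_year = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int) -> float:
    # Worst case: trying every one of the 2**key_bits possible keys.
    return 2**key_bits / trials_per_second / seconds_per_year

print(f"{years_to_exhaust(64):.1e} years for a 64-bit key")    # ~0.6 years
print(f"{years_to_exhaust(128):.1e} years for a 128-bit key")  # ~1e19 years

# Hardware doubling every two years removes one bit of margin per two
# years: after 40 years (20 doublings), a 128-bit key still has the
# strength of a 108-bit key today -- far beyond reach.
```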

Tweet about the NSA’s attempt to undermine an encryption standard
