Although it is oft-repeated that non-professionals should not implement cryptographic algorithms, we would like to give some concrete examples of how things can go wrong, and show how well-known cryptographic libraries have avoided these pitfalls. Hopefully, these examples will serve as danger signs to those who consider implementing cryptographic algorithms from scratch based on potentially imprecise specifications.
It’s often overlooked that every programming language is executed over an abstract machine. The abstract machine that C runs over is surprisingly complicated and has behaviours that are unexpected even to experienced developers. However, one does not need to go to the level of C to find unexpected behaviour: even modern assembly language runs over an abstract machine. This can easily be demonstrated by running an executable that accesses the same data item, say a lookup table, repeatedly. Parts of the lookup table will end up in the CPU’s L1/L2/L3 caches, and the time it takes to execute the parts of the program that access the table may radically decrease as the program runs. Unfortunately, when operations take different amounts of time depending on secret data, such as the key or the correct authentication code, there is potential for an attack. Similarly, branching behaviour can also affect timing, even if the code on both sides of the branch only deals with registers and takes the same number of micro-ops. Modern CPUs have complex branch-prediction logic that favours past branch outcomes; this can significantly affect timing and can lead to practical attacks.

However, it’s not only execution time that abstract machines influence. The C standard allows compilers to perform optimizations such as removing expressions whose value is not used and that produce no needed side effects (C11 standard §5.1.2.3 paragraph 4). As such, if one attempts, using a modern C compiler, to remove sensitive data by naively zeroing the memory region before freeing it, the generated code will often not contain the zero writes. Such subtleties surprise many programmers but are well known to those who regularly implement cryptographic algorithms (many of whom will have learnt from their own, or others’, previous mistakes). Countermeasures against these issues are not easy and sometimes require difficult trade-offs.
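To make the dead-store problem concrete, here is a minimal sketch (not taken from any particular library) contrasting a naive zeroing routine, which the compiler may legally elide, with the volatile-pointer workaround that hardened zeroing functions such as OpenSSL’s `OPENSSL_cleanse` are broadly based on. The function names are our own illustrative choices.

```c
#include <stddef.h>
#include <string.h>

/* Naive version: because the buffer is never read again before being
 * freed, the compiler may treat this memset as a dead store and remove
 * it entirely (permitted by C11 5.1.2.3p4). */
void clear_naive(unsigned char *buf, size_t len) {
    memset(buf, 0, len);
}

/* A common workaround: write through a volatile-qualified pointer so
 * each store counts as an observable side effect and cannot be
 * optimized away. Where available, C11 Annex K's memset_s or BSD's
 * explicit_bzero serve the same purpose. */
void clear_secure(unsigned char *buf, size_t len) {
    volatile unsigned char *p = buf;
    while (len--) {
        *p++ = 0;
    }
}
```

Note that both functions have identical behaviour on the C abstract machine; the difference only appears in what the optimizer is allowed to do to them.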
It’s relatively easy to avoid using lookup tables for S-boxes, but removing branches that can leak sensitive data requires the careful consideration of an expert.
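As one illustration of what removing such branches involves, here is a sketch (our own, for illustration) of a comparison routine of the kind used to check MACs. The first version returns as soon as a byte differs, so its running time leaks how long a matching prefix the attacker has guessed; the second always touches every byte and branches only on the public length. Hardened versions of the second pattern are what libraries expose as, for example, OpenSSL’s `CRYPTO_memcmp` or libsodium’s `sodium_memcmp`.

```c
#include <stddef.h>

/* Branching comparison: bails out at the first mismatching byte, so
 * execution time depends on secret data -- exactly the data-dependent
 * timing discussed above. Returns 0 when equal. */
int compare_leaky(const unsigned char *a, const unsigned char *b, size_t len) {
    for (size_t i = 0; i < len; i++)
        if (a[i] != b[i]) return 1;
    return 0;
}

/* Constant-time comparison: accumulates all byte differences with
 * XOR/OR and only branches on the (public) length, so timing does not
 * depend on where, or whether, the buffers differ. Returns 0 when
 * equal. */
int compare_ct(const unsigned char *a, const unsigned char *b, size_t len) {
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff != 0;
}
```

Even this is only a sketch: a production routine must also consider what the compiler and CPU may do to it, which is precisely why such code belongs in a vetted library.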
A software developer is often required to be an interpreter of the intentions and decisions of the software designers and of the broader context (social, cultural, technical, etc.) in which those decisions were made. What this means in practice is that the developer is often left with many choices when it comes to implementation. In the case of cryptography, some things may only have been decided in what a cryptographer would consider a very rough specification, such as: use HMAC-SHA-2 and AES. However, that leaves many questions unanswered. Should different keys be used for each? What block cipher mode should be used for AES? How should the Message Authentication Code (MAC) and the ciphertext be combined? How should the common pitfalls associated with both encrypt-then-MAC and MAC-then-encrypt be avoided? When using a standard off-the-shelf library’s high-level primitives, such questions are usually already answered by the authors of the library, at least when defaults are not overridden. For example, NaCl’s secretbox combines XSalsa20 for encryption and Poly1305 for authentication into a single, well-constructed authenticated-encryption primitive. These are strong, fast, well-chosen (if not universally recognised) algorithms and implementations that work well together and, non-coincidentally, avoid all the pitfalls mentioned above. Asking a developer not proficient in cryptography to put such a system together is inherently risky.
Another issue with implementing cryptographic systems is that some properties of these systems are rather fragile. A prime example is AES-CBC, which uses padding to extend the message to a multiple of the AES block size. If the developer decides to return a different error code when the padding is malformed than when the padding is correct but the decrypted plaintext doesn’t match the MAC, an attacker can use that difference as a padding oracle and recover the plaintext. At GDS, we have not only seen such issues in deployed systems but have also published a tool to demonstrate them. Such pitfalls are not unique to symmetric cryptographic primitives. It is not unheard of to see developers surprised that RSA signing and encryption operations both require secure random numbers. If one reads through the Wikipedia page on RSA, for example, there is nothing random about the mathematical operation. However, if implemented in that pure, unrandomized textbook form, both uses of RSA have some trivially unsuitable properties. For example, if a number between 0 and 65535 has been encrypted, one only needs to perform the RSA encryption operation 65536 times using the public key to know for certain which number has been “encrypted”. If that number is some form of pre-commitment, such as how much stock one is willing to buy, the effects can be devastating. Mature library implementations of RSA encryption avoid this pitfall by using a proper randomized encryption scheme such as RSAES-OAEP (PKCS #1 v2.0).
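The enumeration attack above can be demonstrated in a few lines. The sketch below uses deliberately tiny toy parameters (p = 61, q = 53, so n = 3233, e = 17 — our own illustrative choice, nothing like a real key size) because the attack is independent of key size: unpadded “textbook” RSA is deterministic, so the attacker simply encrypts every candidate with the public key and compares ciphertexts.

```c
#include <stdint.h>

/* Square-and-multiply modular exponentiation; adequate for these tiny
 * toy parameters (all intermediates fit comfortably in 64 bits). */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

/* Toy public key: n = 61 * 53 = 3233, e = 17. */
#define TOY_N 3233u
#define TOY_E 17u

/* Textbook RSA encryption: c = m^e mod n. No randomness anywhere, so
 * the same plaintext always produces the same ciphertext. */
uint64_t textbook_encrypt(uint64_t m) {
    return modpow(m, TOY_E, TOY_N);
}

/* The attack: given only the public key and a ciphertext of a value
 * known to lie in a small range, encrypt every candidate until the
 * ciphertexts match. Randomized padding (RSAES-OAEP) exists precisely
 * to defeat this. Returns (uint64_t)-1 if no candidate matches. */
uint64_t guess_plaintext(uint64_t ciphertext, uint64_t max) {
    for (uint64_t m = 0; m <= max; m++)
        if (textbook_encrypt(m) == ciphertext) return m;
    return (uint64_t)-1;
}
```

With a pre-commitment of 42, `guess_plaintext(textbook_encrypt(42), 100)` recovers 42 without ever touching the private key — and the cost for a real 2048-bit key and a 16-bit secret is still only 65536 public-key operations.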
Finally, the issue with hand-rolling cryptographic systems is that if and when they get broken, it is very hard to replace them without risking the ecosystem that the product created, paying a very high price, or both. For example, A5/1, used for GSM encryption, has long been known to be broken, yet replacing it has been almost impossible for over a decade thanks to the millions of handsets still in operation from decades ago. This has left billions of phones susceptible to a wide range of attacks from both governments and private enthusiasts. As the Internet of Things (IoT) gathers speed by the day and commodity devices such as TVs and kettles start being connected to the Internet, it is crucial to make sure that these systems have good security from the get-go and can be securely and remotely updated, with full user consent, if need be. Unfortunately, not everything can be updated remotely — consider the MIFARE Classic cards used in London’s Oyster transportation cards. There, a cipher with an extremely short 48-bit key was deployed to “secure” digital wallets. Upgrading thousands of entry points and millions of Oyster cards to ones capable of 128-bit AES encryption was probably not the cheapest of endeavours.
At GDS we see many products with incomplete crypto specifications, and resultant code that does not take into account the often complicated and subtle issues around the design, implementation, and use of cryptography. When such code is already in production, especially when it has been in use for years, the cost of replacing it can be prohibitively high. Hence our recommendation is to first do a thorough, complete evaluation of where the final product will be used, to properly establish context; then work out a full, precise specification that matches the established requirements; and finally use well-established cryptographic libraries’ components to implement that specification. Once the product is in use, it is also important to monitor whether the context in which it is deployed has changed. Not only can the cryptographic landscape change as new attacks are found, but those who designed the product may find that over the years its use case drifts from the original intention. In these cases a new requirements analysis, and potentially an amended specification and implementation, is warranted.