
Public-key encryption

So far, we have only done secret-key encryption. Suppose that you could have a cryptosystem that didn’t involve a single secret key, but instead had a key pair: one public key, which you freely distribute, and a private one, which you keep to yourself.

People can encrypt information intended for you by using your public key. The information is then impossible to decipher without your private key. This is called public-key encryption.

For a long time, people thought this was impossible. However, starting in the 1970s, such algorithms started appearing. The first publicly available encryption scheme was produced by three cryptographers from MIT: Ron Rivest, Adi Shamir and Leonard Adleman. The algorithm they published is still the most common one today, and carries the first letters of their last names: RSA.

Public-key algorithms aren’t limited to encryption. In fact, you’ve already seen a public-key algorithm in this book that isn’t directly used for encryption. There are actually three related classes of public-key algorithm:

  1. Key exchange algorithms, such as Diffie-Hellman, which allow you to agree on a shared secret across an insecure medium.
  2. Encryption algorithms, such as the ones we’ll discuss in this chapter, which allow people to encrypt without having to agree on a shared secret.
  3. Signature algorithms, which we’ll discuss in a later chapter, which allow you to sign any piece of information using your private key in a way that allows anyone else to easily verify it using your public key.

Why not use public-key encryption for everything?

At face value, it seems that public-key encryption algorithms make all our previous secret-key encryption algorithms obsolete. We could just use public-key encryption for everything, avoiding all the added complexity of having to do key agreement for our symmetric algorithms. However, when we look at practical cryptosystems, we see that they’re almost always hybrid cryptosystems: while public-key algorithms play a very important role, the bulk of the encryption and authentication work is done by secret-key algorithms.

By far the most important reason for this is performance. Compared to our speedy stream ciphers (native or otherwise), public-key encryption mechanisms are extremely slow. RSA can only operate on messages up to its key size, which for a 2048-bit key means 256 bytes. Encrypting such a block takes about 0.29 megacycles, and decrypting it takes a whopping 11.12 megacycles. To put this into perspective, symmetric-key algorithms need on the order of 10 cycles per byte in either direction, so decrypting those same 256 bytes takes them approximately 3 kilocycles, which is about 4000 times faster than the asymmetric version. The state of the art in secure symmetric ciphers is even faster: AES-GCM with hardware acceleration or Salsa20/ChaCha20 only need about 2 to 4 cycles per byte, further widening the performance gap.
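
As a quick sanity check on those numbers:

\[\frac{11.12 \times 10^{6}\ \text{cycles}}{256\ \text{bytes} \times 10\ \text{cycles/byte}} \approx 4300\]

which is in the same ballpark as the factor of about 4000 quoted above.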

There are a few other problems with most practical cryptosystems. For example, RSA can’t encrypt anything larger than its modulus, which is generally at most 4096 bits, far smaller than the largest messages we’d like to send. Still, the most important reason is the speed argument given above.

RSA

As we already mentioned, RSA is one of the first practical public-key encryption schemes. It remains the most common one to this day.

Encryption and decryption

RSA encryption and decryption relies on modular arithmetic. You may want to review the modular arithmetic primer before continuing.

This section describes the simplified math problem behind RSA, commonly referred to as “textbook RSA”. By itself, this doesn’t produce a secure encryption scheme. We’ll see a secure construction called OAEP that builds on top of it in a later section.

In order to generate a key, you pick two large prime numbers \(p\) and \(q\). These numbers have to be picked at random, and in secret. You multiply them together to produce the modulus \(N\), which is public. Then, you pick an encryption exponent \(e\), which is also public. Usually, this value is either 3 or 65537. Because those numbers have a small number of 1’s in their binary expansion, you can compute the exponentiation more efficiently. Put together, \((N, e)\) is the public key. Anyone can use the public key to encrypt a message \(M\) into a ciphertext \(C\):

\[C \equiv M^e \pmod{N}\]

The next problem is decryption. It turns out that there is a value \(d\), the decryption exponent, that can turn \(C\) back into \(M\). That value is fairly easy to compute assuming that you know \(p\) and \(q\), which we do: \(d\) is the inverse of \(e\) modulo \((p - 1)(q - 1)\). Using \(d\), you can decrypt the message like so:

\[M \equiv C^d \pmod{N}\]
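
To make the textbook scheme concrete, here is a minimal Python sketch, using tiny primes purely for illustration (it assumes Python 3.8+ for the built-in modular inverse via pow):

    # Toy "textbook RSA" with tiny primes; purely illustrative, never secure.
    p, q = 61, 53                        # the secret primes (far too small for real use)
    N = p * q                            # the public modulus
    e = 17                               # public exponent (65537 in practice; 17 keeps the toy numbers small)
    d = pow(e, -1, (p - 1) * (q - 1))    # decryption exponent: d * e = 1 mod (p - 1)(q - 1)

    M = 42                               # the message, encoded as a number smaller than N
    C = pow(M, e, N)                     # encryption:  C = M^e mod N
    assert pow(C, d, N) == M             # decryption:  M = C^d mod N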

The security of RSA relies on that decryption operation being infeasible without knowing the secret exponent \(d\), and on the secret exponent \(d\) being very hard (practically impossible) to compute from the public key \((N, e)\). We’ll see approaches for breaking RSA in the next section.

Breaking RSA

Like many cryptosystems, RSA relies on the presumed difficulty of a particular mathematical problem. For RSA, this is the RSA problem: to find the plaintext message \(M\), given a ciphertext \(C\) and a public key \((N, e)\), in the equation:

\[C \equiv M^e \pmod{N}\]

The easiest way we know how to do that is to factor \(N\) back into \(p \cdot q\). Given \(p\) and \(q\), the attacker can just repeat the process that the legitimate owner of the key does during key generation in order to compute the private exponent \(d\).
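
At toy scale, that attack is only a few lines of Python; the crucial point is that the factoring step is hopeless for real key sizes:

    # Toy private-key recovery: factor a tiny modulus, then rebuild d from p and q.
    from math import isqrt

    def recover_d(N, e):
        for p in range(2, isqrt(N) + 1):     # trial division: only feasible for toy N
            if N % p == 0:
                q = N // p
                return pow(e, -1, (p - 1) * (q - 1))

    assert recover_d(3233, 17) == 2753       # the private exponent of the earlier toy key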

Fortunately, we don’t have an algorithm that can factor such large numbers in reasonable time. Unfortunately, we also haven’t proven it doesn’t exist. Even more unfortunate is that there is a theoretical algorithm, called Shor’s algorithm, that would be able to factor such a number in reasonable time on a quantum computer. Right now, quantum computers are far from practical, but it does appear that if someone in the future manages to build one that’s sufficiently large, RSA becomes ineffective.

In this section, we have only considered a private key recovery attack that attacks the purely abstract mathematical RSA problem by factoring the modulus. In the next section, we will see all sorts of realistic attacks on RSA that rely on flaws in the implementation, rather than the mathematical problem stated above.

Implementation pitfalls

Right now, there are no known practical complete breaks against RSA. That’s not to say that systems employing RSA aren’t routinely broken. As with most broken cryptosystems, there are plenty of cases where sound components, improperly applied, result in a useless system.

PKCS#1 v1.5 padding

Salt

Salt¹ is a provisioning system written in Python. It has one major flaw: it has a module named crypt. Instead of reusing existing complete cryptosystems, it implements its own, using RSA and AES provided by a third-party package.

For a long time, Salt used a public exponent (\(e\)) of 1, which meant the encryption phase didn’t actually do anything: \(P^e \equiv P^1 \equiv P \pmod{N}\). This meant that the resulting ciphertext was in fact just the plaintext. While this issue has since been fixed, it only goes to show that you probably shouldn’t implement your own cryptography. Salt currently also supports SSH as a transport, but the aforementioned DIY RSA/AES system remains and is, at the time of writing, still the recommended and default transport.
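
The flaw is easy to demonstrate; with \(e = 1\), modular exponentiation is the identity function (the numbers below are arbitrary):

    # With e = 1, "encryption" leaves the plaintext untouched.
    N, P = 3233, 42      # any modulus and any plaintext smaller than it
    assert pow(P, 1, N) == P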

OAEP

OAEP, short for optimal asymmetric encryption padding, is the state of the art in RSA padding. It was introduced by Mihir Bellare and Phillip Rogaway in 1995. Its structure looks like this:

[Diagram: the OAEP padding construction, a two-column “ladder” combining the zero-padded message with the random block \(R\) via the functions \(G\) and \(H\).]

The thing that eventually gets encrypted is \(X \| Y\), which is \(n\) bits long, where \(n\) is the number of bits of \(N\), the RSA modulus. The padding scheme takes a random block \(R\) that’s \(k\) bits long, where \(k\) is a constant specified by the standard. The message is first padded with zeroes to be \(n - k\) bits long. If you look at the above “ladder”, everything on the left half is \(n - k\) bits long, and everything on the right half is \(k\) bits long. The random block \(R\) and the zero-padded message \(M \| 000\ldots\) are combined using two one-way functions, \(G\) and \(H\). A one-way function is a function that’s very easy to compute in one direction and very hard to reverse. In practice, these are built from cryptographic hash functions; we’ll see more about those later.

As you can tell from the diagram, \(G\) takes \(k\) bits and turns them into \(n - k\) bits, and \(H\) is the other way around, taking \(n - k\) bits and turning them into \(k\) bits.

The resulting blocks \(X\) and \(Y\) are concatenated, and the result is encrypted using the standard RSA encryption primitive, to produce the ciphertext.

To see how decryption works, we reverse all the steps. The recipient gets \(X \| Y\) when decrypting the message. They know \(k\), since it is a fixed parameter of the protocol, so they can split up \(X \| Y\) into \(X\) (the first \(n - k\) bits) and \(Y\) (the final \(k\) bits).

The arrows in the previous diagram show the padding being applied. Reverse the arrows on the sides of the ladder, and you can see how to undo the padding.

We want to get to \(M\), which is in \(M \| 000\ldots\). There’s only one way to compute that, which is:

\[M \| 000\ldots = X \oplus G(R)\]

Computing \(G(R)\) is a little harder:

\[G(R) = G(H(X) \oplus Y)\]

As you can see, at least for some definitions of the functions \(H\) and \(G\), we need all of \(X\) and all of \(Y\) (and hence the entire encrypted message) in order to learn anything about \(M\). There are many functions that would be a good choice for \(H\) and \(G\); in practice they are based on cryptographic hash functions, which we’ll discuss in more detail later in the book.
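
To tie the pieces together, here is a rough Python sketch of the ladder above, with SHA-256 standing in for \(H\) and an MGF1-style construction standing in for \(G\). It follows the diagram in this section, not the exact PKCS#1 OAEP standard (which adds a label hash and other details), and every size and name in it is an illustrative assumption:

    # A sketch of the OAEP-style ladder above; NOT the exact PKCS#1 standard.
    import hashlib, os

    K = 256                                   # k: bits in R, here one SHA-256 output

    def H(data):                              # H: n - k bits -> k bits
        return hashlib.sha256(data).digest()

    def G(seed, out_len):                     # G: k bits -> n - k bits (MGF1-style)
        out = b""
        counter = 0
        while len(out) < out_len:
            out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:out_len]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def oaep_pad(message, n_bits):
        data_len = (n_bits - K) // 8          # bytes in the left half of the ladder
        padded = message + b"\x00" * (data_len - len(message))   # M || 000...
        R = os.urandom(K // 8)                # the random block R
        X = xor(padded, G(R, data_len))       # X = (M || 000...) xor G(R)
        Y = xor(H(X), R)                      # Y = H(X) xor R
        return X + Y                          # X || Y is what RSA then encrypts

    def oaep_unpad(block, n_bits):
        data_len = (n_bits - K) // 8
        X, Y = block[:data_len], block[data_len:]
        R = xor(H(X), Y)                      # R = H(X) xor Y
        return xor(X, G(R, data_len))         # M || 000... = X xor G(R)

    block = oaep_pad(b"attack at dawn", 2048)
    assert oaep_unpad(block, 2048).rstrip(b"\x00") == b"attack at dawn"

The final rstrip is a shortcut for the example; a real implementation marks the message boundary explicitly instead of just stripping trailing zero bytes.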

Remaining problem: unauthenticated encryption

Most public-key encryption schemes can only encrypt small chunks of data at a time, much smaller than the messages we want to be able to send. They are also generally quite slow, much slower than their symmetric counterparts. Therefore public-key cryptosystems are almost always used in conjunction with secret-key cryptosystems.
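
As a sketch of what such a hybrid construction looks like, here is a rough example using the third-party Python cryptography package (assumed to be installed): RSA-OAEP protects only a small symmetric key, and that key, used with an authenticated symmetric cipher, does the bulk of the work.

    # Hybrid encryption sketch using the third-party `cryptography` package.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender: the bulk work is symmetric; RSA only wraps the small symmetric key.
    message = b"a message far larger than a single RSA block" * 1000
    sym_key = ChaCha20Poly1305.generate_key()
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(sym_key).encrypt(nonce, message, None)
    wrapped_key = public_key.encrypt(sym_key, oaep)

    # Recipient: unwrap the symmetric key, then decrypt the bulk ciphertext.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert ChaCha20Poly1305(recovered_key).decrypt(nonce, ciphertext, None) == message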

When we discussed stream ciphers, one of the remaining issues that we were facing was that we still had to exchange secret keys with a large number of people. With public-key cryptosystems such as public-key encryption and key exchange protocols, we’ve now seen two ways that we can solve that problem. That means that we can now communicate with anyone, using only public information, completely secure from eavesdroppers.

So far we’ve only been talking about encryption without any form of authentication. That means that while we can encrypt and decrypt messages, we cannot verify that the message is what the sender actually sent.

While unauthenticated encryption may provide secrecy, we have already seen that without authentication an active attacker can generally modify valid encrypted messages successfully, despite the fact that they don’t necessarily know the corresponding plaintext. Accepting these messages can often lead to secret information being leaked, meaning we don’t even get secrecy. The CBC padding attacks we’ve already discussed illustrate this.

As a result it has become evident that we need ways to authenticate as well as encrypt our secret communications. This is done by adding extra information to the message that only the sender could have computed. Just like encryption, authentication comes in both private-key (symmetric) and public-key (asymmetric) forms. Symmetric authentication schemes are typically called message authentication codes, while the public-key equivalent is typically called a signature.

First, we will introduce a new cryptographic primitive: hash functions. These can be used to produce both signature schemes as well as message authentication schemes. Unfortunately, they are also very often abused to produce entirely insecure systems.

  1. So, there’s Salt the provisioning system; salts, the things used in broken password stores; NaCl, pronounced “salt”, the cryptography library; NaCl, which runs native code in some browsers; and probably a bunch I’m forgetting. Can we stop naming things after it?

