Client certificate authentication vs. password authentication
In the early days of computing, computers were protected by locked doors — if you had the key to open the door to get into the computer room, you could use the computer and all of its resources. Since they didn't store much of anything, that wasn't a big problem: the computer equipment itself was more valuable than any data that it might have protected.
These days, although we do expend some effort in protecting computer equipment whose value can run to a few thousand dollars, it's usually our data that we're most concerned about. This trend has been evolving for quite a while: after computers started moving out of large research institutions and into large corporations, the computer itself was still protected behind locked doors, but access to it was made through terminals, where one had to log in before gaining access to the computer and any of the data it was responsible for maintaining.
These logins were protected in a simple enough way, with a shared secret (a.k.a. a password). When the user account was configured, a secret word was associated with it which (presumably) was known only to two entities: the human account holder and the computer itself. The user could then type their password into the access terminal, the computer could verify it and, if it matched, grant access.
As it turns out, though, protecting these passwords was a bit tricky. The very same passwords which were used as the safeguards against unauthorized access to the data on the computer were themselves data on the computer! Without some extra precaution, anybody with an account would be able to read anybody else's password and hence impersonate them. Access controls helped a little bit here: the computer could maintain lists of which users could access which data. Unix, for instance, kept all of its data, including lists of users and their passwords, in files. The operating system could then be made responsible for associating each file with an owner (one specific user) as well as a group (a collection of users). Owners, groups, and "the rest of the world" (at least, the part of the world that was authorized to access that computer in the first place) were each given specific permissions to read, write or execute the file. Ordinary users could then simply be denied permission to read the password file.
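The owner/group/world model described above is visible directly in Python's stat module. A minimal sketch (the scratch file here is a stand-in for something sensitive like a password file):

```python
import os
import stat
import tempfile

# Create a scratch file to stand in for a sensitive file.
fd, path = tempfile.mkstemp()
os.close(fd)

# Restrict it: the owner may read and write; the group and
# "the rest of the world" get no access at all.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # i.e. mode 0o600

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                  # 0o600
print(bool(mode & stat.S_IROTH))  # False: "others" cannot read it

os.remove(path)
```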
System administrators, however, did have access to these password files, along with everything else, and a slight misconfiguration or mistake could expose the password file to other users, or even to non-users. Obfuscating, encrypting, or hashing passwords goes a long way toward mitigating this risk, but the fact remains that the authenticating system must store the password in some form or another.
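To make the "store the password in some form" point concrete, here is a minimal sketch of the standard mitigation: the system keeps only a salted, slow hash of the password, never the password itself (the example password is, of course, made up):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash; only (salt, digest) is stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash from the candidate password and compare."""
    return hmac.compare_digest(hash_password(password, salt)[1], stored)

salt, stored = hash_password("hunter2")          # system stores (salt, stored)
print(verify_password("hunter2", salt, stored))  # True
print(verify_password("hunter3", salt, stored))  # False
```

Even if the stored file leaks, an attacker sees only salts and digests and must guess passwords one expensive hash at a time.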
While system administrators and computer manufacturers wrestled with this problem, security researchers were investigating a seemingly unrelated one: secure key exchange. In addition to restricting access to computer systems and their data, there was a lot of interest in protecting data in transit, as computer systems increasingly became networked with one another. The only way to protect data against passive eavesdroppers is to encrypt it: scramble it in such a way that it can only be descrambled with the aid of a shared secret. When used in the context of user authentication, a shared secret is called a password; when used in the context of encryption, it is called a key. The problem that security researchers kept running into was how to share the key between the communicating entities without making it accessible to anybody else.
Researchers at MIT developed a complex key agreement protocol called Kerberos. It required
an offline (unsecured) key agreement between each communication participant and a centralized
key distribution center, but afterwards, the key distribution center could grant each
communicating party a temporary encryption key for a communication session. Martin Hellman and Whitfield Diffie,
however, developed an alternative key agreement method that relied on the intractability of the
discrete logarithm problem and required no up-front key agreement. The protocol is simple enough
to understand (although it undoubtedly took a stroke of genius to think of it). When two parties
want to communicate, they first agree on two numbers g and p. The first party (the initiator) then picks a random number x and computes:

x' = g^x % p

The second party (the receiver) likewise picks a random number y and computes:

y' = g^y % p

The two parties exchange x' and y', while keeping their original x and y values secret. As a final step, the initiator computes:

k = y'^x % p

And the receiver computes:

k = x'^y % p

Which are guaranteed to result in the same final value, since:

g^(xy) % p = (g^x % p)^y % p = x'^y % p = (g^y % p)^x % p = y'^x % p

The real brilliance of this exchange is that a passive eavesdropper can collect the values g, p, x' and y', but without either x or y cannot reconstruct k, which can henceforth be used by both parties as a secret key. This whole scheme hinges on the intractability of the discrete logarithm problem: there is no known algorithm to recover x from g^x % p = k in any reasonable amount of time, and there are a lot of good reasons to believe that one doesn't exist.
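The exchange can be sketched with deliberately tiny numbers (g and p here are toy values chosen for readability; a real exchange uses a prime thousands of bits long):

```python
import random

# Publicly agreed parameters (toy-sized for illustration).
g, p = 5, 23

# Each side picks a private exponent and publishes only g^exponent % p.
x = random.randrange(1, p - 1)   # initiator's secret
y = random.randrange(1, p - 1)   # receiver's secret
x_pub = pow(g, x, p)             # x' = g^x % p, sent in the clear
y_pub = pow(g, y, p)             # y' = g^y % p, sent in the clear

# Each side combines its own secret with the other's public value.
k_initiator = pow(y_pub, x, p)   # k = y'^x % p
k_receiver = pow(x_pub, y, p)    # k = x'^y % p

print(k_initiator == k_receiver)  # True: both arrive at the same key
```

An eavesdropper who captures g, p, x_pub and y_pub would still have to solve a discrete logarithm to recover x or y and hence k.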
Security researchers Ron Rivest, Adi Shamir and Leonard Adleman expanded on this idea and came up with the RSA cryptosystem, which rests on a related hard number-theoretic problem: the intractability of factoring the product of two large primes. It's a bit more complicated (but not much; I examined it in more detail here), but it allows the key to be computed beforehand and split into two pieces: the private key and the public key. Once they've been computed, the private key holder can share the public key freely; the public key can then be used to encode secret messages that only the private key holder can decrypt.
Additionally, and most pertinent to the discussion of authentication, RSA can be used to verify identity by having the private key holder "encrypt" an assertion of identity using the private key. Anybody in possession of the corresponding public key can verify that the message really came from the holder of the private key by "decrypting" the same assertion with the public key: only the private key holder would be able to produce such a valid assertion. Once a public key has been successfully associated with a particular user, the system can request a signed assertion of identity rather than a password in order to establish credentials. This is the basis of a Public Key Infrastructure (PKI). The public keys are typically distributed as X.509 certificates, which bind a key to an identity; this form of authentication is called certificate-based authentication.
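The sign-and-verify flow can be sketched with a textbook-sized RSA key (these tiny primes are illustrative only; real keys are enormously larger, and real signature schemes hash and pad the message first):

```python
# Toy RSA parameters: n = p*q, public exponent e, private exponent d,
# chosen so that e*d ≡ 1 (mod (p-1)*(q-1)).
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent, shared with everyone
d = 2753         # private exponent, known only to the key holder

message = 65     # an "assertion of identity", encoded as a number < n

# The private key holder "encrypts" the assertion with the private key...
signature = pow(message, d, n)

# ...and anybody holding the public key (e, n) can "decrypt" and check it.
recovered = pow(signature, e, n)
print(recovered == message)  # True: only the private key could have signed this
```

Note that verification requires only the public pair (e, n), so the authenticating system never needs to hold anything secret, which is exactly the property passwords lack.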
So: which is better? One of the main benefits of passwords is that they're "portable": they exist in the memory of a human user, who can walk from one system to the next and authenticate regardless of location. One of the downsides, though, is that the authenticating side must also have the password in some form; in a PKI, the authenticator need only know the public key, which is useless without the private key. On the flip side, a private key is typically a very long number, somewhere on the order of 300 decimal digits at a minimum. No human could possibly be expected to remember a secure private key, much less one for each system to which they might need to authenticate. As a result, the private key does need to be stored somewhere: not by the authenticator, but by the end user.
In the end, password-based authentication is a necessity when humans are expected to authenticate, but certificate-based authentication represents a good compromise for B2B-type communications where unattended file transfers need to be secured. There's also some research being done into the Secure Remote Password (SRP) protocol, which tries to combine the portability of passwords with the additional security of distributed key agreement, but for now it's not widely supported by off-the-shelf tools.