Anish Athalye

Validity, Trust, and the Design of Interfaces

In secure communication schemes, there are three main goals — confidentiality, integrity, and authenticity. A lot of real-world software and systems don’t get integrity and authenticity quite right, often as a result of poor interface design.

Considering adversarial models, we see that there are very few situations where integrity by itself is useful. A standalone integrity check protects against accidental corruption, such as network errors, but not much beyond that. In fact, when dealing with adversaries, integrity without authenticity is worthless: an attacker who can modify the data can simply recompute the checksum as well. On the other hand, authenticity implies integrity, so that should be the gold standard for security.
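To make the distinction concrete, here is a small Python sketch (an illustration, not taken from any of the systems discussed below). A plain hash provides integrity only: anyone who tampers with the message can recompute the digest. An HMAC provides authenticity as well, because computing a valid tag requires the secret key.

```python
import hashlib
import hmac

key = b"shared secret key"
message = b"transfer $100 to alice"

# Integrity only: a plain hash. An attacker who modifies the message can
# simply recompute the digest, so this only catches accidental corruption.
digest = hashlib.sha256(message).hexdigest()

tampered = b"transfer $100 to mallory"
forged_digest = hashlib.sha256(tampered).hexdigest()  # attacker recomputes freely

# Integrity + authenticity: an HMAC. Producing a valid tag requires the
# secret key, so an attacker cannot forge a tag for a tampered message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)        # genuine message passes
assert not verify(key, tampered, tag)   # tampered message fails
```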

Along the same lines, in digital signature systems, validity without trust is worthless. Data being correctly signed by some public key doesn’t mean anything unless the key is trusted. This is especially relevant in web of trust systems such as GPG, where a shared keyring is used to store public keys. Any program can add keys to the keyring, and some programs enable automatic key retrieval, so the mere presence of a public key in the keyring is meaningless. Data signed by a key can only be trusted if the public key has been assigned an explicit trust value or if it can be trusted under some web of trust model.

Interfaces and Security Implications

An interface is a boundary between components in a system, either between two pieces of software or between a program and a human. In security-sensitive applications, interface design is critical. Anything that relies on security-related software needs to know exactly what guarantees that software provides. With libraries, it’s necessary to know what is the client’s responsibility and what the library handles internally. When possible, software should do the most intuitive and secure thing by default, and the simplest way to use the software should also be the most secure way.

Bugs in the implementation of software, once identified, can be fixed without consequence. On the other hand, bad interface design is incredibly difficult to fix once there exists software that uses the API — changes could break compatibility with all existing software that uses the API.

Case Studies

To better understand the importance of good interface design in security-critical applications, we can critique existing systems, looking at both human-computer interface boundaries and software-software interface boundaries.

GPGMail for macOS

GPGTools distributes a suite of GPG-related software for macOS. Among these tools is GPGMail, an Apple Mail plugin that lets users send and receive signed and encrypted mail using PGP. When someone receives PGP-signed email, it looks like this:

The indication to the user is that everything is fine — there’s a nice big check mark visible. With the default configuration, the plugin automatically downloads the public key and verifies that the signature is valid. Perfect!

Except if we click on the check mark, we can see that everything is not ok:

In this screen, we can see that “this signature is not to be trusted”. The signature is valid, but it cannot be trusted! It’s a simple thing to check, but the user has to remember to manually check this for every single email received if they want security guarantees. Otherwise, someone could spoof an email and sign it with some other public key that has been uploaded to the key servers, and the mail client would faithfully automatically download the untrusted key from the public key servers and verify that the signature is valid. As the software is designed, the check mark basically means nothing in terms of security.

This is bad design! It’s not a bug in the cryptographic protocols or the implementation of the software, but it’s arguably at least as bad. The mail plugin behaves in an unintuitive manner, and the check mark gives users a false sense of security.

Better Designs

There are email clients that handle this better. For example, Evolution shows the following when viewing an email signed by an untrusted key:

There’s a clear warning in the user interface. When the email is signed by a trusted key, it looks very different:

CPAN Signature Checks

CPAN is a package manager for Perl. Like many other package managers, CPAN has support for digital signatures, which can be enabled using the check_sigs option:
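The original configuration snippet is not reproduced here; the following is a reconstruction based on CPAN's standard interactive shell commands, not the original post's exact text:

```
cpan> o conf check_sigs 1
cpan> o conf commit
```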

CPAN packages can be digitally signed by authors and thus verified with the security provided by strong cryptography. The exact mechanism is defined in the Module::Signature module.

Unfortunately, due to the way Module::Signature’s interface is built, App::Cpan doesn’t really end up providing any strong cryptographic guarantees.

CPAN checks signatures like this:

my $rv = eval { Module::Signature::_verify($chk_file) };

if ($rv == Module::Signature::SIGNATURE_OK()) {
    $CPAN::Frontend->myprint("Signature for $chk_file ok\n");
    return $self->{SIG_STATUS} = "OK";
} else {
    # print error message and abort
}

Module::Signature, in turn, runs this:

my @cmd = (
    $gpg, qw(--verify --batch --no-tty), @quiet, ($KeyServer ? (
        ($AutoKeyRetrieve and $version ge '1.0.7')
            ? '--keyserver-options=auto-key-retrieve'
            : ()
    ) : ()), $fh->filename
);

In the code, $AutoKeyRetrieve is enabled by default, and $KeyServer is set to a public PGP key server. This means that when checking a signature, if the public key is not found in the local keyring, it is automatically fetched from the key server.

To verify the signature, Module::Signature runs gpg in a subprocess and checks the return value, yielding SIGNATURE_OK if gpg returns 0 and yielding SIGNATURE_BAD otherwise.

Unfortunately, gpg’s return value doesn’t indicate whether the signature is trusted or not — it indicates whether the signature is valid or not. When using a shared keyring and especially when enabling automatic key retrieval, this guarantee doesn’t mean anything. It’s no better than a checksum — it offers no protection against an adversary.
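The information needed to tell these cases apart is available in gpg’s machine-readable status output (enabled with --status-fd), which emits keywords like GOODSIG and TRUST_UNDEFINED / TRUST_FULLY / TRUST_ULTIMATE, rather than in its exit code. Here is a Python sketch of how a caller could use that output; the status keywords are real GnuPG status tokens, but the sample input lines are made up for illustration:

```python
# Distinguish "valid and trusted" from "merely valid" using gpg's
# machine-readable status output (gpg --verify --status-fd 1).
# The keywords below (GOODSIG, TRUST_*) are documented GnuPG status tokens;
# the sample status lines at the bottom are fabricated for illustration.

TRUSTED = {"TRUST_FULLY", "TRUST_ULTIMATE"}

def signature_status(status_lines):
    """Return 'trusted', 'untrusted', or 'bad' given gpg status lines."""
    good = False
    trust = None
    for line in status_lines:
        if not line.startswith("[GNUPG:] "):
            continue
        keyword = line.split()[1]
        if keyword == "GOODSIG":
            good = True
        elif keyword.startswith("TRUST_"):
            trust = keyword
    if not good:
        return "bad"
    return "trusted" if trust in TRUSTED else "untrusted"

# A valid signature made by a key that is in the keyring but not trusted
# (this is the case gpg's exit code alone cannot distinguish):
untrusted_run = [
    "[GNUPG:] GOODSIG 89ABCDEF01234567 Example User <user@example.com>",
    "[GNUPG:] TRUST_UNDEFINED",
]
assert signature_status(untrusted_run) == "untrusted"
```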

Because of this vulnerability, a man-in-the-middle attacker can run arbitrary code on machines running cpan install, even when CPAN has signature checks enabled.

Who is to Blame?

It’s unclear exactly which package is responsible for this security issue. Is it App::Cpan, Module::Signature, or gpg? App::Cpan uses a function called verify() without understanding exactly what verify means. Module::Signature uses the return value from gpg and doesn’t provide any distinction in signature status besides SIGNATURE_OK and SIGNATURE_BAD. gpg doesn’t expose a nice programmatic interface for simultaneously verifying a signature and validating the trustworthiness of the key that was used, instead only printing warning text to stderr when a key is untrusted.

At this point, it’s pretty hard to fix this issue. It’s hard for Module::Signature to change its API — it’s depended on by many modules, and an API change could break many of them.

Better Designs

There are libraries similar to Module::Signature that do a better job with the design of their API. For example, GNOME Camel, which is used by Evolution, has a function camel_cipher_context_verify_sync(), which returns one of the following results after verifying a signature:

typedef enum _camel_cipher_validity_sign_t {
    CAMEL_CIPHER_VALIDITY_SIGN_NONE,
    CAMEL_CIPHER_VALIDITY_SIGN_GOOD,
    CAMEL_CIPHER_VALIDITY_SIGN_BAD,
    CAMEL_CIPHER_VALIDITY_SIGN_UNKNOWN,
    CAMEL_CIPHER_VALIDITY_SIGN_NEED_PUBLIC_KEY
} CamelCipherValiditySign;

This return type is much richer than the binary return type of Module::Signature’s verify(), so it’s possible to disambiguate between good signatures made with a trusted key and valid signatures made with an untrusted key.
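To sketch why this matters to callers, here is a hypothetical Python analogue of such a richer result type. The names and strings are illustrative, not Camel’s actual API; the point is that the caller can surface a different message for each outcome, which a boolean OK/BAD result cannot express:

```python
from enum import Enum, auto

# A hypothetical richer verification result (illustrative names, not
# Camel's actual API). A caller can now distinguish "valid but signed by
# an untrusted key" from "valid and trusted".

class SignStatus(Enum):
    NONE = auto()
    GOOD = auto()             # valid signature, trusted key
    BAD = auto()              # invalid signature
    UNKNOWN = auto()          # valid signature, but the key is not trusted
    NEED_PUBLIC_KEY = auto()  # public key not available locally

def describe(status):
    """Map a verification result to a user-facing message."""
    if status is SignStatus.GOOD:
        return "Signed by a trusted key"
    if status is SignStatus.UNKNOWN:
        return "Valid signature, but the key is NOT trusted"
    if status is SignStatus.NEED_PUBLIC_KEY:
        return "Cannot verify: public key unavailable"
    return "Bad or missing signature"

assert describe(SignStatus.UNKNOWN) == "Valid signature, but the key is NOT trusted"
```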


Conclusion

In both human-computer and software-software interfaces, interface design is critical to security. Bad designs can lead to serious security vulnerabilities, and these vulnerabilities can be incredibly difficult to fix. For this reason, interfaces of security-related components must be designed carefully, have secure defaults, and clearly communicate the responsibilities of the library and the responsibilities of the user.