In summary, security in large organizations isn't in great shape, and I believe it is because security is a late requirement. As an example, take the usage of centrally managed identity providers like Azure AD, KeyCloak/LDAP, or PGP signing. These identity providers are rarely seen in production environments with in-house custom code, and I've never seen them used for service-to-service calls. As engineering teams we tend to rely on what comes with the system we are using (e.g. IAM for AWS, Azure AD for Azure).

We need good authentication with personal IDs, and more variety in our encryption keys. Today we have too many shared accounts, shared certs, and secrets shared across our systems. That leaves a large attack surface, but more importantly, once part of the system is exploited, the rest of the system is open. For example, imagine if you were able to grab the certificate or secret used to authenticate deployments. Deployments are often centralized, so with this one certificate or secret an attacker could update almost any application or device.

Ideally every endpoint, every device, and every person would have their own secret keys for authentication, encryption, and signing. This variety and diversity of keys would dramatically reduce the attack surface, and reduce the exposure from any exploit. Even better if public/private key pairs were used instead of symmetric keys. Using public/private keys enables a device, person, or endpoint to prove its identity through a cryptographic challenge without ever transmitting the private key.

Top Questions

How does your organization enable people and services to securely authenticate and communicate?

  • Do you sign in as yourself or with a role-based account (ericp vs web3_engineering)?
  • Is multi-factor auth a personally generated software key on your device or a special hardware key you plug in?
  • Is cross-service communication done via TLS backed by the same long-lived certificates (e.g. I can get marketing data because a cert lives on my host)?
  • Do software deployments require all services to have the same deployment account and long-lived deployment certificate?

Why Role-Based Accounts Are Bad

Authentication is often done via role-based accounts shared across many people. This provides a wide surface area for attack because every person with access to this account is vulnerable. If a hacker can run a phishing campaign, or can hack into your device or account, they can likely gain access to the role-based account. Shared accounts may have password notes; the worst case is notes sticky-taped to monitors. The point isn't these individual bad practices; the point is the large number of people who all need to do everything right.

In addition, role-based accounts are shared across many services and devices. This presents a large attack surface. If a device with this account has a zero-day flaw, the account may be exposed. And once the role-based account is exposed, the damage is difficult to contain. Making a change to this account risks shutting off access to key services, and you may not even know which services need to be updated and changed.

Obviously I'm a fan of personal accounts with roles granted for access. In addition, having the fewest central authorities for your account management is good. I say fewest because you may have Azure AD for Azure, IAM for AWS, and KeyCloak/LDAP for on-premises. It isn't always possible or desirable to tie all these systems together.

Why Multi-Factor Auth Isn't Working

Multi-factor auth is a great tool to improve security for people. It works pretty well for people, but it isn't viable for service-to-service calls.

MFA works for people. In summary, it works because people have the hardware and can negotiate the multi-step authentication process. Using multi-factor auth (MFA), a user has a username, a password, and an additional key or code. That additional key or code can be provided by the Google Authenticator app, an SMS text message, a hardware key, or a software key. Websites and apps integrate with the user's devices to request the additional key, and prompt the user to enroll a new key. Each website is a little different in the prompts and process for multi-factor auth, but people can follow the process and execute it.

MFA does not work for services. In summary, it doesn't work because services lack the protocols and they lack the keys. The most common way to connect two services is via HTTPS, in which TLS negotiates an encrypted channel between two HTTP servers. The problem is that TLS only supports pre-shared certs on both endpoints, or generates and sends over the keys to use for encryption. What is missing is a way for the application to support authentication via a cryptographic challenge, and a way to support public/private key encryption end to end. In plain English, TLS, and by extension HTTPS, doesn't have a way to ask for secrets. The client making the call can't pass along another secret code for verification.
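To make "cryptographic challenge" concrete, here is a minimal sketch of challenge-response authentication. It uses tiny textbook-RSA numbers (p=61, q=53) purely for illustration; a real system would use a vetted crypto library and full-size keys:

```python
import hashlib
import secrets

# Toy textbook-RSA key pair (p=61, q=53) -- illustration only, NOT secure.
N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def sign(message: bytes) -> int:
    # Hash the message, reduce into the key space, raise to the private exponent.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key (N, E) can check the signature.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

# Challenge-response flow: the private key never crosses the wire.
nonce = secrets.token_bytes(16)   # 1. server sends a random challenge
signature = sign(nonce)           # 2. client signs it with its private key
assert verify(nonce, signature)   # 3. server verifies with the public key
```

The server learns that the client holds the private key without the client ever revealing it, which is exactly the kind of exchange a bearer-style shared secret can't provide.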

Public/Private Key Pairs: A Possible Solution

In summary, public/private key pairs make cryptographic challenges possible, and passing that cryptographic challenge is valid authentication. In addition, public/private keys offer the possibility of separate encryption channels, and they compartmentalize the key space. In such a setup, keys would be pre-generated once, and the key pairs could be used for authentication or encrypted communication. Ideally there would be mechanisms to do the following:

  • recover lost keys
  • generate new keys
  • invalidate keys
  • re-authorize keys

Periodic re-authorization may be required to continue access and encrypted communication.

Ideally every client and service would have their own set of public and private keys. That is a lot of public and private keys. Let me explain how they are used. For encryption, the public key encrypts and the private key decrypts: a server would encrypt using the client's public key, and the client would use their private key to decrypt. For verification, the private key signs and the public key verifies: a client would sign a message using their private key, and the server would verify the client by checking the signature with the client's public key.
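Those two directions can be shown in a few lines. The numbers below are illustrative textbook-RSA values (p=61, q=53), not real key sizes:

```python
# Toy RSA key pair: public key (N, E), private key (N, D).
# Illustration only -- real keys are thousands of bits, generated by a library.
N, E, D = 3233, 17, 2753

# Direction 1: encrypt with the client's PUBLIC key, decrypt with the PRIVATE key.
message = 1234                           # must be < N in textbook RSA
ciphertext = pow(message, E, N)          # server encrypts for the client
assert pow(ciphertext, D, N) == message  # only the private key recovers it

# Direction 2: sign with the PRIVATE key, verify with the PUBLIC key.
signature = pow(message, D, N)           # client signs
assert pow(signature, E, N) == message   # server verifies the signer
```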

This public/private key infrastructure breaks us away from pushing out the same shared secret to all the services and clients. If desired, every device and every instance of a service or an application could have its own set of keys. A compromised device or compromised application would limit the damage to that single key pair. In addition, hackers would need to compromise many devices and many different applications to gain meaningful access to the system. That means more work to gain access, and limited damage when a key is exposed.

OAuth to the Rescue?

OAuth is built on top of public/private keys: when you authenticate with a username and password, the underlying HTTPS/TLS exchange uses public/private keys in the background. Once authenticated, you can skip the username/password challenge and present an access token instead. An HTTP response for an authorization token might look like this. Note: the refresh token is used to get a new token after the access token is no longer valid (e.g. expired).

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{
  "access_token": "MTQ0NjJkZmQ5OTM2NDE1ZTZjNGZmZjI3",
  "token_type": "Bearer",
  "expires_in": 3600,
  "refresh_token": "IwOGYzYTlmM2YxOTQ5MGE3YmNmMDFkNTVk",
  "scope": "create"
}
https://www.oauth.com/oauth2-servers/access-tokens/access-token-response/
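Here is a sketch of how a client might consume that response. The field names follow the response above; the expiry bookkeeping and header construction are a common pattern, not a prescribed API:

```python
import json
import time

# The token response body from above.
raw = """{
  "access_token": "MTQ0NjJkZmQ5OTM2NDE1ZTZjNGZmZjI3",
  "token_type": "Bearer",
  "expires_in": 3600,
  "refresh_token": "IwOGYzYTlmM2YxOTQ5MGE3YmNmMDFkNTVk",
  "scope": "create"
}"""

token = json.loads(raw)
expires_at = time.time() + token["expires_in"]  # when to use the refresh_token

# Subsequent calls present the token as a Bearer authorization header.
headers = {"Authorization": f"{token['token_type']} {token['access_token']}"}
# Once time.time() >= expires_at, exchange token["refresh_token"] for a new one.
```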

All of the major cloud providers support OAuth2 access tokens, often as paid add-on services in the application space. This gives a path to access cloud services and accounts: a user authorizes themselves and stores the resulting token. That token can then be pushed out to hosts and devices to provide access to clients.

How do we get a private key on every device/application? Now that we have explored OAuth2, we are right back to the same problem.

  • You need a human to authenticate to generate the access token
  • OAuth2 provides authentication, not encryption

You need a human to work through the authentication process as a prerequisite for acquiring the token. An enterprise with 10,000+ devices would need 10,000+ keys to have a unique key on every device. This leaves some bad options.

  1. Option 1: share the same key across lots of devices
  2. Option 2: leverage an existing device key, unique to the hardware
  3. Option 3: roll your own service to generate and maintain key pairs

We'll skip option 1 and explore option 2. For option 2 we'll look at FIDO2 keys, keys from TPM chips, and external USB fobs with keys.

FIDO2 isn't a solution for a hardware key on every device. To address this problem, Android and iOS have introduced unique hardware private keys on every device. However, FIDO2 is a web standard for authentication to websites; it isn't a general authentication solution like OAuth2. FIDO2 is available as of Android 7+, on macOS Big Sur, and in Safari on iOS. To my knowledge, all of these private keys are generated on the device and unlocked using biometrics such as Touch ID or Face ID.

Hardware keys via TPM chips are not an option. This isn't viable for cross-platform companies (by cross-platform I mean beyond the Windows OS). TPM chips are well supported in Windows, but to use these chips to protect your keys you are locked into a Windows-proprietary API set. Forget about extending support to Android. Linux has a kernel patch for TPM chips, and it isn't clear that Linux VMs/containers can utilize TPM chips.

Hardware keys via an external USB fob: OK for devices, bad for cloud. An organization can purchase a USB key holding a private key that plugs into the device. You can also have a separate physical key that acts as an NFC device near the device (or even glued to it). Deploying these secure keys across all the devices will be a pain, and almost certainly some keys will be lost or stolen.

And this won't work for cloud devices: you can't plug a key into a cloud host.

Rolling your own public/private key service. That leaves option 3: build your own infrastructure to support public/private keys. Today public/private keys live in the application space, but it doesn't need to be that way. Take, for example, logging into a Linux account with your private key: you SSH into a host, and the key is used to support authentication and encryption. Services and applications could incorporate public/private keys for MFA authentication, cryptographic attestation, or end-to-end encryption. Even better if these keys are software-based, independent of the hardware. Software keys could be cross-platform, supported across different operating systems and different devices. With software you can generate thousands of keys and distribute them to devices, and you can manage the lifecycle of public/private keys.

  • Rotate to a new key
  • Replace a lost key
  • Expire or deactivate a key
  • Generate new keys automatically
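A minimal sketch of what that lifecycle management could look like in software. The `KeyRegistry` class and its method names are hypothetical, and random bytes stand in for real key pairs:

```python
import secrets
import time

class KeyRegistry:
    """Hypothetical per-device key lifecycle manager (sketch, not production code)."""

    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self.keys: dict[str, tuple[bytes, float]] = {}  # device_id -> (key, issued_at)

    def issue(self, device_id: str) -> bytes:
        key = secrets.token_bytes(32)  # stand-in for generating a real key pair
        self.keys[device_id] = (key, time.time())
        return key

    def rotate(self, device_id: str) -> bytes:
        # Issuing a fresh key implicitly invalidates the old one.
        return self.issue(device_id)

    def revoke(self, device_id: str) -> None:
        # Expire or deactivate a key; replacing a lost key is revoke() + issue().
        self.keys.pop(device_id, None)

    def is_valid(self, device_id: str, key: bytes) -> bool:
        entry = self.keys.get(device_id)
        if entry is None:
            return False
        stored, issued_at = entry
        return stored == key and (time.time() - issued_at) < self.ttl
```

A scheduled job calling `rotate` for every device would provide the automatic key generation; the TTL check gives every key a built-in expiry.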

Some of this public/private key infrastructure is coming together as part of Decentralized Identifiers (DIDs) and Decentralized Key Management (DKMS). You can see some hope in support for OAuth2 and pre-generated JWT tokens. The basic ingredients are out there; they just haven't come together to solve today's security problem. The lack of HTTPS (TLS) support for pre-generated public/private keys is definitely an obstacle.

Look for upcoming articles on how to build your own public/private key storage.