Introduction

I’ve written about Windows Hello for Business before, along with the promises and confusion it brings. To Microsoft’s credit, they’ve been maturing and improving it incrementally, adding features and pushing their “passwordless” initiative consistently. With Windows 10 21H2, Microsoft introduced a new way to onboard and authenticate devices using a “Cloud Trust” model, also called “Cloud Kerberos Trust”. I think this is a great step forward, and between Microsoft, Google, and Apple, hopefully we can continue to displace passwords.

To take a step back: Microsoft is leaning in to a “passwordless” strategy built on trusted cryptographic devices that can attest to an account, rather than a memorized secret.

In a discussion on Passkeys, Google’s Adam Langley explains why we may need to rethink our relationship to ‘multifactor’, given how bad passwords are:

[M]ultifactor was sort of a sensible concept in the world where the first factor of the password was so bad. Uh, but if you do want to think in terms of multifactor, right security keys can be several factors.

In other words, most of the ‘second factors’ we’ve added to authentication have really been band-aids for just how insecure passwords are. Since crypto-backed authenticators are fundamentally different from passwords, I think it’s right that we treat them with a different attitude as security professionals, if certain conditions are met.

Windows Hello for Business trust models

Microsoft provides two major approaches to “passwordless” credentials in WHFB: key trust and certificate trust. Certificate trust requires a Public Key Infrastructure (PKI) that you administer to issue certificates to end users and devices; typically this consists of Active Directory Certificate Services with Active Directory Federation Services deployed to issue the certificates. Some scenarios still require certificate sign-on, such as Remote Desktop Protocol (RDP) or Citrix’s cloud-based Virtual Desktop Infrastructure gateway.

Because running on-premise PKI is complex and fraught with possible security misconfigurations, Microsoft encourages key trust deployments. Key trust works like a FIDO device: a protected container on the device generates the private keys that link the credential to a user.

Previously, WHFB’s key trust deployment separated the credential completely from on-premise AD by issuing separate certificates to devices as part of a hybrid join process. Because this is effectively a secondary (but linked) credential to an on-prem identity, organizations had to carefully replicate and synchronize on-premise AD and cloud Azure AD objects, especially computer objects.

With cloud Kerberos trust, Azure is strongly linked to on-premise infrastructure through the use of a dummy Read Only Domain Controller (RODC) object. This Azure AD Kerberos Server delegates the ability to issue Kerberos tickets to Azure AD. With this link in place, the user and device can authenticate against Azure resources without any line-of-sight to on-premise resources.
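If you’re curious what that link looks like, the RODC object lives in plain Active Directory and you can inspect it yourself. Here’s a minimal sketch using Python’s ldap3 library; the object name (AzureADKerberos) and its krbtgt_AzureAD account come from Microsoft’s documentation, while the DC hostname, credentials, and base DN are placeholders for your environment:

```python
# Sketch: inspect the Azure AD Kerberos server objects in on-prem AD.
# Object names (AzureADKerberos, krbtgt_AzureAD) follow Microsoft's docs;
# the DC hostname, credentials, and base DN are placeholders.
from ldap3 import Server, Connection, ALL, NTLM

server = Server("dc01.corp.example.com", get_info=ALL)
conn = Connection(server, user="CORP\\svc_ldap", password="...",
                  authentication=NTLM, auto_bind=True)

base_dn = "DC=corp,DC=example,DC=com"

# The dummy RODC computer object Azure AD uses to request Kerberos tickets
conn.search(base_dn, "(&(objectClass=computer)(cn=AzureADKerberos))",
            attributes=["distinguishedName", "whenCreated"])
print(conn.entries)

# Its dedicated krbtgt account, with a key separate from the domain krbtgt
conn.search(base_dn, "(sAMAccountName=krbtgt_AzureAD)",
            attributes=["distinguishedName", "whenCreated"])
print(conn.entries)
```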

Deployment

A huge advantage of cloud Kerberos trust is simplicity of deployment. If Windows 10 clients are at least 21H2, activating cloud Kerberos trust comes down to deploying a server with the Azure AD Kerberos agent installed and enabling the appropriate WHFB Group Policy or Intune policy.

Then, as long as a user can achieve line-of-sight to a domain controller, they enroll a PIN (and biometrics, if allowed through WHFB policy) and are then enabled for WHFB login. Accessing on-prem AD-secured resources still requires line-of-sight to a DC, though that’s likely already in place.

Given the limited prerequisites, I believe WHFB cloud Kerberos trust can help many organizations launch their own ‘passwordless’ initiatives without large infrastructure changes. This approach can also improve cloud security through conditional access controls that require a strong WHFB credential rather than a password/MFA combination.

Is it MFA?

Recalling the discussion of how terribly passwords function as authenticators, in my opinion WHFB cloud trust does constitute multifactor authentication. A protected, non-exportable private key that has to be actively unlocked by a gesture, PIN, or biometric certainly qualifies. Some auditors might balk that the ‘what you have’ is the same device accessing the requested resource; I’d argue the vast increase in credential strength offsets this relatively minor threat model.

Unless an organization’s individual users are targeted physically for theft or by non-trivial attacks (like SIM swapping), WHFB cloud Kerberos trust improves resistance to phishing and credential theft at least to the level of more traditional MFA.


There still seems to be a lot of confusion about what is multi-factor, what is merely multi-step, and what constitutes good authentication in the first place.

Background

The definitive guide for digital identity is NIST 800-63r3. Updated in 2017, this standard covers identity proofing (how do you know who the person is), authenticator issuance (how do I get you your credential), maintenance (how do you handle lost or expired authenticators), the authentication process itself, and federation (e.g. authenticating your users for external services). None of this guidance is specific to the US Federal government, so anyone is free to use it and adopt its best practices as they see fit. NIST also suggests threat considerations to weigh when selecting factors; I highly recommend giving that section a read.

Not every system requires extensive identification or authentication. Your Reddit account is probably not as important as your GitHub or banking account. You’re free to pick an appropriate assurance level and match up your tools accordingly. So NIST defines three assurance levels in each area: identity verification, authenticator robustness, and federation assurance.

Identity verification:

  • IAL1 - self-asserted identity
  • IAL2 - remote or in-person identity verification
  • IAL3 - in-person strong identity verification

Authenticator strength:

  • AAL1 - some assurance that the subject actually holds the credential (single-factor).
  • AAL2 - high assurance that the subject holds the credentials through multiple factors, one of which involves cryptography.
  • AAL3 - very high assurance provided by (usually hardware-backed) cryptographic means only.

Subject Binding

It’s extremely important to point out that NIST ties an identity to a person or subject, not a device or location. Locations and devices are not good factors for authentication. Location-based factors (usually expressed as a trusted IP address or private network) do little to identify the individual and are based on outdated security rules. Device-based factors (think an AD-joined or 802.1x-verified workstation) are better, but because they are not bound to a user they still don’t prove you’ve got the right person. The process of binding a credential should associate it with one user only: a different user (even in the same organization) shouldn’t be able to take the exact same token or device and use it as their own second factor at the same time. Identifying your users and binding them to credentials securely is very important, but a little outside my scope, so let’s move on:

Authenticators

So the real question is: what constitutes a good authenticator? Clearly we’re discussing multi-factor authentication, but note that NIST’s definition is a little different from just “two things”. Also note that NIST separates the authenticator (the factor you have) from the verifier (the system authenticating you); they have separate but related requirements to assure identity.

Multifactor Authenticators

There is a type of device called a ‘multifactor authenticator’ that itself provides the assurance of two factors (typically these are smart cards, where you ‘have’ the card and you ‘know’ a PIN). This strong assurance is provided by the cryptographic infrastructure (PKI) and the hardware features of the smart card. Unfortunately, that assurance comes at the price of complexity and inflexibility: PKI is notoriously hard to set up correctly, and smart cards require special hardware to provision and maintain. In addition, you need actual humans to issue the cards and unblock them when someone forgets their PIN.

Setting up an in-house PKI is not for the faint of heart, the cards require special drivers and readers, and even the built-in virtual smartcards provided by Microsoft are non-trivial to configure.

Single-factor Cryptographic Device

Commonly called U2F or FIDO keys, these devices can provision and store cryptographic keys on the fly. This means that when you register your YubiKey or a fingerprint on an iPhone, the device creates and stores a private key for you, which then gets tied to your account. In this way these keys get the same hardware protections as a smart card without all the infrastructure required. On the downside, there is still a need to verify or reset the account when you lose or replace the physical key or phone. One huge advantage of using a FIDO2 device is that they can be practically unphishable: a technique called ‘origin binding’ means that only the service you registered with can successfully authenticate you. Man-in-the-middle attacks do not work against properly configured FIDO2 authentication (as described in this much more detailed review).
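To make ‘origin binding’ concrete, here’s a rough sketch of the check a WebAuthn verifier performs. The browser records the origin it actually talked to inside clientDataJSON, and the authenticator signs over that blob, so a look-alike domain can’t produce a valid assertion. (This shows only the origin and challenge checks; the expected origin and function names are illustrative, and a real verifier would go on to validate the signature.)

```python
# Sketch of the WebAuthn origin check that makes FIDO2 phishing-resistant.
# The browser embeds the origin it connected to in clientDataJSON, and the
# authenticator signs over (a hash of) that blob, so a proxy can't forge it.
import base64
import json

EXPECTED_ORIGIN = "https://login.example.com"  # illustrative relying party

def check_client_data(client_data_b64: str, expected_challenge: str) -> dict:
    padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    if client_data["origin"] != EXPECTED_ORIGIN:
        # A victim phished at evil-example.net yields origin=https://evil-example.net,
        # so the assertion fails even if the user was completely fooled.
        raise ValueError(f"origin mismatch: {client_data['origin']}")
    if client_data["challenge"] != expected_challenge:
        raise ValueError("challenge mismatch (possible replay)")
    # A real verifier would now check the signature over
    # authenticatorData || SHA-256(clientDataJSON) using the stored public key.
    return client_data
```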

Single-factor OTP

These are the common RSA-style dongles that show a rotating one-time password of 6-8 digits. OTP devices come in software and hardware forms, but are basically a simple keyed-hash function that combines the current time with a secret “seed” value. The verifier also knows the seed, and if the clocks are in sync both sides compute the same value, which proves the OTP device holds the seed. (There are also event-based OTP generators that simply increment a counter combined with the seed.) When the seed is properly protected this is a very secure system, but it is still subject to phishing: since there’s no way to tell when an attacker is relaying the OTP, other controls have to be trusted (like the human operator correctly identifying the wrong logon website).
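The time-based variant (TOTP, standardized in RFC 6238) is small enough to sketch in full; both the token and the verifier run essentially this function over the shared seed:

```python
# Minimal TOTP (RFC 6238): HMAC the 30-second time-window counter with the
# shared seed, then dynamically truncate the digest to 6 digits.
import base64
import hashlib
import hmac
import struct
import time

def totp(seed_b32: str, digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(seed_b32, casefold=True)
    counter = int(time.time()) // step            # both sides must agree on time
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same seed + same clock => same 6 digits
```

Notice that nothing in the computation identifies which site asked for the code, which is exactly why a relayed OTP works just as well for an attacker as for the user.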

Passwords

“Memorized secrets”, or passwords, are the bane of secure systems. They’re simple and universal, but they have been the Achilles’ heel of system security ever since the first networks were created. The two biggest changes made in 800-63r3 are the elimination of password expiration and new composition rules. First, passwords should not expire or be forced to change unless they are suspected of compromise (e.g. the user fell for a phish). Second, passwords should not simply be required to contain multiple arbitrary character types. A more sophisticated approach is necessary to exclude easily broken passwords:

  • Passwords obtained from previous breach corpuses.
  • Dictionary words.
  • Repetitive or sequential characters (e.g. ‘aaaaaa’, ‘1234abcd’).
  • Context-specific words, such as the name of the service, the username, and derivatives thereof.

This is not an exhaustive list, but password composition should be carefully considered if it’s still a required factor, and cracking and spraying techniques should factor into your password-screening tools (a rough sketch of these checks follows below). Overall, though, you should strive to reduce the number of memorized passwords. People have terrible memories for passwords, but can typically develop one or two really good ones if you let them. Password managers and vaults are especially worth the time to adopt; even the lowly KeePass is a strong way to generate and save passwords securely.
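As a rough illustration, the screening NIST describes can be a simple deny-list check at password-set time. In this sketch breached.txt and dictionary.txt are placeholders; in practice you’d screen against something like the Have I Been Pwned corpus:

```python
# Sketch of NIST-style password screening at set/change time.
# breached.txt and dictionary.txt are placeholders for a real breach
# corpus (e.g. Have I Been Pwned) and a wordlist.
import re

def load_list(path: str) -> set[str]:
    try:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f}
    except FileNotFoundError:
        return set()  # placeholder files may not exist in this sketch

BREACHED = load_list("breached.txt")
DICTIONARY = load_list("dictionary.txt")

def screen(password: str, username: str, service: str = "example") -> list[str]:
    problems = []
    lowered = password.lower()
    if lowered in BREACHED:
        problems.append("appears in a breach corpus")
    if lowered in DICTIONARY:
        problems.append("is a dictionary word")
    if re.search(r"(.)\1{3,}", password):                 # 'aaaaaa' and friends
        problems.append("repetitive characters")
    if any(seq in lowered for seq in ("1234", "abcd", "qwerty")):
        problems.append("sequential/keyboard run")
    if username.lower() in lowered or service.lower() in lowered:
        problems.append("contains username or service name")
    return problems

print(screen("P@ssword1234", "jsmith"))  # flags the sequential run
```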


Lionel Richie says Hello

Microsoft has an irritating habit of naming products in confusing ways: Skype vs. Skype for Business; Forefront products became Defender products; Zune became… whatever Zune became. They’ve been doing this forever.

They don’t stop with OS components either: Windows Hello! is theoretically an authentication mechanism, or a suite of controls, enabling Microsoft’s passwordless strategy. I love the idea of getting rid of passwords! They’re frequently the cause of breaches, and I’ve personally seen them play a key role in red team wins, pentest failures, and account compromises. Bolting on multi-factor/multi-step/2FA doesn’t always fix the problem, since there are ways to side-step or intercept the second factor.

A Tweet generated some discussion that made me realize just how confusing (and bad for security) Hello can be.

Here is the skinny:

Windows Hello is a set of features built into Windows 10 called “convenience unlock”, which stores a copy of your password in memory in order to unlock your device with a picture, PIN, or biometric. You still have to log in with a password! That password is not stored very securely[^Ok it’s better than storing your password in the registry, but not by much]. This feature is similar to primitive Android gesture unlocks, and I assume it was targeted mostly at mobile Windows.

Windows Hello for Business (WHFB), formerly Passport for Business, formerly .Net Passport for Business, is an actual security framework that enables key-based (i.e. FIDO2) or certificate-based authentication, combining a user account and a device to provide true multi-factor authentication without the need for a password[^this is actually a lie, but I’ll talk about that later].

WHFB requires an Azure AD or on-premises Active Directory infrastructure. Both the device and the user must be joined in some way to the directory (this is where the ‘trust’ comes from). The user registers on the device using a multi-factor prompt, and the directory issues a key to both the device and the user. This key can optionally be stored in a trusted platform module (TPM), if one is available and meets at least the TPM 1.2 standard. A TPM-stored key provides a genuine “multi-factor cryptographic device” (as defined in NIST 800-63) and is a nice way to fulfill “unprivileged user multi-factor network access” for those of you with upcoming CMMC mandates.

The logon experience is somewhat similar to Hello: the device unlocks the key in one of several ways to provide authentication. A PIN (the default, and required), biometrics (such as compliant infrared cameras and fingerprint readers), or an associated Bluetooth device (unlock only) are some of the built-in options. When the key is stored in the TPM, the device itself provides some anti-hammering resistance and can even wipe the key material after too many failed authentication attempts.


So here is the enormous and annoying caveat: you still have a password with WHFB. Until and unless you move your whole authentication infrastructure to Azure AD (or Microsoft makes massive changes to on-prem Active Directory), you are stuck with a legacy Windows account and all the associated stealable password hashes.

You can check the user box “smart card is required for interactive logon”, BUT this only applies to interactive logins, not network logins, AND you need to have set up the enterprise certificate-trust model of WHFB.

I won’t go into the pitfalls of Windows network security; suffice it to say that Windows servers and networks still have problems that WHFB doesn’t fix (pass-the-hash, silver/golden tickets, etc.). WHFB at least addresses multi-factor during the initial login and finally makes smart cards/FIDO a viable option for regular system administrators to use.


Ok, Hear Me Out.

I’m not advocating that obscurity or complexity should ever be the sole method for securing data. AWS buckets with long, random file names can still be fuzzed and found; services on non-standard ports can be scanned; etc. In most cases obscurity and complexity probably make you less secure, since it’s harder to understand your systems and harder to spot issues.

HOWEVER, in certain cases, obscure systems or non-default ways of accessing systems can delay or stop attackers long enough that you’re left responding to a minor incident rather than a full-blown account compromise. As long as you’re looking at the places attackers actually target with bulk, automated campaigns, you can get real value out of being a bit out of the ordinary.

Take Azure/Office 365 logins: most phishing kits are designed to capture a username and password (and maybe a Microsoft Authenticator code) to get a valid login. In these cases, the default (even multi-factor) login can still be successfully phished. But if you run ADFS federation with Azure AD and a non-standard MFA provider (Okta, for instance), phishing kits largely break down and stop working: often just capturing the password isn’t enough to authenticate, and the kit simply hangs.

The attacker might still be smart enough to attempt an authentication manually, but that delay is probably enough time for the user to suspect something is up. Besides, most MFA codes are only good for 90 seconds or so.

Even better, if you can construct a login that separates the username and password steps (most SSO does this now), you force the phishers to jump through hoops that are custom to your environment. For instance, I can set a cookie that the browser has to re-use across the session, making it much harder to implement a proxy for the attack.
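Here’s a minimal sketch of that idea as a hypothetical two-step login (the Flask app, routes, and secret are all illustrative, not any particular SSO product): step one binds a signed cookie to the browser, and step two refuses to continue without it, so a naive credential-relay proxy stalls.

```python
# Sketch: two-step login where step 1 sets an HMAC-signed cookie that
# step 2 demands. A credential-relay kit that only forwards form fields
# stalls at step 2. App, routes, and secret are illustrative.
import hashlib
import hmac
import time

from flask import Flask, abort, make_response, request

app = Flask(__name__)
SECRET = b"rotate-me"  # placeholder; keep server-side only

def sign(username: str, ts: str) -> str:
    return hmac.new(SECRET, f"{username}|{ts}".encode(), hashlib.sha256).hexdigest()

@app.post("/login/username")
def step_username():
    username, ts = request.form["username"], str(int(time.time()))
    resp = make_response("ok, now send the password")
    # Bind this browser to the login attempt before any secret is typed.
    resp.set_cookie("login_state", f"{username}|{ts}|{sign(username, ts)}",
                    httponly=True, secure=True, samesite="Strict")
    return resp

@app.post("/login/password")
def step_password():
    cookie = request.cookies.get("login_state")
    if not cookie:
        abort(403)  # no cookie: the relay didn't carry state across steps
    try:
        username, ts, sig = cookie.split("|")
    except ValueError:
        abort(403)
    if not hmac.compare_digest(sig, sign(username, ts)) or time.time() - int(ts) > 120:
        abort(403)  # tampered or stale login state
    # ...verify the password + MFA for this username as usual...
    return "continue"
```

The point isn’t that this is unbreakable, just that a bulk phishing kit written for the default flow won’t handle it.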

This is all predicated on having multi-factor authentication configured in your environment, which really does stop a huge percentage of phishing attacks by itself. Going the extra step in a mature organization can help your CISO sleep better and keep your blueteam from responding unnecessarily to email incidents.

Also, don’t forget to include any authentication providers in your scope for penetration tests, DR planning, and security audits. Think of how they could be bypassed or misconfigured. How can they fail? What happens?


As a security director at a large professional services firm, I get a lot of requests from clients to fill out The Spreadsheets. You know the ones: the insane multi-tab, macro-enabled monstrosities, or garbage platforms like RSA Archer (slowly being replaced by silly startups like Whistic). All of them are just trying to glean the same basic information: are you a complete moron, and are you going to lie to us? Seriously, though: I’m protecting data, I have regulatory requirements, and Bad Things will already happen if I screw that up.

Look, I get that vendor management is really important, just ask Target, TJX, and dozens of other companies. I have vendors to manage myself. We all have problems communicating about security.

I’ve had the pleasure of working with clients who do GRC[^Governance, Risk, and Compliance: the catch-all term for vendor/risk management] well. We got on a call with their blueteam, talked through controls, discussed our approach, and they tried a few ‘gotcha’ questions to gauge our maturity. This is ~subjective~ but I think far more effective than just a dumb spreadsheet.

AICPA’s SOC 2 and SOC 3 have a good case for being the current independent audit standard. The approach is broad, but it doesn’t solve all the issues: point-in-time, self-defined controls don’t answer all of your questions as a client, and they’re not particularly satisfying as a security manager. I like the ability to select controls that make sense to me and my organization, but other than a pass/fail followed by a management response, it’s still too basic.

Industry-wide, maturity-based certifications (like HITRUST or CMMC) are a step in the right direction, but just like CMMI in the development world, I don’t see them catching on broadly. They’ll likely continue to exist in niche areas like DoD and healthcare, but widespread use is not going to happen.

In my experience, HITRUST is extremely arbitrary and inflexible. Despite being maturity-based, it has no ability to re-prioritize or select controls based on a use case (outside of its “factors” selections). All the auditors I’ve spoken to recommend doing the bare minimum to “pass”, since there’s no benefit to going beyond the “implemented” milestone. What’s worse, after all the pain and effort of a HITRUST certification, the majority of my clients have never heard of it.

It makes sense for CMMC to have overly specific requirements, since it was literally developed for one client: the US federal government. No one else is likely to care.

Overall, I think maturity models can work, but there need to be some better standards:

  • Open standard: the standard itself should not be owned by a company. Some open standards body (something like OWASP) or government agency (like DHS) should own and promote the standard which should be free to anyone who wants to use it.
  • Customizable: the standard should support risk-based customization of controls in an obvious way. The standard should incorporate certain baseline minimums (akin to the SANS Top 20 ‘quick wins’ list) without compromising that flexibility.
  • Clear: The HITRUST scoring matrix drives me insane. I understand why it exists, but it’s too much. Maturity does not have to be complicated to measure, depending on how we define maturity. It’s also very difficult to explain to someone not already familiar with HITRUST what all the different levels, percentages, and plans mean. A good, clear definition of security maturity given certain controls would improve a model significantly.
  • Dependency-aware: currently all controls are measured independently, ignoring all others. In the real world you implement controls within the context of a system and other controls that have their own advantages, coverage, and depth. For instance, context-aware authentication could let you drop multi-factor authentication in some situations, or change re-authentication intervals based on other controls in effect. Today you either define compensating controls in a complicated, one-off manner (as in PCI), or, as in HITRUST, there are no compensating controls at all.
  • Self-certification: organizations should be able to self-certify, period. Independent audit is mostly a veneer over spreadsheets. Effective third-party testing (like penetration testing) can be an option or even a requirement, but certification against a standard should rest with the organization.

Obviously, getting everyone to agree on some standard is incredibly difficult (hell, we can’t even agree on what good passwords are or what actually constitutes multi-factor authentication)[^this is not true: NIST 800-63r3 has already fixed both of these issues, but no one believes them]. But I think it’s worthwhile to try. The OWASP and SANS top lists are great examples of effective, consensus-driven security controls frameworks that don’t have to be impossible to implement.

There is no perfect system, but by exercising a clear, lowest-common-denominator approach, I think we can at least start to describe comparative operational security. I should be able to hand over one report and have that be the end of the conversation. If it’s not adequate for an organization’s third-party risk management, then the report itself should be updated so the additional controls or attestations can be captured and re-used. Mostly, I just want to stop filling out the same spreadsheet over and over.