A new API proposal being prototyped within Chromium, the brainchild of a team composed solely of Google engineers, has recently become a topic of heated conversation: Web Environment Integrity. The proposal has actually been public on GitHub since as far back as late April, but has only now drawn the attention (and ire) of the wider developer community and users alike, thanks to its highly polarising end goal and its far-reaching implications if implemented across web services and devices.

What does Web Environment Integrity seek to achieve? I’ll let Google’s Ben Wiser (@RupertBenWiser), one of the creators of the proposal, explain it for me.

[WEI seeks to] allow web servers to evaluate the authenticity of the device and honest representation of the software stack and the traffic from the device.
Ben Wiser, “WEI Explainer”

Essentially, WEI is yet another trusted computing technology. It aims to allow websites to verify the browser’s platform and environment using cryptographically signed “attestations”, provided by a limited set of “attesters” which, through some mechanism, determine the level of trust for the user’s platform. We’ve seen similar concepts introduced into many areas of software in recent years (the most notable that comes to mind being Windows 11’s hard requirement for a TPM), and they are, across the board, incredibly controversial.
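
To make the mechanics concrete, here’s a rough sketch of how a website might request one of these attestations, based on the example flow in the explainer. Treat the names and shapes as illustrative; the API surface is far from settled:

```ts
// Hypothetical sketch of the WEI flow from a website's point of view.
// The explainer proposes something along these lines; details may differ.

async function fetchWithAttestation(path: string): Promise<Response> {
  // The "content binding" ties the token to this specific request,
  // preventing replay of an attestation captured elsewhere.
  const contentBinding = `${path}?requestID=${crypto.randomUUID()}`;

  // The browser forwards the request to the platform's attester, which
  // returns a signed verdict about the device and software environment.
  const attestation = await (navigator as any).getEnvironmentIntegrity(contentBinding);

  // The encoded token (payload + signature) is shipped to the server,
  // which verifies it against the attester's public key.
  return fetch(path, {
    headers: { "X-Environment-Integrity": attestation.encode() },
  });
}
```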

And for good reason. Trusted computing inherently seeks to secure the hardware not just for the user, but against the user. By enforcing the use of a limited, specified set of hardware and software, these technologies can strip away users’ freedom to use their devices as they see fit, forcing them into a ‘Catch-22’: do you use your hardware or operating system of choice and lose access to a piece of potentially critical software, or bite the bullet and throw away your freedoms in order to avoid exclusion?

This is the question that Google now seeks to extend to the web with WEI.

Have Your Cake and Encrypt It Too

Proponents of trusted computing will often point to the range of security merits the technology can provide. Secure I/O, integrity measurement and cryptographic services, provided at the hardware level by devices such as a TPM (Trusted Platform Module), can be used to create a secure environment in which tampering is made far more difficult, by sealing off software access to the guts of the security mechanisms and the memory used by applications on the device.

What these arguments frequently fail to consider, however, are the myriad secondary effects that the use of such a cryptographic black box brings to the table, and what is (in my opinion) a fundamental misplacement of objectives in the name of providing further security.

Consider the following scenario: there is a well-established player in the music playback software market, developed by a large tech corporation. Let’s call it ‘Wotify’, for this example. In a world where trusted computing is commonplace, conglomerate music publishers such as SME and Warner have begun enforcing remote attestation for access to their digital music libraries, attempting to prevent piracy and stop ‘bad actors’ misusing their works. This relies on the use of an ‘attester’ that offers integrity measurement: the ability to measure the security and integrity of a program and attest this to third parties (sounds familiar, doesn’t it?).

By their very nature, ‘attesters’ are produced by a very limited group of large manufacturers. Allowing unvetted third parties to produce one would go against the fundamental concepts of trusted computing; they could simply produce a chip which acts in an unauthorised fashion, effectively disabling any security guarantees that external third parties (which have no view or knowledge of the underlying hardware) could expect. For this reason, ‘attesters’ ship with an endorsement key, or ‘EK’, uniquely generated at manufacture time and signed with keys from a trusted Certificate Authority (CA), much like an SSL certificate, and designed to be resistant to hardware-level inspection.
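
From a relying party’s perspective, trusting an attestation therefore boils down to a certificate-chain check against that small set of manufacturer roots. A minimal sketch follows; the key material and helper names are hypothetical, and real chain validation also involves expiry, intermediates and revocation:

```ts
import { createVerify, X509Certificate } from "node:crypto";

// Stand-in for loading the pinned root certificates of the handful of
// blessed attester manufacturers; anything not chaining to these is out.
declare function loadTrustedRoots(): X509Certificate[];

function verifyAttestation(
  payload: Buffer,   // the claims: device model, software measurements, etc.
  signature: Buffer, // produced by the attester's endorsement key (EK)
  ekCertPem: string, // the EK certificate, issued at manufacture time
): boolean {
  const ekCert = new X509Certificate(ekCertPem);

  // 1. Does the EK certificate chain to a trusted manufacturer CA?
  const chainsToRoot = loadTrustedRoots().some((root) =>
    ekCert.verify(root.publicKey),
  );
  if (!chainsToRoot) return false;

  // 2. Was the payload really signed by that EK?
  const verifier = createVerify("sha256");
  verifier.update(payload);
  return verifier.verify(ekCert.publicKey, signature);
}
```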

So we’ve established that ‘attesters’ can only be created by a small group of manufacturers, but where does ‘Wotify’ come into this? Well, as the holder of a large market share in the music player space, they have the resources and contacts to have their attester-generated software signature added to the hard-coded allowlist of authenticated program signatures managed by the music publisher.

This read-only list of signatures allows the music publishers to verify that the software running on the user’s device is indeed the official build of ‘Wotify’, and not some cracked version primed to record and steal the precious audio data of Warner Chappell, in turn granting the app access to securely decrypt and play the music data which the publishers (in this scenario) provide. Of course, this signature list is controlled by the publishers alone.
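
Once attestation is in place, the gatekeeping step itself is almost trivial; a hypothetical sketch of the publishers’ side of the exchange could be as simple as:

```ts
// Hypothetical server-side gate at the music publisher. The attested
// "software measurement" is a hash of the client binary, reported by the
// attester and already signature-verified upstream.
const ALLOWED_CLIENT_MEASUREMENTS = new Set<string>([
  "3f8a9c…", // official Wotify 4.2.1 build (illustrative values)
  "b71d02…", // official Wotify 4.2.0 build
  // Plotify is conspicuously absent: no entry, no music.
]);

// Stand-in for the publisher's DRM backend issuing a content key.
declare function issueContentKey(): string;

function releaseDecryptionKey(attestedMeasurement: string): string | null {
  if (!ALLOWED_CLIENT_MEASUREMENTS.has(attestedMeasurement)) {
    return null; // unrecognised client: denied, with no appeal process
  }
  return issueContentKey(); // the key to decrypt and play protected tracks
}
```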

Now for the entry of an up-and-coming competitor: ‘Plotify’. Plotify has, for the sake of this example, an objectively far superior feature set to ‘Wotify’; not only that, it also runs much faster on low-power devices and is generally more efficient. It is incredibly popular among its devoted group of users, and its developers are seeking to expand its feature set to be able to play protected digital music files from SME and others.

In order to do this, they must now pass through the gatekeepers that are the music publishers and cabal of ‘attester’ manufacturers. I imagine their conversation would go something like this:

Plotify Developers
Hello, please could we enter our program into your list of permitted client software?
SME-UMG-Warner Digital Music Inc.
Why should we bother? We don’t know if your software is secure, and you don’t seem to have a very large user base, or a large security team. Come back when you’re a billion dollar company.
Plotify Developers

Even if they somehow managed to convince the publishers to permit their application, all of their users must also be using hardware or software which hosts an attester supported by the end service; otherwise, an acceptable signature can’t be generated in the first place.

At every stage in the pipeline there is a filter, an allowlist which binds the user to a set of choices decided entirely at the whim of the service provider, whoever that may be. We’ve taken what has traditionally been the user’s freedom to decide – their choice of operating system, hardware and software – and wrested it away in favour of handing the decision to the service providers.

The question here is fundamentally this:

Whose responsibility is the security of the user’s device? The user, or the service provider?

In my mind, the answer is clear. Why on earth should a single service provider, such as a music publisher or a bank, dictate something like my choice of operating system, which determines far more than just what music I listen to or how I view my checking account? Protecting the user from themselves may be a good default, but it should not be the sole option.

Not only does the service provider lack the scope to be making this decision; attestation also reinforces the worst practices we’ve seen from modern tech companies in the last 20 years. As in the ‘Plotify’ example, only pre-existing large players are able to maintain a comprehensive feature set, thanks to their influence over the allowlists gatekeeping which programs are ‘trusted’ enough to have the privilege of accessing a certain functionality or service.

This in turn entrenches these existing applications within their markets, pushing out smaller competitors and leaving little to no room for innovation. Even if you’ve developed an innovative product, if it can’t access half of the functionality of your main competitor, nobody’s going to use it. It also allows existing players to prevent the use of functionality that does not fit their business model or ‘vision’, as users will have no alternatives to turn to even if they desire the functionality they’ve been denied.

Google and the other behemoths promoting technologies like this will, of course, insist that they’ll allow any software that meets their set of standards for security and other bells and whistles. But how well has trusting large tech companies not to breach anti-competition laws gone over the past 20 years? Additionally, smaller users of these APIs will almost certainly not have the resources or engineering hours to give smaller competitor products the time of day; we already see this with OS support.

A Two-Foot Garden Wall

One of the first major problems with Google’s proposal is visible within the first few lines of its stated goals:

Goals

- Allow web servers to evaluate the authenticity of the device and honest representation of the software stack and the traffic from the device.
- Offer an adversarially robust and long-term sustainable anti-abuse solution.
- Don't enable new cross-site user tracking capabilities through attestation.
- Continue to allow web browsers to browse the Web without attestation.

You may have noticed – as did many in the GitHub issues for the project – that the first two goals here, “Allow web servers to evaluate the authenticity of the device” and “Offer an adversarially robust and … anti-abuse solution”, conflict directly with the fourth: “Continue to allow web browsers to browse the Web without attestation.”

The immediate and most obvious use case for an attestation system like this is the exclusion of unattested usage. After all, if you’re allowing everyone to use your service anyway, what’s the point of going through an attestation? All of the example use cases listed in the project explainer speak only of the “detection of” unwanted or illegitimate activity, but every one of them implies its subsequent prevention. Nobody’s installing a burglar alarm just to see how many times their house is broken into.
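
Indeed, once the verdict is available server-side, “detection” and “prevention” are the same if-statement. A hypothetical sketch of middleware on the service side (helper names invented):

```ts
import type { IncomingMessage, ServerResponse } from "node:http";

// Stand-ins: token verification and a metrics sink (names hypothetical).
declare function verifyIntegrityToken(token: unknown): { trusted: boolean };
declare const metrics: { increment(name: string): void };

function gateOnAttestation(req: IncomingMessage, res: ServerResponse, next: () => void) {
  const verdict = verifyIntegrityToken(req.headers["x-environment-integrity"]);

  // "Detection": log the verdict for aggregate analysis, as the explainer suggests.
  metrics.increment(verdict.trusted ? "attested" : "unattested");

  // "Prevention": the obvious next step, and the whole point of the gate.
  if (!verdict.trusted) {
    res.statusCode = 403;
    res.end("Your environment could not be verified.");
    return;
  }
  next();
}
```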

As @tbrandirali so neatly put it, “this proposal is building a gate, and expecting it not to be used as a gate.” Google seems to partly recognise this issue themselves, later in the explainer describing a possible mechanism for mitigating this use case, which they title “holdback”:

We are evaluating whether attestation signals must sometimes be held back for a meaningful number of requests over a significant amount of time (in other words, on a small percentage of (client, site) pairs, platforms would simulate clients that do not support this capability).

Such a holdback would encourage web developers to use these signals for aggregate analysis and opportunistic reduction of friction, as opposed to a quasi-allowlist: A holdback would effectively prevent the attestation from being used for gating feature access in real time, because otherwise the website risks users in the holdback population being rejected.

The proposed solution is, quite hilariously, to simply make attestations probabilistic by sabotaging the functionality of the API itself. This has only negative effects on the value of attestation. If the efficacy of the API is high, service operators will simply accept that they may deny service to a small subset of legitimate users and take the path of least resistance anyway, permitting only attested clients. If the efficacy of the API is low, service operators will simply opt not to use it, as it will not provide enough useful data even for “aggregate analytics” purposes, since they would essentially be collecting fuzzed data.
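
The arithmetic of the operator’s dilemma is straightforward (the rates below are invented purely for illustration):

```ts
// Illustrative numbers only: suppose the platform holds back attestation
// for 5% of (client, site) pairs, as the explainer contemplates.
const HOLDBACK_RATE = 0.05;
const dailyLegitimateUsers = 1_000_000;

// An operator who gates on attestation anyway wrongly rejects that same
// 5% of perfectly legitimate users...
const wronglyRejected = dailyLegitimateUsers * HOLDBACK_RATE; // 50,000/day

// ...while an operator using the signal only for "aggregate analysis" is
// measuring a population in which 5% of supporting clients are deliberately
// disguised as non-supporting ones, muddying the very signal collected.
console.log(`${wronglyRejected.toLocaleString()} legitimate users rejected per day`);
```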

There is no “Goldilocks zone” to hit here; the proposal at its core is fundamentally contradictory. The system is either useful and open to abuse by web services, or simply another useless addition to the pile of other fingerprinting techniques.

But hey, maybe you could skip past a Captcha or two?

Entrenching the Chrome Throne

One paragraph after describing ‘holdback’, the proposers seem to disregard their prior objections to blocking based on attestation, inviting the possibility of blocking based solely on the browser.

If the community thinks it’s important for the attestation to include the platform identity of the application, and is more concerned about excluding certain browsers than excluding certain OS/attesters, we could standardize the set of signals that browsers will receive from attesters, and have one of those signals be whether the attester recommends the browser for sites to trust (based on a well-defined acceptance criteria).

If it wasn’t already clear from my prior mention of anti-competitive practices, this would almost certainly be another step towards further entrenching Chrome’s existing stranglehold on the browser market.

‘Works best on Chrome’ is a concept Google has pushed fervently for many years, particularly with first-party sites like YouTube, which has notoriously had issues working on browser engines other than Chromium. Johnathan Nightingale describes Google’s seemingly innocent march towards ‘deprecating’ other browsers during his time at Firefox in this Twitter thread, which I highly recommend, but I’ll place a small excerpt from it here:

I think they were running out the clock. We lost users during every ‘oops’. And we spent effort and frustration every clock tick on that instead of improving our product. We got outfoxed for a while and by the time we started calling it what it was, a lot of damage had been done.

– Johnathan Nightingale, Former VP @ Firefox

Google has a storied history of making seemingly innocent decisions in the name of ‘security’ or ‘performance’ which inevitably end up, in some form or another, excluding competitors from the marketplace. The assurances given in this proposal are similarly weak, as the authors are forced to submit to the reality that attesters will inevitably be developed by a small group of large companies.

In an attempt to provide some form of reassurance to the reader, they state that “established players” in the browser market (so, Chrome) would have to “only use attesters that respond quickly and fairly” to browsers requesting to be added to the club, but make no mention of who will be able to enforce this in practice. And of course they don’t! Chrome is the sole upstream sitting at the top of the mountain here; there is no overarching authority which can compel them to follow through on their promises. And why should we trust them, given their history?

The Leaning Tower of Trust

One fear that the team behind this proposal evidently predicted, and attempted to address, is the use of attestation to prevent the use of browser extensions or browser modifications. In fact, it’s directly addressed in their proposal explainer document (see below).

Users and developers are right to fear Google seeking to restrict or remove support for extensions; in fact, Google was accused of attempting just this less than a year ago, when it announced it was going ahead with the switch to Manifest V3 (ironically also flown under the banner of “A step in the direction of security, privacy, and performance”).

The switch sought to massively hobble the functionality of popular ad blockers such as uBlock Origin and Adblock Plus by kneecapping the capabilities of the underlying extension API; however, it was indefinitely delayed in December 2022.

So, how do the authors address the problem of extension use and browser modification in the context of their new API? Well,

How does this affect browser modifications and extensions?
Web Environment Integrity attests the legitimacy of the underlying hardware and software stack, it does not restrict the indicated application’s functionality: E.g. if the browser allows extensions, the user may use extensions; if a browser is modified, the modified browser can still request Web Environment Integrity attestation.

They don’t. It may not jump out at first glance, but this is a clever sidestep of the underlying question. Yes, the goal of attestation is indeed only to verify the integrity of the hardware and software stack, and a browser with extensions enabled or modifications applied may also request an attestation, but user requests can still then be filtered based on these attestations.

In fact, one of the key use cases cited in their own opening is use by websites that are “expensive to create and maintain”, in order to prevent bots from viewing advertisements. If web services are provided with tools that go that far, attestation-backed ad-blocker blocking for human users is the next logical step, and this proposal stays notably quiet on that topic, despite seemingly laying the groundwork for its implementation.

Even if extensions and modifications were entirely overlooked at the attestation level in this proposal, that simply creates another problem for the feasibility of the model, namely: what’s to stop people from installing malware or modifications in the browser layer? Trusted computing systems by design rely on a full vertical ’tower of trust’; if a single layer of the system is open to modification, the entire trust model collapses.
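
The ‘tower’ can be sketched as a chain of measurements, loosely modelled on a TPM’s PCR-extend operation: each layer hashes the next into a running value before handing over control, and the verifier compares the final value against a known-good chain. This is a conceptual illustration, not any particular vendor’s scheme:

```ts
import { createHash } from "node:crypto";

// Conceptual measured-boot model (cf. TPM PCR extend): fold a hash of the
// next layer's image into the running measurement before running it.
function extend(measurement: string, nextLayerImage: Buffer): string {
  return createHash("sha256")
    .update(Buffer.from(measurement, "hex"))
    .update(createHash("sha256").update(nextLayerImage).digest())
    .digest("hex");
}

// Stand-ins for the binary images of each layer (hypothetical).
declare const firmware: Buffer, bootloader: Buffer, osKernel: Buffer, browser: Buffer;

let chain = "00".repeat(32); // initial PCR-like value
for (const layer of [firmware, bootloader, osKernel, browser]) {
  chain = extend(chain, layer); // swap any layer and every later value diverges
}
// The verifier checks `chain` against a known-good value. But if the top
// layer (the browser) legitimately runs arbitrary user code such as
// extensions, everything above the chain's end is unmeasured: an open
// window on the top floor of the tower.
```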

This is an inherently weak configuration, made weaker still when you are deliberately opening a layer of your system to modification and the running of user code, even if it is in a semi-sandboxed environment (and we now know far too well that browser extensions are extensively, and successfully, used as malware). Indeed, this was pointed out as far back as the request for comment on the proposal to the Anti-Fraud Community Group in mid-April, and the proposal as yet seems to have no good answer to this issue.

Conclusion

Google’s proposal for a ’trusted compute platform for the web’ strikes me as half-baked and internally contradictory. Many of the ideas expressed within the proposal seem infeasible to pull off simultaneously, and those that seem most likely to be successfully implemented will likely result in a degraded or intentionally limited experience for the end user.

User reception of the proposal has been similarly grim, with the proposal’s GitHub repository being bombarded with issues (some polite, some less so) discouraging the development of the proposed API. Please don’t do this, by the way:

Issue #138: ‘Cowards’, WEI GitHub repository.

Pull Request #98: ‘replace everything with my fursona’, WEI GitHub repository.

It’s not hard to believe that the engineers working on this project sincerely believe in the goals they’re setting out, and that they themselves aren’t trying to blatantly strong-arm away market competition with this API. However, it’s becoming increasingly difficult to trust any standards proposal coming out of Google, especially in the browser space, and the browser market has seemed destined for a death by a thousand cuts for quite some time now.

All I can hope is that Google middle management sees some value in correcting course, and prevents this proposal from developing in the wrong direction. Somehow, however, I doubt that will be the outcome we see.