What the Jeff Bezos WhatsApp Hack Means for App Security

Marc Rogers, April 9, 2020

By now the whole world has heard that Jeff Bezos’s WhatsApp was hacked, leading to the theft, or exfiltration, of gigabytes of personal data. We don’t know what data the hackers stole, as the attackers, once finished with their operation, quickly covered their tracks, destroying almost all of the tell-tale signs that they had breached the phone. In the end, the one thing they could not hide, the sheer volume of data that suddenly left the phone, was what gave them away.

For now, the focus is squarely on figuring out who was responsible. However, there’s another, bigger question we should be asking: why are these attacks becoming so common?

Let’s take a look at what we know about the attack on Jeff Bezos’s phone. We know that Jeff Bezos received an MP4 media file and that all the suspicious activity took place immediately after. Looking at the National Vulnerability Database, I see one likely candidate:

CVE-2019-11931

A stack-based buffer overflow could be triggered in WhatsApp by sending a specially crafted MP4 file to a WhatsApp user. The issue was present in parsing the elementary stream metadata of an MP4 file and could result in a DoS or RCE. This affects Android versions prior to 2.19.274, iOS versions prior to 2.19.100, Enterprise Client versions prior to 2.25.3, Business for Android versions prior to 2.19.104 and Business for iOS versions prior to 2.19.100.

Published: November 14, 2019; 06:15:10 PM -05:00

What this means is that there was a software flaw in the WhatsApp code for handling MP4 media files. If an attacker triggered the flaw, the function in question would crash in a way that could allow the attacker to gain “RCE,” or Remote Code Execution.

In layman's terms, this means an attacker could inject their own code into the application and, by triggering the flaw, have that code run with all the privileges and access of the WhatsApp application itself.
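To make the bug class concrete, here is a deliberately simplified Swift sketch of the kind of flaw CVE-2019-11931 describes. WhatsApp’s real parser is native code and its internals are not public, so the function name, the 256-byte buffer, and the structure below are illustrative assumptions, not the actual implementation:

```swift
import Foundation

// Hypothetical, simplified parser illustrating the bug class behind
// CVE-2019-11931. WhatsApp's real parser is native C/C++ code; the
// function name and 256-byte buffer here are illustrative assumptions.
func parseElementaryStreamMetadata(_ metadata: [UInt8]) {
    // A fixed-size scratch buffer, analogous to a stack buffer in C.
    withUnsafeTemporaryAllocation(of: UInt8.self, capacity: 256) { buffer in
        metadata.withUnsafeBufferPointer { src in
            guard let srcBase = src.baseAddress else { return }
            // VULNERABLE: the copy length comes straight from the file.
            // A crafted MP4 can supply more than 256 bytes and overwrite
            // adjacent memory -- the crash-or-RCE condition the CVE describes.
            memcpy(buffer.baseAddress!, srcBase, metadata.count)
            // The fix for this bug class is a bounds check before copying:
            //   memcpy(buffer.baseAddress!, srcBase, min(metadata.count, 256))
        }
    }
}
```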

Since we do not have the actual exploit code, we cannot be 100% certain. However, the timeline fits. The National Vulnerability Database contains known, publicly reported vulnerabilities. WhatsApp did not publish CVE-2019-11931 to the database, making it public, until November 14, 2019. Bezos’s phone was hacked on May 1, 2018. So CVE-2019-11931 was in the code, but not yet public, at the time of the hack. There is a very high likelihood that the hackers used that vulnerability.

As the dust settles, people are pointing fingers in many directions. Was it WhatsApp’s fault for failing to adequately secure their application? Was it Apple’s fault for not adequately securing the operating system? Was it Bezos’s fault for trusting WhatsApp? The answer, in my mind, is more nuanced than this.

First, let’s look at WhatsApp.

There’s no question in my mind that WhatsApp does bear some of the responsibility. Software vulnerabilities happen; they’re an unfortunate but expected side effect of writing complicated code. However, this is well known and well understood. As a result, there are a number of very clear guidelines for writing secure code, integrating thorough testing into your development environment, and attacking your own applications to ensure they are secure by design. I have linked just a few into this paragraph.

What’s more concerning is what you see when you take a look at the published vulnerabilities in the National Vulnerability Database. Remember, this is just the list of vulnerabilities the company or white-hat security researchers found and made public. It doesn’t include so-called zero-days that may exist in the wild.

Table 1: WhatsApp vulnerabilities submitted to the National Vulnerability Database, by date

Any vulnerability ranked HIGH or greater is likely severe enough to lead to complete compromise of the application in question. From the table above, we can see that WhatsApp reported 4 vulnerabilities between 2015 and 2018. However, after the Bezos hack became big news, they reported 11 vulnerabilities in 2019 alone, 4 of them ranked HIGH severity and 5 ranked CRITICAL, the most serious category of vulnerability possible.

What this looks like is a company that was doing little to look for vulnerabilities until it suddenly found itself in the spotlight, after which it focused significant resources on the problem, leading to the flurry of disclosures in 2019.

Next, we should look at Apple.

Apple, in my mind, has actually done a pretty solid job of securing its mobile devices against security threats. All Apple phones are equipped with sophisticated hardware security, ranging from a dedicated secure enclave for storing sensitive material to a multi-stage encrypted bootloader to default encryption of the file system.

While I am normally skeptical of most biometric systems, and personally broke Touch ID on the iPhone 6 and 6s, Apple’s Face ID is an impressive piece of engineering that I haven’t managed to completely break yet.

Apple has designed its application architecture equally well. All applications run in their own containers, and access from one container to another is blocked. Data sharing between applications takes place only under Apple’s terms and only when specifically authorized by the device user. This is a good architecture, but it has one significant flaw that I will discuss later.
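To illustrate that gate with a real API, here is a minimal Swift sketch using Apple’s Contacts framework. Until the user explicitly grants access, the request fails and the app sees nothing outside its own container:

```swift
import Contacts

// A minimal sketch of Apple's authorization gate, using the real Contacts
// framework: the app cannot read contact data outside its container until
// the user explicitly says yes.
func fetchContactsIfAuthorized() {
    let store = CNContactStore()
    store.requestAccess(for: .contacts) { granted, error in
        guard granted else {
            // The user (or a prior denial) keeps the container sealed.
            print("Contact access denied:", error?.localizedDescription ?? "no error")
            return
        }
        // Only now can the app query CNContactStore for contact records.
        print("Contact access granted by the user.")
    }
}
```

Note, however, that the grant is standing: once given, it persists, and anything executing inside the app inherits it. Keep that in mind for what follows.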

Unless the Bezos attack included a multi-exploit attack, also known as an exploit chain, to compromise the underlying operating system, Apple was likely barely involved at all. Of course, this is all supposition: the only way to be sure would be to analyze the exploit code in detail, and unfortunately this is not possible.

Finally, let’s look at the bigger picture.

While it’s easy to point the finger at WhatsApp, the simple fact is vulnerabilities happen. With a solid, secure software development cycle you can catch most of these, as illustrated by the work WhatsApp did in 2019, but you will never catch them all. Bugs will always slip through.

This means we should be architecting our mobile applications to be ready for the worst-case scenario. If someone compromises an application, how do we minimize the impact? How do we limit their access? How do we detect and shut down hostile connections?
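As one sketch of such a control, consider metering outbound data. Remember that the sheer volume of data leaving Bezos’s phone was what ultimately gave the attackers away; the hypothetical class below, with an assumed 50 MB-per-hour policy, turns that after-the-fact signal into a live tripwire. Nothing here is a real iOS API:

```swift
import Foundation

// Hypothetical egress meter; the class name and threshold are assumptions.
final class EgressMonitor {
    private var bytesSent = 0
    private var windowStart = Date()
    private let window: TimeInterval = 3600            // 1-hour window
    private let maxBytesPerWindow = 50 * 1024 * 1024   // assumed policy: 50 MB

    /// Call before each outbound write; returns false to block the send.
    func allowSend(of byteCount: Int) -> Bool {
        if Date().timeIntervalSince(windowStart) > window {
            bytesSent = 0                              // roll the window
            windowStart = Date()
        }
        bytesSent += byteCount
        // Gigabytes leaving the device in a short burst trips this check.
        return bytesSent <= maxBytesPerWindow
    }
}
```

A real implementation would need to live below the application layer, so that compromised app code could not simply bypass the check.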

Homogenization and centralization have gifted the attacker of today with unprecedented reach. When you put everything in the cloud and fail to build an adequate security model, you create a treasure trove few attackers can overlook. By homogenizing the apps we use to distribute our personal data and sensitive assets, we have created a scenario where a single vulnerability can compromise millions of devices with a single reused exploit. In many ways we are damned if we do, yet damned if we don’t.

On a larger scale, the Zero Trust initiative strives to protect against exactly this from an infrastructure and integration perspective. Under a well-implemented Zero Trust architecture, we don’t assume any connection is safe until it is proven to be safe. Active validation of connections, comparison with contextual information, and anchoring to a cryptographically assured digital identity create a framework that enables the enterprise architect to build robust, scalable systems that are secure by design. However, on the micro scale, from application to application, options are currently limited. This is the flaw that bad guys are actively exploiting.
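On the connection side, the building blocks already exist in Apple’s real URLSession and Security APIs. The sketch below pins the expected peer certificate so a connection is rejected unless the remote identity is cryptographically proven; the pinned DER bytes are a placeholder you would supply:

```swift
import Foundation
import Security

// A minimal sketch of "never assume a connection is safe": the session
// rejects any peer whose certificate does not match one pinned in advance.
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    private let pinnedCertificate: Data   // expected peer certificate (DER)

    init(pinnedCertificate: Data) {
        self.pinnedCertificate = pinnedCertificate
    }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition,
                                                  URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod
                == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              SecTrustEvaluateWithError(trust, nil),              // normal chain checks
              let leaf = SecTrustGetCertificateAtIndex(trust, 0), // the peer's own cert
              SecCertificateCopyData(leaf) as Data == pinnedCertificate
        else {
            // Identity not proven: reject rather than trust by default.
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}
```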

Not too long ago, to get access to a wide selection of sensitive data you would typically need to use a complex exploit chain. You would need one exploit to compromise a vulnerable application or service as a way into the device, another to break out of the restrictions enforced by the application container, and finally a third to gain maximum access to the device and its data.

Today, this is no longer necessary. When you compromise an application, you gain access to anything it has been given permission to access. So now, instead of a complex multi-stage exploit chain, you just need to select a vulnerable application with the right permissions. Given the overabundance of permissions an application can request, there is no shortage to choose from. In many ways this makes a mockery of all the time and effort mobile manufacturers have invested in hardening their devices and building a secure application ecosystem. Encrypted file systems, hardware security, biometrics, and application containers are useless if a $2 application with permission to access everything has a critical software flaw.

Whoever compromised WhatsApp on Jeff Bezos’s phone gained access to:

  • Location

  • Phone Systems

  • Microphone

  • Call logs

  • SMS Messages

  • Contacts

  • Camera

  • Camera Roll

  • Siri & Search (Access to Siri alone can give significant access to an iPhone)

  • Wi-Fi & cellular data

That’s a lot, but it is by no means unusual. These days it’s normal for apps to request as many permissions as possible. This is a big challenge, because on the flip side there is nothing to restrict that access when things go wrong. Sadly, this means we are going to see more hacks like this before things get better.
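Most of these grants can be enumerated at runtime with real iOS status APIs. The small sketch below audits the standing permissions an app already holds, which are exactly the privileges an attacker inherits by compromising it:

```swift
import AVFoundation
import Contacts
import CoreLocation
import Photos

// A small audit sketch using real iOS status APIs. Raw values are
// framework-specific enums; 0 generally means "not determined".
func auditStandingPermissions() {
    print("Microphone:", AVCaptureDevice.authorizationStatus(for: .audio).rawValue)
    print("Camera:    ", AVCaptureDevice.authorizationStatus(for: .video).rawValue)
    print("Contacts:  ", CNContactStore.authorizationStatus(for: .contacts).rawValue)
    print("Photos:    ", PHPhotoLibrary.authorizationStatus().rawValue)
    print("Location:  ", CLLocationManager.authorizationStatus().rawValue)
}
```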

What we need to do

We need to take a close look at our mobile application architectures and ecosystems. Going forward, this likely means building something like a zero application trust model. Just because something has permission to access something else doesn’t mean we should blindly allow it to do so. Just as with Zero Trust, we need to take context into account, and we need to rely more on a robust digital identity as a foundation on which we can build applications that are secure by design.

As consumers, we need to consider who has access to our data. We need to be conservative with the permissions we give applications. How much should we trust an application before giving it access to all our private messages? How do we measure that trust when some of the apps asking come from the biggest, most trusted tech companies in the industry?

Take a look in any application store and you will find tens of thousands of apps that request every permission possible. Likewise, we need to consider what these companies are going to do with our data. If you can’t be sure they are going to handle it properly, surely you should think twice about giving it to them. Remember, some data is permanent; once it leaks, you can’t put the genie back into the bottle.

What this means is that as well as solving this application architecture flaw, we also need to have a long hard conversation about data. Few people understand just how valuable some of their data is until something bad happens. Meanwhile, attackers are evolving new techniques to use data to attack our infrastructure and our very identities every day. Until we get tough about data, companies will keep retaining everything they can and have little incentive to do much about safeguards.

Policies like GDPR in the European Union are a great step in the right direction, but we need technology to back them up. Companies should automatically purge, minimize, or anonymize data that isn’t needed in real time. Those time intervals should be driven in part by set policy, but the user should also be able to play a role. If I leave a company, I should be able to “put my data on ice” or purge it outright.
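As a minimal sketch of what that could look like on the server side, with illustrative types and a 90-day retention window chosen purely for the example:

```swift
import Foundation

// Illustrative schema; no real service's data model is implied.
struct UserRecord {
    var userID: String
    var lastActive: Date
    var messageBodies: [String]
    var onIce: Bool            // the user asked to freeze/purge their data
}

func applyRetentionPolicy(to records: [UserRecord],
                          retentionDays: Int = 90) -> [UserRecord] {
    let cutoff = Calendar.current.date(byAdding: .day,
                                       value: -retentionDays,
                                       to: Date())!
    return records.compactMap { record in
        if record.onIce { return nil }         // purge outright on request
        var kept = record
        if record.lastActive < cutoff {
            kept.messageBodies = []            // minimize stale content
            kept.userID = UUID().uuidString    // anonymize the identifier
        }
        return kept
    }
}
```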

Apps should be forced to justify their need for permissions, and that justification should be re-evaluated on a regular basis. Haven’t accessed an app for a month? It should be forced to ask again for data access. Some companies, like Apple, have made progress in this space, but we have a long way to go. With health and banking data being integrated into more apps every day, and insurance devices tracking everything we do in our cars, the risk has never been greater.
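No mobile platform fully enforces this today, so the closing sketch below is a hypothetical permission broker rather than any real API: a grant that has not been exercised in 30 days lapses and must be re-requested from the user:

```swift
import Foundation

// Hypothetical permission broker -- the type and policy are assumptions.
struct PermissionGrant {
    let name: String          // e.g. "contacts" or "microphone"
    var lastUsed: Date
}

func isStillValid(_ grant: PermissionGrant, maxIdleDays: Int = 30) -> Bool {
    let idleLimit = Calendar.current.date(byAdding: .day,
                                          value: -maxIdleDays,
                                          to: Date())!
    // A lapsed grant should route back through an explicit user prompt.
    return grant.lastUsed >= idleLimit
}
```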