What needs protecting

Attackers can target mobile applications at any point in their lifecycle. This page gives an overview of what attackers aim to achieve and how they go about it, and introduces some key priorities for preventing attacks.

How do attackers target mobile applications?

They can do so while your app is in three different states:

  • Project -- the app in development: source code and resources as they are created, modified, stored, compiled, and packaged, using repositories, build machines, IDEs, toolchains, testing platforms, and CI/CD platforms.
  • Package -- the app in storage: the compiled and bundled app archive (APK, AAB, AAR; IPA, XCARCHIVE, FRAMEWORK, XCFRAMEWORK).
  • Runtime -- the app in memory: the app’s compiled code being executed in a process. We can also consider this in an extended sense to include any data generated or processed by the app that might be stored locally.

The app in memory is a product of the app in storage, which itself is a product of the app in development. So you see these states are interrelated.

Each state can be the specific target of attacks, however. And each state has its own priorities when it comes to security and protection.

Project

The Project state, broadly understood, encompasses the entire development process and its continuing cycle, from planning to testing to distribution. Security should be a priority in all phases.

Our main focus in this guide, though, is on attacks and how to protect against them, and attackers can target the app directly during the project phase. The main concerns here are the codebase being either exfiltrated (i.e. stolen) or infiltrated (i.e. injected with malicious code) via Software Supply Chain Attacks. There is a range of possible vectors that adversaries may seek to exploit:

  • Repositories
  • IDEs
  • Toolchains
  • Build machines
  • CI/CD platforms
  • Cloud-based security solutions

It’s vital to be aware that each of these may be intrinsically insecure or compromised, leaking data or containing vulnerabilities. This is why it’s so important to take as much control as possible over the build environment and to not sacrifice security for convenience by blindly trusting third-party providers.

Security audits and penetration testing should therefore focus on the development process and build cycle as well as the distributed application and server-side architecture.

The first steps in mitigation are to make sure that development tools are secure and up-to-date. Where possible, you should also use repositories and build machines that are internal to the organization itself, with access securely controlled via strong authentication mechanisms.

Third-party cloud-based security solutions should also be approached with caution. Uploading unencrypted proprietary source code (even in compiled form) to an external server brings with it the risk of internal data and IP being intercepted, leaked, or stolen.

So far we’ve focused on threats in the Project phase from the build environment. But you must also be wary of introducing dangers into your app’s code directly in the form of malicious or vulnerable dependencies.

It’s not uncommon for mobile apps to contain dozens, hundreds, or even thousands of third-party components: the SDKs, libraries, plugins, and modules that are used and re-used to add ready-made functionality to our apps. Whether open source or commercially licensed, in source or compiled form, it’s crucial to make sure they are neither vulnerable nor malicious.

That’s why development and security teams must maintain full awareness of every component in their apps, ensuring each one is necessary, up-to-date, and vetted for known or potential vulnerabilities. This should be done through a combination of careful research, manual code checks, and automated Software Composition Analysis (SCA) tools.
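
As a small illustration of what this can look like in practice (a sketch only, assuming an Android project built with the Gradle Kotlin DSL; the OkHttp coordinate is just an example dependency), pinning exact dependency versions and enabling Gradle’s dependency locking makes it harder for an unexpected or tampered version to slip into a build unnoticed:

```kotlin
// build.gradle.kts -- a minimal sketch, not a complete supply-chain defense.
// Assumptions: a recent Gradle version with the Kotlin DSL.

dependencyLocking {
    // Record the exact resolved version of every dependency in lockfiles,
    // so an unexpected change fails the build instead of passing silently.
    lockAllConfigurations()
}

dependencies {
    // Pin exact versions rather than dynamic ranges such as "4.+"
    implementation("com.squareup.okhttp3:okhttp:4.12.0")
}
```

Gradle’s dependency verification features can additionally check artifact checksums and signatures; measures like these complement SCA tooling rather than replace it.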

Package

Once the app is ready for distribution - either for the first time or as a new update - the code is compiled to binaries; binaries are bundled with assets (image files, sound files, XML files, and so on); the package is signed; and the app is ready for publication in its final form: .apk for Android, .ipa for iOS.

Note

Throughout this guide, for the sake of convenience, we focus on APKs and IPAs, these being the formats in which end users download apps. Other relevant file types are either means for generating and distributing APKs and IPAs (Android App Bundles (AABs) for Android, and Xcode Archives (XCARCHIVEs) for iOS), or libraries which are ultimately integrated into APKs and IPAs (Android Archives (AARs) and iOS Frameworks), meaning the same principles apply in almost all cases.

APKs and IPAs are compressed archives, equivalent to zip files. They can be uncompressed, their file contents can be extracted, and the binaries can be disassembled and decompiled. 

This means that as soon as your app is deployed, it’s out there in the world for anybody to download and analyze, using free and intuitive tools. This can be done manually - opening the app in a decompiler, reading the decompiled code, and searching for keywords - or automatically, with a variety of scanning tools.
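
To make this concrete, here is a minimal sketch (runnable on a desktop JVM; the APK path is a placeholder) showing that nothing more than the standard zip APIs is needed to open a package and list its contents:

```kotlin
import java.util.zip.ZipFile

// An APK is an ordinary zip archive: no special tooling is needed to open it.
fun main(args: Array<String>) {
    val apkPath = args.firstOrNull() ?: "app-release.apk"  // placeholder path
    ZipFile(apkPath).use { zip ->
        for (entry in zip.entries()) {
            println("${entry.name} (${entry.size} bytes)")
        }
    }
}
```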

Decompilation - transforming compiled binary machine code into a reconstructed version of the original source code - is usually the first step that attackers will take when targeting your app. The aim here is to analyze your app’s code, understand its logic, and identify any vulnerabilities it contains. It’s therefore used both for reverse engineering, and for the static analysis that mobile security testers also perform.

Decompilation may give attackers access to information that they can exploit immediately, without even needing to run the app:

  • credentials
  • API keys
  • cryptographic keys and algorithms
  • proprietary algorithms
  • resource files

The first danger, in other words, is theft of Internal data and IP.
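
As a purely hypothetical illustration (the object, endpoint, and key below are invented), a constant like this survives compilation largely intact and can be read straight out of the decompiled code:

```kotlin
// Hypothetical example of what decompilation can expose. Any string literal compiled
// into the app -- endpoints, API keys, embedded credentials -- is recoverable this way.
object ApiConfig {
    const val BASE_URL = "https://api.example.com/v1"
    const val API_KEY = "sk_live_placeholder_key_value"  // visible to anyone with the APK/IPA
}
```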

But it also gives attackers insights into how exactly your app

  • enforces user authentication
  • uses platform APIs and Inter-Process Communication (IPC) mechanisms
  • manages sensitive data
  • communicates with backend services
  • makes use of cryptography
  • tries to protect itself

and enables them to target any weak points. They might use the knowledge gained to exploit the app directly, for the purposes of illegitimately accessing Restricted functionalities, for example. Or they could design malware targeting those weak points in order to control end users’ devices or to harvest Sensitive user data.

There is also the danger of the compiled binaries being modified (a.k.a. patched or tampered with) by adding, changing, or removing code.

Adversaries might use this for either of the two purposes outlined previously in this guide:

  • to exploit the application directly, for instance by accessing Restricted functionalities, such as ‘locked’ features and content on their own devices, or bypassing local authentication checks; or, 
  • to exploit the application’s users, for instance by repackaging the app and distributing it via app stores or via social engineering techniques like phishing. The repackaged app is designed to seem as similar as possible to the legitimate app, but with additional or modified logic to steal Sensitive user data, control devices remotely, or display advertisements for the adversary’s own benefit.

From the perspective of security, modifying an app is functionally equivalent to extracting the app’s code and resources for use in a different app.

And here’s an additional problem with attackers modifying the compiled binaries: they may be able to remove or override any RASP (Runtime Application Self-Protection) or network security features. This would allow them to run (and/or to redistribute) an unprotected version of what you might think is your secure application.
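
One common countermeasure is for the app to verify at runtime that it still carries the expected signing certificate. The sketch below (assuming Android API level 28+; the expected digest is a placeholder for your own release certificate’s SHA-256 hash) shows the idea, and also illustrates the problem just described: a patched binary can simply strip such a check out, which is why checks like this need to be layered, obfuscated, and ideally complemented by server-side verification.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Minimal sketch of a signing-certificate check. Assumptions: Android API 28+, and
// EXPECTED_CERT_SHA256 replaced with the hex SHA-256 digest of your release certificate.
private const val EXPECTED_CERT_SHA256 = "replace-with-your-release-cert-sha256"

fun isSignedWithExpectedCertificate(context: Context): Boolean {
    val info = context.packageManager.getPackageInfo(
        context.packageName,
        PackageManager.GET_SIGNING_CERTIFICATES
    )
    val signers = info.signingInfo?.apkContentsSigners ?: return false
    return signers.any { signature ->
        val digest = MessageDigest.getInstance("SHA-256").digest(signature.toByteArray())
        digest.joinToString("") { "%02x".format(it) } == EXPECTED_CERT_SHA256
    }
}
```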

These are the main reasons why it is so important to prevent and mitigate the effects of decompilation and modification.

Runtime

Runtime is of course what the app is designed for, and ultimately the target of the majority of attacks. 

Here is a basic overview of an app’s runtime state:

  • The system launches the app in response to a trigger, such as the user pressing the icon on the home screen, or a request from another app 
  • The operating system spawns a new process specifically for the app, and begins to load its code and resources from storage into memory
  • The UI (user interface) is displayed, and the user can begin interacting with the app’s components
  • As the app’s code is executed, in combination with the user’s inputs, the system grants and denies access to system services and resources (the camera, the network, GPS, storage, etc.) and to other apps 
  • As the user interacts with the app, the system processes input data, transmits data to and from the app’s remote endpoints, and stores data in the app’s sandboxed directory and elsewhere. It also generates new UI views, starts new services, and may launch other apps. Much of this also occurs while the app is in the background, invisible to the user.
  • The system terminates the app in response to a trigger, such as the user exiting or because it was dormant for too long; the system clears the app’s code and resources from memory
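
A minimal Android sketch of the entry points involved (the class name and log messages are illustrative only); each of these callbacks is invoked by the system, never by the app itself:

```kotlin
import android.app.Activity
import android.os.Bundle
import android.util.Log

// Minimal sketch: the operating system drives every one of these transitions.
class MainActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        Log.d("Lifecycle", "Code and resources loaded into the app's process")
    }

    override fun onResume() {
        super.onResume()
        Log.d("Lifecycle", "UI visible; the user can interact with the app")
    }

    override fun onDestroy() {
        super.onDestroy()
        Log.d("Lifecycle", "System is tearing the app's components down")
    }
}
```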

In both Android and iOS, apps are sandboxed. That means the system runs them in isolated processes and stores their data in a directory that cannot (in principle) be accessed by other apps. Sandboxed apps can only access resources and data explicitly granted by the user and/or the system. This is designed to prevent apps from interfering with other apps or system processes, and from accessing sensitive information, resources, or hardware functionalities without authorization.
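
For example (a minimal sketch; the file name is arbitrary), anything an Android app writes under its internal files directory stays inside its sandbox, whereas shared or external storage offers no such isolation:

```kotlin
import android.content.Context
import java.io.File

// Data written under filesDir lives in the app's sandboxed directory and is not
// readable by other (non-root) apps. Shared/external storage offers no such guarantee.
fun savePrivately(context: Context, contents: String) {
    File(context.filesDir, "private_note.txt").writeText(contents)
}
```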

It’s worth emphasizing here that the operating system is always in control of the app. The system itself launches the app, grants or denies access to other apps and to device resources, generates UI views, and so on.

This means that if the operating system is compromised in some way (for example if the firmware is modified, or if a user has rooted their device to obtain escalated privileges), the security guarantees afforded by sandboxing are irrelevant. 

It’s within the context of this fundamental framework that an app does everything it was designed to do.

So, what needs protecting at runtime?

Firstly, a quick reminder that attacks of all types fall into two basic categories:

  • Attacks aiming to exploit the application directly -- for the purposes of spoofing, cheating games, stealing IP, ransomware, etc.
  • Attacks aiming to exploit the application’s users -- credential and personal identifying information theft, impersonation, fraudulent transactions, etc.

Attacks aiming to exploit the application directly

In this scenario, the attacker is likely to be using their own device. And their goal is to illegitimately access and potentially manipulate privileged, protected, or hidden functionalities.

The primary target in this type of attack is not user data; the main security concern is rather the logic of the app and how it (intentionally or unintentionally) provides users with access to restricted or privileged functionalities. This includes access to premium content.

The priorities are therefore:

  1. To protect the app’s Internal data and IP against reverse engineering and theft through dynamic analysis
  2. To protect Restricted functionalities, such as client-side authentication logic, or any functionalities that might be abused or exploited for the purposes of fraud or cheating

That means preventing the use of anything that gives an attacker access to and control over the app’s processes during runtime:

  • patched or modified versions of the app
  • dynamic instrumentation tools
  • virtualized environments
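
As a rough, easily bypassed illustration of detecting two of the above (assuming frida-server on its default port, 27042, and a debugger attached through the Android runtime), such a check might look like this; real RASP solutions layer many signals of this kind:

```kotlin
import android.os.Debug
import java.net.InetSocketAddress
import java.net.Socket

// Rough sketch, not a complete RASP solution. Call this off the main thread,
// since it performs a (local) socket connection.
fun looksInstrumented(): Boolean {
    if (Debug.isDebuggerConnected()) return true
    return try {
        // A listener on frida-server's default port is a strong hint of instrumentation
        Socket().use { it.connect(InetSocketAddress("127.0.0.1", 27042), 200) }
        true
    } catch (e: Exception) {
        false
    }
}
```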

Attacks aiming to exploit the application’s users

In the second type, the attack takes place on legitimate end users’ devices, or at some point in the network between those devices and backend servers. And although in some cases attackers will have physical access to others’ devices, these attacks are more commonly carried out by malware or by a fully remote attacker.

When considering this threat, protecting Sensitive user data is indeed the ultimate priority. This covers all data that is stored and/or processed on the device, including any transmitted between the device and the network. That might include:

  • Credentials used for authentication
  • Biometric data used for authentication
  • Session tokens and identifiers used for authentication
  • Personal Identifying Information
  • Sensitive data generated or processed by any of the device’s components: telephone calls; messages; GPS locations; sensor data; photographs and videos; sound recordings, etc.

Encryption is often used to ensure the confidentiality of such data. In such cases, protecting cryptographic processes and keys is also a top priority.
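
On Android, for instance, one widely used approach is to generate keys inside the hardware-backed Android Keystore, so that raw key material never leaves protected storage. A minimal sketch (the key alias is arbitrary):

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Minimal sketch: an AES-GCM key generated and kept inside the Android Keystore,
// so the raw key bytes never enter app memory. The alias is just an example.
fun getOrCreateKey(): SecretKey {
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    (keyStore.getKey("user_data_key", null) as? SecretKey)?.let { return it }

    val generator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    generator.init(
        KeyGenParameterSpec.Builder(
            "user_data_key",
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build()
    )
    return generator.generateKey()
}

// Encrypts plaintext with the Keystore-backed key; the IV is returned alongside
// the ciphertext because both are needed for decryption.
fun encrypt(plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, getOrCreateKey())
    return cipher.iv to cipher.doFinal(plaintext)
}
```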

Consider for a moment the range of ways such data enters the device: some arrives via keyboard input; some is received over the network from a remote endpoint; and some comes via hardware such as the device’s camera.

And now think about the points through which such data passes and may be exposed:

  • the UI itself
  • device memory
  • APIs which apps use to communicate with each other, with the system, with device hardware, and with the network
  • local storage
  • the network, during transmission to and from remote endpoints

User data may be exposed at any of these points, at any given interface, either by design or by accident. And it is these points which attackers (and their malware) target specifically through:

  • Repackaged or cloned apps
  • Malicious app wrappers with control over the app’s execution environment
  • Malicious libraries
  • Malware exploiting exposed app components and APIs, such as data providers
  • Malware with screen capture, keylogging, overlay attack, and UI hijacking capabilities
  • Network communications interception (Man-in-the-Middle Attacks and Network Sniffing)
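
For the last of these, certificate pinning is a common mitigation. A minimal sketch using OkHttp’s CertificatePinner (the host and pin are placeholders; the pin must be the base64-encoded SHA-256 hash of your server certificate’s public key):

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Minimal sketch of certificate pinning with OkHttp. Assumptions: OkHttp is on the
// classpath, and "api.example.com" plus the pin value are placeholders for your own
// backend host and certificate hash.
val pinnedClient: OkHttpClient = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build()
    )
    .build()
```

Pinning causes interception with an attacker-controlled certificate to fail at the TLS layer, though an attacker with instrumentation access can still try to disable the pinning code itself, which is exactly why the runtime protections discussed above matter.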