The CommonsBlog

Android's Lessons for New Mobile Operating Systems: Security

This blog post series, which began with yesterday’s discussion of audience, is exploring what new mobile OSes, like Firefox Mobile OS, can learn from Android’s successes and failures. Today, I’d like to look at security — here are some things to consider based upon what we have seen in Android over the past ~4 years.

Think Surface Area

The surface area of an operating system (or a runtime environment, or a library, or whatever) consists of all ways somebody can tell that OS (or whatever) to do something. This includes everything from formally-supported APIs to the results of SQL injection attacks (see: Little Bobby Tables).
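
To make the SQL injection point concrete, here is a minimal, hypothetical Java sketch (the class and table names are invented for illustration) showing how a string-concatenated query quietly makes user input part of your surface area, while a parameterized query would not:

```java
public class SurfaceAreaDemo {
    // Hypothetical lookup: building SQL by string concatenation means the
    // input itself becomes executable syntax, i.e., extra surface area.
    static String vulnerableQuery(String name) {
        return "SELECT * FROM students WHERE name = '" + name + "'";
    }

    public static void main(String[] args) {
        // Little Bobby Tables: the "name" smuggles in a second statement.
        String bobby = "Robert'; DROP TABLE students;--";
        System.out.println(vulnerableQuery(bobby));
        // A parameterized query (e.g., a JDBC PreparedStatement with a '?'
        // placeholder) would treat the same input as inert data instead.
    }
}
```

The fix is not clever escaping but removing that surface area entirely, by never letting input be interpreted as query syntax.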

Android’s surface area is simply huge. Not only does the framework API span thousands of classes and tens of thousands of methods, but there are countless more classes and methods “hidden”, yet accessible via Java reflection. Then there is the stuff accessible via the JNI/NDK combination, stuff exposed via the Linux substrate (e.g., /proc), stuff exposed via the adb debugging interface, stuff exposed via AOSP and other firmware-installed apps, and so on.
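
A small, self-contained sketch of the reflection loophole: the class below never exposes hiddenReset() in its public API, yet any caller can reach it anyway, which is the same mechanism used to invoke “hidden” Android framework methods (the class and method names here are hypothetical stand-ins, not real framework members):

```java
import java.lang.reflect.Method;

public class HiddenApiDemo {
    // Stand-in for a framework class with an unpublished method.
    static class DeviceService {
        private boolean reset;
        // Not part of any documented API... but still surface area.
        private void hiddenReset() { reset = true; }
        boolean wasReset() { return reset; }
    }

    public static void main(String[] args) throws Exception {
        DeviceService svc = new DeviceService();
        // Reflection bypasses the public API entirely.
        Method m = DeviceService.class.getDeclaredMethod("hiddenReset");
        m.setAccessible(true);
        m.invoke(svc);
        System.out.println(svc.wasReset()); // prints "true"
    }
}
```

Anything reachable this way has to be treated as part of the attack surface, documented or not.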

Trying to defend against such a large front is very difficult (see: Soviet Union v. Germany, WWII). And while you may try to get away with assuming that some portions do not need to be defended (see: the Ardennes forest, compared with the Maginot Line, WWII), somebody might find ways of exploiting that undefended region (see: Fall Gelb).

Security is easier with less surface area. Trying to strike a balance between “less surface area” and “enough surface area to write the apps that we want” may be difficult.

Usability vs. Security

Android’s permission-based system, on the whole, has its merits. While one can certainly debate how many Android users really pay attention to, or even understand, what the permissions mean, there are certainly worse possible solutions (see: Windows Vista, “the CPU would like to execute an instruction — allow? deny?”).

However, one of the oft-repeated complaints about the solution, from power users, is that permissions are an all-or-nothing deal. Either you agree to all of the permissions, or you do not install the app. There is no notion of an app declaring optional permissions that the user could grant or deny as desired. This has been proposed — along with the more draconian “all permissions are optional” variant — and Google’s response is that they are concerned about usability.

Frankly, this rather surprises me. While I could see various arguments against this capability, usability would not be one of them.

There have long been approaches for offering power-user configurability in ways that ordinary users would never encounter (see: Firefox about:config). Even if you are concerned that ordinary users might be put off by having to decide up front which permissions to grant or deny, simply keep the current behavior and grant them all, with a power-user option somewhere in Settings to tailor the permissions further.
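
As a sketch of that “grant everything by default, tailor later” design, here is a minimal Java model (all names are hypothetical, not an actual Android API): every declared permission starts granted, matching today’s behavior, while a hidden power-user screen could revoke individual ones.

```java
import java.util.HashMap;
import java.util.Map;

public class PermissionProfile {
    // Every permission the app declares starts out granted,
    // so ordinary users see no change from current behavior.
    private final Map<String, Boolean> grants = new HashMap<>();

    public PermissionProfile(String... declaredPermissions) {
        for (String p : declaredPermissions) grants.put(p, true);
    }

    // Power-user path: revoke (or restore) one permission at a time.
    public void setGranted(String permission, boolean granted) {
        if (grants.containsKey(permission)) grants.put(permission, granted);
    }

    // Undeclared permissions are always denied.
    public boolean isGranted(String permission) {
        return grants.getOrDefault(permission, false);
    }
}
```

An ordinary user would never touch setGranted(); only a buried Settings screen, in the spirit of about:config, would expose it.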

While other mobile OSes might not have the Android permission system, the issue of security vs. usability will likely rear its head elsewhere. Again, trying to strike a balance will be important, and where that balance lies will depend upon the primary audience of the OS. The solution for children might be different from the solution for security-conscious users.

The Good-Intentions-Paved Road

App lockers and replacement lock screens are popular on Android, as they allow devices to be more widely shared with fewer possible repercussions. However, Android has no formal support for app lockers or replacement lock screens. Instead, the authors of such apps exploit various loopholes (courtesy of Android’s enormous surface area) to prevent users from accessing certain things. The problem is that if the author of an app locker or lock screen can block users, so can a malware author. And, in a world where “malware authors” may include major governments (see: Stuxnet), malware takes on a whole new scope.

New mobile OSes can learn from Android in terms of what sorts of apps are popular, then plan ahead to specifically address them, one way or another. Perhaps the desired functionality would simply be part of the OS, eliminating the need to expose APIs that might be exploited. Perhaps the desired functionality can be handled in some carefully managed way, to allow the legitimate uses while (hopefully) disallowing nefarious uses. Perhaps the functionality is deemed inappropriate for the base OS, but the OS makes it very easy to create remixes (akin to Android ROM mods), so that other “distros” could offer such functionality if they so chose. Or perhaps there are other proactive solutions.

The key is to learn what sorts of things are popular (based on Android’s results), determine which of those things are based on dicey implementations, then decide how to address them in a new mobile OS.

Spy on the Spies

There are many ::bleeps:: in this world. Some of these ::bleeps:: like to create covert surveillance software, to help governments spy on their citizens, and the like (see: FinSpy). Others like to create software that offers automated means to abscond with a device’s data, ostensibly for forensic purposes (see: Cellebrite).

If the creators of a mobile OS do not like such ::bleeps::, it is perhaps worth spending some time determining how to monitor what they do, and how to detect (and, later, defend against) their methods. For example:

  • Offer a “bug bounty”, not for software bugs, but for “bugs” in the spy sense: reward those who point out firms advertising that they create such “official malware”, or set up some sort of Wikileaks-style means for anonymous contributions of the same

  • Find some firms or organizations that can work with you to help get samples of the malware (e.g., as part of a theoretical sales process) that you can reverse-engineer and try to block

At the same time, be careful that your means of enabling detection of such software do not themselves introduce new security holes (e.g., software to scan for spyware that opens a door for “forensic” data mining).

Tomorrow’s post will take a look at an OS’s app distribution, and how Android’s approaches for that have had their pluses and minuses.

Need an Android programming guide for your development team? An Enterprise Warescription to The Busy Coder’s Guide to Android Development is available for teams of 10+ members. Contact Mark Murphy for details.