Code obfuscation technologies play an interesting role in the recent debate over mandatory encryption backdoors. In fact, the debate suggests yet another application for code obfuscation.
Let us suppose for a moment that, despite the advice of the technical community, some form of mandatory backdoor requirement is placed on all platform providers. Apple would be required to retain the ability to decrypt iOS devices, as would Google for Android, Microsoft for Windows, and so on.
What would happen? In effect, this would force encryption to move up the software stack. Consider an application developer who wants to protect his law-abiding customers from identity theft, commercial espionage, and other common digital scams. Since the developer cannot rely on encryption services provided by the underlying platform, the only option is to embed encryption software directly into the application. Now, Apple’s backdoor would be of no use in recovering this application’s data.
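How might this look in practice? As a minimal sketch, assuming Python and the widely used cryptography package (the Fernet recipe, chosen here purely for illustration), application-layer encryption could be as simple as:

```python
# Minimal sketch of application-layer encryption: the application manages
# its own key instead of relying on platform-provided encryption services.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# The key is generated and held by the application itself, so a backdoor
# into the platform's own encryption layer does not reach it.
app_key = Fernet.generate_key()
cipher = Fernet(app_key)

record = b"customer: alice, status: confidential"
token = cipher.encrypt(record)          # safe to store on disk or transmit
assert cipher.decrypt(token) == record  # only a holder of app_key can do this
```

Fernet here stands in for any authenticated encryption scheme; the essential point is that the key never leaves the application.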
Well, if only it were that simple. Since Apple controls the platform, its operating system could, in principle, interfere with the application’s encryption implementation. For example, the platform could take snapshots of application memory as it is running, modify the application’s code or data, and even eavesdrop and tamper with the application’s interaction with the user. What if the platform could find the location in memory where the application stores the decryption key and simply make a copy of that key? Doing so would defeat the application’s attempt to encrypt user data.
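To see why memory access is so devastating, consider a toy version of a key-scanning attack. The setup below is hypothetical: the platform holds a raw dump of the application’s memory and happens to know one plaintext/ciphertext pair, and it simply tries every 16-byte window of the dump as an AES-128 key. Real memory-scanning tools can do even better, recognizing AES key schedules in a dump without any known plaintext.

```python
# Toy key-scanning attack (hypothetical setup): given a raw memory dump and
# one known plaintext/ciphertext pair, try every 16-byte window as the key.
import os
from typing import Optional
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def scan_for_key(dump: bytes, known_pt: bytes, known_ct: bytes) -> Optional[bytes]:
    for i in range(len(dump) - 15):
        candidate = dump[i : i + 16]
        enc = Cipher(algorithms.AES(candidate), modes.ECB()).encryptor()
        if enc.update(known_pt) + enc.finalize() == known_ct:
            return candidate  # the key was sitting in plain view
    return None

# Simulate a snapshot of application memory with an AES key buried inside.
key = os.urandom(16)
dump = os.urandom(4096) + key + os.urandom(4096)
known_pt = b"0123456789abcdef"  # one 16-byte block the platform observed
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
known_ct = enc.update(known_pt) + enc.finalize()

assert scan_for_key(dump, known_pt, known_ct) == key
```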
What is the application developer to do? He or she seems to be facing an impossible situation: the application code is running on an untrusted platform that can peek into the application’s private memory as it is running. How can we possibly run encryption and decryption algorithms in such an extreme setting?
Enter code obfuscation. Suppose the application developer uses obfuscation (say, virtual black-box (VBB) obfuscation for the sake of the argument; general-purpose VBB obfuscation is known to be impossible, but let us ignore that for now). Obfuscation ensures that the platform can learn nothing about the inner workings of the application beyond its input-output behavior. It would be impossible for the platform to extract the application’s decryption key or any other information about its inner workings. In effect, the platform cannot access user data after the application encrypts it on the device.
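Roughly, and with details elided, the VBB requirement of Barak et al. says that anything an efficient adversary $A$ can compute from the obfuscated code $\mathcal{O}(C)$, an efficient simulator $S$ can compute from input-output (oracle) access to $C$ alone:

$$ \Bigl|\, \Pr\bigl[A(\mathcal{O}(C)) = 1\bigr] \;-\; \Pr\bigl[S^{C}(1^{|C|}) = 1\bigr] \,\Bigr| \;\le\; \epsilon $$

for some negligible $\epsilon$. In particular, if the platform could extract the decryption key from the obfuscated application, the simulator could compute that key from input-output behavior alone, which any good encryption scheme rules out.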
This form of obfuscation has a name: it is called white-box cryptography. This area examines the question of securely running cryptographic algorithms on untrusted devices. Not surprisingly, obfuscation is the main tool used to provide some level of security.
Why force Apple and others to implement backdoors if any application developer can render those backdoors useless?
While this is a compelling application for obfuscation, there is a significant hole in this design. The trouble is that the underlying platform can emulate user input to the obfuscated application and observe the resulting output. For example, the platform can emulate user input asking the application to decrypt certain data and then record the values displayed on the screen. Or it can record the user as he or she is typing a password into the application and use that password later. Obfuscation cannot protect against this type of behavior; the sketch below makes the point concrete.

Is there a solution to this UI-based weakness? We need some way to ensure a trusted path between the application and the user, despite the untrusted underlying platform. A good area for further research.
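As a closing illustration, here is a toy model of the platform-as-user attack, reusing the hypothetical Fernet-based application from the earlier sketch. The key is hidden inside a closure, which models the best an obfuscator could hope for: the internals are invisible, yet the input-output interface gives everything away.

```python
from typing import Callable
from cryptography.fernet import Fernet

def build_obfuscated_app() -> Callable[[str, bytes], bytes]:
    """Stand-in for an obfuscated application: the key is hidden in a
    closure, so only the app's input-output behavior is visible."""
    key = Fernet.generate_key()          # never visible to the platform
    cipher = Fernet(key)
    def app(command: str, data: bytes) -> bytes:
        if command == "encrypt":
            return cipher.encrypt(data)
        if command == "decrypt":
            return cipher.decrypt(data)  # plaintext goes "to the screen"
        raise ValueError("unknown command")
    return app

app = build_obfuscated_app()
ciphertext = app("encrypt", b"medical records")

# The malicious platform never learns the key. It simply emulates the
# user's tap on "decrypt" and records what would appear on the screen.
assert app("decrypt", ciphertext) == b"medical records"
```

No amount of obfuscation changes this picture, since the attack uses only the interface the application must expose to its legitimate user.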