Entries in iOS (5)


Building Secure Container Slides From Appsec USA

Thanks to everyone who came to our presentation on Building Secure Containers for iOS at the OWASP AppSec USA conference in New York last week. The talk focused on uncovering the techniques used by commercial BYOD secure container solutions in order to implement your own. We also covered some real world security issues found in secure containers and various security concerns to be aware of when creating or selecting a secure container solution.

We have posted the slides, demo videos and sample code for the presentation to our GitHub repository.


Retrieving Crypto Keys via iOS Runtime Hooking 

I am going to walk you through a testing technique that can be used at runtime to uncover security flaws in an iOS application when source code is not available, and without having to dive too deeply into assembly. I am going to use a recent example of an iOS application I reviewed, which performed its own encryption when storing data onto the device. These types of applications are a lot of fun to look at due to the variety of insecure ways people implement their own crypto. In this example the application required authentication, and then pulled down some data and stored it encrypted on the device for caching. The data was presented to the user where they could “act” upon it. Sounds pretty generic, but hopefully the scenario is familiar enough to those who assess mobile apps.

Upon analyzing the application traffic, it was obvious that no crypto keys were being returned from the server. After sweeping the iOS Keychain and the entire application container, I could make the educated assumption that the key was either a hardcoded value or derived from device-specific information.

Using the Hopper Disassembler (available on the Mac App Store), I was able to see that the application was leveraging the Common Crypto library for its encryption. I checked the cross-references for calls to the CCCryptorCreate function in order to find the code areas which perform encryption. The following screenshot shows getSymmetricKeyBytes being called right before the CCCryptorCreate function. I felt pretty confident that the purpose of the getSymmetricKeyBytes method was to return the symmetric key used for encryption.

I decided to create a Mobile Substrate tweak in order to hook into getSymmetricKeyBytes and read the return value. I used the class-dump-z tool to get a listing of all the exposed Objective-C interfaces. From here it is easy to get more detailed information about the method, such as the class name, return type and any required parameters. The following is a short snippet retrieved from the class-dump-z results.

@interface SecKeyWrapper : XXUnknownSuperclass {
    NSData* publicTag;
    NSData* privateTag;
    NSData* symmetricTag;
    unsigned typeOfSymmetricOpts;
    SecKey* publicKeyRef;
    SecKey* privateKeyRef;
    NSData* symmetricKeyRef;
}

-(id)doCipher:(id)cipher key:(id)key context:(unsigned)context padding:(unsigned*)padding;


We can quickly create a tweak by using the Theos framework. The tweak in this case looked as follows: 

%hook SecKeyWrapper

- (id)getSymmetricKeyBytes {
    NSLog(@"HOOKED getSymmetricKey");
    id theKey = %orig;
    NSLog(@"KEY: %@", theKey);
    return theKey;
}

%end

%ctor {
    NSLog(@"SecKeyWrapper is created.");
}

It doesn't do much more than read the return value of the original method call and write it out to the console. It was possible to confirm that a static key was being used by running the tweak on another iPad and observing that the same symmetric key was returned. The next step was to decrypt the files. We could hook into the doCipher:key:context:padding: method and just print out the first parameter to get the plaintext data. That would work, but it would not be reproducible, since the tweak code only executes when the application actually calls doCipher:key:context:padding:. A quick Google search on the SecKeyWrapper class turned up the following sample code from Apple.

By leveraging the wrapper it was possible to create an offline script to decrypt the application contents.

While looking at the sample code I noticed two things. First, the app developer had changed Apple's implementation of the getSymmetricKeyBytes method to return a static key. Second, Apple's sample code for the doCipher:key:context:padding: method itself demonstrates a bad practice: as the following snippet shows, it uses a static IV of 16 bytes of 0x0's.

// Initialization vector; dummy in this case 0's.
uint8_t iv[kChosenCipherBlockSize];
memset((void *) iv, 0x0, (size_t) sizeof(iv));
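To see why the fixed IV matters, here is a toy Python illustration. It is not real encryption (a hash stands in for the block cipher, and all names are invented for this sketch); it only shows that with the same key, a zero IV, and CBC-style chaining, identical leading plaintext always yields identical leading ciphertext, which lets an observer detect when cached files share content.

```python
import hashlib

BLOCK = 16  # block size, matching kChosenCipherBlockSize above


def toy_cbc_encrypt(key, iv, plaintext):
    """CBC-style chaining with SHA-256 standing in for the block
    cipher. NOT real encryption; it only demonstrates determinism."""
    assert len(plaintext) % BLOCK == 0
    out = b""
    prev = iv
    for i in range(0, len(plaintext), BLOCK):
        # XOR the plaintext block with the previous ciphertext block
        # (or the IV for the first block), then "encrypt" it.
        mixed = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = hashlib.sha256(key + mixed).digest()[:BLOCK]
        out += prev
    return out


key = b"hardcoded-key"
zero_iv = bytes(BLOCK)  # the static all-zero IV from the sample code

c1 = toy_cbc_encrypt(key, zero_iv, b"attack at dawn!!")
c2 = toy_cbc_encrypt(key, zero_iv, b"attack at dawn!!" + b"B" * BLOCK)

# With a fixed IV, identical leading plaintext produces identical
# leading ciphertext, so equality of cached data leaks to an observer.
assert c1[:BLOCK] == c2[:BLOCK]
```

A real fix is to generate a fresh random IV for every encryption (for example with SecRandomCopyBytes on iOS) and store it alongside the ciphertext.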

An alternative method to achieve the same result is cycript, which provides a JavaScript interpreter that can run arbitrary Objective-C code and hook into iOS applications at runtime, without having to go through the whole Mobile Substrate tweak creation process. The following example shows how cycript could be used to retrieve the symmetric crypto key.

rgutie01s-iPad:~ root# cycript -p 290
cy# var sharedwrapper = [SecKeyWrapper sharedWrapper];
@"<SecKeyWrapper: 0x183080>"
cy# [sharedwrapper getSymmetricKeyBytes]
@"<Symmetric Key Value Omitted>"

To recap:

  1. Runtime analysis can be leveraged to easily break custom encryption when source code is not available, without having to dive into assembly.
  2. Developers need to be wary of using sample code downloaded from the web, especially crypto code, as it is really hard to get right (as shown by the sample from Apple).

Plaintext Caching with iOS Document Interaction APIs

The iOS Document Interaction APIs provide applications with the ability to have another application installed on the device handle a file. The most common scenario of this behavior is the Mail application. The Mail application receives emails which may contain document files like PDFs as attachments. Although the Mail application does have PDF preview functionality, there may be a separate PDF viewing or editing application installed on the device. The Mail application therefore opts to leverage the Document Interaction API in order to provide users with the ability to open the attached PDF file using any application on the device registered to handle PDF file types.

iOS applications can register to handle file types by setting the supported UTI types within the "CFBundleDocumentTypes" section of their Info.plist file. The application also needs to implement the "application:didFinishLaunchingWithOptions:" application delegate method to handle incoming files. When an application leverages the Document Interaction APIs to view or open a file with another application, iOS already knows which applications have registered to handle the data type, and the UI displays a listing of the installed applications that can handle the file type.
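For reference, the registration boils down to an Info.plist fragment along these lines (the key names are the standard ones; the type name and rank shown here are illustrative, assuming an app that handles PDFs):

```xml
<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeName</key>
        <string>PDF Document</string>
        <key>LSHandlerRank</key>
        <string>Alternate</string>
        <key>LSItemContentTypes</key>
        <array>
            <string>com.adobe.pdf</string>
        </array>
    </dict>
</array>
```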

At this point, people familiar with iOS may be wondering how this "Open In" functionality works given the security restrictions of the iOS application sandbox. The sandbox prevents one application from accessing any data stored within another application's container. When a user chooses an application listed in the "Open In" prompt, a copy of the file is made in the other application's container.

This is where it starts to get interesting. The copy of the file is written to the receiving application’s Documents/Inbox folder. This file is not stored using data protection and will persist on the device even after the application is closed or the device is rebooted.

This becomes an issue when dealing with sensitive files in "secure container" applications. Secure containers are applications that implement their own form of data protection in order to supplement the data protection feature provided by iOS. In many cases these custom secure containers are created because there is no way to enforce device passcodes on unmanaged devices. One scenario we encountered was a secure container application that wanted to incorporate "Open In" functionality for sending files to the Good For Enterprise (GFE) application. The GFE application would then provide users with the ability to email the received documents using their corporate email. Since GFE is also a secure container application, the organization assumed the file would remain encrypted on the device. Due to the plaintext caching discussed above, it becomes the receiving application's responsibility to properly clean up any files it has received. Unfortunately, GFE was not performing the necessary cleanup and the file remained stored on the device in plaintext.

The main takeaway from this blog post is to be very cautious when performing any form of inter-process communication with sensitive documents in your iOS application. iOS contains many subtle caching issues which can cause data expected to be stored encrypted to be unintentionally cached elsewhere in plaintext.


Debunking NSLog Misconceptions

It is a fairly common occurrence during mobile application security assessments to encounter iOS applications that log sensitive data. Some examples of sensitive data we have seen logged include authentication tokens, session cookies, passwords, etc. We have also noticed that developers sometimes do not fully understand the implications of logging this data using the NSLog function. Let's walk through some of these misconceptions.

1. NSLog data is only displayed in the device console and not stored on the device.

When writing an iOS application, developers commonly use NSLog for debugging purposes, and the data is displayed within the device console provided by Xcode. Behind the scenes, the data passed to the NSLog function is logged using the Apple System Log (ASL), Apple's replacement for syslogd. On iOS devices, the data logged using ASL appears to be cached until the device is rebooted.

2. NSLog data cannot be read by other applications.

The Apple System Log C library (asl.h), which is available for Mac OS X, is also available on iOS. This library can be used to print out the contents of the ASL and even perform queries to retrieve specific log data, such as querying for data logged by a specific application. One might ask, what about the iOS sandbox? Shouldn't the sandbox prevent applications from accessing data logged by another application? Unfortunately, the iOS sandbox does not protect the ASL, and therefore any application is able to view the data logged by another application. Apple's documentation details the ASL API for writing, reading and querying data from the ASL.

3. iOS prevents applications from utilizing these low level C APIs during their submission review process.

Unfortunately, it does not look like Apple has a strict restriction on iOS applications utilizing the ASL C library in order to retrieve data from the ASL. There are applications currently in the App Store that are able to read and perform queries on ASL data. One example of such an application is the “AppSwitch” application. 

So let us recap:

  • iOS application data logged using NSLog utilizes the Apple System Log (ASL) which caches the data logged until the device is rebooted
  • The ASL data can be read and queried through a C API available for iOS applications. This API is not restricted by Apple’s application review process.
  • The ASL data is not sandboxed and therefore any iOS application can read data logged by arbitrary applications

Due to all these conditions, logging sensitive data using NSLog should be considered a fairly high-risk issue. If applications are logging sensitive authentication data, a malicious application would be able to actively query for this data and send it off to a remote server.

Developers should get in the habit of using a preprocessor macro for performing any logging used during the development process. The following blog post provides a nice walkthrough on how to utilize NSLog when building in DEBUG mode and how to remove all NSLog statements within production builds.


Accepting Un-Trusted Certificates using the iOS Simulator

There are scenarios where an iOS developer might want to accept an un-trusted SSL certificate, such as when testing an application using the iOS Simulator. By default, applications using the NSURLConnection API for remote connections get built-in certificate validation, so developers or testers may encounter issues when testing HTTPS traffic using the iOS Simulator. Example scenarios include applications communicating with remote services hosted in a non-production environment using self-signed certificates, or testers who need to debug SSL communication between the application and a service using a local proxy tool, such as Burp Proxy or Fiddler.

From a developer's perspective, what is the best way to accept these SSL certificates? A Google search turned up the following thread on Stack Overflow discussing ways to accept self-signed certificates when using NSURLConnection to connect to a website. In general, the responses all recommended code-level changes in order to disable the built-in certificate validation performed by iOS. Although some answers recommend disabling certificate validation only for certain hosts, there are also recommendations for disabling validation against all hosts. Given the temptation to copy and paste, this guidance is likely to result in insecure iOS applications being released to the Apple App Store, as those applications will be susceptible to man-in-the-middle attacks.

Is there a better way to temporarily trust un-trusted certificates within the Simulator? In my opinion, the more secure way is to add the Certificate Authority (CA) certificate which signed the website's certificate as a trusted CA on the Simulator. On a physical iOS device this can be done easily by emailing the CA certificate to the device and opening it; however, this is not possible with the Simulator. Behind the scenes, when a CA certificate is added as a trusted CA on the device, the certificate is inserted into the tsettings table of the TrustStore.sqlite3 database. This database is also used by the Simulator and can be found in the ~/Library/Application Support/iPhone Simulator/<SDK version>/Library/Keychains/ directory on your Mac workstation.

The tsettings table stores the contents of the CA certificate (fingerprint, subject, etc.), but the only field needed by iOS during validation is the sha1 column, which holds the certificate's SHA-1 fingerprint. The table can be modified manually using one of the many available SQLite clients. In order to simplify this process, I wrote a simple Python script which can be used to import CA certificates into each TrustStore database used by the Simulator. The following example walks through the steps for importing the PortSwigger CA certificate. Importing this certificate provides testers with the ability to intercept application HTTPS traffic using Burp Proxy. Although we can then view and intercept SSL HTTP traffic while testing applications, the insecurity of accepting un-trusted certificates is no longer built into the application logic.
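The import itself is small. The following Python sketch shows the core idea; the tsettings column layout here is an assumption based on the description above and varies across SDK versions, so treat it as an illustration rather than the actual add_ca_to_iossim script:

```python
import hashlib
import sqlite3


def add_ca_to_truststore(db_path, cert_der):
    """Insert a CA certificate's SHA-1 fingerprint into a Simulator
    TrustStore.sqlite3 database. The schema here is assumed and may
    differ between SDK versions."""
    fingerprint = hashlib.sha1(cert_der).digest()
    conn = sqlite3.connect(db_path)
    # The Simulator's TrustStore already contains this table; it is
    # created here only so the sketch runs standalone.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tsettings "
        "(sha1 BLOB, subj BLOB, tset BLOB, data BLOB, UNIQUE(sha1, subj))"
    )
    # iOS keys its trust decision off the sha1 column; the remaining
    # columns are populated minimally for illustration.
    conn.execute(
        "INSERT OR REPLACE INTO tsettings (sha1, subj, tset, data) "
        "VALUES (?, ?, ?, ?)",
        (fingerprint, b"", None, cert_der),
    )
    conn.commit()
    conn.close()
    return fingerprint
```

The real script simply applies this to every TrustStore.sqlite3 it finds under the ~/Library/Application Support/iPhone Simulator/<SDK version>/Library/Keychains/ directories.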

Step 1: Modify the System Preferences/Network Proxy settings on your Mac in order to have all HTTP/HTTPS traffic be sent to your Burp Proxy.

Step 2: Visit an HTTPS website using Firefox. You will be shown a “This Connection is Untrusted” error page. Choose the Add Exception option and then click the View button. Enter the Details tab and you will be shown information about the certificate chain. Select the PortSwigger CA within the “Certificate Hierarchy” listing. Export the Certificate to the directory of your choice.

Step 3: Run the add_ca_to_iossim script and pass in the exported certificate as an argument. 

Sample Usage: 

python add_ca_to_iossim.py PortSwiggerCA.cer

Successfully added CA to /User/GDS/Library/Application Support/iPhone Simulator/4.3/Library/Keychains/TrustStore.sqlite3

Successfully added CA to /User/GDS/Library/Application Support/iPhone Simulator/4.3.2/Library/Keychains/TrustStore.sqlite3

Run the Simulator while proxying through Burp Proxy and you should be able to intercept the HTTPS traffic sent by your application.

The add_ca_to_iossim Python script can be downloaded from the GDS GitHub page.