Thursday
Dec 27, 2007

Yet Another Flawed Authentication Scheme

It seems like every day I hear about a new web-based authentication technique intended to enhance user security and/or thwart phishing scams. This is especially common in the banking world, where most applications are starting to use strong two-factor authentication. Unfortunately for most of the larger consumer web applications, implementing strong multi-factor authentication (i.e. Smart-cards or SecureID) is just not cost effective or practical when you have several million users. As a result, these applications must resort to other creative ways to strengthen their authentication.

One increasingly popular practice is the use of security images (known as "watermarks") to thwart phishing scams. For those not familiar with this concept (generically known as site-to-user authentication), it's supposed to work like this:

During registration, the user selects (or is assigned) a specific image. The image is one of potentially hundreds of possible images and is intended to help the user distinguish the real website from an impostor. The actual act of authenticating to the website is split into the following three steps:

  • Step 1: The user submits their username (only) to the website
  • Step 2: The website shows the user their personal "watermark" image, allowing them to verify that they are at the correct site.
  • Step 3: If the watermark image is correct, the user should enter his/her password to complete the login process. If the watermark image is not correct (or not shown), the user should not proceed as they are likely not at the correct website.
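The three-step flow above can be sketched in a few lines of server-side pseudocode. This is purely illustrative; the user store, function names, and the decoy-image behavior are my own assumptions, not any vendor's actual implementation (and a real system would hash passwords rather than store them in plain text):

```python
# Illustrative sketch of the two-step "watermark" login flow.
# USERS, show_watermark, and verify_password are hypothetical names.
USERS = {
    "alice": {"watermark": "red_barn.png", "password": "s3cret"},
}

def show_watermark(username):
    """Steps 1-2: given only a username, return the user's security image
    so the user can verify they are on the legitimate site."""
    user = USERS.get(username)
    # Assumed behavior: return a decoy image for unknown usernames so an
    # attacker can't use this endpoint to probe for valid accounts.
    return user["watermark"] if user else "decoy.png"

def verify_password(username, password):
    """Step 3: only after the user recognizes their watermark should the
    password be submitted and checked."""
    user = USERS.get(username)
    return bool(user) and user["password"] == password
```

The key property is the ordering: `show_watermark` takes no password at all, so the user has a chance to bail out before disclosing a secret to a phishing site.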

The general concept is pretty simple, and was pioneered by PassMark (acquired by RSA/EMC) several years ago. The concept (and PassMark) has been the subject of much scrutiny by both the FFIEC and security researchers in recent years, who have even published papers outlining various ways in which this scheme can be abused and subverted. What I find most interesting is that, in addition to all of the potential technical flaws that have been identified with PassMark (and similar concepts), it seems to suffer from an even more critical and fundamental flaw: most users just don't understand it.

A study published earlier this year found that 97% of people who use an image-oriented site-to-user authentication scheme (as described above) still provided their password to an impostor website even though the correct security image was not shown. Even worse, it seems that some of the companies who implement this authentication scheme don't completely understand it. Consider the following real-life example:

Like many folks this holiday season, I found myself at a department store checkout counter faced with the question that every retail clerk is programmed to ask ("Would you like to save an additional 15% today by opening up a new credit card?"). Normally I decline this offer while the clerk is in mid-sentence; however, on this day I proceeded to open an account.

A few days later, I went online to pay my bill and quickly noticed the site touting its *high security* (this seems to be the marketing norm these days). During the registration process, the site forced me to pick a "Security Image" that is used to protect me from phishing scams (à la PassMark). Knowing how this process is supposed to work, you can imagine my surprise when my subsequent login to the website looked like this:

Screen 1: Login Screen (requesting user-name and password)

Login Page

Screen 2: After authentication (displays my security image)

After Login

What's wrong with these pictures? Unfortunately, the site doesn't show me my security image until after I have completely authenticated to the website (instead of before I provide my password)! Clearly there seems to be a lack of understanding and/or education somewhere on the other side.

A quick survey of some non-technical friends and relatives during the holidays also served to further confirm my suspicions. While all of them use at least one banking/bill-pay website that incorporates a security image ("Oh yeah, I have a special picture that they show me every time I log in"), not one of them could explain what the image was for, or even whether it gets shown to them before or after they provide their password.

The takeaway here is that (not surprisingly) end-user awareness still, and likely always will be, a fundamental component to the success of any good security measure. There is little point in implementing a new security mechanism (especially one that depends on the user understanding it) unless the appropriate steps have been taken to ensure that everyone has been properly educated.

Thursday
Dec 20, 2007

Application Security Training - New Delhi, India

I recently returned from two weeks in New Delhi, India, where I was teaching classes on secure Java and .NET application development (complete with SCA Associate level certification).

I had a great time, and the development teams were extremely receptive. I was impressed with the level of focus and dedication that such a large US-based company is putting into training its development teams in India on application security. Very refreshing to see firsthand.

During the weekend I also managed to make it to Agra to see the sublime Taj Mahal and Agra Fort. This was a real experience, and one that is tough for pictures to do any kind of justice.

Taj Mahal 1


Taj Mahal 2

Wednesday
Dec 12, 2007

Practical Notes on Threat Modeling

While the practice of threat modeling has been around for some time now, I still find it interesting that many organizations do not build threat models as a consistent part of their systems development life cycle. Microsoft has long made it well known that they religiously perform threat modeling for all of their applications, but they are one of only a few organizations that perform this type of analysis regularly (this also comes as no surprise, since they wrote one of the more well-known books on threat modeling).

In our line of business, we talk to a lot of companies (and people) about threat modeling. Here are some common tips that I typically recommend to clients that want to do threat modeling. I will assume that you are already familiar with the basic idea of threat modeling, so if you are not I would suggest that you read Larry Osterman's great series of blog posts on threat modeling.

Both developers and security teams should be involved in the threat modeling process.

In order to be most effective, you need to involve both developers and security experts in the threat modeling process. This is not to say that developers cannot build threat models by themselves or that a security professional cannot do the same; however, the most effective and complete threat models are typically created as a result of a joint effort from both teams. Specifically, there are certain steps within the threat modeling process that are clearly better performed by one of the two parties.

The developer (or architect) of the application, for example, is in a better position to decompose the application since he or she was the one who built the system. So in general, the application development team should drive the application decomposition process. By that same notion, the process of generating application threat scenarios is much more effective when performed by the security professional. Just as the developer knows more about the individual application components, the security expert knows a lot more about security threats than the typical developer. This division of labor also tends to be more effective for another reason: the developer has likely already mitigated the threat scenarios he or she thought of, so a threat model limited to those scenarios may not produce much additional value.

Threat modeling is not just a pre-development exercise.

In a perfect world, we would always build the threat model first. But the reality is that there are a huge number of existing applications that, while not initially threat modeled, are still critical and are not going away any time soon. Most companies do not grasp that the threat modeling process can be an extremely effective way to assess the design of an existing application. One benefit of doing a threat model for an existing application is that we can decompose it with complete accuracy, since we know the design will not change due to factors that arise during development. Additionally, post-development threat models can serve to both identify security control deficiencies without the need for testing and highlight specific application controls that must be validated during future penetration tests.

For an existing application, I typically use the list of generated threats to build a list of questions to ask developers regarding which countermeasures are (or are supposed to be) in place to prevent each threat from being realized. The threats that have not been mitigated are a deficiency that must be addressed, while those that have been mitigated (based on the response from the developer) result in specific items that can be tested and validated as a follow up to the exercise (after all, we can't just take all of the developer's assertions at face value...trust, but verify).
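The bookkeeping described above is simple enough to sketch in code. This is just a hypothetical illustration of the workflow (the threat entries and field names are made up, not taken from any real assessment):

```python
# Hypothetical sketch: turn a generated threat list into developer questions,
# then split the responses into deficiencies vs. items to verify later.
threats = [
    {"id": "T1", "desc": "SQL injection via the search field", "mitigated": True},
    {"id": "T2", "desc": "Predictable session tokens", "mitigated": False},
]

# One question per threat: which countermeasure is (supposed to be) in place?
questions = [f"What countermeasure prevents: {t['desc']}?" for t in threats]

# Unmitigated threats are deficiencies that must be addressed; mitigated ones
# become specific test cases for the next penetration test (trust, but verify).
deficiencies = [t for t in threats if not t["mitigated"]]
to_validate = [t for t in threats if t["mitigated"]]
```

The useful output is really the `to_validate` list: it scopes the follow-up penetration test to the exact controls the developers claim exist.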

Threat models don't need to be exhaustive in order to be effective.

Now I say this carefully, as detailed threat models tend to be the best and most valuable ones. But that is not to say that a high level threat model cannot be effective (especially if this is the first pass at developing one). Threat models, like all development artifacts, are meant to be living documents. The threat modeling process itself should be iterative, and the level of effort required to build a threat model is highly related to depth with which the application gets decomposed.

A good approach for beginning threat models is to decompose the application into the bare minimum number of elements possible, and to use this as a starting point for generating threats. For example, if one of the categories we are using for application decomposition is "users" and our application has 4 different types of users (anonymous, employee, manager, administrator), then it might be easier to first develop a threat model where we only distinguish authenticated versus un-authenticated. Once you have the initial threat model developed, you can then build a more detailed one that considers threats against every level of user. Of course we would expect to have more threats defined (and thus a more useful threat model) when we distinguish each role of user, however this is a practical way to start (after all, Rome wasn't built in a day).
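To make the coarse-versus-fine trade-off concrete, here is a small sketch of the user-decomposition example above. The threat-generation rule (one privilege-escalation scenario per ordered pair of distinct user classes) is my own simplification for illustration, not a real methodology:

```python
# Illustrative sketch: a first-pass model distinguishes only authenticated
# vs. unauthenticated users; a refined model distinguishes all four roles.
roles = ["anonymous", "employee", "manager", "administrator"]

def coarse_decomposition(roles):
    """Collapse the four roles into two classes for a first-pass model."""
    return {"unauthenticated" if r == "anonymous" else "authenticated"
            for r in roles}

def enumerate_threats(user_classes):
    # Simplified rule: one privilege-escalation threat per ordered pair of
    # distinct user classes. Finer decomposition yields more threats.
    return [f"{src} gains privileges of {dst}"
            for src in user_classes for dst in user_classes if src != dst]

first_pass = enumerate_threats(coarse_decomposition(roles))  # 2 classes
second_pass = enumerate_threats(set(roles))                  # 4 roles
```

The coarse model yields 2 escalation scenarios while the refined model yields 12, which is exactly the point: start with the cheap model, then iterate toward the detailed one.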

Threat modeling does not eliminate all security vulnerabilities.

This is more of an expectation that needs to be set up front. I would love to say that by effectively threat modeling applications you will eliminate all of your security vulnerabilities, but that is simply unrealistic. Certainly threat modeling should reduce the number of significant bugs within a given application, and typically the bugs that it will prevent are the "expensive" bugs -- the ones that require significant re-work or re-design to fix. However, the reality is that no matter how detailed and well developed your threat model is, it will not (and cannot) prevent coding errors or flaws due to the developer's lack of secure coding practices.

The goal of threat modeling is not to prevent the vulnerabilities that result from errors or poor coding (basic developer security training and security testing should be used for this), but rather to prevent bugs that stem from an inherently insecure design. If the application is completely missing a certain security feature or control, this is likely a design problem that should have been picked up by threat modeling. If the application security feature was mis-coded, however, this is typically a developer education or quality assurance problem. Like everything else, threat modeling is not a panacea.

Tuesday
Nov 27, 2007

Beta version of the new Burp Suite released

A quick note this morning - Portswigger has just released a beta version of the new Burp Suite. New features listed include:

  • Burp Sequencer - for analyzing the randomness of session tokens
  • Burp Decoder - for decoding/encoding of data
  • Burp Comparer - for visually comparing two data items

Also included are a number of fixes and improvements. I'm downloading it to try it out now.

Saturday
Nov 17, 2007

Early Look at Tracer 2.0 Beta 

So, continuing in the vein of testing Fortify's various Beta products (see Andrew's SCA 5.0 sneak peek), I recently got to work with the Fortify Tracer 2.0 beta, which I've gotta say was very interesting. If you're not familiar with Tracer, here's a little background - out of the box, Tracer inserts monitors into your deployable code, tracks all known inputs, and flags instances where those inputs reach certain APIs either un-validated or used in an insecure manner. This sets the table for more thorough coverage when you're performing Black Box testing. It is well known that many "set and forget" automated scanners almost never hit every available page, which leads to gaps in testing and lots of frustration. Tracer fills those voids and is especially handy for performing Black Box app testing when in a time crunch.

Older releases of Tracer were ideally geared towards use by security professionals. However, with Tracer 2.0, the user only needs to crawl the entire application, whether manually or with an automated crawler; no penetration testing procedures are needed. This is a bonus for those non-security professionals who want to easily add this tool to their pre-deployment checklist.

Another nice addition included in Tracer 2.0 is the increased ease of setting up the tool. Previously with Tracer 1.1, you would run the compile-time instrumentation against your J2EE executable (deployable WAR or EAR file) using the supplied command line tool. You would then have to manually deploy the WAR file that you just instrumented into your environment.

Example of 1.1 command line syntax:
C:\>tracer.bat -a Tomcat5 --appserver-home "C:\Program Files\Fortify Software\Fortify Tracer 1.1\Core\tomcat" --coverage "com.order.splc.*" splc.war

Tracer 2.0 Beta has made the instrumentation process MUCH easier. They've introduced a new method called "load-time weaving," where the monitors are introduced when the class files are loaded rather than before the application is deployed. This makes for a very fast instrumentation process and you don't have to redeploy your application anymore.
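Tracer's weaving happens at the Java class-loading level, but the general idea of load-time instrumentation is easy to illustrate. As a rough analogue (in Python, purely for illustration; none of this resembles Tracer's actual internals), a monitor can be woven around a function when its module is loaded, rather than by rewriting the deployed artifact:

```python
# Rough analogue of load-time weaving: wrap a function with a monitor at
# module load time instead of rewriting the deployable. Illustrative only.
import functools

CALL_LOG = []  # stands in for the data a tracing monitor would collect

def monitor(func):
    """Record every call (and its arguments) made to the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        CALL_LOG.append((func.__name__, args))
        return func(*args, **kwargs)
    return wrapper

@monitor  # applied when the module is loaded -- nothing to redeploy
def run_query(sql):
    return f"executed: {sql}"

run_query("SELECT * FROM orders")
```

The payoff mirrors what's described above: because the monitor attaches as the code is loaded, the original artifact on disk is untouched and there is no separate instrument-then-redeploy step.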

To instrument using load-time weaving, Tracer 2.0 comes with an app called Tracerltw (Figure 1). This web application allows you to just give it the web root directory of the application you want it to monitor. Users will find this point and click instrumentation much easier and more intuitive than the previous command line options.

Tracerltw

Figure 1: Tracerltw application

Another nice feature is support for .NET applications. Previous releases were strictly limited to J2EE applications. In general, Fortify seems to be broadening the language coverage for all of their products; just look at SCA 5.0 and its support for four more languages.

This is a very green beta, so as new functionality comes out in future beta releases, I'll keep you updated.