Thursday
Dec 20, 2007

Application Security Training - New Delhi, India

I recently returned from two weeks in New Delhi, India, where I was teaching classes on secure Java and .NET application development (complete with SCA Associate level certification).

I had a great time, and the development teams were extremely receptive. I am impressed with the level of focus and dedication that such a large US-based company is showing in training its development teams in India on application security. Very refreshing to see firsthand.

During the weekend I also managed to make it to Agra to see the sublime Taj Mahal and Agra Fort. It was a real experience, and pictures can hardly do it justice.

Taj Mahal 1


Taj Mahal 2

Wednesday
Dec 12, 2007

Practical Notes on Threat Modeling

While the practice of threat modeling has been around for some time now, I still find it interesting that many organizations do not generate them as a consistent part of their systems development life cycle. Microsoft has long made it well known that they religiously perform threat modeling for all of their applications, but they are one of only a few organizations that perform this type of analysis regularly (this also comes as no surprise since they wrote one of the more well-known books on threat modeling).

In our line of business, we talk to a lot of companies (and people) about threat modeling. Here are some common tips that I typically recommend to clients that want to do threat modeling. I will assume that you are already familiar with the basic idea of threat modeling, so if you are not I would suggest that you read Larry Osterman's great series of blog posts on threat modeling.

Both developers and security teams should be involved in the threat modeling process.

In order to be most effective, you need to involve both developers and security experts in the threat modeling process. This is not to say that developers cannot build threat models by themselves or that a security professional cannot do the same; however, the most effective and complete threat models are typically created as a result of a joint effort from both teams. Specifically, there are certain steps within the threat modeling process that are clearly better performed by one of the two parties.

The developer (or architect) of the application, for example, is in a better position to decompose the application, since he or she was the one who built the system. So in general, the application development team should drive the application decomposition process. By the same token, the process of generating application threat scenarios is much more effective when performed by the security professional. Just as the developer knows more about the individual application components, the security expert knows far more about security threats than the typical developer does. This division of labor also matters because developers will usually have already mitigated the threat scenarios they thought of on their own; a developer-only threat model therefore tends to produce less new value.

Threat modeling is not just a pre-development exercise.

In a perfect world, we would always build the threat model first. But the reality is that there are a huge number of existing applications that, while never threat modeled initially, are still critical and are not going away any time soon. Most companies do not grasp that the threat modeling process can be an extremely effective way to assess the design of an existing application. One benefit of doing a threat model for an existing application is that we can decompose it with complete accuracy, since we know the design will not change due to factors that arise during development. Additionally, post-development threat models can serve to both identify security control deficiencies without the need for testing and highlight specific application controls that must be validated during future penetration tests.

For an existing application, I typically use the list of generated threats to build a list of questions to ask developers regarding which countermeasures are (or are supposed to be) in place to prevent each threat from being realized. The threats that have not been mitigated are a deficiency that must be addressed, while those that have been mitigated (based on the response from the developer) result in specific items that can be tested and validated as a follow up to the exercise (after all, we can't just take all of the developer's assertions at face value...trust, but verify).
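The workflow above can be sketched as a simple triage over the generated threat list. This is an illustrative sketch only; the threat names, developer responses, and the `triage_threats` helper are all hypothetical, not taken from a real assessment.

```python
# Hypothetical sketch of the threat-to-question workflow described above.
# Threats and responses are invented for illustration.

def triage_threats(threats, developer_responses):
    """Split generated threats into unmitigated deficiencies and
    claimed mitigations that must be validated by later testing."""
    deficiencies, to_validate = [], []
    for threat in threats:
        countermeasure = developer_responses.get(threat)
        if countermeasure:
            # "Trust, but verify": claimed mitigations become test cases.
            to_validate.append((threat, countermeasure))
        else:
            deficiencies.append(threat)
    return deficiencies, to_validate

threats = [
    "SQL injection in search form",
    "Session token predictable",
    "Admin pages reachable without authentication",
]
responses = {
    "SQL injection in search form": "Parameterized queries via ORM",
    "Session token predictable": "Tokens from SecureRandom, 128-bit",
}

deficiencies, to_validate = triage_threats(threats, responses)
```

The unmitigated item goes on the remediation list, while the two claimed countermeasures become specific targets for the follow-up penetration test.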

Threat models don't need to be exhaustive in order to be effective.

Now I say this carefully, as detailed threat models tend to be the best and most valuable ones. But that is not to say that a high-level threat model cannot be effective (especially if it is the first pass at developing one). Threat models, like all development artifacts, are meant to be living documents. The threat modeling process itself should be iterative, and the level of effort required to build a threat model is closely tied to the depth to which the application gets decomposed.

A good approach for beginning threat models is to decompose the application into the bare minimum number of elements possible, and to use this as a starting point for generating threats. For example, if one of the categories we are using for application decomposition is "users" and our application has 4 different types of users (anonymous, employee, manager, administrator), then it might be easier to first develop a threat model where we only distinguish authenticated versus unauthenticated users. Once you have the initial threat model developed, you can then build a more detailed one that considers threats against every user role. Of course we would expect to have more threats defined (and thus a more useful threat model) when we distinguish each user role; however, this is a practical way to start (after all, Rome wasn't built in a day).
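The iterative refinement above can be sketched in a few lines. The STRIDE-style threat template and the role names are just the ones from the example; a real model would generate many threat categories per element, not one.

```python
# Illustrative sketch of iterative decomposition: a coarse first pass
# distinguishes only authenticated vs. unauthenticated users, and a
# later pass refines to every role. The threat template is hypothetical
# and deliberately simplified to one spoofing scenario per user class.

def enumerate_spoofing_threats(user_classes):
    """Generate one spoofing-style threat scenario per user class."""
    return [f"Attacker impersonates a(n) {u} user" for u in user_classes]

coarse = ["unauthenticated", "authenticated"]
fine = ["anonymous", "employee", "manager", "administrator"]

first_pass = enumerate_spoofing_threats(coarse)   # quick initial model
second_pass = enumerate_spoofing_threats(fine)    # refined iteration
```

The refined pass produces more, and more specific, threats than the coarse pass, which is exactly the trade-off described above: start small, then iterate.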

Threat modeling does not eliminate all security vulnerabilities.

This is more of an expectation that needs to be set up front. I would love to say that by effectively threat modeling applications you will eliminate all of your security vulnerabilities, but that is simply unrealistic. Certainly threat modeling should reduce the number of significant bugs within a given application, and typically the bugs that it will prevent are the "expensive" bugs -- the ones that require significant re-work or re-design to fix. However, the reality is that no matter how detailed and well developed your threat model is, it will not (and cannot) prevent coding errors or flaws due to the developer's lack of secure coding practices.

The goal of threat modeling is not to prevent the vulnerabilities that result from errors or poor coding (basic developer security training and security testing should be used for this), but rather to prevent bugs that stem from an inherently insecure design. If the application is completely missing a certain security feature or control, this is likely a design problem that should have been picked up by threat modeling. If the application security feature was mis-coded, however, this is typically a developer education or quality assurance problem. Like everything else, threat modeling is not a panacea.

Tuesday
Nov 27, 2007

Beta version of the new Burp Suite released

A quick note this morning - Portswigger has just released a beta version of the new Burp Suite. New features listed include:

  • Burp Sequencer - for analyzing the randomness of session tokens
  • Burp Decoder - for decoding/encoding of data
  • Burp Comparer - for visually comparing two data items

Also included are a number of fixes and improvements. I'm downloading it to try it out now.

Saturday
Nov 17, 2007

Early Look at Tracer 2.0 Beta 

So, continuing in the vein of testing Fortify's various Beta products (see Andrew's SCA 5.0 sneak peek), I recently got to work with the Fortify Tracer 2.0 beta, which I've gotta say was very interesting. If you're not familiar with Tracer, here's a little background - out of the box, Tracer inserts monitors into your deployable code, tracks all known inputs, and flags instances where those inputs reach certain APIs unvalidated or are otherwise used in an insecure manner. This sets the table for more thorough coverage when you're performing black box testing. It is well known that many "set and forget" automated scanners almost never hit every available page, which leads to gaps in testing and lots of frustration. Tracer fills those voids and is especially handy for performing black box app testing when in a time crunch.

Older releases of Tracer were primarily geared toward use by security professionals. With Tracer 2.0, however, the user only needs to be able to crawl the entire application - whether manually or with an automated crawler, no penetration testing procedures are needed. This is a bonus for those non-security professionals who want to be able to easily add this tool to their pre-deployment checklist.

Another nice addition included in Tracer 2.0 is the increased ease of setting up the tool. Previously with Tracer 1.1, you would run the compile-time instrumentation against your J2EE executable (deployable WAR or EAR file) using the supplied command line tool. You would then have to manually deploy the WAR file that you just instrumented into your environment.

Example of 1.1 command line syntax:
C:\>tracer.bat -a Tomcat5 --appserver-home "C:\Program Files\Fortify Software\Fortify Tracer 1.1\Core\tomcat" --coverage "com.order.splc.*" splc.war

Tracer 2.0 Beta has made the instrumentation process MUCH easier. They've introduced a new method called "load-time weaving," where the monitors are introduced when the class files are loaded rather than before the application is deployed. This makes for a very fast instrumentation process and you don't have to redeploy your application anymore.
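To give a feel for the difference, here is a generic illustration of load-time instrumentation - emphatically not Fortify's implementation, just the same idea rendered in Python: instead of rewriting the deployable artifact ahead of time (as with the 1.1 command-line step above), monitors are attached to sensitive calls as the code is loaded into the running process.

```python
# Generic illustration of load-time instrumentation (not Fortify's
# actual mechanism): monitors wrap sensitive APIs when code is loaded
# into the runtime, rather than by rewriting the artifact on disk.
import functools

def monitor(fn):
    """Wrap a function so every call is recorded, loosely analogous
    to the monitors Tracer weaves in at class-load time."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        wrapped.calls.append(args)      # record the observed input
        return fn(*args, **kwargs)
    wrapped.calls = []
    return wrapped

# A stand-in "sensitive API" we want to observe.
def run_query(sql):
    return f"executed: {sql}"

# At "load time" we wrap the API in place; no redeployment needed.
run_query = monitor(run_query)
run_query("SELECT * FROM users")
```

Because the wrapping happens when the code is loaded, swapping instrumentation in or out is fast and the original artifact never changes - which is the practical win Tracer 2.0's load-time weaving delivers.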

To instrument using load-time weaving, Tracer 2.0 comes with an app called Tracerltw (Figure 1). This web application allows you to just give it the web root directory of the application you want it to monitor. Users will find this point and click instrumentation much easier and more intuitive than the previous command line options.

Tracerltw

Figure 1 - Tracerltw application

Another nice feature is support for .NET applications. Previous releases were strictly limited to J2EE applications. In general, Fortify seems to be broadening their language coverage for all their products. Just look at SCA 5.0 and its support for 4 more languages.

This is a very green beta, so as new functionality comes out in future beta releases, I'll keep you updated.

Wednesday
Nov 7, 2007

Ruby port of Extended Scanner released

An old colleague of ours has just released a Ruby port of Extended Scanner on his blog at securitytechscience.com. If you're not familiar with it, Extended Scanner is a simple proof-of-concept web application scanner (in Perl) written by GDS co-founder Brian Holyfield for the book Network Security Tools. The original Perl version can be found on our Tools download page here.

Quoting from his posting:

The only thing I have added is the MySQL code as my demo app has a MySQL backend. Before I chat about this, the code can now perform the following:

  1. Validate SQL injection (i.e., reduces false positives)
  2. Enumerate backend database type (currently detects MS SQL, Oracle and MySQL)
  3. Enumerate the number of columns at the injection point
  4. Enumerate the data type of each column identified
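Step 3 in the quoted list - enumerating the number of columns at the injection point - is commonly done by probing with increasing `ORDER BY` indexes until the database errors. The sketch below shows that idea only; `send_request` is a stand-in that simulates a vulnerable four-column query, not real HTTP code or the actual scanner's logic.

```python
# Hypothetical sketch of column enumeration at a SQL injection point.
# send_request simulates a vulnerable endpoint backed by a 4-column
# query: ORDER BY with an index above 4 triggers a database error.

def send_request(payload):
    n = int(payload.split()[-2])  # extract the probed ORDER BY index
    return "Unknown column" if n > 4 else "OK"

def count_columns(max_probe=32):
    """Return the highest ORDER BY index that does not error,
    i.e. the number of columns in the injected query."""
    for n in range(1, max_probe + 1):
        if "Unknown column" in send_request(f"' ORDER BY {n} --"):
            return n - 1
    return None

columns = count_columns()
```

Once the column count is known, the scanner can move on to step 4, probing each column position to infer its data type.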