

Wednesday, Dec 12, 2007

Practical Notes on Threat Modeling

While the practice of threat modeling has been around for some time now, I still find it interesting that many organizations do not produce threat models as a consistent part of their systems development life cycle. Microsoft has long made it well known that they religiously perform threat modeling for all of their applications, but they are one of only a few organizations that perform this type of analysis regularly (which comes as no surprise, since they wrote one of the better-known books on threat modeling).

In our line of business, we talk to a lot of companies (and people) about threat modeling. Here are some of the tips I most often give to clients who want to do threat modeling. I will assume that you are already familiar with the basic idea of threat modeling; if you are not, I would suggest reading Larry Osterman's great series of blog posts on the subject.

Both developers and security teams should be involved in the threat modeling process.

In order to be most effective, you need to involve both developers and security experts in the threat modeling process. This is not to say that developers cannot build threat models by themselves or that a security professional cannot do the same; however, the most effective and complete threat models are typically created as a result of a joint effort from both teams. Specifically, there are certain steps within the threat modeling process that are clearly better performed by one of the two parties.

The developer (or architect) of the application, for example, is in a better position to decompose the application, since he or she is the one who built the system. So in general, the application development team should drive the application decomposition process. By the same token, generating application threat scenarios is much more effective when performed by the security professional. Just as the developer knows more about the individual application components, the security expert knows far more about security threats than the typical developer. Another reason this division of labor works well is that a developer has likely already mitigated any threat scenario he or she has thought of; a threat model built only from those scenarios will not produce much new value.
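To make this division of labor concrete, here is a minimal sketch in Python (with purely hypothetical element names of my own invention): the development team enumerates the application's elements during decomposition, and the security team supplies a mapping from element types to candidate threat categories. I'm using the STRIDE categories from Microsoft's threat modeling practice as the example taxonomy; the advice above does not depend on that particular choice.

```python
# Hypothetical sketch: developers decompose the application, the
# security team maps each kind of element to STRIDE threat categories.
# Element names below are illustrative only.

# Produced by the development team during decomposition
elements = [
    {"name": "login form", "type": "process"},
    {"name": "user database", "type": "data_store"},
    {"name": "browser -> web server", "type": "data_flow"},
    {"name": "employee", "type": "external_entity"},
]

# Maintained by the security team: which STRIDE categories
# typically apply to each kind of element
stride_by_type = {
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data_flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
    "external_entity": ["Spoofing", "Repudiation"],
}

def candidate_threats(elements, mapping):
    """Cross each element with the threat categories for its type."""
    return [(e["name"], category)
            for e in elements
            for category in mapping.get(e["type"], [])]

for name, category in candidate_threats(elements, stride_by_type):
    print(f"{category} against {name}")
```

The output is only a list of candidate threats to investigate, not a finished threat model; the security professional still has to turn each pairing into a concrete, application-specific scenario.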

Threat modeling is not just a pre-development exercise.

In a perfect world, we would always build the threat model first. But the reality is that there are a huge number of existing applications that, while never threat modeled, are still critical and are not going away any time soon. Many companies do not realize that the threat modeling process can be an extremely effective way to assess the design of an existing application. One benefit of threat modeling an existing application is that we can decompose it with complete accuracy, since we know the design will not change due to factors that arise during development. Additionally, post-development threat models can serve both to identify security control deficiencies without the need for testing and to highlight specific application controls that must be validated during future penetration tests.

For an existing application, I typically use the list of generated threats to build a list of questions to ask developers regarding which countermeasures are (or are supposed to be) in place to prevent each threat from being realized. The threats that have not been mitigated are a deficiency that must be addressed, while those that have been mitigated (based on the response from the developer) result in specific items that can be tested and validated as a follow up to the exercise (after all, we can't just take all of the developer's assertions at face value...trust, but verify).
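The question-and-answer exercise above can be sketched as a small script (the threat names and developer responses are hypothetical, purely for illustration): generate one question per threat, then split the developers' responses into unmitigated deficiencies and claimed countermeasures to verify during a follow-up test.

```python
# Hypothetical sketch of the post-development interview process.
# Threat names and responses are made up for illustration.

threats = [
    "SQL injection via the search field",
    "Session hijacking over unencrypted transport",
    "Privilege escalation from employee to administrator",
]

def questions_for(threats):
    """One interview question per generated threat."""
    return [f"What countermeasure prevents: {t}?" for t in threats]

# Developer responses gathered during the interview
# (None means no countermeasure was identified)
responses = {
    "SQL injection via the search field": "parameterized queries",
    "Session hijacking over unencrypted transport": None,
    "Privilege escalation from employee to administrator": "role checks",
}

# Unmitigated threats become deficiencies to fix...
deficiencies = [t for t, c in responses.items() if c is None]

# ...while claimed countermeasures become test cases to validate
# (trust, but verify)
to_verify = [(t, c) for t, c in responses.items() if c is not None]
```

The two output lists map directly onto the two outcomes described above: `deficiencies` feeds the remediation plan, and `to_verify` becomes the checklist for the next penetration test.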

Threat models don't need to be exhaustive in order to be effective.

Now I say this carefully, as detailed threat models tend to be the best and most valuable ones. But that is not to say that a high-level threat model cannot be effective (especially if this is your first pass at developing one). Threat models, like all development artifacts, are meant to be living documents. The threat modeling process itself should be iterative, and the level of effort required to build a threat model depends largely on the depth to which the application is decomposed.

A good approach for a first threat model is to decompose the application into the bare minimum number of elements possible and to use this as a starting point for generating threats. For example, if one of the categories we are using for application decomposition is "users" and our application has four different types of users (anonymous, employee, manager, administrator), then it might be easier to first develop a threat model that distinguishes only authenticated versus unauthenticated users. Once you have the initial threat model developed, you can then build a more detailed one that considers threats against every level of user. Of course we would expect to have more threats defined (and thus a more useful threat model) when we distinguish each user role; however, this is a practical way to start (after all, Rome wasn't built in a day).
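The iterative refinement described above can be sketched roughly as follows (the role names come from the example; the `refine` helper is my own illustration): the first pass works with the coarse two-role split, and a later iteration expands each coarse role into the finer-grained ones.

```python
# Hypothetical sketch: iteratively refining the "users" decomposition.

# Iteration 1: the bare minimum split
coarse_roles = ["unauthenticated", "authenticated"]

# How each coarse role expands in the next iteration
refinement = {
    "unauthenticated": ["anonymous"],
    "authenticated": ["employee", "manager", "administrator"],
}

def refine(roles, mapping):
    """Expand each coarse role into its finer-grained roles."""
    return [fine for r in roles for fine in mapping.get(r, [r])]

# Iteration 2: every level of user, yielding more threats to consider
detailed_roles = refine(coarse_roles, refinement)
```

Each pass reuses the previous model rather than starting over, which is what makes the threat model a living document instead of a one-time deliverable.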

Threat modeling does not eliminate all security vulnerabilities.

This is more of an expectation that needs to be set up front. I would love to say that by effectively threat modeling applications you will eliminate all of your security vulnerabilities, but that is simply unrealistic. Certainly threat modeling should reduce the number of significant bugs within a given application, and typically the bugs that it will prevent are the "expensive" bugs -- the ones that require significant re-work or re-design to fix. However, the reality is that no matter how detailed and well developed your threat model is, it will not (and cannot) prevent coding errors or flaws due to the developer's lack of secure coding practices.

The goal of threat modeling is not to prevent the vulnerabilities that result from errors or poor coding (basic developer security training and security testing should be used for that), but rather to prevent bugs that stem from an inherently insecure design. If the application is completely missing a certain security feature or control, that is likely a design problem that should have been caught by threat modeling. If a security feature was implemented incorrectly, however, that is typically a developer education or quality assurance problem. Like everything else, threat modeling is not a panacea.