David Robert's -castlebbs- Blog


Saturday 17 April 2010

Apache.org incident: started with an XSS flaw

Apache.org suffered a targeted attack between the 5th and the 9th of April. The Apache infrastructure team wrote a comprehensive incident report that is worth reading.

I find this report interesting because it is well written, and it is a good example of a successful attack that is not too difficult to understand from a technical point of view. It illustrates well that a successful attack is most of the time the result of the successive exploitation of different vulnerabilities.


The exploitation of a single vulnerability is often not enough to compromise a system (though it can happen). Most of the time, it is the presence of several vulnerabilities, and the smart exploitation of a combination of them, that enables attackers to achieve their goals.

It's important to bear this in mind when assessing the security of a system. If you use risk classification for vulnerabilities and you only look at them individually, you may underestimate the risk.

Reflected XSS risk?

This attack led to the compromise of two servers used by the foundation (shell access, with root privileges on one of them); numerous passwords were stolen, a web application was modified to steal even more passwords, etc. None of this would have been possible without the exploitation of the first weakness: a non-persistent (or reflected) XSS.

This type of cross-site scripting issue is very common in web applications. Because this type of XSS is non-persistent, I unfortunately still often see security reports rate it as a Minor risk. Obviously, a non-persistent XSS won't give you a remote root account, but as you can see in the Apache compromise, it was the first step. Without that first step, the Apache compromise (at least this scenario) would not have been possible.

Ironically, I understand that XSS is stepping down from rank 1 to rank 2 in the new 2010 OWASP Top Ten, because from now on the team will focus more on risk than on prevalence alone. This is my understanding from listening to this podcast. (The 2010 OWASP Top Ten had not yet been published at the time I wrote this article.)

What can we learn from this incident?

In the report, the team explains what the issues were and how they fixed them (sections "What worked?", "What didn't work?", and "What are we changing?").

As said at the beginning of this article, this attack was not particularly sophisticated. Using XSS to steal a session cookie is a textbook case, brute-force attacks are obviously not new, improper file/folder permissions on a web server are more than common, and the issues related to password storage are basic as well.

Although some of the vulnerabilities on the hacked servers/application could have been fixed beforehand, humans make mistakes and that is not going to change. A more important issue, in my opinion, is that the attack went undetected for a few days. With current technology, the Apache team should have been alerted in real time to at least:

  • The brute-force attack (hundreds of thousands of password attempts cannot be left undetected)
  • Changes to application administration settings (the path used to upload attachments was changed)
  • Changes to the application itself (new JSP files, a JAR file that would collect all passwords on login and save them)

It's a common mistake to focus only on preventive controls. In the report I can't find much about plans for detective controls. Even in the sections "What didn't work?" and "What are we changing?" there is no mention of being alerted when something goes wrong.

Today, it is important to have proactive monitoring of what is happening on our servers. Logs should be monitored and alerts raised based on defined criteria/thresholds; operating system and application configuration, as well as program files, should be monitored with integrity-checking tools, etc. Procedures should be in place to monitor and react to these events. At the end of the day, it is people who take action, so their involvement in the monitoring process should not be overlooked.

Regarding monitoring technology, many products are available. On the open-source side, I would definitely recommend having a look at OSSEC, a host-based intrusion detection system (HIDS).

Also, it is important to mention that this kind of attack is very common, and that you cannot rely on network infrastructure security to prevent it: firewalls, network intrusion prevention systems, etc. are likely to let the attackers in. For instance, this attack involved no buffer overflows and no use of unauthorized network ports.

Other considerations

  • TinyURL and the other URL-shortening websites are now used to deceive you into clicking a link and becoming the victim of cross-site scripting attacks. You should be careful when clicking these links, and I would recommend using the preview feature.

  • In this attack, once an administrator session had been hijacked by exploiting the XSS vulnerability, the next steps were possible because of a badly configured web application: it was possible to copy JSP files into a folder from which they would be executed. This is an issue I see very often: the operating system user that runs the web server should not have the right to write to a folder from which dynamic web pages are executed.

Thursday 3 May 2007

Penetration Testing Framework 0.4

I invite those who don't know the "Penetration Testing Framework" by Kev Orrey and Lee Lawson to have a look at it:

HTML version:


PDF version:


FreeMind source:

This "framework" can be useful for anyone who wants to perform penetration tests. A description is unnecessary; go see the HTML version.

Among other things, this new version includes the following sections:

  • Wireless pentesting
  • AS400
  • VOIP
  • Bluetooth
  • Cisco

Wednesday 14 February 2007

CVE-2007-0882: Solaris telnetd flaw

Analysis of a 0-day on Solaris 10 so simple you could believe it is a backdoor:

Simply connect with the following telnet command to log in as root, without a password:

$ telnet -l"-froot" vuln_host

Huge, isn't it? But this problem had been reported on other Unix systems more than 12 years ago (on rlogin, to be precise). Why does it appear now in a recent version of Solaris?

A little explanation is in order. In general, when using telnet (sadly, telnet is still widely used), you provide only one argument: the host name.

 $ telnet monjoliserveurlinux

You then type the login name, followed by the password.

So by now everyone has followed along and understood that the exploit is to use the option -l"-froot". I don't have a Solaris box at hand, but I imagine it also works with -l-froot or -l -froot...

The question is to understand what -l and -f are, and how you end up as root without a password (!).

The -l option of telnet specifies, on the command line, the user name to log in as on the remote host. The telnet protocol allows the telnet client to transmit environment variables to the server. In particular, the user name is transmitted to the server via the USER environment variable.

If you want to know more about the negotiation and exchange of environment variables in the telnet protocol, read RFC 1572 and analyze a capture of a telnet session with Wireshark.

So, if you have been following, the telnet server is sent the variable USER=-froot.

The telnetd server launches the login command, which authenticates you and gives you a shell: telnetd -> login -> sh

Here is how telnetd launches the login command via execl:

(void) execl(LOGIN_PROGRAM, "login",
    "-p",
    "-d", slavename,
    "-h", host,
    "-s", pam_svc_name,
    (AuthenticatingUser != NULL ? AuthenticatingUser :
    getenv("USER")),
    0);

The content of the USER variable is passed as a parameter to the login command without any sanitization or other control (it feels like a web application :-)). This is the most important element of the exploit: the content of USER is -froot, so what login actually receives is an extra option, not a user name.

Now, what does -f do for the login program? Quite simply, it bypasses authentication.

Here are the changes made to telnetd to fix the problem:

    "-p", "-h", host, "-d", slavename, "--",
    getenv("USER"), 0);

The -- tells the login program (via getopt) that what follows is no longer to be parsed as options. This fixes the problem. So patch your system!

Here is how the problem is handled on GNU/Linux (netkit-telnet-0.17):

            if (getenv("USER")) {
                addarg(&avs, getenv("USER"));
                if (*getenv("USER") == '-') {
                    write(1, "I don't hear you!\r\n", 19);
                    syslog(LOG_ERR, "Attempt to login with an option!");
                    exit(1);
                }
            }

So, backdoor or not?

For more details on how this flaw arrived in Solaris 10, I advise you to read the discussion on the subject on Bugtraq. First, why does this bug appear only now, and for the first time on Solaris? One element of an answer is that login's handling of the -f option was only recently added, for Kerberos support (ktelnetd). This made it possible to exploit the bug in telnetd, which does not protect against parameter injection via the USER environment variable. So, backdoor or not? You be the judge: some think yes, others that it is obviously not. Either way, what is certain is that you must patch your system.
