

MoneySavingExpert under DDoS attack

11pm, 30th October 2007 - Geek, News, Web, Security, Sysadmin, Hardware

[Image: Martin Lewis, the Money Saving Expert]

Last weekend, MoneySavingExpert (my old employer) was the subject of what appears to be a fairly hefty DDoS attack. It has been reported on several blogs and shortly afterwards on Digg.

There has been much speculation about why it's happening just now and who could be behind it but, as always, without any data to analyse there's no way of making any guess more accurate than a wild stab in the dark. There has also been much wailing and gnashing of teeth about the powerlessness one feels when being attacked by half the internet. Not that the tech team over at Money Saving Towers were wailing or gnashing their teeth; they just got in and fixed the problem. By Sunday afternoon there was a static holding page up which I could actually request and receive in a browser, and by Monday morning the site appeared to be back up and running as usual, although I think the forums were still down at that time.

There are some things that can be done when you are the victim of a DoS attack. If MoneySavingExpert can survive it, then so can you.

How you deal with a DoS depends greatly on how it's happening. If you don't already know why your site is down, start trying to find the reason. Log files and aggregated statistics are always the first two places I look.

At my current place of employment, we have a series of graphs generated using Orca and RRDTool for each of our servers. These graphs show us everything from CPU load and disk space used to the number of open TCP connections and the machine's uptime. If a particular server is causing the problem then I can load all of its graphs in a single window and scroll down the list looking for anything unusual. If the problem is with a particular website then I can load up just the servers that website affects. If I don't know which part of our system is the cause of the downtime, then I can load them all up.

Unusual patterns in log files can also be an indicator that something is wrong. If I notice that one IP address has requested more web pages than the next ten combined then I start to suspect that something is wrong at that IP address. If I notice that today's log file is twenty times the size of yesterday's log file, then I'm going to want to have a look inside both of them. At this stage, all I'm doing is gathering information because I don't even know if it's a deliberate DoS or just some other sort of site outage. Either way, the logfiles often hold the answer.
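That first check, spotting one IP address that has made more requests than the rest put together, is easy to automate. Here's a minimal sketch in Python; the sample log lines and the "more than half of all traffic" threshold are my own illustration, not a rule from the post:

```python
from collections import Counter

def top_talkers(lines, n=10):
    """Count requests per client IP in access log lines.

    Assumes the client IP is the first whitespace-separated field,
    as in Apache's common/combined log formats.
    """
    counts = Counter(line.split()[0] for line in lines if line.strip())
    return counts.most_common(n)

# Usage on a real file:  with open("access.log") as f: print(top_talkers(f))
sample = [
    '1.2.3.4 - - [30/Oct/2007] "GET / HTTP/1.1" 200 512',
    '1.2.3.4 - - [30/Oct/2007] "GET /forum HTTP/1.1" 200 512',
    '5.6.7.8 - - [30/Oct/2007] "GET / HTTP/1.1" 200 512',
]
top = top_talkers(sample)
# Flag any IP responsible for more requests than all the others combined.
suspects = [ip for ip, hits in top if hits > len(sample) / 2]
print(suspects)
```

At this stage a flagged IP is only a lead to investigate, not proof of an attack; a busy proxy or a search engine crawler can look just as lopsided.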

There are many different ways a DoS can be caused. Simply flooding a webserver with ten times the normal number of requests it has to deal with is a crude but effective method. This method will often cause your upstream bandwidth provider to start dropping packets because it can't keep up the pace. Even if your webserver could serve all the requests, some of them won't make it all the way there. Other types of DoS exist, however, and it's worth mentioning some of them here.

There are plenty of vulnerabilities in the off-by-one buffer-overflow category that will cause a program to crash. These are inevitably classed as denial of service vulnerabilities because that's usually all that can be exploited with them. The important thing to note is that an attacker doesn't need a large botnet, or even a small one, to cause a DoS using this method. All they need is a single computer and the ability to anonymise its payload through something like Tor or a list of proxy servers. Every crash (i.e. every request) is going to cause several minutes of downtime.

Another class of DoS attack is caused by requesting a page that causes a lot of resource usage, such as requesting '%' from a badly written search function. If the page is vulnerable, this example will cause the result set of the search to include every row in the database. This will chew up large amounts of CPU and RAM even if it only actually displays the top ten results.
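To make the '%' example concrete, here is a small sketch using Python's built-in sqlite3 module (the table and search function are hypothetical stand-ins for a real site's search). Note that parameter binding alone doesn't help here: the query is injection-safe, but a raw '%' still matches every row. The second function escapes LIKE wildcards so they are treated literally:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (title TEXT)")
conn.executemany("INSERT INTO articles VALUES (?)",
                 [("cats",), ("dogs",), ("fish",)])

def naive_search(term):
    # Vulnerable: user-supplied wildcards go straight into LIKE,
    # so a search for '%' matches every row in the table.
    cur = conn.execute("SELECT title FROM articles WHERE title LIKE ?",
                       ("%" + term + "%",))
    return cur.fetchall()

def safer_search(term):
    # Escape LIKE metacharacters so '%' and '_' are matched literally.
    escaped = (term.replace("\\", "\\\\")
                   .replace("%", "\\%")
                   .replace("_", "\\_"))
    cur = conn.execute(
        "SELECT title FROM articles WHERE title LIKE ? ESCAPE '\\'",
        ("%" + escaped + "%",))
    return cur.fetchall()

print(len(naive_search("%")))   # every row comes back
print(len(safer_search("%")))   # literal '%' matches nothing
```

On a three-row table this is harmless; on a multi-million-row table, every such request is a full scan, which is exactly the resource exhaustion described above.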

A DoS attacker could also request pages that cause lots of logging to occur, hence filling up the victim's file system. I have actually caused this to happen completely by accident on one guy's website. Apparently, in the space of about half an hour I caused 60GB of log files to be generated on their webserver. Luckily, they knew what I was doing and had my phone number so they could ask me to stop.

These sorts of attacks - the ones that cause resource starvation on your webserver - can be caught with an IDS such as Snort, any decent firewall or a dedicated appliance. Once you can identify the packets that are part of the DoS, it is simply a matter of configuring your firewall or IDS to drop them.

The other sort of DoS attack - the sort that attacks the services that support your site rather than the site itself - cannot be stopped by you. They will require the people who run the service that failed to do whatever they need to do to survive the attack. In the case of MoneySavingExpert, it appears that they have requested the services of ProLexic, a company that specialises in mitigating the effects of bandwidth-based DDoS attacks. Essentially, ProLexic point all of the victim's traffic at their own servers, filter out the bad requests and pass the remaining requests on to the real webservers. It's a simple but effective tactic that works against the crude but effective attack.

Related posts:

So many servers, all hacked.
Time to move on
Clever girl...
Distribution and layers
Galumph went the little green frog one day.


On Mon 5th Nov 2007 at 5pm Filipe Freitas said:

By the way, what's wrong with your RSS? It doesn't show any of your posts.
On Mon 5th Nov 2007 at 9pm Dave said:

Analysing and filtering the DDoS is precisely what ProLexic do. They have written custom software for exactly that purpose.

You're also right about it being costly; however, most DDoS attacks are followed by an extortion attempt. Sometimes the amount requested will be less than the cost of fighting the attack, but a successful extortion attempt is usually followed by another and another and another...

Some DDoS attacks are started by someone who is just holding a grudge against the victim. When that happens, you just have to balance the cost of mitigating the attack against the cost of not doing any business for several days.

There's a great article somewhere about the founder of ProLexic and how he got started with this company. He just had a good idea one day about how he could stop a DDoS and convinced someone who was under attack to give him some money and let him have a go. It worked, and the rest is history.

I'm not sure about the RSS feed... it's supposed to be generated every time I update my blog but it appears to have failed the last time. I re-generated it manually. Thanks for letting me know... I don't subscribe to my own RSS feed so I probably wouldn't ever have noticed !
On Sun 22nd Jun 2008 at 12pm Canober said:

There are various DDoS protection tools and some companies even provide solutions for this. It's not something I'd worry about too much; there will be some downtime. If a DDoS attack takes place, I'd go to AWStats, sort the IPs by the number of requests, and then block them using .htaccess.
On Sat 23rd May 2009 at 5pm Money Savings Expert Videos said:

Most DDoS attacks are carried out using zombie machines, so tracing the logs is pretty pointless in reality...
On Wed 27th May 2009 at 12am Dave said:

Hi Mr Money Savings Expert Videos (Y'know what ? I'm gonna call you MSEV from now on.)

What we're talking about here are distributed denial of service attacks. They were invented because normal denial of service attacks were easily circumvented by blocking the single IP address that was attacking you. By their very nature, it is practically impossible to create a distributed denial of service attack through any other means than by using zombie computers. (For the record, I would include hotlinking an image from a popular site to be using zombie computers even though it doesn't fit a strict definition of "zombie".)

Manually identifying and blocking individual IP addresses from your webserver log files is very time consuming and, with a zombie botnet of any reasonable size, completely impractical, as you have mentioned.

On the other hand, if you can identify the IP addresses of the attacking computers automatically instead of manually then there may be something you can do.

The main thrust of my post was that there may be ways you can identify an attack request apart from the IP address. A DoS attack only has to exhaust one of your resources. It doesn't have to be bandwidth. It can be RAM or hard disk or CPU time and it can be in any part of your infrastructure. It can be in the web servers or the database servers or the storage servers or even in the networking equipment. Stateful firewalls are a common target because they are the first machine that attempts to reconstruct split TCP packets. Sending the first part of a split packet and never sending the second part will chew up RAM on a firewall very quickly.

If you can prevent these attack packets from getting to the resource they are exhausting then you can survive the DDoS. If each machine in the botnet is making more than one request then maintaining a list and blocking the IP addresses you identify as participating in the DDoS attack can be cheaper than trying to identify them on the fly or letting them go through to the web server. If you find that the botnet is so large that you only ever see each attacking IP address once then you will gain nothing by blocking IP addresses and will be wasting precious resources on your firewall. In that case you would be better off writing a custom packet filter on your firewall that matches the attack requests based on patterns in the request itself.

As for the patterns, you may find that the attack bots are only requesting one page on your site or that they are all using the same user-agent field or possibly that all the IP addresses are in the same country or even from the same ISP. Looking for patterns is the key here. As soon as you can find a pattern in the attacks, you can filter the attacks out at the firewall. The first place you can look for patterns is in your web server logs. Following on from that, you should add something to your website or firewall that logs much more data in a separate log file (such as request headers or the entire TCP or even IP packet) than what is stored in the web server logs by default.

The three stages of surviving a DDoS are:
1. Collect data.
2. Identify patterns.
3. Filter attacks based on patterns.
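Those three stages can be sketched in a few lines of Python. Everything here is illustrative: the parsed requests, the "one user-agent dominates the traffic" heuristic, and the majority threshold are hypothetical examples of a pattern you might find, not a general-purpose detector:

```python
from collections import Counter

# Hypothetical parsed requests: (client_ip, path, user_agent)
requests = [
    ("10.0.0.1", "/", "BadBot/1.0"),
    ("10.0.0.2", "/", "BadBot/1.0"),
    ("10.0.0.3", "/about", "Mozilla/5.0"),
    ("10.0.0.4", "/", "BadBot/1.0"),
]

# 1. Collect data: tally the user-agent field across all requests.
agents = Counter(ua for _, _, ua in requests)

# 2. Identify a pattern: does one user-agent dominate the traffic?
suspect, hits = agents.most_common(1)[0]

# 3. Filter attacks based on the pattern: drop matching requests.
if hits > len(requests) / 2:
    surviving = [r for r in requests if r[2] != suspect]
else:
    surviving = requests

print(surviving)
```

In practice stage 3 would live in the firewall or a frontend filter rather than in application code, but the shape is the same: the pattern found in stage 2 becomes the match rule for stage 3.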
On Thu 3rd Nov 2011 at 3pm Dave said:

At my current place of work, we are undergoing something similar to a DDoS. The problem is excessive signups so it's not actually depriving anyone else of our websites, but they are happening at a rate that will cause us to run out of disk space in about a month. Given enough time, this could be considered a DDoS. But that's not why I'm mentioning it here.

What makes it similar to a DDoS is that the signups are happening from a bot net. We already have a 10 signups per IP address limit built in to the site but most of the IP addresses in this attack are only creating one or two accounts. The email addresses being used are also almost all unique with most only being used once or twice. We have had over 30,000 signups from this bot net in the last two weeks. A day or two after signing up, the bot net comes back and adds spam content to the account.

I know it's a bot net because reverse lookups and whois lookups on the IP addresses show that most of them are residential ISP accounts.

So how do we detect them and stop them ? Even though the IP addresses and email addresses are all different, the HTTP request that causes the account to be made has several unique characteristics:

1. The User-Agent is always the same and is always an old version of Firefox.
2. The bot doesn't request any JavaScript, CSS or image files.
3. It also ignores the session cookie we send it and makes all of its requests without a Cookie: header.
4. The bot net always chooses the same theme for its newly created accounts.
5. The HTTP headers are in a different order to the way Firefox sends them.
6. The headers state that the POST data will be URL-encoded but it isn't actually URL-encoded. Most browsers get this right (IE only partially does).
7. The fields in the POST data are nearly always filled in with the same value in every field (or [the same value]@hotmail.com or [the same value]@yahoo.com for the email field).
8. In the cases where the fields are not all the same, the email field always matches one of three simple regexes.
9. The fields that are the same match a different simple regex.

Something interesting about the emails is that even though they look like random strings, they are not bouncing from any of the providers. This means that whoever is controlling the bot net has already used it to sign up for thousands of Hotmail and Yahoo email accounts.

We determined the above characteristics of the spam/DDoS requests using a number of different techniques. Running MySQL queries after the signups were complete to match rows where all the values were the same worked quite well; I discovered at this point that MySQL can match regexes in a query, which is somewhat more powerful than using LIKE. Writing a custom script that kept a spam score based on all these values during the signup process was also fairly easy and effective. Using tcpdump and grep on the servers to watch the signup requests in real time helped to figure out what the bots were doing, particularly with respect to the un-URL-encoded POST data and the ordering of the HTTP headers. (It seems you can use php://input in PHP to get access to the raw POST data.) The Apache access logs were not very helpful in this particular attack. We could have used them to pick up IP addresses that didn't request CSS, JavaScript or images, but there was no way of connecting an IP address to an account, so we would have had no idea what to delete from the database.
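A spam-scoring script along those lines is easy to sketch. The regexes and thresholds below are stand-ins I've invented for illustration (the real patterns from the attack aren't reproduced in the post), but the structure - add points for each bot characteristic, then reject above a threshold - is the idea:

```python
import re

# Illustrative patterns - stand-ins, not the actual regexes from the attack.
BOT_AGENT = re.compile(r"Firefox/1\.")  # stand-in for the old Firefox UA
RANDOM_LOCALPART = re.compile(r"^[a-z]{8,12}@(hotmail|yahoo)\.com$")

def spam_score(signup):
    """Score a signup dict with keys: user_agent, cookies, fields, email."""
    score = 0
    if BOT_AGENT.search(signup.get("user_agent", "")):
        score += 2
    if not signup.get("cookies"):           # ignored our session cookie
        score += 2
    values = list(signup.get("fields", {}).values())
    if values and len(set(values)) == 1:    # every field holds the same value
        score += 3
    if RANDOM_LOCALPART.match(signup.get("email", "")):
        score += 1
    return score

bot = {"user_agent": "Mozilla/5.0 Firefox/1.5", "cookies": {},
       "fields": {"name": "qwert", "city": "qwert"},
       "email": "qwertasdf@hotmail.com"}
print(spam_score(bot))  # well above any sensible rejection threshold
```

No single signal is damning on its own (plenty of humans reuse a value across fields), which is why a cumulative score works better than hard-blocking on any one characteristic.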

I'll post more here if anything new and interesting comes up.

