Monday, September 28, 2009

Harnessing spam bots for cyber warfare

Disclaimer

I have not researched internet criminal law and I won't speculate on the legality of this idea. I'm not advocating any type of internet warfare or vandalism. Don't implement this idea unless you determine that it is completely legal.

Summary

With that disclaimer out of the way, I'd like to explain one of my most ambitious and long-planned ideas: redirecting spam bots to launch a DDoS attack on a website. Even small websites have to guard their forms against bot spam. Why shouldn't this enormous source of resource-wasting power be put to good use?

Background

Spam bots work on economies of scale. Unscrupulous companies and criminal organizations pay spammers, whose servers crawl the internet looking for comment forms and public email addresses. When a spam server (or bot) finds a web form, it fills it out with a mix of garbage and spam links and moves on. If the data is posted on the website in some way (a blog comment, forum post, or wiki entry), the bot has succeeded in exposing the link to more people. If even a minute percentage of the people who see the link click on it, the hiring organization can make money by infecting the unwary visitor's computer with malware and selling their personal information.

However, if the spam is detected by any part of the system, it is blocked and the bot has failed. Unfortunately, it has still consumed the bandwidth and computing power of the victim web server. In addition, the victim's organization has to spend its own resources hardening the website against spam bots. On a low-traffic website, spam bot traffic is negligible, but on larger sites its cost is significant. One only needs to examine the measures taken against bot spam to appreciate its scale: reCAPTCHA, image rotation tests, and a few more esoteric schemes. Despite all these barriers, spammers still make money with spam bots.

Plan of action

Why let all those spam bot processing cycles serve only nefarious purposes? Right now, when a website detects a spam bot, it has several options: it can block the bot's IP address to keep it from coming back; it can simply reroute the bot to a dead-end page; or it can try to waste the bot's processing cycles by rerouting it to a bot trap. Typically, bot traps work by enticing the bot to fill out a never-ending series of forms or follow a web of garbage links.
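To make the detection-and-reroute step concrete, here is a minimal sketch in Python, assuming a hidden "honeypot" field that human visitors never see or fill in. The field name, trap URL, and handler shape are all my own placeholders, not any real filtering library:

    TRAP_URL = "http://bot-trap.example.com/"  # hypothetical trap portal

    def is_spam_bot(form_data):
        # A human never sees the hidden honeypot field, so any
        # non-empty value strongly suggests an automated form filler.
        return bool(form_data.get("website_url", "").strip())

    def handle_submission(form_data):
        if is_spam_bot(form_data):
            # Instead of a dead-end page, bounce the bot to the trap.
            return ("302 Found", [("Location", TRAP_URL)], b"")
        # ...normal comment handling would go here...
        return ("200 OK", [("Content-Type", "text/plain")], b"Thanks!")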

What if this stream of spam bots were instead pointed at link farms or phishing sites? Web forms could reroute the bots they detect to a central portal site, which would in turn redirect them to known nefarious sites. If a significant number of websites used the portal as their bot trap, the effect on those sites could be devastating: they would be crushed by the traffic from their own advertising bots.
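As a rough sketch of what the portal itself might look like, here is a tiny Python WSGI app that answers every trapped bot with a 302 redirect to the next site in a rotating list. The target URLs are placeholders of my own invention, and a real deployment would obviously need far more than this:

    import itertools
    from wsgiref.simple_server import make_server

    # Placeholder targets; in practice these would come from a
    # curated list of known link farms and phishing sites.
    TARGETS = itertools.cycle([
        "http://link-farm.example.com/",
        "http://phishing-site.example.net/",
    ])

    def portal_app(environ, start_response):
        # Hand each incoming bot the next target in round-robin order.
        start_response("302 Found", [("Location", next(TARGETS))])
        return [b""]

    if __name__ == "__main__":
        make_server("", 8000, portal_app).serve_forever()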

Eventually, the portal site could be automated to find and overwhelm targets on its own. It could pick its targets from the list of sites ejected from Google's index for phishing or link farming. Once it had chosen a site, it would redirect its traffic there, checking every minute whether the site was still functioning. Perhaps it could crush multiple sites simultaneously by gauging each one's capacity and pointing just enough traffic at each to overwhelm it.
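The once-a-minute check could be as simple as the following Python loop, which probes each target and drops any that has stopped answering. Again, the URLs and timing are purely illustrative:

    import time
    import urllib.request

    targets = [
        "http://link-farm.example.com/",      # placeholder
        "http://phishing-site.example.net/",  # placeholder
    ]

    def is_alive(url, timeout=5):
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except Exception:
            return False

    while targets:
        # A target that no longer responds is considered crushed
        # and is removed from the rotation.
        targets = [url for url in targets if is_alive(url)]
        print("still standing:", targets)
        time.sleep(60)  # re-check every minute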

Obviously, this idea would be far more difficult to implement than to describe. How would the portal handle the massive amounts of traffic? Who would want to shoulder the cost of this plan? How would the predator portal get the list of sites rejected from Google's index?

Do you think this plan is viable?