Tuesday, April 15, 2008

Worm-like Intranet/Proxy Hacks

Remote file include worms are nothing new. The Santy worm, for example, abuses PHP applications that allow user-specified locations to be passed to the require() and include() functions. When attacker-controlled URLs are passed to these functions, the attacker can serve code to the application, which will then be executed on the victim server.

Intranet hacking vulnerabilities, like the one RSnake recently found, share some characteristics with these vicious PHP application flaws. Both take user-specified resources and fetch them automatically, without any validation. This means you can tell the application to go get http://www.cnn.com, or http://localhost/app/web.config, and the application will generate an HTTP request for that resource.
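Neither flaw performs the check a safe fetcher needs before making the request. As a rough illustration (the allowlist approach and function name are mine, not from any particular app), rejecting unexpected schemes and hosts up front is the step the vulnerable code skips:

```python
from urllib.parse import urlparse

# Hypothetical allowlist check - the validation step the vulnerable
# code skips before fetching a user-supplied URL.
def is_safe_to_fetch(url, allowed_hosts):
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # block file://, gopher://, and friends
    # block localhost and internal hosts the app shouldn't proxy for
    return parsed.hostname in allowed_hosts
```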

What sets the two issues apart is that the PHP vulnerability combines an automated request AND arbitrary code execution - that's why it is so easily wormable. Intranet-hacking vulnerabilities, in general, only give an attacker a method to automate and proxy requests (propagation), without any vector for remote code execution. Unless there is some other vulnerability in the same script...

If the same URL that has the arbitrary-request vulnerability also has a persistent XSS, SQL injection, or command injection vulnerability, you now have a vector to execute malicious code as well as a method to propagate it.

Below is one theoretical example. But first, let's get some assumptions out of the way :-)

Assumption 1: The application has a pre-auth vulnerability allowing users to specify URLs which the application will fetch without validation.

Assumption 2: The same code that doesn't validate the user-specified URL also doesn't validate the other querystring data sent to it.

Assumption 3: All querystring data gets written to a log or database, without being encoded, where it can be viewed in a web browser by an administrator or some privileged user.

Initial attacker generated request:
http://somewebapp/app/fetch?path=[URL]&XSSpayload=<script>alert(document.cookie)</script>

In the above request, [URL] = http://someotherwebapp/app/fetch%3Fpath%3D[URL2]%26XSSpayload%3D<script>alert(document.cookie)</script>

[URL] contains the first automated request that will be generated, [URL2] contains the next one, and so on . . .
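To see how the layers of encoding stack up, here is a rough Python sketch that builds such a chained URL for a list of targets (hostnames, the /app/fetch path, and the parameter names are just the hypothetical ones from the example above):

```python
from urllib.parse import quote

PAYLOAD = "<script>alert(document.cookie)</script>"

def build_worm_url(targets):
    """Nest each target's fetch URL inside the previous one as the
    'path' value, percent-encoding one extra layer per hop."""
    url = ""  # innermost hop: nothing left to fetch
    for host in reversed(targets):
        url = (f"http://{host}/app/fetch"
               f"?path={quote(url, safe='')}&XSSpayload={PAYLOAD}")
    return url
```

As in the example above, the XSS payload itself is left unencoded at each hop; each server in the chain decodes one layer of the path value when it parses its querystring, which is what fires the next request.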

Another differentiator is that while malicious PHP worm code can find targets on its own using Google searches, with Intranet hacking it is up to the attacker to identify targets and build his exploit URL. In the end, he'll have a very long URL that contains the target and payload for every server he wishes to attack. Not pretty, but it should get the job done.

Something else to be aware of is the execution path of the victim code. When the code makes the remote request, it will probably hang until it gets a response or until it times out (depending on how many requests are made and how long it takes for them to be fulfilled).

If, in our above example, the persistent XSS is written to the database before the request is made, your attack should still work. If it is written to the database after the HTTP request, you could be out of luck if the request times out and the code takes an alternate execution path.

Again, this is all theory. Next step would be a PoC . . . or maybe actually find something like this in the wild.

Monday, April 14, 2008

TCP MITM Tools

I wrote a post on the PhishMe blog about a tool I'm building. Check it out here.

It's basically a TCP proxy that allows you to intercept/modify all application traffic sent via TCP and SSL, regardless of application protocol (HTTP/SOAP/proprietary stuff). It has helped me out quite a bit on some projects, so I'm curious to see if the rest of the world will find it useful!

Thursday, April 10, 2008

Front Range Web Application Security Summit

On June 10th, I'll be speaking in Denver at the Front Range OWASP Web Application Security Summit. My topic will be abusing SSL VPN clients/servers and open reverse HTTP proxies. I'll be talking about a lot of what you've read here, as well as demo some POCs and exploits. Check it out here: http://www.froc.us

Also on the bill are Jeremiah Grossman, RSnake, and Ed Bellis, the CISO of Orbitz Worldwide. Looks like there will be a wide range of interesting tech and management subject matter!

Tuesday, April 8, 2008

Preventing BlackHat Automation

Yesterday on PhishMe, I talked about my trip to Chicago to visit my good friend Sachin. While out there, we hung out with his buddy John who works for a small web design firm. John's firm sells quite a few e-commerce sites, and he had an interesting tale of a DoS attack against one of their client sites.

Except, DoS wasn't the intended purpose of the attack. The attackers were actually abusing the web application to scrub a list of credit cards in order to figure out which ones were good. The DoS occurred when Authorize.NET stopped accepting transactions from the site, preventing legit customers from making purchases.

Their immediate solution was to block the offending IP which stopped the attack. But the vulnerability still existed in the application, so they put in further validation, the details of which we didn't really get into.

I made the argument that no matter what kind of validation you put in place, you still need to let anonymous users place orders with credit cards. Additionally, you can't effectively block malicious users from placing an infinite number of orders because:

1. Malicious users can utilize open proxies, preventing IP filtering.
2. Malicious users can utilize multiple identities and credit card numbers, so you can't just block John Q. Hacker.
3. Limits on sessions won't help, since attackers can just get new session tokens.

John pretty much agreed, and added that they now have an added sense of vigilance and more actively monitor usage of their sites. Unfortunately, this monitoring is all done manually, which doesn't make for a scalable solution.

Monitoring, both manual and even automated, is also only effective against stupid attackers who don't throttle and/or don't make an effort to make their HTTP requests look legit. A savvy attacker should be able to randomize the time between his automated requests, as well as automatically utilize a large number of open proxies to hide his true intentions. If the attacker does some homework and determines the number of legitimate orders a given web site handles per day, he should be able to throttle his requests such that no red flags will be raised.
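As a rough sketch of that throttling idea (the numbers are arbitrary), an attacker who has observed roughly how many legitimate orders per day a site handles could randomize his request spacing around that rate:

```python
import random

def jittered_delay(observed_orders_per_day):
    """Seconds to wait before the next request, randomized around the
    target site's legitimate order rate so no obvious pattern emerges."""
    mean = 86400 / observed_orders_per_day  # average seconds between orders
    return random.uniform(0.5 * mean, 1.5 * mean)
```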

Below is a list of possible countermeasures that could be deployed to prevent such abuse and automation.

1. Make the attacker, but not browser-based users, jump through hoops. Enforce strict HTTP request validation on critical transactional URLs. Make sure every header a normal browser would send is present in the request.

2. Reverse CAPTCHA. Build a response that a human can understand, but would be difficult for an automated bot to interpret. Show a picture of some dogs, and display a message to the effect of: "If you see 3 golden retrievers in this picture, your order was successful."

3. Use alternate communication channels for response data. An easy one would be to SMS the transaction result to a mobile number. Of course, you can't rely on all users being able to receive SMS...

4. Use a combination of numbers two and three, and make your process dynamic.
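For point one, a crude sketch of "browser-like" request filtering might look like this (the required header set is illustrative - a real deployment would profile its own legitimate traffic):

```python
# Headers virtually every mainstream browser sends; lazy bots often omit them.
REQUIRED_HEADERS = {"user-agent", "accept", "accept-language", "accept-encoding"}

def looks_like_browser(headers):
    """Reject requests missing headers a normal browser would send.
    This raises the bar for naive scripts, not determined attackers."""
    present = {name.lower() for name in headers}
    return REQUIRED_HEADERS.issubset(present)
```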

There are two themes to these countermeasures:
1. Make your process complicated to automate.
2. Make your responses hard to interpret programmatically.

And one last idea. If you're content with monitoring for the time being, you still need to be able to react when you see someone doing something naughty. In the case of credit card validation abuse, a simple countermeasure would be the ability to disable real-time processing. Your site will still accept orders, but they won't be processed until after human review.
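That reaction path can be as simple as a flag checked at order time - a sketch with hypothetical names, not any particular platform's API:

```python
def handle_order(order, review_queue, charge, realtime=True):
    """Charge immediately in normal operation; when abuse is suspected,
    flip realtime off and hold orders for human review instead."""
    if realtime:
        return charge(order)
    review_queue.append(order)
    return "pending-review"
```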

Monday, April 7, 2008

Go PhishMe!

I've started writing on the PhishMe.com blog. You can read my first post here.

I'll still be writing on Schmoilitos Way, but I'm going to be writing about some new tools I'm developing on the PhishMe blog, so definitely keep an eye on it. The other guys on the Intrepidus team also have some really interesting/relevant/fun stuff to write about, so I'd recommend adding it to your feeds!

Cheers!

Friday, April 4, 2008

Defending your visitors

Last night I was hanging out with my friend Andy, who is a real estate agent here in Hoboken. I was shocked when he told me he read my blog, and blown away when he said he got some value out of it!

While not a tech guy, Andy handles most of his computer issues on his own. He also runs a simple web site he developed to promote his real estate listings. He is looking to add functionality to his site, and he found some individuals on guru.com who are bidding for the work. While his current web site and the work he needs done is not overly complicated, my last post (Outsourcing Pain) still struck a nerve with him.

Andy realized how easily he, his web site, and people who view his web site, could be taken advantage of by some mystery outsourcer. He still needs help getting his work done, but he will be a little more vigilant in choosing who does it for him.

One of the features Andy wants implemented is validation on some of the contact forms users can fill out on his site. I told him to make sure that when he pays someone to do it, they give him server-side validation as opposed to client-side JS validation. I gave him the usual appsec drill about injection attacks, and how a lack of validation leads to compromised assets (data, server, etc.). This led us to another great point in our discussion.
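To make the distinction concrete, here is a minimal sketch of server-side checks for a contact form (the field names and limits are made up for illustration) - the point being that these run on the server no matter what the browser's JS does or doesn't do:

```python
import re

# Deliberately simple email shape check for illustration.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_contact(form):
    """Return a list of errors; an empty list means the submission is clean."""
    errors = []
    name = form.get("name", "").strip()
    if not 1 <= len(name) <= 100:
        errors.append("name must be 1-100 characters")
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("email address is not valid")
    return errors
```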

Andy asked: "What assets do I have? Why would anyone want to hack my rinky dink real estate web site?"

If Andy's web site was pwned, it wouldn't be a big hit to his wallet. It's not a major source of business for him. However, the people who view his web site - current and would-be customers - probably use their PCs for more than just browsing Andy's web site. They use them to check email, bank online, work, and all that good stuff.

So while hackers might not want to steal Andy's database, they would be more than happy to take control of his site in order to serve malware to his visitors and spam the rest of us with viagra emails and the like. The assets Andy's web site puts at risk may not necessarily be his own. A compromise of his site could lead to much frustration for his visitors, but not necessarily Andy himself.

Wednesday, April 2, 2008

Outsourcing Pain

I have some friends who own a small business together and are going through some outsourced development trouble. The business originally started with one owner in the U.S. and an outsourced Eastern bloc developer responsible for all architecture and development. Together, they built a robust web app that was great for its intended purpose, and quite pleasing to the eye. The two had a great relationship, albeit one based on no more than a virtual handshake.

Things started to get sketchy when the owner, realizing he could only take the business so far, brought in partners and gave away equity. With the new partners involved, the original relationship with the developer started to sour. The developer responded by working directly for clients, raising his rates, and making himself scarce.

FUD

They asked me what the worst-case scenario would be if the relationship soured to the point where they no longer worked with this developer. They assumed he could take the source code, his knowledge of the source code, and start a competing firm. Since he is in a foreign country, and they have no written agreement at all, the partnership would be S.O.L. That is bad news.

While the above is true, I told them the really bad news. Since he is the only one who understands the code and was the sole person responsible for managing it, it would be good to assume that he knows backdoors into the app through which he can steal sensitive client data. They wondered why this is a "good" assumption. I explained that it is a good assumption because it gives them a chance to be proactive about it.

Take Back Control

The first thing they need to do is get a handle on the source code. They need to maintain a source code repository and make the outsourced developer use it. Once they have the source, they need to get a security consultant to review the code for security problems. Expensive, but necessary. Finally, they need to either formalize the relationship with this outsourcer or plan on moving to someone else.

In hindsight, it is easy to see how all the eggs were in one basket. Since all design, architecture, and coding was done by the outsourced resource, there was no internal knowledge of how the application actually worked. The developer had all the control. A simple solution would have been to hire one more outsourced resource and split the workload between the two. This would have allowed an additional technical resource to develop over time, providing the owner with some stability in case the secret police came in the night and took the primary developer away.

For Next Time

If an individual were to ask me for advice on initiating an overseas outsourced development project, here are some tips I would give them.

1. Go with a legit company. Don't just pick some guy off rent-a-coder who you can only contact via IM and email. If you want a long term relationship, you will want to be able to speak with them.

2. If possible, go with a company that has a presence in your country. Not only does this mean you have someone to speak with during your own business hours, but you have someone you can more easily hold legally responsible if things don't work out.

3. Make sure the developers document their code.

4. Have a contingency plan if things don't work out. Try and establish a relationship with another development firm, or be prepared to hire an internal resource. Outsourcing can be a great value, but having an in-house resource can be invaluable.

5. Understand that security is important. Non-technical people can come up with great ideas for software. Unfortunately, when they have someone develop it, they look for functionality and turn a blind eye to security (actually, that sounds like a lot of technical people I know). Before you begin your project, check out OWASP, and at least talk to someone about application security - a friend, a consultant - anyone knowledgeable.

6. When you begin testing the application internally, pay a third party to perform a security assessment of it. This way you can report security issues to the developers in addition to the functional bugs you will be finding and reporting. It is more dollars up front, but it will be worth it in the long run. The sooner you address security, the cheaper it will be.

There are probably a lot more points that I don't address above. If you have any advice for people who have a great idea for an app and are looking to take advantage of outsourced development, feel free to post in the comments.