Thursday, February 28, 2008

XMLHTTP, res://, and name look-ups

After reading another one of RSnake's posts about file enumeration using res://, I went back to see how XMLHTTP handles this protocol. It can make for some interesting results if the URI can be controlled by the user.

------------------------------------------------------------

set http = createobject("Msxml2.XMLHTTP")
http.open "GET", "res://cmd.exe"

msxml3.dll error '80070005'
Access is denied.
------------------------------------------------------------
http.open "GET", "res://notepad.exe"

msxml3.dll error '80070715'
The specified resource type cannot be found in the image file.
------------------------------------------------------------
http.open "GET", "res://garbage.exe"

msxml3.dll error '80070002'
The system cannot find the file specified.

------------------------------------------------------------
http.open "GET", "res://shdoclc.dll/IESecHelp.htm"

"Internet Explorer Enhanced Security Configuration places your server and Microsoft Internet Explorer ...."


When a server makes a remote or local request for a user-specified resource, there is a lot more going on server-side than you might initially realize. Filemon is a good tool for seeing what's going on when the server tries to access local files. And of course, Wireshark is fun for seeing what happens when you request remote files.

For example, on a Windows box, specifying http://shortname will cause the server to send NetBIOS Name Service (NBNS) UDP broadcasts on its subnet, while requesting http://fqdn.com will make the server issue DNS queries (assuming the host is not already in the DNS cache). Clearly, you can get the server to reach out to machines other than the one hosting the content you're trying to fetch, which means we have more attack vectors to worry about.
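
To make this concrete, here is a minimal sketch of my own in Java (the host name is an illustrative stand-in for whatever a user-supplied URI contains) that makes the server-side resolver do the work. On Windows, the OS resolver typically falls back to NBNS for a single-label name, while an FQDN produces DNS queries:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class LookupDemo {
    public static void main(String[] args) {
        // Pretend this host name was extracted from a user-supplied URI.
        String host = args.length > 0 ? args[0] : "shortname";
        try {
            // Watch Wireshark while this runs: the look-up traffic
            // originates from the machine running this code.
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " resolved to " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            System.out.println(host + " did not resolve (but the look-up traffic still went out)");
        }
    }
}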

Wednesday, February 27, 2008

Insider-threat Evolution

Yesterday at work, we were doing some group threat modeling exercises based on HacMeBank. One of the threats we modeled was that of the site's theoretical admin section, for which all admins shared one set of credentials and had free rein over customer data and accounts.

While everyone agreed this was bad, everyone except me said the threat was less severe because employees should be vetted and trusted. Prospective employees go through background checks and other processes before being hired to ensure that they are of the highest moral fiber and good character.

While an employee may pose a low threat while being indoctrinated into the company, two years later, when the adjustable rate on Joe's mortgage kicks in, his wife gets pregnant, and he loses his promotion to Bob, Joe may be moved to take some drastic steps to ensure the well-being of his family. Such as selling the company's consumer data on IRC.

While it is important to thoroughly check out prospective employees during the hiring process, just because you hire them shouldn't mean they get the keys to the kingdom. In many cases, the keys consist literally of encryption keys, as well as application layer functionality.

I tell developers to never trust a user, and a user can be an anonymous web surfer, a paying customer, or an employee. In the same way, the business needs to have a healthy distrust of employees - from the CEO on down to the guys in the mail room, and especially your outsourced third-party developers.
I'm not advocating invading employee privacy with cloak-and-dagger tactics. I just want businesses to realize the need to invest in security for internal applications, and to take the internal threat just as seriously as most are beginning to take the outside threat. In the same way technology changes and external threats evolve, people change, and internal threats can emerge from entities that were once vetted and trusted.

For more on insider threats, there is a write-up currently on SecurityFocus about employees abusing privileges at a Wisconsin power company (with links to related stories). And let's not forget Jerome Kerviel, who abused internal systems and policies to rack up an astronomical financial loss for Societe Generale.

Monday, February 25, 2008

Leave Cookies to the Keebler Elves

If you don't know what CSRF is, you can read about it here.

I was talking to some developers today about using tokens and requiring passwords on forms to prevent CSRF. Then my friend Matt asked: why can't we just stop using cookies?

If an application doesn't rely on the browser automatically sending cookies for session management, CSRF becomes a non-issue. Of course, this means the developers have to handle session management with additional code (those poor souls!). Instead of the browser automatically sending cookies with each request, the developers need to make sure every link and form action has the session ID in it. We looked at Tomcat, and to some extent it can automatically append session identifiers to URLs (I need to research this more).
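
For the curious, here is a rough sketch of what that looks like with the standard Servlet API, which is the mechanism Tomcat implements (the servlet and path here are made up for illustration). HttpServletResponse.encodeURL() appends ;jsessionid=... to a link when the container can't track the session with a cookie:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        req.getSession(true); // make sure a session exists
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        // encodeURL() appends ;jsessionid=... only when the container
        // is not using a cookie to track this session.
        out.println("<a href=\"" + resp.encodeURL("/account/view") + "\">My account</a>");
    }
}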

Having the session ID in the query string does introduce other risks, such as making it visible in logs on the web server and any intermediate proxy. Right now, I think the benefit of zero CSRF outweighs those risks.

HTTP wasn't designed to maintain state. Cookies were an afterthought, added once people realized they needed a way to group multiple requests into a logical "session".

Unfortunately, Web 2.0 gives us much more code executing on the client, all of which relies on the web browser sending cookies with each outgoing request (assuming the receiving code on the app server actually verifies the requests are part of a valid session). That said, I don't think we'll see a mass exodus from cookies any time soon.

Updated 2:26PM EST 2/25/2008
I totally forgot about session hijacking attacks, which are a good reason not to rely on session identifiers sent in URL query strings. This is where attackers forge links with session IDs in them and trick users into clicking them and logging in. These attacks could lead to badness worse than that of CSRF.

Updated 9:24PM EST 2/25/2008
Doh! I should have read my last post. Session IDs in the URL are also vulnerable to referer leakage!

(That's what happens when I get too eager to press "publish" and head out to lunch!)

Thursday, February 21, 2008

Leaky Web Browsers

I'm pretty compulsive when it comes to checking out my blog stats in Google analytics, specifically, the referer data. For a while, I'd been mulling over some corporate intranet hosts that were sending clicks through. I Google'd them, but couldn't find out what company they were coming from.

Today I found out.

I was on a conf call with a security vendor who was giving his pitch and demo'ing his software. One part of the tool showed some sample data. Data which contained some internal host names from his corporate intranet. Host names which were on the same domain as the mystery referer in my stats!

Out of respect, I won't shout them out on here. But I do think it goes to show how easy it is to inadvertently leak corporate data. Referer data leaks have been talked about for a while now. I wonder when browsers will allow users to prevent referers from being sent across security zones.

Thanks for the links everyone :-)

Tuesday, February 19, 2008

ActiveX and Applet Security

"Designing for security is important because an ActiveX control is particularly vulnerable to attack. An application running on any Web site can use the control; all that it needs is the control's class identifier (CLSID)."
http://msdn2.microsoft.com/en-us/library/aa752035(VS.85).aspx

The threat of malicious web sites abusing ActiveX objects already installed on client PCs is real, as confirmed by the above MSDN article, and numerous published vulnerabilities. In some instances, cached Java applets can be abused in the same way. Depending on what the code does, this problem can lead to some really nasty vulnerabilities in application logic.

An easy way to prevent ActiveX objects and Java applets from being misused by other web sites is to hard-code the list of domains allowed to use the object into its own source. Unfortunately, in some cases this is not a feasible solution, as the code may be meant to be hosted on multiple domains. Case in point: the MS RDP ActiveX control. Lots of web sites use it to embed a remote desktop window in the browser.

Since we can't rely on the components to protect themselves from abuse, I think the browser should provide more protection. Why can't the browser sandbox the code so that only the web sites which installed it can actually invoke it? How hard would it be to maintain an inventory of installed objects/applets and enforce access control lists for web sites? Even in cases where multiple sites use the same control, a user-controllable white list could help protect against abuse: users should be able to white-list web sites that are allowed to use the control, or white-list the control itself for all sites. A sketch of the book-keeping I have in mind follows.
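
Purely hypothetically (every name here is made up, and this would live inside the browser), the deny-by-default table could be as simple as:

import java.util.Map;
import java.util.Set;

// Hypothetical browser-side ACL: which origins may instantiate which
// installed controls, keyed by CLSID or applet identifier.
public class ControlAcl {
    private final Map<String, Set<String>> allowedOrigins;

    public ControlAcl(Map<String, Set<String>> allowedOrigins) {
        this.allowedOrigins = allowedOrigins;
    }

    // Deny by default: only origins recorded at install time, or
    // white-listed later by the user, may invoke the control.
    public boolean mayInstantiate(String clsid, String origin) {
        Set<String> origins = allowedOrigins.get(clsid);
        return origins != null && origins.contains(origin);
    }
}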

One particular Java applet I've analyzed employs a hard-coded white list. The purpose of this applet is to launch a server-specified executable file on the client computer. If a malicious web site were able to invoke this applet from the browser's cache, it could possibly launch an arbitrary executable. See the code below:

private static final String legalDomains[] = {
    ".domainA.com", ".domainB.com", ".domainC.com"
};

private boolean IsLegalDomain()
{
    String browserDomain = getDocumentBase().getHost();
    boolean securityException = false;
    for (int i = 0; i < legalDomains.length; i++)
    {
        if (!browserDomain.endsWith(legalDomains[i]))
            continue;
        securityException = true;
        break;
    }
    return securityException;
}


If the domain returned by the web browser is not in the hard-coded white list, the applet fails securely. Unfortunately, this would not stop a dedicated attacker; it would just force them to change their approach.

While this reduces the client-side attack surface, it makes the white-listed web sites more of a target, because the client trusts them. Without the white list, you could just own the client. With the white list, you need to own the domain in order to own the client. XSS, persistent XSS, and other vulns that let attackers control site content should allow you to successfully attack the client-side code.

Additionally, this opens the door for other client-side vulnerabilities to be leveraged against the vulnerable ActiveX or applet code. DNS spoofing, and CSRF-based router attacks that compromise client DNS settings, can both lead to malicious code being hosted under trusted, white-listed domains.

It is nothing new that client-side code such as applets and ActiveX objects greatly increases the client-side attack surface. What scares me is that software vendors don't seem to be getting any more hesitant to use them. In fact, from the research I've been doing, it seems that vendors generally opt for enhanced client-side functionality (like auto-launching EXEs) at the expense of security.

Friday, February 15, 2008

Internet-Facing SEO/Validator Sites vulnerable to Intranet Hacking

While looking for additional data to progress the development of my WebSite Intranet Scanning tool, based on RSnake's original paper on intranet hacking through web sites, I hit the jackpot in terms of finding sites that appear to be vulnerable.

SEO (search engine optimization) site scanners and HTML/XML/etc. page validators generally allow anonymous users to specify URIs to be scanned and validated. Unfortunately, almost NONE of the sites I found appear to validate these user-supplied URIs before fetching them and performing whatever analysis they do.

Some screen shots for your viewing pleasure.
The first site allowed us to scan http://127.0.0.1, revealing all sorts of good stuff about what's running on the server, as well as the fact that the cPanel control panel is hosted at this address.

Since these sites work by sending a simple HTTP GET for the user-specified URI, we could probably leverage known cPanel vulnerabilities against this site, and possibly own the box from the inside.

The second screen shot is from another site and shows more of the same.

What is really interesting about these sites, versus the ones I've found before, is that many of them actually spit the content back out at you. I'd been focusing on using timing to determine whether hosts/ports are listening, but these applications will give you the content served, or, if the host is not up, a nice error message saying so.

I thought about responsibly disclosing these problems to the web site owners, but I'd be doing that until my fingers fall off since there are so many of them. So consider this my responsible disclosure.

How can this be prevented? Easy: validate user-supplied input. Don't allow users to submit URIs containing 127.0.0.1, localhost, or addresses in your local address space. It's probably not too hard to write a decent white-list regex that allows only FQDNs and public IP ranges.
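
As a sketch of that check (my code, not taken from any of these sites), one approach is to resolve the host first and reject anything that lands in loopback, link-local, or RFC 1918 space; this also catches DNS names that point back into the intranet, which would slip past a regex on the raw URI:

import java.net.InetAddress;
import java.net.URL;

public class TargetValidator {
    // Returns true only if the URI points at public address space.
    public static boolean isSafeTarget(String uri) {
        try {
            URL url = new URL(uri);
            InetAddress addr = InetAddress.getByName(url.getHost());
            // Reject loopback (127/8), link-local (169.254/16), the
            // wildcard address, and RFC 1918 private ranges.
            return !(addr.isLoopbackAddress()
                    || addr.isLinkLocalAddress()
                    || addr.isSiteLocalAddress()
                    || addr.isAnyLocalAddress());
        } catch (Exception e) {
            return false; // unparseable or unresolvable: fail closed
        }
    }
}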

Some sites actually did block the request. Funnier still, some sites with multiple scanners/validators had mixed results, with one scanner being vulnerable and another not.

Thursday, February 14, 2008

IIS Remote Exploit

MS08-006 is a treat we haven't had in a while: a remotely exploitable code execution vuln in IIS. To be fair, the remotely exploitable part requires an ASP script written in such a way that user-supplied input is passed to a vulnerable function. That said, it is still pretty cool.

HD Moore has a great write-up detailing how he reverse engineered the MS08-006 patch using IDA Pro & BinDiff to find the actual vulnerability. I'm sure a handful of people out there have done the same, but it is pretty cool to see a blow-by-blow account of how it is actually done.

Tuesday, February 12, 2008

EMail 101

Originally, I started off this post by complaining that people don't understand the value of their data. Unfortunately, this morning I realized that we're even further behind the eight ball than I originally thought. People still don't understand the power of email.

I checked my corporate email this morning to find that someone had spammed thousands of people asking if we would have sushi in one of our cafeterias this Friday. How did this happen? The notorious "Reply All" button. On top of that, two messages later was the convenient "so-and-so wishes to recall a message" message. Now I have two extraneous emails in my inbox. When I click on the recall message, it goes away, but I'm still left with the original spam. Sweet.

Last week I read an article about an information-leakage snafu between two law firms working on a high-profile case against Eli Lilly, brought by the US DoJ.

A lawyer (lawyer A) from one law firm tried to email a sensitive document to a co-counsel (lawyer B) who worked for a different law firm. The co-counsel shared the same last name as a NY Times reporter who also happened to be in lawyer A's address book. Instead of emailing the document to lawyer B, lawyer A inadvertently gave the NY Times reporter the scoop on a big settlement Eli Lilly was about to make with the US DoJ.

This situation could have been prevented if lawyer A had had the forethought and awareness to realize that maybe this document should be encrypted. Even if it had been sent to the correct recipient, the document could have been viewed by others en route to the destination mailbox.

But that's how a security guy thinks.

Regular users like lawyer A don't even think to look at who is in the "to" and "cc" fields before they send email, let alone worry about whether or not data should be encrypted.

One idea I have for a technical control to force people to think about the email they are about to send is a daily limit on the number of emails one person can send. I know the only time some people think about cleaning old messages off the email server is when they are no longer allowed to send outgoing messages.

If you were only allowed to send 5 corporate email messages per day, you'd better make sure you really need to send a particular message. Hopefully, it would also make people think twice about the content and the recipient list.

On top of that, perhaps making PGP or S/MIME encryption the default for outgoing messages would help increase awareness about handling sensitive data. Alerting users when they are emailing someone in plaintext for whom they don't have the required certificates might give them pause before sending.

If users can't be trusted to use email responsibly, it is up to administrators to put controls in place.

Wednesday, February 6, 2008

WhiteHat Security NJ Lunch

I just got back from the WhiteHat Security lunch event here in NJ. It was really cool to meet Jeremiah Grossman, and together with my colleague Daniel, the three of us had a fun discussion about CSRF & DNS rebinding.

The first part of the presentation was delivered by Jeremiah, who went over the Top 10 Web Hacks of 2007. He crammed a lot of technical content into 35 minutes, but it was very well organized and he did a good job of simplifying the material. I think a lot of the audience was non-technical, but I always think it's good for those folks to get a healthy dose of reality explained to them. I also picked up some technical details I had previously not realized.

The second part was a presentation by a WhiteHat partner company called SecurView. They offer outsourced, managed security solutions, one of which is WhiteHat's Sentinel. SecurView argues that security talent and expertise are hard to recruit and retain, so outsourcing is the way to go. That doesn't make sense to me: won't it be just as hard for SecurView to recruit and retain from the same limited pool of security talent?

If you don't have the resources to hire and retain security talent, I think the best thing a company can do is bring in consultants to help build its security infrastructure in house. Outsourcing may seem like a cheap alternative, but you sacrifice control and need to place a lot of confidence and faith (trust) in the outsourcer, and in whoever they subsequently outsource to. Case in point: SecurView mentioned that they use co-location centers for hosting to keep costs down. How do you know the colo company is doing things right?

I think what it comes down to isn't risk transference, but blame transference. A company that can't get a handle on security itself might be willing to pay a premium and place its trust in a company that claims to provide secure services. That way, if they get hacked, it's the outsourcer's fault.

The final part of the event was an overview of the Sentinel service from WhiteHat's VP of Sales. I've always thought this service was a cool idea, and I'd really like to see what it looks like from a client perspective. Sentinel is a hybrid service that combines technology (a scanner) and people in an iterative process, developing knowledge of how an application works in order to find vulnerabilities more effectively. The operations folks at WhiteHat add a human element to the scanner: they manually follow up on scanner findings and act as a reference point for clients who need more info.

All in all, the food was good (and free!), and the subject matter was very interesting to me. If you have the time, I recommend checking WhiteHat out the next time they come around.

Tuesday, February 5, 2008

Trust: to have confidence or faith in

(title is a definition from google.com)

Trust is what lets management sleep at night. Managers trust their employees to do the right thing and keep business progressing, and employees trust managers to take care of them. Much like any other part of business, on the application development and security side, trust is what gets us into hot water. Developers expect users to make mistakes, but they also trust them to not be malicious. Companies also trust third parties to develop applications securely, with proper access control, input validation, and zero back doors.

This undeserved trust is the cause of many application security vulnerabilities, and I'm not sure where it comes from. Some people say "trust but verify". I say "trust = validation": if you haven't performed validation on it, you can't trust it. This goes for data flowing in from the Internet, as well as for the new app being delivered by your overseas outsourcer. You can validate the data flowing across trust boundaries using regular expressions, and you can validate your new codebase using manual review and automated static analysis.
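
As a trivial illustration of the regular-expression side of that (the field and class names are made-up examples): strongly type the input by accepting only the exact shape it should have and rejecting everything else.

import java.util.regex.Pattern;

public class AccountInput {
    // Strong typing via regex: an account number here is exactly ten
    // digits; anything else is rejected before it crosses the boundary.
    private static final Pattern ACCOUNT_ID = Pattern.compile("^\\d{10}$");

    public static boolean isValidAccountId(String input) {
        return input != null && ACCOUNT_ID.matcher(input).matches();
    }
}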

I spoke with some developers today, and instead of trying to train them to hack their own applications, I explained to them how their blind trust of users and data is a paradigm that can't exist anymore. I think that if we can break this trust, coding habits will rapidly change. Teaching developers to be hackers is the wrong approach, but that's a post for another day.

I told these guys that you can never trust a user; you can only trust a user's data, and only after that data has been validated. Validation consists of strong typing of input as well as CSRF protection using anti-CSRF tokens. CSRF is a perfect reason for not blindly trusting users or authenticated requests in web apps, as you can't be sure an authenticated user actually intended to send the request.
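
For concreteness, here is a minimal sketch of the anti-CSRF token idea (class and method names are mine): generate a random per-session token, embed it as a hidden field in every form, and reject any state-changing request whose submitted token doesn't match.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfToken {
    private static final SecureRandom RNG = new SecureRandom();

    // Generate once per session, store it server-side (e.g. in the
    // HttpSession), and emit it as a hidden field in every form.
    public static String generate() {
        byte[] bytes = new byte[32];
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Compare in constant time; reject the request on any mismatch.
    public static boolean isValid(String sessionToken, String submitted) {
        if (sessionToken == null || submitted == null) return false;
        return MessageDigest.isEqual(
                sessionToken.getBytes(StandardCharsets.UTF_8),
                submitted.getBytes(StandardCharsets.UTF_8));
    }
}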

Next, I told them they need to compartmentalize the different components of their applications using trust boundaries. When you look at an application's trust boundary diagrams, you are really looking at data flow across the app. A picture says a thousand words, and when a developer sees that his "trusted" middle tier is just a conduit for tainted data from the Internet, he may think twice about trusting said middle tier.

I'll conclude by saying that trust is a lame way to place responsibility on someone else when you get hacked, and if you trust the wrong people, hacked is what you will be. Just ask Jérôme Kerviel.

2/6/2008 Note: The above link used to go to an interview with Jerome Kerviel which seems to have been taken down. Here is a new link to a different article: http://www.france24.com/france24Public/en/news/world/20080125-societe-generale-kerviel-banking-scandal-police-custody.php

Also, Mike Tracy has an interesting blog post up on Matasano here.

Friday, February 1, 2008

Event ID: 46, description: Unable to allocate development resources for security

It's been said before, I'll say it again: security does not come from tools.

The latest and greatest web app scanner or static analysis tool will not, on its own, make your applications and your business more secure. Unfortunately, many development groups and IT organizations, under the pressure of timelines and budget guidelines, believe that these tools will offer a solution to their application security woes.

That belief is . . . FALSE.

While these tools can be part of the solution, they are not THE solution. The real solution is to build a better process for developing code and deploying applications, one that produces fewer security vulnerabilities.

Such a process requires resources: people and time. This is a hard pill for development managers to swallow. From one manager: "I thought this static analysis tool was supposed to make things easier. Instead, you're asking us to do more work? WTF!"

The only reason it requires MORE work is that they were not dedicating any man-hours to security before. Yes, a scanner will find a large amount of vulnerability data in minutes. But to get any value from it, that data needs to be analyzed and the problems need to be fixed. That takes hours, days, even weeks.

I don't ask for a lot of resources; that scares people off. I start by asking dev management to pick one person on the development team to be in charge of security and source auditing. Once they pick someone, I ask for 8 hours per application, per week, and bargain down from there. If I get a development group to dedicate 8 man-hours per month to security, up from zero, I consider that a win.

Before we even begin looking at the source and using tools, I get the newly crowned security lead, and possibly their entire team, enrolled in basic application security training. Too many times I've scanned code with a developer only to find they didn't understand what the vulnerability data actually meant. Training is the only way to prevent that.

I also try to motivate the security lead to really become a security leader on their development team! They should be the one on the team who is excited about security, schedules additional training for everyone, gets management to spring for new books, etc.

To sum up, development organizations need to learn to write secure code for themselves. Businesses cannot solely rely on consultants and security tools to fix the problems with application security, as this is expensive and ineffective.