Friday, May 16, 2008

Black Hat USA 2008

I have some exciting news. Well, exciting for me anyway :-)

I was accepted to speak at Black Hat 2008 this summer in Las Vegas. You can read my abstract here. I'm in the network track (since I'm talking about SSL VPN) but rest assured there will be a ton of application security content in my talk.

A lot of big names in the security space will also be speaking there, and it's an honor to be included among them.

Wednesday, May 14, 2008

Changing target landscapes

Dancho Danchev asks whether the recent mass SQL injection attacks are intended to steal databases from vulnerable sites, or to build a network of compromised hosts for later attacks. I would assume that an intelligent attacker would do both.

As popular targets harden, attackers will adapt and look for ways to exploit less popular targets en masse. While it may have been easy a few years ago to compromise one valuable target, today it might be easier (and safer) for an attacker to compromise 100 smaller targets and get the same value.

As the target landscape changes, attacks and toolkits will mature, making it easier to automatically compromise sites using a wide range of vulnerabilities. A few years ago, we had big compromises at large organizations like TJ Maxx. Today, we see more attacks targeted at smaller organizations, like the most recent one I know of at multiple locations of a popular restaurant chain in the States.

In the same way new and expensive technologies trickle down from the largest companies that can afford them, attackers will trickle down from the hardened, juicy targets to the smaller and softer, yet more prevalent, ones.

Tuesday, May 6, 2008

A Dynamic Approach to Static Analysis

The following is intended for people who want to do things differently, and see an improvement in the security of their web applications. If you are looking for an acronym and a quick fix to get insecure code fast-tracked into production so you get a bigger bonus, go here.

The problem with static analysis tools is not false positives. It's also not getting code to compile, or integrating source code scanning into the nightly/weekly/monthly build process. These are limited, one-time challenges that can be overcome with a few days, sometimes just hours, of work by a capable individual.

In my opinion, the problems that hamper the success of static analysis tools are:

1. Management sees source code scanning as a panacea, and lacks an understanding of the business process changes needed to support static analysis in a large development organization.

2. Developers are often not trained to fully understand the findings the tools provide, and are not given the resources and education required to fix bugs or alter bad development practices.

Management likes to know about the badness in their apps, and the pretty graphs and reports that the tools can generate make them feel like they have a handle on things. Unfortunately, the 1500 cross-site scripting vulnerabilities in the report won't disappear unless the developers have time to fix them.

Making the vulns disappear from the chart might make management feel warm and fuzzy, but if they are merely obfuscated so that the tool doesn't see them, everyone is even worse off than before. That's why it's important that your developers know how to properly fix the problems. A blacklisting routine may trick your static analysis tool, but it won't trick a pen-tester - or an adversary.
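To make that concrete, here's a quick sketch (in Python, everything hypothetical) of the kind of blacklist routine I'm talking about:

def naive_blacklist(value):
    # Strip a few well-known attack strings -- and nothing else.
    for bad in ("<script>", "</script>", "javascript:"):
        value = value.replace(bad, "")
    return value

# The data now flows through a "sanitizer," which may be enough to
# silence a taint-tracking tool. Trivial variations still sail through:
print(naive_blacklist("<ScRiPt>alert(1)</ScRiPt>"))     # mixed case survives
print(naive_blacklist("<img src=x onerror=alert(1)>"))  # event handlers survive

The scanner sees a "fix"; the first pen-tester to try a mixed-case tag sees a finding.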

The first time you scan your code and find that you have a gazillion cross-site scripting vulnerabilities, your first priority should not be to put the static analysis tool on everyone's desktop, or to configure nightly automated scans. If you want to make some serious improvements to your code base, try following these three steps:

Step One - Get Help
Hire a security architect - or bring in a consultant - to help you understand how the security issue affects the application as a whole, as opposed to individual lines of code. Once you realize that the application fails to properly sanitize input across the board, you can develop a remediation plan that solves the problem at the same scale (like a secure API).

Step Two - Get Resources
Make management aware of the problem and how it is costing them money. Exploit vulnerabilities in the application in front of them. Show them how YOU can hijack THEIR account. When you do this, you also need to describe your remediation plan and tell them what resources you need to make things right. Show them how efficiently you can standardize input validation using a secure API. Your consultant or architect should be able to help you here. If not, get a new one!
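What that secure API looks like will depend on your stack; as a rough illustration, a minimal Python sketch (all names hypothetical) might centralize validation and encoding like this:

import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def valid_username(value):
    # Whitelist validation: accept exactly what you expect, reject the rest.
    return bool(USERNAME_RE.match(value))

def encode_for_html(value):
    # Contextual output encoding for HTML bodies and attributes.
    return html.escape(value, quote=True)

The point isn't these two functions; it's that there is exactly one blessed place to fix, tune, and review, instead of ad-hoc string munging scattered across the code base.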

Step Three - Get Results
Once management has eagerly met your demand for resources, train your developers. Exploit the same vulnerabilities in front of them that you showed management, but also show them where the problem exists in the source code. Show them how easy it is to fix each vulnerability with the secure API! Send them off to review and fix their own code. Follow up with a static analysis scan in a week or two.
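Continuing the hypothetical sketch above, the demo practically writes itself - the vulnerable line and the fixed line differ by one call:

def greet_unsafe(name):
    # Before: the classic reflected XSS the scanner flagged.
    return "Hello, " + name + "!"

def greet_fixed(name):
    # After: the same line, routed through the blessed encoder.
    return "Hello, " + encode_for_html(name) + "!"

print(greet_fixed("<script>alert(1)</script>"))
# Hello, &lt;script&gt;alert(1)&lt;/script&gt;!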

After your first round of step three, you should see a sharp decline in the number of vulnerabilities. Let your developers know the good news, and keep repeating step three until input validation problems are a thing of the past!

After that, you can focus on things that static analysis tools are not good at finding - like access control issues in your business logic.
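For example (again, a hypothetical sketch), this is the sort of bug no taint rule will flag, because the problem is a check that isn't there:

def get_invoice(db, session, invoice_id):
    # Parameterized SQL, no tainted sink -- a scanner is perfectly happy.
    # But nothing verifies the invoice belongs to the logged-in user.
    return db.execute(
        "SELECT * FROM invoices WHERE id = ?",
        (invoice_id,),
    ).fetchone()

def get_invoice_checked(db, session, invoice_id):
    # The business rule lives in the query: you only get your own invoices.
    return db.execute(
        "SELECT * FROM invoices WHERE id = ? AND owner_id = ?",
        (invoice_id, session["user_id"]),
    ).fetchone()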

Friday, May 2, 2008

Transloading? WTF!

Some research I've been doing led me down an interesting path. The path of the WebTV user (or Webbie). I've never used or seen a WebTV, but over the past few days I've come across a number of sites that INSIST I browse them from a WebTV or they will not let me in :-)

Anyway, the sites I've been exploring are called "transloading" sites. Transloading is when you tell ServerA.com to fetch a URL from ServerB.com, and store it on ServerC.com. WebTV users need this, I presume, because either they can't store files locally and then upload them, or it is just a big PITA.
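Strip away the web form and the whole service is maybe a dozen lines; a hypothetical sketch with Python's standard library:

import io
import urllib.request
from ftplib import FTP

def transload(source_url, ftp_host, ftp_user, ftp_pass, dest_name):
    # Fetch the file from ServerB.com on the user's behalf...
    data = urllib.request.urlopen(source_url).read()
    # ...then push it to ServerC.com with the credentials they typed
    # into a web form.
    ftp = FTP(ftp_host)
    ftp.login(ftp_user, ftp_pass)
    ftp.storbinary("STOR " + dest_name, io.BytesIO(data))
    ftp.quit()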

The fetching part is the normal application/proxy stuff I usually rant about. The storage part is a little more interesting. All of these sites request an FTP host, user name, and password, ask if you wish to rename the file, and sometimes even a desired permission level to set on the new file. Forget about the fact that none of them use SSL when transmitting these credentials. Forget, again, that this type of site reinforces bad habits that lead to people getting phished.

The scariest part is that some of these sites are also pretty good at spitting out the command line results when they attempt to FTP to your specified host. There seem to be some pretty obvious command injection vulnerabilities in many of them. So even if you're a shrewd "webbie" who trusts the person hosting the transloading script, their box could easily be compromised and your FTP creds harvested by someone else.
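The pattern I keep seeing looks roughly like this (a hypothetical sketch; the CLI tool and its flags are made up, but the shape is faithful):

import subprocess

def transload_vulnerable(host, user, password, filename):
    # Raw form fields spliced straight into a shell command. A filename
    # like "x; cat /etc/passwd" (or backticks in the password field)
    # executes as the web server user -- and the "helpful" output gets
    # echoed right back to the page.
    cmd = "ftpput --host %s --user %s --pass %s %s" % (
        host, user, password, filename)
    return subprocess.getoutput(cmd)

The fix is the ftplib approach in the earlier sketch: credentials and file names travel as data, and no shell ever parses them.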

Many of the transloading sites actually seem to use the same CGI script to do their job. I haven't been able to locate a copy of this script, but I haven't tried too hard to find it either. If you want to know more about transloading, check out Beth's site.