Wednesday, January 30, 2008

All your critical infrastructure are belong to us!

The other day, during an application security 101 training course, I heard someone say that denial of service attacks are a thing of the past. I don't know the current statistics on how many of these attacks take place today, and I'm not sure of the context of the comment, but overall I have to disagree. In fact, I think DoS attacks will become more prevalent, while the services that get disabled reach far beyond networks and web servers.

Maybe he was saying network-based DoS attacks are on the decline - although I would still disagree. But in reality, application security vulnerabilities open doors when it comes to finding ways to deny users legitimate use of a system. In far fewer packets than a SYN flood or other network-based DoS, an attacker can exploit a SQL injection vulnerability to drop tables, rendering a database and application useless.
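To make the drop-tables scenario concrete, here is a minimal sketch in Python with SQLite. The table name, payload, and `find_user` helper are all hypothetical; real database drivers vary in whether they permit stacked queries like this.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE users (id INTEGER, name TEXT);")

def find_user(user_input):
    # VULNERABLE: user input is concatenated straight into the SQL string.
    query = "SELECT * FROM users WHERE name = '%s';" % user_input
    # executescript permits stacked statements, as many production drivers do.
    return conn.executescript(query)

# A hypothetical attacker-supplied value: close the string literal,
# stack a DROP TABLE, and comment out the trailing quote.
find_user("x'; DROP TABLE users; --")
# The users table is now gone; the application is effectively dead.
```

The parameterized alternative, `conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))`, treats the same payload as plain data and leaves the table intact.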

How long does it take to recover from dropped tables? That depends on a number of things. First, does the victim regularly back up data? Second, do they even realize what has happened? Third, when they realize what has happened, do they even know how many SQLi vectors exist in their application? And do they know how to fix them?

My above example still focuses on denying users the use of an application. What if the application controls a utility service - like your electricity, phone, or water? An article on Security Focus covers some interesting events disclosed by a CIA analyst: "In multiple incidents, unknown attackers breached the networks of utilities and disrupted the power to cities outside the United States. . ."

The devices and applications used to control systems such as power utilities, health care services, and other critical infrastructure are classified as Supervisory Control and Data Acquisition (SCADA) and Distributed Control System (DCS) technologies. Another Security Focus article covers the debate between SCADA/DCS vendors and security researchers. While a researcher disclosing a popular OS flaw could bring havoc to some homes and businesses, the vendors complain that a researcher disclosing a security flaw in a SCADA/DCS system could lead to a breakdown of critical infrastructure.

Because of the work of independent security researchers, knowledge of SQL injection attacks has helped businesses learn to prevent them, respond to them, and fix them. This means that should your bank get hacked and its tables dropped, it should be able to recover quickly. If security research is not done on SCADA/DCS systems, how do we, the general public, know that the vendors are aware of their problems and know how to respond and fix them?

I could live with my bank being down for a week (keep your money in your mattress!). But I'd be pretty pissed off if I had no electricity because the power utility was being extorted.

Monday, January 28, 2008

Scanning Intranets using Web Sites

(Another follow-up to RSnake's post on hacking intranets using web browsers.)

I've been working on a utility that takes advantage of user-controlled server-side HTTP requests to perform host/port scans behind vulnerable web apps, reverse proxies, and WAFs. I have the guts of the scanner written, but now I need to make it configurable for different applications. I also need to improve the detection part a bit, as well as add some knowledge and fingerprinting capabilities.

An example of an application that uses this type of functionality is Google Documents - however, Google does perform validation on the host names users pass in. Fortunately for attackers, there are still many apps that don't. The security implication arises when the web application can access other networks/servers/applications that the end user is not authorized for, like the internal network behind the firewall that is not normally routable from the Internet.
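For comparison, here is a minimal sketch of the kind of host validation such an application should perform before fetching a user-supplied URL. The blocklist logic is illustrative, not exhaustive; a real implementation also has to worry about redirects, alternate IP encodings, and DNS rebinding.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url):
    """Refuse URLs that resolve to loopback, link-local, or RFC 1918 space."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        # Resolve the name once and inspect the resulting address.
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

An app that called `is_safe_url()` before its server-side fetch would refuse both `` and ``, closing the door this scanner walks through.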

In addition to web applications, this type of functionality also shows up in SSL VPNs and reverse proxy appliances. One example, which happens to have a demo site with creds, is the SonicWALL SSL-VPN:

If you log in and click on the Microsoft Outlook link, you will see this in your address bar:

What if you were to modify that part of the URL?

Fortunately, other SSL-VPN vendors take a better approach. The Microsoft IAG (originally the Whale appliance - I worked for Whale, and then Microsoft) encrypts the host name within the URL. They call this Host Address Translation, or HAT for short. The last time I worked on an IAG, it still had a "Browse Internal Applications" form which, once a user was authenticated, allowed them to specify arbitrary internal hosts. If the IAG was configured to provide access to that internal host, it would serve up content from the host or complain about insufficient privileges (IAG has some robust ACL management). If the host was not configured for access, the IAG would serve an appropriate error message. So while it is still possible to enumerate some hosts, the IAG does what it can to lock things down.

While reverse proxies make host enumeration easier, they generally (but not always) require authentication, making them less vulnerable to these attacks in the wild. Nonetheless, this info is good if you ever need to perform an assessment of one. Generic web applications are a different story.

When an application lets a user specify a URL to be fetched, the user types the URL into their browser and submits the form. Once they press submit, the clock begins ticking as they wait for their request to reach the application, the application to parse it, and the application to initiate a new request for the specified URL. Once the application gets a response to its request, it can build a response to send back to the user. This is the same as with a reverse proxy. The big difference between a reverse proxy and a generic web app is that the web app might not be configured to send all the content back out to the client. For example, consider an app that allows a user to import an image from a different web server. Even if you specify a URL that is not an image and the application actually fetches it, the app may perform some content checking and never show you the results.

Since we can't rely on viewing the actual content, the only piece of information the client has about the backend server is the time it takes for this transaction to complete. By analyzing the time it takes for this request -> request -> response -> response chain to take place, the client can determine characteristics of the backend server.

For example, I might pass the Internet-facing web application a URL pointing at an internal host. Assuming that address is routable from the web server, the web app will initiate a TCP connection to that host on port 80. Next, let's assume that port 80 is not blocked by a firewall, and that the server is in fact listening on port 80. This should result in a relatively quick transaction time, meaning it should not take very long to receive a response from the Internet-facing web app. However, if port 80 is not listening, not speaking HTTP, or is blocked by a firewall, then this transaction could take much longer. By specifying different ports and analyzing the response time, we can determine which ports are listening on the server.

Unfortunately, there are still many variables that can affect the timing of our request. First, we have network/Internet latency. Under normal circumstances with minimal latency, we can determine a baseline average response time from the web server. Second, we need to have an idea of how long it takes for a script on the web server to time out. Third, we need a handle on how long the server component used to make the backend request on our behalf takes to time out. Below is an example profile of a vulnerable web app:

Script: fetch.aspx?url=
Server: IIS 6.0
Default server script timeout: 90 seconds
Average response time for 100 requests: 0.7 seconds
Server-side request component: WinHttp.5.1
Default component timeout: 30 seconds
*WinHttp will fail immediately when attempting to connect over ports 21/22/23.
Average response time on forced component failure: 0.27 seconds
Average response time when attempting known bad hosts (DNS failure): 30 seconds

From this profile, we can make some assumptions about communicating with hosts behind our web server. If there is a web server listening on the web server's local network, we can assume that a response would take less than 0.7 seconds (but not necessarily) and longer than 0.27 seconds (the time when the component fails outright). If we hit the 90-second server timeout, we know that WinHttp did not time out (it is communicating with something) but had not yet returned control to the script. This is a good indication that the specified server/port may be listening. If we get a response in 30 seconds, we know that WinHttp timed out while trying to contact the host, and that either the host is down or the port is closed.
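The decision logic above can be sketched as a small classifier over measured durations. The thresholds here are the profile numbers from the hypothetical fetch.aspx app and would have to be re-measured against any real target.

```python
COMPONENT_FAIL = 0.27     # WinHttp failed immediately (refused/filtered)
BASELINE = 0.7            # normal response from a listening HTTP server
COMPONENT_TIMEOUT = 30.0  # WinHttp gave up (host down or port closed)
SCRIPT_TIMEOUT = 90.0     # ASP script timed out (something answered, slowly)

def classify(duration):
    """Guess the state of a backend host:port from total transaction time."""
    if duration <= COMPONENT_FAIL:
        return "refused or filtered (component failed fast)"
    if duration <= BASELINE * 2:
        return "open, speaking HTTP"
    if abs(duration - COMPONENT_TIMEOUT) < 2:
        return "closed or host down (component timeout)"
    if duration >= SCRIPT_TIMEOUT:
        return "possibly open but not HTTP (script timeout)"
    return "inconclusive"
```

In practice each measurement would be repeated several times and compared against a freshly sampled baseline, since network jitter alone can push a fast response past these cutoffs.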

I'll wrap up this post with a snip of source code. Python makes it very easy to time code execution. This line calls and times the execution of the fetch_page() method.

from timeit import Timer

duration = min(Timer(setup="from __main__ import fetch_page",
                     stmt="fetch_page(%r)" % host).repeat(1, 1))
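To put that line in a runnable context, here is a sketch where fetch_page() is a stand-in stub; the real one would request the vulnerable fetch script with the target host in the query string. Modern Python's timeit also accepts a globals argument, which replaces the "from __main__ import" setup string.

```python
import time
from timeit import Timer

def fetch_page(host):
    # Stand-in for the real HTTP round trip through the vulnerable app;
    # it just sleeps so there is something measurable to time.
    time.sleep(0.05)

host = ""  # hypothetical target behind the app
# Three timed trials of one call each; min() keeps the least noisy sample.
duration = min(Timer(stmt="fetch_page(%r)" % host,
                     globals=globals()).repeat(3, 1))
```

With the real fetch_page() in place, `duration` is the number fed into the classification logic described earlier.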

Hopefully soon I'll have a script up for interested folks to try!

Thursday, January 24, 2008

XMLHTTP on the Server Side

RSnake's post on intranet hacking using web sites got the gears in my head turning a couple of months ago.

I started playing around with the MSXML and WinHTTP libraries from Microsoft. A quick Google search for XMLHTTP will generate a lot of results concerning client-side/AJAX stuff, but over the years I've seen XMLHTTP used quite a bit on the server side (for server-to-server communication). I've also seen it used interchangeably with WinHTTP to make server-side HTTP requests.

I wrote a simple classic ASP script to do some tests. The script takes a resource location as a querystring parameter and makes a server-side request for the resource. It then sends the response to the user via Response.Write(). Since this is a simple script and performs no validation, you can pass it internal host names on the web server's local network, send HTTP requests to them, and analyze/display the responses (or lack thereof). This is the basic premise of RSnake's paper.

dim http
set http = createobject("Msxml2.XMLHTTP")
'set http = createobject("Msxml2.ServerXMLHTTP.4.0")
'set http = createobject("Microsoft.XMLHTTP")
'set http = createobject("WinHttp.WinHttpRequest.5.1") "GET", request.querystring("url"), false
response.write http.responseText
set http = nothing

Next, I started playing around with different protocols. Not too much interesting here, until I got to the file:// protocol, which WinHTTP complained was invalid.

Example URL: file:///c:/windows/system32/drivers/etc/hosts

MSXML.XMLHTTP, on the other hand, did not:

Using XMLHTTP allowed me to view arbitrary files on the web server. Pretty cool. So if you use MSXML.XMLHTTP in your application and pass it user-supplied URLs without validating them, you could introduce an information disclosure vulnerability in your application.

I did some more digging to find out the appropriate uses of the different versions of MSXML and WinHTTP. I found one interesting discussion here.

I also found that there is a separate class in MSXML to use when doing server-to-server communication. While MSXML2.XMLHTTP returned local files, MSXML2.ServerXMLHTTP gave the same error that WinHTTP does: Invalid Protocol.

So it turns out that the ServerXMLHTTP class is more robust than regular XMLHTTP. From

"With version 3.0, MSXML introduced ServerXMLHTTP, a class designed to do server-side HTTP access. ServerXMLHTTP is based on WinHTTP, a new HTTP access component from Microsoft that is designed to be thread-safe, scalable, and UI-less. ServerXMLHTTP is really a very powerful class and can really come very handy while doing server-to-server or cross-tier HTTP data communication."

To sum this post up, if you're going to allow users to specify resources for server side requests via MSXML, use ServerXMLHTTP!

Wednesday, January 23, 2008

Crying Wolf

It seems a lot of software development people have a hard time committing money and resources toward proactive security measures. They rationalize this mindset by telling themselves, "Nothing bad has happened yet, so why should I spend resources on security?" Well, at least nothing has happened that they know about. The underlying problem is that they don't understand the threats or the risks.

When these people discover that a breach or incident has occurred, that mindset quickly changes. "Quick, call the information security people we've been giving the runaround since they wanted to pen-test our app and review our source code! And how much money was lost?" Now they want answers, but do they even know the right questions to ask?

When a development organization has to respond to an incident, and they find the application has inadequate logging and is full of vulnerabilities, they suddenly realize how much they coulda/shoulda done to better protect themselves. Additionally, they find that they don't even know how best to react to such an incident. Many organizations already perform regular penetration tests and other security reviews. Unfortunately, as I describe above, many development groups lack the proper motivation to fully understand and respond to the results of these exercises. I think one way to change this mindset is to perform an incident response drill.

In such a drill, a person from the InfoSec side of the company, possibly in cahoots with someone from the business side, would approach the development organization with a fake "complaint" from a real client. For example, Acme Products claims they are seeing strange withdrawals from their accounts, and one of them is for a large sum of money. Acme says they never authorized these transactions. Now see how the developers react. Can they provide any evidence contradicting the report? I would hope they could at least produce a record of all transactions processed under Acme's account, with some guarantee of non-repudiation.

An exercise like this would have better results if the application developers never learn - before or after - that the exercise is a fake. A little bit of paranoia and FUD on the part of the developers might go a long way toward motivating them to better understand the threats/risks their applications face.

I googled "incident response drills" and there were only two relevant hits. I wonder if any InfoSec people are putting this technique into practice?

Tuesday, January 22, 2008

Web Services Training in NYC

Web Service security training with Gunnar Peterson in NYC on 3/10 & 3/11.

Are your loved ones "Hacker Safe"?

There has been a lot of fuss across the 'net over being hacked while proudly displaying the "Hacker Safe" image from ScanAlert. I'm sure that even before this incident, people have been blogging about certified "Hacker Safe" sites which suffer from such prevalent vulnerabilities as cross-site scripting. Jeremiah Grossman has an interesting blog post where he quotes ScanAlert's Director of Enterprise services and points out some inconsistencies in ScanAlert's approach to security and XSS.

I'll spare you the technical details of XSS and ScanAlert, as I'd rather describe an interesting conversation I had with my fiancée a few weeks ago. My fiancée, who is admittedly non-technical and HATES when I talk about computers, security, and other geeky stuff, called me while she was trying to book us movie tickets on

She said: "I'm trying to get us tickets. The web page is asking me for my birthday, and it has this Hacker Safe thing - what the heck does that mean?"

The Hacker Safe logo totally aroused her suspicions that something fishy was going on with the site. I told her that it doesn't necessarily mean something malicious is going on, but it also proves absolutely nothing about the security of that web site and her data. She considers her birthday sensitive information, questioned its relevance to purchasing tickets, and the Hacker Safe logo just made her feel uncomfortable. In the end, she opted NOT to book the tickets online, and we would take our chances with long lines at the box office.

I was so proud of her! What a vigilant web surfer she is. Surely my being a security professional has gotten to her, and maybe she is getting over her abhorrence of computer speak. Excited, I went into geek overdrive and started telling her to download Firefox, run NoScript, and how she should really start using separate web browsers. That was followed up by a quick "WHATEVER."

Oh well. Baby steps, I guess....