Zu Hilfe! Zu Hilfe! der listigen .. firewall?

- by Paul Murphy -

If you ever feel in need of a lesson in humility, try reading through the TCP/IP RFCs and related literature. I have two questions I have no idea how to answer, and rather naively expected that reading this material would help. It didn't - in truth because I didn't understand most of it - so now I'm asking you to explain the issues to me.

The two questions are, first: why can't router software let us stamp out address spoofing? And second: why do we use firewalls at all?

Address spoofing depends crucially on being able to hide the real source address, so why not make that impossible?

One way to do it would be to have all the ISPs and network carriers whose connections constitute the internet certify where packets entering the network come from. Any packet's origin is characterised, from an internet perspective, by the point at which it first reaches part of the shared resource - usually a router or other device maintained by an ISP or backbone carrier. Suppose, therefore, that we put software on those devices that allowed them to form a self-authenticating community and insert a signed source address into every packet forwarded from the customer's premises.
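To make that concrete, here's a rough sketch of the kind of thing I mean - purely hypothetical, written in Python rather than router firmware, with made-up names throughout, and assuming the participating carriers have some out-of-band way of sharing keys:

    import hmac, hashlib

    # Hypothetical sketch: a carrier-edge router tags each customer packet
    # with a keyed signature over its source address. How carriers share or
    # certify keys is assumed to be handled out of band (e.g. a shared PKI).

    CARRIER_ID = "AS64500"                              # illustrative carrier identifier
    EDGE_KEY = b"shared-secret-for-illustration-only"   # stand-in for real key material

    def tag_packet(src_addr: str) -> dict:
        """Return the origin tag the edge router would carry alongside the packet."""
        mac = hmac.new(EDGE_KEY, f"{CARRIER_ID}|{src_addr}".encode(), hashlib.sha256)
        return {"carrier": CARRIER_ID, "src": src_addr, "sig": mac.hexdigest()}

    print(tag_packet("198.51.100.7"))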

You can do that yourself if you have a network of machines running Solaris 2.7 or later, and the capability appears to be built into IPsec too, but the goal here would be to shift responsibility - and therefore control - from the originator to the carrier first putting the packet on the shared network. There seems to be room for this in the standard, the technologies needed are well understood, and the number of major carriers and ISPs involved is relatively small.

Making it happen shouldn't be very hard. The carrier's role as the packet producer's gateway to the internet means that attempts to send packets with false source information already loaded will fail, while internal authentication among the participating routers should make spoofing at the point of delivery impossible.

If even a relatively small percentage of email and other net-aware applications were changed to make use of the resulting unforgeable "this came from" information in every arriving packet, both web and spam spoofing would very quickly become impractical.
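On the receiving side the check is equally simple - again a hypothetical sketch, reusing the illustrative tagging scheme above:

    import hmac, hashlib

    EDGE_KEY = b"shared-secret-for-illustration-only"   # same illustrative key as above

    def verify_origin(tag: dict) -> bool:
        """Recompute the signature over the claimed carrier and source; reject mismatches."""
        expected = hmac.new(EDGE_KEY,
                            f"{tag['carrier']}|{tag['src']}".encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag["sig"])

    forged = {"carrier": "AS64500", "src": "10.0.0.1", "sig": "0" * 64}
    print(verify_origin(forged))   # False - the claimed source doesn't verify, so drop it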

That has good consequences for both the anti-spam and anti-phishing forces. For example, software to automate ISP source tracing and customer notification if a PC or other device started spewing spam would be almost trivial - meaning that pressing someone else's machines into service through a virus or worm would become much less effective than it is now. More importantly, spam recipients could safely respond to spam in the obvious way - by bouncing the mail back to its real source several dozen times - thus mounting a denial of service response against the spammer without allowing others to use the predictability of this response as a means of initiating a denial of service attack on the innocent.
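That tracing-and-notification software really would be close to trivial - here's a minimal sketch, assuming the carrier can map a verified source address back to a customer account; the report format and the threshold are made up for illustration:

    from collections import Counter

    # Hypothetical: reports arrive as verified (carrier, source address) pairs
    # taken from the origin tags on spam that recipients flagged or bounced.
    spam_reports = [("AS64500", "198.51.100.7")] * 40 + [("AS64500", "203.0.113.9")] * 2

    def customers_to_notify(reports, threshold=25):
        """Return source addresses whose spam-report count exceeds the threshold."""
        counts = Counter(reports)
        return [src for (carrier, src), n in counts.items() if n >= threshold]

    for src in customers_to_notify(spam_reports):
        # In practice this would look the address up in the billing system and
        # open a ticket or send mail; printing stands in for that here.
        print(f"notify customer at {src}: machine appears to be spewing spam")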

So what's wrong with this picture to explain why it isn't done? I haven't a clue, do you?

My second issue is both more straightforward and stranger. I've been watching a client blow up his network by installing SP2 in support of the notion that several thousand PCs inside a highly protected, and carefully segmented, network should all have their own firewall protection too. That seemed a bit of overkill to me, but asking why they thought this appropriate showed that they were responding on auto-pilot and thus raised the more general question: why use firewalls at all?

Like everyone else, I know firewalls should be used - but not why. Is this something sensible, like wearing a seatbelt, or an obsolete workaround - like DHCP - for long-gone limitations in Windows?

The way firewalls work is instructive here. In the simplest case, arriving packets are captured before any local processing takes place and are either discarded or passed on for normal handling, depending on whether the destination port is supposed to be open. Think about that: am I being naive, or wouldn't simply not starting the service have the same effect without incurring the overheads associated with the firewall?
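To see what I mean, here's that whole decision reduced to a toy example - not any particular product's logic, just the simplest case described above, with an illustrative policy:

    OPEN_PORTS = {22, 443}   # illustrative policy: only ssh and https are "supposed to be open"

    def filter_packet(dst_port: int) -> str:
        """Simplest-case firewall decision: drop or pass on destination port alone."""
        return "pass" if dst_port in OPEN_PORTS else "drop"

    print(filter_packet(25))    # drop - but a host with no mail daemon listening refuses this anyway
    print(filter_packet(443))   # pass

For TCP, the practical difference is only that a port with no listener answers a probe with a reset, where a firewall that drops the packet answers with silence.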

Obviously there are some services users don't want to do without, but sometimes that's just too bad - my own secure workstation, for example, is protected by a combination of Solaris 2.8 on SPARC and an empty socket where the network connector should be.

Most of the time, however, layering functionally identical firewalls just seems to lead to user frustration without providing much in the way of real protection. Not only do things like Microsoft's use of SOAP and webXML as a programming environment for the internet provide nearly ideal means of bypassing firewalls, but the reality is that even relatively simple hacks can render most firewalls useless. Two widely known methods from the past, for example, used the TTL (time to live) setting and significant differences in the packet assembly process used by Unix and Windows derivatives to bypass both firewalls and active intrusion detection systems.
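The TTL trick, for instance, comes down to crafting segments that an in-path monitor sees but the end host never receives - here's a conceptual sketch using scapy, with a placeholder test address, assuming the monitor sits fewer hops out than the target:

    from scapy.all import IP, TCP, Raw, send   # requires scapy and raw-socket privileges

    TARGET = "192.0.2.10"    # placeholder address (TEST-NET-1), not a real host

    # If the firewall/IDS sits, say, three hops out and the target five, a TTL of 4
    # means the monitor sees and records this data while the host never receives it,
    # desynchronising the monitor's view of the TCP stream from the host's.
    decoy = IP(dst=TARGET, ttl=4) / TCP(dport=80, flags="PA") / Raw(b"noise for the monitor")
    send(decoy)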

There are more esoteric attacks - reprogramming the firewall's own receipt-side NIC produces an undetectable bypass - and easier ones like over-powering a wi-fi installation, but such things may simply not be necessary at all. After all, if firewalls are so valuable, where's their impact on the spread of worms? When SQL-Slammer took roughly nineteen minutes to slam most of the world's network-facing Windows machines - including Microsoft's own - where were the firewalls? Out of the loop and useless, that's where.

So why do we use firewalls instead of debugging services for security-related errors, using standard tools like sub-netting and non-routable packets to limit external exposure, and turning off things we don't need - like SMB networking, error reporting, or QoS futzing? The more I worry at it, the less obvious the answer becomes, so if you can tell me - without calling me an idiot - I'd really like to know.
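Finding out what there is to turn off is itself nearly trivial - a minimal sketch, assuming the third-party psutil package is available on the machine in question:

    import psutil   # third-party package; assumed available for this sketch

    # List every socket in LISTEN state so you can decide which services
    # actually need to exist, rather than hiding them behind a filter.
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN:
            proc = psutil.Process(conn.pid).name() if conn.pid else "?"
            print(f"{conn.laddr.ip}:{conn.laddr.port}  ({proc})")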


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry.