kestas.kuliukas.com

Simple firewalling and traffic shaping with PF

Update 9/9/08: The latest pf.conf, which has had three more years to collect comments. It should provide some extra help, in addition to the article below, to those who want a full example to frame their developing syntax knowledge.

This isn't a guide to pf syntax; that's what man pf.conf is for, and it's pretty self-explanatory anyway. I've also been told that I overuse 'quick', but I wrote this anyway because I could have used a walkthrough like this when I started.

Anyway here is a walkthrough of my pf.conf:

I've got a home LAN, connected to the internet via PPPoE through a FreeBSD gateway. I've got 512Kb down / 128Kb up, and the gateway machine runs some services: sshd, httpd, a proxy and samba. Only sshd and httpd are accessible from the internet, and sshd comes under frequent attack. The real reason I wrote this, though, is that anyone can hog the connection, and then I can't play Soldat. :P So I want to do some traffic shaping so that I can do what I want even while others are downloading or video conferencing. Our private subnet is 10.0.0.0/8.

The basics

Okay, first we set up the basics: variables to hold details which may change, a banned address table, and our first rules. We just want to open up the loopback interface and make sure nothing weird is going on (like packets addressed to 127.0.0.1 coming from the internet).

### Macros
ext_if="tun0" # External interface
int_if="fxp0" # Internal interface
pri_addr="10.10.2.3" # My address

### Tables
# Non-public/weird addresses, doesn't include our 10.10.x.x subnet, anything in here shouldn't be going anywhere
table <banned> { 192.168.0.0/16, 192.0.2.0/24, 172.16.0.0/12, 127.0.0.0/8, 0.0.0.0/8, 169.254.0.0/16, 224.0.0.0/3, 204.152.64.0/23 }

### Options
# We want to send a TCP RST or ICMP unreachable when a packet is blocked; if we don't, people have to wait for a timeout
set block-policy return

### Filtering
# Let all loopback traffic through
pass quick on lo0
# Make sure no traffic is trying to get into the loopback interface from outside
block quick from any to lo0:network

#--- Making sure all traffic is coming to/going from the right interface
# Make sure no banned addresses are around
block quick from <banned> to any
block quick from any to <banned>

# all traffic to/from the internal network is addressed to/from the internal network
block in quick on $int_if from ! $int_if:network to any
block out quick on $int_if from any to ! $int_if:network

# all traffic to/from the external network is addressed to/from our external address specifically
# ($ext_if) is in parentheses because the IP is dynamic; the parentheses tell pf the IP may change
block in quick on $ext_if from any to ! ($ext_if)
block out quick on $ext_if from ! ($ext_if) to any


block log all



Now we've made sure all traffic is going to and coming from only where it should be. But apart from allowing traffic on the loopback interface we've only said which traffic we don't want; we still have to open up the firewall to let ourselves through.

From our box, out

First we'll allow connections initiated from the gateway. We'll use 'keep state' a lot in this config; it means pf only has to track connections as they open. When you connect to a web server lots of packets fly back and forth, but they're all part of the same connection. Once a connection has been opened we already know it's fine, so pf creates a state which remembers that the connection is open and lets all its packets through without running them through the rules.

Also, here we take advantage of the fact that not all users should be allowed to connect outwards. Of our two external services, sshd and httpd, httpd is by far the more likely to be compromised, and if it is compromised the attacker may be able to get into the postgresql and mysql accounts too.
If they can get into any of these accounts we have to limit what they can do; by limiting their ability to connect out we can stop them from running reverse-shell exploits, setting up a warez server or DDoS zombie, or penetrating deeper into the network.

It'd probably be more elegant to put all the users we want to allow into a group called 'network' or something and allow only that group, rather than blacklist users, but this'll do.

[...]
### Filtering
# Let all loopback traffic through
pass quick on lo0
# Make sure no traffic is trying to get into the loopback interface from outside
block quick from any to lo0:network

#--- Making sure all traffic is coming to/going from the right interface
# Make sure no banned addresses are around
block quick from <banned> to any
block quick from any to <banned>

# all traffic to/from the internal network is addressed to/from the internal network
block in quick on $int_if from ! $int_if:network to any
block out quick on $int_if from any to ! $int_if:network

# all traffic to/from the external network is addressed to/from our external address specifically
block in quick on $ext_if from any to ! ($ext_if)
block out quick on $ext_if from ! ($ext_if) to any

#>>> From this box
#--- Don't let restricted users initiate their own connections
# icmp doesn't seem to be associated with user accounts, so we have to
# specify tcp and udp
block log quick proto { tcp, udp } from any to any user \
{ www, mysql, pgsql, conrad, algis, nobody, games, news, man, smmsp, mailnull, pop, uucp, bind }
#--- Allow outbound connections from this server
pass out quick on $int_if proto { tcp, udp, icmp } from $int_if to $int_if:network keep state
pass out quick on $ext_if proto { tcp, udp, icmp } from ($ext_if) to any keep state

# We can check what gets caught here to see if we've missed anything
block log all



Allow NAT

Now we can initiate connections out to others, but no-one can connect to us. This might be the end of the line for a single-user client computer, but we've still got to allow NAT and services.

Here we set up NAT

[...]
### Translation: specify how addresses are to be mapped or redirected.
# nat: packets going out through $ext_if with source address in the network will
# get translated as coming from the address of $ext_if, a state is created for
# such packets, and incoming packets will be redirected to the internal address.
nat on $ext_if proto { tcp, udp, icmp } from $int_if:network to ! $int_if -> ($ext_if)

### Filtering
# Let all loopback traffic through
[...]



We've set NAT up, but we're not yet allowing the traffic through in the filter rules.

[...]
# all traffic to/from the external network is addressed to/from our external address specifically
block in quick on $ext_if from any to ! ($ext_if)
block out quick on $ext_if from ! ($ext_if) to any

#<<< Through this box >>>
#--- Allow NAT traffic out, create a state so traffic can get back in
pass in quick on $int_if proto { tcp, udp, icmp } from $int_if:network to ! $int_if keep state

#>>> From this box
#--- Don't let restricted users initiate their own connections
block out log quick from any to any user \
{ www, pgsql, conrad, algis, nobody, toor, games, news, man, smmsp, mailnull, pop, uucp, bind }
#--- Allow outbound connections from this server
pass out quick on $int_if proto { tcp, udp, icmp } from $int_if to $int_if:network keep state
pass out quick on $ext_if proto { tcp, udp, icmp } from ($ext_if) to any keep state

# We can check what gets caught here to see if we've missed anything
block log all



From the LAN, to the box's services

Okay, now we can access the internet, but still nothing can access the gateway machine itself. We've got to poke some holes through to allow for services on the gateway.

[...]
# all traffic to/from the external network is addressed to/from our external address specifically
block in quick on $ext_if from any to ! ($ext_if)
block out quick on $ext_if from ! ($ext_if) to any

#<<< To this box
#--- Internal services, there aren't many people internally but each'll be using lots of connections
# httpd, samba, and the proxy servers
# 2000 max states, 20 max users, 100 max states per user
pass in quick on $int_if proto tcp from any to $int_if port { www, 445, 139, 3001, 3128 } flags S/SA \
keep state (max 2000, source-track rule, max-src-nodes 20, max-src-states 100)

# Treat sshd differently so that even if all other services are maxed out sshd will be available
pass in quick on $int_if proto tcp from any to $int_if port ssh flags S/SA \
keep state (max 20, source-track rule, max-src-nodes 2, max-src-states 10)

# If traffic is still inbound and addressed specifically to this box, it's not using our services, so don't let it in.
# This rule is probably redundant, as any traffic still coming in will fall into the 'block log all' at the end,
# but it's reassuring that you don't have to worry about traffic coming in from here on out.
block in log quick from any to { $int_if, ($ext_if) }

#<<< Through this box >>>
#--- Allow NAT traffic out, create a state so traffic can get back in
pass in quick on $int_if proto { tcp, udp, icmp } from $int_if:network to ! $int_if keep state

#>>> From this box
#--- Don't let restricted users initiate their own connections
block out log quick from any to any user \
{ www, pgsql, conrad, algis, nobody, toor, games, news, man, smmsp, mailnull, pop, uucp, bind }
#--- Allow outbound connections from this server
pass out quick on $int_if proto { tcp, udp, icmp } from $int_if to $int_if:network keep state
pass out quick on $ext_if proto { tcp, udp, icmp } from ($ext_if) to any keep state

# We can check what gets caught here to see if we've missed anything
block log all



From the net, to the box's services

Okay, we can get out to the internet, connect from the gateway machine out, and internal users can connect inwards. Now we've got to let users out on the internet connect to the gateway's services. We've got to be careful that we only let in what we want to let in, and we should also log everyone from the internet who connects to our external services.
We'll also use synproxy state instead of keep state; this way a connection to the listening service is only opened once the client has completed the 3-way handshake. This stops a number of clients sending lots of SYN packets at once, which would quickly tie up all the listening daemons (a SYN flood attack). If a service gets flooded it is PF that handles the malicious SYN packets, not the services, which can do nothing but wait for the handshake to be completed. If, during an attack, a real user tries to connect, they will complete the 3-way handshake with PF and a connection to the service will be started.

[...]
# all traffic to/from the external network is addressed to/from our external address specifically
block in quick on $ext_if from any to ! ($ext_if)
block out quick on $ext_if from ! ($ext_if) to any

#<<< To this box
#--- Globally accessible services

# sshd from external; this gets attacked every couple of days so we have to limit it carefully:
# 30 max states, 10 different source addresses at once, 2 states per source, 2 connections per minute,
# and when an IP surpasses any of these restrictions, add it to the banned list
pass in log quick on $ext_if proto tcp from any to ($ext_if) port ssh flags S/SA \
synproxy state (max 30, source-track rule, max-src-nodes 10, max-src-states 2, \
max-src-conn 2, max-src-conn-rate 2/60, overload <banned>)

# httpd, more relaxed settings as users will grab lots of things like images on a page all at once
# 1000 max states, 50 different users at once, 30 connections per user, etc
pass in log quick on $ext_if proto tcp from any to ($ext_if) port www flags S/SA \
synproxy state (max 1000, source-track rule, max-src-nodes 50, max-src-states 30, \
max-src-conn 30, overload <banned>)

#--- Internal services, there aren't many people internally but each'll be using lots of connections
# httpd, samba, and the proxy servers
# 2000 max states, 20 max users, 100 max states per user
pass in quick on $int_if proto tcp from any to $int_if port { www, 445, 139, 3001, 3128 } flags S/SA \
keep state (max 2000, source-track rule, max-src-nodes 20, max-src-states 100)

# Treat sshd differently so that even if all other services are maxed out sshd will be available
pass in quick on $int_if proto tcp from any to $int_if port ssh flags S/SA \
keep state (max 20, source-track rule, max-src-nodes 2, max-src-states 10)

# If traffic is still inbound and addressed specifically to this box, it's not using our services, so don't let it in.
# This rule is probably redundant, as any traffic still coming in will fall into the 'block log all' at the end,
# but it's reassuring that you don't have to worry about traffic coming in from here on out.
block in log quick from any to { $int_if, ($ext_if) }
[...]



Intro to queueing

Okay, now we're only letting people connect where we want them to. But if we want to limit the amount of bandwidth people can use, make sure no-one's clogging anything up, and make sure I can play my game, we have to do traffic shaping. This is where it gets a bit trickier, because you have to know some of PF's internals, and the documentation is very thin on this. I'll take a detour here to discuss PF and ALTQ queueing:

Queues have two main features: parent queues have limits on the amount of bandwidth which can go through them per second, and child queues have packet buckets where packets wait until they can go through their parent queue. Queue prioritising comes into play when several buckets hold packets and a choice has to be made about whose packets go through first.

E.g. if you have five computers all sharing a 512Kb download link, you might create a parent queue (the altq on the interface) for the 512Kb link, and a child queue for each computer sharing the parent link (packets are assigned to a queue depending on which computer they're going to).


---------------------
---->tocomp1 (50 packets)
---------------------
---->tocomp2 (50 packets)
---------------------      -----------------
---->tocomp3 (50 packets)      parent 512Kb ---->
---------------------      -----------------
---->tocomp4 (50 packets)
---------------------
---->tocomp5 (50 packets)
---------------------



The logical choice seems to be to limit the parent to 512Kb, because this is the physical limit of how many packets can come in at once, and why would you want to make the maximum throughput lower than the incoming rate? But when you think about it, setting the bandwidth limit at or above the incoming bandwidth makes no sense; you can only prioritise packets when the buckets are starting to fill, and how will the buckets fill when packets are being sent off as fast as they come in?

The next question you might ask is: if packets are supposed to come in faster than they get sent out, won't the buckets fill? They will, and when they're full any further packets are discarded. This doesn't seem like the best way to make full use of a connection, but dropping packets is the only way to tell the computer sending them to slow down. When some of its packets are lost the sender slows down, other queues whose buckets aren't full start getting their packets through faster, and everything balances out with each queue getting the share you specify.

There is a bit of a problem with this, though: a queue will use all the bandwidth it can until the bucket fills, at which point the far-end computer slows its sending and the transfer stalls. Then the bucket drains and packets come gushing in again, only to stall again when they hit the end of the bucket.
This is why Random Early Detection was created; instead of dropping packets only when the bucket is full, it randomly drops packets with increasing likelihood as the bucket fills. So when the bucket holds few packets a packet probably won't get dropped, when it's almost full a packet probably will be, and when it's full a packet certainly will be. This smooths out incoming bandwidth so there aren't any spikes.
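As a sketch of the idea (a simplified model, not PF's actual implementation; real RED works on a moving average of the queue length, and the thresholds here are invented):

```python
def red_drop_probability(queue_len, min_th, max_th, max_p):
    """Simplified RED: probability of dropping an incoming packet.

    Below min_th nothing is dropped; between min_th and max_th the drop
    probability climbs linearly up to max_p; at or past max_th (a full
    bucket) every packet is dropped.
    """
    if queue_len < min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    return max_p * (queue_len - min_th) / (max_th - min_th)

# A nearly-empty bucket rarely drops; a nearly-full one almost always does.
for qlen in (2, 10, 30, 45, 50):
    print(qlen, red_drop_probability(qlen, min_th=5, max_th=50, max_p=0.1))
```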

What about the size of a bucket? If you have a bucket size of 5 packets, any small increase in packet input over the desired output means packets suddenly start getting dropped. So if a low-priority connection is using lots of bandwidth and a high-priority one suddenly starts up, the low-priority queue will immediately stall and the queues will very quickly stabilise with the high-priority queue getting its share.
Conversely, with a massive bucket the queues take longer to stabilise, but you won't get a sudden stall with lots of packet loss on the low-priority queue, which would be undesirable for something like video streaming, especially if the high-priority queue starts and stops frequently. Ideally you'll find the sweet spot at which the low-priority queue slows down very quickly but without stalling.
So it depends what you're using it for: small buckets for unstable but responsive queues, large ones for stable but unresponsive queues.

So what ratio of the bandwidth should you limit it to? When the output:input ratio is 1 no packets go into the buckets; they all get sent off straight away. When it's much smaller, the total incoming bandwidth stabilises at the smaller output bandwidth, and the extra bandwidth which could have been used is wasted. Ideally you want the buckets to fill up /fairly/ slowly. Too slowly and the prioritising engine won't be able to drop enough packets to slow connections down, although as long as the output is lower than the input it will eventually stabilise. With a ratio that is quite low, but not so low as to waste very much bandwidth, you get more packet loss, but the prioritiser can quickly choose which packets get priority and which don't; you waste more bandwidth but changes in priority occur faster.
Again it's a matter of your requirements. If you're running some sort of file server which serves up large files over long periods, you won't need quick changes; as long as the shares settle eventually, that'll do. In the more likely scenario of lots of quick, short-lived connections, wasting a little bandwidth isn't much of a concern, but getting the shares to where they should be as quickly as possible is important.
Similarly: a (relatively) small ratio (perhaps 0.8) for responsive but inefficient queues, a ratio close to 1 for unresponsive but efficient queues.
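To put rough numbers on this, here's a back-of-the-envelope calculation (the packet size, bucket size and rates are hypothetical, just to illustrate the trade-off):

```python
def seconds_until_full(input_kbit, limit_kbit, bucket_packets, packet_bytes=1500):
    """How long a queue's bucket takes to fill when the configured output
    limit is below the incoming rate. The closer the output:input ratio
    is to 1, the longer this takes, and the slower the prioritiser reacts.
    """
    surplus_kbit = input_kbit - limit_kbit
    if surplus_kbit <= 0:
        return float("inf")  # output keeps up; the bucket never fills
    bucket_kbit = bucket_packets * packet_bytes * 8 / 1000
    return bucket_kbit / surplus_kbit

# Ratio ~0.8 of a 512Kb link: the bucket fills (and shaping bites) quickly.
print(seconds_until_full(512, 410, bucket_packets=10))  # about 1.18 seconds
# Ratio ~0.95: slower to react, but less bandwidth wasted.
print(seconds_until_full(512, 486, bucket_packets=10))  # about 4.6 seconds
```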

All the above applies nicely to TCP: when a packet gets dropped somewhere, the sender finds out about it and slows down. No such luck with UDP; a UDP sender never backs off when packets are dropped, which means you can't shape UDP this way. You have to set hard upper limits on UDP traffic rather than shape it.
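A sketch of what such a hard cap might look like (the queue name and numbers are hypothetical, not part of my ruleset):

```
# UDP can't be throttled by dropping packets, so cap it outright
queue udp_out hfsc ( upperlimit 32Kb ) bandwidth 10%
pass out quick on $ext_if proto udp from ($ext_if) to any keep state queue udp_out
```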

It might come as a surprise that with PF and ALTQ you can only queue packets going out of an interface. Misinformed PF users will tell you this is because you can't shape inbound traffic: it has already come in, so what's the point of shaping it? This isn't true, because when packets are dropped along a connection the sending host slows its sending rate. See my threads on this
here and here.

So you might be wondering about the example I gave earlier, in which I shaped inbound traffic coming in over a 512Kb connection. This is possible, but only when the machine running PF has two interfaces: one external internet interface, and one internal interface heading to the LAN. You can use the internal interface to queue packets which came in on the external interface and are going out the other side. It's a messy hack, and it has drawbacks, but lots of PF users use it to get around this major shortcoming in PF.

Those are the ALTQ basics; there's lots of stuff I haven't touched on because it's not relevant to most needs. Now I'm going to go over the basics of a prioritising engine: Hierarchical Fair Service Curve.
Now that we've covered the ALTQ basics, HFSC does the rest without requiring much understanding of how it works internally, so this'll be brief. HFSC queues have three useful options: realtime <bw>, linkshare <bw>, and upperlimit <bw>, where <bw> is a percentage or hard limit on the amount of bandwidth you can allocate.
realtime sets the minimum amount of bandwidth the queue should receive; if packets are coming in at a rate below realtime they get sent through the output right away, at the expense of other queues. The maximum you can set it to is 70%.
linkshare is the ideal throughput. If all queues are using all the bandwidth they can, the amount each uses will tend towards its linkshare.
upperlimit is the maximum amount of bandwidth that can flow through a queue at once. This can be useful because it means there won't be any stabilising period while one queue slows down to accommodate another; it's already at that speed.
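Putting the three options together, a hypothetical pair of child queues might look like this (the names and numbers are made up for illustration, not part of my ruleset):

```
# voip: guaranteed at least 20Kb, ideally 40% of the parent, never more than 80Kb
altq on $ext_if bandwidth 100Kb hfsc queue { voip, bulk }
queue voip hfsc ( realtime 20Kb, linkshare 40%, upperlimit 80Kb )
queue bulk hfsc ( default, linkshare 60% )
```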

Two state, two interface, inbound/outbound NAT queueing

Okay, back to my pf.conf. To be able to shape up and down NAT traffic separately you have to create two states: one on the external interface for uploads, one on the internal interface for downloads. Before we do that we have to make sure that a state sticks to its interface; by default states in pf can 'float' from interface to interface (I'm not sure why anyone would want this).

[...]
### Options
# We want to send a TCP RST or ICMP unreachable when a packet is blocked; if we don't, people have to wait for a timeout
set block-policy return
# Bind states to interfaces so we can have a queue for each interface
set state-policy if-bound
[...]



Now that we're creating a state on both interfaces, we want to be able to put traffic into different queues depending on where it comes from (if it comes from me, give it priority). There's another problem here, though.
Packets are processed on both interfaces: a packet can come in one interface, get processed, and get processed again as it goes out the other. With NAT, a packet's destination address is translated before it is filtered on the way in, and its source address is translated before it is filtered on the way out. When packets exit on the other interface they are processed fully translated.

The problem is that packets reach the external interface with a fully translated source, but we have to put them into the right queue based on the source address they had before translation. For this we use pf's tag feature to tag packets as coming from me or not, so that we can detect the tags after the packets have been translated and are otherwise unrecognisable as mine or someone else's.

[...]
# If traffic is still inbound and addressed specifically to this box, it's not using our services, so don't let it in.
# This rule is probably redundant, as any traffic still coming in will fall into the 'block log all' at the end,
# but it's reassuring that you don't have to worry about traffic coming in from here on out.
block in log quick from any to { $int_if, ($ext_if) }

#<<< Through this box >>>
#--- Let out NAT traffic from the internal network to the internet
# Tag it so that when it gets run through the filter again on the external interface we'll know
# that it came from a priority (pri) or default (def) address.
# If we were going to restrict outbound traffic (eg restricting access to http services or banning
# certain websites) we would do it here
pass in quick on $int_if from $pri_addr to ! $int_if keep state tag fromint_pri
pass in quick on $int_if from ! $pri_addr to ! $int_if keep state tag fromint_def

# We have to create a state on the external interface for traffic that has been passed, so that we can
# create an upload queue.
pass out quick on $ext_if tagged fromint_pri keep state
pass out quick on $ext_if tagged fromint_def keep state

#>>> From this box
#--- Don't let restricted users initiate their own connections
block out log quick from any to any user \
{ www, pgsql, conrad, algis, nobody, toor, games, news, man, smmsp, mailnull, pop, uucp, bind }
#--- Allow outbound connections from this server
pass out quick on $int_if proto { tcp, udp, icmp } from $int_if to $int_if:network keep state
pass out quick on $ext_if proto { tcp, udp, icmp } from ($ext_if) to any keep state
[...]



Queue Rules

We're now creating a state on both the internal and external interfaces, and everything is stateful, so everything can be fitted into a queue. Now we'll design the queues themselves. There's no gold-standard rule for designing queues; as I explained above, it's a matter of what your needs are, what compromises you're willing to make, and a matter of fine tuning. When you've set the queues up, use pfctl -vvsq and try them out to see if they work as you hoped; if not, tweak and try again. It's a long process, and it takes a while to get a feel for what effect various changes have.

[...]
### Tables
# Non-public/weird addresses, doesn't include our 10.10.x.x subnet, anything in here shouldn't be going anywhere
table <banned> { 192.168.0.0/16, 192.0.2.0/24, 172.16.0.0/12, 127.0.0.0/8, 0.0.0.0/8, 169.254.0.0/16, 224.0.0.0/3,
204.152.64.0/23 }

### Queueing: rule-based bandwidth control.
# Internal interface; download queue
altq on $int_if bandwidth 100Mb hfsc queue { ether, nattraffic }
# Ethernet traffic
queue ether hfsc ( default, upperlimit 70% ) bandwidth 10% priority 0
queue nattraffic hfsc ( upperlimit 400Kb ) bandwidth 420Kb { toint_pri, toint_def }
queue toint_pri qlimit 10 hfsc ( red, realtime 35%, linkshare 50% ) priority 4 bandwidth 70%
queue toint_def qlimit 10 hfsc ( red, realtime 15%, linkshare 30% ) priority 3 bandwidth 20%

# External interface; upload queue
# Stuff which goes out on this interface has 128Kb bandwidth
altq on $ext_if hfsc ( upperlimit 90Kb ) bandwidth 100Kb queue { fromint_pri, fromint_def, server, fromint_ack }
queue fromint_pri hfsc ( realtime 20Kb ) bandwidth 10%
# From others
queue fromint_def hfsc ( realtime 40Kb ) bandwidth 10%
# For connections made to the server from external
queue server hfsc ( default ) bandwidth 10%
# TCP ACK packets, saying we've got a packet; we have to get these off asap
queue fromint_ack hfsc ( realtime 5Kb ) bandwidth 10% priority 7

### Translation: specify how addresses are to be mapped or redirected.
[...]



Some extras to clean up the rough edges

Before we finish off by adding queues to everything we'll add a scrub filter, which performs some sanitizing.

[...]
### Options, most of the defaults are fine
# We want to send a TCP RST or ICMP unreachable when a packet is blocked
set block-policy return
# Bind states to interfaces so we can have a queue for each interface
set state-policy if-bound

### Normalization: reassemble fragments and resolve or reduce traffic ambiguities.
scrub on $ext_if all random-id reassemble tcp fragment reassemble
scrub on $int_if all random-id reassemble tcp fragment reassemble
# random-id: Randomize IP ID fields to protect against packet injection by ID prediction
# reassemble tcp: Protect against some DoS and info gathering attacks
# fragment reassemble: Packet fragments are reassembled before processing, it's forced when using NAT anyway
# Don't normalize traffic on the loopback

### Queueing: rule-based bandwidth control.
[...]



All queues added we end up with the finished ruleset:

### Macros
ext_if="tun0" # External interface
int_if="fxp0" # Internal interface
pri_addr="10.10.2.3" # My address

### Tables
# Non-public/weird addresses, doesn't include our 10.10.x.x subnet, anything in here shouldn't be going anywhere
table <banned> { 192.168.0.0/16, 192.0.2.0/24, 172.16.0.0/12, 127.0.0.0/8, 0.0.0.0/8, 169.254.0.0/16, 224.0.0.0/3,
204.152.64.0/23 }

### Options, most of the defaults are fine
# We want to send a TCP RST or ICMP unreachable when a packet is blocked
set block-policy return
# Bind states to interfaces so we can have a queue for each interface
set state-policy if-bound

### Normalization: reassemble fragments and resolve or reduce traffic ambiguities.
scrub on $ext_if all random-id reassemble tcp fragment reassemble
scrub on $int_if all random-id reassemble tcp fragment reassemble
# random-id: Randomize IP ID fields to protect against packet injection by ID prediction
# reassemble tcp: Protect against some DoS and info gathering attacks
# fragment reassemble: Packet fragments are reassembled before processing, it's forced when using NAT anyway
# Don't normalize traffic on the loopback

### Queueing: rule-based bandwidth control.
# Internal interface; download queue
altq on $int_if bandwidth 100Mb hfsc queue { ether, nattraffic }
# Ethernet traffic
queue ether hfsc ( default, upperlimit 70% ) bandwidth 10% priority 0
queue nattraffic hfsc ( upperlimit 400Kb ) bandwidth 420Kb { toint_pri, toint_def }
queue toint_pri qlimit 10 hfsc ( red, realtime 35%, linkshare 50% ) priority 4 bandwidth 70%
queue toint_def qlimit 10 hfsc ( red, realtime 15%, linkshare 30% ) priority 3 bandwidth 20%

# External interface; upload queue
# Stuff which goes out on this interface has 128Kb bandwidth
altq on $ext_if hfsc ( upperlimit 90Kb ) bandwidth 100Kb queue { fromint_pri, fromint_def, server, fromint_ack }
queue fromint_pri hfsc ( realtime 20Kb ) bandwidth 10%
# From others
queue fromint_def hfsc ( realtime 40Kb ) bandwidth 10%
# For connections made to the server from external
queue server hfsc ( default ) bandwidth 10%
# TCP ACK packets, saying we've got a packet; we have to get these off asap
queue fromint_ack hfsc ( realtime 5Kb ) bandwidth 10% priority 7

### Translation: specify how addresses are to be mapped or redirected.
# nat: packets going out through $ext_if with a source address in the internal network will
# get translated as coming from the address of $ext_if, a state is created for
# such packets, and incoming packets will be redirected to the internal address.
nat on $ext_if proto { tcp, udp, icmp } from $int_if:network to ! $int_if -> ($ext_if)

# rdr local mail to remote mail, for those who don't like using NAT
rdr on $int_if proto tcp from $int_if:network to $int_if port 2525 -> mail.kuliukas.com port 2525
rdr on $int_if proto tcp from $int_if:network to $int_if port 1100 -> mail.kuliukas.com port 110

### Filtering
# Let all loopback traffic through
pass quick on lo0
# Make sure no traffic is trying to get into the loopback interface from outside
block quick from any to lo0:network

#--- Making sure all traffic is coming to/going from the right interface
# Make sure no banned addresses are around
block quick from <banned> to any
block quick from any to <banned>

# all traffic to/from the internal network is addressed to/from the internal network
block in quick on $int_if from ! $int_if:network to any
block out quick on $int_if from any to ! $int_if:network

# all traffic to/from the external network is addressed to/from our external address specifically
block in quick on $ext_if from any to ! ($ext_if)
block out quick on $ext_if from ! ($ext_if) to any

#<<< To this box
#--- Globally accessible services

# sshd from external; this gets attacked every couple of days so we have to limit it carefully:
# 30 max states, 10 different source addresses at once, 2 states per source, 2 connections per minute,
# and when an IP surpasses any of these restrictions, add it to the banned list
pass in log quick on $ext_if proto tcp from any to ($ext_if) port ssh flags S/SA \
synproxy state (max 30, source-track rule, max-src-nodes 10, max-src-states 2, \
max-src-conn 2, max-src-conn-rate 2/60, overload <banned>) queue ( server, fromint_ack )

# httpd, more relaxed settings as users will grab lots of things like images on a page all at once
# 1000 max states, 50 different users at once, 30 connections per user, etc
pass in log quick on $ext_if proto tcp from any to ($ext_if) port www flags S/SA \
synproxy state (max 1000, source-track rule, max-src-nodes 50, max-src-states 30, \
max-src-conn 30, overload <banned>) queue ( server, fromint_ack )

#--- Internal services, there aren't many people internally but each'll be using lots of connections
# httpd, samba, and the proxy servers
# 2000 max states, 20 max users, 100 max states per user
pass in quick on $int_if proto tcp from any to $int_if port { www, 445, 139, 3001, 3128 } flags S/SA \
keep state (max 2000, source-track rule, max-src-nodes 20, max-src-states 100) queue ether

# Treat sshd differently so that even if all other services are maxed out sshd will be available
pass in quick on $int_if proto tcp from any to $int_if port ssh flags S/SA \
keep state (max 20, source-track rule, max-src-nodes 2, max-src-states 10) queue ether

# If traffic is still inbound and addressed specifically to this box, it's not using our services, so don't let it in.
# This rule is probably redundant, as any traffic still coming in will fall into the 'block log all' at the end,
# but it's reassuring that you don't have to worry about traffic coming in from here on out.
block in log quick from any to { $int_if, ($ext_if) }

#<<< Through this box >>>
#--- Let out NAT traffic from the internal network to the internet
# Tag it so that when it gets run through the filter again on the external interface we'll know
# that it came from a priority (pri) or default (def) address.
# If we were going to restrict outbound traffic (eg restricting access to http services or banning
# certain websites) we would do it here
pass in quick on $int_if from $pri_addr to ! $int_if keep state tag fromint_pri queue ( toint_pri )
pass in quick on $int_if from ! $pri_addr to ! $int_if keep state tag fromint_def queue ( toint_def )

# We have to create a state on the external interface for traffic that has been passed, so that we can
# create an upload queue.
pass out quick on $ext_if tagged fromint_pri keep state queue ( fromint_pri, fromint_ack )
pass out quick on $ext_if tagged fromint_def keep state queue ( fromint_def, fromint_ack )

#>>> From this box
#--- Don't let restricted users initiate their own connections
block out log quick from any to any user \
{ www, pgsql, conrad, algis, nobody, toor, games, news, man, smmsp, mailnull, pop, uucp, bind }
#--- Allow outbound connections from this server
pass out quick on $int_if proto { tcp, udp, icmp } from $int_if to $int_if:network keep state queue ether
pass out quick on $ext_if proto { tcp, udp, icmp } from ($ext_if) to any keep state queue ( server, fromint_ack )

# We can check what gets caught here to see if we've missed anything
block log all



We might have wanted to limit outbound connections to, say, web servers, e-mail and MSN, or to restrict access to certain sites, and so on. But this is where I drew the convenience-vs-security line.

Last revised: 02/05/2006