
[Openvpn-users] MTU, link-mtu and tun-mtu


  • Subject: [Openvpn-users] MTU, link-mtu and tun-mtu
  • From: Phillip Vandry <vandry@xxxxxxxxx>
  • Date: Sat, 22 Oct 2005 23:21:26 -0400

Hello,

I am attempting to use OpenVPN for the first time. I've looked
through the documentation. The features and how they are configured
seem to make a lot of sense. I have configured a tunnel between two
machines and it works. I am mostly happy with it so far.

However I'm experiencing a little bit of trouble with the MTU
settings.

From the manpage:

--link-mtu n
    Sets an upper bound on the size of UDP packets which are sent
    between OpenVPN peers. It's best not to set this parameter unless
    you know what you're doing. 
--tun-mtu n
    Take the TUN device MTU to be n and derive the link MTU from it
    (default=1500). In most cases, you will probably want to leave this
    parameter set to its default value.

I was a little bit confused by the description of these two options.
The way I understand it, link-mtu is the size of the carrier packets
OpenVPN will generate and tun-mtu is the size of the payload packets.
Whichever one is specified, the other can of course be derived,
because the software should know how much tunnelling overhead
it will incur. If this interpretation is correct, then I believe the
tun-mtu description could use some rewording.

Normally I would imagine that you should set the link-mtu parameter
rather than the tun-mtu. This is because you probably already know the
path MTU between your OpenVPN peers, or you can easily find it with
ping as the largest packet size that will pass unfragmented, and that
is what you should be using as the link-mtu.
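(For anyone repeating the ping test: remember to subtract the IP and
ICMP headers from the probe size. A quick sketch of the arithmetic,
assuming a plain 20-byte IPv4 header with no options plus the 8-byte
ICMP echo header:)

```python
# Compute the ping payload size (the -s option) that produces an IP
# datagram exactly as large as a candidate path MTU.  Assumes a plain
# IPv4 header (20 bytes, no options) plus the 8-byte ICMP echo header.
IP_HEADER = 20
ICMP_HEADER = 8

def ping_payload_for_mtu(path_mtu):
    return path_mtu - IP_HEADER - ICMP_HEADER

print(ping_payload_for_mtu(1460))  # 1432
```

So to probe for a 1460-byte path MTU you would ping with a 1432-byte
payload and the don't-fragment bit set.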

I should mention that before I decided to play with these parameters,
I was hoping OpenVPN's automatic path MTU discovery would work
(after all, it is possible that under some conditions the path MTU
could change dynamically as routing changed; finding it automatically
is a good solution). However, even after trying different settings
for --mtu-disc (matched on both peers each time), I still had
problems sending large packets through the tunnel: they would
not make it to the other side, with no recovery
forthcoming. I did not try --mtu-test.

In my case, I know that my path MTU is 1460 (because of ethernet
and PPPoE and L2TP over ethernet in the path), so I gave this
as the link-mtu on both sides.
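(For reference, the relevant lines of my configs were simply the
following, a sketch with the usual TLS-mode boilerplate omitted:)

```text
# both peers -- these options must match on each side
dev tun
proto udp
link-mtu 1460
```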

After the tunnel was up, looking at the resulting MTU on the tun0
interface, I found it to be 1419. This is in line with what I expected:
several dozen bytes less than the link MTU. However, in testing, I
found that I could not pass packets larger than 1395 bytes (IP
datagram size). By sniffing, I found that 1395 byte payloads
resulted in 1357 byte encrypted packets. So far, so good. But add
one byte to the payload, to 1396, and the result was 1465, and
that's too big.

Did OpenVPN miscalculate its overhead? If it calculated an MTU for the
tun0 interface (tun-mtu) of 1419 when I specified a link-mtu of 1460, I
would have thought that meant that all payloads up to 1419 bytes
should fit inside the link-mtu I specified, 1460. It sounds like it
should have calculated 1395 as the tun-mtu.
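To spell out the mismatch in numbers (the overhead figures here are
inferred from my own observations above, not read out of the OpenVPN
source):

```python
# The numbers from my test, for reference.
link_mtu = 1460         # what I passed to --link-mtu on both sides
derived_tun_mtu = 1419  # the MTU OpenVPN actually set on tun0
largest_working = 1395  # largest IP datagram that got through

# Overhead OpenVPN appears to have budgeted for:
budgeted_overhead = link_mtu - derived_tun_mtu   # 41 bytes
# Overhead actually needed at the observed boundary:
observed_overhead = link_mtu - largest_working   # 65 bytes

print(budgeted_overhead, observed_overhead)  # 41 65
```

That 24-byte difference is what I cannot account for.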

By setting the tun-mtu to 1395 on both sides, the problem is of
course gone.

server side: Linux 2.4.31 OpenVPN 2.0 (Debian stable)
client side: Linux 2.4.20 OpenVPN 2.0.2
TLS mode (--mode server and --mode client)

Also, can anyone expand on the documentation of the --mtu-disc option?
I assume that at least one of its settings should cause OpenVPN to
react to ICMP "fragmentation needed" messages by cranking
down the link and tunnel MTUs, but which one?

I tried examining the source code to find my answers but I didn't
find what I needed.

I have another question, also related to the MTU. When used in server
mode (multiple clients on a single tun interface), the tunnel
interface on the server can only have one MTU, of course. Am I
stuck using the lowest MTU of all of the clients? I guess yes. Is
there a workaround for this, or must I use OpenVPN the old way
(one interface and one daemon per tunnel) if I want the tunnels to
have different MTUs?

Thank you for your help!

-Phil

____________________________________________
Openvpn-users mailing list
Openvpn-users@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/openvpn-users