
Re: [Openvpn-users] MTU, link-mtu and tun-mtu

  • Subject: Re: [Openvpn-users] MTU, link-mtu and tun-mtu
  • From: James Yonan <jim@xxxxxxxxx>
  • Date: Mon, 24 Oct 2005 13:16:44 -0600 (MDT)

On Sat, 22 Oct 2005, Phillip Vandry wrote:

> Hello,
> I am attempting to use OpenVPN for the first time. I've looked
> through the documentation. The features and how they are configured
> seem to make a lot of sense. I have configured a tunnel between two
> machines and it works. I am mostly happy with it so far.
> However I'm experiencing a little bit of trouble with the MTU
> settings.
> From the manpage:
> --link-mtu n
>     Sets an upper bound on the size of UDP packets which are sent
>     between OpenVPN peers. It's best not to set this parameter unless
>     you know what you're doing. 
> --tun-mtu n
>     Take the TUN device MTU to be n and derive the link MTU from it
>     (default=1500). In most cases, you will probably want to leave this
>     parameter set to its default value.
> I was a little bit confused by the description of these two options.
> The way I understand it, link-mtu is the size of the carrier packets
> OpenVPN will generate and tun-mtu is the size of the payload packets.
> Whichever one is specified, one can of course be derived from the
> other because the software should know how much tunnelling overhead
> it will incur. If this interpretation is correct, then I believe the
> tun-mtu description could use some rewording.
> Normally I would imagine that you should set the link-mtu parameter
> rather than the tun-mtu. This is because you probably know the
> path MTU between your OpenVPN peers, or you can easily find it
> with ping as the largest packet size that will pass unfragmented,
> and that is what you should be using as the link-mtu.
> I should mention that before I decided to play with these parameters,
> I was hoping OpenVPN's automatic path MTU discovery would work
> (after all, it is possible that under some conditions the path MTU
> could change dynamically as routing changed; finding it automatically
> is a good solution). However, even after trying different settings
> for --mtu-disc (matched on both peers each time), I still had
> problems sending large packets through the tunnel: they would
> not make it to the other side, with no recovery forthcoming.
> I did not try --mtu-test.
> In my case, I know that my path MTU is 1460 (because of ethernet
> and PPPoE and L2TP over ethernet in the path), so I gave this
> as the link-mtu on both sides.
> After the tunnel was up, looking at the resulting MTU on the tun0
> interface, I found it to be 1419. This is in line with what I expected:
> several dozen bytes less than the link MTU. However, in testing, I
> found that I could not pass packets larger than 1395 bytes (IP
> datagram size). By sniffing, I found that 1395 byte payloads
> resulted in 1357 byte encrypted packets. So far, so good. But add
> one byte to the payload, to 1396, and the result was 1465, and
> that's too big.
> Did OpenVPN miscalculate its overhead? If it calculated an MTU for the
> tun0 interface (tun-mtu) of 1419 when I specified a link-mtu of 1460, I
> would have thought that meant that all payloads up to 1419 bytes
> should fit inside the link-mtu I specified, 1460. It sounds like it
> should have calculated 1395 as the tun-mtu.
> By setting the tun-mtu to 1395 on both sides, the problem is of
> course gone.
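The mismatch described above can be restated as simple arithmetic; this sketch only re-derives the numbers already given in the post. (The fact that adding one byte to the payload made the encrypted packet jump by several bytes suggests cipher-block padding in the overhead, though that is an inference, not something stated in the thread.)

```python
# Overhead arithmetic from the numbers reported in this thread.
# With link-mtu set to 1460 on both peers, OpenVPN configured tun0
# with an MTU of 1419, yet in practice only 1395-byte IP datagrams
# made it through (a 1396-byte payload produced a 1465-byte packet,
# exceeding the 1460-byte link-mtu).

LINK_MTU = 1460          # value set with --link-mtu on both sides
REPORTED_TUN_MTU = 1419  # MTU OpenVPN assigned to tun0
WORKING_TUN_MTU = 1395   # largest payload that actually fit

# Overhead OpenVPN assumed when deriving the tun MTU:
assumed_overhead = LINK_MTU - REPORTED_TUN_MTU
# Overhead actually needed on this particular setup:
observed_overhead = LINK_MTU - WORKING_TUN_MTU

print(assumed_overhead)   # 41
print(observed_overhead)  # 65
```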

--link-mtu refers to maximum UDP payload size and doesn't include the IP 
or UDP header.

A lot of experience gained during the OpenVPN 1.x releases showed that 
it's best to fix the tunnel MTU at 1500 and vary the other parameters (and 
use --mssfix to prevent fragmentation rather than a lower tunnel MTU).
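As a sketch, the approach recommended here -- leaving the tunnel MTU at its 1500 default and clamping TCP MSS instead of shrinking the tunnel -- might look like the following in a config file (the mssfix value of 1400 is illustrative, not taken from this thread):

```conf
dev tun
# tun-mtu defaults to 1500; shown explicitly only for clarity
tun-mtu 1500
# Clamp the MSS of TCP sessions routed through the tunnel so their
# segments fit without IP fragmentation (value is illustrative)
mssfix 1400
```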

> server side: Linux 2.4.31 OpenVPN 2.0 (Debian stable)
> client side: Linux 2.4.20 OpenVPN 2.0.2
> TLS mode (--mode server and --mode client)
> Also, can anyone expand on the documentation of the --mtu-disc option?
> I assume that at least one of these options should cause OpenVPN to
> react to ICMP unreachable fragmentation needed messages by cranking
> down the link & tunnel MTU, but which one?
> I tried examining the source code to find my answers but I didn't
> find what I needed.
> I have another question, also related to the MTU. When used in server
> mode (multiple clients on a single tun interface), the tunnel
> interface on the server can only have one MTU, of course. Am I
> stuck using the lowest MTU of all of the clients? I guess yes. Is
> there a workaround for this, or must I use OpenVPN the old way
> (one interface and one daemon per tunnel) if I want the tunnels to
> have different MTUs?

Yes, the OpenVPN server daemon can only accept one global tunnel MTU
value.  You would need to run multiple servers.

Of course this gets back to why changing link-mtu is discouraged.  If you 
leave the MTUs at their defaults (tun-mtu = 1500), then different clients 
could use different "mssfix" values, and you would accomplish the same 
objective by different means.
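For instance, since mssfix is set in each client's own config, two clients with different path MTUs can both keep the server's single tun-mtu of 1500 (a hypothetical sketch; both values are illustrative):

```conf
# client-a.conf -- client behind a smaller path MTU
dev tun
mssfix 1300

# client-b.conf -- client on a cleaner path
dev tun
mssfix 1400
```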


Openvpn-users mailing list