[Openvpn-users] Re: Re: Re: Re: Routing forever


  • Subject: [Openvpn-users] Re: Re: Re: Re: Routing forever
  • From: Jochen Witte <jwitte@xxxxxxxxxxxxx>
  • Date: Thu, 20 Jan 2005 18:01:51 +0100

On Thu, 20 Jan 2005 17:02:38 +0100, Mathias Sundman wrote:

> On Thu, 20 Jan 2005, Jochen Witte wrote:
> 
>> On Thu, 20 Jan 2005 15:17:22 +0100, Mathias Sundman wrote:
>>
>>> On Thu, 20 Jan 2005, Jochen Witte wrote:
>>>
>>>>>>>> I have a rather simple setup:
>>>>>>>> - 2 static, public ip servers (<pip1>, <pip2>)
>>>>>>>> - 2 private subnets (10.128.0.0/24, 192.168.0.0/24)
>>>>>>>> - OpenVPN network: 10.129.0.1<->10.129.0.2
>>>>>>>>
>>>>>>>> Here is the picture:
>>>>>>>>
>>>>>>>> Subnet A                 GW1            GW2           SubnetB
>>>>>>>> 10.128.0.0/24<--->10.128.0.1        192.168.0.254<--->192.168.0.0/24
>>>>>>>>                       |                 |
>>>>>>>>                  10.129.0.1        10.129.0.2
>>>>>>>>                   (<pip1>)<-------->(<pip2>)
>>>>>>>>                              VPN
>>>>>>>>
>>>>>>>> Obviously this is a routing problem (no firewalling, since all packets are
>>>>>>>> logged for debugging).
>>>>>>>>
>>>>>>>> GW1 routes:
>>>>>>>> 10.129.0.2  0.0.0.0         255.255.255.255 UH    0      0        0 tun0
>>>>>>>> <pipnet1>   0.0.0.0         255.255.255.248 U     0      0        0 eth1
>>>>>>>> 10.128.0.0  0.0.0.0         255.255.255.0   U     0      0        0 eth0
>>>>>>>> 192.168.0.0 10.129.0.2      255.255.255.0   UG    0      0        0 tun0
>>>>>>>> 169.254.0.0 0.0.0.0         255.255.0.0     U     0      0        0 eth1
>>>>>>>> 0.0.0.0     <default-gw>    0.0.0.0         UG    0      0        0 eth1
>>>>>>>>
>>>>>>>> GW2 routes:
>>>>>>>> <default-gw>    0.0.0.0    255.255.255.255 UH    0      0        0 ppp0
>>>>>>>> 10.129.0.1      0.0.0.0    255.255.255.255 UH    0      0        0 tun0
>>>>>>>> 10.128.0.0      10.129.0.1 255.255.255.0   UG    0      0        0 tun0
>>>>>>>> 192.168.0.0     0.0.0.0    255.255.0.0     U     0      0        0 eth0
>>>>>>>> 0.0.0.0         <default-gw>  0.0.0.0      UG    0      0        0 ppp0
>>>>>>
>>>>>> The packets get stuck immediately in the gateway. (GW1 for packets from
>>>>>> 10.128.0.0 and GW2 for 192.168.0.0).
>>>>>
>>>>> Can you see it both on the ethX device and on tun0?
>>>>>
>>>> No, I just see it on my internal ethX and then it is gone. I can't even
>>>> see it on the external device (e.g. ppp0).
>>>
>>> Then I'd bet on a firewall problem after all. If routing is enabled, but
>>> you still can't see the packet traverse from ethX to tun0, then it's most
>>> likely blocked by netfilter.
>>>
>>> If you would have seen it on some other interface, like ppp0, then it
>>> would have been a routing problem.
>>
>> Hm, I do not agree. I log all traffic to the example host 10.128.0.10 with:
>>
>>        # Log-Chain
>>        ###########
>>        $IPTABLES -N my_log
>>        $IPTABLES -A my_log -p ICMP -j LOG --log-level info --log-prefix "LOG-ICMP "
>>        $IPTABLES -A my_log -p UDP -j LOG --log-level info --log-prefix "LOG-UDP "
>>        $IPTABLES -A my_log -p TCP -j LOG --log-level info --log-prefix "LOG-TCP "
>>
>> $IPTABLES -A FORWARD -d 10.128.0.10 -j my_log
>> $IPTABLES -A INPUT -d 10.128.0.10 -j my_log
>> $IPTABLES -A OUTPUT -d 10.128.0.10 -j my_log
>>
>>
>> This is one of the first things I do in my script.
>> I can see the packets when sending from the GW:
>>
>> Jan 20 15:55:48 <host> kernel: LOG-ICMP IN= OUT=tun0 SRC=10.129.0.2
>> DST=10.128.0.10 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=ICMP TYPE=8
>> CODE=0 ID=51242 SEQ=0
>>
>> But nothing happens when sending from the inside host.
> 
> Hm, so the packet arrives on eth0, it's not routed to any other interface, 
> and not seen by netfilter via your log rules.
> 
> Still smells like a simple typo in the firewall ruleset to me.
> 
> Some things to check:
> 
> Is rp_filter enabled? Could it be causing any problems? I really don't 
> know, as I've never used it. I rely on my iptables rules instead.
> 

OK, I used this in my fw-script:

echo "1" > /proc/sys/net/ipv4/tcp_syncookies
echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo "1" > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses
echo "5" > /proc/sys/net/ipv4/icmp_ratelimit
echo "0" > /proc/sys/net/ipv4/tcp_ecn

for if in  $IF ; do
       echo "1" > /proc/sys/net/ipv4/conf/$if/rp_filter
       echo "0" > /proc/sys/net/ipv4/conf/$if/accept_redirects
       echo "0" > /proc/sys/net/ipv4/conf/$if/accept_source_route
       echo "0" > /proc/sys/net/ipv4/conf/$if/bootp_relay
       echo "1" > /proc/sys/net/ipv4/conf/$if/log_martians
done

But: disabling this and restarting the network -> no changes (using the values 
from /proc/sys/net/ipv4/conf/default/).
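
Forwarding itself is also worth double-checking, since a packet that arrives
on eth0 and simply vanishes is what you would also see if it were off. Just
the usual /proc knobs, sketched here for reference:

       cat /proc/sys/net/ipv4/ip_forward            # should print 1
       cat /proc/sys/net/ipv4/conf/eth0/forwarding  # per-interface knobs
       cat /proc/sys/net/ipv4/conf/tun0/forwarding
       cat /proc/sys/net/ipv4/conf/all/rp_filter    # the "all" knob can matter too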

> Do you have any PREROUTING DNAT rules that could be NATing your packets 
> incorrectly, so you don't see them with your destination-based log rules?
> 
No.
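(Easy to double-check with a verbose listing; just a sketch:)

       # -v adds packet/byte counters, so this also shows whether anything
       # hits the nat PREROUTING chain at all
       iptables -t nat -L PREROUTING -v -n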

> You aren't doing any other fancy policy routing via iproute2 that isn't 
> seen in your normal routing table?
> 
[root@host root]# ip route
<ppp-gw> dev ppp0  proto kernel  scope link  src <pip>
10.129.0.1 dev tun0  proto kernel  scope link  src 10.129.0.2 
10.128.0.0/24 via 10.129.0.1 dev tun0 
169.254.0.0/16 dev eth0  scope link 
192.168.0.0/16 dev eth0  proto kernel  scope link  src 192.168.0.254 
default via <ppp-gw> dev ppp0 
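
A quick way to ask the kernel directly which route it picks for such a
forwarded packet (sketch; 192.168.0.10 is just an example host from Subnet B):

       # simulate a packet from an inside host arriving on eth0
       ip route get 10.128.0.10 from 192.168.0.10 iif eth0
       # expected: something like
       # "10.128.0.10 from 192.168.0.10 via 10.129.0.1 dev tun0"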



> To make absolutely sure it's not a firewall issue, would it be possible to 
> try running without any rules, or with default policy ACCEPT and just one
> blocking rule denying traffic from the internet interface?
> 
I tried it without any rules:

[root@host root]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain my_drop (0 references)
target     prot opt source               destination         

Chain my_log (0 references)
target     prot opt source               destination   


[root@firefly root]# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
SNAT       all  --  anywhere             anywhere            to:217.140.81.31 

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination   
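
One thing that plain listing does not show is which interface the SNAT rule
in POSTROUTING is bound to; a verbose listing (sketch below) shows the "out"
column and the hit counters:

       iptables -t nat -L POSTROUTING -v -n
       # if the "out" column is "*" rather than "ppp0", packets routed
       # over tun0 get SNATed to 217.140.81.31 as well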


> One thing I'm a little uncertain about is whether the packets can be seen 
> on tun0 regardless of whether any userspace application is processing them 
> or not. My previous assumptions are based on you always being able to see 
> the packets on tun0 if the kernel is routing properly, even if OpenVPN 
> is not running at all. If this is not the case, then we should perhaps move 
> on and have a look at your openvpn configs.

I am not sure about this. Here are my configs:

GW2:
----
dev tun

# Our OpenVPN peer is the office gateway.
remote <pip1>

# 10.129.0.2 is our local VPN endpoint (home).
# 10.129.0.1 is our remote VPN endpoint (office).
ifconfig 10.129.0.2 10.129.0.1

up ./office.up

# Our pre-shared static key
secret office.key


GW1:
----
dev tun

# peer's address
remote <pip2>

# 10.129.0.1 is our local VPN endpoint (office).
# 10.129.0.2 is our remote VPN endpoint (home).
ifconfig 10.129.0.1 10.129.0.2

# Our up script will establish routes
# once the VPN is alive.
up ./home.up

# Our pre-shared static key
secret home.key


VERY simple, isn't it? Really strange. 
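
Regarding whether packets show up on tun0 regardless of any userspace
application: the next thing I can try is simply watching both interfaces
with tcpdump while an inside host pings 10.128.0.10 (rough sketch, run on GW2):

       tcpdump -n -i eth0 icmp and host 10.128.0.10   # does the request arrive?
       tcpdump -n -i tun0 icmp and host 10.128.0.10   # is it handed to the tunnel?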





____________________________________________
Openvpn-users mailing list
Openvpn-users@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/openvpn-users