We have a large private network (10.0.0.0/8) to which we need to allow enterprise users access from 172.16.0.0/12. The NAT gateway runs NetBSD 5.x (ipf v4.1.29).
The public-side NAT IPs are 192.168.100.0/24. /etc/ipnat.conf has various lines mapping subnets of the enterprise network to dedicated IPs like so:

    map bge0 172.16.0.0/16 -> 192.168.1.4/32 portmap tcp auto
    map bge0 172.17.0.0/16 -> 192.168.1.5/32 portmap tcp auto
    map bge0 172.18.0.0/16 -> 192.168.1.6/32 portmap tcp auto
    map bge0 172.19.0.0/16 -> 192.168.1.7/32 portmap tcp auto

etc. /etc/ipnat.conf also maps internal private-network IPs one-to-one with 192.168.100.0/24:

    map bge1 10.1.2.3/32 -> 192.168.100.7/32 portmap tcp auto
    map bge1 10.1.20.4/32 -> 192.168.100.8/32 portmap tcp auto
    map bge1 10.1.2.5/32 -> 192.168.100.9/32 portmap tcp auto
    map bge1 10.1.34.88/32 -> 192.168.100.10/32 portmap tcp auto

etc. /etc/ipf.conf passes ports 12000 through 12100 for all hosts, with no port translation.

My issue is that the NAT is not working, even though ipmon shows all the packets passing without issue. When I snoop on the internal network I see messages like "ICMP NETWORK UNREACH". I also see the connection start in "netstat -an" output, but it hangs in the SYN_RCVD state. The internal host has the proper route in "netstat -nr" and knows how to reach 192.168.1.0/24.

So what am I doing wrong? I hope this is detailed enough to help y'all.

Thanks,
Abby
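P.S. One thing I wondered about while writing this up: my understanding is that ipnat also has a bimap keyword for static one-to-one translation in both directions, which might fit the 10.x-to-192.168.100.x rules better than map with portmap. Something like (untested, just the syntax as I read it in ipnat.conf(5)):

    bimap bge1 10.1.2.3/32 -> 192.168.100.7/32
    bimap bge1 10.1.20.4/32 -> 192.168.100.8/32

I haven't tried rewriting the rules that way yet, so I don't know if it changes anything.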
