On Sat, May 23, 2009 at 8:55 PM, Justin Credible
<mista.justin.credi...@gmail.com> wrote:
> On Sat, May 23, 2009 at 6:35 PM, Justin Credible
> <mista.justin.credi...@gmail.com> wrote:
>>
>> Hi there,
>>
>> I am running OpenBSD 4.4 with OpenBGPD and multiple peers.
>>
>> For some reason the device is selecting Level3 as the default route for
>> absolutely everything which is not statically set.
>>
>> On the Level3 config I have set:
>>
>> set localpref 100
>> softreconfig in yes
>> max-prefix 100 restart 300
>>
>> For the others I have not set max-prefix.
>>
>> Also set
>>
>> nexthop qualify via bgp
>> rde route-age evaluate
>>
>> and then stopped the Level3 session and started it again so it would seem
>> "less stable" to the decision engine (since it is now the newer session), but
>> it is still the default for every single route. I even did a route flush,
>> flushing them all, and did a refresh from another peer, at which point all
>> the routes came back, again defaulting to Level3!
>>
>> How do I stop this from being my default route?
>>
>> The reason is that it is not the best route most of the time. For example,
>> to some parts of the US it takes 16 hops through Level3, whereas Global
>> Crossing does it in 1 hop, Abovenet in 3, etc.
>>
>> Thanks!
>>
>> Ken
>
> If you need more examples, here you go:
>
> # bgpctl show rib 199.185.137.3
> flags: * = Valid, > = Selected, I = via IBGP, A = Announced
> origin: i = IGP, e = EGP, ? = Incomplete
> flags destination         gateway          lpref   med aspath origin
> *>    199.185.136.0/23    64.x.x.x      200     1 3549 812 812 812 812 3602 22512 i
> *     199.185.136.0/23    212.x.x.x     100   500 3356 6453 812 3602 22512 i
> # route -n show | grep 199.185.136.0/23
> # route -n show | grep 199.185.136
> 199.185.136/23     212.x.x.x     UG1        0        0     -    48 vlan400
> # route delete 199.185.136/23
> delete net 199.185.136/23
> # ping 199.185.137.3
> PING 199.185.137.3 (199.185.137.3): 56 data bytes
> 64 bytes from 199.185.137.3: icmp_seq=0 ttl=245 time=150.000 ms
> 64 bytes from 199.185.137.3: icmp_seq=1 ttl=245 time=155.865 ms
> --- 199.185.137.3 ping statistics ---
> 2 packets transmitted, 2 packets received, 0.0% packet loss
> round-trip min/avg/max/std-dev = 150.000/152.932/155.865/2.958 ms
> # route -n show | grep 199.185.136
> 199.185.136/23     212.x.x.x     UG1        0        0     -    48 vlan400
> # bgpctl show rib 199.185.137.3
> flags: * = Valid, > = Selected, I = via IBGP, A = Announced
> origin: i = IGP, e = EGP, ? = Incomplete
> flags destination         gateway          lpref   med aspath origin
> *>    199.185.136.0/23    64.x.x.x      200     1 3549 812 812 812 812 3602 22512 i
> *     199.185.136.0/23    212.x.x.x     100   500 3356 6453 812 3602 22512 i
>
>
> I've even set my config to be EXTREMELY biased against Level3, but it
> (the 212 address) still populates my routing tables:
>
>
> BGP routing table entry for 199.185.136.0/23
>    3549 812 812 812 812 3602 22512
>    Nexthop 64.x.x.x (via 212.x.x.x) from gblx-p1 (208.48.250.230)
>    Origin IGP, metric 1, localpref 200, external, valid, best
>    Last update: 00:26:45 ago
>    Communities: 3549:4356 3549:8013 3549:8023 3549:8043 3549:8073
> 3549:8090 3549:8163 3549:8173 3549:8223 3549:8233 3549:30840
> BGP routing table entry for 199.185.136.0/23
>    3356 6453 812 3602 22512
>    Nexthop 212.x.x.x (via 212.x.x.x) from level3-p2 (4.69.187.4)
>    Origin IGP, metric 500, localpref 100, external, valid
>    Last update: 00:26:45 ago
>
>
> # traceroute -n 199.185.137.3
> traceroute to 199.185.137.3 (199.185.137.3), 64 hops max, 40 byte packets
>  1  212.x.x.x  0.550 ms  0.555 ms  0.448 ms
>  2  4.69.136.93  0.529 ms  0.445 ms  0.575 ms
>  3  4.69.136.90  11.273 ms  17.935 ms  11.317 ms
>  4  4.69.139.73  11.396 ms  11.439 ms  11.317 ms
>  5  4.68.63.106  16.769 ms  17.935 ms  17.939 ms
>  6  195.219.195.37  11.772 ms 195.219.83.2  11.687 ms 195.219.195.89  11.562 ms
>  7  195.219.243.14  12.17 ms 195.219.195.22  164.349 ms  164.471 ms
>  8  195.219.144.10  83.354 ms 195.219.144.1  12.184 ms  12.62 ms
>  9  195.219.144.10  83.355 ms  83.270 ms 216.6.98.1  109.634 ms
> 10  216.6.98.1  109.835 ms  109.880 ms 216.6.98.30  163.602 ms
> 11  216.6.98.30  163.552 ms  163.741 ms 64.86.115.38  178.523 ms
> 12  64.86.115.38  178.788 ms  179.88 ms 24.153.7.137  203.204 ms
> 13  24.153.7.137  180.416 ms  210.443 ms  238.549 ms
> 14  24.153.4.77  177.923 ms  178.712 ms 24.153.3.38  173.844 ms
> 15  24.153.3.38  173.921 ms  174.215 ms  173.595 ms
> 16  204.50.251.202  196.411 ms 207.107.204.178  177.465 ms  176.209 ms
> 17  207.107.204.178  177.542 ms  177.960 ms  176.719 ms
> 18  199.185.230.2  177.924 ms 199.185.137.3  177.712 ms 199.185.230.2  176.215 ms
> # route add 199.185.137.3 64.x.x.x
> add host 199.185.137.3: gateway 64.x.x.x
> # traceroute -n 199.185.137.3
> traceroute to 199.185.137.3 (199.185.137.3), 64 hops max, 40 byte packets
>  1  64.x.x.x  10.505 ms  10.427 ms  10.316 ms
>  2  64.208.169.150  98.472 ms  98.635 ms  98.513 ms
>  3  69.63.248.98  97.96 ms  102.9 ms  97.141 ms
>  4  66.185.80.186  138.946 ms  107.131 ms  107.136 ms
>  5  24.153.4.74  149.191 ms  152.977 ms  159.354 ms
>  6  24.153.3.34  146.816 ms  146.733 ms  146.861 ms
>  7  204.50.251.141  146.942 ms  146.975 ms  146.860 ms
>  8  207.107.204.178  149.314 ms  149.353 ms  149.371 ms
>  9  199.185.230.2  149.149 ms  149.229 ms  149.231 ms
> 10  199.185.137.3  149.314 ms  149.230 ms  149.483 ms
>


I figured this one out. This particular problem was caused by the fact that I had set:

nexthop qualify via bgp

I don't know why that particular setting pointed all of my routes at
Level3 regardless of the preferences I had set against it, but the way I
got around it is simple:

route add -mpath default gw1
route add -mpath default gw2
etc...
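
To sanity-check that both defaults actually made it into the kernel table,
something like this should show one default entry per next hop:

# route -n show | grep default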

Then change that setting to

nexthop qualify via default

Also make sure that the metric, localpref, etc. are equal on all of the
peers (unless you want one of them taking over the routing table), then
do a bgpctl reload.
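
In case it helps anyone else, this is roughly how the relevant pieces fit
together in bgpd.conf. Treat it as a sketch rather than my literal config:
the AS number and router-id are made up, the neighbor addresses are the
masked ones from above, and the remote AS numbers are taken from the AS
paths shown earlier. Double-check everything against bgpd.conf(5) before
copying it:

AS 65000
router-id 192.0.2.1

# let the multipath default routes added above qualify the nexthops
nexthop qualify via default
rde route-age evaluate

neighbor 64.x.x.x {
        descr           "gblx-p1"
        remote-as       3549
        set localpref   100
}

neighbor 212.x.x.x {
        descr           "level3-p2"
        remote-as       3356
        set localpref   100
}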

The routing tables seem to have evened out now and become more
"realistic" and unbiased. There are now more routes through GBLX than
through Level3, but only by a few thousand, as opposed to the previous
situation where no dynamic routes pointed to GBLX at all.
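
A rough way to see how the selected routes split between the two next hops
(using the masked gateway addresses again, so treat the grep as
illustrative rather than exact):

# route -n show | grep 64.x.x.x | wc -l
# route -n show | grep 212.x.x.x | wc -l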

Regards,

Ken
