Re: [c-nsp] Current BGP BCP for anchoring and announcing local prefixes

2010-03-16 Thread Asbjorn Hojmark - Lists
On Mon, 15 Mar 2010 17:01:07 -0400, you wrote:

 router bgp asnr
  address-family ipv4
   aggregate-address A.A.A.A M.M.M.M attribute-map BGP-LOCAL
 
 route-map BGP-LOCAL permit 10
  set metric 10
  set local-preference 1000
  set origin igp
  set community whatever

 Indeed.  That notwithstanding, my problem with relying on aggregate-
 address is that the prefix isn't announced unless it, or a candidate
 prefix, exists in the BGP table.

True, but in my opinion that's typically not very important: if there
is no component route, the rest of the world has little use for the
aggregate. Also, if the address space is actually in use, there should
always be a component route.


But anyway, if you want to be nice and stable, and the route never to
go away, use a static route to null0 instead, and then redistribute
into BGP with a route map.

ip route A.A.A.A M.M.M.M null0

ip access-list standard STATIC-TO-BGP
 permit A.A.A.A W.W.W.W

route-map STATIC-TO-BGP permit 10
 match ip address STATIC-TO-BGP
 set metric 10
 set local-preference 1000
 set origin igp
 set community whatever

router bgp asnr
 address-family ipv4
  redistribute static route-map STATIC-TO-BGP

-A

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] User-Based Rate Limiting on CRS-1/3 ?

2010-03-16 Thread oles
Hi,
How can I use the UBRL (User-Based Rate Limiting) on CRS-1/3 ?

Regards,
Octave



Re: [c-nsp] 3560 buffering (was: 3560 mtu miss-match causing output drops)

2010-03-16 Thread mb

Quoting Peter Rathlev pe...@rathlev.dk:

Different congestion control algorithms (on Linux 2.6) give varying
success; CUBIC and HTCP seem to cope okay-ish, BIC and Reno are worse.
None of them can pull more than ~75 Mbps. When replacing the 3560 with a
3550 it can pull 97 Mbps with no drops.

Since we currently only prioritise voice traffic, we've simply allocated
all other buffer space to one queue to carry data. This works for us.




 Hi,

 Wondering if you could share your current setup for buffers/queues
etc. (we are currently running the default)

 Thanks in advance.






Re: [c-nsp] inet vrf

2010-03-16 Thread Tim Durack
On Mon, Mar 15, 2010 at 6:39 PM, Manu Chao linux.ya...@gmail.com wrote:
 AFAIK, FIB and LFIB are just not the same table and the MSFC distributes the
 routing information in both tables to the PFC3B(XL).


Not sure about that:

RTR-1#sh mls cef summary

Total routes:29281
IPv4 unicast routes: 29084
IPv4 non-vrf routes: 93
IPv4 vrf routes: 28991
IPv4 Multicast routes:   6
MPLS routes: 112
IPv6 unicast routes: 75
IPv6 non-vrf routes: 6
IPv6 vrf routes: 69
IPv6 multicast routes:   3
EoM routes:  1

This is with "mpls label mode all-vrfs protocol bgp-vpnv4 per-vrf"
enabled in the config.

-- 
Tim:


Re: [c-nsp] IPv6

2010-03-16 Thread Gert Doering
Hi,

On Tue, Mar 16, 2010 at 12:32:41PM +0200, Mohammad Khalil wrote:
 i am new on version 6 , i want to test this addressing scheme
 i want the best way to subnetting the subnet i have 
 for example i want to test as in version 4 /30 what does it equal in version 6

/64

gert
-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025g...@net.informatik.tu-muenchen.de



Re: [c-nsp] 3560 buffering

2010-03-16 Thread Peter Rathlev
On Tue, 2010-03-16 at 20:34 +1000, m...@adv.gcomm.com.au wrote:
 Wondering if you could share you current setup for buffers/queues etc
 (We are currently running default)

Sure, I've pasted our default template below.

Keep in mind that it was developed in a trial and error (or maybe
rather try all and err!) fashion, without any real knowledge about
exactly how the hardware platform works.

It's a very very simple QoS configuration. We only do voice
policing/priority and no real QoS. We previously tried a more complex
service policy, but kept getting these messages:

 %QOSMGR-4-HARDWARE_NOT_SUPPORTED  Hardware limitation has reached for 
policymap ACCESS-INGRESS

This configuration only attempts to allocate as much buffer as possible
for data traffic.

I hope the configuration doesn't make me the laughingstock of c-nsp.
Comments about inappropriate configuration are more than welcome. :-)


* Start of generic template *

!;Configuration template for RM client access switches
! Quality of Service MLS configuration
! This template does not include interface related configuration
! Last update: 2009-11-15, peter.rath...@stab.rm.dk
!
! Reset mls qos settings
no mls qos map cos-dscp
no mls qos map policed-dscp
no mls qos srr-queue input threshold 1
no mls qos srr-queue input threshold 2
no mls qos srr-queue input priority-queue 1
no mls qos srr-queue input dscp-map
no mls qos srr-queue input cos-map
no mls qos srr-queue input bandwidth
no mls qos srr-queue input buffers
no mls qos queue-set output 1 threshold
no mls qos queue-set output 1 buffers
no mls qos queue-set output 2 threshold
no mls qos queue-set output 2 buffers
no mls qos srr-queue output dscp-map
no mls qos srr-queue output cos-map
!
! Standard CoS-DSCP map, i.e. DSCP = CoS * 8, except CoS 5 which is mapped to EF (46).
mls qos map cos-dscp 0 8 16 24 32 46 48 56
! Excess EF traffic down-classed to AF41
mls qos map policed-dscp 46 to 34
!
!
! * Definition of input-queues *
!
mls qos srr-queue input threshold 1 90 100
mls qos srr-queue input threshold 2 90 95
! Input queue 2 is priority with a 5% bandwidth guarantee
mls qos srr-queue input priority-queue 2 bandwidth 5
! DSCP-mapping
mls qos srr-queue input dscp-map queue 1 threshold 3  0  1  2  3  4  5  6  7
mls qos srr-queue input dscp-map queue 1 threshold 1  8  9 10 11 12 13 14 15
mls qos srr-queue input dscp-map queue 1 threshold 3 16 17 18 19 20 21 22 23
mls qos srr-queue input dscp-map queue 1 threshold 3 24 25 26 27 28 29 30 31
mls qos srr-queue input dscp-map queue 1 threshold 3 32 33 34 35 36 37 38 39
mls qos srr-queue input dscp-map queue 2 threshold 2 40 41 42 43 44 45 46 47
mls qos srr-queue input dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue input dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63
! CoS-translation of above
mls qos srr-queue input cos-map queue 1 threshold 1   1
mls qos srr-queue input cos-map queue 1 threshold 3 0   2 3 4
mls qos srr-queue input cos-map queue 2 threshold 2   5
mls qos srr-queue input cos-map queue 2 threshold 3 6 7
!
! Input bandwidth, 90% for queue 1, 10% for queue 2
mls qos srr-queue input bandwidth 9 1
! Input buffer depth, 95% for queue 1, 5% for queue 2 (which is non-bursty)
mls qos srr-queue input buffers 95 5
!
!
! * Definition of output-queues *
!
! Generally speaking:
!   Queue 1 is priority, 5% bandwidth and 5% buffers
!   Queue 2 is other, 95% bandwidth and 95% buffers
!
! Q1: Voice traffic, control traffic
! Q2: Other traffic
! Q3: Unused
! Q4: Unused
!
mls qos queue-set output 1 buffers 5 95 0 0
! WTD: Queue 1 is priority, WTD set to 100% for all levels.
mls qos queue-set output 1 threshold 1 100 100 100 100
! WTD: Queue 2 is other
mls qos queue-set output 1 threshold 2 3100 3100 100 3200
!
! Egress mapping
! Priority= queue 1, threshold 3 (40-63)
! Scavenger   = queue 2, threshold 1 (8-15)
! Other   = queue 2, threshold 3 (0-7,16-39)
mls qos srr-queue output dscp-map queue 2 threshold 3  0  1  2  3  4  5  6  7
mls qos srr-queue output dscp-map queue 2 threshold 1  8  9 10 11 12 13 14 15
mls qos srr-queue output dscp-map queue 2 threshold 3 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 2 threshold 3 24 25 26 27 28 29 30 31
mls qos srr-queue output dscp-map queue 2 threshold 3 32 33 34 35 36 37 38 39
mls qos srr-queue output dscp-map queue 1 threshold 3 40 41 42 43 44 45 46 47
mls qos srr-queue output dscp-map queue 1 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 1 threshold 3 56 57 58 59 60 61 62 63
!
mls qos srr-queue output cos-map queue 2 threshold 1   1
mls qos srr-queue output cos-map queue 2 threshold 3 0   2 3 4
mls qos srr-queue output cos-map queue 1 threshold 3   5 6 7
!
mls qos
!
end

* End of generic template *

Then there's the service-policy definition:

* Start of service-policy template *

!;Configuration template for RM client access switches
! Quality of Service ACL, 

Re: [c-nsp] IPv6

2010-03-16 Thread Devon True

On 3/16/2010 9:19 AM, Drew Weaver wrote:
 Hi,
 
 I believe most people feel that a /126 should be used the same place you 
 would use /30

FWIW, the recent NANOG meeting discussed numbering your IPv6 links.

http://nanog.org/meetings/nanog48/abstracts.php?pt=MTU1NCZuYW5vZzQ4&nm=nanog48

The best advice, I thought, was: regardless of what scheme you decide to
use (/126, /112, etc.), reserve the entire /64 so that you don't shoot
yourself in the foot in case some new must-have feature appears and
requires a /64 on ptp links.
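That reserve-the-/64 policy is easy to sanity-check with Python's stdlib ipaddress module. A minimal sketch, using the 2001:db8::/32 documentation prefix rather than a real allocation:

```python
import ipaddress

# Reserve a whole /64 per point-to-point link, but configure only the
# first /126 out of it; the rest of the /64 stays held in reserve.
reserved = ipaddress.ip_network("2001:db8:0:1::/64")
ptp = next(reserved.subnets(new_prefix=126))  # first /126 inside the /64

print(ptp)  # 2001:db8:0:1::/126
# The reservation costs nothing you weren't spending anyway: the /64
# holds 2**62 such /126s, and renumbering the link to the full /64
# later doesn't disturb any neighbouring assignment.
```
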

--
Devon


Re: [c-nsp] IPv6

2010-03-16 Thread Jens Link
Drew Weaver drew.wea...@thenap.com writes:

 Hi,

 I believe most people feel that a /126 should be used the same place you
 would use /30

Many people feel that a /64 should be used as the smallest network size
for IPv6, even on point-to-point links.

Jens
-- 
-
| Foelderichstr. 40  | 13595 Berlin, Germany | +49-151-18721264 |
| http://www.quux.de | http://blog.quux.de   | jabber: jensl...@guug.de |
-


Re: [c-nsp] Cheap 10G between 7600 and Procurve 5406zl

2010-03-16 Thread Marian Ďurkovič
On Sun, 14 Mar 2010, Lincoln Dale wrote:
 SFP+ is one of the newest transceiver formats, and a lot more of the
 'stuff' that used to be inside the transceiver is now on the switch PCB
 itself.  One of the things that has been moved is a component called the
 EDC (electronic dispersion compensation).  Different transceiver types
 have different requirements as far as EDC parameters go, and this has
 been one case where it shows that not all transceivers are created equal.
 
 With non-optimal or incorrect EDC values you may still get link up, but
 you may have such an excessive error rate that it's practically unusable.
 Or you may get cases where the link comes up but randomly drops out, or
 doesn't drop out when the link partner goes away.
 
 The point here is that while it's a commonly held belief that all
 transceivers are created equal, we have seen this not to be the case with
 SFP+ -- probably because it's the newest transceiver format for 10G.

Well, the SFP+ design violated the long-established practice of having a
decent, deterministic host-to-module interface - so it's no surprise that
real-life experience is so bad. Yes, SFP+ works for trivial cases like SR
and LR, but anything more complex is either plain impossible (DWDM) or
requires too much hassle just to get it working.

A decent pluggable compensates for all fiber/coax impairments internally
and presents a clean digital signal (zeroes/ones) to the switch. Thus all
the EDC work is the sole responsibility of the transceiver manufacturer,
which can fine-tune EDC characteristics differently for each module type
(LRM/DWDM/coax). This way all modules really are equal from the switch's
point of view, as all of them produce the same digital signal.

A linear SFP+, on the other hand, presents an analog signal to the switch:
no longer zeroes/ones, but the actual signal coming from the fiber/coax.
This signal is further distorted by the PCB traces in the switch, and each
module type might require different EDC characteristics. So what was
previously the job of the transceiver manufacturer must now be compensated
for and fine-tuned in the switch, which is basically a nightmare (ready to
upgrade IOS to get rid of bit errors on some specific module type?).

Thus, the massive rush towards SFP+ might at the end of the day turn out
to be a serious mistake, since the interop issues, the lack of long-reach
/ DWDM versions and the vendor-locking games might turn it into a dead end
in comparison with XFP, 10GBase-T and possible upcoming technologies.


Re: [c-nsp] IPv6

2010-03-16 Thread Drew Weaver
Hi,

I'm not sure I would want that many port scans, etc., being bounced off of
my 'connected' router interfaces, whether the rest of the IPs are 'routed'
or not.

YMMV.

-Drew


-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Jens Link
Sent: Tuesday, March 16, 2010 9:33 AM
To: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] IPv6

Drew Weaver drew.wea...@thenap.com writes:

 Hi,

 I believe most people feel that a /126 should be used the same place you
 would use /30

Many people feel that a /64 should be used as smallest network size for
IPv6 even on point-to-point links. 

Jens
-- 
-
| Foelderichstr. 40  | 13595 Berlin, Germany | +49-151-18721264 |
| http://www.quux.de | http://blog.quux.de   | jabber: jensl...@guug.de |
-


[c-nsp] Anyone familiar with CSCsd22834

2010-03-16 Thread krunal shah
On a 7609 router we had the following errors:

025774: SLOT 1: Mar 16 09:33:33.132: %SIP200_MP-4-PAUSE: Non-master CPU is
suspended for too long, from 0x4022D0BC(5) to 0x4022D188 for 310671 CPU
cycles.
-Traceback= 4030DE7C 402E8620 402E86C8 4022C598 40133024
025775: SLOT 7: Mar 16 09:33:36.312: %SIP200_MP-4-PAUSE: Non-master CPU is
suspended for too long, from 0x4022D0BC(5) to 0x4022D188 for 323651 CPU
cycles.
-Traceback= 4030DE7C 402E8620 402E86C8 4022C598 40133024


Slots 1 and 7 are loaded with 7600-SIP-200 cards. I found this issue is
related to bug CSCsd22834, but the Cisco website does not mention a
workaround or the conditions of the bug. Has anyone encountered these
errors before?

Krunal


Re: [c-nsp] IPv6

2010-03-16 Thread Ziv Leyes
Many people nowadays believe we can start wasting the IPv6 address space,
because they all claim it's so astronomically large that we shouldn't
worry about depletion as happened with IPv4.

And I'd like to set an appointment with all of them in, let's say, 20-30
years from now and talk about it again.

If you could go back in time and ask the guys who planned IPv4 why they
didn't make it larger, they would have told you there was no way we'd ever
need more than that, right?
We're now facing the same situation; we can't even imagine what can happen
20 years from now, and I'm already hearing about giving every milk carton
a /64 which won't be recycled.
Think about it: how many milk cartons do we dispose of daily? How many IP
addresses are in a /64? Now do the math and you'll see that the astronomic
and virtually endless range starts to slowly shrink...
But hey, I'm used to being the crazy guy who alerts about nothing and then
a few years later says I told you...
So who's up for it? Shall we talk about it again in 20 years?
BTW, I will be more than happy to be wrong!
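For what it's worth, the back-of-the-envelope math is easy to run. This is only an illustrative sketch: the cartons-per-day figure is invented, and only the 2**64 count of /64 prefixes comes from the IPv6 address format itself.

```python
# How long would the pool of /64s last if every milk carton burned one?
TOTAL_SLASH_64 = 2 ** 64          # distinct /64 prefixes in the 128-bit space
CARTONS_PER_DAY = 10_000_000_000  # hypothetical worldwide burn rate

years = TOTAL_SLASH_64 / CARTONS_PER_DAY / 365.25
print(f"{years:,.0f} years")      # roughly five million years at this rate
```

Whether roughly five million years counts as "virtually endless" or as "slowly shrinking" is left to the reader.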


-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Jens Link
Sent: Tuesday, March 16, 2010 3:33 PM
To: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] IPv6

Drew Weaver drew.wea...@thenap.com writes:

 Hi,

 I believe most people feel that a /126 should be used the same place you
 would use /30

Many people feel that a /64 should be used as smallest network size for
IPv6 even on point-to-point links. 

Jens
-- 
-
| Foelderichstr. 40  | 13595 Berlin, Germany | +49-151-18721264 |
| http://www.quux.de | http://blog.quux.de   | jabber: jensl...@guug.de |
-

 
 



Re: [c-nsp] debug ip routing crashed 3750E

2010-03-16 Thread Antonio Soares
For those interested, I can confirm that 12.2(53)SE resolves this problem.


Regards,
 
Antonio Soares, CCIE #18473 (RS/SP)
amsoa...@netcabo.pt

-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Antonio Soares
Sent: quinta-feira, 4 de Março de 2010 18:44
To: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] debug ip routing crashed 3750E

I have it:


CSCei59309 Bug Details 
Crash in iprouting_set_ndb_last_rdb()  

Symptoms
A Cisco platform can crash after enabling debug ip routing.
This was observed on a 3750E switch running IOS release 12.2(37)SE.

Conditions
The problem happens consistently, each time the customer executes the command.

Workaround
Don't use the command debug ip routing.

Further Problem Description
The crash does not show any exception or log message; however, the following
line can be observed in show context:
Signal = 5, Vector = 0x1100

and the following line can be observed in show version:
System returned to ROM by bus error at PC 0x..., address 0x0 


Unbelievable !


Regards,
 
Antonio Soares, CCIE #18473 (RS/SP)
amsoa...@netcabo.pt

-Original Message-
From: Antonio Soares [mailto:amsoa...@netcabo.pt] 
Sent: quinta-feira, 4 de Março de 2010 16:18
To: 'cisco-nsp@puck.nether.net'
Subject: debug ip routing crashed 3750E

Group,

Well, today I was troubleshooting a routing problem and I enabled debug ip
routing, as I have done many times. But this time I got a very unpleasant
surprise:

++
XXX uptime is 1 minute
System returned to ROM by bus error at PC 0x29CA3C, address 0x0
System image file is 
flash:c3750e-universal-mz.122-35.SE5/c3750e-universal-mz.122-35.SE5.bin
++

I was connected via telnet. I issued terminal monitor to see what was going
on, and I just saw my telnet connection going down :( I didn't see a single
flapping route!

I already have a TAC case open for this, but I wanted to share the
information and also get some comments from the group.

The process related to this must be EIGRP:

(...)
 Partial decode of process block 

Pid 249: Process IP-EIGRP(0): PDM ^D7'XIP-EIGRP(0)
stack 0x4A272E4  savedsp 0x2A66D4C 
Flags: analyze prefers_new process_arg_valid 
Status 0x Orig_ra   0x Routine0x Signal 0
Caller_pc  0x Callee_pc 0x Dbg_events 0x State  0
Totmalloc  3837470844   Totfree   3185315280   Totgetbuf  2268
Totretbuf  0  Edisms0x0Eparm 0x0   
Elapsed0x2F85F52  Ncalls0x1EF89E84 Ngiveups 0x3AA39   
Priority_q 3  Ticks_5s  0  Cpu_5sec   0Cpu_1min 172
Cpu_5min   176Stacksize 0x2328 Lowstack 0x2328
Ttyptr 0x2A45C80  Mem_holding 0x67E10Thrash_count 0
Wakeup_reasons  0x0FFF  Default_wakeup_reasons 0x0FFF
Direct_wakeup_major 0x  Direct_wakeup_minor 0x

Regs R14-R31, CR, PC, MSR at last suspend; R3 from proc creation:
 R3 :   R14: 00B7040C  R15:   R16:   R17:  
 R18:   R19:   R20:   R21: 04A7C068  R22: 0480D678 
 R23: 04A7A2FC  R24: 04A7A144  R25: 0013  R26:   R27:  
 R28: 04E49F90  R29: 03C756E4  R30:   R31:   CR : 3359 
(...)



Thanks.

Regards,
 
Antonio Soares, CCIE #18473 (RS/SP)
amsoa...@netcabo.pt




[c-nsp] Multicast Core

2010-03-16 Thread Tony Bunce
I'm looking for a router to sit at the core of a small-ish multicast
network (about 100 sources and receivers total, but we expect that to
double over time). Currently all of the sources and receivers are in the
same VLAN, but we would like the occasional receiver to be located in a
different VLAN. We are currently using 2960Gs with the IGMP querier
enabled, which works, but that doesn't look like it is going to scale
well. It looks like the switch sends ALL multicast data to the mrouter
port, so we are going to need something that can handle all the traffic
(probably around 4 Gbps); even if we only need to route one 20 Mbps stream
to a different VLAN, the router has to receive all 4 Gbps of traffic.

Does anyone have any recommendations for this scenario? I'm thinking a 4948 or 
3750G would work.



Re: [c-nsp] IPv6

2010-03-16 Thread Phil Mayers

On 16/03/10 15:14, Drew Weaver wrote:

Hi,

I'm not sure I would want that many port scans, etc being bounced off
of my 'connected' router interfaces whether the rest of the IPs are
'routed' or not.


Well, you've got iACLs right?

;o)

Seriously I found the NANOG presentations interesting; the potential NDP 
DoS is somewhat concerning (we use /112) and I'll definitely be 
considering carefully what we do.



Re: [c-nsp] Stable Image for UBR7246

2010-03-16 Thread Stephen Cobb
There are no [documented on cisco.com] related bugs for that release... I
checked Cisco's bug toolkit and the IOS release-notes caveats for the
latest release, 12.2.33-SCD(ED), as well... FYI.

-- 
Stephen F. Cobb • Senior Sales Engineer, CCNA/CCDA/DCNID/CSE
Telecoast Communications, LLC • Santa Barbara, CA
o 877.677.1182 x272 • c 760.807.0570 • f 805.618.1610
aim/yahoo telecoaststephen

On Tue, Mar 16, 2010 at 4:33 AM, Brian Raaen bra...@zcorum.com wrote:

 I have a UBR7246VXR with a G-2 engine that I am trying to find a stable
 image
 for.  I had an issue with the original image not letting CPE devices get
 leases, after which we opened a TAC case.  The TAC engineer had us change
 to
 and the unit is now flaky as all get out (rebooting on segV about every 45
 mins).  The image they had us use is ubr7200p-jk9su2-mz.122-33.SCB6.bin.
  TAC
 has already RMAed the engine.  I am not sure if these are hardware or
 software
 issues, the box was bought from a mainstream vendor (CDW) as new.  Is
 anyone
 successfully using this image?  I am still working with TAC on this, but
 was
 wanting some input from the community.

 --

 --

 Brian Raaen
 Network Engineer
 bra...@zcorum.com







Re: [c-nsp] Multicast Core

2010-03-16 Thread Adrian Minta

Tony Bunce wrote:

I'm looking for a router to sit at the core of a small-ish multicast network 
(about 100 sources and receivers total, but would expect it to double over 
time).  Currently all of the sources and receivers are in the same VLAN but we 
would like for the occasional receiver to be located in  a different vlan.  We 
are currently using 2960Gs with IGMP querier enabled; which works but that 
doesn't look like it is going to scale well.  It looks like the switch sends 
ALL multicast data to the mrouter port so we are going to need something that 
can handle all the traffic (probably around 4Gbps), so even if we only need to 
route 1 20Mbps stream to a different vlan the router has to receive all 4Gbps 
of traffic.

Does anyone have any recommendations for this scenario? I'm thinking a 4948 or 
3750G would work.
  
You will need a router in the path. The router needs to support only the
bandwidth sum of all the multicast sources multiplied by the number of
VLANs (worst-case scenario). Downstream switches will do the replication
of packets.


The 3750G will not handle this because multicast routing is done in CPU. I
believe the first switch that does multicast routing in hardware is the
6500.


If all the multicast streams multiplied by the number of VLANs stay below
1 Gbps, a soft router like a 7200 NPE-G1 will do the job.
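As a worked example of that worst-case rule (the 4 Gbps source total comes from the original post in this thread; the VLAN count here is an invented figure):

```python
# Worst case: every stream is requested in every receiver VLAN, so the
# router replicates the full source load once per VLAN.
total_source_mbps = 4000  # sum of all multicast sources (from the thread)
vlans = 3                 # hypothetical number of receiver VLANs

worst_case_mbps = total_source_mbps * vlans
print(worst_case_mbps)    # 12000, i.e. 12 Gbps through the router
```

In practice only 2-3 streams are expected to be routed at a time, so the realistic load is far below this bound.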




[c-nsp] Cisco 6509 SUP2-2GE /w PFC2 - which code?

2010-03-16 Thread neal rauhauser
  I have just received a couple of Catalyst 6509s that are destined for a
small exchange point. They've got SUP2-2GE /w PFC2, eight-port 6408 GBIC
blades, and FlexWAN blades that will be taking POS & ATM WAN interfaces.
They'll be running BGP+OSPF and not much else.

   I see one has 8.6.4 CatOS and the other IOS 12.1.27b.E4. Which is going
to be best/most stable?

   I've never had 6500s under my care but I do recall that there was an
issue with netflow accounting not working - something to the effect that the
intelligent linecards had their own forwarding information and all that
netflow reported was the setup and teardown for TCP connections. Is this
still the case, or is there a mix of software and practices that makes
netflow functional? If not, how are people handling bandwidth monitoring for
these systems?


  Thanks in advance for your wise answers ...

-- 
mailto:n...@layer3arts.com //
GoogleTalk: nrauhau...@gmail.com
GV: 202-642-1717


Re: [c-nsp] IPv6

2010-03-16 Thread Alan Buxey
Hi,

 And I'd like to set an appointment with all of them in let's say 20/30 years 
 from now and talk about it again.
 
 If you could go back in time and ask the guys that planned the IPv4 why 
 didn't they do it larger they would tell you there is no way we'll ever need 
 more than that, right?
 We're now facing the same situation, we can't even imagine what can happen in 
 20 years from now, as I'm already hearing about giving every milk cartoon a 
 /64 which won't be recycled.
 Think about it, how many milk cartoon we dispose daily? How many IP addresses 
 are in a /64? Now start making the math and you'll see that the astronomic 
 and virtually endless range starts to slowly shrink...
 But hey, I'm used to be the crazy guy that alerts about nothing and then a 
 few years later say I told you...
 So who's up for it? Let's talk about it in 20 years from now?
 BTW, I will be more than happy to be wrong!

I hear you. It's all down to address management - oh, and whether your
vendor supports that way of doing things. I know that Cisco are quite
fussy about what type/size of IPv6 address gets used for e.g. HSRPv2 or
for a direct P2P ethernet link.

alan

PS: Yes, I agree, it's silly to waste a /64 on some trivial use... if
everyone does it then that's basically throwing addresses away 'just
because you can (right now)'. Let's revisit this thread and topic in, as
you say, 30 years' time. Though by then I'll have just retired, so I can
laugh at the whole thing... and get called in at huge rates for
consultancy! 8-)


Re: [c-nsp] Multicast Core

2010-03-16 Thread Phil Mayers

On 16/03/10 17:04, Adrian Minta wrote:

3750G will not handle this becase multicast routing is done in CPU. I
believe the first switch that do multicast routing in hardware is 6500.


I don't think that's true. I think 3750s do multicast in hardware.


Re: [c-nsp] Multicast Core

2010-03-16 Thread Alexander Clouter
Phil Mayers p.may...@imperial.ac.uk wrote:

 On 16/03/10 17:04, Adrian Minta wrote:
 3750G will not handle this becase multicast routing is done in CPU. I
 believe the first switch that do multicast routing in hardware is 6500.
 
 I don't think that's true. I think 3750s do multicast in hardware.
 
Ours definitely do, otherwise I would imagine all that IPTV traffic on 
our network would be crippling them... plus they probably would not all 
be idling at 2%.

Cheers

-- 
Alexander Clouter
.sigmonster says:   This report is filled with omissions.



Re: [c-nsp] STP in L2TPv3

2010-03-16 Thread Gert Doering
Hi,

On Tue, Mar 16, 2010 at 10:51:48AM -0700, Chris Flav wrote:
 I saw this across a few router platforms, so I'm guessing it may be
 embedded in the base IOS code:
 * 7200
 * 1800
 * 2600
 
 How incredibly annoying.  Is there any L2 tunneling mechanism that will
 allow STP packets to be tunneled over an L3 network?

Well, *working* L2TPV3 and *working* EoMPLS will certainly do so...

This is just buggered software.

L2 tunnels should be fully transparent (or configurable which packet 
types to forward and which not).

gert
-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025g...@net.informatik.tu-muenchen.de



Re: [c-nsp] Multicast Core

2010-03-16 Thread Seth Mattinen
On 3/16/2010 10:28, Tony Bunce wrote:
 The total of the multicast sources is 4 Gbps, so I don't think a 7200
 would work, unfortunately.  The router would mostly serve as an IGMP
 querier and would only ever route 2-3 streams at a time, so I think my
 worst case would be the sum of all sources + 3 streams.
 
 The 4948 says "hardware-based wire-speed multicast management", but I'm
 not sure if that is marketing speak for IGMP snooping or hardware
 multicast routing.  A 6500 seems like overkill, but that might be my only
 option.
 
 

The 4900M is a different beast than the 4948 even though they share a
similar model number.

~Seth


Re: [c-nsp] Cisco 6509 SUP2-2GE /w PFC2 - which code?

2010-03-16 Thread Nick Hilliard
On 16/03/2010 18:55, Gert Doering wrote:
 Anyway: Netflow on Sup2 works OK (as far as I know, at least works for
 me) but it won't show you layer2-switched flows.  Bridged Netflow
 is something more recent, and I'm not sure whether it works - DECIX
 tried it, and it didn't.

I think you need a pfc3b for netflow, no?

Incidentally, the pfc3b still does not support netflow data export for
bridged ipv6 data.  This is annoying.

Nick


Re: [c-nsp] Multicast Core

2010-03-16 Thread Tony Bunce

 Ours definitely do, otherwise I would imagine all that IPTV traffic on

Are you using the 3750s for layer3 or just layer2?  If just layer2 what are you 
using as your multicast router?

It is looking like both the 3750 and  4948 do hardware multicast.  Any reason 
to pick one over the other?
I can get the 4948 with 10GB but the 3750 can do stacking.  Pricewise it looks 
like the 4948 might be a bit cheaper but not by much.

Thanks for all the help!

-Tony



Re: [c-nsp] Multicast Core

2010-03-16 Thread Seth Mattinen
On 3/16/2010 13:28, Tony Bunce wrote:
 
 Ours definitely do, otherwise I would imagine all that IPTV traffic on
 
 Are you using the 3750s for layer3 or just layer2?  If just layer2 what are 
 you using as your multicast router?
 
 It is looking like both the 3750 and  4948 do hardware multicast.  Any reason 
 to pick one over the other?
 I can get the 4948 with 10GB but the 3750 can do stacking.  Pricewise it 
 looks like the 4948 might be a bit cheaper but not by much.
 
 Thanks for all the help!
 


If IPv6 is important to you now or in the future, don't pick the 4948.

~Seth


Re: [c-nsp] Multicast Core

2010-03-16 Thread Rene F.

On 3/16/10 2:49 PM, Alexander Clouter wrote:

 Phil Mayers p.may...@imperial.ac.uk wrote:

 On 16/03/10 17:04, Adrian Minta wrote:

 3750G will not handle this because multicast routing is done in CPU. I
 believe the first switch that does multicast routing in hardware is the 6500.

 I don't think that's true. I think 3750s do multicast in hardware.

 Ours definitely do, otherwise I would imagine all that IPTV traffic on
 our network would be crippling them...plus they probably would not be
 all idling at 2%.

 Cheers

Same here. Our 3750Gs (and ME3400s) are doing gigs of multicast in hardware.

Rene


[c-nsp] top of rack switch recommendations

2010-03-16 Thread alex
Hi,

I'm looking for some switch recommendations. We currently have 2950/2960's 
10/100 with gig uplinks and are looking at upgrading. New features we'd like to 
get are:

 * 40+ gigabit ports
 * hardware acl support for ipv4 and ipv6 on ingress and egress traffic per 
port (just src/dst:[port])
 * ip source guard type support to restrict allowed mac / ip on each port for 
ipv4 and ipv6
 * qos on ingress/egress traffic per port for ipv4 and ipv6 (primarily just to 
throttle down from gigabit rates as needed)
 * jumbo frame support

Some nice to have:

 * 10 gig uplink support
 * redundant power
 * reversible air flow (hot air coming out of network port side)

but price is a big factor here. i.e. the 4900 would be great, but at almost 
$20k per switch, is really out of the question. 4948 doesn't seem to do ipv6 in 
hardware which knocks it out. I'm not clear on the differences between 3560 and 
3560-E. It's really hard to tell the level of ipv6 support some of the switches 
have, and last thing we want to do is upgrade the access layer and get stuck 
with switching ipv6 in software. The 2350 looks intriguing, but seems to be POE 
only, not something we need/want. We also don't need any routing features. Open 
to non cisco options as well. 

Thanks for any advice!

Alex


Re: [c-nsp] IPv6

2010-03-16 Thread Peter Rathlev
As Jens points out: This has been discussed extensively in several
places before, and the NANOG list has left no stone unturned. :-)

I found this thread rather interesting, albeit long:
http://www.merit.edu/mail.archives/nanog/msg00756.html

On Tue, 2010-03-16 at 17:18 +0200, Ziv Leyes wrote:
 If you could go back in time and ask the guys that planned IPv4 why
 they didn't make it larger, they would tell you there is no way we'll
 ever need more than that, right?

They realised pretty fast that the initial design didn't match the
growing popularity. That's what happens. The problem is that it has
taken us close to 30 years (!) since we discovered the issue to handle
the situation. And we're not even there yet.

 We're now facing the same situation, we can't even imagine what can
 happen in 20 years from now, as I'm already hearing about giving every
 milk carton a /64 which won't be recycled.

Though I would side with you on the let's be conservative approach,
the argument you present here isn't really relevant, neither to IPv4 nor
to IPv6.

If we were to assign /64 subnets to end-stations like milk cartons,
the whole point of subnets vanishes. No hierarchical scheme (suitable
for routing) fits that purpose as far as I can think. RFID would
probably fit better.

All in all I would personally prefer that we do _something_ and start
seriously deploying/using IPv6, and then in parallel continue discussing
address allocation policy et cetera. :-D

-- 
Peter




Re: [c-nsp] Multicast Core

2010-03-16 Thread Alexander Clouter
Tony Bunce to...@go-concepts.com wrote:
 
 Ours definitely do, otherwise I would imagine all that IPTV traffic on
 
 Are you using the 3750s for layer3 or just layer2?  If just layer2 
 what are you using as your multicast router?

Mixed, but generally L3.  The uplink links are port-channel'd 'hybrid' 
L2/L3 links:

interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 979
 switchport trunk allowed vlan 127-130,901,979
 switchport mode trunk
 ip arp inspection trust
 ip dhcp snooping trust
end

 
The native VLAN carries all the L3 routing and thus obviously also the 
multicast traffic up to the access layer.  FYI, VLAN's 127-130,901 are 
the L2 and RSPAN bits, but those carry next to no multicast traffic.

 ...but the 3750 can do stacking.

Cross stack channel bonding is *very* nice.  We use it for our servers 
and our uplinks with great success; especially handy when you want to be 
clever with your UPS and hook up half of your stack to the UPS feed and 
the other to raw mains.
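
A cross-stack bundle is configured like any other EtherChannel, just with
member ports on different stack members; a rough sketch (the interface and
channel-group numbers here are made up):

interface range GigabitEthernet1/0/25, GigabitEthernet2/0/25
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode trunk

With Gi1/0/25 on stack member 1 and Gi2/0/25 on member 2, the bundle
survives losing either member (or its power feed).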

Cheers

-- 
Alexander Clouter
.sigmonster says: Sorry.  Nice try.



Re: [c-nsp] top of rack switch recommendations

2010-03-16 Thread Alex Krohn
Hi,

On Tue Mar 16 2:24:32 PM, Ryan West wrote:
 Where did you get PoE from on the 2350?  
 http://www.cisco.com/en/US/products/ps10116/index.html

D'oh, was thinking the 2750. The 2350 looks nice, but no mention of ipv6 
support 
at all, and the 12.2.46EY IOS in feature navigator seems pretty sparse as well. 
Does anyone have any 2350's and can comment?

 Also, you might consider the nexus 5k with 2k extenders for the ToR.  I tried
 to find some documentation on the IPv6 support, but I can't find much.

Will have a look, was hoping for a one-to-one replacement of 2950/2960's 
though. Cisco 
was all set to sell me on 4948's even though I mentioned ipv6 support as a 
requirement, so
not feeling too happy about that. :)

On Tue Mar 16 2:57:02 PM, Ryan Otis wrote:
 The new FCX648, looks nice on paper.  I'd highly recommend doing a trial
 / POC and test the exact combination of the features you need/want
 before committing to a purchase of any small stackable.  We are a mixed
 shop and I've found a few features on the Brocade gear to have broken
 implementations (more so than Cisco).  A quick skim of the specs and
 manual suggest it supports everything on your list.  Definitely test. :)

Thanks, appreciate it! Will check it out.

Alex


Re: [c-nsp] inet vrf

2010-03-16 Thread Manu Chao
This feature is a nice label allocation optimisation, are you using this
command on RTR-2?

On Tue, Mar 16, 2010 at 11:40 AM, Tim Durack tdur...@gmail.com wrote:

 On Mon, Mar 15, 2010 at 6:39 PM, Manu Chao linux.ya...@gmail.com wrote:
  AFAIK, FIB and LFIB are just not the same table and the MSFC distributes the
  routing information in both tables to the PFC3B(XL).
 

 Not sure about that:

 RTR-1#sh mls cef summary

 Total routes:29281
IPv4 unicast routes: 29084
IPv4 non-vrf routes: 93
IPv4 vrf routes: 28991
IPv4 Multicast routes:   6
MPLS routes: 112
IPv6 unicast routes: 75
 IPv6 non-vrf routes: 6
 IPv6 vrf routes: 69
 IPv6 multicast routes:   3
EoM routes:  1

 This is with mpls label mode all-vrfs protocol bgp-vpnv4 per-vrf
 enabled in the config.

 --
 Tim:



Re: [c-nsp] Cheap 10G between 7600 and Procurve 5406zl

2010-03-16 Thread Lincoln Dale
On 17/03/2010, at 12:54 AM, Marian Ďurkovič wrote:
 [..] Thus, the massive rush towards SFP+ might at the end of the day turn out 
 to be a serious flaw, [..]

you list downsides without giving fair balance to the upsides.

like many things in engineering, it's often not a case of something being better on 
all counts or being all things to all people.
rather it's a constant tradeoff between competing goals.

certainly if you are most focussed on long-distance optics or DWDM then indeed 
SFP+ is probably not for you.

on the other hand, within (say) a datacenter environments, SFP+ offers benefits 
above what other transceiver types could offer:
 - SFP+ enables 10G densities that would not be possible with other transceiver 
formats.
 - SFP+ being the same form-factor as SFP means that one can often build a 
switch with both 1G and 10G transceivers that can be intermixed
 - enables incredibly cost effective 10G in the form of CX1.

it's not realistic to include 10GBaseT in any comparison at this point due to 
power/heat/PHY latency although that will, of course, improve over time.

from a switch design standpoint if you are designing a switch that could be 
used in many places in the network then reality is one probably needs to 
support multiple transceiver types if you want to address all requirements.  
nothing new here.  it's no different to having to do copper (RJ45) ports as well 
as transceiver ports.


cheers,

lincoln.

Re: [c-nsp] IPv6

2010-03-16 Thread TJ
I tried holding back ... and failed.


 -Original Message-
 From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-
 boun...@puck.nether.net] On Behalf Of Ziv Leyes
 Sent: Tuesday, March 16, 2010 11:19
 To: cisco-nsp@puck.nether.net
 Subject: Re: [c-nsp] IPv6
 
 Many people nowadays believe we can start wasting the IPv6 address space
 because they all claim it's so astronomically large that we shouldn't
worry
 about depletion as it happened with IPv4

/64s are not about waste, or address efficiency.  It is about efficiency in
delivering services ... scaling ... 


 And I'd like to set an appointment with all of them in let's say 20/30
 years from now and talk about it again.

To loosely quote Vint Cerf, 32 bits was meant to be a test run / proof of
concept - wasn't supposed to become The Real Deal.


 If you could go back in time and ask the guys that planned IPv4 why
 they didn't make it larger, they would tell you there is no way we'll
 ever need more than that, right?

While they almost certainly couldn't have expected what the Internet has
become, to say they didn't appreciate the need for address space is folly.


 We're now facing the same situation, we can't even imagine what can happen
 in 20 years from now, as I'm already hearing about giving every milk
 carton a /64 which won't be recycled.

A milk carton would not get a /64.  It wouldn't get an address at all
unless it is network connected.  
Long before we need to worry about IPv6 address exhaustion we'll need to
fret about MAC addresses being used up.
(These types of statements are a pet peeve of mine - addresses are only
really addresses when connected to a network, or intended to be so
connected.  Otherwise they are not addresses so much as another form of
unique ID.)  If you REALLY, REALLY wanted to give them all an address it
would be a /128 - so let me know when you have used up one /64
(~18 billion billion) ... and then we'll talk.


 Think about it, how many milk cartons do we dispose of daily? How many IP
 addresses are in a /64? Now start making the math and you'll see that the
 astronomic and virtually endless range starts to slowly shrink...
 But hey, I'm used to being the crazy guy that alerts about nothing and then
 a few years later says I told you...

That's the thing - it doesn't, not really.  It is simply a LOT of addresses
... and, in the VERY RARE case that we find that 2000::/3 was just burned
through (a problem some of us would welcome!), we can reconsider for
4000::/3, or 6000::/3, or 8000::/3, or A000::/3, or C000::/3 ... (note: I am
not arguing for inefficiency, just a different model of efficiency).


 So who's up for it? Let's talk about it in 20 years from now?
 BTW, I will be more than happy to be wrong!

Yes, let's!  I'll buy you a drink and we can commiserate about how easy the
next generation has it, what with all of those IPv6 gadgets just working and
making life so easy they don't even think about the network anymore ;).


/TJ



Re: [c-nsp] inet vrf

2010-03-16 Thread Tim Durack
On Tue, Mar 16, 2010 at 6:22 PM, Manu Chao linux.ya...@gmail.com wrote:
 This feature is a nice label allocation optimisation, are you using this
 command on RTR-2?

Yes, both routers of a pair. Seems to me like it should really be the
default behavior.

-- 
Tim:


Re: [c-nsp] Current BGP BCP for anchoring and announcing local prefixes

2010-03-16 Thread ML
On 3/16/2010 9:19 AM, Drew Weaver wrote:
 Not to thread-hijack, but how do you guys handle injecting /32s for 
 null/blackhole into your upstream providers?
 
 Using a tag on the static route? with a route-map that matches the tag? which 
 then adds a community?
 
 thanks,
 -Drew

*If* your upstream allows you to announce longer than /24, then a
static route + tag is a nice way to tack on extra communities in a
route-map to signal upstream that you want them to nullroute.
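
For example (a sketch; the tag, route-map name, community and addresses
are placeholders - the actual blackhole community and accepted prefix
length are whatever your upstream publishes):

ip route 198.51.100.66 255.255.255.255 Null0 tag 666
!
route-map STATIC-TO-BGP permit 20
 match tag 666
 set community 65001:666 additive
!
router bgp 65001
 address-family ipv4
  redistribute static route-map STATIC-TO-BGP

The existing redistribution route-map keeps exporting your normal
statics; the extra sequence just adds the blackhole community to routes
carrying the tag.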



Re: [c-nsp] Current BGP BCP for anchoring and announcing local prefixes

2010-03-16 Thread Joe Provo
On Mon, Mar 15, 2010 at 01:08:03PM -0400, Jason Lixfeld wrote:
 I've been in the habit of using communities to anchor and announce
 prefixes into BGP for years and I think my ways are somewhat dated.
 I'm looking for a bit of a refresh.  Wondering if anyone here has
 any thoughts ;)
[snip]

Nothing above hinges on network or aggregate-address statements. For 
covering prefixes, redistribute statics through a route-map, apply your
communities and job done.  You should also only carry your links, 
loopbacks, and edges in your IGP - larger prefixes should just be
in your iBGP mesh.

The real decision is where you anchor the larger prefixes, and that
depends on your external connectivity layout, prefix-stability goals,
etc.

Cheers!

Joe

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: [c-nsp] 3560 buffering

2010-03-16 Thread mb

Quoting Peter Rathlev pe...@rathlev.dk:


On Tue, 2010-03-16 at 20:34 +1000, m...@adv.gcomm.com.au wrote:

Wondering if you could share your current setup for buffers/queues etc
(We are currently running default)


Sure, I've pasted our default template below.


Much Appreciated - Thanks Peter.


-
This e-mail was sent via GCOMM WebMail http://www.gcomm.com.au/




[c-nsp] OSPF Routing

2010-03-16 Thread Anthony Gown - Comm-AG Networks P/L
Hi,

 

I have an OSPF neighbour relationship established between a 4503-SupV (Core)
and a 3750G-12S (Remote Site).

 

The 3750 can see the routes from the Core in its routing table, but the LAN
routes from the remote site are not present in the routing table of the
4503-SupV Core Switch.

Note: the LAN routes from the remote site are present in the 3750 routing
table.

 

Can someone help explain how to check whether OSPF on the 3750 is
advertising the LAN routes from the remote site to the Core Switch?
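
The usual checks would be along these lines, I believe (a sketch; exact
keywords and output vary by IOS version):

show ip protocols
show ip ospf interface brief
show ip ospf database router self-originate
show ip ospf database

The first two show which networks and interfaces OSPF covers on the 3750,
the third shows the LSAs it originates, and the last (run on the 4503)
shows whether those LSAs made it into the Core's database.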

 

Thanks

Anthony

 

 
