Re: [c-nsp] just installed a Huawei...
On (2011-07-25 21:51 -0700), Rogelio wrote:
> Hi, not sure if it's of any interest to this group, but I just installed a Huawei CX600 router this last week.

I'm very interested; for some reason there isn't much word in the community about using Huawei at L3.

> The worst part about the Huawei is probably the documentation. It's scattered all over the place, so if you want something simple (like telnet access), it's in a completely different PDF than if you want, say, VLAN configuration commands. Finding it all is a huge scavenger hunt.

Most vendors have quite appalling documentation. Enabling telnet on the CX can be an adventure, and ssh is interesting too, considering there are no ssh keywords; it is configured as 'secure telnet' :).

Anyhow, for a larger organisation the CLI is mostly irrelevant, as is the overhead of working out how to deploy a given product on a given platform (if it is possible at all); CLI complexity is not a factor, because everything would/should be done via automated provisioning systems. For smaller organisations, which lean on community support and have few internal resources, CSCO/JNPR are of course a very different proposition.

> But hey... for like 1/4 of the price or whatever (so I've heard), I'd say it's worth it. :b

It looks very good on paper.

--
++ytti
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] LAN (Branch) to LAN (HO) traffic is not flowing on ISDN (backuplink)
Hi All, I have an issue with backup link connectivity (ISDN). During testing we shut down the primary MPLS link to switch traffic to ISDN. The ISDN call is triggered, but LAN (Branch) to LAN (HO) pings and traffic are not flowing. Attached is the 'debug ip packet' output captured on the branch router during testing; the following 'encapsulation failed' errors are also observed. Can anyone please help me out?

JZBRT#39.594 UAE: IP: s=10.1.47.21 (GigabitEthernet0/0.10), d=192.168.2.25 (Dialer1), len 48, encapsulation failed
.Jun 28 18:05:38.466 UAE: IP: s=172.21.47.43 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), len 44, output JZBRT# feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:38.466 UAE: IP: s=172.21.47.43 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), g=172.20.18.20, len 44, forward
.Jun 28 18:05:38.466 UAE: IP: s=172.21.47.43 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), len 44, encapsulation failed
.Jun 28 18:05:38.466 UAE: IP: s=172.21.47.34 (GigabitEthernet0/0.20), d=172.20.18.20, len 44, input feature, MCI Check(66), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:38.466 UAE: IP: s=172.21.47.34 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), len 44, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:38.466 UAE: IP: s=172.21.47.34 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), g=172.20.18.20, len 44, forward
.Jun 28 18:05:38.466 UAE: IP: s=172.21.47.34 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), len 44, encapsulation failed
.Jun 28 18:05:38.758 UAE: IP: s=172.21.47.39 (GigabitEthernet0/0.20), d=172.20.1.31, len 44, input feature, MCI Check(66), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:38.758 UAE: IP: s=172.21.47.39 (GigabitEthernet0/0.20), d=172.20.1.31 (Dialer1), len 44, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:38.758 UAE: IP: s=172.21.47.39 (GigabitEthernet0/0.20), d=172.20.1.31 (Dialer1), g=172.20.1.31, len 44, forward
.Jun 28 18:05:38.758 UAE: IP: s=172.21.47.39 (GigabitEthernet0/0.20), d=172.20.1.31 (Dialer1), len 44, encapsulation failed
.Jun 28 18:05:39.594 UAE: IP: s=10.1.47.21 (GigabitEthernet0/0.10), d=192.168.2.25, len 48, input feature, MCI Check(66), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.594 UAE: IP: s=10.1.47.21 (GigabitEthernet0/0.10), d=192.168.2.25 (Dialer1), len 48, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.594 UAE: IP: s=10.1.47.21 (GigabitEthernet0/0.10), d=192.168.2.25 (Dialer1), g=192.168.2.25, len 48, forward
.Jun 28 18:05: JZBRT#39.594 UAE: IP: s=10.1.47.21 (GigabitEthernet0/0.10), d=192.168.2.25 (Dialer1), len 48, encapsulation failed
.Jun 28 18:05:39.594 UAE: IP: s=10.48.47.250 (local), d=224.0.0.10 (GigabitEthernet0/0.50), len 60, sending broad/multicast
.Jun 28 18:05:39.594 UAE: IP: s=10.48.47.250 (local), d=224.0.0.10 (GigabitEthernet0/0.50), len 60, sending full packet
.Jun 28 18:05:39.598 UAE: IP: s=10.1.47.250 (local), d=192.168.243.185 (Dialer1), len 576, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.602 UAE: IP: s=10.1.47.250 (local), d=192.168.243.185 (Dialer1), len 576, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.602 UAE: IP: s=10.1.47.250 (local), d=192.168.243.185 (Dialer1), len 576, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.602 UAE: IP: s=10.1.47.250 (local), d=192.168.243.185 (Dialer1), len 576, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.602 UAE: IP: s=10.1.47.250 (local), d=192.168.243.185 (Dialer1), len 576, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.606 UAE: IP: s=172.21.47.22 (GigabitEthernet0/0.20), d=172.20.18.20, len 44, input feature, MCI Check(66), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.606 UAE: IP: s=172.21.47.22 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), len 44, output feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:39.606 UAE: IP: s=172.21.47.22 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), g=172.20.18.20, len 44, forward
.Jun 28 18:05:39.606 UAE: IP: s=172.21.47.22 (GigabitEthernet0/0.20), d=172.20.18.20 (Dialer1), len 44, encapsulation failed
.Jun 28 18:05:39.606 UAE: IP: s=172.21.47.30 (GigabitEthernet0/0.20), d=172.20.18.20, len 44, input feature, Dialer idle reset(70), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
.Jun 28 18:05:41.034 UAE: IP: s=172.21.47.45
[c-nsp] Round trip time internet providers
Hi, We are multihomed to several transit providers around our network, and they don't provide us with graphs of RTT broken down by geographical location. Is there a way to know and obtain such information? Let's say, for example, we want to compare UK routes originated from two providers, Interoute and OpenTransit, so we can set preferences via communities and so on, depending on the best RTT results. Regards alexandre durand ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] just installed a Huawei...
Hi,

On 26 July 2011 16:51, Rogelio scubac...@gmail.com wrote: {cut}
> The worst part about the Huawei is probably the documentation. It's scattered all over the place, so if you want something simple (like telnet access), it's in a completely different PDF than if you want, say, VLAN configuration commands. Finding it all is a huge scavenger hunt.

Basic documentation is there (if you know where to find it on their website); sometimes just finding the right feature requires a mindset change :-) . Debugging and troubleshooting are the real problems IMHO. Just have a look in the logs to see what's to come: messages are cryptic and often misleading. Some aspects cannot really be debugged without assistance from Huawei. So if you don't have a good Huawei guy nearby that you can run your problems past, it might be a challenge. The cost of equipment is not just the initial cost, it's also the cost of support, both internal and external. It's still quite difficult to find people with Huawei experience.

kind regards Pshem
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] Round trip time internet providers
$quoted_author = Alexandre Durand ; We manage multi-home transit providers around our network and they don t provide us RTT graphs about RTT depending on geographical locations. Is there a way to know and get such information ? Let s say for example, we want to compare UK routes originated from 2 providers interoute and opentransit ,so we can make preferences over communities and so on ... depending on best RTT results. Given the number of prefixes in a full feed it's not surprising that providers don't provide this. You need to select remote prefixes that are important to you or provide a representative sample for a particular network or geographic region and then put in place your own monitoring. It's quite common to use communities to select routes on more deterministic parameters like where the route was learnt or who it was learnt from, and then layer on top some custom tweaking for prefixes where the result is still sub-optimal. But there is no one size fits all approach which is why you need to invest time and resources in implementing this for your network. cheers Marty ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
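For illustration, a minimal IOS sketch of the community-based selection Marty describes: routes the upstream has tagged with a (hypothetical) community for UK-learned prefixes get a higher local-preference, with a per-prefix override layered on top for destinations where your own measurements say otherwise. The community value, names and prefix below are placeholders, not anything Interoute or OpenTransit actually advertise.

ip community-list standard INTEROUTE-UK permit 65001:100
!
ip prefix-list RTT-OVERRIDE seq 5 permit 203.0.113.0/24
!
route-map INTEROUTE-IN permit 10
 description per-prefix tweak where measurements show the other provider is better
 match ip address prefix-list RTT-OVERRIDE
 set local-preference 80
route-map INTEROUTE-IN permit 20
 description prefer routes the provider tagged as learned in the UK
 match community INTEROUTE-UK
 set local-preference 200
route-map INTEROUTE-IN permit 30
 set local-preference 100

It would then be applied inbound with 'neighbor <interoute-peer> route-map INTEROUTE-IN in'; the override list is fed by whatever monitoring you put in place.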
Re: [c-nsp] Round trip time internet providers
Hi, We used to use SmokePing to learn the RTT towards different providers and then make decisions accordingly. You may use any other tool to study the important routes before going ahead and implementing the changes. Regards, Farhan Jaffer

On Tue, Jul 26, 2011 at 1:10 PM, Alexandre Durand alexandre.dur...@tasfrance.com wrote:
> Hi, We manage multi-home transit providers around our network and they don't provide us RTT graphs depending on geographical locations. Is there a way to know and get such information? Let's say for example, we want to compare UK routes originated from 2 providers, Interoute and OpenTransit, so we can make preferences over communities and so on... depending on best RTT results. Regards alexandre durand
> ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/

___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
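Along the same lines as SmokePing ("any other tool"), RTT towards a handful of representative destinations can also be measured from the router itself with IOS IP SLA. A minimal sketch; the target address and source interface are placeholders, and on older trains the same commands live under 'ip sla monitor':

ip sla 10
 icmp-echo 203.0.113.1 source-interface GigabitEthernet0/1
 frequency 60
!
ip sla schedule 10 life forever start-time now

The results ('show ip sla statistics 10') can be graphed via SNMP, giving a per-provider RTT baseline before any community/local-preference changes are made.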
[c-nsp] 3750E cluster replacement
We have a network based on a VSS with 20G channels to 3750E-24 clusters top-of-rack. We are seeing a lot of discards on the cluster which connects to our NetApp SANs. I suspect this is because of the small buffers in the 3750E switches and the growth of our traffic to the SAN, especially iSCSI traffic. I'm considering replacing this cluster with something else, but I'm not sure what to put there. I read that the 4900M has larger buffers and would offer the needed mix of 1G and 10G ports, but you can't cluster these switches, and given the importance of the connected devices, that is not really an option. Buffering on the Nexus 55xx also seems better, and there you have the vPC possibility. Do you consider this the way to go, or does anyone else have a suggestion for a (clustered) device to replace this 3750E cluster? Wim Holemans Netwerkdienst Universiteit Antwerpen Network Services University of Antwerp ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] 3750E cluster replacement
Why's it important you maintain a cluster? You're absolutely correct, 3750's are weak ToR switches. I would go with the 5500 if you find yourself looking toward a wider nexus deployment in the next 18-36 months. On Tue, Jul 26, 2011 at 7:03 AM, Holemans Wim wim.holem...@ua.ac.be wrote: We have a network based on a VSS with 20G channels to 3750E-24 clusters top-of-rack. We are seeing a lot of discards on the cluster which connects to our NetApp SANs. I suspect this is because of the small buffers in the 3750E switches and the growth of our traffic to the SAN, especially ISCI traffic. I'm considering replacing this cluster with something else, but I'm not sure what to put there. I read that 4900M have larger buffer and this would offer the needed mix of 1G en 10G ports but you can't cluster these switches and seen the importance of the connected devices, this is not really an option. Buffering on nexus 55xx seems also better and there you have the vPc possibility. Do you consider this the way to go or has anyone else a suggestion for a (clustered) device to replace this 3750E cluster ? Wim Holemans Netwerkdienst Universiteit Antwerpen Network Services University of Antwerp ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/ ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] Round trip time internet providers
You can take help from RIPE TTM and Atlas projects. They are quite handy for this purpose. Regards, Aftab A. Siddiqui On Tue, Jul 26, 2011 at 3:16 PM, Farhan Jaffer bandh...@gmail.com wrote: Hi, We used to use SmokePing to learn the RTT from different providers then accordingly make decision. You may use any other tool to study the important routes before going to implement the changes. Regards, Farhan Jaffer On Tue, Jul 26, 2011 at 1:10 PM, Alexandre Durand alexandre.dur...@tasfrance.com wrote: Hi, We manage multi-home transit providers around our network and they don t provide us RTT graphs about RTT depending on geographical locations. Is there a way to know and get such information ? Let s say for example, we want to compare UK routes originated from 2 providers interoute and opentransit ,so we can make preferences over communities and so on ... depending on best RTT results. Regards alexandre durand __**_ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/**mailman/listinfo/cisco-nsp https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/**pipermail/cisco-nsp/ http://puck.nether.net/pipermail/cisco-nsp/ ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/ ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] 3750E cluster replacement
We use clusters to protect us from hardware failures ; all servers and SAN are dual connected to both switches. We have plans to install nexus in another server room, we could install 5500s in both and use them as interconnect (replacing the interconnects now made with 3750E). Wim Holemans Netwerkdienst Universiteit Antwerpen Network Services University of Antwerp From: chandler.bass...@gmail.com [mailto:chandler.bass...@gmail.com] On Behalf Of Chandler Bassett Sent: dinsdag 26 juli 2011 13:14 To: Holemans Wim Cc: cisco-nsp Subject: Re: [c-nsp] 3750E cluster replacement Why's it important you maintain a cluster? You're absolutely correct, 3750's are weak ToR switches. I would go with the 5500 if you find yourself looking toward a wider nexus deployment in the next 18-36 months. On Tue, Jul 26, 2011 at 7:03 AM, Holemans Wim wim.holem...@ua.ac.bemailto:wim.holem...@ua.ac.be wrote: We have a network based on a VSS with 20G channels to 3750E-24 clusters top-of-rack. We are seeing a lot of discards on the cluster which connects to our NetApp SANs. I suspect this is because of the small buffers in the 3750E switches and the growth of our traffic to the SAN, especially ISCI traffic. I'm considering replacing this cluster with something else, but I'm not sure what to put there. I read that 4900M have larger buffer and this would offer the needed mix of 1G en 10G ports but you can't cluster these switches and seen the importance of the connected devices, this is not really an option. Buffering on nexus 55xx seems also better and there you have the vPc possibility. Do you consider this the way to go or has anyone else a suggestion for a (clustered) device to replace this 3750E cluster ? Wim Holemans Netwerkdienst Universiteit Antwerpen Network Services University of Antwerp ___ cisco-nsp mailing list cisco-nsp@puck.nether.netmailto:cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/ ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] 3750E cluster replacement
$quoted_author = Holemans Wim ; We use clusters to protect us from hardware failures ; all servers and SAN are dual connected to both switches. You don't need the clustering if you run active-backup. It's only LACP that requires a stack or virtual chassis. cheers Marty ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
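To make the distinction concrete: active-backup NIC teaming needs nothing special on the switch side, whereas an LACP bundle split across two chassis only works if those chassis behave as one (stack, VSS, vPC). A rough sketch of a cross-stack channel on a 3750 stack follows; interface numbers are placeholders, and if I recall correctly older 3750 software only allowed cross-stack EtherChannels in mode 'on' rather than LACP, so check the release notes for your train.

interface Port-channel10
 description dual-homed server / NAS
 switchport mode trunk
!
interface GigabitEthernet1/0/1
 description server NIC 1 (stack member 1)
 switchport mode trunk
 channel-group 10 mode active
!
interface GigabitEthernet2/0/1
 description server NIC 2 (stack member 2)
 switchport mode trunk
 channel-group 10 mode active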
Re: [c-nsp] Round trip time internet providers
Hi, Cisco Performance Routing can help you to make routing decisions based on latency, packet loss, etc...
http://www.cisco.com/en/US/products/ps8787/products_ios_protocol_option_home.html
Solution from a different vendor:
http://www.internap.com/business-internet-connectivity-services/route-optimization-flow-control/
Regards, Sergio Ramos

-Original Message- From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Alexandre Durand Sent: 26 July 2011 10:11 To: cisco-nsp@puck.nether.net Subject: [c-nsp] Round trip time internet providers
> Hi, We manage multi-home transit providers around our network and they don't provide us RTT graphs depending on geographical locations. Is there a way to know and get such information? Let's say for example, we want to compare UK routes originated from 2 providers, Interoute and OpenTransit, so we can make preferences over communities and so on... depending on best RTT results. Regards alexandre durand
> ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/

___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
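For anyone wanting a feel for the scale of a PfR deployment, a very rough single-router sketch (master and border on the same box) is below. It is from memory and heavily trimmed: names, addresses and the key are placeholders, the 'oer' keywords became 'pfr' in later IOS trains, and the actual policy (delay/loss thresholds, which prefixes to learn) needs to come from the documentation Sergio links.

key chain PFR-KEY
 key 1
  key-string placeholder-secret
!
oer master
 border 10.255.0.1 key-chain PFR-KEY
  interface GigabitEthernet0/0 external
  interface GigabitEthernet0/1 external
  interface GigabitEthernet0/2 internal
 learn
  throughput
  delay
 mode route control
!
oer border
 local Loopback0
 master 10.255.0.1 key-chain PFR-KEY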
[c-nsp] Re LAN (Branch) to LAN (HO) traffic is not flowing on ISDN (backuplink)
On Tue, Jul 26, 2011 at 3:58 AM, cisco-nsp-requ...@puck.nether.net wrote: [c-nsp] LAN (Branch) to LAN (HO) traffic is not flowing on ISDN(backuplink) I believe encapsulation failed is due to a layer two issue on the router. It is unable to complete the packet, ie. cannot find the mac address (if on ethernet). Is the ISDN up? What does your sho ip route look like? Take a look at the ISDN setup, it looks like a misconfiguration on the port. -- CJ http://convergingontheedge.com http://www.convergingontheedge.com ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
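On a dialer interface, 'encapsulation failed' typically means the router has nothing (dialer map or dialer string) to resolve the next hop to a call, so it cannot build the layer-2 frame even though the route points at Dialer1. A rough sketch of the legacy-DDR pieces worth checking follows; the addresses, peer name and ISDN number are placeholders, not taken from the poster's configuration:

interface BRI0/0/0
 no ip address
 encapsulation ppp
 dialer pool-member 1
 isdn switch-type basic-net3
!
interface Dialer1
 ip address 192.168.250.2 255.255.255.252
 encapsulation ppp
 dialer pool 1
 dialer remote-name HO-ROUTER
 dialer string 042345678
 dialer idle-timeout 300
 dialer-group 1
 ppp authentication chap callin
!
! "interesting traffic" allowed to bring up and hold the call
dialer-list 1 protocol ip permit

With classic dialer maps instead of dialer profiles, the equivalent is 'dialer map ip <next-hop> name <peer> broadcast <number>'; a missing map or string for the next-hop address produces exactly the 'encapsulation failed' seen in the debug.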
Re: [c-nsp] 3750E cluster replacement
We do run LACP on most of our server and NAS connections. We also need these 2G channels towards our SAN to accommodate the accumulated iSCSI traffic coming from different servers. The 3750Es also have only one power supply, so we cluster them and use port-channels to protect against hardware/power failures. Even when replacing the 3750Es with Nexus 55xx (if needed combined with FEX) we intend to double them and have port-channels on both. Wim Holemans Netwerkdienst Universiteit Antwerpen Network Services University of Antwerp

-Original Message- From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Martin Barry Sent: dinsdag 26 juli 2011 14:12 To: cisco-nsp@puck.nether.net Subject: Re: [c-nsp] 3750E cluster replacement
$quoted_author = Holemans Wim ;
> We use clusters to protect us from hardware failures ; all servers and SAN are dual connected to both switches.
You don't need the clustering if you run active-backup. It's only LACP that requires a stack or virtual chassis.
cheers Marty
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/ ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] Re LAN (Branch) to LAN (HO) traffic is not flowing on ISDN (backuplink)
Dear CJ, Thanks for your reply. Yes, the ISDN is up. Please find attached the topology, our observations during testing, and the 'show run' of the branch router.

From: cjinfant...@gmail.com Date: Tue, 26 Jul 2011 08:44:57 -0400 To: cisco-nsp@puck.nether.net Subject: [c-nsp] Re LAN (Branch) to LAN (HO) traffic is not flowing on ISDN (backuplink)
> On Tue, Jul 26, 2011 at 3:58 AM, cisco-nsp-requ...@puck.nether.net wrote: [c-nsp] LAN (Branch) to LAN (HO) traffic is not flowing on ISDN (backuplink)
> I believe encapsulation failed is due to a layer two issue on the router. It is unable to complete the packet, ie. cannot find the mac address (if on ethernet). Is the ISDN up? What does your sho ip route look like? Take a look at the ISDN setup, it looks like a misconfiguration on the port.
> -- CJ http://convergingontheedge.com http://www.convergingontheedge.com
> ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/

sh run
Building configuration...
Current configuration : 11217 bytes
!
! Last configuration change at 17:10:36 UAE Wed Jun 29 2011 by local
!
version 15.0
service nagle
no service pad
service tcp-keepalives-in
service tcp-keepalives-out
service timestamps debug datetime msec localtime show-timezone
service timestamps log datetime msec localtime show-timezone
service password-encryption
!
boot-start-marker
boot-end-marker
!
no aaa new-model
clock timezone UAE 4
!
dot11 syslog
no ip source-route
!
ip cef
no ip dhcp use vrf connected
ip dhcp excluded-address 172.21.47.1 172.21.47.19
ip dhcp excluded-address 172.21.47.246 172.21.47.254
!
ip dhcp pool VOICE
 network 172.21.47.0 255.255.255.0
 option 150 ip 172.20.1.30
 default-router 172.21.47.250
!
no ip bootp server
no ip domain lookup
ip domain name rakbank.co.ae
ip name-server 192.168.2.23
ip multicast-routing
login block-for 120 attempts 5 within 30
login on-failure every 5
no ipv6 cef
!
multilink bundle-name authenticated
!
isdn switch-type basic-net3
!
voice call convert-discpi-to-prog
voice rtp send-recv
!
voice class codec 1
 codec preference 1 g711ulaw
 codec preference 2 g711alaw
 codec preference 3 g729br8
!
voice-card 0
 dsp services dspfarm
!
license udi pid CISCO2851 sn FHK1444F1YB
username local privilege 15 secret 5 $1$SsQl$gUAsMHeGAv5lnAIuWrhsE0
username rakadmin privilege 15 secret 5 $1$..6M$LrousJc.ehPwZjv59HkVG0
username HONWANRT02 password 7 111B180E15130507557878
!
redundancy
!
crypto isakmp policy 100
 encr 3des
 authentication pre-share
 group 2
crypto isakmp key dmRAK#1024!vpn address 0.0.0.0 0.0.0.0
crypto isakmp keepalive 30
!
crypto ipsec transform-set raktrans esp-3des esp-sha-hmac
 mode transport
!
crypto ipsec profile DMVPN
 set security-association lifetime seconds 1800
 set transform-set raktrans
!
interface Tunnel1
 description Primary DMVPN Cloud 1
 bandwidth 4096
 ip address 192.168.247.114 255.255.255.0
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip mtu 1400
 no ip next-hop-self eigrp 100
 ip nhrp authentication rakbank
 ip nhrp map multicast 192.168.241.2
 ip nhrp map 192.168.247.5 192.168.241.2
 ip nhrp network-id 101
 ip nhrp holdtime 180
 ip nhrp nhs 192.168.247.5
 ip tcp adjust-mss 1360
 no ip split-horizon eigrp 100
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 1001
 tunnel protection ipsec profile DMVPN shared
!
interface Tunnel2
 description Primary DMVPN Cloud 2
 bandwidth 4096
 ip address 192.168.246.114 255.255.255.0
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip mtu 1400
 no ip next-hop-self eigrp 100
 ip nhrp authentication rakbank
 ip nhrp map multicast 192.168.244.2
 ip nhrp map 192.168.246.5 192.168.244.2
 ip nhrp network-id 100
 ip nhrp holdtime 180
 ip nhrp nhs 192.168.246.5
 ip tcp adjust-mss 1360
 no ip split-horizon eigrp 100
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 1000
 tunnel protection ipsec profile DMVPN shared
!
interface Tunnel3
 description DR DMVPN Cloud 1
 bandwidth 4096
 ip address 192.168.248.114 255.255.255.0
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip mtu 1400
 no ip next-hop-self eigrp 100
 ip nhrp authentication rakbank
 ip nhrp map multicast 192.168.241.82
 ip nhrp map 192.168.248.1 192.168.241.82
 ip nhrp network-id 102
 ip nhrp holdtime 180
 ip nhrp nhs 192.168.248.1
 ip tcp adjust-mss 1360
 no ip split-horizon eigrp 100
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 1002
 tunnel protection ipsec profile DMVPN shared
!
interface Tunnel4
 description DR DMVPN Cloud 2
 bandwidth 4096
 ip address 192.168.249.114 255.255.255.0
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip mtu 1400
 no ip next-hop-self eigrp 100
 ip nhrp authentication rakbank
 ip nhrp map
Re: [c-nsp] Round trip time internet providers
Hi, Thank you all for your answers. I'll have a look at all these solutions. OER/PfR looks great but quite complex to implement; I might try it first in a lab environment and maybe then in production. Regards, alexandre durand

On 26/07/11 14:43, Sergio Ramos wrote:
> Hi, Cisco Performance Routing can help you to make routing decisions based on latency, packet loss, etc...
> http://www.cisco.com/en/US/products/ps8787/products_ios_protocol_option_home.html
> Solution from a different vendor:
> http://www.internap.com/business-internet-connectivity-services/route-optimization-flow-control/
> Regards, Sergio Ramos
> -Original Message- From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Alexandre Durand Sent: 26 July 2011 10:11 To: cisco-nsp@puck.nether.net Subject: [c-nsp] Round trip time internet providers
> Hi, We manage multi-home transit providers around our network and they don't provide us RTT graphs depending on geographical locations. Is there a way to know and get such information? Let's say for example, we want to compare UK routes originated from 2 providers, Interoute and OpenTransit, so we can make preferences over communities and so on... depending on best RTT results. Regards alexandre durand
> ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/

--
Alexandre DURAND TAS FRANCE WTC 1-K, 1300 route des Crêtes 06560 Valbonne Sophia Antipolis Phone: +33 (0)4 92 94 56 93 Fax: +33 (0)4 92 94 33 99 Web: http://www.tasfrance.com Email: alexandre.dur...@tasfrance.com peering: http://as8554.peeringdb.com
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] ASA 8.3/8.4 management issues...
I have some remote sites running off of ASA 5505s, and an existing VPN cluster running 8.4(2). For consistency's sake, I was trying to update the 5505s to 8.4(2) -- had one on 7.2 and one on 8.1. Everything appears to be working on them except management sessions (ssh or https or ASDM); they simply time out. They are set up with 'management-access inside'. When I attempt to connect to the inside IP address (through the site-to-site tunnel) I can see the connection established... but then nothing else. When I attempt to ping the inside IP address (icmp permitted on both inside/outside interfaces) I can see the ICMP connection established... but then nothing else. From a host inside the 5505, I can ping the campus address (over the tunnel) attempting the connections, but not vice-versa. Ring any bells? Jeff ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
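For reference, the pieces that normally have to line up for managing a remote ASA across a site-to-site tunnel look roughly like this (8.x syntax; the subnet is a placeholder for the campus management network). The crypto ACL on both ends also has to include traffic between that network and the ASA's inside address.

management-access inside
!
ssh 10.10.0.0 255.255.0.0 inside
http server enable
http 10.10.0.0 255.255.0.0 inside
icmp permit any inside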
Re: [c-nsp] ASA 8.3/8.4 management issues...
Jeff, On Tue, Jul 26, 2011 at 10:44:19, Jeff Kell wrote: Subject: [c-nsp] ASA 8.3/8.4 management issues... I have some remote sites running off of ASA 5505s, and an existing VPN cluster running 8.4(2). For consistency's sake, I was trying to update the 5505s to 8.4(2) -- had one on 7.2 and one on 8.1. I've rolled everything back to 8.4.1 interim. I have an open bug for 8.4(2) relating to remote access VPN tunnels traversing other tunnels (same-security intra-interface). I would switch back to 8.4.1 and see if your problem follows. If you're interested in the bugID, I'll let you know once one is generated. -ryan ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] ASA 8.3/8.4 management issues...
On 7/26/2011 10:58 AM, Ryan West wrote: On Tue, Jul 26, 2011 at 10:44:19, Jeff Kell wrote: Subject: [c-nsp] ASA 8.3/8.4 management issues... I have some remote sites running off of ASA 5505s, and an existing VPN cluster running 8.4(2). I've rolled everything back to 8.4.1 interim. I have an open bug for 8.4(2) relating to remote access VPN tunnels traversing other tunnels (same-security intra-interface). I would switch back to 8.4.1 and see if your problem follows. If you're interested in the bugID, I'll let you know once one is generated. Turns out my issue was the route-lookup clause on the new NAT configuration commands. Seems that the update conversion of legacy NAT exempt ranges does not include that by default. Working fine now, haven't hit the intra-interface bug [yet]. Jeff ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
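For anyone else hitting the same thing after an 8.2 -> 8.3/8.4 upgrade, the migrated identity ("NAT exempt") rules can be given the route-lookup keyword by hand. A hedged sketch with hypothetical object names and subnets:

object network OBJ-INSIDE-LAN
 subnet 192.168.50.0 255.255.255.0
object network OBJ-CAMPUS
 subnet 10.10.0.0 255.255.0.0
!
nat (inside,outside) source static OBJ-INSIDE-LAN OBJ-INSIDE-LAN destination static OBJ-CAMPUS OBJ-CAMPUS no-proxy-arp route-lookup

Without route-lookup, the ASA picks the egress interface from the NAT rule rather than the routing table, which is what broke the to-the-box management traffic here.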
[c-nsp] iSCSI, port buffers, and small switches
Anyone, Been following the 3750 vs. other switches for datacenter use thread here. We're putting together a small system using a couple IBM DS3512 storage units, with 6 physical hosts containing maybe 25 VMs total. My original plan was etherchannel (LACP) from the hosts to a 3750X stack of 2 switches, and LACP to the SANs as well. SAN vs. user traffic probably on separate channels. Everything is 1 gig copper. I'm assuming all hosts and SAN do channeling, need to confirm with our server guy. Anyway, I think in the grand scheme of things, this is a pretty small setup. This port buffer issue has me a bit concerned however. I'd like to keep the etherchannel and 2 physical chassis for load-balancing plus redundancy reasons. Seems like my options are:

- Pair of Nexus 5K switches - price probably too high, no immediate need for 10 gig.
- Pair of 4948 - does away with port buffer issue, but no multichassis etherchannel
- 4507, dual sups, dual line cards - This seems like it would work, but pricey as well.
- 3750X stack - backplane seems fine, port buffers seem to be the worry.

I guess what I'm asking, for those with 3750/3750E/3750X and seeing performance issues, is it because you're a much larger setup, or am I likely to regret the 3750s as well? I don't have much baseline on this now, as we're adding more hosts and VMs, so current numbers will grow. Thanks, Chuck ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
On 26/07/11 16:16, Chuck Church wrote: I guess what I'm asking, for those with 3750/3750E/3750X and seeing performance issues, is it because you're a much larger setup, or am I likely to regret the 3750s as well? I don't have much baseline on this now, as we're adding more hosts and VMs, so current numbers will grow. We see problems with 3750 buffering at 100meg, on lightly loaded switches acting in a CPE role (2-4 ports live). I would avoid them in the datacentre, especially for storage. ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] Re LAN (Branch) to LAN (HO) traffic is not flowing on ISDN (backuplink)
On 7/26/11 6:01 AM, Farooq Razzaque wrote: Dear CJ Tanks for your reply yes the ISDN is up. Please find attached Topology observation during testing, show run of branch. You might want to go ahead and change your passwords, too. ~Seth ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] Common uRPF setting on all interfaces
IPv6 uRPF is supposedly available on the Sup2T, with per-interface configuration. I don't have my hands on one, so I cannot verify. Mack

-Original Message- From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Gert Doering Sent: Monday, July 25, 2011 2:34 PM To: Ross Halliday Cc: cisco-nsp@puck.nether.net Subject: Re: [c-nsp] Common uRPF setting on all interfaces
Hi,
On Mon, Jul 25, 2011 at 03:04:53PM -0400, Ross Halliday wrote:
> Has anyone seen this before? I did a couple of quick searches but my Google-fu is letting me down. Is there some secret that only one possible stanza for uRPF is allowed on this box, unless the line isn't present?
Exactly this. The box can only do a single mode of uRPF on all interfaces that have uRPF active. Hardware limitation. (And no IPv6 uRPF in hardware at all.)
gert
-- USENET is *not* the non-clickable part of WWW! //www.muc.de/~gert/ Gert Doering - Munich, Germany g...@greenie.muc.de fax: +49-89-35655025 g...@net.informatik.tu-muenchen.de
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
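For reference, the per-interface uRPF knobs being discussed look like this in IOS; on the platform Gert describes, every interface that enables uRPF has to use the same mode, so mixing the two forms below on one box is rejected. Interface names are placeholders.

interface GigabitEthernet1/1
 description single-homed customer port - strict mode
 ip verify unicast source reachable-via rx
!
interface GigabitEthernet1/2
 description multihomed/asymmetric peer - loose mode
 ip verify unicast source reachable-via any

An optional 'allow-default' keyword on either form also counts a default route as a valid reverse path.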
Re: [c-nsp] iSCSI, port buffers, and small switches
Thanks, Phil. Sounds like I really need the 4900 then. Thanks, Chuck On Jul 26, 2011 11:31 AM, Phil Mayers p.may...@imperial.ac.uk wrote: ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
Hello Chuck, The customers I have come across with buffering problems on 3750 were either not effectively configuring QoS - sometimes the best config is no QoS actually. The buffers get divided up when you enable mls qos and if you don't use all the queues as they are set up, they will not be optimal, especially for the default class. The big scenarios you need buffering are speed difference (regardless of average rate!) or many-to-one host at the same time, otherwise you don't need significant buffers at all. Since you have the same speed interfaces, how many transactions do you think will be happening at once? That's where I would focus. Regards, John Gill cisco On 7/26/11 11:16 AM, Chuck Church wrote: Anyone, Been following the 3750 vs. other switches for datacenter use thread here. We're putting together a small system using a couple IBM DS3512 storage units, with 6 physical hosts containing maybe 25 VMs total. My original plan was etherchannel (LACP) from the hosts to a 3750X stack of 2 switches, and LACP to the SANs as well. SAN vs. user traffic probably on separate channels. Everything is 1 gig copper. I'm assuming all hosts and SAN do channeling, need to confirm with our server guy. Anyway, I think in the grand scheme of things, this is a pretty small setup. This port buffer issue has me a bit concerned however. I'd like to keep the etherchannel and 2 physical chassis for load-balancing plus redundancy reasons. Seems like my options are: Pair of Nexus 5K switches - price probably too high, no immediate need for 10 gig. Pair of 4948 - does away with port buffer issue, but no multichassis etherchannel 4507, dual sups, dual line cards - This seems like it would work, but pricey as well. 3750X stack - backplane seems fine, port buffers seem to be the worry. I guess what I'm asking, for those with 3750/3750E/3750X and seeing performance issues, is it because you're a much larger setup, or am I likely to regret the 3750s as well? I don't have much baseline on this now, as we're adding more hosts and VMs, so current numbers will grow. Thanks, Chuck ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
On Tue, 2011-07-26 at 14:47 -0400, John Gill wrote: The customers I have come across with buffering problems on 3750 were either not effectively configuring QoS - sometimes the best config is no QoS actually. The buffers get divided up when you enable mls qos and if you don't use all the queues as they are set up, they will not be optimal, especially for the default class. Hmm... this specific part has me a little confused. As the archives mention in detail the apparently too small buffers on the 3560/3750 family of switches is not an uncommon problem. We have ourselves worked hard to come up with a solution, since replacing several hundreds of switches would be prohibitively expensive. The most successful of the solutions has been to actually configure QoS and re-partition the buffers. This makes me think that a switch in this family with no mls qos does not actually use the available buffers fully. I've talked with several SEs and some marketing boss (following up on a survey) about this, and none have been able to give any answers without having me sign NDAs. I'd rather not do that, since we have an acceptable solution until these switches are naturally replaced. We haven't fully tested the newest software (12.2(55)SE and later where the switch does a lot of ASIC reprogramming on startup) to see if the problem persists. The big scenarios you need buffering are speed difference (regardless of average rate!) or many-to-one host at the same time, otherwise you don't need significant buffers at all. This is an important point IMO. If you don't need any kind of traffic differentiation and have ample bandwidth there should be no problems with small buffers. -- Peter ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
Hi, On Tue, Jul 26, 2011 at 09:38:56PM +0200, Peter Rathlev wrote: This is an important point IMO. If you don't need any kind of traffic differentiation and have ample bandwidth there should be no problems with small buffers. Unless you have many-to-one traffic with micro bursts. Which was our problem with the 2960 - the 1-second-average of the ingress ports would have nicely fit the egress ports (with a big margin), but the traffic was bursty enough to cause significant egress drops. gert -- USENET is *not* the non-clickable part of WWW! //www.muc.de/~gert/ Gert Doering - Munich, Germany g...@greenie.muc.de fax: +49-89-35655025g...@net.informatik.tu-muenchen.de pgp23Yusyh05q.pgp Description: PGP signature ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
I have also played with the QoS buffers and repartitioning on C3750E switches. It helped, but I couldn't get the drops to go away completely. We do have 10GE uplinks with 1GE server ports, so we might need the buffering. One tip I have is also the following: although the C3750E switch is a 'shared buffer switch', each ASIC does have its own memory on board. Normally 4 GE ports use 1 ASIC (you can see the mapping with 'sh platform pm port-asic' or something like that). If you really have a critical server, connect it to its own ASIC and don't use the other three ports. Then the server has all the buffer space to itself. Also, don't set the QoS buffer division too small, and let each queue burst up to the maximum size (the full ASIC memory). This was not the default setting in some IOS versions, hence the problems. If you set the default queue too small or don't let it burst up to full memory, I have noticed drops early on, even starting from 3 Mbps at 3000-4000 pkts/sec... regards, Geert

2011/7/26 Peter Rathlev pe...@rathlev.dk
> On Tue, 2011-07-26 at 14:47 -0400, John Gill wrote: The customers I have come across with buffering problems on 3750 were either not effectively configuring QoS - sometimes the best config is no QoS actually. The buffers get divided up when you enable mls qos and if you don't use all the queues as they are set up, they will not be optimal, especially for the default class.
> Hmm... this specific part has me a little confused. As the archives mention in detail the apparently too small buffers on the 3560/3750 family of switches is not an uncommon problem. We have ourselves worked hard to come up with a solution, since replacing several hundreds of switches would be prohibitively expensive. The most successful of the solutions has been to actually configure QoS and re-partition the buffers. This makes me think that a switch in this family with no mls qos does not actually use the available buffers fully. I've talked with several SEs and some marketing boss (following up on a survey) about this, and none have been able to give any answers without having me sign NDAs. I'd rather not do that, since we have an acceptable solution until these switches are naturally replaced. We haven't fully tested the newest software (12.2(55)SE and later where the switch does a lot of ASIC reprogramming on startup) to see if the problem persists.
> On Tue, 2011-07-26 at 14:47 -0400, John Gill wrote: The big scenarios you need buffering are speed difference (regardless of average rate!) or many-to-one host at the same time, otherwise you don't need significant buffers at all.
> This is an important point IMO. If you don't need any kind of traffic differentiation and have ample bandwidth there should be no problems with small buffers.
> -- Peter
> ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
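A rough sketch of the kind of repartitioning Geert describes, for a 3750E/3750X on a reasonably recent 12.2SE image (older code caps the thresholds below 3200). The percentages are illustrative only; queue 2 is assumed here to be where the bulk of the default/iSCSI traffic lands, and 'show mls qos interface <port> statistics' is the place to confirm which queue is actually dropping before copying any of this.

mls qos
!
! give the default queue (queue 2 here) most of the reserved buffer ...
mls qos queue-set output 1 buffers 10 70 10 10
!
! ... and let it borrow from the common pool up to the maximum
mls qos queue-set output 1 threshold 2 3200 3200 100 3200
!
interface GigabitEthernet1/0/1
 queue-set 1

Note that simply typing 'mls qos' with no further tuning changes queueing behaviour on every port, which is exactly the trap discussed earlier in the thread.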
Re: [c-nsp] iSCSI, port buffers, and small switches
On 26/07/2011 19:47, John Gill wrote:
> The big scenarios you need buffering are speed difference (regardless of average rate!) or many-to-one host at the same time, otherwise you don't need significant buffers at all.

It's still quite common these days to have a 10G path from the SAN/NAS to the network, but 1G access from the network to the client machines, modelled using a 10G distribution layer switch connected to both the SAN/NAS and a bunch of ToR switches. The ToR switches would have 10G uplinks, but 1G downlinks; and unless these ToR switches have sufficient buffering capabilities, packet loss may occur on the 1G egress path. So although cheaper, it can often prove to be a false economy. Also, 1GE is getting to be quite slow for disk access these days. My 4.5-year-old laptop pushes 180 MB/s sequential reads, which is way more than 1 Gbit/sec.

There's a rather good paper on C3560/C3750 egress QoS here (thanks to Saku Ytti for the link): https://supportforums.cisco.com/docs/DOC-8093

But as you point out, you can largely obviate the need for buffering by using the same speed throughout the network. It's on this basis that the usual models of 10G ToR cut-through switches make technical sense even though they invariably have tiny buffers, e.g. N5K or any of the other vendors' 10G merchant-silicon-based switches. Nick
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
On 26/07/2011 21:47, Geert Nijs wrote:
> One tip I have is also the following: although the C3750E switch is a 'shared buffer switch', each ASIC does have memory on-board. Normally 4 GE ports are using 1 ASIC (you can see the mapping with sh platform pm port-asic or something like that).

I spent a long time looking one day, and eventually found the following URL: http://www.cisco.com/en/US/docs/solutions/Enterprise/Video/tpqoscampus.html#wp1062179

"Additionally, these platforms provide (minimally) 750 KB of receive buffers and (up to) 2 MB of transmit buffers for each set of 4 ports."

So, this indicates that there is a shared buffer pool per ASIC with at most 2 MB of transmit buffers for egress traffic. As this is a store-and-forward switch, this is really very little. And as pointed out before, if you enable mls qos the default setting is to carve this up into 4, leaving you with a default of at most 512 KB for class-default per 4 ports. Little wonder people drop packets on these boxes!

It would help if Cisco were more forthcoming about their switch buffer sizes. Unfortunately, the received wisdom is: if you have to ask, the buffers are too small for your requirements. Other vendors (Juniper, Brocade, etc.) are much more up-front about their kit's capabilities in this regard. Nick
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] Subrate T3 card
I have a T3/E3 card in a Cisco 3640 that I want to use as a serial T3, but it does not show up as a serial interface, nor is there even a controller line in the config. It only shows up in the hardware information as a Subrate T3/E3 port. What does this mean?

gw1.dist uptime is 1 hour, 36 minutes
System returned to ROM by power-on
System restarted at 17:00:56 EDT Tue Jul 26 2011
System image file is flash:c3640-is-mz.123-6.bin
cisco 3640 (R4700) processor (revision 0x00) with 124928K/6144K bytes of memory.
Processor board ID 11876053
R4700 CPU at 100MHz, Implementation 33, Rev 1.0
Bridging software.
X.25 software, Version 3.0.0.
SuperLAT software (copyright 1990 by Meridian Technology Corp).
2 FastEthernet/IEEE 802.3 interface(s)
2 Serial network interface(s)
1 Subrate T3/E3 ports(s)
DRAM configuration is 64 bits wide with parity disabled.
125K bytes of non-volatile configuration memory.
24576K bytes of processor board System flash (Read/Write)
Configuration register is 0x2102
gw1.dist#
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] Subrate T3 card
On 7/26/2011 15:41, Joseph Mays wrote: I have a T3/E3 card in a cisco 3640 that I want to use as a serialT3, but it does not show up as a serial interface, nor is there even a controller line in the config. It only shows up in the hardware infomration as a Subrate T3/E3 port. What does this mean? You have to set the type first: card type t3 slot ~Seth ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
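For completeness, once the card type is set the controller and serial interface appear and can be configured. A rough sketch for this style of T3/E3 network module; the slot number, addressing and subrate bandwidth are placeholders, and the dsu commands only matter if the circuit really is subrate (they must match the provider's DSU settings):

card type t3 1
!
controller T3 1/0
 clock source line
!
interface Serial1/0
 ip address 192.0.2.1 255.255.255.252
 dsu bandwidth 11000
 dsu mode 0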
Re: [c-nsp] Subrate T3 card
Does it give you the option to set up the controller? On Jul 26, 2011, at 6:41 PM, Joseph Mays wrote: I have a T3/E3 card in a cisco 3640 that I want to use as a serialT3, but it does not show up as a serial interface, nor is there even a controller line in the config. It only shows up in the hardware infomration as a Subrate T3/E3 port. What does this mean? gw1.dist uptime is 1 hour, 36 minutes System returned to ROM by power-on System restarted at 17:00:56 EDT Tue Jul 26 2011 System image file is flash:c3640-is-mz.123-6.bin cisco 3640 (R4700) processor (revision 0x00) with 124928K/6144K bytes of memory. Processor board ID 11876053 R4700 CPU at 100MHz, Implementation 33, Rev 1.0 Bridging software. X.25 software, Version 3.0.0. SuperLAT software (copyright 1990 by Meridian Technology Corp). 2 FastEthernet/IEEE 802.3 interface(s) 2 Serial network interface(s) 1 Subrate T3/E3 ports(s) DRAM configuration is 64 bits wide with parity disabled. 125K bytes of non-volatile configuration memory. 24576K bytes of processor board System flash (Read/Write) Configuration register is 0x2102 gw1.dist# ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/ ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] ip helper-address, VRF, and Windows 2008 DHCP Server
Hello All I'm trying to troubleshoot a new network setup. I've got a VRF setup for a client with a couple sites connected via metro Ethernet to replace a VPN. In this setup I'm trying to use the ip helper-address to relay DHCP requests to a central Windows SBS 2008 DHCP server. Using wireshark I see the requests arriving but the server isn't replying to them. The correct scope is built on the server so I'm wondering if there is something else that needs set on the router to manipulate the packets further before forwarding them. This is what the subinterface looks like: interface GigabitEthernet0/1.178 description Customer encapsulation dot1Q 178 ip vrf forwarding customer-vrf ip address 10.24.3.254 255.255.255.0 ip helper-address 10.24.1.250 no ip proxy-arp end This is what the relayed packet looks like when it hits the DHCP server: No. TimeSourceDestination Protocol Length Info 3045 5098.662128 10.24.3.254 10.24.1.250 DHCP 351 DHCP Discover - Transaction ID 0x73a7980e Frame 3045: 351 bytes on wire (2808 bits), 351 bytes captured (2808 bits) Ethernet II, Src: Cisco_38:a4:1b (00:08:20:38:a4:1b), Dst: Dell_50:34:cd (00:24:e8:50:34:cd) Destination: Dell_50:34:cd (00:24:e8:50:34:cd) Address: Dell_50:34:cd (00:24:e8:50:34:cd) ...0 = IG bit: Individual address (unicast) ..0. = LG bit: Globally unique address (factory default) Source: Cisco_38:a4:1b (00:08:20:38:a4:1b) Address: Cisco_38:a4:1b (00:08:20:38:a4:1b) ...0 = IG bit: Individual address (unicast) ..0. = LG bit: Globally unique address (factory default) Type: IP (0x0800) Internet Protocol Version 4, Src: 10.24.3.254 (10.24.3.254), Dst: 10.24.1.250 (10.24.1.250) Version: 4 Header length: 20 bytes Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport)) Total Length: 337 Identification: 0x0ce1 (3297) Flags: 0x00 Fragment offset: 0 Time to live: 255 Protocol: UDP (17) Header checksum: 0x9393 [correct] Source: 10.24.3.254 (10.24.3.254) Destination: 10.24.1.250 (10.24.1.250) User Datagram Protocol, Src Port: bootps (67), Dst Port: bootps (67) Source port: bootps (67) Destination port: bootps (67) Length: 317 Checksum: 0xc2a6 [validation disabled] Bootstrap Protocol Message type: Boot Request (1) Hardware type: Ethernet Hardware address length: 6 Hops: 1 Transaction ID: 0x73a7980e Seconds elapsed: 0 Bootp flags: 0x (Unicast) Client IP address: 0.0.0.0 (0.0.0.0) Your (client) IP address: 0.0.0.0 (0.0.0.0) Next server IP address: 0.0.0.0 (0.0.0.0) Relay agent IP address: 10.24.3.254 (10.24.3.254) Client MAC address: Avaya_86:13:ed (b4:b0:17:86:13:ed) Client hardware address padding: Server host name not given Boot file name not given Magic cookie: DHCP Option: (t=53,l=1) DHCP Message Type = DHCP Discover Option: (53) DHCP Message Type Length: 1 Value: 01 Option: (t=50,l=4) Requested IP Address = 10.24.1.39 Option: (50) Requested IP Address Length: 4 Value: 0a180127 Option: (t=12,l=9) Host Name = AVX8613ED Option: (12) Host Name Length: 9 Value: 415658383631334544 Option: (t=55,l=11) Parameter Request List Option: (55) Parameter Request List Length: 11 Value: 011c030f060c071a2a2bf2 1 = Subnet Mask 28 = Broadcast Address 3 = Router 15 = Domain Name 6 = Domain Name Server 12 = Host Name 7 = Log Server 26 = Interface MTU 42 = Network Time Protocol Servers 43 = Vendor-Specific Information 242 = Private Option: (t=57,l=2) Maximum DHCP Message Size = 1000 Option: (57) Maximum DHCP Message Size Length: 2 Value: 03e8 Option: (t=60,l=13) Vendor class identifier = ccp.avaya.com Option: (60) Vendor class 
identifier Length: 13 Value: 6363702e61766179612e636f6d Option: (t=82,l=14) Agent Information Option Option: (82) Agent Information Option Length: 14 Value: 020c020a0a1803fe01b2 Agent Remote ID: 020a0a1803fe01b2 End Option Thanks for any help! Dave ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] RPS 675 SNMP monitoring
Hi, I've googled and have not gotten a good answer so please excuse me if I missed something obvious. I want to poll my cisco switches with snmp and verify they are on AC power and, if possible, verify that they also have the RPS connected and that it's reporting good status as well. I have looked over also the cisco-envmon-mib and it's not clear to me what variable(s) I should look at or which values would indicate 'good' and so forth. I *think* I want to be looking at: ciscoEnvMonSupplyState 1.3.6.1.4.1.9.9.13.1.5.1.3 ciscoEnvMonSupplySource 1.3.6.1.4.1.9.9.13.1.5.1.4 But on my switches, these aren't complete oid's. I need to add '.1003' to the end to get the value. And on another model of switch, I don't. So it seems inconsistient at best. I just want my monitoring system to throw alerts if it's ever observed that any switch is on RPS. Can anyone give me the oid's to look for and the values I should see? Thanks. Mike- ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
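If memory serves, those two objects are the right ones. ciscoEnvMonSupplyState uses the CiscoEnvMonState enumeration (normal(1), warning(2), critical(3), shutdown(4), notPresent(5), notFunctioning(6)) and ciscoEnvMonSupplySource should report ac(2) in normal operation (unknown(1), ac(2), dc(3), externalPowerSupply(4), internalRedundant(5)). The trailing number (e.g. .1003) is just the row index in ciscoEnvMonSupplyStatusTable and differs per platform - the RPS usually shows up as a second row - so rather than hard-coding it, walk the table (for example 'snmpwalk -v2c -c <community> <switch> 1.3.6.1.4.1.9.9.13.1.5.1') and alert whenever a row whose ciscoEnvMonSupplyStatusDescr mentions the RPS leaves normal(1), or when the main supply's state or source changes. Please verify these enumerations against the CISCO-ENVMON-MIB itself before building alerts on them.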
Re: [c-nsp] Subrate T3 card
You have to set the type first: card type t3 slot That was it. I've never heard of that or had to do that before. Thanks much! ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
Peter, Not too long ago there were some QoS knobs which gave us the ability to increase the shared pool to an even greater extent than the default no mls qos. Queue-set thresholds used to be configurable in the range of 1-100%, then 400%, and now up to 3200% of the reserved pool. This effectively makes the buffer more like a fully shared pool, instead of a mix of reserved and shared. So, I meant to say that the default vs just turning on mls qos without any configuration, and while using only one class of traffic, will result in less buffers available. Chuck, The decision on what switch to buy would depend on proper configuration (some older configurations and questions to the archives may not have been using the code sufficient to configure the queue-set in this manner), what classes you need to support, and the traffic patterns you expect to be able to deal with in terms of how much congestion/buffering you want to be able to absorb. If we are just talking pure buffer space, the 4948 will be best for a scenario where fully shared buffering is preferable and you aren't worried about starvation for individual interfaces out of the box. Scale matters too; a test of 2 ports to 1 may perform well, but note a test where 2 instances of this pattern (2 to 1 and a separate 2 to 1 is also present) things start to look different. If you have just one busy destination, then it is fairly simple - just get as much buffer as you can. But speeding up your destination in that scenario would be money wisely spent. Regards, John Gill cisco On 7/26/11 3:38 PM, Peter Rathlev wrote: On Tue, 2011-07-26 at 14:47 -0400, John Gill wrote: The customers I have come across with buffering problems on 3750 were either not effectively configuring QoS - sometimes the best config is no QoS actually. The buffers get divided up when you enable mls qos and if you don't use all the queues as they are set up, they will not be optimal, especially for the default class. Hmm... this specific part has me a little confused. As the archives mention in detail the apparently too small buffers on the 3560/3750 family of switches is not an uncommon problem. We have ourselves worked hard to come up with a solution, since replacing several hundreds of switches would be prohibitively expensive. The most successful of the solutions has been to actually configure QoS and re-partition the buffers. This makes me think that a switch in this family with no mls qos does not actually use the available buffers fully. I've talked with several SEs and some marketing boss (following up on a survey) about this, and none have been able to give any answers without having me sign NDAs. I'd rather not do that, since we have an acceptable solution until these switches are naturally replaced. We haven't fully tested the newest software (12.2(55)SE and later where the switch does a lot of ASIC reprogramming on startup) to see if the problem persists. The big scenarios you need buffering are speed difference (regardless of average rate!) or many-to-one host at the same time, otherwise you don't need significant buffers at all. This is an important point IMO. If you don't need any kind of traffic differentiation and have ample bandwidth there should be no problems with small buffers. ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
Re: [c-nsp] iSCSI, port buffers, and small switches
This paper is indeed a great read; it goes over some really good points about buffering and queuing that are commonly misunderstood. All good points in this thread. I will make one small comment about the N5k here, if one wanted to compare buffer sizes on paper: the N5k uses ingress buffering with virtual output queues. So when you oversubscribe a single egress interface, the buffers available for use are proportional to the number of ports sending to that interface. It essentially acts like a shared buffer.

Regards,
John Gill
cisco

On 7/26/11 4:14 PM, Nick Hilliard wrote:
> On 26/07/2011 19:47, John Gill wrote:
> > The big scenarios where you need buffering are a speed difference (regardless of average rate!) or many hosts sending to one at the same time; otherwise you don't need significant buffers at all.
>
> It's still quite common these days to have a 10G path from the san/nas to the network, but 1G access from the network to the client machines, modelled using a 10G distribution layer switch connected to both the san/nas and a bunch of ToR switches. The ToR switches would have 10G uplinks but 1G downlinks, and unless these ToR switches have sufficient buffering capabilities, packet loss may occur on the 1G egress path. So although cheaper, they can often prove to be a false economy.
>
> Also, 1GE is getting to be quite slow for disk access these days. My 4.5-year-old laptop pushes 180 MB/s sequential reads, which is way more than 1 Gbit/sec.
>
> There's a rather good paper on C3560/C3750 egress qos here (thanks to Saku Ytti for the link): https://supportforums.cisco.com/docs/DOC-8093
>
> But as you point out, you can largely obviate the need for buffering by using the same speed throughout the network. It's on this basis that the usual models of 10G ToR cut-through switches make technical sense even though they invariably have tiny buffers, e.g. N5K or any of the other vendors' 10G merchant-silicon-based switches.
>
> Nick
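If it helps anyone reading along, a few standard verification commands for checking whether a 1G egress leg like this is actually running out of buffer; the interface name is a placeholder and exact command availability varies by platform and IOS release:

  show interfaces GigabitEthernet0/1 | include output drops
  ! with mls qos enabled, per-queue/per-threshold drop counters:
  show mls qos interface GigabitEthernet0/1 statistics
  ! 3750/3560 ASIC-level drop detail:
  show platform port-asic stats drop GigabitEthernet0/1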
Re: [c-nsp] ip helper-address, VRF, and Windows 2008 DHCP Server
Dave,

So you captured this on the DHCP server itself? What is the gateway for this server configured to do? By default IOS these days does not allow directed broadcasts, but you do need to configure ip directed-broadcast on the DHCP server's gateway L3 interface. The DHCP DISCOVER should be a broadcast - perhaps this is why your server doesn't reply to it. Load-balancers or firewalls in use?

Regards,
John Gill
cisco

On 7/26/11 7:31 PM, Dave Weis wrote:
> Hello All
>
> I'm trying to troubleshoot a new network setup. I've got a VRF set up for a client with a couple of sites connected via metro Ethernet to replace a VPN. In this setup I'm trying to use ip helper-address to relay DHCP requests to a central Windows SBS 2008 DHCP server. Using wireshark I see the requests arriving, but the server isn't replying to them. The correct scope is built on the server, so I'm wondering if there is something else that needs to be set on the router to manipulate the packets further before forwarding them.
>
> This is what the subinterface looks like:
>
> interface GigabitEthernet0/1.178
>  description Customer
>  encapsulation dot1Q 178
>  ip vrf forwarding customer-vrf
>  ip address 10.24.3.254 255.255.255.0
>  ip helper-address 10.24.1.250
>  no ip proxy-arp
> end
>
> This is what the relayed packet looks like when it hits the DHCP server:
>
> No.   Time         Source       Destination  Protocol  Length  Info
> 3045  5098.662128  10.24.3.254  10.24.1.250  DHCP      351     DHCP Discover - Transaction ID 0x73a7980e
>
> Frame 3045: 351 bytes on wire (2808 bits), 351 bytes captured (2808 bits)
> Ethernet II, Src: Cisco_38:a4:1b (00:08:20:38:a4:1b), Dst: Dell_50:34:cd (00:24:e8:50:34:cd)
>   Destination: Dell_50:34:cd (00:24:e8:50:34:cd)
>     Address: Dell_50:34:cd (00:24:e8:50:34:cd)
>     ...0 = IG bit: Individual address (unicast)
>     ..0. = LG bit: Globally unique address (factory default)
>   Source: Cisco_38:a4:1b (00:08:20:38:a4:1b)
>     Address: Cisco_38:a4:1b (00:08:20:38:a4:1b)
>     ...0 = IG bit: Individual address (unicast)
>     ..0. = LG bit: Globally unique address (factory default)
>   Type: IP (0x0800)
> Internet Protocol Version 4, Src: 10.24.3.254 (10.24.3.254), Dst: 10.24.1.250 (10.24.1.250)
>   Version: 4
>   Header length: 20 bytes
>   Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport))
>   Total Length: 337
>   Identification: 0x0ce1 (3297)
>   Flags: 0x00
>   Fragment offset: 0
>   Time to live: 255
>   Protocol: UDP (17)
>   Header checksum: 0x9393 [correct]
>   Source: 10.24.3.254 (10.24.3.254)
>   Destination: 10.24.1.250 (10.24.1.250)
> User Datagram Protocol, Src Port: bootps (67), Dst Port: bootps (67)
>   Source port: bootps (67)
>   Destination port: bootps (67)
>   Length: 317
>   Checksum: 0xc2a6 [validation disabled]
> Bootstrap Protocol
>   Message type: Boot Request (1)
>   Hardware type: Ethernet
>   Hardware address length: 6
>   Hops: 1
>   Transaction ID: 0x73a7980e
>   Seconds elapsed: 0
>   Bootp flags: 0x (Unicast)
>   Client IP address: 0.0.0.0 (0.0.0.0)
>   Your (client) IP address: 0.0.0.0 (0.0.0.0)
>   Next server IP address: 0.0.0.0 (0.0.0.0)
>   Relay agent IP address: 10.24.3.254 (10.24.3.254)
>   Client MAC address: Avaya_86:13:ed (b4:b0:17:86:13:ed)
>   Client hardware address padding:
>   Server host name not given
>   Boot file name not given
>   Magic cookie: DHCP
>   Option: (t=53,l=1) DHCP Message Type = DHCP Discover
>     Option: (53) DHCP Message Type
>     Length: 1
>     Value: 01
>   Option: (t=50,l=4) Requested IP Address = 10.24.1.39
>     Option: (50) Requested IP Address
>     Length: 4
>     Value: 0a180127
>   Option: (t=12,l=9) Host Name = AVX8613ED
>     Option: (12) Host Name
>     Length: 9
>     Value: 415658383631334544
>   Option: (t=55,l=11) Parameter Request List
>     Option: (55) Parameter Request List
>     Length: 11
>     Value: 011c030f060c071a2a2bf2
>     1 = Subnet Mask
>     28 = Broadcast Address
>     3 = Router
>     15 = Domain Name
>     6 = Domain Name Server
>     12 = Host Name
>     7 = Log Server
>     26 = Interface MTU
>     42 = Network Time Protocol Servers
>     43 = Vendor-Specific Information
>     242 = Private
>   Option: (t=57,l=2) Maximum DHCP Message Size = 1000
>     Option: (57) Maximum DHCP Message Size
>     Length: 2
>     Value: 03e8
>   Option: (t=60,l=13) Vendor class identifier = ccp.avaya.com
>     Option: (60) Vendor class identifier
>     Length: 13
>     Value: 6363702e61766179612e636f6d
>   Option: (t=82,l=14) Agent Information Option
>     Option: (82) Agent Information Option
>     Length: 14
>     Value: 020c020a0a1803fe01b2
>     Agent Remote ID:
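For readers following along, a minimal sketch of the server-side gateway pieces John is asking about; the interface, VLAN and addressing here are invented for illustration, and ip directed-broadcast is his suggestion to check rather than a confirmed fix:

  ! hypothetical L3 gateway for the 10.24.1.0/24 server segment, in the same VRF as the relay
  interface GigabitEthernet0/1.170
   description DHCP-server segment (VLAN and addresses invented for illustration)
   encapsulation dot1Q 170
   ip vrf forwarding customer-vrf
   ip address 10.24.1.254 255.255.255.0
   ip directed-broadcast
  !
  ! the server replies to the relay agent address (giaddr 10.24.3.254), so verify the return path:
  ! show ip route vrf customer-vrf 10.24.3.254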
[c-nsp] NAT ISSUE
Hi,

I am getting the following in my NAT address translations:

Pro  Inside global             Inside local      Outside local    Outside global
udp  PUBLIC IP ADDRESS:1404    10.10.10.14:2030  172.16.1.1:2011  172.16.1.1:2011

The weird thing is that the outside local IPs are not correct: what I am trying to reach is a public IP address, and the ports are not correct either.

Any assistance will be highly appreciated.

Regards,
Jacob Miller
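For anyone digging into an entry like this, the usual IOS commands for inspecting and tracing a suspect translation; the access-list number is just an example:

  show ip nat translations verbose     ! per-entry flags and create/use timers for the odd udp entry
  show ip nat statistics               ! confirms inside/outside interfaces and which ACL/pool is matching
  clear ip nat translation *           ! flush stale entries (disruptive: clears active sessions)
  ! optionally trace only the host in question:
  access-list 123 permit ip host 10.10.10.14 any
  debug ip nat 123 detailed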