Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread Colton Conor
Stepan,

Which RE is that on the MX480? The RE2000 or the quad core one?

On Wed, Dec 2, 2015 at 4:42 AM, Stepan Kucherenko  wrote:

> Should've put it here in the first post; I've already been asked about it
> off-list a couple of times.
>
> I was testing it on an MX80 with a slow RE, so obviously the numbers will
> change on faster REs, but the difference will still be there.
>
> ~1.5min taking a full table from the MX480 (nice RE, 85k updates)
> ~3min from the 7600 (old and slow RE, 89k updates)
> almost 5min from the ASR9k (nice RE, 450k updates)
>
> It'll be even more noticeable once Junos is able to run rpd on a
> dedicated core.
>
> Keep in mind that it's still not actual convergence time; Junos is still
> lagging with FIB updates long after that.
>
> Sadly I was unable to find my old convergence test numbers, but the krt
> queue was still dissipating for at least a couple of minutes after BGP
> converged. In case you're wondering whether it was the known rpd bug with
> low krt priority - no, I tested after it was fixed. Not that I'd call it
> "fixed".
>
> And that's what I don't like about MX-es :-) Not sure if it's faster or
> slower on the ASR9k though.
>
>
> On 02.12.2015 12:30, James Bensley wrote:
>
>> On 1 December 2015 at 17:29, Stepan Kucherenko  wrote:
>>
>>> My biggest gripe with the ASR9k (or IOS XR in particular) is that Cisco
>>> stopped grouping BGP prefixes into one update when they have the same
>>> attributes, so it's one prefix per update now (or sometimes two).
>>>
>>> The transit ISP we tested with pinged TAC and got a response that it's a
>>> "software/hardware limitation" and nothing can be done.
>>>
>>> I don't know when this regression happened, but now taking a full feed
>>> from the ASR9k is almost twice as slow as taking it from a 7600 with a
>>> weak RE, and 3-4 times slower than taking it from an MX.
>>>
>>> I'm not joking, test it yourself. Just look at the traffic dump. As I
>>> understand it, it's not an edge case, so you must see it as well.
>>>
>>> In my case it was 450k updates for 514k prefixes for a full feed from the
>>> ASR9k, 89k updates for 510k prefixes from the 7600, and 85k updates for
>>> 516k prefixes from the MX480. Huge difference.
>>>
>>> It's not a showstopper, but I'm sure it must have a significant impact on
>>> convergence time.
>>>
>>
>> How long, time-wise, is it taking you to converge?
>>
>> Last time I bounced a BGP session to a full-table provider it took under
>> a minute to take in all the routes. I wasn't actually timing it, so I
>> don't know exactly how long.
>>
>> Cheers,
>> James.
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>


Re: [j-nsp] Unwanted newline characters in Netconf XML

2015-12-02 Thread Stacy W. Smith

> On Dec 2, 2015, at 2:31 AM, Dave Bell  wrote:
> 
> On 2 December 2015 at 07:04, Tore Anderson  wrote:
> 
>> Works fine for me? Even in JUNOS versions as old as 11.4. Try:
>> 
>> {master:1}[edit]
>> tore@lab-ex4200# load merge terminal
>> [Type ^D at a new line to end input]
>> /* This is a
>> * multi-line
>> * comment.
>> */
>> protocols{}
>> [edit]
>>  'protocols'
>>warning: statement has no contents; ignored
>> load complete
>> 
> 
> Ah, I was using 'annotate ' which doesn't appear to allow it. This
> method does.

[edit]
user@r0# annotate system "This is a\nmulti-line\ncomment"   

[edit]
user@r0# show | find ^} 
}
apply-groups [ global re0 ];
/* This is a
multi-line
comment */
system {
ports {
console log-out-on-disconnect;
}
}

[edit]
user@r0# show system | display xml
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/15.1I0/junos">
    <configuration>
            <junos:comment>/* This is a
multi-line
comment */</junos:comment>
            <system>
                <ports>
                    <console>
                        <log-out-on-disconnect/>
                    </console>
                </ports>
            </system>
    </configuration>
    <cli>
        <banner>[edit]</banner>
    </cli>
</rpc-reply>

--Stacy



Re: [j-nsp] Juniper and Cisco - BGP MPLS L2VPN VPLS interoperability

2015-12-02 Thread Aaron
Is it normal for a route reflector to reflect routes back to the client that
sent them in the first place? I'm still trying to figure out why this ME3600
is resetting its BGP session, so I enabled some debugs and am wondering if
something weird is happening here with this ME3600 and this version of IOS...

Like I said before, I bring up the BGP L2VPN address family on a Juniper
ACX5048 or MX104 and then terrible things happen to my ME3600s that run
15.2(4)S3 and S5 ... BUT not S1.  15.2(4)S1 is fine.  Also the ASR920 with
IOS XE 03.15.00.S is fine.

This ME3600 is 10.101.12.251 and has a BGP-based L2VPN with the following
info...

interface Loopback0
 ip address 10.101.12.251 255.255.255.255

eng-lab-3600-1#sh bgp l2vpn vpls al
...
 Network  Next HopMetric LocPrf Weight Path
Route Distinguisher: 64512:10920
 *>  64512:10920:10.101.12.251/96
                       0.0.0.0                          32768 ?

eng-lab-3600-1#sh run | sec l2 vfi
l2 vfi v920 autodiscovery
 vpn id 10920
 shutdown

eng-lab-3600-1#sh vfi
...
VFI name: v920, state: admindown, type: multipoint, signaling: LDP
  VPN ID: 10920, VPLS-ID: 64512:10920
  RD: 64512:10920, RT: 64512:10920
  Bridge-Domain 920 attachment circuits:
Vlan920
  Neighbors connected via pseudowires:
  Peer Address     VC ID        Discovered Router ID    S

* So now that you know this ME3600 is generating the
64512:10920:10.101.12.251/96 NLRI, see the BGP debugs on this ME below

Dec  2 17:18:57.848: BGP(9): (base) 10.101.0.254 send UPDATE (format) 
64512:10920:10.101.12.251/96, next 10.101.12.251, metric 0, path Local, 
extended community RT:64512:10920 L2VPN AGI:64512:10920
Dec  2 17:18:57.852: BGP(4): (base) 10.101.0.254 send UPDATE (format) 
10.101.12.251:1:172.30.176.80/28, next 10.101.12.251, label 393, metric 0, path 
Local, extended community RT:1:1
Dec  2 17:18:57.852: BGP(4): (base) 10.101.0.254 send UPDATE (format) 
10.101.12.251:6:2.2.2.0/24, next 10.101.12.251, label 411, metric 0, path 
Local, extended community RT:6:6
Dec  2 17:19:02.848: BGP(9): 10.101.0.254 rcv UPDATE w/ attr: nexthop 
10.101.12.251, origin ?, localpref 100, metric 0, originator 10.101.12.251, 
clusterlist 10.101.0.254, merged path , AS_PATH , community , extended 
community RT:64512:10920 L2VPN AGI:64512:10920, SSA attribute
Dec  2 17:19:02.848: BGPSSA ssacount is 0
**
*** SEE HERE PLEASE: it seems that right when I receive an UPDATE from the RR
(10.101.0.254), in that same timestamp (Dec  2 17:19:02.848) I see BGP
Closing. Is this coincidental, or is this ME3600, running this version of
software, unable to deal with this? And what in the world does the Juniper
have to do with this, such that when I enable BGP L2VPN on the Juniper, this
phenomenon begins?
**
Dec  2 17:19:02.848: BGP(9): 10.101.0.254 rcv UPDATE about 
64512:10920:10.101.12.251/96 -- DENIED due to: ORIGINATOR is us; MP_REACH 
NEXTHOP is our own address;
Dec  2 17:19:02.848: BGP: 10.101.0.254 went from Established to Closing
Dec  2 17:19:02.852: %BGP-3-NOTIFICATION: sent to neighbor 10.101.0.254 3/10 
(illegal network) 1 bytes 00
Dec  2 17:19:02.852: BGP: ses global 10.101.0.254 (0x1132A048:1) Send 
NOTIFICATION 3/10 (illegal network) 1 bytes 00
Dec  2 17:19:02.852: %BGP-4-MSGDUMP: unsupported or mal-formatted message 
received from 10.101.0.254:
        006A 0200  5390 0E00 2000 1941 040A
650C F500 0015 0001 0A65 0CF5 8000 0001 0001 0002 C350 0101 0002 0040 0101 0040
0200 4005 0400  64C0 1010 800A 0502  0064 0002   2774 800A 040A
6500 FE80 0904 0A65 0CF5
Dec  2 17:19:07.064: BGP: 10.101.0.254 local error close after sending 
NOTIFICATION
Dec  2 17:19:07.064: %BGP-5-NBR_RESET: Neighbor 10.101.0.254 reset (BGP 
Notification sent)
Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 VPNv4 Unicast:base 
(0x1132A048:1) NSF delete stale NSF not active
Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 VPNv4 Unicast:base 
(0x1132A048:1) NSF no stale paths state is NSF not active
Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 VPNv4 Unicast:base 
(0x1132A048:1) Resetting ALL counters.
Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 L2VPN Vpls:base 
(0x1132A048:1) NSF delete stale NSF not active
Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 L2VPN Vpls:base 
(0x1132A048:1) NSF no stale paths state is NSF not active
Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 L2VPN Vpls:base 
(0x1132A048:1) Resetting ALL counters.
Dec  2 17:19:07.064: BGP: 10.101.0.254 closing
Dec  2 17:19:07.064: BGP: ses global 10.101.0.254 (0x1132A048:1) Session close 
and reset neighbor 10.101.0.254 topostate
Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 L2VPN Vpls:base 
(0x1132A048:1) Resetting ALL counters.
Dec  2 17:19:07.064: BGP: ses global 10.101.0.254 (0x1132A048:1) Session close 
and reset neighbor 

Re: [j-nsp] Juniper and Cisco - BGP MPLS L2VPN VPLS interoperability

2015-12-02 Thread Aaron
(reformatting email with carriage returns between debug lines, hopefully
that helps readability)

Is it normal for a route reflector to reflect routes back to the client that
sent them in the first place? I'm still trying to figure out why this ME3600
is resetting its BGP session, so I enabled some debugs and am wondering if
something weird is happening here with this ME3600 and this version of IOS...

Like I said before, I bring up the BGP L2VPN address family on a Juniper
ACX5048 or MX104 and then terrible things happen to my ME3600s that run
15.2(4)S3 and S5 ... BUT not S1.  15.2(4)S1 is fine.  Also the ASR920 with
IOS XE 03.15.00.S is fine.

This ME3600 is 10.101.12.251 and has a BGP-based L2VPN with the
following info...

interface Loopback0
 ip address 10.101.12.251 255.255.255.255

eng-lab-3600-1#sh bgp l2vpn vpls al
...
 Network  Next HopMetric LocPrf Weight Path
Route Distinguisher: 64512:10920
 *>  64512:10920:10.101.12.251/96
                       0.0.0.0                          32768 ?

eng-lab-3600-1#sh run | sec l2 vfi
l2 vfi v920 autodiscovery
 vpn id 10920
 shutdown

eng-lab-3600-1#sh vfi
...
VFI name: v920, state: admindown, type: multipoint, signaling: LDP
  VPN ID: 10920, VPLS-ID: 64512:10920
  RD: 64512:10920, RT: 64512:10920
  Bridge-Domain 920 attachment circuits:
Vlan920
  Neighbors connected via pseudowires:
  Peer Address     VC ID        Discovered Router ID    S

* So now that you know this ME3600 is generating the
64512:10920:10.101.12.251/96 NLRI, see the BGP debugs on this ME below


Dec  2 17:18:57.848: %BGP-5-ADJCHANGE: neighbor 10.101.0.254 Up

Dec  2 17:18:57.848: BGP: ses global 10.101.0.254 (0x1132A048:1) read
request no-op

Dec  2 17:18:57.848: BGP(9): (base) 10.101.0.254 send UPDATE (format)
64512:10920:10.101.12.251/96, next 10.101.12.251, metric 0, path Local,
extended community RT:64512:10920 L2VPN AGI:64512:10920

Dec  2 17:18:57.852: BGP(4): (base) 10.101.0.254 send UPDATE (format)
10.101.12.251:1:96.8.176.80/28, next 10.101.12.251, label 393, metric 0,
path Local, extended community RT:1:1

Dec  2 17:18:57.852: BGP(4): (base) 10.101.0.254 send UPDATE (format)
10.101.12.251:6:2.2.2.0/24, next 10.101.12.251, label 411, metric 0, path
Local, extended community RT:6:6

Dec  2 17:19:02.848: BGP(9): 10.101.0.254 rcv UPDATE w/ attr: nexthop
10.101.12.251, origin ?, localpref 100, metric 0, originator 10.101.12.251,
clusterlist 10.101.0.254, merged path , AS_PATH , community , extended
community RT:64512:10920 L2VPN AGI:64512:10920, SSA attribute

Dec  2 17:19:02.848: BGPSSA ssacount is 0

**
*** SEE HERE PLEASE: it seems that right when I receive an UPDATE from the RR
(10.101.0.254), in that same timestamp (Dec  2 17:19:02.848) I see BGP
Closing. Is this coincidental, or is this ME3600, running this version of
software, unable to deal with this? And what in the world does the Juniper
have to do with this, such that when I enable BGP L2VPN on the Juniper, this
phenomenon begins?
**

Dec  2 17:19:02.848: BGP(9): 10.101.0.254 rcv UPDATE about
64512:10920:10.101.12.251/96 -- DENIED due to: ORIGINATOR is us; MP_REACH
NEXTHOP is our own address;

Dec  2 17:19:02.848: BGP: 10.101.0.254 went from Established to Closing

Dec  2 17:19:02.852: %BGP-3-NOTIFICATION: sent to neighbor 10.101.0.254 3/10
(illegal network) 1 bytes 00

Dec  2 17:19:02.852: BGP: ses global 10.101.0.254 (0x1132A048:1) Send
NOTIFICATION 3/10 (illegal network) 1 bytes 00

Dec  2 17:19:02.852: %BGP-4-MSGDUMP: unsupported or mal-formatted message
received from 10.101.0.254:
        006A 0200  5390 0E00 2000 1941
040A 650C F500 0015 0001 0A65 0CF5 8000 0001 0001 0002 C350 0101 0002 0040
0101 0040
0200 4005 0400  64C0 1010 800A 0502  0064 0002   2774 800A
040A
6500 FE80 0904 0A65 0CF5

Dec  2 17:19:07.064: BGP: 10.101.0.254 local error close after sending
NOTIFICATION

Dec  2 17:19:07.064: %BGP-5-NBR_RESET: Neighbor 10.101.0.254 reset (BGP
Notification sent)

Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 VPNv4 Unicast:base
(0x1132A048:1) NSF delete stale NSF not active

Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 VPNv4 Unicast:base
(0x1132A048:1) NSF no stale paths state is NSF not active

Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 VPNv4 Unicast:base
(0x1132A048:1) Resetting ALL counters.

Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 L2VPN Vpls:base
(0x1132A048:1) NSF delete stale NSF not active

Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 L2VPN Vpls:base
(0x1132A048:1) NSF no stale paths state is NSF not active

Dec  2 17:19:07.064: BGP: nbr_topo global 10.101.0.254 L2VPN Vpls:base
(0x1132A048:1) Resetting ALL counters.

Dec  2 17:19:07.064: BGP: 10.101.0.254 closing

Dec  2 17:19:07.064: BGP: ses global 10.101.0.254 (0x1132A048:1) Session
close and reset neighbor 

Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread Stepan Kucherenko

An RE-S-1800X4, yeah.

The ASR9k has an RSP440, so quad-core x86 as well. Comparable, I think.

Not sure about the 7600, but definitely something old.

On 02.12.2015 19:18, Colton Conor wrote:

Stepan,

Which RE is that on the MX480? The RE2000 or the quad core one?

On Wed, Dec 2, 2015 at 4:42 AM, Stepan Kucherenko wrote:

Should've put it here in the first post; I've already been asked about it
off-list a couple of times.

I was testing it on an MX80 with a slow RE, so obviously the numbers will
change on faster REs, but the difference will still be there.

~1.5min taking a full table from the MX480 (nice RE, 85k updates)
~3min from the 7600 (old and slow RE, 89k updates)
almost 5min from the ASR9k (nice RE, 450k updates)

It'll be even more noticeable once Junos is able to run rpd on a
dedicated core.

Keep in mind that it's still not actual convergence time; Junos is still
lagging with FIB updates long after that.

Sadly I was unable to find my old convergence test numbers, but the krt
queue was still dissipating for at least a couple of minutes after BGP
converged. In case you're wondering whether it was the known rpd bug with
low krt priority - no, I tested after it was fixed. Not that I'd call it
"fixed".

And that's what I don't like about MX-es :-) Not sure if it's faster
or slower on the ASR9k though.


On 02.12.2015 12:30, James Bensley wrote:

On 1 December 2015 at 17:29, Stepan Kucherenko wrote:

My biggest gripe with the ASR9k (or IOS XR in particular) is that Cisco
stopped grouping BGP prefixes into one update when they have the same
attributes, so it's one prefix per update now (or sometimes two).

The transit ISP we tested with pinged TAC and got a response that it's a
"software/hardware limitation" and nothing can be done.

I don't know when this regression happened, but now taking a full feed
from the ASR9k is almost twice as slow as taking it from a 7600 with a
weak RE, and 3-4 times slower than taking it from an MX.

I'm not joking, test it yourself. Just look at the traffic dump. As I
understand it, it's not an edge case, so you must see it as well.

In my case it was 450k updates for 514k prefixes for a full feed from the
ASR9k, 89k updates for 510k prefixes from the 7600, and 85k updates for
516k prefixes from the MX480. Huge difference.

It's not a showstopper, but I'm sure it must have a significant impact on
convergence time.


How long, time-wise, is it taking you to converge?

Last time I bounced a BGP session to a full-table provider it took under
a minute to take in all the routes. I wasn't actually timing it, so I
don't know exactly how long.

Cheers,
James.




[j-nsp] per flow rate-limiting on Juniper equipment

2015-12-02 Thread Martin T
Hi,

Which Juniper products support per-flow rate-limiting? I mean functionality
similar to, for example, the iptables "recent"
module (http://www.netfilter.org/documentation/HOWTO/netfilter-extensions-HOWTO-3.html#ss3.16).
For example, the following iptables rules build a dynamic source-IP list when
new (i.e. not reply) UDP traffic with source port 53 enters interface eth0,
and allow 4 packets per 10 seconds per IP address through:

# iptables -t filter -L FORWARD -nv --line-numbers
Chain FORWARD (policy ACCEPT 9 packets, 1704 bytes)
num   pkts bytes target prot opt in out source
  destination
1   40  7200udp  --  eth0   *   0.0.0.0/0
  0.0.0.0/0udp spt:53 state NEW recent: SET name:
DNS-traffic-sources side: source mask: 255.255.255.255
2   34  6120 DROP   udp  --  eth0   *   0.0.0.0/0
  0.0.0.0/0udp spt:53 state NEW recent: UPDATE seconds: 10
hit_count: 4 name: DNS-traffic-sources side: source mask:
255.255.255.255
#


Is there any Juniper equipment which is able to do this?
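
The closest stateless approximation I can think of in Junos is a
prefix-specific policer, which rate-limits each /32 source separately but
cannot express the dynamic "4 packets in 10 seconds, then drop" list
semantics of the "recent" module. A rough, untested sketch (all names and
limits below are placeholders):

firewall {
    policer DNS-SRC-LIMIT {
        if-exceeding {
            bandwidth-limit 32k;
            burst-size-limit 1500;
        }
        then discard;
    }
    family inet {
        prefix-action DNS-PER-SOURCE {
            policer DNS-SRC-LIMIT;
            count;
            source-prefix-length 32;    /* one policer instance per source /32 */
            subnet-prefix-length 24;    /* across each matched /24 */
        }
        filter LIMIT-DNS-REPLIES {
            term dns {
                from {
                    protocol udp;
                    source-port 53;
                }
                then {
                    prefix-action DNS-PER-SOURCE;
                    accept;
                }
            }
            term rest {
                then accept;
            }
        }
    }
}

The filter would then be applied as "input LIMIT-DNS-REPLIES" under family
inet on the ingress interface.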


thanks,
Martin


Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread Mark Tinka


On 1/Dec/15 17:49, john doe wrote:

>  
>
>
> Yeah, I was just referring to the CLI experience: commits, rollback, the
> hierarchy within. Prior to XR, IOS was a wall of text, no?

Still is, but you get used to working with what you have :-).

Mark.


Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread Mark Tinka


On 1/Dec/15 18:43, Adam Vitkovsky wrote:

>
> I'd like to ask Mark and users of the MX as a peering router (in a scaled
> configuration): do you put every peer into a separate group, and do you not
> mind or perceive any inefficiency during BGP convergence resulting from
> many update groups?
> Or do you start with several peer groups, group peers based on common
> egress policies into those, and not mind a peer flapping if its policy
> needs to be adjusted and the peer is put into its own update group?

We run BGP on the MX chassis, as well as the MX80.

We are just deploying our first MX104, but I expect it to perform like
the MX80 control- and management-plane-wise anyway.

To answer your question, each eBGP peer is a separate group for us, even
when they are sharing the same inbound and outbound routing policies.
It's just easier to manage that way, and we do that mostly for the
flexibility in case we need to do some peer-specific things.
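
As a sketch, that per-peer layout looks like this in the config (the ASNs,
addresses, and policy names here are made up):

protocols {
    bgp {
        group transit-as64496 {
            type external;
            peer-as 64496;
            import from-as64496;
            export to-as64496;
            neighbor 192.0.2.1;
        }
        group peer-as64511 {
            type external;
            peer-as 64511;
            import from-as64511;
            export to-as64511;
            neighbor 198.51.100.7;
        }
    }
}

"show bgp group summary" then shows one group per peer, which is where the
"hundreds of BGP groups" figure comes from.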

No performance issues on the x86-based MXs. The MX80 is just slow, but
that is true in general. I'm not certain the slowness is due to our BGP
group strategy, but I also have no empirical data to dispute that. We are
talking hundreds of BGP groups on MX80s, as we use those more for peering
than our MX480s (which are more for customer edge).

Mark.


Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread James Bensley
On 1 December 2015 at 14:14, Mark Tinka  wrote:
>
>
> On 1/Dec/15 15:03, john doe wrote:
>
>>
>>
>> I think price wise MX is a better deal. ASR fully loaded with cards and 
>> licences for various services gets expensive fast.
>
> Depends what cards you are loading in there.
>
> If you're packing an ASR1000 with Ethernet line cards, then you get what
> you deserve.
>
> If you need dense Ethernet aggregation, the ASR9000 and MX are better
> than the ASR1000.
>
> If you need a mix-and-match, the ASR1000 is better than the ASR9000 or MX.

With the exception of LAGs (IMO), as the ASR1000 series does not support
QoS on port-channels very well at all:

http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/qos-mqc-xe-3s-book/qos-eth-int.html#GUID-95630B2A-986E-4063-848B-BC0AB7456C44


Cheers,
James.


Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread Stepan Kucherenko
Should've put it here in the first post; I've already been asked about it
off-list a couple of times.

I was testing it on an MX80 with a slow RE, so obviously the numbers will
change on faster REs, but the difference will still be there.

~1.5min taking a full table from the MX480 (nice RE, 85k updates)
~3min from the 7600 (old and slow RE, 89k updates)
almost 5min from the ASR9k (nice RE, 450k updates)

It'll be even more noticeable once Junos is able to run rpd on a
dedicated core.

Keep in mind that it's still not actual convergence time; Junos is still
lagging with FIB updates long after that.

Sadly I was unable to find my old convergence test numbers, but the krt
queue was still dissipating for at least a couple of minutes after BGP
converged. In case you're wondering whether it was the known rpd bug with
low krt priority - no, I tested after it was fixed. Not that I'd call it
"fixed".

And that's what I don't like about MX-es :-) Not sure if it's faster or
slower on the ASR9k though.
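
(If you want to reproduce the update counts: I captured the feed on the
receiving box and flapped the session, roughly like this - the interface and
peer address below are placeholders. Note that clearing the neighbor drops
the session:

user@mx80> monitor traffic interface ge-0/0/0 matching "tcp port 179" write-file bgp-feed.pcap
user@mx80> clear bgp neighbor 192.0.2.1
user@mx80> show bgp neighbor 192.0.2.1 | match "Input messages"

The "Input messages ... Updates" counter gives the per-peer update count
without needing the pcap; the dump is just for seeing how prefixes are
packed into each update.)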


On 02.12.2015 12:30, James Bensley wrote:

On 1 December 2015 at 17:29, Stepan Kucherenko  wrote:

My biggest gripe with the ASR9k (or IOS XR in particular) is that Cisco
stopped grouping BGP prefixes into one update when they have the same
attributes, so it's one prefix per update now (or sometimes two).

The transit ISP we tested with pinged TAC and got a response that it's a
"software/hardware limitation" and nothing can be done.

I don't know when this regression happened, but now taking a full feed
from the ASR9k is almost twice as slow as taking it from a 7600 with a
weak RE, and 3-4 times slower than taking it from an MX.

I'm not joking, test it yourself. Just look at the traffic dump. As I
understand it, it's not an edge case, so you must see it as well.

In my case it was 450k updates for 514k prefixes for a full feed from the
ASR9k, 89k updates for 510k prefixes from the 7600, and 85k updates for
516k prefixes from the MX480. Huge difference.

It's not a showstopper, but I'm sure it must have a significant impact on
convergence time.


How long, time-wise, is it taking you to converge?

Last time I bounced a BGP session to a full-table provider it took under
a minute to take in all the routes. I wasn't actually timing it, so I
don't know exactly how long.

Cheers,
James.




Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread James Bensley
On 1 December 2015 at 17:29, Stepan Kucherenko  wrote:
> My biggest gripe with the ASR9k (or IOS XR in particular) is that Cisco
> stopped grouping BGP prefixes into one update when they have the same
> attributes, so it's one prefix per update now (or sometimes two).
>
> The transit ISP we tested with pinged TAC and got a response that it's a
> "software/hardware limitation" and nothing can be done.
>
> I don't know when this regression happened, but now taking a full feed
> from the ASR9k is almost twice as slow as taking it from a 7600 with a
> weak RE, and 3-4 times slower than taking it from an MX.
>
> I'm not joking, test it yourself. Just look at the traffic dump. As I
> understand it, it's not an edge case, so you must see it as well.
>
> In my case it was 450k updates for 514k prefixes for a full feed from the
> ASR9k, 89k updates for 510k prefixes from the 7600, and 85k updates for
> 516k prefixes from the MX480. Huge difference.
>
> It's not a showstopper, but I'm sure it must have a significant impact on
> convergence time.

How long, time-wise, is it taking you to converge?

Last time I bounced a BGP session to a full-table provider it took under
a minute to take in all the routes. I wasn't actually timing it, so I
don't know exactly how long.

Cheers,
James.


Re: [j-nsp] Unwanted newline characters in Netconf XML

2015-12-02 Thread Dave Bell
On 2 December 2015 at 07:04, Tore Anderson  wrote:

> Works fine for me? Even in JUNOS versions as old as 11.4. Try:
>
> {master:1}[edit]
> tore@lab-ex4200# load merge terminal
> [Type ^D at a new line to end input]
> /* This is a
>  * multi-line
>  * comment.
>  */
> protocols{}
> [edit]
>   'protocols'
> warning: statement has no contents; ignored
> load complete
>

Ah, I was using 'annotate ' which doesn't appear to allow it. This
method does.

Regards,
Dave


Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread James Bensley
On 2 December 2015 at 09:17, Mark Tinka  wrote:
>
>
> On 1/Dec/15 17:49, john doe wrote:
>
>>
>>
>>
>> Yeah, I was just referring to the CLI experience: commits, rollback, the
>> hierarchy within. Prior to XR, IOS was a wall of text, no?
>
> Still is, but you get used to working with what you have :-).

IOS does support configuration revert and rollback, not in exactly the
same way as IOS-XR/Junos, but I always use it when working on the
production network. Just enable configuration archiving:

conf t
 archive
  path sup-bootdisk:/config-backup-
  maximum 10
  write-memory
  end
wr



conf term lock revert timer 20

%ARCHIVE_DIFF-5-ROLLBK_CNFMD_CHG_BACKUP: Backing up current running
config to sup-bootdisk:/config-backup-Nov-25-2015-23-04-57.804-UTC-166

%ARCHIVE_DIFF-5-ROLLBK_CNFMD_CHG_START_ABSTIMER: User: james.bensley:
Scheduled to rollback to config
sup-bootdisk:/config-backup-Nov-25-2015-23-04-57.804-UTC-166 in 20
minutes


! config changes goes here

end


! Check everything is OK, then confirm the changes to cancel the rollback timer.
! If I make a big boo-boo that cuts me off, the config will roll back
! after 20 mins (as above) without me confirming it.

configure confirm


! Oh no, I haven't made such a big mistake that I've been disconnected,
! but actually I do need to roll back

configure replace
sup-bootdisk:/config-backup-Nov-25-2015-23-04-57.804-UTC-166 list

Nov 25 2015 23:25:17.479 UTC:
%ARCHIVE_DIFF-5-ROLLBK_CNFMD_CHG_ROLLBACK_START: Start rolling to:
sup-bootdisk:/config-backup-Nov-25-2015-23-04-57.804-UTC-166



Cheers,
James.


Re: [j-nsp] Cisco ASR 9001 vs Juniper MX104

2015-12-02 Thread Mark Tinka


On 2/Dec/15 11:44, James Bensley wrote:

> With the exception of LAGs (IMO), as the ASR1000 series does not support
> QoS on port-channels very well at all:
>
> http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/qos-mqc-xe-3s-book/qos-eth-int.html#GUID-95630B2A-986E-4063-848B-BC0AB7456C44

Anything IOS and IOS XE is utterly and completely rubbish when it comes
to policing and general QoS on LAG's. Again, this is where the MX (Trio
+ Junos) outshines them all.

We've been doing some work with Cisco in trying to get better QoS and
policing on LAG's on IOS and IOS XE systems, but this won't happen soon.
It's one of the biggest flaws in IOS and IOS XE today, if you ask me.

For now, we try to avoid having to run LAG's on links that require
complex QoS and policing features on IOS and IOS XE boxes. Other than
that, peachy...

Mark.