Hey all,
Am hoping I can get some pointers here, as this has me stumped.
I'm trying to provision an IPv6 prefix to a remote SRX.
So far I've got a GRE tunnel between the devices, with an IPv6 prefix on.
The tunnel is between the loopback of the MX (main table) and the remote-facing interface
to remove the S-VLAN.
pop isn't an option in "vlans interface mapping", and JUNOS
doesn't want to accept swap on a trunk interface (VC2 to VC3 is a trunk port).
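Going back to the GRE tunnel mentioned at the top, a minimal sketch of the MX side of such a tunnel might look like this (interface number, addresses, and the 2001:db8::/64 prefix are all placeholders, not the actual config):

```
# MX side: GRE tunnel sourced from lo0 in the main table, IPv6 on top
interfaces {
    gr-0/0/10 {
        unit 0 {
            tunnel {
                source 192.0.2.1;         /* lo0 address, placeholder */
                destination 198.51.100.1; /* remote SRX endpoint, placeholder */
            }
            family inet6 {
                address 2001:db8::1/64;   /* documentation prefix, placeholder */
            }
        }
    }
}
```

The SRX side would carry a mirror-image gr- unit, with the provisioned prefix routed at the tunnel.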
Help?
Thanks
--
Mike Williams
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
re you traffic level?
> Did it work? Is it bullet-proof?
>
> Looking forward to your messages and feedbacks.
>
> Alex
ted, 0 prefix rejected
>
>
> The MX104 never actually advertises any prefixes to the MX80 though.
>
> > show route advertising-protocol bgp
>
> ... zilch ...
>
>
> Is there some inbuilt protection preventing iBGP prefixes from being sent to
> another iBGP peer?
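That is standard iBGP split-horizon behaviour: routes learned from one iBGP peer are not re-advertised to another iBGP peer unless the speaker is a route reflector (or the peers sit in a confederation). A rough sketch of the route-reflector knob, with made-up group name and addresses:

```
protocols {
    bgp {
        group internal {
            type internal;
            cluster 10.255.0.1;   /* makes this router a route reflector */
            neighbor 10.255.0.2;  /* the MX80, placeholder address */
        }
    }
}
```

With `cluster` configured, iBGP routes learned from clients are reflected to the other iBGP neighbours in the group.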
ar you really shouldn't be using this release, so who knows.
--- JUNOS 15.1F3.11 built 2015-10-27 19:44:29 UTC
At least one package installed on this device has limited support.
Run 'file show /etc/notices/unsupported.txt' for details.
--
Mike Williams
the backup are internally
redirected to the master?
Thanks
--
Mike Williams
!
--
Mike Williams
--
Mike Williams
        ];
    }
    then {
        count stateless-dhcpv6;
        log;
        routing-instance stateless;
    }
}
Seems the flow lookup doesn't respect that.
Does anyone have any ideas?
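For filter-based forwarding like the fragment above, the target instance normally needs to be `instance-type forwarding`, with interface routes shared into it via a rib-group; a hedged sketch (instance, group names, and the next hop are illustrative):

```
routing-instances {
    stateless {
        instance-type forwarding;
        routing-options {
            rib stateless.inet6.0 {
                static {
                    route ::/0 next-hop 2001:db8::fffe;  /* placeholder next hop */
                }
            }
        }
    }
}
routing-options {
    rib-groups {
        fbf-v6 {
            import-rib [ inet6.0 stateless.inet6.0 ];  /* share interface routes */
        }
    }
    interface-routes {
        rib-group inet6 fbf-v6;
    }
}
```

Without the rib-group, the forwarding instance has no interface routes to resolve against, which can look a lot like the lookup being ignored.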
Thanks
--
Mike Williams
of the second CPU?
From the Freescale docs both CPUs are dual-core.
Thanks
--
Mike Williams
Hi all,
Random thought for the day.
You can archive the entire config after each commit (archival configuration
transfer-on-commit).
You can apply a comment to each commit (# commit comment blah)
How do you archive that comment?
It's not included in the config.
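One hedged workaround (an untested sketch; the event name and path would need checking on your release): the comment does appear in `show system commit` and in the UI_COMMIT syslog message, so an event policy could capture it on every commit:

```
event-options {
    policy save-commit-comments {
        events ui_commit;   /* assumed event name; verify with your release */
        then {
            execute-commands {
                commands {
                    /* append the commit history, comments included, to a file */
                    "show system commit | save /var/tmp/commit-comments.txt";
                }
            }
        }
    }
}
```

The archived file could then be shipped off-box the same way as the config itself.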
Thanks
--
Mike Williams
On Monday 10 February 2014 09:44:51 Yucong Sun wrote:
Hi,
The VCP cable for an EX switch looks a lot like a plain SFF-8088 cable; can
someone confirm? SFF-8088 cables are sold for $10 on eBay, while the VCP
cable is at least $100...
VCP cables are, at least for the EX4200, PCIe x8, not SAS.
--
Mike
will not be ... warning from SSH.
--
Mike Williams
be better. Time for some logical tunnels! J-series devices don't support
logical tunnels though.
Argh!
--
Mike Williams
interface.
I don't mind if anyone does prove I'm being dense!
Thanks
--
Mike Williams
On Tuesday 02 April 2013 17:47:08 Mike Williams wrote:
I accept that clustering across a switch isn't necessarily advisable, I'm
just wondering if it's fundamentally possible.
Has anyone ever even tried to put a switch between the nodes of a J-series or
SRX-series cluster?
Thanks very much to all
copper
between the providers (if that's even possible) when the VC is way more than
fast enough.
Traffic levels are way way below 10Gbps, and it's highly unlikely they'll ever
get that high.
--
Mike Williams
.
But they're all at least an order of magnitude faster.
--
Mike Williams
is on the packet saying it's packet-mode, which isn't
removed/reset when it's wrapped in a GRE header, so IPSec sees a packet-mode
packet and drops it.
This was with 10.4R6.5, we didn't get the chance to try anything newer.
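For context, the flag in question is the one set by the selective packet-mode filter action; roughly (filter and term names are illustrative):

```
firewall {
    family inet {
        filter bypass-flow {
            term all {
                then {
                    packet-mode;  /* marks matching packets to skip flow processing */
                    accept;
                }
            }
        }
    }
}
```

The observed behaviour was that this mark survived GRE encapsulation, so the encapsulated packet was still treated as packet-mode when it reached IPsec.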
--
Mike Williams
Thanks for all the responses.
Clustering does indeed seem to be by far the best solution.
I think I'll take a crack at an event script anyway, as I haven't touched them
before, and any knowledge would probably be useful eventually.
On Thursday 27 September 2012 17:30:42 Mike Williams wrote
to find yet that will alter the OSPF metric for a logical interface
based on the VRRP state.
Does anyone know if such a knob exists?
Honestly I'm not holding out much hope, as there isn't a direct correlation
between VRRP and logical interfaces (many VRRP groups per unit).
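If anyone does go the event-script route, a rough shape for it might be an event policy driving a config change (the VRRP event name here is an assumption, and the interface and metric are placeholders; check the actual event tags on your release):

```
event-options {
    policy vrrp-backup-raise-metric {
        events vrrpd_new_backup_state;   /* assumed event name, verify first */
        then {
            change-configuration {
                commands {
                    /* raise the OSPF cost when this box loses VRRP mastership */
                    "set protocols ospf area 0.0.0.0 interface ge-0/0/0.0 metric 100";
                }
            }
        }
    }
}
```

A matching policy on the new-master event would set the metric back.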
Thanks
--
Mike Williams
fpc1 and fpc2 have similar numbers, even though these packets have no need to
leave fpc0. There aren't even any active servers off fpc1/2 yet.
fpc0 has been up 33 days, so has seen almost 30 duplicate ACKs per second
since it booted.
--
Mike Williams
in port 1. Didn't have to do anything special either, although I did tell the
VC to use ports 2 and 3 which was probably unnecessary.
1GbE SFPs cause a ge-x/1/x interface to appear, and 10GbE SFPs cause an
xe-x/1/x interface to appear.
--
Mike Williams
Senior Infrastructure Architect
Comodo CA
and upgraded an SRX220 to 11.2 yesterday (evening GMT)
for better IPv6 support, as no new 11.4 release had shown up yet.
--
Mike Williams
-duper or a particularly new version of the chassis?
Cheers,
Phil
--
Mike Williams
, no problems (at least
that's what the Juniper SE told me when I bought mine).
--
Mike Williams
Senior Infrastructure Architect
Comodo CA Ltd
Office Tel Europe: +44 (0) 161 8747070
Fax Europe: +44 (0) 161 8771767
belief mixed mode operation is supported too.
Thanks
--
Mike Williams
be driving cable lengths anywhere near 200 meters, but all would
be on SMF for consistency, if that matters at all.
Thanks
--
Mike Williams
On Friday 11 November 2011 17:42:29 Mike Williams wrote:
So. VPLS. Point-to-multipoint. Virtual LAN. Brilliant!
I managed to build up the courage, and time, to have a crack at this today,
figuring it could take a while.
However it took me less than an hour to convert my mesh of l2vpns to a VPLS
;
}
}
}
# show protocols mpls
path-mtu {
    rsvp mtu-signaling;
}
label-switched-path fsed-rmdcjs1 {
    from a.b.c.d;
    to w.x.y.z;
    bandwidth 90m;
    no-cspf;
    fast-reroute;
    primary fsed-rmdcjs1;
}
path fsed-rmdcjs1 {
    e.f.g.h strict;
}
--
Mike Williams
to churn through that job, and that's a
bit annoying when you make a small change to an unrelated policy!
Now, is that us being stupid, or the RE being slow? I know what I'd like to
hear :)
Cheers
--
Mike Williams
sessions are coming up, it takes 15
minutes to process all the routes?
Do you mean commit?
Scott
On Thu, Sep 8, 2011 at 7:41 AM, Mike Williams
mike.willi...@comodo.com wrote:
Hi all,
Recently a discussion touched on the routing engine speed of the MX
series, but there wasn't much like
to the One True Path (TM), or even
confirmation we're already on the right path, all gladly welcomed.
Thanks
--
Mike Williams
by intermediaries,
will those intermediaries use their tables and hijack my packets down their
bits of wet string through 15 other ASs and to the moon and back?
Thanks
--
Mike Williams
elegant way, without multi-hop
ebgp.
--
Mike Williams
-profile DDR2 PC2-6400 sticks though. Everyone,
and their Gran, can do regular profile (30mm) sticks but low-profile
(18-19mm) PC2-6400 seems rarer than hen's teeth. A supplier can do us DDR2
PC2-5300, but the lower speed concerns me.
Thanks
--
Mike Williams
address/prefix in proxy-identity though, so that
couldn't possibly work as no CIDR mask is given in the request.
Could anyone possibly enlighten me please?
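For what it's worth, the knob that normally sorts out mismatched phase 2 identities is an explicit proxy-identity under the VPN; a sketch with placeholder names and prefixes:

```
security {
    ipsec {
        vpn to-remote-office {          /* placeholder VPN name */
            ike {
                gateway remote-gw;      /* placeholder gateway name */
                proxy-identity {
                    local 10.0.1.0/24;  /* placeholder local prefix */
                    remote 10.0.2.0/24; /* placeholder remote prefix */
                    service any;
                }
            }
        }
    }
}
```

Both ends need to agree on these prefixes (mirrored local/remote) or phase 2 negotiation fails.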
Thanks
--
Mike Williams
--
Mike Williams
Senior Systems Administrator
, 2010 at 10:00:36PM +0100, Mike Williams wrote:
Hi, could you possibly expand on "lacks V6" please?
The one big change in 10.2 for the SRX platforms is the addition of IPv6
flow mode. The SRXes will still pass IPv6 traffic in earlier releases,
but without any policy evaluation.
- Mark
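For reference, on 10.2 and later the IPv6 flow mode Mark mentions is enabled with a single statement (as far as I know this requires a reboot on the SRX to take effect):

```
set security forwarding-options family inet6 mode flow-based
```

Until that is set, IPv6 traffic is forwarded without security policy evaluation, as described above.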