On Tue, Oct 29, 2019 at 11:43:49AM +0100, Clement Cavadore
wrote:
>
> Nice trick, thanks for sharing!
Because of the image size, the environment is very limited.
But there is an FTP client that we used to copy diagnostic files to the
outside.
___
Now you can connect via telnet to the (additional and special) IP address
configured above. No authentication takes place; you get root access.
Best regards,
Franz Georg Köhler
___
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
On Wed, Mar 06, 2019 at 11:48:49AM +0100, Clement Cavadore
wrote:
>
> 1- Can I use an NI-MLX-10Gx8-M with an XMR management card?
> I have read that it could be possible to mix XMR and MLX cards, with
> the result that the whole chassis would be "downgraded" to MLX mode,
> but last time I tried,
On Mon, Dec 17, 2018 at 10:54:33 +0100, Clement Cavadore
wrote:
> Hello Franz Georg,
>
> I was pretty sure you'd be the first to answer my strange question :-))
>
> Actually, the release notes do not tell me anything about the
> intermediary version. I found an intermediary image (sxs04300.bin) on
On Sun, Dec 16, 2018 at 03:11:02 +0100, Clement Cavadore
wrote:
>
> Does anyone have some intermediary versions for me?
Do you already know what version is needed?
V02 seems to be really antique. However, it might be the proper version
if the switch is being used in a historical context :-)
On Tue, Sep 18, 2018 at 11:01:21 -0600, Daniel Schmidt
wrote:
> I've a strange issue: just one of my MLXs polls very, very slowly for no
> discernible reason. A simple snmpwalk confirms this. Fearing it was
> somehow being over-polled, I added log statements to my SNMP ACL. I did
> not find
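Not from the thread, but one way to quantify "very, very slowly" is to time each poll individually. A minimal sketch in plain Python; the `poll` callable here is a stub you would replace with a real SNMP GET (for example a wrapper around net-snmp's snmpget):

```python
import time

def time_polls(poll, oids):
    """Call poll(oid) for each OID and record how long each call takes."""
    timings = []
    for oid in oids:
        start = time.monotonic()
        poll(oid)  # stub: replace with a real SNMP query against the MLX
        timings.append((oid, time.monotonic() - start))
    return timings

# Illustration with a do-nothing stub; a slow device would show large values.
for oid, seconds in time_polls(lambda oid: None, ["1.3.6.1.2.1.1.3.0"]):
    print(f"{oid}: {seconds:.6f}s")
```

Comparing per-OID timings between the slow MLX and a healthy one narrows down whether the whole agent is slow or only specific MIB subtrees.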
On Thu, Sep 13, 2018 at 08:23:06 -0600, Eldon Koyle
wrote:
> Have you tried running dm pstat? It can sometimes help identify the type
> of traffic causing issues. The first run is a throwaway; it shows counts
> since the previous run.
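As a sketch of that workflow (the command is the one named above; the annotations are mine):

dm pstat    (first run: throwaway, counters cover the time since the last run)
(wait a few seconds while the suspect traffic flows)
dm pstat    (second run: these counts are meaningful)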
Thank you, Eldon.
I think this is a hint in the right
On Mon, Sep 10, 2018 at 08:22:19 -0600, Eldon Koyle
wrote:
> You can enable cpu-protection on the vlan IIRC, I don't remember all the
> caveats; definitely look at the manual before enabling.
Even with CPU protection enabled on the VLAN, I see packets hitting the
CPU with "reason: Layer 2 packet
On Wed, Sep 12, 2018 at 04:20:46 -0600, Eldon Koyle
wrote:
> Does anyone have a recommendation for a code version for MLXe?
>
> The last I saw recommended here was recent 5.8, I'm wondering if it is
> worth investigating 6.0 or later yet (or ever).
We are currently running v6.0 on all MLX that
On another device I see traffic hitting the CPU where what looks like the
same packet hits twice, with different reasons.
Does anyone have an idea why this happens?
I also wonder why it is classified as multicast only once (and in the end
it does not look like multicast at all)?
[ppcr_rx_packet]: Packet received
e can
> try?
Is there any configuration left over from another module in that specific
slot? In that case, the new module will not boot up, as its type
differs from the configuration.
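A hedged sketch of that check (NetIron CLI; the slot number and module type below are made-up examples, and the exact `no module` syntax should be verified against your release's configuration guide):

show running-config | include module
(if the old type is still bound to that slot, remove it, e.g.:)
no module 4 ni-mlx-8-port-10g-m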
Best regards,
Franz Georg Köhler
Hello everyone,
I currently see increased CPU% on one of our MLXe line cards:
#sh cpu lp
15:48:40 GMT+01 Mon Aug 13 2018
SLOT #:  LP CPU UTILIZATION in %:
         in 1 second:   in 5 seconds:   in 60 seconds:   in 300 seconds:
1:       6              5               4
On Thu, Jun 28, 2018 at 10:15:14 +0200, Frank Menzel wrote:
>
> interface ethernet 1/1
> no ip redirect
You usually want to disable ICMP redirects at the global level.
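For illustration, both variants might look like this (the interface-level form is quoted from the message above; the global command name is an assumption from memory, so verify it against your platform's reference):

interface ethernet 1/1
 no ip redirect
(or once, globally:)
no ip icmp redirects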
Best regards,
Franz Georg Köhler
On Thu, Feb 08, 2018 at 01:34:58 +0100, Franz Georg Köhler <li...@openunix.de>
wrote:
> On Wed, Dec 13, 2017 at 09:31:56 +, Tim Warnock <tim...@timoid.org> wrote:
> > I used to test 10,000 “show run” and 20 “show techs”, alongside 10,000
> > full SNMP “walks”.
Th
problem here.
You can check with dm ipv4 hw-route / dm ipv4 hw-arp whether the HW
entries are programmed correctly while the host is unreachable.
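For example (the command names are the ones given above; the argument syntax is assumed, and 192.0.2.0/24 / 192.0.2.1 are placeholder addresses):

dm ipv4 hw-route 192.0.2.0/24
dm ipv4 hw-arp 192.0.2.1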
Do you have independent management connectivity to the FCX to check the
status while it stops routing?
Best regards,
Franz Georg Köhler
onf", not "sh run", but it is pretty easy to replicate
with the right command. Connection type (ssh/console) doesn't matter.
This has been reported to Ruckus and is going to be fixed in 08061c,
coming end of March.
Not sure if this is present in 08040 and 08060.
B
B
On Wed, Dec 13, 2017 at 09:48:46 +0100, Clement Cavadore
wrote:
> Hello,
>
> I have found a probable memory leak on ICX7450 code version SPR08061a
> (and b).
Did you try to open a case?
ICX is ruckus now: https://support.ruckuswireless.com/cases
Best regards,
Franz
On Wed, Sep 06, 2017 at 07:53:57 +, Derek Maxwell
wrote:
>
> We have a Brocade/Foundry RX-4 with (2) 24 port fiber cards
> (RX-BI-24F) and (2) 24 port copper cards (RX-BI-24C). One of our
> transit providers is NTT, with whom we have a single GigE circuit
>
to be part of the VRF.
Secondly, the system will only send out management packets on loopback1,
not on the management port.
How do you address those problems?
Best regards,
Franz Georg Köhler
different link
speeds. The same behavior also holds true for RSTP deployments.
Best regards,
Franz Georg Köhler
and use one of the
remaining 40G interfaces for breakout.
However, this is currently unsupported, as you cannot enable stacking
mode as long as one of the ports is configured in breakout mode.
Best regards,
Franz Georg Köhler
? Is there such a thing as hardware support for breakout cables?
Until now, I was under the impression that a 40G port is just an
aggregated 4x10G port and the switch just needs to deactivate the
aggregation in order to break out. Am I wrong?
Best regards,
Franz Georg Köhler
/2/6:4: Type : 40GE-SR4 100m (QSFP+)
Best regards,
Franz Georg Köhler
On Wed, Mar 02, 2016 at 09:31:30AM +0100, Franz Georg Köhler wrote:
> On Tue, Mar 01, 2016 at 10:00:56PM +0100, "Rolf Hanßen" wrote:
>> Hi,
>>
>> I have no solution for that beside using vpls/vll-local. I don't think
>> there is one.
>
> Interesting
ing on the same interface).
As most systems ignore ICMP redirects anyway, there is no benefit in
keeping the default enabled, and I strongly recommend always disabling it.
Best regards,
Franz Georg Köhler
[ASCII topology diagram showing the firewall, garbled in the archive]
Best regards,
Franz Georg Köhler
LAG without bringing down the LAG.
:-)
I no longer see SFM temperatures in show chassis with 5.8 and 5.9;
does anybody else see this phenomenon?
Best regards,
Franz Georg Köhler
Hello,
this could be bogon filtering.
You can try to disable it and afterwards reinstall the route manually:
no ip martian filtering-on
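Put together, the suggested recovery might look like this (the prefix and next hop are placeholders, not from the thread):

no ip martian filtering-on
ip route 192.0.2.0/24 198.51.100.1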
Best regards,
Franz Georg Köhler
On 01.10.2014 at 21:19, José Santos wrote:
Hi,
We are new to Brocade and we are experiencing an unexpected behavior
Hello,
yes, you are right, this is an MLXe with V5.6.0b.
Unfortunately, I did not have IPv4 connectivity at the time to test this.
But as I had never configured a management VRF, I was curious whether
there was some configuration error in my config.
Best regards,
Franz Georg Köhler
, basic config which should be OK to my understanding?
vrf management
rd 29066:01
address-family ipv6
ipv6 route ::/0 gw
int man 1
vrf forwarding management
ipv6 addr ip
There is also no change if I set
management-vrf management
or not.
Best regards,
Franz Georg Köhler
: 1584
TDM_A Lost Pkt Count: 0
TDM_B Lost Pkt Count: 0
Best regards,
Franz Georg Köhler
for Discard: 0 (Disable)]
Prg EGQ EnQue Pkt Count: 279637915732
Prg EGQ EnQue Byte Count: 49565697750048
Prg EGQ Discard Pkt Count: 0
Prg EGQ Discard Byte Count: 0
Best regards,
Franz Georg Köhler
data errors detected on LP 1/TM 0
Does anybody know if this is a hardware or software related problem?
Best regards,
Franz Georg Köhler
On 10.10.2012 at 19:50, Darren O'Connor wrote:
Anyone running it yet? I see it's been released.
I'm using it and it is working.
The manifest upgrade did not upgrade the boot images on the line cards; I
did this manually afterwards.
I'm particularly keen for the routing over VPLS feature.
I do not use
On NetIron MLX, my existing IronWare software gets deleted when I try to
install a backup into the secondary flash:
#show flash
Active Management Module (Top Slot)
Code Flash - Type MT28F128J3, Size 32 MB
o IronWare Image
Hello,
I read this, but both images fit onto the management module.
However, I removed the secondary LP image, since I thought it was more
valuable to have a secondary management IronWare image to boot from in
emergency cases than a secondary LP image (which could be downloaded
from tftp anyway, as long
Dis No No Ope
2/411 100 Yes L Agg Syn Col Dis No No Ope
Best regards,
Franz Georg Köhler
%(used)
SNet 11: 16(size), 16(free), 00.00%(used)
Do I have to worry about a totally used SNet?
Best regards,
Franz Georg Köhler
)
Best regards,
Franz Georg Köhler
?
Best regards,
Franz Georg Köhler
!
NetIron MLX LP SL 1:
Total SDRAM          : 536870912 bytes
Available Memory     : 22589440 bytes
Available Memory (%) : 4 percent
Are you sure that message reflects TCAM and not general DRAM?
No, this message is not about CAM but total line card memory.
Best regards,
Franz Georg
Hello,
I am currently seeing a lot of these messages on an MLX:
WARN: Current Total Free Memory (22589440) on LP 1 is below 5 percent of
Installed Memory.
Since the RAM usage seems to remain stable at 96% on this line card, I
wonder if:
- it is generally considered to be safe to run the LP
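For what it's worth, the numbers in the warning are internally consistent with the totals shown earlier in the thread; a quick check in plain Python:

```python
total_sdram = 536_870_912  # Total SDRAM on the LP, from the show output above
free_bytes = 22_589_440    # free memory reported in the WARN message

free_pct = free_bytes / total_sdram * 100
print(f"{free_pct:.2f}% free")  # → 4.21% free, i.e. below the 5% threshold
```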
On Tue, Jun 21, 2011 at 10:52:34 +0200, Rens r...@autempspourmoi.be wrote:
A small question regarding module interchangeability between MLX and XMR:
Can a NI-MLX-1Gx20-SFP 20-port 1GbE/100FX module be used in an XMR?
Can a NI-XMR-1Gx20-SFP 20-port 1GbE/100FX module be used in an MLX?