If you can't afford to take down the whole VC, don't run a VC. That's my philosophy with Virtual Chassis. Use MC-LAG or EVPN in these environments (and if you do keep VC, use it only to increase the port count).
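For what it's worth, a minimal MC-LAG sketch for the QFX side (a rough outline from memory, not a tested config; the interface names, addresses, and IDs below are placeholders, and the mirror config on the other peer uses chassis-id 1 and status-control standby):

  # ICCP peering between the two MC-LAG peers (example addresses)
  set protocols iccp local-ip-addr 10.0.0.1
  set protocols iccp peer 10.0.0.2 redundancy-group-id-list 1
  set protocols iccp peer 10.0.0.2 liveness-detection minimum-interval 1000
  # protection link to the peer (assuming ae1 is the ICL)
  set multi-chassis multi-chassis-protection 10.0.0.2 interface ae1
  # MC-AE toward the downstream device; both peers must present the
  # same LACP system-id and admin-key so the host sees one LAG
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 aggregated-ether-options lacp system-id 00:01:02:03:04:05
  set interfaces ae0 aggregated-ether-options lacp admin-key 1
  set interfaces ae0 aggregated-ether-options mc-ae mc-ae-id 1
  set interfaces ae0 aggregated-ether-options mc-ae redundancy-group 1
  set interfaces ae0 aggregated-ether-options mc-ae chassis-id 0
  set interfaces ae0 aggregated-ether-options mc-ae mode active-active
  set interfaces ae0 aggregated-ether-options mc-ae status-control active

The point is that each peer keeps its own control plane, so you can reboot or upgrade one side without touching the other.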
Regarding the host software: it does not have to match the Junos OS version unless there is a known issue. From
https://www.juniper.net/documentation/en_US/junos/topics/task/installation/qfx-series-software-upgrading.html

However, pay attention to these notes regarding Junos OS and Host OS versions:
- The Junos OS and Host OS versions do not need to be the same.
- During an ISSU, the Host OS cannot be upgraded.
- Upgrading the Host OS is not required for every software upgrade, as noted above.

Nitzan

On Thu, Sep 6, 2018 at 8:10 PM Louis Kowolowski <[email protected]> wrote:

> It seems reasonable, and no worse than the situation now, with mismatched
> versions. I don't see why the host software would have anything to do with
> VC; it seems like it should just be handling VM operations and doing
> hardware passthrough...
>
> > On Sep 6, 2018, at 12:01 PM, Chuck Anderson <[email protected]> wrote:
> >
> > Logically, why couldn't you isolate one member at a time, do the
> > upgrade, then rejoin it to the VC?
> >
> > On Thu, Sep 06, 2018 at 11:12:59AM -0500, Louis Kowolowski wrote:
> >> I currently have a 6-node VC of QFX5100s, all running Junos 14.1X53-D43.7
> >> and host software 13.2X51-D38. In discussions with JTAC, they claim that
> >> upgrading the host software to match the VM requires a reboot of *all*
> >> nodes in the VC at the same time.
> >>
> >> Has anybody else had to deal with this? Are there any work-arounds?
> >> Taking the whole thing down is extremely awkward. Can we do a rolling
> >> upgrade (manually, I know ISSU/NSSU doesn't handle this) and stay
> >> operational? We are working on a plan to re-architect this into 2x 3-node
> >> VCs and MC-LAG them together, but it would be nice to be able to fix this
> >> more short-term.
>
> --
> Louis Kowolowski                 [email protected]
> Cryptomonkeys:                   http://www.cryptomonkeys.com/
>
> Making life more interesting for people since 1977
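P.S. If anyone does try the per-member approach Chuck describes above, the flow would look roughly like this (a sketch only; the vc-port slot/port numbers, member ID, and package name are placeholders, and JTAC's warning about the host-software reboot may still apply, so verify before trying it in production):

  show virtual-chassis status
  # isolate the member by removing its VC ports; expect to need console
  # or OOB management access to it once it splits from the VC
  request virtual-chassis vc-port delete pic-slot 1 port 0 member 3
  # upgrade Junos and the host software on the now-isolated member
  request system software add /var/tmp/jinstall-host-qfx-5-<version>-signed.tgz reboot
  # once it is back up on the new code, re-add the VC ports to rejoin
  request virtual-chassis vc-port set pic-slot 1 port 0 member 3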

