I heard that works, but I didn't think it was supported for production use. Then
again, data migration isn't really production in that sense.
Simon
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of
If one googles "GPFS AFM Migration" you'll find several IBM presentations,
white papers and docs on the subject.
Also, I thought one could run AFM between two file systems, both file
systems in the same cluster. Yes, I'm saying local cluster == remote
cluster == same cluster.
I thought I did
I was under the impression that AFM could not move data between filesystems in the
same cluster without going through NFS, but perhaps that is outdated. We’ve
only used it in the past to move data between clusters. Could someone with more
experience with AFM within a cluster comment? Our goal is to
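For reference, the in-cluster AFM setup being discussed would look roughly like the
sketch below. This is only a sketch: the filesystem names, fileset name, paths, and
AFM mode are all made-up placeholders, and whether a gpfs:// target within the same
cluster is supported is exactly the open question in this thread.

```shell
# Hypothetical names: "oldfs" is the source filesystem, "newfs" the destination,
# both mounted in the SAME cluster.

# Create an AFM cache fileset on newfs whose home is a path on oldfs.
mmcrfileset newfs migrate1 --inode-space new \
    -p afmMode=ro,afmTarget=gpfs:///gpfs/oldfs/projects

# Link the fileset into the new filesystem's namespace.
mmlinkfileset newfs migrate1 -J /gpfs/newfs/projects

# Pre-populate data rather than waiting for on-demand reads
# (list file contents and exact prefetch options depend on your release).
mmafmctl newfs prefetch -j migrate1 --list-file /tmp/filelist
```

The usual migration pattern is to prefetch in bulk, cut clients over to the new
path, and then convert or delete the cache relationship once home is drained.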
I don't know the particulars of the case in question, nor much about ESS
rules...
But for a vanilla Spectrum Scale cluster:
1) There is nothing wrong or ill-advised about upgrading software and then
creating a new version 5.x file system... keeping any older file systems
in place.
2) I
Our understanding is that there is no supported way to create an SBA-enabled pool
other than as part of a filesystem that is already at a sufficient 5.x level
(we've heard this a couple of times in discussions with multiple IBMers).
The other reasons for us include the perceived difficulty of in-place
On Fri, 29 Mar 2019, Christopher Black wrote:
...
The main reason for the new cluster, for us, is to be able to make a fully V19+
filesystem with sub-block allocation.
Our understanding from talking to IBM is that there is no way to upgrade a pool
to be SBA-compatible, nor is it advisable to
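The space-efficiency argument behind that motivation can be sketched with a little
arithmetic. The fixed 1/32 sub-block ratio of pre-5.x filesystem formats is
documented; the specific 5.x variable sub-block size used below (8 KiB at a 4 MiB
block size) is our reading of IBM's documentation, so treat it as an assumption.

```python
MIB = 1024 * 1024

def legacy_subblock_size(block_size: int) -> int:
    """Pre-5.x (format < 19) filesystems always use 32 sub-blocks per block,
    so the smallest on-disk allocation is block_size / 32."""
    return block_size // 32

# With a 4 MiB block size, a tiny file still consumes a 128 KiB sub-block
# on an old-format filesystem:
print(legacy_subblock_size(4 * MIB) // 1024)  # 128 (KiB)

# A 5.x-format filesystem with sub-block allocation can use 8 KiB sub-blocks
# at the same 4 MiB block size (per IBM docs), i.e. 512 sub-blocks per block:
print(4 * MIB // (8 * 1024))  # 512
```

That 16x reduction in minimum allocation size is why an in-place upgrade (which
keeps the old on-disk format) doesn't deliver the benefit; only a newly created
5.x filesystem does.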
I suggest option A.
We are facing a similar transition and are going with a new cluster and then
4.x cluster to 5.x cluster migration of existing data. An extra wrinkle for us
is we are going to join some of the old hardware to the new cluster once it is
free of serving current data.
Main
So I have an ESS today and it's nearing end of life (just our own
timeline/depreciation etc) and I will be purchasing a new ESS. I'm working
through the logistics of this. Here is my thinking so far:
This is just a big Data Lake and not an HPC environment
Option A
Purchase new ESS and set it
You mean like the Mellanox QSA?
https://store.mellanox.com/products/mellanox-mam1q00a-qsa-sp-single-pack-mam1q00a-qsa-ethernet-cable-adapter-40gb-s-to-10gb-s-qsfp-to-sfp.html
We use hundreds of these in CX-4 cards.
But not with the SFP+ Bob mentioned; we normally use breakout cables from the
On Fri, 2019-03-29 at 12:22 +, Oesterlin, Robert wrote:
> Anyone come across this? Or know what I might look at to fix it? I
> did find a reference online for ConnectX-5 cards, but nothing for X-
> 4. The SFP (Cisco SFP-10G-SR) is certified by Mellanox to work.
>
> [3.807332] mlx5_core
Anyone come across this? Or know what I might look at to fix it? I did find a
reference online for ConnectX-5 cards, but nothing for X-4. The SFP (Cisco
SFP-10G-SR) is certified by Mellanox to work.
[3.807332] mlx5_core :01:00.1: Port module event[error]: module 1,
Cable error, Power
I got this code cleaned up a little bit and posted the initial version out to
https://github.com/ckerner/ssacl.git. There are detailed examples in the
README, but I listed a few quick ones below. I will be merging in the default
ACL code, recursion, and backup/restoration of ACL branches