OK, that is good to know.
I'll give it a try with snapshot then. We already have 3.5 almost
everywhere, and planning for the 4.2 upgrade (reading the posts with
interest)
Thanks
Jaime
Quoting Yuri L Volobuev :
Under both 3.2 and 3.3 mmbackup would always lock up our
IBM ESS, GSS, GNR, and Perseus refer to the same "declustered" IBM
raid-in-software technology with advanced striping and error recovery.
I just googled some of those terms and hit this summary, which was not
written by IBM:
http://www.raidinc.com/file-storage/gss-ess
Also, this is now a "mature"
Fri Mar 18 11:50:43 CDT 2016: mmcesop: /vol/system/ found but is not on
a GPFS filesystem
On 3/18/16 11:39 AM, Matt Weil wrote:
> upgrading to 4.2.2 fixed the dependency issue. I now get Unable to
> access CES shared root.
>
> # /usr/lpp/mmfs/bin/mmlsconfig | grep 'cesSharedRoot'
> cesSharedRoot
Thanks for all replies. Do all of the same restrictions apply to 4.1? We
have an option of installing ESS with 4.1. If we install ESS with 4.1 can
we then cross mount to 3.5 with FS version of 4.1? Also with 4.1 are there
any issues with key exchange?
Thanks,
Damir
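For reference, the key-exchange and remote-mount setup being asked about generally follows the standard GPFS multicluster steps. This is only a generic sketch, not from the thread; the cluster names, node names, key file paths, and filesystem name (essfs) are all hypothetical:

```shell
# Hedged sketch of GPFS multicluster key exchange and remote mount.
# All names below (storecluster, computecluster, essfs, nsd1...) are made up.

# On the owning (storage) cluster: generate and enable an auth key,
# then register the remote cluster's public key and grant it access.
mmauth genkey new
mmauth update . -l AUTHONLY
mmauth add computecluster.example.com -k /tmp/computecluster.pub
mmauth grant computecluster.example.com -f essfs

# On the accessing (compute) cluster: register the owning cluster,
# define the remote filesystem, and mount it on all nodes.
mmremotecluster add storecluster.example.com \
    -n nsd1.example.com,nsd2.example.com -k /tmp/storecluster.pub
mmremotefs add essfs -f essfs -C storecluster.example.com -T /gpfs/essfs
mmmount essfs -a
```

The key files are exchanged out of band (e.g. scp) between the cluster administrators before the mmauth/mmremotecluster steps.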
On Tue, Mar 15, 2016 at 10:29
There are two related, but distinctly different issues to consider.
1) File system format and backward compatibility. The format of a given
file system is recorded on disk, and determines the level of code required
to mount such a file system. GPFS offers backward compatibility for older
file
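The version-flag approach discussed in this thread can be sketched as follows. This is a hedged example, not a command from the thread; the device name (gpfs1) and stanza file are hypothetical, and --version names the oldest release whose nodes must still be able to mount the filesystem:

```shell
# Create a filesystem whose on-disk format stays mountable by 3.5 nodes.
# Device name (gpfs1) and NSD stanza file are hypothetical examples.
mmcrfs gpfs1 -F nsd.stanza --version 3.5.0.7

# Confirm the format level actually recorded on disk.
mmlsfs gpfs1 -V

# Later, once every node is upgraded, raise the format to the latest level.
# Note this is one-way: there is no format downgrade.
mmchfs gpfs1 -V full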
Hi Matt
I’ve done a fair amount of work (testing) with the installer. It’s great if you
want to install a new cluster, not so much if you have one already set up. You’ll
need to manually define everything. Be careful, though: do some test runs to
verify what it will really do. I’ve found the installer
Sven,
For us, at least, at this point in time, we have to create the new filesystem
with a version flag. The reason is we can't take downtime to upgrade all of
our 500+ compute nodes that will cross-cluster mount this new storage. We
can take downtime in June and get all of the nodes up to 4.2 gpfs
My first suggestion is: don’t deploy the CES nodes manually – way too many
package dependencies. Get those set up right and the installer does a good job.
If you go through and define your cluster nodes to the installer, you can do a
GPFS upgrade that way. I’ve run into some issues, especially
Does the installer manage to build the RPM kernel layer OK on clone OSes?
Last time I tried mmbuildgpl, it fell over because I don't run Red Hat Enterprise Linux...
(I must admit I haven't used the installer, but we have config management
recipes to install and upgrade.)
Simon
Please see question 2.10 in our FAQ.
http://www.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY/gpfsclustersfaq.pdf
We only support clusters running release n together with release n-1 or
release n+1. So 4.1 is supported to work with 3.5 and with 4.2. Release 4.2 is
supported to work
We have multiple clusters with thousands of NSDs; surely there is an
upgrade path. Are you all saying to just continue manually updating the NSD
servers and managing them as we did previously? Is the installer not
needed for existing setups? Just deploy CES manually?
On 3/16/16 12:20 PM, Simon
upgrading to 4.2.2 fixed the dependency issue. I now get Unable to
access CES shared root.
# /usr/lpp/mmfs/bin/mmlsconfig | grep 'cesSharedRoot'
cesSharedRoot /vol/system
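A quick way to reproduce what mmcesop is checking here – whether the shared-root path exists and actually lives on a GPFS filesystem – is a small helper like the one below. This is not a GPFS tool, just a sketch assuming GNU coreutils stat:

```shell
#!/bin/sh
# Hypothetical helper, not part of GPFS: verify that a directory exists
# and resides on a GPFS filesystem, mirroring the mmcesop error above.
is_on_gpfs() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "$dir: not found"
        return 1
    fi
    fstype=$(stat -f -c %T "$dir")   # GNU stat: name of the filesystem type
    if [ "$fstype" = "gpfs" ]; then
        echo "$dir: OK, on a GPFS filesystem"
    else
        echo "$dir: found but is not on a GPFS filesystem ($fstype)"
        return 1
    fi
}

# On a CES node you would check the configured shared root, e.g.:
# is_on_gpfs /vol/system
is_on_gpfs /tmp || true   # on a non-GPFS box this prints the same complaint
```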
On 3/16/16 2:51 PM, Simon Thompson (Research Computing - IT Services) wrote:
> Have you got a half updated system maybe?
>
>
Jonathan,
Gradual upgrade is indeed a nice feature of GPFS. We are planning to
gradually upgrade our clients to 4.2. However, before all, or even most,
clients are upgraded, we have to be able to mount this new 4.2 filesystem
on all our compute nodes that are running version 3.5. Here is our
Hi, Damir,
you cannot mount a 4.x-level fs from a 3.5-level cluster/node. You need
to create the fs with a sufficiently low format level; an fs-level downgrade is
not possible, AFAIK.
3.5 nodes can mount an fs from a 4.1 cluster (with the fs at the 3.5.0.7 format
level); that much I can confirm for sure.
Uwe
Mit