No need to specify REPLICATE(1), but no harm either.
No need to specify a FROM POOL, unless you want to restrict the set of
files considered. (consider a system with more than two pools...)
If a file is already in the target (TO) POOL, then no harm, we just skip
over that file.
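For reference, a minimal MIGRATE rule with an explicit REPLICATE(1) clause might look like the sketch below (pool names are hypothetical; please check the syntax against the mmapplypolicy / policy rules documentation for your Scale release):

```
/* Move files between pools while pinning data replication at 1 */
RULE 'move_keep_repl1'
  MIGRATE FROM POOL 'old_pool'
  TO POOL 'new_pool'
  REPLICATE(1)
```

As noted above, the FROM POOL clause is optional and only needed to restrict which files are considered.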
Hallo Simon,
the replication attributes of a file won't be changed just by the fact that the pool attribute is changed. Or, in other words: if a file gets migrated from POOLA to POOLB, that does not change the replication automatically, even if the pool consists of NSDs with multiple failure groups depending
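One way to confirm this behaviour is to inspect a file's attributes before and after the migration (a hedged sketch; the file path is hypothetical, and the commands below are the standard Spectrum Scale ones as I understand them):

```
# Show the per-file replication attributes (look for the
# "data replication" line) before running the policy:
mmlsattr -L /gpfs/fs1/some/file

# ... migrate with mmapplypolicy, then run mmlsattr again --
# the replication factor should be unchanged by the pool move.

# Changing replication is a separate, explicit step, e.g.:
mmchattr -r 2 /gpfs/fs1/some/file
```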
Hi all,
We'd like to move some data from a non replicated pool to another pool, but
keep replication at 1 (the fs default is 2).
When using an ILM policy, is the default to keep the current replication or use
the fs default?
I.e. just wondering if I need to include a "REPLICATE(1)" clause.
Regarding MPI-IO, how do you mean “building the applications against GPFS”?
We try to advise our users about things to avoid, but we have some poster-ready
“chaos monkeys” as well, who resist guidance. What apps do your users favor?
Molpro is one of our heaviest apps right now.
Thanks,
— ddj
Happy to share on the list in case anyone else finds it useful:
We use GPFS for home/scratch on our HPC clusters, supporting engineering
applications, so 95+% of our jobs are multi-node MPI. We have had some
questions/concerns about GPFS+Singularity+MPI-IO, as we've had issues with
GPFS+MPI-IO.
I am interested to learn this too, so please add me when sending a direct mail.
Thanks,
Yugi
Hi Lohit, Nathan
Would you be willing to share some more details about your setup? We are just
getting started here and I would like to hear about what your configuration
looks like. Direct email to me is fine, thanks.
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
I have my own sandbox set up to explore this, but nothing worth reporting just yet.
I too, though, would be interested in any wisdom that others can share.
Daniel
We do run Singularity + GPFS, on our production HPC clusters.
Most of the time things are fine without any issues.
However, I do see a significant performance loss when running some applications
in Singularity containers with GPFS.
As of now, the applications that have severe performance issues
We are running on a test system at the moment, and haven't run into any
issues yet, but so far it's only been 'hello world' and running FIO.
I'm interested to hear about experience with MPI-IO within Singularity.
On 26 April 2018 at 15:20, Oesterlin, Robert
wrote:
Anyone (including IBM) doing any work in this area? I would appreciate hearing
from you.
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
Hi
You knew the answer; it is still no.
https://www.mail-archive.com/gpfsug-discuss@spectrumscale.org/msg02249.html
--
Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations
Luis Bolinches
Consultant IT Specialist
Mobile Phone: +358503112585