They hadn’t asked, but neither is the process of raising the maximum, which
could be what they’re asking about (there might be a momentary performance hit;
I can’t recall, but I don’t believe it’s significant if so).
Hi,
I have tried this before and I would like to temper your expectations.
If you use a placement policy to allow users to write any files into your
"small" pool (e.g. by directory), they will get ENOSPC when your small
pool fills up. And they will be confused, because they can't see the pool
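For reference, a minimal placement-policy sketch in the Scale policy language; the pool name 'small' and fileset name 'fastdata' here are hypothetical, not from the original post:

```
/* Hypothetical rule: files created in fileset 'fastdata' land in pool 'small' */
RULE 'toSmallPool' SET POOL 'small' FOR FILESET ('fastdata')
/* Default rule so everything else goes to the system pool */
RULE 'default' SET POOL 'system'
```

Installed with mmchpolicy; once 'small' fills, creates matched by the first rule fail with ENOSPC rather than spilling into another pool, which is exactly the behavior described above.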
Hi, all, Mladen,
(This is my first post to the GPFSug-discuss list. I am an IBMer, the IBM
worldwide technical support Evangelist on Spectrum Scale/ESS, based in
Florida. Apologies if my attachment URL is not permitted or if I did
not reply properly to tie my reply to the original
Upon further thought, it occurs to me that Spectrum Scale V5's introduction
of variable sub-blocks must by necessity have changed the inode calculation
that I describe below. I would be interested to know how exactly, in
Spectrum Scale V5 formatted file systems, one may need to change the
source files.
-a : pass -a to cp
-v : pass -v to cp
In the case where tar -C doesn’t work, you can always use a subshell (I do this
regularly):
tar -cf - . | ssh someguy@otherhost "(cd targetdir; tar -xvf -)"
Only use -v on one end. :)
Also, for parallel work that’s not designed that way, don't underestimate the
-P option to GNU and BSD
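The message is cut off here, but assuming the `-P` in question is the parallel flag of GNU/BSD xargs, a minimal local sketch looks like this (`src` and `dst` are hypothetical example directories, and `cp` stands in for whatever per-file work you need):

```shell
# Sketch only: parallel per-file copies with xargs -P.
# 'src' and 'dst' are hypothetical example directories.
mkdir -p src dst
for i in 1 2 3 4 5 6 7 8; do echo "data$i" > "src/f$i"; done
# -P 4 runs up to 4 cp processes at once; -I{} substitutes each filename.
ls src | xargs -P 4 -I{} cp "src/{}" "dst/{}"
```

The same pattern drives parallel tar or rsync pipes: feed a list of directories instead of files.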
While Fred is right that in most cases you shouldn’t see this, under heavy
burst-create workloads before 5.0.2 you can trigger out-of-space errors even
when you have plenty of space in the filesystem (very hard to reproduce, so
unlikely to hit for a normal end user). To address these issues there have
Last time this was mentioned, it doesn't do ACLs?
Simon
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of makap...@us.ibm.com
[makap...@us.ibm.com]
Sent: 06 March 2019 15:01
To: gpfsug main discussion
mmxcp may be in samples/ilm; if not, perhaps we can put it on an approved
file sharing service ...
+ mmxcp script, for use with mmfind ... -xargs mmxcp ...
Which makes parallelized file copy relatively easy and super fast!
Usage: /gh/bin/mmxcp -t target -p strip_count source_pathname1
It might be the case that AsynchronousFileChannel is actually doing mmap
access to the files. Thus, the memory management will be completely
different with GPFS compared to a local fs.
Regards,
Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: t...@il.ibm.com
1 Azrieli Center, Tel
Hi, in that case I'd open several tar pipes in parallel, maybe using
carefully selected directories, like
tar -cf - <dir> | ssh <host> "tar -xf -"
I am not quite sure whether "-C /" for tar works here ("tar -C / -x"), but
along these lines might be a good, efficient method. target_hosts should be
all nodes
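On the -C question: it does work on both ends of a pipe. A local sketch with no ssh hop (`srcdir` and `dstdir` are hypothetical example paths):

```shell
# Sketch: tar -C changes directory on both the create and extract side.
mkdir -p srcdir/sub dstdir
echo hello > srcdir/sub/file.txt
# -C srcdir on the create side archives relative to srcdir;
# -C dstdir on the extract side unpacks into dstdir.
tar -C srcdir -cf - . | tar -C dstdir -xf -
cat dstdir/sub/file.txt   # prints "hello"
```

With ssh in the middle it is the same shape: `tar -C /srcdir -cf - . | ssh host "tar -C /dstdir -xf -"`, which avoids the subshell-and-cd dance entirely.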
For any process with a large number of threads, the VMM size has become an
imaginary number ever since the glibc change to allocate a heap per thread.
I look at /proc/$pid/status to find the memory used by a process: RSS + Swap +
kernel page tables.
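Those fields can be read straight out of /proc. A minimal Linux-only sketch, using the current shell ($$) as the example pid:

```shell
# Sketch: sum RSS + swap + page-table memory (in kB) from /proc/<pid>/status.
# VmSwap/VmPTE lines can be absent on some kernels, so missing fields count as 0.
awk '/^VmRSS:|^VmSwap:|^VmPTE:/ {sum += $2} END {print sum+0, "kB"}' "/proc/$$/status"
```

Swap $$ for any pid of interest; the VmSize line in the same file is the "imaginary" virtual figure being dismissed above.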
Jim
On Wednesday, March 6, 2019, 4:25:48
Some of you had questions to my original post. More information:
Source:
- Files are straight GPFS/Posix - no extended NFSV4 ACLs
- A solution that requires $’s to be spent on software (i.e., Aspera) isn’t a
very viable option
- Both source and target clusters are in the same DC
- Source is
Yes, and it’s licensed based on the size of your network pipe.
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original message -From: "Frederick Stock" Sent by:
Does Aspera require a license?
Fred
Fred Stock | IBM Pittsburgh Lab | 720-430-8821 | sto...@us.ibm.com
- Original message - From: "Yaron Daniel" Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discussion list Cc: Subject:
Hi Scale-folk.
I have an IBM ESS GH14S building block currently configured for my HPC
workloads.
I've got about 1PB of /scratch filesystem configured in mechanical spindles via
GNR and about 20TB of SSD/flash sitting in another GNR filesystem at the
moment. My intention is to destroy that
Hi Bob, so Simon has hit the nail on the head.
So it’s a challenge; we used dcp with multiple parallel threads per NSD with
mmdsh, on 2PB and millions of files. It’s worth a test as it does look after
xattribs, but test it.
See https://github.com/hpc/dcp
Test the preserve:
-p, --preserve
Hello to everyone,
Here at PSI we're observing something that in principle seems strange (at least
to me).
We run a Java application writing to disk by means of a standard
AsynchronousFileChannel, whose details I do not know.
There are two instances of this application: one runs on a node writing
Hi
You can also use Aspera today, which will replicate GPFS extended attributes.
Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and
Sharing Files Globally
http://www.redbooks.ibm.com/redpieces/abstracts/redp5527.html?Open
Regards
Yaron Daniel
94 Em Ha'Moshavot Rd
Hi
I used arsync in the past; it was used for SONAS, I think.
AFM doesn’t work well if you have dependent filesets, though, which we did have
for quota purposes.
Simon
From: on behalf of "y...@il.ibm.com"
Reply-To: "gpfsug-discuss@spectrumscale.org"
Date: Wednesday, 6 March 2019 at 09:01
To: "gpfsug-discuss@spectrumscale.org"
Subject: Re:
Hi
What permissions do you have? Do you have only POSIX, or also SMB attributes?
If you have only POSIX attributes, you can do the following:
- rsync (which will work on different filesets/directories in parallel)
- AFM (but if you need rollback, it will be problematic)
Regards
Yaron