Re: [gpfsug-discuss] Question about inodes increase

2019-03-06 Thread Ryan Novosielski
They hadn't asked about that, but neither is the process of raising the maximum disruptive, which could be what they're really asking about (there might be some momentary performance hit; I can't recall, but I don't believe it's significant if so).
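For anyone looking for the mechanics, a minimal sketch of the commands involved (file system and fileset names are hypothetical; both operations are done online):

  # raise the file-system-wide maximum, optionally preallocating some inodes
  mmchfs fs0 --inode-limit 20000000:5000000

  # raise the limit for an independent fileset
  mmchfileset fs0 projects --inode-limit 5000000:1000000

  # check current inode allocation and usage
  mmdf fs0 -F
  mmlsfileset fs0 -L -i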

Re: [gpfsug-discuss] SLURM scripts/policy for data movement into a flash pool?

2019-03-06 Thread Alex Chekholko
Hi, I have tried this before and I would like to temper your expectations. If you use a placement policy to allow users to write any files into your "small" pool (e.g. by directory), they will get E_NOSPC when your small pool fills up. And they will be confused because they can't see the pool
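To make that concrete, a minimal placement-policy sketch along these lines (pool, fileset and file system names are hypothetical); the LIMIT clause lets placement fall through to the big pool instead of returning E_NOSPC once the flash pool is nearly full:

  /* placement.pol */
  RULE 'toFlash' SET POOL 'flash' LIMIT (95) FOR FILESET ('fastscratch')
  RULE 'default' SET POOL 'data'

  # validate, then install the placement policy
  mmchpolicy fs0 placement.pol -I test
  mmchpolicy fs0 placement.pol -I yes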

[gpfsug-discuss] Fw: Question about inodes increase - how to increase non-disruptively - orig question by Mladen Portak on 3/6/19 - 09:46 GMT

2019-03-06 Thread John M Sing
Hi, all, Mladen, (This is my first post to the GPFSug-discuss list. I am an IBMer, the IBM worldwide technical support Evangelist for Spectrum Scale/ESS, based in Florida. Apologies if my attachment URL is not permitted or if I did not reply properly to tie my reply to the original

Re: [gpfsug-discuss] Question about inodes increase - how to increase non-disruptively - orig question by Mladen Portak on 3/6/19 - 09:46 GMT

2019-03-06 Thread John M Sing
Upon further thought, it occurs to me that Spectrum Scale V5's introduction of variable sub-blocks must by necessity have changed the inode calculation that I describe below. I would be interested to know exactly how, in Spectrum Scale V5-formatted file systems, one may need to change the
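For anyone comparing their own file systems, the relevant geometry can be read back with mmlsfs (file system name hypothetical):

  mmlsfs fs0 -B                          # block size
  mmlsfs fs0 -f                          # minimum fragment (sub-block) size
  mmlsfs fs0 -i                          # inode size in bytes
  mmlsfs fs0 --subblocks-per-full-block  # sub-blocks per block on V5-format file systems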

Re: [gpfsug-discuss] gpfsug-discuss mmxcp

2019-03-06 Thread Marc A Kaplan
source files. -a : pass -a to cp  -v : pass -v to cp

Re: [gpfsug-discuss] Follow-up: migrating billions of files

2019-03-06 Thread Stephen Ulmer
In the case where tar -C doesn't work, you can always use a subshell (I do this regularly): tar -cf - . | ssh someguy@otherhost "(cd targetdir; tar -xvf - )" Only use -v on one end. :) Also, for parallel work that's not designed that way, don't underestimate the -P option to GNU and BSD
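As a sketch of the parallel variant (host and paths are hypothetical), GNU xargs -P can drive one tar pipe per top-level directory:

  cd /gpfs/src
  ls -d */ | xargs -P 8 -I{} sh -c \
      'tar -cf - "{}" | ssh someguy@otherhost "cd /gpfs/target && tar -xf -"'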

Re: [gpfsug-discuss] gpfsug-discuss mmxcp

2019-03-06 Thread Edward Boyd

Re: [gpfsug-discuss] Question about inodes increase

2019-03-06 Thread Sven Oehme
While Fred is right and in most cases you shouldn't see this, under heavy burst create workloads before 5.0.2 you could even trigger out-of-space errors even if you have plenty of space in the filesystem (very hard to reproduce, so unlikely to hit for a normal end user). To address the issues there have

Re: [gpfsug-discuss] Migrating billions of files? mmfind ... mmxcp

2019-03-06 Thread Simon Thompson
Last time this was mentioned, it doesn't do ACLs? Simon

Re: [gpfsug-discuss] Migrating billions of files? mmfind ... mmxcp

2019-03-06 Thread Marc A Kaplan
mmxcp may be in samples/ilm; if not, perhaps we can put it on an approved file sharing service ... + mmxcp script, for use with mmfind ... -xargs mmxcp ... which makes parallelized file copy relatively easy and super fast! Usage: /gh/bin/mmxcp -t target -p strip_count source_pathname1
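A hedged sketch of how the pieces combine (paths are hypothetical; mmfind and mmxcp are samples, so adjust to wherever you keep them):

  # copy every file under /gpfs/fs0/projects to /gpfs/fs1/projects in parallel,
  # stripping the first two leading components of each source pathname
  ./mmfind /gpfs/fs0/projects -type f -xargs ./mmxcp -t /gpfs/fs1/projects -p 2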

Re: [gpfsug-discuss] Memory accounting for processes writing to GPFS

2019-03-06 Thread Tomer Perry
It might be the case that AsynchronousFileChannel is actually doing mmap access to the files. Thus, the memory management will be completely different with GPFS compared to a local fs. Regards, Tomer Perry, Scalable I/O Development (Spectrum Scale)

Re: [gpfsug-discuss] Follow-up: migrating billions of files

2019-03-06 Thread Uwe Falke
Hi, in that case I'd open several tar pipes in parallel, maybe using carefully selected directories, like tar -c <dir> | ssh <target_host> "tar -x". I am not quite sure whether "-C /" for tar works here ("tar -C / -x"), but something along these lines might be a good, efficient method. target_hosts should be all nodes
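A rough bash sketch of that idea (directory list and target hosts are hypothetical), distributing one tar pipe per directory round-robin across the target nodes:

  hosts=(target1 target2 target3 target4)
  i=0
  for d in /gpfs/src/dir*; do
      h=${hosts[$((i % ${#hosts[@]}))]}
      tar -C "$d" -cf - . | \
          ssh "$h" "mkdir -p /gpfs/dst/$(basename "$d") && tar -C /gpfs/dst/$(basename "$d") -xf -" &
      i=$((i+1))
  done
  wait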

Re: [gpfsug-discuss] Memory accounting for processes writing to GPFS

2019-03-06 Thread Jim Doherty
For any process with a large number of threads, the VM size has become an imaginary number ever since the glibc change to allocate a heap per thread. I look at /proc/$pid/status to find the memory used by a proc: RSS + Swap + kernel page tables. Jim
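For anyone wanting to do the same check, a tiny sketch (the PID is hypothetical):

  # resident memory, swapped-out memory and kernel page-table size for one process
  grep -E 'VmRSS|VmSwap|VmPTE' /proc/12345/status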

[gpfsug-discuss] Follow-up: migrating billions of files

2019-03-06 Thread Oesterlin, Robert
Some of you had questions about my original post. More information. Source:
- Files are straight GPFS/POSIX - no extended NFSv4 ACLs
- A solution that requires $'s to be spent on software (i.e., Aspera) isn't a very viable option
- Both source and target clusters are in the same DC
- Source is

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Andrew Beattie
Yes, and it's licensed based on the size of your network pipe. Andrew Beattie

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Frederick Stock
Does Aspera require a license? Fred

[gpfsug-discuss] SLURM scripts/policy for data movement into a flash pool?

2019-03-06 Thread Jake Carroll
Hi Scale-folk. I have an IBM ESS GH14S building block currently configured for my HPC workloads. I've got about 1PB of /scratch filesystem configured on mechanical spindles via GNR and about 20TB of SSD/flash sitting in another GNR filesystem at the moment. My intention is to destroy that
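One possible shape for the SLURM-driven side, purely as a hedged sketch (pool names, paths, node class and SLURM variables are assumptions): a prolog or job script applies a small migration policy only to the job's working directory to pull it into the flash pool:

  # stage.pol
  RULE 'toFlash' MIGRATE FROM POOL 'data' TO POOL 'flash'

  # in a SLURM prolog or job script
  mmapplypolicy /gpfs/scratch/$SLURM_JOB_USER/$SLURM_JOB_ID \
      -P stage.pol -I yes -N nsdNodes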

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Chris Schlipalius
Hi Bob, Simon has hit the nail on the head. It's a challenge; we used dcp with multiple parallel threads per NSD with mmdsh (2PB and millions of files). It's worth a test as it does look after xattrs, but test it. See https://github.com/hpc/dcp and test the preserve option: -p, --preserve
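If it helps, a hedged sketch of the sort of invocation involved (rank count and paths are hypothetical; dcp runs under MPI):

  mpirun -np 64 dcp -p /gpfs/src /gpfs/dst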

[gpfsug-discuss] Memory accounting for processes writing to GPFS

2019-03-06 Thread Dorigo Alvise (PSI)
Hello to everyone, Here at PSI we're observing something that in principle seems strange (at least to me). We run a Java application writing to disk by means of a standard AsynchronousFileChannel, whose internal details I do not know. There are two instances of this application: one runs on a node writing

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Yaron Daniel
Hi, you can also use Aspera today, which will replicate GPFS extended attributes. Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally http://www.redbooks.ibm.com/redpieces/abstracts/redp5527.html?Open Regards, Yaron Daniel

Re: [gpfsug-discuss] suggestions for copying one GPFS file system into another

2019-03-06 Thread Yaron Daniel
Hi, you can also use Aspera today, which will replicate GPFS extended attributes. Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally http://www.redbooks.ibm.com/redpieces/abstracts/redp5527.html?Open I used arsync in the past (it was used for SONAS), I think

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Simon Thompson
AFM doesn't work well if you have dependent filesets though, which we did for quota purposes. Simon

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Yaron Daniel
Hi, what permissions do you have? Do you have only POSIX, or also SMB attributes? If only POSIX attributes, you can do the following: - rsync (which will work on different filesets/directories in parallel) - AFM (but if you need rollback, it will be problematic) Regards, Yaron
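For the POSIX-only case, a minimal rsync sketch (paths are hypothetical); run one such rsync per fileset or top-level directory in parallel, and add -A/-X only if you later find that ACLs or xattrs must be preserved:

  rsync -aH --numeric-ids /gpfs/src/fileset1/ /gpfs/dst/fileset1/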