Stupid me. The space between the {} and \; is significant.
/usr/local/bin/mmfind /hpc/bscratch -type f -exec /bin/ls {} \;
Still would be nice to have the documentation clarified please.
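For the record, the two forms side by side (same path as above); only the spacing differs, and only the first one worked for me:
  /usr/local/bin/mmfind /hpc/bscratch -type f -exec /bin/ls {} \;    # space before \; - works
  /usr/local/bin/mmfind /hpc/bscratch -type f -exec /bin/ls {}\;     # no space - did not work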
I do not think AFM is intended to solve the problem you are trying to
solve. If I understand your scenario correctly, you state that you are
placing metadata on NL-SAS storage. If that is true, that would not be
wise, especially if you are going to do many metadata operations. I
suspect your
Hi All,
I am trying to figure out a GPFS tiering architecture with flash storage on the
front end and near-line storage on the back end, for Supercomputing.
The back-end storage will be GPFS on near-line disks, about 8-10 PB. The
back-end storage will/can be tuned to give out large streaming
Hi all,
I wanted to know how mmap interacts with the GPFS pagepool with respect to
the filesystem block size.
Does the efficiency depend on the mmap read size and the block size of the
filesystem, even if all the data is cached in the pagepool?
GPFS 4.2.3.2 and CentOS 7.
Here is what I observed:
I
My apologies for not being more clear on the flash storage pool. I meant
that this would be just another GPFS storage pool in the same cluster, so
no separate AFM cache cluster. You would then use the file heat feature
to ensure more frequently accessed files are migrated to that all flash
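Roughly, that would look something like the following (the pool names 'data' and 'flash' and the tuning numbers are only placeholders; check the mmchconfig and mmapplypolicy documentation for your release):
  # turn on file heat tracking
  mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10
  # heat.pol - move the hottest files into the flash pool, stop when it is 90% full
  RULE 'toflash' MIGRATE FROM POOL 'data' TO POOL 'flash' WEIGHT(FILE_HEAT) LIMIT(90)
  # apply (or schedule) the policy
  mmapplypolicy <filesystem> -P heat.pol -I yes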
Hi Lohit,
I am working with Ray on an mmap performance improvement right now, which
most likely has the same root cause as yours, see -->
http://gpfsug.org/pipermail/gpfsug-discuss/2018-January/004411.html
The thread above is silent after a couple of back-and-forth exchanges, but Ray and I
have active
Thank you.
I am sorry if I was not clear, but the metadata pool is all on SSDs in the GPFS
clusters that we use. It's just the data pool that is on near-line rotating
disks.
I understand that AFM might not be able to solve the issue, and I will try and
see if file heat works for migrating the
Thanks a lot, Sven.
I was trying out all the scenarios that Ray mentioned, with respect to LROC and
an all-flash GPFS cluster, and nothing seemed to be effective.
As of now, we are deploying a new test cluster on GPFS 5.0 and it would be good
to know the respective features that could be enabled and
For a while I've been exploring the idea of writing a SLURM SPANK plugin
to allow users to dynamically change the pagepool size on a node. Every
now and then we have some users who would benefit significantly from a
much larger pagepool on compute nodes, but by default we keep it on the
smaller
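For context, the knob such a plugin would ultimately drive is the pagepool setting via mmchconfig, something like the line below; the size and node name are only examples, and whether the change can take effect immediately with -i depends on your environment and Scale release:
  mmchconfig pagepool=16G -N node001 -i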
Thanks, I will try the file heat feature, but I am really not sure if it would
work, since the code can access cold files too, and not necessarily
recently accessed/hot files.
With respect to LROC, let me explain below.
The use case is this:
The code initially reads headers (small
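For anyone following along, LROC is typically enabled by handing a node-local flash device to GPFS as a localCache NSD, roughly like this (device, NSD, and node names are made up; see the mmcrnsd documentation for the exact stanza format):
  # lroc.stanza on the client node
  %nsd: device=/dev/nvme0n1 nsd=lroc_node01 servers=node01 usage=localCache
  mmcrnsd -F lroc.stanza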
This is also interesting (although I don't know what it really means).
Looking at pmap run against mmfsd I can see what happens after each step:
# baseline
7fffe4639000 59164K 0K 0K 0K 0K ---p [anon]
7fffd837e000 61960K 0K 0K 0K 0K ---p [anon]
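(Output like the above presumably comes from running some variant of pmap against the mmfsd process, e.g.:
  pmap -x $(pidof mmfsd)
though the exact invocation was not shown in the original post.)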
Leaving aside the -exec option, and whether you choose classic find or mmfind,
why not just use the -ls option - same output, less overhead...
mmfind pathname -type f -ls
From: John Hearns
To: gpfsug main discussion list
More recent versions of mmfind support an -xargs option... Run mmfind
--help and see:
-xargs [-L maxlines] [-I rplstr] COMMAND
Similar to find ... | xargs [-L x] [-I r] COMMAND
but COMMAND executions may run in parallel. This is preferred
to -exec. With -xargs mmfind will
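So the earlier -exec example could presumably be rewritten along these lines (untested):
  mmfind /hpc/bscratch -type f -xargs /bin/ls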