Hi Aaron,
the best way to express this 'need' is to vote and leave comments on the
RFEs.
This is the RFE for GNR as software:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=95090
Everyone who wants this to happen should vote for it and leave comments on
what they expect.
Sve
Thanks, everyone, for your replies! (Quick disclaimer: these opinions are
my own, and not those of my employer or NASA.)
Not knowing what's coming at the NDA session, it seems to boil down to
"it ain't gonna happen" because of:
- Perceived difficulty in supporting whatever creative hardware
so
Lukas,
CES is more than just an export service for SMB; it also supports NFS and Object as export protocols. More specifically, it allows us to move services like NFS out of the kernel and up into user space, which makes the system more secure.
We also use the CES nodes as
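For reference, a quick way to see what a CES node is running; a sketch per
the mmces man page:

  # List CES protocol services (SMB, NFS, OBJ, ...) and their state
  # on all protocol nodes
  mmces service list -a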
The exact behavior depends on the client and the application. I would
suggest explicitly testing protocol failover if that is a concern.
Samba does not support persistent handles, so that would be a completely
new feature.
There is some support available for durable handles, which have weak
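For hands-on failover testing, something like the following should work
(the IP and node names are placeholders; the mmces syntax follows the
Spectrum Scale CES documentation):

  # See which CES IPs are hosted on which protocol nodes
  mmces address list
  # While a client holds files open via 10.0.0.10, move that CES IP to
  # another protocol node and observe how the application reacts
  mmces address move --ces-ip 10.0.0.10 --ces-node protocol-node-2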
On Wed, Sep 28, 2016 at 10:25:01PM +, Andrew Beattie wrote:
>In that scenario, would you not be better off using a native Spectrum
>Scale client installed on the workstation that the video editor is using,
>with a local mapped drive, rather than an SMB share?
>
>This would prevent the scenario you have proposed from occurring.
Are there any presentations available online that provide diagrams of the
directory/file creation process and of modifications, in terms of how
blocks, inodes, indirect blocks, etc. are used? I would guess there are a
few different cases that would need to be shown.
This is the sort of thing tha
Lukas,
In that scenario, would you not be better off using a native Spectrum Scale client installed on the workstation that the video editor is using, with a local mapped drive, rather than an SMB share?
This would prevent the scenario you have proposed from occurring.
Andrew Beattie
Software Def
On Wed, Sep 28, 2016 at 01:33:45PM -0700, Christof Schmitt wrote:
> The client has to reconnect, open the file again and reissue requests that
> have not been completed. Without persistent handles, the main risk is that
> another client can step in and access the same file in the meantime. With
>
I think the guideline for 4K inodes is that roughly 3.5KB of data fits in
the inode, depending on the use of extended attributes.
-Bryan
From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Oesterlin, Robert
Sent: Wednesday, September 28, 2016 1:14 PM
To: gpfsug main discussion list
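A quick way to check this on a live system (the device name gpfs1 and
mount point /gpfs1 are assumptions):

  # Show the inode size the filesystem was created with
  mmlsfs gpfs1 -i
  # A file small enough to be stored in the inode allocates no data
  # blocks, so stat reports 0 blocks for it
  dd if=/dev/zero of=/gpfs1/tiny bs=1 count=3000
  stat -c '%s bytes, %b blocks' /gpfs1/tiny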
The client has to reconnect, open the file again and reissue requests that
have not been completed. Without persistent handles, the main risk is that
another client can step in and access the same file in the meantime. With
persistent handles, access from other clients would be prevented for a
d
On Wed, 28 Sep 2016 10:34:05 -0400
Marc A Kaplan wrote:
> Consider using samples/ilm/mmfind (or mmapplypolicy with a LIST ...
> SHOW rule) to gather the stats much faster. Should be minutes, not
> hours.
>
I'll agree with the policy engine. It runs like a beast if you tune it a
little for nodes
What's the largest file that will fit inside a 1K, 2K, or 4K inode?
Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
Suppose we could "dynamically" change the pool assignment of a file.
How and when would you have us do that? When would that generate
unnecessary, "wasteful" IOPs?
How do we know if, when, or how often you will access a file in the future?
This is similar to other classical caching policies, but there
OKAY, I'll say it again: inodes are PACKED into a single inode file, so
a 4KB inode takes 4KB, REGARDLESS of metadata blocksize. There is no
wasted space.
(Of course, if you have metadata replication = 2, then yes, double that.
And yes, there is overhead for indirect blocks (indices), allocation
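As a back-of-envelope illustration of the packing (the file count is a
made-up example):

  # 100 million files, 4KiB inodes, metadata replication = 2
  echo "$(( 100000000 * 4096 * 2 / 1024**3 )) GiB in the inode file"  # ~762 GiB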
Consider using samples/ilm/mmfind (or mmapplypolicy with a LIST ... SHOW
rule) to gather the stats much faster. Should be minutes, not hours.
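For anyone who wants to try that, a minimal sketch (the rule syntax and
flags follow the mmapplypolicy documentation; the path /gpfs1 is an
assumption):

  # Write every file's size and allocation to a deferred list file
  cat > /tmp/liststats.pol <<'EOF'
  RULE EXTERNAL LIST 'allfiles' EXEC ''
  RULE 'liststats' LIST 'allfiles'
    SHOW( VARCHAR(FILE_SIZE) || ' ' || VARCHAR(KB_ALLOCATED) )
  EOF
  # -I defer writes /tmp/liststats.list.allfiles instead of running
  # any external command against the matched files
  mmapplypolicy /gpfs1 -P /tmp/liststats.pol -I defer -f /tmp/liststats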
/usr/lpp/mmfs/samples/debugtools/filehist
Look at the README in that directory.
Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
From: "greg.lehm...@csiro.au"
Reply-To: gpfsug main discussion list
Date: Wednesday, September 28, 2016 at 2:40 AM
To: "gpfsug-discuss@spectrumscale.org"
On Tue, 27 Sep 2016, Eric Horst wrote:
Thanks, Eric, for the hint.
Shouldn't we, as the users, define a requirement for such a dynamic
heat-assisted file tiering option (DHAFTO), keeping track of which files
have increased heat and triggering a transparent move to a faster tier?
Since I haven't
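Spectrum Scale already exposes building blocks for this; a sketch of how
they fit together (the tunable values are arbitrary examples):

  # Enable per-file heat tracking cluster-wide
  mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10
  # A policy rule run via mmapplypolicy can then weight candidates by
  # heat, for example:
  #   RULE 'promote' MIGRATE FROM POOL 'silver' TO POOL 'gold' WEIGHT(FILE_HEAT)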
I am wondering what people use to produce a file size distribution report
for their filesystems. Has everyone rolled their own, or is there some
go-to app to use?
Cheers,
Greg
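A minimal roll-your-own sketch, if it helps (the mount point /gpfs1 is an
assumption, and a plain find walk is far slower than the policy-engine
approaches mentioned elsewhere in this thread):

  # Bucket file sizes into powers of two and count files per bucket
  find /gpfs1 -type f -printf '%s\n' |
    awk '{ b = 1; while (b < $1) b *= 2; hist[b]++ }
         END { for (b in hist) print b, hist[b] }' | sort -n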
From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Buterbaug