Hello Stephan, @all,
I think yes, an RFE is the way to go.
The current behavior really is working as designed. Even though I see your point: currently, a move of a file between filesets is essentially writing a new file and deleting the old one.
So I expect this will always remain the case when moving between different inode spaces (regardless of the storage pool).
For moving files between different, but dependent, filesets (i.e. the same inode space) and within the same storage pool, there might be an alternative way worth evaluating and checking. So yes, we should have an RFE.
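For reference, whether two filesets even share an inode space is visible in the InodeSpace column of mmlsfileset. A minimal check could look like this (the file system name "home" and fileset names "fs_a"/"fs_b" are just placeholders):

# dependent filesets inherit the inode space of their independent parent;
# a metadata-only move could only ever apply if this column matches
mmlsfileset home fs_a,fs_b -L
# the storage pool of an affected file can be verified with:
mmlsattr -L /home/fs_a/somefile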
----- Original message -----
From: Stephan Graf <[email protected]>
Sent by: [email protected]
To: <[email protected]>
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] gpfs filesets question
Date: Mon, Apr 20, 2020 10:42 AM
Hi,
we noticed this behavior when we tried to move HSM-migrated files
between filesets. It causes a recall, which is very annoying when the data
are afterwards stored on the same pools and have to be migrated back to tape.
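For anyone who wants to reproduce this, a rough sketch of what we saw, assuming IBM Spectrum Protect for Space Management (HSM) is installed and the paths are placeholders:

dsmls /gpfs/fs_a/bigfile           # file state shows as migrated
mv /gpfs/fs_a/bigfile /gpfs/fs_b/  # crossing the fileset boundary...
dsmls /gpfs/fs_b/bigfile           # ...and the file is resident again,
                                   # i.e. it was recalled just to be copied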
@IBM: should we open an RFE to address this?
Stephan
On 18.04.2020 at 17:04, Stephen Ulmer wrote:
> Is this still true if the source and target fileset are both in the same
> storage pool? It seems like they could just move the metadata…
> Especially in the case of dependent filesets where the metadata is
> actually in the same allocation area for both the source and target.
>
> Maybe this just doesn’t happen often enough to optimize?
>
> --
> Stephen
>
>
>
>> On Apr 16, 2020, at 12:50 PM, Oesterlin, Robert
>> <[email protected]> wrote:
>>
>> Moving data between filesets is like moving files between file
>> systems. Normally, when you move files between directories, it's a simple
>> metadata operation, but with filesets (dependent or independent) it's a
>> full copy and a delete of the old data.
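>> A quick way to see this for yourself (a sketch with made-up paths) is to
>> compare inode numbers before and after the move:
>> ls -i /gpfs/home/fs_a/testfile     # note the inode number
>> mv /gpfs/home/fs_a/testfile /gpfs/home/fs_b/
>> ls -i /gpfs/home/fs_b/testfile     # a different inode means the data
>>                                    # was rewritten (copy + delete); the
>>                                    # same inode would mean a pure rename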
>> Bob Oesterlin
>> Sr Principal Storage Engineer, Nuance
>> *From:*<[email protected]
>> <mailto:[email protected]>> on behalf of "J.
>> Eric Wonderley" <[email protected] <mailto:[email protected]>>
>> *Reply-To:*gpfsug main discussion list
>> <[email protected]
>> <mailto:[email protected]>>
>> *Date:*Thursday, April 16, 2020 at 11:32 AM
>> *To:*gpfsug main discussion list <[email protected]
>> <mailto:[email protected]>>
>> *Subject:*[EXTERNAL] [gpfsug-discuss] gpfs filesets question
>> I have filesets set up in a filesystem; it looks like this:
>> [root@cl005 ~]# mmlsfileset home -L
>> Filesets in file system 'home':
>> Name         Id  RootInode  ParentId  Created                   InodeSpace  MaxInodes  AllocInodes  Comment
>> root          0          3        --  Tue Jun 30 07:54:09 2015           0  402653184    320946176  root fileset
>> hess          1  543733376         0  Tue Jun 13 14:56:13 2017           0          0            0
>> predictHPC    2    1171116         0  Thu Jan  5 15:16:56 2017           0          0            0
>> HYCCSIM       3  544258049         0  Wed Jun 14 10:00:41 2017           0          0            0
>> socialdet     4  544258050         0  Wed Jun 14 10:01:02 2017           0          0            0
>> arc           5    1171073         0  Thu Jan  5 15:07:09 2017           0          0            0
>> arcadm        6    1171074         0  Thu Jan  5 15:07:10 2017           0          0            0
>> I believe these are dependent filesets, dependent on the root
>> fileset. Anyhow, a user wants to move a large amount of data from one
>> fileset to another. Would this be a metadata-only operation? He has
>> attempted to move a small amount of data and has noticed some thrashing.
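>> One way to confirm what mv does under the hood (a sketch, paths invented)
>> is to trace its syscalls; a rename() across a fileset boundary should fail
>> with EXDEV, after which mv falls back to copying and unlinking:
>> strace -e trace=rename,renameat,renameat2 mv /home/hess/f /home/arc/
>> # expected: renameat2(...) = -1 EXDEV (Invalid cross-device link),
>> # after which mv silently copies the data and removes the source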
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
