Hi

That means the 'mv' operation should only succeed if src and dst
are in the same pool, and the client should have the same permissions
on both src and dst.

Do I have the right understanding?
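A minimal sketch of why this matters: on a single filesystem, mv is a
rename(2) - a pure metadata operation, so the inode (and, on CephFS, the
file data sitting in its pool) is never touched or copied. The paths below
are hypothetical, and a local temp directory stands in for a CephFS mount:

```python
import os
import tempfile

# Scratch directory standing in for a CephFS mount (assumption: rename
# semantics on a local POSIX filesystem mirror those on CephFS).
root = tempfile.mkdtemp()
src = os.path.join(root, "parent", "b")
dst = os.path.join(root, "b")
os.makedirs(src)
with open(os.path.join(src, "data"), "w") as f:
    f.write("huge data")

inode_before = os.stat(src).st_ino

# "mv /parent/b /b" is a rename(2): only directory entries change;
# the inode - and on CephFS the objects in its data pool - stay put.
os.rename(src, dst)

inode_after = os.stat(dst).st_ino
print(inode_before == inode_after)  # prints True: same inode, no data copied
```

This is exactly why moving a file into a directory with a different pool
layout leaves the data in the old pool: nothing in rename(2) reads or
rewrites file contents.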

Marc Roos <[email protected]> 于2018年12月11日周二 下午4:53写道:

> >Moving data between pools when a file is moved to a different directory
>
> >is most likely problematic - for example an inode can be hard linked to
>
> >two different directories that are in two different pools - then what
> >happens to the file?  Unix/posix semantics don't really specify a parent
> >directory to a regular file.
> >
> >That being said - it would be really nice if there were a way to move an
> >inode from one pool to another transparently (with some explicit
> >command).  Perhaps locking the inode up for the duration of the move,
> >and releasing it when the move is complete (so that clients that have
> >the file open don't notice any disruptions).  Are there any plans in
> >this direction?
>
> I do also hope so, because this would be the expected behavior for me. I
> ran into this issue accidentally because I had different permissions on
> the pools. How can I explain to a user that if they move files between
> two specific folders they should not mv but cp? For now I have to work
> around this by applying separate mounts.
>
>
> -----Original Message-----
> From: Andras Pataki [mailto:[email protected]]
> Sent: 11 December 2018 00:34
> To: Marc Roos; ceph; ceph-users
> Subject: Re: [ceph-users] move directories in cephfs
>
> Moving data between pools when a file is moved to a different directory
> is most likely problematic - for example an inode can be hard linked to
> two different directories that are in two different pools - then what
> happens to the file?  Unix/posix semantics don't really specify a parent
> directory to a regular file.
>
> That being said - it would be really nice if there were a way to move an
> inode from one pool to another transparently (with some explicit
> command).  Perhaps locking the inode up for the duration of the move,
> and releasing it when the move is complete (so that clients that have
> the file open don't notice any disruptions).  Are there any plans in
> this direction?
>
> Andras
>
> On 12/10/18 10:55 AM, Marc Roos wrote:
> >
> >
> > Except if you have different pools on these directories. Then the data
> > is not moved (copied), which I think should be done. This should be
> > changed, because no one will expect a symlink to the old pool.
> >
> >
> >
> >
> > -----Original Message-----
> > From: Jack [mailto:[email protected]]
> > Sent: 10 December 2018 15:14
> > To: [email protected]
> > Subject: Re: [ceph-users] move directories in cephfs
> >
> > With / mounted somewhere, you can simply "mv" directories around
> >
> > On 12/10/2018 02:59 PM, Zhenshi Zhou wrote:
> >> Hi,
> >>
> >> Is there a way I can move sub-directories outside the parent directory?
> >> For instance, a directory /parent contains 3 sub-directories:
> >> /parent/a, /parent/b, /parent/c. All these directories have huge data
> >> in them. I'm going to move /parent/b to /b. I don't want to copy the
> >> whole directory out because it would be so slow.
> >>
> >> Besides, I heard about cephfs-shell earlier today. I'm wondering which
> >> Ceph version will include this command-line tool. My cluster is Luminous
> >> 12.2.5.
> >>
> >> Thanks
> >>
> >>
> >>
> >> _______________________________________________
> >> ceph-users mailing list
> >> [email protected]
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
