On Sat, Dec 11, 2021 at 9:11 PM [email protected]
<[email protected]> wrote:
>
> Concerning the recovery process for very large files, are there any solutions 
> to alleviate the negative impact? Otherwise we may have to limit file size to 
> an acceptable level ...
>

If you can afford to lose the mtime update, you can modify the MDS code
to skip scanning all the objects.
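For scale: the number of objects the MDS has to probe during recovery grows
linearly with file size. A quick back-of-the-envelope calculation, assuming the
default 4 MiB object size (object size is configurable per file layout, so this
is only the default case):

```python
# Rough estimate of how many RADOS objects the MDS must probe
# while recovering a large CephFS file. Assumes the default
# 4 MiB object size; actual layouts are configurable per file.

def objects_to_probe(file_size_bytes, object_size_bytes=4 * 1024 * 1024):
    """Number of RADOS objects backing a file of the given size."""
    return -(-file_size_bytes // object_size_bytes)  # ceiling division

TiB = 1024 ** 4
print(objects_to_probe(50 * TiB))  # a 50 TiB file -> 13,107,200 objects
```

So a 50 TiB file means the MDS probing on the order of 13 million objects,
which is why recovery can stall stat(2) for a long time.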

>
> ________________________________
> [email protected]
>
>
> From: Yan, Zheng
> Date: 2021-12-11 06:42
> To: [email protected]
> CC: ceph-users
> Subject: Re: [ceph-users] CephFS single file size limit and performance impact
> On Sat, Dec 11, 2021 at 2:21 AM [email protected]
> <[email protected]> wrote:
> >
> > Dear Ceph experts,
> >
> > I have a use case in which the size of a single file may go beyond 
> > 50TB, and would like to know whether CephFS can support a single file 
> > larger than 50TB. Furthermore, if multiple clients, say 50, want to 
> > access (read/modify) this big file, should we expect any performance 
> > issues, e.g. something like a big lock on the whole file? I wonder 
> > whether CephFS supports parallel access, i.e. multiple clients 
> > reading/writing different parts of the same big file...
> >
> > Comments, suggestions, experience are highly appreciated,
> >
>
> The problem is file recovery. (If a client that opened the file in
> write mode disconnects abnormally, the MDS needs to probe the file's
> objects to recover its mtime and size.) Operations such as stat(2)
> hang while the file is in recovery, and for a very large file the
> recovery process may take a long time.
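On the parallel-access part of the question: multiple CephFS clients can write
to disjoint ranges of the same file concurrently, although shared-write
workloads may force the clients into synchronous I/O. A hypothetical sketch of
non-overlapping writes via pwrite(2); the path, chunk size, and offsets are
illustrative, and each writer would normally run on a separate client:

```python
# Hypothetical sketch: two writers updating disjoint ranges of the
# same file, as multiple clients would on a CephFS mount.
# Non-overlapping pwrite()s need no application-level locking.
import os

PATH = "/tmp/bigfile"          # stand-in for a file on a CephFS mount
CHUNK = 4 * 1024 * 1024        # give each writer its own 4 MiB region

def write_region(worker_id, data):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.pwrite(fd, data, worker_id * CHUNK)  # offset = region start
    finally:
        os.close(fd)

for wid in (0, 1):             # in practice these run on separate clients
    write_region(wid, b"x" * 16)
```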
> > Kind regards,
> >
> > Samuel
> >
> >
> >
> > [email protected]
> > _______________________________________________
> > ceph-users mailing list -- [email protected]
> > To unsubscribe send an email to [email protected]
>
