Hi,

While looking at something else in the documentation, I came across this: https://docs.ceph.com/en/latest/cephfs/administration/#maximum-file-sizes-and-performance

"CephFS enforces the maximum file size limit at the point of appending to files or setting their size. It does not affect how anything is stored. When users create a file of an enormous size (without necessarily writing any data to it), some operations (such as deletes) cause the MDS to have to do a large number of operations to check if any of the RADOS objects within the range that could exist (according to the file size) really existed. The max_file_size setting prevents users from creating files that appear to be eg. exabytes in size, causing load on the MDS as it tries to enumerate the objects during operations like stats or deletes."

Thought it might help.
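
To put the "enumerate the objects" part in perspective: assuming the default 4 MiB CephFS object size, a file that claims to be 1 EiB would span 2^60 / 2^22 = 2^38 (roughly 2.7 x 10^11) potential RADOS objects that the MDS would have to probe when the file is deleted. Hence the cap. The current limit shows up in the filesystem map, e.g. (fs name here is just an example):

    # show the current limit, in bytes
    ceph fs get cephfs | grep max_file_size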

--
Regards,

Frédéric Nass

Direction du Numérique
Sous-Direction Infrastructures et Services
Université de Lorraine.

On 11/12/2020 at 20:41, Paul Mezzanini wrote:
From how I understand it, that setting is a rev limiter to prevent users from
creating huge sparse files and then wasting cluster resources firing off
deletes.

We have ours set to 32T and haven't seen any issues with large files.
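
For reference, 32 TiB works out to 32 * 2^40 = 35184372088832 bytes, so the change would have been something along the lines of (fs name is illustrative):

    # raise the limit, value is in bytes
    ceph fs set cephfs max_file_size 35184372088832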

--
Paul Mezzanini
Sr Systems Administrator / Engineer, Research Computing
Information & Technology Services
Finance & Administration
Rochester Institute of Technology
o:(585) 475-3245 | pfm...@rit.edu


________________________________________
From: Adam Tygart <mo...@ksu.edu>
Sent: Friday, December 11, 2020 1:59 PM
To: Mark Schouten
Cc: ceph-users; Patrick Donnelly
Subject: [ceph-users] Re: CephFS max_file_size

I've had this set to 16TiB for several years now.

I've not seen any ill effects.

--
Adam

On Fri, Dec 11, 2020 at 12:56 PM Patrick Donnelly <pdonn...@redhat.com> wrote:
Hi Mark,

On Fri, Dec 11, 2020 at 4:21 AM Mark Schouten <m...@tuxis.nl> wrote:
There is a default limit of 1TiB for max_file_size in CephFS. I raised
that to 2TiB, but I've now received a request to store a file of up to 7TiB.

I'd expect the limit to be there for a reason, but what is the risk of setting
that value to, say, 10TiB?
There is no known downside. Let us know how it goes!

--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
