On 07/09/2014 06:48 PM, Patrick J. LoPresti wrote:
> On Wed, Jul 9, 2014 at 1:27 PM, Lamar Owen <[email protected]> wrote:
>> I don't recall if I had to specify that option or not with CentOS 5.10:
>> +++++++++++++++++++++++++++
>> [root@backup-rdc ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> ...
>> /dev/mapper/plates-cx3--80
>>                        27T   26T  805G  98% /opt/plates
>> /dev/mapper/vg_opt-lv_backups
>>                       5.8T  5.4T  365G  94% /opt/backups
>> [root@backup-rdc ~]# blkid
> You are getting a little bit lucky, I think...

Perhaps. But I ran into the >16TB problem with the 32-bit install (same
filesystem, by the way: the LVM volume was exported from the 32-bit
install and imported on the 64-bit reinstall) as soon as the 'Used'
column passed 16TB (binary TB...). This is the very filesystem on which
that happened.

> The failure happens when the first 16TB of the block device (as
> opposed to file system) are in use. Since XFS allocates blocks from
> allocation groups all over the disk, it is improbable that the first
> 16TB is ever actually in use until the entire file system fills up....

Hmm, interesting.
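
For what it's worth, here's a rough way to check whether a filesystem
has ever handed out a 64-bit inode number (a sketch, assuming GNU find
and using the /opt/plates mount point from the df output above):

[root@backup-rdc ~]# find /opt/plates -xdev -printf '%i\n' | sort -n | tail -1

If the largest inode number printed is below 4294967296 (2^32), every
inode so far still fits in 32-bit inode space.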

> I swear I am not making up this problem; see e.g.
> http://www.doc.ic.ac.uk/~dcw/xfs_16tb/

Oh, I believe you. Most likely the reason I've not hit it is the file
mix on this filesystem: the number of files is pretty small, but each
file is large (in the GB range per file; they are scanned astronomical
photographic plates in uncompressed FITS format). Thus all the inodes
should fit within the first 1TB of the disk.
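
Back-of-the-envelope for that 1TB figure (assuming the usual 4 KiB
block size and 256-byte inodes, so 16 inodes per block): a 32-bit
inode number spends 4 bits selecting the inode within its block,
leaving 28 bits of block address, and 2^28 blocks of 4 KiB is 1 TiB:

$ echo $(( 2**28 * 4096 / 2**40 )) TiB
1 TiB

The exact split varies with the mkfs parameters, so take that as an
approximation rather than gospel.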

> Anyway, inode64 is the recommended mount option for large XFS file
> systems unless you have some specific legacy need (like exporting via
> NFSv2 to 32-bit Solaris... guess how I know)

Oh, I believe you. It may be worthwhile for me to add that mount
option; it can't hurt. It's just that the 16TB issue I hit was a
different one: with the 32-bit kernel I hit the 4K stack size issue,
and that went away with the 64-bit kernel.
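
For the archives, the change itself is just a mount option. A sketch
using the device and mount point from the df output above; note that
on kernels of this vintage inode64 generally has to be applied with a
full umount/mount cycle rather than a remount:

[root@backup-rdc ~]# umount /opt/plates
[root@backup-rdc ~]# mount -o inode64 /dev/mapper/plates-cx3--80 /opt/plates

or persistently via /etc/fstab:

/dev/mapper/plates-cx3--80  /opt/plates  xfs  inode64  0 0

Existing inodes stay where they are; inode64 only changes where new
inodes may be allocated from then on.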