On 2010-02-10, at 17:29, David Simas wrote:
> On Wed, Feb 10, 2010 at 02:41:55PM -0700, Andreas Dilger wrote:
>>
>> - primarily, the upstream e2fsprogs does not yet have full support
>>   for >16TB filesystems, and while experimental patches exist there
>>   are still bugs being found occasionally in that code
>> - there is a certain amount of testing that we need to do before we
>>   can say that Lustre supports that configuration
>
> We are interested in testing OSTs larger than 16 TB.  I found a
> public repository for 64-bit e2fsprogs at
>
>     git://git.kernel.org/pub/scm/fs/ext2/val/e2fsprogs.git
>
> Is this where I'd find the patches you refer to?
This is an outdated version of the 64-bit patches.  The updated 64-bit
patches are part of the main e2fsprogs git repo at:

    http://repo.or.cz/w/e2fsprogs.git

This has been in a state of flux since 1.41.10 was released, since the
64-bit patches are slated to be part of the 1.42 release.  I believe
that most of the 64-bit patches are in the "pu" branch (proposed
updates).

> Would I apply them against the 1.8.2 distribution's e2fsprogs
> sources?

For 64-bit testing purposes, I think you can use the upstream 64-bit
mke2fs without any of the Lustre-specific changes.  Those are mostly
related to making e2fsck more robust, and to adding support for lfsck
and the like.

>> That said, with 1.8.2 it is still possible to format the filesystem
>> with the experimental 64-bit e2fsprogs, mount the OSTs with
>> "-o force_over_16tb", and test this out yourselves.  Feedback is of
>> course welcome.  I would suggest running "llverfs" on the mounted
>> Lustre filesystem (or another tool that can verify, after the fact,
>> the data being written) to completely fill an OST, probably
>> unmount/remount it to clear any cache, and then read the data back
>> to ensure that you are getting the correct data.  Running
>> "e2fsck -f" on the OST would also help verify the on-disk
>> filesystem structure.
>>
>> At some point we will likely conduct this same testing and
>> "officially" support this configuration, but that wasn't done for
>> 1.8.2.  At some point the e2fsck overhead of a large ext4/ldiskfs
>> filesystem becomes too high to support huge configurations (e.g.
>> larger than, say, 128TB, if even that).  While ext4 and e2fsprogs
>> have gotten a lot of improvements to speed up e2fsck time, there is
>> a limit to what can be done.
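For anyone scripting the read-back check, here is a minimal sketch of
the write-then-verify idea (the same basic approach llverfs takes:
tag every block with its own offset so stale or misdirected data is
detectable on read-back).  This is an illustration on a regular file,
not llverfs itself; the file name and block count are arbitrary, and
on a real OST you would fill the mounted filesystem and
unmount/remount between the write and verify passes:

```python
import os
import struct

BLOCK = 4096  # write/verify unit, in bytes

def fill(path, nblocks):
    """Write nblocks blocks, each filled with its own byte offset,
    so any block returned from the wrong location is detectable."""
    with open(path, "wb") as f:
        for i in range(nblocks):
            tag = struct.pack("<Q", i * BLOCK)
            f.write(tag * (BLOCK // 8))
        f.flush()
        os.fsync(f.fileno())

def verify(path, nblocks):
    """Read every block back and check its embedded offset tag.
    Returns the list of byte offsets that did not match."""
    bad = []
    with open(path, "rb") as f:
        for i in range(nblocks):
            expect = struct.pack("<Q", i * BLOCK) * (BLOCK // 8)
            if f.read(BLOCK) != expect:
                bad.append(i * BLOCK)
    return bad

if __name__ == "__main__":
    fill("ost_test.dat", 256)
    # On a real OST: unmount/remount here to drop any cached data.
    print("bad blocks:", verify("ost_test.dat", 256))
    os.remove("ost_test.dat")
```

The offset tag is what makes the check "post-facto": a pattern of
constant data would not catch blocks served from the wrong location,
whereas a per-block offset will.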
>>>> -----Original Message-----
>>>> From: [email protected]
>>>> [mailto:[email protected]] On Behalf Of
>>>> Andreas Dilger
>>>> Sent: Tuesday, February 09, 2010 7:13 PM
>>>> To: Roger Spellman
>>>> Cc: [email protected]
>>>> Subject: Re: [Lustre-discuss] 16T LUNs
>>>>
>>>> On 2010-02-09, at 15:02, Roger Spellman wrote:
>>>>> I see that 1.8.2 supports 16T OSTs for RHEL.
>>>>>
>>>>> Does anyone know when this will be supported for SLES?
>>>>
>>>> No, it will not, because SLES doesn't provide very up-to-date ext4
>>>> code, and a number of 16TB fixes went into ext4 late in the game.
>>>> RHEL5.4, on the other hand, has very up-to-date ext4 code, and the
>>>> RHEL ext4 maintainer is one of the ext4 maintainers himself.
>>>>
>>>>> Is anyone currently using a 16T OST, who could share their
>>>>> experiences?  Is it stable?
>>>>
>>>> I believe a few large customers are already testing/using this.
>>>> I'll let them speak for themselves.
>>>>
>>>> Cheers, Andreas
>>>> --
>>>> Andreas Dilger
>>>> Sr. Staff Engineer, Lustre Group
>>>> Sun Microsystems of Canada, Inc.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
