On Sat, 2010-06-05 at 08:04 -0700, Sandon Van Ness wrote:
> Dave Kleikamp wrote:
> > On Fri, 2010-06-04 at 12:20 -0700, Sandon Van Ness wrote:
> > >
> > > Thanks a ton for this! I formatted my 32+ TiB partition without any
> > > issues, and the fsck ran OK as well. Just to test, I went ahead and
> > > tried creating and fscking a 511 TiB sparse file, so it was just
> > > under 512 TiB; the mkfs seemed to run OK, but the fsck failed:
> >
> > Try the attached patch.
> >
> > Shaggy
>
> It looks like that fixed it: it's working with a 511 TiB sparse file,
> but it fails at 512 TiB. Is that expected? It does look like it's
> properly getting past 128 TiB now.

I'd have to investigate it; it could be another bug. If we run into some limitation we weren't aware of, mkfs should enforce the limit so it exits gracefully with a useful error message.
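In the meantime, the failure should be reproducible without dedicating real disk, since a sparse backing file behaves the same for mkfs/fsck. A minimal sketch (the path and the 1 GiB size are placeholders; scale the size up to 512T to hit the reported failure, and the mkfs.jfs/fsck.jfs steps assume jfsutils is installed):

```shell
#!/bin/sh
# Create a sparse image; truncate allocates no data blocks up front.
IMG=/tmp/jfs_test.sparse
truncate -s 1G "$IMG"            # use e.g. -s 512T to reproduce the report
stat -c 'apparent size: %s bytes' "$IMG"

# Format and check the image, if jfsutils is available.
if command -v mkfs.jfs >/dev/null 2>&1; then
    mkfs.jfs -q "$IMG"
    fsck.jfs -f -v "$IMG"
fi
rm -f "$IMG"
```

Only the apparent size counts against the filesystem being tested; the image consumes real space only as metadata gets written into it.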
> 511 TiB:
>
> sabayonx86-64 / # fsck.jfs -f -v /jfs_511tb.sparse
> fsck.jfs version 1.1.14, Jun 5 2010
> processing started: 6/5/2010 7:33:41
> The current device is: /jfs_511tb.sparse
> Open(...READ/WRITE EXCLUSIVE...) returned rc = 0
> Primary superblock is valid.
> The type of file system for the device is JFS.
> Block size in bytes: 4096
> Filesystem size in blocks: 137170518016
> **Phase 0 - Replay Journal Log
> LOGREDO: Log already redone!
> logredo returned rc = 0
> **Phase 1 - Check Blocks, Files/Directories, and Directory Entries
> **Phase 2 - Count links
> **Phase 3 - Duplicate Block Rescan and Directory Connectedness
> **Phase 4 - Report Problems
> **Phase 5 - Check Connectivity
> **Phase 6 - Perform Approved Corrections
> **Phase 7 - Rebuild File/Directory Allocation Maps
> **Phase 8 - Rebuild Disk Allocation Maps
> Filesystem Summary:
> Blocks in use for inodes: 8
> Inode count: 64
> File count: 0
> Directory count: 1
> Block count: 137170518016
> Free block count: 137149538804
> 548682072064 kilobytes total disk space.
> 0 kilobytes in 1 directories.
> 0 kilobytes in 0 user files.
> 0 kilobytes in extended attributes
> 0 kilobytes in access control lists
> 83916848 kilobytes reserved for system use.
> 548598155216 kilobytes are available for use.
> Filesystem is clean.
> All observed inconsistencies have been repaired.
> Filesystem has been marked clean.
> **** Filesystem was modified. ****
> processing terminated: 6/5/2010 7:56:18 with return code: 0 exit code: 0.
>
>
> 512 TiB:
>
> fsck.jfs version 1.1.14, Jun 5 2010
> processing started: 6/5/2010 5:2:35
> The current device is: /jfs_512tb.sparse
> Block size in bytes: 4096
> Filesystem size in blocks: 137438953472
> **Phase 0 - Replay Journal Log
> **Phase 1 - Check Blocks, Files/Directories, and Directory Entries
> Errors detected in the Primary File/Directory Allocation Table.
> Errors detected in the Secondary File/Directory Allocation Table.
> CANNOT CONTINUE.
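As a quick sanity check on those numbers (just arithmetic, not an explanation of the failure): at a 4096-byte block size, 511 TiB and 512 TiB work out to exactly the "Filesystem size in blocks" values printed above, and even the failing size is only 2^37 blocks, well under the 2^40 blocks implied by the documented 4 PiB limit:

```shell
#!/bin/sh
TIB=$((1 << 40))                    # bytes in one TiB
BS=4096                             # block size reported by fsck

echo $(( 511 * TIB / BS ))          # 137170518016, matches the 511 TiB run
echo $(( 512 * TIB / BS ))          # 137438953472 = 2^37, matches the 512 TiB run
echo $(( 4 * (TIB << 10) / BS ))    # 1099511627776 = 2^40 blocks = 4 PiB limit
```

So if the 4 KiB-block limit really is 4 PiB, failing right at 2^37 blocks looks more like an overflow somewhere than a designed limit.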
> I forgot to run the 512 TiB fsck with the verbose option, though, so I
> can re-run it if some more verbosity would help and it wasn't expected
> to fail at 512 TiB. About 60 gigabytes of metadata must be written when
> creating filesystems on sparse files of this size, and the writes are
> not totally sequential (my regular disk churned along at only 20
> megabytes/sec), so I know testing this on a regular drive can be
> extremely slow. The mkfs doesn't take too long on my RAID array,
> though, so I can definitely test some larger sizes (even up to 4 PiB).
> Of course, copying my data back from my OpenSolaris server after
> creating my 36 TiB partition, which kept its speed at 100
> megabytes/sec, didn't help with the mkfs and fsck times:
>
> Device:  rrqm/s  wrqm/s     r/s     w/s   rMB/s   wMB/s avgrq-sz avgqu-sz  await  svctm  %util
> sda        0.00    0.00    0.00    0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
> sda1       0.00    0.00    0.00    0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
> sdc        0.00    0.00  767.50  639.90   92.22   91.61   267.50     3.03   2.15   0.43  59.97
> sdc1       0.00    0.00  767.50  639.90   92.22   91.61   267.50     3.03   2.15   0.43  59.97
> sdb        0.00    0.00    0.00    0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
> sdd        0.00  214.70    3.30  775.90    0.01  107.74   283.21     5.85   7.50   0.26  20.18
> sdd1       0.00  214.70    3.30  775.90    0.01  107.74   283.21     5.85   7.50   0.26  20.18
>
> I know http://jfs.sourceforge.net/project/pub/jfs.pdf says the maximum
> size is 512 TiB when dealing with 512-byte blocks and 4 PB when dealing
> with 4096-byte blocks, so I would think it should be able to do 512
> TiB, since it is using 4096 bytes as the block size.

-- 
Dave Kleikamp
IBM Linux Technology Center
_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
