On Wed, 16 Feb 2000, Dan Nelson wrote:
:In the last episode (Feb 16), Greg Lehey said:
: On Tuesday, 15 February 2000 at 3:40:58 -0600, Joe Greco wrote:
:
: Dunno how many terabyte filesystem folks are out there.
:
: None, by the looks of it.
:
:Possibly no FreeBSD folks, but on Solaris,
On Tue, 15 Feb 2000, John Milford wrote:
I will assert that it is insanity to build and use a 1TB UFS
for small files (~ 2.5e8 inodes or 32GB) at least with the current
technology. Maybe I am wrong, if anyone thinks so feel free to tell
It has to be said that whilst reading this
Greg Lehey wrote:
On Tuesday, 15 February 2000 at 3:40:58 -0600, Joe Greco wrote:
So I wanted to vinum my new 1.9TB of disks together just for chuckles, and
it went OK up to the newfs..
S play.p0.s0  State: up  PO: 0 B  Size: 46 GB
S play.p0.s1
On Tue, Feb 15, 2000 at 01:25:24PM -0800, Mike Smith wrote:
The scary thing about this posting is that Joe was able to construct his
1TB+ filesystem with *ONLY* 37 hard drives.
...
I think you'll
continue to see a move towards some sort of "storage appliance" for various
In the last episode (Feb 16), Greg Lehey said:
On Tuesday, 15 February 2000 at 3:40:58 -0600, Joe Greco wrote:
Dunno how many terabyte filesystem folks are out there.
None, by the looks of it.
Possibly no FreeBSD folks, but on Solaris, VXFS scales very well to
large volumes. We've got
On Tue, 15 Feb 2000, Matthew Dillon wrote:
Not with LVD. The whole point was to be able to have longer SCSI
busses.
LVD allows SCSI bus cable lengths up to 25 meters. With 16 devices
the limit is 12 meters. That's about 39 feet, folks!
Which isn't really that much, once you
On Tuesday, 15 February 2000 at 16:12:36 -0800, Matthew Dillon wrote:
sure your amount of metadata that needs checking is reasonably low.
Also, it seems like 64 bit processors will be in use before 1 TB
filesystems are common. Won't the filesystem need to be 64-bitted for
that?
I would
:I think that at some point we'll need block numbers that no longer
:fit in 32 bits. I'd rather we went to byte offsets than block
:numbers, though.
:
:Greg
Good point. When we are faced with having to move from 32 to 64 bit
block numbers, I agree completely that
:S play.p0.s0  State: up  PO: 0 B  Size: 46 GB
:S play.p0.s1  State: up  PO: 32 MB  Size: 46 GB
:S play.p0.s2  State: up  PO: 64 MB  Size: 46 GB
:...
:S play.p0.s35  State: up  PO: 1120 MB
Peter Wemm [EMAIL PROTECTED] wrote:
/usr/include/sys/disklabel.h:
    u_int32_t p_size;        /* number of sectors in partition */
newfs.c:
    int fssize;              /* file system size */
..
havelabel:
    if (fssize == 0)
        fssize = pp->p_size;
ie:
The scary thing about this posting is that Joe was able to construct his
1TB+ filesystem with *ONLY* 37 hard drives.
37? 38. And you, a programmer! ;-) And that gets you 1.9TB (disk mfr
counting-wise). I'd have loved to break 2TB but couldn't imagine how to
handle the
On Tue, Feb 15, 2000 at 11:24:30AM -0800, John Milford wrote:
Is there any real interest in moving beyond 1TB? I think that
it would incur a non-trivial overhead as I believe that unsigned ints
would not work and we would be looking at going to 64 bit values. Or
I guess something
:
: The scary thing about this posting is that Joe was able to construct his
: 1TB+ filesystem with *ONLY* 37 hard drives.
:
: 37? 38. And you, a programmer! ;-) And that gets you 1.9TB (disk mfr
: counting-wise). I'd have loved to break 2TB but couldn't imagine how to
:
:Speaking of which, I'd like to compliment you on the overall design of the
:Diablo system. It has scaled very well to handle a hundred million articles
:on-spool.
:
:Dumping, hash table 67108864 entries, record size 28 == :-) :-) :-)
:@268435472
:diload: 104146775/104146944 entries loaded
Matthew Dillon wrote:
:
: ie: there is a signed 32 bit sector count limit. 2^31 == 1TB. It shouldn't
: be too hard to get it to create 2^32 bit (2TB) filesystem though. I'd expect
: there to be more problems than this to bite you though. :-(
:
: 2^31 also happens to be the
On Tuesday, 15 February 2000 at 3:40:58 -0600, Joe Greco wrote:
So I wanted to vinum my new 1.9TB of disks together just for chuckles, and
it went OK up to the newfs..
S play.p0.s0  State: up  PO: 0 B  Size: 46 GB
S play.p0.s1  State: up
:
: And, of course, the reader uses a multi-fork/multi-thread design,
: resulting in an extremely optimal footprint.
:
:I hate your threads. Still trying to see some parts of the bigger
:picture. But it was a reasonable design decision, I'll grant.
heh heh. Be happy, it's second
On Tue, Feb 15, 2000 at 06:27:03PM -0800, John Milford wrote:
Brooks Davis [EMAIL PROTECTED] wrote:
On Tue, Feb 15, 2000 at 11:24:30AM -0800, John Milford wrote:
Is there any real interest in moving beyond 1TB? I think that
it would incur a non-trivial overhead as I believe that
So I wanted to vinum my new 1.9TB of disks together just for chuckles, and
it went OK up to the newfs..
S play.p0.s0  State: up  PO: 0 B  Size: 46 GB
S play.p0.s1  State: up  PO: 32 MB  Size: 46 GB
S play.p0.s2  State: up
Joe Greco wrote:
So I wanted to vinum my new 1.9TB of disks together just for chuckles, and
it went OK up to the newfs..
S play.p0.s0  State: up  PO: 0 B  Size: 46 GB
S play.p0.s1  State: up  PO: 32 MB  Size: 46 GB
[..]
S
: ie: there is a signed 32 bit sector count limit. 2^31 == 1TB. It shouldn't
: be too hard to get it to create 2^32 bit (2TB) filesystem though. I'd expect
: there to be more problems than this to bite you though. :-(
:
: 2^31 also happens to be the mmap() file offset limit FWIW.
2^31
The trick to fsck is that you don't want more inodes than you really need.
Once you get past that, fsck flies. The previous generation of binaries
server, worked on 27 36GB drives split into 10 partitions, designed for
parallelism. Hit RESET and the news filesystems take ~30 seconds to
:
: ie: there is a signed 32 bit sector count limit. 2^31 == 1TB. It shouldn't
: be too hard to get it to create 2^32 bit (2TB) filesystem though. I'd expect
: there to be more problems than this to bite you though. :-(
:
: 2^31 also happens to be the mmap() file offset limit FWIW.
No it
:Speaking of which, I'd like to compliment you on the overall design of the
:Diablo system. It has scaled very well to handle a hundred million articles
:on-spool.
:
:Dumping, hash table 67108864 entries, record size 28 == :-) :-) :-)
:@268435472
:diload: 104146775/104146944 entries
On Tuesday, 15 February 2000 at 3:40:58 -0600, Joe Greco wrote:
So I wanted to vinum my new 1.9TB of disks together just for chuckles, and
it went OK up to the newfs..
S play.p0.s0  State: up  PO: 0 B  Size: 46 GB
S play.p0.s1  State: up  PO:
Personally I think going to 64 bit block numbers is overkill. 32 bits
is plenty (for the next few decades) and, generally, people running
filesystems that large tend to be in the 'fewer larger files' category
rather than the 'billions of tiny files' category, so using a
On Tue, Feb 15, 2000 at 06:27:03PM -0800, John Milford wrote:
Brooks Davis [EMAIL PROTECTED] wrote:
On Tue, Feb 15, 2000 at 11:24:30AM -0800, John Milford wrote:
Is there any real interest in moving beyond 1TB? I think that
it would incur a non-trival overhead as I
Joe Greco [EMAIL PROTECTED] wrote:
Joe seems to want one. This size is certainly within the reach of an
ISP now, and disks just keep getting bigger. My administrative bias is
that partitioning for a reason other than policy should be avoided and
thus I'd love to see filesystem size