Hans Reiser wrote:
Pavel Machek wrote:
Yes, I'm afraid redundancy/checksums kill write speed,
they kill write speed to cache, but not to disk. Our compression
plugin is faster than the uncompressed plugin.
Regarding cache, do we do any sort of consistency checking for RAM, or
do w
Jan Engelhardt wrote:
Yes, it looks like the business of the node plugin, but AFAIK you
objected to such checks:
Did I really? Well, I think that letting users choose whether to
checksum or not is reasonable. I personally would
skip the checksum on my computer, but ot
>> Yes, it looks like the business of the node plugin, but AFAIK you
>> objected to such checks:
>
>Did I really? Well, I think that letting users choose whether to
>checksum or not is reasonable. I personally would
>skip the checksum on my computer, but others
>
>It
Pavel Machek wrote:
>On Wed 2006-08-09 02:37:45, Hans Reiser wrote:
>
>
>>Pavel Machek wrote:
>>
>>
>>
>>>Yes, I'm afraid redundancy/checksums kill write speed,
>>>
>>>
>>>
>>they kill write speed to cache, but not to disk. Our compression
>>plugin is faster than the uncompressed p
On Wed 2006-08-09 02:37:45, Hans Reiser wrote:
> Pavel Machek wrote:
>
> >
> >
> >Yes, I'm afraid redundancy/checksums kill write speed,
> >
> they kill write speed to cache, but not to disk. Our compression
> plugin is faster than the uncompressed plugin.
Yes, you can get clever. But you
Edward Shishkin wrote:
> Hans Reiser wrote:
>
>> Edward Shishkin wrote:
>>
>>
How about we switch to ECC, which would help with bit rot, not sector
loss?
>>>
>>>
>>>
>>> Interesting aspect.
>>>
>>> Yes, we can implement ECC as a special crypto transform that inflates
>>> data. As I m
Pavel Machek wrote:
>
>
>Yes, I'm afraid redundancy/checksums kill write speed,
>
they kill write speed to cache, but not to disk. Our compression
plugin is faster than the uncompressed plugin.
Hi!
> >> Most users not only cannot patch a kernel, they don't know what a patch
> >> is. It most certainly does.
> >
> >
> > obviously you can provide complete kernels, including precompiled ones.
> > Most distros have a yum or apt or similar tool to suck down packages,
> > it's trivial for u
Matthias Andree wrote:
[stripping Cc: list]
On Thu, 03 Aug 2006, Edward Shishkin wrote:
What kind of forward error correction would that be,
Actually we use checksums, not ECC. If the checksum is wrong, then run
fsck - it will remove the whole disk cluster, which represents 64K of
data.
Well,
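The scheme Edward describes above (a checksum per 64K disk cluster; on mismatch, fsck can only discard the whole cluster) can be sketched in a few lines. This is an illustrative toy model, not reiser4's actual code; the CRC32 choice, the header layout, and the helper names are my assumptions:

```python
import zlib

CLUSTER_SIZE = 64 * 1024  # the thread's 64K disk cluster

def store_cluster(data: bytes) -> bytes:
    """Prepend a CRC32 of the payload (a checksum, not ECC)."""
    assert len(data) <= CLUSTER_SIZE
    return zlib.crc32(data).to_bytes(4, "little") + data

def load_cluster(stored: bytes):
    """Return the payload, or None on checksum mismatch.

    A checksum only *detects* corruption; it cannot repair it, which is
    why fsck's only option here is to drop the cluster -- losing up to
    64K of data for a single bad bit.
    """
    crc = int.from_bytes(stored[:4], "little")
    data = stored[4:]
    return data if zlib.crc32(data) == crc else None

good = store_cluster(b"hello " * 1000)
assert load_cluster(good) == b"hello " * 1000
bad = good[:100] + bytes([good[100] ^ 0x01]) + good[101:]  # flip one bit
assert load_cluster(bad) is None
```

This is exactly the detect-but-not-correct trade-off Matthias questions next: forward error correction would add redundant bits that allow repair, at a further cost in space and write bandwidth.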
[EMAIL PROTECTED] wrote:
It seems that finding all the bits and pieces to do ext3 on-line
expansion has been a study in obfuscation. Somewhat surprising since
this feature is a must for enterprise class storage management.
Not really. Having people who can dig through the obfuscation is also
Hi!
> > Using guilt as an argument in a technical discussion is a flashing red
> > sign that says "I have no technical rebuttal"
>
> Wow, that is really nervy. Let's recap this all:
>
> * reiser4 has a 2x performance advantage over the next fastest FS
> (ext3), and when compression ships in a m
>> With the latest e2fsprogs and 2.6 kernels, the online resizing
>> support has been merged in, and as long as the filesystem was
>> created with space reserved for growing the filesystem (which is now
>> the default, or if the filesystem has the off-line prepration step
>> ext2prepare run on it),
On Jul 31, 3:41pm, Theodore Tso wrote:
} Subject: Re: the " 'official' point of view" expressed by kernelnewbies.or
> On Mon, Jul 31, 2006 at 06:54:06PM +0200, Matthias Andree wrote:
> > > > This looks rather like an education issue rather than a technical limit.
> > >
> > > We aren't talking ab
[stripping Cc: list]
On Thu, 03 Aug 2006, Edward Shishkin wrote:
> >What kind of forward error correction would that be,
>
> Actually we use checksums, not ECC. If the checksum is wrong, then run
> fsck - it will remove the whole disk cluster, which represents 64K of
> data.
Well, that's quite a diff
Pavel Machek wrote:
On Tue 01-08-06 11:57:10, David Masover wrote:
Horst H. von Brand wrote:
Bernd Schubert <[EMAIL PROTECTED]> wrote:
While filesystem speed is nice, it also would be great
if reiser4.x would be very robust against any kind of
hardware failures.
Can't have both.
Why not? I
On Tue 01-08-06 11:57:10, David Masover wrote:
> Horst H. von Brand wrote:
> >Bernd Schubert <[EMAIL PROTECTED]> wrote:
>
> >>While filesystem speed is nice, it also would be great
> >>if reiser4.x would be very robust against any kind of
> >>hardware failures.
> >
> >Can't have both.
>
> Why n
Hans Reiser wrote:
Edward Shishkin wrote:
How about we switch to ECC, which would help with bit rot, not sector
loss?
Interesting aspect.
Yes, we can implement ECC as a special crypto transform that inflates
data. As I mentioned earlier, it is possible via translation of key
offsets with s
On Sun, 6 Aug 2006, Matthias Andree wrote:
> (changing subject to catch Ted's attention)
> Bodo Eggert wrote on 2006-08-05:
> > - I have an ext3 that can't be fixed by e2fsck (see below). fsck will fix
> > some errors, trash some files and leave a fs waiting to throw the same
> > error again
(changing subject to catch Ted's attention)
Bodo Eggert wrote on 2006-08-05:
> - I have an ext3 that can't be fixed by e2fsck (see below). fsck will fix
> some errors, trash some files and leave a fs waiting to throw the same
> error again. I'm fixing it using mkreiserfs now.
If such a bug
Jan-Benedict Glaw <[EMAIL PROTECTED]> wrote:
> On Mon, 2006-07-31 12:17:12 -0700, Clay Barnes <[EMAIL PROTECTED]> wrote:
>> On 20:43 Mon 31 Jul , Jan-Benedict Glaw wrote:
>> > On Mon, 2006-07-31 20:11:20 +0200, Matthias Andree <[EMAIL PROTECTED]>
>> > > Jan-Benedict Glaw wrote on 2006-07-31:
Antonio Vargas wrote:
> On 8/4/06, Edward Shishkin <[EMAIL PROTECTED]> wrote:
>
>> Hans Reiser wrote:
>> > Edward Shishkin wrote:
>> >
>> >
>> >>Matthias Andree wrote:
>> >>
>> >>
>> >>>On Tue, 01 Aug 2006, Hans Reiser wrote:
>> >>>
>> >>>
>> >>>
>> You will want to try our compression plugin,
Edward Shishkin wrote:
>
>>
>>
>> How about we switch to ECC, which would help with bit rot, not sector
>> loss?
>
>
> Interesting aspect.
>
> Yes, we can implement ECC as a special crypto transform that inflates
> data. As I mentioned earlier, it is possible via translation of key
> offsets with s
Horst H. von Brand wrote:
Vladimir V. Saveliev <[EMAIL PROTECTED]> wrote:
On Tue, 2006-08-01 at 17:32 +0200, Łukasz Mierzwa wrote:
What fancy (beside cryptocompress) does reiser4 do now?
it is supposed to provide the ability to easily modify filesystem behaviour
in various aspects without brea
Russell Leighton wrote:
Is there a recovery mechanism, or are you just happy to know there is
a problem (and go to backup)?
You probably go to backup anyway. The recovery mechanism just means you
get to choose the downtime to restore from backup (if there is
downtime), versus being sudde
On 8/4/06, Edward Shishkin <[EMAIL PROTECTED]> wrote:
Hans Reiser wrote:
> Edward Shishkin wrote:
>
>
>>Matthias Andree wrote:
>>
>>
>>>On Tue, 01 Aug 2006, Hans Reiser wrote:
>>>
>>>
>>>
You will want to try our compression plugin, it has an ecc for every
64k
>>>
>>>
>>>
>>>What kind
Hans Reiser wrote:
Edward Shishkin wrote:
Matthias Andree wrote:
On Tue, 01 Aug 2006, Hans Reiser wrote:
You will want to try our compression plugin, it has an ecc for every
64k
What kind of forward error correction would that be,
Actually we use checksums, not ECC. If check
That was exactly the summary I was looking for.
I would encourage folks to read the referenced link Toby sent:
http://blogs.sun.com/roller/page/bonwick?entry=zfs_end_to_end_data
...also the linked RAID-Z summary from this article was very
interesting, since something like this is needed for
On 4-Aug-06, at 3:25 AM, Russell Leighton wrote:
If the software (filesystem like ZFS or database like Berkeley DB)
finds a mismatch for a checksum on a block read, then what?
Is there a recovery mechanism, or are you just happy to know
there is a problem (and go to backup)?
ZFS wi
If the software (filesystem like ZFS or database like Berkeley DB)
finds a mismatch for a checksum on a block read, then what?
Is there a recovery mechanism, or are you just happy to know there is
a problem (and go to backup)?
Thx
Matthias Andree wrote:
Berkeley DB can, since version
On 8/3/06, Matthias Andree <[EMAIL PROTECTED]> wrote:
Berkeley DB can, since version 4.1 (IIRC), write checksums (newer
versions document this as SHA1) on its database pages, to detect
corruptions and writes that were supposed to be atomic but failed
(because you cannot write 4K or 16K atomically
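The torn-write detection Matthias describes (a page checksum catching a write that was supposed to be atomic but only partially reached the disk) can be modeled in a few lines. This is a toy sketch of the idea only; the 4K page size, the header layout, and the helper names are my assumptions, not Berkeley DB's actual on-disk format:

```python
import hashlib

PAGE_SIZE = 4096
DIGEST = 20  # SHA-1 digest length in bytes

def write_page(payload: bytes) -> bytes:
    """Fill a page body and prepend a SHA-1 digest of its contents."""
    body = payload.ljust(PAGE_SIZE - DIGEST, b"\x00")
    return hashlib.sha1(body).digest() + body

def page_intact(page: bytes) -> bool:
    """A page is intact only if body and stored digest still agree."""
    return hashlib.sha1(page[DIGEST:]).digest() == page[:DIGEST]

full = write_page(b"some record data")
assert len(full) == PAGE_SIZE
assert page_intact(full)

# Model a torn write: only the first 512-byte sector reached the disk;
# the rest of the page still holds stale bytes from its old contents.
torn = full[:512] + b"\xa5" * (PAGE_SIZE - 512)
assert not page_intact(torn)
```

Since a disk guarantees atomicity (at best) per sector, any multi-sector page can be torn by a power cut; the whole-page digest turns that silent mix of old and new sectors into a detectable error.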
Edward Shishkin wrote:
> Matthias Andree wrote:
>
>> On Tue, 01 Aug 2006, Hans Reiser wrote:
>>
>>
>>> You will want to try our compression plugin, it has an ecc for every
>>> 64k
>>
>>
>>
>> What kind of forward error correction would that be,
>
>
>
> Actually we use checksums, not ECC. If ch
On Thu, Aug 03, 2006 at 04:03:07PM +0200, Matthias Andree wrote:
> On Tue, 01 Aug 2006, Ric Wheeler wrote:
>
> > Mirroring a corrupt file system to a remote data center will mirror your
> > corruption.
> >
>
> Which makes me wonder if backup systems shouldn't help with this. If
> they are readi
Matthias Andree wrote:
On Tue, 01 Aug 2006, Hans Reiser wrote:
You will want to try our compression plugin, it has an ecc for every 64k
What kind of forward error correction would that be,
Actually we use checksums, not ECC. If the checksum is wrong, then run
fsck - it will remove the wh
On Tue, 01 Aug 2006, Hans Reiser wrote:
> You will want to try our compression plugin, it has an ecc for every 64k
What kind of forward error correction would that be, and how much and
what failure patterns can it correct? URL suffices.
--
Matthias Andree
On Tue, 01 Aug 2006, Ric Wheeler wrote:
> Mirroring a corrupt file system to a remote data center will mirror your
> corruption.
>
> Rolling back to a snapshot typically only happens when you notice a
> corruption which can go undetected for quite a while, so even that will
> benefit from havi
On Tue, 01 Aug 2006, David Masover wrote:
> >RAID deals with the case where a device fails. RAID 1 with 2 disks can
> >in theory detect an internal inconsistency but cannot fix it.
>
> Still, if it does that, that should be enough. The scary part wasn't
> that there's an internal inconsistency,
On Wed, 02 Aug 2006 20:45:07 +0200, Horst H. von Brand
<[EMAIL PROTECTED]> wrote:
Vladimir V. Saveliev <[EMAIL PROTECTED]> wrote:
On Tue, 2006-08-01 at 17:32 +0200, Łukasz Mierzwa wrote:
> On Fri, 28 Jul 2006 18:33:56 +0200, Linus Torvalds
<[EMAIL PROTECTED]>
> wrote:
> > In othe
Vladimir V. Saveliev <[EMAIL PROTECTED]> wrote:
> On Tue, 2006-08-01 at 17:32 +0200, Łukasz Mierzwa wrote:
> > On Fri, 28 Jul 2006 18:33:56 +0200, Linus Torvalds <[EMAIL PROTECTED]>
> > wrote:
> > > In other words, if a filesystem wants to do something fancy, it needs to
> > > do so WITH TH
Ian Stirling wrote:
David Masover wrote:
David Lang wrote:
On Mon, 31 Jul 2006, David Masover wrote:
Oh, I'm curious -- do hard drives ever carry enough
battery/capacitance to cover their caches? It doesn't seem like it
would be that hard/expensive, and if it is done that way, then I
thin
David Masover wrote:
David Lang wrote:
On Mon, 31 Jul 2006, David Masover wrote:
Oh, I'm curious -- do hard drives ever carry enough
battery/capacitance to cover their caches? It doesn't seem like it
would be that hard/expensive, and if it is done that way, then I
think it's valid to leave
David Masover <[EMAIL PROTECTED]> writes:
>> RAID deals with the case where a device fails. RAID 1 with 2 disks
>> can
>> in theory detect an internal inconsistency but cannot fix it.
>
> Still, if it does that, that should be enough. The scary part wasn't
> that there's an internal inconsistency
Ric Wheeler wrote:
Alan Cox wrote:
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
you don't need your filesystem being super-robust against bad sectors
and such stuff because:
You do, it turns out. It's becomin
Gregory Maxwell wrote:
> This is why ZFS offers block checksums... it can then try all the
> permutations of raid regens to find a solution which gives the right
> checksum.
>
ZFS performance is pretty bad in the only benchmark I have seen of it.
Does anyone have serious benchmarks of it? I susp
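Gregory's point above (ZFS can try every RAID reconstruction hypothesis and keep the one whose result matches the block checksums) can be sketched for a single XOR-parity stripe. This is a toy model under my own assumptions, not ZFS or RAID-Z code:

```python
import zlib
from functools import reduce

def xor(blocks):
    """Byte-wise XOR of equal-length blocks (single parity, RAID-5 style)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def recover(stripe, parity, checksums):
    """Try each 'disk N returned bad data' hypothesis in turn: rebuild
    block N from parity plus the other blocks, and keep the first
    combination in which every block matches its checksum."""
    for n in range(len(stripe)):
        candidate = list(stripe)
        candidate[n] = xor(stripe[:n] + stripe[n + 1:] + [parity])
        if all(zlib.crc32(b) == c for b, c in zip(candidate, checksums)):
            return candidate
    return None  # more than one disk lied, or parity itself is bad

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor(data)
sums = [zlib.crc32(b) for b in data]
damaged = [b"AAAA", b"XXXX", b"CCCC"]   # disk 1 silently corrupted
assert recover(damaged, parity, sums) == data
```

Plain RAID cannot do this: with parity alone it can see that the stripe is inconsistent, but not which disk is wrong. The independent per-block checksum is what turns "inconsistent" into "disk 1 is lying, rebuild it".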
Ric Wheeler wrote:
> Alan Cox wrote:
>
>>
>>
>> You do, it turns out. It's becoming an issue more and more that the sheer
>> amount of storage means that the undetected error rate from disks,
>> hosts, memory, cables and everything else is rising.
>
>
>
> I agree with Alan
You will want to try our
Alan, I have seen only anecdotal evidence against reiserfsck, and I have
seen formal tests from Vitaly (which it seems a user has replicated)
where our fsck did better than ext3's. Note that these tests are of the
latest fsck from us: I am sure everyone understands that it takes time
for an fsck t
Alan Cox wrote:
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
you don't need your filesystem being super-robust against bad sectors
and such stuff because:
You do, it turns out. It's becoming an issue more and
> > This is why ZFS offers block checksums... it can then try all the
> > permutations of raid regens to find a solution which gives the right
> > checksum.
>
> Isn't there a way to do this at the block layer? Something in
> device-mapper?
Remember: Suns new Filesystem + Suns new Volume Manage
> You do, it turns out. It's becoming an issue more and more that the sheer
> amount of storage means that the undetected error rate from disks,
> hosts, memory, cables and everything else is rising.
IMHO the possibility of hitting such a random, so-far-undetected corruption
is very low with one of the b
Jan Engelhardt wrote:
>>Wandering logs is a term specific to reiser4, and I think you are making
>>a more general remark.
>>
>>
>
>So, what is UDF's "wandering" log then?
>
>
>
>Jan Engelhardt
>
>
I have no idea, when did they introduce it?
Gregory Maxwell wrote:
On 8/1/06, David Masover <[EMAIL PROTECTED]> wrote:
Yikes. Undetected.
Wait, what? Disks, at least, would be protected by RAID. Are you
telling me RAID won't detect such an error?
Unless the disk ECC catches it raid won't know anything is wrong.
This is why ZFS offe
Alan Cox wrote:
On Tue, 2006-08-01 at 11:44 -0500, David Masover wrote:
Yikes. Undetected.
Wait, what? Disks, at least, would be protected by RAID. Are you
telling me RAID won't detect such an error?
Yes.
RAID deals with the case where a device fails. RAID 1 with 2 disks can
in th
On 8/1/06, David Masover <[EMAIL PROTECTED]> wrote:
Yikes. Undetected.
Wait, what? Disks, at least, would be protected by RAID. Are you
telling me RAID won't detect such an error?
Unless the disk ECC catches it raid won't know anything is wrong.
This is why ZFS offers block checksums... it
Theodore Tso wrote:
Ah, but as soon as the repacker thread runs continuously, then you
lose all or most of the claimed advantage of "wandering logs".
[...]
So instead of a write-write overhead, you end up with a
write-read-write overhead.
This would tend to suggest that the repacker should n
On Tue, 2006-08-01 at 11:44 -0500, David Masover wrote:
> Yikes. Undetected.
>
> Wait, what? Disks, at least, would be protected by RAID. Are you
> telling me RAID won't detect such an error?
Yes.
RAID deals with the case where a device fails. RAID 1 with 2 disks can
in theory detect
Christian Trefzer wrote:
On Mon, Jul 31, 2006 at 10:57:35AM -0500, David Masover wrote:
Wil Reichert wrote:
Any idea how the fragmentation resulting from re-syncing the tree
affects performance over time?
Yes, it does affect it a lot. I have no idea how much, and I've never
benchmarked it, b
Hello
On Tue, 2006-08-01 at 17:32 +0200, Łukasz Mierzwa wrote:
> On Fri, 28 Jul 2006 18:33:56 +0200, Linus Torvalds <[EMAIL PROTECTED]>
> wrote:
>
> > In other words, if a filesystem wants to do something fancy, it needs to
> > do so WITH THE VFS LAYER, not as some plugin architecture of it
Horst H. von Brand wrote:
Bernd Schubert <[EMAIL PROTECTED]> wrote:
While filesystem speed is nice, it also would be great if reiser4.x would be
very robust against any kind of hardware failures.
Can't have both.
Why not? I mean, other than TANSTAAFL, is there a technical reason for
them
Alan Cox wrote:
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
you don't need your filesystem being super-robust against bad sectors
and such stuff because:
You do, it turns out. It's becoming an issue more and mo
On Fri, 28 Jul 2006 18:33:56 +0200, Linus Torvalds <[EMAIL PROTECTED]>
wrote:
In other words, if a filesystem wants to do something fancy, it needs to
do so WITH THE VFS LAYER, not as some plugin architecture of its own. We
already have exactly the plugin interface we need, and it literall
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
> WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
> you don't need your filesystem being super-robust against bad sectors
> and such stuff because:
You do, it turns out. It's becoming an issue more and more that the sh
> > While filesystem speed is nice, it also would be great if reiser4.x would
> > be
> > very robust against any kind of hardware failures.
>
> Can't have both.
..and some people simply don't care about this:
If you are running a 'big' Storage-System with battery protected
WriteCache, Mirrori
Bernd Schubert <[EMAIL PROTECTED]> wrote:
> On Monday 31 July 2006 21:29, Jan-Benedict Glaw wrote:
> > The point is that it's quite hard to really fuck up ext{2,3} with only
> > some KB being written while it seems (due to the
> > fragile^Wsophisticated on-disk data structures) that it's just easy
>> >I didn't mean to say your particular drive were crap, but 200GB SATA
>> >drives are low end, like it or not --
>>
>> And you think an 18 GB SCSI disk just does it better because it's SCSI?
>
>18 GB SCSI disks are 1999 gear, so who cares?
>Seagate didn't sell 200 GB SATA drives at that time.
>
Jan Engelhardt wrote on 2006-08-01:
> >I didn't mean to say your particular drive were crap, but 200GB SATA
> >drives are low end, like it or not --
>
> And you think an 18 GB SCSI disk just does it better because it's SCSI?
18 GB SCSI disks are 1999 gear, so who cares?
Seagate didn't sell 200
On Mon, Jul 31, 2006 at 06:05:01PM +0200, Łukasz Mierzwa wrote:
> I guess that extents are much harder to reuse than normal inodes, so when you
> have something as big as the portage tree filled with nano files which are
> being modified all the time then you just can't keep performance all the
> ti
>
>Wandering logs is a term specific to reiser4, and I think you are making
>a more general remark.
So, what is UDF's "wandering" log then?
Jan Engelhardt
--
>
>I didn't mean to say your particular drive were crap, but 200GB SATA
>drives are low end, like it or not --
And you think an 18 GB SCSI disk just does it better because it's SCSI?
Esp. in long sequential reads.
Jan Engelhardt
--
planning sometimes is not possible, especially in certain highly stressed
environments.
Just think: three years ago I had a database that was 2 TB; we expected
it could grow to 6 TB in three years, but now it is 40 TB because
the market situation changed, and with it the number of the us
On Mon, Jul 31, 2006 at 05:59:58PM +0200, Adrian Ulrich wrote:
> Hello Matthias,
>
> > This looks rather like an education issue rather than a technical limit.
>
> We aren't talking about the same issue: I was asking to do it
> on-the-fly. Umounting the filesystem, running e2fsck and resize2fs
>
Matthias Andree wrote:
No, it is valid to run the test on commodity hardware, but if you (or
the benchmark rather) is claiming "transactions", I tend to think
"ACID", and I highly doubt any 200 GB SATA drive manages 3000
synchronous writes per second without causing either serious
fragmentation
Theodore Tso wrote:
>On Mon, Jul 31, 2006 at 09:41:02PM -0700, David Lang wrote:
>
>
>>just because you have redundancy doesn't mean that your data is idle enough
>>for you to run a repacker with your spare cycles. To run a repacker you
>>need a time when the chunk of the filesystem that you a
Matthias Andree wrote:
On Tue, 01 Aug 2006, Avi Kivity wrote:
> There's no reason to repack *all* of the data. Many workloads write
and
> delete whole files, so file data should be contiguous. The repacker
> would only need to move metadata and small files.
Move small files? What for?
W
Matthias Andree wrote:
>
>>Have you ever seen VxFS or WAFL in action?
>>
>>
>
>No I haven't. As long as they are commercial, it's not likely that I
>will.
>
>
WAFL was well done. It has several innovations that I admire,
including quota trees, non-support of fragments for performance reaso
On Tue, 01 Aug 2006, Avi Kivity wrote:
> There's no reason to repack *all* of the data. Many workloads write and
> delete whole files, so file data should be contiguous. The repacker
> would only need to move metadata and small files.
Move small files? What for?
Even if it is "only" moving m
Adrian Ulrich wrote on 2006-08-01:
> > suspect, particularly with 7200/min (s)ATA crap.
>
> Quoting myself (again):
> >> A quick'n'dirty ZFS-vs-UFS-vs-Reiser3-vs-Reiser4-vs-Ext3 'benchmark'
>
> Yeah, the test ran on a single SATA-Harddisk (quick'n'dirty).
I'm so sorry but I don't have acces
On Mon, Jul 31, 2006 at 10:57:35AM -0500, David Masover wrote:
>
> Wil Reichert wrote:
>
> >Any idea how the fragmentation resulting from re-syncing the tree
> >affects performance over time?
>
> Yes, it does affect it a lot. I have no idea how much, and I've never
> benchmarked it, but purely s
> suspect, particularly with 7200/min (s)ATA crap.
Quoting myself (again):
>> A quick'n'dirty ZFS-vs-UFS-vs-Reiser3-vs-Reiser4-vs-Ext3 'benchmark'
Yeah, the test ran on a single SATA-Harddisk (quick'n'dirty).
I'm so sorry but I don't have access to a $$$ Raid-System at home.
Anyway: The test s
> So ZFS isn't "state-of-the-art"?
Of course it's state-of-the-art (on Solaris ;-) )
> WAFL is for high-turnover filesystems on RAID-5 (and assumes flash memory
> staging areas).
s/RAID-5/RAID-4/
> Not your run-of-the-mill desktop...
The WAFL-Thing was just a joke ;-)
Regards,
Adrian
Theodore Tso wrote:
Ah, but as soon as the repacker thread runs continuously, then you
lose all or most of the claimed advantage of "wandering logs".
Specifically, the claim of the "wandering log" is that you don't have
to write your data twice --- once to the log, and once to the final
location
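Ted's write-twice versus write-once comparison can be made concrete with a toy write counter. This models only the block counts he describes, nothing about real journaling or reiser4's allocator; all names here are my own illustration:

```python
# Count block writes for the two commit strategies discussed above.

def journal_commit(blocks):
    """Classic journaling: every block is written twice."""
    writes = []
    for b in blocks:
        writes.append(("journal", b))   # first copy: into the log
    for b in blocks:
        writes.append(("home", b))      # second copy: final location
    return writes

def wandering_commit(blocks):
    """Wandering log: data is written once, to a new location, and the
    tree is updated to point there -- the log 'wanders' with the data."""
    writes = [("new-location", b) for b in blocks]
    writes.append(("meta", "pointer-update"))
    return writes

blocks = ["b1", "b2", "b3"]
assert len(journal_commit(blocks)) == 6   # 2 writes per block
assert len(wandering_commit(blocks)) == 4 # 1 per block + pointer update
```

The thread's counterpoint follows directly from this model: because each commit lands wherever is convenient, the data ends up scattered, and a repacker must later read and rewrite those blocks, turning the saved write-write into a write-read-write.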
On Mon, Jul 31, 2006 at 09:41:02PM -0700, David Lang wrote:
> just because you have redundancy doesn't mean that your data is idle enough
> for you to run a repacker with your spare cycles. To run a repacker you
> need a time when the chunk of the filesystem that you are repacking is not
> being
>
>> A filesystem with a fixed number of inodes (= not readjustable while
>> mounted) is ehr.. somewhat unuseable for a lot of people with
>> big and *flexible* storage needs (Talking about NetApp/EMC owners)
>
>Which is untrue at least for Solaris, which allows resizing a life file
>system. FreeBS
On 1-Aug-06, at 4:15 AM, Jeffrey V. Merkey wrote:
...I was and have remained loyal to Linux through it all.
Except for that little fling with SCO, eh?
Off topic, but no more so than your self-aggrandising.
--T
David Lang wrote:
On Mon, 31 Jul 2006, David Masover wrote:
And perhaps a
really good clustering filesystem for markets that
require NO downtime.
Thing is, a cluster is about the only FS I can imagine that could
reasonably require (and MAYBE provide) absolutely no downtime.
Everything else
On Mon, 31 Jul 2006, David Masover wrote:
And perhaps a
really good clustering filesystem for markets that
require NO downtime.
Thing is, a cluster is about the only FS I can imagine that could reasonably
require (and MAYBE provide) absolutely no downtime. Everything else, the more
you say
On Mon, 2006-07-31 at 23:00 -0400, Theodore Tso wrote:
> The problem is that many benchmarks (such as taring and untaring the
> kernel sources in reiser4 sort order) are overly simplistic, in that
> they don't really reflect how people use the filesystem in real life.
> (How many times can you guar
David Lang wrote:
On Mon, 31 Jul 2006, David Masover wrote:
Oh, I'm curious -- do hard drives ever carry enough
battery/capacitance to cover their caches? It doesn't seem like it
would be that hard/expensive, and if it is done that way, then I think
it's valid to leave them on. You could ju
Timothy Webster wrote:
Different users have different needs.
I'm having trouble thinking of users who need an FS that doesn't need a
repacker.
The disk error problem, though, you're right -- most users will have to
get bitten by this, hard, at least once, or they'll never get the
importanc
On Mon, 31 Jul 2006, David Masover wrote:
Oh, I'm curious -- do hard drives ever carry enough battery/capacitance to
cover their caches? It doesn't seem like it would be that hard/expensive,
and if it is done that way, then I think it's valid to leave them on. You
could just say that other f
Different users have different needs.
I agree, there are many users who cannot afford any
downtime.
I worked at the NYSE and they reboot all their
computers once a week. We had a policy at NYSE. If
you suspect a computer has hardware problems, take it
off line. It is better to be short a few com
Theodore Tso wrote:
On Mon, Jul 31, 2006 at 08:31:32PM -0500, David Masover wrote:
So you use a repacker. Nice thing about a repacker is, everyone has
downtime. Better to plan to be a little sluggish when you'll have
1/10th or 1/50th of the users than be MUCH slower all the time.
Actually,
On Mon, Jul 31, 2006 at 08:31:32PM -0500, David Masover wrote:
> So you use a repacker. Nice thing about a repacker is, everyone has
> downtime. Better to plan to be a little sluggish when you'll have
> 1/10th or 1/50th of the users than be MUCH slower all the time.
Actually, that's a problem
Matthias Andree wrote:
On Mon, 31 Jul 2006, Nate Diller wrote:
this is only a limitation for filesystems which do in-place data and
metadata updates. this is why i mentioned the similarities to log
file systems (see rosenblum and ousterhout, 1991). they observed an
order-of-magnitude increase
On 7/31/06, Matthias Andree <[EMAIL PROTECTED]> wrote:
On Mon, 31 Jul 2006, Nate Diller wrote:
> this is only a limitation for filesystems which do in-place data and
> metadata updates. this is why i mentioned the similarities to log
> file systems (see rosenblum and ousterhout, 1991). they ob
On Mon, 31 Jul 2006, Nate Diller wrote:
> this is only a limitation for filesystems which do in-place data and
> metadata updates. this is why i mentioned the similarities to log
> file systems (see rosenblum and ousterhout, 1991). they observed an
> order-of-magnitude increase in performance fo
Nate Diller wrote:
On 7/31/06, Jeff V. Merkey <[EMAIL PROTECTED]> wrote:
Nate Diller wrote:
> On 7/31/06, Jeff V. Merkey <[EMAIL PROTECTED]> wrote:
>
>> Gregory Maxwell wrote:
>>
>> > On 7/31/06, Alan Cox <[EMAIL PROTECTED]> wrote:
>> >
>> >> Its well accepted that reiserfs3 has some robustne
On 7/31/06, David Lang <[EMAIL PROTECTED]> wrote:
On Mon, 31 Jul 2006, Nate Diller wrote:
> On 7/31/06, David Lang <[EMAIL PROTECTED]> wrote:
>> On Mon, 31 Jul 2006, Nate Diller wrote:
>>
>> >
>> > On 7/31/06, Matthias Andree <[EMAIL PROTECTED]> wrote:
>> >> Adrian Ulrich wrote:
>> >>
>> >> > Se
On Mon, 31 Jul 2006, Nate Diller wrote:
On 7/31/06, David Lang <[EMAIL PROTECTED]> wrote:
On Mon, 31 Jul 2006, Nate Diller wrote:
>
> On 7/31/06, Matthias Andree <[EMAIL PROTECTED]> wrote:
>> Adrian Ulrich wrote:
>>
>> > See also: http://spam.workaround.ch/dull/postmark.txt
>> >
>> > A quick'n
On 7/31/06, David Lang <[EMAIL PROTECTED]> wrote:
On Mon, 31 Jul 2006, Nate Diller wrote:
>
> On 7/31/06, Matthias Andree <[EMAIL PROTECTED]> wrote:
>> Adrian Ulrich wrote:
>>
>> > See also: http://spam.workaround.ch/dull/postmark.txt
>> >
>> > A quick'n'dirty ZFS-vs-UFS-vs-Reiser3-vs-Reiser4-vs
On 7/31/06, Jeff V. Merkey <[EMAIL PROTECTED]> wrote:
Nate Diller wrote:
> On 7/31/06, Jeff V. Merkey <[EMAIL PROTECTED]> wrote:
>
>> Gregory Maxwell wrote:
>>
>> > On 7/31/06, Alan Cox <[EMAIL PROTECTED]> wrote:
>> >
>> >> Its well accepted that reiserfs3 has some robustness problems in the
>>
On Mon, 31 Jul 2006, Nate Diller wrote:
On 7/31/06, Matthias Andree <[EMAIL PROTECTED]> wrote:
Adrian Ulrich wrote:
> See also: http://spam.workaround.ch/dull/postmark.txt
>
> A quick'n'dirty ZFS-vs-UFS-vs-Reiser3-vs-Reiser4-vs-Ext3 'benchmark'
Whatever Postmark does, this looks pretty besid