Sander Sweers wrote:
>
>
>With the approval of Namesys I would like to add a new entry to the wiki
>frontpage.
>
It would be very appreciated.
Nate Diller wrote:
On 8/1/06, David Masover <[EMAIL PROTECTED]> wrote:
Vladimir V. Saveliev wrote:
I could be entirely wrong, though. I speak for neither
Hans/Namesys/reiserfs nor LKML. Talk amongst yourselves...
I should clarify things a bit here. Yes, Hans' goal is for there to
be no d
Ian Stirling wrote:
David Masover wrote:
David Lang wrote:
On Mon, 31 Jul 2006, David Masover wrote:
Oh, I'm curious -- do hard drives ever carry enough
battery/capacitance to cover their caches? It doesn't seem like it
would be that hard/expensive, and if it is done that way, then I
thin
David Masover wrote:
David Lang wrote:
On Mon, 31 Jul 2006, David Masover wrote:
Oh, I'm curious -- do hard drives ever carry enough
battery/capacitance to cover their caches? It doesn't seem like it
would be that hard/expensive, and if it is done that way, then I
think it's valid to leave
Ricardo (Tru64 User) wrote:
Hi,
Hello
I read on reiserfs site, faq #1, about max file sizes:
max file size: 2^60 bytes => 1 EiB,
but page cache limits this to 8 Ti on architectures
with 32 bit int for reiserfs 3.6
I do have a reiserfs 3.6 filesystem, on kernel
2.6.12-21mdksmp (mandriva 20
On Tue, 2006-08-01 at 23:12 +0200, Maciej Sołtysiak wrote:
> Hello Sander,
Hey
>
> Tuesday, August 1, 2006, 8:10:34 PM, you wrote:
> > Yes, and in case of gentoo there are already people maintaining an
> > ebuild which pull in r4 on the wiki.
> > http://gentoo-wiki.com/HOWTO_Reiser4_With_Gentoo-So
Hi,
I read on reiserfs site, faq #1, about max file sizes:
max file size: 2^60 bytes => 1 EiB,
but page cache limits this to 8 Ti on architectures
with 32 bit int for reiserfs 3.6
I do have a reiserfs 3.6 filesystem, on kernel
2.6.12-21mdksmp (mandriva 2006) that would not take a
filesize greate
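The 8 TiB figure from the FAQ can be checked with back-of-the-envelope arithmetic. This is a rough sketch, assuming the common 4 KiB page size and an effectively signed 32-bit page index in the page cache; the constants here are illustrative, not taken from the reiserfs source.

```python
# Why a 32-bit page index caps file size at 8 TiB even though the
# reiserfs 3.6 on-disk format allows 2^60 bytes (1 EiB).

PAGE_SIZE = 4096          # 4 KiB, the usual page size on 32-bit x86
MAX_PAGE_INDEX = 2 ** 31  # assuming a signed 32-bit page index

max_bytes = MAX_PAGE_INDEX * PAGE_SIZE   # 2^31 pages * 2^12 bytes/page
print(max_bytes == 2 ** 43)              # True: 2^43 bytes
print(max_bytes // 2 ** 40, "TiB")       # 8 TiB
```

So the limit is purely an architectural page-cache constraint, not a property of the filesystem format itself.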
Hello Sander,
Tuesday, August 1, 2006, 8:10:34 PM, you wrote:
> Yes, and in case of gentoo there are already people maintaining an
> ebuild which pull in r4 on the wiki.
> http://gentoo-wiki.com/HOWTO_Reiser4_With_Gentoo-Sources
Debian has reiser4progs and kernel-patch-2.6-reiser4:
- stable: 20040
On Fri, 2006-06-23 at 02:51 +0300, Jussi Judin wrote:
> After that I upgraded to Debian patched kernel 2.6.16-14 and to reiser4
> patch 2.6.16-4 for that kernel and ran fsck.reiser4. Then I got errors
> like this in kern.log after a while:
>
> WARNING: Error for inode 1731981 (-2)
> reiser4[nfsd
David Masover <[EMAIL PROTECTED]> writes:
>> RAID deals with the case where a device fails. RAID 1 with 2 disks
>> can
>> in theory detect an internal inconsistency but cannot fix it.
>
> Still, if it does that, that should be enough. The scary part wasn't
> that there's an internal inconsistency
On 8/1/06, David Masover <[EMAIL PROTECTED]> wrote:
Vladimir V. Saveliev wrote:
> Do you think that if reiser4 supported xattrs - it would increase its
> chances on inclusion?
Probably the opposite.
If I understand it right, the original Reiser4 model of file metadata is
the file-as-directory
Hello Ingo,
there is a new reiser4 / lock validator problem:
On Sunday 30 July 2006 22:57, Laurent Riffard wrote:
> ===
> [ INFO: possible circular locking dependency detected ]
> ---
> mv/2901
On 8/1/06, Andrew Morton <[EMAIL PROTECTED]> wrote:
On Tue, 01 Aug 2006 15:24:37 +0400
"Vladimir V. Saveliev" <[EMAIL PROTECTED]> wrote:
> > >The writeout code is ugly, although that's largely due to a mismatch
> > >between what reiser4 wants to do and what the VFS/MM expects it to do.
>
> Yes.
Ric Wheeler wrote:
Alan Cox wrote:
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
you don't need your filesystem being super-robust against bad sectors
and such stuff because:
You do, it turns out. It's becomin
Gregory Maxwell wrote:
> This is why ZFS offers block checksums... it can then try all the
> permutations of raid regens to find a solution which gives the right
> checksum.
>
ZFS performance is pretty bad in the only benchmark I have seen of it.
Does anyone have serious benchmarks of it? I susp
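Gregory's point about checksum-guided reconstruction can be sketched in a few lines. This is a toy model, not ZFS code: a single-parity (XOR) stripe, a per-block checksum stored at write time, and a loop that tries each candidate reconstruction until one matches. Names (`recover`, `xor`) are invented for the illustration.

```python
import hashlib
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def recover(blocks, parity, expected_sums):
    """Try rebuilding each block from parity + the others; keep the
    permutation whose rebuilt block matches the stored checksum."""
    for i in range(len(blocks)):
        others = [b for j, b in enumerate(blocks) if j != i]
        rebuilt = reduce(xor, others, parity)  # parity XOR other blocks
        if checksum(rebuilt) == expected_sums[i]:
            fixed = list(blocks)
            fixed[i] = rebuilt
            return fixed
    return None

# A toy stripe: three 4-byte blocks plus an XOR parity block.
good = [b"aaaa", b"bbbb", b"cccc"]
parity = reduce(xor, good)
sums = [checksum(b) for b in good]

corrupt = [b"aaaa", b"XXXX", b"cccc"]   # silent corruption in block 1
print(recover(corrupt, parity, sums) == good)  # True
```

Without the checksum, parity alone cannot say *which* block is wrong; with it, the right regeneration is identifiable.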
Ric Wheeler wrote:
> Alan Cox wrote:
>
>>
>>
>> You do, it turns out. It's becoming an issue more and more that the sheer
>> amount of storage means that the undetected error rate from disks,
>> hosts, memory, cables and everything else is rising.
>
>
>
> I agree with Alan
You will want to try our
Alan, I have seen only anecdotal evidence against reiserfsck, and I have
seen formal tests from Vitaly (which it seems a user has replicated)
where our fsck did better than ext3s. Note that these tests are of the
latest fsck from us: I am sure everyone understands that it takes time
for an fsck t
Alan Cox wrote:
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
you don't need your filesystem being super-robust against bad sectors
and such stuff because:
You do, it turns out. It's becoming an issue more and
> > This is why ZFS offers block checksums... it can then try all the
> > permutations of raid regens to find a solution which gives the right
> > checksum.
>
> Isn't there a way to do this at the block layer? Something in
> device-mapper?
Remember: Suns new Filesystem + Suns new Volume Manage
> You do, it turns out. It's becoming an issue more and more that the sheer
> amount of storage means that the undetected error rate from disks,
> hosts, memory, cables and everything else is rising.
IMHO the possibility to hit such a random-so-far-undetected-corruption
is very low with one of the b
On Tue, 2006-08-01 at 13:28 +0200, Maciej Sołtysiak wrote:
> Hello David,
>
> Monday, July 31, 2006, 11:46:34 PM, you wrote:
> > You must be new here...
> ;-)
>
> I wanted to point out that because:
> > Options B and C are all that ever seems to happen when reiserfs-list and
> > lkml collide.
>
Jan Engelhardt wrote:
>>Wandering logs is a term specific to reiser4, and I think you are making
>>a more general remark.
>>
>>
>
>So, what is UDF's "wandering" log then?
>
>
>
>Jan Engelhardt
>
>
I have no idea. When did they introduce it?
Gregory Maxwell wrote:
On 8/1/06, David Masover <[EMAIL PROTECTED]> wrote:
Yikes. Undetected.
Wait, what? Disks, at least, would be protected by RAID. Are you
telling me RAID won't detect such an error?
Unless the disk ECC catches it raid won't know anything is wrong.
This is why ZFS offe
Alan Cox wrote:
On Tue, 2006-08-01 at 11:44 -0500, David Masover wrote:
Yikes. Undetected.
Wait, what? Disks, at least, would be protected by RAID. Are you
telling me RAID won't detect such an error?
Yes.
RAID deals with the case where a device fails. RAID 1 with 2 disks can
in th
On 8/1/06, David Masover <[EMAIL PROTECTED]> wrote:
Yikes. Undetected.
Wait, what? Disks, at least, would be protected by RAID. Are you
telling me RAID won't detect such an error?
Unless the disk ECC catches it raid won't know anything is wrong.
This is why ZFS offers block checksums... it
Theodore Tso wrote:
Ah, but as soon as the repacker thread runs continuously, then you
lose all or most of the claimed advantage of "wandering logs".
[...]
So instead of a write-write overhead, you end up with a
write-read-write overhead.
This would tend to suggest that the repacker should n
On Tue, 2006-08-01 at 11:44 -0500, David Masover wrote:
> Yikes. Undetected.
>
> Wait, what? Disks, at least, would be protected by RAID. Are you
> telling me RAID won't detect such an error?
Yes.
RAID deals with the case where a device fails. RAID 1 with 2 disks can
in theory detect
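Alan's observation about RAID 1 is easy to make concrete. A minimal sketch, with invented data: the mirror comparison detects a mismatch but carries no information about which copy is authoritative, whereas a checksum recorded at write time does.

```python
import hashlib

copy_a = b"hello world"
copy_b = b"hello wxrld"   # one mirror copy silently corrupted

# RAID 1 comparison: the mirror can see that the copies disagree...
print(copy_a != copy_b)   # True: inconsistency detected
# ...but it has no way to decide which of the two copies is good.

# With an independent checksum stored at write time, it can:
stored = hashlib.sha256(b"hello world").hexdigest()
good = next(c for c in (copy_a, copy_b)
            if hashlib.sha256(c).hexdigest() == stored)
print(good)  # b'hello world'
```

This is exactly the gap block checksums close: detection becomes resolution.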
Christian Trefzer wrote:
On Mon, Jul 31, 2006 at 10:57:35AM -0500, David Masover wrote:
Wil Reichert wrote:
Any idea how the fragmentation resulting from re-syncing the tree
affects performance over time?
Yes, it does affect it a lot. I have no idea how much, and I've never
benchmarked it, b
Hello
On Tue, 2006-08-01 at 17:32 +0200, Łukasz Mierzwa wrote:
> On Fri, 28 Jul 2006 18:33:56 +0200, Linus Torvalds <[EMAIL PROTECTED]>
> wrote:
>
> > In other words, if a filesystem wants to do something fancy, it needs to
> > do so WITH THE VFS LAYER, not as some plugin architecture of it
Horst H. von Brand wrote:
Bernd Schubert <[EMAIL PROTECTED]> wrote:
While filesystem speed is nice, it also would be great if reiser4.x would be
very robust against any kind of hardware failures.
Can't have both.
Why not? I mean, other than TANSTAAFL, is there a technical reason for
them
Vladimir V. Saveliev wrote:
Do you think that if reiser4 supported xattrs - it would increase its
chances on inclusion?
Probably the opposite.
If I understand it right, the original Reiser4 model of file metadata is
the file-as-directory stuff that caused such a furor the last big push
for
Alan Cox wrote:
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
you don't need your filesystem being super-robust against bad sectors
and such stuff because:
You do, it turns out. It's becoming an issue more and mo
On Fri, 28 Jul 2006 18:33:56 +0200, Linus Torvalds <[EMAIL PROTECTED]>
wrote:
In other words, if a filesystem wants to do something fancy, it needs to
do so WITH THE VFS LAYER, not as some plugin architecture of its own. We
already have exactly the plugin interface we need, and it literall
Hello
On Tue, 2006-08-01 at 07:33 -0700, Andrew Morton wrote:
> On Tue, 01 Aug 2006 15:24:37 +0400
> "Vladimir V. Saveliev" <[EMAIL PROTECTED]> wrote:
>
> > > >The writeout code is ugly, although that's largely due to a mismatch
> > > >between
> > > >what reiser4 wants to do and what the VFS/MM
On Tue, 2006-08-01 at 16:52 +0200, Adrian Ulrich wrote:
> WriteCache, Mirroring between 2 Datacenters, snapshotting.. etc..
> you don't need your filesystem being super-robust against bad sectors
> and such stuff because:
You do, it turns out. It's becoming an issue more and more that the sh
> > While filesystem speed is nice, it also would be great if reiser4.x would
> > be
> > very robust against any kind of hardware failures.
>
> Can't have both.
..and some people simply don't care about this:
If you are running a 'big' Storage-System with battery protected
WriteCache, Mirrori
On Tue, 01 Aug 2006 15:24:37 +0400
"Vladimir V. Saveliev" <[EMAIL PROTECTED]> wrote:
> > >The writeout code is ugly, although that's largely due to a mismatch
> > >between
> > >what reiser4 wants to do and what the VFS/MM expects it to do.
>
> Yes. reiser4 writes out atoms. Most pages get into
Bernd Schubert <[EMAIL PROTECTED]> wrote:
> On Monday 31 July 2006 21:29, Jan-Benedict Glaw wrote:
> > The point is that it's quite hard to really fuck up ext{2,3} with only
> > some KB being written while it seems (due to the
> > fragile^Wsophisticated on-disk data structures) that it's just easy
>> >I didn't mean to say your particular drive were crap, but 200GB SATA
>> >drives are low end, like it or not --
>>
>> And you think an 18 GB SCSI disk just does it better because it's SCSI?
>
>18 GB SCSI disks are 1999 gear, so who cares?
>Seagate didn't sell 200 GB SATA drives at that time.
>
Jan Engelhardt schrieb am 2006-08-01:
> >I didn't mean to say your particular drive were crap, but 200GB SATA
> >drives are low end, like it or not --
>
> And you think an 18 GB SCSI disk just does it better because it's SCSI?
18 GB SCSI disks are 1999 gear, so who cares?
Seagate didn't sell 200
On Mon, Jul 31, 2006 at 06:05:01PM +0200, Łukasz Mierzwa wrote:
> I guess that extents are much harder to reuse than normal inodes, so when you
> have something as big as the portage tree filled with tiny files which are
> being modified all the time, then you just can't keep performance all the
> ti
Hello
On Mon, 2006-07-31 at 20:18 -0600, Hans Reiser wrote:
> Andrew Morton wrote:
> >The writeout code is ugly, although that's largely due to a mismatch between
> >what reiser4 wants to do and what the VFS/MM expects it to do.
Yes. reiser4 writes out atoms. Most pages get into atoms via
sys_
Hello David,
Monday, July 31, 2006, 11:46:34 PM, you wrote:
> You must be new here...
;-)
I wanted to point out that because:
> Options B and C are all that ever seems to happen when reiserfs-list and
> lkml collide.
and:
> The speed of a nonworking program is irrelevant.
> The cost-effectiv
>
>Wandering logs is a term specific to reiser4, and I think you are making
>a more general remark.
So, what is UDF's "wandering" log then?
Jan Engelhardt
--
>
>I didn't mean to say your particular drive were crap, but 200GB SATA
>drives are low end, like it or not --
And you think an 18 GB SCSI disk just does it better because it's SCSI?
Esp. in long sequential reads.
Jan Engelhardt
--
Planning is sometimes not possible, especially in certain highly stressed
environments.
Just think: three years ago I had a database that was 2 TB. We expected it
could grow to 6 TB within three years, but now it is 40 TB because
the market situation has changed, and with it the number of the us
On Mon, Jul 31, 2006 at 05:59:58PM +0200, Adrian Ulrich wrote:
> Hello Matthias,
>
> > This looks rather like an education issue rather than a technical limit.
>
> We aren't talking about the same issue: I was asking to do it
> on-the-fly. Umounting the filesystem, running e2fsck and resize2fs
>
Matthias Andree wrote:
No, it is valid to run the test on commodity hardware, but if you (or
the benchmark rather) is claiming "transactions", I tend to think
"ACID", and I highly doubt any 200 GB SATA drive manages 3000
synchronous writes per second without causing either serious
fragmentation
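Matthias's skepticism about 3000 synchronous writes per second has simple arithmetic behind it. A rough sketch, under the assumption that a drive honoring each flush completes on the order of one committed write per platter revolution (ignoring seeks, command queueing, and track layout):

```python
# A 7200 rpm disk that really commits every transaction to the platter
# is bounded by its rotation speed.
rpm = 7200
revolutions_per_second = rpm / 60          # 120 revolutions per second
print(revolutions_per_second)              # 120.0

# Roughly one synchronous write per revolution gives on the order of
# 120 commits/s. A benchmark reporting 3000/s is therefore ~25x over
# that bound, which suggests the drive acknowledged writes from its
# volatile cache rather than from the media.
ratio = 3000 / revolutions_per_second
print(ratio)  # 25.0
```

This is why the "ACID" caveat matters: the transaction rate alone tells you whether durability was actually being tested.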
Theodore Tso wrote:
>On Mon, Jul 31, 2006 at 09:41:02PM -0700, David Lang wrote:
>
>
>>just becouse you have redundancy doesn't mean that your data is idle enough
>>for you to run a repacker with your spare cycles. to run a repacker you
>>need a time when the chunk of the filesystem that you a
Matthias Andree wrote:
On Tue, 01 Aug 2006, Avi Kivity wrote:
> There's no reason to repack *all* of the data. Many workloads write
and
> delete whole files, so file data should be contiguous. The repacker
> would only need to move metadata and small files.
Move small files? What for?
W
Matthias Andree wrote:
>
>>Have you ever seen VxFS or WAFL in action?
>>
>>
>
>No I haven't. As long as they are commercial, it's not likely that I
>will.
>
>
WAFL was well done. It has several innovations that I admire,
including quota trees, non-support of fragments for performance reaso
On Tue, 01 Aug 2006, Avi Kivity wrote:
> There's no reason to repack *all* of the data. Many workloads write and
> delete whole files, so file data should be contiguous. The repacker
> would only need to move metadata and small files.
Move small files? What for?
Even if it is "only" moving m
Andrew Morton wrote:
>On Mon, 31 Jul 2006 10:26:55 +0100
>"Denis Vlasenko" <[EMAIL PROTECTED]> wrote:
>
>
>
>>The reiser4 thread seem to be longer than usual.
>>
>>
>
>Meanwhile here's poor old me trying to find another four hours to finish
>reviewing the thing.
>
>
Thanks Andrew.
>The wr
Adrian Ulrich schrieb am 2006-08-01:
> > suspect, particularly with 7200/min (s)ATA crap.
>
> Quoting myself (again):
> >> A quick'n'dirty ZFS-vs-UFS-vs-Reiser3-vs-Reiser4-vs-Ext3 'benchmark'
>
> Yeah, the test ran on a single SATA-Harddisk (quick'n'dirty).
> I'm so sorry but I don't have acces
On Mon, Jul 31, 2006 at 10:57:35AM -0500, David Masover wrote:
>
> Wil Reichert wrote:
>
> >Any idea how the fragmentation resulting from re-syncing the tree
> >affects performance over time?
>
> Yes, it does affect it a lot. I have no idea how much, and I've never
> benchmarked it, but purely s
> suspect, particularly with 7200/min (s)ATA crap.
Quoting myself (again):
>> A quick'n'dirty ZFS-vs-UFS-vs-Reiser3-vs-Reiser4-vs-Ext3 'benchmark'
Yeah, the test ran on a single SATA-Harddisk (quick'n'dirty).
I'm so sorry but I don't have access to a $$$ Raid-System at home.
Anyway: The test s
> So ZFS isn't "state-of-the-art"?
Of course it's state-of-the-art (on Solaris ;-) )
> WAFL is for high-turnover filesystems on RAID-5 (and assumes flash memory
> staging areas).
s/RAID-5/RAID-4/
> Not your run-of-the-mill desktop...
The WAFL-Thing was just a joke ;-)
Regards,
Adrian
Theodore Tso wrote:
Ah, but as soon as the repacker thread runs continuously, then you
lose all or most of the claimed advantage of "wandering logs".
Specifically, the claim of the "wandering log" is that you don't have
to write your data twice --- once to the log, and once to the final
location
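Ted's write-write vs. write-read-write tradeoff can be put in toy accounting terms. The functions and numbers below are illustrative only, not reiser4's actual I/O paths: classic data journaling writes each block twice, while a wandering log writes once but pays a read and a rewrite for whatever the repacker later relocates.

```python
def journaled_update(blocks):
    """Classic data journaling: every block hits the disk twice."""
    journal_writes = blocks        # first copy into the log
    home_writes = blocks           # second copy to the final location
    return journal_writes + home_writes

def wandering_update(blocks, repacked_fraction=1.0):
    """Wandering log: one write to a fresh location, but a repacker
    that later relocates data must read it back and rewrite it."""
    initial_writes = blocks
    repack_reads = int(blocks * repacked_fraction)
    repack_writes = int(blocks * repacked_fraction)
    return initial_writes + repack_reads + repack_writes

print(journaled_update(100))          # 200 I/Os: write-write
print(wandering_update(100))          # 300 I/Os: write-read-write
print(wandering_update(100, 0.2))     # 140 I/Os if only 20% is repacked
```

Which is why a continuously running repacker erases the advantage: the break-even point depends on how much of the written data it actually has to move.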