Re: Horrible ftruncate performance

2003-07-17 Thread Dieter Nützel
On Wednesday, 16 July 2003 12:57, Oleg Drokin wrote:
> Hello!
>
> On Wed, Jul 16, 2003 at 12:47:53PM +0200, Dieter Nützel wrote:
> > > > Somewhat.
> > > > Mouse movement is OK, now. But...
> > > > 1+0 Records aus
> > > > 0.000u 3.090s 0:16.81 18.3% 0+0k 0+0io 153pf+0w
> > > > 0.000u 0.050s 0:00.27 18.5% 0+0k 0+0io 122pf+0w
> > > > INSTALL/SOURCE> time dd if=/dev/zero of=sparse1 bs=1 seek=200G
> > > > count=1 ; time sync
> > > > 1+0 Records ein
> > > > 1+0 Records aus
> > > > 0.000u 3.010s 0:15.27 19.7% 0+0k 0+0io 153pf+0w
> > > > 0.000u 0.020s 0:01.01 1.9%  0+0k 0+0io 122pf+0w
> > >
> > > So you create a file in 15 seconds
> >
> > Right.
> >
> > > and remove it in 15 seconds.
> >
> > No. "Normaly" ~5 seconds.
>
> Ah, yes. Looking at the wrong timing info ;)
> I see that yesterday without the patch you had 1m, 9s, 5s, 2m times
> for 4 deletes...
>
> > > Kind of nothing changed except mouse now moves,
> > >
> > > > INSTALL/SOURCE> time rm sparse ; time sync
> > > > 0.000u 14.990s 1:31.15 16.4%0+0k 0+0io 130pf+0w
> > > > 0.000u 0.030s 0:00.22 13.6% 0+0k 0+0io 122pf+0w
> > >
> > > So the stuff fell out of cache and we need to read it again.
> >
> > Shouldn't this take only 15 seconds, then?
>
> Probably there was some seeking due to removal of lots of blocks.
>
> > Worst case was ~5 minutes.
>
> Yeah, this is of course sad.
> BTW, is this with the search_reada patch?

Yes.

> What if you try without it?

Does _NOT_ really help.

INSTALL/SOURCE> l
insgesamt 1032
drwxrwxr-x   2 root     root            176 Jul 17 20:05 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm

INSTALL/SOURCE> time dd if=/dev/zero of=sparse bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.000u 2.770s 0:15.88 17.4% 0+0k 0+0io 153pf+0w
0.000u 0.000s 0:00.79 0.0%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> time dd if=/dev/zero of=sparse1 bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.010u 2.440s 0:15.03 16.3% 0+0k 0+0io 153pf+0w
0.010u 0.020s 0:01.08 2.7%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> time dd if=/dev/zero of=sparse2 bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.010u 2.710s 0:14.94 18.2% 0+0k 0+0io 153pf+0w
0.000u 0.000s 0:01.76 0.0%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> l
insgesamt 615444
drwxrwxr-x   2 root     root            248 Jul 17 20:06 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm
-rw-r--r--   1 nuetzel  users  214748364801 Jul 17 20:06 sparse
-rw-r--r--   1 nuetzel  users  214748364801 Jul 17 20:06 sparse1
-rw-r--r--   1 nuetzel  users  214748364801 Jul 17 20:07 sparse2

INSTALL/SOURCE> time sync
0.000u 0.000s 0:00.02 0.0%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> time rm sparse2 ; time sync
0.000u 4.860s 0:04.82 100.8%0+0k 0+0io 130pf+0w
0.010u 0.000s 0:00.03 33.3% 0+0k 0+0io 122pf+0w

INSTALL/SOURCE> time rm sparse1 ; time sync
0.000u 4.910s 0:04.82 101.8%0+0k 0+0io 130pf+0w
0.000u 0.020s 0:00.03 66.6% 0+0k 0+0io 122pf+0w

!!!

INSTALL/SOURCE> time rm sparse ; time sync
0.010u 6.500s 0:48.47 13.4% 0+0k 0+0io 130pf+0w
0.000u 0.000s 0:00.02 0.0%  0+0k 0+0io 122pf+0w

!!!

INSTALL/SOURCE> l
insgesamt 1032
drwxrwxr-x   2 root     root            176 Jul 17 20:08 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm


Overwrite:

INSTALL/SOURCE> time sync
0.000u 0.000s 0:00.02 0.0%  0+0k 0+0io 122pf+0w
INSTALL/SOURCE> time dd if=/dev/zero of=sparse bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.010u 2.890s 0:16.17 17.9% 0+0k 0+0io 153pf+0w
0.000u 0.020s 0:01.27 1.5%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> l
insgesamt 205836
drwxrwxr-x   2 root     root            200 Jul 17 20:09 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm
-rw-r--r--   1 nuetzel  users  214748364801 Jul 17 20:

Re: Horrible ftruncate performance

2003-07-16 Thread Oleg Drokin
Hello!

On Wed, Jul 16, 2003 at 12:47:53PM +0200, Dieter Nützel wrote:
> > > Somewhat.
> > > Mouse movement is OK, now. But...
> > > 1+0 Records aus
> > > 0.000u 3.090s 0:16.81 18.3% 0+0k 0+0io 153pf+0w
> > > 0.000u 0.050s 0:00.27 18.5% 0+0k 0+0io 122pf+0w
> > > INSTALL/SOURCE> time dd if=/dev/zero of=sparse1 bs=1 seek=200G count=1 ;
> > > time sync
> > > 1+0 Records ein
> > > 1+0 Records aus
> > > 0.000u 3.010s 0:15.27 19.7% 0+0k 0+0io 153pf+0w
> > > 0.000u 0.020s 0:01.01 1.9%  0+0k 0+0io 122pf+0w
> > So you create a file in 15 seconds
> Right.
> > and remove it in 15 seconds.
> No. "Normaly" ~5 seconds.

Ah, yes. Looking at the wrong timing info ;)
I see that yesterday without the patch you had 1m, 9s, 5s, 2m times
for 4 deletes...

> > Kind of nothing changed except mouse now moves,

> > > INSTALL/SOURCE> time rm sparse ; time sync
> > > 0.000u 14.990s 1:31.15 16.4%0+0k 0+0io 130pf+0w
> > > 0.000u 0.030s 0:00.22 13.6% 0+0k 0+0io 122pf+0w
> > So the stuff fell out of cache and we need to read it again.
> Shouldn't this take only 15 seconds, then?

Probably there was some seeking due to removal of lots of blocks.

> Worst case was ~5 minutes.

Yeah, this is of course sad.
BTW, is this with the search_reada patch? What if you try without it?

Bye,
Oleg


Re: Horrible ftruncate performance

2003-07-16 Thread Dieter Nützel
On Wednesday, 16 July 2003 12:35, Oleg Drokin wrote:
> Hello!
>
> On Tue, Jul 15, 2003 at 09:55:09PM +0200, Dieter Nützel wrote:
> > Somewhat.
> > Mouse movement is OK, now. But...
> >
> > 1+0 Records aus
> > 0.000u 3.090s 0:16.81 18.3% 0+0k 0+0io 153pf+0w
> > 0.000u 0.050s 0:00.27 18.5% 0+0k 0+0io 122pf+0w
> > INSTALL/SOURCE> time dd if=/dev/zero of=sparse1 bs=1 seek=200G count=1 ;
> > time sync
> > 1+0 Records ein
> > 1+0 Records aus
> > 0.000u 3.010s 0:15.27 19.7% 0+0k 0+0io 153pf+0w
> > 0.000u 0.020s 0:01.01 1.9%  0+0k 0+0io 122pf+0w
>
> So you create a file in 15 seconds

Right.

> and remove it in 15 seconds.

No. "Normaly" ~5 seconds.

INSTALL/SOURCE> time rm sparse2 ; time sync
0.000u 4.930s 0:04.88 101.0%0+0k 0+0io 130pf+0w
0.000u 0.010s 0:00.02 50.0% 0+0k 0+0io 122pf+0w

> Kind of nothing changed except mouse now moves,

Yes.

> am I reading this wrong?

No. ;-)

> > INSTALL/SOURCE> time rm sparse ; time sync
> > 0.000u 14.990s 1:31.15 16.4%0+0k 0+0io 130pf+0w
> > 0.000u 0.030s 0:00.22 13.6% 0+0k 0+0io 122pf+0w
>
> So the stuff fell out of cache and we need to read it again.

Shouldn't this take only 15 seconds, then?
Worst case was ~5 minutes.

> hence the increased time. Hm, probably this case can be optimized
> if there is only one item in the leaf and this item should be removed.
> Need to take a closer look at the balancing code.

Now, out of office.

Greetings,
Dieter



Re: Horrible ftruncate performance

2003-07-16 Thread Oleg Drokin
Hello!

On Tue, Jul 15, 2003 at 09:55:09PM +0200, Dieter Nützel wrote:

> Somewhat.
> Mouse movement is OK, now. But...
> 
> 1+0 Records aus
> 0.000u 3.090s 0:16.81 18.3% 0+0k 0+0io 153pf+0w
> 0.000u 0.050s 0:00.27 18.5% 0+0k 0+0io 122pf+0w
> INSTALL/SOURCE> time dd if=/dev/zero of=sparse1 bs=1 seek=200G count=1 ; time 
> sync
> 1+0 Records ein
> 1+0 Records aus
> 0.000u 3.010s 0:15.27 19.7% 0+0k 0+0io 153pf+0w
> 0.000u 0.020s 0:01.01 1.9%  0+0k 0+0io 122pf+0w

So you create a file in 15 seconds and remove it in 15 seconds.
Kind of nothing changed except mouse now moves, am I reading this wrong?

> INSTALL/SOURCE> time rm sparse ; time sync
> 0.000u 14.990s 1:31.15 16.4%0+0k 0+0io 130pf+0w
> 0.000u 0.030s 0:00.22 13.6% 0+0k 0+0io 122pf+0w

So the stuff fell out of cache and we need to read it again,
hence the increased time. Hm, probably this case can be optimized
if there is only one item in the leaf and this item should be removed.
Need to take a closer look at the balancing code.
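
For reference, the create-then-remove pattern being timed in this thread can be
reproduced from userspace without dd; below is a minimal sketch (path, size and
timing are illustrative, and ftruncate(), where this thread started, stands in
for dd's seek+write):

/* Sketch only: create a ~200GB sparse file, sync, then remove it.
 * Build with -D_FILE_OFFSET_BITS=64 on 32-bit systems.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    const char *path = "sparse";                     /* illustrative name */
    off_t size = (off_t)200 * 1024 * 1024 * 1024;    /* ~200GB, like seek=200G */
    double t;

    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    t = now();                                       /* "time dd ...; time sync" */
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }
    fsync(fd);
    printf("create: %.2fs\n", now() - t);
    close(fd);

    t = now();                                       /* "time rm ...; time sync" */
    if (unlink(path) < 0) { perror("unlink"); return 1; }
    sync();
    printf("remove: %.2fs\n", now() - t);
    return 0;
}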

Bye,
Oleg


Re: Horrible ftruncate performance

2003-07-15 Thread Dieter Nützel
On Tuesday, 15 July 2003 19:05, Oleg Drokin wrote:
> Hello!
>
> On Tue, Jul 15, 2003 at 06:48:58PM +0200, Dieter Nützel wrote:
> > ! The 'right' questions are:
> > * Are the 204,804 MB really needed?
>
> In reiserfs - yes.
>
> > * How much space do XFS, JFS and ext3 'use'?
>
> ext2 uses 16 kb for me (4k blocksize)
>
> > * When the above three can do it 1000 times faster, why can't we?
>
> We can, only in case of reiserfs we have nasty races if we do that.
> Or may be not. I have something to discuss around, I guess ;)
>
> > * Do the three other do it right?
>
> I do not know.
>
> > Removing the sparse files can take a horrible amount of time.
>
> Does the patch below help?

Somewhat.
Mouse movement is OK, now. But...

INSTALL/SOURCE> l
insgesamt 1032
drwxrwxr-x   2 root     root            176 Jul 15 21:45 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm

INSTALL/SOURCE> time dd if=/dev/zero of=sparse bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.000u 3.090s 0:16.81 18.3% 0+0k 0+0io 153pf+0w
0.000u 0.050s 0:00.27 18.5% 0+0k 0+0io 122pf+0w

INSTALL/SOURCE> sync
INSTALL/SOURCE> time dd if=/dev/zero of=sparse1 bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.000u 3.010s 0:15.27 19.7% 0+0k 0+0io 153pf+0w
0.000u 0.020s 0:01.01 1.9%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> sync
INSTALL/SOURCE> time dd if=/dev/zero of=sparse2 bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.000u 2.760s 0:15.16 18.2% 0+0k 0+0io 153pf+0w
0.000u 0.030s 0:00.85 3.5%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> sync
INSTALL/SOURCE> l
insgesamt 615444
drwxrwxr-x   2 root     root            248 Jul 15 21:46 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm
-rw-r--r--   1 nuetzel  users  214748364801 Jul 15 21:46 sparse
-rw-r--r--   1 nuetzel  users  214748364801 Jul 15 21:46 sparse1
-rw-r--r--   1 nuetzel  users  214748364801 Jul 15 21:47 sparse2

INSTALL/SOURCE> time rm sparse2 ; time sync
0.000u 4.930s 0:04.88 101.0%0+0k 0+0io 130pf+0w
0.000u 0.010s 0:00.02 50.0% 0+0k 0+0io 122pf+0w

INSTALL/SOURCE> time rm sparse1 ; time sync
0.000u 4.950s 0:04.87 101.6%0+0k 0+0io 130pf+0w
0.000u 0.030s 0:00.05 60.0% 0+0k 0+0io 122pf+0w

!!

INSTALL/SOURCE> time rm sparse ; time sync
0.000u 14.990s 1:31.15 16.4%0+0k 0+0io 130pf+0w
0.000u 0.030s 0:00.22 13.6% 0+0k 0+0io 122pf+0w

!!

> > If I 'create' a second sparse file with the same name over an existing
> > one (overwrite) I get the 1000 times speedup!
>
> Hm. Interesting observation.

Unchanged.

INSTALL/SOURCE> time dd if=/dev/zero of=sparse bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.000u 2.750s 0:15.31 17.9% 0+0k 0+0io 153pf+0w
0.010u 0.050s 0:00.93 6.4%  0+0k 0+0io 122pf+0w

INSTALL/SOURCE> l
insgesamt 205836
drwxrwxr-x   2 root     root            200 Jul 15 21:52 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm
-rw-r--r--   1 nuetzel  users  214748364801 Jul 15 21:52 sparse

INSTALL/SOURCE> sync
INSTALL/SOURCE> time dd if=/dev/zero of=sparse bs=1 seek=200G count=1 ; time 
sync
1+0 Records ein
1+0 Records aus
0.010u 0.000s 0:00.00 0.0%  0+0k 0+0io 153pf+0w
0.000u 0.030s 0:00.03 100.0%0+0k 0+0io 122pf+0w

INSTALL/SOURCE> l
insgesamt 205836
drwxrwxr-x   2 root     root            200 Jul 15 21:52 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users        452390 Jul 15 00:29 kmplayer-0.7.96.tar.bz2
-rw-r--r--   1 nuetzel  users        403358 Jul 14 21:46 modutils-2.4.21-18.src.rpm
-rw-r--r--   1 nuetzel  users        194505 Jul 14 22:01 procps-2.0.13-1.src.rpm
-rw-r--r--   1 nuetzel  users  214748364801 Jul 15 21:53 sparse

> > Chris and Oleg what do you need?
> > SYSRQ-P/M?
>
> I guess I need an additional pair of brains ;)
> Also some time that I can spend thinking about all of this.
> I guess I'd have plenty of it on Saturday, while flying to Canada.
> O

Re: Horrible ftruncate performance

2003-07-11 Thread Szakacsits Szabolcs

On Sat, 12 Jul 2003, Carl-Daniel Hailfinger wrote:
> Szakacsits Szabolcs wrote:
> > On Fri, 11 Jul 2003, Dieter Nützel wrote:
> >
> >>More than 506 times...
> >>=> 506.34 seconds (8:26.34) / 0.01 seconds = 50.634 times ;-)))
> >
> > I guess you mean 50,634 or 50634 times faster? But I'm afraid you didn't
> > test what you should have. Interestingly, the above speedup, 50 times, is
> > just what I got with Oleg's patch. So now it's only about 2000 times slower
> > than XFS, JFS and ext3.
>
> This is an i18n issue. In Germany, the meaning of "." and "," in numbers
> is just the reverse of U.S. usage. Of course, we Germans argue the U.S. do
> it backwards.
> Pi=3,14159265
> fifty thousand six hundred thirty four=50.634

I know very well, but we are communicating in English, not German.

Szaka



Re: Horrible ftruncate performance

2003-07-11 Thread Carl-Daniel Hailfinger
Szakacsits Szabolcs wrote:
> On Fri, 11 Jul 2003, Dieter Nützel wrote:
> 
>>More than 506 times...
>>=> 506.34 seconds (8:26.34) / 0.01 seconds = 50.634 times ;-)))
> 
> I guess you mean 50,634 or 50634 times faster? But I'm afraid you didn't
> test what you should have. Interestingly, the above speedup, 50 times, is
> just what I got with Oleg's patch. So now it's only about 2000 times slower
> than XFS, JFS and ext3.

This is an i18n issue. In Germany, the meaning of "." and "," in numbers
is just the reverse of U.S. usage. Of course, we Germans argue the U.S. do
it backwards.
Pi=3,14159265
fifty thousand six hundred thirty four=50.634


Carl-Daniel



Re: Horrible ftruncate performance

2003-07-11 Thread Szakacsits Szabolcs

On Fri, 11 Jul 2003, Dieter Nützel wrote:

> As a reminder, the old numbers (single U160, IBM 10k rpm):

For the test below, the disk doesn't really matter. It's almost [should be, see (*)]
a pure CPU job. Otherwise I'd have suggested a 'sync' at the end (**).

> 2.4.21-jam1 (aa1) plus all data-logging
>
> SOURCE/dri-trunk> time dd if=/dev/zero of=sparse bs=1 seek=200G count=1
> 0.000u 362.760s 8:26.34 71.6%   0+0k 0+0io 124pf+0w
>
> It was running with a parallel C++ (-j20) compilation.
> Now the new ones (I've tested with and without the above C++ compilation ;-)
>
> INSTALL/SOURCE> time dd if=/dev/zero of=sparse2 bs=1 seek=200G count=1
> 0.000u 0.010s 0:00.00 0.0%  0+0k 0+0io 153pf+0w

This is too good to be true. You shouldn't have sparse2 before running the
test. You wrote you tried this twice ("with and without above C++
compilation"). This should be the result of the second run.

> More than 506 times...
> => 506.34 seconds (8:26.34) / 0.01 seconds = 50.634 times ;-)))

I guess you mean 50,634 or 50634 times faster? But I'm afraid you didn't
test what you should have. Interestingly, the above speedup, 50 times, is
just what I got with Oleg's patch. So now it's only about 2000 times slower
than XFS, JFS and ext3.

Footnotes:
(*) So the disk shouldn't matter, because the other filesystems use only 4kB to
store this 200GB sparse file. ReiserFS wastes 201MB! One could
mistakenly think this IO could be one of the reasons for the slowness,
but no. There is no 'sync', and anyway it's just 2 sec to do it on my disk
later on.

(**) There are a lot of bogus fs benchmarks out there. One of the many
issues missed is a sync after write tests. I also noticed ReiserFS is
quick to say "I did it" but sometimes it "hangs" for a painful time.
Ragged, sluggish. Without the sync, I've seen and experienced ReiserFS
frequently coming out as the winner [not sustained writes]. With sync,
the results are different.
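
A back-of-the-envelope check of the 201MB figure in (*), assuming (an assumption,
not something stated above) that reiserfs keeps one 4-byte unformatted-node
pointer per 4kB block even across the hole:

/* Sketch only: metadata needed for a 200GB hole, assuming one 4-byte
 * block pointer per 4kB block (indirect items), ignoring internal tree
 * nodes and other overhead.
 */
#include <stdio.h>

int main(void)
{
    unsigned long long file_size  = 200ULL << 30;           /* 200GB sparse file */
    unsigned long long block_size = 4096;                   /* 4kB blocks */
    unsigned long long blocks     = file_size / block_size;
    unsigned long long ptr_bytes  = blocks * 4;             /* 4 bytes per pointer */

    printf("blocks: %llu\n", blocks);                       /* 52428800 */
    printf("pointer bytes: %llu (~%llu MB)\n",
           ptr_bytes, ptr_bytes >> 20);                     /* ~200 MB */
    return 0;
}

That lands in the right ballpark of the 201MB quoted above.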

Ok, last complaint for today :) I couldn't find an API (I admit, because of
the above reasons, I didn't search hard) to query the map of disk blocks or
[offset, length] pairs used by a file, like XFS's XFS_IOC_GETBMAPX does.
This also makes ReiserFS inadequate for working with sparse files efficiently.
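
As an aside on the missing block-map API: the closest generic interface at the
time was the per-block FIBMAP ioctl, which is root-only, returns one physical
block per call rather than [offset, length] extents, and is only as meaningful
as the filesystem's bmap support. A minimal sketch (path and block count are
illustrative):

/* Sketch only: print the physical block behind the first few logical
 * blocks of a file via FIBMAP (needs root; 0 usually means a hole).
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIBMAP */

int main(void)
{
    int fd = open("sparse", O_RDONLY);
    int i;

    if (fd < 0) { perror("open"); return 1; }
    for (i = 0; i < 16; i++) {
        int block = i;              /* in: logical block, out: physical block */
        if (ioctl(fd, FIBMAP, &block) < 0) { perror("FIBMAP"); break; }
        printf("logical %d -> physical %d%s\n", i, block,
               block == 0 ? " (hole)" : "");
    }
    close(fd);
    return 0;
}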

Szaka



Re: Horrible ftruncate performance

2003-07-11 Thread Dieter Nützel
On Friday, 11 July 2003 20:32, Chris Mason wrote:
> On Fri, 2003-07-11 at 13:27, Dieter Nützel wrote:
> > > 2.5 porting work has restarted at last, Oleg's really been helpful with
> > > keeping the 2.4 stuff up to date.
> >
> > Nice but.
> >
> > Patches against latest -aa could be helpful, then.
>
> Hmmm, the latest -aa isn't all that latest right now.

True.

2.4.21-jam1 contains 2.4.21pre8aa1 plus several other things.

> Do you want something against 2.4.21-rc8aa1 or should I wait until andrea
> updates to 2.4.22-pre something?

I think the latter is better.

Up and running with "hand crafted" stuff ;-)

As a reminder, the old numbers (single U160, IBM 10k rpm):

2.4.21-jam1 (aa1) plus all data-logging

SOURCE/dri-trunk> time dd if=/dev/zero of=sparse bs=1 seek=200G count=1
0.000u 362.760s 8:26.34 71.6%   0+0k 0+0io 124pf+0w

It was running with a parallel C++ (-j20) compilation.


Now the new ones (I've tested with and without the above C++ compilation ;-)

INSTALL/SOURCE> time dd if=/dev/zero of=sparse2 bs=1 seek=200G count=1
1+0 Records ein
1+0 Records aus
0.000u 0.010s 0:00.00 0.0%  0+0k 0+0io 153pf+0w
INSTALL/SOURCE> l
insgesamt 19294
drwxrwxr-x   2 root     root            192 Jul 11 21:46 .
drwxr-xr-x   3 root     root             72 Jul  3 01:39 ..
-rw-r--r--   1 nuetzel  users       1696205 Jul  3 01:32 atlas3.5.6.tar.bz2
-rw-r--r--   1 nuetzel  users       2945814 Jul  2 02:53 k3b-0.9pre2.tar.gz
-rw-r--r--   1 nuetzel  users      15078557 Jul  2 03:04 movix-0.8.0rc2.tar.gz
-rw-r--r--   1 nuetzel  users  214748364801 Jul 11 21:46 sparse2

More than 506 times...

=> 506.34 seconds (8:26.34) / 0.01 seconds = 50.634 times ;-)))

GREAT stuff!

Thanks,
Dieter



Re: Horrible ftruncate performance

2003-07-11 Thread Philippe Gramoullé

Hello Dieter,

Before, we used to host web files, so reiserfs + quota was mandatory, and we could
live without the data-logging code. Now that we use reiserfs for fsync-intensive
applications, data-logging + quota + the latest IO improvements from Chris that were
committed recently in 2.4.22-pre will soon become a must-have.

Of course I wish that it would all be included in 2.4.22, but if Marcelo doesn't
agree, I guess we'll continue to use the out-of-tree patches and diffs that all the
people here have kindly provided for years.

Thanks,

Philippe

On Fri, 11 Jul 2003 19:27:06 +0200
Dieter Nützel <[EMAIL PROTECTED]> wrote:

  | > Marcelo seems to like being really conservative on this point, and I
  | > don't have a problem with Oleg's original idea to just do relocation in
  | > 2.4.22 and the full data logging in 2.4.23-pre4 (perhaps +quota now that
  | > 32 bit quota support is in there).
  | 
  | So, it's another half year away...?


Re: Horrible ftruncate performance

2003-07-11 Thread Chris Mason
On Fri, 2003-07-11 at 13:27, Dieter Nützel wrote:

> > 2.5 porting work has restarted at last, Oleg's really been helpful with
> > keeping the 2.4 stuff up to date.
> 
> Nice but.
> 
> Patches against latest -aa could be helpful, then.

Hmmm, the latest -aa isn't all that latest right now.  Do you want
something against 2.4.21-rc8aa1 or should I wait until andrea updates to
2.4.22-pre something?

-chris




Re: Horrible ftruncate performance

2003-07-11 Thread Dieter Nützel
On Friday, 11 July 2003 19:09, Chris Mason wrote:
> On Fri, 2003-07-11 at 11:44, Oleg Drokin wrote:
> > Hello!
> >
> > On Fri, Jul 11, 2003 at 05:34:12PM +0200, Marc-Christian Petersen wrote:
> > > > Actually I did it already, as data-logging patches can be applied to
> > > > 2.4.22-pre3 (where this truncate patch was included).
> > > >
> > > > > Maybe it _IS_ time for this _AND_ all the other data-logging
> > > > > patches? 2.4.22-pre5?
> > > >
> > > > It's Chris turn. I thought it is good idea to test in -ac first,
> > > > though (even taking into account that these patches are part of
> > > > SuSE's stock kernels).
> > >
> > > Well, I don't think that testing in -ac is necessary at all in this
> > > case.
> >
> > May be not. But it is still useful ;)
> >
> > > I am using WOLK on many production machines with ReiserFS mostly as
> > > Fileserver (hundred of gigabytes) and proxy caches.
> >
> > I am using this code on my production server myself ;)
> >
> > > If someone would ask me: Go for 2.4 mainline inclusion w/o going via
> > > -ac! :)
> >
> > Chris should decide (and Marcelo should agree) (Actually Chris thought it
> > is good idea to submit data-logging to Marcelo now, too). I have no
> > objections. Also now, that quota v2 code is in place, even quota code can
> > be included.
> >
> > Also it would be great to port this stuff to 2.5 (yes, I know Chris wants
> > this to be in 2.4 first)
>
> Marcelo seems to like being really conservative on this point, and I
> don't have a problem with Oleg's original idea to just do relocation in
> 2.4.22 and the full data logging in 2.4.23-pre4 (perhaps +quota now that
> 32 bit quota support is in there).

So, it's another half year away...?

> 2.5 porting work has restarted at last, Oleg's really been helpful with
> keeping the 2.4 stuff up to date.

Nice but.

Patches against latest -aa could be helpful, then.

Thanks,
Dieter



Re: Horrible ftruncate performance

2003-07-11 Thread Chris Mason
On Fri, 2003-07-11 at 11:44, Oleg Drokin wrote:
> Hello!
> 
> On Fri, Jul 11, 2003 at 05:34:12PM +0200, Marc-Christian Petersen wrote:
> 
> > > Actually I did it already, as data-logging patches can be applied to
> > > 2.4.22-pre3 (where this truncate patch was included).
> > > > Maybe it _IS_ time for this _AND_ all the other data-logging patches?
> > > > 2.4.22-pre5?
> > > It's Chris turn. I thought it is good idea to test in -ac first, though
> > > (even taking into account that these patches are part of SuSE's stock
> > > kernels).
> > Well, I don't think that testing in -ac is necessary at all in this case.
> 
> May be not. But it is still useful ;)
> 
> > I am using WOLK on many production machines with ReiserFS mostly as Fileserver 
> > (hundred of gigabytes) and proxy caches.
> 
> I am using this code on my production server myself ;)
> 
> > If someone would ask me: Go for 2.4 mainline inclusion w/o going via -ac! :)
> 
> Chris should decide (and Marcelo should agree) (Actually Chris thought it is good
> idea to submit data-logging to Marcelo now, too). I have no objections.
> Also now, that quota v2 code is in place, even quota code can be included.
> 
> Also it would be great to port this stuff to 2.5 (yes, I know Chris wants this to be 
> in 2.4 first)

Marcelo seems to like being really conservative on this point, and I
don't have a problem with Oleg's original idea to just do relocation in
2.4.22 and the full data logging in 2.4.23-pre4 (perhaps +quota now that
32 bit quota support is in there).

2.5 porting work has restarted at last, Oleg's really been helpful with
keeping the 2.4 stuff up to date.

-chris




Re: Horrible ftruncate performance

2003-07-11 Thread Oleg Drokin
Hello!

On Fri, Jul 11, 2003 at 05:34:12PM +0200, Marc-Christian Petersen wrote:

> > Actually I did it already, as data-logging patches can be applied to
> > 2.4.22-pre3 (where this truncate patch was included).
> > > Maybe it _IS_ time for this _AND_ all the other data-logging patches?
> > > 2.4.22-pre5?
> > It's Chris turn. I thought it is good idea to test in -ac first, though
> > (even taking into account that these patches are part of SuSE's stock
> > kernels).
> Well, I don't think that testing in -ac is necessary at all in this case.

Maybe not. But it is still useful ;)

> I am using WOLK on many production machines with ReiserFS mostly as Fileserver 
> (hundred of gigabytes) and proxy caches.

I am using this code on my production server myself ;)

> If someone would ask me: Go for 2.4 mainline inclusion w/o going via -ac! :)

Chris should decide (and Marcelo should agree). (Actually, Chris thought it is a
good idea to submit data-logging to Marcelo now, too.) I have no objections.
Also, now that the quota v2 code is in place, even the quota code can be included.

Also it would be great to port this stuff to 2.5 (yes, I know Chris wants this to be 
in 2.4 first)

Bye,
Oleg


Re: Horrible ftruncate performance

2003-07-11 Thread Oleg Drokin
Hello!

On Fri, Jul 11, 2003 at 05:32:49PM +0200, Dieter Nützel wrote:
> > > OK some "hand work"...
> Where does this come from?

It has been there for a long time. Not less than 2 years, I'd say.

> I don't find it in my tree:

The reiserfs quota patch got rid of it.
Here's the relevant part of my diff:
if (retval) {
-   reiserfs_free_block (th, allocated_block_nr);
+   reiserfs_free_block (th, inode, allocated_block_nr, 1);
goto failure;
}
-   if (done) {
-   inode->i_blocks += inode->i_sb->s_blocksize / 512;
-   } else {
+   if (!done) {
/* We need to mark new file size in case this function will be
   interrupted/aborted later on. And we may do this only for
   holes. */

Bye,
Oleg


Re: Horrible ftruncate performance

2003-07-11 Thread Dieter Nützel
On Friday, 11 July 2003 17:36, Marc-Christian Petersen wrote:
> On Friday 11 July 2003 17:32, Dieter Nützel wrote:
>
> Hi Dieter,
>
> > Where does this come from?
> > I don't find it in my tree:
> >
> > fs/reiserfs/inode.c
> >
> > -if (un.unfm_nodenum)
> >  inode->i_blocks += inode->i_sb->s_blocksize / 512;
> >  //mark_tail_converted (inode);
>
> reiserfs-quota patch removes this.

Ah, thank you very much.

I can go ahead, then.

-Dieter



Re: Horrible ftruncate performance

2003-07-11 Thread Dieter Nützel
On Friday, 11 July 2003 17:32, Oleg Drokin wrote:
> Hello!
>
> On Fri, Jul 11, 2003 at 05:27:25PM +0200, Dieter Nützel wrote:
> > > Actually I did it already, as data-logging patches can be applied to
> > > 2.4.22-pre3 (where this truncate patch was included).
> >
> > No -aaX.
>
> Right.
>
> > > > Maybe it _IS_ time for this _AND_ all the other data-logging patches?
> > > > 2.4.22-pre5?
> > >
> > > It's Chris turn. I thought it is good idea to test in -ac first, though
> > > (even taking into account that these patches are part of SuSE's stock
> > > kernels).
> >
> > I don't think -ac would make it => No big Reiser involved...
>
> Would make what?
> I think Alan has agreed to put the data-logging code in already.

OK, good to hear.

But "all" ReiserFS data-logging users running -aa (SuSE) kernels (apart from 
WOLK ;-).

-Dieter



Re: Horrible ftruncate performance

2003-07-11 Thread Marc-Christian Petersen
On Friday 11 July 2003 17:32, Dieter Nützel wrote:

Hi Dieter,

> Where does this come from?
> I don't find it in my tree:
>
> fs/reiserfs/inode.c
>
> -if (un.unfm_nodenum)
>  inode->i_blocks += inode->i_sb->s_blocksize / 512;
>  //mark_tail_converted (inode);

reiserfs-quota patch removes this.

--
ciao, Marc



Re: Horrible ftruncate performance

2003-07-11 Thread Marc-Christian Petersen
On Friday 11 July 2003 17:24, Oleg Drokin wrote:

Hi Oleg,

> Actually I did it already, as data-logging patches can be applied to
> 2.4.22-pre3 (where this truncate patch was included).
> > Maybe it _IS_ time for this _AND_ all the other data-logging patches?
> > 2.4.22-pre5?
> It's Chris turn. I thought it is good idea to test in -ac first, though
> (even taking into account that these patches are part of SuSE's stock
> kernels).
Well, I don't think that testing in -ac is necessary at all in this case.

Hundreds (maybe even thousands) of users are using the data-logging stuff; SuSE has
had it for years, WOLK has had it for years. At least for WOLK I know that there
isn't a single problem with the data-logging stuff (there might be, but it hasn't
been hit yet ;) and there are also tons of WOLK + reiserfs-data-logging users, and I
can be 100% sure that if there were a problem, my inbox would tell me so ;)

I am using WOLK on many production machines with ReiserFS, mostly as a file server
(hundreds of gigabytes) and proxy caches.

If someone would ask me: Go for 2.4 mainline inclusion w/o going via -ac! :)

--
ciao, Marc



Re: Horrible ftruncate performance

2003-07-11 Thread Dieter Nützel
On Friday, 11 July 2003 17:24, Oleg Drokin wrote:
> Hello!
>
> On Fri, Jul 11, 2003 at 05:16:56PM +0200, Dieter Nützel wrote:
> > OK some "hand work"...

Where does this come from?
I don't find it in my tree:

fs/reiserfs/inode.c

-if (un.unfm_nodenum)
 inode->i_blocks += inode->i_sb->s_blocksize / 512;
 //mark_tail_converted (inode);

Thanks,
Dieter



Re: Horrible ftruncate performance

2003-07-11 Thread Oleg Drokin
Hello!

On Fri, Jul 11, 2003 at 05:27:25PM +0200, Dieter Nützel wrote:
> > Actually I did it already, as data-logging patches can be applied to
> > 2.4.22-pre3 (where this truncate patch was included).
> No -aaX.

Right.

> > > Maybe it _IS_ time for this _AND_ all the other data-logging patches?
> > > 2.4.22-pre5?
> > It's Chris turn. I thought it is good idea to test in -ac first, though
> > (even taking into account that these patches are part of SuSE's stock
> > kernels).
> I don't think -ac would make it => No big Reiser involved...

Would make what?
I think Alan has agreed to put the data-logging code in already.

Bye,
Oleg


Re: Horrible ftruncate performance

2003-07-11 Thread Dieter Nützel
On Friday, 11 July 2003 17:24, Oleg Drokin wrote:
> Hello!
>
> On Fri, Jul 11, 2003 at 05:16:56PM +0200, Dieter Nützel wrote:
> > OK some "hand work"...
>
> Actually I did it already, as data-logging patches can be applied to
> 2.4.22-pre3 (where this truncate patch was included).

No -aaX.

> > Maybe it _IS_ time for this _AND_ all the other data-logging patches?
> > 2.4.22-pre5?
>
> It's Chris turn. I thought it is good idea to test in -ac first, though
> (even taking into account that these patches are part of SuSE's stock
> kernels).

I don't think -ac would make it => No big Reiser involved...

-Dieter