Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Bradley Kite

On Sat, Mar 02, 2002 at 06:55:24PM +0300, Oleg Drokin wrote:
 Hello!
 
 This is a known problem. Chris and I are working on it exactly right now.
 This is a problem related to the fact that metadata is located on the other
 side of the disk than the actual data.
 
Might it be better to read (and cache) all of the metadata when a file is first
opened, and then just read the file data as needed?

--
Brad



Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Anders Widman


 On Saturday, March 02, 2002 06:55:24 PM +0300 Oleg Drokin [EMAIL PROTECTED] wrote:

 Hello!
 
 On Fri, Mar 01, 2002 at 07:16:08PM +0100, Matthias Andree wrote:
 I have an observation here that I cannot explain to myself.
 It seems as though ReiserFS impaired my throughput on 650 MB files,
 while ext3fs on the same drive did not.
 Known problem.
 
 drive to hold images for writing CDs, /dev/sdb4, formatted with
 reiserfs, and, because tail packing is pointless anyhow, mounted with -o
 notail.
 Tails are not used for files bigger than 16k.
 
 However, when writing to an ATAPI 16x CD writer, the buffer ran empty,
 triggering burnproof support. I then ran zcav to figure how fast the
 drive itself was, and /dev/sdb ranged from 7.9 to 4.8 MB/s, no problem
 here. When I read a CD-Image with dd (tried default block size and
 bs=1048576), I only got 1.9 MB/s, evidently not sufficient to keep
 feeding the CD-writer (16x needs 2.4 MB/s). I then nuked the whole disk,
 reformatted it with ext3fs, everything is fine now, dd to the CD-Image
 gives me 7.8 MB/s with bs=1M.
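
(For reference, the dd test here amounts to timing plain sequential
read(2) calls over the image file. A minimal sketch of the same
measurement, as a hypothetical standalone tool with the file path taken
from argv[1] and a 1 MiB buffer matching bs=1048576:)

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    const size_t bs = 1 << 20;             /* 1 MiB, matching dd bs=1048576 */
    char *buf = malloc(bs);
    struct timeval t0, t1;
    long long total = 0;
    double secs;
    ssize_t n;
    int fd;

    if (argc < 2 || !buf)
        return 1;
    if ((fd = open(argv[1], O_RDONLY)) < 0) {
        perror(argv[1]);
        return 1;
    }
    gettimeofday(&t0, NULL);
    while ((n = read(fd, buf, bs)) > 0)    /* sequential reads, like dd */
        total += n;
    gettimeofday(&t1, NULL);
    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%lld bytes in %.2f s = %.2f MB/s\n", total, secs,
           total / secs / 1e6);
    close(fd);
    free(buf);
    return 0;
}

(As with dd, the figure is only meaningful on a cold cache, e.g. on a
freshly mounted filesystem.)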
 
 Does any of the *pending patches address this problem? I observed this
 on several kernel versions, 2.4.14, 2.4.16, 2.4.19-pre1-ac2.
 
 This is a known problem. Chris and I are working on it exactly right now.
 This is a problem related to the fact that metadata is located on the other
 side of the disk than the actual data.

 I would not say that speeds this bad are a known problem.  1.9MB/s is
 much too slow.  Is that FS very full?  Fragmentation is the only thing
 that should be causing this.

 -chris

Even with 'heavy' fragmentation this is quite low. A quick benchmark
of my 5400rpm 80GB disk gave me an average of 30 MB/s. However, when
simulating heavy fragmentation (10 000+ fragments on a 1GB file) I get
about 2MB/s.

Are DMA, IRQ unmasking, read-ahead and similar settings activated?
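
(The settings above are what hdparm -d / -u / -a report. As a rough
illustration only, the sketch below checks them directly via the same
ioctls hdparm uses; it assumes a 2.4-era Linux IDE block device whose
path is passed as argv[1].)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>    /* HDIO_GET_DMA, HDIO_GET_UNMASKINTR */
#include <linux/fs.h>       /* BLKRAGET */

int main(int argc, char **argv)
{
    long dma = -1, unmask = -1, ra = -1;
    int fd;

    if (argc < 2)
        return 1;
    if ((fd = open(argv[1], O_RDONLY | O_NONBLOCK)) < 0) {
        perror(argv[1]);
        return 1;
    }
    if (ioctl(fd, HDIO_GET_DMA, &dma) == 0)            /* hdparm -d */
        printf("using_dma  = %ld\n", dma);
    if (ioctl(fd, HDIO_GET_UNMASKINTR, &unmask) == 0)  /* hdparm -u */
        printf("unmaskirq  = %ld\n", unmask);
    if (ioctl(fd, BLKRAGET, &ra) == 0)                 /* hdparm -a */
        printf("readahead  = %ld sectors\n", ra);
    close(fd);
    return 0;
}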

//AW




Re: [reiserfs-list] Reiserfs on Freebsd

2002-03-03 Thread Hans Reiser

Bradley Kite wrote:

Hi there,

I currently have too much time on my hands and am looking
into the possibilities (and feasibility) of implementing ReiserFS
on FreeBSD.

Does anybody know if someone is currently doing this? I don't want to
duplicate the effort involved, and would much rather join an
existing project that is trying to achieve the same thing.

Also (correct me if I am wrong), I believe that the main ReiserFS
code can remain pretty much the same, and only the parts where it
interfaces with the VFS and low-level device drivers would need
to be changed in order to port it over to FreeBSD. This is just a guess
though, as I haven't really had a proper look at the ReiserFS code yet.

Any guidance/advice will be much appreciated.

--
Bradley Kite


No one is doing it. Be aware that ReiserFS is GPL'd, and that you need 
to maintain the GPL when doing the port.

It is a very large job you are taking on. Are you sure you want to 
commit that much time without a commercial sponsor paying for it?  (We 
have not done the port ourselves because no one will pay for it.)

Hans




Re: [reiserfs-list] postmark performance numbers for ReiserFS

2002-03-03 Thread Hans Reiser

We'll try to reproduce your results. We haven't run postmark recently, I 
think. Elena, can you try to reproduce?

Can you describe your hardware?  Do you have tails on or off?

Hans

Ray Bryant wrote:

I've been working on a draft of a file systems performance paper
comparing ext2, ext3, reiserfs, and xfs performance when running under
a couple of different benchmarks, one of which is postmark. I'm seeing
numbers which are pretty dramatically different from the ones that are
posted on the www.reiserfs.org web site for this benchmark. Here is an
example of the kind of results I have been getting:

4KB   File System Block Size
512B  read/write size   (postmark default)
500-9.6K file size distribution (postmark default)
3 trials (shown below as trial1/trial2/trial3)
1 thread (we've made postmark multithreaded)
 
file system      create/s      transactions/s  delete/s
ext2             692/703/711   405/417/408     2129/27691/27507
ext3             264/262/265   238/246/246     469/505/472
reiserfs         58/64/58      58/55/59        77/84/76
reiserfs notail  77/77/74      81/77/77        120/116/124
xfs              958/971/921   227/224/224     222/223/221

This is with kernel 2.4.16; the machine is a 4-way 700 MHz Xeon with 3GB
of memory. There are 4 file systems being used. The disks are old, slow
SCSI disks (8 GB each), and each file system is on a separate disk. The
configuration file for postmark is as follows:

# Like default config, but only 1 thread
# 512b read/write
# 500 B to 9.77 KB file size
# 8 threads
set location + /mnt/sdb1 1
set location + /mnt/sdc1 1
set location + /mnt/sdd1 1
set location + /mnt/sde1 1
set number 10
set transactions 6
set threads 1
set subdirectories 2000
set seed random
set print 10
set bias read 75
set bias create 60
show
run
quit

We've extended the existing postmark benchmark in a number of ways, one
of which is to make it multithreaded. However, this set of runs was
done with a single thread, so it SHOULD be comparable to the published
numbers.

The problem is that our numbers show ReiserFS is slower than the other
measured systems, while the measurements on the web page indicate
ReiserFS is faster for this particular benchmark. We'd like to be as
correct and fair as possible about any comparisons we publish (we don't
want to create another Mindcraft-like controversy), so we are interested
in understanding if we have somehow configured or are running ReiserFS
incorrectly.

Just to make sure we've not done anything silly in our version of the
postmark benchmark, I've attached a copy of the current code. We'd
appreciate it if someone on this list could point out what we are
doing that makes ReiserFS run slowly; alternatively, if they could
verify or refute our results on some other hardware setup, we would
appreciate that as well.







Re: [reiserfs-list] postmark-2.0.c benchmark; previous email

2002-03-03 Thread Hans Reiser

Ray Bryant wrote:

The list software apparently stripped this off because it was a binary
file (I had to gzip it to get it below the 40KB message size limit for
this list). Anyway, if you are interested in a copy of the benchmark,
please email me and I will send it along to you.

Please send it to reiserfs-dev. Anything that finds a performance hole, we 
want to look at in detail.

hans





Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Hans Reiser

Anders Widman wrote:

On Saturday, March 02, 2002 06:55:24 PM +0300 Oleg Drokin [EMAIL PROTECTED] wrote:


Hello!

On Fri, Mar 01, 2002 at 07:16:08PM +0100, Matthias Andree wrote:

I have an observation here that I cannot explain to myself.
It seems as though ReiserFS impaired my throughput on 650 MB files,
while ext3fs on the same drive did not.

Known problem.

drive to hold images for writing CDs, /dev/sdb4, formatted with
reiserfs, and, because tail packing is pointless anyhow, mounted with -o
notail.

Tails are not used for files bigger than 16k.

However, when writing to an ATAPI 16x CD writer, the buffer ran empty,
triggering burnproof support. I then ran zcav to figure how fast the
drive itself was, and /dev/sdb ranged from 7.9 to 4.8 MB/s, no problem
here. When I read a CD-Image with dd (tried default block size and
bs=1048576), I only got 1.9 MB/s, evidently not sufficient to keep
feeding the CD-writer (16x needs 2.4 MB/s). I then nuked the whole disk,
reformatted it with ext3fs, everything is fine now, dd to the CD-Image
gives me 7.8 MB/s with bs=1M.

Does any of the *pending patches address this problem? I observed this
on several kernel versions, 2.4.14, 2.4.16, 2.4.19-pre1-ac2.

This is a known problem. Chris and I are working on it exactly right now.
This is a problem related to the fact that metadata is located on the other
side of the disk than the actual data.


I would not say that speeds this bad are a known problem.  1.9MB/s is
much too slow.  Is that FS very full?  Fragmentation is the only thing
that should be causing this.


-chris


Even with 'heavy' fragmentation this is quite low. A quick benchmark
of my 5400rpm 80GB disk gave me an average of 30 MB/s. However, when
simulating heavy fragmentation (10 000+ fragments on a 1GB file) I get
about 2MB/s.

Are DMA, IRQ unmasking, read-ahead and similar settings activated?

//AW



So, if Anders nukes his ext3fs partition and reformats with reiserfs, 
what is the performance?

We need the repacker.  A pity it won't happen until January, when v4.1 
will come out.

Hans





Re: [reiserfs-list] Reiserfs on Freebsd

2002-03-03 Thread Bradley Kite

On Sun, Mar 03, 2002 at 03:00:38PM +0300, Hans Reiser wrote:
 No one is doing it. Be aware that ReiserFS is GPL'd, and that you need 
 to maintain the GPL when doing the port.
 
 It is a very large job you are taking on. Are you sure you want to 
 commit that much time without a commercial sponsor paying for it?  (We 
 have not done the port ourselves because no one will pay for it.)
 
 Hans

Hi there Hans,

I am merely looking at the feasibility of porting it to FreeBSD at present
and have not fully committed myself to it yet, as I still have a lot
of studying to do (both with regard to Linux's implementation and
FreeBSD's VFS details).

Should I decide to go ahead with it, I think it best that I do a read-only
version at first, which seems like a much easier task than writing
the journaling code etc., and solves my problem of convenience
(accessing files on my ReiserFS partition from within FreeBSD).
This should hopefully encourage others to consider adding write
support in the future, but it is not my main objective.

If I do go ahead with writing the read-only implementation, I will deal
with the licensing issue as I get to it, because it may well be easier
to start from scratch than to untangle ReiserFS from the Linux kernel.

--
Brad




[reiserfs-list] HP370: ATARAID/ReiserFS/LVM need advice

2002-03-03 Thread Dieter Nützel

Hello to all of you!

I have to reinstall a Linux server for a school, which is a reference system 
for more to come. Main usage is SAMBA (DOMAIN logon and DOMAIN master), 
squid, and Apache.

It had been running since June 2001 under SuSE 7.2, ReiserFS 3.6 and kernel 
2.4.6-ac (the first HPT370 ataraid stuff). Much hand-crafted stuff...

It consists of four identical disks on the HPT370, and I used it with (software) 
ATARAID 0+1 (yes, it has worked since then). But the performance was poor (compared 
to numbers floating around in the Windoze world) and there were some hiccups 
(kupdated) during low/medium IO and some network traffic.

I've configured the system with an install hard disk, from which I did fdisk and 
format without a hitch.

Uniform Multi-Platform E-IDE driver Revision: 6.31
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
VP_IDE: IDE controller on PCI bus 00 dev 39
VP_IDE: chipset revision 6
VP_IDE: not 100% native mode: will probe irqs later
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
VP_IDE: VIA vt82c686b (rev 40) IDE UDMA100 controller on pci00:07.1
ide0: BM-DMA at 0xa000-0xa007, BIOS settings: hda:pio, hdb:pio
ide1: BM-DMA at 0xa008-0xa00f, BIOS settings: hdc:DMA, hdd:pio
HPT370: IDE controller on PCI bus 00 dev 98
PCI: Found IRQ 11 for device 00:13.0
PCI: Sharing IRQ 11 with 00:09.0
HPT370: chipset revision 3
HPT370: not 100% native mode: will probe irqs later
ide2: BM-DMA at 0xcc00-0xcc07, BIOS settings: hde:pio, hdf:pio
ide3: BM-DMA at 0xcc08-0xcc0f, BIOS settings: hdg:pio, hdh:pio
hdc: LG DVD-ROM DRD-8120B, ATAPI CD/DVD-ROM drive
hde: FUJITSU MPG3204AT E, ATA DISK drive
hdf: FUJITSU MPG3204AT E, ATA DISK drive
hdg: FUJITSU MPG3204AT E, ATA DISK drive
hdh: FUJITSU MPG3204AT E, ATA DISK drive
ide1 at 0x170-0x177,0x376 on irq 15
ide2 at 0xbc00-0xbc07,0xc002 on irq 11
ide3 at 0xc400-0xc407,0xc802 on irq 11
hde: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
hdf: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
hdg: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
hdh: 40031712 sectors (20496 MB) w/512KiB Cache, CHS=39714/16/63
hdc: ATAPI 40X DVD-ROM drive, 512kB Cache, UDMA(33)
Uniform CD-ROM driver Revision: 3.12
Partition check:
 hde:
 hdf: [PTBL] [2491/255/63] hdf1 hdf2 hdf3 hdf4
 hdg:
 hdh: unknown partition table
[-]
 ataraid/d0: ataraid/d0p1 ataraid/d0p2 ataraid/d0p3 ataraid/d0p4
Highpoint HPT370 Softwareraid driver for linux version 0.01
Drive 0 is 19546 Mb
Drive 1 is 19546 Mb
Raid array consists of 2 drives.

nordbeck@stmartin:~ df
Filesystem           1k-blocks    Used  Available Use% Mounted on
/dev/ataraid/d0p1       144540  104116      40424  73% /
/dev/ataraid/d0p2       979928   37804     942124   4% /tmp
/dev/ataraid/d0p3      1967892   83040    1884852   5% /var
/dev/ataraid/d0p5      9767184   32840    9734344   1% /var/squid
/dev/ataraid/d0p6     17381760  269840   17111920   2% /home
/dev/ataraid/d0p7      4891604 1305296    3586308  27% /usr
/dev/ataraid/d0p8      4891604  452176    4439428  10% /opt
shmfs                   257120       0     257120   0% /dev/shm

fstab
/dev/ataraid/d0p1   /               reiserfs   defaults,noatime
/dev/ataraid/d0p2   /tmp            reiserfs   defaults,notail
/dev/ataraid/d0p3   /var            reiserfs   defaults
/dev/ataraid/d0p5   /var/squid      reiserfs   defaults
/dev/ataraid/d0p6   /home           reiserfs   defaults
/dev/ataraid/d0p7   /usr            reiserfs   defaults
/dev/ataraid/d0p8   /opt            reiserfs   defaults
/dev/cdrom          /media/cdrom    auto       ro,noauto,user,exec
/dev/dvd            /media/dvd      auto       ro,noauto,user,exec
/dev/fd0            /media/floppy   auto       noauto,user,sync
proc                /proc           proc       defaults
devpts              /dev/pts        devpts     defaults
usbdevfs            /proc/bus/usb   usbdevfs   defaults,noauto

The second problem was lilo (version 21.7-5). I didn't get the system to boot from 
the RAID. I tried it with an old fifth disk, but no luck either. So I had to boot 
from floppy. Not so nice for a standalone server.
I had to type in root=/dev/ataraid/d0p1 with 2.4.6-ac2 and root=7201 with 
newer kernels at the lilo boot prompt. Why the change?

To get it going I tested lilo 22.1-beta and 22.1 final, but nothing worked.
Lilo didn't show any error messages, but it didn't boot.

I tried many different lilo.conf versions, but no go. Some examples:

lilo.conf
boot= /dev/hdf
vga = 791
read-only
menu-scheme = Wg:kw:Wg:Wg
lba32
prompt
timeout = 80
message = /boot/message

  disk   = /dev/hdc
  bios   = 0x82

  disk   = /dev/hde
  bios   = 0x80

  disk   = /dev/hdg
  bios   = 0x81

  image  = /boot/vmlinuz
  label  = linux
  root   = 7201
  initrd = /boot/initrd
  append = 

[reiserfs-list] Re: HP370: ATARAID/ReiserFS/LVM need advice

2002-03-03 Thread Alan Cox

 from floppy. Not so nice for a standalone server.
 I had to type in root=/dev/ataraid/d0p1 with 2.4.6-ac2 and root=7201 with 
 newer kernels at the lilo boot prompt. Why the change?

Linus never took the patch to put /dev/ataraid/ in the device name list
for init.c.

Alan



Re: [reiserfs-list] postmark performance numbers for ReiserFS

2002-03-03 Thread Chris Mason



On Friday, March 01, 2002 11:55:18 PM -0600 Ray Bryant [EMAIL PROTECTED] wrote:

 I've been working on a draft of a file systems performance paper
 comparing ext2, ext3, reiserfs, and xfs performance when running under
 a couple of different benchmarks, one of which is postmark.  I'm seeing
 numbers which are pretty dramatically different than the ones that are
 posted on the www.reiserfs.org web site for this benchmark.  Here is an
 example of the kind of results I have been getting:
 
 4KB   File System Block Size
 512B  read/write size   (postmark default)
 500-9.6K file size distribution (postmark default)

With this file size, the reiserfs tail code will be a bottleneck.
I'd suggest mounting with -o notail; it makes a huge difference in
my postmark tests.

-chris




Re: [reiserfs-list] Reiserfs on Freebsd

2002-03-03 Thread Greg Lehey

On Sunday,  3 March 2002 at 15:00:38 +0300, Hans Reiser wrote:
 Bradley Kite wrote:

 Hi there,

 I currently have too much time on my hands and am looking
 into the possibilities (and feasibility) of implementing ReiserFS
 on FreeBSD.

 Does anybody know if someone is currently doing this? I don't want to
 duplicate the effort involved, and would much rather join an
 existing project that is trying to achieve the same thing.

 Also (correct me if I am wrong), I believe that the main ReiserFS
 code can remain pretty much the same, and only the parts where it
 interfaces with the VFS and low-level device drivers would need
 to be changed in order to port it over to FreeBSD. This is just a guess
 though, as I haven't really had a proper look at the ReiserFS code yet.

 Any guidance/advice will be much appreciated.

 No one is doing it. Be aware that ReiserFS is GPL'd, and that you
 need to maintain the GPL when doing the port.

Yes, we've discussed this already wrt JFS. The requirements are
relatively easy to fulfil.

 It is a very large job you are taking on. Are you sure you want to
 commit that much time without a commercial sponsor paying for it?
 (We have not done the port ourselves because no one will pay for
 it.)

Agreed, this is not a task to be taken lightly.

Greg
--
See complete headers for address and phone numbers



Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Matthias Andree

Chris Mason [EMAIL PROTECTED] writes:

 I would not say that speeds this bad are a known problem.  1.9MB/s is
 much too slow.  Is that FS very full?  Fragmentation is the only thing
 that should be causing this.

We can exclude that: the partition is empty except for that single file, or
maybe two files of several hundred MB each. CD images, Debian 2.2r5 in
my case.

-- 
Matthias Andree

GPG encrypted mail welcome, unless it's unsolicited commercial email.



Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Matthias Andree

Oleg Drokin [EMAIL PROTECTED] writes:

 Yes, it is slow, but an overall disk throughput of 7M/sec suggests this is an
 old drive. Old drives tend to have worse seeking speed than today's drives.

But how much seeking is done on one 650 MB file that's been written onto
an empty partition? I presume not too much.

-- 
Matthias Andree

GPG encrypted mail welcome, unless it's unsolicited commercial email.



Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Matthias Andree

Anders Widman [EMAIL PROTECTED] writes:

 Even with 'heavy' fragmentation this is quite low. A quick benchmark
 of my 5400rpm 80GB disk gave me an average of 30 MB/s. However, when
 simulating heavy fragmentation (10 000+ fragments on a 1GB file) I get
 about 2MB/s.

 Are DMA, IRQ unmasking, read-ahead and similar settings activated?

SCSI here, with the aic7xxx 5.x and 6.x drivers, no particular tuning in
place except that I told the AHA2940 to negotiate Ultra-Wide; it has
braindead default settings (negotiates 10 MXfers/s only, no Ultra), so
we can safely assume it did DMA.

-- 
Matthias Andree

GPG encrypted mail welcome, unless it's unsolicited commercial email.



Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Chris Mason



On Monday, March 04, 2002 02:04:52 AM +0100 Matthias Andree 
[EMAIL PROTECTED] wrote:

 Oleg Drokin [EMAIL PROTECTED] writes:
 
 Yes, it is slow, but an overall disk throughput of 7M/sec suggests this is an
 old drive. Old drives tend to have worse seeking speed than today's drives.
 
 But how much seeking is done on one 650 MB file that's been written onto
 an empty partition? I presume not too much.

OK, I think Oleg has the right idea then.  What happens is, all the
tree nodes go at the start of the disk, and the data blocks are after
the 10% mark.  So, you're probably seeking between the tree nodes and 
the data blocks.

He and I have been trading patches to be smarter about reading
the metadata; I can't demonstrate the slowdown here, but he can.  So,
hopefully we'll be able to send you something this week.

-chris




[reiserfs-list] Re: [PATCH] radix-tree pagecache for 2.4.19-pre2-ac2

2002-03-03 Thread Ed Tomlinson

On March 3, 2002 03:03 pm, Christoph Hellwig wrote:
 I have uploaded an updated version of the radix-tree pagecache patch
 against 2.4.19-pre2-ac2.  News in this release:

 * fix a deadlock when vmtruncate takes i_shared_lock twice by introducing
   a new mapping->page_lock that mutexes mapping->page_tree. (akpm)
 * move setting of page->flags back out of move_to/from_swap_cache. (akpm)
 * put back lost page state settings in shmem_unuse_inode. (akpm)
 * get rid of remove_page_from_inode_queue - there was only one caller. (me)
 * replace add_page_to_inode_queue with ___add_to_page_cache. (me)

 Please give it some serious beating while I try to get 2.5 working and
 port the patch over 8)
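
(Background for readers who haven't looked at the patch: a radix tree
maps a page index to a page through a small fixed-fanout trie, replacing
the old global hash. The toy sketch below only illustrates that idea,
with a fixed three-level, 64-way tree, no locking, and leaked nodes; it
is not the kernel's radix_tree API.)

#include <stdio.h>
#include <stdlib.h>

#define SHIFT  6
#define FANOUT (1UL << SHIFT)
#define MASK   (FANOUT - 1)
#define HEIGHT 3                       /* covers indices below 1 << 18 */

struct node { void *slots[FANOUT]; };

/* Walk (or build) the interior levels, then drop the item in a leaf slot. */
static int rt_insert(struct node *root, unsigned long idx, void *page)
{
    struct node *n = root;
    int level;

    for (level = HEIGHT - 1; level > 0; level--) {
        unsigned long slot = (idx >> (level * SHIFT)) & MASK;
        if (!n->slots[slot] && !(n->slots[slot] = calloc(1, sizeof(*n))))
            return -1;                 /* allocation failed */
        n = n->slots[slot];
    }
    n->slots[idx & MASK] = page;
    return 0;
}

/* Same walk without allocation; NULL means "no page at this index". */
static void *rt_lookup(struct node *root, unsigned long idx)
{
    struct node *n = root;
    int level;

    for (level = HEIGHT - 1; n && level > 0; level--)
        n = n->slots[(idx >> (level * SHIFT)) & MASK];
    return n ? n->slots[idx & MASK] : NULL;
}

int main(void)
{
    static struct node root;
    static char page[4096];            /* stand-in for a struct page */

    rt_insert(&root, 123456, page);
    printf("index 123456 -> %p (inserted %p)\n",
           rt_lookup(&root, 123456), (void *)page);
    printf("index 7      -> %p (never inserted)\n", rt_lookup(&root, 7));
    return 0;
}

(A per-mapping tree like this is what makes the mapping->page_lock /
mapping->page_tree pairing in the changelog above natural: the structure
being locked belongs to one mapping rather than to one global hash.)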

Got this after a couple of hours with pre2-ac2+preempt+radixtree.

Reverted to pre2-ac2.

Hope this helps,

Ed Tomlinson


ksymoops 2.4.3 on i586 2.4.19-pre2-ac2.  Options used
 -V (default)
 -k 20020303231146.ksyms (specified)
 -l 20020303231146.modules (specified)
 -o /lib/modules/2.4.19-pre2-ac2-prert (specified)
 -m /boot/System.map-2.4.19-pre2-ac2-prert (specified)

Warning: loading 
/lib/modules/2.4.19-pre2-ac2-prert/kernel/net/ipv4/netfilter/ipchains.o will taint the 
kernel: non-GPL license - BSD without advertisement clause
ac97_codec: AC97 Audio codec, id: 0x4352:0x5903 (Cirrus Logic CS4297)
kernel BUG at page_alloc.c:239!
invalid operand: 
CPU:0
EIP:0010:[c012d247]Tainted: P 
Using defaults from ksymoops -t elf32-i386 -a i386
EFLAGS: 00010282
eax: 0020   ebx: c026a73c   ecx: ffdd   edx: d40e
esi: c13ca53c   edi: 0001   ebp: d40e   esp: d40e1cb0
ds: 0018   es: 0018   ss: 0018
Process kdeinit (pid: 786, stackpage=d40e1000)
Stack: c022d19c 00ef c026a73c c026a8f8 01ff  c01117e4 d40e 
   000150d8 0292 c026a77c  c026a73c c012d435 00f0 c15b2b14 
    7b9c 7b9c 0001 c026a8f4 00f0 c012d2e6 0003 
Call Trace: [c01117e4] [c012d435] [c012d2e6] [c0124b23] [c0137c91] 
   [c0137f20] [c0135f37] [c0136124] [c0171f32] [c016df25] [c0136124] 
   [c015cfe1] [c015d568] [c015d68f] [c01467f7] [c013d940] [c013e2e9] 
   [c013d5fe] [c013e57a] [c013ea21] [c0132ba0] [c0106da3] 
Code: 0f 0b 83 c4 08 8d 74 26 00 8b 46 14 a8 80 74 19 68 ef 00 00 

EIP; c012d246 rmqueue+256/2e0   =
Trace; c01117e4 schedule+230/268
Trace; c012d434 __alloc_pages+70/2bc
Trace; c012d2e6 _alloc_pages+16/18
Trace; c0124b22 find_or_create_page+2a/d8
Trace; c0137c90 grow_dev_page+20/ac
Trace; c0137f20 grow_buffers+f4/13c
Trace; c0135f36 getblk+2a/40
Trace; c0136124 bread+18/bc
Trace; c0171f32 reiserfs_bread+16/24
Trace; c016df24 search_by_key+5c/d40
Trace; c0136124 bread+18/bc
Trace; c015cfe0 search_by_entry_key+1c/1c0
Trace; c015d568 reiserfs_find_entry+7c/134
Trace; c015d68e reiserfs_lookup+6e/e0
Trace; c01467f6 d_alloc+1a/194
Trace; c013d940 real_lookup+70/118
Trace; c013e2e8 link_path_walk+79c/a14
Trace; c013d5fe getname+5e/9c
Trace; c013e57a path_walk+1a/1c
Trace; c013ea20 __user_walk+34/50
Trace; c0132ba0 sys_access+94/128
Trace; c0106da2 system_call+32/40
Code;  c012d246 rmqueue+256/2e0
 _EIP:
Code;  c012d246 rmqueue+256/2e0   =
   0:   0f 0b ud2a  =
Code;  c012d248 rmqueue+258/2e0
   2:   83 c4 08  add$0x8,%esp
Code;  c012d24a rmqueue+25a/2e0
   5:   8d 74 26 00   lea0x0(%esi,1),%esi
Code;  c012d24e rmqueue+25e/2e0
   9:   8b 46 14  mov0x14(%esi),%eax
Code;  c012d252 rmqueue+262/2e0
   c:   a8 80 test   $0x80,%al
Code;  c012d254 rmqueue+264/2e0
   e:   74 19 je 29 _EIP+0x29 c012d26e rmqueue+27e/2e0
Code;  c012d256 rmqueue+266/2e0
  10:   68 ef 00 00 00push   $0xef

Unable to handle kernel paging request at virtual address e5746608
c0129ece
*pde = 17423067
Oops: 
CPU:0
EIP:0010:[c0129ece]Tainted: P 
EFLAGS: 00010082
eax: c906e0c0   ebx: f0415880   ecx: c158e2f0   edx: 0007
esi: c158e2f0   edi: 0246   ebp: 0020   esp: d3a29de8
ds: 0018   es: 0018   ss: 0018
Process kdeinit (pid: 789, stackpage=d3a29000)
Stack: d495fb74 0001  0001 c0222fc5 c158e2f0 0020  
   c1362970 d495fb74  c022304b d495fb74 0001 d3a28000 c1362970 
   d495fb74 0001 c02230f7 d495fb74 0001 d3a29e40 c026a73c c01245d2 
Call Trace: [c0222fc5] [c022304b] [c02230f7] [c01245d2] [c01246dc] 
   [c012474a] [c0125c7a] [c0121fad] [c0122263] [c01108bf] [c01107a8] 
   [c013b3fe] [c0106ef4] 
Code: 8b 44 81 18 89 41 14 83 f8 ff 75 18 8b 41 04 8b 11 89 42 04 

EIP; c0129ece kmem_cache_alloc+92/d4   =
Trace; c0222fc4 radix_tree_extend+54/9c
Trace; c022304a radix_tree_reserve+3e/d8
Trace; c02230f6 radix_tree_insert+12/2c
Trace; c01245d2 add_to_page_cache+32/ac
Trace; c01246dc page_cache_read+90/d8
Trace; c012474a read_cluster_nonblocking+26/40
Trace; c0125c7a filemap_nopage+12a/220
Trace; 

[reiserfs-list] Re: [PATCH] radix-tree pagecache for 2.4.19-pre2-ac2

2002-03-03 Thread Mike Fedyk

On Sun, Mar 03, 2002 at 11:55:57PM -0500, Ed Tomlinson wrote:
 On March 3, 2002 03:03 pm, Christoph Hellwig wrote:
  I have uploaded an updated version of the radix-tree pagecache patch
  against 2.4.19-pre2-ac2.  News in this release:
 
  * fix a deadlock when vmtruncate takes i_shared_lock twice by introducing
    a new mapping->page_lock that mutexes mapping->page_tree. (akpm)
  * move setting of page->flags back out of move_to/from_swap_cache. (akpm)
  * put back lost page state settings in shmem_unuse_inode. (akpm)
  * get rid of remove_page_from_inode_queue - there was only one caller. (me)
  * replace add_page_to_inode_queue with ___add_to_page_cache. (me)
 
  Please give it some serious beating while I try to get 2.5 working and
  port the patch over 8)
 
 Got this after a couple of hours with pre2-ac2+preempt+radixtree.
 

Can you try again without preempt?



Re: [reiserfs-list] postmark performance numbers for ReiserFS

2002-03-03 Thread Chris Mason



On Sunday, March 03, 2002 09:30:06 PM -0700 Andreas Dilger [EMAIL PROTECTED] 
wrote:

 On Mar 03, 2002  18:04 -0500, Chris Mason wrote:
  500-9.6K file size distribution (postmark default)
 
 With this file size, the reiserfs tail code will be a bottleneck.  
  I'd suggest mounting with -o notail; it makes a huge difference in
 my postmark tests.
 
 He already posted numbers with notail (reproduced from the original
 email below):

Ouch, sorry Ray, for some reason I only saw the first reiserfs line.
I suspect it has something to do with the number of subdirectories,
but that is a wild guess.  I'll try to reproduce.

How slow are those SCSI disks?  I don't think data=journal will help
ext3 here, unless you've changed postmark to fsync.

-chris





Re: [reiserfs-list] postmark performance numbers for ReiserFS

2002-03-03 Thread Andreas Dilger

On Mar 04, 2002  00:32 -0500, Chris Mason wrote:
 Ok, I'm not going to be able to replicate the entire test, but I can
 at least demonstrate that the high number of subdirectories is slowing down
 the creation time. I'm guessing it is either caused by the
 subdirectory inodes not being in cache often enough, or by increased
 log traffic.
 
 Try setting the number of subdirectories to 10. If this fixes the
 reiserfs performance problem, we can look into solutions.

Hmm, interesting. When Andrew was doing MTA performance testing on
ext3, he found that _increasing_ the number of directories improved
performance. I don't have the thread handy, but I _think_ it had to
do with VFS locking on the directories - more directories means that
more operations can be done in parallel, since Al made this part of
the VFS thread-safe.

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/




Re: [reiserfs-list] postmark performance numbers for ReiserFS

2002-03-03 Thread Chris Mason



On Sunday, March 03, 2002 10:46:18 PM -0700 Andreas Dilger [EMAIL PROTECTED] 
wrote:

 On Mar 04, 2002  00:32 -0500, Chris Mason wrote:
 Ok, I'm not going to be able to replicate the entire test, but I can
 at least demonstrate that the high number of subdirectories is slowing down
 the creation time.  I'm guessing it is either caused by the
 subdirectory inodes not being in cache often enough, or by increased
 log traffic.
 
 Try setting the number of subdirectories to 10.  If this fixes the
 reiserfs performance problem, we can look into solutions.
 
 Hmm, interesting.  When Andrew was doing MTA performance testing on
 ext3, he found that _increasing_ the number of directories improved
 performance.  I don't have the thread handy, but I _think_ it had to
 do with VFS locking on the directories - more directories means that
 more operations can be done in parallel since Al made this part of
 the VFS thread-safe.

Very true; you should be able to see the locking issue with a 
multi-process benchmark.  But it probably helps less and less
as the number of directories grows beyond the number of writers.

-chris






Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?

2002-03-03 Thread Oleg Drokin

Hello!

On Mon, Mar 04, 2002 at 02:04:52AM +0100, Matthias Andree wrote:
  Yes, it is slow, but an overall disk throughput of 7M/sec suggests this is an
  old drive. Old drives tend to have worse seeking speed than today's drives.
 But how much seeking is done on one 650 MB file that's been written onto
 an empty partition? I presume not too much.
162.5*2 seeks (that's right, 2 seeks per each 4M of data).
This figure is for reading.
Writing is more complex due to the journal.

Bye,
Oleg
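
(Putting rough numbers on this: with the drive's 7.9 MB/s outer-zone
rate and 2 seeks per 4 MB chunk, the effective rate depends almost
entirely on what each metadata excursion really costs in head travel,
rotational latency, and discarded read-ahead. A back-of-the-envelope
sketch, with purely hypothetical per-excursion costs:)

#include <stdio.h>

int main(void)
{
    const double raw   = 7.9;   /* MB/s sequential, from zcav (outer zone) */
    const double chunk = 4.0;   /* MB of data read between metadata visits */
    const double seeks = 2.0;   /* seeks per chunk, per Oleg's figure      */
    const double cost_ms[] = { 15.0, 100.0, 400.0, 800.0 };
    int i;

    for (i = 0; i < 4; i++) {
        double t = chunk / raw + seeks * cost_ms[i] / 1000.0;
        printf("per-excursion cost %4.0f ms -> %.2f MB/s effective\n",
               cost_ms[i], chunk / t);
    }
    return 0;
}

(On these assumptions, ordinary 15 ms seeks would leave throughput near
7.5 MB/s; to land at the observed 1.9 MB/s, each excursion has to cost
on the order of 800 ms, which is why the smarter metadata read-ahead
that Chris and Oleg are trading patches for looks like the right fix.)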