Re: [reiserfs-list] /proc/fs/reiserfs question

2002-10-15 Thread Hans Reiser

Nikita Danilov wrote:

Hans Reiser writes:
  Nikita Danilov wrote:
  
  Philippe Gramoullé writes:
Hi,

Is there any documentation that gives a description of each line
of each file in /proc/fs/reiserfs/sd(x,x)/*?

For example, what is the meaning of s_fix_nodes?
  
  I am afraid one has to dig it out of the sources. Here is a
  description of some of the fields displayed in super; they pertain
  to the in-core super-block:
  
 state: REISERFS_VALID_FS/REISERFS_ERROR_FS
  
 mount options: options given to mount
  
 gen. counter: file system generation counter; this is
 incremented with each balancing.
  
 s_kmallocs: how many kmallocs (calls to the general-purpose
 kernel memory allocator) were performed by reiserfs code.
  
 s_disk_reads: not maintained
  
 s_disk_writes: not maintained
  
 s_fix_nodes: how many times fix_nodes (the first phase of
 balancing) was performed.
  
 s_do_balance: how many times do_balance (the second phase of
 balancing) was performed.
  
 s_unneeded_left_neighbor: not maintained
  
 s_good_search_by_key_reada: not maintained
  
 s_bmaps: not maintained
  
 s_bmaps_without_search: not maintained
  
 s_direct2indirect: how many direct-to-indirect conversions
 were performed.
  
 s_indirect2direct: how many indirect-to-direct conversions
 were performed.
  
 max_hash_collisions: the largest number of hash collisions
 encountered so far.
  
 breads: not maintained
  
 bread_misses: not maintained
  
 search_by_key: how many times search_by_key (the main tree
 traversal routine) was called.
  
 search_by_key_fs_changed: how many times search_by_key ran
 concurrently with balancing.
  
 search_by_key_restarted: how many times search_by_key had
 to restart due to concurrent balancing.
  
 insert_item_restarted ------+
 paste_into_item_restarted   |
 cut_from_item_restarted     +-- how many times a particular
 delete_solid_item_restarted |   balancing operation had to
 delete_item_restarted ------+   restart
  
 leaked_oid: how many object-ids (unique identifiers
 assigned to files) were leaked.
  
 leaves_removable: how many times three leaf nodes of the
 balanced tree could have been merged into one.
  
  I guess we should keep this information somewhere on the web-site. If
  you need more detailed info, let me know.
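
For anyone who wants to pull these counters out programmatically, here is
a minimal user-space sketch. The proc path and the one "field: value" pair
per line layout are assumptions based on the list above, so adjust both to
what your kernel actually prints:

    /* Sketch: grep two of the balancing counters out of
     * /proc/fs/reiserfs/<dev>/super.  Path and line format are
     * assumed, not taken from the reiserfs sources. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1]
                                    : "/proc/fs/reiserfs/sd(0,0)/super";
        char line[256];
        unsigned long v;
        FILE *f = fopen(path, "r");

        if (!f) { perror(path); return 1; }
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, " s_fix_nodes: %lu", &v) == 1)
                printf("fix_nodes ran %lu times\n", v);
            if (sscanf(line, " s_do_balance: %lu", &v) == 1)
                printf("do_balance ran %lu times\n", v);
        }
        fclose(f);
        return 0;
    }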
  

Thanks much,

Philippe.
  
  Nikita.
  
  

  
  The unmaintained fields should be removed.  Put it on the post-Halloween
  todo list.

What about making them maintained instead? Should be trivial.

Review which ones look like they would be genuinely informative, and 
then maintain just those, ok?  After Halloween.


  
  Hans
  

Nikita.


  







Re: [reiserfs-list] Reiserfs with Samba vs NetApp Filer

2002-10-15 Thread Russell Coker

On Tue, 15 Oct 2002 15:42, Hans Reiser wrote:
 Russell Coker wrote:
 See the following graph:
 http://www.coker.com.au/~russell/hardware/46g.png
 
 This shows testing a single 46G drive, two drives on different buses at
  the same time, and two drives on the same bus at the same time.  zcav
  (part of Bonnie++) was used to perform the tests.

 I am surprised that separating them onto different buses has so little
 effect.  It looks like most of the bottleneck for large reads off two
 drives is not the IDE bus, but something else (maybe CPU or memory
 bandwidth).

I was surprised too.  Especially as it's an ATA-66 bus (the bus was expected 
to be a bottleneck).

Only a single CPU.

I would like to do more research on this matter and write a magazine article
(I already have a magazine wanting to publish it).  All I need is suitable
access to the latest hardware to perform my tests (tests on old hardware,
while still interesting research, don't sell magazines).
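
For readers who have not used it: zcav reads the device sequentially in
fixed-size zones and reports throughput per zone, which is what exposes
the outer-track/inner-track speed difference in the graph. A stripped-down
sketch of the same idea (this is not zcav's actual code, and the device
path is only an example):

    /* zcav-like sketch: read a device front to back in 100MB zones
     * and print MB/s per zone.  Minimal error handling; device path
     * is an example. */
    #include <stdio.h>
    #include <time.h>

    #define ZONE_MB 100
    #define CHUNK   (1 << 20)          /* 1MB reads */

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
        static char buf[CHUNK];
        FILE *f = fopen(argc > 1 ? argv[1] : "/dev/hdc", "rb");
        long zone = 0;

        if (!f) { perror("open"); return 1; }
        for (;;) {
            double t0 = now();
            for (int mb = 0; mb < ZONE_MB; mb++)
                if (fread(buf, 1, CHUNK, f) != CHUNK)
                    goto done;         /* end of device */
            printf("zone %ld: %.1f MB/s\n", zone++, ZONE_MB / (now() - t0));
            fflush(stdout);
        }
    done:
        fclose(f);
        return 0;
    }

(Older glibc may want -lrt for clock_gettime.)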

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




[reiserfs-list] back up to disk

2002-10-15 Thread Russell Coker

Here's an interesting article I just read.  It's just a device with a bunch of 
ATA drives inside, up to 2T of storage.  Probably anyone here could produce 
something based on ReiserFS to compete with it...


Storage start-up Avamar Technologies is launching an appliance 
this week that it claims backs up network data more quickly and 
less expensively than tape. 
http://www.nwfusion.com/news/2002/1014avamar.html?net

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




RE: [reiserfs-list] back up to disk

2002-10-15 Thread Bingner Sam J Contractor CAF CSS/SCHE

can anybody say SAN?

SAN: acronym for Storage Area Network.

This looks like it is just a SAN, plus a little software to notice when the
same data is written twice and reference the first instance instead of
writing it again...  I suspect it could use any filesystem on the drives you
wanted...
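
To sketch that "write once, reference thereafter" idea (this is only a toy
illustration, not Avamar's method; a real system would use a strong
cryptographic hash and verify contents on collision):

    /* Toy block-level dedup: map a block's hash to the number of the
     * first block that carried the same content.  FNV-1a and a
     * fixed-size linear-probe table are simplistic stand-ins. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define BLOCK_SIZE 4096
    #define TABLE_SIZE 65536                /* power of two */

    struct slot { uint64_t hash; long first_block; };
    static struct slot table[TABLE_SIZE];   /* first_block 0 == empty,
                                               so number blocks from 1 */

    static uint64_t fnv1a(const unsigned char *p, size_t n)
    {
        uint64_t h = 14695981039346656037ull;
        while (n--) { h ^= *p++; h *= 1099511628211ull; }
        return h;
    }

    /* Returns the block this content should reference: blkno itself
     * if the content is new, or the earlier duplicate.  The toy table
     * is assumed never to fill up. */
    long dedup_block(const unsigned char block[BLOCK_SIZE], long blkno)
    {
        uint64_t h = fnv1a(block, BLOCK_SIZE);
        size_t i = (size_t)h & (TABLE_SIZE - 1);

        while (table[i].first_block != 0) {
            if (table[i].hash == h)
                return table[i].first_block;  /* seen: reference it */
            i = (i + 1) & (TABLE_SIZE - 1);   /* linear probing */
        }
        table[i].hash = h;
        table[i].first_block = blkno;
        return blkno;                         /* new: write it */
    }

    int main(void)
    {
        unsigned char a[BLOCK_SIZE] = {1}, b[BLOCK_SIZE] = {1};
        printf("block 1 -> %ld\n", dedup_block(a, 1));  /* new: 1 */
        printf("block 2 -> %ld\n", dedup_block(b, 2));  /* dup: 1 */
        return 0;
    }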

Sam Bingner
PACAF CSS/SCHE

-Original Message-
From: Russell Coker [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 15, 2002 11:06 AM
To: ReiserFS
Subject: [reiserfs-list] back up to disk


Here's an interesting article I just read.  It's just a device with a bunch
of ATA drives inside, up to 2T of storage.  Probably anyone here could
produce something based on ReiserFS to compete with it...


Storage start-up Avamar Technologies is launching an appliance 
this week that it claims backs up network data more quickly and 
less expensively than tape. 
http://www.nwfusion.com/news/2002/1014avamar.html?net

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page



Re: [reiserfs-list] Recommended patches for mail server?

2002-10-15 Thread JP Howard

On Tue, 15 Oct 2002 16:29:23 +0400, Oleg Drokin
[EMAIL PROTECTED] said:
 On Tue, Oct 15, 2002 at 10:55:21AM +, JP Howard wrote:
   They should apply cleanly; if you are worried about some offsets,
   that is OK and should not concern you. (Just use the -s switch to the
   patch command to suppress the unneeded output ;) )
  I got quite a few HUNK FAILED. Also, I found that with the data
  logging patches these other patches didn't apply cleanly.

 Hm. You apply the patches in order, is that right?

Yes.

  Could you please tell me which, if any, of these patches are
  important? Also, I noticed 07-mmaped_data_loss_fix.diff  here:
  http://www.uwsg.iu.edu/hypermail/linux/kernel/0201.3/1161.html

 07-mmaped_data_loss_fix.diff is in 2.4.19 already; maybe you are just
 downloading patches from the wrong dir?

I've been searching the archives to see what patches are around. I
noticed this patch and figured it sounded important; it's not related to
my question above about the 2.4.19.pending patches.

 Other patches are some cosmetic cleanups, plus a new block allocator, but
 this block allocator is incompatible with Chris' data logging patches as
 yet, and you need Chris' patches more. There is also the NFS fix I
 recommended, and two not very important fixes that you would probably
 never need. (The fixes are patches 13-remount-rw-fix.diff and
 04-item_alignment_fix.diff.)

Well, it sounds like there's nothing for us here to worry about then! If
we ever need NFS, I'll make sure that we apply the patch that you
mentioned.

Thanks again,
  Jeremy



Re: [reiserfs-list] [PATCH] finally... data logging for 2.4.20-pre10

2002-10-15 Thread JP Howard

On 15 Oct 2002 15:24:42 -0400, Chris Mason [EMAIL PROTECTED] said:
 Aside from merging into the latest kernel, there are a few bug fixes for
 corner-case oopsen hit during SuSE testing.
 
Could you possibly provide a patch for that bug against 2.4.19?

TIA,
  Jeremy



Re: [reiserfs-list] back up to disk

2002-10-15 Thread JP Howard

On Tue, 15 Oct 2002 23:05:44 +0200, Russell Coker
[EMAIL PROTECTED] said:
 Here's an interesting article I just read.  It's just a device with a
 bunch of ATA drives inside, up to 2T of storage.  Probably anyone here
 could produce something based on ReiserFS to compete with it...

 Storage start-up Avamar Technologies is launching an appliance this
 week that it claims backs up network data more quickly and less
 expensively than tape.
 http://www.nwfusion.com/news/2002/1014avamar.html?net

We've been thinking about something like that, using this extremely nifty
trick:

http://www.mikerubel.org/computers/rsync_snapshots/
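
The core of the trick is that hard links let every snapshot tree share one
copy of each unchanged file, so only changed files cost new space. A
bare-bones illustration of that mechanism (the snapshot paths here are
invented; Rubel's page has the real rsync recipe):

    /* link() gives the old file's inode a second name under the new
     * snapshot directory, so no data is copied; rsync later replaces
     * (rather than overwrites) only the files that changed. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Both snapshot directories must already exist. */
        if (link("snapshot.0/data", "snapshot.1/data") != 0) {
            perror("link");
            return 1;
        }
        puts("snapshot.1/data shares snapshot.0/data's inode");
        return 0;
    }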

Back in the old days (i.e. last week) when we were planning around Ext3,
we were thinking of combining it with this product:

http://www.shaolinmicro.com/product/cogofs/index.php

The two combined, with ATA RAID, provide fast, redundant, incremental,
compressed backups.

Does ReiserFS support transparent compression? If not, are there any
plans in this direction? Benchmarks I've seen in the past suggest that
compressed file systems generally improve performance (especially when
using something fast like LZOP) since CPUs are so fast--and of course for
backups being able to store more on fewer disks is nice...
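
As a rough feel for the space side of that argument, here is a toy
compress-before-write measurement, with zlib standing in for a fast
compressor like LZOP (whose API I won't guess at); the "file data" is
fabricated:

    /* Compress one 4K block and report how many 512-byte sectors it
     * would cost raw vs. compressed.  zlib is a stand-in; a transparent
     * compression layer would do this per block at write time. */
    #include <stdio.h>
    #include <zlib.h>

    #define BLOCK  4096
    #define SECTOR 512

    int main(void)
    {
        unsigned char in[BLOCK], out[BLOCK + 64];
        uLongf outlen = sizeof(out);

        /* Fake data: repetitive text compresses well, as mail spools
         * and logs typically do. */
        for (int i = 0; i < BLOCK; i++)
            in[i] = "From: someone@example.com\n"[i % 26];

        if (compress(out, &outlen, in, BLOCK) != Z_OK) {
            fprintf(stderr, "compress failed\n");
            return 1;
        }
        printf("raw: %d sectors, compressed: %lu sectors\n",
               BLOCK / SECTOR,
               (unsigned long)((outlen + SECTOR - 1) / SECTOR));
        return 0;
    }

(Build with something like: cc demo.c -lz)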

Off-topic: anyone know of good vendors offering rackmount servers with
room for lots of IDE drives? Looking at Dell, IBM, and Compaq, they all
use exclusively SCSI in their 2U rackmount servers.



Re: [reiserfs-list] back up to disk

2002-10-15 Thread Hans Reiser

JP Howard wrote:

We've been thinking about something like that, using this extremely nifty
trick:

http://www.mikerubel.org/computers/rsync_snapshots/

This is brilliantly simple.


Back in the old days (i.e. last week) when we were planning around Ext3,
we were thinking of combining it with this product:

http://www.shaolinmicro.com/product/cogofs/index.php

This looks reasonable as a product, and not unreasonably expensive for 
servers.   There are some advantages to tight integration though, in 
that you can compress at flush time.


The two combined, with ATA RAID, provide fast, redundent, incremental,
compressed backups.

Does ReiserFS support transparent compression?

This is one of the features that won't make the Halloween deadline, but 
might be slipped in later.

 If not, are there any
plans in this direction? Benchmarks I've seen in the past suggest that
compressed file systems generally improve performance (especially when
using something fast like LZOP) since CPUs are so fast--and of course for
backups being able to store more on fewer disks is nice...

What I had heard was that they generally slowed performance, but maybe my
info is old.

CPUs are faster now, and maybe compression algorithms are faster.

Can you give more details?






[reiserfs-list] oops from 2.5.42-mm3 running lilo on reiserfs

2002-10-15 Thread rwhron

AMD Athlon 1333
1GB (highmem enabled)
IDE disk
reiserfs filesystems

While running lilo I got this oops:

 kernel BUG at mm/highmem.c:177!
 invalid operand: 

 CPU:0
 EIP:0060:[c01325cf]Not tainted
 EFLAGS: 00010246
 EIP is at kunmap_high+0xf/0x80
 eax:    ebx: c195fe70   ecx: c037bb1c   edx: c037bb14
 esi: 0a00   edi: f6b31df4   ebp: 1000   esp: f6b63f3c
 ds: 0068   es: 0068   ss: 0068
 Process lilo (pid: 810, threadinfo=f6b62000 task=f7435940)
 Stack: c195fe70 0a00 c0111f5b c017c633 c195fe70 0001 f6b31df4 f737d3c0
0001 f6b31e50  c017c560 f6b31df4 f737d3c0 c01448e8 f6b31df4
f737d3c0 4004cd01 0001 4004cd01 ffe7 f737d3c0 0001 c0144aeb
 Call Trace:
  [c0111f5b] kunmap+0x2b/0x30
  [c017c633] reiserfs_unpack+0xc3/0x108
  [c017c560] reiserfs_ioctl+0x20/0x30
  [c01448e8] file_ioctl+0x148/0x160
  [c0144aeb] sys_ioctl+0x1eb/0x220
  [c0106e0b] syscall_call+0x7/0xb


-- 
Randy Hron
http://home.earthlink.net/~rwhron/kernel/bigbox.html




Re: [reiserfs-list] back up to disk

2002-10-15 Thread Valdis . Kletnieks

On Wed, 16 Oct 2002 05:15:33 +0400, Hans Reiser said:

 What I had heard was that they generally slowed peformance, but maybe my 
 info is old.

Even back in the days when an IBM RS6000-220 (66MHz 601 chipset) was a new
machine, I found it actually gave a 20-30% total throughput boost, because
even if the CPU wasn't blazing fast, it was faster to read 3-5 512-byte
blocks off the disk and decompress them to get a 4K filesystem block than
it was to actually read all 8 512-byte blocks.  Among other things, you
get back half your disk throughput, so you can sustain twice the I/Os.
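
To put rough numbers on that (all assumed, not measurements): if a 4K
block compresses from 8 sectors down to 4, the disk moves half the data,
so at the same raw transfer rate you can sustain roughly twice the logical
throughput, provided decompression keeps up:

    /* Back-of-envelope for the halved-I/O claim above.  All numbers
     * are assumptions for illustration. */
    #include <stdio.h>

    int main(void)
    {
        double disk_MBps = 5.0;    /* assumed raw rate of the era */
        int raw_sectors  = 8;      /* 4K block, uncompressed */
        int comp_sectors = 4;      /* same block after compression */

        double logical = disk_MBps * raw_sectors / comp_sectors;
        printf("effective: %.1f MB/s (%.0f%% over raw)\n",
               logical, (logical / disk_MBps - 1) * 100);
        return 0;
    }
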
-- 
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech






[reiserfs-list] Re: oops from 2.5.42-mm3 running lilo on reiserfs

2002-10-15 Thread rwhron

 While running lilo I got this oops:

  kernel BUG at mm/highmem.c:177!
  invalid operand: 

 Just delete it I think:

Yep, that fixes it.  By the way, system responsiveness
with all this stuff running is impressive:

Linux Test Project's runalltests.sh (lots of I/O).
configure/make/make test Python-2.2.2 (all tests okay)
setiathome  (cpu bound)
mp3blaster
lilo in a loop every 30 seconds.

mp3blaster went quiet for only about 5 seconds, at a point
when runalltests.sh was executing 100 CPU-bound threads
(float_bessel is the test).  There was no skipping during
the test that writes to all free memory + 600 megs of swap. :)

2 python audio tests block on /dev/audio, but
that's expected.

You once had a 2.4 patch so lilo wouldn't wait too
long when there is a lot of I/O going on.  A couple
of the LTP tests would make lilo pause ~ 60 seconds:
growfiles -b -e 1 -i 0 -L 120 -u -g 4090 -T 100 -t 408990 -l -C 10 -c 1000 -S 10 -f Lgf02_
growfiles -b -e 1 -i 0 -L 120 -u -g 5000 -T 100 -t 40 -l -C 10 -c 1000 -S 10 -f Lgf03_

Those run for 120 seconds each in sync mode.  

I can enter and exit X with no problem.
2.5.42-mm3 is comfortable on my main box.
Thanks for making it so good.

-- 
Randy Hron
http://home.earthlink.net/~rwhron/kernel/bigbox.html