[reiserfs-list] Cannot mount reiserfs partition (more info)

2001-06-22 Thread mlist

Hello Again

Here's some more info about the problem with my partition. When I mount
it, I see the following lines in the syslog:
--
Jun 22 10:11:13 deot kernel: reiserfs: checking transaction log (device
03:46) ...
Jun 22 10:11:14 deot kernel: journal-1226: REPLAY FAILURE, fsck required!
buffer write failed
Jun 22 10:11:14 deot kernel: Replay Failure, unable to mount
Jun 22 10:11:14 deot kernel: reiserfs_read_super: unable to initialize
journal space
--

Now, I thought that maybe 'raw mode' would help me. The thing is that I
haven't really got an idea of how to use it, and couldn't find any
documentation about it. I just want to save the files which are in there (and
they are, I can see them with debugreiserfs!). Is there any way to get access
to them, even read-only? If raw mode is indeed what I need, can anybody
please tell me how to use it?

It's hard to accept that I've lost all the files there while I can still see
them all using debugreiserfs...

Thanks!!

Cya,
Oren.




Re: [reiserfs-list] How to use resize_reiserfs?

2001-06-22 Thread mailing

No. You need to shift the filesystem content to the beginning of the partition.

/bin/dd may help:
   dd if=/dev/hda5 of=/dev/hda5 bs=1024 conv=notrunc \
      skip=<number of 1K blocks you added at the beginning>

Use a bigger block size if possible.

... I want to test this method first before recommending it for
important data. Even if it works, any crash during this operation will
leave your reiserfs inconsistent (`reiserfsck --rebuild-tree' should
help with the metadata, but recovering unformatted nodes may be a
problem). Using a bigger block size will reduce the copying time and the
risk. Double-check the distance you want to shift the reiserfs content!
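The in-place shift can be rehearsed safely on a scratch file first. This is a sketch with made-up sizes (2 added blocks, a short payload); conv=notrunc keeps dd from truncating a regular file before it has read it, which matters when input and output are the same file:

```shell
# Scratch file standing in for the partition: two "added" 1K blocks of
# zeros, followed by the payload that should end up back at offset 0.
dd if=/dev/zero of=demo.img bs=1024 count=2 2>/dev/null
printf 'FILESYSTEM-CONTENT' >> demo.img

# Shift everything down by the 2 skipped blocks. The read position stays
# ahead of the write position, so in-place copying toward the start is safe.
dd if=demo.img of=demo.img bs=1024 skip=2 conv=notrunc 2>/dev/null

head -c 18 demo.img   # the payload now sits at the start of the file
```

The same pattern with the real skip count (and a much bigger bs) is what the command above does on the device.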

So, it is a fairly complex and risky operation :( Can you try this on a
test system?

Oh, I don't have a test system, and I would like to know how I can find the
number of 1K blocks added at the beginning.
But it is so complex to me that I think I will create another partition, move
the data to it, and then create a symbolic link from the original directory.
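The partition-plus-symlink fallback is simple to do. A sketch with made-up paths (/tmp/newpart stands in for the new partition's mount point, /tmp/oldhome for the original location):

```shell
# Stand-in directories: the new partition's mount point and the old home.
mkdir -p /tmp/newpart /tmp/oldhome/data
echo "precious" > /tmp/oldhome/data/file

# Move the data onto the new partition, then point the old path at it.
mv /tmp/oldhome/data /tmp/newpart/
ln -s /tmp/newpart/data /tmp/oldhome/data

cat /tmp/oldhome/data/file   # still reachable via the original path
```

Applications keep using the original path, unaware that the data now lives on the other partition.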




Re: [reiserfs-list] kernel-2.4.6-pre3 to 2.2.19 NFS tests

2001-06-22 Thread Russell Coker

On Thursday 14 June 2001 15:51, Christian Mayrhuber wrote:
 I've run bonnie on nfs over a 10MBit/s network on a ext2 and
 a reiserfs partition on the same disk.

 The Bad:
 
 The performance loss relative to ext2 on the same disk is quite drastic,
 about 25%, and this is only over a 10MBit/s network. What will happen on a
 100MBit/s network? I have no chance to test it with 100MBit/s, SCSI
 hardware and a 3c59x card until Monday.

Try testing with Bonnie++, the file creation and deletion tests will give 
interesting results!  ;)

In one test I had an AIX machine that was moderately grunty (two fast 
POWER CPUs, 6 hard drives on a 160MB/s bus, 256M of RAM).  When running 
Bonnie++ I found that the AIX machine could create files on my Thinkpad 
over a 10baseT NFS mount faster than it could create them on a local JFS 
file system!


-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page



Re: [reiserfs-list] optimizing reiserfs for large files?

2001-06-22 Thread Russell Coker

On Thursday 14 June 2001 12:18, grobe wrote:
 I have a significant loss of performance in bonnie tests. The "writing
 intelligently" test, for example, gives me 20710 kB/s with reiserfs, while I
 get 24753 kB/s with ext2 (1 GB file).

How much RAM do you have?  If you have more than 512M of RAM then the 
results won't be a good indication of true performance.

Also older versions of bonnie never sync the data so the performance 
report depends to a large extent on how much data remains in the 
write-back cache at the end of the test!

Bonnie++ addresses these issues.
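The caching effect is easy to demonstrate with plain dd (a sketch; bs=1M and conv=fsync are GNU dd extensions, and the exact rates depend on your disk). Without a sync, the reported "write speed" largely measures the page cache:

```shell
# Write 32 MB without syncing: dd's reported rate mostly reflects how
# fast the kernel can absorb the data into the write-back cache.
dd if=/dev/zero of=cached.img bs=1M count=32 2>&1 | tail -n 1

# Same write, but conv=fsync forces the data to disk before dd reports
# its rate - usually a much lower, more honest number.
dd if=/dev/zero of=synced.img bs=1M count=32 conv=fsync 2>&1 | tail -n 1

wc -c < cached.img
wc -c < synced.img
```

On a machine with plenty of free RAM the first rate can be several times the second, which is exactly why a benchmark that never syncs over-reports throughput.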

Also neither of those results is what you should expect from modern 
hardware.  Machines that were typically sold in corner stores about a 
year ago (such as the machine under my desk) return results better than 
that.  I have attached the results of an Athlon-800 with 256M of PC-133 
RAM and a single 46G ATA-66 IBM hard drive.  The machine was not the most 
powerful machine on the market when I bought it over a year ago.

What types of hard drives does the machine have?



Version 1.92b   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
temp   496M   447  98 28609  16 10608   7   718  98 34694  15 199.8   1
Latency 22328us2074ms   56626us   57412us   43123us2984ms
Version 1.92b   --Sequential Create-- Random Create
temp-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16   849  98 + +++ 15216  90   863  99 + +++  3423  98
Latency  9168us 113us 249us   12778us  41us1744us
1.92b,1.92b,temp,1,993204157,496M,,447,98,28609,16,10608,7,718,98,34694,15,199.8,1,16,849,98,+,+++,15216,90,863,99,+,+++,3423,98,22328us,2074ms,56626us,57412us,43123us,2984ms,9168us,113us,249us,12778us,41us,1744us



Re: [reiserfs-list] kernel-2.4.6-pre3 to 2.2.19 NFS tests

2001-06-22 Thread Christian Mayrhuber

On Friday 22 June 2001 11:39, you wrote:

 Try testing with Bonnie++, the file creation and deletion tests will give
 interesting results!  ;)

 In one test I had an AIX machine that was moderately grunty (two fast
 POWER CPUs, 6 hard drives on a 160MB/s bus, 256M of RAM).  When running
 Bonnie++ I found that the AIX machine could create files on my Thinkpad
 over a 10baseT NFS mount faster than it could create them on a local JFS
 file system!
This is not the case for me; NFS performance never reaches local disk
performance.
I think the network is the limiting factor.
I have no idea whether the bonnie file creation numbers over NFS are good
ones or not; at least stat seems to be speedy.


1GHZ Athlon AMI Megaraid Raid-5 138GB/total, kernel-2.4.6-pre5, local, reiserfs
---
Version 0.99e   --Sequential Create-- Random Create
Unknown -Create-- --Stat--- -Delete-- -Create-- --Stat--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 30 14169  99 + 105 16224  91 13047  96 + 100 14010 100
Unknown,,30,14169,99,+,105,16224,91,13047,96,+,100,14010,100

1GHz Athlon, client, Raid-5 array mounted over a 100MBit/s network
--
Version 0.99e   --Sequential Create-- Random Create
Unknown -Create-- --Stat--- -Delete-- -Create-- --Stat--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 30  3578  30 13676  53  4682  33  3614  31 17797  49  4251  29
Unknown,,30,3578,30,13676,53,4682,33,3614,31,17797,49,4251,29
-- 
WfG, Chris




Re: [reiserfs-list] 3.5 vs 3.6

2001-06-22 Thread Michal Pokryfka


Well, I know that 2.2 kernels work only with reiserfs 3.5, whereas 2.4.x can
work with 3.6 as well.
I was asking about the differences between 3.5 and 3.6.
You mentioned that in 3.5 there is a 2GB file-size limit - is that the only
difference?

michal




Re: [reiserfs-list] reiserfs-raw

2001-06-22 Thread Russell Coker

On Monday 18 June 2001 21:57, Henrik Nordstrom wrote:
 For Squid it would become very interesting if in some time (let's say
 about a year, maybe more) there is a good volatile permanent object
 store similar to reiserfs-raw but with a slightly more flexible
 application interface.

One thing I have considered doing if I ever get a large amount of spare 
time (i.e. something that'll never happen) is to take the block IO part 
of the user-mode-linux code and use it to run file systems in user space 
behind a database-like interface.  For something like a large squid box 
there might be a performance gain from having small operations 
(directory lookups) take place in user-land rather than costing a system 
call each.

Also it could potentially have some benefits for debugging.  I thought 
that combining the above with a LD_PRELOAD library to take over the 
read/write/open/etc library calls could allow an application to think 
it's using regular files while it's really accessing a user-land process 
and talking over named pipes.  Then you could test out a new version of a 
file system without risking crashing your machine!




Re: [reiserfs-list] optimizing reiserfs for large files?

2001-06-22 Thread Russell Coker

On Saturday 23 June 2001 01:11, Lars O. Grobe wrote:
  Also neither of those results is what you should expect from modern
  hardware.  Machines that were typically sold in corner stores about a
  year ago (such as the machine under my desk) return results better
  than that.  I have attached the results of an Athlon-800 with 256M of
  PC-133 RAM and a single 46G ATA-66 IBM hard drive.  The machine was
  not the most powerful machine on the market when I bought it over a
  year ago.
 
  What types of hard drives does the machine have?

 Should be quite fast SCA-SCSI IBM drives. As I wrote, it's a 320GB
 array in an EXP15 connected to an IBM ServeRAID-4M. The Netfinity has two
 833MHz PIIIs.

Hmm.  Sounds like the performance you describe is less than expected, and 
the performance is being over-stated too!  When you get some more 
accurate results it'll look even worse...
