Re: Hans selling Namesys

2006-12-25 Thread Manuel Krause

Hello,
from reading all this stuff it looks like a competitor's intrigue. But 
we don't have any evidence of this, nor do we have a verdict, yet...


Can we build up a ReiserFS-community-based fund to buy or save 
Namesys over time, to help continue the current development? Just 
an idea.


From my point of view there are three general alternatives:
* let/help someone really interested buy Namesys
* increase Hans' financial input directly, to help him and ReiserFS
* ReiserFS 3/4 get bought by someone and become commercial, or
  vanish at once

As I have followed ReiserFS since 1997(?+-) but really don't know 
enough about the current case, I'm not sure about my priorities in 
the list above.


Best wishes to all contributors,
my best wishes for your Xmas!

Manuel



Joe Feise wrote on '06-12-25 20:55:

http://www.wired.com/news/technology/0,72342-0.html






Re: Reiser4 status: benchmarked vs. V3 (and ext3)

2003-07-27 Thread Manuel Krause
On 07/27/03 14:28, Hans Reiser wrote:
Nikita Danilov wrote:

Shawn writes:
 Looks like 2.5.74 is the last one of any respectable size. I'm
 thinking someone forgot a diff switch (-N?) over at namesys...
  Hans? Time to long-distance spank someone?
Can you try following the instructions on the
http://www.namesys.com/code.html (requires bitkeeper)?
Nikita.

  On Wed, 2003-07-23 at 23:56, Tupshin Harper wrote:
  Shawn wrote:
This is pretty f'ed, but it's on ftp://ftp.namesys.com/pub/tmp
  
  Thanks, but I tried applying the
  2.6.0-test1-reiser4-2.6.0-test1.diff from that location with a lack
  of success. It applied cleanly, but it doesn't add a fs/reiser4
  directory and associated contents. Is there an additional patch, or
  is this one broken?
  -Tupshin
 

 

Nikita, how about phrasing this as:

   `Dear and esteemed potential Reiser4 user, I apologize that we put a 
tarball on our website and let it get so obsolete, thereby wasting your 
time.  I am deleting it now, and will soon put a new one up.  In the 
meantime, can you use bitkeeper if that is convenient for you?  Here is 
the URL with the instructions for doing that.'

It is the usual American business English used in such cases. ;-)

No need to exaggerate... For these Reiser4-related preliminary 
failure questions you can safely reuse
 http://www.namesys.com/support.html

That page doesn't even need a correction for this...

;-))

Bye,

 Manuel

--
Pay 25 for reading this mail! Just contact [EMAIL PROTECTED] on 
how to pay! No tax problems any more... You'll get internationally 
valid bills and receipts. :-))



Re: Reiser4 status: benchmarked vs. V3 (and ext3)

2003-07-27 Thread Manuel Krause
On 07/27/03 17:04, Gene Heskett wrote:
On Sunday 27 July 2003 08:45, Tomas Szepe wrote:

[EMAIL PROTECTED]

Nikita, how about phrasing this as:

  `Dear and esteemed potential Reiser4 user, I apologize that we
put a tarball on our website and let it get so obsolete, thereby
wasting your time.  I am deleting it now, and will soon put a new
one up.  In the meantime, can you use bitkeeper if that is
convenient for you?  Here is the URL with the instructions for
doing that.'
It is the usual American business English used in such cases. ;-)
Standard American puke inducing pretentious florid business
language if you want to be completely accurate.  8)


Huh?  I thought Hans was being facetious, or was practicing his  
standup comedy.  I can see something like that coming out of a far 
western oriental type when dealing with an american that isn't really 
understanding his accent, but here in the states, after we've 
explained it once in plain english, the next exchange will more than 
likely go the other way, possibly even making it to the 'hey 
dumbf*ck' stage on about the 4th reply.
Can you, please, make the last two sentences more readable for me, a 
Middle European type, in 'first-stage' plain English?

Please, make your dialogues clear and clean for international usage! 
You're not here in the states when posting to an international list.

 Manuel.

And I do object, albeit somewhat tongue-in-cheek, to applying that 
particular label/broad brush to all americans since I am one of those 
creatures.




Re: reiserfs data recovery

2003-07-27 Thread Manuel Krause
On 07/28/03 00:39, [EMAIL PROTECTED] wrote:
After a hard disk crash, I don't see the reiserfs filesystems anymore 
(can't mount them), although the partitions are there and the ext2 
filesystems are there. What are my best options for trying to retrieve 
any data that were in the reiserfs partitions?

DS

Try the following from a spare/rescue partition:

A.1) Try the most recent reiserfsprogs from
  ftp://ftp.namesys.com/pub/reiserfsprogs
 on your lost partitions and provide the info of
 reiserfsck --check /lost/partition
A.2) Try reiserfsck --rebuild-tree /lost/partition on your
 own risk (!!!)
B) Use the most recent plain kernel (>= 2.4.19)

C) Provide more info in general (versions of kernel,
   reiserfsprogs, distro, credit card info ;-)
   and so on...)
D) Refer to http://www.namesys.com/support.html

You can mix the things above, but if you can, follow at least the order 
A.1, C, B, A.2, always consulting D before the next full report!
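The suggested ordering can be sketched as a small shell script. The device name is a placeholder, and the helper only prints each step, so nothing destructive runs; reiserfsck --rebuild-tree really is at your own risk:

```shell
#!/bin/sh
# Sketch of the recovery order above: A.1, C, B, A.2, consulting D
# (http://www.namesys.com/support.html) before each full report.
# DEV is a placeholder for the lost partition; 'step' only prints.
DEV=${DEV:-/dev/xxx}

step() { echo "STEP: $*"; }

step "A.1: reiserfsck --check $DEV      (read-only; save the output)"
step "C:   report kernel, reiserfsprogs and distro versions"
step "B:   boot a recent plain kernel (>= 2.4.19)"
step "A.2: reiserfsck --rebuild-tree $DEV   (destructive, own risk!)"
```

Replace the echo in step with real execution only once the earlier steps look sane.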

Best wishes,

 Manuel



Re: updated data logging available

2003-07-11 Thread Manuel Krause
On 07/11/03 23:22, [EMAIL PROTECTED] wrote:
 Hello all,
 
 ftp.suse.com/pub/people/mason/patches/data-logging/2.4.22
 
 Has a merge of the data logging and quota code into 2.4.22-pre4 (should
 apply to -pre5).  Overall, the performance of pre5 + reiserfs-jh is nice
 and smooth, I'm very happy with how things are turning out.
 
 Thanks to Oleg for merging data logging with his reiserfs warning
 patches and hole performance fixes.
 
 The relocation and base quota code is now in, so our number of patches
 is somewhat smaller.  The SuSE ftp server might need an hour or so to
 update, please be patient if the patches aren't there yet.
 
 -chris
 

They patch and run fine even with search_reada-5 (from this list) and
this funny fast-responding HZ=1000 setting @ 2.4.22-pre5 at the moment
(greatest: no handcrafting needed here, with mostly only rml-preempt
(except for fixing this one) ;-) ).

And, as I subjectively feel, after 2.4.21-final it goes like you say:
performance is nice and smooth now. The smoothness (however one may
measure it; I keep listening to my nearby notebook disks for now) seems
to have increased greatly with 2.4.22-pre(3-5). E.g. no needlessly
frequent disk transfers take place as they did in 2.4.21-final. And
desktop usage patterns (loading Netscape 7.1, OOo 1.1 and some
overheaded apps like that) didn't slow down in any way.


Great work done for ReiserFS!!!


Many thanks to the whole team,

 Manuel


[OT: BTW, if someone experienced performance problems with OpenOffice
1.0.x_y just try OOo 1.1rc1 - it's amazingly fast and fixes many
prereleases' problems, too.]



Debugreiserfs Security Question (3.6.7-pre1)

2003-06-19 Thread Manuel Krause
Hi!

If I currently use debugreiserfs -p /dev/xxx | gzip -c > xxx.gz and
later, for testing, gunzip -c xxx.gz | unpack /dev/yyy, I get the same
filenames on the last target partition. (with reiserfsprogs 3.6.7-pre1)
If I don't want to spread info about
 /home/manuel/my_car/tech_overview/lies_for_BMW__DC/engine.259.fake.jpg
anywhere else than only on my HDD, shouldn't this file be converted to
 /d98/d4/d2/d65/d1/23.file
e.g., or something like that (random directory and file names), within
debugreiserfs, in general?!
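Spelled out, the pipeline looks like this (device names are placeholders; the last two lines demonstrate the same gzip round trip on an ordinary file, since the real commands need block devices):

```shell
# Pack the metadata of one partition, compress it, and later restore it
# onto a scratch device (placeholders /dev/xxx and /dev/yyy; unpack
# overwrites /dev/yyy):
#   debugreiserfs -p /dev/xxx | gzip -c > xxx.gz
#   gunzip -c xxx.gz | unpack /dev/yyy

# The same producer | gzip / gunzip | consumer shape on a plain file:
printf 'packed metadata stream\n' | gzip -c > /tmp/xxx.gz
gunzip -c /tmp/xxx.gz          # the stream comes back unchanged
```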
I don't know if that is a serious security issue. But it is one.

No no, I don't doubt your developers' debugging cycle and purpose at all.
But I don't need you (and others, if we couldn't establish a secure
connection) to read our filenames. In case of real failure we may not
be able to rename anything any more, you know...
Best regards,

Manuel Krause

(The filenames mentioned above have NO real meaning in ANY sense.)



Re: datalogging and quota patches port to 2.4.21-pre6

2003-03-22 Thread Manuel Krause
On 03/22/2003 08:29 AM, Oleg Drokin wrote:
Hello!

On Sat, Mar 22, 2003 at 12:58:45AM +0100, Manuel Krause wrote:

GRIN !!!  I really would like to enjoy these, but you seem to be far 
away in the future. Still no official patch seems to be out -- at least 
for me here. (Yes, I'll wait and would not use bk.)
Have your previously pending patches been included into 2.4.21-pre6
(they are: 02-trivial1.diff
  03-more-mount-checks.diff
  01-journal-overflow-fix.diff
from ftp://ftp.namesys.com/pub/reiserfs-for-2.4/testing :
 iget5_locked_for_2.4.21-pre5-datalogging.diff.gz
and posted to our ML @ Wed, 12 Mar 2003 10:46:42 +0300 :
   direct-io-fix-II.diff
) ?
Only patches that are visible at 
ftp://ftp.namesys.com/pub/reiserfs-for-2.4/2.4.20-pending
were included into 2.4 tree.
iget5_locked patch is in testing right now.
direct-io-fix-II.diff has triggered some old, long-known minor metadata
updating problems in the case of an unexpected crash, and I am looking
at how to resolve that and trying various stuff.

My question only results from being unable to get them all applied to 
2.4.21-pre5 with data-logging without rejects, and from feeling unsafe 
adjusting them manually.
Well, if you apply all of those patches from
ftp://ftp.namesys.com/pub/reiserfs-for-2.4/2.4.20-pending on a clean 2.4.20
(or, for 2.4.21-pre5, only whatever is needed from
ftp://ftp.namesys.com/pub/reiserfs-for-2.4/2.4.21-pre5.pending)
and then apply the data logging patches as if you had 2.4.21-pre6,
and then the iget5_locked stuff, it should work.
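Oleg's recipe amounts to something like the following sketch (KERNEL_TREE and the patch names are placeholders for the diffs fetched from the pending/ directories; each diff gets a dry run first so the sequence stops cleanly at the first reject):

```shell
#!/bin/sh
# Apply all numbered pending diffs (01-..., 02-..., ...) to a clean
# tree; globs expand in sorted order, which matches the numbering.
# The data-logging and iget5_locked patches would follow the same way.
set -e
cd "${KERNEL_TREE:-.}"
for p in [0-9]*.diff; do
    [ -e "$p" ] || break            # no pending diffs at all
    echo "applying $p"
    patch -p1 -s --dry-run < "$p"   # verify before touching any files
    patch -p1 -s < "$p"
done
```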


Thank you Oleg for the information and the hint.

When following your directions these patches apply fine and the kernel 
works pretty well, too.

Thanks again,

Manuel



Re: datalogging and quota patches port to 2.4.21-pre6

2003-03-21 Thread Manuel Krause
On 03/20/2003 04:13 PM, Oleg Drokin wrote:
Hello!

   Ok, so I ported Chris' patches to 2.4.21-pre6 (should appear on your
   kernel.org mirror soon, have appeared in bk already).
   Patches are at
   ftp://namesys.com/pub/reiserfs-for-2.4/testing/data-logging-and-quota-2.4.21-pre6
   They are intended to replace the similarly named patches from Chris' ftp directory at
   ftp://ftp.suse.com/pub/people/mason/patches/data-logging/2.4.21
   Also if you want to try these patches with 2.4.21-pre5-ac3,
   you only need to apply my 2.4.21-pre6 versions of 
05-2.4.21-pre6-data-logging-36.diff.gz
   and 08-2.4.21-pre6-reiserfs-quota-26.diff.gz (but I have not tried this 
configuration yet).
   Enjoy.

Bye,
Oleg
Hi Oleg,

GRIN !!!  I really would like to enjoy these, but you seem to be far 
away in the future. Still no official patch seems to be out -- at least 
for me here. (Yes, I'll wait and would not use bk.)

Have your previously pending patches been included into 2.4.21-pre6
(they are: 02-trivial1.diff
   03-more-mount-checks.diff
   01-journal-overflow-fix.diff
 from ftp://ftp.namesys.com/pub/reiserfs-for-2.4/testing :
  iget5_locked_for_2.4.21-pre5-datalogging.diff.gz
 and posted to our ML @ Wed, 12 Mar 2003 10:46:42 +0300 :
direct-io-fix-II.diff
) ?
My question only results from being unable to get them all applied to 
2.4.21-pre5 with data-logging without rejects, and from feeling unsafe 
adjusting them manually.

Thanks,

Manuel

--
If we'll never find peace, freedom and democracy with our souls -- how 
can our world? We only need to begin. Yes, first step first, in peace...



Re: [PATCH] various allocator optimizations

2003-03-14 Thread Manuel Krause
On 03/14/2003 02:34 AM, Chris Mason wrote:
On Thu, 2003-03-13 at 19:15, Hans Reiser wrote:

[ discussion on how to implement lower fragmentation on ReiserFS ]

Let's get lots of different testers.  You may have a nice heuristic here 
though



If everyone agrees the approach is worth trying, I'll make a patch that
enables it via a mount option.
[...]

A dumb question inbetween: How do we - possible testers, users - get 
information about fragmentation on our ReiserFS partitions?

Thanks,

Manuel



Re: help recover filesystem corrupted by failing ide controller

2003-03-14 Thread Manuel Krause
On 03/15/2003 03:43 AM, Pierre Abbat wrote:
On Friday 14 March 2003 21:29, Manuel Krause wrote:

If you already did a --rebuild-tree with that _old_ (~2 years!!!)
reiserfsck, I pray you have a backup somewhere.


If I couldn't cat a single file without the cat hanging, how could I make a 
backup?
Sorry, Pierre, you should have done it 1 min. before this happened, 
err, regularly. I feel for you.
Try upgrading the kernel when this is fixed, and check that your 
hardware is o.k. first.

Try the latest pre-release
ftp://ftp.namesys.com/pub/reiserfsprogs/pre/reiserfsprogs-3.6.5-pre2.tar.gz
which is said to have some bug fixes over the latest reiserfsprogs
release
ftp://ftp.namesys.com/pub/reiserfsprogs/reiserfsprogs-3.6.4.tar.gz
plus a performance gain; the release link is only given in case
you believe in release numbering itself.


I have 3.6.4 on this laptop. I can modify the spell and recast it. Is it OK to 
copy the binary to the other box (they're both AMD processors) and run it? 
(The laptop is running Source Mage, the box with the trashed filesystem is 
running Mandrake.)

Modify spell? Recast? Source Mage? Mmh.

You can try running the old binary. Depending on the corresponding libs 
it will work. If not, try a new build.

And don't use it on the old system before you are sure of the 
failure's reason: old, old kernel; old, old FS; etc.

I refer to the following mail on this list, which I'll cite below, mostly 
for the compiler flag you'll need for a possibly needed independent 
binary -- so, choose your poison!

Manuel




On Tuesday 28 January 2003 05:41, Francois-Rene Rideau wrote:
 Dear all,

 to determine whether my DMA problem was due to the disk or the 
motherboard,
 I moved the disk to another machine, where I could successfully
 hdparm -Tt /dev/hdc without crashing linux while in udma2.
 However, when I put the disk back into its rightful place,
 linux died horribly with lots of error messages from reiserfs.
 I booted on /boot where I had kept a minimal runtime, and used 
reiserfsck.
 Reiserfsck reported an inconsistency that required using --rebuild-tree.
 And after approximately two hours, reiserfsck --rebuild-tree entered
 an endless loop, printing
 do_pass_2: The block (27179935) is in the tree already. Should not 
happen.

looks like progs were compiled with other libs. Could you rebuild progs 
where you booted again, or link them statically:
	make LDFLAGS='-static'
and try again.

 Is there anything I can do about that, to put it in working order,
 or should I reformat and restore what I can from backup?

 I've got a distinct feeling this disk will soon go back to MAXTOR,
 just like its twin brother.

 [ François-René ÐVB Rideau | Reflection&Cybernethics |
 http://fare.tunes.org ] [  TUNES project for a Free Reflective Computing
 System  | http://tunes.org  ] http://Bastiat.org/ - debunking economic
 sophisms since 1845.
-- Thanks, Vitaly Fertman





Re: [PATCH] new data logging and quota patches available

2003-03-08 Thread Manuel Krause
On 03/06/2003 05:03 AM, Manuel Krause wrote:
On 03/06/2003 04:46 AM, Chris Mason wrote:

On Wed, 2003-03-05 at 21:49, Manuel Krause wrote:

On 02/28/2003 02:32 PM, Chris Mason wrote:

Comparing data-logging-35 vs data-logging-36+kinoded-9+dirty_inode 
I have a performance increase of 2.5% for the new data-logging. 
Like you say above.


But... though you were right about the 2.4.21 data-logging performance 
relations: what was causing that performance loss between 
2.4.19 + data-logging and now???

Did you exclude your changing dataset as a potential cause?  If not I'm
very interested in the details.


Please, tell me how to respond to this question!

Sorry, I wasn't clear.  It sounds like you've got two very different
copy speeds on two possibly different data sets.  The problem will be
significantly easier to debug and discuss if you boot a 2.4.19 + data
logging kernel, do the test, and then do the test again on a 2.4.21-pre
data logging kernel.
I know this is a pain, I want to reduce the number of variables in
place.
Yes. That is exactly why I asked. I'll try to find and retest the old 
things tomorrow night. Will the new setup go fine with 2.4.21-pre5? Or 
is this one more step of irritation? Mmm. Or should I wait for the 
patches you announce below?

Hi, and sorry for the delay.

So I copied over the same dataset several times. ;-)

The times measured include the umounts of both partitions.
The test dataset was freshly copied to the freshly created
 test FS under the same kernel version before timing.
This time I took the times of two subsequent runs.
The test script only does a simple cp -a ...
/dev/hda11  5076348   3762908   1313420  75% /mnt/alpha
/dev/hdd11  5076348   3762908   1313440  75% /mnt/beta
2.4.21-pre5 data=ordered [A]
  first run  second run  diff  averagerate
real  9m05.375s   9m23.272s  3.2%  9m14.324s  6.63MB/s
user  0m01.440s   0m01.730s
sys   1m11.640s   1m08.870s
2.4.19 data=ordered [B]
  first run  second run  diff  averagerate
real  7m46.287s   7m34.031s  2.7%  7m40.159s  7.99MB/s
user  0m01.520s   0m01.580s
sys   1m21.920s   1m21.300s
- I didn't know that subsequent runs under the same conditions
   differ by ~3% (so don't take the 2.5% difference stated in my
   previous mail too seriously, as I only timed one run that time)
- The throughput rate under 2.4.19 is about 20% higher
   than under 2.4.21-pre5 with this dataset
I hope it's visible this time...  ;-))
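As a cross-check, the MB/s column follows from df's used column (KiB) and the averaged 'real' times; a small sketch:

```shell
# rate = used KiB / 1024 / elapsed seconds, using the numbers above:
# 3762908 KiB copied; 7m40.159s = 460.159 s; 9m14.324s = 554.324 s.
awk 'BEGIN {
    printf "2.4.19      : %.2f MB/s\n", 3762908/1024/460.159
    printf "2.4.21-pre5 : %.2f MB/s\n", 3762908/1024/554.324
}'
```

which reproduces the 7.99 and 6.63 MB/s figures in the tables.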

Best regards,

Manuel



P.S.: Disk data distribution file is available if you want.

Below is the list of used patches, just to be complete...

[A] Patches for 2.4.21-pre5:
01.m02-akpm-b_journal_head-1.diff
02.m03-relocation-6.diff
03.m04-reiserfs-sync_fs-1.diff
04.m05-data-logging-36.namesys.diff
05.m06-logging-export.diff
06.m06-write_times.diff
07.m09.kinoded-9.diff
08.m11.inode-dirty-for-kinoded.ML.diff
09.iget5_locked_for_datalogging.namesys.noquota.diff
preempt-kernel-rml-2.4.21-pre5-pre1-0.patch
[B] Patches for 2.4.19:
01.change_help_wording.diff
02.change_menu_wording.diff
03.confusing_journal_replay_warning_removal.diff
05.hash_on_empty_fs-1.diff
07.m01-relocation-4.diff
08.m02-commit_super-9-relocation.diff
09.m03-data-logging-25.diff
10.m03-logging-export.diff
11.m04-write_times.diff
12.m05-search_reada-4.diff
preempt-kernel-rml-2.4.19-2.patch



Re: [PATCH] new data logging and quota patches available

2003-02-22 Thread Manuel Krause
On 02/22/2003 05:12 PM, Chris Mason wrote:
And here's one more patch for those of you that want to experiment a
little.  reiserfs_dirty_inode gets called to log the inode every time
the vfs layer does an atime/mtime/ctime update, which is one of the
reasons mounting with -o noatime and -o nodiratime makes things faster. 
We had to do this because otherwise kswapd can end up trying to write
inodes, which can lead to deadlock as he tries to wait on the log.

One of the patches in my data logging directory is kinoded-8, which
changes things so a new kinoded does all inode writeback instead of
kswapd.
That means that if you apply up to 05-data-logging-36 and then apply
09-kinoded-8 (you won't need any of the other quota patches), you can
also apply this patch.  It changes reiserfs to leave inodes dirty, which
saves us lots of time on atime/mtime updates.
I'll upload this after it gets a little additional testing, but wanted to
include it here in case anyone else was interested in benchmarking.
[11.dirty-inodes-for-kinoded.diff]
Hi Chris!

At least I'm not able to copy through my partition via cp -a ... with 
the last proposed setup + preempt, without quota ...

When umounting the destination partition it says the device is busy.

I don't know what has gone wrong on my repository partition for now, but 
I know from earlier times this should not happen.

Bye,

Manuel




Re: [PATCH] new data logging and quota patches available

2003-02-22 Thread Manuel Krause
On 02/23/2003 01:50 AM, Manuel Krause wrote:
On 02/22/2003 05:12 PM, Chris Mason wrote:

And here's one more patch for those of you that want to experiment a
little.  reiserfs_dirty_inode gets called to log the inode every time
the vfs layer does an atime/mtime/ctime update, which is one of the
reasons mounting with -o noatime and -o nodiratime makes things 
faster. We had to do this because otherwise kswapd can end up trying 
to write
inodes, which can lead to deadlock as he tries to wait on the log.

One of the patches in my data logging directory is kinoded-8, which
changes things so a new kinoded does all inode writeback instead of
kswapd.
That means that if you apply up to 05-data-logging-36 and then apply
09-kinoded-8 (you won't need any of the other quota patches), you can
also apply this patch.  It changes reiserfs to leave inodes dirty, which
saves us lots of time on atime/mtime updates.
I'll upload this after it gets a little additional testing, but wanted to
include it here in case anyone else was interested in benchmarking.
[11.dirty-inodes-for-kinoded.diff]
Hi Chris!

At least I'm not able to copy through my partition via cp -a ... with 
the last proposed setup + preempt, without quota ...

When umounting the destination partition it says the device is busy.

I don't know what has gone wrong on my repository partition for now, but 
I know from earlier times this should not happen.
Sorry, my report was incomplete, at least in the following respect: 
after thinking over the remount strategy for some minutes with a pizza, 
the partition was umountable again. I don't have exact values, but it 
took under about 15 minutes max., with 3.5GB copied previously.

My setup in fact:
m02-akpm-b_journal_head-1.diff
m03-relocation-6.diff
m04-reiserfs-sync_fs-1.diff
m05-data-logging-36.diff
m06-logging-export.diff
m06-write_times.diff
m09-kinoded-8.diff
m11.inode-dirty-for-kinoded-ML.diff
(prefix m for you)
+ latest preempt patch (what was for 2.4.21-pre1).

Thanks,

Manuel



Re: reiserfs messages cleanup patch.

2003-02-21 Thread Manuel Krause
On 02/21/2003 08:37 AM, Oleg Drokin wrote:

Hello!

On Fri, Feb 21, 2003 at 08:27:32AM +0100, Manuel Krause wrote:



If you're ready with the patch (again) I would be glad to receive the 
preliminary version to use it anyways.


Our tester is just busy with other stuff, so I'll probably release
the patch just for outside testing. If anyone would like it, of course.
(NOTE, THIS WAS ONLY LIGHTLY TESTED BY ME.)
Patch is against latest 2.4 bitkeeper snapshot.
Probably should apply fine to 2.4.21-pre4

Bye,
Oleg



Thank you for the quick thing!

It doesn't apply to my kernel setup [-pre4 + data-logging + preempt]
-- too many hunks failing, in my eyes.

BTW, the directio fix fails on the same setup, too [alternatively].

Maybe I'll have time over the weekend to take a closer look.


Many thanks to you anyways and best regards,

Manuel


 [PATCH: massage-cleanup.diff]




Re: About direntries pointing to nowhere on reiserfs problem in 2.4

2003-02-20 Thread Manuel Krause
On 02/20/2003 03:53 PM, Oleg Drokin wrote:

Hello!

Vladimir has finally tracked the problem down to a race between
two iget4 calls running on the same file whose inode is not in cache.
The sequence of events is like this (UP case):
1st thread:
take inode_lock
search through inode cache, but find nothing.
alloc new inode, mark it as locked.
release inode_lock
call reiserfs_read_inode2().
 do some stuff.
 call search_by_key()
 schedule()

Now 2nd thread comes in:
take inode_lock
search through inode cache, found inode with same inode number.
check that there is find_actor defined for reiserfs.
call find_actor()
 check that the inode's primary key's dir_id is equal to the expected one,
   but at that time this part of the inode is not initialized yet!
   so we return 0;
... 
And we create a second inode for the same file.

   This scenario seems possible for any filesystem that stores some cookie 
   in the private part of the inode and whose read_inode2 can schedule. We 
   checked, and coda seems safe because they take a semaphore in iget().

   So we solved that with the patch below (Zygo, and others who think they 
   have this problem, please check).

   But Vladimir is really unhappy with that comparison with zero and the 
   guessing (though he agrees it is correct, if the FS is undamaged).

   Andrew, Alan: Is there a possibility to have an iget5_locked() kind of 
   interface in 2.4? We need some way to init parts of the inode under 
   inode_lock to solve this problem in a more elegant way. (And inode_lock 
   is not even exported, so I invented another spinlock to guard the 
   atomicity of inode pkey updates on SMP.)

Bye,
Oleg


Hi!

Is this fix safe for usage already?

Chris, can you, please, put out the latency-related patch/diff for 
data=ordered mode soon?

Mmmh. I have had some hangs in KDE 2.2.2 Konqueror when copying over 
(existing) directory links for some weeks now. I don't copy over 
links often, but when it stalls, a directory link is somewhere involved.
(No crashes, no messages in the logs, just minutes for copies or 
deletes that usually happened in seconds.)

Should I worry and use the patch
 -- or finally upgrade my KDE ;-))

Thanks,

Manuel

P.S.: My kernel is 2.4.21-pre4 + all Chris' data-logging patches + 
preempt-patch for -pre1 (in that order).

[PATCH - direntry-fix.diff]



Re: Error - Partition Correspondance [was Re: Corrupted/unreadablejournal: reiser vs. ext3]

2003-02-20 Thread Manuel Krause
On 02/18/2003 07:54 AM, Oleg Drokin wrote:

Hello!

On Tue, Feb 18, 2003 at 12:35:23AM +0100, Manuel Krause wrote:



BTW, do the ReiserFS errors nowadays print out a usable partition 
identification (like Chris actual data-logging patches perform at mount, 
e.g.)?


Sometimes it does.



I mostly always have 2 partitions with ReiserFS mounted, so -- is it 
still meaningless to get an error message related to one of them in my logs?


It depends on what the messages are.



I posted this circumstance some 3.6-ReiserFS levels ago and someone of 
your team wanted to implement this after his task-list was done, IIRC.


Yes. I have a patch dated back to May 7th, 2002. But it was never
accepted, for a reason I don't remember any more.
I will dig through my email, though. Probably I will give it another try.


Yes. I have the original dated back to exactly that time, too. :-))

If you're ready with the patch (again), I would be glad to receive the 
preliminary version and use it anyway.


Thanks for your great work
(applies for all the teams, of course!),

Manuel




What is [PATCH] 02-directio-fix.diff (namesys.com) for?

2003-02-17 Thread Manuel Krause
Hi!

Is this patch from 030213 needed by anyone using ReiserFS within 
2.4.20 and 2.4.21-preX ?

What is DIRECT IO with reiserfs, from the topic line of the patch:
# reiserfs: Fix DIRECT IO interference with tail packing ?

Thanks for the info and best regards,

Manuel

(I hope I didn't miss any hidden announcement...)



Error - Partition Correspondance [was Re: Corrupted/unreadablejournal: reiser vs. ext3]

2003-02-17 Thread Manuel Krause
On 02/17/2003 08:43 PM, Hans Reiser wrote:

Vitaly Fertman wrote:


Ok, so the reiserfs kernel code detects an error on disk, what does it
do?  Print out an error message, maybe BUG?  There is an error field
in the reiserfs superblock, I hope it is set when the kernel detects
something bad.

So, now what happens?  Maybe the user doesn't read their syslog and
doesn't see the error, or the error is just a prelude to memory 
corruption
which causes the system to crash.  When the system boots again, it goes
on its merry way, mounting the reiserfs filesystem with _known_ errors
on it, using bad allocation bitmaps, directories btrees, etc and maybe
double allocating blocks or overwriting blocks from other files causing
them to become corrupt, etc, etc, etc.  Until finally the filesystem is
totally corrupt, the system crashes miserably, the user emails this 
list
and reiserfsck has an impossible job trying to fix the filesystem.

Instead, what I propose is to have reiserfsck -a AS A STARTING POINT
simply check for a valid reiserfs superblock and the absence of the
error flag before declaring the filesystem clean and allowing the
system to boot.

What's even worse, the reiserfs_read_super (at least 2.4.18 RH kernel)
code OVERWRITES the superblock error status at mount time, making it
worse than useless, since each mount hides any errors that were 
detected
before the crash:

s->u.reiserfs_sb.s_mount_state = SB_REISERFS_STATE(s);
s->u.reiserfs_sb.s_mount_state = REISERFS_VALID_FS ;


Andreas seems reasonable, Vitaly, what are your thoughts?

  

Next, add journal replay to reiserfsck if it isn't already there,


Why, when it is in the kernel?
  

Because that is the next stage to allowing reiserfsck do checks on the
filesystem after a crash.  Do you tell me you would rather (and you
must, because it obviously currently does) have reiserfsck just throw
away everything in the journal, leaving possibly inconsistent data in
the filesystem for it to check?  Or maybe make the user mount the
filesystem (which obviously has problems or they wouldn't be running
reiserfsck to do a full check) just to clear out the journal and maybe
risk crashing or corruption if the filesystem is strangely corrupted?


Vitaly, answer this.
  


Ok, so probably we should make the following changes. The kernel sets 
the IO_ERROR and FS_ERROR flags. In the case of IO_ERROR, reiserfsck 
prints a message about hardware problems and returns an error, so the 
fs does not get mounted at boot. On an attempt to mount the fs with the 
IO_ERROR flag set, it is mounted ro with some message about hardware 
problems. When you are sure the problems have disappeared, you can 
mount it with a special option clearing this flag, and probably 
reiserfstune will have some option clearing these flags also.
In the case of FS_ERROR (search_by_key failed, or beyond-end-of-device 
access, or similar), reiserfsck gets the -a option at boot, replays the 
journal if needed and checks for the flag. No flag: return OK. Else: 
run fix-fixable. Errors left: return 'errors left uncorrected' and the 
fs does not get mounted at boot. On an attempt to mount the fs with the 
flag set, just print a message about mounting the fs with errors and 
mount it. Not ro here, as the kernel will not do deep analysis of the 
errors and it could be just a small, insignificant error.

 

Sounds good to me.  Do it.  Reiser4 also.


Hi!

BTW, do the ReiserFS errors nowadays print out a usable partition 
identification (like Chris actual data-logging patches perform at mount, 
e.g.)?

I mostly always have 2 partitions with ReiserFS mounted, so -- is it 
still meaningless to get an error message related to one of them in my logs?

[For a long time now (more than 6 months) I have not gotten any ReiserFS 
errors any more, even with data-logging and the preempt-kernel applied -- 
I only read about them on the list. So I don't know the real meaning of 
the error messages' variable content any more... :-( or really :-)))   ]

I posted this circumstance some 3.6-ReiserFS levels ago and someone of 
your team wanted to implement this after his task-list was done, IIRC.

So, if it's not implemented explicitly so far, this would seem 
valuable for users, too, IMO.


Best regards,

Manuel



Re: OT: Swapfile to RAM relation with 2.4.2X+

2003-02-07 Thread Manuel Krause
Thank you for your reply, Russell!

On 02/07/2003 12:09 PM, Russell Coker wrote:

On Fri, 7 Feb 2003 02:47, Manuel Krause wrote:


In the beginning of 2.4.0+ a swapfile-to-RAM ratio of 2-to-1 was
recommended. Due to my several system changes to come in those times I


Such recommendations are only generalisations.  Ignore them and look at what 
your system is doing.  If your swap space never runs out and you don't expect 

So far, I followed these thoughts, as I always seemed to have enough swap 
space under this interpretation of swap usage. It never ran out.
(Except for buggy applications, e.g. Netscape 6 betas, that sometimes 
first filled RAM to the max and then the swap ... finally stalling the 
system.)

your usage patterns to require more (including cron jobs and periods of 
unexpected load) then you have enough.  If you run out of swap space then you 
need more, also you should have some swap even if you have a lot of memory.  
There's always data that isn't used much and can be paged out to make room 
for more disk cache.
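Russell's rule of thumb -- size swap by what the system actually uses, not by a fixed ratio -- can be checked with a small script. This is just a sketch: it assumes a Linux /proc/meminfo with the standard SwapTotal/SwapFree fields, and the swap_usage function name is my own.

```shell
#!/bin/sh
# swap_usage [meminfo-file]: report used swap as a share of the total.
# Defaults to the live /proc/meminfo; a file argument eases testing.
swap_usage() {
  awk '/^SwapTotal:/ { total = $2 }
       /^SwapFree:/  { free  = $2 }
       END {
         if (total == 0) { print "no swap configured"; exit }
         used = total - free
         printf "swap used: %d of %d kB (%.0f%%)\n", used, total, 100*used/total
       }' "${1:-/proc/meminfo}"
}
[ -r /proc/meminfo ] && swap_usage
```

If the percentage stays well below 100% even under your worst expected load, more swap would buy nothing.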

Something unusual happened here this afternoon. Trying an unexpected 
load: me editing an image with Gimp, opening a large chart in 
OpenOffice, a VMware session doing disk defragmentation, KDE2 running, and some 
other programs ... and Netscape 7.01 with several plugins, also via 
CrossOver in separate plugin servers, for some hours. In the end I had ~75% 
of the new, huge swapfile filled.
I had reached this _rate_ from time to time before, but never more. 
Today, closing the applications step by step revealed that Netscape and its 
plugins had about 256MB of swap in use! (Of course I know NS6++ has always 
been a memory hog.) It was an acoustic and visual experience, 
listening to the two disks' activity and watching KSysguard display the actions.

I just want to report back, and now find it quite funny to have had at most 
a 75% filled swap so far, then to repartition to double the Linux 
swap, and to have applications that use it up that excessively... and still 
end up with at most a 75% filled swap.

;-))

Best regards,

Manuel


BTW  Anything that is worth saying in a .sig can be said in 4 lines. 

Yes, it should. But it was not intended as a .sig.




OT: Swapfile to RAM relation with 2.4.2X+

2003-02-06 Thread Manuel Krause
Hi everyone!

Maybe I should address someone else with this question, but maybe someone 
on this list can answer it quickly from his own experience:

In the early 2.4.0+ days a swapfile-to-RAM ratio of 2-to-1 was 
recommended. Because of the several system changes coming up at that time, I 
declined to implement this setting immediately when it was suggested 
(money missing for notebook RAM and for the required disk space). I had 
256-512MB RAM and always 256MB swap.

Last weekend I implemented at least a 1:1 ratio of swap to RAM 
(512:518 on here, now). I now see swap space filling up more, and a 
bit more quickly, but no subjective advantage over my previous 
setup. Mmh. Linux is now using more RAM for (sometimes also VMware 
related) disk cache.

I don't know: is that all there is to the swap:RAM ratio?! No real advantage???

Manuel


-
I want to express my personal feelings on here - me just being wordless 
- just praying - concerning the loss of Columbia: I feel with the 
families, the team; and the future of NASA / ESA / or any further manned 
space mission to come, mourning and hoping.
I'm also hoping for more peace on earth, in the near and far east; we should 
strictly take care of that, in hopefully non-violent diplomacy and no 
aggressions between any party or any ally being in freedom and peace...
-



Re: Crash: the problem was DMA!

2003-01-25 Thread Manuel Krause
On 01/24/2003 11:28 PM, Francois-Rene Rideau wrote:
 Dear reiserfs developers,

 here's an update on the trouble I had lately (the second disk crash):
 the culprit was IDE DMA!

[snip: success story  reiserfsck did a good job]

 PS: YES, my old kern.log's from before the BIOS update do show
 hda: 240121728 sectors (122942 MB) w/2048KiB Cache, CHS=14946/255/63,
 UDMA(66)
 While the newer one lack the UDMA(66).
 VP_IDE: VIA vt82c686a (rev 1b) IDE UDMA66 controller on pci00:04.1
 In case anyone cares, that's an ASUS K7M motherboard.
 Darn. I still have this performance bug - but at least, the
 computer works.
 Advice welcome, though it's becoming off-topic (so private message
 might be more suited).

Hi!

Ever since I got this cheap notebook (Clevo 8500V, Taiwan), I've been
fiddling to work around the buggy chipset and/or the buggy BIOS.

I have a
   VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.1

and @ ide0:
   hda: 39070080 sectors (20004 MB) w/2048KiB Cache, CHS=2432/255/63,
UDMA(66)
@ ide1:
   hdd: 35433216 sectors (18142 MB) w/418KiB Cache, CHS=35152/16/63,
UDMA(66)
   hdc: ATAPI 24X DVD-ROM drive, 128kB Cache, UDMA(33)

BIOS settings have no effect on my disk configuration under
Linux. Under Win98 I get severe transfer timeouts when hdc has a wrong
(too high) UDMA setting in the BIOS.

I mainly use the following to configure my chipset/disks under Linux:
* Kernel: append = ide0=ata66 ide1=ata66 [...]
  config: CONFIG_BLK_DEV_VIA82CXXX=y
  and the DMA related stuff, of course.
* hdparm @bootup:  -qB255qS0qW1qa16qd1qu1qK1qk1qX68 /dev/hda
   -...  ...X67 /dev/hdd
   -qa16qd1qu1qK1qk1qX67 /dev/hdc
   (I know each of them is over-tuned in -X by at least 1 ...
but that way the hardware- or kernel-based maximum settings
stay in effect. If I issue values at or above the maximum
rate, I get no functional errors, but the original
values remain.)
  -- see ftp://sunsite.unc.edu/pub/Linux/system/hardware/ for
  the latest release of hdparm
* powertweak settings
  -- http://powertweak.sourceforge.net/

(Some of the settings may be outdated by now; I've kept them out of
tradition. Oh yes, and they may be dangerous!!! Backups...)
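Since these settings can indeed be dangerous, a dry-run wrapper that prints each hdparm invocation before applying anything may help. A sketch: the tune function and the DRYRUN variable are my own invention, and the flags shown are merely illustrative values in the spirit of the ones quoted above.

```shell
#!/bin/sh
# tune: echo the hdparm command in dry-run mode, execute it otherwise.
DRYRUN=${DRYRUN:-1}
tune() {
  if [ "$DRYRUN" = 1 ]; then
    echo "would run: hdparm $*"
  else
    hdparm "$@"
  fi
}
# Example invocations (adjust the transfer mode per drive):
tune -qd1 -qu1 -qX68 /dev/hda
tune -qd1 -qu1 -qX67 /dev/hdd
```

Running with DRYRUN=0 (as root) would actually apply the settings.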

cat /proc/ide/via gives me some hints if I have really misconfigured
something, but it doesn't show the real conditions concerning the transfer
rate.

In former times I tried many ways of adjusting the parameters to get equal
rates for hda and hdd for testing purposes. But the transfer-rate
ratio between hda and hdd (based on hdparm -t /dev/drivename and
real-world ReiserFS copy sessions) _always_ stays between 1.3 and 1.45.


Maybe one of the settings shown above can at least free up your machine's
general performance.


Best wishes and happy testing,

Manuel





01-iput-deadlock-fix data-logging

2003-01-24 Thread Manuel Krause
Hi all!

When I applied the new 01-iput-deadlock-fix from
ftp://ftp.namesys.com/pub/reiserfs-for-2.4/2.4.20-pending/01-iput-deadlock-fix.diff

on 2.4.21-pre3 - together with Chris Mason's data-logging patches - I 
saw that the main data-logging patch didn't apply without rejects. This 
also affects 2.4.20.

Without knowing what I was doing (as ever ;-) ) I changed the related 
patch lines and now put the repacked results here for anyone interested:

for 2.4.20: 
http://manuel.krause.bei.t-online.de/downloads/2.4.20/03-data-logging-32b.diff.gz

for 2.4.21-pre(3):
http://manuel.krause.bei.t-online.de/downloads/2.4.21-pre/05-data-logging-33b.diff.gz


At first I applied the 01-iput-deadlock-fix.diff and then all the 
data-logging stuff. The kernel compiled and works for me.

I hope there is no mistake with it!


Bye,

Manuel



Re: what do you do that stresses your filesystem?

2002-12-23 Thread Manuel Krause
On 12/23/2002 12:28 PM, Hans Reiser wrote:

We were discussing how to optimize reiser4 best, and came to realize 
that we developers did not have a good enough intuition for what users 
do that stresses their filesystem enough that they care about its 
performance.

If you just do edits of files it probably does not matter too much what 
fs you use.

Booting the machine seems like one activity that many users end up 
waiting on the FS for.  Yes?

Starting up complex and big applications like xemacs and mozilla would 
be another.  Yes?

Others?

Has anyone mentioned VMware sessions yet? ;-)

Running VMware on a Linux host would be an example. The application 
itself is no problem for the fs. But consider the case when the VMware 
guest OS is under high memory usage and disk I/O, and VMware swaps out to 
a file in /tmp on the host OS. Usually this happens at the same time 
that Linux swaps out the unneeded things, on here: parts of KDE and 
Netscape7+.
So it is not a pure fs stress test at all. I can suggest running SpeedDisk 
on your fragmented Win98 disks from within VMware to get this scenario. 
(Then switching to a mail composer window under Linux and waiting for 
the cursor ... takes ... some time...)

Regards,

Manuel


Hans

PS

reiser4 performance is up a lot recently, and within two weeks I think 
cp -r will have been optimized as much as is worth doing.  cp -r 
accesses files in readdir order, and that does indeed seem worth 
optimizing, but soon we will need to optimize more sophisticated access 
patterns than that.

Hans




Re: Getting very interesting - Poor read performance

2002-11-07 Thread Manuel Krause
Hi Naoki and all others!

I like your kind of testing very well!

But, mainly, kernel 2.4.2 (??? really ???) is very, very outdated in 
terms of recent reiserfs bug-fixing activity. Maybe the 
earlier bugs yielded some speed advantage due to missing checks. ;-)

You may want to try the stock 2.4.19 kernel, which itself was really fast 
with reiserfs, and safe! Apply the fixes from
  ftp://ftp.namesys.com/pub/reiserfs-for-2.4/2.4.19.pending !

If you also try the additional reiserfs mount option notail, you 
could get another speedup. You should then rebuild your original disk 
content (copy to a spare partition and copy back, e.g.).

If you want to push the speedups further, try the data-logging patches from 
Chris Mason for 2.4.19, found in the base directory
  ftp://ftp.suse.com/pub/people/mason/patches/data-logging
(Some of the original reiserfs patches (the namesys ones) won't apply 
afterwards - skip them by trial and error.)
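The "skip them by trial and error" step can be automated with a dry run. A sketch, assuming GNU patch and a directory of .diff files; apply_pending is a name I made up:

```shell
#!/bin/sh
# apply_pending DIR: apply each *.diff in DIR with patch -p1, dry-running
# first and skipping any patch that no longer applies cleanly.
apply_pending() {
  for p in "$1"/*.diff; do
    [ -e "$p" ] || continue
    if patch -p1 -N --dry-run -s < "$p" >/dev/null 2>&1; then
      patch -p1 -N -s < "$p"
      echo "applied: $p"
    else
      echo "skipped: $p"
    fi
  done
}
```

The -N flag makes patch treat an already-applied patch as a failure instead of prompting, which is what lets the dry run act as the trial.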

Best wishes,

Manuel


On 11/08/2002 01:23 AM, Naoki wrote:
Aha, very good point.

Now strace'ing the 'wc' I see the same on both servers. Just a bunch of 
'read'
sys calls.

So why would 'read' generate such a different user time on the two machines?

Trying some tests with larger files, where the problem becomes far 
more evident:

'New' box :

# wc -l bigfile

 290314 bigfile

real    0m23.047s
user    0m16.072s
sys     0m0.348s

# mount
/dev/sda3 on / type reiserfs (rw,noatime,nodiratime,nolog)

# strace -c wc bigfile
execve("/usr/bin/wc", ["wc", "bigfile"], [/* 23 vars */]) = 0
 290314 3127604 40121984 bigfile
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 68.33    0.104694          43      2453           read
 31.51    0.048270          76       637           write
  0.07    0.000113           9        13         7 open
  0.02    0.000029          10         3           munmap
  0.02    0.000028           7         4           mmap2
  0.02    0.000025           5         5           old_mmap
  0.01    0.000020           3         6           fstat64
  0.01    0.000013           2         6           close
  0.01    0.000010           3         4           brk
  0.00    0.000005           5         1           mprotect
  0.00    0.000004           4         1           uname
------ ----------- ----------- --------- --------- ----------------
100.00    0.153211                  3133         7 total



Old box :

# time wc -l /root/bigfile
 290314 /root/bigfile
0.68user 0.09system 0:00.86elapsed 88%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (114major+19minor)pagefaults 0swaps

And again :

[root@banner23 logs]# strace -c wc /root/bigfile
execve("/usr/bin/wc", ["wc", "/root/bigfile"], [/* 20 vars */]) = 0
 290314 3127604 40121984 /root/bigfile
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.59    0.100965          41      2453           read
  0.16    0.000161          12        13         2 open
  0.07    0.000075          15         5           brk
  0.06    0.000062           5        13           old_mmap
  0.03    0.000031           3        11           close
  0.02    0.000023           3         9           fstat
  0.02    0.000019           6         3           munmap
  0.01    0.000015          15         1           write
  0.01    0.000015           5         3           mprotect
  0.01    0.000008           4         2           fstat64
  0.00    0.000005           5         1           ioctl
  0.00    0.000002           2         1           getpid
  0.00    0.000002           2         1           personality
------ ----------- ----------- --------- --------- ----------------
100.00    0.101383                  2516         2 total


# mount
/dev/sda3 on / type reiserfs (rw,noatime,nodiratime)
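As a sanity check on those numbers: wc reads the file in st_blksize-sized chunks, so the read() count should be roughly the file size divided by the I/O block size. A sketch of the arithmetic -- the 16 kB block size is my assumption about st_blksize on this filesystem, and the few extra reads would come from loading shared libraries:

```shell
# 40121984-byte file read in 16384-byte chunks:
awk 'BEGIN {
  size = 40121984; bs = 16384
  printf "expected read() calls: ~%d\n", int((size + bs - 1) / bs)
}'
```

That gives ~2449, close to the 2453 read calls strace counts on both boxes -- so the reads really are identical, and only the extra write calls need explaining.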


That's messed up! Why are they doing exactly the same # of 'read' calls
but my newer system has decided to add 637 'write' calls ??

What reiserfs / mount options should I try to get this at a more normal 
level ?


-n.

[EMAIL PROTECTED] wrote:

On Thu, 07 Nov 2002 19:59:55 +0900, Naoki [EMAIL PROTECTED]
said:

 

real    0m3.059s
user    0m2.543s == say what???
sys     0m0.064s

0.25user 0.04system 0:00.38elapsed 76%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (115major+19minor)pagefaults 0swaps
I'd start by investigating why on the new box the 'user' time is 2.5+
seconds.

Notice that the 'system' times on both boxes are comparable (0.06 versus
0.04 - low enough that timer resolution probably matters in any jitter
in
the measurements).

Does 'which wc' show you running something other than /usr/bin/time on
the
new box?
 





[reiserfs-list] What about 2.4.20.pre8+data-logging ?

2002-10-01 Thread Manuel Krause

Hi all!

Just a simple question, per the subject line. I want to test it on here 
NOW!!!



Previous simple testing data (simple, though I got much very 
helpful advice and many hints from Oleg Drokin) for data-logging on a real 
partition can be read at:
   http://manuel.krause.bei.t-online.de/topics/sw-tests.2.htm
  and the parameters are shown on the next page (sw-tests.3.htm).
Maybe I posted an unreliable variant of that to the list before. But 
this is my current competitive comparison on the same disk content.

The disk content hasn't been altered yet. Waiting for the next data-logging 
patch, my / is filling up. File size distribution data is available upon 
request. (BTW, is it wanted that I capture it along with the timings 
that I take anyway? One more command line for this... and gzipped on 
the page it doesn't make a great difference. Compared with debugreiserfs' 
output... wow!  ;-)  )



Please, correct me if something went definitely wrong with this testing 
or this page.


Thank you,

Manuel




Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 - 2.4.19+data-logging(was:Compatibility of current 2.4.19.pending ...)

2002-09-22 Thread Manuel Krause

On 09/19/2002 06:01 PM, Oleg Drokin wrote:
 Hello!
 
 On Thu, Sep 19, 2002 at 05:52:15PM +0200, Manuel Krause wrote:
 
[...]

Thanks for these reminders. Originally I wanted to copy data from one 
disk to another and on the way capture the timings to compare the 
throughput... Then you pointed me to re-check pure reads and pure writes 
mainly to make sure the writes were not read-speed bound and to compare 
this behaviour on -pre6 and -pre7.
 
 Originally I was mostly interested in reads from the source fs, not the one
 where you have copied the data (though that one might also be useful).
 
 Ok, thank you for lots of testing.
 
 Bye,
 Oleg
 

Hi Oleg  others!

O.k., somehow I managed to find the script to make the scripts to 
measure writes from nowhere to the target disk. I decided to make up 2 
scripts - one for the dirs and one for the files - originally for the 
comparison of pure file writes against the reads (read below), including 
the umounts. They contain just the lists of commands Oleg posted 
recently (e.g. mkdir /mnt/beta/dir1 /// dd if=/dev/zero count=1 
bs=8466760 of=/mnt/beta/dir1/.../filename1 ). And I re-used Oleg's 
posted command to measure reads from the source disk for the most recent 
reiserfs kernel versions.

The results are shown below.

I'm not very happy comparing the pure reads against the writes, 
especially when looking at the copy timings, as different commands are 
involved and their (relative) overhead is quite unknown -- am I right? (So, 
e.g., I didn't want to take these values' ratio to tweak the disks' 
read and write latency settings for now.) CPU usage during the dd... and 
the find...cat commands is very high. Also, the disk access as watched 
in ksysguardd differs across all these kinds of tests.

So read and compare it yourself; I would be glad to get comments on 
how I should refine it.


Good night,

Manuel

--

/dev/hda11 5550248   3927572   1622676  71% /mnt/beta
/dev/hdd11 5550248   3927572   1622676  71% /mnt/gamma
containing 58015 files, 3481 directories

kernel 2.4.19-  2.4.20-pre6  2.4.20-pre7
data-logging

# time sh -c "cp -ax /mnt/gamma/. /mnt/beta/ ; umount /mnt/gamma ; 
umount /mnt/beta"
(representing copies from source to target disk)

real   7m46.970s    10m57.716s   10m04.328s
user   0m01.710s    0m01.390s    0m01.540s
sys    1m25.440s    1m11.100s    1m18.840s

# time /tmp/script.dirs.sh
(representing directory writes to target disk)

real   0m09.972s    0m10.055s    0m10.316s
user   0m04.960s    0m04.770s    0m04.880s
sys    0m04.370s    0m04.590s    0m04.550s

# time sh -c "/tmp/script.files.sh ; umount /dev/hda11 ; umount /dev/hdd11"
(representing file writes to target disk)

real   7m35.992s    8m24.499s    8m20.830s
user   1m31.100s    1m30.120s    1m32.280s
sys    2m21.480s    1m42.230s    2m02.660s

# cd /mnt/ ; time sh -c "find ./gamma/* -type f -exec cat {} > /dev/null 
\; ; umount /dev/hda11 ; umount /dev/hdd11"
(representing reads from source disk)
real   8m34.665s    9m54.103s    9m50.796s
user   1m11.650s    1m10.760s    1m11.100s
sys    1m15.000s    1m29.960s    1m17.370s


I took care to freshly mount the participating partitions for each
test, to recreate and mount+umount the counterpart reiserfs
partition beforehand when appropriate, and had no overwrites and no
other accesses to the source partition during this test.

Going back from 2.4.20-pre7 to 2.4.19-data-logging, and
being in doubt about the effects of the new block allocator, I
recreated the source filesystem completely (copy to a new
target and copy back) to measure the above data-logging
values. The first copy from the former 2.4.20-pre7 reiserfs
to 2.4.19-data-logging showed these timings:
   real    7m38.715s
   user    0m2.030s
   sys     1m29.560s

Command to create the /tmp/script.[dirs,files].sh :
sh -c 'cd /mnt/gamma ; find * -type d -fprintf /tmp/script.dirs.sh 
"mkdir \"/mnt/beta/%p\"\n" ; find * -type f -fprintf 
/tmp/script.files.sh "dd if=/dev/zero count=1 bs=%s 
of=\"/mnt/beta/%p\"\n" ; chmod u+x /tmp/script.*.sh'

--
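With quoting restored, a self-contained version of that generator might look like this. A sketch: SRC/DST default to the paths used above, gen_replay is my own name, mkdir gains -p so reruns don't fail, and GNU find's -fprintf/%p/%s directives are assumed:

```shell
#!/bin/sh
# gen_replay SRC DST: write /tmp/script.dirs.sh and /tmp/script.files.sh,
# which recreate SRC's directory tree and file sizes under DST using
# /dev/zero as data (same scheme as the dd commands above).
gen_replay() {
  src=${1:-/mnt/gamma} dst=${2:-/mnt/beta}
  ( cd "$src" || exit 1
    find . -type d -fprintf /tmp/script.dirs.sh "mkdir -p \"$dst/%p\"\n"
    find . -type f -fprintf /tmp/script.files.sh \
        "dd if=/dev/zero count=1 bs=%s of=\"$dst/%p\" 2>/dev/null\n" )
  chmod u+x /tmp/script.dirs.sh /tmp/script.files.sh
}
```

Running /tmp/script.dirs.sh and then /tmp/script.files.sh on the target replays the tree with zero-filled files of the original sizes (note: a zero-byte source file would produce an invalid bs=0 for dd, as in the original commands).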




Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 - 2.4.19+data-logging(was:Compatibility of current 2.4.19.pending ...)

2002-09-19 Thread Manuel Krause

On 09/19/2002 08:34 AM, Oleg Drokin wrote:
 Hello!
 
 On Thu, Sep 19, 2002 at 03:14:56AM +0200, Manuel Krause wrote:
 
 
And you can run resulting script in target dir.

Yes, I saw this work in a nightmare last night. Scheduled for some dark, 
moonless, cold snow-flurry winter night, sorry. Unless someone 
experienced would like to provide me with a basic script for that... ;-))
 
This dataset is way too small and entirely fits into your RAM I presume.

Yes, it fits. I know that problem with this RAM based test. Though I may 
increase the testing directory a bit closer to the OOM limit, having 
512MB available.
 
 No, this is not enough of course since some data will remain unflushed and
 amount of such data is relatively big compared to total amount of data.

And if the participating filesystems are umounted after writing the data?

So to avoid any distortion of results, you'd better have all periodic stuff
disabled (though kupdated is still there), so it's better to run it several
times.
Also, since it fits into RAM, it must be flushed out, so I usually do this
using such a command:
time sh -c "cp -a /testfs0/linux-2.4.18 /mnt/ ; umount /mnt"

Couldn't you have written these words to me some years earlier?! The 
effect is measurable, and for almost every fs interaction discussed so 
far it is huge, or at least relevant. So, after reviewing my 
partition-backup scripts: forget _all_ the results I posted to the list. 
They're all lacking the umount=flush component.
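For a partition that cannot conveniently be unmounted, bundling a sync into the timed command gives a similar flush effect. A sketch; timed_flush is my own helper name, not something from the thread, and it uses date arithmetic instead of the shell's time keyword:

```shell
#!/bin/sh
# timed_flush CMD...: run CMD plus a sync and report wall-clock seconds,
# so the measurement includes writing dirty pages out, like the
# "cp ... ; umount" pattern above.
timed_flush() {
  start=$(date +%s)
  sh -c "$* && sync"
  end=$(date +%s)
  echo "elapsed: $((end - start))s"
}
# Example (paths are placeholders):
# timed_flush 'cp -ax /mnt/gamma/. /mnt/beta/'
```

The one-second resolution is coarse, but for multi-minute copy runs like the ones in this thread it is plenty.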
 
 It is only needed if data to be cached is big enough to be noticed when compared
 to total amount of data to be copied.
 
No. /mnt/beta/ is my software storage partition and contains this:
 /dev/hda11   5550248   4089088   1461160  74% /mnt/beta .
 
 Ah!
 
Oh. O.k. you get a definite No on this, sorry. I just reviewed the 
debugreiserfs' output file content and I would not send or publish this 
in any way. It is simply too sensitive as it contains direct file and 
directory names.
 
 No, then I do not need that debugreiserfs dump anyway.
 
 But here is another warning:
 I presume that before each copy test is done, /mnt/beta/z.Backup.3 is removed
 completely and /mnt/beta is unmounted and mounted back, and also that
 between several writing attempts (and during these attempts, of course)
 no other processes can write to this FS.

These clauses are both true and apply to the dd tests, too. Mmh, except 
for the case when /mnt/beta/z.Backup.3 or testfile.zero were the source 
to be copied/dd'ed... There haven't been overwrites or other writes to 
the disk during the testsets.

 If those two above clauses are not true, then results are also meaningless,
 as lots of unnecessary tree reads are issued for overwrite and new blocks are
 not allocated, but existing ones are reused.
 If somebody can write to FS, then with every next test blocks chosen for files
 are different (old ones may be already occupied).
 
Is it possible to provide the needed info without clear directory or 
file names in the future?! (These names replaced by sequentially assigned 
numbers?)
 
 In such a case you can determine the object id of the big file (shown to userspace
 as the inode number) and only provide its SD and indirect items:
 |  9|4 357 0x0 SD (0), len 44, location 1572 entry count 65535, ...
 | 10|4 357 0x1 IND (1), len 504, location 1068 entr
 126 pointers
 [ 9948(126)]
 
 This is an example of a file with objectid 357 that is 126 blocks in size.
 Blocks 9948-10074 (all contiguous) are used.
 
 If the file is very big, there will be several IND (indirect) items in other
 nodes; the number in brackets will change to show the offset that each INDIRECT
 item starts with.
 
Comparison of dd actions:
---
reading command: time sh -c "dd if=/mnt/beta/testfile.zero bs=1M
 count=1000 of=/dev/null ; umount /mnt/beta"
writing command: time sh -c "dd if=/dev/zero bs=1M count=1000
 of=/mnt/beta/testfile.zero ; umount /mnt/beta"
 
 I presume you erased /mnt/beta/testfile.zero between tests and executed
 sync.

I umounted the partition and mounted it back. I thought that would be 
the right action to avoid what you describe in the following:

 Ah, before I forget - in reiserfs, blocks that were freed when you erased
 something only come back to you on the next journal flush or after a sync.
 So if you do something like this:
 rm -f /mnt/beta/testfile.zero ; time sh -c "dd ...",
 then the second file will get different block numbers.
 
related df values:
/dev/hda11   5550248   4089088   1461160  74% /mnt/beta
/dev/hda11   5550248   5114104    436144  93% /mnt/beta
Yes, that's going over the sensible filesystem content value.
 
 Hm. This is before and after dd command or what?

Yes, without and with the 1G file.

Comparison of cp -a actions:
--
reading command: time sh -c "cp -a /mnt/beta/z.Backup.3/. /mnt/ramfs/ ;
 umount /mnt/beta ; umount /mnt/ramfs"
writing command: time sh -c "cp -a /mnt/ramfs/. /mnt/beta/z.Backup.3/ ;
 umount /mnt/beta ; umount /mnt/ramfs"
 
 You mean you executed your commands

Re: [reiserfs-list] Copy time comparison 2.4.20-pre6 - 2.4.19+data-logging(was:Compatibility of current 2.4.19.pending ...)

2002-09-18 Thread Manuel Krause

Hi!

Even at the risk of making this mail unreadable, I want to answer 
everything inline. Please don't complain.

I feel the urgent need to correct at least my latest testing results, as 
your last mail revealed heavy errors in my timing tests and kind of 
opened my eyes.

On 09/18/2002 07:20 AM, Oleg Drokin wrote:
 Hello!
 
 On Tue, Sep 17, 2002 at 07:39:39PM +0200, Manuel Krause wrote:
 
Copy same amount of data from RAM/nowhere to FS.
E.g. make a file with file names and sizes and write a script that
writes this amount of data from /dev/zero with these same names and needed 
sizes into FS. (or just use RAMFS as your source if you have not much data 
and huge RAM)

To be honest, this already exceeds my linux knowledge...
 
 I meant something to this effect:
 You run a script over your filesystem that creates a shell script;
 it first recreates the whole dir structure of the source dir and then,
 for each file, contains the command needed to recreate a file of the same size.
 E.g. for this directory contents:
green@angband:~/z$ ls -lR 
[...]
 Result of the work of the script would be:
 mkdir t
 mkdir t/z
 dd if=/dev/zero of=t/z/inode.c bs=69570 count=1
 dd if=/dev/zero of=t/z/stree.c bs=66478 count=1
 dd if=/dev/zero of=t/z/tail_conversion.c bs=10256 count=1
 
 And you can run resulting script in target dir.

Yes, I saw this work in a nightmare last night. Scheduled for some dark, 
moonless, cold snow-flurry winter night, sorry. Unless someone 
experienced would like to provide me with a basic script for that... ;-))

I was fiddling with some test directories containing 195.8MB I copied to 
and from /dev/shm with swap turned off.

# time cp -a /dev/shm/. /mnt/beta/z.Backup.3/
kernel 2.4.20-pre7  | kernel 2.4.20-pre6
real    0m9.006s    | real    0m6.740s
user    0m0.190s    | user    0m0.230s
sys     0m5.250s    | sys     0m4.780s
# rm -r /dev/shm/*
# time cp -a /mnt/beta/z.Backup.3/. /dev/shm/
kernel 2.4.20-pre7  | kernel 2.4.20-pre6
real    0m6.349s    | real    0m6.180s
user    0m0.210s    | user    0m0.220s
sys     0m2.450s    | sys     0m2.510s
 
 This dataset is way too small and entirely fits into your RAM I presume.

Yes, it fits. I know that problem with this RAM based test. Though I may 
increase the testing directory a bit closer to the OOM limit, having 
512MB available.

 So to avoid any distortion of results, you'd better have all periodic stuff
 disabled (though kupdated is still there), so it's better to run it several
 times.
 Also, since it fits into RAM, it must be flushed out, so I usually do this
 using such a command:
 time sh -c "cp -a /testfs0/linux-2.4.18 /mnt/ ; umount /mnt"

Couldn't you have written these words to me some years earlier?! The 
effect is measurable, and for almost every fs interaction discussed so 
far it is huge, or at least relevant. So, after reviewing my 
partition-backup scripts: forget _all_ the results I posted to the list. 
They're all lacking the umount=flush component.

Now, you caught me as the fool of reiserfs-list... Quite embarrassing. 
Mmmh. Painful.

# time dd if=/dev/zero bs=1M count=1000 of=/mnt/beta/testfile.zero
kernel 2.4.20-pre7  | kernel 2.4.20-pre6
real    1m11.390s   | real    1m42.011s
sys     0m11.230s   | sys     0m5.620s
 
 Hm. While system time is less as expected, real time increased, that's strange.
 
# time dd of=/dev/null bs=1M if=/mnt/beta/testfile.zero
kernel 2.4.20-pre7  | kernel 2.4.20-pre6
real    1m16.738s   | real    1m39.094s
sys     0m5.460s    | sys     0m5.930s
 
 And real time is bigger for reads too, so it seems data layout is different.
 
 That's really strange. If you can reproduce this behaviour, I am interested
 in getting debugreiserfs -d output for each case after you umount this volume
 (I assume that the 2 /mnt/beta/ filesystems contain nothing but this testfile.zero
 file).

No. /mnt/beta/ is my software storage partition and contains this:
  /dev/hda11   5550248   4089088   1461160  74% /mnt/beta .

I have no means to put this complete debugreiserfs -d /dev/hda11 
output set -- 42MB for each of the four, if I read your wording correctly 
(-pre6 without the 1G file, -pre6 with the 1GB file, -pre7 without the 1GB 
file, -pre7 with the 1GB file) -- on my web space. As .tar.gz it's 4MB each, 
and even that set doesn't fit on my private t-online website. Maybe it would 
work if sent sequentially by mail.
Oh. O.k. you get a definite No on this, sorry. I just reviewed the 
debugreiserfs' output file content and I would not send or publish this 
in any way. It is simply too sensitive as it contains direct file and 
directory names.
Is it possible to provide the needed info without clear directory or 
file names in the future?! (These names replaced by sequentially assigned 
numbers?)
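That replacement could be done mechanically. A rough and purely hypothetical sketch -- it assumes the names appear as double-quoted strings in the dump, which may not match the real debugreiserfs output format:

```shell
#!/bin/sh
# anonymize: replace each distinct double-quoted name on stdin with a
# sequential token ("name1", "name2", ...), keeping everything else.
anonymize() {
  awk '{
    for (i = 1; i <= NF; i++)
      if ($i ~ /^".*"$/) {          # a quoted file or directory name
        if (!($i in map)) map[$i] = "\"name" ++n "\""
        $i = map[$i]
      }
    print
  }'
}
```

One caveat: reassigning a field makes awk rebuild the line, so runs of whitespace collapse to single spaces; for a column-aligned dump a stricter rewrite would be needed.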



O.k., too many words about unneeded things. I've redone the testing I 
posted and made sure to umount the participating partition in between in 
order to force the needed flush, now capturing that time, too. My 
previously posted values have not been reproducible, if I review the new
Re: [reiserfs-list] reiserfsprogs-3.6.4-pre2

2002-09-17 Thread Manuel Krause

On 09/17/2002 03:23 PM, Vitaly Fertman wrote:
 Hi all,
 
 A new reiserfsprogs pre release is available at
 ftp.namesys.com/pub/reiserfsprogs/pre/reiserfsprogs-3.6.4-pre2.tar.gz
 
 Changes went into 3.6.4-pre2:
 
 fix-fixable sets correct item formats in item headers if needed.
 rebuild got some extra checks for invalid tails on pass0.
 fsck check does not complain on wrong file sizes if safelink exists.
 check the DMA mode/speed of the hard drive and warn the user if it decreased 
   -- it could happen due to some hardware problem.
 
 Bugs:
 
 during conversion of tails to indirect items on pass2 and back conversion 
   on the semantic pass.
 improper cleaning of flags in item headers.
 during relocation of shared objects.
 new block allocation on pass2 (a very rare case).
 


Hi Vitaly!

Does this mean these bugs are
  [a] only in 3.6.4-pre2, i.e. newly introduced
  [b] in 3.6.4-pre2 and the previous -pres
  [c] or even in the 3.6.3 release, too
???

No, I'm not doubting reiserfsck's development efforts in general; I just 
want to know for sure.

Thanks,

Manuel



 Changes went into 3.6.4-pre1:
 
 Correction of nlinks on fix-fixable was disabled, because
   fix-fixable zeroes nlinks on the first pass and wants to
   increment them on semantic pass. But semantic pass is skipped
   if there are fatal corruptions.
 Exit codes were fixed.
 Warning/error messages were changed to more user friendly form.
 
 Changes which got into 3.6.3-pre1, but were not included into the 
 release:
 
 Great speedups for pass2 of reiserfsck.
 
 Thanks,
 Vitaly Fertman
 




[reiserfs-list] compatibility of 2.4.20-pre2/.19.pending data-logging?

2002-08-14 Thread Manuel Krause

On 08/14/2002 04:49 PM, Oleg Drokin wrote:
 Hello!
 
 After some time of development and internal testing, I am releasing
 this in public (obviously before pushing it to Marcelo).
 There are two patches at
 ftp://ftp.namesys.com/pub/reiserfs-for-2.4/2.4.19.pending/testing
 These implement reiserfs_file_write() support (patches are against
 2.4.20-pre2) that should speedup writes for those applications
 that actually care about st_blksize content of stat(2) output.
 (including cp and dd of course)
 Before submitting these to Marcelo we'd like to get more widespread testing,
 of course. So if you can try these and report any problems, that
 would be very much appreciated.
 Also first patch considerably speedups hole creation (I remember report
 about creating 128G empty file was taking ages, this should fix this).
 
 Please note that these patches are not compatible with Chris Mason's
 datalogging patches, though we will do a merge later of course.
 
 Bye,
 Oleg
 

Sorry, Oleg, Dieter, Chris  others,

for re-threading,

are the patches introduced in 2.4.20-pre2 and/or 2.4.19.pending 
compatible with Chris' patches? I haven't given them or 2.4.20-pre2 a try 
so far.

Am I right that it's better to wait until Chris is back and updates his 
data-logging stuff for the current pre-kernel, the namesys patches and more - 
i.e., wait for the outstanding merge?? Anyone with experiences yet?

Thank you and best regards,


Manuel




Re: [reiserfs-list] data-logging time comparison

2002-08-02 Thread Manuel Krause

On 08/02/2002 01:24 AM, Manuel Krause wrote:
 Hi!
 
 Maybe you are interested in my recent backup time comparison. I simply 
 copied the partitions with cp -ax /.../a/. /.../b/ where /.../b/ is 
 a freshly created reiserfs. I hope the comparison is readable and makes 
 sense for someone.
 

O.k.

I made a new comparison with just the data-logging patch replaced by the 
new -24, and now, only using fresh filesystems. I added the related 
lines from tonight to show the improvement.

File size distribution statistics are available via email upon request 
(131kB as .tar.gz).

Best wishes,

Manuel


partition 1: /dev/hdd7   2722896   1925876797020  71% /mnt/alpha
  (is my / filesystem)

new FS noatime,notail,data=writeback
  real  3m11.881s  user  0m1.600s  sys  0m49.040s   ~9.802MB/s  =100%
new FS noatime,notail,data=ordered
  real  3m19.895s  user  0m1.400s  sys  0m49.890s   ~9.409MB/s    96%
new FS noatime,notail,data=journal
  real  6m48.534s  user  0m1.570s  sys  1m0.830s    ~4.604MB/s    47%
new FS noatime,notail data-logging-23
  (real  3m18.826s user  0m1.310s  sys  0m49.710s)  ~9.478MB/s    97%
new FS noatime,notail,data=ordered data-logging-23
  (real  4m6.866s  user  0m1.620s  sys  0m50.120s)  ~7.633MB/s    78%



partition 2: /dev/hdd11  5550248   4490128   1060120  81% /mnt/alpha
  (is my software repository)

new FS noatime,notail,data=writeback
  real  8m46.739s  user  0m1.430s  sys  1m32.110s   ~8.325 MB/s  =100%
new FS noatime,notail,data=ordered
  real  8m56.040s  user  0m1.410s  sys  1m27.920s   ~8.180 MB/s    98%
new FS noatime,notail,data=journal
  real 18m36.456s  user  0m1.750s  sys  1m58.480s   ~3.928 MB/s    47%
new FS noatime,notail partition 86% full, data-logging-23
  (real  9m20.956s user  0m1.680s  sys  1m39.190s)  ~8.248 MB/s    99%
new FS noatime,notail,data=ordered part. 86% full, data-logging-23
  (real 11m54.513s user  0m1.910s  sys  1m38.460s)  ~6.476 MB/s    78%
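The MB/s figures in these tables are simply the partition's used size (the df "used" column, in kB) divided by the wall-clock (real) time of the copy. A minimal sketch of that arithmetic, using the data=writeback run of partition 1 (the numbers are taken from above; the variable names are mine):

```shell
#!/bin/sh
# Throughput as derived for the tables: used kB / 1024 / real seconds.
used_kb=1925876        # "used" column of df for /dev/hdd7, in kB
real_s=191.881         # real time of the copy, 3m11.881s, in seconds

awk -v kb="$used_kb" -v s="$real_s" \
    'BEGIN { printf "~%.3f MB/s\n", kb / 1024 / s }'
# prints: ~9.802 MB/s  (the first line of the table above)
```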




Re: [reiserfs-list] data-logging-20 BUG?!

2002-07-30 Thread Manuel Krause

On 07/29/2002 09:29 PM, Chris Mason wrote:
 On Sat, 2002-07-27 at 06:10, Manuel Krause wrote:
 
 
After having upgraded from data-logging -19 to -20 and using it for some 
hours I keep getting this bug (kernel BUG at journal.c:724!):
 
 
 Ok, I've uploaded -21, and it should fix the BUG().  It was a debugging
 check to make sure the journal code had properly sent all the ordered
 buffers to disk before sending the commit down.  There was a race in -20
 that broke this rule.
 
 The oops should not have caused any corruptions, the incremental from
 -20 to -21 is below.
 
 -chris
 

Hi!

The new -21 patch works fine on here. After going back to -19 I found 
out -20 really didn't cause any corruptions.  :-))


Thank you, Chris!

Manuel




Re: [reiserfs-list] Re: Fwd: Re: 2.4.19-rc1-jam2 (-rc1aa2)

2002-07-10 Thread Manuel Krause

On 07/11/2002 03:26 AM, Chris Mason wrote:
 On Wed, 2002-07-10 at 20:35, Dieter Nützel wrote:
 
Hello Chris,

symbol clash between latest -AA (-jam2, 00-aa-rc1aa2) kernel and your 
data-logging stuff ;-(
 
 
 You should be able to drop the EXPORT_SYMBOL(balance_dirty); from my
 patch.
 
 -chris
 

;-)

Are your patches going to track 2.4.19/2.4.20-preX + namesys once 
2.4.19/2.4.20-pre is finally out

(...me, just praying for this, also for namesys' hidden patches on 
the next kernel!...)  ?

:-))

I have no problems with: 2.4.19-rc1 + reiserfs-patches[01,02,03,05] + 
all your patches (the data-logging one is now -11, checked regression 
from -13) + rml-preempt-kernel.


Manuel




Re: [reiserfs-list] [PATCH CFT] tons of logging patches + addon

2002-07-09 Thread Manuel Krause

On 07/09/2002 04:15 PM, Dieter Nützel wrote:
 On Tuesday 9 June 2002 00:22, Manuel Krause wrote:
 
 [-]
 
The notebook now works like I bought it: fine, stable and the thermal 
(fan start/stop) patterns are quite well though I didn't replace the 
thermal pads between CPUfan and the heatsink-to-graphix-and-chipset as 
always and everywhere recommended. (Oh, try to get these special parts 
once!!!)
We had up to 35°C today here in Germany. That's a good restart!
 
 
 You are not exaggerating, are you?
 Eastern German...;-)
 

I really took this black notebook outdoors for temperature stability 
testing. But I didn't place an external temp. sensor between the keyboard 
and the chipset's heatsink, that's true... ;-)

 We have ~30°C (shadow) and I have 26°C in my working room, today here in 
 Hamburg, Northern Germany.

Same on here now, but some clouds...

 
 Have you ever considered to use lm_sensors?
 You need the latest kernel patch (2.6.4 CVS).

I tried it some months ago. After collecting all the needed things 
and compiling everything, it didn't tell me more than my thermometer, 
IIRC one verifiably correct temperature value :-(
Then I tried to get this info from a program within Win98 and didn't 
succeed either. The first long moment to think about cheap hardware...

But maybe I should give it a new try?!

 
 Regards,
   Dieter
 
 BTW I love the Ossis (East Germans) ;-)
 

Me too! :-))

In 1994 I went from Goslar (Western Germany) to Ilmenau (Eastern 
Germany) to study mechanical engineering on a non-overcrowded 
university. The people are really o.k. here. Passing the last years 
through my mind it was a good choice and it's really nice and worth 
living here!


Greetings,

Manuel


What about this funny sentence: "Ossis sind einfach die besseren Wessis." 
("Easterners are simply the better Westerners.")?
;-)




Re: [reiserfs-list] Re: [PATCH CFT] tons of logging patches

2002-06-12 Thread Manuel Krause

On 06/06/2002 02:09 AM, Chris Mason wrote:
 Ok, ftp.suse.com/pub/people/mason/patches/data-logging has my latest
 updates, which have more optimizations for many threads all doing
 synchronous transactions.
 
 -chris
 

Mmh. I recompiled with the new one when it appeared on the server and 
kept the kernel append line, with the obvious drawback of not being able 
to easily reboot to my ext2 maintenance partition.

I had some disappearing files/dirs from /lib/modules/* and from 
somewhere within /var/*, for the second time now. The first occurrence 
was about 6 days ago. It happens during normal uptime (I mean: not at 
startup or triggered by any particular program).

Jun 12 20:52:48 firehead kernel: is_leaf: item location seems wrong 
(second one): *3.6* [5273 5274 0x1 IND], item_len 4, item_location 2120, 
free_space(entry_count) 0
Jun 12 20:52:48 firehead kernel: vs-5150: search_by_key: invalid format 
found in block 8422. Fsck?
Jun 12 20:52:48 firehead kernel: vs-13070: reiserfs_read_inode2: i/o 
failure occurred trying to find stat data of [5199 261934 0x0 SD]
Jun 12 20:52:48 firehead kernel: is_leaf: item location seems wrong 
(second one): *3.6* [5273 5274 0x1 IND], item_len 4, item_location 2120, 
free_space(entry_count) 0
Jun 12 20:52:48 firehead kernel: vs-5150: search_by_key: invalid format 
found in block 8422. Fsck?
[...] - many many more
Jun 12 20:54:46 firehead kernel: vs-13070: reiserfs_read_inode2: i/o 
failure occurred trying to find stat data of [5199 261934 0x0 SD]
Jun 12 20:54:46 firehead modprobe: modprobe: Can't open dependencies 
file /lib/modules/2.4.19-pre9/modules.dep (Permission denied)


I then needed to do a reiserfsck --rebuild-tree, as suggested by the most 
recent reiserfsck (3.x.1c-pre4); the 3.x.1b one said it was fixable by 
--fix-fixable, but I decided to use the latest. And then I had to 
adjust lost+found manually. -- Huh, that took a long time, but 
nothing's missing, if I'm reading my system correctly now.

My system/partitions still look like the recently posted setup:
  Kernel.2.4.19-pre9
  ReiserFS.pending.01..03,05 (rest doesn't cooperate cleanly)
  ReiserFS.mason.01..03 (most recent)
  rml.preempt-kernel (maybe slightly modified for this setup)

/ mounted -o noatime,notail,data=ordered
and from applied lilo.conf: append = ... rootflags=data=ordered


Thanks, but the term "speedup" seems rather relative to me for now,

Manuel





Re: [reiserfs-list] [PATCH CFT] tons of logging patches

2002-06-05 Thread Manuel Krause

On 06/04/2002 03:12 PM, Chris Mason wrote:
  On Mon, 2002-06-03 at 23:28, Manuel Krause wrote:
 
 
 So, VMware is stable with it, too, on my well known heavy-private-test
 of it (running Norton SpeedDisk at least twice within a most recent
 VMware Win98). It doesn't show greatly different timings than to my
 setup before though having a different disk i/o pattern (due to the
 missing aa patches)... and me having a reduced RAM from 512to256MB at
 the moment. And I should be honest to say I can't give exact timings as
 the important disk contents changed during last weeks. But the
 disk-access-times/related-to-the-content are definitively _not_ higher
 than before!
 
 
  same speed on 1/2 the ram isn't bad ;-)
 
[...]
Don't know where to reply best...

Hi, again!

I want to make some more comments on my latest words.

As I said, I first used the data=journal mode and got nice timings. O.k., 
I really think, after that revision, that my previous kernel setup wasn't 
as well configured as I thought and felt. Long-term degradation?!

I really had the reiserfs messages in my logs saying it explicitly used
this mode. The only problem I obviously had, so far, was distinguishing 
the mount options in the darkest night: data=logging is no mount option; 
data-logging is the description of the feature, and the mount option for 
it is data=journal.
-- Passing rootflags=data=journal in lilo.conf and data=logging in fstab 
results in an uncontrollable kernel ;-) Huh!
Sorry for my thoughtless testing. But my posted timings are quite 
reliable on here.
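Spelled out, the working combination looks roughly like this (a sketch; the device name and mount point are illustrative, the option names are the ones discussed in this thread):

```
# /etc/fstab -- "data-logging" is the feature's name; the mount option
# that enables it is data=journal:
/dev/hda3   /   reiserfs   noatime,notail,data=journal   1 1

# /etc/lilo.conf -- the root fs is mounted before fstab is read, so the
# same option also has to go on the kernel append line via rootflags:
append = "... rootflags=data=journal"
```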

Concerning VMware, the same-speed-on-1/2-RAM results are even more
impressive, as VMware seems to buffer its memory contents to a /tmp/... fs 
again since I reduced the RAM. With 512MB it didn't seem to need this 
method usually.

The data=ordered mode shaves 1-2 secs off my previously posted load
times for NS7 and OOo-1.0 and seems to be stable itself in everyday 
usage and for my VMware sessions, too. I haven't tested the 
crash-no-garbage-in-files case or the more recent 
03-beta-data-logging-6.diff yet.

I was extraordinarily glad to see the explicit naming of the mounted 
partition in the logs that we missed for such a long time!

Thanks for your help,

Manuel





Re: [reiserfs-list] [PATCH CFT] tons of logging patches

2002-06-05 Thread Manuel Krause

On 06/05/2002 11:13 PM, Manuel Krause wrote:
 On 06/04/2002 03:12 PM, Chris Mason wrote:
   On Mon, 2002-06-03 at 23:28, Manuel Krause wrote:
  
  
  So, VMware is stable with it, too, on my well known heavy-private-test
  of it (running Norton SpeedDisk at least twice within a most recent
  VMware Win98). It doesn't show greatly different timings than to my
  setup before though having a different disk i/o pattern (due to the
  missing aa patches)... and me having a reduced RAM from 512to256MB at
  the moment. And I should be honest to say I can't give exact timings as
  the important disk contents changed during last weeks. But the
  disk-access-times/related-to-the-content are definitively _not_ higher
  than before!
  
  
   same speed on 1/2 the ram isn't bad ;-)
  
 [...]
 Don't know where to reply best...
[...]

BTW, what is this single diff good for (is it worth recompiling, I mean):
# diff '03-beta-data-logging-6.diff' '03-beta-data-logging-5.diff'
2777c2777
< +  if (SB_JOURNAL(p_s_sb)->j_num_lists > 512) {
---
> +  if (SB_JOURNAL(p_s_sb)->j_num_lists > 256) {

Thank you,

Manuel

 
 I was extraordinary glad to see the explicit wording of the mounted 
 partition in the logs we missed for so long time!
 
 Thanks for your help,
 
 Manuel
 






Re: [reiserfs-list] Re: [PATCH CFT] tons of logging patches

2002-06-05 Thread Manuel Krause

On 06/06/2002 02:09 AM, Chris Mason wrote:
 Ok, ftp.suse.com/pub/people/mason/patches/data-logging has my latest
 updates, which have more optimizations for many threads all doing
 synchronous transactions.
 
 -chris
 

Mhh. I was just starting to try the ftp, but it's a no-go for now. O.k., now 
I've got an old Santana CD in... Black magic... Thanks to Chris,

bye, very nice to listen to,

Manuel




Re: [reiserfs-list] A couple of questions

2002-05-16 Thread Manuel Krause

Hi Marc A. Lehmann (pcg@goof.com) and all others!

On 05/16/2002 11:44 PM, Marc A. Lehmann ([EMAIL PROTECTED]) wrote:

 On Thu, May 16, 2002 at 05:23:42PM -0400, Kuba Ober [EMAIL PROTECTED] wrote:
 
What I'm thinking of is this:
to the user, which most users w/o intimate filesystem knowledge won't be able 
to answer at all?

 
 Unix traditionally wasn't aimed at the point-and-click users without
 knowledge.


Don't lose contact with reality. Nowadays things change very often. And 
at least Linux has to be usable for a traditional Win user soon if it 
is to exist in the future.

 
Looking at this list, what people want is to get their data 
back, as much as possible. They never want to get less than that. Why bother 
asking?

 
 Users who know nothing can still be told to just press y. Even better,
 somebody with some knowledge about the filesystem (and the contents!)
 layout can often do better with an interactive fsck (see ext2fs).
 
 I don't think it makes sense to enhance the dumb-user-mode while at the same
 time keeping informed users from working properly.
 


It makes sense to improve all user modes (in docs and usability) AND all 
automatic modes possible, depending on the distro. Pardon, what is an 
"informed user"? Someone who reads the docs, or someone who can decidedly 
type Yes... in a repeat loop??

 
that is what many unsuspecting users actually do. It should simply disregard 
read errors and try using whatever data there is in ok-read blocks.

 
 It should actually ask the user wether she wants the block to be repaired
 (if possible), or permanently marked defect.


Doesn't that really depend on the HW state - or can software reliably 
decide whether the info it gets is o.k. so far on Linux?? See the 
previous posts on the list.

 
I don't think that asking too many questions is worth it. He who runs fsck in 
fix mode wants his data back (whatever is left of it).

 
 Thats a big mistake. He who runs fsck wants to recover as much data as
 possible. Sometimes maybe more than fsck alone can do.
 


All of us running reiserfsck want as much data back as possible and 
fear the FS has lost something, or is losing things while fsck runs 
(which is real with ext2 and vfat). Calling that a big mistake is, on 
reiserfs at least, inaccurate, as it retrieves nearly everything 
possible.

 
recovery stuff should be w/o questions in my opinion. At least that's what 
I'd expect all fsck's to do.

 
 for some strange reason no fsck behaves like that.
 


Yes, I agree with that opinion. The fixable things should pass without 
any question. Who knows the specific missing inode well enough to decide 
when it should be fixed?

Best wishes,

Manuel






[reiserfs-list] BTW: 2.4.19-patches-to-come?

2002-05-06 Thread Manuel Krause

Hi!

BTW, for 2.4.19-final it would be very nice to have...

1.) the files deleted/truncated/completed on mount at least
 printed in the kernel logs, at best with the real filename
 -- as afterwards they are not retrievable --
 That's a safety reason -- whoever can trigger a crash
 with various methods (I know the admin should take care
 against this case... but on my home system I'd like to know
 that info, too); but to get the file back from backups in
 that case... who knows it before a
 crash... ? Am I missing something?

2.) a disk/drive/partition distinction in reiserfs-related
 messages -- Oleg, you promised to make it real, and best
 would be an actual patch!

3.) a hint on how to turn data-journaling on/off for some of
 our existing reiserfs partitions, if it exists at all for
 now, and why it could be needed in some cases

4.) a hint why there is iicache code in the latest
 speedup-compound-patch (so that the latest iicache patch
 would not apply)


Best regards for your stable ReiserFS, at all,

even under setups with 2.4.19-pre7 + reiserfs.pending + latest 
reiserfs.compound-speedup + aa.vm-for-2.4.19-pre7 + akpm.read-latency-2 
+ rml.preempt-kernel + rml.lock-break + some more nice aa.patches... 
That's a valuably fast & interactive experience!


Manuel





Re: [reiserfs-list] fsync() Performance Issue

2002-05-06 Thread Manuel Krause

On 05/07/2002 12:57 AM, Chris Mason wrote:

 On Mon, 2002-05-06 at 17:21, Hans Reiser wrote:
 
I'd rather not put it back in because it adds yet another corner case to
maintain for all time.  Most of the fsync/O_SYNC bound applications are
just given their own partition anyway, so most users that need data
logging need it for every write.


Does mozilla's mail user agent use fsync?  Should I give it its own 
partition?  I bet it is fsync bound;-)

 
 [ I took Wayne off the cc list, he's probably not horribly interested ]
 
 Perhaps, but I'll also bet the fsync performance hit doesn't affect the
 performance of the system as a whole.  Remember that data=journal
 doesn't make the fsyncs fast, it just makes them faster.
 
 
Most persons using small fsyncs are using it because the person who 
wrote their application wrote it wrong.  What's more, many of the 
persons who wrote those applications cannot understand that they did it 
wrong even if you tell them (e.g. qmail author reportedly cannot 
understand, sendmail guys now understand but had Kirk McKusick on their 
staff and attending the meeting when I explained it to them so they are 
not very typical).  

In other words, handling stupidity is an important life skill, and we 
all need to excel at it. ;-)

 
 A real strength to linux is the application designers can talk directly
 to their own personal bottlenecks.  Hopefully we reward those that hunt
 us down and spend the time convincing us their applications are worth
 tuning for.  They then proceed to beat the pants off their competition.
 
 
Tell me what your thoughts are on the following:

If you ask randomly selected ReiserFS users (not the reiserfs-list, but 
the ones who would never send you an email)  the following 
questions, what percentage will answer which choice?

The filesystem you are using is named:

a) the Performance Optimized SuSE FS

b) NTFS

c) FAT

d) ext2

e) ReiserFS

 
 I believe the ones that know what a filesystem is will answer ReiserFS,
 You might get a lot of ext2 answers, just because that's what a lot of
 people think the linux filesystem is.
 
 
If you want to change reiserfs to use data journaling you must do which:

a) reinstall the reiserfs package using rpm

b) modify /etc/fs.conf

c) reinstall the operating system from scratch, and select different 
options during the install this time

d) reformat your reiserfs partition using mkreiserfs

e) none of the above

f) all of the above except e)

 
 These people won't be admins of systems big enough for the difference to
 matter.  data journaling is targeted at people with so much load they
 would have to buy more hardware to make up for it.  The new option
 lowers the price to performance ratio, which is exactly what we want to
 do for sendmails, egeneras, lycos, etc.  If it takes my laptop 20ms to
 deliver a mail message, cutting the time down to 10ms just won't matter.
 
 

What do you think the chances are that you can convince Hubert that 
every SuSE Enterprise Edition user should be asked at install time if 
they are going to use fsync a lot on each partition, and to use a 
different fstab setting if yes?

 
 Very little, I might tell them to buy the suse email server instead,
 since that would have the settings done right.  data=journal is just a
 small part of mail server tuning.
 
 
I know that you are an experienced sysadmin who was good at it.  Your 
intuition tells you that most sysadmins are like the ones you were 
willing to hire into your group at the university.  They aren't.

Linux needs to be like a telephone.  You plug it in, push buttons, and 
talk.  It works well, but most folks don't know why.


 
 Exactly.  I think there are 3 classes of users at play here.
 
 1) Those who don't understand and don't have enough load to notice.
 2) Those who don't understand and do have enough load to notice.
 3) Those who do understand and do have enough load to notice.
 
 #2 will buy support from someone, and they should be able to configure
 the thing right.
 
 #3 will find the docs and do it right themselves.
 
 
A moderate number of programs are small-fsync bound for the simple 
reason that it is simpler to write them that way. We need to cover 
over their simplistic designs.

So, you have my sympathies Chris, because I believe you that it makes 
the code uglier and it won't be a joy to code and test.  I hope you also 
see that it should be done.

 
 Mostly, I feel this kind of tuning is a mistake right now.  The patch is
 young and there are so many places left to tweak...I'm still at the
 stage where much larger improvements are possible, and a better use of
 coding time.  Plus, it's monday and it's always more fun to debate than
 give in on mondays.
 
 -chris
 


Hi, Chris  Hans!

I don't think this kind of destructive discussion will lead to 
anything useful for now; can you post a diff for 
2.4.19-pre7 + latest-related-pending + compound-patch-from-ftp?

I'll try it and report if that leads 

Re: [reiserfs-list] Comparison of notail and notail,iicache(14) 2.4.19-pre7

2002-04-26 Thread Manuel Krause

On 04/27/2002 12:00 AM, Dieter Nützel wrote:

 On Friday 26 April 2002 23:21, Manuel Krause wrote:
 
On 04/26/2002 06:45 PM, Dieter Nützel wrote:

On Friday 26 April 2002 13:41, Yury Yu. Rupasov wrote:

Dieter Nützel wrote:

Manuel Krause wrote:

[-]

 
 [-]
 
 
Had that before but Chris and Oleg gave me advice.
I tried it this way:

/* lock the current transaction */
inline static void lock_journal(struct super_block *p_s_sb) {
  PROC_INFO_INC( p_s_sb, journal.lock_journal );
  debug_lock_break(1);
  conditional_schedule();
  down(SB_JOURNAL(p_s_sb)-j_lock);
}

/* unlock the current transaction */
inline static void unlock_journal(struct super_block *p_s_sb) {
  up(SB_JOURNAL(p_s_sb)-j_lock);
}

But then it seems to be _NOT_ preempt (+lock-break) safe.
System lock-up (nothing in the logs, SysRq key didn't work) during the
latencytest0.42-png write test. The read test worked.

Thanks,
 Dieter

Mmh, I made this adjustment in the patch, too, since the first speedup
patches were posted. I really don't know how serious this could be
in this code context.

I had random hard locks, too, but blamed them on my degenerating hardware
so far.

 
 I think it is hardly hardware related.
 Try without lock-break (only disable it during make xconfig).


I should, if it helped for you. I've lost too much work time in between 
for now.

 
 
Could anyone else advise on preempt+lock-break-awareness here?

 
 Yura, Chris, Oleg or at least Robert Love :-)
 But Robert hasn't updated the lock-break stuff for some time. He will do it 
 soon.
 


Yepp. I observed this fact and hope Robert to do so.

 
The linux-2.4.19p6-compound-speedup.patch seems to be another measurable
tick faster than Olegs first patchset. Maybe, I'll post some values on
my setup over the weekend. I've had one faster run so far but it locked
up without log hints... ;-))

 
 Ha, ha,...:-)
 
 Then I have one more for you:
 
 Page coloring patch.
 It gave ~10% speedup on my 1 GHz Athlon II SlotA (0.18µm, 512K L2) for memory 
 intensive apps. But be aware, it locks up randomly. The maintainer is looking 
 for SMP testers 'cause it has something to do with SMP and preemption.
 My system is stable but locks up from time to time with the page_color module 
 loaded during heavy C and C++ compilations (~40 processes running in 
 parallel).


If I read this correctly I should keep my fingers off it...
Did I make a general mistake? Preempt+ does bring valuable advantages on 
my PIII 933 _uniprocessor_ setup, too.

 
 Have a nice weekend.
 
 -Dieter
 
 BTW I'm working on Win VFAT disk recovery. Two damaged IBM IC35L060AVER07-0 
 customer disks. One with only a single partition and one with three 
 partitions. Any advice on whether I should try dd on the whole disk or on 
 every partition?
 
 

Maybe we should exchange our actual hdparm and powertweak settings in 
private via phone soon? ;-)

On my current setup I see higher disk throughput rates on small 
partitions (0.5GB) but drop-offs in the middle of larger ones (~2GB), if 
I trust the ksysguard display. I simply do dd if=/dev/hda3 bs=1M 
of=/dev/hdd3, after earlier endless fiddling with bs=xyz, where 
xyz ~ disk-cylinder-size (= 16065 * 512 bytes) * 2^(?), did not give 
advantages in the past. I have not done complete-disk dd-s so far, but 
had many drop-offs at the end of a dd with the current patch setup and 
partitions > 0.3GB with the conventional bdflush and disk 
read/write-latency parameters. Pick your poison!
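For reference, the cylinder arithmetic quoted above (16065 sectors per cylinder times 512 bytes per sector) works out as follows, and the dd invocation has the same shape as the one in the mail; here scratch files under /tmp stand in for the real /dev/hdX partitions so the sketch is safe to try:

```shell
#!/bin/sh
# Cylinder size mentioned above: 16065 sectors/cyl * 512 bytes/sector.
cyl_bytes=$((16065 * 512))
echo "one cylinder = $cyl_bytes bytes"
# prints: one cylinder = 8225280 bytes

# Same dd shape as in the mail, but against scratch files, not devices:
dd if=/dev/zero of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=1M 2>/dev/null
cmp -s /tmp/src.img /tmp/dst.img && echo "copy verified"
```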

Have a nice weekend, too! And thanks for your comments, Dieter!

Manuel




[reiserfs-list] One more result on speedup-patch 2.4.19-pre5

2002-04-15 Thread Manuel Krause

Hi Oleg and others!

I'm referring to Oleg's speedup patch for 2.4.19-pre5+ - or maybe it was 
a modified one for -pre6 (not to Chris' more recent version). In the 
attachment there are some beautified results from my backup cycle.

The new file size distribution statistics, updated relative to the first 
known one, are available upon request, Oleg, but the rate 
differences are really similar, so I assume you won't need them.

I removed the iicache patches.

I found these values very nice!

Bye,

Manuel



1st partition: 1958788 kB (system), notail:
-------------------------------------------
 real 5m16.853s  user 0m1.660s  sys 0m53.090s
   6.037 MB/s

1st partition: 1951092 kB = 77% of part., notail + speedup:
-----------------------------------------------------------
 real 4m27.137s  user 0m1.490s  sys 0m51.860s
   7.133 MB/s = +18%

2nd partition: 4085252 kB (archives), notail:
---------------------------------------------
 real 11m3.024s  user 0m1.460s  sys 1m26.780s
   6.017 MB/s

2nd partition: 4179504 kB = 76% of part., notail + speedup:
-----------------------------------------------------------
 real 9m28.626s  user 0m1.850s  sys 1m27.220s
   7.178 MB/s = +19%


Data: 
-
Previously freshly backed-up/copied partitions (shown are copy-back 
times without fragmentation). Mount options -o noatime,notail

Settings: 

AA VM and AKPM max_bomb_segments settings same as in my first private 
mails, see information below, only disk content slightly differing. 

Kernel: 
---
Still Kernel 2.4.19-pre5, aa.vm-33 + some other aa.patches, 
akpm.read-latency-2 applied, some standard kernel values are altered 
with powertweak-0.99.4.

Hardware: 
-
933MHz PIII no-name/Clevo.tw notebook; one of the partitions' 
counterparts only on the ide1 tree, udma(33), the first one on ide0, 
udma(66); 512MB RAM.

Circumstances:
--
Only KDE 2.2.1 running (untouched) additionally. 

No new bitmap allocator, yet.

CONFIG_REISERFS_PROC_INFO enabled.

The latest backup copies consume about 20-30% (varying) CPU; dd runs take 
about 22% (approximate ksysguard values).





Re: [reiserfs-list] Silly question, defrag

2002-04-03 Thread Manuel Krause

On 04/04/2002 01:49 AM, Tracy R Reed wrote:

 On Wed, Apr 03, 2002 at 08:08:21AM -0800, Matthew Johnson wrote:
 
On Wednesday 03 April 2002 00:21, Joe Cooper wrote:

Don't


Well I don't, but when newbies who are used to computing on win32 systems 
hear that they may not just accept the word don't. Actually its hard to find 
the reasons exactly why one does not defrag.

 
 It is more useful to look at why one DID defrag back in the bad ol' days
 of DOS and Windows. IIRC, the FAT filesystem would scan through its
 equivalent of the free block list and start writing at the first free
 block. If it wrote for a while and then there was other data in the way, it
 would stop and go to the next free space. This way fragmentation was practically
 guaranteed, and it happened rapidly. Modern filesystems use much smarter
 ways of laying out data on the disk, so fragmentation happens much
 less often. Now you will almost certainly waste more time by defragmenting
 than you would suffering whatever performance hit the little fragmentation
 there is causes. I've been using Linux/Unix for 10 years and I have never
 (not once!) defragged a filesystem.
 
 
Perhaps I should aim this message to the kernel mailing list, so that I can 
get response from a wider array of people who like other filesystems. But its 
not kernel related. 

 
 I wouldn't recommend doing that. The answer is pretty much the same
 regardless of the filesystem. If it's a non-FAT fs you probably don't have
 to worry about fragmentation.
 
 

Yes, I like defragmenters on my Win98 disks, as I really see a speedup 
after using them (e.g. after new software installations OR a longer 
status quo, though the effect depends on the defragmenter's configuration).

You describe how/why this makes sense on FAT FS.

When I backed up my ReiserFS partitions monthly, I used to recreate the 
original FS if everything was o.k. and copy back the whole content. 
That's no server here; it's a standalone notebook. After that procedure 
I found some applications that worked faster and some that were slower 
than before. O.k., I may have only subjectively compared the load times 
of NS6 (+32MB disk cache) and SO5.2. They were different than before 
copying, but the sum didn't show any advantage.

So: I don't really need a defragmenter of FAT proportions on v3.6 
ReiserFS for my usage, and I really don't need to recreate and copy back.

Mmh, just wanted to add my experience,

best wishes,

Manuel








Re: [reiserfs-list] mkreiserfs does not work on loopback

2002-01-13 Thread Manuel Krause

On 01/13/2002 05:05 PM, Ole Tange wrote:

 mkreiserfs seems not to work on plain files, which is pretty bad if you
 want to make a loopback-file with reiserfs.
 
 # dd if=/dev/zero of=loopfile count=100k
 # mkreiserfs -f loopfile
 mkreiserfs, 2001 - reiserfsprogs 3.x.0j
 mkreiserfs: loopfile is not a block special device.
 Forced to continue, but please confirm (y/n)y
 # mkdir mountpoint
 # mount loopfile mountpoint -oloop -t reiserfs
 mount: wrong fs type, bad option, bad superblock on /dev/loop0,
or too many mounted file systems
 
 
 /Ole
 
 
 


Hi Ole,

I didn't have problems with these commands and reiserfsprogs 3.x.1. 
Here I had to add another -f:

# mkreiserfs -f -f /mnt/beta/loopfile

-mkreiserfs, 2001-
reiserfsprogs 3.x.1

mkreiserfs: /mnt/beta/loopfile is not a block special device
Continue (y/n):y
[...]
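Put together, the whole loopback recipe from this thread looks roughly like this (a sketch; the file name, size, and mount point are illustrative, and the mkreiserfs/mount steps need reiserfsprogs and root, so they are shown commented out):

```shell
#!/bin/sh
# Create a backing file for the loop mount (here ~100 MB of zeros).
dd if=/dev/zero of=loopfile bs=1024 count=102400 2>/dev/null

# The remaining steps match the thread; with reiserfsprogs 3.x.1, -f is
# given twice to skip the "not a block special device" prompt.
# They require reiserfsprogs and root privileges:
#   mkreiserfs -f -f loopfile
#   mkdir mountpoint
#   mount loopfile mountpoint -o loop -t reiserfs

# Sanity check: byte count should be 102400 * 1024 = 104857600.
wc -c < loopfile
```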


Manuel




Re: [reiserfs-list] Re: funny file permission after reiserfsck

2001-12-13 Thread Manuel Krause

On 12/13/2001 09:28 PM, Dieter Nützel wrote:

 On Thursday, 13 December 2001 20:44, Manuel Krause wrote:
 
On 12/13/2001 07:53 PM, Dieter Nützel wrote:

On Thursday, 13 December 2001 10:26, you wrote:

Dieter Nützel wrote:

On Thursday, 13. December 2001 03:15 you wrote:

[...]

 
 [-]
 
I compiled Dieters patch combination this morning except for bootmem and
nanosleep.

 
 They shouldn't harm. Even Preempt + lock-break (including the latest ReiserFS 
 lock-break ;-)


You mean lock-break-rml-2.4.17-pre8-1.patch or something very special? ;-)

 
 
After some minutes uptime I got disappearing files with this
behaviour
  rm: cannot remove `/dev/parport16': No such file or directory
  mknod: `/dev/parport16': File exists

and many lines vs-15011: reiserfs_release_objectid: tried to free free
object id (.)

 
 Yes, had such messages, too.
 
 
Of course, I was not able to set (or get) permissions for those files...

reiserfsck --check said --fix-fixable would solve the problems, but it 
didn't catch everything. For now, I can say --rebuild-tree completely 
recovered the filesystem. (reiserfsprogs 3.x.0k-pre13)

 
 Have you rebooted after that?
 I had your feeling, too, but after a reboot --- bang, all problems were there 
 again...


First I rebooted after --rebuild-tree to the same kernel and some 
minutes later the problems came up again. Yes. I repeated --rebuild-tree 
and booted to my old kernel (2.4.16 + reiserfs A..N + Andrew Mortons 
low-latency patch). Everything was restored o.k. then. Until...

 
This patch combination didn't seem to affect my ext2 partition at all.
As I never had the inode-attrs patch running but patches A..P

 
 You mean 2.4.16 + A-P except O, and 2.4.17-pre4 and above + K-P except O, 
 right? Had you applied Chris's expanding-truncate-4.diff?
 


...I then combined 2.4.16 + A..N + P + Chris's expanding-truncate-4.diff 
+ low-latency. I got these disappearing files again afterwards. So again 
--rebuild-tree and so on. Now I have 2.4.17-pre8, K..N+P + preempt-1 + 
lock-break-1 running. (I didn't try 2.4.17-pre4)

Except for these many vs-15011 lines I didn't see a problem so far.

 
work well
with 2.4.16 there's something incompatible within the others.

Maybe someone wanted this additional information.

 
 Thanks, so didn't have to go back, only drop O...;-)
 


It's very clear now that the expanding-truncate-4 patch needs to be 
excluded and/or adjusted, too. :-) I hope that works for you after your 
inode-attrs experiment!


Manuel




Re: [reiserfs-list] Re: funny file permission after reiserfsck

2001-12-13 Thread Manuel Krause

On 12/13/2001 10:45 PM, Chris Mason wrote:

 
 On Thursday, December 13, 2001 10:41:13 PM +0100 Manuel Krause
 [EMAIL PROTECTED] wrote:
 
 
It's very clear now that the expanding-truncate-4 patch needs to be
excluded and/or adjusted, too. :-) I hope that works for you after your
inode-attrs experiment!

 
 Odd, which low latency patches were you running?
 
 -chris
 

With 2.4.16 +reiserfs-patches: Andrew Mortons low-latency patch

http://www.zip.com.au/~akpm/linux/2.4.16-low-latency.patch.gz
(from page http://www.zip.com.au/~akpm/linux/schedlat.html )

and with 2.4.17-pre8 +reiserfs-patches: the preemptible kernel  
lock-break patches from Robert M. Love

http://www.kernel.org/pub/linux/kernel/people/rml/preempt-kernel/v2.4/preempt-kernel-rml-2.4.17-pre8-1.patch

http://www.kernel.org/pub/linux/kernel/people/rml/lock-break/v2.4/lock-break-rml-2.4.17-pre8-1.patch
(from page http://www.tech9.net/rml/linux/ )

Thanks,

Manuel




Re: [reiserfs-list] Re: funny file permission after reiserfsck

2001-12-13 Thread Manuel Krause

On 12/13/2001 11:16 PM, Chris Mason wrote:

 
 On Thursday, December 13, 2001 11:17:12 PM +0100 Manuel Krause
 [EMAIL PROTECTED] wrote:
 
 
On 12/13/2001 10:45 PM, Chris Mason wrote:


On Thursday, December 13, 2001 10:41:13 PM +0100 Manuel Krause
[EMAIL PROTECTED] wrote:



It's very clear now that the expanding-truncate-4 patch needs to be
excluded and/or adjusted, too. :-) I hope that works for you after your
inode-attrs experiment!


Odd, which low latency patches were you running?

-chris


With 2.4.16 +reiserfs-patches: Andrew Mortons low-latency patch

http://www.zip.com.au/~akpm/linux/2.4.16-low-latency.patch.gz
(from page http://www.zip.com.au/~akpm/linux/schedlat.html )

 
 This should be safe.  Any chance I could talk you into testing 2.4.17-pre8 +
 andrew's patch?
 
 -chris
 


Andrew has another one for 2.4.17-pre2. Maybe that's better? Have you 
had a look at it, too? With patches A..N+P, should I really apply 
expanding-truncate-4?

Manuel






Re: [reiserfs-list] Re: funny file permission after reiserfsck

2001-12-13 Thread Manuel Krause

On 12/13/2001 11:51 PM, Dieter Nützel wrote:

 On Thursday, 13 December 2001, 23:16, Chris Mason wrote:
 
On Thursday, December 13, 2001 11:17:12 PM +0100 Manuel Krause

[EMAIL PROTECTED] wrote:

On 12/13/2001 10:45 PM, Chris Mason wrote:

On Thursday, December 13, 2001 10:41:13 PM +0100 Manuel Krause

[EMAIL PROTECTED] wrote:

It's very clear now that the expanding-truncate-4 patch needs to be
excluded and/or adjusted, too. :-) I hope that works for you after your
inode-attrs experiment!

Odd, which low latency patches were you running?

-chris

With 2.4.16 +reiserfs-patches: Andrew Mortons low-latency patch

http://www.zip.com.au/~akpm/linux/2.4.16-low-latency.patch.gz
(from page http://www.zip.com.au/~akpm/linux/schedlat.html )

This should be safe.  Any chance I could talk you into testing 2.4.17-pre8
+ andrew's patch?

 
 Maybe Manuel will...;-)
 
 The latest lock-break-rml-2.4.17-pre8-1.patch from Robert is based on 
 Andrew's patch and include some (small) lock-breaks for ReiserFS.
 
 Any comments Chris?
 Has Robert something overlooked?
 Maybe only when expanding-truncate-4.patch comes into play?
 
 -Dieter
 
 BTW I am going after Manuel and trying 2.4.17-pre8-preempt + lock-break + K-N+P 
 without Chris's one.
 
 

That might not be a good idea. I've now double-checked that I really 
applied the mentioned patches, but I still get corruptions in the old way. 
A --rebuild-tree fixes them, I reboot and reboot *again* some minutes 
later: then I find wrong permissions and some disappearing files. Mmh. 
Dangerous.

Best wishes,

Manuel
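The check-and-repair cycle Manuel keeps running can be sketched roughly as follows. This is only a hedged sketch: /dev/hda7 is the example partition from elsewhere in this archive, --rebuild-tree is a destructive last resort that needs an unmounted device, and everything is guarded so the script just reports a skip on a machine without that exact setup.

```shell
# Sketch of the reiserfsck repair cycle described in this thread.
# DEV is a hypothetical example; always back up before --rebuild-tree.
DEV=/dev/hda7
if [ -b "$DEV" ] && [ "$(id -u)" -eq 0 ] && command -v reiserfsck >/dev/null 2>&1; then
  umount "$DEV" 2>/dev/null
  reiserfsck --check "$DEV"          # read-only consistency pass first
  reiserfsck --rebuild-tree "$DEV"   # destructive: rebuilds the whole tree
else
  touch /tmp/reiserfsck-demo.skipped
  echo "skipping: needs root, reiserfsprogs and $DEV"
fi
```

On most machines this prints the skip message; the point is the ordering: a read-only --check before ever reaching for --rebuild-tree.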




Re: [reiserfs-list] mount -o notail of SuSE root filesystems?

2001-11-26 Thread Manuel Krause

On 11/26/2001 05:32 PM, Nikita Danilov wrote:

 Chris Mason writes:
   
   
   On Monday, November 26, 2001 12:39:23 PM +0300 Vladimir V. Saveliev
   [EMAIL PROTECTED] wrote:
   
   
Is there something that mounts my root partition regardless of any
fstab setting? Does someone have any idea how I could get my SuSE system
mounted notail from bootup to shutdown?! Or does this point to a bug?!

I would be very glad if someone could help!


It is a bug, of course, that remount ignores most of the options. But for this
particular problem
would something like
append="rootflags=notail"
in lilo.conf help?
   
   This patch should also allow mount -o remount,notail to work:
 
 Patch I am going to put on ftp site also handles bitmap allocator
 options during remount.
 
   
   --- linux/fs/reiserfs/super.c.1    Tue Nov 20 21:32:27 2001
   +++ linux/fs/reiserfs/super.c      Tue Nov 20 21:33:41 2001
   @@ -274,6 +274,10 @@
    	/* Mount a partition which is read-only, read-write */
    	reiserfs_prepare_for_journal(s, SB_BUFFER_WITH_SB(s), 1) ;
    	s->u.reiserfs_sb.s_mount_state = sb_state(rs);
   +
   +	if (test_bit(NOTAIL, &mount_options)) {
   +		set_bit(NOTAIL, &(s->u.reiserfs_sb.s_mount_opt)) ;
   +	}
    	s->s_flags &= ~MS_RDONLY;
    	set_sb_state( rs, REISERFS_ERROR_FS );
    	/* mark_buffer_dirty (SB_BUFFER_WITH_SB (s), 1); */
 
 


Thanks again for your work!

So far everything works more than fine with 2.4.16, the latest namesys 
patches A..N and the restored bootup script and lilo.conf:

With the real notail mount one backup cycle of my root partition now 
takes about half the time it took before!!! Very very cool!

Regards,

Manuel
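For reference, the two boot-time workarounds discussed in this thread look roughly like this. This is a sketch, not copied from the thread: the device names and mount point are hypothetical, exact syntax depends on your lilo version, and rootflags only affects the root filesystem (other partitions get their options from fstab).

```
# /etc/lilo.conf -- pass mount options to the root fs at boot
image=/boot/vmlinuz
    label=linux
    root=/dev/hda7              # hypothetical root partition
    append="rootflags=notail"

# /etc/fstab -- notail for a non-root reiserfs partition (hypothetical)
/dev/hda8  /home  reiserfs  noatime,notail  0  0
```

After editing lilo.conf, remember to re-run lilo so the new append line takes effect.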





Re: [reiserfs-list] Re: [reiserfs-dev] 2.4.9ac7 vs. 2.4.10pre4

2001-09-07 Thread Manuel Krause

Hi!

On 09/07/2001 10:00 AM, Vladimir V. Saveliev wrote:

 Manuel Krause wrote:
 
 
Mmmh, like with a leaking fuel tank ... gasoline/petrol pours out? I
still don't understand memory ~ or disk space leakage from any
kind of meaning or position. Maybe anyone could improve my way to
interpret this / my English?!

 
 disk space leaked == disk space got lost
 
 
What I meant is what I said with the ongoing of the
2.4.7-unlink-truncate-rename-rmdir fix/patch:
vmware creates a memory (maybe RAM + VideoRAM) mapping file on disk I
don't get with ls but I see the disk space consumption with df
during vmware startup. After complete power off this file stays on disk
blocking disk space. I mean it stays there and I can't delete it.

 
 File should get deleted on next mount. If it does not, the patch does not do its
 job. We will do more tests with vmware.
 
 
There are 133MB lost until I have time to re-setup and reboot a
maintenance partition and fix it with a reiserfsck --rebuild-tree
/whatever (what does it very well in release 3.x.0k-pre9, really!)


 
 Does it link something 133 mb long to the lost+found?


Yes, I did a --rebuild-tree this evening (disk almost full after 
testing) and it gave me 4 of the 133 MB blockers and many other small 
files. [Some of the smaller ones recur on every --rebuild-tree even 
though I copy them to different locations.]

 
 
! With 2.4.10-pre4 I did a find -inum xyz on one of the related
messages' inums and got the prompt back *without* any information
(= nothing!) but *with* some measurable seconds of disk activity! (Sorry,
I forgot to mention it in the first post)
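For readers unfamiliar with the probe used above: find -inum looks a path up by inode number, which is how you chase an inode reported in a kernel message back to a filename. A synthetic sketch under /tmp (not the reiserfs partition from the report):

```shell
# Look a file up by inode number, as 'find -inum xyz' is used above.
DIR=/tmp/inum-demo
rm -rf "$DIR"; mkdir -p "$DIR"
touch "$DIR/victim"
INUM=$(ls -i "$DIR/victim" | awk '{print $1}')   # inode number of the file
find "$DIR" -inum "$INUM"                        # prints /tmp/inum-demo/victim
```

An empty result, as Manuel saw, means no directory entry currently points at that inode on the searched tree.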

[Regarding the two patches you sent (many thanks for all trial, I wasn't
able to find out which one of your two posts was failing with the
attachments as I filter the [reiserfs-list] ones and the private ones:
they would go to the same Netscape mail folder! One of them (one of the
2 posts from 11.00h@my local time and then one of the 2 from 11:10h had
the patches!!!): ??? Ok, I got them twice. Maybe you'll contact privately]

If I use kernel 2.4.10-pre4 and you explain your doubts
apply-to-kernel-version-? with expanding-truncate or get_block for
2.4.8 what is not tested too much but it works for you ... and you're
using kernel 2.4.7+ -- point me to the one I should use *now*, please.

 
 Try get_block patch.


I finally failed to apply / adjust the patch for 2.4.10-pre4 with 
linux/fs/reiserfs/inode.c, hunk @@ -512,23 +830,21 @@ static inline 
int _allocate_block(struct. I stopped it - I'm no programmer, don't 
know enough about how reiserfs works and you would make it better in any 
case.

 
 
A --rebuild-tree before using the patch shouldn't be a problem, if
vmware+reiserfs won't fill my disk unnecessarily after a sudden low
battery on my notebook afterwards. So, for now, I didn't test one for
lack of time today.

And, what I finally/originally meant is: What fills the hole of
 2.4.7-unlink-truncate-rename-rmdir.dif
(or what that patch made for vmware- and maybe soffice-useage)
... for 2.4.10-preX ?!


 
 The unlink.. patch and the get_block (or expanding truncate) patch solve different 
 problems:
 
 The unlink.. patch fixes a long-standing problem in reiserfs, where disk space gets lost
 if a crash occurred while some process held a reference to an unlinked file.
 
 The get_block (or expanding truncate) patch fixes wrong handling of races in
 reiserfs_writepage which occur when a file is expanded by truncate and mmapped. That
 wrong handling of races is indicated by pap-5710 and vs-825 (the vs- warning
 reiserfs_get_block: [XX YY 0xZZ UNKNOWN] should not be found does not appear in
 kernels older than 2.4.10-pre4).
 Note that the get_block patch still has pap-5710 (and a few other additional printks I
 forgot to remove), but it should handle the races right.
 
 Did I answer your questions?
 
 Thanks,
 vs
 

Thank you very much for your detailed description! Maybe I'll learn how 
reiserfs works over time. I'd be sorry if I got on your nerves in some 
cases?! ;-)


Maybe I should be patient now and wait for a real solution from you and 
your team?

Best regards,

Manuel




Re: [reiserfs-list] Re: still no reiserfs-patch in 2.4.10-pre2

2001-08-31 Thread Manuel Krause

On 08/31/2001 04:37 AM, Dieter Nützel wrote:

   Nikita Danilov wrote:
  
  Hans Reiser wrote:
  
  
   [-]
  
   You did not send it to him since the last release by him, so you have not
   sent it to him by his rules.  We don't get to make the rules, we just get
   to play.
  
  I did send them to him *four* times, each time after he released new
  kernel. No effect. I don't see how current situation is different, but I
  shall try anyway.
  
  
    Why don't you send a copy to Alan?
  
   Got them.
   Running 2.4.10-pre2 with all of them plus 2.4.9 - unsent.
  
   When will we see this?
   2.4.7-unlink-truncate-rename-rmdir.dif


If this is still needed for using VMware, I'll need this before
attempting to use kernels > 2.4.8  :-(
I'm currently running 2.4.8 with Vladimir's patch for 2.4.7 and it's
working very well. No stability issues on my side ;-))

What about Chris' updated transaction tracking patch for 2.4.8, see
[reiserfs-list] [PATCH] improve reiserfs O_SYNC and fsync speeds
from 08/14/2001 ? Did it get into the kernel or is it included in one of
the recent patches?

   2.4.7-plug-hole-and-pap-5660-pathrelse-fixes.dif
  
   Thanks,
   Dieter
  
  

Thanks,

Manuel




Re: [reiserfs-list] ReiserFS on a loopback device

2001-08-06 Thread Manuel Krause

On 08/06/2001 11:28 PM, Jean-Francois Landry wrote:

 On Mon, Aug 06, 2001 at 03:49:59PM -0400, Greg Ward wrote:
 
I'm curious enough about ReiserFS that I want to play with it, but not
curious enough to devote a whole partition to it just yet.  So I thought
I'd create a big file somewhere, put a filesystem in it, and mount -o
loop it.  Alas, mkreiserfs won't let me get away with it:

  # dd if=/dev/zero of=reiserfs.raw bs=4096 count=10240
  10240+0 records in
  10240+0 records out

  # mkreiserfs reiserfs.raw 

  -mkreiserfs, 2000-
  reiserfsprogs 3.x.0d
  mkreiserfs: reiserfs.raw is not a block special device.

Is there a way to create a ReiserFS in a regular file?

This is with Linux 2.4.2 (specifically the kernel-source-2.4.2 package
from Progeny Debian 1.0) and reiserfsprogs 3.x.0d.


  
 losetup /dev/loop0 /path/to/reiserfs.raw
 
 mkreiserfs /dev/loop0
 
 mount it, play with it, umount it, 
 
 losetup -d /dev/loop0 ; rm /path/to/reiserfs.raw
 
 man losetup for more info.
 
 Jean-Francois Landry
 

Maybe you, Greg, want to play with more disk space, as the reiserfs 
journal alone would take 32MB, of course. But Jean-Francois' advice works 
with the latest reiserfsprogs/kernel for me, too; though I mounted -o 
noatime,notail ...

Best wishes,

Manuel
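Spelled out as a script, Jean-Francois' recipe looks like the sketch below. It is hedged: the loop-device and mkreiserfs steps need root and reiserfsprogs, so they are guarded, and whether mkreiserfs prompts or wants a force flag varies between versions (an assumption on my part, not something from the thread).

```shell
# Build a 40 MB image file and, where possible, put a reiserfs on it
# via a loop device, following the quoted losetup recipe.
IMG=/tmp/reiserfs.raw
dd if=/dev/zero of="$IMG" bs=4096 count=10240 2>/dev/null
ls -l "$IMG"

# Root-only part; skipped entirely on a machine without reiserfsprogs.
if [ "$(id -u)" -eq 0 ] && command -v mkreiserfs >/dev/null 2>&1; then
  losetup /dev/loop0 "$IMG"
  mkreiserfs /dev/loop0            # may prompt; some versions want -f here
  mkdir -p /mnt/reiser-test
  mount -o noatime,notail /dev/loop0 /mnt/reiser-test
  df /mnt/reiser-test              # note the ~32 MB journal overhead
  umount /mnt/reiser-test
  losetup -d /dev/loop0
fi
```

As Manuel notes, with a 32 MB journal a 40 MB image leaves very little usable space, so in practice you would want a noticeably larger image file.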




Re: [reiserfs-list] 2.4.7+unlink...patch+vmware crashes+errors (was: Re: [reiserfs-list] Kernel 2.4.7 Released! Any Updates?)

2001-07-30 Thread Manuel Krause

On 07/30/2001 02:01 PM, Vladimir V.Saveliev wrote:

 Hi
 
 Manuel Krause wrote:
 
 
If I understood you correctly: The first time after this poweroff the
partition has to be mounted rw in order to complete the unlinks?
The uncompleted unlinks I have on disk will stay until next --rebuild-tree?

 
 Ok, would you like to try the attached patch?
 The 
ftp.namesys.com/pub/reiserfs-for-2.4/2.4.7.pending/2.4.7-unlink-truncate-rename-rmdir.dif.bz2
 is
 updated as well.
 
 Thanks,
 vs
 

[PATCH]

Hi!

And thanks again! I took the updated .bz2 from namesys. And I tested the 
poweroff with running vmware. Before that I changed back my lilo.conf to 
read-only mount the root partition. It works!

My /var/log/boot.msg shows the following during next bootup:

<4>reiserfs: checking transaction log (device 03:07) ...
<4>Warning, log replay starting on readonly filesystem
<4>reiserfs: replayed 19 transactions in 3 seconds
<4>Removing [25349 39257 0x0 SD]..<4>done
<4>Removing [25349 7437 0x0 SD]..<4>done
<4>Removing [25349 7357 0x0 SD]..<4>done
<4>There were 3 uncompleted unlinks/truncates. Completed
<4>Using r5 hash to sort names
<4>ReiserFS version 3.6.25
<4>VFS: Mounted root (reiserfs filesystem) readonly.

Yes, and disk space is freed.

Thank you!

Manuel




Re: [reiserfs-list] 2.4.7+unlink...patch+vmware crashes+errors (was: Re: [reiserfs-list] Kernel 2.4.7 Released! Any Updates?)

2001-07-30 Thread Manuel Krause

On 07/30/2001 05:40 PM, Manuel Krause wrote:

 On 07/30/2001 02:01 PM, Vladimir V.Saveliev wrote:
 
 Hi

 Manuel Krause wrote:

 If I understood you correctly: The first time after this poweroff the
 partition has to be mounted rw in order to complete the unlinks?
 The uncompleted unlinks I have on disk will stay until next 
 --rebuild-tree?


 Ok, would you like to try the attached patch?
 The 
 
ftp.namesys.com/pub/reiserfs-for-2.4/2.4.7.pending/2.4.7-unlink-truncate-rename-rmdir.dif.bz2
 
 is
 updated as well.

 Thanks,
 vs

 
 [PATCH]
 
 Hi!
 
 And thanks again! I took the updated .bz2 from namesys. And I tested the 
 poweroff with running vmware. Before that I changed back my lilo.conf to 
 read-only mount the root partition. It works!
 
 My /var/log/boot.msg shows the following during next bootup:
 
 <4>reiserfs: checking transaction log (device 03:07) ...
 <4>Warning, log replay starting on readonly filesystem
 <4>reiserfs: replayed 19 transactions in 3 seconds
 <4>Removing [25349 39257 0x0 SD]..<4>done
 <4>Removing [25349 7437 0x0 SD]..<4>done
 <4>Removing [25349 7357 0x0 SD]..<4>done
 <4>There were 3 uncompleted unlinks/truncates. Completed
 <4>Using r5 hash to sort names
 <4>ReiserFS version 3.6.25
 <4>VFS: Mounted root (reiserfs filesystem) readonly.
 
 Yes, and disk space is freed.
 
 Thank you!
 
 Manuel
 


The first time vmware accesses the disk after this reboot my 
/var/log/messages shows a line:

Jul 30 17:41:22 firehead kernel: vs-: reiserfs_get_block: [25349 39253 
0x80bd001 UNKNOWN] should not be found

After another reboot it doesn't show up any more when running vmware.

I hope this won't cause problems?


Thanks,

Manuel





Re: [reiserfs-list] 2.4.7+unlink...patch+vmware crashes+errors (was: Re: [reiserfs-list] Kernel 2.4.7 Released! Any Updates?)

2001-07-29 Thread Manuel Krause

On 07/29/2001 04:41 PM, Vladimir V.Saveliev wrote:

 Hi
 
 Manuel Krause wrote:
  
Jul 22 15:03:50 firehead kernel: vs-2100: add_save_link: search_by_key
returned 1
Jul 22 15:04:39 firehead kernel: vs-15010: reiserfs_release_objectid:
tried to free free object id (37281)<4>vs-15010:...

...in fact it repeats the last one without end until...

 
 Ok, the unlink-truncate-rename.. patch
 
(ftp.namesys.com/pub/reiserfs-for-2.4/2.4.7.pending/2.4.7-unlink-truncate-rename-rmdir.dif.bz2)
 is updated.
 
 The problem was that previous version of the patch does not expect
 truncates to unlinked files which vmware does.
 
 Thanks,
 vs
 


Hi, Vladimir!


I applied the new patch, recompiled the kernel, booted, started vmware, 
made a complete power-off, then power-on, and df showed some 133MB more 
on disk (like it was when vmware was up). Rebooting didn't make a 
difference.

Seems your patch doesn't like this test any more? I haven't made a 
reiserfsck --rebuild-tree yet.

Thank you for your support!

Manuel






Re: [reiserfs-list] df diffs after backup/restore+reinstall of some RPMs

2001-07-29 Thread Manuel Krause

On 07/27/2001 12:38 PM, Nikita Danilov wrote:

 Manuel Krause writes:
   Hi!
   
   I keep backup+mkreiserfs+restoring my one root partition from time to 
   time. I _always_ mount the partitions -o noatime,notail. And usually get 
   df differences at least after final reboot. It usually grows by about 65 
   MB.
   
   When reinstalling some actual/updated RPMs from SuSE for XF86 4.1.0 and 
   KDE2.1 via yast, my partition shows about 65MB freed even after repeated 
   reboot or longer uptime/useage afterwards. It's not related to log 
   files' appends or caches' growth (it's within the about ;-) ).
   
   Yesterday after a recent backup+ I reinstalled the RPMs whose files I 
   had on disk already (to have the same files on disk rewritten 
   again). And got the same shrinking behaviour. After that 
   it mostly always looked like:
   
   Filesystem   1k-blocks  Used Available Use% Mounted on
   /dev/hda7  2377508   1650340727168  70% /
   (just to show some stats/relations...)
   
   So the growth/shrinking makes about 2.8% of the allocated blocks.
 
 I am not sure I understood you correctly, but may be this is just due to
 sub-optimal tree packing: reiserfs stores all meta-data (and files
 tails) into balanced tree. For efficiency this tree is not always
 absolutely packed, that is, tree nodes are not stuffed with data
 completely. Actual packing depends on dynamic access pattern and can
 change. 


Don't know what got unclear...?! (You may want to point me to it...!)

Mmmh, so it really makes a difference in the dynamic access pattern if 
I reinstall via rpm instead of copying via cp -a? When I mount 
the partition notail, files' tails don't get packed into the balanced 
tree, but metadata does. And that is maybe part of the actually 
rpm-installed/rewritten packages (in this case)?!

Ok, you really do know reiserfs better than me... :-) I'll take it for 
normal reiserfs behaviour from now on.
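As a sanity check on the "2.8% of the allocated blocks" figure quoted earlier: the ~65 MB delta, taken against the partition's 2377508 total 1k-blocks from the df output above, works out as:

```shell
# 65 MB expressed in 1k-blocks, relative to the partition's total
# 1k-blocks (2377508) shown by df earlier in this thread.
delta=$((65 * 1024))
awk -v d="$delta" -v t=2377508 'BEGIN { printf "%.1f%%\n", 100 * d / t }'
# prints 2.8%
```

So the reported percentage is consistent with the df numbers, measured against the filesystem's total size rather than only the used blocks.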

 
   
   I had this in an earlier thread with Chris Mason but there was no final 
   conclusion about it. Is that a sign of a living fs or not tolerable at 
   all for you?!
   
   
   Thanks for your replies and best wishes for your work,
 
 By the way, are reasons for periodic restoring from backup
 reiserfs-related?


If I remember correctly, this procedure of 
backup + re-mkreiserfs + restore was one way to correct failed unlinks 
after crashes. Of course, a reiserfsck --rebuild-tree /dev/whatever may 
have done the same job.
I had some quite regular crashes with early XF86 4.1.0 RPMs from SuSE 
and vmware, so I had too many disk-filling and undeletable files on my 
loved reiserfs partition.
You (reiserfs-team) still keep reminding us users to keep our backups 
up-to-date in addition...

It's somehow reiserfs-related, you see, ;-) and solved 2 possible 
problems for me at once.
(Thanks to that fact, I learned to regularly back up all my home disk 
drives, too...!)

And, of course, I've found that behaviour interesting. So I wanted to 
test reinstalling some Linux components regularly, to see reiserfs' 
response.

 
   
   Manuel
   
   Actually using kernel 2.4.7 +unlink-truncate+...patch on a SuSE7.1,
   fs copied/mkreiserfsed/restored since about kernel 2.4.1,
   reiserfsck --rebuild-tree doesn't show a diff in the dimension of 65MB
 
 Nikita.
 


Many thanks, Nikita!

Manuel





Re: [reiserfs-list] Kernel 2.4.7 Released! Any Updates?

2001-07-21 Thread Manuel Krause

On 07/21/2001 04:50 PM, Dirk Mueller wrote:

 On Sat, 21 Jul 2001, Daniel wrote:
 
 
Hey the subject says it all... ^ ^ ^ 

 
 everything important except quota support is in. 
 
 
 Dirk
 

Did fix for unlinking of opened files
(ftp.namesys.com/pub/reiserfs-for-2.4/2.4.6.pending/2.4.6-unlink-truncate-rename-rmdir.dif.bz2)
and Gergely Tamas' update make it in, too?!

That would be very very nice!

Manuel






Re: [reiserfs-list] 2.4.6-pre4 related [PATCH]es question

2001-06-21 Thread Manuel Krause

On 06/21/2001 03:08 PM, Chris Mason wrote:

 
 On Thursday, June 21, 2001 03:22:03 AM +0200 Manuel Krause
 [EMAIL PROTECTED] wrote:
 
 
Hi,

at the moment I'm running kernel-2.4.5 with the following patches applied
(when it came by mailing-list only - named according to the subject and
then marked)

patch-2.4.5-reiserfs-leak-in-error-paths.mail.patch
patch-2.4.5-reiserfs-mark-journal-new.mail.patch

 
 These were included in 2.4.6-pre3
 
 
patch-2.4.5-reiserfs-umount-fix.patch aka super.c patch

 
 2.4.6-pre1
 
 
patch-2.4.5-reiserfs-writepage-prealloc-race.mail.patch

 
 2.4.6-pre4
 
 
patch-2.4.5-truncate_inode_pages.patch (like in lkml and Mr. Reisers
request for context)

 
 2.4.6-pre3
 
 
without
patch-2.4.5-reiserfs-memory_leak.mail.patch aka Don't worry ?-patch
yet - and

 
 This has not been sent for inclusion yet.
 
 
without
problems/errors/oopses related since the last was out. Please, don't ask,
I like these procedures as it brings back some of the old times' feeling
for ReiserFS... Hopefully I didn't miss anything. ;-))


 
 You've got every reiserfs patch that has gone to Linus, except for the
 knfsd fixes from Neil Brown.  If you aren't using NFS, you're up to date.
 
 
Except perhaps ... for some df differences before/after cp -aing my
reiserfs partitions for backup purposes (which makes a difference > +2
percent of disk space after copying), so I thought to ignore this, as I
don't miss anything and it is below normal technical/natural tolerances.
:-)

 
 Different usage of the tail option?



No, I usually create a new filesystem with identical size and type (-h 
r5 -v 2) with the latest mkreiserfs on my backup disk and mount it in 
the same way (-o noatime,notail), then copy from the original partition 
to the new one. In this step I usually get the df differences. Then I 
create a new reiserfs in the original place and copy everything back and 
don't get df differences. If I didn't use notail I would save more than 
100MB of disk space with that installation, which would be very obvious 
then.

This procedure may be a bit complicated. Maybe I do it just to see my 
reiserfs growing... ;-) The complete procedure takes about 15 minutes 
for the 2 * 1.67GB. This partition is normally mounted /root and I 
start the system from a different partition with the same kernel 
configuration for these copy sessions.


 
 
-

So, now, I don't know if I should give 2.4.6-pre4 a try because of the
apm-pci related topics from 2.4.6-pre3 that confused my (mind's) -
(computer's hardware) - relation.

What did go into the new pre-patch 2.4.6-pre4 extra to:
-pre4:
  - Chris Mason: ReiserFS pre-allocation locking bugfix


 
 Nothing reiserfs related ;-)  pre5 quickly followed pre4, so I'm assuming
 there was something wrong with pre4.  Haven't read l-k yet though, it might
 just be that Linus forgot a patch and wanted it in.
 
 -chris




Many thanks for your detailed answers!

Manuel





Re: [reiserfs-list] linux-2.4.5 reiserfs-unmount

2001-05-26 Thread Manuel Krause

Hi!

Have a look at Yura's patch in his reply to 
   [reiserfs-list] segfault: journal_begin called without kernel lock
held!

Mounting and umounting seem to work fine with this patch!


Manuel


Marc Lehmann wrote:
 
 Just upgraded (and downgraded ;) to linux-2.4.5; now, each time a
 filesystem gets unmounted I get a kernel oops like this (if required I'll
 provide more info, but it should be easily reproducible).
 
 journal_begin called without kernel lock held
 kernel BUG at journal.c:423!
...