Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Sun, 23 Jul 2006, Hans Reiser wrote:

> I want reiserfs to be the filesystem that professional system administrators view as the one with both the fastest technological pace and the most conservative release management.

Well, I, with my administrator hat on, phased out all reiserfs file systems and replaced them with ext3. That rid me of silent corruptions, immature reiserfsprogs, and hash collision chain limits.

> I apologize to users that the technology required a 5-year gap between releases. It just did; an outsider may not realize how deep the changes we made were. Things like per-node locking based on a whole new approach to tree locking that goes bottom up instead of the usual top down are big tasks. Dancing trees are a big change, getting rid of blobs is a big change, wandering logs. We did a lot of things like that, and got very fortunate with them. If we had tried to add such changes to V3, the code would have been unstable the whole 5 years, and would not have come out right.

And that is something an administrator does not care the least about. It must simply work, and the tools must simply work. Once I hit issues like xfs_check believing / is mounted R/W (not ignoring rootfs) and refusing the R/O check, reiserfsck being unable to fix a R/O file system (I believe this one got fixed before 3.6.19), or, in particular, silent corruptions that show up later in a routine reiserfsck --check after a kernel update, the filesystem and its tools appear in a bad light. I've never had such troubles with ext2fs or ext3fs or FreeBSD's or Solaris's ufs. I'm not sure what patches Chris added to SUSE's reiserfs, nor do I care any more. The father declared his child unsupported, and that's the end of the story for me. There's nothing wrong with focusing on newer code, but the old code needs to be cared for, too, to fix remaining issues such as the "can only have N files with the same hash value" limit.
(I am well aware this is exploiting worst-case behavior in a malicious sense but I simply cannot risk such nonsense on a 270 GB RAID5 if users have shared work directories.) -- Matthias Andree
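For context on the hash limit being discussed: reiserfs v3 locates directory entries by a 32-bit name hash (the default algorithm is called "r5"), and names that hash to the same value share a bounded overflow chain, which is where the "can only have N files with the same hash value" ceiling comes from. The sketch below is my own Python transcription of the r5 byte-mixing loop as it appears in the reiserfs sources; the function name and the explicit 32-bit mask are mine, so treat it as illustrative rather than authoritative:

```python
def r5_hash(name: bytes) -> int:
    """Illustrative Python transcription of reiserfs v3's "r5" name hash.

    Each byte is folded into a running accumulator; the result is kept
    to 32 bits here because the on-disk key stores a 32-bit hash value.
    Distinct names that produce the same value end up in one overflow
    chain, and reiserfs v3 can only chain a limited number of entries.
    """
    a = 0
    for c in name:
        a += c << 4          # mix in the high nibble, shifted up
        a += c >> 4          # and the low bits of the byte
        a = (a * 11) & 0xFFFFFFFF  # multiply and truncate to 32 bits
    return a

print(r5_hash(b"a"))   # 17138
```

Because the hash space is only 32 bits wide, a user who can create many names with equal hashes (deliberately or by bad luck) exhausts the chain; that is the worst-case behavior the message above worries about on shared work directories.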
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
Matthias Andree wrote:

> The father declared his child unsupported,

I never did that.

> and that's the end of the story for me. There's nothing wrong about focusing on newer code, but the old code needs to be cared for, too, to fix remaining issues such as the "can only have N files with the same hash value" limit.

Requires a disk format change, in a filesystem without plugins, to fix it.

> (I am well aware this is exploiting worst-case behavior in a malicious sense but I simply cannot risk such nonsense on a 270 GB RAID5 if users have shared work directories.)
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Mon, 24 Jul 2006, Hans Reiser wrote:

> > and that's the end of the story for me. There's nothing wrong about focusing on newer code, but the old code needs to be cared for, too, to fix remaining issues such as the "can only have N files with the same hash value" limit.
>
> Requires a disk format change, in a filesystem without plugins, to fix it.

You see, I don't care an iota about plugins or other implementation details. The bottom line is that reiserfs 3.6 imposes practical limits that ext3fs doesn't impose, and that's reason enough for an administrator not to install reiserfs 3.6. Sorry.

-- Matthias Andree
Re: other system with datacorruption (2.425 + datalogging patches)
Hello, I have been googling and I have found people reporting problems with MythTV and reiserfs v3. Could there be any problem if free space falls below 10%? Regards, Paco

On Thursday, 20 July 2006 14:52, Francisco Javier Cabello wrote:
> Hello, I have another system with data corruption. I am sending you the output of 'reiserfsck --check'. Regards, Paco

-- One of my most productive days was throwing away 1000 lines of code (Ken Thompson) - PGP fingerprint: AF69 62B4 97EB F5BB 2C60 B802 568A E122 BBBE 5820 PGP Key available at http://pgp.mit.edu
how to measure data fragmentation (reiserfs v3)
Hello, I want to measure data fragmentation on my reiserfs v3 filesystem. I have found some applications to measure ext2 and ext3 filesystems, but nothing for reiserfs ones. Is there any application to check it? Regards, Paco

-- One of my most productive days was throwing away 1000 lines of code (Ken Thompson) - PGP fingerprint: AF69 62B4 97EB F5BB 2C60 B802 568A E122 BBBE 5820 PGP Key available at http://pgp.mit.edu
Re: how to measure data fragmentation (reiserfs v3)
Hi, Paco, http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/agesystem.html might help. Could you give me the link to the fragmentation measurement tool for ext2/ext3? Do you know of one for XFS? Thanks, Michael

Francisco Javier Cabello wrote:
> Hello, I want to measure data fragmentation on my reiserfs v3 filesystem. I have found some applications to measure ext2 and ext3 filesystems, but nothing for reiserfs ones. Is there any application to check it? Regards, Paco
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Monday 24 July 2006 12:25, Matthias Andree wrote:

> On Mon, 24 Jul 2006, Hans Reiser wrote:
> > > and that's the end of the story for me. There's nothing wrong about focusing on newer code, but the old code needs to be cared for, too, to fix remaining issues such as the "can only have N files with the same hash value" limit.
> >
> > Requires a disk format change, in a filesystem without plugins, to fix it.
>
> You see, I don't care an iota about plugins or other implementation details. The bottom line is that reiserfs 3.6 imposes practical limits that ext3fs doesn't impose, and that's reason enough for an administrator not to install reiserfs 3.6. Sorry.

And what do you do if you, say, run out of inodes on ext3? Do you think the users will care about that? Or what if the number of files in your mail queue or proxy cache* becomes large enough for your fs operations to slow to a crawl?

* Yes, I know most programs work around this by using many subdirs, but that's really a band-aid solution.

-- Regards, Christian Iversen
Re: other system with datacorruption (2.425 + datalogging patches)
Another one :( This time rebuild-tree failed. I guess the system didn't have enough memory. Swap is a loopback file inside the reiserfs filesystem, so when I umount the filesystem to check it, I lose swap. Regards, Paco

On Thursday, 20 July 2006 14:52, Francisco Javier Cabello wrote:
> Hello, I have another system with data corruption. I am sending you the output of 'reiserfsck --check'. Regards, Paco

-- One of my most productive days was throwing away 1000 lines of code (Ken Thompson) - PGP fingerprint: AF69 62B4 97EB F5BB 2C60 B802 568A E122 BBBE 5820 PGP Key available at http://pgp.mit.edu

bad_internal: vpf-10320: block 263224, items 3 and 4: The wrong order of items: [2 5 0x87bb2001 IND (1)], [2 5 0xee9e001 IND (1)]
the problem in the internal node occured (263224), whole subtree is skipped
vpf-10640: The on-disk and the correct bitmaps differs.
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Mon, Jul 24, 2006 at 01:34:11PM +0200, Christian Iversen wrote:

> On Monday 24 July 2006 12:25, Matthias Andree wrote:
> > The bottom line is that reiserfs 3.6 imposes practical limits that ext3fs doesn't impose, and that's reason enough for an administrator not to install reiserfs 3.6. Sorry.
>
> And what do you do if you, say, run out of inodes on ext3? Do you think the users will care about that?

From what I've seen from our customers, that never happens. Yes, there are sometimes people with a million inodes in use, and we've seen four million once, but that's never been a problem, not even with a huge mail server with thousands of users having mailboxes in maildir format. We usually limit our own filesystems to 12 million inodes and it's never been a problem to store files from our customers.

> Or what if the number of files in your mail queue or proxy cache* becomes large enough for your fs operations to slow to a crawl?

Not a problem anymore with htree directory indexing. If it's not yet enabled (dumpe2fs the filesystem and look for the dir_index feature), enable it with:

  tune2fs -O dir_index /dev/whatever

After the next mount the filesystem will use it for new directories. To optimize existing directories, run e2fsck -D.

Erik

-- +-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 -- | Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands
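The reason dir_index matters at mail-queue scale is algorithmic: an unindexed ext2/ext3 directory is a linear list of entries, so each name lookup in an n-entry directory costs O(n), while the htree index turns lookups into roughly constant-time hash probes. The toy model below is not ext3 code at all, just a sketch of that complexity difference using a plain list versus a dict:

```python
# Toy model of directory lookup cost: linear scan vs. hashed index.
# Not ext3/htree code; it only illustrates why indexing keeps
# lookups fast as the number of directory entries grows.

entries = [f"msg.{i}" for i in range(10_000)]

def lookup_linear(name):
    """Unindexed directory: walk every entry until we hit the name."""
    for i, e in enumerate(entries):
        if e == name:
            return i
    return None

# Indexed directory: build the name -> slot map once, probe per lookup.
index = {e: i for i, e in enumerate(entries)}

def lookup_indexed(name):
    return index.get(name)

print(lookup_linear("msg.9999"))   # 9999, after scanning all entries
print(lookup_indexed("msg.9999"))  # 9999, after one hash probe
```

In the worst case (a name near the end, or a miss), the linear version touches every entry, which is exactly the "slow to a crawl" behavior described above; the indexed version does not degrade with directory size.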
Re: serious Reiser4 fsck problem
Hello

On Sat, 2006-07-22 at 12:33 +0200, Dirk wrote:
> Łukasz Mierzwa wrote:
> > Dnia Sat, 22 Jul 2006 12:08:58 +0200, Dirk [EMAIL PROTECTED] wrote:
> > > Hello, will using fsck.reiser4 --build-fs result in data loss? I mean to fix the partition - NOT to lose the data on it. The manual page is also not very specific about whether data will be lost when using this option... Using --fix resulted in a message that I should use --build-fs to remove corruption...
> >
> > You will probably end up with some files in the lost+found dir; those will be the files whose parent dir and name fsck can't recognize. Depending on what corruption you have, there can always be some files that you won't get back. As always, backups are a must-have on any fs.
>
> The hdd was full and a process was still trying to write and create new files... So I guess those files might end up in lost+found... I just want to make sure that --build-fs will not build a _new_ fs, i.e. destroy the existing one by formatting it?!?

It will not format. This is another mode of recovery.

> That wasn't very clear from what I'd read in the manual... Dirk
wiki entry (Was: Re: portage tree)
On Sat, Jul 22, 2006 at 11:50:24PM -0600, Hans Reiser wrote:

> Thanks Christian. You can go ahead and add something to our wiki pointing to it if you would like. This might help tide people over until the repacker ships.

I'd love to, but alas, wiki.namesys.com appears to have a serious problem at the moment. The machine (193.232.112.68) responds to ping, and the http port is open, but connections fail. The university's squid proxies have quite a long timeout, and I repeatedly get "connection reset by peer". A Google search result yields "This wiki is installed on a busy underpowered server running Reiser4.", which seems like a mild understatement. How busy is that machine, actually? Kind regards, Chris
Re: other system with datacorruption (2.425 + datalogging patches)
The MythTV issue should only cause severe slowdown while writing data after free space falls below 10%. I have not seen it cause any corruption whatsoever. I'm in the process of testing Jeff's patch to fix this issue right now.

On Mon, 2006-07-24 at 12:30 +0200, Francisco Javier Cabello wrote:
> Hello, I have been googling and I have found people reporting problems with MythTV and reiserfs v3. Could there be any problem if free space falls below 10%? Regards, Paco
>
> On Thursday, 20 July 2006 14:52, Francisco Javier Cabello wrote:
> > Hello, I have another system with data corruption. I am sending you the output of 'reiserfsck --check'. Regards, Paco

-- Mike Benoit [EMAIL PROTECTED]
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
> You see, I don't care an iota about plugins or other implementation details. The bottom line is that reiserfs 3.6 imposes practical limits that ext3fs doesn't impose, and that's reason enough for an administrator not to install reiserfs 3.6. Sorry.

I don't care an iota about ext3fs; reiser3 does its job for me. Ext3 has quite a number of its own limits, but your iotas and ext3's limits are not even close to the subject of this conversation. Thank you.

-- liberation loophole will make it clear
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Mon, 2006-07-24 at 12:25 +0200, Matthias Andree wrote:

> On Mon, 24 Jul 2006, Hans Reiser wrote:
> > > and that's the end of the story for me. There's nothing wrong about focusing on newer code, but the old code needs to be cared for, too, to fix remaining issues such as the "can only have N files with the same hash value" limit.
> >
> > Requires a disk format change, in a filesystem without plugins, to fix it.
>
> You see, I don't care an iota about plugins or other implementation details. The bottom line is that reiserfs 3.6 imposes practical limits that ext3fs doesn't impose, and that's reason enough for an administrator not to install reiserfs 3.6. Sorry.

And EXT3 imposes practical limits that ReiserFS doesn't, as well. The big one is a fixed number of inodes that can't be adjusted on the fly, which was reason enough for me not to use EXT3 and to use ReiserFS instead. Do you consider the EXT3 developers to have abandoned it because they haven't fixed this issue? I don't; I just think of it as using the right tool for the job. I've been bitten by running out of inodes on several occasions, and by switching to ReiserFS it saved one company I worked for over $250,000 because they didn't need to buy a totally new piece of software. I haven't been able to use EXT3 on a backup server for the last ~5 years due to inode limitations. Instead, ReiserFS has been filling that spot like a champ. The bottom line is that every file system imposes some sort of limits that bite someone. In your case it sounds like EXT3's limits weren't an issue for you; in my case they were. That's life.

-- Mike Benoit [EMAIL PROTECTED]
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Mon, 2006-07-24 at 14:06 -0400, Horst H. von Brand wrote:

> > And EXT3 imposes practical limits that ReiserFS doesn't as well. The big one being a fixed number of inodes that can't be adjusted on the fly,
>
> Right. Plan ahead.

That is great in theory. But back in reality, when you're working for a company that is growing by leaps and bounds, that isn't always possible. Why would I want to intentionally limit myself to a set number of inodes when I can get a performance increase and not have to worry about it by simply using ReiserFS? It all boils down to using the right tool for the job, and ReiserFS was the right tool for this job.

> > I've been bitten by running out of inodes on several occasions,
>
> Me too. It was rather painful each time, but fixable (and in hindsight, dumb user (setup) error).
>
> > and by switching to ReiserFS it saved one company I worked for over $250,000 because they didn't need to buy a totally new piece of software.
>
> How can a filesystem (which by basic requirements and design is almost transparent to applications) make such a difference?!

Very easily: the backup software the company had spent ~$75,000 on before I started working there used tiny little files as its database. Once the inodes ran out, the entire system pretty much came to a screeching halt. We basically had two options: use ReiserFS, or find another piece of software that didn't use tiny little files as its database. If I recall correctly, the database went from about 2 million files to over 40 million in the span of just a few months. No one could have predicted that. I believe this database was on an 18GB SCSI drive, so even using 1K blocks wouldn't have worked. According to the mke2fs man page:

  -i bytes-per-inode
     This value generally shouldn't be smaller than the blocksize of the filesystem, since then too many inodes will be made.

The only other option at that time was to purchase Veritas backup, and it's not cheap. We ended up switching to ReiserFS; it improved our backup/restore time immensely and also bought us about one year before the system finally outgrew itself for good. By that time the company could afford to drop $250,000 on high-end backup software so we could grow past 10TB.

-- Mike Benoit [EMAIL PROTECTED]
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
Mike Benoit wrote:

> I've been bitten by running out of inodes on several occasions, and by switching to ReiserFS it saved one company I worked for over $250,000 because they didn't need to buy a totally new piece of software.

ext3fs's inode density is configurable; reiserfs's hash overflow chain length is not, and it doesn't show in df -i either. If you need lots of inodes, mkfs for lots. That's old Unix lore.
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
Mike Benoit [EMAIL PROTECTED] wrote:

> On Mon, 2006-07-24 at 12:25 +0200, Matthias Andree wrote:
> > On Mon, 24 Jul 2006, Hans Reiser wrote:
> > > > and that's the end of the story for me. There's nothing wrong about focusing on newer code, but the old code needs to be cared for, too, to fix remaining issues such as the "can only have N files with the same hash value" limit.
> > >
> > > Requires a disk format change, in a filesystem without plugins, to fix it.
> >
> > You see, I don't care an iota about plugins or other implementation details. The bottom line is that reiserfs 3.6 imposes practical limits that ext3fs doesn't impose, and that's reason enough for an administrator not to install reiserfs 3.6. Sorry.
>
> And EXT3 imposes practical limits that ReiserFS doesn't as well. The big one being a fixed number of inodes that can't be adjusted on the fly,

Right. Plan ahead.

> which was reason enough for me to not use EXT3 and use ReiserFS instead.

I don't see this following in any way.

> Do you consider the EXT3 developers to have abandoned it because they haven't fixed this issue? I don't, I just think of it as using the right tool for the job.

Dangerous parallel, that one...

> I've been bitten by running out of inodes on several occasions,

Me too. It was rather painful each time, but fixable (and in hindsight, dumb user (setup) error).

> and by switching to ReiserFS it saved one company I worked for over $250,000 because they didn't need to buy a totally new piece of software.

How can a filesystem (which by basic requirements and design is almost transparent to applications) make such a difference?!

> I haven't been able to use EXT3 on a backup server for the last ~5 years due to inode limitations.

See comment above. Read mke2fs(8) with care.

> Instead, ReiserFS has been filling that spot like a champ.

Nice for you.

> The bottom line is that every file system imposes some sort of limits that bite someone.

Mostly that infinite disks are hard to come by ;-)

> In your case it sounds like EXT3 limits weren't an issue for you, in my case they were.

I'd suspect the limits you ran into weren't exactly in ext3.

> That's life.

-- Dr. Horst H. von Brand User #22616 counter.li.org Departamento de Informatica Fono: +56 32 654431 Universidad Tecnica Federico Santa Maria +56 32 654239 Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Mon, 2006-07-24 13:37:13 -0700, Mike Benoit [EMAIL PROTECTED] wrote:

> On Mon, 2006-07-24 at 14:06 -0400, Horst H. von Brand wrote:
> The only other option at that time was to purchase Veritas backup, and it's not cheap. We ended up switching to ReiserFS; it improved our backup/restore time immensely and also bought us about one year before the system finally outgrew itself for good. By that time the company could afford to drop $250,000 on high-end backup software so we could grow past 10TB.

Erm, what does your data look like, and why does it require a $250,000 backup solution? Sounds like I'm in the wrong business... But that's for sure not lkml-related... MfG, JBG

-- Jan-Benedict Glaw [EMAIL PROTECTED] +49-172-7608481 Signature of: Don't believe in miracles: Rely on them!
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
Mike Benoit [EMAIL PROTECTED] wrote:

> On Mon, 2006-07-24 at 14:06 -0400, Horst H. von Brand wrote:
> > > And EXT3 imposes practical limits that ReiserFS doesn't as well. The big one being a fixed number of inodes that can't be adjusted on the fly,
> >
> > Right. Plan ahead.
>
> That is great in theory. But back in reality, when you're working for a company that is growing by leaps and bounds, that isn't always possible. Why would I want to intentionally limit myself to a set number of inodes when I can get a performance increase and not have to worry about it by simply using ReiserFS?

Place your filesystems on LVM; when they grow, the number of inodes grows to match. Or did you run out of inodes and not of diskspace? That only ever happened to me on (static) /dev, or (wrongly configured) newsspools...

> It all boils down to using the right tool for the job, ReiserFS was the right tool for this job.

/One/ tool that did the job. Not the right one, perhaps not even the best one.

> > > I've been bitten by running out of inodes on several occasions,
> >
> > Me too. It was rather painful each time, but fixable (and in hindsight, dumb user (setup) error).
> >
> > > and by switching to ReiserFS it saved one company I worked for over $250,000 because they didn't need to buy a totally new piece of software.
> >
> > How can a filesystem (which by basic requirements and design is almost transparent to applications) make such a difference?!
>
> Very easily: the backup software the company had spent ~$75,000 on before I started working there used tiny little files as its database.

If you know that, you configure your filesystem like a newsspool or some such.

> Once the inodes ran out the entire system pretty much came to a screeching halt.

Get out the clue-by-four and apply it to the vendor (for the design, or at the very least for not warning unsuspecting users)?

> We basically had two options: use ReiserFS, or find another piece of software that didn't use tiny little files as its database.

Or reconfigure the filesystem with more inodes (you were willing to rebuild the filesystem in any case, so...)

> If I recall correctly the database went from about 2 million files to over 40 million in the span of just a few months. No one could have predicted that. I believe this database was on an 18GB SCSI drive, so even using 1K blocks wouldn't have worked. According to the mke2fs man page:
>
>   -i bytes-per-inode
>      This value generally shouldn't be smaller than the blocksize of the filesystem, since then too many inodes will be made.

18GiB = 18 million KiB, you do have a point there. But 40 million files on that, with some space to spare, just doesn't add up.

> The only other option at that time was to purchase Veritas backup

... or a larger disk...

> and it's not cheap. We ended up switching to ReiserFS; it improved our backup/restore time immensely and also bought us about one year before the system finally outgrew itself for good. By that time the company could afford to drop $250,000 on high-end backup software so we could grow past 10TB.
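The arithmetic behind this exchange is easy to check. mke2fs's -i bytes-per-inode setting creates roughly one inode per that many bytes of filesystem, so the achievable file count is capped by size divided by bytes-per-inode. A quick sketch with the thread's numbers (the helper name is my own, and I use a decimal 18 GB for simplicity):

```python
# Rough inode planning for mke2fs -i (bytes-per-inode):
# mke2fs creates about one inode per `bytes_per_inode` of space,
# so the maximum file count is fs_bytes // bytes_per_inode.

def max_inodes(fs_bytes: int, bytes_per_inode: int) -> int:
    return fs_bytes // bytes_per_inode

GB = 10**9
disk = 18 * GB  # the 18 GB SCSI drive from the thread

# With the densest sane setting (one inode per 1 KiB block):
print(max_inodes(disk, 1024))   # 17578125 -- about 17.5 million inodes

# Fitting 40 million files would need this many bytes per inode:
print(disk // 40_000_000)       # 450 -- below the 1 KiB blocksize
```

So even at maximum inode density an 18 GB ext3 filesystem tops out well short of 40 million inodes, which is exactly the "just doesn't add up" point: 40 million files there would need roughly 450 bytes per file, inode and data included.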
Re: ReiserFS v3 choking when free space falls below 10% - FIXED
I applied the attached patch that Jeff supplied, and so far it is working flawlessly. I currently have less than 4% free space on my drive, and the CPU usage is less than 3% with two recordings going. I'll let it run until about 2% free space just to test further. It also _appears_ that overall CPU usage is down slightly, based on the vmstat output from when we were trying to diagnose the problem before compared to now. The SYS CPU time was hovering between 3-10% before, and now it seems to be between 0-2%. I haven't done any actual performance tests, though. Jeff, what drawbacks does this patch have? Thanks for all your hard work; I'm sure many other MythTV users will appreciate it.

On Thu, 2006-06-29 at 10:41 -0700, Mike Benoit wrote:
> My MythTV box recently started showing odd behavior during recordings: at certain times the load of the box would spike to 10+ and recordings would start losing frames and become unwatchable. top would show mythbackend using 90+% SYS CPU, whereas under normal circumstances it uses about 5% USR. So I finally got around to profiling mythbackend when the load starts to spike. To my surprise, it appears that once I have less than 10% (30GB) free on the drive, reiserfs can't keep up; even just writing at 1MB/sec is too much for it. Is there something that can be done to fix this? 30GB seems like a lot of wasted space.
#opreport
CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
TIMER:0 |
  samples |       % |
---------------------
    77863   78.7856  reiserfs
    18183   18.3984  vmlinux
      695    0.7032  mysqld
      452    0.4574  libc-2.4.so
      360    0.3643  libmythtv-0.19.so.0.19.0
      324    0.3278  ivtv
      323    0.3268  nvidia
      242    0.2449  libqt-mt.so.3.3.6
      110    0.1113  libpthread-2.4.so
       53    0.0536  libstdc++.so.6.0.8
       35    0.0354  ld-2.4.so
       23    0.0233  libperl.so
       22    0.0223  libz.so.1.2.3
[snip]

#opreport -l /usr/src/linux/vmlinux
CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples  %        symbol name
9607     52.8351  default_idle
7694     42.3142  find_next_zero_bit
183       1.0064  __copy_from_user_ll
57        0.3135  handle_IRQ_event
37        0.2035  __copy_to_user_ll
34        0.1870  ide_outb
30        0.1650  ide_end_request
22        0.1210  ioread8
22        0.1210  schedule
21        0.1155  get_page_from_freelist
17        0.0935  mmx_clear_page
[snip]

System Details:
---------------
Kernel v2.6.16.21 (custom compiled) - This issue also happened with 2.6.14.

Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/hda1   280G  269G    12G   97%  /

[EMAIL PROTECTED] cat /proc/mounts
rootfs / rootfs rw 0 0
/dev /dev tmpfs rw 0 0
/dev/root / reiserfs rw,noatime,nodiratime 0 0

[EMAIL PROTECTED] cat /proc/cpuinfo
processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 6
model       : 6
model name  : AMD Athlon(tm) XP 2100+
stepping    : 2
cpu MHz     : 1759.680
cache size  : 256 KB

[EMAIL PROTECTED] free
             total    used    free  shared  buffers  cached
Mem:        515992  496256   19736       0    36256  271728
-/+ buffers/cache:  188272  327720
Swap:       262136     408  261728

[EMAIL PROTECTED] ~]# hdparm -i /dev/hda
/dev/hda:
 Model=ST3300622A, FwRev=3.AND, SerialNo=3NF1GAGW
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=16384kB, MaxMultSect=16, MultSect=16
 CurCHS=4047/16/255, CurSects=16511760, LBA=yes, LBAsects=268435455
 IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3 ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7
 * signifies the current active mode

[EMAIL PROTECTED] ~]# hdparm -tT /dev/hda
/dev/hda:
 Timing cached reads:   1296 MB in 2.00 seconds = 646.99 MB/sec
 Timing buffered disk reads:  166 MB in 3.02 seconds = 55.05 MB/sec

vmstat 1 output:
----------------
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b  swpd  free  buff   cache   si  so  bi    bo    in   cs  us sy id wa
 8  0   408  5800  29308  248604   0   0   0  1036   406  132   2 98  0  0
 4  0   408  5644  29396  248608   0   0   0  1128   437  184   2 92  0  6
 7  0   408  6316  29428  248020   0   0   0  1316   539  287   0 86  0 14
 5  0   408  6104  29480  248180   0   0   0   588   415  187   0 99  0  1
 4  0   408  5764  29536  248364   0   0   0  1092   421
Re: ReiserFS v3 choking when free space falls below 10% - FIXED
Mike Benoit wrote:

> I applied the attached patch that Jeff supplied, and so far it is working flawlessly. I currently have less than 4% free space on my drive, and the CPU usage is less than 3% with two recordings going. I'll let it run until about 2% free space just to test further. It also _appears_ that overall CPU usage is down slightly, based on the vmstat output from when we were trying to diagnose the problem before compared to now. The SYS CPU time was hovering between 3-10% before, and now it seems to be between 0-2%. I haven't done any actual performance tests, though. Jeff, what drawbacks does this patch have? Thanks for all your hard work; I'm sure many other MythTV users will appreciate it.

Hi Mike -

There really shouldn't be any. I suspect that the window searching was actually causing more problems than it was solving. The original goal would have been to try to keep chunks of blocks contiguous for better access patterns, but if those chunks end up getting spread out all over the disk, that's hardly the outcome we were looking for. So, what will now happen is that the allocator will allocate the next n blocks it can find, regardless of the window size. If there happens to be a window of the size we needed, it will automatically find it through the normal process of allocating one block at a time.

-Jeff

-- Jeff Mahoney SUSE Labs
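Jeff's description of the change can be modeled in a few lines: the old behavior insisted on finding a contiguous window of the requested size in the free-block bitmap (which, on a nearly full and fragmented disk, can mean scanning almost the whole bitmap and still failing), while the patched behavior simply takes the next free blocks wherever they are. This is an illustrative model of that contrast, not the actual reiserfs allocator code; the bitmap layout and function names are made up for the sketch:

```python
# Toy model of the two block-allocation strategies discussed above.
# `bitmap` is True where a block is free. Neither function is reiserfs
# code; they only contrast window-search vs. next-free behavior.

def alloc_window(bitmap, n):
    """Old behavior: only accept n *contiguous* free blocks."""
    run = 0
    for i, free in enumerate(bitmap):
        run = run + 1 if free else 0
        if run == n:
            return list(range(i - n + 1, i + 1))
    return None  # scanned the whole bitmap, found no window of size n

def alloc_next_free(bitmap, n):
    """Patched behavior: take the next n free blocks, wherever they are."""
    got = [i for i, free in enumerate(bitmap) if free][:n]
    return got if len(got) == n else None

# A nearly full, fragmented disk: free blocks exist, but never adjacently.
bitmap = [i % 10 == 0 for i in range(100)]  # 10 free blocks, all isolated

print(alloc_window(bitmap, 3))     # None: no contiguous run of 3 exists
print(alloc_next_free(bitmap, 3))  # [0, 10, 20]
```

Note that the next-free strategy still produces contiguous allocations automatically whenever a suitable window happens to exist, which matches Jeff's point that nothing is lost by dropping the explicit window search.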
Re: the 'official' point of view expressed by kernelnewbies.org regarding reiser4 inclusion
On Mon, 2006-07-24 at 17:51 -0400, Horst H. von Brand wrote:

> > It all boils down to using the right tool for the job, ReiserFS was the right tool for this job.
>
> /One/ tool that did the job. Not the right one, perhaps not even the best one.

Please enlighten me as to what the best tool for the job would be, in your opinion. Keep in mind this was around 2002-2003, which if I recall was before dir_index for EXT3 was around and stable.

> > If I recall correctly the database went from about 2 million files to over 40 million in the span of just a few months. No one could have predicted that. I believe this database was on an 18GB SCSI drive, so even using 1K blocks wouldn't have worked. According to the mke2fs man page:
> >
> >   -i bytes-per-inode
> >      This value generally shouldn't be smaller than the blocksize of the filesystem, since then too many inodes will be made.
>
> 18GiB = 18 million KiB, you do have a point there. But 40 million files on that, with some space to spare, just doesn't add up.
>
> > The only other option at that time was to purchase Veritas backup
>
> ... or a larger disk...

Why would we waste money purchasing larger disks just to _temporarily_ work around an EXT3 limitation? Even if we doubled the size of the disks (36GB), we would have run into the same problem within months. I wouldn't consider this to be smart "planning ahead", as you put it. It's not just an inode issue either; it's a performance issue. In benchmarks we ran with just 100,000 files making up 1GB of data, ReiserFS blew EXT3 out of the water: EXT3 was over 50% slower. If I recall, the database was about 5GB at 25 million files. When you have a limited window of time to do the backups in, performance is critical.

http://fsbench.netnation.com/new_hardware/2.6.0-test9/scsi/bonnie.html (4th table down)

-- Mike Benoit [EMAIL PROTECTED]