Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
Jan Claeys wrote:
> The main reason (IMO) why "defrag" is not useful (anymore) is that for
> ages there hasn't been any (guaranteed) correlation between hardware
> order and software order of sectors on a disk. Defragmenting disks
> might actually fragment them more on a physical level, and thus cause
> slow-downs. And in some cases (physically) fragmented sectors might be
> faster to read/write than non-fragmented ones (I used a custom,
> partially self-written, diskette formatting program to do exactly that
> under MS-DOS!). So, any defrag program would require help from the hard
> disk's firmware to be really efficient (and AFAIK no firmware supports
> this).

No, the only time the logical sectors become physically out of order is
when defect remapping has taken place. Sequential reads of sectors in
order are still the fastest way to access the disk, so access to files
which are not fragmented is faster than access to files which are.

--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
On Monday 08-10-2007 at 13:16 [timezone -0400], Phillip Susi wrote:
> Jan Claeys wrote:
> > But I think a similar API could be used to mark & move bad sectors or
> > "lost" sectors, and that's more related to this discussion...
>
> As I said, there is no need to make such an effort because ext rarely
> becomes fragmented enough to worry about. The fact that the defrag
> package has not really been maintained in 10 years shows that there is
> no strong need for an offline defrag, let alone an online one.

The main reason (IMO) why "defrag" is not useful (anymore) is that for
ages there hasn't been any (guaranteed) correlation between hardware
order and software order of sectors on a disk. Defragmenting disks
might actually fragment them more on a physical level, and thus cause
slow-downs. And in some cases (physically) fragmented sectors might be
faster to read/write than non-fragmented ones (I used a custom,
partially self-written, diskette formatting program to do exactly that
under MS-DOS!). So, any defrag program would require help from the hard
disk's firmware to be really efficient (and AFAIK no firmware supports
this).

But what I was thinking about was similar atomic operations that allow
_other_ filesystem cleaning tasks to be done while a filesystem is in
use (r/w). ('fsck' might be an example.) I understand these don't exist
now, but they might be a good idea for future filesystems or filesystem
versions... :)

--
Jan Claeys
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
Jan Claeys wrote:
> Ext2/ext3 suffer from fragmentation too, when available disk space gets
> low enough.

Yeah, that's why the defrag package was written.

> But I think a similar API could be used to mark & move bad sectors or
> "lost" sectors, and that's more related to this discussion...

As I said, there is no need to make such an effort because ext rarely
becomes fragmented enough to worry about. The fact that the defrag
package has not really been maintained in 10 years shows that there is
no strong need for an offline defrag, let alone an online one.
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
On Wednesday 03-10-2007 at 15:35 [timezone -0400], Phillip Susi wrote:
> Jan Claeys wrote:
> > About doing "live" fsck & defrag on a rw filesystem, IIRC Windows NT
> > has a system API for doing e.g. atomic "swap 2 sectors" operations;
> > does 'linux', or any of the filesystem drivers for it, support
> > something like that?
>
> I think XFS or JFS supports online defragmenting, but no other work
> has been done in that area due to lack of need. Even the offline
> defrag package has not been maintained for the last 10 years due to
> lack of interest. When you don't have a silly problem with
> fragmentation, there is no motivation to solve the non-problem.

Ext2/ext3 suffer from fragmentation too, when available disk space gets
low enough.

But I think a similar API could be used to mark & move bad sectors or
"lost" sectors, and that's more related to this discussion...

--
Jan Claeys
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
Jan Claeys wrote:
> Indeed: 'smartmontools' for hardware defects, "fsck" for
> filesystem defects.
>
> About doing "live" fsck & defrag on a rw filesystem, IIRC Windows NT
> has a system API for doing e.g. atomic "swap 2 sectors" operations;
> does 'linux', or any of the filesystem drivers for it, support
> something like that?

I think XFS or JFS supports online defragmenting, but no other work has
been done in that area due to lack of need. Even the offline defrag
package has not been maintained for the last 10 years due to lack of
interest. When you don't have a silly problem with fragmentation, there
is no motivation to solve the non-problem.
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
On Tuesday 02-10-2007 at 13:56 [timezone -0400], Phillip Susi wrote:
> Jan Claeys wrote:
> > I'm not an Ubuntu developer, but if 'badblocks' looks for hardware
> > defects, it's mostly useless on most hard disks in use these days.
> > The HDD firmware does internal bad block detection & replacement
> > (using spare blocks on the disk reserved for that purpose). So if
> > you can detect any bad blocks using a software check, it means that
> > your hard disk is almost dead and should be replaced ASAP (like,
> > rather today than tomorrow).
>
> It can only remap the block on a write, not a read,

Which means it might be useful as an emergency solution while you're
waiting for the new disks to arrive.

> but yeah, smartmontools is a better method to monitor for defects.

Indeed: 'smartmontools' for hardware defects, "fsck" for filesystem
defects.

About doing "live" fsck & defrag on a rw filesystem, IIRC Windows NT has
a system API for doing e.g. atomic "swap 2 sectors" operations; does
'linux', or any of the filesystem drivers for it, support something like
that?

--
Jan Claeys
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
Jan Claeys wrote:
> I'm not an Ubuntu developer, but if 'badblocks' looks for hardware
> defects, it's mostly useless on most hard disks in use these days. The
> HDD firmware does internal bad block detection & replacement (using
> spare blocks on the disk reserved for that purpose). So if you can
> detect any bad blocks using a software check, it means that your hard
> disk is almost dead and should be replaced ASAP (like, rather today
> than tomorrow).

It can only remap the block on a write, not a read, but yeah,
smartmontools is a better method to monitor for defects.
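To illustrate the "remap on write, not read" point above: when a sector is pending (unreadable), the drive cannot reallocate it on a failed read because it no longer has the data, but overwriting the sector gives the firmware a chance to substitute a spare block. The following is a minimal sketch of the seek-and-write pattern, demonstrated on an ordinary file standing in for a block device (the device path and sector number are hypothetical; doing this on a real disk destroys the sector's contents):

```python
import os

SECTOR_SIZE = 512

def rewrite_sector(dev_path, sector):
    """Overwrite one sector with zeros.

    On a real disk, a pending (unreadable) sector is only remapped by
    the firmware when it is written; a failed read cannot trigger
    reallocation. Here we just show the seek-and-write pattern against
    a regular file acting as a stand-in for the device.
    """
    fd = os.open(dev_path, os.O_WRONLY)
    try:
        os.lseek(fd, sector * SECTOR_SIZE, os.SEEK_SET)
        os.write(fd, b"\x00" * SECTOR_SIZE)
        os.fsync(fd)  # make sure the write actually reaches the medium
    finally:
        os.close(fd)
```

On a real system the same effect is usually achieved with `dd` writing zeros to the offending LBA; the drive's reallocated-sector count (visible via smartmontools) then goes up by one if the firmware decided to remap.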
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
On Monday 01-10-2007 at 18:19 [timezone +0200], Waldemar Kornewald wrote:
> Could an Ubuntu developer please explain what advantages
> and disadvantages there might be with badblocks

I'm not an Ubuntu developer, but if 'badblocks' looks for hardware
defects, it's mostly useless on most hard disks in use these days. The
HDD firmware does internal bad block detection & replacement (using
spare blocks on the disk reserved for that purpose). So if you can
detect any bad blocks using a software check, it means that your hard
disk is almost dead and should be replaced ASAP (like, rather today
than tomorrow).

--
Jan Claeys
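Since the firmware handles remapping internally, the useful host-side signal is the drive's SMART attribute table (e.g. `smartctl -A /dev/sda` from smartmontools): a growing reallocated or pending sector count is the "disk is almost dead" warning described above. A small sketch of pulling those raw counts out of the attribute table; the sample output below is illustrative and simplified, as real drives vary in which attributes they report:

```python
# Illustrative excerpt of a smartmontools attribute table; columns are
# ID#, name, flag, value, worst, thresh, type, updated, when_failed,
# and the raw value we care about.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
"""

def raw_attribute(output, name):
    """Return the raw value of a named SMART attribute, or None."""
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == name:
            return int(fields[9])
    return None

print(raw_attribute(SAMPLE, "Reallocated_Sector_Ct"))   # 0
print(raw_attribute(SAMPLE, "Current_Pending_Sector"))  # 3
```

A monitoring job could compare these counts against the last run and alert when they increase, which catches the failing-disk case well before a full surface scan would.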
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
Hi,

On 10/1/07, Vincenzo Ciancia <[EMAIL PROTECTED]> wrote:
> I still am convinced that fsck is _not_ the right tool for the purpose.
> Ext3 already has a journal that should (hopefully) avoid file system
> corruption due to power failures. What is the point in running fsck
> periodically? If it's to check for disk errors, then badblocks is the
> right tool and it can run read-only on a mounted filesystem.

Sounds good. Could an Ubuntu developer please explain what advantages
and disadvantages there might be with badblocks, and whether it would
be difficult to switch to that tool (running in the background)?
Thanks.

Regards,
Waldemar Kornewald
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
I haven't looked at how it actually works yet, but the idea of being
able to check the filesystem and/or blocks read-only while the system
is running, and only warn on error, sounds fairly appealing.

I imagine the implementation could look something like the notification
for a needed reboot after a kernel upgrade, or the one for restarting
Firefox after an update, with a little "I have important information
for you!" lightbulb in the notification area that would explain what's
going on, and warn that the process could take a significant amount of
time (if possible, an estimate based on disk size?).

Note that we need to make sure the check in the background uses only
idle CPU time, not running immediately after you boot (making your
login annoyingly slow), or jumping to 100% at a scheduled time (see
Beagle).

I don't for one minute buy the argument that "Windows manages without
disk checks" is a valid point against us doing it - I would be very
upset if we did everything like Windows, as there is a reason I
switched.

I think both fsck and badblocks are useful tools, and I definitely see
the advantage of running them on a regular basis. The discussion here
shouldn't be about *whether* to check for disk and filesystem errors,
but *how* and *when* we could do so in a more effective and less
intrusive manner, with more explanation of what is happening and
warning of when time-consuming processes will be necessary.
Re: regular fsck runs are too disturbing - and current approach does not work very well in detecting defects!
On 01/10/2007 Waldemar Kornewald wrote:
> Did you ever use WinXP and run chkdsk from the command line? It warns
> you that it can't *correct* errors (a reboot is needed if errors are
> found), but it can at least *detect* errors on a mounted and active
> partition (even the boot partition, in case you wondered). Why should
> Linux not be able to copy this behavior?

I still am convinced that fsck is _not_ the right tool for the purpose.
Ext3 already has a journal that should (hopefully) avoid file system
corruption due to power failures. What is the point in running fsck
periodically? If it's to check for disk errors, then badblocks is the
right tool and it can run read-only on a mounted filesystem.

Moreover, if the point is to check periodically, then we could check a
small number of blocks at a time, using low disk priority like search
daemons (should) do, or even check random blocks.

Finally, I want to point out to those who say fsck defends your data: I
have a desktop machine which hosts an internal service, so it's
continuously up. I once rebooted, the disk was damaged, and I could no
longer boot or recover data (I had a backup, in any case, but that's
not so typical of desktop users). However, it had an uptime of months.
If I had had an online check (e.g. read-only fsck, or SMART, or
badblocks) I would have discovered the problem earlier, and would have
been able to recover some data. I know this from long experience, so
don't tell me it's not likely.

In my opinion, a blueprint should be written about checking _blocks_ of
disks while running the OS, in such a way that user work is not
affected at all, by modifying the badblocks command.

Vincenzo
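The "check a small number of blocks at a time" scheme above can be sketched as a resumable read-only scanner: each invocation reads a bounded slice of the disk starting where the previous run stopped, records any unreadable blocks, and persists its position so a periodic job sweeps the whole disk over many runs. This is a hypothetical sketch, not badblocks itself; the function and file names are invented, and on a real system you would point it at a block device, query the device size properly, and run it under idle I/O priority (e.g. `ionice -c3`):

```python
import os

BLOCK_SIZE = 4096
BLOCKS_PER_RUN = 1024  # keep each run short so user work is unaffected

def scan_some_blocks(dev_path, state_path):
    """Read-only check of the next slice of a disk (or stand-in file).

    Resumes from the offset saved in state_path, reads at most
    BLOCKS_PER_RUN blocks, and returns the list of blocks that could
    not be read. Restarts from block 0 once a sweep completes.
    """
    start = 0
    if os.path.exists(state_path):
        with open(state_path) as f:
            start = int(f.read() or 0)

    size = os.path.getsize(dev_path)  # for a block device, query its size instead
    total_blocks = size // BLOCK_SIZE
    if start >= total_blocks:
        start = 0  # previous sweep finished; begin a new one

    end = min(start + BLOCKS_PER_RUN, total_blocks)
    bad = []
    with open(dev_path, "rb") as dev:
        for block in range(start, end):
            dev.seek(block * BLOCK_SIZE)
            try:
                dev.read(BLOCK_SIZE)
            except OSError:
                bad.append(block)  # unreadable: a candidate bad block

    with open(state_path, "w") as f:
        f.write(str(end))  # persist progress for the next run
    return bad
```

Run from cron every few minutes, this spreads a full-disk read check over hours or days instead of stalling the machine at boot, which is essentially the low-impact behavior the blueprint would describe.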