Re: Is there a more aggressive fixer than btrfsck?

2010-06-29 Thread Daniel Kozlowski
On Mon, Jun 28, 2010 at 10:31 PM, Rodrigo E. De León Plicet
rdele...@gmail.com wrote:
 On Mon, Jun 28, 2010 at 8:48 AM, Daniel Kozlowski
 dan.kozlow...@gmail.com wrote:
 Sean Bartell wingedtachikoma at gmail.com writes:

  Is there a more aggressive filesystem restorer than btrfsck?  It simply
  gives up immediately with the following error:
 
 btrfsck: disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)'
  failed.

 btrfsck currently only checks whether a filesystem is consistent. It
 doesn't try to perform any recovery or error correction at all, so it's
 mostly useful to developers. Any error handling occurs while the
 filesystem is mounted.


 Is there any plan to implement this functionality? It would seem to me to be a
 pretty basic feature that is missing.

 If Btrfs aims to be at least half of what ZFS is, then it will not
 impose a need for fsck at all.

 Read "No, ZFS really doesn't need a fsck" at the following URL:

 http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html


Interesting idea. It would seem to me, however, that the functionality
described in that article is more concerned with a bad transaction
rather than something like a hardware failure, where a block written
more than 128 transactions ago is now corrupted and consequently the
entire partition is now unmountable (that is what I think I am looking
at with btrfs).


-- 
S.D.G.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: default subvolume abilities/restrictions

2010-06-29 Thread Hubert Kario
On Friday 18 June 2010 23:01:37 C Anthony Risinger wrote:
 On Sun, Jun 13, 2010 at 12:47 PM, C Anthony Risinger anth...@extof.me 
wrote:
  On Sat, Jun 12, 2010 at 8:06 PM, C Anthony Risinger anth...@extof.me 
wrote:
  On Sat, Jun 12, 2010 at 7:22 PM, David Brown bt...@davidb.org wrote:
  On Sat, Jun 12, 2010 at 06:06:23PM -0500, C Anthony Risinger wrote:
  # btrfs subvolume create new_root
  # mv . new_root/old_root
  
  can i at least get confirmation that the above is possible?
  
  I've had no problem with
  
   # btrfs subvolume snapshot . new_root
   # mkdir old_root
   # mv * old_root
   # rm -rf old_root
  
  Make sure the 'mv' fails to move new_root, and I'd look at the
  new_root before removing everything.
  
  David
  
  heh, yeah i as i was writing the last email i realized that all i
  really wanted was to:
  
  # mv * new_root
  
  for some reason i was convinced that i must snapshot the old_root (.)
  to new_root... and then remove the erroneous stuff from old_root (.).
  thus a way to parent the default subvol (old_root/.) seemed a better
  solution...
  
  but alas, a snapshot isn't necessary.  i can create an empty subvol
  new_root, and then mv * new_root.
  
  i don't know how that escaped me :-), sorry for all the noise.
  however, there probably is a legitimate use case for wanting to
  replace the default subvolume, but this isn't it.
  
  C Anthony
  
  ok i take it all back, i DO need this...
  
  i rewrote my initramfs hook to do the following operations:
  
  # btrfs subvolume create /new_root
  # mv /* /new_root
  
  instead of what i had:
  
  # btrfs subvolume snapshot / /new_root
  
  and it resulted in scarily COPYING my entire system... several gigs
  worth... to the newly created subvolume, which took forever, and
  grinded on my HD for awhile.  i don't know how long because i went to
  bed.
  
  this is why i need a way to parent the default subvolume.
  
  a snapshot is nice and quick, but it leaves / full of erroneous
  folders (dev/etc/usr/lib), an entire system, that will no longer be
  used.  this space will in time become dead wasted space unless my
  users manually rm -rf themselves.
  
  so... any input on this?  how can i effectively, and efficiently, move
  a users installation into a dedicated subvolume, when they have
  already installed into the default subvolume?
  
  i think the best way is what i originally suggested; make an empty
  subvolume the new top-level subvol, and place the old top-level subvol
  INTO it with a new name.
  
  thoughts?
 
 can i get a little feedback on this problem?  i choke slightly when
 telling users the only way to clean their old / is by rm -rf'ing
 everything
 
 i need the ability to move the default subvolume into a new, empty
 subvolume.  the empty subvolume then becomes the new default.
 
 the end result is moving the users installation into a dedicated
 subvolume, cleanly and efficiently, so the system can do other things
 with the subroot structure.
 
 the way i am doing it now is snapshotting the users / to /__active...
 however, the side effect is an entire system worth of files that will
 in time become dead space.
 
 moving the users install via mv into an empty subvol does not work.
 everything is actually copied = slow,slow,slow.

I don't have a btrfs file system on hand to actually try it, but did you try 
copying using cp --reflink=always?
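Something along these lines might work (untested here, paths are illustrative): `--reflink=auto` shares extents on btrfs and quietly falls back to an ordinary copy elsewhere, while `--reflink=always` refuses to copy at all on a filesystem without reflink support, so you notice if no sharing happened.

```shell
# Sketch: copy a tree with reflinks instead of duplicating the data.
src=$(mktemp -d); dst=$(mktemp -d)
echo "some data" > "$src/file"
cp -a --reflink=auto "$src/." "$dst/"   # on btrfs, use --reflink=always to insist on sharing
cat "$dst/file"                         # -> some data
```

On btrfs the reflinked copy completes in roughly the time of a metadata update, since no file data is rewritten.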

 
 ideas???  how can i parent the default subvol with an empty subvol?
 this seems a legitimate operation.

 
 thanks,
 C Anthony

-- 
Hubert Kario
QBS - Quality Business Software
02-656 Warszawa, ul. Ksawerów 30/85
tel. +48 (22) 646-61-51, 646-74-24
www.qbs.com.pl


Re: default subvolume abilities/restrictions

2010-06-29 Thread Goffredo Baroncelli
On Tuesday, June 29, 2010, Hubert Kario wrote:
 On Friday 18 June 2010 23:01:37 C Anthony Risinger wrote:
  On Sun, Jun 13, 2010 at 12:47 PM, C Anthony Risinger anth...@extof.me 
 wrote:
   On Sat, Jun 12, 2010 at 8:06 PM, C Anthony Risinger anth...@extof.me 
 wrote:
   On Sat, Jun 12, 2010 at 7:22 PM, David Brown bt...@davidb.org wrote:
   On Sat, Jun 12, 2010 at 06:06:23PM -0500, C Anthony Risinger wrote:
   # btrfs subvolume create new_root
   # mv . new_root/old_root
   
   can i at least get confirmation that the above is possible?
   
   I've had no problem with
   
# btrfs subvolume snapshot . new_root
# mkdir old_root
# mv * old_root
# rm -rf old_root
   
   Make sure the 'mv' fails to move new_root, and I'd look at the
   new_root before removing everything.
   
   David
   
   heh, yeah i as i was writing the last email i realized that all i
   really wanted was to:
   
   # mv * new_root
   
   for some reason i was convinced that i must snapshot the old_root (.)
   to new_root... and then remove the erroneous stuff from old_root (.).
   thus a way to parent the default subvol (old_root/.) seemed a better
   solution...
   
   but alas, a snapshot isn't necessary.  i can create an empty subvol
   new_root, and then mv * new_root.
   
   i don't know how that escaped me :-), sorry for all the noise.
   however, there probably is a legitimate use case for wanting to
   replace the default subvolume, but this isn't it.
   
   C Anthony
   
   ok i take it all back, i DO need this...
   
   i rewrote my initramfs hook to do the following operations:
   
   # btrfs subvolume create /new_root
   # mv /* /new_root
   
   instead of what i had:
   
   # btrfs subvolume snapshot / /new_root
   
   and it resulted in scarily COPYING my entire system... several gigs
   worth... to the newly created subvolume, which took forever, and
   grinded on my HD for awhile.  i don't know how long because i went to
   bed.
   
   this is why i need a way to parent the default subvolume.
   
   a snapshot is nice and quick, but it leaves / full of erroneous
   folders (dev/etc/usr/lib), an entire system, that will no longer be
   used.  this space will in time become dead wasted space unless my
   users manually rm -rf themselves.
   
   so... any input on this?  how can i effectively, and efficiently, move
   a users installation into a dedicated subvolume, when they have
   already installed into the default subvolume?
   
   i think the best way is what i originally suggested; make an empty
   subvolume the new top-level subvol, and place the old top-level subvol
   INTO it with a new name.
   
   thoughts?
  
  can i get a little feedback on this problem?  i choke slightly when
  telling users the only way to clean their old / is by rm -rf'ing
  everything
  
  i need the ability to move the default subvolume into a new, empty
  subvolume.  the empty subvolume then becomes the new default.
  
  the end result is moving the users installation into a dedicated
  subvolume, cleanly and efficiently, so the system can do other things
  with the subroot structure.
  
  the way i am doing it now is snapshotting the users / to /__active...
  however, the side effect is an entire system worth of files that will
  in time become dead space.
  
  moving the users install via mv into an empty subvol does not work.
  everything is actually copied = slow,slow,slow.
 
 I don't have a btrfs file system on hand to actually try it, but did you try 
 copying using cp --reflink=always?

This should work for the files but not for the directories.

The real problem is that the . (default) subvolume is a special case:
it is difficult to drop because all other subvolumes depend on it.

The 'mv' command can move subvolumes between subvolumes, but if you want
to be able to move all of them, the . (default) subvolume has to be
mounted.

The set-default command was created to change the default subvolume,
but it didn't solve the whole problem.

I am not an expert on the internals of btrfs, but I am still convinced that
subvolumes should live in a different namespace, and that they should be
mounted _only_ via  mount -o subvol=name

IMHO, for now the best thing to do is to always create the root filesystem in
a non-default subvolume, and use the . (default) subvolume _only_ to manage
the subvolumes.

To move files out of the default subvolume, the best approach is to clone
them and then remove the originals.
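A rough sketch of that layout (the device name and subvolume name are illustrative, and I haven't verified this on a current btrfs):

```shell
# Sketch: keep the real root in a named subvolume; the top-level (default)
# subvolume is used only to manage subvolumes.
mount /dev/sdX1 /mnt                      # mount the top-level subvolume
btrfs subvolume create /mnt/__active      # the actual root filesystem lives here
# ... install the system into /mnt/__active ...
umount /mnt
mount -o subvol=__active /dev/sdX1 /mnt   # boot with root on __active
```

With this layout the default subvolume only ever contains subvolumes, so replacing or renaming the "root" later is a cheap rename rather than a mass move.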

 
  
  ideas???  how can i parent the default subvol with an empty subvol?
  this seems a legitimate operation.
 
  
  thanks,
  C Anthony
 



 -- 
 Hubert Kario
 QBS - Quality Business Software
 02-656 Warszawa, ul. Ksawerów 30/85
 tel. +48 (22) 646-61-51, 646-74-24
 www.qbs.com.pl
 



Re: Is there a more aggressive fixer than btrfsck?

2010-06-29 Thread Hubert Kario
On Tuesday 29 June 2010 12:37:45 Daniel Kozlowski wrote:
 On Mon, Jun 28, 2010 at 10:31 PM, Rodrigo E. De León Plicet
 
 rdele...@gmail.com wrote:
  On Mon, Jun 28, 2010 at 8:48 AM, Daniel Kozlowski
  
  dan.kozlow...@gmail.com wrote:
  Sean Bartell wingedtachikoma at gmail.com writes:
   Is there a more aggressive filesystem restorer than btrfsck?  It
   simply gives up immediately with the following error:
   
   btrfsck: disk-io.c:739: open_ctree_fd: Assertion
   `!(!tree_root->node)' failed.
  
  btrfsck currently only checks whether a filesystem is consistent. It
  doesn't try to perform any recovery or error correction at all, so it's
  mostly useful to developers. Any error handling occurs while the
  filesystem is mounted.
  
  Is there any plan to implement this functionality? It would seem to me
  to be a pretty basic feature that is missing.
  
  If Btrfs aims to be at least half of what ZFS is, then it will not
  impose a need for fsck at all.
  
  Read "No, ZFS really doesn't need a fsck" at the following URL:
  
  http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.h
  tml
 
 Interesting idea. It would seem to me, however, that the functionality
 described in that article is more concerned with a bad transaction
 rather than something like a hardware failure, where a block written
 more than 128 transactions ago is now corrupted and consequently the
 entire partition is now unmountable (that is what I think I am looking
 at with btrfs).

Still, the FS alone should be able to recover from such situations. With
multiple superblocks the probability that the fs is unmountable is very small,
and if all superblocks are corrupted then you need a data recovery program,
not fsck.
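For reference, and if I read the on-disk format right, the superblock copies sit at fixed offsets (64 KiB, 64 MiB, 256 GiB) with the magic "_BHRfS_M" 64 bytes into each copy, so you can check by hand which survive (the device name is illustrative):

```shell
# Sketch: peek at the btrfs superblock mirrors by hand.
for off in 65536 67108864 274877906944; do
    dd if=/dev/sdX1 bs=1 skip=$((off + 64)) count=8 2>/dev/null
    echo    # prints _BHRfS_M if that copy's magic is intact
done
```

A mirror beyond the end of a small device simply doesn't exist, so an empty line there is expected rather than a sign of corruption.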

-- 
Hubert Kario
QBS - Quality Business Software
02-656 Warszawa, ul. Ksawerów 30/85
tel. +48 (22) 646-61-51, 646-74-24
www.qbs.com.pl


Re: Is there a more aggressive fixer than btrfsck?

2010-06-29 Thread Freddie Cash
On Tue, Jun 29, 2010 at 3:37 AM, Daniel Kozlowski
dan.kozlow...@gmail.com wrote:
 On Mon, Jun 28, 2010 at 10:31 PM, Rodrigo E. De León Plicet
 rdele...@gmail.com wrote:
 On Mon, Jun 28, 2010 at 8:48 AM, Daniel Kozlowski
 dan.kozlow...@gmail.com wrote:
 Sean Bartell wingedtachikoma at gmail.com writes:

  Is there a more aggressive filesystem restorer than btrfsck?  It simply
  gives up immediately with the following error:
 
 btrfsck: disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)'
  failed.

 btrfsck currently only checks whether a filesystem is consistent. It
 doesn't try to perform any recovery or error correction at all, so it's
 mostly useful to developers. Any error handling occurs while the
 filesystem is mounted.


 Is there any plan to implement this functionality? It would seem to me to be a
 pretty basic feature that is missing.

 If Btrfs aims to be at least half of what ZFS is, then it will not
 impose a need for fsck at all.

 Read "No, ZFS really doesn't need a fsck" at the following URL:

 http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html


 Interesting idea. It would seem to me, however, that the functionality
 described in that article is more concerned with a bad transaction
 rather than something like a hardware failure, where a block written
 more than 128 transactions ago is now corrupted and consequently the
 entire partition is now unmountable (that is what I think I am looking
 at with btrfs).

In the ZFS case, this is handled by checksumming and redundant data,
and can be discovered (and fixed) via either reading the affected data
block (in which case, the checksum is wrong, the data is read from a
redundant data block, and the correct data is written over the
incorrect data) or by running a scrub.
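The scrub side of that looks roughly like this on ZFS (the pool name "tank" is illustrative): the scrub walks every allocated block, verifies checksums, and rewrites any bad copy from redundancy.

```shell
# Sketch: trigger a full-pool integrity pass and watch its progress.
zpool scrub tank
zpool status tank    # shows scrub progress and any repaired errors
```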

Self-healing, checksumming, and data redundancy eliminate the need for
an online (or offline) fsck.

Automatic transaction rollback at boot eliminates the need for fsck at
boot, as there is no such thing as a dirty filesystem.  Either the
data is on disk and correct, or it doesn't exist.  Yes, you may lose
data.  But you will never have a corrupted filesystem.

Not sure how things work for btrfs.


-- 
Freddie Cash
fjwc...@gmail.com


Re: Is there a more aggressive fixer than btrfsck?

2010-06-29 Thread Sean Bartell
On Tue, Jun 29, 2010 at 02:36:14PM -0700, Freddie Cash wrote:
 On Tue, Jun 29, 2010 at 3:37 AM, Daniel Kozlowski
 dan.kozlow...@gmail.com wrote:
  On Mon, Jun 28, 2010 at 10:31 PM, Rodrigo E. De León Plicet
  rdele...@gmail.com wrote:
  On Mon, Jun 28, 2010 at 8:48 AM, Daniel Kozlowski
  dan.kozlow...@gmail.com wrote:
  Sean Bartell wingedtachikoma at gmail.com writes:
 
   Is there a more aggressive filesystem restorer than btrfsck?  It simply
   gives up immediately with the following error:
  
   btrfsck: disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)'
   failed.
 
  btrfsck currently only checks whether a filesystem is consistent. It
  doesn't try to perform any recovery or error correction at all, so it's
  mostly useful to developers. Any error handling occurs while the
  filesystem is mounted.
 
 
  Is there any plan to implement this functionality? It would seem to me to
  be a pretty basic feature that is missing.
 
  If Btrfs aims to be at least half of what ZFS is, then it will not
  impose a need for fsck at all.
 
  Read "No, ZFS really doesn't need a fsck" at the following URL:
 
  http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html
 
 
  Interesting idea. It would seem to me, however, that the functionality
  described in that article is more concerned with a bad transaction
  rather than something like a hardware failure, where a block written
  more than 128 transactions ago is now corrupted and consequently the
  entire partition is now unmountable (that is what I think I am looking
  at with btrfs).
 
 In the ZFS case, this is handled by checksumming and redundant data,
 and can be discovered (and fixed) via either reading the affected data
 block (in which case, the checksum is wrong, the data is read from a
 redundant data block, and the correct data is written over the
 incorrect data) or by running a scrub.
 
 Self-healing, checksumming, data redundancy eliminate the need for
 online (or offline) fsck.
 
 Automatic transaction rollback at boot eliminates the need for fsck at
 boot, as there is no such thing as a dirty filesystem.  Either the
 data is on disk and correct, or it doesn't exist.  Yes, you may lose
 data.  But you will never have a corrupted filesystem.
 
 Not sure how things work for btrfs.

btrfs works in a similar way. While it's writing new data, it keeps the
superblock pointing at the old data, so after a crash you still get the
complete old version. Once the new data is written, the superblock is
updated to point at it, ensuring that you see the new data. This
eliminates the need for any special handling after a crash.
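The same idea can be mimicked in userspace with an atomic rename: the old version stays valid until the "pointer" (here a directory entry, in btrfs the superblock) is flipped in one step. A toy sketch, not btrfs code:

```shell
# Toy sketch of copy-on-write update: write the new version out of line,
# then atomically repoint. A reader sees either the complete old version
# or the complete new one, never a half-written file.
f=$(mktemp)
echo "old data" > "$f"
echo "new data" > "$f.tmp"    # new version written to the side
mv "$f.tmp" "$f"              # atomic rename = the "superblock" flip
cat "$f"                      # -> new data
```

If a crash happens before the `mv`, the old file is untouched; after it, the new one is complete, which is exactly why no fsck-style cleanup is needed.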

btrfs also uses checksums and redundancy to protect against data
corruption. Thanks to its design, btrfs doesn't need to scan the
filesystem or cross-reference structures to detect problems. It can
easily detect corruption at run-time when it tries to read the
problematic data, and fixes it using the redundant copies.

In the event that something goes horribly wrong, for example if each
copy of the superblock or of a tree root is corrupted, you could still
find some valid nodes and try to piece them together; however, this is
rare and falls outside the scope of a fsck anyway.


Re: Is there a more aggressive fixer than btrfsck?

2010-06-29 Thread Anthony Roberts
On Tue, 29 Jun 2010 18:34:13 +0200, Hubert Kario h...@qbs.com.pl wrote:
 On Tuesday 29 June 2010 12:37:45 Daniel Kozlowski wrote:
 On Mon, Jun 28, 2010 at 10:31 PM, Rodrigo E. De León Plicet
 
 rdele...@gmail.com wrote:
  On Mon, Jun 28, 2010 at 8:48 AM, Daniel Kozlowski
  
  dan.kozlow...@gmail.com wrote:
  Sean Bartell wingedtachikoma at gmail.com writes:
   Is there a more aggressive filesystem restorer than btrfsck?  It
   simply gives up immediately with the following error:
   
   btrfsck: disk-io.c:739: open_ctree_fd: Assertion
    `!(!tree_root->node)' failed.
  
  btrfsck currently only checks whether a filesystem is consistent.
It
  doesn't try to perform any recovery or error correction at all, so
  it's
  mostly useful to developers. Any error handling occurs while the
  filesystem is mounted.
  
   Is there any plan to implement this functionality? It would seem to me
   to be a pretty basic feature that is missing.
  
  If Btrfs aims to be at least half of what ZFS is, then it will not
  impose a need for fsck at all.
  
   Read "No, ZFS really doesn't need a fsck" at the following URL:
  
 
http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.h
  tml
 
 Interesting idea. It would seem to me, however, that the functionality
 described in that article is more concerned with a bad transaction
 rather than something like a hardware failure, where a block written
 more than 128 transactions ago is now corrupted and consequently the
 entire partition is now unmountable (that is what I think I am looking
 at with btrfs).
 
 Still, the FS alone should be able to recover from such situations. With
 multiple superblocks the probability that the fs is unmountable is very small,
 and if all superblocks are corrupted then you need a data recovery program,
 not fsck.

While it would be great to have a filesystem that can recover from such
situations, or at least fail gracefully, I'd also like to be able to
verify/repair the filesystem offline, without mounting it and potentially
making things worse. For example, say you have a single-disk filesystem,
and while it can detect corruption it can't repair it. That's the sort of
scenario where you want to specify what to do, interactively or with
command line options. I don't want the only choice to be bringing it
online and destructively forcing it into a consistent state based on
variables I don't control, such as when someone attempts to access the file.

Regards,

-Anthony


Re: Is there a more aggressive fixer than btrfsck?

2010-06-29 Thread Chris Mason
On Tue, Jun 29, 2010 at 06:22:43PM -0400, Sean Bartell wrote:
 On Tue, Jun 29, 2010 at 02:36:14PM -0700, Freddie Cash wrote:
  On Tue, Jun 29, 2010 at 3:37 AM, Daniel Kozlowski
  dan.kozlow...@gmail.com wrote:
   On Mon, Jun 28, 2010 at 10:31 PM, Rodrigo E. De León Plicet
   rdele...@gmail.com wrote:
   On Mon, Jun 28, 2010 at 8:48 AM, Daniel Kozlowski
   dan.kozlow...@gmail.com wrote:
   Sean Bartell wingedtachikoma at gmail.com writes:
  
Is there a more aggressive filesystem restorer than btrfsck?  It 
simply
gives up immediately with the following error:
   
btrfsck: disk-io.c:739: open_ctree_fd: Assertion 
 `!(!tree_root->node)'
failed.
  
   btrfsck currently only checks whether a filesystem is consistent. It
   doesn't try to perform any recovery or error correction at all, so it's
   mostly useful to developers. Any error handling occurs while the
   filesystem is mounted.
  
  
    Is there any plan to implement this functionality? It would seem to me
    to be a pretty basic feature that is missing.
  
   If Btrfs aims to be at least half of what ZFS is, then it will not
   impose a need for fsck at all.

Everyone needs an fsck. Yan Zheng is working on a more complete fsck
right now, and making good progress ;)

The fsck is really for emergencies only, you won't have to run it after
a crash or anything.  It's for when you notice things have gone wrong
and just want your data back.

Over the long term we'll push more and more of the fsck into online
operations.

-chris

  
    Read "No, ZFS really doesn't need a fsck" at the following URL:
  
   http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html
  
  
    Interesting idea. It would seem to me, however, that the functionality
    described in that article is more concerned with a bad transaction
    rather than something like a hardware failure, where a block written
    more than 128 transactions ago is now corrupted and consequently the
    entire partition is now unmountable (that is what I think I am looking
    at with btrfs).
  
  In the ZFS case, this is handled by checksumming and redundant data,
  and can be discovered (and fixed) via either reading the affected data
  block (in which case, the checksum is wrong, the data is read from a
  redundant data block, and the correct data is written over the
  incorrect data) or by running a scrub.
  
  Self-healing, checksumming, data redundancy eliminate the need for
  online (or offline) fsck.
  
  Automatic transaction rollback at boot eliminates the need for fsck at
  boot, as there is no such thing as a dirty filesystem.  Either the
  data is on disk and correct, or it doesn't exist.  Yes, you may lose
  data.  But you will never have a corrupted filesystem.
  
  Not sure how things work for btrfs.
 
 btrfs works in a similar way. While it's writing new data, it keeps the
 superblock pointing at the old data, so after a crash you still get the
 complete old version. Once the new data is written, the superblock is
 updated to point at it, ensuring that you see the new data. This
 eliminates the need for any special handling after a crash.
 
 btrfs also uses checksums and redundancy to protect against data
 corruption. Thanks to its design, btrfs doesn't need to scan the
 filesystem or cross-reference structures to detect problems. It can
 easily detect corruption at run-time when it tries to read the
 problematic data, and fixes it using the redundant copies.
 
 In the event that something goes horribly wrong, for example if each
 copy of the superblock or of a tree root is corrupted, you could still
 find some valid nodes and try to piece them together; however, this is
 rare and falls outside the scope of a fsck anyway.