Re: Tux3 Report: Initial fsck has landed

2013-03-21 Thread Rob Landley

On 03/19/2013 06:00:32 PM, Martin Steigerwald wrote:

Am Dienstag, 29. Januar 2013 schrieb Daniel Phillips:
 On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o ty...@mit.edu wrote:

  On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:
  On Mon, Jan 28, 2013 at 03:27:38PM -0800, David Lang wrote:
   The situation I'm thinking of is when dealing with VMs, you make a
   filesystem image once and clone it multiple times. Won't that end up
   with the same UUID in the superblock?
 
  Yes, but one ought to be able to change the UUID a la tune2fs
  -U.  Even still... so long as the VM images have a different UUID
  than the fs that they live on, it ought to be fine.
 
  ... and this is something most system administrators should be
  familiar with.  For example, it's one of those things that Norton
  Ghost does when it makes file system image copies (the equivalent of
  tune2fs -U random /dev/XXX)

 Hmm, maybe I missed something, but it does not seem like a good idea
 to use the volume UUID itself to generate unique-per-volume metadata
 hashes if users expect to be able to change it. All the metadata
 hashes would need to be changed.

I believe that is what BTRFS is doing.

And yes, AFAIK there is no easy way to change the UUID of a BTRFS
filesystem after it was created.


I'm confused, http://tux3.org/ lists a bunch of dates from 5 years ago,  
then nothing. Is this project dead or not?


Rob
___
Tux3 mailing list
Tux3@phunq.net
http://phunq.net/mailman/listinfo/tux3


Re: Tux3 Report: Initial fsck has landed

2013-03-20 Thread Daniel Phillips
On Tue, Mar 19, 2013 at 11:54 PM, Rob Landley r...@landley.net wrote:
 I'm confused, http://tux3.org/ lists a bunch of dates from 5 years ago, then
 nothing. Is this project dead or not?

Not. We haven't done much about updating tux3.org lately, however you
will find plenty of activity here:

 https://github.com/OGAWAHirofumi/tux3/tree/master/user

You will also find fairly comprehensive updates on where we are and
where this is going, here:

 http://phunq.net/pipermail/tux3/

At the moment we're being pretty quiet because of being in the middle
of developing the next-gen directory index. Not such a small task, as
you might imagine.

Regards,

Daniel



Re: Tux3 Report: Initial fsck has landed

2013-03-19 Thread David Lang

On Wed, 20 Mar 2013, Martin Steigerwald wrote:


Am Dienstag, 29. Januar 2013 schrieb Daniel Phillips:

On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o ty...@mit.edu wrote:

On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:

On Mon, Jan 28, 2013 at 03:27:38PM -0800, David Lang wrote:

The situation I'm thinking of is when dealing with VMs, you make a
filesystem image once and clone it multiple times. Won't that end up
with the same UUID in the superblock?


Yes, but one ought to be able to change the UUID a la tune2fs
-U.  Even still... so long as the VM images have a different UUID
than the fs that they live on, it ought to be fine.


... and this is something most system administrators should be
familiar with.  For example, it's one of those things that Norton
Ghost does when it makes file system image copies (the equivalent of
tune2fs -U random /dev/XXX)


Hmm, maybe I missed something, but it does not seem like a good idea
to use the volume UUID itself to generate unique-per-volume metadata
hashes if users expect to be able to change it. All the metadata hashes
would need to be changed.


I believe that is what BTRFS is doing.

And yes, AFAIK there is no easy way to change the UUID of a BTRFS filesystem
after it was created.


In a world where systems are cloned, and many VMs are started from one master 
copy of a filesystem, a UUID is about as far from unique as anything you can 
generate.


BTRFS may have this problem, but why should Tux3 copy the problem?

David Lang



Re: Tux3 Report: Initial fsck has landed

2013-03-19 Thread Daniel Phillips
On Tue, Mar 19, 2013 at 9:04 PM, David Lang da...@lang.hm wrote:
 On Wed, 20 Mar 2013, Martin Steigerwald wrote:

 Am Dienstag, 29. Januar 2013 schrieb Daniel Phillips:

 On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o ty...@mit.edu wrote:

 On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:

 On Mon, Jan 28, 2013 at 03:27:38PM -0800, David Lang wrote:

 The situation I'm thinking of is when dealing with VMs, you make a
 filesystem image once and clone it multiple times. Won't that end up
 with the same UUID in the superblock?

 Yes, but one ought to be able to change the UUID a la tune2fs
 -U.  Even still... so long as the VM images have a different UUID
 than the fs that they live on, it ought to be fine.

  ... and this is something most system administrators should be
  familiar with.  For example, it's one of those things that Norton
  Ghost does when it makes file system image copies (the equivalent of
  tune2fs -U random /dev/XXX)

  Hmm, maybe I missed something, but it does not seem like a good idea
  to use the volume UUID itself to generate unique-per-volume metadata
  hashes if users expect to be able to change it. All the metadata hashes
  would need to be changed.

 I believe that is what BTRFS is doing.

  And yes, AFAIK there is no easy way to change the UUID of a BTRFS
  filesystem after it was created.

 In a world where systems are cloned, and many VMs are started from one
 master copy of a filesystem, a UUID is about as far from unique as anything
 you can generate.

 BTRFS may have this problem, but why should Tux3 copy the problem?

Tux3 won't copy that problem. We have enough real problems to deal with
as it is, without manufacturing new ones.

Regards,

Daniel



Re: Tux3 Report: Initial fsck has landed

2013-01-28 Thread Theodore Ts'o
On Sun, Jan 27, 2013 at 10:13:37PM -0800, Daniel Phillips wrote:
  The thing that jumps out at me with this is the question of how you will
  avoid the 'filesystem image in a file' disaster that reiserfs had (where
  its fsck could mix up metadata chunks from the main filesystem with
  metadata chunks from any filesystem images that it happened to stumble
  across when scanning the disk)
 
 Only superficially. Deep thoughts are in order. First, there needs to be a
 hole in the filesystem structure, before we would even consider trying to
 plug something in there. Once we know there is a hole, we want to
 narrow down the list of candidates to fill it. If a candidate already lies
 within a perfectly viable file, obviously we would not want to interpret
 that as lost metadata. Unless the filesystem is really messed up...
 
 That is about as far as I have got with the analysis. Clearly, much more
 is required. Suggestions welcome.
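
A hypothetical sketch of the narrowing step Daniel describes (the function name, the `holes` set, and the `viable_file_blocks` collection are all invented for illustration; this is not Tux3 code):

```python
def lost_metadata_candidates(holes, orphan_blocks, viable_file_blocks):
    """Narrow down which orphan blocks might plug a structural hole.

    holes              -- missing metadata slots found by fsck
    orphan_blocks      -- blocks that look like metadata but are unreferenced
    viable_file_blocks -- blocks already owned by intact files; a block
                          inside a perfectly viable file must not be
                          reinterpreted as lost metadata
    """
    if not holes:
        # No hole in the filesystem structure: nothing to plug, so never
        # treat stray metadata-looking data as lost metadata.
        return []
    return [blk for blk in orphan_blocks if blk not in viable_file_blocks]

# Blocks 7 and 9 both look like metadata, but 9 belongs to a healthy file.
print(lost_metadata_candidates({3}, [7, 9], {9}))  # [7]
```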

The obvious answer is what reiserfs4 ultimately ended up using.  Drop
a file system UUID in the superblock; mix the UUID into a checksum
which protects each of your metadata blocks.  We're mixing in the
inode number as well as the fs uuid in ext4's new metadata checksum
feature to protect against an inode table block getting written to the
wrong location on disk.  It will also mean that e2fsck won't mistake
an inode table from an earlier mkfs with the current file system.
This will allow us to avoid needing to zero the inode table for newly
initialized file systems.
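
A toy illustration of the mixing described above, seeding a block checksum with the filesystem UUID and the inode number (CRC32 is used purely for brevity; ext4's metadata_csum feature actually uses crc32c, and the exact seeding there is an implementation detail):

```python
import uuid
import zlib

def metadata_checksum(fs_uuid, inode_number, block_data):
    # Fold the filesystem UUID and the inode number into the seed, so an
    # identical block from another filesystem (or another inode) checksums
    # differently and can be rejected by fsck.
    seed = zlib.crc32(fs_uuid.bytes)
    seed = zlib.crc32(inode_number.to_bytes(8, "little"), seed)
    return zlib.crc32(block_data, seed)

fs_a = uuid.UUID("00000000-0000-0000-0000-000000000001")
fs_b = uuid.UUID("00000000-0000-0000-0000-000000000002")
block = b"\x00" * 4096  # e.g. a zeroed inode-table block

# The same data checksums differently per filesystem and per inode, so an
# inode table block from an earlier mkfs no longer verifies.
assert metadata_checksum(fs_a, 11, block) != metadata_checksum(fs_b, 11, block)
assert metadata_checksum(fs_a, 11, block) != metadata_checksum(fs_a, 12, block)
```

Note the flip side Daniel raises: once the UUID is an input to every metadata checksum, changing the UUID (a la tune2fs -U) means rewriting every checksum.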

Regards,

- Ted



Re: Tux3 Report: Initial fsck has landed

2013-01-28 Thread Darrick J. Wong
On Mon, Jan 28, 2013 at 03:27:38PM -0800, David Lang wrote:
 On Mon, 28 Jan 2013, Theodore Ts'o wrote:
 
 On Sun, Jan 27, 2013 at 10:13:37PM -0800, Daniel Phillips wrote:
 The thing that jumps out at me with this is the question of how you will
 avoid the 'filesystem image in a file' disaster that reiserfs had (where
 its fsck could mix up metadata chunks from the main filesystem with
 metadata chunks from any filesystem images that it happened to stumble
 across when scanning the disk)

Did that ever get fixed in reiserfs?

 
 Only superficially. Deep thoughts are in order. First, there needs to be a
 hole in the filesystem structure, before we would even consider trying to
 plug something in there. Once we know there is a hole, we want to
 narrow down the list of candidates to fill it. If a candidate already lies
 within a perfectly viable file, obviously we would not want to interpret
 that as lost metadata. Unless the filesystem is really messed up...
 
 That is about as far as I have got with the analysis. Clearly, much more
 is required. Suggestions welcome.
 
 The obvious answer is what reiserfs4 ultimately ended up using.  Drop
 a file system UUID in the superblock; mix the UUID into a checksum
 which protects each of your metadata blocks.  We're mixing in the
 inode number as well as the fs uuid in ext4's new metadata checksum
 feature to protect against an inode table block getting written to the
 wrong location on disk.  It will also mean that e2fsck won't mistake
 an inode table from an earlier mkfs with the current file system.
 This will allow us to avoid needing to zero the inode table for newly
 initialized file systems.
 
 The situation I'm thinking of is when dealing with VMs, you make a
 filesystem image once and clone it multiple times. Won't that end up
 with the same UUID in the superblock?

Yes, but one ought to be able to change the UUID a la tune2fs -U.  Even
still... so long as the VM images have a different UUID than the fs that they
live on, it ought to be fine.

--D
 
 David Lang
