Re: md0 + UUIDs for member disks

2023-12-31 Thread Felix Natter
hello Andy,

thanks for the detailed answer.

Andy Smith  writes:
> Hello,
>
> On Fri, Dec 29, 2023 at 06:17:07PM +0100, Felix Natter wrote:
>> Andy Smith  writes:
>> > But to be absolutely sure you may wish to totally ignore md0 and
>> > its member devices during install as all their data and the
>> > metadata on their member devices will still be there after
>> > install. You should just be able to see the assembled array in
>> > /proc/mdstat, and then mount the filesystem from /etc/fstab.
>> > Totally ignoring these devices during install avoids you making
>> > a mistake where you alter one of them.

-> I was referring to the above paragraph.

>> So /dev/md0 will be automatically assembled when I boot the system
>> (cat /proc/mdstat), and I can mount it using UUID?
>
> Yes, if the filesystem on /dev/md0 had a UUID before, it will still
> have one when the devices are plugged in to another system (or the
> same system after a reinstall).

That is very good.

>> I couldn't do this on another system, where the software raid(1) is for
>> the root filesystem, though. But as I understood you, in the case of the
>> root fs, the above mentioned problem does not occur?
>
> What problem do you refer to?

Please see above.

> The only thing that does sometimes happen when moving MD arrays
> around is that the name of the md device might need to be changed.
>
> For example, if you have a set of drives that have an md0 on them
> and want to move them to a system that already has an md0. The new
> system won't assemble that as md0 since it already has an md0 that
> has a different UUID. Worse, if you don't do anything and boot such
> a system with all the drives installed, it may be arbitrary as to
> WHICH ONE gets called md0! It will depend upon which one mdadm
> starts assembling first. The other one will be renamed, often to
> something weird like md127.
>
> To solve that problem one would list the pairs of array names and
> array UUIDs in /etc/mdadm/mdadm.conf. If the root filesystem is on
> an md array then the initramfs also has to be updated, to get a copy
> of mdadm.conf into that.
>
> But that is a bit of an advanced concern that you probably do not
> have. Normally /etc/mdadm/mdadm.conf is basically empty.

I am using single raid1 setups, so this does not concern me,
but many thanks for the heads-up!

Thanks and Best Regards,
Felix
-- 
Felix Natter




Re: md0 + UUIDs for member disks

2023-12-29 Thread Andy Smith
Hello,

On Fri, Dec 29, 2023 at 06:17:07PM +0100, Felix Natter wrote:
> Andy Smith  writes:
> > But to be absolutely sure you may wish to totally ignore md0 and
> > its member devices during install as all their data and the
> > metadata on their member devices will still be there after
> > install. You should just be able to see the assembled array in
> > /proc/mdstat, and then mount the filesystem from /etc/fstab.
> > Totally ignoring these devices during install avoids you making
> > a mistake where you alter one of them.
> 
> So /dev/md0 will be automatically assembled when I boot the system
> (cat /proc/mdstat), and I can mount it using UUID?

Yes, if the filesystem on /dev/md0 had a UUID before, it will still
have one when the devices are plugged in to another system (or the
same system after a reinstall).

> I couldn't do this on another system, where the software raid(1) is for
> the root filesystem, though. But as I understood you, in the case of the
> root fs, the above mentioned problem does not occur?

What problem do you refer to?

The only thing that does sometimes happen when moving MD arrays
around is that the name of the md device might need to be changed.

For example, if you have a set of drives that have an md0 on them
and want to move them to a system that already has an md0. The new
system won't assemble that as md0 since it already has an md0 that
has a different UUID. Worse, if you don't do anything and boot such
a system with all the drives installed, it may be arbitrary as to
WHICH ONE gets called md0! It will depend upon which one mdadm
starts assembling first. The other one will be renamed, often to
something weird like md127.

To solve that problem one would list the pairs of array names and
array UUIDs in /etc/mdadm/mdadm.conf. If the root filesystem is on
an md array then the initramfs also has to be updated, to get a copy
of mdadm.conf into that.

But that is a bit of an advanced concern that you probably do not
have. Normally /etc/mdadm/mdadm.conf is basically empty.
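
The recipe described above, sketched with invented device names and UUIDs
(the ARRAY line shown is purely illustrative):

```shell
# Append ARRAY lines describing the currently assembled arrays
# (names plus array UUIDs) to mdadm's config:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# A resulting line looks something like (UUID invented for illustration):
# ARRAY /dev/md0 metadata=1.2 name=myhost:0 UUID=4ba2cbd0:2a40a0c0:5d9f7f6a:0c1e2d3b

# If the root filesystem lives on an md array, copy the updated config
# into the initramfs as well:
update-initramfs -u
```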

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: md0 + UUIDs for member disks

2023-12-29 Thread Felix Natter
hi Steve,

thanks for the quick reply!

Steve McIntyre  writes:
> fnat...@gmx.net wrote:
>>
>>I have /dev/md0 mounted at /storage which consists of two HDDs.
>>
>>Now I would like to add an SSD drive for better performance of
>>VMs. Usually, before doing this, I make sure that all of my disks are
>>mounted using UUID and not device names. I do not think this is
>>the case for the two member HDDs of md0 (cat /proc/mdstat).
>>Is there an easy way to fix this?
>>
>>If I need to reinstall, can I keep the two member HDDs with all the
>>data, i.e. does the Debian12 installer recognize the member HDDs
>>and will allow me to configure /dev/md0?
>
> The reason behind using UUIDs is that individual disks don't have
> persistent names attached: /dev/sda might be /dev/sdb next time, etc.
>
> MD RAID devices *do* include persistent metadata so that the system
> can recognise them reliably. You should be fine as you are.

Thanks for the help, this was what I needed.

Cheers and Best Regards,
Felix
-- 
Felix Natter




Re: md0 + UUIDs for member disks

2023-12-29 Thread Felix Natter
Andy Smith  writes:

> Hi Felix,

hello Andy,

thank you for the quick reply!

> On Fri, Dec 29, 2023 at 04:46:10PM +0100, Felix Natter wrote:
>> I have /dev/md0 mounted at /storage which consists of two HDDs.
>> 
>> Now I would like to add an SSD drive for better performance of
>> VMs. Usually, before doing this, I make sure that all of my disks are
>> mounted using UUID and not device names. I do not think this is
>> the case for the two member HDDs of md0 (cat /proc/mdstat).
>> Is there an easy way to fix this?
>
> What are you trying to fix? Filesystems have a UUID. MD RAID devices
> have a UUID. And MD RAID member devices also have UUIDs, but they are
> all different *kinds* of UUID. Only filesystems have a filesystem
> UUID that you use in /etc/fstab and other places. You mount things
> by filesystem UUID. You don't mount MD RAID member devices; mdadm
> assembles them.
>
> RAID assembly normally happens automatically by udev finding block
> devices that have the same array id as the one that mdadm is trying
> to incrementally assemble, until all their member devices are found.
>
> So what is it that you are actually wanting to do? If you just want
> to mount whatever filesystem is on md0 by *filesystem* UUID, you do
> that in the normal way as it has nothing to do with MD: you use
> blkid or tune2fs or whatever to read the fs label and put that in
> /etc/fstab.

I was just worried that devices (i.e. /dev/sdx) will change when adding
another drive. I understand that Debian's SW-RAID assembly does not
refer to disks via device names :)

>> If I need to reinstall, can I keep the two member HDDs with all the
>> data, i.e. does the Debian12 installer recognize the member HDDs
>> and will allow me to configure /dev/md0?
>
> Yes. When you come to the partitioning and software RAID section of
> the installer your existing md0 should already be there.

That's very good!

> But to be
> absolutely sure you may wish to totally ignore md0 and its member
> devices during install as all their data and the metadata on their
> member devices will still be there after install. You should just
> be able to see the assembled array in /proc/mdstat, and then mount
> the filesystem from /etc/fstab. Totally ignoring these devices
> during install avoids you making a mistake where you alter one of
> them.

So /dev/md0 will be automatically assembled when I boot the system
(cat /proc/mdstat), and I can mount it using UUID?

I couldn't do this on another system, where the software raid(1) is for
the root filesystem, though. But as I understood you, in the case of the
root fs, the above mentioned problem does not occur?

Many Thanks and Best Regards,
Felix
-- 
Felix Natter




Re: md0 + UUIDs for member disks

2023-12-29 Thread Steve McIntyre
fnat...@gmx.net wrote:
>
>I have /dev/md0 mounted at /storage which consists of two HDDs.
>
>Now I would like to add an SSD drive for better performance of
>VMs. Usually, before doing this, I make sure that all of my disks are
>mounted using UUID and not device names. I do not think this is
>the case for the two member HDDs of md0 (cat /proc/mdstat).
>Is there an easy way to fix this?
>
>If I need to reinstall, can I keep the two member HDDs with all the
>data, i.e. does the Debian12 installer recognize the member HDDs
>and will allow me to configure /dev/md0?

The reason behind using UUIDs is that individual disks don't have
persistent names attached: /dev/sda might be /dev/sdb next time, etc.

MD RAID devices *do* include persistent metadata so that the system
can recognise them reliably. You should be fine as you are.

-- 
Steve McIntyre, Cambridge, UK. st...@einval.com
Can't keep my eyes from the circling sky,
Tongue-tied & twisted, Just an earth-bound misfit, I...



Re: md0 + UUIDs for member disks

2023-12-29 Thread Andy Smith
Hi Felix,

On Fri, Dec 29, 2023 at 04:46:10PM +0100, Felix Natter wrote:
> I have /dev/md0 mounted at /storage which consists of two HDDs.
> 
> Now I would like to add an SSD drive for better performance of
> VMs. Usually, before doing this, I make sure that all of my disks are
> mounted using UUID and not device names. I do not think this is
> the case for the two member HDDs of md0 (cat /proc/mdstat).
> Is there an easy way to fix this?

What are you trying to fix? Filesystems have a UUID. MD RAID devices
have a UUID. And MD RAID member devices also have UUIDs, but they are
all different *kinds* of UUID. Only filesystems have a filesystem
UUID that you use in /etc/fstab and other places. You mount things
by filesystem UUID. You don't mount MD RAID member devices; mdadm
assembles them.

RAID assembly normally happens automatically by udev finding block
devices that have the same array id as the one that mdadm is trying
to incrementally assemble, until all their member devices are found.

So what is it that you are actually wanting to do? If you just want
to mount whatever filesystem is on md0 by *filesystem* UUID, you do
that in the normal way as it has nothing to do with MD: you use
blkid or tune2fs or whatever to read the fs label and put that in
/etc/fstab.
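
Reading the filesystem UUID and mounting by it, as described, might look
like this (the UUID and mount point shown are invented examples):

```shell
# Read the *filesystem* UUID of whatever sits on the array:
blkid /dev/md0
# or, for ext2/3/4 specifically:
tune2fs -l /dev/md0 | grep UUID

# Then mount it by that UUID in /etc/fstab, e.g. (UUID invented):
# UUID=0b2d8c1e-1234-5678-9abc-def012345678  /storage  ext4  defaults  0  2
```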

> If I need to reinstall, can I keep the two member HDDs with all the
> data, i.e. does the Debian12 installer recognize the member HDDs
> and will allow me to configure /dev/md0?

Yes. When you come to the partitioning and software RAID section of
the installer your existing md0 should already be there. But to be
absolutely sure you may wish to totally ignore md0 and its member
devices during install as all their data and the metadata on their
member devices will still be there after install. You should just
be able to see the assembled array in /proc/mdstat, and then mount
the filesystem from /etc/fstab. Totally ignoring these devices
during install avoids you making a mistake where you alter one of
them.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



md0 + UUIDs for member disks

2023-12-29 Thread Felix Natter
hello Debian experts,

I have /dev/md0 mounted at /storage which consists of two HDDs.

Now I would like to add an SSD drive for better performance of
VMs. Usually, before doing this, I make sure that all of my disks are
mounted using UUID and not device names. I do not think this is
the case for the two member HDDs of md0 (cat /proc/mdstat).
Is there an easy way to fix this?

If I need to reinstall, can I keep the two member HDDs with all the
data, i.e. does the Debian12 installer recognize the member HDDs
and will allow me to configure /dev/md0?

Many Thanks and Best Regards,
Felix
-- 
Felix Natter




Re: UUIDS

2023-05-28 Thread tomas
On Sun, May 28, 2023 at 12:05:07PM -0400, Stefan Monnier wrote:
> >> IIRC booting with `resume=no` on the kernel's command line worked around
> >> the problem in my case.

[on restore & missing swap partition]

> IIRC the problem comes when it decides that maybe the swap partition
> hasn't shown up yet, so it waits (which in turn calls for a timeout
> system, etc... IOW, extra complexity that's difficult to justify and
> hard to test) :-(
> 
> [ Of course, an option could be to ask the user whether to wait or to just
>   skip the resume, but there might be no screen/keyboard connected or no
>   admin/user at the helm, so it's not a reliable solution either.  ]

Right. Sigh, it seems you just can't win. I solved it by having no swap :)

Cheers
-- 
t




Re: UUIDS

2023-05-28 Thread Stefan Monnier
>> IIRC booting with `resume=no` on the kernel's command line worked around
>> the problem in my case.
>
> Yes, in your case, the system dumped its state into swap and kept
> a remark "where" to get that state back.

Actually, it had not.  But when booting up, it still needs to check
whether or not there is such a dumped state from which to resume.

> Not finding that partition is then cause for much grief (refusing to
> boot _at all_ does seem like one of those really nerdy design
> decisions which possibly isn't helpful to end users...)

IIRC the problem comes when it decides that maybe the swap partition
hasn't shown up yet, so it waits (which in turn calls for a timeout
system, etc... IOW, extra complexity that's difficult to justify and
hard to test) :-(

[ Of course, an option could be to ask the user whether to wait or to just
  skip the resume, but there might be no screen/keyboard connected or no
  admin/user at the helm, so it's not a reliable solution either.  ]


Stefan



Re: UUIDS

2023-05-28 Thread tomas
On Sun, May 28, 2023 at 09:34:58AM -0400, Stefan Monnier wrote:
> >> I'm sure it used to be that you could swap linux discs between PCs and it
> >> would sort itself out but I try swapping disks about and booting and they
> >> complain
> >> "Cannot find UUID..lots of identifying numbers"
> >> and gives initramfs prompt.
> >
> > The system seems to try to mount the root file system, which seems to be
> > specified in the fstab by UUID. It doesn't. This can have several reasons:
> 
> That's quite likely indeed.
> 
> This said, I've had such UUID problems in the early boot (initramfs) that
> involved the swap partition rather than the root partition because the
> code that tries to resume from hibernation looks for some tell-tale sign
> in the resume partition (usually also playing the role of the swap
> partition), and this is done before trying to mount the root partition.

Yes, the boot loader not finding the root partition came also to mind.
I deemed that less likely because, if I understood correctly, the OP
is already in an initramfs shell.

> IIRC booting with `resume=no` on the kernel's command line worked around
> the problem in my case.

Yes, in your case, the system dumped its state into swap and kept
a remark "where" to get that state back. Not finding that partition
is then cause for much grief (refusing to boot _at all_ does seem
like one of those really nerdy design decisions which possibly isn't
helpful to end users...)

Cheers
-- 
t




Re: UUIDS

2023-05-28 Thread Stefan Monnier
>> I'm sure it used to be that you could swap linux discs between PCs and it
>> would sort itself out but I try swapping disks about and booting and they
>> complain
>> "Cannot find UUID..lots of identifying numbers"
>> and gives initramfs prompt.
>
> The system seems to try to mount the root file system, which seems to be
> specified in the fstab by UUID. It doesn't. This can have several reasons:

That's quite likely indeed.

This said, I've had such UUID problems in the early boot (initramfs) that
involved the swap partition rather than the root partition because the
code that tries to resume from hibernation looks for some tell-tale sign
in the resume partition (usually also playing the role of the swap
partition), and this is done before trying to mount the root partition.

IIRC booting with `resume=no` on the kernel's command line worked around
the problem in my case.
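
Passing `resume=no` might be done like this (a sketch; the exact contents
of GRUB_CMDLINE_LINUX_DEFAULT depend on your existing /etc/default/grub):

```shell
# One-off: at the GRUB menu, press 'e', find the line starting with
# 'linux', and append:
#   resume=no

# Permanent (Debian): add it to the default kernel command line in
# /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=no"
# then regenerate grub.cfg:
update-grub
```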


Stefan



Re: UUIDS

2023-05-27 Thread tomas
On Sun, May 28, 2023 at 12:37:20AM +0100, mick.crane wrote:
> I'm sure it used to be that you could swap linux discs between PCs and it
> would sort itself out but I try swapping disks about and booting and they
> complain
> "Cannot find UUID..lots of identifying numbers"
> and gives initramfs prompt.

The system seems to try to mount the root file system, which seems to be
specified in the fstab by UUID. It doesn't. This can have several reasons:

 1) the root fs is somehow broken
 2) some rescue procedure has changed the UUID
 3) perhaps some more

> Am I supposed to be able to sort it out from there?

In theory, yes. Knowledge and practice with initramfs doesn't seem to
be widespread, alas. See here [1]. *If* you know which one is your root
partition, you might try to mount it and exit the initramfs shell for
the boot process to continue.
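
The mount-and-exit recovery just described might look like this at the
(initramfs) prompt (the device name is only an example; pick your real
root partition):

```shell
# At the (initramfs) prompt, assuming /dev/sda2 is the real root
# partition (this device name is purely illustrative):
mount /dev/sda2 /root   # /root is where the initramfs expects the root fs
exit                    # leave the shell and let the boot continue
```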

Alternatively, you might want to boot a rescue system and find out whether
you are in case 1) or 2) above. In case 1), fix the filesystem; in case 2),
boot again with "root=..." and fix your initramfs afterwards.

In case 3) try to gather more info and come back :)

HTH

[1] https://wiki.debian.org/initramfs
-- 
t




Re: UUIDS (addenum)

2023-05-27 Thread DdB
On 28.05.2023 at 01:37, mick.crane wrote:
> (...)
> and gives initramfs prompt.
> Am I supposed to be able to sort it out from there?
> like how?
> mick

oh, the initramfs prompt... that is not the place to fix it easily.
It is much easier to stop while grub is booting: at the menu item you
usually boot into, read the entry ('e'), then get a grub prompt ('c') and
boot the system by hand. Of course, I did play with grub beforehand in
order to be accustomed to doing that in case of emergency. The grub help
command is just barely enough to remind you of what you already know, not more.
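
A hand boot from the grub prompt might look roughly like this (partition
numbers and file paths are examples; use `ls` and tab completion to find
the real ones on your system):

```shell
# At the grub> prompt:
ls                              # list the disks/partitions grub can see
set root=(hd0,gpt2)             # the partition holding /boot (example)
linux /vmlinuz root=/dev/sda2   # kernel plus the real root device (example)
initrd /initrd.img
boot
```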

And you should know that the idea behind UUIDs (Universally Unique
Identifiers) is that they should be unique (at least on one system).
Maybe just removing one of the duplicates is enough to circumvent
the error?




Re: UUIDS

2023-05-27 Thread DdB
On 28.05.2023 at 01:37, mick.crane wrote:
> I'm sure it used to be that you could swap linux discs between PCs and
> it would sort itself out but I try swapping disks about and booting and
> they complain
> "Cannot find UUID..lots of identifying numbers"
> and gives initramfs prompt.
> Am I supposed to be able to sort it out from there?
> like how?
> mick
> 
> 
I am juggling with disks and partitions frequently, and would like you
to be aware that there are different UUIDs involved. One resides in the
ext2/3/4 filesystem itself and can be modified with tune2fs -U (while
the fs is at rest/unmounted).

The other one sits inside the GPT, but you can change that as well
((s)gdisk is your friend).

But all of that is meaningless if you don't find the other end of the
equation (like fstab, or grub scripts) that references the said UUIDs.

But step by step you will get there, just be prepared to reboot several
times and work your way up the chain.
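
The two UUID changes described might be sketched as follows (device and
partition names are examples; run these only on unmounted filesystems):

```shell
# Filesystem UUID (ext2/3/4) -- the filesystem must be unmounted:
tune2fs -U random /dev/sdb1

# Partition GUID inside the GPT (partition 1 in this example;
# 'R' asks sgdisk to generate a random GUID):
sgdisk --partition-guid=1:R /dev/sdb

# Afterwards, update whatever references the old UUIDs:
blkid /dev/sdb1    # read the new filesystem UUID
# ...then edit /etc/fstab and regenerate grub/initramfs as needed.
```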



Re: UUIDS

2023-05-27 Thread David Christensen

On 5/27/23 16:37, mick.crane wrote:
I'm sure it used to be that you could swap linux discs between PCs and 
it would sort itself out 



Yes.  Please add that to my list of reasons for preferring BIOS-MBR booting:

https://www.mail-archive.com/debian-user@lists.debian.org/msg792977.html


but I try swapping disks about and booting and 
they complain

"Cannot find UUID..lots of identifying numbers"
and gives initramfs prompt.
Am I supposed to be able to sort it out from there?
like how?
mick



I had a similar experience yesterday.  The solution was to boot the 
computer into Setup, reset to defaults, reboot into Setup again, disable 
all disks except the one I want, create a boot table entry for that 
disk, save settings, and reboot into Debian.



David



Re: UUIDS

2023-05-27 Thread Felix Miata
mick.crane composed on 2023-05-28 00:37 (UTC+0100):

> I'm sure it used to be that you could swap linux discs between PCs and 
> it would sort itself out but I try swapping disks about and booting and 
> they complain
> "Cannot find UUID..lots of identifying numbers"
> and gives initramfs prompt.
> Am I supposed to be able to sort it out from there?
> like how?

It depends on what can't find what. If a UEFI BIOS is involved, it can be both
easier and harder to deal with than for MBR booting. Lots of buggy UEFI BIOSes
are around, making what should be easier actually harder. A fix could be as
easy as either clearing the NVRAM of existing entries, or booting removable
media to run efibootmgr to clear existing entry(s) and/or add a new one. It's
also possible the ESP partition's UUID is included in the initrd and needs to
be overridden with rd.hostonly=0 and/or rd.auto=1 on Grub's linux line.
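
The efibootmgr steps mentioned might look like this (the entry number,
disk, partition, and loader path are all examples, not values from the
original mail):

```shell
# Inspect the existing NVRAM boot entries:
efibootmgr -v

# Delete a stale entry (entry number 0003 is just an example):
efibootmgr -b 0003 -B

# Create a fresh entry pointing at the loader on the ESP
# (disk, partition number, and loader path are examples):
efibootmgr -c -d /dev/sda -p 1 -L debian -l '\EFI\debian\shimx64.efi'
```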

So tell us, is this with UEFI, MBR, or a mixture? What hardware was in the
source, and what in the destination? How many disks in the destination?
How many operating systems on the disk(s)? Can you boot an installed
system using installation media?

As always, as long as required drivers are included in the initrd, if you know
enough about device names, filesystem LABELs and filesystems involved, booting
can be accomplished manually from a Grub prompt without involving UUIDs, but
from an initramfs prompt it can be harder or impossible.
-- 
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata



Re: UUIDS

2023-05-27 Thread Peter Ehlert



On May 27, 2023 4:37:20 PM PDT, "mick.crane"  wrote:
>I'm sure it used to be that you could swap linux discs between PCs and it 
>would sort itself out
Yes, it still works like that. I do it frequently.
>but I try swapping disks about and booting and they complain
>"Cannot find UUID..lots of identifying numbers"
>and gives initramfs prompt.

My guess is that the system was originally spread out over multiple disk
drives, and fstab references partitions that are no longer at the expected
locations.

>Am I supposed to be able to sort it out from there?
>like how?
Review fstab to see what the references are, comment out (#) the suspect
entries, and try again.
Best of luck!
>mick
>
>



UUIDS

2023-05-27 Thread mick.crane
I'm sure it used to be that you could swap linux discs between PCs and 
it would sort itself out but I try swapping disks about and booting and 
they complain

"Cannot find UUID..lots of identifying numbers"
and gives initramfs prompt.
Am I supposed to be able to sort it out from there?
like how?
mick



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-06 Thread Stefan Monnier
>> Every time you have to reboot, it means your OS has somewhat failed you.
>   i don't think that at all.  remember that each person can
> have different preferences, requirements and expectations.

That's why I wrote "have to".
Of course, if you choose to reboot it, it's not you OS's fault.

> system to begin with but i hate the idea that it is doing
> nothing at all useful just sitting there if i'm not going
> to be using it any time soon.

IIUC in your case you could just as well suspend/hibernate instead of
shutting down and that would solve the "just sitting there" problem
without having to reboot: the reboot is your choice and you're indeed
perfectly entitled to that preference.


Stefan



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-06 Thread David Wright
On Wed 05 Feb 2020 at 22:54:33 (-0500), Michael Stone wrote:
> On Wed, Feb 05, 2020 at 07:33:38PM -0600, David Wright wrote:
> > On Wed 05 Feb 2020 at 15:59:27 (-0500), Michael Stone wrote:
> > > On Wed, Feb 05, 2020 at 01:43:37PM -0600, David Wright wrote:
> > > > On Wed 05 Feb 2020 at 09:00:41 (-0500), Michael Stone wrote:
> > > > > On Tue, Feb 04, 2020 at 07:04:16PM -0500, Stefan Monnier wrote:
> > > > > > While I'm sure this can be managed by explicitly setting UUIDs, I've
> > > > > > found it much more pleasant to manage explicit names (I personally
> > > > > > prefer LVM names over filesystem labels, but filesystem labels work 
> > > > > > well
> > > > > > for those filesystems I don't put under LVM).  Not only I can 
> > > > > > pronounce
> > > > > > them and they carry meaning, but they tend to be much more visible 
> > > > > > (and
> > > > > > hence easier to manipulate).
> > > > >
> > > > > I dislike using names because it's *much* more common to find name
> > > > > collisions than UUID collisions. (E.g., a bunch of disks with
> > > > > filesystems all labeled with easy to remember names like "root" or
> > > > > "home".) Reboot with a couple of those in your system on a
> > > > > label-oriented configuration and you may have a very bad day.
> > > >
> > > > Rather a strawman argument there. There's no reason not to choose
> > > > sensible LABELs, unlike the examples you've given there, which fail
> > > > for at least two reasons: they're unlikely to be unique and they're
> > > > too overloaded with meaning.
> > > 
> > > What does "sensible" mean in this context? On a static system all of
> > > this is a complete waste of time because nothing changes. If you start
> > > upgrading disks, using external drives, moving things around, etc., it
> > > may very well be that the same "sensible" label applies to a
> > > filesystem found on more than one disk.
> > 
> > To make that argument, you have to put scare quotes round sensible
> > because common sense would lead you to use different LABELs for
> > partitions that you intend to distinguish by LABEL. To do otherwise
> > would be like suggesting that two teams who play in red should do
> > so without a change of shirt for one team.
> 
> You seem to just be stuck in this mindset where you think that it's
> somehow weird or unusual to name a root filesystem "root" instead of
> "kumquat" or some other meaningless string.

I opined that it's unwise to LABEL a filesystem "root" if you're using
LABELs as the determining name. It's not weird, and I have no idea
whether it's usual or not. The reason I called it unwise is that it's
limiting. For example, if I make it a habit to call the root
filesystem on this laptop "root", what should I LABEL the other root
filesystem as? Is it sensible to have two fstab files like:

LABEL=root /   ext4 errors=remount-ro 0 1
LABEL=previous /media/previous ext4 errors=remount-ro 0 1
and
LABEL=previous /   ext4 errors=remount-ro 0 1
LABEL=root /media/previous ext4 errors=remount-ro 0 1

That's what I mean about excessive overloading of meanings.

> I don't really have a
> response to that except to note that in my experience your naming
> conventions are an outlier and that it's a waste of time making
> assertions about what people should use as names instead of simply
> acknowledging what people actually use as names.

I have very little knowledge of what people are using as names, if
anything, unless they post them here. That doesn't prevent my having
an opinion, nor anybody else. How I spend my time is up to me.
I thought I had limited my assertions to pointing out that your
arguments appeared to be directed against things *not* stated,
like "labels prevent problems".

> > > Can problems be avoided
> > > through careful attention to detail?
> > 
> > I'd call the LABEL problem's solution blindingly obvious. Ironically,
> > one has to be more careful with *certain* operations when using UUIDs
> > because one is less likely to spot a duplication. I'm thinking of,
> > say, copying partitions.
> 
> Again, asserting that using labels in a fashion that avoids potential
> collisions is "blindlingly obvious" implies that the problems I've
> seen people have with label-based schemes must mean that they're too
> ignorant to do the "obviously" c

Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-06 Thread songbird
Stefan Monnier wrote:
>>> PS: The only problem with LVM names is that Linux doesn't let you
>>> rename a volume group while it's active (at least last time I tried),
>>> which makes it painful to rename the volume group in which lives your
>>> root partition.
>> How painful is it to dd a live cd, boot from it and rename?
>
> Very.  It's called "downtime".
> Every time you have to reboot, it means your OS has somewhat failed you.

  i don't think that at all.  remember that each person can
have different preferences, requirements and expectations.

  i reboot my computer once or twice a day depending upon what
i'm doing.  it takes only a few seconds.

  at the end of the day i turn it off since i don't like 
wasting electricity.  it is a fairly low power consuming
system to begin with but i hate the idea that it is doing
nothing at all useful just sitting there if i'm not going
to be using it any time soon.  since it does shut down and
start up quickly enough it isn't an issue.


  songbird



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread Michael Stone

On Wed, Feb 05, 2020 at 07:33:38PM -0600, David Wright wrote:

On Wed 05 Feb 2020 at 15:59:27 (-0500), Michael Stone wrote:

On Wed, Feb 05, 2020 at 01:43:37PM -0600, David Wright wrote:
> On Wed 05 Feb 2020 at 09:00:41 (-0500), Michael Stone wrote:
> > On Tue, Feb 04, 2020 at 07:04:16PM -0500, Stefan Monnier wrote:
> > > While I'm sure this can be managed by explicitly setting UUIDs, I've
> > > found it much more pleasant to manage explicit names (I personally
> > > prefer LVM names over filesystem labels, but filesystem labels work well
> > > for those filesystems I don't put under LVM).  Not only I can pronounce
> > > them and they carry meaning, but they tend to be much more visible (and
> > > hence easier to manipulate).
> >
> > I dislike using names because it's *much* more common to find name
> > collisions than UUID collisions. (E.g., a bunch of disks with
> > filesystems all labeled with easy to remember names like "root" or
> > "home".) Reboot with a couple of those in your system on a
> > label-oriented configuration and you may have a very bad day.
>
> Rather a strawman argument there. There's no reason not to choose
> sensible LABELs, unlike the examples you've given there, which fail
> for at least two reasons: they're unlikely to be unique and they're
> too overloaded with meaning.

What does "sensible" mean in this context? On a static system all of
this is a complete waste of time because nothing changes. If you start
upgrading disks, using external drives, moving things around, etc., it
may very well be that the same "sensible" label applies to a
filesystem found on more than one disk.


To make that argument, you have to put scare quotes round sensible
because common sense would lead you to use different LABELs for
partitions that you intend to distinguish by LABEL. To do otherwise
would be like suggesting that two teams who play in red should do
so without a change of shirt for one team.


You seem to just be stuck in this mindset where you think that it's 
somehow weird or unusual to name a root filesystem "root" instead of 
"kumquat" or some other meaningless string. I don't really have a 
response to that except to note that in my experience your naming 
conventions are an outlier and that it's a waste of time making 
assertions about what people should use as names instead of simply 
acknowledging what people actually use as names.



Can problems be avoided
through careful attention to detail?


I'd call the LABEL problem's solution blindingly obvious. Ironically,
one has to be more careful with *certain* operations when using UUIDs
because one is less likely to spot a duplication. I'm thinking of,
say, copying partitions.


Again, asserting that using labels in a fashion that avoids potential 
collisions is "blindingly obvious" implies that the problems I've seen 
people have with label-based schemes must mean that they're too ignorant 
to do the "obviously" correct thing. That seems presumptuous at least.


OTOH, following best practices when copying raw partitions (including 
changing the UUID) seems to be something to be glossed over (presumably 
because only changing the label in that exact case is "obvious"?)



Stefan and I were posting why we like names. The uniqueness of UUIDs
is a given. (Ironically, it's in the name.)


Perhaps (although the fact that they won't be unique if you copy the raw 
filesystem is a recurring theme) but the argument at the top of the 
thread seems to be that they somehow change unexpectedly and can't be 
relied upon--which would seem to be an even more serious problem (if it 
existed).



And it's a different
argument from the one you appeared to make, 


Yes, I argued against the proposition that actually started the thread, 
which seemed to be that UUIDs are somehow unreliable--in particular,
as compared to labels. For some reason talking about why that isn't 
actually the case makes you start talking about straw men, as though you 
either didn't read or couldn't remember posts from a couple of days ago.



which was that you dislike
using names because other people (presumably) misuse them. 


And you've explained that anybody who uses them other than in the 
(extremely idiosyncratic) manner you prescribe is just doing them wrong 
and it's their own fault if there are problems because they must be dumb. 
I'm glad we've managed to so succinctly summarize each other's position.


If I were to summarize my own position it would simply be that I 
encourage people to be aware of potential issues that arise when labels 
collide, and that UUIDs are a (not unreliable) alternative that may work 
better in some circumstances.



OK, perhaps
you run a helpdesk.


The closest I come to that is this mailing list.


Just because you personally use a feature in a particular way doesn't 
mean everyone else does. 

Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread David Wright
On Wed 05 Feb 2020 at 16:47:13 (-0500), Greg Wooledge wrote:
> On Wed, Feb 05, 2020 at 01:43:37PM -0600, David Wright wrote:
> > I don't suppose either of us will meet a UUID collision in our
> > lifetimes, and it's obviously a sensible scheme to use where there
> > are large numbers of commoditised objects to name.
> 
> Usually a UUID collision is a result of a subtle mistake, like cloning
> a disk and then trying to mount a file system by UUID while the clone
> is still attached.  At least, that's the first scenario I can think of.

There are versions of UUIDs that aren't quite what they seem;
IOW there are predictable ones. There are means of placing strings
into positions where UUIDs are expected, eg tune2fs -U. There's a
vanishingly small probability that a human will spot a deliberately
altered UUID. My assumption in writing the above was that we are
honest brokers, generating UUIDs in a random manner.

In the absence of a RNG of any quality whatsoever, I think the
cryptographic vulnerability of the system will exceed the likelihood
of UUID collisions occurring. I have no information to back that up :)

https://lists.debian.org/debian-user/2020/02/msg5.html

Cheers,
David.
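David's point that strings can be placed where UUIDs are expected (e.g. tune2fs -U) is easy to demonstrate without touching a real disk. The sketch below works on an image file; the file name and the deliberately chosen, human-spottable UUID are arbitrary examples, not taken from any post:

```shell
# A filesystem UUID is just a field in the superblock, settable like a label.
dd if=/dev/zero of=fs.img bs=1M count=4 2>/dev/null
mke2fs -q -F fs.img                                   # -F: operate on a regular file
tune2fs -U 12345678-1234-1234-1234-123456789abc fs.img  # set a chosen, non-random UUID
blkid -s UUID -o value fs.img                         # report the UUID we just set
```

tune2fs -U also accepts the keywords random, time, and clear, for generating rather than dictating the value.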



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread David Wright
On Wed 05 Feb 2020 at 15:59:27 (-0500), Michael Stone wrote:
> On Wed, Feb 05, 2020 at 01:43:37PM -0600, David Wright wrote:
> > On Wed 05 Feb 2020 at 09:00:41 (-0500), Michael Stone wrote:
> > > On Tue, Feb 04, 2020 at 07:04:16PM -0500, Stefan Monnier wrote:
> > > > While I'm sure this can be managed by explicitly setting UUIDs, I've
> > > > found it much more pleasant to manage explicit names (I personally
> > > > prefer LVM names over filesystem labels, but filesystem labels work well
> > > > for those filesystems I don't put under LVM).  Not only I can pronounce
> > > > them and they carry meaning, but they tend to be much more visible (and
> > > > hence easier to manipulate).
> > > 
> > > I dislike using names because it's *much* more common to find name
> > > collisions than UUID collisions. (E.g., a bunch of disks with
> > > filesystems all labeled with easy to remember names like "root" or
> > > "home".) Reboot with a couple of those in your system on a
> > > label-oriented configuration and you may have a very bad day.
> > 
> > Rather a strawman argument there. There's no reason not to choose
> > sensible LABELs, unlike the examples you've given there, which fail
> > for at least two reasons: they're unlikely to be unique and they're
> > too overloaded with meaning.
> 
> What does "sensible" mean in this context? On a static system all of
> this is a complete waste of time because nothing changes. If you start
> upgrading disks, using external drives, moving things around, etc., it
> may very well be that the same "sensible" label applies to a
> filesystem found on more than one disk.

To make that argument, you have to put scare quotes round sensible
because common sense would lead you to use different LABELs for
partitions that you intend to distinguish by LABEL. To do otherwise
would be like suggesting that two teams who play in red should do
so without a change of shirt for one team.

> Can problems be avoided
> through careful attention to detail?

I'd call the LABEL problem's solution blindingly obvious. Ironically,
one has to be more careful with *certain* operations when using UUIDs
because one is less likely to spot a duplication. I'm thinking of,
say, copying partitions.

> Sure--but the same is true of
> many other systems. I submit that labels are not inherently "better",
> but merely a matter of preference.

I haven't said one is better than the other, but have just pointed out
why I like LABELs, which you said you dislike.

> If you like them, great! Have fun!
> If giving every thumbdrive its own label and mountpoint makes you
> happy, go for it. I generally expect thumbdrives to be consumables,
> mount them at generic points, and call it a day. YMMV.

And I supported you there, but you snipped it; I wrote of UUIDs:

"it's obviously a sensible scheme to use where there are
large numbers of commoditised objects to name."

> And FWIW, those aren't "strawman arguments", those are based on things
> I've found in the real world.

Stefan and I were posting why we like names. The uniqueness of UUIDs
is a given. (Ironically, it's in the name.) And it's a different
argument from the one you appeared to make, which was that you dislike
using names because other people (presumably) misuse them. OK, perhaps
you run a helpdesk.

> Just because you personally use a
> feature in a particular way doesn't mean everyone else does. Sometimes
> a standard install of an OS can default to labeling schemes that cause
> conflicts if you put the drive from one machine into another
> machine--so this really isn't something I'm making up out of thin air
> that never happens in real life.

IIRC you're describing Debian in the early days, when partitions were
configured only by their kernel names, rather like some people prefer
their network interfaces to be named.

> > > LVM is
> > > more resistant to that as long as you keep the vg names unique. (Call
> > > everything vg0 and you're back to having a bad day.)
> > 
> > It seems that saying "keep the vg names unique" is not very different
> > from saying to keep filesystem LABELs unique.
> 
> It's pretty much exactly the same. If you have multiple logical
> entities addressed by the same name, you might not get the entity you
> expected to get. This isn't always obvious to someone starting out who
> reads something like "labels prevent problems" and hasn't yet run into
> cases where that isn't true and thus hasn't adopted strategies to
> avoid those problems.

That's another strawman argument: I haven't suggested that, you bring
it up, then knock it down.

Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread 0...@caiway.net
On Tue, 04 Feb 2020 22:19:25 -0500
Stefan Monnier  wrote:
> > How painful is it to dd a live cd, boot from it and rename?
> 
> Very.  It's called "downtime".
> Every time you have to reboot, it means your OS has somewhat failed
> you.
> 
> 
> Stefan
> 

You are absolutely right!



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread Stefan Monnier
> Usually a UUID collision is a result of a subtle mistake, like cloning
> a disk and then trying to mount a file system by UUID while the clone
> is still attached.  At least, that's the first scenario I can think of.

I wouldn't call it a "subtle mistake".  Instead it's what *always*
happens when I make a clone: I always mount it right afterwards to
perform some checks and/or tweaks, and I never have any reason to
unmount the original before doing that.

Of course, that doesn't bother me: the clone's LV has a different name,
so whether the UUID is the same or not doesn't affect it.


Stefan



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread Greg Wooledge
On Wed, Feb 05, 2020 at 01:43:37PM -0600, David Wright wrote:
> I don't suppose either of us will meet a UUID collision in our
> lifetimes, and it's obviously a sensible scheme to use where there
> are large numbers of commoditised objects to name.

Usually a UUID collision is a result of a subtle mistake, like cloning
a disk and then trying to mount a file system by UUID while the clone
is still attached.  At least, that's the first scenario I can think of.
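Greg's cloning scenario can be reproduced safely against image files rather than real disks. In this sketch all file names and labels are invented for illustration; it clones a small ext2 image and then gives the copy a fresh UUID and a distinct label:

```shell
# Reproduce the clone/UUID-collision scenario on regular files.
dd if=/dev/zero of=orig.img bs=1M count=4 2>/dev/null
mke2fs -q -F -L root orig.img        # a filesystem labelled "root"
cp orig.img clone.img                # the "clone": identical UUID and label

blkid -s UUID -o value orig.img      # same UUID printed twice -- the collision
blkid -s UUID -o value clone.img

tune2fs -U random clone.img          # give the clone its own UUID...
e2label clone.img root-clone         # ...and a distinct label
```

The same two commands are what you would run on a real cloned partition before mounting both sides of the clone at once.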



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread Michael Stone

On Wed, Feb 05, 2020 at 01:43:37PM -0600, David Wright wrote:

On Wed 05 Feb 2020 at 09:00:41 (-0500), Michael Stone wrote:

On Tue, Feb 04, 2020 at 07:04:16PM -0500, Stefan Monnier wrote:
> While I'm sure this can be managed by explicitly setting UUIDs, I've
> found it much more pleasant to manage explicit names (I personally
> prefer LVM names over filesystem labels, but filesystem labels work well
> for those filesystems I don't put under LVM).  Not only I can pronounce
> them and they carry meaning, but they tend to be much more visible (and
> hence easier to manipulate).

I dislike using names because it's *much* more common to find name
collisions than UUID collisions. (E.g., a bunch of disks with
filesystems all labeled with easy to remember names like "root" or
"home".) Reboot with a couple of those in your system on a
label-oriented configuration and you may have a very bad day.


Rather a strawman argument there. There's no reason not to choose
sensible LABELs, unlike the examples you've given there, which fail
for at least two reasons: they're unlikely to be unique and they're
too overloaded with meaning.


What does "sensible" mean in this context? On a static system all of 
this is a complete waste of time because nothing changes. If you start 
upgrading disks, using external drives, moving things around, etc., it 
may very well be that the same "sensible" label applies to a filesystem 
found on more than one disk. Can problems be avoided through careful 
attention to detail? Sure--but the same is true of many other systems. I 
submit that labels are not inherently "better", but merely a matter of 
preference. If you like them, great! Have fun! If giving every 
thumbdrive its own label and mountpoint makes you happy, go for it. I 
generally expect thumbdrives to be consumables, mount them at generic 
points, and call it a day. YMMV.


And FWIW, those aren't "strawman arguments", those are based on things 
I've found in the real world. Just because you personally use a feature 
in a particular way doesn't mean everyone else does. Sometimes a 
standard install of an OS can default to labeling schemes that cause 
conflicts if you put the drive from one machine into another machine--so 
this really isn't something I'm making up out of thin air that never 
happens in real life.



LVM is
more resistant to that as long as you keep the vg names unique. (Call
everything vg0 and you're back to having a bad day.)


It seems that saying "keep the vg names unique" is not very different
from saying to keep filesystem LABELs unique.


It's pretty much exactly the same. If you have multiple logical entities 
addressed by the same name, you might not get the entity you expected to 
get. This isn't always obvious to someone starting out who reads 
something like "labels prevent problems" and hasn't yet run into cases 
where that isn't true and thus hasn't adopted strategies to avoid those 
problems.




Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread David Wright
On Wed 05 Feb 2020 at 09:00:41 (-0500), Michael Stone wrote:
> On Tue, Feb 04, 2020 at 07:04:16PM -0500, Stefan Monnier wrote:
> > > > Me too, so I usually label the permanent stuff at least. UUID's can and
> > > > will change for no detectable reason.
> > > For those reading along or finding this in search results: no, filesystem
> > > UUIDs don't change for no detectable reason. Don't implement anything 
> > > based
> > > on this theory.
> > 
> > What he meant is that filesystem UUIDs are (re)created automatically
> > based on a heuristic of what it means for a filesystem to be "the same".
> 
> You understand that he didn't actually say that, right? This seems
> like your own personal bugaboo instead.

Yes, what he wrote is elaborated in postings like
https://lists.debian.org/debian-user/2018/01/msg00787.html
https://lists.debian.org/debian-user/2018/01/msg00791.html

> > While I'm sure this can be managed by explicitly setting UUIDs, I've
> > found it much more pleasant to manage explicit names (I personally
> > prefer LVM names over filesystem labels, but filesystem labels work well
> > for those filesystems I don't put under LVM).  Not only I can pronounce
> > them and they carry meaning, but they tend to be much more visible (and
> > hence easier to manipulate).
> 
> I dislike using names because it's *much* more common to find name
> collisions than UUID collisions. (E.g., a bunch of disks with
> filesystems all labeled with easy to remember names like "root" or
> "home".) Reboot with a couple of those in your system on a
> label-oriented configuration and you may have a very bad day.

Rather a strawman argument there. There's no reason not to choose
sensible LABELs, unlike the examples you've given there, which fail
for at least two reasons: they're unlikely to be unique and they're
too overloaded with meaning.

I don't suppose either of us will meet a UUID collision in our
lifetimes, and it's obviously a sensible scheme to use where there
are large numbers of commoditised objects to name. Look at cars.
For practical purposes, a VIN is a (U)UID: great for computers,
but totally impractical for human use, hence licence plates,
whose numbers are chosen to be unique only within a jurisdiction.

My jurisdiction is Home, so every host gets a short memorable
name, and each disk, stick and card does too. Their filesystem
LABELs relate to those names. All are marked with those names,
and I couldn't manage that with UUIDs. None of the names carry
any functional meaning, because functions necessarily change.

Other people will choose LABELs in different ways, but the fact
that some people will choose unwisely does not condemn the method.

> LVM is
> more resistant to that as long as you keep the vg names unique. (Call
> everything vg0 and you're back to having a bad day.)

It seems that saying "keep the vg names unique" is not very different
from saying to keep filesystem LABELs unique.

> On Tue, Feb 04, 2020 at 10:19:25PM -0500, Stefan Monnier wrote:
> > > > PS: The only problem with LVM names is that Linux doesn't let you
> > > > rename a volume group while it's active (at least last time I tried),
> > > > which makes it painful to rename the volume group in which lives your
> > > > root partition.
> > > How painful is it to dd a live cd, boot from it and rename?
> > 
> > Very.  It's called "downtime".
> > Every time you have to reboot, it means your OS has somewhat failed you.
> 
> I'm trying to think of a reason to rename the root partition that
> doesn't involve downtime and coming up empty.

I don't know what the implications of LVM are because I've not used
them. Consequently I don't know why Stefan wrote 'The root partition
is always called "root"', or whether "called" even refers here to a
filesystem LABEL or something else. Some of the posts seem to conflate
the terms partition, VG, LV and LVM.

Cheers,
David.



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread Stefan Monnier
>>What he meant is that filesystem UUIDs are (re)created automatically
>>based on a heuristic of what it means for a filesystem to be "the same".
> You understand that he didn't actually say that, right? This seems like your
> own personal bugaboo instead.

Definitely.

> I dislike using names because it's *much* more common to find name
> collisions than UUID collisions. (E.g., a bunch of disks with 
> filesystems all labeled with easy to remember names like "root" or "home".)

Yes, that happens frequently with filesystem labels, which is why I much
prefer LVM names.

> LVM is more resistant to that as long as you keep the vg names unique.

Indeed, my VG names are always unique and descriptive.

> I'm trying to think of a reason to rename the root partition that doesn't
> involve downtime and coming up empty. 

The root partition is always called "root", which is why I was talking
about renaming the VGs rather than the LVs.

Yes, it usually involves downtime for some other related reason, but
every time I've had to change a VG's name it would have been *much* more
convenient to do it while the system is up (e.g. would have shortened
the downtime and the administration time noticeably).

Renaming the root VG is really the only case where I find LVM names to
be a point of friction.


Stefan



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread Michael Stone

On Tue, Feb 04, 2020 at 07:04:16PM -0500, Stefan Monnier wrote:

Me too, so I usually label the permanent stuff at least. UUID's can and
will change for no detectable reason.

For those reading along or finding this in search results: no, filesystem
UUIDs don't change for no detectable reason. Don't implement anything based
on this theory.


What he meant is that filesystem UUIDs are (re)created automatically
based on a heuristic of what it means for a filesystem to be "the same".


You understand that he didn't actually say that, right? This seems like 
your own personal bugaboo instead. 


While I'm sure this can be managed by explicitly setting UUIDs, I've
found it much more pleasant to manage explicit names (I personally
prefer LVM names over filesystem labels, but filesystem labels work well
for those filesystems I don't put under LVM).  Not only I can pronounce
them and they carry meaning, but they tend to be much more visible (and
hence easier to manipulate).


I dislike using names because it's *much* more common to find name 
collisions than UUID collisions. (E.g., a bunch of disks with 
filesystems all labeled with easy to remember names like "root" or 
"home".) Reboot with a couple of those in your system on a 
label-oriented configuration and you may have a very bad day. LVM is 
more resistant to that as long as you keep the vg names unique. (Call 
everything vg0 and you're back to having a bad day.)


On Tue, Feb 04, 2020 at 10:19:25PM -0500, Stefan Monnier wrote:

PS: The only problem with LVM names is that Linux doesn't let you
rename a volume group while it's active (at least last time I tried),
which makes it painful to rename the volume group in which lives your
root partition.

How painful is it to dd a live cd, boot from it and rename?


Very.  It's called "downtime".
Every time you have to reboot, it means your OS has somewhat failed you.


I'm trying to think of a reason to rename the root partition that 
doesn't involve downtime and coming up empty. 



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-05 Thread Curt
>>> Me too, so I usually label the permanent stuff at least. UUID's can and
>>> will change for no detectable reason.
>> For those reading along or finding this in search results: no, filesystem
>> UUIDs don't change for no detectable reason. Don't implement anything based
>> on this theory.
>
> What he meant is that filesystem UUIDs are (re)created automatically

Your "analysis" appears to be more like what you want to mean than
what Gene was meaning.

> based on a heuristic of what it means for a filesystem to be "the same".

> This heuristic can be wrong in both directions: sometimes you delete and
> create a new filesystem which is supposed to "be" the same filesystem as

This seems like an eminently detectable reason for a UUID change to me.

-- 
"J'ai pour me guérir du jugement des autres toute la distance qui me sépare de
moi." Antonin Artaud




Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-04 Thread Stefan Monnier
>> PS: The only problem with LVM names is that Linux doesn't let you
>> rename a volume group while it's active (at least last time I tried),
>> which makes it painful to rename the volume group in which lives your
>> root partition.
> How painful is it to dd a live cd, boot from it and rename?

Very.  It's called "downtime".
Every time you have to reboot, it means your OS has somewhat failed you.


Stefan



Re: Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-04 Thread 0...@caiway.net


> PS: The only problem with LVM names is that Linux doesn't let you
> rename a volume group while it's active (at least last time I tried),
> which makes it painful to rename the volume group in which lives your
> root partition.
> 

How painful is it to dd a live cd, boot from it and rename?
3 minutes of pain, if that is what you want to call it?



Why I don't like UUIDs (Re: can't mount sdf1 in stretch, gparted claims its fat32)

2020-02-04 Thread Stefan Monnier
>> Me too, so I usually label the permanent stuff at least. UUID's can and
>> will change for no detectable reason.
> For those reading along or finding this in search results: no, filesystem
> UUIDs don't change for no detectable reason. Don't implement anything based
> on this theory.

What he meant is that filesystem UUIDs are (re)created automatically
based on a heuristic of what it means for a filesystem to be "the same".

This heuristic can be wrong in both directions: sometimes you delete and
create a new filesystem which is supposed to "be" the same filesystem as
before (but gets a new UUID), and other times you copy a filesystem so
as to get "another one" (but it keeps its UUID).

While I'm sure this can be managed by explicitly setting UUIDs, I've
found it much more pleasant to manage explicit names (I personally
prefer LVM names over filesystem labels, but filesystem labels work well
for those filesystems I don't put under LVM).  Not only I can pronounce
them and they carry meaning, but they tend to be much more visible (and
hence easier to manipulate).


Stefan


PS: The only problem with LVM names is that Linux doesn't let you rename
a volume group while it's active (at least last time I tried), which
makes it painful to rename the volume group in which lives your root partition.



Re: Banishing UUIDs from grub

2018-01-19 Thread Felix Miata
Michael Stone composed on 2018-01-19 08:57 (UTC-0500):
...
> It's  also possible to use filesystem labels, but in practice it turned out 
> to be not uncommon for two different systems to have something like "root", 
> which caused a lot of trouble when you put a drive from one system into 
> another system. You could give the labels unique random names, but then 
> you've (re)invented the UUID.

I create non-random labels with a bit more complexity, e.g.:

p01jessie
p02home
p05usrlcl
p06stretch
p07buster

More complexity, though still human manageable, would be including a few of the
last characters of the physical device's serial number and/or model number, or
all or a portion of the hostname:

st5p08debsid-Z1W

If you assign a label to a swap partition, then mount swap by label, it can be a
little easier to deal with the installer's proclivity to reformat swap when a
new installation is added.
-- 
"Wisdom is supreme; therefore get wisdom. Whatever else you
get, get wisdom." Proverbs 4:7 (New Living Translation)

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata  ***  http://fm.no-ip.com/
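Felix's swap-by-label suggestion can be tried on a scratch file before committing a real partition to it. The file name and label below are arbitrary examples:

```shell
# Label a swap area so fstab can reference it by LABEL= rather than by a
# UUID that a reinstaller's mkswap run would regenerate.
dd if=/dev/zero of=swap.img bs=1M count=1 2>/dev/null
mkswap -L scratchswap swap.img       # write the swap signature with a label
blkid -s LABEL -o value swap.img     # confirm the label is visible
```

With the label in place, an fstab line like `LABEL=scratchswap none swap sw 0 0` keeps working even after a new installation reformats the swap area and changes its UUID.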



Re: Banishing UUIDs from grub

2018-01-19 Thread Michael Stone
[not responding to the OP, I think he's already gotten an answer. this 
is for people reading the archive.]


The filesystem UUID is written into the filesystem when it is created. 
It's possible (though not necessarily easy) to change using tune2fs and 
other specialized filesystem tools. It does not "just change". It is 
also possible to write a label, or name, on the filesystem. It is 
written into the filesystem right next to the UUID. There are a couple 
of exceptions: FAT filesystems don't support a true UUID, for example. 
There are also partition UUIDs, the details for which depend on the 
partition table type. So why use UUIDs? In the old days we just used 
drive letters to identify filesystem locations, something like sda1 or 
hda1, and these rarely changed because they were associated with disk 
controllers with static cabling. (Although it was a bit of a pain to add 
or remove a controller.) On a modern system, though, it's actually not 
at all uncommon for drive letter assignments to change depending on 
what's plugged in at boot time, because of USB and other dynamic bus 
attachments. In practice, using UUIDs for filesystem assignments has 
been more reliable than relying on drive letters for quite a while. It's 
also possible to use filesystem labels, but in practice it turned out to 
be not uncommon for two different systems to have something like "root", 
which caused a lot of trouble when you put a drive from one system into 
another system. You could give the labels unique random names, but then 
you've (re)invented the UUID.


I understand that some people are saying that "something happened" and 
years ago they had a UUID change, but that's not really much to help 
diagnose a problem. In general it would be bad advice for people to 
assume there's a real problem and abandon UUIDs based on hearsay. If 
someone has a current case where they think a UUID is changing it would 
be useful to bring it up here so we could see exactly which ID they're 
talking about and try to diagnose what's going on. (It's probably not the 
case that the filesystem UUID is just randomly changing.)


For the curious, you can run "blkid" to see all of the IDs associated 
with your filesystems.


Mike Stone
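For reference, the three addressing schemes Mike describes look like this side by side in /etc/fstab. The device names, UUID, and label below are invented examples, not values from any post:

```text
# by kernel device name -- stable only while cabling and probe order are stable
/dev/sda1       /boot   ext4  defaults           0  2
# by filesystem UUID -- survives device renaming, but hard to eyeball
UUID=2f1a9d4e-0c3b-4d8e-9a61-7b5c2e8f0a13  /  ext4  errors=remount-ro  0  1
# by filesystem label -- readable, but keeping labels unique is on you
LABEL=home      /home   ext4  defaults           0  2
```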



Re: Banishing UUIDs from grub

2018-01-19 Thread Dave Sherohman
On Thu, Jan 18, 2018 at 06:42:41PM +0100, deloptes wrote:
> Dave Sherohman wrote:
> > What is the recommended method for preventing grub from using UUIDs to
> > refer to filesystems in the current Debian stable distribution?
> 
> what is the reason to avoid UUIDs? (if not very private)

The specific system where this is an issue is a backing image used as
the master for eight cloned developer virtual machines, so that each of
our devs can have their own private space to work in which mimics the
production server and can easily be reset to a pristine state by
throwing out the clone and making a new one.  In the cloning process,
the disk and LVM volume UUIDs are discarded and new UUIDs are generated
(which makes sense, because it's no longer the same disk or same
volume), but grub is still looking for the original UUIDs and drops into
a recovery shell when it fails to find them.

At this point, I have the rest of the process automated.  The one step
that still requires manual intervention is walking grub through the
first boot of the system and telling it to boot from (lvm/system)
instead of (lvm/[vg UUID]/[volume UUID]).

-- 
Dave Sherohman



Re: Banishing UUIDs from grub

2018-01-19 Thread Dave Sherohman
On Thu, Jan 18, 2018 at 11:52:11AM -0500, Marc Auslander wrote:
> Dave Sherohman <d...@sherohman.org> writes:
> 
> >What is the recommended method for preventing grub from using UUIDs to
> >refer to filesystems in the current Debian stable distribution?
> 
> I don't know about "recommended" but could you put your own menu
> entry into /etc/grub.d and make it the default?

Short and sweet.  Why didn't I think of that?

Thanks!

-- 
Dave Sherohman
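Marc's suggestion can be sketched as a hand-written file in /etc/grub.d. Everything below (the entry title, VG/LV names, and kernel paths) is a hypothetical example to adapt to the actual layout, not Dave's configuration:

```text
#!/bin/sh
exec tail -n +3 $0
# /etc/grub.d/40_custom -- an entry that names the device directly instead of
# searching for it by UUID, so it keeps booting after the clone gets new UUIDs.
menuentry 'Debian (no UUID search)' {
    insmod lvm
    set root=(lvm/system-root)
    linux /boot/vmlinuz root=/dev/mapper/system-root ro
    initrd /boot/initrd.img
}
```

Make it the default with GRUB_DEFAULT in /etc/default/grub; setting GRUB_DISABLE_LINUX_UUID=true there also makes grub-mkconfig emit root= by device name rather than root=UUID= in the generated entries.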



Re: Banishing UUIDs from grub

2018-01-18 Thread Gene Heskett
On Thursday 18 January 2018 20:25:51 David Wright wrote:

> On Thu 18 Jan 2018 at 14:46:26 (-0500), Gene Heskett wrote:
> > On Thursday 18 January 2018 14:22:13 Pascal Hambourg wrote:
> > > Le 18/01/2018 à 19:54, Gene Heskett a écrit :
> > > > UUID's have turned out to be quite volatile over system
> > > > upgrades.
> > >
> > > Not on mine.
> > >
> > > > Give me a familiar disklabel any day.
> > >
> > > Don't you mean a filesystem or partition label ?
> > > "Disklabel" is a synonym for "partition table".
> >
> > Yes of course. I have had the UUID change on this system, on my
> > amandatapes drive at almost every install or upgrade. I finally
> > labeled that partition as amandatapes about 5 years back. No further
> > problems. It's been the same 1T Seagate disk since they came out, and
> > this one came out of the box with 25 re-allocated sectors. I updated
> > its firmware a week later, UUID changed. Labeled the partition, and
> > nearly 70 thousand spinning hours later it still shows 25
> > re-allocated sectors. I am both amazed and pleased as punch. It
> > hovers in the 80% usage range, as a vtape is actually a directory,
> > there are 30 of them, 1, very occasionally 2 of them gets cleaned
> > out and reused every night.
>
> What sort of filesystem does this partition hold?
>
Std ext4.

> Cheers,
> David.


Cheers, Gene Heskett
The above content, added by Maurice E. Heskett, is Copyright 2018 by 
Maurice E. Heskett.
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 



Re: Banishing UUIDs from grub

2018-01-18 Thread Stefan Monnier
>> What is the recommended method for preventing grub from using UUIDs to
>> refer to filesystems in the current Debian stable distribution?
> One method for you use case it to put /boot or at least /boot/grub
> in a plain partition on the same disk as GRUB's core image.

Indeed, that's what I have here and it works fine when the UUID is wrong
(as long as that wrong UUID doesn't map to any accessible partition),
because the "search --set=root" is not needed anyway.


Stefan



Re: Banishing UUIDs from grub

2018-01-18 Thread Michael Stone

On Thu, Jan 18, 2018 at 08:50:11PM -0500, Cindy-Sue Causey wrote:

I've had it happen, too. Feels like recently, but was probably a
couple years ago, if not more like 3. I could never figure out how it
happened.


vfat filesystem?

Mike Stone



Re: Banishing UUIDs from grub

2018-01-18 Thread Cindy-Sue Causey
On 1/18/18, Gene Heskett  wrote:
> On Thursday 18 January 2018 16:04:26 Don Armstrong wrote:
>
>> On Thu, 18 Jan 2018, Gene Heskett wrote:
>> > On Thursday 18 January 2018 14:22:13 Pascal Hambourg wrote:
>> > > Le 18/01/2018 à 19:54, Gene Heskett a écrit :
>> > > > UUIDs have turned out to be quite volatile over system
>> > > > upgrades.
>> > >
>> > > Not on mine.
>> >
>> > I have had the UUID change on this system, on my
>> > amandatapes drive at almost every install or upgrade.
>>
>> Which UUID changed? The filesystem UUID shouldn't change unless you
>> reformat the partition, and the partition UUID shouldn't change unless
>> you repartition it (or you specifically change the UUID).
>>
> In this case, I installed a fresh firmware image on the drive, didn't
> lose a byte, but the UUID was changed, discovered when amanda couldn't
> find its virtual tapes drive after the reboot. I checked the fstab, then
> ran blkid, and it had changed. So I applied a label to the partition,
> edited fstab, and it's been 69,600 or so spinning hours since. I've
> updated the system from ubuntu hardy to debian wheezy, no change.


I've had it happen, too. Feels like recently, but was probably a
couple years ago, if not more like 3. I could never figure out how it
happened.

From one reboot to the next, the value or values changed, but I had
not *knowingly* done anything that should have changed it. By
"knowingly", I mean via gparted which is the only place I've ever
changed those manually. I've probably only done *that* maybe twice in
five or six years, probably just to have done it to know how it would
affect things.

/etc/fstab was my go-to to see where things didn't match. Had a copy
floating around just to remind myself whenever I stumbled back over it
occasionally. I never thought to check grub.cfg before getting
everything to match back up properly at the time.

It hasn't happened since, and I've messed all kinds of other things up
in the meantime... like killing my sound (again) about 2 days ago
while trying to roll my own of something that I've already forgotten
what it was. I think it was something about "that" dialup modem. :D

Cindy :)
-- 
Cindy-Sue Causey
Talking Rock, Pickens County, Georgia, USA

* runs with duct tape *



Re: Banishing UUIDs from grub

2018-01-18 Thread David Wright
On Thu 18 Jan 2018 at 14:46:26 (-0500), Gene Heskett wrote:
> On Thursday 18 January 2018 14:22:13 Pascal Hambourg wrote:
> 
> > Le 18/01/2018 à 19:54, Gene Heskett a écrit :
> > > UUIDs have turned out to be quite volatile over system upgrades.
> >
> > Not on mine.
> >
> > > Give me a familiar disklabel any day.
> >
> > Don't you mean a filesystem or partition label ?
> > "Disklabel" is a synonym for "partition table".
> 
> Yes of course. I have had the UUID change on this system, on my 
> amandatapes drive at almost every install or upgrade. I finally labeled 
> that partition as amandatapes about 5 years back. No further problems. 
> It's been the same 1T Seagate disk since they came out, and this one came 
> out of the box with 25 re-allocated sectors. I updated its firmware a 
> week later, UUID changed. Labeled the partition, and nearly 70 thousand 
> spinning hours later it still shows 25 re-allocated sectors. I am both 
> amazed and pleased as punch. It hovers in the 80% usage range, as a 
> vtape is actually a directory, there are 30 of them, 1, very 
> occasionally 2 of them gets cleaned out and reused every night.

What sort of filesystem does this partition hold?

Cheers,
David.



Re: Banishing UUIDs from grub

2018-01-18 Thread Don Armstrong
On Thu, 18 Jan 2018, Gene Heskett wrote:
> On Thursday 18 January 2018 16:04:26 Don Armstrong wrote:
> > Which UUID changed? The filesystem UUID shouldn't change unless you
> > reformat the partition, and the partition UUID shouldn't change
> > unless you repartition it (or you specifically change the UUID).
> >
> In this case, I installed a fresh firmware image on the drive, didn't
> lose a byte, but the UUID was changed, discovered when amanda couldn't
> find its virtual tapes drive after the reboot. I checked the fstab,
> then ran blkid, and it had changed.

I'm still not clear which UUID was changed. Even if you change the
firmware image,[1] the filesystem UUID should not change, because that
would require changing the filesystem.

[Pretty much anything which changes the partition UUID can also change
the partition label, though perhaps whatever did the update doesn't
change the label.]

But in any event, LABEL works just as well.
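
A minimal sketch of the LABEL approach, assuming an ext2/3/4 filesystem; the
device name, label, and mount point below are made-up examples, not taken
from anyone's actual setup:

```shell
# Apply a label once (as root, on the real partition), e.g.:
#   e2label /dev/sdb1 amandatapes     # or: tune2fs -L amandatapes /dev/sdb1
# then mount by label in /etc/fstab instead of by UUID:
LABEL=amandatapes  /amandatapes  ext4  defaults  0  2
```

blkid will report both the LABEL and the UUID for the filesystem, so either
can be checked against what fstab expects.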

-- 
Don Armstrong  https://www.donarmstrong.com

Identical parts aren't.
 -- Beach's Law



Re: Banishing UUIDs from grub

2018-01-18 Thread Gene Heskett
On Thursday 18 January 2018 16:04:26 Don Armstrong wrote:

> On Thu, 18 Jan 2018, Gene Heskett wrote:
> > On Thursday 18 January 2018 14:22:13 Pascal Hambourg wrote:
> > > Le 18/01/2018 à 19:54, Gene Heskett a écrit :
>> > > > UUIDs have turned out to be quite volatile over system
> > > > upgrades.
> > >
> > > Not on mine.
> >
> > I have had the UUID change on this system, on my
> > amandatapes drive at almost every install or upgrade.
>
> Which UUID changed? The filesystem UUID shouldn't change unless you
> reformat the partition, and the partition UUID shouldn't change unless
> you repartition it (or you specifically change the UUID).
>
In this case, I installed a fresh firmware image on the drive, didn't 
lose a byte, but the UUID was changed, discovered when amanda couldn't 
find its virtual tapes drive after the reboot. I checked the fstab, then 
ran blkid, and it had changed. So I applied a label to the partition, 
edited fstab, and it's been 69,600 or so spinning hours since. I've 
updated the system from ubuntu hardy to debian wheezy, no change.

One of these days I need to convert it to 64 bit. Probably by putting 
most of that stuff on the amandatapes partition, putting in a fresh 1T 
drive, and installing a 64 bit jessie. Once that's running, I have a 2T 
drive that I'll format for amanda and swap this one out. Nothing wrong 
with it, but I keep adding more machines to the disklist. 7 now.

> I've been using UUIDs for *ages*, and I've never seen one change
> unless I've specifically done something which would change it.


Cheers, Gene Heskett
The above content, added by Maurice E. Heskett, is Copyright 2018 by 
Maurice E. Heskett.
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>



Re: Banishing UUIDs from grub

2018-01-18 Thread Don Armstrong
On Thu, 18 Jan 2018, Gene Heskett wrote:
> On Thursday 18 January 2018 14:22:13 Pascal Hambourg wrote:
> > Le 18/01/2018 à 19:54, Gene Heskett a écrit :
> > > UUIDs have turned out to be quite volatile over system upgrades.
> >
> > Not on mine.
>
> I have had the UUID change on this system, on my 
> amandatapes drive at almost every install or upgrade.

Which UUID changed? The filesystem UUID shouldn't change unless you
reformat the partition, and the partition UUID shouldn't change unless
you repartition it (or you specifically change the UUID).

I've been using UUIDs for *ages*, and I've never seen one change unless
I've specifically done something which would change it.

-- 
Don Armstrong  https://www.donarmstrong.com

There is no mechanical problem so difficult that it cannot be solved
by brute strength and ignorance.
 -- William's Law



Re: Banishing UUIDs from grub

2018-01-18 Thread Gene Heskett
On Thursday 18 January 2018 14:22:13 Pascal Hambourg wrote:

> Le 18/01/2018 à 19:54, Gene Heskett a écrit :
> > UUIDs have turned out to be quite volatile over system upgrades.
>
> Not on mine.
>
> > Give me a familiar disklabel any day.
>
> Don't you mean a filesystem or partition label ?
> "Disklabel" is a synonym for "partition table".

Yes of course. I have had the UUID change on this system, on my 
amandatapes drive at almost every install or upgrade. I finally labeled 
that partition as amandatapes about 5 years back. No further problems. 
It's been the same 1T Seagate disk since they came out, and this one came 
out of the box with 25 re-allocated sectors. I updated its firmware a 
week later, UUID changed. Labeled the partition, and nearly 70 thousand 
spinning hours later it still shows 25 re-allocated sectors. I am both 
amazed and pleased as punch. It hovers in the 80% usage range, as a 
vtape is actually a directory, there are 30 of them, 1, very 
occasionally 2 of them gets cleaned out and reused every night.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 



Re: Banishing UUIDs from grub

2018-01-18 Thread Pascal Hambourg

Le 18/01/2018 à 19:54, Gene Heskett a écrit :


UUIDs have turned out to be quite volatile over system upgrades.


Not on mine.


Give me a familiar disklabel any day.


Don't you mean a filesystem or partition label ?
"Disklabel" is a synonym for "partition table".



Re: Banishing UUIDs from grub

2018-01-18 Thread Pascal Hambourg

Le 18/01/2018 à 10:31, Dave Sherohman a écrit :

What is the recommended method for preventing grub from using UUIDs to
refer to filesystems in the current Debian stable distribution?


One method for your use case is to put /boot, or at least /boot/grub, in a 
plain partition on the same disk as GRUB's core image.




Re: Banishing UUIDs from grub

2018-01-18 Thread Gene Heskett
On Thursday 18 January 2018 12:42:41 deloptes wrote:

> Dave Sherohman wrote:
> > What is the recommended method for preventing grub from using UUIDs
> > to refer to filesystems in the current Debian stable distribution?
>
> what is the reason to avoid UUIDs? (if not very private)

UUIDs have turned out to be quite volatile over system upgrades. Give me 
a familiar disklabel any day.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>



Re: Banishing UUIDs from grub

2018-01-18 Thread David Wright
On Thu 18 Jan 2018 at 11:52:11 (-0500), Marc Auslander wrote:
> Dave Sherohman <d...@sherohman.org> writes:
> 
> >What is the recommended method for preventing grub from using UUIDs to
> >refer to filesystems in the current Debian stable distribution?
> >
> 
> I don't know about "recommended" but could you put your own menu
> entry into /etc/grub.d and make it the default?

I prefer to let grub do the grunt work and then run a filter over
grub.cfg. I use LABELs myself; the filter finds the necessary
information in /run/udev/data and performs substitutions of --fs-uuid
options, root=UUID= and the UUIDs themselves. (It also mangles, to my
taste, the menuentry_id_option strings etc a little further.)
It takes no time to run a filter whenever grub.cfg gets rebuilt.
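
Not David's actual filter, just a toy sketch of the idea: a sed pass that
rewrites grub.cfg's search-by-UUID lines into search-by-label. The label
name "rootfs" and the UUID are made up for illustration:

```shell
# Toy stand-in for the kind of grub.cfg filter described above: rewrite
# "search ... --fs-uuid ... <uuid>" into a search by filesystem label.
# GRUB's search command accepts --label as well as --fs-uuid.
fix_grub_line() {
  sed -E 's/(search --no-floppy) --fs-uuid (--set=root) [0-9a-f-]+/\1 --label \2 rootfs/'
}

echo 'search --no-floppy --fs-uuid --set=root c5bb6082-0b8b-46e5-a253-c4811a1f011a' \
  | fix_grub_line
# -> search --no-floppy --label --set=root rootfs
```

A real filter would also need to handle the --hint options and the
root=UUID= kernel parameter, as described above.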

I don't encrypt system partitions or use VMs so I don't know how
well a filtering scheme would translate to that.

Cheers,
David.



Re: Banishing UUIDs from grub

2018-01-18 Thread deloptes
Dave Sherohman wrote:

> What is the recommended method for preventing grub from using UUIDs to
> refer to filesystems in the current Debian stable distribution?

what is the reason to avoid UUIDs? (if not very private)





Re: Banishing UUIDs from grub

2018-01-18 Thread Marc Auslander
Dave Sherohman <d...@sherohman.org> writes:

>What is the recommended method for preventing grub from using UUIDs to
>refer to filesystems in the current Debian stable distribution?
>

I don't know about "recommended" but could you put your own menu
entry into /etc/grub.d and make it the default?



Re: Banishing UUIDs from grub

2018-01-18 Thread David Wright
On Thu 18 Jan 2018 at 07:19:45 (-0600), Dave Sherohman wrote:

> My guess at explaining this would be that the GRUB_DISABLE_LINUX_UUID
> flag is very literal and *only* affects whether "GRUB [passes]
> "root=UUID=xxx" parameter to Linux", but not how grub itself identifies
> the root device ("set root='lvmid/[UUID]').

It's subtle, but that's probably why the parameter is called
GRUB_DISABLE_LINUX_UUID and not GRUB_DISABLE_UUID.

From the docs:

'GRUB_DISABLE_LINUX_UUID'
     Normally, 'grub-mkconfig' will generate menu entries that use
     universally-unique identifiers (UUIDs) to identify the root
     filesystem to the Linux kernel, using a 'root=UUID=...' kernel
     parameter.  This is usually more reliable, but in some cases it may
     not be appropriate.  To disable the use of UUIDs, set this option
     to 'true'.

Cheers,
David.



Re: Banishing UUIDs from grub

2018-01-18 Thread Dave Sherohman
On Thu, Jan 18, 2018 at 11:11:32AM +0100, Stephan Seitz wrote:
> On Do, Jan 18, 2018 at 03:31:30 -0600, Dave Sherohman wrote:
> >What is the recommended method for preventing grub from using UUIDs to
> >refer to filesystems in the current Debian stable distribution?
> 
> In /etc/default/grub I have the option:
> 
> # Uncomment if you don’t want GRUB to pass „root=UUID=xxx” parameter to Linux
> #GRUB_DISABLE_LINUX_UUID=true

That doesn't seem to be a complete solution for booting from an LVM
volume.  I've enabled it:

$ grep UUID /etc/default/grub 
# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to
# Linux
GRUB_DISABLE_LINUX_UUID=true

and re-run update-grub, but /boot/grub/grub.cfg still has a mix of
device names and UUIDs:

menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu 
--class os $menuentry_id_option 
'gnulinux-simple-c5bb6082-0b8b-46e5-a253-c4811a1f011a' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_msdos
insmod lvm
insmod ext2
set 
root='lvmid/wf5YhU-vt2F-uZM9-cVso-qn6Z-fdY9-iQO26v/sBd6ej-DTMK-RUxu-LuRW-MjLj-rRLf-C6OwT2'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root 
--hint='lvmid/wf5YhU-vt2F-uZM9-cVso-qn6Z-fdY9-iQO26v/sBd6ej-DTMK-RUxu-LuRW-MjLj-rRLf-C6OwT2'
  c5bb6082-0b8b-46e5-a253-c4811a1f011a
else
  search --no-floppy --fs-uuid --set=root 
c5bb6082-0b8b-46e5-a253-c4811a1f011a
fi
echo    'Loading Linux 4.9.0-5-amd64 ...'
linux   /boot/vmlinuz-4.9.0-5-amd64 root=/dev/mapper/system ro  quiet
echo    'Loading initial ramdisk ...'
initrd  /boot/initrd.img-4.9.0-5-amd64
}


My guess at explaining this would be that the GRUB_DISABLE_LINUX_UUID
flag is very literal and *only* affects whether "GRUB [passes]
"root=UUID=xxx" parameter to Linux", but not how grub itself identifies
the root device ("set root='lvmid/[UUID]').


-- 
Dave Sherohman



Re: Banishing UUIDs from grub

2018-01-18 Thread Stephan Seitz

On Do, Jan 18, 2018 at 03:31:30 -0600, Dave Sherohman wrote:

What is the recommended method for preventing grub from using UUIDs to
refer to filesystems in the current Debian stable distribution?


In /etc/default/grub I have the option:

# Uncomment if you don’t want GRUB to pass „root=UUID=xxx” parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

Shade and sweet water!

Stephan

--
| Public Keys: http://fsing.rootsland.net/~stse/keys.html |


smime.p7s
Description: S/MIME cryptographic signature


Re: Banishing UUIDs from grub

2018-01-18 Thread Michael Lange
Hi,

On Thu, 18 Jan 2018 03:31:30 -0600
Dave Sherohman <d...@sherohman.org> wrote:

> What is the recommended method for preventing grub from using UUIDs to
> refer to filesystems in the current Debian stable distribution?

not sure about this; have you tried to set

GRUB_DISABLE_LINUX_UUID=true

in /etc/default/grub ?

Regards

Michael


.-.. .. ...- .   .-.. --- -. --.   .- -. -..   .--. .-. --- ... .--. . .-.

Death, when unnecessary, is a tragic thing.
-- Flint, "Requiem for Methuselah", stardate 5843.7



Banishing UUIDs from grub

2018-01-18 Thread Dave Sherohman
What is the recommended method for preventing grub from using UUIDs to
refer to filesystems in the current Debian stable distribution?

---

In an attempt to head off a "but you really want to use UUIDs!" debate:

The specific use-case I'm dealing with here is cloned virtual machines.
When I clone them, the virtual disks' UUIDs are cleared and new UUIDs
are assigned, which is as it should be.  However, this causes the first
boot to fail because grub can't find the UUID it wants to boot from,
requiring me to manually boot the system through the grub rescue shell.

Once the cloned VM is up and running, I can run grub-install to fix it,
but the use of UUIDs prevents the cloned VMs from booting unattended
until this is done.  If grub were to try to boot from lvm/system[1]
instead of lvm/UUID, that would remove the need for per-machine manual
intervention.

[1] ...and it will always and forever be lvm/system.  Removable media
and hardware which can autodetect in nondeterministic sequences are not
concerns here.

-- 
Dave Sherohman



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread Joel Rees
On Sun, Jun 4, 2017 at 12:59 AM, Pascal Hambourg  wrote:
> Le 03/06/2017 à 17:48, Gene Heskett a écrit :
>>
>>
>> I don't believe that will work.  dd runs on the raw device, not to an
>> artificially created "partition".
>
>
> dd runs on any type of device, including partitions.
>

But it copies the raw data.

In the context of this discussion, it makes absolutely no sense to
have it twiddling any of the data it copies, much less any of the data
that refers to what is being copied.

(Sorry about the misfire, Pascal.)

-- 
Joel Rees

One of these days I'll get someone to pay me
to design a language that combines the best of Forth and C.
Then I'll be able to leap wide instruction sets with a single #ifdef,
run faster than a speeding infinite loop with a #define,
and stop all integer size bugs with a bare cast.

More of my delusions:
http://reiisi.blogspot.com/2017/05/do-not-pay-modern-danegeld-ransomware.html
http://reiisi.blogspot.jp/p/novels-i-am-writing.html



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread Joel Rees
(Google or something is screwing up the threading. My apologies if I
mess it up further.)

On Sun, Jun 4, 2017 at 7:30 AM, Fungi4All <fungil...@protonmail.com> wrote:
>
>> From: deb...@lionunicorn.co.uk
>>
>>> I was waiting to see if anyone else found something like this significant
>> and willing to contribute some wisdom
>> No wisdom here, I'm afraid.
>
> just evolution of the unix-dna

It would definitely be evolution. And it would be something that should be
relegated to an explicitly specified option (maybe like character
encoding stuff), if at all.

And it would be orders of magnitude more complex than what dd now
does. That's part of the reason we have (g)parted and other similar
tools. (And, if you are using LVM, LVM has its own tools.)

>>[...]
>>> Also, I believe that when dd is used to copy something from disk to disk
>>> it should provide an option of whether to
>>> produce a new uuid or retain the original (backup, not a concurrent
>>> system).
>
>> Here you're asking for the impossible. dd is blind to what it's
>> copying at that level. It can fiddle with something it calls "records"
>> (which stinks of IBM FB↔VB conversion) and that's about it.
>
>
> I actually like dd the more I learn about it but what I was suggesting was
> to have
> an option to change the uuid to a new random one after it is done copying.

If at all, an option, but it really is out of dd's scope. dd is not parted.

> I understand (think) that dd does not even care about the format of the fs
> it copies
> or that of what it copies to, just blocks of space, where to start and where
> to finish.

Very true.

> So if a 10gb NTFS partition is copied to a 20gb EXT4 partition, the target
> will be an
> ntfs 20gb partition.

No, the target will, depending on what you specify, be a 10gb file in
the ext4 partition or a 10gb NTFS partition (overwriting the ext4 file
system completely) and a 10gb gap of unused disk space that the MBR
says goes with the partition which is formatted as a 10gb NTFS
partition, but the NTFS partition really doesn't know anything about
(unless you tell it about it afterwards by expanding the partition
using NTFS tools.)

IIUC.

>  So I suspect it does formatting in there too,

dd is not parted, nor is it an NTFS partition editor.

> otherwise the
> partition would have been left half ntfs half ext4.

The MBR partition is left half used by the NTFS file system and half
unused. The unused part may have some useless bits of ext4 left
behind, but it has nothing like a file system in it.

> The kernel I assume as soon as dd is done picks up the new set of uuids and
> updates the table.  So if dd does not do it, it leaves the system in a mess.

Not that I am aware. Not unless you tell the kernel to do so.

>[...]


-- 
Joel Rees

One of these days I'll get someone to pay me
to design a language that combines the best of Forth and C.
Then I'll be able to leap wide instruction sets with a single #ifdef,
run faster than a speeding infinite loop with a #define,
and stop all integer size bugs with a bare cast.

More of my delusions:
http://reiisi.blogspot.com/2017/05/do-not-pay-modern-danegeld-ransomware.html
http://reiisi.blogspot.jp/p/novels-i-am-writing.html



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread Fungi4All
From: deb...@lionunicorn.co.uk

> I was waiting to see if anyone else found something like this significant and 
> willing to contribute some wisdom
No wisdom here, I'm afraid.

just evolution of the unix-dna

> I suspect the experiment would be simple.
> Let's say we make a new partition on a disk with a debian installed system 
> and use dd to clone that system in
> the new partition (I have no idea whether doing this in the same disk or a 
> second one makes a difference),
> Let's say that dd copies uuid to the new partition so we have two partitions 
> with the same uuid in the same system.
> Intentionally we make the mistake and log in to the original system and 
> update-grub with the conflict.

AIUI there's a race condition here, perhaps even several. The correct
MBR should be read as normal by the BIOS, but grub then searches
by UUID for the kernel/ramdisk, and the kernel searches by UUID for
the root filesystem, and we don't know how it chooses between them.

I think this bit of info gets one level deeper to source of problems.

Do they (a) take the UUID being searched for and look at each disk/
partition's UUID for a match, or (b) read all the available UUIDs into
a table (overwriting any duplicates by the new entry) and then take
the UUID being searched for and look it up in the table? These
strategies give opposite results.

I suspect at some point, when 2 matching uuids come up for different
disks or parts of the same disk, the partition is left out as unmountable.
In my case it would mix and match the partitions of one system with
some from its clone. How do I know this? I had labeled the partitions
as their sda/sdb numbers and the system booted with 5 partitions
showing me the mix match as unmounted and when I'd attempt to mount
them it would say no no.

> Then relogin and create new uuids for the clone, edit fstab and check 
> original, then update-grub.
> Will it boot having the first root or the clone root? Then look at grub.cfg 
> of the original to see if the uuids for each
> entry are matching or there is a mismatch.
> I went through the grub manual and didn't come up with an answer. It smells 
> like a tiny bug or room for improvement.

I think you'd have to study both grub and the kernel's code to
find an answer, and their developers would say "Just don't do it,
it's not defined, it may change tomorrow".

if time=23:59 do yesterday :)

> Also, I believe that when dd is used to copy something from disk to disk it 
> should provide an option of whether to
> produce a new uuid or retain the original (backup, not a concurrent system).

Here you're asking for the impossible. dd is blind to what it's
copying at that level. It can fiddle with something it calls "records"
(which stinks of IBM FB↔VB conversion) and that's about it.

I actually like dd the more I learn about it but what I was suggesting was to 
have
an option to change the uuid to a new random one after it is done copying.
I understand (think) that dd does not even care about the format of the fs it 
copies
or that of what it copies to, just blocks of space, where to start and where to 
finish.
So if a 10gb NTFS partition is copied to a 20gb EXT4 partition, the target will 
be an
ntfs 20gb partition. So I suspect it does formatting in there too, otherwise the
partition would have been left half ntfs half ext4.
The kernel I assume as soon as dd is done picks up the new set of uuids and
updates the table. So if dd does not do it, it leaves the system in a mess.

For copying filesystems around, there are commands for doing that.
Years ago I used cpio -vdamp but I've seen better commands (cp -a ?)
in threads here more recently. These have the advantage that you
prepare the empty filesystem the way you want it before you copy
into it.

I thought that dd is more accurate and quicker with the exception of little
data in a huge partition, where it copies a huge amount of empty space.

Cheers,
David.

cheers back

Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread tomas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sat, Jun 03, 2017 at 05:59:06PM +0200, Pascal Hambourg wrote:
> Le 03/06/2017 à 17:48, Gene Heskett a écrit :
> >
> >I don't believe that will work.  dd runs on the raw device, not to an
> >artificially created "partition".
> 
> dd runs on any type of device, including partitions.

Exactly. Or to nit the pick a tad more: dd copies a raw byte stream:
that can be a device, a partition, a chunk thereof or even a file.
Whatever the OS can represent as a byte stream.
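
A small sketch of that byte-stream behavior, using ordinary files in place
of block devices (the tune2fs follow-up at the end is a hedged suggestion
for real ext partitions, not something dd itself does):

```shell
# dd copies an opaque byte stream: it neither parses nor rewrites
# filesystem metadata such as UUIDs. Demonstrated here with temp files.
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=16 2>/dev/null
dd if="$src" of="$dst" bs=4096 2>/dev/null   # different bs, same bytes
cmp -s "$src" "$dst" && echo identical       # byte-for-byte equal
rm -f "$src" "$dst"
# On a real cloned ext2/3/4 partition, a new random filesystem UUID could
# then be set afterwards (as root) with:
#   tune2fs -U random /dev/sdX1
```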

Cheers
- -- t
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEARECAAYFAlkzC0wACgkQBcgs9XrR2kYuPQCaAn0s5QjN7FgsIxsmm+B7H4vx
hpwAn33UTBG8fyWPPmE1hFtXZ6D4FKYb
=q5IS
-END PGP SIGNATURE-



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread Pascal Hambourg

Le 03/06/2017 à 18:04, David Wright a écrit :


AIUI there's a race condition here, perhaps even several. The correct
MBR should be read as normal by the BIOS, but grub then searches
by UUID for the kernel/ramdisk, and the kernel searches by UUID for
the root filesystem, and  we don't know how it chooses between them.


Note that the kernel does not know anything about filesystem UUIDs 
(stored in filesystem metadata). It only knows about partition UUIDs 
(stored in partition tables). The part that uses UUIDs to locate the 
root filesystem is the initrd/initramfs userland scripts, with the help 
of libblkid/udev.
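
To make the distinction concrete, here is an illustrative (invented) blkid
report showing the two identifiers side by side; the device name and values
are made up:

```shell
# Illustrative only -- device name and values are invented.
# UUID lives in the filesystem superblock; PARTUUID lives in the
# MBR/GPT partition table. Different operations change each one.
#   blkid /dev/sda1
#   /dev/sda1: LABEL="rootfs" UUID="c5bb6082-0b8b-46e5-a253-c4811a1f011a"
#       TYPE="ext4" PARTUUID="0a1b2c3d-01"
```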



Do they (a) take the UUID being searched for and look at each disk/
partition's UUID for a match, or (b) read all the available UUIDs into
a table (overwriting any duplicates by the new entry) and then take
the UUID being searched for and look it up in the table? These
strategies give opposite results.


I think GRUB does the former and libblkid/udev does the latter.



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread David Wright
On Sat 03 Jun 2017 at 11:02:54 (-0400), Fungi4All wrote:

> I was waiting to see if anyone else found something like this significant and 
> willing to contribute some wisdom

No wisdom here, I'm afraid.

> I suspect the experiment would be simple.
> Let's say we make a new partition on a disk with a debian installed system 
> and use dd to clone that system in
> the new partition (I have no idea whether doing this in the same disk or a 
> second one makes a difference),
> Let's say that dd copies uuid to the new partition so we have two partitions 
> with the same uuid in the same system.
> Intentionally we make the mistake and log in to the original system and 
> update-grub with the conflict.

AIUI there's a race condition here, perhaps even several. The correct
MBR should be read as normal by the BIOS, but grub then searches
by UUID for the kernel/ramdisk, and the kernel searches by UUID for
the root filesystem, and  we don't know how it chooses between them.

Do they (a) take the UUID being searched for and look at each disk/
partition's UUID for a match, or (b) read all the available UUIDs into
a table (overwriting any duplicates by the new entry) and then take
the UUID being searched for and look it up in the table? These
strategies give opposite results.

> Then relogin and create new uuids for the clone, edit fstab and check 
> original, then update-grub.
> Will it boot having the first root or the clone root? Then look at grub.cfg 
> of the original to see if the uuids for each
> entry are matching or there is a mismatch.
> I went through the grub manual and didn't come up with an answer. It smells 
> like a tiny bug or room for improvement.

I think you'd have to study both grub and the kernel's code to
find an answer, and their developers would say "Just don't do it,
it's not defined, it may change tomorrow".

> Also, I believe that when dd is used to copy something from disk to disk it 
> should provide an option of whether to
> produce a new uuid or retain the original (backup, not a concurrent system).

Here you're asking for the impossible. dd is blind to what it's
copying at that level. It can fiddle with something it calls "records"
(which stinks of IBM FB↔VB conversion) and that's about it.

For copying filesystems around, there are commands for doing that.
Years ago I used cpio -vdamp but I've seen better commands (cp -a ?)
in threads here more recently. These have the advantage that you
prepare the empty filesystem the way you want it before you copy
into it.
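
A quick sketch of the cp -a approach, using temp directories in place of a
real mounted target filesystem (the paths are illustrative):

```shell
# cp -a copies a tree while preserving ownership, permissions,
# timestamps and symlinks -- unlike dd, you prepare the target
# filesystem first and copy into it.
srcdir=$(mktemp -d)
dstdir=$(mktemp -d)
mkdir -p "$srcdir/etc"
echo "hello" > "$srcdir/etc/motd"
ln -s etc/motd "$srcdir/motd-link"
cp -a "$srcdir/." "$dstdir/"
cat "$dstdir/etc/motd"          # -> hello
readlink "$dstdir/motd-link"    # -> etc/motd  (symlink preserved, not followed)
rm -rf "$srcdir" "$dstdir"
```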

Cheers,
David.



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread Pascal Hambourg

Le 03/06/2017 à 17:48, Gene Heskett a écrit :


I don't believe that will work.  dd runs on the raw device, not to an
artificially created "partition".


dd runs on any type of device, including partitions.



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread Gene Heskett
On Saturday 03 June 2017 11:02:54 Fungi4All wrote:

>  Original Message 
> From: deb...@lionunicorn.co.uk
> To: debian-user@lists.debian.org
>
> On Thu 01 Jun 2017 at 12:24:28 (-0400), Fungi4All wrote:
> > Why don't we just skip all this that we are in perfect agreement
> > with and go to the juicy part? After all, the uuids are unique and
> > fstab is all correct, yet update-grub would mix and match uuids in
> > writing its grub.cfg.
> > Two uuids on the same entry! Over and over again, till I edited
> > it out to the correct ones and it all worked. Why does everyone
> > choose to skip this issue and keep explaining to me, over and over,
> > what I have well understood by now?
>
> Well, we don't have a lot of evidence. But we do have a tiny bit:
>
> I was waiting to see if anyone else found something like this
> significant and willing to contribute some wisdom. I suspect the
> experiment would be simple.
> Let's say we make a new partition on a disk with a debian installed
> system and use dd to clone that system in the new partition (I have no
> idea whether doing this in the same disk or a second one makes a
> difference),

I don't believe that will work.  dd runs on the raw device, not to an 
artificially created "partition".

rsync OTOH, can run on a partition to partition basis.  And that should 
give you different blkid's.

> Let's say that dd copies uuid to the new partition so we 
> have two partitions with the same uuid in the same system.
> Intentionally we make the mistake and log in to the original system
> and update-grub with the conflict. Then relogin and create new uuids
> for the clone, edit fstab and check original, then update-grub. Will
> it boot having the first root or the clone root? Then look at grub.cfg
> of the original to see if the uuids for each entry are matching or
> there is a mismatch.
> I went through the grub manual and didn't come up with an answer. It
> smells like a tiny bug or room for improvement. Also, I believe that
> when dd is used to copy something from disk to disk it should provide
> an option of whether to produce a new uuid or retain the original
> (backup, not a concurrent system).
>
> if [ x$feature_platform_search_hint = xy ]; then
> search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3
> --hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3
> --hint='hd0,msdos3' UUID on sda3 XXX else
> search --no-floppy --fs-uuid --set=root UUID on sda3 YYY
> fi


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>



Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-03 Thread Fungi4All
 Original Message 
From: deb...@lionunicorn.co.uk
To: debian-user@lists.debian.org

On Thu 01 Jun 2017 at 12:24:28 (-0400), Fungi4All wrote:

> Why don't we just skip all the parts we are in perfect agreement on and
> go to the juicy part? After all, uuids are unique and fstab entries are
> all correct, yet update-grub would mix and match uuids when writing its
> grub.cfg.
> Two uuids on the same entry! Over and over again, till I edited it back
> to the correct ones and it all worked.
> Why does everyone choose to skip this issue and keep explaining to me,
> over and over, what I have well understood by now?

Well, we don't have a lot of evidence. But we do have a tiny bit:

I was waiting to see if anyone else found something like this significant and 
was willing to contribute some wisdom.
I suspect the experiment would be simple.
Let's say we make a new partition on a disk with an installed debian system and 
use dd to clone that system into
the new partition (I have no idea whether doing this on the same disk or a 
second one makes a difference).
Let's say that dd copies the uuid to the new partition, so we have two 
partitions with the same uuid in the same system.
Intentionally we make the mistake and log in to the original system and 
update-grub with the conflict.
Then relogin and create new uuids for the clone, edit fstab and check 
original, then update-grub.
Will it boot having the first root or the clone root? Then look at grub.cfg of 
the original to see if the uuids for each
entry are matching or there is a mismatch.
I went through the grub manual and didn't come up with an answer. It smells 
like a tiny bug or room for improvement.
Also, I believe that when dd is used to copy something from disk to disk it 
should provide an option of whether to
produce a new uuid or retain the original (backup, not a concurrent system).

if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3 
--hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3 --hint='hd0,msdos3' UUID on 
sda3 XXX
else
search --no-floppy --fs-uuid --set=root UUID on sda3 YYY
fi
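The duplicate-uuid situation described above can at least be detected mechanically. A sketch that scans saved blkid output for uuids appearing on more than one device (the device names and uuids below are invented for illustration; on a real system you would pipe the output of blkid itself):

```shell
# Saved output of `blkid` (hypothetical devices/uuids for illustration).
cat > blkid.out <<'EOF'
/dev/sda2: UUID="0097eef1-c934-4b8f-a76b-4b084d6cf6f0" TYPE="ext4"
/dev/sda3: UUID="0097eef1-c934-4b8f-a76b-4b084d6cf6f0" TYPE="ext4"
/dev/sdb1: UUID="dd97eef1-c934-4b8f-a76b-4b084d6cf6f0" TYPE="ext4"
EOF
# Print any uuid that appears on more than one device: a dd-clone collision.
grep -o 'UUID="[^"]*"' blkid.out | sort | uniq -d
```

Anything printed is a collision that will confuse mount-by-UUID and update-grub until one of the copies gets a fresh uuid.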

Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-02 Thread David Wright
On Thu 01 Jun 2017 at 12:24:28 (-0400), Fungi4All wrote:

> Why don't we just skip all the parts we are in perfect agreement on and
> go to the juicy part? After all, uuids are unique and fstab entries are
> all correct, yet update-grub would mix and match uuids when writing its
> grub.cfg.
> Two uuids on the same entry! Over and over again, till I edited it back
> to the correct ones and it all worked.
> Why does everyone choose to skip this issue and keep explaining to me,
> over and over, what I have well understood by now?

Well, we don't have a lot of evidence. But we do have a tiny bit:

if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3 
--hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3 --hint='hd0,msdos3' UUID on 
sda3 XXX
else
search --no-floppy --fs-uuid --set=root UUID on sda3 YYY
fi
insmod png

Not a lot of context, and also edited and unindented, so the best
I can do to try to find what wrote this is to grep "platform_search"
and that comes up with /usr/share/grub/grub-mkconfig_lib in grub-common.

Within this file are these lines (starting around l.169):

  # If there's a filesystem UUID that GRUB is capable of identifying, use it;
  # otherwise set root as per value in device.map.
  fs_hint="`"${grub_probe}" --device $@ --target=compatibility_hint`"
  if [ "x$fs_hint" != x ]; then
echo "set root='$fs_hint'"
  fi
  if fs_uuid="`"${grub_probe}" --device $@ --target=fs_uuid 2> /dev/null`" ; 
then
hints="`"${grub_probe}" --device $@ --target=hints_string 2> /dev/null`" || 
hints=
echo "if [ x\$feature_platform_search_hint = xy ]; then"
echo "  search --no-floppy --fs-uuid --set=root ${hints} ${fs_uuid}"
echo "else"
echo "  search --no-floppy --fs-uuid --set=root ${fs_uuid}"
echo "fi"
  fi
  IFS="$old_ifs"

I'll let the shell experts come up with ideas about how ${fs_uuid}
produces two different substitutions within three lines.
I have made one assumption, that these lines from grub.cfg are from
sections 00 through 10.

Once you reach section 30, things are different. Although I still
would not expect to see the disagreement in UUIDs quoted above,
I wouldn't be surprised if the UUIDs in the lines   linux /boot/vmlinuz…
bore no relation to the other UUIDs because _this_ linux… line is
copied directly from grub.cfg files scattered elsewhere on the disks.

I can illustrate this with the following. Here's the first menuentry
(top level) in my wheezy partition, freshly generated, but then doctored
by inserting "doctored" in the title and zeroing the first two digits
of the UUID throughout this entry. (This grub.cfg is never actioned
because the MBR doesn't point to it.)


### BEGIN /etc/grub.d/10_linux ###
menuentry 'doctored Debian GNU/Linux, with Linux 3.2.0-4-686-pae' --class 
debian --class gnu-linux --class gnu --class os {
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='(hd0,msdos2)'
search --no-floppy --fs-uuid --set=root 
0097eef1-c934-4b8f-a76b-4b084d6cf6f0
echo 'Loading Linux 3.2.0-4-686-pae ...'
linux   /boot/vmlinuz-3.2.0-4-686-pae 
root=UUID=0097eef1-c934-4b8f-a76b-4b084d6cf6f0 ro  quiet
echo 'Loading initial ramdisk ...'
initrd  /boot/initrd.img-3.2.0-4-686-pae
}


Now here are the first two menuentries in section 30 of my jessie
partition (which the MBR points to), generated after the above.
The first menuentry is the top-level one, the second is the
duplicate inside the submenu.

### BEGIN /etc/grub.d/30_os-prober ###
menuentry 'Debian GNU/Linux (7.11) (on /dev/sda2)' --class gnu-linux --class 
gnu --class os $menuentry_id_option 
'osprober-gnulinux-simple-dd97eef1-c934-4b8f-a76b-4b084d6cf6f0' {
insmod part_msdos
insmod ext2
set root='hd0,msdos2'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos2 
--hint-efi=hd0,msdos2 --hint-baremetal=ahci0,msdos2 --hint='hd0,msdos2'  
dd97eef1-c934-4b8f-a76b-4b084d6cf6f0
else
  search --no-floppy --fs-uuid --set=root 
dd97eef1-c934-4b8f-a76b-4b084d6cf6f0
fi
linux /boot/vmlinuz-3.2.0-4-686-pae 
root=UUID=0097eef1-c934-4b8f-a76b-4b084d6cf6f0 ro quiet
initrd /boot/initrd.img-3.2.0-4-686-pae
}
submenu 'Advanced options for Debian GNU/Linux (7.11) (on /dev/sda2)' 
$menuentry_id_option 
'osprober-gnulinux-advanced-dd97eef1-c934-4b8f-a76b-4b084d6cf6f0' {
menuentry 'doctored Debian GNU/Linux, with Linux 3.2.0-4-686-pae (on 
/dev/sda2)' --class gnu-linux --class gnu --class os $menuentry_id_option 
'osprober-gnulinux-/boot/vmlinuz-3.2.0-4-686-pae--dd97eef1-c934-4b8f-a76b-4b084d6cf6f0'
 {
insmod part_msdos

Re: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-06-01 Thread Fungi4All
 Original Message 
Subject: drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab 
action
From: joel.r...@gmail.com
To: Fungi4All <fungil...@protonmail.com>

Fungi4All-san,

I'll try explaining what we don't know whether you understand or not.

I understand everything you have written below.

So when you dd a partition/volume, you copy the
UUID for that partition/volume, too.

I asked specifically about this and I never got a straight answer. Is it 
possible, though, to
copy something and NOT get the same uuid, i.e. for the target partition to 
retain its original uuid?
Because this is what happened. When I made a list of the uuids after dd, the 
sources and
targets were not all the same. Some had transferred, some had not.

And when you dd the whole device, you copy all the UUIDs on every
partition/volume on the device.

I use gparted for partition work. I made a new experiment to see in action what 
may happen
when a usb drive is copied. The stick had two ext4 partitions, and the targets 
were two
partitions on hd0 (the only hard drive). In this case the targets got the same 
uuids. I tried
over and over again to change the stick's uuids, and gparted would not allow me.

In order to have both the duplicate and the original connected to the
computer at the same time, you have to figure out a way it can tell
them apart.

As far as I can tell, when a duplicate exists it just doesn't get mounted. With 
gparted you
can see it fine; it is there, with a duplicate uuid.

...
If you read through this and understand it, and can tell us what you did
in a way that we can tell you understand this, we can continue to try to
help.

Why don't we just skip all the parts we are in perfect agreement on and go to 
the juicy part?
After all, uuids are unique and fstab entries are all correct, yet update-grub 
would mix and match uuids when writing
its grub.cfg.
Two uuids on the same entry! Over and over again, till I edited it back to 
the correct ones and it all worked.
Why does everyone choose to skip this issue and keep explaining to me, over and 
over,
what I have well understood by now?

When grub is installed, it has a way of detecting whether there is a bootable 
system in each partition, and it makes a
table of menu entries:
Menu 1 Debian 8
Menu 2 Debian 9
Menu 3 Zlumbudu

Here I have one entry, Menu 2, and within the same instruction two different 
uuids exist: one is the same as
Menu 1's, the next is the same as Menu 2's. You choose 2, and 1 starts. Over and 
over I tried to update it; it
showed as a new file, with the same error. When I correct it manually, 
everything works great!

--
Joel Rees

delusions of being a novelist:
http://joel-rees-economics.blogspot.com/2017/01/soc500-00-00-toc.html

drive names and UUIDs, was Re: Intresting dd fsck grub uuid fstab action

2017-05-31 Thread Joel Rees
Fungi4All-san,

I'll try explaining what we don't know whether you understand or not.
First, about

   /dev/sda
   /dev/sdb
   ...

When you turn the machine on, these "names" do not exist. Well, at
least, the computer does not know which physical device is /dev/sda
and which is /dev/sdb, etc.

When the BIOS is finding hard disks and other disk-like persistent
storage devices right after you turn the computer on, it remembers them
in the order it finds them. That means that, if one device finishes
powering up before another, it is likely to have a lower device name.

But even that is a probability, not a guarantee, because BIOS is
generally not looking at each device at the exact moment it powers up.

Because of this, /dev/sda may be your boot drive one time, and may be
your backup drive another time. Or, if you have multiple boot drives, it
may be your MSWIndows boot drive one time and your debian boot
drive the next, etc.

You want to think it should be more simple, but it's not. And it's not your
fault, and it's not Debian's fault. (Whose fault? Microsoft, Intel, Apple,
Maxtor, Seagate, Commodore, Atari, Radio Shack, IBM, DEC,
Honeywell, ..., pretty much all the companies involved.)

Why can't the drive itself just say it's /dev/sda? Well, what happens
when you go to the store and need a /dev/sdc, but all the drives in stock
are named /dev/sdb?

Labels of various sorts were tried, but labels tended to be chosen too
simply, like "accounting" instead of something like "ACTG20170601", so
they got duplicated.

So UUIDs were invented as a new sort of label that would theoretically
never be duplicated. They are separate from the things that are actually
called labels in /etc/fstab, gparted's listing, etc.

This explanation is too simple, but close enough to what's happening
to explain what we think has happened to your drives.

UUIDs or other kinds of labels, where are they stored?

In the storage area of the drive itself, along with the MBR, the partition
information, the file system information, and the program and data files.

That means that, when you use dd to duplicate your storage device,
even the UUID is duplicated.

Now, it turns out that it's convenient to label or name partitions/volumes
within the device, and UUIDs are now generally assigned to each
partition/volume. These are different UUIDs, and they are also stored
on the disk itself. So when you dd a partition/volume, you copy the
UUID for that partition/volume, too.

And when you dd the whole device, you copy all the UUIDs on every
partition/volume on the device.

In order to have both the duplicate and the original connected to the
computer at the same time, you have to figure out a way it can tell
them apart.

The easiest way is to first change the UUID for the new device, and
then the UUIDs for each partition/volume as well, and then edit
/etc/fstab on the new device to point to the changed UUIDs. You can do
this with an install CD or a live OS on a USB, etc.
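Joel's procedure can be sketched on a throwaway ext4 image instead of a real clone (this assumes the e2fsprogs tools; on real hardware the target would be the clone's partition, operated on from a live/rescue system, and the file name here is made up):

```shell
# Stand-in for the cloned filesystem: an 8 MiB ext4 image file.
dd if=/dev/zero of=cloned-fs.img bs=1M count=8 status=none
mkfs.ext4 -q -F cloned-fs.img
old=$(dumpe2fs -h cloned-fs.img 2>/dev/null | awk '/Filesystem UUID/{print $3}')
tune2fs -U random cloned-fs.img >/dev/null   # give the clone a fresh uuid
new=$(dumpe2fs -h cloned-fs.img 2>/dev/null | awk '/Filesystem UUID/{print $3}')
echo "old=$old"
echo "new=$new"
# On the real clone: update its /etc/fstab to the new uuid, then re-run
# update-grub so grub.cfg no longer mixes the two.
```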

You can't do it easily by booting either the new or old device, even if
you boot a different OS on either the new or old device.

It can be done by giving the necessary volumes and the device itself
labels (that are not UUIDs) and changing /etc/fstab on the device that
will be booted to use labels instead of UUIDs.

For anything you changed before you changed the UUIDs or labels, edited
/etc/fstab to use the new ones, and rebooted, you really don't know
which drive you did it to.

Running fsck before you take care of that could be really dangerous.

If you read through this and understand it, and can tell us what you did
in a way that we can tell you understand this, we can continue to try to
help.

--
Joel Rees

delusions of being a novelist:
http://joel-rees-economics.blogspot.com/2017/01/soc500-00-00-toc.html



Re: mdadm and UUIDs for its component drives

2011-07-03 Thread Luke Kenneth Casson Leighton
On Mon, Jun 27, 2011 at 1:59 PM, Philip Hands p...@hands.com wrote:

  ok, i bring in phil now, who i was talking to yesterday about this.
 what he said was (and i may get this wrong: it only went in partly) -
 something along the lines of remember to build the drives with
 individual mdadm bitmaps enabled.  this will save a great deal of
 arseing about when re-adding drives which didn't get properly added:
 only 1/2 a 1Tb drive will need syncing, not an entire drive :)  the
 bitmap system he says has hierarchical granularity apparently.

 What I said was: internal bitmaps

 ahh.  yes.  i missed the word internal but heard the good bits,
then looked up the man page and went ohh, ok, that must be it.  i
get there in the end :)


 also, he recommended taking at least one of the external drives *out*

 I think I said: WTF?

 ha ha :)

  You buy a machine that had 4 hot swap SATA bays,
 and you're plugging crappy external USB drives into it instead?  Are you
 mental?  (or at least, if I didn't say that out loud, that's what I was
 thinking ;-)

 i seem to remember the incredulity which definitely had the words
are you mental?? behind it

 I must say that I'm a little befuddled about how you managed to make
 the system sensitive to which device contains which MD component -- I
 seem to remember you mentioning that you had devices listed in your
 mdadm.conf -- just get rid of them.

 well, i may have implied that, on account of not being able to
express it - i get it now: the things i thought were devices are
actually the UUIDs associated with the RAID array...

 ARRAY /dev/md/2 metadata=1.2 UUID=65c09661:02fc3a16:402916d3:5d4987f4 
 name=sheikh:2

 ... just like this.

 No mention of devices, which is a good job because that machine seems
 to randomise the device mapping on each boot, and is capable of moving
 them about when running if you pop the drive out of the machine and back
 in again.

 yehhs, i noticed that.  even the bloody boot drive comes up as
/dev/sde occasionally.  last reboot i was adding drive 4 to the array,
it was named /dev/sda.  kinda freaky.

 ok.

 so.

 let's have a go at some updating the docs...

 DESCRIPTION
   RAID  devices  are  virtual devices created from two or more real block
   devices.  This allows multiple devices (typically disk drives or parti‐
   tions  thereof)  to be combined into a single device to hold (for exam‐
   ple) a single filesystem.  Some RAID levels include redundancy  and  so
   can survive some degree of device failure.

   Linux  Software  RAID  devices are implemented through the md (Multiple
   Devices) device driver.  UUIDs are used internally through Linux Software
   RAID to identify any device that is part of a RAID.  In this
way, names may
   change but the innocent are protected.

 ok, scratch that last sentence :)

   Linux  Software  RAID  devices are implemented through the md (Multiple
   Devices) device driver.  UUIDs are used internally in Linux Software RAID
   to identify any device that is part of a RAID, thus ensuring
that even if the
   name changes (such as may happen if devices are removed and placed
   into another system, or if using removable hot-swappable media) Linux
   RAID can still correctly identify the component devices.

can we start with that - what do you think, martin?  it's right at the
top: it spells things out, and it makes linux RAID look good :)  i'll
try to find appropriate places to put the same info, but the page is
really quite long.  perhaps on --add somewhere?

 l.


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/capweedxp2gfowv0xj9x6+0bmfi85w8mndr4czgy9xhv7cp1...@mail.gmail.com



Re: mdadm and UUIDs for its component drives

2011-06-28 Thread Tom H
On Mon, Jun 27, 2011 at 3:19 AM, martin f krafft madd...@debian.org wrote:
 also sprach Tom H tomh0...@gmail.com [2011.06.27.0851 +0200]:

  Partitions do not have UUIDs. What you are seeing are the MD UUIDs
  stored in the superblock of the sda1 device.

 I called them mdadm UUIDs rather than MD UUIDs but they definitely
 exist, are different from the MD Array UUID, and, AFAIK, unused by
 the user tools.

 I misunderstood you. Partitions do not have UUIDs, but you were
 talking about individual array constituents — those do have UUIDs
 that are separate from the array UUID.

  # mdadm --examine /dev/sda2 | grep UUID
  Array UUID : bfb705a9:69bfc685:92b80aa8:ff445936
  Device UUID : ed7cb6d2:32f8dda4:bdd22f74:c4ef720b

Exactly (my fault for misusing partition).
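The two kinds of UUID can be told apart by parsing the --examine output quoted above (the sample lines are copied from the message; on a real system the input would come from mdadm --examine, run as root):

```shell
# Sample lines copied from the message above.
cat > examine.out <<'EOF'
Array UUID : bfb705a9:69bfc685:92b80aa8:ff445936
Device UUID : ed7cb6d2:32f8dda4:bdd22f74:c4ef720b
EOF
# The Array UUID is shared by every member; each Device UUID is unique
# to one member's superblock.
awk '/Array UUID/{a=$4} /Device UUID/{d=$4} END{print (a != d ? "distinct" : "same")}' examine.out
```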





Re: mdadm and UUIDs for its component drives

2011-06-28 Thread Tom H
On Mon, Jun 27, 2011 at 8:59 AM, Philip Hands p...@hands.com wrote:

 I must say that I'm a little beffuddled about how you managed to make
 the system sensitive to which device contains which MD component -- I
 seem to remember you mentioning that you had devices listed in your
 mdadm.conf -- just get rid of them.

I think that you may have resolved the OP's problem. I couldn't
understand why the newly-plugged-in USB disk might be considered a
member of the array. Having, for example,
devices=/dev/sda1,/dev/sdb1 on the ARRAY line would probably do
it! Now the question is whether this is the OP's setup.





Re: mdadm and UUIDs for its component drives

2011-06-27 Thread Tom H
On Mon, Jun 27, 2011 at 12:52 AM, martin f krafft madd...@debian.org wrote:
 also sprach Tom H tomh0...@gmail.com [2011.06.26.2328 +0200]:

 mdadm --examine /dev/sda1 returns mdadm UUIDs of the array and
 the partition. (I've never seen the mdadm UUID of a partition be
 used for anything. Can an array be assembled by referring to an
 mdadm UUID of a partition to add a partition? Would it make any
 sense?!)

 Partitions do not have UUIDs. What you are seeing are the MD UUIDs
 stored in the superblock of the sda1 device.

I called them mdadm UUIDs rather than MD UUIDs but they definitely
exist, are different from the MD Array UUID, and, AFAIK, unused by
the user tools.





Re: mdadm and UUIDs for its component drives

2011-06-27 Thread Tom H
On Mon, Jun 27, 2011 at 1:51 AM, Andrew McGlashan
andrew.mcglas...@affinityvision.com.au wrote:
 Tom H wrote:

 You have / set up as a RAID 1 array md0 with sda1 and sdb1 as its
 components.

 No / would be on an internal drive,  right now that is not the concern as it
 has nothing to do with the external drive array(s) in question for this
 issue.

Thanks for explaining...

Forget about /. Your external array is mounted on /path/to/array
and its members are sdXA and sdYA. When you plug in another USB drive,
it becomes sdXA or sdYA and mdadm tries to assemble it into the array
even though it doesn't have any mdadm metadata whatsoever.





Re: mdadm and UUIDs for its component drives

2011-06-27 Thread martin f krafft
also sprach Tom H tomh0...@gmail.com [2011.06.27.0851 +0200]:
  Partitions do not have UUIDs. What you are seeing are the MD UUIDs
  stored in the superblock of the sda1 device.
 
 I called them mdadm UUIDs rather than MD UUIDs but they definitely
 exist, are different from the MD Array UUID, and, AFAIK, unused by
 the user tools.

I misunderstood you. Partitions do not have UUIDs, but you were
talking about individual array constituents — those do have UUIDs
that are separate from the array UUID.

  # mdadm --examine /dev/sda2 | grep UUID
  Array UUID : bfb705a9:69bfc685:92b80aa8:ff445936
  Device UUID : ed7cb6d2:32f8dda4:bdd22f74:c4ef720b

-- 
 .''`.   martin f. krafft madduck@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
alas, i am dying beyond my means.
-- oscar wilde




Re: mdadm and UUIDs for its component drives

2011-06-27 Thread Philip Hands
On Sun, 26 Jun 2011 14:42:03 +0100, Luke Kenneth Casson Leighton 
luke.leigh...@gmail.com wrote:
 On Sun, Jun 26, 2011 at 2:25 PM, Andrew McGlashan
 andrew.mcglas...@affinityvision.com.au wrote:
 
  I hear what you are saying, but I had a related problem which was similar.
 
  well... it's funny, because this is exactly what i need.
 
  Anyway the long and short of it is, I can use mdadm without regard to
  what devices are found, such as /dev/sda /dev/sdb /dev/sdc and the like as I
  rely purely on the UUID functionality, which as you know, mdadm handles
  perfectly well.  ;-)
 
  :)
 
  well.  that was nice.  the scenario you describe is precisely what i
 sort-of had planned, but didn't have the expertise to do so was going
 to recommend just two drives and then rsync to the other two.
 
  _however_, given that you've solved exactly what is needed / best /
 recommended for when you have 4 external drives (which i do) that's
 bloody fantastic :)
 
  ok, i bring in phil now, who i was talking to yesterday about this.
 what he said was (and i may get this wrong: it only went in partly) -
 something along the lines of remember to build the drives with
 individual mdadm bitmaps enabled.  this will save a great deal of
 arseing about when re-adding drives which didn't get properly added:
 only 1/2 a 1Tb drive will need syncing, not an entire drive :)  the
 bitmap system he says has hierarchical granularity apparently.

What I said was: internal bitmaps

mdadm(8):

   -b, --bitmap= 
  
  Specify a file to store a write-intent bitmap in.  The
  file should not exist unless --force is also given.  The
  same file should be provided when assembling the array.
  If the word internal is given, then the bitmap is stored
  with the metadata on the array, and so is replicated on
  all devices.  If the word none is given with --grow mode,
  then any bitmap that is present is removed.

  To help catch typing errors, the filename must contain at
  least one slash ('/') if it is a real file (not 'internal'
  or 'none').

  Note: external bitmaps are only known to work on ext2 and
  ext3.  Storing bitmap files on other filesystems may
  result in serious problems.

and I probably also gave my half-arsed understanding of what that means.

Feel free to consult actual documentation for a proper understanding of
reality.
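For reference, the create-time form being discussed might look like the following. This is only a dry sketch that prints the command, since creating a real array needs root and spare devices, and the device names are hypothetical:

```shell
# A write-intent bitmap stored "internal"ly lives in the array metadata, so a
# re-added member resyncs only the regions marked dirty, not the whole drive.
cmd="mdadm --create /dev/md0 --level=1 --raid-devices=2 \
--bitmap=internal /dev/sda1 /dev/sdb1"
printf '%s\n' "$cmd"
# An existing array can gain a bitmap later with:
#   mdadm --grow /dev/md0 --bitmap=internal
```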

 also, he recommended taking at least one of the external drives *out*

I think I said: WTF?  You buy a machine that had 4 hot swap SATA bays,
and you're plugging crappy external USB drives into it instead?  Are you
mental?  (or at least, if I didn't say that out loud, that's what I was
thinking ;-)

I must say that I'm a little befuddled about how you managed to make
the system sensitive to which device contains which MD component -- I
seem to remember you mentioning that you had devices listed in your
mdadm.conf -- just get rid of them.

The mdadm.conf on one of my servers looks like this:

=-=-=-=-
DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST system
MAILADDR root
ARRAY /dev/md/2 metadata=1.2 UUID=65c09661:02fc3a16:402916d3:5d4987f4 
name=sheikh:2
ARRAY /dev/md/3 metadata=1.2 UUID=e82f516b:64bf463c:adf65c9c:fd728d05 
name=sheikh:3
ARRAY /dev/md/4 metadata=1.2 UUID=56adc7ca:c7097e9b:00ac12c0:d1d278f2 
name=sheikh:4
ARRAY /dev/md/5 metadata=1.2 UUID=6c5362e4:74b56fad:8c74a317:e4dce6d0 
name=sheikh:5
ARRAY /dev/md/6 metadata=1.2 UUID=99ed31bd:cc608687:76f7b5a3:7bca24bc 
name=sheikh:6
ARRAY /dev/md/7 metadata=1.2 UUID=87cdaf12:94c2a356:4ba1d3bd:c80ac3b3 
name=sheikh:7
ARRAY /dev/md/8 metadata=1.2 UUID=08e708b8:0989607b:d99709d2:8b5e4d58 
name=sheikh:8
ARRAY /dev/md/11 metadata=1.2 UUID=210e1b53:3937b017:c947361e:2d2884b1 
name=sheikh:11
=-=-=-=-

No mention of devices, which is a good job because that machine seems
to randomise the device mapping on each boot, and is capable of moving
them about when running if you pop the drive out of the machine and back
in again.

As also mentioned somewhere in the docs, the output of the command:

  mdadm --examine --scan

can be used to populate the relevant bits of mdadm.conf
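That workflow can be sketched with captured sample output (the ARRAY lines are copied from the config above; on a real system you would run mdadm --examine --scan itself, as root, and append to /etc/mdadm/mdadm.conf rather than a scratch file):

```shell
# Saved sample of `mdadm --examine --scan` output (uuids from the config above).
cat > scan.out <<'EOF'
ARRAY /dev/md/2 metadata=1.2 UUID=65c09661:02fc3a16:402916d3:5d4987f4 name=sheikh:2
ARRAY /dev/md/3 metadata=1.2 UUID=e82f516b:64bf463c:adf65c9c:fd728d05 name=sheikh:3
EOF
# Append to a working copy of the config, so assembly keys purely on uuids
# and ignores /dev/sdX ordering entirely.
cat scan.out >> mdadm.conf.new
grep -c '^ARRAY ' mdadm.conf.new
```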

Cheers, Phil.
-- 
|)|  Philip Hands [+44 (0)20 8530 9560]http://www.hands.com/
|-|  HANDS.COM Ltd.http://www.uk.debian.org/
|(|  10 Onslow Gardens, South Woodford, London  E18 1NE  ENGLAND




mdadm and UUIDs for its component drives

2011-06-26 Thread Luke Kenneth Casson Leighton
at martin's request i'm forwarding this to debian-user so that it can
be found for archival purposes and general discussion.  this is the
context: a follow-up question will be sent, without all the crap
below.

l.

[original]

allo martin,

haven't spoken to you for a while.  got an interesting feature request
/ bug in mdadm to report.  here's a bit of background, a lovely ubuntu
user trying to tell the world he's got it right, when it's all quite
likely to be spectacularly... inept and ineffective :)

http://ubuntuforums.org/archive/index.php/t-582775.html

i have set up a system which has WD external USB drives: they're
RAID-1 mirrored.  the idea is to keep the temperature of the machine
down, by having the data drives external: that way, the main fan
doesn't fire up and it's all nice and quiet.

the issue is this: if put in, say, a USB memory stick in first, and it
pops up as /dev/sdc, and _then_ put in the two USB external drives,
what happens to mdadm?  it sees /dev/sdc as being, instead of one of
the RAID drives, as a USB memory stick!

from the above thread (and i can confirm it - i've tried):
 mdadm --manage /dev/md1 --add
/dev/disk/by-id/usb-WD_Ext_HDD_1021_574341565933303838393734-0:0

mdadm i can confirm goes and hunts down the symlinks and adds
/dev/sdd!  i don't _want_ it to add /dev/sdd, i want it to add
/dev/disk/by-id/usb-WD_blah_blah :)

question is: how?  or, does it not matter: does mdadm use UUIDs internally?

tia,

[martin's reply]


On Sat, Jun 25, 2011 at 8:26 PM, martin f krafft madd...@debian.org wrote:
 also sprach Luke Kenneth Casson Leighton luke.leigh...@gmail.com 
 [2011.06.25.1938 +0200]:
 mdadm i can confirm goes and hunts down the symlinks and adds
 /dev/sdd!  i don't _want_ it to add /dev/sdd, i want it to add
 /dev/disk/by-id/usb-WD_blah_blah :)

 question is: how?  or, does it not matter: does mdadm use UUIDs internally?

 Yes, it uses UUIDs. The problem you describe should not happen.

 Please direct your questions to debian-user@lists.debian.org so that
 others can profit from this discussion.

 --
  .''`.   martin f. krafft madduck@d.o      Related projects:
 : :'  :  proud Debian developer               http://debiansystem.info
 `. `'`   http://people.debian.org/~madduck    http://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems

 without a god, life is only a matter of opinion.
                                                    -- douglas adams






Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Luke Kenneth Casson Leighton
moorning martin: thanks for responding. apologies for not thinking to
ask on debian-user earlier, and apologies for the long-winded style:
just got dragged out of bed to go chase a lamb out of the garden that
was eating our flowers and vegetables.  if i wasn't stumbling about
half-asleep or concerned for our future food supply i'd find a lamb
head-butting fence posts incredibly funny.

On Sat, Jun 25, 2011 at 8:26 PM, martin f krafft madd...@debian.org wrote:

 also sprach Luke Kenneth Casson Leighton luke.leigh...@gmail.com 
 [2011.06.25.1938 +0200]:
 mdadm i can confirm goes and hunts down the symlinks and adds
 /dev/sdd!  i don't _want_ it to add /dev/sdd, i want it to add
 /dev/disk/by-id/usb-WD_blah_blah :)

 question is: how?  or, does it not matter: does mdadm use UUIDs internally?

 Yes, it uses UUIDs. The problem you describe should not happen.

 ok, so the question therefore morphs into a long-winded
self-answering one: what is it about mdadm that causes people not to
be aware that UUIDs are used internally, such that they invest quite a
bit of time to e.g. modify their udev rules in /etc/ (rather than add
alternatives to /usr/local) and thus make their lives more awkward for
future upgrades, and search for solutions such as endeavouring to
use --add /dev/disk/by-id/XXX?

 the answer is that mdadm tracks down the hardlink and displays, as
best i can tell, only that, with no immediately obvious options to get
it to display the disk UUIDs.

 sooo here's some further questions:

 * is there an option to mdadm to make it display UUIDs instead of or
as well as the disk name?

 * if not, would adding one be a good idea?

 * also, how about making mention of how mdadm works, in the man page
somewhere reaaasonably prominently?

the basic gist is that mdadm is a fantastic tool, does a far better
job than people believe or understand it to be doing, protects them
from themselves and any lack of knowledge of its inner workings, but
that means that unfortunately it's under-promoted and in danger of
being ignored (or worse, NIH-rewritten!)

 i would hate to see a better tool being written which has, at the
very top of its home page, and in all freshmeat and sourceforget
prominent short descriptions: "yeah!  we're l33t!  our software RAID
tool uses UUIDs, which makes it better than mdadm.  we r0ck!"

 :)

 l.





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Andrew McGlashan

Hi Luke,

Luke Kenneth Casson Leighton wrote:

 the answer is that mdadm tracks down the hardlink and displays, as
best i can tell, only that, with no immediately obvious options to get
it to display the disk UUIDs.


I hear what you are saying, but I had a related problem which was similar.

When starting up a machine with two external 2TB drives which had been 
set up as a mirror, it would sometimes only find one drive and then it 
would happily mount the RAID1 array in a degraded state.  Then, when the 
other drive was added in, it had to do a rebuild of the array.  It's not 
much good having to rebuild the array after each boot when the mirror 
should be perfectly fine.



So I solved it by adding the following to my /etc/rc.local

nohup /usr/local/bin/u1-mirror-drive.sh 2>&1 >/dev/null &



Note that external mirror drive, which mounts at /u1, has this 
/etc/fstab entry:


/dev/mapper/vg--external--1-vg--external--1--u1   /u1   ext4   noauto,rw   0   0




I've masked the UUID below, but I don't see how it could cause any 
trouble if I did not do that.



The RAID1 is identified ONLY with UUID in /etc/mdadm/mdadm.conf

#  grep ARRAY /etc/mdadm/mdadm.conf
ARRAY /dev/md0 UUID=06fd3d46----
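An ARRAY line like this is usually generated rather than typed by hand, and the UUID is the only identifier mdadm needs from it. As a minimal sketch (the UUID below is hypothetical, since the real one is masked above), the value can be pulled out with plain parameter expansion:

```shell
# Hypothetical mdadm.conf ARRAY line -- the UUID is made up for illustration.
conf_line='ARRAY /dev/md0 UUID=06fd3d46:11111111:22222222:33333333'

# Strip everything up to and including "UUID=" to get the bare value.
uuid=${conf_line#*UUID=}
echo "$uuid"   # → 06fd3d46:11111111:22222222:33333333
```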



Here's my script that handles getting everything working after a boot:

#  cat /usr/local/bin/u1-mirror-drive.sh
#!/bin/bash

RAID_DRIVE_ID=06fd3d46----
RAID_DRIVES=2
TIME_LIMIT=60

echo "RAID Drive ID: $RAID_DRIVE_ID"
echo "Number of Devices required: $RAID_DRIVES"
echo "Time Limit: $TIME_LIMIT"

function error_drive_missing ()
{
    echo
    echo -en "\aMissing drive(s) ... cannot assemble /dev/md0\n\n"
    /sbin/blkid | /bin/grep "$RAID_DRIVE_ID"
    exit
}

(

echo "=="
echo -en "Waiting for $RAID_DRIVES drives to be visible for \"linux_raid_member(s)\" with blkid of: \"$RAID_DRIVE_ID\" ... \n\t"

CNT=1
echo -en "00"
while [ $(/sbin/blkid | /bin/grep "$RAID_DRIVE_ID" | /usr/bin/wc -l) -lt $RAID_DRIVES ]
do
    if [ $CNT -lt 10 ]; then echo -en "\b$CNT"; else echo -en "\b\b$CNT"; fi
    if [ $CNT -eq $TIME_LIMIT ]; then echo -en "\b\b$TIME_LIMIT seconds \n"; error_drive_missing; fi

    CNT=$(($CNT + 1))
    /bin/sleep 1
done
echo -en "\n\nAll required drives found in $CNT seconds\n"
echo "=="
echo -en "\n\n"

cmds='/sbin/mdadm --assemble /dev/md0~
/sbin/vgscan~
/sbin/vgchange -ay vg-external-1~
/bin/mount /u1~
/bin/mount~
/bin/df -Th~
/bin/date~
/sbin/mdadm -D /dev/md0~
/bin/cat /proc/mdstat'

echo ".."

IFS='~'
echo $cmds |
while read cmd
do
    IFS=$' \t\n'
    echo "=="
    echo ${cmd}
    echo "--"
    $cmd
    echo -en "==\n\n"
done

echo ".."

) 2>&1 | /usr/bin/tee /var/log/md0-vg-external-1-u1-wrk.$(date +%Y%m%d%H%M).out




Right now, I am using 2x 2TB drives as mirrors; I plan to add a 3rd 
drive as a 3-way mirror and to let it sync up, then remove that drive for 
off-site storage.  A 4th drive will come into play as well, to rotate 
off-site storage.  Consequently, I catered for that scenario in the 
mount script above -- i.e. I can easily change the number of RAID 
devices to find before continuing, if I choose to have more online or not 
during boot.  The script gives up if it cannot find the required number 
of devices within 60 seconds, at which point I will have to intervene manually.


As you can see from the script, there is some logging taking place so 
that I can check things over if necessary.


I may end up using multiple external mirrors at some stage; if I do that 
then I'll likely have duplicated scripts for each metadevice, and the 
scripts will be [slightly] modified as required.  I may end up with a 
parameter file and a single script, but it's probably not worth the 
further effort, although using command-line variables would be an easy 
and viable option.


Anyway the long and short of it is, I can use mdadm without regard 
to what devices are found, such as /dev/sda /dev/sdb /dev/sdc and the 
like as I rely purely on the UUID functionality, which as you know, 
mdadm handles perfectly well.  ;-)



--
Kind Regards
AndrewM

Andrew McGlashan
Broadband Solutions now including VoIP





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Luke Kenneth Casson Leighton
On Sun, Jun 26, 2011 at 2:25 PM, Andrew McGlashan
andrew.mcglas...@affinityvision.com.au wrote:

 I hear what you are saying, but I had a related problem which was similar.

 well... it's funny, because this is exactly what i need.

 Anyway the long and short of it is, I can use mdadm without regard to
 what devices are found, such as /dev/sda /dev/sdb /dev/sdc and the like as I
 rely purely on the UUID functionality, which as you know, mdadm handles
 perfectly well.  ;-)

 :)

 well.  that was nice.  the scenario you describe is precisely what i
sort-of had planned, but didn't have the expertise to do so was going
to recommend just two drives and then rsync to the other two.

 _however_, given that you've solved exactly what is needed / best /
recommended for when you have 4 external drives (which i do) that's
bloody fantastic :)

 ok, i bring in phil now, who i was talking to yesterday about this.
what he said was (and i may get this wrong: it only went in partly) -
something along the lines of remember to build the drives with
individual mdadm bitmaps enabled.  this will save a great deal of
arseing about when re-adding drives which didn't get properly added:
only 1/2 a 1Tb drive will need syncing, not an entire drive :)  the
bitmap system he says has hierarchical granularity apparently.

also, he recommended taking at least one of the external drives *out*
of its external box and making it *internal*.  the reason for that is
that a) if the drives ever get powered down accidentally (e.g. by
cleaning ladies) you're fd b) if you move a drive or two
internally, it's possible to prioritise those drives as reading
ones, and to make the external ones write priority.  so, the data
gets read from the internal one, and changes get propagated to all
drives.

... thoughts?

l.





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread martin f krafft
also sprach Luke Kenneth Casson Leighton luke.leigh...@gmail.com 
[2011.06.26.1241 +0200]:
  * is there an option to mdadm to make it display UUIDs instead of or
 as well as the disk name?

mdadm -Es

  * also, how about making mention of how mdadm works, in the man page
 somewhere reaaasonably prominently?

Search manpage for partitions. Please suggest patches if you find
the information insufficient.

-- 
 .''`.   martin f. krafft madduck@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
this product is under strict quality contril with perfect packing and
quality when leving the factory.please keep away from damp.high
temperature or sun expose.If found any detectives when purchasing.
please return the productby airmail to our administration section and
inform the time, place.and store of this purchase for our
improvement.We shall give you a satisfactory reply.Thanks for your
patronage and welcome your comments.
 -- http://www.engrish.com


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/sig-policy/999bbcc4/current)


Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Luke Kenneth Casson Leighton
On Sun, Jun 26, 2011 at 2:25 PM, Andrew McGlashan
andrew.mcglas...@affinityvision.com.au wrote:

 Anyway the long and short of it is, I can use mdadm without regard to
 what devices are found, such as /dev/sda /dev/sdb /dev/sdc and the like as I
 rely purely on the UUID functionality, which as you know, mdadm handles
 perfectly well.  ;-)

 :)

 ... you see, this is the bit that has me concerned.  /dev/mdN can
be referred to by its unique UUID, but that's *not* what i'm referring
to.  and, from what you're saying, you appear to be implying that yes,
the external drives can pop up as /dev/sda through /dev/sdc and be
confused - and thus it is pure luck (or actually design) that the
drives *happen* to all be part of the same identical RAID-1 mirroring
array.

 so i realise martin that you've already answered, but it would be
really good if you could explicitly confirm:

 yes, mdadm names its RAID drives by UUID (as can clearly be seen in
/etc/mdadm/mdadm.conf) but does it *also* refer to its *COMPONENT*
drives (internally, and non-obviously, and undocumentedly) by UUID and
then report to the outside world that it's using whatever name
(/dev/sdX) which can, under these external-drives scenario, change.

 l.





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Luke Kenneth Casson Leighton
On Sun, Jun 26, 2011 at 3:11 PM, martin f krafft madd...@debian.org wrote:
 also sprach Luke Kenneth Casson Leighton luke.leigh...@gmail.com 
 [2011.06.26.1241 +0200]:
  * is there an option to mdadm to make it display UUIDs instead of or
 as well as the disk name?

 mdadm -Es

 oo!  yaay!  there is, however, no mention of the fact that these
options display UUIDs, and, confusingly, -s is listed as only working
with the -R option... oh wait, that's for Incremental Assembly mode
(eek!)  ok, so -s (or --scan), scan /proc/mdstat, and -E for show
components.



  * also, how about making mention of how mdadm works, in the man page
 somewhere reaaasonably prominently?

 Search manpage for partitions.

 that's odd.  i read around each part (man mdadm^M /partitions^M),
paragraph back and forwards: no mention of the UUIDs of drive
components of an array was clearly evident.

 Please suggest patches if you find the information insufficient.

 ok.  feeling slightly overwhelmed by the task, my lack of knowledge
on the detailed workings of mdadm somewhat getting in the way, but
i'll do my best.

 l.


 --
  .''`.   martin f. krafft madduck@d.o      Related projects:
 : :'  :  proud Debian developer               http://debiansystem.info
 `. `'`   http://people.debian.org/~madduck    http://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems

 this product is under strict quality contril with perfect packing and
 quality when leving the factory.please keep away from damp.high
 temperature or sun expose.If found any detectives when purchasing.
 please return the productby airmail to our administration section and
 inform the time, place.and store of this purchase for our
 improvement.We shall give you a satisfactory reply.Thanks for your
 patronage and welcome your comments.
                                             -- http://www.engrish.com






Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Andrew McGlashan

Luke Kenneth Casson Leighton wrote:

 yes, mdadm names its RAID drives by UUID (as can clearly be seen in
/etc/mdadm/mdadm.conf) but does it *also* refer to its *COMPONENT*
drives (internally, and non-obviously, and undocumentedly) by UUID and
then report to the outside world that it's using whatever name
(/dev/sdX) which can, under these external-drives scenario, change.

 l.


The other thing is that both drives in the array have the same UUID, so 
you need to be able to tell them apart some way or another and the 
/dev/sd* view is just fine.



And this works fine too fwiw :

  mdadm -D /dev/disk/by-id/md-uuid-*

So long as mdadm can determine the drives in use, I don't care how it 
uses them internally.  However, if a drive goes bad, then I need to know 
which one.


Let's say that /dev/sda has gone bad of a two drive RAID1 array; I can 
visually detect the drive by doing the following:


dd if=/dev/sda of=/dev/null

Go look to see which drive is busy [hopefully it will show with a 
flashing activity LED] and I can see which one has failed -- if that 
doesn't work, then I can reverse the test and try all drives that are 
meant to be okay to eliminate them.


--
Kind Regards
AndrewM

Andrew McGlashan
Broadband Solutions now including VoIP





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread martin f krafft
also sprach Luke Kenneth Casson Leighton luke.leigh...@gmail.com 
[2011.06.26.1634 +0200]:
  Search manpage for partitions.
 
  that's odd.  i read around each part (man mdadm^M /partitions^M),
  paragraph back and forwards: no mention of the UUIDs of drive
  components of an array was clearly evident.

I was not trying to suggest that there was a mention of the UUIDs.
mdadm's manpage only mentions /proc/partitions; it scans that file
and then looks for UUIDs on each of the listed partitions, building
a list indexed by UUID [0]. This is called scanning.

And now it has the list of devices (partitions) that constitute
individual arrays identified by UUID…

0. not sure this is the actual implementation…

  Please suggest patches if you find the information insufficient.
 
  ok.  feeling slightly overwhelmed by the task, my lack of
  knowledge on the detailed workings of mdadm somewhat getting in
  the way, but i'll do my best.

I'll try my best to provide feedback.

Thanks,

-- 
 .''`.   martin f. krafft madduck@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
no, 'eureka' is greek for 'this bath is too hot.'
-- dr. who


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/sig-policy/999bbcc4/current)


Re: mdadm and UUIDs for its component drives

2011-06-26 Thread William Hopkins
On 06/27/11 at 01:02am, Andrew McGlashan wrote:
 Luke Kenneth Casson Leighton wrote:
  yes, mdadm names its RAID drives by UUID (as can clearly be seen in
 /etc/mdadm/mdadm.conf) but does it *also* refer to its *COMPONENT*
 drives (internally, and non-obviously, and undocumentedly) by UUID and
 then report to the outside world that it's using whatever name
 (/dev/sdX) which can, under these external-drives scenario, change.
 
  l.
 
 
 Let's say that /dev/sda has gone bad of a two drive RAID1 array; I
 can visually detect the drive by doing the following:
 
 dd if=/dev/sda of=/dev/null
 
 Go look to see which drive is busy [hopefully it will show with a
 flashing activity LED] and I can see which one has failed -- if that
 doesn't work, then I can reverse the test and try all drives that
 are meant to be okay to eliminate them.

It seems to me that you'd be well served by simply using the UUID (by-uuid, not
by-id) in all things, including mounting and managing. Then you would never
need to figure out which disk sda was, you could just figure out which disk the
UUID was (and you'd only have to learn it once).

-- 
Liam


signature.asc
Description: Digital signature


Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Luke Kenneth Casson Leighton
On Sun, Jun 26, 2011 at 4:26 PM, martin f krafft madd...@debian.org wrote:
 also sprach Luke Kenneth Casson Leighton luke.leigh...@gmail.com 
 [2011.06.26.1634 +0200]:
  Search manpage for partitions.

  that's odd.  i read around each part (man mdadm^M /partitions^M),
  paragraph back and forwards: no mention of the UUIDs of drive
  components of an array was clearly evident.

 I was not trying to suggest that there was a mention of the UUIDs.
 mdadm's manpage only mentions /proc/partitions; it scans that file
 and then looks for UUIDs on each of the listed partitions, building
 a list indexed by UUID [0].

 ahh, ok.  that's very cool.

 And now it has the list of devices (partitions) that constitute
 individual arrays identified by UUID…

 0. not sure this is the actual implementation…

 :)

  Please suggest patches if you find the information insufficient.

  ok.  feeling slightly overwhelmed by the task, my lack of
  knowledge on the detailed workings of mdadm somewhat getting in
  the way, but i'll do my best.

 I'll try my best to provide feedback.

 thx.





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Andrew McGlashan

Hi,

Luke Kenneth Casson Leighton wrote:

 well.  that was nice.  the scenario you describe is precisely what i
sort-of had planned, but didn't have the expertise to do so was going
to recommend just two drives and then rsync to the other two.

 _however_, given that you've solved exactly what is needed / best /
recommended for when you have 4 external drives (which i do) that's
bloody fantastic :)


Great, hope it helps!


 ok, i bring in phil now, who i was talking to yesterday about this.
what he said was (and i may get this wrong: it only went in partly) -
something along the lines of remember to build the drives with
individual mdadm bitmaps enabled.  this will save a great deal of
arseing about when re-adding drives which didn't get properly added:
only 1/2 a 1Tb drive will need syncing, not an entire drive :)  the
bitmap system he says has hierarchical granularity apparently.


I just added a bitmap; more info here [1] [2] -- worth a read.  You 
can add and remove them, and sometimes you must remove them ... for 
instance, when growing the array.


Quote from [2] reference:

"In some configurations you may not be able to grow the array until you 
have removed the internal bitmap. You can add the bitmap back again 
after the array has been grown."


It also seems best to use an internal bitmap, from my reading, so 
this is what I did.  Tip:


# mdadm --grow --bitmap=internal /dev/md0
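A minimal sketch of the grow sequence the quote above implies -- everything here is a placeholder (it assumes root, mdadm installed, an assembled /dev/md0, and the target device count of 3 is purely illustrative):

```shell
# Sketch only -- assumes root, mdadm, and an assembled array as $1.
grow_with_bitmap() {
    mdadm --grow --bitmap=none "$1"       # drop the internal bitmap first
    mdadm --grow --raid-devices=3 "$1"    # e.g. grow the mirror to 3 devices
    mdadm --grow --bitmap=internal "$1"   # then add the bitmap back
}
# e.g. grow_with_bitmap /dev/md0
```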


also, he recommended taking at least one of the external drives *out*
of its external box and making it *internal*.  the reason for that is
that a) if the drives ever get powered down accidentally (e.g. by
cleaning ladies) you're fd b) if you move a drive or two
internally, it's possible to prioritise those drives as reading
ones, and to make the external ones write priority.  so, the data
gets read from the internal one, and changes get propagated to all
drives.


Yes, that sounds like a good idea(tm) ... worth considering, but right 
now I prefer all the RAID drive members external on this particular 
machine.  The drives and machine are all on a suitable UPS and there is 
no cleaning lady to worry about -- my wife won't touch the drives either.


Down the track, I'm sure to move to USB 3.0 and maybe even further down 
the track to external PCI Express ... [3], which looks interesting.  And 
more so in the future, IBM Racetrack memory [4] which looks to replace 
the drive as we know it today.



[1] https://raid.wiki.kernel.org/index.php/Write-intent_bitmap

[2] http://en.wikipedia.org/wiki/Mdadm

[3]
http://www.zdnet.com/blog/computers/look-out-thunderbolt-external-pci-express-spec-being-developed/6220?tag=nl.e539

[4] http://www.youtube.com/watch?v=q5jRHZWQ0sc

--
Kind Regards
AndrewM





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Tom H
On Sun, Jun 26, 2011 at 6:41 AM, Luke Kenneth Casson Leighton
luke.leigh...@gmail.com wrote:

 is there an option to mdadm to make it display UUIDs instead of or
 as well as the disk name?

mdadm --examine /dev/sdXY gives you the device and the array UUIDs.

mdadm --examine --scan gives you the array UUID(s).

mdadm --detail /dev/mdZ gives you the array UUID.

mdadm --detail --scan gives you the array UUID(s).
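As a hedged sketch tying the four commands together (not run here: all of them need root, mdadm installed, and a real array, and the exact output format varies between mdadm versions), they could be wrapped in one helper:

```shell
# Sketch only: wraps the four UUID queries listed above.
# $1 = a member partition (e.g. /dev/sdXY), $2 = the array (e.g. /dev/mdZ);
# both arguments are placeholders.
show_uuids() {
    mdadm --examine "$1"       # member device: its UUID and the array's
    mdadm --examine --scan     # every array's UUID, mdadm.conf style
    mdadm --detail "$2"        # one assembled array's UUID
    mdadm --detail --scan      # every assembled array's UUID
}
# e.g. show_uuids /dev/sda1 /dev/md0
```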





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Tom H
On Sun, Jun 26, 2011 at 9:55 AM, Luke Kenneth Casson Leighton
luke.leigh...@gmail.com wrote:
 On Sun, Jun 26, 2011 at 2:25 PM, Andrew McGlashan
 andrew.mcglas...@affinityvision.com.au wrote:

 Anyway the long and short of it is, I can use mdadm without regard to
 what devices are found, such as /dev/sda /dev/sdb /dev/sdc and the like as I
 rely purely on the UUID functionality, which as you know, mdadm handles
 perfectly well.  ;-)

  :)

  ... you see, this is the bit that has me concerned.  /dev/mdN can
 be referred to by its unique UUID, but that's *not* what i'm referring
 to.  and, from what you're saying, you appear to be implying that yes,
 the external drives can pop up as /dev/sda through /dev/sdc and be
 confused - and thus it is pure luck (or actually design) that the
 drives *happen* to all be part of the same identical RAID-1 mirroring
 array.

  so i realise martin that you've already answered, but it would be
 really good if you could explicitly confirm:

  yes, mdadm names its RAID drives by UUID (as can clearly be seen in
 /etc/mdadm/mdadm.conf) but does it *also* refer to its *COMPONENT*
 drives (internally, and non-obviously, and undocumentedly) by UUID and
 then report to the outside world that it's using whatever name
 (/dev/sdX) which can, under these external-drives scenario, change.

I've never plugged a USB drive into a mdadm'd box so I'm trying to get
my head around this - and check whether I'm understanding you
correctly.

You have / set up as a RAID 1 array md0 with sda1 and sdb1 as its components.

You plug in a USB drive and its first/only partition becomes sda1.

mdadm tries to assemble the USB drive's sda1 with sdb1 (whether it's
the old sda1 or the old sdb1) even though sda1 doesn't have md0's
(array) UUID in its metadata, let alone any mdadm metadata!





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Tom H
On Sun, Jun 26, 2011 at 11:29 AM, William Hopkins we.hopk...@gmail.com wrote:

 It seems to me that you'd be well served by simply using the UUID (by-uuid, 
 not
 by-id) in all things, including mounting and managing. Then you would never
 need to figure out which disk sda was, you could just figure out which disk 
 the
 UUID was (and you'd only have to learn it once).

There are UUIDs and UUIDs.

For an array md0 with sda1 and sdb1 as its components.

blkid /dev/md0 returns the filesystem UUID.

mdadm --detail /dev/md0 returns the mdadm UUID of the array.

blkid /dev/sda1 returns the mdadm UUID of the array.

mdadm --examine /dev/sda1 returns mdadm UUIDs of the array and the
partition. (I've never seen the mdadm UUID of a partition be used for
anything. Can an array be assembled by referring to an mdadm UUID of a
partition to add a partition? Would it make any sense?!)

The /dev/disk/by-id/ symlinks are the most stable ones (for a
specific disk) should anyone want to use them because they're hardware
IDs. I don't have an mdadm'd box at hand to check but I think that
md0's entry in this directory includes the mdadm array UUID of md0
because md0 doesn't have a real hardware ID. So, for md0,
/dev/disk/by-id and /dev/disk/by-uuid are equivalent.
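The distinction can be seen in blkid-style output. The two lines below are illustrative only (both UUIDs are hypothetical): a member partition reports the *array's* mdadm UUID with TYPE `linux_raid_member`, while md0 itself reports the *filesystem* UUID:

```shell
# Illustrative blkid-style lines only -- both UUIDs are hypothetical.
member='/dev/sda1: UUID="06fd3d46-1111-2222-3333-444444444444" TYPE="linux_raid_member"'
array='/dev/md0: UUID="4efe50a9-0915-4022-8446-a036607492b7" TYPE="ext4"'

# Extract the device name and TYPE from each line with parameter expansion.
for line in "$member" "$array"; do
    dev=${line%%:*}                          # text before the first ':'
    type=${line##*TYPE=\"}; type=${type%\"}  # text inside TYPE="..."
    echo "$dev -> $type"
done
# → /dev/sda1 -> linux_raid_member
# → /dev/md0 -> ext4
```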





Re: mdadm and UUIDs for its component drives

2011-06-26 Thread martin f krafft
also sprach Tom H tomh0...@gmail.com [2011.06.26.2328 +0200]:
 mdadm --examine /dev/sda1 returns mdadm UUIDs of the array and
 the partition. (I've never seen the mdadm UUID of a partition be
 used for anything. Can an array be assembled by referring to an
 mdadm UUID of a partition to add a partition? Would it make any
 sense?!)

Partitions do not have UUIDs. What you are seeing are the MD UUIDs
stored in the superblock of the sda1 device.

-- 
 .''`.   martin f. krafft madduck@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
a woman is like your shadow;
 follow her, she flies;
 fly from her, she follows.
-- sébastien-roch-nicolas chamfort


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/sig-policy/999bbcc4/current)


Re: mdadm and UUIDs for its component drives

2011-06-26 Thread Andrew McGlashan

Tom H wrote:

You have / set up as a RAID 1 array md0 with sda1 and sdb1 as its components.


No, / would be on an internal drive.  Right now that is not the concern, 
as it has nothing to do with the external drive array(s) in question for 
this issue.


--
Kind Regards
AndrewM





Re: Understanding LVM UUIDS

2010-07-02 Thread Daniel Barclay

Aaron Toponce wrote:

UUIDs are unique to the device/filesystem. 


Are these (disk) UUIDs stored somewhere in the partition (in the
filesystem), or are they stored at or generated from a lower level?

In particular, if one used dd to copy the contents (a file system) of
one partition to another partition, does the target partition end up
with the same UUID or a different UUID?  If you did a byte-for-byte
copy of an entire disk to another disk of the exact same size (and
model, but different serial number), would the UUIDs of partitions
change or still be the same on the target disk?)

(Is it similar to, or different than, the situation with filesystem
labels (specifically, that if you copied a partition as above you'd
end up with two partitions with the same label, and you'd have to
change one (or otherwise deal with the non-uniqueness))?)

Thanks,
Daniel





Re: Understanding LVM UUIDS

2010-07-02 Thread Ron Johnson

On 07/02/2010 12:40 PM, Daniel Barclay wrote:

Aaron Toponce wrote:


UUIDs are unique to the device/filesystem.


Are these (disk) UUIDs stored somewhere in the partition (in the
filesystem), or are they stored at or generated from a lower level?


In the superblock.

# dumpe2fs -h /dev/mapper/main_huge_vg-main_huge_lv
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   BIG_LV
Last mounted on:  /data/big
Filesystem UUID:  4efe50a9-0915-4022-8446-a036607492b7
Filesystem magic number:  0xEF53
Filesystem revision #:1 (dynamic)
Filesystem features:  has_journal ext_attr resize_inode 
dir_index filetype needs_recovery extent flex_bg sparse_super 
large_file huge_file uninit_bg dir_nlink extra_isize

Filesystem flags: signed_directory_hash
Default mount options:(none)
Filesystem state: clean
Errors behavior:  Continue
Filesystem OS type:   Linux
Inode count:  241491968
Block count:  965967872
Reserved block count: 48298393
Free blocks:  323162178
Free inodes:  241184065
First block:  0
Block size:   4096
Fragment size:4096
Reserved GDT blocks:  793
Blocks per group: 32768
Fragments per group:  32768
Inodes per group: 8192
Inode blocks per group:   512
Flex block group size:16
Filesystem created:   Thu Jul 23 19:32:26 2009
Last mount time:  Sat May 29 08:46:56 2010
Last write time:  Sat May 29 08:46:56 2010
Mount count:  4
Maximum mount count:  26
Last checked: Sat May  8 21:02:48 2010
Check interval:   15552000 (6 months)
Next check after: Thu Nov  4 21:02:48 2010
Lifetime writes:  1825 GB
Reserved blocks uid:  0 (user root)
Reserved blocks gid:  0 (group root)
First inode:  11
Inode size:   256
Required extra isize: 28
Desired extra isize:  28
Journal inode:8
Default directory hash:   half_md4
Directory Hash Seed:  1a030567-e52e-4f1e-bc60-31a288283802
Journal backup:   inode blocks
Journal features: journal_incompat_revoke
Journal size: 128M
Journal length:   32768
Journal sequence: 0x00032ed5
Journal start:12599
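Since the UUID lives in the filesystem superblock, a byte-for-byte dd copy (as asked earlier in the thread) carries the UUID along, exactly like a duplicated label. A sketch, not run here, of giving an ext2/3/4 clone a fresh UUID with e2fsprogs -- the device names are placeholders and the whole thing assumes root:

```shell
# Sketch with placeholder devices; requires root and e2fsprogs.
clone_and_reuuid() {
    dd if="$1" of="$2" bs=4M   # byte copy: the filesystem UUID comes along
    tune2fs -U random "$2"     # assign the clone a new random UUID
    blkid "$2"                 # confirm the copy now differs from the source
}
# e.g. clone_and_reuuid /dev/sdX1 /dev/sdY1
```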



In particular, if one used dd to copy the contents (a file system) of
one partition to another partition, does the target partition end up
with the same UUID or a different UUID? If you did a byte-for-byte
copy of an entire disk to another disk of the exact same size (and
model, but different serial number), would the UUIDs of partitions
change or still be the same on the target disk?)

(Is it similar to, or different than, the situation with filesystem
labels (specifically, that if you copied a partition as above you'd
end up with two partitions with the same label, and you'd have to
change one (or otherwise deal with the non-uniqueness))?)

Thanks,
Daniel





--
Seek truth from facts.





Re: Understanding LVM UUIDS

2010-06-25 Thread Aaron Toponce
On 06/23/2010 03:30 PM, Camaleón wrote:
> On Wed, 23 Jun 2010 13:02:36 -0600, Aaron Toponce wrote:
>> Whether or not these are his reasons, I can tell you why that is a wise
>> move. UUIDs are unique to the device/filesystem. The major advantage of
>> using UUIDs is that you don't have to worry about the kernel reordering
>> disks when it sees them in a different order than before.
>
> Yes, I know.
>
> But if the installer has set up the old naming method as the default (on
> its own), and I am not experiencing any problem with that, I surely
> won't change it. If it ain't broke, don't fix it.

Sure. But you can also avoid breakage through proper administration.

>> This isn't recommended, because if the Linux kernel developers change
>> drivers and the drives become a new device (just as happened when
>> ditching the PATA driver for SATA, and /dev/hda became /dev/sda), your
>> partitions/volumes won't mount. Instead, you should be using either
>> LABELs or UUIDs.
>
> I know, I know... but the Lenny developers decided to go this way for
> some reason and I will respect that. I'm aware that nowadays any modern
> distribution uses uuid or id, at least in /etc/fstab, but as I said, I
> still have not seen any good reason to change it.

So, you blindly accept what the developers think is good for your
system? I understand they're developers for a reason, but even
developers make mistakes. And having /dev/sd?? in your /etc/fstab just
might be one of them.

FWIW, when the kernel switched disk drivers from PATA to SATA for
identifying IDE drives, I had already moved my /etc/fstab to UUIDs, and
I didn't have a problem with the upgrade. Friends of mine, however, had
to rescue their systems because they wouldn't boot. To each their own.
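(As a sketch, such a UUID-based /etc/fstab entry looks like the
following; the UUID value, device name, and mount options here are made
up, and `blkid` or `dumpe2fs -h` reports the real UUID:)

```
# Before: device name, which can change when the kernel reorders disks
# /dev/sda1  /  ext3  errors=remount-ro  0  1
# After: UUID, which is stored in the filesystem itself and survives reordering
UUID=6c96b6b0-6df7-41dc-8d23-1f3ef43f47d5  /  ext3  errors=remount-ro  0  1
```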

-- 
. O .   O . O   . . O   O . .   . O .
. . O   . O O   O . O   . O O   . . O
O O O   . O .   . O O   O O .   O O O





Re: Understanding LVM UUIDS

2010-06-25 Thread Camaleón
On Fri, 25 Jun 2010 07:42:28 -0600, Aaron Toponce wrote:

> On 06/23/2010 03:30 PM, Camaleón wrote:
>> On Wed, 23 Jun 2010 13:02:36 -0600, Aaron Toponce wrote:
>>> Whether or not these are his reasons, I can tell you why that is a
>>> wise move. UUIDs are unique to the device/filesystem. The major
>>> advantage of using UUIDs is that you don't have to worry about the
>>> kernel reordering disks when it sees them in a different order than
>>> before.
>>
>> Yes, I know.
>>
>> But if the installer has set up the old naming method as the default
>> (on its own), and I am not experiencing any problem with that, I
>> surely won't change it. If it ain't broke, don't fix it.
>
> Sure. But you can also avoid breakage through proper administration.

Or you can generate further problems... who knows.

The point here is that things are working properly with the current 
configuration, so why change it?

>>> This isn't recommended, because if the Linux kernel developers change
>>> drivers and the drives become a new device (just as happened when
>>> ditching the PATA driver for SATA, and /dev/hda became /dev/sda), your
>>> partitions/volumes won't mount. Instead, you should be using either
>>> LABELs or UUIDs.
>>
>> I know, I know... but the Lenny developers decided to go this way for
>> some reason and I will respect that. I'm aware that nowadays any modern
>> distribution uses uuid or id, at least in /etc/fstab, but as I said, I
>> still have not seen any good reason to change it.
>
> So, you blindly accept what the developers think is good for your
> system?

Sure! :-) 

I expect developers to make the right (and conscious) decisions on these
kinds of issues. They know (or should know) the particulars of every
released version better than I do, and the choice to go one direction or
another is theirs.

> I understand they're developers for a reason, but even developers make
> mistakes. And having /dev/sd?? in your /etc/fstab just might be one of
> them.
>
> FWIW, when the kernel switched disk drivers from PATA to SATA for
> identifying IDE drives, I had already moved my /etc/fstab to UUIDs, and
> I didn't have a problem with the upgrade. Friends of mine, however, had
> to rescue their systems because they wouldn't boot. To each their own.

I will change it as soon as I get any problems, I promise :-)

Greetings,

-- 
Camaleón


Archive: http://lists.debian.org/pan.2010.06.25.14.14...@gmail.com



Re: Understanding LVM UUIDS

2010-06-24 Thread Paul E Condon
On 20100623_054331, Ron Johnson wrote:
> On 06/23/2010 05:20 AM, Javier Barroso wrote:
>> On Wed, Jun 23, 2010 at 10:40 AM, Alan Chandler
>> a...@chandlerfamily.org.uk wrote:
>>> I feel I should move my entire /etc/fstab over to using uuids
>>
>> [snip]
>>
>>> Which do I use, and what does the other one mean?
>>
>> Get your uuid from dumpe2fs -h /dev/vg/lv | grep UUID
>>
>> But /dev/vg/lv is a persistent name, so no sense changing it to uuid,
>> or maybe I'm missing something?
>
> Or use labels.

In squeeze, a recent revision in pata support seemed to introduce
rewriting /etc/fstab to reference all file systems via UUID. Before
that, I had constructed a system using labels, which was totally
clobbered by the pata upgrade (which also suppressed all mention of
/dev/hd[ab][123]).

I don't use LVM. Perhaps using LVM protects you from 'upgrades' from
ata to pata. Or from future upgrades in pata support. But I'm inclined
to believe that we are in for a spate of instability as pata is worked
out in all its unintended ramifications.

FYI, the UUID is just a 128-bit number stored in a 16-byte space in the
superblock of the ext[234] file system. Mostly it is set to a random
value by mke2fs when the fs is originally created, but it can also be
set to a user-designated value with tune2fs -U UUID device.
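(To illustrate that last point on a file-backed image rather than a real
device — a sketch assuming e2fsprogs is installed; the image name and
the UUID value are made up:)

```shell
# Build a small ext2 filesystem in a plain file; no root access needed
dd if=/dev/zero of=demo.img bs=1M count=4 2>/dev/null
mke2fs -q -F -t ext2 demo.img
# Overwrite the randomly generated UUID with a user-designated value
tune2fs -U 6c96b6b0-6df7-41dc-8d23-1f3ef43f47d5 demo.img
# The superblock now carries the new UUID
dumpe2fs -h demo.img 2>/dev/null | grep 'Filesystem UUID'
```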


-- 
Paul E Condon   
pecon...@mesanetworks.net


Archive: http://lists.debian.org/20100624194611.gg3...@big.lan.gnu


