Re: LVM activation on boot hangs after crossgrade

2022-03-25 Thread Reiner Buehl
It seems that the command /sbin/lvm pvscan --cache --activate ay 9:0 from
the systemd unit lvm2-pvscan@9:0.service is hanging and blocking all
subsequent LVM activity.
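A hedged sketch of commands that may help narrow this down, assuming systemd
and that the stale lock files mentioned below are what blocks the manual
tools (the VG name vg_data is only inferred from the V_vg_data lock file):

  systemctl list-jobs                         # which start jobs are still waiting
  systemctl status lvm2-pvscan@9:0.service    # state of the hanging pvscan unit
  ps aux | grep '[p]vscan'                    # is the pvscan process itself stuck?
  # only once the stuck pvscan is gone, clear the stale locks and retry activation
  rm /var/lock/lvm/P_global /var/lock/lvm/V_vg_data
  vgchange -ay vg_data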

On Fri, Mar 25, 2022 at 20:40, Reiner Buehl <
reiner.bu...@gmail.com> wrote:

> Hi all,
>
> I am crossgrading my Debian Buster system from i386 to amd64 following the
> guide from the Debian Wiki (https://wiki.debian.org/CrossGrading). After
> the apt full-upgrade I rebooted, but now my LVM volumes are no longer
> activated.
> The LVM event activation by systemd times out after 90s. I can boot the
> system if I comment out the filesystem that is on LVM (I only have one VG
> with one logical volume), but I would like to get that one back.
> The system creates lock files P_global and V_vg_data in /var/lock/lvm
> which prevent manual activation. Tools like vgscan or vgdisplay hang
> because they can't get the lock.
>
> What can I do to resolve this?
>
> Best regards,
> Reiner
>


Re: LVM passphrase

2021-12-30 Thread Polyna-Maude Racicot-Summerside
Hi Andrew,

On 2021-12-28 5:00 p.m., Andrew M.A. Cater wrote:
> On Wed, Dec 29, 2021 at 08:55:29AM +1100, David wrote:
>> On Tue, 28 Dec 2021 at 21:06, Pierre-Elliott Bécue  wrote:
>>> Polyna-Maude Racicot-Summerside  wrote on 
>>> 28/12/2021 at 07:39:16+0100:
>>
 I got two logical volume on my hard disk.
 One is the swap
 Other is the root
 Both have the same passphrase.
 How can I make grub ask only once ?
>>

> Encrypting boot partitions would be hard - how would you get to the
> point of entering a passphrase ... this is why "encrypted LVM setup" _doesn't_
> encrypt boot in the default settings from the Debian partitioner.
> 
My boot partition is not encrypted.
I created the same scheme as Debian usually does for beginners (one
partition for everything), except I wanted a larger swap space.
Now it asks twice for the passphrase.

I have one partition (/boot, sda1) + another partition (logical, sda5).
I have one volume group.
I have two logical volumes, one being the swap (16 GB) and the other one
being my root (760 GB). Would 6 GB RAM + 16 GB swap be enough for a
simple laptop used for copying files from my cameras and doing basic
work on photos (the big stuff is done on my desktop)?

>> If we are talking about somehow using both LVM and LUKS
>> in combination, then decrypting a single LUKS volume that
>> has been partitioned into root and swap with LVM will only
>> require one password given once to the init started by the
>> initrd, when booting the system.
>>
> 
> This is why the encrypted LVM setup in Debian has an unencrypted boot
> and swap is contained within the single encrypted volume, I think
> 
>> Maybe providing the output of 'lsblk -f' would help to clarify
>> the situation, so that we can see what is on the disk.
>>
I will do so...
> 
> Hope this helps - all best, as ever,
> 
> Andy Cater 
> 

-- 
Polyna-Maude R.-Summerside
-Be smart, Be wise, Support opensource development





Re: LVM passphrase

2021-12-29 Thread Pierre-Elliott Bécue

David  wrote on 28/12/2021 at 22:55:29+0100:

> On Tue, 28 Dec 2021 at 21:06, Pierre-Elliott Bécue  wrote:
>> Polyna-Maude Racicot-Summerside  wrote on 28/12/2021 
>> at 07:39:16+0100:
>
>> > I got two logical volume on my hard disk.
>> > One is the swap
>> > Other is the root
>> > Both have the same passphrase.
>> > How can I make grub ask only once ?
>
>> First, for the sake of clarity, I guess you are talking about LUKS
>> filesystems on logical volumes?
>>
>> If so, I guess you're not dealing with grub but with initramfs scripts
>> and then init asking for passphrases. Indeed, GRUB only asks the
>> passphrase of a potential encrypted /boot to fetch its configuration in
>> order to know what to boot.
>>
>> Now let's move to the initramfs + init passphrases prompts. Initramfs'
>> job is to find the root partition and "pivot" on it, ie exec /sbin/init
>> which is located on the root partition and which will mount the other
>> filesystems, start services, … you know the drill.
>>
>> To find the root partition, initramfs has a lot of helper scripts, and
>> if the root partition is encrypted, it also has access to cryptsetup
>> binaries and passfifo. It therefore prompts for a password to decrypt
>> your rootfs.
>>
>> Later on, init wants to make your swap available and therefore also
>> needs to ask you for a passphrase.
>
> I am not clear exactly what is being asked here. Is the question
> about Grub asking for passwords, or about the initrd asking
> for passwords? Grub will ask before booting the kernel, the
> initrd will ask after Grub invokes the kernel.
>
> I don't know about Grub asking for passwords, because I don't
> encrypt boot partitions. But if the question is about the initrd
> password prompt, then ...
>
> If we are talking about somehow using both LVM and LUKS
> in combination, then decrypting a single LUKS volume that
> has been partitioned into root and swap with LVM will only
> require one password given once to the init started by the
> initrd, when booting the system.
>
> Maybe providing the output of 'lsblk -f' would help to clarify
> the situation, so that we can see what is on the disk.

I think my answer covers most of the cases.

Polyna-Maude is free to come back at us in case more help is needed.

Cheers,
-- 
PEB




Re: LVM passphrase

2021-12-28 Thread Andy Smith
Hello,

On Tue, Dec 28, 2021 at 10:00:51PM +, Andrew M.A. Cater wrote:
> On Wed, Dec 29, 2021 at 08:55:29AM +1100, David wrote:
> > I don't know about Grub asking for passwords, because I don't
> > encrypt boot partitions. But if the question is about the initrd
> > password prompt, then ...
> 
> Encrypting boot partitions would be hard - how would you get to the
> point of entering a passphrase ... this is why "encrypted LVM setup" _doesn't_
> encrypt boot in the default settings from the Debian partitioner.

grub2 does support unlocking LUKS so some people do encrypt /boot
and have grub2 unlock it, but this isn't yet supported in the Debian
installer so it seems unlikely that Polyna-Maude has done this.

https://cryptsetup-team.pages.debian.net/cryptsetup/encrypted-boot.html

If Polyna-Maude *has* done this, then the above link does also give
some hints as to how to reduce the number of times a passphrase is
asked.

Otherwise, if the use of LUKS is more conventional (unencrypted
/boot, initramfs unlocks /), then Polyna-Maude may want to look into
an ephemeral passphrase for swap that is set on every boot. Or perhaps
just use a swapfile inside / so as not to have an extra block
device to encrypt.
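A minimal sketch of the ephemeral-passphrase approach, assuming swap sits on
its own LV or partition (the device name below is made up) and that nothing,
such as hibernation, needs the swap contents to survive a reboot:

  # /etc/crypttab -- re-key swap with a fresh random passphrase on every boot
  cryptswap  /dev/vgmain/swap  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

  # /etc/fstab -- refer to the mapper device, since its contents change each boot
  /dev/mapper/cryptswap  none  swap  sw  0  0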

Possibly more information is needed as to what the OP's setup actually
is.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: LVM passphrase

2021-12-28 Thread Andrew M.A. Cater
On Wed, Dec 29, 2021 at 08:55:29AM +1100, David wrote:
> On Tue, 28 Dec 2021 at 21:06, Pierre-Elliott Bécue  wrote:
> > Polyna-Maude Racicot-Summerside  wrote on 
> > 28/12/2021 at 07:39:16+0100:
> 
> > > I got two logical volume on my hard disk.
> > > One is the swap
> > > Other is the root
> > > Both have the same passphrase.
> > > How can I make grub ask only once ?
> 
> > First, for the sake of clarity, I guess you are talking about LUKS
> > filesystems on logical volumes?
> >
> > If so, I guess you're not dealing with grub but with initramfs scripts
> > and then init asking for passphrases. Indeed, GRUB only asks the
> > passphrase of a potential encrypted /boot to fetch its configuration in
> > order to know what to boot.
> >
> > Now let's move to the initramfs + init passphrases prompts. Initramfs'
> > job is to find the root partition and "pivot" on it, ie exec /sbin/init
> > which is located on the root partition and which will mount the other
> > filesystems, start services, … you know the drill.
> >
> > To find the root partition, initramfs has a lot of helper scripts, and
> > if the root partition is encrypted, it also has access to cryptsetup
> > binaries and passfifo. It therefore prompts for a password to decrypt
> > your rootfs.
> >
> > Later on, init wants to make your swap available and therefore also
> > needs to ask you for a passphrase.
> 
> I am not clear exactly what is being asked here. Is the question
> about Grub asking for passwords, or about the initrd asking
> for passwords? Grub will ask before booting the kernel, the
> initrd will ask after Grub invokes the kernel.
> 
> I don't know about Grub asking for passwords, because I don't
> encrypt boot partitions. But if the question is about the initrd
> password prompt, then ...
> 

Encrypting boot partitions would be hard - how would you get to the
point of entering a passphrase ... this is why "encrypted LVM setup" _doesn't_
encrypt boot in the default settings from the Debian partitioner.

> If we are talking about somehow using both LVM and LUKS
> in combination, then decrypting a single LUKS volume that
> has been partitioned into root and swap with LVM will only
> require one password given once to the init started by the
> initrd, when booting the system.
> 

This is why the encrypted LVM setup in Debian has an unencrypted boot
and swap is contained within the single encrypted volume, I think

> Maybe providing the output of 'lsblk -f' would help to clarify
> the situation, so that we can see what is on the disk.
>

Hope this helps - all best, as ever,

Andy Cater 



Re: LVM passphrase

2021-12-28 Thread David
On Tue, 28 Dec 2021 at 21:06, Pierre-Elliott Bécue  wrote:
> Polyna-Maude Racicot-Summerside  wrote on 28/12/2021 
> at 07:39:16+0100:

> > I got two logical volume on my hard disk.
> > One is the swap
> > Other is the root
> > Both have the same passphrase.
> > How can I make grub ask only once ?

> First, for the sake of clarity, I guess you are talking about LUKS
> filesystems on logical volumes?
>
> If so, I guess you're not dealing with grub but with initramfs scripts
> and then init asking for passphrases. Indeed, GRUB only asks the
> passphrase of a potential encrypted /boot to fetch its configuration in
> order to know what to boot.
>
> Now let's move to the initramfs + init passphrases prompts. Initramfs'
> job is to find the root partition and "pivot" on it, ie exec /sbin/init
> which is located on the root partition and which will mount the other
> filesystems, start services, … you know the drill.
>
> To find the root partition, initramfs has a lot of helper scripts, and
> if the root partition is encrypted, it also has access to cryptsetup
> binaries and passfifo. It therefore prompts for a password to decrypt
> your rootfs.
>
> Later on, init wants to make your swap available and therefore also
> needs to ask you for a passphrase.

I am not clear exactly what is being asked here. Is the question
about Grub asking for passwords, or about the initrd asking
for passwords? Grub will ask before booting the kernel, the
initrd will ask after Grub invokes the kernel.

I don't know about Grub asking for passwords, because I don't
encrypt boot partitions. But if the question is about the initrd
password prompt, then ...

If we are talking about somehow using both LVM and LUKS
in combination, then decrypting a single LUKS volume that
has been partitioned into root and swap with LVM will only
require one password given once to the init started by the
initrd, when booting the system.

Maybe providing the output of 'lsblk -f' would help to clarify
the situation, so that we can see what is on the disk.



Re: LVM passphrase

2021-12-28 Thread Pierre-Elliott Bécue

Polyna-Maude Racicot-Summerside  wrote on 28/12/2021 at 
07:39:16+0100:

> Hi,
> I got two logical volume on my hard disk.
> One is the swap
> Other is the root
> Both have the same passphrase.
> How can I make grub ask only once ?
> Thanks

Hi,

First, for the sake of clarity, I guess you are talking about LUKS
filesystems on logical volumes?

If so, I guess you're not dealing with grub but with initramfs scripts
and then init asking for passphrases. Indeed, GRUB only asks the
passphrase of a potential encrypted /boot to fetch its configuration in
order to know what to boot.

Now let's move to the initramfs + init passphrases prompts. Initramfs'
job is to find the root partition and "pivot" on it, ie exec /sbin/init
which is located on the root partition and which will mount the other
filesystems, start services, … you know the drill.

To find the root partition, initramfs has a lot of helper scripts, and
if the root partition is encrypted, it also has access to cryptsetup
binaries and passfifo. It therefore prompts for a password to decrypt
your rootfs.

Later on, init wants to make your swap available and therefore also
needs to ask you for a passphrase.

Theoretically, if you use systemd >= 227, you don't get prompted for
such passphrase, because the systemd's changelog for version 227 reads:

>* The "ask-password" framework used to query for LUKS harddisk
>  passwords or SSL passwords during boot gained support for
>  caching passwords in the kernel keyring, if it is
>  available. This makes sure that the user only has to type in
>  a passphrase once if there are multiple objects to unlock
>  with the same one. Previously, such password caching was
>  available only when Plymouth was used; this moves the
>  caching logic into the systemd codebase itself. The
>  "systemd-ask-password" utility gained a new --keyname=
>  switch to control which kernel keyring key to use for
>  caching a password in. This functionality is also useful for
>  enabling display managers such as gdm to automatically
>  unlock the user's GNOME keyring if its passphrase, the
>  user's password and the harddisk password are the same, if
>  gdm-autologin is used.

There could be reasons why this doesn't work, like the kernel keyring is
not accessible, or you are relying on an init system not using this
feature, …

Anyway, in case it doesn't work you can use the good ol' /etc/crypttab
file to add some automation. The first step is to add another LUKS
passphrase to your swap partition. Then put this passphrase in a file
readable from your rootfs (e.g. /etc/luks.keys/swap.key); mind using an
editor that doesn't add a trailing newline to the file, otherwise it
won't work. You can also use dd to generate a random binary passphrase
straight into a file and then use that file to add the passphrase to
your partition. Finally, reference this file in /etc/crypttab for your
swap partition (see man crypttab for more details on this).
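For illustration, a hedged sketch of those steps, assuming the swap LUKS
device is /dev/vgmain/swap (a made-up name) and keeping the key file name
used above; adjust device and mapper names to your own layout:

  mkdir -p /etc/luks.keys
  # generate a 4 KiB random key (no trailing-newline problem this way)
  dd if=/dev/urandom of=/etc/luks.keys/swap.key bs=512 count=8
  chmod 0400 /etc/luks.keys/swap.key
  # add the key file as an additional passphrase on the swap LUKS device
  cryptsetup luksAddKey /dev/vgmain/swap /etc/luks.keys/swap.key
  # /etc/crypttab -- <target>  <source>  <key file>  <options>
  swap_crypt  /dev/vgmain/swap  /etc/luks.keys/swap.key  luks

If the swap device is for some reason handled from the initramfs, running
update-initramfs -u afterwards may also be needed so the new crypttab entry
is taken into account.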

There are some examples here[0], but please check whether they fit
your use case.

Cheers,

-- 
PEB

[0] 
https://www.howtoforge.com/automatically-unlock-luks-encrypted-drives-with-a-keyfile




Re: LVM passphrase

2021-12-27 Thread basti
You can add a key to swap and place it somewhere in the root
partition. The key must be known to /etc/crypttab, so it should ask only once.

On 28.12.21 at 07:39, Polyna-Maude Racicot-Summerside wrote:
> Hi,
> I got two logical volume on my hard disk.
> One is the swap
> Other is the root
> Both have the same passphrase.
> How can I make grub ask only once ?
> Thanks
> 



Re: LVM raid0

2021-05-31 Thread Gokan Atmaca
> vgcreate vg2t /dev/sda /dev/sdb
> lvcreate --type raid0 --name lv-stg --size 16700GiB vg2t


I solved the problem by manually activating it initially.

On Sat, May 29, 2021 at 10:41 PM Tom Dial  wrote:
>
>
>
> On 5/28/21 12:58, Gokan Atmaca wrote:
> >> Is your '/etc/crypttab' file properly populated?
> >
> > There is no encrypted volume.
> >
> >
> > On Fri, May 28, 2021 at 9:37 PM john doe  wrote:
> >>
> >> On 5/28/2021 8:31 PM, Gokan Atmaca wrote:
> >>> Additionally I found something like the following in the dmesg logs.
> >>>
> >>> [Fri May 28 14:14:19 2021] x86/cpu: VMX (outside TXT) disabled by BIOS
> >>> [Fri May 28 14:14:20 2021] r8169 :06:00.0: unknown chip XID 641
> >>> [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
> >>> to run raid array
> >>> [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
> >>> to run raid array
> >>> [Fri May 28 14:15:25 2021] hdaudio hdaudioC0D2: Unable to bind the codec
> >>>
> >>> On Fri, May 28, 2021 at 9:27 PM Gokan Atmaca  
> >>> wrote:
> 
>  Hello
> 
>  I did LVM raid 0. But when reboot the disks come as "inherit".
>  What would be the reason ?
> 
>  lvdisplay
> --- Logical volume ---
> LV Path/dev/vg2t/lv-st0
> LV Namelv-st0
> VG Namevg2t
> LV UUIDJOfIdw-8uhQ-OvsF-4Sdp-LMDm-NEVv-UMjFDW
> LV Write Accessread/write
> LV Creation host, time ob, 2021-05-28 10:46:49 -0400
> LV Status  NOT available
> LV Size1.81 TiB
> Current LE 474482
> Segments   1
> Allocation inherit
> Read ahead sectors auto
> 
> --- Logical volume ---
> LV Path/dev/vg2t/lv_storage14t
> LV Namelv_storage14t
> VG Namevg2t
> LV UUIDjHbg36-GKU0-Mked-PbMd-Vnio-IPbE-lpGWD4
> LV Write Accessread/write
> LV Creation host, time ob 2021-05-28 13:41:04 -0400
> LV Status  NOT available
> LV Size14.50 TiB
> Current LE 3801088
> Segments   1
> Allocation inherit
> Read ahead sectors auto
>
> Allocation inherit is the default (inherited from the volume group) if
> you did not specify an allocation policy in the lvcreate command that
> created the logical volume. Based on my experience and existing volume
> groups, it also is the default for the vgcreate command if nothing else is
> specified.
>
> The above also shows "LV Status NOT available". That likely indicates
> that the volume group was not activated at boot. That would prevent use
> of the logical volumes for anything and probably explain the device
> mapper messages shown above.
>
> As Reco suggested in a later reply, it would be helpful to see the
> output of both vgdisplay -v and pvdisplay.
>
> It also might be helpful if you could show the exact commands you used
> originally to set up the RAID environment.
>
> I realize that may be impossible, but wonder if you defined a raid0
> device on top of the LVM logical volumes using external raid management
> software. My understanding is that while that might be possible, the
> usual way to create raid under LVM is to specify it by type when
> creating the logical volume. In this case, for (partly made up) example:
>
> vgcreate vg2t /dev/sda /dev/sdb
> lvcreate --type raid0 --name lv-stg --size 16700GiB vg2t
>
> This would result in one logical volume, /dev/vg2t/, split between the
> two physical volumes (assumed here to be /dev/sda and /dev/sdb but maybe
> different on your system), with total storage of about 16.3 TiB. I guess
> that allocation would be first from /dev/sda and, when that is
> exhausted, /dev/sdb. Other allocation rules could be specified in the
> vgcreate command (and inherited by the logical volume) or the lvcreate
> command. With the very different sized disks involved, it is not clear
> that would be useful.
>
> Regards,
> Tom Dial
>
> 
> 
>  Thanls.
> 
> 
> 
> 
>  --
>  ⢀⣴⠾⠻⢶⣦⠀
>  ⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
>  ⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
>  ⠈⠳⣄
> >>>
> >>
> >> Is your '/etc/crypttab' file properly populated?
> >>
> >> --
> >> John Doe
> >>



Re: LVM raid0

2021-05-29 Thread Tom Dial



On 5/28/21 12:58, Gokan Atmaca wrote:
>> Is your '/etc/crypttab' file properly populated?
> 
> There is no encrypted volume.
> 
> 
> On Fri, May 28, 2021 at 9:37 PM john doe  wrote:
>>
>> On 5/28/2021 8:31 PM, Gokan Atmaca wrote:
>>> Additionally I found something like the following in the dmesg logs.
>>>
>>> [Fri May 28 14:14:19 2021] x86/cpu: VMX (outside TXT) disabled by BIOS
>>> [Fri May 28 14:14:20 2021] r8169 :06:00.0: unknown chip XID 641
>>> [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
>>> to run raid array
>>> [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
>>> to run raid array
>>> [Fri May 28 14:15:25 2021] hdaudio hdaudioC0D2: Unable to bind the codec
>>>
>>> On Fri, May 28, 2021 at 9:27 PM Gokan Atmaca  wrote:

 Hello

 I did LVM raid 0. But when I reboot, the disks come up as "inherit".
 What would be the reason?

 lvdisplay
--- Logical volume ---
LV Path/dev/vg2t/lv-st0
LV Namelv-st0
VG Namevg2t
LV UUIDJOfIdw-8uhQ-OvsF-4Sdp-LMDm-NEVv-UMjFDW
LV Write Accessread/write
LV Creation host, time ob, 2021-05-28 10:46:49 -0400
LV Status  NOT available
LV Size1.81 TiB
Current LE 474482
Segments   1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Path/dev/vg2t/lv_storage14t
LV Namelv_storage14t
VG Namevg2t
LV UUIDjHbg36-GKU0-Mked-PbMd-Vnio-IPbE-lpGWD4
LV Write Accessread/write
LV Creation host, time ob 2021-05-28 13:41:04 -0400
LV Status  NOT available
LV Size14.50 TiB
Current LE 3801088
Segments   1
Allocation inherit
Read ahead sectors auto

Allocation inherit is the default (inherited from the volume group) if
you did not specify an allocation policy in the lvcreate command that
created the logical volume. Based on my experience and existing volume
groups, it also is the default for the vgcreate command if nothing else is
specified.

The above also shows "LV Status NOT available". That likely indicates
that the volume group was not activated at boot. That would prevent use
of the logical volumes for anything and probably explain the device
mapper messages shown above.

As Reco suggested in a later reply, it would be helpful to see the
output of both vgdisplay -v and pvdisplay.

It also might be helpful if you could show the exact commands you used
originally to set up the RAID environment.

I realize that may be impossible, but wonder if you defined a raid0
device on top of the LVM logical volumes using external raid management
software. My understanding is that while that might be possible, the
usual way to create raid under LVM is to specify it by type when
creating the logical volume. In this case, for (partly made up) example:

vgcreate vg2t /dev/sda /dev/sdb
lvcreate --type raid0 --name lv-stg --size 16700GiB vg2t

This would result in one logical volume, /dev/vg2t/, split between the
two physical volumes (assumed here to be /dev/sda and /dev/sdb but maybe
different on your system), with total storage of about 16.3 TiB. I guess
that allocation would be first from /dev/sda and, when that is
exhausted, /dev/sdb. Other allocation rules could be specified in the
vgcreate command (and inherited by the logical volume) or the lvcreate
command. With the very different sized disks involved, it is not clear
that would be useful.
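A hedged way to verify what actually got created, assuming the volume group
is named vg2t as above:

  # show each LV's segment type and the physical devices backing it
  lvs -a -o lv_name,segtype,devices,lv_size vg2t
  # a striped LV created with --type raid0 should report segtype raid0,
  # not linear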

Regards,
Tom Dial



 Thanls.




 --
 ⢀⣴⠾⠻⢶⣦⠀
 ⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
 ⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
 ⠈⠳⣄
>>>
>>
>> Is your '/etc/crypttab' file properly populated?
>>
>> --
>> John Doe
>>



Re: LVM raid0

2021-05-28 Thread Charles Curley
On Fri, 28 May 2021 21:10:03 +0200
john doe  wrote:

> On 5/28/2021 8:58 PM, Gokan Atmaca wrote:
> >> Is your '/etc/crypttab' file properly populated?  
> >
> > There is no encrypted volume.
> >  
> 
> That file (1) needs to be  populated for it to work at boot! :)

No, not if (as M. Atmaca has already stated) there is no encrypted
portion of the system.


> 
> 
> 1)
> https://wiki.archlinux.org/title/Dm-crypt/System_configuration#Mounting_at_boot_time

This article is on DM-crypt, which should be irrelevant to the OP's
situation.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: LVM raid0

2021-05-28 Thread Reco
Hi.

On Fri, May 28, 2021 at 09:31:06PM +0300, Gokan Atmaca wrote:
> Additionally I found something like the following in the dmesg logs.
> 
...
> [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
> to run raid array
> [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
> to run raid array

Chances are your initrd lacks dm-raid kernel module. Try adding it to
/etc/initramfs-tools/modules and rebuild your initrd.
Everything else in this dmesg does not relate to the problem.
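A minimal sketch of that, assuming the stock Debian initramfs-tools setup:

  echo dm-raid >> /etc/initramfs-tools/modules
  update-initramfs -u -k all
  # after a reboot, confirm the module is loaded:
  lsmod | grep ^dm_raid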


> > What would be the reason ?

pvdisplay and vgdisplay would be nice.
And "lsmod | grep ^dm" while we're at it.


Oh, and please disregard that crypttab advice. crypttab is only good
for something if you're using dm-crypt, and most likely you're not.

Reco



Re: LVM raid0

2021-05-28 Thread Gokan Atmaca
> That file (1) needs to be  populated for it to work at boot! :)

thanks, i didn't know. I will check it. :)

On Fri, May 28, 2021 at 10:10 PM john doe  wrote:
>
> On 5/28/2021 8:58 PM, Gokan Atmaca wrote:
> >> Is your '/etc/crypttab' file properly populated?
> >
> > There is no encrypted volume.
> >
>
> That file (1) needs to be  populated for it to work at boot! :)
>
>
> 1)
> https://wiki.archlinux.org/title/Dm-crypt/System_configuration#Mounting_at_boot_time
>
> --
> John Doe
>



Re: LVM raid0

2021-05-28 Thread john doe

On 5/28/2021 8:58 PM, Gokan Atmaca wrote:

Is your '/etc/crypttab' file properly populated?


There is no encrypted volume.



That file (1) needs to be  populated for it to work at boot! :)


1)
https://wiki.archlinux.org/title/Dm-crypt/System_configuration#Mounting_at_boot_time

--
John Doe



Re: LVM raid0

2021-05-28 Thread Gokan Atmaca
> Is your '/etc/crypttab' file properly populated?

There is no encrypted volume.


On Fri, May 28, 2021 at 9:37 PM john doe  wrote:
>
> On 5/28/2021 8:31 PM, Gokan Atmaca wrote:
> > Additionally I found something like the following in the dmesg logs.
> >
> > [Fri May 28 14:14:19 2021] x86/cpu: VMX (outside TXT) disabled by BIOS
> > [Fri May 28 14:14:20 2021] r8169 :06:00.0: unknown chip XID 641
> > [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
> > to run raid array
> > [Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
> > to run raid array
> > [Fri May 28 14:15:25 2021] hdaudio hdaudioC0D2: Unable to bind the codec
> >
> > On Fri, May 28, 2021 at 9:27 PM Gokan Atmaca  wrote:
> >>
> >> Hello
> >>
> >> I did LVM raid 0. But when reboot the disks come as "inherit".
> >> What would be the reason ?
> >>
> >> lvdisplay
> >>--- Logical volume ---
> >>LV Path/dev/vg2t/lv-st0
> >>LV Namelv-st0
> >>VG Namevg2t
> >>LV UUIDJOfIdw-8uhQ-OvsF-4Sdp-LMDm-NEVv-UMjFDW
> >>LV Write Accessread/write
> >>LV Creation host, time ob, 2021-05-28 10:46:49 -0400
> >>LV Status  NOT available
> >>LV Size1.81 TiB
> >>Current LE 474482
> >>Segments   1
> >>Allocation inherit
> >>Read ahead sectors auto
> >>
> >>--- Logical volume ---
> >>LV Path/dev/vg2t/lv_storage14t
> >>LV Namelv_storage14t
> >>VG Namevg2t
> >>LV UUIDjHbg36-GKU0-Mked-PbMd-Vnio-IPbE-lpGWD4
> >>LV Write Accessread/write
> >>LV Creation host, time ob 2021-05-28 13:41:04 -0400
> >>LV Status  NOT available
> >>LV Size14.50 TiB
> >>Current LE 3801088
> >>Segments   1
> >>Allocation inherit
> >>Read ahead sectors auto
> >>
> >>
> >> Thanls.
> >>
> >>
> >>
> >>
> >> --
> >> ⢀⣴⠾⠻⢶⣦⠀
> >> ⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
> >> ⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
> >> ⠈⠳⣄
> >
>
> Is your '/etc/crypttab' file properly populated?
>
> --
> John Doe
>



Re: LVM raid0

2021-05-28 Thread john doe

On 5/28/2021 8:31 PM, Gokan Atmaca wrote:

Additionally I found something like the following in the dmesg logs.

[Fri May 28 14:14:19 2021] x86/cpu: VMX (outside TXT) disabled by BIOS
[Fri May 28 14:14:20 2021] r8169 :06:00.0: unknown chip XID 641
[Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
to run raid array
[Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
to run raid array
[Fri May 28 14:15:25 2021] hdaudio hdaudioC0D2: Unable to bind the codec

On Fri, May 28, 2021 at 9:27 PM Gokan Atmaca  wrote:


Hello

I did LVM raid 0. But when I reboot, the disks come up as "inherit".
What would be the reason?

lvdisplay
   --- Logical volume ---
   LV Path                /dev/vg2t/lv-st0
   LV Name                lv-st0
   VG Name                vg2t
   LV UUID                JOfIdw-8uhQ-OvsF-4Sdp-LMDm-NEVv-UMjFDW
   LV Write Access        read/write
   LV Creation host, time ob, 2021-05-28 10:46:49 -0400
   LV Status              NOT available
   LV Size                1.81 TiB
   Current LE             474482
   Segments               1
   Allocation             inherit
   Read ahead sectors     auto

   --- Logical volume ---
   LV Path                /dev/vg2t/lv_storage14t
   LV Name                lv_storage14t
   VG Name                vg2t
   LV UUID                jHbg36-GKU0-Mked-PbMd-Vnio-IPbE-lpGWD4
   LV Write Access        read/write
   LV Creation host, time ob, 2021-05-28 13:41:04 -0400
   LV Status              NOT available
   LV Size                14.50 TiB
   Current LE             3801088
   Segments               1
   Allocation             inherit
   Read ahead sectors     auto


Thanls.




--
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄




Is your '/etc/crypttab' file properly populated?

--
John Doe



Re: LVM raid0

2021-05-28 Thread Gokan Atmaca
Additionally I found something like the following in the dmesg logs.

[Fri May 28 14:14:19 2021] x86/cpu: VMX (outside TXT) disabled by BIOS
[Fri May 28 14:14:20 2021] r8169 :06:00.0: unknown chip XID 641
[Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
to run raid array
[Fri May 28 14:14:22 2021] device-mapper: table: 253:2: raid: Failed
to run raid array
[Fri May 28 14:15:25 2021] hdaudio hdaudioC0D2: Unable to bind the codec

On Fri, May 28, 2021 at 9:27 PM Gokan Atmaca  wrote:
>
> Hello
>
> I did LVM raid 0. But when reboot the disks come as "inherit".
> What would be the reason ?
>
> lvdisplay
>   --- Logical volume ---
>   LV Path                /dev/vg2t/lv-st0
>   LV Name                lv-st0
>   VG Name                vg2t
>   LV UUID                JOfIdw-8uhQ-OvsF-4Sdp-LMDm-NEVv-UMjFDW
>   LV Write Access        read/write
>   LV Creation host, time ob, 2021-05-28 10:46:49 -0400
>   LV Status              NOT available
>   LV Size                1.81 TiB
>   Current LE             474482
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>
>   --- Logical volume ---
>   LV Path                /dev/vg2t/lv_storage14t
>   LV Name                lv_storage14t
>   VG Name                vg2t
>   LV UUID                jHbg36-GKU0-Mked-PbMd-Vnio-IPbE-lpGWD4
>   LV Write Access        read/write
>   LV Creation host, time ob, 2021-05-28 13:41:04 -0400
>   LV Status              NOT available
>   LV Size                14.50 TiB
>   Current LE             3801088
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>
>
> Thanls.
>
>
>
>
> --
> ⢀⣴⠾⠻⢶⣦⠀
> ⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
> ⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
> ⠈⠳⣄



Re: LVM Boot fail

2020-02-16 Thread Tom Dial



On 2/16/20 05:36, Reco wrote:
>   Hi.
> 
> On Sat, Feb 15, 2020 at 10:57:36PM -0700, Tom Dial wrote:
>> Neither the host nor the guest VM is rebooted often, and it is not a
>> particularly serious problem now that it's known, but it would be better
>> gone. I'm not averse to doing work to sort this out, but would be
>> grateful for pointers to relevant documentation or other information, or
>> suggestions for fixing it. and wouldn't object to information about
>> fixing it, if anyone has encountered it previously.
> 
> What a typical Debian initramfs does with LVM is defined at
> /usr/share/initramfs-tools/scripts/local-block/lvm2, and specifically
> it's:
> 
> lvm lvchange -aay -y --sysinit --ignoreskippedcluster "$@"
> 
> Note the '-aay' part here, it refers to auto_activation_volume_list in
> lvm.conf.
> What it should do is to activate all and every LV you have if
> auto_activation_volume_list is not defined, and restrict LV activation
> to the list contents otherwise.
> 
> So, first things first, check your lvm.conf.

Thanks, Reco; I think this set me on a path to a fix, although so far
everything in my setup is as you described.

Tom

> 
> 
> Second, you can cheat. Just add yet another initramfs script to call
> "lvchange -ay".
> 
> Reco
> 



Re: LVM Boot fail

2020-02-16 Thread Stefan Monnier
> Boot faults to an (initrd) prompt with a complaint that the /usr LV,
> correctly identified by its UUID, does not exist. It does, but is not
> activated. In fact, lvscan shows that only the root and swap LVs
> are active, and the others are not.

Why does the initrd want to check activation of something other than root
and swap (root is ... well, because we want to mount it, and swap is to
check for a hibernation image, AFAIK)?

Oh wait, I guess with the new usr/bin change /usr needs to be "in the
root filesystem" now or at least mounted just like the root, so
apparently some part of the initramfs hasn't been adjusted to this new
need.  But an easy fix might be to merge your /usr and / partitions.


Stefan



Re: LVM Boot fail

2020-02-16 Thread Reco
Hi.

On Sat, Feb 15, 2020 at 10:57:36PM -0700, Tom Dial wrote:
> Neither the host nor the guest VM is rebooted often, and it is not a
> particularly serious problem now that it's known, but it would be better
> gone. I'm not averse to doing work to sort this out, but would be
> grateful for pointers to relevant documentation or other information, or
> suggestions for fixing it. and wouldn't object to information about
> fixing it, if anyone has encountered it previously.

What a typical Debian initramfs does with LVM is defined at
/usr/share/initramfs-tools/scripts/local-block/lvm2, and specifically
it's:

lvm lvchange -aay -y --sysinit --ignoreskippedcluster "$@"

Note the '-aay' part here, it refers to auto_activation_volume_list in
lvm.conf.
What it should do is to activate all and every LV you have if
auto_activation_volume_list is not defined, and restrict LV activation
to the list contents otherwise.

So, first things first, check your lvm.conf.


Second, you can cheat. Just add yet another initramfs script to call
"lvchange -ay".

Reco



Re: LVM setup with snapshots

2018-05-12 Thread Eduardo M KALINOWSKI
On 11-05-2018 21:46, Forest Dean Feighner wrote:
> I really didn't prepare for lvm. I never used lvm before this so had
> no idea of lvm before.
>
> Snapshots sound like an awesome idea.
>
> I would like to do a configured base install, create a snapshot, and
> modify (fork), the base for different things.
>
> With 20/20 hindsight. The default doesn't seem to have room. What are
> different solutions other debian/lvm users have used?

You can shrink most kinds of filesystems (including ext4), and then
shrink the logical volumes (in that order). It's not as convenient as
growing the LVs, since the filesystems must not be mounted and shrinking
can be somewhat slow if data needs to be shuffled around, but it's possible.

As always, there's a small risk of data loss, so better have a backup of
important data.
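A hedged sketch of the shrink sequence for an ext4 LV, assuming it can be
unmounted first (for the root LV that means working from a live/rescue
system); the volume names and target size here are examples only:

  umount /dev/build-vg/root              # must not be mounted while shrinking
  e2fsck -f /dev/build-vg/root           # fsck first (lvreduce --resizefs would
                                         # also do this via fsadm)
  lvreduce --resizefs -L 400G /dev/build-vg/root   # shrinks the fs, then the LV

On older lvm2 without --resizefs, run resize2fs to a size slightly below the
target first, then lvreduce to the target, then resize2fs again to let the
filesystem fill the LV exactly.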

-- 

She is descended from a long line that her mother listened to.
-- Gypsy Rose Lee

Eduardo M KALINOWSKI
edua...@kalinowski.com.br



Re: LVM setup with snapshots

2018-05-11 Thread Forest Dean Feighner
I really didn't prepare for lvm. I never used lvm before this so had no
idea of lvm before.

Snapshots sound like an awesome idea.

I would like to do a configured base install, create a snapshot, and modify
(fork), the base for different things.

With 20/20 hindsight. The default doesn't seem to have room. What are
different solutions other debian/lvm users have used?

Thanks
Forest





On Fri, May 11, 2018 at 2:57 PM, Forest Dean Feighner <
forest.feigh...@gmail.com> wrote:

>
>
> On Fri, May 11, 2018 at 8:15 AM, Greg Wooledge 
> wrote:
>
>> On Fri, May 11, 2018 at 09:20:32AM +0200, Pascal Hambourg wrote:
>> > > To me, it seems me the partition is too large to to reduce for
>> snapshots.
>> >
>> > What do you mean ?
>> > Did you allocate all the available space in the volume group to the
>> logical
>> > volumes ? Creating snapshots requires space.
>>
>> Yeah, if you let the installer do LVM partitioning "for" you, it
>> notoriously uses up all the extents, leaving you nothing to work with,
>> totally defeating the purpose of LVM.
>>
>> If you do an LVM install with Debian, you have to do manual partitioning.
>> Or at least, you really *really* want to.
>>
>>
>
> Yes, this was my first lvm install using the defaults which did not leave
> enough room for snapshots.
>
>
>
>
>


Re: LVM setup with snapshots

2018-05-11 Thread Forest Dean Feighner
On Fri, May 11, 2018 at 8:15 AM, Greg Wooledge  wrote:

> On Fri, May 11, 2018 at 09:20:32AM +0200, Pascal Hambourg wrote:
> > > To me, it seems me the partition is too large to to reduce for
> snapshots.
> >
> > What do you mean ?
> > Did you allocate all the available space in the volume group to the
> logical
> > volumes ? Creating snapshots requires space.
>
> Yeah, if you let the installer do LVM partitioning "for" you, it
> notoriously uses up all the extents, leaving you nothing to work with,
> totally defeating the purpose of LVM.
>
> If you do an LVM install with Debian, you have to do manual partitioning.
> Or at least, you really *really* want to.
>
>

Yes, this was my first lvm install using the defaults which did not leave
enough room for snapshots.


Re: LVM setup with snapshots

2018-05-11 Thread Greg Wooledge
On Fri, May 11, 2018 at 09:20:32AM +0200, Pascal Hambourg wrote:
> > To me, it seems me the partition is too large to to reduce for snapshots.
> 
> What do you mean ?
> Did you allocate all the available space in the volume group to the logical
> volumes ? Creating snapshots requires space.

Yeah, if you let the installer do LVM partitioning "for" you, it
notoriously uses up all the extents, leaving you nothing to work with,
totally defeating the purpose of LVM.

If you do an LVM install with Debian, you have to do manual partitioning.
Or at least, you really *really* want to.



Re: LVM setup with snapshots

2018-05-11 Thread Pascal Hambourg

On 11/05/2018 at 01:21, Forest Dean Feighner wrote:

I'm completely new to lvm.


Then you really should read more about LVM and experiment with it before
installing a system on LVM.



lvs
LV VG   Attr   LSize   Pool Origin Data%  Meta%  Move Log
Cpy%Sync Convert
root   build-vg -wi-ao 463.52g
swap_1 build-vg -wi-ao   2.00g

I used the default Stretch install of an LVM partition with GNOME.

To me, it seems the partition is too large to reduce for snapshots.


What do you mean ?
Did you allocate all the available space in the volume group to the 
logical volumes ? Creating snapshots requires space.



What would be a good layout for a stretch install doing lvm snapshots?


When installing with LVM, an important rule is to always leave plenty of
free space in the volume group. Don't worry, extending a logical volume
when needed is easy. This implies using manual partitioning, because
guided partitioning allocates all the available space.
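For example, growing an LV and its ext4 filesystem in one go (a hedged
sketch; the names follow the build-vg/root layout from the lvs output quoted
above, the size is arbitrary, and ext4 can be grown while mounted):

  lvextend --resizefs -L +20G /dev/build-vg/root
  # or extend first and grow the filesystem afterwards:
  #   lvextend -L +20G /dev/build-vg/root && resize2fs /dev/build-vg/root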




Re: LVM partitions not mounting after upgrade

2018-03-02 Thread deloptes
Andy Pont wrote:

> When booting it sits for 90 seconds flashing messages of the form:
> 
> Start job running for dev-mapper-sdcserver/x2dvar.device
> Start job running for dev-mapper-sdcserver/x2dopt.device
> Start job running for dev-mapper-sdcserver/x2dhome.device
> 

smells like systemd



Re: LVM partitions not mounting after upgrade

2018-03-01 Thread David Christensen

On 02/28/18 07:28, Andy Pont wrote:

Hello,

Today I have upgraded the third of our three Debian servers from Jessie (8.10) 
to Stretch (9.3) and whilst the first two went without a problem the final one 
only boots to the maintenance mode prompt.

This particular server uses an Intel motherboard and has a 4 disk raid array 
(mirrored and striped) created using the BIOS in which exists (excuse the 
terminology if it isn’t quite correct) a LVM physical volume and three 
partitions (/opt, /home and /var).

When booting it sits for 90 seconds flashing messages of the form:

Start job running for dev-mapper-sdcserver/x2dvar.device
Start job running for dev-mapper-sdcserver/x2dopt.device
Start job running for dev-mapper-sdcserver/x2dhome.device

After the 90 seconds these turn into "Timed out waiting for..." messages and I 
get presented with Control-D maintenance mode prompt.

Looking in /dev there is the /dev/md126 device for the raid array but there are 
no /dev/dm-X entries and no /dev/vg_sdcserver as I see on a similar machine 
that has a similar setup.

When I try to investigate with commands such as pvcreate or vgchange, in test 
mode, then they all show messages about duplicates.

Could someone guide me how to recreate the necessary files in /dev so I can 
mount these volumes and boot the server?

Thanks,

-Andy.


https://lists.debian.org/debian-user/2018/03/msg5.html


What is "sdcserver"?  Secondary Domain Controller?


What is the model of the Intel motherboard?


Is the RAID controller on the motherboard or a card?  If card, what is 
the make and model?



What are the makes and models of the disks?


If you created the RAID10 using the BIOS, did Jesse see one physical 
disk or did you need to install additional software?



Is root on the RAID10 or on other disk(s)?


David



Re: LVM: how to avoid scanning all devices

2018-01-03 Thread Steve Keller
On Wed, Dec 20, 2017 at 02:17:59PM +1100, Igor Cicimov wrote:
 
> Look at filter examples in /etc/lvm/lvm.conf
 
That's not what I'm looking for.  I *do* have LVM physical and logical
volumes on most of my drives, e.g. a volume group on my backup drive.
And I want an explicit call to vgscan to find all these volumes.
Therefore a filter excluding these devices is not the solution.

But I dislike all my drives spinning up and having to wait for that when I
simply call vgdisplay vg0 to see how much space is left in my primary volume
group with the /, /usr, /var and /home file systems, which are active all the time.

Steve



Re: LVM: how to avoid scanning all devices

2017-12-19 Thread Igor Cicimov
On 15 Dec 2017 11:36 pm, "Steve Keller"  wrote:

When calling LVM commands it seems they all scan all disks for
physical volumes.  This is annoying because it spins up all disks that
are currently idle and causes long delays to wait for these disks to
come up.  Also, I don't understand why LVM commands scan the disks so
often since the information is in /etc/lvm already.  For example a
command like vgdisplay vg0 where vg0 is actively used and on a disk
that is up and running still causes a long delay because it scans all
my devices for other volumes although this is completely unneeded.

IMO only an explicit call to vgscan should scan for and update all LVM
information.

Steve


Look at filter examples in /etc/lvm/lvm.conf
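For what it's worth, a hedged sketch of such a filter (the device pattern is
an example only; note Steve's later follow-up above, that a filter will not
help when the other drives also hold PVs he wants an explicit vgscan to find):

  # /etc/lvm/lvm.conf -- only scan sda for PVs, reject everything else
  devices {
      global_filter = [ "a|^/dev/sda|", "r|.*|" ]
  }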


Re: LVM: how to avoid scanning all devices

2017-12-18 Thread Andy Smith
Hi Steve,

On Fri, Dec 15, 2017 at 01:19:46PM +0100, Steve Keller wrote:
> When calling LVM commands it seems they all scan all disks for
> physical volumes.  This is annoying because it spins up all disks that
> are currently idle and causes long delays to wait for these disks to
> come up.

Can you avoid it by using global_filter to restrict LVM's operation
to certain devices?

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: LVM RAID vs LVM over MD

2016-12-13 Thread Igor Cicimov
On 12 Dec 2016 10:21 pm, "Jonathan Dowland"  wrote:

On Tue, Dec 06, 2016 at 10:53:30AM +1100, Igor Cicimov wrote:
> It depends. If you are using cloud services with remote shared storage
like
> AWS EBS it does not make sense using LVM on top of RAID. To me it is just
> adding complexity to already complex SAN storage. You also have no idea
> what the block devices presented to the VM are coming from it might be a
> file coming over iSCSI. I've been using LVM raid on AWS EBS for years
> without any issues. My advice is to test and compare them all before you make
> your decision; each one's use case and experience is different.

I should have prefixed my answer with "If you want RAID...". I don't
personally use RAID anywhere, myself, at the moment.

In the situation you describe then you are doing logical volume management
elsewhere and you would indeed not need LVM. You should also address
redundancy
at that other layer so you wouldn't need (local) RAID either, either LVM or
MD
based, IMHO.

You don't explain why you chose to use LVM RAID over mdadm, but as I said, I
wouldn't use either in your case.

--
Jonathan Dowland
Please do not CC me, I am subscribed to the list.


It is not all about redundancy but performance too. In my tests the
lvm-raid performed better than plain lvm.


Re: LVM RAID vs LVM over MD

2016-12-12 Thread Jonathan Dowland
On Tue, Dec 06, 2016 at 10:53:30AM +1100, Igor Cicimov wrote:
> It depends. If you are using cloud services with remote shared storage like
> AWS EBS it does not make sense using LVM on top of RAID. To me it is just
> adding complexity to already complex SAN storage. You also have no idea
> what the block devices presented to the VM are coming from it might be a
> file coming over iSCSI. I've been using LVM raid on AWS EBS for years
> without any issues. My advice is to test and compare them all before you make
> your decision; each one's use case and experience is different.

I should have prefixed my answer with "If you want RAID...". I don't
personally use RAID anywhere, myself, at the moment.

In the situation you describe then you are doing logical volume management
elsewhere and you would indeed not need LVM. You should also address redundancy
at that other layer so you wouldn't need (local) RAID either, either LVM or MD
based, IMHO.

You don't explain why you chose to use LVM RAID over mdadm, but as I said, I
wouldn't use either in your case.

-- 
Jonathan Dowland
Please do not CC me, I am subscribed to the list.




Re: LVM RAID vs LVM over MD

2016-12-05 Thread Igor Cicimov
On 6 Dec 2016 5:14 am, "Nicholas Geovanis"  wrote:
>
> I'd like to make sure I'm taking away the right thing from this
conversation.
> It seems we have high-level recommendations _not_ to use LVM RAID1.
> Not just over MD, simply don't use it at all. Do I get that right?
>
> On Mon, Dec 5, 2016 at 4:25 AM, Jonathan Dowland  wrote:
>>
>> On Sat, Dec 03, 2016 at 07:39:37PM +0100, Kamil Jońca wrote:
>> > So far I used lvm with raid1 device as PV.
>> >
>> > Recently I have to extend my VG
>> > (https://lists.debian.org/debian-user/2016/11/msg00909.html)
>> >
>> > and I read some about lvm.
>> > If I understand correctly, LVM have builtin RAID1 functionality.
>> > And I wonder about migrating
>> > lvm over md --> (lvm with raid1) over physical hard drive partitions.
>> >
>> >
>> > Any cons?
>>
>> MD is older, has had more development and is generally considered to be
>> more robust. I would always choose to do RAID with MD, directly on the
>> disks, and put LVM on top of the MD virtual block devices.
>>
>> --
>> Jonathan Dowland
>> Please do not CC me, I am subscribed to the list.
>
>
It depends. If you are using cloud services with remote shared storage like
AWS EBS it does not make sense using LVM on top of RAID. To me it is just
adding complexity to already complex SAN storage. You also have no idea
what the block devices presented to the VM are coming from it might be a
file coming over iSCSI. I've been using LVM raid on AWS EBS for years
without any issues. My advice is to test and compare them all before you make
your decision; each one's use case and experience is different.


Re: LVM RAID vs LVM over MD

2016-12-05 Thread Sven Hartge
Roman Tsisyk  wrote:
> On Mon, Dec 5, 2016 at 10:47 PM, Sven Hartge  wrote:
>> Dan Ritter  wrote:

>>> If you want LVM on top of RAID, use LVM on top of mdadm, but
>>> consider whether you might actually want ZFS instead.

>> Side note: With ZFS you don't want to use MD (or any other RAID)
>> below ZFS but instead put all disk directly into a (or multiple)
>> VDEV.

> I wonder do you want your data back? :)

Why?

S°

-- 
Sigmentation fault. Core dumped.



Re: LVM RAID vs LVM over MD

2016-12-05 Thread Roman Tsisyk
On Mon, Dec 5, 2016 at 10:47 PM, Sven Hartge  wrote:
> Dan Ritter  wrote:
>
>> If you want LVM on top of RAID, use LVM on top of mdadm, but consider
>> whether you might actually want ZFS instead.
>
> Side note: With ZFS you don't want to use MD (or any other RAID) below
> ZFS but instead put all disk directly into a (or multiple) VDEV.
>

I wonder do you want your data back? :)

-- 
WBR,
  Roman Tsisyk 



Re: LVM RAID vs LVM over MD

2016-12-05 Thread Sven Hartge
Dan Ritter  wrote:

> If you want LVM on top of RAID, use LVM on top of mdadm, but consider
> whether you might actually want ZFS instead.

Side note: With ZFS you don't want to use MD (or any other RAID) below
ZFS but instead put all disk directly into a (or multiple) VDEV.

S°

-- 
Sigmentation fault. Core dumped.



Re: LVM RAID vs LVM over MD

2016-12-05 Thread Sven Hartge
Nicholas Geovanis  wrote:

> I'd like to make sure I'm taking away the right thing from this
> conversation.
> It seems we have high-level recommendations _not_ to use LVM RAID1.

Yes.

> Not just over MD, simply don't use it at all. Do I get that right?

Yes. With MD lower in the stack, you don't need LVM-RAID1.

S°

-- 
Sigmentation fault. Core dumped.



Re: LVM RAID vs LVM over MD

2016-12-05 Thread Dan Ritter
On Mon, Dec 05, 2016 at 01:14:14PM -0600, Nicholas Geovanis wrote:
> I'd like to make sure I'm taking away the right thing from this
> conversation.
> It seems we have high-level recommendations _not_ to use LVM RAID1.
> Not just over MD, simply don't use it at all. Do I get that right?
> 

Yes.

If you want RAID1, use mdadm (and use RAID10 with 2 devices).

If you want LVM, use LVM.

If you want LVM on top of RAID, use LVM on top of mdadm, but consider
whether you might actually want ZFS instead.

-dsr-



Re: LVM RAID vs LVM over MD

2016-12-05 Thread Nicholas Geovanis
I'd like to make sure I'm taking away the right thing from this
conversation.
It seems we have high-level recommendations _not_ to use LVM RAID1.
Not just over MD, simply don't use it at all. Do I get that right?

On Mon, Dec 5, 2016 at 4:25 AM, Jonathan Dowland  wrote:

> On Sat, Dec 03, 2016 at 07:39:37PM +0100, Kamil Jońca wrote:
> > So far I used lvm with raid1 device as PV.
> >
> > Recently I have to extend my VG
> > (https://lists.debian.org/debian-user/2016/11/msg00909.html)
> >
> > and I read some about lvm.
> > If I understand correctly, LVM have builtin RAID1 functionality.
> > And I wonder about migrating
> > lvm over md --> (lvm with raid1) over physical hard drive partitions.
> >
> >
> > Any cons?
>
> MD is older, has had more development and is generally considered to be
> more robust. I would always choose to do RAID with MD, directly on the
> disks, and put LVM on top of the MD virtual block devices.
>
> --
> Jonathan Dowland
> Please do not CC me, I am subscribed to the list.
>


Re: LVM RAID vs LVM over MD

2016-12-05 Thread Jonathan Dowland
On Sat, Dec 03, 2016 at 07:39:37PM +0100, Kamil Jońca wrote:
> So far I used lvm with raid1 device as PV.
> 
> Recently I have to extend my VG
> (https://lists.debian.org/debian-user/2016/11/msg00909.html)
> 
> and I read some about lvm.
> If I understand correctly, LVM have builtin RAID1 functionality.
> And I wonder about migrating
> lvm over md --> (lvm with raid1) over physical hard drive partitions.
> 
> 
> Any cons?

MD is older, has had more development and is generally considered to be
more robust. I would always choose to do RAID with MD, directly on the
disks, and put LVM on top of the MD virtual block devices.

-- 
Jonathan Dowland
Please do not CC me, I am subscribed to the list.




Re: LVM RAID vs LVM over MD

2016-12-03 Thread Kamil Jońca
Henrique de Moraes Holschuh  writes:

> On Sat, 03 Dec 2016, Kamil Jońca wrote:
>> If I understand correctly, LVM have builtin RAID1 functionality.
>> And I wonder about migrating
>> lvm over md --> (lvm with raid1) over physical hard drive partitions.
>> 
>> Any cons?
>
> Yes, many.  Don't do it.

For example?

So far found by googling:
1. it's rather new code (yes I know that is md based, but integration
... in lvm)
2. very little community (so harder to get answers for questions)
3. lack of recovery advices (or I can't find them)
4. no "mdadm --monitor" equivalent (or I can't find)

but these can be only my inexperience or "code newness" :)
KJ
-- 
http://stopstopnop.pl/stop_stopnop.pl_o_nas.html
Try to divide your time evenly to keep others happy.



Re: LVM RAID vs LVM over MD

2016-12-03 Thread Roman Tsisyk
On Sat, Dec 3, 2016 at 9:39 PM, Kamil Jońca  wrote:
> So far I used lvm with raid1 device as PV.
>
> Recently I have to extend my VG
> (https://lists.debian.org/debian-user/2016/11/msg00909.html)
>
> and I read some about lvm.
> If I understand correctly, LVM have builtin RAID1 functionality.
> And I wonder about migrating
> lvm over md --> (lvm with raid1) over physical hard drive partitions.
>

Please ask yourself a simple question: do you know how to recover LVM RAID?
I don't. mdraid is proven technology which just works.

The truth is that all this overcomplicated stuff (lvm, pulseaudio,
systemd, etc.) is designed to increase sales of premium support from
RedHat. SCNR.

-- 
WBR,
  Roman Tsisyk 



Re: LVM RAID vs LVM over MD

2016-12-03 Thread Sven Hartge
Kamil Jońca  wrote:

> So far I used lvm with raid1 device as PV.

> Recently I have to extend my VG
> (https://lists.debian.org/debian-user/2016/11/msg00909.html)

> and I read some about lvm.  If I understand correctly, LVM have
> builtin RAID1 functionality.  And I wonder about migrating lvm over md
> --> (lvm with raid1) over physical hard drive partitions.

My last information on the RAID1 code in LVM is that it is inferior to
the one in MD.

The MD code for example is able to fix broken sectors by reading the data
from the other disk, overwriting the sector on the broken disk with the
correct data, trying to get the drive to remap the sector.

Also, the last time I checked (which was a few years ago, so take my
advice with a bit of caution), the LVM code had no feature to do a
regular scrubbing of the RAID, detecting bit-rot in advance.
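For comparison, triggering a consistency check (scrub) on an MD array is a
one-liner; a sketch assuming the array is /dev/md0 (Debian's mdadm package
also ships a checkarray cron job that does this periodically):

  echo check > /sys/block/md0/md/sync_action   # start a scrub
  cat /proc/mdstat                             # watch its progress
  cat /sys/block/md0/md/mismatch_cnt           # mismatches found by the check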

Regards,
Sven.

-- 
Sigmentation fault. Core dumped.



Re: LVM RAID vs LVM over MD

2016-12-03 Thread Henrique de Moraes Holschuh
On Sat, 03 Dec 2016, Kamil Jońca wrote:
> If I understand correctly, LVM have builtin RAID1 functionality.
> And I wonder about migrating
> lvm over md --> (lvm with raid1) over physical hard drive partitions.
> 
> Any cons?

Yes, many.  Don't do it.

-- 
  Henrique Holschuh



Re: LVM disappeared after upgrade in jessie

2016-03-10 Thread Sebastian Weckend

I found the problem, here is what happened:

When first creating the RAID partition for the LVM, I saved the config
to /etc/mdadm.conf.


Everything worked fine, because somehow it still got detected and was
assembled as md127. I didn't notice and went on.


It seems that after an update (not sure which) it no longer does the
automated search and assembly of unconfigured RAID arrays. I put the
missing configuration in /etc/mdadm/mdadm.conf and everything is fine
again.
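For anyone hitting the same thing, a hedged sketch of regenerating that
configuration with the standard mdadm tooling (review the appended ARRAY
lines before rebuilding the initramfs):

  mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append ARRAY lines
  update-initramfs -u                              # so the initramfs sees them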


thanks, Sebastian


On 10.03.2016 at 14:43, Dan Ritter wrote:

On Thu, Mar 10, 2016 at 01:37:06PM +0100, Sebastian Weckend wrote:

In the backup of the LVM configuration I found

physical_volumes {
 pv0 {
 id = "zKO1Xq-VGmI-nX8P-Rwh3-wiFn-GuD4-26dAoZ"
 device = "/dev/md127"   # Hint only
 status = ["ALLOCATABLE"]
 flags = []
 dev_size = 5798400896   # 2.70009 Terabytes
 pe_start = 2048
 pe_count = 707812   # 2.70009 Terabytes
 }
}

but there is no /dev/md127. So it seems it is a software-raid problem.

Did I configure the software-raid wrong? And why did it disappear
just now? Any idea how I get the partition back?



Does it appear in /proc/mdstat in any form? I suspect that the
device got renamed.

Any mentions of mdadm in dmesg output?

You can examine specific partitions with mdadm -Q /dev/sdXX

-dsr-





Re: LVM disappeared after upgrade in jessie

2016-03-10 Thread Dan Ritter
On Thu, Mar 10, 2016 at 01:37:06PM +0100, Sebastian Weckend wrote:
> In the backup of the LVM configuration I found
> 
> physical_volumes {
> pv0 {
> id = "zKO1Xq-VGmI-nX8P-Rwh3-wiFn-GuD4-26dAoZ"
> device = "/dev/md127"   # Hint only
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 5798400896   # 2.70009 Terabytes
> pe_start = 2048
> pe_count = 707812   # 2.70009 Terabytes
> }
> }
> 
> but there is no /dev/md127. So it seems it is a software-raid problem.
> 
> Did I configure the software-raid wrong? And why did it disappear
> just now? Any idea how I get the partition back?


Does it appear in /proc/mdstat in any form? I suspect that the
device got renamed.

Any mentions of mdadm in dmesg output?

You can examine specific partitions with mdadm -Q /dev/sdXX

-dsr-



Re: LVM disappeared after upgrade in jessie

2016-03-10 Thread Sebastian Weckend

In the backup of the LVM configuration I found

physical_volumes {
pv0 {
id = "zKO1Xq-VGmI-nX8P-Rwh3-wiFn-GuD4-26dAoZ"
device = "/dev/md127"   # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 5798400896   # 2.70009 Terabytes
pe_start = 2048
pe_count = 707812   # 2.70009 Terabytes
}
}

but there is no /dev/md127. So it seems it is a software-raid problem.

Did I configure the software-raid wrong? And why did it disappear just 
now? Any idea how I get the partition back?



On 10.03.2016 at 12:56, Sebastian Weckend wrote:

Hello,

after the last upgrade and rebooting I lost my LVM Container.

following packets were upgraded:

bind9-host 1:9.9.5.dfsg-9+deb8u6
dnsutils 1:9.9.5.dfsg-9+deb8u6
host 1:9.9.5.dfsg-9+deb8u6
libbind9-90 1:9.9.5.dfsg-9+deb8u6
libdns100 1:9.9.5.dfsg-9+deb8u6
libdns-export100 1:9.9.5.dfsg-9+deb8u6
libirs-export91 1:9.9.5.dfsg-9+deb8u6
libisc95 1:9.9.5.dfsg-9+deb8u6
libisccc90 1:9.9.5.dfsg-9+deb8u6
libisccfg90 1:9.9.5.dfsg-9+deb8u6
libisccfg-export90 1:9.9.5.dfsg-9+deb8u6
libisc-export95 1:9.9.5.dfsg-9+deb8u6
liblwres90 1:9.9.5.dfsg-9+deb8u6
linux-image-3.16.0-4-amd64 3.16.7-ckt20-1+deb8u4
xen-linux-system-3.16.0-4-amd64 3.16.7-ckt20-1+deb8u4

# pvscan
   No matching physical volumes found

It seems the software-raid of the PV cannot be found anymore

Any ideas how to get access to the LVM back? Or what else to check? It
seems still to be on sda7 and sdb7. Unfortunately I don't have access to
my documentation to check how I configured the LVM in the first place.

# fdisk -l
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 592AB453-29D1-4BAE-B799-6A66EDFCCE98

Device          Start        End    Sectors  Size Type
/dev/sdb1        4096   16781311   16777216    8G Linux RAID
/dev/sdb2    16781312   17829887    1048576  512M Linux RAID
/dev/sdb3    17829888   38801407   20971520   10G Linux RAID
/dev/sdb4    38801408   59772927   20971520   10G Linux RAID
/dev/sdb5    59772928   61870079    2097152    1G Linux RAID
/dev/sdb6        2048       4095       2048    1M BIOS boot
/dev/sdb7    61870080 5860533134 5798663055  2.7T Linux LVM

Partition table entries are not in disk order.
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 71504D20-CE2D-499B-B052-76817310B91E

Device          Start        End    Sectors  Size Type
/dev/sda1        4096   16781311   16777216    8G Linux RAID
/dev/sda2    16781312   17829887    1048576  512M Linux RAID
/dev/sda3    17829888   38801407   20971520   10G Linux RAID
/dev/sda4    38801408   59772927   20971520   10G Linux RAID
/dev/sda5    59772928   61870079    2097152    1G Linux RAID
/dev/sda6        2048       4095       2048    1M BIOS boot
/dev/sda7    61870080 5860533134 5798663055  2.7T Linux LVM

Partition table entries are not in disk order.
Disk /dev/md0: 8 GiB, 8585674752 bytes, 16768896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md1: 511.7 MiB, 536543232 bytes, 1047936 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 10 GiB, 10728898560 bytes, 20954880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md3: 10 GiB, 10728898560 bytes, 20954880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md4: 1023.4 MiB, 1073152000 bytes, 2096000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Thanks, Sebastian





Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-07 Thread Randy11

Thank you very much!

I am really happy with the help I have been given.

Have a good day.

Randy11

On 07/03/2016 23:51, Pascal Hambourg wrote:

Randy11 wrote:

[  504.460680] EXT4-fs (sda4): unable to read superblock
[  504.462564] EXT4-fs (sda4): unable to read superblock
[  504.464474] EXT4-fs (sda4): unable to read superblock
[  504.475289] FAT-fs (sda4): bogus number of reserved sectors
[  504.477480] FAT-fs (sda4): bogus number of reserved sectors
[  504.487249] qnx4: no qnx4 filesystem (no root dir).

(...)

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1  *            63  209728574  209728512  100G  7 HPFS/NTFS/exFAT
/dev/sda2        209728575  734025914  524297340  250G  7 HPFS/NTFS/exFAT
/dev/sda3        734025915  797514794   63488880 30.3G  c W95 FAT32 (LBA)
/dev/sda4        797515774 3907028991 3109513218  1.5T  5 Extended
/dev/sda5        797515776 3906828992 3109313217  1.5T 8e Linux LVM
/dev/sda6       3906832384 3907028991     196608   96M 83 Linux

What does this mean?

That sda4 is not an ext4, FAT or QNX partition. Hardly surprising, since
it is an extended partition.


Is it serious, doctor?

No.





Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-07 Thread Pascal Hambourg
Randy11 wrote:
> 
> [  504.460680] EXT4-fs (sda4): unable to read superblock
> [  504.462564] EXT4-fs (sda4): unable to read superblock
> [  504.464474] EXT4-fs (sda4): unable to read superblock
> [  504.475289] FAT-fs (sda4): bogus number of reserved sectors
> [  504.477480] FAT-fs (sda4): bogus number of reserved sectors
> [  504.487249] qnx4: no qnx4 filesystem (no root dir).
(...)
> Device     Boot      Start        End    Sectors  Size Id Type
> /dev/sda1  *            63  209728574  209728512  100G  7 HPFS/NTFS/exFAT
> /dev/sda2        209728575  734025914  524297340  250G  7 HPFS/NTFS/exFAT
> /dev/sda3        734025915  797514794   63488880 30.3G  c W95 FAT32 (LBA)
> /dev/sda4        797515774 3907028991 3109513218  1.5T  5 Extended
> /dev/sda5        797515776 3906828992 3109313217  1.5T 8e Linux LVM
> /dev/sda6       3906832384 3907028991     196608   96M 83 Linux
> 
> What does this mean?

That sda4 is not an ext4, FAT or QNX partition. Hardly surprising, since
it is an extended partition.

> Is it serious, doctor?

No.



Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-07 Thread Randy11




That's good, but the problem is that my encrypted partitions, which should
correspond to "/home" and "/swap", are not being used.


Is the cryptsetup package installed?
Have you tried opening the encrypted volumes with cryptsetup luksOpen...?
If that works, you can add them to /etc/crypttab
(unless you kept the one from Wheezy) so that they are opened
automatically at boot.




Thanks for the pointers. I recovered the old /etc/crypttab, recreated my
encrypted swap, updated /etc/crypttab with the new UUID and
IT WORKS!!!


After 15 days of struggling, I am finally getting my PC back :-)

A very big THANK YOU to everyone for your help.

Randy11



One should always wait a bit before celebrating :-(
I was about to suspend my PC when the idea came to me to run
"apt-get dist-upgrade"; here is
what I get (this is only the end):

Setting up linux-image-3.16.0-4-amd64 (3.16.7-ckt20-1+deb8u4) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64
/etc/kernel/postinst.d/zz-update-grub:
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-3.16.0-4-amd64
Found initrd image: /boot/initrd.img-3.16.0-4-amd64
[  504.460680] EXT4-fs (sda4): unable to read superblock
[  504.462564] EXT4-fs (sda4): unable to read superblock
[  504.464474] EXT4-fs (sda4): unable to read superblock
[  504.475289] FAT-fs (sda4): bogus number of reserved sectors
[  504.477480] FAT-fs (sda4): bogus number of reserved sectors
[  504.487249] qnx4: no qnx4 filesystem (no root dir).
Found Windows NT/2000/XP on /dev/sda1
done
Setting up openssl (1.0.1k-3+deb8u4) ...
Setting up python-pil:amd64 (2.6.1-2+deb8u2) ...
Setting up python-imaging (2.6.1-2+deb8u2) ...
Processing triggers for libc-bin
(2.19-18+deb8u3) .


For the record, here is the partitioning:
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x17121712

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1  *            63  209728574  209728512  100G  7 HPFS/NTFS/exFAT
/dev/sda2        209728575  734025914  524297340  250G  7 HPFS/NTFS/exFAT
/dev/sda3        734025915  797514794   63488880 30.3G  c W95 FAT32 (LBA)
/dev/sda4        797515774 3907028991 3109513218  1.5T  5 Extended
/dev/sda5        797515776 3906828992 3109313217  1.5T 8e Linux LVM
/dev/sda6       3906832384 3907028991     196608   96M 83 Linux



What does this mean? Is it serious, doctor?



Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-07 Thread Randy11



On 07/03/2016 00:37, Pascal Hambourg wrote:

On 06/03/2016 23:35, Randy11 wrote:


On 06/03/2016 13:01, Pascal Hambourg wrote:

- Create a small partition (5 MB should be enough), formatted as ext2,
to be mounted on /boot/grub. At best this will shrink the core image
enough for it to fit in the post-MBR gap by avoiding the inclusion of
the lvm module, and at worst it will let grub-install with the --force
option use block lists.

(...)

I created an ext2 partition /dev/sda6 by shrinking my volume group,
which took up the whole of /dev/sda5.


It was not necessary for the partition to hold all of /boot;
/boot/grub was enough, and would have needed a much more modest size.



That's good, but the problem is that my encrypted partitions, which should
correspond to "/home" and "/swap", are not being used.


Is the cryptsetup package installed?
Have you tried opening the encrypted volumes with cryptsetup luksOpen...?
If that works, you can add them to /etc/crypttab (unless
you kept the one from Wheezy) so that they are opened
automatically at boot.




Thanks for the pointers. I recovered the old /etc/crypttab, recreated my
encrypted swap, updated /etc/crypttab with the new UUID and IT
WORKS!!!


After 15 days of struggling, I am finally getting my PC back :-)

A very big THANK YOU to everyone for your help.

Randy11



Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-07 Thread MERLIN Philippe
Hello,
Just a reminder that I am not an expert; however, I ran into roughly the same
problem on my laptop when migrating from one disk to another while enlarging
the partitions.
I ended up with Linux OK and Windows Vista OUT.
In that case there are several solutions:
Reinstall Windows XP; buying a Windows XP Pro licence should not be very
expensive, if you no longer have it.
Second solution: use free tools (usually free for 30 days) that handle this
problem; that is what I did, and Windows Vista was OK again.

> The partitioning was done when a Windows XP was installed...
> it's old. I kept Windows for two reasons: video capture with a
> Hauppauge card and retrieving the routes recorded on my
> Garmin Zumo 350LM GPS.
> The GPS is the last reason for keeping a Windows install.
> 
Philippe Merlin


Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-06 Thread Pascal Hambourg

On 06/03/2016 23:35, Randy11 wrote:


On 06/03/2016 13:01, Pascal Hambourg wrote:


- Create a small partition (5 MB should be enough), formatted as ext2, to
be mounted on /boot/grub. At best this will shrink the core image enough
for it to fit in the post-MBR gap by avoiding the inclusion of the lvm
module, and at worst it will let grub-install with the --force option use
block lists.

(...)

I created an ext2 partition /dev/sda6 by shrinking my volume group,
which took up the whole of /dev/sda5.


It was not necessary for the partition to hold all of /boot;
/boot/grub was enough, and would have needed a much more modest size.



That's good, but the problem is that my encrypted partitions, which should
correspond to "/home" and "/swap", are not being used.


Is the cryptsetup package installed?
Have you tried opening the encrypted volumes with cryptsetup luksOpen...?
If that works, you can add them to /etc/crypttab (unless
you kept the one from Wheezy) so that they are opened
automatically at boot.
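
A minimal sketch of what that looks like in practice (the volume and
mapping names are only examples, following the lv--NN scheme used in this
thread):

  # open an encrypted LV by hand and mount it
  cryptsetup luksOpen /dev/vg/lv--02 lv--02_crypt
  mount /dev/mapper/lv--02_crypt /home

  # matching /etc/crypttab entry so it is opened at boot
  # <target name>  <source device>  <key file>  <options>
  lv--02_crypt     /dev/vg/lv--02   none        luks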




Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-06 Thread Randy11



On 06/03/2016 13:01, Pascal Hambourg wrote:

Randy11 wrote:

On 05/03/2016 22:48, Pascal Hambourg wrote:

Randy11 wrote:

To install Jessie, I only took:
- lv--01 for /
- lv--05 for /var
- lv--08 for swap

Personal opinion: giving logical volumes generic names negates one of
the advantages of LVM, namely being able to give logical volumes
meaningful names rather than mere numbers like partitions.

Good point. It's a habit I picked up from a colleague. I had chosen these
names so as to have as few easily identifiable partitions as possible,
since some of them are encrypted: don't make things easy.

That's one point of view. But not make things easy for whom? For an
administrator, whom it certainly won't help to find his way around.
For a possible attacker, who can in any case fairly easily find out
what the volumes contain, what they are mounted on and what they are
used for, I have my doubts. We are not far from security through
obscurity.


Fair enough, I'll keep that in mind from now on ;-)


The installation dates from the first release of Wheezy.

The Wheezy installer already created partitions aligned on 1 MiB blocks
(2048 sectors of 512 bytes). Indeed, you can see that the start of the
LVM partition is aligned on that size.

On the other hand, that is not the case for the NTFS and FAT partitions,
which are aligned "the old way" on tracks or cylinders (63 sectors/track
for the first one, multiples of 255 heads x 63 sectors/track for the
next two). How were they created? Unless I'm mistaken, the Windows Vista
installer also applies the "modern" alignment.


The partitioning was done when a Windows XP was installed... it's old.
I kept Windows for two reasons: video capture with a Hauppauge card and
retrieving the routes recorded on my Garmin Zumo 350LM GPS. The GPS
is the last reason for keeping a Windows install.



Possible solutions:
- Create a small partition (5 MB should be enough), formatted as ext2, to
be mounted on /boot/grub. At best this will shrink the core image enough
for it to fit in the post-MBR gap by avoiding the inclusion of the lvm
module, and at worst it will let grub-install with the --force option use
block lists.

I already have 3 Windows partitions and 1 "extended"/LVM one. If things
haven't changed too much, I can no longer create a 4th primary
partition.

Partition no. 4 is an extended partition which currently contains only
logical partition no. 5, serving as the LVM PV, but it can contain an
arbitrary number of logical partitions. All that is needed is a tiny bit
of unallocated space, even unaligned, at the end of the disk or at the
start of the extended partition.

Otherwise, you should still be able to shrink the LVM PV and the logical
partition no. 5 containing it by a hair.


I created an ext2 partition /dev/sda6 by shrinking my volume group,
which took up the whole of /dev/sda5.
I saved all the steps,
I will post them later.


With the netinstall I tried to reinstall Grub: reinstallation completed
without complaint. On reboot the Grub menu with the OSes to boot is there,
but an error message:
/boot/vmlinuz... not found???


The contents of the /boot partition are:
system.map-3.16.0-4-amd64
config-3.16.0-4-amd64
initrd.img-3.16.0-4-amd64
vmlinuz-3.16.0-4-amd64
grub/
lost+found/

The line for /boot in /etc/fstab is:
/dev/sda6 /boot ext2 defaults 0 2

This time, the Grub installation was redone via "chroot /target"
after starting a shell.


mount /var
update-grub
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-3.16.0-4-amd64
Found initrd image: /boot/initrd.img-3.16.0-4-amd64
Found Windows NT/2000/XP on /dev/sda1
done


Reboot; we're making progress. The boot starts, but ends in
failure with the message:
Welcome to emergency mode ! ... "journalctl -xb"..."systemctl reboot"...
"systemctl default"
...
Give root password for maintenance :

The last failing steps are:
[DEPEND] Dependency failed for /mnt
[DEPEND] Dependency failed for Local File System
[DEPEND] Dependency failed for File System Check on 
/dev/mapper/vg-lv--03_crypt
[ TIME ] Timed out waiting for device 
dev-mapper-vg\x2dlv\x2d\x2d02_crypt.device.

[DEPEND] Dependency failed for /dev/mapper/vg-lv--02_crypt
[DEPEND] Dependency failed for Swap


I removed the encrypted partitions from fstab, reinstalled Grub and
rebooted.


I got a working Jessie back! :-)


That's good, but the problem is that my encrypted partitions, which should
correspond to "/home" and "/swap", are not being used.

Is there a kind soul out there to help me?

Thanks in advance.

P.S.: I am going to write up the info for 

Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-06 Thread Pascal Hambourg
Randy11 wrote:
> On 05/03/2016 22:48, Pascal Hambourg wrote:
>> Randy11 wrote:
>>>
>>> To install Jessie, I only took:
>>> - lv--01 for /
>>> - lv--05 for /var
>>> - lv--08 for swap
>>
>> Personal opinion: giving logical volumes generic names negates one of
>> the advantages of LVM, namely being able to give logical volumes
>> meaningful names rather than mere numbers like partitions.
>
> Good point. It's a habit I picked up from a colleague. I had chosen these
> names so as to have as few easily identifiable partitions as possible,
> since some of them are encrypted: don't make things easy.

That's one point of view. But not make things easy for whom? For an
administrator, whom it certainly won't help to find his way around.
For a possible attacker, who can in any case fairly easily find out
what the volumes contain, what they are mounted on and what they are
used for, I have my doubts. We are not far from security through
obscurity.

> The installation dates from the first release of Wheezy.

The Wheezy installer already created partitions aligned on 1 MiB blocks
(2048 sectors of 512 bytes). Indeed, you can see that the start of the
LVM partition is aligned on that size.

On the other hand, that is not the case for the NTFS and FAT partitions,
which are aligned "the old way" on tracks or cylinders (63 sectors/track
for the first one, multiples of 255 heads x 63 sectors/track for the
next two). How were they created? Unless I'm mistaken, the Windows Vista
installer also applies the "modern" alignment.

>> Possible solutions:
>> - Create a small partition (5 MB should be enough), formatted as ext2, to
>> be mounted on /boot/grub. At best this will shrink the core image enough
>> for it to fit in the post-MBR gap by avoiding the inclusion of the lvm
>> module, and at worst it will let grub-install with the --force option use
>> block lists.
>
> I already have 3 Windows partitions and 1 "extended"/LVM one. If things
> haven't changed too much, I can no longer create a 4th primary
> partition.

Partition no. 4 is an extended partition which currently contains only
logical partition no. 5, serving as the LVM PV, but it can contain an
arbitrary number of logical partitions. All that is needed is a tiny bit
of unallocated space, even unaligned, at the end of the disk or at the
start of the extended partition.

Otherwise, you should still be able to shrink the LVM PV and the logical
partition no. 5 containing it by a hair.

>> - Convert the partition table to GPT format with gdisk and create a
>> small partition (1 MB) of type "BIOS boot" where grub-install can
>> install the core image. But if the disk contains a Windows installation,
>> Windows will no longer be able to boot. That problem can nevertheless
>> be worked around with a hybrid MBR created with gdisk.
>
> I haven't looked at GPT and gdisk in detail yet, but is it compatible
> with the current partitions

GPT is compatible with any type of partition. There is enough space at
the start of the disk to hold a standard GPT partition table (34
sectors). On the other hand, it is the booting of the installed Windows
that will not be compatible with GPT.

> And if later I want to change my version of Windows, how will things
> go?

As things stand, Windows can only boot from a GPT-formatted disk in UEFI
mode, and must have been installed that way. Hence the suggestion of the
hybrid MBR, which contains a partition table in the traditional MSDOS
format.

>> - If the machine has UEFI firmware, enable booting in UEFI mode,
>> create an EFI system partition (500 MB recommended, but 5 MB is
>> enough) formatted as FAT, to be mounted on /boot/efi, and install
>> grub-efi-amd64 for 64-bit firmware or grub-efi-ia32 for 32-bit
>> firmware. Run "grub-install --removable" if necessary.
>> But again, a Windows installation will no longer be able to dual boot
>> with Debian. You will have to restore the Windows boot loader in the
>> MBR and select the OS via the firmware boot mode, UEFI for Debian and
>> legacy for Windows.
>>
> Can the EFI partition be taken from the LVM part?

No. UEFI firmware does not understand Linux's LVM format; don't even
dream about it.

I mentioned the last two solutions for completeness, but I think the
first one is the simplest, the safest, and the best suited to the
situation.



Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-06 Thread Randy11
ws, how will things
go?



- If the machine has UEFI firmware, enable booting in UEFI mode,
create an EFI system partition (500 MB recommended, but 5 MB is
enough) formatted as FAT, to be mounted on /boot/efi, and install
grub-efi-amd64 for 64-bit firmware or grub-efi-ia32 for 32-bit
firmware. Run "grub-install --removable" if necessary.
But again, a Windows installation will no longer be able to dual boot
with Debian. You will have to restore the Windows boot loader in the
MBR and select the OS via the firmware boot mode, UEFI for Debian and
legacy for Windows.


Can the EFI partition be taken from the LVM part?


Thanks for the explanations already given.

Randy11


/*********/
 Forwarded Message 
Subject:Re: Encrypted LVM and moving from Wheezy to Jessie.
Date:   Sat, 5 Mar 2016 22:37:10 +0100
From:   Randy11 <rand...@free.fr>
To: MERLIN Philippe <phil-deb1.mer...@laposte.net>



Hello,

Apparently they had the good idea of changing the Grub version between
Wheezy and Jessie.
That wouldn't bother me if, along the way, the usual 63-sector limit at
the start of the disk were not exceeded by the new version of Grub.
The most commonly proposed solution is to redo the partitioning of the
disk !!!???


Needless to say, with my 3 Windows partitions at the start of the disk,
I am not in favour of that solution.

I keep looking, starting from the link provided. After several hours of
searching, I still have no solution.

I have an idea, but I don't know whether it is any good or how to do it:
is it possible to reduce the size of what is installed in the MBR?
Put a minimum of things in /dev/sda and the rest in /dev/vg/lv--01 ?



On 05/03/2016 14:27, MERLIN Philippe wrote:


On Saturday, 5 March 2016, 13:49:13 Randy11 wrote:

> your core.img is unusually large.

Searching Google for this message I found the following; I don't know
whether it will help you, I am not an expert on partitioning.


grub2 error 
<https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1059827>


Philippe Merlin











Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-05 Thread Pascal Hambourg
Randy11 wrote:
> 
> Starting from a Wheezy setup with Windows and LVM partitions, including
> 2 encrypted LVM partitions, I wanted to go from Wheezy to Jessie with a
> full installation of Jessie rather than an upgrade - my Wheezy config
> had been tinkered with a bit too much.
> 
> Disk /dev/sda:
> /dev/sda1 100G NTFS
> /dev/sda2 250G NTFS
> /dev/sda3 30.3G FAT32
> /dev/sda4 1.5T Extended
> /dev/sda5 1.5T Linux LVM
> 
> In /dev/sda5 there are 8 LVM partitions.
(...)
> To install Jessie, I only took:
> - lv--01 for /
> - lv--05 for /var
> - lv--08 for swap

Personal opinion: giving logical volumes generic names negates one of
the advantages of LVM, namely being able to give logical volumes
meaningful names rather than mere numbers like partitions.

> In "rescue" mode, error message when doing a "chroot" with lv--01
> mounted and running an "update-grub" followed by a "grub-install":
> grub-install: info:  Scanning for lvm device on distk hostdisk://dev/sda
> grub-install: info:  no LVM signature found.
> grub-install: info:  Scanning for DISKFILTER device on disk 
> hostdisk//dev/sda
> grub-install: info:  Partition 0 starts from 63.
> grub-install: info:  Partition 1 start from 209728575.
> grub-install: info:  Partition 2 start from 734025915.
> grub-install: info:  Partition 4 start from 797515776
> grub-install: info:  guess root_dev `lvmid/.././' from dir 
> `/boot/grub/i386-pc'
> grub-install: info:  setting the root device to `lvmid/.../.../'.
> grub-install: info:  warning: your core.img is unusually large. It won't 
> fit in the embedding area.
> grub-install: info:  error: embedding is not possible, but this is 
> required for RAID and LVM install

In plain terms:

The contents of /boot are in an LVM logical volume, which puts two
conflicting constraints on GRUB's core image:
1) It must include the lvm module that lets it read the LV, which
increases its size.
2) It must be installed in the space reserved between the MBR and the
first partition, since block lists cannot be used with LVM.

But that space is very small on your disk, because the first partition
starts at sector 63, which leaves 62*512=31744 bytes, or 31 KiB.

Note that for some time now, current partitioning programs, including
the Debian installer, have moved the start of the first partition to
sector 2048 (1 MiB) for other reasons (alignment with the block sizes of
Advanced Format hard drives and of SSDs). The partitioning of this disk
must therefore have been done with a rather old partitioning program,
which is at odds with its respectable capacity of at least 2 TB.

And like any software, GRUB tends to grow with each release. I just
looked at a disk with LVM containing both Wheezy and Jessie:
- on Wheezy, /boot/grub/core.img = 28779 bytes
- on Jessie, /boot/grub/i386-pc/core.img = 31956 bytes, i.e. slightly
more than the post-MBR gap on your disk.

Under these conditions, GRUB's core image can no longer be installed on
your disk.

> After each boot, I end up facing the grub prompt:

> GRUB loading
> Welcome to GRUB!
> 
> error: file not found.

To be expected: it is the GRUB core image from Wheezy, still present,
that starts and can no longer find its files in the root LV.

Possible solutions:
- Create a small partition (5 MB should be enough), formatted as ext2, to
be mounted on /boot/grub. At best this will shrink the core image enough
for it to fit in the post-MBR gap by avoiding the inclusion of the lvm
module, and at worst it will let grub-install with the --force option use
block lists.

- Convert the partition table to GPT format with gdisk and create a
small partition (1 MB) of type "BIOS boot" where grub-install can
install the core image. But if the disk contains a Windows installation,
Windows will no longer be able to boot. That problem can nevertheless
be worked around with a hybrid MBR created with gdisk.

- If the machine has UEFI firmware, enable booting in UEFI mode,
create an EFI system partition (500 MB recommended, but 5 MB is
enough) formatted as FAT, to be mounted on /boot/efi, and install
grub-efi-amd64 for 64-bit firmware or grub-efi-ia32 for 32-bit
firmware. Run "grub-install --removable" if necessary.
But again, a Windows installation will no longer be able to dual boot
with Debian. You will have to restore the Windows boot loader in the
MBR and select the OS via the firmware boot mode, UEFI for Debian and
legacy for Windows.
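
A rough sketch of the first option, assuming the new small partition ends
up as /dev/sda6 (adapt the device names to your own layout):

  mkfs.ext2 /dev/sda6
  mv /boot/grub /boot/grub.old        # keep the old files, just in case
  mkdir /boot/grub
  mount /dev/sda6 /boot/grub
  echo '/dev/sda6 /boot/grub ext2 defaults 0 2' >> /etc/fstab
  grub-install /dev/sda               # repopulates /boot/grub and embeds core.img
  update-grub                         # regenerates grub.cfg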



Re: Encrypted LVM and moving from Wheezy to Jessie.

2016-03-05 Thread MERLIN Philippe
On Saturday, 5 March 2016, 13:49:13 Randy11 wrote:
> your core.img is unusually large.
Searching Google for this message I found the following; I don't know
whether it will help you, I am not an expert on partitioning.
grub2 error[1] 
Philippe Merlin


[1] 
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1059827


[PARTIAL success] Re: LVM info - OTHER than HOWTO's

2015-11-22 Thread Richard Owlett

On 11/17/2015 6:08 PM, Richard Owlett wrote:

In some of my reading I came across a page recommending LVM for
ease of adjusting space.

When searching for more information all I'm finding are
essentially HOWTO's with only a couple of paragraphs on "Whats"
and "Whys". Essentially nothing on "Why not".



I've found samples of the outlook I was looking for:

http://www.standalone-sysadmin.com/blog/2008/09/introduction-to-lvm-in-linux/

http://www.standalone-sysadmin.com/blog/2008/11/more-lvm-information/

Useful information might be in 
http://tldp.org/HOWTO/LVM-HOWTO/index.html .








Re: LVM info - OTHER than HOWTO's

2015-11-22 Thread Richard Owlett

On 11/21/2015 4:40 PM, Lisi Reisz wrote:

On Saturday 21 November 2015 12:13:37 Richard Owlett wrote:

My analogy would be "When planning a trip thru NYC, via Grand
Central and Penn Station, are you really interested in number of
steps between levels of intervening subway stations?"


Very much so.  I spend much time sorting out just that.  It makes the
difference between a manageable journey and a non-starter.  Now if it has
lifts for every level change...  Underground trains (subway) never do, but
Railway Trains sometimes do.  Hence the planning.

Lisi



Point taken. The last seven years have taught me "drive on 
pavement, damaged vertebrae impede mobility". Minimally I use two 
canes, on occasion a wheelchair. I was using an image from when I 
worked in NYC.


Maybe a better illustration would be based on not seeing the 
forest for the trees.





Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread John L. Ries
On Sat, 2015-11-21 at 12:55 -0600, Joel Rees wrote:
> That's the common way of explaining fstab, and it is, indeed, the way
> I should have explained it if I were going to bother explaining it
> where slaves to convention congregate.

I agree with your points, but it's rude to sneer.



Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Joel Rees
On Sat, Nov 21, 2015 at 5:16 PM, Pascal Hambourg  wrote:
> Joel Rees a écrit :
>>
>> Thinking in terms of partitions as the things you mount in /etc/fstab.
>
> Err, no.

Sometimes you think of things in ways that don't match the common
convention. Sometimes those ways of thinking spill out onto the WWW.

Anyone else feeling the desperate need to correct me, go ahead, just
realize you are going to add to the confusion.

> The things you mount in /etc/fstab are filesystems, not
> partitions. A filesystem may not even lie in a partition or volume
> (think about tmpfs, nfs...).

That's the common way of explaining fstab, and it is, indeed, the way
I should have explained it if I were going to bother explaining it
where slaves to convention congregate.

But, one, I don't have time to pull in the entire set of common
convention definitions every time I comment on a mailing list, and,
two, many here are not familiar with all the common conventions yet,
and, three, there are often contexts in which slavery to convention
neatly avoids mapping the concepts that are being confused.

Sorry for the acid spill, Pascal. You are not the one who raised the
PH level of my mental processes, you just happened to tip the beaker
was all. Chris was a little more to blame, but the primary blame for
my attitude right now is  not on this list. And that's my problem, not
the list's.

But there are a lot of people here who could profit by giving each
other a break.

-- 
Joel Rees

Be careful when you look at conspiracy.
Arm yourself with knowledge of yourself, as well:
http://reiisi.blogspot.jp/2011/10/conspiracy-theories.html



Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Lisi Reisz
On Saturday 21 November 2015 12:13:37 Richard Owlett wrote:
> My analogy would be "When planning a trip thru NYC, via Grand
> Central and Penn Station, are you really interested in number of
> steps between levels of intervening subway stations?"

Very much so.  I spend much time sorting out just that.  It makes the 
difference between a manageable journey and a non-starter.  Now if it has 
lifts for every level change...  Underground trains (subway) never do, but 
Railway Trains sometimes do.  Hence the planning.

Lisi



Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Brian
On Sat 21 Nov 2015 at 06:13:37 -0600, Richard Owlett wrote:

> My analogy would be "When planning a trip thru NYC, via Grand Central and
> Penn Station, are you really interested in number of steps between levels of
> intervening subway stations?"

You are at liberty to answer your own question. It might provide some
help to travellers on a metro system, particlarly those with restricted
mobility.

Meanwhile, I'd suggest you get on with investigating LVM and stop
moaning about any lack of information about it - which there is not.

If you have a particular query about LVM, ask it. And stop restricting
the type of response in the subject line or body of the mail; it isn't
helpful to responders and is likely to be ignored by sensible people.



Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Pascal Hambourg
Joel Rees a écrit :
> 
> Thinking in terms of partitions as the things you mount in /etc/fstab.

Err, no. The things you mount in /etc/fstab are filesystems, not
partitions. A filesystem may not even lie in a partition or volume
(think about tmpfs, nfs...).



Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Richard Owlett

On 11/19/2015 6:46 PM, Joel Rees wrote:

On Wed, Nov 18, 2015 at 10:02 PM, Richard Owlett  wrote:

On 11/18/2015 4:07 AM, Joel Rees wrote:


2015/11/18 9:09 "Richard Owlett":



In some of my reading I came across a page recommending LVM for ease
  of adjusting space.[snip]

I've a machine set aside for experimenting with how an install is configured
for a couple of personal projects. Relative space usage cannot be determined
in advance.


Sounds like an excellent reason to at least experiment with LVM. ...

When searching for more information all I'm finding are essentially
HOWTO's with only a couple of paragraphs on "Whats" and "Whys".


Essentially nothing


Uhm, no news is good news?


Not really. i.e. if all you have is a hammer, everything looks 
like a nail.






  on "Why not".



Reasons not to use it range from laziness to not needing it after
all (right now) to wanting to use incompatible software. My
impression is that the incompatible software usually tells you
somewhere it's not compatible.



I have a current possible use but want to know in advance of rough spots to
estimate if the effort is likely to be productive.


Rough spots?


In the sense of appropriate/inappropriate applications (cf hammer 
vs screw).



[snip]

Suggestions?


man -k lvm   ?


That led to a productive but OT rabbit trail ;)
Man pages are by design inherently very detailed HOWTO's.



No, those are not rabbit trails. That's basically the stuff you need
to know when you are considering using it.


My analogy would be "When planning a trip thru NYC, via Grand 
Central and Penn Station, are you really interested in number of 
steps between levels of intervening subway stations?"





Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Javi Barroso
Hello,

On 18 November 2015 at 1:08:49 CET, Richard Owlett wrote:
>In some of my reading I came across a page recommending LVM for 
>ease of adjusting space.
>
>When searching for more information all I'm finding are 
>essentially HOWTO's with only a couple of paragraphs on "Whats" 
>and "Whys". Essentially nothing on "Why not".
>
>No information on dual boot.
>
>Suggestions?

See the Wikipedia article, it has useful information:
https://en.m.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)

For dual boot, as said on the list, I think Windows cannot read LVM.

But if you don't want to share data between the two, you can create a boot
partition, then a Windows partition, and an LVM extended partition, and
install Linux on the LVM partition.
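
A minimal sketch of the LVM side of such a layout, assuming the LVM
partition ends up as /dev/sda3 and using made-up names throughout:

  pvcreate /dev/sda3               # mark the partition as an LVM physical volume
  vgcreate vg0 /dev/sda3           # create a volume group on it
  lvcreate -n root -L 20G vg0      # carve out logical volumes for the install
  lvcreate -n home -L 50G vg0
  mkfs.ext4 /dev/vg0/root
  mkfs.ext4 /dev/vg0/home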

Regards



Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Richard Owlett

On 11/20/2015 4:28 PM, Joel Rees wrote:

On Fri, Nov 20, 2015 at 4:47 PM, Chris Bannister
 wrote:
[snip]

http://linuxconfig.org/linux-lvm-logical-volume-manager


And that might be the sort of overview the OP was looking for, even
though it looks more like instructions for use.


It may be more easily massaged into a usable layout than
many HOWTOs.
See my comments to Javi Barroso about differences in visual noise 
in Wikipedia "mobile" and "desktop" page layouts.





Re: LVM info - OTHER than HOWTO's

2015-11-21 Thread Richard Owlett

On 11/21/2015 2:06 AM, Javi Barroso wrote:

Hello,

On 18 November 2015 at 1:08:49 CET, Richard Owlett wrote:

In some of my reading I came across a page recommending LVM for
ease of adjusting space.

When searching for more information all I'm finding are
essentially HOWTO's with only a couple of paragraphs on "Whats"
and "Whys". Essentially nothing on "Why not".

No information on dual boot.

Suggestions?


See the wikipedia, it has useful information:
https://en.m.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)


*EXTREMELY USEFUL* link, but not quite in the manner intended.
After I got ~3/4 of the way thru it something began to look familiar.
Saw the content before as 
en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux).

NOTE BENE en.m.wikipedia.org/... versus en.wikipedia.org/...
The "mobile" format of Wikipedia removes visual noise of the 
"desktop" format.
I found myself following more links in the "mobile" format and 
getting more out of each link.




For dual boot, as said on the list,i think windows cannot read lvm.


Not an issue. I'll multi boot among versions and configurations 
of Debian. The medium term goal is understanding what I want from 
an OS ;)




But if you don't want to share data between the two, you can create a boot
partition, then a Windows partition, and an LVM extended partition, and
install Linux on the LVM partition.

Regards






Re: LVM info - OTHER than HOWTO's

2015-11-20 Thread Joel Rees
On Fri, Nov 20, 2015 at 4:47 PM, Chris Bannister
 wrote:
> On Fri, Nov 20, 2015 at 09:46:34AM +0900, Joel Rees wrote:
>> LVM is much more flexible and less prone to do things to your data
>> than, say, the tools that re-size your partitions the hard way.

Thinking in terms of partitions as the things you mount in /etc/fstab.

The confusion about what exactly a partition is does not resolve itself
easily. You think you have it figured out, but then you try to tell
someone about it and they often have different understanding of the
underlying terms.

Everyone who talks about it seems to use the terms differently.

>> You do
>> still have to exercise common sense, however.
>>
>> I've lost a re-sized partition permanently using a commercial tool
>> whose name I refuse to remember.
>>
>> I lost a bunch of LVM  partitions once through pure carelessness and
>> used the LVM maintenance tools to recover them. Took a couple of hours
>> to read the manuals, recover, and be on my way. Kept using that
>> partition map for several years with no problems.
>
> Please don't confuse physical volumes and partitions, the OP seems
> confused enough already. :)

Indeed, I was not being clear. :(

> http://linuxconfig.org/linux-lvm-logical-volume-manager

And that might be the sort of overview the OP was looking for, even
though it looks more like instructions for use.

:-/

> --
> "If you're not careful, the newspapers will have you hating the people
> who are being oppressed, and loving the people who are doing the
> oppressing." --- Malcolm X
>

I wonder if I have time to write something up that might help resolve
this confusion today.

-- 
Joel Rees

Be careful when you look at conspiracy.
Arm yourself with knowledge of yourself, as well:
http://reiisi.blogspot.jp/2011/10/conspiracy-theories.html



Re: LVM info - OTHER than HOWTO's

2015-11-20 Thread Pascal Hambourg
Joel Rees a écrit :
> 
> I think I have heard of people booting straight out of LVM partitions,
> but that takes more gum tape than I like to use. I do believe grub is
> able to look into LVM partitions somewhat these days,

Indeed. And Linux software RAID.

> so you may want
> to play with having grub on a ("BIOS/DOS" map) primary partition
> booting to a boot/root partition in an LVM managed logical volume.

Using a logical volume as the root filesystem does not require the
bootloader to be able to read LVM. It just requires an initrd or
initramfs, because the kernel itself cannot read LVM on its own. Only
having /boot on LVM requires the bootloader to be able to read LVM.

The same goes with software RAID.



Re: LVM info - OTHER than HOWTO's

2015-11-19 Thread Joel Rees
On Thu, Nov 19, 2015 at 2:42 PM, Martin Str|mberg  wrote:
> [...]
>
>> No information on dual boot.
>
> If with not Linux, it won't work.

That's news to me.

I've multi-booted openBSD, Fedora in a non-VM LVM, debian, SUSE, and a
previous version of the OSS fork of Solaris. Not all at once, but
three or four at a time.

And I'm pretty sure I've dual-booted MSWindows 7 and Fedora with LVM.

Haven't tried LVM in a GPT mapped disk yet.

-- 
Joel Rees

Be careful when you look at conspiracy.
Arm yourself with knowledge of yourself, as well:
http://reiisi.blogspot.jp/2011/10/conspiracy-theories.html



Re: LVM info - OTHER than HOWTO's

2015-11-19 Thread Joel Rees
On Wed, Nov 18, 2015 at 10:02 PM, Richard Owlett  wrote:
> On 11/18/2015 4:07 AM, Joel Rees wrote:
>>
>> 2015/11/18 9:09 "Richard Owlett":
>>>
>>>
>>> In some of my reading I came across a page recommending LVM for ease
>>>  of adjusting space.
>>
>>
>> Yeah. I'm not using it now, but it did come in handy when I was
>> still getting a feeling for partitioning.
>
>
> I've a machine set aside for experimenting with how an install is configured
> for a couple of personal projects. Relative space usage cannot be determined
> in advance.

Sounds like an excellent reason to at least experiment with LVM.

One reason I used it in the past was that, for some reason, I did not
want to use the so-called DOS/BIOS (or linux) extended partitions and
I needed lots of partitions. LVM is a bit more flexible. (I don't
recall trying to use more than one linux extended partition, I've only
recognized that they seem to exist lately.)

With GPT the limits on numbers of partitions, etc., are supposedly
relaxed, but I haven't done that, yet.

>>> When searching for more information all I'm finding are essentially
>>> HOWTO's
>>>  with only a couple of paragraphs on "Whats" and "Whys".
>
> Essentially nothing

Uhm, no news is good news?

>>>
>>>  on "Why not".
>>>
>>
>> Reasons not to use it range from laziness to not needing it after
>> all (right now) to wanting to use incompatible software. My
>> impression is that the incompatible software usually tells you
>> somewhere it's not compatible.
>
>
> I have a current possible use but want to know in advance of rough spots to
> estimate if the effort is likely to be productive.

Rough spots?

I never saw any in over ten years of use. That's part of the reason
you don't hear much about it.

Well, the GUI gadget is a bit slow, slower than gparted when
re-scanning your disk after an edit is applied. But that's only a
problem when you are in a rush, and you really don't want to be in
that kind of a rush when messing with your partitions.

LVM is much more flexible and less prone to do things to your data
than, say, the tools that re-size your partitions the hard way. You do
still have to exercise common sense, however.

I've lost a re-sized partition permanently using a commercial tool
whose name I refuse to remember.

I lost a bunch of LVM  partitions once through pure carelessness and
used the LVM maintenance tools to recover them. Took a couple of hours
to read the manuals, recover, and be on my way. Kept using that
partition map for several years with no problems.
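
For the record, one common way of recovering from this kind of accident
(not necessarily what was used here) is vgcfgrestore, which works from
the metadata backups LVM keeps automatically; a sketch, with vg0 as an
example volume group name:

  vgcfgrestore --list vg0                        # list the automatic metadata backups
  vgcfgrestore -f /etc/lvm/archive/<backup> vg0  # restore a chosen one (path illustrative)
  vgchange -ay vg0                               # reactivate the logical volumes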

>>> No information on dual boot.
>>
>> That's fairly straightforward.

I think I have heard of people booting straight out of LVM partitions,
but that takes more gum tape than I like to use. I do believe grub is
able to look into LVM partitions somewhat these days, so you may want
to play with having grub on a ("BIOS/DOS" map) primary partition
booting to a boot/root partition in an LVM managed logical volume.

>>> Suggestions?
>>
>> man -k lvm   ?
>
> That led to a productive but OT rabbit trail ;)
> Man pages are by design inherently very detailed HOWTO's.
>

No, those are not rabbit trails. That's basically the stuff you need
to know when you are considering using it.

If you are worried about it, use your experimenting box first. If not,
I'm not sure why you would think twice unless your RAID docs say they
don't play nice with LVM (or LVM doesn't play nice with them) or, say,
you are using that XML file system or some other exotic file system.

Your usual ext2, 3, and 4 all work fine in LVM partitions. After
adding extents to a logical partition, you may need to add inodes, and
you do want to be careful when shrinking a partition with live data
in it.  (Common sense, right? Back-up the data, etc.)

Oh, use the GUI at first. It helps you visualize what you are doing.
Other than being slow, it works well for initial partitioning, adding
extents, and reminding yourself where the spare physical partitions
are, etc.

-- 
Joel Rees

Be careful when you look at conspiracy.
Arm yourself with knowledge of yourself, as well:
http://reiisi.blogspot.jp/2011/10/conspiracy-theories.html



Re: LVM info - OTHER than HOWTO's

2015-11-19 Thread Martin Str|mberg
In article  Joel Rees  wrote:
> On Thu, Nov 19, 2015 at 2:42 PM, Martin Str|mberg  wrote:
> > [...]
> >
> >> No information on dual boot.
> >
> > If with not Linux, it won't work.

> That's news to me.

> I've multi-booted openBSD, Fedora in a non-VM LVM, debian, SUSE, and a
> previous version of the OSS fork of Solaris. Not all at once, but
> three or four at a time.

Ok, I don't know if the BSDs can handle LVM or not, but when did
Fedora and Suse leave the Linux kernel? (That's news to me.) Don't know
about "OSS fork of Solaris".

> And I'm pretty sure I've dual-booted MSWindows 7 and Fedora with LVM.

I don't believe you. I'm pretty sure that WINDOWS can't read LVM.

Even if it did, I wouldn't want it to muck around in it.

> Haven't tried LVM in a GPT mapped disk yet.

Obviously that'll work (although I don't remember if I tried
that). It's just partitions...


-- 
MartinS



Re: LVM info - OTHER than HOWTO's

2015-11-19 Thread Richard Owlett

On 11/18/2015 9:58 AM, Darac Marjal wrote:

On Tue, Nov 17, 2015 at 06:08:49PM -0600, Richard Owlett wrote:


In some of my reading I came across a page recommending LVM for
ease of adjusting space.

When searching for more information all I'm finding are
essentially HOWTO's with only a couple of paragraphs on "Whats"
and "Whys". Essentially nothing on "Why not".

No information on dual boot.

Suggestions?



The patent for logical volumes,
http://worldwide.espacenet.com/publicationDetails/biblio?CC=US=5129088==E=en_EP,
might give you an idea of the motivation of LVM.


Using that to get the patent number, I found 
http://www.uspto.gov/ easier to navigate.

Once I edit out the "lawyer talk" it should be informative.



Also note that LVM on linux is quite intimately linked to the
Device Mapper system.


Web search for "Device Mapper" was illuminating.




Re: LVM info - OTHER than HOWTO's

2015-11-19 Thread Joel Rees
On Fri, Nov 20, 2015 at 11:01 AM, Martin Str|mberg  wrote:
> In article  Joel Rees  
> wrote:
>> On Thu, Nov 19, 2015 at 2:42 PM, Martin Str|mberg  wrote:
>> > [...]
>> >
>> >> No information on dual boot.
>> >
>> > If with not Linux, it won't work.
>
>> That's news to me.
>
>> I've multi-booted openBSD, Fedora in a non-VM LVM, debian, SUSE, and a
>> previous version of the OSS fork of Solaris. Not all at once, but
>> three or four at a time.
>
> Ok, I don't know if the BSDs can handle LVM or not,

I'm told that it builds and "works" for some purposes. I think you
have to enable the Linux emulation stuff. I was not using LVM on it,
however. Just on Fedora.

> but when did
> Fedora and Suse leave the Linux kernel? (That's news to me.) Don't know
> about "OSS fork of Solaris".

Check the wikipedia page for Solaris. Illumos and OpenSolaris are
mentioned, although Indiana and others are not.

When I was playing with it, LVM did build on Solaris, IIRC. That was
before ZFS or whatever that is. I don't remember whether I used it
there or not.

No, Solaris is not Linux. Closer to the BSDs, but getting further away.

And I was talking about multi-booting in general, which is what the OP
asked about, not about the Linux kernel.

>> And I'm pretty sure I've dual-booted MSWindows 7 and Fedora with LVM.
>
> I don't believe you. I'm pretty sure that WINDOWS can't read LVM.

I never asked it to.

Set up a vfat formatted partition for sharing.

BTW, I could mount and read NTFS from Fedora, even wrote to it once or twice.

I have heard somewhere of a project that had succeeded in making LVM
visible to MSWindows, but I haven't heard how well they progressed.
They may have given up.

> Even if it did, I wouldn't want it to muck around in it.

That's probably wise.

>> Haven't tried LVM in a GPT mapped disk yet.
>
> Obviously that'll work (although I don't remember if I tried
> that). It's just partitions...

It seems odd to me that you think it obvious. GPT mucks with a lot of
underlying assumptions.

> --
> MartinS
>

-- 
Joel Rees

Be careful when you look at conspiracy.
Arm yourself with knowledge of yourself, as well:
http://reiisi.blogspot.jp/2011/10/conspiracy-theories.html



Re: LVM info - OTHER than HOWTO's

2015-11-19 Thread Chris Bannister
On Fri, Nov 20, 2015 at 09:46:34AM +0900, Joel Rees wrote:
> LVM is much more flexible and less prone to do things to your data
> than, say, the tools that re-size your partitions the hard way. You do
> still have to exercise common sense, however.
> 
> I've lost a re-sized partition permanently using a commercial tool
> whose name I refuse to remember.
> 
> I lost a bunch of LVM  partitions once through pure carelessness and
> used the LVM maintenance tools to recover them. Took a couple of hours
> to read the manuals, recover, and be on my way. Kept using that
> partition map for several years with no problems.

Please don't confuse physical volumes and partitions, the OP seems
confused enough already. :)

http://linuxconfig.org/linux-lvm-logical-volume-manager

-- 
"If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing." --- Malcolm X



Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread John L. Ries
Which systems do you intend to dual boot?  My understanding is that if 
one of them is Windows, you're out of luck; but you can always run 
Windows in a VM and let Linux manage the LVM file systems.


--|
John L. Ries  |
Salford Systems   |
Phone: (619)543-8880 x107 |
or (435)867-8885  |
--|


On Tuesday 2015-11-17 17:08, Richard Owlett wrote:


Date: Tue, 17 Nov 2015 17:08:49
From: Richard Owlett 
To: debian-user 
Subject: LVM info - OTHER than HOWTO's

In some of my reading I came across a page recommending LVM for
ease of adjusting space.

When searching for more information all I'm finding are
essentially HOWTO's with only a couple of paragraphs on "Whats"
and "Whys". Essentially nothing on "Why not".

No information on dual boot.

Suggestions?







Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Darac Marjal

On Tue, Nov 17, 2015 at 06:08:49PM -0600, Richard Owlett wrote:

In some of my reading I came across a page recommending LVM for ease of 
adjusting space.


When searching for more information all I'm finding are essentially 
HOWTO's with only a couple of paragraphs on "Whats" and "Whys". 
Essentially nothing on "Why not".


No information on dual boot.

Suggestions?



The patent for logical volumes, 
http://worldwide.espacenet.com/publicationDetails/biblio?CC=US=5129088==E=en_EP, 
might give you an idea of the motivation of LVM.


Also note that LVM on linux is quite intimately linked to the Device 
Mapper system. 


-- For more information, please reread.




Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Dan Ritter
On Tue, Nov 17, 2015 at 06:08:49PM -0600, Richard Owlett wrote:
> In some of my reading I came across a page recommending LVM for ease
> of adjusting space.
> 
> When searching for more information all I'm finding are essentially
> HOWTO's with only a couple of paragraphs on "Whats" and "Whys".
> Essentially nothing on "Why not".
> 
> No information on dual boot.
> 
> Suggestions?

Here's why not:

LVM is a kludge.

That doesn't mean it isn't useful, but it does mean that it's
rarely the best tool.

LVM can increase the size of partitions by giving them more space on
either an empty section of disk or another disk. Either way, you
then need to increase the filesystem size on that partition,
which is usually but not always doable. It does not grant any
extra redundancy, so when you add an extra disk, you have
increased your chances of hardware failure taking out data.
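
In concrete terms, growing space that way looks roughly like this
(a sketch; the device, VG and LV names are only examples):

  pvcreate /dev/sdb1                # prepare the extra disk or partition
  vgextend vg0 /dev/sdb1            # add it to the volume group
  lvextend -L +100G /dev/vg0/data   # grow the logical volume
  resize2fs /dev/vg0/data           # then grow the ext4 filesystem (online is fine)
  # lvextend -r does the LV and filesystem resize in one step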

LVM can snapshot a partition, so that you can back it up or
go back in time. Doing so is more complex, hairy, and
time-consuming in LVM than in most alternatives, and should
usually be avoided.
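
For reference, the snapshot workflow described above is roughly this
(a sketch; names are examples, and the snapshot needs spare space in the VG):

  lvcreate -s -n home_snap -L 5G /dev/vg0/home   # snapshot of the 'home' LV
  mount -o ro /dev/vg0/home_snap /mnt            # back up from the frozen view
  # ... run the backup against /mnt here ...
  umount /mnt
  lvremove /dev/vg0/home_snap                    # drop it before it fills up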

LVM can do RAID mirroring, but not as well as mdadm RAID1 or 10.

LVM can do RAID striping, but not as well as mdadm RAID0 or 10.

LVM can do RAID5 or 6, but it takes more thought than with
mdadm, and most people should avoid RAID5 or 6 most of the time.

If you need to move things around once in a while, having a nice
big disk available to use as a spare and copying things with
rsync and dd is going to be easier.

If you need to move things around a lot, you might want btrfs or
zfs instead of lvm.

If you don't need to move things around much, but just want
performance, mdadm is better.

-dsr-




Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Steve McIntyre
d...@randomstring.org wrote:
>
>Here's why not:
>
>LVM is a kludge.

Not at all, no. LVM *as a concept* has been around for ages in a lot
of enterprise systems. The Linux implementation using device-mapper
works reasonably well and provides a lot of features that people use a
lot.

>That doesn't mean it isn't useful, but it does mean that it's
>rarely the best tool.
>
>LVM can increase the size of partitions by giving them more space on
>either an empty section of disk or another disk. Either way, you
>then need to increase the filesystem size on that partition,
>which is usually but not always doable. It does not grant any
>extra redundancy, so when you add an extra disk, you have
>increased your chances of hardware failure taking out data.
>
>LVM can snapshot a partition, so that you can back it up or
>go back in time. Doing so is more complex, hairy, and
>time-consuming in LVM than in most alternatives, and should
>usually be avoided.
>
>LVM can do RAID mirroring, but not as well as mdadm RAID1 or 10.
>
>LVM can do RAID striping, but not as well as mdadm RAID0 or 10.
>
>LVM can do RAID5 or 6, but it takes more thought than with
>mdadm, and most people should avoid RAID5 or 6 most of the time.
>
>If you need to move things around once in a while, having a nice
>big disk available to use as a spare and copying things with
>rsync and dd is going to be easier.
>
>If you need to move things around a lot, you might want btrfs or
>zfs instead of lvm.
>
>If you don't need to move things around much, but just want
>performance, mdadm is better.

Or, do what lots of people do: use mdadm to provide the redundancy /
striping at the disk level, then use LVM on top of the mdadm-provided
RAID devices to give the flexibility with snapshots, flexible
partitioning, etc. As/when you need to migrate data, vgextend and
pvmove are massively useful tools. In my experience, pvmove makes a
huge difference - you can move data at ~raw disk speed from old disks
to new without downtime.
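
As a sketch of that migration path, assuming a VG called vg0 that currently
sits on /dev/md1 and a new array /dev/md2 (device names hypothetical):

  pvcreate /dev/md2
  vgextend vg0 /dev/md2    # add the new array to the volume group
  pvmove /dev/md1          # move every allocated extent off the old PV, online
  vgreduce vg0 /dev/md1    # drop the old PV from the VG
  pvremove /dev/md1        # wipe the PV label so the old array can be retired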

To Richard: LVM is never going to work with a dual-boot system unless
you're hosting that system virtually under Linux. It's just like
Windows' own multi-disc flexible storage stuff isn't going to work
with anybody else. If that's an issue for you, then either stay away
from LVM or make sure you have some spare space that's available for
sharing between the different OSes using a common filesystem.

-- 
Steve McIntyre, Cambridge, UK.st...@einval.com
"Further comment on how I feel about IBM will appear once I've worked out
 whether they're being malicious or incompetent. Capital letters are forecast."
 Matthew Garrett, http://www.livejournal.com/users/mjg59/30675.html



Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Richard Owlett
[As I'm subscribed "Reply-To" set to debian-user 
 ]


On 11/18/2015 11:33 AM, John L. Ries wrote:

Which systems do you intend to dual boot?


Two configurations of Squeeze, possibly one of Jessie.


 My understanding is
that if one of them is Windows, you're out of luck; but you can
always run Windows in a VM and let Linux manage the LVM file
systems.


Windows XP runs on a dedicated machine whose functions are 
browsing and email.
Debian machines *PHYSICALLY ISOLATED* from internet "for cause" 
;/ [ don't ask ;]





On Tuesday 2015-11-17 17:08, Richard Owlett wrote:


Date: Tue, 17 Nov 2015 17:08:49
From: Richard Owlett 
To: debian-user 
Subject: LVM info - OTHER than HOWTO's

In some of my reading I came across a page recommending LVM for
ease of adjusting space.

When searching for more information all I'm finding are
essentially HOWTO's with only a couple of paragraphs on "Whats"
and "Whys". Essentially nothing on "Why not".

No information on dual boot.

Suggestions?









Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Stephan Seitz

On Wed, Nov 18, 2015 at 01:29:01PM -0500, Dan Ritter wrote:

LVM is a kludge.


Not at all.


LVM can increase the size of partitions by giving them more space on
either an empty section of disk or another disk. Either way, you


Yes.


then need to increase the filesystem size on that partition,
which is usually but not always doable. It does not grant any


You can resize an ext3 or ext4 partition online without downtime.

I’m doing this quite often with virtual hosts.
- Oh, the partition is getting too small in the VM
- Add a new disk to the VM, hotplug feature
- Add the new disk to the LVM partition
- Resize the filesystem
- Finished and no downtime

This is working with Debian and SLES.
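
On the guest that sequence might look like the following, assuming the
hot-plugged disk shows up as /dev/vdb and the VG/LV are called vg_guest and
root (all names hypothetical):

  pvcreate /dev/vdb
  vgextend vg_guest /dev/vdb
  lvextend -r -l +100%FREE /dev/vg_guest/root   # -r grows the ext4 filesystem in the same step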


If you need to move things around a lot, you might want btrfs or
zfs instead of lvm.


I consider btrfs still experimental. Maybe I will try it in one year, and 
zfs seems to be only available in a fuse implementation.


I prefer ext4 and ext3.

Shade and sweet water!

Stephan

--
| Stephan Seitz  E-Mail: s...@fsing.rootsland.net |
| Public Keys: http://fsing.rootsland.net/~stse/keys.html |




Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread John L. Ries
LVM should work the same way for both distros, but just in case, you might 
want to do the initial setup in Squeeze.  I don't know if anything has 
changed in the LVM format in the past 20 years, but...


--|
John L. Ries  |
Salford Systems   |
Phone: (619)543-8880 x107 |
or (435)867-8885  |
--|


On Wed, 18 Nov 2015, Richard Owlett wrote:


[As I'm subscribed "Reply-To" set to debian-user
 ]

On 11/18/2015 11:33 AM, John L. Ries wrote:

Which systems do you intend to dual boot?


Two configurations of Squeeze, possibly one of Jessie.


 My understanding is
that if one of them is Windows, you're out of luck; but you can
always run Windows in a VM and let Linux manage the LVM file
systems.


Windows XP runs on a dedicated machine whose functions are
browsing and email.
Debian machines *PHYSICALLY ISOLATED* from internet "for cause"
;/ [ don't ask ;]




On Tuesday 2015-11-17 17:08, Richard Owlett wrote:


Date: Tue, 17 Nov 2015 17:08:49
From: Richard Owlett 
To: debian-user 
Subject: LVM info - OTHER than HOWTO's

In some of my reading I came across a page recommending LVM for
ease of adjusting space.

When searching for more information all I'm finding are
essentially HOWTO's with only a couple of paragraphs on "Whats"
and "Whys". Essentially nothing on "Why not".

No information on dual boot.

Suggestions?












Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Richard Owlett

On 11/18/2015 4:07 AM, Joel Rees wrote:

2015/11/18 9:09 "Richard Owlett":


In some of my reading I came across a page recommending LVM for ease
 of adjusting space.


Yeah. I'm not using it now, but it did come in handy when I was
still getting a feeling for partitioning.


I've a machine set aside for experimenting with how an install is 
configured for a couple of personal projects. Relative space 
usage cannot be determined in advance.





When searching for more information all I'm finding are essentially HOWTO's
with only a couple of paragraphs on "Whats" and "Whys". Essentially nothing
on "Why not".



Reasons not to use it range from laziness to not needing it after
all (right now) to wanting to use incompatible software. My
impression is that the incompatible software usually tells you
somewhere it's not compatible.


I have a current possible use but want to know in advance of rough spots to
estimate if the effort is likely to be productive.



No information on dual boot.


That's fairly straightforward.


Suggestions?


man -k lvm   ?


That led to a productive but OT rabbit trail ;)
Man pages are by design inherently very detailed HOWTO's.




Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Pascal Hambourg
Richard Owlett a écrit :
> In some of my reading I came across a page recommending LVM for 
> ease of adjusting space.
> 
> When searching for more information all I'm finding are 
> essentially HOWTO's with only a couple of paragraphs on "Whats" 
> and "Whys". Essentially nothing on "Why not".
> 
> No information on dual boot.
> 
> Suggestions?

Suggestions about what ? Do you have precise questions ?



Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Richard Owlett

On 11/18/2015 2:03 AM, Pascal Hambourg wrote:

Richard Owlett a écrit :

In some of my reading I came across a page recommending LVM for
ease of adjusting space.

When searching for more information all I'm finding are
essentially HOWTO's with only a couple of paragraphs on "Whats"
and "Whys". Essentially nothing on "Why not".

No information on dual boot.

Suggestions?


Suggestions about what ? Do you have precise questions ?




Perhaps I should have said "What should I be reading?"
I don't yet know enough about LVM to be precise.

Introductions to journalism sometimes refer to "the 5 W's and 
sometimes H" meaning "Who, What, When, Where, Why and sometimes 
How". What I'm finding is almost pure "How" with a dash of 
"What". I particularly need understanding of When/Where and Why.


[I've been a computer _user_ since the days of CORC and CUPL in 
the 60's. Now, as a retiree, I've time to understand the guts.]






Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Joel Rees
2015/11/18 9:09 "Richard Owlett" :
>
> In some of my reading I came across a page recommending LVM for ease of
adjusting space.

Yeah. I'm not using it now, but it did come in handy when I was still
getting a feeling for partitioning.

> When searching for more information all I'm finding are essentially
HOWTO's with only a couple of paragraphs on "Whats" and "Whys". Essentially
nothing on "Why not".
>

Reasons not to use it range from laziness to not needing it after all
(right now) to wanting to use incompatible software. My impression is that
the incompatible software usually tells you somewhere it's not compatible.

> No information on dual boot.

That's fairly straightforward.

> Suggestions?

man -k lvm   ?

Joel Rees

Computer memory is just fancy paper,
CPUs just fancy pens.
All is a stream of text
flowing from the past into the future.


Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Andrew McGlashan
On 19/11/2015 6:14 AM, Richard Owlett wrote:
> Windows XP runs on a dedicated machine whose functions are browsing and
> email.

XP -- that's dead now; lots of security issues that will never get fixed.

Do you really need Windows now?  What is it that XP does for you that
you can't do with Windows 7 or Debian?

Cheers
A.



Re: LVM info - OTHER than HOWTO's

2015-11-18 Thread Martin Str|mberg
In article  Richard Owlett  
wrote:
> When searching for more information all I'm finding are 
> essentially HOWTO's with only a couple of paragraphs on "Whats" 
> and "Whys". Essentially nothing on "Why not".

One good use is when you're encrypting / (and /home if it's on its own) and swap.

I use LUKS to encrypt the partition, then put LVM on it and then /,
/home and swap on LVM. This way you only need to give the
passphrase/unlock once.
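
Done by hand (the Debian installer's guided "encrypted LVM" option produces
much the same layout), that setup is roughly the following, assuming the
encrypted partition is /dev/sda2 and the VG is called vg_crypt (both
hypothetical):

  cryptsetup luksFormat /dev/sda2
  cryptsetup luksOpen /dev/sda2 cryptlvm    # one passphrase unlocks everything below
  pvcreate /dev/mapper/cryptlvm
  vgcreate vg_crypt /dev/mapper/cryptlvm
  lvcreate -L 4G  -n swap vg_crypt
  lvcreate -L 30G -n root vg_crypt
  lvcreate -l 100%FREE -n home vg_crypt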


> No information on dual boot.

If dual booting with something that isn't Linux, it won't work. With Debian
it will. Other Linuxes probably will. I'm running several Debian installations
in the above encrypted LVM scenario. (You have to fight the installer for the
non-first installations, though.)


-- 
MartinS



Re: LVM question

2015-04-20 Thread Dark Victorian Spirit
Clear, thanks :)

On Mon, Apr 20, 2015 at 11:46:38AM +0100, Darac Marjal wrote:
 On Mon, Apr 20, 2015 at 12:05:17PM +0200, Dark Victorian Spirit wrote:
  I hope I can ask a question on top of this one,
  what if I have a PV which is configured and in use for a while,
  but I found out that I forgot to set the partition type to LVM.
  
  Can I still change this without data loss or risk?
  And if I don't, will I face issues of another kind?
 
 I believe so. Under Linux, partition types are mostly bookkeeping.
 According to wikipedia, some partition types imply certain access
 schemes (such as saying that a XENIX root partition (type 02h) should
 only be accessed using CHS, not LBA), but no such restrictions apply to
 Linux partitions. The Partition Type doesn't constrain the data stored
 on the partition - that is, you can put an NTFS filesystem into a
 Linux partition, even though there's a different code for NTFS.
 
 To cut a long story short, yes, you should be fine. Nothing's ever
 entirely risk-free, though so A) if it ain't broke, don't fix it and
 B) backup before you change it :)
 
  
  
  On Mon, Apr 20, 2015 at 10:33:13AM +0100, Darac Marjal wrote:
   On Mon, Apr 20, 2015 at 09:26:54AM +0200, Petter Adsen wrote:
Is it possible to have two VGs on the same PV?
   
   I don't believe so. The VG is the mapping layer in the LVM stack. It
   maps the LVs to the PVs. If you were to share a PV between VGs, then
   you'd need some way to tell the VGs which parts of the PV they can use
   (letting them battle it out and potentially over-commit the PV is not
   really a good idea). The easiest idea is to split the underlying device
   into multiple PVs (e.g. use partitions).
   

If so, how can I make a VG with lots of free space smaller? I'm
suspecting that the answer to my first question is no, since this
doesn't seem possible from the man pages.
   
   A quick bit of searching suggests the incantation would be:
* Boot from a rescue/live disc
* Activate your VG
* (You say you've got unallocated space in your VG, so resizing
   filesystems/LVs won't be covered)
* lvm pvs should, at this point, indicate some PFree, which is how
   much you can shrink the PV
* Run lvm pvresize /dev/whatever --setphysicalvolumesize 50G
  (Where /dev/whatever is the PV device and 50G is the new size to
  resize to)
* Finally, resize the PV's partition appropriately.
   
   At this point, you will have a smaller PV and less unallocated space in
   your VG. You can now create another partition, PV that and add it to a
   second VG.
   

Petter

-- 
I'm ionized
Are you sure?
I'm positive.
   
   
  
  
  
  






Re: LVM question

2015-04-20 Thread Petter Adsen
On Mon, 20 Apr 2015 10:33:13 +0100
Darac Marjal mailingl...@darac.org.uk wrote:

 On Mon, Apr 20, 2015 at 09:26:54AM +0200, Petter Adsen wrote:
  Is it possible to have two VGs on the same PV?
 
 I don't believe so. The VG is the mapping layer in the LVM stack. It
 maps the LVs to the PVs. If you were to share a PV between VGs, then
 you'd need some way to tell the VGs which parts of the PV they can use
 (letting them battle it out and potentially over-commit the PV is not
 really a good idea). The easiest idea is to split the underlying
 device into multiple PVs (e.g. use partitions).

I see, thank you for the explanation.

  If so, how can I make a VG with lots of free space smaller? I'm
  suspecting that the answer to my first question is no, since this
  doesn't seem possible from the man pages.
 
 A quick bit of searching suggests the incantation would be:
  * Boot from a rescue/live disc
  * Activate your VG
  * (You say you've got unallocated space in your VG, so resizing
 filesystems/LVs won't be covered)
  * lvm pvs should, at this point, indicate some PFree, which is how
 much you can shrink the PV
  * Run lvm pvresize /dev/whatever --setphysicalvolumesize 50G
(Where /dev/whatever is the PV device and 50G is the new size to
resize to)
  * Finally, resize the PV's partition appropriately.
 
 At this point, you will have a smaller PV and less unallocated space
 in your VG. You can now create another partition, PV that and add it
 to a second VG.

I figured I would have to do that. It's not really a problem right now,
I was mostly wondering if it was possible without messing with
partitioning. But now I know how to do it if it crops up in the future,
and that's a good thing :)

Petter

-- 
I'm ionized
Are you sure?
I'm positive.




Re: LVM question

2015-04-20 Thread Dark Victorian Spirit
I hope I can ask a question on top of this one,
what if I have a PV which is configured and in use for a while,
but I found out that I forgot to set the partition type to LVM.

Can I still change this without data loss or risk?
And if I don't, will I face issues of another kind?


On Mon, Apr 20, 2015 at 10:33:13AM +0100, Darac Marjal wrote:
 On Mon, Apr 20, 2015 at 09:26:54AM +0200, Petter Adsen wrote:
  Is it possible to have two VGs on the same PV?
 
 I don't believe so. The VG is the mapping layer in the LVM stack. It
 maps the LVs to the PVs. If you were to share a PV between VGs, then
 you'd need some way to tell the VGs which parts of the PV they can use
 (letting them battle it out and potentially over-commit the PV is not
 really a good idea). The easiest idea is to split the underlying device
 into multiple PVs (e.g. use partitions).
 
  
  If so, how can I make a VG with lots of free space smaller? I'm
  suspecting that the answer to my first question is no, since this
  doesn't seem possible from the man pages.
 
 A quick bit of searching suggests the incantation would be:
  * Boot from a rescue/live disc
  * Activate your VG
  * (You say you've got unallocated space in your VG, so resizing
 filesystems/LVs won't be covered)
  * lvm pvs should, at this point, indicate some PFree, which is how
 much you can shrink the PV
  * Run lvm pvresize /dev/whatever --setphysicalvolumesize 50G
(Where /dev/whatever is the PV device and 50G is the new size to
resize to)
  * Finally, resize the PV's partition appropriately.
 
 At this point, you will have a smaller PV and less unallocated space in
 your VG. You can now create another partition, PV that and add it to a
 second VG.
 
  
  Petter
  
  -- 
  I'm ionized
  Are you sure?
  I'm positive.
 
 






Re: LVM question

2015-04-20 Thread Darac Marjal
On Mon, Apr 20, 2015 at 09:26:54AM +0200, Petter Adsen wrote:
 Is it possible to have two VGs on the same PV?

I don't believe so. The VG is the mapping layer in the LVM stack. It
maps the LVs to the PVs. If you were to share a PV between VGs, then
you'd need some way to tell the VGs which parts of the PV they can use
(letting them battle it out and potentially over-commit the PV is not
really a good idea). The easiest idea is to split the underlying device
into multiple PVs (e.g. use partitions).

 
 If so, how can I make a VG with lots of free space smaller? I'm
 suspecting that the answer to my first question is no, since this
 doesn't seem possible from the man pages.

A quick bit of searching suggests the incantation would be:
 * Boot from a rescue/live disc
 * Activate your VG
 * (You say you've got unallocated space in your VG, so resizing
filesystems/LVs won't be covered)
 * lvm pvs should, at this point, indicate some PFree, which is how
much you can shrink the PV
 * Run lvm pvresize /dev/whatever --setphysicalvolumesize 50G
   (Where /dev/whatever is the PV device and 50G is the new size to
   resize to)
 * Finally, resize the PV's partition appropriately.

At this point, you will have a smaller PV and less unallocated space in
your VG. You can now create another partition, PV that and add it to a
second VG.
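
Spelled out as commands (a sketch only - the device name and the 50G figure
are placeholders carried over from the steps above):

  vgchange -ay                 # activate the VG from the rescue system
  pvs                          # note PFree, the most the PV can shrink by
  pvresize --setphysicalvolumesize 50G /dev/sda2
  # then shrink the /dev/sda2 partition with parted or fdisk and create a new
  # partition in the freed space, ready for pvcreate/vgcreate of the second VG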

 
 Petter
 
 -- 
 I'm ionized
 Are you sure?
 I'm positive.






Re: LVM question

2015-04-20 Thread Darac Marjal
On Mon, Apr 20, 2015 at 12:05:17PM +0200, Dark Victorian Spirit wrote:
 I hope I can ask a question on top of this one,
 what if I have a PV which is configured and in use for a while,
 but I found out that I forgot to set the partition type to LVM.
 
 Can I still change this without data loss or risk?
 And if I don't, will I face issues of another kind?

I believe so. Under Linux, partition types are mostly bookkeeping.
According to wikipedia, some partition types imply certain access
schemes (such as saying that a XENIX root partition (type 02h) should
only be accessed using CHS, not LBA), but no such restrictions apply to
Linux partitions. The Partition Type doesn't constrain the data stored
on the partition - that is, you can put an NTFS filesystem into a
Linux partition, even though there's a different code for NTFS.

To cut a long story short, yes, you should be fine. Nothing's ever
entirely risk-free, though so A) if it ain't broke, don't fix it and
B) backup before you change it :)
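
If you do decide to tidy it up, changing the type is a metadata-only edit, for
example (assuming the PV sits on partition 2 of /dev/sda - adjust to taste):

  parted /dev/sda set 2 lvm on   # flags the partition as LVM without touching its contents
  # or with fdisk: t, pick partition 2, type 8e ("Linux LVM" on MBR disks), then w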

 
 
 On Mon, Apr 20, 2015 at 10:33:13AM +0100, Darac Marjal wrote:
  On Mon, Apr 20, 2015 at 09:26:54AM +0200, Petter Adsen wrote:
   Is it possible to have two VGs on the same PV?
  
  I don't believe so. The VG is the mapping layer in the LVM stack. It
  maps the LVs to the PVs. If you were to share a PV between VGs, then
  you'd need some way to tell the VGs which parts of the PV they can use
  (letting them battle it out and potentially over-commit the PV is not
  really a good idea). The easiest idea is to split the underlying device
  into multiple PVs (e.g. use partitions).
  
   
   If so, how can I make a VG with lots of free space smaller? I'm
   suspecting that the answer to my first question is no, since this
   doesn't seem possible from the man pages.
  
  A quick bit of searching suggests the incantation would be:
   * Boot from a rescue/live disc
   * Activate your VG
   * (You say you've got unallocated space in your VG, so resizing
  filesystems/LVs won't be covered)
   * lvm pvs should, at this point, indicate some PFree, which is how
  much you can shrink the PV
   * Run lvm pvresize /dev/whatever --setphysicalvolumesize 50G
 (Where /dev/whatever is the PV device and 50G is the new size to
 resize to)
   * Finally, resize the PV's partition appropriately.
  
  At this point, you will have a smaller PV and less unallocated space in
  your VG. You can now create another partition, PV that and add it to a
  second VG.
  
   
   Petter
   
   -- 
   I'm ionized
   Are you sure?
   I'm positive.
  
  
 
 
 
 




Re: LVM and ZFS

2015-04-17 Thread David Wright
Quoting David Christensen (dpchr...@holgerdanske.com):
 
 Then I heard about ZFS.  So, I tried zfs-fuse (Debian package) and
 then ZFS on Linux (http://zfsonlinux.org/).

I love pages like this. I clicked on Debian. The page assumes you know
all about package keys and signing, but feels the necessity to explain
how to uncomment a line by removing the #. Sweet.

 1.  By design, ZFS has a very different mindset from traditional
 partitions, volume management, and file systems.  Just running it
 required a fair amount of learning.  Backing up and restoring ZFS
 was even harder.

I agree. I looked at some docs and even tried to help someone here.
I think they missed the fact that fstab doesn't apply to zfs unless
you specifically tell it to use it, and even then, I'm not sure it
doesn't use vfstab and not fstab. But they went away...

 3.  ZFS should really be run on a machine with ECC memory, which I
 don't have.

...and when I read that, I completely lost interest. (I do have a
Pentium III coppermine with 500MB ECC!)

Cheers,
David.





Re: LVM and mdadm

2015-04-04 Thread Reco
 Hi.

On Sat, 4 Apr 2015 12:48:32 +0200
Petter Adsen pet...@synth.no wrote:

 I've just finished setting up Jessie with mdadm and LVM, the latter of
 which I have never used before.
 
 /dev/md0 is a 1G mirror for /boot, no LVM there. /dev/md1 is a mirror,
 that consists of the major part of /dev/sda and /dev/sdb - both 250G.
 There are also 4G swap partitions on sda and sdb, no RAID there.
 
 The installation went smoothly, and I think I got everything right.
 However, when I run pvdisplay -v, it says:
 DEGRADED MODE. Incomplete RAID LVs will be processed.
 Scanning for physical volume names
   --- Physical volume ---
   PV Name   /dev/md1
   VG Name   ROOTVG
   PV Size   227.90 GiB / not usable 2.00 MiB
   Allocatable   yes 
   PE Size   4.00 MiB
   Total PE  58342
   Free PE   17812
   Allocated PE  40530
   PV UUID   bpoMmU-Z9w0-arNA-Q6Je-jdyl-P2nH-JiPtJY
 
 Does that mean there is something wrong with the mirror under LVM?

It's possible that you have logical volumes created with --mirrors
option, for example. It's not bad, just redundant (i.e. mirroring over
mirroring).
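
A quick way to check for that (the VG name is taken from the pvdisplay output
above; the command itself is generic):

  lvs -a -o lv_name,segtype,devices ROOTVG   # segtype shows "mirror" or "raid1" for mirrored LVs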

Please post the output of vgdisplay -v.

Reco





Re: LVM and mdadm

2015-04-04 Thread Reco
 Hi.

On Sat, 4 Apr 2015 13:04:16 +0200
Petter Adsen pet...@synth.no wrote:
 
  Please post the output of vgdisplay -v.
 
 Here goes. freshinstall is just a snapshot of the system right after
 the first boot.

Yup. vgdisplay -v says just that.

 root@fenris:~# vgdisplay -v
 DEGRADED MODE. Incomplete RAID LVs will be processed.
 Finding all volume groups
 Finding volume group ROOTVG
   --- Volume group ---
 skip

Ok, I have good news, and I have bad news.

The good news is:

1) You don't have any mirrored LVs. Such LVs are explicitly marked by
having 'Mirrored volumes', and you have none of those.

2) You don't seem to have a problem with your LVM configuration,
everything appears to be in place.

And the bad news is:

'-v' option misleads you.

It says that 'Incomplete RAID LVs will be processed' and other
scary stuff regardless of the presence of such LVs. It says 'DEGRADED
MODE', even in the case where everything is OK.

About the only way to know for sure whether it's OK or not is to
carefully view the status of every VG, LV and PV.
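
In practice that review is easiest with the one-line-per-object reporting
commands; the column lists below are just one reasonable choice:

  pvs -o pv_name,vg_name,pv_size,pv_free,pv_attr
  vgs -o vg_name,vg_attr,pv_count,lv_count,vg_size,vg_free
  lvs -a -o lv_name,vg_name,lv_attr,lv_size,segtype,devices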

In fact, it seems that any LVM command with '-v' option will spew such
'warnings'.

It's a new feature in Jessie, and it seems that nobody cared about
documenting it, short of the original bug report, of course [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=905063

Reco





Re: LVM and mdadm

2015-04-04 Thread Petter Adsen
On Sat, 4 Apr 2015 13:56:33 +0300
Reco recovery...@gmail.com wrote:

  Hi.
 
 On Sat, 4 Apr 2015 12:48:32 +0200
 Petter Adsen pet...@synth.no wrote:
 
  I've just finished setting up Jessie with mdadm and LVM, the latter
  of which I have never used before.
  
  /dev/md0 is a 1G mirror for /boot, no LVM there. /dev/md1 is a
  mirror, that consists of the major part of /dev/sda and /dev/sdb -
  both 250G. There are also 4G swap partitions on sda and sdb, no
  RAID there.
  
  The installation went smoothly, and I think I got everything right.
  However, when I run pvdisplay -v, it says:
  DEGRADED MODE. Incomplete RAID LVs will be processed.
  Scanning for physical volume names
--- Physical volume ---
PV Name   /dev/md1
VG Name   ROOTVG
PV Size   227.90 GiB / not usable 2.00 MiB
Allocatable   yes 
PE Size   4.00 MiB
Total PE  58342
Free PE   17812
Allocated PE  40530
PV UUID   bpoMmU-Z9w0-arNA-Q6Je-jdyl-P2nH-JiPtJY
  
  Does that mean there is something wrong with the mirror under LVM?
 
 It's possible that you have logical volumes created with --mirrors
 option, for example. It's not bad, just redundant (i.e. mirroring over
 mirroring).

Yeah, that would be redundant. I wouldn't really know, as I just
created the RAID mirror with the installer, and also used the installer
to create a PV filling md1, with a single VG, and a few LV's inside
that.

 Please post the output of vgdisplay -v.

Here goes. freshinstall is just a snapshot of the system right after
the first boot.

root@fenris:~# vgdisplay -v
DEGRADED MODE. Incomplete RAID LVs will be processed.
Finding all volume groups
Finding volume group ROOTVG
  --- Volume group ---
  VG Name   ROOTVG
  System ID 
  Formatlvm2
  Metadata Areas1
  Metadata Sequence No  6
  VG Access read/write
  VG Status resizable
  MAX LV0
  Cur LV4
  Open LV   3
  Max PV0
  Cur PV1
  Act PV1
  VG Size   227.90 GiB
  PE Size   4.00 MiB
  Total PE  58342
  Alloc PE / Size   42924 / 167.67 GiB
  Free  PE / Size   15418 / 60.23 GiB
  VG UUID   XviEb0-4Xq7-4aJO-4m8c-LtxI-aVGj-8CUIoN
   
  --- Logical volume ---
  LV Path/dev/ROOTVG/LV_ROOT
  LV NameLV_ROOT
  VG NameROOTVG
  LV UUIDec5Fg4-OF3y-yB1S-dsXi-v2Nv-TqTJ-cwbEBg
  LV Write Accessread/write
  LV Creation host, time fenris, 2015-04-04 11:23:03 +0200
  LV snapshot status source of
 freshinstall [active]
  LV Status  available
  # open 1
  LV Size9.31 GiB
  Current LE 2384
  Segments   1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device   253:0
   
  --- Logical volume ---
  LV Path/dev/ROOTVG/LV_VAR
  LV NameLV_VAR
  VG NameROOTVG
  LV UUID8JVtGt-kZtM-P0bv-wFJe-MAse-u1MO-6SenQJ
  LV Write Accessread/write
  LV Creation host, time fenris, 2015-04-04 11:23:43 +0200
  LV Status  available
  # open 1
  LV Size55.88 GiB
  Current LE 14305
  Segments   1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device   253:1
   
  --- Logical volume ---
  LV Path/dev/ROOTVG/LV_HOME
  LV NameLV_HOME
  VG NameROOTVG
  LV UUIDIenPg1-9gY1-n4At-rC3v-6auK-X2Vd-tJ2ldf
  LV Write Accessread/write
  LV Creation host, time fenris, 2015-04-04 11:24:13 +0200
  LV Status  available
  # open 1
  LV Size93.13 GiB
  Current LE 23841
  Segments   1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device   253:2
   
  --- Logical volume ---
  LV Path/dev/ROOTVG/freshinstall
  LV Namefreshinstall
  VG NameROOTVG
  LV UUIDn835Yc-gPq8-asVg-1yDV-1dgL-5eBU-sSV9db
  LV Write Accessread/write
  LV Creation host, time fenris, 2015-04-04 12:55:44 +0200
  LV snapshot status active destination for LV_ROOT
  LV Status  available
  # open 0
  LV Size9.31 GiB
  Current LE 2384
  COW-table size 9.35 GiB
  COW-table LE   2394
  Allocated to snapshot  0.00%
  Snapshot chunk size4.00 KiB
  Segments   1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device   253:3
 

Re: LVM and mdadm

2015-04-04 Thread Petter Adsen
On Sat, 4 Apr 2015 15:09:20 +0300
Reco recovery...@gmail.com wrote:

  Hi.
 
 On Sat, 4 Apr 2015 13:04:16 +0200
 Petter Adsen pet...@synth.no wrote:
  root@fenris:~# vgdisplay -v
  DEGRADED MODE. Incomplete RAID LVs will be processed.
  Finding all volume groups
  Finding volume group ROOTVG
--- Volume group ---
  skip
 
 Ok, I have good news, and I have bad news.
 
 The good news is:
 
 1) You don't have any mirrored LVs. Such LVs are explicitly marked by
 having 'Mirrored volumes', and you have none of those.
 
 2) You don't seem to have a problem with your LVM configuration,
 everything appears to be in place.

Good. :)

 And the bad news is:
 
 '-v' option misleads you.
 
 It says that 'Incomplete RAID LVs will be processed' and other
 scary stuff regardless of the presence of such LVs. It says 'DEGRADED
 MODE', even in the case where everything is OK.

Yes, I noticed that. While viewing the configuration here, the message
only came up with -v, which I thought would be odd if it was really
serious.

 About the only way to know for sure whether it's OK or not is to
 carefully view the status of every VG, LV and PV.
 
 In fact, it seems that any LVM command with '-v' option will spew such
 'warnings'.
 
 It's a new feature in Jessie, and it seems that nobody cared about
 documenting it, short of the original bug report, of course [1].
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=905063

OK, I didn't see that when I was searching for answers. Thank you for
taking the time anyway, it is good to know it was set up correctly,
especially since this is new to me. It would be nice if this was clear
in the man pages, though.

Petter

-- 
I'm ionized
Are you sure?
I'm positive.



