On 10/12/18 5:46 PM, Sherpa Sherpa wrote:
Thank you for the reply. I don't mind if fdisk sees partitions. I read this
on tldp.org: "To avoid striping performance problems, LVM can't tell that
two PVs are on the same physical disk, so if you create a striped LV then
the stripes could be on different partitions on the same disk, resulting in
a *decrease* in performance rather than an increase." But does this also
apply to disks built on a RAID backend?
If you use partitioning, only create one partition per backing device
and use it as a PV.
This avoids striping across multiple PVs on the same backing device.
The same configuration flaw (i.e. using multiple partitions on the same
backing device as PVs and thus potentially striping across them) can occur
with any backing store that allows partitioning, so don't do it on SW/HW
RAID either.
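
As a minimal sketch of that layout (the device, partition and VG names below
are placeholders, not taken from this thread):

    parted -s /dev/sdX mklabel gpt              # fresh GPT label on the backing device
    parted -s /dev/sdX mkpart primary 0% 100%   # one partition spanning the whole device
    parted -s /dev/sdX set 1 lvm on             # mark the partition type as LVM
    pvcreate /dev/sdX1                          # the single PV on this backing device
    vgcreate myvg /dev/sdX1                     # add it to a VG; one PV per backing device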
Heinz
Warm Regards
Urgen Sherpa
On Fri, Oct 12, 2018 at 9:09 PM Heinz Mauelshagen <[email protected]> wrote:
On 10/11/18 4:31 PM, Emmanuel Gelati wrote:
If you use sdb only for data, you don't need to create a partition
on the disk at all.
Though that's true, keeping one partition per disk for each LVM PV
adds 'visibility' via tools like fdisk/[cs]fdisk, parted etc.,
which show the partition type as 'Linux LVM'.
When the whole disk is used, blkid or lsblk will still provide that
information, e.g. 'blkid --match-token TYPE=LVM2_member'.
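
For instance, assuming a whole-disk PV on a placeholder device /dev/sdX,
either of these will still identify it:

    blkid --match-token TYPE=LVM2_member        # lists every block device carrying an LVM2 PV signature
    lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdX     # FSTYPE column reports LVM2_member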
Heinz
On Thu, Oct 11, 2018 at 16:26, David Teigland <[email protected]> wrote:
On Thu, Oct 11, 2018 at 08:53:07AM +0545, Sherpa Sherpa wrote:
> I have LVM (backed by hardware RAID5) with a logical volume and a volume group
> named "dbstore-lv" and "dbstore-vg", which have sdb1, sdb2 and sdb3 created from
> the same sdb disk.
>
> sdb                                     8:16   0 19.7T  0 disk
> ├─sdb1                                  8:17   0  7.7T  0 part
> │ └─dbstore-lv (dm-1)                 252:1   0  9.4T  0 lvm  /var/db/st01
> ├─sdb2                                  8:18   0  1.7T  0 part
> │ └─dbstore-lv (dm-1)                 252:1   0  9.4T  0 lvm  /var/db/st01
> └─sdb3                                  8:19   0 10.3T  0 part
>   └─archive--archivedbstore--lv (dm-0) 252:0   0 10.3T  0 lvm
>
> I am assuming this is due to a disk seek problem, as partitions of the same disk
> are used for the same LVM, or maybe it is due to saturation of the disks.
You shouldn't add different partitions as different PVs. If it's too late
to fix, it might help to create a new LV that uses only one of the
partitions, e.g. lvcreate -n lv -L size vg /dev/sdb2, and then copy your
current LV to the new one.
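
A rough sketch of that approach; the new LV name, size, filesystem and mount
point below are illustrative placeholders and have to be adapted to the
actual setup:

    lvcreate -n newlv -L <size> dbstore-vg /dev/sdb2    # allocate the new LV from a single PV only
    mkfs.ext4 /dev/dbstore-vg/newlv                     # or whichever filesystem is in use
    mount /dev/dbstore-vg/newlv /mnt/newlv
    rsync -aHAX /var/db/st01/ /mnt/newlv/               # copy the data from the old LV's mount point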
_______________________________________________
linux-lvm mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/