Re: [linux-lvm] when bringing dm-cache online, consumes all memory and reboots

2020-03-25 Thread Zdenek Kabelac

On 24. 03. 20 at 23:35, Gionatan Danti wrote:

On 2020-03-24 16:09, Zdenek Kabelac wrote:

In the past we had a problem where users were using a huge chunk size
and a small 'migration_threshold' - the cache was unable to demote chunks
from the cache to the origin device (the amount of data 'required' for a
demotion was bigger than what the threshold allowed).

So lvm2/libdm implemented a protection to always set at least 8 chunks
as the bare minimum.

Now we clearly face the problem from 'the other side' - users have way
too big chunks (we've seen users with 128M chunks), so the threshold
gets set to 1G, and users are facing a serious bottleneck on the cache
side due to too many promotions/demotions.

We will likely fix this by setting the max chunk size somewhere around 2MiB.
Thanks for the explanation. Maybe it is a naive proposal, but can't you simply
set migration_threshold equal to a single chunk for >2M chunk sizes, and to 8
chunks for smaller ones?
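For reference, migration_threshold can already be tuned per-LV from user space,
and the values lvm2 hands to the kernel are visible via lvs; a minimal sketch,
assuming a cached LV named vg/cached_lv (placeholder) and a reasonably recent
lvm2 (field names can differ between versions):

  # show the cache chunk size plus the policy settings passed to dm-cache
  lvs -o lv_name,chunk_size,cache_policy,cache_settings,kernel_cache_settings vg/cached_lv

  # migration_threshold is in 512-byte sectors: 16384 sectors = 8 MiB,
  # i.e. the 8-chunk minimum at a 1 MiB chunk size
  lvchange --cachesettings 'migration_threshold=16384' vg/cached_lv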


Using large cache chunks likely degrades the usefulness and purpose of the cache.
That said, we are still missing comparative tables showing what the
optimal layouts should look like.

So the idea is not to just 'let it somehow work' but rather to move it towards
more efficient usage of available resources.

Zdenek
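On keeping chunks small, as discussed above: the chunk size can be set
explicitly when the cache pool is created; a rough sketch with placeholder
names, sizes and devices:

  # create the cache pool on the fast device with a modest chunk size
  lvcreate --type cache-pool -L 100G -n cpool --chunksize 256k vg /dev/nvme0n1p1

  # attach the pool to the origin LV
  lvconvert --type cache --cachepool vg/cpool vg/origin_lv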




Re: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble

2020-03-25 Thread Andrew Falgout
The disks are seen, the volume groups are seen.  When I try to activate the
VG I get this:

vgchange -ay vg1
  device-mapper: reload ioctl on  (253:19) failed: Input/output error
  0 logical volume(s) in volume group "vg1" now active

I executed 'vgchange -ay vg1 - -' and this is the only time an
error was thrown.
20:53:16.552602 vgchange[10795] device_mapper/libdm-deptree.c:2921  Adding
target to (253:19): 0 31256068096 raid raid5_ls 3 128 region_size 32768 3
253:13 253:14 253:15 253:16 253:17 253:18
20:53:16.552609 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853  dm
table   (253:19) [ opencount flush ]   [16384] (*1)
20:53:16.552619 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853  dm
reload   (253:19) [ noopencount flush ]   [16384] (*1)
20:53:16.572481 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1903
 device-mapper: reload ioctl on  (253:19) failed: Input/output error

I've uploaded two very verbose, debug-ridden logs:
https://pastebin.com/bw5eQBa8
https://pastebin.com/qV5yft05

Ignore the naming.  It's not a gluster volume.  I was planning on making two and
mirroring them in a gluster setup.

./drae
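For completeness, two commands that usually help narrow down a reload failure
like this are checking the kernel log for the dm-raid error behind the ioctl,
and listing the hidden raid sub-LVs with their backing devices; a minimal
sketch (output field names may vary by lvm2 version):

  # kernel-side reason for the failed table reload (dm-raid / md messages)
  dmesg | tail -n 100

  # the raid5 sub-LVs (_rimage_N / _rmeta_N) and which PVs back them
  lvs -a -o lv_name,lv_attr,segtype,devices,lv_health_status vg1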


On Mon, Mar 23, 2020 at 5:14 AM Bernd Eckenfels wrote:

> Do you see any dmesg kernel errors when you try to activate the LVs?
>
> Regards
> Bernd
>
>
> --
> http://bernd.eckenfels.net
> --
> *From:* linux-lvm-boun...@redhat.com on behalf of Andrew Falgout
> *Sent:* Saturday, March 21, 2020 4:22:04 AM
> *To:* linux-lvm@redhat.com
> *Subject:* [linux-lvm] lvm raid5 : drives all present but vg/lvm will not
> assemble
>
>
> This started on a Raspberry Pi 4 running Raspbian.  I moved the disks to
> my Fedora 31 system, which is running the latest updates and kernel.  When I
> had the same issues there, I knew it wasn't Raspbian.
>
> I've reached the end of my rope on this. The disks are there, all three
> are accounted for, and the LVM data on them can be seen.  But the VG refuses
> to activate, reporting I/O errors.
>
> [root@hypervisor01 ~]# pvs
>   PV         VG                Fmt  Attr PSize    PFree
>   /dev/sda1  local_storage01   lvm2 a--  <931.51g        0
>   /dev/sdb1  local_storage01   lvm2 a--  <931.51g        0
>   /dev/sdc1  local_storage01   lvm2 a--  <931.51g        0
>   /dev/sdd1  local_storage01   lvm2 a--  <931.51g        0
>   /dev/sde1  local_storage01   lvm2 a--  <931.51g        0
>   /dev/sdf1  local_storage01   lvm2 a--  <931.51g <931.51g
>   /dev/sdg1  local_storage01   lvm2 a--  <931.51g <931.51g
>   /dev/sdh1  local_storage01   lvm2 a--  <931.51g <931.51g
>   /dev/sdi3  fedora_hypervisor lvm2 a--    27.33g   <9.44g
>   /dev/sdk1  vg1               lvm2 a--    <7.28t        0
>   /dev/sdl1  vg1               lvm2 a--    <7.28t        0
>   /dev/sdm1  vg1               lvm2 a--    <7.28t        0
> [root@hypervisor01 ~]# vgs
>   VG                #PV #LV #SN Attr   VSize  VFree
>   fedora_hypervisor   1   2   0 wz--n- 27.33g <9.44g
>   local_storage01     8   1   0 wz--n- <7.28t <2.73t
>   vg1                 3   1   0 wz--n- 21.83t      0
> [root@hypervisor01 ~]# lvs
>   LV        VG                Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   root      fedora_hypervisor -wi-ao     15.00g
>   swap      fedora_hypervisor -wi-ao      2.89g
>   libvirt   local_storage01   rwi-aor--- <2.73t                                      100.00
>   gluster02 vg1               Rwi---r--- 14.55t
>
> The one in question is the vg1/gluster02 LV.
>
> I try to activate the VG:
> [root@hypervisor01 ~]# vgchange -ay vg1
>   device-mapper: reload ioctl on  (253:19) failed: Input/output error
>   0 logical volume(s) in volume group "vg1" now active
>
> I've got the debugging output from:
> vgchange -ay vg1 - -
> lvchange -ay --partial vg1/gluster02 - -
>
> Just not sure where I should dump the data for people to look at.  Is
> there a way to tell the MD raid layer to ignore its metadata, since there
> wasn't an actual disk failure, and rebuild the metadata from what is in the
> LVM?  Or can I even get the LV to mount so I can pull the data off?
>
> Any help is appreciated.  If I can save the data, great.  I'm tossing this
> to the community to see if anyone else has an idea of what I can do.
> ./digitalw00t
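On the question above about activating the LV despite the error: the standard
lvm2 options for activating raid LVs with problems are the activation modes; a
hedged sketch only, since it will not by itself fix whatever makes the kernel
reject the table:

  # allow activation of a degraded (but still functional) raid5
  lvchange -ay --activationmode degraded vg1/gluster02

  # last resort: allow activation with missing pieces, purely to copy data off
  lvchange -ay --activationmode partial vg1/gluster02

  # if either succeeds, mount read-only and pull the data
  mount -o ro /dev/vg1/gluster02 /mnt/recovery   # assumes /mnt/recovery exists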
___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/