Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-28 Thread Chris Laprise
On 5/28/19 8:42 AM, brendan.h...@gmail.com wrote: On Saturday, May 25, 2019 at 2:28:13 PM UTC-4, Chris Laprise wrote: I think the only _good_ way to deal with COW metadata expansion, since it's always related to data fragmentation, is to keep expanding it and let system performance degrade
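The "keep expanding it" approach maps to growing the pool's metadata LV by hand. A minimal sketch, assuming the default Qubes 4.0 names (qubes_dom0/pool00); substitute your own VG/pool:

  sudo lvextend --poolmetadatasize +128M qubes_dom0/pool00   # grow the tmeta LV by 128MiB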

Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-28 Thread brendan.hoar
On Saturday, May 25, 2019 at 2:28:13 PM UTC-4, Chris Laprise wrote: > I think the only _good_ way to deal with COW metadata expansion, since > it's always related to data fragmentation, is to keep expanding it and > let system performance degrade accordingly. Yup. One could argue that the same
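For context, the COW metadata being discussed lives in a hidden LV alongside the pool. A sketch of how to inspect it, again assuming Qubes 4.0 default names:

  sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0
  # [pool00_tmeta] holds the mapping metadata; metadata_percent shows how full it is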

Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-28 Thread brendan.hoar
On Saturday, May 25, 2019 at 8:50:57 PM UTC-4, unman wrote: > Docs also say that where a thin pool is used primarily for thin > provisioning a larger value is optional. Did you mean to say "optimal" or did the docs really say that larger cluster sizes are optional? In any case, I think the docs
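The chunk-size trade-off the docs describe is fixed at pool-creation time. A hypothetical example with made-up names (vg0/testpool); LVM accepts multiples of 64KiB up to 1GiB:

  sudo lvcreate --type thin-pool -L 100G --chunksize 64k -n testpool vg0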

Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-25 Thread unman
On Fri, May 24, 2019 at 08:03:22PM -0700, brendan.h...@gmail.com wrote: > Looks like the chunksize of the pool is the controlling factor (256KB) here. > > % lvs -o name,chunksize|grep pool > > Docs say the default value is 64KB (that’s also the minimum for a thin pool). > Not sure why the Qubes OS
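The quoted lvs query can be extended to show the pool's discards mode alongside the chunk size. Assuming the default Qubes 4.0 pool name:

  sudo lvs -o lv_name,chunk_size,discards qubes_dom0/pool00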

Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-25 Thread Chris Laprise
On 5/25/19 12:45 PM, Brendan Hoar wrote: On Sat, May 25, 2019 at 12:09 PM Chris Laprise wrote: It would be interesting if thin-lvm min transfer were the reason for this difference in behavior between fstrim and the filesystem. Indeed. Pretty sure that is
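The two discard paths being compared can be exercised directly inside a VM; this is a sketch, not from the thread. fstrim submits large batched ranges over the free space, while the inline discard mount option trims on each delete, where small deletes may fall below a 256KB granularity:

  sudo fstrim -v /                  # one-shot, batched trims over free space
  sudo mount -o remount,discard /   # inline trim on every delete (ext4)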

Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-25 Thread 'awokd' via qubes-users
Brendan Hoar wrote on 5/25/19 4:45 PM: On Sat, May 25, 2019 at 12:09 PM Chris Laprise wrote: I’m going to add an Issue/Feature request to add metadata store monitoring and alerts to the disk space widget. :) I had the same thought reading Chris' email.
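A minimal dom0 sketch of what such an alert could look like; the threshold and names are assumptions, not the actual widget code:

  #!/bin/sh
  POOL=qubes_dom0/pool00
  THRESHOLD=90
  PCT=$(lvs --noheadings -o metadata_percent "$POOL" | tr -d ' ')
  # awk exits 0 (true) when usage is at or above the threshold
  if awk -v p="$PCT" -v t="$THRESHOLD" 'BEGIN { exit !(p >= t) }'; then
      echo "WARNING: $POOL metadata ${PCT}% full" >&2
  fi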

Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-25 Thread Brendan Hoar
On Sat, May 25, 2019 at 12:09 PM Chris Laprise wrote: > > It would be interesting if thin-lvm min transfer were the reason for > this difference in behavior between fstrim and the filesystem. Indeed. Pretty sure that is the case for some workloads. However, I think you're wrong to assume that
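Whether a given workload's discards survive can be checked against the limits the block layer advertises inside the VM (the device name assumes a Qubes Xen guest):

  lsblk -D /dev/xvda   # prints DISC-ALN, DISC-GRAN, DISC-MAX, DISC-ZERO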

Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-25 Thread Chris Laprise
On 5/24/19 10:00 PM, brendan.h...@gmail.com wrote: Hi folks, Summary/Questions: 1. Is the extremely large minimum-IO value of 256KB for the dom0 block devices representing Q4 VM volumes in the thin pool ... intentional? 2. And if so, to what purpose (e.g. performance, etc.)? 3. And if so, has
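The values lsblk reports are read from sysfs, where dm-thin devices advertise the pool's chunk size. To see them directly in dom0:

  grep . /sys/block/dm-*/queue/minimum_io_size
  grep . /sys/block/dm-*/queue/discard_granularity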

[qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-24 Thread brendan.hoar
Looks like the chunksize of the pool is the controlling factor (256KB) here. % lvs -o name,chunksize|grep pool Docs say the default value is 64KB (that’s also the minimum for a thin pool). Not sure why the Qubes OS value is higher.
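One place a non-default value could plausibly come from is the LVM configuration; this is a guess to check, not something confirmed in the thread:

  lvmconfig --type default allocation/thin_pool_chunk_size
  lvmconfig --type default allocation/thin_pool_chunk_size_policy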

[qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256KB) MIN-IO and DISC-GRAN values

2019-05-24 Thread brendan.hoar
Hi folks, Summary/Questions: 1. Is the extremely large minimum-IO value of 256KB for the dom0 block devices representing Q4 VM volumes in the thin pool ... intentional? 2. And if so, to what purpose (e.g. performance, etc.)? 3. And if so, has the impact of this value on discards
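To reproduce the observation in dom0 (volume naming follows Qubes 4.0 defaults):

  lsblk -o NAME,MIN-IO,OPT-IO,DISC-GRAN,DISC-MAX | grep vm-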