Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Gionatan Danti

On 19-10-2018 15:08, Zdenek Kabelac wrote:

Hi

It's rather that different workloads benefit from different
caching approaches.

If your system is heavy on writes -  dm-writecache is what you want;
if you mostly read - dm-cache will win.

That's why there is also dmstats, to help identify hotspots and guide the
overall logic.

There is nothing that wins in all cases - so ATM two different
targets are provided -  NVDIMMs already seem to change the game a lot...

dm-writecache could be seen as an 'extension' of your page cache that holds
a longer list of dirty pages...

Zdenek


Thanks for this information. Reading a bit of the commit that provides 
dm-writecache, it seems to be a sort of "L2 page cache", right? It should be 
*very* interesting for use with NVDIMMs and/or fast NVMe devices...


Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Ilia Zykov
> 
> dm-writecache could be seen as an 'extension' of your page cache that holds
> a longer list of dirty pages...
> 
> Zdenek
> 

Does it mean that the dm-writecache is always empty after a reboot?
Thanks.





Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Ilia Zykov


On 19.10.2018 16:08, Zdenek Kabelac wrote:
> On 19. 10. 18 at 14:45, Gionatan Danti wrote:
>> On 19/10/2018 12:58, Zdenek Kabelac wrote:
>>> Hi
>>>
>>> Writecache simply doesn't care about caching your reads at all.
>>> Your RAM with its page-caching mechanism keeps read data as long as
>>> there is free RAM for this - the less RAM goes to the page cache, the
>>> fewer read operations remain cached.
>>
>> Hi, does it mean that to have *both* a fast write cache *and* a read cache
>> one should use a dm-writecache target + a dm-cache writethrough target
>> (possibly pointing to different devices)?
>>
>> Can you quantify/explain why and how much faster dm-writecache is for
>> heavy write workloads?
> 
> 
> 
> Hi
> 
> It's rather that different workloads benefit from different
> caching approaches.
> 
> If your system is heavy on writes -  dm-writecache is what you want;
> if you mostly read - dm-cache will win.
> 
> That's why there is also dmstats, to help identify hotspots and guide the
> overall logic.
> There is nothing that wins in all cases - so ATM two different targets
> are provided -  NVDIMMs already seem to change the game a lot...
> 
> dm-writecache could be seen as an 'extension' of your page cache that holds
> a longer list of dirty pages...
> 
> Zdenek

Sorry, but I don't understand either. What happens if a reboot occurs between 
data being written to the fast cache and its writeback to the slow device? 
After the reboot, which data will be read: the new data from the fast cache, 
or the old data from the slow device? And what data will be read by 
'dd if=/dev/cached iflag=direct'?
Thanks.




Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Zdenek Kabelac

On 19. 10. 18 at 14:45, Gionatan Danti wrote:

On 19/10/2018 12:58, Zdenek Kabelac wrote:

Hi

Writecache simply doesn't care about caching your reads at all.
Your RAM with its page-caching mechanism keeps read data as long as there 
is free RAM for this - the less RAM goes to the page cache, the fewer read 
operations remain cached.


Hi, does it mean that to have *both* a fast write cache *and* a read cache one 
should use a dm-writecache target + a dm-cache writethrough target (possibly 
pointing to different devices)?

Can you quantify/explain why and how much faster dm-writecache is for heavy 
write workloads?




Hi

It's rather that different workloads benefit from different caching 
approaches.

If your system is heavy on writes -  dm-writecache is what you want;
if you mostly read - dm-cache will win.

That's why there is also dmstats, to help identify hotspots and guide the overall logic.
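As a rough illustration of the hotspot workflow (the device path and region count below are hypothetical, and the options should be checked against your dmstats version):

```shell
# Carve the cached LV into 100 fixed-size regions with per-region I/O counters
# (/dev/vg0/lv_slow is a placeholder device path).
dmstats create --areas 100 /dev/vg0/lv_slow

# ... run the normal workload for a while ...

# Report per-area read/write statistics; areas with high counts are the
# hotspots that a cache layer would benefit from keeping on fast storage.
dmstats report /dev/vg0/lv_slow

# Clean up the statistics regions when finished.
dmstats delete --allregions /dev/vg0/lv_slow
```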
There is nothing that wins in all cases - so ATM two different targets are 
provided -  NVDIMMs already seem to change the game a lot...


dm-writecache could be seen as an 'extension' of your page cache that holds 
a longer list of dirty pages...


Zdenek



Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Gionatan Danti

On 19/10/2018 12:58, Zdenek Kabelac wrote:

Hi

Writecache simply doesn't care about caching your reads at all.
Your RAM with its page-caching mechanism keeps read data as long as 
there is free RAM for this - the less RAM goes to the page cache, the fewer 
read operations remain cached.


Hi, does it mean that to have *both* a fast write cache *and* a read cache 
one should use a dm-writecache target + a dm-cache writethrough target 
(possibly pointing to different devices)?

Can you quantify/explain why and how much faster dm-writecache is for heavy 
write workloads?


Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8



Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Zdenek Kabelac

On 19. 10. 18 at 11:55, Ilia Zykov wrote:



On 19.10.2018 12:12, Zdenek Kabelac wrote:

On 19. 10. 18 at 0:56, Ilia Zykov wrote:

Maybe it will be implemented later? But it seems a little strange to me
that there is no way to clear garbage out of the cache.
Maybe I do not understand? Can you please explain this behavior.
For example:


Hi

Applying my brain logic here:

Cache (by default) operates on 32KB chunks.
SSDs (usually) have a minimal trimmable block size of 512KB.

The conclusion is that it's non-trivial to even implement TRIM support
for the cache - something would need to keep a secondary data structure
holding the information about which cached blocks are
completely 'unused/trimmed' and available for a 'complete block trim'
(i.e. something like the way ext4 implements 'fstrim' support).

Second thought - if there is a wish to completely 'erase' the cache, there
is a very simple path using 'lvconvert --uncache' - and once the cache
is needed again, create it again from scratch.

Note - dm-cache is a SLOW-moving cache - it doesn't target accelerating
one-time usage - i.e. if you read a block just once from slow storage, that
doesn't mean it will be immediately cached.

Dm-cache is about keeping info about used blocks on the 'slow' storage (hdd),
which typically does not support/implement TRIM. There could
possibly be a multi-layer cache, where even the cached device can handle
TRIM - but this kind of construct is not really supported, and it's even
unclear whether it would make any sense to introduce this concept ATM (since
there would need to be some well-measurable benefit).

And a final note - there is upcoming support for accelerating writes with
the new dm-writecache target.

Regards


Zdenek



Thank you, I supposed it was so.
One more little question about dm-writecache:
The description says that:

"It doesn't cache reads because reads are supposed to be cached in page cache
in normal RAM."

Does this only mean that read misses are not promoted to the cache?



Hi

Writecache simply doesn't care about caching your reads at all.
Your RAM with its page-caching mechanism keeps read data as long as there is 
free RAM for this - the less RAM goes to the page cache, the fewer read 
operations remain cached.


It's probably worth adding a comment about the older dm-cache - where read 
accesses are basically accounted (so the most-used blocks can be promoted to 
the caching storage device). If reads are served by your page cache, they 
can't be accounted - that's just to explain why repeated reads of the same 
block, which are basically served by your page cache, don't lead to the quick 
promotion of that block to the cache that one could expect without thinking 
about the details behind it.



Zdenek


Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Ilia Zykov


On 19.10.2018 12:12, Zdenek Kabelac wrote:
> On 19. 10. 18 at 0:56, Ilia Zykov wrote:
>> Maybe it will be implemented later? But it seems a little strange to me
>> that there is no way to clear garbage out of the cache.
>> Maybe I do not understand? Can you please explain this behavior.
>> For example:
> 
> Hi
> 
> Applying my brain logic here:
> 
> Cache (by default) operates on 32KB chunks.
> SSDs (usually) have a minimal trimmable block size of 512KB.
> 
> The conclusion is that it's non-trivial to even implement TRIM support
> for the cache - something would need to keep a secondary data structure
> holding the information about which cached blocks are
> completely 'unused/trimmed' and available for a 'complete block trim'
> (i.e. something like the way ext4 implements 'fstrim' support).
> 
> Second thought - if there is a wish to completely 'erase' the cache, there
> is a very simple path using 'lvconvert --uncache' - and once the cache
> is needed again, create it again from scratch.
> 
> Note - dm-cache is a SLOW-moving cache - it doesn't target accelerating
> one-time usage - i.e. if you read a block just once from slow storage, that
> doesn't mean it will be immediately cached.
> 
> Dm-cache is about keeping info about used blocks on the 'slow' storage (hdd)
> which typically does not support/implement TRIM. There could
> possibly be a multi-layer cache, where even the cached device can handle
> TRIM - but this kind of construct is not really supported, and it's even
> unclear whether it would make any sense to introduce this concept ATM (since
> there would need to be some well-measurable benefit).
> 
> And a final note - there is upcoming support for accelerating writes with
> the new dm-writecache target.
> 
> Regards
> 
> 
> Zdenek
> 

Thank you, I supposed it was so.
One more little question about dm-writecache:
The description says that:

"It doesn't cache reads because reads are supposed to be cached in page cache
in normal RAM."

Does this only mean that read misses are not promoted to the cache?





Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Zdenek Kabelac

On 19. 10. 18 at 11:42, Gionatan Danti wrote:

On 19/10/2018 11:12, Zdenek Kabelac wrote:
And a final note - there is upcoming support for accelerating writes with the 
new dm-writecache target.


Hi, shouldn't that already be possible with the current dm-cache and writeback 
caching?



Hi

dm-cache targets a different goal - the new dm-writecache target has a much 
more optimal write pattern to accelerate writes, compared even with dm-cache's 
'writeback' mode.


Another point - dm-writecache is written with NVDIMMs in mind, and it's 
also simpler.
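At the device-mapper level (LVM integration was still upcoming at this point), a raw table for the new target could be sketched roughly as follows - the device paths and 4096-byte block size are placeholders, and the exact format should be checked against the kernel's writecache documentation:

```shell
# Table format (per the kernel's device-mapper writecache docs):
#   <start> <length> writecache <p|s> <origin dev> <cache dev> <block size> <#opt args>
# 'p' caches onto persistent memory (NVDIMM); 's' caches onto a plain SSD.
dmsetup create wcached --table \
  "0 $(blockdev --getsz /dev/vg0/lv_slow) writecache s /dev/vg0/lv_slow /dev/nvme0n1p1 4096 0"

# Before tearing the device down, flush dirty blocks back to the origin.
dmsetup message wcached 0 flush
dmsetup remove wcached
```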



Zdenek




Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Gionatan Danti

On 19/10/2018 11:12, Zdenek Kabelac wrote:
And a final note - there is upcoming support for accelerating writes with 
the new dm-writecache target.


Hi, shouldn't that already be possible with the current dm-cache and 
writeback caching?


Thanks.


--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8



[linux-lvm] The node was fenced in the cluster when cmirrord was enabled on LVM2.2.02.120

2018-10-19 Thread Gang He
Hello List,

I got a bug report from a customer, which said the node was fenced in the 
cluster when they enabled cmirrord.
Before the node was fenced, we can see some log lines printed as below,

2018-09-25T12:55:26.555018+02:00 qu1ci11 cmirrord[6253]: cpg_mcast_joined 
error: 2
2018-09-25T12:55:31.604832+02:00 qu1ci11 sbd[2865]:  warning: inquisitor_child: 
/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-0-0-2 requested a reset
2018-09-25T12:55:31.608112+02:00 qu1ci11 sbd[2865]:emerg: do_exit: 
Rebooting system: reboot
2018-09-25T12:55:33.202189+02:00 qu1ci11 kernel: [ 4750.932328] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93273] - retrying
2018-09-25T12:55:35.186091+02:00 qu1ci11 kernel: [ 4752.916268] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [9/93274] - retrying
2018-09-25T12:55:41.382129+02:00 qu1ci11 kernel: [ 4759.112231] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93275] - retrying
2018-09-25T12:55:41.382157+02:00 qu1ci11 kernel: [ 4759.116237] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93276] - retrying
2018-09-25T12:55:41.534092+02:00 qu1ci11 kernel: [ 4759.264201] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93278] - retrying
2018-09-25T12:55:41.534117+02:00 qu1ci11 kernel: [ 4759.264274] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93279] - retrying
2018-09-25T12:55:41.534119+02:00 qu1ci11 kernel: [ 4759.264278] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93277] - retrying
 ...

2018-09-25T12:56:26.439557+02:00 qu1ci11 lrmd[3795]:  warning: 
rsc_VG_ASCS_monitor_6 process (PID 4467) timed out
2018-09-25T12:56:26.439974+02:00 qu1ci11 lrmd[3795]:  warning: 
rsc_VG_ASCS_monitor_6:4467 - timed out after 6ms
2018-09-25T12:56:26.534104+02:00 qu1ci11 kernel: [ 4804.264240] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93321] - retrying
2018-09-25T12:56:26.534122+02:00 qu1ci11 kernel: [ 4804.264287] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93320] - retrying
2018-09-25T12:56:26.534124+02:00 qu1ci11 kernel: [ 4804.264311] device-mapper: 
dm-log-userspace: [LYuPIux2] Request timed out: [15/93322] - retrying

Did you guys encounter a similar issue before? I found a similar bug 
report at 
http://lists.linux-ha.org/pipermail/linux-ha/2014-December/048427.html 
If you know the root cause, please let me know. 


Thanks
Gang


  





Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Zdenek Kabelac

On 19. 10. 18 at 0:56, Ilia Zykov wrote:

Maybe it will be implemented later? But it seems a little strange to me when 
there is no way to clear garbage out of the cache.
Maybe I do not understand? Can you please explain this behavior.
For example:


Hi

Applying my brain logic here:

Cache (by default) operates on 32KB chunks.
SSDs (usually) have a minimal trimmable block size of 512KB.

The conclusion is that it's non-trivial to even implement TRIM support for the 
cache - something would need to keep a secondary data structure holding the 
information about which cached blocks are completely 'unused/trimmed' and 
available for a 'complete block trim' (i.e. something like the way ext4 
implements 'fstrim' support).


Second thought - if there is a wish to completely 'erase' the cache, there is a 
very simple path using 'lvconvert --uncache' - and once the cache is needed 
again, create it again from scratch.
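A minimal sketch of that path, with hypothetical VG/LV/device names:

```shell
# Detach and delete the cache: dirty blocks are flushed back to the origin LV
# first, so no data is lost.
lvconvert --uncache vg0/lv_data

# When caching is wanted again, build a fresh cache pool on the fast device
# and attach it to the origin LV.
lvcreate --type cache-pool -L 10G -n cpool vg0 /dev/nvme0n1p1
lvconvert --type cache --cachepool vg0/cpool vg0/lv_data
```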


Note - dm-cache is a SLOW-moving cache - it doesn't target accelerating 
one-time usage - i.e. if you read a block just once from slow storage, that 
doesn't mean it will be immediately cached.


Dm-cache is about keeping info about used blocks on the 'slow' storage (hdd), 
which typically does not support/implement TRIM. There could possibly be a 
multi-layer cache, where even the cached device can handle TRIM - but this 
kind of construct is not really supported, and it's even unclear whether it 
would make any sense to introduce this concept ATM (since there would need to 
be some well-measurable benefit).


And a final note - there is upcoming support for accelerating writes with the 
new dm-writecache target.


Regards


Zdenek
