On 22.4.2017 at 18:32, Xen wrote:
Gionatan Danti wrote on 22-04-2017 9:14:
On 14-04-2017 10:24, Zdenek Kabelac wrote:
However there are many different solutions for different problems -
and with the current script execution a user may build his own solution -
i.e. call 'dmsetup remove -f' for running thin volumes, so all instances
get an 'error' device when the pool is above some threshold setting
(just like the old 'snapshot' invalidation worked) - this way the user
just kills the thin-volume users' tasks, but still keeps the thin-pool
usable for easy maintenance.
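
For illustration, a minimal sketch of such a script (assuming a VG named
'vg', a pool named 'thinpool', simple LV names without dashes, and an
arbitrary 95% threshold - none of these names come from lvm2 itself):

    #!/bin/sh
    # Sketch only: replace every active thin volume of vg/thinpool with
    # an 'error' device once the pool data usage passes the threshold.
    VG=vg
    POOL=thinpool
    THRESHOLD=95

    # data_percent is the pool data usage as reported by lvm2
    USED=$(lvs --noheadings -o data_percent "$VG/$POOL" | cut -d. -f1 | tr -d ' ')

    if [ "$USED" -ge "$THRESHOLD" ]; then
        # 'dmsetup remove -f' loads the error target into still-open
        # devices, so their users start getting I/O errors while the
        # pool itself stays usable for maintenance
        lvs --noheadings -o lv_name -S "pool_lv=$POOL" "$VG" |
        while read -r LV; do
            dmsetup remove -f "$VG-$LV"
        done
    fi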


This is a very good idea - I tried it and it indeed works.

So a user script can execute dmsetup remove -f on the thin pool?

Oh no, for all volumes.

That is awesome; that means an errors=remount-ro mount will cause a remount,
right?

Well, the 'remount-ro' itself will fail, but you will not be able to read
anything from the volume either.
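
For context, errors=remount-ro is the ext4 mount option in question - a
hypothetical example, with placeholder device and mountpoint:

    mount -o errors=remount-ro /dev/vg/thinvol /mnt/data

Once 'dmsetup remove -f' has swapped the device for the error target, even
that remount to read-only cannot complete, and reads fail too.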

So as said - many users, many different solutions are needed...

Currently lvm2 can't support that much variety and complexity...



However, it is not very clear to me what is the best method to monitor
the allocated space and trigger an appropriate user script (I
understand that version > .169 has %checkpoint scripts, but current
RHEL 7.3 is on .166).

I had the following ideas:
1) monitor the syslog for the "WARNING pool is dd.dd% full" message;

This is what my script is doing of course. It is a bit ugly and a bit messy by now, but I could still clean it up :p.

However, it does not follow syslog, but checks periodically. You could also follow it with tail -f.

It does not allow for user specified actions yet.

In that case it would fulfill the same purpose as > .169, only a bit more poorly.
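
A rough sketch of idea 1), following the log instead of polling (the log
path and the exact message text vary by distribution, and the action is
only a placeholder):

    tail -F /var/log/messages |
    grep --line-buffered 'WARNING.*pool.*full' |
    while read -r line; do
        # hook for a user-specified action, e.g. mail or dmsetup
        logger "thin-pool threshold reached: $line"
    done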

One more thing: from the device-mapper docs (and indeed as observed in my
tests), the "pool is dd.dd% full" message is raised one single time:
once a message has been raised, even if the pool is emptied and refilled,
no new messages are generated. The only method I found to let the system
re-generate the message is to deactivate and reactivate the thin pool
itself.
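
That is, something along these lines should re-arm the warning, assuming
the pool's thin volumes can be deactivated first (vg/thinpool is a
placeholder):

    # deactivate and reactivate the pool so dmeventd warns again
    lvchange -an vg/thinpool
    lvchange -ay vg/thinpool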

This is not my experience on LVM .111 from Debian.

For me new messages are generated when:

- the pool reaches any threshold again
- I remove and recreate any thin volume.

Because my system regenerates snapshots, I now get an email from my script when the pool is > 80%, every day.

So if I keep the pool above 80%, every day at 0:00 I get an email about it :p, because syslog gets a new entry for it. This is how I know :p.

The explanation here is simple - when you create a new thinLV, there is currently a full suspend - and before the 'suspend' the pool is 'unmonitored',
after the resume it is monitored again - and you get your warning logged again.
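
If that is the mechanism, then toggling monitoring by hand should -
presumably - re-arm the warning the same way, without creating a thinLV
(untested assumption; vg/thinpool is a placeholder):

    lvchange --monitor n vg/thinpool
    lvchange --monitor y vg/thinpool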


Zdenek

