On 10.6.2016 at 05:08 Bruce Dubbs wrote:
I am having a problem with building LVM2.2.02.155 from source.
Actually, the build is OK:
./configure --prefix=/usr \
--exec-prefix= \
--enable-applib \
--enable-cmdlib \
On 9.6.2016 at 22:24 Markus Mikkolainen wrote:
I seem to have hit the same snag as Mark describes in his post.
https://www.redhat.com/archives/linux-lvm/2015-April/msg00025.html
with kernel 4.4.6 I detached (--splitcache) a writeback cache from a mounted
lv which was then synchronized and
On 30.5.2016 at 21:53 Phillip Susi wrote:
On 05/30/2016 04:15 AM, Zdenek Kabelac wrote:
Hi
Please provide a full 'vgchange -ay -' trace of such activation
command.
Also specify which version of lvm2 is in use by your distro.
2.02.133
On 31.5.2016 at 12:57 Brian J. Murrell wrote:
I have a Fedora 23 system running (presumably, since I can't boot it to
be 100% sure) 2.02.132 of LVM. It has ceased to boot and reports:
device-mapper: resume ioctl on (253:2) failed: Invalid argument
Unable to resume laptop-pool00-tpool
On 23.6.2016 at 20:02 Chris Friesen wrote:
On 06/23/2016 11:21 AM, Zdenek Kabelac wrote:
On 23.6.2016 at 18:35 Chris Friesen wrote:
[root@centos7 centos]# vgscan --mknodes
Configuration setting "snapshot_autoextend_percent" invalid. It's not part
of any section.
Con
On 23.6.2016 at 18:35 Chris Friesen wrote:
On 06/23/2016 02:34 AM, Zdenek Kabelac wrote:
On 22.6.2016 at 16:52 Chris Friesen wrote:
On 06/22/2016 03:23 AM, Zdenek Kabelac wrote:
On 21.6.2016 at 17:22 Chris Friesen wrote:
I'm using the stock CentOS7 version, I think.
LVM
On 20.1.2016 at 13:54 Марк Коренберг wrote:
2016-01-20 17:09 GMT+05:00 Zdenek Kabelac <zkabe...@redhat.com>:
ed sta
Thanks for the response, but I do not understand how thin-provisioning is
related to the question I'm asking.
As far as I under
On 19.2.2016 at 19:40 Lentes, Bernd wrote:
Hi,
I have a script in which I invoke lvremove and lvcreate. With lvremove I don't
have problems, but with lvcreate I do.
I'm redirecting stdout and stderr to a file because the script is executed by
cron and I'd like to have a look afterwards if
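A minimal sketch of that redirection pattern (the VG/LV names and the log path are made-up examples, not from the thread):

```shell
#!/bin/sh
# Sketch: run LVM commands from cron and keep stdout AND stderr for later review.
# vg00/backup_snap and the log path are hypothetical example names.
LOG=/tmp/lvm-cron.log

run() {
    # timestamp each invocation so separate cron runs can be told apart
    echo "== $(date '+%F %T') $*" >>"$LOG"
    "$@" >>"$LOG" 2>&1          # capture stdout and stderr together
}

run lvremove -f vg00/backup_snap
run lvcreate -s -L 2G -n backup_snap vg00/data
```

Checking each call's exit status (e.g. `run lvcreate ... || exit 1`) keeps a failed lvcreate from going unnoticed until the log is read.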
On 20.2.2016 at 14:25 Lentes, Bernd wrote:
- On 19 Feb 2016 at 21:31, Zdenek Kabelac zdenek.kabe...@gmail.com wrote:
On 19.2.2016 at 19:40 Lentes, Bernd wrote:
Hi,
I have a script in which I invoke lvremove and lvcreate. With lvremove I don't
have problems, but with lvcreate
On 16.3.2016 at 00:52 Steven Dake (stdake) wrote:
On 3/15/16, 3:56 PM, "linux-lvm-boun...@redhat.com on behalf of Zdenek
Kabelac" <linux-lvm-boun...@redhat.com on behalf of
zdenek.kabe...@gmail.com> wrote:
On 15.3.2016 at 23:31 Serguei Bezverkhi (sbezverk) wrote:
Hel
On 29.2.2016 at 12:35 e...@thyrsus.com wrote:
This is automatically generated email about markup problems in a man
page for which you appear to be responsible. If you are not the right
person or list, please tell me so I can correct my database.
See http://catb.org/~esr/doclifter/bugs.html
On 27.4.2016 12:50, Xen wrote:
1. Does LVM cache support discards of the underlying blocks (in the cache)
when the filesystem discards the blocks?
It does in a devel branch - hopefully will be upstreamed shortly...
Regards
Zdenek
On 25.4.2016 10:59, Gionatan Danti wrote:
On 23-04-2016 10:40 Gionatan Danti wrote:
On 22/04/2016 16:04, Zdenek Kabelac wrote:
I assume you are missing a newer kernel.
There was originally this bug.
Regards
Zdenek
Hi Zdenek,
I am running CentOS 6.7 fully patched, kernel version
2.6.32
On 22.5.2016 19:07, Xen wrote:
I am not sure if, due to my recent posts ;-) I would still be allowed to write
here. But, perhaps it is important regardless.
I have an embedded LVM. The outer volume is a cached LVM, that is to say, two
volumes are cached from a different PV. One of the cached
On 24.5.2016 15:45, Gionatan Danti wrote:
On 18-05-2016 15:47 Gionatan Danti wrote:
One question: I did some test (on another machine), deliberately
killing/stopping the lvmetad service/socket. When the pool was almost
full, the following entry was logged in /var/log/messages
WARNING:
On 17.5.2016 15:09, Gionatan Danti wrote:
Well yeah - ATM we rather take 'early' action and try to stop any user
on thin-pool overfill.
It is a very reasonable standing
Basically, whenever 'lvresize' fails - the dmeventd plugin now tries
to unconditionally unmount any associated
On 17.5.2016 19:17, Xen wrote:
Strange, I didn't get my own message.
Zdenek Kabelac wrote on 17-05-2016 11:43:
There is no plan ATM to support boot from thinLV in the near future.
Just use a small boot partition - it's the safest variant - it just holds
kernels and ramdisks...
That's not what
On 15.5.2016 12:33, Gionatan Danti wrote:
Hi list,
I had an unexpected filesystem unmount on a machine where I am using thin
provisioning.
Hi
Well yeah - ATM we rather take 'early' action and try to stop any user
on thin-pool overfill.
It is a CentOS 7.2 box (kernel 3.10.0-327.3.1.el7,
On 16.5.2016 13:14, Alasdair G Kergon wrote:
On Sun, May 15, 2016 at 04:35:44AM +, Tom Jay wrote:
I've posted a question to the debian-user mailing list, but have yet to receive
a response.
I am running Debian 7.9 64-bit with kernel version 3.2.0 and would like to use
the 'lvconvert
On 18.5.2016 10:13, Tom Jay wrote:
Hello, This question was in fact answered by the debian-kernel mailing list,
who informed me that the documentation was probably wrong. The kernel does
actually support merging volumes, just the 'dmsetup targets' output only
refers to output from modules that
On 18.5.2016 03:34, Xen wrote:
Zdenek Kabelac wrote on 18-05-2016 0:26:
On 17.5.2016 22:43, Xen wrote:
Zdenek Kabelac wrote on 17-05-2016 21:18:
I don't know much about Grub, but I do know its lvm.c by heart now almost :p.
lvm.c by grub is mostly useless...
Then I feel we should take
On 2.5.2016 13:03, Fabian Herschel wrote:
Can LVM do a delta sync, and does LVM have something like a bitmap to show
which parts are in sync and which need a sync?
Hi
lvm uses a small metadata LV with each raid1 leg, which holds the bitmap
tracking which regionsize areas are in sync.
So
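The per-leg bitmap and region size described above can be observed with lvs reporting fields; a sketch, where the VG/LV names (vg0, raidlv) are placeholders:

```shell
# Show the raid1 LV together with its hidden rimage/rmeta sub-LVs,
# the region size (granularity of the sync bitmap) and sync progress.
lvs -a -o lv_name,segtype,region_size,sync_percent vg0

# If a leg is suspected stale, a full resynchronization can be forced:
lvchange --resync vg0/raidlv
```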
On 8.5.2016 17:01, Илья Бока wrote:
#lvcreate -L600M -n test vg-arch
Insufficient free space: 150 extents needed, but only 0 available
but fixed with:
#pvchange --allocatable y /dev/mapper/lvm
Maybe change the message to something more usable?
Hi
Thanks for report.
I'd say that making PV
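The diagnose-and-fix sequence from the report, sketched end to end (PV/VG names are the ones quoted above; the pv_attr column is one way to spot a non-allocatable PV):

```shell
# An 'a' in the pv_attr field marks an allocatable PV; its absence
# explains "Insufficient free space ... 0 available" despite free space.
pvs -o pv_name,vg_name,pv_free,pv_attr

# Re-enable allocation on the PV, then retry the create:
pvchange --allocatable y /dev/mapper/lvm
lvcreate -L600M -n test vg-arch
```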
On 19.4.2016 at 03:05 shankha wrote:
Hi,
Please allow me to describe our setup.
1) 8 SSDs with a raid5 on top of them. Let us call the raid device: dev_raid5
2) We create a Volume Group on dev_raid5
3) We create a thin pool occupying 100% of the volume group.
We performed some experiments.
On 19.7.2016 at 17:28 Scott Sullivan wrote:
Hello,
Could someone please clarify if there is a legitimate reason to worry about
data security of an old (removed) LVM snapshot?
For example, when you lvremove a LVM snapshot, is it possible for data to be
recovered if you create another LVM and
On 30.6.2016 at 11:10 Scott Hazelhurst wrote:
Dear all
I sent the email below about three weeks ago. Today I upgraded another
machine — I had exactly the same thing happen and I appear to have lost
some LVM volumes
Hmm. At least I know it’s a problem and can backup the LVM volumes before
On 27.7.2016 at 21:17 Stuart Gathman wrote:
On 07/19/2016 11:28 AM, Scott Sullivan wrote:
Could someone please clarify if there is a legitimate reason to worry
about data security of an old (removed) LVM snapshot?
For example, when you lvremove a LVM snapshot, is it possible for data
to be
On 17.9.2016 at 20:02 Lars Ellenberg wrote:
On Sat, Sep 17, 2016 at 04:40:36PM +0200, Xen wrote:
Lars Ellenberg wrote on 17-09-2016 15:49:
On Sat, Sep 17, 2016 at 09:29:16AM +0200, Xen wrote:
I want to ask again:
What is the proper procedure when duplicating a disk with DD?
depends
On 7.9.2016 at 11:29 Marcus Meissner wrote:
Hi,
Is there a specific reason that old LVM2 tarballs are removed from
ftp://sources.redhat.com/pub/lvm2/
e.g. there are wide gaps in the series 133-143 144-147 147-152 152-154 154-162
162-165
Yep, some releases are 'developmental'
On 28.9.2016 at 15:51 Charles Koprowski wrote:
On Sun, 10 Aug 2008 18:45:16 +0100, John Leach wrote:
On Thu, 2008-08-07 at 20:41 +0100, Alasdair G Kergon wrote:
> To remove the metadata areas you need to:
>
> get an up-to-date metadata backup (vgcfgbackup)
On 7.11.2016 at 16:58 Alexander 'Leo' Bergolth wrote:
On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
On 7.11.2016 at 10:30 Alexander 'Leo' Bergolth wrote:
I am experiencing a dramatic degradation of the sequential write speed
on a raid1 LV that resides on two USB-3 connected harddisks
On 24.11.2016 at 15:02 Stefan Bauer wrote:
Hi Peter,
now it all makes sense. On this ubuntu machine upstartd with udev is taking care of
vgchange.
After some digging, /lib/udev/rules.d/85-lvm2.rules shows, that vgchange is
only executed with -a y
We will test this on the weekend, but I'm certain
On 8.11.2016 at 16:15 Alexander 'Leo' Bergolth wrote:
On 11/08/2016 10:26 AM, Zdenek Kabelac wrote:
On 7.11.2016 at 16:58 Alexander 'Leo' Bergolth wrote:
On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
Is there a way to change the regionsize for an existing LV?
I'm afraid
On 28.11.2016 at 12:58 Tomaz Beltram wrote:
Hi,
I'm doing a backup of a running mongodb using an LVM snapshot. Sometimes I
run into a deadlock situation and the kernel reports blocked tasks for jbd2,
mongod, dmeventd and my tar doing backup.
This happens very rarely (one in a thousand) but the
On 14.1.2017 at 06:41 kn...@knebb.de wrote:
Hi all,
I thought my filter rules were fine now. But they are not.
I have in /etc/lvm/lvm.conf:
filter = [ "r|/dev/sdb|","r|/dev/sdc|" ]
I scan my PVs:
[root@backuppc ~]# pvscan --cache
WARNING: PV AvK0Vn-vAdJ-K4nf-0N1x-u1fR-dlWG-dJezdg on
On 29.11.2016 at 11:28 Tomaz Beltram wrote:
On 29. 11. 2016 09:38, Zdenek Kabelac wrote:
Please switch to newer version of lvm2.
The sequence with snapshot activation has been reworked to minimize the possibility
of hitting this kernel race - the race is still there even with the latest kernel
On 2.1.2017 at 20:20 kn...@knebb.de wrote:
Hi,
So I tried:
filter = [ "r|sdb.*/|","a|drbd0/|","a|.*/|" ]
Just check the comment for filters in lvm.conf -
it states you should NOT mix 'r' & 'a' together
(there is even no need for this in 99.%)
Well, it writes "be
look for.
Zdenek
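Following that advice, a reject-only version of the filter from this thread would look roughly like this in lvm.conf (devices not matched by any pattern are accepted implicitly, so no trailing 'a' rule is needed):

```shell
# /etc/lvm/lvm.conf - devices section (sketch based on the thread's filter)
devices {
    # reject sdb entirely; everything else (drbd0 included) is
    # accepted by default because no pattern matches it
    filter = [ "r|/dev/sdb.*|" ]
}
```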
On 09/03/2017 16:33, Gionatan Danti wrote:
On 09/03/2017 12:53, Zdenek Kabelac wrote:
Hmm - it would be interesting to see your 'metadata' - it should
still be quite a good fit - 128M of metadata for 512G when you are not using
snapshots.
What's been your actual test scenario
On 10.4.2017 at 11:29 lejeczek wrote:
hi there
I could not extend my striped LV; I had 3 stripes and wanted to add one more.
The only way LVM let me do it was where I ended up with this:
--- Segments ---
Logical extents 0 to 751169:
Type     striped
Stripes  3
Stripe
On 10.4.2017 at 13:16 lejeczek wrote:
On 10/04/17 12:03, Zdenek Kabelac wrote:
On 10.4.2017 at 11:29 lejeczek wrote:
hi there
I could not extend my striped LV; I had 3 stripes and wanted to add one more.
The only way LVM let me do it was where I ended up with this:
--- Segments
On 13.4.2017 at 15:52 Xen wrote:
Stuart Gathman wrote on 13-04-2017 14:59:
If you are going to keep snapshots around indefinitely, the thinpools
are probably the way to go. (What happens when you fill up those?
Hopefully it "freezes" the pool rather than losing everything.)
My
On 14.4.2017 at 11:07 Gionatan Danti wrote:
On 14-04-2017 10:24 Zdenek Kabelac wrote:
But it's currently impossible to expect you will fill the thin-pool to
full capacity and everything will continue to run smoothly - this is
not going to happen.
Even with EXT4 and errors=remount-ro
On 22.4.2017 at 18:32 Xen wrote:
Gionatan Danti wrote on 22-04-2017 9:14:
On 14-04-2017 10:24 Zdenek Kabelac wrote:
However there are many different solutions for different problems -
and with current script execution - user may build his own solution -
i.e. call
'dmsetup remove -f
On 23.4.2017 at 07:29 Xen wrote:
Zdenek Kabelac wrote on 22-04-2017 23:17:
That is awesome - that means an errors=remount-ro mount will cause a remount
right?
Well 'remount-ro' will fail but you will not be able to read anything
from volume as well.
Well that is still preferable
On 13.7.2017 at 15:25 Matthias Leopold wrote:
hi,
I'm fiddling around with LVM-backed KVM raw disks that I want to use
_directly_ in oVirt virtualization (as "Direct LUN"). I would like to avoid
"importing", dd, etc. if possible. In the KVM origin system exists a mapping
of one iSCSI
On 26.4.2017 at 09:26 Gionatan Danti wrote:
On 24-04-2017 23:59 Zdenek Kabelac wrote:
If you set '--errorwhenfull y' - it should instantly fail.
It's my understanding that "--errorwhenfull y" will instantly fail writes
which imply new allocation requests, but writes
On 26.4.2017 at 10:10 Gionatan Danti wrote:
I'm not sure this is sufficient. In my testing, ext4 will *not* remount-ro on
any error, rather only on erroneous metadata updates. For example, on a
thinpool with "--errorwhenfull y", trying to overcommit data with a simple "dd
if=/dev/zero
On 26.4.2017 at 15:37 Gionatan Danti wrote:
On 26/04/2017 13:23, Zdenek Kabelac wrote:
You need to use 'direct' write mode - otherwise you are just witnessing
issues related to 'page-cache' flushing.
Every update of file means update of journal - so you surely can lose
some data
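A sketch of testing with direct I/O to take the page cache out of the picture (the device path is hypothetical):

```shell
# O_DIRECT writes bypass the page cache, so failures from a full thin
# pool surface immediately instead of at a later background flush.
# Sizes must be block-aligned for direct I/O (bs=4096 here).
dd if=/dev/zero of=/dev/vg0/thinvol bs=4096 count=25600 oflag=direct
```

For reads, `iflag=direct` plays the equivalent role on the input side.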
On 24.4.2017 at 15:49 Gionatan Danti wrote:
On 22/04/2017 23:22, Zdenek Kabelac wrote:
ATM there is even a bug for 169 & 170 - dmeventd should generate a message
at 80, 85, 90, 95, 100 - but it does it only once - will be fixed soon...
Mmm... quite a bug, considering how important is monito
On 11.5.2017 at 12:39 Tomasz Lasko wrote:
Hi,
I'm not a part of the list or the project, just a random guy dropping by to
say I found one suspicious thing:
after looking for what 's' size stands for, I found that your lvmcmdline.c
source code
On 15.5.2017 at 05:16 Xen wrote:
On discards,
I have a thin pool that filled up when there was enough space, because the
filesystem hadn't issued discards.
I have ext4 mounted with the discard option, I hope, because it is in the list
of default mount options of tune2fs:
Default mount
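Whether discards actually reach the pool can be checked roughly as below (device and mount paths are hypothetical); many setups prefer a periodic fstrim over the discard mount option:

```shell
# Is 'discard' among the filesystem's default mount options?
tune2fs -l /dev/vg0/thinvol | grep 'Default mount options'

# Trim free space once, explicitly; -v reports how much was discarded.
fstrim -v /mnt/thinvol

# Compare thin-pool usage before and after the trim:
lvs -o lv_name,data_percent,pool_lv vg0
```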
On 16.5.2017 at 09:53 Gionatan Danti wrote:
On 15/05/2017 17:33, Zdenek Kabelac wrote:
Ever tested this:
mount -o errors=remount-ro,data=journal ?
Yes, I tested it - same behavior: a full thinpool does *not* immediately put
the filesystem in a read-only state, even when using s
On 23.5.2017 at 17:57 Oliver Rath wrote:
Hi list,
if I use multiple PVs, I'm able to select the PVs which should be used
by my LV, i.e.
lvcreate --size 10G --name mylv myvg /dev/sda3 /dev/sdb3
Is it possible to set these "/dev/sda3 /dev/sdb3" as default in lvm.conf
if nothing is
On 5.6.2017 at 10:48 Gionatan Danti wrote:
Hi Zdenek,
thanks for pointing me to thin_delta - very useful utility. Maybe I can code
around it...
As an additional question, is direct lvm2 support for send/receive planned, or
not?
Hi
It will certainly happen - just not sure in which
On 5.6.2017 at 10:11 Gionatan Danti wrote:
Hi all,
I wonder if using LVM snapshots (even better: lvmthin snapshots, which are way
faster) we can wire something like zfs or btrfs send/receive support.
In short: take a snapshot, do a full sync, take another snapshot and sync only
the
On 14.9.2017 at 00:39 Dale Stephenson wrote:
On Sep 13, 2017, at 4:19 PM, Zdenek Kabelac <zkabe...@redhat.com> wrote:
On 13.9.2017 at 17:33 Dale Stephenson wrote:
Distribution: centos-release-7-3.1611.el7.centos.x86_64
Kernel: Linux 3.10.0-514.26.2.el7.x86_64
LVM: 2.02.166(2)
On 14.9.2017 at 11:00 Zdenek Kabelac wrote:
On 14.9.2017 at 00:39 Dale Stephenson wrote:
On Sep 13, 2017, at 4:19 PM, Zdenek Kabelac <zkabe...@redhat.com> wrote:
md127 is an 8-drive RAID 0
As you can see, there’s no lvm striping; I rely on the software RAID
unde
On 15.9.2017 at 09:34 Xen wrote:
Zdenek Kabelac wrote on 14-09-2017 21:05:
But if I do create snapshots (which I do every day) when the root and boot
snapshots fill up (they are on regular lvm) they get dropped which is nice,
old snapshots are a different technology for different
On 15.9.2017 at 10:15 matthew patton wrote:
From the two proposed solutions (lvremove vs lverror), I think I would
prefer the second one.
I vote the other way. :)
First because 'remove' maps directly to the DM equivalent action which brought
this about. Second because you are in
On 14.9.2017 at 12:57 Gionatan Danti wrote:
On 14/09/2017 11:37, Zdenek Kabelac wrote:
Sorry my typo here - is NOT ;)
Zdenek
Hi Zdenek,
as the only variable is the LVM volume type (fat/thick vs thin), why is the thin
volume slower than the thick one?
I mean: all other things being
On 13.9.2017 at 20:43 Xen wrote:
There is something else though.
You cannot set max size for thin snapshots?
We are moving in the right direction here.
Yes - current thin-provisioning does not let you limit the maximum number of blocks
an individual thinLV can address (and snapshot is ordinary
On 18.9.2017 at 09:52 Tom Hale wrote:
Hi,
MAN PAGE
In http://man7.org/linux/man-pages/man8/pvscan.8.html I see the
following issues:
* The string "-a--activate" appears several times. Should be:
"-a|--activate"
* "-a|--activate y|n|ay" is mentioned, but later on:
"Only ay is
On 18.9.2017 at 21:07 Gionatan Danti wrote:
On 18-09-2017 20:55 David Teigland wrote:
It's definitely an irritation, and I described a configurable alternative
here that has not yet been implemented:
https://bugzilla.redhat.com/show_bug.cgi?id=1465974
Is this the sort of topic where
On 19.9.2017 at 10:49 Gionatan Danti wrote:
On 18/09/2017 23:10, matthew patton wrote:
If the warnings are not being emitted to STDERR then that needs to be fixed
right off the bat.
The lines with WARNINGs are written to STDERR, at least on recent LVM versions.
'lvs -q blah' should
On 21.9.2017 at 16:49 Xen wrote:
However you would need LVM2 to make sure that only origin volumes are marked
as critical.
The binary executed by 'dmeventd' - which can be a simple bash script called at
the threshold level - can be tuned to various naming logic.
So far there is no plan to enforce
On 22.9.2017 at 08:03 Mauricio Tavares wrote:
I have an lv, vmzone/desktop, that I use as the drive for a kvm guest;
nothing special here. I wanted to restore its snapshot so like I have
done many times before I shut guest down and then
lvconvert --merge vmzone/desktop_snap_20170921
Logical
On 21.9.2017 at 12:22 Xen wrote:
Hi,
thank you for your response once more.
Zdenek Kabelac wrote on 21-09-2017 11:49:
Hi
Of course this decision makes some tasks harder (i.e. there are surely
problems which would not even exist if it were done in the kernel) -
but lots of other
On 13.9.2017 at 17:33 Dale Stephenson wrote:
Distribution: centos-release-7-3.1611.el7.centos.x86_64
Kernel: Linux 3.10.0-514.26.2.el7.x86_64
LVM: 2.02.166(2)-RHEL7 (2016-11-16)
Volume group consisted of an 8-drive SSD (500G drives) array, plus an
additional SSD of the same size. The
On 14.9.2017 at 07:59 Xen wrote:
Zdenek Kabelac wrote on 13-09-2017 21:35:
We are moving in the right direction here.
Yes - current thin-provisioning does not let you limit the maximum number of
blocks an individual thinLV can address (and snapshot is ordinary thinLV)
Every thinLV can address
On 19.9.2017 at 16:14 David Teigland wrote:
On Tue, Sep 19, 2017 at 01:11:09PM +0200, Zdenek Kabelac wrote:
IMHO the most convenient option is the usage of some sort of 'envvar'
LVM_SUPPRESS_POOL_WARNINGS
I think we're looking at the wrong thing. The root problem is what we're
On 19.9.2017 at 16:34 matthew patton wrote:
LVM, and thin in particular, is not for noobies or novices. If they get burned then they deserve it for using a technology they didn't bother to study and learn.
Well it's not for novices ;) yet I believe 'lvm2' should not be an easy
weapon for
On 20.9.2017 at 15:05 Xen wrote:
Gionatan Danti wrote on 18-09-2017 21:20:
Xen, I really think that the combination of hard-threshold obtained by
setting thin_pool_autoextend_threshold and thin_command hook for
user-defined script should be sufficient to prevent and/or react to
full thin
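That combination is configured in lvm.conf roughly as below; the hook script path is a hypothetical example, and thin_command needs a reasonably recent lvm2:

```shell
# lvm.conf sketch: auto-extend the thin pool at 80% fullness by 20%,
# and run a custom hook on pool threshold events.
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}
dmeventd {
    # hypothetical script; the stock default is "lvm lvextend --use-policies"
    thin_command = "/usr/local/sbin/thin-alert.sh"
}
```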
On 7.9.2017 at 15:12 lejeczek wrote:
On 07/09/17 10:16, Zdenek Kabelac wrote:
On 7.9.2017 at 10:06 lejeczek wrote:
hi fellas
I'm setting up an LVM raid0, 4 devices; I want raid0 and I understand &
expect - there will be four stripes; all I care about is speed.
I do:
$ lvcreate --
On 8.9.2017 at 11:39 lejeczek wrote:
On 08/09/17 10:34, Zdenek Kabelac wrote:
On 8.9.2017 at 11:22 lejeczek wrote:
On 08/09/17 09:49, Zdenek Kabelac wrote:
On 7.9.2017 at 15:12 lejeczek wrote:
On 07/09/17 10:16, Zdenek Kabelac wrote:
On 7.9.2017 at 10:06 lejeczek wrote
On 8.9.2017 at 11:22 lejeczek wrote:
On 08/09/17 09:49, Zdenek Kabelac wrote:
On 7.9.2017 at 15:12 lejeczek wrote:
On 07/09/17 10:16, Zdenek Kabelac wrote:
On 7.9.2017 at 10:06 lejeczek wrote:
hi fellas
I'm setting up an LVM raid0, 4 devices, I want raid0 and I understand
On 11.9.2017 at 12:55 Xen wrote:
Zdenek Kabelac wrote on 11-09-2017 12:35:
As thin-provisioning is about 'promising the space you can deliver
later when needed' - it's not about hidden magic to make the space
out-of-nowhere.
The idea of planning to operate thin-pool on 100% fullness
On 11.9.2017 at 17:31 Eric Ren wrote:
Hi Zdenek,
On 09/11/2017 09:11 PM, Zdenek Kabelac wrote:
[..snip...]
So don't expect the lvm2 team will be solving this - there is higher-priority work
Sorry for interrupting your discussion. But, I just cannot help asking:
It's not the first time I
On 11.9.2017 at 18:55 David Teigland wrote:
On Mon, Sep 11, 2017 at 03:11:06PM +0200, Zdenek Kabelac wrote:
Aye, but does the design have to be a complete failure when the condition runs out?
YES
I am not satisfied with the way thin pools fail when space is exhausted,
and we aim to do better. Our
On 13.9.2017 at 00:55 Gionatan Danti wrote:
On 13-09-2017 00:41 Zdenek Kabelac wrote:
There are maybe a few worthy comments - XFS is great on standard big
volumes, but there used to be some hidden details when used on thinly
provisioned volumes on older RHEL (7.0, 7.1)
So now it depend
On 13.9.2017 at 02:04 matthew patton wrote:
'yes'
The filesystem may not be resident on the hypervisor (dom0) so 'dmsetup
suspend' is probably more apropos. How well that propagates upward to the
unwary client VM remains to be seen. But if one were running a NFS server using
On 13.9.2017 at 10:28 Gionatan Danti wrote:
On 13-09-2017 10:15 Zdenek Kabelac wrote:
Ohh this is a pretty major constraint ;)
I can well imagine LVM will let you forcibly replace such an LV with an
error target - so instead of a thinLV - you will have a single 'error'
target snapshot
On 12.9.2017 at 19:14 Gionatan Danti wrote:
On 12/09/2017 16:37, Zdenek Kabelac wrote:
ZFS with zpools vs. thin with thinpools running directly on top of the device.
If zpools are 'equally' fast as thins - and give you better protection,
and more sane logic - then why is anyone still using
On 13.9.2017 at 04:23 matthew patton wrote:
I don't recall seeing an actual, practical, real-world example of why this
issue got broached again. So here goes.
Create a thin LV on KVM dom0, put XFS/EXT4 on it, lay down (sparse) files as
KVM virtual disk files.
Create and launch VMs and
On 13.9.2017 at 00:41 Gionatan Danti wrote:
On 13-09-2017 00:16 Zdenek Kabelac wrote:
On 12.9.2017 at 23:36 Gionatan Danti wrote:
On 12-09-2017 21:44 matthew patton wrote:
Again, please don't speak about things you don't know.
I am *not* interested in thin provisioning
On 12.9.2017 at 13:34 Gionatan Danti wrote:
On 12/09/2017 13:01, Zdenek Kabelac wrote:
There is a very good reason why thinLV is fast - when you work with a thinLV -
you work only with the data set for a single thin LV.
Sad/bad news here - it's not going to work this way
No, I absolutely *do
On 11.9.2017 at 15:46 Xen wrote:
Zdenek Kabelac wrote on 11-09-2017 15:11:
Thin-provisioning is - about 'postponing' available space to be
delivered in time
That is just one use case.
Many more people probably use it for another use case.
Which is fixed storage space and thin
On 11.9.2017 at 23:59 Gionatan Danti wrote:
On 11-09-2017 12:35 Zdenek Kabelac wrote:
The first question here is - why do you want to use thin-provisioning ?
Because classic LVM snapshot behavior (slow write speed and linear performance
decrease as snapshot count increases) make
On 12.9.2017 at 14:47 Xen wrote:
Zdenek Kabelac wrote on 12-09-2017 14:03:
Unfortunately, neither lvm2 nor dm can be responsible for the whole kernel logic and
all user-land apps...
What Gionatan also means, or at least what I mean here is,
If functioning is chain and every link can
On 29.9.2017 at 18:42 Jan Tulak wrote:
Hi guys,
I found out this difference and I'm not sure what is the cause. A
command for creating a thin LV, which works on Archlinux, Centos and
Fedora, fails on Debian and Ubuntu:
lvm lvcreate FOOvg1 \
-T \
-l 100%PVS \
-n FOOvg1_thin001
On 31.8.2017 at 08:32 Kalyana sundaram wrote:
Thanks, all.
I understand reboot/fencing is mandatory.
I hope the visibility might be better with an external locking tool like redis.
With lvmlockd I find no deb available for ubuntu, and documentation for clvm
to handle an issue is difficult to
On 21.10.2017 at 16:33 Oleg Cherkasov wrote:
On 20 Oct 2017 21:35, John Stoffel wrote:
"Oleg" == Oleg Cherkasov writes:
Oleg> On 19 Oct 2017 21:09, John Stoffel wrote:
Oleg> RAM 12Gb, swap around 12Gb as well. /dev/sda is a hardware RAID1, the
Oleg> rest are
On 23.10.2017 at 18:40 Alexander 'Leo' Bergolth wrote:
On 10/23/2017 04:44 PM, Heinz Mauelshagen wrote:
LVM snapshots are meant to be used on the user visible raid1 LVs.
You found a bug allowing it to be used on its hidden legs.
Removing such per leg snapshot should be possible after the
On 22.11.2017 at 09:55 Xen wrote:
Ehm,
When you split a cache and later reattach it, LVM ensures it is in a
consistent state right?
My LVM is a bit old - I mean the Ubuntu 16.04 version - so something around 133.
Hi
Sorry but version 133 is really ancient - the original purpose of 'cache'
On 16.11.2017 at 12:02 Alexander 'Leo' Bergolth wrote:
On 2017-11-13 15:51, Zdenek Kabelac wrote:
On 13.11.2017 at 14:41 Alexander 'Leo' Bergolth wrote:
I have a EL7 desktop box with two sata harddisks and two ssds in a
LVM raid1 - thin pool - cache configuration. (Just migrated
On 13.11.2017 at 14:41 Alexander 'Leo' Bergolth wrote:
Hi!
I have a EL7 desktop box with two sata harddisks and two ssds in a
LVM raid1 - thin pool - cache configuration. (Just migrated to this
setup a few weeks ago.)
After some days, individual processes start to block in disk wait.
I
On 13.11.2017 at 16:12 Alexander 'Leo' Bergolth wrote:
Hi!
On 11/13/2017 03:51 PM, Zdenek Kabelac wrote:
On 13.11.2017 at 14:41 Alexander 'Leo' Bergolth wrote:
I have a EL7 desktop box with two sata harddisks and two ssds in a
LVM raid1 - thin pool - cache configuration. (Just
On 13.11.2017 at 18:41 Gionatan Danti wrote:
On 13/11/2017 16:20, Zdenek Kabelac wrote:
Are you talking about RH bug 1388632?
https://bugzilla.redhat.com/show_bug.cgi?id=1388632
Unfortunately I can only view the google-cached version of the bugzilla
page, since the bug is restricted
On 4.12.2017 at 05:30 Ming-Hung Tsai wrote:
Hi All,
I'm not sure if it is a bug or intentional. If there's an error in
volume creation, the function _lv_create_an_lv() invokes lv_remove()
to delete the newly created volume. However, in the case of creating
thin volumes, it just queues a
On 12.12.2017 at 14:13 VESELÍK Jan wrote:
Hello,
I would like to suggest a small tweak to the pvresize command. If you use the parameter
--setphysicalvolumesize, you will get only a passive warning for this
potentially dangerous action. In comparison to lvresize with the parameter -L -,
meaning making
On 26.10.2017 at 10:07 Zhangyanfei (YF) wrote:
Hello
I found an issue when using dmsetup in a situation where a udev event times out.
Dmsetup uses the dm_udev_wait function to sync with udev events. When using
dmsetup to generate a new dm-disk, if the raw disk is abnormal (for example, an
ipsan disk hangs
1 - 100 of 374 matches