On 01. 07. 19 at 8:31, Gang He wrote:
Hello List,
I am using lvm2-2.02.180 on SLES12SP4, I cannot remove the snap LV of root
volume, which is based on multipath disk PV.
e.g.
linux-kkay:/ # lvremove /dev/system/snap_root
WARNING: Reading VG system from disk because lvmetad metadata is inv
On 25. 06. 19 at 9:56, Martin Wilck wrote:
Hello Zdenek,
On Tue, 2019-06-25 at 05:30 +, Heming Zhao wrote:
Hello Zdenek,
I raise this topic again. LVM seems to have a bug in the filter code.
Let me show the example.
filter = [ "a|^/dev/sd.*|", "r|.*|" ]
As document description, above filter ru
On 20. 06. 19 at 7:41, Gang He wrote:
Hello Guys,
Any ideas? or what information do I need to provide to help this problem?
Hi
It does look like the dm table had a different type of device in place and
lvm2 was not able to remove that device type.
So was this state a result of some 'regular' workf
On 13. 06. 19 at 21:54, Gionatan Danti wrote:
On 13-06-2019 18:05, Ilia Zykov wrote:
Hello.
Tell me please, how can I get the maximum address used by a virtual disk
(disk created with -V VirtualSize). I have several large virtual disks,
but they use only a small part at the beginning of t
On 13. 06. 19 at 9:41, Heming Zhao wrote:
Hello List,
I created an md device and used pvcreate to format it.
But pvcreate failed due to the filter rules.
the filter in /etc/lvm/lvm.conf:
```
filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|",
"r|/dev/fd.*|", "r|/dev/cdrom|" ]
```
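The md device in the report above matches none of the reject patterns, and lvm documents that a device matching no filter pattern is accepted, with patterns tried in order and the first match deciding. A minimal toy model of that documented evaluation (illustrative only, not lvm2 code; `filter_accepts` is a hypothetical helper):

```python
import re

def filter_accepts(device, patterns):
    """Toy model of lvm.conf filter evaluation: patterns are tried in
    order, the first matching pattern decides ("a" accepts, "r" rejects),
    and a device matching no pattern is accepted."""
    for p in patterns:
        action, regex = p[0], p[2:-1]   # e.g. "r|/dev/fd.*|" -> ("r", "/dev/fd.*")
        if re.search(regex, device):
            return action == "a"
    return True  # no pattern matched: accept

flt = ["r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|",
       "r|/dev/fd.*|", "r|/dev/cdrom|"]
print(filter_accepts("/dev/md0", flt))                     # True
print(filter_accepts("/dev/disk/by-id/md-name-foo", flt))  # False
```

With the accept-first filter from the earlier thread, `["a|^/dev/sd.*|", "r|.*|"]`, the same model accepts /dev/sd* and rejects everything else.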
On 12. 06. 19 at 6:18, Steve Keller wrote:
I want to convert from LVM1 to LVM2 using vgconvert on a server. Is
vgconvert -M2 considered a risky operation? Should I switch
the server into single-user mode and make a backup before running
vgconvert?
And will vgconvert take a long time? May
On 06. 06. 19 at 15:30, Heming Zhao wrote:
Hello,
the filter is:
filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|",
"r|/dev/fd.*|", "r|/dev/cdrom|", "a/.*/" ]
if filter doesn't contain "a/.*/":
- pvcreate, vgcreate & vgextend use regex filter to reject the disk.
(correct logic)
> if
On 06. 06. 19 at 10:16, Heming Zhao wrote:
Hello,
BTW,
Only vgextend doesn't work, which must be a bug. It looks like the
filter-handling code has a bug.
Hi
Please provide full 'vgextend -' trace with some explanation of what you
believe is a bug in handling code.
Regards
Zdenek
On 03. 06. 19 at 23:12, Ilia Zykov wrote:
Presumably you want a thick volume but inside a thin pool so that you
can used snapshots?
If so have you considered the 'external snapshot' feature?
Yes, in some cases they are quite useful. Still, fast volume
allocation can be a handy addition.
On 03. 06. 19 at 15:03, Heming Zhao wrote:
Hello,
I encountered the filter behavior below when executing 'vgextend'.
Why does the filter take no effect when executing pvcreate or vgcreate?
# rpm -qa | grep lvm2
lvm2-clvm-2.02.180-8.16.x86_64
lvm2-cmirrord-2.02.180-8.16.x86_64
lvm2-2.02.180-8.16.x86_64
the filt
On 04. 06. 19 at 21:35, Stephen Boyd wrote:
Quoting Helen Koike (2019-06-04 10:38:59)
On 6/3/19 8:02 PM, Stephen Boyd wrote:
I'm trying to boot a mainline linux kernel on a chromeos device with dm
verity and a USB stick but it's not working for me even with this patch.
I've had to hack arou
On 31. 05. 19 at 9:13, Shawn Guo wrote:
Hi David, Zdenek,
Comparing to stable-2.02 branch, I noticed that there are significant
changes around locking infrastructure on master branch. I have a
couple of questions regarding these changes.
1. I see External Locking support was removed as p
On 16. 05. 19 at 11:03, Gang He wrote:
Hello Guys,
I found that the lvconvert command (in lvm2-2.02.120) did not handle the
"--stripes" option correctly.
The reproduce steps are as below,
# vgcreate vgtest /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf /dev/vdg
# lvcreate -n lvtest -L 8G vgtest
# lvcon
On 14. 05. 19 at 8:25, Gang He wrote:
Hello Guys,
Anybody touched this area?
Thanks
Gang
Hi
I'll take a look - although it looks like the problem is possibly with libaio ?
Is libaio usable with -flto ?
ATM libaio is mandatory for building lvm2.
BTW - why do you need to use this option
On 12. 05. 19 at 0:52, "Rainer Fügenstein" wrote:
hi,
I am (was) using Fedora 28 installed in several LVs on /dev/sda5 (= PV),
where sda is a "big" SSD.
by accident, I attached (via SATA hot swap bay) an old harddisk
(/dev/sdc1), which was used about 2 months temporarily to move the volume
g
On 12. 05. 19 at 0:07, Roy Sigurd Karlsbakk wrote:
Hi
With lvmcache and all, is there anything in the works of making an HSM
(Hierarchical storage management aka tiered storage) with LVM as with less-used
data on slow, spinning rust and more-used data on higher tiers?
Hi
There is no tec
On 24. 04. 19 at 17:35, David Teigland wrote:
On Wed, Apr 24, 2019 at 10:37:26AM +0200, Zdenek Kabelac wrote:
On 23. 04. 19 at 16:27, David Teigland wrote:
On Mon, Apr 22, 2019 at 11:15:53PM -0600, Gang He wrote:
Hello List,
One user complained this error message.
The user has a usb sd
On 23. 04. 19 at 16:27, David Teigland wrote:
On Mon, Apr 22, 2019 at 11:15:53PM -0600, Gang He wrote:
Hello List,
One user complained this error message.
The user has a usb sd card reader with no media present. When they issue a
pvscan under lvm2-2.02.180 the device is opened which result
On 19. 04. 19 at 21:30, Konstantin Ryabitsev wrote:
Hi, all:
I know it's possible to set up dm-cache to combine network-attached block
devices and local SSDs, but I'm having a hard time finding any first-hand
evidence of this being done anywhere -- so I'm wondering if it's because there
ar
On 12. 04. 19 at 17:03, Eric Ren wrote:
Hi,
Although /dev/dm-26 is visible, the device does not seem ready in the kernel.
Sorry, it's not:
[root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-26
Device dm-26 not found
Command failed.
[root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/
On 12. 04. 19 at 16:56, Eric Ren wrote:
Hi,
Since the /dev/dm-x node has been created, I don't understand what it is
waiting for udev to do.
Is it just waiting for udev rules to create the device symlinks?
Although /dev/dm-26 is visible, the device does not seem ready in the kernel.
[root@iZuf6dbyd7ede51sykedamZ ~
On 12. 04. 19 at 12:42, Eric Ren wrote:
Hi!
Looking at provided log file - the system seems to be using some weird udev
rule - which results in generating strange /dev/__ symlinks.
Yes! I also see these weird device names, but I don't have a good
explanation for it, so that I'm stupid to
On 12. 04. 19 at 10:58, Eric Ren wrote:
Hi,
As the subject says, it seems to be an interaction problem between lvm and systemd-udev:
```
#lvm version
LVM version: 2.02.130(2)-RHEL7 (2015-10-14)
Library version: 1.02.107-RHEL7 (2015-10-14)
Driver version: 4.35.0
```
lvm call trace when hangs:
On 11. 04. 19 at 19:33, Eric Ren wrote:
Hi Zdenek,
Anyway - proper reproducer with full - log would be really the most
explanatory and needed to move on here.
The activation error is reproduced once more. Please see lvm2.log attached.
Please search the error message like this:
$grep
On 11. 04. 19 at 16:49, Lentes, Bernd wrote:
- On Apr 11, 2019, at 2:32 PM, Bernd Lentes
bernd.len...@helmholtz-muenchen.de wrote:
- On Apr 11, 2019, at 1:09 PM, Zdenek Kabelac zkabe...@redhat.com wrote:
Hello list,
I have a two-node HA-cluster which uses local and cluster LVM
On 11. 04. 19 at 15:09, Eric Ren wrote:
Hi,
So do you get 'partial' error on thin-pool activation on your physical server ?
Yes, the VG of the thin pool has only one simple physical disk. At the
beginning, I also suspected the disk might have disconnected at that moment.
But I am starting to think maybe it
On 11. 04. 19 at 13:49, Eric Ren wrote:
Hi,
Hi,
I would recommend orienting towards a solution where the 'host' system
provides some service for your containers - the container asks for an action,
the service orchestrates the action on the system - and returns the requested
resource to
On 11. 04. 19 at 13:26, Eric Ren wrote:
Hi Zdenek,
Thanks for your reply. The use case is the containerd snapshotter; yes, all
the lvm setup is on the host machine, creating thin LVs for VM-based/KATA
containers as rootfs.
For example:
https://github.com/containerd/containerd/pull/3136
and
https:
On 11. 04. 19 at 12:01, Eric Ren wrote:
Hi,
Another error message is:
"Failed to suspend thin snapshot origin ..."
which is in _lv_create_an_lv():
```
7829 } else if (lv_is_thin_volume(lv)) {
7830 /* For snapshot, suspend active thin origin first */
...
is active
On 10. 04. 19 at 15:36, Lentes, Bernd wrote:
- On Apr 9, 2019, at 15:24, Zdenek Kabelac zdenek.kabe...@gmail.com wrote:
On 09. 04. 19 at 15:00, Lentes, Bernd wrote:
Hello list,
I have a two-node HA-cluster which uses local and cluster LVM.
cLVM is currently stopped, I try to
On 11. 04. 19 at 2:27, Eric Ren wrote:
Hello list,
Recently we have been exercising our container environment, which uses lvm to
manage thin LVs, and we found a very strange error when activating a thin LV:
Hi
The reason is very simple here - lvm2 does not work from containers.
It's unsu
On 09. 04. 19 at 15:00, Lentes, Bernd wrote:
Hello list,
I have a two-node HA-cluster which uses local and cluster LVM.
cLVM is currently stopped, I try to remove a snapshot from the root lv which
is located on a local VG.
I get this error:
ha-idg-2:/mnt/spp # lvremove -fv vg_local/lv_snap_pr
On 21. 01. 19 at 11:32, Zdenek Kabelac wrote:
On 18. 01. 19 at 1:53, Davis, Matthew wrote:
Hi Zdenek,
I assumed that LVM thin snapshots would work like git branches.
Since git also uses diffs on the backend, and git is popular with
developers, the same kind of behaviour seems reasonable
On 25. 01. 19 at 18:33, Alexander 'Leo' Bergolth wrote:
Hi!
If I extend a raid1 LV using lvextend, the newly allocated space in the two
rimage devices isn't synced, just like if I had used the option --nosync.
As a result, a following raid check will report mismatches.
Hi
Please open B
On 26. 01. 19 at 22:14, Andrei Borzenkov wrote:
I attempt to put device nodes for volumes in sub-directory of /dev to
avoid accidental conflict between device name and volume group and
general /dev pollution. VxVM always did it, using /dev/vx as base
directory, so I would use something like /d
On 24. 01. 19 at 16:06, Eric Ren wrote:
Hi,
With a single command to create a thin pool, the metadata LV is not created
with the striped target. Is this by design, or does the command just not
handle this case very well for now?
My main concern here is, if the metada
On 24. 01. 19 at 15:54, Eric Ren wrote:
Hi,
As you can see, only the "mythinpool_tdata" LV has 2 stripes. Is that OK?
If I want to benefit from striping performance, will it work for me? Or
should I create the data LV, metadata LV, thin pool and thin LV step by step
and s
On 18. 07. 18 at 16:58, Douglas Paul wrote:
On Wed, Jul 18, 2018 at 03:25:10PM +0100, Ryan Launchbury wrote:
Does anyone have any other ideas or potential workarounds for this issue?
Please let me know if you require more info.
I didn't see this in the previous messages, but have you tried
On 18. 01. 19 at 1:53, Davis, Matthew wrote:
Hi Zdenek,
I assumed that LVM thin snapshots would work like git branches.
Since git also uses diffs on the backend, and git is popular with developers,
the same kind of behaviour seems reasonable to me.
Hi
There is very good reason why the gi
On 18. 01. 19 at 1:53, Davis, Matthew wrote:
Hi Zdenek,
I assumed that LVM thin snapshots would work like git branches.
Since git also uses diffs on the backend, and git is popular with developers,
the same kind of behaviour seems reasonable to me.
e.g.
```
git checkout master
git branch b
On 17. 01. 19 at 2:12, Davis, Matthew wrote:
Hi Zdenek,
What do you mean "it's origin is already gone"?
Hi
Your 'Origin' field in your 'lvs -a' output was empty - so the actual origin
used for taking the 'fresh' LV snapshot simply no longer exists.
lvm2 is (ATM) not a database tool trying to
On 16. 01. 19 at 0:03, Davis, Matthew wrote:
Hi Zdenek,
Here's what I see with `sudo lvs -a`. (My snapshots are actually called `fresh`
and `fresh2` not `mySnap`)
```
LV VG Attr LSize Pool Origin Data% Meta% Move
Log Cpy%Sync Convert
fresh cento
On 14. 01. 19 at 23:44, Davis, Matthew wrote:
Hi Zdenek,
`sudo lvs --version` says :
LVM version: 2.02.180(2)-RHEL7 (2018-07-20)
Library version: 1.02.149-RHEL7 (2018-07-20)
Driver version: 4.37.1
So that means it's version 2, right?
(I'm running the latest version of CentOS.
On 10. 01. 19 at 7:23, Davis, Matthew wrote:
Hi Marian,
I'm trying to do it with thin snapshots now. It's all very confusing, and I
can't get it to work.
I've read a lot of the documentation about thin stuff, and it isn't clear
what's happening.
I took a snapshot with
sudo lvcreate -
On 10. 01. 19 at 4:05, james harvey wrote:
On lvm2 2.02.183, man lvcreate includes:
-Z|--zero y|n
Controls zeroing of the first 4KiB of data in the new LV. Default
is y. Snapshot COW volumes are always zeroed. LV is not zeroed if
the read only flag is set. Warning: trying to mount an
On 05. 12. 18 at 16:01, Marcin Wolcendorf wrote:
Hi,
Thanks for your reply!
On Wed, Dec 05, 2018 at 03:23:21PM +0100, Zdenek Kabelac wrote:
On 05. 12. 18 at 7:19, Marcin Wolcendorf wrote:
My setup:
I have 2 mdraid devices: one 24T raid6, one 1T raid1, and one separate SSD. All
are LUKS
On 05. 12. 18 at 7:19, Marcin Wolcendorf wrote:
Hi Everyone,
Recently I have set up a simple lvmcache on my machine. This is a write-through
cache, so in my view it should never be necessary to copy data from the
cachepool to the origin LV. But this is exactly what happens: on system start
th
On 03. 12. 18 at 13:10, Far Had wrote:
Hi, Thank you Zdenek
You used the phrase "makes sure" in your sentence:
"this makes sure you should have files at least 10 days back as well".
I was referring to that!
Hi
Oh - in that case I've been emphasizing the fact that lvm2 will not REMOVE
those.
On 03. 12. 18 at 7:17, Far Had wrote:
Hi,
I'm trying to understand how LVM thick snapshots work under the hood. I can
see that when new write requests come in for an original volume that has a
snapshot, the current contents of the storage blocks are copied to the
snapshot space, then the new data
On 01. 12. 18 at 13:44, Far Had wrote:
How does it _make sure_ that I have backup files younger than 10 days old?
What if I delete those files?
Hi
Archives in the /etc/lvm/archive subdirs are not a 'must have' - they are
optional and purely for the admin's convenience.
So the admin can erase them at any time
On 30. 11. 18 at 11:03, Gionatan Danti wrote:
On 30/11/2018 10:52, Zdenek Kabelac wrote:
Hi
The name of the i/o layer bcache is only internal to the lvm2 code, for
caching reads from disks during disk processing - the name comes from its
use of a bTree and caching - thus the name bcache.
It's
On 30. 11. 18 at 10:35, Gionatan Danti wrote:
Hi list,
in BZ 1643651 I read:
"In 7.6 (and 8.0), lvm began using a new i/o layer (bcache)
to read and write data blocks."
Last time I checked, bcache was a completely different caching layer,
unrelated to LVM. The above quote, instead, implie
On 28. 11. 18 at 13:18, Far Had wrote:
Thanks for the response,
but I still don't get the point. Assume that I use the same names for my VGs.
In this case, if I set for example:
/retain_days = 10/
and
/retain_min = 15/
in lvm.conf file, what is the system's behaviour when archiving backup files?
On 28. 11. 18 at 10:51, Far Had wrote:
Hi
There are two parameters in lvm.conf which I don't understand the exact
meaning of.
1. retain_min
2. retain_days
What do "*the minimum number of archive files you wish to keep*" and "*minimum
time you wish to keep an archive file for*" mean?
Is t
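Both settings act as keep-guarantees, never as deletion triggers: an archive file can expire only when it is older than retain_days AND at least retain_min newer files would remain. A toy model of that documented behavior (illustrative only, not lvm2 code; `surviving_archives` is a hypothetical helper):

```python
def surviving_archives(ages_days, retain_min, retain_days):
    """Toy model of lvm.conf retain_min / retain_days: keep a file if it is
    among the retain_min newest archives OR is at most retain_days old."""
    ages = sorted(ages_days)  # ascending age = newest first
    return [age for i, age in enumerate(ages)
            if i < retain_min or age <= retain_days]

# Six archives aged 1..60 days, retain_min=3, retain_days=10:
print(surviving_archives([1, 3, 8, 20, 40, 60], 3, 10))  # [1, 3, 8]
```

With the same VG names everywhere, the rule is applied per VG archive directory, so in this sketch raising either parameter can only increase what survives.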
On 26. 11. 18 at 19:24, Andrew Hall wrote:
Hi
Can anyone confirm if the following situation is recoverable or not ?
Thanks very much.
1. We have an LV which was recently extended using a VG with
sufficient PE available. A filesystem resize operation was included
with the -r flag :
Let's
On 26. 11. 18 at 12:31, Cesare Leonardi wrote:
Resending; I erroneously replied only to Zdenek, sorry.
I can provide details about this, that was filed by me:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=913119
It's about a desktop PC with two SSDs (Samsung 850 EVO) on which I build RA
On 25. 11. 18 at 0:30, Cesare Leonardi wrote:
Since my message did not reach the list, I'm resending, but first I've
subscribed myself.
--
Hello, I'm writing here to have your opinion and possibly some advice about
some Debian bugs related to LVM RAID that are still unresolv
On 22. 11. 18 at 19:05, Christoph Pleger wrote:
Hello,
I am now trying to not call external LVM commands, but to use LVM library
calls instead. Now I have another problem:
The lvm library is DEAD; it's gone, it does not exist in the upstream tree
any more. It's been marked as deprecated for man
On 16. 11. 18 at 14:43, Christoph Pleger wrote:
Hello,
Let's stop there. The fact you're asking a question about setuid
suggests you don't understand enough to be able to use it safely.
I get security by checking the real user id at the beginning of the program
and aborting the program i
On 31. 10. 18 at 8:18, Gang He wrote:
Hello List,
As you know, in past versions (e.g. v2.02.120) lvcreate supported creating a
mirror type LV with the "--mirrorlog mirrored" option.
But in the latest versions (e.g. v2.02.180), lvm2 says "mirrored is a persistent log
that is itself mirrored, b
On 19. 10. 18 at 19:00, Ilia Zykov wrote:
dm-writecache could be seen as an 'extension' of your page-cache to hold a
longer list of dirty pages...
Zdenek
Does it mean that the dm-writecache is always empty, after reboot?
Thanks.
No, writecache is journaled - so after reboot used content is
On 19. 10. 18 at 14:45, Gionatan Danti wrote:
On 19/10/2018 12:58, Zdenek Kabelac wrote:
Hi
Writecache simply doesn't care about caching your reads at all.
Your RAM with its page caching mechanism keeps read data as long as there
is free RAM for this - the less RAM goes to
On 19. 10. 18 at 11:55, Ilia Zykov wrote:
On 19.10.2018 12:12, Zdenek Kabelac wrote:
On 19. 10. 18 at 0:56, Ilia Zykov wrote:
Maybe it will be implemented later? But it seems a little strange to me
that there is no way to clear garbage out of the cache.
Maybe I do not understand? Can
On 19. 10. 18 at 11:42, Gionatan Danti wrote:
On 19/10/2018 11:12, Zdenek Kabelac wrote:
And final note - there is upcoming support for accelerating writes with new
dm-writecache target.
Hi, shouldn't that already be possible with the current dm-cache and
writeback caching?
Hi
dm-cache
On 19. 10. 18 at 0:56, Ilia Zykov wrote:
Maybe it will be implemented later? But it seems a little strange to me that
there is no way to clear garbage out of the cache.
Maybe I do not understand? Can you please explain this behavior.
For example:
Hi
Applying my brain logic here:
Cache (by
On 3.9.2018 at 11:04, Oleksandr Panchuk wrote:
Hello, All
I have not found any documentation about how to prepend an LV with some free
space.
Possible use case: we have an LV with a filesystem and data on it. But to
migrate this LV to a KVM virtual machine, we need to craft a partition inside
this LV w
On 27.8.2018 at 12:25, Jaco van Niekerk wrote:
Hi
I am receiving the following error when Pacemaker tries to start the LVM
volume/logical groups:
pcs resource debug-start r_lvm
Operation start for r_lvm (ocf:heartbeat:LVM) returned: 'unknown error' (1)
> stdout: volume_list=["centos","v
On 31.7.2018 at 23:17, Marc MERLIN wrote:
On Tue, Jul 31, 2018 at 02:35:42PM +0200, Zdenek Kabelac wrote:
If you monitor amount of free space for data AND for metadata in thin-pool
yourself you can keep easily threshold == 100.
Understood. Two things:
1) basically threshold < 100 all
On 31.7.2018 at 04:44, Marc MERLIN wrote:
On Fri, Jul 27, 2018 at 11:26:58AM -0700, Marc MERLIN wrote:
Hi Zdenek,
Thanks for your helpful reply.
Ha again Zdenek,
Just to confirm, am I going to be ok enough with the scheme I described
as long as I ensure that 'Allocated pool data' does n
On 26.7.2018 at 18:31, Marc MERLIN wrote:
Still learning about thin volumes.
Why do I want my thin pool to get auto extended? Does "extended" mean
resized?
yes extension == resize
Why would I want to have thin_pool_autoextend_threshold below 100 and
have it auto extend as needed vs havi
On 26.7.2018 at 17:49, Marc MERLIN wrote:
On Thu, Jul 26, 2018 at 10:40:42AM +0200, Zdenek Kabelac wrote:
What are you trying to achieve with 'mkdir /dev/vgds2/' ?
You shall never ever touch /dev content - it's always under full control
of udev - if you start to create there
On 26.7.2018 at 06:25, David F. wrote:
Never mind, the answer is the extent_size is the number of sectors (or perhaps
it's the number of 512 byte blocks, I'll have to test on 4K sector
drives). So in this case 4M and 4M*3840 is the 16G (not 16M which was the
3840*4096).
On Wed, Jul 25,
On 26.7.2018 at 09:24, Marc MERLIN wrote:
On Wed, Jul 25, 2018 at 05:41:54PM -0700, Marc MERLIN wrote:
Howdy,
Kernel 4.17, trying thin LV for the first time, and I'm getting this:
gargamel:~# lvcreate -L 14.50TiB -Zn -T vgds2/thinpool2
Using default stripesize 64.00 KiB.
Thin pool volu
On 8.7.2018 at 23:36, Dean Hamstead wrote:
Hi All,
I hope someone with very high LVM wizardry can save me from a pickle...
Ok so this happened:
Jul 3 13:16:24 saito kernel: [131695.910332] device-mapper: space map
metadata: unable to allocate new metadata block
Jul 3 13:16:24 saito
On 10.7.2018 at 15:30, Roy Sigurd Karlsbakk wrote:
Hi all
I believe I've removed the old lvmcache, but attempting to install a new one
after a reinstall seems not to work.
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log
Cpy%Sync Convert
data data -
On 22.6.2018 at 20:10, Gionatan Danti wrote:
Hi list,
I wonder if a method exists to have a >16 GB thin metadata volume.
When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a single
thin pool. The obvious solution is to increase the chunk size, as 128 KB
chunks are good fo
On 22.6.2018 at 20:13, Gionatan Danti wrote:
On 20-06-2018 12:15, Zdenek Kabelac wrote:
Hi
Aren't there any kernel write errors in your 'dmesg'?
The LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors)
Zdenek
Is that true ev
On 20.6.2018 at 11:18, Ryan Launchbury wrote:
Hello,
I'm having a problem uncaching logical volumes when the cache data chunk size
is over 1MiB.
The process I'm using to uncache is: lvconvert --uncache vg/lv
The issue occurs across multiple systems with different hardware and different
On 30.5.2018 at 11:23, Gang He wrote:
Hello List,
As you know, I previously reported that lvcreate could not create a mirrored
LV; the root cause was that the configure option "--enable-cmirrord" was missing.
Now I encounter another problem: pvmove does not work at all.
The detailed information/pro
On 25.5.2018 at 09:37, Gang He wrote:
Hello List,
I am using lvm version 2.02.177(2) and tried to create a mirrored LV, but it
failed with these errors:
tb0307-nd1:~ # pvs
PV VG Fmt Attr PSize PFree
/dev/vdb cluster-vg2 lvm2 a-- 40.00g 36.00g
/dev/vdc cluster-vg2 lvm
On 15.5.2018 at 10:11, Dennis Schridde wrote:
Hello!
In case the question comes up: Fedora 28 (the live system I am trying to use
for recovery) is using Linux 4.16.3-301.fc28 and `lvm version`:
LVM version 2.02.177(2)
Library version 1.02.146
Driver version 4.37.0
The system I was originally
On 14.5.2018 at 02:47, Pankaj Agarwal wrote:
Hi,
How do I set the nr_requests value for LVs? It's not writable like for other
drives on a Linux system.
The LVs appear as dm-0 and dm-1 on my system.
#cat /sys/block/dm-0/queue/nr_requests
128
# echo 256 > /sys/block/dm-0/queue/nr_requests
-b
On 9.5.2018 at 08:52, Oliver Rath wrote:
Hi list,
I tried to get some lvm commands working using example_cmdlib.c
(modified, attached). Unfortunately the example hangs trying a "lvcreate
--name test --size 12M levg" command:
Hi
Please avoid tweaking code to use cmdlib - it's internal libr
On 26.4.2018 at 04:07, Gang He wrote:
Hello Zdenek,
Do you remember whether this version of LVM supports pvmove or not?
A user keeps pinging with this question.
Hi
lvm2 should be supporting clustered pvmove (in case cmirrord is fully
functional on your system) - but I've no idea i
On 10.4.2018 at 16:00, Richard W.M. Jones wrote:
On Tue, Apr 10, 2018 at 09:47:30AM +0100, Richard W.M. Jones wrote:
Recently in Fedora something changed that stops us from creating small
LVs for testing.
An example failure with a 64 MB partitioned disk:
# parted -s -- /dev/sda mklabel msdo
On 10.4.2018 at 16:49, Richard W.M. Jones wrote:
On Tue, Apr 10, 2018 at 04:43:12PM +0200, Zdenek Kabelac wrote:
On 10.4.2018 at 16:00, Richard W.M. Jones wrote:
On Tue, Apr 10, 2018 at 09:47:30AM +0100, Richard W.M. Jones wrote:
Recently in Fedora something changed that stops us from
On 10.4.2018 at 16:00, Richard W.M. Jones wrote:
On Tue, Apr 10, 2018 at 09:47:30AM +0100, Richard W.M. Jones wrote:
Recently in Fedora something changed that stops us from creating small
LVs for testing.
An example failure with a 64 MB partitioned disk:
# parted -s -- /dev/sda mklabel msdo
On 3.4.2018 at 04:28, Gang He wrote:
Hello list,
As you know, pvmove runs on the old version (e.g. lvm2-2.02.120 on SLE12SP2),
but with the new version lvm2-2.02.177 I cannot run pvmove successfully in the
cluster.
Here I paste some information from my test;
if you can identify the cause, please h
On 27.3.2018 at 12:38, Michael Fladischer wrote:
Hi,
I'm unable to create PVs on Multipath-Volumes that are available at
/dev/dm-NN where N~[0-9] but I can create them on single digit devices
like /dev/dm-9:
# pvcreate /dev/dm-6
Physical volume "/dev/dm-6" successfully created.
# pvcreat
On 27.3.2018 at 12:27, Xen wrote:
Zdenek Kabelac wrote on 27-03-2018 12:22:
IMO 'vgreduce --removemissing' doesn't look like real rocket science to me.
Yeah I don't wanna get into it.
--force didn't work very well when the missing PV was a cache PV as it r
On 27.3.2018 at 13:05, Gionatan Danti wrote:
On 27/03/2018 12:39, Zdenek Kabelac wrote:
Hi
And last but not least comment - when you pointed out 4MB extent usage -
it's relatively huge chunk - and if the 'fstrim' wants to succeed - those
4MB blocks fitting thin-pool chu
On 27.3.2018 at 09:44, Gionatan Danti wrote:
What am I missing? Is the "data%" field a measure of how many data chunks are
allocated, or does it even track "how full" these data chunks are? This would
benignly explain the observed discrepancy, as partially-full data chunks can
be used to
On 27.3.2018 at 11:12, Xen wrote:
Gang He wrote on 27-03-2018 7:55:
I just reproduced a problem from the customer, since they did a virtual
disk migration from one virtual machine to another.
According to your comments, this does not look like an LVM code problem;
the problem can be cons
On 27.3.2018 at 11:40, Gionatan Danti wrote:
On 27/03/2018 10:30, Zdenek Kabelac wrote:
Hi
Well, just at first look - 116MB of metadata for 7.21TB is a *VERY* small
size. I'm not sure what the data 'chunk-size' is - but you will need to
extend the pool's meta
On 27.3.2018 at 07:55, Gang He wrote:
Hi Fran,
On 26 March 2018 at 08:04, Gang He wrote:
It looks like each PV includes a copy of the VG metadata, but if some PV has
changed (e.g. been removed, or moved to another VG),
the remaining PVs should have a method to check the integrity when each
st
On 27.3.2018 at 09:44, Gionatan Danti wrote:
Hi all,
I can't wrap my head around the following reported data vs metadata usage
before/after a snapshot deletion.
System is an updated CentOS 7.4 x64
BEFORE SNAP DEL:
[root@ ~]# lvs
LV VG Attr LSize Pool Origin
On 26.3.2018 at 08:04, Gang He wrote:
Hi Xen,
Gang He wrote on 23-03-2018 9:30:
6) attach disk2 to VM2(tb0307-nd2), the vg on VM2 looks abnormal.
tb0307-nd2:~ # pvs
WARNING: Device for PV JJOL4H-kc0j-jyTD-LDwl-71FZ-dHKM-YoFtNV not
found or rejected by a filter.
PV VG Fm
On 5.3.2018 at 10:42, Gionatan Danti wrote:
On 04-03-2018 21:53, Zdenek Kabelac wrote:
On the other hand, all common filesystems in Linux were always written
to work on a device where the space is simply always there. So all
core algorithms simply never counted on something like
'
On 3.3.2018 at 18:52, Xen wrote:
I did not rewrite this entire message, please excuse the parts where I am a
I'll probably repeat myself again, but thin provisioning can't be
responsible for all kernel failures. There is no way the DM team can fix
all the related paths on this road.
Are you sayin
On 3.3.2018 at 19:17, Xen wrote:
In the past it was argued that putting the entire pool in read-only mode
(where *all* writes fail, but reads are permitted to complete) would be
a better fail-safe mechanism; however, it was stated that no current
dm target permits that.
Right. Don't forget my mai