On 14.5.2018 at 02:47, Pankaj Agarwal wrote:
Hi,
How do I set the nr_requests value for LVs? It is not writable as it is for
other drives on a Linux system.
The LVs appear as dm-0 and dm-1 on my system.
#cat /sys/block/dm-0/queue/nr_requests
128
# echo 256 > /sys/block/dm-0/queue/nr_requests
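If it helps: the dm devices behind LVs are bio-based, so their nr_requests is typically read-only; the usual workaround is to tune the queue of each backing device listed under /sys/block/dm-0/slaves/ instead. A minimal sketch, using a mock sysfs tree under mktemp so it runs unprivileged (the device names here are assumptions, not from the original post):

```shell
# Mock sysfs layout standing in for /sys (real paths need root):
sysfs=$(mktemp -d)
mkdir -p "$sysfs/block/dm-0/slaves/sda" "$sysfs/block/sda/queue"
echo 128 > "$sysfs/block/sda/queue/nr_requests"

# On a real system the loop below would walk /sys/block/dm-0/slaves/*
# and write to /sys/block/<dev>/queue/nr_requests for each backing device.
for slave in "$sysfs"/block/dm-0/slaves/*; do
  dev=$(basename "$slave")
  echo 256 > "$sysfs/block/$dev/queue/nr_requests"
done
cat "$sysfs/block/sda/queue/nr_requests"
```

The final cat prints 256, showing the setting landed on the backing device rather than on the dm node.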
On 9.5.2018 at 08:52, Oliver Rath wrote:
Hi list,
I tried to get some lvm commands working using example_cmdlib.c
(modified, attached). Unfortunately the example hangs trying a "lvcreate
--name test --size 12M levg" command:
Hi
Please avoid tweaking code to use cmdlib - it's internal
On 15.5.2018 at 10:11, Dennis Schridde wrote:
Hello!
In case the question comes up: Fedora 28 (the live system I am trying to use
for recovery) is using Linux 4.16.3-301.fc28 and `lvm version`:
LVM version 2.02.177(2)
Library version 1.02.146
Driver version 4.37.0
The system I was
On 25.5.2018 at 09:37, Gang He wrote:
Hello List,
I am using lvm version 2.02.177(2) and tried to create a mirrored LV, but it
failed with these errors:
tb0307-nd1:~ # pvs
PV VG Fmt Attr PSize PFree
/dev/vdb cluster-vg2 lvm2 a-- 40.00g 36.00g
/dev/vdc cluster-vg2
On 22.6.2018 at 20:10, Gionatan Danti wrote:
Hi list,
I wonder if a method exists to have a >16 GB thin metadata volume.
When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a single
thin pool. The obvious solution is to increase the chunk size, as 128 KB
chunks are good
On 22.6.2018 at 20:13, Gionatan Danti wrote:
On 20-06-2018 12:15, Zdenek Kabelac wrote:
Hi
Aren't there any kernel write errors in your 'dmesg'?
An LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors).
Zdenek
Is that true even when using
On 27.10.2017 at 10:00, Zhangyanfei (YF) wrote:
Hello
If the udevd daemon did not time out, I think having dmsetup mandatorily wait
for udev to finalize would be a good idea.
But udevd times out after 180 seconds and kills the event process (systemd-udevd[39029]: timeout: killing). In this
On 10.1.2018 at 15:42, Eric Ren wrote:
Zdenek,
Thanks for helping make this more clear to me :)
There are a couple of fuzzy sentences - so let's try to make them more clear.
Default mode for 'clvmd' is to 'share' resource everywhere - which clearly
comes from original 'gfs' requirement and
On 26.7.2018 at 09:24, Marc MERLIN wrote:
On Wed, Jul 25, 2018 at 05:41:54PM -0700, Marc MERLIN wrote:
Howdy,
Kernel 4.17, trying thin LV for the first time, and I'm getting this:
gargamel:~# lvcreate -L 14.50TiB -Zn -T vgds2/thinpool2
Using default stripesize 64.00 KiB.
Thin pool
On 26.7.2018 at 06:25, David F. wrote:
Never mind, the answer is that the extent_size is the number of sectors (or
perhaps it's the number of 512-byte blocks; I'll have to test on 4K-sector
drives). So in this case 4M, and 4M*3840 is the 16G (not 16M, which was the
3840*4096).
On Wed, Jul
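If extent_size really is counted in 512-byte sectors, the arithmetic can be checked directly; a sketch with hypothetical numbers (an 8192-sector = 4 MiB extent, and 4096 extents giving exactly 16 GiB):

```shell
# Hypothetical values: 8192 sectors * 512 bytes = 4 MiB per extent
extent_size_sectors=8192
extent_count=4096
bytes=$((extent_size_sectors * 512 * extent_count))
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # -> 16 GiB
```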
On 26.7.2018 at 18:31, Marc MERLIN wrote:
Still learning about thin volumes.
Why do I want my thin pool to get auto extended? Does "extended" mean
resized?
yes extension == resize
Why would I want to have thin_pool_autoextend_threshold below 100 and
have it auto extend as needed vs
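For reference, thin pool autoextension is driven by two lvm.conf settings in the activation section; a sketch (values are illustrative, not recommendations):

```
activation {
    # Autoextend once the thin pool is more than 70% full;
    # 100 disables automatic extension entirely.
    thin_pool_autoextend_threshold = 70
    # Grow the pool by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
}
```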
On 31.7.2018 at 04:44, Marc MERLIN wrote:
On Fri, Jul 27, 2018 at 11:26:58AM -0700, Marc MERLIN wrote:
Hi Zdenek,
Thanks for your helpful reply.
Ha again Zdenek,
Just to confirm, am I going to be ok enough with the scheme I described
as long as I ensure that 'Allocated pool data' does
On 31.7.2018 at 23:17, Marc MERLIN wrote:
On Tue, Jul 31, 2018 at 02:35:42PM +0200, Zdenek Kabelac wrote:
If you monitor the amount of free space for data AND for metadata in the
thin-pool yourself, you can easily keep threshold == 100.
Understood. Two things:
1) basically threshold < 100 all
On 26.7.2018 at 17:49, Marc MERLIN wrote:
On Thu, Jul 26, 2018 at 10:40:42AM +0200, Zdenek Kabelac wrote:
What are you trying to achieve with 'mkdir /dev/vgds2/' ?
You should never, ever touch /dev content - it's always under full control
of udev. If you start creating your own files there
On 10.7.2018 at 15:30, Roy Sigurd Karlsbakk wrote:
Hi all
I believe I've removed the old lvmcache, but attempting to set up a new one
after a reinstall does not seem to work.
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log
Cpy%Sync Convert
data data
On 8.7.2018 at 23:36, Dean Hamstead wrote:
Hi All,
I hope someone with very high LVM wizardry can save me from a pickle...
Ok so this happened:
Jul 3 13:16:24 saito kernel: [131695.910332] device-mapper: space map
metadata: unable to allocate new metadata block
Jul 3 13:16:24 saito
On 27.8.2018 at 12:25, Jaco van Niekerk wrote:
Hi
I am receiving the following error when Pacemaker tries to start the LVM
volume/logical groups:
pcs resource debug-start r_lvm
Operation start for r_lvm (ocf:heartbeat:LVM) returned: 'unknown error' (1)
> stdout:
On 28.2.2018 at 20:07, Gionatan Danti wrote:
Hi all,
On 28-02-2018 10:26, Zdenek Kabelac wrote:
Overprovisioning at the DEVICE level simply IS NOT equivalent to a full
filesystem, as you would like to see it all the time here, and it has
already been explained to you many times that filesystems
On 1.3.2018 at 08:14, Gionatan Danti wrote:
On 28-02-2018 22:43, Zdenek Kabelac wrote:
By default, a full pool starts to 'error' all 'writes' after 60 seconds.
Based on what I remember, and what you wrote below, I think "all writes" in
the context above means "writes to un
On 1.3.2018 at 10:43, Gianluca Cecchi wrote:
On Thu, Mar 1, 2018 at 9:31 AM, Zdenek Kabelac <zkabe...@redhat.com> wrote:
Also note - we are going to integrate VDO support - which will be a 2nd
way of thin-provisioning with a different set
On 10.4.2018 at 16:00, Richard W.M. Jones wrote:
On Tue, Apr 10, 2018 at 09:47:30AM +0100, Richard W.M. Jones wrote:
Recently in Fedora something changed that stops us from creating small
LVs for testing.
An example failure with a 64 MB partitioned disk:
# parted -s -- /dev/sda mklabel
On 27.3.2018 at 07:55, Gang He wrote:
Hi Fran,
On 26 March 2018 at 08:04, Gang He wrote:
It looks like each PV includes a copy of the VG metadata, but if some PV has
changed (e.g. been removed, or moved to another VG),
the remaining PVs should have a method to check the
On 27.3.2018 at 11:40, Gionatan Danti wrote:
On 27/03/2018 10:30, Zdenek Kabelac wrote:
Hi
Well, just at first look - 116MB of metadata for 7.21TB is a *VERY* small
size. I'm not sure what the data 'chunk-size' is - but you will need to
extend the pool's metadata sooner or later
On 27.3.2018 at 11:12, Xen wrote:
On 27-03-2018 7:55, Gang He wrote:
I just reproduced a problem from the customer, since they did virtual
disk migration from one virtual machine to another one.
According to your comments, this does not look like an LVM code problem;
the problem can be
On 27.3.2018 at 09:44, Gionatan Danti wrote:
What am I missing? Is the "data%" field a measure of how many data chunks are
allocated, or does it even track "how full" these data chunks are? This would
benignly explain the observed discrepancy, as partially-full data chunks can
be used to
On 27.3.2018 at 13:05, Gionatan Danti wrote:
On 27/03/2018 12:39, Zdenek Kabelac wrote:
Hi
And last but not least comment - when you pointed out the 4MB extent usage -
it's a relatively huge chunk - and if 'fstrim' wants to succeed - those
4MB blocks fitting thin-pool chunks need
On 27.3.2018 at 12:27, Xen wrote:
On 27-03-2018 12:22, Zdenek Kabelac wrote:
IMO 'vgreduce --removemissing' doesn't look to me like real rocket science.
Yeah, I don't wanna get into it.
--force didn't work very well when the missing PV was a cache PV, as it removed
the entire cached
On 27.3.2018 at 12:38, Michael Fladischer wrote:
Hi,
I'm unable to create PVs on multipath volumes that are available at
/dev/dm-NN (two-digit numbers), but I can create them on single-digit devices
like /dev/dm-9:
# pvcreate /dev/dm-6
Physical volume "/dev/dm-6" successfully created.
#
On 26.3.2018 at 08:04, Gang He wrote:
Hi Xen,
On 23-03-2018 9:30, Gang He wrote:
6) attach disk2 to VM2 (tb0307-nd2); the VG on VM2 looks abnormal.
tb0307-nd2:~ # pvs
WARNING: Device for PV JJOL4H-kc0j-jyTD-LDwl-71FZ-dHKM-YoFtNV not
found or rejected by a filter.
PV VG
On 3.4.2018 at 04:28, Gang He wrote:
Hello list,
As you know, pvmove can run on an old version (e.g. lvm2-2.02.120 on SLE12SP2),
but with the new version, lvm2-2.02.177, I cannot run pvmove successfully in
the cluster.
Here I paste some information from my test;
if you can identify the cause, please
On 3.3.2018 at 19:32, Xen wrote:
On 28-02-2018 22:43, Zdenek Kabelac wrote:
It still depends - there is always some sort of 'race' - unless you
are willing to 'give-up' too early to be always sure, considering
there are technologies that may write many GB/s...
That's why I think
On 27.2.2018 at 19:39, Xen wrote:
On 24-04-2017 23:59, Zdenek Kabelac wrote:
I'm just curious - what do you think will happen when you have
root_LV as a thin LV and the thin pool runs out of space - so 'root_LV'
is replaced with the 'error' target.
Why do you suppose Root LV is on thin?
Why
On 20.6.2018 at 11:18, Ryan Launchbury wrote:
Hello,
I'm having a problem uncaching logical volumes when the cache data chunk size
is over 1MiB.
The process I'm using to uncache is: lvconvert --uncache vg/lv
The issue occurs across multiple systems with different hardware and different
On 19. 10. 18 at 0:56, Ilia Zykov wrote:
Maybe it will be implemented later? But it seems a little strange to me that
there is no way to clear garbage from the cache.
Maybe I do not understand? Can you please explain this behavior?
For example:
Hi
Applying my brain logic here:
Cache
On 19. 10. 18 at 19:00, Ilia Zykov wrote:
dm-writecache could be seen as an 'extension' of your page-cache to hold a
longer list of dirty pages...
Zdenek
Does it mean that the dm-writecache is always empty, after reboot?
Thanks.
No, writecache is journaled - so after reboot used content
On 19. 10. 18 at 14:45, Gionatan Danti wrote:
On 19/10/2018 12:58, Zdenek Kabelac wrote:
Hi
Writecache simply doesn't care about caching your reads at all.
Your RAM with its page-caching mechanism keeps read data as long as there
is free RAM for this - the less RAM goes to the page cache
On 19. 10. 18 at 11:55, Ilia Zykov wrote:
On 19.10.2018 12:12, Zdenek Kabelac wrote:
On 19. 10. 18 at 0:56, Ilia Zykov wrote:
Maybe it will be implemented later? But it seems to me a little
strange that there is no way to clear garbage from the cache.
Maybe I do not understand? Can
On 19. 10. 18 at 11:42, Gionatan Danti wrote:
On 19/10/2018 11:12, Zdenek Kabelac wrote:
And final note - there is upcoming support for accelerating writes with new
dm-writecache target.
Hi, shouldn't it already be possible with the current dm-cache and writeback
caching?
Hi
dm-cache
On 3.9.2018 at 11:04, Oleksandr Panchuk wrote:
Hello, All
I have not found any documentation about how to prepend an LV with some free
space.
Possible use case - we have LV with filesystem and data on it. But to migrate
this LV to KVM virtual machine we need to craft partition inside this LV
On 17. 01. 19 at 2:12, Davis, Matthew wrote:
Hi Zdenek,
What do you mean "it's origin is already gone"?
Hi
The 'Origin' field in your 'lvs -a' was empty - so the actual origin used for
taking a 'fresh' LV snapshot simply no longer exists.
lvm2 is (ATM) not a database tool trying to
On 18. 07. 18 at 16:58, Douglas Paul wrote:
On Wed, Jul 18, 2018 at 03:25:10PM +0100, Ryan Launchbury wrote:
Does anyone have any other ideas or potential workarounds for this issue?
Please let me know if you require more info.
I didn't see this in the previous messages, but have you
On 18. 01. 19 at 1:53, Davis, Matthew wrote:
Hi Zdenek,
I assumed that LVM thin snapshots would work like git branches.
Since git also uses diffs on the backend, and git is popular with developers,
the same kind of behaviour seems reasonable to me.
e.g.
```
git checkout master
git branch
On 28. 11. 18 at 10:51, Far Had wrote:
Hi
There are two parameters in lvm.conf which I don't understand the exact
meaning of.
1. retain_min
2. retain_days
What do "the minimum number of archive files you wish to keep" and "minimum
time you wish to keep an archive file for" mean?
Is
On 05. 12. 18 at 7:19, Marcin Wolcendorf wrote:
Hi Everyone,
Recently I have set up a simple lvmcache on my machine. This is a write-through
cache, so in my view it should never be necessary to copy data from the
cachepool to the origin LV. But this is exactly what happens: on system start
On 01. 12. 18 at 13:44, Far Had wrote:
How does it _make sure_ that I have backup files younger than 10 days old?
What if I delete those files?
Hi
Archives in /etc/lvm/archive subdirs are not 'a must-have' - they are optional
and purely for the admin's pleasure.
So the admin can erase them
On 03. 12. 18 at 13:10, Far Had wrote:
Hi, Thank you Zdenek
You used the phrase "makes sure" in your sentence:
"this makes sure you should have files at least 10 days back as well"
I was referring to that!
Hi
Oh - in that case I've been emphasizing the fact that lvm2 will not REMOVE
those.
On 05. 12. 18 at 16:01, Marcin Wolcendorf wrote:
Hi,
Thanks for your reply!
On Wed, Dec 05, 2018 at 03:23:21PM +0100, Zdenek Kabelac wrote:
On 05. 12. 18 at 7:19, Marcin Wolcendorf wrote:
My setup:
I have 2 mdraid devices: one 24T raid6, one 1T raid1, and one separate SSD. All
are LUKS
On 30. 11. 18 at 11:03, Gionatan Danti wrote:
On 30/11/2018 10:52, Zdenek Kabelac wrote:
Hi
The name of the i/o layer bcache is only internal to lvm2 code, for caching
reads from disks during disk processing - the name comes from its use
of a bTree and caching - thus the name bcache.
It's
On 30. 11. 18 at 10:35, Gionatan Danti wrote:
Hi list,
in BZ 1643651 I read:
"In 7.6 (and 8.0), lvm began using a new i/o layer (bcache)
to read and write data blocks."
Last time I checked, bcache was a completely different caching layer,
unrelated to LVM. The above quote, instead,
On 22. 11. 18 at 19:05, Christoph Pleger wrote:
Hello,
I am now trying to not call external LVM commands, but to use LVM library
calls instead. Now I have another problem:
The lvm library is DEAD; it's gone, it does not exist in the upstream tree any
more. It's been marked as deprecated for
On 26. 11. 18 at 19:24, Andrew Hall wrote:
Hi
Can anyone confirm if the following situation is recoverable or not ?
Thanks very much.
1. We have an LV which was recently extended using a VG with
sufficient PE available. A filesystem resize operation was included
with the -r flag :
Let's
On 28. 11. 18 at 13:18, Far Had wrote:
Thanks for the response,
but I still don't get the point. Assume that I use the same names for my VGs.
In this case, if I set for example:
retain_days = 10
and
retain_min = 15
in lvm.conf file, what is the system's behaviour when archiving backup files?
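As I understand the documentation, the two settings act as independent floors: pruning keeps at least retain_min archive files, and additionally never deletes a file younger than retain_days. A sketch of the relevant lvm.conf section, using the values from the example above:

```
backup {
    # Keep at least 15 archive files per VG, however old they are:
    retain_min = 15
    # Never prune an archive file younger than 10 days:
    retain_days = 10
}
```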
On 10. 01. 19 at 4:05, james harvey wrote:
On lvm2 2.02.183, man lvcreate includes:
-Z|--zero y|n
Controls zeroing of the first 4KiB of data in the new LV. Default
is y. Snapshot COW volumes are always zeroed. LV is not zeroed if
the read only flag is set. Warning: trying to mount an
On 10. 01. 19 at 7:23, Davis, Matthew wrote:
Hi Marian,
I'm trying to do it with thin snapshots now. It's all very confusing, and I
can't get it to work.
I've read a lot of the documentation about thin stuff, and it isn't clear
what's happening.
I took a snapshot with
sudo lvcreate
On 24. 01. 19 at 16:06, Eric Ren wrote:
Hi,
With a single command to create a thin-pool, the metadata LV is not created
with a striped
target. Is this by design, or does the command just not handle this case very
well for now?
My main concern here is, if the
On 24. 01. 19 at 15:54, Eric Ren wrote:
Hi,
As you can see, only the "mythinpool_tdata" LV has 2 stripes. Is that OK?
If I want to benefit from striped performance, will it work for me? Or
should I create the data LV, metadata LV, thin pool and thin LV in a
step-by-step way,
and
On 21. 01. 19 at 11:32, Zdenek Kabelac wrote:
On 18. 01. 19 at 1:53, Davis, Matthew wrote:
Hi Zdenek,
I assumed that LVM thin snapshots would work like git branches.
Since git also uses diffs on the backend, and git is popular with
developers, the same kind of behaviour seems reasonable
On 26. 01. 19 at 22:14, Andrei Borzenkov wrote:
I attempt to put device nodes for volumes in sub-directory of /dev to
avoid accidental conflict between device name and volume group and
general /dev pollution. VxVM always did it, using /dev/vx as base
directory, so I would use something like
On 11. 04. 19 at 13:26, Eric Ren wrote:
Hi Zdenek,
Thanks for your reply. The use case is the containerd snapshotter; yes, all
lvm setup is on the host machine, creating thin LVs for a VM-based/KATA
container as rootfs.
For example:
https://github.com/containerd/containerd/pull/3136
and
On 11. 04. 19 at 13:49, Eric Ren wrote:
Hi,
Hi,
I could recommend orienting towards a solution where the 'host' system
provides some service for your containers - so a container asks for an action,
the service orchestrates the action on the system, and returns the requested
resource
On 11. 04. 19 at 2:27, Eric Ren wrote:
Hello list,
Recently, we have been exercising our container environment, which uses lvm to
manage thin LVs, and we found a very strange error when activating a thin LV:
Hi
The reason is very simple here - lvm2 does not work from containers.
It's
On 10. 04. 19 at 15:36, Lentes, Bernd wrote:
- On 9 Apr 2019 at 15:24, Zdenek Kabelac zdenek.kabe...@gmail.com wrote:
On 09. 04. 19 at 15:00, Lentes, Bernd wrote:
Hello list,
I have a two-node HA-cluster which uses local and cluster LVM.
cLVM is currently stopped, I try
On 11. 04. 19 at 15:09, Eric Ren wrote:
Hi,
So do you get a 'partial' error on thin-pool activation on your physical
server?
Yes. The VG of the thin pool has only one simple physical disk. At the
beginning, I also suspected the disk might have disconnected at that moment.
But I am starting to think maybe
On 11. 04. 19 at 16:49, Lentes, Bernd wrote:
- On Apr 11, 2019, at 2:32 PM, Bernd Lentes
bernd.len...@helmholtz-muenchen.de wrote:
- On Apr 11, 2019, at 1:09 PM, Zdenek Kabelac zkabe...@redhat.com wrote:
Hello list,
I have a two-node HA-cluster which uses local and cluster LVM
On 11. 04. 19 at 19:33, Eric Ren wrote:
Hi Zdenek,
Anyway - proper reproducer with full - log would be really the most
explanatory and needed to move on here.
The activation error is reproduced once more. Please see lvm2.log attached.
Please search the error message like this:
On 12. 04. 19 at 10:58, Eric Ren wrote:
Hi,
As the subject says, it seems to be an interaction problem between lvm and
systemd-udev:
```
#lvm version
LVM version: 2.02.130(2)-RHEL7 (2015-10-14)
Library version: 1.02.107-RHEL7 (2015-10-14)
Driver version: 4.35.0
```
lvm call trace when hangs:
On 12. 04. 19 at 17:03, Eric Ren wrote:
Hi,
Although /dev/dm-26 is visible, the device does not seem to be ready in the
kernel.
Sorry, it's not:
[root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-26
Device dm-26 not found
Command failed.
[root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info
On 12. 04. 19 at 16:56, Eric Ren wrote:
Hi,
Since the /dev/dm-x node has been created, I don't understand what it is
waiting for udev to do.
Just waiting for udev rules to create the device symlinks?
Although /dev/dm-26 is visible, the device does not seem to be ready in the
kernel.
[root@iZuf6dbyd7ede51sykedamZ
On 12. 04. 19 at 12:42, Eric Ren wrote:
Hi!
Looking at the provided log file - the system seems to be using some weird
udev rule - which results in generating strange /dev/__ symlinks.
Yes! I also see these weird device names, but I don't have a good
explanation for them, so I feel stupid
On 12. 05. 19 at 0:07, Roy Sigurd Karlsbakk wrote:
Hi
With lvmcache and all, is there anything in the works for making an HSM
(hierarchical storage management, aka tiered storage) with LVM, with less-used
data on slow spinning rust and more-used data on higher tiers?
Hi
There is no
On 12. 05. 19 at 0:52, Rainer Fügenstein wrote:
hi,
I am (was) using Fedora 28 installed in several LVs on /dev/sda5 (= PV),
where sda is a "big" SSD.
By accident, I attached (via a SATA hot-swap bay) an old hard disk
(/dev/sdc1), which was used temporarily for about 2 months to move the volume
On 04. 06. 19 at 21:35, Stephen Boyd wrote:
Quoting Helen Koike (2019-06-04 10:38:59)
On 6/3/19 8:02 PM, Stephen Boyd wrote:
I'm trying to boot a mainline linux kernel on a chromeos device with dm
verity and a USB stick but it's not working for me even with this patch.
I've had to hack
On 03. 06. 19 at 15:03, Heming Zhao wrote:
Hello,
I encountered the filter behavior below when executing 'vgextend'.
Why does the filter take no effect when executing pvcreate or vgcreate?
# rpm -qa | grep lvm2
lvm2-clvm-2.02.180-8.16.x86_64
lvm2-cmirrord-2.02.180-8.16.x86_64
lvm2-2.02.180-8.16.x86_64
the
On 31. 05. 19 at 9:13, Shawn Guo wrote:
Hi David, Zdenek,
Comparing to the stable-2.02 branch, I noticed that there are significant
changes around the locking infrastructure on the master branch. I have a
couple of questions regarding these changes.
1. I see External Locking support was removed as
On 06. 06. 19 at 10:16, Heming Zhao wrote:
Hello,
BTW,
only vgextend doesn't work, which must be a bug. It looks like the filter
handling code has a bug.
Hi
Please provide a full 'vgextend -' trace with some explanation of what you
believe is a bug in the handling code.
Regards
Zdenek
On 06. 06. 19 at 15:30, Heming Zhao wrote:
Hello,
the filter is:
filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|",
"r|/dev/fd.*|", "r|/dev/cdrom|", "a/.*/" ]
if the filter doesn't contain "a/.*/":
- pvcreate, vgcreate & vgextend use the regex filter to reject the disk.
(correct logic)
On 12. 06. 19 at 6:18, Steve Keller wrote:
I want to convert from LVM1 to LVM2 using vgconvert on a server. Is
vgconvert -M2 considered a risky operation? Should I switch
the server into single-user mode and make a backup before running
vgconvert?
And will vgconvert take a long time? May
On 25. 06. 19 at 9:56, Martin Wilck wrote:
Hello Zdenek,
On Tue, 2019-06-25 at 05:30 +, Heming Zhao wrote:
Hello Zdenek,
I am raising this topic again. LVM seems to have a bug in the filter code.
Let me show an example.
filter = [ "a|^/dev/sd.*|", "r|.*|" ]
As the documentation describes, the above filter
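As the documented semantics go, filter entries are tried top to bottom and the first match wins. An illustrative sketch (plain shell, not lvm2 code) of how `filter = [ "a|^/dev/sd.*|", "r|.*|" ]` should classify device paths:

```shell
# First-match-wins evaluation, mimicking lvm.conf filter order:
match_filter() {
  case "$1" in
    /dev/sd*) echo accept ;;   # "a|^/dev/sd.*|" matches first
    *)        echo reject ;;   # "r|.*|" rejects everything else
  esac
}
match_filter /dev/sda   # accept
match_filter /dev/md0   # reject
```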
On 13. 06. 19 at 9:41, Heming Zhao wrote:
Hello List,
I created an md device and used pvcreate to format it.
But pvcreate failed because of the filter rules.
the filter in /etc/lvm/lvm.conf:
```
filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|",
"r|/dev/fd.*|", "r|/dev/cdrom|" ]
```
On 13. 06. 19 at 21:54, Gionatan Danti wrote:
On 13-06-2019 18:05, Ilia Zykov wrote:
Hello.
Tell me please, how can I get the maximum address used by a virtual disk
(a disk created with -V VirtualSize)? I have several large virtual disks,
but they use only a small part at the beginning of
On 20. 06. 19 at 7:41, Gang He wrote:
Hello Guys,
Any ideas? or what information do I need to provide to help this problem?
Hi
It does look like the dm table had a different type of device in place and
lvm2 was not able to remove the device type.
So was this state a result of some 'regular'
On 14. 05. 19 at 8:25, Gang He wrote:
Hello Guys,
Anybody touched this area?
Thanks
Gang
Hi
I'll take a look - although it looks like the problem is possibly with libaio?
Is libaio usable with -flto?
ATM libaio is mandatory for building lvm2.
BTW - why do you need to use this
On 16. 05. 19 at 11:03, Gang He wrote:
Hello Guys,
I found lvconvert command (in lvm lvm2-2.02.120) did not handle "--stripes"
option correctly.
The reproduce steps are as below,
# vgcreate vgtest /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf /dev/vdg
# lvcreate -n lvtest -L 8G vgtest
#
On 10. 07. 19 at 13:54, Simon ELBAZ wrote:
Hi Zdenek,
Thanks for your feedback.
The kernel version is:
[root@panoramix ~]# uname -a
Linux panoramix.ch-perrens.fr 2.6.32-573.12.1.el6.x86_64 #1 SMP Tue Dec 15
21:19:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Hi
So this is really a very
On 11. 07. 19 at 11:39, Simon ELBAZ wrote:
Hi,
I have setup a Debian 10 VM to understand the issue.
root@sympa:~# uname -a
Linux sympa 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux
root@sympa:~# lvdisplay /dev/sympa-vg/swap_1
--- Logical volume ---
LV Path
On 12. 07. 19 at 14:38, Simon ELBAZ wrote:
Hi,
root@sympa:~# swapon /dev/sympa-vg/swap_1
root@sympa:~# lvdisplay
...
--- Logical volume ---
LV Path /dev/sympa-vg/swap_1
LV Name swap_1
VG Name sympa-vg
LV UUID
On 09. 07. 19 at 11:47, Simon ELBAZ wrote:
Hi,
I am trying to understand how the open field is computed.
On a CentOS6 server, lvdisplay has the following output:
lvdisplay /dev/mapper/vg_obm-var_spool_imap
--- Logical volume ---
LV Path
On 01. 07. 19 at 8:31, Gang He wrote:
Hello List,
I am using lvm2-2.02.180 on SLES12SP4. I cannot remove the snapshot LV of the
root volume, which is based on a multipath disk PV.
e.g.
linux-kkay:/ # lvremove /dev/system/snap_root
WARNING: Reading VG system from disk because lvmetad metadata is
On 19. 04. 19 at 21:30, Konstantin Ryabitsev wrote:
Hi, all:
I know it's possible to set up dm-cache to combine network-attached block
devices and local SSDs, but I'm having a hard time finding any first-hand
evidence of this being done anywhere -- so I'm wondering if it's because there
On 24. 04. 19 at 17:35, David Teigland wrote:
On Wed, Apr 24, 2019 at 10:37:26AM +0200, Zdenek Kabelac wrote:
On 23. 04. 19 at 16:27, David Teigland wrote:
On Mon, Apr 22, 2019 at 11:15:53PM -0600, Gang He wrote:
Hello List,
One user complained about this error message.
The user has a usb sd
On 09. 04. 19 at 15:00, Lentes, Bernd wrote:
Hello list,
I have a two-node HA-cluster which uses local and cluster LVM.
cLVM is currently stopped, I try to remove a snapshot from the root lv which
is located on a local VG.
I get this error:
ha-idg-2:/mnt/spp # lvremove -fv
On 23. 04. 19 at 16:27, David Teigland wrote:
On Mon, Apr 22, 2019 at 11:15:53PM -0600, Gang He wrote:
Hello List,
One user complained about this error message.
The user has a USB SD card reader with no media present. When they issue a
pvscan under lvm2-2.02.180, the device is opened, which
On 13. 08. 19 at 23:25, Bernd Eckenfels wrote:
Hello,
I did not know dmsetup works with lvm device names. Did you try running
`dmsetup ls` to see the known devices and the dm-compatible name?
Hi
Of course it has to work.
lvm2 names are just symlinks to /dev/dm-XXX real device nodes.
If
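That symlink relationship can be illustrated with a throwaway directory standing in for /dev (the vg/lv/dm names here are hypothetical, not from the thread):

```shell
# /dev/<vg>/<lv> is just a symlink to the real /dev/dm-N node,
# so readlink reveals the dm device behind an lvm2 name.
dev=$(mktemp -d)
mkdir -p "$dev/vg0"
touch "$dev/dm-3"
ln -s ../dm-3 "$dev/vg0/lvhome"
readlink "$dev/vg0/lvhome"   # -> ../dm-3
```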
On 14. 08. 19 at 9:22, Christoph Pleger wrote:
Hello,
I have a volume group with 20 logical volumes. Only the last one of these
volumes has a strange problem with dmsetup, shown by these commands and
output on the command line:
root@host:/home/linux# /sbin/dmsetup info -c -o name
On 13. 08. 19 at 9:11, Christoph Pleger wrote:
Hello,
I have a volume group with 20 logical volumes. Only the last one of these
volumes has a strange problem with dmsetup, shown by these commands and output
on the command line:
root@host:/home/linux# /sbin/dmsetup info -c -o name
On 21. 08. 19 at 9:58, Christoph Pleger wrote:
Hello,
Some time ago, we wrote an application that uses the lvm2app interface to
manage volume groups and logical volumes. Of course, the application does not
work anymore, now that lvm2app has been dropped. So, is there anywhere
something
On 23. 08. 19 at 2:18, Dave Cohen wrote:
I've read some old posts on this group, which give me some hope that I might
recover a failed drive. But I'm not well-versed in LVM, so details of what
I've read are going over my head.
My problems started when my laptop failed to shut down
On 23. 08. 19 at 13:40, Dave Cohen wrote:
$ thin_check --version
0.8.5
Hi
So if repairing fails even with the latest version - it's better to upload the
metadata to a BZ created here:
https://bugzilla.redhat.com/enter_bug.cgi?product=LVM%20and%20device-mapper
If so - feel free to
On 10. 09. 19 at 17:20, David Teigland wrote:
_pvscan_aa
vgchange_activate
_activate_lvs_in_vg
sync_local_dev_names
fs_unlock
dm_udev_wait <=== this point!
Could you explain to us what's happening in this code? IIUC, an
incoming uevent triggers pvscan, which
On 25. 07. 19 at 20:49, Julian Andres Klode wrote:
systems might have systemd as their normal init systems, but
might not be using it in their initramfs; or like Debian, support
different init systems.
Detect whether we are running on systemd by checking for /run/systemd/system
and then