Re: [linux-lvm] pvscan: /dev/sdc: open failed: No medium found

2019-04-23 Thread Gang He
Hello Peter and David,

Thanks for your quick responses.
How should we handle this behavior going forward?
Should we treat it as an issue and silently filter this kind of device out,
or keep printing the current error message, which looks a bit unfriendly but
is not logically wrong?


Thanks
Gang




>>> On 2019/4/23 at 23:24, in message <9cd91b48-408b-f7a9-c4bc-df05d5537...@redhat.com>,
>>> Peter Rajnoha wrote:
> On 4/23/19 7:15 AM, Gang He wrote:
>> Hello List,
>> 
>> One user has complained about this error message.
>> The user has a USB SD card reader with no media present. When they issue a
>> pvscan under lvm2-2.02.180, the device is opened, which results in 'No medium
>> found' being reported.
>> But lvm2-2.02.120 did not do this (the device appears to get filtered out
>> earlier). The customer views the 'No medium found' message as an issue/bug.
>> Any suggestions/comments on this error message?
>> 
>> The detailed information is below:
>> lvm2 2.02.180-9.4.2
>> OS: SLES12 SP4
>> Kernel: 4.12.14-95.3-default
>> Hardware: HP ProLiant DL380 Gen10
>> 
>> After upgrading from SLES12 SP3 to SP4, the customer is reporting the following
>> error message:
>> 
>>  # pvscan
>>  /dev/sdc: open failed: No medium found
>>  PV /dev/sdb   VG Q11vg10 lvm2 [5.24 TiB / 2.00 TiB free]
>>  Total: 1 [5.24 TiB] / in use: 1 [5.24 TiB] / in no VG: 0 [0   ]
>> 
>> 
> 
> See also https://github.com/lvmteam/lvm2/issues/13 
> 
> -- 
> Peter


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


[linux-lvm] Internal error: Referenced LV pvmove0 not listed in VG

2019-04-23 Thread Anssi Hannula

Hi all,

I'm getting the following error when trying to run a pvmove command:

# pvmove -v /dev/md0:18122768-19076597
Cluster mirror log daemon not included in build.
Archiving volume group "delta" metadata (seqno 72).
Creating logical volume pvmove0
activation/volume_list configuration setting not defined: Checking only host tags for delta/home_r.
Moving 167398 extents of logical volume delta/home_r.
activation/volume_list configuration setting not defined: Checking only host tags for delta/data_r.
  Internal error: Referenced LV pvmove0 not listed in VG delta.

This is with git master, but I see the same on 2.02.177.

The PV in question looks like the following:

# pvdisplay /dev/md0 -m
  --- Physical volume ---
  PV Name   /dev/md0
  VG Name   delta
  PV Size   72,77 TiB / not usable 3,38 MiB
  Allocatable   yes (but full)
  PE Size   4,00 MiB
  Total PE  19076598
  Free PE   0
  Allocated PE  19076598
  PV UUID   P4jDYr-BjDD-3EPk-AMhp-JC1F-1fi2-DecHRl

  --- Physical Segments ---
  Physical extent 0 to 655359:
Logical volume  /dev/delta/home_r
Logical extents 0 to 655359
  Physical extent 655360 to 917503:
Logical volume  /dev/delta/data_r
Logical extents 16120362 to 16382505
  Physical extent 917504 to 17037865:
Logical volume  /dev/delta/data_r
Logical extents 0 to 16120361
  Physical extent 17037866 to 17431081:
Logical volume  /dev/delta/home_r
Logical extents 655360 to 1048575
  Physical extent 17431082 to 18909199:
Logical volume  /dev/delta/data_r
Logical extents 16382506 to 17860623
  Physical extent 18909200 to 19076597:
Logical volume  /dev/delta/home_r
Logical extents 1048576 to 1215973


Full log: http://onse.fi/pvmove/delta-pvmove.txt

vgcfgbackup: http://onse.fi/pvmove/delta-backup.txt
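
For reference (not something suggested in the thread so far): if the failed pvmove
leaves a stray pvmove0 LV behind, the usual cleanup is a plain abort, with a metadata
restore from the backup above as a fallback. A minimal sketch, assuming the vgcfgbackup
output was saved locally as delta-backup.txt:

# abort any half-started pvmove and drop the temporary pvmove0 LV
pvmove --abort
# list the metadata backups/archives LVM keeps for the VG
vgcfgrestore --list delta
# restore the VG metadata from the saved backup file
vgcfgrestore -f delta-backup.txt delta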


--
Anssi Hannula

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] pvscan: /dev/sdc: open failed: No medium found

2019-04-23 Thread Peter Rajnoha
On 4/23/19 7:15 AM, Gang He wrote:
> Hello List,
> 
> One user has complained about this error message.
> The user has a USB SD card reader with no media present. When they issue a
> pvscan under lvm2-2.02.180, the device is opened, which results in 'No medium
> found' being reported.
> But lvm2-2.02.120 did not do this (the device appears to get filtered out
> earlier). The customer views the 'No medium found' message as an issue/bug.
> Any suggestions/comments on this error message?
> 
> The detailed information is below:
> lvm2 2.02.180-9.4.2
> OS: SLES12 SP4
> Kernel: 4.12.14-95.3-default
> Hardware: HP ProLiant DL380 Gen10
> 
> After upgrading from SLES12 SP3 to SP4, the customer is reporting the following
> error message:
> 
>  # pvscan
>  /dev/sdc: open failed: No medium found
>  PV /dev/sdb   VG Q11vg10 lvm2 [5.24 TiB / 2.00 TiB free]
>  Total: 1 [5.24 TiB] / in use: 1 [5.24 TiB] / in no VG: 0 [0   ]
> 
> 

See also https://github.com/lvmteam/lvm2/issues/13

-- 
Peter

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] pvscan: /dev/sdc: open failed: No medium found

2019-04-23 Thread David Teigland
On Mon, Apr 22, 2019 at 11:15:53PM -0600, Gang He wrote:
> Hello List,
> 
> One user has complained about this error message.
> The user has a USB SD card reader with no media present. When they issue a
> pvscan under lvm2-2.02.180, the device is opened, which results in 'No medium
> found' being reported.
> But lvm2-2.02.120 did not do this (the device appears to get filtered out
> earlier). The customer views the 'No medium found' message as an issue/bug.
> Any suggestions/comments on this error message?
> 
> The detailed information is below:
> lvm2 2.02.180-9.4.2
> OS: SLES12 SP4
> Kernel: 4.12.14-95.3-default
> Hardware: HP ProLiant DL380 Gen10
> 
> After upgrading from SLES12 SP3 to SP4, the customer is reporting the following
> error message:
> 
>  # pvscan
>  /dev/sdc: open failed: No medium found
>  PV /dev/sdb   VG Q11vg10 lvm2 [5.24 TiB / 2.00 TiB free]
>  Total: 1 [5.24 TiB] / in use: 1 [5.24 TiB] / in no VG: 0 [0   ]

I've heard this a few times now; I guess we should drop it, as it's probably
more trouble than it's worth.
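
In the meantime, one possible local workaround (not something decided in this thread;
it assumes the empty card reader always appears as /dev/sdc and never holds PVs) is to
exclude the device with a filter in /etc/lvm/lvm.conf:

devices {
    # reject the empty card reader, accept everything else
    filter = [ "r|^/dev/sdc$|", "a|.*|" ]
}

On versions that use lvmetad, global_filter may be needed instead so that the exclusion
also applies to the daemon's scans.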

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Network-attached block storage and local SSDs for dm-cache

2019-04-23 Thread Konstantin Ryabitsev

On Mon, Apr 22, 2019 at 02:25:44PM -0400, Mike Snitzer wrote:

>> I know it's possible to set up dm-cache to combine network-attached
>> block devices and local SSDs, but I'm having a hard time finding any
>> first-hand evidence of this being done anywhere -- so I'm wondering
>> if it's because there are reasons why this is a Bad Idea, or merely
>> because there aren't many reasons for folks to do that.
>> 
>> The reason why I'm trying to do it, in particular, is for
>> mirrors.kernel.org systems where we already rely on dm-cache to
>> combine large slow spinning disks with SSDs to a great advantage.
>> Most hits on those systems are to the same set of files (latest
>> distro package updates), so dm-cache hit-to-miss ratio is very
>> advantageous. However, we need to build the newest iterations of those
>> systems, and being able to use network-attached storage at providers
>> like Packet with local SSD drives would remove the need for us to
>> purchase and host huge drive arrays.
>> 
>> Thanks for any insights you may offer.


> Only thing that could present itself as a new challenge is the
> reliability of the network-attached block devices (e.g. do network
> outages compromise dm-cache's ability to function).


I expect them to be *reasonably* reliable, but of course the chances of 
network-attached block storage becoming unavailable are higher than for 
directly-attached storage.



> I've not done any focused testing for, or thinking about, the impact
> unreliable block devices might have on dm-cache (or dm-thinp, etc).
> Usually we advise people to ensure the devices that they layer upon are
> adequately robust/reliable.  Short of that you'll need to create your
> own luck by engineering a solution that provides network storage
> recovery.


I expect that in writethrough mode the worst kind of recovery we'd have 
to deal with is rebuilding the dm-cache setup, as even if the underlying 
slow storage becomes unavailable, that shouldn't result in FS corruption 
on it. Even though mirrors.kernel.org data is just that, mirrors, we 
certainly would like to avoid situations where we have to re-sync 40TB 
all over, as that usually means a week-long outage.



If the "origin" device is network-attached and proves unreliable you
can expect to see the dm-cache experience errors.  dm-cache is not
raid.  So if concerned about network outages you might want to (ab)use
dm-multipath's "queue_if_no_path" mode to queue IO for retry once the
network-based device is available again (dm-multipath isn't raid
either, but for your purposes you need some way to isolate potential for
network based faults).  Or do you think you might be able to RAID1 or
RAID5 N of these network attached drives together?


I don't think that makes sense, as these volumes would likely be coming 
from the same NAS array, so we'd be increasing complexity without 
necessarily hedging any risks.
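
(For reference: the "queue_if_no_path" behaviour mentioned above is normally enabled
through multipath.conf rather than per command. A minimal sketch, assuming the network
devices sit under dm-multipath; the WWID is a placeholder:)

multipaths {
    multipath {
        # placeholder: WWID of the network-attached device
        wwid            <device-wwid>
        # queue I/O while all paths are down instead of failing it
        # (this is what the "queue_if_no_path" feature does)
        no_path_retry   queue
    }
}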


Thanks for your help -- I think we're going to try this out as an
experimental setup and then see what kind of issues we run into.


Best,
-K

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Network-attached block storage and local SSDs for dm-cache

2019-04-23 Thread Zdenek Kabelac

On 19. 04. 19 at 21:30, Konstantin Ryabitsev wrote:

> Hi, all:
> 
> I know it's possible to set up dm-cache to combine network-attached block
> devices and local SSDs, but I'm having a hard time finding any first-hand
> evidence of this being done anywhere -- so I'm wondering if it's because there
> are reasons why this is a Bad Idea, or merely because there aren't many
> reasons for folks to do that.
> 
> The reason why I'm trying to do it, in particular, is for mirrors.kernel.org
> systems where we already rely on dm-cache to combine large slow spinning disks
> with SSDs to a great advantage. Most hits on those systems are to the same set
> of files (latest distro package updates), so dm-cache hit-to-miss ratio is
> very advantageous. However, we need to build the newest iterations of those
> systems, and being able to use network-attached storage at providers like
> Packet with local SSD drives would remove the need for us to purchase and host
> huge drive arrays.
> 
> Thanks for any insights you may offer.



Hi

From the lvm2 POV, if you put both devices into a single VG, you should be able
to easily configure the setup so that your 'main/origin' LV sits on the network
storage and the cache is located on the SSD:


lvcreate -L <maxsize> --name MYLV vg /dev/networkshdd
lvcreate --cache -L <cachesize> vg/MYLV /dev/ssd
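
A slightly more explicit variant of the same idea (a sketch only; sizes, LV names and
device paths are placeholders) creates the cache pool separately and pins the cache
mode to writethrough, which matches the recovery expectations discussed earlier in the
thread:

# big origin LV on the network-attached PV
lvcreate -L <maxsize> --name MYLV vg /dev/networkshdd
# cache pool (data + metadata) on the local SSD
lvcreate --type cache-pool -L <cachesize> --name MYLV_cachepool vg /dev/ssd
# attach the cache pool to the origin LV in writethrough mode
lvconvert --type cache --cachemode writethrough --cachepool vg/MYLV_cachepool vg/MYLV

The cache mode can be switched later with lvchange --cachemode, and the cache can be
detached again with lvconvert --splitcache without touching the origin LV.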

But of course, as Mike points out, dm-cache currently expects the origin device
to be a reliable one.


Regards

Zdenek

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/