Re: [ceph-users] rbd.ReadOnlyImage: [errno 30]

2019-06-04 Thread 解决
Thanks for your help, Jason, I found the reason. The exclusive lock on the image was not released after the disaster test. After I released the exclusive lock, the virtual machine started properly, and it can also create snaps with the nova user. At 2019-06-04 20:13:35, "Jason Dillaman" wrote: >On Tue, Jun 4,
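
For readers hitting the same symptom, a minimal sketch of how a stale lock is usually found and removed with the rbd CLI; the pool/image name is a placeholder, and the lock id and locker values come from the "lock ls" output:

    # List lock holders on the image (pool/image name is a placeholder)
    rbd lock ls volumes/my-image

    # Remove the stale lock, using the lock id and locker printed by "rbd lock ls"
    rbd lock rm volumes/my-image "<lock-id>" client.<num>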

Re: [ceph-users] v12.2.5 Luminous released

2019-06-04 Thread Alex Gorbachev
On Tue, Jun 4, 2019 at 3:32 PM Sage Weil wrote: > > [pruning CC list] > > On Tue, 4 Jun 2019, Alex Gorbachev wrote: > > Late question, but I am noticing > > > > ceph-volume: automatic VDO detection > > > > Does this mean that the OSD layer will at some point support > > deployment with VDO? > > >

Re: [ceph-users] v12.2.5 Luminous released

2019-06-04 Thread Sage Weil
[pruning CC list] On Tue, 4 Jun 2019, Alex Gorbachev wrote: > Late question, but I am noticing > > ceph-volume: automatic VDO detection > > Does this mean that the OSD layer will at some point support > deployment with VDO? > > Or that one could build on top of VDO devices and Ceph would

Re: [ceph-users] v12.2.5 Luminous released

2019-06-04 Thread Alex Gorbachev
Late question, but I am noticing "ceph-volume: automatic VDO detection". Does this mean that the OSD layer will at some point support deployment with VDO? Or that one could build on top of VDO devices and Ceph would detect this and report somewhere? Best, -- Alex Gorbachev ISS Storcium On Tue,

[ceph-users] ceph monitor keep crash

2019-06-04 Thread Jianyu Li
Hello, I have a Ceph cluster that has been running for over 2 years, and the monitor began crashing yesterday. I have had some flapping OSDs going up and down occasionally, and sometimes I need to rebuild an OSD. I found 3 OSDs down yesterday; they may or may not have caused this issue. Ceph Version: 12.2.12, ( upgraded from
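
A rough sketch of the first places usually checked when a monitor starts crashing on a release of this vintage; the daemon id below is a placeholder:

    # Overall cluster state and which monitors are currently in quorum
    ceph -s
    ceph mon stat

    # The crashing monitor's own log and its systemd journal
    less /var/log/ceph/ceph-mon.<id>.log
    journalctl -u ceph-mon@<id>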

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-04 Thread J. Eric Ivancich
On 6/4/19 7:37 AM, Wido den Hollander wrote: > I've set up a temporary machine next to the 13.2.5 cluster with the > 13.2.6 packages from Shaman. > > On that machine I'm running: > > $ radosgw-admin gc process > > That seems to work as intended! So the PR seems to have fixed it. > > Should be
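
For reference, a short sketch of how the garbage-collection backlog can be inspected and drained by hand, along the lines discussed in this thread:

    # Rough size of the GC queue: entries already eligible vs. everything queued
    # (the output is JSON, so the line count is only an indication)
    radosgw-admin gc list | wc -l
    radosgw-admin gc list --include-all | wc -l

    # Run a garbage-collection pass manually
    radosgw-admin gc process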

[ceph-users] v13.2.6 Mimic released

2019-06-04 Thread Abhishek Lekshmanan
We're glad to announce the sixth bugfix release of the Mimic v13.2.x long term stable release series. We recommend that all Mimic users upgrade. We thank everyone for contributing towards this release. Notable Changes: * Ceph v13.2.6 now packages python bindings for python3.6

Re: [ceph-users] rbd.ReadOnlyImage: [errno 30]

2019-06-04 Thread Jason Dillaman
On Tue, Jun 4, 2019 at 4:55 AM 解决 wrote: > > Hi all, > We use ceph (luminous) + openstack (queens) in my test environment. The > virtual machine does not start properly after the disaster test, and the image > of the virtual machine cannot create a snap. The procedure is as follows: > #!/usr/bin/env

Re: [ceph-users] Multiple rbd images from different clusters

2019-06-04 Thread Jason Dillaman
On Tue, Jun 4, 2019 at 8:07 AM Jason Dillaman wrote: > > On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke > wrote: > > > > Hi, > > > > On 6/4/19 10:12 AM, CUZA Frédéric wrote: > > > > Hi everyone, > > > > > > > > We want to migrate data from one cluster (Hammer) to a new one (Mimic). We > > do

Re: [ceph-users] Multiple rbd images from different clusters

2019-06-04 Thread Jason Dillaman
On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke wrote: > > Hi, > > On 6/4/19 10:12 AM, CUZA Frédéric wrote: > > Hi everyone, > > > > We want to migrate data from one cluster (Hammer) to a new one (Mimic). We > do not wish to upgrade the actual cluster as all the hardware is EOS and we > upgrade

Re: [ceph-users] performance in a small cluster

2019-06-04 Thread vitalif
Basically they max out at around 1000 IOPS, report 100% utilization and feel slow. Haven't seen the 5200 yet. Micron 5100s perform wonderfully! You just have to turn their write cache off: hdparm -W 0 /dev/sdX. 1000 IOPS means you haven't done it. Although even with write cache enabled I
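
A small sketch of checking and disabling the volatile write cache as suggested above; the device name is a placeholder, and on many drives the setting does not survive a power cycle, so it is typically reapplied at boot (for example from a udev rule):

    # Show the current write-caching setting
    hdparm -W /dev/sdX

    # Disable the volatile write cache
    hdparm -W 0 /dev/sdX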

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-04 Thread Wido den Hollander
On 5/30/19 2:45 PM, Wido den Hollander wrote: > > > On 5/29/19 11:22 PM, J. Eric Ivancich wrote: >> Hi Wido, >> >> When you run `radosgw-admin gc list`, I assume you are *not* using the >> "--include-all" flag, right? If you're not using that flag, then >> everything listed should be expired

[ceph-users] Two questions about ceph update/upgrade strategies

2019-06-04 Thread Rainer Krienke
I have a fresh Ceph 14.2.1 cluster up and running, based on Ubuntu 18.04. It consists of 9 hosts (+1 admin host). The nine hosts each have 16 ceph-osd daemons running, and three of these nine hosts also have a ceph-mon and a ceph-mgr daemon running. So three hosts are running osd, mon and also mgr
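
On the update question, a rough sketch of the usual rolling pattern (one host at a time, with rebalancing suppressed while a host is down); the exact targets and ordering may differ per setup:

    # Prevent OSDs from being marked out while a host is being updated/rebooted
    ceph osd set noout

    # On each host in turn: update packages, restart daemons, wait for HEALTH_OK
    systemctl restart ceph-mon.target    # only on the three mon/mgr hosts
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target
    ceph -s

    # When all hosts are done
    ceph osd unset noout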

[ceph-users] rbd.ReadOnlyImage: [errno 30]

2019-06-04 Thread 解决
Hi all, We use ceph (luminous) + openstack (queens) in my test environment. The virtual machine does not start properly after the disaster test, and the image of the virtual machine cannot create a snap. The procedure is as follows: #!/usr/bin/env python import rados import rbd with
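
The script is cut off in the archive; below is a minimal sketch of the kind of snapshot-creation code being described, using the python rados/rbd bindings. The conf file, pool, image, snapshot and client names are placeholders:

    #!/usr/bin/env python
    import rados
    import rbd

    # connect as the nova user (rados_id picks the client.nova cephx identity)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='nova')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')
    image = rbd.Image(ioctx, 'my-image')
    try:
        # this is roughly where the thread's rbd.ReadOnlyImage (errno 30) error
        # appeared while a stale exclusive lock was still held on the image
        image.create_snap('test-snap')
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()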

Re: [ceph-users] Multiple rbd images from different clusters

2019-06-04 Thread Burkhard Linke
Hi, On 6/4/19 10:12 AM, CUZA Frédéric wrote: Hi everyone, We want to migrate data from one cluster (Hammer) to a new one (Mimic). We do not wish to upgrade the current cluster, as all the hardware is EOS and we are upgrading the configuration of the servers. We can’t find a “proper” way to mount

[ceph-users] Multiple rbd images from different clusters

2019-06-04 Thread CUZA Frédéric
Hi everyone, We want to migrate data from one cluster (Hammer) to a new one (Mimic). We do not wish to upgrade the current cluster, as all the hardware is EOS and we are upgrading the configuration of the servers. We can't find a "proper" way to mount two rbd images from two different clusters on the
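
For context, one commonly used approach (not necessarily the one settled on in the thread) is to keep both clusters' conf and keyring files under /etc/ceph/ and select them with the --cluster option, then stream images across without mapping anything. Cluster, pool and image names below are placeholders:

    # Expects /etc/ceph/old.conf + /etc/ceph/old.client.admin.keyring,
    # and the same pair named "new" for the destination cluster
    rbd --cluster old ls rbd
    rbd --cluster new ls rbd

    # Copy an image from the old cluster to the new one via a pipe
    rbd --cluster old export rbd/my-image - | rbd --cluster new import - rbd/my-image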

Re: [ceph-users] CEPH MDS Damaged Metadata - recovery steps

2019-06-04 Thread James Wilkins
(Thanks Yan for confirming the fix - we'll implement it now) @Marc Yep - 3x replica on metadata pools. We have 4 clusters (all running the same version) and have experienced metadata corruption on the majority of them at some time or other - normally a scan fixes it - I suspect due to the use case -
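
For anyone following along, a rough sketch of the kind of forward-scrub commands being referred to, run through the active MDS admin socket on Luminous/Mimic-era releases (newer releases expose the same operations through "ceph tell mds..."); the daemon name and damage id are placeholders:

    # List what the MDS currently considers damaged
    ceph daemon mds.<name> damage ls

    # Forward scrub from the root and repair what can be repaired
    ceph daemon mds.<name> scrub_path / recursive repair

    # Clear a damage entry once it has been dealt with
    ceph daemon mds.<name> damage rm <damage-id>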

Re: [ceph-users] CEPH MDS Damaged Metadata - recovery steps

2019-06-04 Thread Marc Roos
How did this get damaged? You had 3x replication on the pool? -Original Message- From: Yan, Zheng [mailto:uker...@gmail.com] Sent: Tuesday, 4 June 2019 1:14 To: James Wilkins Cc: ceph-users Subject: Re: [ceph-users] CEPH MDS Damaged Metadata - recovery steps On Mon, Jun 3, 2019 at