Thanks for your help, Jason.
I found the reason: the exclusive lock on the image was not released after
the disaster test. After I released the exclusive lock, the virtual machine
started properly, and it can also create snapshots as the nova user.
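For reference, a minimal sketch of how a stale RBD lock can be inspected and released from the command line; the pool, image, lock id and locker names below are placeholders, not values from this thread:

```shell
# List any locks currently held on the image (pool/image are hypothetical)
rbd lock ls volumes/vm-disk-1

# Remove a stale lock, using the lock id and locker reported by "lock ls"
rbd lock rm volumes/vm-disk-1 "auto 139643345791728" client.4123
```

With the exclusive-lock image feature, locks are normally handed over between clients automatically; manually removing one should only be needed when the previous holder died uncleanly, as in the disaster test above.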
At 2019-06-04 20:13:35, "Jason Dillaman" wrote:
On Tue, Jun 4, 2019 at 3:32 PM Sage Weil wrote:
[pruning CC list]
On Tue, 4 Jun 2019, Alex Gorbachev wrote:
Late question, but I am noticing
ceph-volume: automatic VDO detection
Does this mean that the OSD layer will at some point support
deployment with VDO?
Or that one could build on top of VDO devices and Ceph would detect
this and report somewhere?
Best,
--
Alex Gorbachev
ISS Storcium
Hello,
I have a Ceph cluster that has been running for over 2 years, and the
monitors began crashing yesterday. I have had some OSDs flapping up and
down occasionally, and sometimes I need to rebuild an OSD. I found 3 OSDs
down yesterday; they may or may not have caused this issue.
Ceph version: 12.2.12 (upgraded from
On 6/4/19 7:37 AM, Wido den Hollander wrote:
> I've set up a temporary machine next to the 13.2.5 cluster with the
> 13.2.6 packages from Shaman.
>
> On that machine I'm running:
>
> $ radosgw-admin gc process
>
> That seems to work as intended! So the PR seems to have fixed it.
>
> Should be
We're glad to announce the sixth bugfix release of the Mimic v13.2.x
long term stable release series. We recommend that all Mimic users
upgrade. We thank everyone for contributing towards this release.
Notable Changes
---------------
* Ceph v13.2.6 now packages python bindings for python3.6
On Tue, Jun 4, 2019 at 4:55 AM 解决 wrote:
On Tue, Jun 4, 2019 at 8:07 AM Jason Dillaman wrote:
On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke
wrote:
Basically they max out at around 1000 IOPS and report 100%
utilization and feel slow.
Haven't seen the 5200 yet.
The Micron 5100 performs wonderfully!
You just have to turn its write cache off:
hdparm -W 0 /dev/sdX
1000 IOPS means you haven't done it. Although even with the write cache
enabled I
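A sketch of how sync-write IOPS figures like the ones above are typically measured with fio; the exact parameters are assumptions, and /dev/sdX is a placeholder:

```shell
# Measure single-threaded sync random-write IOPS, the workload that
# Ceph journaling stresses. Destructive: do not point this at a
# device that holds data.
fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --runtime=30 --time_based
```

Re-running this after `hdparm -W 0` should show whether disabling the write cache changes the sync-write numbers.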
On 5/30/19 2:45 PM, Wido den Hollander wrote:
>
>
> On 5/29/19 11:22 PM, J. Eric Ivancich wrote:
>> Hi Wido,
>>
>> When you run `radosgw-admin gc list`, I assume you are *not* using the
>> "--include-all" flag, right? If you're not using that flag, then
>> everything listed should be expired
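For context, a sketch of the radosgw garbage-collection commands under discussion; the `--include-all` flag is the one mentioned above, and its exact behaviour should be treated as version-dependent:

```shell
# List objects whose GC grace period has expired (the default listing)
radosgw-admin gc list

# List all queued GC entries, including ones not yet expired
radosgw-admin gc list --include-all

# Run a garbage-collection pass manually
radosgw-admin gc process
```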
I have a fresh Ceph 14.2.1 cluster up and running, based on Ubuntu 18.04.
It consists of 9 hosts (+1 admin host). Each of the nine hosts runs 16
ceph-osd daemons; three of the nine also run a ceph-mon and a ceph-mgr
daemon. So three hosts are running osd, mon and also mgr.
Hi all,
We use Ceph (Luminous) + OpenStack (Queens) in my test environment. The
virtual machine does not start properly after the disaster test, and the
image of the virtual machine cannot create a snapshot. The procedure is as
follows:
#!/usr/bin/env python
import rados
import rbd
with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    ioctx = cluster.open_ioctx('volumes')  # pool/image/snap names are placeholders
    with rbd.Image(ioctx, 'vm-image') as image:
        image.create_snap('snap1')
    ioctx.close()
Hi,
On 6/4/19 10:12 AM, CUZA Frédéric wrote:
Hi everyone,
We want to migrate data from one cluster (Hammer) to a new one (Mimic). We
do not wish to upgrade the current cluster, as all the hardware is EOS and
we are upgrading the configuration of the servers.
We can't find a "proper" way to mount two rbd images from two different
clusters on the
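One common way to talk to two clusters from a single host is to keep a separate conf file (and keyring) per cluster and select it with rbd's `-c` option; data can then be streamed between them. A sketch, where the conf paths, pool and image names are all hypothetical:

```shell
# Separate conf files per cluster, e.g.:
#   /etc/ceph/hammer.conf   (old cluster)
#   /etc/ceph/mimic.conf    (new cluster)
# Copy an image from the old cluster to the new one via stdout/stdin:
rbd -c /etc/ceph/hammer.conf export rbd/myimage - \
  | rbd -c /etc/ceph/mimic.conf import - rbd/myimage
```

The `--cluster NAME` option works similarly, looking up /etc/ceph/NAME.conf by convention, which avoids mounting anything from both clusters at once.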
(Thanks Yan for confirming the fix - we'll implement it now)
@Marc
Yep - x3 replica on the metadata pools.
We have 4 clusters (all running the same version) and have experienced
metadata corruption on the majority of them at some time or other - normally
a scan fixes it - I suspect due to the use case -
How did this get damaged? You had 3x replication on the pool?
-----Original Message-----
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: dinsdag 4 juni 2019 1:14
To: James Wilkins
Cc: ceph-users
Subject: Re: [ceph-users] CEPH MDS Damaged Metadata - recovery steps
On Mon, Jun 3, 2019 at