...@gol.com]
Sent: Tuesday, 4 August 2015 3:47 PM
To: Daniel Manzau
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG's Degraded on disk failure not remapped.
Hello,
There's a number of reasons I can think of why this would happen.
You say default behavior but looking at your map it's obvious that you
probably don't have a default cluster and crush map.
Subject: Re: [ceph-users] PG's Degraded on disk failure not remapped.
Hello,
On Tue, 4 Aug 2015 20:33:58 +1000 Daniel Manzau wrote:
Hi Christian,
True, it's not exactly out of the box. Here is the ceph.conf.
Crush rule file and a description (are those 4 hosts or are the HDD and
SSD shared
Hello,
There's a number of reasons I can think of why this would happen.
You say default behavior but looking at your map it's obvious that you
probably don't have a default cluster and crush map.
Your ceph.conf may help, too.
Regards,
Christian
On Tue, 4 Aug 2015 13:05:54 +1000 Daniel Manzau
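For anyone hitting the same symptom, the map and config Christian is asking
about can be pulled straight from the cluster; a minimal sketch, with the /tmp
file names as placeholders:

  ceph osd tree                                          # crush hierarchy and which OSDs are up/in
  ceph osd getcrushmap -o /tmp/crushmap.bin              # grab the compiled crush map
  crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt    # decompile it into readable rule text
  cat /etc/ceph/ceph.conf                                # the ceph.conf being asked for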
Subject: Re: [ceph-users] pg's degraded
Hi Craig,
Recreating the missing PGs fixed it. Thanks for your help.
But when I tried to mount the filesystem, it gave me “mount error 5”. I
tried to restart the MDS server but it won’t work. It tells me that it’s
laggy/unresponsive.
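A note for readers: "mount error 5" is EIO, and with CephFS it usually means no
MDS is currently active. The MDS state can be checked before retrying the
mount; a sketch, assuming sysvinit-era init scripts on the MDS host:

  ceph mds stat                    # e.g. up:active vs. up:replay or laggy
  ceph -s                          # overall cluster health, including the mds line
  sudo service ceph restart mds    # restart the local MDS daemon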
...@centraldesktop.com,
ceph-users ceph-us...@ceph.com
Subject: Re: [ceph-users] pg's degraded
Thanks Michael. That was a good idea.
I did:
1. sudo service ceph stop mds
2. ceph mds newfs 1 0 --yes-i-really-mean-it (where 1 and 0 are the pool IDs)
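For anyone repeating this, the two numbers are pool IDs (metadata pool first,
then data pool) and can be confirmed before running newfs; a sketch assuming
the old default "data" and "metadata" pools:

  ceph osd lspools                 # prints "<id> <name>" for every pool
  sudo service ceph stop mds
  ceph mds newfs <metadata_pool_id> <data_pool_id> --yes-i-really-mean-it
  sudo service ceph start mds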
Yes, it was a healthy cluster and I had to rebuild because the OSDs got
accidentally created on the root disk. Out of 4 OSDs I had to rebuild 3 of
them.
[jshah@Lab-cephmon001 ~]$ ceph osd tree
# id weight type name up/down reweight
-1 0.5 root default
-2 0.0
Just to be clear, this is from a cluster that was healthy, had a disk
replaced, and hasn't returned to healthy? It's not a new cluster that has
never been healthy, right?
Assuming it's an existing cluster, how many OSDs did you replace? It
almost looks like you replaced multiple OSDs at the same time.
So you have your crushmap set to choose osd instead of choose host?
Did you wait for the cluster to recover between each OSD rebuild? If you
rebuilt all 3 OSDs at the same time (or without waiting for a complete
recovery between them), that would cause this problem.
On Thu, Nov 20, 2014 at
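The failure-domain question is easy to check on a live cluster; a sketch,
assuming a stock replicated rule:

  ceph osd crush rule dump         # JSON; look at the chooseleaf step of each rule
  # a default map separates replicas across hosts: "op": "chooseleaf_firstn", "type": "host"
  # if the step says "type": "osd", replicas may share a host, and a single failed
  # disk can leave PGs degraded with nowhere to remap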
Thanks for your help.
I was using Puppet to install the OSDs, where it chooses a path over a device
name. Hence it created the OSDs in a path within the root volume, since the
path specified was incorrect.
And all 3 of the OSDs were rebuilt at the same time because it was unused and
we had
If there's no data to lose, tell Ceph to re-create all the missing PGs.
ceph pg force_create_pg 2.33
Repeat for each of the missing PGs. If that doesn't do anything, you might
need to tell Ceph that you lost the OSDs. For each OSD you moved, run ceph
osd lost OSDID, then try the
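Put together, the sequence Craig describes looks roughly like this; a sketch
in which pg 2.33 (from the thread) and OSD id 3 are placeholders for whatever
ceph health reports:

  ceph health detail | grep ^pg               # list the stuck/stale PGs
  ceph pg dump_stuck stale                    # alternative view of the same PGs
  ceph pg force_create_pg 2.33                # repeat for each missing PG
  ceph osd lost 3 --yes-i-really-mean-it      # only if the PGs still won't create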
Ok. Thanks.
—Jiten
On Nov 20, 2014, at 2:14 PM, Craig Lewis cle...@centraldesktop.com wrote:
If there's no data to lose, tell Ceph to re-create all the missing PGs.
ceph pg force_create_pg 2.33
Repeat for each of the missing PGs. If that doesn't do anything, you might
need to tell
BTW, all these machines are VMs.
Maybe delete the pool and start over?
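If it does come to starting over, with CephFS that means deleting and
recreating both pools and pointing the MDS at the new ones; a sketch using the
old default pool names and a placeholder PG count:

  ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
  ceph osd pool delete data data --yes-i-really-really-mean-it
  ceph osd pool create data 64
  ceph osd pool create metadata 64
  ceph mds newfs <new_metadata_pool_id> <new_data_pool_id> --yes-i-really-mean-it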