Sent: Oct 29, 2015 9:14
To: Robert LeBlanc
Cc: Lindsay Mathieson; Gurjar, Unmesh; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] creating+incomplete issues

wow this sounds hard to me. Can you show the details?
thanks a lot.

On 2015/10/29 Thursday 9:01, Robert LeBlanc wrote:
> You need to change the CRUSH map to select osd instead of host.
hello,
After installing ceph I tried to watch it with ceph -w:

2015-10-28 14:54:08.035995 mon.0 [INF] pgmap v82: 192 pgs: 104 active+degraded+remapped, 88 creating+incomplete; 0 bytes data, 36775 MB used, 113 GB / 156 GB avail
2015-10-28 14:54:12.327050 mon.0 [INF] pgmap v83: 192 pgs: 104
Hello,
$ ceph osd stat
osdmap e18: 2 osds: 2 up, 2 in
this is what it shows.
Does it mean I need to add up to 3 osds? I just used the default setup.
thx.
On 2015/10/28 Wednesday 19:53, Gurjar, Unmesh wrote:
Are all the OSDs being reported as 'up' and 'in'? This can be checked by executing 'ceph osd stat'.
On 29 October 2015 at 10:29, Wah Peng wrote:
> $ ceph osd stat
> osdmap e18: 2 osds: 2 up, 2 in
>
> this is what it shows.
> does it mean I need to add up to 3 osds? I just use the default setup.
>
If you went with the defaults then your pool size will be 3, meaning each object must be replicated to 3 OSDs; with only 2 OSDs the placement groups can never be satisfied.
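One way to check and adjust this (a sketch; it assumes the default pool was named "rbd", as it was on clusters of this era, so substitute your own pool name):

```shell
# Show the replication size of the pool.
ceph osd pool get rbd size

# With only 2 OSDs, either add a third OSD, or lower the replica
# count so the placement groups can be satisfied with 2 copies:
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
```

Adding a third OSD (as done below) is the other route, but note it only helps if CRUSH can actually place three replicas.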
Hello,
Just did it, but still no good health. can you help? thanks.
ceph@ceph:~/my-cluster$ ceph osd stat
osdmap e24: 3 osds: 3 up, 3 in
ceph@ceph:~/my-cluster$ ceph health
HEALTH_WARN 89 pgs degraded; 67 pgs incomplete; 67 pgs stuck inactive;
192 pgs stuck unclean
On 2015/10/29 Thursday
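To see which placement groups are stuck and why, something like the following is commonly used (a sketch; the exact output format varies by release, and the PG id in the last command is a placeholder):

```shell
ceph health detail           # lists each stuck/incomplete PG individually
ceph pg dump_stuck inactive  # PGs that never became active
ceph pg dump_stuck unclean   # PGs not fully replicated
ceph pg 0.1f query           # detailed state of one PG (0.1f is an example id)
```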
Please paste 'ceph osd tree'.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 6:54 PM, "Wah Peng" wrote:
> Hello,
>
> Just did it, but still no good health. can you help? thanks.
>
> ceph@ceph:~/my-cluster$ ceph osd stat
> osdmap
You need to change the CRUSH map to select osd instead of host.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 7:00 PM, "Wah Peng" wrote:
> $ ceph osd tree
> # id   weight    type name       up/down  reweight
> -1     0.24      root
$ ceph osd tree
# id   weight    type name       up/down  reweight
-1     0.24      root default
-2     0.24          host ceph2
0      0.07999           osd.0   up       1
1      0.07999           osd.1   up       1
2      0.07999           osd.2   up       1

On 2015/10/29
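Since all three OSDs above sit under the single host ceph2, the default CRUSH rule (which spreads replicas across hosts) cannot place 3 copies. The usual procedure for the change Robert describes is roughly this (a sketch; the file names are arbitrary):

```shell
# Export the binary CRUSH map and decompile it to editable text.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# In crushmap.txt, in the rule used by your pools, change:
#   step chooseleaf firstn 0 type host
# to:
#   step chooseleaf firstn 0 type osd
# so replicas may land on different OSDs of the same host.

# Recompile and inject the edited map.
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

After injecting the new map, the PGs should peer and move toward active+clean; note that keeping all replicas on one host sacrifices host-level redundancy, so this is only sensible for test clusters like this one.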