Dear experts,

Could you provide some guidance on upgrading Ceph from Firefly to Giant?
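
For context, my (possibly wrong) understanding of the usual order is monitors first, then OSDs, then MDSes and radosgw/clients, restarting each daemon after its packages are upgraded. A rough sketch of what I have in mind (assuming Debian/Ubuntu packages and the sysvinit script; exact restart commands vary by init system):

    # 1. Stop the cluster from rebalancing while daemons restart
    ceph osd set noout

    # 2. On each monitor node, then on each OSD node in turn:
    sudo apt-get update && sudo apt-get install ceph    # pull in the giant packages
    sudo /etc/init.d/ceph restart mon                   # on monitor nodes (sysvinit example)
    sudo /etc/init.d/ceph restart osd                   # on OSD nodes

    # 3. When everything is back up and healthy, re-enable data migration
    ceph osd unset noout
    ceph health

Does that look right, or is there anything Giant-specific to watch out for?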

Many thanks!

2014-10-30 15:37 GMT+07:00 Joao Eduardo Luis <[email protected]>:

> On 10/30/2014 05:54 AM, Sage Weil wrote:
>
>> On Thu, 30 Oct 2014, Nigel Williams wrote:
>>
>>> On 30/10/2014 8:56 AM, Sage Weil wrote:
>>>
>>>> * *Degraded vs misplaced*: the Ceph health reports from 'ceph -s' and
>>>>     related commands now make a distinction between data that is
>>>>     degraded (there are fewer than the desired number of copies) and
>>>>     data that is misplaced (stored in the wrong location in the
>>>>     cluster).
>>>>
>>>
>>> Is someone able to briefly describe how/why misplaced happens, please?
>>> Is it repaired eventually? I've not seen misplaced (yet).
>>>
>>
>> Sure.  An easy way to get misplaced objects is to do 'ceph osd
>> out N' on an OSD.  Nothing is down, we still have as many copies
>> as we had before, but Ceph now wants to move them somewhere
>> else. Starting with giant, you will see the misplaced % in 'ceph -s' and
>> not degraded.
>>
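For anyone wanting to reproduce this on a test cluster, a minimal sketch (osd.3 is just an example id, and marking an OSD out does trigger real data movement):

    ceph osd out 3    # mark osd.3 out; its PGs get remapped to other OSDs
    ceph -s           # with giant, the summary reports those objects as misplaced, not degraded
    ceph osd in 3     # mark it back in; the misplaced count drains as data moves back
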
>>>>       leveldb_write_buffer_size = 32*1024*1024  = 33554432  // 32MB
>>>>       leveldb_cache_size        = 512*1024*1204 = 536870912 // 512MB
>>>>
>>>
>>> I noticed the typo, wondered about the code, but I'm not seeing the same
>>> values anyway?
>>>
>>> https://github.com/ceph/ceph/blob/giant/src/common/config_opts.h
>>>
>>> OPTION(leveldb_write_buffer_size, OPT_U64, 8 *1024*1024)   // leveldb write buffer size
>>> OPTION(leveldb_cache_size, OPT_U64, 128 *1024*1024)        // leveldb cache size
>>>
>>
>> Hmm!  Not sure where that 32MB number came from.  I'll fix it, thanks!
>>
>
> Those just happen to be the values used on the monitors (in ceph_mon.cc).
> Maybe that's where the mix-up came from. :)
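
If someone does want larger leveldb buffers on the OSD side, those options can also be set explicitly in ceph.conf; a rough sketch, with values that are purely illustrative rather than a recommendation:

    [osd]
        leveldb_write_buffer_size = 33554432    ; 32 MB (default is 8 MB)
        leveldb_cache_size        = 536870912   ; 512 MB (default is 128 MB)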
>
>   -Joao
>
>
> --
> Joao Eduardo Luis
> Software Engineer | http://inktank.com | http://ceph.com
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
