Got it. Thanks!

regards,
Yasu

From: Gregory Farnum <[email protected]>
Subject: Re: Question about configuration
Date: Thu, 10 Jan 2013 16:55:00 -0800
Message-ID: <CAPYLRzhrPaRpPwJ=m5g51-kpy_jev2fn_gxaxec6odcaedx...@mail.gmail.com>

> On Thu, Jan 10, 2013 at 4:51 PM, Yasuhiro Ohara <[email protected]> wrote:
>>
>> Hi, Greg,
>>
>> When I went through the Ceph documentation, I could only find a
>> description of /etc/init.d, so that is still the easiest for me.
>> Is there documentation on the other (upstart?) system, or do I need
>> to learn it myself? Alternatively, just telling me how to install the
>> resource file (for Ceph under upstart) would work for me.
> 
> There isn't really any documentation right now, and if you started off
> with sysvinit it's probably easiest to continue that way. It will work
> with that system too; it's just that if you run "sudo service ceph -a
> start" then it's going to go and turn on all the daemons listed in its
> local ceph.conf.
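>
> For example, a rough sketch of the difference (osd.0 is just an
> illustrative id; the per-daemon form assumes the stock sysvinit script):
>
>   sudo service ceph -a start        # starts every daemon listed in the local ceph.conf, across hosts
>   sudo service ceph start osd.0     # starts only the named daemon on this host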
> -Greg
> 
>>
>> Thanks.
>>
>> regards,
>> Yasu
>>
>> From: Gregory Farnum <[email protected]>
>> Subject: Re: Question about configuration
>> Date: Thu, 10 Jan 2013 16:43:59 -0800
>> Message-ID: 
>> <capylrzhzr81ko3qgc1ogxs_12-c6z-evmuf3k47ej3j8p_t...@mail.gmail.com>
>>
>>> On Thu, Jan 10, 2013 at 4:39 PM, Yasuhiro Ohara <[email protected]> wrote:
>>>>
>>>> Hi,
>>>>
>>>> What will happen if we construct a cluster of 10 hosts, but then
>>>> remove the hosts from the cluster one by one (waiting at each step
>>>> for the Ceph status to become healthy again), until eventually only,
>>>> say, 3 hosts remain?
>>>>
>>>> In other words, is there any problem with having a 10-OSD
>>>> configuration in ceph.conf while only 3 OSDs are actually up (the
>>>> other 7 being down and out)?
>>>
>>> If you're not using the /etc/init.d ceph script to start up everything
>>> with the -a option, this will work just fine.
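>>>
>>> For instance, without -a the script only touches the daemons defined
>>> for the local host, so (as a sketch) you would start each surviving
>>> host individually with:
>>>
>>>   sudo service ceph start    # starts only this host's daemons from ceph.conf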
>>>
>>>>
>>>> I assume that if the replication size is 3, we can turn off 2 OSDs
>>>> at a time, and Ceph can recover itself to a healthy state.
>>>> Is that the case?
>>>
>>> Yeah, that should work fine. You might consider just marking OSDs
>>> "out" two at a time and not actually killing them until the cluster
>>> has become quiescent again, though — that way they can participate as
>>> a source for recovery.
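>>>
>>> Something like this, as a sketch (osd ids 7 and 8 are just examples):
>>>
>>>   ceph osd out 7
>>>   ceph osd out 8
>>>   # watch "ceph -w" (or poll "ceph health") until recovery finishes,
>>>   # then it's safe to actually stop those daemons on their host:
>>>   sudo service ceph stop osd.7
>>>   sudo service ceph stop osd.8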
>>> -Greg
