Hi Timofey,
assuming that you have more than one OSD host and that the replication
factor is less than or equal to the number of hosts, why don't you just
change the CRUSH map to do host-level replication?
You just need to change the default CRUSH map rule from
step chooseleaf firstn 0 type osd
to
step chooseleaf firstn 0 type host
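For reference, a minimal sketch of what the edited rule could look like (the rule name and numbers below are the usual defaults; adjust to your own map):

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

To apply it you can dump, edit, recompile and inject the map, e.g. (file names are placeholders):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt, changing 'type osd' to 'type host' in the rule
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new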
On 09/05/15 01:28, Gregory Farnum wrote:
On Fri, May 8, 2015 at 4:55 PM, Joao Eduardo Luis j...@suse.de wrote:
All,
While working on #11545 (mon: have mon-specific commands under 'ceph mon
...') I crashed into a slightly tough brick wall.
The purpose of #11545 is to move certain commands,
Hello Cephers
Is there any option that I can use in my ceph.conf file to instruct
ceph-radosgw to use a specific user name?
I want to run the ceph-radosgw daemon as the user 'apache', so could I specify
rgw_user=apache
in ceph.conf so that the next time I restart the ceph-radosgw daemon it runs as that user?
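For what it's worth, a sketch of the ceph.conf snippet being proposed here (the section name is an assumption based on the usual radosgw instance naming, and whether radosgw actually honours rgw_user for this purpose is exactly the question being asked):

    [client.radosgw.gateway]
        rgw_user = apache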
Hi
I tested the patch. It seems that everything is OK now. Thanks.
On Fri, May 8, 2015 at 4:34 PM Yan, Zheng uker...@gmail.com wrote:
On Fri, May 8, 2015 at 11:15 AM, Dexter Xiong dxtxi...@gmail.com wrote:
I tried echo 3 > /proc/sys/vm/drop_caches and dentry_pinned_count dropped.
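For anyone following along, the usual form of that command (run as root; 1 drops the page cache, 2 drops dentries and inodes, 3 drops both):

    sync
    echo 3 > /proc/sys/vm/drop_caches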
Hi list,
I have been experimenting with CRUSH maps, and I've tried to get RAID1-like
behaviour (if the cluster has only one working OSD node, duplicate the data
across its local disks, to avoid data loss in case of a local disk failure
and allow clients to keep working, because this is not a degraded state)
(
in the best case, I want
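A rough sketch of a CRUSH rule that places all replicas on OSDs within a single host, which seems to be the RAID1-like placement described above (the rule name and numbers are placeholders, and note this gives no protection against the host itself failing):

    rule replicate_within_host {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 1 type host
        step chooseleaf firstn 0 type osd
        step emit
    }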
On 05/09/2015 09:57 AM, Loic Dachary wrote:
Hi,
On 09/05/2015 01:55, Joao Eduardo Luis wrote:
This approach gives a lifespan of roughly 3 releases (at the current rate,
roughly 1.5 years) before being completely dropped. This should give
people enough time to realize what has happened and