Hi Marcus

Please refer to the documentation:
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map

I believe your suggestion only modifies the in-memory map and a changed 
version is never written to the outfile. You could easily test that by 
decompiling the new map and checking the clear-text version, but why not 
just do as the documentation suggests?
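
For reference, here is a rough sketch of the documented workflow (decompile, 
edit the clear-text map, recompile, inject), plus a quick way to check whether 
your one-liner actually changed the outfile. The /tmp file names simply mirror 
the ones from your mail:

# documented workflow: decompile, edit, recompile, inject
ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
# edit /tmp/crush.txt (e.g. the "tunable choose_total_tries" line), then:
crushtool -c /tmp/crush.txt -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new

# quick check of the one-liner approach: decompile both maps and compare
crushtool -d /tmp/crush -o /tmp/crush.before.txt
crushtool -d /tmp/crush.new -o /tmp/crush.after.txt
diff /tmp/crush.before.txt /tmp/crush.after.txt | grep choose_total_tries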


But you really should just set sane, usable default values (depending on your 
kernel versions and clients) and NOT create your own settings. Let the cluster 
remap after the new profile has been applied, and then change the CRUSH 
weights to correct values before you attempt any customization of tunables.
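
As a sketch only (the right profile depends entirely on your kernel and client 
versions, so "hammer" below is merely an example, and the OSD id/weight are 
placeholders):

ceph osd crush show-tunables           # see what is currently in effect
ceph osd crush tunables hammer         # example profile - pick per your clients/kernels
# let the cluster finish remapping (watch ceph -s / ceph -w), then fix weights:
ceph osd crush reweight osd.3 1.81940  # placeholder OSD id and weight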


Regards,
Jens Dueholm Christensen 
Rambøll Survey IT

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marcus 
Müller
Sent: Wednesday, January 11, 2017 2:50 PM
To: Shinobu Kinjo
Cc: Ceph Users
Subject: Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

Yes, but all I want to know is whether my way of changing the tunables is 
right or not?



> Am 11.01.2017 um 13:11 schrieb Shinobu Kinjo <ski...@redhat.com>:
> 
> Please refer to Jens's message.
> 
> Regards,
> 
>> On Wed, Jan 11, 2017 at 8:53 PM, Marcus Müller <mueller.mar...@posteo.de> 
>> wrote:
>> Ok, thank you. I thought I had to set Ceph to a tunables profile. If I’m 
>> right, then I just have to export the current CRUSH map, edit it and import 
>> it again, like:
>> 
>> ceph osd getcrushmap -o /tmp/crush
>> crushtool -i /tmp/crush --set-choose-total-tries 100 -o /tmp/crush.new
>> ceph osd setcrushmap -i /tmp/crush.new
>> 
>> Is this right or not?
>> 
>> I started this cluster with these 3 nodes, each with 3 OSDs. They are VMs. I 
>> knew that this cluster would grow very large; that’s the reason I chose Ceph. 
>> Now I can’t add more HDDs to the VM hypervisor, and I want to separate the 
>> nodes physically too. I bought a new node with these 4 drives and now another 
>> node with only 2 drives. As I hear now from several people, this was not a 
>> good idea. For this reason, I have now bought additional HDDs for the new 
>> node, so I have two nodes with the same number and size of HDDs. In the next 
>> 1-2 months I will get the third physical node, and then everything should be 
>> fine. But at this time I have no other option.
>> 
>> Would it help solve this problem if I added the 2 new HDDs to the new Ceph 
>> node?
>> 
>> 