Re: [ceph-users] Fresh install - all OSDs remain down and out

2016-03-22 Thread Markus Goldberg

Hi Desmond,
this seems like a lot of work for 90 OSDs, and there is plenty of room for
typing mistakes.

Every disk change needs extra editing too.
This weighting was done automatically in former versions.
Do you know why and where this changed, or did I make a mistake at some point?
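(A minimal sketch of how this could be scripted instead of typed by hand; the
host names bd-0/bd-1/bd-2, the round-robin OSD-to-host mapping and the flat
weight of 1.0 are assumptions, not values from this cluster:)

#!/bin/sh
# Sketch only: give every OSD a CRUSH weight and a host bucket in one pass.
# 'create-or-move' places osd.<id> under the given host with the given weight
# and should create the host bucket under the chosen root if it is missing.
for id in $(ceph osd ls); do
    host=bd-$((id % 3))    # assumption: OSDs spread round-robin over three hosts
    ceph osd crush create-or-move osd.$id 1.0 host=$host root=default
done

(If I remember right, the init scripts do essentially the same thing on OSD
start when 'osd crush update on start' is enabled, which is why the weights
used to appear automatically.)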

Markus
On 21.03.2016 at 13:28, 施柏安 wrote:

Hi Markus

You should define the "osd" devices and "host" buckets to make the ceph cluster work.
Use the types in your map (osd, host, chassis, root) to design the
crushmap according to your needs.

Example:
host node1 {
 id -1
 alg straw
 hash 0
 item osd.0 weight 1.00
 item osd.1 weight 1.00
}
host node2 {
 id -2
 alg straw
 hash 0
 item osd.2 weight 1.00
 item osd.3 weight 1.00
}
root default {
 id 0
 alg straw
 hash 0
 item node1 weight 2.00   # sum of its items
 item node2 weight 2.00
}
Then you can use the default ruleset. It is set to take the root "default".

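(The same layout can usually be built without decompiling anything, using the
ceph CLI directly; the bucket names node1/node2 below are just the ones from
the example above. add-bucket creates the host buckets, move hangs them under
the default root, and set gives each OSD a weight under its host; the host
weights are summed automatically:)

# ceph osd crush add-bucket node1 host
# ceph osd crush add-bucket node2 host
# ceph osd crush move node1 root=default
# ceph osd crush move node2 root=default
# ceph osd crush set osd.0 1.00 host=node1
# ceph osd crush set osd.1 1.00 host=node1
# ceph osd crush set osd.2 1.00 host=node2
# ceph osd crush set osd.3 1.00 host=node2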

2016-03-21 19:50 GMT+08:00 Markus Goldberg:


Hi desmond,
this is my decompile_map:
root@bd-a:/etc/ceph# cat decompile_map
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
root default {
id -1   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
}

# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map
root@bd-a:/etc/ceph#

How should I change it?
I never had to edit anything in this area in former versions of
ceph. Has something changed?
Is any new parameter necessary in ceph.conf while installing?

Thank you,
  Markus

On 21.03.2016 at 10:34, 施柏安 wrote:

It seems that no weight is set for any of your OSDs, so the
PGs are stuck in creating.
You can use the following commands to edit the crushmap and set the weights:

# ceph osd getcrushmap -o map
# crushtool -d map -o decompile_map
# vim decompile_map (then you can set the weight for each of your
OSDs and their host buckets)
# crushtool -c decompile_map -o changed_map
# ceph osd setcrushmap -i changed_map

Then, it should work in your situation.
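(Before injecting the edited map it can also be checked offline; this is just
a generic crushtool sanity test, the rule number 0 and the replica count 2
mirror the ruleset and pool size quoted further down in this thread:)

# crushtool -i changed_map --test --rule 0 --num-rep 2 --show-mappings --min-x 0 --max-x 9

This maps ten sample inputs through the rule; each line should list OSDs, and
none should come back empty once the weights are set.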


2016-03-21 17:20 GMT+08:00 Markus Goldberg:

Hi,
root@bd-a:~# ceph osd tree
ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1      0 root default
 0      0     osd.0       down        0          1.0
 1      0     osd.1       down        0          1.0
 2      0     osd.2       down        0          1.0
... deleted all the other OSDs as they look the same ...
88      0     osd.88      down        0          1.0
89      0     osd.89      down        0          1.0
root@bd-a:~#

bye,
  Markus

On 21.03.2016 at 10:10, 施柏安 wrote:

What does your crushmap show? Or what does the command 'ceph osd tree' show?
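(Either of these shows the current CRUSH layout; 'ceph osd crush dump' prints
the same map as JSON without any decompiling:)

# ceph osd tree
# ceph osd crush dump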

2016-03-21 16:39 GMT+08:00 Markus Goldberg:

Hi,
I have upgraded my hardware and installed ceph completely
from scratch as described in
http://docs.ceph.com/docs/master/rados/deployment/
The last job was creating the OSDs:
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
I used the create command; after that the OSDs
should be in and up, but they are all down and out.
An additional osd activate command does not help.

Ubuntu 14.04.4 kernel 4.2.1
ceph 10.0.2

What should I do, where is my mistake?

This is ceph.conf:

[global]
fsid = 122e929a-111b-4067-80e4-3fef39e66ecf
mon_initial_members = bd-0, bd-1, bd-2
mon_host = xxx.xxx.xxx.20,xxx.xxx.xxx.21,xxx.xxx.xxx.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = xxx.xxx.xxx.0/24
cluster network = 192.168.1.0/24 
osd_journal_size = 10240
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333

Re: [ceph-users] Fresh install - all OSDs remain down and out

2016-03-21 Thread 施柏安
Hi Markus

You should define the "osd" devices and "host" buckets to make the ceph cluster work.
Use the types in your map (osd, host, chassis, root) to design the
crushmap according to your needs.
Example:

host node1 {
id -1
alg straw
hash 0
item osd.0 weight 1.00
item osd.1 weight 1.00
}
host node2 {
id -2
alg straw
hash 0
item osd.2 weight 1.00
item osd.3 weight 1.00
}
root default {
id 0
alg straw
hash 0
item node1 weight 2.00   # sum of its items
item node2 weight 2.00
}

Then you can use the default ruleset. It is set to take the root "default".


2016-03-21 19:50 GMT+08:00 Markus Goldberg:

> Hi desmond,
> this is my decompile_map:
> root@bd-a:/etc/ceph# cat decompile_map
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
> tunable straw_calc_version 1
>
> # devices
>
> # types
> type 0 osd
> type 1 host
> type 2 chassis
> type 3 rack
> type 4 row
> type 5 pdu
> type 6 pod
> type 7 room
> type 8 datacenter
> type 9 region
> type 10 root
>
> # buckets
> root default {
> id -1   # do not change unnecessarily
> # weight 0.000
> alg straw
> hash 0  # rjenkins1
> }
>
> # rules
> rule replicated_ruleset {
> ruleset 0
> type replicated
> min_size 1
> max_size 10
> step take default
> step chooseleaf firstn 0 type host
> step emit
> }
>
> # end crush map
> root@bd-a:/etc/ceph#
>
> How should I change it?
> I never had to edit anything in this area in former versions of ceph. Has
> something changed?
> Is any new parameter necessary in ceph.conf while installing?
>
> Thank you,
>   Markus
>
> On 21.03.2016 at 10:34, 施柏安 wrote:
>
> It seems that no weight is set for any of your OSDs, so the PGs are stuck
> in creating.
> You can use the following commands to edit the crushmap and set the weights:
>
> # ceph osd getcrushmap -o map
> # crushtool -d map -o decompile_map
> # vim decompile_map (then you can set the weight for each of your OSDs and
> their host buckets)
> # crushtool -c decompile_map -o changed_map
> # ceph osd setcrushmap -i changed_map
>
> Then, it should work in your situation.
>
>
> 2016-03-21 17:20 GMT+08:00 Markus Goldberg:
>
>> Hi,
>> root@bd-a:~# ceph osd tree
>> ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
>> -1      0 root default
>>  0      0     osd.0       down        0          1.0
>>  1      0     osd.1       down        0          1.0
>>  2      0     osd.2       down        0          1.0
>> ... deleted all the other OSDs as they look the same ...
>> 88      0     osd.88      down        0          1.0
>> 89      0     osd.89      down        0          1.0
>> root@bd-a:~#
>>
>> bye,
>>   Markus
>>
>> On 21.03.2016 at 10:10, 施柏安 wrote:
>>
>> What does your crushmap show? Or what does the command 'ceph osd tree' show?
>>
>> 2016-03-21 16:39 GMT+08:00 Markus Goldberg <goldb...@uni-hildesheim.de>:
>>
>>> Hi,
>>> I have upgraded my hardware and installed ceph completely from scratch as
>>> described in
>>> http://docs.ceph.com/docs/master/rados/deployment/
>>> The last job was creating the OSDs
>>> 
>>> http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
>>> I used the create command; after that the OSDs should be in and
>>> up, but they are all down and out.
>>> An additional osd activate command does not help.
>>>
>>> Ubuntu 14.04.4 kernel 4.2.1
>>> ceph 10.0.2
>>>
>>> What should I do, where is my mistake?
>>>
>>> This is ceph.conf:
>>>
>>> [global]
>>> fsid = 122e929a-111b-4067-80e4-3fef39e66ecf
>>> mon_initial_members = bd-0, bd-1, bd-2
>>> mon_host = xxx.xxx.xxx.20,xxx.xxx.xxx.21,xxx.xxx.xxx.22
>>> auth_cluster_required = cephx
>>> auth_service_required = cephx
>>> auth_client_required = cephx
>>> public network = xxx.xxx.xxx.0/24
>>> cluster network = 192.168.1.0/24
>>> osd_journal_size = 10240
>>> osd pool default size = 2
>>> osd pool default min size = 1
>>> osd pool default pg num = 333
>>> osd pool default pgp num = 333
>>> osd crush chooseleaf type = 1
>>> osd_mkfs_type = btrfs
>>> osd_mkfs_options_btrfs = -f -n 32k -l 32k
>>> osd_mount_options_btrfs = rw,noatime,nodiratime,autodefrag
>>> mds_max_file_size = 50
>>>
>>>
>>> This is the log of the last osd:
>>> ##
>>> bd-2:/dev/sdaf:/dev/sdaf2
>>> ceph-deploy disk zap bd-2:/dev/sdaf
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /root/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy osd
>>> create --fs-type btrfs bd-2:/dev/sdaf:/dev/sdaf2
>>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>>> [ceph_deploy.cli][INFO  

[ceph-users] Fresh install - all OSDs remain down and out

2016-03-21 Thread Markus Goldberg

Hi,
I have upgraded my hardware and installed ceph completely from scratch as
described in http://docs.ceph.com/docs/master/rados/deployment/
The last job was creating the OSDs:
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
I used the create command; after that the OSDs should be in and
up, but they are all down and out.

An additional osd activate command does not help.
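(A couple of things that could be checked at this point; the OSD id 0 and the
log path below are assumptions based on the default layout, not output from
this cluster:)

# ceph-disk list (on the OSD host: shows how each disk/partition was prepared)
# tail -n 50 /var/log/ceph/ceph-osd.0.log (mount, journal or auth errors usually show up here)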

Ubuntu 14.04.4 kernel 4.2.1
ceph 10.0.2

What should I do, where is my mistake?

This is ceph.conf:

[global]
fsid = 122e929a-111b-4067-80e4-3fef39e66ecf
mon_initial_members = bd-0, bd-1, bd-2
mon_host = xxx.xxx.xxx.20,xxx.xxx.xxx.21,xxx.xxx.xxx.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = xxx.xxx.xxx.0/24
cluster network = 192.168.1.0/24
osd_journal_size = 10240
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
osd_mkfs_type = btrfs
osd_mkfs_options_btrfs = -f -n 32k -l 32k
osd_mount_options_btrfs = rw,noatime,nodiratime,autodefrag
mds_max_file_size = 50
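(An aside on the pg num values above: the usual rule of thumb from the Ceph
docs is pg_num per pool ≈ (number of OSDs × 100) / replica count, rounded to a
power of two, so with 90 OSDs and size 2 that is roughly 4500, i.e. 4096
rather than 333. The pool name rbd below is just the default pool of that
release, adjust as needed:)

# ceph osd pool set rbd pg_num 4096
# ceph osd pool set rbd pgp_num 4096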


This is the log of the last osd:
##
bd-2:/dev/sdaf:/dev/sdaf2
ceph-deploy disk zap bd-2:/dev/sdaf
[ceph_deploy.conf][DEBUG ] found configuration file at: 
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy osd 
create --fs-type btrfs bd-2:/dev/sdaf:/dev/sdaf2

[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  disk  : [('bd-2', 
'/dev/sdaf', '/dev/sdaf2')]

[ceph_deploy.cli][INFO  ]  dmcrypt   : False
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   : 
/etc/ceph/dmcrypt-keys

[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   : 


[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  fs_type   : btrfs
[ceph_deploy.cli][INFO  ]  func  : at 0x7f944e16b500>

[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  zap_disk  : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
bd-2:/dev/sdaf:/dev/sdaf2

[bd-2][DEBUG ] connected to host: bd-2
[bd-2][DEBUG ] detect platform information from remote host
[bd-2][DEBUG ] detect machine type
[bd-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to bd-2
[bd-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host bd-2 disk /dev/sdaf journal 
/dev/sdaf2 activate True
[bd-2][INFO  ] Running command: ceph-disk -v prepare --cluster ceph 
--fs-type btrfs -- /dev/sdaf /dev/sdaf2
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --cluster ceph
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --cluster ceph
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --cluster ceph
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is 
/sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is 
/sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is 
/sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf2 uuid path is 
/sys/dev/block/65:242/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf2 uuid path is 
/sys/dev/block/65:242/dm/uuid
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=fsid
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mkfs_options_btrfs
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=osd_journal_size
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is 
/sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf2 uuid path is