Hello

I just added more nodes to a Quincy cluster (the existing drives are SAS). The new nodes have SATA drives.

The problem: after creating a new replicated CRUSH rule and applying it to a new pool, the cluster is still only using OSDs with the "hdd" class and never the SATA drives. I have tried a number of CRUSH rules, both replicated and EC, and I cannot get the pools to use the sata OSDs.
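For reference, this is roughly how I applied the rule to the pool (the pool name here is a placeholder):

```shell
# Point the pool at the new CRUSH rule ("rbdsata" is a placeholder pool name)
ceph osd pool set rbdsata crush_rule rbdsatareplicated

# Confirm which rule the pool is using
ceph osd pool get rbdsata crush_rule
```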

Here is the crush rule for sata:
{
    "rule_id": 9,
    "rule_name": "rbdsatareplicated",
    "type": 1,
    "steps": [
        {
            "op": "take",
            "item": -33,
            "item_name": "default~sata"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

I see "default~sata" in the crush rule. I am wondering if this "default~" prefix is causing my problems?
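For reference, the rule was created with the device-class form of create-replicated, and the per-class shadow trees (which is where names like default~sata come from) can be listed with --show-shadow:

```shell
# Create a replicated rule restricted to the "sata" device class
ceph osd crush rule create-replicated rbdsatareplicated default host sata

# Show the CRUSH tree including the per-class shadow buckets
# (default~sata, default~hdd, etc.)
ceph osd crush tree --show-shadow
```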

Just another note. After I added the new SATA nodes, my first crush rule used EC instead of replicated. I was getting errors when I tried to create an image on the EC pool: "qemu-img error rbd create: Operation not supported on new pool". So I tried a test pool using replicated instead.
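From what I read in the docs, an RBD image cannot live entirely on an EC pool: the image metadata needs a replicated pool, with the EC pool passed as the data pool and overwrites enabled on it. Something like this (pool names are placeholders):

```shell
# EC pools need overwrites enabled before RBD can use them for data
ceph osd pool set rbd-ec-data allow_ec_overwrites true

# Metadata goes to the replicated pool, data to the EC pool
rbd create --size 100G --data-pool rbd-ec-data rbd-meta/testimage
```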

Currently on the cluster I can create any number of pools and images, replicated or EC, as long as they use the SAS drives, which have a class of "hdd".

Hope this makes sense.

Here are some additional details:

My existing nodes use SAS drives and are labelled:

ceph osd tree

ID   CLASS  WEIGHT      TYPE NAME         STATUS  REWEIGHT  PRI-AFF
 -1         1122.64075  root default
 -3           34.03793      host node-01
  0    hdd     2.26920          osd.0         up   1.00000  1.00000
  1    hdd     2.26920          osd.1         up   1.00000  1.00000
  2    hdd     2.26920          osd.2         up   1.00000  1.00000
..etc

After adding the SATA nodes I have:

ID   CLASS  WEIGHT      TYPE NAME         STATUS  REWEIGHT  PRI-AFF
 -1         1122.64075  root default
 -3           34.03793      host node-01
  0    hdd     2.26920          osd.0         up   1.00000  1.00000
  1    hdd     2.26920          osd.1         up   1.00000  1.00000
  2    hdd     2.26920          osd.2         up   1.00000  1.00000
...
-34          133.78070      host node-20
141   sata    11.14839          osd.141       up   1.00000  1.00000
142   sata    11.14839          osd.142       up   1.00000  1.00000
143   sata    11.14839          osd.143       up   1.00000  1.00000
144   sata    11.14839          osd.144       up   1.00000  1.00000
145   sata    11.14839          osd.145       up   1.00000  1.00000
146   sata    11.14839          osd.146       up   1.00000  1.00000

...


Thanks
jerry
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
