Hi all,

For scalability reasons, we would like to name the first hard disk on the
first server "00101".

Similarly, the first hard disk on the second server would be named "00201".
Our ceph.conf looks like this:

[osd]
osd data = /srv/osd.$id
osd journal = /srv/osd.$id.journal
osd journal size = 1000

[osd.00101]
host = server-001
btrfs dev = /dev/sda

[osd.00102]
host = server-001
btrfs dev = /dev/sdb

[osd.00103]
host = server-001
btrfs dev = /dev/sdc

[osd.00201]
host = server-002
btrfs dev = /dev/sda

[osd.00202]
host = server-002
btrfs dev = /dev/sdb

[osd.00203]
host = server-002
btrfs dev = /dev/sdc

[osd.00301]
host = server-003
btrfs dev = /dev/sda

[osd.00302]
host = server-003
btrfs dev = /dev/sdb

[osd.00303]
host = server-003
btrfs dev = /dev/sdc
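
In case the numbering rule is clearer as code, here is a rough Python sketch
(purely illustrative, not something we actually run; the device list and the
helper name are assumptions) that would generate the sections above: each OSD
id is server number * 100 + disk number, zero-padded to five digits.

# Illustrative only: generate the [osd.NNNNN] sections above from the
# "server number * 100 + disk number" scheme. Disk layout is assumed.
DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

def osd_sections(num_servers, disks=DISKS):
    lines = []
    for server in range(1, num_servers + 1):
        for disk_index, dev in enumerate(disks, start=1):
            osd_id = server * 100 + disk_index       # e.g. server 2, disk 1 -> 201
            lines.append("[osd.%05d]" % osd_id)      # zero-padded: [osd.00201]
            lines.append("host = server-%03d" % server)
            lines.append("btrfs dev = %s" % dev)
            lines.append("")
    return "\n".join(lines)

print(osd_sections(3))   # prints the nine sections shown above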

But we are worried about whether this is an acceptable configuration for Ceph.

As far as I can see, max_osd is now 304, although there are only 9 OSDs in
the cluster.
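
Just to show where that number comes from (my assumption being that max_osd
ends up one more than the highest OSD id, here osd.00303):

# Illustration only: the 9 ids from the conf above and the max_osd they imply,
# assuming max_osd = highest id + 1.
osd_ids = [server * 100 + disk for server in (1, 2, 3) for disk in (1, 2, 3)]
print(len(osd_ids))      # 9   -- OSDs actually in the cluster
print(max(osd_ids) + 1)  # 304 -- implied max_osd
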
Will this configuration affect performance?
And what happens if we add osd.00204 in the future?

Thanks!

