> On 30 June 2017 at 13:35, Малков Петр Викторович <[email protected]> wrote:
> 
> 
> v12.1.0 Luminous RC released
> BlueStore:
> The new BlueStore backend for ceph-osd is now stable and the new
> default for newly created OSDs.
> 
> [global]
> fsid = a737f8ad-b959-4d44-ada7-2ed6a2b8802b
> mon_initial_members = ceph01, ceph02, ceph03
> mon_host = 192.168.148.189,192.168.148.5,192.168.148.43
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> osd pool default size = 3
> osd pool default min_size = 2
> osd pool default pg num = 64
> osd pool default pgp num = 64
> public network = 192.168.0.0/16
> log_to_stderr = false
> 
> [client]
> rgw frontends = civetweb port=80
> 
> 
> With this config, OSDs are created with XFS (FileStore) by default instead of BlueStore,
> even when I add:
> 
> enable experimental unrecoverable data corrupting features = bluestore rocksdb memdb
> [osd]
> bluestore = true
> osd objectstore = bluestore
> bluestore fsck on mount = true
> bluestore default buffered read = true
> 
> 
> What should be written now in place of the "experimental..." flag and the other parameters?

Nothing should be needed. I've deployed a cluster on Ubuntu 16.04 and all the 
OSDs use BlueStore.
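
For example, something like this should show which objectstore each OSD is 
running (OSD id 0 is just an example id; the osd_objectstore field should 
read "bluestore"):

    $ ceph osd metadata 0 | grep osd_objectstore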

Can you double-check that the ceph-disk you have installed isn't an old 
version, for example? It's ceph-disk that creates the BlueStore OSDs for you.
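
Something along these lines should help verify that (I'm assuming 
Ubuntu/Debian packaging as in your setup; /dev/sdb is only a placeholder for 
one of your disks):

    # check which Ceph version the installed packages provide
    $ ceph --version
    $ dpkg -l | grep ceph-osd

    # on 12.1.0 ceph-disk should default to BlueStore, but you can also
    # request it explicitly when preparing the OSD
    $ ceph-disk prepare --bluestore /dev/sdb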

Wido

> 
> --
> Peter Malkov
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
