[ceph-users] RadosGW problems on Ubuntu

2015-08-14 Thread Stolte, Felix
Hello everyone, we are currently testing Ceph (Hammer) and OpenStack (Kilo) on Ubuntu 14.04 LTS servers. Yesterday I tried to set up the RADOS Gateway with Keystone integration for Swift via ceph-deploy. I followed the instructions on http://ceph.com/docs/master/radosgw/keystone/ and
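For reference, Keystone integration in the Hammer-era gateway is driven by a handful of ceph.conf options; a minimal sketch, in which the section name, Keystone host, and admin token are placeholders rather than values from this thread:

```ini
[client.radosgw.gateway]
; Keystone admin endpoint and shared admin token (placeholders):
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = ADMIN_TOKEN
; Keystone roles allowed to use the gateway:
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 500
rgw keystone revocation interval = 600
; NSS db directory, needed only if Keystone issues PKI tokens:
nss db path = /var/lib/ceph/nss
```

After changing these options the gateway process has to be restarted for them to take effect.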

[ceph-users] How repair 2 invalids pgs

2015-08-14 Thread Pierre BLONDEAU
Hi, Yesterday I removed 5 OSDs out of 15 from my cluster (machine migration). When I stopped the processes, I had not verified that all the PGs were in an active state. I removed the 5 OSDs from the cluster (ceph osd out osd.9 ; ceph osd crush rm osd.9 ; ceph auth del osd.9 ; ceph osd rm osd.9),
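The safer sequence is to mark each OSD out first and wait until the cluster is healthy again before stopping and removing it. A command sketch against a running cluster (the OSD id is a placeholder and this is untested here, not a verified procedure):

```
# Mark the OSD out and let data rebalance off it first.
ceph osd out osd.9

# Wait for the cluster to return to HEALTH_OK (all PGs active+clean)
# before touching the next OSD.
while ! ceph health | grep -q HEALTH_OK; do
    sleep 10
done

# Only then stop the daemon and remove the OSD from the cluster.
stop ceph-osd id=9          # Upstart syntax, as on Ubuntu 14.04
ceph osd crush rm osd.9
ceph auth del osd.9
ceph osd rm osd.9
```

Removing OSDs one at a time with this health check between them avoids taking both remaining copies of a PG offline at once.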

Re: [ceph-users] Cache tier best practices

2015-08-14 Thread Vickey Singh
Thank you guys, this answers my query. Cheers, Vickey On Thu, Aug 13, 2015 at 8:02 PM, Bill Sanders billysand...@gmail.com wrote: I think you're looking for this: http://ceph.com/docs/master/man/8/rbd/#cmdoption-rbd--order It's used when you create the RBD images. 1MB is order=20, 512 is
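The order is simply log2 of the object size in bytes (object size = 2^order), which is why 1 MB objects correspond to order=20. A quick arithmetic check in shell:

```shell
# order = log2(object size in bytes); RBD object sizes are powers of two.
size_bytes=$((1 * 1024 * 1024))   # 1 MB objects
order=0
s=$size_bytes
while [ "$s" -gt 1 ]; do
  s=$((s / 2))
  order=$((order + 1))
done
echo "order=$order"   # prints order=20
```

An image with 1 MB objects would then be created with the `--order 20` flag of `rbd create`, as described in the man page linked above.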

[ceph-users] Re: CEPH cache layer. Very slow

2015-08-14 Thread Межов Игорь Александрович
Hi! Of course, it isn't cheap at all, but we use Intel DC S3700 200GB for Ceph journals and DC S3700 400GB in the SSD pool: same hosts, separate root in the crushmap. The SSD pool is not yet in production; the journalling SSDs have been under production load for 10 months. They're in good condition - no
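Separating an SSD pool from HDDs on the same hosts is typically done with a second CRUSH root and a rule that draws only from it. A minimal sketch for a decompiled crushmap, where the bucket names, ids, and weights are made up for illustration, not taken from this thread:

```
# Extra root holding only the SSD OSDs of each host.
root ssd {
    id -10
    alg straw
    hash 0
    item node1-ssd weight 1.000
    item node2-ssd weight 1.000
}

# Rule for the SSD pool: place replicas on distinct hosts under root ssd.
rule ssd_pool {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
```

A pool is then pointed at the rule with `ceph osd pool set <pool> crush_ruleset 1`, so its data lands only on the SSD subtree while HDD pools keep using the default root.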

Re: [ceph-users] CEPH cache layer. Very slow

2015-08-14 Thread Ben Hines
Nice to hear that you have had no SSD failures yet in 10 months. How many OSDs are you running, and what is your primary Ceph workload? (RBD, RGW, etc.?) -Ben On Fri, Aug 14, 2015 at 2:23 AM, Межов Игорь Александрович me...@yuterra.ru wrote: Hi! Of course, it isn't cheap at all, but we use Intel

Re: [ceph-users] CEPH cache layer. Very slow

2015-08-14 Thread Voloshanenko Igor
72 OSDs: 60 HDD, 12 SSD. Primary workload: RBD, KVM. On Friday, 14 August 2015, Ben Hines wrote: Nice to hear that you have no SSD failures yet in 10 months. How many OSDs are you running, and what is your primary Ceph workload? (RBD, RGW, etc.?) -Ben On Fri, Aug 14, 2015 at

[ceph-users] OSDs' weird status. Cannot be removed anymore.

2015-08-14 Thread Marcin Przyczyna
Hello, this is my first posting to the ceph-users mailing list, and because I am also new to this technology, please be patient with me. A description of the problem I am stuck on follows: 3 monitors are up and running; one of them is the leader, the other two are peons. There is no authentication between the nodes

Re: [ceph-users] OSDs' weird status. Cannot be removed anymore.

2015-08-14 Thread Wido den Hollander
On 14-08-15 14:30, Marcin Przyczyna wrote: Hello, this is my first posting to the ceph-users mailing list, and because I am also new to this technology, please be patient with me. A description of the problem I am stuck on follows: 3 monitors are up and running; one of them is the leader, the other two are