[ceph-users] Can not disable rbd cache

2016-02-24 Thread wikison
Hi, I want to disable the rbd cache in my Ceph cluster. I've set rbd cache to false in the [client] section of ceph.conf and rebooted the cluster, but the caching system was still working. How can I disable the rbd caching system? Any help? Best regards. 2016-02-24 Esta Wang
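
For reference, a minimal ceph.conf sketch of the setting described above; the plain [client] section is an assumption, and a per-client section (e.g. [client.admin]) may be needed depending on the setup:

    [client]
        rbd cache = false
        rbd cache writethrough until flush = false

librbd reads this when an image is opened, so already-running clients (e.g. QEMU VMs) have to be restarted, and a QEMU drive cache= setting can still override the ceph.conf value.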

[ceph-users] pgs active & remapped

2015-10-19 Thread wikison
Hi, I ran into a strange problem I've never seen before, like this: esta@storageOne:~$ sudo ceph -s [sudo] password for esta: cluster 0b9b05db-98fe-49e6-b12b-1cce0645c015 health HEALTH_WARN 512 pgs stuck unclean recovery 1440/2160 objects degraded (66.667%) r
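
A hedged diagnostic sketch for output like this; 1440/2160 degraded is exactly two of every three copies missing, which often means CRUSH cannot place all replicas (pool size larger than the number of usable failure domains). The pool name rbd below is an assumption:

    sudo ceph osd tree                  # how many hosts/OSDs are actually up and in
    sudo ceph osd pool get rbd size     # replica count of the pool
    sudo ceph osd pool set rbd size 2   # example: lower replicas to match two failure domains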

[ceph-users] qemu-img error connecting

2015-10-16 Thread wikison
After I set up the Ceph cluster, I tried to create a block device image from QEMU, but I got this: $ qemu-img create -f raw rbd:rbd/test 20G Formatting 'rbd:rbd/test', fmt=raw size=21474836480 qemu-img: rbd:rbd/test: error connecting There is a pool named rbd, and the output of ceph -s is:
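
An "error connecting" from qemu-img usually means librbd inside QEMU cannot reach the monitors or authenticate. A hedged sketch of how one might narrow it down; the id=admin and conf path are assumptions:

    rbd -p rbd ls                                             # does the rbd CLI itself connect?
    qemu-img create -f raw \
        "rbd:rbd/test:id=admin:conf=/etc/ceph/ceph.conf" 20G  # pass an explicit id and conf to QEMU

If the rbd CLI works but qemu-img does not, the QEMU build may simply lack rbd support.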

Re: [ceph-users] pgs stuck inactive and unclean, too few PGs per OSD

2015-10-07 Thread wikison
store data. One is a 120 GB SSD, and the other is a 1 TB HDD. I set the weight of the SSD to 0.1 and the weight of the HDD to 1.0. -- Zhen Wang Shanghai Jiao Tong University At 2015-10-08 11:32:52, "Christian Balzer" wrote: > >Hello, > >On Thu, 8 Oct 2015 11:27:46 +0800 (CST)
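
A sketch of how such weights are typically set, assuming osd.0 is the SSD and osd.1 is the HDD (the IDs are assumptions); by convention the CRUSH weight is roughly the device size in TB:

    sudo ceph osd crush reweight osd.0 0.1   # 120 GB SSD
    sudo ceph osd crush reweight osd.1 1.0   # 1 TB HDD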

Re: [ceph-users] pgs stuck inactive and unclean, too few PGs per OSD

2015-10-07 Thread wikison
13:05:51, "Christian Balzer" wrote: > >Hello, >On Wed, 7 Oct 2015 12:57:58 +0800 (CST) wikison wrote: > >This is a very old bug, misfeature. >And creeps up every week or so here, google is your friend. > >> Hi, >> I have a cluster of one monitor and eight

[ceph-users] pgs stuck inactive and unclean, too few PGs per OSD

2015-10-06 Thread wikison
Hi, I have a cluster of one monitor and eight OSDs. These OSDs are running on four hosts (each host has two OSDs). When I set up everything and started Ceph, I got this: esta@monitorOne:~$ sudo ceph -s [sudo] password for esta: cluster 0b9b05db-98fe-49e6-b12b-1cce0645c015 health HEALTH_W
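
For the "too few PGs per OSD" warning, the usual remedy is raising pg_num on the pool. A rough sketch, assuming the default rbd pool, 8 OSDs and 3 replicas (8 * 100 / 3 ≈ 267, rounded to a power of two):

    sudo ceph osd pool set rbd pg_num 256
    sudo ceph osd pool set rbd pgp_num 256

In this era of Ceph, pg_num can only be increased, not decreased, so the target value is worth checking before applying it.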

[ceph-users] Different OSD capacity & what is the weight of an item

2015-09-23 Thread wikison
Hi, I have four storage machines to build a Ceph storage cluster as storage nodes. Each of them has a 120 GB HDD and a 1 TB HDD attached. Is it OK to treat those storage devices as the same when writing a ceph.conf? For example, when setting osd pool default pg num, I thought: os
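
A hedged ceph.conf sketch for a small four-host cluster like this; the numbers are assumptions, and the capacity difference between the drives is normally handled by per-OSD CRUSH weights (roughly the size in TB) rather than by treating the devices as identical:

    [global]
        osd pool default pg num  = 256
        osd pool default pgp num = 256
        osd pool default size    = 2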

[ceph-users] ceph-disk prepare error

2015-09-22 Thread wikison
Hi, everybody. When I used the ceph-disk utility to prepare the OSD, I executed the following commands: sudo ceph-disk prepare --cluster ceph --cluster-uuid aae7140c-7ee2-49f2-b8de-4c74f6f8651a --fs-type xfs /dev/sdb1 but I got: 2015-09-22 16:55:59.683674 7f2f3c97e7c0 -1 did not load config fi
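
The truncated error here looks like the usual "did not load config file" warning, which typically means the tools cannot find the cluster configuration. A hedged sketch, assuming the config simply is not in /etc/ceph yet (the source path is a placeholder):

    sudo mkdir -p /etc/ceph
    sudo cp /path/to/ceph.conf /etc/ceph/ceph.conf
    sudo ceph-disk prepare --cluster ceph \
        --cluster-uuid aae7140c-7ee2-49f2-b8de-4c74f6f8651a \
        --fs-type xfs /dev/sdb1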

[ceph-users] help! failed to start ceph-mon daemon

2015-09-20 Thread wikison
OS: Ubuntu 14.04. I built the Ceph source code and installed it with the following commands: apt-get install a series of dependency packages ./autogen.sh ./configure make make install All these steps went well. When I type: which ceph the console shows: /usr/local/bin/ceph So I
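
A minimal sketch of bootstrapping and starting a single monitor by hand, following the manual-deployment outline; the host name monitorOne is taken from an earlier post, while the IP address and paths are assumptions, and the client.admin keyring steps are omitted for brevity:

    ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'
    monmaptool --create --add monitorOne 192.168.1.10 \
        --fsid $(uuidgen) /tmp/monmap          # fsid should match the one in ceph.conf
    sudo mkdir -p /var/lib/ceph/mon/ceph-monitorOne
    sudo ceph-mon --mkfs -i monitorOne \
        --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    sudo ceph-mon -i monitorOne -d             # run in the foreground with logging to see why it fails

With a source build installed under /usr/local, it is also worth checking that ceph.conf is where the binaries expect it (typically /etc/ceph, or the path passed with --conf).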

[ceph-users] help! Ceph Manual Deployment

2015-09-17 Thread wikison
Is there any detailed manual deployment document? I downloaded the source and built Ceph, then installed Ceph on 7 computers. I used three as monitors and four as OSD hosts. I followed the official document on ceph.com, but it didn't work and it seemed to be outdated. Could anybody help me? --
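
A hedged sketch of the OSD half of a manual deployment, adding one OSD by hand; the device, the weight, and the host name storageOne (taken from an earlier post) are assumptions:

    UUID=$(uuidgen)
    OSD_ID=$(sudo ceph osd create $UUID)
    sudo mkfs.xfs /dev/sdb1
    sudo mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
    sudo mount /dev/sdb1 /var/lib/ceph/osd/ceph-$OSD_ID
    sudo ceph-osd -i $OSD_ID --mkfs --mkkey --osd-uuid $UUID
    sudo ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
    sudo ceph osd crush add-bucket storageOne host   # only if the host bucket does not exist yet
    sudo ceph osd crush move storageOne root=default
    sudo ceph osd crush add osd.$OSD_ID 1.0 host=storageOne
    sudo ceph-osd -i $OSD_ID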