Re: [ceph-users] Ceph osd is all up and in, but every pg is incomplete

2015-03-30 Thread Yueliang
I think there is no other way. :) -- Yueliang Sent with Airmail On March 30, 2015 at 13:17:55, Kai KH Huang (huangk...@lenovo.com) wrote: Thanks for the quick response, and it seems to work! But what I expect to have is (replica number = 3) on two servers (1 host will store 2 copies, and the
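
For reference, keeping three replicas on only two hosts generally means relaxing the default CRUSH rule so that replicas may share a host. A minimal sketch, not from this thread (file names are placeholders):

    ceph osd getcrushmap -o crushmap.bin        # export the current CRUSH map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile it for editing
    # in the replicated rule, change
    #     step chooseleaf firstn 0 type host
    # to
    #     step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap.new   # recompile
    ceph osd setcrushmap -i crushmap.new        # inject the edited map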

Re: [ceph-users] Ceph osd is all up and in, but every pg is incomplete

2015-03-30 Thread Kai KH Huang
Another strange thing is that the last few (24) PGs never seem to become ready and stay stuck at creating (after 6 hours of waiting): [root@serverA ~]# ceph -s 2015-03-30 17:14:48.720396 7feb5bd7a700 0 -- :/1000964 10.???.78:6789/0 pipe(0x7feb60026120 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7feb600263b0).fault
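
For narrowing down PGs stuck in creating, the usual first checks look roughly like this (a sketch; the PG id is a placeholder):

    ceph pg dump_stuck inactive    # list PGs stuck inactive/creating
    ceph pg 2.3f query             # replace 2.3f with one of the stuck PG ids
    ceph osd tree                  # confirm the PGs can map to up/in OSDs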

[ceph-users] How to test rbd's Copy-on-Read Feature

2015-03-30 Thread Tanay Ganguly
Hello all, I went through the link below and saw that Copy-on-Read is currently supported only in librbd and not in the rbd kernel module. https://wiki.ceph.com/Planning/Blueprints/Infernalis/rbd%3A_kernel_rbd_client_supports_copy-on-read Can someone please let me know how to test Copy-on-Read
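
One plausible way to exercise copy-on-read through librbd is to enable the client-side option and then read from a clone (a sketch; pool and image names are made up, and the option name assumes a Giant/Hammer-era librbd):

    # ceph.conf on the client
    [client]
        rbd clone copy on read = true

    # build a clone to read through
    rbd snap create rbd/parent@snap1
    rbd snap protect rbd/parent@snap1
    rbd clone rbd/parent@snap1 rbd/child
    # reads of rbd/child via librbd (e.g. from a QEMU guest) should then pull
    # the touched objects from the parent into the clone in the background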

Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes

2015-03-30 Thread Gurvinder Singh
On 03/30/2015 01:29 PM, Mark Nelson wrote: This is definitely something that we've discussed, though I don't think anyone has really planned out what a complete solution would look like, including processor affinity, etc. Before I joined Inktank I worked at a supercomputing institute, and one

Re: [ceph-users] Radosgw authorization failed

2015-03-30 Thread Neville
Date: Wed, 25 Mar 2015 11:43:44 -0400 From: yeh...@redhat.com To: neville.tay...@hotmail.co.uk CC: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Radosgw authorization failed - Original Message - From: Neville neville.tay...@hotmail.co.uk To:

Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes

2015-03-30 Thread Haomai Wang
We have a related topic in CDS about hadoop+ceph (https://wiki.ceph.com/Planning/Blueprints/Infernalis/rgw%3A_Hadoop_FileSystem_Interface_for_a_RADOS_Gateway_Caching_Tier). It doesn't directly solve the data locality problem, but it tries to avoid data migration between different storage clusters. It would

Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes

2015-03-30 Thread Mark Nelson
This is definitely something that we've discussed, though I don't think anyone has really planned out what a complete solution would look like, including processor affinity, etc. Before I joined Inktank I worked at a supercomputing institute, and one of the projects we worked on was to develop

Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes

2015-03-30 Thread Gurvinder Singh
One interesting use case of combining Ceph with computing is running big data jobs on Ceph itself. With CephFS coming along, you can run Hadoop/Spark jobs directly on Ceph, with data locality support, without needing to move your data to the compute resources. I am wondering if anyone in the community is

[ceph-users] Creating and deploying OSDs in parallel

2015-03-30 Thread Somnath Roy
Hi, I am planning to modify our deployment script so that it can create and deploy multiple OSDs in parallel on the same host as well as on different hosts. Just wanted to check whether there is any problem with running, say, 'ceph-deploy osd create' etc. in parallel while deploying the cluster. Thanks Regards
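
A minimal sketch of what running it in parallel could look like (hostnames and disks are placeholders; the HOST:DISK form assumes a Hammer-era ceph-deploy):

    for host in node1 node2 node3; do
        ceph-deploy osd create ${host}:sdb ${host}:sdc &   # one background job per host
    done
    wait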

Re: [ceph-users] CephFS Slow writes with 1MB files

2015-03-30 Thread Gregory Farnum
On Sat, Mar 28, 2015 at 10:12 AM, Barclay Jameson almightybe...@gmail.com wrote: I redid my entire Ceph build going back to CentOS 7, hoping to get the same performance I did last time. The rados bench test was the best I have ever had with a time of 740 MB wr and 1300 MB rd. This was
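
For the 1 MB file case it can be useful to compare against raw RADOS at the same object size, for example (the data pool name is an assumption):

    rados bench -p cephfs_data 60 write -b 1048576 -t 16 --no-cleanup
    rados bench -p cephfs_data 60 seq -t 16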

Re: [ceph-users] SSD Journaling

2015-03-30 Thread Gregory Farnum
On Mon, Mar 30, 2015 at 1:01 PM, Garg, Pankaj pankaj.g...@caviumnetworks.com wrote: Hi, I’m benchmarking my small cluster with HDDs vs HDDs with SSD Journaling. I am using both RADOS bench and Block device (using fio) for testing. I am seeing significant Write performance improvements, as

[ceph-users] SSD Journaling

2015-03-30 Thread Garg, Pankaj
Hi, I'm benchmarking my small cluster with HDDs vs HDDs with SSD Journaling. I am using both RADOS bench and Block device (using fio) for testing. I am seeing significant Write performance improvements, as expected. I am however seeing the Reads coming out a bit slower on the SSD Journaling
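
For the block-device side of such a comparison, fio's rbd engine is typically driven with a job file along these lines (pool and image names are placeholders):

    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=testimg
    bs=4k
    direct=1
    runtime=60

    [randread]
    rw=randread
    iodepth=32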

Re: [ceph-users] SSD Journaling

2015-03-30 Thread Mark Nelson
On 03/30/2015 03:01 PM, Garg, Pankaj wrote: Hi, I’m benchmarking my small cluster with HDDs vs HDDs with SSD Journaling. I am using both RADOS bench and Block device (using fio) for testing. I am seeing significant Write performance improvements, as expected. I am however seeing the Reads

Re: [ceph-users] Is it possible to change the MDS node after it's been created

2015-03-30 Thread Gregory Farnum
On Mon, Mar 30, 2015 at 3:15 PM, Francois Lafont flafdiv...@free.fr wrote: Hi, Gregory Farnum wrote: The MDS doesn't have any data tied to the machine you're running it on. You can either create an entirely new one on a different machine, or simply copy the config file and cephx keyring to

Re: [ceph-users] Is it possible to change the MDS node after it's been created

2015-03-30 Thread Gregory Farnum
On Mon, Mar 30, 2015 at 1:51 PM, Steve Hindle mech...@gmail.com wrote: Hi! I mistakenly created my MDS node on the 'wrong' server a few months back. Now I've realized I placed it on a machine lacking IPMI and would like to move it to another node in my cluster. Is it possible to

[ceph-users] Is it possible to change the MDS node after it's been created

2015-03-30 Thread Steve Hindle
Hi! I mistakenly created my MDS node on the 'wrong' server a few months back. Now I've realized I placed it on a machine lacking IPMI and would like to move it to another node in my cluster. Is it possible to non-destructively move an MDS? Thanks!

Re: [ceph-users] Is it possible to change the MDS node after it's been created

2015-03-30 Thread Francois Lafont
Gregory Farnum wrote: Sorry to jump into this thread, but how can we *remove* an MDS daemon from a Ceph cluster? Are the commands below enough? stop the daemon; rm -r /var/lib/ceph/mds/ceph-$id/; ceph auth del mds.$id. Should we edit something in the MDS map to remove it once and for
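
Laid out as a sequence, the steps being asked about would look roughly like this (a sketch; $id follows the placeholder above, and the stop command depends on the init system in use):

    service ceph stop mds.$id            # stop the daemon
    rm -r /var/lib/ceph/mds/ceph-$id/    # remove its on-disk directory
    ceph auth del mds.$id                # drop its cephx key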

Re: [ceph-users] Is it possible to change the MDS node after it's been created

2015-03-30 Thread Francois Lafont
Hi, Gregory Farnum wrote: The MDS doesn't have any data tied to the machine you're running it on. You can either create an entirely new one on a different machine, or simply copy the config file and cephx keyring to the appropriate directories. :) Sorry to jump into this thread, but how can we

[ceph-users] One host failure brings down the whole cluster

2015-03-30 Thread Kai KH Huang
Hi all, I have a two-node Ceph cluster, and both nodes run a monitor and OSDs. When they're both up, the OSDs are all up and in, and everything is fine... almost: [root~]# ceph -s health HEALTH_WARN 25 pgs degraded; 316 pgs incomplete; 85 pgs stale; 24 pgs stuck degraded; 316 pgs stuck inactive; 85 pgs

Re: [ceph-users] One host failure brings down the whole cluster

2015-03-30 Thread Lindsay Mathieson
On Tue, 31 Mar 2015 02:42:27 AM Kai KH Huang wrote: Hi all, I have a two-node Ceph cluster, and both nodes run a monitor and OSDs. When they're both up, the OSDs are all up and in, and everything is fine... almost: Two things. 1 - You *really* need a minimum of three monitors. Ceph cannot form a quorum with
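
In practice that usually means adding a third monitor (it can run on a small machine with no OSDs) and sizing the pools for two OSD hosts, for example (hostname and pool name are placeholders):

    ceph-deploy mon add mon3         # third monitor, so quorum survives one failure
    ceph osd pool set rbd size 2     # two copies, one per host
    ceph osd pool set rbd min_size 1 # allow I/O with a single remaining copy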

[ceph-users] Hi everyone, can Calamari manage multiple Ceph clusters?

2015-03-30 Thread robert
___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Cannot add OSD node into crushmap or all writes fail

2015-03-30 Thread Tyler Bishop
I have a Ceph node that recovers correctly into my Ceph pool, and performance looks normal for the rbd clients. However, a few minutes after recovery finishes, the rbd clients begin to fall over and cannot write data to the pool. I've been trying to figure this out for weeks!
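
For reference, manually placing a new host and its OSDs in the CRUSH map usually looks something like this (bucket names, OSD ids and weights are placeholders):

    ceph osd crush add-bucket newhost host      # create a host bucket
    ceph osd crush move newhost root=default    # hang it under the default root
    ceph osd crush add osd.12 1.0 host=newhost  # place the OSD with a weight
    ceph osd crush reweight osd.12 1.0          # or adjust an existing entry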

Re: [ceph-users] One host failure brings down the whole cluster

2015-03-30 Thread Gregory Farnum
On Mon, Mar 30, 2015 at 8:02 PM, Lindsay Mathieson lindsay.mathie...@gmail.com wrote: On Tue, 31 Mar 2015 02:42:27 AM Kai KH Huang wrote: Hi all, I have a two-node Ceph cluster, and both nodes run a monitor and OSDs. When they're both up, the OSDs are all up and in, and everything is fine... almost: Two

Re: [ceph-users] CephFS Slow writes with 1MB files

2015-03-30 Thread Yan, Zheng
On Sun, Mar 29, 2015 at 1:12 AM, Barclay Jameson almightybe...@gmail.com wrote: I redid my entire Ceph build going back to CentOS 7, hoping to get the same performance I did last time. The rados bench test was the best I have ever had with a time of 740 MB wr and 1300 MB rd. This was

Re: [ceph-users] Where is the systemd files?

2015-03-30 Thread Ken Dreyer
The systemd service unit files were imported into the tree, but they have not been added into any upstream packaging yet. See the discussion at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=769593 or git log -- systemd. I don't think there are any upstream tickets in Redmine for this yet.
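
Until the packaging catches up, one possible workaround is installing the unit files from the source tree by hand (a sketch, assuming the unit names in ceph.git's systemd/ directory and a default cluster layout):

    cp systemd/ceph-mon@.service systemd/ceph-osd@.service /etc/systemd/system/
    systemctl daemon-reload
    systemctl enable ceph-mon@$(hostname -s)
    systemctl enable ceph-osd@0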

Re: [ceph-users] Radosgw authorization failed

2015-03-30 Thread Yehuda Sadeh-Weinraub
- Original Message - From: Neville neville.tay...@hotmail.co.uk To: Yehuda Sadeh-Weinraub yeh...@redhat.com Cc: ceph-users@lists.ceph.com Sent: Monday, March 30, 2015 6:49:29 AM Subject: Re: [ceph-users] Radosgw authorization failed Date: Wed, 25 Mar 2015 11:43:44 -0400