Re: [ceph-users] Help needed porting Ceph to RSockets

2013-06-20 Thread Joao Eduardo Luis
On 06/20/2013 10:09 AM, Matthew Anderson wrote: Hi All, I've had a few conversations on IRC about getting RDMA support into Ceph and thought I would give it a quick attempt to hopefully spur some interest. What I would like to accomplish is an RSockets-only implementation so I'm able to use
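
For reference, the usual first experiment with rsockets is the librdmacm preload shim, which redirects ordinary socket calls to rsockets without touching the Ceph code. A rough sketch, assuming librspreload.so is installed (the path below varies by distro and is only an example):

    # run one OSD with its TCP sockets transparently redirected to rsockets
    LD_PRELOAD=/usr/lib/rsocket/librspreload.so ceph-osd -i 0 -c /etc/ceph/ceph.conf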

Re: [ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-20 Thread Sebastien Han
Hi, No, this must always be the same UUID. You can only specify one in cinder.conf. Btw, nova does the attachment, which is why it needs the UUID and secret. The first secret import generates a UUID; then always re-use the same one for all your compute nodes. Do something like: secret ephemeral='no'
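
A minimal sketch of that workflow (the UUID, the secret.xml file name and the client.volumes cephx user below are only examples): define the libvirt secret with a fixed UUID, load the same secret.xml on every compute node, and point cinder.conf at that UUID.

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>457eb676-33da-42ec-9194-0f29eefec3af</uuid>
      <usage type='ceph'>
        <name>client.volumes secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret 457eb676-33da-42ec-9194-0f29eefec3af \
        --base64 "$(ceph auth get-key client.volumes)"
    # cinder.conf on the cinder-volume host then carries:
    #   rbd_secret_uuid = 457eb676-33da-42ec-9194-0f29eefec3af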

[ceph-users] Changelog about 0.61.4...

2013-06-20 Thread Fabio - NS3 srl
Hi, is there a changelog for 0.61.4? Thanks FabioFVZ

Re: [ceph-users] Changelog about 0.61.4...

2013-06-20 Thread Joao Eduardo Luis
On 06/20/2013 01:23 PM, Fabio - NS3 srl wrote: Hi, is there a changelog for 0.61.4? There will be as soon as 0.61.4 is officially released. An announcement to ceph-devel, ceph-users and the blog at ceph.com usually accompanies the release. -Joao -- Joao Eduardo Luis Software Engineer

[ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Ugis
Hi, I am thinking about how to set up ceph with 2 pools - fast and slow. The plan is to use SSDs and SATAs (or SAS) in the same hosts and define pools that use the fast and slow disks accordingly. Later it would be easy to grow either pool as needed. I found an example CRUSH map that does a similar thing by
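
A hedged sketch of the usual approach from that era: split each physical host into two logical host buckets in the CRUSH map, hang them under separate roots, and give the fast pool a rule that starts from the SSD root (all bucket, rule and pool names below are made up; the SATA side mirrors the SSD side):

    # excerpt of a decompiled CRUSH map
    host node1-ssd {
        id -10
        alg straw
        hash 0
        item osd.0 weight 1.000
    }
    host node1-sata {
        id -11
        alg straw
        hash 0
        item osd.1 weight 1.000
    }
    root ssd {
        id -20
        alg straw
        hash 0
        item node1-ssd weight 1.000
    }
    rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }

    # then point the fast pool at the new ruleset:
    #   ceph osd pool set fast crush_ruleset 3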

Re: [ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Edward Huyer
Hi, I am thinking about how to set up ceph with 2 pools - fast and slow. The plan is to use SSDs and SATAs (or SAS) in the same hosts and define pools that use the fast and slow disks accordingly. Later it would be easy to grow either pool as needed. I found an example CRUSH map that does a similar thing

[ceph-users] Relocation of a node

2013-06-20 Thread Kurt Bauer
Hi, we run a 3 node cluster, every node runs a mon and 4 osds. 2 defined pools, one with replication level 2, the second with replication level 3. We now want to relocate one node from one datacenter to another, which means a downtime of about 4 hours for that specific node, which shouldn't hurt

[ceph-users] v0.61.4 released

2013-06-20 Thread Sage Weil
We have resolved a number of issues that v0.61.x Cuttlefish users have been hitting and have prepared another point release, v0.61.4. This release fixes a rare data corruption during power cycle when using the XFS file system, a few monitor sync problems, several issues with ceph-disk and

Re: [ceph-users] Relocation of a node

2013-06-20 Thread Gregory Farnum
On Thursday, June 20, 2013, Kurt Bauer wrote: Hi, we run a 3 node cluster, every node runs a mon and 4 osds. 2 defined pools, one with replication level 2, the second with replication level 3. We now want to relocate one node from one datacenter to another, which means a downtime of about
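
A minimal sketch of the usual planned-maintenance procedure (assuming the remaining two mons keep quorum while the node is away): flag the cluster so the stopped OSDs are not marked out and rebalanced during the 4-hour window.

    ceph osd set noout          # don't mark the stopped OSDs out, so no rebalance starts
    service ceph stop           # on the node being relocated (its mon + 4 osds)
    # ... physically move the node to the new datacenter ...
    service ceph start          # on the relocated node, once it is back online
    ceph osd unset noout        # return the cluster to normal behaviour

Note that PGs of the replication-level-2 pool that had a copy on that node run with a single active copy for the duration, so the window should be kept as short as possible.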

Re: [ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Gregory Farnum
On Thursday, June 20, 2013, Edward Huyer wrote: Hi, I am thinking about how to set up ceph with 2 pools - fast and slow. The plan is to use SSDs and SATAs (or SAS) in the same hosts and define pools that use the fast and slow disks accordingly. Later it would be easy to grow either pool as needed.

[ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Bo
Howdy! Loving working with ceph; learning a lot. :) I am curious about the quorum process because I seem to get conflicting information from experts. Those that I report to need a clear answer from me which I am currently unable to give. Ceph needs an odd number of monitors in any given cluster

Re: [ceph-users] Changelog about 0.61.4...

2013-06-20 Thread Fabio - NS3 srl
On 20/06/13 15:40, Joao Eduardo Luis wrote: On 06/20/2013 01:23 PM, Fabio - NS3 srl wrote: Hi, is there a changelog for 0.61.4? There will be as soon as 0.61.4 is officially released. An announcement to ceph-devel, ceph-users and the blog at ceph.com usually accompanies the release.

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Mike Lowe
Quorum means you need at least 51% participating, be it people following parliamentary procedures or mons in ceph. With one dead and two up you have 66% participating, which is enough to have a quorum. An even number doesn't get you any additional safety but does give you one more thing that can fail
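
In concrete numbers, quorum is a strict majority, i.e. floor(n/2) + 1 monitors:

    monitors in map    needed for quorum    mon failures tolerated
          1                   1                      0
          2                   2                      0
          3                   2                      1
          4                   3                      1
          5                   3                      2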

Re: [ceph-users] v0.61.4 released

2013-06-20 Thread Guido Winkelmann
I'm still using Fedora 17 on some machines that use Ceph, and it seems that the official Ceph RPM repository for Fedora 17 (http://eu.ceph.com/rpm-cuttlefish/fc17/x86_64/) hasn't seen any new releases since 0.61.2. Are you discontinuing RPMs for Fedora 17? Guido

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Sage Weil
On Thu, 20 Jun 2013, Bo wrote: Howdy! Loving working with ceph; learning a lot. :) I am curious about the quorum process because I seem to get conflicting information from experts. Those that I report to need a clear answer from me which I am currently unable to give. Ceph needs an odd

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Gregory Farnum
On Thursday, June 20, 2013, Bo wrote: Howdy! Loving working with ceph; learning a lot. :) I am curious about the quorum process because I seem to get conflicting information from experts. Those that I report to need a clear answer from me which I am currently unable to give. Ceph needs

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Bo
Thank you, Mike Sage and Greg. Completely different than everything I had heard or read. Clears it all up. :) Gracias, -bo On Thu, Jun 20, 2013 at 11:15 AM, Gregory Farnum g...@inktank.com wrote: On Thursday, June 20, 2013, Bo wrote: Howdy! Loving working with ceph; learning a lot.

[ceph-users] Exclusive mount

2013-06-20 Thread Timofey Koolin
Is there a way to exclusively map an rbd? For example - I map it on host A, then I try to map it on host B. I want the map on host B to fail while it is mapped on host A. I read about the lock command; I want to atomically lock and mount the rbd for one host and auto-unlock it when host A fails. -- Blog: www.rekby.ru
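
For reference, rbd has advisory locks, but they are not taken automatically by rbd map and there is no auto-unlock when a host dies - a dead host's lock has to be broken explicitly (image name, lock id and locker below are only examples):

    rbd lock add rbd/myimage hostA-lock                   # on host A, before rbd map
    rbd lock list rbd/myimage                             # shows the locker, e.g. client.4127
    rbd lock remove rbd/myimage hostA-lock client.4127    # fencing step: break the lock if host A dies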

[ceph-users] Openstack Multi-rbd storage backend

2013-06-20 Thread w sun
Has anyone seen the same issue as below? We are trying to test the multi-backend feature with two RBD pools on the Grizzly release. At this point, it seems that rbd.py does not take separate cephx users for the two RBD pools for authentication as it defaults to the single ID defined in
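
For context, a Grizzly-style cinder.conf with two RBD backends typically looks like the sketch below (backend names, pool names and users are examples); the per-backend rbd_user is the value the poster reports rbd.py not honouring:

    [DEFAULT]
    enabled_backends = rbd-fast,rbd-slow

    [rbd-fast]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = fast
    rbd_user = cinder-fast
    rbd_secret_uuid = 457eb676-33da-42ec-9194-0f29eefec3af
    volume_backend_name = RBD_FAST

    [rbd-slow]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = slow
    rbd_user = cinder-slow
    rbd_secret_uuid = 457eb676-33da-42ec-9194-0f29eefec3af
    volume_backend_name = RBD_SLOW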

Re: [ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Ugis
Thanks! Rethinking the same first example, I think it is doable even as shown there. Nothing prevents mapping osds to host-like entities, whatever they are called. 2013/6/20 Gregory Farnum g...@inktank.com: On Thursday, June 20, 2013, Edward Huyer wrote: Hi, I am thinking about how to set up ceph

Re: [ceph-users] radosgw placement groups

2013-06-20 Thread Mandell Degerness
It is possible to create all of the pools manually before starting radosgw. That allows control of the pg_num used. The pools are: .rgw, .rgw.control, .rgw.gc, .log, .intent-log, .usage, .users, .users.email, .users.swift, .users.uid On Wed, Jun 19, 2013 at 6:13 PM, Derek Yarnell
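
A minimal sketch of pre-creating those pools with an explicit pg_num (64 here is only an example value; choose it to match the OSD count):

    for pool in .rgw .rgw.control .rgw.gc .log .intent-log .usage \
                .users .users.email .users.swift .users.uid; do
        ceph osd pool create "$pool" 64 64      # pg_num pgp_num
    done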

[ceph-users] How to change the journal size at run time?

2013-06-20 Thread Da Chun
Hi List, The default journal size is 1G, which I think is too small for my Gb network. I want to extend all the journal partitions to 2 or 4G. How can I do that? The osds were all created by commands like ceph-deploy osd create ceph-node0:/dev/sdb. The journal partition is on the same disk
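
One common approach, per OSD, is to flush the journal, enlarge (or replace) the journal partition, and recreate it - a hedged sketch:

    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal      # write any pending journal entries to the object store
    # enlarge or replace the journal partition, and set the new size in ceph.conf:
    #   [osd]
    #   osd journal size = 4096        # in MB
    ceph-osd -i 0 --mkjournal          # initialize the new, larger journal
    service ceph start osd.0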

Re: [ceph-users] Desktop or Enterprise SATA Drives?

2013-06-20 Thread James Harper
Hi all, I'm building a small ceph cluster with 3 nodes (my first ceph cluster). Each node has one system disk, one journal SSD and one SATA OSD disk. My question now is: should I use desktop or enterprise SATA drives? Enterprise drives have a higher MTBF but the firmware is actually

[ceph-users] ceph-mon segfaulted

2013-06-20 Thread Artem Silenkov
Good day! Surprisingly, we encountered a ceph-mon core dump today. It was not peak load time and the system was technically in a good state. Configuration: Debian GNU/Linux 6.0 x64, Linux h01 2.6.32-19-pve #1 SMP Wed May 15 07:32:52 CEST 2013 x86_64 GNU/Linux, ii ceph