Re: [ceph-users] Possible to bind one osd with a specific network adapter?

2013-06-21 Thread Da Chun
James, thank you! No, I have not separated the public and cluster networks yet. They are on the same switch. As I don't have many nodes now, the switch won't be the bottleneck currently. -- Original -- From: "James Harper"; Date: Sat, Jun 22, 2013 12:41 PM To: "Da Chu

Re: [ceph-users] Possible to bind one osd with a specific network adapter?

2013-06-21 Thread James Harper
> > Hi List, > Each of my osd nodes has 5 network Gb adapters, and has many osds, one > disk one osd. They are all connected with a Gb switch. > Currently I can get an average 100MB/s of read/write speed. To improve the > throughput further, the network bandwidth will be the bottleneck, right? Do

Re: [ceph-users] Possible to bind one osd with a specific network adapter?

2013-06-21 Thread Gregory Farnum
On Friday, June 21, 2013, Da Chun wrote: > Hi List, > Each of my osd nodes has 5 network Gb adapters, and has many osds, one > disk one osd. They are all connected with a Gb switch. > Currently I can get an average 100MB/s of read/write speed. To improve the > throughput further, the network bandw

[ceph-users] Possible to bind one osd with a specific network adapter?

2013-06-21 Thread Da Chun
Hi List, Each of my osd nodes has 5 Gb network adapters, and has many OSDs, one disk per OSD. They are all connected with a Gb switch. Currently I can get an average 100MB/s of read/write speed. To improve the throughput further, the network bandwidth will be the bottleneck, right? I can't affo
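Separating client traffic from replication traffic is the usual first step before trying to pin individual OSDs to adapters. A minimal ceph.conf sketch; the subnets and addresses below are hypothetical placeholders, not values from this thread:

```ini
[global]
    ; client-facing traffic
    public network = 192.168.1.0/24
    ; OSD replication and heartbeat traffic
    cluster network = 10.0.0.0/24

[osd.0]
    ; per-daemon override: pin this OSD to the subnets of specific adapters
    public addr = 192.168.1.11
    cluster addr = 10.0.0.11
```

With `public addr`/`cluster addr` set per daemon, each OSD binds to the interface that owns that address, which is the closest Ceph gets to "one OSD per NIC".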

[ceph-users] monitor removal and re-add

2013-06-21 Thread Mandell Degerness
There is a scenario where we would want to remove a monitor and, at a later date, re-add the monitor (using the same IP address). Is there a supported way to do this? I tried deleting the monitor directory and rebuilding from scratch following the add monitor procedures from the web, but the moni
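A sketch of the remove/re-add cycle, assuming monitor id `b`, address `192.168.1.12:6789`, and sysvinit service names, all hypothetical; the key point is that the re-added monitor must be rebuilt from the cluster's *current* monmap, not its old directory:

```shell
# Remove the monitor from the map and wipe its old data directory
ceph mon remove b
rm -rf /var/lib/ceph/mon/ceph-b

# Later, re-add it: rebuild the store from the live cluster state
ceph mon getmap -o /tmp/monmap
ceph auth get mon. -o /tmp/mon.keyring
ceph-mon -i b --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
ceph mon add b 192.168.1.12:6789
service ceph start mon.b
```

If the rebuilt monitor is fed a stale monmap it will refuse to join quorum, which matches the symptom described above.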

Re: [ceph-users] ceph-deploy gatherkeys failing with custom cluster name

2013-06-21 Thread Gregory Farnum
On Fri, Jun 21, 2013 at 10:58 AM, Noah Watkins wrote: > I've used ceph-deploy to create a new cluster with the default cluster name. > I now want to deploy a second cluster in parallel, using the same nodes. I > went through the same process for deploying the first, but with --cluster > option,

[ceph-users] ceph-deploy gatherkeys failing with custom cluster name

2013-06-21 Thread Noah Watkins
I've used ceph-deploy to create a new cluster with the default cluster name. I now want to deploy a second cluster in parallel, using the same nodes. I went through the same process for deploying the first, but with --cluster option, and I'm getting an error on gatherkeys. $ ceph-deploy --clust
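A sketch of the two-cluster layout, with the cluster name `backup` and node names as hypothetical placeholders. ceph-deploy names its artifacts after the cluster, which is where gatherkeys usually trips up:

```shell
# Create the second cluster under its own name; config and keys are
# written as {cluster}.conf and {cluster}.*.keyring
ceph-deploy --cluster backup new node1 node2 node3
ceph-deploy --cluster backup mon create node1 node2 node3

# gatherkeys looks for {cluster}.client.admin.keyring on the monitors;
# if it fails, first check the new mons actually formed quorum
ceph-deploy --cluster backup gatherkeys node1
```

Running two clusters on the same nodes also requires non-conflicting monitor ports and data paths, which the default configs do not give you for free.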

Re: [ceph-users] Openstack Multi-rbd storage backend

2013-06-21 Thread w sun
Josh & Sebastien, Does either of you have any comments on this cephx issue with multi-rbd backend pools? Thx. --weiguo From: ws...@hotmail.com To: ceph-users@lists.ceph.com Date: Thu, 20 Jun 2013 17:58:34 + Subject: [ceph-users] Openstack Multi-rbd storage backend Anyone saw the same

Re: [ceph-users] CORS support

2013-06-21 Thread Neil Levine
Yes! On Fri, Jun 21, 2013 at 8:49 AM, Fabio - NS3 srl wrote: > Hi, > > Is there support for CORS in ceph 0.61.4? > > thanks, > Fabio > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-c

Re: [ceph-users] cephx and rados

2013-06-21 Thread Sage Weil
On Fri, 21 Jun 2013, Maciej Gałkiewicz wrote: > Hello > I am trying to integrate openstack and ceph. I have successfully configured > cinder but there is a problem with "rados lspools" command executed during > cinder-volume startup. It looks like this command requires client.admin > keyring to be

[ceph-users] CORS support

2013-06-21 Thread Fabio - NS3 srl
Hi, Is there support for CORS in ceph 0.61.4? thanks, Fabio ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Testing linux-3.10 and rbd format v2

2013-06-21 Thread Sage Weil
On Fri, 21 Jun 2013, Damien Churchill wrote: > Hi, > > I've built a copy of linux 3.10-rc6 (and added the patch from > ceph-client/for-linus) however when I try and map a rbd image created > with: > > # rbd create test-format-2 --size 10240 --format 2 > > and then run a map on the machine runnin

[ceph-users] Backport of modern qemu rbd driver to qemu 1.0 + Precise packaging

2013-06-21 Thread Alex Bligh
I've backported the modern (qemu 1.5) rbd driver to qemu 1.0 (for anyone interested). This is designed for people who are conservative in hypervisor version, but like more bleeding edge storage. The main thing this adds is asynchronous flush to rbd, plus automatic control of rbd caching behaviour.

[ceph-users] cephx and rados

2013-06-21 Thread Maciej Gałkiewicz
Hello I am trying to integrate openstack and ceph. I have successfully configured cinder but there is a problem with "rados lspools" command executed during cinder-volume startup. It looks like this command requires client.admin keyring to be readable by cinder user. Is it possible to specify anot
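A sketch of running librados commands under a dedicated key instead of making client.admin readable by the cinder user; the key name `client.cinder`, pool `volumes`, and paths are hypothetical placeholders:

```shell
# Create a dedicated key with only the capabilities cinder needs
ceph auth get-or-create client.cinder \
    mon 'allow r' osd 'allow rwx pool=volumes' \
    -o /etc/ceph/ceph.client.cinder.keyring

# Point a single command at that key instead of client.admin
rados lspools --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring

# Or set it once for everything cinder-volume spawns
export CEPH_ARGS="--id cinder"
```

The CEPH_ARGS environment variable is the usual way to make a service like cinder-volume pick a non-admin identity without patching its rados invocations.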

Re: [ceph-users] Desktop or Enterprise SATA Drives?

2013-06-21 Thread Stefan Schneebeli
>> Hi Stefan >> If you use a hardware RAID controller, stay away from desktop HDs! >> TLER or ERC is a "must have" to connect your disks to a hardware >> raid controller >> Regards, >> Marco Yes, I know. But what happens if I use an enterprise SATA drive with TLER or ERC without a RAID controller?

Re: [ceph-users] Desktop or Enterprise SATA Drives?

2013-06-21 Thread Stefan Schneebeli
James Harper, 6/21/2013 7:17 AM: > Hi all > > I'm building a small ceph cluster with 3 nodes (my first ceph cluster). > Each node with one system disk, one journal SSD disk and one SATA OSD > disk. > > My question is now: should I use desktop or enterprise SATA drives? > Enterprise drives have a h

Re: [ceph-users] several radosgw sharing pools

2013-06-21 Thread Artem Silenkov
This picture shows the way we do it: http://habrastorage.org/storage2/1ed/532/627/1ed5326273399df81f3a73179848a404.png Regards, Artem Silenkov, 2GIS TM. --- 2GIS LLC, http://2gis.ru, a.silenkov at 2gis.ru, gtalk: artem.silenkov at gmail.com

Re: [ceph-users] several radosgw sharing pools

2013-06-21 Thread Alvaro Izquierdo Jimeno
Thanks Artem. From: Artem Silenkov [mailto:artem.silen...@gmail.com] Sent: Friday, June 21, 2013 14:01 To: Alvaro Izquierdo Jimeno CC: ceph-users@lists.ceph.com Subject: Re: [ceph-users] several radosgw sharing pools Good day! We use balancing such way varnish frontend-->radosgw1 | -

Re: [ceph-users] several radosgw sharing pools

2013-06-21 Thread Artem Silenkov
Good day! We balance it this way: varnish frontend --> radosgw1 | --> radosgw2. Every radosgw host uses its own config, so it is not necessary to add both nodes to every ceph.conf. It looks like: Host1: [client.radosgw.gateway] host = myhost1 ... Host2: [client.radosgw.gateway] host = myhost2
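Spelled out as separate ceph.conf fragments, one per gateway host (the `rgw socket path` lines are an assumed addition for FastCGI setups of that era, not part of the message above):

```ini
; /etc/ceph/ceph.conf on myhost1
[client.radosgw.gateway]
    host = myhost1
    rgw socket path = /var/run/ceph/radosgw.myhost1.sock

; /etc/ceph/ceph.conf on myhost2
[client.radosgw.gateway]
    host = myhost2
    rgw socket path = /var/run/ceph/radosgw.myhost2.sock
```

Because both gateways authenticate with the same user and point at the same cluster, they share pools and users automatically; the load balancer in front is what provides the failover asked about above.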

Re: [ceph-users] Empty swift_key after key create

2013-06-21 Thread Alvaro Izquierdo Jimeno
I think you have to add '--gen-secret' to 'radosgw-admin key create --subuser=testuser:swift --key-type=swift'. From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Mihály Árva-Tóth Sent: Friday, June 21, 2013 13:37 To: ceph-users@lists.
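Putting the suggested fix together in one command (uid `testuser` as in the thread):

```shell
# Without --gen-secret the swift key entry is created empty;
# generating the secret at key-create time fills it in
radosgw-admin key create --subuser=testuser:swift \
    --key-type=swift --gen-secret
```

The JSON output of this command should then show a non-empty secret under "swift_keys".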

[ceph-users] Empty swift_key after key create

2013-06-21 Thread Mihály Árva-Tóth
Hello, When I create a user, then a sub-user, and generate a swift key, the key is missing from the JSON output. root@ceph4:~# radosgw-admin user create --uid=testuser --display-name="Test User" --email=t...@foobar.com { "user_id": "testuser", "display_name": "Test User", "email": "t.

[ceph-users] several radosgw sharing pools

2013-06-21 Thread Alvaro Izquierdo Jimeno
Hi, I have a ceph cluster with a radosgw running. The radosgw part in ceph.conf is: [client.radosgw.gateway] host = myhost1 But if the radosgw process dies for some reason, we lose the gateway... So: - Can I set up another radosgw on another host, sharing pools and users in ceph? i.e.:

Re: [ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-21 Thread Sebastien Han
You're welcome, cool :) Yes, start from the libvirt section. Cheers! Sébastien Han, Cloud Engineer. "Always give 100%. Unless you're giving blood." Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70. Email : sebastien@enovance.com – Skype : han.sbastien. Address : 10, rue de la Victoire – 75009 Paris

Re: [ceph-users] Desktop or Enterprise SATA Drives?

2013-06-21 Thread Marco Aroldi
Hi Stefan If you use a hardware RAID controller, stay away from desktop HDs! TLER or ERC is a "must have" to connect your disks to a hardware raid controller Regards, Marco On 20/06/2013 17:07, Stefan Schneebeli wrote: Hi all I'm building a small ceph cluster with 3 nodes (my first
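Whether a given drive supports ERC/TLER can be checked and, on many enterprise models, tuned with smartctl from smartmontools; `/dev/sda` and the 7-second value below are placeholders:

```shell
# Read the current error-recovery-control timers (TLER/ERC)
smartctl -l scterc /dev/sda

# Set read/write recovery timeouts to 7.0 seconds (units of 100 ms);
# desktop drives often reject this or forget it after a power cycle
smartctl -l scterc,70,70 /dev/sda
```

For Ceph without a RAID controller the long recovery times of desktop drives mostly cost latency rather than array ejection, which is why the advice above is specific to hardware RAID.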

Re: [ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-21 Thread Igor Laskovy
Thanks Sebastien, it works now ;) Now for live migration, do I need to follow https://wiki.openstack.org/wiki/LiveMigrationUsage beginning from the libvirt settings section? On Thu, Jun 20, 2013 at 2:47 PM, Sebastien Han wrote: > Hi, > > No, this must always be the same UUID. You can only specify one in

Re: [ceph-users] How to change the journal size at run time?

2013-06-21 Thread Leen Besselink
On Fri, Jun 21, 2013 at 10:39:05AM +0200, Leen Besselink wrote: > On Fri, Jun 21, 2013 at 12:11:23PM +0800, Da Chun wrote: > > Hi List, > > The default journal size is 1G, which I think is too small for my Gb > > network. I want to extend all the journal partitions to 2 or 4G. How can I > > do th

Re: [ceph-users] How to change the journal size at run time?

2013-06-21 Thread Leen Besselink
On Fri, Jun 21, 2013 at 12:11:23PM +0800, Da Chun wrote: > Hi List, > The default journal size is 1G, which I think is too small for my Gb network. > I want to extend all the journal partitions to 2 or 4G. How can I do that? > The osds were all created by commands like "ceph-deploy osd create >
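A sketch of growing a journal in place, one OSD at a time; osd id `0` and the 4G size are placeholders, and the repartitioning step itself depends on how ceph-deploy laid out the journal device:

```shell
# Stop the OSD and write out everything still sitting in its journal
service ceph stop osd.0
ceph-osd -i 0 --flush-journal

# Repartition or replace the journal device here (e.g. a new 4G
# partition), set "osd journal size = 4096" in ceph.conf, then
# recreate the journal and bring the OSD back
ceph-osd -i 0 --mkjournal
service ceph start osd.0
```

Flushing before recreating is what makes this safe at run time: the journal is empty when it is destroyed, so no writes are lost.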

[ceph-users] Testing linux-3.10 and rbd format v2

2013-06-21 Thread Damien Churchill
Hi, I've built a copy of linux 3.10-rc6 (and added the patch from ceph-client/for-linus) however when I try and map a rbd image created with: # rbd create test-format-2 --size 10240 --format 2 and then run a map on the machine running the new kernel: # rbd map test-format-2 rbd: add failed: (22
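When `rbd map` fails with -22 (EINVAL) on a format 2 image, the usual causes are the kernel not understanding the image format or one of its features; a few hedged diagnostic steps, with the image name taken from the message above:

```shell
# See what format and features the image was actually created with
rbd info test-format-2

# The kernel logs the specific reason for a refused map
dmesg | tail
```

If dmesg reports an unsupported feature or format, recreating the image without that feature, or moving to a kernel that supports it, is the usual way out.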