Re: [ceph-users] ceph and incremental backups

2013-08-30 Thread Josh Durgin
On 08/30/2013 02:22 PM, Oliver Daudey wrote: Hey Mark, On vr, 2013-08-30 at 13:04 -0500, Mark Chaney wrote: Full disclosure, I have zero experience with openstack and ceph so far. If I am going to use a Ceph RBD cluster to store my kvm instances, how should I be doing backups? 1) I would pref
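A minimal sketch of the incremental approach with RBD snapshots and export-diff/import-diff (pool, image, snapshot and file names are placeholders; filesystem consistency inside the guest is a separate concern):

    # night 1: snapshot and export everything up to that snapshot
    rbd snap create rbd/vm-disk@backup-2013-08-30
    rbd export-diff rbd/vm-disk@backup-2013-08-30 vm-disk.2013-08-30.diff

    # night 2: snapshot again and export only the blocks changed since the last one
    rbd snap create rbd/vm-disk@backup-2013-08-31
    rbd export-diff --from-snap backup-2013-08-30 rbd/vm-disk@backup-2013-08-31 vm-disk.2013-08-31.diff

    # replay the diffs, in order, onto a pre-created image in a backup pool or second cluster
    rbd create backup/vm-disk --size 10240
    rbd import-diff vm-disk.2013-08-30.diff backup/vm-disk
    rbd import-diff vm-disk.2013-08-31.diff backup/vm-disk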

Re: [ceph-users] Location field empty in Glance when instance to image

2013-08-30 Thread Josh Durgin
On 08/30/2013 03:40 AM, Toni F. [ackstorm] wrote: Sorry, wrong list. Anyway I take this opportunity to ask two questions: Does somebody know how I can download an image or snapshot? Cinder has no way to export them, but you can use: rbd export pool/image@snap /path/to/file how the direct url are

Re: [ceph-users] cephfs set_layout

2013-08-30 Thread Sage Weil
On Fri, 30 Aug 2013, Joao Pedras wrote: > > Greetings all! > > I am bumping into a small issue and I am wondering if someone has any > insight on it. > > I am trying to use a pool other than 'data' for cephfs. Said pool has id #3 > and I have run 'ceph mds add_data_pool 3'. > > After mounting c

[ceph-users] cephfs set_layout

2013-08-30 Thread Joao Pedras
Greetings all! I am bumping into a small issue and I am wondering if someone has any insight on it. I am trying to use a pool other than 'data' for cephfs. Said pool has id #3 and I have run 'ceph mds add_data_pool 3'. After mounting cephfs seg faults when trying to set the layout: $> cephfs /p
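For context, a hedged sketch of how the layout usually gets set onto pool 3 with that tool; the mount point is a placeholder and the flags are recalled from the tool's usage output, so treat them as assumptions (the old cephfs tool tends to want the full layout, not just the pool):

    # defaults shown: 4 MB stripe unit, stripe count 1, 4 MB object size
    cephfs /mnt/cephfs/somedir set_layout -p 3 -u 4194304 -c 1 -s 4194304
    cephfs /mnt/cephfs/somedir show_layout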

Re: [ceph-users] ceph and incremental backups

2013-08-30 Thread Martin Rudat
On 2013-08-31 04:04, Mark Chaney wrote: If I am going to use a Ceph RBD cluster to store my kvm instances, how should I be doing backups? 1) I would prefer them to be incremental so that a whole backup doesn't have to happen every night. 2) I would also like the instances to obviously stay onli

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Corin Langosch
On 30.08.2013 20:33, Wido den Hollander wrote: On 08/30/2013 08:19 PM, Geraint Jones wrote: Hi Guys We are using Ceph in production backing an LXC cluster. The setup is : 2 x Servers, 24 x 3TB Disks each in groups of 3 as RAID0. SSD for journals. Bonded 1gbit ethernet (2gbit total). I thin

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Corin Langosch
On 30.08.2013 23:11, Geraint Jones wrote: Oh the Machines are 128gb :) How many PGs in total do you have? (128 gb was minimum for 8192 pgs) How many do you plan to have in the near future? Is the cluster already under load? What's the current memory usage of the osds? What's the usage whe
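As a side note on the PG question, the usual rule of thumb from the docs, with made-up numbers:

    # total PGs across all pools ~= (number of OSDs * 100) / replica count,
    # rounded up to the next power of two
    # e.g. 16 OSDs with 2 replicas: 16 * 100 / 2 = 800  ->  1024
    ceph osd pool create somepool 1024 1024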

Re: [ceph-users] ceph and incremental backups

2013-08-30 Thread Oliver Daudey
Hey Mark, On vr, 2013-08-30 at 13:04 -0500, Mark Chaney wrote: > Full disclosure, I have zero experience with openstack and ceph so far. > > If I am going to use a Ceph RBD cluster to store my kvm instances, how > should I be doing backups? > > 1) I would prefer them to be incremental so that a

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Gregory Farnum
Assuming the networks can intercommunicate, yes. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Fri, Aug 30, 2013 at 1:09 PM, Geraint Jones wrote: > One Other thing > > If I set cluster_network on node0 and restart it, then do the same on > node1 will I be able to maintain
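A hedged sketch of the config this refers to; the subnets are placeholders and the restart command varies by init system:

    # in ceph.conf, [global] section, on every node:
    #   public network  = 192.168.1.0/24    (client and monitor traffic)
    #   cluster network = 10.10.10.0/24     (OSD-to-OSD replication and heartbeats)
    # then restart the OSDs one node at a time, checking 'ceph -s' in between:
    sudo service ceph restart osd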

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Geraint Jones
Is that not the point of the cluster_network - that it shouldn't be able to communicate with other networks... On 30/08/13 1:57 PM, "Gregory Farnum" wrote: >Assuming the networks can intercommunicate, yes. >-Greg >Software Engineer #42 @ http://inktank.com | http://ceph.com > > >On Fri, Aug 30,

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Geraint Jones
Oh the Machines are 128gb :) From: Corin Langosch Date: Friday, 30 August 2013 2:08 PM To: Wido den Hollander Cc: Subject: Re: [ceph-users] OSD to OSD Communication On 30.08.2013 20:33, Wido den Hollander wrote: > On 08/30/2013 08:19 PM, Geraint Jones wrote: > >> Hi Guys

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Wolfgang Hennerbichler
On Aug 30, 2013, at 20:38 , Geraint Jones wrote: >> >> Yes, you can use "cluster_network" to direct OSD traffic over different >> network interfaces. > > Perfect, so now to buy some NIC's :) or use VLANs on your 10GE and frickle around with QoS. >> >> Wido >> >>> If anyone has any suggesti

Re: [ceph-users] rbd mapping failes - maybe solved

2013-08-30 Thread bernhard glomm
Thanks Sage, I just tried various versions from gitbuilder and finally found one that worked ;-) deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main looks like it works perfectly, on first glance with much better performance than cuttlefish. Do you nee
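For anyone else making the same switch, the repo line above goes into apt roughly like this (a sketch; the list file name is arbitrary):

    echo "deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main" \
        | sudo tee /etc/apt/sources.list.d/ceph-gitbuilder.list
    sudo apt-get update
    sudo apt-get install --only-upgrade ceph ceph-common librbd1 librados2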

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Geraint Jones
One Other thing If I set cluster_network on node0 and restart it, then do the same on node1 will I be able to maintain availability while I roll the change out ? On 30/08/13 11:47 AM, "Dimitri Maziuk" wrote: >On 08/30/2013 01:38 PM, Geraint Jones wrote: >> >> >> On 30/08/13 11:33 AM, "Wido de

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Dimitri Maziuk
On 08/30/2013 01:51 PM, Mark Nelson wrote: > On 08/30/2013 01:47 PM, Dimitri Maziuk wrote: >> (There's nothing wrong with raid as long as it's >0.) > > One exception: Some controllers (looking at you LSI!) don't expose disks > as JBOD or if they do, don't let you use write-back cache. In those > ca

[ceph-users] SSD only storage, where to place journal

2013-08-30 Thread Tobias Brunner
Hi everyone, Reading through the documentation leads to the conclusion that it's a best practice to place the journal of an OSD instance on a separate SSD disk to speed up writes. But if I only use SSD storage (for object storage and journal) I think it would be applicable to place the journal
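A minimal sketch of the co-located setup being asked about (hostname and device are placeholders; the config values are the usual defaults, not a recommendation from this thread):

    # let ceph-disk carve both the data and the journal partition out of the same SSD:
    ceph-deploy osd create ceph001:sdb
    # or pin the journal explicitly in ceph.conf, [osd] section:
    #   osd journal = /var/lib/ceph/osd/ceph-$id/journal   (default: lives with the OSD data)
    #   osd journal size = 5120                            (MB)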

Re: [ceph-users] SSD only storage, where to place journal

2013-08-30 Thread Stefan Priebe
On 30.08.2013 22:09, Tobias Brunner wrote: Hi everyone, Reading through the documentation leads to the conclusion that it's a best practice to place the journal of an OSD instance on a separate SSD disk to speed up writes. But if I only use SSD storage (for object storage and journal) I think

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Mark Nelson
On 08/30/2013 01:47 PM, Dimitri Maziuk wrote: On 08/30/2013 01:38 PM, Geraint Jones wrote: On 30/08/13 11:33 AM, "Wido den Hollander" wrote: On 08/30/2013 08:19 PM, Geraint Jones wrote: Hi Guys We are using Ceph in production backing an LXC cluster. The setup is : 2 x Servers, 24 x 3TB Di

[ceph-users] ceph install

2013-08-30 Thread Jimmy Lu [ Storage ]
Hello ceph-users, I am new to Ceph and would like to bring up a 5-node cluster for my PoC. I am doing an installation from the link below and ran into a problem. I am not so sure how to deal with it. Can someone please shed some light? http://ceph.com/docs/master/install/rpm/ [root@cleverloadgen16 c

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Geraint Jones
On 30/08/13 11:33 AM, "Wido den Hollander" wrote: >On 08/30/2013 08:19 PM, Geraint Jones wrote: >> Hi Guys >> >> We are using Ceph in production backing an LXC cluster. The setup is : 2 >> x Servers, 24 x 3TB Disks each in groups of 3 as RAID0. SSD for >> journals. Bonded 1gbit ethernet (2gbit

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Dimitri Maziuk
On 08/30/2013 01:38 PM, Geraint Jones wrote: > > > On 30/08/13 11:33 AM, "Wido den Hollander" wrote: > >> On 08/30/2013 08:19 PM, Geraint Jones wrote: >>> Hi Guys >>> >>> We are using Ceph in production backing an LXC cluster. The setup is : 2 >>> x Servers, 24 x 3TB Disks each in groups of 3 a

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Wido den Hollander
On 08/30/2013 08:19 PM, Geraint Jones wrote: Hi Guys We are using Ceph in production backing an LXC cluster. The setup is : 2 x Servers, 24 x 3TB Disks each in groups of 3 as RAID0. SSD for journals. Bonded 1gbit ethernet (2gbit total). I think you sized your machines too big. I'd say go for

[ceph-users] OSD to OSD Communication

2013-08-30 Thread Geraint Jones
Hi Guys We are using Ceph in production backing an LXC cluster. The setup is : 2 x Servers, 24 x 3TB Disks each in groups of 3 as RAID0. SSD for journals. Bonded 1gbit ethernet (2gbit total). Overnight we have had a disk failure, this in itself is not a biggie, but due to the number of VM's we h

[ceph-users] ceph and incremental backups

2013-08-30 Thread Mark Chaney
Full disclosure, I have zero experience with openstack and ceph so far. If I am going to use a Ceph RBD cluster to store my kvm instances, how should I be doing backups? 1) I would prefer them to be incremental so that a whole backup doesn't have to happen every night. 2) I would also like the

Re: [ceph-users] Upgraded Bobtail to Cuttlefish and unable to mount cephfs

2013-08-30 Thread Gregory Farnum
Can you start up your mds with "debug mds = 20" and "debug ms = 20"? The "failed to decode message" line is suspicious but there's not enough context here for me to be sure, and my pattern-matching isn't reminding me of any serious bugs. -Greg Software Engineer #42 @ http://inktank.com | http://cep
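For anyone following along, one way to raise those levels (a sketch; the mds rank/name and the exact tell syntax may differ by version):

    # bump logging on the running mds without a restart
    ceph mds tell 0 injectargs '--debug-mds 20 --debug-ms 20'
    # or set it persistently in ceph.conf under [mds] and restart the mds:
    #   debug mds = 20
    #   debug ms = 20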

Re: [ceph-users] rbd mapping failes

2013-08-30 Thread Sage Weil
On Fri, 30 Aug 2013, Bernhard Glomm wrote: > mount cephfs fails too (I added 3 MDS) > does anybody have any ideas how to debug this further? > > I used ceph-deploy to create the cluster, > the xfs filesystem on the OSD's is okay, I can copy, remove and open files on > that partition > so I assume it's somethi

Re: [ceph-users] trouble with ceph-deploy

2013-08-30 Thread Pavel Timoschenkov
>>>What happens if you do >>>ceph-disk -v activate /dev/sdaa1 >>>on ceph001? ceph-disk -v activate /dev/sdaa1 /dev/sdaa1: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details) -Original Message- From: Sage Weil [mailto:s...@inktank.com] Sent: Fri
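For reference, what wipefs shows and does here (the erase step is destructive, so this is only a sketch; double-check the device first):

    wipefs /dev/sdaa1        # list every filesystem signature found on the partition
    # wipefs -a /dev/sdaa1   # erase them all, then retry: ceph-disk -v activate /dev/sdaa1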

Re: [ceph-users] ceph s3 allowed characters

2013-08-30 Thread Dominik Mostowiec
(echo -n 'GET /dysk/files/test.test%40op.pl/DOMIWENT%202013/Damian%20DW/dw/Specyfikacja%20istotnych%20warunk%F3w%20zam%F3wienia.doc HTTP/1.0'; printf "\r\n\r\n") | nc localhost 88 HTTP/1.1 400 Bad Request Date: Fri, 30 Aug 2013 14:10:07 GMT Server: Apache/2.2.22 (Ubuntu) Accept-Ranges: bytes Conte
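One hedged guess at the trigger: %F3 is 'ó' in Latin-1/Latin-2, while S3-style object names are expected to be valid UTF-8, where 'ó' percent-encodes as %C3%B3. The same request with the key encoded as UTF-8 would look like:

    (echo -n 'GET /dysk/files/test.test%40op.pl/DOMIWENT%202013/Damian%20DW/dw/Specyfikacja%20istotnych%20warunk%C3%B3w%20zam%C3%B3wienia.doc HTTP/1.0'; printf "\r\n\r\n") | nc localhost 88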

Re: [ceph-users] trouble with ceph-deploy

2013-08-30 Thread Sage Weil
On Fri, 30 Aug 2013, Pavel Timoschenkov wrote: > > In logs everything looks good. After > > ceph-deploy disk zap ceph001:sdaa ceph001:sda1 > > and > > ceph-deploy osd create ceph001:sdaa:/dev/sda1 > > where: > > HOST: ceph001 > > DISK: sdaa > > JOURNAL: /dev/sda1 > > in

Re: [ceph-users] rbd mapping failes

2013-08-30 Thread Sage Weil
Hi Bernhard, On Fri, 30 Aug 2013, Bernhard Glomm wrote: > Hi all, > > due to a problem with ceph-deploy I currently use > > deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/ > raring main > (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f)) > > N

Re: [ceph-users] ceph s3 allowed characters

2013-08-30 Thread Yehuda Sadeh
On Fri, Aug 30, 2013 at 7:44 AM, Dominik Mostowiec wrote: > (echo -n 'GET > /dysk/files/test.test%40op.pl/DOMIWENT%202013/Damian%20DW/dw/Specyfikacja%20istotnych%20warunk%F3w%20zam%F3wienia.doc > HTTP/1.0'; printf "\r\n\r\n") | nc localhost 88 > HTTP/1.1 400 Bad Request > Date: Fri, 30 Aug 2013 1

[ceph-users] [ANN] ceph-deploy 1.2.3 released!

2013-08-30 Thread Alfredo Deza
Hi all, There is a new bug-fix release of ceph-deploy, the easy ceph deployment tool. Installation instructions: https://github.com/ceph/ceph-deploy#installation This is the list of all fixes that went into this release which can also be found in the CHANGELOG.rst file in ceph-deploy's git repo:
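For completeness, the usual ways to pull it in (a sketch; the installation link above is authoritative):

    pip install --upgrade ceph-deploy        # from PyPI
    sudo apt-get install ceph-deploy         # or from the ceph.com Debian/Ubuntu repos
    sudo yum install ceph-deploy             # or from the ceph.com RPM repos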

[ceph-users] ceph-mon runs on 6800 not 6789

2013-08-30 Thread 이주헌
I have 1 MDS and 3 OSDs. I installed them via ceph-deploy (dumpling 0.67.2). At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon launched on port 6800, not 6789. This is the result of 'ceph -s': --- cluster c59d13fd-c4c9-4cd0-b2ed-b654428b3171 health HEALTH_WARN 1 mons
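One thing worth checking, sketched with placeholder names and addresses: pinning the monitor's address in ceph.conf so a restart comes back on the standard port (this assumes the affected monitor is mon.a):

    # ceph.conf on the monitor host:
    #   [mon.a]
    #       host     = mon-host-1
    #       mon addr = 192.168.0.10:6789
    sudo service ceph restart mon.a
    ceph -s | grep monmap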

Re: [ceph-users] ceph s3 allowed characters

2013-08-30 Thread Alfredo Deza
On Fri, Aug 30, 2013 at 9:52 AM, Dominik Mostowiec < dominikmostow...@gmail.com> wrote: > Hi, > I got err (400) from radosgw on request: > 2013-08-30 08:09:19.396812 7f3b307c0700 2 req 3070:0.000150::POST > /dysk/files/test.test% > 40op.pl/DOMIWENT%202013/DW%202013_03_27/PROJEKTY%202012/ZB%20KROL

[ceph-users] ceph s3 allowed characters

2013-08-30 Thread Dominik Mostowiec
Hi, I got err (400) from radosgw on request: 2013-08-30 08:09:19.396812 7f3b307c0700 2 req 3070:0.000150::POST /dysk/files/test.test%40op.pl/DOMIWENT%202013/DW%202013_03_27/PROJEKTY%202012/ZB%20KROL/Szko%C5%82a%20%C5%81aziska%20ZB%20KROL/sala-%A3aziska_Dolne_PB-0_went_15_11_06%20Layout1%20%283%29.

Re: [ceph-users] rbd mapping failes

2013-08-30 Thread Bernhard Glomm
mount cephfs fails too (I added 3 MDS). Does anybody have any ideas how to debug this further? I used ceph-deploy to create the cluster, the xfs filesystem on the OSD's is okay, I can copy, remove and open files on that partition, so I assume it's something inside of ceph??? TIA Bernhard P.S.: Version is c

Re: [ceph-users] ceph-deploy howto

2013-08-30 Thread Alfredo Deza
On Fri, Aug 30, 2013 at 6:17 AM, Bernhard Glomm wrote: > Is there an _actual_ howto, man page or other documentation > about ceph-deploy? > There are a couple of places for ceph-deploy documentation. The first one is the Github project page: https://github.com/ceph/ceph-deploy#ceph-deploydep
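To the two concrete questions, hedged examples (hostnames and paths are made up, and directory-style OSDs assume a ceph-deploy version that accepts a path in place of a disk):

    ceph-deploy new ceph001 ceph002 ceph003
    # edit the generated ./ceph.conf and add under [global]:
    #   public network  = 192.168.1.0/24
    #   cluster network = 10.10.10.0/24
    ceph-deploy mon create ceph001 ceph002 ceph003
    ceph-deploy osd prepare ceph001:/var/local/osd0     # a directory instead of a whole disk
    ceph-deploy osd activate ceph001:/var/local/osd0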

[ceph-users] ceph-mon runs on port 6800 not 6789

2013-08-30 Thread 이주헌
I have 1 MDS and 3 OSDs. I installed them via ceph-deploy (dumpling 0.67.2). At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon launched on port 6800, not 6789. This is the result of 'ceph -s': --- cluster c59d13fd-c4c9-4cd0-b2ed-b654428b3171 health HEALTH_WARN 1 mons

Re: [ceph-users] trouble with ceph-deploy

2013-08-30 Thread Pavel Timoschenkov
<<> wrote: Hi. If I us

Re: [ceph-users] Location field empty in Glance when instance to image

2013-08-30 Thread Toni F. [ackstorm]
Sorry, wrong list. Anyway I take this opportunity to ask two questions: Does somebody know how I can download an image or snapshot? How are the direct urls built? rbd://9ed296cb-e9a7-4d36-b728-0ddc5f249ca0/images/7729788f-b80a-4d90-b3c7-6f61f5ebd535/snap This is from an image. I need to build this
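On the second question, a hedged breakdown of how that URL appears to be assembled, plus how the same snapshot could be pulled with rbd (the glance option name is the standard one, not confirmed in this thread):

    # rbd://<cluster fsid>/<pool>/<image id>/<snapshot name>
    #   fsid  : 9ed296cb-e9a7-4d36-b728-0ddc5f249ca0
    #   pool  : images
    #   image : 7729788f-b80a-4d90-b3c7-6f61f5ebd535
    #   snap  : snap
    # glance only exposes direct_url when show_image_direct_url = True in glance-api.conf
    rbd export images/7729788f-b80a-4d90-b3c7-6f61f5ebd535@snap /tmp/glance-image.raw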

[ceph-users] Location field empty in Glance when instance to image

2013-08-30 Thread Toni F. [ackstorm]
Hi all, With a running boot-from-volume instance backed by ceph, I launch the command to create an image from the instance. All seems to work fine, but if I look in the database I notice that the location is empty: mysql> select * from images where id="b7674970-5d60-41da-bbb9-2ef10955fbbe" \G; ***

[ceph-users] ceph-deploy howto

2013-08-30 Thread Bernhard Glomm
Is there an _actual_ howto, man page or other documentation about ceph-deploy? I can't find any documentation about how to specify different networks (storage/public), use folders or partitions instead of disks... TIA Bernhard --

Re: [ceph-users] Removing OSD's on a ceph-deployed cluster

2013-08-30 Thread Vladislav Gorbunov
We run ceph osd crush reweight osd.{osd-num} 0 before ceph osd out {osd-num} to avoid a double cluster rebalance after ceph osd crush remove osd.{osd-num}.
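Spelled out as a full sequence (a sketch; N is the OSD id and the stop command depends on the init system):

    ceph osd crush reweight osd.N 0     # data drains here; wait for active+clean
    ceph osd out N
    sudo stop ceph-osd id=N             # upstart; or: sudo service ceph stop osd.N
    ceph osd crush remove osd.N         # no second rebalance, the weight is already 0
    ceph auth del osd.N
    ceph osd rm N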

[ceph-users] rbd mapping failes

2013-08-30 Thread Bernhard Glomm
Hi all, due to a problem with ceph-deploy I currently use deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/ raring main (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f)) Now the initialization of the cluster works like a charm, ceph health is o

[ceph-users] Removing OSD's on a ceph-deployed cluster

2013-08-30 Thread Fuchs, Andreas (SwissTXT)
Hi all On a Ceph cluster deployed with ceph-deploy, the instructions in the docs to remove an OSD seem outdated to us. We did the following steps: ceph osd out {osd-num} sudo /etc/init.d/ceph stop osd.{osd-num} -> WON'T work as there is no osd.disk declaration in ceph.conf and has to be replaced b