Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Christian Balzer
On Thu, 08 May 2014 08:41:54 +0200 (CEST) Alexandre DERUMIER wrote: > Stupid question : Is your areca 4GB cache shared between ssd journal and > osd ? > Not a stupid question. I made that mistake about 3 years ago in a DRBD setup, OS and activity log SSDs on the same controller as the storage di

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Alexandre DERUMIER
Stupid question: Is your Areca 4GB cache shared between the SSD journal and the OSDs, or only used by the OSDs? - Original Message - From: "Christian Balzer" To: ceph-users@lists.ceph.com Sent: Thursday, 8 May 2014 08:26:33 Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing de

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Christian Balzer
Hello, On Wed, 7 May 2014 22:13:53 -0700 Gregory Farnum wrote: > Oh, I didn't notice that. I bet you aren't getting the expected > throughput on the RAID array with OSD access patterns, and that's > applying back pressure on the journal. > I doubt that based on what I see in terms of local perfo

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Christian Balzer
Hello, On Thu, 08 May 2014 06:33:51 +0200 (CEST) Alexandre DERUMIER wrote: > Hi Christian, > > Have you tried without RAID6, to have more OSDs? No, and that is neither an option nor the reason for any performance issues here. If you re-read my original mail it clearly states that the same fio

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Gregory Farnum
Oh, I didn't notice that. I bet you aren't getting the expected throughput on the RAID array with OSD access patterns, and that's applying back pressure on the journal. When I suggested other tests, I meant with and without Ceph. One particular one is OSD bench. That should be interesting to try a
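For reference, the OSD bench mentioned above can be run per OSD with something like the following (the OSD id and the sizes are just examples; 1 GB total in 4 MB writes is the default):
  # write 1 GB in 4 MB chunks through osd.0, below the RBD/librados layers
  ceph tell osd.0 bench 1073741824 4194304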

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Alexandre DERUMIER
Hi Christian, Have you tried without RAID6, to have more OSDs? (How many disks do you have behind the RAID6?) Also, I know that direct I/O can be quite slow with Ceph; maybe you can try without --direct=1 and also enable rbd_cache in ceph.conf: [client] rbd cache = true - Mail origin
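The rbd_cache setting referred to above, plus a non-direct fio variant, looks roughly like this (only the [client] snippet is quoted from the thread; the fio block size and iodepth are assumptions):
  # /etc/ceph/ceph.conf on the client
  [client]
      rbd cache = true

  # rerun fio without O_DIRECT so librbd caching can take effect
  fio --size=400m --ioengine=libaio --invalidate=1 --numjobs=1 --rw=randwrite \
      --name=fiojob --blocksize=4k --iodepth=32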

Re: [ceph-users] Help -Ceph deployment in Single node Like Devstack

2014-05-07 Thread Neil Levine
Loic's micro-osd.sh script is as close to single push button as it gets: http://dachary.org/?p=2374 Not exactly a production cluster but it at least allows you to start experimenting on the CLI. Neil On Wed, May 7, 2014 at 7:56 PM, Patrick McGarry wrote: > Hey, > > Sorry for the delay, I have

Re: [ceph-users] Deep-Scrub Scheduling

2014-05-07 Thread Aaron Ten Clay
Mike, You can find the last scrub info for a given PG with "ceph pg x.yy query". -Aaron On Wed, May 7, 2014 at 8:47 PM, Mike Dawson wrote: > Perhaps, but if that were the case, would you expect the max concurrent > number of deep-scrubs to approach the number of OSDs in the cluster? > > I have
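For example (the PG id is a placeholder):
  # show the last scrub / deep-scrub timestamps for one placement group
  ceph pg 2.1a query | grep -E 'last_(deep_)?scrub_stamp'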

Re: [ceph-users] Deep-Scrub Scheduling

2014-05-07 Thread Mike Dawson
Perhaps, but if that were the case, would you expect the max concurrent number of deep-scrubs to approach the number of OSDs in the cluster? I have 72 OSDs in this cluster and concurrent deep-scrubs seem to peak at a max of 12. Do pools (two in use) and replication settings (3 copies in both p

Re: [ceph-users] Deep-Scrub Scheduling

2014-05-07 Thread Gregory Farnum
Is it possible you're running into the max scrub intervals and jumping up to one-per-OSD from a much lower normal rate? On Wednesday, May 7, 2014, Mike Dawson wrote: > My write-heavy cluster struggles under the additional load created by > deep-scrub from time to time. As I have instrumented the
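The intervals in question are per-OSD tunables; a sketch with what I believe were the firefly-era defaults (in seconds):
  [osd]
      osd scrub min interval = 86400        # light scrub at most daily per PG
      osd scrub max interval = 604800       # force a light scrub after a week regardless of load
      osd deep scrub interval = 604800      # deep scrub each PG roughly weekly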

Re: [ceph-users] Help -Ceph deployment in Single node Like Devstack

2014-05-07 Thread Patrick McGarry
Hey, Sorry for the delay, I have been traveling in Asia. This question should probably go to the ceph-user list (cc'd). Right now there is no single push-button deployment for Ceph like devstack (that I'm aware of)...but we have several options in terms of orchestration and deployment (including o

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Christian Balzer
On Wed, 7 May 2014 18:37:48 -0700 Gregory Farnum wrote: > On Wed, May 7, 2014 at 5:57 PM, Christian Balzer wrote: > > > > Hello, > > > > ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs each. The > > journals are on (separate) DC 3700s, the actual OSDs are RAID6 behind > > an Areca 1882 wi

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Gregory Farnum
On Wed, May 7, 2014 at 5:57 PM, Christian Balzer wrote: > > Hello, > > ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs each. The journals > are on (separate) DC 3700s, the actual OSDs are RAID6 behind an Areca 1882 > with 4GB of cache. > > Running this fio: > > fio --size=400m --ioengine=l

[ceph-users] Deep-Scrub Scheduling

2014-05-07 Thread Mike Dawson
My write-heavy cluster struggles under the additional load created by deep-scrub from time to time. As I have instrumented the cluster more, it has become clear that there is something I cannot explain happening in the scheduling of PGs to undergo deep-scrub. Please refer to these images [0][1

[ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-07 Thread Christian Balzer
Hello, ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs each. The journals are on (separate) DC 3700s, the actual OSDs are RAID6 behind an Areca 1882 with 4GB of cache. Running this fio: fio --size=400m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1 --rw=randwrite --name=fiojob
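The fio invocation is cut off above; a representative reconstruction of this kind of 4k random-write test (block size, iodepth, runtime and target are assumptions, not necessarily the exact values used):
  fio --size=400m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1 \
      --rw=randwrite --name=fiojob --blocksize=4k --iodepth=128 --runtime=60 \
      --directory=/mnt/rbd    # a file on a mounted RBD image, or --filename=/dev/rbd0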

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Craig Lewis
On 5/7/14 15:33 , Dimitri Maziuk wrote: On 05/07/2014 04:11 PM, Craig Lewis wrote: On 5/7/14 13:40 , Sergey Malinin wrote: Check dmesg and SMART data on both nodes. This behaviour is similar to failing hdd. It does sound like a failing disk... but there's nothing in dmesg, and smartmontools

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Gilles Mocellin
On 07/05/2014 15:23, Vlad Gorbunov wrote: It's easy to install tgtd with ceph support. ubuntu 12.04 for example: Connect ceph-extras repo: echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list Install tgtd with rbd

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Vladislav Gorbunov
>Should this be done on the iscsi target server? I have a default option to >enable rbd caching as it speeds things up on the vms. Yes, only on the iscsi target servers. 2014-05-08 1:29 GMT+12:00 Andrei Mikhailovsky : >> It's important to disable the rbd cache on tgtd host. Set in >> /etc/ceph/ce
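The setting in question, as a sketch for /etc/ceph/ceph.conf on the tgtd hosts only (VM clients can keep caching enabled):
  [client]
      rbd cache = false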

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Dimitri Maziuk
On 05/07/2014 04:11 PM, Craig Lewis wrote: > On 5/7/14 13:40 , Sergey Malinin wrote: >> Check dmesg and SMART data on both nodes. This behaviour is similar to >> failing hdd. >> >> > > It does sound like a failing disk... but there's nothing in dmesg, and > smartmontools hasn't emailed me about a

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Craig Lewis
On 5/7/14 13:40 , Sergey Malinin wrote: Check dmesg and SMART data on both nodes. This behaviour is similar to failing hdd. It does sound like a failing disk... but there's nothing in dmesg, and smartmontools hasn't emailed me about a failing disk. The same thing is happening to more than

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Sergey Malinin
Check dmesg and SMART data on both nodes. This behaviour is similar to failing hdd. On Wednesday, May 7, 2014 at 23:28, Craig Lewis wrote: > On 5/7/14 13:15 , Sergey Malinin wrote: > > Is there anything unusual in dmesg at osd.5? > > Nothing in dmesg, but ceph-osd.5.log has plenty. I've att
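Typical checks for a suspect drive (the device name is a placeholder):
  dmesg | egrep -i 'ata|sd|error|fail' | tail -n 50   # kernel-level I/O errors
  smartctl -a /dev/sdc                                # reallocated/pending sectors, error log
  smartctl -t short /dev/sdc                          # start a short self-test, re-check with -a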

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Sergey Malinin
Is there anything unusual in dmesg at osd.5? On Wednesday, May 7, 2014 at 23:09, Craig Lewis wrote: > I already have osd_max_backfill = 1, and osd_recovery_op_priority = 1. > > osd_recovery_max_active is the default 15, so I'll give that a try... some > OSDs timed out during the injectargs.

Re: [ceph-users] Bulk storage use case

2014-05-07 Thread Cedric Lemarchand
Some more details: the IO pattern will be around 90% write / 10% read, mainly sequential. Recent posts show that the max_backfills, recovery_max_active and recovery_op_priority settings will be helpful in case of backfilling/rebalancing. Any thoughts on such a hardware setup? On 07/05/2014 11:43, Cedric

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Craig Lewis
I already have osd_max_backfill = 1, and osd_recovery_op_priority = 1. osd_recovery_max_active is the default 15, so I'll give that a try... some OSDs timed out during the injectargs. I added it to ceph.conf, and restarted them all. I was running RadosGW-Agent, but it's down now. I disable
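For reference, persisting those throttles in ceph.conf looks roughly like this (a sketch; the recovery_max_active value is the one being experimented with in this thread):
  [osd]
      osd max backfills = 1
      osd recovery op priority = 1
      osd recovery max active = 1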

Re: [ceph-users] Ovirt

2014-05-07 Thread Wido den Hollander
On 05/07/2014 08:14 PM, Neil Levine wrote: We were actually talking to Red Hat about oVirt support before the acquisition. It's on the To Do list but no dates yet. Of course, someone from the community is welcome to step up and do the work. I looked at it some time ago. I noticed that oVirt re

[ceph-users] [ANN] ceph-deploy 1.5.2 released

2014-05-07 Thread Alfredo Deza
Hi All, There is a new bug-fix release of ceph-deploy, the easy deployment tool for Ceph. This release comes with two important changes: * fix usage of `--` when removing packages in Debian/Ubuntu * Default to Firefly when installing Ceph. Make sure you upgrade! -Alfredo _
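Since the default is now Firefly, pinning a specific release at install time should look roughly like this (a sketch; host names are placeholders and the --release flag is assumed from the 1.5.x CLI):
  ceph-deploy install --release firefly node1 node2 node3
  # or stay on the previous stable series:
  ceph-deploy install --release emperor node1 node2 node3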

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Mike Dawson
Craig, I suspect the disks in question are seeking constantly and the spindle contention is causing significant latency. A strategy of throttling backfill/recovery and reducing client traffic tends to work for me. 1) You should make sure recovery and backfill are throttled: ceph tell osd.* in
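The injectargs line is truncated above; the throttling meant is roughly this (values follow the thread, syntax is the usual injectargs form):
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'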

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Gregory Farnum
On Wed, May 7, 2014 at 11:18 AM, Mike Dawson wrote: > > On 5/7/2014 11:53 AM, Gregory Farnum wrote: >> >> On Wed, May 7, 2014 at 8:44 AM, Dan van der Ster >> wrote: >>> >>> Hi, >>> >>> >>> Sage Weil wrote: >>> >>> * *Primary affinity*: Ceph now has the ability to skew selection of >>>OSDs as

Re: [ceph-users] cannot revert lost objects

2014-05-07 Thread Kevin Horan
It is still "querying", after 6 days now. I have not tried any scrubbing options, I'll try them just to see. My next idea was to clobber osd 8, the one it is supposedly "querying". I ran into this problem too. I don't know what I did to fix it. I tried ceph pg scrub , ceph pg de

Re: [ceph-users] [Ceph-community] How to install CEPH on CentOS 6.3

2014-05-07 Thread Aaron Ten Clay
On Tue, May 6, 2014 at 7:35 PM, Ease Lu wrote: > Hi All, > Following the CEPH online document, I tried to install CEPH on > centos 6.3: > > The step: ADD CEPH > I cannot find a centos distro, so I used el6. When I reach the > "INSTALL VIRTUALIZATION FOR BLOCK DEVICE" step, I got:

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Mike Dawson
On 5/7/2014 11:53 AM, Gregory Farnum wrote: On Wed, May 7, 2014 at 8:44 AM, Dan van der Ster wrote: Hi, Sage Weil wrote: * *Primary affinity*: Ceph now has the ability to skew selection of OSDs as the "primary" copy, which allows the read workload to be cheaply skewed away from parts

Re: [ceph-users] Ovirt

2014-05-07 Thread Neil Levine
We were actually talking to Red Hat about oVirt support before the acquisition. It's on the To Do list but no dates yet. Of course, someone from the community is welcome to step up and do the work. Neil On Wed, May 7, 2014 at 9:49 AM, Nathan Stratton wrote: > Now that everyone will be one big ha

Re: [ceph-users] health HEALTH_WARN too few pgs per osd (16 < min 20)

2014-05-07 Thread Sergey Malinin
On Wednesday, May 7, 2014 at 20:28, *sm1Ly wrote: > > [sm1ly@salt1 ceph]$ sudo ceph -s > cluster 0b2c9c20-985a-4a39-af8e-ef2325234744 > health HEALTH_WARN 19 pgs degraded; 192 pgs stuck unclean; recovery > 21/42 objects degraded (50.000%); too few pgs per osd (16 < min 20) > You might
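The warning clears once the pools have more placement groups; a sketch (the pool name and counts are examples, pg_num can only be increased, and pgp_num should follow it):
  ceph osd pool set rbd pg_num 512
  ceph osd pool set rbd pgp_num 512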

Re: [ceph-users] health HEALTH_WARN too few pgs per osd (16 < min 20)

2014-05-07 Thread Henrik Korkuc
On 2014.05.07 20:28, *sm1Ly wrote: > I deployed my cluster with these commands. > > mkdir "clustername" > > cd "clustername" > > ceph-deploy install mon1 mon2 mon3 mds1 mds2 mds3 osd200 > > ceph-deploy new mon1 mon2 mon3 > > ceph-deploy mon create mon1 mon2 mon3 > > ceph-deploy gatherk

[ceph-users] health HEALTH_WARN too few pgs per osd (16 < min 20)

2014-05-07 Thread *sm1Ly
I deployed my cluster with these commands. mkdir "clustername" cd "clustername" ceph-deploy install mon1 mon2 mon3 mds1 mds2 mds3 osd200 ceph-deploy new mon1 mon2 mon3 ceph-deploy mon create mon1 mon2 mon3 ceph-deploy gatherkeys mon1 mon2 mon3 ceph-deploy osd prepare --fs-type ext4 osd20

[ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Craig Lewis
The 5 OSDs that are down have all been kicked out for being unresponsive. The 5 OSDs are getting kicked faster than they can complete the recovery+backfill. The number of degraded PGs is growing over time. root@ceph0c:~# ceph -w cluster 1604ec7a-6ceb-42fc-8c68-0a7896c4e120 health HE

Re: [ceph-users] Delete pool .rgw.bucket and objects within it

2014-05-07 Thread Thanh Tran
Thanks Irek, it is correct as you said. Best regards, Thanh Tran On Wed, May 7, 2014 at 2:15 PM, Irek Fasikhov wrote: > Yes, delete all the objects stored in the pool. > > > 2014-05-07 6:58 GMT+04:00 Thanh Tran : > >> Hi, >> >> If I use the command "ceph osd pool delete .rgw.bucket .rgw.bucket >> -

[ceph-users] Ovirt

2014-05-07 Thread Nathan Stratton
Now that everyone will be one big happy family, any news on ceph support of oVirt? ><> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listi

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Gregory Farnum
On Wed, May 7, 2014 at 8:44 AM, Dan van der Ster wrote: > Hi, > > > Sage Weil wrote: > > * *Primary affinity*: Ceph now has the ability to skew selection of > OSDs as the "primary" copy, which allows the read workload to be > cheaply skewed away from parts of the cluster without migrating any

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Sage Weil
On Wed, 7 May 2014, Dan van der Ster wrote: > Hi, > > Sage Weil wrote: > > * *Primary affinity*: Ceph now has the ability to skew selection of > OSDs as the "primary" copy, which allows the read workload to be > cheaply skewed away from parts of the cluster without migrating any > data. >

Re: [ceph-users] Cache tiering

2014-05-07 Thread Mark Nelson
On 05/07/2014 10:38 AM, Gregory Farnum wrote: On Wed, May 7, 2014 at 8:13 AM, Dan van der Ster wrote: Hi, Gregory Farnum wrote: 3) The cost of a cache miss is pretty high, so they should only be used when the active set fits within the cache and doesn't change too frequently. Can you rough

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Dan van der Ster
Hi, Sage Weil wrote: **Primary affinity*: Ceph now has the ability to skew selection of OSDs as the "primary" copy, which allows the read workload to be cheaply skewed away from parts of the cluster without migrating any data. Can you please elaborate a bit on this one? I found the bl
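A minimal sketch of using the feature being discussed (the monitors have to allow the new field first; the OSD id and weight are examples):
  # allow primary affinity to be set (firefly defaults this to off)
  ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity 1'
  # make osd.7 half as likely to be selected as primary for the PGs it holds
  ceph osd primary-affinity osd.7 0.5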

Re: [ceph-users] Cache tiering

2014-05-07 Thread Gregory Farnum
On Wed, May 7, 2014 at 8:13 AM, Dan van der Ster wrote: > Hi, > > > Gregory Farnum wrote: > > 3) The cost of a cache miss is pretty high, so they should only be > used when the active set fits within the cache and doesn't change too > frequently. > > > Can you roughly quantify how long a cache mis

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Sage Weil
On Wed, 7 May 2014, Kenneth Waegeman wrote: > - Message from Sage Weil - > Date: Tue, 6 May 2014 18:05:19 -0700 (PDT) > From: Sage Weil > Subject: [ceph-users] v0.80 Firefly released > To: ceph-de...@vger.kernel.org, ceph-us...@ceph.com > > > > We did it! Firefly v0.80 is b

Re: [ceph-users] Cache tiering

2014-05-07 Thread Sage Weil
On Wed, 7 May 2014, Gandalf Corvotempesta wrote: > Very simple question: what happens if the server bound to the cache pool goes down? > For example, a read-only cache could be achieved by using a single > server with no redundancy. > Is ceph smart enough to detect that the cache is unavailable and > transpa

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Kenneth Waegeman
- Message from Alexandre DERUMIER - Date: Wed, 07 May 2014 15:21:55 +0200 (CEST) From: Alexandre DERUMIER Subject: Re: [ceph-users] v0.80 Firefly released To: Kenneth Waegeman Cc: ceph-us...@ceph.com, Sage Weil Do we need a journal when using this back-end? no

Re: [ceph-users] Cache tiering

2014-05-07 Thread Dan van der Ster
Hi, Gregory Farnum wrote: 3) The cost of a cache miss is pretty high, so they should only be used when the active set fits within the cache and doesn't change too frequently. Can you roughly quantify how long a cache miss would take? Naively I'd assume it would turn one read into a read from

Re: [ceph-users] Explicit F2FS support (was: v0.80 Firefly released)

2014-05-07 Thread Sage Weil
On Wed, 7 May 2014, Andrey Korolyov wrote: > Hello, > > first of all, congratulations to Inktank and thank you for your awesome work! > > Although exploiting native f2fs abilities, as with btrfs, sounds > awesome for a matter of performance, I wonder when kv db is able to > practically give users

Re: [ceph-users] Cache tiering

2014-05-07 Thread Gregory Farnum
On Wed, May 7, 2014 at 5:05 AM, Gandalf Corvotempesta wrote: > Very simple question: what happens if the server bound to the cache pool goes down? > For example, a read-only cache could be achieved by using a single > server with no redundancy. > Is ceph smart enough to detect that the cache is unavailable
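For context, a cache tier is attached roughly like this in firefly (pool names are placeholders); the availability question above is exactly why the cache pool itself should be replicated across hosts rather than live on a single server:
  ceph osd tier add rbd rbd-cache                # attach rbd-cache as a tier of the base pool
  ceph osd tier cache-mode rbd-cache writeback   # or 'readonly' for a read cache
  ceph osd tier set-overlay rbd rbd-cache        # direct client traffic through the cache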

Re: [ceph-users] [rados-java] Hi, I am a newer for ceph . And I found rados-java in github, but there are some problems for me .

2014-05-07 Thread Wido den Hollander
On 05/05/2014 05:39 AM, peng wrote: I have installed the latest JDK and set the ant target and source to 1.7, but I always encounter the same error message. You also need jna-platform.jar to compile rados-java. I suggest you place that in /usr/share/java as well. Wido -- Orig

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrei Mikhailovsky
> It's important to disable the rbd cache on tgtd host. Set in > /etc/ceph/ceph.conf: Should this be done on the iscsi target server? I have a default option to enable rbd caching as it speeds things up on the vms. Thanks Andrei - Original Message - From: "Vlad Gorbunov" To

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Vlad Gorbunov
It's easy to install tgtd with ceph support. ubuntu 12.04 for example: Connect ceph-extras repo: echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list Install tgtd with rbd support: apt-get update apt-get install tgt I
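The preview is cut off above; a sketch of the remaining steps (the target IQN, pool and image names are examples):
  apt-get update && apt-get install tgt

  # /etc/tgt/conf.d/ceph.conf -- export an RBD image as an iSCSI LUN
  <target iqn.2014-05.com.example:rbd-store>
      driver iscsi
      bs-type rbd
      backing-store rbd/iscsi-image
  </target>

  tgt-admin --update ALL   # reload the target configuration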

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Alexandre DERUMIER
>>Do we need a journal when using this back-end? No, there is no journal with the key/value store. - Original Message - From: "Kenneth Waegeman" To: "Sage Weil" Cc: ceph-us...@ceph.com Sent: Wednesday, 7 May 2014 15:06:50 Subject: Re: [ceph-users] v0.80 Firefly released - Message from S
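If I remember right, the experimental backend is selected per OSD like this in 0.80 (the option value is an assumption from the firefly-era naming, and the backend is not production-ready):
  [osd]
      osd objectstore = keyvaluestore-dev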

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Kenneth Waegeman
- Message from Sage Weil - Date: Tue, 6 May 2014 18:05:19 -0700 (PDT) From: Sage Weil Subject: [ceph-users] v0.80 Firefly released To: ceph-de...@vger.kernel.org, ceph-us...@ceph.com We did it! Firefly v0.80 is built and pushed out to the ceph.com repositories. This

Re: [ceph-users] Cache tiering

2014-05-07 Thread Wido den Hollander
On 05/07/2014 02:05 PM, Gandalf Corvotempesta wrote: Very simple question: what happens if the server bound to the cache pool goes down? For example, a read-only cache could be achieved by using a single server with no redundancy. Is ceph smart enough to detect that the cache is unavailable and transparent

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Sergey Malinin
http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices On Wednesday, May 7, 2014 at 15:06, Andrei Mikhailovsky wrote: > > Vlad, is there a howto somewhere describing the steps on how to setup iscsi > multipathing over ceph? It looks like a good alternati

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrei Mikhailovsky
Vlad, is there a howto somewhere describing the steps on how to setup iscsi multipathing over ceph? It looks like a good alternative to nfs Thanks - Original Message - From: "Vlad Gorbunov" To: "Andrei Mikhailovsky" Cc: ceph-users@lists.ceph.com Sent: Wednesday, 7 May, 2014 12:

[ceph-users] Cache tiering

2014-05-07 Thread Gandalf Corvotempesta
Very simple question: what happens if the server bound to the cache pool goes down? For example, a read-only cache could be achieved by using a single server with no redundancy. Is ceph smart enough to detect that the cache is unavailable and transparently redirect all requests to the main pool as usual? Th

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Cedric Lemarchand
I am surprised that CephFS isn't proposed as an option, given that it removes the not-negligible block storage layer from the picture. I always feel uncomfortable stacking storage technologies or file systems (here NFS over XFS over iSCSI over RBD over RADOS) and try to stay as close as possible to the "KIS

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Vlad Gorbunov
For XenServer or VMware it is better to use an iSCSI client against tgtd with ceph support. You can install tgtd on an OSD or monitor server and use multipath for failover. On Wed, May 7, 2014 at 9:47 PM, Andrei Mikhailovsky wrote: > Hello guys, > I would like to offer NFS service to the XenServer and VMWa

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrija Panic
Mapping an RBD image to 2 or more servers is the same as a shared storage device (SAN) - so from there on, you could do any clustering you want, based on what Wido said... On 7 May 2014 12:43, Andrei Mikhailovsky wrote: > > Wido, would this work if I were to run nfs over two or more servers with

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrei Mikhailovsky
Wido, would this work if I were to run nfs over two or more servers with virtual IP? I can see what you've suggested working in a one server setup. What about if you want to have two nfs servers in an active/backup or active/active setup? Thanks Andrei - Original Message - Fro

Re: [ceph-users] advice with hardware configuration

2014-05-07 Thread Christian Balzer
On Wed, 07 May 2014 11:01:33 +0200 Xabier Elkano wrote: > On 06/05/14 18:40, Christian Balzer wrote: > > Hello, > > > > On Tue, 06 May 2014 17:07:33 +0200 Xabier Elkano wrote: > > > >> Hi, > >> > >> I'm designing a new ceph pool with new hardware and I would like to > >> receive some suggestion

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Wido den Hollander
On 05/07/2014 11:46 AM, Andrei Mikhailovsky wrote: Hello guys, I would like to offer NFS service to the XenServer and VMWare hypervisors for storing vm images. I am currently running ceph rbd with kvm, which is working reasonably well. What would be the best way of running NFS services over CEP
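One simple shape of NFS-over-RBD on a single gateway, as a sketch (names and sizes are placeholders; active/active needs proper clustering on top, as discussed later in the thread):
  rbd create nfs-store --size 1048576             # 1 TB image in the default 'rbd' pool
  rbd map nfs-store                               # appears as /dev/rbd/rbd/nfs-store
  mkfs.xfs /dev/rbd/rbd/nfs-store
  mount /dev/rbd/rbd/nfs-store /export/nfs-store
  echo '/export/nfs-store 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
  exportfs -ra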

[ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrei Mikhailovsky
Hello guys, I would like to offer NFS service to the XenServer and VMWare hypervisors for storing vm images. I am currently running ceph rbd with kvm, which is working reasonably well. What would be the best way of running NFS services over CEPH, so that the XenServer and VMWare's vm disk im

[ceph-users] Bulk storage use case

2014-05-07 Thread Cedric Lemarchand
Hello, This build is only intended for archiving purposes; what matters here is lowering the $/TB/W ratio. Access to the storage would be via radosgw, installed on each node. I need each node to sustain an average 1Gb/s write rate, which I think will not be a problem. Erasure coding will
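Erasure coding in firefly is configured per pool; a sketch with example parameters (the k/m split, failure domain and PG counts are assumptions to be sized against the actual node count):
  ceph osd erasure-code-profile set archive k=8 m=3 ruleset-failure-domain=host
  ceph osd pool create ec-archive 2048 2048 erasure archive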

[ceph-users] Change size journal's blocks from 4k to another.

2014-05-07 Thread Mike
Hello. In my Ceph installation I use an SSD drive for the journal with direct access to a block device. When an OSD starts I see this string in the log file: ... 1 journal _open /dev/sda1 fd 22: 19327352832 bytes, block size 4096 bytes, directio = 1, aio = 1 ... How can I change the block size from 4k to

Re: [ceph-users] advice with hardware configuration

2014-05-07 Thread Xabier Elkano
On 06/05/14 18:40, Christian Balzer wrote: > Hello, > > On Tue, 06 May 2014 17:07:33 +0200 Xabier Elkano wrote: > >> Hi, >> >> I'm designing a new ceph pool with new hardware and I would like to >> receive some suggestions. >> I want to use a replica count of 3 in the pool and the idea is to buy

Re: [ceph-users] advice with hardware configuration

2014-05-07 Thread Xabier Elkano
On 06/05/14 19:38, Sergey Malinin wrote: > If you plan to scale up in the future you could consider the following config > to start with: > > Pool size=2 > 3 x servers with OS+journal on 1 ssd, 3 journal ssds, 4 x 900 gb data disks. > It will get you 5+ TB capacity and you will be able to incre

[ceph-users] Explicit F2FS support (was: v0.80 Firefly released)

2014-05-07 Thread Andrey Korolyov
Hello, first of all, congratulations to Inktank and thank you for your awesome work! Although exploiting native f2fs abilities, as with btrfs, sounds awesome from a performance standpoint, I wonder when the kv db will be able to practically give users with 'legacy' file systems the ability to conduct CoW operat

Re: [ceph-users] advice with hardware configuration

2014-05-07 Thread Xabier Elkano
On 06/05/14 19:31, Cedric Lemarchand wrote: > On 06/05/2014 17:07, Xabier Elkano wrote: >> the goal is performance over capacity. > I am sure you already considered the "full SSD" option, did you? > Yes, I considered the full SSD option, but it is very expensive. Using intel 520 series eac

[ceph-users] How to install CEPH on CentOS 6.3

2014-05-07 Thread Ease Lu
Hi All, Following the CEPH online document, I tried to install CEPH on centos 6.3: The step: ADD CEPH. I cannot find a centos distro, so I used el6. When I reach the "INSTALL VIRTUALIZATION FOR BLOCK DEVICE" step, I got: Error: Package: 2:qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64 (

Re: [ceph-users] Delete pool .rgw.bucket and objects within it

2014-05-07 Thread Irek Fasikhov
Yes, delete all the objects stored in the pool. 2014-05-07 6:58 GMT+04:00 Thanh Tran : > Hi, > > If i use command "ceph osd pool delete .rgw.bucket .rgw.bucket > --yes-i-really-really-mean-it" to delete the pool .rgw.bucket, will this > delete the pool, its objects and clean the data on osds? >
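The exact form, for reference (it removes the pool and every object in it; space on the OSDs is then reclaimed in the background):
  ceph osd pool delete .rgw.bucket .rgw.bucket --yes-i-really-really-mean-it
  rados df   # watch the space being freed afterwards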