Re: [ceph-users] Ceph + VMWare

2016-10-18 Thread Alex Gorbachev
On Tuesday, October 18, 2016, Frédéric Nass wrote: > Hi Alex, > > Just to know, what kind of backstore are you using within Storcium? > vdisk_fileio > or vdisk_blockio? > > I see your agents can handle both: http://www.spinics.net/lists/ >

Re: [ceph-users] Ceph + VMWare

2016-10-18 Thread Frédéric Nass
Hi Alex, Just to know, what kind of backstore are you using within Storcium? vdisk_fileio or vdisk_blockio? I see your agents can handle both: http://www.spinics.net/lists/ceph-users/msg27817.html Regards, Frédéric. On 06/10/2016 at 16:01, Alex Gorbachev wrote: On Wed, Oct 5,

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-18 Thread Lars Marowsky-Bree
On 2016-10-18T11:36:42, Christian Balzer wrote: > > Cache tiering in Ceph works for this use case. I assume you mean in > > your UI? > May well be, but Oliver suggested that cache-tiering is not supported with > Hammer (0.94.x), which it most certainly is. Right, we've got some

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-18 Thread William Josefsson
On Mon, Oct 17, 2016 at 6:16 PM, Nick Fisk wrote: >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> William Josefsson >> Sent: 17 October 2016 10:39 >> To: n...@fisk.me.uk >> Cc: ceph-users@lists.ceph.com >> Subject: Re:

Re: [ceph-users] Does anyone know why cephfs does not support EC pools?

2016-10-18 Thread Lars Marowsky-Bree
On 2016-10-18T00:06:57, Erick Perez - Quadrian Enterprises wrote: > Is EC in the roadmap for Ceph? Can't seem to find it. My question is because > "all others" (Nutanix, Hypergrid) do EC storage for VMs as the default way > of storage. It seems EC in Ceph (as of Sept

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-18 Thread William Josefsson
Something seems to have happened to the RBD image I was writing to. Earlier my instances had /dev/vdb attached, and I ran my fio tests against it. After a while the performance got terrible; I'm not sure why this happened. However, I have deleted and recreated the block device attached to the instance, and the IOPS got

[ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-18 Thread John Spray
Hi all, Someone asked me today how to get a list of down MDS daemons, and I explained that currently the MDS simply forgets about any standby that stops sending beacons. That got me thinking about the case where a standby dies while the active MDS remains up -- the cluster has gone into a

[ceph-users] v11.0.2 released

2016-10-18 Thread Abhishek L
This development checkpoint release includes a lot of changes and improvements to Kraken. This is the first release introducing ceph-mgr, a new daemon which provides additional monitoring & interfaces to external monitoring/management systems. There are also many improvements to bluestore, RGW

Re: [ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-18 Thread Dan van der Ster
+1 I would find this warning useful. On Tue, Oct 18, 2016 at 1:46 PM, John Spray wrote: > Hi all, > > Someone asked me today how to get a list of down MDS daemons, and I > explained that currently the MDS simply forgets about any standby that > stops sending beacons. That

Re: [ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-18 Thread Wido den Hollander
> On 18 October 2016 at 14:06, Dan van der Ster wrote: > > > +1 I would find this warning useful. > +1 Probably make it configurable, say, you want at least X standby MDS to be available before WARN. But in general, yes, please! Wido > > > On Tue, Oct 18 at

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-18 Thread William Josefsson
Thanks Christian for elaborating on this, I appreciate it. I will rerun some of my benchmarks and take your advice into consideration. I have also found maximum performance recommendations for the Dell 730xd BIOS settings, hope these make sense: http://pasteboard.co/guHVMQVly.jpg I will set all these

Re: [ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-18 Thread Lars Marowsky-Bree
On 2016-10-18T12:46:48, John Spray wrote: > I've suggested a solution here: > http://tracker.ceph.com/issues/17604 > > This is probably going to be a bit of a subjective thing in terms of > whether people find it useful or find it to be annoying noise, so I'd > be interested

Re: [ceph-users] Calc the number of shards needed for a bucket

2016-10-18 Thread Orit Wasserman
Hi Ansgar, We recommend 100,000 objects per shard; for 50M objects you will need 512 shards. Orit On Fri, Oct 14, 2016 at 1:44 PM, Ansgar Jazdzewski wrote: > Hi, > > I'd like to know if any of you have some kind of a formula to set > the right number of shards for
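A quick sanity check on the arithmetic (a minimal sketch; the 100,000-objects-per-shard figure comes from the message above, and rounding 500 up to the next power of two is only an assumption about how the 512 was reached):

    import math

    objects_per_shard = 100000            # rule of thumb quoted above
    expected_objects = 50 * 1000 * 1000   # 50M objects in the bucket

    shards = int(math.ceil(expected_objects / float(objects_per_shard)))  # 500
    shards_pow2 = 1 << (shards - 1).bit_length()                          # 512
    print(shards, shards_pow2)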

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-18 Thread Maged Mokhtar
Thank you Mike for this update. I sent you and Dave the relevant changes we found for hyper-v. Cheers /maged -- From: "Mike Christie" Sent: Monday, October 17, 2016 9:40 PM To: "Maged Mokhtar" ; "Lars

Re: [ceph-users] Does anyone know why cephfs does not support EC pools?

2016-10-18 Thread Erick Perez - Quadrian Enterprises
On Tue, Oct 18, 2016 at 3:46 AM, Lars Marowsky-Bree wrote: > On 2016-10-18T00:06:57, Erick Perez - Quadrian Enterprises < > epe...@quadrianweb.com> wrote: > > > Is EC in the roadmap for Ceph? Can't seem to find it. My question is > because > > "all others" (Nutanix, Hypergrid) do

Re: [ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-18 Thread Benjeman Meekhof
+1 to this, it would be useful On Tue, Oct 18, 2016 at 8:31 AM, Wido den Hollander wrote: > >> On 18 October 2016 at 14:06, Dan van der Ster wrote: >> >> >> +1 I would find this warning useful. >> > > +1 Probably make it configurable, say, you want at least

Re: [ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-18 Thread Goncalo Borges
Hi John. That would be good. In our case we are simply picking that up through Nagios and some scripts parsing the dump of the MDS maps. Cheers Goncalo From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of John Spray
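For anyone curious, a minimal sketch of that kind of external check (assuming "ceph fs dump --format json" exposes a top-level "standbys" list; key names can differ between releases, so treat this as illustrative only):

    import json
    import subprocess
    import sys

    MIN_STANDBYS = 1  # hypothetical threshold, cf. the "at least X" idea above

    out = subprocess.check_output(['ceph', 'fs', 'dump', '--format', 'json'])
    fsmap = json.loads(out.decode('utf-8'))

    standbys = fsmap.get('standbys', [])
    if len(standbys) < MIN_STANDBYS:
        print('WARNING: only %d standby MDS daemon(s)' % len(standbys))
        sys.exit(1)   # Nagios WARNING
    print('OK: %d standby MDS daemon(s)' % len(standbys))
    sys.exit(0)       # Nagios OK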

[ceph-users] ceph on two data centers far away

2016-10-18 Thread yan cui
Hi Guys, Our company has a use case that needs Ceph to span two data centers (one data center is far away from the other). The experience of using a single data center is good. We did some benchmarking across two data centers, and the performance is bad because of the synchronization

Re: [ceph-users] ceph on two data centers far away

2016-10-18 Thread Sean Redmond
Maybe this would be an option for you: http://docs.ceph.com/docs/jewel/rbd/rbd-mirroring/ On Tue, Oct 18, 2016 at 8:18 PM, yan cui wrote: > Hi Guys, > >Our company has a use case which needs the support of Ceph across two > data centers (one data center is far away

[ceph-users] Re: Does anyone know why cephfs does not support EC pools?

2016-10-18 Thread Liuxuan
Hello: Thank you very much for your detailed answer. I have used iozone to test randwrite; it reported not supported, but the OSD did not crash. From: huang jun [mailto:hjwsm1...@gmail.com] Sent: 18 October 2016 13:44 To: Erick Perez - Quadrian Enterprises Cc: liuxuan 11625 (RD);

Re: [ceph-users] Appending to an erasure coded pool

2016-10-18 Thread Tianshan Qu
pool_requires_alignment can get the pool's stripe_width, and you need to write a multiple of that size in each append. stripe_width can be configured with osd_pool_erasure_code_stripe_width, but the actual size will be adjusted by the EC plugin 2016-10-17 18:34 GMT+08:00 James Norman
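To illustrate, a minimal librados sketch of an aligned append (assuming the Python binding exposes the alignment query as Ioctx.pool_required_alignment(); the underlying C calls are rados_ioctx_pool_requires_alignment()/rados_ioctx_pool_required_alignment(), and the Python method name varies between releases):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('my-ec-pool')      # hypothetical EC pool name
        try:
            align = ioctx.pool_required_alignment()   # stripe_width of the pool
            data = b'x' * (4 * 1024 * 1024)
            usable = len(data) - (len(data) % align)  # trim to a multiple of stripe_width
            ioctx.append('myobject', data[:usable])   # unaligned appends are rejected
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()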

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-18 Thread Christian Balzer
Hello, Note that the tests below were done on a VM with RBD cache disabled, so the "direct=1" flag in FIO had a similar impact to "sync=1". If your databases are MySQL, Oracle or something else that can use O_DIRECT, RBD caching can improve things dramatically for you (with the same risks that
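As an aside, the librbd cache is a client-side setting, so it can be toggled per connection for testing. A minimal sketch (pool 'rbd' and image 'testimg' are hypothetical; 'rbd cache' in ceph.conf sets the default, and conf_set overrides it for this session only):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.conf_set('rbd_cache', 'false')   # disable client-side caching for this test
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')         # hypothetical pool name
        image = rbd.Image(ioctx, 'testimg')       # hypothetical image name
        try:
            image.write(b'\0' * 4096, 0)          # 4 KiB write at offset 0
            image.flush()                         # flush any buffered writes
        finally:
            image.close()
        ioctx.close()
    finally:
        cluster.shutdown()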