RE: severe librbd performance degradation in Giant

2014-09-17 Thread Somnath Roy
Sent: Wednesday, September 17, 2014 2:02 PM To: Somnath Roy Cc: ceph-devel@vger.kernel.org Subject: Re: severe librbd performance degradation in Giant I reported the same for librbd in firefly after upgrading from dumpling here: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040664

RE: severe librbd performance degradation in Giant

2014-09-17 Thread Somnath Roy
Mark, All are running with concurrency 32. Thanks & Regards Somnath -Original Message- From: Mark Nelson [mailto:mark.nel...@inktank.com] Sent: Wednesday, September 17, 2014 1:59 PM To: Somnath Roy; ceph-devel@vger.kernel.org Subject: Re: severe librbd performance degradation in G

severe librbd performance degradation in Giant

2014-09-17 Thread Somnath Roy
Hi Sage, We are experiencing severe librbd performance degradation in Giant over firefly release. Here is the experiment we did to isolate it as a librbd problem. 1. Single OSD is running latest Giant and client is running fio rbd on top of firefly based librbd/librados. For one client it is giv
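
For context, the kind of fio rbd run described here can be reproduced with a job file along these lines. This is only a sketch: the pool and image names are placeholders, and iodepth=32 matches the concurrency-32 runs mentioned in the replies above. The rbd ioengine drives librbd directly, so the client-side library version (firefly vs Giant) is exactly what gets compared:

# create a test image first, e.g.: rbd create --size 10240 fio_test
$ cat > rbd.fio <<'EOF'
[global]
ioengine=rbd            # requires fio built with rbd support
clientname=admin        # cephx user, without the "client." prefix
pool=rbd                # placeholder pool name
rbdname=fio_test        # placeholder image name
rw=randwrite
bs=4k
time_based=1
runtime=60

[rbd_iodepth32]
iodepth=32              # matches the concurrency of 32 noted above
EOF
$ fio rbd.fio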

RE: puzzled with the design pattern of ceph journal, really ruining performance

2014-09-17 Thread Somnath Roy
Hi Nicheal, Not only recovery, IMHO the main purpose of the ceph journal is to support transaction semantics, since XFS doesn't have that. I guess it can't be achieved with pg_log/pg_info. Thanks & Regards Somnath -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel

RE: OSD is crashing during delete operation

2014-09-16 Thread Somnath Roy
Created the following pull request for the fix. https://github.com/ceph/ceph/pull/2510 Thanks & Regards Somnath -Original Message- From: Somnath Roy Sent: Monday, September 15, 2014 3:26 PM To: Sage Weil (sw...@redhat.com); Samuel Just (sam.j...@inktank.com) Cc: ceph-d

RE: OSD is crashing during delete operation

2014-09-15 Thread Somnath Roy
nath --- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Somnath

RE: [ceph-users] OpTracker optimization

2014-09-13 Thread Somnath Roy
Thanks Sage! -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Saturday, September 13, 2014 7:32 AM To: Alexandre DERUMIER Cc: Somnath Roy; ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com; Samuel Just Subject: Re: [ceph-users] OpTracker optimization On Sat, 13

RE: OpTracker optimization

2014-09-13 Thread Somnath Roy
.com] Sent: Thursday, September 11, 2014 11:31 AM To: Somnath Roy Cc: Sage Weil; ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com Subject: Re: OpTracker optimization Just added it to wip-sam-testing. -Sam On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy wrote: > Sam/Sage, > I have addres

RE: Regarding key/value interface

2014-09-11 Thread Somnath Roy
Hi Haomai, [mailto:haomaiw...@gmail.com] Sent: Thursday, September 11, 2014 7:28 PM To: Somnath Roy Cc: Sage Weil; ceph-us...@lists.ceph.com; ceph-devel@vger.kernel.org Subject: Re: Regarding key/value interface On Fri, Sep 12, 2014 at 9:46 AM, Somnath Roy wrote: > > Make perfect s

RE: Regarding key/value interface

2014-09-11 Thread Somnath Roy
Weil [mailto:sw...@redhat.com] Sent: Thursday, September 11, 2014 6:55 PM To: Somnath Roy Cc: Haomai Wang (haomaiw...@gmail.com); ceph-us...@lists.ceph.com; ceph-devel@vger.kernel.org Subject: RE: Regarding key/value interface On Fri, 12 Sep 2014, Somnath Roy wrote: > Make perfect sense S

RE: Regarding key/value interface

2014-09-11 Thread Somnath Roy
stripe it as multiple key/value pairs? 2. Also, while reading, it will take care of accumulating and sending it back. Thanks & Regards Somnath -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Thursday, September 11, 2014 6:31 PM To: Somnath Roy Cc: Haomai Wang (hao

RE: OpTracker optimization

2014-09-11 Thread Somnath Roy
Sam/Sage, I have addressed all of your comments and pushed the changes to the same pull request. https://github.com/ceph/ceph/pull/2440 Thanks & Regards Somnath -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Wednesday, September 10, 2014 8:33 PM To: Somnath Ro
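
For anyone reproducing the before/after comparison, op tracking can also be switched off entirely to measure its cost on the op path; a minimal sketch, assuming the Giant-era option name osd_enable_op_tracker (added as part of this optimization work) and an upstart-based install:

$ sudo tee -a /etc/ceph/ceph.conf <<'EOF'
[osd]
osd enable op tracker = false   # bypass op tracking to measure its overhead
EOF
# restart the OSDs for the setting to take effect, e.g. on Ubuntu upstart:
$ sudo restart ceph-osd-all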

RE: OpTracker optimization

2014-09-10 Thread Somnath Roy
nath Roy Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com Subject: Re: OpTracker optimization Oh, I changed my mind, your approach is fine. I was unclear. Currently, I just need you to address the other comments. -Sam On Wed, Sep 10, 2014 at 3:13 PM, Somnath

RE: OpTracker optimization

2014-09-10 Thread Somnath Roy
right? Thanks & Regards Somnath -Original Message- From: Samuel Just [mailto:sam.j...@inktank.com] Sent: Wednesday, September 10, 2014 3:08 PM To: Somnath Roy Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com Subject: Re: OpTracker optimizatio

RE: OpTracker optimization

2014-09-10 Thread Somnath Roy
Thanks Sam. So, you want me to go with optracker/shardedOpWq, right? Regards Somnath -Original Message- From: Samuel Just [mailto:sam.j...@inktank.com] Sent: Wednesday, September 10, 2014 2:36 PM To: Somnath Roy Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org; ceph-us

RE: OpTracker optimization

2014-09-10 Thread Somnath Roy
Thanks Sam..I responded back :-) -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just Sent: Wednesday, September 10, 2014 11:17 AM To: Somnath Roy Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org; ceph

RE: OSD is crashing while running admin socket

2014-09-08 Thread Somnath Roy
during dump and more specifically within _dump(). I am taking care of it as well. Thanks & Regards Somnath -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Monday, September 08, 2014 5:59 PM To: Somnath Roy Cc: Samuel Just; ceph-devel@vger.kernel.org; ceph-us...@lists
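
For reference, the admin socket commands that exercise this op dump path look like the following (osd id and socket path are the defaults; adjust for your deployment):

$ ceph daemon osd.0 dump_ops_in_flight
$ ceph daemon osd.0 dump_historic_ops
# or directly against the socket:
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight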

RE: OSD is crashing while running admin socket

2014-09-08 Thread Somnath Roy
Created the following tracker and assigned to me. http://tracker.ceph.com/issues/9384 Thanks & Regards Somnath -Original Message- From: Samuel Just [mailto:sam.j...@inktank.com] Sent: Monday, September 08, 2014 5:22 PM To: Somnath Roy Cc: Sage Weil (sw...@redhat.com); ceph-d

RE: Mon gets flooded with log messages for default log level

2014-09-08 Thread Somnath Roy
But, Sage, it is not happening all the time. For some reason, in some cases it is waiting for the maps to be readable. Could you please let us know why it can happen in some scenarios? By not dumping this unusual scenario, will we be missing anything? Thanks & Regards Somnath -Original Me

RE: [ceph-users] ceph osd unexpected error

2014-09-06 Thread Somnath Roy
Have you set the open file descriptor limit on the OSD node? Try setting it like 'ulimit -n 65536' -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Haomai Wang Sent: Saturday, September 06, 2014 7:44 AM To: 廖建锋 Cc: ceph-user
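
To make that limit survive reboots and apply to the daemons rather than just the current shell, something like the following is usually needed (paths assume a default install):

$ ulimit -n 65536                 # current shell only
$ echo 'root soft nofile 65536' | sudo tee -a /etc/security/limits.conf
$ echo 'root hard nofile 65536' | sudo tee -a /etc/security/limits.conf
# ceph's init script can also raise it per daemon via ceph.conf:
#   [global]
#   max open files = 65536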

RE: Deadlock in ceph journal

2014-08-22 Thread Somnath Roy
I think it is using direct io for non-aio mode as well. Thanks & Regards Somnath -Original Message- From: Mark Kirkwood [mailto:mark.kirkw...@catalyst.net.nz] Sent: Friday, August 22, 2014 3:19 PM To: Sage Weil Cc: Ma, Jianpeng; Somnath Roy; Samuel Just (sam.j...@inktank.com);
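
The combination under discussion can be pinned explicitly in ceph.conf to test the non-aio path with direct io still on; a sketch, assuming Giant-era option names, with an OSD restart required afterwards:

$ sudo tee -a /etc/ceph/ceph.conf <<'EOF'
[osd]
journal dio = true    # keep direct io on
journal aio = false   # force the non-aio write path
EOF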

RE: Deadlock in ceph journal

2014-08-19 Thread Somnath Roy
I will also take the patch and test it out. Thanks & Regards Somnath -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Tuesday, August 19, 2014 9:51 PM To: Somnath Roy Cc: Samuel Just (sam.j...@inktank.com); ceph-devel@vger.kernel.org; Mark Kirkwood; jian

RE: Deadlock in ceph journal

2014-08-19 Thread Somnath Roy
I got it, not yet there in master. https://github.com/ceph/ceph/commit/b40ddc5dcf95b4849706314b34e72b607629773f Sorry for the confusion. Thanks & Regards Somnath -Original Message- From: Somnath Roy Sent: Tuesday, August 19, 2014 9:38 PM To: 'Sage Weil' Cc: Samu

RE: Deadlock in ceph journal

2014-08-19 Thread Somnath Roy
Thanks Sage! So, the latest master should have the fix, right? Regards Somnath -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Tuesday, August 19, 2014 8:55 PM To: Somnath Roy Cc: Samuel Just (sam.j...@inktank.com); ceph-devel@vger.kernel.org; Mark Kirkwood

RE: Regarding cache tier understanding

2014-08-07 Thread Somnath Roy
Thanks Sage, this helps ! Regards Somnath -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Thursday, August 07, 2014 7:12 AM To: Somnath Roy Cc: ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com; Anirban Ray; Allen Samuels Subject: Re: Regarding cache tier

RE: [RFC] add rocksdb support

2014-07-01 Thread Somnath Roy
Hi Haomai, But the cache hit rate will be minimal or nil if the actual storage per node is very large (say at the PB level). So it will mostly be hitting omap, won't it? How is this header cache going to resolve the serialization issue then? Thanks & Regards Somnath -Original Message-

RE: CEPH IOPS Baseline Measurements with MemStore

2014-06-24 Thread Somnath Roy
Hi Andres, How many client instances are you running in parallel? For a single client, you will not see much difference with this sharded TP. Try to stress the cluster with more clients and you will see that throughput does not increase with firefly. The aggregated output wi
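
A simple way to generate that multi-client load is several rados bench instances in parallel; a sketch with a placeholder pool name, assuming a rados bench recent enough to support --run-name:

$ for i in $(seq 1 8); do
    rados bench -p testpool 60 write -t 32 --run-name client$i --no-cleanup &
  done; wait
$ rados -p testpool cleanup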

RE: Mon backing store

2014-06-05 Thread Somnath Roy
Yeah :-)... We are also waiting for wip-rocksdb to be merged into mainline. Thanks & Regards Somnath -Original Message- From: Mark Nelson [mailto:mark.nel...@inktank.com] Sent: Thursday, June 05, 2014 11:33 AM To: Somnath Roy; Samuel Just; ceph-devel@vger.kernel.org Subject: Re: Mon bac

RE: Mon backing store

2014-06-05 Thread Somnath Roy
Mark, Could you please share the performance benchmark results with rocksdbstore + ceph and leveldbstore + ceph as you mentioned below? BTW, have you measured the WA (write amplification) induced by rocksdbstore and leveldbstore in the process, since that is also a very important factor when the backend is flash? Thanks &

RE: [ceph-users] Scaling radosgw module

2013-09-27 Thread Somnath Roy
From: Mark Nelson [mailto:mark.nel...@inktank.com] Sent: Friday, September 27, 2013 11:50 AM To: Somnath Roy Cc: Yehuda Sadeh; ceph-us...@lists.ceph.com; Anirban Ray; ceph-devel@vger.kernel.org Subject: Re: [ceph-users] Scaling radosgw module Hi Somnath, With SSDs, you almost certainly are goin

RE: [ceph-users] Scaling radosgw module

2013-09-26 Thread Somnath Roy
from memory, no disk utilization here. Thanks & Regards Somnath -Original Message- From: Yehuda Sadeh [mailto:yeh...@inktank.com] Sent: Thursday, September 26, 2013 4:48 PM To: Somnath Roy Cc: Mark Nelson; ceph-us...@lists.ceph.com; Anirban Ray; ceph-devel@vger.kernel.org Subject

RE: [ceph-users] Scaling radosgw module

2013-09-26 Thread Somnath Roy
Mark, One more thing: all my tests are with the rgw cache enabled; with the cache disabled, performance is around 3x slower. Thanks & Regards Somnath -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy Sent: Thur
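
The toggle being compared is the rgw cache section of ceph.conf; a sketch, with a section name assuming a typical gateway setup of this era:

$ sudo tee -a /etc/ceph/ceph.conf <<'EOF'
[client.radosgw.gateway]
rgw cache enabled = true      # set to false to reproduce the ~3x slower case
rgw cache lru size = 10000    # illustrative; roughly the default of this era
EOF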

RE: [ceph-users] Scaling radosgw module

2013-09-26 Thread Somnath Roy
nath -Original Message- From: Mark Nelson [mailto:mark.nel...@inktank.com] Sent: Thursday, September 26, 2013 3:50 PM To: Somnath Roy Cc: ceph-us...@lists.ceph.com; ceph-devel@vger.kernel.org; Anirban Ray Subject: Re: [ceph-users] Scaling radosgw module Ah, that's very good to know! And RGW

RE: [ceph-users] Scaling radosgw module

2013-09-26 Thread Somnath Roy
Thursday, September 26, 2013 3:33 PM To: Somnath Roy Cc: ceph-us...@lists.ceph.com; ceph-devel@vger.kernel.org; Anirban Ray Subject: Re: [ceph-users] Scaling radosgw module It's kind of annoying, but it may be worth setting up a 2nd RGW server and seeing if having two copies of the benchmark going at the

RE: [ceph-users] Scaling radosgw module

2013-09-26 Thread Somnath Roy
Hi Mark, FYI, I tried with the wip-6286-dumpling release and the results are the same for me. The radosgw throughput is around 6x slower than the single rados bench output! Any other suggestion? Thanks & Regards Somnath -Original Message- From: Somnath Roy Sent: Friday, Septembe
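
The rados bench baseline being compared against can be rerun directly on the rgw data pool; a sketch, with the pool name assuming the default rgw pool naming of this era:

$ rados bench -p .rgw.buckets 60 write -t 32 --no-cleanup
$ rados bench -p .rgw.buckets 60 seq -t 32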

RE: [ceph-users] Scaling RBD module

2013-09-24 Thread Somnath Roy
Hi Sage, Thanks for your input. I will try those. Please see my response inline. Thanks & Regards Somnath -Original Message- From: Sage Weil [mailto:s...@inktank.com] Sent: Tuesday, September 24, 2013 3:47 PM To: Somnath Roy Cc: Travis Rhoden; Josh Durgin; ceph-devel@vger.kernel

Scaling radosgw module

2013-09-20 Thread Somnath Roy
Hi, I am running Ceph on a 3 node cluster and each of my server nodes is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 X 10G networks. One network is for the cluster and the other is configured as the public network. All the OSD journals are on SSDs. I sta

RE: [ceph-users] Scaling RBD module

2013-09-19 Thread Somnath Roy
-devel-ow...@vger.kernel.org] On Behalf Of Josh Durgin Sent: Thursday, September 19, 2013 12:24 PM To: Somnath Roy Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray; ceph-us...@lists.ceph.com Subject: Re: [ceph-users] Scaling RBD module On 09/19/2013 12:04 PM, Somnath Roy wrote: > Hi J

RE: [ceph-users] Scaling RBD module

2013-09-19 Thread Somnath Roy
[mailto:josh.dur...@inktank.com] Sent: Wednesday, September 18, 2013 6:10 PM To: Somnath Roy Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray; ceph-us...@lists.ceph.com Subject: Re: [ceph-users] Scaling RBD module On 09/17/2013 03:30 PM, Somnath Roy wrote: > Hi, > I am running Ceph on a 3 node

Scaling RBD module

2013-09-17 Thread Somnath Roy
Hi, I am running Ceph on a 3 node cluster and each of my server nodes is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 X 10G networks. One network is for the cluster and the other is configured as the public network. Here is the status of my cluster. ~/fio
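
A typical way to scale this kind of test across the kernel RBD module is multiple images, each mapped and driven by its own fio instance; a sketch with placeholder image names:

$ rbd create --size 10240 test1
$ sudo rbd map test1                    # shows up as /dev/rbd0
$ sudo fio --name=krbd1 --filename=/dev/rbd0 --ioengine=libaio \
      --direct=1 --rw=randread --bs=4k --iodepth=32 \
      --runtime=60 --time_based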
