Sent: Wednesday, September 17, 2014 2:02 PM
To: Somnath Roy
Cc: ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant
I reported the same for librbd in firefly after upgrading from dumpling here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040664
Mark,
All are running with concurrency 32.
Thanks & Regards
Somnath
-Original Message-
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Wednesday, September 17, 2014 1:59 PM
To: Somnath Roy; ceph-devel@vger.kernel.org
Subject: Re: severe librbd performance degradation in Giant
Hi Sage,
We are experiencing severe librbd performance degradation in Giant over the firefly
release. Here is the experiment we did to isolate it as a librbd problem.
1. A single OSD is running the latest Giant and the client is running fio rbd on top of
firefly-based librbd/librados. For one client it is giv
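(Aside, not part of the original mail: the fio rbd setup described above could be
approximated, for a quick sanity check, with the librbd Python bindings. This is only a
hedged sketch, not the fio job the thread actually used; the pool name, image name,
write count, and the assumption that the image already exists and is large enough are
all hypothetical.)

# Rough stand-in for a 4K write test against an RBD image with 32 concurrent
# workers, mirroring the "concurrency 32" mentioned above. Assumes a pre-created
# image large enough for the offsets written below.
import threading

import rados
import rbd

POOL = "rbd"              # hypothetical pool name
IMAGE = "perf-test"       # hypothetical, pre-created image
CONCURRENCY = 32          # matches the concurrency used in the thread
WRITES_PER_WORKER = 256   # keep the run short; adjust as needed
BLOCK = b"\0" * 4096      # 4K writes

def worker(idx):
    # Each worker opens its own cluster handle to avoid sharing state across threads.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        image = rbd.Image(ioctx, IMAGE)
        try:
            for i in range(WRITES_PER_WORKER):
                # Give every worker its own non-overlapping region of the image.
                offset = (idx * WRITES_PER_WORKER + i) * len(BLOCK)
                image.write(BLOCK, offset)
        finally:
            image.close()
            ioctx.close()
    finally:
        cluster.shutdown()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(CONCURRENCY)]
for t in threads:
    t.start()
for t in threads:
    t.join()

For the real measurement the thread used fio's rbd engine, which drives librbd directly
with a configurable iodepth.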
Hi Nicheal,
Not only recovery; IMHO the main purpose of the ceph journal is to support
transaction semantics, since XFS doesn't have that. I guess it can't be achieved
with pg_log/pg_info.
Thanks & Regards
Somnath
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel
Created the following pull request for the fix.
https://github.com/ceph/ceph/pull/2510
Thanks & Regards
Somnath
-Original Message-
From: Somnath Roy
Sent: Monday, September 15, 2014 3:26 PM
To: Sage Weil (sw...@redhat.com); Samuel Just (sam.j...@inktank.com)
Cc: ceph-d
nath
---
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Somnath
Thanks Sage!
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Saturday, September 13, 2014 7:32 AM
To: Alexandre DERUMIER
Cc: Somnath Roy; ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com; Samuel
Just
Subject: Re: [ceph-users] OpTracker optimization
On Sat, 13
.com]
Sent: Thursday, September 11, 2014 11:31 AM
To: Somnath Roy
Cc: Sage Weil; ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com
Subject: Re: OpTracker optimization
Just added it to wip-sam-testing.
-Sam
On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy wrote:
> Sam/Sage,
> I have addres
Hi Haomai,
[mailto:haomaiw...@gmail.com]
Sent: Thursday, September 11, 2014 7:28 PM
To: Somnath Roy
Cc: Sage Weil; ceph-us...@lists.ceph.com; ceph-devel@vger.kernel.org
Subject: Re: Regarding key/value interface
On Fri, Sep 12, 2014 at 9:46 AM, Somnath Roy wrote:
>
> Make perfect s
Weil [mailto:sw...@redhat.com]
Sent: Thursday, September 11, 2014 6:55 PM
To: Somnath Roy
Cc: Haomai Wang (haomaiw...@gmail.com); ceph-us...@lists.ceph.com;
ceph-devel@vger.kernel.org
Subject: RE: Regarding key/value interface
On Fri, 12 Sep 2014, Somnath Roy wrote:
> Make perfect sense S
stripe it as multiple key/value pairs?
2. Also, while reading, it will take care of accumulating them and sending them back.
Thanks & Regards
Somnath
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Thursday, September 11, 2014 6:31 PM
To: Somnath Roy
Cc: Haomai Wang (hao
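(Aside on the striping question above, not part of the original mail: a minimal sketch
of how a value larger than the backend's comfortable record size could be split across
multiple key/value pairs on write and accumulated back on read. A plain Python dict
stands in for the key/value backend; the 64K stripe size and the key naming scheme are
arbitrary assumptions.)

STRIPE_SIZE = 64 * 1024  # assumed stripe unit

def put_striped(store, key, value):
    # Write value as key.0, key.1, ... stripes plus a stripe count.
    stripes = [value[i:i + STRIPE_SIZE]
               for i in range(0, len(value), STRIPE_SIZE)] or [b""]
    for n, chunk in enumerate(stripes):
        store["%s.%d" % (key, n)] = chunk
    store["%s.count" % key] = str(len(stripes)).encode()

def get_striped(store, key):
    # Read the stripe count, then accumulate the stripes back in order.
    count = int(store["%s.count" % key].decode())
    return b"".join(store["%s.%d" % (key, n)] for n in range(count))

store = {}
put_striped(store, "object1", b"x" * 200000)            # ~200K value -> 4 stripes
assert get_striped(store, "object1") == b"x" * 200000   # reassembles correctly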
Sam/Sage,
I have addressed all of your comments and pushed the changes to the same pull
request.
https://github.com/ceph/ceph/pull/2440
Thanks & Regards
Somnath
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Wednesday, September 10, 2014 8:33 PM
To: Somnath Roy
nath Roy
Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org;
ceph-us...@lists.ceph.com
Subject: Re: OpTracker optimization
Oh, I changed my mind, your approach is fine. I was unclear.
Currently, I just need you to address the other comments.
-Sam
On Wed, Sep 10, 2014 at 3:13 PM, Somnath
right ?
Thanks & Regards
Somnath
-Original Message-
From: Samuel Just [mailto:sam.j...@inktank.com]
Sent: Wednesday, September 10, 2014 3:08 PM
To: Somnath Roy
Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org;
ceph-us...@lists.ceph.com
Subject: Re: OpTracker optimization
Thanks Sam.
So, you want me to go with optracker/shardedOpWq, right?
Regards
Somnath
-Original Message-
From: Samuel Just [mailto:sam.j...@inktank.com]
Sent: Wednesday, September 10, 2014 2:36 PM
To: Somnath Roy
Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org;
ceph-us
Thanks Sam..I responded back :-)
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
Sent: Wednesday, September 10, 2014 11:17 AM
To: Somnath Roy
Cc: Sage Weil (sw...@redhat.com); ceph-devel@vger.kernel.org;
ceph
uring dump and more specifically
within _dump().
I am taking care of it as well.
Thanks & Regards
Somnath
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Monday, September 08, 2014 5:59 PM
To: Somnath Roy
Cc: Samuel Just; ceph-devel@vger.kernel.org; ceph-us...@lists
Created the following tracker and assigned to me.
http://tracker.ceph.com/issues/9384
Thanks & Regards
Somnath
-Original Message-
From: Samuel Just [mailto:sam.j...@inktank.com]
Sent: Monday, September 08, 2014 5:22 PM
To: Somnath Roy
Cc: Sage Weil (sw...@redhat.com); ceph-d
But, Sage, it is not happening all the time. For some reason, in some cases it
is waiting for the maps to be readable. Could you please let us know why it can
happen in some scenarios?
By not dumping this unusual scenario, will we be missing anything?
Thanks & Regards
Somnath
-Original Me
Have you set the open file descriptor limit on the OSD node?
Try setting it like 'ulimit -n 65536'.
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Haomai Wang
Sent: Saturday, September 06, 2014 7:44 AM
To: 廖建锋
Cc: ceph-user
I think it is using direct io for non-aio mode as well.
Thanks & Regards
Somnath
-Original Message-
From: Mark Kirkwood [mailto:mark.kirkw...@catalyst.net.nz]
Sent: Friday, August 22, 2014 3:19 PM
To: Sage Weil
Cc: Ma, Jianpeng; Somnath Roy; Samuel Just (sam.j...@inktank.com);
I will also take the patch and test it out.
Thanks & Regards
Somnath
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Tuesday, August 19, 2014 9:51 PM
To: Somnath Roy
Cc: Samuel Just (sam.j...@inktank.com); ceph-devel@vger.kernel.org; Mark
Kirkwood; jian
I got it, not yet there in master.
https://github.com/ceph/ceph/commit/b40ddc5dcf95b4849706314b34e72b607629773f
Sorry for the confusion.
Thanks & Regards
Somnath
-Original Message-
From: Somnath Roy
Sent: Tuesday, August 19, 2014 9:38 PM
To: 'Sage Weil'
Cc: Samu
Thanks Sage !
So, the latest master should have the fix, right ?
Regards
Somnath
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Tuesday, August 19, 2014 8:55 PM
To: Somnath Roy
Cc: Samuel Just (sam.j...@inktank.com); ceph-devel@vger.kernel.org; Mark
Kirkwood
Thanks Sage, this helps !
Regards
Somnath
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Thursday, August 07, 2014 7:12 AM
To: Somnath Roy
Cc: ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com; Anirban Ray; Allen
Samuels
Subject: Re: Regarding cache tier
Hi Haomai,
But the cache hit rate will be very minimal or null if the actual storage per node
is very large (say at the PB level). So, it will be mostly hitting omap, won't it?
How is this header cache going to resolve this serialization issue then?
Thanks & Regards
Somnath
-Original Message-
Hi Andres,
How many client instances are you running in parallel? For a single client, you
will not see much difference with this sharded TP. Try to stress the cluster with
a larger number of clients and you will see that throughput does not increase with
firefly. The aggregated output wi
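(Aside, not part of the original mail: one way to stress the cluster with several
client instances in parallel, as suggested above, is simply to launch multiple rados
bench writers at once; a hedged sketch follows. The pool name, client count, runtime,
and per-client concurrency are placeholders.)

import subprocess

POOL = "testpool"   # hypothetical pool
CLIENTS = 8         # number of parallel client instances
SECONDS = 60        # runtime of each bench run

# Start all clients, then wait for them to finish; add up the per-client
# bandwidth figures that rados bench prints to get the aggregate.
procs = [
    subprocess.Popen(["rados", "-p", POOL, "bench", str(SECONDS), "write", "-t", "32"])
    for _ in range(CLIENTS)
]
for p in procs:
    p.wait()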
Yeah :-)...We are also waiting for wip-rocksdb to merge in mainstream.
Thanks & Regards
Somnath
-Original Message-
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Thursday, June 05, 2014 11:33 AM
To: Somnath Roy; Samuel Just; ceph-devel@vger.kernel.org
Subject: Re: Mon bac
Mark,
Could you please share the performance benchmark results with RocksDBStore + ceph
and LevelDBStore + ceph as you mentioned below?
BTW, have you measured the write amplification (WA) induced by RocksDBStore and
LevelDBStore in the process, since that is also a very important factor when the
backend is flash?
Thanks &
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Friday, September 27, 2013 11:50 AM
To: Somnath Roy
Cc: Yehuda Sadeh; ceph-us...@lists.ceph.com; Anirban Ray;
ceph-devel@vger.kernel.org
Subject: Re: [ceph-users] Scaling radosgw module
Hi Somnath,
With SSDs, you almost certainly are goin
from memory , no disk
utilization here.
Thanks & Regards
Somnath
-Original Message-
From: Yehuda Sadeh [mailto:yeh...@inktank.com]
Sent: Thursday, September 26, 2013 4:48 PM
To: Somnath Roy
Cc: Mark Nelson; ceph-us...@lists.ceph.com; Anirban Ray;
ceph-devel@vger.kernel.org
Subject
Mark,
One more thing: all my tests are with the rgw cache enabled; with the cache
disabled, performance is around 3x slower.
Thanks & Regards
Somnath
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Thur
nath
-Original Message-
From: Mark Nelson [mailto:mark.nel...@inktank.com]
Sent: Thursday, September 26, 2013 3:50 PM
To: Somnath Roy
Cc: ceph-us...@lists.ceph.com; ceph-devel@vger.kernel.org; Anirban Ray
Subject: Re: [ceph-users] Scaling radosgw module
Ah, that's very good to know!
And RGW
sday, September 26, 2013 3:33 PM
To: Somnath Roy
Cc: ceph-us...@lists.ceph.com; ceph-devel@vger.kernel.org; Anirban Ray
Subject: Re: [ceph-users] Scaling radosgw module
It's kind of annoying, but it may be worth setting up a 2nd RGW server and
seeing if having two copies of the benchmark going at the
Hi Mark,
FYI, I tried with the wip-6286-dumpling release and the results are the same for
me. The radosgw throughput is around 6x slower than the single rados bench output!
Any other suggestions?
Thanks & Regards
Somnath
-Original Message-
From: Somnath Roy
Sent: Friday, Septembe
Hi Sage,
Thanks for your input. I will try those. Please see my response inline.
Thanks & Regards
Somnath
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Tuesday, September 24, 2013 3:47 PM
To: Somnath Roy
Cc: Travis Rhoden; Josh Durgin; ceph-devel@vger.kernel
Hi,
I am running Ceph on a 3-node cluster and each of my server nodes is running 10
OSDs, one for each disk. I have one admin node and all the nodes are connected
with 2 x 10G networks. One network is for the cluster and the other one is
configured as the public network.
All the OSD journals are on SSDs.
I sta
-devel-ow...@vger.kernel.org] On Behalf Of Josh Durgin
Sent: Thursday, September 19, 2013 12:24 PM
To: Somnath Roy
Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray;
ceph-us...@lists.ceph.com
Subject: Re: [ceph-users] Scaling RBD module
On 09/19/2013 12:04 PM, Somnath Roy wrote:
> Hi J
[mailto:josh.dur...@inktank.com]
Sent: Wednesday, September 18, 2013 6:10 PM
To: Somnath Roy
Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray;
ceph-us...@lists.ceph.com
Subject: Re: [ceph-users] Scaling RBD module
On 09/17/2013 03:30 PM, Somnath Roy wrote:
> Hi,
> I am running Ceph on a 3 node
Hi,
I am running Ceph on a 3-node cluster and each of my server nodes is running 10
OSDs, one for each disk. I have one admin node and all the nodes are connected
with 2 x 10G networks. One network is for the cluster and the other one is
configured as the public network.
Here is the status of my cluster.
~/fio