On Mon, Feb 13, 2012 at 8:15 PM, Peter Schuller wrote:
> > 2 Node cluster, 7.9GB of ram (ec2 m1.large)
> > RF=2
> > 11GB per node
> > Quorum reads
> > 122 million keys
> > heap size is 1867M (default from the AMI I am running)
> > I'm reading about 900k keys
>
> Ok, so basically a very significant portion of the data fits in page
> cache, but not all.
On Mon, Feb 13, 2012 at 8:09 PM, Peter Schuller wrote:
> > the servers spending >50% of the time in io-wait
>
> Note that I/O wait is not necessarily a good indicator, depending on
> situation. In particular if you have multiple drives, I/O wait can
> mostly be ignored. Similarly if you have non-trivial CPU usage in
> addition to disk I/O, it is also not a good indicator.
> 2 Node cluster, 7.9GB of ram (ec2 m1.large)
> RF=2
> 11GB per node
> Quorum reads
> 122 million keys
> heap size is 1867M (default from the AMI I am running)
> I'm reading about 900k keys
Ok, so basically a very significant portion of the data fits in page
cache, but not all.
> As I was just go
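Peter's "most of the data fits in page cache, but not all" estimate can be roughed out from the numbers quoted above, assuming the OS gets approximately everything outside the JVM heap for page cache (a simplification that ignores other processes):

```shell
# Back-of-envelope with this thread's numbers (m1.large, 1867M heap, 11GB data).
# Assumes page cache ~= RAM - JVM heap; real headroom will be somewhat lower.
awk 'BEGIN {
    ram = 7.9; heap = 1.867; data = 11.0   # all in GB
    pc = ram - heap
    printf "page cache ~%.1f GB -> ~%.0f%% of the 11 GB per node\n", pc, 100 * pc / data
}'
# -> page cache ~6.0 GB -> ~55% of the 11 GB per node
```

So roughly half the data set can be cache-resident at once, which is consistent with performance varying a lot depending on which keys a given run touches.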
On Mon, Feb 13, 2012 at 8:00 PM, Peter Schuller wrote:
> What is your total data size (nodetool info/nodetool ring) per node,
> your heap size, and the amount of memory on the system?
>
2 Node cluster, 7.9GB of ram (ec2 m1.large)
RF=2
11GB per node
Quorum reads
122 million keys
heap size is 1867M (default from the AMI I am running)
I'm reading about 900k keys
> Yep, the readstage is backlogging consistently - but the thing I am trying
> to explain is why it is good sometimes in an environment that is pretty well
> controlled - other than being on ec2
So pending is constantly > 0? What are the clients? Is it batch jobs
or something similar where there is
> the servers spending >50% of the time in io-wait
Note that I/O wait is not necessarily a good indicator, depending on
situation. In particular if you have multiple drives, I/O wait can
mostly be ignored. Similarly if you have non-trivial CPU usage in
addition to disk I/O, it is also not a good indicator.
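The multiple-drives point above is why per-device utilization from `iostat -x` is more informative than the aggregate I/O-wait figure: one saturated disk can hide behind a moderate-looking average. A small sketch, parsing a hypothetical sample of `iostat -x` output (device names and column layout are assumptions; they vary by platform and iostat version):

```shell
# Hypothetical "iostat -x" sample: the aggregate iowait could look moderate,
# but per-device %util shows one ephemeral disk is the real bottleneck.
sample='Device   r/s    w/s   rkB/s  wkB/s  %util
xvdb    410.0   12.0   52480    960   98.2
xvdc      3.0    1.0     384     64    4.1'

# Flag any device above 90% utilization (skip the header line).
echo "$sample" | awk 'NR > 1 && $6 + 0 > 90 { print $1, "looks saturated at", $6 "% util" }'
# -> xvdb looks saturated at 98.2% util
```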
On Mon, Feb 13, 2012 at 7:51 PM, Peter Schuller wrote:
> For one thing, what does ReadStage's pending look like if you
> repeatedly run "nodetool tpstats" on these nodes? If you're simply
> bottlenecking on I/O on reads, that is the most easy and direct way to
> observe this empirically. If you're saturated, you'll see active close
> to maximum at all times, and
On Mon, Feb 13, 2012 at 7:48 PM, Peter Schuller wrote:
> > Yep - I've been looking at these - I don't see anything in iostat/dstat etc
> > that points strongly to a problem. There is quite a bit of I/O load, but it
> > looks roughly uniform on slow and fast instances of the queries. The last
What is your total data size (nodetool info/nodetool ring) per node,
your heap size, and the amount of memory on the system?
--
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
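The per-node data size Peter asks about is the "Load" line of `nodetool info` (or the load column of `nodetool ring`). A sketch of pulling it out; the sample output below is hypothetical, with the layout of that era's nodetool:

```shell
# Hypothetical "nodetool info" output; in a live check you would pipe the
# real command: nodetool -h <host> info | awk -F': ' '/^Load/ ...'
info='Token            : 85070591730234615865843651857942052864
Load             : 11.02 GB
Heap Memory (MB) : 1024.00 / 1867.00'

echo "$info" | awk -F': ' '/^Load/ { print "data per node:", $2 }'
# -> data per node: 11.02 GB
```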
On Mon, Feb 13, 2012 at 7:49 PM, Peter Schuller wrote:
> > I'm making an assumption . . . I don't yet know enough about cassandra to
> > prove they are in the cache. I have my keycache set to 2 million, and am
> > only querying ~900,000 keys. so after the first time I'm assuming they are
> > in the cache.
For one thing, what does ReadStage's pending look like if you
repeatedly run "nodetool tpstats" on these nodes? If you're simply
bottlenecking on I/O on reads, that is the most easy and direct way to
observe this empirically. If you're saturated, you'll see active close
to maximum at all times, and
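A quick way to watch for the backlog Peter describes is to poll `tpstats` and pull out the ReadStage row. The sample output here is hypothetical (the exact column layout varies between Cassandra versions), but the shape of the check is the same:

```shell
# Hypothetical "nodetool tpstats" sample; a live check would loop, e.g.:
#   while true; do nodetool tpstats | awk '$1 == "ReadStage"'; sleep 2; done
sample='Pool Name        Active   Pending   Completed
ReadStage            32      1847     8231901
MutationStage         0         0     1092273'

echo "$sample" | awk '$1 == "ReadStage" { print "ReadStage active:", $2, "pending:", $3 }'
# -> ReadStage active: 32 pending: 1847
```

Pending staying well above zero across repeated polls is the "constantly > 0" condition asked about above.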
> I'm making an assumption . . . I don't yet know enough about cassandra to
> prove they are in the cache. I have my keycache set to 2 million, and am
> only querying ~900,000 keys. so after the first time I'm assuming they are
> in the cache.
Note that the key cache only caches the index position of the row in the
sstable; a read that hits the key cache still has to fetch the row data
itself from disk (or page cache).
> Yep - I've been looking at these - I don't see anything in iostat/dstat etc
> that points strongly to a problem. There is quite a bit of I/O load, but it
> looks roughly uniform on slow and fast instances of the queries. The last
> compaction ran 4 days ago - which was before I started seeing vari
On Mon, Feb 13, 2012 at 7:21 PM, Peter Schuller wrote:
> I actually have the opposite 'problem'. I have a pair of servers that have
> been static since mid last week, but have seen performance vary
> significantly (x10) for exactly the same query. I hypothesised it was
> various caches so I shut down Cassandra, flushed the O/S buffer cache and
> then bo
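For reference, flushing the O/S buffer cache on Linux (the step Franc describes) is usually done as below. Since `drop_caches` needs root and evicts everything system-wide, this sketch only prints the command instead of running it:

```shell
# Flush dirty pages to disk first, then (as root) drop the page cache,
# dentries and inodes. Printed rather than executed here.
sync
echo 'as root: sysctl vm.drop_caches=3   # or: echo 3 > /proc/sys/vm/drop_caches'
```

After this plus a Cassandra restart, the first pass over the data is a true cold-cache measurement; subsequent passes show the warmed behaviour.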
Thanks - that would explain at least some of what I am seeing

cheers
I think the key caches and row caches are both persisted to disk on
shutdown, and restored from disk on restart, which then improves
performance.

2012-02-13
zhangcheng

From: Franc Carter
Sent: 2012-02-13 13:53:56
To: user
Cc:
Subject: keycache persisted to disk ?
Hi,
I am testing Cassandra on Amazon and finding performance can vary fairly
wildly. I'm leaning towards it being an artifact of the AWS I/O system but
have one other possibility.
Are keycaches persisted to disk and restored on a clean shutdown and
restart?
cheers
--
*Franc Carter* | Systems
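For the record, the persistence zhangcheng describes is configurable. In cassandra.yaml of that era (1.1-ish), the relevant settings look roughly like the following; treat this as a sketch, since option names and defaults differ between versions (earlier releases configured cache saving per column family instead):

```yaml
# Where saved key/row caches are written (periodically and on drain),
# and reloaded from on startup.
saved_caches_directory: /var/lib/cassandra/saved_caches
key_cache_save_period: 14400    # seconds between periodic key-cache saves
row_cache_save_period: 0        # 0 disables periodic row-cache saving
```

Note that a saved key cache warms up index lookups on restart, but the page cache itself starts cold, which is consistent with the slow first runs discussed above.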