>>>>>>> system.log)?
>>>>>>>
>>>>>>> Something else you could check are the local_writes stats, to see if
>>>>>>> only one table is affected or whether it is keyspace / cluster wide.
>>>>>>> You can use the metrics exposed by Cassandra, or from a shell:
>>>>>> 'nodetool cfstats | grep -e 'Table:' -e 'Local'' should give you a
>>>>>> rough idea of local latencies.
>>>>>>
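[Editor's note: the snippet below is illustrative only; the sample lines are fabricated to mimic the shape of `nodetool cfstats` output, and the real values, indentation, and fields vary by Cassandra version.]

```shell
# Fabricated sample mimicking `nodetool cfstats` output; real output
# is indented and version-dependent. Table names are hypothetical.
cat > /tmp/cfstats_sample.txt <<'EOF'
Table: metrics
Local read latency: 0.084 ms
Local write latency: 1.924 ms
Table: users
Local read latency: 0.051 ms
Local write latency: 0.063 ms
EOF

# The suggested filter, narrowed to write latencies per table:
grep -e 'Table:' -e 'Local write' /tmp/cfstats_sample.txt
```

A per-table spike (here the hypothetical `metrics` table) would point at one table rather than a cluster-wide problem.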
>>>>>> Those are just things I would check; I have not a clue on what…
>>>>>>
>>>>>> 2016-02-18 5:13 GMT+01:00 Mike Heffner :
>>>>>
>>>>>> Jaydeep,
>>>>>>
>>>>>> No, we don't use any light weight transactions.
>>>>>>
>>>>>> Mike
>>>>>>
>>>>>>> Are you guys using light weight transactions in your write path?
>>>>>>>
>>>>>>> On Thu, Feb 11, 2016 at 12:36 AM, Fabrice Facorat <
>>>>>>> fabrice.faco...@gmail.com> wrote:
>>>>>>
>>>>>> Are your commitlog and data on the same disk ? If yes, you should put
>>>>>> commitlogs on a separate disk which doesn't have a lot of IO.
>>>>>>
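[Editor's note: a minimal sketch of the relevant cassandra.yaml keys for the advice above. The mount point `/mnt/commitlog` is hypothetical; only the key names come from the standard configuration file.]

```yaml
# cassandra.yaml: keep the commitlog on its own low-contention volume,
# separate from the data directories. Paths are illustrative.
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /mnt/commitlog
```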
>>>>> An example of impact IO may have, even for Async writes:
>>>>>
>>>>> https://engineering.linkedin.com/blog/2016/02/eliminating-large-jvm-gc-pauses-caused-by-background-io-traffic
>>>>>
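[Editor's note: the pauses described in that post surface in the JVM GC log as long "application threads were stopped" entries. A quick way to flag them, assuming `-XX:+PrintGCApplicationStoppedTime` is enabled; the log lines below are fabricated samples, and the path is illustrative.]

```shell
# Fabricated sample lines in the -XX:+PrintGCApplicationStoppedTime format.
cat > /tmp/gc_sample.log <<'EOF'
2016-02-10T14:00:01.000+0000: Total time for which application threads were stopped: 0.0123450 seconds
2016-02-10T14:00:05.000+0000: Total time for which application threads were stopped: 0.2543210 seconds
EOF

# Print any pause longer than 100 ms (the field before "seconds" is the duration).
awk '/application threads were stopped/ { if ($(NF-1) + 0 > 0.1) print $(NF-1) }' /tmp/gc_sample.log
```

Against the sample above this prints only the 0.25 s pause; frequent hits near the write timeout would support the background-IO theory.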
2016-02-11 0:31 GMT+01:00 Mike Heffner :
>>>> > Jeff,
>>>> >
>>>> > We have both commitlog and data on a 4TB EBS with 10k IOPS.
>>>> >
>>>> > Mike
>>>> >
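[Editor's note: a sketch of how to confirm the "same disk" situation described above. The Cassandra paths are illustrative defaults, not taken from the thread.]

```shell
# Report whether two directories sit on the same block device,
# by comparing the filesystem each one resolves to.
same_device() {
  a=$(df -P "$1" | awk 'NR==2 {print $1}')
  b=$(df -P "$2" | awk 'NR==2 {print $1}')
  [ "$a" = "$b" ]
}

# Hypothetical default paths; adjust to your data_file_directories
# and commitlog_directory settings.
if same_device /var/lib/cassandra/data /var/lib/cassandra/commitlog 2>/dev/null; then
  echo "commitlog and data share a device; consider a dedicated commitlog volume"
fi
```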
>>>> > On Wed, Feb 10, 2016 at 5:28 PM, Jeff Jirsa <jeff.ji...@crowdstrike.com> wrote:
Following up from our earlier post...
We have continued to do exhaustive testing and measuring of the numerous
hardware and configuration variables here. What we have uncovered is that
on identical hardware (including the configuration we run in production),
something between versions 2.0.17 and 2…
What disk size are you using?
From: Mike Heffner
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, February 10, 2016 at 2:24 PM
To: "user@cassandra.apache.org"
Cc: Peter Norton
Subject: Re: Debugging write timeouts on Cassandra 2.2.5
Paulo,
Thanks for the suggestion, we ran some tests against CMS and saw the same
timeouts. On that note though, we are going to try doubling the instance
sizes and testing with double the heap (even though current usage is low).
Mike
On Wed, Feb 10, 2016 at 3:40 PM, Paulo Motta wrote:
Are you using the same GC settings as the staging 2.0 cluster? If not,
could you try using the default GC settings (CMS) and see if that changes
anything? This is just a wild guess, but there were reports before of
G1-caused instabilities with small heap sizes (< 16GB; see CASSANDRA-10403
for more details).
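[Editor's note: for reference, reverting to CMS means leaving the stock collector flags from the 2.x `cassandra-env.sh` in place. The sketch below uses the standard HotSpot CMS flags shipped in that file; the heap sizes are illustrative placeholders, not recommendations.]

```shell
# cassandra-env.sh sketch: explicit heap sizing plus the default CMS
# collector flags from Cassandra 2.x. Sizes are placeholders only.
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"

JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```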
Hi all,
We've recently embarked on a project to update our Cassandra infrastructure
running on EC2. We are long time users of 2.0.x and are testing out a move
to version 2.2.5 running on VPC with EBS. Our test setup is a 3 node, RF=3
cluster supporting a small write load (a mirror of our staging load).