Hi,
I upgraded to Cassandra 2.2.8 and noticed something weird in
nodetool tpstats:
Pool Name       Active  Pending  Completed  Blocked  All time blocked
MutationStage        0        0  116265693        0                 0
ReadStage            1
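When watching pool stats like these over time, it helps to pull them into a structured form. The sketch below parses `nodetool tpstats` pool lines, assuming the 2.2.x layout (pool name followed by five integer columns: Active, Pending, Completed, Blocked, All time blocked); the sample text is illustrative, not captured from a real node.

```python
# Sketch: parse `nodetool tpstats` pool lines into a dict.
# Assumes the 2.2.x layout: pool name plus five integer columns
# (Active, Pending, Completed, Blocked, All time blocked).

def parse_tpstats(text):
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        # A pool line is a name followed by five integers; header and
        # blank lines don't match this shape and are skipped.
        if len(parts) == 6 and all(p.isdigit() for p in parts[1:]):
            active, pending, completed, blocked, all_time = map(int, parts[1:])
            stats[parts[0]] = {
                "active": active,
                "pending": pending,
                "completed": completed,
                "blocked": blocked,
                "all_time_blocked": all_time,
            }
    return stats

sample = """\
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     0         0      116265693         0                 0
ReadStage                         1         0       12345678         0                 0
"""

print(parse_tpstats(sample)["MutationStage"]["completed"])  # 116265693
```

A non-zero Pending or Blocked value that keeps growing between runs is usually the number to alert on.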
Ok, thanks Matija.
On Tue, Feb 21, 2017, at 11:43 AM, Matija Gobec wrote:
> They appear for each repair run and disappear when repair run
> finishes.
>
> On Tue, Feb 21, 2017 at 11:14 AM, Vincent Rischmann
> <m...@vrischmann.me> wrote:
>> Hi,
>
Hello,
I'm using a table like this:
CREATE TABLE myset (id uuid PRIMARY KEY)
which is basically a set I use for deduplication: `id` is a unique id for
an event. When I process an event I insert its id, and before processing
I check whether it has already been processed.
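The check-then-insert flow described above can be sketched as follows, with a plain in-memory set standing in for the `myset` table (the `process_once` helper and the set are illustrative, not part of the original setup):

```python
# Sketch of the deduplication flow: check the set before processing,
# insert the id after processing. A plain Python set stands in for
# the `myset` table here; the comments show the corresponding CQL.
import uuid

processed = set()  # stands in for: CREATE TABLE myset (id uuid PRIMARY KEY)

def process_once(event_id, handler):
    """Run handler for event_id unless it was already processed."""
    if event_id in processed:    # SELECT id FROM myset WHERE id = ?
        return False             # duplicate: skip
    handler(event_id)
    processed.add(event_id)      # INSERT INTO myset (id) VALUES (?)
    return True

seen = []
eid = uuid.uuid4()
process_once(eid, seen.append)
process_once(eid, seen.append)   # duplicate, handler is not called again
print(len(seen))  # 1
```

Note that the check and the insert are two separate operations, so two concurrent consumers can both pass the check for the same id; against Cassandra itself, `INSERT INTO myset (id) VALUES (?) IF NOT EXISTS` (a lightweight transaction) collapses both into one conditional round trip and closes that race, at the cost of a Paxos round.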
> Do you also store events in Cassandra? If yes, why not add a
> "processed" flag to the existing table(s), and fetch non-processed events
> with a single SELECT?
>
> Best regards, Vladimir Yudovin,
> *Winguzone[1] - Cloud Cassandra Hosting*
>
>
> On Fri
ith incremental repair.
> Furthermore, make sure you run repair daily after your first inc
> repair run, in order to work on small-sized repairs.
>
> Cheers,
>
>
> On Thu, Oct 27, 2016 at 4:27 PM Vincent Rischmann
> <m...@vrischmann.me> wrote:
>>
f you have particularly big partitions
> in the CFs that fail to get repaired ? You can run nodetool
> cfhistograms to check that.
>
> Cheers,
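The `nodetool cfhistograms` check suggested above prints a percentile table; the max of the `Partition Size` column is the number to look at. Below is a sketch that pulls it out of the command's output. The sample mimics the 2.x column layout (Percentile, SSTables, Write Latency, Read Latency, Partition Size, Cell Count) but is illustrative: the exact format varies between Cassandra versions.

```python
# Sketch: extract the max partition size (in bytes) from
# `nodetool cfhistograms` output, to spot oversized partitions.

def max_partition_size(output):
    for line in output.splitlines():
        parts = line.split()
        if parts and parts[0] == "Max":
            return int(parts[4])  # Partition Size column, in bytes
    return None

sample = """\
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                              (micros)          (micros)           (bytes)
50%             1.00             29.52             61.21              1331                10
Max             3.00            219.34           4055.27         322381140           4866323
"""

size = max_partition_size(sample)
print(size // (1024 * 1024))  # ~307 MB in this sample: big enough to hurt repairs
```

Partitions in the hundreds of megabytes are a common reason streaming and validation during repair fail or take very long.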
>
>
>
> On Thu, Oct 27, 2016 at 5:24 PM Vincent Rischmann
> <m...@vrischmann.me> wrote:
>> Thanks for the res
ncremental repair on a regular basis once you
> started as you'll have two separate pools of sstables (repaired and
> unrepaired) that won't get compacted together, which could be a
> problem if you want tombstones to be purged efficiently.
> Cheers,
>
> Le jeu. 27 oct. 2016 17:57,
partitions, so it's definitely good to
know, and I'll work on reducing partition sizes.
On Fri, Oct 28, 2016, at 06:32 PM, Edward Capriolo wrote:
>
>
> On Fri, Oct 28, 2016 at 11:21 AM, Vincent Rischmann
> <m...@vrischmann.me> wrote:
Doesn't paging help with this? Also, if we select a range via the
clustering key we're never really selecting the full partition. Or is
that wrong?
On Fri, Oct 28, 2016, at 05:00 PM, Edward Capriolo wrote:
> Big partitions are an anti-pattern; here is why:
>
> First Cassandra is not an analytic
> I hope that's helpful as there is no easy answer here, and the problem
> should be narrowed down by fixing all potential causes.
>
> Cheers,
>
>
>
>
> On Mon, Nov 21, 2016 at 5:10 PM Vincent Rischmann
> <m...@vrischmann.me> wrote:
Hello,
we have an 8-node Cassandra 2.1.15 cluster at work which is giving us a
lot of trouble lately.
The problem is simple: nodes regularly die, either from an out-of-memory
exception or because the Linux OOM killer kills the process.
For a couple of weeks now we increased the heap to 20Gb
>> any value in the range 8-20 (e.g. 60-70% of physical
>> memory).
>> Also, how many tables do you have across all keyspaces? Each table can
>> consume a minimum of 1MB of Java heap.
>>
>> Best regards, Vladimir Yudovin,
>> *Winguzone[1] - Hosted Cloud Cassandra Launch yo
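Vladimir's sizing advice above can be turned into a back-of-the-envelope sketch: a heap around 60-70% of physical memory, clamped to the 8-20 GB range, plus at least ~1 MB of heap budgeted per table. The helper names and the 0.65 factor are illustrative, not an official formula.

```python
# Back-of-the-envelope heap sizing per the advice quoted above.
GB = 1024 ** 3
MB = 1024 ** 2

def suggested_heap_bytes(physical_ram_bytes):
    target = int(physical_ram_bytes * 0.65)   # ~60-70% of RAM
    return max(8 * GB, min(target, 20 * GB))  # clamp to the 8-20 GB range

def min_table_overhead_bytes(table_count):
    return table_count * 1 * MB               # >= ~1 MB of heap per table

print(suggested_heap_bytes(32 * GB) // GB)        # 20
print(min_table_overhead_bytes(500) // MB)        # 500
```

So on a 32 GB node the clamp lands at 20 GB, and a cluster with hundreds of tables already burns hundreds of megabytes of heap on table overhead alone before any traffic.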
e swap is disabled
>
> Cheers,
>
>
> On Mon, Nov 21, 2016 at 2:57 PM Vincent Rischmann
> <m...@vrischmann.me> wrote:
>> @Vladimir
>>
>> We tried with 12GB and 16GB; the problem eventually appeared too.
>> In this particular cluster
Hi,
we have two Cassandra 2.1.15 clusters at work and are having some
trouble with repairs.
Each cluster has 9 nodes, and the amount of data is not gigantic, but
some column families have 300+GB of data.
We tried to use `nodetool repair` for these tables but at the time we
tested it, it made the
Hi,
I'm using cassandra-reaper
(https://github.com/thelastpickle/cassandra-reaper) to manage repairs of
my Cassandra clusters, probably like a bunch of other people.
When I started using it (it was still the version from the Spotify
repository) the UI didn't work well, and the Python CLI
oesn't seem to
> be write related)?
> Can you share the queries from your scheduled selects and the
> data model?
> Cheers,
>
>
> On Tue, Jun 6, 2017 at 2:33 PM Vincent Rischmann
> <m...@vrischmann.me> wrote:
>> Hi,
>>
>> we have a cl
er a full day. If the
> results are satisfying, generalize to the rest of the cluster. You
> need to experience peak load to make sure the new settings are
> fixing your issues.
> Cheers,
>
>
>
> On Tue, Jun 6, 2017 at 4:22 PM Vincent Rischmann
> <m...@vrischmann.me>
Hi,
we have a cluster of 11 nodes running Cassandra 2.2.9 where we regularly
get READ messages dropped:
> READ messages were dropped in last 5000 ms: 974 for internal timeout
> and 0 for cross node timeout
Looking at the logs, some are logged at the same time as Old Gen GCs.
These GCs all take
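To correlate drops like these with GC pauses over time, the dropped-message log line can be parsed into counters. The sketch below is keyed to the exact wording quoted above ("READ messages were dropped in last 5000 ms: ..."); other Cassandra versions may phrase the message differently.

```python
# Sketch: extract the counters from Cassandra's dropped-message log
# line so drops can be graphed alongside GC pause times.
import re

DROPPED_RE = re.compile(
    r"(?P<verb>\w+) messages were dropped in last (?P<window>\d+) ms: "
    r"(?P<internal>\d+) for internal timeout and "
    r"(?P<cross>\d+) for cross node timeout"
)

def parse_dropped(line):
    m = DROPPED_RE.search(line)
    if not m:
        return None
    return {
        "type": m.group("verb"),
        "window_ms": int(m.group("window")),
        "internal_timeout": int(m.group("internal")),
        "cross_node_timeout": int(m.group("cross")),
    }

line = ("READ messages were dropped in last 5000 ms: "
        "974 for internal timeout and 0 for cross node timeout")
print(parse_dropped(line)["internal_timeout"])  # 974
```

Internal timeouts (as opposed to cross-node ones) mean the node itself couldn't service the request in time, which fits the Old Gen GC correlation described above.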
Hello,
we recently added a new 5 node cluster used only for a single service,
and right now it's not even read from, we're just loading data into it.
Each node is identical: 32GiB of RAM, a 4-core Xeon E5-1630, 2 SSDs in
RAID 0, Cassandra v3.11.
We have two tables with roughly this schema:
CREATE
Hi,
while replacing a node in a cluster I saw this log:
2019-08-27 16:35:31,439 Gossiper.java:995 - InetAddress /10.15.53.27 is now
DOWN
it caught my attention because that IP address doesn't exist in the
cluster anymore, and hasn't for a long time.
After some reading I ran `nodetool
see them in the logs? If that's the case, then
> yes, I would do `nodetool assassinate`.
>
>
>
> On Wed, Aug 28, 2019 at 7:33 AM Vincent Rischmann
> wrote:
>> Hi,
>>
>> while replacing a node in a cluster I saw this log:
>>
>> 2019-