Thanks Kurt.
------ Original Message ------
From: "kurt";
Date: Nov 1, 2017 (Wed) 7:22
To: "User";
Subject: Re: decommissioned node still in gossip
It will likely hang around in gossip for 3-15 days but then should disappear.
You can get dropped-message statistics over JMX. For example, nodetool
tpstats has a counter for hints dropped since startup. That would be the
preferred way to track this, rather than parsing logs.
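If you do want to turn that output into numbers programmatically, here is a minimal sketch; the helper name is mine, and the two-column "Message type / Dropped" layout is assumed from 2.1-era tpstats output, so check it against your version:

```shell
# Sketch: extract the dropped-message section from `nodetool tpstats`
# output piped on stdin. The two-column layout is an assumption; the
# counters are cumulative since node startup, so diff two samples to
# get a rate over an interval.
dropped_counts() {
  awk '/^Message type/{seen=1; next} seen && NF==2 {print $1, $2}'
}
# Usage on a live node:
#   nodetool tpstats | dropped_counts
```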
On 2 Nov. 2017 6:24 am, "Anumod Mullachery" wrote:
Hi Varun,
I appreciate your answer, but this is not what is causing my problem.
Even if it is seq(), as the excellent article by Ben Slater says, it will
always repeat the same sequence at each new operation (in my case, one
operation equals one partition).
But in that issue, I saw another one:
Hi All,
In Cassandra v2.1.15, I'm able to pull dropped hints and dropped
messages from cassandra.log as below:
dropped hints-->
"/opt/xcal/apps/cassandra/logs/cassandra.log
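If log scraping is the route you take, a hedged sketch follows; the exact wording of dropped-message log lines varies across Cassandra versions, so the grep pattern here is an assumption to adapt to your logs:

```shell
# Sketch: count dropped-message log lines for a given message type.
# The "<TYPE> messages dropped" wording is assumed and version-dependent;
# adjust the pattern to match your cassandra.log.
count_dropped() {
  # $1 = message type, e.g. HINT or MUTATION; reads log text on stdin
  grep -c "$1 messages dropped" || true
}
# Usage:
#   count_dropped HINT < /opt/xcal/apps/cassandra/logs/cassandra.log
```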
https://www.instaclustr.com/deep-diving-cassandra-stress-part-3-using-yaml-profiles/
In this particular blog, they mentioned your case.
Changed uniform() distribution to seq() distribution
https://issues.apache.org/jira/browse/CASSANDRA-12490
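For reference, the uniform-to-seq change the blog describes lives in the columnspec of a cassandra-stress YAML profile; a sketch (the column name and range are illustrative, not taken from the thread):

```yaml
# Fragment of a cassandra-stress user profile (values illustrative).
columnspec:
  - name: device_id
    population: seq(1..1000000)        # walks ids in order, restarting each run
    # population: uniform(1..1000000)  # random draws instead of a fixed sequence
```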
Thanks!!
On Thu, Nov 2, 2017 at 12:54 AM, Varun
Hi,
https://www.instaclustr.com/deep-diving-into-cassandra-stress-part-1/
In the blog, they cover many things in detail.
Thanks!!
On Thu, Nov 2, 2017 at 12:38 AM, Lucas Benevides <
lu...@maurobenevides.com.br> wrote:
> Dear community,
>
> I am using Cassandra Stress Tool and trying to
Dear community,
I am using the Cassandra Stress Tool and trying to simulate IoT-generated data.
So I created a column family with the device_id as the partition key.
But in every operation (the parameter received in the -n option)
the generated values are the same. For instance, I have a
Thanks a lot, Chris.
I had noticed that even the TotalCompactionsCompleted counter is higher
than the number of SSTable compactions, which is what interests me most.
I measured the number of compactions by turning on log_all in the
compaction settings of the tables and reading the
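Counting compactions from the log can be sketched as below; the 'Compacting' pattern is an assumption (the exact wording depends on the Cassandra version, and with log_all the entries appear in the debug log), so verify it against your own log lines:

```shell
# Sketch: count compaction events in a Cassandra log read from stdin.
# The "Compacting" pattern is assumed and version-dependent; adjust it
# to match the compaction lines your log_all setting actually produces.
count_compactions() {
  grep -c 'Compacting' || true
}
# Usage:
#   count_compactions < /var/log/cassandra/debug.log
```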
It will likely hang around in gossip for 3-15 days but then should
disappear. As long as it's not showing up in the cluster it should be OK.
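To check whether the node is still in gossip (and later confirm it has aged out), a small sketch; the helper name is mine, and the bare "/<ip>" entry-header format of `nodetool gossipinfo` output is an assumption to verify against your version:

```shell
# Sketch: report whether an IP still appears in `nodetool gossipinfo`
# output read from stdin. Entry headers are assumed to be bare "/<ip>" lines.
gossip_has_node() {
  # $1 = IP address to look for
  if grep -q "^/$1\$"; then echo "still in gossip"; else echo "gone"; fi
}
# Usage on a live node:
#   nodetool gossipinfo | gossip_has_node x.x.x.x
```

If the endpoint never ages out on its own, recent Cassandra versions also offer `nodetool assassinate <ip>` to force-remove its gossip state, though that is a last resort.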
On 1 Nov. 2017 20:25, "Peng Xiao" <2535...@qq.com> wrote:
> Dear All,
>
> We have decommissioned a DC, but from system.log it's still gossiping
> INFO
Dear All,
We have decommissioned a DC, but from system.log it's still gossiping:
INFO [GossipStage:1] 2017-11-01 17:21:36,310 Gossiper.java:1008 - InetAddress /x.x.x.x is now DOWN
Could you please advise?
Thanks,
Peng Xiao