As I see it, the state of 162.243.109.94 is UL (Up/Leaving), so maybe this is
causing the problem.
On Sunday, October 26, 2014 11:57 PM, Tim Dunphy bluethu...@gmail.com
wrote:
Hey all,
I'm trying to decommission a node.
First I'm getting a status:
[root@beta-new:/usr/local] #nodetool
Tyler,
I see. That explains it. Any chance you might know how the Datastax Java driver
behaves for this (odd) case?
Cheers,
Jens
———
Jens Rantil
Backend engineer
Tink AB
Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se
On Friday, Oct
On Mon, Oct 27, 2014 at 11:05 AM, Jens Rantil jens.ran...@tink.se wrote:
Tyler,
I see. That explains it. Any chance you might know how the Datastax Java
driver behaves for this (odd) case?
The Row.getInt() method will do the same for nulls and return 0 (though, of
course, the Row.isNull() method
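For what it's worth, a minimal sketch of that behaviour with the DataStax Java
driver 2.x (the keyspace, table, and column names here are hypothetical):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class NullIntCheck {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_keyspace")) {
                Row row = session.execute("SELECT age FROM users WHERE id = 1").one();
                // getInt() cannot signal "no value": a null column comes back as 0.
                int age = row.getInt("age");
                // isNull() is what distinguishes a stored 0 from a missing value.
                boolean ageMissing = row.isNull("age");
                System.out.println("age=" + age + ", missing=" + ageMissing);
            }
        }
    }

So any code that cares about the distinction has to check isNull() before
trusting the zero.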
Hi all,
We're using Hector in one of our older use cases with C* 1.0.9.
We suspect it increases our total round-trip write latency to Cassandra.
C* metrics show low latency, so we assume the problem is somewhere else.
What are the configuration parameters you would recommend to
investigate/change
As I see it, the state of 162.243.109.94 is UL (Up/Leaving), so maybe this is
causing the problem.
OK, that's an interesting observation. How do you fix a node that is in a UL
state? What causes this?
Also, is there any document that explains what all the nodetool
abbreviations (UN, UL) stand for?
On
Also, is there any document that explains what all the nodetool
abbreviations (UN, UL) stand for?
-- The documentation is in the command output itself
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens
Hi Tim,
The node with IP .94 is leaving. Maybe something went wrong while streaming
data. You could use nodetool netstats on both nodes to check whether any
streaming connection is stuck.
If needed, you could force-remove the leaving node by shutting it down
directly. Then, perform nodetool
Hi,
What version of Hector are you using? Perhaps start by trying a different
consistency level? Is any node in the cluster under memory pressure (you can
check the Cassandra system log)? What is the average load per node currently?
Also look at concurrent_writes in cassandra.yaml to see if you can
increase
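If it helps, here is a rough sketch of overriding Hector's consistency levels
per keyspace (Hector 1.x-style API; the cluster and keyspace names are made
up), which is usually the first knob to try when chasing client-side write
latency:

    import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.HConsistencyLevel;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;

    public class HectorConsistencyDemo {
        public static void main(String[] args) {
            Cluster cluster = HFactory.getOrCreateCluster("TestCluster",
                    new CassandraHostConfigurator("127.0.0.1:9160"));
            // Drop reads and writes to CL.ONE to see whether extra replica
            // round trips are what is inflating the client-side latency.
            ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
            ccl.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
            ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
            Keyspace keyspace = HFactory.createKeyspace("my_keyspace", cluster, ccl);
            // ... issue the same writes through this Keyspace and compare timings.
        }
    }

Measuring the client round trip with and without the change tells you more
than the server-side metrics alone.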
Hi guys, any feedback on this could be very useful for me, and I guess for
more people out there.
2014-10-23 11:16 GMT+02:00 Alain RODRIGUEZ arodr...@gmail.com:
Hi,
We are currently wondering about the best way to configure the network
architecture for a multi-DC Cassandra cluster.
Reading
Hi!
2014-10-23 11:16 GMT+02:00 Alain RODRIGUEZ arodr...@gmail.com:
We are currently wondering about the best way to configure the network
architecture for a multi-DC Cassandra cluster.
With solution 2, we would need to open IPs one by one on at least 3 ports
(7000, 9042, 9160). 100 entries
Again, from our experience with 2.0.x:
Revert to the defaults - you are manually setting the heap way too high, IMHO.
On our small nodes we tried LCS - way too much compaction - so we switched all
CFs to STCS.
We do a major rolling compaction on our small nodes weekly during less busy
hours - works great. Be
Tombstones will be a very important issue for me, since the dataset is very
much a rolling dataset using TTLs heavily.
-- You can try the new DateTiered compaction strategy
(https://issues.apache.org/jira/browse/CASSANDRA-6602), released in 2.1.1, if
you have a time-series data model, to eliminate
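As a rough illustration only (the keyspace, table, and option values below are
hypothetical; pick them to match your TTL and write pattern), switching an
existing time-series table over is a single ALTER TABLE, for example via the
Java driver:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class EnableDtcs {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // metrics.events stands in for a TTL'd time-series table.
                session.execute(
                    "ALTER TABLE metrics.events WITH compaction = {"
                    + " 'class': 'DateTieredCompactionStrategy',"
                    + " 'base_time_seconds': '3600',"
                    + " 'max_sstable_age_days': '10' }");
            }
        }
    }

The appeal for TTL-heavy data is that SSTables of similar age are grouped
together, so whole expired SSTables can be dropped instead of being rewritten.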
Hi,
I have a standalone Spark setup where the executor is set to have 6.3 GB of
memory; as I am using two workers, there is 12.6 GB of memory and 4 cores in
total.
I am trying to cache an RDD of approximately 3.2 GB, but apparently it is not
cached, as I can neither see BlockManagerMasterActor:
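Not a Cassandra question as such, but two things commonly explain this:
persist() is lazy, so nothing shows up as cached until an action materializes
the RDD, and only a fraction of the executor heap
(spark.storage.memoryFraction) is available for storage. A hedged sketch with
the Java API (the input path is made up):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.StorageLevels;

    public class CacheCheck {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("cache-check");
            JavaSparkContext sc = new JavaSparkContext(conf);
            JavaRDD<String> lines = sc.textFile("hdfs:///data/events");
            // persist() only marks the RDD; blocks are stored on the first action.
            lines.persist(StorageLevels.MEMORY_AND_DISK_SER);
            long n = lines.count(); // forces evaluation, so caching actually happens
            System.out.println("materialized " + n + " lines");
            sc.stop();
        }
    }

MEMORY_AND_DISK_SER spills what does not fit instead of silently dropping
partitions, which makes it easier to see whether memory is the limit.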
On Mon, Oct 27, 2014 at 12:17 PM, shahab shahab.mok...@gmail.com wrote:
I have a standalone Spark setup where the executor is set to have 6.3 GB of
memory; as I am using two workers, there is 12.6 GB of memory and 4 cores in
total.
Did you intend to mail the Apache Spark mailing list, instead of the
Hello,
I am looking to change how we trigger maintenance operations in our C*
clusters. The end goal is to schedule and run the jobs using a system that
is backed by Serf to handle the event propagation.
I know that when issuing some operations via nodetool, the command blocks
until the
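One option is to skip nodetool for the status side and poll the relevant MBean
over JMX yourself. A minimal sketch, assuming the default JMX port 7199 and
using the pending-compactions gauge as the example (the MBean and attribute to
watch depend on the operation and the Cassandra version):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class PendingCompactions {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                // Same gauge nodetool compactionstats reports as "pending tasks".
                ObjectName pendingTasks = new ObjectName(
                        "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
                Object pending = mbs.getAttribute(pendingTasks, "Value");
                System.out.println("pending compaction tasks: " + pending);
            }
        }
    }

For long-running operations like repair, the same JMX connection can also
subscribe to progress notifications instead of polling, which fits an
event-driven setup like Serf better.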
On Mon, Oct 27, 2014 at 1:33 PM, Tim Heckman t...@pagerduty.com wrote:
I know that when issuing some operations via nodetool, the command blocks
until the operation is finished. However, is there a way to reliably
determine whether or not the operation has finished without monitoring that
On Mon, Oct 27, 2014 at 1:44 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Oct 27, 2014 at 1:33 PM, Tim Heckman t...@pagerduty.com wrote:
I know that when issuing some operations via nodetool, the command blocks
until the operation is finished. However, is there a way to reliably
If you decide to go the iptables route, you could try neti
(https://github.com/Instagram/neti; blog post here:
http://instagram-engineering.tumblr.com/post/100758229719/migrating-from-aws-to-aws).
On 27 October 2014 16:44, Juho Mäkinen juho.maki...@gmail.com wrote:
Hi!
2014-10-23 11:16
https://github.com/BrianGallew/cassandra_range_repair
This breaks down the repair operation into very small portions of the ring
as a way to try and work around the current fragile nature of repair.
Leveraging range repair should go some way towards automating repair (this
is how the automatic