Hi,
I have a cluster running 2.0.9 with 2 data centers. I noticed that
'nodetool repair -pr keyspace cf' runs very slowly (OpsCenter shows that the
node's data size is 39 GB and the largest SSTable size is about 7 GB, so the
column family is not huge; SizeTieredCompactionStrategy is used). Repairing
://dba.stackexchange.com/questions/82414/do-you-have-to-run-nodetool-repair-on-every-node
.
Thanks again.
George
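For watching what a slow repair like this is actually doing, two stock nodetool subcommands are the usual starting point; a minimal sketch (guarded so it is a no-op on a machine without a Cassandra node running):

```shell
#!/bin/sh
# Watch a slow repair. Both are standard nodetool subcommands; guarded so
# this script is a harmless no-op where Cassandra tooling is not installed.
if command -v nodetool >/dev/null 2>&1; then
  nodetool compactionstats   # pending/active validation compactions
  nodetool netstats          # streams for the ranges being synced
fi
```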
On Thu, Sep 1, 2016 at 10:22 AM, Paulo Motta <pauloricard...@gmail.com>
wrote:
> https://issues.apache.org/jira/browse/CASSANDRA-7450
>
> 2016-09-01 13:11 GMT-03:0
Romain,
I was trying what you mentioned as below:
a. nodetool stop VALIDATION
b. echo run -b org.apache.cassandra.db:type=StorageService forceTerminateAllRepairSessions | java -jar /tmp/jmxterm/jmxterm-1.0-alpha-4-uber.jar -l 127.0.0.1:7199
to stop a seemingly forever-going repair but seeing
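Spelled out in one place (same jar path and JMX port as quoted above, with the wrapped command rejoined; guarded so each step is a no-op without the tools present), the abort sequence is:

```shell
#!/bin/sh
# Abort sequence from steps (a) and (b) above.

# (a) stop in-flight validation compactions on this node
if command -v nodetool >/dev/null 2>&1; then
  nodetool stop VALIDATION
fi

# (b) terminate all active repair sessions via the StorageService MBean
JAR=/tmp/jmxterm/jmxterm-1.0-alpha-4-uber.jar
if [ -f "$JAR" ]; then
  echo "run -b org.apache.cassandra.db:type=StorageService forceTerminateAllRepairSessions" |
    java -jar "$JAR" -l 127.0.0.1:7199
fi
```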
Hi,
I am using version 2.0.9. I have been looking into the logs to see if a
repair has finished. Each time a repair is started on a node, I see a
log line like "INFO [Thread-112920] 2016-09-16 19:00:43,805
StorageService.java (line 2646) Starting repair command #41, repairing 2048
ranges for
ed and the
> ranges that failed could be a very good thing as well. It would be easy to
> then read the repair result and to know what to do next (re-run repair on
> some ranges, move to the next node, etc).
>
>
> 2016-09-20 17:00 GMT+02:00 Li, Guangxing <guangxing...@pearson.c
ats on nodes?
>
> Romain
>
>
> On Wednesday, September 21, 2016 at 16:45, "Li, Guangxing" <
> guangxing...@pearson.com> wrote:
>
>
> Alain,
>
> my script actually grep through all the log files, including those
> system.log.*. So it was probably due to a fa
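A grep script along those lines can pair each "Starting repair command #N" entry with a matching "finished" entry to spot repairs that never completed. A minimal sketch against a throwaway sample log (the sample mimics the 2.0.x format quoted in this thread; the exact "finished" message text is an assumption worth verifying against your own logs, and LOG should point at your real system.log* files):

```shell
#!/bin/sh
# Pair "Starting repair command #N" lines with "Repair command #N finished"
# lines. The sample log below is fabricated to match the format quoted in
# the thread; replace it with your actual system.log* files.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
INFO [Thread-112920] 2016-09-16 19:00:43,805 StorageService.java (line 2646) Starting repair command #41, repairing 2048 ranges for keyspace ks
INFO [Thread-112920] 2016-09-16 21:10:02,113 StorageService.java (line 2721) Repair command #41 finished
INFO [Thread-112931] 2016-09-17 01:00:00,001 StorageService.java (line 2646) Starting repair command #42, repairing 2048 ranges for keyspace ks
EOF

RESULT=""
for id in $(grep -o 'Starting repair command #[0-9]*' "$LOG" | grep -o '[0-9]*$' | sort -un); do
  if grep -q "Repair command #$id finished" "$LOG"; then
    RESULT="$RESULT $id:finished"
  else
    RESULT="$RESULT $id:unfinished"   # still running, or died silently
  fi
done
echo "$RESULT"
rm -f "$LOG"
```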
will take more than a day at this rate. I guess the only thing I can do is
to upgrade to 2.1 and start using incremental repair?
Thanks.
George.
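For reference, the 2.1-style invocation being considered would look roughly like the following; -par and -inc are the 2.1-era flags as I recall them, so verify with `nodetool help repair` after upgrading (guarded no-op without nodetool installed):

```shell
#!/bin/sh
# Hypothetical 2.1 invocation: parallel incremental repair of one CF.
# Flag names are from memory of 2.1-era nodetool; check `nodetool help repair`.
if command -v nodetool >/dev/null 2>&1; then
  nodetool repair -par -inc keyspace cf
fi
```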
On Fri, Sep 16, 2016 at 3:03 PM, Dor Laor <d...@scylladb.com> wrote:
> On Fri, Sep 16, 2016 at 11:29 AM, Li, Guangxing <guangxing...@pearson.
> - https://github.com/spotify/cassandra-reaper
>
> - https://github.com/spodkowinski/cassandra-reaper-ui
>
> Best,
> Romain
>
>
>
> On Wednesday, September 21, 2016 at 22:32, "Li, Guangxing" <
> guangxing...@pearson.com> wrote:
>
> Romain,
>
> I started running a new repair. If I see such behavior again, I will try
> what you mentioned.
>
> Thanks.
>
Hi,
I have a 3-node cluster, each node with less than 200 GB of data. Currently all
nodes have the default num_tokens value of 256. My colleague told me that
with the data size I have (less than 200 GB on each node), I should change
num_tokens to something like 32 to get better performance, especially
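One caveat if you try this: num_tokens cannot simply be edited on a node that already owns data; the new value only applies to a node that is rebootstrapped or replaced. A sketch of the edit itself, done here on a throwaway copy rather than a live cassandra.yaml:

```shell
#!/bin/sh
# Sketch only: what the cassandra.yaml change looks like. Changing
# num_tokens on a node that already has data is NOT picked up in place;
# the node must be rebootstrapped/replaced with the new setting.
CONF=$(mktemp)
printf 'num_tokens: 256\n' > "$CONF"
sed 's/^num_tokens: .*/num_tokens: 32/' "$CONF" > "$CONF.new"
NEW=$(cat "$CONF.new")
echo "$NEW"
rm -f "$CONF" "$CONF.new"
```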
Thanks a lot, guys. That is lots of useful info to digest.
In my cassandra.yaml, request_timeout_in_ms is set to
1, streaming_socket_timeout_in_ms is not set hence takes the default of 0.
It looks like 2.1.x has made quite some improvement in this area. Besides, I
can use incremental repair. So for
Hi,
I secured my C* cluster by having "authenticator:
org.apache.cassandra.auth.PasswordAuthenticator" in cassandra.yaml. I know
it secures the CQL native interface running on port 9042 because my code
uses such interface. Does this also secure the Thrift API interface running
on port 9160? I
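An empirical way to check is to try both client interfaces with and without credentials; if the authenticator applies to a port, unauthenticated connections there should be rejected. A guarded sketch (flag names for the 2.0-era cassandra-cli and cqlsh are from memory, so verify with each tool's --help):

```shell
#!/bin/sh
# Probe both interfaces with credentials. Guarded no-ops where the client
# tools are not installed; flag names should be checked against --help.

# CQL native interface (9042)
if command -v cqlsh >/dev/null 2>&1; then
  cqlsh -u cassandra -p cassandra -e 'LIST USERS;' 127.0.0.1 9042
fi

# Thrift interface (9160); -u/-pw are my recollection of cassandra-cli flags
if command -v cassandra-cli >/dev/null 2>&1; then
  cassandra-cli -h 127.0.0.1 -p 9160 -u cassandra -pw cassandra
fi
```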