Hi,
I have a question regarding the "nodetool repair -dc" option. Recently we added
multiple nodes to one DC, and we want to perform repair only on the current DC.
Here is my question:
Do we need to perform "nodetool repair -dc" on all nodes belonging to that DC,
or only on one node of that DC?
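A note grounded in standard Cassandra behavior: a single `nodetool repair` run
only repairs the token ranges for which the target node is a replica, so
covering the whole DC generally means running it against every node in that DC.
A minimal sketch, assuming hypothetical host names and a DC named DC1:

```shell
# Hypothetical hosts and DC name -- adjust to your cluster.
for host in node1.dc1 node2.dc1 node3.dc1; do
    nodetool -h "$host" repair -dc DC1
done
```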
The coordinator read/scan latencies (Scan is just a different name for Range, so
the coordinator's view of RangeLatency) are the latencies from the coordinator's
perspective, so they include network latency between replicas and such. This
was actually added for speculative retry (why there is no
Thank you Nitan.
On Jul 7, 2017 8:59 PM, "Nitan Kainth" wrote:
Yes. Because that's the ip used for internode communication
Sent from my iPhone
On Jul 7, 2017, at 10:52 AM, CPC wrote:
Hi Nitan,
Do you mean setting broadcast_address to private network
You can set up the broadcast address for the IP on which nodes will communicate
with each other. Your network team can set up a routing table from source to
target. We had a similar setup in one of my previous projects, where we
segregated the network between application and C* node communication.
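For illustration, a minimal sketch of the cassandra.yaml settings involved,
with placeholder addresses (10.0.1.5 standing in for the private interface and
203.0.113.5 for the public one):

```yaml
# cassandra.yaml -- placeholder addresses
listen_address: 10.0.1.5       # private interface used for internode traffic
broadcast_address: 10.0.1.5    # address this node advertises to other nodes
rpc_address: 203.0.113.5       # public interface for native client connections
```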
Yes. Because that's the ip used for internode communication
Sent from my iPhone
> On Jul 7, 2017, at 10:52 AM, CPC wrote:
>
> Hi Nitan,
>
> Do you mean setting broadcast_address to private network would suffice?
>
>> On 7 July 2017 at 20:45, Nitan Kainth
I’m currently on 2.1.12. Are you saying this bug exists on the current latest
version 3.0.14?
Thank you
Mark
801-705-7115 office
From: Fay Hou [Storage Service] [mailto:fay...@coupang.com]
Sent: Thursday, July 6, 2017 2:24 PM
To: User
Subject: Re: Repair on
Hi again,
Trying to restart my nodes after restoring snapshot data, initial tokens have
been added in as per the instructions online.
In system.log I get this error (the same error if I run nodetool cleanup):
Exception encountered during startup: The number of initial tokens (by
initial_token)
Jonathan,
Make sure initial_token has the values from the backup cluster, i.e. 256 tokens.
It is possible there is a typo.
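One hedged way to carry the tokens over, assuming nodetool access to the source
node (host names are placeholders):

```shell
# On the source (backup) node: collect its 256 tokens as a
# comma-separated list suitable for initial_token.
nodetool info -T | awk '$1 == "Token" {print $3}' | paste -sd, -
# Paste the result into cassandra.yaml on the matching target node:
#   initial_token: <comma-separated list from above>
# Afterwards, verify the rings agree:
diff <(nodetool -h source-node ring) <(nodetool -h target-node ring)
```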
> On Jul 7, 2017, at 9:14 AM, Jonathan Baynes
> wrote:
>
> Hi again,
>
> Trying to restart my nodes after restoring snapshot data, initial tokens
Yes, both clusters match. I've checked 3 times and diff'd it as well. Would file
format have any effect? I'm amending on a Windows machine and returning the file
back to Linux.
Thanks
J
Sent from my iPhone
On 7 Jul 2017, at 17:43, Nitan Kainth
> wrote:
I have seen Windows format cause problems. Run dos2unix on the cassandra.yaml
file (on the Linux box) and see if it helps.
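If dos2unix isn't installed on the box, stripping the carriage returns with sed
does the same job. A minimal sketch against an assumed file path:

```shell
# Simulate a cassandra.yaml saved with Windows CRLF line endings.
printf 'cluster_name: Test\r\nnum_tokens: 256\r\n' > cassandra.yaml
# Remove the trailing carriage return from every line (what dos2unix does).
sed -i 's/\r$//' cassandra.yaml
# Confirm no CR bytes remain (grep exits non-zero when nothing matches).
grep -q $'\r' cassandra.yaml && echo "still CRLF" || echo "clean"
```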
Sean Durity
lord of the (C*) rings (Staff Systems Engineer - Cassandra)
MTC 2250
#cassandra - for the latest news and updates
From: Jonathan Baynes
Hi,
We are building 2 datacenters, with each machine having one public interface
(for native client connections) and one private interface (for internode
communication). What we noticed is that nodes in one datacenter are trying to
communicate with nodes in the other DC over their public interfaces.
I mean:
DC1 Node1 public
Hi Nitan,
Do you mean setting broadcast_address to private network would suffice?
On 7 July 2017 at 20:45, Nitan Kainth wrote:
> You can setup broadcast address for the IP on which Nodes will communicate
> with each other. You network team can setup routing table from source
Thanks ZAIDI,
The C++ driver doesn't support tracing, so I am executing those from cqlsh.
When I am tracing I get the error below; I increased --request-timeout to 3600
in cqlsh.
> ReadTimeout: code=1200 [Coordinator node timed out waiting for replica
> nodes' responses]
1 GB heap is very small. Why not try increasing it to 50% of RAM and see if it
helps you track down the real issue. It is hard to tune around a bad data
model, if that is indeed the issue. Seeing your tables and queries would help.
Sean Durity
From: Pranay akula
Yes, it is possible; generally the quotes get messed up. As an alternative, I
would suggest commenting out num_tokens and starting. Then compare the output
of nodetool ring on source and target.
> On Jul 7, 2017, at 9:47 AM, Jonathan Baynes
> wrote:
>
> Yes both clusters match I've
On 2017-07-07 07:26 (-0700), Pranay akula wrote:
> Lately i am seeing some select queries timing out, data modelling to blame
> for but not in a situation to redo it.
>
> Does increasing heap will help ??
>
> currently using 1GB new_heap, I analysed the GC logs
Hello,
We had a test server crash for some reason (probably not related to Cassandra)
and now, when trying to start Cassandra, it gives the following error:
ERROR [main] 2017-07-06 09:29:56,140 JVMStabilityInspector.java:82 -
Exiting due to error while processing commit log during initialization.
Hi Hannu,
I remember there have been discussions about this in the past. Most probably
there is already a JIRA for this.
I roughly remember a consensus like this:
- Default behaviour should remain
- It should be configurable to the needs and preferences of the DBA
- It should at least spit out
Hello,
Yes, that's what we do when things like this happen.
My thinking is just that when the commit log is corrupted, you cannot really do
anything other than exactly those steps: delete the corrupted file and run
repair after starting. At least I haven't heard of any tools for salvaging the
commit log.
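Those steps, as a hedged shell sketch (paths assume a package install, and the
segment file name is a placeholder taken from the startup error):

```shell
sudo systemctl stop cassandra
# Move the corrupted segment named in the error message out of the way.
sudo mv /var/lib/cassandra/commitlog/CommitLog-6-<segment-id>.log /tmp/
sudo systemctl start cassandra
# Writes held only in that segment are lost locally; pull them from replicas.
nodetool repair
```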
Hello, I have several questions about Cassandra metrics. What exactly do the
following metrics mean:
- CoordinatorReadLatency
- CoordinatorScanLatency
- ReadLatency
- RangeLatency
- ViewLockAcquireTime
- ViewReadTime
--
Regards,
Pavel Sapezhko
skype: p.sapezhko
The 1st result on Google returns:
http://cassandra.apache.org/doc/latest/operating/metrics.html
On Fri, Jul 7, 2017 at 12:16 PM, Pavel Sapezhko
wrote:
> Hello, I have several question about cassandra metrics. What does exactly
> mean the next metrics:
>
>-
On Thu, Jul 6, 2017 at 6:58 PM, Charulata Sharma (charshar) <
chars...@cisco.com> wrote:
> Hi,
>
> I am facing similar issues with SYSTEM_AUTH keyspace and wanted to know
> the implication of disabling the "*cassandra*" superuser.
>
Unless you have scheduled any tasks that require the user with
Do you really think that I don't read the docs? Is there enough information in
the documentation? I think not. What exactly does CoordinatorScanLatency mean,
for example? Is it the latency of a full table scan, or maybe a range scan by
clustering key? What exactly does ViewLockAcquireTime mean? What is "partition
What exactly does CoordinatorScanLatency mean, for example?
CoordinatorScanLatency is a timer metric that presents the coordinator range
scan latency for a table.
Is it the latency of a full table scan, or maybe a range scan by clustering key?
It is a range scan. The clustering key is used to only
>> I analysed the GC logs not having any issues with major GC's
If you don't have issues with GC, then why do you want to [tune] GC
parameters?
Instead, focus on why the select queries are taking time. Maybe take a look at
their trace?
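When driver-side tracing isn't available, cqlsh's session tracing is one way to
see where a slow read spends its time. A sketch with placeholder
keyspace/table names and predicate:

```sql
-- In cqlsh; my_ks.my_table and the predicate are placeholders.
TRACING ON;
SELECT * FROM my_ks.my_table WHERE pk = 42;
-- cqlsh prints each trace event with its source node and elapsed time.
TRACING OFF;
```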
From: Pranay akula
Hi,
Can anyone help me? I'm trying (and failing) to move data from my 3-node
production C* cluster to my 3-node development cluster.
Here is the fine print...
Oracle Linux 7.3
C* 3.0.11
3 nodes (virtual nodes: 256)
1 keyspace (replication factor 3), quorum consistency
1 table
Snapshot
Lately I am seeing some select queries timing out; data modelling is to blame,
but we are not in a situation to redo it.
Will increasing the heap help?
We are currently using a 1GB new_heap. I analysed the GC logs and am not seeing
any issues with major GCs.
Using G1GC, will increasing new_heap help?
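For context on the G1GC question: the comments in Cassandra's own
cassandra-env.sh advise against fixing the young generation size when G1 is in
use, since G1 sizes it adaptively; the total heap is the knob to experiment
with. A hedged sketch (the 8G value is an illustrative assumption, not a
recommendation):

```shell
# cassandra-env.sh -- illustrative values only
MAX_HEAP_SIZE="8G"     # total heap; try larger if the machine has spare RAM
# With G1GC, leave HEAP_NEWSIZE unset so G1 can size the young gen itself:
# HEAP_NEWSIZE="..."
```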
Thanks ZAIDI,
The problem is that the traced queries are also timing out, so I am not sure
how to troubleshoot.
Will increasing new_heap help reads? What other parameters can I tune so that I
can identify the issue?
Thanks
Pranay.
On Fri, Jul 7, 2017 at 10:45 AM, ZAIDI, ASAD A
Hello Jonathan,
As both clusters are the same size: "Do I copy the snapshots from all the
nodes?" Yes, this will work; just make sure that you are copying data to the
nodes with the associated tokens.
Thanks
Pranay.
On Fri, Jul 7, 2017 at 10:48 AM, Jonathan Baynes <
jonathan.bay...@tradeweb.com> wrote:
>
Can you provide more details?
E.g. the table structure, the app used for the query, the query itself, and the
error message.
Also get the output of the following commands from your cluster nodes (note
that one command uses "." and the other a space between keyspace and table
name):
nodetool -h
Hi guys,
I am trying to calculate sum(int) with a prepared statement. The problem is
that I have many rows to sum up and the result could be a bigint; however, the
Cassandra Java driver receives the metadata as int. My current workaround is
to change the column type to bigint.
What's the best practice
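One alternative worth checking: since CQL added cast() in the selection clause
(Cassandra 3.2+), the value can be widened inside the aggregate instead of
altering the column. A sketch with placeholder names:

```sql
-- my_ks.my_table, amount, and pk are placeholders.
SELECT sum(cast(amount AS bigint)) FROM my_ks.my_table WHERE pk = 1;
```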