Hi Ted.
How long are the latency spikes when they occur? Have you investigated
compactions (nodetool compactionstats) during the spike?
Are you also seeing large latency spikes in the p95 (95th percentile)
metrics? p99 catches outliers, which aren't always cause for
alarm.
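For example, something like this (a sketch; check the flags against your
nodetool version):

  nodetool compactionstats -H                # pending and active compactions
  nodetool cfhistograms <keyspace> <table>   # per-table latency percentiles, incl. p95/p99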
Are the
In Cassandra versions 2.1.11 - 2.1.16, after we decommission a node or
datacenter, we observe the decommissioned nodes marked as DOWN in the
cluster when we run "nodetool describecluster". The nodes, however, do not
show up in the "nodetool status" output.
The decommissioned node also does
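For reference, the commands we use to observe this (the gossip state shown is
what we would expect after a clean decommission):

  nodetool describecluster   # decommissioned nodes still listed, marked DOWN/UNREACHABLE
  nodetool status            # decommissioned nodes absent, as expected
  nodetool gossipinfo        # a cleanly removed node should show STATUS:LEFT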
Sorry about the confusion.
I meant that the SSTables for the system.batches table, which were created
after dropping the MV, still persist and are huge in size.
Original Message
Subject: Re: Huge size of system.batches table after dropping an incomplete
Materialized View
What exactly persists? I didn't really understand you; could you be more
specific?
2017-01-23 15:40 GMT+01:00 Vinci:
> Thanks for the response.
>
> After the MV failure and errors, MV was dropped and the table was
> truncated.
> Then I recreated the MV and Table from
Thanks for the response.
After the MV failure and errors, the MV was dropped and the table was truncated.
Then I recreated the MV and the table from scratch, which worked as expected.
The huge SSTable sizes I mentioned are from after that. Somehow they still
persist, with the same last-modification time.
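To show what I am seeing (the data path assumes the default
/var/lib/cassandra layout; adjust for your install):

  nodetool cfstats system.batches                    # space used and SSTable count
  ls -lh /var/lib/cassandra/data/system/batches-*/   # raw SSTable files and their mtimes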
I am bulk importing a large number of SSTables that I pre-generated using the
bulk load process outlined at
https://github.com/yukim/cassandra-bulkload-example
I am using the 'sstableloader' utility to import them into a nine-node
Cassandra cluster.
During the sstableloader execution, I
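For context, the invocation is along these lines (host IPs are placeholders;
the last two path components must be the keyspace and table names):

  sstableloader -d 10.0.0.1,10.0.0.2 /path/to/generated/my_keyspace/my_table/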
Thanks, Benjamin,
I found the issue: hinted handoff was turned off in cassandra.yaml.
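For anyone searching later, these are the relevant cassandra.yaml settings
(values shown are the defaults):

  hinted_handoff_enabled: true
  max_hint_window_in_ms: 10800000   # 3 hours; hints stop after a node is down this long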
Thanks & Regards,
Abhishek Kumar Maheshwari
+91- 805591 (Mobile)
Times Internet Ltd. | A Times of India Group Company
FC - 6, Sector 16A, Film City, Noida, U.P. 201301 | INDIA
Sorry for the short answer, I am on the run:
I guess your hints expired. The default hint window is 3h; if a node is down
for longer than that, no further hints are written for it.
Only a repair will help then.
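A minimal sketch of that repair (keyspace name is a placeholder; run it on
the node that was down):

  nodetool repair my_keyspace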
2017-01-23 12:47 GMT+01:00 Abhishek Kumar Maheshwari <
abhishek.maheshw...@timesinternet.in>:
> Hi Benjamin,
Hi Benjamin,
I found the issue: while making the query, I was overriding LOCAL_QUORUM with
QUORUM.
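For anyone else hitting this, the consistency level can be pinned per session
in cqlsh before issuing the query:

  cqlsh> CONSISTENCY LOCAL_QUORUM;

Drivers expose an equivalent per-statement or cluster-wide setting.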
Also, one more question:
I was able to insert data in DRPOCcluster. But when I bring up the dc_india DC,
the data doesn't appear in the dc_india keyspace and column family (I waited
about 30 minutes)?
Thanks & Regards,
The query has QUORUM, not LOCAL_QUORUM. QUORUM counts replicas across all
DCs, so 3 of the 5 total replicas are required. Maybe 1 node in DRPOCcluster
also was temporarily unavailable during that query?
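The arithmetic, assuming a total replication factor of 5 across both DCs
(e.g. 3 in DRPOCcluster + 2 in dc_india):

  QUORUM       = floor(5 / 2) + 1 = 3   (replicas counted across all DCs)
  LOCAL_QUORUM = floor(3 / 2) + 1 = 2   (replicas in the coordinator's DC only)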
2017-01-23 12:16 GMT+01:00 Abhishek Kumar Maheshwari <
abhishek.maheshw...@timesinternet.in>:
> Hi All,
>
>
>
> I have Cassandra stack with 2 Dc
>
Hi All,
I have a Cassandra stack with 2 DCs:
Datacenter: DRPOCcluster
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns  Host ID  Rack
UN  172.29.xx.xxx  88.88 GB  256     ?
There is no way to kill a running query; Cassandra doesn't support a
kill-query feature. You would have to turn all the nodes off and on again.
On Monday, 23 January 2017, Cogumelos Maravilha wrote:
> Hi,
>
> I'm using cqlsh --request-timeout=1 but because I've more than
> 600.000.000 rows some
Hi,
I'm using cqlsh --request-timeout=1, but because I have more than
600,000,000 rows I sometimes get blocked and kill cqlsh. But what
about the query still running in Cassandra? How can I check that?
Thanks in advance.
Hi guys,
Let's say I have 2 DCs with a 3-node cluster in each DC and one replica in
each DC. I would like to maintain strong consistency and high availability,
so:
1) First of all, how do I even set up one replica in each DC? (See the CQL
sketch below.)
2) What should my read and write consistency levels be when I am
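For question 1, a minimal sketch (keyspace and DC names are placeholders and
must match what your snitch reports):

  CREATE KEYSPACE my_ks WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 1,
    'DC2': 1
  };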
You can check for unevenly distributed data using the nodetool command
*nodetool toppartitions*
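An example invocation (keyspace and table names are placeholders; the third
argument is the sampling duration in milliseconds on the 3.x tool):

  nodetool toppartitions my_ks my_table 10000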
On Mon, Jan 23, 2017 at 12:52 PM, chetan kumar wrote:
> Hi Pranay,
>
> it seems that your data is unevenly distributed across the cluster with
> respect to your insertion