all the IP addresses across the cluster.
I changed the IPs of a whole cluster back in 2.1 this way, and it went
through seamlessly.
Cheers,
On Wed, Feb 27, 2019 at 8:54 AM Oleksandr Shulgin
wrote:
On Wed, Feb 27, 2019 at 4:15 AM wxn...@zjqunshuo.com
wrote:
>After restart with the
Date: 2019-02-26 18:36
To: User
Subject: Re: Question on changing node IP address
On Tue, Feb 26, 2019 at 9:39 AM wxn...@zjqunshuo.com
wrote:
Hi All,
I'm running 2.2.8 with vnodes and I'm planning to change node IP address.
My procedure is:
Turn down one node, setting auto_bootstrap to false in yaml file, then bring it
up with -Dcassandra.replace_address. Repeat the procedure one by one for the
other nodes.
I care about streaming
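For concreteness, a rough sketch of one round of the procedure described above (addresses and file locations are placeholders, not from this thread):

    # 1. stop the node
    # 2. in cassandra.yaml:
    auto_bootstrap: false
    listen_address: 10.0.1.5    # the node's NEW address (placeholder)
    # 3. in cassandra-env.sh, point the replace flag at the node's OLD address:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"
    # 4. start the node, wait for it to show UN in nodetool status, then repeat

Whether replace_address is the right tool for a pure IP change, as opposed to simply restarting with the new address, is exactly what this thread is discussing, so treat this as the poster's plan rather than a recommendation.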
Hi Onmstester,
Thank you all. Now I understand that whether to use batch or asynchronous writes
really depends on the use case. So far, batch writes have worked for me in an
8-node cluster with over 500 million requests per day.
> Did you compare the cluster performance including blocked natives, dropped
>
a single
partition; otherwise it causes a lot of performance degradation on your cluster,
and after a while throughput will be a lot less than parallel single statements
with executeAsync.
Sent using Zoho Mail
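To make the contrast concrete, here is a minimal, hypothetical Java driver 3.x sketch; only the cargts.eventdata table name comes from these threads, and the column list (deviceid, ts, payload) and types are assumptions:

    import com.datastax.driver.core.*;
    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;

    public class BatchVsAsync {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();
            PreparedStatement ps = session.prepare(
                "INSERT INTO cargts.eventdata (deviceid, ts, payload) VALUES (?, ?, ?)");

            // OK: unlogged batch where every row targets the SAME partition
            BatchStatement samePartition = new BatchStatement(BatchStatement.Type.UNLOGGED);
            for (int i = 0; i < 10; i++) {
                samePartition.add(ps.bind(42L, new Date(), "point-" + i));
            }
            session.execute(samePartition);

            // Better for unrelated partitions: parallel single statements
            List<ResultSetFuture> futures = new ArrayList<>();
            for (long deviceId = 1; deviceId <= 10; deviceId++) {
                futures.add(session.executeAsync(ps.bind(deviceId, new Date(), "point")));
            }
            futures.forEach(ResultSetFuture::getUninterruptibly);
            cluster.close();
        }
    }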
Forwarded message
From : wxn...@zjqunshuo.com
To : "user"
Hi All,
What's the difference between a logged batch and an unlogged batch? I'm asking
because I'm seeing the WARNINGs below after a new app started writing to the
cluster.
WARNING in system.log:
Unlogged batch covering 135 partitions detected against table
[cargts.eventdata].
red data will be removed after TTL is reached.
Regards
Maik
From: wxn...@zjqunshuo.com [mailto:wxn...@zjqunshuo.com]
Sent: Mittwoch, 17. Oktober 2018 09:02
To: user
Subject: Re: TWCS: Repair create new buckets with old data
Hi Maik,
IMO, when using TWCS, you had better not run repair. During repair, TWCS behaves
the same as STCS when merging sstables, and the result is sstables spanning
multiple time buckets, but maybe I'm wrong. In my use case, I don't run repair
on tables using TWCS.
-Simon
From:
How large is your row? You may be hitting the wide-row read problem.
-Simon
From: Laxmikant Upadhyay
Date: 2018-09-05 01:01
To: user
Subject: High IO and poor read performance on 3.11.2 cassandra cluster
We have a 3-node Cassandra cluster (3.11.2) in a single DC.
We have written 450 million records on
Your partition key is foreignid. You may have a large partition. Why not use
foreignid+timebucket as the partition key?
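For illustration, a bucketed key could look like the sketch below; every name except foreignid is an assumption:

    CREATE TABLE ks.data_by_day (
        foreignid  bigint,
        timebucket date,        -- e.g. one bucket per day
        ts         timestamp,
        value      text,
        PRIMARY KEY ((foreignid, timebucket), ts)
    );

Each (foreignid, timebucket) pair then becomes its own partition, capping partition size at one bucket's worth of rows.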
From: learner dba
Date: 2018-07-19 01:48
To: User cassandra.apache.org
Subject: Timeout for only one keyspace in cluster
Hi,
We have a cluster with multiple keyspaces. All
data disk
In my experience, adding a new disk and restarting the Cassandra process slowly
distributes the disk usage evenly, so that existing disks have less disk usage
On 12 Jun 2018, at 11:09 AM, wxn...@zjqunshuo.com wrote:
Hi,
I know Cassandra can make use of multiple disks. My data disk
Hi,
I know Cassandra can make use of multiple disks. My data disk is almost full
and I want to add another 2TB disk. I don't know what will happen after the
addition.
1. Will C* write to both disks until the old disk is full?
2. And what will happen after the old one is full? Will C* stop writing
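For reference, multiple disks are configured by listing them under data_file_directories in cassandra.yaml (paths are placeholders):

    data_file_directories:
        - /data1/cassandra/data
        - /data2/cassandra/data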
re ok losing that data, then you could stop the node,
remove lb-143951-big-*, and start the node. This is usually a bad idea in data
models that aren't TTL-only time series, but if you KNOW the data is all
expired, and you didn't manually delete any other data, it may work for you.
On Mon, M
Hi All,
I changed STCS to TWCS months ago and left some old sstable files. Some are
almost entirely tombstones. To release disk space, I issued a compaction command
on one file via JMX. After the compaction was done, I got one new file with
almost the same size as the old one. It seems no tombstones are
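For readers wondering how to trigger a single-sstable compaction over JMX: one common way is the forceUserDefinedCompaction operation on the CompactionManager MBean, shown here with jmxterm (jar location, port, and sstable name are placeholders):

    $ java -jar jmxterm.jar -l localhost:7199
    $> bean org.apache.cassandra.db:type=CompactionManager
    $> run forceUserDefinedCompaction /path/to/ks/table/lb-12345-big-Data.db

Note that a single-sstable compaction can only drop tombstones whose data does not also live in other sstables, which may explain the unchanged file size here.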
Hi All,
If using TWCS, will a full repair trigger a major compaction and compact all
the sstable files into big ones, regardless of the time bucket?
Thanks,
-Simon
dra/data/system/batchlog-0290003c977e397cac3efdfdc01d626b/lb-37-big:
it is not an active sstable
INFO [CompactionExecutor:4] 2018-01-09 15:55:04,525 CompactionManager.java:664
- No files to compact for user defined compaction
Does the last log line mean something?
Cheers,
-Simon
From: wxn...@zjqunshuo.
evicting tombstones faster if the partitions are
contained within a single SSTable.
If you are dealing with TTLed data and your partitions are spread over time, I'd
strongly suggest considering TWCS instead of STCS, as it can remove fully
expired SSTables much more efficiently.
Cheers,
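For reference, switching a table to TWCS with one-day windows is a single statement (keyspace and table are placeholders; on 2.2 TWCS is only available as an external jar, so the class string may differ):

    ALTER TABLE ks.events WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': '1'
    };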
On Fri, Jan 5,
Hi All,
In order to evict tombstones, I issued a full repair with the command "nodetool
repair -pr -full". Checking with "nodetool status", the data load did indeed
decrease by 100G per node, but the actual disk usage increased by 500G per node.
The repair is still ongoing and
Adding a new node is really slow when you have a large load (for me, slow means
several hours). So I'm interested: is there any way to speed up the addition
when adding a new node?
Best Regards,
-Simon
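One commonly mentioned knob is the streaming throughput cap, adjustable at runtime with nodetool (the value is illustrative):

    $ nodetool setstreamthroughput 400    # Mb/s; 0 removes the throttle
    $ nodetool getstreamthroughput

Whether this helps depends on where the bottleneck actually is: disk, network, or compaction on the joining node.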
From: qf zhou
Sent: 2018-01-03 11:30
To: user@cassandra.apache.org
Subject: Cassandra cluster add
strategy can be set as a JSON string, and it won't change the
cluster-wide schema (or persist through reboot).
--
Jeff Jirsa
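If I recall correctly, the JMX knob referred to above is the CompactionParametersJson attribute on the table's MBean; a hypothetical jmxterm session might look like this (the MBean path and attribute name vary by version, so verify against your build):

    $> bean org.apache.cassandra.db:type=ColumnFamilies,keyspace=ks,columnfamily=events
    $> set CompactionParametersJson {"class":"TimeWindowCompactionStrategy"}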
On Dec 28, 2017, at 11:40 PM, "wxn...@zjqunshuo.com" <wxn...@zjqunshuo.com>
wrote:
Hi All,
My production cluster is running 2.2.8. It is used to store time series data,
with only insertions with TTL, no updates or deletions. From the mailing list it
seems TWCS is more suitable than STCS for my use case. I'm thinking about
changing STCS to TWCS in production. I have read the
solution would be to reduce the GC grace seconds for that
table to a smaller value (as opposed to the default of 10 days) so that the
TTLed data will be purged sooner.
You could also consider drafting more efficient queries which won’t hit TTLed
partitions.
Thanks,
Meg
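The gc_grace_seconds change suggested above is a one-line ALTER (table name and value are illustrative):

    ALTER TABLE ks.events WITH gc_grace_seconds = 86400;   -- 1 day instead of the 10-day default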
From: wxn
Hi,
My cluster is running 2.2.8, with no updates or deletions, only insertions with
TTL. I saw the warnings below recently. What do they mean, and what's the
impact?
WARN [SharedPool-Worker-2] 2017-12-04 09:32:48,833 SliceQueryFilter.java:308 -
Read 2461 live and 1978 tombstone cells in
the agent on a specific port" on all the Cassandra nodes.
After this, go to your Prometheus server and add a scrape config to pull metrics
from all clients.
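As a starting point, a minimal prometheus.yml scrape block might look like this, assuming the JMX exporter agent listens on port 7070 on each node (job name, port, and hostnames are all placeholders):

    scrape_configs:
      - job_name: 'cassandra'
        static_configs:
          - targets: ['cass-node1:7070', 'cass-node2:7070', 'cass-node3:7070']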
On 20-Jul-2017 3:27 PM, "wxn...@zjqunshuo.com" <wxn...@zjqunshuo.com> wrote:
Hi,
I'm going to set up Prometheus+Grafana to monitor a Cassandra cluster. I
installed Prometheus and started it, but I don't know how to configure it for
Cassandra.
Any ideas or related articles are appreciated.
Cheers,
Simon
.
C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2017-06-20 6:51 GMT+01:00 wxn...@zjqunshuo.com <wxn...@zjqunshuo.com>:
Hi,
Cleanup is generating temporary files that occupy large disk space.
I noticed that for every source sstable file it generates 4 temporary files,
and two of them are almost as large as the source sstable file. If there are two
concurrent cleanup tasks running, I have to leave the
Hi,
Our cluster nodes are behind an SLB (Service Load Balancer) with a VIP, and the
Cassandra clients access the cluster via the VIP.
system.log prints the IOException below every few seconds. I guess it's the
SLB service pinging port 9042 on the Cassandra nodes periodically that
caused the
.
The Cassandra daemon crashed and left files with a "tmp-" prefix in
the data directory, which indicates the cleanup task was not complete.
Cheers,
-Simon
From: Akhil Mehra
Date: 2017-06-19 15:17
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cleaning up related issue
Wh
. If it is an
existing node, is this the one where the nodetool cleanup failed?
Cheers,
Akhil
On 19/06/2017, at 6:40 PM, wxn...@zjqunshuo.com wrote:
Hi,
After adding a new node, I started a cleanup task to remove the old data on
the other 4 nodes. All went well except on one node. The cleanup took hours, and
the Cassandra daemon on the third node crashed. I checked the node and found
the crash was caused by OOM. The Cassandra data volume
Thanks for the detailed explanation. You did solve my problem.
Cheers,
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 17:09
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com <wxn...@zjqunshuo.
'} AND durable_writes = true;
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 16:36
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 9:11 AM, wxn...@zjqunshuo.com <wxn...@zjqunshuo.com>
wrote:
Hi,
Cluster set up:
1 DC with 5 nodes (each node having 700GB data)
1 keyspace with RF of 2
write CL is LOCAL_ONE
read CL is LOCAL_QUORUM
One node was down for about 1 hour because of an OOM issue. During the
downtime, all 4 other nodes reported "Cannot achieve consistency level LOCAL_ONE"
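The resolution is truncated in this archive, but with this symptom a usual first check is whether the keyspace's replication map names the data center exactly as the snitch reports it, e.g.:

    cqlsh> SELECT data_center FROM system.local;
    cqlsh> DESCRIBE KEYSPACE cargts;
    -- the DC name inside 'replication' must match the one above exactly

(The keyspace name is a placeholder here; a mismatched DC name is only one possible cause.)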
Hi Artur,
When I asked similar questions, someone pointed me to the links below, and
they were helpful.
See http://www.datastax.com/dev/blog/repair-in-cassandra
https://lostechies.com/ryansvihla/2015/09/25/cassandras-repair-should-be-called-required-maintenance/
Best wishes,
Ben
On 18 November 2016 at 09:41, wxn...@zjqunshuo.com <wxn...@zjqunshuo.com> wrote:
Hi All,
I'm new to Cassandra, and from the mailing list it seems repair is an important
thing to do. To avoid trouble when putting Cassandra into a production
environment, I have some questions.
1. What exactly does Cassandra repair do?
2. I saw some people run repair on a schedule, daily or weekly. Is it a
have some special requirements.
For the memory, you can check your JVM settings, and the GC log for JVM
usage.
--Dikang.
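For example, on a 2.x install those settings live in cassandra-env.sh; lines like the following show where to look (values and paths are illustrative):

    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="2G"
    # GC logging:
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"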
On Mon, Nov 7, 2016 at 7:25 PM, wxn...@zjqunshuo.com <wxn...@zjqunshuo.com>
wrote:
Hi All,
I need to do maintenance work on a C* cluster with about 10 nodes. Please
recommend the C* operation and maintenance tools you are using.
I also noticed my C* daemon using a lot of memory while doing nothing. Is there
any convenient tool to deeply analyze the C* node memory?
Cheers,
Simon
...@zjqunshuo.com <wxn...@zjqunshuo.com> wrote:
Hi All,
We have an issue in C* testing. At first, insertion was very fast and TPS
was about 30K/s, but when the number of data rows reached 2 billion, the
insertion rate degraded badly and TPS was 20K/s. When the number of
rows reached 2.3 billion, TPS decreased to 0.5K/s, and
han that, your model seems workable. I assume you're using DTCS/TWCS and
aligning the time windows to your day bucket. (If not, you should do that.)
Kurt Greaves
k...@instaclustr.com
www.instaclustr.com
On 20 October 2016 at 07:29, wxn...@zjqunshuo.com <wxn...@zjqunshuo.com> wrote:
mn storage overhead.
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.
On Thu, 20 Oct 2016 03:29:16 -0400<wxn...@zjqunshuo.com> wrote
Hi All,
I'm trying to migrate my time series data, which is GPS traces, from MySQL to C*.
I want a wide row to hold one day of data. I designed the data model below.
Please help check whether there are any problems. Any suggestions are appreciated.
Table Model:
CREATE TABLE cargts.eventdata (
deviceid