Re: Connection refused - 127.0.0.1-Gossip

2017-12-11 Thread Marek Kadek -T (mkadek - CONSOL PARTNERS LTD at Cisco)
Besides the seed nodes (which are resolved correctly), I could not find anything that would require DNS resolution in the Cassandra config. On 12/10/17, 12:42 AM, "Jeff Jirsa" wrote: Does everything have valid forward/reverse DNS? Nothing that's going to reverse to a domain
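For reference, a hedged sketch of the cassandra.yaml fields that can hold hostnames and would therefore need DNS if not given as IPs (the values below are illustrative, not from the thread):

    listen_address: 10.0.0.5          # or a hostname, which would require DNS
    broadcast_address: 10.0.0.5
    rpc_address: 10.0.0.5
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.0.0.1,10.0.0.2"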

rt jump during new node bootstrap

2017-12-11 Thread Peng Xiao
Dear All, We are using C* 2.1.18. When we bootstrap a new node, the response time (rt) jumps when the new node starts up, then it goes back to normal. Could anyone please advise? Thanks, Peng Xiao

Re: Server errors during insert

2017-12-11 Thread ludovic boutros
Hi Earl, for the first error, an index in SPARSE mode is not allowed to have more than 5 rows per value. You have to choose another mode. I'll let you read this blog post on the subject: http://www.doanduyhai.com/blog/?p=2058 I don't know if the following errors are related to the first
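A minimal sketch of switching the index mode, assuming a hypothetical SASI index on a text column (keyspace, table, column and index names are made up for illustration):

    -- SPARSE is only intended for columns where each value matches very few rows;
    -- recreating the index in PREFIX mode avoids the "more than 5 rows per value" limit.
    DROP INDEX IF EXISTS ks.events_status_idx;

    CREATE CUSTOM INDEX events_status_idx ON ks.events (status)
    USING 'org.apache.cassandra.index.sasi.SASIIndex'
    WITH OPTIONS = { 'mode': 'PREFIX' };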

effect of partition size

2017-12-11 Thread Micha
Hi, What are the effects of large partitions? I have a few tables whose partition sizes are: 95%: 24000, 98%: 42000, 99%: 85000, Max: 82000. So, should I redesign the schema to get this max smaller, or doesn't it matter much, since 99% of the partitions are <= 85000? Thanks for
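A hedged note on where numbers like these typically come from (keyspace/table names are placeholders; on 2.1 the equivalent commands are cfhistograms/cfstats):

    nodetool tablehistograms my_keyspace my_table    # partition size and cell count percentiles
    nodetool tablestats my_keyspace.my_table         # includes "Compacted partition maximum bytes"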

COPY FROM/TO

2017-12-11 Thread Jaroslav Kameník
Hi all, I'd like to ask about your experience with this tool. We are trying to use it to export/import a database, but we have run into problems with incorrect unescaping of newlines and tabs (CASSANDRA-8675), which means the result of the import is not
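For context, a hedged sketch of the cqlsh COPY round trip being discussed (keyspace, table and file names are made up); the unescaping problem in CASSANDRA-8675 shows up when text columns contain embedded newlines or tabs:

    COPY my_keyspace.my_table TO   '/tmp/my_table.csv' WITH HEADER = true;
    COPY my_keyspace.my_table FROM '/tmp/my_table.csv' WITH HEADER = true;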

Re: about write performance

2017-12-11 Thread Lucas Benevides
Good answer Oleksandr, but I think the data is inserted into the memtable already in the right order. At least the DataStax Academy videos say so. But it shouldn't make any difference anyhow. Kind regards, Lucas Benevides 2017-12-08 5:41 GMT-02:00 Oleksandr Shulgin :

Batch : Isolation and Atomicity for same partition on multiple table

2017-12-11 Thread Mickael Delanoë
Hello, I have a question regarding batch isolation and atomicity with queries using the same partition key. The DataStax documentation says about batches: "Combines multiple DML statements to achieve atomicity and isolation when targeting a single partition or only atomicity when targeting
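A minimal sketch of the case being asked about: one logged batch writing to two tables with the same partition key value (keyspace, tables and columns are illustrative, not from the thread):

    BEGIN BATCH
      INSERT INTO shop.orders_by_user (user_id, order_id, amount)
        VALUES (42, 1001, 19.99);
      INSERT INTO shop.order_events_by_user (user_id, order_id, event)
        VALUES (42, 1001, 'created');
    APPLY BATCH;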

Re: effect of partition size

2017-12-11 Thread Micha
ok, thanks for the answer. So the better approach here is to adjust the table schema to get the partition size to around 100MB max. This means using a partition key with multiple parts and making more selects instead of one when querying the data (which may increase parallelism). Michael
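A hedged sketch of the multi-part partition key idea, assuming a hypothetical time-series style table; the extra bucket column (here a day) keeps each physical partition small, at the cost of one select per bucket when reading a range:

    CREATE TABLE metrics.readings (
        sensor_id text,
        day       date,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((sensor_id, day), ts)   -- composite partition key: sensor_id + day bucket
    );

    -- Reading across time then means one query per bucket, and those can run in parallel:
    SELECT * FROM metrics.readings WHERE sensor_id = 'abc' AND day = '2017-12-11';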

Re: effect of partition size

2017-12-11 Thread Jeff Jirsa
There are a few, and there have been various proposals (some in progress) to deal with them. The two most obvious problems are: The primary problem for most people is that wide partitions cause JVM heap pressure on reads (CASSANDRA-11206, CASSANDRA-9754). This is because we break the wide

Snapshot - files- 2.1 vs 3.x

2017-12-11 Thread Anumod Mullachery
Hi, In 2.1, snapshots are created with keyspace-table naming in the .db files (e.g. janusgraph-txlog-ka-1-Filter.db), but in 3.1 it looks like mc-3-big-Data.db. Is there any reason for the change, and is there any alternative method to give the 3.1 snapshot files the keyspace-table name format? Our

Re: effect of partition size

2017-12-11 Thread Jeff Jirsa
Yes, that's LIKELY "better". On Mon, Dec 11, 2017 at 8:10 AM, Micha wrote: > ok, thanks for the answer. > > So the better approach here is to adjust the table schema to get the > partition size to around 100MB max. > This means using a partition key with multiple parts

Re: Blocking read repair giving consistent data but not repairing existing data

2017-12-11 Thread Michael Semb Wever
> We are using dsc 3.0.3 on a total of *6 Nodes*, *2 DCs, 3 Nodes each, RF-3*, > so every node has complete data. Now we are facing a situation on a table > with 1 partition key, 2 clustering columns and 4 normal columns. > > Out of the 6, 5 nodes have a single value and Partition key, 2 clustering

Re: Tombstoned data seems to remain after compaction

2017-12-11 Thread kurt greaves
It might... If you have the disk space, a major compaction would be better, or user-defined compactions with the large/old SSTable. Better yet, if you're on a recent version you can do a splitting major compaction (all of these options are available through *nodetool compact*). On 11 December 2017 at
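A hedged sketch of the three variants Kurt lists, with placeholder keyspace/table/SSTable names; the user-defined and splitting forms only exist on newer releases, so check your version's nodetool help first:

    # plain major compaction of one table
    nodetool compact my_keyspace my_table

    # user-defined compaction of a specific large/old SSTable (path is a placeholder)
    nodetool compact --user-defined /var/lib/cassandra/data/my_keyspace/my_table-1234abcd/mc-42-big-Data.db

    # splitting major compaction: write several output SSTables instead of one giant one
    nodetool compact -s my_keyspace my_table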

Re: Snapshot - files- 2.1 vs 3.x

2017-12-11 Thread Jeff Jirsa
Agree that it's annoying. Came from https://issues.apache.org/jira/browse/CASSANDRA-6962 - in part to make Windows compatibility easier (Windows has path size limits, and it was pretty easy to hit those limits with any meaningful ks/table name combination). On Mon, Dec 11, 2017 at 10:05 AM,

Re: Running repair while Cassandra upgrade 2.0.X to 2.1.X

2017-12-11 Thread kurt greaves
That ticket says that streaming SSTables of older versions is supported. Streaming is only one component of repairs, and the ticket doesn't talk about repair at all, only bootstrap. For the most part it should work, but as Alain said, it's probably best avoided. Especially if you can avoid

Re: Tombstoned data seems to remain after compaction

2017-12-11 Thread Jeff Jirsa
Hello Takashima, Answers inline. On Sun, Dec 10, 2017 at 11:41 PM, tak...@fujitsu.com wrote: > Hi Jeff > > I appreciate your detailed explanation :) > > > Expired data gets purged on compaction as long as it doesn't overlap > with other live data. The
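Related table-level knobs, not prescribed in this thread but sometimes used when tombstoned/expired data lingers: compaction sub-options that make single-SSTable tombstone compactions more likely to run. A hedged sketch with illustrative keyspace/table names and values:

    ALTER TABLE my_keyspace.my_table WITH compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'tombstone_threshold': '0.2',
        'unchecked_tombstone_compaction': 'true'
    };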

RE: Tombstoned data seems to remain after compaction

2017-12-11 Thread tak...@fujitsu.com
Hi Jeff, Kurt, Thanks again for your advice. Among the valuable ideas you provided, I am thinking of executing nodetool compact because it is the simplest way to try, and I'm really a novice with Cassandra. One thing I'm concerned about with this plan is that the major compaction might have a serious