Re: Internal Handling of Map Updates

2016-06-01 Thread Matthias Niehoff
JSON would be an option, yes. A frozen collection would not work for us, as the updates are both overwrites of existing values and appends of new values (but never a removal of values). So we end up with three options: 1. use clustering columns, 2. use JSON, 3. save the row without using the spark-cassandra-...
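
A minimal sketch of option 1 (clustering columns in place of the map), assuming a simple text key/value payload; keyspace, table, and column names here are made up:

    cqlsh <<'CQL'
    CREATE TABLE ks.data_by_key (
      id uuid,
      map_key text,    -- was the map key
      map_value text,  -- was the map value
      PRIMARY KEY (id, map_key)
    );
    -- Both "append" and "overwrite" become plain upserts, so no
    -- collection tombstones are ever written:
    INSERT INTO ks.data_by_key (id, map_key, map_value)
    VALUES (00000000-0000-0000-0000-000000000001, 'k1', 'v1');
    CQL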

Re: Internal Handling of Map Updates

2016-06-01 Thread Eric Stevens
From that perspective, you could also use a frozen collection, which takes away the ability to append but for which overwrites shouldn't generate a tombstone. On Wed, Jun 1, 2016, 5:54 PM kurt Greaves wrote: > Is there anything stopping you from using JSON instead of a collection? > > On 27 May...
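
The frozen-collection variant Eric describes, sketched with assumed names: the whole map is stored as a single cell, so replacing it is an ordinary cell overwrite rather than a range tombstone, but per-entry appends are no longer possible:

    cqlsh <<'CQL'
    CREATE TABLE ks.data_frozen (
      id uuid PRIMARY KEY,
      m frozen<map<text, text>>
    );
    -- Full replacement only; "SET m = m + {...}" is rejected on a
    -- frozen column:
    UPDATE ks.data_frozen SET m = {'k1': 'v1', 'k2': 'v2'}
    WHERE id = 00000000-0000-0000-0000-000000000001;
    CQL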

Re: Internal Handling of Map Updates

2016-06-01 Thread kurt Greaves
Is there anything stopping you from using JSON instead of a collection? On 27 May 2016 at 15:20, Eric Stevens wrote: > If you aren't removing elements from the map, you should instead be able > to use an UPDATE statement and append to the map. It will have the same effect > as overwriting it, becau...
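
The append-style UPDATE Eric refers to, sketched against an assumed non-frozen map column m:

    cqlsh <<'CQL'
    -- Adds/overwrites only the listed entries and writes no tombstone;
    -- a full "SET m = {...}" on a non-frozen map would first write a
    -- range tombstone over the old collection:
    UPDATE ks.data SET m = m + {'k3': 'v3'}
    WHERE id = 00000000-0000-0000-0000-000000000001;
    CQL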

Library/utility announcements?

2016-06-01 Thread James Carman
Some user lists allow it. Does the Cassandra community mind folks announcing their super cool Cassandra libraries on this list? Is there a page for us to list them?

Token Ring Question

2016-06-01 Thread Anubhav Kale
Hello, I recently learnt that regardless of the number of data centers, there is really only one token ring across all nodes. (I was under the impression that there is one per DC, which is how DataStax OpsCenter would show it.) Suppose we have 4 vnodes and 2 DCs (2 nodes in each DC) and a keyspace i...
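
For reference, per-DC replication is declared on the keyspace even though all nodes share one ring; a sketch with assumed names and replication factors:

    cqlsh <<'CQL'
    -- NetworkTopologyStrategy resolves replicas per DC on the single
    -- shared token ring (keyspace name, DC names and RFs are placeholders):
    CREATE KEYSPACE ks WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'DC1': 1,
      'DC2': 1
    };
    CQL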

Re: (Full) compaction does not delete (all) old files

2016-06-01 Thread Dongfeng Lu
Alain, Thanks for responding to my question. 1 & 2: I think it is a bug, but as you said, maybe no one will dig into it. I just hope it has been fixed in later versions. 3: Restarting the node does NOT remove those files. I stopped and restarted C* many times and it did nothing. 4: Thanks for...

Timeout while waiting for workers when flushing pool

2016-06-01 Thread Zhang, Charles
We have a 4-node, two-DC test cluster. All nodes have DataStax Enterprise installed and running. One DC is the Cassandra DC, and the other is the Solr DC. We first used sstableloader to stream 1 billion rows into the cluster. After that was done, we created a Solr core using resource auto-g...
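
For context, the bulk-load step typically looks like this (hosts and path are placeholders; the directory must end in keyspace/table):

    sstableloader -d 10.0.0.1,10.0.0.2 /var/lib/cassandra/data/ks/mytable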

Re: Cassandra 2.1: Snapshot data changing while transferring

2016-06-01 Thread Paul Dunkler
Hi Reynald, > If I understand correctly, you are making a tar file with all the folders > named "snapshots" (i.e. the folder under which all the snapshots are created, > so you have one snapshots folder per table). No, that's not the case. We are doing a nightly snapshot of the whole database (...

Re: Evict Tombstones with STCS

2016-06-01 Thread Alain RODRIGUEZ
Hi, I think you got it, this is probably the way to go: > And if that is so, forceUserDefinedCompaction or setting > unchecked_tombstone_compaction > to true won't help either, as tombstones are less than 20% and not much disk > space would be recovered. But if you have less than 20% tombstones in there I...
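
What flipping that flag looks like, with keyspace/table names assumed (tombstone_threshold defaults to 0.2, the "20%" discussed above):

    cqlsh <<'CQL'
    -- Allow single-SSTable tombstone compactions even when overlap
    -- checks would normally skip them:
    ALTER TABLE ks.mytable WITH compaction = {
      'class': 'SizeTieredCompactionStrategy',
      'unchecked_tombstone_compaction': 'true'
    };
    CQL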

Re: Cassandra 2.1: Snapshot data changing while transferring

2016-06-01 Thread Reynald Bourtembourg
Hi Paul, If I understand correctly, you are making a tar file with all the folders named "snapshots" (i.e. the folder under which all the snapshots are created, so you have one snapshots folder per table). If this is the case, when you are executing "nodetool repair", Cassandra will create a...
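
One way to sidestep this is to archive only an explicitly named snapshot tag, so snapshots created concurrently by repair are never swept into the tar (tag and paths are placeholders):

    nodetool snapshot -t nightly-2016-06-01
    find /var/lib/cassandra/data -type d \
      -path '*/snapshots/nightly-2016-06-01' \
      | tar czf nightly-2016-06-01.tar.gz -T -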

Re: (Full) compaction does not delete (all) old files

2016-06-01 Thread Alain RODRIGUEZ
Hi, About your main concern: 1. True, those files should have been removed. Yet Cassandra 2.0 is no longer supported, let alone such an old version (2.0.6), so I think no one is going to dig into this issue. To fix it, upgrading will probably be enough. I don't usually run manual compaction, and relied...
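
For reference, the manual (major) compaction being discussed; on STCS it merges everything into one large SSTable, which is one reason it is rarely recommended (names are placeholders):

    nodetool compact ks mytable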

Re: Cassandra 2.1: Snapshot data changing while transferring

2016-06-01 Thread Paul Dunkler
> I guess this might come from the incremental repairs... > The repair time is stored in the sstable (RepairedAt timestamp metadata). By the way: we are not using incremental repairs at all, so that can't be the case here. It really seems like there is something that can still change data in snapshot...
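
One way to double-check that, assuming access to the data directory (path is a placeholder); a "Repaired at: 0" line means the sstable was never marked by incremental repair:

    sstablemetadata /var/lib/cassandra/data/ks/mytable-*/ks-mytable-ka-1-Data.db \
      | grep -i repaired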

Re: Restoring Incremental Backups without using sstableloader

2016-06-01 Thread Alain RODRIGUEZ
Hi, Well, you can do it by copy/pasting all the sstables as described in the link you gave, as long as your token range distribution did not change since you took the snapshots and you have a way to know which node each sstable belongs to. Make sure that snapshots taken on node X indeed go back...
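
A minimal sketch of that copy/paste restore, assuming unchanged token ranges and that each sstable is returned to its original node (paths are placeholders):

    cp /backup/ks/mytable/* /var/lib/cassandra/data/ks/mytable-<id>/
    nodetool refresh ks mytable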

Re: Cassandra 2.1: Snapshot data changing while transferring

2016-06-01 Thread Paul Dunkler
Hi Mike, > Hi Paul, what is the value of the snapshot_before_compaction property in your > cassandra.yaml? snapshot_before_compaction: false > Say if another snapshot is being taken (because compaction kicked in and > the snapshot_before_compaction property is set to TRUE) and at this moment you'r...