Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
> […]xify your code, but it will prevent severe performance issues in Cassandra.
>
> Tombstones won't be a problem for repair; they will get repaired as classic cells. They mostly affect the read path negatively, and they use space on disk.
>
> On Tue, Jan 16, 2018 at 2:12 PM Python_Max
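
For context on the read-path impact mentioned above: the tombstone warnings Cassandra logs are driven by two thresholds in cassandra.yaml (3.x defaults shown; a sketch for orientation, not tuning advice):

    # cassandra.yaml
    tombstone_warn_threshold: 1000       # warn when a single read visits this many tombstones
    tombstone_failure_threshold: 100000  # abort the read beyond this many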

Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
> […]e to be expired yet.
>
> Those techniques usually work better with TWCS, but the former could make you hit a lot of SSTables if your partitions can spread over all time buckets, so only use TWCS if you can restrict individual reads to up to 4 time windows.
>
> Cheers,
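
For readers unfamiliar with it, a TWCS table of the kind described above looks roughly like this (keyspace, table, and window settings are illustrative only):

    CREATE TABLE ks.events (
        bucket int,
        ts timestamp,
        payload text,
        PRIMARY KEY (bucket, ts)
    ) WITH default_time_to_live = 2592000
      AND compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': 1
      };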

Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
> […]end it to replicas during reads to cover all possible cases.
>
> On Fri, Jan 12, 2018 at 5:28 PM Python_Max <python@gmail.com> wrote:
>
>> Thank you for the response.
>>
>> I know about the option of setting TTL per column or even per item in c[…]
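
The "per column or even per item" option reads in CQL roughly as follows (schema and names are made up for illustration):

    -- per-column TTL: only this column expires, the rest of the row remains
    UPDATE ks.t USING TTL 3600 SET payload = 'v' WHERE id = 1;

    -- per-item TTL on a single collection element
    UPDATE ks.t USING TTL 600 SET tags['promo'] = '1' WHERE id = 1;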

Re: Too many tombstones using TTL

2018-01-12 Thread Python_Max
?" > > --> Simply because technically it is possible to set different TTL value > on each column of a CQL row > > On Wed, Jan 10, 2018 at 2:59 PM, Python_Max <python@gmail.com> wrote: > >> Hello, C* users and experts. >> >> I have (one more) questi

Re: sstabledump tries to delete a file

2018-01-12 Thread Python_Max
> […]e metadata from the sstable, it can just set the properties to match those of the sstable to prevent this.
>
> Chris
>
> On Wed, Jan 10, 2018 at 4:16 AM, Python_Max <python@gmail.com> wrote:
>
>> Hello all.
>>
>> I have an error when trying to dump[…]
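
For anyone reproducing this: sstabledump is run against a Data.db component of an sstable; the path below is illustrative only:

    sstabledump /var/lib/cassandra/data/ks/t-<table_id>/mc-1-big-Data.db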

Re: Deleted data comes back on node decommission

2018-01-10 Thread Python_Max
>> […]licated. Plugs don't really fit the design of Cassandra. Here it's probably much easier to just follow the recommended procedure when adding and removing nodes.
>>
>> On 16 Dec. 2017 01:37, "Python_Max" <python@gmail.com> wrote:
>>
>> Hello, Jeff.
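
The recommended procedure being referred to boils down to the standard commands, run where indicated:

    # after a new node finishes bootstrapping, on every pre-existing node:
    nodetool cleanup

    # to remove a live node from the cluster, on that node:
    nodetool decommission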

Too many tombstones using TTL

2018-01-10 Thread Python_Max
[…]2018-01-10T13:29:25Z" } } ] } ] } ]

The question is: why does Cassandra create a tombstone for every column instead of a single tombstone per row? In my production environment I have a table with ~30 columns, and it gives me a warning for 30k tombstones against only 300 live rows. That is 30 times more than it needs to be. Can this behavior be tuned in some way?

Thanks.

--
Best regards,
Python_Max.
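
A minimal sketch to reproduce the behavior described (names made up; assumes you can flush and run sstabledump on the node):

    CREATE TABLE ks.t (id int PRIMARY KEY, a text, b text, c text);
    INSERT INTO ks.t (id, a, b, c) VALUES (1, 'x', 'y', 'z') USING TTL 60;

    -- wait for the TTL to pass, then flush (nodetool flush ks t);
    -- sstabledump then shows one expired-cell tombstone per column,
    -- not a single row tombstone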

sstabledump tries to delete a file

2018-01-10 Thread Python_Max
[…]ug tracker. Shouldn't sstabledump be read-only?

--
Best regards,
Python_Max.

Re: Deleted data comes back on node decommission

2017-12-15 Thread Python_Max
[…]up', isn't it?

On 14.12.17 16:14, kurt greaves wrote:
> Are you positive your repairs are completing successfully? Can you send through an example of the data in the wrong order? What you're saying certainly shouldn't happen, but there's a lot of room for mistakes.
>
> On 14 Dec. 2017 20:13, "Pyt[…]
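
For reference, a full repair of a single keyspace looks like this (keyspace name illustrative); whether it completed can be verified in the system logs:

    nodetool repair -full my_keyspace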

Re: Deleted data comes back on node decommission

2017-12-15 Thread Python_Max
[…]to select query. Thank you very much, Jeff, for pointing me in the right direction.

On 13.12.17 18:43, Jeff Jirsa wrote:
> Did you run cleanup before you shrank the cluster?

--
Best Regards,
Python_Max.

Re: Deleted data comes back on node decommission

2017-12-14 Thread Python_Max
[…]?

On 13.12.17 18:43, Jeff Jirsa wrote:
> Did you run cleanup before you shrank the cluster?

--
Best Regards,
Python_Max.

Deleted data comes back on node decommission

2017-12-13 Thread Python_Max
[…]the node itself is streaming wrong data on decommission, but that did not work either (the deleted data came back to life). Is this a known issue?

PS: I have not tried 'nodetool scrub' yet, nor dropping repairedAt for the affected sstables.

--
Best Regards,
Python_Max
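
For completeness, the two untried options map roughly to the following (paths illustrative; sstablerepairedset is an offline tool, so stop the node before running it):

    nodetool scrub my_keyspace my_table

    # mark sstables as unrepaired, dropping repairedAt:
    sstablerepairedset --really-set --is-unrepaired /var/lib/cassandra/data/ks/t-<table_id>/mc-1-big-Data.db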