Interesting thought; that should indeed work. I'll evaluate both options
and provide an update here once I have results.
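
For reference, the TTL-based option might look roughly like the sketch
below, assuming the DataStax Python driver; the events table, keyspace
name and 7-day safety margin are just placeholders for illustration, not
our actual schema:

from datetime import datetime, timedelta, timezone

from cassandra.cluster import Cluster  # DataStax Python driver

# Hypothetical schema (names made up), relying on date-tiered compaction
# and per-row TTLs instead of explicit DELETEs:
#
#   CREATE TABLE metrics.events (
#       day     text,       -- e.g. '2015-03-26', the key's date bucket
#       ts      timestamp,
#       payload text,
#       PRIMARY KEY (day, ts)
#   ) WITH compaction = {'class': 'DateTieredCompactionStrategy'};

session = Cluster(['127.0.0.1']).connect('metrics')

# The TTL is bound per insert, so each row expires a safe distance past
# its key's date and never needs an explicit DELETE.
insert = session.prepare(
    "INSERT INTO events (day, ts, payload) VALUES (?, ?, ?) USING TTL ?"
)

def ttl_seconds(event_time, safety_margin_days=7):
    """Seconds from now until the row may expire: its time plus a margin."""
    cutoff = event_time + timedelta(days=safety_margin_days)
    return max(int((cutoff - datetime.now(timezone.utc)).total_seconds()), 1)

now = datetime.now(timezone.utc)
session.execute(insert,
                (now.strftime('%Y-%m-%d'), now, "example payload",
                 ttl_seconds(now)))

The key point would be that the TTL is computed at write time from the
row's own date, so expired data simply ages out of the date-tiered
SSTables and no DELETE statements are ever issued.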

Best regards,

Robin Verlangen
*Chief Data Architect*

W http://www.robinverlangen.nl
E ro...@us2.nl

*What is CloudPelican? <http://goo.gl/HkB3D>*

On Thu, Mar 26, 2015 at 7:09 AM, Thunder Stumpges <
thunder.stump...@gmail.com> wrote:

> Would it help here to not actually issue a DELETE statement, but instead
> use date-based compaction and a dynamically calculated TTL that is some
> safe distance in the future from your key?
>
> Just a thought.
> -Thunder
>  On Mar 25, 2015 11:07 AM, "Robert Coli" <rc...@eventbrite.com> wrote:
>
>> On Wed, Mar 25, 2015 at 12:45 AM, Robin Verlangen <ro...@us2.nl> wrote:
>>
>>> @Robert: can you elaborate a bit more on the "not ideal" parts? In my
>>> case I will be throwing away the rows (that is, the points in time that
>>> are now in the past), which will create tombstones that are compacted away.
>>>
>>
>> "Not ideal" is what I mean... Cassandra has immutable data files, use
>> cases which do DELETE pay an obvious penalty. Some percentage of tombstones
>> will exist continuously, and you have to store them and seek past them.
>>
>> =Rob
>>
>>
>
