You are right, I missed the JSON part.
According to the docs
<http://www.datastax.com/dev/blog/whats-new-in-cassandra-2-2-json-support>:
“Columns which are omitted from the JSON value map are treated as a null insert
(which results in an existing value being deleted, if one is present).”
So “unset” doesn’t help you out.
You can open a Jira ticket asking for “unset” support with JSON values and
omitted columns, so you can control whether omitted columns get a “null” value
or an “unset” value.
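
For illustration, a minimal sketch of what that means with the Java driver,
assuming an open com.datastax.driver.core.Session named “session” and a
hypothetical table ks.events (id int PRIMARY KEY, a text, b text); the real
event_by_patient_timestamp schema isn’t shown in this thread:

    PreparedStatement ps = session.prepare("INSERT INTO ks.events JSON ?");

    // Column "b" is omitted from the JSON map, so it is treated as an explicit
    // null insert and a tombstone is written for "b" on this row.
    session.execute(ps.bind("{\"id\": 1, \"a\": \"foo\"}"));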

From: Ralf Steppacher [mailto:ralf.viva...@gmail.com]
Sent: Thursday, March 24, 2016 11:36 AM
To: user@cassandra.apache.org
Subject: Re: Large number of tombstones without delete or update

How does this improvement apply to inserting JSON? The prepared statement has 
exactly one parameter and it is always bound to the JSON message:

INSERT INTO event_by_patient_timestamp JSON ?

How would I “unset” a field inside the JSON message written to the 
event_by_patient_timestamp table?
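
For reference, the binding in the application looks roughly like this (Java
driver, assuming an open Session named “session” and the event as a String
“jsonMessage”); the whole document is bound as a single value, so there is no
per-column slot I could leave unset:

    PreparedStatement ps = session.prepare(
        "INSERT INTO event_by_patient_timestamp JSON ?");
    // The entire JSON document is one bound value; fields inside it can only
    // be present or absent/null from the driver's point of view.
    session.execute(ps.bind(jsonMessage));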


Ralf


On 24.03.2016, at 10:22, Peer, Oded <oded.p...@rsa.com> wrote:

http://www.datastax.com/dev/blog/datastax-java-driver-3-0-0-released#unset-values

“For Protocol V3 or below, all variables in a statement must be bound. With 
Protocol V4, variables can be left “unset”, in which case they will be ignored 
server-side (no tombstones will be generated).”
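
A minimal sketch of how that looks with driver 3.0 (assuming protocol V4, an
open Session named “session”, and a hypothetical table ks.events (id int
PRIMARY KEY, a text, b text)):

    PreparedStatement ps = session.prepare(
        "INSERT INTO ks.events (id, a, b) VALUES (?, ?, ?)");
    BoundStatement bs = ps.bind();
    bs.setInt("id", 1);
    bs.setString("a", "foo");
    // "b" is deliberately left unbound: under protocol V4 it stays "unset",
    // so no tombstone is written. Binding null instead would create one.
    session.execute(bs);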


From: Ralf Steppacher [mailto:ralf.viva...@gmail.com]
Sent: Thursday, March 24, 2016 11:19 AM
To: user@cassandra.apache.org
Subject: Re: Large number of tombstones without delete or update

I did some more tests with my particular schema/message structure:

- A null text field inside a UDT instance does NOT yield tombstones.
- A null map does NOT yield tombstones.
- A null top-level text field DOES yield tombstones.
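
For illustration, a sketch of the kind of insert that reproduces this
(hypothetical schema and names, assuming an open Session named “session”):

    // Hypothetical schema:
    //   CREATE TYPE ks.details (label text, comment text);
    //   CREATE TABLE ks.events (id int PRIMARY KEY, info frozen<details>,
    //                           tags map<text,text>, note text);
    PreparedStatement ps = session.prepare("INSERT INTO ks.events JSON ?");
    // Per the observations above: the null "comment" inside the UDT and the
    // null "tags" map yield no tombstones; the null top-level "note" does.
    session.execute(ps.bind(
        "{\"id\": 1, \"info\": {\"label\": \"x\", \"comment\": null},"
        + " \"tags\": null, \"note\": null}"));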


Ralf

On 24.03.2016, at 09:42, Ralf Steppacher <ralf.viva...@gmail.com> wrote:

I can confirm that if I send JSON messages that always cover all schema fields,
the tombstone issue is not reported by Cassandra.
So, is there a way to work around this issue other than to always populate
every column of the schema with every insert? That would be a pain in the
backside, really.

Why would C* not warn about the excessive number of tombstones when the same
query is run from cqlsh?


Thanks!
Ralf



On 23.03.2016, at 19:09, Robert Coli <rc...@eventbrite.com> wrote:

On Wed, Mar 23, 2016 at 9:50 AM, Ralf Steppacher <ralf.viva...@gmail.com> wrote:
How come I end up with that large a number of tombstones?

Are you inserting NULLs?

=Rob
