I’ve come across the same thing. I have a table with at least half a dozen 
columns that could be null, in any combination. Having a prepared statement for 
each permutation of null columns just isn’t going to happen. I don’t want to 
build custom queries each time because I have a really cool system of managing 
my queries that relies on them being prepared.
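For anyone less attached to prepared statements, the "build custom queries each time" approach is straightforward: generate the INSERT with only the non-null columns, so nothing is written (and no tombstone created) for the missing ones. A minimal sketch in plain Python, with hypothetical table and column names:

```python
def build_insert(table, values):
    """Build a CQL INSERT that skips columns whose value is None.
    Omitted columns are simply not written, so no tombstones result."""
    cols = {k: v for k, v in values.items() if v is not None}
    names = ", ".join(cols)
    placeholders = ", ".join("%({})s".format(k) for k in cols)
    cql = "INSERT INTO {} ({}) VALUES ({})".format(table, names, placeholders)
    return cql, cols

# Hypothetical row: 'nickname' is null, so it is left out entirely.
cql, params = build_insert("users", {"id": 1, "name": "Matt", "nickname": None})
# cql == "INSERT INTO users (id, name) VALUES (%(id)s, %(name)s)"
```

The trade-off is exactly the one described above: each distinct combination of present columns is a different statement, so you lose the one-prepared-statement-per-query model.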

Fortunately for me, I should have at most a handful of tombstones in each 
partition, and most of my records are written exactly once. So, I just let the 
tombstones get written and they’ll eventually get compacted out and life will 
go on.

It’s annoying and not ideal, but what can you do?

On Apr 29, 2015, at 2:36 AM, Matthew Johnson <matt.john...@algomi.com> wrote:

Hi all,

I have some fields that I am storing into Cassandra, but some of them could be 
null at any given point. As there are quite a lot of them, it makes the code 
much more readable if I don’t check each one for null before adding it to the 
INSERT.

I can see a few Jiras around CQL 3 supporting inserting nulls:

https://issues.apache.org/jira/browse/CASSANDRA-3783
https://issues.apache.org/jira/browse/CASSANDRA-5648

But I have tested inserting null and it seems to work fine (when querying the 
table with cqlsh, it shows up as a red lowercase null).

Are there any obvious pitfalls to look out for that I have missed? Could it be 
a performance concern to insert a row with some nulls, as opposed to checking 
the values first and omitting the null columns from the INSERT?

Thanks!
Matt
