> I think this is a much better approach because it gives you the
> ability to update or retrieve just parts of objects efficiently,
> rather than making column values just blobs with a bunch of
> special-case logic to introspect them, which feels like a big step
> backwards to me.

Unless your access pattern involves reading/writing the whole document each 
time. In that case you're better off serializing the whole document and 
storing it in a single column as a byte[], without incurring the per-column 
index overhead. Right?
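
To make that concrete: in the whole-document case, the client side is just a 
serialize/deserialize round trip, so a minimal sketch needs nothing beyond 
the standard library. The document contents below are made up for 
illustration, and the actual Cassandra get/insert calls are elided, since 
the point is that only one column value is ever touched:

    import json

    def serialize_doc(doc):
        # Write path: the whole document becomes one opaque value,
        # stored as the lone column (no per-field columns on disk).
        return json.dumps(doc).encode('utf-8')

    def deserialize_doc(blob):
        # Read path: fetch that single column and decode once.
        return json.loads(blob.decode('utf-8'))

    doc = {"name": "widget", "price": 9.99, "tags": ["a", "b"]}
    blob = serialize_doc(doc)            # value to store in the one column
    assert deserialize_doc(blob) == doc  # round-trips losslessly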


On Mar 29, 2012, at 9:23 AM, Jonathan Ellis wrote:

> On Thu, Mar 29, 2012 at 9:57 AM, Jeremiah Jordan
> <jeremiah.jor...@morningstar.com> wrote:
>> It's not clear what 3647 actually is; there is no code attached and no 
>> real example in it.
>> 
>> Aside from that, the reason this would be useful to me (if we could get 
>> indexing of attributes working) is that I already have my data in 
>> JSON/Thrift/Protobuf. Depending on how large the data is, it isn't trivial 
>> to break it up into columns to insert, and reassemble from columns to read.
> 
> I don't understand the problem.  Assuming Cassandra support for maps
> and lists, I could write a Python module that takes JSON (or Thrift,
> or Protobuf) objects and splits them into Cassandra rows by fields in
> a couple of hours.  I'm pretty sure this is essentially what Brian's
> REST API for Cassandra does now.
> 
> I think this is a much better approach because it gives you the
> ability to update or retrieve just parts of objects efficiently,
> rather than making column values just blobs with a bunch of
> special-case logic to introspect them, which feels like a big step
> backwards to me.
> 
> -- 
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
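
For reference, the by-field splitting described above really is a small 
amount of client code. A rough sketch, assuming a dotted-path naming 
convention for the generated column names (my invention for illustration, 
not anything from the ticket):

    import json

    def flatten(obj, prefix=""):
        # Recursively split a decoded JSON object into (column_name, value)
        # pairs, one per leaf field, so individual fields can be read or
        # updated without rewriting the whole object.
        if isinstance(obj, dict):
            for key, value in obj.items():
                yield from flatten(value, prefix + key + ".")
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                yield from flatten(value, prefix + str(i) + ".")
        else:
            yield prefix[:-1], obj

    doc = json.loads('{"name": "w", "specs": {"h": 3}, "tags": ["a", "b"]}')
    print(dict(flatten(doc)))
    # {'name': 'w', 'specs.h': 3, 'tags.0': 'a', 'tags.1': 'b'}

Reassembling the object on read is the inverse walk, which is presumably the 
non-trivial part Jeremiah is referring to for large or deeply nested data.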
