You want to run map/reduce jobs for your use case. You can already do
this with Cassandra (http://wiki.apache.org/cassandra/HadoopSupport),
and DataStax is introducing Brisk soon to make it easier:
http://www.datastax.com/products/brisk
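
For reference, here is a rough sketch of what the Hadoop side of that looks
like, loosely based on the word_count example shipped in the Cassandra source
tree. The keyspace, column family and column names below are placeholders,
and the exact ConfigHelper method names vary a bit between 0.7 and 0.8, so
treat this as an illustration rather than copy-paste code:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.SortedMap;

    import org.apache.cassandra.db.IColumn;
    import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.cassandra.utils.ByteBufferUtil;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ScanJob
    {
        // Minimal mapper: emits one count per row, showing the
        // (row key, columns) pairs ColumnFamilyInputFormat delivers.
        public static class RowCounter
                extends Mapper<ByteBuffer, SortedMap<ByteBuffer, IColumn>, Text, IntWritable>
        {
            public void map(ByteBuffer key, SortedMap<ByteBuffer, IColumn> columns, Context context)
                    throws IOException, InterruptedException
            {
                context.write(new Text("rows"), new IntWritable(1));
            }
        }

        public static void main(String[] args) throws Exception
        {
            Job job = new Job();
            job.setJarByClass(ScanJob.class);
            job.setMapperClass(RowCounter.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileOutputFormat.setOutputPath(job, new Path("/tmp/scan_output"));

            // Read input rows directly from Cassandra instead of HDFS.
            job.setInputFormatClass(ColumnFamilyInputFormat.class);
            ConfigHelper.setRpcPort(job.getConfiguration(), "9160");
            ConfigHelper.setInitialAddress(job.getConfiguration(), "localhost");
            ConfigHelper.setPartitioner(job.getConfiguration(),
                                        "org.apache.cassandra.dht.RandomPartitioner");
            ConfigHelper.setInputColumnFamily(job.getConfiguration(),
                                              "MyKeyspace", "MyColumnFamily");

            // Restrict the scan to the columns the job actually needs.
            SlicePredicate predicate = new SlicePredicate()
                .setColumn_names(Arrays.asList(ByteBufferUtil.bytes("my_column")));
            ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The input format splits the scan across the cluster, so each mapper reads the
rows stored locally on its node, which is exactly the full-sequential-scan
part of your workload.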

On Wed, Apr 20, 2011 at 9:36 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
> CQL changes the API, that is all.
>
> On Wed, Apr 20, 2011 at 5:40 PM, Constantin Teodorescu
> <braila...@gmail.com> wrote:
>> My use case is as follows: about 70% of our jobs retrieve information
>> using keys, column names and ranges, and so far what we have tested
>> suits our needs.
>> However, the remaining 30% of the jobs involve a full sequential scan of
>> all records in the database.
>> I found some web pages describing the next big thing for the Cassandra
>> 0.8 release, CQL, and I'm wondering: will CQL execution involve separate
>> processes running simultaneously on all nodes in the cluster that do the
>> "filtering and pre-sorting phase" on the locally stored data (using
>> indexes when available) and then execute the "merge phase" on a single
>> node (the one that received the request)?
>> Best regards,
>> Teo
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
