You can use Apache Pig to load the data from Cassandra and filter it by row key; filtering in Pig is very fast.
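
For example, a minimal sketch of such a Pig script (the keyspace 'MyKeyspace', column family 'MyCF' and the row key value are placeholders, not from your setup; it assumes the Cassandra jars are registered and PIG_INITIAL_ADDRESS / PIG_RPC_PORT / PIG_PARTITIONER are set so CassandraStorage can connect):

  -- load all rows of the column family through Cassandra's Pig adapter
  rows = LOAD 'cassandra://MyKeyspace/MyCF'
         USING org.apache.cassandra.hadoop.pig.CassandraStorage();
  -- keep only the rows whose key matches the value you care about
  wanted = FILTER rows BY key == 'some-row-key';
  DUMP wanted;

The FILTER step runs inside the MapReduce job Pig generates, so you don't have to write the row-key filtering logic yourself.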
Regards
  Shamim

11.12.2012, 20:46, "Ayush V." <ayushv...@gmail.com>:
> I'm working on Cassandra Hadoop integration (MapReduce). We used the Random
> Partitioner when inserting data to get faster writes. Now we have to read that data
> back from Cassandra in MapReduce and perform some calculations on it.
>
> Out of all the data we have in Cassandra we want to fetch only the data for
> particular row keys, but we are unable to do so because of the RandomPartitioner -
> there is an assertion in the code.
>
> Can anyone please guide me on how I should filter data by row key at the
> Cassandra level itself (I know the data is distributed across nodes using the
> hash of the row key)?
>
> Will using secondary indexes (I'm still trying to understand how they work)
> solve my problem, or is there some other way around this?
>
> I would really appreciate it if someone could answer my queries.
>
> Thanks
> AV
>
