Why would it be radical?
It would be the same get_indexed_slices search, but over a specified set of
rows. Essentially it adds one more search expression, over row IDs rather than
only column values. Usually, the more restrictions you can specify in a search
query, the faster the search can be (at least not slower).
The way to specify more restrictions in the query is to put them in the
index_clause. The index clause is applied to the set of all rows in the
database, not a subset; applying it to a subset is implicitly supporting a
subquery. Currently it's doing a select and then a project; this would be a
select over the result of another select, then a project.
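The difference between applying the clause to all rows and applying it to a caller-supplied key subset can be shown with a minimal in-memory sketch. This is an illustrative model only, not the Cassandra API: a row is a key mapped to (column name -> value), and the index clause is just a predicate.

```java
import java.util.*;
import java.util.function.Predicate;

// Illustrative in-memory model only, not the Cassandra API.
public class IndexClauseSubset {

    // "Select": evaluate the index clause over a candidate key set.
    // Passing all keys models today's behaviour; passing a subset models
    // the proposed restriction (effectively a subquery).
    static SortedSet<String> select(Map<String, Map<String, Long>> rows,
                                    Set<String> candidates,
                                    Predicate<Map<String, Long>> indexClause) {
        SortedSet<String> hits = new TreeSet<>();
        for (String key : candidates) {
            Map<String, Long> cols = rows.get(key);
            if (cols != null && indexClause.test(cols)) {
                hits.add(key);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Long>> rows = new HashMap<>();
        rows.put("a", Map.of("age", 30L));
        rows.put("b", Map.of("age", 40L));
        rows.put("c", Map.of("age", 50L));
        Predicate<Map<String, Long>> clause = cols -> cols.get("age") >= 40;

        // Today: the clause is applied to all rows, then columns are projected.
        System.out.println(select(rows, rows.keySet(), clause)); // [b, c]
        // Proposed: the clause is applied only to caller-supplied keys.
        System.out.println(select(rows, Set.of("a", "b"), clause)); // [b]
    }
}
```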
Not sure it's a feature Cassandra needs; it would radically change the meaning
of get_indexed_slices(). If you already know the row keys, the assumption would
be that those are the rows you want to get.
Feel free to add a Jira though.
IMHO this sounds more like Sphinx not supporting all the range queries you need.
Hi,
We need to search over Cassandra, and we are using Sphinx for indexing.
Because of Sphinx's architecture we can't use range queries over all the
fields we need to.
So we have to run a Sphinx query first to get a list of row keys, and then
perform additional range filtering over the column values.
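That two-step flow can be sketched with an in-memory stand-in. Here performSphinxQuery and multiget are local stubs over hypothetical data, not real Sphinx or Cassandra calls; only the shape of the workaround is the point.

```java
import java.util.*;

public class TwoStepSearch {
    // Stub for the Sphinx full-text query: returns matching row keys.
    static List<String> performSphinxQuery(String query) {
        return Arrays.asList("row1", "row2", "row3");
    }

    // Stub for a Cassandra multiget: row key -> (column name -> value).
    static Map<String, Map<String, Long>> multiget(Collection<String> keys) {
        Map<String, Map<String, Long>> store = new HashMap<>();
        store.put("row1", Map.of("price", 10L));
        store.put("row2", Map.of("price", 50L));
        store.put("row3", Map.of("price", 99L));
        store.put("row4", Map.of("price", 70L)); // not matched by Sphinx
        store.keySet().retainAll(new HashSet<>(keys));
        return store;
    }

    // Step 2: the extra client-side range filter over column values.
    static List<String> rangeFilter(Map<String, Map<String, Long>> rows,
                                    String column, long min, long max) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Map<String, Long>> e : rows.entrySet()) {
            Long v = e.getValue().get(column);
            if (v != null && v >= min && v <= max) {
                out.add(e.getKey());
            }
        }
        Collections.sort(out);
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = performSphinxQuery("some full text query");
        Map<String, Map<String, Long>> rows = multiget(keys);
        System.out.println(rangeFilter(rows, "price", 20, 100)); // [row2, row3]
    }
}
```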
Just checking, you want an API call like this?

multiget_filtered_slice(keys, column_parent, predicate, filter_clause,
consistency_level)

where filter_clause is an IndexClause.
It's a bit messy.
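For concreteness, that method might look like this in Thrift IDL. This is a sketch only, not committed API: the return and exception types are assumed to mirror the existing get_indexed_slices declaration.

```thrift
/* Sketch only -- return and exception types assumed to
   mirror the existing get_indexed_slices method. */
list<KeySlice> multiget_filtered_slice(1:required list<binary> keys,
                                       2:required ColumnParent column_parent,
                                       3:required SlicePredicate predicate,
                                       4:required IndexClause filter_clause,
                                       5:required ConsistencyLevel consistency_level=ONE)
    throws (1:InvalidRequestException ire,
            2:UnavailableException ue,
            3:TimedOutException te)
```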
Is there no way to express this as a single get_indexed_slices() call, with a
== index expression?
Something like this.
Actually, I think it's better to extend the get_indexed_slices() API instead of
creating a new Thrift method.
I wish to have something like this:
// here we run a query against the external search engine
List<byte[]> keys = performSphinxQuery(someFullTextSearchQuery);
IndexClause indexClause