On Fri, Oct 21, 2011 at 2:23 PM, Elias Levy <[email protected]> wrote:
> I found that if I limited the timestamps to a range that covers a
> reasonable number of records the query succeeds. But if the query is of the
> form 'ts:[0 TO 1319228408]', then Riak generates that error and the client
> connection is shut down. I am guessing that that query covers too many
> records, which is causing the nodes to take longer than expected to respond,
> and that some timeout is being reached and Riak kills the query. Is that
> correct?

I should probably have mentioned that there are other terms in the query that limit the results.

I am now wondering if this is caused by the fact that Riak Search is sharded by term, rather than by document, causing it to search for each term in the query independently and then intersect the matches to produce the query result. If that is the case, then a single query term that selects a large portion of the index will cause trouble, even if other terms limit the results, as the system will need to return a good portion of the keys in the bucket before they can be whittled down by the other query terms.

If so, it would seem the only solution is to break the query into smaller, more manageable chunks and aggregate them on the client side. Is this correct?
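By chunking I mean something along the lines of the rough sketch below, issued against Riak Search's Solr-compatible HTTP interface. The endpoint, bucket/index name ("events"), field names ("ts", "status"), row limit, and chunk size are placeholders rather than anything from my actual setup, and the JSON response shape is assumed to mirror standard Solr output.

    # Sketch: split a large timestamp range into bounded sub-ranges,
    # query each sub-range separately, and merge the keys client-side.
    import json
    import urllib.parse
    import urllib.request

    # Assumed Solr-style search endpoint for a bucket/index named "events".
    RIAK_SEARCH_URL = "http://127.0.0.1:8098/solr/events/select"
    CHUNK = 3600  # one hour of timestamps per query (placeholder)

    def search_chunk(ts_lo, ts_hi, extra="status:error"):
        """Run one bounded range query and return the matching document ids."""
        query = f"{extra} AND ts:[{ts_lo} TO {ts_hi}]"
        params = urllib.parse.urlencode({"q": query, "wt": "json", "rows": 10000})
        with urllib.request.urlopen(f"{RIAK_SEARCH_URL}?{params}") as resp:
            body = json.load(resp)
        # Assumes a Solr-like response: {"response": {"docs": [{"id": ...}, ...]}}
        return [doc["id"] for doc in body["response"]["docs"]]

    def search_range(ts_start, ts_end):
        """Aggregate the results of many small range queries on the client."""
        keys = set()
        for lo in range(ts_start, ts_end + 1, CHUNK):
            hi = min(lo + CHUNK - 1, ts_end)
            keys.update(search_chunk(lo, hi))
        return keys

    if __name__ == "__main__":
        matches = search_range(1319140000, 1319228408)
        print(f"{len(matches)} matching keys")

The obvious downside is that the other limiting terms get evaluated once per chunk, so the total work may not drop much; it just avoids any single query covering most of the index at once.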
