Hi! So I have millions and millions of documents in my Elasticsearch, each
one of which has a field called "time". I need the results of my queries to
come back in chronological order, so I put a "sort":{"time":{"order":"asc"}}
in all my queries.
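
Here's a stripped-down version of what one of my requests looks like. The
"events" index name and the match_all query are just stand-ins for whatever
I'm actually searching; the sort clause is the part that matters:

# "events" and match_all are placeholders; the sort is the real bit
curl -XGET 'http://localhost:9200/events/_search' -d '{
  "query": { "match_all": {} },
  "sort": { "time": { "order": "asc" } },
  "size": 100
}'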

This was going great on smaller data sets, but then Elasticsearch started
sending me 500s, and circuit breaker exceptions started showing up in the
logs with "data for field time would be too large". So I checked out
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html
and that looks a lot like what I've been seeing: it seems like it's trying
to pull all the millions of time values into memory even if they're not
relevant to my query.
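
If I'm reading the cat API docs right, this is how I'd check how much heap
the fielddata for "time" is actually taking (the hostname is just my local
node):

# per-node fielddata memory for the "time" field; localhost:9200 is a placeholder
curl -XGET 'http://localhost:9200/_cat/fielddata/time?v'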

What are my options for fixing this? I can't compromise on chronological
order; it's at the heart of my application. "More memory" would be a
short-term fix, but the idea is to scale this thing to trillions and
trillions of points, and that's a race I don't want to run. Can I make
these exceptions go away without totally tanking performance? Thanks!
