Some notes from warkolm via #elasticsearch

<warkolm> searchme close index
[17:52] <searchme> warkolm: Try these urls:
[17:52] <searchme> .. http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-open-close.html#indices-open-close
[17:52] <searchme> .. http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices.html#index-management
[17:53] <warkolm> there you go
[17:57] <warkolm> but check out the cat apis
[17:57] <warkolm> if you're new, install monitoring plugins like elastichq and kopf, they will give you visual insight into things

Thanks warkolm!
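
In case it helps anyone else landing on this thread: a minimal sketch of what 
that advice looks like against the _cat and open/close APIs from the docs 
above, in Python with requests. The host, index pattern, and retention window 
are assumptions for illustration, not anything from this thread.

# Sketch: list indices via the cat API, then close daily logstash indices
# older than a cutoff so they stop consuming heap (the data stays on disk
# and can be reopened later with POST /{index}/_open).
# Host, index pattern, and retention window are assumptions.
from datetime import datetime, timedelta

import requests

ES = "http://localhost:9200"   # assumed cluster address
KEEP_DAYS = 30                 # assumed "hot" retention window

cutoff = datetime.utcnow() - timedelta(days=KEEP_DAYS)

# GET /_cat/indices?h=index returns plain text, one index name per line.
names = requests.get(ES + "/_cat/indices", params={"h": "index"}).text.split()

for name in names:
    if not name.startswith("logstash-"):
        continue
    try:
        day = datetime.strptime(name, "logstash-%Y.%m.%d")
    except ValueError:
        continue  # skip anything not following the daily naming scheme
    if day < cutoff:
        print("closing", name)
        requests.post(ES + "/" + name + "/_close")

Closed indices stop taking heap and file handles, but they also stop being 
searchable until reopened with POST /{index}/_open, so the cutoff is really a 
trade-off between RAM and how far back Kibana users can query.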

On Wednesday, September 3, 2014 5:32:11 PM UTC+9, El Jeffo wrote:
>
> So I've collected about 100GB of logstash logs over 3 months.
>
> So there are roughly 100 indexes, such as logstash-2014.07.01 and so forth.
>
> I have a cluster of 3 EC2 instances, 1 CPU with 4GB RAM each.  Granted, it's 
> not much.
>
> When I do queries, they're usually fast, until I run one over a large 
> timespan that spans, say, 10-30 indexes.  At that point, I'm guessing each 
> node has loaded so much index and field data that it's nearly impossible to 
> avoid overrunning the heap or RAM on the cluster.  I end up with nodes at 
> 100% CPU and 75% RAM usage.
>
> I just wanted to check what was possible with tuning:
>
> 1) Given limited RAM, is it possible to somehow tune my nodes such that, in 
> the event of a large query requiring too much RAM:
>    1a) The job gets killed due to timeout
>    1b) Something else saves my node from becoming non-responsive?
>
> 2) Is it possible to make some indexes fast while others are slow?
>   2a) When I query historical data, I don't need an answer quickly.  Just 
> eventually.
>   2b) When I query the last 72 hours, I really want an answer quickly, 
> even if that means killing other jobs
>
> 3) Is it an unavoidable fact that as my data increases, I have no choice 
> but to either:
>   3a) Increase cluster RAM to hold every index/field at the same time?
>   3b) Delete indexes until everything fits in RAM?
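
On 1a above: the search API does accept a per-request timeout. It won't kill 
anything at the node level, but Elasticsearch should stop collecting early, 
set timed_out in the response, and return whatever partial hits it has, which 
at least bounds how long a huge historical query can churn. A rough sketch in 
Python, where the host, index pattern, and values are assumptions:

# Sketch: bound an expensive historical query with the search API's
# per-request timeout; host, index pattern, and query body are assumptions.
import requests

ES = "http://localhost:9200"

body = {
    "timeout": "10s",            # give up after ~10s and return partial hits
    "size": 100,
    "query": {"match_all": {}},  # stand-in for the real Kibana query
}

resp = requests.post(ES + "/logstash-2014.07.*/_search", json=body).json()

# timed_out is true when the time budget ran out; hits may be incomplete.
print("timed out:", resp["timed_out"], "total hits:", resp["hits"]["total"])

For 1b, the fielddata circuit breaker settings are also worth a look; they are 
meant to reject a request before loading fielddata blows out the heap.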
>
> As I open up search to more people, they are running queries in Kibana that 
> span a larger and larger timeframe, which leads to randomly frozen nodes.
>
> If there were just some way to prevent frozen nodes (CPU maxed out at 100% 
> despite RAM usage at, say, 3GB out of 4GB), then I would have a more stable 
> cluster.
>
> As EC2 does carry a noticeable cost, I'm trying to minimize my EC2 
> requirement, so I'm looking for ways to selectively reduce performance 
> where I don't need it.
>
> Any ideas?
>
> Jeff
>
