Hi Mark,
With Java 7, are pointers compressed by default?
Other JVM settings:

-XX:+UseCompressedOops
Compressed oops is supported and enabled by default in Java SE 6u23
and later.
In Java SE 7, use of compressed oops is the default for 64-bit JVM
processes when -Xmx isn't specified and for values of -Xmx less than 32
gigabytes.
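
A quick way to verify this on a given JVM (a sketch, assuming a 64-bit
HotSpot java on the PATH):

  # "= true" in the output means compressed oops are in effect
  java -Xmx30g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

  # past 32 gigabytes the JVM falls back to uncompressed oops
  java -Xmx40g -XX:+PrintFlagsFinal -version | grep UseCompressedOops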
On Tuesday, June 10, 2014 3:17:49 PM UTC-7, Mark Walkom wrote:
>
> You will likely see an increase by distributing it to one shard per
> machine, but that's hard to quantify without actually doing it.
>
> Also, as Nik mentioned, you may be doing yourself a disservice with such a
> large heap size. Over 32GB, Java pointers are not compressed, and you lose
> a bit of performance because of this.
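>
> As a sketch, on Elasticsearch 1.x you would usually cap the heap through
> the ES_HEAP_SIZE environment variable (exact file locations vary by
> install):
>
>   # /etc/default/elasticsearch or /etc/sysconfig/elasticsearch
>   ES_HEAP_SIZE=30g
>
>   # or when launching a node by hand:
>   ES_HEAP_SIZE=30g ./bin/elasticsearch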
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: [email protected] <javascript:>
> web: www.campaignmonitor.com
>
>
> On 11 June 2014 07:20, <[email protected]> wrote:
>
>> Thanks for the clarification. The servers aren't under any (read) load
>> yet. There is a constant stream of updates in the background, roughly 60
>> index writes per second. The refresh interval is set to 60s. Can this be
>> a performance bottleneck?
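>>
>> (For reference, a sketch of how we check and set that interval;
>> "myindex" is a placeholder for the real index name:)
>>
>>   # check the current value
>>   curl -s 'localhost:9200/myindex/_settings?pretty'
>>
>>   # change it live, no restart needed
>>   curl -XPUT 'localhost:9200/myindex/_settings' -d '
>>   {"index": {"refresh_interval": "60s"}}'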
>>
>> We can add more nodes to bring it up to 10 nodes (5 shards with 1
>> replica). But I doubt that will reduce the empty search query to 50ms.
>> Are there any other profiling tools out there to debug the response time?
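>>
>> So far we have only looked at the built-in stats and hot_threads APIs (a
>> sketch, assuming a node reachable on localhost:9200):
>>
>>   # sample the busiest threads on each node
>>   curl -s 'localhost:9200/_nodes/hot_threads'
>>
>>   # per-index stats, including cumulative search timings
>>   curl -s 'localhost:9200/_stats?pretty'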
>>
>>
>> On Tuesday, June 10, 2014 11:30:03 AM UTC-7, Nikolas Everett wrote:
>>
>>> Short answer: yes.
>>> Long answer: 500ms is a long time for the empty query. I see 2ms from
>>> elasticsearch and 23ms from time in development. In production I see
>>> maybe 54ms from elasticsearch and 70ms from time across far, far more
>>> shards and more data. When I do the same query across thousands of
>>> shards and a couple of TB of data I get ~250ms. Production is 16 servers
>>> with 96GB of RAM and 30GB heaps.
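>>>
>>> (To reproduce that comparison: the "took" field is elasticsearch's own
>>> timing, and time measures the full round trip. A sketch:)
>>>
>>>   # elasticsearch's internal timing, in milliseconds
>>>   curl -s 'localhost:9200/_search?size=0' | grep -o '"took":[0-9]*'
>>>
>>>   # wall-clock time for the whole request
>>>   time curl -s -o /dev/null 'localhost:9200/_search?size=0'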
>>>
>>> The analyzers really aren't going to hurt performance.
>>>
>>> I'd have a look at your servers themselves: what kind of load are they
>>> under? What is your indexing rate? That kind of thing.
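>>>
>>> Both are easy to pull from the cluster itself (a sketch; sample the
>>> indexing counters twice and diff them to get a rate):
>>>
>>>   # OS load, CPU, and memory per node
>>>   curl -s 'localhost:9200/_nodes/stats/os?pretty'
>>>
>>>   # cumulative indexing counters per node
>>>   curl -s 'localhost:9200/_nodes/stats/indices?pretty'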
>>>
>>> Also, 30GB is normally the sweet spot for heap sizes, making ~64GB the
>>> sweet spot for total RAM. A 110GB heap is pretty high, and I'd expect
>>> new-generation (stop-the-world) garbage collection to take a while
>>> there.
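>>>
>>> (If you want to see the actual pause times, standard HotSpot GC logging
>>> shows them; a sketch of the flags, added wherever your install sets JVM
>>> options, with the log path just an example:)
>>>
>>>   -verbose:gc
>>>   -XX:+PrintGCDetails
>>>   -XX:+PrintGCTimeStamps
>>>   -Xloggc:/var/log/elasticsearch/gc.log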
>>>
>>>
>>> Nik
>>>
>>>
>>> On Tue, Jun 10, 2014 at 2:20 PM, <[email protected]> wrote:
>>>
>>>> I am currently running only 1 index with 5 shards, so both of those
>>>> queries yield the same response time. My main question is whether
>>>> scaling out is an option given the current replication scheme.
>>>>
>>>>
>>>> <https://lh5.googleusercontent.com/-bz8iQd0KUaA/U5dMSGLNNFI/AAAAAAAAABg/tGJl0HOj4xo/s1600/Elasticsearch+Cluster.png>
>>>>
>>>>
>>>> On Tuesday, June 10, 2014 11:15:26 AM UTC-7, Nikolas Everett wrote:
>>>>
>>>>> I imagine that depends on lots of stuff. Are you doing
>>>>> elasticsearch:9200/_search or elasticsearch:9200/index/_search? The
>>>>> former can take quite a while if you have lots of indexes and lots of
>>>>> shards. If you can get away with not doing it, I would. The latter
>>>>> will only take a long time if you have tons of shards. It should
>>>>> otherwise be pretty quick.
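>>>>>
>>>>> (Concretely, the two forms look like this; "myindex" is a placeholder:)
>>>>>
>>>>>   # fans out to every shard of every index on the cluster
>>>>>   curl -s 'localhost:9200/_search?size=0'
>>>>>
>>>>>   # touches only the shards of one index
>>>>>   curl -s 'localhost:9200/myindex/_search?size=0'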
>>>>>
>>>>>
>>>>> On Tue, Jun 10, 2014 at 2:10 PM, <[email protected]> wrote:
>>>>>
>>>>>> We currently run our Elasticsearch (v1.0.2) cluster on 3 nodes with a
>>>>>> 5-shard, 1-replica scheme. The total index size is about 70GB (~140GB
>>>>>> with replication).
>>>>>>
>>>>>> The empty search (/_search) query takes 500-600ms to respond. Will
>>>>>> adding more nodes help in this case? The servers have 252GB of RAM
>>>>>> and a 110GB heap.
>>>>>>
>>>>>> The index uses the following analyzers: standard, lowercase, stop,
>>>>>> porter_stem. Will this degrade query performance?
>>>>>>
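>>>>>> (For reference, that chain is declared roughly as below; the analyzer
>>>>>> and index names here are placeholders:)
>>>>>>
>>>>>>   curl -XPUT 'localhost:9200/myindex' -d '
>>>>>>   {
>>>>>>     "settings": { "analysis": { "analyzer": {
>>>>>>       "my_english": {
>>>>>>         "tokenizer": "standard",
>>>>>>         "filter": ["lowercase", "stop", "porter_stem"]
>>>>>>       }
>>>>>>     }}}
>>>>>>   }'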