On 7/21/2016 9:37 AM, Rallavagu wrote:
> I suspect swapping as well. But, for my understanding - are the index
> files from disk memory mapped automatically at the startup time?

They are *mapped* at startup time, but they are not *read* at startup.
The mapping just sets up a virtual address space for the entire file,
but until something actually reads the data from the disk, it will not
be in memory.  Having the data already in memory (the OS page cache) is
what makes mmap fast.

http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
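If it helps to see that behavior in isolation, here is a minimal Java
sketch (the index file path is just a placeholder) showing that mapping
a file only reserves address space; no disk I/O happens until the
buffer is actually accessed:

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class MmapDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder path -- substitute one of your own segment files.
            Path indexFile =
                Paths.get("/var/solr/data/collection1/index/_0.cfs");

            try (FileChannel channel =
                     FileChannel.open(indexFile, StandardOpenOption.READ)) {
                // map() only sets up virtual address space; nothing is read yet.
                MappedByteBuffer buf =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());

                // The first real access faults the page in from disk (or finds
                // it already resident in the OS page cache).
                byte firstByte = buf.get(0);
                System.out.println("First byte: " + firstByte);
            }
        }
    }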

> We are not performing "commit" after every update and here is the
> configuration for softCommit and hardCommit.
>
> <autoCommit>
>        <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>        <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>        <maxTime>${solr.autoSoftCommit.maxTime:120000}</maxTime>
> </autoSoftCommit>
>
> I am seeing QTimes (for searches) swing between 2 and 10 seconds.
> Some queries showed slowness due to faceting (debug=true). Since we
> adjusted indexing, facet times have improved, but basic query QTime is
> still high, so I'm wondering where I can look. Is there a way to debug
> (instrument) a query on a Solr node?

Assuming you have not defined the maxTime system properties referenced
in those configs, you will potentially be creating a new searcher every
two minutes ... but if you are sending explicit commits or using
commitWithin on your updates, then the true situation may be very
different from what's configured here.
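For example, a SolrJ 6.x update like the sketch below (the URL, core
name, and field are placeholders) would override the two-minute soft
commit configured above -- commitWithin asks for visibility within 5
seconds, and an explicit commit opens a new searcher immediately:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class CommitWithinExample {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and core name.
            try (SolrClient client = new HttpSolrClient.Builder(
                     "http://localhost:8983/solr/collection1").build()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "doc-1");

                // commitWithin: make this document searchable within 5 seconds,
                // regardless of the 120000 ms autoSoftCommit maxTime.
                client.add(doc, 5000);

                // An explicit commit opens a new searcher right away, which
                // also bypasses the configured intervals.
                client.commit();
            }
        }
    }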

>>> We have allocated a significant amount of RAM (48G total
>>> physical memory, 12G heap, Total index disk size is 15G)

Assuming there's no other software on the system besides the one
instance of Solr with a 12GB heap, this would mean that you have enough
room to cache the entire index.  What OS are you running on? With that
information, I may be able to relay some instructions that will help
determine what the complete memory situation is on your server.

Thanks,
Shawn
