Hi Shawn,

Double thanks for answering my whole thread.

Regarding the page faults, they seem worth worrying about because this
setup is identical for both solr4 and solr6. I haven't found a good way
to debug them yet, though.

I found some strange behavior today: my primary solr node (which
handles queries with the 'shards' parameter) is asking for a very large
number of 'rows' from the shard nodes. (I sent this in a separate email
so that I don't jumble different questions together in the same thread.)


Thanks
Nawab


On Thu, Aug 17, 2017 at 11:32 AM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 8/12/2017 11:48 AM, Nawab Zada Asad Iqbal wrote:
> > I am executing a query performance test against my solr 6.6 setup and I
> > noticed the following exception every now and then. What do I need to do?
> >
> > Aug 11, 2017 08:40:07 AM INFO  (qtp761960786-250) [   x:filesearch]
> > o.a.s.s.HttpSolrCall Unable to write response, client closed connection
> or
> > we are shutting down
> > org.eclipse.jetty.io.EofException
>
> <snip>
>
> > Caused by: java.io.IOException: Broken pipe
>
> <snip>
>
> > Apart from that, I also noticed that the query response time is longer
> than
> > I expected, while memory utilization stays <= 35%. I thought I might
> have
> > set maxThreads (Jetty) to a very low number somewhere, but I am using
> the
> > default, which is 10000 (so that shouldn't be the problem).
>
> The EofException and "broken pipe" messages are typical when the client
> closes the TCP connection before Solr finishes processing the request
> and sends a response.  When Solr finally finishes working and has a
> response, the web container where Solr is running tries to send the
> response back, but finds that the connection is gone, and logs the kind
> of exception you are seeing.
>
> Very likely what has happened is that the program sending the queries
> has a very low socket timeout (or total request timeout) configured on
> the http connection, and that the requests are taking longer than that
> timeout to execute, so the query software closes the connection.
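
[Interjecting inline for my own reference: the failure mode described
above can be reproduced with plain sockets. This is a minimal sketch,
not Solr or Jetty code — the "slow server" just sleeps to stand in for
a long-running query, and the timings are made up for illustration.]

```python
import socket
import threading
import time

def run_demo():
    """Client times out and closes; the server's later write then fails."""
    events = []
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def slow_server():
        conn, _ = srv.accept()
        time.sleep(0.5)                    # stand-in for a slow query
        try:
            conn.sendall(b"response")      # may still land in the TCP buffer
            time.sleep(0.2)
            conn.sendall(b"more data")     # this write hits the dead connection
        except OSError as e:               # BrokenPipeError / ConnectionResetError
            events.append("server write failed: " + type(e).__name__)
        finally:
            conn.close()

    t = threading.Thread(target=slow_server)
    t.start()

    client = socket.create_connection(srv.getsockname())
    client.settimeout(0.1)                 # shorter than the "query" takes
    try:
        client.recv(1024)
    except socket.timeout:
        events.append("client timed out")  # client gives up, like the load tester
        client.close()
    t.join()
    srv.close()
    return events

if __name__ == "__main__":
    print(run_demo())
```

The server-side failure here is the same condition Jetty surfaces as
EofException / "Broken pipe": the response is ready, but the peer is gone.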
>
> Later in the thread you mentioned maxConnections.  Some software might
> decide to kill existing connections when that limit is exceeded, so more
> connections can be opened.  That's something you'd need to discuss with
> whoever wrote the software.
>
> Also later in the thread you mentioned "page faults" ... without a lot
> of specific detail, we're not going to have any idea what you mean by
> that.  I can tell you that if you're looking at operating system memory
> counters, page faults are a completely normal part of OS operation.  By
> itself, that number won't mean anything.
>
> Long query times can be caused by many things.  One of the most common
> is not having enough memory left over for the operating system to
> effectively cache your index ... but this is not the only thing that can
> cause problems.
>
> Thanks,
> Shawn
>
>