It's just occurred to me that everything in
org.apache.oodt.cas.filemgr.catalog.solr.SolrCatalog (well, specifically
org.apache.oodt.cas.filemgr.catalog.solr.SolrClient) is done through
HttpClient, so all of the above may not be relevant.
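That said, a client-side socket stuck in CLOSE_WAIT usually means the server
has closed its end but our side never closed the socket, which with HttpClient
generally points at connections not being released after the response is read.
I haven't checked which HttpClient version SolrClient actually pulls in, but
with Commons HttpClient 3.x the usual pattern looks something like the sketch
below (the URL is just a placeholder, not the exact request SolrClient builds):

  import org.apache.commons.httpclient.HttpClient;
  import org.apache.commons.httpclient.methods.GetMethod;

  HttpClient client = new HttpClient();
  // placeholder query URL, for illustration only
  GetMethod get = new GetMethod("http://localhost:8081/solr/select?q=*:*");
  try {
      client.executeMethod(get);
      String body = get.getResponseBodyAsString();
      // ... parse the response ...
  } finally {
      // without this the connection can linger and show up as CLOSE_WAIT
      get.releaseConnection();
  }

If SolrClient is skipping the releaseConnection() step, that would line up
with what Kostas is seeing.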
Can you please scope it all the same?
Thanks
Lewis


On Tue, Jul 8, 2014 at 10:54 PM, Lewis John Mcgibbney <
[email protected]> wrote:

> Hi Konstantinos,
> OK, I was able to scope this one and I have a few questions for you.
> 1) Which version of Solr are you using? Is it 3.5, 3.6, or 4.0-ALPHA? If so,
> please scope this issue [0]. The solution would be to upgrade if you are not
> too far along with ingestion, as the fixes in recent Solr releases are worth
> having.
> 2) How many cores do you have on the Solr server? Also, what kind of setup
> do you have? Any replication?
> In recent Solr 4.x versions, SolrJ clients should call shutdown() on their
> SolrServer object to let it know they don't want to re-use any existing
> connections, and when Solr internally uses SolrJ to talk to other nodes in
> SolrCloud it should be doing the same (as of 4.0-ALPHA), so this is why I
> ask.
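> As a rough sketch of what I mean (the URL is a placeholder, and this assumes
> the SolrJ 4.x HttpSolrServer API):
>
>   import org.apache.solr.client.solrj.impl.HttpSolrServer;
>
>   HttpSolrServer server = new HttpSolrServer("http://localhost:8081/solr");
>   try {
>       // ... queries and updates through server ...
>   } finally {
>       // tells SolrJ to release its pooled HttpClient connections
>       server.shutdown();
>   }
>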
> Lewis
>
> [0] https://issues.apache.org/jira/browse/SOLR-3280
>
>
> On Tue, Jul 8, 2014 at 7:14 AM, Konstantinos Mavrommatis <
> [email protected]> wrote:
>
>> Hi,
>> I have setup OODT filemanager on port 9000, using SOLR as the indexing
>> service on port 8081. They are both setup on the same computer, while
>> crawler runs on a number of different compute nodes spread across the local
>> network and the cloud.
>>
>> When the crawler runs and ingests files I notice that there are several
>> connections that open to solr and remain in CLOSE_WAIT state for hours.
>> any idea why this happens? Moving forward I am planning to use several
>> hundreds of crawler instances, each running on different computer, that
>> will create thousands of such connections and will probably create problems
>> to the system.
>> Thanks in advance for any help
>> Kostas
>>
>> $ lsof -i :8081
>> COMMAND   PID         USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
>> java    92065 kmavrommatis   75u  IPv6 0x392c3fa3b63b29cf      0t0  TCP localhost:49205->localhost:sunproxyadmin (CLOSE_WAIT)
>> java    92065 kmavrommatis   76u  IPv6 0x392c3fa3b6dcbbcf      0t0  TCP localhost:49206->localhost:sunproxyadmin (CLOSE_WAIT)
>> java    92065 kmavrommatis   77u  IPv6 0x392c3fa39fd12e0f      0t0  TCP localhost:49207->localhost:sunproxyadmin (CLOSE_WAIT)
>> java    92065 kmavrommatis   78u  IPv6 0x392c3fa39fdcdbcf      0t0  TCP localhost:49208->localhost:sunproxyadmin (CLOSE_WAIT)
>> java    92065 kmavrommatis   79u  IPv6 0x392c3fa3b62cde0f      0t0  TCP localhost:49209->localhost:sunproxyadmin (CLOSE_WAIT)
>> java    92065 kmavrommatis   80u  IPv6 0x392c3fa39fa2714f      0t0  TCP localhost:49210->localhost:sunproxyadmin (CLOSE_WAIT)
>> java    92065 kmavrommatis   81u  IPv6 0x392c3fa3b6c32acf      0t0  TCP localhost:49211->localhost:sunproxyadmin (CLOSE_WAIT)
>> java    92065 kmavrommatis   82u  IPv6 0x392c3fa3b6aa714f      0t0  TCP localhost:49212->localhost:sunproxyadmin (CLOSE_WAIT)
>>
>>
>> Process 92065 is:
>>  /usr/bin/java -Djava.ext.dirs=../lib
>> -Djava.util.logging.config.file=../etc/logging.properties
>> -Dorg.apache.oodt.cas.filemgr.properties=../etc/filemgr.properties
>> org.apache.oodt.cas.filemgr.system.XmlRpcFileManager --portNum 9000
>>
>
>
>
> --
> *Lewis*
>



-- 
*Lewis*
