Thanks a lot for your response!
I think that is exactly what's happening. It runs OK for a short time and
then starts throwing that error, while some of the queries still run
successfully.
I had it set up with 10 threads; maybe that was too many.
I'd be very interested in that code if you don't mind sharing.
I'm migrating code from pure Lucene to Solr, and indexing time went from
less than one hour to more than four because it's only using one thread.
Thanks a lot again, very helpful.
Maria


Sent from my Motorola ATRIX™ 4G on AT&T

-----Original message-----
From: rkuris <swcafe+nab...@gmail.com>
To: solr-user@lucene.apache.org
Sent: Sat, Sep 24, 2011 18:11:20 GMT+00:00
Subject: RE: JdbcDataSource and threads

My guess on this is that you're making a LOT of database requests, have a
million connections sitting in TIME_WAIT, and your local (ephemeral) port
range is running out.

You should first confirm that's true by running netstat on the machine while
the load is running and seeing how many TIME_WAIT sockets show up.
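On Linux, something like this gives a quick count (assuming the indexing
box is Linux; adjust for your OS):

    netstat -an | grep -c TIME_WAIT

If that number is up in the tens of thousands, you've likely found your
problem; the default ephemeral port range on many Linux systems is only
around 28k ports.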

One way to solve this problem is to use a connection pool. Look at adding a
pooled JNDI DataSource to your servlet container and connecting through that
instead.
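For example, if Solr is running under Tomcat, a pooled DataSource can be
declared in context.xml along these lines (the driver, URL, and names below
are just placeholders for your setup):

    <Resource name="jdbc/solrdb" auth="Container"
              type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost:3306/mydb"
              username="solr" password="secret"
              maxActive="20" maxIdle="10" maxWait="10000"/>

The pool keeps a fixed set of connections open and reuses them, so you stop
churning through local ports.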

The best way, though, is to avoid making the extra connections in the first
place. If the data in the sub-queries is really small, look into caching the
results with a CachedSqlEntityProcessor instead.
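Roughly like this in data-config.xml (the table and column names here are
made up; substitute your own):

    <entity name="item" query="SELECT id, name FROM item">
      <entity name="feature"
              query="SELECT item_id, descr FROM feature"
              processor="CachedSqlEntityProcessor"
              where="item_id=item.id"/>
    </entity>

The inner query runs once, gets cached in memory, and every later lookup is
served from the cache instead of a new database round trip.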
I wasn't able to use this approach myself because I had a lot of data in the
inner queries. What I ended up doing was writing my own
OrderedSqlEntityProcessor, which correlates an ordered outer query with an
ordered inner query. That ran a lot faster and cut my load times from 20
hours to 20 minutes. Let me know if you're interested in that code.
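If you'd rather roll your own, the core idea is just a merge join over two
result sets that are both ordered on the join key. A rough sketch of the
loop (illustrative only, not my actual processor; table names are made up):

    import java.sql.*;
    import java.util.*;

    // Merge join over two ordered queries -- the idea, not the real class.
    // Both queries MUST be sorted on the join key for this to work.
    public class MergeJoinSketch {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(args[0]); // JDBC URL
            ResultSet outer = conn.createStatement().executeQuery(
                "SELECT id, title FROM parent ORDER BY id");
            ResultSet inner = conn.createStatement().executeQuery(
                "SELECT parent_id, tag FROM child ORDER BY parent_id");

            boolean more = inner.next();
            while (outer.next()) {
                long id = outer.getLong("id");
                // skip inner rows whose key is below the current outer key
                while (more && inner.getLong("parent_id") < id) {
                    more = inner.next();
                }
                List<String> tags = new ArrayList<String>();
                // collect every inner row that matches this outer key
                while (more && inner.getLong("parent_id") == id) {
                    tags.add(inner.getString("tag"));
                    more = inner.next();
                }
                // ... emit one document from the outer row plus its tags ...
                System.out.println(id + " -> " + tags);
            }
        }
    }

One pass over each result set, two open statements total, and no per-row
sub-queries.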
