I've worked on an implementation for Postgres, using the Large Object API provided by the Postgres JDBC driver. It works, but I doubt it is very scalable, because the number of open connections during indexing can become very high.
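For anyone unfamiliar with it, here is a rough sketch of how the driver's Large Object API is used. This is not my Directory implementation, just an illustration; the connection URL and credentials are placeholders, exact method names (e.g. createLO vs. create) vary between driver versions, and large-object access must happen inside a transaction.

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LargeObjectSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/lucene", "user", "password");
        // Large objects must be read and written inside a transaction.
        conn.setAutoCommit(false);

        LargeObjectManager lom =
                ((PGConnection) conn).getLargeObjectAPI();

        // Create a new large object and write an index file's bytes into it.
        long oid = lom.createLO(
                LargeObjectManager.READ | LargeObjectManager.WRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
        lo.write("segment bytes...".getBytes());
        lo.close();

        // Read the bytes back.
        lo = lom.open(oid, LargeObjectManager.READ);
        byte[] buf = new byte[lo.size()];
        lo.read(buf, 0, buf.length);
        lo.close();

        conn.commit();
        conn.close();
    }
}
```

A Directory built on this keeps one such connection per open index file, which is where the connection count problem below comes from.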
Lucene opens many different files when writing to an index, and this results in opening one PG connection per open file. While working on a small index (30 000 files), I saw the number of open connections become quite high (approx 150). If you don't have a lot of RAM, this is problematic. Maybe someone more knowledgeable about Lucene can confirm how many files it keeps open while indexing? If Lucene needs more open files as the index grows, this solution is not scalable: your database server will eventually choke, and performance will degrade until indexing is impossibly slow.

I think the main goal of having an SQL directory is to make the index more accessible. I've found my implementation quite useful for _copying_ an existing index to and from a Postgres DB: I usually create the index in an FSDirectory and then copy it over into the PgsqlDirectory. I'll look into making the implementation available if you're interested.

Anyone have thoughts on this?

Thanks,
Phil

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: February 2, 2004 15:23
> To: Lucene Users List
> Subject: Re: SQLDirectory
>
> On Monday 02 February 2004 21:08, Jochen wrote:
> > RE: Lucene Optimized Query Broken?
>
> Thanks for the hint. Alas, I also didn't find it there :-( Anyway, I need
> something that works on any (Postgres) SQL db.

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
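[For reference, the build-in-FSDirectory-then-copy approach Phil describes can be sketched as below. This is a generic illustration, not his code: the copy API differs across Lucene versions (here the Lucene 5+ Directory.copyFrom is assumed), and PgsqlDirectory stands in for his unreleased implementation.]

```java
import java.nio.file.Paths;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.IOContext;

public class CopyIndex {
    // Copy every file from src into dest. Because the copy happens after
    // indexing is finished, only one file (and hence one DB connection in a
    // Postgres-backed Directory) needs to be open at a time.
    static void copyAll(Directory src, Directory dest) throws Exception {
        for (String name : src.listAll()) {
            dest.copyFrom(src, name, name, IOContext.DEFAULT);
        }
    }

    public static void main(String[] args) throws Exception {
        Directory src = FSDirectory.open(Paths.get("/tmp/index"));
        // PgsqlDirectory is Phil's (unpublished) implementation; any
        // Directory subclass would work as the destination here.
        Directory dest = new PgsqlDirectory();
        copyAll(src, dest);
    }
}
```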