I'm trying to reduce memory usage when indexing, and I see that using
the binary format may be a good way to do this. Unfortunately I can't
see a way to do that with the EmbeddedSolrServer, since only the
CommonsHttpSolrServer has a setRequestWriter method. If I'm running out
of memory
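For reference, this is roughly what the switch to the binary (javabin) update format looks like on the HTTP client in SolrJ. The URL is a placeholder; note that EmbeddedSolrServer talks to the core in-process, so there is no wire format to configure there:

```java
import java.net.MalformedURLException;
import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class BinaryUpdates {
    public static CommonsHttpSolrServer newServer(String url)
            throws MalformedURLException {
        CommonsHttpSolrServer server = new CommonsHttpSolrServer(url);
        // Send updates in the compact javabin format instead of XML.
        // Only the HTTP client exposes setRequestWriter; the embedded
        // server bypasses request serialization entirely.
        server.setRequestWriter(new BinaryRequestWriter());
        return server;
    }
}
```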
I'm working on an application that will build indexes directly using the
Lucene API, but will expose them to clients using Solr. I'm seeing
plenty of documentation on how to support date range fields in Solr,
but it all assumes that you are inserting documents through Solr rather
than merging
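One detail worth noting when indexing with raw Lucene but querying through Solr: Solr's DateField expects dates in canonical UTC ISO-8601 form with a trailing 'Z' (e.g. 2009-06-18T16:00:00Z), so values written directly via Lucene need to be stored that way. A minimal sketch of producing that form; the class and method names here are illustrative:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SolrDateFormat {
    // Solr's DateField parses UTC ISO-8601 with a trailing 'Z'.
    public static String toSolrDate(Date d) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(d);
    }

    public static void main(String[] args) {
        // The epoch formats as 1970-01-01T00:00:00Z
        System.out.println(toSolrDate(new Date(0L)));
    }
}
```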
I'm planning out a system with large indexes and wondering what kind
of performance boost I'd see if I split out documents into many cores
rather than using a single core and splitting by a field. I've got about
500GB worth of indexes ranging from 100MB to 50GB each.
I'm assuming if we split
On Thu, Jun 18, 2009 at 4:00 PM, Jonathan Vanasco jvana...@2xlp.com wrote:
Can anyone give me a suggestion? I haven't touched Java / Jetty / Tomcat /
whatever in at least a good 8 years and am lost.
I spent a lot of time trying to get this working too. My conclusion
was simply that the .deb
Phil Hagelberg p...@hagelb.org writes:
Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com writes:
if you removed the files while the slave is running, then the slave
will not know that you removed the files (assuming it is a *nix box)
and it will serve the search requests. But if you restart the slave,
it should have
of
development such that it shouldn't be expected to work for casual users;
if that is the case, I can go back to the external-script-based
replication features of 1.3.
thanks,
Phil Hagelberg
http://technomancy.us
Phil Hagelberg p...@hagelb.org writes:
My only guess as to what's going wrong here is that deleting the
coreN/data directory is not a good way to reset a core back to its
initial condition. Maybe there's a bit of state somewhere that's making
the slave think that it's already up to date.
Shalin Shekhar Mangar shalinman...@gmail.com writes:
You are right. In Solr/Lucene, a commit exposes updates to searchers. So you
need to call commit on the master for the slave to pick up the changes.
Replicating changes from the master and then not exposing new documents to
searchers does
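To make that concrete, here is a SolrJ sketch of the master-side sequence; the field name and value are placeholders. Nothing becomes visible to searchers, or to slaves polling for replication, until commit runs on the master:

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.common.SolrInputDocument;

public class MasterCommit {
    public static void index(SolrServer master) throws Exception {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");  // placeholder document
        master.add(doc);              // buffered, not yet searchable
        // commit() opens a new searcher on the master; only now do
        // searchers, and slaves checking the master's index version,
        // see the added document.
        master.commit();
    }
}
```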