Subject: Re: Retrieving large num of docs
Hi Otis,
I think my experiments are not conclusive about a reduction in search time. I
was playing around with various configurations to reduce the time to
retrieve documents from Solr. I am sure that changing the two multi-valued
text fields from stored to unstored is what reduced the retrieval time.
From: raghuveer.kanche...@aplopio.com
To: solr-user@lucene.apache.org
Sent: Thu, December 3, 2009 8:43:16 AM
Subject: Re: Retrieving large num of docs
Hi Hoss,
I was experimenting with various queries to solve this problem and in one
such test I remember that requesting only the ID did not change the
retrieval time. To be sure, I tested it again using the curl command today
and it confirms my previous observation.
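The id-only request described above can be sketched as follows; the host, core path, and query values are assumptions for illustration, not taken from the thread:

```python
import urllib.parse

# Hypothetical Solr 1.4 endpoint; adjust host and path for your deployment.
base = "http://localhost:8983/solr/select"

# fl=id asks Solr to return only the id field for each hit -- the test
# used here to check whether stored-field loading dominates retrieval time.
params = {"q": "*:*", "fl": "id", "rows": 300, "wt": "json"}
url = base + "?" + urllib.parse.urlencode(params)
print(url)
```

Fetching this URL (e.g. with curl) should take about as long as a full-field request if stored-field loading is not the bottleneck.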
Also, enableLazyFieldLoading
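For reference, lazy field loading is controlled in solrconfig.xml; when enabled, Solr loads only the stored fields actually requested and defers the rest:

```xml
<!-- solrconfig.xml -->
<query>
  <!-- Load only the stored fields requested via fl; other stored
       fields are read lazily if they are accessed later. -->
  <enableLazyFieldLoading>true</enableLazyFieldLoading>
</query>
```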
Hi Hoss/Andrew,
I think I solved the problem of retrieving 300 docs per request for now. The
problem was that I was storing 2 moderately large multi-valued text fields
even though I was not retrieving them at search time. I reindexed all my
data without storing these fields. Now the response time has improved.
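The fix above corresponds to a schema.xml change along these lines; the field name and type here are placeholders, not taken from the thread:

```xml
<!-- schema.xml: keep the large multi-valued fields searchable (indexed)
     but unstored, so retrieving documents no longer reads their values. -->
<field name="big_text_field" type="text" indexed="true"
       stored="false" multiValued="true"/>
```

Note that changing stored/indexed attributes requires reindexing, as described above.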
: I am using Solr 1.4 for searching through half a million documents. The
: problem is, I want to retrieve nearly 200 documents for each search query.
: The query time in Solr logs is showing 0.02 seconds and I am fairly happy
: with that. However, Solr is taking a long time (4 to 5 secs) to return the documents.
Thanks Hoss,
In my previous mail, I was measuring the wall-clock time between sending an
(HTTP) request and receiving the response. This was being run on a
(different) client machine.
Like you suggested, I tried to time the response on the server itself as
follows:
$ /usr/bin/time -p curl -sS
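A similar measurement can be sketched in Python; the endpoint and query parameters below are assumptions for illustration, and the request is guarded so the sketch runs even without a local Solr instance:

```python
import time
import urllib.parse
import urllib.request

# Hypothetical endpoint; replace with your Solr host and query.
url = "http://localhost:8983/solr/select?" + urllib.parse.urlencode(
    {"q": "*:*", "fl": "id", "rows": 300, "wt": "json"}
)

# Wall-clock timing around the full request/response cycle,
# analogous to /usr/bin/time -p curl -sS <url>.
start = time.monotonic()
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print("bytes received:", len(resp.read()))
except OSError as exc:  # no Solr instance reachable in this sketch
    print("request failed:", exc)
print("wall-clock seconds: %.3f" % (time.monotonic() - start))
```

Comparing this number against the QTime in the Solr logs separates query time from response transfer and stored-field loading time.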
Hi Andrew,
I applied the patch you suggested. I am not finding any significant changes
in the response times.
I am wondering whether I missed some important configuration setting.
Here is what I did:
1. Wrote a small program using solrj to use EmbeddedSolrServer (most of
the code is from the
Hi Raghu,
Let me describe our use case in more detail. Probably that will clarify
things.
The usual use case for Lucene/Solr is retrieving a small portion of the
result set (10-20 documents). In our case we need to read the whole result
set, and this creates a huge load on the Lucene index, meaning a
Hi Andrew,
We are running Solr using its HTTP interface from Python. From the resources
I could find, EmbeddedSolrServer is possible only if I am using Solr from a
Java program. It will be useful to understand if a significant part of the
performance increase is due to bypassing HTTP before going
Hi,
We obtain ALL documents for every query; the index size is about 50k. We use
a number of stored fields. Often the result set size is several thousand
docs.
We performed the following things to make it faster:
1. Use EmbeddedSolrServer
2. Patch Solr to avoid unnecessary marshalling while