Hi Peter.
I checked my example/solr/conf/solrconfig.xml (Solr 1.4) and I don't see

<HashDocSet maxSize="3000" loadFactor="0.75"/>

in it, but I do see it in the solrconfig.xml wiki on the Solr website.

So should I add it, or is the default (without it) OK?
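If I do add it: going by your "0.005 of all documents" suggestion, I guess for a
1M-document core it would be something like

<HashDocSet maxSize="5000" loadFactor="0.75"/>

(5000 = 0.005 * 1,000,000), but please correct me if I misread that.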


Thanks

On 2010-07-15 17:19:57, "Peter Karich" <peat...@yahoo.de> wrote:
>What do your queries look like? Do you use faceting, highlighting, ... ?
>Did you try to customize the cache?
>Setting the HashDocSet maxSize to about 0.005 of the total number of documents
>improved our search speed a lot.
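>For the caches, that means the entries in solrconfig.xml, e.g. roughly:
>
>  <filterCache class="solr.FastLRUCache" size="16384" initialSize="4096" autowarmCount="1024"/>
>  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
>  <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
>
>(the sizes above are only an illustration, you have to tune them for your data).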
>Did you optimize the index?
>
>500ms seems slow for an 'average' search. I am not an expert, but without
>highlighting it should take less than 100ms, or at least less than 200ms.
>
>Regards,
>Peter.
>
>
>> Hi.
>>    Thanks for replying.
>>    My documents have many different fields (about 30 fields across 10 different 
>> types of documents, but that is not the point), and I have to search over 
>> several fields. 
>>    I originally put all 76M documents into several Lucene indexes and used the 
>> default Lucene.Net ParaSearch to search across these indexes. That was slow, 
>> more than 20s.
>>    Then someone suggested I merge all our indexes into one huge index; he 
>> thought Lucene could handle 76M documents in a single index easily. So I 
>> merged all the documents into one huge index (which took me 3 days). At that 
>> point the index folder was about 15G (I don't store field values in the index, 
>> I only index them). The search was still very slow, more than 20s, and seemed 
>> even slower than using several indexes. 
>>    Then I came to Solr. The reason I put 1M documents into each core is that I 
>> found that when a core has 1M documents, the search is fast, ranging from 
>> 0-500ms, which is acceptable. I don't know how many documents per core is 
>> appropriate. 
>>    The problem is that even if I put 2M documents into each core, I would still 
>> have 38 cores at the moment, and when our document count doubles in the future 
>> the same issue will come up again. So I don't think storing 1M in each core is 
>> the issue. 
>>    The issue is that I put too many cores on one server, and I don't have 
>> extra servers to spread the Solr cores across. So we have to improve Solr 
>> search speed in some other way. 
>>    Any suggestions?
>>
>> Regards.
>> Scott
>>
>>
>>
>>
>>
>> On 2010-07-15 15:24:08, "Fornoville, Tom" <tom.fornovi...@truvo.com> wrote:
>>   
>>> Is there any reason why you have to limit each instance to only 1M
>>> documents?
>>> If you could put more documents in the same core I think it would
>>> dramatically improve your response times.
>>>
>>> -----Original Message-----
>>> From: marship [mailto:mars...@126.com] 
>>> Sent: Thursday, 15 July 2010 6:23
>>> To: solr-user
>>> Subject: How to speed up solr search speed
>>>
>>> Hi all,
>>>    I have a problem with distributed Solr search. The issue is this: 
>>>    I have 76M documents spread over 76 Solr instances, and each instance
>>> handles 1M documents. 
>>>   Previously I put all 76 instances on a single server, and when I tested it
>>> I found that each search takes quite a while, mostly 10-20s, to finish. 
>>>   Now I have split these instances across 2 servers, each with 38 instances,
>>> and the search speed is about 5-10s each time. 
>>> 10s is unacceptable for me. Based on my observation, the slowness is caused by
>>> disk I/O, since all these instances are on the same server: when I test each
>>> single instance on its own it is very fast, always ~400ms, but when I use
>>> distributed search I find that some instances report needing 7000+ms. 
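>>> (For the distributed search I send the query to one core with a shards
>>> parameter listing all the cores, roughly like
>>> /solr/core00/select?q=...&shards=host1:8983/solr/core00,host1:8983/solr/core01,...
>>> with 76 entries; the host and core names here are just placeholders.)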
>>>   Our server has plenty of free memory. I am wondering whether there is a
>>> way to make Solr use more memory instead of reading the index from the hard
>>> disk, e.g. load all the indexes into memory to speed things up?
>>>
>>> Any help is welcome.
>>> Thanks.
>>> Regards,
>>> Scott
>>>     
>
>
>-- 
>http://karussell.wordpress.com/
>
