Thanks.
Regards.
Scott
On 2010-07-17 23:38:58, "Shawn Heisey" wrote:
> On 7/17/2010 3:28 AM, marship wrote:
>> Hi Peter and all,
>> I merged my indexes today. Now each index stores 10M documents, and I only
>> have 10 Solr cores.
>> And I used
>>
>>
have defined a field
>'doctype')
>since the number of values of documenttype would be pretty low and would be
>used independently of other queries, this would be an excellent candidate for
>the fq parameter.
>
>http://wiki.apache.org/solr/CommonQueryParameters#fq
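For illustration, a minimal sketch of a request with the filter moved into fq; the host, port, and doctype value here are assumptions, not from the thread:

```python
from urllib.parse import urlencode

# Sketch: putting the low-cardinality doctype field into fq, so Solr caches
# the filter independently of the scored query (host/port/value are assumed).
params = {
    "q": "design",            # main, relevance-scored query
    "fq": "doctype:book",     # cached filter; does not affect scoring
    "start": 0,
    "rows": 10,
}
url = "http://localhost:7550/solr/select/?" + urlencode(params)
print(url)
```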
>> configuring the container to use a maximum of 1024 MB of RAM
>> instead of the standard, which is much lower (I'm not sure what exactly, but
>> it could well be 64 MB for non-server mode, aligning with what you're seeing)
>>
>> Geert-Jan
>>
>> 2010/7/16 marship
Hi. I just noticed that when you add documents to Solr, if you turn the
auto-commit flag off and then commit and optimize after posting is done, the
speed is super fast.
I was using 31 clients to post to 31 Solr cores at the same time. I think if
you use 2 clients to post to the same core, the question will be "how fa
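The pattern described above (auto-commit off, one commit and one optimize at the end) can be sketched as below; the base URL is an assumption, and auto-commit itself is disabled in solrconfig.xml, not per request:

```python
from urllib.parse import urlencode

def update_url(base="http://localhost:7550/solr/update", **params):
    """Build a Solr update-handler URL with optional query parameters."""
    return base + "?" + urlencode(params) if params else base

# 1. Each client posts its batches to the plain update handler, no commit.
post_url = update_url()
# 2. When all posting clients are done, issue a single commit...
commit_url = update_url(commit="true")
# 3. ...and one optimize to merge segments for faster searches.
optimize_url = update_url(optimize="true")
print(post_url, commit_url, optimize_url)
```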
Hi Tom Burton-West,
Sorry, it looks like my email ISP filtered out your replies. I checked the web
version of the mailing list and saw your reply.
My query string is always simple, like "design", "principle of design", "tom".
E.g.:
URL:
http://localhost:7550/solr/select/?q=design&version=2.2&start=0&rows=1
>>>> The problem is that even if I put 2M documents into each core, I would
>>>> still have 36 cores at the moment. And when our document count doubles in
>>>> the future, the same issue will arise again. So I don't think the
>>>> 1M-per-core limit is the real issue.
Hi all,
Is there any way to get timeout support in distributed search? I searched and
found https://issues.apache.org/jira/browse/SOLR-502, but it looks like it is
not in the main release of Solr 1.4.
I have 70 cores. When I search, some respond in 0-700 ms, some return in
about 2 s, and some need a very long time, more
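If the timeAllowed parameter from SOLR-502 were available, a request with a time budget might look like the following sketch (host, shard list, and the 2-second budget are all assumptions):

```python
from urllib.parse import urlencode

# Sketch: capping a search at ~2 s via timeAllowed (milliseconds), as proposed
# in SOLR-502; Solr returns partial results once the time budget is exhausted.
params = {
    "q": "design",
    "shards": "host1:7550/solr,host2:7550/solr",  # hypothetical shard list
    "timeAllowed": 2000,
}
url = "http://localhost:7550/solr/select/?" + urlencode(params)
print(url)
```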
>>
>> Regards.
>> Scott
>>
>>
>>
>>
>>
>> On 2010-07-15 15:24:08, "Fornoville, Tom" wrote:
>>
>>> Is there any reason why you have to limit each instance to only 1M
>>> documents?
>>> If you could put more
y.
>>Any suggestion?
>>
>> Regards.
>> Scott
>>
>>
>>
>>
>>
>> On 2010-07-15 15:24:08, "Fornoville, Tom" wrote:
>>
>>> Is there any reason why you have to limit each instance to only 1M
>>> documents?
, Tom" wrote:
>Is there any reason why you have to limit each instance to only 1M
>documents?
>If you could put more documents in the same core I think it would
>dramatically improve your response times.
>
>-Original Message-
>From: marship [mailto:mars...@126.com]
Hi all,
I have a problem with distributed Solr search. The issue is:
I have 76M documents spread over 76 Solr instances; each instance handles
1M documents.
Previously I put all 76 instances on a single server, and when I tested I
found that each time it runs, it will take several times, most
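For reference, a distributed request over that many cores is driven by the shards parameter; here is a sketch of building it for 76 cores (the host, port numbering, and core names are assumptions):

```python
from urllib.parse import urlencode

# Sketch: enumerating 76 cores into the shards parameter; Solr fans the query
# out to every listed shard and merges the results (naming scheme is assumed).
cores = [f"localhost:{7550 + i}/solr/core{i}" for i in range(76)]
params = {
    "q": "design",
    "shards": ",".join(cores),
}
url = "http://localhost:7550/solr/select/?" + urlencode(params)
print(len(cores), cores[0])
```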
Hi all,
I am using Solr dismax to search over my books in the DB. I indexed them all
using Solr.
The problem I noticed today is:
everything started when I wanted to search for the book
"The Girl Who Kicked the Hornet's Nest",
but nothing is returned. I'm sure I have this book in the DB. So I stripped so
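A common cause of zero hits for a title like this under dismax is stopword handling: if index-time analysis strips words such as "the" and "who" while the query still requires every term, nothing can match. A sketch of such a request follows (the qf field name and mm value are assumptions):

```python
from urllib.parse import urlencode

# Sketch: a dismax query for the title above (qf field and mm are assumed).
# If index-time analysis removes stopwords ("the", "who") while mm=100%
# still requires every query term, the query can return zero results.
params = {
    "q": "The Girl Who Kicked the Hornet's Nest",
    "defType": "dismax",
    "qf": "title",
    "mm": "100%",
}
url = "http://localhost:7550/solr/select/?" + urlencode(params)
print(url)
```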