Sending chunked requests is more work, but as Mike points out,
it is good design. A design without limits on request size
can fail by overloading the server, hitting security triggers
in firewall software, etc. That is a fragile design.

I doubt that there is any performance difference between batched
deletes and one huge delete request. Most of the time is spent in
the commit.
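
Something like this sketch shows the shape of it (Python; the URL,
batch size, and function names are illustrative, and it assumes an
XML /update handler that accepts multiple <id> elements inside one
<delete> -- none of that is taken from this thread):

    import urllib.request
    from xml.sax.saxutils import escape

    SOLR_UPDATE = "http://localhost:8983/solr/update"  # illustrative URL
    BATCH_SIZE = 500  # keep each request bounded

    def post(xml):
        # POST one XML update message to Solr
        req = urllib.request.Request(
            SOLR_UPDATE,
            data=xml.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8"},
        )
        urllib.request.urlopen(req).read()

    def delete_ids(ids):
        # one bounded delete request per batch of IDs
        for start in range(0, len(ids), BATCH_SIZE):
            batch = ids[start:start + BATCH_SIZE]
            body = "".join("<id>%s</id>" % escape(i) for i in batch)
            post("<delete>%s</delete>" % body)
        # single commit after the last batch; the commit is where
        # most of the time goes, so batching costs almost nothing
        post("<commit/>")

With 10,000 IDs that is 20 bounded delete requests plus one commit,
instead of 10,000 requests or one unbounded one.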

wunder

On 7/4/08 3:53 PM, "Jonathan Ariel" <[EMAIL PROTECTED]> wrote:

> That is reasonable, but it seems like too much work when I already know
> in advance all the IDs that I want to delete.
> With N IDs known up front, it feels unnatural to execute N requests
> instead of just one or a few. If I can avoid unnecessary requests by
> grouping them, I would do it, especially since nothing else will be
> executing deletes, only this process.
> 
> Do you think that Solr performs better, or the same, with N delete
> requests (where N is more than 1000) than with 1, 2, or 10?
> 
> 
> 
> On Fri, Jul 4, 2008 at 6:05 PM, Mike Klaas <[EMAIL PROTECTED]> wrote:
> 
>> Why?  It is not reasonable in a distributed system to perform requests of
>> unbounded size (not to say that it won't work).  If the concern is
>> throughput, large batches should be sufficient.
>> 
>> -Mike
>> 
>> 
>> On 4-Jul-08, at 9:06 AM, Jonathan Ariel wrote:
>> 
>>> Yes, I just wanted to avoid N requests and do just 2.
>>> 
>>> On Fri, Jul 4, 2008 at 12:48 PM, Walter Underwood <[EMAIL PROTECTED]> wrote:
>>> 
>>>> Send multiple deletes, with a commit after the last one. --wunder
>>>> 
>>>> On 7/4/08 8:40 AM, "Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
>>>> 
>>>>> Yeah, I know. The problem with a query is that there is a maximum
>>>>> number of query terms that I can add, which is reasonable. The
>>>>> problem is that I have thousands of IDs.
