Yes, when using auto commit, the phoenix.mutate.maxSize config property has
no effect.
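
For reference, a minimal sketch of what that looks like from the client. The
JDBC URL and the class name are placeholders; the table and sequence names
come from the example quoted below:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AutoCommitUpsert {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL; point this at your own quorum.
            try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:localhost")) {
                // With auto commit on, Phoenix can run the UPSERT ... SELECT
                // entirely on the server side, so mutations never accumulate
                // in the client-side MutationState and phoenix.mutate.maxSize
                // is never checked.
                conn.setAutoCommit(true);
                try (Statement stmt = conn.createStatement()) {
                    stmt.executeUpdate(
                        "UPSERT INTO loadtest.testing (id, firstname, lastname) "
                        + "SELECT NEXT VALUE FOR loadtest.testing_id_seq, "
                        + "firstname, lastname FROM loadtest.testing");
                }
            }
        }
    }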

On Tuesday, September 15, 2015, James Heather <[email protected]>
wrote:

> I can confirm that setting auto-commit to true avoided the problem.
>
> That's for this particular query, though. I don't know whether it would
> work if the select query hit a different table. Probably that would mean it
> would execute client side, and the error would come back.
> On 15 Sep 2015 16:12, "Sumit Nigam" <[email protected]> wrote:
>
>> Hi James,
>>
>> Is it right to assume that with auto-commit set to true, the mutate
>> maxSize exceeded error would not occur? That should be because the server
>> side now commits automatically when the batch size is reached/buffered.
>>
>> Thanks,
>> Sumit
>>
>> ------------------------------
>> *From:* James Taylor <[email protected]>
>> *To:* "[email protected]" <[email protected]>
>> *Sent:* Tuesday, September 15, 2015 7:37 PM
>> *Subject:* Re: simple commands that mutate a large number of rows
>>
>> That config setting (phoenix.mutate.maxSize) is just a safety valve to
>> prevent out of memory errors and may be set to whatever you like. However,
>> if you're going to just turn around and do a commit after running your
>> upsert statement, performance will improve if you turn on auto commit
>> instead (connection.setAutoCommit(true)), as that'll prevent double
>> buffering and enable execution completely on the server side when possible.
>>
>> Thanks,
>> James
>>
>> On Tuesday, September 15, 2015, James Heather <[email protected]> wrote:
>>
>>
>> I found today that I can't execute this:
>>
>> UPSERT INTO loadtest.testing (id, firstname, lastname) SELECT NEXT VALUE FOR 
>> loadtest.testing_id_seq, firstname, lastname FROM loadtest.testing
>>
>>
>> when the table has more than 500,000 rows in it ("MutationState size of
>> 512000 is bigger than max allowed size of 500000").
>>
>> Is there a reason for the restriction, or a way of lifting it so that
>> things get processed in batches?
>>
>> Does this also mean that if I had a large table, and I wanted to (say)
>> null out one of the columns, I wouldn't be able to do it in one hit because
>> I'd be changing too many rows?
>>
>> James
>>
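
As an aside on the safety valve mentioned above: if you do need client-side
buffering (auto commit off), the limit can be raised per connection instead.
A minimal sketch, assuming the standard Phoenix thick driver; the JDBC URL,
the class name, and the new limit of 1000000 are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class RaiseMutateMaxSize {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Raise the client-side safety valve above the ~512,000
            // mutations the UPSERT ... SELECT in this thread produced.
            props.setProperty("phoenix.mutate.maxSize", "1000000");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:localhost", props)) {
                // With auto commit off, mutations buffer in the client-side
                // MutationState until commit(), so this limit still applies.
                conn.setAutoCommit(false);
                try (Statement stmt = conn.createStatement()) {
                    stmt.executeUpdate(
                        "UPSERT INTO loadtest.testing (id, firstname, lastname) "
                        + "SELECT NEXT VALUE FOR loadtest.testing_id_seq, "
                        + "firstname, lastname FROM loadtest.testing");
                }
                conn.commit();
            }
        }
    }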
