So, if I set this property to a very small value, it will not be honored when 
auto-commit is set to true? In that case, does the engine figure out its own 
buffer size, or can we influence it through some other property?
From: James Taylor <[email protected]>
To: "[email protected]" <[email protected]>
Cc: Sumit Nigam <[email protected]>
Sent: Tuesday, September 15, 2015 9:18 PM
Subject: Re: simple commands that mutate a large number of rows
Yes, when using auto commit, the phoenix.mutate.maxSize config property has no 
effect.

On Tuesday, September 15, 2015, James Heather <[email protected]> 
wrote:

I can confirm that setting auto-commit to true avoided the problem.

That's for this particular query, though. I don't know whether it would still 
work if the select query hit a different table. Probably that would mean it 
would execute client side, and the error would come back.

On 15 Sep 2015 16:12, "Sumit Nigam" <[email protected]> wrote:

Hi James,

Is it right to assume that with auto-commit set to true, the "mutate maxSize 
exceeded" error would not occur? This should be because the server side now 
does the commit automatically when the batch size is reached/buffered.

Thanks,
Sumit
From: James Taylor <[email protected]>
To: "[email protected]" <[email protected]>
Sent: Tuesday, September 15, 2015 7:37 PM
Subject: Re: simple commands that mutate a large number of rows
That config setting (phoenix.mutate.maxSize) is just a safety valve to prevent 
out-of-memory errors and may be set to whatever you like. However, if you're 
going to just turn around and do a commit after running your upsert statement, 
performance will improve if you turn on auto commit instead 
(connection.setAutoCommit(true)), as that'll prevent double buffering and 
enable execution completely on the server side when possible.

Thanks,
James
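
For what it's worth, here is a minimal sketch of the auto-commit route James 
describes, using the upsert from the original mail. The JDBC URL is a 
placeholder, and it assumes the Phoenix driver is on the classpath:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  public class AutoCommitUpsert {
      public static void main(String[] args) throws Exception {
          // "jdbc:phoenix:localhost" is a placeholder URL; adjust for your cluster.
          try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
              // With auto-commit on, Phoenix commits batches as it goes and,
              // when the statement allows it, runs the UPSERT SELECT entirely
              // on the server side, so the client never accumulates a
              // MutationState large enough to hit phoenix.mutate.maxSize.
              conn.setAutoCommit(true);
              try (Statement stmt = conn.createStatement()) {
                  stmt.executeUpdate(
                      "UPSERT INTO loadtest.testing (id, firstname, lastname) "
                      + "SELECT NEXT VALUE FOR loadtest.testing_id_seq, firstname, lastname "
                      + "FROM loadtest.testing");
              }
              // No explicit conn.commit() needed; each batch is already committed.
          }
      }
  }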

On Tuesday, September 15, 2015, James Heather <[email protected]> 
wrote:

I found today that I can't execute this:

  UPSERT INTO loadtest.testing (id, firstname, lastname)
  SELECT NEXT VALUE FOR loadtest.testing_id_seq, firstname, lastname
  FROM loadtest.testing

when the table has more than 500,000 rows in it ("MutationState size of 512000 
is bigger than max allowed size of 500000").
 
Is there a reason for the restriction, or a way of lifting it so that things 
get processed in batches?
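
For reference, the limit in that error is the client-side phoenix.mutate.maxSize 
property mentioned above; a minimal sketch of raising it through connection 
properties follows (the JDBC URL is a placeholder, and a larger cap simply means 
the client buffers more rows in memory before committing):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.util.Properties;

  public class RaiseMutateMaxSize {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          // phoenix.mutate.maxSize caps how many rows the client buffers in
          // MutationState before a commit; the default of 500000 is what
          // triggers the error above. Raising it trades memory for batch size.
          props.setProperty("phoenix.mutate.maxSize", "1000000");
          // "jdbc:phoenix:localhost" is a placeholder URL; adjust for your cluster.
          try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
              conn.createStatement().executeUpdate(
                  "UPSERT INTO loadtest.testing (id, firstname, lastname) "
                  + "SELECT NEXT VALUE FOR loadtest.testing_id_seq, firstname, lastname "
                  + "FROM loadtest.testing");
              // Auto-commit is off by default, so commit the buffered mutations.
              conn.commit();
          }
      }
  }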
 
Does this also mean that if I had a large table, and I wanted to (say) null 
out one of the columns, I wouldn't be able to do it in one hit because I'd be 
changing too many rows?

James