Hi Ricardo!

Thanks for your answer.

Yes, I did open an issue in Jira a long time ago 
(https://nhibernate.jira.com/browse/NH-3771), but I didn't get any 
feedback.

Regarding your questions:

- Does it work in all/most supported database servers?
I have only tried it on SQL Server and Oracle (the ones we are working 
with at the moment), and it worked well on both.

- Does it break anything?
It doesn't seem to, but I'm not 100% sure. We ran some tests in our 
system and everything worked well.

- Is the solution generic enough, or does it only work under certain 
scenarios?
Yes, the solution is generic across databases.

This is a very important feature. Say we have 500 entities to update and 
a batch size of 100: today NHibernate goes to the database 500 times, but 
with this change it would go only 5 times. The performance difference is 
dramatic, especially in long-running processes.
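Just to make the arithmetic concrete, here is a minimal sketch (the helper name is mine; the numbers are the ones from the example above):

```python
import math

def roundtrips(entity_count, batch_size):
    """Database roundtrips needed to flush `entity_count` UPDATE statements."""
    return math.ceil(entity_count / batch_size)

# Today, versioned updates are not batched: one roundtrip per UPDATE.
print(roundtrips(500, 1))    # 500
# With a batch size of 100, the same work takes only 5 roundtrips.
print(roundtrips(500, 100))  # 5
```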

Going deeper into the problem:

- when you send a batch update with, say, N statements, the database 
generally gives you back an array of N int elements, where each position 
indicates the number of rows updated by the corresponding statement.

- in ADO.NET, only the total sum of the N elements is returned.

- when you use version-based optimistic locking, every UPDATE gets an 
extra condition on the version column (VERSION = [the version that was 
read]). If the UPDATE finds a row, nobody changed it in the meantime; if 
it finds no row, someone changed it and the version number changed too. 

- so, if any of the UPDATEs didn't update a row, you should roll back 
with an optimistic lock exception.

- in theory, since NHibernate would only see the total sum of all the 
rows updated, and not the individual count for each statement, one 
statement could update no rows while another updates 2 rows, and the 
total would still look correct... 

- But... batched updates are only used to update entities, and in that 
case, since NH puts the ID in the WHERE clause, no statement can update 
more than one row: each one updates exactly 1 row or none, so the total 
count cannot be misleading.

- In brief... as long as batched updates are never used for bulk updates 
(and I think they are not), but only to update entities by ID, it is 
sufficient to check the total number of rows updated to verify the 
optimistic lock.
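The whole argument can be sketched as a small simulation (the names here are hypothetical, not NHibernate code; NHibernate's real exception for this case is StaleObjectStateException): because each versioned UPDATE has the ID in its WHERE clause, every per-statement count is 0 or 1, so the batch total equals the statement count exactly when every UPDATE found its row.

```python
class StaleObjectStateError(Exception):
    """Raised when the batch total shows at least one UPDATE missed its row."""

def check_batch(per_statement_counts):
    # Each statement has the shape:
    #   UPDATE T SET ..., VERSION = VERSION + 1 WHERE ID = ? AND VERSION = ?
    # so every individual count can only be 0 or 1.
    assert all(c in (0, 1) for c in per_statement_counts)
    total = sum(per_statement_counts)     # all that ADO.NET reports back
    expected = len(per_statement_counts)  # one row expected per statement
    if total != expected:
        raise StaleObjectStateError(
            f"expected {expected} rows updated, got {total}")
    return total

check_batch([1, 1, 1])      # every entity found and updated: OK
try:
    check_batch([1, 0, 1])  # one stale entity: total 2 != 3
except StaleObjectStateError as e:
    print(e)
```

If batched updates were ever used for statements that could touch more than one row, this check would no longer be sound, which is exactly the caveat above.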


Sorry for the long explanation and my English.

Best Regards!

Carlos


On Tuesday, July 14, 2015 at 9:27:22 AM UTC-3, Ricardo Peres wrote:
>
> Hi, Tikuna!
>
> Why don't you create a Jira issue of type improvement and submit a pull 
> request so that the NH team can evaluate it?
> You should consider the following:
>
> - Does it work in all/most supported database servers?
> - Does it break anything?
> - Is the solution generic enough, or does it only work under certain 
> scenarios?
>
> Anyway, it's always good to have people interested in moving NH forward! 
> ;-)
>
> RP
>
>
> On Tuesday, July 14, 2015 at 12:26:14 PM UTC+1, tikuna wrote:
>>
>> Hello everyone!
>>
>> Did any of you have this problem before?
>>
>> Does any of you have any concerns about why it would not be a good idea 
>> to make a change to fix this?
>>
>> We have a patch for this and will be happy to share it. It's worth 
>> taking 5 minutes to check it out, since it gives a big bump in performance.
>>
>> Thanks!
>>
>> Best Regard!
>>
>> On Monday, March 30, 2015 at 2:34:32 PM UTC-3, tikuna wrote:
>>>
>>> Hi,
>>>
>>> We are using NHibernate, and we have some processing that modifies a big 
>>> chunk of data; we are using version control for optimistic locking.
>>>
>>> In the process, inserts are done using batch processing with very good 
>>> performance, but each update does its own database roundtrip (if I take 
>>> version-based optimistic locking off the entity, the updates are executed 
>>> in batches).
>>>
>>> I found this in the documentation:
>>>
>>>    - optimistic concurrency checking may be impaired since ADO.NET 2.0 
>>>      does not return the number of rows affected by each statement in 
>>>      the batch, only the total number of rows affected by the batch.
>>>
>>> Browsing the source files, I found that AbstractEntityPersister defines 
>>> the IsBatchable property based on the optimistic lock strategy.
>>>
>>> Since all the updates done in batches have the ID (and the VERSION) in 
>>> their conditions, isn't the total number of rows affected by the batch 
>>> enough to know whether all the updates were successful? Is it really 
>>> necessary to have the rows affected by each statement of the batch?
>>>
>>> Any improvement that makes it possible to do the updates in batches 
>>> would really boost performance.
>>>
>>> Thank you very much !
>>>
>>> Tikuna
>>>
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"nhusers" group.
