I was thinking about that (as per your presentation last week) but my
problem is that when I'm building up a series of inserts, if one of them
fails (very likely in this case due to a unique_violation) I have to
roll back the entire transaction. I asked about this in the

On Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi <alessan...@path.com> wrote:
> I was thinking about that (as per your presentation last week) but my
> problem is that when I'm building up a series of inserts, if one of them
> fails (very likely in this case due to a unique_violation) I have to
> roll back the entire transaction.

True. I implemented the SAVEPOINTs solution across the board. We'll see
what kind of difference it makes. If it's fast enough, I may be able to do
without that.

On Tue, Feb 21, 2012 at 3:53 PM, Samuel Gendler <sgend...@ideasculptor.com> wrote:
> On Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi <alessan...@path.com> wrote:
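
To make the SAVEPOINTs solution above concrete, here is a minimal sketch in
Python with psycopg2. The seen_its column names (user_id, item_id) and the
connection string are assumptions for illustration; the thread doesn't show
the schema.

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # assumed connection string
    cur = conn.cursor()

    # The third row duplicates the first, so it raises unique_violation.
    rows = [("u1", "i1"), ("u1", "i2"), ("u1", "i1")]

    for user_id, item_id in rows:
        cur.execute("SAVEPOINT before_insert")
        try:
            cur.execute(
                "INSERT INTO seen_its (user_id, item_id) VALUES (%s, %s)",
                (user_id, item_id),
            )
        except psycopg2.IntegrityError:
            # The failed INSERT poisons the transaction; rolling back to
            # the savepoint discards only that row, not the whole batch.
            cur.execute("ROLLBACK TO SAVEPOINT before_insert")
        else:
            cur.execute("RELEASE SAVEPOINT before_insert")

    conn.commit()  # one commit for all the surviving rows

Releasing each savepoint on success keeps the transaction from accumulating
thousands of live savepoints across a large batch.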

New question regarding this seen_its table: it gets over 100 inserts per
second, and probably many more if you count every time a unique_violation
occurs. This flood of data is constant. The commits take too long (upwards
of 100 ms, ten times slower than they need to be!). What I'm wondering is if it
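
The arithmetic behind that complaint: at roughly 100 ms per commit, one
commit per row tops out near 10 rows per second on a single connection,
while grouping 1,000 rows per commit amortizes the same 100 ms down to
about 0.1 ms per row. A sketch of that grouping, reusing the assumed
seen_its columns; incoming_rows() is a hypothetical stand-in for the real
event stream:

    import psycopg2

    BATCH_SIZE = 1000  # assumed; tune against your latency tolerance

    conn = psycopg2.connect("dbname=app")
    cur = conn.cursor()

    def incoming_rows():
        # Stand-in for the real event stream; yields (user_id, item_id).
        yield from [("u1", "i1"), ("u2", "i1"), ("u1", "i2")]

    def flush(batch):
        # One transaction, one commit for the whole group of rows.
        # (psycopg2.extras.execute_values is a faster variant of this.)
        cur.executemany(
            "INSERT INTO seen_its (user_id, item_id) VALUES (%s, %s)",
            batch,
        )
        conn.commit()

    batch = []
    for row in incoming_rows():
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            flush(batch)
            batch = []
    if batch:
        flush(batch)

A single duplicate inside the group would still abort the whole batch, so
this only helps combined with the SAVEPOINT pattern above or with
de-duplication before the flush. If losing the last few seconds of seen_its
rows on a crash is acceptable, setting synchronous_commit = off would also
cut the per-commit wait without risking corruption.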
On 2/20/12 2:06 PM, Alessandro Gagliardi wrote:
> [...] But first I just want to know if people think that this might be a
> viable solution or if I'm barking up the wrong tree.

Batching is usually helpful for inserts, especially when a unique key on a
very large table is involved.
I suggest also
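
One common shape for that kind of batching against a large unique-keyed
table (a sketch under the same assumed schema, not necessarily what anyone
in the thread ran): load the raw rows into an unconstrained staging table,
then merge while filtering out duplicates, so no unique_violation is ever
raised.

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    cur = conn.cursor()

    batch = [("u1", "i1"), ("u2", "i1"), ("u1", "i1")]  # example rows

    # The staging table copies seen_its's columns but none of its indexes
    # or constraints, so the bulk insert cannot fail on duplicates.
    cur.execute(
        "CREATE TEMP TABLE seen_its_stage (LIKE seen_its) ON COMMIT DROP"
    )
    cur.executemany(
        "INSERT INTO seen_its_stage (user_id, item_id) VALUES (%s, %s)",
        batch,
    )
    cur.execute(
        """
        INSERT INTO seen_its (user_id, item_id)
        SELECT DISTINCT s.user_id, s.item_id
        FROM seen_its_stage s
        WHERE NOT EXISTS (
            SELECT 1 FROM seen_its t
            WHERE t.user_id = s.user_id
              AND t.item_id = s.item_id
        )
        """
    )
    conn.commit()

The NOT EXISTS filter is racy under concurrent writers (two batches can
both pass the check before either commits), so a unique_violation can still
slip through; the SAVEPOINT fallback covers that case.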