Found the issue. It had nothing to do with any of the three components. The
problem was that step:

> 1. we remove the tuple causing the exception.

didn't work as I expected; instead it removed all the tuples up to the
offending record. I have fixed it and now it works like a charm.

Pushing the fix in a few minutes.

Amoudi, Abdullah.

On Thu, Jan 7, 2016 at 6:16 PM, Mike Carey <[email protected]> wrote:

> The general transaction handling for such an exception w.r.t. locking and
> aborts probably assumes that total bailouts are the answer. Thus, it may
> leave messes that rollbacks are otherwise the answer to. Feeds and
> transactions don't mix super well, it seems.... Watching how duplicate
> keys work for insert-from-query statements may help you debug. We might
> change things to allow those to succeed for all non-duplicate keys -
> which might make more sense for them anyway.
>
> On Jan 7, 2016 5:48 AM, "abdullah alamoudi" <[email protected]> wrote:
>
> > Today, as I was working on fixing the handling of duplicate keys with
> > feeds, everything seemed to work fine. Here is what we do when we
> > encounter a duplicate key exception:
> >
> > 1. we remove the tuple causing the exception.
> > 2. we continue from where we stopped.
> >
> > The problem is that when I try to query the dataset afterwards, to
> > check which records made it into the dataset, I get a deadlock.
> >
> > I have looked at the stack trace (attached) and I think the threads in
> > the file are the relevant ones. Please have a look and let me know if
> > you have a possible cause in mind.
> >
> > The threads are related to:
> > 1. BufferCache.
> > 2. Logging.
> > 3. Locking.
> >
> > Let me know what you think. I can reproduce this bug; it happened on
> > 100% of my test runs.
> >
> > I will let you know when I solve it, but it is taking longer than I
> > thought.
> >
> > Amoudi, Abdullah.
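As a concrete illustration of the skip-on-duplicate logic discussed above, here
is a minimal, self-contained Java sketch. It is not the actual AsterixDB/Hyracks
feed-intake code, which operates on binary frames; Tuple, insertTuple,
insertFrame, and DuplicateKeyException are hypothetical stand-ins used only to
show the difference between removing one tuple and truncating the frame.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Illustrative sketch only; not the actual AsterixDB feed-intake code. */
public class DuplicateKeySkipSketch {

    /** Hypothetical stand-in for a record in an intake frame. */
    record Tuple(int key, String payload) {}

    /** Hypothetical stand-in for the duplicate-key error raised on insert. */
    static class DuplicateKeyException extends Exception {}

    private final Set<Integer> primaryIndex = new HashSet<>();

    /** Hypothetical stand-in for inserting one record into the dataset. */
    void insertTuple(Tuple t) throws DuplicateKeyException {
        if (!primaryIndex.add(t.key())) {
            throw new DuplicateKeyException();
        }
    }

    /**
     * Steps from the thread: on a duplicate key, (1) remove the tuple causing
     * the exception, (2) continue from where we stopped.
     */
    void insertFrame(List<Tuple> frame) {
        int i = 0;
        while (i < frame.size()) {
            try {
                insertTuple(frame.get(i));
                i++;
            } catch (DuplicateKeyException e) {
                // Fixed behavior: drop exactly one tuple, the offender,
                // then resume from the same index.
                frame.remove(i);
                // The buggy version effectively did
                //     frame.subList(0, i + 1).clear();
                // removing all the tuples up to the offending record.
            }
        }
    }

    public static void main(String[] args) {
        DuplicateKeySkipSketch sketch = new DuplicateKeySkipSketch();
        List<Tuple> frame = new ArrayList<>(List.of(
                new Tuple(1, "a"), new Tuple(2, "b"),
                new Tuple(2, "dup"), new Tuple(3, "c")));
        sketch.insertFrame(frame);
        System.out.println(frame); // keys 1, 2, 3 remain; only the dup is gone
    }
}

With the fix, only the offending tuple is dropped and the records with keys 1,
2, and 3 all make it into the dataset; the buggy truncation would also have
silently discarded every tuple ahead of the duplicate.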
