Hi Jim,
I am seeing failed writes to a PostgreSQL database backend remain in the write queue on the controller. The duplicate key error message for the corresponding write only appears on one of the 3 controllers, but the two sister controllers still have the same request id 10577 in the scheduler queue, along with any other write requests that arrived after request 10577. Is this normal behavior? How can I clear a failed write from the controller's write queue? My three controllers basically just start queueing any additional writes after the duplicate key write occurs. Any assistance with resolving this issue would be greatly appreciated.
From the log you attached, I understand that the query was issued on the first controller (where it failed) but is still pending on the 2 other controllers. This is why it still shows as 'pending': it has to wait for the results from the other controllers to decide whether that was a real failure (all controllers fail) or whether only the local controller failed (in which case its local backend is disabled and we continue with the other controllers).
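The decision logic described above can be sketched roughly like this. This is only an illustration of the idea, not Sequoia's actual API; the class, method, and enum names are all made up:

```java
import java.util.List;

// Illustrative sketch: a write stays 'pending' until every controller has
// reported, then the outcome is decided. All names here are hypothetical.
public class WriteOutcome {
    enum Decision { COMMIT, REAL_FAILURE, DISABLE_LOCAL_BACKEND }

    // One boolean per controller: true = write succeeded there.
    static Decision decide(List<Boolean> controllerResults) {
        long failures = controllerResults.stream().filter(ok -> !ok).count();
        if (failures == 0) {
            return Decision.COMMIT; // everyone succeeded
        }
        if (failures == controllerResults.size()) {
            return Decision.REAL_FAILURE; // all controllers failed: genuine error
        }
        // Only some controllers failed: their local backends are suspect
        // and get disabled while the others continue.
        return Decision.DISABLE_LOCAL_BACKEND;
    }
}
```

In your case only 1 of the 3 controllers reported the duplicate key error, so until the other 2 report, the request (and everything behind it) stays queued.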
*2nd controller where no duplicate key error is recorded but the request is queued:

*ANGe(admin) > dump scheduler queues
Active transactions: 7
        Transaction id list: 3800 3802 3803 3804 3805 3806 3807
Pending write requests: 6
        Write request id list: 10586 10593 10591 10581 10587 10577

*3rd controller where no duplicate key error is recorded but the request is queued:
*ANGe(admin) > dump scheduler queues
Active transactions: 8
        Transaction id list: 3703 3800 3802 3803 3804 3805 3806 3807
Pending write requests: 6
        Write request id list: 10586 10593 10591 10581 10587 10577

Any suggestions on how I can recover when this happens?
What puzzles me is the old transaction 3703 that remains open on the 3rd controller. I have no idea where it could come from, since if it were a read-only transaction it would have executed on the first controller (given its id).

Another possible cause is a problem with the group communication layer. Which one are you using?

Something else to investigate is potential query indeterminism. This can happen with multi-table updates or updates with subselects; in such cases, strict table locking might be needed. Was this duplicate key exception something you expected?
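To illustrate how indeterminism can produce a duplicate key on only one controller: a statement along the lines of UPDATE ... SET id = (SELECT max(id) + 1 ...) computes different values if the replicas are not at exactly the same point. A hypothetical sketch (the table contents and names below are made up):

```java
import java.util.*;

// Hypothetical illustration of query indeterminism across replicas.
// nextId() stands in for a subselect like "SELECT max(id) + 1 FROM t":
// if one replica has an extra committed row, the same statement computes
// a different value, and one replica hits a duplicate key while the
// other does not.
public class Indeterminism {
    static int nextId(Collection<Integer> existingIds) {
        return existingIds.stream().max(Integer::compare).orElse(0) + 1;
    }

    public static void main(String[] args) {
        Set<Integer> replicaA = new HashSet<>(List.of(1, 2, 3));
        Set<Integer> replicaB = new HashSet<>(List.of(1, 2, 3, 4)); // one extra committed row

        System.out.println(nextId(replicaA)); // 4 -> already exists on replica B: duplicate key there
        System.out.println(nextId(replicaB)); // 5 -> no conflict
    }
}
```

If the write queries in your application are deterministic (e.g. the key values are computed on the client side), this is not your problem and the group communication is the more likely suspect.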

Thanks for your feedback,
Emmanuel

--
Emmanuel Cecchet
Chief Scientific Officer, Continuent

Blog: http://emanux.blogspot.com/
Open source: http://www.continuent.org
Corporate: http://www.continuent.com
Skype: emmanuel_cecchet
Cell: +33 687 342 685


_______________________________________________
Sequoia mailing list
[email protected]
https://forge.continuent.org/mailman/listinfo/sequoia
