Hi Emmanuel,

Thanks for your answer.
On 12/27/06, Emmanuel Cecchet <[EMAIL PROTECTED]> wrote:
> The problem is that when the application using Sequoia starts a
> transaction, sometimes, the connection is reused before the
> transaction is closed and we also have the case when the begin is
> executed on one transaction and the commit on another one. We often
> have the following case:
> [pid:523] : BEGIN; first query of the transaction;
> [pid:523] : other queries which have nothing to do in this transaction
> [pid:534] : second query of the transaction;
> [pid:534] : COMMIT;

This seems to be a bug in the application; I don't see how we can fix that. If the application wildly shares a connection between multiple clients, we cannot do anything about it. The only thing that Sequoia guarantees is that two concurrent calls on the same connection will be serialized (basically, all API calls are synchronized in the driver).
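To make the failure mode concrete, here is a minimal toy model (not Sequoia or application code; all names are illustrative) of a pool that releases a connection back while its transaction is still open, so a second client inherits and commits the first client's transaction, reproducing the pid:523/pid:534 pattern above:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model (NOT Sequoia code) of the reported misuse: a pool hands the
// same physical connection to a second client while the first client's
// transaction is still open.
public class SharedConnectionBug {

    // One physical connection; records every statement it executes.
    static class Conn {
        final int id;
        final List<String> log;
        boolean inTransaction = false;
        Conn(int id, List<String> log) { this.id = id; this.log = log; }
        void execute(String sql, int clientPid) {
            if (sql.equals("BEGIN")) inTransaction = true;
            if (sql.equals("COMMIT")) inTransaction = false;
            log.add("[pid:" + clientPid + "] conn" + id + ": " + sql);
        }
    }

    // Broken pool: release() does not check inTransaction, so an open
    // transaction can be handed to the next borrower.
    static class NaivePool {
        private final Deque<Conn> free = new ArrayDeque<>();
        NaivePool(Conn c) { free.add(c); }
        Conn borrow() { return free.poll(); }
        void release(Conn c) { free.add(c); }
    }

    // Replays the pattern from the log above; returns the statement log.
    static List<String> run() {
        List<String> log = new ArrayList<>();
        NaivePool pool = new NaivePool(new Conn(1, log));

        Conn c = pool.borrow();
        c.execute("BEGIN", 523);
        c.execute("first query of the transaction", 523);
        pool.release(c); // bug: returned to the pool mid-transaction

        Conn c2 = pool.borrow(); // same connection, different client
        c2.execute("second query of the transaction", 534);
        c2.execute("COMMIT", 534); // pid 534 commits pid 523's transaction
        return log;
    }

    public static void main(String[] args) {
        run().forEach(System.out::println);
    }
}
```

From the driver's point of view the interleaved calls are perfectly legal and get serialized; only the application (or its pool) knows the transaction was not supposed to change hands.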
I agree with you that if it's a problem with the application, there's nothing to do on Sequoia's side. That's what I thought at first. I checked with the native driver and I didn't reproduce this problem. That's why I have begun digging into Sequoia's code to check that everything was fine and that there was nothing obvious which might fail in my use case. I'll take a closer look at how the pool is managed in Red Hat WAF.
> I dug a lot into the code for a while but I didn't find any problem
> yet. Well, in fact, while digging for my problem, I found something
> weird in the load balancer code but it doesn't apply to my case
> because I have only one backend for my tests:
> - in RAIDb1_RR.executeRoundRobinRequest, sequoia chooses a backend
> without looking for the transaction id and runs
> executeRequestOnBackend on this particular backend;

All backends are supposed to run all transactions, so you don't care which backend gets picked.
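The point can be sketched as follows (this is not the actual RAIDb1_RR code, just a minimal illustration with made-up names): since every backend replays every transaction, plain round-robin selection that ignores the transaction id is still correct.

```java
import java.util.List;

// Sketch (NOT the actual RAIDb1_RR code) of round-robin backend
// selection that ignores the transaction id: because every backend
// replays every transaction, any backend is a valid target.
public class RoundRobin {
    private final List<String> backends;
    private int index = 0;

    RoundRobin(List<String> backends) { this.backends = backends; }

    // Pick the next backend regardless of which transaction the
    // request belongs to.
    synchronized String pickBackend() {
        String b = backends.get(index);
        index = (index + 1) % backends.size();
        return b;
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(List.of("backend1", "backend2"));
        for (int i = 0; i < 4; i++) System.out.println(rr.pickBackend());
    }
}
```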
You mean that if I open a transaction, every query of this transaction is executed on each backend? It seems logical to me but I haven't found the code which does that yet. Can you point it out to me so that I can understand how it's done? In fact, I had misread the code: it broadcasts the transaction, depending on the transaction isolation level, to all the controllers (and not to the backends).
When you want to execute a request in a transaction, we always pick up the connection that corresponds to that transaction on that backend. Transactions are lazily started (on-demand) so that we don't have to open connections on all backends for a read-only transaction. This is just an optimization. But it is not possible for an autocommit request to use a connection from a transaction.
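The lazy-start idea described above can be sketched like this (illustrative names only, not Sequoia's API): a backend only opens and binds a connection to a transaction the first time that transaction actually sends it a request, and autocommit requests never go through this map.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of lazy transaction start on one backend: a connection is
// bound to a transaction only when that transaction first sends a
// request here. Names are illustrative, not Sequoia's API.
public class LazyTransactionConnections {
    private final Map<Long, String> txConnections = new HashMap<>();
    private int opened = 0;

    // Return the connection for this transaction, opening one on demand;
    // subsequent requests of the same transaction reuse it.
    String getConnectionForTransaction(long transactionId) {
        return txConnections.computeIfAbsent(transactionId,
                id -> "conn-" + (++opened) + "-tx" + id);
    }

    int openedConnections() { return opened; }

    public static void main(String[] args) {
        LazyTransactionConnections backend = new LazyTransactionConnections();
        System.out.println(backend.getConnectionForTransaction(42L)); // opens
        System.out.println(backend.getConnectionForTransaction(42L)); // reuses
    }
}
```

A read-only transaction that never touches this backend thus costs it nothing, which is the optimization being described.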
Yep, I saw that code and I didn't find any problem in it. Every case seems to be perfectly taken into account.
> Does anyone have an idea of what can happen in my case and how I can
> see what is the problem? A solution is also very welcome :).

I am not sure I fully understand the problem, but you can check the request.log file (by enabling request logging in log4j.properties) to be sure that what the controller receives is really what you expect.
I set a lot of loggers to DEBUG but I don't have the information I need to track this problem. I'm currently instrumenting the code with more debug lines, as I can't reproduce the problem with a step-by-step debugger. I'll let you know if I find something.

--
Guillaume
_______________________________________________
Sequoia mailing list
[email protected]
https://forge.continuent.org/mailman/listinfo/sequoia
