Thanks Jeff and Byron!

-----Original Message-----
From: Jeff Garza [mailto:[email protected]]
Sent: July-16-13 10:38 AM
To: cf-talk
Subject: re: Getting lots of DB Errors: Transaction was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Absolutely you should have an index on sessionID. Without that index you are forcing a table scan for that ID so the engine can do the update. When your table was smaller, SQL Server could deal with that a lot more easily. Once you get to 3-4 million rows, performance starts to slow down on a full table scan. I have tables with 500 million rows and I can find a unique record using a clustered index seek in under a second...

If you are only indexing the sessionID column you shouldn't see any major performance hit on inserts, and you should see a dramatic performance increase on your updates.

--Jeff

-------- Original Message --------
> From: "Brook Davies" <[email protected]>
> Sent: Tuesday, July 16, 2013 10:03 AM
> To: "cf-talk" <[email protected]>
> Subject: Getting lots of DB Errors: Transaction was deadlocked on lock
> | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
>
> Hey Guys,
>
> I am getting a lot of these errors today:
>
> "Transaction (Process ID 65) was deadlocked on lock | communication buffer
> resources with another process and has been chosen as the deadlock victim.
> Rerun the transaction."
>
> This is on a system used to track 'views' of a page via an AJAX request. All
> of the errors are related to updating a specific table with a query
> like this (this is not the CFML query..):
>
> update formAnalytics_view
> set INLINESTART={ts '2013-07-16 10:04:21'},
>     INTERACTED=1
> where sessionid = '5BAC1158-D89E-4CB5-A8FB-1DC06D3E82A9'
>
> The table has 4,925,131 rows and growing. It has more writes than reads.
> Is there something I can do to optimize this table to avoid these deadlock
> errors?
>
> I had added a try/catch around the transactions that were causing
> these deadlocks, and then I was re-running the failed transaction in
> the catch block, but it looks like the lock had not yet been released, and
> this just led to more errors.
> What is the best way to resolve this? Would adding a
> thread.sleep() and then re-trying the transaction work, or would
> that just lead to more errors?
>
> Note that 9 times out of 10 the queries should be updating different rows
> of this table. Should I add an index on the sessionID? Or would that make
> writes slower?
>
> Brook

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|
Order the Adobe Coldfusion Anthology now!
http://www.amazon.com/Adobe-Coldfusion-Anthology/dp/1430272155/?tag=houseoffusion
Archive: http://www.houseoffusion.com/groups/cf-talk/message.cfm/messageid:356194
Subscription: http://www.houseoffusion.com/groups/cf-talk/subscribe.cfm
Unsubscribe: http://www.houseoffusion.com/groups/cf-talk/unsubscribe.cfm
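[Editor's note] Jeff's point about the index can be sanity-checked locally. This is a minimal sketch using SQLite as a stand-in for SQL Server (the table and column names come from the thread; the index name `ix_sessionid` is made up for the example). SQLite's `EXPLAIN QUERY PLAN` shows the same shift Jeff describes: a full table scan before the index exists, a direct index search afterwards.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE formAnalytics_view (
        sessionid   TEXT,
        INLINESTART TEXT,
        INTERACTED  INTEGER
    )
""")

query = (
    "EXPLAIN QUERY PLAN UPDATE formAnalytics_view "
    "SET INTERACTED = 1 WHERE sessionid = ?"
)
sid = ("5BAC1158-D89E-4CB5-A8FB-1DC06D3E82A9",)

# Without an index, the WHERE clause forces a full table scan.
plan_before = conn.execute(query, sid).fetchall()
print(plan_before)  # detail column (row[3]) contains 'SCAN'

# Add an index on sessionid only, as Jeff suggests.
conn.execute("CREATE INDEX ix_sessionid ON formAnalytics_view (sessionid)")

# Now the planner seeks straight to the matching row via the index.
plan = conn.execute(query, sid).fetchall()
print(plan)  # detail column (row[3]) mentions 'USING INDEX ix_sessionid'
```

On SQL Server the equivalent would be a nonclustered index on `sessionid`, and the plan change (scan to seek) is what shortens the time locks are held, which is also what reduces the deadlock window.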

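[Editor's note] On Brook's retry question: re-running the statement immediately tends to collide with the same lock, which matches what he saw. Waiting briefly before each retry usually works, because SQL Server releases the victim's locks once its rollback completes; in real drivers the deadlock-victim condition surfaces as SQL Server error number 1205. A sketch of the pattern in Python, where `DeadlockError`, `run_with_retry`, and `flaky_update` are all illustrative names invented for this example:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for a driver exception carrying SQL Server error 1205."""

def run_with_retry(run_update, max_attempts=3):
    """Run a statement, retrying with randomized exponential backoff
    when it is chosen as a deadlock victim."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_update()
        except DeadlockError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            # Back off 50-150 ms, doubling each attempt; the jitter keeps
            # two colliding requests from retrying in lockstep.
            time.sleep(random.uniform(0.05, 0.15) * 2 ** (attempt - 1))

# Demo: fail twice as a deadlock victim, then succeed on the third try.
attempts = {"n": 0}
def flaky_update():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("chosen as the deadlock victim")
    return "updated"

print(run_with_retry(flaky_update))  # updated
```

The same shape works in CFML with cftry/cfcatch around the query and a short sleep before re-running it; the key differences from a bare retry are the delay, the jitter, and a cap on attempts so a persistent conflict fails loudly instead of looping.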
