[HACKERS] SSI performance

2011-02-04 Thread Heikki Linnakangas
We know that the predicate locking introduced by the serializable snapshot isolation patch adds a significant amount of overhead when it's used. It was fixed for sequential scans by acquiring a relation-level lock upfront and skipping the finer-grained locking after that, but the general problem for index
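
To make the sequential-scan fix concrete, here is a minimal sketch of the idea (illustrative only, not the actual heapam.c/predicate.c code: the DemoScanState struct and process_tuple() helper are hypothetical, and the real entry points differ in detail). The scan takes a single relation-level predicate lock before it starts, so the per-tuple predicate-lock call can be skipped for every row it returns.

    /*
     * Hypothetical sketch: one coarse SIREAD lock up front, then no
     * per-tuple predicate locking for the rest of the scan.
     */
    typedef struct DemoScanState
    {
        Relation    rel;             /* relation being scanned */
        bool        rel_predlocked;  /* relation-level lock already held? */
    } DemoScanState;

    static void
    demo_scan_start(DemoScanState *scan)
    {
        PredicateLockRelation(scan->rel);   /* covers every tuple we can read */
        scan->rel_predlocked = true;
    }

    static void
    demo_scan_return_tuple(DemoScanState *scan, HeapTuple tup)
    {
        if (!scan->rel_predlocked)
            PredicateLockTuple(scan->rel, tup);     /* fine-grained path */
        /* else: the relation lock already covers this tuple, so skip it */
        process_tuple(tup);
    }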

Re: [HACKERS] SSI performance

2011-02-04 Thread Robert Haas
On Fri, Feb 4, 2011 at 9:29 AM, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: The interesting thing is that CoarserLockCovers() accounts for 20% of the overall CPU time, or 2/3 of the overhead. The logic of PredicateLockAcquire is: 1. Check if we already have a lock on the
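
Schematically, the flow being profiled looks like this (a simplified model, not the real predicate.c code; the have_lock_on_* and create_tuple_lock() helpers are hypothetical stand-ins for lookups in the backend-local predicate lock table):

    static void
    predicate_lock_acquire_model(const PREDICATELOCKTARGETTAG *tupletag)
    {
        /* 1. Do we already hold a lock on this exact tuple? */
        if (have_lock_on_tuple(tupletag))
            return;

        /*
         * 2. and 3. CoarserLockCovers(): do we already hold a lock on the
         * containing page, or on the whole relation?  This walk up the
         * granularity hierarchy is what accounted for ~20% of CPU time in
         * the profile quoted above.
         */
        if (have_lock_on_page(tupletag) || have_lock_on_relation(tupletag))
            return;

        /* Otherwise take a new fine-grained lock on the tuple. */
        create_tuple_lock(tupletag);
    }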

Re: [HACKERS] SSI performance

2011-02-04 Thread Heikki Linnakangas
On 04.02.2011 15:37, Robert Haas wrote: Not sure. How much benefit do we get from upgrading tuple locks to page locks? Should we just upgrade from tuple locks to full-relation locks? Hmm, good question. Page locks are the coarsest level for the b-tree locks, but maybe that would make sense
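
For context, the promotion choice being debated can be sketched roughly as follows (hypothetical code with an invented threshold and helper names, not anything from the patch): once a backend holds more than a handful of tuple locks covered by one page, replace them with a single coarser lock, and the question is whether that coarser lock should be the page or the whole relation.

    #define MAX_TUPLE_LOCKS_PER_PAGE  4     /* illustrative threshold only */

    static void
    maybe_promote_granularity(const PREDICATELOCKTARGETTAG *pagetag,
                              bool straight_to_relation)
    {
        if (tuple_lock_count_on_page(pagetag) <= MAX_TUPLE_LOCKS_PER_PAGE)
            return;

        if (straight_to_relation)
            promote_to_relation_lock(pagetag);  /* the option Robert raises */
        else
            promote_to_page_lock(pagetag);      /* tuple -> page upgrade */
    }

Promoting straight to the relation would let later lock acquisitions bail out at the cheap relation-level check, at the cost of more false conflicts with other transactions touching the same table.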

Re: [HACKERS] SSI performance

2011-02-04 Thread Robert Haas
On Fri, Feb 4, 2011 at 11:07 AM, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: On 04.02.2011 15:37, Robert Haas wrote: Not sure.  How much benefit do we get from upgrading tuple locks to page locks?  Should we just upgrade from tuple locks to full-relation locks? Hmm, good

Re: [HACKERS] SSI performance

2011-02-04 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: The logic of PredicateLockAcquire is: 1. Check if we already have a lock on the tuple. 2. Check if we already have a lock on the page. 3. Check if we already have a lock on the relation. So if you're accessing a lot of rows,

Re: [HACKERS] SSI performance

2011-02-04 Thread Kevin Grittner
I wrote: I just had a thought -- we already have the LocalPredicateLockHash HTAB to help with granularity promotion issues without LW locking. Offhand, I can't see any reason we couldn't use this for an initial check for a relation-level lock, before going through the more rigorous pass
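
A sketch of what that initial check might look like (illustrative only, not an actual patch; the function name is made up, while LocalPredicateLockHash and hash_search() are existing pieces and the tag macros follow predicate_internals.h): build the relation-level tag for the same target and probe the backend-local hash, which takes no lightweight locks.

    static bool
    local_relation_lock_exists(const PREDICATELOCKTARGETTAG *tag)
    {
        PREDICATELOCKTARGETTAG reltag;
        bool        found;

        /* Relation-level tag for the same database and relation. */
        SET_PREDICATELOCKTARGETTAG_RELATION(reltag,
                                            GET_PREDICATELOCKTARGETTAG_DB(*tag),
                                            GET_PREDICATELOCKTARGETTAG_RELATION(*tag));

        /* Backend-local lookup only: no LWLocks taken. */
        (void) hash_search(LocalPredicateLockHash, &reltag, HASH_FIND, &found);

        return found;
    }

If this returns true, the acquire path could return immediately without touching shared memory.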

Re: [HACKERS] SSI performance

2011-02-04 Thread Kevin Grittner
I wrote: If this works, it would be a very minor change, which might eliminate a lot of that overhead for many common cases. With that change in place, I loaded actual data from one county for our most heavily searched table and searched it on the most heavily searched index. I returned

Re: [HACKERS] SSI performance

2011-02-04 Thread Kevin Grittner
I wrote: repeatable read [best] Time: 51.150 ms; serializable [best] Time: 52.089 ms. It occurred to me that taking the best time from each was likely to give a reasonable approximation of the actual overhead of SSI in this situation. That came out to about 1.8% in this (small) set of
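
(For reference, the arithmetic behind that figure: (52.089 - 51.150) / 51.150 ≈ 0.018, i.e. roughly 1.8% serializable overhead relative to the best repeatable-read timing.)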