On Thu, Jan 7, 2010 at 9:28 PM, Kevin Grittner
<kevin.gritt...@wicourts.gov> wrote:
> All valid points.  I could try to make counter-arguments, but in my
> view the only thing that really matters is how any such attempt
> performs in a realistic workload.  If, when we get to the
> optimization phase, such a technique shows a performance improvement
> in benchmarks which we believe realistically model workloads we
> believe to be reasonable candidates to use serializable
> transactions, then I'll argue that the burden of proof is on anyone
> who thinks it's a bad idea in spite of that.  If it doesn't show an
> improvement, I'll be the first to either try to refine it or toss
> it.  Fair enough?

My comment was in relation to the idea of representing the costs in
the planner. I was a) saying you have to see how the implementation
goes before you try to come up with how to represent the costs, and
b) speculating (hypocritically :)) that you might have the direction
of adjustment backwards.

From what I understand your first cut will just take full-table
"locks" anyway, so it won't matter what type of plan is used at all.
Personally I can't see how that wouldn't generate a serialization
failure on basically every query on any moderately concurrent system,
but at least it would make an interesting test-bed for the SIREAD
dependency detection logic. And I agree it's necessary code before we
get into more fine-grained SIREAD locks.
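
For anyone following along, here is a sketch of the kind of
rw-dependency the SIREAD machinery is meant to catch (the table, data,
and session interleaving are hypothetical, purely for illustration;
with full-table locks, any overlapping read/write pair like this would
conflict):

```sql
-- Two concurrent sessions, both at SERIALIZABLE; statements are shown
-- in the order they execute.  (Hypothetical "doctors" table.)

-- Session 1:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;   -- reads rows session 2 will write

-- Session 2:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;   -- reads rows session 1 will write

-- Session 1:
UPDATE doctors SET on_call = false WHERE name = 'alice';
COMMIT;

-- Session 2:
UPDATE doctors SET on_call = false WHERE name = 'bob';
COMMIT;  -- the rw-dependency cycle means no serial order exists, so
         -- one of the two transactions must be rolled back with a
         -- serialization failure rather than allowed to commit
```

Neither transaction blocks under snapshot isolation, which is exactly
why SSI has to detect the dependency cycle rather than rely on
blocking.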

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)