On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:
The reported behavior was that the planner would prefer to
sequential-scan the table rather than use the index, even if
enable_seqscan=off. I'm not sure what the query looked like, but it
could have been something best implemented
Bruce Momjian br...@momjian.us writes:
On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:
Remember also that enable_seqscan=off merely adds 1e10 to the
estimated cost of seqscans. For sufficiently large tables this is not
exactly a hard disable, just a thumb on the scales. But I don't
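The "thumb on the scales" behavior can be sketched numerically. This is an illustrative model, not PostgreSQL's actual planner code: the constant 1e10 is the disable penalty mentioned above, while `seq_page_cost` and `cpu_tuple_cost` use the stock defaults; the function name and signature are invented for the sketch.

```python
# Sketch: enable_seqscan=off adds a large constant to the seqscan estimate
# rather than forbidding the plan outright (illustrative, not planner code).
DISABLE_COST = 1e10  # penalty added when a plan type is set "off"

def seqscan_cost(pages, tuples, seq_page_cost=1.0, cpu_tuple_cost=0.01,
                 enabled=True):
    cost = pages * seq_page_cost + tuples * cpu_tuple_cost
    return cost if enabled else cost + DISABLE_COST

# For a sufficiently huge table, a badly estimated index plan can still
# cost more than the "disabled" seqscan -- a thumb on the scales, not a veto.
hypothetical_index_cost = 2.5e10
disabled_seq = seqscan_cost(1_000_000, 100_000_000, enabled=False)
print(disabled_seq < hypothetical_index_cost)
```

The point is that the penalty is additive, so once the honest costs themselves reach the 1e10 range, the "off" setting stops being decisive.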
On Thu, Jan 10, 2013 at 8:07 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Comments?
I'm not sure I have anything intelligent to add to this conversation -
does that make me the wisest of all the Greeks? - but I do think it
worth mentioning that I have heard occasional reports within EDB of
the query
Robert Haas robertmh...@gmail.com writes:
I'm not sure I have anything intelligent to add to this conversation -
does that make me the wisest of all the Greeks? - but I do think it
worth mentioning that I have heard occasional reports within EDB of
the query planner refusing to use extremely
On Mon, Jan 14, 2013 at 12:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I'm not sure I have anything intelligent to add to this conversation -
does that make me the wisest of all the Greeks? - but I do think it
worth mentioning that I have heard
Robert Haas robertmh...@gmail.com writes:
On Mon, Jan 14, 2013 at 12:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The old code definitely had an unreasonably large charge for indexes
exceeding 1e8 or so tuples. This wouldn't matter that much for simple
single-table lookup queries, but I could
I wrote:
I'm fairly happy with the general shape of this formula: it has a
principled explanation and the resulting numbers appear to be sane.
The specific cost multipliers obviously are open to improvement based
on future evidence. (In particular, I intend to code it in a way that
doesn't
I wrote:
Now, if we consider only CPU costs of index descent, we expect
about log2(P) comparisons to be needed on each of the H upper pages
to be descended through, that is we have total descent cost
cpu_index_tuple_cost * H * log2(P)
If we ignore the ceil() step as being a second-order
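The descent-cost term above can be put into a small numeric sketch. The way H (upper pages descended) is derived from N index tuples and P tuples per page is an assumption here, a balanced b-tree reading of the quoted text; the function name is invented for illustration.

```python
import math

# Sketch of the CPU-side descent cost quoted above:
#     cpu_index_tuple_cost * H * log2(P)
# Assumption: H is the b-tree height for N tuples at P entries/page,
# i.e. the number of upper pages a lookup descends through.
def descent_cost(N, P=256, cpu_index_tuple_cost=0.005):
    H = math.ceil(math.log(N / P) / math.log(P))  # assumed tree height
    return cpu_index_tuple_cost * H * math.log2(P)
```

With the defaults, H grows only when N crosses a power-of-P boundary, so the estimate is a step function of index size, which matches the intuition that adding a level is what actually costs something.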
On 7 January 2013 17:35, Tom Lane t...@sss.pgh.pa.us wrote:
That gives a formula of
cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)
This would lead to the behavior depicted in the attached plot, wherein
I've modified the comparison lines (historical, 9.2, and HEAD
Simon Riggs si...@2ndquadrant.com writes:
On 7 January 2013 17:35, Tom Lane t...@sss.pgh.pa.us wrote:
That gives a formula of
cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)
Again, this depends on N and H, so that's good.
I think my retinas detached while reading your
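To make the quoted formula concrete, here is a numeric sketch using the default `cpu_operator_cost` of 0.0025. How H is obtained from N is an assumption (b-tree height at roughly 256 entries per page); the function name is invented for the sketch.

```python
import math

# Sketch of the proposed formula quoted above:
#     cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H + 2)
# Assumption: H approximated as b-tree height with ~256 entries/page.
def fudge_cost(N, cpu_operator_cost=0.0025, entries_per_page=256):
    H = max(1, math.ceil(math.log(N, entries_per_page)))  # assumed height
    return cpu_operator_cost * math.log2(N) + cpu_operator_cost * 50 * (H + 2)
```

Plugging in N = 10^6 gives a charge well under one cost unit, which is the kind of "sane number" the thread is aiming for, large enough to break ties between different-sized indexes, small enough not to swamp the real I/O costs.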
On Mon, Jan 7, 2013 at 3:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
One issue that needs some thought is that the argument for this formula
is based entirely on thinking about b-trees. I think it's probably
reasonable to apply it to gist, gin, and sp-gist as well, assuming we
can get some
On Saturday, January 5, 2013, Tom Lane wrote:
Jeff Janes jeff.ja...@gmail.com writes:
[moved to hackers]
On Wednesday, December 5, 2012, Tom Lane wrote:
Hm. To tell you the truth, in October I'd completely forgotten about
the January patch, and was thinking that the 1/1
Jeff Janes jeff.ja...@gmail.com writes:
On Saturday, January 5, 2013, Tom Lane wrote:
Jeff Janes jeff.ja...@gmail.com writes:
One thing which depends on the index size which, as far as I can tell, is
not currently being counted is the cost of comparing the tuples all the way
down
On 5 January 2013 22:18, Tom Lane t...@sss.pgh.pa.us wrote:
But I am wondering if it should be present at all in 9.3. When it was
introduced, the argument seemed to be that smaller indexes might be easier
to keep in cache.
No. The argument is that if we don't have some such correction, the
On 6 January 2013 16:29, Jeff Janes jeff.ja...@gmail.com wrote:
Worse, this over-punishment of bloat is more likely to penalize partial
indexes. Since they are vacuumed on the table's schedule, not their own
schedule, they likely get vacuumed less often relative to the amount of
turn-over
Simon Riggs si...@2ndquadrant.com writes:
On 5 January 2013 22:18, Tom Lane t...@sss.pgh.pa.us wrote:
No. The argument is that if we don't have some such correction, the
planner is liable to believe that different-sized indexes have *exactly
the same cost*, if a given query would fetch the
On 6 January 2013 18:58, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
On 5 January 2013 22:18, Tom Lane t...@sss.pgh.pa.us wrote:
No. The argument is that if we don't have some such correction, the
planner is liable to believe that different-sized indexes have
Simon Riggs si...@2ndquadrant.com writes:
On 6 January 2013 18:58, Tom Lane t...@sss.pgh.pa.us wrote:
IIRC, one of my very first attempts to deal with this was to charge
random_page_cost per level of index descended. This was such a horrid
overestimate that it never went anywhere. I think
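The scale of that overestimate is easy to see side by side. The constants below are the PostgreSQL defaults; the height H = 3 is just an illustrative value, and the comparison against the `cpu_operator_cost * 50 * (H+2)` term from the formula discussed elsewhere in the thread is mine, not from the quoted message.

```python
# Sketch: why charging random_page_cost per level descended was "a horrid
# overestimate" compared with a CPU-based descent charge (defaults assumed).
random_page_cost = 4.0
cpu_operator_cost = 0.0025
H = 3  # illustrative tree height

page_based = random_page_cost * H             # rejected approach: 12.0
cpu_based = cpu_operator_cost * 50 * (H + 2)  # CPU-based charge:   0.625
print(page_based / cpu_based)                 # roughly 19x larger
```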
I wrote:
[ slightly bogus graph ]
Ooops, it seems the ^ operator doesn't do what I thought in gnuplot.
Here's a corrected version.
regards, tom lane
set terminal png small color
set output 'new_fudge.png'
set xlabel "Index tuples"
set ylabel "Added cost"
set
On 6 January 2013 23:03, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
On 6 January 2013 18:58, Tom Lane t...@sss.pgh.pa.us wrote:
IIRC, one of my very first attempts to deal with this was to charge
random_page_cost per level of index descended. This was such a
Jeff Janes jeff.ja...@gmail.com writes:
[moved to hackers]
On Wednesday, December 5, 2012, Tom Lane wrote:
Hm. To tell you the truth, in October I'd completely forgotten about
the January patch, and was thinking that the 1/1 cost had a lot
of history behind it. But if we never shipped
On Tue, Dec 18, 2012 at 5:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Sorry for the malformed and duplicate post. I was not trying to be
emphatic; I was testing out gmail offline. Clearly the test didn't go
too well.
Jeff
[moved to hackers]
On Wednesday, December 5, 2012, Tom Lane wrote:
Jeff Janes jeff.ja...@gmail.com writes:
I now see where the cost is coming from. In commit 21a39de5809 (first
appearing in 9.2) the fudge factor cost estimate for large indexes
was increased by about 10 fold,
[moved to hackers]
On Wednesday, December 5, 2012, Tom Lane wrote:
Jeff Janes jeff.ja...@gmail.com writes:
I now see where the cost is coming from. In commit 21a39de5809 (first
appearing in 9.2) the fudge factor cost estimate for large indexes
was increased by about 10 fold, which really