On 11/10/2010 5:43 PM, Kevin Grittner wrote:
> The only half-sane answer I've thought of is to apply a different
> cost to full-table or full-index scans based on the ratio with
> effective cache size.
The "effective_cache_size" is, in my humble opinion, the wrong method.
It would be much easier to …
On Wed, Nov 10, 2010 at 6:07 PM, Tom Lane wrote:
> "Kevin Grittner" writes:
>> Robert Haas wrote:
>>> Unfortunately, to know how much data we're going to grovel
>>> through, we need to know the plan; and to decide on the right
>>> plan, we need to know how much data we're going to grovel through.
"Kevin Grittner" writes:
> Robert Haas wrote:
>> Unfortunately, to know how much data we're going to grovel
>> through, we need to know the plan; and to decide on the right
>> plan, we need to know how much data we're going to grovel through.
> And that's where they've been ending.
> The only half-sane answer I've thought of is to apply a different
> cost to full-table or full-index scans based on the ratio with
> effective cache size.
Robert Haas wrote:
> Wow. That's fascinating, and if you don't mind, I might mention
> this potential problem in a future talk at some point.
I don't mind at all.
> For example, in your case, it would be sufficient to estimate the
> amount of data that a given query is going to grovel through …
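The planner already exposes its own estimate of how much data a query will grovel through: EXPLAIN prints estimated rows and row width for every plan node, and rows times width gives a crude byte count. A sketch, with big_table again hypothetical:

-- Each node shows (cost=... rows=N width=W); N * W approximates
-- the bytes that node expects to process.
EXPLAIN SELECT * FROM big_table
WHERE created_at > now() - interval '1 day';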
On Wed, Nov 10, 2010 at 10:15 AM, Kevin Grittner wrote:
> But wait -- it turns out that this pain was self-inflicted. Based
> on heavy testing of the interactive queries which users run against
> this database, we tuned the database for "fully-cached" settings,
> with both random_page_cost and seq_page_cost …
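"Fully-cached" tuning of this sort usually means telling the planner that random and sequential page fetches cost about the same, because the working set is expected to live in RAM. A hedged sketch; the values are illustrative, not the poster's actual settings:

-- Assume everything is cached: random I/O costs no more than
-- sequential I/O. Session-level, for experimentation.
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;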
Grzegorz Jaśkiewicz wrote:
> you're joining on more than one key. That always hurts performance.
That's very clearly *not* the problem: there is a plan which runs in
acceptable time, but the optimizer does not choose it unless coerced.
(1) Virtually every query we run joins on multi-column …
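A multi-column join simply matches on every column of a composite key, and with a matching composite index it need not be slower than a single-column join. A minimal sketch with invented table and column names:

-- Join on a two-column composite key.
SELECT d.*
FROM tran_header h
JOIN tran_detail d
  ON d.county_no = h.county_no
 AND d.tran_seq  = h.tran_seq;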
"Kevin Grittner" writes:
> Tom Lane wrote:
>> In principle, the old-style plan ought to be equivalent to a
>> nestloop antijoin with a seqscan of DbTranLogRecord on the outside
>> and an indexscan of DbTranRepository on the inside. Can you force
>> it to choose such a plan by setting enable_mergejoin …
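The enable_* settings Tom is reaching for turn off whole plan types for the current session, which is the standard way to coerce a comparison plan out of the optimizer. A sketch with simplified, hypothetical stand-ins for the real tables and join column:

-- Rule out merge (and hash) anti-joins so the planner falls back
-- to a nestloop anti-join for the NOT EXISTS below.
SET enable_mergejoin = off;
SET enable_hashjoin = off;

EXPLAIN ANALYZE
SELECT r.*
FROM tran_log_record r                -- seqscan expected on the outside
WHERE NOT EXISTS (
    SELECT 1 FROM tran_repository t   -- indexscan expected on the inside
    WHERE t.tran_seq = r.tran_seq     -- hypothetical join column
);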
Tom Lane wrote:
> "Kevin Grittner" writes:
>> The semi-join and anti-join have helped us quite a bit, but we
>> have seen a situation where anti-join is chosen even though it is
>> slower than the "old fashioned" plan. I know there have been
>> other reports of this, but I just wanted to go on record …
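For readers joining late: since 8.4, PostgreSQL recognizes NOT EXISTS as an anti-join, whereas the "old fashioned" way to express the same thing was an outer join filtered on NULL. Both forms below return identical rows (parent and child are invented names):

-- Anti-join form, which the planner may run as a merge/hash anti-join:
SELECT p.id
FROM parent p
WHERE NOT EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id);

-- Equivalent "old fashioned" outer-join form:
SELECT p.id
FROM parent p
LEFT JOIN child c ON c.parent_id = p.id
WHERE c.parent_id IS NULL;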
I use PostgreSQL in its default configuration and use inheritance to
create tables.
My PostgreSQL version is:
SELECT version();
 version
----------
 …
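"Inheritance to create tables" presumably means the table-inheritance style of partitioning; a minimal sketch of such a setup, with invented names, in case that matches the poster's schema:

-- Parent holds the common definition; children carry the data.
CREATE TABLE measurement (
    id      bigint,
    logdate date,
    val     numeric
);

-- One child per month, CHECK-constrained so the planner can exclude it.
CREATE TABLE measurement_2010_11 (
    CHECK (logdate >= DATE '2010-11-01' AND logdate < DATE '2010-12-01')
) INHERITS (measurement);

-- Let the planner skip children whose CHECKs contradict the WHERE clause.
SET constraint_exclusion = partition;
EXPLAIN SELECT * FROM measurement WHERE logdate >= DATE '2010-11-15';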
> Okay, 225044.255 ms vs. 83813.808 ms; it obviously seems that the
> planner selects a bad scan plan by default.
Actually no, the planner chose the cheapest plan (more precisely, the plan
with the lowest computed cost). The first plan has a cost of 600830.86
while the second one has a cost of 634901.28, so …
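The computed costs being compared are the numbers EXPLAIN prints on each plan's top line, and they are abstract units (seq_page_cost = 1.0 by default), not milliseconds, which is exactly how the cheapest-by-cost plan can still lose on wall-clock time. Illustrative shape only, reusing the hypothetical measurement table:

-- The planner minimizes the total in "cost=startup..total";
-- rows and width are estimates, and none of these are time units.
EXPLAIN SELECT count(*) FROM measurement WHERE val > 100;
-- Top line looks like:  Aggregate  (cost=... rows=1 width=...)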
On 03/11/10 08:46, Mladen Gogala wrote:
> I wrote a little Perl script, intended to test the difference that
> array insert makes with PostgreSQL. Imagine my surprise when a single
> record insert into a local database was faster than batches of 100
> records. Here are the two respective routines:
I …
On 10/11/10 22:10, Mark Kirkwood wrote:
> What might also be interesting is doing each INSERT with an array-load
> of bind variables appended to the VALUES clause - as this will only do
> one insert call per "array" of values.
This is probably more like what you were expecting:
…
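In plain SQL, "an array-load of bind variables appended to the VALUES clause" comes out as a single multi-row INSERT (supported since PostgreSQL 8.2): one statement and one round trip per batch instead of one per row. A small sketch with an invented table:

CREATE TABLE batch_test (id int, payload text);

-- Three rows in one INSERT instead of three separate statements.
INSERT INTO batch_test (id, payload) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');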