So, as the OP, back to the original posting:
In the real world, what should I do? Does it make sense to pull the
"AND articles.indexed" clause into an outer query? Will that query
simply perform poorly on other arbitrary combinations of words?
I'm happy to test any given query against th
Tom Lane wrote:
> Given that it estimated 1 row out of "words" (quite correctly) and 12264
> rows out of each scan on article_words, you'd think that the join size
> estimate would be 12264, which would be off by "only" a factor of 3 from
> the true result. Instead it's 23, off by a factor of 200
Kevin Grittner wrote:
Bryce Nesbitt wrote:
I've got a very slow query, which I can make faster by doing
something seemingly trivial.
Out of curiosity, what kind of performance do you get with?:
EXPLAIN ANALYZE
SELECT contexts.context_key
FROM contexts
JOIN a
I've been full-on vocally anti-Dell ever since they started releasing
PCs with the non-standard ATX power supply pinout; that was my final
straw with their terrible quality decisions. But after doing two tuning
exercises with PERC6 controllers and getting quite good results this
year, just a f
Kevin Grittner wrote:
I wonder if it might also pay to make the background writer even more
aggressive than we have, so that SELECT-only queries don't spend so
much time writing pages.
You can easily quantify if the BGW is aggressive enough. Buffers leave
the cache three ways, and they each sho
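The three buffer exit paths can be checked against the standard pg_stat_bgwriter view (present in 8.3/8.4); a sketch, run as any user:

```sql
-- Buffers are written out at checkpoint time, by the background
-- writer's cleaning scan, or directly by backends that need a free
-- buffer. If buffers_backend dominates, the BGW is not keeping up.
SELECT buffers_checkpoint,  -- written during checkpoints
       buffers_clean,       -- written by the background writer
       buffers_backend,     -- written by ordinary backends (the bad case)
       maxwritten_clean     -- times the scan stopped at bgwriter_lru_maxpages
  FROM pg_stat_bgwriter;
```

Comparing two snapshots of these counters over an interval shows which path is doing the writing.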
lionel duboeuf wrote:
> Thanks Kevin for your answer. Here is some additional
> information attached as files.
Could you supply an EXPLAIN ANALYZE of the fast plan as an
attachment, for comparison?
Anyway, it looks like at least one big problem is the bad estimate
on how many rows will be ge
Thanks Kevin for your answer. Here is some additional information
attached as files.
regards.
Lionel
Kevin Grittner wrote:
lionel duboeuf wrote:
Some information:
The following problem has been detected on
Postgresql 8.3 and 8.4. on System linux or windows
Default AutoVacuu
Kevin Grittner wrote:
> Alvaro Herrera wrote:
> > Actually, a transaction that performed no writes doesn't get a
> > commit WAL record written, so it shouldn't make any difference at
> > all.
>
> Well, concurrent to the web application is the replication. Would
> asynchronous commit of that po
Karl Denninger writes:
> Explain Analyze on the alternative CLAIMS the same query planner time
> (within a few milliseconds) with explain analyze. But if I replace the
> executing code with one that has the alternative ("not exists") syntax
> in it, the system load goes to crap instantly and the
I do think it's valid to prevent idiot customers from installing drives that
use too much power or run too hot, or desktop drives that don't support
fast-fail reads, thus driving up Dell's support load, but it sounds like
this is more of a lock-in attempt.
This is kind of a dumb move on their part
"Connors, Bill" writes:
> ... in my actual system where we have a couple hundred partitions this
> query takes minutes to plan.
Please note what the documentation says under "Partitioning Caveats".
The current partitioning support is not meant to scale past a few dozen
partitions. So the solutio
Karl Denninger wrote:
> Kevin Grittner wrote:
>> Have you seen such a difference under 8.4? Can you provide a
>> self-contained example?
> Yes:
>
> [query and EXPLAIN ANALYZE of fast query]
> The alternative:
>
> [query with no other information]
>
> goes nuts.
Which means what? Coul
Yes:
select forum, * from post where
marked is not true
and toppost = 1
and (select login from ignore_thread where login='xxx' and
number=post.number) is null
and (replied > now() - '30 days'::interval)
and (replied > (select lastview from forumlog where login='xxx' and
forum=p
I have been trying to track down a performance issue we've been having with a
INSERT INTO ... SELECT query run against a partitioned table on postgres. The
problem appears to be in the plan building of the query and after some further
research I think I have nailed down a simplified example of
Karl Denninger wrote:
Kevin Grittner wrote:
>> I suspect that the above might do pretty well in 8.4.
> "Exists" can be quite slow. So can "not exists"
>
> See if you can re-write it using a sub-select - just replace the
> "exists " with "(select ...) is not null"
>
> Surprisingly this o
"Exists" can be quite slow. So can "not exists"
See if you can re-write it using a sub-select - just replace the "exists
" with "(select ...) is not null"
Surprisingly this often results in a MUCH better query plan under
Postgresql. Why the planner evaluates it "better" eludes me (it
should
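A minimal sketch of the rewrite being suggested, using hypothetical tables t and s (names not from the original post):

```sql
-- EXISTS form:
SELECT t.id FROM t
 WHERE EXISTS (SELECT 1 FROM s WHERE s.t_id = t.id);

-- Sub-select rewrite: the correlated sub-select returns NULL when
-- there is no matching row, so IS NOT NULL stands in for EXISTS
-- (and IS NULL for NOT EXISTS). Caveat: this assumes the sub-select
-- returns at most one row, and that s.t_id is non-NULL in matching
-- rows; otherwise the two forms are not equivalent.
SELECT t.id FROM t
 WHERE (SELECT s.t_id FROM s WHERE s.t_id = t.id) IS NOT NULL;
```

Whether the planner prefers one form over the other is version-dependent, which is the point under discussion in this thread.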
Bryce Nesbitt wrote:
> I've got a very slow query, which I can make faster by doing
> something seemingly trivial.
Out of curiosity, what kind of performance do you get with?:
EXPLAIN ANALYZE
SELECT contexts.context_key
FROM contexts
JOIN articles ON (articles.context_key = contexts.con
lionel duboeuf wrote:
> Some information:
> The following problem has been detected on
> Postgresql 8.3 and 8.4, on Linux or Windows systems
> Default AutoVacuum daemon working
> One pg_dump every day
> This happens sometimes and I don't see what can be the cause.
> A manual Vacuum Anal
Alvaro Herrera wrote:
> Alvaro Herrera wrote:
>> Kevin Grittner wrote:
>
>> > Anyway, given that these are replication targets, and aren't
>> > the "database of origin" for any data of their own, I guess
>> > there's no reason not to try asynchronous commit.
>>
>> Yeah; since the transactions o
Alvaro Herrera wrote:
> Kevin Grittner wrote:
> > Anyway, given that these are replication
> > targets, and aren't the "database of origin" for any data of their
> > own, I guess there's no reason not to try asynchronous commit.
>
> Yeah; since the transactions only ever write commit records to
Kevin Grittner wrote:
> Hannu Krosing wrote:
>
> > Can it be, that each request does at least 1 write op (update
> > session or log something) ?
>
> Well, the web application connects through a login which only has
> SELECT rights; but, in discussing a previous issue we've pretty well
> establ
Some information:
The following problem has been detected on
Postgresql 8.3 and 8.4, on Linux or Windows systems
Default AutoVacuum daemon working
One pg_dump every day
This happens sometimes and I don't see what can be the cause.
A manual VACUUM ANALYZE repairs the problem.
Dear all,
Luiz Angelo Daros de Luca wrote:
I have a directed graph, or better, a tree, in PostgreSQL 8.3. One
table holds the nodes and another the connections. Given any
node, I need to get all nodes below it (down to the leaves) that have
a relation with anotherTable. Also, these connections change on ti
Hannu Krosing wrote:
> Can it be, that each request does at least 1 write op (update
> session or log something) ?
Well, the web application connects through a login which only has
SELECT rights; but, in discussing a previous issue we've pretty well
established that it's not unusual for a read
Hello guys,
I don't know if this is the correct list. Correct me if I'm wrong.
I have a directed graph, or better, a tree, in PostgreSQL 8.3. One table
holds the nodes and another the connections. Given any node, I need to get
all nodes below it (down to the leaves) that have a relation with anot
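On 8.4 and later the usual tool for this is a recursive CTE (not available in 8.3, where contrib/tablefunc's connectby() or a looping PL/pgSQL function is the common workaround). A sketch with assumed names — a connection table with parent_key/child_key columns, neither of which appears in the original post:

```sql
-- Walk from a starting node down to all its descendants (the leaves
-- included). Table and column names here are illustrative only.
WITH RECURSIVE subtree AS (
    SELECT c.child_key
      FROM connection c
     WHERE c.parent_key = 42          -- the starting node
    UNION ALL
    SELECT c.child_key
      FROM connection c
      JOIN subtree s ON c.parent_key = s.child_key
)
SELECT * FROM subtree;
```

The result can then be joined against the other table to keep only nodes that have the required relation.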