Josué Maldonado <[EMAIL PROTECTED]> writes:
> Recreated the index (drop then create) and did the vacuum full pkardex
> and the behavior seems to be the same:
Well, there was a pretty huge improvement in the pkardex scan time,
whether you noticed it or not: 39520.406 down to 87.393 m
Marcus Whitney <[EMAIL PROTECTED]> writes:
> I have an instance where I have a series of pl/pgsql calls, that report stat
> results to a common table. When other queries try to hit the stat table
> (with DML commands; SELECT, INSERT, UPDATE, DELETE etc.) they are forced to
> wait in a queue u
Hey All,
I've implemented a couple of functions à la date_trunc (listed at the bottom). These
functions are executed every 5 minutes (date_trunc_minute) and every week
(date_trunc_week) across 16 different values. The problem is that they take way
too long to execute (nearly 7x the 'regular' dat
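The original functions aren't shown in full in this excerpt, but a minimal sketch of what a date_trunc_week might look like on 7.4 follows (names and week convention are assumptions; weeks are taken to start on Sunday here, and later releases add a built-in 'week' field to date_trunc). Declaring it as a plain SQL function and IMMUTABLE gives the planner the best chance of folding or inlining calls:

```sql
-- Hypothetical sketch: truncate a timestamp to the start of its week
-- (Sunday). IMMUTABLE lets the planner pre-evaluate constant calls.
CREATE OR REPLACE FUNCTION date_trunc_week(timestamp) RETURNS timestamp AS '
    SELECT date_trunc(''day'', $1) - EXTRACT(dow FROM $1) * INTERVAL ''1 day'';
' LANGUAGE SQL IMMUTABLE;
```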
On Tue, 2004-02-24 at 12:11, Richard Huxton wrote:
> On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:
> >
> > I've written a summary of my findings on implementing and using
> > materialized views in PostgreSQL. I've already deployed eagerly updating
> > materialized views on several v
After a long battle with technology, [EMAIL PROTECTED] ("Ed L."), an earthling, wrote:
> On Monday February 23 2004 10:23, Tom Lane wrote:
>> "Ed L." <[EMAIL PROTECTED]> writes:
>> Depending on the size of mytable, you might need an "ANALYZE doomed"
>> in there, but I'm suspecting not. A quick exp
On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:
>
> I've written a summary of my findings on implementing and using
> materialized views in PostgreSQL. I've already deployed eagerly updating
> materialized views on several views in a production environment for a
> company called RedWe
Title: RE: [PERFORM] is it possible to for the planner to optimize this form?
I didn't really look that closely at the problem, but have you thought of trying:
select a.key, a.field
from t a,
     (select count(*) as cntb
        from t b
       where b.field > a.field) as dmytbl
where cntb = k
T
Josh,
On 07/06/2004 5:21 PM, Josh Berkus wrote:
Huh? It is not at all the same. Your index scan is down to 87 ms from
27,000! And the total query is down to 16 seconds from 47 seconds. Don't
you consider that an improvement?
Yes, there was an improvement with respect to the pre
Josué
> dbmund=# explain analyze select * from vkardex where kprocode='1013';
> Nested Loop (cost=0.00..2248.19 rows=403 width=114) (actual
> time=846.318..16030.633 rows=3145 loops=1)
> -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..806.27
> rows=403 width=72) (actual time=0
Hi Josh and thanks for your response,
On 07/06/2004 4:31 PM, Josh Berkus wrote:
Josué
-> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..11292.52
rows=5831 width=72) (actual time=18.152..39520.406 rows=5049 loops=1)
Looks to me like there's a problem with index bloat
On 7 Jun 2004 at 18:49, Dan Langille wrote:
> On 7 Jun 2004 at 16:38, Rod Taylor wrote:
> > * random_page_cost (good disks will bring this down to a 2 from a
> > 4)
>
> I've got mine set at 4. Increasing it to 6 gave me a 1971ms query.
> At 3, it was a 995ms. Setting it to 2 gav
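The experiment described above can be run entirely inside one session, so nothing has to be written to postgresql.conf until a good value is found. A sketch (the SELECT is a placeholder for the real query):

```sql
-- Session-local planner experiment: change the cost, re-run the query,
-- compare EXPLAIN ANALYZE timings, then put the setting back.
SHOW random_page_cost;
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT 1;   -- substitute the slow query here
RESET random_page_cost;
```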
On 7 Jun 2004 at 16:38, Rod Taylor wrote:
> On Mon, 2004-06-07 at 16:12, Dan Langille wrote:
> > I grep'd postgresql.conf:
> >
> > #effective_cache_size = 1000  # typically 8KB each
> > #random_page_cost = 4 # units are one sequential page fetch cost
>
> This would be the issue. You
On Mon, 2004-06-07 at 16:19, Josué Maldonado wrote:
> Hello list,
>
> Server is dual Xeon 2.4, 2GB RAM, PostgreSQL is running on partition:
> /dev/sda9 29G 8.9G 20G 31% /home2
> /dev/sda9 on /home2 type jfs (rw)
>
> Version()
> PostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC
Josué
> -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..11292.52
> rows=5831 width=72) (actual time=18.152..39520.406 rows=5049 loops=1)
Looks to me like there's a problem with index bloat on pkd_pcode_idx. Try
REINDEXing it, and if that doesn't help, VACUUM FULL on pkardex.
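Spelled out as the commands one would issue in psql (both take exclusive locks on the table, so they are best run during a quiet period):

```sql
REINDEX INDEX pkd_pcode_idx;
-- If rebuilding the index alone doesn't recover the scan time:
VACUUM FULL ANALYZE pkardex;
```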
Hello list,
Server is dual Xeon 2.4, 2GB RAM, PostgreSQL is running on partition:
/dev/sda9 29G 8.9G 20G 31% /home2
/dev/sda9 on /home2 type jfs (rw)
Version()
PostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2
20020903 (Red Hat Linux 8.0 3.2-7)
I have a view to jo
On Mon, 7 Jun 2004, Steve Wampler wrote:
> I do have a little concern about what's happening in the
> back end during the copy - I suspect the entire table is
> locked, which may impact the performance when multiple
> clients are saving entries into the table. Anyone know
> if that's how COPY w
On 7 Jun 2004 at 16:38, Rod Taylor wrote:
> On Mon, 2004-06-07 at 16:12, Dan Langille wrote:
> > On 7 Jun 2004 at 16:00, Rod Taylor wrote:
> >
> > > On Mon, 2004-06-07 at 15:45, Dan Langille wrote:
> > > > A production system has had a query recently degrade in performance.
> > > > What once to
Merlin,
> select t.key, t.field from t a
> where
> (
> select count(*) from t b
> where b.field > a.field
> ) = k
>
> The subplan (either index or seq. scan) executes once for each row in t,
> which of course takes forever.
>
> This query is a way of achieving LIMIT typ
On 7 Jun 2004 at 16:00, Rod Taylor wrote:
> On Mon, 2004-06-07 at 15:45, Dan Langille wrote:
> > A production system has had a query recently degrade in performance.
> > What once took < 1s now takes over 1s. I have tracked down the
> > problem to a working example.
>
> What changes have you
[EMAIL PROTECTED] ("Ed L.") wrote:
> A 7.3.4 question...
>
> I want to "expire" some data after 90 days, but not delete too
> much at once so as not to overwhelm a system with precariously
> balanced disk I/O and on a table with millions of rows. If I
> could say it the way I think for a simple e
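One common approach to this on 7.x, sketched here with made-up table and column names: delete a bounded batch per transaction and repeat until nothing matches, so no single transaction swamps the I/O subsystem.

```sql
-- Hedged sketch (mytable, id, created are hypothetical): expire at most
-- 1000 rows per pass; run repeatedly until zero rows are deleted.
DELETE FROM mytable
WHERE id IN (
    SELECT id
      FROM mytable
     WHERE created < now() - interval '90 days'
     LIMIT 1000
);
```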
A production system has had a query recently degrade in performance.
What once took < 1s now takes over 1s. I have tracked down the
problem to a working example.
Compare http://rafb.net/paste/results/itZIx891.html
with http://rafb.net/paste/results/fbUTNF95.html
The first shows the query a
On Mon, 2004-06-07 at 10:40, Steve Wampler wrote:
> Thanks Kris - that patch worked beautifully and bumped the
> insert rate from ~1000 entries/second to ~9000 e/s in my
> test code.
As a followup - that 9000 e/s becomes ~21,000 e/s if I don't
have the java code also dump the message to standard
On Mon, 2004-06-07 at 02:26, Kris Jurka wrote:
> On Sat, 5 Jun 2004, Steve Wampler wrote:
>
> >
> > [I want to use copy from JDBC]
> >
>
> I made a patch to the driver to support COPY as a PG extension.
...
> http://archives.postgresql.org/pgsql-jdbc/2003-12/msg00186.php
Thanks Kris - that pat
Right now, I am having trouble getting the planner to optimize queries
in the form of
select t.key, t.field from t a
where
(
select count(*) from t b
where b.field > a.field
) = k
The subplan (either index or seq. scan) executes once for each row in t,
which of cours
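When k is a single constant and the values of "field" are distinct, the row with exactly k larger values is simply the (k+1)-th largest, so an ORDER BY / LIMIT / OFFSET form avoids the per-row subplan entirely (a sketch, not necessarily equivalent when "field" has duplicates):

```sql
-- Assuming distinct values of "field" and a constant k (5 here, for
-- illustration): fetch the (k+1)-th largest row with one sorted scan.
SELECT key, field
  FROM t
 ORDER BY field DESC
 LIMIT 1 OFFSET 5;
```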
Dear Tom,
Thanks for your response.
- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
Sent: Monday, June 07, 2004 3:51 PM
> That's pretty hard to believe; particularly on modern machines, I'd
> think that moving it down would make more sense than moving it up.
> You're essentia
Thanks for your reply. My comments are below.
On Friday 04 June 2004 16:39, you wrote:
> Uh... I don't think this is necessarily the wrong list, sometimes people
> don't have much to chime in. You could try reposting to -sql or -general
> I suppose.
I'll try one of those.
>
> As for my take on
Hello,
I have an instance where I have a series of pl/pgsql calls, that report stat
results to a common table. When other queries try to hit the stat table
(with DML commands; SELECT, INSERT, UPDATE, DELETE etc.) they are forced to
wait in a queue until the pl/pgsql has finished executing.
Hi all,
I am optimizing some code that does a lot of iterative selects and
inserts within loops. Because of the exception handling limitations in
postgres and with no ability to twiddle autocommit, just about every
operation is standalone.
Over 5000-odd lines this gets very slow (5-10 minutes i
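Even without autocommit control, the generated statements can be wrapped in an explicit transaction so the server flushes to disk once per batch rather than once per statement. A sketch with a hypothetical table:

```sql
-- Hedged sketch (stats and its columns are made up): one BEGIN/COMMIT
-- around the whole run replaces thousands of per-statement commits.
BEGIN;
INSERT INTO stats (name, val) VALUES ('rows_processed', 100);
INSERT INTO stats (name, val) VALUES ('rows_skipped', 7);
-- ... the remaining generated statements ...
COMMIT;
```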
Tom Lane mentioned :
=> Please try it on 7.4.2 and see if you still have a problem.
Will do, and I'll post the results
Thanks!
Stef <[EMAIL PROTECTED]> writes:
> I've been dealing with a problem for the past two days
> where a certain sql statement works 2 out of 5 times, and
> the other 3 times, it causes the machine (quad Xeon 2.8GHz
> + 792543232 bytes mem, linux kernel 2.4.26-custom, pg ver 7.3.4)
> to slow down, and
"SZŰCS Gábor" <[EMAIL PROTECTED]> writes:
> Last week I fumbled with CPU_TUPLE_COST and revealed that 4 out of 4 tested
> queries improved by 10-60% if I changed it from 0.01 (default) to 0.40
> (ugh). Setting it higher did not bring any improvement.
That's pretty hard to bel
Eric,
> Nested Loop  (cost=189.00..27359887.76 rows=607947200 width=22)
>   Join Filter: ("outer".address <<= "inner".address)
>   ->  Seq Scan on clients c  (cost=0.00..2074.76 rows=102176 width=11)
>   ->  Materialize  (cost=189.00..308.00 rows=11900 width=11)
>         ->
Hi all,
I've been dealing with a problem for the past two days
where a certain sql statement works 2 out of 5 times, and
the other 3 times, it causes the machine (quad Xeon 2.8GHz
+ 792543232 bytes mem, linux kernel 2.4.26-custom, pg ver 7.3.4)
to slow down, and finally grind to a halt. It looks
On Sat, 5 Jun 2004, Steve Wampler wrote:
>
> [I want to use copy from JDBC]
>
I made a patch to the driver to support COPY as a PG extension. The
patch required properly encoded and formatted copy data available
from an InputStream. Following some feedback from others I began adding
the abi
Hi, Steve,
Here are the results of some benchmarks we did inserting 30k rows into a
table, using "time psql -f blubb.sql -q dbname":
File              kingfisher            skate
30kinsert.sql     39.359s   762r/s      335.024s   90r/s
30kcommonce.sql   11.402s  2631r/s        7.086s
Dear Gurus,
Please feel free to show me to the archives if my question has already been
answered.
Last week I fumbled with CPU_TUPLE_COST and revealed that 4 out of 4 tested
queries improved by 10-60% if I changed it from 0.01 (default) to 0.40
(ugh). Setting it higher did not bring any improveme