to something near the real values, then the planner
will choose a different plan, which should give a huge performance
increase.
Regards
Russell Smith.
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
On Mon, 9 May 2005 09:20 am, Dan Harris wrote:
> Sorry to bother everyone with yet another "my query isn't using an
> index" problem but I am over my head on this one.. I am open to ways
> of restructuring this query to perform better.
>
> I have a table, 'ea', with 22 million rows in it. VACU
On Mon, 9 May 2005 11:49 am, Dan Harris wrote:
>
> On May 8, 2005, at 6:51 PM, Russell Smith wrote:
>
[snip]
> > select distinct em.incidentid, ea.recordtext as retdata, eg.long,
> > eg.lat
> > FROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entryda
On Thu, 2 Jun 2005 06:19 am, Casey Allen Shobe wrote:
> I found this response to my original post, and tried every single suggestion
> in it, which has not helped:
>
> http://archives.postgresql.org/pgsql-performance/2004-11/msg00416.php
>
> I'm sorry to come begging for help, but this is a MAJO
On Mon, 13 Jun 2005 04:54 pm, Yves Vindevogel wrote:
> Still, when I use explain, pg says it will first sort my tables instead
> of using my index
> How is that possible ?
Can we see the output of the explain analyze?
The definition of the view?
Regards
Russ
Functions, views. It will make no difference. The issue is the amount of
data returned relative to the amount of data in the table.
Regards
Russell Smith
performance. It depends on how much memory you have spare to use
for PostgreSQL. But something like
shared_buffers = 2
maintenance_work_mem = 256000
Will certainly give you a performance boost. You will have to adjust
those figures based on whatever else you are doing on the machine.
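The figures above are truncated in the archive; as a hedged sketch (example values only, not a recommendation for any particular machine, and note that on servers of that era shared_buffers is counted in 8 kB pages and maintenance_work_mem in kB):

```
# postgresql.conf -- illustrative values only; adjust to the machine
shared_buffers = 20000           # roughly 160 MB of shared cache (8 kB pages)
maintenance_work_mem = 256000    # roughly 256 MB for VACUUM / CREATE INDEX (kB)
```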
Russell Smith
that. If not, I suggest you upgrade
and see if the fault still exists.
Regards
Russell Smith
Thanks to all,
Kim
Is it possible to put an index on LOG.CODCEP?
That might give you a better plan, as you only have 1 row on the left of
the join, so an index scan would be preferable.
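A sketch of that suggestion, using the table and column names from the quoted query (the index name is made up):

```sql
-- Assumed: LOG is the table being probed on CODCEP in the join.
CREATE INDEX log_codcep_idx ON log (codcep);
ANALYZE log;  -- refresh statistics so the planner considers the new index
```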
Regards
Russell Smith
Is there a type of index that could help or is there another way to
build this SQL?
Thank you in advance!
exp
y is int8, but
can't be certain that is what's causing the problem.
Regards
Russell Smith
Thanks,
Carlos
Guido Neitzer wrote:
On 27.01.2007, at 00:35, Russell Smith wrote:
Guess 1 would be that your primary key is int8, but can't be certain
that is what's causing the problem.
Why could that be a problem?
Before 8.0, the planner would not choose an index scan if the types were
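Concretely: before 8.0 a bare integer literal is int4, so comparing it against an int8 column defeated the index. The standard workarounds, sketched here with a hypothetical table, were an explicit cast or a quoted literal:

```sql
-- t.id is int8; a bare 639246 is int4, so pre-8.0 the index is skipped
SELECT * FROM t WHERE id = 639246::int8;  -- explicit cast uses the index
SELECT * FROM t WHERE id = '639246';      -- quoted literal works as well
```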
Ben Perrault wrote:
Hi,
I recently inherited a very old (PostgreSQL 7.0.3) database, and have
migrated it to 8.2.4 but have run into a performance issue.
Basically, I did a dump and import into the new database, vacuumed and
created fresh indexes and everything is work great except the
foll
processing in postgres is SLOW. I
would strongly recommend redesigning your schema to use a table with
sensor_ids that correspond to the primary key in the reading table.
Rethinking the way you are going about this will probably be the most
effective solution, but we wi
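A minimal sketch of the redesign being suggested (all names here are assumptions, not the poster's actual schema):

```sql
-- One row per sensor; readings reference it by integer key
CREATE TABLE sensor (
    sensor_id serial PRIMARY KEY,
    name      text NOT NULL
);

CREATE TABLE reading (
    reading_id serial PRIMARY KEY,
    sensor_id  integer NOT NULL REFERENCES sensor (sensor_id),
    rdate      timestamp NOT NULL
);

-- Supports the "sensor_id IN (...) ... ORDER BY rdate" queries
-- discussed in this thread
CREATE INDEX reading_sensor_rdate_idx ON reading (sensor_id, rdate);
```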
Michael Glaesemann wrote:
On Aug 22, 2007, at 5:58 , Russell Smith wrote:
Stephen Davies wrote:
select count(rdate),rdate from reading where sensor_id in
(1137,1138,1139,1140) group by rdate order by rdate desc limit 1;
It would have been helpful to see the table definition here. I can
Russell Smith wrote:
Filter: (sensor_id = ANY ('{1137,1138,1139,1140}'::integer[]))
I've never seen this plan item except when arrays are involved. I
could be wrong. I'd like to know how this is generated when you don't
have an array.
I have just di
On 22/06/10 00:42, Sergio Charpinel Jr. wrote:
> Hi,
>
[snip]
>
> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,
> tcp_flags, ip_proto,SUM("bytes"),SUM("packets"),SUM("flows") FROM
> "acct_2010_25" WHERE "stamp_inserted">='2010-06-20 10:10' AND
> "stamp_inserted"<'2010-06-21 10:10' G
On 09/07/10 02:31, Ryan Wexler wrote:
> Thanks a lot for all the comments. The fact that both my windows box
> and the old linux box both show a massive performance improvement over
> the new linux box seems to point to hardware to me. I am not sure how
> to test the fsync issue, but i don't see
those measures.
Which is essentially what I was suggesting...
All of them are a lot of code to hack; although I'm not a postgresql
core developer, I am keen enough to invite you to send patches. :-)
Well, if you know C then you're already one step closer to being able to
change th
AMIR FRANCO D. JOVEN wrote:
Hi!
I'm new to PostgreSQL.
My current project uses PostgreSQL 7.3.4.
Upgrading your version of PostgreSQL to 8.1 will give you significant
performance benefits.
the problem is like this:
I have a table with 94 fields and a select with only one resultset in
onl
Craig A. James wrote:
For the third time today, our server has crashed, or frozen, actually
something in between. Normally there are about 30-50 connections
because of mod_perl processes that keep connections open. After the
crash, there are three processes remaining:
# ps -ef | grep postgr
explanations of why the planner does such a terrible job of
estimating the number of rows for this query, with the stats set so high.
Tests were also done with stats set to 100, and 1. The results are exactly the same.
Which I would have assumed.
Also I am interested in how functional indexes have statist
On Thu, 29 Jul 2004 03:08 am, Stephane Tessier wrote:
> Hi everyone,
>
> somebody can help me??? my boss want to migrate to
> ORACLE
>
> we have a BIG problem of performance,it's slow
> we use postgres 7.3 for php security application with approximately 4
> millions of inser
a damn cheap upgrade at around $350 and
> an hour of downtime, so I figure that it's worth it for us to give it a
> shot.
>
> Thanks,
>
> Jason
Russell Smith
the query-planner chooses wrong.
> NB: It's postgresql 7.4.3
>
Please post explain analyze of the query.
I would guess you are using now() in your query, which is not optimized perfectly
by the planner, so you end up with problems. But if you post explain analy
conversions, like
int8 to int4.
I'm not 100% sure which ones were added, and which were not, but the query appears to
cast everything correctly anyway.
> Can it be depend on the the type of restore (with COPY commands)?
It shouldn't, and VACUUM FULL ANALYZE will make the table as small as possible.
The row order may be different
on disk, but the planner won't know that, and it's a bad plan causing the problem.
> I have no idea.
>
> Thanks in advance!
> Reds
>
Regards
Russell Smith.
address those in a reply mail that would help everybody with your problem
Regards
Russell Smith
by the planner
> relatively cheaply. But as you can see, the first query uses indexes and the
> second one uses sequence scans, thereby taking much longer. I guess my
> question is - is there a better way to accomplish what I'm doing in SQL or am
> I going to have to dynamical
0) | not null
> > Indexes:
> > "scr_wcm_pk" PRIMARY KEY, btree (id_level, mat_id, barcode, well_index)
>
numeric is not optimized by PostgreSQL like it is by Oracle. You will get much better
performance by changing the numeric types to int, bigint, or smallint.
Tha
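A sketch of that change against the quoted `scr_wcm` table (column names are taken from the quoted index definition, the target types are assumptions about what the values fit in, and ALTER TABLE ... TYPE needs 8.0 or later; try it on a copy first):

```sql
-- Replace numeric key columns with plain integer types
ALTER TABLE scr_wcm
    ALTER COLUMN id_level   TYPE integer,
    ALTER COLUMN mat_id     TYPE bigint,
    ALTER COLUMN well_index TYPE smallint;
ANALYZE scr_wcm;  -- refresh statistics after the rewrite
```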
HashAggregate (cost=1209424.15..1209424.35 rows=80 width=12)
> (actual time=298803.273..298881.020 rows=16396 loops=1)
> -> Seq Scan on dst_port_day cur (cost=0.00..1206693.40
> rows=546150 width=12) (actual time=298299.508..298526.544 rows=48072 loops=1)
>
Filter: ((prefix)::text <> 'xsd'::text)
> -> Seq Scan on flow fl (cost=0.00..0.00 rows=1 width=4)
> Filter: (servicetype = 646)
>
Unique (cost=11.50..11.52 rows=2 width=131)
Unique (cost=10.81..10.83 rows=1 width=167)
The cost estimates are basically the same, 10ms for the first row.
Can you supply EXPLAIN ANALYZE output to see what it's actually doing?
Russell Smith
The planner is likely to make the wrong choice when the
stats are out by that factor.
Beware: a small "typing" change does not mean the queries are anything
alike.
Regards
Russell Smith.
had much success
with the method above without the queries being prepared. Others may be able
to offer advice about whether PREPARE will affect it.
Regards
Russell Smith
solution
for having the % at the start, but you can't win every way.
>
> rgds
> Antony Paul
>
> On Mon, 24 Jan 2005 20:58:54 +1100, Russell Smith <[EMAIL PROTECTED]> wrote:
> > On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:
> > > Hi,
> > > I have a
ql is actually executing the query.
Regards
Russell Smith.
'::text) AND
> (active = 'A'::bpchar))
> (2 rows)
>
>
>
> I get really slow response times when using the following select statement
> (About 20 seconds).
> maach=# explain select * from tst where tst_id = 639246;
Before 8.0, bigint wou
ERROR: syntax error at or near "limit" at character 71
select "limit" from limit_table WHERE "limit" < 50 LIMIT 2;
You need to quote the field name, and make sure the case is correct.
>
> Any help would be really great to solve the problem.
>
> postgres
Pepe Barbe wrote:
Hello,
I am having an issue on PostgreSQL 8.0.12. In the past we had
performance issues with the query planner for queries on some tables
where we knew we had indexes and it was doing a sequential scan, and for
this reason we issue "SET enable_seqscan = FALSE" for some queri
Jean-David Beyer wrote:
[snip]
2007-11-14 12:00:31 EST DEBUG: analyzing "public.vl_in"
2007-11-14 12:00:31 EST DEBUG: "vl_in": scanned 2001 of 2001 pages,
containing 183983 live rows and 52 dead rows; 3000 rows in sample, 183983
estimated total rows
2007-11-14 12:00:31 EST DEBUG: analyzing "pu
's my tiny point of view and I certainly don't have the resources to
direct what gets looked at for PostgreSQL.
Regards
Russell Smith
vacuum?
Regards
Russell Smith
idc danny wrote:
> Hi everybody,
>
> I know that this group deals with performance but is the only one on which
> I'm subscribed, so my apologize in advance for the question.
>
> I want to allow everybody in the world, all IP's, to connect to my server.
> How do I accomplish that? Definitely, it'
Pavel Stehule wrote:
> Hello
>
> 2008/9/1 David West <[EMAIL PROTECTED]>:
>
>> Thanks for your suggestion but the result is the same.
>>
>> Here is the explain analyse output from different queries.
>> Select * from my_table where A is null and B = '21' limit 15
>>
>> "Limit (cost=0.00..3.68 ro
Kees van Dieren wrote:
> Hi Folks,
>
> Thanks for your response.
>
> I have added the following index (suggested by other post):
>
> CREATE INDEX events_events_cleared_eventtype
> ON events_events
> USING btree
> (eventtype_id, cleared)
> WHERE cleared = false;
>
> Also with columns in reve
Kai Behncke wrote:
>
> But I would like to get it in a php-script, like
>
> $timerequest_result=pg_result($timerequest,0);
>
> (well, that does not work).
>
> I wonder: Is there another way to get the time a request needs?
> How do you handle this?
>
$time = microtime()
$result = pg_result($quer