PFC <[EMAIL PROTECTED]> writes:
>> For example, let's add another filter to get all the shipments with
>> release code '5' that are 7 days old or newer.
>>
>> ss.date >= current_date - 7
> It's the order by + limit which makes the query behave badly, and which
> forces use of a kludge
SELECT * FROM shipment_status WHERE release_code_id = constant ORDER BY
release_code_id DESC, date DESC LIMIT 100;
I have done this in other queries where sorting by both release code and
date were important. You are right, it is very fast and I do have this
index in play. However, most of t
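For reference, the two-column index that serves that ORDER BY would look something like this (the index name is my own invention; the thread doesn't name it):

```sql
-- Hypothetical index name. A btree on (release_code_id, date) lets the
-- planner scan the index backwards in sorted order and stop after 100 rows.
CREATE INDEX shipment_status_code_date_idx
    ON shipment_status (release_code_id, date);
```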
Thanks again for your response. I'll try to clarify some of the metrics; it
took me a few days to figure out what the best join order would be.
By running some count queries on the production database, I noticed there
were only 8 rows in release_code. The filtered column is unique, so that
means the filter matches at most one of those rows.
So are you suggesting as a general rule then that sub-queries are the
way to force a specific join order in postgres? If that is the case, I
will do this from now on.
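For illustration (my own sketch, not from the thread), one common idiom for pinning part of the plan is a subquery with OFFSET 0, which PostgreSQL does not flatten into the outer join search:

```sql
select s.*, ss.*
from shipment s
join (
    -- OFFSET 0 is a no-op "optimization fence": the planner will not
    -- pull this subquery up, so the release_code filter runs first.
    select ss.*
    from shipment_status ss, release_code r
    where ss.release_code_id = r.id
      and r.filtered_column = '5'
    offset 0
) ss on s.current_status_id = ss.id
order by ss.date desc
limit 100;
```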
I'll try to explain a bit better...
Here's your original query:
select s.*, ss.*
from shipment s, shipment_status
Well, postgres does what you asked. It will be slow, because you have a
full table join. LIMIT does not change this because the rows have to be
sorted first.
I am aware that limit doesn't really affect the execution time all that
much. It does speed up the ORM though and keeps the result set at a
manageable size.
Sent: 5:04 PM
Subject: Re: [PERFORM] Performance problem with semi-large tables
You don't mention if you have run VACUUM or VACUUM ANALYZE lately. That's
generally one of the first things that folks will suggest. If you have a
lot of updates then VACUUM will clean up dead tuples.
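For instance, using the table names from this thread:

```sql
-- VACUUM ANALYZE reclaims dead tuples and refreshes planner statistics.
VACUUM ANALYZE shipment;
VACUUM ANALYZE shipment_status;
```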
select s.*, ss.*
from shipment s, shipment_status ss, release_code r
where s.current_status_id = ss.id
and ss.release_code_id = r.id
and r.filtered_column = '5'
order by ss.date desc
limit 100;
Looking at the production data, release_code is just a very small table of
8 rows, hence the
Sent: Wednesday, January 26, 2005 9:17 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Performance problem with semi-large tables
Ken,
Actually, your problem isn't that generic, and might be better solved by
dissecting an EXPLAIN ANALYZE.
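That would mean running something like this against the query from the thread and posting the output:

```sql
EXPLAIN ANALYZE
select s.*, ss.*
from shipment s, shipment_status ss, release_code r
where s.current_status_id = ss.id
  and ss.release_code_id = r.id
  and r.filtered_column = '5'
order by ss.date desc
limit 100;
```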
> 1. Should I just beg to change the requirements so that I can make
> more specific queries and more screens to access those?
This is always good.
> 2. Can you
> recommend way
Hi everyone.
I'm new to this forum and was wondering if anyone would be kind enough to
help me out with a pretty severe performance issue. I believe the problem
to be rather generic, so I'll put it in generic terms. Since I'm at home
and not at work (but this is really bugging me), I can't