I've done a lot of work with a bookkeeping system where we have such
redundancy built in. The auditors, however, need to be able to generate
lists of the financial transaction detail to support balances. These
reports are among the most demanding in the system. I shudder to think
how unacceptable...
[Kevin Grittner - Fri at 02:49:57PM -0500]
> If you add a column to the person table for "last_food_id" and triggers
> to maintain it when the food table is modified, voila! You have a
> simple and fast way to get the results you want.
Reminds me of the way the precursor software of our product...
With your current (apparently well-normalized) schema, I don't see how
you can get a better query plan than that. There may be something you
can do in terms of memory configuration to get it to execute somewhat
faster, but the only way to make it really fast is to de-normalize.
This is something...
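A minimal sketch of the de-normalization Kevin suggests above (a last_food_id
column on person, kept current by a trigger on food); the names and trigger
body here are illustrative, not taken from the thread:

ALTER TABLE person ADD COLUMN last_food_id integer;

CREATE OR REPLACE FUNCTION update_last_food() RETURNS trigger AS $$
BEGIN
    -- Assumes food.id only ever grows, so the newest insert wins.
    UPDATE person SET last_food_id = NEW.id WHERE id = NEW.p_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER food_last_food
AFTER INSERT ON food
FOR EACH ROW EXECUTE PROCEDURE update_last_food();

With that in place, the "most recent food per person" lookup becomes a plain
read of the 1000-row person table instead of an aggregate over 2.5 million
food rows.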
On Thu, Jun 09, 2005 at 18:26:09 -0700,
Junaili Lie <[EMAIL PROTECTED]> wrote:
> Hi Bruno,
> I followed your suggestion.
> The query plan shows that it uses the index (id, person_id). However,
> the execution time is still slow. I have to do ctrl-C to stop it.
> Maybe something is wrong with my postgresql config.
[Junaili Lie - Thu at 06:26:09PM -0700]
> Hi Bruno,
> I followed your suggestion.
> The query plan shows that it uses the index (id, person_id). However,
> the execution time is still slow. I have to do ctrl-C to stop it.
What is the estimated planner cost?
> Maybe something is wrong with my postgresql config.
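For reference, the planner's estimated cost appears in plain EXPLAIN output,
and EXPLAIN ANALYZE also runs the query and reports actual times (generic
commands, not output from this thread):

TEST1=# explain select f.p_id, max(f.id) from food f group by f.p_id;
TEST1=# explain analyze select f.p_id, max(f.id) from food f group by f.p_id;

The cost= figures in the top plan line are the planner's estimate; EXPLAIN
ANALYZE adds actual time figures, but note that it executes the query in full.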
Hi Kevin,
Thanks for the reply.
I tried that query. It is definitely faster, but not fast enough (it took
around 50 seconds to complete).
I have around 2.5 million rows in food and 1000 in person.
Here is the query plan:
QUERY PLAN
Hi Bruno,
I followed your suggestion.
The query plan shows that it uses the index (id, person_id). However,
the execution time is still slow. I have to do ctrl-C to stop it.
Maybe something is wrong with my postgresql config.
It's running Solaris on dual Opteron, 4GB.
I allocated around 128MB for so
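Purely as an illustration of the memory settings usually involved (the exact
name depends on the PostgreSQL version; sort_mem was renamed work_mem in 8.0),
one might check and adjust them like this, with values chosen only as examples:

SHOW shared_buffers;
SHOW sort_mem;          -- work_mem on 8.0 and later
SET sort_mem = 65536;   -- per-sort memory for this session, in KB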
This is a pattern I've seen many times. I call it a "best
choice" query -- you can easily match a row from one table against any
of a number of rows in another; the trick is to pick the one that
matters most. I've generally found that I want the query results to
show more than the column...
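One generic way to write such a "best choice" query in PostgreSQL, assuming
the person/food schema discussed in this thread, is DISTINCT ON, which keeps
the first row per group according to the ORDER BY (a sketch, not a query from
the thread):

SELECT DISTINCT ON (f.p_id) f.p_id, f.id
FROM food f
ORDER BY f.p_id, f.id DESC;

Any other columns of the winning row can simply be added to the select list,
which is what makes this form convenient when the report needs to show more
than the key columns.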
On Wed, Jun 08, 2005 at 15:48:27 -0700,
Junaili Lie <[EMAIL PROTECTED]> wrote:
> Hi,
> The suggested query below took forever when I tried it.
> In addition, as suggested by Tobias, I also tried to create an index on
> food(p_id, id), but still no luck (same query plan).
> Here is the explain:
> TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where
> (f.p_id = p.id) group by p...
How about
SELECT p.id AS p_id, f.max_id AS f_id
FROM
person as p
LEFT JOIN
(SELECT f.p_id, max(f.id) AS max_id
FROM food f
GROUP BY f.p_id) as f
ON p.id = f.p_id
Create an index on food (p_id, id)
This may not gain any performance, but it is worth a try. I don't have any
data similar...
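For the index suggestion above, a concrete statement would be something like
the following (the index name is arbitrary):

CREATE INDEX food_p_id_id_idx ON food (p_id, id);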
Hi,
The suggested query below took forever when I tried it.
In addition, as suggested by Tobias, I also tried to create an index on
food(p_id, id), but still no luck (same query plan).
Here is the explain:
TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where
(f.p_id = p.id) group by p...
[Junaili Lie - Wed at 12:34:32PM -0700]
> select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group
> by f.p_id will work.
> But I understand this is not the most efficient way. Is there another
> way to rewrite this query? (maybe one that involves order by desc
> limit 1)
eventually...
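The "order by desc limit 1" form mentioned above would look roughly like the
following correlated subquery, which can walk an index on food (p_id, id)
backwards and fetch just the newest row per person (a sketch under those
assumptions):

SELECT p.id,
       (SELECT f.id
          FROM food f
         WHERE f.p_id = p.id
         ORDER BY f.id DESC
         LIMIT 1) AS last_food_id
  FROM person p;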
Hi,
I have the following table:
person - primary key id, and some attributes
food - primary key id, foreign key p_id referencing table person.
The food table stores all the food that a person is eating. More recent
food is indicated by a higher food.id.
I need to find the most recent food for each person...
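Restated as DDL, the schema described above would be roughly (column lists
trimmed to what the thread mentions):

CREATE TABLE person (
    id integer PRIMARY KEY
    -- plus some attribute columns
);

CREATE TABLE food (
    id integer PRIMARY KEY,              -- higher id = more recent food
    p_id integer REFERENCES person (id)
    -- plus columns describing the food itself
);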