Here's another, more drastic example... Here the order by / limit version
runs in less than 1/7000th the time of the MAX() version.
select max(item_id)
from events e, receipts r, receipt_items ri
where e.event_id=r.event_id and r.receipt_id=ri.receipt_id
Aggregate (cost=10850.84..10850.85 rows=1
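For comparison, the order by / limit form of that lookup would be along
these lines (a sketch only; the exact query wasn't preserved in this
excerpt, so the rewrite below is an assumption):

select item_id
from events e, receipts r, receipt_items ri
where e.event_id=r.event_id and r.receipt_id=ri.receipt_id
order by item_id desc
limit 1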
Ron wrote:
What file system are you using? Unlike BigDBMS, pg does not have its
own native one, so you have to choose the one that best suits your
needs. For update-heavy applications involving lots of small updates,
jfs and XFS should both be seriously considered.
Ron, thanks for your id
Here's the queries and explains... Granted, it's not a huge difference here,
but both timings are absolutely consistent. Using max(), this runs almost
15 queries/sec and "limit 1" runs at almost 40 queries/sec.
Is the difference in explain analyze expected behavior? (rows=168196 vs.
rows=1)
(T
Dave Cramer wrote:
On 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be
hurt by this more than PG. Here are the settings I've modified in
postgresql.conf:
As I said you need to set shared_buffers to at least 750MB
On Sun, 2007-01-07 at 20:26 -0600, Adam Rich wrote:
> I'm using 8.2 and using order by & limit is still faster than MAX()
> even though MAX() now seems to rewrite to an almost identical plan
> internally.
Gonna need you to back that up :) Can we get an explain analyze?
> Count(*) still seems to
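The evidence being asked for here is just the timed plan output; for
example (table and column names assumed from the queries earlier in the
thread):

explain analyze
select item_id from receipt_items order by item_id desc limit 1;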
"Adam Rich" <[EMAIL PROTECTED]> writes:
> I'm using 8.2 and using order by & limit is still faster than MAX()
> even though MAX() now seems to rewrite to an almost identical plan
> internally.
Care to quantify that? AFAICT any difference is within measurement
noise, at least for the case of separ
I'm using 8.2 and using order by & limit is still faster than MAX()
even though MAX() now seems to rewrite to an almost identical plan
internally.
Count(*) still seems to use a full table scan rather than an index scan.
Using one of our tables, MySQL/Oracle/MS-SQL all return instantly while
PG t
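When an exact count isn't required, a common workaround is to read the
planner's row estimate from the system catalog instead of scanning the
whole table; a sketch, assuming a table named receipt_items:

-- reltuples is the estimate maintained by VACUUM/ANALYZE;
-- it is approximate, not exact
select reltuples::bigint as estimated_rows
from pg_class
where relname = 'receipt_items';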
Craig A. James wrote:
The "idiom" to replace count() was
"select col from tbl order by col desc limit 1". It worked miracles for
my app.
Sorry, I meant to write, "the idiom to replace MAX()", not count()... MAX()
was the function that was killing me, 'tho count() also gave me problems.
Craig
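Worth noting: the order by / limit idiom only pays off when there is an
index the planner can scan backward from the high end; with col and tbl
standing in for the names quoted above:

create index tbl_col_idx on tbl (col);
select col from tbl order by col desc limit 1;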
I'd like to tune my postgres configuration for optimal
performance under typical load. Unfortunately, as I understand
things, that implies that I have to have a way to repeat the same
load each time I try out new settings, so that I can fairly compare.
It's difficult for me to drive the
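One way to get a repeatable load is pgbench (it ships in contrib), which
can replay a custom SQL script against the server; a minimal sketch,
where the file name, database name, and query are all assumptions:

-- bench.sql, run as: pgbench -n -f bench.sql -c 4 -t 1000 mydb
select max(item_id) from receipt_items;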
On 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be
hurt by this more than PG. Here are the settings I've modified
in postgresql.conf:
As I said you need to set shared_buffers to at least 750MB; this is the
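As a rough illustration of the sort of postgresql.conf settings under
discussion for a 3 GB box (example values only, not a recommendation):

shared_buffers = 750MB        # per the advice above
effective_cache_size = 2GB    # planner hint: OS cache plus shared_buffers
work_mem = 16MB               # per-sort/per-hash working memory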
Guy Rouillier wrote:
The application is fairly straightforward, but as you say, what is
working okay with BigDBMS isn't working as well under PG. I'm going to
try other configuration suggestions made by others before I attempt
logic changes. The core logic is unchangeable; millions of rows of