On Fri, Jun 19, 2009 at 2:05 PM, Brian Cox wrote:
> David Rees [dree...@gmail.com] wrote:
>>
>> Along those lines, couldn't you just have the DB do the work?
>>
>> select max(ts_id), min(ts_id) from ... where ts_interval_start_time >=
>> ... and ...
>>
>> Then you don't have to transfer 500k ids across the network...
David Rees [dree...@gmail.com] wrote:
Along those lines, couldn't you just have the DB do the work?
select max(ts_id), min(ts_id) from ... where ts_interval_start_time >=
... and ...
Then you don't have to transfer 500k ids across the network...
I guess you didn't read the entire thread: I star
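A minimal sketch of the single-query form David suggests; the table name and timestamp literals are hypothetical, since the thread elides them:

```sql
-- Hypothetical table name and literals; the thread does not give them.
-- One round trip returns both endpoints instead of ~500k ids.
SELECT min(ts_id) AS first_ts_id,
       max(ts_id) AS last_ts_id
FROM   ts_stats
WHERE  ts_interval_start_time >= '2009-06-19 13:00'
AND    ts_interval_start_time <  '2009-06-19 14:00';
```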
On Fri, Jun 19, 2009 at 1:05 PM, Brian Cox wrote:
> Thanks to all for the analysis and suggestions. Since the number of rows in
> an hour < ~500,000, brute force looks to be a fast solution:
>
> select ts_id from ... where ts_interval_start_time >= ... and ...
>
> This query runs very fast as does
Tom Lane [...@sss.pgh.pa.us] wrote:
Some experimentation suggests that it might help to provide a 2-column
index on (ts_id, ts_interval_start_time). This is still going to be
scanned in order by ts_id, but it will be possible to check the
ts_interval_start_time condition in the index, eliminating
> Like the other poster said, we likely don't have enough to tell you
> what's going on, but from what you've said here it sounds like you're
> mostly just CPU bound. Assuming you're reading the output of vmstat
> and top and other tools like that.
Thanks. I used 'sadc' from the sysstat RPM (par
On Fri, Jun 19, 2009 at 9:59 AM, Alan McKay wrote:
> Hey folks,
>
> I'm new to all this stuff, and am sitting here with kSar looking at
> some graphed results of some load tests we did, trying to figure
> things out :-)
>
> We got some unsatisfactory results in stressing our system, and now I
> have to divine where the bottleneck is.
Alan McKay wrote:
Hey folks,
We did 4 tests, upping the load each time. The 3rd and 4th ones have
all 8 cores pegged at about 95%. Yikes!
In the first test the processor running queue spikes at 7 and maybe
averages 4 or 5
In the last test it spikes at 33 with an average maybe 25.
Looks to me
Hey folks,
I'm new to all this stuff, and am sitting here with kSar looking at
some graphed results of some load tests we did, trying to figure
things out :-)
We got some unsatisfactory results in stressing our system, and now I
have to divine where the bottleneck is.
We did 4 tests, upping the load each time.
On Fri, Jun 19, 2009 at 3:26 PM, Tom Lane wrote:
>
> That's the problem then. Notice what the query plan is doing: it's
> scanning the table in order by ts_id, looking for the first row that
> falls within the ts_interval_start_time range. Evidently this
> particular range is associated with small
Kenneth Marshall writes:
> Looking at the XLogInsert() from 8.3 and 8.4, the 8.4
> version includes a call to RecoveryInProgress() at
> the top as well as a call to TRACE_POSTGRESQL_XLOG_INSERT().
> Could either of those have caused a context switch or
> cache flush resulting in worse performance?
Brian Cox writes:
> Kevin Grittner [kevin.gritt...@wicourts.gov] wrote:
>> Is there any correlation between ts_id and ts_interval_start_time?
> only vaguely: increasing ts_interval_start_time implies increasing ts_id
> but there may be many rows (100,000's) with the same ts_interval_start_time
> -Original Message-
> From: Brian Cox
> Subject: [PERFORM] select max() much slower than select min()
>
> seems like max() shouldn't take any longer than min() and
> certainly not 10 times as long. Any ideas on how to determine
> the max more quickly?
That is odd. It seems like max
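For context on Tom's later diagnosis in this thread: PostgreSQL plans min()/max() over an indexed column roughly as LIMIT 1 index scans, forward for min() and backward for max(). A sketch with hypothetical names and literals:

```sql
-- Roughly equivalent forms the planner uses
-- (table name and literals are hypothetical):
SELECT ts_id FROM ts_stats
WHERE ts_interval_start_time >= '2009-06-19 13:00'
ORDER BY ts_id ASC LIMIT 1;   -- min(): forward index scan

SELECT ts_id FROM ts_stats
WHERE ts_interval_start_time >= '2009-06-19 13:00'
ORDER BY ts_id DESC LIMIT 1;  -- max(): backward index scan
```

Whichever end of the index lies far from the rows satisfying the WHERE clause has to step over every non-matching ts_id before its first hit, so one of the pair can be much slower than the other.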
Hi,
Looking at the XLogInsert() from 8.3 and 8.4, the 8.4
version includes a call to RecoveryInProgress() at
the top as well as a call to TRACE_POSTGRESQL_XLOG_INSERT().
Could either of those have caused a context switch or
cache flush resulting in worse performance?
Cheers,
Ken