Hi Scott,
I attached the query plans to this email. The top one is the first
run after I restarted my machine, and the bottom one is the second
run.
I am using PostgreSQL 8.3 on Solaris 10.
Cheers
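(A minimal sketch of one way to capture a plan as plain text so it
survives email formatting; this assumes a psql session, "status" is the
table mentioned later in the thread, and the query itself is a
placeholder:)

  -- send query output to a file instead of the terminal
  \o plan.txt
  EXPLAIN ANALYZE SELECT count(*) FROM status;  -- placeholder query
  \o
  -- plan.txt can then be attached to the email as-is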
On Thu, Aug 6, 2009 at 4:15 PM, Scott Marlowe wrote:
> On Wed, Aug 5, 2009 at 11:21 PM, wrote:
>> Sorry, posting again.
>
> Nope, still mangled. Can you attach it?
Sorry, posting again.
QUERY PLAN
---
Sort (cost=390162.53..390162.54 rows=3 width=567) (actual
time=105726.803..105 7
Is this alright?
QUERY PLAN
---
Sort (cost=390162.53..390162.54 rows=3 width=567) (actual
time=106045.453..106 0
Could you possibly attach that in plain text format? Your email
client seems to have eaten any text formatting / indentation.
Hi Scott,
Yes, I did that. And you are right: because I restarted my machine, the
Postgres cache is empty, and I think PostgreSQL is reading all 1.8GB of
data back into the cache when it does a seq scan on the status table.
On Wed, Aug 5, 2009 at 8:50 PM, Ip Wing Kin John wrote:
> I have a database (699221). It contains 1.8GB of data (707710). I am
> running a complex query, which requires loading an 80MB index (732287).
>
> I restarted PostgreSQL so the cache is empty and it has to read the
> table and index from disk.
I have a database (699221). It contains 1.8GB of data (707710). I am
running a complex query, which requires loading an 80MB index (732287).
I restarted PostgreSQL so the cache is empty and it has to read the
table and index from disk, which I understand is an expensive process.
But what I don't understand
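(As an aside, a minimal sketch of how those sizes could be confirmed
from SQL; pg_relation_size and pg_size_pretty both exist in 8.3, and
the index name here is a placeholder:)

  -- heap size of the table being scanned
  SELECT pg_size_pretty(pg_relation_size('status'));
  -- size of the index the query loads (name is hypothetical)
  SELECT pg_size_pretty(pg_relation_size('status_some_idx'));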
"Subbiah Stalin-XCGF84" wrote:
> We have found the problem.
Great news!
> Apparently a query doing a count on a 45-million-row table was run
> prior to the episode of the slow query. That definitely pushed the
> cached data out of memory.
Yeah, that would completely explain your symptoms.
> I
On 8/5/09 12:16 PM, "Subbiah Stalin-XCGF84" wrote:
> We have found the problem. Apparently a query doing a count on a
> 45-million-row table was run prior to the episode of the slow query.
> That definitely pushed the cached data out of memory. Is there a way
> to assign a portion of memory for recycling
We have found the problem. Apparently a query doing a count on a
45-million-row table was run prior to the episode of the slow query.
That definitely pushed the cached data out of memory. Is there a way to
assign a portion of memory for recycling purposes, like in Oracle, so
the cached data doesn't get affected?
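(PostgreSQL has no user-assignable buffer pools like Oracle's recycle
pool; the closest knobs are the global cache settings. A minimal
postgresql.conf sketch, where the values are illustrative assumptions
rather than recommendations for this machine:)

  shared_buffers = 2GB          # PostgreSQL's own buffer cache
  effective_cache_size = 6GB    # planner's estimate of OS cache + shared_buffers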
On 8/5/09 7:12 AM, "Merlin Moncure" wrote:
> On Tue, Aug 4, 2009 at 4:40 PM, Tom Lane wrote:
>> Scott Carey writes:
>>> There are a handful of other compression algorithms very similar to LZO in
>>> performance / compression level under various licenses.
>>> LZO is just the best known and most widely used.
On Tue, Aug 4, 2009 at 4:40 PM, Tom Lane wrote:
> Scott Carey writes:
>> There are a handful of other compression algorithms very similar to LZO in
>> performance / compression level under various licenses.
>> LZO is just the best known and most widely used.
>
> And after we get done with the licensing
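(For context, the compression under discussion is the built-in pglz
used for TOAST. Whether a given column gets compressed at all can
already be controlled per column; a minimal sketch, with a hypothetical
table and column:)

  -- store out-of-line but uncompressed
  ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTERNAL;
  -- default behaviour: compress, then store out-of-line if still too big
  ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTENDED;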
Matthew Wakeling writes:
> It is certainly doing a sequential scan. So are you saying that it will
> start a sequential scan from a different part of the table each time, even
> in the absence of other simultaneous sequential scans?
Yeah, that's the syncscan logic biting you. You can turn it off
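(A minimal sketch of turning it off; synchronize_seqscans is the GUC
that controls this behaviour in 8.3, and it can be set per session:)

  -- make repeated sequential scans start at the beginning of the
  -- table again, e.g. for repeatable benchmark timings
  SET synchronize_seqscans = off;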
On Wed, 5 Aug 2009, Robert Haas wrote:
> On Wed, Aug 5, 2009 at 6:20 AM, Matthew Wakeling wrote:
>> It is certainly doing a sequential scan. So are you saying that it will
>> start a sequential scan from a different part of the table each time, even
>> in the absence of other simultaneous sequential scans?
On Wed, Aug 5, 2009 at 6:20 AM, Matthew Wakeling wrote:
> It is certainly doing a sequential scan. So are you saying that it will
> start a sequential scan from a different part of the table each time, even
> in the absence of other simultaneous sequential scans? Looks like I'm going
> to have to r
Kees van Dieren wrote:
> Hi Folks,
>
> Thanks for your response.
>
> I have added the following index (suggested by other post):
>
> CREATE INDEX events_events_cleared_eventtype
> ON events_events
> USING btree
> (eventtype_id, cleared)
> WHERE cleared = false;
>
> Also with columns in reve
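(For reference, a partial index like that can only be used by queries
whose WHERE clause implies its predicate; a minimal sketch of a
matching query, with a placeholder id value:)

  -- satisfies the index predicate (cleared = false), so the
  -- partial index on (eventtype_id, cleared) is usable
  SELECT count(*)
  FROM events_events
  WHERE eventtype_id = 42
    AND cleared = false;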
On Wed, 5 Aug 2009, Greg Stark wrote:
> On Tue, Aug 4, 2009 at 11:56 PM, Robert Haas wrote:
>> Beats me. It looks like the first few queries are pulling stuff into
>> cache, and then after that it settles down, but I'm not sure why it
>> takes 5 repetitions to do that. Is the plan changing?
>
> Yeah, we're