Bruce Momjian wrote:
> Robert Haas wrote:
> > On Wed, Aug 11, 2010 at 9:42 PM, Bruce Momjian wrote:
> > > This confused me. If we are assuming the data is in
> > > effective_cache_size, why are we adding sequential/random page cost to
> > > the query cost routines?
> >
> > See the comments for index_pages_fetched().
Robert Haas wrote:
> On Wed, Aug 11, 2010 at 9:42 PM, Bruce Momjian wrote:
> > This confused me. If we are assuming the data is in
> > effective_cache_size, why are we adding sequential/random page cost to
> > the query cost routines?
>
> See the comments for index_pages_fetched(). We basically assume that
> all data starts uncached.
On Wed, Aug 11, 2010 at 9:42 PM, Bruce Momjian wrote:
> This confused me. If we are assuming the data is in
> effective_cache_size, why are we adding sequential/random page cost to
> the query cost routines?
See the comments for index_pages_fetched(). We basically assume that
all data starts uncached.
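[Editor's note: the interaction described above can be poked at from psql; a minimal sketch, where the table and column names are invented for illustration:]

```sql
-- effective_cache_size only scales the planner's estimate of how many
-- repeated index page fetches are satisfied from cache (see
-- index_pages_fetched() in costsize.c); seq_page_cost/random_page_cost
-- still price the fetches themselves, since the model assumes data
-- starts uncached.
SET effective_cache_size = '4GB';
EXPLAIN SELECT * FROM example_table WHERE indexed_col = 42;
```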
Tom Lane wrote:
> Hannu Krosing writes:
> > Of course there are more variables than just *_page_cost, so if you nail
> > down any other one, you may end with less than 1 for both page costs.
>
> > I have always used seq_page_cost = 1 in my thinking and adjusted others
> > relative to it.
>
> Right, seq_page_cost = 1 is sort of
On Wed, 2010-08-04 at 22:03 +0300, Hannu Krosing wrote:
> On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:
> > On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
>
> > > regression=# select name, setting from pg_settings where name like
> > > '%cost';
> > > name | setting
On Wed, 2010-08-04 at 15:16 -0400, Greg Smith wrote:
> Hannu Krosing wrote:
> > There was ample space for keeping the indexes in linux cache (it has 1GB
> > cached currently) though the system may have decided to start writing it
> > to disk, so I suspect that most of the time was spent copying random
> > index pages back and forth between shared buffers and disk cache.
Greg Smith wrote:
> Was trying to demonstrate the general ability of pg_stat_bgwriter
> snapshots at points in time to directly measure the buffer
> activity Hannu was theorizing about, not necessarily show a useful
> benchmark of any sort with that.
Ah, OK. Sorry I didn't pick up on that; I
Hannu Krosing wrote:
> This seems to indicate about 1 ms for moving pages over
> user/system boundary. (Intel Core2 Duo T7500 @ 2.20GHz, Ubuntu
> 9.10, 4GB RAM)
Using Greg's test script on a box with two cores like this:
Intel(R) Pentium(R) D CPU 3.40GHz
Linux kgrittn-desktop 2.6.31-22-generic
Hannu Krosing wrote:
Do you mean "written to disk", or written out from shared_buffers to
disk cache ?
The latter turns into the former eventually, so both really. The kernel
will do some amount of write combining for you if you're lucky. But not
in all cases; it may decide to write something
Kevin Grittner wrote:
Assuming this is real, and not just "in the noise" -- what conclusions would you
draw from this?
Was trying to demonstrate the general ability of pg_stat_bgwriter
snapshots at points in time to directly measure the buffer activity
Hannu was theorizing about, not necessarily show a useful benchmark of
any sort with that.
Greg Smith wrote:
> What I do to quantify that is...well, the attached shows it better
> than I can describe; only works on 9.0 or later as it depends on a
> feature I added for this purpose there. It measures exactly how
> much buffer cache churn happened during a test, in this case
> creating
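[Editor's note: the snapshot-and-diff technique Greg describes can be sketched roughly as follows; column names are from the 9.0 pg_stat_bgwriter view, and the workload in between is whatever you want to measure:]

```sql
-- Snapshot the buffer counters before the test...
SELECT buffers_checkpoint, buffers_clean, buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;
-- ...run the workload under test, re-run the same query, and subtract
-- the two snapshots to get the buffer cache churn during the test.
```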
Hannu Krosing writes:
> There was ample space for keeping the indexes in linux cache (it has 1GB
> cached currently) though the system may have decided to start writing it
> to disk, so I suspect that most of the time was spent copying random
index pages back and forth between shared buffers and disk cache.
This time with attachment...
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com www.2ndQuadrant.us
fsync-stress.sh
Description: Bourne shell script
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
Hannu Krosing wrote:
There was ample space for keeping the indexes in linux cache (it has 1GB
cached currently) though the system may have decided to start writing it
to disk, so I suspect that most of the time was spent copying random
index pages back and forth between shared buffers and disk cache.
On Wed, 2010-08-04 at 21:41 +0300, Hannu Krosing wrote:
> On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
> > regression=# select name, setting from pg_settings where name like '%cost';
> >          name         | setting
> > ----------------------+---------
> >  cpu_index_tuple_cost | 0.005
>
On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
> Hannu Krosing writes:
> > Of course there are more variables than just *_page_cost, so if you nail
> > down any other one, you may end with less than 1 for both page costs.
>
> > I have always used seq_page_cost = 1 in my thinking and adjusted
Hannu Krosing writes:
> Of course there are more variables than just *_page_cost, so if you nail
> down any other one, you may end with less than 1 for both page costs.
> I have always used seq_page_cost = 1 in my thinking and adjusted others
> relative to it.
Right, seq_page_cost = 1 is sort of
On Wed, 2010-08-04 at 09:14 -0400, Robert Haas wrote:
> On Tue, Aug 3, 2010 at 3:03 AM, Hannu Krosing wrote:
> > In case of fully cached database it is closer to 1.
>
> In the case of a fully cached database I believe the correct answer
> begins with a decimal point.
The number 1 here was suggested
Peter Hussey wrote:
> I already had effective_cache_size set to 500MB.
That seems awfully small. You do realize that this setting does not
cause PostgreSQL to allocate any memory; it merely advises how much
disk space is likely to be cached. It should normally be set to the
sum of your shared_buffers setting and whatever your OS reports as cached.
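[Editor's note: for example, with shared_buffers = 2GB and the OS reporting roughly 6GB cached, a plausible setting would be (numbers invented for illustration):]

```sql
-- effective_cache_size is advisory only; it allocates no memory.
SET effective_cache_size = '8GB';  -- shared_buffers + OS cache, roughly
```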
On Tue, Aug 3, 2010 at 3:03 AM, Hannu Krosing wrote:
> In case of fully cached database it is closer to 1.
In the case of a fully cached database I believe the correct answer
begins with a decimal point.
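[Editor's note: in other words, for a fully cached database one might try something like the following; the values are illustrative, not a recommendation:]

```sql
-- When everything is in RAM, random and sequential page reads cost
-- about the same, and both are much cheaper than a real disk read.
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
```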
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Mon, 2010-08-02 at 14:23 -0700, Peter Hussey wrote:
> I already had effective_cache_size set to 500MB.
>
> I experimented with lowering random_page_cost to 3 then 2.
In case of fully cached database it is closer to 1.
> 2) Why is the setting of work_mem something left to the admin and/or
>
Peter Hussey writes:
> My questions are still
> 1) Does the planner have any component of cost calculations based on the
> size of work_mem,
Sure.
> and if so why do those calculations seem to have so
> little effect here?
Since you haven't provided sufficient information to let someone else
On Mon, Aug 2, 2010 at 5:23 PM, Peter Hussey wrote:
> I already had effective_cache_size set to 500MB.
>
> I experimented with lowering random_page_cost to 3 then 2. It made no
> difference in the choice of plan that I could see. In the explain analyze
> output the estimated costs of nested loop were in fact lowered, but so
> were the costs of the hash join.
I already had effective_cache_size set to 500MB.
I experimented with lowering random_page_cost to 3 then 2. It made no
difference in the choice of plan that I could see. In the explain analyze
output the estimated costs of nested loop were in fact lowered, but so were
the costs of the hash join.
Peter Hussey writes:
> Using the default of 1MB work_mem, the planner chooses a hash join plan :
> "Hash Left Join (cost=252641.82..11847353.87 rows=971572 width=111) (actual
> time=124196.670..280461.604 rows=968080 loops=1)"
> ...
> For the same default 1MB work_mem, a nested loop plan is better
Alvaro Herrera writes:
> Excerpts from Tom Lane's message of mar jul 27 20:05:02 -0400 2010:
>> Well, the issue you're hitting is that the executor is dividing the
>> query into batches to keep the size of the in-memory hash table below
>> work_mem. The planner should expect that and estimate the
Hello,
> the join column, lsid, is a poor choice for a join column as it is a
> long varchar value (avg length 101 characters) that only gets
> unique way out on the right hand side.
Would a join on subtring on the 'way out on the right hand side' (did you
mean 'rightmost characters' or 'only
Excerpts from Tom Lane's message of mar jul 27 20:05:02 -0400 2010:
> Peter Hussey writes:
> > 2) How is work_mem used by a query execution?
>
> Well, the issue you're hitting is that the executor is dividing the
> query into batches to keep the size of the in-memory hash table below
> work_mem.
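[Editor's note: the batching behavior is visible in EXPLAIN ANALYZE output on 9.0; a hypothetical illustration, with table and column names made up:]

```sql
SET work_mem = '1MB';
EXPLAIN ANALYZE
SELECT * FROM big_a a JOIN big_b b ON a.lsid = b.lsid;
-- The Hash node reports something like:
--   Buckets: 1024  Batches: 64  Memory Usage: ...kB
-- Batches > 1 means the build side did not fit in work_mem and was
-- split into work_mem-sized pieces spilled to temp files on disk.
```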
Peter Hussey writes:
> I have spent the last couple of weeks digging into a Postgres performance
> problem that ultimately boiled down to this: the planner was choosing to
> use hash joins on a set of join keys that were much larger than the
> configured work_mem.
What Postgres version is this,
Hi,
On Tue, Jul 27, 2010 at 04:08:16PM -0700, Peter Hussey wrote:
> Now for the questions:
> 1) If we tell the customer to set his work_mem value to 500MB or 1GB in
> postgres.config, what problems might they see? the documentation and the
> guidelines we received from Rupinder Singh in support
I have spent the last couple of weeks digging into a Postgres performance
problem that ultimately boiled down to this: the planner was choosing to
use hash joins on a set of join keys that were much larger than the
configured work_mem. We found we could make the performance much better by
either