I'm hoping that through compare/contrast we might help someone start
closer to their own best values
>>> Scott Carey <[EMAIL PROTECTED]> wrote:
> Tests with writes can trigger it earlier if combined with bad dirty_buffers
> settings.
We've never, ever modified the dirty_buffers settings from the defaults.
It seems that the slowness is caused by grouping by the column
exchrate numeric(13,8)
exchrate has different values in only a few rows; it usually has the value 0,
and in this sample query it is always 0.
I tried mapping the zeroes to NULL with
nullif(bilkaib.EXCHRATE, 0)
but this does not speed up the query; no idea why.
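For what it's worth, a self-contained sketch of the rewrite described above (a
toy table with made-up data, not the real firma2 schema; only the column names
are borrowed from the thread):

-- toy table: 100000 rows, exchrate is always 0 as in the sample query
CREATE TEMP TABLE bilkaib_demo (id int, exchrate numeric(13,8));
INSERT INTO bilkaib_demo SELECT g, 0 FROM generate_series(1, 100000) AS g;

-- grouping on the raw numeric column ...
EXPLAIN ANALYZE SELECT exchrate, count(*) FROM bilkaib_demo GROUP BY exchrate;

-- ... versus mapping the constant 0 to NULL first, as tried with nullif()
EXPLAIN ANALYZE SELECT nullif(exchrate, 0) AS exchrate, count(*)
FROM bilkaib_demo GROUP BY nullif(exchrate, 0);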
On Fri, Nov 28, 2008 at 12:51 PM, Scott Carey <[EMAIL PROTECTED]> wrote:
> I'm not sure that postgres allocates the whole work_mem each time, and in
> any event, the OS won't actually allocate RAM until the page is used.
> My experience is that oversized work_mem doesn't hurt until it is actually used.
It seems that the slowness is caused by grouping by the column
exchrate numeric(13,8)
If this column is excluded, the query takes 12 seconds;
if this column is present, the query takes 27 (!) seconds.
How can I fix this?
Andrus.
set search_path to firma2,public;
SET work_mem = 2097151;
explain analyze SELECT
The application should work with any server starting at 8.1, with any amount of
RAM (probably starting at 1 GB).
How can I find a safe value that does not cause an error in the SET work_mem command?
If I use 2 GB, this may cause an exception when running on a server with 1 GB of
RAM, where this setting may not be allowed.
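One way to keep a large work_mem from affecting the rest of the application (a
general pattern, not something prescribed in this thread) is to scope it to a
single transaction with SET LOCAL:

BEGIN;
SET LOCAL work_mem = 97151;   -- value in kB on 8.1; reverts at COMMIT/ROLLBACK
-- run the big GROUP BY query here
COMMIT;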
I'm not sure that postgres allocates the whole work_mem each time, and in any
event, the OS won't actually allocate RAM until the page is used.
My experience is that oversized work_mem doesn't hurt until it is actually
used. Else, my configuration with 1000 concurrent connections and …
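The reason oversized work_mem can eventually hurt is that it is a per-sort and
per-hash limit, not a per-server cap, so the theoretical worst case scales with
the number of concurrent operations. A rough, illustrative calculation, plus the
commands to inspect and change the session value:

-- illustrative worst case: 1000 connections, each running one 2 MB sort
--   1000 * 2 MB = ~2 GB of potential sort memory
SHOW work_mem;                -- current session value
SET work_mem = 2048;          -- kB on 8.1, i.e. 2 MB, for this session only
RESET work_mem;               -- back to the server default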
On Fri, Nov 28, 2008 at 10:58 AM, Andrus <[EMAIL PROTECTED]> wrote:
>
> SET work_mem = 2097151 (this is the max allowed value) or SET work_mem = 97151
> decreases the query time from 12 seconds to 9 seconds.
Setting work_mem so high that it allocates memory that isn't there
WILL slow down your queries, because …
Scott,
Thank you.
The query below is spending most of its time in the sort, or perhaps in the
complicated check condition before it.
The EXPLAIN output has an 8-second gap between the 2.8 seconds after the Hash
Left Join and the Sort; I'm guessing it's hidden in the sort.
You can get the planner to switch from …
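A quick way to see whether the grouping is handled by a Sort + GroupAggregate or
by an in-memory HashAggregate (a diagnostic sketch; it reuses the toy table from
the earlier example, since the real query is not shown in full here) is to
compare EXPLAIN ANALYZE plans and node timings at two work_mem settings:

SET work_mem = 64;        -- minimum allowed value, in kB
EXPLAIN ANALYZE SELECT exchrate, count(*) FROM bilkaib_demo GROUP BY exchrate;
SET work_mem = 262144;    -- 256 MB
EXPLAIN ANALYZE SELECT exchrate, count(*) FROM bilkaib_demo GROUP BY exchrate;
RESET work_mem;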
Tom Lane wrote:
INFO: vacuuming "pg_catalog.pg_largeobject"
INFO: index "pg_largeobject_loid_pn_index" now contains 11060658 row
versions in 230587 pages
DETAIL: 178683 index pages have been deleted, 80875 are currently reusable.
CPU 0.92s/0.10u sec elapsed 199.38 sec.
INFO: "pg_largeobject":
Grouping by CHAR columns takes an abnormally long time.
How can I speed it up?
Andrus.
8.1.4, cluster locale is en-us, db encoding is utf-8
set search_path to firma2,public;
explain analyze SELECT
CASE WHEN bilkaib.raha='EEK' THEN 0 ELSE bilkaib.id END,
bilkaib.DB,
CASE WHEN dbkonto.objekt1='+' THEN
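A self-contained toy that reproduces the shape of the problem (grouping by CHAR
columns); the table and data are made up, and only the column names echo the
thread:

CREATE TEMP TABLE bilkaib_char_demo (raha char(3), db char(10));
INSERT INTO bilkaib_char_demo
SELECT 'EEK', 'K' || (g % 500) FROM generate_series(1, 200000) AS g;

EXPLAIN ANALYZE
SELECT raha, db, count(*) FROM bilkaib_char_demo GROUP BY raha, db;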
Scott,
> You may also end up setting higher statistics targets on some other columns
> to fix other issues. You may want to set the value in the configuration
> file higher than the default 10 -- I'd recommend starting with 40 and
> re-analyzing the tables.
Thank you.
I set the rid.toode statistics target …
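For reference, the statements being described look roughly like this (the column
name rid.toode comes from the thread; 40 is the suggested starting value, and the
value Andrus actually used is not shown here):

ALTER TABLE rid ALTER COLUMN toode SET STATISTICS 40;  -- per-column override
ANALYZE rid;                                           -- re-collect statistics
-- cluster-wide default, set in postgresql.conf:
--   default_statistics_target = 40        # default is 10 on 8.1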
Tom Lane wrote:
"Vegard Bønes" <[EMAIL PROTECTED]> writes:
Running VACUUM VERBOSE pg_largeobject took quite some time. Here's the
output:
INFO: vacuuming "pg_catalog.pg_largeobject"
INFO: index "pg_largeobject_loid_pn_index" now contains 11060658 row
versions
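A quick complementary check of how much space the table and index shown above
actually occupy (relpages is counted in 8 kB blocks; reltuples is the planner's
row estimate):

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('pg_largeobject', 'pg_largeobject_loid_pn_index');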