Re: [PERFORM] Performance problems deleting data

2008-03-04 Thread Rafael Martinez
Tom Lane wrote:
> Rafael Martinez [EMAIL PROTECTED] writes:
>
>> Any ideas why it is taking 2462558.813 ms to finish when the total time
>> for the deletion is 2.546 ms + 3.422 ms + 0.603 ms?


Hei Tom, I got this information from my colleague:


> Is the problem repeatable?

Repeatable in the sense that, about 30+ times every day, the deletion of
a row takes more than 100 seconds.  I have not found a way to provoke
it, though.

> Is the delay consistent?

No.  I frequently see everything from just over the 8-second
log_min_duration_statement threshold up to about 4900 seconds.  As for
the distribution, about half of the 30+ cases take more than 500 seconds
to complete; the rest (obviously) take between 100 and 500 seconds.

> What do you see in pg_locks while it's delaying?

   locktype    | database | relation | page | tuple | transactionid | classid | objid | objsubid | transaction |  pid  |       mode       | granted
---------------+----------+----------+------+-------+---------------+---------+-------+----------+-------------+-------+------------------+---------
 relation      |    16393 |    16784 |      |       |               |         |       |          |    82179843 | 19890 | AccessShareLock  | t
 relation      |    16393 |    16784 |      |       |               |         |       |          |    82179843 | 19890 | RowExclusiveLock | t
 relation      |    16393 |    17176 |      |       |               |         |       |          |    82179843 | 19890 | RowExclusiveLock | t
 relation      |    16393 |    16794 |      |       |               |         |       |          |    82180131 | 19907 | AccessShareLock  | t
 relation      |    16393 |    16794 |      |       |               |         |       |          |    82180131 | 19907 | RowExclusiveLock | t
 relation      |    16393 |    16977 |      |       |               |         |       |          |    82179843 | 19890 | AccessShareLock  | t
 relation      |    16393 |    16977 |      |       |               |         |       |          |    82179843 | 19890 | RowExclusiveLock | t
 relation      |    16393 |    16800 |      |       |               |         |       |          |    82179669 | 19906 | AccessShareLock  | t
 relation      |    16393 |    16800 |      |       |               |         |       |          |    82179669 | 19906 | RowExclusiveLock | t
 relation      |    16393 |    17174 |      |       |               |         |       |          |    82179843 | 19890 | RowExclusiveLock | t
 transactionid |          |          |      |       |      80430155 |         |       |          |    80430155 | 29569 | ExclusiveLock    | t
 relation      |    16393 |    17164 |      |       |               |         |       |          |    82179843 | 19890 | AccessShareLock  | t
 relation      |    16393 |    16816 |      |       |               |         |       |          |    82179669 | 19906 | AccessShareLock  | t
 relation      |    16393 |    16816 |      |       |               |         |       |          |    82179669 | 19906 | RowExclusiveLock | t
 relation      |    16393 |    16812 |      |       |               |         |       |          |    82179669 | 19906 | AccessShareLock  | t
 relation      |    16393 |    16812 |      |       |               |         |       |          |    82179669 | 19906 | RowExclusiveLock | t
 relation      |    16393 |    17174 |      |       |               |         |       |          |    82180131 | 19907 | RowExclusiveLock | t
 relation      |    16393 |    16977 |      |       |               |         |       |          |    82180131 | 19907 | AccessShareLock  | t
 relation      |    16393 |    16977 |      |       |               |         |       |          |    82180131 | 19907 | RowExclusiveLock | t
 relation      |    16393 |    16784 |      |       |               |         |       |          |    82180131 | 19907 | AccessShareLock  | t
 relation      |    16393 |    16784 |      |       |               |         |       |          |    82180131 | 19907 | RowExclusiveLock | t
 relation      |    16393 |    16766 |      |       |               |         |       |          |    82179843 | 19890 | AccessShareLock  | t
 relation      |    16393 |    16766 |      |       |               |         |       |          |    82179843 | 19890 | RowExclusiveLock | t
 relation      |    16393 |    16977 |      |       |               |         |       |          |    82179669 | 19906 | AccessShareLock  | t
 relation      |    16393 |    16977 |      |       |               |         |       |          |    82179669 | 19906 | RowExclusiveLock | t
 relation      |    16393 |    17164 |      |       |               |         |       |          |    82179669 | 19906 | AccessShareLock  | t
 relation      |    16393 |    16766 |      |       |               |         |       |          |    82180131 | 19907 | AccessShareLock  | t
 relation      |    16393 |    16766 |      |       |               |         |       |          |    82180131 | 19907 | RowExclusiveLock | t
 relation      |    16393 |    10342 |
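Worth noting: every entry in the pg_locks output above shows granted = t,
i.e. nothing was actually waiting at that instant. A quick way to spot
real lock waits (a generic pg_locks query, not something from the thread):

```sql
-- List only lock requests that have not been granted yet;
-- an empty result means no session is currently blocked on a lock.
SELECT locktype, relation, transactionid, pid, mode
FROM pg_locks
WHERE NOT granted;
```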

Re: [PERFORM] Performance tuning on FreeBSD

2008-03-04 Thread Matthew

On Tue, 4 Mar 2008, Ivan Voras wrote:

> I'm curious about the math behind this - is ~4000 burst or sustained
> rate? For common BBU cache sizes (256M, 512M), filling that amount with
> data is pretty trivial. When the cache is full, new data can enter the
> cache only at a rate at which old data is evacuated from the cache (to
> the drive), which is at normal, uncached disk drive speeds.


Should be sustained rate. The reason is if you have no BBU cache, then 
each transaction needs to wait for the disc to rotate around to the bit 
where you want to write, even though each transaction is going to be 
writing in approximately the same place each time. However, with a BBU 
cache, the system no longer needs to wait for the disc to rotate, and the 
writes can be made from the cache to the disc in large groups of 
sequential writes, which is much faster. Several transactions' worth can
be written on each rotation instead of just one.
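As a rough sanity check of that rotation math, assuming a 10,000 RPM
drive (my assumption, not a figure from the thread), with one commit per
rotation:

```sql
-- 10,000 rotations per minute spread over 60 seconds:
SELECT 10000 / 60.0 AS commits_per_second_ceiling;  -- roughly 167 without a BBU cache
```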


Matthew

--
People who love sausages, respect the law, and work with IT standards 
shouldn't watch any of them being made.  -- Peter Gutmann


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance


[PERFORM] Ramifications of turning off Nested Loops for slow queries

2008-03-04 Thread Chris Kratz

Hello Everyone,

I had posted an issue previously that we've been unable to resolve. An
early mis-estimation in one or more subqueries causes the remainder of
the query to choose nested loops instead of a more efficient method, and
it runs very slowly (CPU-bound). I don't think there is any way to hint
the planner away from what it's doing, so we are starting to think about
turning off nested loops entirely.


Here is the history so far:

http://archives.postgresql.org/pgsql-performance/2008-02/msg00205.php

At the suggestion of the list, we upgraded to 8.2.6 and are still  
experiencing the same problem.  I'm now installing 8.3 on my  
workstation  to see if it chooses a better plan, but it will take some  
time to get it compiled, a db loaded, etc.


We have a number of very long running reports that will run in seconds  
if nested loops are turned off.  The other alternative we are  
exploring is programmatically turning off nested loops just for the  
problematic reports.  But with the speedups we are seeing, others are  
getting gun shy about having them on at all.


So, I've now been asked to ping the list as to whether turning off  
nested loops system wide is a bad idea, and why or why not.


Any other thoughts or suggestions?

Thanks,

-Chris



Re: [PERFORM] How to allocate 8 disks

2008-03-04 Thread Shane Ambler

Ivan Voras wrote:
> Joshua D. Drake wrote:
>> This scares me... You lose WAL you are a goner. Combine your OS and
>> WAL into a RAID 1.
>
> Can someone elaborate on this? From the WAL concept and documentation at
> http://www.postgresql.org/docs/8.3/interactive/wal-intro.html I'd say
> the only data that should be lost are the transactions currently in the
> log but not yet transferred to permanent storage (database files proper).



The log records what changes are made to your data files before the data
files are changed (and it gets flushed to disk before the data files are
changed).

In the event of power loss right in the middle of the data files being
updated for a transaction, when power is restored, how do we know what
changes were made to which data files, and which changes are incomplete?

Without the log files there is no way to be sure your data files are not
full of half-done transactions.

Chances are that 90% of the time everything is fine, but without the log
files how do you check that your data files are as they should be? (Or
do you expect to restore from backup after any power outage?)

Keeping the WAL on a RAID 1 gives you a level of redundancy to get you
past hardware failures that happen at the wrong time (as they all do).





--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz



Re: [PERFORM] Performance tuning on FreeBSD

2008-03-04 Thread Greg Smith

On Tue, 4 Mar 2008, Ivan Voras wrote:


> I'm curious about the math behind this - is ~4000 burst or sustained
> rate?


Average, which is not quite burst or sustained.  No math behind it, just 
looking at a few samples of pgbench data on similar hardware.  A system 
like this one is profiled at 
http://www.kaltenbrunner.cc/blog/index.php?/archives/21-8.3-vs.-8.2-a-simple-benchmark.html 
for example.


> For common BBU cache sizes (256M, 512M), filling that amount with data
> is pretty trivial.


I don't have any good numbers handy, but I think the burst is around
6000; you only get that for a few seconds before all the caches fill and
the rate drops considerably.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] Ramifications of turning off Nested Loops for slow queries

2008-03-04 Thread Kevin Grittner
> On Tue, Mar 4, 2008 at 8:42 AM, in message [EMAIL PROTECTED],
> Chris Kratz [EMAIL PROTECTED] wrote:
>> So, I've now been asked to ping the list as to whether turning off
>> nested loops system wide is a bad idea, and why or why not.

In our environment, the fastest plan for a lot of queries involves
nested loops.  Of course, it's possible that these never provide the
fastest plan in your environment, but that seems very unlikely --
more likely you're just not noticing the queries where they're doing fine.
 
>> Any other thoughts or suggestions?
 
Make sure your effective_cache_size is properly configured.
 
Increase random_page_cost and/or decrease seq_page_cost.
You can play with the cost settings on a connection, using EXPLAIN
on the query, to see what plan you get with each configuration
before putting it into the postgresql.conf file.
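Kevin's per-session experiment might look like this sketch (the query
itself is a placeholder):

```sql
-- These SETs affect only the current connection; postgresql.conf is untouched.
SET random_page_cost = 4;
SET seq_page_cost = 0.5;
EXPLAIN ANALYZE SELECT ...;   -- run the problematic report here

-- Return this session to the server defaults.
RESET random_page_cost;
RESET seq_page_cost;
```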
 
-Kevin
 





Re: [PERFORM] Ramifications of turning off Nested Loops for slow queries

2008-03-04 Thread Tom Lane
Kevin Grittner [EMAIL PROTECTED] writes:
>> On Tue, Mar 4, 2008 at 8:42 AM, in message [EMAIL PROTECTED],
>> Chris Kratz [EMAIL PROTECTED] wrote:
>>> So, I've now been asked to ping the list as to whether turning off
>>> nested loops system wide is a bad idea, and why or why not.
>
> In our environment, the fastest plan for a lot of queries involves
> nested loops.  Of course, it's possible that these never provide the
> fastest plan in your environment, but it seems very unlikely --
> you're just not noticing the queries where it's doing fine.

Yeah, I seem to recall similar queries from other people who were
considering the opposite, ie disabling the other join types :-(

The rule of thumb is that nestloop with an inner indexscan will beat
anything else for pulling a few rows out of a large table.  But on
the other hand it loses big for selecting lots of rows.  I don't think
that a global disable in either direction would be a smart move, unless
you run only a very small number of query types and have checked them
all.

regards, tom lane



Re: [PERFORM] Ramifications of turning off Nested Loops for slow queries

2008-03-04 Thread Chris Kratz
On 3/4/08, Kevin Grittner [EMAIL PROTECTED] wrote:
> Make sure your effective_cache_size is properly configured.
>
> Increase random_page_cost and/or decrease seq_page_cost.
> You can play with the cost settings on a connection, using EXPLAIN
> on the query, to see what plan you get with each configuration
> before putting it into the postgresql.conf file.

That was a good idea.  I hadn't tried playing with those settings in a
session.  This is a 8G box, and we've dedicated half of that (4G) to the
file system cache.  So, 4G is what effective_cache_size is set to.  Our
seq_page_cost is set to 1 and our random_page_cost is set to 1.75 in the
postgresql.conf.

In testing this one particular slow query in a session, I changed these
settings, alternating in increments of 0.25: random_page_cost up to 4
and seq_page_cost down to 0.25.  This made perhaps a second of
difference, but at the end we were back to the 37s.  Doing SET
enable_nestloop = off in the session reduced the runtime to 1.2s with
the other settings back at our normal day-to-day values.

So, for now I think we are going to have to modify the code to prepend the
problematic queries with this setting and hope the estimator is able to
better estimate this particular query in 8.3.
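Prepending the setting to just the problematic reports could be as
simple as this sketch (the query is a placeholder):

```sql
SET enable_nestloop = off;   -- discourage nested loops for what follows
SELECT ...;                  -- the problematic report
SET enable_nestloop = on;    -- re-enable for the rest of the session
```

Inside a transaction, SET LOCAL enable_nestloop = off reverts
automatically at COMMIT/ROLLBACK, which avoids forgetting the reset.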

Thanks for the suggestions,

-Chris


Re: [PERFORM] Ramifications of turning off Nested Loops for slow queries

2008-03-04 Thread Chris Kratz
On 3/4/08, Tom Lane [EMAIL PROTECTED] wrote:
> The rule of thumb is that nestloop with an inner indexscan will beat
> anything else for pulling a few rows out of a large table.  But on
> the other hand it loses big for selecting lots of rows.  I don't think
> that a global disable in either direction would be a smart move, unless
> you run only a very small number of query types and have checked them
> all.


So, if we can't find another way to solve the problem, our best bet is
probably to turn off nested loops for just the particularly bad queries
by prepending them with SET enable_nestloop = off, while leaving nested
loops enabled for the rest of the system?

Do you think it's worth testing on 8.3 to see if the estimator is able to
make a better estimate?

-Chris


[PERFORM] Optimisation help

2008-03-04 Thread dforums





Hello,

We have a quad Xeon server with 8 GB of RAM and a 750 GB SATA II drive,
running a PostgreSQL database of 10 GB.

Every 2 minutes I run several jobs that select, insert, and update
thousands of rows in a table. It takes a lot of time (0.3300 ms per
line) just to check whether a 15-character string is present and to
decide whether to update it under a few constraints.

I suppose the main problem comes from the database server settings.

These are my settings:


max_connections = 256
shared_buffers = 1500                   # min 16 or max_connections*2, 8KB each
temp_buffers = 500                      # min 100, 8KB each
max_prepared_transactions = 100

work_mem = 22000                        # min 64, size in KB
maintenance_work_mem = 50               # min 1024, size in KB
max_stack_depth = 8192

max_fsm_pages = 10                      # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 5000

vacuum_cost_delay = 50                  # 0-1000 milliseconds
vacuum_cost_page_hit = 1000             # 0-10000 credits
vacuum_cost_page_miss = 1000            # 0-10000 credits
vacuum_cost_page_dirty = 120            # 0-10000 credits
vacuum_cost_limit = 2000                # 0-10000 credits

# - Background writer -

bgwriter_delay = 50                     # 10-10000 milliseconds between rounds
bgwriter_lru_percent = 1.0              # 0-100% of LRU buffers scanned/round
bgwriter_lru_maxpages = 25              # 0-1000 buffers max written/round
bgwriter_all_percent = 0.333            # 0-100% of all buffers scanned/round
bgwriter_all_maxpages = 50              # 0-1000 buffers max written/round

wal_buffers = 16                        # min 4, 8KB each
commit_delay = 500                      # range 0-100000, in microseconds
commit_siblings = 50                    # range 1-1000

# - Checkpoints -

checkpoint_segments = 50                # in logfile segments, min 1, 16MB each
checkpoint_timeout = 1800               # range 30-3600, in seconds
checkpoint_warning = 180

effective_cache_size = 2048             # typically 8KB each
random_page_cost = 3

Shared memory is set to:
/proc/sys/kernel/shmmax = 25600

Could you help please...

tx


David











Re: [PERFORM] Optimisation help

2008-03-04 Thread Alan Hodgson
On Tuesday 04 March 2008, dforums [EMAIL PROTECTED] wrote:
> We have a quad Xeon server with 8 GB of RAM and a 750 GB SATA II drive.
>
> I suppose the main problem comes from the database server settings.

No, the problem is your hard drive is too slow. One drive can only do maybe 
150 seeks per second.

Oh, and updates in PostgreSQL are expensive. But mostly I'd say it's your 
drive.

-- 
Alan



Re: [PERFORM] Optimisation help

2008-03-04 Thread dforums

Thanks for your reply.

I don't have more information on disk speed; I'll get it later.

My biggest fear is that, while the database is only 10 GB for now, I
will have to grow it tenfold over the next six months, and I'm afraid
these problems will get worse.


Regards

David



Re: [PERFORM] Optimisation help

2008-03-04 Thread dforums

Regarding updates: I have around 10,000 updates over a span of 10 minutes.

Is there a setting to optimise updates?

regards

david



Re: [PERFORM] Optimisation help

2008-03-04 Thread Greg Smith

On Tue, 4 Mar 2008, dforums wrote:


> max_connections = 256
> shared_buffers = 1500                   # min 16 or max_connections*2, 8KB each
> work_mem = 22000                        # min 64, size in KB
> effective_cache_size = 2048             # typically 8KB each


Well, you're giving the main database server a whopping 1500*8K=12MB of 
space to work with.  Meanwhile you're giving each of the 256 clients up to 
22MB of work_mem, which means they can use 5.6GB total.  This is quite 
backwards.


Increase shared_buffers to something like 250000 (2GB), decrease
work_mem to at most 10000 and probably lower, and raise
effective_cache_size to something like 625000 (5GB).  Whatever data
you've collected about performance with your current settings is pretty
much meaningless while you're giving only 12MB of memory to
shared_buffers and using a tiny setting for effective_cache_size.
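Collected as a postgresql.conf fragment, the suggestion for this 8GB box
would look something like the following (values are this thread's rough
recommendations, to be validated against the real workload):

```
shared_buffers = 250000         # ~2GB at 8KB per buffer
work_mem = 10000                # at most ~10MB per sort/hash, per client
effective_cache_size = 625000   # ~5GB of expected OS cache
```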


Oh, and make sure you ANALYZE your tables regularly.


> random_page_cost = 3


And you shouldn't be playing with that until you've got the memory usage 
to something sane.


Also, you didn't mention what version of PostgreSQL you're using.  You'll 
need 8.1 or later to have any hope of using 8GB of RAM effectively on a 
4-core system.


> My biggest fear is that, while the database is only 10 GB for now, I
> will have to grow it tenfold over the next six months; I'm afraid these
> problems will get worse.


It's very unlikely you will be able to get good performance on a 100GB 
database with a single SATA drive.  You should be able to get great 
performance with the current size though.


> Regarding updates: I have around 10,000 updates over a span of 10
> minutes.  Is there a setting to optimise updates?


10,000 updates / 600 seconds = 17 updates/second.  That's trivial; even
a single boring drive can get 100/second.  As someone already suggested,
your real problem here is that you'll be hard pressed to handle the
amount of seeking that goes into a larger database with only a single
drive.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] Optimisation help

2008-03-04 Thread dforums
Thanks, I'm trying these new settings. I gained only 3 seconds (2:40 vs
2:37) on a batch of 1000 lines, and it's run every 2 minutes.

For the database version, I'm on PostgreSQL 8.1.11, x64.

As I'm inside a procedure, it seems that PostgreSQL's EXPLAIN ANALYZE
doesn't give details.

I suppose I'll have to break my procedure apart to see exactly where I'm
wasting so much time.


regards

david



Re: [PERFORM] Optimisation help

2008-03-04 Thread dforums

Hello,

After checking the settings, I see that shared_buffers is running at
1024 (the default); however, in my postgresql.conf I set it to 250000.
Is this due to the shared memory settings -- should I increase shmmax?


regards

david



Re: [PERFORM] Optimisation help

2008-03-04 Thread Steinar H. Gunderson
On Wed, Mar 05, 2008 at 12:15:25AM +0000, dforums wrote:
> Regarding updates: I have around 10,000 updates over a span of 10 minutes.
>
> Is there a setting to optimise updates?

If you can, batch them into a single transaction.

If you can, upgrade to 8.3. HOT might help you here.
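Steinar's first suggestion as a sketch: grouping the updates means one
synchronous WAL flush per batch instead of one per statement (table and
column names below are placeholders):

```sql
BEGIN;                                           -- one transaction for the whole batch
UPDATE my_table SET val = 'x' WHERE key = 'k1';  -- hypothetical statements
UPDATE my_table SET val = 'y' WHERE key = 'k2';
-- ... the rest of the batch ...
COMMIT;                                          -- a single flush covers all of them
```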

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] Optimisation help

2008-03-04 Thread Erik Jones


On Mar 4, 2008, at 6:54 PM, dforums wrote:


> Hello,
>
> After checking the settings, I see that shared_buffers is running at
> 1024 (the default); however, in my postgresql.conf I set it to 250000.
> Is this due to the shared memory settings -- should I increase shmmax?


Did you do a full restart of the db cluster?  Changes to shared memory  
settings require that.
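After a full restart, an easy confirmation that the value took (a
generic psql check, not something from the thread):

```sql
SHOW shared_buffers;   -- should now report the postgresql.conf value instead of 1024
```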


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



