Thanks Vick, I'll try it tonight. I'll give shared_buffers and
maintenance_work_mem 1024 MB each, and work_mem 102 MB.

A question: I left effective_cache_size (and every other planner cost or
planner method config option) commented out with #. Is it OK to turn it on
and set that parameter to 1036MB?
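
For clarity, these are the postgresql.conf lines I have in mind (my numbers
from above, just a sketch of what I plan to try tonight):

shared_buffers = 1024MB
maintenance_work_mem = 1024MB
work_mem = 102MB
effective_cache_size = 1036MB    # currently commented out with #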

Regarding the stuck DROP/CREATE TABLE: I noticed that the number of locks is
quite high. What about these settings:

deadlock_timeout = 1s
max_locks_per_transaction = 64

Are these OK, or are they too high?
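
For reference, I have been counting the locks with a quick query against
pg_locks, roughly like this:

SELECT mode, granted, count(*)
FROM pg_locks
GROUP BY mode, granted
ORDER BY count(*) DESC;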

Thanks,
Carl

2011/4/8 Vick Khera <vi...@khera.org>

>
> On Fri, Apr 8, 2011 at 4:00 AM, Gipsz Jakab <clausewit...@gmail.com> wrote:
>
>> My question is the following: if this is a dedicated database server, with
>> a maximum of 30 users (but they are using ODBC with Microsoft Access, and
>> each of them generates 4-6 connections at the same time), and another 200
>> people will use this server through Drupal, PHP and Apache, not on a daily
>> basis but weekly, what is the ideal memory configuration?
>>
>
> If it is a dedicated DB server, then give shared_buffers about 1/4 of the
> RAM, and perhaps a slightly larger maintenance_work_mem. Depending on your
> workload you may want to increase checkpoint_segments (if write-mostly,
> then add more segments).
>
> Here is what I use on my FreeBSD 8, Pg 9 big-memory servers. These have
> 24GB or more of RAM and are attached to SSD external storage for the
> database:
>
> max_connections = 200
> shared_buffers = 5120MB
> work_mem = 512MB
> maintenance_work_mem = 1024MB
> max_stack_depth = 8MB
> vacuum_cost_delay = 15
> checkpoint_segments = 64
> checkpoint_timeout = 15min
> checkpoint_completion_target = 0.8
> random_page_cost = 1.0         # RAM disk: set equal to seq_page_cost
> effective_cache_size = 6400MB  # shared_buffers + `sysctl -n vfs.hibufspace` / 8192 (BLKSZ)
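>
> A rough FreeBSD one-liner to arrive at that number (5120 = shared_buffers
> in MB, 128 = 8kB pages per MB):
>
> echo "$(( 5120 + $(sysctl -n vfs.hibufspace) / 8192 / 128 ))MB"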
>
>
> For individual complicated queries, you can increase the sort and work
> memory (work_mem) on a per-connection basis as needed.
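>
> e.g., in plain SQL, per session:
>
> SET work_mem = '512MB';   -- affects only this connection
> -- ... run the expensive sort/hash query ...
> RESET work_mem;           -- back to the server default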
>
>
>>
>> After the changes to postgresql.conf our system is much faster, and there
>> are no more error messages in postgres.log. But if I try to drop a table,
>> or add a new one, our system hangs until I kill the process that is
>> dropping or adding the table.
>>
>
> Is something else using the table you want to drop and blocking the drop
> statement from taking the locks it needs?  It should be fairly instant.
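>
> A rough way to spot a blocker (a sketch against pg_locks; Pg 9.0 ships no
> built-in view for this):
>
> SELECT waiting.pid AS waiting_pid, holder.pid AS holding_pid,
>        waiting.locktype, waiting.relation::regclass AS rel
> FROM pg_locks waiting
> JOIN pg_locks holder
>   ON waiting.locktype = holder.locktype
>  AND waiting.relation IS NOT DISTINCT FROM holder.relation
>  AND waiting.pid <> holder.pid
> WHERE NOT waiting.granted AND holder.granted;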
>
>
