On 5/16/06 3:08 PM, "Tony Wasson" <[EMAIL PROTECTED]> wrote:
> On 5/16/06, Sean Davis <[EMAIL PROTECTED]> wrote:
>> I am using postgresql 8.1.0 on an Xserver running MacOS 10.3.9. I am
>> getting the following in the log every minute for the past couple of days.
>> The database is otherwise running normally, as far as I can tell:
>>
>> 2006-05-16 07:26:01 EDT FATAL: could not read statistics message:
>> Resource temporarily unavailable
>> 2006-05-16 07:27:01 EDT FATAL: could not read statistics message:
>> Resource temporarily unavailable
>> 2006-05-16 07:28:03 EDT FATAL: could not read statistics message:
>> Resource temporarily unavailable
>>
>> I saw a previous message in the archives, but it did not appear that any
>> conclusion was reached. Tom suggested that an EAGAIN signal was being
>> received from the system, but I'm not sure what this means exactly or why it
>> is happening now, as we have had the server running for months.
>>
>> Any insight?
>
> I ran into this problem also on OS X running Postgresql 8.0. When you
> start postgresql you usually see these 4 processes:
>
> /usr/local/pgsql/bin/postmaster
> postgres: writer process
> postgres: stats buffer process
> postgres: stats collector process
>
> When I saw the same error as you, the stats collector process was
> missing. A few times we also got messages like
Now that I look, I see the same thing: the stats collector process is missing here, too.
> [KERNEL]: no space in available paging segments; swapon suggested
I don't see any line like that in my logs.
> and then a bunch of these:
>
> postgres[13562]: [1-1] FATAL: could not read statistics message:
> Resource temporarily unavailable
>
> We thought it was our memory tuning of OS X. Since it wasn't a
> production box, we didn't pursue the problem further. What tuning have
> you done to postgresql.conf and the OS X memory settings?
I had cranked things up a bit from the standard install.
shared_buffers = 15000                  # min 16 or max_connections*2, 8KB each
#temp_buffers = 1000                    # min 100, 8KB each
#max_prepared_transactions = 50         # can be 0 or more
work_mem = 10000                        # min 64, size in KB
maintenance_work_mem = 128000           # min 1024, size in KB
max_stack_depth = 4096                  # min 100, size in KB
Some of these settings may not be ideal, but they really improved performance for
our needs.
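
For completeness, since Tony asked about the OS X side: shared_buffers of 15000 is
roughly 120 MB, which is well past the default OS X SysV shared memory limits, so
the kern.sysv settings have to be raised before the postmaster will even start. As
far as I know, on 10.3.9 that can go in /etc/sysctl.conf; the values below are only
a sketch of the format (shmmax is in bytes, shmall in 4 kB pages, and I believe all
five have to be set together or the change is ignored), not a recommendation:

kern.sysv.shmmax=167772160
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=40960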
Since the stats collector and stats buffer processes were missing here as well, I
restarted the server, and that appears to have fixed the issue for now. At least I
know what to watch for.
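
In case it helps anyone else who hits this, the quickest check I know of is simply
to look for the stats processes in the process list (just a sketch; the process
titles are the ones Tony listed above):

ps auxww | grep 'postgres:' | grep -v grep

On a healthy 8.1 server that should show the writer, stats buffer, and stats
collector processes alongside any backends; when this error starts appearing, the
stats collector line is the one that has gone missing.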
Thanks, Tony, for the reply.
Sean