Re: [PERFORM] memory allocation

2017-10-19 Thread Laurenz Albe
nijam J wrote:
> Our server keeps slowing down again and again.

Use "vmstat 1" and "iostat -mNx 1" to see if you are
running out of memory, CPU capacity or I/O bandwidth.

Figure out if the slowness is due to slow queries or
an overloaded system.
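
For example, a rough sketch of what to look for (column names assume the
procps vmstat and sysstat iostat found on most Linux distributions):

vmstat 1
  # si/so consistently > 0            -> swapping, you are short on memory
  # high us/sy with near-zero id      -> CPU bound
  # high wa                           -> processes waiting on I/O

iostat -mNx 1
  # %util near 100 with large await   -> that device is saturated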

Yours,
Laurenz Albe




[PERFORM] memory allocation

2017-10-19 Thread nijam J
We are using a cloud server.

*This is the memory info:*

free -h
             total       used       free     shared    buffers     cached
Mem:           15G        15G       197M       194M       121M        14G
-/+ buffers/cache:        926M        14G
Swap:          15G        32M        15G

*This is the disk info:*
 df -h

Filesystem                    Size  Used Avail Use% Mounted on
/dev/vda1  20G  1.7G   17G  10% /
devtmpfs  7.9G 0  7.9G   0% /dev
tmpfs 7.9G  4.0K  7.9G   1% /dev/shm
tmpfs 7.9G   17M  7.9G   1% /run
tmpfs 7.9G 0  7.9G   0% /sys/fs/cgroup
/dev/mapper/vgzero-lvhome  99G  189M   94G   1% /home
/dev/mapper/vgzero-lvdata 1.2T   75G  1.1T   7% /data
/dev/mapper/vgzero-lvbackup   296G  6.2G  274G   3% /backup
/dev/mapper/vgzero-lvxlog 197G   61M  187G   1% /pg_xlog
/dev/mapper/vgzero-lvarchive  197G   67G  121G  36% /archive



I allocated memory as per the following list:

shared_buffers = 2GB                (10-30% of RAM)
effective_cache_size = 7GB          (70-75% of RAM; shared_buffers + OS page cache,
                                     for a dedicated server only)
work_mem = 128MB                    (0.3-1%)
maintenance_work_mem = 512MB        (0.5-4%)
temp_buffers = 8MB                  (the default is better; this setting can be
                                     changed within individual sessions)

checkpoint_segments = 64
checkpoint_completion_target = 0.9
random_page_cost = 3.5
cpu_tuple_cost = 0.05
wal_buffers = 32MB                  (leaving the default, about 3% of shared_buffers,
                                     is better)
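
To double-check what the running server actually uses, a quick sketch (any
psql connection will do; settings that don't exist in your version are simply
not returned):

psql -c "SELECT name, setting, unit FROM pg_settings
         WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem',
                        'maintenance_work_mem', 'wal_buffers', 'checkpoint_segments');"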



Is this reasonable, or do I need to modify anything?

Our server keeps slowing down again and again.

Please give me a suggestion.


Re: [PERFORM] Memory Allocation (8 GB shared buffer limit on Ubuntu Hardy)

2009-01-06 Thread Frank Joerdens
On Wed, Jan 7, 2009 at 3:23 AM, Tom Lane  wrote:
> "Frank Joerdens"  writes:
>> then I take the request size value from the error and do
>> echo 8810725376 > /proc/sys/kernel/shmmax
>> and get the same error again.
>
> What about shmall?

Yes that works, it was set to

r...@db04:~# cat /proc/sys/kernel/shmall
2097152
r...@db04:~# getconf PAGE_SIZE
4096

which is 2097152 * 4096 = 8589934592 (presumably an Ubuntu default),
i.e. slightly less than the required shmmax, which explains why 7 GB
works but 8 doesn't. 8810725376 / 4096 = 2151056 would appear to be
right, and indeed after doing

r...@db04:~# echo 2151056 > /proc/sys/kernel/shmall

it works.
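
(A sketch of the same arithmetic, using sysctl instead of echoing into /proc;
the values are the ones from this thread:)

SHMMAX=8810725376
PAGE_SIZE=$(getconf PAGE_SIZE)                      # 4096 here
sysctl -w kernel.shmmax=$SHMMAX
sysctl -w kernel.shmall=$(( SHMMAX / PAGE_SIZE ))   # 2151056 pages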

Thanks!

Frank



Re: [PERFORM] Memory Allocation (8 GB shared buffer limit on Ubuntu Hardy)

2009-01-06 Thread Tom Lane
"Frank Joerdens"  writes:
> then I take the request size value from the error and do
> echo 8810725376 > /proc/sys/kernel/shmmax
> and get the same error again.

What about shmall?

regards, tom lane



Re: [PERFORM] Memory Allocation (8 GB shared buffer limit on Ubuntu Hardy)

2009-01-06 Thread Frank Joerdens
Tom Lane wrote:
> "Ryan Hansen"  writes:
[...]
>> but when I set the shared buffer in PG and restart
>> the service, it fails if it's above about 8 GB.
>
> Fails how?  And what PG version is that?

The thread seems to end here as far as the specific question was
concerned. I just ran into the same issue though, also on Ubuntu Hardy
with PG 8.2.7 - if I set shared buffers to 8 GB, starting the server
fails with

2009-01-06 17:15:09.367 PST 6804   DETAIL:  Failed system call was
shmget(key=5432001, size=8810725376, 03600).

then I take the request size value from the error and do

echo 8810725376 > /proc/sys/kernel/shmmax

and get the same error again. If I try that with shared_buffers = 7 GB
(setting shmmax to 7706542080), it works. Even if I double the value
for 8 GB and set shmmax to 17621450752, I get the same error. There
seems to be a ceiling.
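
(For reference, a quick way to see the limits currently in effect; flags assume
the util-linux ipcs:)

ipcs -l                                               # max seg size / max total shared memory
cat /proc/sys/kernel/shmmax /proc/sys/kernel/shmall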

Earlier in this thread somebody mentioned they had set shared buffers
to 24 GB on CentOS, so it seems to be a platform issue.

I also tried to double SHMMNI, from 4096 -> 8192, as the PG error
suggests, but to no avail.

This is a new 16-core Dell box with 64 GB of RAM and a mid-range
controller with 8 spindles in RAID 0+1, one big filesystem. The
database is currently 55 GB in size with a web application type OLTP
load, doing ~6000 tps at peak time (and growing fast).

The problem surfaced here because we just upgraded from an 8-core
server with 16 GB RAM with very disappointing results initially. The
new server would go inexplicably slow near peak time, with context
switches ~100k and locks going ballistic. It seemed worse than on the
smaller machine.

Until we revised the configuration which I'd just copied over from the
old box, and adjusted shared_buffers from 2 GB -> 4 GB. Now it seems to
perform well. I found that surprising, given that 2 GB is quite a lot
already, and given that I'd gathered the benefits of cranking up shared
buffers are not scientifically proven - often, if not most of the time,
the OS's caching mechanisms are adequate or even superior to what you
might achieve by fiddling with the PG configuration and setting shared
buffers very high.

Regards,

Frank



Re: [PERFORM] Memory Allocation

2008-11-28 Thread Kevin Grittner
I'm hoping that through compare/contrast we might help someone start
closer to their own best values.
 
>>> Scott Carey <[EMAIL PROTECTED]> wrote: 
> Tests with writes can trigger it earlier if combined with bad
> dirty_buffers settings.
 
We've never, ever modified dirty_buffers settings from defaults.
 
> The root of the problem is that the Linux paging algorithm estimates
> that I/O for file read access is as costly as I/O for paging.  A
> reasonable assumption for a desktop, a ridiculously false assumption
> for a large database with high capacity DB file I/O and a much lower
> capability swap file.
 
Our swap file is not on lower speed drives.
 
> If you do have enough other applications that are idle that take up
> RAM that should be pushed out to disk from time to time (perhaps your
> programs that are doing the bulk loading?) a higher value is useful.
 
Bulk loading was ssh cat | psql.
 
> The more RAM you have and the larger your postgres memory usage, the
> lower the swappiness value should be.
 
I think the test environment had 8 GB RAM with 256 MB in
shared_buffers.  For the conversion we had high work_mem and
maintenance_work_mem settings, and we turned fsync off, along with a
few other settings we would never use during production.
 
> I currently use a value of 1, on a 32GB machine, and about 600MB of
> 'stuff' gets paged out normally, 1400MB under heavy load.
 
Outside of bulk load, we've rarely seen anything swap, even under
load.
 
> ***For a bulk load database, one is optimizing for _writes_ and extra
> page cache doesn't help writes like it does reads.***
 
I'm thinking that it likely helps when indexing tables for which data
has recently been loaded.  It also might help minimize head movement
and/or avoid the initial disk hit for a page which subsequently gets
hint bits set.
 
> Like all of these settings, tune to your application and test.
 
We sure seem to agree on that.
 
-Kevin



Re: [PERFORM] Memory Allocation

2008-11-26 Thread Scott Carey
Swappiness optimization is going to vary.   Definitely test on your own.

For a bulk load database, with large page cache, swappiness = 60 (default) is
_GUARANTEED_ to force the OS to swap out some of Postgres while in heavy use.  
This is heavily dependent on the page cache size, work_mem size, and 
concurrency.
I've had significantly increased performance setting this value low (1000x ! -- 
if your DB starts swapping postgres, you're performance-DEAD).  The default has 
the OS targeting close to 60% of the memory for page cache.  On a 32GB server, 
with 7GB postgres buffer cache, several concurrent queries reading GB's of data 
and using 500MB + work_mem (huge aggregates), the default swappiness will 
choose to page out postgres with about 19GB of disk page cache left to evict, 
with disastrous results.  And that is a read-only test.  Tests with writes can 
trigger it earlier if combined with bad dirty_buffers settings.

The root of the problem is that the Linux paging algorithm estimates that I/O 
for file read access is as costly as I/O for paging.  A reasonable assumption 
for a desktop, a ridiculously false assumption for a large database with high 
capacity DB file I/O and a much lower capability swap file.  Not only that -- 
page in is almost always near pure random reads, but DB I/O is often 
sequential.  So losing 100MB of cached db file takes a lot less time to scan
back in than 100MB of the application.

If you do have enough other applications that are idle that take up RAM that 
should be pushed out to disk from time to time (perhaps your programs that are 
doing the bulk loading?) a higher value is useful.  Although it is not exact, 
think of the swappiness value as the percentage of RAM that the OS would prefer 
page cache to applications (very roughly).

The more RAM you have and the larger your postgres memory usage, the lower the 
swappiness value should be.  60% of 24GB is ~14.5GB. If you have that much
stuff that is in RAM that should be paged out to save space, try it.

I currently use a value of 1, on a 32GB machine, and about 600MB of 'stuff' 
gets paged out normally, 1400MB under heavy load.  This is a dedicated machine. 
 Higher values page out more stuff that increases the cache size and helps 
performance a little, but under the heavy load, it hits the paging wall and 
falls over.  The small improvement in performance when the system is not 
completely stressed is not worth risking hitting the wall for me.

***For a bulk load database, one is optimizing for _writes_ and extra page 
cache doesn't help writes like it does reads.***

When I use a machine with misc. other lower priority apps and less RAM, I have 
found larger values to be helpful.

If your DB is configured with a low shared_buffers and small work_mem, you 
probably want the OS to use that much memory for disk pages, and again a higher 
swappiness may be more optimal.

Like all of these settings, tune to your application and test.  Many of these 
settings are things that go hand in hand with others, but alone don't make as 
much sense.  Tuning Postgres to do most of the caching and making the OS get 
out of the way is far different than tuning the OS to do as much caching work 
as possible and minimizing postgres.  Which of those two strategies is best is 
highly application dependent, somewhat O/S dependent, and also hardware 
dependent.
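
(For reference, a minimal sketch of changing it; the paths and the default of
60 assume a Linux 2.6-era kernel:)

cat /proc/sys/vm/swappiness                     # usually 60 by default
sysctl -w vm.swappiness=1                       # takes effect immediately
echo "vm.swappiness = 1" >> /etc/sysctl.conf    # persists across reboots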

-Original Message-
From: Kevin Grittner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 26, 2008 3:09 PM
To: Ryan Hansen; pgsql-performance@postgresql.org; Scott Carey
Subject: Re: [PERFORM] Memory Allocation

>>> Scott Carey <[EMAIL PROTECTED]> wrote:
> Set swappiness to 0 or 1.

We recently converted all 72 remote county databases from 8.2.5 to
8.3.4.  In preparation we ran a test conversion of a large county over
and over with different settings to see what got us the best
performance.  Setting swappiness below the default degraded
performance for us in those tests for identical data, same hardware,
no other changes.

Our best guess is that code which really wasn't getting called got
swapped out leaving more space in the OS cache, but that's just a
guess.  Of course, I'm sure people would not be recommending it if
they hadn't done their own benchmarks to confirm that this setting
actually improved things in their environments, so the lesson here is
to test for your environment when possible.

-Kevin



Re: [PERFORM] Memory Allocation

2008-11-26 Thread Kevin Grittner
>>> Scott Carey <[EMAIL PROTECTED]> wrote: 
> Set swappiness to 0 or 1.
 
We recently converted all 72 remote county databases from 8.2.5 to
8.3.4.  In preparation we ran a test conversion of a large county over
and over with different settings to see what got us the best
performance.  Setting swappiness below the default degraded
performance for us in those tests for identical data, same hardware,
no other changes.
 
Our best guess is that code which really wasn't getting called got
swapped out leaving more space in the OS cache, but that's just a
guess.  Of course, I'm sure people would not be recommending it if
they hadn't done their own benchmarks to confirm that this setting
actually improved things in their environments, so the lesson here is
to test for your environment when possible.
 
-Kevin



Re: [PERFORM] Memory Allocation

2008-11-26 Thread Scott Carey
Tuning for bulk loading:

Make sure the Linux kernel parameters in /proc/sys/vm related to the page cache
are set well.
Set swappiness to 0 or 1.
Make sure you understand and configure /proc/sys/vm/dirty_background_ratio
and /proc/sys/vm/dirty_ratio well.
With enough RAM the default on some kernel versions is way, way off (40% of RAM 
with dirty pages!  yuck).
http://www.westnet.com/~gsmith/content/linux-pdflush.htm
If postgres is doing a lot of caching for you, you probably want dirty_ratio at
10% or less, and you'll want the OS to start flushing to disk sooner rather 
than later.  A dirty_background_ratio of 3% with 24GB of RAM  is 720MB -- a 
pretty big buffer.  I would not personally want this buffer to be larger than 5 
seconds of max write speed of the disk I/O.
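
(A sketch of the kind of values described above; the numbers are illustrative,
not prescriptive:)

sysctl -w vm.dirty_background_ratio=3
sysctl -w vm.dirty_ratio=10
grep -i dirty /proc/meminfo       # watch how much dirty data is actually pending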

You'll need to tune your background writer to be aggressive enough to actually 
write data fast enough so that checkpoints don't suck, and tune your checkpoint 
size and settings as well.  Turn on checkpoint logging on the database and run 
tests while looking at the output of those.  Ideally, most of your batch writes 
have made it to the OS before the checkpoint, and the OS has actually started 
moving most of it to disk.  If your settings are wrong,  you'll have the data 
buffered twice, and most or nearly all of it will be in memory when the 
checkpoint happens, and the checkpoint will take a LONG time.  The default 
Linux settings + default postgres settings + large shared_buffers will almost 
guarantee this situation for bulk loads.  Both have to be configured with 
complementary settings.  If you have a large postgres buffer, the OS buffer 
should be small and write more aggressively.  If you have a small postgres 
buffer, the OS can be more lazy and cache much more.
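
(A sketch of the postgresql.conf knobs involved, with placeholder values; tune
and test against your own load:)

log_checkpoints = on                  # log each checkpoint's timing and volume
checkpoint_segments = 64              # illustrative, sized for bulk loads
checkpoint_completion_target = 0.9    # spread checkpoint writes out
bgwriter_lru_maxpages = 500           # let the background writer work harder
bgwriter_delay = 100ms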


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ryan Hansen
Sent: Wednesday, November 26, 2008 2:10 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Memory Allocation

Hey all,

This may be more of a Linux question than a PG question, but I'm wondering if 
any of you have successfully allocated more than 8 GB of memory to PG before.

I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory, and 
I've tried to commit half the memory to PG's shared buffer, but it seems to 
fail.  I'm setting the kernel shared memory accordingly using sysctl, which 
seems to work fine, but when I set the shared buffer in PG and restart the 
service, it fails if it's above about 8 GB.  I actually have it currently set 
at 6 GB.

I don't have the exact failure message handy, but I can certainly get it if 
that helps.  Mostly I'm just looking to know if there's any general reason why 
it would fail, some inherent kernel or db limitation that I'm unaware of.

If it matters, this DB is going to be hosting and processing hundreds of GB and 
eventually TB of data, it's a heavy read-write system, not transactional 
processing, just a lot of data file parsing (python/bash) and bulk loading.  
Obviously the disks get hit pretty hard already, so I want to make the most of 
the large amount of available memory wherever possible.  So I'm trying to tune 
in that direction.

Any info is appreciated.

Thanks!


Re: [PERFORM] Memory Allocation

2008-11-26 Thread Tom Lane
"Ryan Hansen" <[EMAIL PROTECTED]> writes:
> I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,
> and I've tried to commit half the memory to PG's shared buffer, but it seems
> to fail.  I'm setting the kernel shared memory accordingly using sysctl,
> which seems to work fine, but when I set the shared buffer in PG and restart
> the service, it fails if it's above about 8 GB.

Fails how?  And what PG version is that?

FWIW, while there are various schools of thought on how large to make
shared_buffers, pretty much everybody agrees that half of physical RAM
is not the sweet spot.  What you're likely to get is maximal
inefficiency with every active disk page cached twice --- once in kernel
space and once in shared_buffers.

regards, tom lane



Re: [PERFORM] Memory Allocation

2008-11-26 Thread Carlos Moreno
Ryan Hansen wrote:
>
> Hey all,
>
> This may be more of a Linux question than a PG question, but I’m
> wondering if any of you have successfully allocated more than 8 GB of
> memory to PG before.
>
> I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of
> memory, and I’ve tried to commit half the memory to PG’s shared
> buffer, but it seems to fail.
>

Though not sure why this is happening or whether it is normal, I would
suggest that such a setting is maybe too high. From the Annotated
postgresql.conf document at

http://www.powerpostgresql.com/Downloads/annotated_conf_80.html,

the suggested range is 8 to 400MB. They specifically say that it
should never be set to more than 1/3 of the available memory, which
in your case is precisely the 8GB figure (I guess that's just a
coincidence --- I doubt that the server would be written so that it
fails to start if shared_buffers is more than 1/3 of available RAM).

Another important parameter that you don't mention is the
effective_cache_size, which that same document suggests should
be about 2/3 of available memory. (this tells the planner the amount
of data that it can "probabilistically" expect to reside in memory due
to caching, and as such, the planner is likely to produce more
accurate estimates and thus better query optimizations).

Maybe you could set shared_buffers to, say, 1 or 2GB (that's already
beyond the recommended figure, but given that you have 24GB, it
may not hurt), and then effective_cache_size to 16GB or so?
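
(As a concrete sketch of that suggestion in postgresql.conf terms, with
illustrative values only:)

shared_buffers = 2GB            # or 1GB; well below the size that fails to start
effective_cache_size = 16GB     # roughly 2/3 of the 24GB of RAM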

HTH,

Carlos
--




Re: [PERFORM] Memory Allocation

2008-11-26 Thread Alan Hodgson
On Wednesday 26 November 2008, "Ryan Hansen" 
<[EMAIL PROTECTED]> wrote:
> This may be more of a Linux question than a PG question, but I'm
> wondering if any of you have successfully allocated more than 8 GB of
> memory to PG before.
>

CentOS 5, 24GB shared_buffers on one server here. No problems.

-- 
Alan



[PERFORM] Memory Allocation

2008-11-26 Thread Ryan Hansen
Hey all,

 

This may be more of a Linux question than a PG question, but I'm wondering
if any of you have successfully allocated more than 8 GB of memory to PG
before.

 

I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,
and I've tried to commit half the memory to PG's shared buffer, but it seems
to fail.  I'm setting the kernel shared memory accordingly using sysctl,
which seems to work fine, but when I set the shared buffer in PG and restart
the service, it fails if it's above about 8 GB.  I actually have it
currently set at 6 GB.

 

I don't have the exact failure message handy, but I can certainly get it if
that helps.  Mostly I'm just looking to know if there's any general reason
why it would fail, some inherent kernel or db limitation that I'm unaware
of.  

 

If it matters, this DB is going to be hosting and processing hundreds of GB
and eventually TB of data, it's a heavy read-write system, not transactional
processing, just a lot of data file parsing (python/bash) and bulk loading.
Obviously the disks get hit pretty hard already, so I want to make the most
of the large amount of available memory wherever possible.  So I'm trying to
tune in that direction.

 

Any info is appreciated.

 

Thanks!



Re: [PERFORM] Memory allocation and Vacuum abends

2007-05-27 Thread Jim C. Nasby
What does top report as using the most memory?

On Wed, May 23, 2007 at 11:01:24PM -0300, Leandro Guimarães dos Santos wrote:
> Hi all,
> 
>  
> 
> I have a 4 CPU, 4GB Ram memory box running PostgreSql 8.2.3 under Win 2003 in 
> a very high IO intensive insert application.
> 
>  
> 
> The application inserts about 570 rows per minute or 9 rows per second.
> 
>  
> 
> We have been facing some memory problem that we cannot understand.
> 
>  
> 
> From time to time memory allocation goes high, and even after we stop the 
> postgresql service the memory stays allocated, and if we restart the 
> service Postgres crashes.
> 
>  
> 
> The database is already 5 GB in size and was created a month and a half ago. 
> We have 2 principal tables, partitioned.
> 
>  
> 
> Below is the log file. Does anyone have any idea what the problem could be?
> 
>  
> 
> Thanks in advance.
> 
>  
> 
>  
> 
> 2007-05-23 13:21:00 LOG:  CreateProcess call failed: A blocking operation was 
> interrupted by a call to WSACancelBlockingCall.
> 
>  (error code 1450)
> 
> 2007-05-23 13:21:00 LOG:  could not fork new process for connection: A 
> blocking operation was interrupted by a call to WSACancelBlockingCall.
> 
> 
> 
> 2007-05-23 13:21:06 LOG:  could not receive data from client: An operation on 
> a socket could not be performed because the system lacked sufficient buffer 
> space or because a queue was full.
> 
> 2007-05-23 13:21:17 LOG:  server process (PID 256868) exited with exit code 
> 128
> 
> 2007-05-23 13:21:17 LOG:  terminating any other active server processes
> 
> 2007-05-23 13:21:17 WARNING:  terminating connection because of crash of 
> another server process
> 
> 2007-05-23 13:21:17 DETAIL:  The postmaster has commanded this server process 
> to roll back the current transaction and exit, because another server process 
> exited abnormally and possibly corrupted shared memory.
> 
> 2007-05-23 13:21:17 HINT:  In a moment you should be able to reconnect to the 
> database and repeat your command.
> 
> 2007-05-23 13:21:17 WARNING:  terminating connection because of crash of 
> another server process
> 
> 2007-05-23 13:21:17 DETAIL:  The postmaster has commanded this server process 
> to roll back the current transaction and exit, because another server process 
> exited abnormally and possibly corrupted shared memory.
> 
> 2007-05-23 13:21:17 WARNING:  terminating connection because of crash of 
> another server process
> 
> 2007-05-23 13:21:17 DETAIL:  The postmaster has commanded this server process 
> to roll back the current transaction and exit, because another server process 
> exited abnormally and possibly corrupted shared memory.
> 
>  
> 

-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




[PERFORM] Memory allocation and Vacuum abends

2007-05-23 Thread Leandro Guimarães dos Santos
Hi all,

 

I have a 4 CPU, 4GB Ram memory box running PostgreSql 8.2.3 under Win 2003 in a 
very high IO intensive insert application.

 

The application inserts about 570 rows per minute or 9 rows per second.

 

We have been facing some memory problem that we cannot understand.

 

From time to time memory allocation goes high, and even after we stop the 
postgresql service the memory stays allocated, and if we restart the 
service Postgres crashes.

 

The database is already 5 GB in size and was created a month and a half ago. We 
have 2 principal tables, partitioned.

 

Below is the log file. Does anyone have any idea what the problem could be?

 

Thanks in advance.

 

 

2007-05-23 13:21:00 LOG:  CreateProcess call failed: A blocking operation was 
interrupted by a call to WSACancelBlockingCall.

 (error code 1450)

2007-05-23 13:21:00 LOG:  could not fork new process for connection: A blocking 
operation was interrupted by a call to WSACancelBlockingCall.



2007-05-23 13:21:06 LOG:  could not receive data from client: An operation on a 
socket could not be performed because the system lacked sufficient buffer space 
or because a queue was full.

2007-05-23 13:21:17 LOG:  server process (PID 256868) exited with exit code 128

2007-05-23 13:21:17 LOG:  terminating any other active server processes

2007-05-23 13:21:17 WARNING:  terminating connection because of crash of 
another server process

2007-05-23 13:21:17 DETAIL:  The postmaster has commanded this server process 
to roll back the current transaction and exit, because another server process 
exited abnormally and possibly corrupted shared memory.

2007-05-23 13:21:17 HINT:  In a moment you should be able to reconnect to the 
database and repeat your command.

2007-05-23 13:21:17 WARNING:  terminating connection because of crash of 
another server process

2007-05-23 13:21:17 DETAIL:  The postmaster has commanded this server process 
to roll back the current transaction and exit, because another server process 
exited abnormally and possibly corrupted shared memory.

2007-05-23 13:21:17 WARNING:  terminating connection because of crash of 
another server process

2007-05-23 13:21:17 DETAIL:  The postmaster has commanded this server process 
to roll back the current transaction and exit, because another server process 
exited abnormally and possibly corrupted shared memory.

 



Re: [PERFORM] memory allocation

2004-06-18 Thread Richard Huxton
Michael Ryan S. Puncia wrote:
> Hi everyone .
>
> How much memory should I give to the kernel and postgresql
> I have 1G of memory and 120G of HD

Devrim's pointed you to a guide to the configuration file. There's also
an introduction to performance tuning on the same site.

An important thing to remember is that the sort_mem is the amount of 
memory available *per sort* and some queries can use several sorts.
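
For instance (illustrative numbers only): 30 concurrent queries each running
two sorts with an 8 MB sort_mem would already be 30 * 2 * 8 MB = 480 MB,
nearly half of a 1 GB machine before the kernel and shared buffers get
anything.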

--
  Richard Huxton
  Archonet Ltd


Re: [PERFORM] memory allocation

2004-06-18 Thread Devrim GUNDUZ


Hi,

On Fri, 18 Jun 2004, Michael Ryan S. Puncia wrote:

> How much memory should I give to the kernel and postgresql
> 
> I have 1G of memory and 120G of HD
> 
> Shared Buffers = ?
> 
> Vacuum Mem = ?

Maybe you should read

http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.sxw
OR
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html

> SHMAX = ?

SHMMAX is not really a PostgreSQL setting; it is rather a setting of
your operating system.
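
(If you do need to raise it on Linux, a minimal sketch; the value is purely
illustrative for a machine of this size:)

cat /proc/sys/kernel/shmmax
sysctl -w kernel.shmmax=134217728     # 128MB, headroom for a modest shared_buffers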

Regards,
-- 
Devrim GUNDUZ  
devrim~gunduz.org   devrim.gunduz~linux.org.tr 
http://www.tdmsoft.com
http://www.gunduz.org




[PERFORM] memory allocation

2004-06-18 Thread Michael Ryan S. Puncia
Hi everyone .

 

How much memory should I give to the kernel and postgresql

 

I have 1G of memory and 120G of HD

 

Shared Buffers = ?

Vacuum Mem = ?

SHMAX = ?

 

Sorry I have so many questions, I am a newbie :(

 

I have 30G of data 

At least 30 simultaneous users

But I will use it only for queries with a lot of sorting

 

thanks