nijam J wrote:
> our server keeps getting slow again and again
Use "vmstat 1" and "iostat -mNx 1" to see if you are
running out of memory, CPU capacity, or I/O bandwidth.
Figure out if the slowness is due to slow queries or
an overloaded system.
Yours,
Laurenz Albe
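A minimal way to run the two commands Laurenz mentions (a sketch: it assumes the sysstat package provides iostat, and bounds each run to five samples rather than running forever):

```shell
# Five one-second samples each; run in separate terminals on the server.
vmstat 1 5        # watch r/b run queues, si/so swap traffic, us/sy/wa/id CPU split
iostat -mNx 1 5   # per-device MB/s, %util, and await to spot I/O saturation
```

High si/so points at memory pressure, sustained %util near 100 at I/O saturation, and low id with an empty b column at CPU-bound queries.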
We are using a cloud server.
*This is the memory info:*
free -h
                   total       used       free     shared    buffers     cached
Mem:                 15G        15G       197M       194M       121M       14G
-/+ buffers/cache:              926M        14G
Swap:                15G        32M        15G
*this
Tom Lane wrote:
Ryan Hansen ryan.han...@brightbuilders.com writes:
[...]
but when I set the shared buffer in PG and restart
the service, it fails if it's above about 8 GB.
Fails how? And what PG version is that?
The thread seems to end here as far as the specific question was
concerned. I
Frank Joerdens fr...@joerdens.de writes:
then I take the request size value from the error and do
echo 8810725376 > /proc/sys/kernel/shmmax
and get the same error again.
What about shmall?
regards, tom lane
--
Sent via pgsql-performance mailing list
On Wed, Jan 7, 2009 at 3:23 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Frank Joerdens fr...@joerdens.de writes:
then I take the request size value from the error and do
echo 8810725376 > /proc/sys/kernel/shmmax
and get the same error again.
What about shmall?
Yes that works, it was set to
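Tom's point is that the two limits have different units and must be raised together. A sketch using the request size from this thread (the arithmetic, not the final values, is the point):

```shell
# shmmax is in bytes, shmall in pages, so raising shmmax alone is not enough.
SHMMAX=8810725376                 # request size taken from the PG startup error
PAGE_SIZE=$(getconf PAGE_SIZE)    # typically 4096 on x86 Linux
SHMALL=$((SHMMAX / PAGE_SIZE))
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
# Apply as root:  sysctl -w kernel.shmmax=$SHMMAX kernel.shmall=$SHMALL
```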
I'm hoping that through compare/contrast we might help someone start
closer to their own best values
Scott Carey [EMAIL PROTECTED] wrote:
Tests with writes can trigger it earlier if combined with bad
dirty_buffers settings.
We've never, ever modified dirty_buffers settings from
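The "dirty_buffers settings" referred to are presumably the Linux VM writeback knobs; an illustrative /etc/sysctl.conf fragment (the values here are assumptions, not taken from the thread):

```
vm.dirty_background_ratio = 5   # start background writeback at 5% of RAM dirty
vm.dirty_ratio = 10             # throttle writers once 10% of RAM is dirty
```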
Hey all,
This may be more of a Linux question than a PG question, but I'm wondering
if any of you have successfully allocated more than 8 GB of memory to PG
before.
I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,
and I've tried to commit half the memory to PG's
On Wednesday 26 November 2008, Ryan Hansen
[EMAIL PROTECTED] wrote:
This may be more of a Linux question than a PG question, but I'm
wondering if any of you have successfully allocated more than 8 GB of
memory to PG before.
CentOS 5, 24GB shared_buffers on one server here. No problems.
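For reference, the setting under discussion is a single postgresql.conf line; an illustrative value matching "half of 24 GB":

```
shared_buffers = 12GB   # half of the machine's 24 GB, as the original poster intended
```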
Ryan Hansen wrote:
Hey all,
This may be more of a Linux question than a PG question, but I’m
wondering if any of you have successfully allocated more than 8 GB of
memory to PG before.
I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of
memory, and I’ve tried to commit half
Ryan Hansen [EMAIL PROTECTED] writes:
I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,
and I've tried to commit half the memory to PG's shared buffer, but it seems
to fail. I'm setting the kernel shared memory accordingly using sysctl,
which seems to work fine, but
From: Ryan Hansen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 26, 2008 2:10 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Memory Allocation
Hey all,
This may be more of a Linux question than a PG question, but I'm wondering if
any of you have successfully allocated more
Scott Carey [EMAIL PROTECTED] wrote:
Set swappiness to 0 or 1.
We recently converted all 72 remote county databases from 8.2.5 to
8.3.4. In preparation we ran a test conversion of a large county over
and over with different settings to see what got us the best
performance. Setting
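Scott's suggestion, as a persistent /etc/sysctl.conf entry (a sketch; on some kernel versions 0 and 1 behave differently, with 1 being the minimum non-zero preference):

```
vm.swappiness = 1   # prefer reclaiming page cache over swapping out anon memory
```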
What does top report as using the most memory?
On Wed, May 23, 2007 at 11:01:24PM -0300, Leandro Guimarães dos Santos wrote:
Hi all,
I have a 4-CPU, 4 GB RAM box running PostgreSQL 8.2.3 under Win 2003 in
a very I/O-intensive insert application.
The application
Hi all,
I have a 4-CPU, 4 GB RAM box running PostgreSQL 8.2.3 under Win 2003 in a
very I/O-intensive insert application.
The application inserts about 570 rows per minute, roughly 9.5 rows per second.
We have been facing some memory problem that we cannot understand.
From time
Hi everyone,
How much memory should I give to the kernel and PostgreSQL?
I have 1 GB of memory and 120 GB of HD.
Shared Buffers = ?
Vacuum Mem = ?
SHMMAX = ?
Sorry, I have so many questions; I am a newbie.
I have 30 GB of data
and at least 30 simultaneous users,
but I will use it only
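As a back-of-envelope starting point for a 1 GB box of that era, one common rule of thumb (illustrative ratios, not from the thread) was a modest shared_buffers with SHMMAX comfortably above it:

```shell
# Rough sizing sketch for a 1 GB machine; the ratios are assumptions.
RAM_MB=1024
SB_MB=$((RAM_MB / 8))                  # shared_buffers: ~1/8 of RAM as a start
SHMMAX=$((SB_MB * 2 * 1024 * 1024))    # SHMMAX in bytes, with headroom over shared_buffers
echo "shared_buffers ~ ${SB_MB}MB, kernel.shmmax >= $SHMMAX"
```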
Hi,
On Fri, 18 Jun 2004, Michael Ryan S. Puncia wrote:
How much memory should I give to the kernel and postgresql
I have 1G of memory and 120G of HD
Shared Buffers = ?
Vacuum Mem = ?
Maybe you should read
Michael Ryan S. Puncia wrote:
Hi everyone .
How much memory should I give to the kernel and postgresql
I have 1G of memory and 120G of HD
Devrim's pointed you to a guide to the configuration file. There's also
an introduction to performance tuning on the same site.
An important thing to