On Wed, 5 Jan 2005 22:35:42 +0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Now I have turned hyperthreading off and readjusted the conf. I found the problem query,
which was:
an update of one flag of the table [8 million records, which I think is not too much]
Ahh, the huge update. Below are my hints I've
found while trying to optimize such updates.
First of all, does this update really change this 'flag'?
Say, you have update:
UPDATE foo SET flag = 4 WHERE [blah];
are you sure that flag is always different than 4?
If not, then add:
UPDATE
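Dawid's first hint (the quoted message is cut off above) can be sketched as follows. This is a runnable illustration using Python's sqlite3, since no PostgreSQL server is assumed here; the table `foo`, the column `flag`, and the values are hypothetical, but the extra `flag <> 4` predicate carries over to PostgreSQL 7.3, where skipping already-correct rows also avoids creating dead tuples for them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, flag INTEGER)")
cur.executemany("INSERT INTO foo (flag) VALUES (?)",
                [(4,)] * 900 + [(0,)] * 100)

# Naive form: rewrites every row, even the 900 already holding flag = 4.
cur.execute("UPDATE foo SET flag = 4")
print(cur.rowcount)   # 1000

# With the extra predicate, already-correct rows are skipped entirely;
# on PostgreSQL that also means no dead tuples are created for them.
cur.execute("UPDATE foo SET flag = 4 WHERE flag <> 4")
print(cur.rowcount)   # 0
```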
On Thu, 6 Jan 2005 09:06:55 -0800
Josh Berkus josh@agliodbs.com wrote:
I can't tell you how many times I've seen this sort of thing. And
the developers always tell me "Well, we denormalized for performance
reasons" ...
Now that's rich. I don't think I've ever seen a database perform
Dawid,
Ahh, the huge update. Below are my hints I've
found while trying to optimize such updates.
Divide the update, if possible. This way the query uses
less memory and you may call VACUUM in between
updates. To do this, first SELECT INTO TEMPORARY
table the list of rows to update (their ids
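The "divide the update" idea above can be sketched like this, again with sqlite3 so it runs standalone; the table and column names are made up. On PostgreSQL 7.3 you would run VACUUM on the table between chunks, rather than only committing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, flag INTEGER)")
cur.executemany("INSERT INTO foo (id, flag) VALUES (?, 0)",
                [(i,) for i in range(10_000)])

# Step 1: stage the ids of the rows that actually need the change.
cur.execute("CREATE TEMPORARY TABLE to_update AS "
            "SELECT id FROM foo WHERE flag <> 4")

# Step 2: walk that id list in fixed-size chunks, committing per chunk.
CHUNK = 500
cur.execute("SELECT id FROM to_update ORDER BY id")
ids = [r[0] for r in cur.fetchall()]
for start in range(0, len(ids), CHUNK):
    chunk = ids[start:start + CHUNK]
    qmarks = ",".join("?" * len(chunk))
    cur.execute(f"UPDATE foo SET flag = 4 WHERE id IN ({qmarks})", chunk)
    conn.commit()   # on PostgreSQL you could VACUUM foo between chunks

cur.execute("SELECT COUNT(*) FROM foo WHERE flag = 4")
print(cur.fetchone()[0])   # 10000
```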
Reading can be worse for a normalized db, which is likely what the
developers were concerned about.
One always has to be careful to measure the right thing.
Dave
Frank Wiles wrote:
On Thu, 6 Jan 2005 09:06:55 -0800
Josh Berkus josh@agliodbs.com wrote:
I can't tell you how many times I've
On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:
Reading can be worse for a normalized db, which is likely what the
developers were concerned about.
To a point. Once you have enough data that you start running out of
space in memory then normalization starts to rapidly gain ground again
Frank Wiles [EMAIL PROTECTED] writes:
Now that's rich. I don't think I've ever seen a database perform
worse after it was normalized. In fact, I can't even think of a
situation where it could!
Just remember. All generalisations are false.
--
greg
Greg Stark wrote:
Frank Wiles [EMAIL PROTECTED] writes:
Now that's rich. I don't think I've ever seen a database perform
worse after it was normalized. In fact, I can't even think of a
situation where it could!
Just remember. All generalisations are false.
In general, I would agree.
Today is the first official day of this week and the system runs
better in several points, but there are still some points that need to
be corrected. Some queries or some tables are very slow. I think the
queries inside the program need to be rewritten.
Now I put the sort mem to a little
Amrit,
can you post the EXPLAIN output of
your slow update query
so we can see what it does?
Dave
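What Dave is asking for is the output of PostgreSQL's EXPLAIN (ideally EXPLAIN ANALYZE) run against the slow UPDATE. As a self-contained sketch of inspecting a plan, here is sqlite3's rough equivalent, EXPLAIN QUERY PLAN; the table and predicate are hypothetical, and a real diagnosis would use PostgreSQL's own EXPLAIN output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, flag INTEGER)")

# Ask the planner how it would execute the update; with no index on
# flag this shows a full-table scan, which on an 8-million-row table
# is exactly the kind of detail plan output reveals.
cur.execute("EXPLAIN QUERY PLAN UPDATE foo SET flag = 4 WHERE flag <> 4")
for row in cur.fetchall():
    print(row)   # the detail column mentions a SCAN of foo
```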
[EMAIL PROTECTED] wrote:
Today is the first official day of this week and the system runs
better in several points, but there are still some points that need to
be corrected. Some queries
[EMAIL PROTECTED] wrote:
Now I have turned hyperthreading off and readjusted the conf. I found the problem query,
which was:
an update of one flag of the table [8 million records, which I think is not too much].
When I turned this query off, everything went fine.
I don't know whether updating the data is much slower than
Today is the first official day of this week and the system runs better in
several points, but there are still some points that need to be corrected. Some
queries or some tables are very slow. I think the queries inside the program
need to be rewritten.
Now I put the sort mem to a little bit
On Tue, 4 Jan 2005 [EMAIL PROTECTED] wrote:
Today is the first official day of this week and the system runs better in
several points, but there are still some points that need to be corrected.
Some
queries or some tables are very slow. I think the queries inside the program
need to be
I will put in more RAM, but someone said RH 9.0 had poor recognition of RAM
above 4 Gb?
I think they were referring to 32-bit architectures, not distributions as
such.
Sorry for the wrong reason; then should I increase RAM to more than 4 Gb. on a 32-bit
architecture?
Should I disable hyperthreading
shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is
shared, so no matter how many connections you have it will only use 96M.
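The arithmetic in that sentence can be checked directly (8192 bytes is PostgreSQL's default block size):

```python
# Each shared buffer is one 8 KB page, so 12000 buffers is a fixed
# block of shared memory regardless of how many connections are open.
BLOCK_SIZE = 8192
buffers = 12000
total_bytes = buffers * BLOCK_SIZE
print(total_bytes)            # 98304000
print(total_bytes / 2**20)    # 93.75 MiB, i.e. roughly the quoted ~96 MB
```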
Now I use the figure of 27853.
Will increasing the effective cache size to around 20 make a little
bit of
improvement? Do you think so?
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on my
8GB Opteron server and 10K seems to be the best setting.
also effective cache is the sum of kernel buffers + shared_buffers so it
should be
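A sketch of that rule of thumb: effective_cache_size is measured in 8 KB pages, and the kernel-cache figure below is a hypothetical example (what `free` might report as cached), not a measured value from this server.

```python
# effective_cache_size is only a planner hint; per the advice above it
# should cover kernel cache plus shared_buffers, expressed in 8 KB pages.
PAGE = 8192
kernel_cache_bytes = 2 * 1024**3     # hypothetical ~2 GB of kernel cache
shared_buffers_pages = 10_000        # the 10K setting suggested above
pages = kernel_cache_bytes // PAGE + shared_buffers_pages
print(pages)                         # 272144
```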
William Yu wrote:
[EMAIL PROTECTED] wrote:
Yes , vacuumdb daily.
Do you vacuum table by table or the entire DB? I find over time, the
system tables can get very bloated and cause a lot of slowdowns just due
to schema queries/updates. You might want to try a VACUUM FULL ANALYZE
just on the
Amrit --
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Mon 1/3/2005 12:18 AM
To: Mark Kirkwood
Cc: PGsql-performance
Subject: Re: [PERFORM] Low Performance for big hospital server ..
shared_buffers = 12000 will use 12000*8192 bytes (i.e about
Decreasing the sort mem too much [8196] made the performance much slower,
so I use
sort_mem = 16384
and leave the effective cache at the same value; the result is quite better,
but I
should wait for tomorrow morning [official hours] to see the end result.
You could also profile your queries to see
William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on
my 8GB Opteron server and 10K seems to be the best setting.
Be careful here, he is not using opterons which can access physical
Amrit,
I realize you may be stuck with 7.3.x but you should be aware that 7.4
is considerably faster, and 8.0 appears to be even faster yet.
I would seriously consider upgrading, if at all possible.
A few more hints.
Random page cost is quite conservative if you have reasonably fast
amrit wrote:
I have tried to adjust my server for a couple of weeks with some success, but it is still
slow when the server is under stress in the morning from many connections. I use
postgresql 7.3.2-1 with RH 9 on a machine of 2 Xeon 3.0 Ghz and RAM of 4 Gb.
Since 1 1/2 yr. when I started to use the
I realize you may be stuck with 7.3.x but you should be aware that 7.4
is considerably faster, and 8.0 appears to be even faster yet.
There are some incompatibilities between 7.3 and 8, so it is rather difficult to
change.
I would seriously consider upgrading, if at all possible.
A few more
On Monday 03 January 2005 10:40, [EMAIL PROTECTED] wrote:
I realize you may be stuck with 7.3.x but you should be aware that 7.4
is considerably faster, and 8.0 appears to be even faster yet.
There are some incompatibilities between 7.3 and 8, so it is rather difficult
to change.
Sure, but
Dave Cramer wrote:
William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on
my 8GB Opteron server and 10K seems to be the best setting.
Be careful here, he is not using opterons which
[EMAIL PROTECTED] wrote:
I realize you may be stuck with 7.3.x but you should be aware that 7.4
is considerably faster, and 8.0 appears to be even faster yet.
There are some incompatibilities between 7.3 and 8, so it is rather difficult
to
change.
I would seriously consider upgrading, if
William Yu wrote:
Dave Cramer wrote:
William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings
on my 8GB Opteron server and 10K seems to be the best setting.
Be careful here, he is not
[EMAIL PROTECTED] wrote:
I have tried to adjust my server for a couple of weeks with some success, but it is still
slow when the server is under stress in the morning from many connections. I use
postgresql 7.3.2-1 with RH 9 on a machine of 2 Xeon 3.0 Ghz and RAM of 4 Gb.
Since 1 1/2 yr. when I started to use the
On Sun, Jan 02, 2005 at 09:54:32AM +0700, [EMAIL PROTECTED] wrote:
postgresql 7.3.2-1 with RH 9 on a machine of 2 Xeon 3.0 Ghz and RAM of 4 Gb.
You may want to try disabling hyperthreading, if you don't mind
rebooting.
grew up to 3.5 Gb and there were more than 160 concurrent connections.
The common wisdom of shared buffers is around 6-10% of available memory.
Your proposal below is about 50% of memory.
I'm not sure what the original numbers actually meant; they are quite large.
also effective cache is the sum of kernel buffers + shared_buffers so it
should be bigger than shared
The common wisdom of shared buffers is around 6-10% of available memory.
Your proposal below is about 50% of memory.
I'm not sure what the original numbers actually meant; they are quite large.
I will try to reduce shared buffer to 1536 [1.87 Mb].
also effective cache is the sum of kernel
[EMAIL PROTECTED] wrote:
max_connections = 160
shared_buffers = 2048 [Total = 2.5 Gb.]
sort_mem = 8192 [Total = 1280 Mb.]
vacuum_mem = 16384
effective_cache_size = 128897 [= 1007 Mb. = 1 Gb. ]
Will it be more suitable for my server than before?
I would keep shared_buffers in the
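The bracketed totals in the quoted config can be sanity-checked, assuming 7.3's units (shared_buffers and effective_cache_size in 8 KB pages, sort_mem in KB per sort, so the per-connection product is only an upper bound):

```python
PAGE_BYTES = 8192     # unit for shared_buffers and effective_cache_size
KB = 1024
MIB = 2**20

# shared_buffers = 2048 pages:
print(2048 * PAGE_BYTES / MIB)     # 16.0 MB -- the quoted "2.5 Gb" does
                                   # not match, which may be why the
                                   # numbers looked unclear to repliers
# sort_mem = 8192 KB across 160 connections:
print(8192 * KB * 160 / MIB)       # 1280.0, matching [Total = 1280 Mb.]
# effective_cache_size = 128897 pages:
print(128897 * PAGE_BYTES / MIB)   # about 1007, matching the ~1 Gb figure
```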
I have tried to adjust my server for a couple of weeks with some success, but it is still
slow when the server is under stress in the morning from many connections. I use
postgresql 7.3.2-1 with RH 9 on a machine of 2 Xeon 3.0 Ghz and RAM of 4 Gb.
Since 1 1/2 yr. when I started to use the database server after