Neil Conway wrote:
Another idea would be to improve the quality of the plan we generate at PREPARE time:
for instance you could generate 'n' plans for various combinations of
input parameters, and then choose the best query plan at EXECUTE time.
It's a difficult problem to solve, however (consider
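The idea above — plan several candidates at PREPARE time, then pick among them at EXECUTE time once the real parameter is known — can be sketched roughly as follows. This is purely illustrative (all names and costs are invented; PostgreSQL does not implement this):

```python
# Hypothetical sketch: generate n candidate plans at PREPARE, then cost
# each one against the actual parameter at EXECUTE and run the cheapest.
# Plan functions and cost numbers here are made up for illustration.

def plan_seq_scan(param):
    return ("seq_scan", 1000)          # flat cost regardless of selectivity

def plan_index_scan(param):
    # an index scan is cheap for a selective value, expensive otherwise
    selectivity = 0.9 if param == "common_value" else 0.01
    return ("index_scan", 10000 * selectivity)

def prepare(query):
    # build the n candidate plans up front, once
    return [plan_seq_scan, plan_index_scan]

def execute(candidates, param):
    # cost every candidate against the real parameter, pick the cheapest
    costed = [planner(param) for planner in candidates]
    return min(costed, key=lambda c: c[1])[0]

plans = prepare("SELECT ... WHERE col = $1")
print(execute(plans, "rare_value"))     # index scan wins for a selective value
print(execute(plans, "common_value"))   # seq scan wins for a common value
```

The hard part, as the message notes, is choosing which parameter combinations to plan for in advance.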
TODO has:
* Consider use of open/fcntl(O_DIRECT) to minimize OS caching
Should the item be removed?
---
Neil Conway wrote:
On Mon, 2004-09-20 at 17:57, Guy Thornley wrote:
According to the manpage, O_DIRECT
In article [EMAIL PROTECTED],
Scott Kirkwood [EMAIL PROTECTED] writes:
I couldn't find anything in the docs or in the mailing list on this,
but it is something that Oracle appears to do as does MySQL.
The idea, I believe, is to do a quick (hash) string lookup of the
query and if it's exactly
Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
No, postgres didn't do things in reverse order. It hashed the empty table and
then went ahead and checked every record of the non-empty table against the
empty hash table.
Reading the code there's no check for this, and it seems like it would
Bruce Momjian [EMAIL PROTECTED] writes:
TODO has:
* Consider use of open/fcntl(O_DIRECT) to minimize OS caching
Should the item be removed?
I think it's fine ;-) ... it says consider it, not do it. The point
is that we could do with more research in this area, even if O_DIRECT
per se is
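For reference, the flag under discussion is passed at open() time. A minimal Python sketch (O_DIRECT is platform-specific, and real direct I/O also requires block-aligned read buffers, which this sketch does not handle):

```python
import os

# O_DIRECT asks the kernel to bypass the OS page cache for this file.
# The flag is platform-specific (Linux defines it; some systems don't),
# so probe for it rather than assuming it exists.
O_DIRECT = getattr(os, "O_DIRECT", 0)

def open_uncached(path):
    # Falls back to a normal open where O_DIRECT is unavailable.
    # Actually reading through an O_DIRECT descriptor needs
    # block-aligned buffers (e.g. via mmap); omitted here.
    return os.open(path, os.O_RDONLY | O_DIRECT)

if O_DIRECT == 0:
    print("O_DIRECT not available on this platform")
```

Whether bypassing the OS cache helps depends on whether the database's own buffer management beats the kernel's, which is exactly the research question the TODO item is pointing at.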
Not knowing anything about the internals of pg, I don't know how this relates, but in theory,
query plan caching is not just about saving time re-planning queries, it's about scalability.
Optimizing queries requires shared locks on the database metadata, which, as I understand it,
causes
Hi,
I expect you mean RAID 1/0 or 1+0, since the CX300 didn't support RAID 10 last time I looked.
Whether you are using a SAN or not, you should consider putting the WAL files (the pg_xlog
folder) on separate disks from the DB. Since the log files are mostly written to, not read from,
you could
I upgraded to 7.4.3 this morning and
did a vacuum full analyze on the problem table and now the indexes show
the correct number of records
Patrick Hatcher
Macys.Com
Josh Berkus [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
09/21/04 10:49 AM
To: Patrick Hatcher [EMAIL PROTECTED]
cc:
I'm not an expert, but I've been hunting down a killer performance problem
for a while now. It seems this may be the cause.
At peak load, our database slows to a trickle. The CPU and disk utilization
are normal - 20-30% used CPU and disk performance good.
All of our postgres processes end up
Jason Coene [EMAIL PROTECTED] writes:
All of our postgres processes end up in the semwai state - seemingly
waiting on other queries to complete. If the system isn't taxed in CPU or
disk, I have a good feeling that this may be the cause.
Whatever that is, I'll bet lunch that it's got 0 to do
Jason Coene wrote:
I'm not an expert, but I've been hunting down a killer performance problem
for a while now. It seems this may be the cause.
At peak load, our database slows to a trickle. The CPU and disk utilization
are normal - 20-30% used CPU and disk performance good.
For a peak load
Scott:
We have seen similar issues when we have had massive load on our web
server. My determination was that simply the act of spawning and
stopping postgres sessions was very heavy on the box, and by
implementing connection pooling (sqlrelay), we got much higher
throughput, and better response
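The pooling win described here — paying the expensive backend-startup cost once and reusing connections across requests — looks roughly like this (a generic sketch with fake connection objects, not sqlrelay itself):

```python
# Generic illustration of connection pooling: establish N connections
# up front, then check them out and return them instead of opening a
# fresh backend per request.
import queue

class FakeConnection:
    opened = 0                      # counts expensive "fork + auth" setups
    def __init__(self):
        FakeConnection.opened += 1

class Pool:
    def __init__(self, size):
        self._q = queue.Queue()
        for _ in range(size):       # pay the setup cost once, up front
            self._q.put(FakeConnection())
    def acquire(self):
        return self._q.get()        # blocks if the pool is exhausted
    def release(self, conn):
        self._q.put(conn)           # return for reuse instead of closing

pool = Pool(size=5)
for _ in range(100):                # 100 "page views" ...
    conn = pool.acquire()
    pool.release(conn)
print(FakeConnection.opened)        # ... but only 5 connections ever opened
```

The pool size also acts as a throttle: excess requests queue for a connection rather than spawning more backends and fighting over locks.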
Hi Tom,
Easily recreated with Apache benchmark, ab -n 3 -c 3000
http://webserver . This runs 1 query per page, everything else is cached
on webserver.
The lone query:
SELECT
id,
gameid,
forumid,
subject
FROM threads
WHERE nuked = 0
ORDER BY nuked DESC,
Jason Coene [EMAIL PROTECTED] writes:
All of our postgres processes end up in the semwai state - seemingly
waiting on other queries to complete. If the system isn't taxed in CPU or
disk, I have a good feeling that this may be the cause.
Well, it's possible contention of some sort is an
Jason Coene [EMAIL PROTECTED] writes:
I'm not sure how I go about getting the stack traceback you need. Any info
on this? Results of ps below. System is dual xeon 2.6, 2gb ram, hardware
raid 10 running FreeBSD 5.2.1.
Hmm. Dual Xeon sets off alarm bells ...
I think you are probably looking
I've looked at PREPARE, but apparently it only lasts
per-session - that's worthless in our case (web based
service, one connection per data-requiring connection).
That's a non-sequitur. Most 'normal' high volume web apps have persistent
DB connections, one per http server process. Are you
Jason Coene wrote:
Hi Tom,
Easily recreated with Apache benchmark, ab -n 3 -c 3000
http://webserver . This runs 1 query per page, everything else is cached
on webserver.
That test makes 3 requests with 3000 concurrent connections; that is not a normal
load. Describe your HW for us.
3000 connections
Hi, Jason,
On Thu, 23 Sep 2004 12:53:25 -0400
Jason Coene [EMAIL PROTECTED] wrote:
I've looked at PREPARE, but apparently it only lasts per-session - that's
worthless in our case (web based service, one connection per data-requiring
connection).
This sounds like the loads of connection init
I believe 1/0 or 1+0 is aka RAID-10. CX300 doesn't support 0+1.
So far I am aware of two things: the cache page size is 8KB (it can be increased or
decreased), and the stripe element size defaults to 128 sectors.
Thanks,
Anjan
-Original Message-
From: Mr Pink
Tom,
I think you are probably looking at the same problem previously reported
by Josh Berkus among others. Does the rate of context swaps shown by
vmstat go through the roof when this happens? If you strace or ktrace
one of the backends, do you see lots of semop()s and little else?
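To make the vmstat check concrete, the number to watch is the `cs` (context switches per second) column. A small helper for pulling it out of vmstat output — the sample line below is invented, and column layout varies between Linux and the BSDs, so the code locates the column by header name rather than position:

```python
# Pull the context-switch ("cs") column out of vmstat output.
# SAMPLE is a fabricated line for illustration; real output differs.
SAMPLE = """\
 r  b   swpd   free   buff  cache  si  so  bi  bo    in     cs us sy id wa
 4  0      0  81234  10240 902144   0   0   2  11  1200 145032 25 30 40  5
"""

def context_switches(vmstat_text, cs_column="cs"):
    lines = vmstat_text.strip().splitlines()
    header = lines[0].split()
    idx = header.index(cs_column)          # find "cs" by name, not position
    return [int(row.split()[idx]) for row in lines[1:]]

print(context_switches(SAMPLE))  # a six-figure cs rate is the red flag here
```

A rate in the tens of thousands and up, while CPU sits mostly idle, is the "through the roof" pattern being asked about.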
That
On Tue, 2004-09-21 at 03:54, Mariusz Czuada wrote:
Hi all,
I searched the list archives, but did not find anything about the HT feature of Pentium
4/Xeon processors. I wonder if hyperthreading can boost or decrease
performance. AFAIK for other commercial servers (MSSQL, Oracle) official
documents
Josh Berkus [EMAIL PROTECTED] writes:
I think you are probably looking at the same problem previously reported
by Josh Berkus among others.
That would be interesting. Previously we've only demonstrated the
problem on long-running queries, but I suppose it could also affect
massive
Hi All,
It does sound like we should be pooling connections somehow. I'll be
looking at implementing that shortly. I'd really like to understand what
the actual problem is, though.
Sorry, I meant 30,000 with 300 connections - not 3,000. The 300 connections
/ second is realistic, if not
Update:
I just tried running the same test (ab with 150 concurrent connections)
while connecting to postgres through 35 persistent connections (PHP
library), and had roughly the same type of results. This should eliminate
the new connection overhead. I've attached top and vmstat. I let it run
My database was converted from MySQL a while back and has maintained all
of the indexes which were previously used. At the time, however, there
were limitations on the way PostgreSQL handled the indexes compared to
MySQL.
Meaning that under MySQL, it would make use of a multi-column index
If you have set up the postgres instance to write stats, check the tables
pg_stat_user_indexes, pg_statio_all_indexes and so on (use the \dS option at the psql
prompt to see these system tables); also check the pg_stat_user_tables table and
similar beasts for information on total accesses, etc. Between
On Tue, 14 Sep 2004, Stephen Crowley wrote:
Problem solved.. I set the fetchSize to a reasonable value instead of
the default of unlimited in the PreparedStatement and now the query
is . After some searching it seems this is a common problem, would it
make sense to change the default
Thanks for the explanation. So what sort of changes need to be made to
the client/server protocol to fix this problem?
On Thu, 23 Sep 2004 18:22:15 -0500 (EST), Kris Jurka [EMAIL PROTECTED] wrote:
On Tue, 14 Sep 2004, Stephen Crowley wrote:
Problem solved.. I set the fetchSize to a
On Thu, 23 Sep 2004, Stephen Crowley wrote:
Thanks for the explanation. So what sort of changes need to be made to
the client/server protocol to fix this problem?
The problem is that there is no way to indicate why you are using a
particular statement in the extended query protocol. For
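The fetchSize fix discussed in this thread, as a language-neutral sketch: instead of materializing the entire result set at once (the JDBC driver's old default of unlimited), pull rows in bounded batches so client memory stays flat regardless of row count. The generator below stands in for a driver's cursor:

```python
# Bounded-batch fetching: yield the result set in fixed-size chunks
# instead of buffering every row in client memory at once.
def fetch_in_batches(cursor_rows, fetch_size):
    batch = []
    for row in cursor_rows:
        batch.append(row)
        if len(batch) == fetch_size:   # ship a full batch to the client
            yield batch
            batch = []
    if batch:                          # final partial batch
        yield batch

rows = iter(range(10))                 # stand-in for a server-side cursor
batches = list(fetch_in_batches(rows, fetch_size=4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The protocol question raised above is about how the server learns the client wants this incremental behavior for a given statement.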
Jason,
Sorry, I meant 30,000 with 300 connections - not 3,000. The 300
connections
/ second is realistic, if not underestimated. As is the nature of
our site
(realtime information about online gaming), there's a huge fan base
and as a
big upset happens, we'll do 50,000 page views in a
Hi Josh,
I just tried using pgpool to pool the connections, and ran:
ab -n 1000 -c 50 http://wstg.int/portal/news/
I ran some previous queries to get pgpool to pre-establish all the
connections, and ab ran for a few minutes (with one query per page, eek!).
It was still exhibiting the same
On Thu, Sep 23, 2004 at 09:23:51PM -0400, Jason Coene wrote:
I ran some previous queries to get pgpool to pre-establish all the
connections, and ab ran for a few minutes (with one query per page, eek!).
It was still exhibiting the same problems as before. While so many new
connections at once