Hi, Fzied,
[EMAIL PROTECTED] wrote:
I'm using httperf/autobench for measurements, and the best result I can
get is that my system can handle traffic of almost 1600 new
con/sec.
Are you using connection pooling or persistent connections between
PostgreSQL and the Apaches?
Maybe it simply is
From: Merlin Moncure [mailto:[EMAIL PROTECTED]
To: Merkel Marcel (CR/AEM4)
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Big array speed issues
On 6/20/06, Merkel Marcel (CR/AEM4) [EMAIL PROTECTED] wrote:
I use libpqxx to access the database. This might be another
bottleneck,
Hi, Zydoon,
Zydoon wrote:
Now I'm trying to make my tests, and I'm not that sure I will make the
switch to the PSeries, since my dual xeon with 4 G RAM can handle 3500
concurrent postmasters consuming 3.7 G of the RAM. I cannot reach this
number on the PSeries with 2 G.
This sounds like you
Hi, Chris,
Chris Beecroft wrote:
Query is now returning with results on our replicated database. Will
vacuum analyze production now. So it seems to have done the trick. Now
the question is has our auto vacuum failed or was not set up properly...
A question for my IT people.
Most of the
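One quick way to answer the "has our auto vacuum failed" question is to ask the server when each table was last vacuumed. A sketch (note: the `last_vacuum`/`last_autovacuum` columns of `pg_stat_user_tables` appeared in releases newer than the 8.1-era servers discussed in this thread, so this only applies on later versions):

```sql
-- Sketch: per-table (auto)vacuum and analyze timestamps.
-- Requires a PostgreSQL version whose pg_stat_user_tables has these columns.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;
```

A table that is written heavily but shows NULL in `last_autovacuum` is a strong hint that autovacuum never reached it.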
Hey - I am running into a data relation bloat problem which I believe is
causing fairly significant
slowdown of my updates. I am using
PostgreSQL 8.1.4 on
Csaba Nagy [EMAIL PROTECTED] writes:
So, it appears my autovacuum is just NOT working... I must have screwed
something up, but I cannot see what.
Is it possible that you have long-running transactions?
The other question I was wondering about is if autovacuum is actually
choosing to vacuum
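Both questions above can be probed from `pg_stat_activity`. A sketch, using the modern column layout (`state`, `xact_start`, `query`) — on the 8.1-era servers in this thread the view had different column names (e.g. `current_query`), so treat this as illustrative:

```sql
-- Sketch: find sessions holding transactions open, oldest first.
-- Long-idle "idle in transaction" sessions block VACUUM from
-- reclaiming dead rows newer than their snapshot.
SELECT pid, usename, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
   OR xact_start < now() - interval '5 minutes'
ORDER BY xact_start;
```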
Hello People,
I'm trying to solve what I feel is a
performance/configuration/query error on my side. I'm fairly new to
configuring PostgreSQL, so I might be completely wrong with my
configuration.
My database consists of 44 tables, about 20GB. Two of those tables are
'big/huge'. Table
Could
you post an explain analyze of the query? Just FYI, if you do an explain
analyze of the insert statement, it will actually do the insert. If you
don't want that just post an explain analyze of the select
part.
To me
it would be interesting to compare just the select
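Since EXPLAIN ANALYZE of an INSERT really executes it, as noted above, a common trick is to wrap it in a transaction and roll back, so you get the plan and timings without keeping the rows. A sketch (table and column names here are made up for illustration):

```sql
-- Sketch: time the INSERT without keeping its effects.
BEGIN;
EXPLAIN ANALYZE
INSERT INTO target_table (id, val)
SELECT id, val FROM source_table;  -- the insert genuinely runs here
ROLLBACK;                          -- discard the rows, keep the output
```

The ROLLBACK discards the inserted rows but the plan and actual-time figures have already been printed.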
Our application is broken down quite well. We have two main writing processes
writing to two separate sets of tables. No crossing over, nothing to prohibit
the
vacuuming in the manner you describe.
My longest transactions on the tables in question are typically quite short
until
Hi Nicky,
I guess you should try to increase the memory settings of PostgreSQL first.
work_mem = 65536
is a bit low for such large joins.
Did you get a chance to watch the directory
PGDATA/base/DBOID/pgsql_tmp to see how large the temporary file is
during this query? I'm sure that there is
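For a one-off test, work_mem can be raised for just the current session instead of editing postgresql.conf; the right value depends on available RAM and how many sessions run such sorts concurrently. A sketch:

```sql
-- Sketch: per-session experiment with a larger sort/hash memory budget.
-- (On 8.1 the setting took a plain integer in KB, e.g. 262144;
--  the unit-suffixed form below is for later releases.)
SET work_mem = '256MB';
-- Now rerun the slow join and watch PGDATA/base/<dboid>/pgsql_tmp:
-- if the temp files shrink or disappear, the sort now fits in memory.
```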
On Wed, 2006-06-21 at 17:27, jody brownell wrote:
Our application is broken down quite well. We have two main writing processes
writing to two separate sets of tables. No crossing over, nothing to prohibit
the
vacuuming in the manner you describe.
It really doesn't matter what table
That is interesting.
There is one thread keeping a transaction open it appears from ps
postgres: app app xxx(42644) idle in transaction
However, I created a test table t not configured in pg_autovacuum. I inserted
a whack of rows and saw this:
Jun 21 12:38:45 vanquish postgres[1525]: [8-1]
Oops - that was confusing. The idle in transaction was from one box and the
autovacuum was from another.
So, one question was answered: autovacuum is running and selecting the tables,
but apparently not at the same time as my app, probably due to this idle in
transaction. I will track it
down
On Wed, 2006-06-21 at 18:21, jody brownell wrote:
That is interesting.
There is one thread keeping a transaction open it appears from ps
postgres: app app xxx(42644) idle in transaction
That shouldn't be a problem in itself; idle in transaction happens all
the time between 2 commands in
that is exactly what I am seeing, one process, no change, always in idle while
the others are constantly
changing their state.
looks like someone opened a tx then is blocking on a queue lock or something.
dang.
On Wednesday 21 June 2006 13:36, Csaba Nagy wrote:
On Wed, 2006-06-21 at 18:21,
block and row are always configured on - they are my friend :)
thanks
On Wednesday 21 June 2006 13:44, Csaba Nagy wrote:
On Wed, 2006-06-21 at 18:39, jody brownell wrote:
that is exactly what I am seeing, one process, no change, always in idle
while the others are constantly
changing
On Wed, 2006-06-21 at 08:47, nicky wrote:
Hello People,
SNIPPAGE
The query above takes around 42 minutes.
However, I also have a wimpy desktop machine with 1 GB RAM: Windows
with MSSQL 2000 (default installation), same database structure, same
indexes, same query, etc., and it takes 17
Howdy,
Didn't see anything in the archives, so I thought I'd ask: has anyone
done any work to gauge the performance penalty of using DOMAINs? I'm
thinking of something like Elein's email DOMAIN:
http://www.varlena.com/GeneralBits/
I figured that most simple domains that have a
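For reference, a domain of the kind being discussed looks roughly like this. This is a simplified sketch modeled on the idea of Elein's email domain, not her actual GeneralBits definition; the CHECK expression is evaluated on every assignment or cast to the domain type, which is where any performance penalty would come from:

```sql
-- Sketch: a hypothetical, deliberately simple email domain.
-- [.] is used instead of \. to avoid string-escape ambiguity in 8.x.
CREATE DOMAIN email AS text
  CHECK (VALUE ~ '^[^@[:space:]]+@[^@[:space:]]+[.][^@[:space:]]+$');

-- The constraint fires on every INSERT/UPDATE into an email column.
CREATE TABLE subscribers (addr email);
```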
On Wed, Jun 21, 2006 at 09:29:03AM +0200, Merkel Marcel (CR/AEM4) wrote:
From: Merlin Moncure [mailto:[EMAIL PROTECTED]
To: Merkel Marcel (CR/AEM4)
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Big array speed issues
On 6/20/06, Merkel Marcel (CR/AEM4) [EMAIL PROTECTED]
On Wed, Jun 21, 2006 at 03:47:19PM +0200, nicky wrote:
WHERE substr(t0.code,1,2) not in ('14','15','16','17')
AND (substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)
AND EXTRACT(YEAR from t1.datum) > 2004;
How much data do you expect to be getting back from that where clause?
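One way to answer that selectivity question directly is to count how many rows each predicate keeps. A sketch against the tables quoted above (assuming `t0` is a real table name; the counts tell you whether the filters are actually narrowing the result much):

```sql
-- Sketch: rough selectivity check for the predicates in the query.
SELECT count(*) FROM t0;  -- baseline row count
SELECT count(*) FROM t0
WHERE substr(t0.code,1,2) NOT IN ('14','15','16','17');
```

If the filtered count is close to the baseline, the WHERE clause discards little, a sequential scan is expected, and index tricks will not help much.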
On Wed, Jun 21, 2006 at 10:52:42AM -0300, jody brownell wrote:
A db wide vacuum full outputs this at the end.
INFO: free space map contains 32848 pages in 159 relations
DETAIL: A total of 24192 page slots are in use (including overhead).
24192 page slots are required to track all free
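Those figures correspond to the free space map settings in postgresql.conf on 8.x (the FSM settings were removed in 8.4, where the free space map became self-managing). A sketch of the relevant fragment; the values shown are illustrative, not recommendations:

```
# postgresql.conf (PostgreSQL 8.x only)
max_fsm_pages = 200000     # must exceed the "page slots ... required" figure
max_fsm_relations = 1000   # must exceed the number of relations reported
```

The VACUUM output above says the configured slots are sufficient; if "required" ever exceeds the setting, VACUUM loses track of free space and tables bloat.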
On Wed, Jun 21, 2006 at 11:26:16AM -0700, David Wheeler wrote:
Howdy,
Didn't see anything in the archives, so I thought I'd ask: has anyone
done any work to gauge the performance penalty of using DOMAINs? I'm
thinking of something like Elein's email DOMAIN:
On Wed, Jun 21, 2006 at 01:21:05PM -0300, jody brownell wrote:
Jun 21 13:04:04 vanquish postgres[3311]: [19-1] DEBUG: target: removed
5645231 row versions in 106508 pages
Jun 21 13:04:04 vanquish postgres[3311]: [19-2] DETAIL: CPU 3.37s/1.23u sec
elapsed 40.63 sec.
Jun 21 13:04:04
OK, this was over a 12-16 hour period of not having anything done with it,
though, right?
I am assuming that if autovacuum were active throughout that period, we would be
somewhat better off... is that not accurate?
On Wednesday 21 June 2006 16:38, Jim C. Nasby wrote:
5
BTW, in production with a similar load - autovacuum with default out of the box
settings seems to work quite well
I double checked this earlier today.
Not yet. I would first like to know what is the time-consuming part and
what is a workaround. If you are sure individual columns for every
entry of the array solve the issue, I will joyfully implement it. The
downside of this approach is that the array dimensions are not always the
same in my
David Wheeler [EMAIL PROTECTED] writes:
Didn't see anything in the archives, so I thought I'd ask: has anyone
done any work to gauge the performance penalty of using DOMAINs?
There are some reports in the archives of particular usage patterns
where they pretty much suck, because
I have a really stupid question about top: what exactly is iowait CPU time?
Alex
On Wed, Jun 21, 2006 at 04:46:15PM -0400, Alex Turner wrote:
I have a really stupid question about top, what exactly is iowait CPU time?
Time while the CPU is idle, but at least one I/O request is outstanding.
In other words, if you're at 100% I/O-wait, you're heavily I/O-bound and your
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however
some queries are super slow. One function in particular, which used to
take
On Wed, Jun 21, 2006 at 04:41:45PM -0300, jody brownell wrote:
BTW, in production with a similar load - autovacuum with default out of the
box
settings seems to work quite well
I double checked this earlier today.
So what's different between production and the machine with the
On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
RAID *4*?
If you do any kind of updating at all, you're likely to be real unhappy
Jim C. Nasby wrote:
On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
RAID *4*?
oops, raid 5 (but we are getting good io
Well, for one we did introduce a TX leak which was preventing autovac from
running. I guess that was _the_ issue.
I have since fixed it, and now testing looks much better, nothing
concerning
(fingers crossed until morning :)). debug logs are full of vac/anal of the
tables... so, for
David,
But I'm also interested in how Elein made the email domain
case-insensitive, since I'd like to have/create a truly case-insensitive
text type (ITEXT anyone?). The functions for the operator class there
were mainly written in SQL, and if it adds a significant overhead,
I'm not sure
since I'd like to have/create a truly case-insensitive
text type (ITEXT anyone?).
I haven't seen it mentioned in this thread yet, but have you looked
at citext?
http://gborg.postgresql.org/project/citext/projdisplay.php
I don't have any experience with it, but perhaps it can do what
Hello,
I am getting following error while inserting a row into the "abc" table:
ERROR: fmgr_info: function 2720768: cache lookup failed
Table "abc" has one trigger called "abct"
Definition is as follows:
BEGIN;
LOCK TABLE abc IN SHARE ROW EXCLUSIVE MODE;
create TRIGGER abct
AFTER INSERT OR
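That "cache lookup failed" error typically means the trigger still points at a function OID that no longer exists, e.g. because the trigger function was dropped and recreated. A common fix is to recreate the trigger so it re-resolves the function. A sketch: the function name `abct_fn` and the `OR UPDATE` event are hypothetical, since the original definition is truncated above:

```sql
-- Sketch: rebind the trigger to the current version of its function.
BEGIN;
DROP TRIGGER abct ON abc;
CREATE TRIGGER abct
  AFTER INSERT OR UPDATE ON abc
  FOR EACH ROW EXECUTE PROCEDURE abct_fn();  -- looks up the function OID anew
COMMIT;
```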