We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however
some queries are super slow. One function in particular, which used to
take 15-
I've recently configured a new high-performance database server:
2xXeon 3.4G, 2G RAM, 4x15K SCSI disks in RAID 10, h/w RAID
This has been live for a couple of weeks.
The box is running Fedora Core 4.
The only thing running on this box is PostgreSQL 8.1.4 and some stub
applications that
Thanks! This exactly what I was looking for.
Benjamin Arai
[EMAIL PROTECTED]
http://www.benjaminarai.com
-Original Message-
From: Bruno Wolff III [mailto:[EMAIL PROTECTED]
Sent: Friday, June 16, 2006 11:56 AM
To: Benjamin Arai
Cc: pgsql-performance@postgresql.org
Subject: Re: Question
I have a producer/consumer setup where various producer processes
insert new records into a table and consumer processes mark those
records as having been handled when they have dealt with them, but
leave the records in the table so that we can generate reports later.
The records are added
I'd have to agree with you about the specific SAN/setup you're working
with there. I certainly disagree that it's a general property of SANs, though. We've got a DS4300 with FC controllers and drives, hosts are generally dual-controller load-balanced and it works quite decently.
How are you guys do
Jim Nasby wrote:
On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:
First thing as soon as I have a login, of course, is to set up a
Buildfarm
instance.
Keep in mind that buildfarm clients and benchmarking stuff don't
usually mix well.
On a fast machine like this a buildfarm run is
Any suggestions? FYI, the original question wasn't meant as a poke at comparing PG to MySQL to DB2. I'm not making any value judgements either way. I'm just trying to understand how we can use it the best way possible.
Actually we just thought about something. With PG, we can create an index that is
Hello,
I am getting following error while inserting a row into the "abc" table:
ERROR: fmgr_info: function 2720768: cache lookup failed
Table "abc" has one trigger called "abct"
Definition is as follows:
BEGIN;
LOCK TABLE abc IN SHARE ROW EXCLUSIVE MODE;
create TRIGGER abct
AF
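For what it's worth, a "cache lookup failed" on an fmgr_info call usually means the trigger still points at a function OID that no longer exists (e.g. the function was dropped and recreated). A sketch of how one might confirm that from the catalogs (the trigger and table names are taken from the post; the catalog query is generic):

LEFT JOIN pg_proc to find triggers whose function OID no longer resolves:

SELECT t.tgname, t.tgfoid
FROM pg_trigger t
LEFT JOIN pg_proc p ON p.oid = t.tgfoid
WHERE p.oid IS NULL;

-- If abct shows up, drop it and recreate it against the live function:
DROP TRIGGER abct ON abc;
-- CREATE TRIGGER abct AFTER ... ON abc ... (recreate with the current function)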
since I'd like to have/create a truly case-insensitive
text type (ITEXT anyone?).
I haven't seen it mentioned in this thread yet, but have you looked
at citext?
http://gborg.postgresql.org/project/citext/projdisplay.php
I don't have any experience with it, but perhaps it can do what
you
David,
> But I'm also interested in how Elein made the email domain case-
> insensitive, since I'd like to have/create a truly case-insensitive
> text type (ITEXT anyone?). The functions for the operator class there
> were mainly written in SQL, and if it adds a significant overhead,
> I'm not sur
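Short of a full ITEXT type, one common workaround is an expression index on lower(), which avoids per-row function overhead at query time as long as the query uses the same expression. A minimal sketch (the table and column names are made up for illustration):

CREATE INDEX users_email_lower_idx ON users (lower(email));

-- Queries must use the identical expression to be able to use the index:
SELECT * FROM users WHERE lower(email) = lower('Dave@Example.COM');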
Well, for one we did introduce a TX leak which was preventing autovac from
running. I guess that was _the_ issue.
I have since fixed it and now testing looks much better, nothing
concerning
(fingers crossed until morning :)). debug logs are full of vac/anal of the
tables... so, for
Jim C. Nasby wrote:
On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
RAID *4*?
oops, raid 5 (but we are getting good io th
On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:
> We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
> RHEL, postgres 8.1) and ported our old database over to it (single cpu,
RAID *4*?
If you do any kind of updating at all, you're likely to be real unhappy
w
On Wed, Jun 21, 2006 at 04:41:45PM -0300, jody brownell wrote:
> BTW, in production with a similar load - autovacuum with default out of the
> box
> settings seems to work quite well
>
> I double checked this earlier today.
So what's different between production and the machine with the pr
On Wed, Jun 21, 2006 at 04:46:15PM -0400, Alex Turner wrote:
> I have a really stupid question about top, what exactly is iowait CPU time?
Time while the CPU is idle, but at least one I/O request is outstanding.
In other words, if you're at 100% I/O-wait, you're heavily I/O-bound and your
process
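The accounting behind that definition can be illustrated from /proc/stat, where iowait is the fifth counter on the "cpu" line. A minimal sketch (the two sample counter lines are made up, not from any real system):

```python
# Compute the iowait share of total CPU time between two /proc/stat samples.
# Field order on Linux: user nice system idle iowait irq softirq ...
def iowait_percent(before, after):
    b = [int(x) for x in before.split()[1:]]
    a = [int(x) for x in after.split()[1:]]
    deltas = [y - x for x, y in zip(b, a)]
    total = sum(deltas)          # total ticks elapsed across all states
    iowait = deltas[4]           # 5th field after the "cpu" label
    return 100.0 * iowait / total

# Hypothetical samples taken one interval apart:
before = "cpu 100 0 50 800 50 0 0"
after = "cpu 110 0 55 820 91 0 0"
print(round(iowait_percent(before, after), 1))
```

A high value here with an otherwise idle CPU is exactly the "heavily I/O-bound" situation described above.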
I have a really stupid question about top, what exactly is iowait CPU time?
Alex
David Wheeler <[EMAIL PROTECTED]> writes:
> Didn't see anything in the archives, so I thought I'd ask: has anyone
> done any work to gauge the performance penalty of using DOMAINs?
There are some reports in the archives of particular usage patterns
where they pretty much suck, because GetDomainC
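For context, a domain in the style under discussion is just a base type plus a CHECK constraint, and re-evaluating that CHECK on every assignment is where the overhead comes from. A toy sketch (the regex below is a deliberately simplified stand-in, not Elein's actual constraint):

-- Every INSERT/UPDATE into a column of this domain re-runs the CHECK.
CREATE DOMAIN email AS text
    CHECK (VALUE ~ '^[^@]+@[^@]+$');

CREATE TABLE subscribers (addr email);
INSERT INTO subscribers VALUES ('someone@example.com');  -- passes the CHECK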
Not yet. I would first like to know what the time-consuming part is and
what a workaround would be. If you are sure individual columns for every
entry of the array solve the issue, I will joyfully implement it. The
downside of this approach is that the array dimensions are not always the
same in my scena
BTW, in production with a similar load - autovacuum with default out of the box
settings seems to work quite well
I double checked this earlier today.
On Wednesday 21 June 2006 16:38, Jim C. Nasby wrote:
> 5
OK this was over a 12 - 16 hour period of not having anything done with it
though right?
I am assuming if autovacuum were active throughout that period, we would be
somewhat better off... is that not accurate?
On Wednesday 21 June 2006 16:38, Jim C. Nasby wrote:
> 5
On Wed, Jun 21, 2006 at 01:21:05PM -0300, jody brownell wrote:
> Jun 21 13:04:04 vanquish postgres[3311]: [19-1] DEBUG: "target": removed
> 5645231 row versions in 106508 pages
> Jun 21 13:04:04 vanquish postgres[3311]: [19-2] DETAIL: CPU 3.37s/1.23u sec
> elapsed 40.63 sec.
> Jun 21 13:04:04 v
On Wed, Jun 21, 2006 at 11:26:16AM -0700, David Wheeler wrote:
> Howdy,
>
> Didn't see anything in the archives, so I thought I'd ask: has anyone
> done any work to gauge the performance penalty of using DOMAINs? I'm
> thinking of something like Elein's email DOMAIN:
>
> http://www.varlena.
On Wed, Jun 21, 2006 at 10:52:42AM -0300, jody brownell wrote:
> A db wide vacuum full outputs this at the end.
>
> INFO: free space map contains 32848 pages in 159 relations
> DETAIL: A total of 24192 page slots are in use (including overhead).
> 24192 page slots are required to track all free
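Since 24192 of 32848 slots are in use, the map still has headroom in this output. If VACUUM ever reports more slots required than available, the 8.1-era knobs live in postgresql.conf and need a server restart (the values below are illustrative, not recommendations):

# postgresql.conf (PostgreSQL 8.1): raise these if VACUUM VERBOSE warns
# that the free space map is too small, then restart the server.
max_fsm_pages = 100000      # page slots tracked across all relations
max_fsm_relations = 1000    # distinct tables/indexes tracked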
On Wed, Jun 21, 2006 at 03:47:19PM +0200, nicky wrote:
> WHERE substr(t0.code,1,2) not in ('14','15','16','17')
> AND (substr(t0.correctie,4,1) <> '1' OR t0.correctie is null)
> AND EXTRACT(YEAR from t1.datum) > 2004;
How much data do you expect to be getting back from that where clause?
On Wed, Jun 21, 2006 at 09:29:03AM +0200, Merkel Marcel (CR/AEM4) wrote:
>
>
> From: Merlin Moncure [mailto:[EMAIL PROTECTED]
> To: Merkel Marcel (CR/AEM4)
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Big array speed issues
>
> On 6/20/06, Merkel Marcel (CR/AEM4) <[EMAIL PROT
Howdy,
Didn't see anything in the archives, so I thought I'd ask: has anyone
done any work to gauge the performance penalty of using DOMAINs? I'm
thinking of something like Elein's email DOMAIN:
http://www.varlena.com/GeneralBits/
I figured that most simple domains that have a constraint
Scott Marlowe <[EMAIL PROTECTED]> writes:
> What is the difference between the two plans (i.e. explain on both boxes
> and compare)
Even more to the point, let's see EXPLAIN ANALYZE output from both boxes...
regards, tom lane
On Wed, 2006-06-21 at 08:47, nicky wrote:
> Hello People,
SNIPPAGE
> The query above takes around 42 minutes.
>
> However, i also have a wimpy desktop machine with 1gb ram. Windows
> with MSSQL 2000 (default installation), same database structure, same
> indexes, same query, etc and it takes 1
block and row are always configured on - they are my friend :)
thanks
On Wednesday 21 June 2006 13:44, Csaba Nagy wrote:
> On Wed, 2006-06-21 at 18:39, jody brownell wrote:
> > that is exactly what I am seeing, one process, no change, always in idle
> > while the others are constantly
> > changin
On Wed, 2006-06-21 at 18:39, jody brownell wrote:
> that is exactly what I am seeing, one process, no change, always in idle
> while the others are constantly
> changing their state.
>
> looks like someone opened a tx then is blocking on a queue lock or something.
> dang.
Don't forget to check
that is exactly what I am seeing, one process, no change, always in idle while
the others are constantly
changing their state.
looks like someone opened a tx then is blocking on a queue lock or something.
dang.
On Wednesday 21 June 2006 13:36, Csaba Nagy wrote:
> On Wed, 2006-06-21 at 18:21,
On Wed, 2006-06-21 at 18:21, jody brownell wrote:
> That is interesting.
>
> There is one thread keeping a transaction open it appears from ps
>
> postgres: app app xxx(42644) idle in transaction
That shouldn't be a problem on itself, "idle in transaction" happens all
the time between 2 commands
Oops - that was confusing. The idle in transaction was from one box and the
autovacuum was from another.
So, one question was answered, auto vacuum is running and selecting the tables
but apparently not at the
same time as my app probably due to this "idle in transaction". I will track it
down
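On 8.1 the offending backend can usually be spotted directly in pg_stat_activity (this assumes stats_command_string is on, otherwise current_query is not populated). A sketch:

-- Sessions sitting in an open transaction without running a query.
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;

The procpid can then be matched against the process seen in ps to find which application connection is holding the transaction open.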
That is interesting.
There is one thread keeping a transaction open it appears from ps
postgres: app app xxx(42644) idle in transaction
however, I created a test table "t" not configured in pg_autovacuum. I inserted
a whack of rows and saw this.
Jun 21 12:38:45 vanquish postgres[1525]: [8-1] L
On Wed, 2006-06-21 at 17:27, jody brownell wrote:
> Our application is broken down quite well. We have two main writing processes
> writing to two separate sets of tables. No crossing over, nothing to prohibit
> the
> vacuuming in the nature which you describe.
It really doesn't matter what tab
Hi Nicky,
I guess, you should try to upgrade the memory setting of PostgreSQL first.
work_mem = 65536
Is a bit low for such large joins.
Did you get a chance to watch the directory
/base//pgsql_tmp to see how large the temporary file is
during this query? I'm sure that there is a large file.
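work_mem can also be raised per session before re-running the query, which is safer than a global bump in postgresql.conf. Note that in 8.1 the setting is interpreted in kilobytes (the figure below is just an example, not a recommendation):

-- Give this session 256MB of sort/hash memory, then re-check the plan.
SET work_mem = 262144;          -- 8.1 takes this value in KB
EXPLAIN ANALYZE SELECT ... ;    -- the query from the post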
Our application is broken down quite well. We have two main writing processes
writing to two separate sets of tables. No crossing over, nothing to prohibit
the
vacuuming in the nature which you describe.
My longest transaction on the tables in question are typically quite short
until
of cours
Could you post an explain analyze of the query? Just FYI, if you do an explain
analyze of the insert statement, it will actually do the insert. If you
don't want that, just post an explain analyze of the select part.
To me it would be interesting to compare just the select
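If the INSERT's own timing is wanted without keeping the rows, one option is to wrap the whole thing in a transaction that gets rolled back. A sketch (the table names are hypothetical):

BEGIN;
EXPLAIN ANALYZE INSERT INTO some_table
    SELECT * FROM staging_table;
ROLLBACK;  -- the insert's effects are discarded; the timing was still measured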
Hello People,
I'm trying to solve what I feel is a
performance/configuration/query error on my side. I'm fairly new to
configuring PostgreSQL so I might be completely wrong with my
configuration.
My database consists of 44 tables, about 20GB. Two of those tables are
'big/huge'. Table s
Csaba Nagy <[EMAIL PROTECTED]> writes:
>> So, it appears my autovacuum is just NOT working... I must have screwed
>> something up, but I cannot see what.
> Is it possible that you have long running transactions ?
The other question I was wondering about is if autovacuum is actually
choosing to
> So, it appears my autovacuum is just NOT working... I must have screwed
> something up, but I cannot see what.
Is it possible that you have long running transactions ? If yes, VACUUM
is simply not efficient, as it won't eliminate the dead space
accumulated during the long running transaction.
Hey - I am running into a data relation bloat problem which I believe is
causing fairly significant
slowdown of my updates. I am using:
PostgreSQL 8.1.4 on i586-trusti
Hi, Chris,
Chris Beecroft wrote:
> Query is now returning with results on our replicated database. Will
> vacuum analyze production now. So it seems to have done the trick. Now
> the question is has our auto vacuum failed or was not set up properly...
> A question for my IT people.
Most of th
Hi, Zydoon,
Zydoon wrote:
> Now I'm trying to make my tests, and I'm not that sure I will make the
> switch to the PSeries, since my dual xeon with 4 G RAM can handle 3500
> concurrent postmasters consuming 3.7 G of the RAM. I cannot reach this
> number on the PSeries with 2 G.
This sounds like
From: Merlin Moncure [mailto:[EMAIL PROTECTED]
To: Merkel Marcel (CR/AEM4)
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Big array speed issues
On 6/20/06, Merkel Marcel (CR/AEM4) <[EMAIL PROTECTED]> wrote:
> I use libpqxx to access the database. This might be another
bottleneck,
Hi, Fzied,
[EMAIL PROTECTED] wrote:
> I'm using httperf/autobench for measurements and the best result I can
> get is that my system can handle traffic of almost 1600 new
> con/sec.
Are you using connection pooling or persistent connections between
PostgreSQL and the Apaches?
Maybe it simply i