[PERFORM] Performance Testing Utility

2007-06-13 Thread Christo Du Preez
Hi All,

Is there some kind of performance testing utility available for
PostgreSQL? Something I can run after installing PostgreSQL to help me
identify whether my installation is optimal.

I've been battling for days now trying to sort out performance issues,
and something like that might identify issues I'm not even aware of
or considering at this stage.

Regards,
Christo Du Preez


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [PERFORM] Performance Testing Utility

2007-06-13 Thread Heikki Linnakangas

Christo Du Preez wrote:

Is there some kind of performance testing utility available for
PostgreSQL? Something I can run after installing PostgreSQL to help me
identify whether my installation is optimal.


Not really. There's contrib/pgbench, but I wouldn't recommend using it 
for that purpose since the optimal configuration depends on your 
application, and pgbench is likely nothing like your application.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] Performance Testing Utility

2007-06-13 Thread Tobias Brox
[Christo Du Preez - Wed at 12:25:20PM +0200]
> Is there some kind of performance testing utility available for
> PostgreSQL? Something I can run after installing PostgreSQL to help me
> identify whether my installation is optimal.
> 
> I've been battling for days now trying to sort out performance issues,
> and something like that might identify issues I'm not even aware of
> or considering at this stage.

If you are really having performance problems, my general experience is
that you should look into the queries and usage patterns rather than the
configuration.  The server configuration can only give marginal
benefits compared to query and usage tuning.

It is often a good idea to turn on the stats collector, even if it slows
down postgres a bit.

One of the things the stats collector gives is the pg_stat_activity
view, where you can find everything the server is working on right
now; checking this view while you are actually experiencing
problems can give a lot of information.

Another thing I've noticed is that the activity from our applications
often comes in bursts; the server can be around 70% idle most of the
time, but when it is hit by 4-5 I/O-heavy queries at the same time, in
addition to the steady flow of simple queries, it can easily get
thrashed.  I've made up an algorithm to stop this from happening:
before running a transaction which is either heavy or not considered
very important, the activity view is scanned, and if the server is
already working on many queries, the application sleeps a bit and
tries again - eventually returning an error message ("please try again
later") if it's doing interactive work.

Another thing that we've experienced - be aware of pending
transactions!  It's important to commit or roll back every transaction
within a reasonable time - if (e.g. due to a programming mistake or a
DBA starting a transaction in psql) a transaction is left pending for
several hours or even days, it is really very bad for performance.

Another experience we have is that autovacuum can be quite naughty
when one has some really huge tables.  This can be tweaked by
disabling autovacuum on those tables and running a nightly vacuum
instead.
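On the 8.1/8.2 releases discussed in this thread, per-table autovacuum
tuning went through the pg_autovacuum system catalog (a hedged sketch;
-1 means "use the global default", and 'big_table' is a placeholder
name):

```sql
-- disable autovacuum for one huge table; vacuum it from cron instead
INSERT INTO pg_autovacuum
  (vacrelid, enabled,
   vac_base_thresh, vac_scale_factor,
   anl_base_thresh, anl_scale_factor,
   vac_cost_delay, vac_cost_limit)
VALUES
  ('big_table'::regclass, false, -1, -1, -1, -1, -1, -1);
```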

Apologies for not replying to your question, though ;-)




[PERFORM] Optimize slow query

2007-06-13 Thread Tyler Durden

Hi,
I have a table with 300,000 records and I'm trying to do a search:

SELECT * FROM addresses WHERE address ILIKE '%Jean Paul%' AND
(l_pc='4250' or r_pc='4250') AND (l_struc='O' or r_struc='O') AND
(prenm ILIKE 'Street')

It performs in 2 seconds on a dual Xeon 2.4GHz with 2GB of RAM.
I'm using PostgreSQL 8.1 on Ubuntu.
I have indexes on l_pc, r_pc, l_struc, r_struc and prenm (all btrees).
What am I doing wrong to have such a slow query?

Thanks,
Nuno Mariz



Re: [PERFORM] Optimize slow query

2007-06-13 Thread Tyler Durden

On 6/13/07, Tyler Durden <[EMAIL PROTECTED]> wrote:

Hi,
I have a table with 300,000 records and I'm trying to do a search:

SELECT * FROM addresses WHERE address ILIKE '%Jean Paul%' AND
(l_pc='4250' or r_pc='4250') AND (l_struc='O' or r_struc='O') AND
(prenm ILIKE 'Street')

It performs in 2 seconds on a dual Xeon 2.4GHz with 2GB of RAM.
I'm using PostgreSQL 8.1 on Ubuntu.
I have indexes on l_pc, r_pc, l_struc, r_struc and prenm (all btrees).
What am I doing wrong to have such a slow query?

Thanks,
Nuno Mariz



My bad!
Sorry, I missed an index on l_struc and r_struc.

Thanks, anyway.



Re: [PERFORM] Optimize slow query

2007-06-13 Thread Cédric Villemain
Le mercredi 13 juin 2007, Tyler Durden a écrit :
> Hi,
> I have a table with 300,000 records and I'm trying to do a search:
>
> SELECT * FROM addresses WHERE address ILIKE '%Jean Paul%' AND
> (l_pc='4250' or r_pc='4250') AND (l_struc='O' or r_struc='O') AND
> (prenm ILIKE 'Street')
>
> It performs in 2 seconds on a dual Xeon 2.4GHz with 2GB of RAM.
> I'm using PostgreSQL 8.1 on Ubuntu.
> I have indexes on l_pc, r_pc, l_struc, r_struc and prenm (all btrees).
> What am I doing wrong to have such a slow query?
Could you add 'EXPLAIN ANALYZE' output to your post?
And how much content do the text fields hold? (Isn't tsearch2 an option for 
you?)
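For reference, the requested plan output is produced by prefixing the
query from the original post:

```sql
EXPLAIN ANALYZE
SELECT * FROM addresses
WHERE address ILIKE '%Jean Paul%'
  AND (l_pc = '4250' OR r_pc = '4250')
  AND (l_struc = 'O' OR r_struc = 'O')
  AND prenm ILIKE 'Street';
```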
>
> Thanks,
> Nuno Mariz
>



-- 
Cédric Villemain
Administrateur de Base de Données
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org




[PERFORM] WAL shipping and ever expanding pg_xlog

2007-06-13 Thread Michael Dengler

Hi,

I'm doing WAL shipping to do a warm standby system (8.2.4).

The problem is that the pg_xlog dir on the master just gets bigger and
bigger (never seems to truncate) and the corresponding archive directory on
the slave also gets bigger and bigger. Is there a way to moderate this?

Thanks

Mike


Re: [PERFORM] WAL shipping and ever expanding pg_xlog

2007-06-13 Thread Michael Dengler

OK... it looks like the slave machine has run out of space and that caused
the xlog files to pile up on the master.

Still... how do I prevent all of the shipped WAL segments from remaining
on the slave machine? Do I need to retain every single one? Can they be
safely removed after the slave machine has restored the particular segment?

Thanks

Mike


On 6/13/07, Michael Dengler <[EMAIL PROTECTED]> wrote:


Hi,

I'm doing WAL shipping to do a warm standby system (8.2.4).

The problem is that the pg_xlog dir on the master just gets bigger and
bigger (never seems to truncate) and the corresponding archive directory on
the slave also gets bigger and bigger. Is there a way to moderate this?

Thanks

Mike




Re: [PERFORM] Best use of second controller with faster disks?

2007-06-13 Thread Vivek Khera


On Jun 12, 2007, at 8:33 PM, Francisco Reyes wrote:


Vivek Khera writes:


what raid card have you got?


2 3ware cards.
I believe both are 9550SX
I'm playing with an external enclosure which has an Areca SATA
RAID in it and connects to the host via Fibre Channel.


What is the OS? FreeBSD?


FreeBSD, indeed.  The vendor, Partners Data Systems, did a wonderful  
job ensuring that everything integrated well to the point of talking  
with various FreeBSD developers, LSI engineers, etc., and sent me a  
fully tested system end-to-end with a Sun X4100 M2, LSI 4Gb Fibre  
card, and their RAID array, with FreeBSD installed already.


I can't recommend them enough -- if you need a high-end RAID system  
for FreeBSD (or other OS, I suppose) do check them out.


Right now I'm testing an 8-disk RAID6 configuration on the same
device; it seems slower than the 16-disk RAID6, but I haven't yet
tried 8-disk RAID10 with a dedicated log.


Is all this within the same controller?


Yes, the system is in testing right now, so I'm playing with all
sorts of different disk configurations, and it seems that the 16-disk
RAID6 is the winner so far.  The next best was the 14-disk RAID6 + 2-disk
RAID1 for log.


I have separate disks built-in to the system for boot.




Re: [PERFORM] VACUUM vs auto-vacuum daemon

2007-06-13 Thread Bill Moran
In response to "Sabin Coanda" <[EMAIL PROTECTED]>:

> Hi Bill,
> 
> ...
> >
> > However, you can get some measure of tracking by running VACUUM VERBOSE
> > on a regular basis to see how well autovacuum is keeping up.  There's
> > no problem with running manual vacuum and autovacuum together, and you'll
> > be able to gather _some_ information about how well autovacuum is
> > keeping up.
> 
> Well, I think it is only useful if I am able to synchronize autovacuum
> to always run after I run VACUUM VERBOSE. But I don't know how to do that.
> Do you?

No, I don't.  Why would you want to do that?

Personally, I'd be more interested in whether autovacuum, running whenever
it wants without me knowing, is keeping the table bloat under control.

If this were a concern for me (which it was during initial testing of
our DB) I would run vacuum verbose once a day to watch sizes and what
not.  After a while, I'd switch to once a week, then probably settle on
once a month to ensure nothing ever gets out of hand.  Put it in a cron
job and have the output mailed.
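The routine described above can be sketched as a crontab entry (a
sketch; assumes vacuumdb is on cron's PATH and that cron mails each
job's output to the admin):

```
# nightly verbose vacuum+analyze of all databases at 03:00;
# the mailed output shows per-table page and tuple counts
0 3 * * * vacuumdb --all --analyze --verbose
```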

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [PERFORM] VACUUM vs auto-vacuum daemon

2007-06-13 Thread Joshua D. Drake

Bill Moran wrote:

In response to "Sabin Coanda" <[EMAIL PROTECTED]>:


Hi Bill,

...

However, you can get some measure of tracking by running VACUUM VERBOSE
on a regular basis to see how well autovacuum is keeping up.  There's
no problem with running manual vacuum and autovacuum together, and you'll
be able to gather _some_ information about how well autovacuum is
keeping up.
Well, I think it is only useful if I am able to synchronize autovacuum
to always run after I run VACUUM VERBOSE. But I don't know how to do that.
Do you?


No, I don't.  Why would you want to do that?

Personally, I'd be more interested in whether autovacuum, running whenever
it wants without me knowing, is keeping the table bloat under control.


analyze verbose.



If this were a concern for me (which it was during initial testing of
our DB) I would run vacuum verbose once a day to watch sizes and what
not.  After a while, I'd switch to once a week, then probably settle on
once a month to ensure nothing ever gets out of hand.  Put it in a cron
job and have the output mailed.




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [PERFORM] WAL shipping and ever expanding pg_xlog

2007-06-13 Thread Merlin Moncure

On 6/13/07, Michael Dengler <[EMAIL PROTECTED]> wrote:

OK... it looks like the slave machine has run out of space and that caused
the xlog files to pile up on the master.

Still... how do I prevent all of the shipped WAL segments from remaining
on the slave machine? Do I need to retain every single one? Can they be
safely removed after the slave machine has restored the particular segment?


Are you using the pg_standby utility?  It has options to control this...

merlin
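For reference, a hedged sketch of the standby-side setup with
pg_standby (the paths here are placeholders; -k keeps only the given
number of older WAL files in the archive, -t names the failover
trigger file):

```
# recovery.conf on the warm standby (sketch)
restore_command = 'pg_standby -k 255 -t /tmp/pgsql.trigger /path/to/archive %f %p'
```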



Re: [PERFORM] WAL shipping and ever expanding pg_xlog

2007-06-13 Thread Michael Dengler

Doug Knight just informed me about the pg_standby module.

Works like a charm!

Thanks

Mike


On 6/13/07, Merlin Moncure <[EMAIL PROTECTED]> wrote:


On 6/13/07, Michael Dengler <[EMAIL PROTECTED]> wrote:
> OK... it looks like the slave machine has run out of space and that caused
> the xlog files to pile up on the master.
>
> Still... how do I prevent all of the shipped WAL segments from remaining
> on the slave machine? Do I need to retain every single one? Can they be
> safely removed after the slave machine has restored the particular segment?

Are you using the pg_standby utility?  It has options to control this...

merlin



Re: [PERFORM] dbt2 NOTPM numbers

2007-06-13 Thread Mark Wong

On 6/11/07, Markus Schiltknecht <[EMAIL PROTECTED]> wrote:

Heikki Linnakangas wrote:
> Markus Schiltknecht wrote:
>> For dbt2, I've used 500 warehouses and 90 concurrent connections,
>> default values for everything else.
>
> 500? That's just too much for the hardware. Start from say 70 warehouses
> and up it from there 10 at a time until you hit the wall. I'm using 30
> connections with ~100 warehouses on somewhat similar hardware.

Aha! That's why...  I've seen the '500' in some dbt2 samples and thought
it'd be a good default value.

But it makes sense that the benchmark doesn't automatically 'scale down'...

Stupid me.

Thanks again! Hoping for larger NOTPMs.


Yeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K
RPM scsi drives and 6 dual-channel controllers... :)

Regards,
Mark



Re: [PERFORM] dbt2 NOTPM numbers

2007-06-13 Thread Markus Schiltknecht

Hi,

Mark Wong wrote:

Yeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K
RPM scsi drives and 6 dual-channel controllers... :)


Lucky you!

In the meantime, I've figured out that the box in question peaked at 
about 1450 NOTPMs with 120 warehouses on RAID 1+0. I'll try to compare 
again with RAID 6.


Is there any place where such results are collected?

Regards

Markus



[PERFORM] [PG 8.1.0 / AIX 5.3] Vacuum processes freezing

2007-06-13 Thread RESTOUX , Loïc

 
Hello everybody,

We're using PostgreSQL 8.1.0 on AIX 5.3 through NFS (a NetApp filer hosts the 
database files), and we're encountering some issues with vacuums. PostgreSQL 
binaries are built with xlc 6 (C for AIX Compiler 6.0.0.6) on AIX 5.2 (yes, I 
know, building on 5.2 and running on 5.3 is not the best way to avoid bugs...).


We have strong performance constraints with this database, so we planned 
vacuums with a crontab:
- Frequent VACUUM ANALYZE on some heavily-updated tables (few rows, but a lot 
of insert/update/delete). The frequency varies between 5 and 15 minutes.
- A nightly (not FULL) vacuum of the entire database.
We don't use autovacuum or FULL vacuum because of the high availability needed 
for the database. We prefer to keep it under control.


For some weeks, the amount of data hosted by the database has been growing, 
and, some nights, the database vacuum seems to "freeze" during its execution. 
In verbose mode, the logs show that the vacuum cleans up a table (not always 
the same table), and... no more news. The system shows a vacuum process which 
seems to be sleeping (no CPU used, no memory consumption...). In addition, the 
logs of our application show that database transactions seem to be slower.

For some internal reasons, the only way for us to work around this problem is 
to shut down the application and the database. After a full restart, things 
are OK.


Some questions :

1) During the nightly database vacuum, some vacuums run concurrently (vacuums 
on heavily-updated tables). Can this concurrency cause deadlocks? We're 
planning to patch our shell scripts to avoid this concurrency.


2) I believed that the poor performance during the vacuum freeze was due to 
obsolete data statistics. But after a full restart of the database, 
performance is good. Does PostgreSQL rebuild its statistics during startup?


3) Can we explain the freeze by a bad database configuration? For instance, 
PostgreSQL running out of connections, or whatever, causing the vacuum process 
to wait for free resources?


4) This morning, just before the database vacuum froze, the logs showed this 
error:
<2007-06-13 03:20:35 DFT%>ERROR:  could not open relation 16391/16394/107937: A 
system call received an interrupt.
<2007-06-13 03:20:35 DFT%>CONTEXT:  writing block 2 of relation 
16391/16394/107937
<2007-06-13 03:20:40 DFT%>LOG:  could not fsync segment 0 of relation 
16392/16394/107925: A system call received an interrupt.
<2007-06-13 03:20:40 DFT%>ERROR:  storage sync failed on magnetic disk: A 
system call received an interrupt.

This is the first time we've encountered this error. Can it be a cause of the 
vacuum freeze?
 

Regards,


-- 
  Loic Restoux
  Capgemini Telecom & Media / ITDR
  tel : 02 99 27 82 30
  e-mail : [EMAIL PROTECTED] 





Re: [PERFORM] dbt2 NOTPM numbers

2007-06-13 Thread Mark Wong

On 6/13/07, Markus Schiltknecht <[EMAIL PROTECTED]> wrote:

Hi,

Mark Wong wrote:
> Yeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K
> RPM scsi drives and 6 dual-channel controllers... :)

Lucky you!

In the mean time, I've figured out that the box in question peaked at
about 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to compare
again to RAID 6.

Is there any place where such results are collected?


Unfortunately, not anymore.  When I was working at OSDL there was...
I've been told that the lab has mostly been disassembled now, so the
data are lost.

Regards,
Mark



[PERFORM] Update table performance problem

2007-06-13 Thread Mark Makarowsky
I am trying to update a field in one table with a
field from another table, like:

update co set
firest_id=fco.firest_id, fire_dist=fco.fire_dist from
fco where co.xno=fco.xno

Table co has 384964 records
Table fco has 383654 records

The xno fields in both tables are indexed, but they
don't seem to be used.  I would expect the update to
be faster than 6.3 minutes, or is that expectation
wrong?  Here are the results of EXPLAIN ANALYZE:

"Hash Join  (cost=15590.22..172167.03 rows=383654
width=215) (actual time=1473.297..43032.178
rows=383654 loops=1)"
"  Hash Cond: (co.xno = fco.xno)"
"  ->  Seq Scan on co  (cost=0.00..123712.64
rows=384964 width=195) (actual time=440.196..37366.682
rows=384964 loops=1)"
"  ->  Hash  (cost=7422.54..7422.54 rows=383654
width=34) (actual time=995.651..995.651 rows=383654
loops=1)"
"->  Seq Scan on fco  (cost=0.00..7422.54
rows=383654 width=34) (actual time=4.641..509.947
rows=383654 loops=1)"
"Total runtime: 378258.707 ms"

Thanks, 

Fred


   




Re: [PERFORM] [PG 8.1.0 / AIX 5.3] Vacuum processes freezing

2007-06-13 Thread Tom Lane
RESTOUX, Loïc <[EMAIL PROTECTED]> writes:
> For some weeks, the amount of data hosted by the database has been growing,
> and, some nights, the database vacuum seems to "freeze" during its
> execution. In verbose mode, the logs show that the vacuum cleans up a table
> (not always the same table), and... no more news. The system shows a vacuum
> process which seems to be sleeping (no CPU used, no memory consumption...).

Have you looked into pg_locks to see if it's blocked on someone else's lock?

regards, tom lane
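A common way to do that check on 8.1/8.2 (where pg_stat_activity's
process id column is procpid):

```sql
-- show ungranted lock requests alongside what each session was running
SELECT l.pid, l.relation::regclass AS relation, l.mode, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```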



Re: [PERFORM] Update table performance problem

2007-06-13 Thread Tom Lane
Mark Makarowsky <[EMAIL PROTECTED]> writes:
> "Hash Join  (cost=15590.22..172167.03 rows=383654
> width=215) (actual time=1473.297..43032.178
> rows=383654 loops=1)"
> "  Hash Cond: (co.xno = fco.xno)"
> "  ->  Seq Scan on co  (cost=0.00..123712.64
> rows=384964 width=195) (actual time=440.196..37366.682
> rows=384964 loops=1)"
> "  ->  Hash  (cost=7422.54..7422.54 rows=383654
> width=34) (actual time=995.651..995.651 rows=383654
> loops=1)"
> "->  Seq Scan on fco  (cost=0.00..7422.54
> rows=383654 width=34) (actual time=4.641..509.947
> rows=383654 loops=1)"
> "Total runtime: 378258.707 ms"

According to the top line, the actual scanning and joining took 43 sec;
so the rest of the time went somewhere else.  Possibilities include
the actual data insertion (wouldn't take 5 minutes), index updates
(what indexes are on this table?), constraint checks, triggers, ...

You failed to mention which PG version this is.  8.1 and up would show
time spent in triggers separately, so we could eliminate that
possibility if it's 8.1 or 8.2.  My suspicion without any data is
a lot of indexes on the table.

regards, tom lane



Re: [PERFORM] Performance Testing Utility

2007-06-13 Thread Vivek Khera


On Jun 13, 2007, at 6:25 AM, Christo Du Preez wrote:


Is there some kind of performance testing utility available for
PostgreSQL? Something I can run after installing PostgreSQL to help me
identify whether my installation is optimal.


Your own app is the only one that will give you meaningful  
results...  You need to either run your app against it or simulate  
your applications' access patterns.


Any other load will lead to different optimizations.




Re: [PERFORM] Update table performance problem

2007-06-13 Thread Mark Makarowsky
The version is:

"PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC
gcc.exe (GCC) 3.4.2 (mingw-special)"

Here is the table definition for co and fco.  There
aren't any rules, constraints, triggers, etc. on the
tables.  Only an index on each table on the xno
field.  Any other thoughts?

CREATE TABLE co
(
  xno character(10),
  longitude double precision,
  latitude double precision,
  firest_id character(8),
  fire_dist double precision,
  polst_id character(8),
  pol_dist double precision,
  fnew_id character(10),
  fnew_dist double precision,
  pnew_id character(10),
  pnew_dist double precision,
  seihazm020 bigint,
  acc_val integer,
  valley integer,
  flood_id bigint,
  chance character varying
) 
WITHOUT OIDS;
ALTER TABLE co OWNER TO postgres;
-- Index: co_xno

-- DROP INDEX co_xno;

CREATE UNIQUE INDEX co_xno
  ON co
  USING btree
  (xno);

CREATE TABLE fco
(
  firest_id character(8),
  fire_dist double precision,
  xno character(10)
) 
WITHOUT OIDS;
ALTER TABLE fco OWNER TO postgres;

-- Index: fco_xno

-- DROP INDEX fco_xno;

CREATE UNIQUE INDEX fco_xno
  ON fco
  USING btree
  (xno);

--- Tom Lane <[EMAIL PROTECTED]> wrote:

> Mark Makarowsky <[EMAIL PROTECTED]>
> writes:
> > "Hash Join  (cost=15590.22..172167.03 rows=383654
> > width=215) (actual time=1473.297..43032.178
> > rows=383654 loops=1)"
> > "  Hash Cond: (co.xno = fco.xno)"
> > "  ->  Seq Scan on co  (cost=0.00..123712.64
> > rows=384964 width=195) (actual
> time=440.196..37366.682
> > rows=384964 loops=1)"
> > "  ->  Hash  (cost=7422.54..7422.54 rows=383654
> > width=34) (actual time=995.651..995.651
> rows=383654
> > loops=1)"
> > "->  Seq Scan on fco  (cost=0.00..7422.54
> > rows=383654 width=34) (actual time=4.641..509.947
> > rows=383654 loops=1)"
> > "Total runtime: 378258.707 ms"
> 
> According to the top line, the actual scanning and
> joining took 43 sec;
> so the rest of the time went somewhere else. 
> Possibilities include
> the actual data insertion (wouldn't take 5 minutes),
> index updates
> (what indexes are on this table?), constraint
> checks, triggers, ...
> 
> You failed to mention which PG version this is.  8.1
> and up would show
> time spent in triggers separately, so we could
> eliminate that
> possibility if it's 8.1 or 8.2.  My suspicion
> without any data is
> a lot of indexes on the table.
> 
>   regards, tom lane
> 



   

 



Re: [PERFORM] Best use of second controller with faster disks?

2007-06-13 Thread Francisco Reyes

Vivek Khera writes:

FreeBSD, indeed.  The vendor, Partners Data Systems, did a wonderful  


This one?
http://www.partnersdata.com

job ensuring that everything integrated well to the point of talking  
with various FreeBSD developers, LSI engineers, etc., and sent me a  
fully tested system end-to-end with a Sun X4100 M2, LSI 4Gb Fibre  
card, and their RAID array, with FreeBSD installed already.


Is there a management program in FreeBSD for the Areca card?

So that I understand the setup you are describing:
Machine has an Areca controller
Connects to an external enclosure
Enclosure has an LSI controller


I have separate disks built-in to the system for boot.


How did you get FreeBSD to newfs such a large setup?
newfs -s /dev/raw-disk?

What are the speed/size of the disks?
7K rpm?

