Re: [PERFORM] Running lots of inserts from selects on 9.4.5

2016-02-13 Thread Dan Langille

> On Feb 11, 2016, at 4:41 PM, Dan Langille <d...@langille.org> wrote:
> 
>> On Feb 10, 2016, at 5:13 AM, Dan Langille <d...@langille.org> wrote:
>> 
>>> On Feb 10, 2016, at 2:47 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>>> 
>>> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille <d...@langille.org> wrote:
>>>> I have a wee database server which regularly tries to insert 1.5 million or
>>>> even 15 million new rows into a 400 million row table.  Sometimes these
>>>> inserts take hours.
>>>> 
>>>> The actual query that produces the join is fast.  It's the insert which is
>>>> slow.
>>>> 
>>>> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,
>>>> DeltaSeq)
>>>> SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,
>>>> Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,
>>>> batch_testing.DeltaSeq
>>>>  FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)
>>>> JOIN Filename ON (batch_testing.Name =
>>>> Filename.Name);
>>>> 
>>>> This is part of the plan: http://img.ly/images/9374145/full  created via
>>>> http://tatiyants.com/pev/#/plans
>>>> 
>>>> This gist contains postgresql.conf, zfs settings, slog, disk partitions.
>>>> 
>>>> https://gist.github.com/dlangille/1a8c8cc62fa13b9f
>>> 
>>> The table you are inserting into has 7 indexes, all of which have to
>>> be maintained.  The index on the sequence column should be efficient
>>> to maintain.  But for the rest, if the inserted rows are not naturally
>>> ordered by any of the indexed columns then it would end up reading 6
>>> random scattered leaf pages in order to insert row pointers.  If none of
>>> those pages are in memory, that is going to be slow to read off from
>>> hdd in single-file.  Also, you are going to dirty all of those scattered
>>> pages, and they will be slow to write back to hdd because there
>>> probably won't be much opportunity for write-combining.
>>> 
>>> Do you really need all of those indexes?
>>> 
>>> Won't the index on (jobid, pathid, filenameid) service any query that
>>> (jobid) does, so you can get rid of the latter?
>>> 
>>> And unless you have range queries on fileindex, like "where jobid = 12
>>> and fileindex between 4 and 24" then you should be able to replace
>>> (jobid, fileindex) with (fileindex,jobid) and then get rid of the
>>> stand-alone index on (fileindex).
>>> 
>>> If you add an "order by" to the select statement which orders by the
>>> fields of one of the remaining indexes, then you could make the
>>> maintenance of that index much cheaper.
>> 
>> I will make these changes one-by-one and test each.  This will be 
>> interesting.
> 
> On a test server, the original insert takes about 45 minutes.  I removed all 
> indexes.  25 minutes.
> 
> Thank you.

Today I tackled the production server.  After discussion on the Bacula devel 
mailing list (http://marc.info/?l=bacula-devel=145537742804482=2)
I compared my schema to the stock schema provided with Bacula.  Yes, I found
extra indexes.  I saved the existing schema and proceeded to remove the indexes
from prod that were not found in the default.
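
For anyone who wants to repeat the comparison, the gist of it was a catalog
query run against both the production database and a scratch database loaded
with the stock Bacula schema, then dropping whatever appears only in prod
(the index name below is an example only, not one of my actual ones):

  SELECT indexname, indexdef
    FROM pg_indexes
   WHERE tablename = 'file'
   ORDER BY indexname;

  -- any index present in prod but absent from the stock schema is a candidate:
  DROP INDEX file_somecolumn_idx;   -- example name only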

The query time went from 223 minutes to 4.5 minutes.  That is about 50 times faster.

I think I can live with that. :)

Jeff: if you show up at PGCon, dinner is on me.  Thank you.

--
Dan Langille - BSDCan / PGCon
d...@langille.org







Re: [PERFORM] Running lots of inserts from selects on 9.4.5

2016-02-13 Thread Dan Langille
> On Feb 13, 2016, at 10:43 AM, Dan Langille <d...@langille.org> wrote:
> 
> Today I tackled the production server.  After discussion on the Bacula devel 
> mailing list (http://marc.info/?l=bacula-devel=145537742804482=2)
> I compared my schema to the stock schema provided with Bacula.  Yes, I found
> extra indexes.  I saved the existing schema and proceeded to remove the indexes
> from prod that were not found in the default.
> 
> The query time went from 223 minutes to 4.5 minutes.  That is about 50 times faster.

The query plans: https://twitter.com/DLangille/status/698528182383804416

--
Dan Langille - BSDCan / PGCon
d...@langille.org





Re: [PERFORM] Running lots of inserts from selects on 9.4.5

2016-02-11 Thread Dan Langille
> On Feb 10, 2016, at 5:13 AM, Dan Langille <d...@langille.org> wrote:
> 
>> On Feb 10, 2016, at 2:47 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>> 
>> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille <d...@langille.org> wrote:
>>> I have a wee database server which regularly tries to insert 1.5 million or
>>> even 15 million new rows into a 400 million row table.  Sometimes these
>>> inserts take hours.
>>> 
>>> The actual query that produces the join is fast.  It's the insert which is
>>> slow.
>>> 
>>> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,
>>> DeltaSeq)
>>> SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,
>>> Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,
>>> batch_testing.DeltaSeq
>>>   FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)
>>>  JOIN Filename ON (batch_testing.Name =
>>> Filename.Name);
>>> 
>>> This is part of the plan: http://img.ly/images/9374145/full  created via
>>> http://tatiyants.com/pev/#/plans
>>> 
>>> This gist contains postgresql.conf, zfs settings, slog, disk partitions.
>>> 
>>>  https://gist.github.com/dlangille/1a8c8cc62fa13b9f
>> 
>> The table you are inserting into has 7 indexes, all of which have to
>> be maintained.  The index on the sequence column should be efficient
>> to maintain.  But for the rest, if the inserted rows are not naturally
>> ordered by any of the indexed columns then it would end up reading 6
>> random scattered leaf pages in order to insert row pointers.  If none of
>> those pages are in memory, that is going to be slow to read off from
>> hdd in single-file.  Also, you are going to dirty all of those scattered
>> pages, and they will be slow to write back to hdd because there
>> probably won't be much opportunity for write-combining.
>> 
>> Do you really need all of those indexes?
>> 
>> Won't the index on (jobid, pathid, filenameid) service any query that
>> (jobid) does, so you can get rid of the latter?
>> 
>> And unless you have range queries on fileindex, like "where jobid = 12
>> and fileindex between 4 and 24" then you should be able to replace
>> (jobid, fileindex) with (fileindex,jobid) and then get rid of the
>> stand-alone index on (fileindex).
>> 
>> If you add an "order by" to the select statement which orders by the
>> fields of one of the remaining indexes, then you could make the
>> maintenance of that index much cheaper.
> 
> I will make these changes one-by-one and test each.  This will be interesting.

On a test server, the original insert takes about 45 minutes.  I removed all 
indexes.  25 minutes.

Thank you.

-- 
Dan Langille - BSDCan / PGCon
d...@langille.org









Re: [PERFORM] Running lots of inserts from selects on 9.4.5

2016-02-10 Thread Dan Langille
> On Feb 10, 2016, at 2:47 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
> 
> On Tue, Feb 9, 2016 at 4:09 PM, Dan Langille <d...@langille.org> wrote:
>> I have a wee database server which regularly tries to insert 1.5 million or
>> even 15 million new rows into a 400 million row table.  Sometimes these
>> inserts take hours.
>> 
>> The actual query that produces the join is fast.  It's the insert which is
>> slow.
>> 
>> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,
>> DeltaSeq)
>>  SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId,
>> Filename.FilenameId, batch_testing.LStat, batch_testing.MD5,
>> batch_testing.DeltaSeq
>>FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)
>>   JOIN Filename ON (batch_testing.Name =
>> Filename.Name);
>> 
>> This is part of the plan: http://img.ly/images/9374145/full  created via
>> http://tatiyants.com/pev/#/plans
>> 
>> This gist contains postgresql.conf, zfs settings, slog, disk partitions.
>> 
>>   https://gist.github.com/dlangille/1a8c8cc62fa13b9f
> 
> The table you are inserting into has 7 indexes, all of which have to
> be maintained.  The index on the sequence column should be efficient
> to maintain.  But for the rest, if the inserted rows are not naturally
> ordered by any of the indexed columns then it would end up reading 6
> random scattered leaf pages in order to insert row pointers.  If none of
> those pages are in memory, that is going to be slow to read off from
> hdd in single-file.  Also, you are going to dirty all of those scattered
> pages, and they will be slow to write back to hdd because there
> probably won't be much opportunity for write-combining.
> 
> Do you really need all of those indexes?
> 
> Won't the index on (jobid, pathid, filenameid) service any query that
> (jobid) does, so you can get rid of the latter?
> 
> And unless you have range queries on fileindex, like "where jobid = 12
> and fileindex between 4 and 24" then you should be able to replace
> (jobid, fileindex) with (fileindex,jobid) and then get rid of the
> stand-alone index on (fileindex).
> 
> If you add an "order by" to the select statement which orders by the
> fields of one of the remaining indexes, then you could make the
> maintenance of that index much cheaper.

I will make these changes one-by-one and test each.  This will be interesting.
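
If I read the suggestions correctly, the tests will look roughly like this
(the index names below are made up for illustration; the real ones differ):

  -- (jobid, pathid, filenameid) should cover anything the (jobid) index did
  DROP INDEX file_jobid_idx;                    -- illustrative name

  -- swap (jobid, fileindex) for (fileindex, jobid), drop stand-alone (fileindex)
  DROP INDEX file_jobid_fileindex_idx;          -- illustrative name
  DROP INDEX file_fileindex_idx;                -- illustrative name
  CREATE INDEX file_fileindex_jobid_idx ON file (fileindex, jobid);

  -- and order the insert to match one of the remaining indexes
  INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq)
  SELECT bt.FileIndex, bt.JobId, Path.PathId, Filename.FilenameId,
         bt.LStat, bt.MD5, bt.DeltaSeq
    FROM batch_testing bt
    JOIN Path     ON (bt.Path = Path.Path)
    JOIN Filename ON (bt.Name = Filename.Name)
   ORDER BY bt.JobId, Path.PathId, Filename.FilenameId;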

> Could you move the indexes for this table to SSD?

Now that's a clever idea.

bacula=# select pg_size_pretty(pg_indexes_size('file'));
 pg_size_pretty 
----------------
 100 GB
(1 row)

bacula=# select pg_size_pretty(pg_table_size('file'));
 pg_size_pretty 
----------------
 63 GB
(1 row)

bacula=# 

Not surprising that the indexes are larger than the data.

The SSD is 30GB.  I don't have enough space.  Buying 2x500GB SSDs
would allow me to put all the data onto SSD.  I'm using about 306G for the 
databases now.
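
Before writing off the 30GB SSD entirely, per-index sizes would show whether
a subset of the indexes might fit there.  A rough catalog query, assuming the
table is named 'file':

  SELECT c.relname,
         pg_size_pretty(pg_relation_size(c.oid)) AS size
    FROM pg_class c
    JOIN pg_index i ON i.indexrelid = c.oid
   WHERE i.indrelid = 'file'::regclass
   ORDER BY pg_relation_size(c.oid) DESC;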

> SSD is probably wasted on your WAL.  If your main concern is bulk
> insertions, then WAL is going to written sequentially with few fsyncs.
> That is ideal for HDD.  Even if you also have smaller transactions,

OK.

> WAL is still sequentially written as long as you have a non-volatile
> cache on your RAID controller which can absorb fsyncs efficiently.

Of note, no RAID controller or non-volatile cache here. I'm running ZFS with 
plain HBA controllers.

Thank you.  I have some interesting changes to test.

-- 
Dan Langille - BSDCan / PGCon
d...@langille.org



[PERFORM] Running lots of inserts from selects on 9.4.5

2016-02-09 Thread Dan Langille
I have a wee database server which regularly tries to insert 1.5 million or 
even 15 million new rows into a 400 million row table.  Sometimes these inserts 
take hours.

The actual query that produces the join is fast.  It's the insert which is slow.

INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq)
  SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId, 
Filename.FilenameId, batch_testing.LStat, batch_testing.MD5, 
batch_testing.DeltaSeq
FROM batch_testing JOIN Path ON (batch_testing.Path = Path.Path)
   JOIN Filename ON (batch_testing.Name = Filename.Name);

This is part of the plan: http://img.ly/images/9374145/full  created via 
http://tatiyants.com/pev/#/plans

This gist contains postgresql.conf, zfs settings, slog, disk partitions.

   https://gist.github.com/dlangille/1a8c8cc62fa13b9f

I'm tempted to move it to faster hardware, but in case I've missed something 
basic...

Thank you.

--
Dan Langille - BSDCan / PGCon
d...@langille.org








Re: [PERFORM] SSD + RAID

2010-02-20 Thread Dan Langille

Bruce Momjian wrote:
 Matthew Wakeling wrote:
 On Fri, 13 Nov 2009, Greg Smith wrote:
 In order for a drive to work reliably for database use such as for 
 PostgreSQL, it cannot have a volatile write cache.  You either need a write 
 cache with a battery backup (and a UPS doesn't count), or to turn the cache 
 off.  The SSD performance figures you've been looking at are with the drive's
 write cache turned on, which means they're completely fictitious and 
 exaggerated upwards for your purposes.  In the real world, that will result 
 in database corruption after a crash one day.
 Seagate are claiming to be on the ball with this one.

 http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/
 
 I have updated our documentation to mention that even SSD drives often
 have volatile write-back caches.  Patch attached and applied.

Hmmm.  That got me thinking: consider ZFS and HDD with volatile cache.
Do the characteristics of ZFS avoid this issue entirely?

--
Dan Langille

BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
PGCon  - The PostgreSQL Conference: http://www.pgcon.org/



Re: [PERFORM] Censorship

2009-06-13 Thread Dan Langille

Matthew Wakeling wrote:
 
 It appears that I am being censored. I have tried three times to send a
 particular message to this list over the last few days, while a
 different mail has gone through fine. There does not appear to be a
 publicised list manager address, so I am addressing this complaint to
 the whole list. Is there someone here who can fix the problem?

This one seems to have made it.

Rest assured, nobody is interested enough to censor anything here.

--
Dan Langille

BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
PGCon  - The PostgreSQL Conference: http://www.pgcon.org/



Re: [PERFORM] Slow updates, poor IO

2008-09-28 Thread Dan Langille


On Sep 28, 2008, at 10:01 PM, John Huttley wrote:




Greg Smith wrote:

On Mon, 29 Sep 2008, John Huttley wrote:


checkpoint_segments=16 is fine, going to 64 made no improvement.


You might find that it does *after* increasing shared_buffers.  If  
the buffer cache is really small, the checkpoints can't have very  
much work to do, so their impact on performance is smaller.  Once  
you've got a couple of hundred MB on there, the per-checkpoint  
overhead can be considerable.



Ahh bugger, I've just trashed my test setup.


Pardon?  How did you do that?

--
Dan Langille
http://langille.org/







Re: [PERFORM] wal_sync_methods for AIX

2008-02-19 Thread Dan Langille

Erik Jones wrote:


On Feb 15, 2008, at 3:55 PM, Dan Langille wrote:

We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing some playing
around with various settings.  So far, we've (I say we, but it's another guy
doing the work) found that open_datasync seems better than fsync.  By how much,
we have not yet determined, but initial observations are promising.


Here's a good explanation (by the Greg Smith) on the different sync 
methods.  It basically says that if you have open_datasync available, 
it'll probably beat everything else.


Where is that explanation?

--
Dan Langille

BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
PGCon  - The PostgreSQL Conference: http://www.pgcon.org/



[PERFORM] wal_sync_methods for AIX

2008-02-15 Thread Dan Langille
We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing some playing
around with various settings.  So far, we've (I say we, but it's another guy
doing the work) found that open_datasync seems better than fsync.  By how much,
we have not yet determined, but initial observations are promising.

Our tests have been on a p550 connected to DS6800 array using pgbench.
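
Roughly, each run is a postgresql.conf change, a restart, then the same pgbench
workload again.  The numbers below are only placeholders for whatever scale and
client counts you settle on:

  # postgresql.conf (8.1)
  wal_sync_method = open_datasync    # repeat the run with fsync to compare

  $ pgbench -i -s 100 bench
  $ pgbench -c 10 -t 1000 bench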

One nasty behaviour we have seen is long running commits.  Initial thoughts
connected them with checkpoints, but the long running commits do not correlate
with checkpoints being written.  Have you seen this behaviour?

FYI, 8.3.0 is not an option for us in the short term.

What have you been using on AIX and why?

thanks

--
Dan Langille -- http://www.langille.org/
[EMAIL PROTECTED]







Re: [PERFORM] viewing source code

2007-12-21 Thread Dan Langille

Bruce Momjian wrote:

Is this a TODO?

---

Tom Lane wrote:

Merlin Moncure [EMAIL PROTECTED] writes:

I don't really agree that wrapping pl/pgsql with encryptor/decryptor
is a bad idea.

It's quite a good idea, because it has more than zero chance of
succeeding politically in the community.

The fundamental reason why preventing access to pg_proc.prosrc won't
happen is this: all the pain (and there will be plenty) will be
inflicted on people who get none of the benefit (because they don't give
a damn about hiding their own functions' code).  The folks who want
function hiding can shout all they want, but as long as there is a very
sizable fraction of the community who flat out *don't* want it, it's
not going to get applied.

Encrypted function bodies avoid this problem because they inflict no
performance penalty, operational complexity, or client-code breakage
on people who don't use the feature.  They are arguably also a better
solution because they can guard against more sorts of threats than
a column-hiding solution can.

I don't deny that the key-management problem is interesting, but it
seems soluble; moreover, the difficulties that people have pointed to
are nothing but an attempt to move the goalposts, because they
correspond to requirements that a column-hiding solution would never
meet at all.

So if you want something other than endless arguments to happen,
come up with a nice key-management design for encrypted function
bodies.


I keep thinking the problem of keys is similar to that of Apache servers 
which use certificates that require passphrases.  When the server is 
started, the passphrase is entered on the command line.


--
Dan Langille - http://www.langille.org/



Re: [PERFORM] Newbie question about degraded performance on delete statement.

2007-10-02 Thread Dan Langille
On 2 Oct 2007 at 23:55, Giulio Cesare Solaroli wrote:

 What I have observed are impossibly high time on delete statements on
 some tables.
 
 The delete statement is very simple:
 delete from table where pk = ?
 
 The explain query reports a single index scan on the primary key index,
 as expected.
 
 I have run vacuum using the pgAdmin tool, but to no avail.
 
 I have also dropped and recreated the indexes, again without any benefit.
 
 I have later created a copy of the table using the create table
 table_copy as select * from table syntax.
 
 Matching the configuration of the original table also on the copy
 (indexes and constraints), I was able to delete the rows from the new
 table with regular performance, from 20 to 100 times faster than
 deleting from the original table.

There may be more to that original table.  What about triggers?  
rules?  Perhaps there are other things going on in the background.
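
A quick way to check from the catalogs ('yourtable' is a placeholder for the
real table name):

  SELECT tgname FROM pg_trigger
   WHERE tgrelid = 'yourtable'::regclass;

  SELECT rulename, definition FROM pg_rules
   WHERE tablename = 'yourtable';

In psql, \d yourtable will also list any triggers and rules.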

-- 
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php





Re: [PERFORM] Performance problems with large telemetric datasets on 7.4.2

2007-08-03 Thread Dan Langille
On 3 Aug 2007 at 6:52, Sven Clement wrote:

 Hello everybody,
 
 as I'm new to this list I hope that it is the right place to post this
 and also the right format, so if I'm committing an error, I apologize
 in advance.
 
 First the background of my request:
 
 I'm currently employed by an enterprise which has approx. 250 systems
 distributed worldwide which are sending telemetric data to the main
 PostgreSQL. The remote systems are generating about 10 events per
 second per system which accumulates to about 2500/tps. The data is
 stored for about a month before it is exported and finally deleted
 from the database. On the PostgreSQL server are running to databases
 one with little traffic (about 750K per day) and the telemetric
 database with heavy write operations all around the day (over 20
 million per day). We already found that the VACUUM process takes
 excessively long and as consequence the database is Vacuumed
 permanently.
 
 The hardware is a IBM X306m Server, 3.2 GHz HT (Pentium IV), 1 GB RAM
 and 2x 250 GB HDD (SATA-II) with ext3 fs, one of the HDD is dedicated
 to database. OS is Debian 3.1 Sarge with PostgreSQL 7.4.7
 (7.4.7-6sarge1) with the libpq frontend library.
 
 Now the problem:
 
 The problem we are experiencing is that our queries are slowing down
 continuously even if we are performing queries on the index which is
 the timestamp of the event, a simple SELECT query with only a simple
 WHERE clause ( or ) takes very long to complete. So the database
 becomes unusable for production use as the data has to be retrieved
 very quickly if we want to act based on the telemetric data.

Have you confirmed via explain (or explain analyse) that the index is 
being used?
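
i.e. something along these lines, with your real table and column names (the
names here are placeholders):

  EXPLAIN ANALYZE
  SELECT count(*)
    FROM events
   WHERE event_time >= '2007-08-01'
     AND event_time <  '2007-08-02';

If that shows a Seq Scan on the big table rather than an Index Scan on the
timestamp index, that is the first thing to chase.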

 So I'm asking myself if it is useful to update to the current 8.2 version
 and if we could see a performance improvement just by updating.

There are other benefits from upgrading, but you may be able to solve 
this problem without upgrading.

-- 
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php





Re: [PERFORM] Forcing index usage without 'enable_hashjoin = FALSE'

2006-08-23 Thread Dan Langille
On 23 Aug 2006 at 13:31, Chris wrote:

 Dan Langille wrote:
  I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use
  an index.  With the index, I get execution times of 0.5 seconds. 
  Without, it's closer to 2.5 seconds.
  
  Compare these two sets of results (also provided at 
  http://rafb.net/paste/results/ywcOZP66.html
  should it appear poorly formatted below):
  
  freshports.org=# \i test2.sql
   
  QUERY PLAN
  --
  --
  -
   Merge Join  (cost=24030.39..24091.43 rows=3028 width=206) (actual 
  time=301.301..355.261 rows=3149 loops=1)
 Merge Cond: (outer.id = inner.category_id)
 -  Sort  (cost=11.17..11.41 rows=97 width=4) (actual 
  time=0.954..1.300 rows=95 loops=1)
   Sort Key: c.id
   -  Seq Scan on categories c  (cost=0.00..7.97 rows=97 
  width=4) (actual time=0.092..0.517 rows=97 loops=1)
 -  Sort  (cost=24019.22..24026.79 rows=3028 width=206) (actual 
  time=300.317..314.114 rows=3149 loops=1)
   Sort Key: p.category_id
   -  Nested Loop  (cost=0.00..23844.14 rows=3028 width=206) 
  (actual time=0.082..264.459 rows=3149 loops=1)
 -  Seq Scan on ports p  (cost=0.00..6141.11 rows=3028 
  width=206) (actual time=0.026..133.575 rows=3149 loops=1)
   Filter: (status = 'D'::bpchar)
 -  Index Scan using element_pkey on element e  
  (cost=0.00..5.83 rows=1 width=4) (actual time=0.022..0.026 rows=1 
  loops=3149)
   Index Cond: (outer.element_id = e.id)
   Total runtime: 369.869 ms
  (13 rows)
  
  freshports.org=# set enable_hashjoin = true;
  SET
  freshports.org=# \i test2.sql
 QUERY PLAN
  --
  --
   Hash Join  (cost=6156.90..13541.14 rows=3028 width=206) (actual 
  time=154.741..2334.366 rows=3149 loops=1)
 Hash Cond: (outer.category_id = inner.id)
 -  Hash Join  (cost=6148.68..13472.36 rows=3028 width=206) 
  (actual time=153.801..2288.792 rows=3149 loops=1)
   Hash Cond: (outer.id = inner.element_id)
   -  Seq Scan on element e  (cost=0.00..4766.70 rows=252670 
  width=4) (actual time=0.022..1062.626 rows=252670 loops=1)
   -  Hash  (cost=6141.11..6141.11 rows=3028 width=206) 
  (actual time=151.105..151.105 rows=3149 loops=1)
 -  Seq Scan on ports p  (cost=0.00..6141.11 rows=3028 
  width=206) (actual time=0.027..131.072 rows=3149 loops=1)
   Filter: (status = 'D'::bpchar)
 -  Hash  (cost=7.97..7.97 rows=97 width=4) (actual 
  time=0.885..0.885 rows=97 loops=1)
   -  Seq Scan on categories c  (cost=0.00..7.97 rows=97 
  width=4) (actual time=0.076..0.476 rows=97 loops=1)
   Total runtime: 2346.877 ms
  (11 rows)
  
  freshports.org=#
  
  Without leaving enable_hashjoin = false, can you suggest a way to 
  force the index usage?
  
  FYI, the query is:
  
  explain analyse
  SELECT P.id,
 P.category_id,
 P.version as version,
 P.revisionas revision,
 P.element_id,
 P.maintainer,
 P.short_description,
 to_char(P.date_added - SystemTimeAdjust(), 'DD Mon  
  HH24:MI:SS') as date_added,
 P.last_commit_id  as last_change_log_id,
 P.package_exists,
 P.extract_suffix,
 P.homepage,
 P.status,
 P.broken,
 P.forbidden,
 P.ignore,
 P.restricted,
 P.deprecated,
 P.no_cdrom,
 P.expiration_date,
 P.latest_link
FROM categories C, ports P JOIN element E on P.element_id = E.id
   WHERE P.status  = 'D'
 AND P.category_id = C.id;
  
 
 I doubt it would make a difference but if you:
 
 ...
 FROM categories C JOIN ports P on P.category_id=C.id JOIN element E on 
 P.element_id = E.id
 WHERE P.status  = 'D';
 
 does it change anything?

Not really, no:

freshports.org=# \i test3.sql
  
QUERY PLAN
--
--
---
 Hash Join  (cost=5344.62..12740.73 rows=3365 width=204) (actual 
time=63.871..2164.880 rows=3149 loops=1)
   Hash Cond: (outer.category_id = inner.id)
   -  Hash Join  (cost=5336.41..12665.22 rows=3365 width=204) 
(actual time=62.918..2122.529 rows=3149 loops=1)
 Hash Cond: (outer.id = inner.element_id)
 -  Seq Scan on element e  (cost=0.00..4767.58 rows=252758 
width=4) (actual time=0.019..1024.299 rows=252791 loops=1)
 -  Hash  (cost=5328.00..5328.00 rows=3365 width=204) 
(actual time=60.228..60.228 rows=3149

Re: [PERFORM] Forcing index usage without 'enable_hashjoin = FALSE'

2006-08-23 Thread Dan Langille
On 23 Aug 2006 at 22:30, Tom Lane wrote:

 Dan Langille [EMAIL PROTECTED] writes:
  Without leaving enable_hashjoin = false, can you suggest a way to 
  force the index usage?
 
 Have you tried reducing random_page_cost?

Yes.  No effect.
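
For the record, the attempts were along these lines (per session, before
re-running the query):

  SET random_page_cost = 2;
  \i test2.sql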

 FYI, 8.2 should be a bit better about this.

Good.  This query is not critical, but it would be nice.

Thank you.

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php





[PERFORM] Forcing index usage without 'enable_hashjoin = FALSE'

2006-08-22 Thread Dan Langille
I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use
an index.  With the index, I get execution times of 0.5 seconds. 
Without, it's closer to 2.5 seconds.

Compare these two sets of results (also provided at 
http://rafb.net/paste/results/ywcOZP66.html
should it appear poorly formatted below):

freshports.org=# \i test2.sql
 
QUERY PLAN
--
--
-
 Merge Join  (cost=24030.39..24091.43 rows=3028 width=206) (actual 
time=301.301..355.261 rows=3149 loops=1)
   Merge Cond: (outer.id = inner.category_id)
   -  Sort  (cost=11.17..11.41 rows=97 width=4) (actual 
time=0.954..1.300 rows=95 loops=1)
 Sort Key: c.id
 -  Seq Scan on categories c  (cost=0.00..7.97 rows=97 
width=4) (actual time=0.092..0.517 rows=97 loops=1)
   -  Sort  (cost=24019.22..24026.79 rows=3028 width=206) (actual 
time=300.317..314.114 rows=3149 loops=1)
 Sort Key: p.category_id
 -  Nested Loop  (cost=0.00..23844.14 rows=3028 width=206) 
(actual time=0.082..264.459 rows=3149 loops=1)
   -  Seq Scan on ports p  (cost=0.00..6141.11 rows=3028 
width=206) (actual time=0.026..133.575 rows=3149 loops=1)
 Filter: (status = 'D'::bpchar)
   -  Index Scan using element_pkey on element e  
(cost=0.00..5.83 rows=1 width=4) (actual time=0.022..0.026 rows=1 
loops=3149)
 Index Cond: (outer.element_id = e.id)
 Total runtime: 369.869 ms
(13 rows)

freshports.org=# set enable_hashjoin = true;
SET
freshports.org=# \i test2.sql
   QUERY PLAN
--
--
 Hash Join  (cost=6156.90..13541.14 rows=3028 width=206) (actual 
time=154.741..2334.366 rows=3149 loops=1)
   Hash Cond: (outer.category_id = inner.id)
   -  Hash Join  (cost=6148.68..13472.36 rows=3028 width=206) 
(actual time=153.801..2288.792 rows=3149 loops=1)
 Hash Cond: (outer.id = inner.element_id)
 -  Seq Scan on element e  (cost=0.00..4766.70 rows=252670 
width=4) (actual time=0.022..1062.626 rows=252670 loops=1)
 -  Hash  (cost=6141.11..6141.11 rows=3028 width=206) 
(actual time=151.105..151.105 rows=3149 loops=1)
   -  Seq Scan on ports p  (cost=0.00..6141.11 rows=3028 
width=206) (actual time=0.027..131.072 rows=3149 loops=1)
 Filter: (status = 'D'::bpchar)
   -  Hash  (cost=7.97..7.97 rows=97 width=4) (actual 
time=0.885..0.885 rows=97 loops=1)
 -  Seq Scan on categories c  (cost=0.00..7.97 rows=97 
width=4) (actual time=0.076..0.476 rows=97 loops=1)
 Total runtime: 2346.877 ms
(11 rows)

freshports.org=#

Without leaving enable_hashjoin = false, can you suggest a way to 
force the index usage?

FYI, the query is:

explain analyse
SELECT P.id,
   P.category_id,
   P.version as version,
   P.revisionas revision,
   P.element_id,
   P.maintainer,
   P.short_description,
   to_char(P.date_added - SystemTimeAdjust(), 'DD Mon  
HH24:MI:SS') as date_added,
   P.last_commit_id  as last_change_log_id,
   P.package_exists,
   P.extract_suffix,
   P.homepage,
   P.status,
   P.broken,
   P.forbidden,
   P.ignore,
   P.restricted,
   P.deprecated,
   P.no_cdrom,
   P.expiration_date,
   P.latest_link
  FROM categories C, ports P JOIN element E on P.element_id = E.id
 WHERE P.status  = 'D'
   AND P.category_id = C.id;

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php





Re: [PERFORM] index scan of whole table, can't see why

2005-01-20 Thread Dan Langille
On 20 Jan 2005 at 9:34, Ragnar Hafstað wrote:

 On Wed, 2005-01-19 at 20:37 -0500, Dan Langille wrote:
  Hi folks,
 
  Running on 7.4.2, recently vacuum analysed the three tables in
  question.
 
  The query plan in question changes dramatically when a WHERE clause
  changes from ports.broken to ports.deprecated.  I don't see why.
  Well, I do see why: a sequential scan of 130,000 rows.  The query
  goes from 13ms to 1100ms because of this.  The full plans are at
  http://rafb.net/paste/results/v8ccvQ54.html
 
  I have tried some tuning by:
 
set effective_cache_size to 4000, was 1000
set random_page_cost to 1, was 4
 
  The resulting plan changes, but no speed improvment, are at
  http://rafb.net/paste/results/rV8khJ18.html
 

 this just confirms that an indexscan is not always better than a
 tablescan. By setting random_page_cost to 1, you are deceiving the
 planner into thinking that the indexscan is almost as effective
 as a tablescan.

  Any suggestions please?

 did you try to increase sort_mem ?

I tried sort_mem = 4096 and then 16384. This did not make a
difference.  See http://rafb.net/paste/results/AVDqEm55.html
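
(Both were per-session settings, i.e.:

  SET sort_mem = 4096;
  SET sort_mem = 16384;    -- sort_mem is measured in KB on 7.4

followed by a re-run of the query each time.)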

Thank you.
--
Dan Langille : http://www.langille.org/
BSDCan - The Technical BSD Conference - http://www.bsdcan.org/




Re: [PERFORM] index scan of whole table, can't see why

2005-01-20 Thread Dan Langille
On 20 Jan 2005 at 6:14, Stephan Szabo wrote:

 On Wed, 19 Jan 2005, Dan Langille wrote:
 
  Hi folks,
 
  Running on 7.4.2, recently vacuum analysed the three tables in
  question.
 
  The query plan in question changes dramatically when a WHERE clause
  changes from ports.broken to ports.deprecated.  I don't see why.
  Well, I do see why: a sequential scan of 130,000 rows.  The query
  goes from 13ms to 1100ms because of this.  The full plans are at
  http://rafb.net/paste/results/v8ccvQ54.html
 
  I have tried some tuning by:
 
set effective_cache_size to 4000, was 1000
set random_page_cost to 1, was 4
 
  The resulting plan changes, but no speed improvment, are at
  http://rafb.net/paste/results/rV8khJ18.html
 
  Any suggestions please?
 
 As a question, what does it do if enable_hashjoin is false? I'm wondering
 if it'll pick a nested loop for that step for the element/ports join and
 what it estimates the cost to be.

With enable_hashjoin = false, no speed improvement.  Execution plan 
at http://rafb.net/paste/results/qtSFVM72.html

thanks
-- 
Dan Langille : http://www.langille.org/
BSDCan - The Technical BSD Conference - http://www.bsdcan.org/




Re: [PERFORM] index scan of whole table, can't see why

2005-01-20 Thread Dan Langille
On 20 Jan 2005 at 7:26, Stephan Szabo wrote:

 On Thu, 20 Jan 2005, Dan Langille wrote:
 
  On 20 Jan 2005 at 6:14, Stephan Szabo wrote:
 
   On Wed, 19 Jan 2005, Dan Langille wrote:
  
Hi folks,
   
Running on 7.4.2, recently vacuum analysed the three tables in
question.
   
The query plan in question changes dramatically when a WHERE clause
changes from ports.broken to ports.deprecated.  I don't see why.
Well, I do see why: a sequential scan of 130,000 rows.  The query
goes from 13ms to 1100ms because of this.  The full plans are at
http://rafb.net/paste/results/v8ccvQ54.html
   
I have tried some tuning by:
   
  set effective_cache_size to 4000, was 1000
  set random_page_cost to 1, was 4
   
The resulting plan changes, but no speed improvment, are at
http://rafb.net/paste/results/rV8khJ18.html
   
Any suggestions please?
  
   As a question, what does it do if enable_hashjoin is false? I'm wondering
   if it'll pick a nested loop for that step for the element/ports join and
   what it estimates the cost to be.
 
  With enable_hashjoin = false, no speed improvement.  Execution plan
  at http://rafb.net/paste/results/qtSFVM72.html
 
 Honestly I expected it to be slower (which it was), but I figured it's
 worth seeing what alternate plans it'll generate (specifically to see how
 it cost a nested loop on that join to compare to the fast plan).
 Unfortunately, it generated a merge join, so I think it might require both
 enable_hashjoin=false and enable_mergejoin=false to get it which is likely
 to be even slower in practice but still may be useful to see.

Setting both to false gives a dramatic performance boost.  See 
http://rafb.net/paste/results/b70KAi42.html

This gives suitable speed, but why does the plan vary so much with 
such a minor change in the WHERE clause?
-- 
Dan Langille : http://www.langille.org/
BSDCan - The Technical BSD Conference - http://www.bsdcan.org/




Re: [PERFORM] index scan of whole table, can't see why

2005-01-20 Thread Dan Langille
On 21 Jan 2005 at 8:38, Russell Smith wrote:

 On Fri, 21 Jan 2005 02:36 am, Dan Langille wrote:
  On 20 Jan 2005 at 7:26, Stephan Szabo wrote:
 
 [snip]
   Honestly I expected it to be slower (which it was), but I figured
   it's worth seeing what alternate plans it'll generate
   (specifically to see how it cost a nested loop on that join to
   compare to the fast plan). Unfortunately, it generated a merge
   join, so I think it might require both enable_hashjoin=false and
   enable_mergejoin=false to get it which is likely to be even slower
   in practice but still may be useful to see.
  
  Setting both to false gives a dramatic performance boost.  See
  http://rafb.net/paste/results/b70KAi42.html
  
  -  Materialize  (cost=15288.70..15316.36 rows=2766 width=35)
  (actual time=0.004..0.596 rows=135 loops=92)
-  Nested Loop  (cost=0.00..15288.70 rows=2766
width=35) (actual time=0.060..9.130 rows=135 loops=1)
 
 The Planner here has a quite inaccurate guess at the number of rows
 that will match in the join.  An alternative to turning off join types
 is to up the statistics on the Element columns because that's where
 the join is happening.  Hopefully the planner will get a better idea. 
 However it may not be able to.  2766 rows vs 135 is quite likely to
 choose different plans.  As you can see you have had to turn off two
 join types to give something you wanted/expected.

Fair comment.  However, the statistics on ports.element_id, 
ports.deprecated, ports.broken, and element.id are all set to 1000.
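
(For reference, that was done with statements along these lines, followed by
an ANALYZE:

  ALTER TABLE ports   ALTER COLUMN element_id SET STATISTICS 1000;
  ALTER TABLE ports   ALTER COLUMN deprecated SET STATISTICS 1000;
  ALTER TABLE ports   ALTER COLUMN broken     SET STATISTICS 1000;
  ALTER TABLE element ALTER COLUMN id         SET STATISTICS 1000;
  ANALYZE ports;
  ANALYZE element;
)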

  This gives suitable speed, but why does the plan vary so much with
  such a minor change in the WHERE clause?
 Plan 1 - broken
-  Nested Loop  (cost=0.00..3825.30 rows=495 width=35) (actual
time=0.056..16.161 rows=218 loops=1)
 
 Plan 2 - deprecated
 -  Hash Join  (cost=3676.78..10144.06 rows=2767 width=35)
 (actual time=7.638..1158.128 rows=135 loops=1)
 
 The performance difference is that when the WHERE is changed, you have a
 totally different set of selection options. Plan 1 and Plan 2, shown in
 your earlier paste, report that you are out by a factor of 2 for plan 1.
 But for plan 2 it's a factor of 20.  The planner is likely to make the
 wrong choice when the stats are out by that factor.
 
 Beware that a small typing change does not mean the queries are anything
 alike.

Agreed.  I just did not expect such a dramatic change with a result 
set that is similar.  Actually, they aren't that similar at all.

Thank you.
-- 
Dan Langille : http://www.langille.org/
BSDCan - The Technical BSD Conference - http://www.bsdcan.org/




[PERFORM] index scan of whole table, can't see why

2005-01-19 Thread Dan Langille
Hi folks,

Running on 7.4.2, recently vacuum analysed the three tables in 
question.

The query plan in question changes dramatically when a WHERE clause 
changes from ports.broken to ports.deprecated.  I don't see why.  
Well, I do see why: a sequential scan of 130,000 rows.  The query 
goes from 13ms to 1100ms because of this.  The full plans are at 
http://rafb.net/paste/results/v8ccvQ54.html

I have tried some tuning by:

  set effective_cache_size to 4000, was 1000
  set random_page_cost to 1, was 4

The resulting plan changes, but no speed improvment, are at 
http://rafb.net/paste/results/rV8khJ18.html

Any suggestions please?  

-- 
Dan Langille : http://www.langille.org/
BSDCan - The Technical BSD Conference - http://www.bsdcan.org/




Re: [PERFORM] my boss want to migrate to ORACLE

2004-07-30 Thread Dan Langille
On Fri, 30 Jul 2004, Matthew T. O'Connor wrote:

 Stephane Tessier wrote:

 I think with your help guys I'll do it!
 
 I'm working on it!
 
 I'll work on theses issues:
 
 we have space for more ram (we use 2 gigs out of a possible 3 gigs)
 iowait is very high, 98% -- looks like postgresql is waiting for io access
 raid5 -- raid0: if i'm right, raid5 uses 4 writes (parity, data, etc) for each
 write to disk
 use more transactions (we have a lot of insert/update without transactions).
 cpu does not look like it is running very hard
 
 *php is not running on the same machine
 *redhat enterprise 3.0 ES
 *the version of postgresql is 7.3.4(using RHDB from redhat)
 *pg_autovacuum running at 12 and 24 hour each day
 
 
 What do you mean by pg_autovacuum running at 12 and 24 hour each day?

I suspect he means at 1200 and 2400 each day (i.e noon and midnight).

-- 
Dan Langille - http://www.langille.org/



[PERFORM] seq scan woes

2004-06-07 Thread Dan Langille
A production system has had a query recently degrade in performance.  
What once took < 1s now takes over 1s.  I have tracked down the 
problem to a working example.

Compare  http://rafb.net/paste/results/itZIx891.html
with   http://rafb.net/paste/results/fbUTNF95.html

The first shows the query as is, without much change (actually, this 
query is nested within a larger query, but it demonstrates the 
problem).  The query time is about 1 second.

In the second URL, a SET ENABLE_SEQSCAN TO OFF; is done, and the 
time drops to 151ms, which is acceptable.

What I don't understand is why the ports table is scanned in the 
first place.  Clues please?
-- 
Dan Langille : http://www.langille.org/
BSDCan - http://www.bsdcan.org/




Re: [PERFORM] seq scan woes

2004-06-07 Thread Dan Langille
On 7 Jun 2004 at 16:00, Rod Taylor wrote:

 On Mon, 2004-06-07 at 15:45, Dan Langille wrote:
  A production system has had a query recently degrade in performance.  
  What once took < 1s now takes over 1s.  I have tracked down the 
  problem to a working example.
 
 What changes have you made to postgresql.conf?

Nothing recently (ie. past few months). Nothing at all really.  
Perhaps I need to start tuning that.

 Could you send explain analyse again with SEQ_SCAN enabled but with
 nested loops disabled?

See http://rafb.net/paste/results/zpJEvb28.html

13s

 Off the cuff? I might hazard a guess that effective_cache is too low or
 random_page_cost is a touch too high. Probably the former.

I grep'd postgresql.conf:

#effective_cache_size = 1000# typically 8KB each
#random_page_cost = 4   # units are one sequential page fetch cost

NOTE: both above are commented out.
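
So the server is running with the compiled-in defaults, which can be confirmed
per session with:

  SHOW effective_cache_size;
  SHOW random_page_cost;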

Thank you
-- 
Dan Langille : http://www.langille.org/
BSDCan - http://www.bsdcan.org/




Re: [PERFORM] seq scan woes

2004-06-07 Thread Dan Langille
On 7 Jun 2004 at 16:38, Rod Taylor wrote:

 On Mon, 2004-06-07 at 16:12, Dan Langille wrote:
  On 7 Jun 2004 at 16:00, Rod Taylor wrote:
  
   On Mon, 2004-06-07 at 15:45, Dan Langille wrote:
A production system has had a query recently degrade in performance.  
What once took < 1s now takes over 1s.  I have tracked down the 
problem to a working example.
   
   What changes have you made to postgresql.conf?
  
  Nothing recently (ie. past few months). Nothing at all really.  
  Perhaps I need to start tuning that.
  
   Could you send explain analyse again with SEQ_SCAN enabled but with
   nested loops disabled?
  
  See http://rafb.net/paste/results/zpJEvb28.html
 
 This doesn't appear to be the same query as we were shown earlier.

My apologies. I should try to cook dinner and paste at the same time. 
 ;)

http://rafb.net/paste/results/rVr3To35.html is the right query.

   Off the cuff? I might hazard a guess that effective_cache is too low or
   random_page_cost is a touch too high. Probably the former.
  
  I grep'd postgresql.conf:
  
  #effective_cache_size = 1000# typically 8KB each
  #random_page_cost = 4   # units are one sequential page fetch cost
 
 This would be the issue. You haven't told PostgreSQL anything about your
 hardware. The defaults are somewhat modest.
 
 http://www.postgresql.org/docs/7.4/static/runtime-config.html
 
 Skim through the run-time configuration parameters that can be set in
 postgresql.conf.
 
 Pay particular attention to:
   * shared_buffers (you may be best with 2000 or 4000)
   * effective_cache_size (set to 50% of ram size if dedicated db
 machine)
   * random_page_cost (good disks will bring this down to a 2 from a
 4)

I'll have a play with that and report back.

Thanks.
-- 
Dan Langille : http://www.langille.org/
BSDCan - http://www.bsdcan.org/




Re: [PERFORM] seq scan woes

2004-06-07 Thread Dan Langille
On 7 Jun 2004 at 16:38, Rod Taylor wrote:

 On Mon, 2004-06-07 at 16:12, Dan Langille wrote:
  I grep'd postgresql.conf:
  
  #effective_cache_size = 1000# typically 8KB each
  #random_page_cost = 4   # units are one sequential page fetch cost
 
 This would be the issue. You haven't told PostgreSQL anything about your
 hardware. The defaults are somewhat modest.
 
 http://www.postgresql.org/docs/7.4/static/runtime-config.html
 
 Skim through the run-time configuration parameters that can be set in
 postgresql.conf.
 
 Pay particular attention to:
   * shared_buffers (you may be best with 2000 or 4000)

I do remember increasing this in the past.  It was at 1000 and is 
now at 2000.

see http://rafb.net/paste/results/VbXQcZ87.html

   * effective_cache_size (set to 50% of ram size if dedicated db
 machine)

The machine has 512MB RAM.  effective_cache_size was at 1000.  So 
let's try a 256MB cache. Does that match a 32000 setting?  I 
tried it.  The query went to 1.5s.  At 8000, the query was 1s.  At 
2000, the query was about 950ms.

This machine is a webserver/database/mail server, but the FreshPorts 
database is by far its biggest task.  

   * random_page_cost (good disks will bring this down to a 2 from a
 4)

I've got mine set at 4.  Increasing it to 6 gave me a 1971ms query.  
At 3, it was 995ms.  Setting it to 2 gave me a 153ms query.

How interesting.

For comparison, I reset shared_buffers and effective_cache_size back 
to their original values (both at 1000).  This gave me a 130-140ms 
query.
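
To summarize the combination that gave the fastest run here (shared_buffers
needs a postmaster restart; the other two can also be tried per session with
SET):

  # postgresql.conf
  shared_buffers = 1000           # the original value
  effective_cache_size = 1000     # the original value
  random_page_cost = 2            # down from the default of 4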

The disk in question is:

ad0: 19623MB IC35L020AVER07-0 [39870/16/63] at ata0-master UDMA100

I guess that might be this disk: 
http://www.harddrives4less.com/ibmdes6020ua2.html

I invite comments upon my findings.

Rod: thanks for the suggestions.




   
 
 -- 
 Rod Taylor rbt [at] rbt [dot] ca
 
 Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
 PGP Key: http://www.rbt.ca/signature.asc
 
 


-- 
Dan Langille : http://www.langille.org/
BSDCan - http://www.bsdcan.org/




Re: [PERFORM] seq scan woes

2004-06-07 Thread Dan Langille
On 7 Jun 2004 at 18:49, Dan Langille wrote:

 On 7 Jun 2004 at 16:38, Rod Taylor wrote:
* random_page_cost (good disks will bring this down to a 2 from a
  4)
 
 I've got mine set at 4.  Increasing it to 6 gave me a 1971ms query.  
 At 3, it was 995ms.  Setting it to 2 gave me a 153ms query.
 
 How interesting.

The explain analyse: http://rafb.net/paste/results/pWhHsL86.html
-- 
Dan Langille : http://www.langille.org/
BSDCan - http://www.bsdcan.org/

