Re: [PERFORM] Problems with PostGreSQL and Windows 2003

2007-11-26 Thread Heikki Linnakangas

There is no mention of Out of Memory in that piece of log.

[EMAIL PROTECTED] wrote:

Hi,

I'm using a cursor.

Here is a piece of the log file (psqlodbc):

[0.000]conn=02DE3A70, PGAPI_DriverConnect(
in)='DSN=BI;UID=biuser;PWD=x;', fDriverCompletion=0
[0.000]DSN info:
DSN='BI',server='localhost',port='5432',dbase='BI',user='biuser',passwd='x'
[0.000] 
onlyread='0',protocol='7.4',showoid='0',fakeoidindex='0',showsystable='0'

[0.000]  conn_settings='',conn_encoding='(null)'
[0.000]  translation_dll='',translation_option=''
[0.000]Driver Version='08.02.0400,200704270001' linking static Multithread 
library
[0.000]Global Options: fetch=100, socket=4096, unknown_sizes=0,
max_varchar_size=255, max_longvarchar_size=8190
[0.000]disable_optimizer=0, ksqo=1, unique_index=1,
use_declarefetch=1
[0.000]text_as_longvarchar=1, unknowns_as_longvarchar=0,
bools_as_char=1 NAMEDATALEN=64
[0.000]extra_systable_prefixes='dd_;', conn_settings=''
conn_encoding=''
[0.046][ PostgreSQL version string = '8.2.5' ]
[0.046][ PostgreSQL version number = '8.2' ]
[0.046]conn=02DE3A70, query='select oid, typbasetype from pg_type where typname
= 'lo''
[0.046]NOTICE from backend during send_query: 'SLOG'
[0.046]NOTICE from backend during send_query: 'C0'
[0.046]NOTICE from backend during send_query: 'Mstatement: select oid,
typbasetype from pg_type where typname = 'lo''
[0.046]NOTICE from backend during send_query: 'Fpostgres.c'
[0.046]NOTICE from backend during send_query: 'L811'
[0.046]NOTICE from backend during send_query: 'Rexec_simple_query'
[0.046][ fetched 1 rows ]
[0.046][ Large Object oid = 17288 ]
[0.046][ Client encoding = 'LATIN9' (code = 16) ]
[0.046]conn=02DE3A70,
PGAPI_DriverConnect(out)='DSN=BI;DATABASE=BI;SERVER=localhost;PORT=5432;UID=biuser;PWD=x;SSLmode=disable;ReadOnly=0;Protocol=7.4-1;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVarcharSize=255;MaxLongVarcharSize=8190;Debug=0;CommLog=1;Optimizer=0;Ksqo=1;UseDeclareFetch=1;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Parse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseServerSidePrepare=0;LowerCaseIdentifier=0;XaOpt=1'
[0.062]STATEMENT ERROR: func=set_statement_option, desc='', errnum=30,
errmsg='The option may be for MS SQL Server(Set)'
[0.062]


[0.062] hdbc=02DE3A70, stmt=02DE85C8, result=
[0.062] prepare=0, internal=0
[0.062] bindings=, bindings_allocated=0
[0.062] parameters=02DE8F48, parameters_allocated=1
[0.062] statement_type=-2, statement='(NULL)'
[0.062] stmt_with_params='(NULL)'
[0.062] data_at_exec=-1, current_exec_param=-1, put_data=0
[0.062] currTuple=-1, current_col=-1, lobj_fd=-1
[0.062] maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0,
scroll_concurrency=1
[0.062] cursor_name=''
[0.062] QResult Info
---
[0.062]CONN ERROR: func=set_statement_option, desc='', errnum=0, errmsg='(NULL)'
[0.062]



Thanks,
Cláudia.




Are you then trying to process the whole data set at once?  I'm pretty
certain the issue is your app, not pgsql, running out of memory.
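
To illustrate what the driver is doing (a minimal sketch with a hypothetical
table name, not taken from the log above): with use_declarefetch=1 and
fetch=100, as in the DSN settings logged above, psqlODBC pages through
results with a server-side cursor instead of materializing the whole result
set on the client, along these lines:

    BEGIN;
    DECLARE big_cur CURSOR FOR     -- server-side cursor
        SELECT * FROM big_table;   -- hypothetical table
    FETCH 100 FROM big_cur;        -- one batch; repeat until no rows come back
    CLOSE big_cur;
    COMMIT;

If the application accumulates every fetched row in its own memory anyway,
it can still run out of memory even though the server and driver are behaving.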








--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] doubt with pg_dump and high concurrent used databases

2007-11-26 Thread Peter Childs
On 25/11/2007, Pablo Alcaraz [EMAIL PROTECTED] wrote:

 Tom Lane wrote:
  Peter Childs [EMAIL PROTECTED] writes:
 
  On 25/11/2007, Erik Jones [EMAIL PROTECTED] wrote:
 
  Does the pg_dump create this kind of consistent backups? Or do I
  need to do the backups using another program?
 
  Yes, that is exactly what pg_dump does.
 
 
  Yes, so long as you are using transactions correctly, i.e. doing a BEGIN
  before each invoice and a COMMIT afterwards. If you're not bothering and
  are using autocommit, you *may* have problems.
 
 
  I think you need to qualify that a bit more.  What you're saying is that
  if an application has consistency requirements that are momentarily
  violated during multi-statement updates, and it fails to wrap such
  updates into a single transaction, then pg_dump could capture one of the
  intermediate states.  That's true, but it's hardly pg_dump's fault.
  If there were a system crash partway through such a sequence, the
  consistency requirements would be violated afterwards, too.
 
 

 Agreed. In my case I define a consistent database state as the state
 the database has when the program that uses it is stopped normally and
 without errors. In this state the program starts without trouble and
 everything looks fine. I believe this behavior is because all the
 inserts and updates are made using transactions. Anything else would be
 a bug; it will be fixed, and it will not be pg_dump's fault.

 So if pg_dump can capture a consistent state with all the data up to
 the start time, without the pending updates/inserts of still-open
 transactions, in the same way as when I stopped the program before
 starting pg_dump, then for me it is useful and enough to solve my problem.

 Thanks to all!

 Pablo


Given your long description of what you thought was consistent, I thought it
important to give the answer "yes, but" rather than just a plain "yes".
I've met quite a few apps that create inconsistent databases even when the
database itself is actually consistent.
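
For example (a minimal sketch with a hypothetical invoice schema): wrapping
the related statements in a single transaction guarantees that pg_dump's
snapshot sees either all of them or none of them:

    BEGIN;
    INSERT INTO invoices (id, customer_id) VALUES (42, 7);
    INSERT INTO invoice_lines (invoice_id, item, qty) VALUES (42, 'widget', 3);
    UPDATE stock SET qty = qty - 3 WHERE item = 'widget';  -- hypothetical tables
    COMMIT;

With autocommit, each statement is its own transaction, so a dump (or a
crash) can land between them.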

Peter


[PERFORM] TB-sized databases

2007-11-26 Thread Peter Koczan
Hi all,

I have a user who is looking to store 500+ GB of data in a database
(and when all the indexes and metadata are factored in, it's going to
be more like 3-4 TB). He is wondering how well PostgreSQL scales with
TB-sized databases and what can be done to help optimize them (mostly
hardware and config parameters, maybe a little advocacy). I can't
speak on that since I don't have any DBs approaching that size.

The other part of this puzzle is that he's torn between MS SQL Server
(running on Windows and unsupported by us) and PostgreSQL (running on
Linux...which we would fully support). If any of you have ideas of how
well PostgreSQL compares to SQL Server, especially in TB-sized
databases, that would be much appreciated.

We're running PG 8.2.5, by the way.

Peter



Re: [PERFORM] TB-sized databases

2007-11-26 Thread Joshua D. Drake



Well, I can't speak to MS SQL Server because all of our clients run
PostgreSQL ;). I can tell you we have many that are in the 500 GB -
1.5 TB range.


All perform admirably as long as you have the hardware behind them and
are doing correct table structuring (such as table partitioning).
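
For the record, on 8.2 table partitioning means inheritance plus CHECK
constraints; a minimal sketch (hypothetical table names):

    CREATE TABLE measurement (
        ts      timestamptz NOT NULL,
        payload text
    );
    -- one child per month, with a CHECK constraint describing its slice
    CREATE TABLE measurement_2007_11 (
        CHECK (ts >= DATE '2007-11-01' AND ts < DATE '2007-12-01')
    ) INHERITS (measurement);
    -- lets the planner skip children whose CHECK constraints rule them out
    SET constraint_exclusion = on;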


Sincerely,

Joshua D. Drake










Re: [PERFORM] TB-sized databases

2007-11-26 Thread Oleg Bartunov

We have a several-TB database in production and it works well on an
HP rx1620 dual Itanium2 with an MSA 20, running Linux. It's read-only storage
for astronomical catalogs with about 4 billion objects. We have a custom
index for spherical coordinates which provides great performance.

Oleg


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [PERFORM] TB-sized databases

2007-11-26 Thread Pablo Alcaraz
I had a client that tried to use MS SQL Server to run a 500 GB+ database.
The database simply collapsed. They switched to Teradata and it is
running well. That database is now 1.5 TB+.


Currently I have clients running huge PostgreSQL databases and they are
happy. In one client's database the biggest table is 237 GB+ (just one
table!) and PostgreSQL runs the database without problems using
partitioning, triggers and rules (on PostgreSQL 8.2.5).
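
For example, one of those rules might look like this (a sketch with
hypothetical table names, matching the inheritance layout sketched earlier
in the thread); it routes inserts on the parent table to the right child:

    CREATE RULE measurement_insert_2007_11 AS
        ON INSERT TO measurement
        WHERE ts >= DATE '2007-11-01' AND ts < DATE '2007-12-01'
        DO INSTEAD
        INSERT INTO measurement_2007_11 VALUES (NEW.*);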


Pablo






Re: [PERFORM] TB-sized databases

2007-11-26 Thread Stephen Cook
I think either would work; both PostgreSQL and MS SQL Server have 
success stories out there running VLDBs.  It really depends on what you 
know and what you have.  If you have a lot of experience with Postgres 
running on Linux, and not much with SQL Server on Windows, of course the 
former would be a better choice for you.  You stand a much better chance 
working with tools you know.








[PERFORM] Base de Datos Transaccional

2007-11-26 Thread Fabio Arias
Hi friends, I'm writing because I need to know whether PostgreSQL is robust
enough to handle a transactional platform doing 2000 tx per second. I also
need to know how to separate my transactional workload from the
query/reporting workload, given that there is data common to both
environments.

I'm asking for your help and a bit of your expertise on the subject.

Thanks, everyone.


Re: [PERFORM] Base de Datos Transaccional

2007-11-26 Thread Pablo Alcaraz
If you have the necessary hardware and plan the database deployment
appropriately, it can without a doubt handle that load.

Regards,

Pablo






[PERFORM] PostgreSQL performance on various distribution stock kernels

2007-11-26 Thread Damon Hart
Is there a source comparing PostgreSQL performance (say, using
pgbench) out of the box for various Linux distributions?  Alternately,
is there an analysis anywhere of the potential gains from building a
custom kernel and just what customizations are most relevant to a
PostgreSQL server?

Some background: in investigating the overhead of adopting OpenVZ
virtualization, I ran pgbench tests on PostgreSQL running in a virtual
environment (VE) and compared them to PostgreSQL running directly on the
hardware node (HN) under the current stable OpenVZ kernel with no VE
running. The results were roughly in line with expectations based on the
OpenVZ documentation (about 5% fewer transactions per second).

For completeness, I then ran the same tests with the current stock
Fedora 8 kernel running natively on the same hardware (after all, this
is the true non-virtual alternative). Surprisingly, this test
performed markedly worse than under the OpenVZ kernel (either on the HN or
in a VE), even though the latter is from the 2.6.18 series and has added
baggage to support OpenVZ's OS virtualization. Multiple pgbench runs
confirm this conclusion.

The PostgreSQL server version (8.2.5), configuration, hardware,
etc. are identical (actually same HD filesystem image mounted at
/var/lib/pgsql) for each test. Similarly, other than the kernel, the
OS is identical - stock Fedora 8 with up to date packages for each
test.

I double-checked the kernel architecture via uname:

Fedora 8:
Linux 2.6.23.1-49.fc8 #1 SMP Thu Nov 8 21:41:26 EST 2007 i686 i686 i386
GNU/Linux

OpenVZ:
Linux 2.6.18-8.1.15.el5.028stab049.1 #1 SMP Thu Nov 8 16:23:12 MSK 2007
i686 i686 i386 GNU/Linux

So, what's different between these tests? I'm seeing performance
differences of between +65% to +90% transactions per second of the
OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is
this reflective of different emphasis between RHEL and Fedora kernel
builds?  Some OpenVZ optimization on top of the RHEL5 build? Something
else? Where should I look?

any insights much appreciated,

Damon Hart





Re: [PERFORM] PostgreSQL performance on various distribution stock kernels

2007-11-26 Thread Alexander Staubo
On 11/26/07, Damon Hart [EMAIL PROTECTED] wrote:
 So, what's different between these tests? I'm seeing performance
 differences of between +65% to +90% transactions per second of the
 OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is
 this reflective of different emphasis between RHEL and Fedora kernel
 builds?  Some OpenVZ optimization on top of the RHEL5 build? Something
 else? Where should I look?

A recent FreeBSD benchmark (which also tested Linux performance) found
major performance differences between recent versions of the kernel,
possibly attributable to the new Completely Fair Scheduler (CFS):

  http://archives.postgresql.org/pgsql-performance/2007-11/msg00132.php

No idea if it's relevant.

Alexander.



Re: [PERFORM] PostgreSQL performance on various distribution stock kernels

2007-11-26 Thread Scott Marlowe
On Nov 26, 2007 4:50 PM, Damon Hart [EMAIL PROTECTED] wrote:

 So, what's different between these tests? I'm seeing performance
 differences of between +65% to +90% transactions per second of the
 OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is
 this reflective of different emphasis between RHEL and Fedora kernel
 builds?  Some OpenVZ optimization on top of the RHEL5 build? Something
 else? Where should I look?

 any insights much appreciated,

How many TPS are you seeing on each one?  If you are running 10K RPM
drives and seeing more than 166.66 transactions per second, then your
drives are likely lying to you and not actually fsyncing; alternatively,
fsync() on IDE / SATA may have been properly implemented in the later
kernel, so it has stopped lying.
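
For reference, the arithmetic behind that ceiling (one fsync'd commit per
platter rotation, for a single client):

    10,000 rotations/minute / 60 seconds = ~166.7 rotations/second
    => at most ~166 honest commits/second from one client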

Hard to say for sure.

What does vmstat 1 have to say on each system when it's under load?



Re: [PERFORM] PostgreSQL performance on various distribution stock kernels

2007-11-26 Thread Damon Hart
On Mon, 2007-11-26 at 17:00 -0600, Scott Marlowe wrote:
 On Nov 26, 2007 4:50 PM, Damon Hart [EMAIL PROTECTED] wrote:
 
  So, what's different between these tests? I'm seeing performance
  differences of between +65% to +90% transactions per second of the
  OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is
  this reflective of different emphasis between RHEL and Fedora kernel
  builds?  Some OpenVZ optimization on top of the RHEL5 build? Something
  else? Where should I look?
 
  any insights much appreciated,
 
 How many TPS are you seeing on each one?  If you are running 10krpm
 drives and seeing more than 166.66 transactions per second, then your
 drives are likely lying to you and not actually fsyncing, and it could
 be that fsync() on IDE / SATA has been implemented in later kernels
 and it isn't lying.
 
 Hard to say for sure.
 
 What does vmstat 1 have to say on each system when it's under load?

I will have to repeat the tests to give you any vmstat info, but perhaps
a little more raw input might be useful.

Test H/W:

Dell Precision 650 Dual 
Intel CPU: Dual XEON 2.4GHz 512k Cache 
RAM: 4GB of DDR ECC 
Hard Drive: 4 x 36GB 10K 68Pin SCSI Hard Drive 

pgbench
scale: 50
clients: 50
transactions per client: 100

stats for 30 runs on each kernel, in TPS (excluding connections
establishing)

OpenVZ (RHEL5 derived 2.6.18 series)

average: 446
maximum: 593
minimum: 95
stdev: 151
median: 507

stock Fedora 8 (2.6.23 series)

average: 270
maximum: 526
minimum: 83
stdev: 112
median: 268

Does your 166 TPS ceiling for a 10K RPM drive apply in this arrangement,
with multiple disks (the PostgreSQL volume spans three drives, segregated
from the OS) and multiple pgbench clients? I'm fuzzy on whether these
factors even enter into that rule of thumb. At least as far as the
PostgreSQL configuration is concerned, fsync has not been changed from
the default.

Damon





Re: [PERFORM] PostgreSQL performance on various distribution stock kernels

2007-11-26 Thread Damon Hart
On Mon, 2007-11-26 at 18:06 -0500, Tom Lane wrote:
 Damon Hart [EMAIL PROTECTED] writes:
  So, what's different between these tests? I'm seeing performance
  differences of between +65% to +90% transactions per second of the
  OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is
  this reflective of different emphasis between RHEL and Fedora kernel
  builds?  Some OpenVZ optimization on top of the RHEL5 build? Something
  else? Where should I look?
 
 Considering how raw Fedora 8 is, I think what you've probably found is a
 performance bug that should be reported to the kernel hackers.
 

Not being a kernel hacker, any suggestions on how to provide more useful
feedback than just a pgbench TPS comparison and hardware specs? What's the
best forum, presuming this does boil down to a kernel issue?

 Just to confirm: this *is* the same filesystem in both cases, right?
 
   regards, tom lane

Yes, same filesystem, simply booting different kernels.

Damon





Re: [PERFORM] PostgreSQL performance on various distribution stock kernels

2007-11-26 Thread Greg Smith

On Mon, 26 Nov 2007, Damon Hart wrote:


Fedora 8:
Linux 2.6.23.1-49.fc8 #1 SMP Thu Nov 8 21:41:26 EST 2007 i686 i686 i386
GNU/Linux

OpenVZ:
Linux 2.6.18-8.1.15.el5.028stab049.1 #1 SMP Thu Nov 8 16:23:12 MSK 2007
i686 i686 i386 GNU/Linux


2.6.23 introduced a whole new scheduler: 
http://www.linux-watch.com/news/NS2939816251.html
so it's rather different from earlier 2.6 releases, and so new that there 
could easily be performance bugs.


Does your 166 TPS ceiling for a 10K RPM drive apply in this arrangement
with multiple disks


Number of disks has nothing to do with it; it depends only on the rate the 
disk with the WAL volume is spinning at.  But that's for a single client.



pgbench
scale: 50
clients: 50
transactions per client: 100


With this many clients, you can get far more transactions per second
committed than the max for a single client (~166 at 10,000 RPM).  What you're
seeing, somewhere around 500 per second, is reasonable.


Note that you're doing two things that make pgbench less useful than it 
can be:


1) The number of transactions you're committing is trivial, which is one
reason why your test runs have such a huge variation.  Try a much larger
number of transactions per client if you want something that doesn't vary
quite so much. If a run doesn't last a couple of minutes, you're not going
to get good repeatability.


2) The way pgbench works, it takes a considerable amount of resources to 
simulate this many clients.  You might get higher (and more realistic) 
numbers if you run the pgbench client on another system than the server.
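
Putting both points together, the runs might look something like this (a
sketch; the host name is hypothetical and the transaction count is just an
example of "large enough"):

    pgbench -i -s 50 bench                      # initialize at scale factor 50
    pgbench -h db-server -c 50 -t 10000 bench   # 50 clients, 10000 xacts each,
                                                # run from a second machine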


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



[PERFORM] 8.1 planner problem ?

2007-11-26 Thread Gianluca Alberici

Hello,

I have run into an interesting problem with 8.1, and I would like someone
to explain whether the problem is with the planner or with me. In either
case, a solution is welcome. The following query:

SELECT sum(qty) FROM
_abi_main_pof_r ampr
inner join _abi_main_items ami on ampr.itemid=ami.id
inner join _abi_main_pof_t ampt on ampr.poftid=ampt.poftid
inner join _abi_main_contacts amc on ampt.contactid=amc.codest
WHERE
ampt.doctypeid in ('BCA','BSC')
and amc.descr ilike '%SOCO%'
and ampr.type=0;

is explain analyze'd as follows:

Aggregate  (cost=1220.65..1220.66 rows=1 width=4) (actual time=95937.824..95937.824 rows=1 loops=1)
  ->  Nested Loop  (cost=0.00..1220.65 rows=1 width=4) (actual time=3695.250..95936.292 rows=1503 loops=1)
        ->  Nested Loop  (cost=0.00..1214.64 rows=1 width=8) (actual time=3695.229..95924.697 rows=1503 loops=1)
              Join Filter: ("inner".poftid = "outer".poftid)
              ->  Nested Loop  (cost=0.00..79.16 rows=1 width=4) (actual time=0.039..22.547 rows=2437 loops=1)
                    Join Filter: ("inner".contactid = "outer".codest)
                    ->  Seq Scan on _abi_main_contacts amc  (cost=0.00..1.29 rows=1 width=4) (actual time=0.029..0.034 rows=1 loops=1)
                          Filter: ((descr)::text ~~* '%SOCO%'::text)
                    ->  Seq Scan on _abi_main_pof_t ampt  (cost=0.00..77.53 rows=27 width=8) (actual time=0.006..15.820 rows=2702 loops=1)
                          Filter: (((doctypeid)::text = 'BCA'::text) OR ((doctypeid)::text = 'BSC'::text))
              ->  Seq Scan on _abi_main_pof_r ampr  (cost=0.00..1132.81 rows=214 width=12) (actual time=0.034..35.986 rows=8271 loops=2437)
                    Filter: (type = 0)
        ->  Index Scan using _abi_docks_items_pkey on _abi_main_items ami  (cost=0.00..5.99 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1503)
              Index Cond: ("outer".itemid = ami.id)
Total runtime: 95937.950 ms

The same query, but with the condition changed to ampr.type != 1 instead
of ampr.type = 0:

SELECT sum(qty) FROM
_abi_main_pof_r ampr
inner join _abi_main_items ami on ampr.itemid=ami.id
inner join _abi_main_pof_t ampt on ampr.poftid=ampt.poftid
inner join _abi_main_contacts amc on ampt.contactid=amc.codest
WHERE
ampt.doctypeid in ('BCA','BSC')
and amc.descr ilike '%SOCO%'
and ampr.type != 1;


is explain analyze'd as follows:

Aggregate  (cost=1446.13..1446.14 rows=1 width=4) (actual time=81.609..81.609 rows=1 loops=1)
  ->  Nested Loop  (cost=77.60..1446.12 rows=2 width=4) (actual time=22.597..80.944 rows=1503 loops=1)
        ->  Nested Loop  (cost=77.60..1434.12 rows=2 width=8) (actual time=22.577..72.785 rows=1503 loops=1)
              Join Filter: ("inner".contactid = "outer".codest)
              ->  Seq Scan on _abi_main_contacts amc  (cost=0.00..1.29 rows=1 width=4) (actual time=0.030..0.036 rows=1 loops=1)
                    Filter: ((descr)::text ~~* '%SOCO%'::text)
              ->  Hash Join  (cost=77.60..1427.52 rows=425 width=12) (actual time=22.536..69.034 rows=8271 loops=1)
                    Hash Cond: ("outer".poftid = "inner".poftid)
                    ->  Seq Scan on _abi_main_pof_r ampr  (cost=0.00..1132.81 rows=42571 width=12) (actual time=0.035..37.045 rows=8271 loops=1)
                          Filter: (type <> 1)
                    ->  Hash  (cost=77.53..77.53 rows=27 width=8) (actual time=22.471..22.471 rows=2702 loops=1)
                          ->  Seq Scan on _abi_main_pof_t ampt  (cost=0.00..77.53 rows=27 width=8) (actual time=0.006..20.482 rows=2702 loops=1)
                                Filter: (((doctypeid)::text = 'BCA'::text) OR ((doctypeid)::text = 'BSC'::text))
        ->  Index Scan using _abi_docks_items_pkey on _abi_main_items ami  (cost=0.00..5.99 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1503)
              Index Cond: ("outer".itemid = ami.id)
Total runtime: 81.735 ms



The evidence is that with the first condition the query planner chooses
the 'lower cost' plan, which leads to an execution time of 95
seconds (!!!), while with the second condition the 'higher cost' plan
finishes in about 82 milliseconds. The result set is exactly the same in
both cases. It is evident to me that there must be a problem here.
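
A standard first thing to try for this kind of row-count misestimate,
sketched here only as a hedged guess and not a verified fix, is raising the
statistics target on the filtered column and re-analyzing, so the planner
gets a better idea of how common type = 0 really is:

    ALTER TABLE _abi_main_pof_r ALTER COLUMN type SET STATISTICS 100;
    ANALYZE _abi_main_pof_r;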

Anybody?

TIA

Gianluca


