[PERFORM] Using EXECUTE in a function

2007-07-31 Thread Andreas Tille

Hi,

I have found under

  
http://www.physiol.ox.ac.uk/Computing/Online_Documentation/postgresql/plpgsql.html#PLPGSQL-OVERVIEW

Note:  The PL/pgSQL EXECUTE statement is not related to the EXECUTE
   statement supported by the PostgreSQL server.  The server's EXECUTE
   statement cannot be used within PL/pgSQL functions (and is not needed).

I'm especially stumbling over the "is not needed" part.  My plan is to
write a server-side function (either SQL or PL/pgSQL) that wraps the
output of a PREPAREd statement, but I have no idea how to do this.

The final task is to obtain some XML form of my data via a simple shell script
that contains

psql -t MyDatabase -c "SELECT * FROM MyFunction($1, $2);"

The task of MyFunction($1, $2) is to wrap an XML header (just some text like

  <?xml version="1.0" encoding="ISO-8859-1"?>
  ...

) around the real data, which will be obtained via a PREPAREd statement
declared like this:

   PREPARE xml_data(int, int) AS ( SELECT ... WHERE id = $1 AND source = $2 );

where ... stands for the part of the query that wraps the output into XML
format.

I don't know whether this is a reasonable approach.  I know how to solve
the problem with a PL/pgSQL function that builds the output as a text
string, but I have learned that PREPAREd statements can be much more
clever performance-wise, so I wonder whether I could do it this way.
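
Just to make the intention clearer, here is a minimal sketch of the kind
of PL/pgSQL wrapper I have in mind (table and column names are invented,
and I assume dollar quoting, i.e. PostgreSQL 8.0 or later; as far as I
understand, queries inside a PL/pgSQL function are planned once and then
cached, which is presumably why the documentation says an explicit
PREPARE/EXECUTE is not needed):

    CREATE OR REPLACE FUNCTION MyFunction(int, int) RETURNS SETOF text AS $$
    DECLARE
        line text;
    BEGIN
        -- fixed XML header that is wrapped around the real data
        RETURN NEXT '<?xml version="1.0" encoding="ISO-8859-1"?>';
        RETURN NEXT '<result>';
        -- hypothetical table and text column; this SELECT plays the role
        -- of the PREPAREd statement xml_data above
        FOR line IN
            SELECT '<row>' || payload || '</row>'
              FROM mytable
             WHERE id = $1 AND source = $2
        LOOP
            RETURN NEXT line;
        END LOOP;
        RETURN NEXT '</result>';
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

Calling it from the shell script as shown above should then print the
header followed by one line per row.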

Kind regards and thanks for any help

Andreas.

--
http://fam-tille.de



Re: [PERFORM] postgres: 100% CPU utilization

2007-04-23 Thread Andreas Tille

On Thu, 19 Apr 2007, Sergey Tsukinovsky wrote:


I know that 7.0.2 is an old version and therefore ran the same test on
7.3.18 - the performance behavior was similar.


Why have you chosen yet another very old version for the performance
comparison, and not the latest stable release?

Kind regards

   Andreas.

--
http://fam-tille.de



[PERFORM] Performance of count(*)

2007-03-22 Thread Andreas Tille

Hi,

I am just trying to find out why a simple count(*) can take that long.
At first I tried EXPLAIN, which rather quickly knows how many rows it
would have to check, but the actual count is two orders of magnitude
slower.

My colleague, who uses an MS SQL server, can't believe that.

$ psql InfluenzaWeb -c 'explain SELECT count(*) from agiraw ;'
                               QUERY PLAN
------------------------------------------------------------------------
 Aggregate  (cost=196969.77..196969.77 rows=1 width=0)
   ->  Seq Scan on agiraw  (cost=0.00..185197.41 rows=4708941 width=0)
(2 rows)

real    0m0.066s
user    0m0.024s
sys     0m0.008s

$ psql InfluenzaWeb -c 'SELECT count(*) from agiraw ;'
  count
---------
 4708941
(1 row)

real    0m4.474s
user    0m0.036s
sys     0m0.004s
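
(Side note: if an approximate figure were good enough, I suppose the
estimate that explain prints could be read directly from the system
catalogue instead of counting; a sketch, using the same table:

   SELECT reltuples FROM pg_class WHERE relname = 'agiraw';

This only reflects the state after the last VACUUM/ANALYZE, of course,
not an exact count.)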


Any explanation?

Kind regards

 Andreas.

--
http://fam-tille.de



Re: [PERFORM] Performance of count(*)

2007-03-22 Thread Andreas Tille

On Thu, 22 Mar 2007, Andreas Kostyrka wrote:


Which version of PG?


Ah, sorry, I forgot that.  The issue occurs with the Debian (Etch)
packaged version 7.4.16.  I plan to switch to 8.1.8 soon.


That's the reason why PG (check the newest releases, I seem to
remember that there have been some aggregate optimizations there),

I'll verify this once I have moved to the new version.


does
a SeqScan for select count(*) from table. btw, depending upon your
data, doing a select count(*) from table where user=X will use an
Index, but will still need to fetch the rows proper to validate them.


I have an index on three of the seven columns of this table.  Is there
any chance to optimize the indexing with regard to this?
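
For example, if the relevant filter were on a single column, I suppose
the obvious thing to try would be something like the following (the
column name is only an assumption on my part):

   CREATE INDEX agiraw_source_idx ON agiraw (source);
   ANALYZE agiraw;
   SELECT count(*) FROM agiraw WHERE source = 42;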

Well, to be honest, I'm not really interested in the performance of
count(*).  I was just discussing general performance issues on the
phone, and when my colleague asked me about the size of the database
he simply wondered why this takes so long for a job at which his
MS SQL server is much faster.  So in principle I was just asking a
first question that is easy to ask.  Perhaps I will come up with more
difficult optimisation questions later.

Kind regards

Andreas.

--
http://fam-tille.de



[PERFORM] How to debug performance problems

2007-02-19 Thread Andreas Tille

Hi,

I'm running a web application using Zope that obtains all its data
from a PostgreSQL 7.4 database (a Debian Sarge system with package
7.4.7-6sarge4 on an older Sparc machine, an E250 server with two
processors and 2GB of memory).  A while ago I did some performance
tuning and found out that

 max_connections = 256
 shared_buffers = 131072
 sort_mem = 65536

would help for a certain application (which is no longer running on
this machine, but I left these parameters in
/etc/postgresql/postgresql.conf untouched).

My web application ran fine for years without any problem and the
performance was satisfying.  Some months ago I added a table containing
450 data rows (all other tables in use are smaller by orders of
magnitude), so nothing very large, and this table is not accessed
directly by the web application (only via some generated caching tables
that are updated once a day).  Some functions and small tables were
added as well, but the core has been stable over several years.

About two weeks ago the application became *drastically* slower, and I
urgently have to bring back the old performance.  As I said, I'm talking
about functions accessing tables that have not grown over several years
and should behave more or less the same.

I wonder whether adding tables and functions could have an influence on
other, untouched parts, and how to find out what makes things slow that
have worked reliably and satisfyingly for years.  My first attempt was to
switch back to the default settings of the current Debian maintainer's
/etc/postgresql/postgresql.conf, leaving the parameters above untouched,
but this did not change anything.

I'm rather clueless about even how to describe the problem correctly,
and I hope you will at least find enough information here to ask me the
right questions so that you get the information you need to track down
the performance problem.
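
If it is of any use: the only concrete step I can think of myself is to
log slow statements, roughly like this in postgresql.conf (assuming the
7.4 parameter log_min_duration_statement is the right tool here), and
then to run EXPLAIN ANALYZE on whatever shows up:

 # log every statement that runs for more than 500 ms (-1 disables this)
 log_min_duration_statement = 500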

Kind regards and thanks for any help

   Andreas.
