Re: [PERFORM] How are text columns stored?

2005-06-28 Thread Tom Lane
Meetesh Karia [EMAIL PROTECTED] writes:
 According to section 8.3 of the doc:

 Long values are also stored in background tables so they do not interfere
 with rapid access to the shorter column values.

 So, how long does a value have to be to be considered long?

Several kilobytes.
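
For the record, that "background table" is the relation's TOAST table; a quick
way to see which one backs a given table (a sketch, assuming a table named
mytable) is:

SELECT relname, reltoastrelid::regclass AS toast_table
  FROM pg_class
 WHERE relname = 'mytable';

Short values are never moved out there, which is what keeps access to the
ordinary columns fast.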

regards, tom lane



Re: [PERFORM] perl garbage collector

2005-06-28 Thread Tom Lane
Jean-Max Reymond [EMAIL PROTECTED] writes:
 I have a stored procedure written in perl and I doubt that perl's
 garbage collector is working :-(
 after a lot of work, postmaster has a size of 1100 Mb and  I think
 that the keyword undef has no effects.

Check the PG list archives --- there's been previous discussion of
similar issues.  I think we concluded that when Perl is built to use
its own private memory allocator, the results of that competing with
malloc are not very pretty :-(.  You end up with a fragmented memory
map and no chance to give anything back to the OS.

regards, tom lane



Re: [PERFORM] How can I speed up this function?

2005-06-28 Thread David Mitchell
The function I have exits the loop when the count hits 100, yes, but the
inner loop can push the count up as high as necessary to select all the
statements for a transaction, so by the time it exits the count could be
much higher. I do want to limit the statements, but I want to get enough
for complete transactions.
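
For what it's worth, here is a sketch of how that whole-transactions-only rule
could be expressed as a single query against the tables from the schema quoted
below; like the function's outer loop it takes up to 50 unfetched transactions
and returns every statement belonging to them (the count-based cutoff would
still have to live in the function or the application):

SELECT s.id, s.transaction_id, s.table_name, s.op, s.data
  FROM dbmirror.pending_statement AS s
 WHERE s.transaction_id IN (SELECT t.trans_id
                              FROM pending_trans AS t
                             WHERE t.fetched = false
                             ORDER BY t.trans_id
                             LIMIT 50)
 ORDER BY s.transaction_id, s.id;

The separate UPDATE pending_trans SET fetched = true pass is of course still
needed.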


David

Gnanavel Shanmugam wrote:

But in the function you are exiting the loop when the count hits 100. If you
do not want to limit the statements then remove the limit clause from the
query I've written.

with regards,
S.Gnanavel




-Original Message-
From: [EMAIL PROTECTED]
Sent: Tue, 28 Jun 2005 16:29:32 +1200
To: [EMAIL PROTECTED]
Subject: Re: [PERFORM] How can I speed up this function?

Hi Gnanavel,

Thanks, but that will only return at most 100 statements. If there is a
transaction with 110 statements then this will not return all the
statements for that transaction. We need to make sure that the function
returns all the statements for a transaction.

Cheers

David

Gnanavel Shanmugam wrote:


Merge the two select statements like this and try,

SELECT t.trans_id as ID, s.id, s.transaction_id, s.table_name, s.op, s.data
  FROM pending_trans AS t JOIN dbmirror.pending_statement AS s
    ON (s.transaction_id = t.id)
 WHERE t.fetched = false ORDER BY t.trans_id, s.id LIMIT 100;

If the above query works in the way you want, then you can also do the
update using the same.

with regards,
S.Gnanavel





-Original Message-
From: [EMAIL PROTECTED]
Sent: Tue, 28 Jun 2005 14:37:34 +1200
To: pgsql-performance@postgresql.org
Subject: [PERFORM] How can I speed up this function?

We have the following function in our home grown mirroring package, but
it isn't running as fast as we would like. We need to select statements
from the pending_statement table, and we want to select all the
statements for a single transaction (pending_trans) in one go (that is,
we either select all the statements for a transaction, or none of them).
We select as many blocks of statements as it takes to top the 100
statement limit (so if the last transaction we pull has enough
statements to put our count at 110, we'll still take it, but then we're
done).

Here is our function:

CREATE OR REPLACE FUNCTION dbmirror.get_pending()
  RETURNS SETOF dbmirror.pending_statement AS
$BODY$
DECLARE
    count INT4;
    transaction RECORD;
    statement dbmirror.pending_statement;
BEGIN
    count := 0;

    FOR transaction IN SELECT t.trans_id as ID
        FROM pending_trans AS t WHERE fetched = false
        ORDER BY trans_id LIMIT 50
    LOOP
        UPDATE pending_trans SET fetched = true WHERE trans_id = transaction.id;

        FOR statement IN SELECT s.id, s.transaction_id, s.table_name, s.op, s.data
            FROM dbmirror.pending_statement AS s
            WHERE s.transaction_id = transaction.id
            ORDER BY s.id ASC
        LOOP
            count := count + 1;
            RETURN NEXT statement;
        END LOOP;

        IF count >= 100 THEN
            EXIT;
        END IF;
    END LOOP;

    RETURN;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

Table Schemas:

CREATE TABLE dbmirror.pending_trans
(
 trans_id oid NOT NULL,
 fetched bool DEFAULT false,
 CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)
)
WITHOUT OIDS;

CREATE TABLE dbmirror.pending_statement
(
 id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),
 transaction_id oid NOT NULL,
 table_name text NOT NULL,
 op char NOT NULL,
 data text NOT NULL,
 CONSTRAINT pending_statement_pkey PRIMARY KEY (id)
)
WITHOUT OIDS;

CREATE UNIQUE INDEX idx_stmt_tran_id_id
 ON dbmirror.pending_statement
 USING btree
 (transaction_id, id);

Postgres 8.0.1 on Linux.

Any Help would be greatly appreciated.

Regards

--
David Mitchell
Software Engineer
Telogis







--
David Mitchell
Software Engineer
Telogis



Re: [PERFORM] How can I speed up this function?

2005-06-28 Thread Gnanavel Shanmugam
But in the function you are exiting the loop when the count hits 100. If you
do not want to limit the statements then remove the limit clause from the
query I've written.

with regards,
S.Gnanavel


 -Original Message-
 From: [EMAIL PROTECTED]
 Sent: Tue, 28 Jun 2005 16:29:32 +1200
 To: [EMAIL PROTECTED]
 Subject: Re: [PERFORM] How can I speed up this function?

 Hi Gnanavel,

 Thanks, but that will only return at most 100 statements. If there is a
 transaction with 110 statements then this will not return all the
 statements for that transaction. We need to make sure that the function
 returns all the statements for a transaction.

 Cheers

 David

 Gnanavel Shanmugam wrote:
  Merge the two select statements like this and try,
 
  SELECT t.trans_id as ID,s.id, s.transaction_id, s.table_name, s.op,
 s.data
 FROM pending_trans AS t join dbmirror.pending_statement AS s
 on (s.transaction_id=t.id)
  WHERE t.fetched = false order by t.trans_id,s.id limit 100;
 
   If the above query works in the way you want, then you can also do the
  update
  using the same.
 
  with regards,
  S.Gnanavel
 
 
 
 -Original Message-
 From: [EMAIL PROTECTED]
 Sent: Tue, 28 Jun 2005 14:37:34 +1200
 To: pgsql-performance@postgresql.org
 Subject: [PERFORM] How can I speed up this function?
 
 We have the following function in our home grown mirroring package, but
 it isn't running as fast as we would like. We need to select statements
 from the pending_statement table, and we want to select all the
 statements for a single transaction (pending_trans) in one go (that is,
 we either select all the statements for a transaction, or none of
 them).
 We select as many blocks of statements as it takes to top the 100
 statement limit (so if the last transaction we pull has enough
 statements to put our count at 110, we'll still take it, but then we're
 done).
 
 Here is our function:
 
 CREATE OR REPLACE FUNCTION dbmirror.get_pending()
RETURNS SETOF dbmirror.pending_statement AS
 $BODY$
 
 DECLARE
  count INT4;
  transaction RECORD;
  statement dbmirror.pending_statement;
  BEGIN
  count := 0;
 
  FOR transaction IN SELECT t.trans_id as ID
  FROM pending_trans AS t WHERE fetched = false
  ORDER BY trans_id LIMIT 50
  LOOP
  update pending_trans set fetched =  true where trans_id =
 transaction.id;
 
 FOR statement IN SELECT s.id, s.transaction_id, s.table_name,
 s.op,
 s.data
  FROM dbmirror.pending_statement AS s
  WHERE s.transaction_id = transaction.id
  ORDER BY s.id ASC
  LOOP
  count := count + 1;
 
  RETURN NEXT statement;
  END LOOP;
 
   IF count >= 100 THEN
  EXIT;
  END IF;
  END LOOP;
 
  RETURN;
  END;$BODY$
LANGUAGE 'plpgsql' VOLATILE;
 
 Table Schemas:
 
 CREATE TABLE dbmirror.pending_trans
 (
trans_id oid NOT NULL,
fetched bool DEFAULT false,
CONSTRAINT pending_trans_pkey PRIMARY KEY (trans_id)
 )
 WITHOUT OIDS;
 
 CREATE TABLE dbmirror.pending_statement
 (
id oid NOT NULL DEFAULT nextval('dbmirror.statement_id_seq'::text),
transaction_id oid NOT NULL,
table_name text NOT NULL,
op char NOT NULL,
data text NOT NULL,
CONSTRAINT pending_statement_pkey PRIMARY KEY (id)
 )
 WITHOUT OIDS;
 
 CREATE UNIQUE INDEX idx_stmt_tran_id_id
ON dbmirror.pending_statement
USING btree
(transaction_id, id);
 
 Postgres 8.0.1 on Linux.
 
 Any Help would be greatly appreciated.
 
 Regards
 
 --
 David Mitchell
 Software Engineer
 Telogis
 


 --
 David Mitchell
 Software Engineer
 Telogis


Re: [PERFORM] Too slow querying a table of 15 million records

2005-06-28 Thread John A Meinel

Tobias Brox wrote:


[EMAIL PROTECTED] - Tue at 08:33:58PM +0200]

 I use FreeBSD 4.11 with PostGreSQL 7.3.8.

 (...)

 database=> explain select date_trunc('hour', time),count(*) as total from
 test where p1=53 and time > now() - interval '24 hours' group by
 date_trunc order by date_trunc ;

I haven't looked through all your email yet, but this phenomenon has come up
on the list a couple of times.  Try replacing now() - interval '24 hours'
with a fixed time stamp, and see if it helps.

pg7 will plan the query without knowledge of what now() - interval '24
hours' will compute to.  This should be fixed in pg8.




The grandparent was a mailing list double send. Notice the date is 1
week ago. It has already been answered (though your answer is still
correct).

John
=:-





Re: [PERFORM] Postgresql7.4.5 running slow on plpgsql function

2005-06-28 Thread Michael Fuhr
On Thu, Jun 23, 2005 at 05:56:52PM +0800, Chun Yit(Chronos) wrote:

 currently we have a function that is used together with a temp table; it
 calls a search-result function, and every time it is called it goes through
 some filters before returning a result.  Now we have a major problem: the
 first time the function executes it takes about 13 seconds, the second time
 about 17 seconds, and every further execution takes about 4 seconds longer
 than the previous one.  May I know what is going on here?  Since we use the
 function with a temp table, every statement related to the temp table uses
 the EXECUTE command.

Could you post the function?  Without knowing what the code is doing
it's impossible to say what's happening.  Is the temporary table
growing on each function call?  Does the function delete records
from the table on each call, leaving a lot of dead tuples?
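
If it does turn out that dead tuples are piling up, one workaround sketch (the
temp table and its columns here are invented for illustration) is to recreate
the table on each call instead of DELETEing from it:

    -- hypothetical cleanup at the start of each call (skip the DROP the very
    -- first time, when the table does not exist yet)
    EXECUTE 'DROP TABLE temp_results';
    EXECUTE 'CREATE TEMP TABLE temp_results (item_id integer, score real)';

Since anything touching the temp table already has to go through EXECUTE, this
costs little extra and leaves no dead rows behind.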

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [PERFORM] Insert performance vs Table size

2005-06-28 Thread Praveen Raja
I assume you took size to mean the row size? What I really meant was
does the number of rows a table has affect the performance of new
inserts into the table (just INSERTs) all other things remaining
constant. Sorry for the confusion.

I know that having indexes on the table adds an overhead but again does
this overhead increase (for an INSERT operation) with the number of rows
the table contains?

My instinct says no to both. If I'm wrong can someone explain why the
number of rows in a table affects INSERT performance?

Thanks again

-Original Message-
From: Jacques Caron [mailto:[EMAIL PROTECTED] 
Sent: 27 June 2005 14:05
To: Praveen Raja
Cc: pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Insert performance vs Table size

Hi,

At 13:50 27/06/2005, Praveen Raja wrote:
Just to clear things up a bit, the scenario that I'm interested in is a
table with a large number of indexes on it (maybe 7-8).

If you're after performance you'll want to carefully consider which indexes
are really useful and/or redesign your schema so that you can have fewer
indexes on that table. 7 or 8 indexes is quite a lot, and that really has a
cost.

  In this scenario
 other than the overhead of having to maintain the indexes (which I'm
 guessing is the same regardless of the size of the table)

Definitely not: indexes grow with the size of the table. Depending on what
columns you index (and their types), the indexes may be a fraction of the
size of the table, or they may be very close in size (in extreme cases they
may even be larger). With 7 or 8 indexes, that can be quite a large volume
of data to manipulate, especially if the values of the columns inserted can
span the whole range of the index (rather than being solely id- or
time-based, for instance, in which case index updates are concentrated in a
small area of each of the indexes), as this means you'll need to have a
majority of the indexes in RAM if you want to maintain decent performance.

 does the size of the table play a role in determining insert performance
 (and I mean only insert performance)?

In this case, it's really the indexes that'll cause you trouble, though
heavily fragmented tables (due to lots of deletes or updates) will also
incur a penalty just for the data part of the inserts.

Also, don't forget the usual hints if you are going to do lots of inserts:
- batch them in large transactions, don't do them one at a time
- better yet, use COPY rather than INSERT
- in some situations, you might be better off dropping the indexes, doing
  large batch inserts, then re-creating the indexes. YMMV depending on the
  existing/new ratio, whether you need to maintain indexed access to the
  tables, etc.
- pay attention to foreign keys

Jacques.





Re: [PERFORM] Insert performance vs Table size

2005-06-28 Thread Jacques Caron

Hi,

At 11:50 28/06/2005, Praveen Raja wrote:

I assume you took size to mean the row size?


Nope, the size of the table.


 What I really meant was
does the number of rows a table has affect the performance of new
inserts into the table (just INSERTs) all other things remaining
constant. Sorry for the confusion.


As I said previously, in most cases it does. One of the few cases where it 
doesn't would be an append-only table, no holes, no indexes, no foreign keys...



I know that having indexes on the table adds an overhead but again does
this overhead increase (for an INSERT operation) with the number of rows
the table contains?


It depends on what you are indexing. If the index key is something that 
grows monotonically (e.g. a unique ID or a timestamp), then the size of the 
table (and hence of the indexes) should have a very limited influence on 
the INSERTs. If the index key is anything else (and that must definitely be 
the case if you have 7 or 8 indexes!), then that means updates will happen 
all over the indexes, which means a lot of read and write activity, and 
once the total size of your indexes exceeds what can be cached in RAM, 
performance will decrease quite a bit. Of course if your keys are 
concentrated in a few limited areas of the key ranges it might help.
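
As a small illustration of the two patterns (table and index names invented):

CREATE TABLE events (
    id  serial PRIMARY KEY,  -- monotonic key: inserts land on the right-hand
                             -- edge of this index, which stays cached
    ref text NOT NULL        -- arbitrary values: inserts land all over
);                           -- events_ref_idx, so most of it must stay in RAM
CREATE INDEX events_ref_idx ON events (ref);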



My instinct says no to both. If I'm wrong can someone explain why the
number of rows in a table affects INSERT performance?


As described above, maintaining indexes when you hit anywhere in said 
indexes is very costly. The larger the table, the larger the indexes, the 
higher the number of levels in the trees, etc. As long as it fits in RAM, 
it shouldn't be a problem. Once you exceed that threshold, you start 
getting a lot of random I/O, and that's expensive.


Again, it depends a lot on your exact schema, the nature of the data, the 
spread of the different values, etc, but I would believe it's more often 
the case than not.


Jacques.





Re: [PERFORM] Too slow querying a table of 15 million records

2005-06-28 Thread Christopher Kings-Lynne

database=> explain select date_trunc('hour', time),count(*) as total from
test where p1=53 and time > now() - interval '24 hours' group by
date_trunc order by date_trunc ;


Try going:

time > '2005-06-28 15:34:00'

ie. put in the time 24 hours ago as a literal constant.
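
In full, the rewritten query would look something like this (the literal is
whatever "now minus 24 hours" works out to at the moment the application
builds the query):

explain select date_trunc('hour', time), count(*) as total
  from test
 where p1 = 53
   and time > '2005-06-28 15:34:00'
 group by date_trunc('hour', time)
 order by 1;

With a constant there, the 7.3 planner can estimate how selective the time
condition is and will at least consider an index on time (or a multicolumn
index on (p1, time), as suggested elsewhere in this thread).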

Chris






[PERFORM] tricky query

2005-06-28 Thread Merlin Moncure
I need a fast way (sql only preferred) to solve the following problem:

I need the smallest integer that is greater than zero that is not in the
column of a table.  In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.

I've already worked out a query using generate_series (not scalable) and
pl/pgsql.  An SQL only solution would be preferred, am I missing
something obvious?

Merlin



Re: [PERFORM] tricky query

2005-06-28 Thread Bruno Wolff III
On Tue, Jun 28, 2005 at 10:21:16 -0400,
  Merlin Moncure [EMAIL PROTECTED] wrote:
 I need a fast way (sql only preferred) to solve the following problem:
 
 I need the smallest integer that is greater than zero that is not in the
 column of a table.  In other words, if an 'id' column has values
 1,2,3,4,6 and 7, I need a query that returns the value of 5.
 
 I've already worked out a query using generate_series (not scalable) and
 pl/pgsql.  An SQL only solution would be preferred, am I missing
 something obvious?

I would expect that using generate series from the 1 to the max (using
order by and limit 1 to avoid extra sequential scans) and subtracting
out the current list using except and then taking the minium value
would be the best way to do this if the list is pretty dense and
you don't want to change the structure.
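
A sketch of that approach, assuming the column is an integer named id in a
table foo:

SELECT min(i) AS first_free
  FROM (SELECT generate_series(1, (SELECT max(id) FROM foo)) AS i
        EXCEPT
        SELECT id FROM foo) AS missing;

If there are no holes at all this returns NULL (the answer would then be
max(id)+1), and as noted it only pays off when the ids are fairly dense.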

If it is sparse than you can do a special check for 1 and if that
is present find the first row whose successor is not in the table.
That shouldn't be too slow.

If you are willing to change the structure you might keep one row for
each number and use a flag to mark which ones are empty. If there are
relatively few empty rows at any time, then you can create a partial
index on the row number for only empty rows.



Re: [PERFORM] tricky query

2005-06-28 Thread John A Meinel

Merlin Moncure wrote:


I need a fast way (sql only preferred) to solve the following problem:

I need the smallest integer that is greater than zero that is not in the
column of a table.  In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.

I've already worked out a query using generate_series (not scalable) and
pl/pgsql.  An SQL only solution would be preferred, am I missing
something obvious?

Merlin




Not so bad. Try something like this:

SELECT min(id+1) as id_new FROM table
   WHERE (id+1) NOT IN (SELECT id FROM table);

Now, this requires probably a sequential scan, but I'm not sure how you
can get around that.
Maybe if you got trickier and did some ordering and limits. The above
seems to give the right answer, though.

I don't know how big you want to scale to.

You might try something like:
SELECT id+1 as id_new FROM t
   WHERE (id+1) NOT IN (SELECT id FROM t)
   ORDER BY id LIMIT 1;

John
=:-





Réf. : [PERFORM] tricky query

2005-06-28 Thread bsimon
I would suggest something like this, don't know how fast it is ... :

SELECT (ID+1) as result FROM my_table
WHERE (ID+1) NOT IN (SELECT ID FROM my_table)
ORDER BY result asc limit 1;





Merlin Moncure [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
28/06/2005 16:21

To:      pgsql-performance@postgresql.org
cc:
Subject: [PERFORM] tricky query


I need a fast way (sql only preferred) to solve the following problem:

I need the smallest integer that is greater than zero that is not in the
column of a table.  In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.

I've already worked out a query using generate_series (not scalable) and
pl/pgsql.  An SQL only solution would be preferred, am I missing
something obvious?

Merlin



Re: [PERFORM] Insert performance vs Table size

2005-06-28 Thread Tom Lane
Praveen Raja [EMAIL PROTECTED] writes:
 I know that having indexes on the table adds an overhead but again does
 this overhead increase (for an INSERT operation) with the number of rows
 the table contains?

Typical index implementations (such as b-tree) have roughly O(log N)
cost to insert or lookup a key in an N-entry index.  So yes, it grows,
though slowly.
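
To put rough numbers on that: a b-tree descent costs on the order of log2(N)
comparisons, so growing a table from 10^6 to 10^9 rows moves the per-insert
index work from about log2(10^6) ~ 20 to log2(10^9) ~ 30 comparisons per
index; a thousand times more rows for roughly 1.5x the index-maintenance
cost, provided the upper index pages stay cached.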

regards, tom lane



Re: [PERFORM] tricky query

2005-06-28 Thread John A Meinel

Merlin Moncure wrote:


Not so bad. Try something like this:

SELECT min(id+1) as id_new FROM table
   WHERE (id+1) NOT IN (SELECT id FROM table);

Now, this requires probably a sequential scan, but I'm not sure how you
can get around that.
Maybe if you got trickier and did some ordering and limits. The above
seems to give the right answer, though.




it does, but it is still faster than generate_series(), which requires
both a seqscan and a materialization of the function.




I don't know how big you want to scale to.




big. :)

merlin




See my follow up post, which enables an index scan. On my system with
90k rows, it takes no apparent time.
(0.000ms)
John
=:-





Re: [PERFORM] tricky query

2005-06-28 Thread Sam Mason
Merlin Moncure wrote:
I've already worked out a query using generate_series (not scalable) and
pl/pgsql.  An SQL only solution would be preferred, am I missing
something obvious?

I would be tempted to join the table to itself like:

  SELECT id+1
  FROM foo
  WHERE id > 0
    AND id NOT IN (SELECT id-1 FROM foo)
  LIMIT 1;

Seems to work for me.  Not sure if that's good enough for you, but
it may help.

  Sam



Re: [PERFORM] tricky query

2005-06-28 Thread John A Meinel

John A Meinel wrote:


Merlin Moncure wrote:


I need a fast way (sql only preferred) to solve the following problem:

I need the smallest integer that is greater than zero that is not in the
column of a table.  In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.

I've already worked out a query using generate_series (not scalable) and
pl/pgsql.  An SQL only solution would be preferred, am I missing
something obvious?

Merlin




Not so bad. Try something like this:

SELECT min(id+1) as id_new FROM table
   WHERE (id+1) NOT IN (SELECT id FROM table);

Now, this requires probably a sequential scan, but I'm not sure how you
can get around that.
Maybe if you got trickier and did some ordering and limits. The above
seems to give the right answer, though.

I don't know how big you want to scale to.

You might try something like:
SELECT id+1 as id_new FROM t
   WHERE (id+1) NOT IN (SELECT id FROM t)
   ORDER BY id LIMIT 1;

John
=:-


Well, I was able to improve it to using appropriate index scans.
Here is the query:

SELECT t1.id+1 as id_new FROM id_test t1
   WHERE NOT EXISTS
   (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
   ORDER BY t1.id LIMIT 1;

I created a test table which has 90k randomly inserted rows. And this is
what EXPLAIN ANALYZE says:

                                  QUERY PLAN
------------------------------------------------------------------------------
 Limit  (cost=0.00..12.10 rows=1 width=4) (actual time=0.000..0.000 rows=1
        loops=1)
   ->  Index Scan using id_test_pkey on id_test t1  (cost=0.00..544423.27
       rows=45000 width=4) (actual time=0.000..0.000 rows=1 loops=1)
         Filter: (NOT (subplan))
         SubPlan
           ->  Index Scan using id_test_pkey on id_test t2  (cost=0.00..6.01
               rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=15)
                 Index Cond: (id = ($0 + 1))
 Total runtime: 0.000 ms
(7 rows)

The only thing I have is a primary key index on id_test(id);

John
=:-





Re: [PERFORM] tricky query

2005-06-28 Thread Merlin Moncure
 Not so bad. Try something like this:
 
 SELECT min(id+1) as id_new FROM table
 WHERE (id+1) NOT IN (SELECT id FROM table);
 
 Now, this requires probably a sequential scan, but I'm not sure how
you
 can get around that.
 Maybe if you got trickier and did some ordering and limits. The above
 seems to give the right answer, though.

it does, but it is still faster than generate_series(), which requires
both a seqscan and a materialization of the function.
 
 I don't know how big you want to scale to.

big. :)

merlin



Re: [PERFORM] tricky query

2005-06-28 Thread Merlin Moncure
John Meinel wrote:
 See my follow up post, which enables an index scan. On my system with
 90k rows, it takes no apparent time.
 (0.000ms)
 John
 =:-

Confirmed.  Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago with no response and
had given up on it.  I think your solution (smallest X>1 not in X) is a
good candidate for general bits, so I'm passing this to varlena for
review :)

SELECT t1.id+1 as id_new FROM id_test t1
WHERE NOT EXISTS
(SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
ORDER BY t1.id LIMIT 1;

Merlin



Re: [PERFORM] tricky query

2005-06-28 Thread Merlin Moncure
 Merlin Moncure wrote:
 
  I need a fast way (sql only preferred) to solve the following problem:
  I need the smallest integer that is greater than zero that is not in the
  column of a table.
 
  I've already worked out a query using generate_series (not scalable) and
  pl/pgsql.  An SQL only solution would be preferred, am I missing
  something obvious?
 
 Probably not, but I thought about this brute-force approach... :-)
 This should work well provided that:
 
 - you have a finite number of integers. Your column should have a biggest
   integer value with a reasonable maximum like 100,000 or 1,000,000.
   #define YOUR_MAX 9
 [...]
:-) generate_series function does the same thing only a little bit
faster (although less portable).

generate_series(m,n) returns set of integers from m to n with time
complexity n - m.  I use it for cases where I need to increment for
something, for example:

select now()::date + d from generate_series(0,355) as d;

returns days from today until 355 days from now.

Merlin



[PERFORM] read block size

2005-06-28 Thread Michael Stone

Is it possible to tweak the size of a block that postgres tries to read
when doing a sequential scan? It looks like it reads in fairly small
blocks, and I'd expect a fairly significant boost in i/o performance
when doing a large (multi-gig) sequential scan if larger blocks were
used.

Mike Stone



Re: [PERFORM] tricky query

2005-06-28 Thread Cosimo Streppone

Merlin Moncure wrote:


I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table.

I've already worked out a query using generate_series (not scalable) and
pl/pgsql.  An SQL only solution would be preferred, am I missing
something obvious?


Probably not, but I thought about this brute-force approach... :-)
This should work well provided that:

- you have a finite number of integers. Your column should have a biggest
  integer value with a reasonable maximum like 100,000 or 1,000,000.
  #define YOUR_MAX 9

- you can accept that query execution time depends on smallest integer found.
  The bigger the found integer, the slower execution you get.

Ok, so:

Create a relation integers (or whatever) with every single integer from 1 to 
YOUR_MAX:


  CREATE TABLE integers (id integer primary key);
  INSERT INTO integers (id) VALUES (1);
  INSERT INTO integers (id) VALUES (2);
  ...
  INSERT INTO integers (id) VALUES (YOUR_MAX);

Create your relation:

  CREATE TABLE merlin (id integer primary key);
  and fill it with values

Query is simple now:

  SELECT a.id FROM integers a
LEFT JOIN merlin b ON a.id=b.id
WHERE b.id IS NULL
 ORDER BY a.id LIMIT 1;

Execution times with 100k tuples in integers and
99,999 tuples in merlin:

  \timing
  Timing is on.
  select i.id from integers i left join merlin s on i.id=s.id where s.id is 
null order by i.id limit 1;

   9

  Time: 233.618 ms
  insert into merlin (id) values (9);
  INSERT 86266614 1
  Time: 0.579 ms
  delete from merlin where id=241;
  DELETE 1
  Time: 0.726 ms
  select i.id from integers i left join merlin s on i.id=s.id where s.id is 
null order by i.id limit 1;

   241

  Time: 1.336 ms
  

--
Cosimo




Re: [PERFORM] Too slow querying a table of 15 million records

2005-06-28 Thread PFC




database=> explain select date_trunc('hour', time),count(*) as total from
test where p1=53 and time > now() - interval '24 hours' group by
date_trunc order by date_trunc ;


	1. Use CURRENT_TIMESTAMP (which is considered a constant by the planner)  
instead of now()
	2. Create a multicolumn index on (p1,time) or (time,p1) whichever works  
better




Re: [PERFORM] read block size

2005-06-28 Thread Michael Stone

On Tue, Jun 28, 2005 at 12:02:55PM -0500, John A Meinel wrote:

There has been discussion about changing the reading/writing code to be
able to handle multiple pages at once, (using something like vread())
but I don't know that it has been implemented.


that sounds promising


Also, this would hurt cases where you can terminate a sequential scan
early.


If you're doing a sequential scan of a 10G file in, say, 1M blocks I
don't think the performance difference of reading a couple of blocks
unnecessarily is going to matter.


And if the OS is doing its job right, it will already do some
read-ahead for you.


The app should have a much better idea of whether it's doing a
sequential scan and won't be confused by concurrent activity. Even if
the OS does readahead perfectly, you'll still get a win with larger
blocks by cutting down on the syscalls.

Mike Stone




Re: [PERFORM] tricky query

2005-06-28 Thread John A Meinel

Merlin Moncure wrote:


John Meinel wrote:



See my follow up post, which enables an index scan. On my system with
90k rows, it takes no apparent time.
(0.000ms)
John
=:-




Confirmed.  Hats off to you, the above some really wicked querying.
IIRC I posted the same question several months ago with no response and
had given up on it.  I think your solution (smallest X1 not in X) is a
good candidate for general bits, so I'm passing this to varlena for
review :)

SELECT t1.id+1 as id_new FROM id_test t1
   WHERE NOT EXISTS
   (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
   ORDER BY t1.id LIMIT 1;

Merlin



Just be aware that as your table fills its holes, this query gets
slower and slower.
I've been doing some testing. And it starts at 0.00 when the first entry
is something like 3, but when you start getting to 16k it starts taking
more like 200 ms.

So it kind of depends how your table fills (and empties I suppose).

The earlier query was slower overall (since it took 460ms to read in the
whole table).
I filled up the table such that 63713 is the first empty space, and it
takes 969ms to run.
So actually if your table is mostly full, the first form is better.

But if you are going to have 100k rows, with basically random
distribution of empties, then the NOT EXISTS works quite well.

Just be aware of the tradeoff. I'm pretty sure the WHERE NOT EXISTS will
always use a looping structure, and go through the index in order.

John
=:-





Re: [PERFORM] Too slow querying a table of 15 million records

2005-06-28 Thread Tom Lane
PFC [EMAIL PROTECTED] writes:
   1. Use CURRENT_TIMESTAMP (which is considered a constant by the 
 planner)  
 instead of now()

Oh?

regards, tom lane



Re: [PERFORM] tricky query

2005-06-28 Thread Sam Mason
John A Meinel wrote:
SELECT t1.id+1 as id_new FROM id_test t1
   WHERE NOT EXISTS
   (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
   ORDER BY t1.id LIMIT 1;

This works well on sparse data, as it only requires as many index
access as it takes to find the first gap.   The simpler NOT IN
version that everybody seems to have posted the first time round
has a reasonably constant (based on the number of rows, not gap
position) startup time but the actual time spent searching for the
gap is much lower.

I guess the version you use depends on how sparse you expect the
data to be.  If you expect your query to have to search through
more than half the table before finding the gap then you're better
off using the NOT IN version, otherwise the NOT EXISTS version
is faster -- on my system anyway.

Hope that's interesting!


  Sam



Re: [PERFORM] perl garbage collector

2005-06-28 Thread Jean-Max Reymond
2005/6/28, Tom Lane [EMAIL PROTECTED]:
 Jean-Max Reymond [EMAIL PROTECTED] writes:
  I have a stored procedure written in perl and I doubt that perl's
  garbage collector is working :-(
  after a lot of work, postmaster has a size of 1100 Mb and  I think
  that the keyword undef has no effects.
 
 Check the PG list archives --- there's been previous discussion of
 similar issues.  I think we concluded that when Perl is built to use
 its own private memory allocator, the results of that competing with
 malloc are not very pretty :-(.  You end up with a fragmented memory
 map and no chance to give anything back to the OS.

thanks Tom for your advice. I have read the discussion but a small
test is very confusing for me.
Consider this function:

CREATE FUNCTION jmax() RETURNS integer
AS $_$use strict;

my $i=0;
for ($i=0; $i < 100000; $i++) {
    my $ch  = "0123456789" x 100000;
    my $res = spi_exec_query("select * from xdb_child where doc_id=100 and ele_id=3");
}
my $j=1;$_$
LANGUAGE plperlu SECURITY DEFINER;


ALTER FUNCTION public.jmax() OWNER TO postgres;

the line my $ch = "0123456789" x 100000;   is used to allocate 1Mb.
the line my $res = spi_exec_query("select * from xdb_child where
doc_id=100 and ele_id=3 limit 5"); simulates a query.

Without spi_exec_query, the memory used by postmaster stays constant,
so I think that pl/perl manages memory correctly in this case.
With spi_exec_query, postmaster grows and grows until the end of the loop.
So, it seems that spi_exec_query does not release all the memory after
each call.
For my application (in real life), after millions of spi_exec_query calls, it
grows up to 1Gb :-(



-- 
Jean-Max Reymond
CKR Solutions Open Source
Nice France
http://www.ckr-solutions.com



Re: [PERFORM] tricky query

2005-06-28 Thread John A Meinel

Merlin Moncure wrote:


On Tue, Jun 28, 2005 at 12:02:09 -0400,
 Merlin Moncure [EMAIL PROTECTED] wrote:

Confirmed.  Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago with no response and
had given up on it.  I think your solution (smallest X>1 not in X) is a
good candidate for general bits, so I'm passing this to varlena for
review :)

SELECT t1.id+1 as id_new FROM id_test t1
   WHERE NOT EXISTS
   (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
   ORDER BY t1.id LIMIT 1;


You need to rework this to check to see if row '1' is missing. The
above returns the start of the first gap after the first row that
isn't missing.




Correct.

In fact, I left out a detail in my original request in that I had a
starting value (easily supplied with where clause)...so what I was
really looking for was a query which started at a supplied value and
looped forwards looking for an empty slot.  John's supplied query is a
drop in replacement for a plpgsql routine which does exactly this.

The main problem with the generate_series approach is that there is no
convenient way to determine a supplied upper bound.  Also, in some
corner cases of my problem domain the performance was not good.

Merlin



Actually, if you already have a lower bound, then you can change it to:

SELECT t1.id+1 as id_new FROM id_test t1
   WHERE t1.id > id_min
AND NOT EXISTS
   (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
   ORDER BY t1.id LIMIT 1;

This would actually really help performance if you have a large table
and then empty entries start late.

On my system, where the first entry is 64k, doing where id > 60000
speeds it up back to 80ms instead of 1000ms.
John
=:-





Re: [PERFORM] tricky query

2005-06-28 Thread Cosimo Streppone

John A Meinel wrote:

John A Meinel wrote:

Merlin Moncure wrote:


I need the smallest integer that is greater than zero that is not in the
column of a table.  In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.



 [...]


Well, I was able to improve it to using appropriate index scans.
Here is the query:

SELECT t1.id+1 as id_new FROM id_test t1
   WHERE NOT EXISTS
   (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
   ORDER BY t1.id LIMIT 1;


I'm very interested in this tricky query.
Sorry John, but if I populate the `id_test' relation
with only 4 tuples with id values (10, 11, 12, 13),
the result of this query is:

  cosimo= create table id_test (id integer primary key);
  NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 'id_test_pkey' 
for table 'id_test'

  CREATE TABLE
  cosimo= insert into id_test values (10); -- and 11, 12, 13, 14
  INSERT 7457570 1
  INSERT 7457571 1
  INSERT 7457572 1
  INSERT 7457573 1
  INSERT 7457574 1
  cosimo= SELECT t1.id+1 as id_new FROM id_test t1 WHERE NOT EXISTS (SELECT 
t2.id FROM id_test t2 WHERE t2.id = t1.id+1) ORDER BY t1.id LIMIT 1;

   id_new
  --------
       15
  (1 row)

which if I understand correctly, is the wrong answer to the problem.
At this point, I'm starting to think I need some sleep... :-)

--
Cosimo




Re: [PERFORM] tricky query

2005-06-28 Thread Merlin Moncure
 On Tue, Jun 28, 2005 at 12:02:09 -0400,
   Merlin Moncure [EMAIL PROTECTED] wrote:
 
  Confirmed.  Hats off to you, the above is some really wicked querying.
  IIRC I posted the same question several months ago with no response and
  had given up on it.  I think your solution (smallest X>1 not in X) is a
  good candidate for general bits, so I'm passing this to varlena for
  review :)
 
  SELECT t1.id+1 as id_new FROM id_test t1
  WHERE NOT EXISTS
  (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
  ORDER BY t1.id LIMIT 1;
 
 You need to rework this to check to see if row '1' is missing. The
 above returns the start of the first gap after the first row that
 isn't missing.

Correct.  

In fact, I left out a detail in my original request in that I had a
starting value (easily supplied with where clause)...so what I was
really looking for was a query which started at a supplied value and
looped forwards looking for an empty slot.  John's supplied query is a
drop in replacement for a plpgsql routine which does exactly this.

The main problem with the generate_series approach is that there is no
convenient way to determine a supplied upper bound.  Also, in some
corner cases of my problem domain the performance was not good.

Merlin





Re: [PERFORM] tricky query

2005-06-28 Thread Bruno Wolff III
On Tue, Jun 28, 2005 at 12:02:09 -0400,
  Merlin Moncure [EMAIL PROTECTED] wrote:
 
 Confirmed.  Hats off to you, the above is some really wicked querying.
 IIRC I posted the same question several months ago with no response and
 had given up on it.  I think your solution (smallest X>1 not in X) is a
 good candidate for general bits, so I'm passing this to varlena for
 review :)
 
 SELECT t1.id+1 as id_new FROM id_test t1
 WHERE NOT EXISTS
 (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
 ORDER BY t1.id LIMIT 1;

You need to rework this to check to see if row '1' is missing. The
above returns the start of the first gap after the first row that
isn't missing.
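
One way to cover that case, as a sketch against the same id_test table, is to
bolt an explicit check for 1 onto the front:

SELECT 1 AS id_new
 WHERE NOT EXISTS (SELECT 1 FROM id_test WHERE id = 1)
UNION ALL
SELECT t1.id + 1
  FROM id_test t1
 WHERE NOT EXISTS (SELECT 1 FROM id_test t2 WHERE t2.id = t1.id + 1)
ORDER BY id_new
LIMIT 1;

The ORDER BY and LIMIT apply to the whole UNION, so it still returns a single
smallest candidate, and the second branch keeps the index scan behaviour shown
earlier in the thread.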



Re: [PERFORM] tricky query

2005-06-28 Thread Merlin Moncure
Cosimo wrote:
 I'm very interested in this tricky query.
 Sorry John, but if I populate the `id_test' relation
 with only 4 tuples with id values (10, 11, 12, 13),
 the result of this query is:
 
cosimo= create table id_test (id integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
 'id_test_pkey'
 for table 'id_test'
CREATE TABLE
cosimo= insert into id_test values (10); -- and 11, 12, 13, 14
INSERT 7457570 1
INSERT 7457571 1
INSERT 7457572 1
INSERT 7457573 1
INSERT 7457574 1
cosimo= SELECT t1.id+1 as id_new FROM id_test t1 WHERE NOT EXISTS
 (SELECT
 t2.id FROM id_test t2 WHERE t2.id = t1.id+1) ORDER BY t1.id LIMIT 1;
 id_new
--------
     15
(1 row)
 
 which if I understand correctly, is the wrong answer to the problem.
 At this point, I'm starting to think I need some sleep... :-)

Correct, in that John's query returns the first empty slot above an
existing filled slot (correct behavior in my case).  You could flip
things around a bit to get around this, though.

Merlin



[PERFORM] optimized counting of web statistics

2005-06-28 Thread Billy extyeightysix
Hola folks,

I have a web statistics Pg database (user agent, urls, referrer, etc)
that is part of an online web survey system. All of the data derived
from analyzing web server logs is stored in one large table with each
record representing an analyzed webserver log entry.

Currently all reports are generated when the logs are being analyzed
and before the data ever goes into the large table I mention above.
Well, the time has come to build an interface that will allow a user
to make ad-hoc queries against the stats and that is why I am emailing
the performance list.

I need to allow the user to specify any fields and values in a query. 
For example,

"I want to see a report about all users from Germany that have flash
installed" or
"I want to see a report about all users from Africa that are using FireFox 1".

I do not believe that storing all of the data in one big table is the
correct way to go about this. So, I am asking for suggestions,
pointers and any kind of info that anyone can share on how to store
this data set in an optimized manner.

Also, I have created a prototype and done some testing using the
colossal table. Actually finding all of the rows that satisfy the
query is pretty fast and is not a problem.  The bottleneck in the
whole process is actually counting each data point (how many times a
url was visited, or how many times a url referred the user to the
website). So more specifically I am wondering if there is way to store
and retrieve the data such that it speeds up the counting of the
statistics.

Lastly, this will become an open source tool that is akin to urchin,
awstats, etc. The difference is that this software is part of a suite
of tools for doing online web surveys and it maps web stats to the
survey respondent data.  This can give web site managers a very clear
view of what type of people come to the site and how those types use
the site.

Thanks in advance,

exty



Re: [PERFORM] tricky query

2005-06-28 Thread Sebastian Hennebrueder
John A Meinel schrieb:

 John A Meinel wrote:



 Well, I was able to improve it to using appropriate index scans.
 Here is the query:

 SELECT t1.id+1 as id_new FROM id_test t1
WHERE NOT EXISTS
(SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
ORDER BY t1.id LIMIT 1;

 I created a test table which has 90k randomly inserted rows. And this is
 what EXPLAIN ANALYZE says:




As Cosimo stated the result can be wrong. The result is always wrong
when the id with value 1 does not exist.

-- 
Best Regards / Viele Grüße

Sebastian Hennebrueder



http://www.laliluna.de

Tutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB 

Get support, education and consulting for these technologies - uncomplicated 
and cheap.




Re: [PERFORM] optimized counting of web statistics

2005-06-28 Thread Billy extyeightysix
 The bottleneck in the
 whole process is actually counting each data point (how many times a
 url was visited, or how many times a url referred the user to the
 website). So more specifically I am wondering if there is way to store
 and retrieve the data such that it speeds up the counting of the
 statistics.

This is misleading; the counting is being done by perl.  So what is
happening is that I am locating all of the rows via a cursor and then
calculating the stats using perl hashes.  No counting is being done in the
DB.  Maybe it would be much faster to count in the db somehow?
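
For what it's worth, a sketch of what counting inside the database looks like
with GROUP BY (the table and column names here are invented for illustration):

SELECT url, count(*) AS hits
  FROM access_log_entries
 WHERE country = 'Germany'
 GROUP BY url
 ORDER BY hits DESC;

Pulling the raw rows through a cursor means every matching row crosses over to
perl; an aggregate like this hands back just one row per URL and lets the
backend do the hashing and sorting.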


exty



Re: [PERFORM] optimized counting of web statistics

2005-06-28 Thread Matthew Nuzum
On 6/28/05, Billy extyeightysix [EMAIL PROTECTED] wrote:
 Hola folks,
 
 I have a web statistics Pg database (user agent, urls, referrer, etc)
 that is part of an online web survey system. All of the data derived
 from analyzing web server logs is stored in one large table with each
 record representing an analyzed webserver log entry.
 
 Currently all reports are generated when the logs are being analyzed
 and before the data ever goes into the large table I mention above.
 Well, the time has come to build an interface that will allow a user
 to make ad-hoc queries against the stats and that is why I am emailing
 the performance list.

Load your data into a big table, then pre-process into additional
tables that have data better organized for running your reports.

For example, you may want a table that shows a sum of all hits for
each site, for each hour of the day. You may want an additional table
that shows the sum of all page views, or maybe sessions for each site
for each hour of the day.

So, if you manage a single site, each day you will add 24 new records
to the sum table.

You may want the following fields:
site (string)
atime (timestamptz)
hour_of_day (int)
day_of_week (int)
total_hits (int8)

A record may look like this:
site | atime | hour_of_day | day_of_week | total_hits
'www.yoursite.com'  '2005-06-28 16:00:00 -0400'  18  2  350

Index all of the fields except total_hits (unless you want a report
that shows all hours where hits were greater than x or less than x).

Doing:
select sum(total_hits) as total_hits from summary_table where atime
between (now() - '7 days'::interval) and now();
should be pretty fast.
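
A sketch of that summary table and its indexes, following the field list above
(types and index names are my guesses):

CREATE TABLE summary_table (
    site        text        NOT NULL,
    atime       timestamptz NOT NULL,
    hour_of_day int         NOT NULL,
    day_of_week int         NOT NULL,
    total_hits  int8        NOT NULL DEFAULT 0
);
CREATE INDEX summary_site_idx  ON summary_table (site);
CREATE INDEX summary_atime_idx ON summary_table (atime);
CREATE INDEX summary_hod_idx   ON summary_table (hour_of_day);
CREATE INDEX summary_dow_idx   ON summary_table (day_of_week);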

You can also normalize your data such as referrers, user agents, etc
and create similar tables to the above.

In case you haven't guessed, I've already done this very thing.

I do my batch processing daily using a python script I've written. I
found that trying to do it with pl/pgsql took more than 24 hours to
process 24 hours worth of logs. I then used C# and in memory hash
tables to drop the time to 2 hours, but I couldn't get mono installed
on some of my older servers. Python proved the fastest and I can
process 24 hours worth of logs in about 15 minutes. Common reports run
in < 1 sec and custom reports run in < 15 seconds (usually).
-- 
Matthew Nuzum
www.bearfruit.org



Re: [PERFORM] perl garbage collector

2005-06-28 Thread Jean-Max Reymond
2005/6/28, Jean-Max Reymond [EMAIL PROTECTED]:
 For my application (in real life) afer millions of spi_exec_query, it
 grows up to 1Gb :-(

OK, now in 2 lines:

CREATE FUNCTION jmax() RETURNS integer
AS $_$use strict;

for (my $i=0; $i < 1000; $i++) {
    spi_exec_query("select 'foo'");
}
my $j=1;$_$
LANGUAGE plperlu SECURITY DEFINER

Running this test, your postmaster eats a lot of memory;
it seems that there is a memory leak in spi_exec_query :-(


-- 
Jean-Max Reymond
CKR Solutions Open Source
Nice France
http://www.ckr-solutions.com



Re: [PERFORM] optimized counting of web statistics

2005-06-28 Thread Rudi Starcevic
Hi,

I do my batch processing daily using a python script I've written. I
found that trying to do it with pl/pgsql took more than 24 hours to
process 24 hours worth of logs. I then used C# and in memory hash
tables to drop the time to 2 hours, but I couldn't get mono installed
on some of my older servers. Python proved the fastest and I can
process 24 hours worth of logs in about 15 minutes. Common reports run
in < 1 sec and custom reports run in < 15 seconds (usually).
  


When you say you do your batch processing in a Python script do you mean
you are using 'plpython' inside PostgreSQL, or using Python to execute
select statements and crunch the data 'outside' PostgreSQL?

Your reply is very interesting.

Thanks.
Regards,
Rudi.




Re: [PERFORM] optimized counting of web statistics

2005-06-28 Thread Matthew Nuzum
On 6/29/05, Rudi Starcevic [EMAIL PROTECTED] wrote:
 Hi,
 
 I do my batch processing daily using a python script I've written. I
 found that trying to do it with pl/pgsql took more than 24 hours to
 process 24 hours worth of logs. I then used C# and in memory hash
 tables to drop the time to 2 hours, but I couldn't get mono installed
 on some of my older servers. Python proved the fastest and I can
 process 24 hours worth of logs in about 15 minutes. Common reports run
 in < 1 sec and custom reports run in < 15 seconds (usually).
 
 
 
 When you say you do your batch processing in a Python script do you mean
 a you are using 'plpython' inside
 PostgreSQL or using Python to execut select statements and crunch the
 data 'outside' PostgreSQL?
 
 Your reply is very interesting.

Sorry for not making that clear... I don't use plpython, I'm using an
external python program that makes database connections, creates
dictionaries and does the normalization/batch processing in memory. It
then saves the changes to a textfile which is copied using psql.

I've tried many things and while this is RAM intensive, it is by far
the fastest aproach I've found. I've also modified the python program
to optionally use disk based dictionaries based on (I think) gdb. This
signfincantly increases the time to closer to 25 min. ;-) but drops
the memory usage by an order of magnitude.

To be fair to C# and .Net, I think that python and C# can do it
equally fast, but between the time of creating the C# version and the
python version I learned some new optimization techniques. I feel that
both are powerful languages. (To be fair to python, I can write the
dictionary lookup code in 25% (aprox) fewer lines than similar hash
table code in C#. I could go on but I think I'm starting to get off
topic.)
-- 
Matthew Nuzum
www.bearfruit.org
