Re: [GENERAL] Temp tables

2013-05-23 Thread Steve Crawford

On 05/23/2013 12:24 AM, Pascal Tufenkji wrote:


Hi,

To enhance the performance of the queries, I use temp tables in my 
website reports.


It seems that a temp table has not been dropped automatically in a 
certain session, and now the report is giving an error since the temp 
table already exists?!


When I type the following, the database gives me an error:

dragon=# CREATE TEMP TABLE _parcours (id int);

ERROR:  type _parcours already exists

1. How can I identify the session in which the temp table is still 
locked, so I can drop it manually?


2. Why do these cases happen, and why don't the temp tables drop automatically?



Two questions:

1. Is that the exact message you are getting? I would expect *relation* 
"_parcours" already exists, not *type*.


2. Are you using persistent connections or connection pooling?

It feels a bit like there is something leftover from a previous process 
that was using the same connection.


Cheers,
Steve



Re: [GENERAL] Temp tables

2013-05-23 Thread Tom Lane
Steve Crawford scrawf...@pinpointresearch.com writes:
 On 05/23/2013 12:24 AM, Pascal Tufenkji wrote:
 When I type the following, the database gives me an error:
 dragon=# CREATE TEMP TABLE _parcours (id int);
 ERROR:  type _parcours already exists

Perhaps you have a type or temp table named "parcours"?  If so,
"_parcours" is the internal name of the associated array type.
You'll need to use a different name.

 It feels a bit like there is something leftover from a previous process 
 that was using the same connection.

We have seen a small number of reports where it seemed that some catalog
entry(s) associated with temp tables didn't get dropped when they should
have been.  No one's been able to create a reproducible case though.

regards, tom lane




Re: [GENERAL] temp tables not dropping at end of script

2011-06-13 Thread jonathansfl
We're having a similar situation, where FunctionA calls FunctionB inside a
cursor. FunctionB DROPs the temp table, then creates the temp table. FunctionA runs
through the cursor fine but breaks after the last loop, unable to DROP the
temporary table because it "is being used by active queries in this
session".

Those queries should have been closed after each loop!

Here is my ticket on this:
http://postgresql.1045698.n5.nabble.com/Can-t-drop-temp-table-in-subfunction-during-cursor-loop-being-used-by-active-queries-td4482806.html#a4484134

Here is another similar (and unanswered) ticket:
http://forums.enterprisedb.com/posts/list/849.page





Re: [GENERAL] temp tables not dropping at end of script

2011-04-06 Thread Rob Sargent


On 04/06/2011 08:12 AM, Davenport, Julie wrote:

Since we upgraded from postgres 8.0 to 8.4, every script where we have
queries that use temp tables now has to have an explicit drop of the
temp table at the end of the script, or it will blow up the next time it
runs, saying it cannot create the temp table because it already exists
(these are coldfusion 8 scripts running queries on postgres 8.4
database). When we get the error, if we try to drop the table at the
command line, it says the table does not exist, yet we cannot rerun the
script unless we stop and restart the database. This never happened with
pg 8.0, so the definition of “when a session ends” seems to have changed
(isn’t a temp table supposed to automatically disappear at the end of
the session)? Is there some easier or better way to clear these
temporary areas?

Thanks,

Julie

julie.davenp...@ctcd.edu


The connection running the script actually terminates?



Re: [GENERAL] temp tables not dropping at end of script

2011-04-06 Thread Andrew Sullivan
On Wed, Apr 06, 2011 at 09:12:55AM -0500, Davenport, Julie wrote:

 postgres 8.4 database).  When we get the error, if we try to drop
 the table at the command line, it says the table does not exist, yet
 we cannot rerun the script unless we stop and restart the database.

What if you stop your connection?  This sounds like under 8.0 you were
closing the connection (thereby ending a session), but that under 8.4
your connection isn't actually closing (so your session remains open,
so the temp table hangs around).

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] temp tables not dropping at end of script

2011-04-06 Thread Relyea, Mike
Since we upgraded from postgres 8.0 to 8.4, every script where we have queries 
that use temp tables now has to have an explicit drop of the temp table at the 
end of the script, or it will blow up the next time it runs, saying it cannot 
create the temp table because it already exists (these are coldfusion 8 scripts 
running queries on postgres 8.4 database).  When we get the error, if we try to 
drop the table at the command line, it says the table does not exist, yet we 
cannot rerun the script unless we stop and restart the database.  This never 
happened with pg 8.0, so the definition of when a session ends seems to have 
changed (isn't a temp table supposed to automatically disappear at the end of 
the session)?  Is there some easier or better way to clear these temporary 
areas?
Thanks,
Julie
julie.davenp...@ctcd.edu





I ran into a similar issue with our scripts.  I took the easy way out and 
before creating each temp table, I added a DROP TABLE IF EXISTS statement.  If 
the table doesn't exist, I get a warning but my script doesn't fail.
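
A minimal sketch of that pattern (the table name here is made up):

DROP TABLE IF EXISTS tmp_report;   -- only raises a notice when the table isn't there
CREATE TEMP TABLE tmp_report (id int, total numeric);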

Mike



Re: [GENERAL] temp tables not dropping at end of script

2011-04-06 Thread Andrew Sullivan
On Wed, Apr 06, 2011 at 10:47:55AM -0500, Davenport, Julie wrote:
 We've never explicitly closed the connection, it just seemed to close 
 automatically when the coldfusion script ended.  Not sure how I would even do 
 that from a script since this is run automatically, not from the command 
 line.  Other than putting a quit inside a cfquery tag?

Is it possible that the older driver closed automatically?

Anyway, you could set a savepoint, try to create the temp table, and
then rollback to savepoint if it doesn't work or else continue if it
does.
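
Roughly, that looks like the following (the table name is hypothetical; the
client decides, based on the error it gets back, whether to roll back):

BEGIN;
SAVEPOINT create_tmp;
CREATE TEMP TABLE tmp_report (id int);
-- if the CREATE fails with "already exists", issue
--   ROLLBACK TO SAVEPOINT create_tmp;
-- and carry on with the rest of the transaction, reusing the existing table
RELEASE SAVEPOINT create_tmp;
COMMIT;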

A


-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] temp tables not dropping at end of script

2011-04-06 Thread Davenport, Julie
On Wed, Apr 06, 2011 at 09:12:55AM -0500, Davenport, Julie wrote:

 postgres 8.4 database).  When we get the error, if we try to drop
 the table at the command line, it says the table does not exist, yet
 we cannot rerun the script unless we stop and restart the database.

What if you stop your connection?  This sounds like under 8.0 you were
closing the connection (thereby ending a session), but that under 8.4
your connection isn't actually closing (so your session remains open,
so the temp table hangs around).

A

--
Andrew Sullivan
ajs(at)crankycanuck(dot)ca

We've never explicitly closed the connection, it just seemed to close 
automatically when the coldfusion script ended.  Not sure how I would even do 
that from a script since this is run automatically, not from the command line.  
Other than putting a quit inside a cfquery tag?
Thanks,
Julie
julie.davenp...@ctcd.edu





Re: [GENERAL] temp tables not dropping at end of script

2011-04-06 Thread Scott Ribe
On Apr 6, 2011, at 9:47 AM, Davenport, Julie wrote:

 We’ve never explicitly closed the connection, it just seemed to close 
 automatically when the coldfusion script ended.

My guess is you've also upgraded coldfusion, or changed its config, and now 
it's caching connections.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] temp tables

2010-04-30 Thread A. Kretschmer
In response to Geoffrey:
 Do temp tables need to be explicitly dropped, or do they go away when the 
 process that created them leaves?

The latter one.
But explicitly deleting them isn't an error.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: see header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] temp tables, sessions, pgpool and disk

2009-06-22 Thread Ivan Sergio Borgonovo
On Mon, 22 Jun 2009 07:26:56 +0800
Craig Ringer cr...@postnewspapers.com.au wrote:

  http://www.postgresql.org/docs/8.3/interactive/sql-createtable.html
  If specified, the table is created as a temporary table.
  Temporary tables are automatically dropped at the end of a
  session
  
  I'd interpret it as a connection.
 
 Correctly.
 
  I don't even know if it is possible to send more than one command
  over a single connection and wait for the results asynchronously.
  Any clarification?
 
 To an extent cursors provide that ability. The result is returned
 quite promptly, but it's a placeholder that allows you to retrieve
 the real results progressively as you need them. Whether the
 database generates the results immediately and stores them to
 return later, or whether it generates them on demand, isn't
 something you can easily tell it's up to the database.

http://www.postgresql.org/docs/8.3/interactive/libpq-async.html
PQsendQuery cannot be called again (on the same connection) until
PQgetResult has returned a null pointer, indicating that the command
is done.

Asynchronous calls can't be made to parallelize postgres queries on
the same session, but just to parallelize client and server work.

So a temp table seems as private as I need it.

I wonder what will happen if I put something like pgpool between
postgresql and a web app.

  So postgresql actually issues writes to disk and delegate to the
  OS management of the cache/actual write on disk.

 Yes.

  I thought it could just try to hold them in RAM and still
  delegate to the OS to save them on disk in swap if the system is
  short on RAM.

 For a variety of reasons, you REALLY don't want it to work that
 way.

mmm... first sorry for the noise...
Interpret the following as reality checks.
I'm perfectly aware building up a DB is not easy, and I'm not
pretending I know how to write one. ;)

 OS memory managers tend to be _much_ better and faster at managing
 pages that're backed by a file. They'll write dirty data out
 pre-emptively so that execution doesn't stall when memory runs
 low; they write data to the file in order for best disk
 performance; they efficiently buffer and read-ahead when pulling
 the data back in, etc.

 The OS knows much less about what anonymous memory (memory not
 backed by a file) means to a program and can't be as clever with
 it. Swapping tends to be _much_ more CPU expensive than writing

But by issuing a write to disk PostgreSQL doesn't actually say anything
more about what it is placing on the disk and how it is going to
access it... and it is actually adding overhead to move it back and
forth, no matter whether this overhead happens in RAM or on disk.
Actually, since temp tables are private to the connection they should
(?) be private to a single postgresql process, so the OS should be able to
do a good job.
I don't see any atomicity constraint, so... if something fails while
writing to RAM, as you said, you shouldn't need a WAL.

 dirty buffers to a file. It's a lot more expensive to retrieve
 from disk, too, and usually involves lots of seeks for quite
 scattered pages instead of nice block readahead.

Once you're running out of memory I see no guarantee your file will
end up in a fast, easily accessible area of your disk... and you're
going to add the overhead associated with a file system
(journalling, permissions/ownership, locks). Swap is volatile... and
it should offer the guarantee you need for a temp table.

 The OS knows much better than PostgreSQL does when the table will
 fit in RAM and when it needs to spill to disk, and it's much
 better at managing that than Pg can ever be. It's great that Pg
 just uses the OS's hardware knowledge, system-wide awareness, and
 highly optimised memory manager + disk IO management to take care
 of the problem.

The same should be true for virtual memory, not just file management,
and postgresql has a configuration file that should give a clue to
the DB about the expected workload and hardware.
Surely postgresql can't forecast how many and how large the temp
tables for a single connection will be... but substantially I got
the idea that a connection is somehow serial in its execution and
that storage could be garbage collected or just released early
(drop table, on commit drop).
This looks as if it is taking temp tables very far from the standard.
And yeah... once you want to do memory management/resource management
inside SQL you've opened the doors of Hell.
But well...

From what I could see of SQL99 the definition of temp tables is very
terse... and a bit confusing (at least for me) about global and
local.
I gave a quick look at what's available in MS SQL... and they have
a sort of in-memory temp table but you can't modify its schema.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] temp tables, sessions, pgpool and disk

2009-06-22 Thread Martijn van Oosterhout
On Mon, Jun 22, 2009 at 09:53:59AM +0200, Ivan Sergio Borgonovo wrote:
  The OS knows much less about what anonymous memory (memory not
  backed by a file) means to a program and can't be as clever with
  it. Swapping tends to be _much_ more CPU expensive than writing
 
 But issuing a write to disk Postgresql doesn't actually say anything
 more about what it is placing on the disk and how it is going to
 access it... and it is actually adding overhead to move it back and
 forward, no matter if this overhead happens on RAM or disk.
 Actually since temp table are private to the connection they should
 (?) be private to a postgresql process, so the OS should be able to
 do a good job.
 I don't see any atomicity constraint, so... if something fail while
 writing to RAM, as you said you shouldn't need a WAL.

For the record, temp tables are in fact handled differently, in
particular they are not stored in the shared_buffers, but instead are
in backend local (private) buffers, whose size is controlled by
temp_buffers. They are indeed not WAL archived, nor written to disk
unless needed.

So yes, small temp tables will likely stay in memory, but large temp
tables may spill to disk. There's no flushing or syncing so quite
likely they'll end up in the OS disk cache for a while. Once the temp
table is deleted, the file is deleted and the OS throws that data away.
So temp tables most likely won't use any disk I/O, but they *can* if
the need arises.
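
As an aside, temp_buffers is a per-session setting and only takes effect if
it is changed before the session first touches a temp table; a rough
illustration (the size and table are arbitrary):

SET temp_buffers = '32MB';
CREATE TEMP TABLE scratch AS
  SELECT g AS id FROM generate_series(1, 100000) AS g;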

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 Please line up in a tree and maintain the heap invariant while 
 boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] temp tables, sessions, pgpool and disk

2009-06-22 Thread Ivan Sergio Borgonovo
On Mon, 22 Jun 2009 11:40:08 +0200
Martijn van Oosterhout klep...@svana.org wrote:

 On Mon, Jun 22, 2009 at 09:53:59AM +0200, Ivan Sergio Borgonovo
 wrote:
   The OS knows much less about what anonymous memory (memory not
   backed by a file) means to a program and can't be as clever
   with it. Swapping tends to be _much_ more CPU expensive than
   writing
  
  But issuing a write to disk Postgresql doesn't actually say
  anything more about what it is placing on the disk and how it is
  going to access it... and it is actually adding overhead to move
  it back and forward, no matter if this overhead happens on RAM
  or disk. Actually since temp table are private to the connection
  they should (?) be private to a postgresql process, so the OS
  should be able to do a good job.
  I don't see any atomicity constraint, so... if something fail
  while writing to RAM, as you said you shouldn't need a WAL.
 
 For the record, temp tables are in fact handled differently, in
 particular they are not stored in the shared_buffers, but instead
 are in backend local (private) buffers, whose size is controlled by
 temp_buffers. They are indeed not WAL archived, nor written to disk
 unless needed.

 So yes, small temp tables will likely stay in memory, but large
 temp tables may spill to disk. There's no flushing or syncing so
 quite likely they'll end up in the OS disk cache for a while. Once
 the temp table is deleted, the file is deleted and the OS throws
 that data away. So temp tables most likely won't use any disk I/O,
 but they *can* if the need arises.

Just to make it extra-clear to people unaware of pg internals...
since the second paragraph may seem to contradict the first one...

could "nor written to disk unless needed" be rephrased as:
even repeated UPDATE/INSERT won't issue writes (no matter whether they end
up on disk or not, it won't issue writes to the OS) if the table fits in
the buffer?

I see the default is somewhat large (8MB) and it is not pre-allocated.
Looks nice.

 Have a nice day,

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] temp tables and sequences in functions

2007-09-05 Thread A. Kretschmer
On Wed, 05.09.2007 at 06:58:30 -0700, Rob wrote the following:
 What is the proper way to deal with temp tables and sequences? Why aren't they
 being dropped after the function ends? Why do I get OID errors if I delete the
 temp table/sequence at the end of the function and then try to rerun the
 function?
 
 ERROR: could not open relation with OID 58341
 SQL state: XX000

Normal behavior. Read more about this problem here:
http://merlinmoncure.blogspot.com/2007/09/as-previously-stated-postgresql-8.html

Solution: use EXECUTE for DDL commands inside functions.
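
A minimal sketch of that workaround (all names invented), so that no cached
plan ends up pointing at a dropped table's OID:

CREATE OR REPLACE FUNCTION use_scratch() RETURNS bigint AS $$
DECLARE
  n bigint;
BEGIN
  EXECUTE 'DROP TABLE IF EXISTS scratch';
  EXECUTE 'CREATE TEMP TABLE scratch (id int)';
  EXECUTE 'INSERT INTO scratch SELECT generate_series(1, 10)';
  EXECUTE 'SELECT count(*) FROM scratch' INTO n;  -- EXECUTE ... INTO for dynamic SELECTs
  RETURN n;
END;
$$ LANGUAGE plpgsql;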


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: see header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] temp tables in functions?

2007-02-08 Thread Michael Fuhr
On Thu, Feb 08, 2007 at 12:29:37PM +0500, Shoaib Mir wrote:
 Something like this will help you:
 
 execute immediate 'create temporary table test (a number) on commit drop';

PL/pgSQL doesn't recognize "immediate", and "number" isn't a PostgreSQL
type, so the above yields a syntax error.  Also, EXECUTE isn't
necessary for the CREATE TABLE statement, although as Bruno mentioned
EXECUTE will be necessary for other statements due to plan caching.
And ON COMMIT DROP won't help if you call the function multiple
times in the same transaction.

-- 
Michael Fuhr



Re: [GENERAL] temp tables in functions?

2007-02-08 Thread Shoaib Mir

Agreed :) I guess I missed out some details there, as I just thought he
needed to drop a temp table inside a function like this:

CREATE OR REPLACE FUNCTION tempfunc(int) RETURNS int
AS
$$
begin
  execute 'create temporary table test (a numeric) on commit drop';
  execute 'INSERT INTO test values (1);';
  return 1;
end;
$$ LANGUAGE 'plpgsql';

I used 'number' by mistake, so sorry for any inconvenience caused, as I was
trying it with EnterpriseDB (where 'number' is added for Oracle
compatibility).

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 2/8/07, Michael Fuhr [EMAIL PROTECTED] wrote:


On Thu, Feb 08, 2007 at 12:29:37PM +0500, Shoaib Mir wrote:
 Something like this will help you:

 execute immediate 'create temporary table test (a number) on commit
drop';

PL/pgSQL doesn't recognize immediate and number isn't a PostgreSQL
type so the above yields a syntax error.  Also, EXECUTE isn't
necessary for the CREATE TABLE statement, although as Bruno mentioned
EXECUTE will be necessary for other statements due to plan caching.
And ON COMMIT DROP won't help if you call the function multiple
times in the same transaction.

--
Michael Fuhr



Re: [GENERAL] temp tables in functions?

2007-02-07 Thread Bruno Wolff III
On Wed, Feb 07, 2007 at 20:40:09 -0800,
  jws [EMAIL PROTECTED] wrote:
 Having developed a complex query, I want to wrap it up as a function
 so that it can take a parameter and return a set of rows. This query
 is currently written as multiple sql statements that create a few
 interstitial temp tables that are then joined. If I put this into a
 function definition, do those temp tables get dropped automatically
 when the function returns?

See: http://www.postgresql.org/docs/8.2/interactive/sql-createtable.html
So, the answer is no.

Also note that currently Postgres will cache information about tables
used in functions and this may not work well when you are dropping and
recreating tables with the same name in the same session. For that kind
of thing you need to use EXECUTE to avoid caching.



Re: [GENERAL] temp tables in functions?

2007-02-07 Thread Shoaib Mir

Something like this will help you:

execute immediate 'create temporary table test (a number) on commit drop';

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 2/8/07, Bruno Wolff III [EMAIL PROTECTED] wrote:


On Wed, Feb 07, 2007 at 20:40:09 -0800,
  jws [EMAIL PROTECTED] wrote:
 Having developed a complex query, I want to wrap it up as a function
 so that it can take a parameter and return a set of rows. This query
 is currently written as multiple sql statements that create a few
 interstitial temp tables that are then joined. If I put this into a
 function definition, do those temp tables get dropped automatically
 when the function returns?

See: http://www.postgresql.org/docs/8.2/interactive/sql-createtable.html
So, the answer is no.

Also note that currently Postgres will cache information about tables
used in functions and this may not work well when you are dropping and
recreating tables with the same name in the same session. For that kind
of thing you need to use EXECUTE to avoid caching.




Re: [GENERAL] temp tables and function performance

2006-12-18 Thread Alban Hertroys
Anton Melser wrote:
 Hi,
 I am trying to move up in the world with my sql and need to do the
 following...
 I have a subscribers table and I need to export to csv (semi-colon
 separated) certain fields - that is fine, but I also need to export a
 multi-select field from another table as one string (0 to n values
 separated by commas) per line. The problem being that to get the
 actual string I have to go via 4 other relations... and as I have
 200k+ subscribers this takes a while.

200k isn't all that much, unless you have a lot of large columns.

 My idea (which seems to work, though I haven't tested fully as it
 takes too damn long!), was to do the following. I would REALLY
 appreciate any pointers as my sql has never been this challenged!
 
 CREATE OR REPLACE FUNCTION mytest()
  RETURNS integer AS
 $BODY$DECLARE kindy INTEGER;
 BEGIN


 create temporary table tmp_interests(
 id bigint,
 value character varying(100)
 )
 WITHOUT OIDS
 ON COMMIT DROP;
 
 insert into tmp_interests
 select distinct si.subid, rbi.value
 from restem rbi, cats cc, trm_terms tt, subrest si
 where rbi.key = cc.name
 and cc.catid = tt.modcid
 and tt.tid = si.themeid;

You can do these in one go using
 CREATE TEMPORARY TABLE tmp_interests AS SELECT ...

 create temporary table tmp_subscribers(
 email character varying(200),
 format character varying(4),
 interests character varying(1000)
 )
 WITHOUT OIDS
 ON COMMIT DROP;
 
 insert into tmp_subscribers
 Select email,
 format,
 my_interests(id) as interests
 from subscriber;
 
 GET DIAGNOSTICS kindy = ROW_COUNT;
 
 copy tmp_subscribers to '/home/myname/subs.csv' WITH CSV DELIMITER AS
 ';' NULL AS '';

If you have PG8.2 and can combine your 2 select queries into one, then
you can create a view of them and copy that instead. It takes out all
the inserts and can use your already existing table statistics - it
should be faster.

Also, after inserting a bunch of records into a table, make a habit of
running ANALYSE on it. Otherwise the query-planner knows nothing about
the data in the tables and is likely to come up with a sub-optimal query
plan.
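
Roughly, assuming the two selects can be folded together and reusing the
table and column names from the quoted function, the view + COPY route would
look something like:

CREATE VIEW v_subscriber_export AS
  SELECT s.email, s.format,
         array_to_string(ARRAY(SELECT DISTINCT rbi.value
                               FROM restem rbi, cats cc, trm_terms tt, subrest si
                               WHERE rbi.key = cc.name
                                 AND cc.catid = tt.modcid
                                 AND tt.tid = si.themeid
                                 AND si.subid = s.id), ',') AS interests
  FROM subscriber s;

COPY (SELECT * FROM v_subscriber_export)
  TO '/home/myname/subs.csv' WITH CSV DELIMITER AS ';' NULL AS '';

And where the temp tables are kept, an ANALYZE on them right after the INSERT
gives the planner real statistics to work with.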

 
 GET DIAGNOSTICS kindy = ROW_COUNT;
 
 return kindy;
 
 END;$BODY$
  LANGUAGE 'plpgsql' VOLATILE;
 
 ...
 
 
 CREATE OR REPLACE FUNCTION my_interests(bigint)
  RETURNS character varying AS
 $BODY$DECLARE
 subid ALIAS FOR $1;
 interests character varying;
 myinterest RECORD;
 
 BEGIN
 interests := '';
 FOR myinterest IN execute 'select value from tmp_interests where id =
 ' || subid LOOP

No need for a dynamic query here...

 if interests = '' then
 interests := myinterest.value;
 else
 interests := interests || ',' || myinterest.value;
 end if;
 END LOOP;
 
 RETURN interests;
 END$BODY$
  LANGUAGE 'plpgsql' VOLATILE;

I'd have to look up the syntax, but I'm quite certain you can put the
results of a select into an array. After that you can call
array_to_string(...) to convert it into a comma seperated string. That'd
take away the need for this SP (which I think is actually STABLE instead
of VOLATILE).
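
For instance (using the names from the quoted function; 42 is just a
placeholder subscriber id):

SELECT array_to_string(
         ARRAY(SELECT value FROM tmp_interests WHERE id = 42),
         ',') AS interests;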

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //




Re: [GENERAL] Temp Tables

2006-05-29 Thread William Leite Araújo
2006/5/25, Brandon E Hofmann [EMAIL PROTECTED]:
 (..) I tried defining composite types, but get a runtime error that it isn't available.

What is your postgresql version? Why do you need to return the temporary table type? Why create a temporary table if you can use a function return type of setof?

--
William Leite Araújo


Re: [GENERAL] temp tables problem

2006-04-21 Thread sconeek
Just to add to that: there is only one user for the db, so both
application accesses use the same db username and password. The web app
automatically logs into the db with the one username and password
for both remote and local access.




Re: [GENERAL] temp tables problem

2006-04-21 Thread Jim C. Nasby
On Thu, Apr 20, 2006 at 09:21:27PM -0700, [EMAIL PROTECTED] wrote:
 just to add on that, there is only one user for the db. so both
 application accesses use the same db username and password. the web app
 automatically logs into the db in with the one username and password
 for both remote and local access.

Users don't matter at all for temp tables. Temp tables are per
*session*, so as soon as you come in from a different connection it's a
different set of temp tables.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] temp tables remain after server restart

2005-08-31 Thread Tom Lane
Hari Bhaskaran [EMAIL PROTECTED] writes:
 one of our programs went haywire and created around 200,000 temp
 tables. In the end, I restarted the db, but the temporary tables are
 still around

What did you do, the old "kill -9 some random process" approach to
database management?  The recommended ways of cancelling a session
wouldn't have caused this.

 What should I be doing to clean it up?

There is code to make them go away the first time a backend wants to use
the relevant pg_temp_x namespace.  So you could start a backend,
do "create temp table ...", start another backend while the first
remains running, do another "create temp table ...", repeat until they
go away.

It would probably work to do "drop schema pg_temp_x cascade" too,
but you'd have to be really careful not to clobber the temp schema of
an active backend this way.
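
For what it's worth, something along these lines can show which pg_temp_N
schemas still contain leftover relations (read-only; whether a given schema
is safe to drop still has to be checked against the backends actually
running):

SELECT n.nspname, count(c.oid) AS leftover_relations
FROM pg_namespace n
LEFT JOIN pg_class c ON c.relnamespace = n.oid
WHERE n.nspname LIKE 'pg_temp_%'
GROUP BY n.nspname
ORDER BY n.nspname;

-- and, only for a schema known to belong to no live backend:
-- DROP SCHEMA pg_temp_x CASCADE;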

regards, tom lane



Re: [GENERAL] temp tables remain after server restart

2005-08-31 Thread Hari Bhaskaran
 What did you do, the old kill -9 some random process approach to
 database management?  The recommended ways of cancelling a session
 wouldn't have caused this.

I never said I did "kill -9".  I do "pg_ctl stop".
BTW, the drop cascade on the namespace seems to be working.

The "create temp table" queries failed with an error asking to increase
the max_locks_per_transaction variable. Now that you mention the
cleanup code, it does make sense - it was probably trying to
clean up and it couldn't.
I increased max_locks_per_transaction and now the drop cascade seems to work
(still running, so I can't say).

 There is code to make them go away the first time a backend wants to use
 the relevant pg_temp_x namespace.  So you could start a backend,
 do create temp table ..., start another backend while the first
 remains running, do another create temp table ..., repeat until they
 go away.

Didn't know that - thanks for the info.



Re: [GENERAL] Temp tables...

2005-07-13 Thread Greg Patnude
I am TOP POSTING intentionally -- 

Thanks Mike... Very informative -- I never realized that child (INHERITED) 
tables do NOT inherit the indexes from their parent... that might be part of 
the solution. I duplicated the parent's index on the child table -- the 
function still takes 4672 ms to complete... Based on your evaluation, I 
now believe that the issue is in the UPDATE clause --

--  THIS IS AN EXAMPLE OF THE ACTUAL UPDATE... WHICH GETS CALLED 50 TIMES...

EXPLAIN ANALYZE UPDATE ONLY l_store_hours SET amount = amount * 1.00 WHERE 
l_store_hours.id = 14511;

-- OUTPUT...
Nested Loop  (cost=0.00..6.05 rows=1 width=52) (actual time=67.487..67.493 rows=1 loops=1)
  ->  Index Scan using l_store_hours_pkey on l_store_hours lh  (cost=0.00..3.01 rows=1 width=52) (actual time=54.674..54.675 rows=1 loops=1)
        Index Cond: (14511 = id)
  ->  Index Scan using l_store_hours_pkey on l_store_hours  (cost=0.00..3.01 rows=1 width=4) (actual time=0.006..0.009 rows=1 loops=1)
        Index Cond: (id = 14511)
Total runtime: 87.803 ms

Index Scan using l_store_hours_pkey on l_store_hours  (cost=0.00..3.02 rows=1 width=58) (actual time=0.032..0.034 rows=1 loops=1)
  Index Cond: (id = 14511)
Total runtime: 0.500 ms

-- THE TABLES, CHILD TABLE, VIEW, AND FUNCTION...
CREATE TABLE l_store_hours (

 id serial PRIMARY KEY,
 l_store_id int4 NOT NULL DEFAULT 0,
 l_activity_type int4 NOT NULL REFERENCES l_activity_type(id),
 week_code int4 NOT NULL DEFAULT 0,
 year_code int4 NOT NULL DEFAULT 0,
 amount numeric(10,2) DEFAULT 0.00,
 create_dt timestamp NOT NULL DEFAULT now(),
 change_dt timestamp NOT NULL DEFAULT now(),
 change_id int4 DEFAULT 0,
 active_flag bool DEFAULT true

) WITH OIDS;

CREATE INDEX idx_store_hours ON l_store_hours USING btree (id, l_store_id, 
year_code, week_code);
ALTER TABLE l_store_hours CLUSTER ON idx_store_hours;

-- THE CHILD (INHERITED) TABLE...
CREATE TABLE l_store_hours_history (

 hist_id serial PRIMARY KEY,
 hist_dt timestamp NOT NULL DEFAULT now()

) INHERITS (l_store_hours) WITH OIDS;

CREATE INDEX idx_store_hours_history ON l_store_hours_history USING btree 
(id, l_store_id, year_code, week_code);
ALTER TABLE l_store_hours_history CLUSTER ON idx_store_hours_history;

-- THE UPDATE RULE ON THE PARENT TABLE...
CREATE OR REPLACE RULE l_store_hours_history_upd AS ON UPDATE TO 
l_store_hours DO INSERT INTO l_store_hours_history (SELECT * FROM ONLY 
l_store_hours LH WHERE LH.id = old.id);

-- THE VIEW...
CREATE OR REPLACE VIEW v_storehours AS
SELECT ls.id, ls.l_activity_type, ls.l_store_id, ls.week_code, ls.year_code,
ls.amount, ls.create_dt, ls.change_dt, ls.change_id, ls.active_flag,
COALESCE(lsh.amount, ls.amount) AS previous,
COALESCE(lsh.hist_id, 0) AS history, lsh.hist_dt
FROM ONLY l_store_hours ls
FULL JOIN l_store_hours_history lsh ON ls.id = lsh.id
ORDER BY ls.year_code, ls.week_code;

/*

This function updates the block of expected employee hours (l_store_hours) 
based on the current projected sales figures

*/
CREATE OR REPLACE FUNCTION l_updatehoursonsales(int4, int4, int4)
  RETURNS bool AS
$BODY$

DECLARE ROW v_storesales%ROWTYPE;
DECLARE F1 real;
DECLARE CUR t_updhours%ROWTYPE;
DECLARE STORE ALIAS FOR $1;
DECLARE WEEK ALIAS FOR $2;
DECLARE YEAR ALIAS FOR $3;
DECLARE C RECORD;
BEGIN

 RAISE LOG 'STARTING: l_updatehoursonsales for store: %', $1;
 RAISE LOG 'STARTING: l_updatehoursonsales for week: %', $2;
 RAISE LOG 'STARTING: l_updatehoursonsales for year: %', $3;

 -- RAISE LOG 'CREATING TEMP TABLE AS SELECT...';
 CREATE TEMPORARY TABLE tmphours AS

  SELECT LT.type_desc, 0 AS hist_id, LSH.* FROM ONLY l_store_hours LSH
  FULL OUTER JOIN l_activity_type LT ON LSH.l_activity_type = LT.id
  WHERE LSH.l_store_id = $1 AND LSH.week_code = $2
  AND LSH.year_code = $3 ORDER BY l_activity_type;

 -- UPDATE THE TEMP TABLE WITH THE LEAST HISTORY ID FROM THE HISTORY 
TABLE...
 -- RAISE LOG 'SETTING HISTORY IDs...';
 UPDATE tmphours SET hist_id = (

  SELECT A.hist_id FROM (

   SELECT MIN(hist_id) AS hist_id, id FROM ONLY l_store_hours_history LSH
   WHERE tmphours.l_store_id = LSH.l_store_id
   AND tmphours.year_code = LSH.year_code
   AND tmphours.week_code = LSH.week_code
   AND tmphours.l_activity_type = LSH.l_activity_type
   AND tmphours.id = LSH.id
   GROUP BY 2

   )

  AS A);

 --  UPDATE THE AMOUNTS (HOURS) WITH THE PRESERVED VALUES...
 -- RAISE LOG 'UPDATING tmphours -- Setting amount...';
 UPDATE tmphours SET amount = LSH.amount FROM ONLY l_store_hours_history LSH
 WHERE tmphours.hist_id = LSH.hist_id;

 -- GET THE CURRENT SALES FIGURES FROM THE VIEW...
 FOR ROW IN SELECT VSS.* FROM v_storesales VSS
 WHERE VSS.l_store_id = $1 AND VSS.week_code = $2 AND VSS.year_code = $3
 ORDER BY history LIMIT 1
 LOOP

  -- RAISE LOG 'LOOPING FOR UPDATE... %', ROW.id;
  IF (ROW.amount  ROW.previous) THEN

   SELECT  (1 + (((ROW.amount::float / ROW.previous::float)::float - 1) * 
0.8))::NUMERIC(10, 2) INTO F1;
   RAISE LOG 'USING FORMULA (a  b): %', F1;

  END IF;

  IF (ROW.amount  

Re: [GENERAL] Temp tables...

2005-07-12 Thread Michael Fuhr
On Tue, Jul 12, 2005 at 10:52:24AM -0700, Greg Patnude wrote:

 Performing an update to an inherited table system from inside of a stored 
 procedure (PLPGSQL) seems to be unusually sluggish...

Is the update slower when done inside a function than when doing
it directly (e.g., from psql)?  That is, is the use of a function
relevant, or is the update equally slow in any case?  Could you
post the EXPLAIN ANALYZE output for the update?

The message subject is Temp tables.  Are you using temporary
tables, and if so, are you seeing different behavior with temporary
tables than with real tables?  Again, is that relevant to the
problem?

 Does anyone have a faster solution ? I am updating 50 records and it
 takes approximately 4.375 seconds + or -
 
 The inherited table has an ON INSERT DO INSTEAD and there are approximately 
 2 million rows in the inherited table structure...

Could you post the table definitions, including all indexes, rules,
etc.?  Do all the child tables have indexes on the column(s) used
to restrict the update?  As the documentation states, indexes aren't
inherited, so you might need to create additional indexes on the
children, indexes that you'd think would be redundant.  Example:

CREATE TABLE parent (id serial PRIMARY KEY);
CREATE TABLE child (x integer) INHERITS (parent);

INSERT INTO child (x) SELECT * FROM generate_series(1, 10);

ANALYZE parent;
ANALYZE child;

EXPLAIN ANALYZE UPDATE child SET id = id WHERE id BETWEEN 1 AND 50;
                                   QUERY PLAN
-------------------------------------------------------------------------------
 Seq Scan on child  (cost=0.00..1991.00 rows=41 width=14) (actual time=0.059..307.234 rows=50 loops=1)
   Filter: ((id >= 1) AND (id <= 50))
 Total runtime: 309.350 ms
(3 rows)

EXPLAIN ANALYZE UPDATE parent SET id = id WHERE id BETWEEN 1 AND 50;
                                   QUERY PLAN
-------------------------------------------------------------------------------
 Append  (cost=0.00..2006.37 rows=52 width=14) (actual time=304.838..306.252 rows=50 loops=1)
   ->  Index Scan using parent_pkey on parent  (cost=0.00..15.37 rows=11 width=10) (actual time=0.110..0.110 rows=0 loops=1)
         Index Cond: ((id >= 1) AND (id <= 50))
   ->  Seq Scan on child parent  (cost=0.00..1991.00 rows=41 width=14) (actual time=304.705..305.619 rows=50 loops=1)
         Filter: ((id >= 1) AND (id <= 50))
 Total runtime: 307.935 ms
(6 rows)

Notice the sequential scans on child, even though we have an index
on parent.id, a column that child inherits.  We need to create an
index on child.id as well:

CREATE INDEX child_id_idx ON child (id);

EXPLAIN ANALYZE UPDATE child SET id = id WHERE id BETWEEN 1 AND 50;
                                   QUERY PLAN
-------------------------------------------------------------------------------
 Index Scan using child_id_idx on child  (cost=0.00..3.65 rows=41 width=14) (actual time=0.369..1.371 rows=50 loops=1)
   Index Cond: ((id >= 1) AND (id <= 50))
 Total runtime: 6.100 ms
(3 rows)

EXPLAIN ANALYZE UPDATE parent SET id = id WHERE id BETWEEN 1 AND 50;
                                   QUERY PLAN
-------------------------------------------------------------------------------
 Append  (cost=0.00..19.02 rows=52 width=14) (actual time=0.119..1.895 rows=50 loops=1)
   ->  Index Scan using parent_pkey on parent  (cost=0.00..15.37 rows=11 width=10) (actual time=0.037..0.037 rows=0 loops=1)
         Index Cond: ((id >= 1) AND (id <= 50))
   ->  Index Scan using child_id_idx on child parent  (cost=0.00..3.65 rows=41 width=14) (actual time=0.066..1.320 rows=50 loops=1)
         Index Cond: ((id >= 1) AND (id <= 50))
 Total runtime: 7.820 ms
(6 rows)

If that's not the problem, then do other tables have foreign key
references to the table(s) you're updating?  If so, then you might
need indexes on the foreign key columns in the referring tables.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [GENERAL] Temp tables as session var containers

2005-05-19 Thread Richard Huxton
James Croft wrote:

 Hi,
 I've seen the session variable question pop up a fair bit on this list. 
 The temporary table solution seems good but I've got a question before 
 using it...

Another option is to use one of the procedural languages that provide 
global variable storage. The attached examples are in TCL.

Set the user-id
  SELECT app_session('UID', 'ABC1234');
Get the user-id
  SELECT app_session('UID');
--
  Richard Huxton
  Archonet Ltd
-- app_session(VARNAME, VALUE)
--  Defines a text variable and sets its value.
--  If you try to set the same VARNAME twice in one session, an error is 
returned.
--  If VALUE is null, just returns the value.
--
CREATE OR REPLACE FUNCTION app_session(text, text) RETURNS text AS '
    upvar app_sess_vars a
    if {![ argisnull 2 ]} {
        if {[ info exists a($1) ]} {
            elog ERROR "app_session(): Already set var $1 this session"
        }
        set a($1) $2
    }

    return $a($1)
' LANGUAGE pltcl;


-- app_session(VARNAME)
--  Returns the value of VARNAME (if set) or UNDEFINED
--  NOTE - this function is marked IMMUTABLE - do not use as a DEFAULT source
--  in a table definition
--
CREATE OR REPLACE FUNCTION app_session(text) RETURNS text AS '
upvar app_sess_vars a
if {![ info exists a($1) ]} {
return UNDEFINED
}

return $a($1)
' LANGUAGE pltcl IMMUTABLE;

-- app_session_int(VARNAME)
--  Returns the value of VARNAME (if set) or 0
--  NOTE - this function is marked IMMUTABLE - do not use as a DEFAULT source
--  in a table definition
--
CREATE OR REPLACE FUNCTION app_session_int(text) RETURNS int4 AS '
upvar app_sess_vars a
if {![ info exists a($1) ]} {
return 0
}

return $a($1)
' LANGUAGE pltcl IMMUTABLE;


-- app_session_vol(VARNAME)
--  Returns the value of VARNAME (if set) or UNDEFINED
--  NOTE - this function is marked VOLATILE
--
CREATE OR REPLACE FUNCTION app_session_vol(text) RETURNS text AS '
upvar app_sess_vars a
if {![ info exists a($1) ]} {
return UNDEFINED
}

return $a($1)
' LANGUAGE pltcl VOLATILE;


-- app_session_int_vol(VARNAME)
--  Returns the value of VARNAME (if set) or 0
--  NOTE - this function is marked VOLATILE
--
CREATE OR REPLACE FUNCTION app_session_int_vol(text) RETURNS int4 AS '
upvar app_sess_vars a
if {![ info exists a($1) ]} {
return 0
}

return $a($1)
' LANGUAGE pltcl VOLATILE;




Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-29 Thread Tony Caduto
I am assuming you need session variables for a web based app, right?
For a standard client/server app created in something like VB or Delphi all you
really need is a single connection (because most db apps are single threaded),
and a temp table will stay around until that connection is closed, and if you
use ON COMMIT DELETE ROWS you don't ever need to use execute in functions
(that was the whole point).
You are right that the temp tables would not work with HTTP since each request 
is a new connection,
but there are easy ways around that by using a sessions table, which sounds 
like what you did.
For storing web based session variables I use a table in PG like this:
CREATE TABLE public.sessions
(
sessionid integer,
sesstimestamp timestamp,
sessdata text,
CONSTRAINT pk_sessions PRIMARY KEY (sessionid)
);
sessionID is generated from a sequence and stored on the client in a cookie
sesstimestamp is used to expire the session after 20 minutes of no activity
(the timestamp is updated at each new request from the client, so it's not a 
hard 20 minutes)
sessdata can contain as many variables as you like in this form:
username=bsmith
It all works great and I don't need global temp tables :-)  And using a scheme 
like this will work on
any database that supports sequences or some other form of generating a unique 
ID.
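
For illustration, using that table comes down to something like this (the
sequence name here is made up):

INSERT INTO public.sessions (sessionid, sesstimestamp, sessdata)
VALUES (nextval('sessions_sessionid_seq'), now(), 'username=bsmith');

-- expire idle sessions (the soft 20-minute timeout described above):
DELETE FROM public.sessions
WHERE sesstimestamp < now() - interval '20 minutes';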
--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql 8.x
http://www.amsoftwaredesign.com
fisher wrote:
Hi 
Thank You very much.
As I mentioned I need temp tables for storing session variables.
I plan to write functions to return a suitable column value and I need
them to be available during the whole session. That makes deleting on
commit not the best solution. For example I want to keep emp_id in one
of columns and
use ps_get_emp_id() function to return it's value in other functions.
All values stored in thie parameter temp table are quite stable.

Anyway thank You very much.
fisher


Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-29 Thread Neil Conway
Dennis Sacks wrote:
The disadvantage is, you'll have to have some process for deleting
old data from the table, as it will stay around and it will bite you
when you get the same pg_backend_pid() again down the road.
Rather than use pg_backend_pid(), why not just assign session IDs from a 
sequence? You would still get the problem of stale session data so you'd 
probably still want a periodic cleaner process, but you won't need to 
worry about session ID collision.
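
A bare-bones sketch of that (names invented):

CREATE SEQUENCE web_session_id_seq;
-- each new application session fetches its own id once and keeps it client-side
SELECT nextval('web_session_id_seq') AS session_id;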

-Neil


Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-28 Thread Pavel Stehule
 Hi
 I used to work with Oracle and now tryin' PostgreSQL I'm a bit
 confused.
 I found that creating temp table in one session does not  make it
 available for other sessions for the same user? Is this intended??

Yes, it's the natural behaviour of temp tables in PostgreSQL. The life cycle of 
temp tables is tied to the session. When the session ends, all temp 
tables are destroyed. When you want to use temp tables again, you have to 
create them again.

 I was tryin to use because of lack of session and package variables in
 PGSQL (thats what I know).  But I'm not sure if I have to create  a
 temp table at the beginning of each session? Or just like in Oracle
 create temp table definition and all whats temporar is data.

You will lose the table definition.

 
 If creating a temp table for each connection is a must then maybe You
 can tell me if there is any triger /event  that is being called during
 connecting to db. I guess that would be the best place to create temp
 tables and feed it up with session parameters.

No, there are no connection or session triggers. You have to create the temp tables 
from the application.

Regards
Pavel Stehule




Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-28 Thread Dennis Sacks
NO-fisher-SPAM_PLEASE wrote:
Hi
I used to work with Oracle and now tryin' PostgreSQL I'm a bit
confused.
I found that creating temp table in one session does not  make it
available for other sessions for the same user? Is this intended??
 

PostgreSQL does not support global temporary tables. This is one of the 
most painful features missing as far as porting from Oracle goes from my 
standpoint.

Yes, you need to create the temporary table at the beginning of each 
session. Also, stored procedures that use temporary tables are more 
painful to write - you need to use EXECUTE for any SQL that references a 
temporary table - read the Porting From Oracle section of the PostgreSQL 
manual. I'd recommend rereading it several times.

The other option with temporary tables is to emulate a global temporary 
table using a normal table and adding a column like this:

session_id INTEGER DEFAULT pg_backend_pid() NOT NULL
and then modifying your select/update/delete statements to include 
where session_id = pg_backend_pid() so that you only deal with the 
data from your current session.

The pg_backend_pid() is guaranteed to be unique while connected. You'll 
just want to make sure you have a process for deleting rows from the 
table so if you get a pg_backend_pid() again you won't have problems.

This has the advantage of not having to create a temporary table at the 
beginning of every session, plus your stored procedures don't need to 
use EXECUTE. The disadvantage is, you'll have to have some process for 
deleting old data from the table, as it will stay around and it will 
bite you when you get the same pg_backend_pid() again down the road.
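
Spelled out, that emulation looks roughly like this (the table and the extra
columns are hypothetical):

CREATE TABLE scratch_results (
  session_id integer DEFAULT pg_backend_pid() NOT NULL,
  item_id    integer,
  amount     numeric
);

-- every statement filters on the current backend's pid:
SELECT item_id, amount FROM scratch_results WHERE session_id = pg_backend_pid();
DELETE FROM scratch_results WHERE session_id = pg_backend_pid();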

Dennis


Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-28 Thread Tony Caduto
This is not entirely correct.   We use temp tables all the time in PLpgsql 
functions and never have to use
execute.  We have found that you have to use EXECUTE only in certain 
circumstances.
 stored procedures that use temporary tables are more
painful to write - you need to use EXECUTE for any SQL that references a 
temporary table - read the Porting From Oracle section of the PostgreSQL 
manual. I'd recommend rereading it several times.
we use this in all our functions that use temp tables, and we use PG 
Lightning Admin, which refreshes the connection each time you run a query, 
which gets around a lot of issues with temp tables.
CREATE OR REPLACE FUNCTION public.iftableexists(varchar)
RETURNS pg_catalog.bool AS
$BODY$
DECLARE
BEGIN
  /* check that the table exists in the database and is visible */
  PERFORM n.nspname, c.relname
  FROM pg_catalog.pg_class c
  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname LIKE 'pg_temp_%'
    AND pg_catalog.pg_table_is_visible(c.oid)
    AND Upper(relname) = Upper($1);

  IF FOUND THEN
    RETURN TRUE;
  ELSE
    RETURN FALSE;
  END IF;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql 8.x
http://www.amsoftwaredesign.com


Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-28 Thread Steve Atkins
On Thu, Apr 28, 2005 at 01:14:58PM -0500, Tony Caduto wrote:
 This is not entirely correct.   We use temp tables all the time in PLpgsql 
 functions and never have to use
 execute.  We have found that you have to use EXECUTE only in certain 
 circumstances.
 
 we use this in all our functions that use temp tables, and we use PG 
 Lightning Admin, which refreshes the
 connection each time you run a query which gets around a lot of issues with 
 temp tables.

I'm assuming that by refresh you mean that you close the connection
to the database and create a new one for every query?

If you do that each time you run a query, doesn't that make temporary
tables pretty much worthless for anything other than PL scratch space?

And it's obviously a ludicrous thing to do in almost all production
cases, so if you're using lightning admin to prototype queries for
production use aren't you going to get burned by the entirely
different behaviour?

Cheers,
  Steve




Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-28 Thread Dennis Sacks
Tony Caduto wrote:
This is not entirely correct.   We use temp tables all the time in 
PLpgsql functions and never have to use
execute.  We have found that you have to use EXECUTE only in certain 
circumstances.

we use this in all our functions that use temp tables, and we use PG 
Lightning Admin, which refreshes the
connection each time you run a query which gets around a lot of issues 
with temp tables.

If you refresh the connection each time you run a query,  maybe you 
don't need to use EXECUTE with temporary tables in stored procedures, 
but who does that in a production database application? Most people want 
to re-use connections for performance reasons.

Dennis


Re: [GENERAL] temp tables ORACLE/PGSQL

2005-04-28 Thread Tony Caduto
We only do the connection refresh in the Lightning Admin query editor for 
testing our SQL that uses temp tables.
Refreshing the connection eliminates the "OID does not exist" problems.
We put everything into stored procs and use them from Delphi applications and 
still never use execute in our procs.
I think the key is we use ON COMMIT DELETE ROWS when we create our temp tables; 
we don't ever drop them until the client disconnects.  Using ON COMMIT DELETE 
ROWS just reuses the same temp table over and over again.  You only need to use 
EXECUTE if you drop the temp table and recreate it multiple times in the same 
session.
Here is an example of how we use temp tables:
(NOTE: I ADDED the temp table to this function; you don't actually need it in 
this case because the cursor can just be referenced against the select statement.)
This can be called over and over again from the same connection because the 
temp table is not dropped, it's just re-used.  When the client does disconnect 
it gets dropped.
You could also truncate the temp table at the end of the function if you 
wanted; then you don't have data sitting in the table until the next function 
call.
CREATE or REPLACE FUNCTION admin.spadm_get_status_list()
RETURNS pg_catalog.refcursor AS
$BODY$
DECLARE
  return_cursor refcursor;
BEGIN
  return_cursor = 'return_cursor';

  IF iftableexists('temp_get_status_list') THEN
    RAISE NOTICE 'temp table already exists';
  ELSE
    CREATE TEMP TABLE temp_get_status_list
    (
      STATUS_ID SMALLINT,
      DESCRIPTION VARCHAR(50)
    ) WITHOUT OIDS ON COMMIT DELETE ROWS;
  END IF;

  INSERT INTO temp_get_status_list
  (
    STATUS_ID, DESCRIPTION
  )
  SELECT status_id, description
  FROM admin.admin_status
  ORDER BY 1;

  OPEN return_cursor FOR SELECT * FROM temp_get_status_list;
  RETURN return_cursor;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;


If you refresh the connection each time you run a query,  maybe you 
don't need to use EXECUTE with temporary tables in stored procedures, 
but who does that in a production database application? Most people want 
to re-use connections for performance reasons.

Dennis
--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql 8.x
http://www.amsoftwaredesign.com


Re: [GENERAL] Temp tables and copy

2003-09-14 Thread Konstantin Goudkov
 A message like this?
   "lost synchronization with server, resetting connection"


Yeah, like that.
I get pissed way too often when people miss the simplest things and bug me with
stupid questions, and now it's me.
Sorry guys.


