Re: [GENERAL] pg_xlog not cleaned up

2006-08-01 Thread Christian Kastner
Simon Riggs wrote:
 On Mon, 2006-07-31 at 22:18 +0200, Christian Kastner wrote:
 Bruce Momjian wrote:
 What PostgreSQL version are you using?  I know the *.backup file removal
 was added in 8.1:

  * Remove old *.backup files when we do pg_stop_backup() (Bruce)
This prevents a large number of *.backup files from existing in
 /pg_xlog.

 but the existence of the files should not cause problems.
 It's 8.1.4, running on FC 5.

 It does not cause any active problems, but the WAL archiving fails until
 the *.backup and *.done files are removed manually, after which all the
 ready WAL segments are archived away normally.
 
 This is a known problem fixed on June 22 and the fix has been applied to
 8.1 branch as well as what will become 8.2.
 
 The next point release for 8.1 should have that included.

In that case, I can work around this easily until the next release.

Thank you for clearing this up!
Chris

-- 
Christian Kastner
PGP Key: AE90E13f

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Error in PostgreSQL query with psycopg

2006-08-01 Thread Parthan SR
Hi, with regards to the previous mail and the replies, I tried the code below.
It doesn't give any error when I run it as a script, but the return value of
cursor.execute(), which is assigned to 'q', is None, and the data are not
inserted into the database.

[code]

name = 'GLV'
host = 'Parthan'
start = '04-08-2006'
end = '04-08-2006'
days = 1
starttime = '15:00:00'
endtime = '18:00:00'
size = 20

arglist = (name, host, start, end, days, starttime, endtime, size)

connection = connect("dbname=TimeTable user=myname password=mypassword")
cur = connection.cursor()
q = cur.execute("INSERT INTO ConfMain (ConfName, ConfHost, ConfStart, "
                "ConfEnd, ConfDays, ConfStartTime, ConfEndTime, ConfSize) "
                "VALUES (%s, %s, %s, %s, %i, %s, %s, %i);", arglist)
print q

Result: q= None

I am not getting any other error; I tried this by calling the script
in the terminal. When I ran a similar one through the browser, it says
"premature end of script" :(

http://pastebin.de/9994 -- this is my apache error log (/var/log/apache2/error.log)

--
With Regards,
Parthan S.R.
Research Assistant
National Resource Center for Free/Open Source Software
Python Developer n00b


Re: [GENERAL] Error in PostgreSQL query with psycopg

2006-08-01 Thread Volkan YAZICI
On Jul 31 10:40, Parthan SR wrote:
 On 7/31/06, Richard Huxton dev@archonet.com wrote:
 Traceback (most recent call last): File
 /usr/lib/cgi-bin/ConfSachem/page2.py, line 75, in ? main(num_days) File
 /usr/lib/cgi-bin/ConfSachem/page2.py, line 68, in main query =
 cursor.execute('INSERT INTO ConfMain (ConfName, ConfHost, ConfStart,
 ConfEnd, ConfDays, ConfStartTime, ConfEndTime, ConfSize) VALUES (?, ?, ?, ?,
 ?, ?, ?, ?)', (conf_name, host_name, start_day, end_day, num_days,
 start_time, end_time, aud_size)) psycopg.ProgrammingError: ERROR: syntax
 error at or near , at character 120 INSERT INTO ConfMain (ConfName,
 ConfHost, ConfStart, ConfEnd, ConfDays, ConfStartTime, ConfEndTime,
 ConfSize) VALUES (?, ?, ?, ?, ?, ?, ?, ?)

Did you look at character 120? It points to the ',' just after a '?'. Can
you please replace the '?' placeholders with '%s'? I'm not sure but, IIRC,
the placeholder style psycopg uses (e.g. $1, $2, ... or ?, ?, ...)
should be adjustable.


Regards.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Triggers in Postgres

2006-08-01 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2006-07-31 11:58:49 -0400:
 Actually Postgres manual of triggers says that in postgres, you can't write
 a trigger in conventional sql. You have to write it in a procedural language
 like C. So wanted some more insight on it.
 ~Jas

Where does it say so? Do you have a link?

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Triggers in Postgres

2006-08-01 Thread Jasbinder Bali
http://www.postgresql.org/docs/8.1/interactive/triggers.html

it says something like this:

 It is not currently possible to write a trigger function in the plain SQL function language. 

though lately I saw triggers written in pure sql in postgres

~jas
On 8/1/06, Roman Neuhauser [EMAIL PROTECTED] wrote:
 # [EMAIL PROTECTED] / 2006-07-31 11:58:49 -0400:
  Actually Postgres manual of triggers says that in postgres, you can't write
  a trigger in conventional sql. You have to write it in a procedural language
  like C. So wanted some more insight on it.
  ~Jas

 Where does it say so? Do you have a link?

 --
 How many Vietnam vets does it take to screw in a light bulb?
 You don't know, man.  You don't KNOW.
 Cause you weren't THERE. http://bash.org/?255991


Re: [GENERAL] Postgres on 64bit Windows Server with WOW64

2006-08-01 Thread DANTE Alexandra

Hello Mont, hello List,

I am the person who wrote these threads, and since then I haven't found a 
solution to compile PostgreSQL on 64-bit Windows... Instead, I used a 32-bit 
release of PostgreSQL on 64-bit Windows.
I worked with Microsoft Windows Server 2003, Enterprise Edition with 
SP1 for Itanium-based Systems. I managed to set up a compilation environment 
on 64-bit Windows via a trial release of MKS Toolkit (as MinGW does not 
support 64 bits). It was the only tool which provided the shell script 
(sh.exe) used to launch the ./configure command.


But when I executed the ./configure command, I got errors because the 
configuration of my server was not recognized.
My problems were in the config.guess file. In this file, the case 
ia64 is only present for Linux, AIX and HP-UX, not for 64-bit Windows.

I tried to make changes, but I did not manage to compile PostgreSQL...

If someone manages to compile PostgreSQL without using Microsoft Visual 
C, I would be interested to know how they did it! :-)


Regards,
Alexandra DANTE


Mont Rothstein wrote:


Previous threads indicate that it isn't currently possible:

http://archives.postgresql.org/pgsql-general/2006-04/msg00628.php

http://archives.postgresql.org/pgsql-general/2006-04/msg01264.php

-Mont


On 7/31/06, *Joshua D. Drake*  [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Mont Rothstein wrote:
 I wouldn't know that one way or the other :-)

 I was referring to Windows on Windows
 64http://en.wikipedia.org/wiki/WoW64
 .

 Enables 32bit apps to run on 64bit Windows OSs, in case anyone
wasn't
 familiar with it (I wasn't until today).

In theory you should be able to compile PostgreSQL to 64bit
windows. We
support 64bit everywhere else.

Does mingw support 64bit?

Joshua D. Drake



 -Mont



 On 7/31/06, Joshua D. Drake [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:

 Mont Rothstein wrote:
  Does anyone have experience with running Postgres on Windows
Server
 2003
  64bit using WOW64?

 There is a 64bit version of World of Warcraft?
 
  In theory it *should* work, but it is always nice to hear
from someone
 that
  has already taken the plunge :-)
 
  Thanks,
  -Mont
 


 --

 === The PostgreSQL Company: Command Prompt, Inc. ===
 Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
 Providing the most comprehensive  PostgreSQL solutions
since 1997
   http://www.commandprompt.com/






--

=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
  http://www.commandprompt.com/






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Corrupted DB? could not open file pg_clog/####

2006-08-01 Thread Martijn van Oosterhout
On Mon, Jul 31, 2006 at 06:09:33PM -0400, Francisco Reyes wrote:
 Martijn van Oosterhout writes:
 
 That's when you've reached the end of the table. The point is that
 before then you'll have found the value of N that produces the error.
 
 Will be a while.. my little python script is doing under 10 selects/sec... 
 and there are nearly 67 million records. :-(

Naturally you'd do a binary search. That'd take a maximum of about
log2(67 million) = about 26 scans.

Once you find it you should be able to identify the ctid. You may be
able to delete it.
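
A rough sketch of the probing (table name and row counts are purely
illustrative; each probe either hits the error or completes, which tells you
which half holds the bad row):

-- Does the error show up within the first half of the ~67 million rows?
SELECT count(*) FROM (SELECT * FROM big_table LIMIT 33500000) AS probe;

-- If it errors, halve the window; if not, move the window forward, e.g.:
SELECT count(*) FROM (SELECT * FROM big_table LIMIT 16750000 OFFSET 33500000) AS probe;

-- Roughly 26 such probes narrow it down to a single row.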

 It will rollback all pending transactions. The point is that it's
 looking for information about transactions that were committed. This is
 usually a memory or disk error.
 
 So, should it be safe to create the file and fill it up with 256K zeros?

For a certain value of safe. It gets the system running, but there are
no guarantees about the data in it...

 At the rate my script is going.. it's going to take a very long time to 
 find out where the problem is. If I have a dump.. any usefull info I can 
 take from the point the dump stopped?

That gives you the place in the file where it broke...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Triggers in Postgres

2006-08-01 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2006-08-01 02:35:48 -0400:
 On 8/1/06, Roman Neuhauser [EMAIL PROTECTED] wrote:
 
 # [EMAIL PROTECTED] / 2006-07-31 11:58:49 -0400:
  Actually Postgres manual of triggers says that in postgres, you can't
 write
  a trigger in conventional sql. You have to write it in a procedural
 language
  like C. So wanted some more insight on it.
  ~Jas
 
Where does it say so? Do you have a link?

 http://www.postgresql.org/docs/8.1/interactive/triggers.html
 
 it says something like this:
 
  It is not currently possible to write a trigger function in the plain SQL
 function language. 
 
 though lately I saw triggers written in pure sql in postgres

Notice that the manual doesn't mention C, and I guess those pure
sql triggers were written in PL/PgSQL, a procedural language.

As the following example fails to demonstrate, it's just SQL with a
few control structures, very easy to get running if you have a bit
of SQL and programming background.

CREATE TABLE t (x SERIAL);

CREATE FUNCTION sqlf()
RETURNS SETOF t
STABLE
LANGUAGE SQL
AS
$$
SELECT * FROM t;
$$;

CREATE FUNCTION plpgsqlf()
RETURNS SETOF t
STABLE
LANGUAGE PLPGSQL
AS
$$
DECLARE
r t;
BEGIN
FOR r IN SELECT * FROM t LOOP
RETURN NEXT r;
END LOOP;
END;
$$;

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Error in PostgreSQL query with psycopg

2006-08-01 Thread Martijn van Oosterhout
On Mon, Jul 31, 2006 at 09:47:40PM -0400, John D. Burger wrote:
 never, never, never try quoting on your own! You can only fail. The 
 only choice is to fail now or later.
 
 Nonetheless, in case it's useful, here's a barebones wrapper I call on 
 everything:

It'll work fine on single byte encodings (ascii and latin*) and for
well-formed utf8. If somebody can supply malformed utf8 you may have a
problem. It will definitely break for some eastern encodings.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Triggers in Postgres

2006-08-01 Thread Chris Mair

 http://www.postgresql.org/docs/8.1/interactive/triggers.html
  
 it says something like this:
  
  It is not currently possible to write a trigger function in the
 plain SQL function language. 

The whole paragraph says:

It is also possible to write a trigger function in C, although most
people find it easier to use one of the procedural languages. It is not
currently possible to write a trigger function in the plain SQL function
language.

That is: you can and you should write your trigger in a procedural
language. In particular, if you want to stay as close as possible
to SQL, you should use procedural SQL, which in PostgreSQL is called
PL/pgSQL:
http://www.postgresql.org/docs/8.1/interactive/plpgsql.html
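
For instance, a minimal PL/pgSQL trigger might look like this (all names are
illustrative, and it assumes mytable has a "modified" timestamp column):

CREATE OR REPLACE FUNCTION stamp_modified() RETURNS trigger AS $$
BEGIN
    -- set the row's modified column before it is written
    NEW.modified := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_stamp_modified
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE stamp_modified();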

Bye, Chris.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Triggers in Postgres

2006-08-01 Thread Tom Lane
Chris Mair [EMAIL PROTECTED] writes:
 http://www.postgresql.org/docs/8.1/interactive/triggers.html
  It is not currently possible to write a trigger function in the
 plain SQL function language. 

 The whole paragraph says.

 It is also possible to write a trigger function in C, although most
 people find it easier to use one of the procedural languages. It is not
 currently possible to write a trigger function in the plain SQL function
 language.

And that comes *after* a paragraph talking about the different
procedural languages you can write a trigger in.  I can't imagine how
anyone would come away from reading that with the impression that C
is the first recommendation for writing triggers.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Unicode sorting problem

2006-08-01 Thread David Jorjoliani

Hi,
I have problem with UTF-8 database sorting. Postgres initialized with:

--locale=ka_GE.UTF-8 --lc-collate=ka_GE.UTF-8

show all; command shows:

...
 lc_collate  | ka_GE.UTF-8 | Shows the collation order locale.
 lc_ctype    | ka_GE.UTF-8 | Shows the character classification and case conversion locale.
 lc_messages | ka_GE.UTF-8 | Sets the language in which messages are displayed.
 lc_monetary | ka_GE.UTF-8 | Sets the locale for formatting monetary amounts.
 lc_numeric  | ka_GE.UTF-8 | Sets the locale for formatting numbers.
 lc_time     | ka_GE.UTF-8 | Sets the locale for formatting date and time values.

...

ka_GE.UTF-8 is Georgian locale.
When I run an SQL query ordering by a text or varchar field with 
Georgian Unicode data inside, the result is totally wrong.

Can somebody help me?
Thanks,
David Jorjoliani


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] pg_restore performance on solaris 10/6

2006-08-01 Thread Thomas Burns

Hi,

We recently started using a Sun x4200 with Solaris 10/6 (the release
from June 2006) and PostgreSQL 8.1.3.  I think I may need to do some
tuning, but have no idea what to change.  Using pg_restore to restore
our database literally takes 8 times longer on the x4200 than it does on
OS X (the x4200 hardware should be considerably faster -- it has better
drives, more RAM, faster processors and 4 cores vs 2 cores).  The loadavg
on the x4200 during the restore is extremely low -- about 0.01 (on OS X,
it would be more like 1.5 or 2.0).  Any idea what could be going on?

Thanks,
Tom Burns



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Unicode sorting problem

2006-08-01 Thread Martijn van Oosterhout
On Mon, Jul 31, 2006 at 09:34:08PM +0400, David Jorjoliani wrote:
 ka_GE.UTF-8 is Georgian locale.
 When run sql query with order of text or varchar field with 
 Georgian unicode data inside, result is totally wrong.
 Can somebody help me?

Please define wrong.

What OS? What version of OS? What version of Postgres?

It looks like a glibc system; does your system actually have that locale
defined? Do you get the same results from sort?

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Dynamic pgplsql triggers

2006-08-01 Thread Merlin Moncure

On 7/31/06, Worky Workerson [EMAIL PROTECTED] wrote:

I'm trying to trigger a whole bunch of partitions at once (initial DB
setup) using the same plpgsql trigger.  The trigger is basically the
merge trigger in the docs (i.e. UPDATE IF NOT FOUND RETURN NEW ...).

I need to use the TG_RELNAME variable within the UPDATE in the
trigger so that I can use the same function to trigger all of the
partitions (correct?), the problem is that I can't quite figure out
how.  I figure that I will have to use EXECUTE on a string that I
build up, right?  The problem that I'm having with this approach is
that some of the columns of NEW don't have a text conversion, and I'm
getting an error whenever the trigger fires.  Is there a way around
this and/or a better way to trigger a bunch of partitions with the
same function?


I don't think it's possible.  However, what is possible and achieves
roughly the same effect is to query the system catalogs (or
information schema) and, via dynamic SQL, cut trigger
functions/procedures by looping over the results of your query.  Non-dynamic
SQL will usually be a bit faster than dynamic as a bonus; the only
downside is that you are creating a lot of functions, albeit in an
easy-to-manage fashion.  If you are really clever, you can put your trigger
functions in a special schema for organizational purposes.

to do this the 'functional' way:

create or replace function create_trigger_for_table(table_name text,
schema_name text) returns void as
$$
 begin
   execute 'create or replace function ' -- and so forth
 end;
$$ language plpgsql;

and to invoke the function:

select create_trigger_for_table(table_name , schema_name )  from
information_schema.tables -- and so forth
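
A slightly more filled-in sketch of that idea (all names are purely
illustrative, quote_ident() is skipped for brevity, and the generated body
mirrors the merge trigger discussed later in this thread -- the point being
that each generated function contains static SQL against one partition):

CREATE OR REPLACE FUNCTION create_trigger_for_table(table_name text,
                                                    schema_name text)
RETURNS void AS $func$
BEGIN
    -- generate a trigger function whose UPDATE names the partition directly
    EXECUTE 'CREATE OR REPLACE FUNCTION ' || schema_name || '.' || table_name || '_merge() '
         || 'RETURNS trigger AS $body$ '
         || 'BEGIN '
         || '  UPDATE ' || schema_name || '.' || table_name
         || '   SET number_seen = number_seen + NEW.number_seen '
         || '   WHERE id = NEW.id AND content = NEW.content; '
         || '  IF FOUND THEN RETURN NULL; END IF; '
         || '  RETURN NEW; '
         || 'END; $body$ LANGUAGE plpgsql';

    -- and attach it to that partition
    EXECUTE 'CREATE TRIGGER ' || table_name || '_merge_trg '
         || 'BEFORE INSERT ON ' || schema_name || '.' || table_name
         || ' FOR EACH ROW EXECUTE PROCEDURE '
         || schema_name || '.' || table_name || '_merge()';
END;
$func$ LANGUAGE plpgsql;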

regards,
merlin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Dynamic pgplsql triggers

2006-08-01 Thread Worky Workerson

I have scanned the archives and found the following message from 2004
dealing with v7.4, however it doesn't solve all my problems:

http://archives.postgresql.org/pgsql-sql/2004-07/msg00208.php

Here is a brief example.  I have a table db with a merge trigger given
below, shamelessly stolen from Example 36-1 in the docs.  When an
insert occurs, the number_seen is updated if the id/content are the
same, otherwise the new record is inserted.

-- Note that id is not unique
CREATE TABLE db (
   id INTEGER,
   content BYTEA,
   number_seen INTEGER
);

CREATE OR REPLACE FUNCTION merge_db() RETURNS TRIGGER AS $$
 BEGIN
   UPDATE db SET number_seen = number_seen + NEW.number_seen
 WHERE id = NEW.id AND content = NEW.content;
   IF FOUND THEN
 --Update row
 RETURN NULL;
   END IF;

   RETURN NEW;
 END;
$$ LANGUAGE PLPGSQL;

CREATE TRIGGER db_merge_db BEFORE INSERT ON db FOR EACH ROW EXECUTE
PROCEDURE merge_db();

This works like a charm.  My 'db' table is getting rather large,
however, and I started to partition it based on the non-unique column
'id'

CREATE TABLE db_1 (
   CONSTRAINT partition_id_1 CHECK (id = 1)
) INHERITS (db);

-- ... and likewise for db_2 through db_100

However now I am at a loss as to how to create a single trigger
function that I can use to trigger all of the partition tables.
Basically, how do I set the db name to be dynamic?  I tried the naive
approach of just replacing 'db' with TG_RELNAME, i.e.:

   UPDATE TG_RELNAME SET number_seen = number_seen + NEW.number_seen
 WHERE id = NEW.id AND content = NEW.content;

but of course this gives me an error about $1, which is what I
figured, given that the table name can't be a parameter (right?):
'ERROR:  syntax error at or near $1 at character 9'

So then I tried to make the statement dynamic, i.e.:

   EXECUTE 'UPDATE ' || TG_RELNAME || '
 SET number_seen = number_seen + NEW.number_seen
 WHERE id = NEW.id AND content = NEW.content';

but this gives the error: 'ERROR:  NEW used in query that is not in a
rule'.  This seems a little confusing as NEW should be visible to
trigger functions, but I assume that it has something to do with the
EXECUTE and how the planner couldn't pre-plan the SQL.

So I plod on, and try and make all the NEW args dynamic, i.e.:

   EXECUTE 'UPDATE ' || TG_RELNAME || '
 SET number_seen = number_seen + ' || NEW.number_seen || '
 WHERE id = ' || NEW.id || ' AND content = ' || NEW.content;

However now I get the error: 'ERROR:  operator does not exist:  bytea
|| ip4'.  I think I understand what is going on ... that bytea doesn't
have a text representation, right?

So I'm not quite sure where to go from here.  How do I make the UPDATE
statement in the trigger function operate against the table on which
the trigger was fired, while at the same time passing in the values in
NEW?

Any help would be greatly appreciated.

Thanks!




On 7/31/06, Worky Workerson [EMAIL PROTECTED] wrote:

I'm trying to trigger a whole bunch of partitions at once (initial DB
setup) using the same plpgsql trigger.  The trigger is basically the
merge trigger in the docs (i.e. UPDATE IF NOT FOUND RETURN NEW ...).

I need to use the TG_RELNAME variable within the UPDATE in the
trigger so that I can use the same function to trigger all of the
partitions (correct?), the problem is that I can't quite figure out
how.  I figure that I will have to use EXECUTE on a string that I
build up, right?  The problem that I'm having with this approach is
that some of the columns of NEW don't have a text conversion, and I'm
getting an error whenever the trigger fires.  Is there a way around
this and/or a better way to trigger a bunch of partitions with the
same function?

Thanks!
-Worky



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Dynamic pgplsql triggers

2006-08-01 Thread Worky Workerson

I don't think it's possible.  However, what is possible and achieves
roughly the same effect is to query the system catalogs (or
information schema) and, via dynamic SQL, cut trigger
functions/procedures by looping over the results of your query.  Non-dynamic
SQL will usually be a bit faster than dynamic as a bonus; the only
downside is that you are creating a lot of functions, albeit in an
easy-to-manage fashion.  If you are really clever, you can put your trigger
functions in a special schema for organizational purposes.


Thanks for that hint.  I've been thinking about that ... I do similar
things to create and trigger all of the partitions.  And, since I have a
regular naming scheme for the partitions, I don't have to go to the catalogs.
It seems a little excessive, however, to have to create a different function
for each of the triggers when each one is basically the same.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Where did the compat-postgresql-libs rpm get to?

2006-08-01 Thread Karl O. Pinc

Hi,

Whatever happened to the rpm packaging of the 7.x libpq?
(Which enabled programs linked against the old libraries
to be used with an 8.x PostgreSQL.)

Last time I went looking for it I couldn't find it.

Thanks.

Karl [EMAIL PROTECTED]
Free Software:  You don't pay back, you pay forward.
 -- Robert A. Heinlein


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Where did the compat-postgresql-libs rpm get to?

2006-08-01 Thread Devrim GUNDUZ
Hello,

On Tue, 2006-08-01 at 14:51 +, Karl O. Pinc wrote:

 Whatever happened to the rpm packaging of the 7.x libpq?
 (Which enabled programs linked against the old libraries
 to be used with a 8.x. postgresql.) 

http://developer.postgresql.org/~devrim/rpms/compat/

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/



signature.asc
Description: This is a digitally signed message part


[GENERAL] Autovacuum help..

2006-08-01 Thread Sundar Narayanaswamy
Hi,

I need your help/suggestions with a problem I am facing related to autovacuum.

I am using PostgreSQL 8.1.2 through a JDBC connection. The connection is 
long lived (established when the application starts up and is closed only when 
the application is shutdown). 

I have enabled the autovacuum daemon and setup additional parameters (for 
instance, stats_row_level=on) as specified in the PostgreSQL documentation. 

In the database, I have a table that has a fairly high rate of inserts and 
deletes (~10 rows a second). The table is basically a FIFO queue that can have
a maximum of 800 entries. As new rows are added to the table, the oldest rows are 
deleted such that the table always has about 800 rows.

The problem I see is that the database size (disk usage) is continually 
increasing even though I have the autovacuum daemon enabled and the PostgreSQL 
log file indicates that the autovacuum daemon is processing the databases 
every minute as expected.

On digging in further, I noticed that the reltuples (in pg_catalog.pg_class) 
for the relevant table keeps increasing continually. I also noticed a large 
number of dead unremovable rows when I ran the vacuum analyze command. 

After shutting down my application, if I watch the reltuples, it continues to 
stay high until I run the analyze command (analyze verbose table_name) after 
which the reltuples drops to about 800 immediately. The analyze command output 
also indicates that the dead rows have been removed and I notice that the space
is reused for future inserts when I restart the application.

I am pretty sure that I don't have any transaction that is held open forever 
(the work flow is insert, commit, insert commit etc).

My question is: is autovacuum expected to work in situations like this, where I 
have a long-lived connection to the database? After I shut down my application, 
why am I required to run the analyze command before the dead rows are removed 
(autovacuum is not able to remove the dead rows until I run the analyze 
command)?

I'll appreciate your thoughts since I seem to be running out of things to try..

Thanks

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Autovacuum help..

2006-08-01 Thread Martijn van Oosterhout
On Tue, Aug 01, 2006 at 08:02:59AM -0700, Sundar Narayanaswamy wrote:
 I need your help/suggestions with a problem I am facing related to autovacuum.
 
 I am using PostgreSQL 8.1.2 through a JDBC connection. The connection is 
 long lived (established when the application starts up and is closed only 
 when 
 the application is shutdown). 

snip

 On digging in further, I noticed that the reltuples (in pg_catalog.pg_class) 
 for the relevant table keeps increasing continually. I also noticed a large 
 number of dead unremovable rows when I ran the vacuum analyze command. 

Well, you need to work out why they are unremovable. For example, do
you have any really long-lived open transactions? These are generally a
bad idea, for all sorts of reasons. If you don't commit occasionally,
none of your changes will be visible to anyone else.

 My question is, is autovacuum expected to work in situations like this where 
 I 
 have a long lived connection to the database ? After I shutdown my 
 application,

It's got nothing to do with connections and everything to do with open
transactions. I'd suggest you run a:

select * from pg_stat_activity ;

And look for messages like: IDLE in transaction.
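
For example, something along these lines (assuming stats_command_string is
enabled so that current_query is populated):

SELECT procpid, datname, usename, current_query, query_start
  FROM pg_stat_activity
 WHERE current_query LIKE '<IDLE> in transaction%';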

 why am required to run the analyze command before the dead rows are removed 
 (autovacuum is not able to remove the dead rows until I run the analyze 
 command) ?

The stats aren't updated until the rows are actually removed. Eventually
autovacuum would have done the analyze for you...

Hope this helps,

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


[GENERAL] Performance/Issues with CMP and JBoss

2006-08-01 Thread Andy Dale
Hi,

We currently have a JBoss web application that persists a byte array it
receives (using the EJB persistence API), and then tries to read it from the
database again for further parsing. The application works really well with the
default Hypersonic datasource, but it will not work correctly when using
Postgres as the datasource.

The current problem we seem to have is that the data is persisted OK (or at
least it seems to be in there with pgAdmin), but cannot be read back out of the
database all the time (in fact it fails about 90% of the time). The current
behaviour of the application suggests it is trying to read it back out of the
database (using EntityManager.find()) before it has really been saved, and thus
fails to find the data. Do I have to tweak some settings in the postgresql.conf
file? I have tried turning off fsync (I do not want to do this, for reliability
reasons) and it performed far better. Can anyone advise me on the changes I need
to make to speed up the inserting of data? I know that turning autocommit off is
supposed to increase performance.

Thanks in advance,
Andy



Re: [GENERAL] Autovacuum help..

2006-08-01 Thread John Purser
On Tue, 1 Aug 2006 08:02:59 -0700 (PDT)
Sundar Narayanaswamy [EMAIL PROTECTED] wrote:

 Hi,
 
 I need your help/suggestions with a problem I am facing related to
 autovacuum.
 
 I am using PostgreSQL 8.1.2 through a JDBC connection. The connection
 is long lived (established when the application starts up and is
 closed only when the application is shutdown). 
 
 I have enabled the autovacuum daemon and setup additional parameters
 (for instance, stats_row_level=on) as specified in the PostgreSQL
 documentation. 
 
 In the database, I have a table that has a fairly high rate of
 inserts and deletes (~10 rows a second). The table is basically a
 FIFO queue that can have a maximum of 800 entries. As new rows are
 added to the table, oldest rows are deleted such that the table
 always about 800 rows.
 
 The problem I see is that the database size (disk usage) is
 continually increasing even though I have the autovacuum daemon
 enabled and the PostgreSQL log file indicates that the autovacuum
 daemon is processing the databases every minute as expected.
 
 On digging in further, I noticed that the reltuples (in
 pg_catalog.pg_class) for the relevant table keeps increasing
 continually. I also noticed a large number of dead unremovable rows
 when I ran the vacuum analyze command. 
 
 After shutting down my application, if I watch the reltuples, it
 continues to stay high until I run the analyze command (analyze
 verbose table_name) after which the reltuples drops to about 800
 immediately. The analyze command output also indicates that the dead
 rows have been removed and I notice that the space is reused for
 future inserts when I restart the application.
 
 I am pretty sure that I don't have any transaction that is held open
 forever (the work flow is insert, commit, insert commit etc).
 
 My question is, is autovacuum expected to work in situations like
 this where I have a long lived connection to the database ? After I
 shutdown my application,
 
 why am required to run the analyze command before the dead rows are
 removed (autovacuum is not able to remove the dead rows until I run
 the analyze command) ?
 
 I'll appreciate your thoughts since I seem to be running out of
 things to try..
 
 Thanks
 
 __
 Do You Yahoo!?
 Tired of spam?  Yahoo! Mail has the best spam protection around 
 http://mail.yahoo.com 
 
 ---(end of
 broadcast)--- TIP 6: explain analyze is your
 friend

Sundar,

Take a look at the documentation at:
http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM

There are a lot of configuration options that affect the autovacuum
daemon.

John Purser

-- 
You will pay for your sins.  If you have already paid, please disregard
this message.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_restore performance on solaris 10/6

2006-08-01 Thread Vivek Khera


On Jul 31, 2006, at 3:45 PM, Thomas Burns wrote:


our database literally takes 8 times longer on the x4200 as it does on
OS X (the x4200 hardware should be considerably faster -- it has better


for a restore like this, bump up the value of checkpoint_segments to  
some large value (I use 256 -- but I have a dedicated partition for  
the pg_xlog directory which gets big).


also, the normal tuning of the shared memory settings applies.  how  
much RAM do you have and what kind of disks are you using?
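
A rough postgresql.conf sketch of the kind of settings meant here -- the
actual values depend on available RAM and are only illustrative:

checkpoint_segments  = 256       # fewer, larger checkpoints during the restore
shared_buffers       = 50000     # ~400 MB with 8 kB pages
maintenance_work_mem = 524288    # 512 MB (in kB); speeds up index builds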





smime.p7s
Description: S/MIME cryptographic signature


[GENERAL] prepare, execute oids

2006-08-01 Thread phazon
Hello, I have this kind of query:

PREPARE preparedInsert (varchar) AS INSERT INTO my_table (my_field) VALUES ($1);

and I use it with:

EXECUTE preparedInsert ('test');

I just want to get the OID of the inserted row. Does anyone know how to do it?

Thanks,
Phazon


Re: [GENERAL] Postgres on 64bit Windows Server with WOW64

2006-08-01 Thread Mont Rothstein
Thanks for the info.  Good to hear from someone that has actually used WOW64
successfully.

Thanks,
-Mont

On 8/1/06, DANTE Alexandra [EMAIL PROTECTED] wrote:
 Hello Mont, hello List,

 I am the person who wrote these threads, and since then I haven't found a
 solution to compile PostgreSQL on 64-bit Windows... Instead, I used a 32-bit
 release of PostgreSQL on 64-bit Windows.
 [...]



Re: [GENERAL] Performance/Issues with CMP and JBoss

2006-08-01 Thread Douglas McNaught
Andy Dale [EMAIL PROTECTED] writes:

 The current problem we seem to have is that the data is persisted ok
 (or at least it seems to be in there with pgadmin), but cannot be
 read back out of the database all the time (in fact for about 90% of
 the time), the current behaviour of the application suggests it is
 trying to read it back out of the database (using
 EntityManager.find()) before it has really been saved, and thus
 fails to find the data.  Do i have to tweak some settings in the
 postgres.conf file ? i have tried turning off fsync (i do not want
 to do this, for reliability reasons) and it performed far better.
 Can anyone advise me on the changes i need to make to speed up the
 inserting of data, i know that turning autocommit off is supposed to
 increase performance.

This is almost certainly a problem with your persistence layer rather
than with Postgres.  If you can see the data with PGAdmin then it's in
the database.  It may be that the transaction that saves the object is
not committing quickly, and so other connections don't see the object
until the commit happens.  But that's not the fault of Postgres.

-Doug

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] LISTEN considered dangerous

2006-08-01 Thread Flemming Frandsen
I have an application that does aggressive caching of data pulled from 
the database; it even keeps the objects cached between transactions.


Normally this works very well and when the cache is warmed up about 90% 
of the database time is saved.


However that leaves the problem of how to notice that my cached objects 
have turned stale, luckily pg has the listen/notify feature so I have 
triggers on all tables that do a notify, as you do.


However that just doesn't work, because listen is broken, allow me to 
illustrate, here A and B are two clients:


A: BEGIN
A: SELECT * FROM foo and cache the result.
A: LISTEN foochange
B: BEGIN
B: update foo
B: NOTIFY foochange
B: COMMIT
A: COMMIT

When A continues with another transaction it will never get the event 
from B and thus will keep using the cached foo data; clearly this is not 
what you'd want.


The workaround is to commit after the listen, but that too is broken 
because then you'd commit all the changes up to that point, also not a 
desirable situation.


The only real way to work around the problem is to LISTEN to every 
single object that could ever be interesting to cache and commit right 
after connecting the first time.
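
A sketch of that ordering (the channel name is illustrative):

-- once, right after connecting:
BEGIN;
LISTEN foochange;
COMMIT;             -- the listener is registered from this point on

-- then, for each caching transaction:
BEGIN;
SELECT * FROM foo;  -- cache the result
COMMIT;
-- any NOTIFY foochange committed after the LISTEN above will be delivered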


The reason for this is that LISTEN is implemented by inserting into a 
table that's under transaction control (well naturally), so the actual 
listening doesn't start until the transaction has been committed.


I'm quite lucky I didn't get any corrupted data from this gotcha, but I 
did get some annoyed users, so let this be a warning to other pg users.



The correct behaviour, IMHO, would be for the LISTEN to take effect from 
the beginning of the transaction in which it was issued, once that 
transaction commits.


To allow this the entire implementation needs to change so all events 
from all transactions are stored until all connections with earlier 
started transactions have started new transactions.


This way we could even have wildcard listens; imagine doing a listen % 
and getting all the generated events :)


--
 Regards Flemming Frandsen - http://dion.swamp.dk - YAPH


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] money type depreciated?

2006-08-01 Thread Karen Hill
I read in the documentation that the money type is deprecated.  It
says to use the to_char function and NUMERIC/decimal instead.  Why was
the money type deprecated when it was so useful?  What would be the
best way to use to_char and numeric to replace that type, since I don't
want to be using a deprecated data type?
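
For reference, a minimal sketch of what the documentation seems to suggest
(the table and format picture are only illustrative):

-- store amounts as plain numeric
CREATE TABLE orders (
    id    serial PRIMARY KEY,
    price numeric(12,2) NOT NULL
);

-- format only on output; a currency symbol can be added via the locale-aware
-- 'L' template pattern or by the application
SELECT to_char(price, 'FM999,999,990.00') AS price_display FROM orders;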

regards,


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] LISTEN considered dangerous

2006-08-01 Thread Karsten Hilbert
On Tue, Aug 01, 2006 at 07:16:39PM +0200, Flemming Frandsen wrote:

 This way we could even have wildcard listens, imagine doing a listen % 
 and getting all the generated events:)
That'd be awesome. Along with a data field in the listen
structure, please :-)

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Can you run out of oids?

2006-08-01 Thread Chris Hoover
Somewhat silly question, but is it possible to run out of OIDs?

Since we upgraded to 8.1.3, I noticed that I can create tables without an oid
column. I am wondering if I should consider trying to rebuild the existing
tables to be built without OIDs.

If it is possible to run out of OIDs, how can you tell how close you are to
the edge?

Thanks,
Chris

RH 4.0
PG 8.1.3


Re: [GENERAL] Can you run out of oids?

2006-08-01 Thread Martijn van Oosterhout
On Tue, Aug 01, 2006 at 02:02:18PM -0400, Chris Hoover wrote:
 Somewhat silly question, but is it possible to run out of OID's?

Nope. When you reach 4 billion, they just start again at zero.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


[GENERAL] Where do Tcl questions go?

2006-08-01 Thread Carlo Stonebanks








I have been trying to figure out where to put my PL/Tcl questions, and where
the people most knowledgeable about that topic may be, either on these mailing
lists or elsewhere.

The Postgres docs make reference to scripts called pltcl_loadmod,
pltcl_listmod and pltcl_delmod, but I can't find them anywhere in my Windows
8.1.4 installation. It looks like I need them, as well as some more support
for advanced Tcl programming topics such as the unknown command and using
Tcl as a procedural language. The existing documentation appears out of date.
Where can I go for all of this?

Carlo








Re: [GENERAL] Can you run out of oids?

2006-08-01 Thread Michael Fuhr
On Tue, Aug 01, 2006 at 02:02:18PM -0400, Chris Hoover wrote:
 Somewhat silly question, but is it possible to run out of OID's?

It depends on what you mean by run out.  As the FAQ and documentation
mention, OIDs wrap around and aren't guaranteed to be unique.

http://www.postgresql.org/docs/faqs.FAQ.html#item4.12
http://www.postgresql.org/docs/8.1/interactive/datatype-oid.html
http://www.postgresql.org/docs/8.1/interactive/sql-createtable.html

 Since we upgraded to 8.1.3, I noticed that I can create tables without an
 oid column.  I am wondering if I should consider trying to rebuild the
 existing tables to be built without OID.

Avoid using OIDs; if you need a unique identifier use a serial or
bigserial column.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] prepare, execute oids

2006-08-01 Thread Michael Fuhr
On Tue, Aug 01, 2006 at 06:19:29PM +0200, phazon wrote:
 I juste want to get the OID of the line inserted. Does anyone know how to do
 it ?

It depends on the client interface.  What interface are you using?

OIDs are deprecated as row identifiers; the preferred method is to
use a sequence (serial column).  To get a sequence's value you can
use currval() or lastval() (the latter available in 8.1).
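
A minimal sketch of the sequence approach (the table name is illustrative):

CREATE TABLE items (
    id   serial PRIMARY KEY,
    name text
);

INSERT INTO items (name) VALUES ('test');

-- the id just assigned in this session:
SELECT currval('items_id_seq');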

-- 
Michael Fuhr

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Autovacuum help..

2006-08-01 Thread Sundar Narayanaswamy
 Well, you need to work out why they are unremovable. For example, do
 you have any really long lived open transactions. These are generally a
 bad idea, for all sorts of reasons. If you don't commit occasionally,
 none of your changes will be visible to anyone else.

I have multiple databases on the server and a JDBC connection to
each of the database from my app. Only couple of the databases have continuous
inserts/delete activity and they usually stay in idle state.
The other databases, however, don't have many updates happening,
but mostly selects. As I understand it, when the first select is executed
after a commit or rollback, the state changes to Idle in Transaction,
and it goes back to the idle state when the next commit/rollback happens.
The databases that mostly have selects on them therefore stay in
the Idle in Transaction state most of the time.

Now, in this situation, if some databases stay in Idle in transaction,
would the dead rows be unremovable from other databases (that are in idle
state) as well ? In other words, should ALL of the databases/connections
in the server be in idle state for the autovacuum to be able to remove dead 
rows in any database ?

 
  My question is, is autovacuum expected to work in situations like this
 where I 
  have a long lived connection to the database ? After I shutdown my
 application,
 
 It's got nothing to do with connections and everything to do with open
 transactions. I'd suggest you run a:
 
 select * from pg_stat_activity ;
 
 And look for messages like: IDLE in transaction.
 

I tried this, but I see command string not enabled as the current query.
I searched for docs on this view (pg_stat_activity), but couldn't find
much. Could you help me enable it so that I can see the current query
in this view? I did find that some databases are idle in transaction from
the ps -afe command. 

  why am required to run the analyze command before the dead rows are
 removed 
  (autovacuum is not able to remove the dead rows until I run the analyze 
  command) ?
 
 The stats arn't updated until the rows are actually removed. Eventually
 autovacuum would have done the analyze for you...
 
 Hope this helps,
 

Thanks again. I am wondering why the state changes to idle in
transaction when a query is executed. It would be nice if that happened only
when a real change is made (a transaction starts) to the database and not when
a select query occurs. 


 Have a nice day,


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] proper use of array datatype

2006-08-01 Thread Eric Andrews

hello all,

I am not much of a schema designer and have a general questoin about
the proper use of the array datatype. In my example, I have
destinations, and destinations can have multiple boxes, and inside
those boxes are a set of contents. what I want to do is search and
basically mine data from the content sets. do I use an array
datatype for the content column, or is there a better more efficient
way to go about this?


From http://www.postgresql.org/docs/8.1/interactive/arrays.html


Tip:  Arrays are not sets; searching for specific array elements may
be a sign of database misdesign. Consider using a separate table with
a row for each item that would be an array element. This will be
easier to search, and is likely to scale up better to large numbers of
elements.

thanks,
eric

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Can you run out of oids?

2006-08-01 Thread Joshua D. Drake

Martijn van Oosterhout wrote:

On Tue, Aug 01, 2006 at 02:02:18PM -0400, Chris Hoover wrote:

Somewhat silly question, but is it possible to run out of OID's?


Nope. When you reach 4 billion, they just start again at zero.


O.k., but there is a catch: if you are using them, they can and will 
wrap. So don't use OIDs.


Joshua D. Drake




Have a nice day,




   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] LISTEN considered dangerous

2006-08-01 Thread Ian Harding

On 8/1/06, Flemming Frandsen [EMAIL PROTECTED] wrote:

I have an application that does aggresive caching of data pulled from
the database, it even keeps the objects cached between transactions.

Normally this works very well and when the cache is warmed up about 90%
of the database time is saved.

However that leaves the problem of how to notice that my cached objects
have turned stale, luckily pg has the listen/notify feature so I have
triggers on all tables that do a notify, as you do.

However that just doesn't work, because listen is broken, allow me to
illustrate, here A and B are two clients:

A: BEGIN
A: SELECT * FROM foo and cache the result.
A: LISTEN foochange
B: BEGIN
B: update foo
B: NOTIFY foochange
B: COMMIT
A: COMMIT

From the docs:


NOTIFY interacts with SQL transactions in some important ways.
Firstly, if a NOTIFY is executed inside a transaction, the notify
events are not delivered until and unless the transaction is
committed. This is appropriate, since if the transaction is aborted,
all the commands within it have had no effect, including NOTIFY. But
it can be disconcerting if one is expecting the notification events to
be delivered immediately.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] proper use of array datatype

2006-08-01 Thread Nikolay Samokhvalov

On 8/1/06, Eric Andrews [EMAIL PROTECTED] wrote:

... and inside
those boxes are a set of contents. what I want to do is search and
basically mine data from the content sets. do I use an array
datatype for the content column, or is there a better more efficient
way to go about this?


What kind of content?
Is it possible to design a regular table for it (i.e., is the set of properties
clear a priori)?

BTW, there are some interesting contrib modules: contrib/hstore, which
lets you work with structures similar to Perl's hashes, and contrib/ltree,
which provides tree-like structures. Both are based on GiST, which means
index support and, therefore, good performance. Consider using these
extensions.

--
Best regards,
Nikolay

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Autovacuum help..

2006-08-01 Thread Martijn van Oosterhout
On Tue, Aug 01, 2006 at 11:48:04AM -0700, Sundar Narayanaswamy wrote:
 Now, in this situation, if some databases stay in Idle in transaction,
 would the dead rows be unremovable from other databases (that are in idle
 state) as well ? In other words, should ALL of the databases/connections
 in the server be in idle state for the autovacuum to be able to remove dead 
 rows in any database ?

You'll have to check the docs, but it basically comes down to that
VACUUM can only remove rows that are older than the oldest transaction.
Whether this is per database or per cluster, I'm not sure...

It's not that IDLE in transaction is bad in itself. It's that if you
started a transaction three hours ago, no tuples deleted in the last
three hours can be truly removed because that transaction can see them.

 I tried this, but I see command string not enabled as the current query.
 I searched for docs on this view (pg_stat_activity), but couldn't find
 much. Could you help me to enable it so that I can see the current query
 in this view ? I found that some databases are in idle in transaction from
 the
 ps -afe command. 

You have to set stats_command_string=on in the server config. But the
output from ps is good also.
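
That is, a sketch of the relevant postgresql.conf settings:

stats_start_collector = on    # on by default; the collector must be running
stats_command_string  = on    # populates current_query in pg_stat_activity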
 
 Thanks again. I am wondering as to why the state changes to Transaction in
 idle when a query is executed. It'll be nice if that happens only when
 a real change is made (transaction starts) to the database and not when
 a select query occurs. 

This makes no sense. A select query is also a query affected by
transactions. In the example above, if you're in a transaction started
three hours ago, a SELECT will be looking at a version of the database
as it was three hours ago. And select queries can change the database
too; consider nextval(), for example.

The real question is, why are you keeping the transactions open? If
they don't need to be, just commit them when you go idle and everything
can be cleaned up normally.

hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] money type depreciated?

2006-08-01 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2006-08-01 10:18:45 -0700:
 I read in the documentation that the money type is depreciated.  It
 says to use the  to_char function and NUMERIC/decimal instead.  Why was
 the money type depreciated when it was so useful?  How would be the
 best way to use to_char and numeric to replace that type since I don't
 want to be using a depreciated data type.

Use a custom type; IIRC Martijn van Oosterhout has something.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] money type depreciated?

2006-08-01 Thread Martijn van Oosterhout
On Tue, Aug 01, 2006 at 10:37:35PM +, Roman Neuhauser wrote:
 # [EMAIL PROTECTED] / 2006-08-01 10:18:45 -0700:
  I read in the documentation that the money type is depreciated.  It
  says to use the  to_char function and NUMERIC/decimal instead.  Why was
  the money type depreciated when it was so useful?  How would be the
  best way to use to_char and numeric to replace that type since I don't
  want to be using a depreciated data type.
 
 Use a custom type; IIRC Martijn van Oosterhout has something.

Indeed, I wrote a module called taggedtypes that provided a way to
create custom types based on a base type and a set of tags. One of the
uses is to take a numeric base type and have currency names as the
tags.

The end result is that you have a currency type that works like a
numeric, but will complain if you try to add values of different
currencies. Of course the infrastructure is there to allow you to do
automatic conversion, and such things.

Anyway, check it out on the website. It's a bit terse, but it might
suit your needs. There's an example included in the tarball.

http://svana.org/kleptog/pgsql/taggedtypes.html

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] proper use of array datatype

2006-08-01 Thread Eric Andrews

On 8/1/06, Nikolay Samokhvalov [EMAIL PROTECTED] wrote:

On 8/1/06, Eric Andrews [EMAIL PROTECTED] wrote:
 ... and inside
 those boxes are a set of contents. what I want to do is search and
 basically mine data from the content sets. do I use an array
 datatype for the content column, or is there a better more efficient
 way to go about this?

What kind of content?
Is it possible to design regular table for it (a set of properties is
clear a priori)?


A set of ID numbers, and no, not really. The boxes are unique to a
destination and the content list/set is unique to the box. I'd have a
bajillion tables :(


BTW, there is some interesting constrib module - contrib/hstore - that
allows to work with structures similar to perl's hashes. And, one more
- contrib/ltree - that provides the tree-like structures. Both are
based on GiST - that means support of index and, therefore, good
perfomance. Consider using these extensions.

I'd like to stay away from add-ons if I can avoid it, you know?

thanks,
eric

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] proper use of array datatype

2006-08-01 Thread Erik Jones

Eric Andrews wrote:

hello all,

I am not much of a schema designer and have a general question about
the proper use of the array datatype. In my example, I have
destinations, and destinations can have multiple boxes, and inside
those boxes are a set of contents. What I want to do is search and
basically mine data from the content sets. Do I use an array
datatype for the content column, or is there a better, more efficient
way to go about this?

From http://www.postgresql.org/docs/8.1/interactive/arrays.html

Tip:  Arrays are not sets; searching for specific array elements may
be a sign of database misdesign. Consider using a separate table with
a row for each item that would be an array element. This will be
easier to search, and is likely to scale up better to large numbers of
elements.
Yeah, I've never considered arrays to be good data types for columns.
One possible solution to what (I think) you're trying to do is to have
a text or varchar column in which you store multiple values separated by
some delimiter (such as ::) that will not occur in the actual option 
names.  Then you can write rules to handle 
inserting/updating/selecting/deleting options (which would boil down to 
string operations).  Or, you could just do the string manipulation 
directly in your queries, whichever is easiest for you.
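A hedged sketch of that delimited-text variant (the table, column, and the
'::' delimiter are just examples):

CREATE TABLE box (
    box_id   integer PRIMARY KEY,
    contents text               -- e.g. '1042::1043::2001'
);

-- String manipulation directly in a query: boxes whose contents include 1043.
SELECT box_id
FROM   box
WHERE  '1043' = ANY (string_to_array(contents, '::'));

As the documentation tip above warns, this is harder to index and constrain
than a separate row-per-item table.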


Here's a link to an article that discusses using inheritance for dynamic
content questionnaires (but it may be overkill for what you need):


http://www.varlena.com/GeneralBits/110.php

--
erik jones [EMAIL PROTECTED]
software development
emma(r)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] proper use of array datatype

2006-08-01 Thread Reece Hart




Eric Andrews wrote:
 I am not much of a schema designer and have a general question about
 the proper use of the array datatype. In my example, I have
 destinations, and destinations can have multiple boxes, and inside
 those boxes are a set of contents. What I want to do is search and
 basically mine data from the content sets. 

I would use arrays exclusively for data sets for which each datum is meaningless by itself (for example, a single coordinate in 3D, although there are better ways to handle points in postgresql). I would recommend against using arrays for any data you wish to mine, and instead recast these has-a relationships as many-to-one joins across at least two tables. For example, a row from the table destination has-a (joins to) rows from boxes, and a box has-a (joins to) contents.

The same argument goes for a similar representation such as concatenated values in a text field. The fundamental principle is that it's relatively easy to join separate data into a set of values or a concatenated list, but it's quite cumbersome to turn a set of values back into easily searchable data (i.e., it's often expensive to peek inside the structure of the data for a single value). Furthermore, it's difficult or impossible to write check or foreign key constraints on data within such a structure.

-Reece





-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0
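A minimal sketch of the normalized layout described above (all names are
illustrative):

CREATE TABLE destination (
    dest_id serial PRIMARY KEY,
    name    text NOT NULL
);

CREATE TABLE box (
    box_id  serial  PRIMARY KEY,
    dest_id integer NOT NULL REFERENCES destination
);

CREATE TABLE box_content (
    box_id  integer NOT NULL REFERENCES box,
    item_id integer NOT NULL,
    PRIMARY KEY (box_id, item_id)
);

-- Mining the content sets is then plain SQL, e.g. every destination holding
-- item 1043:
SELECT DISTINCT d.name
FROM   destination d
JOIN   box b         ON b.dest_id = d.dest_id
JOIN   box_content c ON c.box_id  = b.box_id
WHERE  c.item_id = 1043;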








Re: [GENERAL] LISTEN considered dangerous

2006-08-01 Thread Gregory Stark
Ian Harding [EMAIL PROTECTED] writes:

  However that just doesn't work, because listen is broken, allow me to
  illustrate, here A and B are two clients:
 
  A: BEGIN
  A: SELECT * FROM foo and cache the result.
  A: LISTEN foochange
  B: BEGIN
  B: update foo
  B: NOTIFY foochange
  B: COMMIT
  A: COMMIT
 
From the docs:
 
 NOTIFY interacts with SQL transactions in some important ways.
 Firstly, if a NOTIFY is executed inside a transaction, the notify
 events are not delivered until and unless the transaction is
 committed. This is appropriate, since if the transaction is aborted,
 all the commands within it have had no effect, including NOTIFY. But
 it can be disconcerting if one is expecting the notification events to
 be delivered immediately.

Note that he's not complaining about when the NOTIFY takes effect. He's
complaining about when the LISTEN takes effect.

I haven't used LISTEN/NOTIFY myself yet and I do indeed find the behaviour he
shows somewhat surprising. Normally in read-committed mode uncommitted
transactions are affected by other transactions when they commit. In this case
the uncommitted LISTEN is not being affected by the committed NOTIFY.




-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Christopher Browne
Martha Stewart called it a Good Thing when Carlo Stonebanks [EMAIL 
PROTECTED] wrote:
 I am interested in finding out a non-religious answer to which
 procedural language has the richest and most robust implementation
 for Postgres. C is at the bottom of my list because of how much
 damage runaway code can cause. I also would like a solution which is
 platform-independent; we develop on Windows but may deploy on Linux.

You mean for implementing stored procedures?

I'd say that the answer varies depending on what the stored proc is
for.

- If it needs to do text munging, then one of {Perl|Python|Tcl} seems
  appropriate; they draw in big libraries of text-munging code

- If you're writing code that selects data from various tables based
  on the inputs, then pl/pgsql tends to be the natural answer

- C is needed when you need deep engine access that can't be gotten
  any other way

- Untrusted Perl/Tcl are nifty if you need access to the rich sets of
  external libraries

- If you have some code in Java that you'd want to run in the DB
  server, then one of the pl/Java systems may be for you

It doesn't seem overly flameworthy to me.

Except for the cases where you *must* use C, you can usually
accomplish things in the wrong language, but there are likely to be
drawbacks...

 - Doing funky string munging using the SQL functions available in
   pl/pgsql is likely to be painful;

 - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
   requires having an extra level of function manipulations that
   won't be as natural as straight pl/pgsql.
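
A small, hedged illustration of the "selects data from various tables based
on the inputs" case where pl/pgsql reads most naturally (table and column
names are invented):

CREATE FUNCTION account_balance(p_account integer) RETURNS numeric AS $$
DECLARE
    v_balance numeric;
BEGIN
    -- The query is written inline, exactly as it would be typed in psql.
    SELECT sum(amount) INTO v_balance
    FROM   ledger
    WHERE  account_id = p_account;

    RETURN coalesce(v_balance, 0);
END;
$$ LANGUAGE plpgsql STABLE;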
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').
http://linuxdatabases.info/info/postgresql.html
Q: Are the SETQ expressions used only for numerics?
A: No, they can also be used with symbolics (Fig.18).
-- Ken Tracton, Programmer's Guide to Lisp, page 17.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Ian Harding

On 8/1/06, Christopher Browne [EMAIL PROTECTED] wrote:

Martha Stewart called it a Good Thing when Carlo Stonebanks [EMAIL 
PROTECTED] wrote:
 I am interested in finding out a non-religious answer to which
 procedural language has the richest and most robust implementation
 for Postgres. C is at the bottom of my list because of how much
 damage runaway code can cause. I also would like a solution which is
 platform-independent; we develop on Windows but may deploy on Linux.






 - Doing funky string munging using the SQL functions available in
   pl/pgsql is likely to be painful;

 - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
   requires having an extra level of function manipulations that
   won't be as natural as straight pl/pgsql.


Another important distinguishing characteristic is whether it supports
set returning functions.  I think only plpgsql does right now.
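
For reference, a pl/pgsql set returning function uses a SETOF return type and
RETURN NEXT; a hedged sketch (the open_item table is made up):

CREATE TABLE open_item (item_id serial PRIMARY KEY, due_date date);

CREATE FUNCTION overdue_items() RETURNS SETOF open_item AS $$
DECLARE
    r open_item;
BEGIN
    FOR r IN SELECT * FROM open_item WHERE due_date < current_date LOOP
        RETURN NEXT r;   -- append one row to the result set
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Called as:  SELECT * FROM overdue_items();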

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Merlin Moncure

On 8/1/06, Ian Harding [EMAIL PROTECTED] wrote:

On 8/1/06, Christopher Browne [EMAIL PROTECTED] wrote:
 Martha Stewart called it a Good Thing when Carlo Stonebanks [EMAIL 
PROTECTED] wrote:
  I am interested in finding out a non-religious answer to which
  procedural language has the richest and most robust implementation
  for Postgres. C is at the bottom of my list because of how much
  damage runaway code can cause. I also would like a solution which is
  platform-independent; we develop on Windows but may deploy on Linux.




  - Doing funky string munging using the SQL functions available in
pl/pgsql is likely to be painful;

  - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
requires having an extra level of function manipulations that
won't be as natural as straight pl/pgsql.

Another important distinguishing characteristic is whether it supports
set returning functions.  I think only plpgsql does right now.


and C, and SQL ;)

In fact, SQL functions make the best SRFs because they are fast,
basically as fast as a query, but they can also be called like this:

select sql_func();             -- works!
select plpgsql_func();         -- bad
select * from plpgsql_func();  -- works, but the other form is nice in some
                               -- situations

merlin
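
For comparison, a hedged sketch of the kind of SQL-language SRF described
above (the table and function names are made up):

CREATE TABLE app_user (user_id integer PRIMARY KEY, active boolean NOT NULL);

CREATE FUNCTION active_user_ids() RETURNS SETOF integer AS $$
    SELECT user_id FROM app_user WHERE active;
$$ LANGUAGE sql STABLE;

SELECT active_user_ids();          -- allowed for a SQL-language SRF
SELECT * FROM active_user_ids();   -- also works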

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Can you run out of oids?

2006-08-01 Thread Tom Lane
Chris Hoover [EMAIL PROTECTED] writes:
 Since we upgraded to 8.1.3, I noticed that I can create tables without an
 oid column.  I am wondering if I should consider trying to rebuild the
 existing tables to be built without OID.

As things are currently set up, a table that's uselessly using OIDs
isn't going to have any serious impact on any other table.  It might be
worth doing ALTER TABLE ... SET WITHOUT OIDS just to save the microseconds
required to generate an OID for each insert --- but I don't see another
reason.
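
For instance (the table name is made up):

ALTER TABLE order_line SET WITHOUT OIDS;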

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Merlin Moncure

On 8/1/06, Christopher Browne [EMAIL PROTECTED] wrote:

Martha Stewart called it a Good Thing when Carlo Stonebanks [EMAIL 
PROTECTED] wrote:
 I am interested in finding out a non-religious answer to which
 procedural language has the richest and most robust implementation
 for Postgres. C is at the bottom of my list because of how much
 damage runaway code can cause. I also would like a solution which is
 platform-independent; we develop on Windows but may deploy on Linux.


my take:
C:
You can probably get by without doing any C. Most (but not quite all)
of the things you would do via C are exposed in libraries.  One thing you
can do with C, for example, is invoke a function via its OID, manually
supplying parameters, to make callbacks for procedures.  You
can also dump core on your backend. Good luck!

pl/pgsql:
You do not know PostgreSQL if you do not know pl/pgsql. Period. Ideal
for data processing and all sorts of things.  All queries are first
class in the code (except for dynamic SQL), which in my estimation
cuts code size, defect rate, and development time about 75% for
typical database-type stuff.  Just be warned: after you learn it you
will never want to use another database ever again, I'm not kidding.
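
The "dynamic SQL" exception mentioned above looks roughly like this: a query
assembled as a string has to go through EXECUTE rather than being written
inline (the function is only a sketch):

CREATE FUNCTION row_count(p_table text) RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- Built at runtime, so it cannot be an inline query.
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(p_table) INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;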

pl/perl, etc:
Not much to add beyond what Chris Browne said: great for text
processing or library support.

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Carlo Stonebanks
Well, the surprise for me is: this is exactly what I thought! I actually do 
have a lot of string manipulation to do, but I am the only one on the team 
with Tcl experience. For the sake of other developers I thought that the 
plPHP project would be interesting, but I don't get the impression that it 
is as well-developed as plTcl. Does anyone know anything about it?

Also, does anyone know why plTcl was taken out of the core distribution?

Carlo


Ian Harding [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On 8/1/06, Christopher Browne [EMAIL PROTECTED] wrote:
 Martha Stewart called it a Good Thing when Carlo Stonebanks 
 [EMAIL PROTECTED] wrote:
  I am interested in finding out a non-religious answer to which
  procedural language has the richest and most robust implementation
  for Postgres. C is at the bottom of my list because of how much
  damage runaway code can cause. I also would like a solution which is
  platform-independent; we develop on Windows but may deploy on Linux.




  - Doing funky string munging using the SQL functions available in
pl/pgsql is likely to be painful;

  - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
requires having an extra level of function manipulations that
won't be as natural as straight pl/pgsql.

 Another important distinguishing characteristic is whether it supports
 set returning functions.  I think only plpgsql does right now.




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Joshua D. Drake



 - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
   requires having an extra level of function manipulations that
   won't be as natural as straight pl/pgsql.


Another important distinguishing characteristic is whether it supports
set returning functions.  I think only plpgsql does right now.


Actually no. Plperl, plphp, plruby and I believe even pl/Tcl support set 
returning functions.


Sincerely,

Joshua D. Drake








--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Joe Conway

Joshua D. Drake wrote:



 - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
   requires having an extra level of function manipulations that
   won't be as natural as straight pl/pgsql.



Another important distinguishing characteristic is whether it supports
set returning functions.  I think only plpgsql does right now.



Actually no. Plperl, plphp, plruby and I believe even pl/Tcl support set 
returning functions.


and so does PL/R
 ;-)

Joe


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] plTcl - how to create proc/function libraries

2006-08-01 Thread Carlo Stonebanks
From the plTcl docs it appears the way for me to create function/procedure 
libraries for plTcl is by implementing the unknown command. However, my 
8.1.4 Windows-based installation doesn't seem to jibe with what the docs say 
I should expect:



PL/Tcl has a special support for things often used. It recognizes two magic 
tables, pltcl_modules and pltcl_modfuncs.





The docs don't mention where these should be, or how to create them. Other 
docs refer to scripts called: pltcl_loadmod, pltcl_listmod and pltcl_delmod 
but they are nowhere to be found. Can anyone enlighten me?





To enable this behavior, the PL/Tcl call handler must be compiled 
with -DPLTCL_UNKNOWN_SUPPORT set.





I have a Windows-based server, and I am assuming the pre-compiled library 
file pltcl.dll is supporting it - no compiling option there. How do I find 
out if it was compiled with -DPLTCL_UNKNOWN_SUPPORT set?





TIA!



Carlo





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] plTcl - how to create proc/function libraries

2006-08-01 Thread Tom Lane
Carlo Stonebanks [EMAIL PROTECTED] writes:
 To enable this behavior, the PL/Tcl call handler must be compiled 
 with -DPLTCL_UNKNOWN_SUPPORT set.

Where are you reading that?  There's no such sentence in the current
docs, and no sign of any such conditional in the source code either.
I'm not sure why it doesn't work for you, but it's not because it's
been deliberately turned off...

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Alvaro Herrera
Merlin Moncure wrote:
 On 8/1/06, Ian Harding [EMAIL PROTECTED] wrote:
 On 8/1/06, Christopher Browne [EMAIL PROTECTED] wrote:
  Martha Stewart called it a Good Thing when Carlo Stonebanks 
 [EMAIL PROTECTED] wrote:
   I am interested in finding out a non-religious answer to which
   procedural language has the richest and most robust implementation
   for Postgres. C is at the bottom of my list because of how much
   damage runaway code can cause. I also would like a solution which is
   platform-independent; we develop on Windows but may deploy on Linux.
 
 
 
 
   - Doing funky string munging using the SQL functions available in
 pl/pgsql is likely to be painful;
 
   - Doing a lot of DB manipulation in pl/Perl or pl/Tcl or such
 requires having an extra level of function manipulations that
 won't be as natural as straight pl/pgsql.
 
 Another important distinguishing characteristic is whether it supports
 set returning functions.  I think only plpgsql does right now.
 
 and C, and SQL ;)

And PL/Perl (and PL/php, but it's still immature).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Best Procedural Language?

2006-08-01 Thread Joshua D. Drake

Carlo Stonebanks wrote:
Well, the surprise for me is: this is exactly what I thought! I actually do 
have a lot of string manipulation to do, but I am the only one on the team 
with Tcl experience. For the sake of other developers I thought that the 
plPHP project would be interesting, but I don't get the impression that it 
is as well-developed as plTcl. Does anyone know anything about it?


plPHP is not as mature as plTcl (or is that plTclng?). However, it is very
well developed and maintained. Heck, companies are even holding talks
and training classes on it now.


Sincerely,

Joshua D. Drake
--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] LISTEN considered dangerous

2006-08-01 Thread Martijn van Oosterhout
On Tue, Aug 01, 2006 at 07:50:19PM -0400, Gregory Stark wrote:
   However that just doesn't work, because listen is broken, allow me to
   illustrate, here A and B are two clients:
  
   A: BEGIN
   A: SELECT * FROM foo and cache the result.
   A: LISTEN foochange
   B: BEGIN
   B: update foo
   B: NOTIFY foochange
   B: COMMIT
   A: COMMIT

 I haven't used LISTEN/NOTIFY myself yet and I do indeed find the behaviour he
 shows somewhat surprising. Normally in read-committed mode uncommitted
 transactions are affected by other transactions when they commit. In this case
 the uncommitted LISTEN is not being affected by the committed NOTIFY.

Eh? At the point the LISTEN is run, the NOTIFY hasn't committed, so a
row is inserted. At the time the NOTIFY is committed, the LISTEN hasn't
committed yet, so it won't be visible. Only LISTEN is stored, not NOTIFY,
so there's nothing wrong with the read-committed semantics.

It's slightly surprising, though. I haven't seen anyone else complain
about this before. The only way to fix this is to make the
LISTEN completely atransactional, so NOTIFY can see uncommitted LISTENs
also.
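
One hedged workaround sketch: commit the LISTEN before taking the snapshot
you intend to cache, so any NOTIFY committed after that point reaches the
listener, and anything committed before the SELECT is already visible to it:

LISTEN foochange;
COMMIT;              -- the registration is now visible to other backends' NOTIFY

BEGIN;
SELECT * FROM foo;   -- cache this result
COMMIT;
-- From here on, re-read foo whenever a 'foochange' notification arrives.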

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] LISTEN considered dangerous

2006-08-01 Thread Flemming Frandsen

Ian Harding wrote:

NOTIFY interacts with SQL transactions in some important ways.
Firstly, if a NOTIFY is executed inside a transaction, the notify
events are not delivered until and unless the transaction is
committed. This is appropriate, since if the transaction is aborted,
all the commands within it have had no effect, including NOTIFY. But
it can be disconcerting if one is expecting the notification events to
be delivered immediately.


Yes, that's very nice, but it doesn't have *anything* to do with what I 
posted about.


I'm bothered by LISTEN taking effect from the end of the transaction
instead of from the start of the transaction.


--
 Regards Flemming Frandsen - http://dion.swamp.dk - YAPH


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org