Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread Karel Zak
On Mon, Apr 19, 2004 at 08:41:18PM -0400, Bruce Momjian wrote:
> With initdb written now in C, we don't need a pg_encoding binary
> anymore.

By the way, what about changing the name of initdb to pg_initdb? The
current name is really too common (like some other things in
pgsql/src/bin).

Karel

-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] Unable to create an array of a domain

2004-04-20 Thread Mark Gibson
Hi,
   I've recently been trying to make use of domains but have
encountered some violations of the rule of least surprise.
(PostgreSQL 7.4.2)
I couldn't create an array of a domain, eg:

   CREATE DOMAIN mydomain AS integer;

   CREATE TABLE mytable (
   stuff   mydomain[]
   );
results in:

   ERROR:  type mydomain[] does not exist

I noticed that the type '_mydomain' doesn't exist in pg_type, so I tried:

   CREATE DOMAIN _mydomain AS mydomain[];

   ERROR:  type mydomain[] does not exist

(hmmm, chicken/egg), so let's try:

   CREATE DOMAIN _mydomain AS integer[];

This worked, and now the create table statement above also works, and I can
insert into the table using the array input syntax too.
I have a few concerns though:

Is there any reason why a domain cannot use another domain as its base
type?
eg:
   CREATE DOMAIN anotherdomain AS mydomain;

   ERROR:  mydomain is not a valid base type for a domain

And why isn't an array type created for a domain?
(Is there a way to create an array for a domain without resorting to an
array of the domain's base type?)
Cheers

--
Mark Gibson gibsonm |AT| cromwell |DOT| co |DOT| uk
Web Developer  Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [HACKERS] Unable to create an array of a domain

2004-04-20 Thread Rod Taylor
> Is there any reason why a domain cannot use another domain as its base
> type?

It hasn't been written yet, but Tom has spent some time ensuring the
system could be expanded to deal with this.

> And why isn't an array type created for a domain?
> (Is there a way to create an array for a domain without resorting to an
> array of the domain's base type?)

It was difficult to ensure that the constraints would be applied to all
elements of an array but that may have changed since array support has
been rewritten.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread Bruce Momjian
Karel Zak wrote:
> On Mon, Apr 19, 2004 at 08:41:18PM -0400, Bruce Momjian wrote:
> > With initdb written now in C, we don't need a pg_encoding binary
> > anymore.
>
> By the way, what about changing the name of initdb to pg_initdb? The
> current name is really too common (like some other things in
> pgsql/src/bin).

Uh, that would be pretty major.  No one has complained about it in the
past.  I think createuser is much worse.  :-)

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread Karel Zak
On Tue, Apr 20, 2004 at 08:59:20AM -0400, Bruce Momjian wrote:
> Karel Zak wrote:
> > On Mon, Apr 19, 2004 at 08:41:18PM -0400, Bruce Momjian wrote:
> > > With initdb written now in C, we don't need a pg_encoding binary
> > > anymore.
> >
> > By the way, what about changing the name of initdb to pg_initdb? The
> > current name is really too common (like some other things in
> > pgsql/src/bin).
>
> Uh, that would be pretty major.  No one has complained about it in the
> past.  I think createuser is much worse.  :-)

Sure. Maybe we just need to wait for some other project like PostgreSQL
that will use the same clever names... But maybe we will never see a
problem, because the others are less ignorant...

Karel

-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[HACKERS] COPY CSV keywords

2004-04-20 Thread Bruce Momjian

[ discussion moved to hackers.]

Peter Eisentraut wrote:
> Bruce Momjian wrote:
> > I have applied the attached patch that completes TODO item:
> >
> > o -Allow dump/load of CSV format
> >
> > This adds new keywords to COPY and \copy:
> >
> > CSV - enable CSV mode
> > QUOTE - specify quote character
> > ESCAPE - specify escape character
> > FORCE - force quoting of specified columns
>
> FORCE QUOTE
>
> > LITERAL - prevent NULL checks for specific columns
>
> NO NULL CHECK

I considered going this direction, but it broke the WITH clause style of
COPY.  Previously it was WITH "keyword value".  Now it is also WITH
"keyword value, value" too.  This would add WITH "keyword keyword value,
value".

It would change:

COPY pg_language TO '/tmp/x' WITH CSV FORCE lanname 
LITERAL lanacl
to:
COPY pg_language TO '/tmp/x' WITH CSV FORCE QUOTE lanname NO
NULL CHECK lanacl

If folks like it, I can make the change.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread Andrew Dunstan
Bruce Momjian wrote:
> Karel Zak wrote:
> > On Mon, Apr 19, 2004 at 08:41:18PM -0400, Bruce Momjian wrote:
> > > With initdb written now in C, we don't need a pg_encoding binary
> > > anymore.
> >
> > By the way, what about changing the name of initdb to pg_initdb? The
> > current name is really too common (like some other things in
> > pgsql/src/bin).
>
> Uh, that would be pretty major.  No one has complained about it in the
> past.  I think createuser is much worse.  :-)

Agreed. Actually, the big problem with the name initdb is that the 
name is misleading, and newbies often get confused by it. You are 
preparing a data store for many databases, not a single database.  But I 
think it is far too sanctified by history to change now, just as Ken 
Thompson now wishes he had put an 'e' on the end of 'creat' but can't go 
back and fix it. Maybe we should think about a symlink/hardlink to use a 
better name.

cheers

andrew

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[HACKERS] ERROR action extension for rules?

2004-04-20 Thread Fabien COELHO

Dear hackers,

I'm trying to use a RULE as a simple (and maybe slower) TRIGGER, so as to
make it impossible for a row to be modified once a boolean is set to lock
it.

 CREATE TABLE foo(data TEXT, locked BOOLEAN NOT NULL DEFAULT FALSE);

This cannot be done with a CHECK constraint, as it is not a restriction
on the state of the tuple, but on its changes.

What I could do is to skip the attempt with a NOTHING action:

 CREATE RULE PasTouche AS ON UPDATE TO foo
 WHERE old.locked=TRUE
 DO INSTEAD NOTHING;

but as a result the attempt is not reported to the transaction, which
goes on. I really want to generate an error so as to abort the
current transaction. After various attempts, I did the following:

 CREATE TABLE Boom(ok BOOLEAN CHECK(ok=TRUE));

 CREATE RULE PasTouche AS ON UPDATE TO foo
 WHERE old.locked=TRUE
 DO INSTEAD INSERT INTO Boom(ok) VALUES(FALSE);

It works, as an error is raised because the attempt fails on the check.
Other solutions could be thought of with the same result, and they
would basically use the same kind of trick...

However, I think it would look much better to simply write:

 CREATE RULE PasTouche AS ON UPDATE TO foo
 WHERE old.locked=TRUE
 DO INSTEAD ERROR;

I think this simple new rule action could be added to pg.
I'm planning to do it, if there is no opposition.
Any comments on this proposed new rule action?
Or did I miss something obvious with rules?

Have a nice day,

-- 
Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Prepared select

2004-04-20 Thread Robert Turnbull
There are several production issues related to the proposed solution. For
example, what happens when the result set exceeds the swap space of the
server or client machine? My original question is how to get a cursor from a
prepared select so that a subset of the result can be returned to the client
for processing. For your solution to work, the SQL EXECUTE command would need
the functionality of the SQL FETCH command.


 
 
> > How can I use a prepared select statement as mentioned in the
> > documentation on SQL PREPARE. Preparing the statement is easy, the
> > problem is using the plan to get a cursor. My assumption is the SQL
> > OPEN command is not documented or there is some other libpq API to
> > make this happen.
> >
> > Thanks
>
> I'm using libpq and lines like below are working:
>
> res = PQexec(conn,
>     "PREPARE plan001 ( integer , double precision , character ) AS SELECT
>      a,b,d FROM foo WHERE a = $1 OR d > $2 OR b = $3");
> ...
> res = PQexec(conn, "EXECUTE plan001 ( 3 , 6.66 , 'whatever' )");
>
> HTH, pretty late reply - I know (but no one else did as far as I can tell)
>
> Regards, Christoph





---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Why are these ARC variables per-backend?

2004-04-20 Thread Kenneth Marshall
On Mon, Apr 19, 2004 at 02:58:11PM -0400, Tom Lane wrote:
> Jan Wieck [EMAIL PROTECTED] writes:
> > Tom Lane wrote:
> > > I've got a problem with these variables in freelist.c:
> > >
> > > static int strategy_cdb_found;
> > > static int strategy_cdb_replace;
> >
> > These two most definitely are per backend because they hold status
> > information about the blocks this backend specifically is mucking with.
> > If it all would be in one function, they would be local variables.
>
> Would you object if I made 'em actual local variables?  This would
> clutter the strategy API slightly since the vars would have to be passed
> out of some routines and into others, but I think it would be logically
> cleaner.  (In the back of my mind is the idea to support two instances
> of the ARC data structure, one for global and one for local buffers, so
> minimizing the number of static variables is a prerequisite.)

I am not sure what changes are in store with the bufmgr locking, but
the concurrency could also benefit from having multiple buffer queues.
This would allow the contention for the locks to be further reduced.
The change to a local variable would also enable that change.

Ken Marshall


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Triggers on system tables

2004-04-20 Thread Bruce Momjian

Added to TODO:

* Allow AFTER triggers on system tables


---

Gavin Sherry wrote:
> On Thu, 12 Feb 2004, Tom Lane wrote:
>
> > Gavin Sherry [EMAIL PROTECTED] writes:
> > > Do you have an example at hand of a system function which will face this
> > > problem so that I can see what is involved?
> >
> > Mmm ... try CREATE TABLE with foreign keys.  IIRC the basic table is
> > created and then we do ALTER TABLE ADD FOREIGN KEY.
>
> Ahh, of course. I misunderstood you. I was taking system 'function'
> literally. CREATE TABLE by itself is bad enough. We do ALTER TABLE ADD
> TOAST outside of DefineRelation().
>
> Gavin

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] COPY CSV keywords

2004-04-20 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian [EMAIL PROTECTED] writes:
> > Peter Eisentraut wrote:
> > > Bruce Momjian wrote:
> > > > LITERAL - prevent NULL checks for specific columns
> > >
> > > NO NULL CHECK
> >
> > I considered going this direction, but it broke the WITH clause style of
> > COPY.
>
> Huh?  Many of the other options are already potentially two words,
> eg QUOTE [AS], so I don't see how you can object to that aspect.

AS was a noise word, while the new keywords would not be.

> I agree with Peter that LITERAL is a completely unhelpful name for the
> functionality.

Yep.

> NO NULL CHECK seems a bit confusing as well --- at first glance it
> looks like it might mean overriding the column's NOT NULL attribute
> (which I trust is not what it means ;-)).  Perhaps NO NULLS?

Right.  It doesn't affect the constraint.

> Or just NOT NULL, to avoid introducing another keyword.

Yes, I liked that, but to me it looked like we were saying the column
was NOT NULL, which isn't the meaning.  You could almost call it QUOTE
NULL because it is actually quoting the NULL strings in those columns.

> I liked the FORCE QUOTE suggestion, too.

OK.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] COPY CSV keywords

2004-04-20 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
> Peter Eisentraut wrote:
> > Bruce Momjian wrote:
> > > LITERAL - prevent NULL checks for specific columns
> >
> > NO NULL CHECK
>
> I considered going this direction, but it broke the WITH clause style of
> COPY.

Huh?  Many of the other options are already potentially two words,
eg QUOTE [AS], so I don't see how you can object to that aspect.

I agree with Peter that LITERAL is a completely unhelpful name for the
functionality.

NO NULL CHECK seems a bit confusing as well --- at first glance it
looks like it might mean overriding the column's NOT NULL attribute
(which I trust is not what it means ;-)).  Perhaps NO NULLS?
Or just NOT NULL, to avoid introducing another keyword.

I liked the FORCE QUOTE suggestion, too.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] ERROR action extension for rules?

2004-04-20 Thread Rod Taylor
> > CREATE RULE PasTouche AS ON UPDATE TO foo
> > WHERE old.locked=TRUE
> > DO INSTEAD ERROR;
>
> I think this simple new rule action could be added to pg.
> I'm planning to do it, if there is no opposition.
> Any comments on this proposed new rule action?
> Or did I miss something obvious with rules?

I've often debated creating an SQL elog() function for the ELSE segment of
a CASE statement.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[HACKERS] pl/j looking for alpha testers

2004-04-20 Thread Dave Cramer
Hi,

Pl/J is a java procedural language for postgres. We are looking for
alpha testers to help us find bugs, and get feedback.

The project can be found at

http://plj.codehaus.org/

Bugs can be reported at

http://jira.codehaus.org/secure/BrowseProject.jspa?id=10430

Regards,

Dave
-- 
Dave Cramer
519 939 0336
ICQ # 14675561


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] CSV patch applied

2004-04-20 Thread Bruce Momjian
Andrew Dunstan wrote:
> Bruce and I tossed this around quite a bit.
>
> The problem is that using QUOTE or NULL in these phrases might look
> confusing, e.g.
>     COPY mytable TO 'mytable.csv' CSV QUOTE '' FORCE QUOTE field1,field2;
>     COPY mytable FROM 'mytable.csv' NULL '' CSV NO NULL CHECK field1,field2;
>
> I also don't think NO NULL CHECK actually matches the use case for this
> very well (and I'm dubious about LITERAL too).  They both describe the
> actual behaviour, but not what you are trying to achieve. Essentially,
> this would be used when you have a field with a NOT NULL constraint, but
> the input CSV data stream has what would otherwise be considered nulls.
> (COPY itself will never produce such a CSV, as non-null values that
> resemble null are always quoted, but third party programs well might.)
> So an alternative might be FORCE NOT NULL, but for the previous
> consideration. Perhaps use of an optional preposition might make things
> slightly clearer, e.g.:
>
>     COPY mytable TO 'mytable.csv' CSV QUOTE '' FORCE QUOTE IN field1,field2;
>     COPY mytable FROM 'mytable.csv' NULL '' CSV FORCE NOT NULL IN field1,field2;
>
> But it does start to look a little too much like COBOL :-).

Yea, and as I remember, that COBOL word scared us off.  :-)

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] Probably security hole in postgresql-7.4.1

2004-04-20 Thread Ken Ashcraft
I work at Coverity where we use static analysis to find bugs in
software.  I ran a security checker over postgresql-7.4.1 and I think I
found a security hole.  I'm not familiar with the postgres source, so
this report may be false.  My interpretation of the code follows.

I'd appreciate your feedback,
Ken Ashcraft

In the code below, fld_size gets copied in from a user-specified file.
It is passed as the 'needed' parameter to enlargeStringInfo().  If
needed is a very large positive value, the addition 'needed += str->len
+ 1;' could cause an overflow, making needed a negative number.
enlargeStringInfo() would quickly return, thinking that enough memory
was present.  However, the call to CopyGetData() uses the large,
positive fld_size, resulting in a buffer overflow, assuming that the
user's file has enough data.


/home/kash/user-progs/postgres/postgresql-7.4.1/src/backend/commands/copy.c:2050:CopyReadBinaryAttribute:
 ERROR:TAINT: 2030:2050:Passing unbounded user value fld_size as arg 1 to function 
enlargeStringInfo, which uses it unsafely in model 
[SOURCE_MODEL=(lib,CopyGetInt32,user,taint)] 
[SINK_MODEL=(lib,enlargeStringInfo,user,trustingsink)] [BOUNDS= Upper bound on line 
2040]   [PATH=] 

bool *isnull)
{
    int32       fld_size;
    Datum       result;

Start ---
    fld_size = CopyGetInt32();

... DELETED 14 lines ...

    /* reset attribute_buf to empty, and load raw data in it */
    attribute_buf.len = 0;
    attribute_buf.data[0] = '\0';
    attribute_buf.cursor = 0;

Error ---
    enlargeStringInfo(&attribute_buf, fld_size);

    CopyGetData(attribute_buf.data, fld_size);
    if (CopyGetEof())
---

enlargeStringInfo(StringInfo str, int needed)
{
    int         newlen;

    needed += str->len + 1;     /* total space required now */
    if (needed <= str->maxlen)
        return;                 /* got enough space already */

    /*
     * We don't want to allocate just a little more space with each
     * append; for efficiency, double the buffer size each time it
     * overflows. Actually, we might need to more than double it if
     * 'needed' is big...
     */
    newlen = 2 * str->maxlen;
    while (needed > newlen)
        newlen = 2 * newlen;

    str->data = (char *) repalloc(str->data, newlen);

    str->maxlen = newlen;
}



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Prepared select

2004-04-20 Thread Cyril VELTER
libpq doesn't have enough support to allow executing a prepared statement
in a named portal (current libpq only works with the unnamed portal), but
the V3 protocol has it. I solved this problem by adding the following
functions. They let you prepare a named statement, execute this statement in
a named portal, fetch from it, and close it.

This is a temporary solution while waiting for an official extension of
libpq (more calls could be added to support the V3 protocol completely).

cyril


/*
 * PQportalSetup
 *  Setup a portal to execute a prepared statement
 */
PGresult *
PQportalSetup(PGconn *conn,
              const char *stmtName,
              const char *portalName,
              int nParams,
              const char *const * paramValues,
              const int *paramLengths,
              const int *paramFormats,
              int resultFormat)
{
    int         i;

    if (!PQexecStart(conn))
        return NULL;

    if (!PQsendQueryStart(conn))
        return NULL;

    if (!stmtName)
    {
        printfPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("statement name is a null pointer\n"));
        return NULL;
    }

    /* This isn't gonna work on a 2.0 server */
    if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
    {
        printfPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("function requires at least protocol version 3.0\n"));
        return 0;
    }

    /* construct the Bind message */
    if (pqPutMsgStart('B', false, conn) < 0 ||
        pqPuts(portalName, conn) < 0 ||
        pqPuts(stmtName, conn) < 0)
        goto sendFailed;
    if (nParams > 0 && paramFormats)
    {
        if (pqPutInt(nParams, 2, conn) < 0)
            goto sendFailed;
        for (i = 0; i < nParams; i++)
        {
            if (pqPutInt(paramFormats[i], 2, conn) < 0)
                goto sendFailed;
        }
    }
    else
    {
        if (pqPutInt(0, 2, conn) < 0)
            goto sendFailed;
    }
    if (pqPutInt(nParams, 2, conn) < 0)
        goto sendFailed;
    for (i = 0; i < nParams; i++)
    {
        if (paramValues && paramValues[i])
        {
            int         nbytes;

            if (paramFormats && paramFormats[i] != 0)
            {
                /* binary parameter */
                nbytes = paramLengths[i];
            }
            else
            {
                /* text parameter, do not use paramLengths */
                nbytes = strlen(paramValues[i]);
            }
            if (pqPutInt(nbytes, 4, conn) < 0 ||
                pqPutnchar(paramValues[i], nbytes, conn) < 0)
                goto sendFailed;
        }
        else
        {
            /* take the param as NULL */
            if (pqPutInt(-1, 4, conn) < 0)
                goto sendFailed;
        }
    }
    if (pqPutInt(1, 2, conn) < 0 ||
        pqPutInt(resultFormat, 2, conn) < 0)
        goto sendFailed;
    if (pqPutMsgEnd(conn) < 0)
        goto sendFailed;

    /* construct the Sync message */
    if (pqPutMsgStart('S', false, conn) < 0 ||
        pqPutMsgEnd(conn) < 0)
        goto sendFailed;

    /* remember we are using extended query protocol */
    conn->ext_query = true;

    /*
     * Give the data a push.  In nonblock mode, don't complain if we're
     * unable to send it all; PQgetResult() will do any additional
     * flushing needed.
     */
    if (pqFlush(conn) < 0)
        goto sendFailed;

    /* OK, it's launched! */
    conn->asyncStatus = PGASYNC_BUSY;

    return PQexecFinish(conn);

sendFailed:
    pqHandleSendFailure(conn);
    return NULL;
}

/*
 * PQportalFetch
 *  Fetch next rows
 */
PGresult *
PQportalFetch(PGconn *conn,
              const char *portalName,
              int maxrows)
{
    if (!PQexecStart(conn))
        return NULL;

    if (!PQsendQueryStart(conn))
        return NULL;

    /* This isn't gonna work on a 2.0 server */
    if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
    {
        printfPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("function requires at least protocol version 3.0\n"));
        return 0;
    }

    /* construct the Describe Portal message */
    if (pqPutMsgStart('D', false, conn) < 0 ||
        pqPutc('P', conn) < 0 ||
        pqPuts(portalName, conn) < 0 ||
        pqPutMsgEnd(conn) < 0)
        goto sendFailed;

    /* construct the Execute message */
    if (pqPutMsgStart('E', false, conn) < 0 ||
        pqPuts(portalName, conn) < 0 ||
        pqPutInt(maxrows, 4, conn) < 0 ||
        pqPutMsgEnd(conn) < 0)
        goto sendFailed;

    /* construct the Sync message */
    if (pqPutMsgStart('S', false, conn) < 0 ||
        pqPutMsgEnd(conn) < 0)
        goto sendFailed;

    /* remember we are using extended query protocol */
    conn->ext_query = true;

    /*
     * Give the data a push.  In nonblock mode, don't complain if we're
     * unable to send it all; PQgetResult() will do any additional
     * flushing needed.
     */
    if (pqFlush(conn) < 0)
        goto sendFailed;

    /* OK, it's launched! */
    conn->asyncStatus = PGASYNC_BUSY;
    return PQexecFinish(conn);

sendFailed:
    pqHandleSendFailure(conn);
    return NULL;
}

/*
 * PQportalClose
 *  Close a named portal
 *  using protocol 3.0
 */
PGresult *
PQportalClose(PGconn *conn,
              const char *portalName)
{
    if (!PQexecStart(conn))
        return NULL;

    if (!PQsendQueryStart(conn))
        return NULL;

    /* This isn't gonna work on a 2.0 server */
    if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
    {
        printfPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("function requires at least protocol version 3.0\n"));
        return 0;
    }

    /* construct the Close message */
    if (pqPutMsgStart('C', false, conn) < 0 ||
        pqPutc('P', conn) < 0 ||
        pqPuts(portalName, conn) < 0 ||
        pqPutMsgEnd(conn) < 0)
        goto sendFailed;

    /* construct the Sync message */
    if

Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread Bruce Momjian
Andrew Dunstan wrote:
> > past.  I think createuser is much worse.  :-)
>
> Agreed. Actually, the big problem with the name initdb is that the
> name is misleading, and newbies often get confused by it. You are
> preparing a data store for many databases, not a single database.  But I
> think it is far too sanctified by history to change now, just as Ken
> Thompson now wishes he had put an 'e' on the end of 'creat' but can't go
> back and fix it. Maybe we should think about a symlink/hardlink to use a
> better name.

Yea, initcluster would have been better, but cluster confuses with
CLUSTER, just like database schema confuses with CREATE SCHEMA.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] COPY CSV keywords

2004-04-20 Thread Andrew Dunstan
Bruce Momjian wrote:
> [ discussion moved to hackers. ]
>
> Peter Eisentraut wrote:
> > Bruce Momjian wrote:
> > > I have applied the attached patch that completes TODO item:
> > >
> > > o -Allow dump/load of CSV format
> > >
> > > This adds new keywords to COPY and \copy:
> > >
> > > CSV - enable CSV mode
> > > QUOTE - specify quote character
> > > ESCAPE - specify escape character
> > > FORCE - force quoting of specified columns
> >
> > FORCE QUOTE
> >
> > > LITERAL - prevent NULL checks for specific columns
> >
> > NO NULL CHECK
>
> I considered going this direction, but it broke the WITH clause style of
> COPY.  Previously it was WITH "keyword value".  Now it is also WITH
> "keyword value, value" too.  This would add WITH "keyword keyword value,
> value".
>
> It would change:
>
>     COPY pg_language TO '/tmp/x' WITH CSV FORCE lanname
>         LITERAL lanacl
> to:
>     COPY pg_language TO '/tmp/x' WITH CSV FORCE QUOTE lanname NO
>         NULL CHECK lanacl
>
> If folks like it, I can make the change.

[reposting my comments from patches]

The problem is that using QUOTE or NULL in these phrases might look 
confusing, e.g.
 COPY mytable TO 'mytable.csv' CSV QUOTE '' FORCE QUOTE field1,field2;
 COPY mytable FROM 'mytable.csv' NULL '' CSV NO NULL CHECK field1,field2;

I also don't think NO NULL CHECK actually matches the use case for this 
very well (and I'm dubious about LITERAL too).  They both describe the 
actual behaviour, but not what you are trying to achieve. Essentially, 
this would be used when you have a field with a NOT NULL constraint, but 
the input CSV data stream has what would otherwise be considered nulls. 
(COPY itself will never produce such a CSV, as non-null values that 
resemble null are always quoted, but third party programs well might.) 
So an alternative might be FORCE NOT NULL, but for the previous 
consideration. Perhaps use of an optional preposition might make things 
slightly clearer, e.g.:

 COPY mytable TO 'mytable.csv' CSV QUOTE '' FORCE QUOTE  IN 
field1,field2;
 COPY mytable FROM 'mytable.csv' NULL '' CSV FORCE NOT NULL IN 
field1,field2;

But it does start to look a little too much like COBOL :-).

So I'm interested to see if there are any other inspirations people have.

cheers

andrew

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[HACKERS] Multiple statements in one prepare

2004-04-20 Thread Robert Turnbull



How can multiple SQL statements be included in one prepare command? The
goal is to reduce chatter between the client and server by submitting an
atomic transaction set. My interface is libpq.


Re: [HACKERS] user-defined default public acl

2004-04-20 Thread James Robinson
[ discussion re/ default state of minimal rights, as opposed to the 
more generous situation today snipped ]

Just to add fuel to the fire, as an ex-college sys-admin having had to 
deploy both Oracle and postgres, I would have to say that Oracle 
allowed me to deploy a database container shared by many many students 
relatively securely, while at the same time allowing them to perform 
cross-schema queries to their teammates tables if and when they needed 
to. The users could manage the ACLs of their own schema objects, 
allowing their peers into their tables on a mutual need basis, but the 
default action was to be closed off completely. Combined with per-user 
quotas on the tablespaces and per-user connection limits, I could 
create a new student user and be comfortable knowing they're not going 
to be able to steal from others nor consume all disk space. I didn't 
have to deal with CPU / memory based attacks on the box just 'cause I 
was lucky I guess.

With postgres, I had to create a new database for each user (this was 
pre-schema, anyway), then tweak the hba conf file and sighup. I had no 
disk-based resource limits available to me (there's a creative use of 
linux loopback mounts to enforce per-database quotas floatin' around on 
the net somewhere, but I didn't think of that, as well as that probably 
wouldn't scale to, say, thousands of users). I wasn't about to dblink 
databases for 'em, so it ended up that the mass-student-consumption 
learn-SQL box was Oracle, and the lesser-used DB was postgres.

So, finally, from the perspective of a college admin with a 'centralize 
the student services' mindset, being able to sandbox SQL users 
relatively easily while also being able to provide higher-level service 
such as cross-schema queries, centralized / guaranteed backup, etc. 
would have been fantastic using postgres.

I don't work there anymore, but I'm sure other university shops still 
have the same issue to solve. If a tight schema sandbox isn't the 
default, then a system-catalog expert managed schema patch would have 
been greatly appreciated to solve at least the 'stay out of other 
user's crud by default' issues.


James Robinson
Socialserve.com
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Bruce Momjian

I looked into this and I see a number of cases where pg_autovacuum calls
send_query(), but doesn't test for a NULL return from the function.

Matthew, would you look into this and submit a patch?  Thanks.

---

Jeff Boes wrote:
> Recently I installed and started pg_autovacuum against my new Pg 7.4.1
> installation. We use a fairly large number of temporary tables within an
> application (that is, several copies of this application may be running,
> and each creates and drops several temp tables as they cycle through
> their workload). Here's what I think happened, based on the logs
> (pg_autovacuum's and the postmaster's):
>
> pg_autovacuum.log:
> [2004-02-15 08:10:01 AM] Performing: ANALYZE pg_temp_13.tmp_targs
> [2004-02-15 08:10:01 AM] Can not refresh statistics information from the
> database nexcerpt.
> [2004-02-15 08:10:01 AM] The error is [ERROR:  relation
> pg_temp_13.tmp_targs does not exist
>
> postmaster.log:
> 2004-02-15 08:10:01 [31563] ERROR:  relation pg_temp_13.tmp_targs does
> not exist
> 2004-02-15 08:10:01 [31563] LOG:  unexpected EOF on client connection
>
> It appears that pg_autovacuum collected the name of a temp table, and
> later tried to analyze it. The table was gone by then, and this caused
> the daemon to exit. As this happened on a Sunday morning, my weekend
> experiment to see how pg_autovacuum would maintain our test database was
> rather spoiled ... 8-(
>
> --
> Jeff Boes  vox 269.226.9550 ext 24
> Database Engineer fax 269.349.9076
> Nexcerpt, Inc. http://www.nexcerpt.com
> ...Nexcerpt... Extend your Expertise

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[HACKERS] Accessing RelOptInfo structure from the executor module

2004-04-20 Thread Shalu Gupta


 Hello,

 I am trying to work with PostgreSQL 7.3.4; can you
 please help me with the following problem?
 I want to access the RelOptInfo data structure from the executor module.
 Basically, I have added a flag to the RelOptInfo data structure, and at
 execution time I want to check the flag, but I got lost after the path
 selection: I am unable to access the RelOptInfo in the executor module.

 Can you please help me out with this.

 Your help will be appreciated,

 Sincerely,
 Shalu Gupta
 NC State University,
 Raleigh, NC


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Multiple statements in one prepare

2004-04-20 Thread Tom Lane
Robert Turnbull [EMAIL PROTECTED] writes:
 How can multiple SQL statements be included in one prepare command?

They can't.  Consider using a plpgsql function, instead.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread Andrew Dunstan
Joshua D. Drake wrote:


preparing a data store for many databases, not a single database.  But I 
think it is far too sanctified by history to change now, just as Ken 
Thompson now wishes he had put an 'e' on the end of 'creat' but can't 
go back and fix it. Maybe we should think about a symlink/hardlink to 
use a better name.


initcatalog?

initpgstore ... we can play the name game if people are serious :-)

cheers

andrew

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] pg_autovacuum crashes when query fails for temp

2004-04-20 Thread Matthew T. O'Connor
 Bruce Momjian wrote:
 Should pg_autovacuum be vacuuming temporary tables?

This is a good question, and I would like some opinions from some other
people more informed than I.

 Secondly, why would
 a temporary table for another session be visible to pg_autovacuum?  I
 know these may sound like stupid questions, but I'm a little shocked
 that it found a temp table.  Did someone make a permanent table, delete
 it, and pg_autovacuum found it in between those operations?

I will look into this when I have time; it certainly would be possible
that pg_autovacuum could be tripped up by a sequence of events like you
describe above.  The more general problem is that it shouldn't crash even
if it's vacuuming tables it shouldn't.

Matthew


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread Joshua D. Drake

preparing a data store for many databases, not a single database.  But I think 
it is far too sanctified by history to change now, just as Ken 
Thompson now wishes he had put an 'e' on the end of 'creat' but can't 
go back and fix it. Maybe we should think about a symlink/hardlink to 
use a better name.


initcatalog?

cheers

andrew

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [HACKERS] [Pgreplication-general] converting the DBMirror as peer-to-peer

2004-04-20 Thread Bruce Momjian

Where are we on this?  Is it something that can be added to dbmirror as
a patch?

---

merino silva wrote:
 Hi all,
 
 The method I've used to convert DBMirror to a
 peer-to-peer replicator was two DBMirror instances
 with one considered slave of other as the master.
 
 Here, I've stopped the loop back by dropping the
 trigger when INSERT, DELETE, UPDATE through the
 DBMirror and then creating the trigger again.
 
 The system was tested on
 Mandrake Linux 9.2,
 PostgreSQL 7.3.4,
 and Perl v5.8.1.
 
 A 'LOCK TABLE <table name> IN EXCLUSIVE MODE'
 was used before dropping the trigger to prevent
 updates to the client database, because updates to that
 database won't propagate to the remaining one while the
 trigger is dropped.
 
 The modified DBMirror.pl is attached with this.
 
 Is this an acceptable solution for peer-to-peer
 replication?
 What can go wrong with this approach?
 
 Please reply with your suggestions.
 
 regards,
 merino silva.
 

Content-Description: DBMirror.pl

[ Attachment, skipping... ]

 ___
 Pgreplication-general mailing list
 [EMAIL PROTECTED]
 http://gborg.postgresql.org/mailman/listinfo/pgreplication-general

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] pg_encoding not needed anymore

2004-04-20 Thread scott.marlowe
On Tue, 20 Apr 2004, Bruce Momjian wrote:

 Andrew Dunstan wrote:
  past.  I think createuser is much worse.  :-)
  

  
  Agreed. Actually, the big problem with the name initdb is that the 
  name is misleading, and newbies often get confused by it. You are 
  preparing a data store for many databases, not a single database.  But I 
  think it is far too sanctified by history to change now, just as Ken 
  Thompson now wishes he had put an 'e' on the end of 'creat' but can't go 
  back and fix it. Maybe we should think about a symlink/hardlink to use a 
  better name.
 
 Yea, initcluster would have been better, but cluster confuses with
 CLUSTER, just like database schema confuses with CREATE SCHEMA.

Maybe initpg or pg_init or something like that?


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Thomas Swan
Bruce Momjian wrote:

I looked into this and I see a number of cases where pg_autovacuum calls
send_query(), but doesn't test for a NULL return from the function.
Matthew, would you look into this and submit a patch?  Thanks.

 

Should pg_autovacuum be vacuuming temporary tables?  Secondly, why would 
a temporary table for another session be visible to pg_autovacuum?  I 
know these may sound like stupid questions, but I'm a little shocked 
that it found a temp table.  Did someone make a permanent table, delete 
it, and pg_autovacuum found it in between those operations?

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[HACKERS] Reporting a security hole

2004-04-20 Thread ken
I work at Coverity where we make a static analysis tool to find bugs in
software at compile time.  I think I found a security hole in
postgresql-7.4.1, but I don't want to just report it to a public list.  I
sent email to [EMAIL PROTECTED], hoping that the address existed,
but I got no response.

So where can I report a potential security hole?

thanks,
Ken Ashcraft

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] ERROR action extension for rules?

2004-04-20 Thread Tom Lane
Fabien COELHO [EMAIL PROTECTED] writes:
 I'm trying to use a RULE as a simple (and maybe slower) TRIGGER, so as to
 make it impossible for a row to be modified once a boolean is set to lock
 it.
 ...
 However, I think this would look much better written simply as:

  CREATE RULE PasTouche AS ON UPDATE TO foo
  WHERE old.locked=TRUE
  DO INSTEAD ERROR;

This would be sensible if rules were actually reasonable substitutes for
triggers, but they are not.  If you check the archives you will find
many many cases where people tried to do this sort of thing, and got
burned by the fundamental semantic differences ...

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Bruce Momjian
Matthew T. O'Connor wrote:
 Yeah, I will, I just don't know when.  I have been trying to get to this
 and lots of other pg_autovacuum tasks, but my schedule has been quite
 crazy as of late.  Anyway, this should probably be a pretty simple patch,
 so I can probably find some time to look at it soon.
 
 Any idea on the 7.4.3 release timeline?

No, I have not heard of a 7.4.3 timeline, but we certainly want your
eventual fixes in that release.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Matthew T. O'Connor
Yeah, I will, I just don't know when.  I have been trying to get to this
and lots of other pg_autovacuum tasks, but my schedule has been quite
crazy as of late.  Anyway, this should probably be a pretty simple patch,
so I can probably find some time to look at it soon.

Any idea on the 7.4.3 release timeline?


 I looked into this and I see a number of cases where pg_autovacuum calls
 send_query(), but doesn't test for a NULL return from the function.

 Matthew, would you look into this and submit a patch?  Thanks.

 ---

 Jeff Boes wrote:
 Recently I installed and started pg_autovacuum against my new Pg 7.4.1
 installation. We use a fairly large number of temporary tables within an
 application (that is, several copies of this application may be running,
 and each creates and drops several temp tables as they cycle through
 their workload). Here's what I think happened, based on the log
 (pg_autovacuum's and the postmaster's):

 pg_autovacuum.log:
 [2004-02-15 08:10:01 AM] Performing: ANALYZE pg_temp_13.tmp_targs
 [2004-02-15 08:10:01 AM] Can not refresh statistics information from the
 database nexcerpt.
 [2004-02-15 08:10:01 AM] The error is [ERROR:  relation
 pg_temp_13.tmp_targs does not exist

 postmaster.log:
 2004-02-15 08:10:01 [31563] ERROR:  relation pg_temp_13.tmp_targs does
 not exist
 2004-02-15 08:10:01 [31563] LOG:  unexpected EOF on client connection


 It appears that pg_autovacuum collected the name of a temp table, and
 later tried to analyze it. The table was gone by then, and this caused
 the daemon to exit. As this happened on a Sunday morning, my weekend
 experiment to see how pg_autovacuum would maintain our test database was
 rather spoiled ... 8-(

 --
 Jeff Boes  vox 269.226.9550 ext 24
 Database Engineer fax 269.349.9076
 Nexcerpt, Inc. http://www.nexcerpt.com
...Nexcerpt... Extend your Expertise


 ---(end of broadcast)---
 TIP 8: explain analyze is your friend


 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania
 19073




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Reporting a security hole

2004-04-20 Thread Bruce Momjian

You can send it to [EMAIL PROTECTED], but I already saw your report on
the hackers list.

---

[EMAIL PROTECTED] wrote:
 I work at Coverity where we make a static analysis tool to find bugs in
 software at compile time.  I think I found a security hole in
 postgresql-7.4.1, but I don't want to just report it to a public list.  I
 sent email to [EMAIL PROTECTED], hoping that the address existed,
 but I got no response.
 
 So where can I report a potential security hole?
 
 thanks,
 Ken Ashcraft
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
http://archives.postgresql.org
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Christopher Kings-Lynne
I looked into this and I see a number of cases where pg_autovacuum calls
send_query(), but doesn't test for a NULL return from the function.
Matthew, would you look into this and submit a patch?  Thanks.
Does pg_autovacuum vacuum and analyze system catalog and TOAST tables 
properly?

Chris

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[HACKERS] pgbench written in Pro*C

2004-04-20 Thread Bruce Momjian
I received a copy of pgbench rewritten in Pro*C, which is similar to
embedded C.  I think it was done so the same program could be tested on
Oracle and PostgreSQL.

Are folks interested in this code?  Should it be put on gborg or in our
/contrib/pgbench?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] [PERFORM] pgbench written in Pro*C

2004-04-20 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 I received a copy of pgbench rewritten in Pro*C, which is similar to
 embedded C.  I think it was done so the same program could be tested on
 Oracle and PostgreSQL.

 Are folks interested in this code?  Should it be put on gborg or in our
 /contrib/pgbench?

If it requires non-free tools even to build, it is of no value.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] pg_autovacuum crashes when query fails for temp

2004-04-20 Thread Tom Lane
Matthew T. O'Connor [EMAIL PROTECTED] writes:
 Bruce Momjian wrote:
 Should pg_autovacuum be vacuuming temporary tables?

 This is a good question, and I would like some opinions from some other
 people more informed than I.

You *can not* vacuum other sessions' temp tables; you don't have access
to the data.  (You have no way to get at pages that are modified in
someone else's local buffer manager.)  You could vacuum your own temp
tables, if you had any, but I would hardly expect autovacuum to have
any.

In reference to Chris' followup question, you *should* be vacuuming
system catalogs, and you *should not* be vacuuming TOAST tables.  VACUUM
on a regular table automatically hits the associated TOAST table.

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Matthew T. O'Connor
Christopher Kings-Lynne wrote:

Does pg_autovacuum vacuum and analyze system catalog and TOAST tables 
properly?


Properly?  I think so; that is, to the best of my knowledge, which is a 
bit limited :-)

Toast Tables:  pg_autovacuum doesn't do anything to toast tables 
explicitly.  I am not aware that they need to be considered; I believe 
that if you vacuum table_foo, which has an underlying toast table, then 
vacuum does the right thing.  Am I wrong?  Someone please enlighten me 
if I am, and I will fix it.

System Tables:  pg_autovacuum treats non-shared system tables just like 
any other table: it monitors the activity and vacuums when it deems it 
appropriate.  As for shared system tables: in user databases they are 
only analyzed by pg_autovacuum; while connected to template1, 
pg_autovacuum treats the shared tables as normal tables and vacuums 
them when appropriate.

Does all this sound kosher?

Thanks,

Matthew

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Matthew T. O'Connor
Bruce Momjian wrote:

No, I have not heard of a 7.4.3 timeline, but we certainly want your
eventual fixes in that release.
 

Right, and along these lines there are a few other pg_autovacuum bugs 
that were fixed just after 7.4.2.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] pg_autovacuum crashes when query fails for temp

2004-04-20 Thread Matthew T. O'Connor
Tom Lane wrote:

Matthew T. O'Connor [EMAIL PROTECTED] writes:
 

This is a good question, and I would like some opinions from some other
people more informed than I.
   

You *can not* vacuum other sessions' temp tables; you don't have access
to the data.  (You have no way to get at pages that are modified in
someone else's local buffer manager.)  You could vacuum your own temp
tables, if you had any, but I would hardly expect autovacuum to have
any.
 

Ok, so I will change pg_autovacuum to explicitly ignore temp tables.  
Just to be sure, I can do this by avoiding anything found in the pg_temp 
schema, or is there a better way?  Is it possible that a user could or 
would put a non-temp table in the pg_temp schema?

In reference to Chris' followup question, you *should* be vacuuming
system catalogs, and you *should not* be vacuuming TOAST tables.  VACUUM
on a regular table automatically hits the associated TOAST table.
Ok, this is as I thought: pg_autovacuum is already doing the right thing 
here (see the post I just sent a few minutes ago for more details). 

Thanks Tom,

Matthew

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Christopher Kings-Lynne
No, I have not heard of a 7.4.3 timeline, but we certainly want your
eventual fixes in that release.
Right, and along these lines there are a few other pg_autovacuum bugs 
that were fixed just after 7.4.2.
A rollable log solution would be nice :)  Syslog? :)

Chris

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] pg_autovacuum crashes when query fails for temp

2004-04-20 Thread Tom Lane
Matthew T. O'Connor [EMAIL PROTECTED] writes:
 Just to be sure, I can do this by avoiding anything found in the pg_temp 
 schema, or is there a better way?  Is it possible that a user could or 
 would put a non-temp table in the pg_temp schema?

The pg_temp_NN schemas are the temp objects, by definition and by
implementation.  (Essentially, the reason a temp table is temp is that
its backend does DROP SCHEMA pg_temp_NN on exit.)  See namespace.c,
particularly the isTempNamespace and isOtherTempNamespace functions, for
the gory details.
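For a client like pg_autovacuum, which cannot call the backend's isTempNamespace(), a rough name-based filter along these lines would catch the pg_temp_NN schemas (a sketch of the idea, not the backend's actual test):

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical client-side check: a relation lives in a per-backend
 * temp schema iff its schema name is "pg_temp_" followed by digits.
 * The backend decides this authoritatively with isTempNamespace()
 * in namespace.c; a client can only go by the name. */
static bool is_temp_schema(const char *nspname)
{
    if (strncmp(nspname, "pg_temp_", 8) != 0)
        return false;
    const char *p = nspname + 8;
    if (*p == '\0')
        return false;           /* "pg_temp_" alone: no backend id */
    for (; *p; p++)
        if (!isdigit((unsigned char) *p))
            return false;
    return true;
}

int main(void)
{
    assert(is_temp_schema("pg_temp_13"));
    assert(!is_temp_schema("public"));
    assert(!is_temp_schema("pg_temporary"));  /* user schema, not temp */
    return 0;
}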

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[HACKERS] Duplicate variable declared?

2004-04-20 Thread mike g
Hello,

In file postgresql-7.4.2/src/backend/utils/fmgrtab.c

lines 336 and 337 both read: extern Datum array_push(PG_FUNCTION_ARGS);

I have created a simple perl script that examines files line by line and,
if two lines in a row match, prints a result.

Sometimes it catches bad cvs checkins or copy/paste mistakes.

I will continue to scan the output.

Hope this helps.

Mike

On Tue, 2004-04-20 at 22:51, Matthew T. O'Connor wrote:
 Tom Lane wrote:
 
 Matthew T. O'Connor [EMAIL PROTECTED] writes:
   
 
 This is a good question, and I would like some opinions from some other
 people more informed than I.
 
 
 
 You *can not* vacuum other sessions' temp tables; you don't have access
 to the data.  (You have no way to get at pages that are modified in
 someone else's local buffer manager.)  You could vacuum your own temp
 tables, if you had any, but I would hardly expect autovacuum to have
 any.
   
 
 
 Ok, so I will change pg_autovacuum to explicitly ignore temp tables.  
 Just to be sure, I can do this by avoiding anything found in the pg_temp 
 schema, or is there a better way?  Is it possible that a user could or 
 would put a non-temp table in the pg_temp schema?
 
 In reference to Chris' followup question, you *should* be vacuuming
 system catalogs, and you *should not* be vacuuming TOAST tables.  VACUUM
 on a regular table automatically hits the associated TOAST table.
 
 
 Ok, this is as I thought, pg_autovacuum is already doing the right thing 
 here. (see the post I just sent a few minutes ago for more details). 
 
 Thanks Tom,
 
 Matthew
 
 
 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] pg_autovacuum crashes when query fails for temp tables

2004-04-20 Thread Tom Lane
Matthew T. O'Connor [EMAIL PROTECTED] writes:
 System Tables:  pg_autovacuum treats non-shared system tables just like 
 any other table: it monitors the activity and vacuums when it deems it 
 appropriate.  As for shared system tables: in user databases they are 
 only analyzed by pg_autovacuum; while connected to template1, 
 pg_autovacuum treats the shared tables as normal tables and vacuums 
 them when appropriate.

As long as you hit template1 reasonably often, this will work.  But I'm
a bit concerned about the possibility that some maverick will decide he
doesn't need template1.  (It's at least theoretically possible to run
without it.)

Plan B would be to ignore the sharedness issue and vacuum/analyze shared
catalogs the same as anything else.  While this would certainly result
in more vacuums than really necessary, these tables are probably small
enough that it'd hardly matter ...

Comments?

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] pg_autovacuum crashes when query fails for temp

2004-04-20 Thread Christopher Kings-Lynne
Ok, so I will change pg_autovacuum to explicitly ignore temp tables.  
Just to be sure, I can do this by avoiding anything found in the pg_temp 
schema, or is there a better way?  Is it possible that a user could or 
would put a non-temp table in the pg_temp schema?
There's no such thing as the pg_temp schema; you will get lots of 
pg_temp_xxx schemas, I think.

Chris

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Duplicate variable declared?

2004-04-20 Thread Joe Conway
mike g wrote:
In file postgresql-7.4.2/src/backend/utils/fmgrtab.c
This is an automatically generated file. The reason for duplicate 
array_push declarations is that one-and-the-same array_push function is 
used to implement two SQL functions, array_append and array_prepend. I 
don't imagine the duplicate declaration hurts anything.

Joe

Mike

On Tue, 2004-04-20 at 22:51, Matthew T. O'Connor wrote:

Tom Lane wrote:
Matthew T. O'Connor [EMAIL PROTECTED] writes:
This is a good question, and I would like some opinions from some other
people more informed than I.
p.s. It is generally considered bad form to reply to an existing thread 
with an entirely new topic.



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] [PERFORM] pgbench written in Pro*C

2004-04-20 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  I received a copy of pgbench rewritten in Pro*C, which is similar to
  embedded C.  I think it was done so the same program could be tested on
  Oracle and PostgreSQL.
 
  Are folks interested in this code?  Should it be put on gborg or in our
  /contrib/pgbench?
 
 If it requires non-free tools even to build, it is of no value.

OK, its only value would be if we could modify it so it compiled using
our ecpg and Pro*C, and the comparison program could be run on both
databases.

I will tell the submitter to put it on gborg if they wish.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html