[HACKERS] GRANT/REVOKE column-level privileges

2006-01-13 Thread kevin brintnall
Has anyone else taken a look at this?  I thought I'd play around with the
system catalog and see if I couldn't put an ACL column into pg_attribute:

It ended up generating the following BKI line:

insert ( 1249 attacl 1034 -1 -1 18 1 -1 -1 f x i f f f t 0 _null_ )

And the ROW certainly appears to be in pg_attribute:

template1=# select * from pg_attribute where attrelid=1249 and attnum=18;
-[ RECORD 1 ]-+-------
attrelid      | 1249
attname       | attacl
atttypid      | 1034
attstattarget | -1
attlen        | -1
attnum        | 18
attndims      | 1
attcacheoff   | -1
atttypmod     | -1
attbyval      | f
attstorage    | x
attalign      | i
attnotnull    | f
atthasdef     | f
attisdropped  | f
attislocal    | t
attinhcount   | 0

 no attacl column though!

However, the COLUMN doesn't appear to the parser:

[EMAIL PROTECTED]/test=# select attacl from pg_attribute;
ERROR:  column attacl does not exist

-

For better or worse, I tried the idea from pg_class where the attacl[]
comes at the end of the CATALOG(pg_attribute):

*** include/catalog/pg_attribute.h  15 Oct 2005 02:49:42 -  1.119
--- include/catalog/pg_attribute.h  13 Jan 2006 09:29:06 -
***
*** 37,44 
--- 37,50 
   *
   *	If you change the following, make sure you change the structs
   *	for system attributes in catalog/heap.c also.
   * 
+  *	This structure is actually variable-length (the last attribute is
+  *	a POSTGRES array).  Hence, sizeof(FormData_pg_attribute) does not
+  *	necessarily match the actual length of the structure.  Furthermore
+  *	attacl may be a NULL field.  Hence, you MUST use heap_getattr()
+  *	to get the attacl field ... and don't forget to check isNull.
+  * 
   */
  #define AttributeRelationId  1249
  
  CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS
***
*** 148,161 
--- 154,174 
boolattislocal;
  
/* Number of times inherited from direct parent relation(s) */
int4attinhcount;
+ 
+   /*
+* attacl may or may not be present, see note above!
+*/
+   aclitem attacl[1];  /* we declare this just for the catalog */
+ 
  } FormData_pg_attribute;
  
  /*
   * someone should figure out how to do this properly. (The problem is
   * the size of the C struct is not the same as the size of the tuple
   * because of alignment padding at the end of the struct.)
+  * This includes only the fixed part of the tuple (not the attacl).
   */
  #define ATTRIBUTE_TUPLE_SIZE \
(offsetof(FormData_pg_attribute,attinhcount) + sizeof(int4))


-

What is causing the parser not to be able to see that attacl is a valid
column?  Have I missed something in the relcache?  Or is the pg_class hack
(with its relacl[] on the end of the struct) truly not going to work with
pg_attribute?

Ideas?

-- 
 kevin brintnall =~ [EMAIL PROTECTED]

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] GRANT/REVOKE column-level privileges

2006-01-13 Thread Martijn van Oosterhout
On Fri, Jan 13, 2006 at 03:37:32AM -0600, kevin brintnall wrote:
> Has anyone else taken a look at this?  I thought I'd play around with the
> system catalog and see if I couldn't put an ACL column into pg_attribute:
> 
> It ended up generating the following BKI line:
> 
> 	insert ( 1249 attacl 1034 -1 -1 18 1 -1 -1 f x i f f f t 0 _null_ )

Umm, yes. You also need to add the column to the contents of
pg_attribute, give the attribute a number, increase the number of
attributes as stored in pg_class, update the #define that gives the
attribute count, change the macro that gives the size of the
pg_attribute structure (ATTRIBUTE_TUPLE_SIZE) and update all the places
that create the structure to store a null or something else in that
column.

Even then, I think I missed some steps, but this should get you a bit
further...

Good luck!
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




[HACKERS] PostgreSQL win32 NT4

2006-01-13 Thread Magnus Hagander
Hi!

I'm looking at a way to fix the issues with admin privileges on Win32
- specifically by finding a way to give up all admin stuff before the
server starts, when possible (and otherwise fail just as today).

I think I can do this. However, it is not possible to do it in a way
that's compatible with NT4 - the APIs didn't exist back then. So from
this, I see two ways to proceed:

1) Do it in a Windows 2000+ way. This will once and for all break
compatibility with NT4 - pg simply will not work any more.

2) Code around it by dynamically loading these functions when available.
With this solution it will still run on NT4 - just without this
capability. It will, however, be quite a bit more code, since all the
functions have to be loaded at runtime.



Now, (2) can be done, and it's not hugely more code for this change.
But the number of places with such issues will continue to go up, and
eventually we're going to have to pull the plug, IMHO. The question is:
is it time to do it now? Or do we wait until we hit a change that leaves
us no choice at all?

(Oh, and as everybody knows, NT4 isn't supported by Microsoft any more,
see
http://www.microsoft.com/ntserver/ProductInfo/Availability/Retiring.asp)


Comments?

//Magnus



Re: [HACKERS] PostgreSQL win32 NT4

2006-01-13 Thread pmagnoli
NT4 is officially dead; IMHO there's no need for PostgreSQL to officially
support it. Let's leave room for companies offering commercial PostgreSQL
versions to work on it if they have enough customer requests.
BTW, Windows 2000 is more or less 6 years old now ...
Regards

Paolo

Magnus Hagander [EMAIL PROTECTED] wrote:

> Hi!
> 
> I'm looking at a way to fix the issues with admin privileges on Win32
> - specifically by finding a way to give up all admin stuff before the
> server starts, when possible (and otherwise fail just as today).
> 
> I think I can do this. However, it is not possible to do this in a way
> that's compatible with NT4. The APIs weren't created then. So from this,
> I see two ways to proceed:
> 
> 1) Do it in a Windows 2000+ way. This will once and for all break
> compatibility with NT4 - pg simply will not work any more.
> 
> 2) Code around it by dynamically loading these functions when available.
> With this solution it will still run on NT4 - without this capability.
> It will however be quite a bit more code, since all the functions have
> to be loaded at runtime.
> 
> Now (2) can be done, and it's not hugely more code for this change.
> But the number of places with issues will continue to go up, and
> eventually we're going to have to pull the plug, IMHO. The question is -
> is it time to do it now? Or do we wait until we hit a change that leaves
> us no choice at all?
> 
> (Oh, and as everybody knows, NT4 isn't supported by Microsoft any more,
> see
> http://www.microsoft.com/ntserver/ProductInfo/Availability/Retiring.asp)
> 
> Comments?
> 
> //Magnus






Re: [HACKERS] GRANT/REVOKE column-level privileges

2006-01-13 Thread Tom Lane
Martijn van Oosterhout <kleptog@svana.org> writes:
> Umm, yes. You also need to add the column to the contents of
> pg_attribute, give the attribute a number, increase the number of
> attributes as stored in pg_class, update the #define that gives the
> attribute count, change the macro that gives the size of the
> pg_attribute structure (ATTRIBUTE_TUPLE_SIZE) and update all the places
> that create the structure to store a null or something else in that
> column.
> 
> Even then, I think I missed some steps, but this should get you a bit
> further...

It'd be worthwhile to look into the CVS history to study past commits
that have added columns to pg_attribute.  Adding columns to any of
the core system catalogs is generally a PITA ... not impossible,
but there are plenty of details to take care of.

regards, tom lane



Re: [HACKERS] C++ - C : Module for converting the WHERE clause to the canonical form with PostgreSQL

2006-01-13 Thread Tom Lane
St Valentine [EMAIL PROTECTED] writes:
> That module is written in the C++ language, how can I connect it to
> the PostgreSQL sources?

Convert it to C ;-)  Seriously, I think this would be a major pain in
the neck to do --- there are various gotchas like the system headers not
being C++-clean.  Why don't you just resurrect the qual canonicalization
logic that existed in src/backend/optimizer/prep/prepqual.c not too long
ago?

> In our project we must compare queries. Because the same query can be
> written in different forms, to compare queries we must convert them to
> the same form of presentation.

I find this argument fairly dubious, though, as there are plenty of ways
to write the same query differently.  Forcing the condition into CNF or
DNF will fix only some of them.

regards, tom lane



Re: [HACKERS] GRANT/REVOKE column-level privileges

2006-01-13 Thread kevin brintnall
On Fri, Jan 13, 2006 at 10:04:10AM -0500, Tom Lane wrote:
> Martijn van Oosterhout <kleptog@svana.org> writes:
> > Umm, yes. You also need to add the column to the contents of
> > pg_attribute, give the attribute a number, increase the number of
> > attributes as stored in pg_class, update the #define that gives the
> > attribute count, change the macro that gives the size of the
> > pg_attribute structure (ATTRIBUTE_TUPLE_SIZE) and update all the places
> > that create the structure to store a null or something else in that
> > column.

I did all that, with the exception of the relnatts entry in pg_class.  I
omitted my full diff for brevity.

> > Even then, I think I missed some steps, but this should get you a bit
> > further...
> 
> It'd be worthwhile to look into the CVS history to study past commits
> that have added columns to pg_attribute.  Adding columns to any of
> the core system catalogs is generally a PITA ... not impossible,
> but there are plenty of details to take care of.

Thank you.  That is a good suggestion.

-- 
 kevin brintnall =~ [EMAIL PROTECTED]



[HACKERS] Control File

2006-01-13 Thread Bruno Almeida do Lago
Hi,

Today I was playing with our test environment and noticed that if I delete a
PostgreSQL datafile (with the cluster down) and bring up the database, it
will simply... come up :-/

I wonder if it shouldn't complain about the missing datafile before opening
the cluster... I don't know... maybe something like the Oracle control file,
which stores information about the files needed to start the database.

It seems pretty easy to do (I may be wrong...).

What do you think? Are there any special reasons for PostgreSQL not to
complain about missing datafiles on startup? Would that be hard to
implement? (Maybe I could do it.)


C ya,
Bruno








Re: [HACKERS] Control File

2006-01-13 Thread Qingqing Zhou

Bruno Almeida do Lago [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
> Hi,
> 
> Today I was playing with our test environment and noticed that if I delete a
> PostgreSQL datafile (with the cluster down) and bring up the database, it
> will simply... come up :-/
> 
> I wonder if it shouldn't complain about the missing datafile before opening
> the cluster... I don't know... Maybe something like the Oracle control file
> which stores information about the files needed to start the database.
> 
> It seems pretty easy to do it (I may be wrong...).


What if, after your check is done, I remove the data file?

Regards,
Qingqing 





Re: [HACKERS] PostgreSQL win32 NT4

2006-01-13 Thread Magnus Hagander
> > I'm looking at a way to fix the issues with admin privileges on Win32
> > - specifically by finding a way to give up all admin stuff before the
> > server starts, when possible (and otherwise fail just as today).
> 
> So this will let an admin start up Postgres ... I've got a
> question -- if it can give up all admin stuff, can it reclaim it later?

Depends on how you do it. You can do it in such a way that it cannot be
reclaimed, which is the only option we'd be interested in. To do this it
has to be implemented in a combination of pg_ctl and the postmaster.
Actually, you can do it three ways:

1) Can be reclaimed. 

2) Can't be reclaimed, but you can create a new token using a different
user - *IFF* you have the username/password for this.

3) Can't be reclaimed, can't change your token in any way at all, no
matter what you do. This does not work in all scenarios (for example, it
doesn't work if the process was already started by runas)


//Magnus



Re: [HACKERS] Checkpoint question

2006-01-13 Thread Jim C. Nasby
On Thu, Jan 12, 2006 at 05:00:49PM -0500, Qingqing Zhou wrote:
> On Thu, 12 Jan 2006, Jim C. Nasby wrote:
> 
> > It sounds like worrying about this would be much more interesting on a
> > machine that is seeing both a fairly heavy IO load (meaning checkpoint
> > will both take longer and affect other workloads more) and is seeing a
> > pretty high rate of buffer updates (meaning that we'd likely do a bunch
> > of extra work as part of the checkpoint if we didn't take note of
> > exactly what buffers needed to be flushed). Unfortunately I don't think
> > there's any way for the backend to know much about either condition
> > right now, so it couldn't decide when it made sense to make a list of
> > buffers to flush. Maybe in the future...
> 
> The scenario you mentioned happens in many OLTP applications. No need
> for the backend to know this -- we can leave the decision to the DBA:
> CHECKPOINT FULL or CHECKPOINT PARTIAL. If you've got some machines where
> you can observe the CHECKPOINT duration, that would be sweet.

Maybe I'm missing something here, but wouldn't that only help if you
were manually issuing checkpoints?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] Improving N-Distinct estimation by ANALYZE

2006-01-13 Thread Simon Riggs
On Fri, 2006-01-06 at 15:26 -0800, Josh Berkus wrote:

> Anyway, since the proof is in the pudding, Simon and I will be working on
> some demo code for different sampling methods so that we can debate
> results rather than theory.

I enclose a patch for checking out block sampling. This is not
production ready, yet, but works bug-free on cvstip. Code comments have
been fully updated to explain what's going on inside.

All you need to do is set analyze_blocks=b and ANALYZE will switch
over to using the block sampling method and read all the rows in b
blocks. The sample size will also be limited by maintenance_work_mem.
(Memory limitations could be smarter). This de-couples the specification
of the sample size from the specification of the MCV/histogram size
(stats_target).
[Right now, I'm not suggesting that we have a GUC named this - it just
exists for testing. If/when we agree to allow block sampling, then we
can discuss how to invoke/specify it]

The stats calculations aren't touched - it still uses Haas-Stokes.
If you set log_min_messages=DEBUG2 you'll also get more useful
explanations of what the variables are and what decisions it makes about
D for each column being analyzed.

This patch has two main effects:
- ANALYZE runs more than x10 faster to retrieve the same size sample
- you can specify much larger samples for bigger tables, without
increasing the stats targets

Generally, the larger samples will give better results for the
estimation. However, what is immediately apparent is that the
Haas-Stokes estimator actually gets even worse with block sampling in
the particular case I raised on-list. (Which is understandable, but not
desirable). ISTM this is a strike against Haas-Stokes, rather than a
strike against block sampling. So I'm looking at implementing the
Brutlag & Richardson estimator(s) that cope with
number-of-values-appearing in only one block. Not surprisingly that
means some non-trivial additional code to retrieve blockids for each
tuple and make decisions accordingly. I plan to use a similar technique
to the existing TupnoLink array to match blockids.

The BR estimators should allow a fairly fast, small sample to be taken,
making this more usable for dynamic sampling during query planning (when
desirable, see recent -perform thread).

It's also worth mentioning that for datatypes that only have an =
operator the performance of compute_minimal_stats is O(N^2) when values
are unique, so increasing sample size is a very bad idea in that case.
It may be possible to re-sample the sample, so that we get only one row
per block as with the current row sampling method. Another idea might be
just to abort the analysis when it looks fairly unique, rather than
churn through the whole sample.

Best Regards, Simon Riggs

Index: src/backend/commands/analyze.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/analyze.c,v
retrieving revision 1.90
diff -c -r1.90 analyze.c
*** src/backend/commands/analyze.c	22 Nov 2005 18:17:08 -	1.90
--- src/backend/commands/analyze.c	13 Jan 2006 18:54:26 -
***
*** 62,67 
--- 62,68 
  
  /* Default statistics target (GUC parameter) */
  int			default_statistics_target = 10;
+ int analyze_blocks = 0;
  
  static int	elevel = -1;
  
***
*** 77,84 
  	HeapTuple *rows, int numrows,
  	MemoryContext col_context);
  static VacAttrStats *examine_attribute(Relation onerel, int attnum);
! static int acquire_sample_rows(Relation onerel, HeapTuple *rows,
! 	int targrows, double *totalrows, double *totaldeadrows);
  static double random_fract(void);
  static double init_selection_state(int n);
  static double get_next_S(double t, int n, double *stateptr);
--- 78,86 
  	HeapTuple *rows, int numrows,
  	MemoryContext col_context);
  static VacAttrStats *examine_attribute(Relation onerel, int attnum);
! static HeapTuple *acquire_sample_rows(Relation onerel, 
! int targblocks, int targrows, long allowedMem,
! int *samplerows, double *totalrows, double *totaldeadrows);
  static double random_fract(void);
  static double init_selection_state(int n);
  static double get_next_S(double t, int n, double *stateptr);
***
*** 108,117 
--- 110,122 
  	VacAttrStats **vacattrstats;
  	AnlIndexData *indexdata;
  	int			targrows,
+ targblocks,
  numrows;
  	double		totalrows,
  totaldeadrows;
  	HeapTuple  *rows;
+ 	long		allowedMem = 0;		/* total memory allowed, in bytes */
+ 
  
  	if (vacstmt-verbose)
  		elevel = INFO;
***
*** 348,357 
  	/*
  	 * Acquire the sample rows
  	 */
- 	rows = (HeapTuple *) palloc(targrows * sizeof(HeapTuple));
- 	numrows = acquire_sample_rows(onerel, rows, targrows,
-   totalrows, totaldeadrows);
  
  	/*
  	 * Compute the statistics.	Temporary results during the calculations for
  	 * each column are stored in a 

Re: [HACKERS] Contrib Schemas

2006-01-13 Thread Simon Riggs
On Thu, 2006-01-12 at 19:33 -0500, Tom Lane wrote:
> David Fetter [EMAIL PROTECTED] writes:
> > What do you think?
> 
> I think it's not broken and doesn't need fixing.  I have not seen any
> groundswell of demand for moving the contrib stuff out of the public
> schema.  On the other hand, doing so *will* break many if not all
> existing applications that use contrib modules.

This is the kind of thing that makes *me* break between releases. :-)

My memory is bad enough already... do we need this?

Best Regards, Simon Riggs




Re: [HACKERS] Contrib Schemas

2006-01-13 Thread Josh Berkus
John,

> Would it be reasonable for there to be a way for the super user to
> grant access to load approved modules and/or C language functions?

I can't see a way to do this except individually, in which case the 
superuser might as well load the functions.   We *have* to be restrictive 
about this because a C function can do anything, including overwriting 
whatever parts of the filesystem postgres has access to.  Look over our 
patch releases for the last 2 years and you'll see a host of patches 
designed specifically to prevent regular users from gaining access to 
superuser privileges.  

What you want isn't impossible, but it would be a lot of work and testing 
to engineer such a mechanism and keep PostgreSQL's most-secure status.  
So far, everyone has found it easier to work around the issue, especially 
since for most sites backup/restore is done by the superuser anyway.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [HACKERS] Contrib Schemas

2006-01-13 Thread John DeSoi

Hi Josh,

On Jan 13, 2006, at 2:34 PM, Josh Berkus wrote:


> I can't see a way to do this except individually, in which case the
> superuser might as well load the functions.   We *have* to be restrictive
> about this because a C function can do anything, including overwriting
> whatever parts of the filesystem postgres has access to.  Look over our
> patch releases for the last 2 years and you'll see a host of patches
> designed specifically to prevent regular users from gaining access to
> superuser privileges.
> 
> What you want isn't impossible, but it would be a lot of work and testing
> to engineer such a mechanism and keep PostgreSQL's most-secure status.
> So far, everyone has found it easier to work around the issue, especially
> since for most sites backup/restore is done by the superuser anyway.


I suspected it was out of the question for security reasons, but I
wanted to bring it up to make sure I was not missing some alternative
solution.

I backup and restore all the time for hosted web sites running with
PostgreSQL as a content management system. This is critical for doing
site upgrades, and you certainly can't depend on the superuser in a
hosted environment.

Maybe the best solution here would be some web interface set up by the
hoster to perform specific approved tasks like a tsearch install. This
is already the mechanism used to allow users to create their own
databases.

Thanks for taking the time to respond.



John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL




[HACKERS] simple utility commands (src/backend/commands)

2006-01-13 Thread Gevik babakhani

Folks,

I would like to understand how the 'simple' backend commands work.

After reading "How PostgreSQL Processes a Query", I was wondering:
which command would be the easiest to begin with? (For a newbie, of
course.)

Regards,

Gevik.

Re: [HACKERS] simple utility commands (src/backend/commands)

2006-01-13 Thread Jonah H. Harris
You should look at something like the LOCK TABLE command. The following
areas will help:

src/backend/tcop/utility.c
src/backend/commands/lockcmds.c
src/backend/nodes/*funcs.c
src/backend/parser

-Jonah
On 1/13/06, Gevik babakhani [EMAIL PROTECTED] wrote:
> Folks,
> 
> I would like to understand how the 'simple' backend commands work.
> 
> After reading "How PostgreSQL Processes a Query", I was wondering:
> which command would be the easiest to begin with? (For a newbie, of
> course.)
> 
> Regards,
> Gevik.

Re: [HACKERS] Improving N-Distinct estimation by ANALYZE

2006-01-13 Thread Josh Berkus
Simon,

> It's also worth mentioning that for datatypes that only have an =
> operator the performance of compute_minimal_stats is O(N^2) when values
> are unique, so increasing sample size is a very bad idea in that case.
> It may be possible to re-sample the sample, so that we get only one row
> per block as with the current row sampling method. Another idea might be
> just to abort the analysis when it looks fairly unique, rather than
> churn through the whole sample.

I'd tend to do the latter.   If we haven't had a value repeat in 25 blocks, 
how likely is one to appear later?

Hmmm ... does ANALYZE check for UNIQUE constraints?   Most unique values 
are going to have a constraint, in which case we don't need to sample them 
at all for N-distinct.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [HACKERS] Improving N-Distinct estimation by ANALYZE

2006-01-13 Thread Tom Lane
Josh Berkus <josh@agliodbs.com> writes:
> > It's also worth mentioning that for datatypes that only have an =
> > operator the performance of compute_minimal_stats is O(N^2) when values
> > are unique, so increasing sample size is a very bad idea in that case.
> 
> Hmmm ... does ANALYZE check for UNIQUE constraints?

Our only implementation of UNIQUE constraints is btree indexes, which
require more than an = operator, so this seems irrelevant.

regards, tom lane



[HACKERS] message for constraint

2006-01-13 Thread Jaime Casanova
Hi,

From time to time people ask me if there is a way to customize
messages for constraints so they could be more informative to the
user...

Imagine something like:

create table foo (fld int4 check (fld > 0));
message for constraint foo_fld_check on foo is 'fld field must contain
positive numbers only.';

so I can let this message go directly to my users, and they can
respond without knowing what a check constraint is...

we can use the pg_description catalog, with a column added to indicate
whether it is a comment or a message for a constraint...

what do you think, is it worth the effort?

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)
