Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Justin Clift

Hi Bruce,

Haven't looked at the code, but there's no license with it.

Andreas, are you cool with having the same License as PostgreSQL for it
(BSD license)?

:-)

Regards and best wishes,

Justin Clift


Bruce Momjian wrote:
 
 Can someone comment on this?  I can't decide.
 
 ---
 
 Andreas Scherbaum wrote:
 
  Hello,
 
  I have written a module for logging changes on a table (without knowing
  the exact column names).
  I don't know where to put it, but it's ready for use in the contrib directory.
 
  Its available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)
 
  Would be nice, if this can be added.
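For readers who want the shape of the idea without downloading the tarball, here is a hand-rolled, per-table sketch of this kind of change logging. This is an illustration only, not Andreas's module; table_log generalizes it so the trigger need not name the columns. Table and column names are invented.

```sql
-- Illustration only: a per-table version of generic change logging.
-- The data table, and a log table with the same columns plus bookkeeping:
CREATE TABLE t (id integer, val text);
CREATE TABLE t_log (id integer, val text,
                    log_op text, log_when timestamp DEFAULT now());

-- Trigger procedure: record OLD for deletes, NEW for inserts/updates.
CREATE FUNCTION t_log_fn() RETURNS opaque AS '
BEGIN
    IF TG_OP = ''DELETE'' THEN
        INSERT INTO t_log (id, val, log_op) VALUES (OLD.id, OLD.val, TG_OP);
        RETURN OLD;
    END IF;
    INSERT INTO t_log (id, val, log_op) VALUES (NEW.id, NEW.val, TG_OP);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER t_log_trig AFTER INSERT OR UPDATE OR DELETE ON t
    FOR EACH ROW EXECUTE PROCEDURE t_log_fn();
```

The per-table boilerplate above is exactly what a generic C-level module avoids.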
 
 
  Best regards
 
  --
Andreas 'ads' Scherbaum
  Failure is not an option. It comes bundled with your Microsoft product.
   (Ferenc Mantfeld)
 
 
 
 
 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 853-3000
   +  If your life is a hard drive, |  830 Blythe Avenue
   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




[HACKERS] Is this an IN bug?

2002-04-18 Thread Christopher Kings-Lynne

Is this a bug?

usa=# SELECT * FROM palm_buyers WHERE buyer_id=in('150',210) ;
ERROR:  Function 'in(unknown, int4)' does not exist
Unable to identify a function that satisfies the given argument
types
You may need to add explicit typecasts

Chris
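For the archives: the error arises because IN is an SQL predicate, not a callable function, so it cannot appear on the right of `=` with an argument list. The presumably intended form (same table and column as above) is:

```sql
-- IN is a predicate; the value list follows the column directly:
SELECT * FROM palm_buyers WHERE buyer_id IN (150, 210);

-- which is shorthand for:
SELECT * FROM palm_buyers WHERE buyer_id = 150 OR buyer_id = 210;
```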





[HACKERS] Oops!

2002-04-18 Thread Christopher Kings-Lynne

Ignore my previous post - for obvious reasons!!!

Chris





Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Luis Alberto Amigo Navarro


 Look at the pgbench utility. I can't run that program without a +- 10%
 variation from run to run, no matter how many times I run vacuum and
checkpoint.


It's pgbench's fault: TPC-B was replaced with TPC-C because it is not
accurate enough. We run a pseudo TPC-H, and it shows almost no variation
from one run to another.

Regards





Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Andreas Scherbaum

Justin Clift wrote:
 
 Hi Bruce,
 
 Haven't looked at the code, but there's no license with it.
 
 Andreas, are you cool with having the same License as PostgreSQL for it
 (BSD license)?
 
 :-)
 
 Regards and best wishes,
 
 Justin Clift
 
 Bruce Momjian wrote:
 
  Can someone comment on this?  I can't decide.
 
  ---
 
  Andreas Scherbaum wrote:
  
   Hello,
  
    I have written a module for logging changes on a table (without knowing
    the exact column names).
    I don't know where to put it, but it's ready for use in the contrib
    directory.
  
   Its available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)
  
   Would be nice, if this can be added.
  
  
   Best regards

Hello,

Uhm, good point. I thought I missed something ;-)

This software is distributed under the GNU General Public License
either version 2, or (at your option) any later version.

I have updated the readme and replaced the archive with a new version.


Thanks and best regards

-- 
  Andreas 'ads' Scherbaum




Re: [HACKERS] [SQL] A bug in gistPageAddItem()/gist_tuple_replacekey() ??? (fwd)

2002-04-18 Thread Teodor Sigaev

gistPageAddItem and gist_tuple_replacekey are commented out by GIST_PAGEADDITEM. 
They were used up to 7.1, but are now unusable.

gistPageAddItem has an interesting feature: it recompresses the entry before 
writing it to disk, but we (with Oleg and Tom) couldn't find any reason to do 
this. So we have left the code in place for later consideration.

Now gistPageAddItem is wrong, because it works only with single-key indexes. In 
7.2, GiST supports multi-key indexes.



 I haven't seen any comment on this.  If no one replies, would you send
 over a patch of fixes?  Thanks.
 
 ---
 
 Dmitry Tkach wrote:
 
I was trying to write a gist index extension, and, after some debugging,
it looks like I found a bug somewhere in the gist.c code ...
I can't be quite sure, because I am not familiar with the postgres
code... but, here is what I see happenning (this is 7.1, but I compared
the sources to 7.2, and did not see this fixed - although, I did not
inspect it too carefully)...

First of all, gistPageAddItem () calls gistdentryinit() with a pointer
to what's stored in the tuple, so, 'by-value' types do not work (because
gistcentryinit () would be passed the value itself, when called from
gistinsert(), and then, in gistPageAddItem (), it is passed a pointer,
coming from gistdentryinit () - so, it just doesn't know really how to
treat the argument)...

Secondly, gist_tuple_replacekey() seems to have incorrect logic figuring
out if there is enough space in the tuple (it checks for '<' instead of
'<=') - this causes a new tuple to always get created (this one seems
to be fixed in 7.2)

Thirdly, gist_tuple_replace_key () sends a pointer to entry.pred (which
is already a pointer to the actual value) to index_formtuple (), that
looks at the tuple, sees that the type is 'pass-by-value', and puts that
pointer directly into the tuple, so that, the resulting tuple now
contains a pointer to a pointer to the actual value...

Now, if more than one split is required, this sequence is repeated again
and again and again, so that, by the time the tuple gets actually
written, it contains something like a pointer to a pointer to a pointer
to a pointer to the actual data :-(

Once again, I've seen some comments in the 7.2 branch about gists and
pass-by-value types, but a brief look at the differences in the source
did not convince me that it was indeed fixed...

Anyone knows otherwise?

Thanks a lot!

Dima




 


-- 
Teodor Sigaev
[EMAIL PROTECTED]






Re: [HACKERS] updated qCache

2002-04-18 Thread Karel Zak

On Wed, Apr 17, 2002 at 05:17:51PM -0400, Neil Conway wrote:
 Hi all,
 
 Here's an updated version of the experimental qCache patch I
 posted a couple days ago (which is a port of Karel Zak's 7.0
 work to CVS HEAD).
 
 Changes:
 
 - fix segfault in EXECUTE under some circumstances (reported
   by Barry Lind)
 - fix some memory leaks  (thanks to Karel Zak)
 - write more regression tests (make check still won't pass)
 - re-diff against CVS HEAD
 - more code cleanup, minor tweaks
 
 However, I've tentatively decided that I think the best
 way to go forward is to rewrite this code. IMHO the utility of
 plans cached in shared memory is fairly limited, but the
 code that implements this adds a lot of complexity to the patch.
 I'm planning to re-implement PREPARE/EXECUTE with support only
 for locally-prepared plans, using the existing patch as a
 guide. The result should be a simpler patch -- once it's
 in CVS we can worry about more advanced plan caching
 techniques. Any complaints/comments on this plan?

 I agree too :-) I think removing the shared memory code from this patch
 is easy, and the local memory storage is already done there.
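For readers following along: the user-visible interface under discussion is statement preparation, roughly of this shape (syntax as it eventually appeared; the details were still being settled at the time of this thread, and the table name is hypothetical):

```sql
-- Plan once per backend, then execute many times with different parameters:
PREPARE getbuyer (integer) AS
    SELECT * FROM buyers WHERE buyer_id = $1;

EXECUTE getbuyer (150);
EXECUTE getbuyer (210);

DEALLOCATE getbuyer;
```

Caching the plan locally (per backend) avoids re-planning each EXECUTE; the shared-memory variant being dropped would have let other backends reuse the same plan.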

Karel

-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/
 
 C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz




Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread tycho

On Thu, 18 Apr 2002 [EMAIL PROTECTED] wrote:

 
 On 04/17/2002 01:44:46 PM Michael Loftis wrote:
  In many of the cases where it is a primary key it is also there to
  ensure fast lookups when referenced as a foreign key.  Or for joins.
 
 Don't know if the optimizer takes this into consideration, but a query that uses a 
primary and/or unique key in the where-clause, should always choose to use
 the related indices (assuming the table size is above a certain threshold). Since a 
primary key/unique index always restricts the resultset to a single row.

I don't think so.

eg. table with primary key pk, taking values from 1 to 100 (so
100 records)

select * from table where pk <> 5

should probably not use the index ...

Cheers
Tycho

-- 
Tycho Fruru [EMAIL PROTECTED]
Prediction is extremely difficult. Especially about the future.
  - Niels Bohr





Re: [HACKERS] updated qCache

2002-04-18 Thread Karel Zak

On Wed, Apr 17, 2002 at 06:05:59PM -0400, Neil Conway wrote:
 On Wed, 17 Apr 2002 14:34:45 -0700
 
 I'm not saying it's a bad idea, I just think I'd like to
 concentrate on the locally-cached plans for now and see if
 there is a need to add shared plans later.

 Yes, later we can use shared memory buffer as pipe between
 backends:

 Backend A:Backend B:
 local-memory-query-plan --> shmem --> local-memory-query-plan
 
 In this idea there is only one query plan in shared memory, and the
 backends use it to copy a plan from A to B.

 It requires persistent backends, of course.

Karel

 PS. It's only an idea, nothing more; the original qcache was only an
 idea too :-)
-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/
 
 C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz




Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Michael Loftis



Tom Lane wrote:

Bruce Momjian [EMAIL PROTECTED] writes:

I have added this to the TODO list, with a question mark.  Hope this is
OK with everyone.


o Abort SET changes made in aborted transactions (?)


Actually, I was planning to make only search_path act that way, because
of all the push-back I'd gotten on applying it to other SET variables.
search_path really *has* to have it, but if there's anyone who agrees
with me about doing it for all SET vars, they didn't speak up :-(

I did and do, strongly.  TRANSACTIONS are supposed to leave things as 
they were before the BEGIN.  It either all happens or it all doesn't 
happen.  If you need something inside of a transaction to persist 
regardless, then it shouldn't be within the transaction.
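A concrete illustration of the semantics being debated (search_path is the variable Tom singled out; whether other SET variables should behave the same way was exactly the open question, and the schema name is hypothetical):

```sql
SHOW search_path;              -- suppose it reports the default

BEGIN;
SET search_path = myschema;    -- change a variable inside the transaction
SELECT 1/0;                    -- any error aborts the transaction
ROLLBACK;

-- The question under debate: should this now show myschema (the SET
-- survives the aborted transaction) or the default (the SET is rolled
-- back along with everything else)?
SHOW search_path;
</imports>
```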

regards, tom lane







Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Hiroshi Inoue
Michael Loftis wrote:
 
 Tom Lane wrote:
 
 Bruce Momjian [EMAIL PROTECTED] writes:
 
 I have added this to the TODO list, with a question mark.  Hope this is
 OK with everyone.
 
 
 o Abort SET changes made in aborted transactions (?)
 
 
 Actually, I was planning to make only search_path act that way, because
 of all the push-back I'd gotten on applying it to other SET variables.
 search_path really *has* to have it, but if there's anyone who agrees
 with me about doing it for all SET vars, they didn't speak up :-(
 
  I did and do, strongly.  TRANSACTIONS are supposed to leave things as
  they were before the BEGIN.  It either all happens or it all doesn't
  happen.  If you need something inside of a transaction to persist
  regardless, then it shouldn't be within the transaction.

Oops, is this issue still alive?
I object to the TODO (why?) strongly.
Please remove it from the TODO first and put it back
to a neutral position.

regards,
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/



Re: [HACKERS] [PATCHES] YADP - Yet another Dependency Patch

2002-04-18 Thread Rod Taylor

That's what I was going to propose if no-one could figure out a way of
automatically gathering system table dependencies.

It would be nice (for a minimalist db) to be able to drop a bunch of
stuff, but a number of other things would need to be done as well
(full system compression, for example).

--
Rod
- Original Message -
From: Tom Lane [EMAIL PROTECTED]
To: Rod Taylor [EMAIL PROTECTED]
Cc: Hackers List [EMAIL PROTECTED]
Sent: Thursday, April 18, 2002 1:24 AM
Subject: Re: [HACKERS] [PATCHES] YADP - Yet another Dependency Patch


 Rod Taylor [EMAIL PROTECTED] writes:
  3. Isn't there a better way to find the initial dependencies?  That
  SELECT is truly ugly, and more to the point is highly likely to
  break anytime someone rearranges the catalogs.

  I'm having a really hard time coming up with a good method for this.

 Well, do we actually need an *accurate* representation of the
 dependencies?  You seemed to like my idea of pinning essential stuff,
 and in reality all of the initial catalog structures ought to be pinned.
 Maybe it would be sufficient to just make pinned entries for
 everything that appears in the initial catalogs.  Someone who's really
 intent on manually deleting, say, the box datatype could be expected
 to be bright enough to figure out how to remove the pg_depends entry
 that's preventing him from doing so.

 (There are a very small number of things that are specifically intended
 to be droppable, like the public namespace, but seems like excluding
 that short list from the pg_depends entries would be more maintainable
 than the approach you've got now.)

 regards, tom lane






[HACKERS] another optimizer question

2002-04-18 Thread Jakub Ouhrabka

hi,
can anyone explain to me why there are different query plans for
'select ... from ... where y != x' and 'select ... from ... where
y < x or y > x' for integers, please?
see the details below...

thanks,
kuba

db_cen7=# analyze;
ANALYZE

db_cen7=# \d ts19
 Table ts19
  Column   |   Type   |   Modifiers
---+--+
 ts19pk___ | integer  | not null default
nextval('ts19_ts19pkseq'::text)
 ts19datum | timestamp with time zone | not null
 ts19zavaz | integer  | not null
 ts19cislo | integer  | not null
 ts19text_ | character varying(65536) | not null
 ts19idpri | integer  | not null
Indexes: ts19_ts19zavaz_idx
Primary key: ts19_pkey

db_cen7=# explain analyze select * from ts19 where ts19zavaz != 7 order by
ts19pk___ desc limit 10;
NOTICE:  QUERY PLAN:

Limit  (cost=89635.63..89635.63 rows=1 width=38) (actual
time=50868.17..50868.18 rows=10 loops=1)
  ->  Sort  (cost=89635.63..89635.63 rows=1 width=38) (actual
time=50868.16..50868.17 rows=11 loops=1)
        ->  Seq Scan on ts19  (cost=0.00..89635.62 rows=1 width=38)
(actual time=95.99..50852.34 rows=300 loops=1)
Total runtime: 50868.27 msec

db_cen7=# explain analyze select * from ts19 where ts19zavaz < 7 or
ts19zavaz > 7 order by ts19pk___ desc limit 10;
NOTICE:  QUERY PLAN:

Limit  (cost=4.04..4.04 rows=1 width=38) (actual time=1118.28..1118.29
rows=10 loops=1)
  ->  Sort  (cost=4.04..4.04 rows=1 width=38) (actual
time=1118.27..1118.28 rows=11 loops=1)
        ->  Index Scan using ts19_ts19zavaz_idx, ts19_ts19zavaz_idx on
ts19  (cost=0.00..4.03 rows=1 width=38) (actual time=0.03..1117.58
rows=300 loops=1)
Total runtime: 1118.40 msec

the runtimes depend on the machine load, but generally the second
query is much faster...

more info:

db_cen7=# select count(*) from ts19;
  count
-
 4190527
(1 row)

db_cen7=# select distinct(ts19zavaz) from ts19;
 ts19zavaz
---
 3
 7
(2 rows)

db_cen7=# select count(*) from ts19 where ts19zavaz = 3;
 count
---
   300
(1 row)

db_cen7=# select version();
version
---
 PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.4
(1 row)
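A plausible reading of the plans above: `!=` is not an indexable operator (there is no btree search strategy for it), so the planner has to fall back to a sequential scan; each arm of the OR, by contrast, is an indexable range, so the planner can run two scans of ts19_ts19zavaz_idx. The rewrite generalizes:

```sql
-- Not indexable: no btree strategy exists for <>, so ts19 is seq-scanned.
SELECT * FROM ts19 WHERE ts19zavaz <> 7;

-- Each arm is an indexable range; the planner can combine two index scans.
-- Worthwhile only when few rows satisfy the predicate, as here:
SELECT * FROM ts19 WHERE ts19zavaz < 7 OR ts19zavaz > 7;
```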






[HACKERS] SQL Query Optimization

2002-04-18 Thread Dav Coleman


Hello,

I am using postgresql to house chemical informatics data which consists
of several interlinked tables with tens of thousands (maximum) of rows.
When doing search queries against these tables (which always require
multiple joins) I have noticed that semantically equivalent SQL queries
can differ vastly in speed depending on the order of the clauses ANDed
together ("WHERE cond1 AND cond2" takes forever, but "WHERE cond2
AND cond1" comes right back).

So it appears I need to do some pre-optimization of the SQL query
generated by the user before submitting it to postgresql, in order to
guarantee (or at least increase the likelihood of) the fastest results.
I've tried STFW and RTFM but haven't found any good pointers on where
to start with this, although I feel that there must be some published
algorithms or theories. Can anyone point me to a URL or other source
to get me on my way?

Also, I wonder if this sort of query optimization is built into other
databases such as Oracle?

I did find this URL: http://redbook.cs.berkeley.edu/lec7.html
which seems to be interesting, but honestly I'm far from a DB expert
so I can't follow most of it, and I can't tell whether it is talking
about optimization that can be done in application space (query
rewriting) or something that has to be done in the database engine
itself. I'm going to try to find the book it references though.

Basically I feel a bit in over my head, which is ok, but I don't want
to waste time paddling in the wrong direction, so I'm hoping someone
can recognize where I need to look and nudge me in that direction.
Maybe I just need the proper terminology to plug into google.

Thanks,
Dav
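A first diagnostic step for a case like this, independent of any rewrite algorithm, is to compare what the planner actually does with each clause order. The tables and conditions below are hypothetical stand-ins for the poster's schema:

```sql
-- Hypothetical schema, for illustration only:
EXPLAIN ANALYZE
SELECT c.id
FROM compounds c, properties p
WHERE c.id = p.compound_id      -- cond1: the join clause
  AND p.weight < 500;           -- cond2: the restriction

-- Same query with the clauses swapped in the text; if the plans differ,
-- the planner's choices (statistics, join order search) are the thing
-- to investigate, not the textual clause order itself:
EXPLAIN ANALYZE
SELECT c.id
FROM compounds c, properties p
WHERE p.weight < 500
  AND c.id = p.compound_id;
```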





Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Mark Pritchard

Apologies for the naff double post, but I meant to add that obviously
the figures for the Solaris box are bogus after the first run... imagine
a file system cache holding an entire 2GB file. I tried creating a 4GB
file on this box, but it bombed with a 'file too large' error.
Unfortunately, I can't rip memory out of this box as I don't have
exclusive access.

On Thu, 2002-04-18 at 11:49, Mark Pritchard wrote:
 I threw together the attached program (compiles fine with gcc 2.95.2 on
 Solaris 2.6 and egcs-2.91.66 on RedHat Linux 6.2) and ran it a few
 times. Data is below. Usual disclaimers about hastily written code etc
 :)
 
 Machine = ghoul (generic intel, 384mb ram, dual p3-800, ide disk running
 dma)
 
 Sequential
 Bytes ReadTimeBytes / Sec
 536870912   27.14   19783933.74
 536870912   27.14   19783990.60
 536870912   27.11   19801872.14
 536870912   26.92   19942928.41
 536870912   27.31   19657408.43
 19794026.66 (avg)
 
 Random
 Bytes ReadTimeBytes / Sec
 1073741824  519.57  2066589.21
 1073741824  517.78  2073751.44
 1073741824  516.92  2077193.23
 1073741824  513.18  2092333.29
 1073741824  510.68  2102579.88
 2082489.41 (avg)
 
 Machine = jedi (Sun E420, 3gb ram, dual 400s, test on single scsi disk)
 
 Sequential
 Bytes ReadTimeBytes / Sec
 2097152000  65.19   32167675.28
 2097152000  65.22   32154114.65
 2097152000  65.16   32182561.99
 2097152000  65.12   32206105.12
 2097152000  64.67   32429463.26
 32227984.06 (avg)
 
 Random
 Bytes ReadTimeBytes / Sec
 4194304000  1522.22 2755394.79
 4194304000  278.18  15077622.05
 4194304000  91.43   45874730.07
 4194304000  61.43   68273795.19
 4194304000  54.55   76890231.51
 41774354.72
 
 If I interpret Tom's divide instruction correctly, is that a factor of
 10 on the linux box?
 
 On Thu, 2002-04-18 at 01:16, Tom Lane wrote:
  Luis Alberto Amigo Navarro [EMAIL PROTECTED] writes:
   On my own few experience I think this could be solved decreasing
   random_page_cost, if you would prefer to use indexes than seq scans, then
   you can lower random_page_cost to a point in which postgres works as you
   want. So the planner would prefer indexes when in standard conditions it
   would prefer seq scans.
  
  It's entirely possible that the default value of random_page_cost is too
  high, at least for many modern machines.  The experiments I did to get
  the 4.0 figure were done a couple years ago, on hardware that wasn't
  exactly new at the time.  I have not heard of anyone else trying to
  measure it though.
  
  I don't think I have the source code I used anymore, but the principle
  is simple enough:
  
  1. Make a large file (several times the size of your machine's RAM, to
  ensure you swamp out kernel disk buffering effects).  Fill with random
  data. (NB: do not fill with zeroes, some filesystems optimize this away.)
  
  2. Time reading the file sequentially, 8K per read request.
  Repeat enough to get a statistically trustworthy number.
  
  3. Time reading randomly-chosen 8K pages from the file.  Repeat
  enough to get a trustworthy number (the total volume of pages read
  should be several times the size of your RAM).
  
  4. Divide.
  
  The only tricky thing about this is making sure you are measuring disk
  access times and not being fooled by re-accessing pages the kernel still
  has cached from a previous access.  (The PG planner does try to account
  for caching effects, but that's a separate estimate; the value of
  random_page_cost isn't supposed to include caching effects.)  AFAIK the
  only good way to do that is to use a large test, which means it takes
  awhile to run; and you need enough spare disk space for a big test file.
  
  It'd be interesting to get some numbers for this across a range of
  hardware, filesystems, etc ...
  
  regards, tom lane
  
  
 
 
 
 

 #include <errno.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>
 #include <sys/stat.h>
 #include <sys/time.h>
 /**
  * Constants
  */
 
 #define BLOCK_SIZE (8192)
 
 /**
  * Prototypes
  */
 
   // Creates the test file filled with random data
   void createTestFile(char *testFileName, long long fileSize);
 
   // Handles runtime errors by displaying the function, activity and error number
   void handleError(char *functionName, char *activity);
 
   // Standard entry point
   int main(int argc, char *args[]);
 
   // Prints correct usage and quits
   void printUsageAndQuit();
 
   // Tests performance of random reads of the given file
   void testRandom(char *testFileName, long long amountToRead);
 
   // 

Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Tom Lane

mlw [EMAIL PROTECTED] writes:
 For instance: say we have two similarly performing plans, close to one another,
 say within 20%, one plan uses an index, and one does not. It is unlikely that
 the index plan will perform substantially worse than the non-index plan, right?

This seems to be the crux of the argument ... but I've really seen no
evidence to suggest that it's true.  The downside of improperly picking
an indexscan plan is *not* any less than the downside of improperly
picking a seqscan plan, in my experience.

It does seem (per Thomas' earlier observation) that we get more
complaints about failure to use an index scan than the other case.
Prior to 7.2 it was usually pretty obviously traceable to overestimates
of the number of rows to be retrieved (due to inadequate data
statistics).  In 7.2 that doesn't seem to be the bottleneck anymore.
I think now that there may be some shortcoming in the planner's cost
model or in the adjustable parameters for same.  But my reaction
to that is to try to figure out how to fix the cost model.  I certainly
do not feel that we've reached a dead end in which the only answer is
to give up and stop trusting the cost-based optimization approach.

 Now, given the choice of the two strategies on a table, both pretty close to
 one another, the risk of poor performance for using the index scan is minimal
 based on the statistics, but the risk of poor performance for using the
 sequential scan is quite high on a large table.

You keep asserting that, and you keep providing no evidence.

regards, tom lane




Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Tom Lane

mlw [EMAIL PROTECTED] writes:
 should we not just allow the developer to place hints in the
 SQL, as:

 select /*+ INDEX(a_id, b_id) */ * from a, b where a.id = b.id;

itch  People have suggested that sort of thing from time to time,
but I have a couple of problems with it:

1.  It's unobvious how to tag the source in a way that is helpful
for any but the most trivial queries.  Moreover, reasonable sorts
of tags would provide only partial specification of the exact
query plan, which is a recipe for trouble --- an upgraded optimizer
might make different choices, leading to a pessimized plan if some
points are pinned down when others aren't.

2.  The tag approach presumes that the query programmer is smarter
than the planner.  This might be true under ideal circumstances,
but I have a hard time crediting that the planner looking at today's
stats is dumber than the junior programmer who left two years ago,
and no one's updated his query since then.  The planner may not be
very bright, but it doesn't get bored, tired, or sick, nor move on
to the next opportunity.  It will pick the best plan it can on the
basis of current statistics and the specific values appearing in
the given query.  Every time.  A tag-forced query plan doesn't
have that adaptability.

By and large this argument reminds me of the compiler versus hand-
programmed assembler argument.  Which was pretty much a dead issue
when I was an undergrad, more years ago than I care to admit in a
public forum.  Yes, a competent programmer who's willing to work
hard can out-code a compiler over small stretches of code.  But no
one tries to write large systems in assembler anymore.  Hand-tuned
SQL is up against that same John-Henry-vs-the-steam-hammer logic.
Maybe the current PG optimizer isn't quite in the steam hammer
league yet, but it will get there someday.  I'm more interested
in revving up the optimizer than in betting on John Henry.

regards, tom lane




Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread mlw

Tom Lane wrote:
 
 mlw [EMAIL PROTECTED] writes:
  For instance: say we have two similarly performing plans, close to one another,
  say within 20%, one plan uses an index, and one does not. It is unlikely that
  the index plan will perform substantially worse than the non-index plan, right?
 
 This seems to be the crux of the argument ... but I've really seen no
 evidence to suggest that it's true.  The downside of improperly picking
 an indexscan plan is *not* any less than the downside of improperly
 picking a seqscan plan, in my experience.

Our experiences differ. I have fought with PostgreSQL on a number of occasions
when it would not use an index. Inevitably, I would have to set enable_seqscan
= false. I don't like doing that because it forces the use of an index when it
doesn't make sense.
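The workaround mentioned is a per-session planner toggle; a sketch of how it is typically applied (the query is a hypothetical one that the planner was refusing to index-scan):

```sql
SET enable_seqscan = false;     -- make sequential scans look very expensive
SELECT * FROM a WHERE id = 42;  -- hypothetical query; now uses the index
SET enable_seqscan = true;      -- restore the default for this session

-- A gentler knob discussed elsewhere in this thread, shifting the
-- cost balance rather than forbidding a plan type outright:
SET random_page_cost = 2;       -- default at the time was 4.0
```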

I don't think we will agree, we have seen different behaviors, and our
experiences seem to conflict. This however does not mean that either of us is
in error, it just may mean that we use data with very different
characteristics.

This thread is kind of frustrating for me because over the last couple of
years I have seen this problem many times, and the answer is always the
same: "The statistics need to be improved." Tom, you and I have gone back
and forth about this more than once.

I submit to you that the statistics will probably *never* be right. They will
always need improvement here and there. Perhaps instead of fighting over an
algorithmic solution, and forcing the users to work around problems with
choosing an index, should we not just allow the developer to place hints in the
SQL, as:

select /*+ INDEX(a_id, b_id) */ * from a, b where a.id = b.id;

That way if there is a performance issue with using or not using an index, the
developer can have better control over the evaluation of the query.




Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Adrian 'Dagurashibanipal' von Bidder

On Wed, 2002-04-17 at 19:43, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  OTOH, it is also important where the file is on disk. As seen from disk
  speed test graphs on http://www.tomshardware.com , the speed difference
  of sequential reads is 1.5 to 2.5 between inner and outer tracks. 
 
 True.  But if we use the same test file for both the sequential and
 random-access timings, hopefully the absolute speed of access will
 cancel out.  (Again, it's the sort of thing that could use some
 real-world testing...)

Not so sure about that. Random access basically measures latency,
sequential access measures transfer speed. I'd argue that latency is
more or less constant across the disk as it depends on head movement and
the spindle turning.

cheers
-- vbi







Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Andreas Scherbaum

Justin Clift wrote:
 
 Hi Bruce,
 
 Did we reach an opinion as to whether we'll include GPL'd code?
 
 My vote is to not include this code, as it just muddies the water with
 PostgreSQL being BSD based.
 
 :-)
 

Hmm, there's enough GPL'ed stuff in contrib/ ;-)

-- 
  Andreas 'ads' Scherbaum




Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread mlw

Tom Lane wrote:

 By and large this argument reminds me of the compiler versus hand-
 programmed assembler argument.  Which was pretty much a dead issue
 when I was an undergrad, more years ago than I care to admit in a
 public forum.  Yes, a competent programmer who's willing to work
 hard can out-code a compiler over small stretches of code.  But no
 one tries to write large systems in assembler anymore.  Hand-tuned
 SQL is up against that same John-Henry-vs-the-steam-hammer logic.
 Maybe the current PG optimizer isn't quite in the steam hammer
 league yet, but it will get there someday.  I'm more interested
 in revving up the optimizer than in betting on John Henry.

I am not suggesting that anyone is going to write each and every query with
hints, but a few select queries, yes, people will want to hand tune them.

You are right, no one uses assembler to create big systems, but big systems
often have spot optimizations in assembler. Even PostgreSQL has assembler
in it.

No generic solution can be perfect for every specific application. There will
always be times when hand tuning a query will produce better results, and
sometimes that will make the difference between using PostgreSQL or use
something else.

For the two years I have been subscribed to this list, this has been a fairly
constant problem, and the answer is always the same, in effect, "we're
working on it." If PostgreSQL had the ability to accept hints, one could
say, "We are always working to improve it, but in your case you may want to
give the optimizer a hint as to what you expect it to do."

It may not be the best solution in your mind, but speaking as a long time
user of PostgreSQL, it would be a huge help to me, and I'm sure I am not alone.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] Bug or misunderstanding w/domains in 7.3devel?

2002-04-18 Thread Joel Burton

Using latest CVS sources with Linux 2.4 i586:

Comparing using domains versus traditional explicit field types.
Here's the control test:

test=# create table t1 (f varchar(5) not null);
CREATE
test=# insert into t1 values ('2');
INSERT 16626 1
test=# select * from t1 where f='2';
 f
---
 2
(1 row)


If I create a VARCHAR domain, everything works as expected.

test=# create domain typ varchar(5) not null;
CREATE DOMAIN
test=# create table t2 (f typ);
CREATE
test=# insert into t2 values ('2');
INSERT 16627 1
test=# select * from t2 where f='2';
 f
---
 2
(1 row)


Here's a control test for the same thing, except with CHAR:

test=# create table t1 (f char(5) not null);
CREATE
test=# insert into t1 values ('2');
INSERT 16639 1
test=# select * from t1 where f='2';
   f
---
 2
(1 row)


However, if I create a CHAR domain, I'm unable to query the value from the
table:

test=# create domain typ char(5) not null;
CREATE DOMAIN
test=# create table t2 (f typ);
CREATE
test=# insert into t2 values ('2');
INSERT 16640 1
test=# select * from t2 where f='2';
 f
---
(0 rows)


Even if I coerce the value to the correct domain:

test=# select * from t2 where f='2'::typ;
 f
---
(0 rows)


However, this works:

test=# select * from t2 where f='2'::char;
   f
---
 2
(1 row)


Is this a bug? Is this correct behavior? Am I misunderstanding this?
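One plausible mechanism, offered as an assumption rather than a diagnosis of the 7.3devel code: char(n) values are stored blank-padded, so a comparison only matches if it is padding-aware or the literal gets padded the same way. A plain-Python sketch of the difference:

```python
# Hypothetical illustration of blank-padded char(n) semantics.  This is
# plain Python, not PostgreSQL's comparison code; if the domain code path
# compared a padded stored value against an unpadded literal with raw
# equality, zero rows would be exactly the symptom shown above.

def bpchar(value: str, n: int) -> str:
    """Store a value as char(n): blank-pad to width n."""
    return value.ljust(n)

def bpchar_eq(a: str, b: str) -> bool:
    """char equality ignores trailing blanks (SQL PAD SPACE semantics)."""
    return a.rstrip(" ") == b.rstrip(" ")

stored = bpchar("2", 5)           # the table would hold '2    '
assert stored == "2    "
assert stored != "2"              # raw equality fails: zero rows
assert bpchar_eq(stored, "2")     # padding-aware equality matches
```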

Thanks!


Joel BURTON | [EMAIL PROTECTED] | joelburton.com | aim: wjoelburton
Knowledge Management  Technology Consultant


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Bruce Momjian

mlw wrote:
 I don't think we will agree, we have seen different behaviors, and our
 experiences seem to conflict. This however does not mean that either of us is
 in error, it just may mean that we use data with very different
 characteristics.
 
 This thread is kind of frustrating for me because over the last couple years I
 have seen this problem many times and the answer is always the same, The
 statistics need to be improved. Tom, you and I have gone back and forth about
 this more than once.
 

Have you tried reducing 'random_page_cost' in postgresql.conf.  That
should solve most of your problems if you would like more index scans.
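The suggestion works because random_page_cost scales the penalty the planner assigns to scattered index fetches. A toy sketch of the trade-off (illustrative formulas and numbers only, not the real equations in costsize.c):

```python
# Simplified sketch of how random_page_cost tilts the planner's choice.
# These are illustrative formulas, not PostgreSQL's actual cost model.

def seq_scan_cost(table_pages, seq_page_cost=1.0):
    # A sequential scan touches every page once, at the cheap rate.
    return table_pages * seq_page_cost

def index_scan_cost(pages_fetched, random_page_cost):
    # An index scan fetches scattered pages at the expensive rate.
    return pages_fetched * random_page_cost

table_pages = 10_000
pages_fetched = 4_000          # pages the query actually needs

# With the default random_page_cost of 4, the seq scan looks cheaper:
assert index_scan_cost(pages_fetched, 4.0) > seq_scan_cost(table_pages)

# Lowering random_page_cost to 2 flips the decision toward the index:
assert index_scan_cost(pages_fetched, 2.0) < seq_scan_cost(table_pages)
```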

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Bruce Momjian

Tom Lane wrote:
 2.  The tag approach presumes that the query programmer is smarter
 than the planner.  This might be true under ideal circumstances,
 but I have a hard time crediting that the planner looking at today's
 stats is dumber than the junior programmer who left two years ago,
 and no one's updated his query since then.  The planner may not be
 very bright, but it doesn't get bored, tired, or sick, nor move on
 to the next opportunity.  It will pick the best plan it can on the
 basis of current statistics and the specific values appearing in
 the given query.  Every time.  A tag-forced query plan doesn't
 have that adaptability.

Add to this that hand tuning would happen mostly on queries where the two
cost estimates are fairly close. Add the variability of a multi-user
environment, and a hard-coded plan may turn out to be faster only some of
the time, and could change very quickly into something slower if the
table changes.

My point is that very close cases are the ones most likely to change
over time.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Tom Lane

Andreas Scherbaum [EMAIL PROTECTED] writes:
 Justin Clift wrote:
 Did we reach an opinion as to whether we'll include GPL'd code?
 
 My vote is to not include this code, as it just muddies the water with
 PostgreSQL being BSD based.

 Hmm, there's enough GPL'ed stuff in contrib/ ;-)

Indeed, the core committee recently agreed that we should try to ensure
that the whole distribution is under the same BSD license.  I have a
TODO item to contact the authors of the existing GPL'd contrib modules,
and if possible get them to agree to relicense.  If not, those modules
will be removed from contrib.

There are other possible homes for contrib modules whose authors
strongly prefer GPL.  For example, Red Hat's add-ons for Postgres will
be GPL (per corporate policy), and I expect that they'd be willing to
host contrib modules.  But the core distribution will be straight BSD
to avoid license confusion.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] another optimizer question

2002-04-18 Thread Tom Lane

Jakub Ouhrabka [EMAIL PROTECTED] writes:
 can anyone explain to me why there are different query plans for "select ...
 from ... where y != x" and "select ... from ... where y < x or y > x" for
 integers, please?

!= isn't an indexable operation.  This is not the planner's fault, but
a consequence of the index opclass design we inherited from Berkeley.
I suppose we could make it an indexable operation --- but there are so
few cases where it'd be a win that I'm not excited about it.
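The point can be illustrated with any engine that exposes its plans; here is a sqlite3 sketch (SQLite, not PostgreSQL, and the table and column names are made up) showing the inequality form falling back to a scan while the two-range rewrite returns identical rows:

```python
# Illustration in SQLite (not PostgreSQL) of the point above: "y != x"
# cannot constrain an index range, so the plan degrades to a full scan,
# while the equivalent "y < x OR y > x" is at least index-eligible.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (y INTEGER, payload TEXT)")
con.execute("CREATE INDEX t_y ON t (y)")
con.executemany("INSERT INTO t VALUES (?, 'x')", [(i,) for i in range(1000)])

def plan(sql):
    """Concatenate the EXPLAIN QUERY PLAN detail strings for a query."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# The != predicate is not indexable: the plan is a table scan.
assert "SCAN" in plan("SELECT * FROM t WHERE y != 500")

# The two-range rewrite returns exactly the same rows, and (depending on
# the engine's statistics) may be answered with two index searches.
a = con.execute("SELECT count(*) FROM t WHERE y != 500").fetchone()[0]
b = con.execute("SELECT count(*) FROM t WHERE y < 500 OR y > 500").fetchone()[0]
assert a == b == 999
```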

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] SQL Query Optimization

2002-04-18 Thread Tom Lane

Dav Coleman [EMAIL PROTECTED] writes:
 I have noticed that semantically equivalent SQL queries can differ
 vastly in speed depending on the order of clauses ANDed together
 ("WHERE cond1 AND cond2" takes forever, but "WHERE cond2 AND cond1"
 comes right back).

Could we see a specific example?

It would also be useful to know what PG version you are using, whether
you've VACUUM ANALYZEd the tables, and what EXPLAIN has to say about
your query.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Alexandre Dulaunoy

first comment :

* a special directory with ./contrib/gpl ?

second comment : 

* I don't really understand your position regarding the GNU General Public 
  License. The GPL offers multiple advantages for a big project and 
  software like PostgreSQL. For example: 

* Contributions come back to the main tree more easily upon
  redistribution (as HP and the Samba team do; the copyright holder
  remains the Samba team).

* Easier to get an RF (Royalty Free) license from a patent owner,
  since this guarantees that the code will not end up in proprietary
  software where the RF license does not apply (like the UB-Trees).

* A possible bigger audience.

Dual licensing is also an alternative but could be a real mess. 

It's just an idea. 

alx


-- 
Alexandre Dulaunoy  [EMAIL PROTECTED]
http://www.conostix.com/


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Bruce Momjian

Hiroshi Inoue wrote:
 Michael Loftis wrote:
  
  Tom Lane wrote:
  
  Bruce Momjian [EMAIL PROTECTED] writes:
  
  I have added this to the TODO list, with a question mark.  Hope this is
  OK with everyone.
  
  
  o Abort SET changes made in aborted transactions (?)
  
  
  Actually, I was planning to make only search_path act that way, because
  of all the push-back I'd gotten on applying it to other SET variables.
  search_path really *has* to have it, but if there's anyone who agrees
  with me about doing it for all SET vars, they didn't speak up :-(
  
  I did and do, strongly.  TRANSACTIONS are supposed to leave things as
  they were before the BEGIN.  It either all happens or it all doesn't
  happen.  If you need something inside of a transaction to go through
  regardless, then it shouldn't be within the transaction.
 
 Oops, is this issue still alive?
 I object to the TODO strongly.
 Please remove it from the TODO first and put it back
 to a neutral position.

OK, how is this:

  o Abort all or commit all SET changes made in an aborted transaction

Is this neutral?  I don't think our current behavior is defended by anyone.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  I have added this to the TODO list, with a question mark.  Hope this is
  OK with everyone.
 
  o Abort SET changes made in aborted transactions (?)
 
 Actually, I was planning to make only search_path act that way, because
 of all the push-back I'd gotten on applying it to other SET variables.
 search_path really *has* to have it, but if there's anyone who agrees
 with me about doing it for all SET vars, they didn't speak up :-(

Whoa, this all started because of timeout, which needs this fix too.  We
certainly need something, and I don't want to get into one of those "we
can't all decide, so we do nothing" situations.

I have updated the TODO to:

o Abort all or commit all SET changes made in an aborted transaction

I don't think our current behavior is defended by anyone.  Are "abort
all" and "commit all" the only two choices?  If so, we will take a vote
and be done with it.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Michael Loftis

Finally someone writes down what's been itching at my brain for a while.

In a multi-tasking system it's always cheaper to fetch fewer blocks, no 
matter where they are, because, as you said, access will end up more or 
less random on a system executing a large number of queries.

mlw wrote:

Bruce Momjian wrote:

My second point, that index scan is more risky than sequential scan, is
outlined above.  A sequential scan reads each page once, and uses the
file system read-ahead code to prefetch the disk buffers.  Index scans
are random, and could easily re-read disk pages to plow through a
significant portion of the table, and because the reads are random,
the file system will not prefetch the rows so the index scan will have
to wait for each non-cache-resident row to come in from disk.


It took a bike ride to think about this one. The supposed advantage of a
sequential read over a random read, in an active multitasking system, is a
myth. If you are executing one query and the system is doing only that query,
you may be right.

Execute a number of queries at the same time, and the expected benefit of a
sequential scan goes out the window. The OS will be fetching blocks more or
less at random.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Michael Loftis

Something's wrong with the random numbers from the Sun... re-run them; 
that first sample is insane.  Caching looks like it's affecting your 
results a lot...

Mark Pritchard wrote:

I threw together the attached program (compiles fine with gcc 2.95.2 on
Solaris 2.6 and egcs-2.91.66 on RedHat Linux 6.2) and ran it a few
times. Data is below. Usual disclaimers about hastily written code etc
:)

Machine = ghoul (generic intel, 384mb ram, dual p3-800, ide disk running
dma)

Sequential
Bytes Read      Time    Bytes / Sec
536870912       27.14   19783933.74
536870912       27.14   19783990.60
536870912       27.11   19801872.14
536870912       26.92   19942928.41
536870912       27.31   19657408.43
                        19794026.66 (avg)

Random
Bytes Read      Time    Bytes / Sec
1073741824      519.57  2066589.21
1073741824      517.78  2073751.44
1073741824      516.92  2077193.23
1073741824      513.18  2092333.29
1073741824      510.68  2102579.88
                        2082489.41 (avg)

Machine = jedi (Sun E420, 3gb ram, dual 400s, test on single scsi disk)

Sequential
Bytes Read      Time    Bytes / Sec
2097152000      65.19   32167675.28
2097152000      65.22   32154114.65
2097152000      65.16   32182561.99
2097152000      65.12   32206105.12
2097152000      64.67   32429463.26
                        32227984.06 (avg)

Random
Bytes Read      Time    Bytes / Sec
4194304000      1522.22 2755394.79
4194304000      278.18  15077622.05
4194304000      91.43   45874730.07
4194304000      61.43   68273795.19
4194304000      54.55   76890231.51
                        41774354.72 (avg)

If I interpret Tom's divide instruction correctly, is that a factor of
10 on the linux box?
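The attached C program did not survive the archive; a rough Python analogue of such a sequential-versus-random read test (file path and sizes are placeholders) might look like:

```python
# Rough sketch of a sequential vs. random read benchmark, in the spirit
# of the (missing) attached program.  The path and block count are
# placeholders; for a meaningful test the file must be much larger than
# RAM, or caching dominates the "random" numbers, as on the Sun box.
import os, random, time

BLOCK = 8192                     # PostgreSQL-style page size

def make_test_file(path, num_blocks):
    with open(path, "wb") as f:
        f.write(os.urandom(BLOCK) * num_blocks)

def read_rate(path, offsets):
    """Read one block at each offset; return (bytes_read, seconds)."""
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(BLOCK))
    return total, time.time() - start

path = "testfile.dat"            # placeholder path
num_blocks = 1000                # tiny demo size; use >> RAM for real tests
make_test_file(path, num_blocks)

seq = [i * BLOCK for i in range(num_blocks)]
rnd = random.sample(seq, len(seq))

seq_bytes, seq_time = read_rate(path, seq)
rnd_bytes, rnd_time = read_rate(path, rnd)
os.remove(path)

# Both passes read the same data; only the access pattern differs.
assert seq_bytes == rnd_bytes == num_blocks * BLOCK
```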




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread mlw

Bruce Momjian wrote:
 
 mlw wrote:
  I don't think we will agree, we have seen different behaviors, and our
  experiences seem to conflict. This however does not mean that either of us is
  in error, it just may mean that we use data with very different
  characteristics.
 
  This thread is kind of frustrating for me because over the last couple years I
  have seen this problem many times and the answer is always the same, The
  statistics need to be improved. Tom, you and I have gone back and forth about
  this more than once.
 
 
 Have you tried reducing 'random_page_cost' in postgresql.conf.  That
 should solve most of your problems if you would like more index scans.

My random page cost is 1 :-)

I had a database where I had to have enable_seqscan=false in the config file.
The nature of the data always makes the statistics bogus, and it always refused
to use the index. 

It is frustrating because sometimes it *is* a problem for some unknown number
of users (including myself), as evidenced by the perennial "why isn't postgres
using my index" posts, and for the last two years you guys keep saying it isn't
a problem, or that the statistics just need improvement. Sorry for my tone, but
I have pulled out my hair numerous times over this very problem.

This whole process has led me to change my mind. I don't think adding weight
to an index scan is the answer; I think having the ability to submit hints to
the planner is the only way to really address this or any future issues.

Just so you understand my perspective, I am not thinking of the average web
monkey. I am thinking of the expert DBA or architect who wants to deploy a
system and needs to have real control over performance in critical areas.

My one most important experience (I've had more than one) with this whole topic
is DMN's music database: when PostgreSQL uses the index, the query executes in
a fraction of a second. When enable_seqscan=true, PostgreSQL refuses to use
the index, and the query takes about a minute. No matter how much I analyze,
I have to disable sequential scan for the system to work correctly. 

cdinfo=# set enable_seqscan=false ;
SET VARIABLE
cdinfo=# explain select * from ztitles, zsong where ztitles.muzenbr =
zsong.muzenbr and ztitles.artistid = 100 ;
NOTICE:  QUERY PLAN:

Merge Join  (cost=3134.95..242643.42 rows=32426 width=356)
  ->  Sort  (cost=3134.95..3134.95 rows=3532 width=304)
        ->  Index Scan using ztitles_artistid on ztitles  (cost=0.00..3126.62 rows=3532 width=304)
  ->  Index Scan using zsong_muzenbr on zsong  (cost=0.00..237787.51 rows=4298882 width=52)

EXPLAIN
cdinfo=# set enable_seqscan=true ;
SET VARIABLE
cdinfo=# explain select * from ztitles, zsong where ztitles.muzenbr =
zsong.muzenbr and ztitles.artistid = 100 ;
NOTICE:  QUERY PLAN:

Hash Join  (cost=3126.97..61889.37 rows=32426 width=356)
  ->  Seq Scan on zsong  (cost=0.00..52312.66 rows=4298882 width=52)
  ->  Hash  (cost=3126.62..3126.62 rows=3532 width=304)
        ->  Index Scan using ztitles_artistid on ztitles  (cost=0.00..3126.62 rows=3532 width=304)

EXPLAIN
cdinfo=# select count(*) from zsong ;
  count
-
 4298882
(1 row)

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Tom Lane

Michael Loftis [EMAIL PROTECTED] writes:
 Something's wrong with the random numbers from the Sun... re-run them; 
 that first sample is insane.  Caching looks like it's affecting your 
 results a lot...

Yeah; it looks like the test case is not large enough to swamp out
caching effects on the Sun box.  It is on the Linux box, evidently,
since the 10:1 ratio appears very repeatable.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Doug McNaught

Alexandre Dulaunoy [EMAIL PROTECTED] writes:

 first comment :
 
 * a special directory with ./contrib/gpl ?

Doesn't really change anything.

 second comment : 
 
 * I don't really understand your position regarding the GNU General Public 
   License. The GPL is offering multiple advantages for a big project and 
   software like PostgreSQL. For example : 

Not open for discussion.  See the FAQ.

-Doug
-- 
Doug McNaught   Wireboard Industries  http://www.wireboard.com/

  Custom software development, systems and network consulting.
  Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Thomas Lockhart

...
 My one most important experience (I've had more than one) with this whole topic
 is DMN's music database: when PostgreSQL uses the index, the query executes in
 a fraction of a second. When enable_seqscan=true, PostgreSQL refuses to use
 the index, and the query takes about a minute. No matter how much I analyze,
 I have to disable sequential scan for the system to work correctly.

How about contributing the data and a query? We've all got things that
we would like to change or adjust in the PostgreSQL feature set. If you
can't contribute code, how about organizing some choice datasets for
testing purposes? If the accumulated set is too big for postgresql.org
(probably not, but...) I can host them on my machine.

Most folks seem to not have to manipulate the optimizer to get good
results nowadays. So to make more progress we need to have test cases...

   - Thomas

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Bruce Momjian

mlw wrote:
 Bruce Momjian wrote:
  
  mlw wrote:
   I don't think we will agree, we have seen different behaviors, and our
   experiences seem to conflict. This however does not mean that either of us is
   in error, it just may mean that we use data with very different
   characteristics.
  
   This thread is kind of frustrating for me because over the last couple years I
   have seen this problem many times and the answer is always the same, The
   statistics need to be improved. Tom, you and I have gone back and forth about
   this more than once.
  
  
  Have you tried reducing 'random_page_cost' in postgresql.conf.  That
  should solve most of your problems if you would like more index scans.
 
 My random page cost is 1 :-)

Have you tried < 1?  Seems that may work well for your case.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  I have updated the TODO to:
  o Abort all or commit all SET changes made in an aborted transaction
  I don't think our current behavior is defended by anyone.
 
 Hiroshi seems to like it ...
 
 However, commit SETs even after an error is most certainly NOT
 acceptable.  It's not even sensible --- what if the SETs themselves
 throw errors, or are depending on the results of failed non-SET
 commands; will you try to commit them anyway?
 
 It seems to me that the choices we realistically have are
 
   (a) leave the behavior the way it is
 
   (b) cause all SETs in an aborted transaction to roll back.

I disagree.  You commit all the SETs you can, even if in aborted
transactions.  If they throw an error, or rely on a previous non-SET
command that aborted, oh well.  That is what some are asking for.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Bruce Momjian

Justin Clift wrote:
 Hi Bruce,
 
 Did we reach an opinion as to whether we'll include GPL'd code?
 
 My vote is to not include this code, as it just muddies the water with
 PostgreSQL being BSD based.

Yes, our current policy is to add GPL to /contrib only when we have
little choice and the module is important.  I am not sure if the module
is even appropriate for /contrib.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Tom Lane

Alexandre Dulaunoy [EMAIL PROTECTED] writes:
 * I don't really understand your position regarding the GNU General Public 
   License. The GPL is offering multiple advantages for a big project and 
   software like PostgreSQL.

Every month or two a newbie pops up and asks us why Postgres isn't GPL.
The short answer is that we like the BSD license and that's how Berkeley
released it originally.  We have no interest in changing it even if we
could (which we can't).

If you want a longer answer, consult the mailing list archives; there
have been numerous extended threads on this topic.  Most of us are
pretty tired of it by now :-(

The question of whether to accept GPL'd contrib modules is less
clear-cut (obviously, since it's been done in the past).  But we've
concluded that it just muddies the water to have GPL'd code in the
distribution.  Contrib authors who really prefer GPL have other avenues
to distribute their code.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [SQL] A bug in gistPageAddItem()/gist_tuple_replacekey()

2002-04-18 Thread Bruce Momjian


Here is a good example of why keeping old code around causes confusion. 
I encourage the GIST guys to remove the stuff they don't feel they will
ever need.  I know Tom may disagree.  ;-)

---

Teodor Sigaev wrote:
 gistPageAddItem and gist_tuple_replacekey are commented out by GIST_PAGEADDITEM. 
 They were used up to 7.1, but are now unusable.
 
 gistPageAddItem has an interesting feature: it recompresses the entry before 
 writing to disk, but we (with Oleg and Tom) couldn't find any reason to do it. 
 So we left this code in for later thinking.
 
 Now gistPageAddItem is wrong, because it only works on single-key indexes. In 
 7.2 GiST supports multi-key indexes.
 
 
 
  I haven't see any comment on this.  If no one replies, would you send
  over a patch of fixes?  Thanks.
  
  ---
  
  Dmitry Tkach wrote:
  
 I was trying to write a gist index extension, and, after some debugging,
 it looks like I found a bug somewhere in the gist.c code ...
 I can't be quite sure, because I am not familiar with the postgres
 code... but, here is what I see happenning (this is 7.1, but I compared
 the sources to 7.2, and did not see this fixed - although, I did not
 inspect it too carefully)...
 
 First of all, gistPageAddItem () calls gistdentryinit() with a pointer
 to what's stored in the tuple, so, 'by-value' types do not work (because
 gistcentryinit () would be passed the value itself, when called from
 gistinsert(), and then, in gistPageAddItem (), it is passed a pointer,
 coming from gistdentryinit () - so, it just doesn't know really how to
 treat the argument)...
 
  Secondly, gist_tuple_replacekey() seems to have incorrect logic figuring
  out if there is enough space in the tuple (it checks for '<' instead of
  '<=') - this causes a new tuple to always get created (this one seems
  to be fixed in 7.2)
 
 Thirdly, gist_tuple_replace_key () sends a pointer to entry.pred (which
 is already a pointer to the actual value) to index_formtuple (), that
 looks at the tuple, sees that the type is 'pass-by-value', and puts that
 pointer directly into the tuple, so that, the resulting tuple now
 contains a pointer to a pointer to the actual value...
 
  Now, if more than one split is required, this sequence is repeated again
  and again and again, so that, by the time the tuple actually gets
  written, it contains something like a pointer to a pointer to a pointer
  to a pointer to the actual data :-(
 
  Once again, I've seen some comments in the 7.2 branch about GiSTs and
  pass-by-value types, but a brief look at the differences in the source
  did not convince me that it was indeed fixed...
 
 Anyone knows otherwise?
 
 Thanks a lot!
 
 Dima
 
 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster
 
 
  
 
 
 -- 
 Teodor Sigaev
 [EMAIL PROTECTED]
 
 
 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Tom Lane

mlw [EMAIL PROTECTED] writes:
 My one most important experience (I've had more than one) with this
 whole topic is DMN's music database: when PostgreSQL uses the index,
 the query executes in a fraction of a second. When
 enable_seqscan=true, PostgreSQL refuses to use the index, and the
 query takes about a minute. No matter how much I analyze, I have to
 disable sequential scan for the system to work correctly.

It would be useful to see explain analyze not just explain for these
cases.  Also, what stats does pg_stats show for the variables used?

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Alexandre Dulaunoy

On 18 Apr 2002, Doug McNaught wrote:

 Alexandre Dulaunoy [EMAIL PROTECTED] writes:
 
  first comment :
  
  * a special directory with ./contrib/gpl ?
 
 Doesn't really change anything.
 
  second comment : 
  
  * I don't really understand your position regarding the GNU General Public 
License. The GPL is offering multiple advantages for a big project and 
software like PostgreSQL. For example : 
 
 Not open for discussion.  See the FAQ.

I love that type of response ;-)

Yes, I have read the FAQ. Section 1.2 does not explain why the modified
Berkeley-style BSD license was chosen. The only answer given is "because
it is like that"...

I have also read that : 
http://archives.postgresql.org/pgsql-general/2000-07/msg00210.php

My question is more regarding the recent issue of RF licenses for some 
specific patents. As described in my previous message, a copyleft-type 
license has some advantages around the RF licensing issue. 

Could you extend the FAQ (1.2) with more arguments? 

Thanks a lot for the excellent software. 


alx



 
 -Doug
 

-- 
Alexandre Dulaunoy  [EMAIL PROTECTED]
http://www.conostix.com/



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Bruce Momjian

Alexandre Dulaunoy wrote:
  Not open for discussion.  See the FAQ.
 
 I love that type of response ;-)
 
 Yes, I have read the FAQ. Section 1.2 does not explain why the modified
 Berkeley-style BSD license was chosen. The only answer given is "because
 it is like that"...
 
 I have also read that : 
 http://archives.postgresql.org/pgsql-general/2000-07/msg00210.php
 
 My question is more regarding the recent issue of RF licenses for some 
 specific patents. As described in my previous message, a copyleft-type 
 license has some advantages around the RF licensing issue. 

Yes, GPL has advantages, but it does prevent non-source distributions. 
You can say that is not a problem, but not everyone agrees.

 Could you extend the FAQ (1.2) with more arguments? 

No.  The discussion thread was painful enough.  :-)

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] regexp character class locale awareness patch

2002-04-18 Thread Manuel Sugawara

Alvaro Herrera [EMAIL PROTECTED] writes:

 On 17 Apr 2002 22:53:32 -0600
 Manuel Sugawara [EMAIL PROTECTED] wrote:
 
  Bruce Momjian [EMAIL PROTECTED] writes:
  
   Tatsuo Ishii wrote:
 I missed that case :-(. Here is the patched patch.
 
 Regards,
 Manuel.

I also suggest that cclass_init() be called only if the locale is not
C.
   
   OK, patch on hold while this is addressed.
  
  Here is a patch which addresses Tatsuo's concerns (it does return a
  static struct instead of constructing one).
 
 Is there a reason to use "" instead of NULL in the multis member of
 that static struct?

Yes, read the code.

Regards,
Manuel.




Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Alexandre Dulaunoy

First comment:

* a special directory, perhaps ./contrib/gpl ?

Second comment:

* I don't really understand your position regarding the GNU General Public
  License. The GPL offers multiple advantages for a big project and for
  software like PostgreSQL. For example:

* Contributions come back to the main tree more easily when the software
  is redistributed (as HP and the Samba team are doing; the copyright
  holder remains the Samba team).

* It is easier to get an RF (Royalty Free) license from a patent
  owner (it guarantees to him that the code will not go back into
  proprietary software without an RF license), as with the
  UB-Trees.

* A possibly bigger audience.

Dual licensing is also an alternative, but it could be a real mess.

It's just an idea.

alx



On Thu, 18 Apr 2002, Justin Clift wrote:

 Hi Bruce,
 
 Did we reach an opinion as to whether we'll include GPL'd code?
 
 My vote is to not include this code, as it just muddies the water with
 PostgreSQL being BSD based.
 
 :-)
 
 Regards and best wishes,
 
 Justin Clift
 
 Andreas Scherbaum wrote:
  
  Justin Clift wrote:
  
   Hi Bruce,
  
   Haven't looked at the code, but there's no license with it.
  
   Andreas, are you cool with having the same License as PostgreSQL for it
   (BSD license)?
  
   :-)
  
   Regards and best wishes,
  
   Justin Clift
  
   Bruce Momjian wrote:
   
Can someone comment on this?  I can't decide.
   
---
   
Andreas Scherbaum wrote:

 Hello,

 I have written a module for logging changes on a table (without knowing
 the exact column names).
 I don't know where to put it, but it's ready for use in the contrib directory.

 It's available at: http://ads.ufp.de/projects/Pg/table_log.tar.gz (3k)

 It would be nice if this could be added.


 Best regards
  
  Hello,
  
  Uhm, good point. I thought I missed something ;-)
  
  This software is distributed under the GNU General Public License
  either version 2, or (at your option) any later version.
  
  I have updated the readme and replaced the archive with a new version.
  
  Thanks and best regards
  
  --
Andreas 'ads' Scherbaum
  
 
 

-- 
Alexandre Dulaunoy  [EMAIL PROTECTED]
http://www.conostix.com/





Re: [HACKERS] [SQL] SQL Query Optimization

2002-04-18 Thread Tom Lane

Dav Coleman [EMAIL PROTECTED] writes:
 But basically I haven't done any ANALYZE or EXPLAIN yet because of the 
 fact that the order -is- making a difference so it can't be executing
 the same query inside the database engine.

If you haven't ever done VACUUM ANALYZE then the planner is flying
completely blind as to table sizes and data distributions.  This would
(among other things) very possibly allow different plans to be estimated
as exactly the same cost --- since all the cost numbers will be based on
exactly the same default statistics.  So it's not surprising that you'd
get an arbitrary choice of plans depending on trivial details like
WHERE clause order.
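
A minimal illustration of the advice above; the table and column names
here are placeholders, not taken from the original report:

```sql
-- Gather statistics so the planner can cost candidate plans realistically,
-- then inspect which plan is actually chosen.
VACUUM ANALYZE my_table;
EXPLAIN SELECT * FROM my_table WHERE a = 1 AND b = 2;
```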

regards, tom lane




Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Andreas Scherbaum

Tom Lane wrote:
 
 Andreas Scherbaum [EMAIL PROTECTED] writes:
  Justin Clift wrote:
  Did we reach an opinion as to whether we'll include GPL'd code?
 
  My vote is to not include this code, as it just muddies the water with
  PostgreSQL being BSD based.
 
  Hmm, there's enough GPL'ed stuff in contrib/ ;-)
 
 Indeed, the core committee recently agreed that we should try to ensure
 that the whole distribution is under the same BSD license.  I have a
 TODO item to contact the authors of the existing GPL'd contrib modules,
 and if possible get them to agree to relicense.  If not, those modules
 will be removed from contrib.
 
 There are other possible homes for contrib modules whose authors
 strongly prefer GPL.  For example, Red Hat's add-ons for Postgres will
 be GPL (per corporate policy), and I expect that they'd be willing to
 host contrib modules.  But the core distribution will be straight BSD
 to avoid license confusion.

I have to excuse myself, because I think I made a mistake.
Yes, my first intention was to make it GPL, but I don't insist on it.

On the other hand, I copied some parts from contrib/noupdate (there's no
license in the readme), and now I think that code is contributed under
the BSD license.
Am I right or am I wrong? I think I have to change the license.
Who is the author of the noupdate module, and can anybody tell me
what, in this case, is the right (or left) license?


Best regards

-- 
  Andreas 'ads' Scherbaum




Re: [HACKERS] new food for the contrib/ directory

2002-04-18 Thread Thomas Lockhart

...
 Thanks a lot for the excellent software.

My personal view is that one might consider using the same BSD license
as PostgreSQL itself as a gesture of appreciation for the software you
are using. Contribute or not, it is your choice. But if you are
benefiting from the software (and lots of folks are) then why not take
the big risk of contributing back with a similar license?

Regards.

- Thomas




Re: [HACKERS] Names of view select rules

2002-04-18 Thread Jan Wieck

No problem with that. Good idea IMHO.


Jan

Tom Lane wrote:
 Currently, the name of the ON SELECT rule for a view is defined to be
   '_RET' || viewname
 truncated if necessary to fit in a NAME.
 
 I've just committed fixes to make rule names be per-relation instead
 of global, and it occurs to me that we could now get rid of this
 convention.  The select rule name could be the same for all views ---
 _RETURN, say.  This would simplify life in a number of places.
 
 A quick look at psql, pgaccess, etc suggests that a lot of places know
 that view select rule names begin with _RET, but not that many are
 dependent on the rest of it.  So I think this wouldn't break clients
 too badly.
 
 Any thoughts pro or con?  I'm leaning towards changing it, but could be
 persuaded to leave well enough alone.
 
   regards, tom lane
 


-- 

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





[HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Tom Lane

I've been thinking about exactly what to do with access privileges for
namespaces (a/k/a schemas).  The SQL99 spec isn't much guidance, since
as far as I can tell it doesn't have explicit privileges for schemas
at all --- and in any case, since it identifies schemas and ownership,
the really interesting cases don't arise.

Here is a straw-man definition --- any comments appreciated.

We'll define two privilege bits for namespaces/schemas: read and
create (GRANT SELECT and GRANT INSERT seem like reasonable keyword
choices).  Read controls the ability to look up objects within
that namespace --- it's similar to execute permission on directories
in Unix.  Create controls the ability to create new objects within
a namespace.  As usual, superusers bypass these checks.
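
As a sketch of how these bits might be exercised, using the straw-man
keyword choices above (SELECT for read, INSERT for create); this is not
settled syntax, and the user names are hypothetical:

```sql
-- Straw-man syntax only: the keyword choices were still under discussion.
CREATE SCHEMA app;

-- "Read": let bob look up objects inside the schema.
GRANT SELECT ON SCHEMA app TO bob;

-- "Create": let alice create new objects in the schema.
GRANT INSERT ON SCHEMA app TO alice;

-- Lock a user out of the schema entirely.
REVOKE SELECT ON SCHEMA app FROM mallory;
```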

The initial state of the database will be: pg_catalog is world readable,
but has no create permissions; public has world read and create
permissions; pg_toast has no permissions (you can't explicitly inspect
toast tables).  Newly created schemas will initially have all permissions
for the owner, no permissions for anyone else.  Whenever a pg_temp
namespace is created or recycled by a fresh backend, it will be set to be
owned by the user running that backend, with all permissions for him and
none for anyone else.

Renaming of an object is allowed to the owner of that object regardless of
schema permissions.  While we could invent an UPDATE privilege on schemas
to control this, leaving it with the owner seems simpler.

Deletion of an object is allowed either to the owner of the object, or to
the owner of the containing schema.  (Without the latter provision, you
couldn't DROP a schema containing objects created by other people; which
seems wrong.)  Again, I'd rather keep this based on ownership than invent,
say, a DELETE privilege for schemas.

It's not quite clear what should happen if User A allows User B to create
an object in a schema owned by A, but then revokes read access on that
schema from B.  Presumably, B can no longer access the object, even though
he still owns it.  A would have the ability to delete the object under
these rules, but is that enough?

One of the things I'd like this mechanism to do is answer the request
we've heard so often about preventing users from creating new tables.
If the DBA revokes write access on the public namespace from a particular
user, and doesn't create a personal schema for that user, then under this
proposal that user would have noplace to create tables --- except TEMP
tables in his temp schema.  Is that sufficient, or do the folks who want
this also want a way to prevent TEMP table creation?

Another thing that would be needed to prevent users from creating new
tables is to prevent them from creating schemas for themselves.  I am not
sure how to handle that --- should the right to create schemas be treated
as a user property (a column of pg_shadow), or should it be attached
somehow to the database (and if the latter, how)?

As sketched so far, the schema privilege bits would be the same for all
object types --- whether table, type, function, or operator, either you
can look it up (resp. create it) in a given namespace, or you can't.
Offhand I see no need to distinguish different kinds of objects for this
purpose; does anyone think differently?

Should the owner of a database (assume he's not a superuser) have the
right to drop any schema in his database, even if he doesn't own it?
I can see arguments either way on that one.

regards, tom lane




Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Rod Taylor

 Should the owner of a database (assume he's not a superuser) have
the
 right to drop any schema in his database, even if he doesn't own it?
 I can see arguments either way on that one.

Given that you've chosen to allow the owner of a schema or the table
to drop a table, it would be consistent to allow the owner of the
database, schema or table to drop the table.

Much as I'd tend to allow the owner of a trigger, the table it's on,
the schema, or the database to drop the trigger.


Technically, if the owner of a database doesn't have permission to drop
a table, do they have permission to drop the database?  In that case,
pg_dump, drop the CREATE TABLE statement, drop db, create db, restore data
will accomplish the same thing.  All we've done is make the process
long and drawn out.





Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Joe Conway

Tom Lane wrote:
 We'll define two privilege bits for namespaces/schemas: read and
 create (GRANT SELECT and GRANT INSERT seem like reasonable keyword
 choices).  Read controls the ability to look up objects within
 that namespace --- it's similar to execute permission on directories
 in Unix.  Create controls the ability to create new objects within
 a namespace.  As usual, superusers bypass these checks.

If user1, the owner of schema1, creates a new table tab1, will user2,
who has Read privilege on schema1, be automatically granted SELECT
privilege on tab1? Or will he be able to see that tab1 exists, but not
select from it (continuing the analogy with directories)?


 
 The initial state of the database will be: pg_catalog is world readable,
 but has no create permissions; public has world read and create
 permissions; pg_toast has no permissions (you can't explicitly inspect
 toast tables).  Newly created schemas will initially have all permissions
 for the owner, no permissions for anyone else.  Whenever a pg_temp
 namespace is created or recycled by a fresh backend, it will be set to be
 owned by the user running that backend, with all permissions for him and
 none for anyone else.

This looks good to me. I only wonder if public should default to world 
read and no create?

 Renaming of an object is allowed to the owner of that object regardless of
 schema permissions.  While we could invent an UPDATE privilege on schemas
 to control this, leaving it with the owner seems simpler.

Agreed.

 
 Deletion of an object is allowed either to the owner of the object, or to
 the owner of the containing schema.  (Without the latter provision, you
 couldn't DROP a schema containing objects created by other people; which
 seems wrong.)  Again, I'd rather keep this based on ownership than invent,
 say, a DELETE privilege for schemas.

I'd agree with other posted comments -- the db owner should also be
essentially a superuser in their own db.



 
 It's not quite clear what should happen if User A allows User B to create
 an object in a schema owned by A, but then revokes read access on that
 schema from B.  Presumably, B can no longer access the object, even though
 he still owns it.  A would have the ability to delete the object under
 these rules, but is that enough?

I like this. That way I can lock out a particular user if I need to with 
a single command. Would A automatically get ALL privileges on objects 
created in his schema by others? I think he should.


 
 One of the things I'd like this mechanism to do is answer the request
 we've heard so often about preventing users from creating new tables.
 If the DBA revokes write access on the public namespace from a particular
 user, and doesn't create a personal schema for that user, then under this
 proposal that user would have noplace to create tables --- except TEMP
 tables in his temp schema.  Is that sufficient, or do the folks who want
 this also want a way to prevent TEMP table creation?

I think there should be a way to prevent temp table creation, but not 
set that way as the default. Presumably you could REVOKE INSERT on the 
temp schema?


 
 Another thing that would be needed to prevent users from creating new
 tables is to prevent them from creating schemas for themselves.  I am not
 sure how to handle that --- should the right to create schemas be treated
 as a user property (a column of pg_shadow), or should it be attached
 somehow to the database (and if the latter, how)?

I think only the database owner should be able to create schemas in 
their own database. That way if I want a user to be able to create 
tables, I just grant them CREATE in the public schema, or create a 
schema for them.


 
 As sketched so far, the schema privilege bits would be the same for all
 object types --- whether table, type, function, or operator, either you
 can look it up (resp. create it) in a given namespace, or you can't.
 Offhand I see no need to distinguish different kinds of objects for this
 purpose; does anyone think differently?
 

Agreed. How would it work, though, if, say, I wanted to create a view in
the public schema that pointed at a table in a schema on which SELECT has
been revoked? Same question for a public function/private table. It would
be ideal if you could do this.


 Should the owner of a database (assume he's not a superuser) have the
 right to drop any schema in his database, even if he doesn't own it?
 I can see arguments either way on that one.
 

I think the database owner should be just like a superuser in his little 
world. The db owner should be able to drop contained schemas or other 
objects at will.


Just my 2 cents.

Joe





Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Hiroshi Inoue
Tom Lane wrote:
 
 Bruce Momjian [EMAIL PROTECTED] writes:
  I have updated the TODO to:
  o Abort all or commit all SET changes made in an aborted transaction
  I don't think our current behavior is defended by anyone.
 
 Hiroshi seems to like it ...

Probably I don't love it. Honestly, I don't understand
what the new TODO means exactly.
I don't think this is an *all should be* or *all
or nothing* kind of thing. If a SET variable has
a reason of its own, it should behave in its own right.

 However, "commit SETs even after an error" is most certainly NOT
 acceptable. 

What I've meant is that SET commands are outside transactional
control, so the phrase *commit SETs even after* has no meaning
to me. Basically it's a user's responsibility to manage the
errors. Only he knows what to do with the errors.

regards,
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Peter Eisentraut

Tom Lane writes:

 We'll define two privilege bits for namespaces/schemas: read and
 create (GRANT SELECT and GRANT INSERT seem like reasonable keyword
 choices).  Read controls the ability to look up objects within
 that namespace --- it's similar to execute permission on directories
 in Unix.  Create controls the ability to create new objects within
 a namespace.  As usual, superusers bypass these checks.

I think other databases actually use GRANT CREATE.

About the read permission, I think that other databases use the rule that
you can see an object if and only if you have some sort of privilege on
it.  I see little reason to create an extra privilege to just see the
existence of objects.

 It's not quite clear what should happen if User A allows User B to create
 an object in a schema owned by A, but then revokes read access on that
 schema from B.  Presumably, B can no longer access the object, even though
 he still owns it.  A would have the ability to delete the object under
 these rules, but is that enough?

That concern would be eliminated by the system above.  B can still access
anything it owns.  If A doesn't like B anymore, just delete B's stuff in
A's schemas.

 One of the things I'd like this mechanism to do is answer the request
 we've heard so often about preventing users from creating new tables.
 If the DBA revokes write access on the public namespace from a particular
 user, and doesn't create a personal schema for that user, then under this
 proposal that user would have noplace to create tables --- except TEMP
 tables in his temp schema.  Is that sufficient, or do the folks who want
 this also want a way to prevent TEMP table creation?

Maybe the temp schema should be a permanent catalog entry.  That way the
DBA can revoke create access from it as a means to disallow users to
create temp tables.

 Another thing that would be needed to prevent users from creating new
 tables is to prevent them from creating schemas for themselves.  I am not
 sure how to handle that --- should the right to create schemas be treated
 as a user property (a column of pg_shadow), or should it be attached
 somehow to the database (and if the latter, how)?

An aclitem[] column on pg_database seems like the most flexible solution
to me.

 Offhand I see no need to distinguish different kinds of objects for this
 purpose; does anyone think differently?

Not me.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Tom Lane

Hiroshi Inoue [EMAIL PROTECTED] writes:
 I don't think this is  *all* *should be* or *all
 or nothing* kind of thing. If a SET variable has
 its reason, it would behave in its own right.

Well, we could provide some kind of escape hatch to let the behavior
vary from one variable to the next.  But can you give any specific
examples?  Which SET variables should not roll back on error?

regards, tom lane




Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 We'll define two privilege bits for namespaces/schemas: read and
 create (GRANT SELECT and GRANT INSERT seem like reasonable keyword
 choices).

 I think other databases actually use GRANT CREATE.

Okay, I'm not picky about the keywords.

 About the read permission, I think that other databases use the rule that
 you can see an object if and only if you have some sort of privilege on
 it.  I see little reason to create an extra privilege to just see the
 existence of objects.

Hm.  That seems like it would not interact at all well with resolution
of ambiguous functions and operators.  In the first place, I don't want
to execute a permission check for every candidate function/operator
before I can assemble the list of candidates to be chosen among.  (For
example, on every use of an '=' operator that would cost us seventy-three
permissions checks, rather than one.)  In the second place, that would
mean that granting or revoking access to a particular operator could
change resolution decisions for *other* operators of the same name ---
which is certainly surprising.  In the third place, it's wrong to be
applying permissions checks at parse-analysis time; they should be done
at run-time.  Otherwise rules have big problems.  I realize that we have
to apply the namespace permissions checks at parse time, but I don't
want to do it for ordinary objects.

 If the DBA revokes write access on the public namespace from a particular
 user, and doesn't create a personal schema for that user, then under this
 proposal that user would have noplace to create tables --- except TEMP
 tables in his temp schema.  Is that sufficient, or do the folks who want
 this also want a way to prevent TEMP table creation?

 Maybe the temp schema should be a permanent catalog entry.  That way the
 DBA can revoke create access from it as a means to disallow users to
 create temp tables.

Hm, we could clone a prototype pg_temp schema entry as a means of
getting this set up, I suppose.  But the first question should be is it
worth troubling with?

 Another thing that would be needed to prevent users from creating new
 tables is to prevent them from creating schemas for themselves.  I am not
 sure how to handle that --- should the right to create schemas be treated
 as a user property (a column of pg_shadow), or should it be attached
 somehow to the database (and if the latter, how)?

 An aclitem[] column on pg_database seems like the most flexible solution
 to me.

Yeah, I was afraid you would say that ;-).  I'd prefer to avoid it
because I think we'd need to have a TOAST table for pg_database then.
And I'm not at all sure how to setup a shared toast table.  Can we get
away with constraining pg_database rows to 8K if they contain ACL lists?
(We might get some benefit from compression of the ACL list, but
probably not a heck of a lot.)

regards, tom lane




Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Tom Lane

Joe Conway [EMAIL PROTECTED] writes:
 If user1, the owner of the schema1, creates a new table tab1, will user2 
   who has Read privilege to schema1, be automatically granted SELECT 
 privilege on tab1? Or will he be able to see that tab1 exists, but not 
 select from it (continuing the analogy with directories)?

No, and yes.

 This looks good to me. I only wonder if public should default to world 
 read and no create?

That would be non-backwards-compatible.  Since the main reason for
having the public namespace at all is backwards compatibility of the
out-of-the-box behavior, I think we have to let it default to world
write.  DBAs can revoke world write, or even remove the public namespace
altogether, if they want to run a tighter ship.

 I like this. That way I can lock out a particular user if I need to with 
 a single command. Would A automatically get ALL privileges on objects 
 created in his schema by others? I think he should.

Hmm, I'd argue not; see nearby messages.  The analogy with Unix
directory permissions seems to hold good here.  If you are owner of
a directory you can delete files therein, but not necessarily do
anything else with 'em.

 I think only the database owner should be able to create schemas in 
 their own database.

That seems overly restrictive to me; it'd be the equivalent of getting
rid of users that have createdb rights but aren't superusers.

Also, if a database owner is not superuser, I do not think he should be
able to create objects that are marked as belonging to other users.
At least not in general.  Do we need to make an exception for schemas?

 Agreed. How would it work though if say I wanted to create a view in the 
 public schema, which pointed at a table in a schema which has had SELECT 
 revoked? Same question for a public function/private table. It would be 
 ideal if you could do this.

AFAICS this would not be checked at creation time, but when someone
tries to use the view; just the same as now.

regards, tom lane




Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Hiroshi Inoue
Tom Lane wrote:
 
 Hiroshi Inoue [EMAIL PROTECTED] writes:
  I don't think this is  *all* *should be* or *all
  or nothing* kind of thing. If a SET variable has
  its reason, it would behave in its own right.
 
 Well, we could provide some kind of escape hatch to let the behavior
 vary from one variable to the next.  But can you give any specific
 examples?  Which SET variables should not roll back on error?

It seems veeery dangerous to conclude that SET *should* 
roll back even if there's no *should not* roll back case.
There could be no *should not* roll back case because
a user could set the variable as he likes in the next
transaction.
  
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/



Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE

2002-04-18 Thread Michael Loftis

Got some numbers now...  You'll notice the random reads are *really*
slow.  The reason for this is that the particular read sizes being used
are the absolute worst case for my particular configuration.  (With
a 32kb or 64kb block size I generally achieve much higher performance,
even on random I/O.)  Sequential I/O is most likely being limited, at
least in part, by the available CPU power...

Sequential tests:

2147483648 bytes read in 39.716158 seconds  54070780.16 bytes/sec
2147483648 bytes read in 37.836187 seconds  56757401.27 bytes/sec
2147483648 bytes read in 38.081452 seconds  56391853.13 bytes/sec
2147483648 bytes read in 38.122105 seconds  56331717.46 bytes/sec
2147483648 bytes read in 38.303999 seconds  56064215.33 bytes/sec

Total:  192.059901 seconds 279615967.4 (mumble)
Ave:  38.4119802 seconds  55923193.47 bytes/sec

Random tests:

2147483648 bytes read in 1744.002332 seconds  1231353.656 bytes/sec
2147483648 bytes read in 1744.797705 seconds  1230792.339 bytes/sec
2147483648 bytes read in 1741.577362 seconds  1233068.191 bytes/sec
2147483648 bytes read in 1741.497690 seconds  1233124.603 bytes/sec
2147483648 bytes read in 1739.773354 seconds  1234346.786 bytes/sec

Total:  8711.648443 seconds 6162685.575
Ave: 1742.329689 seconds 1232537.115 bytes/sec

So on this machine, at that block I/O level (an 8kb block, I believe),
I get a ~55MB/sec sequential read rate and a ~1.2MB/sec random read rate.
Like I said, though, I'm fairly certain the random read rates were worst
case because of the particular block size this system's configuration
uses.  But I feel the results are respectable and valid nonetheless.

Note how the random reads kept getting better...  The ICP and drive
caching firmware were starting to 'catch on' that this 2gb file was a
hot spot, and so preferred to cache things a little longer and
pre-fetch in a different order than normal.  I estimate that it would
have dropped as low as 1700 seconds if allowed to keep going.
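
For reference, the two read patterns being measured can be sketched as
follows. This is an illustrative Python reimplementation of the
methodology, not the actual test harness (which was a shell script
driving a compiled reader):

```python
import os
import random

BLOCK = 8192  # 8 kB reads, matching the block size used in the tests above


def sequential_read(path: str) -> int:
    """Read the file front to back in BLOCK-sized chunks; return bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            total += len(chunk)
    return total


def random_read(path: str, n_blocks: int) -> int:
    """Read n_blocks randomly chosen, BLOCK-aligned chunks; return bytes read."""
    size = os.path.getsize(path)
    total = 0
    with open(path, "rb") as f:
        for _ in range(n_blocks):
            f.seek(random.randrange(size // BLOCK) * BLOCK)
            total += len(f.read(BLOCK))
    return total
```

Timing each call against the same large file (much bigger than RAM, to
swamp caching effects, as discussed earlier in the thread) yields the
sequential-vs-random throughput comparison above.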




RAW output from my script...


mloftis@free:/mnt/rz01/ml01/rndtst$ sh PAGECOST2GB.sh
CREATING FILE
Thu Apr 18 09:11:55 PDT 2002
Creating test file 2gb.test of 2048 mb
  176.23 real22.75 user34.72 sys
BEGINNING SEQUENTIAL TESTS
Thu Apr 18 09:14:51 PDT 2002
Sequential read test of 2gb.test
2147483648 bytes read in 39.716158 seconds
   39.73 real 1.52 user23.87 sys
Sequential read test of 2gb.test
2147483648 bytes read in 37.836187 seconds
   37.83 real 1.44 user23.68 sys
Sequential read test of 2gb.test
2147483648 bytes read in 38.081452 seconds
   38.08 real 1.62 user23.51 sys
Sequential read test of 2gb.test
2147483648 bytes read in 38.122105 seconds
   38.12 real 1.63 user23.50 sys
Sequential read test of 2gb.test
2147483648 bytes read in 38.303999 seconds
   38.30 real 1.32 user23.83 sys
Thu Apr 18 09:18:03 PDT 2002
BEGINNING RANDOM READ TESTS
Random read test of 2gb.test for 2048 mb
2147483648 bytes read in 1744.002332 seconds
 1744.01 real 4.33 user36.47 sys
Random read test of 2gb.test for 2048 mb
2147483648 bytes read in 1744.797705 seconds
 1744.81 real 4.38 user36.56 sys
Random read test of 2gb.test for 2048 mb
2147483648 bytes read in 1741.577362 seconds
 1741.58 real 4.58 user36.18 sys
Random read test of 2gb.test for 2048 mb
2147483648 bytes read in 1741.497690 seconds
 1741.50 real 4.17 user36.57 sys
Random read test of 2gb.test for 2048 mb
2147483648 bytes read in 1739.773354 seconds
 1739.78 real 4.41 user36.36 sys
TESTS COMPLETED
Thu Apr 18 11:43:15 PDT 2002






Michael Loftis wrote:

 Numbers being run on a BSD box now...

 FreeBSD 4.3-p27, 512MB RAM, 2xPiii600 Xeon, on a 4-disk RAID 5 array on
 a dedicated ICP Vortex card.  Sorry, no single drives on this box; I have
 an outboard Silicon Gear Mercury on a motherboard-based Adaptec
 controller I can test as well.  I'll post when the tests on the Vortex
 are done.  I'm using 2Gb files ATM; I'll look at the code and see if
 it can be made to work with large files.  At least for FreeBSD the
 change will mostly amount to something like s/fseek/fseeko/g and
 s/size_t/off_t/g.  FreeBSD seems to prefer the Open Unix standard
 in this regard...

 This will make it usable for much larger test files.

 Tom Lane wrote:

 Michael Loftis [EMAIL PROTECTED] writes:

  Something's wrong with the random numbers from the Sun... re-run
  them; that first sample is insane.  Caching looks like it's
  affecting your results a lot...


 Yeah; it looks like the test case is not large enough to swamp out
 caching effects on the Sun box.  It is on the Linux box, evidently,
 since the 10:1 ratio appears very repeatable.

 regards, tom lane




 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Oliver Elphick

On Fri, 2002-04-19 at 00:14, Tom Lane wrote:

 It's not quite clear what should happen if User A allows User B to create
 an object in a schema owned by A, but then revokes read access on that
 schema from B.  Presumably, B can no longer access the object, even though
 he still owns it.  A would have the ability to delete the object under
 these rules, but is that enough?

Then A should take over ownership.  It would be like the expiry of a
lease on a piece of land: any buildings erected by the lessee become the
property of the landowner.  (If this consequence was not desired, the
objects should not have been created in a database/schema outside the
owner's control.)

 Another thing that would be needed to prevent users from creating new
 tables is to prevent them from creating schemas for themselves.  I am not
 sure how to handle that --- should the right to create schemas be treated
 as a user property (a column of pg_shadow), or should it be attached
 somehow to the database (and if the latter, how)?

I think it could be both: a database owner may not want any schemas
created by anyone else, or by some particular user; alternatively, the
administrator may not want a particular user to create any schemas
anywhere.  These are two different kinds of restriction:

   GRANT CREATE SCHEMA TO user | PUBLIC
   REVOKE CREATE SCHEMA FROM user | PUBLIC

would allow/disallow the user (other than the database owner) the
theoretical right to create a schema, whereas

   GRANT CREATE SCHEMA IN database TO user | PUBLIC
   REVOKE CREATE SCHEMA IN database FROM user | PUBLIC

would allow/disallow him that right on a particular database.  Having both gives
more flexibility and allows different people control for different
purposes (suppose someone needs to pay for the privilege to create
schemas in a variable set of databases; the general permission could be
turned on or off according to whether the bill was paid).  A general
permission would be needed before permission could be effective on a
particular database.


 Should the owner of a database (assume he's not a superuser) have the
 right to drop any schema in his database, even if he doesn't own it?
 I can see arguments either way on that one.

I think a database owner should be able to override the owner of a
schema within the database; similarly a schema owner should be able to
override the owner of an object within the schema.  This makes sense in
practice, since the higher owner can delete the schema/object and
recreate it under his own ownership; so there is little point in not
allowing him to change it directly.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C

 For I am persuaded, that neither death, nor life, nor 
  angels, nor principalities, nor powers, nor things 
  present, nor things to come, nor height, nor depth, 
  nor any other creature, shall be able to separate us 
  from the love of God, which is in Christ Jesus our 
  Lord. Romans 8:38,39 



signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Michael Loftis


Hiroshi Inoue wrote:

Tom Lane wrote:

Hiroshi Inoue [EMAIL PROTECTED] writes:

I don't think this is  *all* *should be* or *all
or nothing* kind of thing. If a SET variable has
its reason, it would behave in its own right.

Well, we could provide some kind of escape hatch to let the behavior
vary from one variable to the next.  But can you give any specific
examples?  Which SET variables should not roll back on error?


It seems veeery dangerous to conclude that SET *should* 
roll back even if there's no *should not* roll back case.
There could be no *should not* roll back case because
a user could set the variable as he likes in the next
transaction.

In which case, if I'm understanding you correctly, Hiroshi-san, the
rollback is moot anyway...

IE


BEGIN transaction_1
...
SET SOMEVAR=SOMETHING
...
COMMIT

(transaction_1 fails and rolls back)

BEGIN transaction_2
...
SET SOMEVAR=SOMETHINGELSE
...
COMMIT

(transaction_2 succeeds)

SOMEVAR, in either case, assuming transaction_2 succeeds, would be
SOMETHINGELSE.  If both succeed, SOMEVAR is SOMETHINGELSE; if the first
succeeds and the second fails, SOMEVAR will be SOMETHING.  If neither
succeeds, SOMEVAR (for this short example) is whatever it was before the
two transactions.


Am I understanding you correctly in that this is the example you were
trying to point out?


  
Hiroshi Inoue
   http://w2422.nsk.ne.jp/~inoue/




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Oliver Elphick

On Fri, 2002-04-19 at 01:10, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
 
  Another thing that would be needed to prevent users from creating new
  tables is to prevent them from creating schemas for themselves.  I am not
  sure how to handle that --- should the right to create schemas be treated
  as a user property (a column of pg_shadow), or should it be attached
  somehow to the database (and if the latter, how)?
 
  An aclitem[] column on pg_database seems like the most flexible solution
  to me.
 
 Yeah, I was afraid you would say that ;-).  I'd prefer to avoid it
 because I think we'd need to have a TOAST table for pg_database then.
 And I'm not at all sure how to setup a shared toast table.  Can we get
 away with constraining pg_database rows to 8K if they contain ACL lists?
 (We might get some benefit from compression of the ACL list, but
 probably not a heck of a lot.)

Creating schemas is not the kind of thing people do very frequently. 
Why not simply normalise the relationship into another table?  The extra
expense of the lookup would be insignificant in the total context of
schema creation.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C

 For I am persuaded, that neither death, nor life, nor 
  angels, nor principalities, nor powers, nor things 
  present, nor things to come, nor height, nor depth, 
  nor any other creature, shall be able to separate us 
  from the love of God, which is in Christ Jesus our 
  Lord. Romans 8:38,39 





Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Rod Taylor

  Another thing that would be needed to prevent users from creating new
  tables is to prevent them from creating schemas for themselves.  I am not
  sure how to handle that --- should the right to create schemas be treated
  as a user property (a column of pg_shadow), or should it be attached
  somehow to the database (and if the latter, how)?

 I think only the database owner should be able to create schemas in
 their own database. That way if I want a user to be able to create
 tables, I just grant them CREATE in the public schema, or create a
 schema for them.

If owners could be groups, I'd tend to agree.  I'm tired of setting up
general admin logins and giving a group of people a single key for
doing system changes.  Anytime someone has to leave the company we run
around and issue new keys.

I really want to allow a small group to have control of the
development db but not in other DBs (other projects generally).
Granting superuser status isn't appropriate.  But, giving a group
control over an individual database (schema or otherwise) is extremely
useful.  Production basically has the same thing but a different
group -- who know enough not to touch anything without a patch and
change control being issued by development which has been approved by
the resident DBA.

I'd really like to see a schema owner have full control over all
objects in a schema, and likewise a database owner have full control
over their database.   My POV for large systems.



Lets look at small ones.  Database usage in webhosting companies is on
the rise.  With the changes to pg_hba.conf to allow specific users
access to specific databases it can now be easily sold as a part of a
hosting package.

FTP accounts on a server always have a master.  Larger clients will
often create a directory structure in such a way that various web
developers can work in various parts without having to worry about
accidentally touching others' stuff.  BUT the master account can still
override the entire set if necessary.  They own the parent; they flip
permissions to suit themselves if they're blocked by them.


Postgresql needs something similar to be easily sold as a service.
The person actually paying for the DB installation would of course be
the owner of the DB.

In the event of a company, the buyer may allow others to do work
(consultants? employee? friend?).  They create a user, a schema and
put the user to work.  User does something they shouldn't and is
removed for it.  Owner wants to clean up the mess or continue
maintenance.  How do they do this?  Owner isn't a superuser as
they're simply buying DB services from an Application hosting company.
They can't login as the user as they don't have the password (user
took it with them). ** I forget whether changing ownership of an
object would require superuser access or just ownership of the parent
object. **  So, they're left with calling the hosting company to clean
up the mess for them (not something we'd want to do).


With Postgresql 7.3 the above is a likely scenario at the company I
work for as we would like to offer this type of service along side the
other DBs we currently host -- and it's very close to being feasible.
What I need is a per DB superuser / supergroup which cannot do things
like drop database (even their own preferably as that ends in a tech
support call to have it recreated), create untrusted procedures /
languages, and other nerve-racking abilities.

Giving the database owner, or better a group at the database level, an
ACL to accomplish any job within their own database (including user
creation -- but we can get around that with a control panel to do it
for them) that an otherwise untrusted user wouldn't be allowed to do
looks very good to me.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 [ how it ought to be to support hosting companies ]

I'm not real comfortable with this.  The design I proposed is based
fairly firmly on the Unix directory/file protection model --- which
is assuredly not perfect, but it's survived a lot of use and is not
known to have major flaws.  You're suggesting that we should invent
a protection model off-the-cuff on the basis of the supposed needs
of one class of application.  I think that's a recipe for trouble...

 I'd really like to see a schema owner have full control over all
 objects in a schema, and likewise a database owner have full control
 over their database.   My POV for large systems.

Those things are both easily done: just don't allow anyone else to
create objects in your schema (resp. database).  This is indeed what
SQL99 envisions.  However, in a database where there are multiple
users sharing schemas, I am not convinced that the notion the schema
owner has ALL rights to objects within the schema is appropriate.
That seems to me to go way too far; if we are troubling to maintain
distinct ownership of objects within a schema, that should mean
something.  In particular, the guy who is not the schema owner should
be able to have some confidence that the guy who is can't make arbitrary
changes in his table.  Otherwise the schema owner is effectively
superuser, and what's the point of pretending he's not?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Joe Conway

Oliver Elphick wrote:
 On Fri, 2002-04-19 at 00:14, Tom Lane wrote:
 I think it could be both: a database owner may not want any schemas
 created by anyone else, or by some particular user; alternatively, the
 administrator may not want a particular user to create any schemas
 anywhere.  These are two different kinds of restriction:
 
GRANT CREATE SCHEMA TO user | PUBLIC
REVOKE CREATE SCHEMA FROM user | PUBLIC
 
 would allow/disallow the user (other than the database owner) the
 theoretical right to create a schema, whereas
 
GRANT CREATE SCHEMA IN database TO user | PUBLIC
REVOKE CREATE SCHEMA IN database FROM user | PUBLIC
 
 would allow/disallow him that right on a particular database.  Having both gives
 more flexibility and allows different people control for different
 purposes (suppose someone needs to pay for the privilege to create
 schemas in a variable set of databases; the general permission could be
 turned on or off according to whether the bill was paid).  A general
 permission would be needed before permission could be effective on a
 particular database.

I like this general idea and syntax. But it seems awkward to have to 
have the privilege granted twice. What about:

 GRANT CREATE SCHEMA [IN { database | ALL }] TO user | PUBLIC
 REVOKE CREATE SCHEMA [IN { database | ALL }] FROM user | PUBLIC

where lack of the IN clause implies the current database, and ALL 
implies a system-wide grant/revoke. System-wide could only be issued by 
a superuser, while a specific database command could be issued by the DB 
owner or a superuser.

 
Should the owner of a database (assume he's not a superuser) have the
right to drop any schema in his database, even if he doesn't own it?
I can see arguments either way on that one.
 
 
 I think a database owner should be able to override the owner of a
 schema within the database; similarly a schema owner should be able to
 override the owner of an object within the schema.  This makes sense in
 practice, since the higher owner can delete the schema/object and
 recreate it under his own ownership; so there is little point in not
 allowing him to change it directly.

Yeah, I still feel that the owner of a container object like a 
database or schema should have complete control of whatever is contained 
therein. Anything else would strike me as surprising behavior.

Joe


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Oliver Elphick

On Fri, 2002-04-19 at 02:24, Joe Conway wrote:
 I like this general idea and syntax. But it seems awkward to have to 
 have the privilege granted twice. What about:
 
  GRANT CREATE SCHEMA [IN { database | ALL }] TO user | PUBLIC
  REVOKE CREATE SCHEMA [IN { database | ALL }] FROM user | PUBLIC

I would naturally interpret granting permission IN ALL to mean that the
user would certainly be allowed permission in all databases, whereas it
ought to be clear that the permission given is only hypothetical and
subject to permission's being granted for a specific database.

 where lack of the IN clause implies the current database, and ALL 
 implies a system-wide grant/revoke. System-wide could only be issued by 
 a superuser, while a specific database command could be issued by the DB 
 owner or a superuser.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C

 For I am persuaded, that neither death, nor life, nor 
  angels, nor principalities, nor powers, nor things 
  present, nor things to come, nor height, nor depth, 
  nor any other creature, shall be able to separate us 
  from the love of God, which is in Christ Jesus our 
  Lord. Romans 8:38,39 





Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Rod Taylor

 I'm not real comfortable with this.  The design I proposed is based
 fairly firmly on the Unix directory/file protection model --- which
 is assuredly not perfect, but it's survived a lot of use and is not
 known to have major flaws.  You're suggesting that we should invent

Will we be able to accomplish the equivalent of the below?


knight# ls -la
total 3
drwxr-xr-x   2 rbt   rbt 512 Apr 18 21:53 .
drwxr-xr-x  43 rbt   rbt2048 Apr 18 21:36 ..
-rwx--   1 root  wheel 0 Apr 18 21:53 file

knight# head /etc/group
# $FreeBSD: src/etc/group,v 1.19.2.1 2001/11/24 17:22:24 gshapiro Exp
$
#
wheel:*:0:root
daemon:*:1:daemon
kmem:*:2:root
sys:*:3:root
tty:*:4:root
operator:*:5:root
mail:*:6:
bin:*:7:

knight# exit
exit

bash-2.05a$ whoami
rbt

bash-2.05a$ rm file
override rwx--  root/wheel for file? y

bash-2.05a$ ls -la
total 3
drwxr-xr-x   2 rbt  rbt   512 Apr 18 21:55 .
drwxr-xr-x  43 rbt  rbt  2048 Apr 18 21:36 ..


  I'd really like to see a schema owner have full control over all
  objects in a schema, and likewise a database owner have full control
  over their database.   My POV for large systems.

 Those things are both easily done: just don't allow anyone else to
 create objects in your schema (resp. database).  This is indeed what

Yes, basically what we do now.  I'm hoping to add the ability to
enable a group (ROLES) to have ownership of items as well as users
when I complete the other tasks I've set before myself.




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Odd(?) RI-trigger behavior

2002-04-18 Thread Tom Lane

I was just fooling around with replacing the existing plain index on
pg_trigger.tgrelid with a unique index on (tgrelid, tgname).  In theory
this should not affect anything --- the code already enforced that two
triggers on the same relation can't have the same name.  The index
should merely provide a backup check.

So I was a tad surprised to get a regression test failure:

*** ./expected/foreign_key.out  Thu Apr 11 15:13:36 2002
--- ./results/foreign_key.out   Thu Apr 18 21:26:20 2002
***
*** 899,905 
  ERROR:  unnamed referential integrity violation - key in pktable still referenced from pktable
  -- fails (1,1) is being referenced (twice)
  update pktable set base1=3 where base1=1;
! ERROR:  unnamed referential integrity violation - key in pktable still referenced from pktable
  -- this sequence of two deletes will work, since after the first there will be no (2,*) references
  delete from pktable where base2=2;
  delete from pktable where base1=2;
--- 899,905 
  ERROR:  unnamed referential integrity violation - key in pktable still referenced from pktable
  -- fails (1,1) is being referenced (twice)
  update pktable set base1=3 where base1=1;
! ERROR:  unnamed referential integrity violation - key referenced from pktable not found in pktable
  -- this sequence of two deletes will work, since after the first there will be no (2,*) references
  delete from pktable where base2=2;
  delete from pktable where base1=2;

==

This particular test involves a table with a foreign-key reference to
itself, ie, it's both PK and FK.  What apparently is happening is that
the two RI triggers are now being fired in a different order than
before.  While either of them would have detected an error, we now get
the other error first.

Does this bother anyone?  It seems to me that the old code essentially
had no guarantee at all about the order in which the triggers would
fire, and so it was pure luck that the regression test never showed
the other message.

With the modified code, because we load the triggers by scanning
an index on (tgrelid, tgname), it is actually true that triggers are
fired in name order.  We've had requests in the past to provide a
well-defined firing order for triggers --- should we document this
behavior and support it, or should we pretend it ain't there?

BTW, the same goes for rules: it would now be pretty easy to guarantee
that rules are fired in name order.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Christopher Kings-Lynne

 Will we be able to accomplish the equivalent of the below?


 knight# ls -la
 total 3
 drwxr-xr-x   2 rbt   rbt 512 Apr 18 21:53 .
 drwxr-xr-x  43 rbt   rbt2048 Apr 18 21:36 ..
 -rwx--   1 root  wheel 0 Apr 18 21:53 file

 knight# head /etc/group
 # $FreeBSD: src/etc/group,v 1.19.2.1 2001/11/24 17:22:24 gshapiro Exp
 $
 #
 wheel:*:0:root
 daemon:*:1:daemon
 kmem:*:2:root
 sys:*:3:root
 tty:*:4:root
 operator:*:5:root
 mail:*:6:
 bin:*:7:

 knight# exit
 exit

 bash-2.05a$ whoami
 rbt

 bash-2.05a$ rm file
 override rwx--  root/wheel for file? y

 bash-2.05a$ ls -la
 total 3
 drwxr-xr-x   2 rbt  rbt   512 Apr 18 21:55 .
 drwxr-xr-x  43 rbt  rbt  2048 Apr 18 21:36 ..

That is, of course, a BSD-ism that would confuse a lot of the SysV people...
:)

Chris


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 Will we be able to accomplish the equivalent of the below?

I think what you're depicting is the equivalent of a schema owner
dropping a table in his schema, right?  Yes, I proposed allowing that,
but not granting the schema owner any other ownership rights over
contained tables.  This is analogous to the way that ownership of a Unix
directory lets you rm a contained file ... but not necessarily alter
that file in any way short of rm'ing it.

 Yes, basically what we do now.  I'm hoping to add the ability to
 enable a group (ROLES) to have ownership of items as well as users
 when I complete the other tasks I've set before myself.

That could be a good extension, but I think it's orthogonal to the
immediate issue...

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] Getting Constrint information..??

2002-04-18 Thread Steffen Nielsen

Help!

Can anyone tell me what's wrong with the following code snippet?  I nuke the 
server when it's called (a stored procedure).

... some VALID spi_exec call :-) has been done ...

TupleDesc tupdesc = SPI_tuptable->tupdesc;
TupleConstr *tupconstr = SPI_tuptable->tupdesc->constr;
ConstrCheck  *check = tupconstr->check;
SPITupleTable *tuptable = SPI_tuptable;
char *ccbin;

char buf[8192];
int i;

for (i = 1, buf[0] = 0; i <= tupdesc->natts; i++) {
    ccbin = check[i].ccbin;
    sprintf(buf + strlen(buf), "%s, %s",
            SPI_fname(tupdesc, i),
            ccbin);
    elog(NOTICE, "%s", buf);
}




I have not had any luck :-(  I'm a C beginner though, so maybe I screwed up when 
accessing the structures.

Any help is appreciated

/Steffen Nielsen


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Rod Taylor

 That is, of course, a BSD-ism that would confuse a lot of the SysV people...
 :)

Yup..  But it's been around quite a while and I don't know of any
horrible problems with it -- that said I've not actually tried it on
OpenBSD (different mindset) but would be surprised if it wasn't the
same.

Sure, it may not be the smartest thing to allow user Y to create a
table in my schema BUT if I decide to reverse that decision  (for
whatever reason) I want to be able to drop the junk user Y littered
around my schema along with the user even if I'm not allowed to look
at it, use it or otherwise fiddle around with it.

But if I'm the only one who feels this way, so be it.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Rod Taylor

  Will we be able to accomplish the equivalent of the below?

 I think what you're depicting is the equivalent of a schema owner
 dropping a table in his schema, right?  Yes, I proposed allowing that,

Yes, that's what I was looking for.  Sorry if I missed that in the
initial proposal.

  Yes, basically what we do now.  I'm hoping to add the ability to
  enable a group (ROLES) to have ownership of items as well as users
  when I complete the other tasks I've set before myself.

 That could be a good extension, but I think it's orthogonal to the
 immediate issue...

Yes it is.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] syslog support by default

2002-04-18 Thread Tatsuo Ishii

Can we enable syslog support by default for 7.3?
--
Tatsuo Ishii

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Getting Constrint information..??

2002-04-18 Thread Tom Lane

Steffen Nielsen [EMAIL PROTECTED] writes:
 Can anyone tell me what's wrong with the following code snippet?  I nuke the 
 server when it's called (a stored procedure).

 for (i = 1, buf[0] = 0; i <= tupdesc->natts; i++) {
 ccbin = check[i].ccbin;

Well, for one thing, the number of check[] array entries is probably not
equal to the number of attributes of the relation.  tupconstr->num_check
tells you how many there are.  For another, check[] should be indexed
from 0 not 1 (just like all C arrays).

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] syslog support by default

2002-04-18 Thread Tom Lane

Tatsuo Ishii [EMAIL PROTECTED] writes:
 Can we enable syslog support by default for 7.3?

AFAIR, we agreed to flip the default some time ago, we just didn't
want to do it late in the 7.2 cycle.  Go for it.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] syslog support by default

2002-04-18 Thread Tatsuo Ishii

 Tatsuo Ishii [EMAIL PROTECTED] writes:
  Can we enable syslog support by default for 7.3?
 
 AFAIR, we agreed to flip the default some time ago, we just didn't
 want to do it late in the 7.2 cycle.  Go for it.

Ok. I'll work on this.
--
Tatsuo Ishii

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] timeout implementation issues

2002-04-18 Thread Hiroshi Inoue
Michael Loftis wrote:
 
 Hiroshi Inoue wrote:
 
 Tom Lane wrote:
 
 Hiroshi Inoue [EMAIL PROTECTED] writes:
 
 I don't think this is  *all* *should be* or *all
 or nothing* kind of thing. If a SET variable has
 its reason, it would behave in its own right.
 
 Well, we could provide some kind of escape hatch to let the behavior
 vary from one variable to the next.  But can you give any specific
 examples?  Which SET variables should not roll back on error?
 
 
 It seems veeery dangerous to conclude that SET *should*
 roll back even if there's no *should not* roll back case.
 There could be no *should not* roll back case because
 a user could set the variable as he likes in the next
 transaction.
 
  In which case, if I'm understanding you correctly, Hiroshi-san, the
 rollback is moot anyway...
 
 IE
 
 BEGIN transaction_1
 ...
 SET SOMEVAR=SOMETHING
 ...
 COMMIT
 
 (transaction_1 fails and rolls back)

Probably you are misunderstanding my point.
I don't think that SOMEVAR *should* be put back
on failure.
Users must know what value would be set to the
SOMEVAR after an error. In some cases it must
be put back, in some cases the current value
is OK, in other cases new SOMEVAR is needed.
Basically it's a user's responsibility to set
the value.

regards, 
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Odd(?) RI-trigger behavior

2002-04-18 Thread Stephan Szabo

On Thu, 18 Apr 2002, Tom Lane wrote:

 This particular test involves a table with a foreign-key reference to
 itself, ie, it's both PK and FK.  What apparently is happening is that
 the two RI triggers are now being fired in a different order than
 before.  While either of them would have detected an error, we now get
 the other error first.

 Does this bother anyone?  It seems to me that the old code essentially
 had no guarantee at all about the order in which the triggers would
 fire, and so it was pure luck that the regression test never showed
 the other message.

That's probably a bad thing, even though I doubt it'd ever come up the
other way in practice, barring changes to other regression tests.  Forcing
an order probably helps with this case anyway.

 With the modified code, because we load the triggers by scanning
 an index on (tgrelid, tgname), it is actually true that triggers are
 fired in name order.  We've had requests in the past to provide a
 well-defined firing order for triggers --- should we document this
 behavior and support it, or should we pretend it ain't there?

Didn't someone (Peter?) say that the mandated firing order was based on
creation order/time in SQL99?


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] syslog support by default

2002-04-18 Thread Peter Eisentraut

Tom Lane writes:

 Tatsuo Ishii [EMAIL PROTECTED] writes:
  Can we enable syslog support by default for 7.3?

 AFAIR, we agreed to flip the default some time ago, we just didn't
 want to do it late in the 7.2 cycle.  Go for it.

I think if no one complains about the lack of syslog on his machine we
should just remove the option in 7.3+1.

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Odd(?) RI-trigger behavior

2002-04-18 Thread Tom Lane

Stephan Szabo [EMAIL PROTECTED] writes:
 Didn't someone (Peter?) say that the mandated firing order was based on
 creation order/time in SQL99?

It does say that:

 The order of execution of a set of triggers is ascending by value
 of their timestamp of creation in their descriptors, such that the
 oldest trigger executes first. If one or more triggers have the
 same timestamp value, then their relative order of execution is
 implementation-defined.

However, this strikes me as fairly brain-dead; it's unnecessarily hard
to control the order of trigger execution.  You have to drop and
recreate triggers if you want to insert a new one at a desired position.
Worse, if you create several triggers in the same transaction, they'll
have the same timestamp --- leaving you right back in the
implementation-defined case.  But if you want to make your rearrangement
atomically with respect to other transactions, you have little choice
but to drop/recreate in one xact.  Looks like a catch-22 to me.

ISTM we had discussed this before and concluded that name order was
a more reasonable definition.  Nobody had got round to doing anything
about it though.  (Indeed my current hack was not intended to provide
a predictable firing order, it just fell out that way...)

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Odd(?) RI-trigger behavior

2002-04-18 Thread Stephan Szabo

On Thu, 18 Apr 2002, Tom Lane wrote:

> Stephan Szabo [EMAIL PROTECTED] writes:
>> Didn't someone (Peter?) say that the mandated firing order was based on
>> creation order/time in SQL99?
>
> It does say that:
>
>      The order of execution of a set of triggers is ascending by value
>      of their timestamp of creation in their descriptors, such that the
>      oldest trigger executes first. If one or more triggers have the
>      same timestamp value, then their relative order of execution is
>      implementation-defined.
>
> However, this strikes me as fairly brain-dead; it's unnecessarily hard
> to control the order of trigger execution.  You have to drop and
> recreate triggers if you want to insert a new one at a desired position.
> Worse, if you create several triggers in the same transaction, they'll
> have the same timestamp --- leaving you right back in the
> implementation-defined case.  But if you want to make your rearrangement
> atomically with respect to other transactions, you have little choice
> but to drop/recreate in one xact.  Looks like a catch-22 to me.
>
> ISTM we had discussed this before and concluded that name order was
> a more reasonable definition.  Nobody had got round to doing anything
> about it though.  (Indeed my current hack was not intended to provide
> a predictable firing order, it just fell out that way...)

I agree that name is better; I wasn't sure if we'd reached a consensus on
it or if the conversation drifted away due to the fact that no one was
looking at it at the time.






Re: [HACKERS] Schema (namespace) privilege details

2002-04-18 Thread Joe Conway

Tom Lane wrote:
>> This looks good to me. I only wonder if public should default to world
>> read and no create?
>
> That would be non-backwards-compatible.  Since the main reason for
> having the public namespace at all is backwards compatibility of the
> out-of-the-box behavior, I think we have to let it default to world
> write.  DBAs can revoke world write, or even remove the public namespace
> altogether, if they want to run a tighter ship.

Ah yes, I forgot about that aspect.

> Also, if a database owner is not superuser, I do not think he should be
> able to create objects that are marked as belonging to other users.
> At least not in general.  Do we need to make an exception for schemas?

Well, I like to think of the database owner as the superuser within that
one database. This is similar to (at least) SQL Server and Oracle. But I
don't think either of those systems has quite this issue because the
notion of schema and login user are so tightly coupled, something you
were specifically trying to avoid ;-)

>> Agreed. How would it work though if say I wanted to create a view in the
>> public schema, which pointed at a table in a schema which has had SELECT
>> revoked? Same question for a public function/private table. It would be
>> ideal if you could do this.
>
> AFAICS this would not be checked at creation time, but when someone
> tries to use the view; just the same as now.

Great!

Thanks,

Joe
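[Editorial note: the "tighter ship" option mentioned above can be sketched
with the schema-privilege syntax being added at the time — illustrative only,
exact syntax assumed:]

```sql
-- Illustrative sketch: keep the public schema usable for reading but
-- revoke world create, as a DBA running a tighter ship might do.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA public TO PUBLIC;
```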







Re: [HACKERS] Odd(?) RI-trigger behavior

2002-04-18 Thread Alvaro Herrera

On Thu, 18 Apr 2002 20:43:54 -0700 (PDT),
Stephan Szabo [EMAIL PROTECTED] wrote:

> I agree that name is better; I wasn't sure if we'd reached a consensus on
> it or if the conversation drifted away due to the fact that no one was
> looking at it at the time.

http://archives.postgresql.org/pgsql-general/2001-09/msg00234.php

Nobody opposed the idea of name ordering in that thread.

But note that this is on TODO:

* Allow user to control trigger firing order

That probably means that the user should have some reasonable way to
change the name, besides fiddling with system catalogs.
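[Editorial note: hypothetical at the time this was written — later PostgreSQL
releases did add such a command — a user-level rename might look like:]

```sql
-- Hypothetical sketch: renaming a trigger to move it within a name-based
-- firing order, without touching the system catalogs directly.
ALTER TRIGGER check_balance ON accounts RENAME TO a10_check_balance;
```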

-- 
Alvaro Herrera (alvherre[a]atentus.com)
"You always have to feed the gods, even when the earth is dry" (Orual)

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] syslog support by default

2002-04-18 Thread Hannu Krosing

On Fri, 2002-04-19 at 05:28, Peter Eisentraut wrote:

> Tom Lane writes:
>
>> Tatsuo Ishii [EMAIL PROTECTED] writes:
>>> Can we enable syslog support by default for 7.3?
>>
>> AFAIR, we agreed to flip the default some time ago, we just didn't
>> want to do it late in the 7.2 cycle.  Go for it.
>
> I think if no one complains about the lack of syslog on his machine we
> should just remove the option in 7.3+1.

My experience has been that logging to syslog makes postgres much
slower.

Can anyone confirm or refute this?

--
Hannu



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html