Re: [HACKERS] OOP real life example (was Re: Why is MySQL more chosen

2002-08-12 Thread Don Baccus

Curt Sampson wrote:
 On Sun, 11 Aug 2002, Don Baccus wrote:
 
 
Oh? Ok, please translate the following into equivalent SQL that
does not use a view:
...

Granulize GRANT to the table column level.
 
 
 Can you please show me the code for that? After all, I showed you
 all of my code when doing equivalents.

Obviously it would require extending SQL, but since you in part argue 
that SQL sucks in regard to the relational model this shouldn't matter, 
right?   You're arguing the superiority of the relational model as 
described by DD over other models, non-relational SQL (which all agree 
has weaknesses) and most likely God.

So don't flip-flop between "oh, SQL sucks, think about the relational 
model" and "SQL doesn't support that."  Pick one or the other.  Argue 
SQL or the DD/relational model.

It's not hard to propose *extensions* to SQL that would allow granting 
of perms on a column rather than table level.

 Or are you saying that it's syntactic sugar only in some imaginary
 version of postgres that does not exist?

Sort of like the idealized relational model that isn't implemented by 
SQL nor PG, but yet you reference again and again when it suits you to 
ignore the shortcomings of SQL92?

Sure.

Sorry, for a moment I thought you were interested in a meaningful 
discussion rather than a dick-waving contest but I was wrong.

I give up.  Your right hand waves your dick more frequently and with 
much more vigor than mine.  This has nothing to do with anything I 
care about, though.


-- 
Don Baccus
Portland, OR
http://donb.photo.net, http://birdnotes.net, http://openacs.org


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more chosen

2002-08-12 Thread Don Baccus

Tom Lane wrote:
 Curt Sampson [EMAIL PROTECTED] writes:
 
On Sun, 11 Aug 2002, Don Baccus wrote:

Granulize GRANT to the table column level.

 
Can you please show me the code for that?
 
 
 It's required by the SQL spec.  PG hasn't got it, but the spec is
 perfectly clear about how it should be done.
 
 I think this is really a bit irrelevant to the thread topic, though.

As far as the last goes, not really.  Curtis argues from false premises, 
and this is one.  If it were the only false premise he argues from, 
sure, I'd agree it's irrelevant but sadly Curtis argues from false 
premises by default.

-- 
Don Baccus
Portland, OR
http://donb.photo.net, http://birdnotes.net, http://openacs.org





Re: [HACKERS] Interesting message about printf()'s in PostgreSQL

2002-08-12 Thread Christopher Kings-Lynne

 I've just finished a quick grep through the backend sources for
 sprintf, and identified the following files as containing possible
 problems:
 src/backend/port/dynloader/freebsd.c

This one is perhaps dodgy.  You have this:

static char error_message[BUFSIZ];

Then you have this:

sprintf(error_message, "dlopen (%s) not supported", file);

Where "file" isn't restricted in length, I think...

So does that mean if you go:

CREATE FUNCTION blah AS '/home/chriskl/[9 characters here].so' LANGUAGE
'C';

Sort of thing you could crash it?

Chris





Re: [HACKERS] Interesting message about printf()'s in PostgreSQL

2002-08-12 Thread Tom Lane

Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 src/backend/port/dynloader/freebsd.c
 This one is perhaps dodgy.  You have this:
 static char error_message[BUFSIZ];
 Then you have this:
 sprintf(error_message, "dlopen (%s) not supported", file);
 Where "file" isn't restricted in length, I think...

Yeah.  In practice I'm not sure there's a problem --- the callers may
all limit the filename string to MAXPGPATH, which is well below BUFSIZ.
But changing the sprintf to snprintf is a cheap, localized way to be
sure.

regards, tom lane




Re: [HACKERS] OOP real life example (was Re: Why is MySQL more chosen

2002-08-12 Thread Mario Weilguni

Am Montag, 12. August 2002 08:02 schrieb Don Baccus:
 Curt Sampson wrote:
  On Sun, 11 Aug 2002, Don Baccus wrote:
 I've been wanting to point out that SQL views are really, when
 scrutinized, just syntactic sugar ...
 
  Oh? Ok, please translate the following into equivalent SQL that
  does not use a view:
 
  CREATE TABLE t1 (key serial, value1 text, value2 text);
  CREATE VIEW v1 AS SELECT key, value1 FROM t1;
  GRANT SELECT ON v1 TO sorin;

 Granulize GRANT to the table column level.   Then GRANT SELECT perms
 for the user on every column from the two tables that happen to be
 included in the view.

 Yes, it's awkward.   So are the VIEW-based replacements for PG's type
 extensibility features.

But this is not a replacement for a view, is it? With a view I can do this:
create view v1 as select name, salary from workers where type <> 'MANAGEMENT';

with column permissions I must give access to all workers' salaries,
including management's, but not with a view.
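Mario's point can be made concrete. The sketch below assumes a hypothetical workers table; the column-level GRANT syntax shown is the SQL-spec form discussed elsewhere in this thread and is not implemented in PG at this point:

```sql
-- Hypothetical table for the example:
CREATE TABLE workers (name text, salary numeric, type text);

-- The view hides the management *rows* as well as the type column:
CREATE VIEW v1 AS
    SELECT name, salary FROM workers WHERE type <> 'MANAGEMENT';
GRANT SELECT ON v1 TO sorin;

-- A column-level grant restricts columns only; every row, management
-- included, would remain visible (SQL-spec syntax, not in PG):
-- GRANT SELECT (name, salary) ON workers TO sorin;
```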

best regards,
  mario weilguni




Re: [HACKERS] OOP real life example (was Re: Why is MySQL more chosen

2002-08-12 Thread Curt Sampson

On Sun, 11 Aug 2002, Don Baccus wrote:

 Obviously it would require extending SQL, but since you in part argue
 that SQL sucks in regard to the relational model this shouldn't matter,
 right?

Well, if we're going to go so far as to get rid of SQL, we can go all
the way with the DD thing, and VIEWs will no longer be syntactic sugar
because views and tables will be the same thing. (I'll leave how you
specify physical storage as an exercise for the reader. :-))

But anyway, I have no particularly huge objection to syntactic sugar
alone. I do have objections to it when it's not saving much typing. (It
is in this case, but that could be fixed with better automatic support
of view updates.)

But my real objection is when it makes things more confusing, rather
than less, which I think is definitely happening here. I've never
seen a rigorous explanation of our model of table inheritance,
nor any model that was more obviously correct than another. And
the parallel drawn with inheritance in OO languages is a false
parallel that adds to the confusion. (For example, the distinction
between types and instances of types is critical in OO theory. What are
the TI equivalents of this?)

All this is borne out by the regular questions one sees about
inheritance in the mailing lists. I'll admit a good part of it is
due to the broken implementation of inheritance, but all of the
problems I've ever seen are easily solved with very simple relational
solutions. Maybe the inheritance thing is causing people to turn
off the relational parts of their brain or something.

 I give up.  Your right hand waves your dick more frequently and with
 much more vigor than mine.

First you ask for more meaningful discussion. Then you make comments
like this. Hello?

If you really don't intend to stop completely with the insulting comments,
let me know and I can killfile you and we'll be done with this.

cjs
-- 
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light.  --XTC





Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Hannu Krosing

On Mon, 2002-08-12 at 11:38, Mario Weilguni wrote:
 Am Montag, 12. August 2002 08:02 schrieb Don Baccus:
  Curt Sampson wrote:
   On Sun, 11 Aug 2002, Don Baccus wrote:
  I've been wanting to point out that SQL views are really, when
  scrutinized, just syntactic sugar ...
  
   Oh? Ok, please translate the following into equivalent SQL that
   does not use a view:
  
   CREATE TABLE t1 (key serial, value1 text, value2 text);
   CREATE VIEW v1 AS SELECT key, value1 FROM t1;
   GRANT SELECT ON v1 TO sorin;
 
  Granulize GRANT to the table column level.   Then GRANT SELECT perms
  for the user on every column from the two tables that happen to be
  included in the view.
 
  Yes, it's awkward.   So are the VIEW-based replacements for PG's type
  extensibility features.
 
 But this is not a replacement for a view, is it? With a view I can do this:
 create view v1 as select name, salary from workers where type <> 'MANAGEMENT';
 
 with column permissions I must give access to all workers' salaries,
including management's, but not with a view.

I guess that a bare-bones replacement of CREATE VIEW with CREATE TABLE and
CREATE RULE ... ON SELECT DO INSTEAD ... would have exactly the same
semantics as CREATE VIEW, including the ability to GRANT.

so the no-view-syntactic-sugar equivalent would be

CREATE TABLE v1 AS SELECT key, value1 FROM t1 WHERE false;
CREATE RULE v1sel AS
ON SELECT TO v1
DO INSTEAD
SELECT t1.key,
   t1.value1
  FROM t1
 WHERE (t1.type <> 'MANAGEMENT'::text);
GRANT SELECT ON v1 TO sorin;

Actually it seems that GRANT is also syntactic sugar for rules and the
above could be replaced with 

CREATE RULE v1sel AS
ON SELECT TO v1
DO INSTEAD
SELECT t1.key,
   t1.value1
  FROM t1
 WHERE (t1.type <> 'MANAGEMENT'::text)
   AND CURRENT_USER IN ( SELECT username
   FROM grantees
  WHERE tablename = 'v1'
AND command = 'select' );

INSERT INTO grantees(tablename, command, username)
  VALUES('v1', 'select', 'sorin');


Hannu





Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Mike Mascari

Christopher Kings-Lynne wrote:
 
  Hey yep, good point.
 
  Is this the only way that we know of non postgresql-superusers to be
  able to take out the server other than by extremely non-optimal,
  resource wasting queries?
 
  If we release a 7.2.2 because of this, can we be pretty sure we have a
  no known vulnerabilities release, or are there other small holes which
  should be fixed too?
 
 What about that 'select cash_out(2)' crashes because of the opaque entry
 in the TODO?  That really needs to be fixed.
 
 I was talking to a CS lecturer about switching to postgres from oracle when
 7.3 comes out and all he said was "how easily is it hacked?".  He says their
 systems are the most constantly bombarded in universities.  What could I
 say?  That any unprivileged user can just go 'select cash_out(2)' to DOS the
 backend?

If he's using Oracle already, he ought to check out:

http://www.cert.org/advisories/CA-2002-08.html

I'd still think it would be a good policy to make a security release.
However, without user resource limits in PostgreSQL, anyone can make a
machine useless with a query like:

SELECT * 
FROM pg_class a, pg_class b, pg_class c, pg_class d, pg_class e, ... ;

Mike Mascari
[EMAIL PROTECTED]




Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Hannu Krosing

On Mon, 2002-08-12 at 11:52, Curt Sampson wrote:
 On Sun, 11 Aug 2002, Don Baccus wrote:
 
  Obviously it would require extending SQL, but since you in part argue
  that SQL sucks in regard to the relational model this shouldn't matter,
  right?
 
 Well, if we're going to go so far as to get rid of SQL, we can go all
 the way with the DD thing, and VIEWs will no longer be syntactic sugar
 because views and tables will be the same thing. (I'll leave how you
 specify physical storage as an exercise for the reader. :-))
 
 But anyway, I have no particularly huge objection to syntactic sugar
 alone. I do have objections to it when it's not saving much typing. (It
 is in this case, but that could be fixed with better automatic support
 of view updates.)
 
 But my real objection is when it makes things more confusing, rather
 than less, which I think is definitely happening here.

What makes things more confusing is poor understanding of a feature, not
the feature itself. 

 I've never
 seen a rigorous explanation of our model of table inheritance,
 nor any model that was more obviously correct than another. And
 the parallel drawn with inheritance in OO languages is a false
 parallel that adds to the confusion.

Are you saying that inheritance in SQL is something fundamentally
different than inheritance in OO languages ?

 (For example, the distinction
 between types and instances of types is critical in OO theory. What are
 the TI equivalents of this?)

If by TI you mean type instance then the equivalent of an instance is
a relation (i.e. one row in an (inherited) table).

 All this is borne out by the regular questions one sees about
 inheritance in the mailing lists. I'll admit a good part of it is
 due to the broken implementation of inheritance, but all of the
 problems I've ever seen are easily solved with very simple relational
 solutions.

All _simple_ inheritance problems are easily solved by simple relational
solutions. The general problems of much more typing and debugging, fewer
clues for the optimiser, etc. are not solved by _simple_ relational
solutions.

 Maybe the inheritance thing is causing people to turn off the relational
 parts of their brain or something.
 
Or maybe people are diversifying, using inheritance for is-a
relationships and relational model for has-a relationships.

---
Hannu





Re: [HACKERS] OOP real life example (was Re: Why is MySQL more chosen

2002-08-12 Thread Curt Sampson

On 12 Aug 2002, Hannu Krosing wrote:

 Are you saying that inheritance in SQL is something fundamentally
 different than inheritance in OO languages ?

Yes.

  (For example, the distinction
  between types and instances of types is critical in OO theory. What are
  the TI equivalents of this?)

 If by TI you mean type instance

Sorry, I shouldn't have abbreviated this. By TI I meant table
inheritance.

 then the equivalent of an instance is
 a relation (i.e. one row in an (inherited) table).

As I understand it, one row in a table, inherited or not, is a
tuple, not a relation. The definitions I'm familiar with are Date's:
a relation is a header, describing the types of attributes within
the tuple, and a set of tuples conforming to that header, and a
relvar is a variable that holds such a relation. (His definitions
seem to be the ones in common use--Korth/Silberschatz agree with
him, though they don't use the relvar concept AFAIK.)

So is an instance a relation (a set of tuples) or a tuple?

If the former, consider the following argument.

In an object oriented program I can have a class C, and a subclass C'
that inherits from C. Now, in any situation that calls for an instance
of C, I can instead use an instance of C'. This is polymorphism.

Now, if an instance is equivalent to a tuple, and a relation inherits from
another relation, I'd guess that a relation is equivalent to a class.
But given relation R' inheriting from relation R, does that mean that I
can use a tuple from R' anywhere I could use a tuple from R? No, obviously
not, as the two tuples have a different number of attributes, to start with.
So this analogy is now breaking down.
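For concreteness, the breakdown can be sketched with PG's inheritance syntax (table and column names here are made up for illustration):

```sql
CREATE TABLE R  (a int);
CREATE TABLE R2 (b int) INHERITS (R);  -- an R2 tuple has attributes (a, b)

INSERT INTO R2 VALUES (1, 2);

-- A scan of R sees only the (a) projection of the R2 tuple; the extra
-- attribute b is simply invisible, so an R2 tuple cannot be used *as
-- itself* wherever an R tuple is expected:
SELECT * FROM R;
```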

I suppose I could try to work out here if you really mean that
(using the strict Date sense of the terms here) the relvars are
classes, and the relations that they hold are instances. But that
seems to get a bit sticky too. I think it's better if I wait at
this point for you to provide some further clarification. Would
you mind doing so? Specifically, what is the equivalent of a class,
and what is the equivalent of an instance? What are the consequences
of this, if you know them?

 All _simple_ inheritance problems are easily solved by simple relational
 solutions. The general problems of much more typing and debugging, fewer
 clues for the optimiser, etc. are not solved by _simple_ relational
 solutions.

Can you please give me two or three examples of problems that are
not solved by simple relational solutions, and how table inheritance
solves them?

 Or maybe people are diversifying, using inheritance for is-a
 relationships and relational model for has-a relationships.

Well, it seems to me that the relational model better supports the is-a
relationship. With the relational model, I can specify a column in a
table that specifies what that particular entity is, and that can be set
to one and only one value. With the table inheritance model, how are we
ensuring that, if tables R' and R'' both inherit from R, when a tuple
is in R' relating to another tuple in R (or is that the same tuple),
there's not also such a relation between a tuple in R'' and R?
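The relational encoding of is-a described above can be sketched with a discriminator column; all names here are hypothetical. A composite foreign key is one way to guarantee that an entity is tied to at most one subtype:

```sql
CREATE TABLE entity (
    id   integer PRIMARY KEY,
    kind text NOT NULL CHECK (kind IN ('r1', 'r2')),  -- exactly one kind
    UNIQUE (id, kind)
);

-- Each subtype table pins kind to a single value, so a given entity can
-- only ever be referenced by the subtype table matching its discriminator:
CREATE TABLE r1 (
    id   integer,
    kind text CHECK (kind = 'r1'),
    FOREIGN KEY (id, kind) REFERENCES entity (id, kind)
);

CREATE TABLE r2 (
    id   integer,
    kind text CHECK (kind = 'r2'),
    FOREIGN KEY (id, kind) REFERENCES entity (id, kind)
);
```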

cjs
-- 
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light.  --XTC





Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Florian Weimer

Tom Lane [EMAIL PROTECTED] writes:

 Justin Clift [EMAIL PROTECTED] writes:
 Am I understanding this right:
  - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain
 date values which would be accepted by standard front end parsing? 

 AFAIK it's a buffer overrun issue, so anything that looks like a
 reasonable date would *not* cause the problem.

Yes, but if you just check that the date given by the user matches the
regular expression '[0-9]+-[0-9]+-[0-9]+', it's still possible to
crash the backend.

-- 
Florian Weimer[EMAIL PROTECTED]
University of Stuttgart   http://CERT.Uni-Stuttgart.DE/people/fw/
RUS-CERT  fax +49-711-685-5898




Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Florian Weimer

Mike Mascari [EMAIL PROTECTED] writes:

 I'd still think it would be a good policy to make a security release.
 However, without user resource limits in PostgreSQL, anyone can make a
 machine useless with a query like:

 SELECT * 
 FROM pg_class a, pg_class b, pg_class c, pg_class d, pg_class e, ... ;

But this requires being able to send arbitrary SQL commands; just
feeding a specially crafted date string usually does not.

-- 
Florian Weimer[EMAIL PROTECTED]
University of Stuttgart   http://CERT.Uni-Stuttgart.DE/people/fw/
RUS-CERT  fax +49-711-685-5898




[HACKERS] Strange behaviour

2002-08-12 Thread Michael Meskes

I just got the following example:

DROP TABLE foo;
CREATE TABLE foo (login varchar(100));
INSERT INTO foo values ('abc');

DROP FUNCTION footest1(varchar(100));
CREATE FUNCTION footest1(varchar(100)) RETURNS varchar(100) AS '
DECLARE
login varchar(100);
BEGIN
SELECT INTO login login FROM foo LIMIT 1;
RETURN login;
END;
' LANGUAGE 'plpgsql';

DROP FUNCTION footest2(varchar(100));
CREATE FUNCTION footest2(varchar(100)) RETURNS varchar(100) AS '
DECLARE
fieldname varchar(100);
BEGIN
SELECT INTO fieldname login FROM foo LIMIT 1;
RETURN fieldname;
END;
' LANGUAGE 'plpgsql';

SELECT footest1('foobar');
SELECT footest2('foobar');

The first select returns NULL while the second correctly returns 'abc'.
I just wonder why it is that way. The only difference seems to be the
name of the variable which in footest1 equals the attribute name. 

Now I can guess what happens but I wonder if this is the desired
behaviour.

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!




Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Greg Copeland

Well, if it's a buffer overrun, there is certainly potential for risks
well beyond simply crashing the backend.  It's certainly possible
that a simple bug in one cgi script or web site could allow someone to
execute code on the database host because of this bug.  Even assuming
they are running the backend as postgres or some other seemingly
harmless user, it's still possible that any and all databases hosted by
and accessible to this user could be utterly destroyed or corrupted.

Buffer overruns should be treated with the utmost urgency and
respect.  IMO, any known buffer overrun is worthy of an emergency fix
and corresponding advisory.

Greg Copeland


On Sun, 2002-08-11 at 12:09, Tom Lane wrote:
 Justin Clift [EMAIL PROTECTED] writes:
  Am I understanding this right:
   - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain
  date values which would be accepted by standard front end parsing? 
 
 AFAIK it's a buffer overrun issue, so anything that looks like a
 reasonable date would *not* cause the problem.
 
   regards, tom lane
 






Re: [HACKERS] Strange behaviour

2002-08-12 Thread Tom Lane

Michael Meskes [EMAIL PROTECTED] writes:
 CREATE FUNCTION footest1(varchar(100)) RETURNS varchar(100) AS '
 DECLARE
 login varchar(100);
 BEGIN
 SELECT INTO login login FROM foo LIMIT 1;
 RETURN login;
 END;
 ' LANGUAGE 'plpgsql';

 The first select returns NULL while the second correctly returns 'abc'.

The NULL is perfectly correct: that's the initial value of the plpgsql
variable.  The above is essentially the same as saying
login := login;
It is not incorrect.

 Now I can guess what happens but I wonder if this is the desired
 behaviour.

Certainly, unless you'd like to disable all use of plpgsql variables in
SQL queries.  plpgsql has no way of guessing that login in the above
query wasn't intended to reference its variable.  Either choose a
different variable name, or qualify the query-variable reference
(eg, foo.login).
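A sketch of the qualification fix, applied to the footest1 function from upthread (foo.login names the column unambiguously, so the variable no longer shadows it):

```sql
CREATE FUNCTION footest1(varchar(100)) RETURNS varchar(100) AS '
DECLARE
    login varchar(100);
BEGIN
    -- foo.login is the column; a bare login would be the variable:
    SELECT INTO login foo.login FROM foo LIMIT 1;
    RETURN login;
END;
' LANGUAGE 'plpgsql';
```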

regards, tom lane




Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Florian Weimer

Greg Copeland [EMAIL PROTECTED] writes:

 Well, if it's a buffer overrun, there is certainly potential for risks
 well beyond simply crashing the backend.

It's a buffer overrun, but the data has to pass through the date/time
parser in the backend, so it's not entirely obvious how you can
exploit this to run arbitrary code.

-- 
Florian Weimer[EMAIL PROTECTED]
University of Stuttgart   http://CERT.Uni-Stuttgart.DE/people/fw/
RUS-CERT  fax +49-711-685-5898




Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Florian Weimer

Gavin Sherry [EMAIL PROTECTED] writes:

 Yes, but if you just check that the date given by the user matches the
 regular expression '[0-9]+-[0-9]+-[0-9]+', it's still possible to
 crash the backend.

 Anyone who is using that regular expression in an attempt to validate a
 user supplied date is already in trouble.

I don't understand why extremely strict syntax checks are necessary.
The database has to parse it again anyway, and if you can't rely on
the database to get this simple parsing right, will it store your
data?  Such reasoning doesn't seem too far-fetched to me.

I would probably impose a length limit in the frontend that uses the
database, but the PostgreSQL documentation does not state that this is
a requirement (because the parsers in the backend are so fragile).

-- 
Florian Weimer[EMAIL PROTECTED]
University of Stuttgart   http://CERT.Uni-Stuttgart.DE/people/fw/
RUS-CERT  fax +49-711-685-5898




Re: [HACKERS] Strange behaviour

2002-08-12 Thread Michael Meskes

On Mon, Aug 12, 2002 at 09:40:12AM -0400, Tom Lane wrote:
 The NULL is perfectly correct: that's the initial value of the plpgsql
 variable.  The above is essentially the same as saying
   login := login;
 It is not incorrect.

That's exactly what I thought the reason was.  I just wonder if there's a
way to make this kind of thing more obvious, for instance by using :login
for the variable as with embedded SQL.  IMO the actual behaviour, while
of course correct, is a little bit confusing.

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!




Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 00:29, Hannu Krosing wrote:
 On Mon, 2002-08-12 at 11:52, Curt Sampson wrote:
  On Sun, 11 Aug 2002, Don Baccus wrote:
[snip]
  But anyway, I have no particularly huge objection to syntactic sugar
  alone. I do have objections to it when it's not saving much typing. (It
  is in this case, but that could be fixed with better automatic support
  of view updates.)
  
  But my real objection is when it makes things more confusing, rather
  than less, which I think is definitely happening here.
 
 What makes things more confusing is poor understanding of a feature, not
 the feature itself. 

Agreed.  Just because a feature may not be well understood by the masses
doesn't mean the feature is worthless.

 
  I've never
  seen a rigorous explanation of our model of table inheritance,
  nor any model that was more obviously correct than another. And
  the parallel drawn with inheritance in OO languages is a false
  parallel that adds to the confusion.
 
 Are you saying that inheritance in SQL is something fundamentally
 different than inheritance in OO languages ?
 

Hmmm...there might be.  Curt raises an interesting point below.  Do keep
in mind that I believe he's specifically referring to table inheritance
and not the broad scope of language-wide inheritance.


  (For example, the distinction
  between types and instances of types is critical in OO theory. What are
  the TI equivalents of this?)
 
 If by TI you mean type instance then the equivalent of an instance is
 a relation (i.e. one row in an (inherited) table).
 

Look a little deeper here.  In other OO implementations, I can define a
class (say class a) which has no instances (an abstract base class).
Furthermore, I can take this class and use it as a building block
(assuming multiple inheritance is allowed in this world) by combining it
with other classes (z inherits from a, b, c; whereby classes a, b, c
still do not have an actual instance).  I can create an instance of my
newly inherited class (z).

Seems to me that there is some distinction between types (classes) and
type instances (instances of a specific class) as it pertains to their
usability.

How exactly would you create an abstract base class for a table type?

I'm still trying to put my brain around exactly what the implications
are here, but I *think* this is what curt was trying to stress.  Curt,
feel free to correct me as needed.


  All this is borne out by the regular questions one sees about
  inheritance in the mailing lists. I'll admit a good part of it is
  due to the broken implementation of inheritance, but all of the
  problems I've ever seen are easily solved with very simple relational
  solutions.
 
  All _simple_ inheritance problems are easily solved by simple relational
  solutions. The general problems of much more typing and debugging, fewer
  clues for the optimiser, etc. are not solved by _simple_ relational
  solutions.

I agree with Hannu here.  Curt's comment seems like lip service.  Worth
noting too, even if it were not for the issues pointed out by Hannu
here, Curt's statement certainly does nothing to invalidate the concept
of table inheritance.  After all, most camps are happy when there are
multiple means to an end.  Just because it can be done via method-x
doesn't invalidate method-y.  The inverse is probably true too.  ;)

 
  Maybe the inheritance thing is causing people to turn off the relational
  parts of their brain or something.
  
  Or maybe people are diversifying, using inheritance for is-a
 relationships and relational model for has-a relationships.

That's an interesting point.

Greg






Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Gavin Sherry

On Mon, 12 Aug 2002, Florian Weimer wrote:

 Gavin Sherry [EMAIL PROTECTED] writes:
 
  Yes, but if you just check that the date given by the user matches the
  regular expression '[0-9]+-[0-9]+-[0-9]+', it's still possible to
  crash the backend.
 
  Anyone who is using that regular expression in an attempt to validate a
  user supplied date is already in trouble.
 
 I don't understand why extremely strict syntax checks are necessary.
 The database has to parse it again anyway, and if you can't rely on
 the database to get this simple parsing right, will it store your
 data?  Such reasoning doesn't seem too far-fetched to me.

Why attempt to validate the user data at all if you're going to do a bad
job of it? Moreover, 'rely on the database to get this ... right': what
kind of security principle is that? For someone interested in security,
you've just broken the most important principle.

As to your other point -- that this bug in the date/time code actually
*reflects* the quality and reliability of the database itself -- you've
really gone too far. The best software has bugs. The reason that no one is
jumping up and down making releases and giving you a medal is that (1) it
is still questionable as to whether or not this bug exists in 7.2.1 (2) it
does not appear to be exploitable (3) it could only be used to cause a
denial of service by an authorised user (4) it is common practise for
database application developers to validate user input and if they don't
they have bigger problems than a potential DoS on their hands.

Gavin





Re: [HACKERS] Strange behaviour

2002-08-12 Thread Rod Taylor

Just go with tradition and label all of your variables with a
v_ prefix (v_varname).  Never use columns or tablenames prefixed with v_.

It's a quick way for determining what is what.  Forcing use of a prefix 
in some places and not others would not be a nice thing -- especially as
the core takes on more and more abilities of plpgsql.
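Applied to the footest1 example from upthread, the convention looks like this (a sketch; with the v_ prefix the variable can no longer shadow the column):

```sql
CREATE FUNCTION footest1(varchar(100)) RETURNS varchar(100) AS '
DECLARE
    v_login varchar(100);  -- v_ marks it as a plpgsql variable
BEGIN
    SELECT INTO v_login login FROM foo LIMIT 1;
    RETURN v_login;
END;
' LANGUAGE 'plpgsql';
```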


On Mon, 2002-08-12 at 10:00, Michael Meskes wrote:
 On Mon, Aug 12, 2002 at 09:40:12AM -0400, Tom Lane wrote:
  The NULL is perfectly correct: that's the initial value of the plpgsql
  variable.  The above is essentially the same as saying
  login := login;
  It is not incorrect.
 
 That's exactly what I thought the reason was.  I just wonder if there's a
 way to make this kind of thing more obvious, for instance by using :login
 for the variable as with embedded SQL.  IMO the actual behaviour, while
 of course correct, is a little bit confusing.







Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Dave Page



 -Original Message-
 From: Gavin Sherry [mailto:[EMAIL PROTECTED]] 
 Sent: 12 August 2002 15:15
 To: Florian Weimer
 Cc: [EMAIL PROTECTED]
 Subject: Re: [HACKERS] [SECURITY] DoS attack on backend 
 possible (was: Re:
 
 
 On Mon, 12 Aug 2002, Florian Weimer wrote:
 
  Gavin Sherry [EMAIL PROTECTED] writes:
  
   Yes, but if you just check that the date given by the 
 user matches 
   the regular expression [0-9]+-[0-9]+-[0-9]+, it's 
 still possible 
   to crash the backend.
  
   Anyone who is using that regular expression in an attempt to 
   validate a user supplied date is already in trouble.
  
  I don't understand why extremely strict syntax checks are 
 necessary. 
  The database has to parse it again anyway, and if you can't rely on 
  the database to get this simple parsing right, will it store your 
  data?  Such reasoning doesn't seem too far-fetched to me
 
 Why attempt to validate the user data at all if you're going 
 to do a bad job of it? Moreover, 'rely on the database to get 
 this ... right': what kind of security principle is that? For 
 someone interested in security, you've just broken the most 
 important principle.

If I write code in a Microsoft product such as VB, it will happily accept
timestamps such as '2001-12-23 22.15.01', which is a perfectly valid date
in some parts of the world. PostgreSQL will barf on the .'s - is it
expected then that I write my own validation/parsing code to replace
Microsoft's in this and every other area that may need checking, just
because PostgreSQL doesn't understand a particular format? I would
rather let PostgreSQL alone know about its oddities and just throw me
an error I can deal with in such cases.

Regards, Dave.

PS (Gavin). Thanks for the CREATE OR REPLACE's you recently submitted!




Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread Don Baccus

Gavin Sherry wrote:

 As to your other point -- that this bug in the date/time code actually
 *reflects* the quality and reliability of the database itself -- you've
 really gone too far. The best software has bugs.

For example, in the current version of Oracle 9i, if a client (say 
SQL*Plus) running on a Linux box and talking to Oracle on a 
Solaris box executes the following:

create table foo(i integer primary key, bar blob);

... then later does ...

update foo set bar=empty_blob() where i = some key value

The Oracle server on Solaris crashes.  *the whole thing* BANG! 
Shot-to-the-head-dead.  Not the user's client - the server.

This means that any user with the right to update a single table with a 
blob can crash Oracle at will.

What does this say about Oracle's overall reliability?

As Gavin says all software has bugs.  Most of PG's bugs are far less 
spectacular than the Oracle bug I mention here.

Overall I rate PG and Oracle as being about equivalent in terms of bugs.

-- 
Don Baccus
Portland, OR
http://donb.photo.net, http://birdnotes.net, http://openacs.org





Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Andrew Sullivan

On Fri, Aug 09, 2002 at 06:54:44PM -0700, Thomas Lockhart wrote:

 I had thought to extend the capabilities to allow resource allocation
 for individual tables and indices, which has *long* been identified as a
 desired capability by folks who are managing large systems. 

Without wishing to pour fuel on any fires, or advocate any
implementation, I can say for sure that this is very much a feature
we'd love to see.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:

 I'm actually amazed that postgres isn't already using large file
 support.  Especially for tools like dump. 

Except it would only cause confusion if you ran such a program on a
system that didn't itself have largefile support.  Better to make the
admin turn all these things on on purpose, until everyone is running
64 bit systems everywhere.

A
-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 09:39, Andrew Sullivan wrote:
 On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
 
  I'm actually amazed that postgres isn't already using large file
  support.  Especially for tools like dump. 
 
 Except it would only cause confusion if you ran such a program on a
 system that didn't itself have largefile support.  Better to make the
 admin turn all these things on on purpose, until everyone is running
 64 bit systems everywhere.

If by turn...on, you mean recompile, that's a horrible idea IMO. 
Besides, you're expecting that an admin is going to know that he even
needs to recompile to obtain this feature, let alone that he'd be
interested in compiling his own installation.  Whereas, more than likely,
he'll know off hand (or can easily find out) if his FS/system supports
large files (64-bit file sizes).

Seems like systems which can natively support this feature should have
it enabled by default.  It's a different issue if an admin attempts to
create files larger than what his system and/or FS can support.

I guess what I'm trying to say here is, it's moving the problem from
being a postgres specific issue (not compiled in -- having to recompile
and install and not knowing if it's (dis)enabled) to a general body of
knowledge (does my system support such-n-such capabilities).

If a compile-time option is still much preferred by the core developers,
perhaps a log entry can be created which at least denotes the current
status of such a feature when a compile-time option is required.  Simply
having an entry of LOG:  LARGE FILE SUPPORT (DIS)ENABLED (64-bit file
sizes), or something along those lines.  Of course, having a
--enable-large-files option would be nice too.

This would seemingly make sense in other contexts too.  Imagine a
back-end compiled with large file support and someone else using fe
tools which do not support it.  How are they going to know if their
fe/be supports this feature unless we let them know?

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:

 If by turn...on, you mean recompile, that's a horrible idea IMO. 

Ah.  Well, that is what I meant.  Why is it horrible?  PostgreSQL
doesn't take very long to compile.  

 I guess what I'm trying to say here is, it's moving the problem from
 being a postgres specific issue (not compiled in -- having to recompile
 and install and not knowing if it's (dis)enabled) to a general body of
 knowledge (does my system support such-n-such capabilities).

The problem is not just a system-level one, but a filesystem-level
one.  Enabling 64 bits by default might be dangerous, because a DBA
might think oh, it supports largefiles by default and therefore not
notice that the filesystem itself is not mounted with largefile
support.  But I suspect that the developers would welcome autoconfig
patches if someone offered them.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Oliver Elphick

On Mon, 2002-08-12 at 15:00, Greg Copeland wrote:
...
 Look a little deeper here.  In other OO implementations, I can define a
 class (say class a) which has no instances (abstract base class). 
 Furthermore, I can take this class and use it for building blocks
 (assuming multiple inheritance is allowed in this world) by combining
 with other classes (z inherits from a, b, c; whereby classes a, b, c
 still do not have an actual instance).  I can create an instance of my
 newly inherited class (z).
 
 Seems to me that there is some distinction between types (classes)
 and type instances (instances of a specific class) as it pertains to its
 usability.
 
 How exactly would you create an abstract base class for table type?

CREATE TABLE abstract_base (
   cols ...,
   CONSTRAINT "No data allowed in table abstract_base!" CHECK (1 = 0)
)

This assumes that the constraint is not inherited or can be removed in
child tables.
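For comparison, the OO behaviour Greg describes upthread (an abstract base class that cannot itself be instantiated, combined with other classes via multiple inheritance) can be sketched in Python; the class names below are made up for illustration:

```python
from abc import ABC, abstractmethod

class A(ABC):
    """Abstract base class: cannot be instantiated directly."""
    @abstractmethod
    def describe(self) -> str: ...

class B:
    """Plain mixin providing extra behaviour."""
    def tag(self) -> str:
        return "B"

class C:
    """Another mixin."""
    def label(self) -> str:
        return "C"

# Z combines A, B and C via multiple inheritance; only Z is concrete,
# mirroring "z inherits from a, b, c" in the post above.
class Z(A, B, C):
    def describe(self) -> str:
        return "concrete instance of Z"
```

Attempting `A()` raises `TypeError`, which is roughly the behaviour the CHECK (1 = 0) constraint approximates at the table level.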

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 And he spake a parable unto them to this end, that men
  ought always to pray, and not to faint.   
 Luke 18:1 





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Lamar Owen

On Monday 12 August 2002 11:30 am, Andrew Sullivan wrote:
 The problem is not just a system-level one, but a filesystem-level
 one.  Enabling 64 bits by default might be dangerous, because a DBA
 might think oh, it supports largefiles by default and therefore not
 notice that the filesystem itself is not mounted with largefile
 support.  But I suspect that the developers would welcome autoconfig
 patches if someone offered them.

Interesting point.  Before I could deploy RPMs with largefile support by 
default, I would have to make sure it wouldn't silently break anything.  So 
keep discussing the issues involved, and I'll see what comes of it.  I don't 
have any direct experience with the largefile support, and am learning as I go 
with this.

I have to make the source RPMs buildable on distributions that 
might not have largefile support available, so on those distributions the 
support will have to be unavailable -- and the decision to build it or not to 
build it must be automatable.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:
  The problem is not just a system-level one, but a filesystem-level
  one.  Enabling 64 bits by default might be dangerous, because a DBA
  might think oh, it supports largefiles by default and therefore not
  notice that the filesystem itself is not mounted with largefile
  support.  

 keep discussing the issues involved, and I'll see what comes of it.  I don't 
 have any direct experience with the largefile support, and am learning as I go 
 with this.

I do have experience with both of these cases.  We're hosted in a
managed-hosting environment, and one day one of the sysadmins there
must've remounted a filesystem without largefile support.  Poof! I
started getting all sorts of strange pg_dump problems.  It wasn't
hard to track down, except that I was initially surprised by the
errors, since I'd just _enabled_ large file support.

This is an area that is not encountered terribly often, actually,
because postgres itself breaks its files at 1G.  Most people's dump
files either don't reach the 2G limit, or they use split (a
reasonable plan).

There are, in any case, _lots_ of problems with these large files. 
You not only need to make sure that pg_dump and friends can support
files bigger than 2G.  You need to make sure that you can move the
files around (your file transfer commands), that you can compress the
files (how is gzip compiled?  bzip2?), and even that your backup
software takes the large file.  In a few years, when all
installations are ready for this, it seems like it'd be a good idea
to turn this on by default.  Right now, I think the risks are at
least as great as those incurred by telling people they need to
recompile.  

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 10:30, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:
 
  If by turn...on, you mean recompile, that's a horrible idea IMO. 
 
 Ah.  Well, that is what I meant.  Why is it horrible?  PostgreSQL
 doesn't take very long to compile.  


Many reasons.  A DBA is not always the same thing as a developer (which
means it's doubtful he's even going to know about needed options to pass
-- if any).  Using a self-compiled installation may not instill the same
sense of reliability (I know that sounds odd) as using a distribution's
package.  A DBA may not be a SA, which means he should probably not be
compiling and installing software on a system.  Furthermore, he may not
even have access to do so.

It also means upgrading in the future may be problematic.  Someone compiled with
large file support.  He leaves.  New release comes out.  Someone else
upgrades and now finds things are broken.  Why?  If it were supported out
of the box, the issue would be avoided.

Lastly, and perhaps the most obvious, SA and DBA bodies of knowledge are
fairly distinct.  You should not expect a DBA to function as a SA. 
Furthermore, SA and developer bodies of knowledge are also fairly
distinct.  You shouldn't expect a SA to know what compiler options he
needs to use to compile software on his system.  Especially for
something as obscure as large file support.


 
  I guess what I'm trying to say here is, it's moving the problem from
  being a postgres specific issue (not compiled in -- having to recompile
  and install and not knowing if it's (dis)enabled) to a general body of
  knowledge (does my system support such-n-such capabilities).
 
 The problem is not just a system-level one, but a filesystem-level
 one.  Enabling 64 bits by default might be dangerous, because a DBA
 might think oh, it supports largefiles by default and therefore not
 notice that the filesystem itself is not mounted with largefile
 support.  But I suspect that the developers would welcome autoconfig
 patches if someone offered them.


The distinction you make there is minor.  A SA should know and
understand the capabilities of the systems he maintains (this is true
even if the SA and DBA are one).  This includes filesystem
capabilities.  A DBA should only care about the system requirements and
trust that the SA can deliver those capabilities.  If a SA says, my
filesystems can support very large files, and installs postgres, the DBA
should expect that matching support in the database is already available. 
Woe is his surprise when he finds out that his postgres installation
can't handle it?!

As for the concern for danger.  Hmm...my understanding is that the
result is pretty much the same thing as exceeding max file size.  That
is, if you attempt to read/write beyond what the filesystem can provide,
you're still going to get an error.  Is this really more dangerous than
simply reading/writing to a file which exceeds max system capabilities? 
Either way, this issue exists and having large file support, seemingly,
does not affect it one way or another.

I guess I'm trying to say, the risk of seeing filesystem corruption or
even database corruption should not be affected by the use of large file
support.  Please correct me if I'm wrong.


Greg






Re: [HACKERS] psql arguments

2002-08-12 Thread Bruce Momjian


Uh, from the command line, you are running _only_ a (f)ile, while from
inside psql, you are (i)ncluding it in your session;  the existing
values seem OK to me.

---

Rod Taylor wrote:
 In 7.2.1, the psql argument to import a file from a connected database
 is \i.  From the command line it's -f.
 
 Any chance we could make these consistent with each other?
 
 
 
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 11:04, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:
  keep discussing the issues involved, and I'll see what comes of it.  I don't 
  have an direct experience with the largefile support, and am learning as I go 
  with this.
 
 I do have experience with both of these cases.  We're hosted in a
 managed-hosting environment, and one day one of the sysadmins there
 must've remounted a filesystem without largefile support.  Poof! I
 started getting all sorts of strange pg_dump problems.  It wasn't
 hard to track down, except that I was initially surprised by the
 errors, since I'd just _enabled_ large file support.

And what if he had just remounted it read-only?  Mistakes will happen. 
That doesn't come across as being a strong argument to me.  Besides,
it's doubtful that a filesystem is going to be remounted while it's in
use.  Which means these issues are going to be secondary to actual
production use of the database.  That is, either the system is working
correctly or it's not.  If it's not, I guess it's not ready for production
use.

Furthermore, since fs mounting, if done properly, is almost always
a matter of automation, this particular class of error should be few and
very far between.

Wouldn't you rather answer people with, remount your file system,
rather than, recompile with such-n-such option enabled, reinstall.  Oh
ya, since you're re-installing a modified version of your database,
probably a good paranoid option would be to back up and dump, just to be
safe.  Personally, I'd rather say, remount.


 There are, in any case, _lots_ of problems with these large files. 
 You not only need to make sure that pg_dump and friends can support
 files bigger than 2G.  You need to make sure that you can move the
 files around (your file transfer commands), that you can compress the
 files (how is gzip compiled?  bzip2?), and even that you r backup
 software takes the large file.  In a few years, when all
 installations are ready for this, it seems like it'd be a good idea
 to turn this on by default.  Right now, I think the risks are at
 least as great as those incurred by telling people they need to
 recompile.  
 

All of those are SA issues.  Shouldn't we leave that domain of issues
for a SA to deal with rather than try to force a single view down
someone's throat?  Which, btw, results in creating more work for those
that desire this feature.

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Oliver Elphick

On Mon, 2002-08-12 at 16:44, Lamar Owen wrote:
 Interesting point.  Before I could deploy RPMs with largefile support by 
 default, I would have to make sure it wouldn't silently break anything.  So 
 keep discussing the issues involved, and I'll see what comes of it.  I don't 
 have an direct experience with the largefile support, and am learning as I go 
 with this.
 
 Given that I have to make the source RPM's buildable on distributions that 
 might not have the largefile support available, so on those distributions the 
 support will have to be unavailable -- and the decision to build it or not to 
 build it must be automatable.

I raised the question on the Debian developers' list.  As far as I can
see, the general feeling is that it won't break anything but will only
work with kernel 2.4.  It may break with 2.0, but 2.0 is no longer
provided with Debian stable, so I don't mind that.

The thread starts at
http://lists.debian.org/debian-devel/2002/debian-devel-200208/msg00597.html

I intend to enable it in the next version of the Debian packages (which
will go into the unstable archive if this works for me) by adding
-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 to CFLAGS for the entire build.

One person said:
However compiling with largefile support will change the size
of off_t from 32 bits to 64 bits - if postgres uses off_t or
anything else related to file offsets in a binary struct in one
of the database files you will break stuff pretty heavily. I
would not compile postgres with largefile support until it
is officially supported by the postgres developers.

but I cannot see that off_t is used in such a way.
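One way to settle this kind of doubt empirically is to probe whether a given build and filesystem combination actually handles offsets past 2 GB before trusting it with dumps. A rough sketch in Python, which already uses 64-bit offsets on most platforms (the sparse-file trick is an illustration, not an official test):

```python
import os
import tempfile

def supports_large_files(directory: str = ".") -> bool:
    """Probe whether this filesystem accepts a file offset past 2 GB.

    Writes one byte just past the old 2**31 - 1 limit, creating a sparse
    file that consumes almost no actual disk space on most filesystems.
    """
    limit = 2**31  # first offset a 32-bit signed off_t cannot represent
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.lseek(fd, limit, os.SEEK_SET)
        os.write(fd, b"x")  # raises OSError (e.g. EFBIG) without LFS
        return os.fstat(fd).st_size == limit + 1
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(path)
```

The analogous check in a C program built without -D_FILE_OFFSET_BITS=64 would fail at the lseek or write call.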
-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 And he spake a parable unto them to this end, that men
  ought always to pray, and not to faint.   
 Luke 18:1 





Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 10:39, Oliver Elphick wrote:
 On Mon, 2002-08-12 at 15:00, Greg Copeland wrote:
  How exactly would you create an abstract base class for table type?
 
 CREATE TABLE abstract_base (
cols ...,
     CONSTRAINT "No data allowed in table abstract_base!" CHECK (1 = 0)
 )
 
 This assumes that the constraint is not inherited or can be removed in
 child tables.
 

Why would I assume that constraints would not be inherited?  Seems as a
general rule of thumb, you'd want the constraints to be inherited.  Am I
missing something?

Also, if I remove the constraint on the child table, doesn't that really
mean I'm removing the constraint on the parent table?  That would seem
to violate the whole reason of having constraints.  If a constraint is
placed in an ABC and we find that we later need to remove it for EVERY
derived class, doesn't that imply it shouldn't have been in there to begin
with?  After all, in this case, we're saying that each and every derived
class needs to overload or drop a specific constraint.  That strikes me
as being rather obtuse.

That, in itself, I find rather interesting.  Are there any papers or
books which offer an explanation of how constraints should be handled for
table inheritance?

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:

 Many reasons.  A DBA is not always the same thing as a developer (which
 means it's doubtful he's even going to know about needed options to pass
 -- if any). 

This (and the upgrade argument) are simply documentation issues. 
If you check the FAQ_Solaris, there's already a line in there which
tells you how to do it.

 Lastly, and perhaps the most obvious, SA and DBA bodies of knowledge are
 fairly distinct.  You should not expect a DBA to function as a SA. 
 Furthermore, SA and developer bodies of knowledge are also fairly
 distinct.  You shouldn't expect a SA to know what compiler options he
 needs to use to compile software on his system.  Especially for
 something as obscure as large file support.

It seems to me that a DBA who is running a system which produces 2
Gig dump files, and who can't compile Postgres, is in for a rocky
ride.  Such a person needs at least a support contract, and in such a
case the supporting organisation would be able to provide the needed
binary.

Anyway, as I said, this really seems like the sort of thing that
mostly gets done when someone sends in a patch.  So if it scratches
your itch . . .

 The distinction you make there is minor.  A SA, should know and
 understand the capabilities of the systems he maintains (this is true
 even if the SA and DBA are one).  This includes filesystem
 capabilities.  A DBA, should only care about the system requirements and
 trust that the SA can deliver those capabilities.  If a SA says, my
 filesystems can support very large files, installs postgres, the DBA
 should expect that match support in the database is already available. 
 Woe is his surprise when he finds out that his postgres installation
 can't handle it?!

And it seems to me the distinction you're making is an invidious one. 
I am sick to death of so-called experts who want to blather on about
this or that tuning parameter of [insert big piece of software here]
without knowing the slightest thing about the basic operating
environment.  A DBA has responsibility to know a fair amount about
the platform in production.  A DBA who doesn't is one day going to
find out what deep water is.

 result is pretty much the same thing as exceeding max file size.  That
 is, if you attempt to read/write beyond what the filesystem can provide,
 you're still going to get an error.  Is this really more dangerous than
 simply reading/writing to a file which exceeds max system capabilities? 

Only if you were relying on it for backups, and suddenly your backups
don't work.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:
 
 And, what if he just remounted it read only.  Mistakes will happen. 
 That doesn't come across as being a strong argument to me.  Besides,
 it's doubtful that a filesystem is going to be remounted while it's in
 use.  Which means, these issues are going to be secondary to actual
 product use of the database.  That is, either the system is working
 correctly or it's not.  If it's not, guess it's not ready for production
 use.

If it's already in production use, but was taken out briefly for
maintenance, and the supposed expert SAs do something dimwitted, then
it's broken, sure.  The point I was trying to make is that the
symptoms one sees from breakage can be from many different places,
and so a glib enable largefile support remark hides an actual,
real-world complexity.  Several steps can be broken, any one fof
which causes problems.  Better to force the relevant admins to do the
work to set things up for an exotic feature, if it is desired. 
There's nothing about Postgres itself that requires large file
support, so this is really a discussion about pg_dump.  Using split
is more portable, in my view, and therefore preferable.  You can also
use the native-compressed binary dump format, if you like one big
file.  Both of those already work out of the box.

  There are, in any case, _lots_ of problems with these large files. 

 All of those are SA issues.  

So is compiling the software correctly, if the distinction has any
meaning at all.  When some mis-installed bit of software breaks, the
DBAs won't go running to the SAs.  They'll ask here.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 11:40, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:
 
  Many reasons.  A DBA is not always the same thing as a developer (which
  means it's doubtful he's even going to know about needed options to pass
  -- if any). 
 
 This (and the upgrade argument) are simply documentation issues. 
 If you check the FAQ_Solaris, there's already a line in there which
 tells you how to do it.

And?  What's your point?  That somehow makes it disappear?  Even if it
had been documented, it doesn't mean the documentation made it to the
right hands or was obviously located.  Just look at postgres'
documentation in general.  How often are people told to read the
code?  Give me a break.  Your argument is a very weak straw.

 
  Lastly, and perhaps the most obvious, SA and DBA bodies of knowledge are
  fairly distinct.  You should not expect a DBA to function as a SA. 
  Furthermore, SA and developer bodies of knowledge are also fairly
  distinct.  You shouldn't expect a SA to know what compiler options he
  needs to use to compile software on his system.  Especially for
  something as obscure as large file support.
 
 It seems to me that a DBA who is running a system which produces 2
 Gig dump files, and who can't compile Postgres, is in for a rocky
 ride.  Such a person needs at least a support contract, and in such a
 case the supporting organisation would be able to provide the needed
 binary.

LOL.  Managing data and compiling applications have nothing to do with
each other.  Try, try again.

You also don't seem to understand that this isn't as simple as
recompile.  It's not!!!  Are we clear on this?!  It's as simple as
needing to KNOW that you have to recompile and then KNOWING you have to
use a series of obtuse options when compiling.

In other words, you seemingly know everything you don't know, which is
more than the rest of us.

 Anyway, as I said, this really seems like the sort of thing that
 mostly gets done when someone sends in a patch.  So if it scratches
 your itch . . .
 
  The distinction you make there is minor.  A SA, should know and
  understand the capabilities of the systems he maintains (this is true
  even if the SA and DBA are one).  This includes filesystem
  capabilities.  A DBA, should only care about the system requirements and
  trust that the SA can deliver those capabilities.  If a SA says, my
  filesystems can support very large files, installs postgres, the DBA
  should expect that match support in the database is already available. 
  Woe is his surprise when he finds out that his postgres installation
  can't handle it?!
 
 And it seems to me the distinction you're making is an invidious one. 
 I am sick to death of so-called experts who want to blather on about
 this or that tuning parameter of [insert big piece of software here]
 without knowing the slightest thing about the basic operating
 environment.  A DBA has responsibility to know a fair amount about

In other words, you can't have a subject matter expert unless he is an
expert on every subject?  Ya, right!

 the platform in production.  A DBA who doesn't is one day going to
 find out what deep water is.

Agreed...as it relates to the database.  DBAs shouldn't have to know
details about the filesystem...that's the job of a SA.  You seem to be
under the impression that SA = DBA or somehow a DBA is an SA with extra
knowledge.  While this is sometimes true, I can assure you this is not
always the case.

This is exactly why large companies often have DBAs in one department
and SA in another.  Their knowledge domains tend to uniquely differ.

 
  result is pretty much the same thing as exceeding max file size.  That
  is, if you attempt to read/write beyond what the filesystem can provide,
  you're still going to get an error.  Is this really more dangerous than
  simply reading/writing to a file which exceeds max system capabilities? 
 
 Only if you were relying on it for backups, and suddenly your backups
 don't work.
 

Correction: suddenly your backups never worked.  Seems like that would
have been caught in testing, prior to going into production.  Surely
you're not suggesting that people place a system into production without
having tested the full life cycle?  Backup testing is part of your life
cycle, right?

Greg




signature.asc
Description: This is a digitally signed message part


Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS\

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 11:48, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:
[snip]

   There are, in any case, _lots_ of problems with these large files. 
 
  All of those are SA issues.  
 
 So is compiling the software correctly, if the distinction has any
 meaning at all.  When some mis-installed bit of software breaks, the
 DBAs won't go running to the SAs.  They'll ask here.

In either case, they're going to ask.  You can give them a simple solution
or you can make them run around and pull their hair out.

You're also assuming that SA = developer.  I can assure you it does
not.  I've met many an SA whose development experience was make and
korn scripts.  Expecting him to know to use _GNU_SOURCE and
_FILE_OFFSET_BITS=64 is a pretty far reach.  Furthermore, you're even
expecting that he knows such a recompile fix exists.  Where do you think
he's going to turn?  The lists.  That's right.  Since he's going to
contact the list or review a FAQ item anyway, doesn't it make sense to
give him the easy way out (for both the initiator and the mailing list)?

IMO, powerful tools always seem to be capable enough to let you shoot
yourself in the foot.  Why pay special attention to this one feature,
which doesn't really address that to begin with?

Would you at least agree that --enable-large-files, rather than
CFLAGS=xxx, is a good idea, as might be banners and log entries
stating whether large file support has been compiled in?
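For what it's worth, here's a sketch of the kind of run-time check such a
banner could be driven by (the function name is invented, and this assumes
glibc's _FILE_OFFSET_BITS mechanism):

```c
/* Sketch of a run-time sanity check for large-file support.  On glibc,
 * defining _FILE_OFFSET_BITS=64 before any system header widens off_t
 * to 64 bits; without it, a 32-bit platform gets a 32-bit off_t. */
#define _FILE_OFFSET_BITS 64

#include <sys/types.h>

/* Returns nonzero if this build can represent file offsets past 2GB. */
int have_largefile_support(void)
{
    return sizeof(off_t) >= 8;
}
```

A startup banner or log line could then report the result instead of leaving
the DBA to guess what the SA compiled in.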

Greg




signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Oliver Elphick

On Mon, 2002-08-12 at 17:30, Greg Copeland wrote:
 On Mon, 2002-08-12 at 10:39, Oliver Elphick wrote:
  On Mon, 2002-08-12 at 15:00, Greg Copeland wrote:
   How exactly would you create an abstract base class for table type?
  
  CREATE TABLE abstract_base (
 cols ...,
 CONSTRAINT "No data allowed in table abstract_base!" CHECK (1 = 0)
  )
  
  This assumes that the constraint is not inherited or can be removed in
  child tables.
  
 
 Why would I assume that constraints would not be inherited?  Seems as a
 general rule of thumb, you'd want the constraints to be inherited.  Am I
 missing something?

You are right, but I was stuck trying to think of a constraint that
would restrict the abstract base but not its descendants.  Instead of
CHECK (1 = 0), I think we can use a function that checks whether the
current table is the abstract base and returns false if it is.  That
would be validly heritable.  (CHECK (tableoid != 12345678))
 
 Also, if I remove the constraint on the child table, doesn't that really
 mean I'm removing the constraint on the parent table?  That would seem
 to violate the whole reason of having constraints.  If a constraint is
 placed in an ABC and we find that we later need to remove it for EVERY
 derived class, doesn't that imply it shouldn't have been in there to begin
 with?  After all, in this case, we're saying that each and every derived
 class needs to overload or drop a specific constraint.  That strikes me
 as being rather obtuse.

Yes, it would be clumsy, and I think you are correct that constraints
should not be removable.

The inheritance model I am familiar with is that of Eiffel, where
constraints are additive down the hierarchy.  That is, an invariant on
the base class applies in its descendants along with any invariants
added by the descendant or intermediate classes.  That language has the
concept of a deferred class, which is the parallel of the abstract base
table we are discussing.  A deferred class cannot be directly
instantiated.  To do the same in the table hierarchy would require a
keyword to designate a table as an abstract table (CREATE ABSTRACT TABLE
xxx ...?).  In the absence of that, a constraint based on the table
identity will have to do.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 And he spake a parable unto them to this end, that men
  ought always to pray, and not to faint.   
 Luke 18:1 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [SECURITY] DoS attack on backend possible (was: Re:

2002-08-12 Thread ngpg

[EMAIL PROTECTED] (Florian Weimer) wrote in news:[EMAIL PROTECTED]:

 Gavin Sherry [EMAIL PROTECTED] writes:
 
 Yes, but if you just check that the date given by the user matches the
 regular expression [0-9]+-[0-9]+-[0-9]+, it's still possible to
 crash the backend.
 
 Anyone who is using that regular expression in an attempt to validate a
 user supplied date is already in trouble.
 
 I don't understand why extremely strict syntax checks are necessary.
 The database has to parse it again anyway, and if you can't rely on
 the database to get this simple parsing right, will it store your
 data?  Such reasoning doesn't seem too far-fetched to me

I believe this is often referred to as the layered-onion approach to
security; besides, what constitutes "extremely strict" syntax checking is
somewhat subjective.  What about checking the input for backslash, quote,
and double quote (\ ' ")?  If you are not taking care of those in input,
then crashing the backend is going to be the least of your worries.  I
think there needs to be some level of checking before the input is blindly
passed to the backend for parsing.  Typically the input for an individual
field wouldn't be more than ~255 characters, unless you are dealing with
TEXT or large objects.  I don't consider adding a length check to the
usual quoting checks to be extreme... but perhaps just as necessary?
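Something like this sketch is what I have in mind (names invented, not from
any patch; a real application would more likely escape these characters than
reject them outright):

```c
#include <string.h>

/* Illustrative pre-check only: cap the field length and reject
 * quoting/escape characters before the string is ever handed to the
 * backend.  One more layer of the onion, not a substitute for proper
 * escaping. */
int field_input_ok(const char *s, size_t maxlen)
{
    if (strlen(s) > maxlen)
        return 0;                       /* longer than a sane field */
    if (strpbrk(s, "\\'\"") != NULL)
        return 0;                       /* backslash, quote, double quote */
    return 1;
}
```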


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] libpqxx

2002-08-12 Thread Marc G. Fournier

On Sun, 11 Aug 2002, Tom Lane wrote:

 Peter Eisentraut [EMAIL PROTECTED] writes:
  The problem I see now is that libpqxx has a completely different build
  system and documentation system.

 Unless someone's going to do the work to integrate libpqxx into our
 build/documentation system, I have to agree with Peter: it should not
 be part of our core distribution.

For this, all I've been waiting for is for J to get the standalone to
build, and then I'm going to dive into that ...

 Since Marc's been moaning about bloat, I'm sure he'll vote for pulling
 both libpq++ and libpqxx from the core distro and making them separate
 packages.  This would be okay with me.

Okay, but if we are going to pull libpqxx, what about the other libs too?

 However: making libpq++ buildable standalone would take some work too.
 Any way you slice it, we need some work here... any volunteers?



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] libpqxx

2002-08-12 Thread Jeroen T. Vermeulen

On Mon, Aug 12, 2002 at 04:44:12PM -0300, Marc G. Fournier wrote:

 For this, all I've been waiting for is for J to get the standalone to
 build and then going to dive into that ...
 
I added Ray's changes a few days back, which may help.  My handicap is
that I appear to be on a newer version of libtoolize than you are, which
is where Marc's build appears to fail, so obviously it just builds on my 
system like it did all along.

So, any luck with the version currently in CVS?


Jeroen


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Peter Eisentraut

Oliver Elphick writes:

 One person said:
 However compiling with largefile support will change the size
 of off_t from 32 bits to 64 bits - if postgres uses off_t or
 anything else related to file offsets in a binary struct in one
 of the database files you will break stuff pretty heavily. I
 would not compile postgres with largefile support until it
 is officially supported by the postgres developers.

 but I cannot see that off_t is used in such a way.

This is not the only issue.  You really need to check all uses of off_t
(for example, printf("%ld", off_t) will crash) and all places where off_t
should have been used in the first place.  Furthermore, you might need to
replace ftell() and fseek() with ftello() and fseeko(), especially if you
want pg_dump to support large archives.

Still, most of the configuration work is already done in Autoconf (see
AC_FUNC_FSEEKO and AC_SYS_LARGEFILE), so the work might be significantly
less than the time spent debating the merits of large files on these
lists. ;-)
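To make the printf() point concrete, here's a sketch (the helper name is
invented) of the safe pattern: cast explicitly to long long and use %lld,
rather than handing a possibly-64-bit off_t to %ld:

```c
#define _FILE_OFFSET_BITS 64    /* 64-bit off_t on glibc */

#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>

/* Format an off_t by casting to long long and using %lld.  Passing a
 * 64-bit off_t straight to %ld is undefined behavior wherever long is
 * 32 bits, which is the crash being warned about above. */
long long offset_roundtrip(off_t pos)
{
    char buf[32];

    snprintf(buf, sizeof buf, "%lld", (long long) pos);
    return atoll(buf);          /* parse it back, proving no truncation */
}
```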

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] CLUSTER all tables at once?

2002-08-12 Thread Alvaro Herrera

Hello

In TODO there is an item that says "cluster all tables at once".  Might
I ask, how is the system supposed to know which index it has to
cluster each table on?

Maybe if some index had the indisclustered bit set one could select
that; but is it possible for a table to have more than one index with
it set?  Intuition (but no code inspection) says no.  And what happens
with those tables that have no such index?

-- 
Alvaro Herrera (alvherre[a]atentus.com)
Oh, great altar of passive entertainment, bestow upon me thy discordant images
at such speed as to render linear thought impossible (Calvin a la TV)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] Oracle releasing clustered file system code

2002-08-12 Thread Joe Conway

I thought this might be of interest to the list:

LINUXWORLD: ORACLE RELEASING CLUSTERED FILE SYSTEM CODE
http://www.idg.net/go.cgi?id=726336

The company on Wednesday plans to post online the source code for its 
new clustered file system designed for its Oracle9i Real Application 
Clusters (RAC), said Robert Shimp, vice president of database marketing 
at Oracle. The release will be an early test version of the software, 
with a production version due in October, Shimp said.

The company is making available the source code for the Oracle Cluster 
File System under the GNU General Public License...


Joe


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] CLUSTER all tables at once?

2002-08-12 Thread Rod Taylor

 Maybe if some index had the indisclustered bit set one could select
 that; but is it possible for some table to have more than one index with
 it?  Intuition (but no code observation) says no.  And what happens with
 those tables that do not have any such index?

The bool marker sounds useful.  Falling back on the primary key is
probably the most appropriate.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [GENERAL] workaround for lack of REPLACE() function

2002-08-12 Thread Joe Conway

Tatsuo Ishii wrote:
Now a new function similar to toast_raw_datum_size(), maybe 
toast_raw_datum_strlen() could be used to get the original string 
length, whether MB or not, without needing to retrieve and decompress 
the entire datum.

I understand we would either: have to steal another bit from the VARHDR 
which would reduce the effective size of a varlena from 1GB down to .5GB; 
or we would need to add a byte or two to the VARHDR which is extra 
per-datum overhead. I'm not sure we would want to do either. But I 
wanted to toss out the idea while it was fresh on my mind.
 
 
 Interesting idea. I also was thinking about adding some extra
 information to text data types such as character set, collation
 etc. for 7.4 or later.

I ran some tests to confirm the theory above regarding overhead;

create table strtest(f1 text);
do 100 times
insert into strtest values('1234567890');  -- 10 characters
loop
do 1000 times
select length(f1) from strtest;
loop

Results:

SQL_ASCII database, new code:
=
PG_RETURN_INT32(toast_raw_datum_size(PG_GETARG_DATUM(0)) - VARHDRSZ);
== 2 seconds

SQL_ASCII database, old code:
=
text *t = PG_GETARG_TEXT_P(0);
PG_RETURN_INT32(VARSIZE(t) - VARHDRSZ);
== 66 seconds

EUC_JP database, new & old code:

text *t = PG_GETARG_TEXT_P(0);
PG_RETURN_INT32(pg_mbstrlen_with_len(VARDATA(t),
                                     VARSIZE(t) - VARHDRSZ));
== 469 seconds

So it appears that, while detoasting is moderately expensive (adds 64 
seconds to the test), the call to pg_mbstrlen_with_len() is very 
expensive (adds 403 seconds to the test).
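For anyone wondering why the multibyte case is so much slower: a character
count has to walk every byte of the (detoasted) datum, while the byte length
comes straight from the header.  A toy illustration, not the backend's code
(the backend uses pg_mbstrlen_with_len(); here I use UTF-8 rules for
concreteness):

```c
#include <stddef.h>

/* Toy multibyte character counter: walks the whole string, counting
 * only bytes that start a character (UTF-8 continuation bytes look
 * like 0b10xxxxxx and are skipped).  O(n) in the byte length, which
 * is why length() on multibyte data can't be answered from the
 * header alone. */
size_t utf8_strlen(const char *s, size_t nbytes)
{
    size_t chars = 0;

    for (size_t i = 0; i < nbytes; i++)
        if (((unsigned char) s[i] & 0xC0) != 0x80)
            chars++;
    return chars;
}
```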

Joe


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] VACUUM's No one parent tuple was found, redux

2002-08-12 Thread Tom Lane

I've been studying the "No one parent tuple was found" problem some
more, and I've realized there are actually two distinct problems that
manifest at the same place.

Here are test procedures for duplicating the problems on-demand (these
work in either 7.2 or CVS tip):

CASE 1 (transient failure):

1. Start a transaction (begin;) in one psql window; leave it sitting open.

2. In another psql window, do:

drop table foo;
create table foo (f1 int);

begin;
-- insert at least a few hundred rows of junk data into foo.
-- in the regression database you can do
insert into foo select unique1 from tenk1;
abort;

begin;
insert into foo values(0);
update foo set f1 = f1 + 1;
end;

vacuum full foo;
ERROR:  No one parent tuple was found

Repeated vacuum-full attempts will fail as long as the background
transaction remains open; after you close that transaction, it works.


CASE 2 (non-transient failure):

Here, you don't even need a background transaction.  Do:

drop table foo;
create table foo (f1 int);

begin;
-- insert at least a few hundred rows of junk data into foo.
-- in the regression database you can do
insert into foo select unique1 from tenk1;
abort;

begin;
insert into foo values(0);
update foo set f1 = f1 + 1;
end;

-- this part is added compared to the transient case:
begin;
update foo set f1 = f1 + 1;
abort;
begin;
select * from foo for update;
insert into foo values(1);
end;

vacuum full foo;
ERROR:  No one parent tuple was found

This error is reproducible indefinitely.  However, you can clear it by
doing select * from foo for update outside any transaction block.
(There are other ways, but that's the easiest non-destructive way.)


The non-transient-failure test procedure is designed to produce a tuple
that has HEAP_XMAX_COMMITTED, HEAP_MARKED_FOR_UPDATE, and t_self
different from t_ctid.  This fools the "this tuple is in the chain of
tuples created in updates" test (vacuum.c line 1583 in 7.2 sources).
The second part of the test shouldn't trigger, but does.

The transient-failure test procedure sets up a situation where we have
an updated tuple produced by a transaction younger than the oldest open
transaction (ie, younger than OldestXmin) --- but the immediate parent
tuple of that tuple is dead and gone.  I did it by making that
immediate parent be created and deleted in the same transaction.  There
may be other ways to get the same effect, but this is certainly one way.

In the given test cases, VACUUM finds *no* tuples that are "recently
dead" per the HeapTupleSatisfiesVacuum test: they're all either
definitely alive or definitely dead and deletable.  So no vtlinks
records have been created, and you get the "No one parent tuple was
found" error.  It's simple to modify the test conditions so that there
are additional tuples that are considered recently dead, and then
you'll get "Parent tuple was not found" instead.  (That behavior is
transient, since if there are no other open transactions then no tuples
can be considered recently dead.)

Now, what to do about it?  I had previously proposed making sure that
t_ctid is set to t_self whenever we delete or mark4update a tuple.
I still think that's a good idea, but the VACUUM tests (there are two)
should also be changed to ignore tuples that are MARKED_FOR_UPDATE:
instead of

(!(tuple.t_data->t_infomask & HEAP_XMAX_INVALID) &&
 !(ItemPointerEquals(&(tuple.t_self),
                     &(tuple.t_data->t_ctid))))

the code should be

(!(tuple.t_data->t_infomask & (HEAP_XMAX_INVALID |
                               HEAP_MARKED_FOR_UPDATE)) &&
 !(ItemPointerEquals(&(tuple.t_self),
                     &(tuple.t_data->t_ctid))))

A quick look through the sources shows that these two VACUUM tests are
the only places where HEAP_XMAX_INVALID is checked without also checking
for MARKED_FOR_UPDATE.  So this bug is a simple oversight that was
created when the mark-for-update feature was added.
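Expressed as standalone C for clarity (the flag values here are illustrative;
the real definitions live in src/include/access/htup.h, and the ctid
comparison is reduced to a boolean), the two tests differ like this:

```c
/* Illustrative flag values; see htup.h for the real ones. */
#define HEAP_XMAX_INVALID       0x0800
#define HEAP_MARKED_FOR_UPDATE  0x1000

/* Old VACUUM test: any tuple with a valid xmax and t_ctid != t_self
 * looks like part of an update chain -- including tuples that were
 * merely SELECT ... FOR UPDATE'd, which is the bug. */
int old_in_update_chain(unsigned int infomask, int ctid_differs)
{
    return !(infomask & HEAP_XMAX_INVALID) && ctid_differs;
}

/* Proposed test: additionally ignore tuples whose xmax only marked
 * them for update. */
int new_in_update_chain(unsigned int infomask, int ctid_differs)
{
    return !(infomask & (HEAP_XMAX_INVALID | HEAP_MARKED_FOR_UPDATE))
        && ctid_differs;
}
```

A tuple that was only marked FOR UPDATE passes the old test but not the new
one, which is exactly the misclassification described above.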

It's less clear what to do about the transient error.  Clearly, VACUUM
should *not* be assuming that 

(tuple.t_data->t_infomask & HEAP_UPDATED &&
 !TransactionIdPrecedes(tuple.t_data->t_xmin, OldestXmin))

is sufficient to guarantee that there must be a surviving parent tuple
--- the parent may not have survived scan_heap, if it was created and
deleted in the same transaction.

The conservative approach would be to do what is done now in some other
cases: if we can't find a parent tuple, emit a NOTICE and stop the
repair_frag process, but don't raise a hard error.  This would apply
both when vtlinks is null and when we fail to find a match in it.

A more aggressive approach would be to assume that if we didn't find the
parent in vtlinks, it's dead and gone, and we can just bull ahead with
the chain move anyway.  While this would be correct and safe in the
test-case scenario, I'm a little nervous about whether any problem could
be created that way.

Comments?


Re: [HACKERS] CLUSTER all tables at once?

2002-08-12 Thread Tom Lane

Alvaro Herrera [EMAIL PROTECTED] writes:
 Maybe if some index had the indisclustered bit set one could select
 that; but is it possible for some table to have more than one index with
 it?  Intuition (but no code observation) says no.

At the moment that bit will never be set at all, unless you were to
reach in and set it with a manual UPDATE pg_index command.

It would probably be feasible for the CLUSTER code to update the system
catalogs to set the bit on the index used for the clustering (and clear
it from any others it might be set on).  Then indisclustered would have
the semantics of the index most recently used in CLUSTERing its table,
which seems pretty reasonable.  And it'd fit in nicely as the control
bit for an auto-CLUSTER command.

 And what happens with those tables that do not have any such index?

Nothing, would be my vote.  You'd just re-CLUSTER all tables that have
been clustered before, the same way they were last clustered.

(I'm not actually convinced that this behavior is worth the code space
it'd take to implement, btw.)

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] python patch

2002-08-12 Thread Greg Copeland

On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:
  Not a problem.  I would rather them be correct.
 
  Worth noting that the first patch is what attempts to fix the long -
  int overflow issue.  The second patch attempts to resolve attisdropped
  column use issues with the python scripts.  The third patch addresses
  issues generated by the implicit to explicit use of cascade.
 
  I assume your reservations are only with the second patch and not the
  first and third patches?
 
 Correct.  I'm pretty sure you don't need to exclude attisdropped from the
 primary key list because all it's doing is finding the column that a primary
 key is over and that should never be over a dropped column.  I can't
 remember what you said the second query did?


Hmmm.  Sounds okay but I'm just not sure that holds true (as I
previously stated, I'm ignorant on the topic).  Obviously I'll defer to
you on this.

Here's the queries and what they do:


From pg.py:
Used to locate primary keys -- or so the comment says.  It does create a
dictionary of keys and attribute values for each returned row so I
assume it really is attempting to do something of the like.

SELECT pg_class.relname, pg_attribute.attname 
FROM pg_class, pg_attribute, pg_index 
WHERE pg_class.oid = pg_attribute.attrelid AND 
pg_class.oid = pg_index.indrelid AND 
pg_index.indkey[0] = pg_attribute.attnum AND 
pg_index.indisprimary = 't' AND 
pg_attribute.attisdropped = 'f' ;

So, everyone is in agreement that any attribute which is indexed as a
primary key will never be able to have attisdropped = 't'?

According to the code:
SELECT pg_attribute.attname, pg_type.typname
FROM pg_class, pg_attribute, pg_type
WHERE pg_class.relname = '%s' AND
pg_attribute.attnum > 0 AND
pg_attribute.attrelid = pg_class.oid AND
pg_attribute.atttypid = pg_type.oid AND
pg_attribute.attisdropped = 'f' ;

is used to obtain all attributes (column names) and their types for a
given table ('%s').  It then attempts to build a column/type cache.  I'm
assuming that this really does need to be there.  Please correct
accordingly.


From syscat.py:
SELECT bc.relname AS class_name,
ic.relname AS index_name, a.attname
FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a
WHERE i.indrelid = bc.oid AND i.indexrelid = ic.oid
AND i.indkey[0] = a.attnum AND a.attrelid = bc.oid
AND i.indproc = '0'::oid AND a.attisdropped = 'f'
ORDER BY class_name, index_name, attname ;

According to the nearby documentation, it's supposed to be fetching a
list of all simple indices.  If that's the case, is it safe to assume
that any indexed column will never have attisdropped = 't'?  If so, we
can remove that check from the file as well.  Worth pointing out, this
is from syscat.py, which is sample source and not used as an actual
interface.  So, worst case, the check would appear to be redundant,
with no harm done.

This should conclude the patched items offered in the second patch.

What ya think?

Thanks,
Greg





signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] python patch

2002-08-12 Thread Rod Taylor

All of that said, the cost of the check is so small that it may save
someone's ass some day, when they have a corrupted catalog and the
assumptions below are no longer true.

On Mon, 2002-08-12 at 18:40, Greg Copeland wrote:
 On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:
   Not a problem.  I would rather them be correct.
  
   Worth noting that the first patch is what attempts to fix the long -
   int overflow issue.  The second patch attempts to resolve attisdropped
   column use issues with the python scripts.  The third patch addresses
   issues generated by the implicit to explicit use of cascade.
  
   I assume your reservations are only with the second patch and not the
   first and third patches?
  
  Correct.  I'm pretty sure you don't need to exclude attisdropped from the
  primary key list because all it's doing is finding the column that a primary
  key is over and that should never be over a dropped column.  I can't
  remember what you said the second query did?
 
 
 [snip]



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] anoncvs currently broken

2002-08-12 Thread Oliver Elphick

anoncvs is still broken:

cvs server: Updating src/interfaces/libpqxx/config
cvs server: Updating src/interfaces/libpqxx/debian
cvs server: failed to create lock directory for
`/projects/cvsroot/interfaces/libpqxx/debian'
(/projects/cvsroot/interfaces/libpqxx/debian/#cvs.lock): Permission
denied
cvs server: failed to obtain dir lock in repository
`/projects/cvsroot/interfaces/libpqxx/debian'
cvs [server aborted]: read lock failed - giving up

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Watch ye therefore, and pray always, that ye may be 
  accounted worthy to escape all these things that shall
  come to pass, and to stand before the Son of man.
   Luke 21:36 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Martijn van Oosterhout

On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:
 The problem is not just a system-level one, but a filesystem-level
 one.  Enabling 64 bits by default might be dangerous, because a DBA
 might think oh, it supports largefiles by default and therefore not
 notice that the filesystem itself is not mounted with largefile
 support.  But I suspect that the developers would welcome autoconfig
 patches if someone offered them.

Are there any filesystems in common use (not including Windows ones) that
don't support file sizes beyond 32 bits?

Linux (ext2) I know supports at least 2TB (2^32 x 512 bytes) by default,
probably much more.  What about the BSDs?  XFS?  etc.

-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 There are 10 kinds of people in the world, those that can do binary
 arithmetic and those that can't.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] anoncvs currently broken

2002-08-12 Thread Marc G. Fournier


should be fixed ... looks like just an ownership issue on a new directory
...


On 13 Aug 2002, Oliver Elphick wrote:

 anoncvs is still broken:

 cvs server: Updating src/interfaces/libpqxx/config
 cvs server: Updating src/interfaces/libpqxx/debian
 cvs server: failed to create lock directory for
 `/projects/cvsroot/interfaces/libpqxx/debian'
 (/projects/cvsroot/interfaces/libpqxx/debian/#cvs.lock): Permission
 denied
 cvs server: failed to obtain dir lock in repository
 `/projects/cvsroot/interfaces/libpqxx/debian'
 cvs [server aborted]: read lock failed - giving up

 --
 Oliver Elphick[EMAIL PROTECTED]
 Isle of Wight, UK
 http://www.lfix.co.uk/oliver
 GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
  
  Watch ye therefore, and pray always, that ye may be
   accounted worthy to escape all these things that shall
   come to pass, and to stand before the Son of man.
Luke 21:36


 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

 http://archives.postgresql.org



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] regression test failure

2002-08-12 Thread Tatsuo Ishii

[Cced to hackers list]

 I'm seeing a regression test failure with the latest CVS code, in the
 'conversion' test. I've attached the 'regression.diff' file -- the
 failure occurs consistently on my machine.
 
 I'm mailing you because I believe the test in question is for code you
 wrote.  Let me know if you need any more information.

Do you still have the failure after doing make install? If so, it's
news to me.

If not, it's a known problem I want to fix before the beta freeze.  The
basic problem here is that I'm using the following statement while doing
initdb:

CREATE OR REPLACE FUNCTION utf8_to_iso8859_1 (INTEGER, INTEGER,
OPAQUE, OPAQUE, INTEGER) RETURNS INTEGER AS
'$libdir/utf8_and_iso8859_1', 'utf8_to_iso8859_1' LANGUAGE 'c';

The $libdir variable is defined at compile time and points to
$prefix/lib.  Apparently it points to a different place while doing
regression tests.  One idea is replacing $libdir with the absolute path
to $prefix/lib.  However, I wonder whether this would break some
installations, for example RPM.

Any idea?
--
Tatsuo Ishii

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Open 7.3 items

2002-08-12 Thread Lamar Owen

On Saturday 10 August 2002 10:41 pm, Bruce Momjian wrote:
 Let me give a little history.  The secondary password file was created
 at a time when we didn't encrypt with random salt over the wire, and we
 had people who wanted to share their /etc/passwd file with PostgreSQL.
[snip]

 So, based on the voting, I think dbname.username is an agreed-upon
 feature addition for 7.3.  I will work on a final patch with
 documentation and post it to the patches list for more comment.

I can live with this, if the documentation is prominently referred to in the 
changelog.

As to the feature itself, I believe Bruce's proposed solution is the best, and 
believed that from the beginning -- I just wanted to deal with the 'fair 
warning' issue alone.

As to fair warning:  watch for the next RPM release.  Fair Warning is being 
given that upgrades within the RPM context will not be supported in any form 
for the final release of PostgreSQL 7.3. 

I had a 'd'oh' moment (and I don't watch the Simpsons) when I realized 
that I could quite easily prevent anyone from even attempting an RPM upgrade, 
unless they take matters into their own grubby little hands with special 
switches to the rpm command line. 

It will not be yanked this next set, but the following set will not be 
upgradable.  Sorry, but the packaged kludge isn't reliable enough for the 
state of PostgreSQL reliability, and I don't want the RPMset's shortcomings 
(due to the whole RPM mechanism forcing the issue) causing bad blood towards 
PostgreSQL in general. The Debian packages don't have many of the limitations 
and restrictions I have to deal with, and until a good upgrade utility is 
available I'm just going to have to do this.

I have been so swamped with Fortran work for work that I've not even looked at 
the python code Hannu so kindly sent me, nor have I played any more with 
pg_fsck.  Groundwave propagation modeling in Fortran has been more 
important...

Likewise, my focus as RPM maintainer is changing with this next release.  
Since the distributions, such as Red Hat, are doing a great job keeping up to 
date, I'm going to not bother much with building RPMs that are, frankly, 
redundant at this point.  Three years ago it wasn't this nice.  Trond has 
done a good job on the Red Hat bleeding edge front, Reinhard Max has done 
similarly for SuSE.  There are PLD, Conectiva, TurboLinux, Caldera, and 
Mandrake maintainers as well -- and they seem to be doing fine.

I'm going to now go to the lagging plane -- building newer PostgreSQL for 
older Red Hat (and maybe others, if I can get enough hard drives available).  
The source RPM will still be useful to the newer distribution's maintainers 
-- but the requests I see more of on the lists are for newer PostgreSQL on older 
Linux.  So I'm going to try to rise to that occasion, and take this 
opportunity to apologize for not seeing it sooner.

I welcome comments on this change of focus.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Curt Sampson

Ok, big bundled up reply here to various people.

 From: Greg Copeland [EMAIL PROTECTED]

  What makes things more confusing is poor understanding of a feature, not
  the feature itself.

 Agreed.  Just because a feature may not be well understood by the masses
 doesn't mean the feature is worthless.

Yeah, but if it's not understood by fairly smart people familiar
with both relational theory and OO programming? If the feature is
confusing because it appears to be something it's not, that's a
feature problem, not a problem with the people trying to understand
it. Maybe all that's necessary to fix it is a terminology change,
but even so...

 Hmmm...there might be.  Curt raises an interesting point below.  Do keep
 in mind that I believe he's specifically referring to table inheritance
 and not the broad scope of language wide inheritance.

Yes.

  All _simple_ inheritance problems are easily solved by simple relational
  solutions. The general problem of much more typing and debugging, less
  clues for optimiser etc. are not solved by _simple_ relational
  solutions.

 I agree with Hannu here.  Curt's comment seems like lip service.

Well, as I said: examples please. Quite frankly, between the lack
of a clear model of table inheritance (Hannu seems to have one,
but this needs to be written up in unambiguous form and put into
the postgres manual) and the bugs in the postgres implementation
of table inheritance, I've found the relational model much easier
to use for solving problems.

 From: Oliver Elphick [EMAIL PROTECTED]

 On Mon, 2002-08-12 at 15:00, Greg Copeland wrote:
  How exactly would you create an abstract base class for table type?

 CREATE TABLE abstract_base (
cols ...,
   CONSTRAINT "No data allowed in table abstract_base!" CHECK (1 = 0)
 )

 This assumes that the constraint is not inherited or can be removed in
 child tables.

Are we then assuming that tuples in the child tables do not appear
in the base table? That's more or less what I'd assumed when I
originally heard about table inheritance (after all, instantiating
a child object does not automatically instantiate a separate copy
of the parent object), but the SQL standard, postgres, and I believe other
systems make the exact opposite assumption.

If the child table tuples do appear in the parent, you've now got
a situation analogous to the current postgres situation where a
constraint on the parent table is an outright lie. (I'm thinking
of the UNIQUE constraint which guarantees that all values in a
column will be unique--and then they aren't.) I consider breaking
the relational model this badly a completely unacceptable cost no
matter what additional functionality you're wanting to add, and I
expect that most other people do, too.

 From: Greg Copeland [EMAIL PROTECTED]

 That, in itself, I find rather interesting.  Are there any papers or
 books which offer an explanation of how constraints should be handled for
 table inheritance?

Here again, I'd love to hear about some references, too. I see a
lot of people saying they like table inheritance; I don't see anyone
(except maybe Hannu) who seems to have a clear idea of how it should
work.

cjs
-- 
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light.  --XTC


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Don Baccus

Curt Sampson wrote:

 ... the bugs in the postgres implementation
 of table inheritance, I've found the relational model much easier
 to use for solving problems.

No one has argued that the shortcomings (not bugs, really, just things 
left out) make the current implementation of very limited utility.  As 
I mention, this is exactly why we chose not to use it at OpenACS.

On the other hand at least we took the time to understand how it 
actually does work before criticizing it.

It's a pity, as I pointed out the reduction in joins alone would really 
be great.


-- 
Don Baccus
Portland, OR
http://donb.photo.net, http://birdnotes.net, http://openacs.org


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 20:34, Curt Sampson wrote:
 Ok, big bundled up reply here to various people.
 
  From: Greg Copeland [EMAIL PROTECTED]
 
   What makes things more confusing is poor understanding of a feature, not
   the feature itself.
 
  Agreed.  Just because a feature may not be well understood by the masses
  doesn't mean the feature is worthless.
 
 Yeah, but if it's not understood by fairly smart people familiar
 with both relational theory and OO programming? If the feature is
 confusing because it appears to be something it's not, that's a
 feature problem, not a problem with the people trying to understand
 it. Maybe all that's necessary to fix it is a terminology change,
 but even so...

You're constantly confusing Postgres' implementation with a desired
implementation.  Below, I think, is the effort to figure out exactly
what a desired implementation really is.

If a feature is partially implemented, of course it's going to be
confusing to use.

Let's please stop beating this horse Curt.  At this point, I think the
horse is floating upside down in a pond somewhere...yep...and the
buzzards are coming.

Please.  Beating people with a stick isn't suddenly going to make
everyone share your view point.

   All _simple_ inheritance problems are easily solved by simple relational
   solutions. The general problem of much more typing and debugging, less
   clues for optimiser etc. are not solved by _simple_ relational
   solutions.
 
  I agree with Hannu here.  Curt's comment seems like lip service.
 
 Well, as I said: examples please. Quite frankly, between the lack
 of a clear model of table inheritance (Hannu seems to have one,
 but this needs to be written up in unambiguous form and put into
 the postgres manual) and the bugs in the postgres implementation
 of table inheritance, I've found the relational model much easier
 to use for solving problems.

If you're so keen on examples, please provide one that justifies such a
boastful statement.  Hannu has done a pretty fair job of beating ya back
every time.  Personally, in this case, I don't really need examples as
it's pretty obviously a braggart statement full of bias.  So on second
thought, perhaps we can let this one alone.

I do agree that it looks like Hannu is doing a fairly good job of
providing some constructive direction here.  Hannu, please keep up the
good work.  ;)

 
  From: Oliver Elphick [EMAIL PROTECTED]
 
  On Mon, 2002-08-12 at 15:00, Greg Copeland wrote:
   How exactly would you create an abstract base class for table type?
 
  CREATE TABLE abstract_base (
 cols ...,
  CONSTRAINT "No data allowed in table abstract_base!" CHECK (1 = 0)
  )
 
  This assumes that the constraint is not inherited or can be removed in
  child tables.
 
 Are we then assuming that tuples in the child tables do not appear
 in the base table? That's more or less what I'd assumed when I
 originally heard about table inheritance (after all, instantiating
 a child object does not automatically instantiate a separate copy
 of the parent object), but the SQL standard, postgres, and I believe other
 systems make the exact opposite assumption.

That's actually my exact assumption...that is, that tuples in the parent
did not exist in the child.  Is that not true?  Can you point me to any
references?

 
 If the child table tuples do appear in the parent, you've now got
 a situation analogous to the current postgres situation where a
 constraint on the parent table is an outright lie. (I'm thinking
 of the UNIQUE constraint which guarantees that all values in a
[snip]

I knew that there are *implementation* issues with postgres that cause
problems with constraints, etc...I didn't realize that was the reason.

 
  From: Greg Copeland [EMAIL PROTECTED]
 
  That, in itself, I find rather interesting.  Are there any papers or
  books which offer an explanation of how constraints should be handled for
  table inheritance?
 
 Here again, I'd love to hear about some references, too. I see a
 lot of people saying they like table inheritance; I don't see anyone
 (except maybe Hannu) who seems to have a clear idea of how it should
 work.

Well, you seem to be making references to "the SQL standard, postgres,
and I believe other systems."  I was counting on you or someone else
to point us to existing references.  I'm fairly sure we can manage to
wade through it to walk a sane and fruitful path...it would just be a
less bumpy road if we all spoke the same OO'ish dialect and shared a
common knowledge base that we can all agree on for starters.  So, you
got anything to share here???  ;)


Greg






Re: [HACKERS] Open 7.3 items

2002-08-12 Thread Lamar Owen

On Monday 12 August 2002 09:51 pm, Karl DeBisschop wrote:
 On Mon, 2002-08-12 at 21:28, Lamar Owen wrote:
  I'm going to now go to the lagging plane -- building newer PostgreSQL for
  older Red Hat (and maybe others, if I can get enough hard drives
  available). The source RPM will still be useful to the newer
  distribution's maintainers -- but the requests I see more of on the lists
  is newer PostgreSQL on older linux.  So I'm going to try to rise to that
  occasion, and take this opportunity to apologize for not seeing it
  sooner.

 But I for one will miss your lead on the bleeding edge of RPM
 development.

Oh, I've misstated, apparently.  I'll continue on the 'bleeding edge' as far 
as versions of PostgreSQL are concerned -- I'm just shifting focus to 
providing prebuilt binaries on older dists.  As I do some other bleeding edge 
work, I typically will make sure my source RPMs build on the latest and 
greatest -- they just won't be optimized for it.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 18:41, Martijn van Oosterhout wrote:
 On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:
  The problem is not just a system-level one, but a filesystem-level
  one.  Enabling 64 bits by default might be dangerous, because a DBA
  might think "oh, it supports largefiles by default" and therefore not
  notice that the filesystem itself is not mounted with largefile
  support.  But I suspect that the developers would welcome autoconfig
  patches if someone offered them.
 
 Are there any filesystems in common use (not including windows ones) that
 don't support 32-bit filesizes?
 
 Linux (ext2) I know supports by default at least to 2TB (2^32 x 512bytes),
 probably much more. What about the BSDs? XFS? etc
 

Ext2 & 3 should be okay.  XFS (very sure) and JFS (reasonably sure)
should also be okay...IIRC.  NFS and SMB are probably problematic, but I
can't see anyone really wanting to do this.  Maybe some of the
clustering file systems (GFS, etc) might have problems???  I'm not sure
where reiserfs falls.  I *think* it's not a problem but something
tingles in the back of my brain that there may be problems lurking...

Just for the heck of it, I did some searching.  Found these for
starters:
http://www.suse.de/~aj/linux_lfs.html
http://www.gelato.unsw.edu.au/~peterc/lfs.html

http://ftp.sas.com/standards/large.file/


So, in a nutshell, most modern (2.4.x+) x86 Linux systems should be
able to handle large files.

Enjoy,
Greg






Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Bruce Momjian


OK, seeing as no one voted, and only Tom and I objected originally, we
will keep the code as Thomas has applied it, namely that PGXLOG/-X is
recognized by initdb, postmaster, postgres, and pg_ctl.  There is no
symlink from the /data directory to the WAL location.


Thomas, you mentioned implementing table spaces.  Are you planning to
use environment variables there too?  You mentioned that Ingres's use of
environment variables wasn't well implemented.  How will you improve on
it?

---

Bruce Momjian wrote:
 Thomas Lockhart wrote:
   Thomas, would you remind me of the concusions because I thought everyone
   involved felt that it should be an initdb-only option, but I still see
   it in CVS.
  
  ?? Concussions as in brain bruises? ;)
 
 Uh, conclusions.  Sorry.  New keyboard desk in new house. :-)
 
  I'm not sure I understand the question. I assume that we are talking
  about the WAL log location feature I implemented recently. It is an
  initdb-only option, and defaults to the current behavior *exactly*.
 
 Yep.  What bothers me is the clutter to the other commands that allow
 XLOG location specification when you would never really want to specify
 it except as part of initdb.  I just see those extra flags as
 cruft/confusion.
 
 Look at pg_ctl:
 
   pg_ctl start   [-w] [-D DATADIR] [-s] [-X PGXLOG] [-l FILENAME] [-o OPTIONS]
 
 Which option doesn't make sense?  -X.  It is way beyond the
 functionality of the command.
 
 
  The new feature is to allow an argument to initdb to locate the WAL file
  to another location. That location can be specified on the command line,
  or through an environment variable. Neither form precludes use of the
  other, and either form can be considered best practice depending on
  your opinion of what that is.
  
  The postmaster also recognizes the command line option and environment
  variable. The only suggestion I got as an alternative involved soft
  links, which is not portable, which is not robust, and which is not used
  anywhere else in the system. If we moved toward relying on soft links
  for distributing resources we will be moving in the wrong direction for
  many reasons, some of which I've mentioned previously. GUC parameters
  were also mentioned as a possibility, and the infrastructure does not
  preclude that at any time.
 
 I don't think anyone agreed with your concerns about symlinks.  If you
 want to be careful, do the ln -s in initdb and exit on failure, and
 tell them not to use -X on that platform, though we use symlinks for
 postmaster/postgres identification, so I know the only OS that doesn't
 support symlinks is Netware, only because Netware folks just sent in a
 patch to add a -post flag to work around lack of symlinks.  (I have
 asked for clarification from them.)
 
 I actually requested a vote, and got several people who wanted my
 compromise (PGXLOG or initdb -X flag only), and I didn't see anyone who
 liked the addition of -X into non-initdb commands.  Should I have a
 specific vote?  OK, three options:
 
   1) -X, PGXLOG in initdb, postmaster, postgres, pg_ctl
   2) -X, PGXLOG in initdb only
   3) nothing
 
 I remember a number of people liking 2, but we can vote again.
 
  I don't recall that there were very many folks involved. There were
  several opinions, though most were from folks who were not thinking of
  implementing disk management features. Some opinions dealt with details,
  and some seemed to deal with the wisdom of allowing anything other than
  a "one partition" model of the database, which is nothing if not short
  sighted. Current default behavior is as first implemented, and the new
  feature allows locating the WAL logs in another area. For the current
  state of the art, that seems competitive with features found in other
  database products, and an essential step in teaching PostgreSQL to work
  with very large databases.
  
  I had thought to extend the capabilities to allow resource allocation
  for individual tables and indices, which has *long* been identified as a
  desired capability by folks who are managing large systems. It seemed
  reasonable to have done in time for 7.3. I'm rethinking that, not
  because it shouldn't happen, but because the process of discussing these
  issues has become so argumentative, divisive, impolite, and unpleasant.
  Which is a shame imho...
 
 I clearly want tablespaces, and it would be great for 7.3, and I don't
 think it is a huge job.  However, I think it will require symlinks to be
 usable, and you probably do not, so it may be an issue.
 
 As for the argumentativeness, we do have folks with some strong
 opinions, and I guess on the PGXLOG issue, I am one of them.  Maybe that
 is bad?  Are people expressing themselves badly?  If so, I would like to
 hear details either on list or privately so I can address them.
 
 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL 

Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Curt Sampson

On Mon, 12 Aug 2002, Don Baccus wrote:

 It's a pity, as I pointed out the reduction in joins alone would really
 be great.

So implement the same thing relationally, and get your reduction
in joins.  There are tricks, discussed on this very list in the
last few days, that would let you do what you need.

cjs
-- 
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light.  --XTC


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 OK, seeing as no one voted, and only Tom and I objected originally, we
 will keep the code as Thomas has applied it, namely that PGXLOG/-X is
 recognized by initdb, postmaster, postgres, and pg_ctl.

We will?  It looks to me like Thomas lost the vote 2-to-1.

Unless there are more votes, I'm going to *insist* that this code be
changed.  It's dangerous and offers no offsetting benefit.  XLOG
location should be settable at initdb, noplace later.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] CLUSTER and indisclustered

2002-08-12 Thread Bruce Momjian


I wanted to comment on this bitmapped index discussion because I am
hearing a lot about star joins, data warehousing, and bitmapped indexes
recently.

It seems we have several uses for bitmapped indexes:

Do index lookups in sequential heap order
Allow joining of bitmapped indexes to construct arbitrary indexes

There is a web page about star joins used a lot in data warehousing,
where you don't know what queries are going to be required and what
indexes to create:

http://www.dbdomain.com/a100397.htm

They show some sample queries, which is good.  Here is some
interesting text:

Star Transformation

If there are bitmap indexes on SALES_REP_ID, PRODUCT_ID, and
DEPARTMENT_ID in the SALES table, then Oracle can resolve the query
using merges of the bitmap indexes.

Because Oracle can efficiently merge multiple bitmap indexes, you can 
create a single bitmap index on each of the foreign-key columns in the
fact table rather than on every possible combination of columns. This
lets you support all possible combinations of dimensions without
creating an unreasonable number of indexes.

Added to TODO:

* Use bitmaps to fetch heap pages in sequential order [performance] 
* Use bitmaps to combine existing indexes [performance]

and I will add some of these emails to TODO.detail/performance.
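Both TODO items can be modeled in a few lines. The following is a toy sketch, not PostgreSQL code: each "index scan" yields a set of TIDs (page, offset); bitmaps from separate indexes are combined with set AND/OR, and the heap is then visited in physical page order, so each heap page is fetched at most once. The indexes and values below are invented for illustration.

```python
# Toy model of bitmapped index lookup: scan the indexes first, collect
# TIDs as a set (the "bitmap"), combine sets for multi-index queries,
# then visit heap tuples in physical order.

def index_scan(index, key):
    """Return the set of TIDs whose indexed value equals key."""
    return {tid for value, tid in index if value == key}

# Two single-column "indexes" over the same heap: (value, (page, offset))
index_a = [("foo", (7, 1)), ("foo", (2, 3)), ("bar", (7, 2)), ("foo", (9, 1))]
index_b = [("bar", (2, 3)), ("bar", (9, 1)), ("baz", (7, 1))]

# WHERE a = 'foo' AND b = 'bar': AND the two TID bitmaps ...
matches = index_scan(index_a, "foo") & index_scan(index_b, "bar")

# ... then fetch the matching tuples in physical (page, offset) order,
# touching each heap page once and in sequence.
fetch_order = sorted(matches)
print(fetch_order)  # [(2, 3), (9, 1)]
```

An OR of the same two scans would support `WHERE a = 'foo' OR b = 'bar'` with the identical machinery, which is the point of Tom's note below about combining several indexscans in one query.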

---

Tom Lane wrote:
 Curt Sampson [EMAIL PROTECTED] writes:
  But after doing some benchmarking of various sorts of random reads
  and writes, it occurred to me that there might be optimizations
  that could help a lot with this sort of thing. What if, when we've
  got an index block with a bunch of entries, instead of doing the
  reads in the order of the entries, we do them in the order of the
  blocks the entries point to?
 
 I thought to myself didn't I just post something about that?
 and then realized it was on a different mailing list.  Here ya go
 (and no, this is not the first time around on this list either...)
 
 
 I am currently thinking that bitmap indexes per se are not all that
 interesting.  What does interest me is bitmapped index lookup, which
 came back into mind after hearing Ann Harrison describe how FireBird/
 InterBase does it.
 
 The idea is that you don't scan the index and base table concurrently
 as we presently do it.  Instead, you scan the index and make a list
 of the TIDs of the table tuples you need to visit.  This list can
 be conveniently represented as a sparse bitmap.  After you've finished
 looking at the index, you visit all the required table tuples *in
 physical order* using the bitmap.  This eliminates multiple fetches
 of the same heap page, and can possibly let you get some win from
 sequential access.
 
 Once you have built this mechanism, you can then move on to using
 multiple indexes in interesting ways: you can do several indexscans
 in one query and then AND or OR their bitmaps before doing the heap
  scan.  This would allow, for example, WHERE a = 'foo' and b = 'bar'
 to be handled by ANDing results from separate indexes on the a and b
 columns, rather than having to choose only one index to use as we do
 now.
 
 Some thoughts about implementation: FireBird's implementation seems
 to depend on an assumption about a fixed number of tuple pointers
 per page.  We don't have that, but we could probably get away with
 just allocating BLCKSZ/sizeof(HeapTupleHeaderData) bits per page.
 Also, the main downside of this approach is that the bitmap could
 get large --- but you could have some logic that causes you to fall
 back to plain sequential scan if you get too many index hits.  (It's
 interesting to think of this as lossy compression of the bitmap...
 which leads to the idea of only being fuzzy in limited areas of the
 bitmap, rather than losing all the information you have.)
 
 A possibly nasty issue is that lazy VACUUM has some assumptions in it
 about indexscans holding pins on index pages --- that's what prevents
 it from removing heap tuples that a concurrent indexscan is just about
 to visit.  It might be that there is no problem: even if lazy VACUUM
 removes a heap tuple and someone else then installs a new tuple in that
 same TID slot, you should be okay because the new tuple is too new to
 pass your visibility test.  But I'm not convinced this is safe.
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
 http://archives.postgresql.org
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---

Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  OK, seeing as no one voted, and only Tom and I objected originally, we
  will keep the code as Thomas has applied it, namely that PGXLOG/-X is
  recognized by initdb, postmaster, postgres, and pg_ctl.
 
 We will?  It looks to me like Thomas lost the vote 2-to-1.
 
 Unless there are more votes, I'm going to *insist* that this code be
 changed.  It's dangerous and offers no offsetting benefit.  XLOG
 location should be settable at initdb, noplace later.

Well, you didn't vote again in my follow up email, so I thought you
didn't care anymore, and Thomas didn't reply to my email either.  I am
clearly concerned, as you are, but Thomas isn't, and no one else seems
to care.

Can two guys override another guy if he is doing the work?  I usually
like to have a larger margin than that.  I don't know what to do.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Marc G. Fournier

On Tue, 13 Aug 2002, Tom Lane wrote:

 Bruce Momjian [EMAIL PROTECTED] writes:
  OK, seeing as no one voted, and only Tom and I objected originally, we
  will keep the code as Thomas has applied it, namely that PGXLOG/-X is
  recognized by initdb, postmaster, postgres, and pg_ctl.

 We will?  It looks to me like Thomas lost the vote 2-to-1.

 Unless there are more votes, I'm going to *insist* that this code be
 changed.  It's dangerous and offers no offsetting benefit.  XLOG
 location should be settable at initdb, noplace later.

Okay, I'm going to pop up here and side with Thomas ... I think ... I may
have missed some issues here, but, quite frankly, I hate symlinks, as I've
seen them create more evil than good .. hardlinks are a different story ...

And for that reason, I will side with Thomas ...

initdb should allow you to specify a separate location, which I don't
think anyone disagrees with ... but, what happens if, for some reason, I
have to move it to another location?  I have to then dump/reload after
doing a new initdb?

One thought at the back of my mind is why not have something like a
'PG_VERSION' for XLOG?  Maybe something so simple as a text file in both
the data and xlog directory that just contains a timestamp from the
initdb?  then, when you start up the postmaster with a -X option, it compares
the two files and makes sure that they belong to each other?
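Marc's stamp-file idea can be sketched directly. Everything here is hypothetical (the file name "XLOG_STAMP" and the startup check are invented, not existing Postgres behavior): initdb would write the same stamp into both the data and xlog directories, and the postmaster would refuse to pair directories whose stamps differ.

```python
import os
import tempfile
import time

# Hypothetical sketch of a PG_VERSION-like pairing check between the
# data directory and a relocated xlog directory.

def write_stamps(data_dir, xlog_dir):
    """What initdb would do: stamp both directories with one timestamp."""
    stamp = str(time.time())
    for d in (data_dir, xlog_dir):
        with open(os.path.join(d, "XLOG_STAMP"), "w") as f:
            f.write(stamp)

def stamps_match(data_dir, xlog_dir):
    """What postmaster -X would do: verify the directories belong together."""
    def read(d):
        with open(os.path.join(d, "XLOG_STAMP")) as f:
            return f.read()
    return read(data_dir) == read(xlog_dir)

data = tempfile.mkdtemp()
xlog = tempfile.mkdtemp()
write_stamps(data, xlog)
print(stamps_match(data, xlog))  # True: the two directories belong together
```

A stamp from a different initdb run would fail the check, which is exactly the mismatch scenario the -X option makes possible.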

Bruce, if I'm missing something, could you post a summary/scorecard for
pros-cons on this issue?


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [COMMITTERS] pgsql-server/src

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 23:09, Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  OK, seeing as no one voted, and only Tom and I objected originally, we
  will keep the code as Thomas has applied it, namely that PGXLOG/-X is
  recognized by initdb, postmaster, postgres, and pg_ctl.
 
 We will?  It looks to me like Thomas lost the vote 2-to-1.
 
 Unless there are more votes, I'm going to *insist* that this code be
 changed.  It's dangerous and offers no offsetting benefit.  XLOG
 location should be settable at initdb, noplace later.
 


I think Tom is on to something here.  I meant to ask but never got
around to it.  Why would anyone need to move the XLOG after you've
inited the db?

Greg






Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Marc G. Fournier

On Tue, 13 Aug 2002, Bruce Momjian wrote:

 Tom Lane wrote:
  Bruce Momjian [EMAIL PROTECTED] writes:
   OK, seeing as no one voted, and only Tom and I objected originally, we
   will keep the code as Thomas has applied it, namely that PGXLOG/-X is
   recognized by initdb, postmaster, postgres, and pg_ctl.
 
  We will?  It looks to me like Thomas lost the vote 2-to-1.
 
  Unless there are more votes, I'm going to *insist* that this code be
  changed.  It's dangerous and offers no offsetting benefit.  XLOG
  location should be settable at initdb, noplace later.

 Well, you didn't vote again in my follow up email, so I thought you
 didn't care anymore, and Thomas didn't reply to my email either.  I am
 clearly concerned, as you are, but Thomas isn't, and no one else seems
 to care.

k, why are you concerned?  see my other message, but if the major concern
is the xlog directory for a postmaster, then put in a consistency check
for when starting ...

then again, how many out there are running multiple instances of
postmaster on their machine that would move xlog to a different location
without realizing they did?  I know in my case, we're working at reducing
the # of instances on our servers to 2 (internal vs external), but, our
servers are all on a RAID5 array, so there is nowhere to really move
xlog to other than its default location ... but I can see the usefulness of
doing so ...

Myself, if I'm going to move something around, it will be after the server
has been running for a while and I've added in more drive space in order
to correct a problem i didn't anticipate (namely, disk I/O) ... at that
point, I really don't want to have to dump/re-initdb/load just to move the
xlog directory to that new drive ...
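The symlink workaround debated earlier in the thread covers exactly this case. Below is a miniature of it in Python (paths and the stand-in WAL segment name are hypothetical): move the xlog directory to the new disk and leave a symlink at the old path, so the server still finds its WAL without a dump/reload.

```python
import os
import shutil
import tempfile

# Miniature of the "move pg_xlog and symlink it" approach: relocate the
# WAL directory, then make the old path a symlink to the new home.
pgdata = tempfile.mkdtemp()    # stands in for the data directory
newdisk = tempfile.mkdtemp()   # stands in for the newly added drive

old_xlog = os.path.join(pgdata, "pg_xlog")
os.mkdir(old_xlog)
open(os.path.join(old_xlog, "0000"), "w").close()  # stand-in WAL segment

new_xlog = os.path.join(newdisk, "pg_xlog")
shutil.move(old_xlog, new_xlog)   # relocate the WAL files
os.symlink(new_xlog, old_xlog)    # old path now points at the new home

# The server-visible path still resolves to the WAL segment.
print(os.path.exists(os.path.join(pgdata, "pg_xlog", "0000")))  # True
```

This is the very mechanism whose portability and robustness Thomas objects to above; the sketch only shows why administrators keep asking for it.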


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Curt Sampson

On 12 Aug 2002, Greg Copeland wrote:

 You're constantly confusing Postgres' implementation with a desired
 implementation.

No. I'm still trying to figure out what the desired implementation
actually is. This is documented nowhere.

 If you're so keen on examples, please provide one that justifies such a
 boastful statement.

You appear to be saying I should provide an example that proves there
exists no table inheritance configuration that cannot easily be done
with a relational implementation. That's not possible to do, sorry.

I will revise my opinion the instant someone shows me something that I
can't do relationally, or is easy to implement with inheritance, and
difficult with relational methods. Now you know what you need to do, and
if you have no example, we can drop the whole thing. But I am honestly
interested to see just what it is that makes table inheritance so great.

  Are we then assuming that tuples in the child tables do not appear
  in the base table? That's more or less what I'd assumed when I
  originally heard about table inheritance (after all, instantiating
  a child object does not automatically instantiate a separate copy
  of the parent object), but the SQL standard, postgres, and I believe other
  systems make the exact opposite assumption.

 That's actually my exact assumption...that is, that tuples in the parent
 did not exist in the child.

Sorry, by opposite assumption, I meant these two opposites:

1. Tuples in child tables appear in the parent table.

2. Tuples in child tables do not appear in the parent table.

Take your pick, keeping in mind that the sources I know of (Appendix E of _The
Third Manifesto_, _Database Systems Concepts_ (ISTR), the SQL standard, and
postgres) currently all assume #1.

If we find the one we pick is unworkable, we can always go back
and try the other.

  If the child table tuples do appear in the parent, you've now got
  a situation analogous to the current postgres situation where a
  constraint on the parent table is an outright lie. (I'm thinking
  of the UNIQUE constraint which guarantees that all values in a
 [snip]

 I knew that there are *implementation* issues with postgres that causes
 problems with constraints, etc...I didn't realize that was the reason.

Well, assuming we are mapping inheritance back into relational stuff
behind the scenes (which it appears to me we are doing now), we can just
map back to the relational method I demonstrated earlier of doing what
someone wanted to do with table inheritance (child tables contain only
foreign key and child-specific data; parent table contains primary key
and all parent data) and that will fix the implementation problem.
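A minimal sketch of that mapping (table and column names here are purely illustrative, not taken from the earlier examples):

```sql
-- Parent table holds the primary key and all common data.
CREATE TABLE animal (
    id    integer PRIMARY KEY,
    name  text NOT NULL
);

-- Child table holds only a foreign key back to the parent
-- plus its child-specific columns.
CREATE TABLE dog (
    animal_id  integer PRIMARY KEY REFERENCES animal(id),
    breed      text
);

-- A view can reassemble the "child table" the user expects to query.
CREATE VIEW dog_view AS
    SELECT a.id, a.name, d.breed
    FROM animal a JOIN dog d ON d.animal_id = a.id;
```

Since every dog row is also an animal row, a UNIQUE or PRIMARY KEY constraint on animal genuinely covers the whole hierarchy, which is the point.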

Or people have proposed other things, such as cross-table constraints,
to try to do this.

 Well, you seem to be making references to ...SQL standard, postgres,
 and I believe other systems  I was counting on you or someone else
 to point us to existing references.

Well, counting on me is not good, since the whole reason I started this
was because I found the issue confusing in part due to the lack of any
obvious standards here that I could find. :-) But here's what I do have:

Date, Darwen, _Foundation for Future Database Systems: The
Third Manifesto (Second Edition)_. Appendix E.

Silberschatz, Korth, Sudarshan, _Database Systems Concepts
(Fourth Edition)_. I think it's around chapter 9. (My copy is
at home right now.)

SQL Standard. I don't have it handy. Anyone? Anyone? Bueller?

Postgres. Known broken implementation, but we can at least poke
stuff into it and see what it does.

In addition, OO programming gets mentioned occasionally. I don't
think that table inheritance is anything related (and I've spent
a lot of time in the last couple of years developing methods to
make my OO programs and relational databases play nice with each
other), but it might help to have some idea of what people do to
connect the two, in case some people think that they are or should
be connected. You can start by checking out this page for a few
ways of creating objects from database information:

http://www.martinfowler.com/isa/inheritanceMappers.html

cjs
-- 
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light.  --XTC


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [COMMITTERS] pgsql-server/src

2002-08-12 Thread Marc G. Fournier

On 13 Aug 2002, Greg Copeland wrote:

 On Mon, 2002-08-12 at 23:09, Tom Lane wrote:
  Bruce Momjian [EMAIL PROTECTED] writes:
   OK, seeing as no one voted, and only Tom and I objected originally, we
   will keep the code as Thomas has applied it, namely that PGXLOG/-X is
   recognized by initdb, postmaster, postgres, and pg_ctl.
 
  We will?  It looks to me like Thomas lost the vote 2-to-1.
 
  Unless there are more votes, I'm going to *insist* that this code be
  changed.  It's dangerous and offers no offsetting benefit.  XLOG
  location should be settable at initdb, noplace later.
 


 I think Tom is on to something here.  I meant to ask but never got
 around to it.  Why would anyone need to move the XLOG after you've
 inited the db?

I just determined that disk I/O is terrible, so want to move the XLOG over
to a different file system that is currently totally idle ...


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Bruce Momjian

Marc G. Fournier wrote:
 On Tue, 13 Aug 2002, Tom Lane wrote:
 
  Bruce Momjian [EMAIL PROTECTED] writes:
   OK, seeing as no one voted, and only Tom and I objected originally, we
   will keep the code as Thomas has applied it, namely that PGXLOG/-X is
   recognized by initdb, postmaster, postgres, and pg_ctl.
 
  We will?  It looks to me like Thomas lost the vote 2-to-1.
 
  Unless there are more votes, I'm going to *insist* that this code be
  changed.  It's dangerous and offers no offsetting benefit.  XLOG
  location should be settable at initdb, noplace later.
 
 Okay, I'm going to pop up here and side with Thomas ... I think ... I may
have missed some issues here, but, quite frankly, I hate symlinks, as I've
seen them create more evil than good ... hard links are a different story ...

OK, that agrees with Thomas's aversion to symlinks.

 And for that reason, I will side with Thomas ...
 
initdb should allow you to specify a separate location, which I don't
 think anyone disagrees with ... but, what happens if, for some reason, I
 have to move it to another location?  I have to then dump/reload after
 doing a new initdb?

If you move pg_xlog, you have to create a symlink in /data that points
to the new location.  Initdb would do that automatically, but if you
move it after initdb, you would have to create the symlink yourself. 
With Thomas's current code, you would add/change PGXLOG instead to point
to the new location, rather than modify the symlink.
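For illustration, the mechanics of the symlink approach can be simulated on scratch directories (the real procedure would operate on a live cluster's data directory with the postmaster stopped first; all paths below are made up):

```shell
# Simulate moving pg_xlog and leaving a symlink behind; scratch
# directories stand in for the real data directory and the new volume.
PGDATA=$(mktemp -d)/data
NEWHOME=$(mktemp -d)
mkdir -p "$PGDATA/pg_xlog"
echo wal-segment > "$PGDATA/pg_xlog/000000"   # stand-in for a WAL file

mv "$PGDATA/pg_xlog" "$NEWHOME/pg_xlog"       # relocate the directory
ln -s "$NEWHOME/pg_xlog" "$PGDATA/pg_xlog"    # symlink in the old place

# The old path still resolves, transparently to whoever opens it:
cat "$PGDATA/pg_xlog/000000"
```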

 One thought at the back of my mind is why not have something like a
 'PG_VERSION' for XLOG?  Maybe something so simple as a text file in both
 the data and xlog directory that just contains a timestamp from the
 initdb?  then, when you start up the postmaster with a -X option, it compares
 the two files and makes sure that they belong to each other?

Uh, seems it could get messy, but, yea, that would work.  It means
adding a file to pg_xlog and /data and somehow matching them.  My
feeling was that the symlink was unambiguous and allowed for fewer
mistakes.  I think that was Tom's opinion too.
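Marc's matching-file idea could be as small as this sketch (the file name XLOG_ID and the use of an initdb-time stamp are both made up for illustration; scratch directories stand in for the real ones):

```shell
# At initdb time: stamp both directories with the same token.
PGDATA=$(mktemp -d)
PGXLOG=$(mktemp -d)
stamp=$(date +%s)
echo "$stamp" > "$PGDATA/XLOG_ID"
echo "$stamp" > "$PGXLOG/XLOG_ID"

# At postmaster startup with -X: refuse to run on a mismatch.
if cmp -s "$PGDATA/XLOG_ID" "$PGXLOG/XLOG_ID"; then
    echo "xlog directory belongs to this data directory"
else
    echo "xlog directory does NOT belong to this data directory" >&2
fi
```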

 Bruce, if I'm missing something, could you post a summary/scorecard for
 pros-cons on this issue?

OK, I tried.  :-)

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Bruce Momjian

Marc G. Fournier wrote:
  Well, you didn't vote again in my follow up email, so I thought you
  didn't care anymore, and Thomas didn't reply to my email either.  I am
  clearly concerned, as you are, but Thomas isn't, and no one else seems
  to care.
 
 k, why are you concerned?  see my other message, but if the major concern
 is the xlog directory for a postmaster, then put in a consistency check
 for when starting ...


I really have two concerns: one, the ability to point to the wrong
place, or to the default place, accidentally if PGXLOG isn't set; and two,
the addition of PGXLOG/-X into postmaster, postgres, and pg_ctl, where it
really isn't adding any functionality and just adds bloat to the
commands' argument lists.

 then again, how many out there are running multiple instances of
 postmaster on their machine that would move xlog to a different location
 without realizing they did?  I know in my case, we're working at reducing
 the # of instances on our servers to 2 (internal vs external), but, our
 servers are all on a RAID5 array, so there is nowhere to really move
 xlog to other than its default location ... but I can see the usefulness of
 doing so ...
 
 Myself, if I'm going to move something around, it will be after the server
 has been running for a while and I've added in more drive space in order
 to correct a problem i didn't anticipate (namely, disk I/O) ... at that
 point, I really don't want to have to dump/re-initdb/load just to move the
 xlog directory to that new drive ...

No dump/reload required, but the creation of a _dreaded_ symlink is
required.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] [COMMITTERS] pgsql-server/src

2002-08-12 Thread Bruce Momjian

Marc G. Fournier wrote:
  I think Tom is on to something here.  I meant to ask but never got
  around to it.  Why would anyone need to move the XLOG after you've
  inited the db?
 
 I just determined that disk I/O is terrible, so want to move the XLOG over
 to a different file system that is currently totally idle ...

Yep, and you are going to do it using symlinks.  Let us know how it
goes?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] [COMMITTERS] pgsql-server/src

2002-08-12 Thread Greg Copeland

On Tue, 2002-08-13 at 00:16, Marc G. Fournier wrote:
 
 Myself, if I'm going to move something around, it will be after the server
 has been running for a while and I've added in more drive space in order
 to correct a problem i didn't anticipate (namely, disk I/O) ... at that
 point, I really don't want to have to dump/re-initdb/load just to move the
 xlog directory to that new drive ...
 

Okay, fair enough.  But do we really need to have environment variables
for this?  Sounds like we need a stand-alone utility which does the
associated magic in the database, moving the xlog and the associated
internal pointers.  Doing so would assume that all backends for the
database have been shut down, or it simply would not go into effect until
restart.  Is this feasible?

For something that would seemingly be used so infrequently, creating
environment variables for it seems rather error prone.

Requiring soft links doesn't strike me as a good portable idea
either ... not to mention I've been bitten by them before too.

Sign,
Greg Copeland






Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Curt Sampson

On Mon, 12 Aug 2002, Don Baccus wrote:

 Give it up.  You're acting like a turkey.  If you aren't, skin yourself
 a new non-turkey skin.

Since he appears not to be able to avoid abusive ad hominem attacks,
I'm now sending mail with [EMAIL PROTECTED] in the From: header
to /dev/null. If there's a technical point in one of his messages that
relates to the discussion that I need to answer, someone should please
mention it on the list or forward it to me.

cjs
-- 
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light.  --XTC



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Greg Copeland

On Tue, 2002-08-13 at 00:33, Curt Sampson wrote:
 On Mon, 12 Aug 2002, Don Baccus wrote:
 
  Give it up.  You're acting like a turkey.  If you aren't, skin yourself
  a new non-turkey skin.
 
 Since he appears not to be able to avoid abusive ad hominem attacks,
 I'm now sending mail with [EMAIL PROTECTED] in the From: header
 to /dev/null. If there's a technical point in one of his messages that
 relates to the discussion that I need to answer, someone should please
 mention it on the list or forward it to me.
 

Curt, I think his reply stems from his frustration of chosen content in
many emails that originate from you.  We all pretty well understand
postgres has a broken feature.  We all understand you see zero value in
it.  We're all actively attempting to determine ways to make it less
broken, if not altogether better.  The fact that it's broken, and that you
hardly go an email without reminding everyone of it, is certainly not
making you friends.  In fact, one should hardly be surprised you're not
seeing more retorts like that.

For the sake of the project, I'd hope you could give the broken topic
a rest, move on, and allow a little time for the list to settle again. 
If such abuse continues, then IMHO, it would make sense to /dev/null
him.

Greg






Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Greg Copeland

On Tue, 2002-08-13 at 00:16, Curt Sampson wrote:
 I will revise my opinion the instant someone shows me something that I
 can't do relationally, or is easy to implement with inheritance, and
 difficult with relational methods. Now you know what you need to do, and
 if you have no example, we can drop the whole thing. But I am honestly
 interested to see just what it is that makes table inheritance so great.

I think herein is the first problem.  You seem to insist that the world
can only allow for one or the other and that the two approaches are
mutually exclusive.  I tend to think that there is room for both.  One
would also seem to allow that they can actually be complementary
(referring to Hannu's recent is-a  has-a inheritance comments).

Can we let go of "x is better than y" and just concentrate on how y can be
made better, without regard for x?

After it's all said and done, who knows, everyone might agree that table
inheritance is just a plain, bad idea.

 
  I knew that there are *implementation* issues with postgres that causes
  problems with constraints, etc...I didn't realize that was the reason.
 
 Well, assuming we are mapping inheritance back into relational stuff
 behind the scenes (which it appears to me we are doing now), we can just
 map back to the relational method I demonstrated earlier of doing what
 someone wanted to do with table inheritance (child tables contain only
 foreign key and child-specific data; parent table contains primary key
 and all parent data) and that will fix the implementation problem.

This is what I imagined the preferred solution would be, however, I'm
also assuming it would be the more complex to implement *properly*.  

 
 Or people have proposed other things, such as cross-table constraints,
 to try to do this.

Ya, I was kicking this idea around in my head tonight.  Didn't get far
on it.  So I should look for postings in the archive about this specific
implementation?

 
  Well, you seem to be making references to ...SQL standard, postgres,
  and I believe other systems  I was counting on you or someone else
  to point us to existing references.
 
 Well, counting on me is not good, since the whole reason I started this
 was because I found the issue confusing in part due to the lack of any
 obvious standards here that I could find. :-) But here's what I do have:
 
 Date, Darwen, _Foundation for Future Database Systems: The
 Third Manifesto (Second Edition)_. Appendix E.

Is this a book or a paper?  I have a paper that I've been reading
(ack...very, very dry) by these guys of the same name.

 
 Silberschatz, Korth, Sudarshan, _Database Systems Concepts
 (Fourth Edition)_. I think it's around chapter 9. (My copy is
 at home right now.)
 
 SQL Standard. I don't have it handy. Anyone? Anyone? Bueller?

So the SQL standard does address table inheritance?  Not that this means
I feel that they've done the right thing...but what did the
specification have to say on the subject?  Any online references?

 
 Postgres. Known broken implementation, but we can at least poke
 stuff into it and see what it does.
 
 In addition, OO programming gets mentioned occasionally. I don't
 think that table inheritance is anything related (and I've spent

Yes.  I think I'm starting to buy into that too; however, I'm not sure
that it has to mean there's no value within.  In other words, I'm still
on the fence on a) whether table inheritance really makes much OO sense, and b)
whether, either way, there is value in any form of its
implementation (whatever the end result looks like).

 a lot of time in the last couple of years developing methods to
 make my OO programs and relational databases play nice with each
 other), but it might help to have some idea of what people to do
 connect the two, in case some people think that they are or should
 be connected. You can start by checking out this page for a few
 ways of creating objects from database information:
 
 http://www.martinfowler.com/isa/inheritanceMappers.html
 

Thanks.  Funny, I was reading that just the other day.  ;)

Greg






Re: [HACKERS] [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke

2002-08-12 Thread Thomas Lockhart

 If you move pg_xlog, you have to create a symlink in /data that points
 to the new location.  Initdb would do that automatically, but if you
 move it after initdb, you would have to create the symlink yourself.
 With Thomas's current code, you would add/change PGXLOG instead to point
 to the new location, rather than modify the symlink.

There is no "the symlink", but of course that tinkering is in no way
precluded by the new code. Although some seem to like symlinks, others
(including myself) see no good engineering practice in making them the
only foundation for distributing files across file systems.

The patches as-is follow existing PostgreSQL practice, have complete and
perfect backward compatibility, and do not preclude changes in
underlying implementation in the future if those who are objecting
choose to do a complete and thorough job of meeting my objections to the
current counter-suggestions. As an example, two lines of code in initdb
would add the beloved symlink to $PGDATA, eliminating one objection,
though (of course) one I don't support.

  One thought at the back of my mind is why not have something like a
  'PG_VERSION' for XLOG?  Maybe something so simple as a text file in both
  the data and xlog directory that just contains a timestamp from the
  initdb?  then, when you start up the postmaster with a -X option, it compares
  the two files and makes sure that they belong to each other?
 Uh, seems it could get messy, but, yea, that would work.  It means
 adding a file to pg_xlog and /data and somehow matching them.  My
 feeling was that the symlink was unambiguous and allowed for fewer
 mistakes.  I think that was Tom's opinion too.

In the spirit of gratuitous overstatement, I'll point out again:
symlinks are evil. Any sense of a job well done is misplaced if our
underpinnings rely on them for distributing files across file systems.
As an ad hoc hack to work around current limitations they may have some
utility.

Anyway, ISTM that this is way too much discussion for a small extension
of capability, and it has likely cost us a table and index location
implementation for the upcoming release, just due to time wasted
discussing it. Hope it was worth it :/

- Thomas

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Curt Sampson

On 13 Aug 2002, Greg Copeland wrote:

 On Tue, 2002-08-13 at 00:16, Curt Sampson wrote:
  I will revise my opinion the instant someone shows me something that I
  can't do relationally, or is easy to implement with inheritance, and
  difficult with relational methods. Now you know what you need to do, and
  if you have no example, we can drop the whole thing. But I am honestly
  interested to see just what it is that makes table inheritance so great.

 I think herein is the first problem.  You seem to insist that the world
 can only allow for one or the other and that the two approaches are
 mutually exclusive.

No, I don't.

1. If it changes the rules, as it were (that is, breaks other
parts of the system), it should go. This is the current state
of the postgres implementation. I'm guessing it's not the state
of the desired implementation, once we figure out what that is.

2. If it's just syntactic sugar, that's livable, so long as
it's quite obvious what it's syntactic sugar for. (In the current
case, it's not.) It's even good if it saves a lot of effort.

3. If it actually allows you to do something you cannot otherwise
do, or allows you to do something very difficult with much
greater ease, it's a good thing and it should stay.

  Well, assuming we are mapping inheritance back into relational stuff
  behind the scenes (which it appears to me we are doing now), we can just
  map back to the relation method I demonstrated earlier of doing what
  someone wanted to do with table inheritance (child tables contain only
  foreign key and child-specific data; parent table contains primary key
  and all parent data) and that will fix the implementation problem.

 This is what I imagined the preferred solution would be, however, I'm
 also assuming it would be the more complex to implement *properly*.

I don't think so. Both systems are currently, AFAICT, pretty simple
mappings onto the relational system. Once we get the exact details of
table inheritance behaviour hammered out, I will gladly provide the
mapping, if it's possible to create it.

  Date, Darwen, _Foundation for Future Database Systems: The
  Third Manifesto (Second Edition)_. Appendix E.

 Is this a book or a paper.  I have a paper that I've been reading
 (ack...very, very dry) by these guys of the same name.

It's a book. Apparently the paper is, in comparison, much more lively.
:-) But I find the book good in that, at the very least, it shows the
level to which you have to go to come up with a theoretically solid
basis for something you want to implement.

 So the SQL standard does address table inheritance?

Yes.

 Not that this means I feel that they've done the right thing...but
 what did the specification have to say on the subject? Any online
 references?

I don't have a copy of the spec handy, and have not had time to go and
dig one up. All I got from it was out of the two book references I gave.

cjs
-- 
Curt Sampson  [EMAIL PROTECTED]   +81 90 7737 2974   http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light.  --XTC


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] OOP real life example (was Re: Why is MySQL more

2002-08-12 Thread Hannu Krosing

On Tue, 2002-08-13 at 10:16, Curt Sampson wrote:
 On 12 Aug 2002, Greg Copeland wrote:
...

  Are we then assuming that tuples in the child tables do not appear
   in the base table? That's more or less what I'd assumed when I
   originally heard about table inheritance (after all, instantiating
   a child object does not automatically instantiate a separate copy
   of the parent object),

Tuples in the child table appear in the parent table when you do a plain
SELECT, as they IMHO should, because you _do_ want to get all kinds of
animals when doing a select from animals.

They do not appear in parent table when you do 

SELECT .. FROM ONLY parent_table

It used to be the opposite (one needed to do SELECT .. FROM
parent_table* to get tuples from inherited tables as well), but it
was changed because SQL99 mandated that inherited tables should be
included by default. That's for SQL99's CREATE TABLE (...) UNDER
another_table kind of single inheritance.
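In PostgreSQL syntax, the behaviour described above looks roughly like this (a sketch; table names are illustrative):

```sql
CREATE TABLE animals (name text);
CREATE TABLE dogs (breed text) INHERITS (animals);

INSERT INTO animals VALUES ('tweety');
INSERT INTO dogs VALUES ('rex', 'collie');

SELECT name FROM animals;       -- both rows: child tuples included by default
SELECT name FROM ONLY animals;  -- just 'tweety': ONLY excludes child tables
```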

   but the SQL standard, postgres, and I believe other
   systems make the exact opposite assumption.
 
  That's actually my exact assumption...that is, that tuples in the parent
  did not exist in the child.
 
 Sorry, by opposite assumption, I meant these two opposites:

There are two main orthogonal ways of mapping inheritance to the
relational model.

 1. Tuples in child tables appear in the parent table.

That's the way you implemented the samples in the beginning of this
thread, i.e. keep the common part in one table and extend by stitching
columns from child tables to the side using foreign keys.

This makes it easy to enforce primary keys and uniqueness, but grows
ugly quite fast if you have deep inheritance hierarchies: if you have
inheritance 5 levels deep, you need 4 joins to get a tuple from the
last-descendant table.

It also makes automatic updating of views a pain to do.

 2. Tuples in child tables do not appear in the parent table.

This is how postgres implements it: make a new table for each inherited
table and do a UNION when doing a SELECT.

This makes it hard to implement uniqueness and primary keys, but easy to
do updates and inserts.

 Take your pick, keeping in mind that the sources I know of (Appendix E of _The
 Third Manifesto_, _Database Systems Concepts_ (ISTR), the SQL standard, and
 postgres) currently all assume #1.

I would like yet another implementation, more in line with SQL99's
single inheritance, where all inherited tables would be stored in the
same pagefile (so that you can put a unique index on them and it would
just work because TIDs all point into the same file). Fast access to
some single table ONLY could be done using partial indexes on tableoid.

This can't be mapped directly onto the SQL92 kind of relational model, but can
more or less be mimicked by setting the new fields to NULL for tuples
belonging to the parent relation.

 If we find the one we pick is unworkable, we can always go back
 and try the other.
 
   If the child table tuples do appear in the parent, you've now got
   a situation analogous to the current postgres situation where a
   constraint on the parent table is an outright lie. (I'm thinking
   of the UNIQUE constraint which guarantees that all values in a
  [snip]
 
  I knew that there are *implementation* issues with postgres that causes
  problems with constraints, etc...I didn't realize that was the reason.
 
 Well, assuming we are mapping inheritance back into relational stuff
 behind the scenes (which it appears to me we are doing now), we can just
 map back to the relational method I demonstrated earlier of doing what
 someone wanted to do with table inheritance (child tables contain only
 foreign key and child-specific data; parent table contains primary key
 and all parent data) and that will fix the implementation problem.

The main problems I pointed out above: 

1. hard-to-implement UPDATE rules; theoretically possible is not good
enough for real use ;)

2. too much joining for deep inheritance hierarchies.

 Or people have proposed other things, such as cross-table constraints,
 to try to do this.
 
  Well, you seem to be making references to ...SQL standard, postgres,
  and I believe other systems  I was counting on you or someone else
  to point us to existing references.
 
 Well, counting on me is not good, since the whole reason I started this
 was because I found the issue confusing in part due to the lack of any
 obvious standards here that I could find. :-) But here's what I do have:
 
 Date, Darwen, _Foundation for Future Database Systems: The
 Third Manifesto (Second Edition)_. Appendix E.
 
 Silberschatz, Korth, Sudarshan, _Database Systems Concepts
 (Fourth Edition)_. I think it's around chapter 9. (My copy is
 at home right now.)
 
 SQL Standard. I don't have it handy. Anyone? Anyone? Bueller?

I got mine from

http://www.sqlstandards.org/SC32/WG3/Progression_Documents/FCD/fcd2found.pdf

Quite hard to read, as standards in general tend to be ;)