Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Bruce Momjian

Tom Lane wrote:
 We've been discussing this stuff in fits and starts for months now, but
 nothing satisfactory has been arrived at.  I've concluded that part of
 the problem is that we are trying to force the system's behavior into
 a model that is too limiting: we need more than an implicit/explicit cast
 distinction.  Accordingly, I suggest we bite the bullet and make it happen.
 (Note that I've resigned myself to having to do an initdb for 7.3beta2.)

I was reading my backlog of email and thinking, "Oh, things are shaping
up well," then I hit this message.  Let me try to collect open items
tomorrow and get a plan together.  I have caught up on my email.  I am
heading to bed.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Bruce Momjian

Tom Lane wrote:
 Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
  I think the following three states may enable a closer match to an actually 
  desired (Peter said mandated by SQL99) behavior.
 
  1. okay as implicit cast in expression or assignment
  2. okay as implicit cast in expression or assignment but needs runtime check
  (precision loss possible)
  3. okay only as explicit cast (precision loss possible)
 
 The runtime checks are there already, eg
 
 regression=# select 123456789::int4::int2;
 ERROR:  i4toi2: '123456789' causes int2 overflow
 
 However this does not help us much; the critical point is that if we
 want function overloading to work in a sane fashion, we have to prefer
 up-conversions to down-conversions *at parse time*, at least for the
 operands of functions and operators (which is what I meant by "in
 expressions").  Runtime checks are irrelevant to this problem.

I think there is some confusion here.  The runtime checks Andreas was
talking about were allowing a double of 64.0 to cast to an int4 while
disallowing 64.1 from being cast to an int4 because it is not a whole
number.

I am not sure doubles have enough precision to make such comparisons
functional (NUMERIC certainly does) but that was his proposal, and he
stated he thought the standard required it.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Zeugswetter Andreas SB SD


 What I will do instead is adjust parse_coerce.c so that a
 length-coercion function can have either of the signatures
   foo(foo,int4) returns foo
 or
   foo(foo,int4,bool) returns foo
 and then modify the above-mentioned length coercion functions to provide
 the desired behavior.  This has no direct impact on pg_cast because we
 do not use pg_cast for length-coercion functions.

Sounds good to me. 

When those are really truncated ESQL/C needs to set a warning in sqlca.sqlwarn
though, thus I think the second signature should also have an output flag to tell 
whether truncation actually occurred.
Maybe this should be kept for a protocol change though, since I would not think
a NOTICE would be suitable here. 
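
To illustrate the two signatures quoted above (a sketch only; which types
actually provide the three-argument form depends on the release), a length
coercion such as CAST(val AS varchar(10)) would be resolved by the parser to
a call of either

-- legacy length-coercion signature
varchar(varchar, int4)
-- or the extended signature; the bool presumably tells the function whether
-- the coercion is explicit, so it can truncate silently or complain
varchar(varchar, int4, bool)

-- e.g. an explicit cast is allowed to truncate:
SELECT CAST('a string clearly longer than ten chars' AS varchar(10));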

Andreas

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 I think there is some confusion here.  The runtime checks Andreas was
 talking about were allowing a double of 64.0 to cast to an int4 while
 disallowing 64.1 from being cast to an int4 because it is not a whole
 number.

 I am not sure doubles have enough precision to make such comparisons
 functional (NUMERIC certainly does) but that was his proposal, and he
 stated he thought the standard required it.

It seems clear to me that the standard requires us NOT to reject that.

In the explicit-cast case, SQL92 6.10 cast specification saith:

 3) If TD is exact numeric, then

Case:

a) If SD is exact numeric or approximate numeric, then

  Case:

  i) If there is a representation of SV in the data type TD
 that does not lose any leading significant digits after
 rounding or truncating if necessary, then TV is that rep-
 resentation. The choice of whether to round or truncate is
 implementation-defined.

 ii) Otherwise, an exception condition is raised: data exception-
 numeric value out of range.

So we are *only* allowed to throw an error for overflow; having to round
is not an error condition.

In the implicit-cast case, section 9.2 Store assignment has

k) If the data type of T is numeric and there is an approxi-
  mation obtained by rounding or truncation of the numerical
  value of V for the data type of T, then the value of T is set
  to such an approximation.

  If there is no such approximation, then an exception condi-
  tion is raised: data exception-numeric value out of range.

  If the data type of T is exact numeric, then it is implementation-
  defined whether the approximation is obtained by rounding or
  by truncation.

which is different wording but seems to boil down to the same thing: the
only error condition is out-of-range.
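
For illustration, the two outcomes the spec text describes look like this
(a sketch; exact error wording varies by version):

-- rounding/truncation is allowed, not an error:
SELECT CAST(64.1 AS int4);        -- yields 64
-- only a genuine overflow may raise "numeric value out of range":
SELECT CAST(123456789 AS int2);   -- error, as in the earlier i4toi2 example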

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] If there a bug in the psql or just a feature .

2002-09-17 Thread Hannu Krosing

On Tue, 2002-09-10 at 17:22, Vanmunin Chea wrote:
 Hey Tom,
 
   Thanks for the tips, Tom. I have that feeling from the start
 (with the two different implementation) but never actually have a chance
 to confirm with someone.
 
 1. It there a way to store the dynamic array at all ?
 
 
   I notice psql has a similar type - a single-dimensional dynamic
 array. However there aren't any built-in operators (<, <=, ==, >, >=) for
 arrays to do sorting.
 
 2. Can I write one up ?

See attachment.

Unfortunately I ran out of time before figuring out how to make btree
index use it ;(

Also, in 7.3 there are a lot more ops in contrib/intarray than there were
in 7.2.

-
Hannu





CREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE AS
'/usr/lib/pgsql/plpgsql.so' LANGUAGE 'C';

CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
HANDLER plpgsql_call_handler
LANCOMPILER 'PL/pgSQL';


-- first the 'worker' function

DROP  FUNCTION intarray_compare(INT[],INT[]);

CREATE FUNCTION intarray_compare(INT[],INT[]) RETURNS INT AS '
DECLARE 
I INTEGER := 1;
BEGIN
LOOP
IF ($1[I] > $2[I]) THEN
RETURN 1;
ELSE
IF ($1[I] < $2[I]) THEN
RETURN -1;
END IF;
END IF;
IF ($1[I] IS NULL) THEN
IF ($2[I] IS NULL) THEN RETURN 0; END IF;
RETURN -1; 
END IF;
IF ($2[I] IS NULL) THEN RETURN 1; END IF;
I := I + 1;
EXIT WHEN I > 1000;
END LOOP;
END;
' LANGUAGE 'plpgsql';


DROP FUNCTION intarray_gt(INT[],INT[]);
create FUNCTION intarray_gt(INT[],INT[]) RETURNS BOOLEAN AS '
BEGIN
RETURN intarray_compare($1,$2) = 1;
END;
' LANGUAGE 'plpgsql';

DROP FUNCTION intarray_gte(INT[],INT[]);
create FUNCTION intarray_gte(INT[],INT[]) RETURNS BOOLEAN AS '
BEGIN
RETURN intarray_compare($1,$2) >= 0;
END;
' LANGUAGE 'plpgsql';

DROP FUNCTION intarray_lt(INT[],INT[]);
create FUNCTION intarray_lt(INT[],INT[]) RETURNS BOOLEAN AS '
BEGIN
RETURN intarray_compare($1,$2) = -1;
END;
' LANGUAGE 'plpgsql';

DROP FUNCTION intarray_lte(INT[],INT[]);
create FUNCTION intarray_lte(INT[],INT[]) RETURNS BOOLEAN AS '
BEGIN
RETURN intarray_compare($1,$2) <= 0;
END;
' LANGUAGE 'plpgsql';

DROP FUNCTION intarray_eq(INT[],INT[]);
create FUNCTION intarray_eq(INT[],INT[]) RETURNS BOOLEAN AS '
BEGIN
RETURN intarray_compare($1,$2) = 0;
END;
' LANGUAGE 'plpgsql';

DROP FUNCTION intarray_neq(INT[],INT[]);
create FUNCTION intarray_neq(INT[],INT[]) RETURNS BOOLEAN AS '
BEGIN
RETURN intarray_compare($1,$2) != 0;
END;
' LANGUAGE 'plpgsql';


-- now create the operators

CREATE OPERATOR < (
leftarg = INT[],
rightarg = INT[],
procedure = intarray_lt,
commutator = >,
negator = >=
);

CREATE OPERATOR <= (
leftarg = INT[],
rightarg = INT[],
procedure = intarray_lte,
commutator = >=,
negator = >
);

CREATE OPERATOR > (
leftarg = INT[],
rightarg = INT[],
procedure = intarray_gt,
commutator = <,
negator = <=
);

CREATE OPERATOR >= (
leftarg = INT[],
rightarg = INT[],
procedure = intarray_gte,
commutator = <=,
negator = <
);

CREATE OPERATOR = (
leftarg = INT[],
rightarg = INT[],
procedure = intarray_eq,
commutator = = ,
negator = !=
);

CREATE OPERATOR != (
leftarg = INT[],
rightarg = INT[],
procedure = intarray_neq,
commutator = != ,
negator = =
);
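
-- Usage sketch (table and column names are made up).  Note that making a
-- btree index use these operators needs an operator class, which is the
-- missing piece mentioned above; on newer releases ORDER BY ... USING
-- requires that operator class too.
SELECT '{1,2,3}'::int4[] < '{1,2,4}'::int4[];   -- true
SELECT * FROM t ORDER BY a USING <;             -- t has an int4[] column a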














---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Zeugswetter Andreas SB SD


  For numbers there is probably only the solution to invent an
  anynumber generic type.
 
 Actually, I had been toying with the notion of doing the following:
 
 1. A numeric literal is initially typed as the smallest type that will
 hold it in the series int2, int4, int8, numeric (notice NOT float8).

Yes, that sounds like a good plan for all scenarios that can follow !

 2. Allow implicit up-coercion int2->int4->int8->numeric->float4->float8,
 but down-coercions aren't implicit except for assignment.

How about int2->int4->int8->numeric->float4->float8->numeric ?
That would also allow an upward path from float8.

 3. Eliminate most or all of the cross-numeric-type operators 
 (eg, there is no reason to support int2+int4 as a separate operator).

Yes.

 With this approach, an expression like int4var = 42 would be initially
 typed as int4 and int2, but then the constant would be coerced to int4
 because int4=int4 is the closest-match operator.  (int2=int2 would not
 be considered because down-coercion isn't implicitly invokable.)  

It would fix the constants issue, yes. How about where int2col=int4col
and the indexability of int2col there, though?

 Also
 we get more nearly SQL-standard behavior in expressions that combine
 numeric with float4/float8: the preferred type will be float, which
 accords with the spec's notions of exact numeric vs. 
 approximate numeric.

I do not understand the standard here.
Especially the following would seem awkward if that would switch to approximate:
set numericcol = numericcol * float4col; 

Andreas

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Dave Page



 -Original Message-
 From: Bruce Momjian [mailto:[EMAIL PROTECTED]] 
 Sent: 17 September 2002 06:36
 To: Christopher Kings-Lynne
 Cc: Robert Treat; Justin Clift; Peter Eisentraut; Tom Lane; 
 Curt Sampson; PostgreSQL Hackers Mailing List
 Subject: Re: [HACKERS] PGXLOG variable worthwhile?
 

 Well, let's see if we ever run on native NT4.X and we can 
 decide then. 
 Actually, don't our Cygnus folks have a problem with moving 
 pg_xlog already?

No, because Cygwin knows about shell links.

Whilst I'm here, I'll chuck my $0.02 in:

I use PostgreSQL on Linux for production and XP for development, and am
likely to continue that way. I've been beta testing the native Win32
port of PostgreSQL as Justin has and the latest version is fantastic -
it runs as a service, osdb shows impressive results compared to Cygwin
PostgreSQL on the same system and it's a breeze to install, despite
there being no installer yet.

What I can't understand is the attitude of some people here. Yes,
Microsoft are evil, but the bottom line is, millions of people use
Windows. Just look at the number of downloads for pgAdmin (shown at
http://www.pgadmin.org/downloads/) - the last stable version has clocked
up over 38,000 downloads, the preview I released just a couple of weeks
ago, 2230 at the time of writing. I know from talking to some of the
users that often people download copies for themselves and their
colleagues, so we can probably assume there are actually 40,000+
PostgreSQL users that use Windows regularly enough to want pgAdmin. What
happens if you add in the pgAccess/Windows users, Tora, or pgExplorer?
How many of these people would want to run PostgreSQL on Windows as
well?

What about the companies out there that have good sysadmins who want to
use PostgreSQL, but manglement that insist on using Windows?

What about situations where a single server is running SQL Server and
other software (such as a middle tier server - as I have on one box
here), and that other software cannot be changed, but SQL could?

I think that ignoring the huge number of people that use Windows because
some of us consider it a Mickey Mouse OS is a particularly bad strategy
if we want to expand our userbase. Windows is not going anywhere soon,
and like it or not, it *is* getting better and better. Our Windows 2000
(and our Beta3/RC1 .Net test) servers are rock solid and haven't been
rebooted in months - we get more hardware faults these days, and those
can occur on our Linux or HP-UX boxes just as easily.

Anyway, enough of my rant :-)

Regards, Dave.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Christopher Kings-Lynne

 I use PostgreSQL on Linux for production and XP for development, and am
 likely to continue that way. I've been beta testing the native Win32
 port of PostgreSQL as Justin has and the latest version is fantastic -
 it runs as a service, osdb shows impressive results compared to Cygwin
 PostgreSQL on the same system and it's a breeze to install, despite
 there being no installer yet.

From where do we get this fabled Win32 port?

Chris


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] Still big problems with pg_dump!

2002-09-17 Thread Wim



Hello guys,

I have still problems with dumping my database

I have postgres 7.2.1 running on a solaris 8 server. When I try to do a 
pg_dump of my database, I get the following message:
pg_dump: query to obtain list of tables failed: server closed the 
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

I connect to the database and try to view the tables with \dt and \dS. 
Now I get:
ERROR:  AllocSetFree: cannot find block containing chunk 4c5ad0
I retry:
ERROR:  AllocSetFree: cannot find block containing chunk 4860d0

I can view the tables with: \d tablename

Some people suggest a drive failure, but I checked that and found no 
problems...
I REINDEXED the whole database... problem still the same...
Tried a VACUUM... still not working...


I must say that one of the tables contains more than 3,000,000 rows,
another more than 1,400,000...
Select, update, delete and insert work; only pg_dump(all) and the \dt and
\dS commands fail...
I must say that I had this problem a few months ago. I got some help
then, but that couldn't solve my problem, so I recreated the database
from scratch and copied the data over to fix things quickly. Things went
well for about two months :-(
Now the problem arises again, and I'm trying to find a solution without
reinstalling the whole thing all over again.


Some advice/help from the specialists?

Cheers!

Wim.

Some info from the debug logfile:
-

DEBUG:  StartTransactionCommand
DEBUG:  query: SELECT c.relname as Name,
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' 
THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as Type,
  u.usename as Owner
FROM pg_class c LEFT JOIN pg_user u ON c.relowner = u.usesysid
WHERE c.relkind IN ('r','')
  AND c.relname !~ '^pg_'
ORDER BY 1;
DEBUG:  parse tree: { QUERY :command 1  :utility  :resultRelation 0 
:into  :isPortal false :isBinary false :isTemp false :hasAggs false 
:hasSubLinks false :rtable ({ RTE :relname pg_class :relid 1259  
:subquery  :alias { ATTR :relname c :attrs } :eref { ATTR :relname c 
:attrs ( relname   reltype   relowner   relam   relfilenode   
relpages   reltuples   reltoastrelid   reltoastidxid   
relhasindex   relisshared   relkind   relnatts   relchecks   
reltriggers   relukeys   relfkeys   relrefs   relhasoids   
relhaspkey   relhasrules   relhassubclass   relacl )} :inh true 
:inFromCl true :checkForRead true :checkForWrite false :checkAsUser 0} { 
RTE :relname pg_user :relid 16478  :subquery  :alias { ATTR :relname u 
:attrs } :eref { ATTR :relname u :attrs ( usename   usesysid   
usecreatedb   usetrace   usesuper   usecatupd   passwd   
valuntil )} :inh true :inFromCl true :checkForRead true :checkForWrite 
false :checkAsUser 0}) :jointree { FROMEXPR :fromlist ({ JOINEXPR 
:jointype 1 :isNatural false :larg { RANGETBLREF 1 } :rarg { RANGETBLREF 
2 } :using  :quals { EXPR :typeOid 16  :opType op :oper { OPER :opno 
96 :opid 0 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 3 :vartype 
23 :vartypmod -1  :varlevelsup 0 :varnoold 1 :varoattno 3} { VAR :varno 
2 :varattno 2 :vartype 23 :vartypmod -1  :varlevelsup 0 :varnoold 2 
:varoattno 2})} :alias  :colnames ( relname   reltype   
relowner   relam   relfilenode   relpages   reltuples   
reltoastrelid   reltoastidxid   relhasindex   relisshared   
relkind   relnatts   relchecks   reltriggers   relukeys   
relfkeys   relrefs   relhasoids   relhaspkey   relhasrules   
relhassubclass   relacl   usename   usesysid   usecreatedb   
usetrace   usesuper   usecatupd   passwd   valuntil ) :colvars 
({ VAR :varno 1 :varattno 1 :vartype 19 :vartypmod -1  :varlevelsup 0 
:varnoold 1 :varoattno 1} { VAR :varno 1 :varattno 2 :vartype 26 
:vartypmod -1  :varlevelsup 0 :varnoold 1 :varoattno 2} { VAR :varno 1 
:varattno 3 :vartype 23 :vartypmod -1  :varlevelsup 0 :varnoold 1 
:varoattno 3} { VAR :varno 1 :varattno 4 :vartype 26 :vartypmod -1  
:varlevelsup 0 :varnoold 1 :varoattno 4} { VAR :varno 1 :varattno 5 
:vartype 26 :vartypmod -1  :varlevelsup 0 :varnoold 1 :varoattno 5} { 
VAR :varno 1 :varattno 6 :vartype 23 :vartypmod -1  :varlevelsup 0 
:varnoold 1 :varoattno 6} { VAR :varno 1 :varattno 7 :vartype 700 
:vartypmod -1  :varlevelsup 0 :varnoold 1 :varoattno 7} { VAR :varno 1 
:varattno 8 :vartype 26 :vartypmod -1  :varlevelsup 0 :varnoold 1 
:varoattno 8} { VAR :varno 1 :varattno 9 :vartype 26 :vartypmod -1  
:varlevelsup 0 :varnoold 1 :varoattno 9} { VAR :varno 1 :varattno 10 
:vartype 16 :vartypmod -1  :varlevelsup 0 :varnoold 1 :varoattno 10} { 
VAR :varno 1 :varattno 11 :vartype 16 :vartypmod -1  :varlevelsup 0 
:varnoold 1 :varoattno 11} { VAR :varno 1 :varattno 12 :vartype 18 
:vartypmod -1  :varlevelsup 0 :varnoold 1 :varoattno 12} { VAR :varno 1 
:varattno 13 :vartype 21 :vartypmod -1  :varlevelsup 0 :varnoold 1 
:varoattno 13} { VAR :varno 1 :varattno 14 :vartype 21 :vartypmod -1  
:varlevelsup 0 :varnoold 1 

Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Shridhar Daithankar

On 17 Sep 2002 at 16:11, Christopher Kings-Lynne wrote:

  What I can't understand is the attitude of some people here. Yes,
  Microsoft are evil, but the bottom line is, millions of people use
  Windows. Just look at the number of downloads for pgAdmin (shown at
  http://www.pgadmin.org/downloads/) - the last stable version has clocked
  up over 38,000 downloads, the preview I released just a couple of weeks
  ago, 2230 at the time of writing. I know from talking to some of the
  users that often people download copies for themselves and their
  colleagues, so we can probably assume there are actually 40,000+
  PostgreSQL users that use Windows reguarly enough to want pgAdmin. What
  happens if you add in the pgAccess/Windows users, Tora, or pgExplorer?
  How many of these people would want to run PostgreSQL on Windows as
  well?
 I actually think that the long-term survival of Postgres DEPENDS on our
 Win32 support.  Otherwise, we'll just get massacred by MySQL, MSSQL, Oracle
 and Firebird who do support Win32.

Let's move this to general.

But I disagree. History says that nobody can compete with Microsoft on a
Microsoft platform. Postgres will not be competing with either SQL Server or
Access. It would remain a toy database.

As for people using MySQL on Windows, I have a couple of colleagues here who
got things crawling under some heavy load, something like a 60GB database on
512MB Compaq workstations.

Let's leave it. The main point of focusing Postgres on Unix is not only that
Unix is proven/known to be robust and scalable, but that Unix is much more
standard to support across multiple OSes. Given how much Windows differs from
the unices at the API level, any serious effort to make PostgreSQL good
enough on Windows would be a mammoth task.

I haven't tried either port of postgres on windows but I would not bet on any 
of them.

 Users of Postgres are our lifeblood.  The more users we have the more

I agree, but even as of now not even 1% of users come to any of the Postgres
lists, in my estimate.

So if users are not providing their feedback, what's the point in open source? 
(Actually all those people do help postgres by publicising it but still 
feedback remains an important phase of open source software engineering..)

 developers we get, the more testing we get and the more likely we are to get
 money, corporate support, etc.  Our ODBC driver will also be improved.

I agree for ODBC but that can be done without giving much to postgresql windows 
port as well.

I understand windows port of postgresql remains very much important for people 
who want to evaluate it. But for some good evaluation, I would rather recommend 
them trying postgresql on linux rather than windows.

There are limits as to what PostgreSQL can do on Windows, and the PostgreSQL
development team probably can't do much about many of them..

No offense to anybody.. just some opinions..

Bye
 Shridhar

--
Albrecht's Law: Social innovations tend to the level of minimum tolerable well-
being.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Dave Page



 -Original Message-
 From: Christopher Kings-Lynne [mailto:[EMAIL PROTECTED]] 
 Sent: 17 September 2002 09:05
 To: Dave Page; Bruce Momjian
 Cc: Robert Treat; Justin Clift; Peter Eisentraut; Tom Lane; 
 Curt Sampson; PostgreSQL Hackers Mailing List
 Subject: RE: [HACKERS] PGXLOG variable worthwhile?
 
 
  I use PostgreSQL on Linux for production and XP for 
 development, and 
  am likely to continue that way. I've been beta testing the native 
  Win32 port of PostgreSQL as Justin has and the latest version is 
  fantastic - it runs as a service, osdb shows impressive results 
  compared to Cygwin PostgreSQL on the same system and it's a 
 breeze to 
  install, despite there being no installer yet.
 
 From where do we get this fabled Win32 port?
 

The call for testers (below) was originally posted to the Cygwin list.

Regards, Dave.

===

My company is actively working on a Native Windows Port of Postgres
based on 7.2.1.  This is the same group that Jan Wieck and Katie Ward
work with.  We are now at the stage that we need community involvement
to help work out the bugs.

We plan on contributing the code to the Postgres base, but
we want to make sure that most of the bugs have been worked 
out before doing so.

We are looking for people who have an application that currently runs on
Postgres 7.2 and who also have a Windows environment. If you would like
to get involved, please send me email at mailto:[EMAIL PROTECTED]

Thanks...

...MikeF
-- 
--
Mike Furgal - mailto:[EMAIL PROTECTED] - http://www.multera.com
--

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Zeugswetter Andreas SB SD


 I think there is some confusion here.  The runtime checks Andreas was
 talking about were allowing a double of 64.0 to cast to an int4 while
 disallowing 64.1 from being cast to an int4 because it is not a whole
 number.

Yes, and Tom's proposal for numbers is sufficient for constants, since the 64.0
will initially be an int2 and thus do the correct thing together with an int4,
and the 64.1 constant will be a numeric, and thus also do the correct thing with
all other types.

It is not sufficient for the optimizer for joins though, since it cannot use the 
int4 index when confronted with where tab1.int4col = tab2.numericcol.
Here only a runtime (non-aborting) check would help.
Maybe this could be overcome if the index access (or something in between) would allow
a numeric constant for an int4 index (if the numeric value does not cleanly convert
to int4, return no rows).

Andreas

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Dave Page



 -Original Message-
 From: Shridhar Daithankar 
 [mailto:[EMAIL PROTECTED]] 
 Sent: 17 September 2002 09:30
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: [HACKERS] PGXLOG variable worthwhile?
 
 
 On 17 Sep 2002 at 16:11, Christopher Kings-Lynne wrote:
 
 But I disagree. History says that nobody can compete with 
 microsoft on 
 microsoft platform. Postgres will not be competing with 
 either SQL Server or 
 access. It would remain as toy database..

Like Oracle?

 Let's leave it. The main point to focus postgres on unix is 
 not only because 
 unix is proven/known as robust and scalable, but unix is much 
 more standard to 
 support across multiple OS. The amount with which windows 
 differs from unices 
 on API level, any serious efforts to make postgresql good 
 enough on windows 
 whould be a mammoth task.

Maybe, but it's pretty much there now. The beta Win32 native port has
been performing excellently in the tests I've been able to throw at it,
certainly better than the Cygwin port.

 I haven't tried either port of postgres on windows but I 
 would not bet on any 
 of them.

The thing I wouldn't bet on is not the quality of the code produced by
the developers here, but Windows. Yes, it runs great here at the moment,
and has done for a while now but there's no guarantee that a new release
won't have a nasty bug. But that applies to the SQL user as well though.
Or for that matter the user of *any* other OS...

 There are limits as what postgresql can do on windows and 
 probably postgresql 
 development team can't do much about many of them..

The only real issue afaik with the current beta is that you can only run
one instance on a single server. That is the case with SQL Server as
well of course.

 No offense to anybody.. just some opinions..
 

Likewise.

Regards, Dave.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Christopher Kings-Lynne

 Let's leave it. The main point to focus postgres on unix is not
 only because
 unix is proven/known as robust and scalable, but unix is much
 more standard to
 support across multiple OS. The amount with which windows differs
 from unices
 on API level, any serious efforts to make postgresql good enough
 on windows
 whould be a mammoth task.

It's already been done - that's the whole point.

 So if users are not providing their feedback, what's the point in
 open source?

Users HAVE provided their feedback - they want Postgres on Windows.  What's
the point of open source if we can't accommodate them?  There are no problems
with economics, marketing, schedules, deadlines, nothing.  The reason that
people like Open Source is because they don't have to deal with some
monolithic company refusing to port to their platform just because it's too
hard.

Chris


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Dave Page



 -Original Message-
 From: Christopher Kings-Lynne [mailto:[EMAIL PROTECTED]] 
 Sent: 17 September 2002 09:49
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: [HACKERS] PGXLOG variable worthwhile?
 

 Users HAVE provided their feedback - they want Postgres on 
 Windows.  What's the point of open source if we can't 
 accomodate them?  There's no problems with economics, 
 marketing, schedules, deadlines, nothing.  The reason that 
 people like Open Source is because they don't have to deal 
 with some monolithic company refusing to port to their 
 platform just because it's too hard.

Which in this case is what puzzles me. We are only talking about a
simple GUC variable after all - I don't know for sure, but I'm guessing
it's not a huge effort to add one?

Regards, Dave.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] PostgreSQL 7.3: help on new CREATE TYPE

2002-09-17 Thread Jerome Chochon



Hi all.

I have read about the latest version of PostgreSQL (7.3 beta) and found that
the second version of CREATE TYPE is very interesting.

So we can create a type that looks like a RECORD.
For example:
CREATE TYPE adress AS (number int, street text, country VARCHAR);

But can I use this type in a table definition like this:
CREATE TABLE person (his_name VARCHAR, his_adress adress);

Can someone answer my question?

Thanks for your help.

Jérôme Chochon.




[HACKERS] Backend crash

2002-09-17 Thread Michael Paesold

Hi all,

I have a problem with inserting one million records into a table using a
function. This is for testing. The backend crashes on that every time,
although the error messages seem to be different. Can I post a full
description here or should that go to pgsql-general?

Thanks.

Best Regards,
Michael Paesold


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE

2002-09-17 Thread Jerome Chochon

Sorry if my English is not very good. ;-)

When I say that the second form of CREATE TYPE allows you to make a
RECORD-like type, I am not talking about the RECORD type in PL/pgSQL, but
about records as in programming languages like Ada or C (typedef struct).

So the real question is:
Can I use this new type like any other user type?
CREATE TABLE person (his_name VARCHAR, his_adress adress);
...where adress is CREATE TYPE adress AS (number int, street text, country
VARCHAR);

Thanks for your reply.




- Original Message -
From: Christopher Kings-Lynne [EMAIL PROTECTED]
To: Jerome Chochon [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Tuesday, September 17, 2002 11:25 AM
Subject: RE: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE


 Hi Jerome,

 The RECORD type is used for writing stored procedures and functions that
 return sets.

 eg.  CREATE FUNCTION foo() RETURNS setof adress
 AS '...';

 Sort of thing...

 Chris
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of Jerome Chochon
 Sent: Tuesday, 17 September 2002 5:17 PM
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED];
 [EMAIL PROTECTED]
 Subject: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE


 Hi all.

 I have read about the latest version of PostgreSQL (7.3 beta) and found that
 the second version of CREATE TYPE is very interesting.

 So we can create a type that looks like a RECORD.
 For example:
 CREATE TYPE adress AS (number int, street text, country VARCHAR);

 But can I use this type in a table definition like this:
 CREATE TABLE person (his_name VARCHAR, his_adress adress);

 Can someone answer my question?

 Thanks for your help.

 Jérôme Chochon.



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Bug: COPY IN doesn't test domain constraints

2002-09-17 Thread Rod Taylor

On Mon, 2002-09-16 at 17:54, Tom Lane wrote:
 In CVS tip:
 
 regression=# create domain nnint int not null;
 CREATE DOMAIN

Ok, I'll take a look at this today.

Thanks


-- 
  Rod Taylor


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Tom Lane

Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
 When those are really truncated ESQL/C needs to set a warning in sqlca.sqlwarn
 though, thus I think the second signature should also have an output flag to tell 
 whether truncation actually occurred.
 Maybe this should be kept for a protocol change though, since I would not think
 a NOTICE would be suitable here. 

Again, I don't want to invent output arguments for functions today ;-).

I agree that a NOTICE would be overkill, and that we need a protocol
change to implement completion conditions (sqlca.sqlwarn) properly.
When that happens, I think the explicit-cast paths in the coercion
routines can easily call the "set a completion condition" routine for
themselves; I see no reason to pass back the condition one level
before doing so.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Still big problems with pg_dump!

2002-09-17 Thread Gavin Sherry

On Tue, 17 Sep 2002, Wim wrote:

 
 
 Hello guys,
 
 I have still problems with dumping my database

Wim,

This kind of error is not generated as a result of on-disk data
corruption. It is probably a hardware error: memory, cache, or CPU. Can
you replace any of these components on the machine and attempt to
re-create?

Gavin


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Tom Lane

Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
 It is not sufficient for the optimizer for joins though, since it
 cannot use the int4 index when confronted with where tab1.int4col =
 tab2.numericcol.

For cross-datatype joins, the proposal as I sketched it would result in
the parser producing, eg,
where tab1.int4col::numeric = tab2.numericcol
that is, we'd have a single-datatype operator and a runtime cast in the
expression.

The optimizer is today capable of producing a nested loop with inner
indexscan join from this --- so long as the inner indexscan is on the
uncasted column (numericcol in this case).  It won't consider an int4
index on int4col for this.  This seems okay to me, actually.  It's
better than what you get now with a cross-datatype comparison operator
(neither side can be indexscanned since the operator matches neither
index opclass).

The major failing that needs to be rectified is that merge and hash
joins won't even be considered, because that code only works with
quals that are unadorned Var = Var.  I don't believe there is any
fundamental reason for this restriction.  As long as the top operator
is merge/hashjoinable, any expression should work on either side.
It's just a matter of cleaning up a few unwarranted shortcuts in the
planner.

But that work does need to be done before we can rip out all the
cross-datatype operators ... so this is definitely not happening
for 7.3 ...

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] One more problem with odbc driver

2002-09-17 Thread Michael Meskes

Hi,

I just talked to Sebastian again and we face another problem. The
software he's porting to PostgreSQL calls SQLProcedureColumns to get the
info about the input columns and the result. But the problem is that the
function in question returns an unnamed cursor. Before we start porting
the procedure/function we of course have to figure out how to tell the
app that the procedure will return a cursor, but we couldn't find
anything in the odbc specs.

As I do not have access to the MS SQL procedure as it is now, I cannot
try anything myself, but I'm willing to act as a channel for Sebastian
to talk to you. The fact of the matter is that I have never seen a function
returning a cursor in PostgreSQL so far.

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Tom Lane

Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
 2. Allow implicit up-coercion int2->int4->int8->numeric->float4->float8,
 but down-coercions aren't implicit except for assignment.

 How about int2->int4->int8->numeric->float4->float8->numeric ?
 That would also allow an upward path from float8.

Uh, what?  That seems logically impossible to me ... or at least it
would reintroduce exactly the problem we need to get away from: casts
between float4, float8, numeric would be considered equally good in
either direction, creating ambiguity about which operator to use.
How are you envisioning it would work exactly?

Perhaps I should clarify what I had in mind: because the parser only
considers one level of type coercion when choosing a function or
assigning to a result column, it's actually necessary to have all thirty
cast combinations between the six numeric types available in pg_cast.
My notation int2->int4->int8->numeric->float4->float8 is intended to
imply that of the thirty, these would be marked as implicitly coercible:

int2->int4
int2->int8
int2->numeric
int2->float4
int2->float8
int4->int8
int4->numeric
int4->float4
int4->float8
int8->numeric
int8->float4
int8->float8
numeric->float4
numeric->float8
float4->float8

while the fifteen reverse coercions would be assignment-only.
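
(For reference, in the three-way pg_cast representation discussed elsewhere
in this thread - 7.3's castcontext column: 'i' implicit, 'a' assignment-only,
'e' explicit-only - the int2 row of this table could be inspected with
something like the sketch below; the column names assume the catalog as it
was eventually committed.)

SELECT castsource::regtype, casttarget::regtype, castcontext
FROM pg_cast
WHERE castsource = 'int2'::regtype;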

If we allow any circularity then we will have pairs of types with both
cast pathways marked as implicit, which will leave the parser unable to
choose which operator to use.  This is exactly why numeric = float8
has failed in past versions: there are two alternatives that are equally
easy to reach.


 It would fix the constants issue, yes. How about where int2col=int4col 
 and it's indexability of int2col though ?

See my other response.  The current scheme of using a cross-datatype
operator isn't helpful for indexing such cases anyway...

 Also
 we get more nearly SQL-standard behavior in expressions that combine
 numeric with float4/float8: the preferred type will be float, which
 accords with the spec's notions of exact numeric vs. 
 approximate numeric.

 I do not understand the standard here.
 Especially the following would seem awkward if that would switch to
 approximate:
   set numericcol = numericcol * float4col; 

Well, the spec's notion is that combining an exact number and an
approximate number must yield an approximate result.  This logic
is hard to argue with, even though in our implementation it would
seem to make more sense for numeric to be the top of the hierarchy
on range and precision grounds.

Note that if you write, say,
set numericcol = numericcol * 3.14159;
my proposal would do the right thing since the constant would be typed
as numeric to start with and would stay that way.  To do what you want
with a float variable, it'd be necessary to write
set numericcol = numericcol * float4col::numeric;
which is sort of ugly; but no uglier than
set float4col = float4col * numericcol::float4;
which is what you'd have to write if the system preferred numeric and
you wanted the other behavior.

I too have been thinking for a long time that I didn't like following
the spec's lead on this point; but I am now starting to think that it's
not all that bad.  This approach to handling constants is *much* cleaner
than what we've done in the past, or even any of the unimplemented
proposals that I can recall.  The behavior you'd get with combinations
of float and numeric variables is, well, debatable; from an
implementor's point of view preferring a numeric result makes sense,
but it's much less clear that users would automatically think the same.
Given the spec's position, I am starting to think that preferring float
is the right thing to do.

BTW, I am thinking that we don't need the notion of preferred type at
all in the numeric category if we use this approach.  I have not worked
through the details for the other type categories, but perhaps if we
adopt similar systems of one-way implicit promotions in each category,
we could retire preferred types altogether --- which would let us get
rid of hardwired type categories, too.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Still big problems with pg_dump!

2002-09-17 Thread Wim

Hi Gavin,

Thnx for your response...

Maybe you know how to check memory on an Ultrasparc-II???

Cheers!

Wim.

Gavin Sherry wrote:

On Tue, 17 Sep 2002, Wim wrote:

  

Hello guys,

I have still problems with dumping my database



Wim,

This kind of error is not generated as a result of on-disk data
corruption. It is probably a hardware error: memory, cache, or CPU. Can
you replace any of these components on the machine and attempt to
re-create?

Gavin


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


  




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE

2002-09-17 Thread Tom Lane

Jerome Chochon [EMAIL PROTECTED] writes:
 Can I use this new type like other user-type ?
 CREATE TABLE person (his_name VARCHAR, his_adress adress);
 ...where adress is CREATE TYPE adress AS (number int, street text, country
 VARCHAR);

Not at the moment, though that might be an interesting direction to
pursue in future releases.  At present, the only thing such a type is
useful for is to define the argument or result type of a function that
takes or returns records.
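
A minimal sketch of what is supported today, reusing the adress type from the
question (the function name and values are made up):

CREATE TYPE adress AS (number int, street text, country varchar);

CREATE FUNCTION sample_adress() RETURNS adress AS '
    SELECT 12, ''Main Street''::text, ''France''::varchar;
' LANGUAGE 'sql';

SELECT * FROM sample_adress();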

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Tom Lane

I wrote:
 I think we must extend pg_cast's castimplicit column to a three-way value:
   * okay as implicit cast in expression (or in assignment)
   * okay as implicit cast in assignment only
   * okay only as explicit cast

 Question: what shall we call these alternatives in CREATE CAST?  The SQL99
 phrase AS ASSIGNMENT looks like it should mean the second, but I think
 the spec semantics require it to mean the first.  Ugh.  Perhaps AS
 ASSIGNMENT ONLY for the second case?

On looking more closely, SQL99 appears to define user-defined casts as
invocable *only* in explicit cast and assignment contexts.  Part 2 sez:

 4.13  Data conversions

 Explicit data conversions can be specified by a CAST operator.
 A CAST operator defines how values of a source data type are
 converted into a value of a target data type according to
 the Syntax Rules and General Rules of Subclause 6.22, cast
 specification. Data conversions between predefined data types
 and between constructed types are defined by the rules of this part
 of ISO/IEC 9075. Data conversions between one or more user-defined
 types are defined by a user-defined cast.

 A user-defined cast identifies an SQL-invoked function, called the
 cast function, that has one SQL parameter whose declared type is
 the same as the source data type and a result data type that is the
 target data type. A cast function may optionally be specified to
 be implicitly invoked whenever values are assigned to targets of
 its result data type. Such a cast function is called an implicitly
 invocable cast function.

This seems to mean that we can get away with defining AS ASSIGNMENT to
mean my second category (implicit in assignment only), and then picking
some more natural term for my first category (implicit anywhere).

I favor using IMPLICIT, which would make the syntax of CREATE CAST be

CREATE CAST (sourcetype AS targettype)
WITH FUNCTION funcname (argtype)
[ AS ASSIGNMENT | IMPLICIT ]

CREATE CAST (sourcetype AS targettype)
WITHOUT FUNCTION
[ AS ASSIGNMENT | IMPLICIT ]

Or possibly it should be AS IMPLICIT?
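
For instance, under the AS IMPLICIT spelling, the int4-to-int8 up-coercion
from the numeric hierarchy discussed elsewhere in this thread would be
declared roughly as follows (a sketch of the proposed syntax, not a committed
definition):

CREATE CAST (int4 AS int8)
    WITH FUNCTION int8(int4)
    AS IMPLICIT;

-- while the reverse, potentially lossy direction would be assignment-only:
CREATE CAST (int8 AS int4)
    WITH FUNCTION int4(int8)
    AS ASSIGNMENT;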

Comments?

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Lamar Owen

Having not seen anyone asking about the progress on the 7.3beta RPMset, I 
thought I would give a statement as to where things stand.

I am waiting the result of the pg_dump from 7.2.x to 7.3 restore discussion.  
The structure of the entire packaging depends upon knowing how the upgrade 
will be performed, since the rest of the packaging is just getting a good 
build, and excising the gborged clients, which will then have to have their 
own RPMs built.  But I'll get the core built first, then I'll work on the 
clients.

I have a basic build running, but it's not releasable.  I haven't had time to 
go through it with the properly fine-toothed comb that I want to as yet.  I 
would expect to be able to release an RPMset for beta 2 if that is a week or 
two off.

I'll try to keep everyone who cares updated periodically.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] An opportunity to prove PostgreSQL and our requirement of Case Study info

2002-09-17 Thread Lamar Owen

On Monday 16 September 2002 01:15 pm, Andrew Sullivan wrote:
 On Fri, Sep 13, 2002 at 03:28:12PM +1000, Justin Clift wrote:
  Afilias and LibertyRMS, the people who've been happily running the
  .info namespace on PostgreSQL servers, are the technical backend of
  the ISOC application for management of the .org namespace.

Talk about full circle.  See my e-mail address's domain to get the punch line.

In more than one way WGCR relies on PostgreSQL for mission-critical data 
storage.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Bruce Momjian

Tom Lane wrote:
 I favor using IMPLICIT, which would make the syntax of CREATE CAST be
 
 CREATE CAST (sourcetype AS targettype)
 WITH FUNCTION funcname (argtype)
 [ AS ASSIGNMENT | IMPLICIT ]
 
 CREATE CAST (sourcetype AS targettype)
 WITHOUT FUNCTION
 [ AS ASSIGNMENT | IMPLICIT ]
 
 Or possibly it should be AS IMPLICIT?

I think AS IMPLICIT would be better because we have other AS [var]
clauses.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] Numeric casting rules, take two

2002-09-17 Thread Tom Lane

I started by saying
 * Within a category, up (lossless) conversions are implicit, down
 (potentially lossy) conversions should be assignment-only.
but as always the devil is in the details.

After further thought, and the thread with Andreas about where we might go
with this in 7.4, I have developed a two-stage plan for dealing with
numeric casts.  We can make some progress in 7.3 but there is more work
that will have to be postponed.  Here's my current thoughts (plan first,
then discussion):

Do for 7.3:

* Set up pg_cast so that up-coercions in the series
int2->int4->int8->numeric->float4->float8 are implicit, while
down-coercions (the reverse direction of each of these fifteen casts)
are marked assignment-only.

* Modify make_const so that numeric literals are typed as the smallest
type that will hold them in the series int4, int8, numeric (as opposed
to the former behavior, which was int4, float8, numeric).

* Make only float8, not numeric, be a preferred type for category NUMERIC.

Do for 7.4:

* Change make_const so that numeric literals are typed as the smallest
type that will hold them in the series int2, int4, int8, numeric (ie,
add int2 to the possible set of initial datatypes for constants).

* Remove most cross-datatype operators (int2+int4, etc), expecting such
operations to be handled by an implicit cast and a single-datatype
operator instead.  This is necessary for comparison operators, because
we want operations like int4var = 42 to be coerced to int4-only
operations so that they are indexable.  It's optional for operators that
are never associated with indexes (like +), but I'm inclined to reduce
the code bulk and size of pg_proc (and pg_operator) by getting rid of as
much as we can.

* Fix planner to cope with merge and hash joins wherein the arguments
aren't plain Var nodes (must cope with Var + type promotion, and might
as well just take any expression).

* Develop similar promotion hierarchies for the other type categories.
See if we can't retire the notion of preferred type entirely.

Discussion:

The main point of the 7.3 changes is to create a consistent promotion scheme
for the numeric hierarchy.  By twiddling make_const, we can improve the
behavior for large integers and float-format constants: these will be
typed as int8 or numeric and then if necessary up-converted to numeric,
float4, or float8.  It happens that there are no cross-datatype operators
at present between int8 and numeric/float4/float8 nor between numeric and
float4/float8, so we will get the desired up-conversion and not selection
of a cross-datatype operator when such a constant is used with a numeric
or float variable.  In the existing code, an integer too large for int4
(but not too large for int8) would be initially typed as float8, thus
forcing us to allow float8->int8 as an implicit coercion to ensure
reasonable behavior for int8 constants.  So we must introduce int8 as
an allowed initial type for constants if we want to remove float8->int8
as an implicit coercion.  But we can get rid of float8 as an initial type,
which simplifies matters.
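
In other words, constants would end up typed like this (a sketch of the
intended behavior; pg_typeof is a later addition, used here only to make the
result visible):

SELECT pg_typeof(42), pg_typeof(123456789012345), pg_typeof(3.14159);
-- integer | bigint | numeric

so a large integer constant starts life as int8 rather than float8, and a
float-style constant starts as numeric.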

With these changes we can expect reasonable behavior for cases like
where numericvar = float-style-constant.  The behavior will not get
better for cases involving int2 or int8 variables compared to int-size
constants, but it won't get worse either.  These changes will also bring
us into line with the SQL spec concerning mixed float/numeric operations
(the result should be approximate, ie float).

With the additional changes for 7.4 we can expect to finally fix the
behavior for int2 and int8 variables as well: cases like where int2var =
42 will be indexable without having to explicitly cast the constant.


Comments?

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Bruce Momjian

Tom Lane wrote:
 Note that if you write, say,
   set numericcol = numericcol * 3.14159;
 my proposal would do the right thing since the constant would be typed
 as numeric to start with and would stay that way.  To do what you want
 with a float variable, it'd be necessary to write
   set numericcol = numericcol * float4col::numeric;
 which is sort of ugly; but no uglier than
   set float4col = float4col * numericcol::float4;
 which is what you'd have to write if the system preferred numeric and
 you wanted the other behavior.

I need a clarification.  In the non-assignment case, does:

WHERE numericcol = numericcol * 3.14159

evaluate numericcol * 3.14159 as a numeric?

And does:

WHERE 5.55 = numericcol * 3.14159

evaluate numericcol * 3.14159 as a numeric too?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Proposal for resolving casting issues

2002-09-17 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 I need a clarification.  In the non-assignment case, does:
   WHERE numericcol = numericcol * 3.14159
 evaluate numericcol * 3.14159 as a numeric?

Yup (given my proposed changes that is).

 And does:
   WHERE 5.55 = numericcol * 3.14159
 evaluate numericcol * 3.14159 as a numeric too?

Yup.  The context does not matter: when we have foo * bar, we are going
to decide which kind of * operator is meant without regard to
surrounding context.  It's very much a bottom-up process, and has to be.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Tom Lane

Lamar Owen [EMAIL PROTECTED] writes:
 I have a basic build running, but it's not releasable.  I haven't had time to
 go through it with the properly fine-toothed comb that I want to as yet.  I 
 would expect to be able to release an RPMset for beta 2 if that is a week or 
 two off.

Sounds good.  I think the earliest we could be ready for beta2 is the
end of this week; sometime next week may be more realistic.

Given that we'll be forcing an initdb for beta2 anyway, those who use
RPMs may be just as happy to have missed beta1.

 I am waiting the result of the pg_dump from 7.2.x to 7.3 restore discussion.

Right.  We clearly have to support loading of 7.2 dumps; the only issue
in my mind is exactly how we kluge that up ;-).  I just talked to Bruce
about this a little bit, and we came to the conclusion that there are
two plausible-looking paths:

1. Relax CREATE LANGUAGE to accept either LANGUAGE_HANDLER or OPAQUE as
the datatype of the function (ie, make it work more like CREATE TRIGGER
does).

2. Hack CREATE LANGUAGE so that if it's pointed at an OPAQUE-returning
function, it actually updates the recorded return type of the function
in pg_proc to say LANGUAGE_HANDLER.

If we go with #1 we're more or less admitting that we have to support
OPAQUE forever, I think.  If we go with #2, then dumps out of 7.3 or
later would be OPAQUE-free, and we could eventually remove OPAQUE a few
release cycles down the road.  So even though #2 looks mighty ugly,
I am leaning in that direction.

Whichever way we jump, I think the same behavior should be adopted for
all three contexts where OPAQUE is relevant: language handlers,
triggers, and user-defined-datatype I/O functions.  Either we accept
OPAQUE forever, or we proactively fix the function declarations when
an old dump is loaded.
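
For reference, the kind of declaration a 7.2 dump contains (the same form
appears verbatim in Hannu's attachment earlier in this digest) is:

-- 7.2-style dump fragment: the handler is declared RETURNS OPAQUE.
-- Option #1 keeps accepting this as-is; option #2 would update the stored
-- return type to LANGUAGE_HANDLER when the dump is loaded.
CREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE AS
    '/usr/lib/pgsql/plpgsql.so' LANGUAGE 'C';

CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler
    LANCOMPILER 'PL/pgSQL';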

Another interesting thought is that if we do the OPAQUE-to-HANDLER
update thing, we could at the same time coerce the stored path for
the PL's shared library into the preferred '$libdir/foo' format,
rather than the absolute-path form it's likely to have if we're dealing
with a pre-7.2 dump.  This would not help anything immediately (if you
got past the CREATE FUNCTION then you gave a valid shlib path) but it'd
very possibly save people trouble down the road.

Comments?

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Lamar Owen

On Tuesday 17 September 2002 03:59 pm, Tom Lane wrote:
 Lamar Owen [EMAIL PROTECTED] writes:
  as yet.  I would expect to be able to release an RPMset for beta 2 if
  that is a week or two off.

 Given that we'll be forcing an initdb for beta2 anyway, those who use
 RPMs may be just as happy to have missed beta1.

Hmmm. Any idea if any more initdb forcings are going to happen? :-)

  I am waiting the result of the pg_dump from 7.2.x to 7.3 restore
  discussion.

 Right.  We clearly have to support loading of 7.2 dumps; the only issue
 in my mind is exactly how we kluge that up ;-).  I just talked to Bruce
 about this a little bit, and we came to the conclusion that there are
 two plausible-looking paths:

 Comments?

From a user/packager viewpoint: the exact mechanics on the internal level, 
while nice to know (so that I know what to look for in bug reports), are 
rather irrelevant when it comes to 'how do I package?'.   What I am looking 
at is whether the user will have to run 7.3's pg_dump in order to migrate 
older data.  If so I, and Oliver, will have to kludge up dependencies and 
linkages in ways that I'm not happy with, but can do if need be.  And 
migration is 'need be' if ever there were 'need be'.

I think that I will be able to just build a 'postgresql-olddump' package or 
similar that contains 7.3's pg_dump in a 7.2.2-friendly form, and let the 
various distributors worry about building that for older system libraries. 
:-)  This is just a possibility -- it may not be nearly as hard as I fear it 
will be -- best case is I do virtually nothing and let people upgrade the 
postgresql-libs and the main package (which includes pg_dump anyway), leaving 
the existing postgresql-server package in place.  They then dump, erase the 
old server package, and install the new server package.  I have disabled rpm 
upgrades for the server subpackage as of 7.2.2, so that portion I know is 
doable.  I'll just have to try it.  I may be overanalyzing the situation. :-)
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] Old pgsql versions

2002-09-17 Thread Bruce Momjian

Marc needs old PostgreSQL source code tarballs for our ftp site.  We
have >= 6.1, and I have postgres95 1.01 and postgres 4.2.  Does anyone
have 6.0.X and 1.0X?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Tom Lane

Lamar Owen [EMAIL PROTECTED] writes:
 ... What I am looking 
 at is whether the user will have to run 7.3's pg_dump in order to migrate 
 older data.

AFAIK this is not *necessary*, though it may be *helpful*.  Aside from
the OPAQUE issue, which we will fix one way or another, I am aware of
these issues for loading a pre-7.3 dump:

* A reloaded dump will fail to GRANT EXECUTE TO PUBLIC on functions,
  likewise fail to GRANT USAGE TO PUBLIC on procedural languages.
  This may not bother some people, but for those it does bother,
  it's not that hard to issue the GRANTs manually after loading the dump
  (see the sketch just after this list).

* A reloaded dump will not create dependencies between serial columns
  and sequence objects, nor between triggers and foreign key
  constraints, thus 7.3's nifty new support for DROP CONSTRAINT won't
  work, nor will dropping a table make its associated sequences go away.
  However, this can be boiled down to saying that it still works like it
  did before.
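
As a sketch, those manual GRANTs look something like this (the function
signature and language name are just examples):

    GRANT EXECUTE ON FUNCTION myfunc(integer) TO PUBLIC;
    GRANT USAGE ON LANGUAGE plpgsql TO PUBLIC;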

There are of course the same old same old issues regarding pg_dump's
ability to choose a good dump order, but these are not worse than before
either, and would bite you just as badly if you tried to reload your
dump into 7.2.*.

Using 7.3's pg_dump would help you with the GRANT issue, but AFAIR it
won't do anything for reconstructing serial or foreign-key dependencies.
And it definitely wouldn't help on the ordering issue.  So it's probably
not worth the trouble if you can't do it trivially, which you can't in
an RPM-upgrade context.  (We do advise it for people who are building
from source, since it's not difficult for them.)

In short, I'm not sure why you and Oliver are so unhappy.  We may not
have made the world better than before for upgrade scenarios, but I
don't think we've made it worse either.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] a quick question

2002-09-17 Thread scott.marlowe

Hey, me and a few other folks were having a discussion off list, and the
subject of inserts and missing columns came up.  You may remember the point
in the "I'm done" post by Bruce.  It said:

 o -Disallow missing columns in INSERT ... VALUES, per ANSI
  What is this, and why is it marked done?

We used to allow INSERT INTO tab VALUES (...) to skip the trailing
columns and automatically fill in null's.  That is fixed, per ANSI.


Anyway, I just tested it on 7.3b1 and I can still do an insert with the 
columns missing and it fills in defaults or nulls, with defaults being the 
preference.

So, are we gonna make postgresql throw an error when someone tries to 
submit an insert with too few columns to match up to the implicit column 
list, or not?

This just seems like a change designed to piss off users to me, but I can 
see where it does encourage better query crafting.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] a quick question

2002-09-17 Thread Rod Taylor

On Tue, 2002-09-17 at 16:44, scott.marlowe wrote:
 Hey, me and a few other folks were having a discussion off list, and the 
 subject of inserts and missing columns came up. you may remember the point 
 in the I'm done post by Bruce.  It said:
 
  o -Disallow missing columns in INSERT ... VALUES, per ANSI
   What is this, and why is it marked done?
 
 We used to allow INSERT INTO tab VALUES (...) to skip the trailing
 columns and automatically fill in null's.  That is fixed, per ANSI.
 
 So, are we gonna make postgresql throw an error when someone tries to 
 submit an insert with too few columns to match up to the implicit column 
 list, or not?

There was a vote to keep previous behaviour when the column list wasn't
supplied, so it's not to ANSI spec, it's to our improved version ;)

INSERT INTO (...) VALUES (...) will not allow you to skip value entries,
but the keyword DEFAULT is available now, so it shouldn't be much of an
issue.
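
A quick illustration of the behavior described above (the table is made up):

    CREATE TABLE tab (a int, b int, c int DEFAULT 99);
    INSERT INTO tab VALUES (1, 2);            -- no column list: trailing column still filled in
    INSERT INTO tab (a, b, c) VALUES (1, 2);  -- explicit list: value count must now match
    INSERT INTO tab VALUES (1, 2, DEFAULT);   -- the DEFAULT keyword is now allowed in VALUES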

 
-- 
  Rod Taylor


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Rod Taylor

 Using 7.3's pg_dump would help you with the GRANT issue, but AFAIR it
 won't do anything for reconstructing serial or foreign-key dependencies.

The below perl script can help with both of those.

http://www.rbt.ca/postgresql/upgrade/upgrade.tar.gz

Explanation URL:
http://www.rbt.ca/postgresql/upgrade.shtml


Doesn't deal with DEFERRED triggers.

-- 
  Rod Taylor


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Bruce Momjian

Dave Page wrote:
 Which in this case is what puzzles me. We are only talking about a
 simple GUC variable after all - I don't know for sure, but I'm guessing
 it's not a huge effort to add one?

Can we get agreement on that?  A GUC for pg_xlog location?  Much cleaner
than -X, doesn't have the problems of possible accidental use, and does
allow pg_xlog moving without symlinks, which some people don't like?

If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Bruce Momjian

Tom Lane wrote:
 Lamar Owen [EMAIL PROTECTED] writes:
  ... What I am looking 
  at is whether the user will have to run 7.3's pg_dump in order to migrate 
  older data.
 
 AFAIK this is not *necessary*, though it may be *helpful*.  Aside from
 the OPAQUE issue, which we will fix one way or another, I am aware of
 these issues for loading a pre-7.3 dump:
 
 * A reloaded dump will fail to GRANT EXECUTE TO PUBLIC on functions,
   likewise fail to GRANT USAGE TO PUBLIC on procedural languages.
   This may not bother some people, but for those it does bother,
   it's not that hard to issue the GRANTs manually after loading the dump.
 
 * A reloaded dump will not create dependencies between serial columns
   and sequence objects, nor between triggers and foreign key
   constraints, thus 7.3's nifty new support for DROP CONSTRAINT won't
   work, nor will dropping a table make its associated sequences go away.
   However, this can be boiled down to saying that it still works like it
   did before.

These seem like poor reasons for using 7.3 pg_dump on 7.2 databases. 
Item #1 can be easily fixed via an SQL command issued after the load, if
desired, and #2 is really not something specific to the RPM issue.  

We may be better writing a script that uses the names of the
triggers/sequences to create dependency information automatically.  Has
anyone looked at that?
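
As a rough sketch, such a script could start from the implicit
<table>_<column>_seq naming that SERIAL uses, along these lines (only a
starting point; the naming assumption can be fooled by renamed objects):

    SELECT t.relname AS tablename, a.attname AS colname, s.relname AS seqname
      FROM pg_class t, pg_attribute a, pg_class s
     WHERE a.attrelid = t.oid
       AND t.relkind = 'r'
       AND a.attnum > 0
       AND s.relkind = 'S'
       AND s.relname::text = t.relname::text || '_' || a.attname::text || '_seq';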

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Nigel J. Andrews

On Tue, 17 Sep 2002, Bruce Momjian wrote:

 Dave Page wrote:
  Which in this case is what puzzles me. We are only talking about a
  simple GUC variable after all - I don't know for sure, but I'm guessing
  it's not a huge effort to add one?
 
 Can we get agreement on that?  A GUC for pg_xlog location?  Much cleaner
 than -X, doesn't have the problems of possible accidental use, and does
 allow pg_xlog moving without symlinks, which some people don't like?
 
 If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.

GUC instead of -X or PGXLOG : yes.

However, how is that going to work if tablespaces are introduced in 7.4. Surely
the same mechanism for tablespaces would be used for pg_xlog. As the tablespace
mechanism hasn't been determined yet, as far as I know, wouldn't it be best to
see what happens there before creating the TODO item for the log?


-- 
Nigel J. Andrews
Director

---
Logictree Systems Limited
Computer Consultants


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Justin Clift

Nigel J. Andrews wrote:
snip
 However, how is that going to work if tablespaces are introduced in 7.4. Surely
 the same mechanism for tablespaces would be used for pg_xlog. As the tablespace
 mechanism hasn't been determined yet, as far as I know, wouldn't it be best to
 see what happens there before creating the TODO item for the log?

It's a Yes from me of course.

Would a TODO list entry of something like "Add a GUC xlog_path variable"
be broad enough that people keep it in mind when tablespaces are created,
but it doesn't get forgotten about by not being on the list?

:-)

Regards and best wishes,

Justin Clift


 --
 Nigel J. Andrews
 Director
 
 ---
 Logictree Systems Limited
 Computer Consultants
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Bruce Momjian

Nigel J. Andrews wrote:
 On Tue, 17 Sep 2002, Bruce Momjian wrote:
 
  Dave Page wrote:
   Which in this case is what puzzles me. We are only talking about a
   simple GUC variable after all - I don't know for sure, but I'm guessing
   it's not a huge effort to add one?
  
  Can we get agreement on that?  A GUC for pg_xlog location?  Much cleaner
  than -X, doesn't have the problems of possible accidental use, and does
  allow pg_xlog moving without symlinks, which some people don't like?
  
  If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.
 
 GUC instead of -X or PGXLOG : yes.
 
 However, how is that going to work if tablespaces are introduced in 7.4. Surely
 the same mechanism for tablespaces would be used for pg_xlog. As the tablespace
 mechanism hasn't been determined yet, as far as I know, wouldn't it be best to
 see what happens there before creating the TODO item for the log?

Good point.  How about:

Allow pg_xlog to be moved without symlinks

That is vague enough.  Added to TODO.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Rod Taylor

I forget, is it possible to make a GUC that cannot be changed during
runtime?

If so, then I vote yes, otherwise, there is a problem if someone tries.


On Tue, 2002-09-17 at 17:07, Bruce Momjian wrote:
 Dave Page wrote:
  Which in this case is what puzzles me. We are only talking about a
  simple GUC variable after all - I don't know for sure, but I'm guessing
  it's not a huge effort to add one?
 
 Can we get agreement on that?  A GUC for pg_xlog location?  Much cleaner
 than -X, doesn't have the problems of possible accidental use, and does
 allow pg_xlog moving without symlinks, which some people don't like?

-- 
  Rod Taylor


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Bruce Momjian

Rod Taylor wrote:
 I forget, is it possible to make a GUC that cannot be changed during
 runtime?

Yes, you can set it up so it can only be changed by the super-user and only
takes effect on a restart.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] PostgreSQL 7.3: help on new CREATE TYPE

2002-09-17 Thread Christopher Kings-Lynne

 When I say that the second form of CREATE TYPE allow you to make
 RECORD type
 like RECORD, i don't want to speak about the record in PlPgsql but RECORD
 from programming language like ADA or C (typedef struct).

 So the real question is:
 Can I use this new type like other user-type ?
 CREATE TABLE person (his_name VARCHAR, his_adress adress);
 ...where adress is CREATE TYPE adress AS (number int, street text, country
 VARCHAR);

No.

By the way - the pgsql-announce list is not for asking questions in!

Chris


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Christopher Kings-Lynne

 * A reloaded dump will not create dependencies between serial columns
   and sequence objects, nor between triggers and foreign key
   constraints, thus 7.3's nifty new support for DROP CONSTRAINT won't
   work, nor will dropping a table make its associated sequences go away.
   However, this can be boiled down to saying that it still works like it
   did before.

Remember that Rod Taylor's written a script to fix at least the foreign key
issue above.  I think it'd be neat if that script were perfected and did
serials as well and then we could recommend its use...

Chris


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] inquiry

2002-09-17 Thread



Hi, all experts of PostgreSQL.
I want to learn the kernel of PostgreSQL.  When I debug it with gdb, I
come to a problem that I can't solve.  In BackendStartup() it forks a new
child process.  I can't trace into the new child process with attach; it
says 'Operation not permitted'.
I really need your help.
Thank you very much.
Jinqiang Han


Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Lamar Owen

On Tuesday 17 September 2002 04:40 pm, Tom Lane wrote:
 Lamar Owen [EMAIL PROTECTED] writes:
  ... What I am looking
  at is whether the user will have to run 7.3's pg_dump in order to migrate
  older data.

 AFAIK this is not *necessary*, though it may be *helpful*.  Aside from
 the OPAQUE issue, which we will fix one way or another, I am aware of
 these issues for loading a pre-7.3 dump:

Helpful is good.  If it proves not too hard I'm going to try that route.  And 
the more I think about the less difficult I think it will be.  I've about 
given up on the upgrade ever really being easy.

 In short, I'm not sure why you and Oliver are so unhappy.  We may not
 have made the world better than before for upgrade scenarios, but I
 don't think we've made it worse either.

It's a long-term pain, Tom.  With brief paroxysms worthy of appendicitis.

I've been caught by it -- I lost data due to bad RPM packaging coupled with 
the dump/restore cycle.  That's what motivated me to start doing this in the 
first place, three years ago.

I just want people to not get bit in a bad way and decide they don't want to 
use PostgreSQL after all.  And with the new features of 7.3, lots of users 
who might have begun with 7.2 are going to want to upgrade -- but if it's too 
painful  Sorry, it's just a sore spot for me, this whole upgrade issue.  
I know Oliver has the same problem, with slightly different presentation.

I'm not meaning to be a pain; just trying to prevent some for someone else.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Christopher Kings-Lynne

 I just want people to not get bit in a bad way and decide they
 don't want to
 use PostgreSQL after all.  And with the new features of 7.3, lots
 of users
 who might have begun with 7.2 are going to want to upgrade -- but
 if it's too
 painful  Sorry, it's just a sore spot for me, this whole
 upgrade issue.
 I know Oliver has the same problem, with slightly different presentation.

IS there any solution to Postgres's upgrade problems?  I mean, ever?  With
the complex catalog design, etc - how is it ever possible for us to do a
plug-n-play major version upgrade (assuming the datafile format doesn't change
anymore)?

How does pg_upgrade work?

Chris


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Bruce Momjian

Christopher Kings-Lynne wrote:
  I just want people to not get bit in a bad way and decide they
  don't want to
  use PostgreSQL after all.  And with the new features of 7.3, lots
  of users
  who might have begun with 7.2 are going to want to upgrade -- but
  if it's too
  painful  Sorry, it's just a sore spot for me, this whole
  upgrade issue.
  I know Oliver has the same problem, with slightly different presentation.
 
 IS there any solution to Postgres's upgrade problems?  I mean, ever?  With
 the complex catalog design, etc - how is it every possible for us to do a
 plug-n-play major version upgrade (assuming datafile format doesn't change
 anymore)
 
 How does pg_upgrade work?

pg_upgrade sort of worked for 7.2 but I got to it too late and I didn't
properly expand the pg_clog files.  In 7.3, the file format has changed.
If we don't change the format for 7.4, I can do it, but I have to add
schema stuff to it.  Shouldn't be too hard.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Christopher Kings-Lynne

  How does pg_upgrade work?
 
 pg_upgrade sort of worked for 7.2 but I got to it too late and I didn't
 properly expand the pg_clog files.  In 7.3, the file format has changed.
 If we don't change the format for 7.4, I can do it, but I have to add
 schema stuff to it.  Shouldn't be too hard.

I mean - how does it actually _work_?

Chris


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Lamar Owen

On Tuesday 17 September 2002 10:27 pm, Christopher Kings-Lynne wrote:
 Lamar Owen wrote:
  Sorry, it's just a sore spot for me, this whole
  upgrade issue.

 IS there any solution to Postgres's upgrade problems?  I mean, ever?  With
 the complex catalog design, etc - how is it every possible for us to do a
 plug-n-play major version upgrade (assuming datafile format doesn't change
 anymore)

While I should know better, I'm going to reply. :-)

The system catalog has poor system-user separation.  Better separation might 
help the issue.  Putting all the stuff that belongs to system into the 
'system' catalog and then putting the user's customizations into a 'user' 
catalog, with a set of SQL scripts to upgrade the user portion if columns or 
other metadata changed in the user portion.  This statement is vastly 
simplified.  Then you can blow out the system portion and reinit it without 
disturbing the user data and metadata.  The problem I believe would be 
enforcing a strict enough demarcation to make that possible.  Then there's 
the nontrivial issue of where the point of demarcation lies.  But I should 
let someone better versed in the system catalog structure answer that.

heresy
I'd give up a few extensibility features for solid upgrading.  If I didn't 
have so much invested in PostgreSQL I might take a hard look at MySQL 4, 
since data migration has heretofore been one of their few real strengths.  
But I've got three years of RPM maintenance and five years of infrastructure 
built on PostgreSQL, so migrating to something else isn't a real palatable 
option at this point.
/heresy

 How does pg_upgrade work?

If I am not mistaken pg_upgrade attempts to do just exactly what I described 
above, moving data tables and associated metadata out of the way, initdb, and 
move the data back, rebuilding the system catalog linkages into the user 
metadata as it goes. And it works in a state where there is mixed metadata.  
At least that's what I remember without looking at the source code to it -- 
the code is in contrib/pg_upgrade and is a shell script.  For laughs I have 
the source code in another window now, and it is rather involved, issuing a 
number of queries to gather the information to relink the user metadata back 
in.

It then vacuums so that losing the transaction log file (!!) isn't fatal to 
the upgrade.

It then stops postmaster and moves things out of the way, then an initdb is 
performed.  The schema is restored; the transaction statuses are restored, 
and data is moved back in, into the proper places.  Moving back into the 
proper places is nontrivial, and the existing code makes no attempt to 
rollback partial upgrades.  That failing could be fixed, however.

Then:
# Now that we have moved the WAL/transaction log files, vacuum again to
# mark install rows with fixed transaction ids to prevent problems on xid
# wraparound.

Like I said, it's involved.  I'm not sure it works for a 7.2.2 -> 7.3 upgrade.  

If the on-disk binary format has changed, tough cookie.  It won't help us, 
since it doesn't make any effort to convert data -- it's just moving it 
around and recreating the metadata linkages necessary.

Now if a binary data converter could be paired with what pg_upgrade is 
currently doing, it might fly.  But scattered in the code is the discouraging 
comment:
# Check for version compatibility.
# This code will need to be updated/reviewed for each new PostgreSQL release.

Keeping abreast of the changing formats and the other 'gotchas' is just about 
going to be a full-time job, since changes are made to the system catalogs, 
syntax, semantics, and data format with little regard as to how it will 
impact data migration.  IOW, migration/upgrading shouldn't be an afterthought 
if it's going to work right.

I wish (in a somewhat wistful, yet futile manner) that each change was 
accompanied by data migration strategies for that change, but I'm not holding 
my breath, since the core developers have more important things to do.  (Not 
being sarcastic -- just observing a fact).

Oh well.  Chris, you got me wound up again... :-(  I wish I had the time and 
funding to go after it, but I have a full-time job already as a broadcast 
engineer, and while we use PostgreSQL in a mission critical role here, I 
can't justify diverting other monies for this purpose.  Money is tight enough 
already.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Bruce Momjian


This is a better description than I could make. If you look at the
script it is very well commented so you should be able to see it works. 
Also, read the manual page first. 

In summary, doing any kind of data changes is quite involved (smaller
tuple header for 7.3) and because it has to be redone for every release,
it is quite a pain.  Also, considering commercial databases don't do
much better, I feel pretty OK about it.  However, we do make releases
more frequently than commercial folks, so the pain is more consistent.

MySQL hasn't changed their base table format in perhaps 10 years, so
yea, that is a real win for them.  Of course, they don't shoot out
features as fast as we do so that helps.  You could pretend you are
using MySQL and just not upgrade for 5 years.  ;-)

---

Lamar Owen wrote:
 On Tuesday 17 September 2002 10:27 pm, Christopher Kings-Lynne wrote:
  Lamar Owen wrote:
   Sorry, it's just a sore spot for me, this whole
   upgrade issue.
 
  IS there any solution to Postgres's upgrade problems?  I mean, ever?  With
  the complex catalog design, etc - how is it every possible for us to do a
  plug-n-play major version upgrade (assuming datafile format doesn't change
  anymore)
 
 While I should know better, I'm going to reply.:-)
 
 The system catalog has poor system-user separation.  Better separation might 
 help the issue.  Putting all the stuff that belongs to system into the 
 'system' catalog and then putting the user's customizations into a 'user' 
 catalog, with a set of SQL scripts to upgrade the user portion if columns or 
 other metadata changed in the user portion.  This statement is vastly 
 simplified.  Then you can blow out the system portion and reinit it without 
 disturbing the user data and metadata.  The problem I believe would be 
 enforcing a strict enough demarcation to make that possible.  Then there's 
 the nontrivial issue of where the point of demarcation lies.  But I should 
 let someone better versed in the system catalog structure answer that.
 
 heresy
 I'd give up a few extensibility features for solid upgrading.  If I didn't 
 have so much invested in PostgreSQL I might take a hard look at MySQL 4, 
 since data migration has heretofore been one of their few real strengths.  
 But I've got three years of RPM maintenance and five years of infrastructure 
 built on PostgreSQL, so migrating to something else isn't a real palatable 
 option at this point.
 /heresy
 
  How does pg_upgrade work?
 
 If I am not mistaken pg_upgrade attempts to do just exactly what I described 
 above, moving data tables and associated metadata out of the way, initdb, and 
 move the data back, rebuiding the system catalog linkages into the user 
 metadata as it goes. And it works in a state where there is mixed metadata.  
 At least that's what I remember without looking at the source code to it -- 
 the code is in contrib/pg_upgrade and is a shell script.  For laughs I have 
 the source code in another window now, and it is rather involved, issuing a 
 number of queries to gather the information to relink the user metadata back 
 in.
 
 It then vacuums so that losing the transaction log file (!!) isn't fatal to 
 the upgrade.
 
 It then stops postmaster and moves things out of the way, then an initdb is 
 performed.  The schema is restored; the transaction statuses are restored, 
 and data is moved back in, into the proper places.  Moving back into the 
 proper places is nontrivial, and the existing code makes no attempt to 
 rollback partial upgrades.  That failing could be fixed, however.
 
 Then:
 # Now that we have moved the WAL/transaction log files, vacuum again to
 # mark install rows with fixed transaction ids to prevent problems on xid
 # wraparound.
 
 Like I said, it's involved.  I'm not sure it works for a 7.2.2- 7.3 upgrade.  
 
 If the on-disk binary format has changed, tough cookie.  It won't help us, 
 since it doesn't make any effort to convert data -- it's just moving it 
 around and recreating the metadata linkages necessary.
 
 Now if a binary data converter could be paired with what pg_upgrade is 
 currently doing, it might fly.  But scattered in the code is the discouraging 
 comment:
 # Check for version compatibility.
 # This code will need to be updated/reviewed for each new PostgreSQL release.
 
 Keeping abreast of the changing formats and the other 'gotchas' is just about 
 going to be a full-time job, since changes are made to the system catalogs, 
 syntax, semantics, and data format with little regard as to how it will 
 impact data migration.  IOW, migration/upgrading shouldn't be an afterthought 
 if it's going to work right.
 
 I wish (in a somewhat wistful, yet futile manner) that each change was 
 accompanied by data migration strategies for that change, but I'm not holding 
 my breath, since the core developers have more important things to do.  (Not 
 being sarcastic -- 

Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Christopher Kings-Lynne

 heresy
 I'd give up a few extensibility features for solid upgrading.  If
 I didn't
 have so much invested in PostgreSQL I might take a hard look at MySQL 4,
 since data migration has heretofore been one of their few real
 strengths.
 But I've got three years of RPM maintenance and five years of
 infrastructure
 built on PostgreSQL, so migrating to something else isn't a real
 palatable
 option at this point.
 /heresy

I do notice that I think MySQL requires you to run a script for some
upgrades...

Chris


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Oliver Elphick

On Tue, 2002-09-17 at 21:40, Tom Lane wrote:
 In short, I'm not sure why you and Oliver are so unhappy.  We may not
 have made the world better than before for upgrade scenarios, but I
 don't think we've made it worse either.

I'm unhappy because I know that I will get bug reports that I will have
to deal with.  They will take time and effort and would not be necessary
if we had a seamless upgrade path.  The more PostgreSQL gets used, the
more it will be used by 'clueless' users; they just install binary
packages and expect them to work.  That may currently be an unrealistic
expectation, but I would like it to become a goal of the project.  It
has always been my goal as Debian maintainer, but I don't think I can
achieve it for this release.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Give, and it shall be given unto you; good measure, 
  pressed down, and shaken together, and running over, 
  shall men pour into your lap. For by your standard of 
  measure it will be measured to in return.
   Luke 6:38 


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Tom Lane

Lamar Owen [EMAIL PROTECTED] writes:
 How does pg_upgrade work?
 [ pretty good description ]

You missed a key point, which is that pg_upgrade does not even try to
cope with version-to-version system catalog changes.  It assumes it can
use pg_dump to dump and reload the database schema.  So there is no
hope, ever, that it will be more reliable than pg_dump.  All pg_upgrade
tries to do is short-circuit the moving of the bulk data.

The bald fact of the matter is that we are still a good ways away from
the point where we might be willing to freeze the system catalogs.  PG
is evolving and improving by a substantial amount with every release,
and the implication of that is that there *will* be some upgrade pain.
If you don't like that ... well ... you're welcome to keep using PG 6.1
... but I haven't got a better answer.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Oliver Elphick

On Wed, 2002-09-18 at 04:22, Bruce Momjian wrote:
 
 In summary, doing any kind of data changes is quite involved (smaller
 tuple header for 7.3) and because it has to be redone for every release,
 it is quite a pain. 

Is it feasible to make a utility to rewrite each table, shortening the
headers and making any other necessary changes?  (Taking for granted
that the database has been vacuumed and the postmaster shut down.)

This could build up over successive releases, with an input section
appropriate to each older version and an output section for the current
version.  Then an upgrade from any older version to the current one
could be done by pg_upgrade.

Is this even worth considering?  

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Give, and it shall be given unto you; good measure, 
  pressed down, and shaken together, and running over, 
  shall men pour into your lap. For by your standard of 
  measure it will be measured to in return.
   Luke 6:38 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Bruce Momjian

Oliver Elphick wrote:
 On Tue, 2002-09-17 at 21:40, Tom Lane wrote:
  In short, I'm not sure why you and Oliver are so unhappy.  We may not
  have made the world better than before for upgrade scenarios, but I
  don't think we've made it worse either.
 
 I'm unhappy because I know that I will get bug reports that I will have
 to deal with.  They will take time and effort and would not be necessary
 if we had a seamless upgrade path.

This last line gave me a chuckle.  It is like software wouldn't be
necessary if computers could read people's minds.  :-)

The issue with modifying the data files is that if we have to modify the
large binary data file we may as well just dump/reload the data.  If we
don't change the on-disk format for 7.4 I will try again to make
pg_upgrade work.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Schemas not available for pl/pgsql %TYPE....

2002-09-17 Thread Bruce Momjian


Does pl/python even have a DECLARE section that can mimic the data type
of an existing table column?

---

Greg Copeland wrote:
-- Start of PGP signed section.
 Does anyone know if such effort is also required to pl/python to become
 schema aware?
 
 Regards,
 
   Greg Copeland
 
 
 On Wed, 2002-09-11 at 19:24, Bruce Momjian wrote:
  
  Patch applied.  Thanks.
  
  ---
  
  
  Joe Conway wrote:
   Tom Lane wrote:
Sean Chittenden [EMAIL PROTECTED] writes:

   ::sigh:: Is it me or does it look like all
   of pl/pgsql is schema un-aware (ie, all of the declarations).  -sc


Yeah.  The group of routines parse_word, parse_dblword, etc that are
called by the lexer certainly all need work.  There are some
definitional issues to think about, too --- plpgsql presently relies on
the number of names to give it some idea of what to look for, and those
rules are probably all toast now.  Please come up with a sketch of what
you think the behavior should be before you start hacking code.
   
   Attached is a diff -c format proposal to fix this. I've also attached a short 
   test script. Seems to work OK and passes all regression tests.
   
   Here's a breakdown of how I understand plpgsql's Special word rules -- I 
   think it illustrates the behavior reasonably well. New functions added by this 
   patch are plpgsql_parse_tripwordtype and plpgsql_parse_dblwordrowtype:
   
   
    Identifiers (represents)                  parsing function
    ---------------------------------------------------------------------
    identifier                                plpgsql_parse_word
        tg_argv
        T_LABEL     (label)
        T_VARIABLE  (variable)
        T_RECORD    (record)
        T_ROW       (row)

    identifier.identifier                     plpgsql_parse_dblword
        T_LABEL
            T_VARIABLE  (label.variable)
            T_RECORD    (label.record)
            T_ROW       (label.row)
        T_RECORD
            T_VARIABLE  (record.variable)
        T_ROW
            T_VARIABLE  (row.variable)

    identifier.identifier.identifier          plpgsql_parse_tripword
        T_LABEL
            T_RECORD
                T_VARIABLE  (label.record.variable)
            T_ROW
                T_VARIABLE  (label.row.variable)

    identifier%TYPE                           plpgsql_parse_wordtype
        T_VARIABLE
            T_DTYPE  (variable%TYPE)
        T_DTYPE      (typname%TYPE)

    identifier.identifier%TYPE                plpgsql_parse_dblwordtype
        T_LABEL
            T_VARIABLE
                T_DTYPE  (label.variable%TYPE)
        T_DTYPE          (relname.attname%TYPE)

    new
    identifier.identifier.identifier%TYPE     plpgsql_parse_tripwordtype
        T_DTYPE  (nspname.relname.attname%TYPE)

    identifier%ROWTYPE                        plpgsql_parse_wordrowtype
        T_DTYPE  (relname%ROWTYPE)

    new
    identifier.identifier%ROWTYPE             plpgsql_parse_dblwordrowtype
        T_DTYPE  (nspname.relname%ROWTYPE)


    Parameters - parallels the above
    ---------------------------------------------------------------------
    $#                                        plpgsql_parse_word
    $#.identifier                             plpgsql_parse_dblword
    $#.identifier.identifier                  plpgsql_parse_tripword
    $#%TYPE                                   plpgsql_parse_wordtype
    $#.identifier%TYPE                        plpgsql_parse_dblwordtype
    $#.identifier.identifier%TYPE             plpgsql_parse_tripwordtype
    $#%ROWTYPE                                plpgsql_parse_wordrowtype
    $#.identifier%ROWTYPE                     plpgsql_parse_dblwordrowtype
   
   Comments?
   
   Thanks,
   
   Joe
  
   Index: src/pl/plpgsql/src/pl_comp.c
   

Re: [HACKERS] Interesting results using new prepared statements

2002-09-17 Thread Bruce Momjian

Tom Lane wrote:
  But I am not
  sure how to find out what the execution plan is for a prepared
  statement, since EXPLAIN doesn't work for a prepared statement (i.e.
  EXPLAIN EXECUTE preparedStatementName, doesn't work).
 
 Hmmm --- I can see the usefulness of that, but it looks like a new
 feature and hence verboten during beta.  Maybe a TODO for 7.4?

Added to TODO:

o Allow EXPLAIN EXECUTE to see prepared plans
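
For reference, the sort of thing this would allow (the statement name and
query are made up):

    PREPARE q (integer) AS SELECT * FROM foo WHERE id = $1;
    EXPLAIN EXECUTE q (42);   -- not accepted today; this is what the TODO asks for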

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Lamar Owen

On Tuesday 17 September 2002 11:22 pm, Bruce Momjian wrote:
 This is a better description tha[n] I could make. If you look at the
 script it is very well commented so you should be able to see it works.
 Also, read the manual page first.

I don't know how, but this time looking at the script, I just grokked it.  
Maybe that's because it finally clicked in my mind what was happening; 
regardless, thanks for the compliment;  feel free to use that, edited as 
necessary, in any documentation you might desire.

But you are certainly correct about the comments...some of which are more than 
a little tongue in cheek...
# Strip off the trailing directory name and store our data there
# in the hope we are in the same filesystem so 'mv' works.

:-)

 However, we do make releases
 more frequently than commercial folks, so the pain is more consistent.

Well, for me and Oliver it comes in waves -- every major release has its 
paroxysm.  Then things cool off a little until next cycle.  These one year 
cycles have, in that way, been a good thing. :-P

You know, if the featureset of the new releases wasn't so _seductive_ it 
wouldn't be nearly as big of a problem... 

 You could pretend you are
 using MySQL and just not upgrade for 5 years.  ;-)

Don't say that too loudly, or my production 6.5.3 database that backends the 
larger portion of my intranet will hear you...  I'm just now moving the whole 
shooting match over to 7.2.2 as part of our delayed website redesign to use 
OpenACS.  That dataset started with 6.1.2 over five years ago, and it was the 
6.2.1-6.3.2 fiasco Red Hat created (by giving no warning that 5.1 had 6.3.2 
(5.0 had 6.2.1)) that got my dander up the first time.  I lost a few thousand 
records in that mess, which are now moot but then was a bad problem.  Since 
there wasn't an official Red Hat RPM for 6.1.2, that installation was from 
source and didn't get obliterated when I moved from Red Hat 4.2 to 5.0.  I 
was able to run both 6.1.2 and 6.2.1 concurrently, and the migration went 
smoothly -- but there were less than ten thousand records at that point.

So I _do_ have a three-year old database sitting there.  Rock solid except for 
one or two times of wierd vacuum/pg_dump interactions, solved by making them 
sequential.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Lamar Owen

On Tuesday 17 September 2002 11:51 pm, Tom Lane wrote:
 Lamar Owen [EMAIL PROTECTED] writes:
  How does pg_upgrade work?
  [ pretty good description ]

 You missed a key point, which is that pg_upgrade does not even try to
 cope with version-to-version system catalog changes.  It assumes it can
 use pg_dump to dump and reload the database schema.  So there is no
 hope, ever, that it will be more reliable than pg_dump.  All pg_upgrade
 tries to do is short-circuit the moving of the bulk data.

Yes, this is a key point and one that shouldn't be overlooked.  If the 
metadata belonging to the user's data didn't have to be pg_dumped, but was 
decoupled somewhat from the system metadata about types, operators, classes, 
and the like, the schema (great, another overloaded term) wouldn't need 
dumping but would travel with its data.

 The bald fact of the matter is that we are still a good ways away from
 the point where we might be willing to freeze the system catalogs.  

Not talking about a freeze.  Talking about separation of system/feature 
metadata from user metadata that wouldn't change in the upgrade anyway -- 
table names, fields, user types, views, triggers, etc, that belong to this 
database and not to the installation as a whole.  If columns need changed or 
added to the user data's metadata, have the upgrade script run the 
appropriate ALTER commands and UPDATES necessary.  The hard parts, I know, 
are the details behind the broad 'appropriate'.

 PG
 is evolving and improving by a substantial amount with every release,
 and the implication of that is that there *will* be some upgrade pain.

Why is it a given conclusion?  It should not be axiomatic that 'there *will* 
be upgrade pain if we improve our features.'  That's fatalistic.

We have innovative solutions in PostgreSQL that solve some pretty hairy 
problems.  WAL.  MVCC.  The subselect code (made my day when I heard about 
that one -- but then had to wait seven months before Red Hat saw fit to 
provide an RPM that I wasn't expecting... the other reason I began RPM 
building, even though it was two cycles later before I got up the nerve to 
tackle it...).  The PL's. Foreign keys. TOAST (now that's a prime example of 
a 'sideways' solution to a head-on problem).

This is just a different challenge: how to keep the loosely dynamic system 
catalog structure while at the same time allowing the possibility of smooth 
data migration so people can more easily take advantage of the improved 
system catalog structure.  And yes I know that such a change is not for 7.3.  
Too late for that, and maybe too late for 7.4 too.

But unlike Bruce I winced at Oliver's last line -- it hit a little too close 
to home and to many multitudes of bug reports and nastygrams directed my way 
for something I have tried to kludge around in the past.  Yes, nastygrams, in 
the grand old alt.flame tradition.  When you maintain RPM's, you find 
yourself the point man for the entire project in some people's eyes.  The bug 
report about my RPM's trashing a fellow's RPM database was an extreme example 
of that.  I get two-three dozen e-mails a week that I redirect to the web 
site and/or the mailing lists.  I'm sure Oliver is nodding his head in 
understanding on this one.

I don't think seamless upgrading is a pipe dream.  And I think that dismissing 
it out of hand as 'impossible' is a self-fulfilling prophecy.

But I do think it won't work well if it's just tacked-on.

But, like Tom, I really don't have more of an answer than that.  I do 
understand pg_upgrade much better now, though.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Jan Wieck

Bruce Momjian wrote:
 
 Dave Page wrote:
  Which in this case is what puzzles me. We are only talking about a
  simple GUC variable after all - I don't know for sure, but I'm guessing
  it's not a huge effort to add one?
 
 Can we get agreement on that?  A GUC for pg_xlog location?  Much cleaner
 than -X, doesn't have the problems of possible accidental use, and does
 allow pg_xlog moving without symlinks, which some people don't like?
 
 If I can get a few 'yes' votes I will add it to TODO and do it for 7.4.

'yes' - make it one more GUC and done


Jan

-- 

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Jan Wieck

Nigel J. Andrews wrote:
 However, how is that going to work if tablespaces are introduced in 7.4. Surely
 the same mechanism for tablespaces would be used for pg_xlog. As the tablespace
 mechanism hasn't been determined yet, as far as I know, wouldn't it be best to
 see what happens there before creating the TODO item for the log?

No, tablespaces would have to be something DB specific, while the Xlog
is instance wide (instance == one postmaster == installation == whatever
you name that level).

My vision is that we start off with two tablespaces per database,
default and default_idx, which are subdirectories inside the
database directory. All (non-index-)objects created without explicitly
saying what tablespace they belong to automatically belong to default.
Indexes ... bla.

The tablespace catalog will have a column telling the physical location
of that directory. Moving it around will not be *that* easy, I guess,
because the UPDATE of that entry has to go hand in hand with the move of
all files in that damned directory. But that's another thing to sort out
later, IMHO.


Jan

-- 

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] [HACKERS] PGXLOG variable worthwhile?

2002-09-17 Thread Bruce Momjian

Jan Wieck wrote:
 Nigel J. Andrews wrote:
  However, how is that going to work if tablespaces are introduced in 7.4. Surely
  the same mechanism for tablespaces would be used for pg_xlog. As the tablespace
  mechanism hasn't been determined yet, as far as I know, wouldn't it be best to
  see what happens there before creating the TODO item for the log?
 
 No, tablespaces would have to be something DB specific, while the Xlog
 is instance wide (instance == one postmaster == installation == whatever
 you name that level).
 
 My vision is that we start off with two tablespaces per database,
 default and default_idx, which are subdirectories inside the
 database directory. All (non-index-)objects created without explicitly
 saying what tablespace they belong to automatically belong to default.
 Indexes ... bla.
 
 The tablespace catalog will have a column telling the physical location
 of that directory. Moving it around will not be *that* easy, I guess,
 because the UPDATE of that entry has to go hand in hand with the move of
 all files in that damned directory. But that's another thing to sort out
 later, IMHO.

Yes, the nifty trick was to use an lstat() from pg_dump to learn if it is a
symlink and if so, where it points to.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Numeric casting rules, take two

2002-09-17 Thread Bruce Momjian


Tom, do you want any TODO items from this?

---

Tom Lane wrote:
 I started by saying
  * Within a category, up (lossless) conversions are implicit, down
  (potentially lossy) conversions should be assignment-only.
 but as always the devil is in the details.
 
 After further thought, and the thread with Andreas about where we might go
 with this in 7.4, I have developed a two-stage plan for dealing with
 numeric casts.  We can make some progress in 7.3 but there is more work
 that will have to be postponed.  Here's my current thoughts (plan first,
 then discussion):
 
 Do for 7.3:
 
 * Set up pg_cast so that up-coercions in the series
int2 -> int4 -> int8 -> numeric -> float4 -> float8 are implicit, while
 down-coercions (the reverse direction of each of these fifteen casts)
 are marked assignment-only.
 
 * Modify make_const so that numeric literals are typed as the smallest
 type that will hold them in the series int4, int8, numeric (as opposed
 to the former behavior, which was int4, float8, numeric).
 
 * Make only float8, not numeric, be a preferred type for category NUMERIC.
 
 Do for 7.4:
 
 * Change make_const so that numeric literals are typed as the smallest
 type that will hold them in the series int2, int4, int8, numeric (ie,
 add int2 to the possible set of initial datatypes for constants).
 
 * Remove most cross-datatype operators (int2+int4, etc), expecting such
 operations to be handled by an implicit cast and a single-datatype
 operator instead.  This is necessary for comparison operators, because
 we want operations like int4var = 42 to be coerced to int4-only
 operations so that they are indexable.  It's optional for operators that
 are never associated with indexes (like +), but I'm inclined to reduce
 the code bulk and size of pg_proc (and pg_operator) by getting rid of as
 much as we can.
 
 * Fix planner to cope with merge and hash joins wherein the arguments
 aren't plain Var nodes (must cope with Var + type promotion, and might
 as well just take any expression).
 
 * Develop similar promotion hierarchies for the other type categories.
 See if we can't retire the notion of preferred type entirely.
 
 Discussion:
 
 The main point of the 7.3 changes is to create a consistent promotion scheme
 for the numeric hierarchy.  By twiddling make_const, we can improve the
 behavior for large integers and float-format constants: these will be
 typed as int8 or numeric and then if necessary up-converted to numeric,
 float4, or float8.  It happens that there are no cross-datatype operators
 at present between int8 and numeric/float4/float8 nor between numeric and
 float4/float8, so we will get the desired up-conversion and not selection
 of a cross-datatype operator when such a constant is used with a numeric
 or float variable.  In the existing code, an integer too large for int4
 (but not too large for int8) would be initially typed as float8, thus
forcing us to allow float8 -> int8 as an implicit coercion to ensure
 reasonable behavior for int8 constants.  So we must introduce int8 as
an allowed initial type for constants if we want to remove float8 -> int8
 as an implicit coercion.  But we can get rid of float8 as an initial type,
 which simplifies matters.
 
 With these changes we can expect reasonable behavior for cases like
 where numericvar = float-style-constant.  The behavior will not get
 better for cases involving int2 or int8 variables compared to int-size
 constants, but it won't get worse either.  These changes will also bring
 us into line with the SQL spec concerning mixed float/numeric operations
 (the result should be approximate, ie float).
 
 With the additional changes for 7.4 we can expect to finally fix the
 behavior for int2 and int8 variables as well: cases like where int2var =
 42 will be indexable without having to explicitly cast the constant.
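 
 For illustration, with a hypothetical table t that has an int2 column c
 and an index on that column:
 
     -- today the cast is needed for the index to be usable:
     SELECT * FROM t WHERE c = 42::int2;
     -- with the 7.4 changes the plain constant should be enough:
     SELECT * FROM t WHERE c = 42;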
 
 
 Comments?
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] Open 7.3 items

2002-09-17 Thread Bruce Momjian


There has been a lot of activity on open items in the past week.  Here
is the updated list.

Basically, upgrading and casting have blown up into a variety of items.

---

  P O S T G R E S Q L

  7 . 3   O P E N   I T E M S


Current at ftp://candle.pha.pa.us/pub/postgresql/open_items.

Source Code Changes
---
Schema handling - ready? interfaces? client apps?
Drop column handling - ready for all clients, apps?
Fix BeOS, QNX4 ports
Fix AIX large file compile failure of 2002-09-11 (Andreas)
Get bison upgrade on postgresql.org for ecpg only (Marc)
Allow ecpg to properly handle PREPARE/EXECUTE (Michael)
Fix vacuum btree bug (Tom)
Fix client apps for autocommit = off
Fix clusterdb to be schema-aware
Change log_min_error_statement to be off by default (Gavin)
Fix return tuple counts/oid/tag for rules
Loading 7.2 pg_dumps
fix up function return types on lang/type/trigger creation or
  loosen opaque restrictions
functions no longer public executable
languages no longer public usable
Add schema dump option to pg_dump
Add casts: (Tom)
assignment-level cast specification
inet -> text
macaddr -> text
int4 -> varchar?
int8 -> varchar?
add param for length check for char()/varchar()
Create script to make proper dependencies for SERIAL and foreign keys (Rod)
Fix $libdir in loaded functions?

On Going

Point-in-time recovery
Win32 port
Security audit

Documentation Changes
-
Document need to add permissions to loaded functions and languages
Move documentation to gborg for moved projects

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] 7.3 gotchas for applications and client libraries

2002-09-17 Thread Bruce Momjian


I have copied Tom's fine email to:

http://www.ca.postgresql.org/docs/momjian/upgrade_7.3

and have added a mention of it in the HISTORY file:

   A dump/restore using pg_dump is required for those wishing to migrate
   data from any previous release.  A summary of changes needed in client
   applications is at http://www.ca.postgresql.org/docs/momjian/upgrade_7.3.

---

Tom Lane wrote:
 Bruce suggested that we need a porting guide to help people look for
 application and client-library code that will be broken by the changes
 in PG 7.3.  Here is a first cut at documenting the issues.
 Comments welcome --- in particular, what have I missed?
 
   regards, tom lane
 
 
 Revising client-side code for PG 7.3 system catalogs
 
 
 Here are some notes about things to look out for in updating client-side
 code for PG 7.3.  Almost anything that looks at the system catalogs is
 probably going to need work, if you want it to behave reasonably when you
 start using 7.3's new features such as schemas and DROP COLUMN.
 
 As an example, consider the task of listing the names and datatypes for
 a table named foo.  In the past you may have done this with a query like
 
   SELECT a.attname, format_type(a.atttypid, a.atttypmod)
   FROM pg_class c, pg_attribute a
   WHERE c.relname = 'foo'
     AND a.attnum > 0 AND a.attrelid = c.oid
   ORDER BY a.attnum
 
 (this in fact is exactly what 7.2 psql uses to implement \d foo).
 This query will work perfectly well in 7.2 or 7.1, but it's broken in half
 a dozen ways for 7.3.
 
 The biggest problem is that with the addition of schemas, there might be
 several tables named foo listed in pg_class.  The old query will produce
 a list of all of their attributes mixed together.  For example, after
   create schema a;
   create schema b;
   create table a.foo (f1 int, f2 text);
   create table b.foo (f1 text, f2 numeric(10,1));
 we'd get:
 
  attname |  format_type
 ---------+---------------
  f1      | text
  f1      | integer
  f2      | text
  f2      | numeric(10,1)
 (4 rows)
 
 Not good.  We need to decide exactly which foo we want, and restrict the
 query to find only that row in pg_class.  There are a couple of ways to
 do this, depending on how fancy you want to get.
 
 If you just want to handle an unqualified table name foo, and find the
 same foo that would be found if you said select * from foo, then one way
 to do it is to restrict the query to visible rows of pg_class:
 
   SELECT ...
   FROM ...
   WHERE c.relname = 'foo' AND pg_table_is_visible(c.oid)
 AND ...
 
 pg_table_is_visible() will only return true for pg_class rows that are in
 your current search path and are not hidden by similarly-named tables that
 are in earlier schemas of the search path.
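 
 For illustration, filling in the elided parts, the complete
 visibility-filtered query might look like this (same select list and
 ordering as the original query above):
 
   SELECT a.attname, format_type(a.atttypid, a.atttypmod)
   FROM pg_class c, pg_attribute a
   WHERE c.relname = 'foo' AND pg_table_is_visible(c.oid)
     AND a.attnum > 0 AND a.attrelid = c.oid
   ORDER BY a.attnum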
 
 An alternative way is to eliminate the explicit join to pg_class, and
 instead use the new datatype regclass to look up the correct pg_class
 OID:
 
   SELECT ...
   FROM pg_attribute a
   WHERE a.attrelid = 'foo'::regclass
     AND a.attnum > 0
   ORDER BY a.attnum
 
 The regclass input converter looks up the given string as a table name
 (obeying schema visibility rules) and produces an OID constant that you
 can compare directly to attrelid.  This is more efficient than doing
 the join, but there are a couple of things to note about it.  One is
 that if there isn't any foo table, you'll get an ERROR message from
 the regclass input converter, whereas with the old query you got zero
 rows out and no error message.  You might or might not prefer the old
 behavior.  Another limitation is that there isn't any way to adapt
 this approach to search for a partially-specified table name, whereas
 in the original query you could use a LIKE or regex pattern to match
 the table name instead of a simple equality test.
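 
 (As a concrete illustration of the error behavior mentioned above --
 assuming there is no table named nosuch anywhere in the search path:)
 
   SELECT 'nosuch'::regclass;
   -- fails with an ERROR from the regclass input converter, rather than
   -- returning zero rows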
 
 Now, what if you'd like to be able to specify a qualified table name
 --- that is, show the attributes of a.foo or b.foo on demand?
 It will not work to say
   WHERE c.relname = 'a.foo'
 so this is another way in which the original query fails for 7.3.
 
 It turns out that the regclass method will work for this: if you say
   WHERE a.attrelid = 'a.foo'::regclass
 then the right things happen.
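 
 Spelled out, a sketch of that variant (same shape as the regclass query
 shown earlier):
 
   SELECT a.attname, format_type(a.atttypid, a.atttypmod)
   FROM pg_attribute a
   WHERE a.attrelid = 'a.foo'::regclass
     AND a.attnum > 0
   ORDER BY a.attnum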
 
 If you don't want to use regclass then you're going to have to do an
 explicit join against pg_namespace to find out which foo you want:
 
   SELECT a.attname, format_type(a.atttypid, a.atttypmod)
   FROM pg_namespace n, pg_class c, pg_attribute a
   WHERE n.nspname = 'a' AND c.relname = 'foo'
     AND c.relnamespace = n.oid
     AND a.attnum > 0 AND a.attrelid = c.oid
   ORDER BY a.attnum
 
 This is somewhat tedious because you have to be prepared to split the
 qualified name into its components on the client side.  An advantage
 is that once you've done that, you can again consider using LIKE or
 regex patterns 

Re: [HACKERS] Numeric casting rules, take two

2002-09-17 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Tom, do you want any TODO items from this?

I think we have plenty already on this general subject, no?  But you
could stick this whole thread into TODO.detail/typeconv if you like.
(It's interesting to compare these ideas to where we were 2 years
ago...)

regards, tom lane




Re: [HACKERS] RPMS for 7.3 beta.

2002-09-17 Thread Lamar Owen

On Wednesday 18 September 2002 12:55 am, Tom Lane wrote:
 Lamar Owen [EMAIL PROTECTED] writes:
  Not talking about a freeze.  Talking about separation of system/feature
  metadata from user metadata that wouldn't change in the upgrade anyway --

 But the system catalogs *store* that metadata.

They _currently_ store the user's metadata.  But that's my point -- does the 
user metadata that isn't typically substantially different after going 
through a dump/reload _have_ to coexist with the system data which is 
intrinsic to the basic backend operation?

Yes, I know I'm talking about refactoring/renormalizing the system catalogs.  
And I know that's neither interesting nor 'fun'.  And a major undertaking.

 from?  None of the key developers care to spend their time that way;
 all of us have other issues that we find more interesting/compelling/fun.
 Unless someone of key-developer caliber comes along who *likes* spending
 time on upgrade issues, it's not going to get better.  Sorry to be the
 bearer of bad news, but that's reality as I see it.

Quoting myself from my reply a couple of hours ago to Chris:
- While I should know better, I'm going to reply.:-)
[snip]
- I wish (in a somewhat wistful, yet futile manner) that each change was 
- accompanied by data migration strategies for that change, but I'm not 
- holding my breath, since the core developers have more important things
- to do.  (Not being sarcastic -- just observing a fact).

You're not telling me something I don't already know in your paragraph, Tom.  
Data migration of real users isn't interesting, compelling, or fun.  That's 
been made abundantly clear the last ten times the subject of upgrading has 
come up.  What's a real-world user to do?  Find it interesting, compelling, 
and fun to work around our shortcoming? (here comes one of those paroxysms 
that will keep me awake tonight)

I for one am not doing _this_ because I find it to be 'fun'.  Quite the 
opposite -- you try to help people who end up cussing you out for something 
you can't control.  (And I see all those messages addressed to Tom coming 
through the lists, so I'm sure Tom is no stranger to this portion of the 
project, either)  

I'm doing _this_ to try to help people not go through what I went through, as 
well as to try to help the project in general, for both selfish and selfless 
reasons.  If I were able to spend enough time on the issue I am quite 
confident I could find a solution, in a year or so.  But I find it 
compelling, if nothing else, to put food on my kids' plates, which precludes 
me working much on this particular issue.  But I do what I can, if nothing 
else.

But it is _necessary_ to migrate data for one reason or another.  The lack 
of officially released backports of security patches is one quite compelling 
reason to go through an upgrade.

Chris, this is why I was somewhat reticent to reply before.  I've been down 
this dead-end road before.  To distill Tom's comments:
It is technically feasible to make a better (not perfect) upgrade path, but 
nobody that can do it wants to.

What good is an interesting, compelling, fun, featureful new version if 
nobody upgrades to it due to migration difficulties?  This release could be 
the harbinger of further difficulties, I fear.

So, that's why I'm unhappy, to answer a question asked quite a while back in 
the thread.  

Back on topic: I'll work towards using the 7.3 pg_dump unless the 7.2 dump can 
be easily restored.  Given the desirability of having opaque go away soon, if 
the 7.3 pg_dump Does The Right Thing and creates an opaque-free dump, that in 
itself is enough reason to go that route, as it helps the user create an 
unambiguous data dump.  If it helps the user it is typically a Good Thing, 
and I am willing to put the effort into that.  And it may prove to not be 
that bad -- I'll know in a few days, hopefully.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




Re: [HACKERS] Open 7.3 items

2002-09-17 Thread Gavin Sherry

 Change log_min_error_statement to be off by default (Gavin)

I will be happy to provide this simple fix once I can get some indication
of the preferred implementation. The discussion left off with Bruce preferring
that the GUC code for the *_min_* variables be variable-specific, whereas
Tom saw no need to back out the generic assignment function I provided,
despite the fact that it behaves `illogically' (client_min_messages =
FATAL?).
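
For context, a sketch of the `illogical' case being referred to (assuming
the generic assignment code accepts it as-is):

    -- suppressing everything below FATAL on the client side arguably
    -- makes little sense, but the generic code does not reject it
    SET client_min_messages = 'fatal';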

Gavin


