I wonder whether we shouldn't
use the most recent address we have for
the git conversion, though.
Thomas, do you have a preference? See
http://archives.postgresql.org/pgsql-hackers/2010-08/msg01078.php
for
context.
I'd already
1) Is any John Franks code really in this file?
Possibly, maybe probably. I don't remember the details (9 years is a
long time!) but almost certainly any code or algorithms were
specifically for the inside or outside routines.
2) Did John provide a separate license for PostgreSQL to
Just a quick note to mention that I've resigned from the PostgreSQL
steering committee. It has been a lot of fun and very rewarding to
participate in PostgreSQL development over the last six years, but it is
time to take a break and to move on to other projects.
Thanks to Marc, Bruce, and
Does anyone know the location of a good cvsup file for grabbing the
various releases?
There is none; when the configuration was changed the docs were not.
Use the appendix in the current docs (not whatever TODO is) and replace
the pgsql project name with repository. Your CVS area will then
How would we make use of 'hasoids' in the case of multiple source
tables, or a source table defined by an SRF?
Choose a convention. At the moment, the first source table provides more
characteristics to the target than do the other sources, and that could
be true for OIDs also. I would be
I concur, but do we have some sort of commitment that the rest of
the SQL200x sequence machinery will be supported eventually? Otherwise,
adding some irrelevant syntax variations in limited places doesn't seem
fruitful.
Yes, I'll implement the rest of the SQL200x sequence stuff
eventually.
I agree with Neil: in many situations it's not reasonable to try to
associate a unique source table with a CREATE AS or SELECT INTO, and
therefore automatically propagating hasoids is doomed to failure.
Well, it certainly is reasonable to have better behavior than currently.
We do something
... and the same for SELECT/INTO. Also, the hasoids (or equivalent)
attribute is not picked up from the source table, which would be another
way to push this property into the target table.
One or both of these mechanisms (I'd think both should be available)
would seem to be required to make
...
In the long run, seems like it would be a good idea for type TIME
WITHOUT TIME ZONE's input converter to accept and ignore a timezone
field, just as type TIMESTAMP WITHOUT TIME ZONE does:
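A quick sketch of the behavior being referenced, with invented values (timestamp without time zone already accepts and discards a trailing zone offset; the suggestion is that time without time zone do the same):

```sql
-- Accepted: the -08 zone field is simply ignored on input:
SELECT '2002-06-01 12:00:00-08'::timestamp;

-- Proposed: accept and ignore the zone field here too:
SELECT '12:00:00-08'::time;
```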
...
Thomas, what do you think --- was this behavior deliberate or an
oversight?
The behavior was
...
to pull in those changes that were made to the REL7_3_STABLE branch ...
Right.
But, if I did:
cvs checkout -rREL7_3_STABLE pgsql
What would I use as BRANCHNAME in the -j to 'pull in' the changes we made
to HEAD? Or is there where I'm misunderstanding something?
Use HEAD for the
As I was unable to figure out a way of merging HEAD into a BRANCH
(BRANCH into HEAD is easy), I moved the REL7_3_STABLE tag up to today, so
that the BRANCH is in a more appropriate location.
It is easy to move both directions; I've done it. You need to have the
right stuff in your checked-out
Out of curiosity, is there an easy (heh) way to upgrade (change?) a
checked-out tree to be off the branch?
cvs update -rYOUR_BRANCH_NAME_HERE .
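The merge workflow being discussed might look like the following sketch, with invented tag names (recall that a single `-j` in cvs merges the changes from the common ancestor up to the named revision into the working copy):

```shell
# Switch an existing checkout onto the branch:
cvs update -r REL7_3_STABLE .

# Merge trunk (HEAD) changes into the branch working copy, then commit:
cvs update -j HEAD .
cvs commit -m "merge HEAD changes into REL7_3_STABLE"

# Drop the sticky branch tag to return the checkout to HEAD:
cvs update -A .
```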
- Thomas
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to
We have just upgraded from 7.1.3 to 7.2.3 over the weekend, and have just
noticed something weird with regards 'epoch'::timestamp.
In 7.1.3, 'epoch'::timestamp always returned the value 1970-01-01
00:00:00+00,...
I would expect 'epoch'::timestamp to always act as though the value were
the same
Create table arnold (
a int8,
b timestamp default 'epoch'::timestamp,
c timestamp default timestamp('epoch'::text)
);
I think that the best way forward for us (short of re-writing the backend to
use NULL) will be to just alter the default value to the one in column b in
the test table above.
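That fix can be stated directly against the `arnold` table above; a hedged sketch, assuming the 7.2-era `ALTER TABLE ... SET DEFAULT` syntax:

```sql
-- Replace column c's default with the plain-cast form used for column b:
ALTER TABLE arnold ALTER COLUMN c SET DEFAULT 'epoch'::timestamp;
```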
Seems like someone ought to issue a call for port reports. The
supported platforms list hasn't been touched ...
Good point. Thomas, can you take that on?
No, at least not now. I'm not able to communicate reliably with the
mailing lists, and so can not coordinate anything :( Not sure when or
SIMILAR TO doesn't implement the SQL standard, it's only a wrapper around
the POSIX regexp matching, which is wrong. I thought someone wanted to
fix that, but if it's not happening it should be removed.
Please be specific on what you would consider correct. I'm not recalling
any details of
Actually, Linux Journal (and their editors) are fans of PostgreSQL.
This year, MySQL may actually have clued in to transactions and a few
other big database features. I don't know that they actually *have*
these features polished up, but LJ is giving them credit for trying...
-
template1=# select current_timestamp(0) at time zone 'Australia/Sydney';
ERROR: Time zone 'australia/sydney' not recognized
The input is done using an internal lookup, not your system's time zone
database. Much faster; setting time zone variables for every input will
be substantially slower
Neil Conway wrote:
I've backpatched Thomas's fixes for the potential buffer overruns in
the datetime code to the REL7_2_STABLE branch. The required changes
are pretty minimal, and the code passes the regression tests.
However, I haven't yet seen a test-case that demonstrates the buffer
...
So I think that fixing the opaque problems in 7.2.x is simply
impossible. Given that, the question is whether we should make a 7.2.2
release with fixes for the other security holes (lpad(), rpad(),
reverse(), and the datetime overruns). IMHO, we should.
Just a minor point: can someone
OK, with two people now asking to have the patch removed, and with no
comment from Thomas, I have removed the patch. This removes XLogDir
environment variable, and -X postmaster/postgres/initdb/pg_ctl flag.
I have also removed the code that dynamically sized xlogdir.
... Back in town...
Revert. The XLogDir change was incomplete and basically useless to
start with ...
Yea, but it was tied into the PGXLOG commit. Thomas, what are we doing
with that?
Why ask me?
- Thomas
To simply decide not to fix it means:
snip
What I am saying is that there is nothing that could possibly be more
important than fixing this, except some other known problem that could
also cause billions of dollars worth of damage. Are there any such
problems besides the buffer overrun
If you move pg_xlog, you have to create a symlink in /data that points
to the new location. Initdb would do that automatically, but if you
move it after initdb, you would have to create the symlink yourself.
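The manual relocation described above might look like this (paths invented; the server must be stopped first):

```shell
# Relocate the WAL directory and leave a symlink behind in $PGDATA:
pg_ctl -D "$PGDATA" stop
mv "$PGDATA/pg_xlog" /mnt/fastdisk/pg_xlog
ln -s /mnt/fastdisk/pg_xlog "$PGDATA/pg_xlog"
pg_ctl -D "$PGDATA" start
```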
With Thomas's current code, you would add/change PGXLOG instead to point
to the new
That's what I was thinking. In cases where you want to use the type for
several functions, use CREATE TYPE. If you only need the type for one
function, let the function creation process manage it for you.
It would be nice then to have some mechanism for converting the
automatic type to a
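The explicit-type half of that contrast might look like this sketch (type and function names invented; assumes the composite-type form of `CREATE TYPE ... AS` that was being introduced around this time):

```sql
-- A shared composite type, usable by several functions:
CREATE TYPE pair AS (a integer, b integer);

-- Any number of functions can now declare it as their return type:
CREATE FUNCTION make_pair(integer, integer) RETURNS pair
    AS 'SELECT $1, $2' LANGUAGE sql;
```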
Thomas, would you remind me of the concusions because I thought everyone
involved felt that it should be an initdb-only option, but I still see
it in CVS.
?? Concussions as in brain bruises? ;)
I'm not sure I understand the question. I assume that we are talking
about the WAL log location
I've committed changes to do the following:
o Fix buffer overrun possibilities in date/time handling
o Handle fixed-length char and bit literals
o Implement IS OF type predicate
o Define macros to manipulate date/time typmod values
o Map hex string literals to bit string type (may change later)
...
Would it be out of line to question the fact that none of these commit
messages showed any documentation updates?
Not at all :)
I needed to dump the patches into the tree since with the recent changes
to CVS I'm at risk of losing the ability to work and to generate patches
for the work
I am wondering why we even want to specify the WAL location anywhere
except as a flag to initdb. If you specify a location at initdb time,
it creates the /xlog directory, then symlinks it into /data.
Does this have any negative implications for Win32 ports?
Sure. The symlinks thing was
...
I was just wondering why we would deal with environment variables or
postgresql.conf settings. Just make it an initdb flag, create it in the
desired location with a symlink in /data and then we don't have to do
any more work for WAL locations unless people want to move it around
after
...
I wasn't terribly concerned because this wasn't a 2% hit on a
normal-workload test; it was a 2% hit on a bang-on-function-calls-as-fast-as-you-can test.
Yeah, good point. But if we can get it back somehow that would be even
better :)
- Thomas
...
Perhaps a more relevant question is why are we cluttering the namespace
with any such function at all? What's the use case for it? We've
gotten along fine without one so far, and I don't really think that we
*ought* to be exposing random bits of internal implementation details
at the
With FUNC_MAX_ARGS=16:
(average = 28.6 seconds)
With FUNC_MAX_ARGS=32:
(average = 29.15 seconds)
That is almost a 2 percent cost. Shall we challenge someone to get us
back 2 percent from somewhere before the 7.3 release? Optimizing a hot
spot might do it...
-
...
I agree that if we could quickly come to a resolution about how this
ought to work, there's plenty of time to go off and implement it. But
(1) we failed to come to a consensus before, so I'm not optimistic
that one will suddenly emerge now; (2) we've got a ton of other issues
that we
...
But ... my recollection is that we've had a *huge* number of complaints
about the initlocation behavior, at least by comparison to the number
of people using the feature. No one can understand how it works,
let alone how to configure it so that it works reliably. I really
fail to
...
Is this a fair account?
Yes. You may note that we have not explored the implementation details
on any of these, so the attributes of each are not cast in stone (except
for the purposes of argument of course ;)
- Thomas
* libpqxx is not integrated into build process nor docs. It should
be integrated or reversed out before beta.
I've requested that Jeorgen (sp?) move this over to GBorg ... it's
something that can, and should be, built separately from the base
distribution, along with at least a dozen other
...
*Eventually*, a simple checkout of 'pgsql' should result in a server
only distribution that we can pull bits and pieces into transparently ...
I'm still not quite sure where this is headed or why, but if nothing
else pgsql could and should be the whole thing, and pgsql-server could
be the
I have implemented the ROW keyword, but am not sure that I've gotten
what the spec intends to be the full scope of functionality. It may be
that I've missed the main point completely :)
If someone has the time and interest I'd appreciate it if they would go
through the SQL99 spec and see what
... amongst all the various 'bruce's...
Hmm. The Monty Python scenario? :)
- Thomas
I've got patches to adjust the interpretation of hex literals from an
integer type (which is how I implemented it years ago to support the
*syntax*) to a bit string type. I've mentioned this in a previous
thread, and am following up now.
One point raised previously is that the spec may not be
Oh, I've also implemented int8 to/from bit conversions, which was a
trivial addition/modification to the int4 support already there...
- Thomas
We have some docs...
it's written in Russian.
I could try to translate if needed.
That would be great! Perhaps someone would help to go through and edit
it after you do a first cut, so you don't need to spend time working on
exact phrasing but rather on the content itself. I admire your
... But I strongly feel that having a feature
because 'it is something that no one else has. It distinguishes us.' is
no justification at all.
One reason why we have a database which *does* come very close to the
standards is precisely because it had (and has) things which no one else
had
...
Thomas, are you going to extend this to locations for any table/index?
Seems whatever we do for WAL should fix in that scheme.
Yes, the longer-term goal is enabling table/index-specific locations.
I'm not certain whether WAL can use *exactly* the same mechanism, since
1) the location for
I have implemented the ROW keyword, but am not sure that I've gotten
what the spec intends to be the full scope of functionality. It may be
that I've missed the main point completely :)
...
afaict the spec is not at all verbose about this, and is very dense and
obtuse where it does
...
I've been securing systems since I started an ISP in 1995, and so I've
seen a lot of security vulnerabilities come and go, and I've got a bit
of a feel for what kinds of things are typically exploited. And this one
just screams "potential security vulnerability!" to me.
Sure, there is
Whether you think that there is a potentially-exploitable security hole
here is not really the issue. The point is that two different arguments
have been advanced against using environment variables for configuration
(if you weren't counting, (1) possible security issues now or in the
...
Agreed. Consistency argues for the postgresql.conf solution, not
security. Also, I would like to see initlocation removed as soon as we
get a 100% functional replacement. We have fielded too many questions
about how to set it up.
Hmm. I'm not sure of the best way to look, but I was able
This is great. We had thought we might have to go for code changes;
we will go with this solution instead.
But you did catch Stephan's point that an outer join is not required to
produce the result you apparently want? The equivalent inner join will
be at worst just as fast, and possibly faster, both for
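Concretely (tables and columns invented): when the WHERE clause filters on a column from the nullable side, the NULL-extended rows are discarded anyway, so the outer join buys nothing:

```sql
-- These two queries return the same rows, because WHERE b.flag = true
-- can never be satisfied by a NULL-extended row:
SELECT a.id FROM a LEFT JOIN b ON a.id = b.a_id WHERE b.flag = true;
SELECT a.id FROM a      JOIN b ON a.id = b.a_id WHERE b.flag = true;
```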
I've developed patches to be able to specify the location of the WAL
directory, with the default location being where it is now. The patches
define a new environment variable PGXLOG (a la PGDATA) and postmaster,
postgres, initdb and pg_ctl have been taught to recognize a new command
line switch
It would be nice to get the line type working 100%. Thomas says the
problem is input/output format. I don't completely understand.
The issue is in choosing an external format for LINE which does not lose
precision during dump/reload. Internally, LINE is described by a formula
which is likely
1) the SQL standard says what hex values should be translated to in
binary, which implies that all values may be *output* in binary format.
So the standard says both represent a fixed-length bit string data type.
ISTM that we should not try to preserve any information on the units
used for
...
We do need a solution for exact dump/reload of floating-point data,
but I don't see why the lack of it should be reason to disable access
to the LINE type.
I don't understand why dumping the two point values isn't sufficient.
Which two point values? LINE is handled as an equation, not
...
Well, the \dT documentation used to show line as two points, so I
assumed that was how it was specified.
Hmm. And it seems I entered it a few years ago ;)
Cut and paste error. At that time the line type was defined but had
never had the i/o routines enabled.
No one likes entering an equation. Two points seems the simplest.
That it does.
On the other hand, if you want to enter two points why don't you just
use lseg to begin with? There's not much justification for having a
separate line type unless it behaves differently ...
They are
Log message:
This fixes 2 inaccuracies in the recently added SQL99 feature list docs.
UNIQUE and DISTINCT predicates are both listed as implemented -- AFAIK,
neither is.
DISTINCT was implemented a couple of weeks ago. I'll change the docs
again...
- Thomas
SQL9x defines bit string constants with a format like
B'101010'
and
X'ABCD'
for binary and hexadecimal representations. But at the moment we don't
explicitly handle both of these cases as bit strings; the hex version is
converted to decimal in the scanner (*really* early in the parsing
stage) and then handled as an integer.
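Under the proposed interpretation, each hex digit maps to four bits, so both literal forms denote bit strings:

```sql
SELECT B'101010';   -- bit string of length 6
SELECT X'ABCD';     -- bit string of length 16: B'1010101111001101'
```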
It looks like our current bit string type
So, what should the behavior be of a constant declared as
CHAR 'hi'
? Right now it fails, since SQL9x asks that the char type defaults to a
length of one and our parser does not distinguish between usage as a
constant declaration and as a column definition (where you would want
the
Is this possible ?
Sure.
- Thomas
(crossposted to -hackers, should follow up on that list)
Well, OVERLAY is defined as:
overlay(string placing string from integer [for integer])
and replace() is defined (by me at least) as:
replace(inputstring, old-substr, new-substr)
OK.
OVERLAY requires that I know the from
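Side by side, with invented inputs: OVERLAY needs an explicit position and length, while replace() locates the old substring itself:

```sql
SELECT overlay('Txxxxas' placing 'hom' from 2 for 4);
--  'Thomas'  (replace 4 characters starting at position 2)

SELECT replace('Txxxxas', 'xxxx', 'hom');
--  'Thomas'  (position found by matching, not given by the caller)
```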
So, what should the behavior be of a constant declared as
CHAR 'hi'
? Right now it fails, since SQL9x asks that the char type defaults to a
length of one and our parser does not distinguish between usage as a
constant declaration and as a column definition (where you would want
the char(1) to
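The ambiguity in question, sketched (per SQL, a bare CHAR in a column definition defaults to char(1)):

```sql
-- As a column definition, CHAR means char(1):
CREATE TABLE t (c CHAR);

-- As a typed literal, forcing char(1) would truncate or fail:
SELECT CHAR 'hi';   -- should this be char(2), or an error?
```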
If so, what about the coercibility property?
The standard defines four distinct coercibility properties. So in my
example above, you are actually going to define 80 new types?
(also a collation could be either PAD SPACE or NO PAD. So you
might have 160 new types).
Well, yes I suppose so.
When you say "We do not yet implement the SQL99 forms of character
support," I think you mean the ability to specify a per-column (or even
per-string) charset. I don't think this will happen for 7.3 (or 8.0,
whatever), but sometime later I would like to make it a reality.
Right.
An aside: I was
I've been thinking this for a while too. What about collation? If we add
new charsets A and B, and each has 10 collations, then we are going to
have 20 new types? That seems overkill to me.
Well, afaict all of the operations we would ask of a type we will be
required to provide for character sets
We have always stored our time zone offsets in a compressed
divide-by-ten form to allow storage in a single byte. But there are a
few time zones ending on a 45-minute boundary, whose minutes cannot be
represented when divided by ten.
But minutes divided by 15 can represent this and afaik all other time
Is there a reason for the restriction to one byte? Offhand I don't
recall that we store TZ offsets on disk at all...
Ah, we don't. Sorry I wasn't clear. This is only for the lookup table we
use to interpret time zones on *input*. It is contained in
src/backend/utils/adt/datetime.c
And it has
...
So I withdraw my earlier comment. But perhaps the syntax of the proposed
command could be aligned with the CREATE TRANSLATION command.
Tatsuo, it seems that we should use SQL99 terminology and commands where
appropriate. We do not yet implement the SQL99 forms of character
support, and
I modified pg_controldata to display a few new fields. Example output
appears at the end of this message, and the cvs log is:
Add a few new lines to display recently added fields in the ControlFile
structure.
Now includes the following new fields:
integer/float date/time storage
maximum
Actually, the big change is such that it will, at least as far as I
understand it, break pretty much every front-end application ... which,
I'm guessing, is pretty major, no? :)
I've always thought of our release numbering as having themes. The 6.x
series took Postgres from interesting but
Well in the renumbering case, the client needs to know about missing attnos
and it has to know to ignore negative attnos (which it probably does
already). ie. psql and pg_dump wouldn't have to be modified in that case.
In the isdropped case, the client needs to know to exclude any column
I've committed support for IS DISTINCT FROM to the head of the CVS tree.
I did not update the catalog version, but since one enumerated type has
added a value it may be that initdb is actually required. Almost
certainly a make clean is essential.
There is more work to do, including perhaps
...
I am backing out my GNUmakefile change. I am still unclear why this has
started happening all of a sudden.
?
The results/ directory should not be a part of CVS (since it is assumed
to not exist by the regression tests). But it has been in CVS since 1997
during a period of time when a
Another possibility is that you got burnt by some schema-related issue;
cf the updated conversion docs at
http://developer.postgresql.org/docs/postgres/typeconv-func.html
I'll bet that is it, though possible differences in CAST() behaviors are
not explained. I'll see if I can reproduce
...
Adding a new expression node tree type is not too difficult these days;
see for example Joe Conway's recent NullTest and BooleanTest additions.
I believe the existing expansions of row comparison operators
(makeRowExpr) should be replaced by specialized nodes, too. That would
give us a
I get
cvs [server aborted]: cannot write /cvsroot/CVSROOT/val-tags: Permission denied
This seems to be a server message.
I see the same thing when trying to update a tree to this branch using
local cvs on mcvsup.postgresql.org. The file is owned by scrappy and has
no group write permissions.
I've actually already done almost all the work for converting BETWEEN to a
node but I have a couple of questions:
Should I use a boolean in the node to indicate whether it is SYMMETRIC or
ASYMMETRIC, or should I use some sort of integer to indicate whether it is
SYMMETRIC, ASYMMETRIC or
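For reference, the spec semantics the node would encode (SYMMETRIC allows the endpoints in either order):

```sql
SELECT 2 BETWEEN ASYMMETRIC 1 AND 3;   -- true
SELECT 2 BETWEEN ASYMMETRIC 3 AND 1;   -- false: lower bound exceeds upper
SELECT 2 BETWEEN SYMMETRIC  3 AND 1;   -- true:  endpoints are swapped first
```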
I'm looking at implementing IS DISTINCT FROM, among other things.
...
I was thinking to implement this by simply expanding these rules within
gram.y to be a tree of comparison tests.
Please, please, do not do that. Make a new expression node tree type,
instead. We've made this mistake
It doesn't match perfectly in that one field is ignored as being
(afaict) redundant for us. The basic definition from SQL99 is
CREATE CAST(from AS to) WITH FUNCTION func(args) [AS ASSIGNMENT]
I can map this to something equivalent to
CREATE FUNCTION to(from) RETURNS to AS 'select
SQL uses LOCAL to mean the local node in a distributed system (SET LOCAL
TRANSACTION ...) and the current session as opposed to all sessions (local
temporary table). The new SET LOCAL command adds the meaning "this
transaction only." Instead we could simply use SET TRANSACTION, which
would
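The "this transaction only" meaning, as a sketch (variable and value invented):

```sql
BEGIN;
SET LOCAL search_path = myschema;  -- in effect only until the transaction ends
COMMIT;
SHOW search_path;                  -- back to the previous value
```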
I see no real reason why we should not require casting functions to
follow the Postgres naming convention --- after all, what else would
you name a casting function?
We do require casting functions to follow the Postgres naming
convention. istm to be a waste of time to have the CREATE CAST()
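So the two declaration styles converge; a hedged sketch with an invented type name:

```sql
-- Assume an existing conversion function named after the target type:
--   CREATE FUNCTION complex(text) RETURNS complex ...
-- The SQL99 form then just registers that function as a cast:
CREATE CAST (text AS complex) WITH FUNCTION complex(text) AS ASSIGNMENT;
```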
Fernando Nasser of Red Hat reminded me that it really makes no sense
for ALTER TABLE ADD COLUMN and ALTER TABLE RENAME COLUMN to behave
non-recursively --- that is, they should *always* affect inheritance
children of the named table, never just the named table itself.
Hmm. Good point.
I'm happy setting up the branch if that would be helpful. Let me know if
this is the way you want to proceed, and if so what you would like the
That would be nice. I do not really know cvs myself.
Done. And here is how you would use it...
branch to be called.
No idea. new-bison maybe?
A couple of notes:
...
Then, update *only* the ecpg source directory to the branch:
cd pgsql/src/interfaces
cvs update -r ecpg_big_bison ecpg
cvs will respect any changes you have made to the sources in your
directory and the changes will be preserved in the move to the branch.
Here is what
Hi all. I want to begin contributing work to stuff on the TODO list. I
have worked on Suns for a long time.
I have a dual processor 180MHz Pentium Pro which is adequate for
building and testing PostgreSQL, though I find I use my newer laptop
more often nowadays. I'd expect that any machine
I'm looking at implementing IS DISTINCT FROM, among other things. It has
the unusual behavior that it compares elements for a tuple by
considering two NULLs to be equal (hence non-distinct) rather than
unknown. So the rules for comparison seem to be:
a) if the rows compared have different
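The truth table being described (NULLs compare as equal rather than unknown):

```sql
SELECT 1    IS DISTINCT FROM 2;     -- true
SELECT 1    IS DISTINCT FROM 1;     -- false
SELECT 1    IS DISTINCT FROM NULL;  -- true  (ordinary = would yield NULL)
SELECT NULL IS DISTINCT FROM NULL;  -- false (two NULLs are not distinct)
```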
a) The client-side programmer has to be responsible for parsing the
returned string, which could cause problems if the output format of the
ADT is changed, and
You seem to be proposing that we instead expose the internal storage
format of the ADT, which seems to me to be much more likely
Ah, I see --- more or less make all of utils/adt/ available to be
linked into clients.
That is a Good Idea in principle. In practice, ...
Yeah, it'd be a huge amount of work. For starters, all that code
relies on the backend environment for error handling and memory
management...
It
I've got patches for the CREATE CAST/DROP CAST feature (just a
rearrangement of our existing function declaration syntax). The SQL99
form assumes that an existing function will be used for the cast
definition, so I've extended the syntax to allow that and to have an
alternate form which
So, is your vote for or against the elog(NOTICE)?
OK, if we are still voting, then I'll mention that I generally dislike
the idea of notices of this kind. And would not like this notice in
particular. So would vote no with both hands ;)
I'm pretty sure that we have a consensus policy (hmm, at
I've gone ahead and committed patches for CREATE CAST/DROP CAST, as well
as for a few other SQL99 clauses in other statements. Details below...
- Thomas
Implement SQL99 CREATE CAST and DROP CAST statements.
Also implement alternative forms to expose the PostgreSQL CREATE
I've got patches for the CREATE CAST/DROP CAST feature (just a
rearrangement of our existing function declaration syntax). The SQL99
form assumes that an existing function will be used for the cast
definition, so I've extended the syntax to allow that and to have an
alternate form which has more
I've just updated features.sgml to have a list of supported features
followed by a list of unsupported ones. There are some items in the
unsupported list which look easy to do. I've got patches for a MATCH
SIMPLE clause on referential integrity declarations, and am developing
patches for CREATE
You wrote was either to voluminous instead of was either too voluminous
in the first paragraph of the appendix...
Thanks!
- Thomas
As you probably know, SQL99 has dropped the rather useless
categorizations of basic, intermediate, and advanced SQL
compliance and instead lists a large number of labeled features. I've
put these into an appendix for the docs (not yet committed to cvs).
The list is organized as a (for now) three
I've just committed changes to include an SQL99 feature list as an
appendix in the User's Guide. While preparing that I noticed a feature
or two which would be trivial to implement, so we now have LOCALTIME and
LOCALTIMESTAMP function calls per spec (afaict; the spec is very vague
on the
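The two new function calls, per spec (results depend on the session clock, so none shown):

```sql
SELECT LOCALTIME;          -- time without time zone
SELECT LOCALTIMESTAMP;     -- timestamp without time zone
SELECT LOCALTIMESTAMP(0);  -- optional seconds-precision argument
```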
Thanks for the info! I have a question...
As usual: ( ) + * [ ] |
Instead of dot . there is underscore _
There is % to mean .* just like LIKE
There is no ? or ^ or $
Regular expressions match the whole string, as if there were an
implicit ^ before and $ after the pattern. You have to
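Putting the rules above together, with invented strings:

```sql
SELECT 'abc' SIMILAR TO 'abc';   -- true:  matches the whole string
SELECT 'abc' SIMILAR TO 'b';     -- false: no implicit wildcards at the ends
SELECT 'abc' SIMILAR TO '%b%';   -- true:  % behaves as in LIKE
SELECT 'abc' SIMILAR TO 'a_c';   -- true:  _ is the single-character wildcard
```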