Thanks. Applied.
Hello!
Here is a patch to make the current snapshot compile on
Win32 (native, libpq
and psql) again. Changes are:
I thought the consensus was to do something other than that patch.
As it looks, if nothing else is
There were only a few to fix, so I fixed them.
Peter Eisentraut [EMAIL PROTECTED] writes:
Which one of these should we use?
int4 is a data type, int32 isn't. c.h has DatumGetInt8, but no
DatumGetInt64; it also has DatumGetInt32 but no
DatumGetInt4. fmgr has
Wait a sec !
The
The other category is run-time behaviour, which is the present case of
testing whether mktime() behaves a certain way when given certain
arguments. Another item that has been thrown around over the years is
whether the system supports Unix domain sockets (essentially a run-time
behaviour
The problem is that there are strings being allocated from
libpq.dll using PQExpBuffers (for example, initPQExpBuffer()
on line 92 of input.c). These are being allocated using the
malloc function used by libpq.dll. This function *may* be
different from the malloc function used by
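For illustration, a minimal sketch of the hazard (the example function is mine; only the real PQExpBuffer API from pqexpbuffer.h is assumed):

#include "pqexpbuffer.h"   /* from src/interfaces/libpq */

static void
buffer_example(void)
{
    PQExpBufferData buf;

    initPQExpBuffer(&buf);               /* buf.data is malloc'd inside libpq.dll */
    appendPQExpBufferStr(&buf, "hello");

    /*
     * WRONG on Win32: a bare free(buf.data) here runs in the application's
     * C runtime, which may manage a different heap than the one libpq.dll
     * allocated from.
     *
     * RIGHT: hand the memory back to the runtime that allocated it.
     */
    termPQExpBuffer(&buf);
}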
Is anyone looking at doing this? Is this purely a MySQL-ism, or is it
something that everyone else has except us?
We should not only support access to all db's under one postmaster,
but also remote access to other postmasters' databases.
All biggie db's allow this in one way or another
You don't need to put this check into configure, you can just
do the check after mktime() is used.
No, we need that info for the output functions that only use localtime.
The intent is to not use DST before 1970 on platforms whose mktime()
cannot handle dates before 1970.
You
I agree that configure is the way to go. What if someone has
installed a third party library to provide a better mktime() and
localtime()?
Exactly. What if someone has a binary PostgreSQL package installed, then
updates his time library to something supposedly binary compatible and
We do not need any execution time checks for this at all. The objective is
to determine whether mktime() works for inputs whose results would be negative.
On AIX and IRIX all calls to mktime for dates before 1970 lead to a result of
-1, and the configure test is supposed to give a define for
I do not have the original thread where Andreas describes the behavior
of mktime() on his machine. Andreas, can you suggest a simple configure
test to be used?
#include <time.h>
int main()
{
    struct tm tt, *tm = &tt;
    time_t i = -5000;

    tm = localtime(&i);
    i = mktime(tm);    /* returns -1 for pre-1970 dates on AIX/IRIX */
    return (i == (time_t) -1);
}
I'm trying to compile postgres on a Solaris 7 SPARC machine
and I get this error:
gcc -g -Wall -Wmissing-prototypes -Wmissing-declarations -fPIC
-I../../../src/include -I../../../src/interfaces/libpq -c pgtcl.c -o pgtcl.o
In file included from pgtcl.c:19:
libpgtcl.h:19: tcl.h: No such file or directory
Yes, the annoyance is that localtime works for dates before 1970
but mktime doesn't. Best would probably be to assume no DST before
1970 on AIX and IRIX.
That seems like a reasonable answer to me, especially since we have
other platforms that behave that way. How can we do this
More importantly, PostgreSQL 6.5.3 works very, very well without
VACUUM'ing.
6.5 effectively assumes that "foo = constant" will select exactly one
row, if it has no statistics to prove otherwise.
I thought we had agreed upon a default that would still use
the index in the above case
The correct thing to do instead of the #if defined(_AIX) would be to use
something like #ifdef NO_NEGATIVE_MKTIME and set that with a configure test.
Thomas, are you volunteering ?
Actually, I can volunteer to be supportive of your efforts ;) I'm
traveling at the moment, and don't have the
Instead of a partial row CRC, we could just as well use some other bit
of identifying information, say the row OID. Given a block CRC on the
heap page, we'll be pretty confident already that the heap page is OK,
we just need to guard against the possibility that it's older than the
index
How about having some #if BROKEN_TIMEZONE_DATABASE code which uses both
mktime() and localtime() to derive the correct time zone? That is, call
mktime to get a time_t, then call localtime() to get the time zone info,
but only on platforms which do not get a complete result from just
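For concreteness, a sketch of one way to re-derive the offset from localtime()/gmtime() alone (the helper name is mine, not code from nabstime.c); localtime() still works for pre-1970 dates on the platforms discussed here:

#include <time.h>

static long
utc_offset(time_t t)
{
    struct tm gm = *gmtime(&t);
    struct tm lt = *localtime(&t);
    long off = (lt.tm_hour - gm.tm_hour) * 3600L
             + (lt.tm_min - gm.tm_min) * 60L
             + (lt.tm_sec - gm.tm_sec);

    if (lt.tm_year != gm.tm_year || lt.tm_yday != gm.tm_yday)
    {
        /* local time and UTC fall on different calendar days */
        int ahead = (lt.tm_year > gm.tm_year) ||
                    (lt.tm_year == gm.tm_year && lt.tm_yday > gm.tm_yday);

        off += ahead ? 86400L : -86400L;
    }
    return off;
}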
A disk-block CRC would detect partially written blocks (ie, power drops
after disk has written M of the N sectors in a block). The disk's own
checks will NOT consider this condition a failure.
But physical log recovery will rewrite every page that was changed
after last checkpoint, thus
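For reference, a disk-block CRC here means something like the following bitwise CRC-32 sketch (a real implementation would be table-driven; a BLCKSZ of 8192 is assumed):

#define BLCKSZ 8192

static unsigned long
block_crc32(const unsigned char *page)
{
    unsigned long crc = 0xFFFFFFFFUL;   /* standard CRC-32 preload */
    int i, bit;

    for (i = 0; i < BLCKSZ; i++)
    {
        crc ^= page[i];
        for (bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320UL : 0);
    }
    return crc ^ 0xFFFFFFFFUL;
}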
Pete Forman [EMAIL PROTECTED] writes:
Thinking about that a bit more, I think that tm_isdst should not be
written to.
IIRC, setting isdst to -1 was necessary to get the right behavior
across DST boundaries on more-mainstream systems. I do not think it's
acceptable to do worse on
On AIX mktime(3) leaves tm_isdst at -1 if it does not have timezone
info for that particular year and returns -1.
The following code then makes savings time out of the -1.
tz = (tm->tm_isdst ? (timezone - 3600) : timezone);
Hmm. That description is consistent with what I see
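One possible guard, as a sketch: let only a positive flag select DST, so that mktime()'s "don't know" value (-1) falls back to standard time:

    tz = (tm->tm_isdst > 0) ? (timezone - 3600) : timezone;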
we've almost totally rewritten gist.c because the old code and algorithm
were not suitable for variable-size keys. I think it might be
submitted into the 7.1 beta source tree.
Urgh. Dropping in a total rewrite when we're already past beta3
doesn't strike me as good project management
The time zone is now evaluated in the time zone of the result, rather
than the input, using system support routines from libc.
Ok, I have the answer.
On AIX mktime(3) leaves tm_isdst at -1 if it does not have timezone
info for that particular year and returns -1.
From man page:
The
On AIX the timestamp and horology regressions fail in current, because
timestamp - interval results for timestamps before 1970 (epoch ?)
are off by one hour. I think this is not a case for an adapted expected
file, but a new (in 7.1beta) bug. But I am by no means an expert at what the
You can remove the randomness by setting the Seed configuration value,
True, but that's not the default setup.
I would set Seed by default. Even worse than a bad query path
is an unpredictable query path. I see no argument that a random Seed
buys us anything.
Andreas
For 7.1, I propose that we allow creation of child tables only by the
owner of the parent table.
Or dba. Sounds reasonable, maybe even sufficient to me.
(Informix has a separate right (called "under") to grant inheritability
to others, which supports your separate-right point.)
Andreas
There is something busted in the snapshots that leads to a wrong
configure file: the file is identical to configure.in (not autoconf'ed).
First noticed shortly before Christmas.
Andreas
select * from table where col = function() ;
(2) "function()" returns a number of values that are independent of the
query. Postgres should be able to optimize this to be: "select * from
table where col in (val1, val2, val3, ..valn)." I guess Postgres can
loop until done, using the isDone
Not sure what to do about this. If you had actually typed 2^64-1040,
it would be appropriate for the code to reject it. But I hadn't
realized that the extra check would introduce a discrepancy between
32- and 64-bit machines for negative inputs. Maybe it'd be better just
to delete the
VACUUM ANALYZE after the INSERTs made no performance difference at all,
which is good since no other modern database requires anything to be done
to improve performance after a large number of INSERTs. (I can understand
why COPY would need it, but not INSERT.)
I know of no DB that
IIRC oid uses int4in/int4out and those should definitely be able to parse
-1040 into a 4-byte signed long without platform dependency, no ?
Tom Lane changed this recently to have OID use its own i/o routines.
Reading the code, I don't understand it. Why would strtoul return an int in the
A heap page corruption is not very likely in PostgreSQL because of the
underlying page design. Not even on flaky hardware/OS software.
(I once read a page design note from pg 4 but don't exactly remember
where or when)
The point is that the heap page is only modified in places that were
because the cache will be emptied by many high-priority new rows,
thus writing to the end anyway.
Yes, but this only happens when you don't have enough spare idle CPU
time. If you are in such a situation for long periods, there's nothing you
can do; you already have problems.
I
cusejoua=# update journaleintrag set txt_funktion=trim(txt_funktion);
FATAL 2: write(logfile 0 seg 2 off 4612096) failed: No such file or directory
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or
The tendency here seems to be towards an improved smgr.
But it is currently extremely cheap to calculate where a new row
needs to be located physically. This task is *a lot* more expensive
in an overwrite smgr.
I don't agree. If (as I have proposed) the search is made in the
create unique index child_id_index on child (id);
Thanks a lot. You saved my day :-)))
Always feels good to be able to help :-)
CREATE TABLE will create implicit trigger(s) for FOREIGN
KEY check(s)
ERROR: UNIQUE constraint matching given keys for referenced
table "child" not found
I have an index on group_id, one on
(group_id,status_id) and one on (group_id,status_id,assigned_to)
As an aside: you should definitely only need the last of the three
indices, since it also works perfectly for restrictions on group_id
alone or on group_id + status_id.
Andreas
btw anyone trying this query should use: "attdispersion"
I see it, yes. Was this an intended change ? I am quite sure that it was
attdisbursion in 7.0 ?
Andreas
I take it from the smiley that you're not serious, but actually it seems
like it might not be a bad idea. I could see appending a CRC to each
tuple record. Comments anyone?
Let's not get paranoid. If you compress the output the file will get checksummed
anyway. I am against a CRC in binary
Perhaps they *should* truncate if the specified length is less than
the original string length. Does Oracle do that?
Yes, it truncates, same as Informix.
I went to fix this and then realized I still don't have an adequate spec
of how Oracle defines these functions. It would seem
I suggest removing the following elog() from line 943 of xlog.c.
It does not give real useful info and is repeated for each checkpoint,
thus filling the log of an otherwise idle server.
elog(LOG, "MoveOfflineLogs: skip %s", xlde->d_name);
DEBUG: MoveOfflineLogs: skip
BTW, it also seems like a good idea to reorder the postmaster's
startup operations so that the data-directory lockfile is checked
before trying to acquire the port lockfile, instead of after. That
way, in the common scenario where you're trying to start a second
postmaster in the same
No, I just tested how solid PgSQL is: I ran a program busily inserting
records into a PG table, then suddenly pulled the power from my machine.
After restarting PG I cannot insert any records into the database table;
all backends are dead without any response (no core dump). Note that I am
I don't believe it's a performance issue, I believe it's that
writes to
blocks greater than 8k cannot be guaranteed 'atomic' by the operating
system. Hence, 32k blocks would break the transactions system. (Or
something like that - am I correct?)
First, 8k is not atomic either. Second,
8k is the standard Unix file system disk transfer size.
Are you sure ? I thought it was 4k on AIX and 2k on Sun.
Andreas
This is a summary of replies.
1. Calculated fields in table definitions . eg.
Create table test (
A Integer,
B integer,
the_sum As (A+B)
);
This functionality can be achieved through the use of views.
Using a view for this
lpad and rpad never truncate, they only pad.
Perhaps they *should* truncate if the specified length is less than
the original string length. Does Oracle do that?
Yes, it truncates, same as Informix.
Andreas
Reason: I want to know if any of these features are scheduled.
1. Calculated fields in table definitions . eg.
Create table test (
A Integer,
B integer,
the_sum As (A+B)
);
This is currently easily done with a procedure that takes a tabletype
We lack a syntax that would enable us to write an on update/delete do instead rule
that would efficiently map an update/delete to a table that is referenced by a view.
Currently the only rule you can implement is one that uses a primary key.
This has the disadvantage of needing a self join to
Eww, so we have this 1/200-second delay for every transaction. Seems
bad to me.
I think as long as it becomes a tunable this isn't a bad idea at
all. Fixing it at 1/200 isn't so great because people not wrapping
large amounts of inserts/updates with transaction blocks will
Since I see that Tom has implemented the "keep an AccessShareLock until
transaction end" philosophy I would like to state a protest.
This is a fundamental change in behavior and I would like to see
a vote on this.
The one example we already know is:
session1
Earlier, Vadim was talking about arranging to share fsyncs of the WAL
log file across transactions (after writing your commit record to the
log, sleep a few milliseconds to see if anyone else fsyncs before you
do; if not, issue the fsync yourself). That would offer less-than-
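A sketch of how that piggybacking could look (the names and shared-memory handling are illustrative, not the actual WAL code):

#include <unistd.h>

extern volatile unsigned int *xlog_flush_count;   /* in shared memory */

static void
commit_flush(int log_fd)
{
    unsigned int seen = *xlog_flush_count;

    usleep(5000);                       /* wait a few ms for company */

    if (*xlog_flush_count == seen)
    {
        fsync(log_fd);                  /* nobody flushed; do it ourselves */
        (*xlog_flush_count)++;          /* needs a lock in real code */
    }
    /* else: another backend's fsync already covered our commit record */
}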
My solution would be to use INT_MIN for all ports, which has the advantage
that the above problematic comparison can be converted to !=,
since no integer will be smaller than INT_MIN.
I agree. When I was looking at this code this morning, I was wondering
what INT_MIN was supposed to
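The proposal would amount to something like this sketch (the constants are the ones in nabstime.h as I read them; treat them as illustrative):

#include <limits.h>

#define NOEND_ABSTIME   ((int) 0x7FFFFFFC)
#define NOSTART_ABSTIME ((int) INT_MIN)

/* no '>' needed for the lower bound: nothing is smaller than INT_MIN */
#define AbsoluteTimeIsReal(time) \
    (((int) (time)) < NOEND_ABSTIME && \
     ((int) (time)) != NOSTART_ABSTIME)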
We need the 7.0 style for compatibility with other DB's. Postgres was
"the" pioneer in this area, but similar functionality is now available in other
DB's.
Could you explain? PostgreSQL can't be compatible at the C level, so why
the SQL compatibility? (I mean the LANGUAGE 'C' specifically)
I have to agree with Alfred here: this does not sound like a feature,
it sounds like a horrid hack. You're giving up *all* consistency
guarantees for a performance gain that is really going to be pretty
minimal in the WAL context.
The "buffered log" still guarantees consistency within the
I am just suggesting that instead of flushing the log on every
transaction end, just do it every X seconds.
Or, maybe more practically, when the log buffer fills.
And of course during checkpoints.
Andreas
because as said, it can be any other language besides C and also
the 'AS file' is weird.
This is interesting. It allows us to control the default behavour of
"C". I would vote to default to 7.0-style when no version is used for
7.1, then default to 7.1 style in 7.2 and later. We don't
Or, maybe more practically, when the log buffer fills.
And of course during checkpoints.
Log filling is too arbitrary. If I commit something, and nothing
happens for 2 hours, we should commit that transaction.
Checkpoint, as I said ?
Andreas
Results: 5000 transactions took ~60 sec in 7.1, ~550 sec in
7.0.2 with fsync and ~60 sec without fsync.
So, it seems that WAL added not just complexity to the system -:)
Wow, this sounds fantastic :-)
I see my concerns were not justified.
Andreas
PS: You might consider applying the patch for (update where not_found) - 100
No, this is not allowed. sqlcode is supposed to be 0 in the above case.
You need to explicitly check for the number of rows updated in your
program if needed.
Andreas
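In ecpg that check could look like this sketch (sqlca.sqlerrd[2] is the usual rows-processed slot, as in Informix; the table and column names are just examples):

exec sql include sqlca;

static void
update_and_check(void)
{
    exec sql update journaleintrag set txt_funktion = trim(txt_funktion);

    if (sqlca.sqlcode == 0 && sqlca.sqlerrd[2] == 0)
    {
        /* statement succeeded but updated no rows */
    }
}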
but there is either an optimizer bug or a code bug in nabstime.c
leading to regression failures in abstime, tinterval and horology.
The rest all passes.
Does anybody see similar behavior ?
IIRC, the same regression tests fail on Linux/Alpha with 7.0.2, even
at -O0. I had assumed it
The macro AbsoluteTimeIsReal does not work somehow.
Hm. That expands to
(((int) time) < ((int) 0x7FFFFFFC) && \
 ((int) time) > ((int) 0x80000001))
There is a special case in nabstime.h for AIX, which imho got swapped.
The normal case for me would be INT_MIN and not the 0x80000001.
There is a comment that I don't understand at all given the below
source code:
/*
* AIX considers 2147483648 == -2147483648 (since they have the
Do we still need the lastsysoid column in pg_database if we do things
this way? Seems like what you really want is to suppress all the
objects that are in template0, so you really only need one lastsysoid
value, namely template0's. The other entries are useless AFAICS.
Where would you
To me, though, the point of independent databases is that they be
*independent*, and therefore if we keep them I'd vote for mapping them
to the top-level SQL notion (catalog, you said?). Schemas ought to be
substructure within a database.
Yes, that was also "sort of" the bottom line of the
3. Schemas are what we call databases. They contain tables, views wtc.
Let us not start this all over again. Our database would correspond to a catalog
if we put schemas below our database hierarchy.
The standard requires that you see all schemas within one catalog in
one user session. We
Will we still have readers-dont-block-writers behaviour?
Sure. The only thing this really affects is VACUUM and schema-altering
commands, which will now have to wait until reader transactions commit.
And "lock table ...", which would need some deadlock resolution code,
because two
relcache. Not good. Forcing the schema update to be held off in the
first place seems the right answer.
Agreed; the only question is how long. My idea would be until statement end,
which is also how Informix does it btw. (If you wanted a prominent example)
Of course a "statement" spans
Unfortunately, session 3 with just SELECT * FROM foo will also wait
for session 1 & session 2 to commit.
Session 3 would wait for session 2 in any case, no?
This is all irrelevant unless someone can make a convincing case that
it's safe to release read locks early. In the words of the
I like that a lot. Solves the whole problem at a stroke, and even
adds some extra functionality (alternate templates).
Do we need an actual enforcement mechanism for "don't modify
template0"?
I think we could live without that for now. If you're worried about it,
one way would be to
Just seems like we'd be forcing non-standard syntax on
ourselves when/if
CREATE DATABASE becomes CREATE SCHEMA;
I do not think this will be the way.
I would assume that the two
statements would become synonymous?
No, I think we need the schema below the database hierarchy.
Thus you
I have updated horology-no-DST-before-1970.out, but that still leaves
us needing updates for horology-1947-PDT.out and
That one is mine (AIX), I'll do it.
Andreas
This somehow becomes moot. Is there a way to make gcc reject those comments ?
Andreas
*** ./src/backend/utils/adt/varbit.c.orig	Wed Nov  1 10:00:22 2000
--- ./src/backend/utils/adt/varbit.c	Tue Nov  7 11:07:28 2000
***************
*** 1212,1218 ****
For a "select colname from tablename" we do not currently hold any lock
until end of tx. This is the situation you described, and the one I am worried about.
That's a bug in itself, because the executor's read lock is grabbed by
heap_beginscan and released by heap_endscan, which means it may
Pete Forman [EMAIL PROTECTED] writes:
The only remaining failure is geometry. The results I got were nearly
identical to geometry-powerpc-aix4.out. The only differences were the
order of rows returned by three of the tables. I'll submit the
results file to pgsql-patches.
into a
different location)
Andreas
-----Original Message-----
From: Pete Forman [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 07 November 2000 11:55
To: [EMAIL PROTECTED]
Subject: RE: [HACKERS] v7.0.3 *pre-release* ...
Zeugswetter Andreas SB writes:
AIX 4.3.2, xlc 3.6.6
I was unable to get runcheck to pass even when I altered all the
LD_LIBRARY_PATH entries in run_check.sh to LIBPATH for the benefit of
AIX. If this cannot be fixed there ought to be an entry added to the
faq-aix.
The fix for AIX below 4.3 would be to relink both postmaster and the libs
with
The problem at hand is that
a plan may be invalidated before it is even finished building. Do you
expect the parse-rewrite-plan-execute pipeline to be prepared to back up
and restart if we notice a relation schema change report halfway down the
process?
Yes, during the processing "of
I'd say that normally you're not using cursors because you intend to throw
away 80% or 90% of the result set, but instead you're using it because
it's convenient in your programming environment (e.g., ecpg). There are
other ways of getting only some rows, this is not it.
I didn't say
After thinking some more about yesterday's discussions, I propose that
we adopt the following planning behavior for cursors:
1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the
basis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be
just as reasonable, if
Hmm. I wonder why cc and gcc are doing different math. Weird.
Not only that, but you get different results with the same compiler
depending on different optimization settings. The joys of
binary floating point...
Same on AIX.
Andreas
I have not followed the entire thread, but if you are in a serializable OR
repeatable-read transaction,
Serializable and repeatable read are the same thing, different wording.
I would think that read-only statements will
need to keep some kind of lock on the rows they read (or the table).
Are there many applications which have many SELECT statements (without
FOR UPDATE) in one tx ?
Why not ?
It seems to me that multiple SELECT statements in a tx have little
meaning unless the tx is executed in SERIALIZABLE isolation level.
E.g. a table is accessed multiple times to
What do other DBs do with their output variables if there is
an embedded SQL
query resulting in a NULL return value? What I mean is:
exec sql select text into :txt:ind from ...
If text is NULL, ind will be set, but does txt change?
I was just told Informix blanks txt.
No, it gives
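For clarity, a sketch of the situation in ecpg (the table name is just an example): when the selected value is NULL, only the indicator is reliable, and the host variable's contents should be treated as undefined.

exec sql begin declare section;
    char  txt[64];
    short ind;
exec sql end declare section;

static void
fetch_text(void)
{
    exec sql select text into :txt:ind from t where id = 1;

    if (ind < 0)
        txt[0] = '\0';      /* value was NULL; don't rely on txt */
}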
What I'm proposing is that once an xact has touched a
table, other xacts should not be able to apply schema updates to that
table until the first xact commits.
No, this would mean too many locks, and would leave the dba with hardly a
chance to alter a table.
Are there many
You were talking about the "select only" case (and no FOR UPDATE either).
I think that select statements need a shared lock for the duration of their
execution only.
You seem to think that locks on individual tuples conflict with
table-wide locks.
Yes, very much so. Any other way
Yes, writes are only necessary when "too many dirty pages"
are in the buffer pool. Those writes can be done by a page flusher
on demand or during checkpoint (don't know if we need checkpoint,
but you referred to doing checkpoints).
How else to know from where in log to start redo and
BTW, avoiding writes is base WAL feature, ie - it'll be
implemented in 7.1.
Wow, great, I thought first step was only to avoid sync :-)
If syncs are not required, then why do the write() call at all?
Yes, writes are only necessary when "too many dirty pages"
are in the buffer pool. Those
BTW, avoiding writes is base WAL feature, ie - it'll be
implemented in 7.1.
Wow, great, I thought first step was only to avoid sync :-)
No, but rollforward is currently the main feature, no ?
I'm going to rollback changes on abort in 7.1. Seems I've
mentioned both redo and UNDO (without
The upper and lower functions don't convert the German
umlauts (ü, ä, ö) but leave them in their original form
Gert (or anyone): what should the result be? I'm German-impaired, so
you'll need to be more specific. Did you compile with locale
turned on?
Multi-byte character
So, pg_dump should be preserved asis.
Just to clarify; I have no intention of doing anything nasty to pg_dump.
All I plan to do is rename the pg_restore to one of
pg_load/pg_import/pg_undump/pmud_gp, to make way for a WAL based restore
utility, although as Bruce suggests, this may be
It is not premature. We will need a WAL based restore for 7.1
or we imho don't need to enable WAL for 7.1 at all.
I missed your point here - why ?!
New backup/restore is not only result of WAL.
What about recovery performance?
Ok, recovery is only improved for indexes, no ?
Performance
As a result do people have any objection to changing pg_restore to
pg_undump? Or pg_load?
Also possible would be Oracle-like names:
pg_exp and pg_imp for export and import
(or pg_export and pg_import).
Load and unload are often tied to data only (no DDL).
I agree that the current
This style of "DROP COLUMN" would change the attribute
numbers whose positions are after the dropped column.
Unfortunately we have no mechanism to invalidate/remove
objects (or prepared plans) which use such attribute numbers.
And I've seen no proposal/discussion to solve this problem
for
I've been reading something about the implementation of histograms, and,
AFAIK, in practice "histograms" is just a cool name for no more than:
1. the top ten values, with a frequency for each
2. the same for the ten least frequent values
3. an average for the rest
Consider that we only need that info for choice
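Concretely, the estimation rule that list implies might look like this sketch (illustrative names; assumes ndistinct > n_mcv):

static double
eq_selectivity(double value,
               const double *mcv_vals,   /* the "top ten" values */
               const double *mcv_freqs,  /* their fractions of the table */
               int n_mcv,
               double ndistinct)
{
    double rest = 1.0;
    int i;

    for (i = 0; i < n_mcv; i++)
    {
        if (mcv_vals[i] == value)
            return mcv_freqs[i];        /* exact frequency is known */
        rest -= mcv_freqs[i];
    }
    /* everything else gets the average of the remaining mass */
    return rest / (ndistinct - n_mcv);
}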
Bottom line is we're not sure what to do now. Opinions from the
floor, anyone?
One thing that comes to my mind is that you (core members) working full
time on PG will produce so much work that we "hobby PgSQL'ers" will
have a hard time keeping up to date.
Thus you will have to be nice
He does ask a legitimate question though. If you are going to have a
LIMIT feature (which of course is not pure SQL), there seems no reason
you shouldn't be able to insert the result into a table.
This is an interesting idea. We don't allow ORDER BY in
INSERT INTO ...
we bite the bullet to the extent of supporting a distinction between
physical and logical column numbers, then ISTM there's no strong need
to do any of this other stuff at all. I'd expect that an inserted or
updated tuple would have a NULL in any physical column position that
doesn't have
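A sketch of the logical-vs-physical mapping idea (illustrative structures, not an actual catalog layout):

typedef struct
{
    int  nlogical;      /* columns visible to SQL statements */
    int  nphysical;     /* slots actually stored in each tuple */
    int *log_to_phys;   /* [nlogical]: physical slot per logical column */
} AttrMap;

/* Dropping a column would delete its map entry; tuples written
 * afterwards simply store NULL in the orphaned physical slot. */
static int
physical_attno(const AttrMap *map, int logical)
{
    return map->log_to_phys[logical];
}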