On Fri, 2006-05-05 at 18:04 -0400, Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
On Fri, 5 May 2006, Tom Lane wrote:
BTW, I just realized another bug in the patch: btbulkdelete fails to
guarantee that it visits every page in the index.
The first solution that occurs to me
On Sun, May 07, 2006 at 08:21:43PM -0400, Tom Lane wrote:
changes in any of the following:
PG_VERSION_NUM
CATALOG_VERSION_NO
the size of 8 basic C types
BLCKSZ
NAMEDATALEN
HAVE_INT64_TIMESTAMP
INDEX_MAX_KEYS
FUNC_MAX_ARGS
VARHDRSZ
MAXDIM
The compiler used (only
it's considered the linker's job to prevent loading 32-bit code into a 64-bit executable or vice versa, so I don't think we need to be checking for common assumptions about sizeof(long).
I know ELF headers contain some of this info, and unix in general doesn't try to allow different
Simon Riggs [EMAIL PROTECTED] writes:
I read your earlier post about needing to lock everything and spent some
time thinking about this. The issue of needing to lock everything means
that we would never be able to do a partial vacuum of an index i.e.
remove one page without a scan. I'm more
Martijn van Oosterhout kleptog@svana.org writes:
On Sun, May 07, 2006 at 08:21:43PM -0400, Tom Lane wrote:
That seems way overkill to me. FUNC_MAX_ARGS is good to check, but
most of those other things are noncritical for typical add-on modules.
I was trying to find variables that when
Alvaro Herrera [EMAIL PROTECTED] writes:
I'm not too sure about the XLOG routines -- I don't understand very well
the business about attaching the changes to a buffer; I thought at first
that since all the changes go to a tuple, they all belong to the buffer,
so I assigned a single XLogRecData
On Mon, May 08, 2006 at 10:32:47AM -0400, Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
I was trying to find variables that when changed would make some things
corrupt. For example, a changed NAMEDATALEN will make any use of the
syscache a source of errors. A change in
On Mon, 2006-05-08 at 10:18 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I read your earlier post about needing to lock everything and spent some
time thinking about this. The issue of needing to lock everything means
that we would never be able to do a partial vacuum of an
Simon Riggs [EMAIL PROTECTED] writes:
That wasn't the proposal. Every split would be marked and stay marked
until those blocks were VACUUMed. The data used to mark is readily
available and doesn't rely on whether or not VACUUM is running.
IMHO this does work.
OK, I misunderstood what you had
On Mon, 2006-05-08 at 11:26 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
That wasn't the proposal. Every split would be marked and stay marked
until those blocks were VACUUMed. The data used to mark is readily
available and doesn't rely on whether or not VACUUM is running.
Per feedback, here is an updated version. As was pointed out, the prior
version was checking stuff that either changed too often to be useful
or was never going to change at all. The error reporting is cleaned up too: it now releases the module before throwing the error. It now only
checks four
Simon Riggs [EMAIL PROTECTED] writes:
So we just optimised for slowly-but-continually churning tables (i.e.
DELETEs match INSERTs, or just UPDATEs). i.e. we just improved VACUUM
performance for those that don't need it that often. That might be the
common case, but it isn't the one that's
Tom Lane wrote:
But why do you need your own xlogging at all? Shouldn't these actions
be perfectly ordinary updates of the relevant catalog tuples?
The XLog entry can be smaller, AFAICT (we need to store the whole new
tuple in a heap_update, right?). If that's not a worthy goal I'll
gladly
Alvaro Herrera [EMAIL PROTECTED] writes:
Ah, there's another reason, and it's that I'm rewriting the tuple in
place, not calling heap_update.
Is that really a good idea, as compared to using heap_update?
(Now, if you're combining this with the very grotty relpages/reltuples
update code, then
Tom,
On 5/8/06 11:46 AM, Tom Lane [EMAIL PROTECTED] wrote:
I made a table of 16M rows with an
index over a random-data integer column. With a thoroughly disordered
index (built on-the-fly as the random data was inserted), the time to
VACUUM after deleting a small number of rows was 615
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Ah, there's another reason, and it's that I'm rewriting the tuple in
place, not calling heap_update.
Is that really a good idea, as compared to using heap_update?
Not sure -- we would leave dead tuples after the VACUUM is finished
Alvaro Herrera [EMAIL PROTECTED] writes:
Tom Lane wrote:
(Now, if you're combining this with the very grotty relpages/reltuples
update code, then I'm all for making that xlog properly --- we've gotten
away without it so far but it really should xlog the change.)
I hadn't done that, but I'll
Tom Lane wrote:
BTW, one thing I was looking at last week was whether we couldn't
refactor the relpages/reltuples updates to be done in a cleaner place.
Right now UpdateStats is called (for indexes) directly from the index
AM, which sucks from a modularity point of view, and what's worse it
Alvaro Herrera [EMAIL PROTECTED] writes:
Tom Lane wrote:
We should reorganize things so this is done once at the outer level.
It'd require some change of the ambuild() API, but considering we're
hacking several other aspects of the AM API in this development cycle,
that doesn't bother me.
Tom Lane wrote:
BTW, has anyone looked at the possibility of driving VC from gmake,
so that we can continue to use the same Makefiles? Or is that just
out of the question?
The Coin 3D project had a wrapper program that makes VC look like a Unix
compiler. Looking at
On Mon, 2006-05-08 at 14:46 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
So we just optimised for slowly-but-continually churning tables (i.e.
DELETEs match INSERTs, or just UPDATEs). i.e. we just improved VACUUM
performance for those that don't need it that often. That
Hi Magnus.
I understood that this helped.
#define PGBINDIR /usr/local/pgsql/bin
#define PGSHAREDIR /usr/local/pgsql/share
#define SYSCONFDIR /usr/local/pgsql/etc
#define INCLUDEDIR /usr/local/pgsql/include
#define PKGINCLUDEDIR /usr/local/pgsql/include
#define INCLUDEDIRSERVER