We just discussed this in detail with Simon, and it looks like we have
5 (!) different but related problems:
1) The original problem of freeze then crash, leaving too high values in
relminxid and datminxid. If you then run vacuum, it might truncate CLOG
and you lose the commit status of the
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Tom Lane wrote:
I think it's premature to start writing
patches until we've decided how this really needs to work.
Not logging hint-bit updates seems safe to me. As long as we have the
clog, the hint-bit is just a hint. The
Heikki Linnakangas wrote:
We just discussed this in detail with Simon, and it looks like we have
5 (!) different but related problems:
Wow, four of them are mine :-(
2) vactuple_get_minxid doesn't take into account xmax's of tuples that
have HEAP_XMAX_INVALID set. That's a problem:
On Fri, Oct 27, 2006 at 01:19:25PM -, Greg Sabino Mullane wrote:
This is documented clearly on the psql man page, so it is simply not a
bug, and changing this would probably break lots of legacy scripts.
In a general sense, perhaps, but in this *particular* case, I don't
see what harm
Alvaro Herrera [EMAIL PROTECTED] writes:
Ugh. Is there another solution to this? Say, sync the buffer so that
the hint bits are written to disk?
Yeah. The original design for all this is explained by the notes for
TruncateCLOG:
* When this is called, we know that the database logically
Hello,
I recently posted about a word being too long with Tsearch2. That isn't
actually the problem I am trying to solve (thanks for the feedback
though, now I understand it).
The problem I am after is the 8k index size issue. It is very easy to
get a GIST index (especially when using tsearch2) that is larger than that.
On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
Chris Campbell [EMAIL PROTECTED] writes:
Is there additional logging information I can turn on to get more
details? I guess I need to see exactly what locks both processes
hold, and what queries they were running when the deadlock
On Mon, 2006-10-30 at 12:05 -0500, Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Ugh. Is there another solution to this? Say, sync the buffer so that
the hint bits are written to disk?
Yeah. The original design for all this is explained by the notes for
TruncateCLOG:
* When this is called, we know that the database logically
Simon Riggs wrote:
On Mon, 2006-10-30 at 12:05 -0500, Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Ugh. Is there another solution to this? Say, sync the buffer so that
the hint bits are written to disk?
Yeah. The original design for all this is explained by the notes
Simon Riggs [EMAIL PROTECTED] writes:
ISTM we only need to flush iff the clog would be truncated when we
update relminxid.
Wrong :-( If the relvacuumxid change (not relminxid ... as I said, these
names aren't very transparent) makes it to disk but not all the hint
bits do, you're at risk.
Alvaro Herrera [EMAIL PROTECTED] writes:
In fact I don't understand what's the point about multiple databases vs.
a single database. Surely a checkpoint would flush all buffers in all
databases, no?
Yeah --- all the ones that are dirty *now*. Consider the case where you
vacuum DB X, update
On Mon, 2006-10-30 at 16:58 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
ISTM we only need to flush iff the clog would be truncated when we
update relminxid.
Wrong :-( If the relvacuumxid change (not relminxid ... as I said, these
names aren't very transparent) makes it
Simon Riggs [EMAIL PROTECTED] writes:
I don't agree: If the truncation points are at 1 million, 2 million etc,
then if we advance the relvacuumxid from 1.2 million to 1.5 million,
then crash, the hint bits for that last vacuum are lost. Sounds bad,
but we have not truncated clog, so there is
On Mon, 2006-10-30 at 19:18 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I don't agree: If the truncation points are at 1 million, 2 million etc,
then if we advance the relvacuumxid from 1.2 million to 1.5 million,
then crash, the hint bits for that last vacuum are lost.
Simon Riggs [EMAIL PROTECTED] writes:
That was understood; in the above example I agree you need to flush. If
you don't pass a truncation point, you don't need to flush whether or
not you actually truncate. So we don't need to flush *every* time,
OK, but does that actually do much of anything
Neil Conway [EMAIL PROTECTED] writes:
On Tue, 2006-10-31 at 01:07 +, Simon Riggs wrote:
As requested.
Applied, thanks for the patch.
This patch converted a correct statement into a lie: there is not
anything that will cause begin/commit in a script file to fail just
because you wrapped
On Friday 27 October 2006 19:38, Joe wrote:
Hi Beau,
On Fri, 2006-10-27 at 16:23 -0700, beau hargis wrote:
I am hoping that there is an easy way to obtain case-preservation with
case-insensitivity, or at the very least, case-preservation and complete
case-sensitivity, or case-preservation
beau hargis [EMAIL PROTECTED] writes:
Considering the differences that already exist between database systems and
their varying compliance with SQL and the various extensions that have been
created, I do not consider that the preservation of case for identifiers
would violate any SQL
The problem I am after is the 8k index size issue. It is very easy to
get a GIST index (especially when using tsearch2) that is larger than that.
Hmm, the tsearch2 GiST index is specially designed to support huge index entries:
first, every lexeme in the tsvector is transformed to a hash value (with a
At Teradata, we certainly interpreted the spec to allow case-preserving,
but case-insensitive, identifiers.
Users really liked it that way: If you re-created a CREATE TABLE
statement from the catalog, you could get back exactly the case the user
had entered, but people using the table didn't need
Chuck McDevitt [EMAIL PROTECTED] writes:
At Teradata, we certainly interpreted the spec to allow case-preserving,
but case-insensitive, identifiers.
Really?
As I see it, the controlling parts of the SQL spec are (SQL99 sec 5.2)
26) A regular identifier and a delimited identifier are