Joe Conway wrote:
Christopher Kings-Lynne wrote:
As an implementation issue, I wonder why these things are hacking
permanent on-disk data structures anyway, when what is wanted is only a
temporary suspension of triggers/rules within a single backend. Some
kind of superuser-only SET variable
Andreas Pflug wrote:
Joe Conway wrote:
Christopher Kings-Lynne wrote:
As an implementation issue, I wonder why these things are
hacking permanent on-disk data structures anyway, when what is
wanted is only a temporary suspension of triggers/rules within
a single backend. Some kind of
Does anyone know if the code for the Main Memory Storage Manager is
available somewhere (Berkeley??)?
Also, in released versions, MM.c is included but not used. Does anyone
know whether it should work if we define STABLE_MEMORY_STORAGE, or does
a lot of coding have to be done for it to work?
Please CC
Joe Conway [EMAIL PROTECTED] writes:
I didn't dispute the fact that disabling triggers (without unsupported
hacks) is useful. I did agree with Tom that doing so with permanent
commands is dangerous. I think the superuser-only SET variable idea is
the best one I've heard for a way to support
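The session-level switch being discussed here did eventually appear in later PostgreSQL releases as the `session_replication_role` setting, which suppresses ordinary triggers for the current session without touching on-disk catalog state. A sketch of the intended usage (the table name and file path are illustrative):

```sql
-- Sketch: suppress ordinary triggers for this session only,
-- instead of permanently altering pg_trigger on disk.
SET session_replication_role = 'replica';   -- ordinary triggers do not fire
COPY mytable FROM '/tmp/dump.copy';         -- bulk load without trigger side effects
SET session_replication_role = 'origin';    -- restore normal trigger firing
```

Because the setting is scoped to the session, a crash or disconnect cannot leave the database with triggers permanently disabled, which is exactly the hazard raised against the catalog-hacking approach.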
Tom Lane wrote:
This is a dead end. The --disable-triggers hack is already a time bomb
waiting to happen, because all dump scripts using it will break if we
ever change the catalog representations it is hacking. Disabling rules
by such methods is no better an idea; it'd double our exposure to
Oh, yeah, that would be bad. So you want to invalidate the entire
session on any error? That could be done.
--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 359-1001
Well, that's exactly the current behaviour, which creates
and I'm willing to entertain other suggestions.
Very nice, but you missed the most important one: the command tag.
--
Rod Taylor rbt [at] rbt [dot] ca
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
PGP Key: http://www.rbt.ca/rbtpub.asc
I am running PostgreSQL 7.3.3 on OS X Server 10.2
The database has been running just fine for quite some time now, but
this morning it began pitching the error:
ERROR: cannot read block 176 of tfxtrade_details: Numerical result
out of range
any time the table tfxtrade_details is accessed.
A
On Thu, 12 Feb 2004 21:18:10 +0100, Jo Voordeckers wrote:
restored from a live system backup. One CLOG file turned out to be
missing according to syslog; however, no database transactions are used
in this database application.
Hmm, I think I misunderstood the transactional nature of the
Can we clarify what is meant by the client? It is my
expectation/desire that the client library would handle this as a
setting similar to AutoCommit, which would implicitly protect each
statement within a nested block (savepoint), causing only itself to
abort. Such as, OnError=[abort|continue],
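The per-statement protection described above can be sketched in plain SQL with savepoints; this is the pattern a client library would wrap around each statement (table and savepoint names are illustrative):

```sql
BEGIN;
SAVEPOINT per_stmt;
INSERT INTO accounts VALUES (1, 'alice');   -- this statement may fail
-- Client library behaviour: on error, ROLLBACK TO SAVEPOINT per_stmt
-- and continue with the next statement; on success, release the
-- savepoint so only this statement was ever at risk of aborting.
RELEASE SAVEPOINT per_stmt;
COMMIT;
```

With this wrapping, a failed statement rolls back only to its own savepoint rather than aborting the whole enclosing transaction, which is the `OnError=continue` behaviour being proposed.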
Hello,
When was the last time you ran a reindex? Or a vacuum / vacuum full?
Sincerely,
Joshua D. Drake
On Sat, 14 Feb 2004, Jason Essington wrote:
I am running PostgreSQL 7.3.3 on OS X Server 10.2
The database has been running just fine for quite some time now, but
this morning it began
Both vacuum [full] and reindex fail with that same error.
vacuum is run regularly via a cron job.
-jason
On Feb 14, 2004, at 2:29 PM, Joshua D. Drake wrote:
Hello,
When was the last time you ran a reindex? Or a vacuum / vacuum full?
Sincerely,
Joshua D. Drake
On Sat, 14 Feb 2004, Jason
Kernel 2.4.23 on Red Hat 8.0. Please cc any response from the
linux-kernel list. TIA.
On or about 7:50am Friday 13 2004 my postgresql server
broke down. I can ssh in and use top and ps, but
postgresql stops accepting connections. A small perl
script that logs the system load average also hangs. I
cannot
Jo Voordeckers wrote:
What exactly are these PG_CLOG files for? They seem pretty
important, and they don't correspond to a single DB. What I don't
understand is that some queries error on this while others run
fine... And again, how can you identify the PG_CLOG files'
corresponding DBs?
The
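To the question quoted above: pg_clog (the commit log) records the commit status of every transaction in the whole cluster, two bits per transaction ID, so the files belong to no single database. That is also why some queries fail while others run fine: only queries that touch tuples whose XIDs fall in the damaged segment need to consult it. A minimal sketch of the XID-to-segment mapping, assuming the usual build defaults of 8 kB pages and 32 pages per SLRU segment (the constants and function name are mine, not from the thread):

```python
# Map a transaction ID to the pg_clog segment file holding its status.
# Assumptions (usual build-time defaults): 2 status bits per transaction,
# 8192-byte pages, 32 pages per SLRU segment, hex-named segment files.
XACTS_PER_BYTE = 4          # 2 bits of commit status per transaction
BLCKSZ = 8192               # default page size
PAGES_PER_SEGMENT = 32      # SLRU pages grouped into one file

def clog_segment_for_xid(xid: int) -> str:
    """Return the pg_clog segment filename (e.g. '0000') covering this XID."""
    xacts_per_segment = XACTS_PER_BYTE * BLCKSZ * PAGES_PER_SEGMENT
    return "%04X" % (xid // xacts_per_segment)
```

Under these defaults each segment covers 1,048,576 transactions, so a missing segment file makes roughly a million XIDs' commit status unreadable, hitting only the tables whose rows carry those XIDs.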
Hello,
There are a couple of things it could be. I would suggest that you take
down the database, start it up with -P (I think it is -o '-P'; it might
be -p '-O', I don't recall) and try to reindex the database itself.
You can also do a vacuum verbose and see if you get some more errors you
Hello,
I personally ran into the exact same thing with another customer.
They are running RedHat 8.0 (kernel 2.4.20 at the time). We had to
upgrade them to 2.4.23 and reboot. Worked like a charm. This was about
two months ago. I swear it was almost the exact same error.
Sincerely,
Joshua D.
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
I ended up not using a regex, which seemed to be a little heavy handed,
but just writing a small custom recognition function, that should (and I
think does) mimic the pattern recognition for these tokens used by the
backend lexer.
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
... But how about
42$foo$
This is a syntax error in 7.4, and we propose to redefine it as an
integer literal '42' followed by a dollar-quote start symbol.
The test should not succeed anywhere in the string '42$foo$'.
No, it won't.
Andrew Dunstan [EMAIL PROTECTED] writes:
I ended up not using a regex, which seemed to be a little heavy handed,
but just writing a small custom recognition function, that should (and I
think does) mimic the pattern recognition for these tokens used by the
backend lexer.
I looked at this
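The "small custom recognition function" being discussed is not shown in the thread, but its behaviour can be sketched from the backend lexer's documented rule: a dollar-quote start symbol is `$`, an optional tag that must begin with a letter or underscore, then `$`. That is why `$foo$` is recognized at offset 2 of `42$foo$` but never at offset 0 (the function and names below are my own illustration, not the patch's code):

```python
import re

# Sketch of the backend lexer's dollar-quote start symbol:
# '$' + optional tag + '$', where the tag may not begin with a digit.
# (The real lexer also allows high-bit-set bytes in tags; omitted here.)
DOLLAR_QUOTE_START = re.compile(r'\$[A-Za-z_][A-Za-z_0-9]*\$|\$\$')

def dollar_quote_start_at(s: str, pos: int):
    """Return the dollar-quote start symbol at position pos, or None."""
    m = DOLLAR_QUOTE_START.match(s, pos)
    return m.group(0) if m else None
```

So for the example in the thread, `42$foo$` tokenizes as the integer literal `42` followed by the start symbol `$foo$`; the recognizer matches at position 2 and nowhere earlier, and a digit-leading tag like `$1foo$` is rejected outright.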