1. Tx Old is running.
2. Tx S reads the next transaction ID in GetSnapshotData() and is swapped away
before the SInval lock is acquired.
3. Tx New gets new transaction ID, makes changes and commits.
4. Tx Old changes some row R changed by Tx New and commits.
5. Tx S gets snapshot data and now sees R
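The race in steps 1-5 can be sketched as a toy visibility model (all names and rules here are invented simplifications for illustration, not the backend's actual snapshot or tuple-visibility code):

```python
# Toy model of the snapshot race (hypothetical, simplified; not backend code).
# A row version is visible if its creating xid committed, is below the
# snapshot's xmax, and was not listed as running when the snapshot was taken.

def visible(version_xid, committed, snapshot):
    xmax, running = snapshot
    return (committed.get(version_xid, False)
            and version_xid < xmax
            and version_xid not in running)

# Row R's versions, newest first: (creating xid, value).
# Tx Old (xid 10) updated R *over* Tx New's (xid 11) version.
versions = [(10, "old-over-new"), (11, "new"), (5, "original")]
committed = {5: True, 10: True, 11: True}

def read_row(snapshot):
    for xid, value in versions:          # newest visible version wins
        if visible(xid, committed, snapshot):
            return value
    return None

# Broken ordering: Tx S read next-xid (xmax = 11) *before* taking the SInval
# lock; by the time it scanned the proc array, Tx Old and Tx New had both
# committed, so the running set is empty.
broken_snapshot = (11, set())
assert read_row(broken_snapshot) == "old-over-new"   # Tx New's work leaks in via Tx Old

# Fixed ordering: holding SInval first keeps Tx Old (xid 10) in the running
# set, so Tx Old's version (and transitively Tx New's work) stays invisible.
fixed_snapshot = (11, {10})
assert read_row(fixed_snapshot) == "original"
```

The point of the fix shows up in the second assertion: as long as Tx Old is still listed as running, nothing built on top of Tx New's changes can become visible to Tx S.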
lindo=# vacuum analyze;
NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS
NOT THE SAME AS HEAP' (4).
Recreate the index.
NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS
NOT THE SAME AS HEAP' (4).
Recreate the index.
NOTICE:
V. M. [EMAIL PROTECTED] writes:
lindo=# vacuum analyze;
NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS
NOT THE SAME AS HEAP' (4).
Recreate the index.
NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS
NOT THE SAME AS HEAP' (4).
You forget about Tx Old! The point is that changes made by
Tx Old *over* Tx New's changes effectively make those Tx New's
changes *visible* to Tx S!
Yes, but what's that got to do with the order of operations in
GetSnapshotData? The scenario you describe can occur anyway.
Try to
Mikheev, Vadim [EMAIL PROTECTED] writes:
1.1 For this case, acquiring the SInval lock before ReadNewTransactionId()
changes the behavior of Tx Old: it postpones clearing Tx Old's
(and Tx New's) MyProc->xid in xact.c:CommitTransaction(), so Tx S
will see Tx Old as running, i.e. Tx Old's changes
Oh, now I get it: the point is to prevent Tx Old from exiting the set
of still running xacts as seen by Tx S. Okay, it makes sense.
I'll try to add some documentation to explain it.
TIA! I had no time from '99 -:)
Given this, I'm wondering why we bother with having a separate
XidGenLock
One day I found these in my logs, and the vacuum process hung, effectively
locking everybody out of some tables...
Version 7.1.2
VACUUM ANALYZE
NOTICE: RegisterSharedInvalid: SI buffer overflow
NOTICE: InvalidateSharedInvalid: cache state reset
It was sleeping in semop().
Any ideas, or
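The SI-buffer behavior behind those notices can be sketched as a bounded invalidation queue (a toy model with invented names; `SIBuffer`, `register`, and `catch_up` are stand-ins for the real sinval machinery):

```python
# Toy model of a shared-invalidation (SI) message queue (hypothetical names,
# not the real sinval code). When the bounded buffer overflows because a
# reader fell behind, the missed messages cannot be replayed, so the only
# safe recovery is to discard the reader's entire catalog cache.

from collections import deque

class SIBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.messages = deque()
        self.dropped = False

    def register(self, msg):                  # RegisterSharedInvalid analogue
        if len(self.messages) >= self.capacity:
            self.dropped = True                # "SI buffer overflow"
        else:
            self.messages.append(msg)

class Backend:
    def __init__(self):
        self.cache = {"pg_class": "cached", "pg_attribute": "cached"}
        self.reset = False

    def catch_up(self, buf):                   # InvalidateSharedInvalid analogue
        if buf.dropped:
            self.cache.clear()                 # "cache state reset"
            self.reset = True
            buf.dropped = False
        else:
            while buf.messages:
                buf.messages.popleft()         # apply each invalidation

buf = SIBuffer(capacity=4)
for i in range(6):            # a burst of invalidations (e.g. VACUUM ANALYZE)
    buf.register(("invalidate", i))

slow_reader = Backend()
slow_reader.catch_up(buf)
assert slow_reader.reset and slow_reader.cache == {}
```

In this model the overflow itself is harmless; it is the forced full cache reset afterwards that makes a backend briefly expensive, which matches the pair of notices in the log above.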
Mikheev, Vadim [EMAIL PROTECTED] writes:
Given this, I'm wondering why we bother with having a separate
XidGenLock spinlock at all. Why not eliminate it and use SInval
spinlock to lock GetNewTransactionId and ReadNewTransactionId?
Reading all MyProc entries in GetSnapshotData may take a long time - why
Hello all,
At creation time the function body could be parsed and the referenced
objects stored in a system table (or the function could be marked as dirty,
and the referenced objects would be stored at the first compilation and after
each subsequent successful compilation following a dirtying).
Isn't it possible for
What's the fastest way to select the number of rows in a table? If I
use count(*) with no where clause, it uses a seq_scan and takes 4 secs
(122k rows). With a where clause, it uses an index and returns in 1
sec. Selecting count(requestnumber), which is an indexed column, with
no where clause
Isn't spinlock just a few ASM instructions?... on most platforms...
If we change over to something that supports read vs write locking,
it's probably going to be rather more than that ... right now, I'm
pretty dissatisfied with the performance of our spinlocks under load.
We shouldn't use
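A read-vs-write lock of the kind being weighed against plain spinlocks can be sketched as follows (a minimal toy model, with no writer preference or fairness, just to show how much more state it carries than a single test-and-set word):

```python
# Minimal reader-writer lock sketch (hypothetical, not production code):
# readers share the lock, a writer waits until all readers have left.
# Note how much machinery this needs compared to one test-and-set spinlock.
import threading

class RWLock:
    def __init__(self):
        self._mutex = threading.Lock()
        self._readers_done = threading.Condition(self._mutex)
        self._readers = 0

    def acquire_read(self):
        with self._mutex:
            self._readers += 1     # many readers may hold the lock at once

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._readers_done.notify_all()

    def acquire_write(self):
        self._mutex.acquire()      # exclude other writers and new readers
        while self._readers > 0:
            self._readers_done.wait()

    def release_write(self):
        self._mutex.release()

lock = RWLock()
total = 0

def writer():
    global total
    for _ in range(1000):
        lock.acquire_write()
        total += 1                 # protected read-modify-write
        lock.release_write()

threads = [threading.Thread(target=writer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert total == 4000               # no lost updates under contention
```

Each acquire/release here takes a mutex and possibly a condition-variable sleep, versus a handful of instructions for a raw spinlock, which is exactly the cost trade-off under discussion.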
Given this, I'm wondering why we bother with having a separate
XidGenLock spinlock at all. Why not eliminate it and use SInval
spinlock to lock GetNewTransactionId and ReadNewTransactionId?
Reading all MyProc entries in GetSnapshotData may take a long time - why disallow
new Tx to begin?
Because
IMHO we are trying to make a compiled language behave like an interpreted
language.
This is a bottom-to-top approach with no real future. Here is a proposal for
a top-to-bottom approach.
What we do in pgAdmin is that we store objects (functions, views and
triggers) in separate tables called
Peter Eisentraut [EMAIL PROTECTED] writes:
So would anyone object if I get rid of this while doing the i18n pass over
libpq?
Don't forget to fix the numerous places where examples of these messages
appear in the documentation ...
regards, tom lane
Anyway, what's the preferred syntax for triggering the rule
recompilation? I thought about
ALTER RULE {rulename|ALL} RECOMPILE;
Where ALL triggers only those rules where the user actually
has RULE access right on a relation.
In a good world, rules (PL functions
Mikheev, Vadim wrote:
In a good world, rules (PL functions etc.) should be automatically
marked as dirty (i.e. recompilation required) whenever referenced
objects are changed.
Yepp, and it'd be possible for rules (just not right now).
But we're not in a really good world,
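The dirty-marking scheme Vadim describes can be sketched as a dependency map plus an invalidation hook (all names here, `Catalog`, `compile`, `alter`, are invented for illustration):

```python
# Sketch of automatic dirty-marking (hypothetical names, not pg code):
# each compiled object records which objects it referenced at compile
# time; changing any referenced object marks its dependents dirty, and
# a dirty object is recompiled on its next use.

class Catalog:
    def __init__(self):
        self.dependents = {}   # object name -> set of objects compiled against it
        self.dirty = set()
        self.compiled = {}

    def compile(self, name, references, body):
        self.compiled[name] = ("compiled", body)
        for ref in references:
            self.dependents.setdefault(ref, set()).add(name)
        self.dirty.discard(name)

    def alter(self, name):
        # Any change to `name` dirties everything compiled against it.
        for dep in self.dependents.get(name, ()):
            self.dirty.add(dep)
            self.compiled.pop(dep, None)

    def call(self, name, references, body):
        if name in self.dirty or name not in self.compiled:
            self.compile(name, references, body)   # lazy recompilation
        return self.compiled[name]

cat = Catalog()
cat.compile("my_view", references=["my_table"], body="SELECT * FROM my_table")

cat.alter("my_table")                  # e.g. ALTER TABLE ... RENAME
assert "my_view" in cat.dirty          # dependent marked dirty, not silently stale

plan = cat.call("my_view", ["my_table"], "SELECT * FROM my_table")
assert plan[0] == "compiled" and "my_view" not in cat.dirty
```

The hard part the thread keeps circling back to is the `alter` step: for dropped-and-recreated objects there is often nothing left to recompile against, which is why this stays a "good world" ideal.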
Mark Volpe writes:
Good point. Would the issue be resolved by either:
- Only allowing the database superuser to use this mechanism?
If you mean only allowing a superuser to define functions using this
mechanism, that could work. But it would probably make this feature a
lot less attractive,
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Also, can someone tell me why we use malloc in plpgsql?
Plain palloc() won't do because the compiled tree for the function needs
to outlive the current query. However, malloc() is not cool. Really,
these structures ought to be built
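The lifetime problem Tom describes (palloc'd memory dies with the query, while the compiled function tree must outlive it) can be sketched with toy memory contexts (`MemoryContext`, `palloc`, and the context names are simplified stand-ins, not the real mcxt code):

```python
# Toy memory-context model (hypothetical; not the real palloc/mcxt code).
# Allocations live exactly as long as their context; resetting the
# per-query context frees everything palloc'd in it at once, which is
# why a compiled function tree must go into a longer-lived context.

class MemoryContext:
    def __init__(self, name):
        self.name = name
        self.chunks = []

    def palloc(self, obj):
        self.chunks.append(obj)
        return obj

    def reset(self):                  # end of query: bulk-free everything
        self.chunks.clear()

query_context = MemoryContext("per-query")
function_context = MemoryContext("per-function")  # outlives any one query

scratch = query_context.palloc({"tuples": [1, 2, 3]})
compiled_tree = function_context.palloc({"fn": "my_func", "stmts": ["..."]})

query_context.reset()                 # query finished
assert query_context.chunks == []                  # scratch data gone
assert compiled_tree in function_context.chunks    # compiled tree survives
```

Using malloc() is the crude way to get the "survives the query" property; a dedicated long-lived context gives the same lifetime while keeping the bulk-free and accounting benefits.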
I remember awhile ago, someone floated the idea of a dependency view which
would list all objects and what OIDs they have in their plan. (i.e. what
do they depend on).
I'm definitely no expert in this, but to me, one possible implementation
would be to enhance outfuncs to provide for creation
On Thu, 12 Jul 2001, Jan Wieck wrote:
Mikheev, Vadim wrote:
In good world rules (PL functions etc) should be automatically
marked as dirty (ie recompilation required) whenever referenced
objects are changed.
Yepp, and it'd be possible for rules (just not right now).
On Thu, 12 Jul 2001, Jan Wieck wrote:
Alex Pilosov wrote:
I remember awhile ago, someone floated the idea of a dependency view which
would list all objects and what OIDs they have in their plan. (i.e. what
do they depend on).
I'm definitely no expert in this, but to me, one possible
Hiroshi Inoue [EMAIL PROTECTED] writes:
We would have to reconsider *alter table .. rename ..* ..
Yeah, that's one thing that would act differently if we adopt my idea of
considering the source text of the rule to be the primary definition.
It's not clear if this is good or bad, however.
Mikheev, Vadim wrote:
Why is it possible in Oracle's world? -:)
Because of their limited features?
And now we limit our additional advanced features -:)
Think about a language like PL/Tcl. At the time you call a
script for execution, you cannot even be sure that the
Bruce Momjian wrote:
Confirmed. I found a second problem in the file too, very similar.
Patch applied.
Cut'n paste error. Thanks to both of you, good catch.
Jan
In this bit of code in src/pl/plpgsql/src/gram.y in the current CVS
sources, curname_def is defined as PLpgSQL_expr *
Jan Wieck [EMAIL PROTECTED] writes:
are you sure that this doesn't have a severe performance
impact?
It's not provable, of course, until we try it ... but I think the
performance impact would be small. Has anyone complained about the
fact that plpgsql functions are stored as
Jan Wieck [EMAIL PROTECTED] writes:
And PL/pgSQL? We don't prepare all the statements into SPI
plans at compile time. We wait until the separate branches
are needed, so how do you know offhand here?
If we haven't prepared a statement yet, then we don't need to reprepare
it,
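The lazy-preparation point can be sketched as follows (a toy model; `Statement`, `execute`, and `invalidate` are invented names, not the SPI interface):

```python
# Sketch of PL/pgSQL-style lazy statement preparation (hypothetical names):
# a branch's statement is only planned when first executed, so invalidation
# simply drops any existing plan and lets the next execution re-prepare.

class Statement:
    def __init__(self, sql):
        self.sql = sql
        self.plan = None
        self.prepare_count = 0

    def execute(self):
        if self.plan is None:          # first use (or after invalidation)
            self.prepare_count += 1
            self.plan = ("plan", self.sql)
        return self.plan

    def invalidate(self):
        # If never prepared, there is nothing to re-prepare: a no-op.
        self.plan = None

taken = Statement("SELECT 1")
never_taken = Statement("SELECT 2")    # branch never executed

taken.execute()
taken.invalidate()
never_taken.invalidate()               # no-op: no plan exists yet

taken.execute()
assert taken.prepare_count == 2        # prepared, invalidated, re-prepared
assert never_taken.prepare_count == 0  # never prepared at all
```

This is why the not-yet-taken branches pose no problem: invalidation only needs to discard plans that actually exist.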
Jan Wieck [EMAIL PROTECTED] writes:
This isn't local recompilation in current backend. It's
recreation of the pg_rewrite entry for a relation, including
propagation.
Where I'd like to go (see my previous mail) is that pg_rewrite,
pg_attrdef, and friends store *only* the
Jan Wieck [EMAIL PROTECTED] writes:
You cannot control Tcl's bytecode compiler from the outside.
An excellent example. You don't *need* to control Tcl's bytecode
compiler from the outside, because *Tcl gets it right without help*.
It takes care of the function-text-to-derived-form
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
There is of course a difference between the original
CREATE RULE/VIEW statement and the string stored here. This
is because we cannot rely on the actual query buffer but have
to parseback the parsetree like done by the
That might happen eventually, but I'm not all that eager to convert
the postmaster into a (half-baked) substitute for cron. My experience
as a dbadmin is that you need various sorts of routinely-run maintenance
tasks anyway; VACUUM is only one of them. So you're gonna need some
cron
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
This isn't local recompilation in current backend. It's
recreation of the pg_rewrite entry for a relation, including
propagation.
Where I'd like to go (see my previous mail) is that pg_rewrite,
pg_attrdef, and
Jan Wieck [EMAIL PROTECTED] writes:
There is of course a difference between the original
CREATE RULE/VIEW statement and the string stored here. This
is because we cannot rely on the actual query buffer but have
to parseback the parsetree like done by the utility functions
Most, or at least half, of the error messages that libpq itself generates
look like PQwhatever(): this and that went wrong, where PQwhatever is
usually the function that generates the error message.
I consider this practice ugly. If PQwhatever is an exported API function,
then the users
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
In the PL/pgSQL case it *might* be possible. But is it worth
it?
Yes. If we're not going to do it right, I think we needn't bother to do
it at all. "Restart your backend" is just as good an answer, probably
better, than issue
Jan Wieck [EMAIL PROTECTED] writes:
Stop!
We're talking about two different things here.
You're right: fixing obsoleted querytrees stored in pg_rewrite and
similar catalogs is not the same thing as invalidating cached
query plans in plpgsql, SPI, etc.
However, we could turn them into
Jan Wieck [EMAIL PROTECTED] writes:
For most objects, there is no such recompile possible - at
least not without storing a lot more information than now.
Create a function and based on that an operator. Then you
drop the function and create another one. Hmmm,
Hi,
I'd like to add another column to pg_rewrite, holding the
string representation of the rewrite rule. A new utility
command will then allow one to recreate the rules (internally
DROP/CREATE, but that doesn't matter).
This would be a big help in case anything used
The question is really whether you ever want a client to get a
rejected result from an open attempt, or whether you'd rather they
got a report from the back end telling them they can't log in. The
second is more polite but a lot more expensive. That expense might
really matter if you