> I would like to see some checking of this, though. Currently
> I'm doing testing of PostgreSQL under very large numbers of
> connections (2000+) and am finding that there's a huge volume
> of xlog output ... far more than
> comparable RDBMSes. So I think we are logging stuff we
> don't really need to.
> There is no "regular shared locks" in postgres in that sense. Shared locks
> are only used for maintaining FK integrity. Or by manually issuing a
> SELECT FOR SHARE, but that's also for maintaining integrity. MVCC
> rules take care of the "plain reads". If you're not familiar with MVCC,
> it's explained in the documentation.
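The distinction quoted above can be illustrated with a small SQL sketch (the `accounts` table and its rows are hypothetical, not from the thread):

```sql
-- Plain read: no shared lock is taken; MVCC serves the query from
-- the transaction's snapshot without blocking concurrent writers.
SELECT balance FROM accounts WHERE id = 42;

-- Explicit shared lock: used only when integrity demands that the
-- row survive until this transaction ends.
BEGIN;
SELECT balance FROM accounts WHERE id = 42 FOR SHARE;
-- a concurrent DELETE/UPDATE of this row now blocks until COMMIT
COMMIT;
```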
On Sun, 18 Jun 2006, paolo romano wrote:
Anyway, again in theory, if one wanted to minimize logging overhead for
shared locks, one might adopt a different treatment for (i) regular
shared locks (i.e. locks due to plain reads not requiring durability in
case of 2PC) and (ii) shared locks held
paolo romano <[EMAIL PROTECTED]> writes:
> Anyway, again in theory, if one wanted to minimize logging overhead for
> shared locks, one might adopt a different treatment for (i) regular shared
> locks (i.e. locks due to plain reads not requiring durability in case of 2PC)
> and (ii) shared locks
No, it's not safe to release them until the second phase of the commit.

Imagine table foo and table bar. Table bar has a foreign key reference to foo.

1. Transaction A inserts a row to bar, referencing row R in foo. This acquires a shared lock on R.
2. Transaction A precommits, releasing the lock.
3. Transaction B deletes R and commits, leaving the row A inserted with a dangling reference.
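The scenario can be written out as an SQL timeline, assuming `foo(id)` and `bar(foo_id REFERENCES foo)` and a hypothetical server that (incorrectly) dropped shared locks at prepare time:

```sql
-- Session A
BEGIN;
INSERT INTO bar (foo_id) VALUES (1);  -- shared-locks row R (id = 1) in foo
PREPARE TRANSACTION 'a';              -- if R's shared lock were dropped here...

-- Session B, between A's prepare and commit
DELETE FROM foo WHERE id = 1;         -- ...this would succeed

-- Session A, later
COMMIT PREPARED 'a';                  -- bar now references a deleted foo row
```

This is why the shared locks taken for FK enforcement must survive until the prepared transaction is finally committed or rolled back.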
Josh Berkus writes:
>> Please dump some of the WAL segments with xlogdump so we can get a
>> feeling for what's in there.
> OK, will do on Monday's test run. Is it possible for me to run this at the
> end of the test run, or do I need to freeze it in the middle to get useful
> data?
I'd just
Tom,
> Please dump some of the WAL segments with xlogdump so we can get a
> feeling for what's in there.
OK, will do on Monday's test run. Is it possible for me to run this at the
end of the test run, or do I need to freeze it in the middle to get useful
data?
Also, we're toying with the idea ...
Josh Berkus writes:
> I would like to see some checking of this, though. Currently I'm doing
> testing of PostgreSQL under very large numbers of connections (2000+) and am
> finding that there's a huge volume of xlog output ... far more than
> comparable RDBMSes. So I think we are logging stuff we don't really need to.
Tom, Paolo,
> Yeah, it's difficult to believe that multixact stuff could form a
> noticeable fraction of the total WAL load, except perhaps under really
> pathological circumstances, because the code just isn't supposed to be
> exercised often. So I don't think this is worth pursuing. Paolo's free to ...
paolo romano <[EMAIL PROTECTED]> writes:
> Concerning the prepare state of two phase commit, as I was pointing out in my
> previous post, shared locks can safely be released once a transaction gets
> precommitted, hence they do not have to be made durable.
The above statement is plainly wrong.
On Sat, 17 Jun 2006, paolo romano wrote:
The original point I was moving is if there were any concrete reason
(which still I can't see) to require Multixacts recoverability (by means
of logging).
Concerning the prepare state of two phase commit, as I was pointing out
in my previous post, shared locks can safely be released once a transaction
gets precommitted, hence they do not have to be made durable.
Yeah, it's difficult to believe that multixact stuff could form a noticeable fraction of the total WAL load, except perhaps under really pathological circumstances, because the code just isn't supposed to be exercised often. So I don't think this is worth pursuing. Paolo's free to ...
In PostgreSQL, shared locks are not taken when just reading data. They're used to enforce foreign key constraints. When inserting a row to a table with a foreign key, the row in the parent table is locked to keep another transaction from deleting it. It's not safe to release the lock before end of transaction.
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Also, multixacts are only used when two transactions hold a shared lock
> on the same row.
Yeah, it's difficult to believe that multixact stuff could form a
noticeable fraction of the total WAL load, except perhaps under really
pathological circumstances, because the code just isn't supposed to be
exercised often.
On Sat, 17 Jun 2006, paolo romano wrote:
* Reduced I/O Activity: during transaction processing: current workloads
are typically dominated by reads (rather than updates)... and reads give
rise to multixacts (if there are at least two transactions reading the
same page or if an explicit lock request is issued).
On Sat, 17 Jun 2006, paolo romano wrote:
When a transaction enters (successfully) the prepared state it only
retains its exclusive locks and releases any shared locks (i.e.
multixacts)... or, at least, that's how it should be in principle
according to serialization theory, I haven't yet checked ...
Tom Lane <[EMAIL PROTECTED]> wrote:

paolo romano <[EMAIL PROTECTED]> writes:
> The point i am missing is the need to be able to completely recover
> multixacts offsets and members data. These carry information about
> current transactions holding shared locks on db tuples, which should
> not be essential for recovery purposes.
> May be this is needed to support savepoints/subtransactions? Or is it
> something else that i am missing?

It's for two-phase commit. A prepared transaction can hold locks that need to be recovered.

When a transaction enters (successfully) the prepared state it only retains its exclusive locks and releases any shared locks ...
On Sat, 17 Jun 2006, paolo romano wrote:
May be this is needed to support savepoints/subtransactions? Or is it
something else that i am missing?
It's for two-phase commit. A prepared transaction can hold locks that need
to be recovered.
- Heikki
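The point about prepared transactions holding recoverable locks is easy to observe with PostgreSQL's two-phase commit commands (requires `max_prepared_transactions` > 0; the table and the transaction name are illustrative):

```sql
BEGIN;
SELECT * FROM foo WHERE id = 1 FOR SHARE;  -- takes a row-level lock
PREPARE TRANSACTION 'demo';                -- state is forced to disk

-- The server can now crash and restart: 'demo' and the locks it
-- holds are reconstructed during recovery, and remain held until
COMMIT PREPARED 'demo';
-- (or: ROLLBACK PREPARED 'demo')
```

Because recovery must be able to rebuild those locks, the information backing them has to reach durable storage, which is the motivation for the logging being discussed.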
paolo romano <[EMAIL PROTECTED]> writes:
> The point i am missing is the need to be able to completely recover
> multixacts offsets and members data. These carry information about
> current transactions holding shared locks on db tuples, which should
> not be essential for recovery purposes.
This
I am working on a possible extension of postgresql mvcc to support very timely failure masking in the context of three-tier applications, so I am currently studying PostgreSQL internals... I am wondering what are the reasons why both the MultiXactIds and the corresponding OFFSETs and MEMBERs are currently made recoverable by means of logging ...