On Sat, 17 Jun 2006, paolo romano wrote:
Maybe this is needed to support savepoints/subtransactions? Or is it
something else that I am missing?
It's for two-phase commit. A prepared transaction can hold locks that need
to be recovered.
- Heikki
Tom Lane [EMAIL PROTECTED] wrote:
paolo romano <[EMAIL PROTECTED]> writes:
The point I am missing is the need to be able to completely recover
multixact offsets and members data. These carry information about current
transactions holding shared locks on db tuples, which should not be
A module magic patch was added recently and I'm a bit uncertain what the implications are
for the external PL modules. Does it affect them at all? Will I need to provide separate
binaries for each bug fix release even though the APIs do not change? Exactly how is the
magic determined?
On 6/16/06, Mark Woodward [EMAIL PROTECTED] wrote:
Chris Campbell [EMAIL PROTECTED] writes:
I heard an interesting feature request today: preventing the
execution of a DELETE or UPDATE query that does not have a WHERE
clause.
These syntaxes are required by the SQL spec. Furthermore,
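Pending anything at the parser level, a site that wants this guard today can build it in userland. A minimal sketch, assuming a hypothetical `accounts` table: revoke direct DELETE and route it through a wrapper that refuses an empty predicate (the string concatenation is illustrative only and is not injection-safe):

```sql
-- Hypothetical guard: no direct DELETE, only via a predicate-requiring wrapper.
REVOKE DELETE ON accounts FROM PUBLIC;

CREATE FUNCTION delete_accounts(pred text) RETURNS void AS $$
BEGIN
    IF pred IS NULL OR length(trim(pred)) = 0 THEN
        RAISE EXCEPTION 'refusing DELETE on accounts without a WHERE clause';
    END IF;
    -- Illustrative dynamic SQL; do not expose pred to untrusted callers.
    EXECUTE 'DELETE FROM accounts WHERE ' || pred;
END;
$$ LANGUAGE plpgsql;
```

The same approach works for UPDATE; the point is that the restriction lives in a wrapper rather than in the SQL grammar itself.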
On Sat, 17 Jun 2006, paolo romano wrote:
When a transaction enters (successfully) the prepared state it only
retains its exclusive locks and releases any shared locks (i.e.
multixacts)... or, at least, that's how it should be in principle
according to serialization theory; I haven't yet
On Sat, 17 Jun 2006, paolo romano wrote:
* Reduced I/O activity during transaction processing: current workloads
are typically dominated by reads (rather than updates)... and reads give
rise to multixacts (if there are at least two transactions reading the
same page or if an explicit lock
Thomas Hallgren [EMAIL PROTECTED] writes:
A module magic patch was added recently and I'm a bit uncertain what the
implications are
for the external PL modules. Does it affect them at all?
Yes.
Will I need to provide separate
binaries for each bug fix release even though the APIs do not
Heikki Linnakangas [EMAIL PROTECTED] writes:
Also, multixacts are only used when two transactions hold a shared lock
on the same row.
Yeah, it's difficult to believe that multixact stuff could form a
noticeable fraction of the total WAL load, except perhaps under really
pathological
Tom Lane wrote:
No, each major release (8.2, 8.3, etc). There are hardly ever any major
releases where you wouldn't need a new compilation anyway ...
True. I'm all in favor of a magic used this way. It will save me some grief.
Regards,
Thomas Hallgren
In PostgreSQL, shared locks are not taken when just reading data. They're used to enforce foreign key constraints. When inserting a row to a table with a foreign key, the row in the parent table is locked to keep another transaction from deleting it. It's not safe to release the lock before end of
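Heikki's point can be observed from two psql sessions. A minimal sketch, assuming hypothetical `parent`/`child` tables linked by a foreign key:

```sql
-- Session 1: inserting a child row takes a shared row-level lock on the
-- referenced parent row for the RI check (PostgreSQL 8.1+):
BEGIN;
INSERT INTO child (parent_id) VALUES (42);

-- Session 2, concurrently: blocked until session 1 commits or aborts,
-- so the parent row cannot be deleted out from under the new child:
DELETE FROM parent WHERE id = 42;
```

Releasing that shared lock before session 1 ends would let the DELETE proceed and leave an orphaned child row, which is why the lock must survive to end of transaction.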
Bruce Momjian pgman@candle.pha.pa.us writes:
1) Run this script and record the time reported:
ftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.script
One thing you neglected to specify is that the test must be done on a
NON ASSERT CHECKING build of CVS HEAD (or recent head, at
On Jun 15, 2006, at 9:45 PM, Toru SHIMOGAKI wrote:
NTT has some ideas about index creation during a large amount of
data loading. Our approach is the following: index tuples are
created at the same time as heap tuples and added into heapsort. In
addition, we use old index tuples as sorted
Moving to osdldbt-general and dropping Tom and Marc.
On Jun 13, 2006, at 1:18 PM, Kris Kennaway wrote:
On Tue, Jun 13, 2006 at 12:29:14PM -0500, Jim C. Nasby wrote:
Unless supersmack has improved substantially, you're unlikely to find
much interest. Last I heard it was a pretty brain-dead
On Jun 13, 2006, at 9:42 PM, Kris Kennaway wrote:
BTW, there's another FBSD performance oddity I've run across.
Running
pg_dump -t email_contrib -COx stats | bzip2 > ec.sql.bz2
which dumps the email_contrib table to bzip2 then to disk, the OS
won't use more than 1 CPU on an SMP system...
I've gotten some insight into the stats collection issues by monitoring
Bruce's test case with oprofile (http://oprofile.sourceforge.net/).
Test conditions: PG CVS HEAD, built with --enable-debug --disable-cassert
(debug symbols are needed for oprofile), on current Fedora Core 5
(Linux kernel
On Sat, 17 Jun 2006, paolo romano wrote:
The original point I was moving is if there were any concrete reason
(which still I can't see) to require Multixacts recoverability (by means
of logging).
Concerning the prepare state of two phase commit, as I was pointing out
in my previous post,
Tom,
18% in s_lock is definitely bad :-(. Were you able to determine which
LWLock(s) are accounting for the contention?
Gavin Sherry and Tom Daly (Sun) are currently working on identifying the
problem lock using DLWLOCK_STATS. Any luck, Gavin?
--
Josh Berkus
PostgreSQL @ Sun
San Francisco
paolo romano [EMAIL PROTECTED] writes:
Concerning the prepare state of two phase commit, as I was pointing out in my
previous post, shared locks can safely be released once a transaction gets
precommitted, hence they do not have to be made durable.
The above statement is plainly wrong. It
Tom, Paolo,
Yeah, it's difficult to believe that multixact stuff could form a
noticeable fraction of the total WAL load, except perhaps under really
pathological circumstances, because the code just isn't supposed to be
exercised often. So I don't think this is worth pursuing. Paolo's free
Josh Berkus josh@agliodbs.com writes:
I would like to see some checking of this, though. Currently I'm doing
testing of PostgreSQL under very large numbers of connections (2000+) and am
finding that there's a huge volume of xlog output ... far more than
comparable RDBMSes. So I think we
On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:
Folks,
I am thrilled to inform you all that Sun has just donated a fully
loaded
T2000 system to the PostgreSQL community, and it's being setup by
Corey
Shields at OSL (osuosl.org) and should be online probably early next
week. The system has
On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:
First thing as soon as I have a login, of course, is to set up a
Buildfarm
instance.
Keep in mind that buildfarm clients and benchmarking stuff don't
usually mix well.
--
Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED]
Tom,
Please dump some of the WAL segments with xlogdump so we can get a
feeling for what's in there.
OK, will do on Monday's test run. Is it possible for me to run this at the
end of the test run, or do I need to freeze it in the middle to get useful
data?
Also, we're toying with the idea
Josh Berkus josh@agliodbs.com writes:
Please dump some of the WAL segments with xlogdump so we can get a
feeling for what's in there.
OK, will do on Monday's test run. Is it possible for me to run this at the
end of the test run, or do I need to freeze it in the middle to get useful
In view of my oprofile results
http://archives.postgresql.org/pgsql-hackers/2006-06/msg00859.php
I'm thinking we need some major surgery on the way that the stats
collection mechanism works.
It strikes me that we are using a single communication mechanism to
handle what are really two distinct
Jim Nasby wrote:
On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:
First thing as soon as I have a login, of course, is to set up a
Buildfarm
instance.
Keep in mind that buildfarm clients and benchmarking stuff don't
usually mix well.
On a fast machine like this a buildfarm run is
It strikes me that we are using a single communication mechanism to
handle what are really two distinct kinds of data:
Interesting.
I recently read a paper on how to get rid of locks for this kind of
pattern.
* For the Command String
- Problem: need to display the currently
On Fri, Jun 16, 2006 at 10:58:05PM -0400, Tom Lane wrote:
The alternative I'm currently thinking about is to build and install an
auto-generated file comparable to fmgroids.h, containing *only* the type
OID macro #defines extracted from pg_type.h. This would require just a
trivial amount of
PFC [EMAIL PROTECTED] writes:
So, the proposal:
On executing a command, Backend stores the command string, then
overwrites the counter with (counter + 1) and with the timestamp of
command start.
Periodically, like every N seconds, a separate process reads the
counter,
Dears,
I'm looking for a way to univocally identify the server on which a sql function
or statement is running. My idea would be something close to the value returned
by a 'host -f' under linux: the FQDN of the host, but even a serial code or a
number would be fine to me. It needs only to be
Giampaolo Tomassoni [EMAIL PROTECTED] writes:
I'm looking for a way to univocally identify the server on which a sql
function or statement is running. My idea would be something close to the
value returned by a 'host -f' under linux: the FQDN of the host, but even a
serial code or a number
[...]
Perhaps inet_server_addr() and inet_server_port() would answer. These
aren't super-useful on local connections, however.
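Concretely, for a TCP client these built-ins identify the server end of the connection; over a Unix-domain (local) connection both return NULL, which is the limitation noted:

```sql
-- Over TCP: the server's address and port for this connection.
SELECT inet_server_addr(), inet_server_port();
-- Over a local (Unix-socket) connection: both NULL.
```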
No, in fact. Mine are local connections...
How immutable do you want it to be exactly? The system_identifier
embedded in pg_control might be interesting if you
I assume by 'univocal' you mean unequivocal.
Can you set it up in a table per server? or in a file? or would you
rather use a guuid?
And how is this to be made available?
And is it to be unique per machine, or per cluster (since you can have
many postgresql clusters on one machine).
Andrew Dunstan [EMAIL PROTECTED] writes:
And is it to be unique per machine, or per cluster (since you can have
many postgresql clusters on one machine).
Actually, there are *lots* of ambiguities there. For instance, if you
pg_dump and reload a cluster do you want the ID to change or stay the
I assume by 'univocal' you mean unequivocal.
Yes, sorry about that: I'm writing italish...
Can you set it up in a table per server? or in a file? or would you
rather use a guuid?
A per-server table will probably be my way.
And how is this to be made available?
Well, a function would be
PFC [EMAIL PROTECTED] writes:
- Will only be of use if the command is taking a long, long time.
So, it need not be real-time; no problem if the data comes with a
little delay, or not at all if the command executes quickly.
I would dispute this point. Picture a system running a
Greg Stark [EMAIL PROTECTED] writes:
PFC [EMAIL PROTECTED] writes:
- Will only be of use if the command is taking a long, long time.
So, it need not be real-time; no problem if the data comes with a
little delay, or not at all if the command executes quickly.
I would dispute