On 11.04.10 20:47, Robert Haas wrote:
On Sun, Apr 11, 2010 at 10:26 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Robert Haas wrote:
2010/4/10 Andrew Dunstan and...@dunslane.net:
Heikki Linnakangas wrote:
1. Keep the materialized view up-to-date when the base tables
On 28.12.09 18:54, Kevin Grittner wrote:
To give some idea of the scope of development, Michael Cahill added
SSI to InnoDB by modifying 250 lines of code and adding 450 lines of
code; however, InnoDB already had the S2PL option and the prototype
implementation isn't as sophisticated as I feel
Hi
I've completed a (first) working version of an extension that allows
easier introspection of composite types from SQL and PL/pgSQL.
The original proposal and ensuing discussion can be found here:
http://archives.postgresql.org/pgsql-hackers/2009-11/msg00695.php
The extension can be found on:
Hi
HEAD fails to compile in 64-bit mode on Mac OS X 10.6 with gcc 4.2 and
-Werror.
What happens is that INT64_FORMAT gets defined as %ld (which is
correct - long and unsigned long are 64 bits wide on x86_64), but
the check for a working 64-bit int fails, causing INT64_IS_BUSTED to get
defined
On 15.12.09 16:02, Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
configure fails to recognize long as a working 64-bit type
because the does_int64_work configure test produces warnings due to
a missing return value declaration for main() and a missing
prototype for does_int64_work().
On 15.12.09 15:52, Tom Lane wrote:
to...@tuxteam.de writes:
(and as Andrew Dunstan pointed out off-list: I was wrong with my
bold assertion that one can squeeze infinitely many (arbitrary-length)
strings between two given ones. This is not always the case).
Really? If the string length is
On 15.12.09 23:38, Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
So to summarize, this is just a bad idea. Creating a less obscure
way to use -Werror might be worthwhile, though.
I suppose we could add --with-Werror but it seems pretty
specialized to me. A more appropriate
Tom Lane wrote:
One possibility would be to make it possible to issue SETs that
behave as if set in a startup packet - imho it's an implementation
detail that SET currently is used.
I think there's a good deal of merit in this, and it wouldn't be hard
at all to implement, seeing that we
Dan Eloff wrote:
At the lower levels in PG, reading from the disk into cache, and
writing from the cache to the disk is always done in pages.
Why does PG work this way? Is it any slower to write whole pages
rather than just the region of the page that changed? Conversely, is
it faster? From
Hi
I'm currently investigating how much work it'd be to implement arrays of
domains since I have a client who might be interested in sponsoring that
work.
The comments around the code handling ALTER DOMAIN ADD CONSTRAINT are
pretty clear about the lack of proper locking in that code - altering
Florian G. Pflug wrote:
I do, however, suspect that ALTER TABLE is plagued by similar
problems. Currently, during the rewrite phase of ALTER TABLE,
find_composite_type_dependencies is used to verify that the table's
row type (or any type directly or indirectly depending on that type
Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
(2) this change, while very useful, does change what had been a
simple rule (All variables are NULL unless specifically set
otherwise) into a conditional one (All variables are NULL unless
set otherwise OR unless they are declared as domain
Gurjeet Singh wrote:
On Sat, Nov 21, 2009 at 7:26 AM, Josh Berkus j...@agliodbs.com
wrote: However, there are some other
issues to be resolved:
(1) what should be the interaction of DEFAULT parameters and domains
with defaults?
The function's DEFAULT parameter
Robert Haas wrote:
On Thu, Nov 19, 2009 at 9:06 PM, Florian G. Pflug f...@phlo.org wrote:
I've tried to create a patch, but didn't see how I'd convert the result
from get_typedefault() (a Node*, presumably the parsetree corresponding
to the default expression?) into a plan that I could store
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
It seems that pl/pgsql ignores the DEFAULT value of domains for
local variables.
The plpgsql documentation seems entirely clear on this:
The DEFAULT clause, if given, specifies the initial value assigned to
the variable when the block
Heikki Linnakangas wrote:
Joachim Wieland wrote:
On Thu, Nov 19, 2009 at 4:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Yes, I have been thinking about that also. So what should happen
when you prepare a transaction that has sent a NOTIFY before?
From the user's point of view, nothing should
Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
A better approach is to do something similar to what we do now: at
prepare, just store the notifications in the state file like we do
already. In notify_twophase_postcommit(), copy the messages to the
shared queue.
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
Tom Lane wrote:
This is still ignoring the complaint: you are creating a clear
risk that COMMIT PREPARED will fail.
I'd see no problem with COMMIT PREPARED failing, as long as it
was possible to retry the COMMIT PREPARED at a later time
Hi
It seems that pl/pgsql ignores the DEFAULT value of domains for local
variables. With the following definitions in place
create domain myint as int default 0;
create or replace function myint() returns myint as $body$
declare
v_result myint;
begin
return v_result;
end;
$body$ language plpgsql;
Hi
While trying to come up with a patch to handle domain DEFAULTs in
plpgsql I've stumbled across the following behavior regarding domain
DEFAULTs and prepared statements.
session 1: create domain myint as int default 0 ;
session 1: create table mytable (i myint) ;
session 2: prepare ins as
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
Tom Lane wrote:
Trying to do this in plpgsql is doomed to failure and heartache,
Well, the proposed functions at least allow for some more
flexibility in working with row types, given that you know in
advance which types you
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
Ok, I must be missing something. I currently fail to see how my
proposed record_value(record, name, anyelement) returns anyelement
function differs (from the type system's point of view) from
value_from_string(text, anyelement) returns
Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
Yes, and I have used it, but it really would be nicer to have some
introspection facilities built in, especially for use in triggers.
Maybe, but the proposal at hand is spectacularly ugly --- in particular
it seems designed around the
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
While I agree that handling arbitrary datatypes at runtime would be
nice, I really don't see how that could ever be done from within a
plpgsql procedure, unless plpgsql somehow morphs into a
dynamically typed language.
Which
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
Tom Lane wrote:
Perhaps it would help if we looked at some specific use-cases
that people need, rather than debating abstractly. What do you
need your generic trigger to *do*?
I need to build a global index table of all values
Heikki Linnakangas wrote:
Agreed, it's a bug. A simpler example is just: [snipped]
Will the fix for this be included in 8.4.2 (or .3), or will it have to
wait for 8.5 because it changes behavior?
There's a special case in the transformExpr function to handle the
ARRAY[...]::arraytype construct,
Hi
I'm currently working on a project where we need to build a global cache
table containing all values of certain types found in any of the other
tables. Currently, a separate insert, update and delete (plpgsql)
trigger function exists for each table in the database which is
auto-generated by a
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
I'd like to replace this function-generating function by a generic
trigger function that works for all tables. Due to the lack of any
way to inspect the *structure* of a record type, however, I'd have
to use a C language function
Hi
While trying to create a domain over an array type to enforce a certain
shape or certain contents of an array (like the array being only
one-dimensional or not containing NULLs), I've stumbled over what I
believe to be a bug in postgresql 8.4
It seems that check constraints on domains are
Simon Riggs wrote:
On Sat, 2008-09-13 at 10:48 +0100, Florian G. Pflug wrote:
The main idea was to invert the meaning of the xid array in the snapshot
struct - instead of storing all the xid's between xmin and xmax that are
to be considered in-progress, the array contained all the xid's
xmin
Simon Riggs wrote:
On Sat, 2008-09-13 at 10:48 +0100, Florian G. Pflug wrote:
The current read-only snapshot (with "current" meaning the
corresponding state on the master at the time the last replayed WAL
record was generated) was maintained in shared memory. Its xmin field
was continually
Heikki Linnakangas wrote:
BTW, we haven't talked about how to acquire a snapshot in the slave.
You'll somehow need to know which transactions have not yet
committed, but will in the future. In the master, we keep track of
in-progress transaction in the ProcArray, so I suppose we'll need to
do
Tom Lane wrote:
You can get around that by hacking up the generated config files
with #ifdef __i386__ and so on to expose the correct values of
the hardware-dependent symbols to each build. Of course you have
to know what the correct values are --- if you don't have a sample
of each
Josh Berkus wrote:
Tom,
Indeed. If the Solaris folk feel that getpeerucred() is insecure,
they had better explain why their kernel is that broken. This is
entirely unrelated to the known shortcomings of the ident IP
protocol.
The Solaris security kernel folks do, actually. However,
Simon Riggs wrote:
When we move from having a virtual xid to having a real xid I don't
see any attempt to re-arrange the lock queues. Surely if there are
people waiting on the virtual xid, they must be moved across to wait
on the actual xid? Otherwise the locking queue will not be respected
Kevin Grittner wrote:
On Wed, May 28, 2008 at 6:26 PM, in message
[EMAIL PROTECTED],
Florian G. Pflug [EMAIL PROTECTED] wrote:
I think we should put some randomness into the decision,
to spread the IO caused by hint-bit updates after a batch load.
Currently we have a policy of doing
Simon Riggs wrote:
Hmm, I think the question is: How many hint bits need to be set
before we mark the buffer dirty? (N)
Should it be 1, as it is now? Should it be never? Never is a long
time. As N increases, clog accesses increase. So it would seem there
is likely to be an optimal value for
Simon Riggs wrote:
After some discussions at PGCon, I'd like to make some proposals for
hint bit setting with the aim to reduce write overhead.
Currently, when we see an un-hinted row we set the bit, if possible and
then dirty the block.
If we were to set the bit but *not* dirty the block we
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
trigger on prepare, commit, rollback, savepoint,
This is a sufficiently frequently asked question that I wish someone
would add an entry to the FAQ about it, or add it to the TODO list's
Features we don't want section.
OK, remind me
ran out of time before I could benchmark this, and I probably also
lack the hardware for running high-concurrency tests.
Florian G. Pflug wrote:
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes
Pavan Deolasee wrote:
What I am thinking is if we can read ahead these blocks in the shared
buffers and then apply redo changes to them, it can potentially
improve things a lot. If there are multiple read requests, kernel (or
controller ?) can probably schedule the reads more efficiently.
Greg Stark wrote:
Florian G. Pflug wrote:
The same holds true for index scans, though. Maybe we can find a
solution that benefits both cases - something along the line of a
bgreader process
I posted a patch to do readahead for bitmap index scans using
posix_fadvise. Experiments showed
Pavan Deolasee wrote:
In a typical scenario, user might create a table and load data in the
table as part of a single transaction (e.g. pg_restore). In this case,
it would help if we create the tuples in the *frozen* state to avoid
any wrap-around related issues with the table. Without this,
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
...
Neither the dealer nor the workers would need access to either
the shared memory or the disk, thereby not messing with the one backend
is one transaction is one session dogma.
...
Unfortunately, this idea has far too narrow
Dimitri Fontaine wrote:
Of course, the backends still have to parse the input given by pgloader, which
only pre-processes data. I'm not sure having the client prepare the data some
more (binary format or whatever) is a wise idea, as you mentioned and wrt
Tom's follow-up. But maybe I'm all
Brian Hurt wrote:
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
...
Neither the dealer nor the workers would need access to either
the shared memory or the disk, thereby not messing with the one backend
is one transaction is one session dogma.
...
Unfortunately, this idea
Andrew Dunstan wrote:
Florian G. Pflug wrote:
Would it be possible to determine, when the copy is starting, that this
case holds, and not use the parallel parsing idea in those cases?
In theory, yes. In practice, I don't want to be the one who has to
answer to an angry user who just suffered
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Plus, I'd see this as a kind of testbed for gently introducing
parallelism into postgres backends (especially thinking about sorting
here).
This thinking is exactly what makes me scream loudly and run in the
other direction. I don't
As far as I can see the main difficulty in making COPY run faster (on
the server) is that pretty involved conversion from plain-text lines
into tuples. Trying to get rid of this conversion by having the client
send something that resembles the data stored in on-disk tuples is not a
good answer,
Marko Kreen wrote:
On 2/25/08, Florian G. Pflug [EMAIL PROTECTED] wrote:
I'm not sure what a proper fix for this could look like, since the
blocking actually happens inside libpq - but this certainly makes
working with dblink painful...
Proper fix would be to use async libpq API, then loop
Hi
I just stumbled over the following behaviour, introduced with 8.3, and
wondered if this is by design or an oversight.
If you define a domain over some existing type, constrain it to
non-null values, and use that domain as a field type in a table
definition, it seems to be impossible to
Andrew Dunstan wrote:
Florian G. Pflug wrote:
If you define a domain over some existing type, constrain it to
non-null values, and use that domain as a field type in a table
definition, it seems to be impossible to declare pl/pgsql variables
of that table's row type. The problem seems
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
I just stumbled over the following behaviour, introduced with 8.3,
and wondered if this is by design or an oversight.
No, this was in 8.2.
Ah, sorry - I'm porting an app from 8.1 straight to 8.3, and blindly
assumed that I'd have
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Plus, the fact that we don't support default specifications in
pl/pgsql for row types turns this inconvenience into a major PITA,
You mean initialization expressions, not defaults, correct? (I would
consider the latter to mean
Hi
dblink in 8.3 blocks without any possibility of interrupting it while
waiting for an answer from the remote server. Here is a strace
[pid 27607] rt_sigaction(SIGPIPE, {SIG_IGN}, {SIG_IGN}, 8) = 0
[pid 27607] sendto(56, Q\0\0\0008lock table travelhit.booking_code in
exclusive mode\0, 57, 0,
Tom Lane wrote:
Florian Weimer [EMAIL PROTECTED] writes:
* Alvaro Herrera:
I am wondering if we can set the system up so that it skips postmaster,
How much does that help? Postmaster children still need to be shut down
when a regular backend dies due to SIGKILL.
The $64 problem is that if the
Tom Lane wrote:
Another thought is to tell people to run the postmaster under a
per-process memory ulimit that is conservative enough so that the
system can't get into the regime where the OOM killer activates.
ulimit actually behaves the way we want, ie, it's polite about
telling you you
Guillaume Smet wrote:
On Jan 27, 2008 9:07 PM, Markus Bertheau
[EMAIL PROTECTED] wrote:
2008/1/28, Tom Lane [EMAIL PROTECTED]:
Do we have nominations for a name? The first idea that comes to
mind is synchronized_scanning (defaulting to ON).
synchronized_sequential_scans is a bit long, but
Steve Atkins wrote:
On Jan 28, 2008, at 8:36 AM, Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Kevin Grittner wrote:
It would seem reasonable to me for pg_dump to use ORDER BY to select
data from clustered tables.
What will be the performance hit from doing that?
That worries
Tom Lane wrote:
I'm not sure what the most convenient user API would be for an on-demand
hard-read-only mode, but we can't use SET TRANSACTION READ ONLY for it.
It'd have to be some other syntax. Maybe just use a GUC variable
instead of bespoke syntax? SET TRANSACTION is really just syntactic
Tom Lane wrote:
Well, my point is that taking automatic rewriting as a required feature
has at least two negative impacts:
* it rules out any form of lazy update, even though for many applications
an out-of-date summary view would be acceptable for some purposes;
* requiring MVCC consistency
Tom Lane wrote:
Chris Browne [EMAIL PROTECTED] writes:
Note that we required that the provider transaction have the
attributes IsXactIsoLevelSerializable and XactReadOnly both being
true, so we have the mandates that the resultant backend process:
a) Is in read only mode, and
b) Is in
Peter Eisentraut wrote:
Am Donnerstag, 25. Oktober 2007 schrieb Andrew Dunstan:
From time to time people have raised the idea of a CPAN-like mechanism for
downloading, building and installing extensions and the like (types,
functions, sample dbs, anything not requiring Postgres itself to be
Sebastien FLAESCH wrote:
Forget this one, just missing the WITH HOLD option... Must teach myself a bit
more before sending further mails. Seb
AFAIK you cannot use WITH HOLD together with updateable cursors.
I might be wrong, though...
regards, Florian Pflug
andy wrote:
Is there any chance there is an easier way to backup/restore? On one
hand, it's not too bad, and it'll only be once (correct?). Now that fts
is in core future backup/restores will work, right? I think it's
analogous to telling someone they are updating from tsearch2 to
tsearch3,
Gregory Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
There doesn't seem to be any very nice way to fix this. There is
not any existing support mechanism (comparable to query_tree_walker)
for scanning whole plan trees, which means that searching a cached plan
for regclass Consts is going to
Heikki Linnakangas wrote:
Tom Lane wrote:
I tend to agree that truncating the file, and extending the fsync
request mechanism to actually delete it after the next checkpoint,
is the most reasonable route to a fix.
Ok, I'll write a patch to do that.
What is the argument against making
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
What is the argument against making relfilenodes globally unique by adding
the xid and epoch of the creating transaction to the filename?
1. Zero chance of ever backpatching. (I know I said I wasn't excited about
that, but it's still
Heikki Linnakangas wrote:
I wrote:
Unfortunately I don't see any easy way to fix it. One approach would be
to avoid reusing the relfilenodes until next checkpoint, but I don't see
any nice place to keep track of OIDs that have been dropped since last
checkpoint.
Ok, here's one idea:
Instead
Simon Riggs wrote:
On Wed, 2007-10-17 at 12:11 +0100, Heikki Linnakangas wrote:
Simon Riggs wrote:
On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:
Second, suppose that no checkpoint has occurred during the upper
series--although not quite possible;
That part is irrelevant. It's forced out
Heikki Linnakangas wrote:
Mario Weilguni wrote:
I cannot use -1 for performance, because some gist stuff has changed
and the restore fails. But there seems to be no option for pg_restore to
use transactions for data restore, so it's very very slow (one million
records, each obviously in its
Tom Lane wrote:
Peter Eisentraut [EMAIL PROTECTED] writes:
Am Freitag, 12. Oktober 2007 schrieb Gregory Stark:
It would make Postgres inconsistent and less integrated with the rest of
the OS. How do you explain that Postgres doesn't follow the system's
configurations and the collations don't
Csaba Nagy wrote:
Can we frame a set of guidelines, or maybe some test procedure, which
can declare a certain function as deterministic?
You mean postgres should check whether your function is really immutable?
I can't imagine any way to do it correctly in reasonable time :-)
Imagine a
Andrew Dunstan wrote:
Florian G. Pflug wrote:
I think you're overly pessimistic here ;-) This classification can be done
quite efficiently as long as your language is static enough. The trick is
not to execute the function, but to scan the code to find all other
functions and SQL statements
Gokulakannan Somasundaram wrote:
Hi Heikki, I am always slightly late in understanding things. Let me
try to understand the use of DSM. It is a bitmap index on whether all
the tuples in a particular block are visible to all the backends,
whether a particular block contains tuples which are
Kevin Grittner wrote:
I omitted the code I was originally considering to have it work against
files in place rather than as a filter. It seemed much simpler this
way, we didn't actually have a use case for the additional functionality,
and it seemed safer as a filter. Thoughts?
A special
Hi
When reading Tom's comment about the bug in my use latestCompletedXid
to slightly speed up TransactionIdIsInProgress patch, I remembered that
I recently stumbled across GCC builtins for atomic test-and-set and
read/write reordering barriers...
Has anyone looked into those? It seems that
Cristiano Duarte wrote:
2007/9/17, Tom Lane [EMAIL PROTECTED]:
Cristiano Duarte [EMAIL PROTECTED] writes:
Is there a way to have access to PostgreSQL query plan and/or predicates
inside a function using spi (or any other way)?
No.
Hi Tom,
No means: there is no way since the query plan is
Hi
When I initially proposed to use the latest *committed* xid as the xmax instead
of ReadNewTransactionId(), I believed that this would cause tuples created by a
later aborted transaction not to be vacuumed until another transaction (with a
higher xid) commits later. The idea was therefore
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Therefore, I suggest that we rename latestCompletedXid to latestCommittedXid,
and update it only on commits. Admittedly, this won't bring any measurable
performance benefit in itself (it will slightly reduce the average snapshot
size
Simon Riggs wrote:
On Tue, 2007-09-11 at 10:21 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
1. The ProcArrayLock is acquired Exclusive-ly by only one
remaining operation: XidCacheRemoveRunningXids(). Reducing things
to that level is brilliant work, Florian and Tom.
It would
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Currently, we do not assume that either the childXids array or the xid
cache in the proc array is sorted by ascending xid order. I believe that
we could simplify the code, further reduce the locking requirements, and
enabled
Hi
I've already posted this idea, but I feel that I did explain it
rather badly. So here comes a new try.
Currently, we do not assume that either the childXids array or
the xid cache in the proc array is sorted by ascending xid order.
I believe that we could simplify the code, further reduce
Simon Riggs wrote:
On Fri, 2007-09-07 at 06:36 +0200, Florian G. Pflug wrote:
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
- I actually think with just a little bit of more work, we
can go even further, and get rid of the ReadNewTransactionId() call
completely during
Tom Lane wrote:
I've spent the past hour or so trying to consolidate the comments in
GetSnapshotData and related places into a single chunk of text to be
added to src/backend/access/transam/README. Attached is what I have so
far --- this incorporates the idea of not taking ProcArrayLock to exit
Tom Lane wrote:
Here's some revised text for the README file, based on using Florian's idea
of a global latestCompletedXid variable. As I worked through it I realized
that in this design, XidGenLock gates entry of new XIDs into the ProcArray
while ProcArrayLock gates their removal. Which is
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
So I believe you're right, and we can skip taking the lock in the no
xid case - I actually think with just a little bit of more work, we
can go even further, and get rid of the ReadNewTransactionId() call
completely during snapshotting
Tom Lane wrote:
Simon was complaining a bit ago that we still have problems with
excessive contention for the ProcArrayLock, and that much of this stems
from the need for transaction exit to take that lock exclusively.
The lazy-XID patch, as committed, doesn't help that situation at all,
saying
Tom Lane wrote:
I've committed Florian's patch, but there remain a couple of things
that need work:
* Should CSV-mode logging include the virtual transaction ID (VXID) in
addition to, or instead of, XID? There will be many situations where
there is no XID.
Maybe make %x show both, or only the
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
So it seems that only SET LOCAL within a function with per-function
GUC settings is at issue. I think that there is a pretty strong
use-case for saying that if you have a per-function setting of a
particular variable
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
At least for me, the least surprising behaviour would be to
revert it too. Then the rule becomes a function is always
executed in a pseudo-subtransaction that affects only GUCs
Only if it has at least one SET clause. The overhead
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
And the rule becomes (I tend to forget things, so I like simple
rules that I can remember ;-) ) For each SET-clause, there is
a pseudo-subtransaction affecting only *this* GUC.
The other question is whether we want to change
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
Clear to everyone? Any objections?
That makes SET LOCAL completely equivalent to SET, except
when used inside a function that has a corresponding SET-clause, right?
Maybe it wasn't clear :-(. They aren't
Tom Lane wrote:
So, to reiterate, my idea is
.) Make SET TRANSACTION a synonym for SET LOCAL at the SQL-Level.
.) In pl/pgsql, SET TRANSACTION sets a new value that is kept after the
function exits, even if the function has a matching SET-clause.
.) SET LOCAL in pl/pgsql sets a new value that
Heikki Linnakangas wrote:
Tom Lane wrote:
I had an idea this morning that might be useful: back off the strength
of what we try to guarantee. Specifically, does it matter if we leak a
file on crash, as long as it isn't occupying a lot of disk space?
(I suppose if you had enough crashes to
August Zajonc wrote:
Yes, checkpoints would need to include a list of
created-but-not-yet-committed
files. I think the hardest part is figuring out a way to get that
information
to the backend doing the checkpoint - my idea was to track them in shared
memory, but that would impose a hard limit
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
It might be even worse - I'm not sure that a rename is an atomic operation
on most filesystems.
rename(2) is specified to be atomic by POSIX, but relinking a file into
a different directory can hardly be --- it's not even provided