Hi all,
On Thu, Jun 28, 2012 at 5:16 AM, David E. Wheeler da...@justatheory.com
wrote:
I don't see the virtue of this in this case. Since the index is not
unique, why not just put the index on (a,b,c,d) and be done with it?
Is there some advantage to be had in inventing a way to store c
Hi Tomas,
On Wed, 2011-01-19 at 23:13 +0100, Tomas Vondra wrote:
No, the multi-column statistics do not require constant updating. There
are cases where a sampling is perfectly fine, although you may need a
bit larger sample. Generally if you can use a multi-dimensional
histogram, you don't
On Tue, 2011-01-11 at 07:14 -0500, Noah Misch wrote:
On Tue, Jan 11, 2011 at 09:24:46AM +, Simon Riggs wrote:
I have a concern that by making the ALTER TABLE more complex that we
might not be able to easily tell if a rewrite happens, or not.
What about adding EXPLAIN support to it, then
On Fri, 2011-01-07 at 12:32 +0100, t...@fuzzy.cz wrote:
the problem is you will eventually need to drop the results and rebuild
it, as the algorithms do not handle deletes (ok, Florian mentioned an
algorithm L_0 described in one of the papers, but I'm not sure we can use
it).
Yes, but even
On Thu, 2010-12-30 at 21:02 -0500, Tom Lane wrote:
How is an incremental ANALYZE going to work at all?
How about a kind of continuous analyze ?
Instead of analyzing just once and then drop the intermediate results,
keep them on disk for all tables and then piggyback the background
writer (or
On Wed, 2010-12-15 at 10:39 +, Simon Riggs wrote:
Perhaps a more useful definition would be
EXCHANGE TABLE target WITH source;
which just swaps the heap and indexes of each table.
You can then use TRUNCATE if you want to actually destroy data.
Yes please, that's exactly what I would
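The proposed EXCHANGE syntax doesn't exist, but a hedged sketch of the swap it describes, approximated today with renames (table names are illustrative; unlike the proposal, this swaps names rather than heaps, so dependent objects still follow the old tables):

```sql
-- Sketch only: approximating EXCHANGE TABLE target WITH source
BEGIN;
LOCK TABLE target IN ACCESS EXCLUSIVE MODE;
ALTER TABLE target RENAME TO swap_tmp;
ALTER TABLE source RENAME TO target;
ALTER TABLE swap_tmp RENAME TO source;
COMMIT;
-- Then, as suggested, TRUNCATE if you want to actually destroy data:
TRUNCATE source;
```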
On Tue, 2010-12-14 at 14:36 -0500, Robert Haas wrote:
Well, you have to do that for DROP TABLE as well, and I don't see any
way around doing it for REPLACE WITH.
Sure, but in Simon's proposal you can load the data FIRST and then
take a lock just long enough to do the swap. That's very
Hi all,
On Tue, 2010-11-30 at 12:05 -0800, Josh Berkus wrote:
Can you explain, for our benefit, the use case for this? Specifically,
what can be done with synonyms which can't be done with search_path and
VIEWs?
I had a few cases where synonyms for user/data base names would have
helped me
Hi all,
The workaround recommended some time ago by Tom is:
DELETE FROM residents_of_athens WHERE ctid = any(array(SELECT ctid FROM
residents_of_athens ORDER BY ostracism_votes DESC LIMIT 1));
It is about as efficient as the requested feature would be, just uglier
to write down. I use it all
Hi Robert,
On Tue, 2010-11-30 at 09:19 -0500, Robert Haas wrote:
That's a very elegant hack, but not exactly obvious to a novice user
or, say, me. So I think it'd be nicer to have the obvious syntax
work.
I fully agree - but you first have to convince core hackers that this is
not just a
Hi all,
Some time ago I was also interested in this feature, and that time I
also thought about complete setup possibility via postgres connections,
meaning the transfer of the files and all configuration/slave
registration to be done through normal backend connections.
In the meantime our DBs
On Thu, 2010-09-23 at 12:02 +0300, Heikki Linnakangas wrote:
On 23/09/10 11:34, Csaba Nagy wrote:
In the meantime our DBs are not able to keep in sync via WAL
replication, that would need some kind of parallel WAL restore on the
slave I guess, or I'm not able to configure it properly
On Thu, 2010-09-23 at 16:18 +0300, Heikki Linnakangas wrote:
There's a program called pg_readahead somewhere on pgfoundry by NTT that
will help if it's the single-threadedness of I/O. Before handing the WAL
file to the server, it scans it through and calls posix_fadvise for all
the blocks
On Thu, 2010-09-23 at 11:43 -0400, Tom Lane wrote:
What other problems are there that mean we *must* have a file?
Well, for one thing, how do you add a new slave? If its configuration
comes from a system catalog, it seems that it has to already be
replicating before it knows what its
Hi all,
On Tue, 2010-04-27 at 11:07 -0400, Merlin Moncure wrote:
The block level case seems pretty much covered by the hot standby feature.
One use case we would have is to dump only the changes from the last
backup of a single table. This table takes 30% of the DB disk space, it
is in the
Hi all,
On Thu, 2010-04-08 at 07:45 -0400, Robert Haas wrote:
2010/4/8 Thom Brown thombr...@gmail.com:
So you could write:
DELETE FROM massive_table WHERE id < 4000 LIMIT 1;
I've certainly worked around the lack of this syntax more than once.
And I bet it's not even that hard
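Until such syntax exists, the ctid workaround quoted earlier in this digest covers the same case (table and predicate here are illustrative):

```sql
-- Sketch: emulate DELETE ... LIMIT via ctid, which uniquely
-- identifies a row version within the table
DELETE FROM massive_table
WHERE ctid = ANY(ARRAY(
    SELECT ctid FROM massive_table
    WHERE id < 4000
    LIMIT 1));
```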
Hi all,
On Thu, 2010-03-18 at 10:18 -0700, Josh Berkus wrote:
Or, let's put it another way: I've made my opinion clear in the past
that I think that we ought to ship with a minimal postgresql.conf with
maybe 15 items in it. If we are going to continue to ship with
postgresql.conf kitchen
Hi all,
On Mon, 2010-02-22 at 10:29 +, Greg Stark wrote:
On Mon, Feb 22, 2010 at 8:18 AM, Gokulakannan Somasundaram
gokul...@gmail.com wrote:
a) IOT has both table and index in one structure. So no duplication of data
b) With visibility maps, we have three structures a) Table b) Index
hating reading C code beyond reason, and
writing any of it till now amounted to copy-paste-modify).
Cheers,
Csaba.
Csaba Nagy
Software Engineer
eCircle
P: +49 (0)89 / 120 09-783 | F: +49 (0)89 / 120 09-750
E: c.n...@ecircle.com
Nymphenburger Str. 86, 80636 München
On Wed, 2009-10-07 at 16:34 +0200, Gnanam wrote:
NOTE: I've seen deadlock errors in UPDATE statements, but why is it thrown
in INSERT statements?
It is because of the foreign key. Inserting a child row will lock the
corresponding parent row, and if you insert multiple rows with different
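A minimal sketch of the failure mode (schema and values are hypothetical): two sessions inserting children of the same parents in opposite order can deadlock on the parent-row locks taken by the FK triggers.

```sql
-- Hypothetical schema:
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (parent_id int REFERENCES parent(id));

-- Session A                      | Session B (interleaved)
-- BEGIN;                         | BEGIN;
-- INSERT INTO child VALUES (1);  | INSERT INTO child VALUES (2);
--   (A locks parent row 1)       |   (B locks parent row 2)
-- INSERT INTO child VALUES (2);  | INSERT INTO child VALUES (1);
--   (A waits for B ...)          |   (... B waits for A: deadlock)
```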
Hi all,
On Tue, 2009-10-06 at 16:58 +0200, Tom Lane wrote:
Yeah, I have sometimes thought that pg_largeobject shouldn't be
considered a system catalog at all. It's more nearly like a toast
table, ie, it's storing out of line user data.
pg_largeobject in its current form has serious
On Thu, 2009-09-17 at 10:08 +0200, Heikki Linnakangas wrote:
Robert Haas suggested a while ago that walreceiver could be a
stand-alone utility, not requiring postmaster at all. That would allow
you to set up streaming replication as another way to implement WAL
archiving. Looking at how the
On Tue, 2009-08-11 at 23:58 +0200, Andrew Dunstan wrote:
Well, I don't think that the fact that we are producing machine-readable
output means we can just ignore the human side of it. It is more than
likely that such output will be read by both machines and humans.
Obviously, we need to
On Wed, 2009-08-12 at 15:42 +0200, Andrew Dunstan wrote:
Have you actually looked at a logfile with this in it? A simple
stylesheet won't do at all. What you get is not an XML document but a
text document with little bits of XML embedded in it. So you would need
a program to parse that file
On Wed, 2009-08-12 at 16:51 +0200, Csaba Nagy wrote:
I argue that a sufficiently complicated explain output will never be
easily navigated in a text browser, however much you would like it. If
you do a where clause with 100 nested ANDs (which occasionally happens
here), I don't think you'll
On Wed, 2009-08-12 at 17:11 +0200, Andrew Dunstan wrote:
That will just make things worse. And it will break if the XML includes
any expression that contains a line break.
Then escape the expressions using CDATA or such... I'm sure it would be
possible to make sure it's one line and rely on
On Wed, 2009-08-12 at 17:31 +0200, Csaba Nagy wrote:
On Wed, 2009-08-12 at 17:11 +0200, Andrew Dunstan wrote:
That will just make things worse. And it will break if the XML includes
any expression that contains a line break.
Then escape the expressions using CDATA or such... I'm sure
On Wed, 2009-08-12 at 17:41 +0200, Andrew Dunstan wrote:
Csaba Nagy wrote:
Then why do you bother calling it machine readable at all ? Would you
really read your auto-explain output on the DB server ? I doubt that's
the common usage scenario, I would expect that most people would let
On Wed, 2009-08-12 at 18:07 +0200, Andrew Dunstan wrote:
Csaba Nagy wrote:
On Wed, 2009-08-12 at 17:11 +0200, Andrew Dunstan wrote:
Well, the right solution would actually be NOT to use CDATA but to
replace a literal linefeed with the XML numeric escape #x0a; , but I
really don't think
On Tue, 2009-04-21 at 11:43 -0400, Robert Haas wrote:
This doesn't sound like a very good idea, because the planner cannot
then rely on the overflow table not containing tuples that ought to be
within some other partition.
The big win that is associated with table partitioning is using
On Wed, 2008-10-01 at 16:57 +0100, Gregory Stark wrote:
I wonder if we could do something clever here though. Only one process
is busy
calculating the checksum -- it just has to know if anyone fiddles the hint
bits while it's busy.
What if the hint bits are added at the very end to the
On Fri, 2008-09-12 at 09:38 +0100, Simon Riggs wrote:
If you request a block, we check to see whether there is a lookaside
copy of it prior to the tuple removals. We then redirect the block
request to a viewpoint relation's block. Each viewpoint gets a separate
relfilenode. We do the
On Fri, 2008-09-12 at 12:31 +0100, Richard Huxton wrote:
There was a suggestion (Simon - from you?) of a transaction voluntarily
restricting itself to a set of tables.
While thinking about how easy it would be for the DBA to specify the set
of tables a single query is accessing, first I thought
I think that enabling long-running queries this way is both
low-hanging
fruit (or at least medium-height-hanging ;) ) and also consistent with
the PostgreSQL philosophy of not replicating effort. As an example we trust
OS's file system cache and don't try to write our own.
I have again questions
On Fri, 2008-09-12 at 15:08 +0300, Hannu Krosing wrote:
* how will the buffers keep 2 different versions of the same page ?
As the FS snapshot is mounted as a different directory, it will have
its own buffer pages.
Lack of knowledge about this shows my ignorance about the implementation
of
On Fri, 2008-09-12 at 17:24 +0300, Hannu Krosing wrote:
On Fri, 2008-09-12 at 17:08 +0300, Heikki Linnakangas wrote:
Hmm, built-in rsync capability would be cool. Probably not in the first
phase, though..
We have it for WAL shipping, in form of GUC archive_command :)
Why not add
On Thu, 2008-09-11 at 15:23 +0300, Heikki Linnakangas wrote:
I'd imagine that even if applying the WAL on the slave is blocked, it's
still streamed from the master to the slave, and in case of failover the
slave will fast-forward before starting up as the new master.
Which begs the question:
On Thu, 2008-09-11 at 15:42 +0300, Heikki Linnakangas wrote:
One problem with this, BTW, is that if there's a continuous stream of
medium-length transaction in the slave, each new snapshot taken will
prevent progress in the WAL replay, so the WAL replay will advance in
baby steps, and can
On Thu, 2008-09-11 at 16:19 +0300, Heikki Linnakangas wrote:
Well, yes, but you can fall behind indefinitely that way. Imagine that
each transaction on the slave lasts, say 10 minutes, with a new
transaction starting every 5 minutes. On the master, there's a table
that's being vacuumed (or
On Thu, 2008-09-11 at 15:33 +0200, Dimitri Fontaine wrote:
What would forbid the slave to choose to replay all currently lagging WALs
each time it's given the choice to advance a little?
Well now that I think I understand what Heikki meant, I also think the
problem is that there's no choice at
On Tue, 2008-09-09 at 20:59 +0200, Zeugswetter Andreas OSB sIT wrote:
All in all a useful streamer seems like a lot of work.
I mentioned some time ago an alternative idea of having the slave
connect through a normal SQL connection and call a function which
streams the WAL file from the point
On Thu, 2008-07-03 at 23:15 +1000, Aaron Spiteri wrote:
Inside foo there was a INSERT and UPDATE, and the INSERT failed but
the UPDATE succeeded would the UPDATE be rolled back?
Just to add to the other answers, if the INSERT is before the UPDATE in
the function, the function execution stops
On Thu, 2008-07-03 at 19:56 +0530, cinu wrote:
Could anyone please tell me where I am going wrong and if there is a
way I can get the same behaviour that I am getting while I am
executing through the psql prompt.
Your mistake is that you think a transaction is related to your
terminal, but
On Tue, 2008-06-10 at 19:03 -0400, Tom Lane wrote:
Given such an MCV list, the planner will always make the right choice
of whether to do index or seqscan ... as long as it knows the value
being searched for, that is. Parameterized plans have a hard time here,
but that's not really the fault
On Wed, 2008-06-04 at 11:13 +0300, Heikki Linnakangas wrote:
Hmm, WAL version compatibility is an interesting question. Most minor
releases hasn't changed the WAL format, and it would be nice to allow
running different minor versions in the master and slave in those cases.
But it's
[Looks like this mail missed the hackers list on reply to all, I wonder
how it could happen... so I forward it]
On Thu, 2008-05-29 at 17:00 +0100, Dave Page wrote:
Yes, we're talking real-time streaming (synchronous) log shipping.
Is there any design already how would this be implemented ?
On Fri, 2008-05-09 at 08:47 -0400, Andrew Dunstan wrote:
However, I wondered if we couldn't mitigate this by caching the results
of constraint exclusion analysis for a particular table + condition. I
have no idea how hard this would be, but in principle it seems silly to
keep paying the
The hairiness is in the plan dependence (or independence) on parameter
values, ideally we only want to cache plans that would be good for all
parameter values, only the user knows that precisely. Although it could be
possible to examine the column histograms...
If cached plans
On Mon, 2008-04-14 at 16:54 +0300, Heikki Linnakangas wrote:
Figuring out the optimal decision points is hard, and potentially very
expensive. There is one pretty simple scenario though: enabling the use
of partial indexes, preparing one plan where a partial index can be
used, and another
On Mon, 2008-04-14 at 16:10 +0200, Csaba Nagy wrote:
... or plan the query with the actual parameter value you get, and also
record the range of the parameter values you expect the plan to be valid
for. If at execution time the parameter happens to be out of that range,
replan, and possibly
... or plan the query with the actual parameter value you get, and also
record the range of the parameter values you expect the plan to be valid
for. If at execution time the parameter happens to be out of that range,
replan, and possibly add a new subplan covering the extra range. This
could
On Mon, 2008-04-14 at 10:55 -0400, Mark Mielke wrote:
The other ideas about automatically deciding between plans based on
ranges and such strike me as involving enough complexity and logic, that
to do properly, it might as well be completely re-planned from the
beginning to get the most
On Mon, 2008-04-14 at 17:08 +0200, PFC wrote:
Those Decision nodes could potentially lead to lots of decisions
(ahem).
What if you have 10 conditions in the Where, plus some joined ones ?
That
would make lots of possibilities...
Yes, that's true, but most of them are likely
On Thu, 2008-04-10 at 05:03 +0930, Shane Ambler wrote:
I do think it is useful for more than typo's in the \join command. What
about a slip where you forget to \g the command. Or you start a query
that seems to be taking too long, background it and look into what is
happening. This would be
I find myself doing this frequently with any long-running command,
but currently it's a PITA because I'm doing it at the shell level and
firing up a new psql: more work than should be necessary, and psql
sometimes gets confused when you resume it from the background in
interactive
On Thu, 2008-04-03 at 16:44 +0200, PFC wrote:
CREATE FLATFILE READER mydump (
    id   INTEGER,
    date TEXT,
    ...
) FROM file 'dump.txt'
(followed by delimiter specification syntax identical to COPY, etc)
;
Very cool idea, but why would you need to create a reader object
Are you suggesting we keep appending? So if I set the same parameter 100
times, it would show up on 100 rows?
What about not touching the config file at all, but write to a separate
file which is completely under the control of postgres and include that
at the end of the config file ? You just
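A sketch of that layout (file name is hypothetical; include directives are processed in order, so settings read later override earlier ones and the machine-managed file always wins):

```ini
# postgresql.conf -- the hand-edited part stays untouched
shared_buffers = 512MB

# at the very end, pull in the file under the server's control
include 'postgresql.auto.conf'
```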
On Tue, 2008-02-19 at 16:41 +0100, Magnus Hagander wrote:
The end result wouldn't be as clean as some would expect, but it would
certainly be easier code-wise. For example, I'm sure someone would get the
suggestion to go edit postgresql.conf to change a config value, and be
surprised when it
On Mon, 2008-01-14 at 09:22 +, Simon Riggs wrote:
So I support Mark Mielke's views on writing code. Anybody who wants to
code, can. There's probably a project of a size and complexity that's
right for your first project.
The main problem is that usually that initial thing is not what you
On Fri, 2008-01-11 at 11:34 +, Richard Huxton wrote:
1. Make an on-disk chunk much smaller (e.g. 64MB). Each chunk is a
contigous range of blocks.
2. Make a table-partition (implied or explicit constraints) map to
multiple chunks.
That would reduce fragmentation (you'd have on average
Which is roughly what Simon's original Dynamic Partitioning would be
if it became visible at the planner level (unless I've misunderstood). I
was picturing it in my head as a two-dimensional bitmap with
value-ranges along one axis and block-ranges along the other. Of course,
unlike other
On Wed, 2008-01-02 at 17:56 +, Simon Riggs wrote:
Like it?
Very cool :-)
One additional thought: what about a kind of segment fill factor ?
Meaning: each segment has some free space reserved for future
updates/inserts of records in the same range of its partitioning
constraint. And when
On Mon, 2008-01-07 at 13:59 +0100, Markus Schiltknecht wrote:
However, for tables which don't fit the use case of SE, people certainly
don't want such a fill factor to bloat their tables.
Sure, but it could be configurable and should only be enabled if the
table is marked as partitioned on
On Mon, 2008-01-07 at 14:20 +0100, Markus Schiltknecht wrote:
Why is that? AFAIUI, Segment Exclusion combines perfectly well with
clustering. Or even better, with an upcoming feature to maintain
clustered ordering. Where do you see disadvantages such an optimization
for sequential scans?
On Tue, 2007-12-11 at 11:12 +, Simon Riggs wrote:
Features
- Read Only Tables
- Compressed Tablespaces
I wonder if, instead of read-only tables, it wouldn't be better to have
some kind of automatic partitioning which permits having different
chunks of the table data in different
On Tue, 2007-12-11 at 13:44 +0100, Csaba Nagy wrote:
Another advantage I guess would be that active data would more likely
stay in cache, as updated records would stay together and not spread
over the inactive.
And I forgot to mention that vacuum could mostly skip the archive part,
and only
On Tue, 2007-12-11 at 14:58 +0200, Hannu Krosing wrote:
On Tue, 2007-12-11 at 13:44, Csaba Nagy wrote:
Then put the active chunk on a high performance file system and the
archive tablespace on a compressed/slow/cheap file system and you're
done. Allow even the archive
On Fri, 2007-11-23 at 12:36 +, Gregory Stark wrote:
I also did an optimization similar to the bounded-sort case where we check if
the next tuple from the same input which last contributed the result record
comes before the top element of the heap. That avoids having to do an insert
and
On Tue, 2007-10-23 at 11:00 +0200, Rafael Martinez wrote:
We are always one year behind the main release. We are testing and planning
the move to 8.2 now, and it won't happen until December. In a 6 month
cycle we will have to jump over every second release.
We here are also just in the process of
[snip]
In the case of User-Defined functions, the user should be defining it
as Deterministic.
The user CAN already define his functions as
Deterministic=IMMUTABLE... the problem is that many of us will define
functions as immutable, when in fact they are not. And do that by
mistake... and
I think you're overly pessimistic here ;-) This classification can be done
quite
efficiently as long as your language is static enough. The trick is not to
execute the function, but to scan the code to find all other functions and
SQL
statements a given function may possibly call. If
On Tue, 2007-10-09 at 11:22 -0400, Andrew Dunstan wrote:
Csaba Nagy wrote:
You mean postgres should check your function if it is really immutable ?
I can't imagine any way to do it correctly in reasonable time :-)
I would say that in the general case it's analogous to the halting
On Mon, 2007-10-08 at 09:40 +0100, Heikki Linnakangas wrote:
This idea has been discussed to death many times before. Please search
the archives.
Somewhat related to the visibility in index thing: would it be
possible to have a kind of index-table ? We do have here some tables
which have 2-4
On Mon, 2007-09-24 at 10:55 -0700, Joshua D. Drake wrote:
IMO, monitor_ seems weird versus track_. To me monitor implies actions
to be taken when thresholds are met. PostgreSQL doesn't do that.
PostgreSQL tracks/stores information for application to monitor or
interact with and those
In other words, if I can assure that data exported and then imported
will always, under all circumstances, compare the same to the original,
would that be enough of a requirement? In other words, if I offer a
format that is assured of preserving both mantissa and exponent
precision and range,
We have _ample_ evidence that the problem is lack of people able to
review patches, and yet there is this discussion to track patches
better. It reminds me of someone who has lost their keys in an alley,
but is looking for them in the street because the light is better there.
Bruce, I guess
On Thu, 2007-05-03 at 13:51, Bruce Momjian wrote:
I believe the problem is not that there isn't enough information, but
not enough people able to do the work. Seeking solutions in areas that
aren't helping was the illustration.
Yes Bruce, but you're failing to see that a more structured
On Sat, 2007-04-07 at 18:09, Tom Lane wrote:
Awhile back Csaba Nagy [EMAIL PROTECTED] wrote:
Making cluster MVCC-safe will kill my back-door of clustering a hot
table while I run a full DB backup.
Are we agreed that the TRUNCATE-based workaround shown here
http://archives.postgresql.org
speaking with pavan off list he seems to think that only 'create
index' is outside transaction, not the other ddl flavors of it because
they are generally acquiring a excl lock. so, in that sense it is
possibly acceptable to me although still a pretty tough pill to
swallow (thinking, guc
On Tue, 2007-03-20 at 18:12, Josh Berkus wrote:
Tom,
Actually, I think you don't particularly need stats for that in most
cases --- if the planner simply took note that the FK relationship
exists, it would know that each row of the FK side joins to exactly
one row of the PK side, which
This should read:
Considering that the PK part is unique, the
skewness in the relationship is completely determined by the FK part's
histogram. That would give at least a lower/upper bound and MCVs to the
relationship.
Cheers,
Csaba.
On Thu, 2007-03-15 at 17:01, A.M. wrote:
It seems to me that postgresql is especially well-suited to run DDL
at runtime, so what's the issue?
The issue is that some applications are not well suited to run DDL at
runtime :-)
As I already mentioned in another post in this thread, our
On Wed, 2007-03-14 at 16:08, [EMAIL PROTECTED] wrote:
On Wed, Mar 14, 2007 at 02:28:03PM +, Gregory Stark wrote:
David Fetter [EMAIL PROTECTED] writes:
CREATE TABLE symptom (
symptom_id SERIAL PRIMARY KEY, /* See above. */
...
);
CREATE TABLE patient_presents_with
On Wed, 2007-03-14 at 16:50, David Fetter wrote:
On Wed, Mar 14, 2007 at 02:28:03PM +, Gregory Stark wrote:
David Fetter [EMAIL PROTECTED] writes:
CREATE TABLE symptom (
symptom_id SERIAL PRIMARY KEY, /* See above. */
...
);
CREATE TABLE patient_presents_with (
On Tue, 2007-03-13 at 00:43, Richard Huxton wrote:
Josh Berkus wrote:
I really don't see any way you could implement UDFs other than EAV that
wouldn't be immensely awkward, or result in executing DDL at runtime.
What's so horrible about DDL at runtime? Obviously, you're only going to
On Fri, 2007-03-09 at 12:29, Heikki Linnakangas wrote:
Csaba, you mentioned recently
(http://archives.postgresql.org/pgsql-hackers/2007-03/msg00027.php) that
you're actually using the MVCC-violation to clean up tables during a
backup. Can you tell us a bit more about that? Would you be
On Fri, 2007-03-09 at 13:42, Gregory Stark wrote:
Csaba Nagy [EMAIL PROTECTED] writes:
Wouldn't be possible to do it like Simon (IIRC) suggested, and add a
parameter to enable/disable the current behavior, and use the MVCC
behavior as default ?
Doing it in CLUSTER would be weird
On Fri, 2007-03-09 at 14:00, Alvaro Herrera wrote:
But I'm not really seeing the problem here. Why isn't Csaba's problem
fixed by the fact that HOT reduces the number of dead tuples in the
first place? If it does, then he no longer needs the CLUSTER
workaround, or at least, he needs it to a
Hmm. You could use something along these lines instead:
0. LOCK TABLE queue_table
1. SELECT * INTO queue_table_new FROM queue_table
2. DROP TABLE queue_table
3. ALTER TABLE queue_table_new RENAME TO queue_table
After all, it's not that you care about the clustering of the table, you
just
On Fri, 2007-03-09 at 17:47, Tom Lane wrote:
I don't think that people are very likely to need to turn archiving on
and off on-the-fly.
We did need occasionally to turn archiving on on-the-fly. It did happen
that I started up a new DB machine and I did not have yet the log
archive available, so
On Thu, 2007-03-01 at 13:02, Simon Riggs wrote:
I would like to introduce the concept of utility transactions. This is
any transaction that touches only one table in a transaction and is not
returning or modifying data. All utility transactions wait until they
are older than all non-utility
On Thu, 2007-03-01 at 13:56, Simon Riggs wrote:
Wouldn't this be deadlock prone ? What if a non-utility transaction
(which could even be started before the vacuum full) blocks on the table
being vacuumed, then if the vacuum wants to wait until all non-utility
transactions finish will
Do 97% of transactions commit because Oracle has slow rollbacks and
developers are working around that performance issue, or because they
really commit?
I have watched several developers that would prefer to issue numerous
selects to verify things like foreign keys in the application
One option that I've heard before is to have vacuum after a single iteration
(ie, after it fills maintenance_work_mem and does the index cleanup and the
second heap pass), remember where it was and pick up from that point next
time.
From my experience this is not acceptable... I have tables
On Thu, 2007-02-08 at 17:47, Marc Munro wrote:
[snip] One of the causes of deadlocks in Postgres is that its referential
integrity triggers can take locks in inconsistent orders. Generally a
child record will be locked before its parent, but not in all cases.
[snip]
The problem is that
On Fri, 2007-02-02 at 10:51, Simon Riggs wrote:
[snip]
Why do we need a SHARE lock at all, on the **referenc(ed)** table?
It sounds like if we don't put a SHARE lock on the referenced table then
we can end the transaction in an inconsistent state if the referenced
table has concurrent
You say below the cut that you're not updating keys, so presumably it's
other columns. Which leads me to something I've wondered for a while -
why do we lock the whole row? Is it just a matter of not optimised that
yet or is there a good reason why locking just some columns isn't
On Sat, 2007-01-20 at 18:08, Merlin Moncure wrote:
[snip]
To be honest, I'm not a huge fan of psql tricks (error recovery being
another example) but this could provide a solution. In your opinion,
how would you use \if to query the transaction state?
Wouldn't it make sense to introduce
[snip]
IMHO *most* UPDATEs occur on non-indexed fields. [snip]
If my assumption is badly wrong on that then perhaps HOT would not be
useful after all. If we find that the majority of UPDATEs meet the HOT
pre-conditions, then I would continue to advocate it.
Just to confirm that the scenario
On Fri, 2006-10-27 at 09:23, Albe Laurenz wrote:
[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]
To avoid spamming the logs with every failed SQL statement?
And it would be hurting applications where query failure is taken as a
valid path (as
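For reference, the setting under discussion is a one-liner in postgresql.conf (shown with the more verbose choice being argued over):

```ini
# log the text of every statement that raises an ERROR or worse;
# the objection above is that this spams the log when applications
# treat query failure as a normal code path
log_min_error_statement = error
```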