On 21.07.2013 08:41, Jeff Davis wrote:
(For that matter, am I not supposed to commit between 'fests? Or is it
still an option for me to finish up with this after I get back even if
we close the CF?)
It's totally OK to commit stuff between 'fests.
- Heikki
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
(For that matter, am I not supposed to commit between 'fests? Or is it
still an option for me to finish up with this after I get back even if
we close the CF?)
The idea of the CommitFests is to give committers some *time off*
between them. If a committer wants to commit stuff when it's not a
On Wed, 2013-07-17 at 13:43 -0400, Alvaro Herrera wrote:
Tom Lane wrote:
My feeling about this code is that the reason we print the infomask in
hex is so you can see exactly which bits are set if you care, and that
the rest of the line ought to be designed to interpret the bits in as
... well, 11 actually.
We are down to 3 patches needing review, 2 waiting on author, and 6
waiting on commit. Whatever folks can do to close out this commitfest
is very welcome!
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
On Jul 21, 2013 4:06 AM, Noah Misch n...@leadboat.com wrote:
If these hooks will need to apply to a larger operation, I
think that mandates a different means to reliably expose the before/after
object states.
I haven't checked the code to see how it would fit the API, but what about
taking a
Hi, list. Here is my proposal. I would like to talk about O_DIRECT for WAL sync when wal_level is higher than minimal. In my case, WAL traffic is up to 1 GB per 2-3 minutes, but the disk hardware has a 2 GB BBU cache (or perhaps an SSD under the WAL), so it would be better if the WAL traffic could not
Hi, list, again. The next proposal is for auto_explain. One would be happy if one could set a list of target tables and indexes. Sometimes it is very hard to detect who is using your indexes, but turning on full logging under thousands of transactions per second does not seem like a nice idea because of the size of
On Thu, Jul 11, 2013 at 09:14:38PM -0400, Chad Wagner wrote:
It looks to me like when AtEOSubXact_SPI is called, _SPI_current->connectSubId is always 1 (since it is only set when SPI_connect is called, which is only once for plpgsql), but the CurrentSubTransactionId is incremented each time
Noah Misch n...@leadboat.com writes:
On Thu, Jul 11, 2013 at 09:14:38PM -0400, Chad Wagner wrote:
Should SPI_connect be called again after the subtransaction is created? And
SPI_finish before the subtransaction is committed or aborted?
Hmm. An SPI_push()+SPI_connect() every time PL/pgSQL
On Fri, Jul 19, 2013 at 07:34:14PM -0400, Tom Lane wrote:
Noah Misch n...@leadboat.com writes:
On Fri, Jul 05, 2013 at 02:47:06PM -0400, Tom Lane wrote:
So I'm inclined to propose that SPI itself should offer some mechanism
for cleaning up tuple tables at subtransaction abort. We could
On Sun, Jul 21, 2013 at 11:44:51AM +0300, Ants Aasma wrote:
On Jul 21, 2013 4:06 AM, Noah Misch n...@leadboat.com wrote:
If these hooks will need to apply to a larger operation, I
think that mandates a different means to reliably expose the before/after
object states.
I haven't checked
Noah Misch n...@leadboat.com writes:
On Fri, Jul 19, 2013 at 07:34:14PM -0400, Tom Lane wrote:
However, we can use your idea when running inside a subtransaction,
while still attaching the tuple table to the procedure's own procCxt
when no subtransaction is involved. The attached draft patch
Historically, REINDEX would always revalidate any uniqueness enforced by the
index. An EDB customer reported that this is not happening, and indeed I
broke it way back in commit 8ceb24568054232696dddc1166a8563bc78c900a.
Specifically, REINDEX TABLE and REINDEX DATABASE no longer revalidate
Noah,
Attached patch just restores the old behavior. Would it be worth preserving
the ability to fix an index consistency problem with a REINDEX independent
from related heap consistency problems such as duplicate keys?
I would love to have two versions of REINDEX, one which validated and
On 07/21/2013 11:30 AM, Josh Berkus wrote:
Noah,
Attached patch just restores the old behavior. Would it be worth preserving
the ability to fix an index consistency problem with a REINDEX independent
from related heap consistency problems such as duplicate keys?
I would love to have two
Hi Dimitri,
On 07/20/2013 01:23 AM, Dimitri Fontaine wrote:
Markus Wanner mar...@bluegap.ch writes:
- per-installation (not even per-cluster) DSO availability
If you install PostGIS 1.5 on a system, then it's just impossible to
bring another cluster (of the same PostgreSQL major
On 07/21/2013 10:30 PM, Markus Wanner wrote:
but I will admit having a hard time swallowing
the threat model we're talking about…
An attacker having access to a libpq connection with superuser rights
cannot currently run arbitrary native code. He can try a DoS by
exhausting system resources,
Greg,
Yes, I already took a look at it briefly. The updates move in the
right direction, but I can edit them usefully before commit. I'll
have that done by tomorrow and send out a new version. I'm hopeful
that v18 will finally be the one that everyone likes.
Have you done it?
--
Tatsuo
On Sat, Jul 20, 2013 at 6:28 PM, Greg Smith g...@2ndquadrant.com wrote:
On 7/20/13 4:48 AM, didier wrote:
With your tests did you try to write the hot buffers first? ie buffers
with a high refcount, either by sorting them on refcount or at least
sweeping the buffer list in reverse?
I
Hi,
On Sat, Jul 20, 2013 at 6:28 PM, Greg Smith g...@2ndquadrant.com wrote:
On 7/20/13 4:48 AM, didier wrote:
That is the theory. In practice write caches are so large now, there is
almost no pressure forcing writes to happen until the fsync calls show up.
It's easily possible to enter