On 18 February 2015 at 16:22, Stephen Frost sfr...@snowman.net wrote:
Here's the patch against master. I'm still fiddling with the comment
wording and the commit message a bit, but barring objections these
patches are what I'm planning to move forward with.
Yes, that matches what I had in
2015-02-19 16:06 GMT+01:00 Petr Jelinek p...@2ndquadrant.com:
On 19/01/15 17:14, Pavel Stehule wrote:
2015-01-19 14:27 GMT+01:00 Robert Haas robertmh...@gmail.com
mailto:robertmh...@gmail.com:
On Mon, Jan 19, 2015 at 2:59 AM, Pavel Stehule
pavel.steh...@gmail.com
On 1/6/15 4:22 PM, Peter Eisentraut wrote:
That said, there is a much simpler way to achieve that specific
functionality: Expose all the recovery settings as fake read-only GUC
variables. See attached patch for an example.
The commit fest app has this as the patch of record. I don't think
On 2/18/15 3:44 AM, Kyotaro HORIGUCHI wrote:
Sorry, I sent the previous mail without patches by accident. The
patches are attached to this mail.
Hello, this is the patchset v2 of this feature.
0001-Add-regrole.patch
Adding regrole as the name says.
0002-Add-regnamespace.patch
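[Editor's note: to see the intent of the patchset, the new OID alias types would presumably behave like the existing reg* types (regclass, regproc, ...), converting between object names and OIDs in both directions. A hypothetical sketch:]

```sql
-- Sketch of the proposed types, assuming they mirror regclass et al.
SELECT 'pg_catalog'::regnamespace;        -- schema name, displayed as name
SELECT 'pg_catalog'::regnamespace::oid;   -- the underlying namespace OID
SELECT 'postgres'::regrole;               -- role name -> OID alias
SELECT oid::regrole FROM pg_authid LIMIT 1;  -- OID back to role name
```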
2015-02-19 19:51 GMT+01:00 David Fetter da...@fetter.org:
On Sun, Feb 01, 2015 at 08:38:24AM +0100, Pavel Stehule wrote:
Hi all
I am sending a review of this patch:
* What does it do? - Allows connecting to another db via the \connect URI
connection format
postgres=# \c
Ants Aasma a...@cybertec.at wrote:
If I understood the issue correctly, you have long running snapshots
accessing one part of the data, while you have high churn on a
disjoint part of data. We would need to enable vacuum on the high
churn data while still being able to run those long queries.
On Thu, Feb 19, 2015 at 11:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I fully agree with your summary here. However, why should we suppose
that while we wait, the other backends don't both delete and then
re-insert their tuple? They need the pre-check to know not to
re-insert
On 02/19/2015 05:34 AM, Michael Paquier wrote:
I noticed that the following structures are still defined in
gin_private.h but have been unused since 2c03216d, which reworked the
WAL format:
- ginxlogSplitEntry
- ginxlogSplitDataLeaf
- ginxlogSplitDataInternal
Attached is a trivial patch to
Stephen Frost wrote:
Yes, I will push these unless somebody objects soon, as they seem
perfectly reasonable to me. The only troubling thing is that there is
no regression test for this kind of thing in event triggers (i.e. verify
which command tags get support and which don't),
On 2/16/15 2:45 AM, Michael Paquier wrote:
While looking at the patch to fix pg_dump with extensions containing
tables referencing each other, I got surprised by the fact that
getTableAttrs tries to dump table attributes even for tables that are
part of an extension. Is that normal?
Attached
On 2/18/15 1:26 AM, Michael Paquier wrote:
On Wed, Feb 18, 2015 at 10:06 AM, Michael Paquier wrote:
Yes, the existing assertion is right. My point is that it is strange
that we do not check the values of freeze parameters for an ANALYZE
query, which should be set to -1 all the time. If this is
On 1/14/15 11:31 PM, Michael Paquier wrote:
pg_regress will fail with test suites using only source files if the
destination folders do not exist in the code tree. This is annoying
because it forces maintaining empty sql/ and expected/ folders with
a .gitignore ignoring everything.
We'd
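[Editor's note: for context, the workaround being complained about is keeping placeholder directories in the tree, e.g. a sql/.gitignore and expected/.gitignore along these lines (a sketch):]

```
# keep the otherwise-empty directory in git, ignore generated files
*
!.gitignore
```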
On 02/19/2015 03:31 PM, Kevin Grittner wrote:
What about having the long running snapshots declare their working
set, and then only take them into account for global xmin for
relations that are in the working set? Like a SET TRANSACTION WORKING
SET command. This way the error is deterministic,
On 12/29/14 7:16 PM, Adam Brightwell wrote:
Given this discussion, I have attached a patch that removes CATUPDATE
for review/discussion.
One of the interesting behaviors (or perhaps not) is how
'pg_class_aclmask' handles an invalid role id when checking permissions
against 'rolsuper'
On 2015-02-18 21:00:43 -0500, Tom Lane wrote:
Michael Paquier michael.paqu...@gmail.com writes:
3) heapam.c in three places with HeapTupleHeaderData:
struct
{
    HeapTupleHeaderData hdr;
    char        data[MaxHeapTupleSize];
}
On 2015-02-18 17:29:27 -0500, Tom Lane wrote:
Michael Paquier michael.paqu...@gmail.com writes:
On Wed, Feb 18, 2015 at 10:09 PM, Andres Freund and...@2ndquadrant.com
wrote:
The compiler will complain if you use a FLEXIBLE_ARRAY_MEMBER in the
middle of a struct but not when you
Hi,
On 2015-02-18 16:35:14 -0800, Peter Geoghegan wrote:
Andres pointed out that the INSERT ... ON CONFLICT UPDATE patch
doesn't work well with logical decoding.
Just to make that clear: I didn't actually test it, but it didn't look
good.
I guess that the best way of fixing this is exposing
Hi,
Commit 79af9a1d2668c9edc8171f03c39e7fed571eeb98 changed xpath handling
with regard to namespaces, and it seems to be fixing an actual issue.
However, it was also backpatched to all branches despite it breaking for
example code like this:
do $$
declare
_x xml;
begin
_x :=
On Wed, Feb 18, 2015 at 5:34 PM, Kyotaro HORIGUCHI
horiguchi.kyot...@lab.ntt.co.jp wrote:
Hello, this is the last patch for pg_basebackup/pg_receivexlog on
master (9.5). Prior versions don't have this issue.
4. basebackup_reply_fix_mst_v2.patch
receivelog.c patch applicable to master.
This
On 02/18/2015 11:43 PM, Peter Geoghegan wrote:
Heikki seemed to think that the deadlock problems were not really
worth fixing independently of ON CONFLICT UPDATE support, but rather
represented a useful way of committing code incrementally. Do I have
that right?
Yes.
The way I chose to break
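[Editor's note: for readers following along, the feature under discussion is the upsert syntax the patch proposes; shown here roughly in its eventually committed 9.5 spelling, which differs slightly from the patch versions in this thread:]

```sql
-- Roughly the syntax under discussion (final 9.5 spelling shown)
INSERT INTO counters (key, val)
VALUES ('hits', 1)
ON CONFLICT (key)
DO UPDATE SET val = counters.val + EXCLUDED.val;
```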
My Salesforce colleagues have been bugging me about this topic, and
since I see in a nearby thread that we may be about to break backwards
compatibility on =, maybe it's time to do something about this too.
To wit, that the precedence of <= >= and <> is neither sane nor standards
compliant.
Up to
On Wed, Feb 18, 2015 at 08:31:09PM -0700, David G. Johnston wrote:
On Wed, Feb 18, 2015 at 6:50 PM, Andrew Dunstan and...@dunslane.net wrote:
On 02/18/2015 08:34 PM, David Fetter wrote:
On Tue, Feb 17, 2015 at 08:21:32PM -0500, Peter Eisentraut wrote:
On 1/20/15 6:32 PM, David G
On Wed, Feb 18, 2015 at 4:57 PM, Kevin Grittner kgri...@ymail.com wrote:
But max_standby_streaming_delay, max_standby_archive_delay and
hot_standby_feedback are among the most frequent triggers for
questions and complaints that I/we see.
Agreed.
And a really bad one used to be
Andrew Dunstan and...@dunslane.net wrote:
On 02/19/2015 09:44 AM, Kevin Grittner wrote:
I understand why this make people nervous. I wonder if it might be more
palatable if there were a per-table setting that could enable it? If we
could ensure that this was only applied to high churn queue
Rod Taylor rod.tay...@gmail.com wrote:
Would pg_dump be able to opt-out of such a restriction?
I don't see how, since vacuum would be removing recently dead
tuples that are still visible; the alternative to getting a
snapshot too old error when reading a page which could be
affected is to
On Thu, Feb 19, 2015 at 11:10 AM, David Fetter da...@fetter.org wrote:
On Wed, Feb 18, 2015 at 08:31:09PM -0700, David G. Johnston wrote:
On Wed, Feb 18, 2015 at 6:50 PM, Andrew Dunstan and...@dunslane.net
wrote:
On 02/18/2015 08:34 PM, David Fetter wrote:
On Tue, Feb 17, 2015 at
On Sun, Feb 01, 2015 at 08:38:24AM +0100, Pavel Stehule wrote:
Hi all
I am sending a review of this patch:
* What does it do? - Allows connecting to another db via the \connect URI
connection format
postgres=# \c postgresql://localhost?service=old
psql (9.5devel, server 9.2.9)
You are now
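[Editor's note: to illustrate the feature under review: besides the positional form, \connect would accept a libpq URI or conninfo string. Hypothetical examples; the parameter choices are illustrative:]

```sql
-- Positional form (existing)
\c mydb myuser localhost 5432
-- URI form (as proposed)
\c postgresql://myuser@localhost:5432/mydb?sslmode=require
-- conninfo-string form
\c "host=localhost port=5432 dbname=mydb connect_timeout=10"
```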
On Thu, Feb 19, 2015 at 5:21 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Hmm. I haven't looked at your latest patch, but I don't think you need to
pre-check for this to work. To recap, the situation is that two backends
have already inserted the heap tuple, and then see that the other
On Thu, Feb 19, 2015 at 6:01 PM, Kevin Grittner kgri...@ymail.com wrote:
I can see how they would be, provided we can be confident that we're
going to actually throw an error when the snapshot is out of date and
not end up returning incorrect results. We need to be darn sure of
that, both now
On Thu, Feb 19, 2015 at 3:44 PM, Kevin Grittner kgri...@ymail.com wrote:
I'm also interested in handling the case Stephen Frost described, where
a tuple is effectively dead but we don't currently have the means of
discovering the fact, because there is an older long running transaction
which
On 02/19/2015 08:16 PM, Peter Geoghegan wrote:
On Thu, Feb 19, 2015 at 5:21 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Hmm. I haven't looked at your latest patch, but I don't think you need to
pre-check for this to work. To recap, the situation is that two backends
have already
On 02/19/2015 12:23 PM, Peter Eisentraut wrote:
On 1/6/15 4:22 PM, Peter Eisentraut wrote:
That said, there is a much simpler way to achieve that specific
functionality: Expose all the recovery settings as fake read-only GUC
variables. See attached patch for an example.
The commit fest app
In the course of writing a small side project which hopefully will make its
way onto pgxn soon, I was writing functions that had a polymorphic result
set.
create function foo( p_row_type anyelement, p_param1 ...) returns setof
anyelement
Inside that function would be multiple calls to dblink()
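[Editor's note: one way to sketch the pattern being described, with all names hypothetical. This assumes the usual trick of passing a NULL of the desired row type to fix the polymorphic result type, and having the remote side wrap each row in a composite of that type so dblink's column definition list can be derived from pg_typeof():]

```sql
CREATE FUNCTION remote_rows(p_row_type anyelement,
                            p_conn text,
                            p_sql text)
RETURNS SETOF anyelement
LANGUAGE plpgsql AS $$
BEGIN
    -- dblink() requires a column definition list; derive it from the
    -- type of the polymorphic parameter, and unwrap the composite
    -- locally with (t.r).* so the result rows match that type.
    RETURN QUERY EXECUTE format(
        'SELECT (t.r).* FROM dblink(%L, %L) AS t(r %s)',
        p_conn,
        format('SELECT ROW(q.*)::%s FROM (%s) q',
               pg_typeof(p_row_type), p_sql),
        pg_typeof(p_row_type));
END;
$$;

-- usage: SELECT * FROM remote_rows(NULL::pg_am, 'dbname=postgres',
--                                  'SELECT * FROM pg_am');
```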
Hello
This is my problem:
fdw_test=# SELECT oracle_diag();
                   oracle_diag
--------------------------------------------------
oracle_fdw 1.2.0, PostgreSQL 9.1.8, Oracle client
On Feb 19, 2015 10:31 PM, Kevin Grittner kgri...@ymail.com wrote:
What about having the long running snapshots declare their working
set, and then only take them into account for global xmin for
relations that are in the working set? Like a SET TRANSACTION WORKING
SET command. This way the
On Thu, Feb 19, 2015 at 09:32:29PM +0100, Pavel Stehule wrote:
2015-02-19 19:51 GMT+01:00 David Fetter da...@fetter.org:
On Sun, Feb 01, 2015 at 08:38:24AM +0100, Pavel Stehule wrote:
I'm not sure how best to illustrate those. Are you thinking of one
example each for the URI and
On Fri, Feb 20, 2015 at 5:33 AM, Peter Eisentraut pete...@gmx.net wrote:
On 2/16/15 2:45 AM, Michael Paquier wrote:
While looking at the patch to fix pg_dump with extensions containing
tables referencing each other, I got surprised by the fact that
getTableAttrs tries to dump table attributes
On Fri, Feb 20, 2015 at 07:10:29AM +0100, Pavel Stehule wrote:
Hi
I am happy with doc changes now.
When I tested the last patch, I found a segfault bug, because host =
PQhost(o_conn); returns NULL. I fixed it - please see patch 007
If you agree with the fix, I'll mark this patch as ready for
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed
no issues with last 007 patch
Hi
I am happy with doc changes now.
When I tested the last patch, I found a segfault bug, because host =
PQhost(o_conn); returns NULL. I fixed it - please see patch 007
If you agree with the fix, I'll mark this patch as ready for commit.
Regards
Pavel
2015-02-19 23:33 GMT+01:00 David Fetter
On Fri, Feb 20, 2015 at 5:50 AM, Peter Eisentraut pete...@gmx.net wrote:
On 1/14/15 11:31 PM, Michael Paquier wrote:
pg_regress will fail with test suites using only source files if the
destination folders do not exist in the code tree. This is annoying
because it forces maintaining empty
On Fri, Feb 20, 2015 at 7:15 AM, Gilberto Castillo wrote:
This is my problem:
fdw_test=# SELECT oracle_diag();
                   oracle_diag
--------------------------------------------------
Michael Paquier michael.paqu...@gmail.com writes:
Thanks for the clarifications and the review. Attached is a new set.
I've reviewed and pushed the 0001 patch (you missed a few things :-().
Let's see how unhappy the buildfarm is with this before we start on
the rest of them.
Hello,
We have a combination of 9.3 and 9.4 databases used for logging of data.
We do not need a strong durability guarantee, meaning it is ok if on crash a
minute or two of data is lost from our logs. (This is just stats for our
internal tool).
I am looking at this page:
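[Editor's note: the standard knob for this tradeoff is synchronous_commit; a sketch, see the "Asynchronous Commit" documentation for the exact guarantees. Turning it off risks losing a short, bounded window of recent commits on crash (roughly three times wal_writer_delay), but never corrupts the database:]

```sql
-- Cluster-wide (9.4+'s ALTER SYSTEM writes postgresql.auto.conf)
ALTER SYSTEM SET synchronous_commit = off;
SELECT pg_reload_conf();

-- Or scope it to just the logging workload
SET synchronous_commit = off;  -- per session or per transaction
```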
On Fri, Feb 20, 2015 at 2:50 PM, Corey Huinker corey.huin...@gmail.com wrote:
Thanks - completely new to this process, so I'm going to need
walking-through of it. I promise to document what I learn and try to add
that to the commitfest wiki. Where can I go for guidance about documentation
On 2015/01/15 16:35, Etsuro Fujita wrote:
On 2014/12/23 0:36, Tom Lane wrote:
Yeah, we need to do something about the PlanRowMark data structure.
Aside from the pre-existing issue in postgres_fdw, we need to fix it
to support inheritance trees in which more than one rowmark method
is being
Thanks - completely new to this process, so I'm going to need
walking-through of it. I promise to document what I learn and try to add
that to the commitfest wiki. Where can I go for guidance about
documentation format and regression tests?
Author field is presently being finicky, reported that
2015-02-20 8:22 GMT+01:00 David Fetter da...@fetter.org:
On Fri, Feb 20, 2015 at 07:10:29AM +0100, Pavel Stehule wrote:
Hi
I am happy with doc changes now.
When I tested the last patch, I found a segfault bug, because host =
PQhost(o_conn); returns NULL. I fixed it - please see patch 007
On Fri, Feb 20, 2015 at 7:06 AM, Corey Huinker corey.huin...@gmail.com wrote:
Proposed patch attached.
At quick glance, this patch lacks two things:
- regression test coverage
- documentation
(Note: do not forget to add your name in the field Author when
adding a new patch in the CF app).
--
Hello all,
This is my first patch to PostgreSQL, so I think I'd better start with an easy
one.
I am a native Chinese speaker. A lot of words and expressions are different
between mainland Chinese and Taiwan/Hong Kong Chinese.
For instance, Taiwan Chinese will use 档案 to represent file, but in
On Fri, Feb 20, 2015 at 2:14 PM, Tom Lane wrote:
Michael Paquier writes:
Thanks for the clarifications and the review. Attached is a new set.
I've reviewed and pushed the 0001 patch (you missed a few things :-().
My apologies. I completely forgot to check for any calls of offsetof
with the
On Mon, Feb 02, 2015 at 03:48:33PM -0500, Robert Haas wrote:
On Sat, Jan 31, 2015 at 8:28 AM, Marco Nenciarini
marco.nenciar...@2ndquadrant.it wrote:
I've attached a new version of the patch fixing the missing closedir on
readdir error.
If readdir() fails and closedir() succeeds, the
On Fri, Feb 20, 2015 at 5:41 AM, Peter Eisentraut wrote:
That's cool if you want to add those assertions, but please make them
separate statements each, like
Assert(vacstmt->options & (VACOPT_FULL | VACOPT_FREEZE) ||
       vacstmt->freeze_min_age == -1);
Assert(vacstmt->options & (VACOPT_FULL |
Tomas Vondra tomas.von...@2ndquadrant.com writes:
I see the patch only works with the top-level snapshot timestamp, stored
in globalStats, but since 9.3 (when the stats were split into per-db
files) we track per-database timestamps too.
Shouldn't we make those timestamps accessible too? It's
On 2/19/15 11:57 AM, Bruce Momjian wrote:
On Wed, Jan 28, 2015 at 09:26:11PM -0800, Josh Berkus wrote:
3. Check that the replica is not very lagged. If it is, wait for
traffic to die down and for it to catch up.
Now that 9.4.1 is released, I would like to get this doc patch applied
--- it
Tomas Vondra tomas.von...@2ndquadrant.com writes:
Well, the patch also does this:
*** 28,34 ****
  SELECT pg_sleep_for('2 seconds');
  CREATE TEMP TABLE prevstats AS
  SELECT t.seq_scan, t.seq_tup_read, t.idx_scan, t.idx_tup_fetch,
         (b.heap_blks_read + b.heap_blks_hit) AS heap_blks,
Stephen Frost sfr...@snowman.net wrote:
Kevin Grittner (kgri...@ymail.com) wrote:
Stephen Frost sfr...@snowman.net wrote:
In the end, with a single long-running transaction, the worst bloat you
would have is double the size of the system at the time the long-running
transaction started.
I
On 02/19/2015 09:44 AM, Kevin Grittner wrote:
On the 15th I said this:
| What this discussion has made me reconsider is the metric for
| considering a transaction too old. The number of transaction IDs
| consumed seems inferior as the unit of measure for that to LSN or
| time.
|
| It looks
Greg Stark st...@mit.edu wrote:
On Sun, Feb 15, 2015 at 8:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
There might be something in that, but again it's not much like
this patch. The key point I think we're both making is that
nondeterministic failures are bad, especially when you're
talking
Sorry to revive this thread, I just had one additional thought...
To those advocating the deprecation of the max_rows parameter of Execute,
there's another argument to consider. max_rows isn't just there in order to
fetch, say, a single row of the result set and discard the rest (which is
what I
On 02/16/2015 11:31 AM, Andres Freund wrote:
On 2015-02-16 10:00:24 +0200, Heikki Linnakangas wrote:
I'm starting to think that we should bite the bullet and consume an infomask
bit for this. The infomask bits are a scarce resource, but we should use
them when it makes sense. It would be good
On 19/01/15 17:14, Pavel Stehule wrote:
2015-01-19 14:27 GMT+01:00 Robert Haas robertmh...@gmail.com
mailto:robertmh...@gmail.com:
On Mon, Jan 19, 2015 at 2:59 AM, Pavel Stehule
pavel.steh...@gmail.com mailto:pavel.steh...@gmail.com wrote:
I think you should just remove the
On Wed, Jan 28, 2015 at 09:26:11PM -0800, Josh Berkus wrote:
So, for my 2c, I'm on the fence about it. On the one hand, I agree,
it's a bit of a complex process to get right. On the other hand, it's
far better if we put something out there along the lines of if you
really want to, this
Stephen Frost wrote:
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
Now, we probably don't want to hack *all* the utility commands to return
ObjectAddress instead of OID, because it many cases that's just not
going to be convenient (not to speak of the code churn); so I think for
Peter,
Thanks for the review and feedback.
One of the interesting behaviors (or perhaps not) is how
'pg_class_aclmask' handles an invalid role id when checking permissions
against 'rolsuper' instead of 'rolcatupdate'.
I'd get rid of that whole check, not just replace rolcatupdate by
Kevin,
* Kevin Grittner (kgri...@ymail.com) wrote:
Stephen Frost sfr...@snowman.net wrote:
Kevin Grittner (kgri...@ymail.com) wrote:
(1) They have a pool of connections each of which can have several
long-running cursors, so the limit from that isn't just doubling
the size of their
Yeah. The only use-case that's been suggested is detecting an
unresponsive stats collector, and the main timestamp should be plenty for
that.
Lately, I've spent most of my time doing investigation into increasing
qps. Turned out we've been able to triple our throughput by monitoring
Matt Kelly mkell...@gmail.com writes:
Attached is the fixed version. (hopefully with the right mime-type and
wrong extension. Alas, gmail doesn't let you set mime-types; time to find
a new email client...)
Committed with a couple of changes:
* I changed the function name from
Matt Kelly mkell...@gmail.com writes:
Yeah. The only use-case that's been suggested is detecting an
unresponsive stats collector, and the main timestamp should be plenty for
that.
The problem with doing highly granular snapshots is that the postgres
counters are monotonically increasing,
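[Editor's note: assuming the committed function keeps the name used in the patch, the unresponsive-collector check being discussed would look something like this hypothetical usage:]

```sql
-- Age of the current statistics snapshot; a large value suggests the
-- stats collector has stopped responding.
SELECT now() - pg_stat_get_snapshot_timestamp() AS stats_age;
```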
Hello,
I showed an extreme number of examples to include *almost all*
variations of the existing syntax of option specification, and showed
what would happen if all variations could be used for all commands. It was
almost a mess. Sorry for the confusion.
I think the issues at our hands are,
- Options
On 20.2.2015 02:58, Tom Lane wrote:
Tomas Vondra tomas.von...@2ndquadrant.com writes:
I see the patch only works with the top-level snapshot timestamp,
stored in globalStats, but since 9.3 (when the stats were split
into per-db files) we track per-database timestamps too.
Shouldn't we make
On 2/19/15 1:54 PM, Kevin Grittner wrote:
Rod Taylor rod.tay...@gmail.com wrote:
Would pg_dump be able to opt-out of such a restriction?
I don't see how, since vacuum would be removing recently dead
tuples that are still visible; the alternative to getting a
snapshot too old error when
On 2/18/15 10:29 PM, Fujii Masao wrote:
On Thu, Feb 19, 2015 at 12:25 AM, David Steele da...@pgmasters.net wrote:
pg_audit doesn't log BIND parameter values when a prepared statement is
used.
Seems this is an oversight of the patch. Or is this intentional?
It's actually intentional -