Re: [HACKERS] Measuring replay lag

2017-01-03 Thread Simon Riggs
On 3 January 2017 at 23:22, Thomas Munro  wrote:

>> I don't see why that would be unacceptable. If we do it for
>> remote_apply, why not also do it for other modes? Whatever the
>> reasoning was for remote_apply should work for other modes. I should
>> add it was originally designed to be that way by me, so must have been
>> changed later.
>
> You can achieve that with this patch by setting
> replication_lag_sample_interval to 0.

I wonder why you ignored my mention of the bug in the current mechanism?

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Declarative partitioning - another take

2017-01-03 Thread 高增琦
Server crash (failed assertion) when two INSERTs appear in one SQL statement:

Steps to reproduce:
create table t(a int, b int) partition by range(a);
create table t_p1 partition of t for values from (1) to (100);
create table t_p2 partition of t for values from (100) to (200);
create table t_p3 partition of t for values from (200) to (300);

create table b(a int, b int);
with a(a,b) as(insert into t values(3, 3) returning a, b) insert into b
select * from a;

Please check it.

2017-01-04 14:11 GMT+08:00 Rajkumar Raghuwanshi <
rajkumar.raghuwan...@enterprisedb.com>:

> On Wed, Jan 4, 2017 at 10:37 AM, Amit Langote <
> langote_amit...@lab.ntt.co.jp> wrote:
>
>> On 2017/01/03 19:04, Rajkumar Raghuwanshi wrote:
>> > On Tue, Dec 27, 2016 at 3:24 PM, Amit Langote wrote:
>> >>
>> >> Attached patch should fix the same.
>> >
>> > I have applied the attached patch; the server crash for range is fixed,
>> > but I am still getting a crash for multi-level list partitioning insert.
>> >
>> > postgres=# CREATE TABLE test_ml_l (a int, b int, c varchar) PARTITION BY
>> > LIST(c);
>>
>> [ ... ]
>>
>> > postgres=# INSERT INTO test_ml_l SELECT i, i, to_char(i/50, 'FM')
>> FROM
>> > generate_series(0, 599, 2) i;
>> > server closed the connection unexpectedly
>> > This probably means the server terminated abnormally
>> > before or while processing the request.
>> > The connection to the server was lost. Attempting reset: Failed.
>>
>> Hm, that's odd.  I tried your new example, but didn't get the crash.
>>
>> Thanks,
>> Amit
>>
>
> Thanks, I have pulled the latest sources from git and then applied the patch
> "fix-wrong-ecxt_scantuple-crash.patch". I am not getting the crash now; maybe
> I missed something last time.
>
>


-- 
GaoZengqi
pgf...@gmail.com
zengqi...@gmail.com


Re: [HACKERS] An isolation test for SERIALIZABLE READ ONLY DEFERRABLE

2017-01-03 Thread Michael Paquier
On Wed, Jan 4, 2017 at 12:48 AM, Robert Haas  wrote:
> On Sun, Jan 1, 2017 at 4:38 AM, Thomas Munro
>  wrote:
>> To be able to do this, the patch modifies the isolation tester so that
>> it recognises wait_event SafeSnapshot.
>
> I'm not going to say that's unacceptable, but it's certainly not beautiful.

Perhaps being able to define in an isolation spec a step called
'wait_event' with a value defined to the wait event to look for would
make more sense?
-- 
Michael




Re: [HACKERS] Parallel bitmap heap scan

2017-01-03 Thread Dilip Kumar
On Mon, Dec 26, 2016 at 3:14 PM, Dilip Kumar  wrote:
> The other option is that we can always make the caller provide an
> allocator. But that way every new user of simple hash needs to take
> care of having an allocator.
>
> What is your opinion?

Attached is the new version of the patch which implements it the way I
described.

>
>
>>This also needs docs, including a warning that just
>> using an allocator in shared memory does *NOT* allow the hash table to be
>> used in shared memory in the general case.
>
> Make sense.
Added the warning.

I have also fixed a bug in parallel bitmap heap scan
(path.parallel_workers was not initialized before calling
cost_bitmap_heap_scan in some cases, so it was picking up an
uninitialized value). Patch attached.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


hash-support-alloc-free-v6.patch
Description: Binary data


parallel-bitmap-heap-scan-v6.patch
Description: Binary data



Re: [HACKERS] Commit fest 2017-01 will begin soon!

2017-01-03 Thread Michael Paquier
On Tue, Jan 3, 2017 at 8:41 PM, Michael Paquier
 wrote:
> On Mon, Jan 2, 2017 at 2:15 AM, Fabrízio de Royes Mello
>  wrote:
>> I changed the status to "In Progress".
>
> Thanks for covering my absence.

To all hackers,

Commit fest 2017-01 has now officially begun. With this commit fest
included, there are still two sessions to get a new feature included
in Postgres 10.

If you are the author of one or more patches, please don't forget to
review other patches with an equivalent difficulty to balance with the
patches you have submitted.

If you review patches, please coordinate with the author. The worst
thing that can happen to a patch is that someone is registered as a
reviewer, implying a guarantee that he/she will look at the patch
within the timeframe of the commit fest, but then does not show up by
the end. Looking at patches that touch areas of the code that you are
not an expert in is always appreciated. The challenge is surely
tougher, but you gain confidence and knowledge for your future work on
Postgres core. As with the last commit fest, it is expected that there
will be more patches than reviewers, so even if you have not submitted
any patch, feel free to look at what is present.

And of course, don't forget that any input is useful input. Just
looking at a patch and letting the author know the following things is
always appreciated, so asking these questions first usually makes the
most sense:
- Does the patch apply cleanly? For example, check whether `git diff
--check` complains.
- Does the patch include documentation? Does the patch need
documentation or a README?
- Does the patch need, include and pass regression tests?
- Does the patch respect the code format of the project? This portion
of the documentation is useful:
https://www.postgresql.org/docs/devel/static/source.html

Here are now the current stats of the patches:
- Needs review: 82.
- Waiting on Author: 18.
- Ready for Committer: 18.
Meaning that some effort is required from committers and reviewers to
get to a better balance. For the authors of patches that are waiting on
their input, please avoid leaving your patch in a latent state for too
long.

And of course, please double-check that the status of your patch on
the CF app (https://commitfest.postgresql.org/12/) is set according to
the status of the thread.

Let's have a productive CF for Postgres 10! And the show begins...
Thanks.
-- 
Michael




Re: [HACKERS] Potential data loss of 2PC files

2017-01-03 Thread Michael Paquier
On Wed, Jan 4, 2017 at 1:23 PM, Ashutosh Bapat
 wrote:
> I don't have anything more to review in this patch. I will leave the
> commitfest entry in "needs review" status for a few days in case anyone
> else wants to review it. If no one is going to review it, we can mark
> it as "ready for committer".

Thanks for the input!
-- 
Michael




Re: [HACKERS] Proposal : Parallel Merge Join

2017-01-03 Thread Amit Kapila
On Wed, Dec 21, 2016 at 9:23 PM, Dilip Kumar  wrote:
> On Wed, Dec 21, 2016 at 8:39 PM, Robert Haas  wrote:
>> Committed the refactoring patch with some mild cosmetic adjustments.
>
> Thanks..
>>
>> As to the second patch:
>>
>> +if (jointype == JOIN_UNIQUE_INNER)
>> +jointype = JOIN_INNER;
>>
>> Isn't this dead code?  save_jointype might have that value, but jointype won't.
>
> Yes, it is.
>
> I have fixed this, and also observed that the header comment of
> try_partial_mergejoin_path had a problem; fixed that too.
>

Review comments:
1.
+ bool is_partial);
+

It seems the additional blank line is not required.

2.
+ * try_partial_mergejoin_path
+ *	  Consider a partial merge join path; if it appears useful, push it into
+ *	  the joinrel's pathlist via add_path().
+ */

I think in the above comment you should say ".. joinrel's
partial_pathlist via add_partial_path()".

3.
+ /*
+ * See comments in try_partial_nestloop_path().
+ */
+ Assert(bms_is_empty(joinrel->lateral_relids));
+ if (inner_path->param_info != NULL)
+ {
+     Relids     inner_paramrels = inner_path->param_info->ppi_req_outer;
+
+     if (!bms_is_subset(inner_paramrels, outer_path->parent->relids))
+         return;
+ }

This same code exists in try_partial_nestloop_path() and
try_partial_hashjoin_path(), with a minor difference in
try_partial_hashjoin_path(). Can't we write a separate inline
function for this code and call it from all three places?

4.
+ /*
+ * See comments in try_nestloop_path().
+ */
+ initial_cost_mergejoin(root, &workspace, jointype, mergeclauses,
+                        outer_path, inner_path,
+                        outersortkeys, innersortkeys,
+                        extra->sjinfo);

I think in the above comment you meant to say try_partial_nestloop_path().

5.
- if (joinrel->consider_parallel && nestjoinOK &&
- save_jointype != JOIN_UNIQUE_OUTER &&
- bms_is_empty(joinrel->lateral_relids))
+ if (!joinrel->consider_parallel ||
+ save_jointype == JOIN_UNIQUE_OUTER ||
+ !bms_is_empty(joinrel->lateral_relids) ||
+ jointype == JOIN_FULL ||
+ jointype == JOIN_RIGHT)
+ return;
+
+ if (nestjoinOK)

Here, we only want to create partial paths when
outerrel->partial_pathlist is not NIL, so I think you can include
that check as well.  Another point to note is that in HEAD we are not
checking for jointype JOIN_RIGHT or JOIN_FULL when considering
parallel nestloop paths, whereas with your patch it will do those
checks. Is that a bug in HEAD, or is there another reason for
including those checks for parallel nestloop paths?

6.
+ /* Can't generate mergejoin path if inner rel is parameterized by outer */
+ if (inner_cheapest_total != NULL)
+ {
+     ListCell   *lc1;
+
+     /* generate merge join path for each partial outer path */
+     foreach(lc1, outerrel->partial_pathlist)
+     {
+         Path       *outerpath = (Path *) lfirst(lc1);
+         List       *merge_pathkeys;
+
+         /*
+          * Figure out what useful ordering any paths we create will have.
+          */
+         merge_pathkeys = build_join_pathkeys(root, joinrel, jointype,
+                                              outerpath->pathkeys);
+
+         generate_mergejoin_paths(root, joinrel, innerrel, outerpath,
+                                  save_jointype, extra, false,
+                                  inner_cheapest_total, merge_pathkeys,
+                                  true);
+     }
+ }

Won't it be better if you encapsulate the above chunk of code in a
function consider_parallel_mergejoin(), similar to
consider_parallel_nestloop()?  I think the code will look cleaner
that way.

7.
+ /*
+ * Figure out what useful ordering any paths we create
+ * will have.
+ */
+ merge_pathkeys = build_join_pathkeys(root, joinrel, jointype,
+   outerpath->pathkeys);

There is an indentation problem with the outerpath->pathkeys argument.

8.
- try_mergejoin_path(root,
-   joinrel,
-   outerpath,
-   inner_cheapest_total,
-   merge_pathkeys,
-   mergeclauses,
-   NIL,
-   innersortkeys,
-   jointype,
-   extra);
+ if (!is_partial)
+ try_mergejoin_path(root,
+ joinrel,
+ outerpath,
+ inner_cheapest_total,
+ merge_pathkeys,
+ mergeclauses,
+ NIL,
+ innersortkeys,
+ jointype,
+ extra);
+
+ /* Generate partial path if inner is parallel safe. */
+ else if (inner_cheapest_total->parallel_safe)
+ try_partial_mergejoin_path(root,
+ joinrel,
+ outerpath,
+ inner_cheapest_total,
+ merge_pathkeys,
+ mergeclauses,
+ NIL,
+ innersortkeys,
+ jointype,
+ extra);

In the above code the indentation is broken; similarly, there is other
code in the patch where it is broken. Try using pgindent.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Abhijit Menon-Sen
At 2017-01-03 18:46:32 +0100, mag...@hagander.net wrote:
>
> Thoughts?

I think we should stop making wholesale changes to copyright notices
every year.

-- Abhijit




Re: [HACKERS] Declarative partitioning - another take

2017-01-03 Thread Rajkumar Raghuwanshi
On Wed, Jan 4, 2017 at 10:37 AM, Amit Langote  wrote:

> On 2017/01/03 19:04, Rajkumar Raghuwanshi wrote:
> > On Tue, Dec 27, 2016 at 3:24 PM, Amit Langote wrote:
> >>
> >> Attached patch should fix the same.
> >
> > I have applied the attached patch; the server crash for range is fixed, but
> > I am still getting a crash for multi-level list partitioning insert.
> >
> > postgres=# CREATE TABLE test_ml_l (a int, b int, c varchar) PARTITION BY
> > LIST(c);
>
> [ ... ]
>
> > postgres=# INSERT INTO test_ml_l SELECT i, i, to_char(i/50, 'FM')
> FROM
> > generate_series(0, 599, 2) i;
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> > The connection to the server was lost. Attempting reset: Failed.
>
> Hm, that's odd.  I tried your new example, but didn't get the crash.
>
> Thanks,
> Amit
>

Thanks, I have pulled the latest sources from git and then applied the patch
"fix-wrong-ecxt_scantuple-crash.patch". I am not getting the crash now; maybe
I missed something last time.


Re: [HACKERS] background sessions

2017-01-03 Thread amul sul
On Tue, Jan 3, 2017 at 11:36 PM, Andrew Borodin  wrote:
> 2017-01-03 19:39 GMT+05:00 Peter Eisentraut
> :
>>
>> On 1/3/17 1:26 AM, amul sul wrote:
>> > One more requirement for pg_background is session, command_qh,
>> > response_qh and worker_handle should be last longer than current
>> > memory context, for that we might need to allocate these in
>> > TopMemoryContext.  Please find attach patch does the same change in
>> > BackgroundSessionStart().
>>
>> I had pondered this issue extensively.  The standard coding convention
>> in postgres is that the caller sets the memory context.  See the dblink
>> and plpython patches that make this happen in their own way.
>>
>> I agree it would make sense that you either pass in a memory context or
>> always use TopMemoryContext.  I'm open to these ideas, but they did not
>> seem to match any existing usage.
>
> +1
> Please excuse me if I'm not getting something obvious, but it seems like
> the BackgroundSessionNew() caller in pg_background_launch() can pick up
> TopMemoryContext. I think that context setup should be done by the
> extension, not by the bg_session API.
>

Agreed, I will make these changes for pg_background.

One more query: can we modify BackgroundSessionStart()/the
BackgroundSession struct to expose the background worker PID as well?
I am asking because BackgroundSessionNew() only returns a session
pointer, but pg_background_launch() needs this PID to pass to the
user, where it will further be used as a session identifier when
fetching results and/or executing further queries, as well as to send
a query cancellation signal to the background worker.

I understand this requirement may sound useless for now, because it is
only for the benefit of the pg_background contrib module. Thoughts?

Thanks & Regards,
Amul




Re: [HACKERS] Add support to COMMENT ON CURRENT DATABASE

2017-01-03 Thread Ashutosh Bapat
On Tue, Jan 3, 2017 at 10:09 PM, Robert Haas  wrote:
> On Tue, Jan 3, 2017 at 12:06 AM, Ashutosh Bapat
>  wrote:
>> Instead of changing get_object_address_unqualified(),
>> get_object_address_unqualified() and pg_get_object_address(), should
>> we just stick get_database_name(MyDatabaseId) as object name in
>> gram.y?
>
> No.  Note this comment at the top of gram.y:
>
>  *In general, nothing in this file should initiate database accesses
>  *nor depend on changeable state (such as SET variables).  If you do
>  *database accesses, your code will fail when we have aborted the
>  *current transaction and are just parsing commands to find the next
>  *ROLLBACK or COMMIT.  If you make use of SET variables, then you
>  *will do the wrong thing in multi-query strings like this:
>  *  SET constraint_exclusion TO off; SELECT * FROM foo;
>  *because the entire string is parsed by gram.y before the SET gets
>  *executed.  Anything that depends on the database or changeable state
>  *should be handled during parse analysis so that it happens at the
>  *right time not the wrong time.
>
> I grant you that MyDatabaseId can't (currently, anyway) change during
> the lifetime of a single backend, but it still seems like a bad idea
> to make gram.y depend on that.  If nothing else, it's problematic if
> we want to deparse the DDL statement (as Fabrízio also points out).
>

Thanks for pointing that out.

I think that handling a NIL list in get_object_address_unqualified(),
get_object_address_unqualified() and pg_get_object_address() doesn't
really look good. Intuitively, a NIL list indicates that no object was
specified, hence those checks.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Declarative partitioning - another take

2017-01-03 Thread Amit Langote
On 2017/01/03 19:04, Rajkumar Raghuwanshi wrote:
> On Tue, Dec 27, 2016 at 3:24 PM, Amit Langote wrote:
>>
>> Attached patch should fix the same.
> 
> I have applied attached patch, server crash for range is fixed, but still
> getting crash for multi-level list partitioning insert.
> 
> postgres=# CREATE TABLE test_ml_l (a int, b int, c varchar) PARTITION BY
> LIST(c);

[ ... ]

> postgres=# INSERT INTO test_ml_l SELECT i, i, to_char(i/50, 'FM') FROM
> generate_series(0, 599, 2) i;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

Hm, that's odd.  I tried your new example, but didn't get the crash.

Thanks,
Amit






Re: [HACKERS] Add support to COMMENT ON CURRENT DATABASE

2017-01-03 Thread Ashutosh Bapat
On Tue, Jan 3, 2017 at 9:18 PM, Peter Eisentraut
 wrote:
> On 12/30/16 9:28 PM, Fabrízio de Royes Mello wrote:
>> The attached patch is reworked from a previous one [1] to better deal
>> with get_object_address and pg_get_object_address.
>>
>> Regards,
>>
>> [1] https://www.postgresql.org/message-id/20150317171836.gc10...@momjian.us
>
> The syntax we have used for the user/role case is ALTER USER
> CURRENT_USER, not ALTER CURRENT USER, so this should be done in the same
> way for databases.  Eventually, we'll also want ALTER DATABASE
> current_database, and probably not ALTER CURRENT DATABASE, etc.

We don't allow a user to be created as CURRENT_USER, but we do allow a
database to be created with the name current_database:
postgres=# create user current_user;
ERROR:  CURRENT_USER cannot be used as a role name here
LINE 1: create user current_user;
^
postgres=# create database current_database;
CREATE DATABASE

We will need to make CURRENT_DATABASE a reserved keyword. But I like
this idea more than COMMENT ON CURRENT DATABASE.


-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Add support to COMMENT ON CURRENT DATABASE

2017-01-03 Thread Ashutosh Bapat
On Tue, Jan 3, 2017 at 6:40 PM, Fabrízio de Royes Mello
 wrote:
> Hi Ashutosh,
>
> First of all thanks for your review.
>
>
> On Tue, Jan 3, 2017 at 3:06 AM, Ashutosh Bapat
>  wrote:
>>
>> The patch has white space error
>> git apply /mnt/hgfs/tmp/comment_on_current_database_v1.patch
>> /mnt/hgfs/tmp/comment_on_current_database_v1.patch:52: trailing
>> whitespace.
>>  * schema-qualified or catalog-qualified.
>> warning: 1 line adds whitespace errors.
>>
>
> I'll fix.
>
>
>> The patch compiles clean, regression is clean. I tested auto
>> completion of current database, as well pg_dumpall output for comments
>> on multiple databases. Those are working fine. Do we want to add a
>> testcase for testing the SQL functionality as well as pg_dumpall
>> functionality?
>>
>
> While looking into the src/test/regress/sql files I didn't find any test
> cases for shared objects (databases, roles, tablespaces, procedural
> languages, ...).


Right. There's comment.sql, but it's about comments in the language and
not comments on database objects. It looks like COMMENT ON for various
objects is tested in the test files for those objects, and there aren't
any tests covering databases, e.g. ALTER DATABASE, in the sql directory.

> Should we need also add test cases for this kind of
> objects?
>
Possibly we don't have those testcases because they would affect
existing objects when run through "make installcheck". But the current
database is always going to be "regression" for these tests, and that
database is dropped when installcheck starts. So we can add a testcase
for it, but I am not sure where we should add it. Creating a new file
just for COMMENT ON DATABASE doesn't look good.

>
>> Instead of changing get_object_address_unqualified(),
>> get_object_address_unqualified() and pg_get_object_address(), should
>> we just stick get_database_name(MyDatabaseId) as object name in
>> gram.y? That would be much cleaner than special handling of NIL list.
>> It looks dubious to set that list as NIL when the target is some
>> object. If we do that, we will spend extra cycles in finding database
>> id which is ready available as MyDatabaseId, but the code will be
>> cleaner. Another possibility is to use objnames to store the database
>> name and objargs to store the database id. That means we overload
>> objargs, which doesn't looks good either.
>>
>
> In the previous thread Alvaro point me out about a possible DDL deparse
> inconsistency [1] and because this reason we decide to think better and
> rework this patch.
>
> I confess I'm not too happy with this code yet, and thinking better maybe we
> should create a new object type called OBJECT_CURRENT_DATABASE to handle it
> different than OBJECT_DATABASE. Opinions???
>

Please read my reply to Robert's mail.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] [PATCH] PostgresNode.pm enhancements, pg_lsn helper, and some more recovery tests

2017-01-03 Thread Craig Ringer
On 3 January 2017 at 12:36, Craig Ringer  wrote:
> On 2 January 2017 at 20:17, Simon Riggs  wrote:
>
>> Bit confused... can't see a caller for wait_for_slot_catchup() and the
>> slot tests don't call it AFAICS.
>
> The recovery tests for decoding on standby will use it. I can delay
> adding it until then.
>
>> Also can't see anywhere the LSN stuff is used either.
>
> Removed after discussion with Michael in the logical decoding on standby 
> thread.
>
> I'll be posting a new patch series there shortly, which removes
> pg_lsn.pm and modifies the PostgresNode.pm changes per discussion. If
> you're able to commit the updated PostgresNode.pm and new standby
> tests that'd be very handy.

To keep things together, I've followed up on the logical decoding on
standby thread that now incorporates all these patches.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Assignment of valid collation for SET operations on queries with UNKNOWN types.

2017-01-03 Thread Ashutosh Bapat
On Tue, Jan 3, 2017 at 5:57 PM, Rahila Syed  wrote:
> Thank you all for inputs.
> Kindly help me clarify the scope of the patch.
>
>>However, I thought the idea was to silently coerce affected columns from
>>unknown to text.  This doesn't look like the behavior we want:
>
> This patch prevents creation of relation with unknown columns and
> in addition fixes the particular case of CREATE VIEW with literal columns
> by coercing unknown to text only in this particular case.
>
> Are you suggesting extending the patch to include coercing from unknown to
> text for all possible cases where a column of unknown type is being created?
>

I guess that's what Tom is suggesting.

We need to do something to avoid throwing an error in cases which do
not throw an error in earlier releases.

We may just leave that warning as a warning for now and tackle only the
view case in this patch. I would like everything to be done in one
patch rather than in this piecemeal approach, but delaying the fix for
views because the other pieces take longer to work on may not be good
either.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Potential data loss of 2PC files

2017-01-03 Thread Ashutosh Bapat
On Tue, Jan 3, 2017 at 5:38 PM, Michael Paquier
 wrote:
> On Tue, Jan 3, 2017 at 8:41 PM, Ashutosh Bapat
>  wrote:
>> Are you talking about
>> /*
>>  * Now we can mark ourselves as out of the commit critical section: a
>>  * checkpoint starting after this will certainly see the gxact as a
>>  * candidate for fsyncing.
>>  */
>> MyPgXact->delayChkpt = false;
>>
>> That's while creating the file. I do not see similar code in
>> FinishPreparedTransaction() where the 2PC file is removed.
>
> RecordTransactionCommitPrepared() does the same work for COMMIT PREPARED.

Ok.

I don't have anything more to review in this patch. I will leave the
commitfest entry in "needs review" status for a few days in case anyone
else wants to review it. If no one is going to review it, we can mark
it as "ready for committer".

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Logical decoding on standby

2017-01-03 Thread Craig Ringer
On 4 January 2017 at 12:08, Craig Ringer  wrote:
>
> 0001 incorporates the changes requested by Michael Paquier. Simon
> expressed his intention to commit this after updates, in the separate
> thread [...]

...

> 0005 (new streaming rep tests) is updated for the changes in 0001,
> otherwise unchanged. Simon said he wanted to commit this soon.
>
> 0006 (timeline following) is unchanged except for updates to be
> compatible with 0001.
>
> 0007 is the optional snapshot export requested by Petr.
>
> 0008 is unchanged.
>
> 0009 is unchanged except for updates vs 0001 and use of the
> WITHOUT_SNAPSHOT option added in 0007.

Oh, note that it's possible to commit 0001 then 0005, skipping over
2..4. I should probably have ordered them that way.

That's particularly relevant to you Simon as you expressed a wish to
commit the new streaming rep tests.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] gettimeofday is at the end of its usefulness?

2017-01-03 Thread Haribabu Kommi
On Fri, Dec 30, 2016 at 1:02 PM, Tom Lane  wrote:

> Haribabu Kommi  writes:
> > Attached is a patch that replaces most of the gettimeofday function
> > calls, except timeofday() (user callable) and the GetCurrentTimestamp
> > function.
>
> I looked at this for awhile and could not convince myself that it's
> a good idea.  Trying to do s/gettimeofday/clock_gettime/g is not going
> to do much for us except create portability headaches.  According
> to my tests, clock_gettime is not noticeably faster than gettimeofday
> on any platform, except that if you use nonstandard clockids like
> CLOCK_REALTIME_COARSE then on *some* platforms it's a little bit quicker,
> at the cost of being a great deal less precise.  But we'd have to research
> the existence and effects of nonstandard clockids on every platform.
> So AFAICS the only clear advantage to switching is the extra precision
> available from clock_gettime.
>
> But ... most of the places you've touched in this patch have neither any
> need for sub-microsecond precision nor any great need to worry about
> shaving a few ns off the time taken by the call.  As far as I can find,
> the only place where it's actually worth our trouble to deal with it is
> instr_time.h (ie, EXPLAIN ANALYZE and a few other uses).
>
> So I think we should do something more like the attached.
>

Thanks for your valuable input.

As gettimeofday() is obsolete and any further enhancements are likely
to happen only to clock_gettime(), I changed it in many places.

Yes, I agree that unless clock_gettime() performs faster than
gettimeofday() on all platforms, we can retain gettimeofday().

Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] pg_hba_file_settings view patch

2017-01-03 Thread Haribabu Kommi
On Tue, Nov 29, 2016 at 9:15 PM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:

> Here's backtrace and some debugging information
> Program terminated with signal 11, Segmentation fault.
> #0  0x007f96cd in shm_mq_sendv (mqh=0x121e998,
> iov=0x7ffc9b7b79f0, iovcnt=2, nowait=1 '\001') at shm_mq.c:357
> 357Assert(mq->mq_sender == MyProc);
> (gdb) where
> #0  0x007f96cd in shm_mq_sendv (mqh=0x121e998,
> iov=0x7ffc9b7b79f0, iovcnt=2, nowait=1 '\001') at shm_mq.c:357
> #1  0x006d8387 in mq_putmessage (msgtype=88 'X', s=0x0, len=0)
> at pqmq.c:165
> #2  0x00515147 in ParallelWorkerMain (main_arg=141900502) at
> parallel.c:1120
> #3  0x00783063 in StartBackgroundWorker () at bgworker.c:726
> #4  0x00795b77 in do_start_bgworker (rw=0x1216f00) at
> postmaster.c:5535
> #5  0x00795e4f in maybe_start_bgworker () at postmaster.c:5710
> #6  0x00794eb3 in sigusr1_handler (postgres_signal_arg=10) at
> postmaster.c:4959
> #7  
> #8  0x2b005933a693 in select () from /lib/x86_64-linux-gnu/libc.so.6
> #9  0x00790720 in ServerLoop () at postmaster.c:1665
> #10 0x0078fe76 in PostmasterMain (argc=8, argv=0x11eef40) at
> postmaster.c:1309
> #11 0x006d8f1d in main (argc=8, argv=0x11eef40) at main.c:228
> (gdb) p mq->mq_sender
> Cannot access memory at address 0x6b636568635f707d
> (gdb) p mq
> $1 = (shm_mq *) 0x6b636568635f706d
>
> Looking at this, I am wondering, how could that happen with your
> patches. But every time I have tried to apply your patches and run
> regression, I get this crash. Just now I tried the patches on a all
> new repository and reproduced the crash.


I was also able to reproduce the crash once, but I didn't find the
reason why it leads to a crash when I change the loading of the hba and
ident files to happen under the current memory context instead of the
postmaster context.


>> Reused the error string once, as in this patch it changes in many places
>> compared
>> to pg_file_settings, so I tend to reuse it.
>
>Thanks. Although the new change might affect the way we translate the
>messages in other languages. I am not sure. So, I will leave it for
>someone with more knowledge to review.

There is no problem with the translation, because I kept those messages
under _(), so translations will pick them up.

>What we may want to do, is separate the logic of reading the hba rules
>in a list and the logic to update existing rules in two different
>functions e.g read_hba() and load_hba(). hba_rules() can use
>read_hba() with ignore errors flag to get the list of hba lines. It
>can then use this list to create tuples to be returned in hba_rules().
>load_hba() will read_hba() with memory reset on error flag (same
>boolean) to read the list of hba lines and update the saved rules if
>there's no error. err_msg can be either a field in HbaLine, which will
>be used only by hba_rules() OR read_hba() could return list of
>err_msgs as a pass by ref argument.

Because of the above context problem, I just need some part of the code
to read pg_hba.conf under the current memory context, so I moved the
logic into a separate function that reads the hba rules under the
current memory context.

Latest patch is attached.

Regards,
Hari Babu
Fujitsu Australia


pg_hba_rules_6.patch
Description: Binary data


pg_hba_rules_tap_tests_2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Odd behavior with PG_TRY

2017-01-03 Thread Amit Kapila
On Wed, Jan 4, 2017 at 3:47 AM, Jim Nasby  wrote:
> On 1/2/17 9:47 PM, Tom Lane wrote:
>> Correct coding would be
>>
>> volatile TupleDesc  desc = slot->tts_tupleDescriptor;
>> CallbackState * volatile myState = (CallbackState *) self;
>> PLyTypeInfo * volatile args = myState->args;
>>
>> because what needs to be marked volatile is the pointer variable,
>> not what it points at.  I'm a bit surprised you're not getting
>> "cast away volatile" warnings from the code as you have it.
>
>
> Unfortunately, that didn't make a difference. Amit's suggestion of isolating
> the single statement in a PG_TRY() didn't work either, but assigning
> args->in.r.atts[i] to a pointer did.
>

Good to know that it worked, but what is the theory?  From your
experiment, it appears that in some cases accessing local pointer
variables is okay and in other cases, it is not okay.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical decoding - filtering tables

2017-01-03 Thread Craig Ringer
On 3 January 2017 at 21:32, valeriof  wrote:
> Craig Ringer-3 wrote
>> Take a look at how pglogical does it in its replication set handling
>> and relation metadata cache.
>
> I checked it out, but from what I understand it uses the inline parameter.

It specifies which replication sets to use with logical decoding parameters.

Those are in turn looked up in user-catalog tables to find mappings of
relations to replication sets.

A cache is used to avoid reading those tables for every tuple.

> Would it be possible to store this info in some config table and then run a
> select from inside the plugin?

Yes, that's what pglogical does.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Shrink volume of default make output

2017-01-03 Thread Craig Ringer
On 3 January 2017 at 05:37, Jim Nasby  wrote:
> The recent thread about compiler warnings got me thinking about how it's
> essentially impossible to notice warnings with default make output. Perhaps
> everyone just uses make -s by default, though that's a bit annoying since
> you get no output unless something does warn (and then you don't know what
> directory it was in).

For that latter reason I'd love to be rid of recursive make. But it's
not that bad since we don't have many files of the same names; I just
find myself using vim's ctrlp a lot.

Personally I'd rather let it lie, use 'make -s' and wait for cmake.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-03 Thread Lewis, Ian (Microstar Laboratories)
Tom Lane [mailto:t...@sss.pgh.pa.us] wrote:
>> 2. If the folding mode is chosen through a GUC variable, which
>> is certainly what people would expect, then it turns out that
>> it breaks client libraries/applications *anyway*, because an
>> installation-wide setting could impose itself on a client that
>> hadn't asked for it.

I know that some variables can only be configured at a wide scope, and
not a narrow one. Is there no way to restrict a GUC variable's
configuration scope to session and finer, but force a fixed value at
global scope? 

If it is possible to restrict global configuration, that at least
protects the general purpose administrative tools to a significant
degree.

>> And for libraries, that isn't a great solution because then they're
incompatible with applications that wanted another setting.

Good point. Libraries continue to have problems even with session level
configuration if they are to operate in the context of an application
that reconfigures its session case folding for its own purposes. 

But, that seems like a problem that is much more likely to affect
developers of new systems rather than general users or administrators of
existing database systems. 

If so, it is more of a forward looking problem than a legacy problem in
the sense that the person who encounters it is likely to be in a
position to do something about it. This makes it much less critical to
get every library in the world updated to support all case folding modes
than would be the case for general administrative tools like pgAdmin.

Depending on the nature of the library, a developer would have the
option of using multiple sessions or, perhaps, if it were possible,
modifying the folding configuration when using the library. 

Anyhow, as you say, libraries clearly continue to have issues even with
restricted scope on case folding configuration.

And the session level idea really helps nothing unless the global
default session configuration is fixed.

Ian Lewis (www.mstarlabs.com)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: pg_authid.rolpassword format (was Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol)

2017-01-03 Thread Michael Paquier
On Tue, Jan 3, 2017 at 11:09 PM, Heikki Linnakangas  wrote:
> Since not everyone agrees with this approach, I split this patch into two.
> The first patch refactors things, replacing the isMD5() function with
> get_password_type(), without changing the representation of
> pg_authid.rolpassword. That is hopefully uncontroversial. And the second
> patch adds the "plain:" prefix, which not everyone agrees on.
>
> Barring objections I'm going to at least commit the first patch. I think we
> should commit the second one too, but it's not as critical, and the first
> patch matters more for the SCRAM patch, too.

The split does not look correct to me. 0001 has references to the
prefix "plain:".
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-03 Thread Tom Lane
"Lewis, Ian \(Microstar Laboratories\)"  writes:
> One idea, which would likely be harder to implement on the server, but
> that would have less impact on third party tools and libraries, would be
> to configure case folding on a session basis.

There are a couple of problems even with that:

1. All the sessions have to share the same catalog state, which greatly
restricts what you could do.

2. If the folding mode is chosen through a GUC variable, which is
certainly what people would expect, then it turns out that it breaks
client libraries/applications *anyway*, because an installation-wide
setting could impose itself on a client that hadn't asked for it.
So you have to update everything at least to the extent of teaching it
to turn off setting X when talking to server versions >= Y.  And for
libraries, that isn't a great solution because then they're incompatible
with applications that wanted another setting.  The notion that the
client side is a monolithic chunk doesn't withstand scrutiny; really
there's usually a stack of code over there, and it all has to cope
with the SQL semantics we expose.

Point #2 was really the lesson that we learned the hard way with the
autocommit fiasco.  We'd thought going into it that client-side code
could be updated only when somebody wanted to use the new behavior,
and it took awhile to absorb the fact that much code would be forced
to deal with the behavior whether it wanted to or not.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unusable SP-GiST index

2017-01-03 Thread Tom Lane
I wrote:
> After looking a bit at gist and sp-gist, neither of them would find that
> terribly convenient; they really want to create one blob of memory per
> index entry so as to not complicate storage management too much.  But
> they'd be fine with making that blob be a HeapTuple not IndexTuple.
> So maybe the right approach is to expand the existing API to allow the
> AM to return *either* a heap or index tuple; that could be made to not
> be an API break.

Here's a draft patch along those lines.  With this approach, btree doesn't
need to be touched at all, since what it's returning certainly is an
IndexTuple anyway.  I fixed both SPGIST and GIST to use HeapTuple return
format.  It's not very clear to me whether GIST has a similar hazard with
very large return values, but it might, and it's simple enough to change.

regards, tom lane

diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml
index 40f201b..d4af010 100644
*** a/doc/src/sgml/indexam.sgml
--- b/doc/src/sgml/indexam.sgml
*** amgettuple (IndexScanDesc scan,
*** 535,549 

 If the index supports index-only
 scans (i.e., amcanreturn returns TRUE for it),
!then on success the AM must also check
!scan-xs_want_itup, and if that is true it must return
!the original indexed data for the index entry, in the form of an
 IndexTuple pointer stored at scan-xs_itup,
!with tuple descriptor scan-xs_itupdesc.
!(Management of the data referenced by the pointer is the access method's
 responsibility.  The data must remain good at least until the next
 amgettuple, amrescan, or amendscan
!call for the scan.)

  

--- 535,553 

 If the index supports index-only
 scans (i.e., amcanreturn returns TRUE for it),
!then on success the AM must also check scan-xs_want_itup,
!and if that is true it must return the originally indexed data for the
!index entry.  The data can be returned in the form of an
 IndexTuple pointer stored at scan-xs_itup,
!with tuple descriptor scan-xs_itupdesc; or in the form of
!a HeapTuple pointer stored at scan-xs_hitup,
!with tuple descriptor scan-xs_hitupdesc.  (The latter
!format should be used when reconstructing data that might possibly not fit
!into an IndexTuple.)  In either case,
!management of the data referenced by the pointer is the access method's
 responsibility.  The data must remain good at least until the next
 amgettuple, amrescan, or amendscan
!call for the scan.

  

diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index eea366b..122dc38 100644
*** a/src/backend/access/gist/gistget.c
--- b/src/backend/access/gist/gistget.c
*** gistScanPage(IndexScanDesc scan, GISTSea
*** 441,452 
  			so->pageData[so->nPageData].offnum = i;
  
  			/*
! 			 * In an index-only scan, also fetch the data from the tuple.
  			 */
  			if (scan->xs_want_itup)
  			{
  oldcxt = MemoryContextSwitchTo(so->pageDataCxt);
! so->pageData[so->nPageData].ftup =
  	gistFetchTuple(giststate, r, it);
  MemoryContextSwitchTo(oldcxt);
  			}
--- 441,453 
  			so->pageData[so->nPageData].offnum = i;
  
  			/*
! 			 * In an index-only scan, also fetch the data from the tuple.  The
! 			 * reconstructed tuples are stored in pageDataCxt.
  			 */
  			if (scan->xs_want_itup)
  			{
  oldcxt = MemoryContextSwitchTo(so->pageDataCxt);
! so->pageData[so->nPageData].recontup =
  	gistFetchTuple(giststate, r, it);
  MemoryContextSwitchTo(oldcxt);
  			}
*** gistScanPage(IndexScanDesc scan, GISTSea
*** 478,484 
   * In an index-only scan, also fetch the data from the tuple.
   */
  if (scan->xs_want_itup)
! 	item->data.heap.ftup = gistFetchTuple(giststate, r, it);
  			}
  			else
  			{
--- 479,485 
   * In an index-only scan, also fetch the data from the tuple.
   */
  if (scan->xs_want_itup)
! 	item->data.heap.recontup = gistFetchTuple(giststate, r, it);
  			}
  			else
  			{
*** getNextNearest(IndexScanDesc scan)
*** 540,550 
  	bool		res = false;
  	int			i;
  
! 	if (scan->xs_itup)
  	{
  		/* free previously returned tuple */
! 		pfree(scan->xs_itup);
! 		scan->xs_itup = NULL;
  	}
  
  	do
--- 541,551 
  	bool		res = false;
  	int			i;
  
! 	if (scan->xs_hitup)
  	{
  		/* free previously returned tuple */
! 		pfree(scan->xs_hitup);
! 		scan->xs_hitup = NULL;
  	}
  
  	do
*** getNextNearest(IndexScanDesc scan)
*** 601,607 
  
  			/* in an index-only scan, also return the reconstructed tuple. */
  			if (scan->xs_want_itup)
! scan->xs_itup = item->data.heap.ftup;
  			res = true;
  		}
  		else
--- 602,608 
  
  			/* in an index-only scan, also return the reconstructed tuple. */
  			if (scan->xs_want_itup)
! scan->xs_hitup = item->data.heap.recontup;
  			res = true;
  		}
  		else
*** 

Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2017-01-03 Thread Peter Geoghegan
On Tue, Dec 20, 2016 at 5:14 PM, Peter Geoghegan  wrote:
>> Imagine a data structure that is stored in dynamic shared memory and
>> contains space for a filename, a reference count, and a mutex.  Let's
>> call this thing a SharedTemporaryFile or something like that.  It
>> offers these APIs:
>>
>> extern void SharedTemporaryFileInitialize(SharedTemporaryFile *);
>> extern void SharedTemporaryFileAttach(SharedTemporary File *, dsm_segment 
>> *seg);
>> extern void SharedTemporaryFileAssign(SharedTemporyFile *, char *pathname);
>> extern File SharedTemporaryFileGetFile(SharedTemporaryFile *);
>
> I'm a little bit tired right now, and I have yet to look at Thomas'
> parallel hash join patch in any detail. I'm interested in what you
> have to say here, but I think that I need to learn more about its
> requirements in order to have an informed opinion.

Attached is V7 of the patch. The overall emphasis with this revision
is on bringing clarity on how much can be accomplished using
generalized infrastructure, explaining the unification mechanism
coherently, and related issues.

Notable changes
---

* Rebased to work with the newly simplified logtape.c representation
(the recent removal of "indirect blocks" by Heikki). Heikki's work was
something that helped with simplifying the whole unification
mechanism, to a significant degree. I think that there was over a 50%
reduction in logtape.c lines of code in this revision.

* randomAccess cases are now able to reclaim disk space from blocks
originally written by workers. This further simplifies logtape.c
changes significantly. I don't think that this is important because
some future randomAccess caller might otherwise have double the
storage overhead for their parallel sort, or even because of the
disproportionate performance penalty such a caller would experience;
rather, it's important because it removes previous special cases (that
were internal to logtape.c).

For example, aside from the fact that worker tapes within a unified
tapeset will often have a non-zero offset, there is no state that
actually remembers that this is a unified tapeset, because that isn't
needed anymore. And, even though we reclaim blocks from workers, we
only have one central chokepoint for applying worker offsets in the
leader (that chokepoint is ltsReadFillBuffer()). Routines tasked with
things like positional seeking for mark/restore for certain tuplesort
clients (which are, in general, poorly tested) now need to have no
knowledge of unification while still working just the same. This is a
consequence of the fact that ltsWriteBlock() callers (and
ltsWriteBlock() itself) never have to think about offsets. I'm pretty
happy about that.

* pg_restore now prevents the planner from deciding that parallelism
should be used, in order to make restoration behavior more consistent
and predictable. Iff a dump being restored happens to have a CREATE
INDEX with the new index storage parameter parallel_workers set, then
pg_restore will use parallel CREATE INDEX. This is accomplished with a
new GUC, enable_parallelddl (since "max_parallel_workers_maintenance =
0" will disable parallel CREATE INDEX across the board, ISTM that a
second new GUC is required). I think that this behavior the right
trade-off for pg_restore goes, although I still don't feel
particularly strongly about it. There is now a concrete proposal on
what to do about pg_restore, if nothing else. To recap, the general
concern address here is that there are typically no ANALYZE stats
available for the planner to decide with when pg_restore runs CREATE
INDEX, although that isn't always true, which was both surprising and
inconsistent.

* Addresses the problem of anonymous record types and their need for
"remapping" across parallel workers. I've simply pushed the
responsibility on callers within tuplesort.h contract; parallel CREATE
INDEX callers don't need to care about this, as explained there.
(CLUSTER tuplesorts would also be safe.)

* Puts the whole rationale for unification into one large comment
above the function BufFileUnify(), and removes traces of the same kind
of discussion from everywhere else. I think that buffile.c is the
right central place to discuss the unification mechanism, now that
logtape.c has been greatly simplified. All the fd.c changes are in
routines that are only ever called by buffile.c anyway, and are not
too complicated (in general, temp fd.c files are only ever owned
transitively, through BufFiles). So, morally, the unification
mechanism is something that wholly belongs to buffile.c, since
unification is all about temp files, and buffile.h is the interface
through which temp files are owned and accessed in general, without
exception.

Unification remains specialized
---

On the one hand, BufFileUnify() now describes the whole idea of
unification in detail, in its own general terms, including its
performance characteristics, but on the other hand it 

Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-03 Thread Lewis, Ian (Microstar Laboratories)
Robert Haas [mailto:robertmh...@gmail.com] wrote:
> The issue is, rather, that every extension written for
> PostgreSQL, whether in or out of core, needs to handle this issue and
> every general-purpose client tool (pgAdmin, etc.) needs to be aware of
> it. 

I can see the accuracy of all of the points you make here. And, I
definitely had not thought through the side effects on support tools and
third party libraries of implementing such modal behavior on the server
when I originally asked my question. I did not even understand the
ramifications of upper case folding on the server until Tom pointed out
the earlier conversations on the subject (in my defense, I was not
confused enough to think I had thought through all the effects of a
fundamental change to language recognition based on writing one e-mail
message). 

A fully case sensitive mode, leaving the server catalogs all in lower
case, which is what we would really like to have for our use, still
looks pretty easy to implement on the server. And, it would at least
behave consistently with the lower case folding mode if one quoted all
identifiers, unlike a case preserving, case insensitive mode.

One idea, which would likely be harder to implement on the server, but
that would have less impact on third party tools and libraries, would be
to configure case folding on a session basis. There would have to be
some means to configure a session for the case folding your application
wants to see. And, the general default would have to be the current
PostgreSQL behavior so that an application that was designed for current
behavior would never see a change. 

While not quite obvious to me how one would implement this for all
client environments, it would make such a feature more useful if it
included a means to make the configuration outside of the scope of an
application itself so that one could give an application over which one
has no control the behavior it expects. That is, provide a means to
configure a specific application's session default behavior on the
client. But, provide no means to configure the server's general default
behavior so that the server itself is never modal with respect to case
folding. Only the client session is modal.

It is pretty easy to see the pain of adding symbol case folding modes.
On the other hand, there is no way to know exactly the gain (or loss) in
adoption from providing alternate case folding. So, you have one fact (the
pain) and one speculation (the gain). I can see that makes deciding
whether this is a good or bad idea for the project not at all easy. 

Anyhow, I appreciate the time you, and others, have taken to explain
your thinking and the impacts of adding modal case folding to the
server. 

Ian Lewis (www.mstarlabs.com)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Measuring replay lag

2017-01-03 Thread Thomas Munro
On Wed, Jan 4, 2017 at 12:22 PM, Thomas Munro
 wrote:
> (replay_lag - (write_lag / 2) may be a cheap proxy
> for a lag time that doesn't include the return network leg, and still
> doesn't introduce clock difference error)

(Upon reflection it's a terrible proxy for that because of the mix of
write/flush work done by WAL receiver today, but would improve
dramatically if the WAL writer were doing the flushing.  A better yet
proxy might involve also tracking receive_lag which doesn't include
the write() syscall.  My real point is that there are ways to work
backwards from the two-way round trip time to get other estimates, but
no good ways to undo the damage that would be done to the data if we
started using two systems' clocks.)

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Measuring replay lag

2017-01-03 Thread Thomas Munro
On Wed, Jan 4, 2017 at 12:22 PM, Thomas Munro
 wrote:
> The patch streams (time-right-now, end-of-wal) to the standby in every
> outgoing message, and then sees how long it takes for those timestamps
> to be fed back to it.

Correction: we already stream (time-right-now, end-of-wal) to the
standby in every outgoing message.  The patch introduces a new use of
that information by feeding them back upstream.

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Measuring replay lag

2017-01-03 Thread Thomas Munro
On Wed, Jan 4, 2017 at 1:06 AM, Simon Riggs  wrote:
> On 21 December 2016 at 21:14, Thomas Munro
>  wrote:
>> I thought about that too, but I couldn't figure out how to make the
>> sampling work.  If the primary is choosing (LSN, time) pairs to store
>> in a buffer, and the standby is sending replies at times of its
>> choosing (when wal_receiver_status_interval has been exceeded), then
>> you can't accurately measure anything.
>
> Skipping adding the line delay to this was very specifically excluded
> by Tom, so that clock disparity between servers is not included.
>
> If the balance of opinion is in favour of including a measure of
> complete roundtrip time then I'm OK with that.

I deliberately included the network round trip for two reasons:

1.  The three lag numbers tell you how long syncrep would take to
return control at the three levels remote_write, on, remote_apply.
2.  The time arithmetic is all done on the primary side using two
observations of its single system clock, avoiding any discussion of
clock differences between servers.

You can always subtract half the ping time from these numbers later if
you really want to (replay_lag - (write_lag / 2) may be a cheap proxy
for a lag time that doesn't include the return network leg, and still
doesn't introduce clock difference error).  I am strongly of the
opinion that time measurements made by a single observer are better
data to start from.

>> You could fix that by making the standby send a reply *every time* it
>> applies some WAL (like it does for transactions committing with
>> synchronous_commit = remote_apply, though that is only for commit
>> records), but then we'd be generating a lot of recovery->walreceiver
>> communication and standby->primary network traffic, even for people
>> who don't otherwise need it.  It seems unacceptable.
>
> I don't see why that would be unacceptable. If we do it for
> remote_apply, why not also do it for other modes? Whatever the
> reasoning was for remote_apply should work for other modes. I should
> add it was originally designed to be that way by me, so must have been
> changed later.

You can achieve that with this patch by setting
replication_lag_sample_interval to 0.

The patch streams (time-right-now, end-of-wal) to the standby in every
outgoing message, and then sees how long it takes for those timestamps
to be fed back to it.  The standby feeds them back immediately as soon
as it writes, flushes and applies those WAL positions.  I figured it
would be silly if every message from the primary caused the standby
to generate 3 replies from the standby just for a monitoring feature,
so I introduced the GUC replication_lag_sample_interval to rate-limit
that.  I don't think there's much point in setting it lower than 1s:
how often will you look at pg_stat_replication?

>> That's why I thought that the standby should have the (LSN, time)
>> buffer: it decides which samples to record in its buffer, using LSN
>> and time provided by the sending server, and then it can send replies
>> at exactly the right times.  The LSNs don't have to be commit records,
>> they're just arbitrary points in the WAL stream which we attach
>> timestamps to.  IPC and network overhead is minimised, and accuracy is
>> maximised.
>
> I'm dubious of keeping standby-side state, but I will review the patch.

Thanks!

The only standby-side state is the three buffers of (LSN, time) that
haven't been written/flushed/applied yet.  I don't see how that can be
avoided, except by inserting extra periodic timestamps into the WAL
itself, which has already been rejected.

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replication/backup defaults

2017-01-03 Thread Tomas Vondra

On 01/03/2017 01:34 PM, Michael Paquier wrote:

On Mon, Jan 2, 2017 at 10:55 PM, Simon Riggs  wrote:

In the hope of making things better in 10.0, I remove my objection.
If people want to use wal_level = minimal they can restart their
server and they can find that out in the release notes.

Should we set wal_level = replica or wal_level = logical as the
default for 10.0?


replica sounds like a better default to me as most users use at
least archiving. Logical decoding is still fresh though, and its use
is not that wide. Have there been any studies on its performance
impact compared to replica, by the way?



I've just posted results for some benchmarks I'm running. Those are some 
simple pgbench tests, nothing special, and according to those results 
the performance impact (logical vs. replica) is negligible. I can run 
additional tests with other workloads, of course.


While we can probably construct workloads where the difference is 
measurable, I'm not sure performance impact is the main concern here. As 
you point out, we have 'replica' since 9.0 effectively, while logical is 
much newer, so perhaps there are some hidden bugs? It'd be embarrassing 
to pick 'logical' and hurt everyone, even if they don't get any benefit 
from wal_level=logical.


So +1 to 'replica' and allowing switching to 'logical' without restart. 
That should not be extremely difficult, as the main culprit seems to be 
max_wal_senders/max_replication_slots requiring shared memory. But with 
'replica' we already have those enabled/allocated, unlike when switching 
from 'minimal' to 'replica'.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replication/backup defaults

2017-01-03 Thread Tomas Vondra

Hi,

On 12/31/2016 04:00 PM, Magnus Hagander wrote:

Cycling back to this topic again, but this time at the beginning of a CF.

Here's an actual patch to change:


wal_level=replica
max_wal_senders=10
max_replication_slots=20

Based on feedback from last year
(https://www.postgresql.org/message-id/CABUevEwfV7zDutescm2PHGvsJdYA0RWHFMTRGhwrJPGgSbzZDQ%40mail.gmail.com):


There were requests for benchmarks of performance difference. Tomas
has promised to run a couple of benchmarks on his standard
benchmarking setups to give numbers on that. Thanks Tomas, please
pipe in with your results when you have them!

>

As promised, I'm running some benchmarks, and I have some early results 
to report. And perhaps we can discuss whether we need to test some 
additional workloads.


I'm 100% on board with the idea that we should switch to wal_level which 
allows taking backups or setting-up a streaming replica, as long as it 
does not cause severe performance regression in common workloads.


So while it'd be trivial to construct workloads demonstrating the 
optimizations in wal_level=minimal (e.g. initial loads doing CREATE 
TABLE + COPY + CREATE INDEX in a single transaction), that would be 
mostly irrelevant, I guess.


Instead, I've decided to run regular pgbench TPC-B-like workload on a 
bunch of different scales, and measure throughput + some xlog stats with 
each of the three wal_level options.


Note: I tweaked the code a bit to allow archiving with "minimal" WAL 
level, to allow computing WAL stats on the archived segments (instead of 
keeping all segments in the data directory).


As usual, I'm running it on two machines - a small old one (i5-2500k box 
with 4 cores and 8GB of RAM) and a new one (2x e5-2620v4 with 16/32 
cores, 64GB of RAM). Both machines have SSD-based storage.


The clusters on both machines were reasonably tuned, see 'settings.log' 
for each run. The tests are fairly long, covering multiple checkpoints 
etc. In other words, the results should be fairly stable.


The scripts/results/stats/configs are available here:

* https://bitbucket.org/tvondra/wal-levels-e2620-v4/src
* https://bitbucket.org/tvondra/wal-levels-i5/src

So far I only have results for the smallest data sets (50 on i5 and 100 
on e5), which easily fits into shared_buffers in both cases, and the 
numbers look like this:


              minimal   replica   standby
  i5-2500k       5884      5896      5873
  e5-2620v4     23968     24393     24259

So the performance penalty of replica/standby WAL levels on this 
workload is pretty much non-existent - for the larger machine those 
levels are actually a tad faster than 'minimal', but the difference is 
within 2% (so might easily be noise).


I'll push results for larger ones once those tests complete (possibly 
tomorrow).



regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-03 Thread Jim Nasby

On 1/2/17 2:01 PM, Greg Stark wrote:

The PostGIS extensions might not work on
your system with different case rules if they haven't been 100%
consistent with their camelCasing


FWIW I've already run into a similar problem with inter-extension 
dependencies and relocatability. I've found hacks to work around this 
during extension installation (ie: query pg_extension.extnamespace in 
the dependent extension build script), but if the other extension gets 
relocated you're hosed.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] Logical Replication WIP

2017-01-03 Thread Petr Jelinek
On 03/01/17 20:39, Peter Eisentraut wrote:
> In 0001-Add-PUBLICATION-catalogs-and-DDL-v16.patch.gz,
> 
> +static bool
> +is_publishable_class(Oid relid, Form_pg_class reltuple)
> +{
> +   return reltuple->relkind == RELKIND_RELATION &&
> +   !IsCatalogClass(relid, reltuple) &&
> +   reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
> +   /* XXX needed to exclude information_schema tables */
> +   relid >= FirstNormalObjectId;
> +}
> 
> I don't think the XXX part is necessary, because IsCatalogClass()
> already checks for the same thing.  (The whole thing is a bit bogus
> anyway, because you can drop and recreate the information schema at run
> time without restriction.)
>

I got this remark about IsCatalogClass() from Andres offline as well,
but it's not true, it only checks for FirstNormalObjectId for objects in
pg_catalog and toast schemas, not anywhere else.

> +#define MAX_RELCACHE_INVAL_MSGS 100
> +   List*relids = GetPublicationRelations(HeapTupleGetOid(tup));
> +
> +   /*
> +* We don't want to send too many individual messages, at some point
> +* it's cheaper to just reset whole relcache.
> +*
> +* XXX: the MAX_RELCACHE_INVAL_MSGS was picked arbitrarily, maybe
> +* there is better limit.
> +*/
> +   if (list_length(relids) < MAX_RELCACHE_INVAL_MSGS)
> 
> Do we have more data on this?  There are people running with 10
> tables, and changing a publication with a 1000 tables would blow all
> that away?
> 
> Maybe at least it should be set relative to INITRELCACHESIZE (400) to
> tie things together a bit?
> 

I am actually thinking this should correspond to MAXNUMMESSAGES (4096)
as that's the limit on the buffer size. I didn't find it the first time
around when I was looking for a good number.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Alvaro Herrera
Tom Lane wrote:

> A little bit of "git bisect"-ing later, the blame is pinned on
> 
> commit 9550e8348b7965715789089555bb5a3fda8c269c
> Author: Alvaro Herrera 
> Date:   Fri Apr 3 17:33:05 2015 -0300
> 
> Transform ALTER TABLE/SET TYPE/USING expr during parse analysis
> 
> This lets later stages have access to the transformed expression; in
> particular it allows DDL-deparsing code during event triggers to pass
> the transformed expression to ruleutils.c, so that the complete command
> can be deparsed.
> 
> This shuffles the timing of the transform calls a bit: previously,
> nothing was transformed during parse analysis, and only the
> RELKIND_RELATION case was being handled during execution.  After this
> patch, all expressions are transformed during parse analysis (including
> those for relkinds other than RELATION), and the error for other
> relation kinds is thrown only during execution.  So we do more work than
> before to reject some bogus cases.  That seems acceptable.
> 
> Of course, the reason why this work was postponed until execution was
> exactly because we wanted to do it over again for each child table.
>
> We could probably fix the specific issue being seen here by passing the
> expression tree through a suitable attno remapping,

Hmm, ouch.  I can look into fixing this starting tomorrow afternoon.

> but I am now filled with dread about how much of the event trigger
> code may be naively supposing that child tables have the same attnums
> as their parents.

I guess it's on me to figure that out.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Odd behavior with PG_TRY

2017-01-03 Thread Jim Nasby

On 1/2/17 9:47 PM, Tom Lane wrote:

Jim Nasby  writes:

In the attached patch (snippet below), I'm seeing something strange with
args->in.r.atts[].


Did you try comparing the apparent values of "args" before and after
entering PG_TRY?


Yeah, see below. FWIW, when I did that just now I stepped through at the 
instruction level, and things went belly up when I stepped into the 
first instruction of the for loop (the first instruction after PG_TRY was done).



I saw the comment on PG_TRY about marking things as volatile, but my
understanding from the comment is I shouldn't even need to do that,
since these variables aren't being modified.


Not sure, but if you do need to mark them volatile to prevent
misoptimization in the PG_TRY, this is not how to do it:


volatile TupleDesc  desc = slot->tts_tupleDescriptor;
volatile CallbackState *myState = (CallbackState *) self;
volatile PLyTypeInfo *args = myState->args;


Correct coding would be

volatile TupleDesc  desc = slot->tts_tupleDescriptor;
CallbackState * volatile myState = (CallbackState *) self;
PLyTypeInfo * volatile args = myState->args;

because what needs to be marked volatile is the pointer variable,
not what it points at.  I'm a bit surprised you're not getting
"cast away volatile" warnings from the code as you have it.


Unfortunately, that didn't make a difference. Amit's suggestion of 
isolating the single statement in a PG_TRY() didn't work either, but 
assigning args->in.r.atts[i] to a pointer did. The volatile didn't make 
a difference in this case, which if the PG_TRY() comments are to be 
believed is what I'd expect. Though, it was also OK with value not being 
volatile, which AFAIK isn't safe, so...


for (i = 0; i < desc->natts; i++)
{
PyObject   *value = NULL;
        PLyDatumToOb * volatile atts = &args->in.r.atts[i];

...

if (slot->tts_isnull[i] || atts->func == NULL)
{
Py_INCREF(Py_None);
value = Py_None;
}
else
{
PG_TRY();
{
value = (atts->func) (atts, 
slot->tts_values[i]);
}
PG_CATCH();
{
Py_XDECREF(value);
MemoryContextSwitchTo(oldcontext);
PLy_switch_execution_context(old_exec_ctx);
PG_RE_THROW();
}
PG_END_TRY();
}

lldb output from inspecting variables:

(lldb) call args
(volatile PLyTypeInfo *) $60 = 0x7fff5912fbe8
(lldb) call @args
error: expected expression
(lldb) call *args
(PLyTypeInfo) $61 = {
  in = {
d = {
  func = 0x7fca218f8ed0
  typfunc = {
fn_addr = 0x7fff0001
fn_oid = 114610686
fn_nargs = 1
fn_strict = '\0'
fn_retset = '\0'
fn_stats = '0'
fn_extra = 0x7fca2206d060
fn_mcxt = 0x7fca22800960
fn_expr = 0x7fff5912fbe8
  }
  typtransform = {
fn_addr = 0x7fca218f8e38
fn_oid = 570849696
fn_nargs = 32714
fn_strict = '\0'
fn_retset = '\0'
fn_stats = '8'
fn_extra = 0x7fca22066f60
fn_mcxt = 0x7fff5912fc90
fn_expr = 0x000106d7df6f
  }
  typoid = 570830824
  typmod = 32714
  typioparam = 570830864
  typbyval = '\0'
  typlen = 0
  typalign = 'h'
  elm = 0x
}
r = {
  atts = 0x7fca218f8ed0
  natts = 1
}
  }
  out = {
d = {
  func = 0x7fca22066f60
  typfunc = {
fn_addr = 0x7fca
fn_oid = 570846544
fn_nargs = 32714
fn_strict = '\0'
fn_retset = '\0'
fn_stats = '`'
fn_extra = 0x7fff5912fce0
fn_mcxt = 0x000106d4d437
fn_expr = 0x7fca2206ce30
  }
  typtransform = {
fn_addr = 0x7fca22062f90
fn_oid = 0
fn_nargs = 0
fn_strict = '\0'
fn_retset = '\0'
fn_stats = '\0'
fn_extra = 0xffdc22066c20
fn_mcxt = 0x
fn_expr = 0x7fca22066f60
  }
  typoid = 3756919206
  typmod = -754934605
  typioparam = 1494416704
  typbyval = '\xff'
  typlen = 0
  typalign = '^'
  elm = 0x7fff5912fd10
}
r = {
  atts = 0x7fca22066f60
  natts = 0
}
  }
  is_rowtype = 0
  typ_relid = 0
  typrel_xmin = 570847072
  typrel_tid = {
ip_blkid = (bi_hi = 32714, bi_lo = 0)
ip_posid = 36408
  }
  mcxt = 0x00012206c6f0
}
(lldb) gu
(lldb) call args
(volatile PLyTypeInfo *) $62 = 0x7fff5912fbe8
(lldb) call *args
(PLyTypeInfo) $63 = {
  in = {
d = {
  func = 0x218f8ed0
  

Re: [HACKERS] Broken atomics code on PPC with FreeBSD 10.3

2017-01-03 Thread Tom Lane
Robert Haas  writes:
> The only point I'm making here is that the width of a spinlock is not
> irrelevant.

Sure, but it does not follow that we need to get all hot and bothered
about the width of pg_atomic_flag, which has yet to find its first
use-case.  When and if its width becomes a demonstrable issue,
we'll have some work to do in this area ... but that was true already.

> All things being equal
> I still think a narrower one is significantly better than a wider one,
> but we can always leave solving that problem to a day when the
> difference can be proved out by benchmarks.

I think we can agree on that conclusion.

regards, tom lane




Re: [HACKERS] Shrink volume of default make output

2017-01-03 Thread Peter Eisentraut
On 1/2/17 4:37 PM, Jim Nasby wrote:
> The recent thread about compiler warnings got me thinking about how it's 
> essentially impossible to notice warnings with default make output. 

I always build with -Werror.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] multivariate statistics (v19)

2017-01-03 Thread Tomas Vondra

On 12/30/2016 02:12 PM, Petr Jelinek wrote:

On 12/12/16 22:50, Tomas Vondra wrote:

On 12/12/2016 12:26 PM, Amit Langote wrote:


Hi Tomas,

On 2016/10/30 4:23, Tomas Vondra wrote:

Hi,

Attached is v20 of the multivariate statistics patch series, doing
mostly
the changes outlined in the preceding e-mail from October 11.

The patch series currently has these parts:

* 0001 : (FIX) teach pull_varno about RestrictInfo
* 0002 : (PATCH) shared infrastructure and ndistinct coefficients


Hi,

I went over these two (IMHO those could easily be considered a minimal
committable set, even if the user-visible functionality they provide is
rather limited).



Yes, although I still have my doubts 0001 is the right way to make 
pull_varnos work. It's probably related to the bigger design question, 
because moving the statistics selection to an earlier phase could make 
it unnecessary I guess.



dropping statistics
---

The statistics may be dropped automatically using DROP STATISTICS.

After ALTER TABLE ... DROP COLUMN, statistics referencing the dropped column are:

  (a) dropped, if the statistics would reference only one column

  (b) retained, but modified on the next ANALYZE


This should be documented in user visible form if you plan to keep it
(it does make sense to me).



Yes, I plan to keep it. I agree it should be documented, probably on the 
ALTER TABLE page (and linked from CREATE/DROP statistics pages).



+   therefore perfectly correlated. Providing additional information about
+   correlation between columns is the purpose of multivariate statistics,
+   and the rest of this section thoroughly explains how the planner
+   leverages them to improve estimates.
+  
+
+  
+   For additional details about multivariate statistics, see
+   src/backend/utils/mvstats/README.stats. There are additional
+   READMEs for each type of statistics, mentioned in the following
+   sections.
+  
+
+ 


I don't think this qualifies as "thoroughly explains" ;)



OK, I'll drop the "thoroughly" ;-)


+
+Oid
+get_statistics_oid(List *names, bool missing_ok)


No comment?


+   case OBJECT_STATISTICS:
+   msg = gettext_noop("statistics \"%s\" does not exist, 
skipping");
+   name = NameListToString(objname);
+   break;


This sounds somewhat weird (plural vs singular).



Ah, right - it should be either "statistic ... does not" or "statistics 
... do not". I think "statistics" is the right choice here, because (a) 
we have CREATE STATISTICS and (b) it may be a combination of statistics, 
e.g. histogram + MCV.



+ * XXX Maybe this should check for duplicate stats. Although it's not clear
+ * what "duplicate" would mean here (wheter to compare only keys or also
+ * options). Moreover, we don't do such checks for indexes, although those
+ * store tuples and recreating a new index may be a way to fix bloat (which
+ * is a problem statistics don't have).
+ */
+ObjectAddress
+CreateStatistics(CreateStatsStmt *stmt)


I don't think we should check for duplicates TBH, so I would remove the XXX
(also "wheter" is a typo, but if you remove that paragraph it does not matter).



Yes, I came to the same conclusion - we can only really check for exact 
matches (same set of columns, same choice of statistic types), but 
that's fairly useless. I'll remove the XXX.



+   if (true)
+   {


Huh?



Yeah, that's a bit of a weird pattern. It's a remnant of copy-pasting the 
preceding block, which looks like this


if (hasindex)
{
...
}

But we've decided to not add similar flag for the statistics. I'll move 
the block to a separate function (instead of merging it directly into 
the function, which is already a bit largeish).



+
+List *
+RelationGetMVStatList(Relation relation)
+{

...

+
+void
+update_mv_stats(Oid mvoid, MVNDistinct ndistinct,
+   int2vector *attrs, VacAttrStats **stats)

...

+static double
+ndistinct_for_combination(double totalrows, int numrows, HeapTuple *rows,
+  int2vector *attrs, VacAttrStats **stats,
+  int k, int *combination)
+{



Again, these deserve comment.



OK, will add.


I'll try to look at other patches in the series as time permits.


thanks

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Logical Replication WIP

2017-01-03 Thread Peter Eisentraut
On 1/3/17 2:39 PM, Peter Eisentraut wrote:
> In 0001-Add-PUBLICATION-catalogs-and-DDL-v16.patch.gz,

Attached are a couple of small fixes for this.  Feel free to ignore the
removal of the header files if they are needed by later patches.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From d1a509cee9e1c2f6c9c4503f19e55520d8b6617f Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 3 Jan 2017 12:00:00 -0500
Subject: [PATCH 1/5] fixup! Add PUBLICATION catalogs and DDL

Add documentation for pg_publication_tables view.
---
 doc/src/sgml/catalogs.sgml | 63 +-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 70533405e2..ae33c56db7 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -5375,7 +5375,8 @@ pg_publication_rel
   
The catalog pg_publication_rel contains the
mapping between relations and publications in the database.  This is a
-   many-to-many mapping.
+   many-to-many mapping.  See also 
+   for a more user-friendly view of this information.
   
 
   
@@ -7729,6 +7730,11 @@ System Views
  
 
  
+  pg_publication_tables
+  publications and their associated tables
+ 
+
+ 
   pg_replication_origin_status
   information about replication origins, including replication progress
  
@@ -9010,6 +9016,61 @@ pg_prepared_xacts Columns
 
  
 
+ 
+  pg_publication_tables
+
+  
+   pg_publication_tables
+  
+
+  
+   The view pg_publication_tables provides
+   information about the mapping between publications and the tables they
+   contain.  Unlike the underlying
+   catalog pg_publication_rel, this view expands
+   publications defined as FOR ALL TABLES, so for such
+   publications there will be a row for each eligible table.
+  
+
+  
+   pg_publication_tables Columns
+
+   
+
+ 
+  Name
+  Type
+  References
+  Description
+ 
+
+
+
+ 
+  pubname
+  name
+  pg_publication.pubname
+  Name of publication
+ 
+
+ 
+  schemaname
+  name
+  pg_namespace.nspname
+  Name of schema containing table
+ 
+
+ 
+  tablename
+  name
+  pg_class.relname
+  Name of table
+ 
+
+   
+  
+ 
+
   
   pg_replication_origin_status
 
-- 
2.11.0

>From 173215232e14ae209373fd69c936fa71ae3b1758 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 3 Jan 2017 12:00:00 -0500
Subject: [PATCH 2/5] fixup! Add PUBLICATION catalogs and DDL

Add missing clauses to documentation of DROP PUBLICATION.
---
 doc/src/sgml/ref/drop_publication.sgml | 23 ++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/doc/src/sgml/ref/drop_publication.sgml b/doc/src/sgml/ref/drop_publication.sgml
index d05d522041..1a1be579ad 100644
--- a/doc/src/sgml/ref/drop_publication.sgml
+++ b/doc/src/sgml/ref/drop_publication.sgml
@@ -21,7 +21,7 @@
 
  
 
-DROP PUBLICATION name [, ...]
+DROP PUBLICATION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
 
  
 
@@ -43,6 +43,16 @@ Parameters
 
   

+IF EXISTS
+
+ 
+  Do not throw an error if the publication does not exist. A notice is issued
+  in this case.
+ 
+
+   
+
+   
 name
 
  
@@ -51,6 +61,17 @@ Parameters
 

 
+   
+CASCADE
+RESTRICT
+
+
+ 
+  These key words do not have any effect, since there are no dependencies
+  on publications.
+ 
+
+   
   
  
 
-- 
2.11.0

>From 1755433aa69d20b9bcd9520ac65e7ee4d42ff124 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 3 Jan 2017 12:00:00 -0500
Subject: [PATCH 3/5] fixup! Add PUBLICATION catalogs and DDL

Small tweak of tab completion.
---
 src/bin/psql/tab-complete.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 9a404dc8ae..8b75ac9f4c 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -2463,7 +2463,7 @@ psql_completion(const char *text, int start, int end)
 /* DROP */
 	/* Complete DROP object with CASCADE / RESTRICT */
 	else if (Matches3("DROP",
-	  "COLLATION|CONVERSION|DOMAIN|EXTENSION|LANGUAGE|SCHEMA|SEQUENCE|SERVER|TABLE|TYPE|VIEW",
+	  "COLLATION|CONVERSION|DOMAIN|EXTENSION|LANGUAGE|PUBLICATION|SCHEMA|SEQUENCE|SERVER|TABLE|TYPE|VIEW",
 	  MatchAny) ||
 			 Matches4("DROP", "ACCESS", "METHOD", MatchAny) ||
 			 (Matches4("DROP", "AGGREGATE|FUNCTION", MatchAny, MatchAny) &&
-- 
2.11.0

>From da31845bbddc24c9b1885efddeb295f8a93f2158 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 3 Jan 2017 12:00:00 -0500
Subject: [PATCH 4/5] fixup! Add PUBLICATION catalogs and DDL

Fix typo
---
 src/backend/catalog/pg_publication.c | 4 ++--
 1 file changed, 2 

Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Tom Lane
I wrote:
> Hah:

> regression=# create table p(f1 int);
> CREATE TABLE
> regression=# create table c1(extra smallint) inherits(p);
> CREATE TABLE
> regression=# alter table p add column f2 int;
> ALTER TABLE
> regression=# insert into c1 values(1,2,3);
> INSERT 0 1
> regression=# alter table p alter column f2 type bigint using f2::bigint;
> ERROR:  attribute 2 has wrong type
> DETAIL:  Table has type smallint, but query expects integer.

> Of course, in c1 the target column is #3 not #2.  The USING expression
> isn't being adjusted for the discrepancy between parent and child column
> numbers.

> This test case works before 9.5; somebody must have broke it while
> refactoring.

A little bit of "git bisect"-ing later, the blame is pinned on

commit 9550e8348b7965715789089555bb5a3fda8c269c
Author: Alvaro Herrera 
Date:   Fri Apr 3 17:33:05 2015 -0300

Transform ALTER TABLE/SET TYPE/USING expr during parse analysis

This lets later stages have access to the transformed expression; in
particular it allows DDL-deparsing code during event triggers to pass
the transformed expression to ruleutils.c, so that the complete command
can be deparsed.

This shuffles the timing of the transform calls a bit: previously,
nothing was transformed during parse analysis, and only the
RELKIND_RELATION case was being handled during execution.  After this
patch, all expressions are transformed during parse analysis (including
those for relkinds other than RELATION), and the error for other
relation kinds is thrown only during execution.  So we do more work than
before to reject some bogus cases.  That seems acceptable.


Of course, the reason why this work was postponed until execution was
exactly because we wanted to do it over again for each child table.

We could probably fix the specific issue being seen here by passing the
expression tree through a suitable attno remapping, but I am now filled
with dread about how much of the event trigger code may be naively
supposing that child tables have the same attnums as their parents.

regards, tom lane




Re: [HACKERS] increasing the default WAL segment size

2017-01-03 Thread Robert Haas
On Tue, Jan 3, 2017 at 3:38 PM, Simon Riggs  wrote:
> On 3 January 2017 at 16:24, Robert Haas  wrote:
>> On Jan 3, 2017 at 11:16 AM, Simon Riggs  wrote:
>>> On 3 January 2017 at 15:44, Robert Haas  wrote:
 Yeah.  I don't think there's any way to get around the fact that there
 will be bigger latency spikes in some cases with larger WAL files.
>>>
>>> One way would be for the WALwriter to zerofill new files ahead of
>>> time, thus avoiding the latency spike.
>>
>> Sure, we could do that.  I think it's an independent improvement,
>> though: it is beneficial with or without this patch.
>
> The latency spike problem is exacerbated by increasing file size, so I
> think if we are allowing people to increase file size in this release
> then we should fix the knock-on problem it causes in this release
> also. If we don't fix it as part of this patch I would consider it an
> open item.

I think I'd like to see some benchmark results before forming an
opinion on whether that's a must-fix issue.  I'm not sure I believe
that allowing a larger WAL segment size is going to make things worse
more than it makes them better.  I think that should be tested, not
assumed true.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Broken atomics code on PPC with FreeBSD 10.3

2017-01-03 Thread Robert Haas
On Tue, Jan 3, 2017 at 12:07 PM, Tom Lane  wrote:
> [ shrug... ]  I have not seen you putting any effort into optimizing
> slock_t on non-x86 architectures, where it might make a difference today.
> Why all the concern about shaving hypothetical future bytes for
> pg_atomic_flag?

I don't know what you're getting all wrapped around the axle here
about.  I already explained why I think it's an issue.  You can
disagree if you like.  As to whether I've put any effort into
optimizing slock_t on non-x86 architectures, I commented in my first
post to this thread about my disappointment regarding the situation on
ppc64.  I didn't realize that we had that issue and I think it would
be worth fixing at some point, but I haven't quite gotten around to it
in the 4 hours since I had that realization.  I have previously done
work on non-x86 spinlocks, though
(c01c25fbe525869fa81237954727e1eb4b7d4a14).

> But to reduce this to the simplest possible terms: it really does not
> matter whether the implementation saves bytes or cycles or anything else,
> if it fails to *work*.  The existing coding for PPC fails on FreeBSD,
> and likely on some other platforms using the same compiler.  I'd be the
> first to say that that doesn't reflect well on the state of their PPC
> port, but it's reality that we ought to be cognizant of.  And we've
> worked around similar bugs nearby, eg see this bit in atomics.h:
>
>  * Given a gcc-compatible xlc compiler, prefer the xlc implementation.  The
>  * ppc64le "IBM XL C/C++ for Linux, V13.1.2" implements both interfaces, but
>  * __sync_lock_test_and_set() of one-byte types elicits SIGSEGV.
>
> From here it seems like you're arguing that we mustn't give FreeBSD
> the identical pass that we already gave IBM, for what is nearly the
> same bug.

The only point I'm making here is that the width of a spinlock is not
irrelevant.  Or at least, it didn't use to be. lwlock.h says:

 * LWLOCK_MINIMAL_SIZE should be 32 on basically all common platforms, but
 * because slock_t is more than 2 bytes on some obscure platforms, we allow
 * for the possibility that it might be 64.

That comment is actually nonsense since
008608b9d51061b1f598c197477b3dc7be9c4a64 but before that it was
relevant.  Also, before 48354581a49c30f5757c203415aa8412d85b0f70, a
BufferDesc fit within a 64-byte cache line if slock_t was 1 or 2
bytes, a point commented on explicitly by
6150a1b08a9fe7ead2b25240be46dddeae9d98e1.   So we've clearly made
efforts in that direction in the past.  Looking a little more, though,
since both lwlock.c and bufmgr.c have been rewritten to use atomics
directly, there might not be any remaining places where the spinlocks
get hot enough to care about the extra bytes.  All things being equal
I still think a narrower one is significantly better than a wider one,
but we can always leave solving that problem to a day when the
difference can be proved out by benchmarks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Tom Lane
Justin Pryzby  writes:
> On Tue, Jan 03, 2017 at 03:35:34PM -0500, Tom Lane wrote:
>> Right.  So I bet that if you check the attnum of pmsumpacketlatency_000 in
>> eric_umts_rnc_utrancell_metrics, you'll find it's different from that in
>> eric_umts_rnc_utrancell_201701, and that the attribute having that attnum
>> in eric_umts_rnc_utrancell_201701 has type smallint not int.

> I think that's consistent with what your understanding:

> ts=# SELECT attrelid::regclass, attname, attnum, atttypid FROM pg_attribute 
> WHERE attrelid::regclass::text~'eric_umts_rnc_utrancell_(metrics|201701)$' 
> AND (attname='pmsumpacketlatency_000' OR attnum IN (367,424) ) ORDER BY 1,2;
>  eric_umts_rnc_utrancell_metrics | pmsamplespshsadchrabestablish | 367 | 21
>  eric_umts_rnc_utrancell_metrics | pmsumpacketlatency_000        | 424 | 23
>  eric_umts_rnc_utrancell_201701  | pmsumpacketlatency_000        | 367 | 23
>  eric_umts_rnc_utrancell_201701  | pmulupswitchsuccessmedium     | 424 | 21

Yup.  So if you can't wait for a fix, your best bet would be to dump and
reload these tables, which should bring their attnums back in sync.
(Of course, they might not stay that way for long, if you're also
in the habit of adding columns often.)

regards, tom lane




Re: [HACKERS] Shrink volume of default make output

2017-01-03 Thread Tom Lane
Jim Nasby  writes:
> The attached hack doesn't quiet everything, but makes a significant 
> difference, 1588 lines down to 622, with 347 being make -C (each of 
> those was a make -j4 after a make clean).

> If folks are interested in this I can look at quieting the remaining 
> output. My intention would be to still output something on entry to a 
> directory that would take a non-trivial amount of time (like 
> src/backend). Though if it's very likely that the CMake stuff is going 
> to happen (is it?) then I don't think it's worth it.

TBH, I flat out don't want this.  Normally I want "-s" mode, ie *no*
routine output, and when I don't want that I generally need to see
everything.  Intermediate levels of verbosity are just an annoyance.

regards, tom lane




Re: [HACKERS] Shrink volume of default make output

2017-01-03 Thread Jim Nasby

On 1/2/17 3:57 PM, Tom Lane wrote:

Jim Nasby  writes:

The recent thread about compiler warnings got me thinking about how it's
essentially impossible to notice warnings with default make output.
Perhaps everyone just uses make -s by default, though that's a bit
annoying since you get no output unless something does warn (and then
you don't know what directory it was in).



Is it worth looking into this? I'm guessing this may be moot with the
CMake work, but it's not clear when that'll make it in. In the meantime,
ISTM http://stackoverflow.com/a/218295 should be an easy change to make
(though perhaps with a variable that gives you the old behavior).


I'm not really sure which of the kluges in that article you're proposing
we adopt, but none of them look better than "make -s" to me.  Also,
none of them would do anything about make's own verbosity such as
"entering/leaving directory" lines.


I was specifically thinking of quieting the compiler lines, along the 
lines of silencing the CC lines. That would still provide the per 
directory output for some amount of status. (At first I thought of doing 
the @echo "Compiling $<" hack, but in retrospect there's probably no use 
in that.)


The attached hack doesn't quiet everything, but makes a significant 
difference, 1588 lines down to 622, with 347 being make -C (each of 
those was a make -j4 after a make clean).


If folks are interested in this I can look at quieting the remaining 
output. My intention would be to still output something on entry to a 
directory that would take a non-trivial amount of time (like 
src/backend). Though if it's very likely that the CMake stuff is going 
to happen (is it?) then I don't think it's worth it.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index d39d6ca867..76ec919476 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -2,6 +2,9 @@
 # src/Makefile.global.in
 # @configure_input@
 
+SILENT = @
+DOTSILENT = .SILENT
+
#------------------------------------------------------------------------------
 # All PostgreSQL makefiles include this file and use the variables it sets,
 # which in turn are put here by the configure script. There is no need for
@@ -618,9 +621,10 @@ TAS = @TAS@
 #
 # Global targets and rules
 
+
 %.c: %.l
 ifdef FLEX
-   $(FLEX) $(if $(FLEX_NO_BACKUP),-b) $(FLEXFLAGS) -o'$@' $<
+   $(SILENT) $(FLEX) $(if $(FLEX_NO_BACKUP),-b) $(FLEXFLAGS) -o'$@' $<
	@$(if $(FLEX_NO_BACKUP),if [ `wc -l <lex.backup` -eq 1 ]; then rm lex.backup; else echo "Scanner requires backup; see lex.backup." 1>&2; exit 1; fi)
 else
@$(missing) flex $< '$@'
@@ -629,19 +633,20 @@ endif
 %.c: %.y
$(if $(BISON_CHECK_CMD),$(BISON_CHECK_CMD))
 ifdef BISON
-   $(BISON) $(BISONFLAGS) -o $@ $<
+   $(SILENT) $(BISON) $(BISONFLAGS) -o $@ $<
 else
@$(missing) bison $< $@
 endif
 
 %.i: %.c
-   $(CPP) $(CPPFLAGS) -o $@ $<
+   $(SILENT) $(CPP) $(CPPFLAGS) -o $@ $<
 
 %.gz: %
-   $(GZIP) --best -c $< >$@
+   $(SILENT) $(GZIP) --best -c $< >$@
 
 %.bz2: %
-   $(BZIP2) -c $< >$@
+   $(SILENT) $(BZIP2) -c $< >$@
+
 
 # Direct builds of foo.c -> foo are disabled to avoid generating
 # *.dSYM junk on Macs.  All builds should normally go through the
@@ -785,10 +790,11 @@ ifeq ($(GCC), yes)
 # GCC allows us to create object and dependency file in one invocation.
 %.o : %.c
@if test ! -d $(DEPDIR); then mkdir -p $(DEPDIR); fi
-   $(COMPILE.c) -o $@ $< -MMD -MP -MF $(DEPDIR)/$(*F).Po
+   $(SILENT) $(COMPILE.c) -o $@ $< -MMD -MP -MF $(DEPDIR)/$(*F).Po
 
 endif # GCC
 
+
 # Include all the dependency files generated for the current
 # directory. Note that make would complain if include was called with
 # no arguments.
diff --git a/src/backend/Makefile b/src/backend/Makefile
index a1d3f067d7..8a8e439f6f 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -53,6 +53,8 @@ endif
 
 all: submake-libpgport submake-schemapg postgres $(POSTGRES_IMP)
 
+$(DOTSILENT): postgres
+
 ifneq ($(PORTNAME), cygwin)
 ifneq ($(PORTNAME), win32)
 ifneq ($(PORTNAME), aix)
diff --git a/src/backend/common.mk b/src/backend/common.mk
index 5d599dbd0c..66b5a0c573 100644
--- a/src/backend/common.mk
+++ b/src/backend/common.mk
@@ -26,9 +26,10 @@ endif
 SUBSYS.o: $(SUBDIROBJS) $(OBJS)
$(LD) $(LDREL) $(LDOUT) $@ $^
 
+
 objfiles.txt: Makefile $(SUBDIROBJS) $(OBJS)
 # Don't rebuild the list if only the OBJS have changed.
-   $(if $(filter-out $(OBJS),$?),( $(if $(SUBDIROBJS),cat $(SUBDIROBJS); )echo $(addprefix $(subdir)/,$(OBJS)) ) >$@,touch $@)
+   $(SILENT) $(if $(filter-out $(OBJS),$?),( $(if $(SUBDIROBJS),cat $(SUBDIROBJS); )echo $(addprefix $(subdir)/,$(OBJS)) ) >$@,touch $@)
 
 # make function to 

Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Justin Pryzby
On Tue, Jan 03, 2017 at 03:35:34PM -0500, Tom Lane wrote:
> Justin Pryzby  writes:
> > On Tue, Jan 03, 2017 at 03:18:15PM -0500, Tom Lane wrote:
> >> I'm wondering if this represents some sort of out-of-sync condition
> >> between the table and its child tables.  We can't actually tell from
> >> this trace which table is being processed.  Could you try, from this
> >> breakpoint,
> >> 
> >> f 3
> >> p oldrel->rd_rel->relname
> 
> > (gdb) p oldrel->rd_rel->relname
> > $1 = {data = "eric_umts_rnc_utrancell_201701", '\000' }
> 
> Right.  So I bet that if you check the attnum of pmsumpacketlatency_000 in
> eric_umts_rnc_utrancell_metrics, you'll find it's different from that in
> eric_umts_rnc_utrancell_201701, and that the attribute having that attnum
> in eric_umts_rnc_utrancell_201701 has type smallint not int.

I think that's consistent with your understanding:

ts=# SELECT attrelid::regclass, attname, attnum, atttypid FROM pg_attribute
WHERE attrelid::regclass::text ~ 'eric_umts_rnc_utrancell_(metrics|201701)$' AND
(attname='pmsumpacketlatency_000' OR attnum IN (367,424)) ORDER BY 1,2;
            attrelid             |            attname            | attnum | atttypid
---------------------------------+-------------------------------+--------+----------
 eric_umts_rnc_utrancell_metrics | pmsamplespshsadchrabestablish |    367 |       21
 eric_umts_rnc_utrancell_metrics | pmsumpacketlatency_000        |    424 |       23
 eric_umts_rnc_utrancell_201701  | pmsumpacketlatency_000        |    367 |       23
 eric_umts_rnc_utrancell_201701  | pmulupswitchsuccessmedium     |    424 |       21
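
A more general form of this check — matching columns by name across pg_inherits and flagging parent/child pairs whose attnum or atttypid diverge — could look like this (a sketch, untested against this particular schema):

```sql
SELECT c.attrelid::regclass AS child, c.attname,
       p.attnum   AS parent_attnum, c.attnum   AS child_attnum,
       p.atttypid::regtype AS parent_type, c.atttypid::regtype AS child_type
FROM pg_inherits i
JOIN pg_attribute p ON p.attrelid = i.inhparent AND p.attnum > 0
JOIN pg_attribute c ON c.attrelid = i.inhrelid AND c.attname = p.attname
WHERE p.attnum <> c.attnum OR p.atttypid <> c.atttypid;
```

Any row returned is a column whose position or type differs between parent and child, which is the situation the failing ALTER runs into.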

Justin




Re: [HACKERS] increasing the default WAL segment size

2017-01-03 Thread Simon Riggs
On 3 January 2017 at 16:24, Robert Haas  wrote:
> On Jan 3, 2017 at 11:16 AM, Simon Riggs  wrote:
>> On 3 January 2017 at 15:44, Robert Haas  wrote:
>>> Yeah.  I don't think there's any way to get around the fact that there
>>> will be bigger latency spikes in some cases with larger WAL files.
>>
>> One way would be for the WALwriter to zerofill new files ahead of
>> time, thus avoiding the latency spike.
>
> Sure, we could do that.  I think it's an independent improvement,
> though: it is beneficial with or without this patch.

The latency spike problem is exacerbated by increasing file size, so I
think if we are allowing people to increase file size in this release
then we should fix the knock-on problem it causes in this release
also. If we don't fix it as part of this patch I would consider it an
open item.
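
The pre-zero-filling Simon describes amounts to extending the next segment with zeros off the commit path. A minimal stand-alone sketch follows; this is not the actual XLogFileInit() code, and the segment size and chunked write loop here are illustrative only:

```c
/*
 * Sketch of pre-zero-filling a WAL-sized file ahead of time so a later
 * writer never pays the file-extension cost at commit.  Not the real
 * xlog.c code; sizes and names are illustrative.
 */
#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <sys/stat.h>

/* Write segsize zero bytes to path; returns 0 on success, -1 on error. */
static int
preallocate_segment(const char *path, long segsize)
{
    char    zeros[8192];
    long    written = 0;
    FILE   *f = fopen(path, "wb");

    if (f == NULL)
        return -1;
    memset(zeros, 0, sizeof(zeros));
    while (written < segsize)
    {
        long    chunk = segsize - written;

        if (chunk > (long) sizeof(zeros))
            chunk = (long) sizeof(zeros);
        if (fwrite(zeros, 1, (size_t) chunk, f) != (size_t) chunk)
        {
            fclose(f);
            return -1;
        }
        written += chunk;
    }
    /* A real implementation would also fsync() before relying on the file. */
    return fclose(f) == 0 ? 0 : -1;
}
```

Doing this from a background process (the walwriter, per Simon's suggestion) moves the cost of the zero-fill off the foreground commit path.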

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Tom Lane
Justin Pryzby  writes:
> On Tue, Jan 03, 2017 at 03:18:15PM -0500, Tom Lane wrote:
>> I'm wondering if this represents some sort of out-of-sync condition
>> between the table and its child tables.  We can't actually tell from
>> this trace which table is being processed.  Could you try, from this
>> breakpoint,
>> 
>> f 3
>> p oldrel->rd_rel->relname

> (gdb) p oldrel->rd_rel->relname
> $1 = {data = "eric_umts_rnc_utrancell_201701", '\000' }

Right.  So I bet that if you check the attnum of pmsumpacketlatency_000 in
eric_umts_rnc_utrancell_metrics, you'll find it's different from that in
eric_umts_rnc_utrancell_201701, and that the attribute having that attnum
in eric_umts_rnc_utrancell_201701 has type smallint not int.

This is an expected situation in some situations where you ALTER existing
inheritance hierarchies; it's a bug that ALTER COLUMN is failing to cope.

regards, tom lane




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Justin Pryzby
On Tue, Jan 03, 2017 at 03:18:15PM -0500, Tom Lane wrote:
> Justin Pryzby  writes:
> > (gdb) bt
> > #3  0x0059d5ce in ATRewriteTable (tab=, 
> > OIDNewHeap=, lockmode=) at 
> > tablecmds.c:4152
> 
> I'm wondering if this represents some sort of out-of-sync condition
> between the table and its child tables.  We can't actually tell from
> this trace which table is being processed.  Could you try, from this
> breakpoint,
> 
> f 3
> p oldrel->rd_rel->relname

(gdb) p oldrel->rd_rel->relname
$1 = {data = "eric_umts_rnc_utrancell_201701", '\000' }




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Tom Lane
I wrote:
> I'm wondering if this represents some sort of out-of-sync condition
> between the table and its child tables.

Hah:

regression=# create table p(f1 int);
CREATE TABLE
regression=# create table c1(extra smallint) inherits(p);
CREATE TABLE
regression=# alter table p add column f2 int;
ALTER TABLE
regression=# insert into c1 values(1,2,3);
INSERT 0 1
regression=# alter table p alter column f2 type bigint using f2::bigint;
ERROR:  attribute 2 has wrong type
DETAIL:  Table has type smallint, but query expects integer.

Of course, in c1 the target column is #3 not #2.  The USING expression
isn't being adjusted for the discrepancy between parent and child column
numbers.
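
For the repro above, the diverged numbering can be seen directly in the catalog (a sketch):

```sql
SELECT attrelid::regclass, attname, attnum, atttypid::regtype
FROM pg_attribute
WHERE attrelid IN ('p'::regclass, 'c1'::regclass) AND attnum > 0
ORDER BY attrelid::regclass::text, attnum;
-- In c1 the columns come out as f1 (1), extra (2), f2 (3),
-- while in p they are f1 (1), f2 (2): attnum 2 in the child is "extra"
-- (smallint), which is exactly the type the error message complains about.
```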

This test case works before 9.5; somebody must have broke it while
refactoring.

regards, tom lane




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Tom Lane
Justin Pryzby  writes:
> (gdb) bt
> #0  errfinish (dummy=0) at elog.c:414
> #1  0x005d0e30 in ExecEvalScalarVar (exprstate=<value optimized out>,
> econtext=<value optimized out>, isNull=<value optimized out>,
> isDone=<value optimized out>) at execQual.c:655
> #2  0x005d0c3c in ExecMakeFunctionResultNoSets (fcache=0x21f18a0,
> econtext=0x2199e80, isNull=0x21e90ee "", isDone=<value optimized out>) at
> execQual.c:2015
> #3  0x0059d5ce in ATRewriteTable (tab=<value optimized out>,
> OIDNewHeap=<value optimized out>, lockmode=<value optimized out>) at
> tablecmds.c:4152
> #4  0x005a92fc in ATRewriteTables (parsetree=0x1f63b20, rel=<value
> optimized out>, cmds=<value optimized out>, recurse=<value optimized out>,
> lockmode=<value optimized out>) at tablecmds.c:3858
> #5  ATController (parsetree=0x1f63b20, rel=<value optimized out>,
> cmds=<value optimized out>, recurse=<value optimized out>,
> lockmode=<value optimized out>) at tablecmds.c:3104
> #6  0x006e25e6 in ProcessUtilitySlow (parsetree=0x1fc6f78,
> queryString=0x1fc5fb0 "ALTER TABLE eric_umts_rnc_utrancell_metrics ALTER
> COLUMN PMSUMPACKETLATENCY_000 TYPE BIGINT USING
> PMSUMPACKETLATENCY_000::BIGINT;",
> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=<value optimized out>,
> completionTag=0x7fff8b9d3a90 "") at utility.c:1085

Okay, so it's clearly processing the USING expression and not something
else, which is weird because that should've just been parsed against the
existing table column; how could that Var contain the wrong type?

I'm wondering if this represents some sort of out-of-sync condition
between the table and its child tables.  We can't actually tell from
this trace which table is being processed.  Could you try, from this
breakpoint,

f 3
p oldrel->rd_rel->relname

regards, tom lane




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-03 Thread Pavel Stehule
2017-01-03 20:54 GMT+01:00 Merlin Moncure :

> On Tue, Jan 3, 2017 at 9:58 AM, Pavel Stehule 
> wrote:
> > 2017-01-03 16:23 GMT+01:00 Merlin Moncure :
> >> So -1 to strict mode, unless we can make a case why this can't be done
> >> as part of checking/validation.
> >
> > Can plpgsql.extra_errors and plpgsql.extra_warnings be a solution?
> >
> > I am thinking there is space for improvement (in extra_* usage)
>
> extra_warnings seems ok at the GUC level.  However it's bad to have a
> body of code fail to compile based on GUC.  check_function_bodies for
> example is a complete hack and should be avoided if at all possible
> IMO.  There is very good informal rule that GUC should not impact
> behavior (minus some special cases like timeouts).   Good examples of
> failure to follow this rule are mysql and php.
>
> Maybe settings at level of extension could be ok, but I'm skeptical.
> Good languages are clear without needing extra context.
>
> > Do you know plpgsql_check https://github.com/okbob/plpgsql_check ?
>
> Yes.  This is a good design and should be a model for core work (if any).
> In my ideal world, this would be part of pgxn, with a pgxn
> client installed in core.   For plpgsql to enter the modern era we need
> standardized packaging and deployment like cran, npm, etc.
>
> >> Other random points:
> >> *) Another major pain point is swapping in the input variables for
> >> debugging purposes.  Something that emits a script based on a set of
> >> arguments would be wonderful.
> >
> > ???
>
> Often for debugging of complicated cases I'm starting from errors in
> database log with function name and argument values.  Sometimes I find
> myself pasting pl/pgsql function into text editor and replacing input
> variables with known values.
>

Is it related to the plpgsql debugger? I have no idea how it could be done
better at the language level.


>
> >>
> >> *) Would also like to have a FINALLY block
> >
> > What you can do there?
>
> This is syntax sugar so you don't need second begin/end/exception
> block or duplicated code.  It separates error handling from cleanup.
>
> BEGIN
>   PERFORM dblink_connect(...
>   
> EXCEPTION WHEN OTHERS THEN
>   
> FINALLY
>   PERFORM dblink_disconnect(...
> END;
>

Does somebody know this pattern from Ada or PL/SQL?


>
> >> *) Some user visible mechanic other than forcing SQL through EXECUTE
> >> to be able to control plan caching would be useful.
> >
> > fully agree.
> >
> > Have you some ideas?
> >
> > What about a plpgsql option (function scope) -- WITHOUT-PLAN-CACHE -- any
> > non-trivial plans would not be cached, and would be evaluated as
> > parametrized queries only.
>
> I have slight preference for syntax marker for each query, similar to
> INTO.  Maybe 'UNCACHED'?
>

I have no clear opinion - the statement level is nice, but what about readability?

SELECT UNCACHED t.a, t.b FROM t INTO a, b;

Regards

Pavel


>
> On Tue, Jan 3, 2017 at 10:57 AM, Jim Nasby 
> wrote:
> > Or just fix the issue, provide the backwards compatability GUCs and move
> on.
>
> I really don't think this will fly.  I'm not buying your argument (at
> all) that compatibility breaks have been cleanly done in the
> past, at least not in the modern era.  In any event, marginal language
> improvements are not a good justification to do it.   And yes, the
> continual monkeying around with column names in pg_stat_activity is a
> major hassle.  For heaven's sake, can we just add new columns and/or
> create a new view?
>
> merlin
>


Re: [HACKERS] proposal: session server side variables

2017-01-03 Thread Pavel Stehule
2017-01-03 20:56 GMT+01:00 Fabien COELHO :

>
> Hello,
>
> Probably there is not big difference between RESET and UNDO in complexity
>> of implementation. You have to do partial implementation of MVCC. No
>> simple
>> code.
>>
>
> I think so; yes; indeed.
>
> Also note that user-defined GUCs already implements the transactional
>>> property, so probably the mecanism is already available and can be
>>> reused.
>>>
>>
>> GUCs are stack based - the value doesn't depend on whether the transaction
>> was successful or not.
>>
>
> Hmmm... this looks transactional to me:
>
>   SELECT set_config('x.x', 'before', FALSE); -- 'before'
>   BEGIN;
> SELECT set_config('x.x', 'within', FALSE); -- 'within'
>   ROLLBACK;
>   SELECT current_setting('x.x'); -- 'before'
>   BEGIN;
> SELECT set_config('x.x', 'inside', FALSE); -- 'inside'
>   COMMIT;
>   SELECT current_setting('x.x'); -- 'inside'
>
> I would say the stack is needed for SAVEPOINT:
>
>   SELECT set_config('x.x', 'before', FALSE); -- 'before'
>   BEGIN;
> SELECT set_config('x.x', 'within', FALSE); -- 'within'
> SAVEPOINT within;
> SELECT set_config('x.x', 'inside', FALSE); -- 'inside'
> SELECT current_setting('x.x'); -- 'inside'
> ROLLBACK TO SAVEPOINT within;
> SELECT current_setting('x.x'); -- 'within'
> SELECT set_config('x.x', 'further', FALSE); -- 'further'
>   ROLLBACK;
>   SELECT current_setting('x.x'); -- 'before'
>
> So basically the use case needs GUCs with some access control. Or just
> role-private GUCs and some access function tricks would do as well for the
> use case. At least it is probably much easier to add privacy to gucs than
> to (re)implement permissions and MVCC on some session variables. And it
> would be nice if GUCs could be typed as well...


With respect, I don't share your opinion - it is not enough for usage like
package variables, which usually should not have any dependency on
transactions.

Moreover, it is dynamic - it would be a hard inconsistency to implement a
CREATE or DECLARE statement for a GUC. So it is outside my proposal (and my
goal).

Regards

Pavel



>
>
> --
> Fabien.
>


Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Justin Pryzby
On Tue, Jan 03, 2017 at 02:50:21PM -0500, Tom Lane wrote:
> Justin Pryzby  writes:
> > On Tue, Jan 03, 2017 at 02:32:36PM -0500, Tom Lane wrote:
> >> 2. Even better would be a stack trace for the call to errfinish,
> >> https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
> 
> Thanks, but we need the whole call stack, or at least the first dozen or
> so levels.  "bt" in gdb would do.

#0  errfinish (dummy=0) at elog.c:414
#1  0x006dd39f in exec_simple_query (query_string=0x1fc5fb0 "ALTER 
TABLE eric_umts_rnc_utrancell_metrics ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE 
BIGINT USING PMSUMPACKETLATENCY_000::BIGINT;") at postgres.c:932
#2  0x006dec8c in PostgresMain (argc=<value optimized out>,
argv=<value optimized out>, dbname=0x1f65d98 "ts",
username=<value optimized out>) at postgres.c:4070
#3  0x0067f2c5 in BackendRun (argc=<value optimized out>,
argv=<value optimized out>) at postmaster.c:4270
#4  BackendStartup (argc=<value optimized out>, argv=<value optimized out>) at
postmaster.c:3944
#5  ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at
postmaster.c:1701
#6  PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at
postmaster.c:1309
#7  0x00607658 in main (argc=3, argv=0x1f3a4f0) at main.c:228


(gdb) bt
#0  errfinish (dummy=0) at elog.c:414
#1  0x005d0e30 in ExecEvalScalarVar (exprstate=<value optimized out>,
econtext=<value optimized out>, isNull=<value optimized out>,
isDone=<value optimized out>) at execQual.c:655
#2  0x005d0c3c in ExecMakeFunctionResultNoSets (fcache=0x21f18a0,
econtext=0x2199e80, isNull=0x21e90ee "", isDone=<value optimized out>) at
execQual.c:2015
#3  0x0059d5ce in ATRewriteTable (tab=<value optimized out>,
OIDNewHeap=<value optimized out>, lockmode=<value optimized out>) at
tablecmds.c:4152
#4  0x005a92fc in ATRewriteTables (parsetree=0x1f63b20, rel=<value
optimized out>, cmds=<value optimized out>, recurse=<value optimized out>,
lockmode=<value optimized out>) at tablecmds.c:3858
#5  ATController (parsetree=0x1f63b20, rel=<value optimized out>,
cmds=<value optimized out>, recurse=<value optimized out>,
lockmode=<value optimized out>) at tablecmds.c:3104
#6  0x006e25e6 in ProcessUtilitySlow (parsetree=0x1fc6f78,
queryString=0x1fc5fb0 "ALTER TABLE eric_umts_rnc_utrancell_metrics ALTER COLUMN
PMSUMPACKETLATENCY_000 TYPE BIGINT USING PMSUMPACKETLATENCY_000::BIGINT;",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=<value optimized out>,
completionTag=0x7fff8b9d3a90 "") at utility.c:1085
#7  0x006e2a70 in standard_ProcessUtility (parsetree=0x1fc6f78,
queryString=0x1fc5fb0 "ALTER TABLE eric_umts_rnc_utrancell_metrics ALTER
COLUMN PMSUMPACKETLATENCY_000 TYPE BIGINT USING
PMSUMPACKETLATENCY_000::BIGINT;", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
dest=0x1fc72b8,
completionTag=0x7fff8b9d3a90 "") at utility.c:907
#8  0x006df2cc in PortalRunUtility (portal=0x1fff2e0,
utilityStmt=0x1fc6f78, isTopLevel=1 '\001', setHoldSnapshot=<value optimized
out>, dest=0x1fc72b8, completionTag=0x7fff8b9d3a90 "") at pquery.c:1193
#9  0x006e01cb in PortalRunMulti (portal=0x1fff2e0, isTopLevel=1
'\001', setHoldSnapshot=0 '\000', dest=0x1fc72b8, altdest=0x1fc72b8,
completionTag=0x7fff8b9d3a90 "") at pquery.c:1349
#10 0x006e0934 in PortalRun (portal=0x1fff2e0,
count=9223372036854775807, isTopLevel=1 '\001', dest=0x1fc72b8,
altdest=0x1fc72b8, completionTag=0x7fff8b9d3a90 "") at pquery.c:815
#11 0x006dd5b1 in exec_simple_query (query_string=0x1fc5fb0 "ALTER
TABLE eric_umts_rnc_utrancell_metrics ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE
BIGINT USING PMSUMPACKETLATENCY_000::BIGINT;") at postgres.c:1094
#12 0x006dec8c in PostgresMain (argc=<value optimized out>,
argv=<value optimized out>, dbname=0x1f65d98 "ts",
username=<value optimized out>) at postgres.c:4070
#13 0x0067f2c5 in BackendRun (argc=<value optimized out>,
argv=<value optimized out>) at postmaster.c:4270
#14 BackendStartup (argc=<value optimized out>, argv=<value optimized out>) at
postmaster.c:3944
#15 ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at
postmaster.c:1701
#16 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at
postmaster.c:1309
#17 0x00607658 in main (argc=3, argv=0x1f3a4f0) at main.c:228

> > I'll send the rest of \d if you really want but:
> 
> Well, we don't know what we're looking for, so assuming that there's
> nothing of interest there is probably bad.

Attached

Justin


alter-wrong-type-dplus.gz
Description: Binary data



Re: [HACKERS] proposal: session server side variables

2017-01-03 Thread Fabien COELHO


Hello,


Probably there is not big difference between RESET and UNDO in complexity
of implementation. You have to do partial implementation of MVCC. No simple
code.


I think so; yes; indeed.


Also note that user-defined GUCs already implements the transactional
property, so probably the mecanism is already available and can be reused.


GUCs are stack based - the value doesn't depend on whether the transaction was
successful or not.


Hmmm... this looks transactional to me:

  SELECT set_config('x.x', 'before', FALSE); -- 'before'
  BEGIN;
SELECT set_config('x.x', 'within', FALSE); -- 'within'
  ROLLBACK;
  SELECT current_setting('x.x'); -- 'before'
  BEGIN;
SELECT set_config('x.x', 'inside', FALSE); -- 'inside'
  COMMIT;
  SELECT current_setting('x.x'); -- 'inside'

I would say the stack is needed for SAVEPOINT:

  SELECT set_config('x.x', 'before', FALSE); -- 'before'
  BEGIN;
SELECT set_config('x.x', 'within', FALSE); -- 'within'
SAVEPOINT within;
SELECT set_config('x.x', 'inside', FALSE); -- 'inside'
SELECT current_setting('x.x'); -- 'inside'
ROLLBACK TO SAVEPOINT within;
SELECT current_setting('x.x'); -- 'within'
SELECT set_config('x.x', 'further', FALSE); -- 'further'
  ROLLBACK;
  SELECT current_setting('x.x'); -- 'before'

So basically the use case needs GUCs with some access control. Or just 
role-private GUCs and some access function tricks would do as well for the 
use case. At least it is probably much easier to add privacy to gucs than 
to (re)implement permissions and MVCC on some session variables. And it 
would be nice if GUCs could be typed as well...
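
Worth noting alongside the above: set_config()'s third argument (is_local) already gives SET LOCAL semantics, i.e. the value reverts at transaction end even on COMMIT. A quick sketch:

```sql
SELECT set_config('x.x', 'before', FALSE);  -- 'before'
BEGIN;
  SELECT set_config('x.x', 'local', TRUE);  -- is_local = TRUE
  SELECT current_setting('x.x');            -- 'local'
COMMIT;
SELECT current_setting('x.x');              -- back to 'before', even after COMMIT
```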


--
Fabien.




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-03 Thread Merlin Moncure
On Tue, Jan 3, 2017 at 9:58 AM, Pavel Stehule  wrote:
> 2017-01-03 16:23 GMT+01:00 Merlin Moncure :
>> So -1 to strict mode, unless we can make a case why this can't be done
>> as part of checking/validation.
>
> Can plpgsql.extra_errors and plpgsql.extra_warnings be a solution?
>
> I am thinking there is space for improvement (in extra_* usage)

extra_warnings seems ok at the GUC level.  However it's bad to have a
body of code fail to compile based on GUC.  check_function_bodies for
example is a complete hack and should be avoided if at all possible
IMO.  There is a very good informal rule that a GUC should not impact
behavior (minus some special cases like timeouts).   Good examples of
failure to follow this rule are mysql and php.

Maybe settings at level of extension could be ok, but I'm skeptical.
Good languages are clear without needing extra context.

> Do you know plpgsql_check https://github.com/okbob/plpgsql_check ?

Yes.  This is a good design and should be a model for core work (if any).
In my ideal world, this would be part of pgxn, with a pgxn
client installed in core.   For plpgsql to enter the modern era we need
standardized packaging and deployment like cran, npm, etc.

>> Other random points:
>> *) Another major pain point is swapping in the input variables for
>> debugging purposes.  Something that emits a script based on a set of
>> arguments would be wonderful.
>
> ???

Often for debugging of complicated cases I'm starting from errors in
database log with function name and argument values.  Sometimes I find
myself pasting pl/pgsql function into text editor and replacing input
variables with known values.

>>
>> *) Would also like to have a FINALLY block
>
> What you can do there?

This is syntax sugar so you don't need second begin/end/exception
block or duplicated code.  It separates error handling from cleanup.

BEGIN
  PERFORM dblink_connect(...
  
EXCEPTION WHEN OTHERS THEN
  
FINALLY
  PERFORM dblink_disconnect(...
END;
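
For reference, the duplication a FINALLY clause would remove looks like this today - the cleanup has to appear on both paths of a nested block (dblink calls as in the sketch above, connection name illustrative):

```sql
BEGIN
  PERFORM dblink_connect('conn', ...);
  BEGIN
    -- work
  EXCEPTION WHEN OTHERS THEN
    PERFORM dblink_disconnect('conn');  -- duplicated cleanup
    RAISE;
  END;
  PERFORM dblink_disconnect('conn');    -- and again on the success path
END;
```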

>> *) Some user visible mechanic other than forcing SQL through EXECUTE
>> to be able to control plan caching would be useful.
>
> fully agree.
>
> Have you some ideas?
>
> What about a plpgsql option (function scope) -- WITHOUT-PLAN-CACHE -- any
> non-trivial plans would not be cached, and would be evaluated as
> parametrized queries only.

I have slight preference for syntax marker for each query, similar to
INTO.  Maybe 'UNCACHED'?
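
The existing escape hatch, for comparison: routing the statement through EXECUTE ... USING replans it on each call instead of using a cached generic plan. Table and function names here are made up for illustration:

```sql
CREATE OR REPLACE FUNCTION lookup_b(p_a int) RETURNS int AS $$
DECLARE r int;
BEGIN
  -- dynamic SQL: parsed and planned per call, never switches to a generic plan
  EXECUTE 'SELECT b FROM t WHERE a = $1' INTO r USING p_a;
  RETURN r;
END
$$ LANGUAGE plpgsql;
```

An UNCACHED marker would give the same effect without the string-quoting overhead of dynamic SQL.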

On Tue, Jan 3, 2017 at 10:57 AM, Jim Nasby  wrote:
> Or just fix the issue, provide the backwards compatability GUCs and move on.

I really don't think this will fly.  I'm not buying your argument (at
all) that compatibility breaks have been cleanly done in the
past, at least not in the modern era.  In any event, marginal language
improvements are not a good justification to do it.   And yes, the
continual monkeying around with column names in pg_stat_activity is a
major hassle.  For heaven's sake, can we just add new columns and/or
create a new view?

merlin




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Tom Lane
Justin Pryzby  writes:
> On Tue, Jan 03, 2017 at 02:32:36PM -0500, Tom Lane wrote:
>> 2. Even better would be a stack trace for the call to errfinish,
>> https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

Thanks, but we need the whole call stack, or at least the first dozen or
so levels.  "bt" in gdb would do.

> I'll send the rest of \d if you really want but:

> ts=# SELECT COUNT(1) FROM pg_attribute WHERE 
> attrelid='eric_umts_rnc_utrancell_metrics'::regclass;
> count | 1116

Well, we don't know what we're looking for, so assuming that there's
nothing of interest there is probably bad.

regards, tom lane




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Justin Pryzby
On Tue, Jan 03, 2017 at 02:32:36PM -0500, Tom Lane wrote:
> 3. It's pretty hard to see how you'd reach any of these places for an
> ALTER COLUMN TYPE on a simple table.  Has the table got rules, triggers,
> default values?  Could we see "\d+" output for it?

I really meant to do \d+..

                              Table "public.eric_umts_rnc_utrancell_metrics"
       Column       |           Type           | Modifiers | Storage  | Stats target | Description
--------------------+--------------------------+-----------+----------+--------------+-------------
 sect_id            | integer                  | not null  | plain    | 400          |
 start_time         | timestamp with time zone | not null  | plain    | 400          |
 site_id            | integer                  | not null  | plain    | 400          |
 interval_seconds   | smallint                 | not null  | plain    | 200          |
 utrancell          | text                     | not null  | extended | 200          |
 nedn               | text                     | not null  | extended | 200          |
 rnc_id             | integer                  | not null  | plain    | 400          |
 device_id          | integer                  | not null  | plain    | 200          |
 pmcelldowntimeauto | smallint                 |           | plain    | 10           |
 pmcelldowntimeman  | smallint                 |           | plain    | 10           |
[...]

Justin




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Justin Pryzby
On Tue, Jan 03, 2017 at 02:32:36PM -0500, Tom Lane wrote:
> Justin Pryzby  writes:
>  I can cause the error at will on the existing table,
> 
> That's good news, at least.
> 
> 1. Please trigger it with "\set VERBOSITY verbose" enabled, so we can see
> the exact source location --- there are a couple of instances of that
> text.

ts=# begin; drop view umts_eric_ch_switch_view, eric_umts_rnc_utrancell_view, 
umts_eric_cell_integrity_view; ALTER TABLE eric_umts_rnc_utrancell_metrics 
ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE BIGINT USING 
PMSUMPACKETLATENCY_000::BIGINT; 
BEGIN
DROP VIEW
ERROR:  42804: attribute 424 has wrong type
DETAIL:  Table has type smallint, but query expects integer.
LOCATION:  ExecEvalScalarVar, execQual.c:660

> 2. Even better would be a stack trace for the call to errfinish,
> https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

#1  0x006dd39f in exec_simple_query (query_string=0x1fc5fb0 "begin;")
at postgres.c:932
dest = DestRemote
oldcontext = 0x1f3b100
parsetree_list = 0x1fc69f0
save_log_statement_stats = 0 '\000'
was_logged = 0 '\000'
msec_str = 
"\360:\235\213\377\177\000\000`<\235\213\377\177\000\000\260_\374\001", '\000' 

__func__ = "exec_simple_query"

and then

#1  0x006dd39f in exec_simple_query (
query_string=0x1fc5fb0 "ALTER TABLE eric_umts_rnc_utrancell_metrics ALTER 
COLUMN PMSUMPACKETLATENCY_000 TYPE BIGINT USING 
PMSUMPACKETLATENCY_000::BIGINT;")
at postgres.c:932
dest = DestRemote
oldcontext = 0x1f3b100
parsetree_list = 0x1fc6fc8
save_log_statement_stats = 0 '\000'
was_logged = 0 '\000'
msec_str = 
"\360:\235\213\377\177\000\000`<\235\213\377\177\000\000\260_\374\001", '\000' 

__func__ = "exec_simple_query"

then

#1  0x005d0e30 in ExecEvalScalarVar (exprstate=<value optimized out>,
    econtext=<value optimized out>, isNull=<value optimized out>,
    isDone=<value optimized out>) at execQual.c:655
attnum = 424
__func__ = "ExecEvalScalarVar"

> 3. It's pretty hard to see how you'd reach any of these places for an
> ALTER COLUMN TYPE on a simple table.  Has the table got rules, triggers,
> default values?  Could we see "\d+" output for it?

triggers and defaults, yes.

 sect_id                  | integer                  | not null
 start_time               | timestamp with time zone | not null
 site_id                  | integer                  | not null
 interval_seconds         | smallint                 | not null
 utrancell                | text                     | not null
 nedn                     | text                     | not null
 rnc_id                   | integer                  | not null
 device_id                | integer                  | not null
 pmcelldowntimeauto       | smallint                 |
 pmcelldowntimeman        | smallint                 |
 pmchswitchattemptfachura | smallint                 |
 pmchswitchattempturafach | smallint                 |
...
Triggers:
    eric_umts_rnc_utrancell_insert_trigger BEFORE INSERT ON eric_umts_rnc_utrancell_metrics FOR EACH ROW EXECUTE PROCEDURE eric_umts_rnc_utrancell_insert_function()
Number of child tables: 3 (Use \d+ to list them.)

I'll send the rest of \d if you really want but:

ts=# SELECT COUNT(1) FROM pg_attribute WHERE 
attrelid='eric_umts_rnc_utrancell_metrics'::regclass;
count | 1116

Justin




Re: [HACKERS] Logical Replication WIP

2017-01-03 Thread Peter Eisentraut
In 0001-Add-PUBLICATION-catalogs-and-DDL-v16.patch.gz,

+static bool
+is_publishable_class(Oid relid, Form_pg_class reltuple)
+{
+   return reltuple->relkind == RELKIND_RELATION &&
+   !IsCatalogClass(relid, reltuple) &&
+   reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
+   /* XXX needed to exclude information_schema tables */
+   relid >= FirstNormalObjectId;
+}

I don't think the XXX part is necessary, because IsCatalogClass()
already checks for the same thing.  (The whole thing is a bit bogus
anyway, because you can drop and recreate the information schema at run
time without restriction.)

+#define MAX_RELCACHE_INVAL_MSGS 100
+   List*relids = GetPublicationRelations(HeapTupleGetOid(tup));
+
+   /*
+* We don't want to send too many individual messages, at some point
+* it's cheaper to just reset whole relcache.
+*
+* XXX: the MAX_RELCACHE_INVAL_MSGS was picked arbitrarily, maybe
+* there is better limit.
+*/
+   if (list_length(relids) < MAX_RELCACHE_INVAL_MSGS)

Do we have more data on this?  There are people running with 10
tables, and changing a publication with 1000 tables would blow all
that away?

Maybe at least it should be set relative to INITRELCACHESIZE (400) to
tie things together a bit?

Update the documentation of SharedInvalCatalogMsg in sinval.h for the
"all relations" case.  (Maybe look around the whole file to make sure
comments are still valid.)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Tom Lane
Justin Pryzby  writes:
 I can cause the error at will on the existing table,

That's good news, at least.

1. Please trigger it with "\set VERBOSITY verbose" enabled, so we can see
the exact source location --- there are a couple of instances of that
text.

2. Even better would be a stack trace for the call to errfinish,
https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

3. It's pretty hard to see how you'd reach any of these places for an
ALTER COLUMN TYPE on a simple table.  Has the table got rules, triggers,
default values?  Could we see "\d+" output for it?
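For point 2, the usual recipe from that wiki page, condensed (the pid and prompts are illustrative; errfinish is the function every ereport() call ends up in):

```
$ gdb -p <pid of the backend running the ALTER>
(gdb) break errfinish
(gdb) continue
  -- now run the failing ALTER TABLE in the psql session --
(gdb) bt
(gdb) detach
```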

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] emergency outage requiring database restart

2017-01-03 Thread Peter Eisentraut
On 11/2/16 11:45 AM, Oskari Saarenmaa wrote:
> 26.10.2016, 21:34, Andres Freund kirjoitti:
>> Any chance that plsh or the script it executes does anything with the file 
>> descriptors it inherits? That'd certainly be one way to get into odd 
>> corruption issues.
>>
>> We really should use O_CLOEXEC for the majority of our file handles.
> 
> Attached a patch to always use O_CLOEXEC in BasicOpenFile if we're not 
> using EXEC_BACKEND.  It'd be nice to not expose all fds to most 
> pl-languages either, but I guess there's no easy solution to that 
> without forcibly closing all fds whenever any functions are called.

It seems like everyone was generally in favor of this.  I looked around
the internet for caveats but everyone was basically saying, you should
definitely do this.

Why not for EXEC_BACKEND?

O_CLOEXEC is a newer interface.  There are older systems that don't have
it but have FD_CLOEXEC for fcntl().  We should use that as a fallback.

Have you gone through the code and checked for other ways file
descriptors might get opened?  Here is a blog post that lists some
candidates: http://udrepper.livejournal.com/20407.html

Ideally, we would have a test case that exec's something that lists the
open file descriptors, and we check that there are only those we expect.

The comment "We don't expect execve() calls inside the postgres code" is
not quite correct, as we do things like archive_command and COPY to
program (see OpenPipeStream()).

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generating fmgr prototypes automatically

2017-01-03 Thread Pavel Stehule
Hi

2016-12-31 6:46 GMT+01:00 Peter Eisentraut :

> During some recent patch reviews, there was some back and forth about
> where to put prototypes for fmgr-callable functions.  We don't actually
> need these prototypes except to satisfy the compiler, so we end up
> sprinkling them around random header files.
>
> I figured we could generate these prototypes automatically using the
> existing Gen_fmgrtab.pl script.  That ends up saving more than 2000
> lines of manual code, and it helps detect when a function exists in code
> but is not registered in pg_proc.h.
>
> ... which this actually found in cash.c.  I ended up adding the missing
> entries in pg_proc and pg_operator, but we could also opt to just remove
> the unused code.
>
> There are long-standing symbol clashes from the lo_* functions between
> the backend and libpq.  So far this hasn't bothered anyone, but because
> this patch rearranges some header files, the libpqwalreceiver code gets
> upset.  So I renamed the C symbols for the backends functions in a
> separate patch.  The user-visible function names remain the same.
>

I am sending a review of these patches

1. there was not any problem with patching, compiling, all test passed
2. the docs and regression tests are not necessary(?)

patch 0001 .. trivial cleaning
patch 0002 .. renaming lo_* to be_lo_* -- the prefix "be" is not what I
would expect - maybe "pg" instead, especially since be-fsstubs.h will hold
only lo_read and lo_write in the end
patch 0003 .. trivial, but doesn't it mean we have no regression tests for
these functions?
patch 0004 .. bigger part - I miss comments explaining the exceptions:

extern Datum numeric_float8_no_overflow(PG_FUNCTION_ARGS);
extern Datum nextval(PG_FUNCTION_ARGS);
extern Datum fmgr_sql(PG_FUNCTION_ARGS);
extern Datum aggregate_dummy(PG_FUNCTION_ARGS);

I found some obsolete definitions in c files

pgstatfuncs.c

/* bogus ... these externs should be in a header file */
extern Datum pg_stat_get_numscans(PG_FUNCTION_ARGS);
extern Datum pg_stat_get_tuples_returned(PG_FUNCTION_ARGS);
extern Datum pg_stat_get_tuples_fetched(PG_FUNCTION_ARGS);
extern Datum pg_stat_get_tuples_inserted(PG_FUNCTION_ARGS);
extern Datum pg_stat_get_tuples_updated(PG_FUNCTION_ARGS);

namespace.c
Datum pg_table_is_visible(PG_FUNCTION_ARGS);
Datum pg_type_is_visible(PG_FUNCTION_ARGS);
Datum pg_function_is_visible(PG_FUNCTION_ARGS);
Datum pg_operator_is_visible(PG_FUNCTION_ARGS);
Datum pg_opclass_is_visible(PG_FUNCTION_ARGS);
Datum pg_opfamily_is_visible(PG_FUNCTION_ARGS);
Datum pg_collation_is_visible(PG_FUNCTION_ARGS);
Datum pg_conversion_is_visible(PG_FUNCTION_ARGS);
Datum pg_ts_parser_is_visible(PG_FUNCTION_ARGS);
Datum pg_ts_dict_is_visible(PG_FUNCTION_ARGS);
Datum pg_ts_template_is_visible(PG_FUNCTION_ARGS);
Datum pg_ts_config_is_visible(PG_FUNCTION_ARGS);
Datum pg_my_temp_schema(PG_FUNCTION_ARGS);
Datum pg_is_other_temp_schema(PG_FUNCTION_ARGS);

Regards

Pavel






>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


Re: [HACKERS] emergency outage requiring database restart

2017-01-03 Thread Peter Eisentraut
On 11/7/16 5:31 PM, Merlin Moncure wrote:
> Regardless, it seems like you might be on to something, and I'm
> inclined to patch your change, test it, and roll it out to production.
> If it helps or at least narrows the problem down, we ought to give it
> consideration for inclusion (unless someone else can think of a good
> reason not to do that, heh!).

Any results yet?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Justin Pryzby
On Tue, Jan 03, 2017 at 01:40:50PM -0500, Robert Haas wrote:
> On Tue, Jan 3, 2017 at 11:59 AM, Justin Pryzby  wrote:
> > On Tue, Jan 03, 2017 at 11:45:33AM -0500, Robert Haas wrote:
> >> > ts=# begin; drop view umts_eric_ch_switch_view, 
> >> > eric_umts_rnc_utrancell_view, umts_eric_cell_integrity_view; ALTER TABLE 
> >> > eric_umts_rnc_utrancell_metrics ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE 
> >> > BIGINT USING PMSUMPACKETLATENCY_000::BIGINT;
> >> > BEGIN
> >> > DROP VIEW
> >> > ERROR:  attribute 424 has wrong type
> >> > DETAIL:  Table has type smallint, but query expects integer.
> >> > ts=#
> >> >
> > I can cause the error at will on the existing table, but I wouldn't know
> > how to reproduce the problem on a new table/database.  I'm guessing it has
> > something

> Just for kicks, could you try running pg_catcheck on the affected system?
> 
> https://github.com/EnterpriseDB/pg_catcheck

Neat, I hadn't heard of it before ;)

The version in PGDG has the "amkeytype" issue, so I compiled it myself.

I got this:

[pryzbyj@database pg_catcheck]$ ./pg_catcheck ts
notice: pg_shdepend row has invalid classid "2613": not a system catalog OID
row identity: dbid="16402" classid="2613" objid="1086583699" objsubid="0" 
refclassid="1260" refobjid="16384" deptype="o"
notice: pg_shdepend row has invalid classid "2613": not a system catalog OID
row identity: dbid="16402" classid="2613" objid="1086583701" objsubid="0" 
refclassid="1260" refobjid="16384" deptype="o"
[...]

notice: pg_depend row has invalid objid "1124153791": no matching entry in 
pg_class
row identity: classid="1259" objid="1124153791" objsubid="0" refclassid="1259" 
refobjid="1064197368" refobjsubid="1" deptype="a"

progress: done (294 inconsistencies, 0 warnings, 0 errors)

.. those are the only two problem oids:
[pryzbyj@database pg_catcheck]$ time ./pg_catcheck ts 2>&1 | grep -Evw '2613|1259'
progress: done (264 inconsistencies, 0 warnings, 0 errors)




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Tom Lane
Magnus Hagander  writes:
> On Tue, Jan 3, 2017 at 7:33 PM, Tom Lane  wrote:
>> Somehow the reset is clobbering local configuration on some members?

> I doubt that. I think that was probably never configured, it just didn't
> show up when everything was working.

> I don't know what the buildfarm runs, but perhaps it's trying to inject a
> merge commit there or something, and that's why it's failing on not having
> a name?

AFAICS, for a pre-existing branch it'll just do

# do a checkout in case the work tree has been removed
# this is harmless if it hasn't
my @colog = `git checkout . 2>&1`;
my @pulllog = `git pull 2>&1`;


More reports are coming in now, and it's clear that only some of the
critters are failing.  It sort of looks like the fastest machines
are the unhappy ones, which might mean that the ones that aren't failing
happened to never pull the bad commit, because they were busy rebuilding
other branches while it was there.

> I'm guessing the solution is to reset the 9.2 branch to a point prior to
> the commit and then pull again? Or wouldn't just a rebase work?

Flushing the local mirror would likely be the easiest way out.
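Flushing and re-cloning certainly works; for a mirror that already fetched the bad commits, resetting the local branch to the rewound remote head is usually enough. A self-contained sketch of both the rewind and the recovery (repo and branch names are made up; everything lives in a throwaway temp dir):

```shell
# Demo: upstream rewinds a branch with a forced push, and a clone
# recovers by resetting its local branch to the rewound remote head.
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init -q --bare upstream.git
git clone -q upstream.git mirror 2>/dev/null
cd mirror
git config user.email "buildfarm@example.com"
git config user.name "Buildfarm Member"

echo one > f.txt
git add f.txt
git commit -qm "good commit"
git push -q origin HEAD:stable

echo two >> f.txt
git commit -qam "bad commit"
git push -q origin HEAD:stable

# Upstream resets the branch to before the bad commit (the forced update).
git push -qf origin HEAD~1:stable

# A clone that already fetched the bad commit recovers like this; the
# fetch reports "(forced update)" on the remote-tracking branch.
git fetch -q origin
git checkout -q -B stable origin/stable
echo "commits on stable: $(git rev-list --count HEAD)"   # prints: commits on stable: 1
```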

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: session server side variables

2017-01-03 Thread Pavel Stehule
2017-01-03 18:52 GMT+01:00 Fabien COELHO :

>
> ** PLEASE **
>>>  COULD YOU REMOVE THE PARTS OF EMAILS YOU ARE NOT RESPONDING TO WHEN
>>>  REPLYING IN THE THREAD?
>>> ** THANKS **
>>
>>
> Hmmm. It seems that you can't. You should, really.


I am sorry - the gmail client masks these parts from me. I'll clean it up more


>
>
> If you use patterns that I wrote - the security context will be valid
 always.

>>>
>>> No: This pattern assumes that operations started in the "TRY" zone
>>> cannot fail later on... This assumption is false because of possible
>>> deferred triggers for instance. See attached example:
>>>
>>
>> ok .. it is pretty artificial, but ok.
>>
>
> Good. We seem to agree that some kind of transactional support is needed
> for the use case, which is pretty logical.
>
> In this case the reset to NULL on ROLLBACK should be enough.
>>
>
> Probably.
>
> So I expect that you are going to update your proposal somehow to provide
> some transactional properties.
>
> Note that if you have some mechanism for doing a NULL on rollback, then why
> not just keep and reset the previous value if needed? This just means that
> you have a transactional variable, which is fine from my point of view. As
> I already wrote, session variables are memory only, so being transactional
> does not involve costs such as WAL.
>

There is no cost such as WAL - but on any update you have to check whether it
is the first update in the transaction, and if it is, you have to create a new
memory context and register a new callback to be evaluated on rollback.

Probably there is not a big difference between RESET and UNDO in the
complexity of implementation. You have to do a partial implementation of
MVCC. No simple code.



>
> Also note that user-defined GUCs already implement the transactional
> property, so probably the mechanism is already available and can be reused.


GUCs are stack based - the value doesn't depend on whether the transaction
was successful or not.


> --
> Fabien.
>


Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-03 Thread Alvaro Herrera
Lewis, Ian (Microstar Laboratories) wrote:

> PS. To anyone who might know the answer: My Reply All to this group does
> not seem to join to the original thread. All I am doing is Reply All
> from Outlook. Is there something else I need to do to allow my responses
> to join the original thread?

That's a known deficiency with Outlook, which fails to include the
required headers (References and/or In-Reply-To).  We have a few threads
like that in the archives.  As far as I know there's no workaround; the
only solution is to change to another mail program.  Perhaps there are
config options that can be changed, but I've asked a few people that
have had this problem and nobody has found anything -- but I don't know
how hard they've tried, if at all.
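For reference, thread matching in the archives keys off two reply headers defined in RFC 5322; a reply that carries them lands in the right thread (the Message-IDs below are illustrative):

```
In-Reply-To: <20170103.original@example.com>
References: <20170102.thread-root@example.com> <20170103.original@example.com>
```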

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Robert Haas
On Tue, Jan 3, 2017 at 11:59 AM, Justin Pryzby  wrote:
> On Tue, Jan 03, 2017 at 11:45:33AM -0500, Robert Haas wrote:
>> > ts=# begin; drop view umts_eric_ch_switch_view, 
>> > eric_umts_rnc_utrancell_view, umts_eric_cell_integrity_view; ALTER TABLE 
>> > eric_umts_rnc_utrancell_metrics ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE 
>> > BIGINT USING PMSUMPACKETLATENCY_000::BIGINT;
>> > BEGIN
>> > DROP VIEW
>> > ERROR:  attribute 424 has wrong type
>> > DETAIL:  Table has type smallint, but query expects integer.
>> > ts=#
>> >
>> > ts=# begin; drop view umts_eric_ch_switch_view, 
>> > eric_umts_rnc_utrancell_view, umts_eric_cell_integrity_view; ALTER TABLE 
>> > eric_umts_rnc_utrancell_metrics ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE 
>> > BIGINT ;
>> > BEGIN
>> > DROP VIEW
>> > ALTER TABLE
>> > ts=#
>> >
>> > Is it useful to send something from pg_attribute, or other clues ??
>>
>> So, are these errors reproducible?  Like, if you create a brand new
>
> I can cause the error at will on the existing table, but I wouldn't know how
> to reproduce the problem on a new table/database.  I'm guessing it has something
> to do with dropped columns or historic alters (which I mentioned are typically
> done separately on child tables vs their parent).
>
> Since it's happened 3 times now on this table, but not others on this 
> database,
> I would guess it's an "data issue", possibly related to pg_upgrades.  IOW it
> may be impossible to get into this state from a fresh initdb from a current
> version.
>
> I considered that perhaps it only affected our oldest tables, and would stop
> happening once they were dropped, but note this ALTER is only of a parent and
> its 3 most recent children.  So only the empty parent could be described as
> "old".

Just for kicks, could you try running pg_catcheck on the affected system?

https://github.com/EnterpriseDB/pg_catcheck

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Magnus Hagander
On Tue, Jan 3, 2017 at 7:33 PM, Tom Lane  wrote:

> Magnus Hagander  writes:
> >> Ok. Now let's wait for the fallout from the reset. This is an
> interesting
> >> experiment, we'll find out how many people are annoyed by a reset :-).
>
> > I bet a number of buildfarm machines will dislike it :(
>
> Early returns don't look good, eg on termite
>
> From git://git.postgresql.org/git/postgresql
>7e3ae54..83a25a5  REL9_2_STABLE -> REL9_2_STABLE
>7e3ae54..83a25a5  github/REL9_2_STABLE -> github/REL9_2_STABLE
> From /home/pgbuildfarm/buildroot-termite/pgmirror
>  + 19371e1...83a25a5 REL9_2_STABLE -> origin/REL9_2_STABLE
> (forced update)
>
> *** Please tell me who you are.
>
> Run
>
>   git config --global user.email "y...@example.com"
>   git config --global user.name "Your Name"
>
> to set your account's default identity.
> Omit --global to set the identity only in this repository.
>
> fatal: empty ident   not allowed
>
> Somehow the reset is clobbering local configuration on some members?
>

I doubt that. I think that was probably never configured, it just didn't
show up when everything was working.

I don't know what the buildfarm runs, but perhaps it's trying to inject a
merge commit there or something, and that's why it's failing on not having
a name?

It *should* certainly not be required to have a name in order to pull.



> We should advise buildfarm owners how to fix that.
>
>
I'm guessing the solution is to reset the 9.2 branch to a point prior to
the commit and then pull again? Or wouldn't just a rebase work?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Tom Lane
Magnus Hagander  writes:
>> Ok. Now let's wait for the fallout from the reset. This is an interesting
>> experiment, we'll find out how many people are annoyed by a reset :-).

> I bet a number of buildfarm machines will dislike it :(

Early returns don't look good, eg on termite

From git://git.postgresql.org/git/postgresql
   7e3ae54..83a25a5  REL9_2_STABLE -> REL9_2_STABLE
   7e3ae54..83a25a5  github/REL9_2_STABLE -> github/REL9_2_STABLE
From /home/pgbuildfarm/buildroot-termite/pgmirror
 + 19371e1...83a25a5 REL9_2_STABLE -> origin/REL9_2_STABLE  (forced update)

*** Please tell me who you are.

Run

  git config --global user.email "y...@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: empty ident   not allowed

Somehow the reset is clobbering local configuration on some members?

We should advise buildfarm owners how to fix that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-03 Thread Robert Haas
On Mon, Jan 2, 2017 at 8:03 PM, Lewis, Ian (Microstar Laboratories)
 wrote:
> Personally, I believe such an option would increase, not decrease the
> number of people who could relatively easily use PostgreSQL. If that is
> right it is a strong argument for such a modal behavior in spite of the
> obvious real pain.

That is definitely true.  "X will let more people easily use
PostgreSQL" is an argument with a lot of merit, and I don't see how
anyone could argue otherwise (unless they want fewer people to use
PostgreSQL).  I think the issue isn't so much whether we'd increase or
decrease the number of people who could easily use PostgreSQL; I'm
confident that, as you say, many potential users would find it
convenient.  The issue is, rather, that every extension written for
PostgreSQL, whether in or out of core, needs to handle this issue and
every general-purpose client tool (pgAdmin, etc.) needs to be aware of
it.  Many drivers probably need to know about it.  Any application
built on PostgreSQL must either now require a specific server
configuration or be prepared to work with any of them.  I think this
is the sort of thing where the server changes would take a month and
tools, connectors, and extensions would still be getting bug reports a
decade later.  Now, I am not as convinced as Tom is that this is a
categorically terrible idea.  But I do think that it would be easy to
underestimate the amount of collateral damage that such a change would
cause.  It would be very large.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_sequence catalog

2017-01-03 Thread Andreas Karlsson

On 01/03/2017 03:30 PM, Peter Eisentraut wrote:

On 1/3/17 7:23 AM, Kuntal Ghosh wrote:

The regression tests for hot standby check fails since it uses the
following statement:
-select min_value as sequence_min_value from hsseq;
which is no longer supported I guess. It should be modified as follows:
select min_value as sequence_min_value from pg_sequences where
sequencename = 'hsseq';

Attached is a patch which reflects the above changes.


Fixed, thanks.

I made a note to self to port this test to the TAP framework.


Hm, doesn't this change the intent of the test case? As I read the test 
it seems to make sure that we are allowed to do a read from a sequence 
relation on the slave. If so I think it should be changed to something 
like the below.


select is_called from hsseq;

Andreas


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Tom Lane
Magnus Hagander  writes:
> On Tue, Jan 3, 2017 at 6:59 PM, Heikki Linnakangas  wrote:
>> Ok. Now let's wait for the fallout from the reset. This is an interesting
>> experiment, we'll find out how many people are annoyed by a reset :-).

> Yeah, and how many had time to pull. It was only out there for 15 minutes
> or so.

I hadn't pulled it, so no problem from here.

> I bet a number of buildfarm machines will dislike it :(

We might be saved by the fact that Bruce pushed all the minor back-branch
updates first --- the buildfarm critters were still working through those,
or at least the ones using run_branches.pl were, when you pushed the
reset.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] background sessions

2017-01-03 Thread Andrew Borodin
2017-01-03 19:39 GMT+05:00 Peter Eisentraut :

> On 1/3/17 1:26 AM, amul sul wrote:
> > One more requirement for pg_background is that session, command_qh,
> > response_qh and worker_handle should last longer than the current
> > memory context; for that we might need to allocate these in
> > TopMemoryContext.  Please find attached a patch that does the same
> > change in BackgroundSessionStart().
>
> I had pondered this issue extensively.  The standard coding convention
> in postgres is that the caller sets the memory context.  See the dblink
> and plpython patches that make this happen in their own way.
>
> I agree it would make sense that you either pass in a memory context or
> always use TopMemoryContext.  I'm open to these ideas, but they did not
> seem to match any existing usage.
>
+1
Please excuse me if I'm not getting something obvious, but it seems like the
BackgroundSessionNew() caller in pg_background_launch() can pick up
TopMemoryContext. I think that context setup should be done by the extension,
not by the bg_session API.

Best regards, Andrey Borodin.


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Magnus Hagander
On Tue, Jan 3, 2017 at 6:59 PM, Jim Nasby  wrote:

> On 1/3/17 11:57 AM, Magnus Hagander wrote:
>
>> I've pushed a reset to the master repo. Working on the mirror now.
>>
>
> Please don't forget github. :)
>
Handled, thanks for the reminder.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Bruce Momjian
On Tue, Jan  3, 2017 at 06:57:44PM +0100, Magnus Hagander wrote:
> I'm leaning for +1 for resetting. It'll be a pain for any mirrors of the
> repo, but I think the clean history is worth it.
>
> 
> 
> It seems Bruce pushed a whole bunch of merge conflicts, and possibly more. I
> think his commit scripts are badly broken.
> 
> I've pushed a reset to the master repo. Working on the mirror now.

Yeah, I was doing parallel pulls of different branches in git via shell
script, and it seems the size of this commit showed me that doesn't
work.  Sorry.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Magnus Hagander
On Tue, Jan 3, 2017 at 6:59 PM, Heikki Linnakangas  wrote:

> On 01/03/2017 07:57 PM, Magnus Hagander wrote:
>
>> On Tue, Jan 3, 2017 at 6:54 PM, Heikki Linnakangas 
>> wrote:
>>
>> On 01/03/2017 07:49 PM, Bruce Momjian wrote:
>>>
>>> On Tue, Jan  3, 2017 at 06:46:32PM +0100, Magnus Hagander wrote:

 Is this a big enough boo that we actually want to reset the master repo
> to get
> rid of it?
>
> If so, we need to do it *now* before people get a chance to mirror it
> properly..
>
> Thoughts?
>
> If not, just a revert should work of course..
>
>
 OK, not sure how this happened but I think it has to do with my
 accidentally doing a 'pull' after the changes, and doing multiple
 branches.

 Whatever you suggest is fine --- I will wait.


>>> I'm leaning for +1 for resetting. It'll be a pain for any mirrors of the
>>> repo, but I think the clean history is worth it.
>>>
>>>
>>> It seems bruce pushed a whole bunch of merge conflicts, and possibly
>> more.
>> I think his commit scripts are badly broken.
>>
>> I've pushed a reset to the master repo. Working on the mirror now.
>>
>
> Ok. Now let's wait for the fallout from the reset. This is an interesting
> experiment, we'll find out how many people are annoyed by a reset :-).


Yeah, and how many had time to pull. It was only out there for 15 minutes
or so.

I bet a number of buildfarm machines will dislike it :(

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Heikki Linnakangas

On 01/03/2017 07:57 PM, Magnus Hagander wrote:

On Tue, Jan 3, 2017 at 6:54 PM, Heikki Linnakangas  wrote:


On 01/03/2017 07:49 PM, Bruce Momjian wrote:


On Tue, Jan  3, 2017 at 06:46:32PM +0100, Magnus Hagander wrote:


Is this a big enough boo that we actually want to reset the master repo
to get
rid of it?

If so, we need to do it *now* before people get a chance to mirror it
properly..

Thoughts?

If not, just a revert should work of course..



OK, not sure how this happened but I think it has to do with my
accidentally doing a 'pull' after the changes, and doing multiple
branches.

Whatever you suggest is fine --- I will wait.



I'm leaning for +1 for resetting. It'll be a pain for any mirrors of the
repo, but I think the clean history is worth it.



It seems bruce pushed a whole bunch of merge conflicts, and possibly more.
I think his commit scripts are badly broken.

I've pushed a reset to the master repo. Working on the mirror now.


Ok. Now let's wait for the fallout from the reset. This is an 
interesting experiment, we'll find out how many people are annoyed by a 
reset :-).


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Jim Nasby

On 1/3/17 11:57 AM, Magnus Hagander wrote:

I've pushed a reset to the master repo. Working on the mirror now.


Please don't forget github. :)
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: session server side variables

2017-01-03 Thread Jim Nasby

On 1/3/17 10:33 AM, Fabien COELHO wrote:


** PLEASE **

COULD YOU REMOVE THE PARTS OF EMAILS YOU ARE NOT RESPONDING TO WHEN
REPLYING IN THE THREAD?

** THANKS **


+1. Frankly, I've been skipping most of your (Pavel) replies in this 
thread because of this.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Magnus Hagander
On Tue, Jan 3, 2017 at 6:54 PM, Heikki Linnakangas  wrote:

> On 01/03/2017 07:49 PM, Bruce Momjian wrote:
>
>> On Tue, Jan  3, 2017 at 06:46:32PM +0100, Magnus Hagander wrote:
>>
>>> Is this a big enough boo that we actually want to reset the master repo
>>> to get
>>> rid of it?
>>>
>>> If so, we need to do it *now* before people get a chance to mirror it
>>> properly..
>>>
>>> Thoughts?
>>>
>>> If not, just a revert should work of course..
>>>
>>
>> OK, not sure how this happened but I think it has to do with my
>> accidentally doing a 'pull' after the changes, and doing multiple
>> branches.
>>
>> Whatever you suggest is fine --- I will wait.
>>
>
> I'm leaning for +1 for resetting. It'll be a pain for any mirrors of the
> repo, but I think the clean history is worth it.
>
>
It seems bruce pushed a whole bunch of merge conflicts, and possibly more.
I think his commit scripts are badly broken.

I've pushed a reset to the master repo. Working on the mirror now.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Heikki Linnakangas

On 01/03/2017 07:49 PM, Bruce Momjian wrote:

On Tue, Jan  3, 2017 at 06:46:32PM +0100, Magnus Hagander wrote:

Is this a big enough boo that we actually want to reset the master repo to get
rid of it?

If so, we need to do it *now* before people get a chance to mirror it properly..

Thoughts?

If not, just a revert should work of course..


OK, not sure how this happened but I think it has to do with my
accidentally doing a 'pull' after the changes, and doing multiple
branches.

Whatever you suggest is fine --- I will wait.


I'm leaning for +1 for resetting. It'll be a pain for any mirrors of the 
repo, but I think the clean history is worth it.


- Heikki





Re: [HACKERS] proposal: session server side variables

2017-01-03 Thread Fabien COELHO



** PLEASE **
 COULD YOU REMOVE THE PARTS OF EMAILS YOU ARE NOT RESPONDING TO WHEN
 REPLYING IN THE THREAD?
** THANKS **


Hmmm. It seems that you can't. You should, really.

If you use the patterns that I wrote, the security context will always be
valid.


No: This pattern assumes that operations started in the "TRY" zone 
cannot fail later on... This assumption is false because of possible 
deferred triggers for instance. See attached example:


ok .. it is pretty artificial, but ok.


Good. We seem to agree that some kind of transactional support is needed 
for the use case, which is pretty logical.



In this case the reset to NULL on ROLLBACK should be enough.


Probably.

So I expect that you are going to update your proposal somehow to provide 
some transactional properties.


Note that if you have some mechanism for doing a NULL on rollback, then why 
not just keep and reset the previous value if needed? This just means that 
you have a transactional variable, which is fine from my point of view. As 
I already wrote, session variables are memory-only, so being transactional 
does not involve costs such as WAL.


Also note that user-defined GUCs already implement the transactional 
property, so probably the mechanism is already available and can be reused.


--
Fabien.




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-03 Thread Pavel Stehule
2017-01-03 18:41 GMT+01:00 Jim Nasby :

> On 1/3/17 11:19 AM, Pavel Stehule wrote:
>
>> 2) There's no way to incrementally change those values for a
>> single
>> function. If you've set extra_errors = 'all' globally, a
>> single
>> function can't say "turn off the too many rows setting for
>> this
>> function".
>>
>>
>> We can enhance the GUC syntax like "all -too_many_rows,-xxx"
>>
>>
>> Why create all that framework when we could just have multiple
>> plpgsql.blah GUCs? plpgsql.multirow_assign_level=FATAL solves that
>> problem. We just need a plpgsql GUC for each backwards compatibility
>> break.
>>
>> We have this framework already, so why not use it?
>>
>
> We *don't* have a framework that works for this, because you can't
> incrementally modify extra_errors. Maybe extra_errors is an OK API for
> static checking, but it's definitely a BAD API for something you'd need to
> control at a function (or even statement) level.


I have a different opinion than you - sure - it should not change behavior,
it should help with identification. And that is enough for this purpose.

>
>
> If we never broke compatibility we'd still be allowing SELECT
>> without FROM, NULL = NULL being TRUE, and a whole bunch of other
>> problems. We'd also be stuck on protocol v1 (and of course not
>> talking about what we want in v4).
>>
>>
>> This was in the dark ages - how many users of plpgsql were there in 2000?
>> Hard to speak about Postgres as mature software in that era.
>>
>
> I don't know about you, but I've considered Postgres to be mature since at
> least 8.0, if not earlier. Actually, in many ways it was far more mature
> than other databases I was using in 2000 (let alone 2007).
>
> We've successfully made incompatible changes that were *far worse*
>> than this (ie: renaming pg_stat_activity.procpid). Obviously we
>> shouldn't be breaking things willy-nilly, but these are
>> long-standing warts (dare I say BUGS?) that should be fixed. They're
>> ugly enough that someone took the time to break plpgsql out of the
>> core code and fork it.
>>
>> We are not talking about features that can simply be marked as bugs, so
>> there is not much that we should fix. We should help
>> users identify some possible risky places.
>>
>
> You keep claiming that these aren't serious bugs, yet someone felt so
> strongly that they ARE serious bugs that they forked the entire PL.
>

Sorry, but it is subjective - and there can be different opinions - somebody
would prefer more rigidity, somebody else less rigidity.


>
> If you're not willing to even consider a compatibility break (with a means
> to get the old behavior back) then I don't think there's any point in
> continuing this thread, because some of these issues can NOT be reasonably
> solved by a checker.


Yes, I would not consider a compatibility break. I accept that you
have a different opinion.

I'll send this patch + doc to the next commitfest - and it depends on the
committers whether the patch will be rejected or not. I know it does not fully
fix things, but it is a step forward from my perspective.

Thank you for discussion

Regards

Pavel


>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
> 855-TREBLE2 (855-873-2532)
>


Re: [HACKERS] [COMMITTERS] pgsql: Update copyright for 2017

2017-01-03 Thread Bruce Momjian
On Tue, Jan  3, 2017 at 06:46:32PM +0100, Magnus Hagander wrote:
> Is this a big enough boo-boo that we actually want to reset the master repo to get
> rid of it?
> 
> If so, we need to do it *now* before people get a chance to mirror it 
> properly..
> 
> Thoughts?
> 
> If not, just a revert should work of course..

OK, not sure how this happened but I think it has to do with my
accidentally doing a 'pull' after the changes, and doing multiple
branches.

Whatever you suggest is fine --- I will wait.

---


> 
> //Magnus
> 
> 
> On Tue, Jan 3, 2017 at 6:41 PM, Bruce Momjian  wrote:
> 
> 
> Sorry, this will be reverted and redone.
> 
> 
> ---
> 
> On Tue, Jan  3, 2017 at 05:38:05PM +, Bruce Momjian wrote:
> > Update copyright for 2017
> >
> > Backpatch-through: certain files through 9.2
> >
> > Branch
> > --
> > REL9_2_STABLE
> >
> > Details
> > ---
> > http://git.postgresql.org/pg/commitdiff/19371e148207c33d15fded06a178d5
> 8d0781141d
> >
> > Modified Files
> > --
> > COPYRIGHT                                          |    2 +-
> > configure                                          |   11 +
> > configure.in                                       |    4 +
> > contrib/adminpack/adminpack.c                      |    4 +
> > contrib/auth_delay/auth_delay.c                    |    4 +
> > contrib/auto_explain/auto_explain.c                |    4 +
> > contrib/bloom/blcost.c                             |   48 +
> > contrib/bloom/blinsert.c                           |  358 ++
> > contrib/bloom/bloom.h                              |  212 +
> > contrib/bloom/blscan.c                             |  173 +
> > contrib/bloom/blutils.c                            |  485 ++
> > contrib/bloom/blvacuum.c                           |  217 +
> > contrib/bloom/blvalidate.c                         |  220 +
> > contrib/dblink/dblink.c                            |    4 +
> > contrib/dblink/dblink.h                            |    4 +
> > contrib/dict_int/dict_int.c                        |    4 +
> > contrib/dict_xsyn/dict_xsyn.c                      |    4 +
> > contrib/dummy_seclabel/dummy_seclabel.c            |    4 +
> > contrib/file_fdw/file_fdw.c                        |    4 +
> > contrib/fuzzystrmatch/fuzzystrmatch.c              |    4 +
> > contrib/fuzzystrmatch/levenshtein.c                |    4 +
> > contrib/intarray/_int_selfuncs.c                   |  341 ++
> > contrib/isn/isn.c                                  |    4 +
> > contrib/isn/isn.h                                  |    4 +
> > contrib/pageinspect/brinfuncs.c                    |  409 ++
> > contrib/pageinspect/fsmfuncs.c                     |    4 +
> > contrib/pageinspect/ginfuncs.c                     |  283 ++
> > contrib/pageinspect/heapfuncs.c                    |    4 +
> > contrib/pageinspect/rawpage.c                      |    4 +
> > contrib/passwordcheck/passwordcheck.c              |    4 +
> > contrib/pg_prewarm/pg_prewarm.c                    |  206 +
> > contrib/pg_stat_statements/pg_stat_statements.c    |    4 +
> > contrib/pg_trgm/trgm_regexp.c                      | 2244 +
> > contrib/pg_upgrade/check.c                         |    5 +
> > contrib/pg_upgrade/controldata.c                   |    5 +
> > contrib/pg_upgrade/exec.c                          |    5 +
> > contrib/pg_upgrade/option.c                        |    5 +
> > contrib/pg_upgrade/pg_upgrade.h                    |    5 +
> > contrib/pg_upgrade/server.c                        |    5 +
> > contrib/pg_upgrade/test.sh                         |    4 +
> > contrib/pg_visibility/pg_visibility.c              |  749 +++
> > contrib/pgstattuple/pgstatapprox.c                 |  303 ++
> > contrib/postgres_fdw/connection.c                  |  838 
> > contrib/postgres_fdw/deparse.c                     | 2940 
> > contrib/postgres_fdw/option.c                      |  363 ++
> > contrib/postgres_fdw/postgres_fdw.c                | 5029
> 
> > contrib/postgres_fdw/postgres_fdw.h                |  172 +
> > contrib/postgres_fdw/shippable.c                   |  214 +
> > contrib/sepgsql/database.c                         |    4 +
> > contrib/sepgsql/dml.c                              |    4 +
> > contrib/sepgsql/hooks.c                            |    4 +
> > contrib/sepgsql/label.c                            |    4 +
> > contrib/sepgsql/launcher                           |    4 +
> > contrib/sepgsql/proc.c                             |    4 +
> > contrib/sepgsql/relation.c    

Re: [HACKERS] merging some features from plpgsql2 project

2017-01-03 Thread Jim Nasby

On 1/3/17 11:19 AM, Pavel Stehule wrote:

2) There's no way to incrementally change those values for a
single
function. If you've set extra_errors = 'all' globally, a single
function can't say "turn off the too many rows setting for this
function".


We can enhance the GUC syntax like "all -too_many_rows,-xxx"


Why create all that framework when we could just have multiple
plpgsql.blah GUCs? plpgsql.multirow_assign_level=FATAL solves that
problem. We just need a plpgsql GUC for each backwards compatibility
break.

We have this framework already, so why not use it?


We *don't* have a framework that works for this, because you can't 
incrementally modify extra_errors. Maybe extra_errors is an OK API for 
static checking, but it's definitely a BAD API for something you'd need 
to control at a function (or even statement) level.



If we never broke compatibility we'd still be allowing SELECT
without FROM, NULL = NULL being TRUE, and a whole bunch of other
problems. We'd also be stuck on protocol v1 (and of course not
talking about what we want in v4).


This was in the dark ages - how many users of plpgsql were there in 2000?
Hard to speak about Postgres as mature software in that era.


I don't know about you, but I've considered Postgres to be mature since 
at least 8.0, if not earlier. Actually, in many ways it was far more 
mature than other databases I was using in 2000 (let alone 2007).



We've successfully made incompatible changes that were *far worse*
than this (ie: renaming pg_stat_activity.procpid). Obviously we
shouldn't be breaking things willy-nilly, but these are
long-standing warts (dare I say BUGS?) that should be fixed. They're
ugly enough that someone took the time to break plpgsql out of the
core code and fork it.

We are not talking about features that can simply be marked as bugs, so
there is not much that we should fix. We should help
users identify some possible risky places.


You keep claiming that these aren't serious bugs, yet someone felt so 
strongly that they ARE serious bugs that they forked the entire PL.


If you're not willing to even consider a compatibility break (with a 
means to get the old behavior back) then I don't think there's any point 
in continuing this thread, because some of these issues can NOT be 
reasonably solved by a checker.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] Proposal for changes to recovery.conf API

2017-01-03 Thread Josh Berkus
On 01/03/2017 08:47 AM, Robert Haas wrote:
> On Tue, Jan 3, 2017 at 11:21 AM, Simon Riggs  wrote:
>> On 3 January 2017 at 15:50, Robert Haas  wrote:
>>> On Sun, Jan 1, 2017 at 4:14 PM, Simon Riggs  wrote:
 Trying to fit recovery targets into one parameter was really not
 feasible, though I tried.
>>>
>>> What was the problem?
>>
>> There are 5 different parameters that affect the recovery target, 3 of
>> which are optional. That is much more complex than the two compulsory
>> parameters suggested when the proposal was made and in my view tips
>> the balance over the edge into pointlessness. Michael's suggestion for
>> 2 parameters works well for this case.
> 
> I still think merging recovery_target_type and recovery_target_value
> into a single parameter could make sense, never mind the other three.
> But I don't really want to argue about it any more.
> 

Either solution works for me.

-- 
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-03 Thread Jim Nasby

On 1/3/17 9:58 AM, Pavel Stehule wrote:

> ** The real problem is that we have no mechanism for allowing a PL's
> language/syntax/API to move forward without massive backwards 
compatibility
> problems. **

Just got back from break :-).  Have some thoughts on this.  Backwards
compatibility is really a fundamental problem.  There's really no
solution to it other than to try and avoid using syntax to solve
problems.  It should be obvious to everyone that plpgsql cannot
withstand a compatibility break.  Another language could be offered as


I don't think that's obvious at all. We've introduced incompatibility in 
the main grammar without problem. You just need a way for people to get 
the old behavior back if they need it. Eventually people will stop 
relying on the old, broken behavior.



an alternative in core (say, pl/psm or pl/v8), but pl/pgsql has to
support old code.   Some really out there features could maybe be
redacted (in particular, using = for assignment), but not much.
But I guess we're stuck with the status quo.

I think we ought to avoid language features that influence the
behavior (performance is ok) of the code (and that includes throwing
errors).  That's a freight train headed towards JavaScript's 'strict'
mode, which is thinly disguised language fork.  #option and pragma
type syntaxes are trying to cheat the language -- hardly anyone uses
them and it's a tricky way to try and make the language into something
other than it is.


Yeah, trying to bulk all these changes into one "magic setting" is not a 
way to move forward. I think we're actually really well off in that 
regard, because unlike most languages we have a very robust settings 
system that allows controlling this behavior even at the statement level.
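
The settings machinery being referred to already scopes custom GUCs at every
level. A sketch with the checks plpgsql ships today (shadowed_variables is the
one extra_* check that actually exists; my_func is a placeholder name):

```sql
-- Globally, in postgresql.conf:
--   plpgsql.extra_warnings = 'shadowed_variables'

-- Per session:
SET plpgsql.extra_errors = 'shadowed_variables';

-- Per function, overriding the session/global value:
ALTER FUNCTION my_func() SET plpgsql.extra_errors = 'none';

-- Per transaction, wrapped around a single statement:
SET LOCAL plpgsql.extra_errors = 'all';
```

A hypothetical per-breakage GUC (e.g. something like
plpgsql.multirow_assign_level) would slot into the same machinery.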



C does it right -- dubious code is raised as warnings and it's up to
the end user to determine which warnings are interesting and likely to
be an error.  So, rather than hacking the language to control throwing
and errors and such there should be some ability validate the function
heavily and verify suspicious use of INTO or other dubious things
(unused variables, masked assignments, etc).  The validation output
could even be a set returning function.


While static analysis can do some good (and I think we should actually 
be enabling more of that by default), it won't realistically solve 
everything. Multi-row assignment is a good example: NO ONE is going to 
be OK with tons of warnings for every little := or SELECT INTO (without 
strict), but the reality is that most code actually won't work correctly 
if you have multiple rows coming back, so there's nothing technically 
wrong with `var = field FROM table WHERE table_id = plpgsql_variable` if 
table_id is the PK: you'll always get 0 or 1 rows back.



So -1 to strict mode, unless we can make a case why this can't be done
as part of checking/validation.


Could plpgsql.extra_errors and plpgsql.extra_warnings be a solution?

I am thinking there is space for improvement (in extra_* usage)

Do you know plpgsql_check https://github.com/okbob/plpgsql_check ?


I think we should look at what parts of that we should pull into core 
(as well as enabling more by default). Stuff that can be done at 
compile/load time is certainly better than runtime checks.



Other random points:
*) Another major pain point is swapping in the input variables for
debugging purposes.  Something that emits a script based on a set of
arguments would be wonderful.

???


Yeah, could you elaborate here?


*) Would also like to have a FINALLY block

What can you do there?


It's a block that ALWAYS executes, even if an exception occurs. Python 
has this[1]. That (along with an ELSE clause for if there is no 
exception) would mean you could catch an exception for a single command 
instead of a bunch of commands.


Somewhat related to that, I wish you could make GUC changes that were 
local only to a specific BEGIN block. AFAIK the GUC infrastructure fully 
supports that, it would just need to be exposed in plpgsql.



*) A mechanic to manually print out a stack trace for debugging
purposes would be helpful.


I had a plan to develop an extension for this purpose - easy printing of the
stack, function parameters, and local variables - but I lacked the motivation
to start it. It could be usable for any PL.


I assume you're thinking an SRF that spits out PG_CONTEXT? It'd be 
really nice if you could also get things like function names and line 
numbers broken out separately. I've thought of building this myself.


BTW, the biggest case I can think of using this for is a userspace 
method of doing "private" functions, where the function throws an 
exception unless it was called directly by a set of allowed functions 
(or views).



*) COPY not being able to accept arguments as variables (in particular
the filename) is a major 

Re: [HACKERS] merging some features from plpgsql2 project

2017-01-03 Thread Pavel Stehule
2017-01-03 17:57 GMT+01:00 Jim Nasby :

> On 1/2/17 1:51 PM, Pavel Stehule wrote:
>
>> 1) Neither is enabled by default, so 90% of users have no idea they
>> exist. Obviously that's an easy enough fix, but...
>>
>> We can strongly talk about it - there can be a chapter in plpgsql doc.
>> Now, the patterns and antipatterns are not officially documented.
>>
>
> Or just fix the issue, provide the backwards compatability GUCs and move
> on.


 It is still a compatibility break.


> 2) There's no way to incrementally change those values for a single
>> function. If you've set extra_errors = 'all' globally, a single
>> function can't say "turn off the too many rows setting for this
>> function".
>>
>>
>> We can enhance the GUC syntax like "all -too_many_rows,-xxx"
>>
>
> Why create all that framework when we could just have multiple
> plpgsql.blah GUCs? plpgsql.multirow_assign_level=FATAL solves that
> problem. We just need a plpgsql GUC for each backwards compatibility break.


We have this framework already, so why not use it?


>
>
> BTW, while I can see value in being able to change these settings
>> for an entire function, I think the recommended use should be to
>> only change them for a specific statement.
>>
>>
>> What can you do in a plain assignment statement,
>>
>> target := expression ?
>>
>
> The point I was trying to make there is if you do have some cases where
> you need to silently ignore extra rows (for example) it's probably only one
> statement and not an entire function. That said, if we just make these
> options GUCs then you can just do SET and RESET.
>
> My border is any compatibility break - and I would not cross it.
>> First issue is probably harder
>>
>
> If we never broke compatibility we'd still be allowing SELECT without
> FROM, NULL = NULL being TRUE, and a whole bunch of other problems. We'd
> also be stuck on protocol v1 (and of course not talking about what we want
> in v4).
>

This was in the dark ages - how many users of plpgsql were there in 2000?
Hard to speak about Postgres as mature software in that era.


>
> We've successfully made incompatible changes that were *far worse* than
> this (ie: renaming pg_stat_activity.procpid). Obviously we shouldn't be
> breaking things willy-nilly, but these are long-standing warts (dare I say
> BUGS?) that should be fixed. They're ugly enough that someone took the time
> to break plpgsql out of the core code and fork it.


We are not talking about features that can simply be marked as bugs, so there
is not much that we should fix. We should help users
identify some possible risky places.

Regards

Pavel



>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
> 855-TREBLE2 (855-873-2532)
>


Re: [HACKERS] Broken atomics code on PPC with FreeBSD 10.3

2017-01-03 Thread Tom Lane
Robert Haas  writes:
> On Tue, Jan 3, 2017 at 11:11 AM, Tom Lane  wrote:
 A survey of s_lock.h shows that we prefer char-width slock_t only on
 these architectures:
 
 x86
 sparc (but not sparcv9, there we use int)
 m68k
 vax

>>> I don't think that's right, because on my MacBook Pro:

>> ... which is an x86, which won't be affected by the proposed change.

> Uh, x86 means 32-bit to me, and this MacBook Pro is definitely x86_64.

Sorry for the confusion.  x86 subsumes x86_64 so far as the atomics
code is concerned, and I was following that terminology.

>> Also, since pg_atomic_flag is currently used in a grand total of zero
>> places (other than the test case in regress.c), arguing that making
>> it word-wide will bloat critical data structures is flat wrong.

> Well that just begs the question of whether we should rip it out.  If
> it's unused, then, yes, the performance characteristics don't matter,
> but if it's gonna get used for anything important, I maintain that
> both the speed of the implementation and the number of bytes that it
> consumes will be relevant.

[ shrug... ]  I have not seen you putting any effort into optimizing
slock_t on non-x86 architectures, where it might make a difference today.
Why all the concern about shaving hypothetical future bytes for
pg_atomic_flag?

But to reduce this to the simplest possible terms: it really does not
matter whether the implementation saves bytes or cycles or anything else,
if it fails to *work*.  The existing coding for PPC fails on FreeBSD,
and likely on some other platforms using the same compiler.  I'd be the
first to say that that doesn't reflect well on the state of their PPC
port, but it's reality that we ought to be cognizant of.  And we've
worked around similar bugs nearby, eg see this bit in atomics.h:

 * Given a gcc-compatible xlc compiler, prefer the xlc implementation.  The
 * ppc64le "IBM XL C/C++ for Linux, V13.1.2" implements both interfaces, but
 * __sync_lock_test_and_set() of one-byte types elicits SIGSEGV.

From here it seems like you're arguing that we mustn't give FreeBSD
the identical pass that we already gave IBM, for what is nearly the
same bug.

regards, tom lane
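
For concreteness, the gcc builtin at issue is the classic test-and-set
primitive. A minimal sketch using a word-wide slock_t, which is the proposed
workaround (the failing PPC/FreeBSD case is this same code with a char-width
type):

```c
/* Word-wide lock type: the workaround under discussion. On the broken
 * PPC toolchains, making this "char" is what elicits bad code. */
typedef int slock_t;

/* Atomically store 1 into the lock and return its previous value, with
 * acquire semantics: 0 means we acquired it, non-zero means it was held. */
static int
tas(volatile slock_t *lock)
{
    return __sync_lock_test_and_set(lock, 1);
}

/* Store 0 with release semantics. */
static void
s_unlock(volatile slock_t *lock)
{
    __sync_lock_release(lock);
}
```

Note that `__sync_lock_test_and_set` is only guaranteed to be an
acquire barrier, which is exactly what a spinlock acquire needs.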




Re: [HACKERS] proposal: session server side variables

2017-01-03 Thread Pavel Stehule
2017-01-03 17:33 GMT+01:00 Fabien COELHO :

>
> ** PLEASE **
>
> COULD YOU REMOVE THE PARTS OF EMAILS YOU ARE NOT RESPONDING TO WHEN
> REPLYING IN THE THREAD?
>
> ** THANKS **
>
> [...] Then B believes that A succeeded, which is not the case.
>>>
>>
>> No, just your design is unhappy
>>
>
>
>
> SELECT A(..)
>>  SET SESSION VARIABLE status_ok = false;
>>  -- do all, if fails there,
>>   -- then follow line fails too, or never be executed
>>  SET SESSION VARIABLE status_ok = true;
>>
>
> My point is that there is no commit in this code, the commit is performed
> *AFTER* the last set session, and it may fail then.


>
> or
>>
>>  SET SESSION VARIABLE status_ok = true
>>  TRY
>> do something
>>  CATCH
>>ROLLBACK
>>SET SESSION VARIABLE status_ok = false
>>
>> Both I can do in current PL
>>
>
> The fact that "do something" worked does not preclude the overall
> transaction to work and to revert "do something" and let status_ok as true.
>
> The key issue is that the final status (commit or rollback) of the
>>> containing transaction cannot be known from within the function, so the
>>> session variables cannot reflect this status.
>>>
>>> So somehow the status_ok variable must be (1) rolledback to previous
>>> value
>>> or (2) removed (3) set to FALSE or (4) set to NULL. It cannot be TRUE if
>>> A
>>> containing transactions has failed for the security as I understand it.
>>>
>>> you don't need do rollback variable if you write well A
>>
>
> My point is that A may still fail *after* setting the variable, because it
> is in a transaction.
>
> If you use patterns that I wrote - the security context will be valid
>> always.
>>
>
> No: This pattern assumes that operations started in the "TRY" zone cannot
> fail later on... This assumption is false because of possible deferred
> triggers for instance. See attached example:
>

ok .. it is pretty artificial, but ok. In this case the reset to NULL on
ROLLBACK should be enough.


>
>  NOTICE:  SET secured = FALSE
>  NOTICE:  SET secured = TRUE
>  ERROR:  insert or update on table "log" violates foreign key constraint
> "log_sid_fkey"
>  DETAIL:  Key (sid)=(3) is not present in table "stuff".
>
> The error occurs after secured has been set to TRUE.


It is possible only if you use deferred constraints. It is hard to
imagine this scenario in functions like A. Probably you would not risk
rolling back log information, so you would use elog there or some form of
autonomous transaction.





>
> --
> Fabien.


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-03 Thread Jim Nasby

On 1/2/17 1:51 PM, Pavel Stehule wrote:

1) Neither is enabled by default, so 90% of users have no idea they
exist. Obviously that's an easy enough fix, but...

We can strongly talk about it - there can be a chapter in plpgsql doc.
Now, the patterns and antipatterns are not officially documented.


Or just fix the issue, provide the backwards compatability GUCs and move on.


2) There's no way to incrementally change those values for a single
function. If you've set extra_errors = 'all' globally, a single
function can't say "turn off the too many rows setting for this
function".


We can enhance the GUC syntax like "all -too_many_rows,-xxx"


Why create all that framework when we could just have multiple 
plpgsql.blah GUCs? plpgsql.multirow_assign_level=FATAL solves that 
problem. We just need a plpgsql GUC for each backwards compatibility break.



BTW, while I can see value in being able to change these settings
for an entire function, I think the recommended use should be to
only change them for a specific statement.


What can you do in a plain assignment statement,

target := expression ?


The point I was trying to make there is if you do have some cases where 
you need to silently ignore extra rows (for example) it's probably only 
one statement and not an entire function. That said, if we just make 
these options GUCs then you can just do SET and RESET.



My border is any compatibility break - and I would not cross it.
The first issue is probably harder


If we never broke compatibility we'd still be allowing SELECT without 
FROM, NULL = NULL being TRUE, and a whole bunch of other problems. We'd 
also be stuck on protocol v1 (and of course not talking about what we 
want in v4).


We've successfully made incompatible changes that were *far worse* than 
this (ie: renaming pg_stat_activity.procpid). Obviously we shouldn't be 
breaking things willy-nilly, but these are long-standing warts (dare I 
say BUGS?) that should be fixed. They're ugly enough that someone took 
the time to break plpgsql out of the core code and fork it.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Justin Pryzby
On Tue, Jan 03, 2017 at 11:45:33AM -0500, Robert Haas wrote:
> > ts=# begin; drop view umts_eric_ch_switch_view, 
> > eric_umts_rnc_utrancell_view, umts_eric_cell_integrity_view; ALTER TABLE 
> > eric_umts_rnc_utrancell_metrics ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE 
> > BIGINT USING PMSUMPACKETLATENCY_000::BIGINT;
> > BEGIN
> > DROP VIEW
> > ERROR:  attribute 424 has wrong type
> > DETAIL:  Table has type smallint, but query expects integer.
> > ts=#
> >
> > ts=# begin; drop view umts_eric_ch_switch_view, 
> > eric_umts_rnc_utrancell_view, umts_eric_cell_integrity_view; ALTER TABLE 
> > eric_umts_rnc_utrancell_metrics ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE 
> > BIGINT ;
> > BEGIN
> > DROP VIEW
> > ALTER TABLE
> > ts=#
> >
> > Is it useful to send something from pg_attribute, or other clues ??
> 
> So, are these errors reproducible?  Like, if you create a brand new

I can cause the error at will on the existing table, but I wouldn't know how to
reproduce the problem on a new table/database.  I'm guessing it has something
to do with dropped columns or historic alters (which I mentioned are typically
done separately on child tables vs their parent).

Since it's happened 3 times now on this table, but not others on this database,
I would guess it's an "data issue", possibly related to pg_upgrades.  IOW it
may be impossible to get into this state from a fresh initdb from a current
version.

I considered that perhaps it only affected our oldest tables, and would stop
happening once they were dropped, but note this ALTER is only of a parent and
its 3 most recent children.  So only the empty parent could be described as
"old".

Justin




Re: [HACKERS] Proposal for changes to recovery.conf API

2017-01-03 Thread Simon Riggs
On 3 January 2017 at 16:47, Robert Haas  wrote:
> On Tue, Jan 3, 2017 at 11:21 AM, Simon Riggs  wrote:
>> On 3 January 2017 at 15:50, Robert Haas  wrote:
>>> On Sun, Jan 1, 2017 at 4:14 PM, Simon Riggs  wrote:
 Trying to fit recovery targets into one parameter was really not
 feasible, though I tried.
>>>
>>> What was the problem?
>>
>> There are 5 different parameters that affect the recovery target, 3 of
>> which are optional. That is much more complex than the two compulsory
>> parameters suggested when the proposal was made and in my view tips
>> the balance over the edge into pointlessness. Michael's suggestion for
>> 2 parameters works well for this case.
>
> I still think merging recovery_target_type and recovery_target_value
> into a single parameter could make sense, never mind the other three.
> But I don't really want to argue about it any more.

We all agree that reducing the number of parameters made sense and would
remove weird quirks of multiple settings. When we were discussing the
idea of replacing them with just 1 parameter, that idea made sense to
me after I'd thought about it, but it would actually be 4 parameters.
I couldn't see a way to wave the corner case parameters away.

After more detailed thought I don't see the point of reducing the
number of parameters from 5 to 4, especially if that reduction comes
with increased complexity for one of the parameters. Better to just
have 5 as Michael suggested.

This patch replaces the previous 8 parameters with just 5.
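As a rough sketch of what the resulting configuration might look like under this proposal (the exact parameter names below are my reading of the thread, not necessarily what the patch implements):

```
# One recovery_target_* parameter selects the target; set at most one.
recovery_target_time = '2017-01-03 12:00:00'
# recovery_target_xid  = '1234567'
# recovery_target_name = 'before_upgrade'
# recovery_target_lsn  = '0/7000060'

# Optional modifiers -- the "corner case" parameters mentioned above:
# recovery_target_inclusive = on
# recovery_target_timeline  = 'latest'
```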

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Proposal for changes to recovery.conf API

2017-01-03 Thread Robert Haas
On Tue, Jan 3, 2017 at 11:21 AM, Simon Riggs  wrote:
> On 3 January 2017 at 15:50, Robert Haas  wrote:
>> On Sun, Jan 1, 2017 at 4:14 PM, Simon Riggs  wrote:
>>> Trying to fit recovery targets into one parameter was really not
>>> feasible, though I tried.
>>
>> What was the problem?
>
> There are 5 different parameters that affect the recovery target, 3 of
> which are optional. That is much more complex than the two compulsory
> parameters suggested when the proposal was made and in my view tips
> the balance over the edge into pointlessness. Michael's suggestion for
> 2 parameters works well for this case.

I still think merging recovery_target_type and recovery_target_value
into a single parameter could make sense, never mind the other three.
But I don't really want to argue about it any more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] ALTER TABLE .. ALTER COLUMN .. ERROR: attribute .. has wrong type

2017-01-03 Thread Robert Haas
On Mon, Jan 2, 2017 at 7:32 PM, Justin Pryzby  wrote:
> On Wed, Oct 12, 2016 at 10:25:05AM -0500, Justin Pryzby wrote:
>> > I don't have a clear recollection how I solved this in July; possibly by
>> > restoring the (historic, partition) table from backup.
>> >
>> > Last week again again just now (both under 9.6), a colleague found that he 
>> > was
>> > able to avoid the error by ALTER TYPE without USING.
>> >
>> > Note that we ALTER TABLE .. NO INHERIT the partitions for all but the most
>> > recent 2 months before ALTERing them (or the parent).  The "ALTER NO 
>> > INHERIT"
>> > and the ALTER TYPE of historic partitions are done outside of a 
>> > transaction in
>> > order to avoid large additional disk use otherwise used when ALTERing a 
>> > parent
>> > with many or large children (the sum of the size of the children).
>
> Here's DETAILs for a 2nd such error which has shown up today:
>
> (EricssonUtranXmlParser): Failed to alter table 
> eric_umts_rnc_utrancell_metrics: ERROR:  attribute 424 has wrong type
> DETAIL:  Table has type smallint, but query expects integer.
>
> (EricssonUtranXmlParser): Failed to alter table 
> eric_umts_rnc_utrancell_metrics: ERROR:  attribute 361 has wrong type
> DETAIL:  Table has type integer, but query expects smallint.
>
> Also, note both alters really do work without "USING":
>
> ts=# begin; drop view umts_eric_ch_switch_view, eric_umts_rnc_utrancell_view, 
> umts_eric_cell_integrity_view; ALTER TABLE eric_umts_rnc_utrancell_metrics 
> ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE BIGINT USING 
> PMSUMPACKETLATENCY_000::BIGINT;
> BEGIN
> DROP VIEW
> ERROR:  attribute 424 has wrong type
> DETAIL:  Table has type smallint, but query expects integer.
> ts=#
>
> ts=# begin; drop view umts_eric_ch_switch_view, eric_umts_rnc_utrancell_view, 
> umts_eric_cell_integrity_view; ALTER TABLE eric_umts_rnc_utrancell_metrics 
> ALTER COLUMN PMSUMPACKETLATENCY_000 TYPE BIGINT ;
> BEGIN
> DROP VIEW
> ALTER TABLE
> ts=#
>
> Is it useful to send something from pg_attribute, or other clues ??

So, are these errors reproducible?  Like, if you create a brand new
cluster with initdb and a brand new database with createdb, then
recreate the tables and views and run the same ALTER, does
the error reliably happen?  Or is this problem unique to your existing
database but it doesn't happen on a new one?  If it doesn't reproduce
on a new database, does it reproduce consistently on the existing
database or is that also intermittent?
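A minimal self-contained check along those lines might look like this, run in a freshly created database. The object names are simplified placeholders, not the real schema:

```sql
-- Recreate a minimal version of the table and a dependent view,
-- then try the failing sequence: drop the view, alter the column.
CREATE TABLE metrics (val smallint);
CREATE VIEW metrics_view AS SELECT val FROM metrics;

BEGIN;
DROP VIEW metrics_view;
ALTER TABLE metrics ALTER COLUMN val TYPE bigint;  -- does this still fail?
ROLLBACK;
```

If this succeeds on a fresh database but the real table keeps failing, that would point at accumulated catalog state (dropped columns, old inheritance links) rather than a general bug in ALTER TABLE.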

If nothing else, I'd say the error message is very poor.  But there
might be an actual bug here, too.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



