Re: [HACKERS] postgres_fdw join pushdown (was Re: Custom/Foreign-Join-APIs)

2016-01-18 Thread Etsuro Fujita

On 2016/01/18 19:46, Ashutosh Bapat wrote:

PFA patches for postgres_fdw join pushdown, taking care of all TODOs in
my last mail.

Here is the list of things that have been improved or newly added as
compared to Hanada-san's previous patch at [1].


Great!  Thank you for working on that!  I'll review the patch.


I will be working next on (in that order)
1. eval_plan_qual fix for foreign join. (Considered as a must-have for 9.6)
2. Pushing down ORDER BY clause along with join pushdown
3. Parameterization of foreign join paths (Given the complexity of the
feature this may not make it into 9.6)


As discussed internally, I think #3 might have some impact on the 
overall design of the EvalPlanQual fix, especially the design of a 
helper function that creates a local join execution path for a foreign 
join path for EvalPlanQual.  So, IMO, the order should be #1, #3, and 
#2 (or #3, #1, #2).


Best regards,
Etsuro Fujita




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] silent data loss with ext4 / all current versions

2016-01-18 Thread Tomas Vondra



On 01/19/2016 08:03 AM, Michael Paquier wrote:

On Tue, Jan 19, 2016 at 3:58 PM, Tomas Vondra
 wrote:



...

Tomas, I am planning to have a look at that, because it seems to be
important. In case it becomes lost on my radar, do you mind if I add
it to the 2016-03 CF?



Well, what else can I do? I have to admit I'm quite surprised this is still
rotting here, considering it addresses a rather serious data loss /
corruption issue on a pretty common setup.


Well, I think you did what you could. And we need to make sure now that
it gets in and that this patch gets a serious look. So for now my
guess is that not losing track of it would be a good first move. I
have added it here to attract more attention:
https://commitfest.postgresql.org/9/484/


Ah, thanks. I hadn't realized it wasn't added to the 2016-01 CF (I'd 
swear I added it in the CF app).



--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 1:28 PM, Amit Kapila  wrote:
> On Mon, Jan 18, 2016 at 7:08 PM, Andres Freund  wrote:
>> I find this patch rather unsatisfactory. Yes, it kinda solves the
>> problem of archive timeout, but it leaves the bigger and longer standing
>> problems of unnecessary checkpoints with wal_level=hs in place. It's
>> also somewhat fragile in my opinion.

Check.

>> I think we rather want a per backend (or perhaps per wal insertion lock)
>> flag that says 'last relevant record inserted at', and a way to not set
>> that during insertion. Then during a checkpoint or the relevant bgwriter
>> code, we look at whether anything relevant happened in any backend since the
>> last time we performed a checkpoint/logged a running xacts snapshot.

And in this case, the last relevant record would be caused by a forced
segment switch or a checkpoint record, right? Doing that per WAL
insertion lock seems more scalable to me. I haven't looked at the code
yet though to see how that would work out.
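The per-insertion-lock bookkeeping Andres sketches could look roughly like this (a toy model; the names, structure, and lock count are assumptions, not PostgreSQL code):

```c
#include <assert.h>

#define NUM_XLOGINSERT_LOCKS 8

typedef unsigned long long XLogRecPtr;

/* One slot per WAL insertion lock, to avoid a single contended flag. */
static XLogRecPtr last_important[NUM_XLOGINSERT_LOCKS];

/* Called by a backend holding insertion lock "mylock" after inserting a
 * record that should count as "relevant" activity.  A record that should
 * not count (e.g. a standby snapshot itself) simply skips this call. */
static void
note_important_insert(int mylock, XLogRecPtr ptr)
{
    last_important[mylock] = ptr;   /* protected by that insertion lock */
}

/* Called by the checkpointer/bgwriter: did anything relevant happen in
 * any backend since the position we last acted on? */
static int
activity_since(XLogRecPtr last_seen)
{
    for (int i = 0; i < NUM_XLOGINSERT_LOCKS; i++)
        if (last_important[i] > last_seen)
            return 1;
    return 0;
}
```

The point of the per-lock layout is that writers only touch the slot for the lock they already hold, so the check adds no new contention.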

> Sounds to be a more robust way of dealing with this problem.  Michael,
> if you don't disagree with above proposal, then we can mark this patch
> as Waiting on Author?

Yeah let's do so. I'll think more about this thing.
-- 
Michael




Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-18 Thread Michael Paquier
On Mon, Jan 18, 2016 at 10:19 PM, Amit Kapila  wrote:
> On Mon, Jan 18, 2016 at 10:54 AM, Michael Paquier
>  wrote:
>>
>> On Sun, Jan 17, 2016 at 1:37 PM, Amit Kapila 
>> wrote:
>> > On Sat, Jan 16, 2016 at 6:37 PM, Michael Paquier
>> > 
>> > wrote:
>> >>
>> >>
>> >>
>> >
>> > So here if I understand correctly the check related to the last segment
>> > forcibly switched is based on the fact that if it is forcibly switched,
>> > then
>> > we don't need to log this record? What is the reason of such an
>> > assumption?
>>
>> Yes, the thing is that, as mentioned at the beginning of the thread, a
>> low value of archive_timeout causes a segment to be forcibly switched
>> at the end of the timeout defined by this parameter. In combination
>> with the standby snapshot taken in bgwriter since 9.4, this is causing
>> segments to be always switched even if a system is completely idle.
>> Before that, in 9.3 and older versions, we would have a segment
>> forcibly switched on an idle system only once per checkpoint.
>>
>
> Okay, so this will fix the case where the system is idle, but what I
> am slightly worried about is that it should not fail to log the standby
> snapshot due to this check when there is actually some activity in the
> system. Why is it not possible to have a case where the segment is
> forcibly switched for a reason other than checkpoint (the user has
> requested the same) and the current insert LSN is at the beginning of a
> new segment due to write activity? If that is possible, which seems
> theoretically possible to me, then I think bgwriter will fail to log
> the standby snapshot.

Yes, with segments forcibly switched by users this check would make
the bgwriter not log a snapshot if the last action done by the server
was XLOG_SWITCH. Based on the patch I sent, the only alternative would
be to not update forcedSegSwitchLSN in those code paths. Perhaps that's
fine enough for back-branches...

>> The
>> documentation is already clear about that actually. The whole point of
>> this patch is to fix this regression, aka switch to a new segment only
>> once per checkpoint.
>>
>> > It is not very clear by reading the comments you have
>> > added in patch, may be you can expand comments slightly to explain
>> > the reason of same.
>>
>> OK. Here are the comments that this patch is adding:
>> - * only log if enough time has passed and some xlog record has
>> - * been inserted.
>> + * Only log if enough time has passed and some xlog record has
>> + * been inserted on a new segment. On an idle system where
>> + * segments can be archived at a fast pace with for example a
>> + * low archive_timeout setting, avoid as well logging a new
>> + * standby snapshot if the current insert position is still
>> + * at the beginning of the segment that has just been switched.
>> + *
>> + * It is possible that GetXLogLastSwitchPtr points to the last
>> + * position of the previous segment or to the first position of
>> + * the new segment after the switch, hence take both cases into
>> + * account when deciding if a standby snapshot should be taken.
>> + * (see comments on top of RequestXLogSwitch for more details).
>>   */
>> It makes mention of the problem that it is trying to fix, perhaps you
>> mean that this should mention the fact that on an idle system standby
>> snapshots should happen at worst once per checkpoint?
>>
>
> I mean to say that in the below part of the comment, the explanation
> about the check related to the insert position is quite clear, whereas
> why it is okay to avoid logging a standby snapshot when the segment is
> not forcibly switched is not apparent.
>
> * avoid as well logging a new
> * standby snapshot if the current insert position is still
> * at the beginning of the segment that has just been switched.

Perhaps: "avoid as well logging a new standby snapshot if the current
insert position is at the beginning of a segment that has been
*forcibly* switched at checkpoint or by archive_timeout"?
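The check being debated can be modeled as follows (a toy sketch under assumed semantics, not the patch's actual code; positions are plain byte offsets and `seg_size` is the WAL segment size):

```c
#include <assert.h>

typedef unsigned long long XLogRecPtr;

/* Should the bgwriter log a standby snapshot?  Skip it when nothing has
 * been inserted since the last forced segment switch.  As discussed in
 * the thread, the recorded switch pointer may be either the end of the
 * old segment or the start of the new one, so both cases count as
 * "no new activity". */
static int
standby_snapshot_needed(XLogRecPtr insert_ptr, XLogRecPtr last_switch_ptr,
                        XLogRecPtr seg_size)
{
    /* Start of the segment following the one containing the switch. */
    XLogRecPtr  next_seg_start = ((last_switch_ptr / seg_size) + 1) * seg_size;

    if (insert_ptr == last_switch_ptr || insert_ptr == next_seg_start)
        return 0;               /* idle since the switch: skip snapshot */
    return 1;                   /* real activity: log the snapshot */
}
```

This also makes Amit's concern visible: a user-requested switch that leaves the insert pointer exactly at a segment start would be treated as idle by this check.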
-- 
Michael




Re: [HACKERS] silent data loss with ext4 / all current versions

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 3:58 PM, Tomas Vondra
 wrote:
>
>
> On 01/19/2016 07:44 AM, Michael Paquier wrote:
>>
>> On Wed, Dec 2, 2015 at 3:24 PM, Michael Paquier
>>  wrote:
>>>
>>> On Wed, Dec 2, 2015 at 3:23 PM, Michael Paquier
>>>  wrote:

 On Wed, Dec 2, 2015 at 7:05 AM, Tomas Vondra
  wrote:
>
> Attached is v2 of the patch, that
>
> (a) adds explicit fsync on the parent directory after all the rename()
>  calls in timeline.c, xlog.c, xlogarchive.c and pgarch.c
>
> (b) adds START/END_CRIT_SECTION around the new fsync_fname calls
>  (except for those in timeline.c, as the START/END_CRIT_SECTION is
>  not available there)
>
> The patch is fairly trivial and I've done some rudimentary testing, but
> I'm
> sure I haven't exercised all the modified paths.


 I would like to have an in-depth look at that after finishing the
 current CF, I am the manager of this one after all... Could you
 register it to 2016-01 CF for the time being? I don't mind being
 beaten by someone else if this someone has some room to look at this
 patch..
>>>
>>>
>>> And please feel free to add my name as reviewer.
>>
>>
>> Tomas, I am planning to have a look at that, because it seems to be
>> important. In case it becomes lost on my radar, do you mind if I add
>> it to the 2016-03 CF?
>
>
> Well, what else can I do? I have to admit I'm quite surprised this is still
> rotting here, considering it addresses a rather serious data loss /
> corruption issue on a pretty common setup.

Well, I think you did what you could. And we need to make sure now that
it gets in and that this patch gets a serious look. So for now my
guess is that not losing track of it would be a good first move. I
have added it here to attract more attention:
https://commitfest.postgresql.org/9/484/
-- 
Michael




Re: Odd behavior in foreign table modification (Was: Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW)

2016-01-18 Thread Etsuro Fujita

On 2016/01/08 14:08, Etsuro Fujita wrote:

On 2016/01/07 21:50, Etsuro Fujita wrote:

On 2016/01/06 20:37, Thom Brown wrote:



I've run into an issue:

*# UPDATE master_customers SET id = 22 WHERE id = 16 RETURNING
tableoid::regclass;
ERROR:
CONTEXT:  Remote SQL command: UPDATE public.customers SET id = 22
WHERE ((id = 16)) RETURNING NULL



While working on this, I noticed that the existing postgres_fdw system
shows similar behavior, so I changed the subject.

IIUC, the reason is that when the local query specifies "RETURNING
tableoid::regclass", the FDW has fmstate->has_returning=false while the
remote query executed at ModifyTable has "RETURNING NULL", as shown in
the above example. That causes an abnormal exit when executing the
remote query in postgresExecForeignUpdate, since the FDW gets
PGRES_TUPLES_OK as the result of the query while it expects
PGRES_COMMAND_OK, based on fmstate->has_returning=false.

Attached is a patch to fix that.


I added this to the next CF.

https://commitfest.postgresql.org/9/483/

Best regards,
Etsuro Fujita






Re: [HACKERS] silent data loss with ext4 / all current versions

2016-01-18 Thread Tomas Vondra



On 01/19/2016 07:44 AM, Michael Paquier wrote:

On Wed, Dec 2, 2015 at 3:24 PM, Michael Paquier
 wrote:

On Wed, Dec 2, 2015 at 3:23 PM, Michael Paquier
 wrote:

On Wed, Dec 2, 2015 at 7:05 AM, Tomas Vondra
 wrote:

Attached is v2 of the patch, that

(a) adds explicit fsync on the parent directory after all the rename()
 calls in timeline.c, xlog.c, xlogarchive.c and pgarch.c

(b) adds START/END_CRIT_SECTION around the new fsync_fname calls
 (except for those in timeline.c, as the START/END_CRIT_SECTION is
 not available there)

The patch is fairly trivial and I've done some rudimentary testing, but I'm
sure I haven't exercised all the modified paths.


I would like to have an in-depth look at that after finishing the
current CF, I am the manager of this one after all... Could you
register it to 2016-01 CF for the time being? I don't mind being
beaten by someone else if this someone has some room to look at this
patch..


And please feel free to add my name as reviewer.


Tomas, I am planning to have a look at that, because it seems to be
important. In case it becomes lost on my radar, do you mind if I add
it to the 2016-03 CF?


Well, what else can I do? I have to admit I'm quite surprised this is 
still rotting here, considering it addresses a rather serious data loss 
/ corruption issue on a pretty common setup.
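For reference, the rename-plus-parent-fsync pattern the patch adds around the `rename()` calls can be sketched in plain POSIX C like this (a hypothetical helper, not the patch's actual code):

```c
#include <assert.h>
#include <fcntl.h>
#include <libgen.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Rename oldpath to newpath, then fsync the parent directory so the
 * rename itself is durable.  Without the directory fsync, a filesystem
 * such as ext4 may persist the file contents but lose the directory
 * entry after a crash -- the silent data loss discussed in the thread. */
static int
durable_rename(const char *oldpath, const char *newpath)
{
    char    parent[1024];
    int     fd;

    if (rename(oldpath, newpath) < 0)
        return -1;

    /* dirname() may modify its argument, so work on a copy. */
    strncpy(parent, newpath, sizeof(parent) - 1);
    parent[sizeof(parent) - 1] = '\0';

    fd = open(dirname(parent), O_RDONLY);
    if (fd < 0)
        return -1;
    if (fsync(fd) < 0)
    {
        close(fd);
        return -1;
    }
    return close(fd);
}
```

In the actual patch the fsync_fname calls are additionally wrapped in START/END_CRIT_SECTION where that machinery is available.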


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] COPY (... tab completion

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 10:12 AM, Andreas Karlsson  wrote:
> On 01/19/2016 01:57 AM, Andreas Karlsson wrote:
>>
>> Thanks for the review. A new version is attached.
>
>
> Whops, attached the wrong file.

+/* If we have COPY BINARY, compelete with list of tables */
s/compelete/complete

+else if (TailMatches2("COPY|\\copy", "("))
+COMPLETE_WITH_LIST7("SELECT", "TABLE", "VALUES", "INSERT",
"UPDATE", "DELETE", "WITH");
This one should be Matches, no?
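The distinction behind that last comment is that Matches anchors the pattern at the start of the whole command line, while TailMatches only inspects the trailing words, so a TailMatches rule for `COPY (` could fire in the middle of an unrelated statement. A toy model (not psql's actual tab-complete code; real matching is also case-insensitive and pattern-based):

```c
#include <assert.h>
#include <string.h>

/* TailMatches2 analogue: look only at the last two words typed so far. */
static int
tail_matches2(const char *prev2, const char *prev1,
              const char *w2, const char *w1)
{
    return strcmp(prev2, w2) == 0 && strcmp(prev1, w1) == 0;
}

/* Matches2 analogue: additionally require that these are the ONLY two
 * words on the line, i.e. the pattern is anchored at the start. */
static int
matches2(int nwords, const char *prev2, const char *prev1,
         const char *w2, const char *w1)
{
    return nwords == 2 && tail_matches2(prev2, prev1, w2, w1);
}
```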
-- 
Michael




Re: [HACKERS] jsonb array-style subscription

2016-01-18 Thread Michael Paquier
On Mon, Jan 18, 2016 at 9:32 PM, Dmitry Dolgov <9erthali...@gmail.com> wrote:
> I've cleaned up the code, created a separate JsonbRef node (and there are a
> lot of small changes because of that), abandoned an idea of "deep nesting"
> of assignments (because it doesn't relate to jsonb subscription, and is
> more about the "jsonb_set" function; anyway it's not a good idea). It
> looks fine to me, and I need a little guidance - is it ok to propose
> this feature for commitfest 2016-03 for a review?

That's what the CF is for. If you are confident enough, I think that
you should register it; otherwise this patch will likely fall into
oblivion.
-- 
Michael




Re: [HACKERS] Minor code improvements to create_foreignscan_plan/ExecInitForeignScan

2016-01-18 Thread Etsuro Fujita

On 2016/01/15 19:00, Etsuro Fujita wrote:

On 2016/01/12 18:00, Etsuro Fujita wrote:

On 2016/01/12 2:36, Alvaro Herrera wrote:

I wonder,



--- 2166,2213 ----
      }
  
      /*
!      * If rel is a base relation, detect whether any system columns are
!      * requested from the rel.  (If rel is a join relation, rel->relid will
!      * be 0, but there can be no Var in the target list with relid 0, so we
!      * skip this in that case.  Note that any such system columns are assumed
!      * to be contained in fdw_scan_tlist, so we never need fsSystemCol to be
!      * true in the joinrel case.)  This is a bit of a kluge and might go away
!      * someday, so we intentionally leave it out of the API presented to FDWs.
      */
!     scan_plan->fsSystemCol = false;
!     if (scan_relid > 0)
      {
!         Bitmapset  *attrs_used = NULL;
!         ListCell   *lc;
!         int         i;
  
!         /*
!          * First, examine all the attributes needed for joins or final output.
!          * Note: we must look at reltargetlist, not the attr_needed data,
!          * because attr_needed isn't computed for inheritance child rels.
!          */
!         pull_varattnos((Node *) rel->reltargetlist, scan_relid, &attrs_used);
  
!         /* Add all the attributes used by restriction clauses. */
!         foreach(lc, rel->baserestrictinfo)
          {
!             RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);
!
!             pull_varattnos((Node *) rinfo->clause, scan_relid, &attrs_used);
          }
  
!         /* Now, are any system columns requested from rel? */
!         for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
!         {
!             if (bms_is_member(i - FirstLowInvalidHeapAttributeNumber,
!                               attrs_used))
!             {
!                 scan_plan->fsSystemCol = true;
!                 break;
!             }
!         }
!
!         bms_free(attrs_used);
!     }
  
      return scan_plan;
      }



Would it make sense to call pull_varattnos(reltargetlist), then walk the
bitmapset and break if we see a system column, then call
pull_varattnos() on the rinfo->clause?  That way, if the targetlist
request a system column we don't have to walk the RestrictInfos.



Seems like a good idea.  Will update the patch.



Done.  Attached is an updated version of the patch.


On second thought, I noticed that detecting whether we see a system 
column that way needs more cycles in cases where the reltargetlist and 
the restriction clauses don't contain any system columns.  ISTM that 
such cases are rather common, so I'm inclined to keep that code as-is.
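For reference, the early-exit ordering Alvaro suggested can be modeled like this (a toy sketch with plain arrays instead of bitmapsets; the helper names are made up):

```c
#include <assert.h>
#include <stdbool.h>

/* As with FirstLowInvalidHeapAttributeNumber in PostgreSQL, system
 * columns are represented by negative attribute numbers. */
static bool
has_system_column(const int *attnos, int n)
{
    for (int i = 0; i < n; i++)
        if (attnos[i] < 0)
            return true;
    return false;
}

static bool
fs_system_col(const int *tlist_attnos, int ntlist,
              const int *clause_attnos, int nclause)
{
    /* Early exit: skip the walk over restriction-clause attributes when
     * the target list already references a system column. */
    if (has_system_column(tlist_attnos, ntlist))
        return true;
    return has_system_column(clause_attnos, nclause);
}
```

The trade-off Fujita points out is visible here: the early check only pays off when system columns are actually requested; in the common no-system-column case it adds an extra scan.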


Best regards,
Etsuro Fujita






Re: [HACKERS] silent data loss with ext4 / all current versions

2016-01-18 Thread Michael Paquier
On Wed, Dec 2, 2015 at 3:24 PM, Michael Paquier
 wrote:
> On Wed, Dec 2, 2015 at 3:23 PM, Michael Paquier
>  wrote:
>> On Wed, Dec 2, 2015 at 7:05 AM, Tomas Vondra
>>  wrote:
>>> Attached is v2 of the patch, that
>>>
>>> (a) adds explicit fsync on the parent directory after all the rename()
>>> calls in timeline.c, xlog.c, xlogarchive.c and pgarch.c
>>>
>>> (b) adds START/END_CRIT_SECTION around the new fsync_fname calls
>>> (except for those in timeline.c, as the START/END_CRIT_SECTION is
>>> not available there)
>>>
>>> The patch is fairly trivial and I've done some rudimentary testing, but I'm
>>> sure I haven't exercised all the modified paths.
>>
>> I would like to have an in-depth look at that after finishing the
>> current CF, I am the manager of this one after all... Could you
>> register it to 2016-01 CF for the time being? I don't mind being
>> beaten by someone else if this someone has some room to look at this
>> patch..
>
> And please feel free to add my name as reviewer.

Tomas, I am planning to have a look at that, because it seems to be
important. In case it becomes lost on my radar, do you mind if I add
it to the 2016-03 CF?
-- 
Michael




Re: [HACKERS] pgbench - allow backslash-continuations in custom scripts

2016-01-18 Thread Michael Paquier
On Thu, Jan 7, 2016 at 5:36 PM, Kyotaro HORIGUCHI
 wrote:
> Finally, PsqlScanState has four callback functions, and all pgbench
> needs to do to use it is set NULL for all of them and link the
> object file in the psql directory. No link switch/ifdef is necessary.

Am I missing something? This patch is not registered in the CF app.
Horiguchi-san, if you expect feedback, it would be good to get it
there.
-- 
Michael




Re: [HACKERS] Proposal: Trigonometric functions in degrees

2016-01-18 Thread Michael Paquier
On Mon, Jan 18, 2016 at 5:01 AM, Dean Rasheed  wrote:
> On 30 November 2015 at 14:11, Tom Lane  wrote:
>> FWIW, I think that probably the best course of action is to go ahead
>> and install POSIX-compliant error checking in the existing functions
>> too.  POSIX:2008 is quite clear about this:
>>
>> : An application wishing to check for error situations should set errno to
>> : zero and call feclearexcept(FE_ALL_EXCEPT) before calling these
>> : functions. On return, if errno is non-zero or fetestexcept(FE_INVALID |
>> : FE_DIVBYZERO | FE_OVERFLOW | FE_UNDERFLOW) is non-zero, an error has
>> : occurred.
>>
>
> Looking at this again, I think it makes sense to make the Inf/NaN
> handling of these functions platform-independent. POSIX.1-2008 is
> pretty explicit about how they ought to behave, which is different
> from the current behaviour on either Linux or Windows:
>
> sin(Inf):
>   POSIX: domain error
>   Linux: NaN
>   Windows: ERROR:  input is out of range

OSX: NaN

> asin(Inf):
>   POSIX: domain error
>   Linux: ERROR:  input is out of range
>   Windows: ERROR:  input is out of range

OSX: NaN

> sin(NaN):
>   POSIX: NaN
>   Linux: NaN
>   Windows: ERROR:  input is out of range

OSX: NaN

> asin(NaN):
>   POSIX: NaN
>   Linux: NaN
>   Windows: ERROR:  input is out of range

OSX: NaN.

> I tried using feclearexcept + fetestexcept as recommended by
> POSIX.1-2008, and that does indeed make the above examples compliant
> on my linux box. Unfortunately it introduces new errors -- asin starts
> generating FE_UNDERFLOW errors for numbers that are really not that
> small, such as asin(1e-9), although the function still returns the
> correct answer. A bigger problem though is that these are C99
> functions and so won't necessarily be available on all supported
> platforms.
>
> So I think that a simpler answer is to just to add explicit tests for
> these inputs and avoid relying on errno, at least for the inverse
> functions, which have pretty clear constraints on their allowed
> inputs. For the forward functions, I'm not sure if there are some
> platforms on which large but finite inputs might generate errors, so I
> think it's safest to keep the existing errno checks as well just in
> case.

Thanks for this investigation. That's really inconsistent...

> Attached are patches for this and the new functions in degrees, now
> with POSIX compatible Inf/NaN handling.

The first patch looks fine to me, a nitpick:
+   /* Be sure to throw an error if the input is infinite --- see dcos */
s/dcos/docs

For patch 2, another nitpick :)
+   return ( sin(x * (M_PI / 180.0)) / sin(30.0 * (M_PI / 180.0)) ) / 2.0;
the parenthesis formatting looks incorrect; there are too many spaces just inside the parentheses.

Except those things the second patch looks good to me as well. Let's
have a committer look at it.
-- 
Michael




Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 1:40 AM, Masahiko Sawada  wrote:
> On Mon, Jan 18, 2016 at 1:20 PM, Michael Paquier
>  wrote:
>> On Sun, Jan 17, 2016 at 11:09 PM, Masahiko Sawada  
>> wrote:
>>> On Wed, Jan 13, 2016 at 1:54 AM, Alvaro Herrera wrote:
>>> * Confirm value of pg_stat_replication.sync_state (sync, async or potential)
>>> * Confirm that the data is synchronously replicated to multiple
>>> standbys in same cases.
>>>   * case 1 : The standby which is not listed in s_s_name, is down
>>>   * case 2 : The standby which is listed in s_s_names but potential
>>> standby, is down
>>>   * case 3 : The standby which is considered as sync standby, is down.
>>> * Standby promotion
>>>
>>> In order to confirm that the commit doesn't complete in case #3
>>> unless a new sync standby comes up, I think we need a framework that
>>> cancels an executing query.
>>> That is, what I'm planning is,
>>> 1. Set up master server (s_s_names = '2, standby1, standby2')
>>> 2. Set up two standby servers
>>> 3. Standby1 is down
>>> 4. Create some contents on master (But transaction is not committed)
>>> 5. Cancel the #4 query. (Also confirm that the flush location of only
>>> standby2 makes progress)
>>
>> This will need some thinking and is not as easy as it sounds. There is
>> no way to hold a connection open after executing a query in the current
>> TAP infrastructure. You are just mentioning case 3, but actually cases
>> 1 and 2 fall into the same need: if there is a failure we want
>> to be able to not be stuck in the test forever and have a way to
>> cancel a query execution at will. TAP uses psql -c to execute any SQL
>> queries, but we would need something far lower-level, which would
>> basically mean using the Perl driver for Postgres or an equivalent
>> here.
>>
>> Honestly, for those tests I just thought that we could get to something
>> reliable by looking at how each sync replication setup is reflected
>> in pg_stat_replication, as the flow is really getting complicated;
>> what matters here is giving the user a clear representation at the SQL
>> level of what is actually occurring in the server with the
>> configuration used.
>
> I see.
> We could check the transition of sync_state in pg_stat_replication.
> I think it means that it tests for each replication method (switching
> state) rather than synchronization of replication.
>
> What I'm planning to have are,
> * Confirm value of pg_stat_replication.sync_state (sync, async or potential)
> * Standby promotion
> * Standby catching up master
> And each replication method has above tests.
>
> Are these enough?

Does promoting the standby and checking that it caught up really have
value in the context of this patch? What we just want to know is, on a
master, which nodes need to be waited for when s_s_names or any other
method is used, no?
-- 
Michael




Re: [HACKERS] Combining Aggregates

2016-01-18 Thread David Rowley
On 19 January 2016 at 17:14, Robert Haas  wrote:

> On Mon, Jan 18, 2016 at 12:34 PM, Robert Haas 
> wrote:
> > On Mon, Jan 18, 2016 at 12:09 PM, Tom Lane  wrote:
> >> Robert Haas  writes:
> >>> Yeah.  The API contract for an expanded object took me quite a while
> >>> to puzzle out, but it seems to be this: if somebody hands you an R/W
> >>> pointer to an expanded object, you're entitled to assume that you can
> >>> "take over" that object and mutate it however you like.  But the
> >>> object might be in some other memory context, so you have to move it
> >>> into your own memory context.
> >>
> >> Only if you intend to keep it --- for example, a function that is
> mutating
> >> and returning an object isn't required to move it somewhere else, if the
> >> input is R/W, and I think it generally shouldn't.
> >>
> >> In the context here, I'd think it is the responsibility of nodeAgg.c
> >> not individual datatype functions to make sure that expanded objects
> >> live where it wants them to.
> >
> > That's how I did it in my prototype, but the problem with that is that
> > spinning up a memory context for every group sucks when there are many
> > groups with only a small number of elements each - hence the 50%
> > regression that David Rowley observed.  If we're going to use expanded
> > objects here, which seems like a good idea in principle, that's going
> > to have to be fixed somehow.  We're flogging the heck out of malloc by
> > repeatedly creating a context, doing one or two allocations in it, and
> > then destroying the context.
> >
> > I think that, in general, the memory context machinery wasn't really
> > designed to manage lots of small contexts.  The overhead of spinning
> > up a new context for just a few allocations is substantial.  That
> > matters in some other situations too, I think - I've commonly seen
> > AllocSetContextCreate taking several percent of runtime in profiles.
> > But here it's much exacerbated.  I'm not sure whether it's better to
> > attack that problem at the root and try to make AllocSetContextCreate
> > cheaper, or whether we should try to figure out some change to the
> > expanded-object machinery to address the issue.
>
> Here is a patch that helps a good deal.  I changed things so that when
> we create a context, we always allocate at least 1kB.  If that's more
> than we need for the node itself and the name, then we use the rest of
> the space as a sort of keeper block.  I think there's more that can be
> done to improve this, but I'm wimping out for now because it's late
> here.
>
> I suspect something like this is a good idea even if we don't end up
> using it for aggregate transition functions.


Thanks for writing this up, but I think the key issue is not addressed:
the memory context per aggregate group is still a performance killer. I
ran the test query again with the attached patch, and the hashagg query
still took 300 seconds on my VM with 4GB of RAM.
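The keeper-block idea Robert describes can be illustrated with a toy allocator (greatly simplified; this is not PostgreSQL's aset.c):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* One malloc() serves both the context header and the first small
 * allocations: the header is followed by an inline "keeper" block, so a
 * context that only ever does one or two small allocations never calls
 * malloc() again. */
typedef struct ToyContext
{
    size_t  used;
    size_t  size;
    char    data[];                 /* inline keeper space */
} ToyContext;

static ToyContext *
toy_context_create(void)
{
    size_t      size = 1024;        /* always grab at least 1kB */
    ToyContext *cxt = malloc(sizeof(ToyContext) + size);

    if (cxt)
    {
        cxt->used = 0;
        cxt->size = size;
    }
    return cxt;
}

static void *
toy_alloc(ToyContext *cxt, size_t n)
{
    void   *p;

    n = (n + 7) & ~(size_t) 7;      /* keep 8-byte alignment */
    if (cxt->used + n > cxt->size)
        return NULL;                /* real code would chain a new block */
    p = cxt->data + cxt->used;
    cxt->used += n;
    return p;
}
```

This reduces malloc traffic for the create/allocate-once/destroy pattern, but as noted above it does not help the other cost David measured: one million contexts still occupy one million kilobyte-sized blocks.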

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Combining Aggregates

2016-01-18 Thread Tomas Vondra


On 01/19/2016 05:14 AM, Robert Haas wrote:

On Mon, Jan 18, 2016 at 12:34 PM, Robert Haas  wrote:

On Mon, Jan 18, 2016 at 12:09 PM, Tom Lane  wrote:

Robert Haas  writes:

Yeah.  The API contract for an expanded object took me quite a while
to puzzle out, but it seems to be this: if somebody hands you an R/W
pointer to an expanded object, you're entitled to assume that you can
"take over" that object and mutate it however you like.  But the
object might be in some other memory context, so you have to move it
into your own memory context.


Only if you intend to keep it --- for example, a function that is
mutating and returning an object isn't required to move it
somewhere else, if the input is R/W, and I think it generally
shouldn't.

In the context here, I'd think it is the responsibility of
nodeAgg.c not individual datatype functions to make sure that
expanded objects live where it wants them to.


That's how I did it in my prototype, but the problem with that is that
spinning up a memory context for every group sucks when there are many
groups with only a small number of elements each - hence the 50%
regression that David Rowley observed.  If we're going to use expanded
objects here, which seems like a good idea in principle, that's going
to have to be fixed somehow.  We're flogging the heck out of malloc by
repeatedly creating a context, doing one or two allocations in it, and
then destroying the context.

I think that, in general, the memory context machinery wasn't really
designed to manage lots of small contexts.  The overhead of spinning
up a new context for just a few allocations is substantial.  That
matters in some other situations too, I think - I've commonly seen
AllocSetContextCreate taking several percent of runtime in profiles.
But here it's much exacerbated.  I'm not sure whether it's better to
attack that problem at the root and try to make AllocSetContextCreate
cheaper, or whether we should try to figure out some change to the
expanded-object machinery to address the issue.


Here is a patch that helps a good deal.  I changed things so that when
we create a context, we always allocate at least 1kB.  If that's more
than we need for the node itself and the name, then we use the rest of
the space as a sort of keeper block.  I think there's more that can be
done to improve this, but I'm wimping out for now because it's late
here.

I suspect something like this is a good idea even if we don't end up
using it for aggregate transition functions.


I dare say that if expanded objects require a separate memory context 
per aggregate state (and I don't see why that would be the case), it's a 
dead end. Not so long ago we fixed exactly this issue in array_agg(), 
which addressed a bunch of reported OOM issues and improved array_agg() 
performance quite a bit. It'd be rather crazy to introduce the same 
problem into all aggregate functions.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Combining Aggregates

2016-01-18 Thread David Rowley
On 19 January 2016 at 18:04, Tomas Vondra 
wrote:

> Hi,
>
> On 01/19/2016 05:00 AM, David Rowley wrote:
>
>> On 19 January 2016 at 06:03, Pavel Stehule wrote:
>>
>> ...
>
>>
>> It is strange; why is hashaggregate so slow?
>>
>>
>> Good question. I looked at this and found my VM was swapping like crazy.
>> Upon investigation it appears that's because, since the patch creates a
>> memory context per aggregated group, and in this case I've got 1 million
>> of them, it means we create 1 million contexts, which are
>> ALLOCSET_SMALL_INITSIZE (1KB) in size, which means about 1GB of memory,
>> which is more than my VM likes.
>>
>
> Really? Where do we create the memory context? IIRC string_agg uses the
> aggcontext directly, and indeed that's what I see in string_agg_transfn and
> makeStringAggState.
>
>
Yeah, all this is talk is relating to Robert's expandedstring-v1.patch
which changes string_agg to use text and expanded-objects. This also means
that a memory context is created per group, which is rather a big overhead.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Combining Aggregates

2016-01-18 Thread Tomas Vondra



On 01/19/2016 06:04 AM, Tomas Vondra wrote:

Hi,

On 01/19/2016 05:00 AM, David Rowley wrote:

Good question. I looked at this and found my VM was swapping like crazy.
Upon investigation it appears that's because, since the patch creates a
memory context per aggregated group, and in this case I've got 1 million
of them, it means we create 1 million contexts, which are
ALLOCSET_SMALL_INITSIZE (1KB) in size, which means about 1GB of memory,
which is more than my VM likes.


Really? Where do we create the memory context? IIRC string_agg uses the
aggcontext directly, and indeed that's what I see in string_agg_transfn
and makeStringAggState.

Perhaps you mean that initStringInfo() allocates 1kB buffers by default?


...


I'm not quite sure I understand - the current code ends up using 8192
for the transition space (per count_agg_clauses_walker). Are you
suggesting lowering the value, despite the danger of OOM issues?


Meh, right after sending the message I realized that perhaps this 
relates to Robert's patch and that maybe it changes this. And indeed, it 
changes the type from internal to text, and thus the special case in 
count_agg_clauses_walker no longer applies.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Removing service-related code in pg_ctl for Cygwin

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 2:32 AM, Andrew Dunstan  wrote:
>
>
> On 01/14/2016 12:38 AM, Michael Paquier wrote:
>>
>> Hi all,
>>
>> Beginning a new thread seems more adapted regarding $subject but
>> that's mentioned here as well:
>>
>> http://www.postgresql.org/message-id/CAB7nPqQXghm_SdB5iniupz1atzMxk=95gv9a8ocdo83sxcn...@mail.gmail.com
>>
>> It appears, based on some investigation from Andrew, that Cygwin does
>> not need to use the service-related code, i.e. the register/unregister
>> options and similar things that are specific to WIN32, and can rely instead
>> on cygrunsrv with a SysV-like init file to manage the service. Based on
>> my tests with Cygwin, I can also see that a server started
>> within a Cygwin session persists even after the session is closed,
>> which is kind of nice, and I think that is an additional reason to
>> not use the service-related code paths. Hence, what about the following
>> patch, which makes Cygwin behave like any *nix OS when using pg_ctl?
>> This simplifies the code a bit.
>>
>> Marco, as I think you do some packaging for Postgres in Cygwin, and
>> others, does that sound acceptable?
>>
>
>
>
> I think we can be a bit more adventurous and remove all the Cygwin-specific
> code. See attached patch, which builds fine on buildfarm cockatiel.

Ah, OK. I see the difference. It builds as well for me.

-#ifndef __CYGWIN__
-   AddUserToTokenDacl(restrictedToken);
-#endif
[...]
-#if defined(WIN32) || defined(__CYGWIN__)
+#ifdef WIN32
setvbuf(stderr, NULL, _IONBF, 0);
 #endif
Fine for me, those two do not seem to matter much as far as I have tested.
-- 
Michael




Re: [HACKERS] Combining Aggregates

2016-01-18 Thread Tomas Vondra

Hi,

On 01/19/2016 05:00 AM, David Rowley wrote:

On 19 January 2016 at 06:03, Pavel Stehule <pavel.steh...@gmail.com> wrote:


...


It is strange; why is hashaggregate so slow?


Good question. I looked at this and found my VM was swapping like crazy.
Upon investigation it appears that's because, since the patch creates a
memory context per aggregated group, and in this case I've got 1 million
of them, it means we create 1 million contexts, which are
ALLOCSET_SMALL_INITSIZE (1KB) in size, which means about 1GB of memory,
which is more than my VM likes.


Really? Where do we create the memory context? IIRC string_agg uses the 
aggcontext directly, and indeed that's what I see in string_agg_transfn 
and makeStringAggState.


Perhaps you mean that initStringInfo() allocates 1kB buffers by default?



set work_mem = '130MB' does coax the planner into a GroupAggregate plan,
which is faster, but due to the hash agg executor code not giving
any regard to work_mem. If I set work_mem to 140MB (which is more
realistic for this VM), it does cause the same swapping problems to
occur.  Probably setting aggtransspace for this aggregate to 1024 would
help the costing problem, but it would also cause hashagg to be a less
chosen option during planning.


I'm not quite sure I understand - the current code ends up using 8192 
for the transition space (per count_agg_clauses_walker). Are you 
suggesting lowering the value, despite the danger of OOM issues?


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-18 Thread Vitaly Burovoy
On 1/18/16, Vitaly Burovoy  wrote:
> <>
> ---
> + if (*strptr != '\0')
> ...
> + while (*strptr && !isspace(*strptr))
> Sometimes it explicitly compares to '\0', sometimes implicitly.
> The common style is explicit comparison, and it is preferred because
> different compilers handle the implicit conversion to boolean differently.
>
> ---
> <>

It seems I got distracted by something else... Those lines are OK, skip that block.

-- 
Best regards,
Vitaly Burovoy




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 1:49 PM, Michael Paquier
 wrote:
> On Tue, Jan 19, 2016 at 11:08 AM, Joe Conway  wrote:
>> On 01/18/2016 04:16 PM, Joe Conway wrote:
>>> Please see the attached. Duplication removed.
>>
>> Actually please see this version instead.
>
> Thanks for the new patch.
>
> +   tuplestore_puttuple(tupstore, tuple);
> +   }
> +
> +   /*
> +* no longer need the tuple descriptor reference created by
> The patch has some whitespace issues.
>
> +REVOKE ALL on pg_config FROM PUBLIC;
> +REVOKE EXECUTE ON FUNCTION pg_config() FROM PUBLIC;
> I guess that this portion is still under debate :)
>
> make[1]: Nothing to be done for `all'.
> make -C ../backend submake-errcodes
> make[2]: *** No rule to make target `config_info.o', needed by
> `libpgcommon.a'.  Stop.
> make[2]: *** Waiting for unfinished jobs
> The patch is visibly forgetting to include config_info.c, which should
> be part of src/common.
>
>  /*
> + * This function cleans up the paths for use with either cmd.exe or Msys
> + * on Windows. We need them to use filenames without spaces, for which a
> + * short filename is the safest equivalent, eg:
> + * C:/Progra~1/
> + */
> +void
> +cleanup_path(char *path)
> +{
> Perhaps this refactoring would be useful as a separate patch?

You need to update @pgcommonallfiles in Mkvcbuild.pm as well, or the
compilation with MSVC is going to fail.
-- 
Michael




Re: [HACKERS] Re: [JDBC] 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-18 Thread Pavel Stehule
2016-01-18 23:50 GMT+01:00 Thomas Kellerer :

> Robert Haas wrote:
> > This isn't the first complaint about this mechanism that we've gotten,
> > and it won't be the last.  Way too many of our users are way more
> > aware than they should be that the threshold here is five rather than
> > any other number, which to me is a clear-cut sign that this needs to
> > be improved.  How to improve it is a harder question.  We lack the
> > ability to do any kind of sensitivity analysis on a plan, so we can't
> > know whether there are other parameter values that would have resulted
> > in a different plan, nor can we test whether a particular set of
> > parameter values would have changed the outcome.
>
> (I initially posted that question on the JDBC mailing list)
>
> To be honest: looking at the efforts Oracle has made from 9 up until 12 I
> am not sure this is a problem that can be solved by caching plans.
>
> Even with the new "in-flight" re-planning in Oracle 12 ("cardinality
> feedback") and all the effort that goes into caching plans we are still
> seeing similar problems with (prepared) statements that are suddenly slow.
> And as far as I can tell, the infrastructure around plan caching,
> invalidation, bind variable peeking and all that seems to be a *lot* more
> complex ("sophisticated") in Oracle compared to Postgres. And the results
> don't seem to justify the effort (at least in my experience).
>
> With all the problems I have seen (in Oracle and Postgres) I think that
> maybe a better solution to this problem is to make the planner fast (and
> reliable) enough so that plan caching isn't necessary in the first place.
>
> However I have no idea how feasible that is.
>

For statements like INSERT INTO tab VALUES(..) or UPDATE tab SET x = .. WHERE
id = .., the planner will be a significant overhead. But these statements are
relatively simple, and probably some solution exists.

Regards

Pavel



>
>
>
>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Fwd-JDBC-Re-9-4-1207-behaves-differently-with-server-side-prepared-statements-compared-to-9-2-1102-tp5881825p5882835.html
> Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 11:08 AM, Joe Conway  wrote:
> On 01/18/2016 04:16 PM, Joe Conway wrote:
>> Please see the attached. Duplication removed.
>
> Actually please see this version instead.

Thanks for the new patch.

+   tuplestore_puttuple(tupstore, tuple);
+   }
+
+   /*
+* no longer need the tuple descriptor reference created by
The patch has some whitespace issues.

+REVOKE ALL on pg_config FROM PUBLIC;
+REVOKE EXECUTE ON FUNCTION pg_config() FROM PUBLIC;
I guess that this portion is still under debate :)

make[1]: Nothing to be done for `all'.
make -C ../backend submake-errcodes
make[2]: *** No rule to make target `config_info.o', needed by
`libpgcommon.a'.  Stop.
make[2]: *** Waiting for unfinished jobs
The patch is visibly forgetting to include config_info.c, which should
be part of src/common.

 /*
+ * This function cleans up the paths for use with either cmd.exe or Msys
+ * on Windows. We need them to use filenames without spaces, for which a
+ * short filename is the safest equivalent, eg:
+ * C:/Progra~1/
+ */
+void
+cleanup_path(char *path)
+{
Perhaps this refactoring would be useful as a separate patch?
-- 
Michael




Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-18 Thread Amit Kapila
On Mon, Jan 18, 2016 at 7:08 PM, Andres Freund  wrote:
>
> On 2015-12-21 16:26:25 +0900, Michael Paquier wrote:
> > On Sun, Dec 20, 2015 at 10:14 PM, Michael Paquier
> >  wrote:
> > > Speaking of which, this patch was registered in this CF, I am moving
> > > it to the next as a bug fix.
> >
> > I found a stupid bug in my previous patch: when issuing XLOG_SWITCH it
> > is possible that the return LSN pointer is on the new segment that has
> > been forcibly archived if RequestXLogSwitch is called multiple times
> > and that subsequent calls are not necessary. Patch updated.
>
> I find this patch rather unsatisfactory. Yes, it kinda solves the
> problem of archive timeout, but it leaves the bigger and longer standing
> problems of unnecessary checkpoints with wal_level=hs in place. It's
> also somewhat fragile in my opinion.
>
> I think we rather want a per backend (or perhaps per wal insertion lock)
> flag that says 'last relevant record inserted at', and a way to not set
> that during insertion. Then during a checkpoint or the relevant bgwriter
> code, we look wether anything relevant happened in any backend since the
> last time we performed a checkpoint/logged a running xacts snapshot.
>

Sounds like a more robust way of dealing with this problem.  Michael,
if you don't disagree with the above proposal, then we can mark this patch
as Waiting on Author?


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Combining Aggregates

2016-01-18 Thread Tom Lane
Robert Haas  writes:
> Here is a patch that helps a good deal.  I changed things so that when
> we create a context, we always allocate at least 1kB.

That's going to kill performance in some other cases; subtransactions
in particular rely on the subtransaction's TransactionContext not causing
any actual malloc unless something gets put into the TransactionContext.

regards, tom lane




Re: [HACKERS] Combining Aggregates

2016-01-18 Thread Robert Haas
On Mon, Jan 18, 2016 at 12:34 PM, Robert Haas  wrote:
> On Mon, Jan 18, 2016 at 12:09 PM, Tom Lane  wrote:
>> Robert Haas  writes:
>>> Yeah.  The API contract for an expanded object took me quite a while
>>> to puzzle out, but it seems to be this: if somebody hands you an R/W
>>> pointer to an expanded object, you're entitled to assume that you can
>>> "take over" that object and mutate it however you like.  But the
>>> object might be in some other memory context, so you have to move it
>>> into your own memory context.
>>
>> Only if you intend to keep it --- for example, a function that is mutating
>> and returning an object isn't required to move it somewhere else, if the
>> input is R/W, and I think it generally shouldn't.
>>
>> In the context here, I'd think it is the responsibility of nodeAgg.c
>> not individual datatype functions to make sure that expanded objects
>> live where it wants them to.
>
> That's how I did it in my prototype, but the problem with that is that
> spinning up a memory context for every group sucks when there are many
> groups with only a small number of elements each - hence the 50%
> regression that David Rowley observed.  If we're going to use expanded
> objects here, which seems like a good idea in principle, that's going
> to have to be fixed somehow.  We're flogging the heck out of malloc by
> repeatedly creating a context, doing one or two allocations in it, and
> then destroying the context.
>
> I think that, in general, the memory context machinery wasn't really
> designed to manage lots of small contexts.  The overhead of spinning
> up a new context for just a few allocations is substantial.  That
> matters in some other situations too, I think - I've commonly seen
> AllocSetContextCreate taking several percent of runtime in profiles.
> But here it's much exacerbated.  I'm not sure whether it's better to
> attack that problem at the root and try to make AllocSetContextCreate
> cheaper, or whether we should try to figure out some change to the
> expanded-object machinery to address the issue.

Here is a patch that helps a good deal.  I changed things so that when
we create a context, we always allocate at least 1kB.  If that's more
than we need for the node itself and the name, then we use the rest of
the space as a sort of keeper block.  I think there's more that can be
done to improve this, but I'm wimping out for now because it's late
here.

I suspect something like this is a good idea even if we don't end up
using it for aggregate transition functions.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c
index d26991e..3730a21 100644
--- a/src/backend/utils/mmgr/aset.c
+++ b/src/backend/utils/mmgr/aset.c
@@ -164,6 +164,14 @@ typedef void *AllocPointer;
 /*
  * AllocSetContext is our standard implementation of MemoryContext.
  *
+ * Note: An AllocSetContext can include one or more "keeper" blocks which
+ * are not freed when the context is reset.  "keeper" should point to the
+ * first of these blocks.  Sometimes, there may be a block which shouldn't
+ * be freed even when the context is deleted (because that space isn't a
+ * separate allocation); if so, "stopper" can point to this block.  "keeper"
+ * must be set whenever "stopper" is set, and must point to the same block
+ * as "stopper" or an earlier one.
+ *
  * Note: header.isReset means there is nothing for AllocSetReset to do.
  * This is different from the aset being physically empty (empty blocks list)
  * because we may still have a keeper block.  It's also different from the set
@@ -181,7 +189,8 @@ typedef struct AllocSetContext
 	Size		maxBlockSize;	/* maximum block size */
 	Size		nextBlockSize;	/* next block size to allocate */
 	Size		allocChunkLimit;	/* effective chunk size limit */
-	AllocBlock	keeper;			/* if not NULL, keep this block over resets */
+	AllocBlock	keeper;		/* on reset, stop freeing when we reach this */
+	AllocBlock	stopper;	/* on delete, stop freeing when we reach this */
 } AllocSetContext;
 
 typedef AllocSetContext *AllocSet;
@@ -248,7 +257,6 @@ typedef struct AllocChunkData
 static void *AllocSetAlloc(MemoryContext context, Size size);
 static void AllocSetFree(MemoryContext context, void *pointer);
 static void *AllocSetRealloc(MemoryContext context, void *pointer, Size size);
-static void AllocSetInit(MemoryContext context);
 static void AllocSetReset(MemoryContext context);
 static void AllocSetDelete(MemoryContext context);
 static Size AllocSetGetChunkSpace(MemoryContext context, void *pointer);
@@ -267,7 +275,6 @@ static MemoryContextMethods AllocSetMethods = {
 	AllocSetAlloc,
 	AllocSetFree,
 	AllocSetRealloc,
-	AllocSetInit,
 	AllocSetReset,
 	AllocSetDelete,
 	AllocSetGetChunkSpace,
@@ -440,13 +447,45 @@ AllocSetContextCreate(MemoryContext parent,
 	  Size maxBlockSize)
 {
 	AllocSet	set;
+	Size		needed;
+	Size		request_size;
+
+	/*
+	 * We always a

Re: [HACKERS] Combining Aggregates

2016-01-18 Thread David Rowley
On 19 January 2016 at 06:03, Pavel Stehule  wrote:

>
>
> >
>> > # explain analyze select a%100,length(string_agg(b,',')) from ab
>> group
>> > by 1;
>> > QUERY PLAN
>> >
>> ---
>> >  GroupAggregate  (cost=119510.84..144510.84 rows=100 width=32)
>> (actual
>> > time=538.938..1015.278 rows=100 loops=1)
>> >Group Key: ((a % 100))
>> >->  Sort  (cost=119510.84..122010.84 rows=100 width=32) (actual
>> > time=538.917..594.194 rows=100 loops=1)
>> >  Sort Key: ((a % 100))
>> >  Sort Method: quicksort  Memory: 102702kB
>> >  ->  Seq Scan on ab  (cost=0.00..19853.00 rows=100 width=32)
>> > (actual time=0.016..138.964 rows=100 loops=1)
>> >  Planning time: 0.146 ms
>> >  Execution time: 1047.511 ms
>> >
>> >
>> > Patched
>> > # explain analyze select a%100,length(string_agg(b,',')) from ab
>> group
>> > by 1;
>> >QUERY PLAN
>> >
>> 
>> >  HashAggregate  (cost=24853.00..39853.00 rows=100 width=32) (actual
>> > time=8072.346..144424.872 rows=100 loops=1)
>> >Group Key: (a % 100)
>> >->  Seq Scan on ab  (cost=0.00..19853.00 rows=100 width=32)
>> (actual
>> > time=0.025..481.332 rows=100 loops=1)
>> >  Planning time: 0.164 ms
>> >  Execution time: 263288.332 ms
>>
>> Well, that's pretty odd.  I guess the plan change must be a result of
>> switching the transition type from internal to text, although I'm not
>> immediately certain why that would make a difference.
>>
>
> It is strange; why is hashaggregate so slow?
>

Good question. I looked at this and found my VM was swapping like crazy.
Upon investigation it appears that's because, since the patch creates a
memory context per aggregated group, and in this case I've got 1 million of
them, it means we create 1 million contexts, which are
ALLOCSET_SMALL_INITSIZE (1KB) in size, which means about 1GB of memory,
which is more than my VM likes.

set work_mem = '130MB' does coax the planner into a GroupAggregate plan,
which is faster, but due to the hash agg executor code not giving any
regard to work_mem. If I set work_mem to 140MB (which is more realistic for
this VM), it does cause the same swapping problems to occur.  Probably
setting aggtransspace for this aggregate to 1024 would help the costing
problem, but it would also cause hashagg to be a less chosen option during
planning.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] PATCH: postpone building buckets to the end of Hash (in HashJoin)

2016-01-18 Thread Tomas Vondra

Hi,

On 12/17/2015 10:28 PM, Tomas Vondra wrote:

Hi,

On 12/17/2015 07:20 PM, Robert Haas wrote:
...


If this doesn't regress performance in the case where the number of
buckets is estimated accurately to begin with, then I think this is
a great idea. Can you supply some performance tests results for that
case, and maybe some of the other cases also?


I don't see how it could regress performance, and the benchmarks I've
done confirm that. I'll do more thorough benchmarking and post the
results here, but not now as this patch is in 2016-01 CF and I want to
put all my time into reviewing patches from the open commitfest.


I've finally got to do more thorough benchmarks, and surprisingly there 
seems to be a minor regression. The scripts I've used are attached, 
along with results, but in short it joins two tables, with different 
combinations:


  1M x 10M
  10M x 100M
  5M x 250M

with different work_mem sizes (so that the smallest value uses a bunch 
of batches while the largest value uses a single batch).


The 1x10 and 10x100 are designed to fit into RAM on the machine 
(including the temporary files in batching mode), while 5x250 is 
designed to specifically force the temp files to disk (but it's on fast 
SSD array, so not a big deal).


Average timings for current master and the first two patches of the 
series (0001 postpones the build of buckets and 0002 always starts 
without batching) look like this (the timings are in milliseconds):


  size     work_mem    master    postpone    no-batch
  -----------------------------------------------------
  1x10     8           5735      5760        6107
           32          5939      5955        6124
           128         4852      5038        5175
  -----------------------------------------------------
  5x250    64          158512    158429      159008
           256         144046    144584      144994
           1024        111478    117529      116933
  -----------------------------------------------------
  10x100   64          68389     68278       68530
           256         66693     66415       66605
           1024        48606     50654       50490

So compared to master, the differences look like this:

  size     work_mem    postpone    no-batch
  -------------------------------------------
  1x10     8           0.44%       6.50%
           32          0.28%       3.13%
           128         3.83%       6.65%
  -------------------------------------------
  5x250    64          -0.05%      0.31%
           256         0.37%       0.66%
           1024        5.43%       4.89%
  -------------------------------------------
  10x100   64          -0.16%      0.21%
           256         -0.42%      -0.13%
           1024        4.21%       3.88%

So for the first patch (postponing building of buckets) with batching,
the difference is rather negligible, possibly within noise (~0.5%). When
the hash join runs with a single batch, however, the difference gets much
more significant, 4-5%.


I'm not going to discuss the second patch for now, but naturally it has 
the same issue and apparently also some additional ones (at least on the 
1x10 data set).


I have spent a fair amount of time profiling this, and I can't wrap my 
head around where the difference comes from, though. Essentially we 
don't do any additional stuff, we've just moved some of the stuff to a 
different place in the code.


It might change the memory access pattern a bit, but the original
patterns are not exactly sequential anyway, as we're moving random
updates to an array of pointers. Or rather, moving them into the
BuildBuckets() method, so if something consumes more time, it should be
in that method, I believe.


So I've measured how much time the function takes for the 1x10 and 
10x100 datasets, and it's ~25ms and ~290ms respectively. That's way less 
than the actual difference in query runtime, which is ~190ms and ~2050ms.


So even if we entirely skipped the bucket build, we would not recover 
the difference. Not sure what's going on here.


I've also done some profiling using perf, but I haven't really found 
anything suspicious there.


Any ideas what might be the cause of this?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


hash-scripts.tgz
Description: application/compressed-tar


hashes-delayed.ods
Description: application/vnd.oasis.opendocument.spreadsheet



Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-18 Thread Vitaly Burovoy
On 1/4/16, Pavel Stehule  wrote:
> 2016-01-04 18:13 GMT+01:00 Shulgin, Oleksandr  :
>> On Mon, Jan 4, 2016 at 6:03 PM, Pavel Stehule  
>> wrote:
>> > 2016-01-04 17:48 GMT+01:00 Shulgin, Oleksandr 
>> > :
>> >> On Mon, Jan 4, 2016 at 4:51 PM, Robert Haas  wrote:
>> >>
>> >> I'm also inclined on dropping that explicit check for empty string
>> >> below and let numeric_in() error out on that.  Does this look OK, or can
>> >> it confuse someone:
>> >> postgres=# select pg_size_bytes('');
>> >> ERROR:  invalid input syntax for type numeric: ""
>> >
>> > both fixed
>>
>> Hm...
>>
>> > + switch (*strptr)
>> > + {
>> > + /* ignore plus symbol */
>> > + case '+':
>> > + case '-':
>> > + *bufptr++ = *strptr++;
>> > + break;
>> > + }
>>
>> Well, to that amount you don't need any special checks, I'm just not sure
>> if reported error message is not misleading if we let numeric_in() handle
>> all the errors.  At least it can cope with the leading spaces, +/- and
>> empty input quite well.
>>
>
> I don't would to catch a exception from numeric_in - so I try to solve some
> simple situations, where I can raise possible better error message.

There are several cases where your behavior gives strange errors (see below).

Next batch of notes:

src/include/catalog/pg_proc.h:
---
+ DATA(insert OID = 3317 ( pg_size_bytes...
now oid 3317 is used (by pg_stat_get_wal_receiver), 3318 is free

---
+ DESCR("convert a human readable text with size units to big int bytes");
Maybe the best way is to copy the first sentence from the doc?
("convert a size in human-readable format with size units into bytes")


src/backend/utils/adt/dbsize.c:
+ text *arg = PG_GETARG_TEXT_PP(0);
+ char *str = text_to_cstring(arg);
...
+   /* working buffer cannot be longer than original string */
+   buffer = (char *) palloc(VARSIZE_ANY_EXHDR(arg) + 1);
Is there any reason to get TEXT for only converting it to cstring and
call VARSIZE_ANY_EXHDR instead of strlen?

---
+   text   *arg = PG_GETARG_TEXT_PP(0);
+   char   *str = text_to_cstring(arg);
+   char*strptr = str;
+   char   *buffer;
The variable names are misaligned after their types (throughout the
body of the "pg_size_bytes" function).
See variable declarations in nearby functions (
  >> "make the new code look like the existing code around it"
  http://www.postgresql.org/docs/devel/static/source-format.html
)

---
+errmsg("\"%s\" is not number", str)));
s/is not number/is not a number/
(the second version can be found in a couple places besides translations)

---
+   if (*strptr != '\0')
...
+   while (*strptr && !isspace(*strptr))
Sometimes it explicitly compares to '\0', sometimes implicitly.
The common style is explicit comparison, and it is preferred because
different compilers handle the implicit conversion to boolean differently.

---
+   /* Skip leading spaces */
...
+   /* ignore plus symbol */
...
+   /* copy digits to working buffer */
...
+   /* allow whitespace between integer and unit */
I'm also inclined to drop the explicit space skipping and the checks
for +/- symbols, instead copying all digits, spaces, dots and '+-' symbols
and letting numeric_in() error out on that.

It allows to get correct error messages for something like:
postgres=# select pg_size_bytes('.+912');
ERROR:  invalid input syntax for type numeric: ".+912"
postgres=# select pg_size_bytes('+912+ kB');
ERROR:  invalid input syntax for type numeric: "+912+ "
postgres=# select pg_size_bytes('++123 kB');
ERROR:  invalid input syntax for type numeric: "++123 "

instead of current:
postgres=# select pg_size_bytes('.+912');
ERROR:  invalid input syntax for type numeric: "."
postgres=# select pg_size_bytes('+912+ kB');
ERROR:  invalid unit: "+ kB"
postgres=# select pg_size_bytes('++123 kB');
ERROR:  invalid input syntax for type numeric: "+"

---
+   while (isspace((unsigned char) *strptr))
...
+   while (isspace(*strptr))
...
+   while (*strptr && !isspace(*strptr))
...
+   while (isspace(*strptr))
In the first occurrence, isspace's parameter is cast to "unsigned
char", whereas in the others it is not.
Note:
"The behavior is undefined if the value of ch is not representable as
unsigned char and is not equal to EOF"
Proof:
http://en.cppreference.com/w/c/string/byte/isspace

---
+   pfree(buffer);
+   pfree(str);
pfree-s here are not necessary. See:
http://www.neilconway.org/talks/hacking/hack_slides.pdf (page 17)

-- 
Best regards,
Vitaly Burovoy




Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2016-01-18 Thread Amit Kapila
On Mon, Jan 18, 2016 at 11:06 PM, Robert Haas  wrote:
>
> On Mon, Jan 18, 2016 at 11:09 AM, Alvaro Herrera
>  wrote:
> > Amit Kapila wrote:
> >
> >> The reason for not updating the patch related to this thread is that
it is
> >> dependent on the work for refactoring the tranches for LWLocks [1]
> >> which is now coming towards an end, so I think it is quite reasonable
> >> that the patch can be updated for this work during commit fest, so
> >> I am moving it to upcoming CF.
> >
> > Thanks.  I think the tranche reworks are mostly done now, so is anyone
> > submitting an updated version of this patch?
> >

Before updating the patch, it is better to clarify a few points, as mentioned
below.

>
> > Also, it would be very good if someone can provide insight on how this
> > patch interacts with the other submitted patch for "waiting for
> > replication" https://commitfest.postgresql.org/8/436/
> > Andres seems to think that the other patch is completely independent of
> > this one, i.e. the "waiting for replication" column needs to exist
> > separately and not as part of the "more descriptive" new 'waiting'
> > column.
>
> Yeah, I really don't agree with that.  I think that it's much better
> to have one column that says what you are waiting for than a bunch of
> separate columns that tell you whether you are waiting for individual
> things for which you might be waiting.  I think this patch, which
> introduces the general mechanism, should win: and the other patch
> should then be one client of that mechanism.
>

I agree with what you have said, but I think the bigger question here is about
the UI and which is the more appropriate place to store wait information. I
will try to summarize the options discussed.

Initially, we started with extending the 'waiting' column in
pg_stat_activity,
to which some people raised backward-compatibility concerns. Another
option that came up during discussion was to retain 'waiting' as-is and
have an additional column 'wait_event' in pg_stat_activity. After that,
there was feedback that we should try to include wait information about
background processes as well, which raises a bigger question: is it any
good to expose this information via pg_stat_activity (which doesn't
display information about background processes), or is it better to have
a new view, as discussed here [1]?

A second, somewhat related point is whether we should save
this information in PGPROC as 4 bytes or keep it in pgBackendStatus.
I think it is better to store it in PGPROC, if we want to save wait
information for background processes as well.

I am of the opinion that we should save this information in PGPROC and
expose it via a new view, but I am open to going other ways based on what
others think about this matter.

[1] -
http://www.postgresql.org/message-id/CAA4eK1+=5Ex8-5NRr3u94=_t2p65v0kcjZ5rXddVmkx=lwa...@mail.gmail.com

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


[HACKERS] Advices on custom data type and extension development

2016-01-18 Thread Luciano Coutinho Barcellos

Dear friends,

I'm planning to develop an extension, and I'm here to get some 
help. But first I would like to share the problem I intend to solve; maybe 
my desired solution is not a good option.


What I have:

* a lot of data being generated every day, which are mainly 
queried by an immutable column of type date or timestamp;
* as a standard, almost every table has a bigserial id column 
as a primary key;
* data is huge enough to demand table partitioning, which is 
implemented as suggested in Postgres documentation, by using triggers 
and table inheritance. A function called by cron deals with the creation of 
new partitions.


What I would like to develop first is a custom type (let's call it 
datedserial) for replacing bigserial as the primary key:


* the type would be 8 bytes long, being 4 dedicated to storing 
the Date, and 4 dedicated to storing a serial within that day;
* the text representation of the type would show its date and 
its serial number (something like '2015-10-02.007296' as a canonical 
form, but which could accept inputs like '20151002.007296');
* as a consequence of this internal representation, the serial 
part could not be greater than 4 billion and some;
* support for operator classes allowing the type being used in 
GIN and GIST indexes would be optional for now.


That would allow me to have a compact primary key which I could use 
to partition the table based on the object's date. That would also allow 
me to partition detail tables on the foreign key column having this data 
type. Besides that, just by examining the value, mainly when used as a 
foreign key, I could infer where the record belongs.


When I have a working custom data type, I would go to the next and 
harder part. I would like to create a new structure like a sequence, and 
it should behave exactly like sequences, but separated by a date space. 
So I would have functions similar to the following:


* createsequencegroup(sequence_group_name text): create a new 
named structure for managing the sequence group;
* nextval(sequence_group_name text, context_date date): return 
the next value of the sequence (as a datedserial) belonging to the sequence 
group and associated with the context date. The value returned has the 
context_date in its date part and the next value for that date in the 
sequence part. The first call for a specific date would return 1 for the 
sequence part. Concerning concurrency and transactions, the function 
behaves exactly like nextval(sequence_group_name text);
* currval(sequence_group_name text, context_date date): the 
currval function counterpart;
* setval(sequence_group_name text, context_date date, int4 
value): the setval function counterpart;
* freeze_before(sequence_group_name text, freeze_date date): 
disallow using the sequence group with context dates before the freeze_date.


I would consider extending the data type to allow including 
information about the cluster which generated the values. This way, the 
user could set a configuration entry defining a byte value for 
identifying the cluster among others involved in replication, so that 
the sequence group could have different sequences not only for different 
dates, but for different nodes as well.


As I've said, I would like to package the resulting work as an 
extension.


For now, I would like some help on where to start. I've 
downloaded the postgres source code and have successfully compiled it 
using my Ubuntu desktop, although have not tested the resulting binary. 
Should I create a folder in the contrib directory and use another 
extension as a starting point? Is this the recommended path? Or is this 
too much and I should create a separate project?


Thanks in advance.

Best regards,
Luciano Barcellos



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgindent-polluted commits

2016-01-18 Thread Noah Misch
On Sat, Jan 16, 2016 at 09:57:45AM +, Simon Riggs wrote:
> On 16 January 2016 at 02:10, Noah Misch  wrote:
> > On Wed, Jan 13, 2016 at 12:13:11PM -0500, Tom Lane wrote:
> > > Basically this is trading off convenience of the committer (all of the
> > > alternatives Noah mentions are somewhat annoying) versus the convenience
> > > of post-commit reviewers.  I'm not sure that his recommendation is the
> > > best trade-off, nor that the situation is precisely comparable to
> > > pre-commit review.  There definitely will be pre-commit review, there
> > > may or may not be any post-commit review.
> >
> > That's a good summary.

> My objective in committing patches to PostgreSQL is to develop the Open
> Source version of PostgreSQL as a standalone product and I encourage others
> to do the same.
> 
> PostgreSQL is open source and therefore usable for various additional
> purposes, one of which is modified versions of PostgreSQL.
> 
> I will not go out of my way to cause problems for the secondary users of
> the code. I will try to implement one of the suggestions for whitespace
> handling, though may make mistakes in that, nobody being perfect.

Thanks.  Clean commits help so many audiences, including immediate post-commit
reviewers, intensive beta testers, fork maintainers, and hackers performing
root cause analysis on the bugs to be discovered in future years.  For what
it's worth, most committers already have been using some mix of strategy 2
(leave pgindent entirely to Bruce) and strategy 1 (neither add nor remove work
for the next whole-tree pgindent to do).  If you're already in that majority,
I advise no change.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Combining Aggregates

2016-01-18 Thread David Rowley
On 19 January 2016 at 02:44, Haribabu Kommi 
wrote:

> On Mon, Jan 18, 2016 at 10:32 PM, David Rowley
>  wrote:
>
> I just thought of directly mapping the structure with a text pointer,
> something like
> the below.
>
> result = PG_ARGISNULL(0) ? NULL : (text *) PG_GETARG_POINTER(0);
> state = (PolyNumAggState *)VARDATA(result);
>
> To handle the big-endian or little-endian, we may need some extra changes.
>
> Instead of adding 3 new columns to the pg_aggregate catalog table to handle
> the internal types, either something like the above
> or some other way is better IMO.


The problem with that is that most of these internal structs for the
aggregate states have pointers to other memory. Even if we laid those
bytes down into a bytea or something, doing so would not follow the
pointers to that other memory, and when we dereference those pointers
in another process we'll have problems, as the addresses belong to the
original process.

For example PolyNumAggState is defined as:

typedef NumericAggState PolyNumAggState;

and NumericAggState has:

NumericVar sumX; /* sum of processed numbers */
NumericVar sumX2; /* sum of squares of processed numbers */

And NumericVar has:

NumericDigit *buf; /* start of palloc'd space for digits[] */
NumericDigit *digits; /* base-NBASE digits */

Both of these point to other memory which won't be in the varlena type.

Serialization is the process of collecting all of that pointed-to data
up into some consecutive bytes.

Of course, that's not to say that there are never aggregate state structs
without any pointers (I've not checked), and in those cases we could
perhaps just make the serialize and deserialize functions a simple
memcpy() into a bytea array. In reality, though, as you mentioned, we'd
likely want to agree on some cross-platform format that handles different
byte orders, as we'll probably, one day, want to forward these values over
to some other server to finish off the aggregation.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Joe Conway
On 01/18/2016 04:16 PM, Joe Conway wrote:
> Please see the attached. Duplication removed.

Actually please see this version instead.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 923fe58..abf9a70 100644
*** a/src/backend/catalog/system_views.sql
--- b/src/backend/catalog/system_views.sql
*** CREATE VIEW pg_timezone_abbrevs AS
*** 433,438 
--- 433,444 
  CREATE VIEW pg_timezone_names AS
  SELECT * FROM pg_timezone_names();
  
+ CREATE VIEW pg_config AS
+ SELECT * FROM pg_config();
+ 
+ REVOKE ALL on pg_config FROM PUBLIC;
+ REVOKE EXECUTE ON FUNCTION pg_config() FROM PUBLIC;
+ 
  -- Statistics views
  
  CREATE VIEW pg_stat_all_tables AS
diff --git a/src/backend/utils/misc/Makefile b/src/backend/utils/misc/Makefile
index 7889101..a0c82c1 100644
*** a/src/backend/utils/misc/Makefile
--- b/src/backend/utils/misc/Makefile
*** include $(top_builddir)/src/Makefile.glo
*** 14,21 
  
  override CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS)
  
! OBJS = guc.o help_config.o pg_rusage.o ps_status.o rls.o \
!sampling.o superuser.o timeout.o tzparser.o
  
  # This location might depend on the installation directories. Therefore
  # we can't subsitute it into pg_config.h.
--- 14,21 
  
  override CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS)
  
! OBJS = guc.o help_config.o pg_config.o pg_rusage.o \
!ps_status.o rls.o sampling.o superuser.o timeout.o tzparser.o
  
  # This location might depend on the installation directories. Therefore
  # we can't subsitute it into pg_config.h.
diff --git a/src/backend/utils/misc/pg_config.c b/src/backend/utils/misc/pg_config.c
index ...05ee67b .
*** a/src/backend/utils/misc/pg_config.c
--- b/src/backend/utils/misc/pg_config.c
***
*** 0 
--- 1,102 
+ /*-
+  *
+  * pg_config.c
+  *		Expose same output as pg_config except as an SRF
+  *
+  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+  * Portions Copyright (c) 1994, Regents of the University of California
+  *
+  * IDENTIFICATION
+  *	  src/backend/utils/misc/pg_config.c
+  *
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "funcapi.h"
+ #include "miscadmin.h"
+ #include "catalog/pg_type.h"
+ #include "common/config_info.h"
+ #include "utils/builtins.h"
+ #include "utils/elog.h"
+ #include "port.h"
+ 
+ Datum
+ pg_config(PG_FUNCTION_ARGS)
+ {
+ 	ReturnSetInfo	   *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+ 	Tuplestorestate	   *tupstore;
+ 	HeapTuple			tuple;
+ 	TupleDesc			tupdesc;
+ 	AttInMetadata	   *attinmeta;
+ 	MemoryContext		per_query_ctx;
+ 	MemoryContext		oldcontext;
+ 	configdata		   *ConfigData;
+ 	size_tconfigdata_len;
+ 	char			   *values[2];
+ 	int	i = 0;
+ 
+ 	/* check to see if caller supports us returning a tuplestore */
+ 	if (!rsinfo || !(rsinfo->allowedModes & SFRM_Materialize))
+ 		ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+  errmsg("materialize mode required, but it is not "
+ 		"allowed in this context")));
+ 
+ 	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ 	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+ 
+ 	/* get the requested return tuple description */
+ 	tupdesc = CreateTupleDescCopy(rsinfo->expectedDesc);
+ 
+ 	/*
+ 	 * Check to make sure we have a reasonable tuple descriptor
+ 	 */
+ 	if (tupdesc->natts != 2 ||
+ 		tupdesc->attrs[0]->atttypid != TEXTOID ||
+ 		tupdesc->attrs[1]->atttypid != TEXTOID)
+ 		ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+  errmsg("query-specified return tuple and "
+ 		"function return type are not compatible")));
+ 
+ 	/* OK to use it */
+ 	attinmeta = TupleDescGetAttInMetadata(tupdesc);
+ 
+ 	/* let the caller know we're sending back a tuplestore */
+ 	rsinfo->returnMode = SFRM_Materialize;
+ 
+ 	/* initialize our tuplestore */
+ 	tupstore = tuplestore_begin_heap(true, false, work_mem);
+ 
+ 	ConfigData = get_configdata(my_exec_path, &configdata_len);
+ 	for (i = 0; i < configdata_len; i++)
+ 	{
+ 		values[0] = ConfigData[i].name;
+ 		values[1] = ConfigData[i].setting;
+ 
+ 		tuple = BuildTupleFromCStrings(attinmeta, values);
+ 		tuplestore_puttuple(tupstore, tuple);
+ 	}
+ 	
+ 	/*
+ 	 * no longer need the tuple descriptor reference created by
+ 	 * TupleDescGetAttInMetadata()
+ 	 */
+ 	ReleaseTupleDesc(tupdesc);
+ 
+ 	tuplestore_donestoring(tupstore);
+ 	rsinfo->setResult = tupstore;
+ 
+ 	/*
+ 	 * SFRM_Materialize mode expects us to return a NULL Datum. The actual
+ 	 * tuples are in our tuplestore and passed back through
+ 	 * rsinfo->setResult. rsinfo->setDesc is set to the tuple description
+ 	 * that we actually used to build our tuples with, so the caller can
+ 	 * verify we did what it was expecting.
+ 	 */
+ 	rsinfo->setDesc = tupdesc;
+ 	MemoryContextSwi

Re: [HACKERS] Improved tab completion for FDW DDL

2016-01-18 Thread Andreas Karlsson

On 01/11/2016 02:39 AM, Peter Eisentraut wrote:

The second part is not necessary, because there is already code that
completes FDW names after "FOREIGN DATA WRAPPER".  So this already works.


Good spot, thanks. I have no idea why I did not think it worked. Maybe 
it just did not work in 9.2 and I failed when trying to reproduce it on 
master.



- Also complete RENAME TO in ALTER FOREIGN DATA WRAPPER.


Done.


- Also complete OPTIONS in FOREIGN DATA WRAPPER and SERVER commands.


Done.

Andreas
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index c3c77bd..3d8cdf4 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1417,7 +1417,7 @@ psql_completion(const char *text, int start, int end)
 
 	/* ALTER FOREIGN DATA WRAPPER  */
 	else if (Matches5("ALTER", "FOREIGN", "DATA", "WRAPPER", MatchAny))
-		COMPLETE_WITH_LIST4("HANDLER", "VALIDATOR", "OPTIONS", "OWNER TO");
+		COMPLETE_WITH_LIST5("HANDLER", "VALIDATOR", "OPTIONS", "OWNER TO", "RENAME TO");
 
 	/* ALTER FOREIGN TABLE  */
 	else if (Matches4("ALTER", "FOREIGN", "TABLE", MatchAny))
@@ -1544,7 +1544,10 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_LIST3("MINVALUE", "MAXVALUE", "CYCLE");
 	/* ALTER SERVER  */
 	else if (Matches3("ALTER", "SERVER", MatchAny))
-		COMPLETE_WITH_LIST3("VERSION", "OPTIONS", "OWNER TO");
+		COMPLETE_WITH_LIST4("VERSION", "OPTIONS", "OWNER TO", "RENAME TO");
+	/* ALTER SERVER  VERSION */
+	else if (Matches5("ALTER", "SERVER", MatchAny, "VERSION", MatchAny))
+		COMPLETE_WITH_CONST("OPTIONS");
 	/* ALTER SYSTEM SET, RESET, RESET ALL */
 	else if (Matches2("ALTER", "SYSTEM"))
 		COMPLETE_WITH_LIST2("SET", "RESET");
@@ -1993,7 +1996,7 @@ psql_completion(const char *text, int start, int end)
 
 	/* CREATE FOREIGN DATA WRAPPER */
 	else if (Matches5("CREATE", "FOREIGN", "DATA", "WRAPPER", MatchAny))
-		COMPLETE_WITH_LIST2("HANDLER", "VALIDATOR");
+		COMPLETE_WITH_LIST3("HANDLER", "VALIDATOR", "OPTIONS");
 
 	/* CREATE INDEX --- is allowed inside CREATE SCHEMA, so use TailMatches */
 	/* First off we complete CREATE UNIQUE with "INDEX" */
@@ -2372,6 +2375,10 @@ psql_completion(const char *text, int start, int end)
 	else if (TailMatches3("FOREIGN", "DATA", "WRAPPER") &&
 			 !TailMatches4("CREATE", MatchAny, MatchAny, MatchAny))
 		COMPLETE_WITH_QUERY(Query_for_list_of_fdws);
+	/* applies in CREATE SERVER */
+	else if (TailMatches4("FOREIGN", "DATA", "WRAPPER", MatchAny) &&
+			 HeadMatches2("CREATE", "SERVER"))
+		COMPLETE_WITH_CONST("OPTIONS");
 
 /* FOREIGN TABLE */
 	else if (TailMatches2("FOREIGN", "TABLE") &&
@@ -2816,6 +2823,8 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_QUERY(Query_for_list_of_user_mappings);
 	else if (Matches5("CREATE|ALTER|DROP", "USER", "MAPPING", "FOR", MatchAny))
 		COMPLETE_WITH_CONST("SERVER");
+	else if (Matches7("CREATE|ALTER", "USER", "MAPPING", "FOR", MatchAny, "SERVER", MatchAny))
+		COMPLETE_WITH_CONST("OPTIONS");
 
 /*
  * VACUUM [ FULL | FREEZE ] [ VERBOSE ] [ table ]

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Making plpython 2 and 3 coexist a bit better

2016-01-18 Thread Bruce Momjian
On Mon, Jan 18, 2016 at 01:10:05PM -0500, Tom Lane wrote:
> Bruce Momjian  writes:
> > On Mon, Jan 11, 2016 at 10:44:42AM -0500, Tom Lane wrote:
> >> But it gets worse: a report today in pgsql-general points out that
> >> even if you have the two languages in use *in different databases*,
> >> pg_upgrade will fail, because pg_upgrade rather overenthusiastically
> >> loads every .so mentioned anywhere in the source installation into
> >> one session.
> 
> > The fix for that would be for pg_upgrade to change
> > check_loadable_libraries() to start a new session for each LOAD command.
> > Would you like that done?
> 
> Not necessary anymore, at least not for PL/Python; and that solution
> sounds a tad expensive.

Yes, it would certainly be inefficient.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COPY (... tab completion

2016-01-18 Thread Andreas Karlsson

On 01/19/2016 01:57 AM, Andreas Karlsson wrote:

Thanks for the review. A new version is attached.


Whops, attached the wrong file.

Andreas
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index ad8a580..bc80ed0 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1934,11 +1934,18 @@ psql_completion(const char *text, int start, int end)
 /* COPY */
 
 	/*
-	 * If we have COPY [BINARY] (which you'd have to type yourself), offer
-	 * list of tables (Also cover the analogous backslash command)
+	 * If we have COPY, offer list of tables or "(" (Also cover the analogous
+	 * backslash command).
 	 */
-	else if (Matches1("COPY|\\copy") || Matches2("COPY", "BINARY"))
+	else if (Matches1("COPY|\\copy"))
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables,
+   " UNION ALL SELECT '('");
+	/* If we have COPY BINARY, complete with list of tables */
+	else if (Matches2("COPY", "BINARY"))
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, NULL);
+	/* If we have COPY (, complete it with legal commands */
+	else if (TailMatches2("COPY|\\copy", "("))
+		COMPLETE_WITH_LIST7("SELECT", "TABLE", "VALUES", "INSERT", "UPDATE", "DELETE", "WITH");
 	/* If we have COPY [BINARY] , complete it with "TO" or "FROM" */
 	else if (Matches2("COPY|\\copy", MatchAny) ||
 			 Matches3("COPY", "BINARY", MatchAny))

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COPY (... tab completion

2016-01-18 Thread Andreas Karlsson

On 01/11/2016 02:01 AM, Peter Eisentraut wrote:

I think this would be a useful addition.  A couple of problems:


Thanks for the review. A new version is attached.


This change in the comment doesn't make sense to me and doesn't seem to
match the code:

-   /* If we have COPY [BINARY] , complete it with "TO" or "FROM" */
+   /* If we have COPY|BINARY , complete it with "TO" or "FROM" */


Fixed. As Tom correctly guessed this was the result of a mistake when 
rebasing.



The list of commands to allow as the "query" inside the parentheses is
documented to be: SELECT, VALUES, INSERT, UPDATE or DELETE; and actually
TABLE should also work.  Your list doesn't include all of those.  So
please adjust that.


Fixed. And TABLE works too.

Andreas
commit 3b7a808e710e613f81abd0207847a3378ec3192c
Author: Andreas Karlsson 
Date:   Sat Dec 12 17:38:19 2015 +0100

Improve COPY completion

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index ad8a580..c928ebf 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1934,11 +1934,18 @@ psql_completion(const char *text, int start, int end)
 /* COPY */
 
 	/*
-	 * If we have COPY [BINARY] (which you'd have to type yourself), offer
-	 * list of tables (Also cover the analogous backslash command)
+	 * If we have COPY, offer list of tables or "(" (Also cover the analogous
+	 * backslash command).
 	 */
-	else if (Matches1("COPY|\\copy") || Matches2("COPY", "BINARY"))
+	else if (Matches1("COPY|\\copy"))
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables,
+   " UNION ALL SELECT '('");
+	/* If we have COPY BINARY, complete with list of tables */
+	else if (Matches2("COPY", "BINARY"))
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, NULL);
+	/* If we have COPY (, complete it with legal commands */
+	else if (TailMatches2("COPY|\\copy", "("))
+		COMPLETE_WITH_LIST6("SELECT", "TABLE", "WITH", "INSERT", "UPDATE", "DELETE");
 	/* If we have COPY [BINARY] , complete it with "TO" or "FROM" */
 	else if (Matches2("COPY|\\copy", MatchAny) ||
 			 Matches3("COPY", "BINARY", MatchAny))

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Interesting read on SCM upending software and hardware architecture

2016-01-18 Thread Jim Nasby

On 1/18/16 2:47 PM, Peter Geoghegan wrote:

On Mon, Jan 18, 2016 at 12:31 PM, Robert Haas  wrote:

People keep predicting the death of spinning media, but I think
it's not happening to anywhere near as fast as that people think.
Yes, I'm writing this on a laptop with an SSD, and my personal laptop
also has an SSD, but their immediate predecessors did not, and these
are fairly expensive laptops.  And most customers I talk to are still
using spinning disks.  Meanwhile, main memory is getting so large that
even pretty significant databases can be entirely RAM-cached.  So I
tend to think that this is a lot less exciting than people who are not
me seem to think.


I tend to agree that the case for SSDs as a revolutionary technology
has been significantly overstated. This recent article makes some
interesting points:

http://www.zdnet.com/article/what-we-learned-about-ssds-in-2015/

I think it's much more true that main memory scaling (in particular,
main memory capacity) has had a huge impact, but that trend appears to
now be stalling.


My original article doesn't talk about SSDs; it's talking about 
non-volatile memory architectures (quoted extract below). Fusion IO is 
an example of this, and if NVDIMMs become available we'll see even 
faster non-volatile performance.


To me, the most interesting point the article makes is that systems now 
need much better support for multiple classes of NV storage. I agree 
with your point that spinning rust is here to stay for a long time, 
simply because it's cheap as heck. So systems need to become much better 
at moving data between different layers of NV storage so that you're 
getting the biggest bang for the buck. That will remain critical as long 
as SCMs remain 25x more expensive than rust.


Quote from article:



Flash-based storage devices are not new: SAS and SATA SSDs have been 
available for at least the past decade, and have brought flash memory 
into computers in the same form factor as spinning disks. SCMs reflect a 
maturing of these flash devices into a new, first-class I/O device: SCMs 
move flash off the slow SAS and SATA buses historically used by disks, 
and onto the significantly faster PCIe bus used by more 
performance-sensitive devices such as network interfaces and GPUs. 
Further, emerging SCMs, such as non-volatile DIMMs (NVDIMMs), interface 
with the CPU as if they were DRAM and offer even higher levels of 
performance for non-volatile storage.


Today's PCIe-based SCMs represent an astounding three-order-of-magnitude 
performance change relative to spinning disks (~100K I/O operations per 
second versus ~100). For computer scientists, it is rare that the 
performance assumptions that we make about an underlying hardware 
component change by 1,000x or more. This change is punctuated by the 
fact that the performance and capacity of non-volatile memories continue 
to outstrip CPUs in year-on-year performance improvements, closing and 
potentially even inverting the I/O gap.


The performance of SCMs means that systems must no longer "hide" them 
via caching and data reduction in order to achieve high throughput. 
Unfortunately, however, this increased performance comes at a high 
price: SCMs cost 25x as much as traditional spinning disks ($1.50/GB 
versus $0.06/GB), with enterprise-class PCIe flash devices costing 
between three and five thousand dollars each. This means that the cost 
of the non-volatile storage can easily outweigh that of the CPUs, DRAM, 
and the rest of the server system that they are installed in. The 
implication of this shift is significant: non-volatile memory is in the 
process of replacing the CPU as the economic center of the datacenter.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Bruce Momjian
On Mon, Jan 18, 2016 at 01:54:02PM -0800, Joe Conway wrote:
> On 01/18/2016 01:47 PM, Bruce Momjian wrote:
> > On Sun, Jan 17, 2016 at 02:24:46PM -0800, Joe Conway wrote:
> >> On 01/16/2016 06:02 AM, Michael Paquier wrote:
> >>> On Wed, Dec 30, 2015 at 9:08 AM, Joe Conway  wrote:
>  1) Change NextXID output format from "%u/%u" to "%u:%u"
> (see recent hackers thread)
> >>>
> >>> ! printf(_("Latest checkpoint's NextXID:  %u/%u\n"),
> >>>  ControlFile.checkPointCopy.nextXidEpoch,
> >>>  ControlFile.checkPointCopy.nextXid);
> >>>   printf(_("Latest checkpoint's NextOID:  %u\n"),
> >>> --- 646,652 
> >>>  ControlFile.checkPointCopy.ThisTimeLineID);
> >>>   printf(_("Latest checkpoint's full_page_writes: %s\n"),
> >>>  ControlFile.checkPointCopy.fullPageWrites ? _("on") : 
> >>> _("off"));
> >>> ! printf(_("Latest checkpoint's NextXID:  %u:%u\n"),
> >>> This should be definitely a separate patch.
> >>
> >> Ok. Notwithstanding Simon's reply, there seems to be consensus that this
> >> is the way to go. Will commit it this way unless some additional
> >> objections surface in the next day or so.
> > 
> > FYI, this slash-colon change will break pg_upgrade unless it is patched.
> > Do you want a patch from me?
> 
> Didn't realize that -- yes please.

Sure, attached, and it would be applied only to head, where you change
pg_controldata.  pg_upgrade has to read the old and new cluster's
pg_controldata.  We could get more sophisticated by checking the catalog
version number where the format was changed, but that doesn't seem worth
it, and is overly complex because we get the catalog version number from
pg_controldata, so you would be adding a dependency on the ordering of
the pg_controldata entries.

I can test all supported Postgres versions with pg_upgrade once you apply
the patch, but I think it will be fine.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
new file mode 100644
index 1f7b65e..aaaea7b
*** a/src/bin/pg_upgrade/controldata.c
--- b/src/bin/pg_upgrade/controldata.c
*** get_control_data(ClusterInfo *cluster, b
*** 198,203 
--- 198,206 
  			cluster->controldata.chkpnt_nxtepoch = str2uint(p);
  
  			p = strchr(p, '/');
+ 			/* delimiter changed from '/' to ':' in 9.6 */
+ 			if (p == NULL && GET_MAJOR_VERSION(cluster->major_version) >= 906)
+ p = strchr(p, ':');
  			if (p == NULL || strlen(p) <= 1)
  pg_fatal("%d: controldata retrieval problem\n", __LINE__);
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 6:55 AM, Robert Haas  wrote:
> On Mon, Jan 18, 2016 at 4:43 AM, Andres Freund  wrote:
>> On 2016-01-18 10:18:34 +0900, Michael Paquier wrote:
>>> We are trying to hide away from non-superusers WAL-related information
>>> in system views and system function, that's my point to do the same
>>> here.
>>
>> We are? pg_current_xlog_insert_location(), pg_current_xlog_location(),
>> pg_is_xlog_replay_paused(), pg_stat_bgwriter ... are all non-superuser?
>
> Yeah.  There's certainly no need for the WAL positions reported by
> pg_controldata to be any more restricted than other functions that
> give away information about WAL position.  We had some discussion
> about restricting WAL position information in general due to possible
> information leakage, and if we do that, then perhaps this should be
> similarly restricted.  Presumably vulnerabilities here would be harder
> to exploit because the values change much less frequently, so if you
> are trying to learn something the rate at which you can glean
> information will be much lower.  But maybe we should put the same
> restrictions on all of it.

Well, we can still use REVOKE on those functions, so it is not like a
user cannot restrict access to this information. The current
situation makes it hard for both us and the user to figure out whether
an instance is considered secure or not, so things are unbalanced.
Perhaps the best answer is to add a documentation section telling
people how to harden their database after initdb'ing it, with
different sections aimed at hardening different things, one being the
WAL information, and mentioning as well in those docs which hardening
action covers what. Stephen mentioned that some time ago; that would
still be good.

>>> For the data of pg_control, it seems to me that this can give
>>> away to any authorized users hints regarding the way Postgres is
>>> built, perhaps letting people know for example which Linux
>>> distribution is used and which flavor of Postgres is used (we already
>>> give away some information with version() but that's different than
>>> the libraries this is linking to), so an attacker may be able to take
>>> advantage of that to do attacks on potentially outdated packages? And
>>> I would think that many users are actually going to revoke the access
>>> of those functions to public if we are going to make them
>>> world-visible. It is easier as well to restrict things first, and then
>>> relax if necessary, than the opposite as well.
>>
>> Meh, that seems pretty far into pseudo security arguments.
>
> Yeah, I really don't see anything in the pg_controldata output that
> looks sensitive.  The WAL locations are the closest of anything,
> AFAICS.

The system identifier perhaps? I honestly don't have on top of my head
a way to exploit this information, but leaking it at the SQL level seems
sensitive: it's a unique identifier of a Postgres instance, used when
setting up a cluster, after all.
-- 
Michael




Re: [HACKERS] source files without copyright notices

2016-01-18 Thread Michael Paquier
On Tue, Jan 19, 2016 at 5:55 AM, Joe Conway  wrote:
> I never noticed before, but today I came across a header file without
> any copyright notice at all. Turns out there are quite a few:
>
>   grep -riL Copyright src/* --include=*.c --include=*.h
>
> Shouldn't at least some of these get a copyright?

+1 for adding them where needed to be consistent; for example, the
files in snowball don't need them. It would be nice as well to fix the
header format of what is in ecpg... That's boring and repetitive
work; I don't mind helping out.
-- 
Michael




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Joe Conway
On 01/17/2016 02:29 PM, Joe Conway wrote:
>> Documentation would be good to have.
> 
> I'm definitely happy to write the docs, but earlier it was not clear
> that there was enough support for this patch at all, and I don't want to
> waste cycles writing docs for a feature that ultimately does not get
> committed. What's the current feel for whether this feature in general
> is a good idea or bad?


Thoughts anyone?

>> ! # don't include subdirectory-path-dependent -I and -L switches
>> ! STD_CPPFLAGS := $(filter-out -I$(top_srcdir)/src/include
>> -I$(top_builddir)/src/include,$(CPPFLAGS))
>> ! STD_LDFLAGS := $(filter-out -L$(top_builddir)/src/port,$(LDFLAGS))
>> ! override CPPFLAGS += -DVAL_CONFIGURE="\"$(configure_args)\""
>> ! override CPPFLAGS += -DVAL_CC="\"$(CC)\""
>> ! override CPPFLAGS += -DVAL_CPPFLAGS="\"$(STD_CPPFLAGS)\""
>> ! override CPPFLAGS += -DVAL_CFLAGS="\"$(CFLAGS)\""
>> ! override CPPFLAGS += -DVAL_CFLAGS_SL="\"$(CFLAGS_SL)\""
>> ! override CPPFLAGS += -DVAL_LDFLAGS="\"$(STD_LDFLAGS)\""
>> ! override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
>> ! override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
>> ! override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
>> This duplication from src/bin/pg_config is a bad idea. Couldn't we do
>> something in src/common instead that sets up values at compilation
>> time in a routine (perhaps set of routines) available for both the
>> frontend and backend?
> 
> Will take a look at it.

Please see the attached. Duplication removed.

Thanks,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 923fe58..abf9a70 100644
*** a/src/backend/catalog/system_views.sql
--- b/src/backend/catalog/system_views.sql
*** CREATE VIEW pg_timezone_abbrevs AS
*** 433,438 
--- 433,444 
  CREATE VIEW pg_timezone_names AS
  SELECT * FROM pg_timezone_names();
  
+ CREATE VIEW pg_config AS
+ SELECT * FROM pg_config();
+ 
+ REVOKE ALL on pg_config FROM PUBLIC;
+ REVOKE EXECUTE ON FUNCTION pg_config() FROM PUBLIC;
+ 
  -- Statistics views
  
  CREATE VIEW pg_stat_all_tables AS
diff --git a/src/backend/utils/misc/Makefile b/src/backend/utils/misc/Makefile
index 7889101..a0c82c1 100644
*** a/src/backend/utils/misc/Makefile
--- b/src/backend/utils/misc/Makefile
*** include $(top_builddir)/src/Makefile.glo
*** 14,21 
  
  override CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS)
  
! OBJS = guc.o help_config.o pg_rusage.o ps_status.o rls.o \
!	sampling.o superuser.o timeout.o tzparser.o
  
  # This location might depend on the installation directories. Therefore
  # we can't subsitute it into pg_config.h.
--- 14,21 
  
  override CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS)
  
! OBJS = guc.o help_config.o pg_config.o pg_rusage.o \
!	ps_status.o rls.o sampling.o superuser.o timeout.o tzparser.o
  
  # This location might depend on the installation directories. Therefore
  # we can't subsitute it into pg_config.h.
diff --git a/src/backend/utils/misc/pg_config.c b/src/backend/utils/misc/pg_config.c
index ...05ee67b .
*** a/src/backend/utils/misc/pg_config.c
--- b/src/backend/utils/misc/pg_config.c
***
*** 0 
--- 1,102 
+ /*-
+  *
+  * pg_config.c
+  *		Expose same output as pg_config except as an SRF
+  *
+  * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+  * Portions Copyright (c) 1994, Regents of the University of California
+  *
+  * IDENTIFICATION
+  *	  src/backend/utils/misc/pg_config.c
+  *
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "funcapi.h"
+ #include "miscadmin.h"
+ #include "catalog/pg_type.h"
+ #include "common/config_info.h"
+ #include "utils/builtins.h"
+ #include "utils/elog.h"
+ #include "port.h"
+ 
+ Datum
+ pg_config(PG_FUNCTION_ARGS)
+ {
+ 	ReturnSetInfo	   *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+ 	Tuplestorestate	   *tupstore;
+ 	HeapTuple			tuple;
+ 	TupleDesc			tupdesc;
+ 	AttInMetadata	   *attinmeta;
+ 	MemoryContext		per_query_ctx;
+ 	MemoryContext		oldcontext;
+ 	configdata		   *ConfigData;
+ 	size_t				configdata_len;
+ 	char			   *values[2];
+ 	int	i = 0;
+ 
+ 	/* check to see if caller supports us returning a tuplestore */
+ 	if (!rsinfo || !(rsinfo->allowedModes & SFRM_Materialize))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_SYNTAX_ERROR),
+ 				 errmsg("materialize mode required, but it is not "
+ 						"allowed in this context")));
+ 
+ 	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ 	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+ 
+ 	/* get the requested return tuple description */
+ 	tupdesc = CreateTupleDescCopy(rsinfo->expectedDesc);
+ 
+ 	/*
+ 	 * Check to make sure we have a reasonable tuple descriptor
+ 	 */
+ 	if (tupdesc->natts != 2 ||
+ 		tupdesc->attrs[0]->atttypid != TE

Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-18 Thread Vitaly Burovoy
On 1/4/16, Robert Haas  wrote:
> On Mon, Jan 4, 2016 at 10:17 AM, Pavel Stehule 
> wrote:
>> [ new patch ]
>
> + case '-':
> + ereport(ERROR,
> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> +  errmsg("size cannot be negative")));
>
> Why not?  I bet if you copy any - sign to the buffer, this will Just Work.

Hmm. The function's name is pg_size_bytes. How can a number of bytes be
negative? How can any length be negative? If anyone inserts a '-' sign
into an argument, it is a copy-paste error. I don't see any case where
negative input values are possible.

I prefer an error message instead of getting all relations (by
comparing from the initial letter) just because of a copy-paste
mistake or incomplete checking of input values at the app level.

> ...
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

-- 
Best regards,
Vitaly Burovoy




Re: [HACKERS] jsonb - jsonb operators

2016-01-18 Thread Vitaly Burovoy
On 1/15/16, Glyn Astill  wrote:
> Hi all,
>
> I was just looking through the new jsonb operators in the 9.5 release, and
> was wondering if there's any future intention to add a delete operator that
> removes element/pair matches?  I.e. some sort of top-level "jsonb - jsonb"
> operator, e.g.
>
>
> # select '{"a":1, "b":2}'::jsonb - '{"b":2, "a":4}'::jsonb;
> ?column?
> --
> {"a": 1}
> (1 row)
>
> Or would this behaviour be classed as incorrect in some way?
>
> Thanks
> Glyn

I think the 'jsonb - jsonb' operator's behavior in such a case is not obvious.
How is one to know the result is not like this:

# select '{"a":1, "b":2}'::jsonb - '{"b":2, "a":4}'::jsonb;
?column?
--
{"a": -3, "b": 0}
(1 row)


P.S.: I guess an _operator_ jsonb+jsonb doesn't exist for the same reason…
-- 
Best regards,
Vitaly Burovoy




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Andres Freund
On 2016-01-18 16:56:22 -0600, Kevin Grittner wrote:
> On Mon, Jan 18, 2016 at 4:50 PM, Andres Freund  wrote:
> 
> > Now I'm equally unconvinced that it's worthwhile to do anything
> > here. I just don't think benchmarking plays a role either way.
> 
> Well, that would be the crucial point on which we differ -- the
> rest is all agreement.  I don't think we should accept the patch
> *in the absence* of benchmarking to show a result that is neutral
> or better.  Spinlocks are just too performance-critical and too
> fussy to accept a change on the basis that "the source code looks
> fine".  IMO, anyway.

By that justification we need to start benchmarking adding new variables
on the stack; most of the time that will have a bigger performance impact
than this.  Benchmarking minor source code cleanups is just not worth
the time.




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Kevin Grittner
On Mon, Jan 18, 2016 at 4:50 PM, Andres Freund  wrote:

> Now I'm equally unconvinced that it's worthwhile to do anything
> here. I just don't think benchmarking plays a role either way.

Well, that would be the crucial point on which we differ -- the
rest is all agreement.  I don't think we should accept the patch
*in the absence* of benchmarking to show a result that is neutral
or better.  Spinlocks are just too performance-critical and too
fussy to accept a change on the basis that "the source code looks
fine".  IMO, anyway.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Peter Geoghegan
On Mon, Jan 18, 2016 at 2:39 PM, Kevin Grittner  wrote:
>> but I don't think that Andreas' patch is necessarily a
>> performance patch. There can be value in removing superfluous
>> code; doing so sometimes clarifies intent and understanding.
>
> Well, that's why I said I would be satisfied with a neutral
> benchmark result -- when there is a tie, the shorter, simpler code
> should generally win.  I'm not really sure what there was in what I
> said to argue about; since then I've just been trying to figure that
> out.  If we all agree, let's let it drop.

If we don't want to apply this, then I think that a sensible
compromise would be to add a code comment that says that we don't
believe the LOCK prefix matters.

-- 
Peter Geoghegan




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Andres Freund
On 2016-01-18 16:14:05 -0600, Kevin Grittner wrote:
> Unconvinced that we should do performance testing on a proposed
> performance patch before accepting it

I'm unconvinced that it makes sense to view this as a performance
patch, and unconvinced that you can sanely measure it. The lock prefix
is a one-byte instruction prefix, and lock xchg and xchg are exactly
the same, instruction width aside. It's just a little bit less work
for the instruction decoder.

The point about alignment and such is, that changing some code somewhere
is likely to have a bigger performance impact than the actual effect of
the removal of those few bytes. So when you benchmark, you'd just
benchmark a slightly changed code layout.
objdump -d build/postgres/dev-assert/vpath/src/backend/postgres | grep 'lock xchg' | head -n1
  4b732f:	f0 86 01	lock xchg %al,(%rcx)

The f0 is the lock prefix. In total there are 22 of them in the postgres
codebase, when compiled with my flags/compiler.

I think it's unrealistic to benchmark slight codemovements on a regular
basis, particularly using a large machine. There's just not enough time
and hardware around for that.

Now I'm equally unconvinced that it's worthwhile to do anything
here. I just don't think benchmarking plays a role either way.

>, that the changes in NUMA
> scheduling in the Linux 3.8 kernel have a major effect on how well
> our code performs at high concurrency on NUMA machines with a lot
> of memory nodes

That I believe immediately.




Re: [HACKERS] Buildfarm server move

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 05:20 PM, Andrew Dunstan wrote:

People,

Apologies for the late notice.

Tomorrow, January 18th, at 4.00 pm US East Coast time (UT - 5.0) we 
will be moving the buildfarm server from its current home at 
CommandPrompt, where we have been ever since we started, to a machine 
that is part of the standard core infrastructure. In doing so we will 
be moving to a) a more modern and supported PostgreSQL version, and b) 
a machine with more disk space so that our current severe space 
shortage will be alleviated. In addition, the community would be much 
better placed to maintain the buildfarm if either JD or I were to fall 
under a bus.


The outage is expected to last about 4 hours or less, and we will send 
out notifications when this is complete.


Buildfarm owners who want to avoid getting reporting failures should 
disable their animals during that time. We don't have an avalanche of 
commits right now either, but it might also be nice if committers were 
to refrain from adding changes in the hours leading up to this and 
until we announce that we're back online, for the benefit of those 
owners who don't see this message in time.


Thanks in advance for your help and understanding.

And many thanks to CommandPrompt for their constant support over the 
many years we've been in operation.






Yes, sorry about that. It will be on the 19th. Fat fingers strike again.

cheers

andrew





[HACKERS] Re: [JDBC] 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-18 Thread Thomas Kellerer
Robert Haas wrote:
> This isn't the first complaint about this mechanism that we've gotten,
> and it won't be the last.  Way too many of our users are way more
> aware than they should be that the threshold here is five rather than
> any other number, which to me is a clear-cut sign that this needs to
> be improved.  How to improve it is a harder question.  We lack the
> ability to do any kind of sensitivity analysis on a plan, so we can't
> know whether there are other parameter values that would have resulted
> in a different plan, nor can we test whether a particular set of
> parameter values would have changed the outcome.

(I initially posted that question on the JDBC mailing list)

To be honest: looking at the efforts Oracle has made from 9 up through 12, I
am not sure this is a problem that can be solved by caching plans.

Even with the new "in-flight" re-planning in Oracle 12 ("cardinality
feedback") and all the effort that goes into caching plans we are still
seeing similar problems with (prepared) statements that are suddenly slow.
And as far as I can tell, the infrastructure around plan caching,
invalidation, bind variable peeking and all that seems to be a *lot* more
complex ("sophisticated") in Oracle compared to Postgres. And the results
don't seem to justify the effort (at least in my experience).

With all the problems I have seen (in Oracle and Postgres) I think that
maybe a better solution to this problem is to make the planner fast (and
reliable) enough so that plan caching isn't necessary in the first place. 

However I have no idea how feasible that is. 





--
View this message in context: 
http://postgresql.nabble.com/Fwd-JDBC-Re-9-4-1207-behaves-differently-with-server-side-prepared-statements-compared-to-9-2-1102-tp5881825p5882835.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] Buildfarm server move

2016-01-18 Thread Stephen Frost
Tom, all,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Andrew Dunstan  writes:
> > Tomorrow, January 18th, at 4.00 pm US East Coast time (UT - 5.0) we will 
> > be moving the buildfarm server from its current home at CommandPrompt, 
> 
> Um, this message is postmarked 18 Jan 17:20, an hour later than the
> stated move time.  Did you mean the move will be Tue 19 Jan?

Yes.  It'll be tomorrow.

Thanks!

Stephen




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Andres Freund
On January 18, 2016 11:10:35 PM GMT+01:00, Stephen Frost  
wrote:
>* Robert Haas (robertmh...@gmail.com) wrote:
>> On Mon, Jan 18, 2016 at 4:43 AM, Andres Freund 
>wrote:
>> > Meh, that seems pretty far into pseudo security arguments.
>> 
>> Yeah, I really don't see anything in the pg_controldata output that
>> looks sensitive.  The WAL locations are the closest of anything,
>> AFAICS.
>
>Heikki already showed how the WAL location information could be
>exploited if compression is enabled.
>
>I believe that's something we should care about and fix in one way or
>another (my initial approach was using default roles, but using the ACL
>system and starting out w/ no rights granted to that function also
>works).

Sure. But it's pointless to make things more complicated when there are 
functions providing equivalent information already.

--- 
Please excuse brevity and formatting - I am writing this on my mobile phone.




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Kevin Grittner
On Mon, Jan 18, 2016 at 4:24 PM, Peter Geoghegan  wrote:
> On Mon, Jan 18, 2016 at 2:14 PM, Kevin Grittner  wrote:

>> It's hard to understand quite what you're saying there.  If you're
>> saying that code changes that should be performance neutral can
>> sometimes affect performance because of alignment of code with
>> cache line boundaries -- I absolutely agree; is that an argument
>> against performance testing performance patches?
>
> No, it isn't an argument against performance testing patches like
> this, but I don't think anyone suggested otherwise.

I'm still not sure what argument he *was* making, but I'm glad to
hear it wasn't that.

> Of course every performance related patch should be tested to
> make sure it meets its goals and at acceptable cost,

I argued that it should be tested to ensure that it caused no
regression in a case which has been a problem for some of our
customers (spinlock contention at high concurrency on a machine
with 4 or more NUMA nodes).  We have been able to solve those
problems, but it has been a fussy business -- sometimes we have
tweaked spinlock code and sometimes that has not worked but an OS
upgrade has.  I am quite unconvinced about whether the change will
help, hurt, or have no impact on these problems; I'm only arguing
for testing to find out.

> but I don't think that Andreas' patch is necessarily a
> performance patch. There can be value in removing superfluous
> code; doing so sometimes clarifies intent and understanding.

Well, that's why I said I would be satisfied with a neutral
benchmark result -- when there is a tie, the shorter, simpler code
should generally win.  I'm not really sure what there was in what I
said to argue about; since then I've just been trying to figure that
out.  If we all agree, let's let it drop.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Buildfarm server move

2016-01-18 Thread Tom Lane
Andrew Dunstan  writes:
> Tomorrow, January 18th, at 4.00 pm US East Coast time (UT - 5.0) we will 
> be moving the buildfarm server from its current home at CommandPrompt, 

Um, this message is postmarked 18 Jan 17:20, an hour later than the
stated move time.  Did you mean the move will be Tue 19 Jan?

regards, tom lane




[HACKERS] Random inconsistencies in GiST support function declarations

2016-01-18 Thread Tom Lane
I was idly trying to improve the just-added index AM amvalidate()
functions by having them verify the expected signatures (argument
and result types) of opclass support functions.  opr_sanity currently
does this for btree, hash, and spgist functions, but not for other
cases; it'd be useful IMO if we had more complete coverage on that.

However, I was soon rudely reminded of the reason that opr_sanity
doesn't check GiST support functions, which is that their declarations
in pg_proc are wildly inconsistent.  An example is that the strategy
number argument of GiST consistent functions is stated in the
documentation to be smallint, which is the way that many of the contrib
modules declare that, and all of the consistent functions I've looked
at use PG_GETARG_UINT16() to fetch it ... but most of the built-in
consistent functions declare the argument as integer not smallint.
(This has no impact on what happens at runtime, since the GiST core
code pays no attention to what the functions are declared as.)

There's not a lot of sanity to the declarations of union functions
either, and I've not even gotten to the other seven support functions.

I think it'd be a good idea to clean these things up, rather than weaken
the amvalidate() tests to the point where they'll accept all the existing
weirdnesses.  And the documentation needs to match reality more closely
in any case.

Fixing the pg_proc entries in HEAD seems like no big deal, but some of
the errors are in contrib modules.  If we wanted to be really clean
about that, we'd have to bump those modules' extension versions, which
is a pain in the rear.  Since this has no user-visible impact AFAICS
(the support functions not being callable from SQL), I'm inclined to
just fix the incorrect declarations in-place.  I think it's sufficient
if the contrib modules pass amvalidate checks in future, I don't feel
a need to make existing installations pass.

Any objections?

regards, tom lane




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Peter Geoghegan
On Mon, Jan 18, 2016 at 2:14 PM, Kevin Grittner  wrote:
>> You get just as much churn by changing code elsewhere, which
>> often causes code movement and alignment changes.
>
> It's hard to understand quite what you're saying there.  If you're
> saying that code changes that should be performance neutral can
> sometimes affect performance because of alignment of code with
> cache line boundaries -- I absolutely agree; is that an argument
> against performance testing performance patches?

No, it isn't an argument against performance testing patches like
this, but I don't think anyone suggested otherwise. Of course every
performance related patch should be tested to make sure it meets its
goals and at acceptable cost, but I don't think that Andreas' patch is
necessarily a performance patch. There can be value in removing
superfluous code; doing so sometimes clarifies intent and
understanding.

-- 
Peter Geoghegan




[HACKERS] Buildfarm server move

2016-01-18 Thread Andrew Dunstan

People,

Apologies for the late notice.

Tomorrow, January 18th, at 4.00 pm US East Coast time (UT - 5.0) we will 
be moving the buildfarm server from its current home at CommandPrompt, 
where we have been ever since we started, to a machine that is part of 
the standard core infrastructure. In doing so we will be moving to a) a 
more modern and supported PostgreSQL version, and b) a machine with more 
disk space so that our current severe space shortage will be alleviated. 
In addition, the community would be much better placed to maintain the 
buildfarm if either JD or I were to fall under a bus.


The outage is expected to last about 4 hours or less, and we will send 
out notifications when this is complete.


Buildfarm owners who want to avoid getting reporting failures should 
disable their animals during that time. We don't have an avalanche of 
commits right now either, but it might also be nice if committers were 
to refrain from adding changes in the hours leading up to this and until 
we announce that we're back online, for the benefit of those owners who 
don't see this message in time.


Thanks in advance for your help and understanding.

And many thanks to CommandPrompt for their constant support over the 
many years we've been in operation.


cheers

andrew




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Kevin Grittner
On Mon, Jan 18, 2016 at 3:47 PM, Andres Freund  wrote:
> On January 18, 2016 10:42:42 PM GMT+01:00, Kevin Grittner  
> wrote:

>> I took a look at this and agree that the shorter, simpler code
>> proposed in this patch should make no *logical* difference, and
>> looks like it *should* have a neutral or beneficial affect; but
>> performance tuning in general, and spinlock performance in
>> particular, is full of surprises.  We have seen customers suffer
>> poor scaling on their brand new monster machines because of the
>> interaction between NUMA scheduling and our spinlock
>> implementation, and seen those problems go away with an upgrade
>> from pre-3.8 to post-3.8 kernels.  I would be hesitant to accept
>> this change without seeing a benchmark on a large NUMA machine with
>> 4 or more memory nodes, on Linux kernels both before and after 3.8,
>> to make sure that the effects are at least neutral.
>
> Unconvinced.

Unconvinced that we should do performance testing on a proposed
performance patch before accepting it, that the changes in NUMA
scheduling in the Linux 3.8 kernel have a major effect on how well
our code performs at high concurrency on NUMA machines with a lot
of memory nodes, that patches to improve performance sometimes
cause surprising regressions, that the results will come out any
particular way, or that the difference will be out of the noise?
Personally I'm only convinced on the first three of those.

> You get just as much churn by changing code elsewhere, which
> often causes code movement and alignment changes.

It's hard to understand quite what you're saying there.  If you're
saying that code changes that should be performance neutral can
sometimes affect performance because of alignment of code with
cache line boundaries -- I absolutely agree; is that an argument
against performance testing performance patches?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> On Mon, Jan 18, 2016 at 4:43 AM, Andres Freund  wrote:
> > Meh, that seems pretty far into pseudo security arguments.
> 
> Yeah, I really don't see anything in the pg_controldata output that
> looks sensitive.  The WAL locations are the closest of anything,
> AFAICS.

Heikki already showed how the WAL location information could be
exploited if compression is enabled.

I believe that's something we should care about and fix in one way or
another (my initial approach was using default roles, but using the ACL
system and starting out w/ no rights granted to that function also
works).

Thanks!

Stephen




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Joe Conway
On 01/18/2016 01:47 PM, Bruce Momjian wrote:
> On Sun, Jan 17, 2016 at 02:24:46PM -0800, Joe Conway wrote:
>> On 01/16/2016 06:02 AM, Michael Paquier wrote:
>>> On Wed, Dec 30, 2015 at 9:08 AM, Joe Conway  wrote:
 1) Change NextXID output format from "%u/%u" to "%u:%u"
(see recent hackers thread)
>>>
>>> ! printf(_("Latest checkpoint's NextXID:  %u/%u\n"),
>>>  ControlFile.checkPointCopy.nextXidEpoch,
>>>  ControlFile.checkPointCopy.nextXid);
>>>   printf(_("Latest checkpoint's NextOID:  %u\n"),
>>> --- 646,652 
>>>  ControlFile.checkPointCopy.ThisTimeLineID);
>>>   printf(_("Latest checkpoint's full_page_writes: %s\n"),
>>>  ControlFile.checkPointCopy.fullPageWrites ? _("on") : 
>>> _("off"));
>>> ! printf(_("Latest checkpoint's NextXID:  %u:%u\n"),
>>> This should be definitely a separate patch.
>>
>> Ok. Notwithstanding Simon's reply, there seems to be consensus that this
>> is the way to go. Will commit it this way unless some additional
>> objections surface in the next day or so.
> 
> FYI, this slash-colon change will break pg_upgrade unless it is patched.
> Do you want a patch from me?

Didn't realize that -- yes please.

Thanks,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Robert Haas
On Mon, Jan 18, 2016 at 4:43 AM, Andres Freund  wrote:
> On 2016-01-18 10:18:34 +0900, Michael Paquier wrote:
>> We are trying to hide away from non-superusers WAL-related information
>> in system views and system function, that's my point to do the same
>> here.
>
> We are? pg_current_xlog_insert_location(), pg_current_xlog_location(),
> pg_is_xlog_replay_paused(), pg_stat_bgwriter ... are all non-superuser?

Yeah.  There's certainly no need for the WAL positions reported by
pg_controldata to be any more restricted than other functions that
give away information about WAL position.  We had some discussion
about restricting WAL position information in general due to possible
information leakage, and if we do that, then perhaps this should be
similarly restricted.  Presumably vulnerabilities here would be harder
to exploit because the values change much less frequently, so if you
are trying to learn something the rate at which you can glean
information will be much lower.  But maybe we should put the same
restrictions on all of it.

>> For the data of pg_control, it seems to me that this can give
>> away to any authorized users hints regarding the way Postgres is
>> built, perhaps letting people know for example which Linux
>> distribution is used and which flavor of Postgres is used (we already
>> give away some information with version() but that's different than
>> the libraries this is linking to), so an attacker may be able to take
>> advantage of that to do attacks on potentially outdated packages? And
>> I would think that many users are actually going to revoke the access
>> of those functions to public if we are going to make them
>> world-visible. It is easier as well to restrict things first, and then
>> relax if necessary, than the opposite as well.
>
> Meh, that seems pretty far into pseudo security arguments.

Yeah, I really don't see anything in the pg_controldata output that
looks sensitive.  The WAL locations are the closest of anything,
AFAICS.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: pglogical_output - a general purpose logical decoding output plugin

2016-01-18 Thread Tomasz Rybak
I just quickly went through patch v5.
It's a rather big patch, at (or beyond) the limit of my knowledge of PostgreSQL to perform 
a high-quality review. But during this week I'll try to send reviews of parts of 
the code, as going through it in one sitting seems impossible.

One proposed change - update copyright to 2016.

I'd also propose changing, in pglogical_output_control.in:
comment = 'general purpose logical decoding plugin'
to something like "general-purpose plugin decoding and generating a stream of 
logical changes"

We might also think about changing the name of the plugin to something resembling 
"logical_streaming_decoder" or even "logical_streamer"


Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2016-01-18 Thread Robert Haas
On Mon, Jan 18, 2016 at 4:35 PM, Pavel Stehule  wrote:
>> I know that Oracle uses syntax of this general type, but I've always
>> found it ugly.  It's also pretty non-extensible.  You could want
>> similar things for range types and any other container types we might
>> get in the future, but clearly adding new reserved words for each one
>> is no good.
>
It doesn't use reserved words.

OK - keywords, then.

>> One idea that occurs to me is: If you can DECLARE BAR FOO%TYPE, but
>> then you want to make BAR an array of that type rather than a scalar,
>> why not write that as DECLARE BAR FOO%TYPE[]?  That seems quite
>> natural to me.
>
> What do you propose as syntax for taking an element of an array?

No idea.

>> I think the part of this patch that makes %TYPE work for more kinds of
>> types is probably a good idea, although I haven't carefully studied
>> exactly what it does.
>
>
> I invite any ideas, but the currently used notation only goes in the direction
> type->array. Working with symbols looks more difficult than using words
> (in the design area).
>
> Also - the textual form is closer to our system of polymorphic types:
> anyelement, anyarray, ... We do not have anyelement[]

True, but this is hardly a straightforward extension of what we have
today either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Andres Freund
On January 18, 2016 10:42:42 PM GMT+01:00, Kevin Grittner  
wrote:
>On Mon, Jan 18, 2016 at 3:04 PM, Andres Freund 
>wrote:
>> On January 18, 2016 7:27:59 PM GMT+01:00, Robert Haas
> wrote:
>>> On Sun, Jan 17, 2016 at 6:38 AM, Andreas Seltenreich
> wrote:
>
 While discussing issues with its developers, it was pointed out to
>me
 that our spinlock inline assembly is less than optimal.  Attached
>is
 a patch that addresses this.
>
>>> I did a Google search and everybody seems to agree that the LOCK
>>> prefix is redundant.  I found a source agreeing with the idea that
>it
>>> doesn't clobber registers
>
>>> So I guess it would be safe to change this.  Scares me slightly,
>>> though.
>>
>> Clobbers IIRC are implicit on x86 anyway. Rather doubt that the
>> space for the prefix makes any sort of practical difference, but
>> there indeed seems no reason to have it.
>
>I took a look at this and agree that the shorter, simpler code
>proposed in this patch should make no *logical* difference, and
>looks like it *should* have a neutral or beneficial effect; but
>performance tuning in general, and spinlock performance in
>particular, is full of surprises.  We have seen customers suffer
>poor scaling on their brand new monster machines because of the
>interaction between NUMA scheduling and our spinlock
>implementation, and seen those problems go away with an upgrade
>from pre-3.8 to post-3.8 kernels.  I would be hesitant to accept
>this change without seeing a benchmark on a large NUMA machine with
>4 or more memory nodes, on Linux kernels both before and after 3.8,
>to make sure that the effects are at least neutral.

Unconvinced. You get just as much churn by changing code elsewhere, which often 
causes code movement and alignment changes.

--- 
Please excuse brevity and formatting - I am writing this on my mobile phone.




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-18 Thread Bruce Momjian
On Sun, Jan 17, 2016 at 02:24:46PM -0800, Joe Conway wrote:
> On 01/16/2016 06:02 AM, Michael Paquier wrote:
> > On Wed, Dec 30, 2015 at 9:08 AM, Joe Conway  wrote:
> >> 1) Change NextXID output format from "%u/%u" to "%u:%u"
> >>(see recent hackers thread)
> > 
> > ! printf(_("Latest checkpoint's NextXID:  %u/%u\n"),
> >  ControlFile.checkPointCopy.nextXidEpoch,
> >  ControlFile.checkPointCopy.nextXid);
> >   printf(_("Latest checkpoint's NextOID:  %u\n"),
> > --- 646,652 
> >  ControlFile.checkPointCopy.ThisTimeLineID);
> >   printf(_("Latest checkpoint's full_page_writes: %s\n"),
> >  ControlFile.checkPointCopy.fullPageWrites ? _("on") : 
> > _("off"));
> > ! printf(_("Latest checkpoint's NextXID:  %u:%u\n"),
> > This should be definitely a separate patch.
> 
> Ok. Notwithstanding Simon's reply, there seems to be consensus that this
> is the way to go. Will commit it this way unless some additional
> objections surface in the next day or so.

FYI, this slash-colon change will break pg_upgrade unless it is patched.
Do you want a patch from me?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




Re: [HACKERS] pglogical_output - a general purpose logical decoding output plugin

2016-01-18 Thread Tomasz Rybak
W dniu 07.01.2016, czw o godzinie 15∶50 +0800, użytkownik Craig Ringer
napisał:
> On 7 January 2016 at 01:17, Peter Eisentraut  wrote:
> > On 12/22/15 4:55 AM, Craig Ringer wrote:
> > > I'm a touch frustrated by that, as a large part of the point of
> > > submitting the output plugin separately and in advance of the
> > downstream
> > > was to get attention for it separately, as its own entity. A lot
> > of
> > > effort has been put into making this usable for more than just a
> > data
> > > source for pglogical's replication tools.
> > 

Maybe the chosen name was not the best one - I assumed from the very 
beginning that it's a replication solution and not something separate.

> > I can't imagine that there is a lot of interest in a replication
> > tool
> > where you only get one side of it, no matter how well-designed or
> > general it is.
> Well, the other part was posted most of a week ago.
> 
> http://www.postgresql.org/message-id/5685bb86.5010...@2ndquadrant.com
> 
> ... but this isn't just about replication. At least, not just to
> another PostgreSQL instance. This plugin is designed to be general
> enough to use for replication to other DBMSes (via appropriate
> receivers), to replace trigger-based data collection in existing
> replication systems, for use in audit data collection, etc.
> 
> Want to get a stream of data out of PostgreSQL in a consistent,
> simple way, without having to add triggers or otherwise interfere
> with the origin database? That's the purpose of this plugin, and it
> doesn't care in the slightest what the receiver wants to do with that
> data. It's been designed to be usable separately from pglogical
> downstream and - before the Python tests were rejected in discussions
> on this list - was tested using a test suite completely separate to
> the pglogical downstream using psycopg2 to make sure no unintended
> interdependencies got introduced.
> 
> You can do way more than that with the output plugin but you have to
> write your own downstream/receiver for the desired purpose, since
> using a downstream based on bgworkers and SPI won't make any sense
> outside PostgreSQL.
> 

Put those 3 paragraphs into README.md - and this is not a joke.
This is a very good rationale for this plugin; for now the README
starts with a link to documentation describing logical decoding,
and the second paragraph talks about replication.
So when replication (and only replication) is in the README, it is
no wonder that people think only - or mostly - about replication.

Maybe we should think about changing the name to something like
logical_decoder or logical_streamer, to divorce this plugin
from pglogical? Currently even the name suggests tight coupling - and
in the opposite direction from the real dependency: pglogical depends
on this plugin, not the other way around.

> If you just want a canned product to use, see the pglogical post
> above for the downstream code.
> 
>  
> > Ultimately, what people will want to do with this is
> > replicate things, not muse about its design aspects. So if we're
> > going
> >  to ship a replication solution in PostgreSQL core, we should ship
> > all
> > the pieces that make the whole system work.
> I don't buy that argument. Doesn't that mean logical decoding
> shouldn't have been accepted? Or the initial patches for parallel
> query? Or any number of other things that're part of incremental
> development solutions?
> 
> (This also seems to contradict what you then argue below, that the
> proposed feature is too broad and does too much.)
> 
> I'd be happy to see both parts go in, but I'm frustrated that
> nobody's willing to see beyond "replicate from one Pg to another Pg"
> and see all the other things you can do. Want to replicate to Oracle
> / MS-SQL / etc? This will help a lot and solve a significant part of
> the problem for you. Want to stream data to append-only audit logs?
> Ditto. But nope, it's all about PostgreSQL to PostgreSQL.
> 
> Please try to look further into what client applications can do with
> this directly. I already know it meets the needs of the pglogical
> downstream. What I was hoping to achieve with posting the output
> plugin earlier was to get some thought going about what *else* it'd
> be good for.
> 
> Again: pglogical is posted now (it just took longer than expected to
> get ready) and I'll be happy to see both it and the output plugin
> included. I just urge people to look at the output plugin as more
> than a tightly coupled component of pglogical.
> 
> Maybe some quality name bikeshedding for the output plugin would help
> ;)
> 
> > Also, I think there are two kinds of general systems: common core,
> > and
> > all possible features.  A common core approach could probably be
> > made
> > acceptable with the argument that anyone will probably want to do
> > things
> > this way, so we might as well implement it once and give it to
> > people.
> That's what we're going for here. Extensible, something people can
> build on and use.
>  
> >  In a way, the logical decoding i

Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-18 Thread Tom Lane
Robert Haas  writes:
> This isn't the first complaint about this mechanism that we've gotten,
> and it won't be the last.  Way too many of our users are way more
> aware than they should be that the threshold here is five rather than
> any other number, which to me is a clear-cut sign that this needs to
> be improved.

No argument there.

> How to improve it is a harder question.

Exactly.  I'm very suspicious of any easy answers to this; they'll
most likely just shift the pain around.

regards, tom lane




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Kevin Grittner
On Mon, Jan 18, 2016 at 3:04 PM, Andres Freund  wrote:
> On January 18, 2016 7:27:59 PM GMT+01:00, Robert Haas  
> wrote:
>> On Sun, Jan 17, 2016 at 6:38 AM, Andreas Seltenreich  
>> wrote:

>>> While discussing issues with its developers, it was pointed out to me
>>> that our spinlock inline assembly is less than optimal.  Attached is
>>> a patch that addresses this.

>> I did a Google search and everybody seems to agree that the LOCK
>> prefix is redundant.  I found a source agreeing with the idea that it
>> doesn't clobber registers

>> So I guess it would be safe to change this.  Scares me slightly,
>> though.
>
> Clobbers IIRC are implicit on x86 anyway. Rather doubt that the
> space for the prefix makes any sort of practical difference, but
> there indeed seems no reason to have it.

I took a look at this and agree that the shorter, simpler code
proposed in this patch should make no *logical* difference, and
looks like it *should* have a neutral or beneficial effect; but
performance tuning in general, and spinlock performance in
particular, is full of surprises.  We have seen customers suffer
poor scaling on their brand new monster machines because of the
interaction between NUMA scheduling and our spinlock
implementation, and seen those problems go away with an upgrade
from pre-3.8 to post-3.8 kernels.  I would be hesitant to accept
this change without seeing a benchmark on a large NUMA machine with
4 or more memory nodes, on Linux kernels both before and after 3.8,
to make sure that the effects are at least neutral.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2016-01-18 Thread Pavel Stehule
2016-01-18 22:21 GMT+01:00 Robert Haas :

> On Mon, Jan 18, 2016 at 3:51 PM, Alvaro Herrera
>  wrote:
> > BTW are we all agreed that enabling
> >   foo%ARRAYTYPE
> > and
> >   foo%ELEMENTYPE
> > in plpgsql's DECLARE section is what we want for this?
>
> I know that Oracle uses syntax of this general type, but I've always
> found it ugly.  It's also pretty non-extensible.  You could want
> similar things for range types and any other container types we might
> get in the future, but clearly adding new reserved words for each one
> is no good.
>

It doesn't use reserved words.


>
> One idea that occurs to me is: If you can DECLARE BAR FOO%TYPE, but
> then you want to make BAR an array of that type rather than a scalar,
> why not write that as DECLARE BAR FOO%TYPE[]?  That seems quite
> natural to me.
>

What do you propose as syntax for taking an element of an array?


>
> I think the part of this patch that makes %TYPE work for more kinds of
> types is probably a good idea, although I haven't carefully studied
> exactly what it does.
>

I invite any ideas, but the currently used notation only goes in the direction
type->array. Working with symbols looks more difficult than using
words (in the design area).

Also - the textual form is closer to our system of polymorphic types:
anyelement, anyarray, ... We do not have anyelement[]

Regards

Pavel


>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-18 Thread Robert Haas
On Wed, Jan 13, 2016 at 10:47 AM, Tom Lane  wrote:
> Vladimir Sitnikov  writes:
>> Note: I state that mixing "kinds" of bind values is a bad application
>> design anyway. In other words, application developer should understand
>> if a query is DWH-like (requires replans) or OLTP-like (does not
>> require replans). Agreed?
>
> No, not agreed.  As was already pointed out upthread, such information
> is not available in many use-cases for the plancache.
>
> The real problem here IMO is inaccurate plan cost estimates, and that's
> not something that there is any easy fix for.

Not really.  Even if the cost estimates for all of the plans tried are
perfectly accurate, you'll have only seen 5 values when you decide to
switch to a generic plan.  If the 6th, 60th, 600th, or 6000th
execution uses a parameter where a custom plan would have been a big
win, you will blindly use the generic plan anyway and lose bigtime.
On the other hand, if the first five plans are all equivalent to each
other and to the generic plan, then you've spent the cost of uselessly
replanning six times instead of just caching the first plan and being
done with it.  I'm aware of an actual case where that extra
re-planning causes a serious performance problem, aggregated across
many queries and many backends.

This isn't the first complaint about this mechanism that we've gotten,
and it won't be the last.  Way too many of our users are way more
aware than they should be that the threshold here is five rather than
any other number, which to me is a clear-cut sign that this needs to
be improved.  How to improve it is a harder question.  We lack the
ability to do any kind of sensitivity analysis on a plan, so we can't
know whether there are other parameter values that would have resulted
in a different plan, nor can we test whether a particular set of
parameter values would have changed the outcome.
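The five-then-generic behavior being discussed can be sketched as a toy model (assumption: this is a simplified sketch of plancache.c's choose_custom_plan() heuristic; the real code also accounts for planning overhead and other factors):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the cached-plan decision: the first five executions always
 * use custom plans; after that, the generic plan is chosen whenever its
 * estimated cost is no worse than the average custom-plan cost seen so far. */
typedef struct
{
    int     num_custom_plans;
    double  total_custom_cost;
} PlanCacheState;

/* true = plan a custom plan for this execution, false = use generic plan */
static bool
choose_custom_plan(PlanCacheState *st, double generic_cost)
{
    if (st->num_custom_plans < 5)
        return true;            /* still gathering custom-plan costs */
    return generic_cost > st->total_custom_cost / st->num_custom_plans;
}

static void
record_custom_plan(PlanCacheState *st, double custom_cost)
{
    st->num_custom_plans++;
    st->total_custom_cost += custom_cost;
}
```

On the sixth execution, a generic plan whose estimated cost merely matches the custom average wins permanently, which is exactly the blind switch complained about above.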

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] system mingw not recognized

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 04:11 PM, Igal @ Lucee.org wrote:

I posted the error in the docs to pgsql-d...@postgresql.org

If it's possible to update it myself via git, or if it should be 
reported elsewhere -- please advise.



1. Please don't top-post on the PostgreSQL lists.


2. You've done all you need to do. We'll take care of it. Thanks for the 
report.


cheers

andrew





Re: [HACKERS] Removing service-related code in pg_ctl for Cygwin

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 03:46 PM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


On 01/18/2016 12:38 PM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


I think we can be a bit more adventurous and remove all the Cygwin-specific
code. See attached patch, which builds fine on buildfarm cockatiel.

Hopefully you also tested that it builds under MSVC :-)

Why would I? This isn't having the slightest effect on MSVC builds.

You never know ... you might have inadvertently broken some WIN32 block.



If you can point out a line where that might be true I'll test it ;-)

cheers

andrew





Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2016-01-18 Thread Robert Haas
On Mon, Jan 18, 2016 at 3:51 PM, Alvaro Herrera
 wrote:
> BTW are we all agreed that enabling
>   foo%ARRAYTYPE
> and
>   foo%ELEMENTYPE
> in plpgsql's DECLARE section is what we want for this?

I know that Oracle uses syntax of this general type, but I've always
found it ugly.  It's also pretty non-extensible.  You could want
similar things for range types and any other container types we might
get in the future, but clearly adding new reserved words for each one
is no good.

One idea that occurs to me is: If you can DECLARE BAR FOO%TYPE, but
then you want to make BAR an array of that type rather than a scalar,
why not write that as DECLARE BAR FOO%TYPE[]?  That seems quite
natural to me.

I think the part of this patch that makes %TYPE work for more kinds of
types is probably a good idea, although I haven't carefully studied
exactly what it does.
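For concreteness, a minimal plpgsql sketch of the syntax under discussion (assumption: the users table and column are invented for illustration, and the %TYPE[] spelling is only the proposal above, not implemented syntax):

```sql
-- Assumes: CREATE TABLE users (name text);
CREATE FUNCTION copy_name(u users) RETURNS void AS $$
DECLARE
    one_name   users.name%TYPE;       -- works today: scalar of the column's type
    -- all_names users.name%TYPE[];   -- proposed: an array of that type
BEGIN
    one_name := u.name;
END;
$$ LANGUAGE plpgsql;
```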

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Tsvector editing functions

2016-01-18 Thread Alvaro Herrera
So, Tomas, Teodor, did you like this new version of the patch?

Stas Kelvich wrote:

> > 7) Some of the functions use indexterm that does not match the function
> >   name. I see two such cases - to_tsvector and setweight. Is there a
> >   reason for that?
> 
> Because the SGML compiler wants a unique indexterm. Both functions that you
> mentioned use argument overloading and have a non-unique name.

This sounds wrong.  I think what you should really do is use

  <indexterm>
    <primary>foo</primary>
    <secondary>bar</secondary>
  </indexterm>

to distinguish the two entries.

It's a bit funny that you reintroduce the "unrecognized weight: %d"
(instead of %c) in tsvector_setweight_by_filter.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] system mingw not recognized

2016-01-18 Thread Igal @ Lucee.org

I posted the error in the docs to pgsql-d...@postgresql.org

If it's possible to update it myself via git, or if it should be 
reported elsewhere -- please advise.



On 1/18/2016 12:59 PM, Igal @ Lucee.org wrote:

It looks like the docs are indeed wrong.

According to http://sourceforge.net/p/mingw-w64/wiki2/TypeTriplets/ it 
should be x86_64-w64-mingw32


So in summary, the docs at
http://www.postgresql.org/docs/current/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW
should be updated from:

--host=x86_64-w64-mingw

to:

--host=x86_64-w64-mingw32

Not sure where to report that?


On 1/18/2016 12:52 PM, Igal @ Lucee.org wrote:

Per the docs at
http://www.postgresql.org/docs/current/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW

"To build 64 bit binaries using MinGW ... and run configure with the 
--host=x86_64-w64-mingw option"


But when I try to run: $ ~/sources/postgresql-9.5.0/configure 
--host=x86_64-w64-mingw


I get an error system mingw not recognized:

checking build system type... x86_64-pc-mingw64
checking host system type... Invalid configuration 
`x86_64-w64-mingw': system `mingw' not recognized
configure: error: /bin/sh 
/home/Admin/sources/postgresql-9.5.0/config/config.sub 
x86_64-w64-mingw failed


The sources were downloaded from 
http://www.postgresql.org/ftp/source/v9.5.0/


Are the docs up to date?

Thanks

--

Igal Sapir
Lucee Core Developer
Lucee.org 







Re: [HACKERS] Cannot find a working 64-bit integer type

2016-01-18 Thread Robert Haas
On Mon, Jan 18, 2016 at 2:42 PM, Igal @ Lucee.org  wrote:
> It looks like Tom is correct.
>
> I added the directory tree to an exclude list of Microsoft Security
> Essentials and
> ran `configure` without any flags and it completed successfully this time.

Cool.

Man, Windows anti-virus software is a real nuisance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Truncating/vacuuming relations on full tablespaces

2016-01-18 Thread Robert Haas
On Mon, Jan 18, 2016 at 2:26 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Fri, Jan 15, 2016 at 2:16 PM, Tom Lane  wrote:
>>> Presumably the hope would be that VACUUM would truncate off some of the
>>> heap, else there's not much good that's going to happen.  That leaves
>>> me wondering exactly what the invariant is for the maps, and if it's
>>> okay to not touch them during a heap truncation.
>
>> No, you're missing the point, or at least I think you are.  Suppose
>> somebody creates a big table and then deletes all the tuples in the
>> second half, but VACUUM never runs.  When at last VACUUM does run on
>> that table, it will try to build the VM and FSM forks as it scans the
>> table, and will only truncate AFTER that has been done.  If building
>> the VM and FSM forks fails because there is no freespace, we will
>> never reach the part of the operation that could create some.
>
> No, I follow that perfectly.  I think you missed *my* point, which is:
> suppose that we do have a full-length VM and/or FSM fork for a relation,
> and VACUUM decides to truncate the relation.  Is it okay to not truncate
> the VM/FSM?  If it isn't, we're going to have to have very tricky
> semantics for any "don't touch the map forks" option, because it will
> have to suppress only some of VACUUM's map updates.

Oh, I see.  I think it's probably not a good idea to skip truncating
those maps, but perhaps the option could be defined as making no new
entries, rather than ignoring them altogether, so that they never
grow.  It seems that both of those are defined in such a way that if
page Y follows page X in the heap, the entries for page Y in the
relation fork will follow the one for page X.  So truncating them
should be OK; it's just growing them that we need to avoid.

> An alternative approach that might avoid such worries is to have a mode
> wherein VACUUM always truncates the map forks to nothing, rather than
> attempting to update them.

That could work, too, but might be stronger medicine than needed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-18 Thread Andres Freund
On January 18, 2016 7:27:59 PM GMT+01:00, Robert Haas  
wrote:
>On Sun, Jan 17, 2016 at 6:38 AM, Andreas Seltenreich
> wrote:
>> I'm currently experimenting with just-in-time compilation using
>libfirm.
>> While discussing issues with its developers, it was pointed out to me
>> that our spinlock inline assembly is less than optimal.  Attached is
>a
>> patch that addresses this.
>>
>> ,
>> | Remove the LOCK prefix from the XCHG instruction.  Locking is
>implicit
>> | with XCHG and the prefix wastes a byte.  Also remove the "cc"
>register
>> | from the clobber list as the XCHG instruction does not modify any
>flags.
>> |
>> | Reported by Christoph Mallon.
>> `
>
>I did a Google search and everybody seems to agree that the LOCK
>prefix is redundant.  I found a source agreeing with the idea that it
>doesn't clobber registers, too:
>
>http://www.oopweb.com/Assembly/Documents/ArtOfAssembly/Volume/Chapter_6/CH06-1.html#HEADING1-85
>
>So I guess it would be safe to change this.  Scares me slightly,
>though.

Clobbers IIRC are implicit on x86 anyway. Rather doubt that the space for the 
prefix makes any sort of practical difference, but there indeed seems no 
reason to have it.
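For reference, the patched test-and-set looks roughly like this (assumption: a simplified sketch in the shape of PostgreSQL's tas() from s_lock.h, valid only for x86/x86-64 with GCC-style inline assembly):

```c
#include <assert.h>

typedef unsigned char slock_t;

/* XCHG with a memory operand asserts the bus lock implicitly, so the LOCK
 * prefix is redundant and omitting it saves a byte.  XCHG also does not
 * modify EFLAGS, so "cc" is not needed in the clobber list. */
static int
tas(volatile slock_t *lock)
{
    slock_t     _res = 1;

    __asm__ __volatile__(
        "xchgb  %0,%1   \n"     /* no "lock" prefix: implicit for XCHG */
        : "+q" (_res), "+m" (*lock)
        : /* no inputs */
        : "memory");            /* compiler barrier; no "cc" clobber */
    return (int) _res;          /* 0 = lock acquired, 1 = already held */
}
```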

Andres

--- 
Please excuse brevity and formatting - I am writing this on my mobile phone.




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-18 Thread Bruce Momjian
On Mon, Jan 18, 2016 at 02:14:11PM -0500, Tom Lane wrote:
> Bruce Momjian  writes:
> > I never understood why we don't just keep the selectivity estimates of
> > previous plans and just reuse the plan if the selectivity estimates are
> > similar.  Isn't parameter selectivity the only thing that distinguishes
> > one plan with parameters from another?
> 
> > Checking selectivity estimates must be cheaper than replanning.  This
> > could be done at the second use of the prepared plan, and maybe for all
> > plan reuses, rather than waiting for five and then perhaps getting this
> > bad behavior.
> 
> You're imagining that a selectivity recheck could be separated out from
> the rest of the planner.  That's nowhere near feasible, IMO.  Even if it

I think you would have to do the checks before entering the planner and
save them off for use in the planner.

> were, what would we do with it?  There's no reliable way to determine
> whether X% change in one or another selectivity number would change the
> selected plan, other than by redoing practically all of the planning work.

Well, if it is +/-1%, I think we can assume we can reuse the plan just
fine from the second prepared call until we see a major selectivity
change.  While we have never exposed the count of prepared queries
before we choose a generic plan, I can see us exposing this percentage.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




Re: [HACKERS] system mingw not recognized

2016-01-18 Thread Igal @ Lucee.org

It looks like the docs are indeed wrong.

According to http://sourceforge.net/p/mingw-w64/wiki2/TypeTriplets/ it 
should be x86_64-w64-mingw32


So in summary, the docs at
http://www.postgresql.org/docs/current/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW
should be updated from:

--host=x86_64-w64-mingw

to:

--host=x86_64-w64-mingw32

Not sure where to report that?
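The failure mode is that config.sub parses the host triplet and rejects an OS field it does not know: `mingw` is not a recognized system name, while `mingw32` is. A toy Python sketch of that kind of check (the name set here is hypothetical, not config.sub's real table):

```python
# Toy model of a config.sub-style triplet check: the OS field of
# cpu-vendor-os must be a recognized system name such as "mingw32".
KNOWN_OS = {"mingw32", "mingw64", "cygwin", "msys", "linux-gnu"}

def check_triplet(triplet):
    cpu, vendor, os_name = triplet.split("-", 2)
    if os_name not in KNOWN_OS:
        raise ValueError("system `%s' not recognized" % os_name)
    return cpu, vendor, os_name

check_triplet("x86_64-w64-mingw32")  # accepted
# check_triplet("x86_64-w64-mingw") raises: system `mingw' not recognized
```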


On 1/18/2016 12:52 PM, Igal @ Lucee.org wrote:

Per the docs at
http://www.postgresql.org/docs/current/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW

"To build 64 bit binaries using MinGW ... and run configure with the 
--host=x86_64-w64-mingw option"


But when I try to run: $ ~/sources/postgresql-9.5.0/configure 
--host=x86_64-w64-mingw


I get an error system mingw not recognized:

checking build system type... x86_64-pc-mingw64
checking host system type... Invalid configuration `x86_64-w64-mingw': 
*system `mingw' not recognized*
configure: error: /bin/sh 
/home/Admin/sources/postgresql-9.5.0/config/config.sub 
x86_64-w64-mingw failed


The sources were downloaded from 
http://www.postgresql.org/ftp/source/v9.5.0/


Are the docs up to date?

Thanks

--

Igal Sapir
Lucee Core Developer
Lucee.org 





[HACKERS] source files without copyright notices

2016-01-18 Thread Joe Conway
I never noticed before, but today I came across a header file without
any copyright notice at all. Turns out there are quite a few:

  grep -riL Copyright src/* --include=*.c --include=*.h

Shouldn't at least some of these get a copyright?

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





[HACKERS] system mingw not recognized

2016-01-18 Thread Igal @ Lucee.org

Per the docs at
http://www.postgresql.org/docs/current/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW

"To build 64 bit binaries using MinGW ... and run configure with the 
--host=x86_64-w64-mingw option"


But when I try to run: $ ~/sources/postgresql-9.5.0/configure 
--host=x86_64-w64-mingw


I get an error system mingw not recognized:

checking build system type... x86_64-pc-mingw64
checking host system type... Invalid configuration `x86_64-w64-mingw': 
*system `mingw' not recognized*
configure: error: /bin/sh 
/home/Admin/sources/postgresql-9.5.0/config/config.sub x86_64-w64-mingw 
failed


The sources were downloaded from 
http://www.postgresql.org/ftp/source/v9.5.0/


Are the docs up to date?

Thanks

--

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2016-01-18 Thread Alvaro Herrera
FWIW the reason I read through this patch is that I wondered if there
was anything in common with this other patch
https://commitfest.postgresql.org/8/459/ -- and the answer seems to be
"no".  What that patch does is add a new construct
  TYPE(1+1) 
which in this case returns "int4"; I guess if we wanted to augment that
functionality to cover Pavel's use case we would additionally need
  ELEMENTTYPE(somearray)
and
  ARRAYTYPE(some-non-array)
in the core grammar ... sounds like a hard sell.

BTW are we all agreed that enabling
  foo%ARRAYTYPE
and
  foo%ELEMENTTYPE
in plpgsql's DECLARE section is what we want for this?
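For illustration, the intended semantics of the two derivations might be sketched like this (a toy Python model with a made-up type table; the real patch resolves these through the catalogs, and the function/table names here are hypothetical):

```python
# Toy model of deriving foo%ARRAYTYPE / foo%ELEMENTTYPE from a
# variable's declared type. The name mapping is hypothetical.
ARRAY_OF = {"int4": "int4[]", "text": "text[]"}
ELEMENT_OF = {v: k for k, v in ARRAY_OF.items()}

def derive_type(base, mode):
    if mode == "arraytype":
        # base type -> its array type
        if base not in ARRAY_OF:
            raise LookupError("could not find array type for data type " + base)
        return ARRAY_OF[base]
    if mode == "elementtype":
        # array type -> its element type
        if base not in ELEMENT_OF:
            raise LookupError("could not find element type for data type " + base)
        return ELEMENT_OF[base]
    return base  # plain %TYPE: use the variable's own type
```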

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Interesting read on SCM upending software and hardware architecture

2016-01-18 Thread Peter Geoghegan
On Mon, Jan 18, 2016 at 12:31 PM, Robert Haas  wrote:
> People keep predicting the death of spinning media, but I think
> it's not happening anywhere near as fast as people think.
> Yes, I'm writing this on a laptop with an SSD, and my personal laptop
> also has an SSD, but their immediate predecessors did not, and these
> are fairly expensive laptops.  And most customers I talk to are still
> using spinning disks.  Meanwhile, main memory is getting so large that
> even pretty significant databases can be entirely RAM-cached.  So I
> tend to think that this is a lot less exciting than people who are not
> me seem to think.

I tend to agree that the case for SSDs as a revolutionary technology
has been significantly overstated. This recent article makes some
interesting points:

http://www.zdnet.com/article/what-we-learned-about-ssds-in-2015/

I think it's much more true that main memory scaling (in particular,
main memory capacity) has had a huge impact, but that trend appears to
now be stalling.

-- 
Peter Geoghegan




Re: [HACKERS] Removing service-related code in pg_ctl for Cygwin

2016-01-18 Thread Alvaro Herrera
Andrew Dunstan wrote:
> 
> 
> On 01/18/2016 12:38 PM, Alvaro Herrera wrote:
> >Andrew Dunstan wrote:
> >
> >>I think we can be a bit more adventurous and remove all the Cygwin-specific
> >>code. See attached patch, which builds fine on buildfarm cockatiel.
> >Hopefully you also tested that it builds under MSVC :-)
> 
> Why would I? This isn't having the slightest effect on MSVC builds.

You never know ... you might have inadvertently broken some WIN32 block.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2016-01-18 Thread Alvaro Herrera
> diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c
> new file mode 100644
> index 1ae4bb7..c819517
> *** a/src/pl/plpgsql/src/pl_comp.c
> --- b/src/pl/plpgsql/src/pl_comp.c
> *** plpgsql_parse_tripword(char *word1, char
> *** 1617,1622 
> --- 1617,1677 
>   return false;
>   }
>   
> + /*
> +  * Derive type from ny base type controlled by reftype_mode
> +  */
> + static PLpgSQL_type *
> + derive_type(PLpgSQL_type *base_type, int reftype_mode)
> + {
> + Oid typoid;

I think you should add a typedef to the REFTYPE enum, and have this
function take that type rather than int.

> + case PLPGSQL_REFTYPE_ARRAY:
> + {
> + /*
> +  * Question: can we allow anyelement (array or 
> nonarray) -> array direction.
> +  * if yes, then probably we have to modify 
> enforce_generic_type_consistency,
> +  * parse_coerce.c where still is check on scalar type 
> -> raise error
> +  * ERROR:  42704: could not find array type for data 
> type integer[]
> +  *
> + if (OidIsValid(get_element_type(base_type->typoid)))
> + return base_type;
> + */

I think it would be better to resolve this question outside a code
comment.

> + typoid = get_array_type(base_type->typoid);
> + if (!OidIsValid(typoid))
> + ereport(ERROR,
> + 
> (errcode(ERRCODE_DATATYPE_MISMATCH),
> +  errmsg("there are not array 
> type for type %s",
> + 
> format_type_be(base_type->typoid;

nodeFuncs.c uses this wording:
errmsg("could not find array type for data type %s",
which I think you should adopt.

> --- 1681,1687 
>* --
>*/
>   PLpgSQL_type *
> ! plpgsql_parse_wordtype(char *ident, int reftype_mode)
>   {
>   PLpgSQL_type *dtype;
>   PLpgSQL_nsitem *nse;

Use the typedef'ed enum, as above.

> --- 1699,1721 
>   switch (nse->itemtype)
>   {
>   case PLPGSQL_NSTYPE_VAR:
> ! {
> ! dtype = ((PLpgSQL_var *) 
> (plpgsql_Datums[nse->itemno]))->datatype;
> ! return derive_type(dtype, reftype_mode);
> ! }
>   
> ! case PLPGSQL_NSTYPE_ROW:
> ! {
> ! dtype = ((PLpgSQL_row *) 
> (plpgsql_Datums[nse->itemno]))->datatype;
> ! return derive_type(dtype, reftype_mode);
> ! }
>   
> + /*
> +  * XXX perhaps allow REC here? Probably it has not any 
> sense, because
> +  * in this moment, because PLpgSQL doesn't support rec 
> parameters, so
> +  * there should not be any rec polymorphic parameter, 
> and any work can
> +  * be done inside function.
> +  */

I think you should remove from the "?" onwards in that comment, i.e.
just keep what was already in the original comment (minus the ROW)

> --- 1757,1763 
>* --
>*/
>   PLpgSQL_type *
> ! plpgsql_parse_cwordtype(List *idents, int reftype_mode)
>   {
>   PLpgSQL_type *dtype = NULL;
>   PLpgSQL_nsitem *nse;

Typedef.

> --- 2720,2737 
>   tok = yylex();
>   if (tok_is_keyword(tok, &yylval,
>  K_TYPE, "type"))
> ! result = plpgsql_parse_wordtype(dtname, 
> PLPGSQL_REFTYPE_TYPE);
> ! else if (tok_is_keyword(tok, &yylval,
> ! 
> K_ELEMENTTYPE, "elementtype"))
> ! result = plpgsql_parse_wordtype(dtname, 
> PLPGSQL_REFTYPE_ELEMENT);
> ! else if (tok_is_keyword(tok, &yylval,
> ! 
> K_ARRAYTYPE, "arraytype"))
> ! result = plpgsql_parse_wordtype(dtname, 
> PLPGSQL_REFTYPE_ARRAY);
>   else if (tok_is_keyword(tok, &yylval,
>   
> K_ROWTYPE, "rowtype"))
>   result = plpgsql_parse_wordrowtype(dtname);
> ! if (result)
> ! return result;
>   }

This plpgsql parser stuff is pretty tiresome.  (Not this patch's fault
-- just saying.)

   
> *** extern bool plpgsql_parse_dblword(char *
> *** 961,968 
> PLwdatum *wdatum, PLcword *cword);
>   extern bool plp

Re: [HACKERS] Interesting read on SCM upending software and hardware architecture

2016-01-18 Thread Robert Haas
On Mon, Jan 18, 2016 at 1:44 AM, David Fetter  wrote:
> On Sun, Jan 17, 2016 at 11:13:33PM -0500, Bruce Momjian wrote:
>> On Thu, Jan  7, 2016 at 02:30:06PM -0600, Jim Nasby wrote:
>> > https://queue.acm.org/detail.cfm?id=2874238 discusses how modern
>> > Storage Class Memory (SCM), such as PCIe SSD and NVDIMMs are
>> > completely upending every assumption made about storage. To put
>> > this in perspective, you can now see storage latency that is
>> > practically on-par with things like lock acquisition[1].
>>
>> How is this different from Fusion I/O devices, which have been
>> around for years?
>
> Price.
>
> As these things come down in price, it'll start being more and more
> reasonable to treat rotating media as exotic.

People keep predicting the death of spinning media, but I think
it's not happening anywhere near as fast as people think.
Yes, I'm writing this on a laptop with an SSD, and my personal laptop
also has an SSD, but their immediate predecessors did not, and these
are fairly expensive laptops.  And most customers I talk to are still
using spinning disks.  Meanwhile, main memory is getting so large that
even pretty significant databases can be entirely RAM-cached.  So I
tend to think that this is a lot less exciting than people who are not
me seem to think.

Now that having been said, I will not complain if vast quantities of
low-latency, high-bandwidth, non-volatile storage become available at
bargain prices.  And it very well may - eventually.  But I'm not quite
ready to break out the ticker tape just yet.  I think it will be a
while.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Removing service-related code in pg_ctl for Cygwin

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 12:38 PM, Alvaro Herrera wrote:
> Andrew Dunstan wrote:
>> I think we can be a bit more adventurous and remove all the Cygwin-specific
>> code. See attached patch, which builds fine on buildfarm cockatiel.
>
> Hopefully you also tested that it builds under MSVC :-)

Why would I? This isn't having the slightest effect on MSVC builds.

cheers

andrew





Re: [HACKERS] Refactoring speculative insertion with unique indexes a little

2016-01-18 Thread Peter Geoghegan
On Mon, Jan 18, 2016 at 11:56 AM, Alvaro Herrera
 wrote:
>> That's because I believe this is quite broken, as already pointed out.
>
> I think I like your approach better.

That makes things far simpler, then.

>> Your premise here is that what Heikki said in passing months ago is
>> incontrovertibly the right approach. That's ridiculous. I think Heikki
>> and I could work this out quite quickly, if he engaged, but for
>> whatever reason he appears unable to. I doubt that Heikki thinks that
>> about what he said, so why do you?
>
> I don't -- I just think you could have sent a patch that addressed all
> the other points, leave this one as initially submitted, and note that
> the new submission left it unaddressed because you disagreed.

I'll try to do that soon. I've got a very busy schedule over the next
couple of weeks, though.

-- 
Peter Geoghegan




Re: [HACKERS] Refactoring speculative insertion with unique indexes a little

2016-01-18 Thread Alvaro Herrera
Peter Geoghegan wrote:
> On Mon, Jan 18, 2016 at 8:36 AM, Alvaro Herrera
>  wrote:
> > If you refuse to post an updated version of the patch until Heikki
> > weighs in some more, and given that Heikki has (for the purposes of this
> > patch) completely vanished, I think we should mark this rejected.
> 
> I don't refuse. I just don't want to waste anyone's time. I will
> follow all of Heikki's feedback immediately, except this:
> 
> "I think it'd be better to define it as "like CHECK_UNIQUE_YES, but
> return FALSE instead of throwing an error on conflict". The difference
> is that the aminsert would not be allowed to return FALSE when there
> is no conflict".
> 
> That's because I believe this is quite broken, as already pointed out.

I think I like your approach better.

> > If somebody else is open to reviewing the patch, I think that'd be
> > another way to move forward, but presumably they would start from a
> > version with the discussed changes already fixed.  Otherwise it's a
> > waste of time.
> 
> Your premise here is that what Heikki said in passing months ago is
> incontrovertibly the right approach. That's ridiculous. I think Heikki
> and I could work this out quite quickly, if he engaged, but for
> whatever reason he appears unable to. I doubt that Heikki thinks that
> about what he said, so why do you?

I don't -- I just think you could have sent a patch that addressed all
the other points, leave this one as initially submitted, and note that
the new submission left it unaddressed because you disagreed.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Refactoring speculative insertion with unique indexes a little

2016-01-18 Thread Peter Geoghegan
On Mon, Jan 18, 2016 at 8:36 AM, Alvaro Herrera
 wrote:
> If you refuse to post an updated version of the patch until Heikki
> weighs in some more, and given that Heikki has (for the purposes of this
> patch) completely vanished, I think we should mark this rejected.

I don't refuse. I just don't want to waste anyone's time. I will
follow all of Heikki's feedback immediately, except this:

"I think it'd be better to define it as "like CHECK_UNIQUE_YES, but
return FALSE instead of throwing an error on conflict". The difference
is that the aminsert would not be allowed to return FALSE when there
is no conflict".

That's because I believe this is quite broken, as already pointed out.

> If somebody else is open to reviewing the patch, I think that'd be
> another way to move forward, but presumably they would start from a
> version with the discussed changes already fixed.  Otherwise it's a
> waste of time.

Your premise here is that what Heikki said in passing months ago is
incontrovertibly the right approach. That's ridiculous. I think Heikki
and I could work this out quite quickly, if he engaged, but for
whatever reason he appears unable to. I doubt that Heikki thinks that
about what he said, so why do you?

The point about CHECK_UNIQUE_YES I highlighted above felt like a
temporary misunderstanding to me, and not even what you might call a
real disagreement. It wasn't as if Heikki was insistent at the time. I
pointed out that what he said was broken according to an established
definition of broken (it would result in unprincipled deadlocks). He
didn't respond to that point. I think he didn't get back quickly in
part because I gave him something to think about.

If any other committer wants to engage with me on this, then I will of
course work with them. But that will not be predicated on my first
revising the patch in a way that this other committer does not
understand. That would be profoundly unfair.

-- 
Peter Geoghegan




Re: [HACKERS] Cannot find a working 64-bit integer type

2016-01-18 Thread Igal @ Lucee.org

It looks like Tom is correct.

I added the directory tree to an exclude list of Microsoft Security
Essentials and ran `configure` without any flags, and it completed
successfully this time.

Thank you both for your time and expertise,


Igal

On 1/18/2016 11:23 AM, Igal @ Lucee.org wrote:
> On 1/18/2016 11:09 AM, Tom Lane wrote:
>> Robert Haas  writes:
>>> The relevant portion of config.log seems to be this:
>> I do not think configure pays attention to mere warnings for this
>> type of test.  The real problem here seems to be the "permission
>> denied" errors, which to me reek of broken Windows antivirus
>> software.  (As far as I'm aware, the word "broken" is redundant in
>> that phrase.)
> Thank you both for looking into this.  The only A/V-type software that
> I have running is the "Microsoft Security Essentials".
>
>>> I'm a little confused as to why -Wno-cpp fixes any of that, though.
>> Most likely, it's pure chance that a retry worked.  Or if it's
>> repeatable, maybe no-cpp changes the compiler's file access patterns
>> enough that there's no longer a false trip of the AV check.
>>
>> Short answer is that I wonder how much of the OP's multiple problems
>> are being caused by AV bugs.
> I did not make any changes other than adding the compiler flags
> between those two runs (nor afterwards).
>
> The reason that I decided to try to add the -Wno-error flag was that I
> searched the net for the error message and found this thread from
> 4 years ago:
> http://postgresql.nabble.com/Setting-Werror-in-CFLAGS-td5118384.html
> -- which showed a similar error message and some experimentation with
> the compiler flags.
>
> I will try to run both forms again and report whether it is repeatable.
>
> Thanks again,
>
> Igal







Re: [HACKERS] Truncating/vacuuming relations on full tablespaces

2016-01-18 Thread Tom Lane
Robert Haas  writes:
> On Fri, Jan 15, 2016 at 2:16 PM, Tom Lane  wrote:
>> Presumably the hope would be that VACUUM would truncate off some of the
>> heap, else there's not much good that's going to happen.  That leaves
>> me wondering exactly what the invariant is for the maps, and if it's
>> okay to not touch them during a heap truncation.

> No, you're missing the point, or at least I think you are.  Suppose
> somebody creates a big table and then deletes all the tuples in the
> second half, but VACUUM never runs.  When at last VACUUM does run on
> that table, it will try to build the VM and FSM forks as it scans the
> table, and will only truncate AFTER that has been done.  If building
> the VM and FSM forks fails because there is no freespace, we will
> never reach the part of the operation that could create some.

No, I follow that perfectly.  I think you missed *my* point, which is:
suppose that we do have a full-length VM and/or FSM fork for a relation,
and VACUUM decides to truncate the relation.  Is it okay to not truncate
the VM/FSM?  If it isn't, we're going to have to have very tricky
semantics for any "don't touch the map forks" option, because it will
have to suppress only some of VACUUM's map updates.

If the map invariants are such that leaving garbage in them is
unconditionally safe, then this isn't a problem; but I'm unsure of that.

> The key point is that both the VM and the FSM are optional.

No, the key point is whether it's okay if they *are* there and contain
lies, or self-inconsistent data.

An alternative approach that might avoid such worries is to have a mode
wherein VACUUM always truncates the map forks to nothing, rather than
attempting to update them.

regards, tom lane




Re: [HACKERS] Cannot find a working 64-bit integer type

2016-01-18 Thread Igal @ Lucee.org

On 1/18/2016 11:09 AM, Tom Lane wrote:
> Robert Haas  writes:
>> The relevant portion of config.log seems to be this:
> I do not think configure pays attention to mere warnings for this type
> of test.  The real problem here seems to be the "permission denied"
> errors, which to me reek of broken Windows antivirus software.  (As far
> as I'm aware, the word "broken" is redundant in that phrase.)

Thank you both for looking into this.  The only A/V-type software that I
have running is the "Microsoft Security Essentials".

>> I'm a little confused as to why -Wno-cpp fixes any of that, though.
> Most likely, it's pure chance that a retry worked.  Or if it's repeatable,
> maybe no-cpp changes the compiler's file access patterns enough that
> there's no longer a false trip of the AV check.
>
> Short answer is that I wonder how much of the OP's multiple problems
> are being caused by AV bugs.

I did not make any changes other than adding the compiler flags between
those two runs (nor afterwards).

The reason that I decided to try to add the -Wno-error flag was that I
searched the net for the error message and found this thread from 4 years
ago: http://postgresql.nabble.com/Setting-Werror-in-CFLAGS-td5118384.html
-- which showed a similar error message and some experimentation with the
compiler flags.

I will try to run both forms again and report whether it is repeatable.

Thanks again,

Igal





Re: [HACKERS] jsonb - jsonb operators

2016-01-18 Thread Glyn Astill


On Mon, 18/1/16, Tom Lane  wrote:

 Subject: Re: [HACKERS] jsonb - jsonb operators
 To: "Dmitry Dolgov" <9erthali...@gmail.com>
 Cc: "Glyn Astill" , "Merlin Moncure" 
, "pgsql-hackers@postgresql.org" 

 Date: Monday, 18 January, 2016, 16:50
 
 Dmitry Dolgov <9erthali...@gmail.com> writes:
 >> if there's any future intention to add a delete operator that removes
 >> element/pair matches?

 > I think the operator (jsonb - jsonb) is logical because we have a shallow
 > concatenation function (something like a "union" operation), but we have
 > nothing like "set difference" and "intersection" functions.  Actually, I
 > thought to implement these functions (at least for jsonbx).  But of course
 > this function should be quite simple and consider only full key/value
 > matching as a target.

 I am wary of this proposal because it seems to be taking little
 account of the fact that there *already is* a jsonb minus operator,
 two of them in fact.  For example

 regression=# select '["a","b","c"]'::jsonb - 'b';
   ?column?
 ------------
  ["a", "c"]
 (1 row)

 regression=# select '{"a":1, "b":2}'::jsonb - 'b';
  ?column?
 ----------
  {"a": 1}
 (1 row)

 The proposed full-match semantics don't seem to me to be consistent with
 the way that the existing operator works.

 Another rather nasty problem is that the latter case works at all,
 ie the parser will decide the unknown literal is "text" so that it can
 apply "jsonb - text", there being no other plausible choice.  If there
 were a "jsonb - jsonb" operator, the parser would prefer that one, due
 to its heuristic about assuming that an unknown literal is of the same
 type as the other operator input.  So adding such an operator will almost
 certainly break queries that work in 9.5.  Maybe it's worth adding one
 anyway, but I don't think the case for its usefulness has been proven
 to the point where we should create compatibility issues to get it.

             regards, tom lane
 

In that case perhaps there is no need for an operator, but a function would
be useful. Perhaps specifying the depth to delete on, like Dmitry's key
versions do?

I mocked up the top-level version last year; like you say it's trivial, but
I find it useful.  It's here: https://github.com/glynastill/jsonb_delete




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-18 Thread Tom Lane
Bruce Momjian  writes:
> I never understood why we don't just keep the selectivity estimates of
> previous plans and just reuse the plan if the selectivity estimates are
> similar.  Isn't parameter selectivity the only thing that distinguishes
> on plan with parameter from another?

> Checking selectivity estimates must be cheaper than replanning.  This
> could be done at the second use of the prepared plan, and maybe for all
> plan reuses, rather than waiting for five and then perhaps getting this
> bad behavior.

You're imagining that a selectivity recheck could be separated out from
the rest of the planner.  That's nowhere near feasible, IMO.  Even if it
were, what would we do with it?  There's no reliable way to determine
whether X% change in one or another selectivity number would change the
selected plan, other than by redoing practically all of the planning work.

regards, tom lane




Re: [HACKERS] Cannot find a working 64-bit integer type

2016-01-18 Thread Tom Lane
Robert Haas  writes:
> The relevant portion of config.log seems to be this:

> configure:13285: gcc -o conftest.exe -Wall -Wmissing-prototypes
> -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
> -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
> -fwrapv -fexcess-precision=standard -O2
> -I/home/Admin/sources/postgresql-9.5.0/src/include/port/win32
> -DEXEC_BACKEND -Wl,--allow-multiple-definition
> -Wl,--disable-auto-import conftest.c -lz -lws2_32 -lm >&5
> conftest.c:106:5: warning: no previous prototype for 'does_int64_work'
> [-Wmissing-prototypes] int does_int64_work()
> conftest.c:120:1: warning: return type defaults to 'int' [-Wimplicit-int]
> conftest.c: In function 'main':
> conftest.c:121:3: warning: implicit declaration of function 'exit'
> [-Wimplicit-function-declaration]
> conftest.c:121:3: warning: incompatible implicit declaration of
> built-in function 'exit'
> conftest.c:121:3: note: include '<stdlib.h>' or provide a declaration of
> 'exit'
> C:/Apps/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe:
> reopening conftest.exe: Permission denied
> C:/Apps/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe:
> final link failed: Permission denied collect2.exe: error: ld returned
> 1 exit status
> configure:13285: $? = 1 configure: program exited with status 1

I do not think configure pays attention to mere warnings for this type of
test.  The real problem here seems to be the "permission denied" errors,
which to me reek of broken Windows antivirus software.  (As far as I'm
aware, the word "broken" is redundant in that phrase.)

> I'm a little confused as to why -Wno-cpp fixes any of that, though.

Most likely, it's pure chance that a retry worked.  Or if it's repeatable,
maybe no-cpp changes the compiler's file access patterns enough that
there's no longer a false trip of the AV check.

Short answer is that I wonder how much of the OP's multiple problems
are being caused by AV bugs.

regards, tom lane




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-18 Thread Bruce Momjian
On Wed, Jan 13, 2016 at 10:47:18AM -0500, Tom Lane wrote:
> Vladimir Sitnikov  writes:
> > Note: I state that mixing "kinds" of bind values is a bad application
> > design anyway. In other words, application developer should understand
> > if a query is DWH-like (requires replans) or OLTP-like (does not
> > require replans). Agreed?
> 
> No, not agreed.  As was already pointed out upthread, such information
> is not available in many use-cases for the plancache.
> 
> The real problem here IMO is inaccurate plan cost estimates, and that's
> not something that there is any easy fix for.
> 
> However ... one specific aspect of that is that to some extent, the cost
> estimate made for the generic plan is incommensurate with the estimates
> for the custom plans because the latter are made with more information.
> I don't remember the details of your specific case anymore, but we've
> seen cases where the generic plan is falsely estimated to be cheaper
> than custom plans because of this.

I never understood why we don't just keep the selectivity estimates of
previous plans and just reuse the plan if the selectivity estimates are
similar.  Isn't parameter selectivity the only thing that distinguishes
one plan with parameters from another?

Checking selectivity estimates must be cheaper than replanning.  This
could be done at the second use of the prepared plan, and maybe for all
plan reuses, rather than waiting for five and then perhaps getting this
bad behavior.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +



