Re: [HACKERS] checkpointer continuous flushing

2016-01-16 Thread Fabien COELHO


Hello Andres,


Hello Tomas.


Oops, sorry Andres, I mixed up the threads in my head, so it was not clear
who was asking the questions of whom.



I was/am using ext4, and it turns out that, when enabling flushing, the
results are hugely dependent on barriers=on/off, with the latter making
flushing rather advantageous. Additionally, data=ordered/writeback makes
a measurable difference too.


These are very interesting tests; I'm looking forward to having a look at the
results.


The fact that these options change performance is expected. Personally, the
test I submitted on the thread used ext4 with default mount options plus
"relatime".


I confirm that: nothing special but "relatime" on ext4 on my test host.

If I had a choice, I would tend to take the safest options, because the point 
of a database is to keep data safe. That's why I'm not found of the 
"synchronous_commit=off" chosen above.


"found" -> "fond". I confirm this opinion. If you have a BBU on your
disk/RAID system, playing with some of these options is probably safe,
though. Not the case with my basic hardware.


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-16 Thread Michael Paquier
On Sun, Dec 27, 2015 at 5:39 AM, Joe Conway  wrote:
> First installment -- pg_config function/view as a separate patch,
> rebased to current master.

Documentation would be good to have.

! # don't include subdirectory-path-dependent -I and -L switches
! STD_CPPFLAGS := $(filter-out -I$(top_srcdir)/src/include
-I$(top_builddir)/src/include,$(CPPFLAGS))
! STD_LDFLAGS := $(filter-out -L$(top_builddir)/src/port,$(LDFLAGS))
! override CPPFLAGS += -DVAL_CONFIGURE="\"$(configure_args)\""
! override CPPFLAGS += -DVAL_CC="\"$(CC)\""
! override CPPFLAGS += -DVAL_CPPFLAGS="\"$(STD_CPPFLAGS)\""
! override CPPFLAGS += -DVAL_CFLAGS="\"$(CFLAGS)\""
! override CPPFLAGS += -DVAL_CFLAGS_SL="\"$(CFLAGS_SL)\""
! override CPPFLAGS += -DVAL_LDFLAGS="\"$(STD_LDFLAGS)\""
! override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
! override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
! override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
This duplication from src/bin/pg_config is a bad idea. Couldn't we do
something in src/common instead that sets up values at compilation
time in a routine (perhaps set of routines) available for both the
frontend and backend?
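As a sketch of what such a src/common routine might look like (the names
`ConfigData` and `get_configdata` are illustrative, not from any patch): the
`VAL_*` macros would normally be injected at build time by the Makefile lines
quoted above, so fallback definitions are provided here purely to keep the
sketch self-contained and compilable.

```c
#include <stddef.h>

/* Normally injected via -DVAL_CC="\"...\"" etc. as in the Makefile
 * fragment above; fallbacks keep this sketch compilable standalone. */
#ifndef VAL_CC
#define VAL_CC "gcc"
#endif
#ifndef VAL_CFLAGS
#define VAL_CFLAGS "-O2"
#endif
#ifndef VAL_LDFLAGS
#define VAL_LDFLAGS ""
#endif

typedef struct ConfigData
{
	const char *name;
	const char *setting;
} ConfigData;

/* Single compile-time table that both the pg_config frontend binary and a
 * backend SQL function could iterate, instead of duplicating the -DVAL_*
 * plumbing in two Makefiles. */
static const ConfigData configdata[] = {
	{"CC", VAL_CC},
	{"CFLAGS", VAL_CFLAGS},
	{"LDFLAGS", VAL_LDFLAGS},
};

/* Hand back the table and its length; every caller walks it the same way. */
size_t
get_configdata(const ConfigData **data)
{
	*data = configdata;
	return sizeof(configdata) / sizeof(configdata[0]);
}
```

A frontend caller would simply printf each name/setting pair, while a backend
function would return them as tuples; the point is that the table itself
lives in exactly one place.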
-- 
Michael




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-16 Thread Michael Paquier
On Sat, Jan 16, 2016 at 11:07 PM, Michael Paquier
 wrote:
> On Sun, Dec 27, 2015 at 5:39 AM, Joe Conway  wrote:
>> First installment -- pg_config function/view as a separate patch,
>> rebased to current master.
>
> Documentation would be good to have.
>
> ! # don't include subdirectory-path-dependent -I and -L switches
> ! STD_CPPFLAGS := $(filter-out -I$(top_srcdir)/src/include
> -I$(top_builddir)/src/include,$(CPPFLAGS))
> ! STD_LDFLAGS := $(filter-out -L$(top_builddir)/src/port,$(LDFLAGS))
> ! override CPPFLAGS += -DVAL_CONFIGURE="\"$(configure_args)\""
> ! override CPPFLAGS += -DVAL_CC="\"$(CC)\""
> ! override CPPFLAGS += -DVAL_CPPFLAGS="\"$(STD_CPPFLAGS)\""
> ! override CPPFLAGS += -DVAL_CFLAGS="\"$(CFLAGS)\""
> ! override CPPFLAGS += -DVAL_CFLAGS_SL="\"$(CFLAGS_SL)\""
> ! override CPPFLAGS += -DVAL_LDFLAGS="\"$(STD_LDFLAGS)\""
> ! override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
> ! override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
> ! override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
> This duplication from src/bin/pg_config is a bad idea. Couldn't we do
> something in src/common instead that sets up values at compilation
> time in a routine (perhaps set of routines) available for both the
> frontend and backend?

Just forgot to mention that those new functions should be superuser-only.
-- 
Michael




Re: [HACKERS] checkpointer continuous flushing

2016-01-16 Thread Fabien COELHO


Hello Andres,


I measured it in a number of different cases, both on SSDs and spinning
rust. I just reproduced it with:

postgres-ckpt14 \
   -D /srv/temp/pgdev-dev-800/ \
   -c maintenance_work_mem=2GB \
   -c fsync=on \
   -c synchronous_commit=off \
   -c shared_buffers=2GB \
   -c wal_level=hot_standby \
   -c max_wal_senders=10 \
   -c max_wal_size=100GB \
   -c checkpoint_timeout=30s

Using a fresh cluster each time (copied from a "template" to save time)
and using
pgbench -M prepared -c 16 -j 16 -T 300 -P 1


I'm running some tests similar to those above...

Do you do some warmup when testing? I guess the answer is "no".

I understand that you have 8 cores/16 threads on your host?

Loading scale-800 data for 300-second tests takes much more than 300
seconds (init takes ~360 seconds; vacuum and index creation are slow). With
30-second checkpoint cycles and without any warmup, I feel that these tests
are really on the very short (too short) side, so I'm not sure how much I
can trust such results as significant. The data I reported were obtained
with more realistic, production-like parameters.


Anyway, I'll have some results to show with a setting more or less similar 
to yours.


--
Fabien.




Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-16 Thread Amit Kapila
On Sat, Jan 16, 2016 at 5:08 PM, Michael Paquier 
wrote:
>
> On Sat, Jan 16, 2016 at 7:10 PM, Amit Kapila 
wrote:
> > On Sun, Dec 20, 2015 at 6:44 PM, Michael Paquier <
michael.paqu...@gmail.com>
> > wrote:
> >> On Sun, Nov 8, 2015 at 9:50 PM, Michael Paquier
> >>  wrote:
> >> > On Sat, Nov 7, 2015 at 3:54 PM, Michael Paquier wrote:
> >> >> I thought about something like that at some point by saving a
minimum
> >> >> activity pointer in XLogCtl, updated each time a segment was
forcibly
> >> >> switched or after inserting a checkpoint record. Then the bgwriter
> >> >> looked at if the current insert position matched this minimum
activity
> >> >> pointer, skipping LogStandbySnapshot if both positions match. Does
> >> >> this match your line of thoughts?
> >> >
> >> > Looking at the code, it occurred to me that the LSN position saved
for
> >> > a XLOG_SWITCH record is the last position of current segment, so we
> >> > would still need to check if the current insert LSN matches the
> >> > beginning of a new segment and if the last segment was forcibly
> >> > switched by saving RecPtr of RequestXLogSwitch in XLogCtl for
example.
> >> > Thoughts?
> >>
> >> I haven't given up on this patch yet, and putting again my head on
> >> this problem I have finished with the patch attached, which checks if
> >> the current insert LSN position is at the beginning of a segment that
> >> has just been switched to decide if a standby snapshot should be
> >> logged or not. This allows bringing back an idle system to the pre-9.3
> >> state where a segment would be archived in the case of a low
> >> archive_timeout only when a checkpoint has been issued on the system.
> >>
> >
> > Won't this be a problem if the checkpoint occurs after a long time and
in
> > the mean time there is some activity in the server?
>
> Why? If there is some activity on the server, the snapshot will be
> immediately taken at the next iteration without caring about the
> checkpoint.
>

+ (insert_lsn % XLOG_SEG_SIZE) != SizeOfXLogLongPHD))

Do you mean that it is protected by the above check in the
patch?

Isn't it possible that so much WAL is inserted between bgwriter cycles
that, when it checks the location of WAL, it finds it to be at the beginning
of a new segment?

> > Another idea to solve this issue could be to see if there is any
progress
> > in the server by checking buffers dirtied/written (we can refer that
> > information using pgBufferUsage) since last time we log this record in
> > bgwriter.
>
> Yes, that may be an idea worth considering, but I really think that we
> had better measure that at WAL level..
>

I thought this was quite close to the previous patch you proposed, where
Andres wanted some measurement in terms of progress since the last
checkpoint. I understand that as per the current code your patch can work,
but what if more similar WAL records need to be added?


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-16 Thread Michael Paquier
On Wed, Dec 30, 2015 at 9:08 AM, Joe Conway  wrote:
> 1) Change NextXID output format from "%u/%u" to "%u:%u"
>(see recent hackers thread)

! printf(_("Latest checkpoint's NextXID:  %u/%u\n"),
 ControlFile.checkPointCopy.nextXidEpoch,
 ControlFile.checkPointCopy.nextXid);
  printf(_("Latest checkpoint's NextOID:  %u\n"),
--- 646,652 
 ControlFile.checkPointCopy.ThisTimeLineID);
  printf(_("Latest checkpoint's full_page_writes: %s\n"),
 ControlFile.checkPointCopy.fullPageWrites ? _("on") : _("off"));
! printf(_("Latest checkpoint's NextXID:  %u:%u\n"),
This should be definitely a separate patch.

> 2) Refactor bin/pg_controldata (there should be no visible change to
>pg_controldata output)
> 3) Adds new functions, more or less in line with previous discussions:
>* pg_checkpoint_state()
>* pg_controldata_state()
>* pg_recovery_state()
>* pg_init_state()

Taking the opposite direction of Josh upthread, why is this split
actually necessary? Isn't the idea to provide a SQL interface of what
pg_controldata shows? If this split proves to be useful, shouldn't we
do it as well for pg_controldata?

> ===
> Missing (TODO once agreement on the above is reached):
> ---
> a) documentation

This would be good to have.

> b) catversion bump

That's committer work.

> c) regression tests

Hm, what would be the value of those tests? Honestly, I think we could live
without them for simple functions like these.

I think that those functions should be superuser-only. They provide
information about the system globally.
-- 
Michael




Re: [HACKERS] pg_dump fails on domain constraint comments

2016-01-16 Thread Michael Paquier
On Tue, Jan 12, 2016 at 7:56 AM, Elvis Pranskevichus  wrote:
> It looks like pg_dump emits incorrect text for domain constraint comments:
>
> Assuming the following structure,

Nice catch! qtypname already has fmtId applied to it, so quotes are
applied twice to it in this case. I am adding an entry in the next CF
with the patch marked as ready for committer, so that this does not fall
into oblivion. A backpatch would be welcome as well.

Here is the patch entry:
https://commitfest.postgresql.org/9/481/
I could not add your name as an author, as you apparently do not have a
community account. That's no big deal; I am just mentioning it here to
avoid confusion.
-- 
Michael




Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-16 Thread Michael Paquier
On Sat, Jan 16, 2016 at 7:10 PM, Amit Kapila  wrote:
> On Sun, Dec 20, 2015 at 6:44 PM, Michael Paquier 
> wrote:
>> On Sun, Nov 8, 2015 at 9:50 PM, Michael Paquier
>>  wrote:
>> > On Sat, Nov 7, 2015 at 3:54 PM, Michael Paquier wrote:
>> >> I thought about something like that at some point by saving a minimum
>> >> activity pointer in XLogCtl, updated each time a segment was forcibly
>> >> switched or after inserting a checkpoint record. Then the bgwriter
>> >> looked at if the current insert position matched this minimum activity
>> >> pointer, skipping LogStandbySnapshot if both positions match. Does
>> >> this match your line of thoughts?
>> >
>> > Looking at the code, it occurred to me that the LSN position saved for
>> > a XLOG_SWITCH record is the last position of current segment, so we
>> > would still need to check if the current insert LSN matches the
>> > beginning of a new segment and if the last segment was forcibly
>> > switched by saving RecPtr of RequestXLogSwitch in XLogCtl for example.
>> > Thoughts?
>>
>> I haven't given up on this patch yet, and putting again my head on
>> this problem I have finished with the patch attached, which checks if
>> the current insert LSN position is at the beginning of a segment that
>> has just been switched to decide if a standby snapshot should be
>> logged or not. This allows bringing back an idle system to the pre-9.3
>> state where a segment would be archived in the case of a low
>> archive_timeout only when a checkpoint has been issued on the system.
>>
>
> Won't this be a problem if the checkpoint occurs after a long time and in
> the mean time there is some activity in the server?

Why? If there is some activity on the server, the snapshot will be
immediately taken at the next iteration without caring about the
checkpoint.

> Another idea to solve this issue could be to see if there is any progress
> in the server by checking buffers dirtied/written (we can refer that
> information using pgBufferUsage) since last time we log this record in
> bgwriter.

Yes, that may be an idea worth considering, but I really think that we
had better measure that at WAL level..
-- 
Michael




Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-16 Thread Michael Paquier
On Sat, Jan 16, 2016 at 9:07 PM, Amit Kapila  wrote:
> On Sat, Jan 16, 2016 at 5:08 PM, Michael Paquier 
> wrote:
>>
>> On Sat, Jan 16, 2016 at 7:10 PM, Amit Kapila 
>> wrote:
>> > On Sun, Dec 20, 2015 at 6:44 PM, Michael Paquier
>> >  wrote:
>> >> > Looking at the code, it occurred to me that the LSN position saved
>> >> > for
>> >> > a XLOG_SWITCH record is the last position of current segment, so we
>> >> > would still need to check if the current insert LSN matches the
>> >> > beginning of a new segment and if the last segment was forcibly
>> >> > switched by saving RecPtr of RequestXLogSwitch in XLogCtl for
>> >> > example.
>> >> > Thoughts?
>> >>
>> >> I haven't given up on this patch yet, and putting again my head on
>> >> this problem I have finished with the patch attached, which checks if
>> >> the current insert LSN position is at the beginning of a segment that
>> >> has just been switched to decide if a standby snapshot should be
>> >> logged or not. This allows bringing back an idle system to the pre-9.3
>> >> state where a segment would be archived in the case of a low
>> >> archive_timeout only when a checkpoint has been issued on the system.
>> >>
>> >
>> > Won't this be a problem if the checkpoint occurs after a long time and
>> > in
>> > the mean time there is some activity in the server?
>>
>> Why? If there is some activity on the server, the snapshot will be
>> immediately taken at the next iteration without caring about the
>> checkpoint.
>>
>
> + (insert_lsn % XLOG_SEG_SIZE) != SizeOfXLogLongPHD))
>
> Do you mean that it is protected by the above check in the
> patch?

Yep, in combination with is_switch_current to check if the last
completed segment was forcibly switched.

> Isn't it possible that so much WAL is inserted between bgwriter cycles
> that, when it checks the location of WAL, it finds it to be at the beginning
> of a new segment?

Er, no. Even if the insert LSN is at the beginning of a new segment,
this would take a standby snapshot if the last segment was not
forcibly switched.
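The combined condition being described can be sketched as follows. The names
and constants here are illustrative stand-ins, not the patch's actual
identifiers; XLOG_SEG_SIZE is assumed at its 16MB default and the long page
header size is an approximation.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

#define XLOG_SEG_SIZE     (16 * 1024 * 1024)	/* 16MB default segment size */
#define SizeOfXLogLongPHD 40					/* long page header, illustrative */

/*
 * Decide whether the bgwriter may skip LogStandbySnapshot(): only when the
 * insert pointer sits exactly past the long page header of a fresh segment
 * AND the previous segment was forcibly switched (RequestXLogSwitch), i.e.
 * nothing has been inserted since that forced switch.  A busy system that
 * merely happens to land on a segment boundary still gets a snapshot,
 * because its last switch was not a forced one.
 */
static bool
can_skip_standby_snapshot(XLogRecPtr insert_lsn, bool last_switch_forced)
{
	bool		at_segment_start =
		(insert_lsn % XLOG_SEG_SIZE) == SizeOfXLogLongPHD;

	return at_segment_start && last_switch_forced;
}
```

This mirrors the exchange above: the modulo test alone is not enough, and it
is the combination with the last-segment-forcibly-switched flag that rules
out the "lots of WAL landed exactly on a boundary" case.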

>> > Another idea to solve this issue could be to see if there is any
>> > progress
>> > in the server by checking buffers dirtied/written (we can refer that
>> > information using pgBufferUsage) since last time we log this record in
>> > bgwriter.
>>
>> Yes, that may be an idea worth considering, but I really think that we
>> had better measure that at WAL level..
>
> I thought this is quite close to the previous patch you proposed where
> Andres wanted some measurement in terms of progress since last
> checkpoint. I understand as per current code your patch can work, but
> what if some more similar WAL records needs to be added?

A record like this one for the bgwriter? We could still rely on the
same check tracking the last-forced-segment, no?

It seems to me that tracking progress here is not only a matter of the
number of shared buffers dirtied or written; we would also need to track
XLOG_STANDBY_LOCK records and provide fresh snapshots for them.
Imagine for example a read-only load on the master with some exclusive
locks taken on relations from time to time (perhaps unlikely, but who
knows?).
-- 
Michael




Re: [HACKERS] extend pgbench expressions with functions

2016-01-16 Thread Michael Paquier
On Fri, Jan 15, 2016 at 11:53 PM, Fabien COELHO  wrote:
> Here is a v19 :
>  - avoid noisy changes
>  - abort on double->int overflow
>  - implement operators as functions
>
> There is still \setrandom, that I can remove easily with a green light.

(I am not sure why *$%"# gmail broke the thread in my last email)

Thanks for the new patch and replacing the operator stuff by functions.

+   uniformly-distributed random integer in [lb,ub]
Nitpick: when defining an interval like that, you may want to add a
space after the comma. For example, seg.sgml does that. It would be
good to be consistent even here. And you actually wrote [ub, lb] in
two places; the bounds should have been reversed.

+  /* beware that the list is reverse in make_func */
s/reverse/reversed/?

}
+
 #ifdef DEBUG
Some noise.

With this example:
\set cid debug(sqrt(-1))
I get this:
debug(script=0,command=1): double nan
An error would be more logical, no? Or do you want to emulate complex
numbers instead?

The basic operator functions also do not check for integer overflows.
These three simply overflow silently:
\set cid debug(9223372036854775807 + 1)
\set cid debug(-9223372036854775808 - 1)
\set cid debug(9223372036854775807 * 9223372036854775807)
debug(script=0,command=1): int -9223372036854775807
debug(script=0,command=2): int 9223372036854775807
debug(script=0,command=3): int 1
And this one generates a core dump:
\set cid debug(-9223372036854775808 / -1)
Floating point exception: 8 (core dumped)
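One way such operators could guard against all four cases (a hedged sketch
in plain C99, not the pgbench patch's actual code): return failure instead
of invoking undefined behavior, so the caller can abort the script cleanly.
Note that INT64_MIN / -1 is the case that raises SIGFPE on x86, matching
the core dump above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Overflow-checked 64-bit addition. */
static bool
checked_add(int64_t a, int64_t b, int64_t *result)
{
	if ((b > 0 && a > INT64_MAX - b) ||
		(b < 0 && a < INT64_MIN - b))
		return false;
	*result = a + b;
	return true;
}

/* Overflow-checked 64-bit subtraction. */
static bool
checked_sub(int64_t a, int64_t b, int64_t *result)
{
	if ((b < 0 && a > INT64_MAX + b) ||
		(b > 0 && a < INT64_MIN + b))
		return false;
	*result = a - b;
	return true;
}

/* Overflow-checked 64-bit multiplication; -1 cases handled first so the
 * range checks below never themselves divide INT64_MIN by -1. */
static bool
checked_mul(int64_t a, int64_t b, int64_t *result)
{
	if (a == 0 || b == 0)
	{
		*result = 0;
		return true;
	}
	if (a == -1)
	{
		if (b == INT64_MIN)
			return false;
		*result = -b;
		return true;
	}
	if (b == -1)
	{
		if (a == INT64_MIN)
			return false;
		*result = -a;
		return true;
	}
	if (a > 0 ? (b > INT64_MAX / a || b < INT64_MIN / a)
			  : (b < INT64_MAX / a || b > INT64_MIN / a))
		return false;
	*result = a * b;
	return true;
}

/* Overflow-checked 64-bit division: rejects division by zero and the
 * INT64_MIN / -1 case that traps in hardware. */
static bool
checked_div(int64_t a, int64_t b, int64_t *result)
{
	if (b == 0 || (a == INT64_MIN && b == -1))
		return false;
	*result = a / b;
	return true;
}
```

Recent compilers also offer `__builtin_add_overflow` and friends, but open
range checks like these work on any C99 compiler.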

A more general comment: what about splitting all the execution
functions into a separate file, exprexec.c? evaluateExpr (renamed as
execExpr) would be the root function, with a set of static
sub-functions for each node, like execExprFunc, execExprVar,
execExprConst, etc. This way we would save a level of indentation;
this patch makes many new code lines exceed 80 characters because of
all the switch/case logic, which gets more complicated.
-- 
Michael




Re: [HACKERS] pgindent-polluted commits

2016-01-16 Thread Simon Riggs
On 16 January 2016 at 02:10, Noah Misch  wrote:

> On Wed, Jan 13, 2016 at 12:13:11PM -0500, Tom Lane wrote:
> > Simon Riggs  writes:
> > > On 13 January 2016 at 14:48, Noah Misch  wrote:
> > >> I've noticed commits, from a few of you, carrying pgindent changes to
> > >> lines the patch would not otherwise change.
> >
> > > Could we review again why this matters?
> >
> > Basically this is trading off convenience of the committer (all of the
> > alternatives Noah mentions are somewhat annoying) versus the convenience
> > of post-commit reviewers.  I'm not sure that his recommendation is the
> > best trade-off, nor that the situation is precisely comparable to
> > pre-commit review.  There definitely will be pre-commit review, there
> > may or may not be any post-commit review.
>
> That's a good summary.
>
> > I'm willing to go with the "separate commit to reindent individual files"
> > approach if there's a consensus that that makes for a cleaner git history.
> > But I'm not 100% convinced it matters.
>
> Thanks.
>

My objective in committing patches to PostgreSQL is to develop the Open
Source version of PostgreSQL as a standalone product and I encourage others
to do the same.

PostgreSQL is open source and therefore usable for various additional
purposes, one of which is modified versions of PostgreSQL.

I will not go out of my way to cause problems for the secondary users of
the code. I will try to implement one of the suggestions for whitespace
handling, though I may make mistakes in that, nobody being perfect.

The secondary purposes of the code can only occur if the core code lives
and breathes, so I expect such users to make positive contributions to core
directly and not to block or slow down inclusion of features by others.
Quid pro quo.

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-16 Thread Andres Freund
On January 17, 2016 12:46:36 AM GMT+01:00, Michael Paquier 
 wrote:
>, but we surely do not want to give away
>checkpoint and recovery information.

Why is that? A lot of that information is available anyway?

--- 
Please excuse brevity and formatting - I am writing this on my mobile phone.




Re: [HACKERS] Declarative partitioning

2016-01-16 Thread Corey Huinker
>
>
> If we have a CREATE statement for each partition, how do we generalize
> that to partitions at different levels? For example, if we use something
> like the following to create a partition of parent_name:
>
> CREATE PARTITION partition_name OF parent_name FOR VALUES ...
> WITH ... TABLESPACE ...
>
> Do we then say:
>
> CREATE PARTITION subpartition_name OF partition_name ...
>


That's how I'd want it for partitions created after the initial partitioned
table is created.

I'd like to be able to identify the parent partition by its own
partitioning parameters rather than by name, the way we can derive the
name of an index in ON CONFLICT. But I see no clean way to do that, and if
one did come up, we'd simply allow the user to replace

with
table_name PARTITION partition_spec [...PARTITION partition_spec [
...PARTITION turtles_all_the_way_down]]).

Again, totally fine with forcing the maintenance script to know or discover
the name of the partition to be subpartitioned...for now.



> to create a level 2 partition (sub-partition) of parent_name? Obviously,
> as is readily apparent from the command, it is still a direct partition of
> partition_name for all internal purposes (consider partition list caching
> in relcache, recursive tuple routing, etc.) save some others.
>
> I ask that also because it's related to the choice of syntax to use to
> declare the partition key for the multi-level case. I'm considering the
> SUBPARTITION BY notation and perhaps we could generalize it to more than
> just 2 levels. So, for the above case, parent_name would have been created
> as:
>
> CREATE TABLE parent_name PARTITION BY ... SUBPARTITION BY ...


> Needless to say, when subpartition_name is created with the command we saw
> a moment ago, the root partitioned table would be locked. In fact, adding
> a partition anywhere in the hierarchy needs an exclusive lock on the root
> table. Also, partition rule (the FOR VALUES clause) would be validated
> against PARTITION BY or SUBPARTITION BY clause at the respective level.
>
> Although, I must admit I feel a little uneasy about the inherent asymmetry
> in using SUBPARTITION BY for key declaration whereas piggybacking CREATE
> PARTITION for creating sub-partitions. Is there a better way?
>

Provided that the syntax allows for N levels of partitioning, I don't care
if it's
PARTITION BY.., PARTITION BY..., PARTITION BY ...
or
PARTITION BY.., SUBPARTITION BY..., SUBPARTITION BY ...

The first is probably better for meta-coding purposes, but the second makes
it clear which partition layer is first.


>
> > As for the convenience syntax (if at all), how about:
> >
> > CREATE TABLE foo (
> >   ...
> > )
> > PARTITION BY ... ON (...)
> > SUBPARTITION BY ... ON (...)
> > opt_partition_list;
> >
> > where opt_partition_list is:
> >
> > PARTITIONS (
> >   partname FOR VALUES ... [WITH] [TABLESPACE] opt_subpart_list
> >   [, ...]
> > )
> >
> > where opt_subpart_list is:
> >
> > SUBPARTITIONS (
> >   subpartname FOR VALUES ... [WITH] [ TABLESPACE]
> >   [, ...]
> > )
>
> Do we want this at all? It seems difficult to generalize this to
> multi-level hierarchy of more than 2 levels.
>

I want this.

Granted, the syntax for 3+ levels of partitioning would be cumbersome, but
it is what the user wanted, expressed with the nested
PARTITION/SUBPARTITION syntax. In those
cases, the user might opt to not create more than the default first
subpartition to keep the syntax sane, or we might auto-generate default
partitions (with a VALUES clause of whatever "all values" is for that
datatype...again, this is an area where leveraging range types would be
most valuable).



> On one hand, I think to keep treating "partition hierarchies" as
> "inheritance hierarchies" might have some issues. I am afraid that
> documented inheritance semantics may not be what we want to keep using for
> the new partitioned tables. By that, I mean all the user-facing behaviors
> where inheritance has some bearing. Should it also affect new partitioned
> tables? Consider whether inheritance semantics would render infeasible
> some of the things that we'd like to introduce for the new partitioned
> tables such as automatic tuple routing, or keep us from improving planner
> smarts and executor capabilities for partitioned tables over what we
> already have.
>

I feel that automatic tuple routing should be considered the key benefit
of "real" partitions over inherited tables. Trigger maintenance is most of
the work of custom partitioning schemes, at least the ones I've written.

> There's a great chance that not everyone cares right now about this part
> of the new partitioning but just want to put it out there. There are more
> contentious issues like the syntax, partitioning maintenance commands that
> we plan to support (now or later) and such.
>

What I've read so far addresses most of my concerns.

Still somewhat on my mind:

1. ability to describe partition bounds via range types, regardless of
whether the Automatic 

Re: [HACKERS] Some bugs in psql_complete of psql

2016-01-16 Thread Peter Eisentraut
On 1/12/16 9:46 PM, Peter Eisentraut wrote:
> On 12/22/15 4:44 AM, Kyotaro HORIGUCHI wrote:
>> 1. 0001-Fix-tab-complete-of-CREATE-INDEX.patch
>>
>>   Fixes completion for CREATE INDEX in ordinary way.
> 
> This part has been fixed in another thread.  Please check whether that
> satisfies all your issues.
> 
>> 3. 0002-Fix-tab-completion-for-DROP-INDEX.patch
>>
>>   Fix of DROP INDEX completion in the type-2 way.
> 
> I agree that we could use completion support for DROP INDEX
> CONCURRENTLY, but I would rather not throw IF NOT EXISTS into the same
> patch.  We don't have support for IF NOT EXISTS anywhere else.  If you
> think about it, it's rather unnecessary, because tab completion will
> determine for you whether an object exists.

I have applied a reduced version of the DROP INDEX patch.  I think that
covers everything in your submission, but please check.





Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-16 Thread Amit Kapila
On Sat, Jan 16, 2016 at 6:37 PM, Michael Paquier 
wrote:
>
> On Sat, Jan 16, 2016 at 9:07 PM, Amit Kapila 
wrote:
> > On Sat, Jan 16, 2016 at 5:08 PM, Michael Paquier <
michael.paqu...@gmail.com>
> > wrote:
> >>
> >> On Sat, Jan 16, 2016 at 7:10 PM, Amit Kapila 
> >> wrote:
> >> > Won't this be a problem if the checkpoint occurs after a long time
and
> >> > in
> >> > the mean time there is some activity in the server?
> >>
> >> Why? If there is some activity on the server, the snapshot will be
> >> immediately taken at the next iteration without caring about the
> >> checkpoint.
> >>
> >
> > + (insert_lsn % XLOG_SEG_SIZE) != SizeOfXLogLongPHD))
> >
> > Do you mean to intend that it is protected by above check in the
> > patch?
>
> Yep, in combination with is_switch_current to check if the last
> completed segment was forcibly switched.
>
> > Isn't it possible that so much WAL is inserted between bgwriter cycles
> > that, when it checks the location of WAL, it finds it to be at the
> > beginning of a new segment?
>
> Er, no. Even if the insert LSN is at the beginning of a new segment,
> this would take a standby snapshot if the last segment was not
> forcibly switched.
>

So here, if I understand correctly, the check related to the last segment
being forcibly switched is based on the assumption that if it was forcibly
switched, then we don't need to log this record?  What is the reason for
such an assumption?  It is not very clear from reading the comments you
have added in the patch; maybe you can expand the comments slightly to
explain the reasoning.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Let PostgreSQL's On Schedule checkpoint write buffer smooth spread cycle by tuning IsCheckpointOnSchedule?

2016-01-16 Thread Bruce Momjian
On Wed, Dec 23, 2015 at 04:37:00PM +0100, Fabien COELHO wrote:
> Hmmm. Let us try with both hands:
> 
> AFAICR with xlog-triggered checkpoints, the checkpointer progress is
> measured with respect to the size of the WAL file, which does not
> grow linearly in time for the reason you pointed above (a lot of FPW
> at the beginning, less in the end). As the WAL file is growing
> quickly, the checkpointer thinks that it is late and that it has
> some catchup to do, so it will start to try writing quickly as well.
> There is a double whammy as both are writing more, and are probably
> not succeeding.
> 
> For time triggered checkpoints, the WAL file gets filled up *but*
> the checkpointer load is balanced against time. This is a "simple"
> whammy, where the checkpointer uses IO bandwidth which is needed for
> the WAL, and it could wait a little bit because the WAL will need
> less later, but it is not trying to catch up by even writing more,
> so the load shifting needed in this case is not the same as the
> previous case.
> 
> As you point out there is a WAL spike in both case, but in one case
> there is also a checkpointer spike and in the other the checkpointer
> load is flat.

Coming in late here, but I always thought the fact that the FPW happen
mostly at the start of the checkpoint, and the checkpoint writes/fsyncs
happen mostly in the first half of the checkpoint period was always
suboptimal, i.e. it would be nice if one of these was more active in the
second half of the checkpoint period.  I assume that is what is being
discussed here.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




[HACKERS] Re: [GENERAL] Re: [BUGS] about test_parser installation failure problem(PostgreSQL in 9.5.0)?

2016-01-16 Thread Michael Paquier
On Sun, Jan 17, 2016 at 12:28 AM, Tom Lane  wrote:
> Michael Paquier  writes:
>> On Sat, Jan 16, 2016 at 2:03 AM, Tom Lane  wrote:
>>> Yes, we moved test_parser and some other only-useful-for-testing modules
>>> from contrib to src/test/modules, which means they won't get installed in
>>> standard installations.
>
>> Additional note: on Windows when code is compiled with MSVC, they are 
>> installed.
>
> Really?  Surely that's a bug.

We are doing so to simplify the work of vcregress.pl, where an instance
usually needs to be in place before running the tests.
-- 
Michael




Re: [HACKERS] WIP: Rework access method interface

2016-01-16 Thread Amit Kapila
On Sat, Jan 16, 2016 at 9:43 PM, Tom Lane  wrote:
>
> Alexander Korotkov  writes:
> > [ aminterface-13.patch ]
>
> I've started to review this.  There are a bunch of cosmetic things I don't
> like, notably the include-file nesting you've chosen, but they're fixable.
> One item that I think could use some discussion is where to put the new
> amvalidate functions.  I don't especially like your choice to drop them
> into nbtree.c, gist.c, etc, for a couple of reasons:
>
> 1. These aren't really at the same semantic level as functions like
> btinsert or btgettuple; they're not part of the implementation of an
> index, and indeed are *users* of indexes (at least of the catalog
> indexes).
>
> 2. This approach substantially bloats the #include lists for the
> relevant files, which again is a token of the validate functions not
> belonging where they were put.
>
> 3. There's probably room to share code across the different validators;
> but this design isn't very amenable to that.
>
> A comparison point worth noting is that the amcostestimate functions
> are in more or less the same boat: they aren't part of the index
> implementation in any meaningful way, but are really part of the
> planner instead.  Those are all in selfuncs.c, not under backend/access
> at all.
>
> There are a couple of things we could do instead:
>
> * Put each amvalidate function into its own file (but probably keep it
> in the same directory as now).  This is a reasonable response to
> points #1 and #2 but isn't very much help for #3.
>

Shouldn't we try to move the amhandler functions as well, along with
amvalidate?  I think moving each AM's handler and validate function into
an AM-specific new file would make this arrangement closer to what we
have for PLs (e.g. we have plpgsql_validator and plpgsql_call_handler in
pl_handler.c, and similar handler and validator functions for other
languages in their corresponding modules).


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-16 Thread Amit Kapila
On Sun, Dec 20, 2015 at 6:44 PM, Michael Paquier 
wrote:
> On Sun, Nov 8, 2015 at 9:50 PM, Michael Paquier
>  wrote:
> > On Sat, Nov 7, 2015 at 3:54 PM, Michael Paquier wrote:
> >> I thought about something like that at some point by saving a minimum
> >> activity pointer in XLogCtl, updated each time a segment was forcibly
> >> switched or after inserting a checkpoint record. Then the bgwriter
> >> looked at if the current insert position matched this minimum activity
> >> pointer, skipping LogStandbySnapshot if both positions match. Does
> >> this match your line of thoughts?
> >
> > Looking at the code, it occurred to me that the LSN position saved for
> > a XLOG_SWITCH record is the last position of current segment, so we
> > would still need to check if the current insert LSN matches the
> > beginning of a new segment and if the last segment was forcibly
> > switched by saving RecPtr of RequestXLogSwitch in XLogCtl for example.
> > Thoughts?
>
> I haven't given up on this patch yet, and putting again my head on
> this problem I have finished with the patch attached, which checks if
> the current insert LSN position is at the beginning of a segment that
> has just been switched to decide if a standby snapshot should be
> logged or not. This allows bringing back an idle system to the pre-9.3
> state where a segment would be archived in the case of a low
> archive_timeout only when a checkpoint has been issued on the system.
>

Won't this be a problem if the checkpoint occurs after a long time and in
the meantime there is some activity on the server?

Another idea to solve this issue could be to see if there is any progress
in the server by checking the buffers dirtied/written (we can get that
information from pgBufferUsage) since the last time we logged this record
in the bgwriter.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS]

2016-01-16 Thread Michael Paquier
On Sat, Jan 16, 2016 at 10:45 PM, Michael Paquier
 wrote:
> On Fri, Jan 15, 2016 at 11:53 PM, Fabien COELHO  wrote:
>> Here is a v19 :
>>  - avoid noisy changes
>>  - abort on double->int overflow
>>  - implement operators as functions
>>
>> There is still \setrandom, that I can remove easily with a green light.
>
> Thanks for the new patch and replacing the operator stuff by functions.

Please ignore this email; for some reason gmail has broken the
original thread (apparently it is better not to ask gmail to put a
draft from another window into a thread, even if that draft was
originally attached to the thread).
-- 
Michael




Re: [HACKERS] dealing with extension dependencies that aren't quite 'e'

2016-01-16 Thread Robert Haas
On Jan 16, 2016, at 9:48 AM, Abhijit Menon-Sen  wrote:
> Right, here's another try.
> 
> The extension does trigger-based DML auditing. You install it using
> CREATE EXTENSION and then call one of its functions to enable auditing
> for a particular table. That function will create a customised trigger
> function based on the table's columns and a trigger that uses it:
> 
>CREATE FUNCTION fn_audit_$table_name() RETURNS TRIGGER …
>CREATE TRIGGER … ON $table_name … EXECUTE fn_audit_$table_name;
> 
> All that works fine (with pg_dump too). But if you drop the extension,
> the triggers stop working because the trigger function calls functions
> in the extension that are now gone.

This seems like one manifestation of the more general problem that we don't 
have any real idea what objects a function definition depends on.

...Robert



Re: [HACKERS] WIP: Rework access method interface

2016-01-16 Thread Andres Freund
On January 16, 2016 6:32:47 PM GMT+01:00, Tom Lane  wrote:

>As for modularity, nobody's moaned particularly about the
>amcostestimate
>functions all being in selfuncs.c.  It all depends on what you think is
>"modular".

Well, back then you couldn't really have a production-grade index as an
extension. With this we're getting pretty close to being able to do that for a
bunch of useful index types.

Andres

--- 
Please excuse brevity and formatting - I am writing this on my mobile phone.




Re: [HACKERS] dealing with extension dependencies that aren't quite 'e'

2016-01-16 Thread Abhijit Menon-Sen
Right, here's another try.

The extension does trigger-based DML auditing. You install it using
CREATE EXTENSION and then call one of its functions to enable auditing
for a particular table. That function will create a customised trigger
function based on the table's columns and a trigger that uses it:

CREATE FUNCTION fn_audit_$table_name() RETURNS TRIGGER …
CREATE TRIGGER … ON $table_name … EXECUTE fn_audit_$table_name;

All that works fine (with pg_dump too). But if you drop the extension,
the triggers stop working because the trigger function calls functions
in the extension that are now gone.

To mitigate this problem, the extension actually does:

CREATE FUNCTION fn_audit…
ALTER EXTENSION … ADD FUNCTION fn_audit…

Now the trigger depends on the trigger function (as before), and the
trigger function depends on the extension, so you can't inadvertently
break the system by dropping the extension.

But now pg_dump has a problem: it'll dump the trigger definitions, but
not the trigger functions (because of their new 'e' dependency on the
extension). So if you restore, you get the extension and the triggers,
but the trigger functions are gone, and things break.

*This* is the problem I'm trying to solve. Sorry, my earlier explanation
was not clear, because I didn't fully understand the problem and what
the extension was doing.

One possible solution is to make the trigger function depend on the
extension with a dependency type that isn't 'e', and therefore doesn't
prevent pg_dump from including the function in its output. We would need
some way to record the dependency, but no changes to pg_dump would be
needed.

Thoughts?

-- Abhijit




Re: [HACKERS] [GENERAL] Re: [BUGS] about test_parser installation failure problem(PostgreSQL in 9.5.0)?

2016-01-16 Thread Tom Lane
Michael Paquier  writes:
> On Sat, Jan 16, 2016 at 2:03 AM, Tom Lane  wrote:
>> Yes, we moved test_parser and some other only-useful-for-testing modules
>> from contrib to src/test/modules, which means they won't get installed in
>> standard installations.

> Additional note: on Windows when code is compiled with MSVC, they are 
> installed.

Really?  Surely that's a bug.

regards, tom lane




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-16 Thread Robert Haas
On Jan 16, 2016, at 9:08 AM, Michael Paquier  wrote: 
> Just forgot to mention that those new functions should be superuser-only.

I think nobody should ever say this without explaining why. Superuser 
restrictions are necessary in some cases, but the fewer of them we have, the 
better.

...Robert



Re: [HACKERS] WIP: Rework access method interface

2016-01-16 Thread Robert Haas
On Jan 16, 2016, at 11:13 AM, Tom Lane  wrote:
> Alexander Korotkov  writes:
>> [ aminterface-13.patch ]
> 
> I've started to review this.  There are a bunch of cosmetic things I don't
> like, notably the include-file nesting you've chosen, but they're fixable.
> One item that I think could use some discussion is where to put the new
> amvalidate functions.  I don't especially like your choice to drop them
> into nbtree.c, gist.c, etc, for a couple of reasons:
> 
> 1. These aren't really at the same semantic level as functions like
> btinsert or btgettuple; they're not part of the implementation of an
> index, and indeed are *users* of indexes (at least of the catalog
> indexes).
> 
> 2. This approach substantially bloats the #include lists for the
> relevant files, which again is a token of the validate functions not
> belonging where they were put.
> 
> 3. There's probably room to share code across the different validators;
> but this design isn't very amenable to that.
> 
> A comparison point worth noting is that the amcostestimate functions
> are in more or less the same boat: they aren't part of the index
> implementation in any meaningful way, but are really part of the
> planner instead.  Those are all in selfuncs.c, not under backend/access
> at all.
> 
> There are a couple of things we could do instead:
> 
> * Put each amvalidate function into its own file (but probably keep it
> in the same directory as now).  This is a reasonable response to
> points #1 and #2 but isn't very much help for #3.
> 
> * Collect the amvalidate functions into one file, which then leaves
> us wondering where to put that; surely not under any one AM's directory.
> A new file in src/backend/access/index/ is one plausible solution.
> This file would also be a reasonable place to put the amvalidate()
> dispatch function itself.
> 
> I'm somewhat leaning to the second choice, but perhaps someone has
> a better idea, or an argument against doing that.

Doesn't seem very modular.  How about putting common code there but AM-specific 
code in each AM's directory?  It would be nice if adding a new AM meant mostly 
adding a new directory, not much touching the common code.

...Robert



Re: [HACKERS] WIP: Rework access method interface

2016-01-16 Thread Tom Lane
Robert Haas  writes:
> On Jan 16, 2016, at 11:13 AM, Tom Lane  wrote:
>> There are a couple of things we could do instead:
>> 
>> * Put each amvalidate function into its own file (but probably keep it
>> in the same directory as now).  This is a reasonable response to
>> points #1 and #2 but isn't very much help for #3.
>> 
>> * Collect the amvalidate functions into one file, which then leaves
>> us wondering where to put that; surely not under any one AM's directory.
>> A new file in src/backend/access/index/ is one plausible solution.
>> This file would also be a reasonable place to put the amvalidate()
>> dispatch function itself.
>> 
>> I'm somewhat leaning to the second choice, but perhaps someone has
>> a better idea, or an argument against doing that.

> Doesn't seem very modular.  How about putting common code there but 
> AM-specific code in each AM's directory?  It would be nice if adding a new AM 
> meant mostly adding a new directory, not much touching the common code.

Then we're going to end up with option A; and I suspect that we'll never
bother with factoring out any common code, because it won't be worth the
notational trouble if it involves common code that's in a different file
in a different directory.

As for modularity, nobody's moaned particularly about the amcostestimate
functions all being in selfuncs.c.  It all depends on what you think is
"modular".

regards, tom lane




Re: [HACKERS] WIP: Rework access method interface

2016-01-16 Thread Tom Lane
Alexander Korotkov  writes:
> [ aminterface-13.patch ]

I've started to review this.  There are a bunch of cosmetic things I don't
like, notably the include-file nesting you've chosen, but they're fixable.
One item that I think could use some discussion is where to put the new
amvalidate functions.  I don't especially like your choice to drop them
into nbtree.c, gist.c, etc, for a couple of reasons:

1. These aren't really at the same semantic level as functions like
btinsert or btgettuple; they're not part of the implementation of an
index, and indeed are *users* of indexes (at least of the catalog
indexes).

2. This approach substantially bloats the #include lists for the
relevant files, which again is a token of the validate functions not
belonging where they were put.

3. There's probably room to share code across the different validators;
but this design isn't very amenable to that.

A comparison point worth noting is that the amcostestimate functions
are in more or less the same boat: they aren't part of the index
implementation in any meaningful way, but are really part of the
planner instead.  Those are all in selfuncs.c, not under backend/access
at all.

There are a couple of things we could do instead:

* Put each amvalidate function into its own file (but probably keep it
in the same directory as now).  This is a reasonable response to
points #1 and #2 but isn't very much help for #3.

* Collect the amvalidate functions into one file, which then leaves
us wondering where to put that; surely not under any one AM's directory.
A new file in src/backend/access/index/ is one plausible solution.
This file would also be a reasonable place to put the amvalidate()
dispatch function itself.

I'm somewhat leaning to the second choice, but perhaps someone has
a better idea, or an argument against doing that.

regards, tom lane




Re: [HACKERS] extend pgbench expressions with functions

2016-01-16 Thread Fabien COELHO


Hello Michaël,


+   uniformly-distributed random integer in [lb,ub]



Nitpick: when defining an interval like that, you may want to add a
space after the comma.


Why not.


+  /* beware that the list is reverse in make_func */
s/reverse/reversed/?


Indeed.


+
#ifdef DEBUG
Some noise.


Ok.


With this example:
\set cid debug(sqrt(-1))
I get that:
debug(script=0,command=1): double nan
An error would be more logical, no?


If "sqrt(-1)" as a double is NaN for the computer, I'm fine with that. It 
makes the code simpler to just let the math library do its stuff and not 
bother.



You want to emulate with complex numbers instead?


Nope.


The basic operator functions also do not check for integer overflows.


This is a feature. I think that they should not check for overflow: as in 
C, this is just int64_t arithmetic "as is".


Moreover, it would be a new feature to add such a check if desirable, so 
it would belong in another patch; it is not related to adding functions.

The addition already overflows in the current code.

Finally, I can think of good reasons to use overflows deliberately, which 
I think argues against such a change.



Those three ones are just overflowing:
\set cid debug(9223372036854775807 + 1)
\set cid debug(-9223372036854775808 - 1)
\set cid debug(9223372036854775807 * 9223372036854775807)
debug(script=0,command=1): int -9223372036854775807
debug(script=0,command=2): int 9223372036854775807
debug(script=0,command=3): int 1


All these results are fine from my point of view.


And this one generates a core dump:
\set cid debug(-9223372036854775808 / -1)
Floating point exception: 8 (core dumped)


This one is funny, but it is a fact of int64_t life: you cannot divide 
INT64_MIN by -1 because the result cannot be represented as an int64_t.
This is probably hardcoded in the processor. I do not think it is worth 
doing anything about it for pgbench.



A more general comment: what about splitting all the execution
functions into a separate file exprexec.c? evaluateExpr (renamed as
execExpr) is the root function, but then we have a set of static
sub-functions for each node, like execExprFunc, execExprVar,
execExprConst, etc?


I do not see a strong case for renaming. The function part could be split 
because of the indentation, though.


This way we would save a bit of tab-indentation; this patch makes the 
new code lines grow past 80 characters because of all the 
switch/case stuff that gets more complicated.


I agree that the code is pretty ugly, but this is partly due to postgres 
indentation rules for switch statements, which are *NOT* reasonable, IMO.


I put the function evaluation in a function in the attached version.

--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 541d17b..0767b46 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -771,17 +771,21 @@ pgbench  options  dbname
   Sets variable varname to an integer value calculated
   from expression.
   The expression may contain integer constants such as 5432,
-  references to variables :variablename,
+  double constants such as 3.14159,
+  references to integer variables :variablename,
   and expressions composed of unary (-) or binary operators
-  (+, -, *, /, %)
-  with their usual associativity, and parentheses.
+  (+, -, *, /,
+  %) with their usual associativity, function calls and
+  parentheses.
+   shows the available
+  functions.
  
 
  
   Examples:
 
 \set ntellers 10 * :scale
-\set aid (1021 * :aid) % (10 * :scale) + 1
+\set aid (1021 * random(1, 10 * :scale)) % (10 * :scale) + 1
 
 

@@ -801,66 +805,35 @@ pgbench  options  dbname
  
 
  
-  By default, or when uniform is specified, all values in the
-  range are drawn with equal probability.  Specifying gaussian
-  or  exponential options modifies this behavior; each
-  requires a mandatory parameter which determines the precise shape of the
-  distribution.
- 
+  
+   
+
+ \setrandom n 1 10 or \setrandom n 1 10 uniform
+ is equivalent to \set n random(1, 10) and uses a uniform
+ distribution.
+
+   
 
- 
-  For a Gaussian distribution, the interval is mapped onto a standard
-  normal distribution (the classical bell-shaped Gaussian curve) truncated
-  at -parameter on the left and +parameter
-  on the right.
-  Values in the middle of the interval are more likely to be drawn.
-  To be precise, if PHI(x) is the cumulative distribution
-  function of the standard normal distribution, with mean mu
-  defined as (max + min) / 2.0, with
-
- f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
-(2.0 * PHI(parameter) - 1.0)
-
-  then value i between min and
-  max inclusive is drawn with probability:
-  f(i + 0.5) - f(i - 0.5).
-  Intuitively, 

[HACKERS] Developing and Building PostgreSQL in an IDE on Windows

2016-01-16 Thread Igal @ Lucee.org

Hi,

Java developer here with very basic knowledge of C and C++ and therefore 
some noob questions, so please bear with me (to further complicate 
things -- I am using Windows).


My goal is to be able to run PostgreSQL in an IDE like Eclipse CDT or 
Code::Blocks so that I can run it in debug mode and set breakpoints etc.


I downloaded the Source Release distribution REL_9_5_0, and ran 
./configure via MinGW-w64 -- the config.log file is available at 
https://gist.github.com/TwentyOneSolutions/f3ce3df3d8cc655db45d


Now I have a .project file, which I thought to be an Eclipse project 
file, but opening it in Eclipse doesn't do much.  Is it, indeed, an 
Eclipse project file?  If not, then what (cross-platform) IDE would open it?


When I import the directory into Eclipse as an Existing Project I get the 
directory structure in Eclipse, but trying to run "Build" results in 
"Info: Nothing to build for postgres-REL9_5_0", and trying to Run or 
Debug says "The selection cannot be launched, and there are no recent launches".


Any ideas on how I can make progress will be greatly appreciated.

Thanks!


--

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [HACKERS] pglogical - logical replication contrib module

2016-01-16 Thread Steve Singer

On 01/15/2016 12:07 PM, Petr Jelinek wrote:

That's bug, fixed.



Can you post an updated patch with whatever fixes you have made so far?


There are several statuses the table goes through; during the COPY 
it's in the synchronizing status, so the next logical step seemed to be 
synchronized. Maybe it should be renamed to 'replicating' instead, as 
that's what it actually means (the table has finished synchronization 
and is now replicating normally).




I agree 'replicating' is clearer





Re: [HACKERS] pglogical - logical replication contrib module

2016-01-16 Thread leo
I also ran into the same problem and am waiting for a bug fix.
Please update this thread if a new patch is published.

THX



--
View this message in context: 
http://postgresql.nabble.com/pglogical-logical-replication-contrib-module-tp5879755p5882564.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] count_nulls(VARIADIC "any")

2016-01-16 Thread Pavel Stehule
2016-01-12 17:27 GMT+01:00 Marko Tiikkaja :

> On 03/01/16 22:49, Jim Nasby wrote:
>
>> In the unit test, I'd personally prefer just building a table with the
>> test cases and the expected NULL/NOT NULL results, at least for all the
>> calls that would fit that paradigm. That should significantly reduce the
>> size of the test. Not a huge deal though...
>>
>
> I don't really see the point.  "The size of the test" doesn't seem like a
> worthwhile optimization target, unless the test scripts are somehow really
> unnecessarily large.
>
> Further, if you were developing code related to this, previously you could
> just copy-paste the defective test case in order to easily reproduce a
> problem.  But now suddenly you need a ton of different setup.
>
> I don't expect to really have a say in this, but I think the tests are now
> worse than they were before.
>

the form of the regression tests is not a hugely significant issue. Jim's
design is a little more transparent, Marko's is maybe a little more
practical. Both make sense in my opinion, and neither has a significant
advantage over the other.

Regards

Pavel


>
>
> .m
>