Re: [HACKERS] Bug in to_timestamp().

2016-06-25 Thread Steve Crawford
On Fri, Jun 24, 2016 at 3:43 PM, Joshua D. Drake 
wrote:

> On 06/24/2016 02:16 PM, Tom Lane wrote:
>
>> Robert Haas  writes:
>>
>>> On Fri, Jun 24, 2016 at 12:26 PM, Steve Crawford
>>>  wrote:
>>>
>>>> To me, 2016-02-30 is an invalid date that should generate an error.
>>>>
>>>
>>> I don't particularly disagree with that, but on the other hand, as
>>> mentioned earlier, to_timestamp() is here for Oracle compatibility,
>>> and if it doesn't do what Oracle's function does, then (1) it's not
>>> useful for people migrating from Oracle and (2) we're making up the
>>> behavior out of whole cloth.  I think things that we invent ourselves
>>> should reject stuff like this, but in a compatibility function we
>>> might want to, say, have compatibility.
>>>
>>
>> Agreed, mostly, but ... how far are we prepared to go on that?
>>
>
> We don't at all. Our goal has never been Oracle compatibility. Yes, we
> have "made allowances" but we aren't in a position that requires that
> anymore.
>
> Let's just do it right.
>
> Sincerely,
>
> JD
>
> /me speaking as someone who handles many, many migrations, none of which
> have ever said, "do we have Oracle compatibility available".
>
>
Tongue (partlyish) in cheek:

Developer: I need a database to support my project. Based on my research
this PostgreSQL thing is awesome so we will use it.

PostgreSQL: Welcome to our community!

Developer: I need to convert a string to a timestamp. This to_timestamp()
function I tried does not operate as I expect based on the documentation.

PostgreSQL: Ah, yes, grasshopper. You are young and do not understand the
Things That Must Not Be Documented. In time you will grow a gray ponytail
and/or white beard and learn the history and ways of every database that
came before. Only then will you come to understand how The Functions
*truly* behave.

Developer: Are you #@%!$ kidding me?

I will allow that there may be selected cases where a good argument could
be made for intentionally overly permissive behavior in the pursuit of
compatibility. But in those cases the documentation should specify clearly
and in detail the deviant behavior and reason for its existence.
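
For concreteness, the behavior at issue looks like this (a hedged
illustration; the result is as reported for the 9.x-era servers under
discussion, where the out-of-range day rolls over instead of erroring):

select to_timestamp('2016-02-30', 'YYYY-MM-DD');
-- reportedly returns 2016-03-01 00:00:00 in the session time zone,
-- rather than raising an error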

As one who selected PostgreSQL from the start, I am more interested in the
functions working correctly.

Cheers,
Steve


Re: [HACKERS] Better solution to final adjustment of combining Aggrefs

2016-06-25 Thread Tom Lane
I wrote:
> The patch is not quite finished: as noted in the XXX comment, it'd be
> a good idea to refactor apply_partialaggref_adjustment so that it can
> share code with this function, to ensure they produce identical
> representations of the lower partial Aggref.  But that will just make
> the patch longer, not any more interesting, so I left it out for now.

Here's an extended version that includes that refactoring.  I ended up
deciding that apply_partialaggref_adjustment was useless: stripped of the
actual Aggref-modification logic, it's little more than an iteration over
a pathtarget list, and the fact that examining the top-level nodes is
sufficient seems like an artifact of its one existing caller rather than
general-purpose functionality.  It also had no business being in tlist.c
AFAICS (the fact that putting it there required doubling the length of
tlist.c's #include list should have clued somebody that it didn't belong
there...).  So I moved the loop into make_partialgroup_input_target and
created a separate function for the Aggref-munging part.

While at that I noticed that make_partialgroup_input_target was
misleadingly named and documented: what it produces is the output target
for the partial aggregation step, not the input.  And for that matter
its argument is not what the rest of planner.c calls the final_target.
So this attempts to clean that up as well.

regards, tom lane

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 2372311..322a18d 100644
*** a/src/backend/optimizer/plan/planner.c
--- b/src/backend/optimizer/plan/planner.c
***
*** 23,28 
--- 23,29 
  #include "access/sysattr.h"
  #include "access/xact.h"
  #include "catalog/pg_constraint_fn.h"
+ #include "catalog/pg_type.h"
  #include "executor/executor.h"
  #include "executor/nodeAgg.h"
  #include "foreign/fdwapi.h"
*** static RelOptInfo *create_ordered_paths(
*** 140,147 
  	 double limit_tuples);
  static PathTarget *make_group_input_target(PlannerInfo *root,
  		PathTarget *final_target);
! static PathTarget *make_partialgroup_input_target(PlannerInfo *root,
! 			   PathTarget *final_target);
  static List *postprocess_setop_tlist(List *new_tlist, List *orig_tlist);
  static List *select_active_windows(PlannerInfo *root, WindowFuncLists *wflists);
  static PathTarget *make_window_input_target(PlannerInfo *root,
--- 141,148 
  	 double limit_tuples);
  static PathTarget *make_group_input_target(PlannerInfo *root,
  		PathTarget *final_target);
! static PathTarget *make_partial_grouping_target(PlannerInfo *root,
! 			 PathTarget *grouping_target);
  static List *postprocess_setop_tlist(List *new_tlist, List *orig_tlist);
  static List *select_active_windows(PlannerInfo *root, WindowFuncLists *wflists);
  static PathTarget *make_window_input_target(PlannerInfo *root,
*** create_grouping_paths(PlannerInfo *root,
*** 3456,3467 
  		Path	   *cheapest_partial_path = linitial(input_rel->partial_pathlist);
  
  		/*
! 		 * Build target list for partial aggregate paths. We cannot reuse the
! 		 * final target as Aggrefs must be set in partial mode, and we must
! 		 * also include Aggrefs from the HAVING clause in the target as these
! 		 * may not be present in the final target.
  		 */
! 		partial_grouping_target = make_partialgroup_input_target(root, target);
  
  		/* Estimate number of partial groups. */
  		dNumPartialGroups = get_number_of_groups(root,
--- 3457,3469 
  		Path	   *cheapest_partial_path = linitial(input_rel->partial_pathlist);
  
  		/*
! 		 * Build target list for partial aggregate paths.  These paths cannot
! 		 * just emit the same tlist as regular aggregate paths, because (1) we
! 		 * must include Vars and Aggrefs needed in HAVING, which might not
! 		 * appear in the result tlist, and (2) the Aggrefs must be set in
! 		 * partial mode.
  		 */
! 		partial_grouping_target = make_partial_grouping_target(root, target);
  
  		/* Estimate number of partial groups. */
  		dNumPartialGroups = get_number_of_groups(root,
*** make_group_input_target(PlannerInfo *roo
*** 4317,4362 
  }
  
  /*
!  * make_partialgroup_input_target
!  *	  Generate appropriate PathTarget for input for Partial Aggregate nodes.
   *
!  * Similar to make_group_input_target(), only we don't recurse into Aggrefs, as
!  * we need these to remain intact so that they can be found later in Combine
!  * Aggregate nodes during set_combineagg_references(). Vars will be still
!  * pulled out of non-Aggref nodes as these will still be required by the
!  * combine aggregate phase.
   *
!  * We also convert any Aggrefs which we do find and put them into partial mode,
!  * this adjusts the Aggref's return type so that the partially calculated
!  * aggregate value can make its way up the execution tree up to the Finalize
!  * Aggregate node.
   */
  static PathTarget *
! 

[HACKERS] Better solution to final adjustment of combining Aggrefs

2016-06-25 Thread Tom Lane
I complained earlier about the brute-force way that the partial
aggregation patch deals with constructing Aggrefs for the upper stage of
aggregation.  It copied-and-pasted several hundred lines of setrefs.c
so as to inject a nonstandard rule for comparing upper and lower Aggrefs.
That's bulky and will create a constant risk of omissions in future
maintenance.  I felt there had to be a better way to do that, and after
some thought I propose the attached not-quite-complete patch.

Basically what this does is to explicitly construct the representation of
an upper combining Aggref with a lower partial Aggref as input.  After
that, the regular set_upper_references pass can match the lower partial
Aggref to what it will find in the output tlist of the child plan node,
producing the desired result of a combining Aggref with a Var as input.
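
For orientation, a sketch of the plan shape being wired up here
(hypothetical table t; treat the EXPLAIN output as illustrative):

explain (costs off) select count(*) from t;
 Finalize Aggregate                 <- combining Aggref lives here
   ->  Gather
         ->  Partial Aggregate      <- lower partial Aggref in its tlist
               ->  Parallel Seq Scan on t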

The objection that could be raised to this is that it's slightly less
efficient than the original code, since it requires an additional
mutation pass over the tlist and qual of a combining Agg node.  I think
that is negligible, though, in comparison to all the other setup costs
of a parallel aggregation plan.

The patch is not quite finished: as noted in the XXX comment, it'd be
a good idea to refactor apply_partialaggref_adjustment so that it can
share code with this function, to ensure they produce identical
representations of the lower partial Aggref.  But that will just make
the patch longer, not any more interesting, so I left it out for now.

Another bit of follow-on work is to get rid of Aggref.aggoutputtype,
which I've also complained about and which is no longer particularly
necessary.  I'm inclined to do that in the same commit that adjusts
the partial-aggregation-control fields in Aggref as per the thread at
https://www.postgresql.org/message-id/29309.1466699...@sss.pgh.pa.us

Comments/objections?

regards, tom lane

diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 17edc27..3088af1 100644
*** a/src/backend/optimizer/plan/setrefs.c
--- b/src/backend/optimizer/plan/setrefs.c
*** static Node *fix_scan_expr_mutator(Node 
*** 104,111 
  static bool fix_scan_expr_walker(Node *node, fix_scan_expr_context *context);
  static void set_join_references(PlannerInfo *root, Join *join, int rtoffset);
  static void set_upper_references(PlannerInfo *root, Plan *plan, int rtoffset);
! static void set_combineagg_references(PlannerInfo *root, Plan *plan,
! 		  int rtoffset);
  static void set_dummy_tlist_references(Plan *plan, int rtoffset);
  static indexed_tlist *build_tlist_index(List *tlist);
  static Var *search_indexed_tlist_for_var(Var *var,
--- 104,110 
  static bool fix_scan_expr_walker(Node *node, fix_scan_expr_context *context);
  static void set_join_references(PlannerInfo *root, Join *join, int rtoffset);
  static void set_upper_references(PlannerInfo *root, Plan *plan, int rtoffset);
! static Node *convert_combining_aggrefs(Node *node, void *context);
  static void set_dummy_tlist_references(Plan *plan, int rtoffset);
  static indexed_tlist *build_tlist_index(List *tlist);
  static Var *search_indexed_tlist_for_var(Var *var,
*** static Var *search_indexed_tlist_for_sor
*** 119,126 
  	  Index sortgroupref,
  	  indexed_tlist *itlist,
  	  Index newvarno);
- static Var *search_indexed_tlist_for_partial_aggref(Aggref *aggref,
- 	  indexed_tlist *itlist, Index newvarno);
  static List *fix_join_expr(PlannerInfo *root,
  			  List *clauses,
  			  indexed_tlist *outer_itlist,
--- 118,123 
*** static Node *fix_upper_expr(PlannerInfo 
*** 135,147 
  			   int rtoffset);
  static Node *fix_upper_expr_mutator(Node *node,
  	   fix_upper_expr_context *context);
- static Node *fix_combine_agg_expr(PlannerInfo *root,
- 	 Node *node,
- 	 indexed_tlist *subplan_itlist,
- 	 Index newvarno,
- 	 int rtoffset);
- static Node *fix_combine_agg_expr_mutator(Node *node,
- 			 fix_upper_expr_context *context);
  static List *set_returning_clause_references(PlannerInfo *root,
  List *rlist,
  Plan *topplan,
--- 132,137 
*** static bool extract_query_dependencies_w
*** 171,190 
   * 3. We adjust Vars in upper plan nodes to refer to the outputs of their
   * subplans.
   *
!  * 4. PARAM_MULTIEXPR Params are replaced by regular PARAM_EXEC Params,
   * now that we have finished planning all MULTIEXPR subplans.
   *
!  * 5. We compute regproc OIDs for operators (ie, we look up the function
   * that implements each op).
   *
!  * 6. We create lists of specific objects that the plan depends on.
   * This will be used by plancache.c to drive invalidation of cached plans.
   * Relation dependencies are represented by OIDs, and everything else by
   * PlanInvalItems (this distinction is motivated by the shared-inval APIs).
   * Currently, relations and user-defined functions are the 

Re: [HACKERS] Rethinking representation of partial-aggregate steps

2016-06-25 Thread Tom Lane
I wrote:
> David Rowley  writes:
>> The attached implements this, with the exception that I didn't really
>> think AggPartialMode was the best name for the enum. I've named this
>> AggregateMode instead, as the aggregate is only partial in some cases.

> Hm.  We already have an AggStrategy (for hashed vs. grouped aggregation)
> so adding AggregateMode beside it seems somewhere between confusing and
> content-free.  And it's needlessly inconsistent with the spelling of the
> existing enum name.  I'm not wedded to "AggPartialMode" but I think
> we need some name that's a bit more specific than "AggregateMode".
> Suggestions anyone?

After a bit of thought, maybe AggDivision or AggSplit or something
along those lines?

regards, tom lane




Re: [HACKERS] Rethinking representation of partial-aggregate steps

2016-06-25 Thread Tom Lane
David Rowley  writes:
> The attached implements this, with the exception that I didn't really
> think AggPartialMode was the best name for the enum. I've named this
> AggregateMode instead, as the aggregate is only partial in some cases.

Hm.  We already have an AggStrategy (for hashed vs. grouped aggregation)
so adding AggregateMode beside it seems somewhere between confusing and
content-free.  And it's needlessly inconsistent with the spelling of the
existing enum name.  I'm not wedded to "AggPartialMode" but I think
we need some name that's a bit more specific than "AggregateMode".
Suggestions anyone?

regards, tom lane




Re: [HACKERS] Memory leak in Pl/Python

2016-06-25 Thread Tom Lane
Andrey Zhidenkov  writes:
> I see memory consumption in htop and pg_activity tools.

"top" can be pretty misleading if you don't know how to interpret its
output, specifically that you have to discount whatever it shows as
SHR space.  That just represents the amount of the shared memory block
that this process has touched so far in its lifetime; even if it appears
to be growing, it's not a leak.  That growth will stop eventually, once
the process has touched every available shared buffer.  RES minus SHR
is a fairer estimate of the process's own memory consumption.
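
To be sure you are watching the right process, the backend PID can be
fetched from the session itself (pg_backend_pid() is a standard function):

select pg_backend_pid();
-- gives the PID of this session's server process; compare that row's
-- RES and SHR columns in top/htop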

I tried to reduce your example to a self-contained test case, thus:

create extension if not exists plpythonu;
create table test (test text);
create or replace
function test() returns bigint as $$
plpy.execute("insert into test(test) values ('test')")
return 1
$$ language plpythonu;
do $$
begin
  for i in 1..1000 loop
    perform test();
  end loop;
end;
$$;

I do not see any significant leakage with this example.  There is some
memory growth, approximately 4 bytes per plpy.execute(), due to having to
keep track of a subtransaction XID for each uncommitted subtransaction.
That's not plpython's fault --- it would happen with any PL that executes
each SQL command as a separate subtransaction, which is probably all of
them other than plpgsql.  And it really ought to be negligible anyway in
any sane usage.

It's possible that you're seeing some other, larger memory consumption;
for instance, if there were triggers or foreign keys on the "test" table
then perhaps there would be an opportunity for leakage in those.
But without a self-contained test case or any indication of the rate
of leakage you're seeing, it's hard to guess about the problem.

regards, tom lane




Re: [HACKERS] initdb issue on 64-bit Windows - (Was: [pgsql-packagers] PG 9.6beta2 tarballs are ready)

2016-06-25 Thread Tom Lane
"Haroon ."  writes:
> And if I comment these out, i.e. setup_description, setup_privileges and
> 'setup_schema', it seems to progress well without any errors/crashes.

Presumably, what you've done there is remove every single join query
from the post-bootstrap scripts.  That isn't particularly useful in
itself, but it does suggest that you would be able to fire up a
normal session afterwards in which you could use a more conventional
debugging approach.  The problem can evidently be categorized as
"planning of any join query whatsoever crashes", so a test case
ought to be easy enough to come by.
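
Any join at all should serve as a probe --- for instance (a hedged
suggestion, not taken from the original report):

select count(*)
from pg_class c join pg_namespace n on c.relnamespace = n.oid;

If planning of every join really crashes, this should reproduce it under a
debugger.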

regards, tom lane




Re: [HACKERS] initdb issue on 64-bit Windows - (Was: [pgsql-packagers] PG 9.6beta2 tarballs are ready)

2016-06-25 Thread Tom Lane
Craig Ringer  writes:
> On 24 June 2016 at 21:34, Tom Lane  wrote:
>> TBH, this looks more like a compiler bug than anything else.

> I tend to agree. Especially since valgrind has no complaints on x64 linux,
> and neither does DrMemory for 32-bit builds with the same toolchain on the
> same Windows and same SDK.

If that is the explanation, I'm suspicious that it's got something to do
with the interaction of a static inline-able (single-call-site) function
and taking the address of a formal parameter.  We certainly have multiple
other instances of each thing, but maybe not both at the same place.
This leads to a couple of suggestions for dodging the problem:

1. Make get_foreign_key_join_selectivity non-static so that it doesn't
get inlined, along the lines of

   List *restrictlist);
-static Selectivity get_foreign_key_join_selectivity(PlannerInfo *root,
+extern Selectivity get_foreign_key_join_selectivity(PlannerInfo *root,
 					Relids outer_relids,
...
  */
-static Selectivity
+Selectivity
 get_foreign_key_join_selectivity(PlannerInfo *root,

2. Don't pass the original formal parameter to
get_foreign_key_join_selectivity, ie do something like

 static double
 calc_joinrel_size_estimate(PlannerInfo *root,
   RelOptInfo *outer_rel,
   RelOptInfo *inner_rel,
   double outer_rows,
   double inner_rows,
   SpecialJoinInfo *sjinfo,
-  List *restrictlist)
+  List *orig_restrictlist)
 {
 	JoinType	jointype = sjinfo->jointype;
+	List	   *restrictlist = orig_restrictlist;
 	Selectivity fkselec;
 	Selectivity jselec;
 	Selectivity pselec;

Obviously, if either of those things do make the problem go away, it's
a compiler bug.  If not, we'll need to dig deeper.

regards, tom lane




Re: [HACKERS] Rename max_parallel_degree?

2016-06-25 Thread Julien Rouhaud
On 25/06/2016 09:33, Amit Kapila wrote:
> On Wed, Jun 15, 2016 at 11:43 PM, Julien Rouhaud
>  wrote:
>>
>> Attached v4 implements the design you suggested, I hope everything's ok.
>>
> 
> Few review comments:
> 

Thanks for the review.

> 1.
> + if (parallel && (BackgroundWorkerData->parallel_register_count -
> +		BackgroundWorkerData->parallel_terminate_count) >=
> +		max_parallel_workers)
> + 	return false;
> 
> I think this check should be done after acquiring
> BackgroundWorkerLock, otherwise some other backend can simultaneously
> increment parallel_register_count.
> 

You're right, fixed.

> 2.
> 
> +/*
> + * This flag is used internally for parallel queries, to keep track of the
> + * number of active parallel workers and make sure we never launch more than
> + * max_parallel_workers parallel workers at the same time.  Third part
> + * background workers should not use this flag.
> + */
> +#define BGWORKER_IS_PARALLEL_WORKER	0x0004
> +
> 
> "Third part", do yo want to say Third party?
> 

Yes, sorry. Fixed.

> 3.
> static bool
> SanityCheckBackgroundWorker(BackgroundWorker *worker, int elevel)
> {
> ..
> }
> 
> Isn't it better to have a check in above function such that if
> bgw_flags is BGWORKER_IS_PARALLEL_WORKER and max_parallel_workers is
> zero, then ereport?  Also, consider if it is better to have some other
> checks related to BGWORKER_IS_PARALLEL_WORKER, like if caller sets
> BGWORKER_IS_PARALLEL_WORKER, then it must have database connection and
> shared memory access.
> 

I added these checks. I don't see another check to add right now.

> 4.
> +   xreflabel="max_parallel_workers">
> +   max_parallel_workers (integer)
> +   
> +max_parallel_workers configuration parameter
> +   
> +   
> +   
> +
> + Sets the maximum number of workers that can be launched at the same
> + time for the whole server.  This parameter allows the administrator to
> + reserve background worker slots for for third part dynamic background
> + workers.  The default value is 4.  Setting this value to 0 disables
> + parallel query execution.
> +
> +   
> +  
> 
> How about phrasing it as:
> Sets the maximum number of workers that the system can support for
> parallel queries.  The default value is 4.  Setting this value to 0
> disables parallel query execution.
> 

It's better, thanks.  Should we document somewhere the link between this
parameter and custom dynamic background workers, or is it pretty
self-explanatory?

> 5.
> max_parallel_workers_per_gather configuration parameter
>
>
>
> 
>  Sets the maximum number of workers that can be started by a single
>  Gather node.  Parallel workers are taken from the
>  pool of processes established by
>  .
> 
> I think it is better to change above in documentation to indicate that
> "pool of processes established by guc-max-parallel-workers".
> 

The real limit is the minimum of these two values; I think it's
important to be explicit here, since this pool is shared between
parallelism and custom bgworkers.

What about "pool of processes established by guc-max-worker-processes,
limited by guc-max-parallel-workers" (used in the attached v5 patch)?
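
To spell out that interplay (a hedged illustration; max_parallel_workers
exists only with the patch applied, and the numbers are invented):

-- max_worker_processes = 8             total bgworker slots (postmaster start)
-- max_parallel_workers = 4             slots usable by parallel queries
-- max_parallel_workers_per_gather = 2  cap for a single Gather node
--
-- One Gather launches at most min(2, 4, 8) = 2 workers; all concurrent
-- Gathers together use at most 4, leaving 4 slots for third-party dynamic
-- background workers.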

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a82bf06..6812b0d 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2009,7 +2009,8 @@ include_dir 'conf.d'
  Sets the maximum number of workers that can be started by a single
  Gather node.  Parallel workers are taken from the
  pool of processes established by
- .  Note that the requested
+ , limited by
+ .  Note that the requested
  number of workers may not actually be available at runtime.  If this
  occurs, the plan will run with fewer workers than expected, which may
  be inefficient.  The default value is 2.  Setting this value to 0
@@ -2018,6 +2019,21 @@ include_dir 'conf.d'

   
 
+  
+   max_parallel_workers (integer)
+   
+max_parallel_workers configuration parameter
+   
+   
+   
+
+ Sets the maximum number of workers that the system can support for
+ parallel queries.  The default value is 4.  Setting this value to 0
+ disables parallel query execution.
+
+   
+  
+
   
backend_flush_after (integer)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 088700e..ea7680b 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -452,7 +452,8 @@ LaunchParallelWorkers(ParallelContext *pcxt)
snprintf(worker.bgw_name, BGW_MAXLEN, "parallel worker for PID %d",
 MyProcPid);

Re: [HACKERS] bug in citext's upgrade script for parallel aggregates

2016-06-25 Thread Andreas Karlsson


On 06/24/2016 01:31 PM, David Rowley wrote:
> Seems there's a small error in the upgrade script for citext for 1.1
> to 1.2 which will cause min(citext) not to be parallel enabled.
>
> max(citext)'s combinefunc is first set incorrectly, but then updated
> to the correct value. I assume it was meant to set the combine
> function for min(citext) instead.
>
> Fix attached. I've assumed that because we're still in beta that we
> can get away with this fix rather than making a 1.3 version to fix the
> issue.


Yes, this is indeed a bug.
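
For reference, a hedged sketch of the kind of one-line change described
above: the first of the two catalog updates should target min(citext) with
citext_smaller rather than max(citext).  (This is a reconstruction, not the
attached fix; the script's exact text may differ.)

UPDATE pg_catalog.pg_aggregate
SET aggcombinefn = 'citext_smaller'::regproc
WHERE aggfnoid = 'min(citext)'::regprocedure;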

Andreas




Re: [HACKERS] Rename max_parallel_degree?

2016-06-25 Thread Amit Kapila
On Wed, Jun 15, 2016 at 11:43 PM, Julien Rouhaud
 wrote:
>
> Attached v4 implements the design you suggested, I hope everything's ok.
>

Few review comments:

1.
+ if (parallel && (BackgroundWorkerData->parallel_register_count -
+		BackgroundWorkerData->parallel_terminate_count) >=
+		max_parallel_workers)
+ 	return false;

I think this check should be done after acquiring
BackgroundWorkerLock, otherwise some other backend can simultaneously
increment parallel_register_count.

2.

+/*
+ * This flag is used internally for parallel queries, to keep track of the
+ * number of active parallel workers and make sure we never launch more than
+ * max_parallel_workers parallel workers at the same time.  Third part
+ * background workers should not use this flag.
+ */
+#define BGWORKER_IS_PARALLEL_WORKER	0x0004
+

"Third part", do yo want to say Third party?

3.
static bool
SanityCheckBackgroundWorker(BackgroundWorker *worker, int elevel)
{
..
}

Isn't it better to have a check in above function such that if
bgw_flags is BGWORKER_IS_PARALLEL_WORKER and max_parallel_workers is
zero, then ereport?  Also, consider if it is better to have some other
checks related to BGWORKER_IS_PARALLEL_WORKER, like if caller sets
BGWORKER_IS_PARALLEL_WORKER, then it must have database connection and
shared memory access.

4.
+  
+   max_parallel_workers (integer)
+   
+max_parallel_workers configuration parameter
+   
+   
+   
+
+ Sets the maximum number of workers that can be launched at the same
+ time for the whole server.  This parameter allows the administrator to
+ reserve background worker slots for for third part dynamic background
+ workers.  The default value is 4.  Setting this value to 0 disables
+ parallel query execution.
+
+   
+  

How about phrasing it as:
Sets the maximum number of workers that the system can support for
parallel queries.  The default value is 4.  Setting this value to 0
disables parallel query execution.

5.
max_parallel_workers_per_gather configuration parameter
   
   
   

 Sets the maximum number of workers that can be started by a single
 Gather node.  Parallel workers are taken from the
 pool of processes established by
 .

I think it is better to change above in documentation to indicate that
"pool of processes established by guc-max-parallel-workers".

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Memory leak in Pl/Python

2016-06-25 Thread Andrey Zhidenkov
I found a commit that fixes some memory leaks in 9.6 beta 2:
https://github.com/postgres/postgres/commit/8c75ad436f75fc629b61f601ba884c8f9313c9af#diff-4d0cb76412a1c4ee5d9c7f76ee489507

I'm interested in how Tom Lane checked that there are no more leaks in plpython.

On Sat, Jun 25, 2016 at 4:54 AM, Andrey Zhidenkov
 wrote:
> For the test I wrote a script in Python, which calls a test function via psycopg2:
>
> #!/usr/bin/env python2
>
> import psycopg2
>
> conn = psycopg2.connect('xxx')
>
> cursor = conn.cursor()
>
> cursor.execute('set application_name to \'TEST\'')
>
> for i in range(1, 100):
>     cursor.execute('select test()')
>     conn.commit()
>
>
> I see memory consumption in htop and pg_activity tools.
>
> On Sat, Jun 25, 2016 at 2:00 AM, David G. Johnston
>  wrote:
>> On Fri, Jun 24, 2016 at 6:41 PM, Andrey Zhidenkov
>>  wrote:
>>>
>>> For example, when I call this procedure
>>> many times,
>>
>>
>> Call how?  Specifically, how are you handling transactions in the calling
>> client?  And what/how are you measuring memory consumption?
>>
>> David J.
>>
>
>
>
> --
> Andrey Zhidenkov / Database developer
> +79265617190/ andrey.zhiden...@gmail.com
>
>
>
>



-- 
Andrey Zhidenkov / Database developer
+79265617190/ andrey.zhiden...@gmail.com





