Re: [GENERAL] Recursive Arrays 101

2015-11-04 Thread David Blomstrom
Thanks for that tip, Achilleas.

On Wed, Nov 4, 2015 at 7:53 AM, Rob Sargent  wrote:

> On 11/04/2015 03:03 AM, Achilleas Mantzios wrote:
>
> Sorry for being kind of late to the party (I was at 2015.PgConf.EU!!),
> and not having read
> most of the replies, what we have been successfully doing for this problem
> in our app
> is the following:
> parents int[] -- where parents stores the path from the node to the root
> of the tree
> and then have these indexes:
> btree (first(parents))
> btree (level(parents)) -- length
> btree (last(parents))
> gin (parents gin__int_ops) -- the most important
>
> This has been described as the "genealogical tree" approach, and works very
> well, IMHO much better
> than nested sets.
>
> Is there a more complete description of this approach available?  By the
> title one might assume it could be applied to populations as opposed to
> phylogeny (the OP's use case).  Does it deal with consanguinity?  Does it
> perform well going "up" the tree (which is of course branched at every
> level)?
>



-- 
David Blomstrom
Writer & Web Designer (Mac, M$ & Linux)
www.geobop.org


Re: [GENERAL] Deadlock detected after pg_repack receives SIGINT

2015-11-04 Thread Michael Paquier
On Wed, Nov 4, 2015 at 10:16 PM, Jiří Hlinka  wrote:

> I'm on pg_repack 1.3.2 (latest stable, no devel version available to check
> if it is already fixed).
>
> Michael: your memories are fresh and clear :-), yes, it is part of a
> cleanup rollback. The problem is that the pgrepack_drop call runs this
> statement:
> DROP TABLE IF EXISTS repack.log_%u CASCADE
>
> ...and it collides with the trigger that inserts new tuples from
> [frequently_updated_table] into repack.log_[OID of
> frequently_updated_table], created by this statement:
>
> SELECT 'CREATE TRIGGER z_repack_trigger'
> ' BEFORE INSERT OR DELETE OR UPDATE ON ' || repack.oid2text($1) || ' FOR
> EACH ROW EXECUTE PROCEDURE repack.repack_trigger(' || '''INSERT INTO
> repack.log_' || $1 || '(pk, row) VALUES(' || ' CASE WHEN $1 IS NULL THEN
> NULL ELSE (ROW($1.' || repack.get_index_columns($2, ', $1.') ||
> ')::repack.pk_' || $1 || ') END, $2)'')';
>
> As these two actions are both run by pg_repack, it seems like a bug to me,
> as pg_repack should not be able to deadlock itself - but I'm not 100% sure
> about this part...
>

Is the trigger activity moving on, or is it waiting for a lock as well? It
sounds like pg_repack is waiting for the transaction running this trigger to
finish before it can drop the trigger and this relation safely. I guess that
you are running large UPDATE queries, right? It really looks like you should
give pg_repack more room to do its work.
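
For reference, one quick way to check that from another session is a catalog
query along these lines (a sketch against the 9.4 catalogs, where
pg_stat_activity still has the boolean "waiting" column; the 'repack' filter
is only illustrative):

-- show lock-waiting backends, plus anything repack-related
SELECT pid, waiting, state, query
FROM pg_stat_activity
WHERE waiting
   OR query ILIKE '%repack%';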
-- 
Michael


Re: [GENERAL] pg_archivecleanup not deleting anything?

2015-11-04 Thread Michael Paquier
On Wed, Nov 4, 2015 at 7:16 PM, Albe Laurenz  wrote:
> Michael Paquier wrote:
>>>> The docs mention that "all WAL files" preceding a given point are
>>>> removed, personally I understand that as "all 16MB-size segments shall
>>>> die", hence excluding backup and history files from the stack. But one
>>>> may understand that "WAL files" means everything in pg_xlog, so as
>>>> something that includes backup and history files. Perhaps we would
>>>> gain in clarity by saying "WAL file segments, including .partial
>>>> segments" in the docs, and not just "WAL files". Thoughts?
>>>
>>> It might help to add:
>>> "Timeline history files and backup history files are not deleted."
>>
>> Something along those lines?
>
>> +  
>> +   WAL file segments and WAL file segments with .partial
>> +   are deleted, while timeline history files and backup history files are 
>> not.
>> +  
>
> "WAL file segments with .partial" sounds strange.
> What about "WAL file segments (including partial ones) are deleted, while 
> ..."?

Does that look better?
-- 
Michael
diff --git a/doc/src/sgml/ref/pgarchivecleanup.sgml b/doc/src/sgml/ref/pgarchivecleanup.sgml
index 60a7fc4..c375a88 100644
--- a/doc/src/sgml/ref/pgarchivecleanup.sgml
+++ b/doc/src/sgml/ref/pgarchivecleanup.sgml
@@ -80,6 +80,10 @@ pg_archivecleanup:  removing file "archive/00010037000E"
archivelocation is a directory readable and writable by the
server-owning user.
   
+  
+   WAL file segments (including partial ones) are deleted, while timeline
+   history files and backup history files are not.
+  
  
 
  



Re: [GENERAL] Deadlock detected after pg_repack receives SIGINT

2015-11-04 Thread Jiří Hlinka
I'll check if the trigger activity was moving on or waiting for a lock from
logs.

I'm doing simple UPDATEs, INSERTs and DELETEs on this table, but the
frequency of these DMLs is _very_ high (it is a queue table used for a
one-way, selective [just part of the data is replicated] replication of
queries between two instances of the database, let's say from the main DB to
a data warehouse DB; part of the DML queries on the main DB's tables is
therefore "copied" by a trigger to this queue table, which is then used as a
source for replicating changes into the data warehouse DB) - this is the
reason why the table needs VACUUM FULL, or better, a pg_repack run, at least
twice a day.

My opinion is that pg_repack should guarantee consistent, deadlock-free
behaviour via a proper locking policy - whether the table is frequently
updated or not - and the "worst" case that could appear is that pg_repack
waits for a very long time (which is the reason for the timeout usage in the
script: to stop pg_repack if it is unable to acquire the lock it is waiting
for). I know this is hard to achieve, and I'm not sure this "clean"
behaviour is even possible, but if, for example, a deadlock happens, it
should be possible to fix it as a bug and make pg_repack even better than it
is now.

Maybe I could lower the impact of the problem by checking inside the script
whether, after the timeout expires, pg_repack is still waiting in "Waiting
for X transactions to finish" (= safe to SIGINT pg_repack) or whether it is
already in the process of repacking the table - and in that case not stop
it. But there still remains the fact that there may be a bug, which I could
and should report to the developers.

Thanks Michael,
Jiri

On Thu, Nov 5, 2015 at 2:57 AM, Michael Paquier 
wrote:

>
>
> On Wed, Nov 4, 2015 at 10:16 PM, Jiří Hlinka 
> wrote:
>
>> I'm on pg_repack 1.3.2 (latest stable, no devel version available to check
>> if it is already fixed).
>>
>> Michael: your memories are fresh and clear :-), yes, it is part of a
>> cleanup rollback. The problem is that the pgrepack_drop call runs this
>> statement:
>> DROP TABLE IF EXISTS repack.log_%u CASCADE
>>
>> ...and it collides with the trigger that inserts new tuples from
>> [frequently_updated_table] into repack.log_[OID of
>> frequently_updated_table], created by this statement:
>>
>> SELECT 'CREATE TRIGGER z_repack_trigger'
>> ' BEFORE INSERT OR DELETE OR UPDATE ON ' || repack.oid2text($1) || ' FOR
>> EACH ROW EXECUTE PROCEDURE repack.repack_trigger(' || '''INSERT INTO
>> repack.log_' || $1 || '(pk, row) VALUES(' || ' CASE WHEN $1 IS NULL THEN
>> NULL ELSE (ROW($1.' || repack.get_index_columns($2, ', $1.') ||
>> ')::repack.pk_' || $1 || ') END, $2)'')';
>>
>> As these two actions are both run by pg_repack, it seems like a bug to me,
>> as pg_repack should not be able to deadlock itself - but I'm not 100% sure
>> about this part...
>>
>
> Is the trigger activity moving on, or is it waiting for a lock as well? It
> sounds like pg_repack is waiting for the transaction running this trigger
> to finish before it can drop the trigger and this relation safely. I guess
> that you are running large UPDATE queries, right? It really looks like you
> should give pg_repack more room to do its work.
> --
> Michael
>



-- 
Bc. Jiří Hlinka
Tel.: 725 315 263


Re: [GENERAL] Drop or alter column under load give ERROR #42804 structure of query does not match function result type:

2015-11-04 Thread Victor Blomqvist
On Wed, Nov 4, 2015 at 1:31 AM, Tom Lane  wrote:

> Victor Blomqvist  writes:
> > In case any of you are interested in recreating this problem, I today had
> > the time to create a short example that reproduces the error every time I
> > try.
>
> Hmm.  If you just do that serially:
>
> regression=# select * from select_a() ;
>  id | x
> +---
> (0 rows)
>
> regression=# alter table a add column y text;
> ALTER TABLE
> regression=# select * from select_a() ;
>  id | x | y
> +---+---
> (0 rows)
>
> regression=# alter table a drop column y;
> ALTER TABLE
> regression=# select * from select_a() ;
>  id | x
> +---
> (0 rows)
>
> So actually, we *are* tracking the change of table rowtype, both at the
> level of the query inside the function and at the level of the function
> result.  The problem is that the instant at which the result rowtype of
> the function is determined (while parsing the outer query) is different
> from the instant at which the inner query's result rowtype is determined.
>
> I'm not really sure that there's anything we can, or even should, try
> to do about this.  There would be a whole lot of action-at-a-distance
> involved and it would be likely to make some other use-cases worse.
>
> A possible user-level answer if you need to make an application robust
> against this sort of thing is to take out a low-grade lock on the
> table that's determining the function's result type:
>
> begin;
> lock table a in access share mode;
> select * from select_a();
> commit;
>
> Holding the table lock will prevent any other transactions from altering
> the table's rowtype while this transaction runs.
>
> regards, tom lane
>

OK, then I won't hope for a fix in a future Postgres version.

Given this problem it seems like it's generally a bad idea to ever ALTER
anything that is returned from a function, unless you want to add a lock
around the function call (which gets a bit impractical if you have many
functions, especially if they are nested). I wonder if it might be good to
mention this in the docs about the different RETURNs? On the other hand,
maybe it only affects a very limited number of users...
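
For reference, a minimal setup consistent with Tom's session above (a
sketch, assuming select_a() simply returns the table's rowtype; the real
function may differ):

CREATE TABLE a (id int, x int);

-- a set-returning function whose result type is tied to table a's rowtype
CREATE FUNCTION select_a() RETURNS SETOF a AS
$$ SELECT * FROM a $$ LANGUAGE sql;

-- wrapping the call as Tom suggests keeps the rowtype stable while the
-- transaction runs:
BEGIN;
LOCK TABLE a IN ACCESS SHARE MODE;
SELECT * FROM select_a();
COMMIT;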

Thanks for the input so far!
/Victor


Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Bertrand Roos

On 04/11/2015 14:45, Adrian Klaver wrote:

On 11/04/2015 05:32 AM, Bertrand Roos wrote:

Hello,

I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
I have the following configuration (default configuration):
track_counts = on
autovacuum = on
log_autovacuum_min_duration = -1
autovacuum_max_workers = 3
autovacuum_naptime = 300s
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.2
autovacuum_freeze_max_age = 200000000
autovacuum_multixact_freeze_max_age = 400000000
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1

With this configuration, I can observe that some tables are
auto-analysed, but some others are not, even when there are millions of
insert operations on an initially empty table (all tables are in cluster
mode). In fact it seems that tables with update operations are the only
ones that are auto-analysed.
I'm quite surprised, because the documentation says that the daemon checks
the count of insert, update and delete operations.
What could the reason be? Why aren't tables which have only update
operations analysed?
Are update operations really taken into account?


Somewhere you switched actions. You start by saying updated tables are 
being analyzed and end by saying they are not, or are the last 
sentences supposed to refer to inserts?




Thanks,

Bertrand







Yes, the last sentences referred to inserts.
To make it clear: tables with all kinds of operations are auto-analysed.
Tables with only insert operations are never auto-analysed.




Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Adrian Klaver

On 11/04/2015 06:35 AM, Bertrand Roos wrote:

On 04/11/2015 14:45, Adrian Klaver wrote:

On 11/04/2015 05:32 AM, Bertrand Roos wrote:

Hello,

I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
I have the following configuration (default configuration):
track_counts = on
autovacuum = on
log_autovacuum_min_duration = -1
autovacuum_max_workers = 3
autovacuum_naptime = 300s
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.2
autovacuum_freeze_max_age = 200000000
autovacuum_multixact_freeze_max_age = 400000000
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1

With this configuration, I can observe that some tables are
auto-analysed, but some others are not, even when there are millions of
insert operations on an initially empty table (all tables are in cluster
mode). In fact it seems that tables with update operations are the only
ones that are auto-analysed.
I'm quite surprised, because the documentation says that the daemon checks
the count of insert, update and delete operations.
What could the reason be? Why aren't tables which have only update
operations analysed?
Are update operations really taken into account?


Somewhere you switched actions. You start by saying updated tables are
being analyzed and end by saying they are not, or are the last
sentences supposed to refer to inserts?



Thanks,

Bertrand







Yes, the last sentences referred to inserts.
To make it clear: tables with all kinds of operations are auto-analysed.
Tables with only insert operations are never auto-analysed.


How are you determining this?






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] MinGW-W64 compile error

2015-11-04 Thread Tom Lane
"=?UTF-8?Q?Leonardo_M._Ram=c3=a9?="  writes:
> El 04/11/15 a las 00:05, Tom Lane escribió:
>> Why is it invoking g++ and not gcc?

> I don't know. I just installed MinGW, then ran ./configure; maybe there's
> an error in the configure script?

I looked at the configure script carefully, and so far as I can see,
it will only choose compilers named "gcc" or "cc" of its own accord.
Choosing g++ would only be possible if you have CC = g++ in your
environment.

regards, tom lane




Re: [GENERAL] Query regarding

2015-11-04 Thread dinesh kumar
Hi,

On Wed, Nov 4, 2015 at 1:21 AM, M Tarkeshwar Rao <
m.tarkeshwar@ericsson.com> wrote:

> Hi all,
>
> We have started converting some Oracle SQL scripts to Postgres, but are
> facing some issues creating tables.
>
> I have some common doubts about the create table script, i.e.:
>
> ALTER TABLE employee
> DROP PRIMARY KEY CASCADE;
>
> DROP employee CASCADE CONSTRAINTS;
>
> CREATE TABLE employee
> (
>   LABEL_IMP  VARCHAR2(50 BYTE)
> )
> TABLESPACE DATA_TB
> PCTUSED    0
> PCTFREE    10
>

We need to use FILLFACTOR, which is the opposite setting of PCTFREE.

> INITRANS   1
> MAXTRANS   255
> STORAGE(
> INITIAL  5M
> NEXT 5M
> MINEXTENTS   1
> MAXEXTENTS   UNLIMITED
> PCTINCREASE  0
> BUFFER_POOL  DEFAULT
>    )
> LOGGING
>

The default is LOGGED. We can specify UNLOGGED if we do not want the
table's changes written to WAL.


> NOCOMPRESS
>

Postgres supports column-level storage settings:

SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }
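
For example (illustrative only - the column name is taken from the script
above, and whether EXTERNAL helps depends on the data):

-- keep the column uncompressed and allow out-of-line storage, roughly in
-- the spirit of Oracle's NOCOMPRESS:
ALTER TABLE employee ALTER COLUMN label_imp SET STORAGE EXTERNAL;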

> NOCACHE
>



> NOPARALLEL
> MONITORING;
>
>
> I am trying to find replacements for the above keywords highlighted in BOLD
> in Postgres, but am unable to find them on Google.
> Could you please let us know whether these parameters are managed internally
> in Postgres, or what the alternatives are.
>
> Regards,
> Tarkeshwar
>
>



-- 

Regards,
Dinesh
manojadinesh.blogspot.com


Re: [GENERAL] Deadlock detected after pg_repack receives SIGINT

2015-11-04 Thread Jiří Hlinka
Thanks Jim and Michael for comments.

I'm on pg_repack 1.3.2 (latest stable, no devel version available to check
if it is already fixed).

Michael: your memories are fresh and clear :-), yes, it is part of a
cleanup rollback. The problem is that the pgrepack_drop call runs this
statement:
DROP TABLE IF EXISTS repack.log_%u CASCADE

...and it collides with the trigger that inserts new tuples from
[frequently_updated_table] into repack.log_[OID of
frequently_updated_table], created by this statement:

SELECT 'CREATE TRIGGER z_repack_trigger'
' BEFORE INSERT OR DELETE OR UPDATE ON ' || repack.oid2text($1) || ' FOR
EACH ROW EXECUTE PROCEDURE repack.repack_trigger(' || '''INSERT INTO
repack.log_' || $1 || '(pk, row) VALUES(' || ' CASE WHEN $1 IS NULL THEN
NULL ELSE (ROW($1.' || repack.get_index_columns($2, ', $1.') ||
')::repack.pk_' || $1 || ') END, $2)'')';

As these two actions are both run by pg_repack, it seems like a bug to me,
as pg_repack should not be able to deadlock itself - but I'm not 100% sure
about this part...


On Wed, Nov 4, 2015 at 3:48 AM, Jim Nasby  wrote:

> On 11/3/15 7:44 AM, Michael Paquier wrote:
>
>> I doubt there is anything involving Postgres here. It seems that some
>> process is still holding a lock on a relation that is being dropped,
>> caused by a race condition in pg_repack code.
>>
>>> PS: I was trying a mailing list of pg_repack
>>> (http://lists.pgfoundry.org/pipermail/reorg-genera) and github without any
>>> notice. If there is a better place to ask, tell me, please.
>>>
>> I guess you should give those folks time to answer. If I were you I'd
>> begin first by giving the repack operation more time to complete.
>>
>
> Make sure you're on the latest version too. There were some recent commits
> aimed at fixing some race conditions.
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>



-- 
Bc. Jiří Hlinka
Tel.: 725 315 263


Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Bill Moran
On Wed, 4 Nov 2015 14:32:37 +0100
Bertrand Roos  wrote:
> 
> I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
> I have the following configuration (default configuration):
> track_counts = on
> autovacuum = on
> log_autovacuum_min_duration = -1
> autovacuum_max_workers = 3
> autovacuum_naptime = 300s
> autovacuum_vacuum_threshold = 50
> autovacuum_analyze_threshold = 50
> autovacuum_vacuum_scale_factor = 0.2
> autovacuum_analyze_scale_factor = 0.2
> autovacuum_freeze_max_age = 200000000
> autovacuum_multixact_freeze_max_age = 400000000
> autovacuum_vacuum_cost_delay = 20ms
> autovacuum_vacuum_cost_limit = -1
> 
> With this configuration, I can observe that some tables are
> auto-analysed, but some others are not, even when there are millions of
> insert operations on an initially empty table (all tables are in cluster
> mode). In fact it seems that tables with update operations are the only
> ones that are auto-analysed.
> I'm quite surprised, because the documentation says that the daemon checks
> the count of insert, update and delete operations.
> What could the reason be? Why aren't tables which have only update
> operations analysed?
> Are update operations really taken into account?

Given that autoanalyze is pretty critical to the way the system functions,
it's unlikely that it just doesn't work (someone else would have noticed).

A more likely scenario is that you've found some extremely obscure edge
case. If that's the case, you're going to have to give very specific
details as to how you're testing it before anyone is likely to be able
to help you.

I get the impression that you're somewhat new to Postgres, in which case
it's very likely that the problem is that you're not testing the situation
correctly. In that case, we're going to need specific details on how you're
observing that tables are or are not being analysed.

As a wild-guess theory: the process that does the analyze only wakes up
to check tables every 5 minutes (based on the config you show) ... so are
you doing the inserts then checking the table without leaving enough time
in between for the system to wake up and notice the change?

-- 
Bill Moran




[GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Bertrand Roos

Hello,

I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
I have the following configuration (default configuration):
track_counts = on
autovacuum = on
log_autovacuum_min_duration = -1
autovacuum_max_workers = 3
autovacuum_naptime = 300s
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.2
autovacuum_freeze_max_age = 200000000
autovacuum_multixact_freeze_max_age = 400000000
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1

With this configuration, I can observe that some tables are
auto-analysed, but some others are not, even when there are millions of
insert operations on an initially empty table (all tables are in cluster
mode). In fact it seems that tables with update operations are the only
ones that are auto-analysed.
I'm quite surprised, because the documentation says that the daemon checks
the count of insert, update and delete operations.
What could the reason be? Why aren't tables which have only update
operations analysed?

Are update operations really taken into account?

Thanks,

Bertrand





Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Adrian Klaver

On 11/04/2015 05:32 AM, Bertrand Roos wrote:

Hello,

I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
I have the following configuration (default configuration):
track_counts = on
autovacuum = on
log_autovacuum_min_duration = -1
autovacuum_max_workers = 3
autovacuum_naptime = 300s
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.2
autovacuum_freeze_max_age = 200000000
autovacuum_multixact_freeze_max_age = 400000000
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1

With this configuration, I can observe that some tables are
auto-analysed, but some others are not, even when there are millions of
insert operations on an initially empty table (all tables are in cluster
mode). In fact it seems that tables with update operations are the only
ones that are auto-analysed.
I'm quite surprised, because the documentation says that the daemon checks
the count of insert, update and delete operations.
What could the reason be? Why aren't tables which have only update
operations analysed?
Are update operations really taken into account?


Somewhere you switched actions. You start by saying updated tables are 
being analyzed and end by saying they are not, or are the last sentences 
supposed to refer to inserts?




Thanks,

Bertrand






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Recursive Arrays 101

2015-11-04 Thread Rob Sargent

On 11/04/2015 03:03 AM, Achilleas Mantzios wrote:
Sorry for being kind of late to the party (I was at 2015.PgConf.EU!!),
and not having read most of the replies, what we have been successfully
doing for this problem in our app is the following:

parents int[] -- where parents stores the path from the node to the
root of the tree

and then have these indexes:
btree (first(parents))
btree (level(parents)) -- length
btree (last(parents))
gin (parents gin__int_ops) -- the most important

This has been described as the "genealogical tree" approach, and works
very well, IMHO much better
than nested sets.

Is there a more complete description of this approach available?  By the 
title one might assume it could be applied to populations as opposed to 
phylogeny (the OP's use case).  Does it deal with consanguinity?  Does 
it perform well going "up" the tree (which is of course branched at 
every level)?


Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Bertrand Roos



On 04/11/2015 14:55, Bill Moran wrote:

On Wed, 4 Nov 2015 14:32:37 +0100
Bertrand Roos  wrote:

I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
I have the following configuration (default configuration):
track_counts = on
autovacuum = on
log_autovacuum_min_duration = -1
autovacuum_max_workers = 3
autovacuum_naptime = 300s
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.2
autovacuum_freeze_max_age = 200000000
autovacuum_multixact_freeze_max_age = 400000000
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1

With this configuration, I can observe that some tables are
auto-analysed, but some others are not, even when there are millions of
insert operations on an initially empty table (all tables are in cluster
mode). In fact it seems that tables with update operations are the only
ones that are auto-analysed.
I'm quite surprised, because the documentation says that the daemon checks
the count of insert, update and delete operations.
What could the reason be? Why aren't tables which have only update
operations analysed?
Are update operations really taken into account?

Given that autoanalyze is pretty critical to the way the system functions,
it's unlikely that it just doesn't work (someone else would have noticed).

A more likely scenario is that you've found some extremely obscure edge
case. If that's the case, you're going to have to give very specific
details as to how you're testing it before anyone is likely to be able
to help you.

I get the impression that you're somewhat new to Postgres, in which case
it's very likely that the problem is that you're not testing the situation
correctly. In that case, we're going to need specific details on how you're
observing that tables are or are not being analysed.

As a wild-guess theory: the process that does the analyze only wakes up
to check tables every 5 minutes (based on the config you show) ... so are
you doing the inserts then checking the table without leaving enough time
in between for the system to wake up and notice the change?


Thanks for your answer, Bill.
Indeed, I'm pretty new to Postgres and I don't exclude that I'm doing
something wrong. But I ran my test over more than a day, so it's not an
issue of autovacuum_naptime (I inserted 760 rows every 30 seconds for 36
hours).
I can't give all the details of this test because it is too complicated,
with triggers and partman (and your objective is not to solve other
people's configuration issues).


In fact, I asked the question because I have read on some forums that the
autovacuum daemon only counts dead tuples, so only update and delete
operations can cause the scheduling of auto-analyse.
If that's the case, it perfectly explains why my test case doesn't work.
On the other hand, though, the documentation says that delete, update and
insert operations are all counted.
Is it a known issue that insert operations are not counted for the
triggering of auto-analyse?


If it's not, I can try to reproduce this weird behaviour with a simpler 
test and give you all the details of the test.


Bertrand






Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Adrian Klaver

On 11/04/2015 07:43 AM, Bertrand Roos wrote:



On 04/11/2015 14:55, Bill Moran wrote:

On Wed, 4 Nov 2015 14:32:37 +0100
Bertrand Roos  wrote:

I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
I have the following configuration (default configuration):
track_counts = on
autovacuum = on
log_autovacuum_min_duration = -1
autovacuum_max_workers = 3
autovacuum_naptime = 300s
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.2
autovacuum_freeze_max_age = 200000000
autovacuum_multixact_freeze_max_age = 400000000
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1

With this configuration, I can observe that some tables are
auto-analysed, but some others are not, even when there are millions of
insert operations on an initially empty table (all tables are in cluster
mode). In fact it seems that tables with update operations are the only
ones that are auto-analysed.
I'm quite surprised, because the documentation says that the daemon checks
the count of insert, update and delete operations.
What could the reason be? Why aren't tables which have only update
operations analysed?
Are update operations really taken into account?

Given that autoanalyze is pretty critical to the way the system
functions,
it's unlikely that it just doesn't work (someone else would have
noticed).

A more likely scenario is that you've found some extremely obscure edge
case. If that's the case, you're going to have to give very specific
details as to how you're testing it before anyone is likely to be able
to help you.

I get the impression that you're somewhat new to Postgres, in which case
it's very likely that the problem is that you're not testing the
situation
correctly. In that case, we're going to need specific details on how
you're
observing that tables are or are not being analysed.

As a wild-guess theory: the process that does the analyze only wakes up
to check tables every 5 minutes (based on the config you show) ... so are
you doing the inserts then checking the table without leaving enough time
in between for the system to wake up and notice the change?


Thanks for your answer, Bill.
Indeed, I'm pretty new to Postgres and I don't exclude that I'm doing
something wrong. But I ran my test over more than a day, so it's not an
issue of autovacuum_naptime (I inserted 760 rows every 30 seconds for 36
hours).
I can't give all the details of this test because it is too complicated,
with triggers and partman (and your objective is not to solve other
people's configuration issues).

In fact, I asked the question because I have read on some forums that the
autovacuum daemon only counts dead tuples, so only update and delete
operations can cause the scheduling of auto-analyse.
If that's the case, it perfectly explains why my test case doesn't work.
On the other hand, though, the documentation says that delete, update and
insert operations are all counted.
Is it a known issue that insert operations are not counted for the
triggering of auto-analyse?


No, see below:

http://www.postgresql.org/docs/9.4/interactive/routine-vacuuming.html#AUTOVACUUM

23.1.3. Updating Planner Statistics

"The autovacuum daemon, if enabled, will automatically issue ANALYZE 
commands whenever the content of a table has changed sufficiently. 
However, administrators might prefer to rely on manually-scheduled 
ANALYZE operations, particularly if it is known that update activity on 
a table will not affect the statistics of "interesting" columns. The 
daemon schedules ANALYZE strictly as a function of the number of rows 
inserted or updated; it has no knowledge of whether that will lead to 
meaningful statistical changes."



Is the partman you refer to this?:

https://github.com/keithf4/pg_partman

Can you give an outline view of what you are doing and how you are 
determining the status of analyze?




If it's not, I can try to reproduce this weird behaviour with a simpler
test and give you all the details of the test.

Bertrand







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Tom Lane
Bertrand Roos  writes:
> In fact, I asked the question because I have read on some forums that the
> autovacuum daemon only counts dead tuples, so only update and delete
> operations can cause the scheduling of auto-analyse.

That might have been true years ago, don't remember at the moment.
But in modern PG versions auto-vacuum is driven by the estimated number
of dead tuples while auto-analyze is driven by the total count of
inserts+updates+deletes.  It's easy to show in a standalone experiment
that auto-analyze *will* run against a table that's only had inserts.
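
For instance, something along these lines (a sketch, assuming default
autovacuum settings; the table name is made up):

CREATE TABLE t_ins (id int);
INSERT INTO t_ins SELECT generate_series(1, 100000);
-- wait for autovacuum_naptime to elapse, then:
SELECT relname, n_tup_ins, last_autoanalyze, autoanalyze_count
FROM pg_stat_all_tables
WHERE relname = 't_ins';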

With the numbers you're showing, auto-analyze should trigger once the
table gets to 20% new tuples.  It would be interesting to see the
pg_stat_all_tables values for one of your problematic tables.

regards, tom lane




Re: [GENERAL] Auto-analyse on insert operations

2015-11-04 Thread Bill Moran
On Wed, 4 Nov 2015 16:43:57 +0100
Bertrand Roos  wrote:
> 
> On 04/11/2015 14:55, Bill Moran wrote:
> > On Wed, 4 Nov 2015 14:32:37 +0100
> > Bertrand Roos  wrote:
> >> I'm trying to configure the auto-analyse task with PostgreSQL 9.4.
> >> I have the following configuration (default configuration):
> >> track_counts = on
> >> autovacuum = on
> >> log_autovacuum_min_duration = -1
> >> autovacuum_max_workers = 3
> >> autovacuum_naptime = 300s
> >> autovacuum_vacuum_threshold = 50
> >> autovacuum_analyze_threshold = 50
> >> autovacuum_vacuum_scale_factor = 0.2
> >> autovacuum_analyze_scale_factor = 0.2
> >> autovacuum_freeze_max_age = 200000000
> >> autovacuum_multixact_freeze_max_age = 400000000
> >> autovacuum_vacuum_cost_delay = 20ms
> >> autovacuum_vacuum_cost_limit = -1
> >>
> >> With this configuration, I can observe that some tables are
> >> auto-analysed, but some others are not, even when there are millions of
> >> insert operations on an initially empty table (all tables are in
> >> cluster mode). In fact it seems that tables with update operations are
> >> the only ones that are auto-analysed.
> >> I'm quite surprised, because the documentation says that the daemon
> >> checks the count of insert, update and delete operations.
> >> What could the reason be? Why aren't tables which have only update
> >> operations analysed?
> >> Are update operations really taken into account?
> > Given that autoanalyze is pretty critical to the way the system functions,
> > it's unlikely that it just doesn't work (someone else would have noticed).
> >
> > A more likely scenario is that you've found some extremely obscure edge
> > case. If that's the case, you're going to have to give very specific
> > details as to how you're testing it before anyone is likely to be able
> > to help you.
> >
> > I get the impression that you're somewhat new to Postgres, in which case
> > it's very likely that the problem is that you're not testing the situation
> > correctly. In that case, we're going to need specific details on how you're
> > observing that tables are or are not being analysed.
> >
> > As a wild-guess theory: the process that does the analyze only wakes up
> > to check tables every 5 minutes (based on the config you show) ... so are
> > you doing the inserts then checking the table without leaving enough time
> > in between for the system to wake up and notice the change?
> >
> Thanks for your answer, Bill.
> Indeed, I'm pretty new to Postgres and I don't exclude that I'm doing
> something wrong. But I ran my test over more than a day, so it's not an
> issue of autovacuum_naptime (I inserted 760 rows every 30 seconds for 36
> hours).
> I can't give all the details of this test because it is too complicated,
> with triggers and partman (and your objective is not to solve other
> people's configuration issues).

Others have answered some of your other questions, so I'll just throw
out another possibility: have the per-table analyze settings been altered
on the table(s) that are behaving badly? See
http://www.postgresql.org/docs/9.4/static/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS
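
A quick way to spot such per-table overrides (a sketch; reloptions is NULL
for tables that have never been altered this way):

SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL;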

Attaching the output of
pg_dump -s -t $table_name -U postgres $database_name
will probably go a long way toward getting more targeted assistance.
(substitute the actual database name, and the name of a table that is
giving you trouble)

In addition, the output of
SELECT * FROM pg_stat_user_tables WHERE relname = '$table_name';
(Again, substitute an actual table name that's giving you trouble,
preferably the same table as from the pg_dump)

-- 
Bill Moran




Re: [GENERAL] MinGW-W64 compile error

2015-11-04 Thread Leonardo M . Ramé


On 04/11/15 00:05, Tom Lane wrote:

"=?UTF-8?Q?Leonardo_M._Ram=c3=a9?="  writes:

Hi, I'm trying to build the client library of PostgreSql 9.3.x using
this version of MinGW's gcc:
...
g++  -DFRONTEND -I../../src/include -I./src/include/port/win32
-DEXEC_BACKEND  "-I../../src/include/port/win32" -DBUILDING_DLL  -c -o
relpath.o relpath.c


Why is it invoking g++ and not gcc?


I don't know. I just installed MinGW, then ran ./configure; maybe there's an
error in the configure script?





relpath.c: In function 'int forkname_chars(const char*, ForkNumber*)':
relpath.c:55:15: error: invalid conversion from 'int' to 'ForkNumber'
[-fpermissive]
for (forkNum = 1; forkNum <= MAX_FORKNUM; forkNum++)
relpath.c:55:51: error: no 'operator++(int)' declared for postfix '++'
[-fpermissive]
for (forkNum = 1; forkNum <= MAX_FORKNUM; forkNum++)


The second of these definitely comes from trying to compile C code as C++,
and I think the first does too.

regards, tom lane









[GENERAL] Query regarding

2015-11-04 Thread M Tarkeshwar Rao
Hi all,

We have started converting some Oracle SQL scripts to Postgres, but are
facing some issues creating tables.

I have some common doubts about the create table script, i.e.:

ALTER TABLE employee
 DROP PRIMARY KEY CASCADE;

DROP employee CASCADE CONSTRAINTS;

CREATE TABLE employee
(
  LABEL_IMP  VARCHAR2(50 BYTE)
)
TABLESPACE DATA_TB
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE(
INITIAL  5M
NEXT 5M
MINEXTENTS   1
MAXEXTENTS   UNLIMITED
PCTINCREASE  0
BUFFER_POOL  DEFAULT
   )
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;


I am trying to find replacements for the above keywords highlighted in BOLD
in Postgres, but am unable to find them on Google.
Could you please let us know whether these parameters are managed internally
in Postgres, or what the alternatives are.

Regards,
Tarkeshwar



Re: [GENERAL] MinGW-W64 compile error

2015-11-04 Thread Leonardo M . Ramé


On 04/11/15 06:00, Leonardo M. Ramé wrote:


On 04/11/15 00:05, Tom Lane wrote:

"Leonardo M. Ramé"  writes:

Hi, I'm trying to build the client library of PostgreSql 9.3.x using
this version of MinGW's gcc:


Never mind - I deleted my MinGW installation, then installed msys and
MinGW-W64 again, pointed /etc/fstab at my MinGW/mingw32/bin directory
and voilà! It compiled without errors.


Regards,
Leonardo.




Re: [GENERAL] Recursive Arrays 101

2015-11-04 Thread Achilleas Mantzios

Sorry for being kind of late to the party (I was at 2015.PgConf.EU!!), and not
having read
most of the replies, what we have been successfully doing for this problem in
our app
is the following:
parents int[] -- where parents stores the path from the node to the root of the
tree
and then have these indexes:
btree (first(parents))
btree (level(parents)) -- length
btree (last(parents))
gin (parents gin__int_ops) -- the most important

This has been described as the "genealogical tree" approach, and works very
well, IMHO much better
than nested sets.
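
As a rough sketch of that layout (assuming the intarray extension; first,
level and last above stand in for trivial helpers over the path array, so
plain array expressions are used here instead):

CREATE EXTENSION IF NOT EXISTS intarray;

CREATE TABLE node (
    id      integer PRIMARY KEY,
    parents integer[] NOT NULL  -- path from this node up to the root
);

CREATE INDEX node_first_idx ON node ((parents[1]));
CREATE INDEX node_level_idx ON node ((array_length(parents, 1)));
CREATE INDEX node_last_idx  ON node ((parents[array_upper(parents, 1)]));
CREATE INDEX node_gin_idx   ON node USING gin (parents gin__int_ops);

-- e.g. every node whose path passes through node 42:
-- SELECT * FROM node WHERE parents @> ARRAY[42];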

On 27/10/2015 14:46, David G. Johnston wrote:

On Monday, October 26, 2015, John R Pierce wrote:

On 10/26/2015 7:44 PM, David G. Johnston wrote:

They both have their places.  It is usually quite difficult to automate and
version control the manual work that goes into using command line tools.


I hope you mean, it's difficult to automate and version control the
clickity-clicky work that goes into using GUI tools

automating shell scripts is trivial.   putting said shell scripts into 
version control is also trivial.


Yes, that is a typo on my part.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] pg_archivecleanup not deleting anything?

2015-11-04 Thread Albe Laurenz
Michael Paquier wrote:
>>> The docs mention that "all WAL files" preceding a given point are
>>> removed, personally I understand that as "all 16MB-size segments shall
>>> die", hence excluding backup and history files from the stack. But one
>>> may understand that "WAL files" means everything in pg_xlog, so as
>>> something that includes backup and history files. Perhaps we would
>>> gain in clarity by saying "WAL file segments, including .partial
>>> segments" in the docs, and not just "WAL files". Thoughts?
>>
>> It might help to add:
>> "Timeline history files and backup history files are not deleted."
> 
> Something along those lines?

> +  
> +   WAL file segments and WAL file segments with .partial
> +   are deleted, while timeline history files and backup history files are 
> not.
> +  

"WAL file segments with .partial" sounds strange.
What about "WAL file segments (including partial ones) are deleted, while ..."?

Yours,
Laurenz Albe



Re: [GENERAL] Query regarding

2015-11-04 Thread Guillaume Lelarge
On 4 Nov 2015 10:23 AM, "M Tarkeshwar Rao" 
wrote:
>
> Hi all,
>
> We have started converting some Oracle SQL scripts to Postgres, but are
facing some issues creating tables.
>
> I have some common doubts about the create table script, i.e.:
>
> ALTER TABLE employee
> DROP PRIMARY KEY CASCADE;
>
> DROP employee CASCADE CONSTRAINTS;
>
> CREATE TABLE employee
> (
>   LABEL_IMP  VARCHAR2(50 BYTE)
> )
> TABLESPACE DATA_TB
> PCTUSED    0
> PCTFREE    10
> INITRANS   1
> MAXTRANS   255
> STORAGE(
> INITIAL  5M
> NEXT 5M
> MINEXTENTS   1
> MAXEXTENTS   UNLIMITED
> PCTINCREASE  0
> BUFFER_POOL  DEFAULT
>)
> LOGGING
> NOCOMPRESS
> NOCACHE
> NOPARALLEL
> MONITORING;
>
>
> I am trying to find replacements for the above keywords highlighted in BOLD
in Postgres, but am unable to find them on Google.
> Could you please let us know whether these parameters are managed internally
in Postgres, or what the alternatives are.
>

Those are really specific to oracle. The only exception is pctfree. It's
called fillfactor in PostgreSQL and is the opposite of pctfree (as in if
pctfree is 10%, fillfactor is 90%).


Re: [GENERAL] Query regarding

2015-11-04 Thread Thomas Kellerer
M Tarkeshwar Rao wrote on 04.11.2015 at 10:21:
> Hi all,
>  
> We have started converting some Oracle SQL scripts to Postgres, but are
> facing some issues creating tables.
>
> I have some common doubts about the create table script, i.e.:
>  
> ALTER TABLE employee
> DROP PRIMARY KEY CASCADE;
>  
> DROP employee CASCADE CONSTRAINTS;
>  
> CREATE TABLE employee
> (
>   LABEL_IMP  VARCHAR2(50 BYTE)
> )
> TABLESPACE DATA_TB
> PCTUSED    0
> PCTFREE    10
> INITRANS   1
> MAXTRANS   255
> STORAGE(
> INITIAL  5M
> NEXT 5M
> MINEXTENTS   1
> MAXEXTENTS   UNLIMITED
> PCTINCREASE  0
> BUFFER_POOL  DEFAULT
>    )
> LOGGING
> NOCOMPRESS
> NOCACHE
> NOPARALLEL
> MONITORING;
>  
>  
> I am trying to find replacements for the above keywords highlighted in BOLD
> in Postgres, but am unable to find them on Google.
> Could you please let us know whether these parameters are managed internally
> in Postgres, or what the alternatives are.

There are only two options that have a match in Postgres: PCTFREE and 
TABLESPACE 

As all of them are using the default values in Oracle anyway, I wouldn't bother 
to translate them. Just remove everything.
If you really want to tune PCTFREE, you need to use fillfactor in Postgres - 
which is the same thing "the other way round". 
So it would be 90% in Postgres

You also need to change "VARCHAR2(50 BYTE)". 

In Postgres varchar lengths are always specified in characters, never in bytes. 
But as "Byte Semantics" is also the default in Oracle I assume replacing that 
with VARCHAR(50) in Postgres will work just fine. 

The first statement:

   ALTER TABLE employee DROP PRIMARY KEY CASCADE;

was useless in Oracle to begin with - if you drop the table afterwards (with 
CASCADE), 
there is no need to drop the PK first.

> DROP employee CASCADE CONSTRAINTS;

Assuming that the missing "TABLE" keyword is a copy & paste error, 
this translates to "DROP TABLE employee CASCADE" in Postgres.
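
Putting that together, a direct translation could look like this (a sketch -
the tablespace must already exist on the Postgres side, and fillfactor 90
mirrors PCTFREE 10):

DROP TABLE IF EXISTS employee CASCADE;

CREATE TABLE employee
(
  label_imp  varchar(50)
)
WITH (fillfactor = 90)
TABLESPACE data_tb;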



