Re: How to watch for schema changes

2018-07-11 Thread Adrian Klaver

On 07/11/2018 08:46 PM, Igor Korot wrote:

Hi, guys,


On Mon, Jul 9, 2018 at 5:38 PM, Adrian Klaver  wrote:

On 07/09/2018 01:49 PM, Igor Korot wrote:


Hi, Adrian

On Tue, Jul 3, 2018 at 1:24 PM, Adrian Klaver 
wrote:


On 07/03/2018 11:15 AM, Igor Korot wrote:



Adrian,

On Tue, Jul 3, 2018 at 12:32 PM, Adrian Klaver
 wrote:



On 07/03/2018 10:21 AM, Igor Korot wrote:




Hi, ALL,
Is there any trigger or some other means I can do on the server
which will watch for CREATE/ALTER/DROP TABLE command and after
successful
execution of those will issue a NOTIFY statement?





https://www.postgresql.org/docs/10/static/event-triggers.html




According to the documentation the lowest version it supports is 9.3.
Anything prior to that?

I'm working with OS X 10.8 and it has 9.1 installed.



9.1 went EOL almost two years ago. The oldest supported version is 9.3,
though it will go EOL this September:

https://www.postgresql.org/support/versioning/

Are you forced to work with 9.1 or can you use something from here:

https://www.postgresql.org/download/macosx/

to get a newer version? FYI that will be a major upgrade so will require
a
dump/restore or use of pg_upgrade.



Just a thought...
Is it possible to create a trigger for a system table? Or this
operation is restricted



Easy enough to test. As postgres super user:

test_(postgres)# create trigger info_test before insert on pg_class execute
procedure ts_update();

ERROR:  permission denied: "pg_class" is a system catalog


But

draft=# CREATE TRIGGER info_test AFTER INSERT ON
information_schema.tables EXECUTE PROCEDURE test();
ERROR:  function test() does not exist

So it looks like this should be possible?


No, see:

https://www.postgresql.org/docs/10/static/sql-createtrigger.html

AFTER triggers on views are STATEMENT level only.

https://www.postgresql.org/docs/10/static/plpgsql-trigger.html

"NEW

Data type RECORD; variable holding the new database row for 
INSERT/UPDATE operations in row-level triggers. This variable is 
unassigned in statement-level triggers and for DELETE operations.

OLD

Data type RECORD; variable holding the old database row for 
UPDATE/DELETE operations in row-level triggers. This variable is 
unassigned in statement-level triggers and for INSERT operations.

"

So you won't know what was INSERTed in the row.



Thank you.





for when the server is actually being set-up?

Successful "CREATE TABLE..." statement creates a row inside the
information_schema.tables
so if I can create a trigger after this record is inserted or deleted
that should be fine.

Thank you.





And a second question - how do I work with it?
I presume that function will have to be compiled in its own module
(either dll, so or dylib).
But then from the libpq interface how do I call it?




It can use functions written in PL languages. See below:

https://www.postgresql.org/docs/10/static/sql-createeventtrigger.html

for an example written in plpgsql.
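
A minimal sketch of that approach (9.3 or later only, so not available on 9.1):
an event trigger function that issues a NOTIFY whenever a matching DDL command
completes. The trigger, function and channel names here are illustrative.

-- requires PostgreSQL 9.3+
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
    -- tg_tag holds the command tag, e.g. 'CREATE TABLE'
    PERFORM pg_notify('ddl_changes', tg_tag);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER watch_ddl
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE', 'ALTER TABLE', 'DROP TABLE')
    EXECUTE PROCEDURE notify_ddl();

-- a libpq client would then simply run:
LISTEN ddl_changes;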







Thank you.





Thank you.




--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-07-11 Thread Igor Korot
Hi, guys,


On Mon, Jul 9, 2018 at 5:38 PM, Adrian Klaver  wrote:
> On 07/09/2018 01:49 PM, Igor Korot wrote:
>>
>> Hi, Adrian
>>
>> On Tue, Jul 3, 2018 at 1:24 PM, Adrian Klaver 
>> wrote:
>>>
>>> On 07/03/2018 11:15 AM, Igor Korot wrote:


 Adrian,

 On Tue, Jul 3, 2018 at 12:32 PM, Adrian Klaver
  wrote:
>
>
> On 07/03/2018 10:21 AM, Igor Korot wrote:
>>
>>
>>
>> Hi, ALL,
>> Is there any trigger or some other means I can do on the server
>> which will watch for CREATE/ALTER/DROP TABLE command and after
>> successful
>> execution of those will issue a NOTIFY statement?
>
>
>
>
> https://www.postgresql.org/docs/10/static/event-triggers.html



 According to the documentation the lowest version it supports is 9.3.
 Anything prior to that?

 I'm working with OS X 10.8 and it has 9.1 installed.
>>>
>>>
>>> 9.1 went EOL almost two years ago. The oldest supported version is 9.3,
>>> though it will go EOL this September:
>>>
>>> https://www.postgresql.org/support/versioning/
>>>
>>> Are you forced to work with 9.1 or can you use something from here:
>>>
>>> https://www.postgresql.org/download/macosx/
>>>
>>> to get a newer version? FYI that will be a major upgrade so will require
>>> a
>>> dump/restore or use of pg_upgrade.
>>
>>
>> Just a thought...
>> Is it possible to create a trigger for a system table? Or this
>> operation is restricted
>
>
> Easy enough to test. As postgres super user:
>
> test_(postgres)# create trigger info_test before insert on pg_class execute
> procedure ts_update();
>
> ERROR:  permission denied: "pg_class" is a system catalog

But

draft=# CREATE TRIGGER info_test AFTER INSERT ON
information_schema.tables EXECUTE PROCEDURE test();
ERROR:  function test() does not exist

So it looks like this should be possible?

Thank you.

>
>
>> for when the server is actually being set-up?
>>
>> Successful "CREATE TABLE..." statement creates a row inside the
>> information_schema.tables
>> so if I can create a trigger after this record is inserted or deleted
>> that should be fine.
>>
>> Thank you.
>>
>>>

 And a second question - how do I work with it?
 I presume that function will have to be compiled in its own module
 (either dll, so or dylib).
 But then from the libpq interface how do I call it?
>>>
>>>
>>>
>>> It can use functions written in PL languages. See below:
>>>
>>> https://www.postgresql.org/docs/10/static/sql-createeventtrigger.html
>>>
>>> for an example written in plpgsql.
>>>
>>>
>>>
>>>
>>>

 Thank you.

>
>>
>> Thank you.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>>>
>>>
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread tirveni yadav
On Thu, Jul 12, 2018 at 5:18 AM, Adrian Klaver
 wrote:
> On 07/11/2018 04:01 PM, Ron wrote:
>>
>> On 07/11/2018 04:10 PM, Christopher Browne wrote:
>> [snip]
>>>
>>> ITIL surely does NOT specify the use of database rollback scripts as
>>> THE SPECIFIED MECHANISM for a backout procedure.
>>>
>>> In practice, we tend to take database snapshots using filesystem
>>> tools, as that represents a backout procedure that will work regardless
>>> of the complexity of an upgrade.
>>>
>>> It is quite possible for an upgrade script to not be reversible.
>>>
>>> After all, not all matrices are invertible; there are a surprisingly
>>> large
>>> number of preconditions that are required for that in linear algebra.
>>>
>>> And in databases, not all upgrades may be reversed via rollback scripts.
>>
>>
>> Does "rollback script" truly mean undoing what you just did in a
>> transaction-like manner?
>>
>
> Hard to say without knowing the system you are using, but I would guess no.
> I use Sqitch and it uses the term revert:
>
> https://metacpan.org/pod/sqitchtutorial#Status,-Revert,-Log,-Repeat
>
> which I think is more accurate. I find it a very handy feature when in
> development mode. Write script --> deploy --> test, if fails --> revert,
> rewrite deploy script --> deploy and so on.
>


And, if you are familiar with git, using Sqitch becomes easy.

It handles dependencies very well.

Go with Sqitch.

-- 
Regards,

Tirveni Yadav

www.bael.io

What is this Universe ? From what it arises ? Into what does it go?
In freedom it arises, In freedom it rests and into freedom it melts away.
Upanishads.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Adrian Klaver

On 07/11/2018 04:01 PM, Ron wrote:

On 07/11/2018 04:10 PM, Christopher Browne wrote:
[snip]

ITIL surely does NOT specify the use of database rollback scripts as
THE SPECIFIED MECHANISM for a backout procedure.

In practice, we tend to take database snapshots using filesystem
tools, as that represents a backout procedure that will work regardless
of the complexity of an upgrade.

It is quite possible for an upgrade script to not be reversible.

After all, not all matrices are invertible; there are a surprisingly 
large

number of preconditions that are required for that in linear algebra.

And in databases, not all upgrades may be reversed via rollback scripts.


Does "rollback script" truly mean undoing what you just did in a 
transaction-like manner?




Hard to say without knowing the system you are using, but I would guess 
no. I use Sqitch and it uses the term revert:


https://metacpan.org/pod/sqitchtutorial#Status,-Revert,-Log,-Repeat

which I think is more accurate. I find it a very handy feature when in
development mode. Write script --> deploy --> test, if fails --> revert, 
rewrite deploy script --> deploy and so on.


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 04:10 PM, Christopher Browne wrote:
[snip]

ITIL surely does NOT specify the use of database rollback scripts as
THE SPECIFIED MECHANISM for a backout procedure.

In practice, we tend to take database snapshots using filesystem
tools, as that represents a backout procedure that will work regardless
of the complexity of an upgrade.

It is quite possible for an upgrade script to not be reversible.

After all, not all matrices are invertible; there are a surprisingly large
number of preconditions that are required for that in linear algebra.

And in databases, not all upgrades may be reversed via rollback scripts.


Does "rollback script" truly mean undoing what you just did in a 
transaction-like manner?


--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Tim Cross


Gavin Flower  writes:

> On 11/07/18 11:04, Hustler DBA wrote:
>> Thanks Adrian and Rich,
>> I will propose sqitch to the client, but I think they want something 
>> with a GUI frontend.
>>
>> They want to deploy database changes, track which environments the 
>> change was deployed to, be able to rollback a change (with a rollback 
>> script), track when and if the change was rolled back and in which 
>> environment/database... so pretty much a deployment and tracking GUI 
>> software with a frontend.
>>
>> In the past, for doing database deployments to Oracle, I created a 
>> tool using PHP (frontend/backend), MySQL (repository to track 
>> deployments and store deployment logs) and scripted the release 
>> scripts for deployment and rollback, and had my tool manage the 
>> scripts up the environments. The client is "looking" for something 
>> more open source for PostgreSQL. Do we have anything similar to this?
>>
>> Neil
>>
>> On Tue, Jul 10, 2018 at 6:22 PM, Rich Shepard 
>> rshep...@appl-ecosys.com wrote:
>>
>> On Tue, 10 Jul 2018, Hustler DBA wrote:
>>
>> A client of mine is looking for an open source tool to deploy
>> and promote
>> PostgreSQL DDL changes through database environments as part
>> of SDLC. What
>> tools (open source) do the community members use? I normally use
>> scripts, but they want something open source.
>>
>>
>> Neil,
>>
>>  I'm far from a professional DBA, but scripts are certainly open
>> source
>> because they're text files.
>>
> Being text files has nothing to do with being Open Source! I could
> send you a BASH script, or the Java source code of a program, that is
> under a Proprietary licence.
>
> On the other hand, a script could be open source - it all depends
> on the licence!
>
>>
>>  To track changes for almost everything I highly recommend Git
>> for version
>> control. It's distributed and can handle most types of files. I
>> use it for
>> tracking coding projects as well as reports and other text
>> documents that
>> are edited and revised prior to release.
>>
>> Rich
>>
>>
>
> Note that using scripts makes it easier to automate and to document, 
> plus it gives you far more control. With PostgreSQL I use psql, as it 
> is easier to use than any GUI tool. I use an editor to create SQL 
> scripts and execute them from psql. Note that you can use psql to 
> execute SQL from within a BASH script.
>
> Scripts once working and tested, can be reused and stored in git. This 
> is not something you can do with actions in a GUI!
>

+1 for using scripts.

I've tried various 'fancy' deployment tools and always found them
lacking. I prefer to have deployment/migration scripts as part of the
GIT repo which has all the rest of the DDL/DML for the system. Check out
the version from GIT you want to deploy, run the scripts and you're done.

To track deployments, have a version table in your target which your
scripts update on completion of the deployment. Other scripts can then
query this table to determine which version has been deployed.
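
A minimal sketch of such a version table; the table and column names are
hypothetical:

CREATE TABLE IF NOT EXISTS schema_version (
    version     text PRIMARY KEY,
    deployed_at timestamptz NOT NULL DEFAULT now()
);

-- run at the end of each deployment script
INSERT INTO schema_version (version) VALUES ('1.4.0');

-- other scripts can then check what has been deployed
SELECT version FROM schema_version ORDER BY deployed_at DESC LIMIT 1;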

I typically don't worry about rollback as part of the scripts. This
tends to just make deployments far more complicated than necessary (and
therefore more error prone). Instead, ensure you have an effective
backup restore process and ensure that backup is part of the deployment
task.

Rollback capability in deployment scripts or programs is rarely of
significant benefit because it is highly dependent on the type of
changes being made (so needs to be developed as part of the specific
deployment) and is typically only possible for a very short time
following deployment (usually shorter than the time required to make the
decision to rollback). I've seen environments where rollback in the
scripts is mandatory and as a consequence, deployment of the rollback
components takes nearly as much time as development of the
feature/change being deployed.  

I suspect maintaining a GUI for such systems is probably more complex
than the benefits it realises. While I guess it would be possible to have a web
based interface, sorting out and maintaining access permissions without
compromising security will likely take more time than the benefits a GUI
offers. Management often likes the idea of a GUI as they think it means
the deployments can then be performed by less skilled (and cheaper)
staff. Reality is, you probably want your more skilled and experienced
staff dealing with deployments and they will typically prefer the
flexibility of scripts over the constraints of a GUI.

Given the high level of variability in environments, you are probably
best off developing the process and scripts rather than trying to find
an existing tool. Putting a web front end on your own scripts is likely
easier than finding a tool flexible enough to fit the environment, and it
avoids situations where the tool begins to dictate how you operate (tail
wagging the dog).

Tim


Re: sorting/comparing column values in non-alphanumeric sorting ways ?

2018-07-11 Thread Peter Geoghegan
On Wed, Jul 11, 2018 at 2:44 PM, David Gauthier
 wrote:
> I want to load the 'highestver' column with the highest version of tcfg1-3.
>
> This won't work...
> update tv set highestver = greatest(tcfg1,tcfg2,tcfg3)
> ...because it thinks 1.0.9 is greater than 1.0.10
>
> Is there a way to get this to work right ?

If you're using v10 with ICU, then you can create a custom ICU
collation for this, with "natural" sort order. Something like this
should work:

CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true');

See the docs -- "23.2.2.3.2. ICU collations".
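
A hypothetical usage sketch against the table from the original post, assuming
the collation above has been created on a v10 server built with ICU:

SELECT '1.0.9' > '1.0.10' COLLATE "numeric";   -- false under natural sort

UPDATE tv
SET highestver = greatest(tcfg1 COLLATE "numeric",
                          tcfg2 COLLATE "numeric",
                          tcfg3 COLLATE "numeric");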

-- 
Peter Geoghegan



Re: sorting/comparing column values in non-alphanumeric sorting ways ?

2018-07-11 Thread David G. Johnston
On Wednesday, July 11, 2018, David Gauthier 
wrote:

>
> This won't work...
> update tv set highestver = greatest(tcfg1,tcfg2,tcfg3)
> ...because it thinks 1.0.9 is greater than 1.0.10
>
> Is there a way to get this to work right ?
>
>
Haven't used it personally but this seems promising:

 https://pgxn.org/dist/semver/doc/semver.html

David J.


sorting/comparing column values in non-alphanumeric sorting ways ?

2018-07-11 Thread David Gauthier
Hi:

I have a table listing tools and tool versions for a number of different
tool configurations.  Something like this...

create table tv (tool text, tcfg1 text, tcfg2 text, tcfg3 text, highestver
text);

insert into tv (tool,tcfg1,tcfg2,tcfg3) values
('tool_a','1.0.5b','1.0.10','1.0.9');

I want to load the 'highestver' column with the highest version of tcfg1-3.

This won't work...
update tv set highestver = greatest(tcfg1,tcfg2,tcfg3)
...because it thinks 1.0.9 is greater than 1.0.10

Is there a way to get this to work right ?

Thanks in Advance !


Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread Adrian Klaver

On 07/11/2018 02:21 PM, David Gauthier wrote:

Please reply to list also.
Ccing list.

Table columns have already been defined with timestamp datatype.  The 
only way I know of to fix this is to...


1) add a new column as timestamptz called 'tmp' (whatever)
2) update tmp with the value in the timestamp column perhaps using "at 
time zone 'utc' "

3) drop the original timestamp column
4) recreate the column with the same name but with data type timestamptz
5) Move all the records over to this column (from tmp)
6) drop the tmp column.

There would be some needed downtime to do this of course.

Is there an easier way?


create table ts_test(id int, ts_fld timestamp);

insert into ts_test values (1, now()), (2, now() - interval '1 day');

test_(aklaver)> select * from ts_test ;
 id |           ts_fld
----+----------------------------
  1 | 2018-07-11 14:24:43.960989
  2 | 2018-07-10 14:24:43.960989
(2 rows)

Assuming the timestamp values were at UTC:

alter table ts_test alter COLUMN ts_fld type timestamptz using ts_fld at 
time zone 'UTC';


test_(aklaver)> select * from ts_test ;
 id |            ts_fld
----+-------------------------------
  1 | 2018-07-11 07:28:17.279899-07
  2 | 2018-07-10 07:28:17.279899-07
(2 rows)

The above depends on you knowing what the timestamps in the timestamp 
field were entered as. I would test first.


See below for more info:

https://www.postgresql.org/docs/10/static/sql-altertable.html



On Wed, Jul 11, 2018 at 5:14 PM, Adrian Klaver 
adrian.kla...@aklaver.com wrote:


On 07/11/2018 01:34 PM, David Gauthier wrote:

Thanks Everyone, they all work, but TL's seems to be the simplest...
select current_timestamp(0) at time zone 'utc'

I'm kinda stuck with the timestamp data type (vs timestamptz). 
Wondering if I can stick with that.



The above is a little unclear. Can you change the data type or not?
If you can your life will be a lot easier if you change it to
timestamptz.


One last question...
I want to store the current UTC date/time in the DB.  Does PG
unconditionally store something like UTC, then let the queries
figure out how they want to look at it (with "at time zone" and
"to_char()" etc...) ?  Or do I have to intentionally store the
UTC value somehow?


Per:

https://www.postgresql.org/docs/10/static/datatype-datetime.html


"For timestamp with time zone, the internally stored value is always
in UTC (Universal Coordinated Time, traditionally known as Greenwich
Mean Time, GMT). An input value that has an explicit time zone
specified is converted to UTC using the appropriate offset for that
time zone. If no time zone is stated in the input string, then it is
assumed to be in the time zone indicated by the system's TimeZone
parameter, and is converted to UTC using the offset for the timezone
zone.

When a timestamp with time zone value is output, it is always
converted from UTC to the current timezone zone, and displayed as
local time in that zone. To see the time in another time zone,
either change timezone or use the AT TIME ZONE construct (see
Section 9.9.3).

Conversions between timestamp without time zone and timestamp with
time zone normally assume that the timestamp without time zone value
should be taken or given as timezone local time. A different time
zone can be specified for the conversion using AT TIME ZONE."


Right now the code is just inserting and updating records using
"localtimestamp(0)".




On Wed, Jul 11, 2018 at 4:11 PM, David G. Johnston
david.g.johns...@gmail.com wrote:

     On Wednesday, July 11, 2018, David Gauthier
     davegauthie...@gmail.com wrote:

         OK, the "to_char" gets rid of the timezone extension. 
But the

         times still don't make sense.

         When I go to store this in a DB, I want to store the
UTC time.         How d I do that ?

     Use the data type that represents exactly that, timestamptz.
     Using the timestamp data type is generally not what you want
     even if you can get the manipulation logic figured out.

     David J.




-- 
Adrian Klaver

adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread Adrian Klaver

On 07/11/2018 01:34 PM, David Gauthier wrote:

Thanks Everyone, they all work, but TL's seems to be the simplest...
select current_timestamp(0) at time zone 'utc'

I'm kinda stuck with the timestamp data type (vs timestamptz).  
Wondering if I can stick with that.


The above is a little unclear. Can you change the data type or not?
If you can your life will be a lot easier if you change it to timestamptz.



One last question...
I want to store the current UTC date/time in the DB.  Does PG 
unconditionally store something like UTC, then let the queries figure 
out how they want to look at it (with "at time zone" and "to_char()" 
etc...) ?  Or do I have to intentionally store the UTC value somehow?


Per:

https://www.postgresql.org/docs/10/static/datatype-datetime.html

"For timestamp with time zone, the internally stored value is always in 
UTC (Universal Coordinated Time, traditionally known as Greenwich Mean 
Time, GMT). An input value that has an explicit time zone specified is 
converted to UTC using the appropriate offset for that time zone. If no 
time zone is stated in the input string, then it is assumed to be in the 
time zone indicated by the system's TimeZone parameter, and is converted 
to UTC using the offset for the timezone zone.


When a timestamp with time zone value is output, it is always converted 
from UTC to the current timezone zone, and displayed as local time in 
that zone. To see the time in another time zone, either change timezone 
or use the AT TIME ZONE construct (see Section 9.9.3).


Conversions between timestamp without time zone and timestamp with time 
zone normally assume that the timestamp without time zone value should 
be taken or given as timezone local time. A different time zone can be 
specified for the conversion using AT TIME ZONE."




Right now the code is just inserting and updating records using 
"localtimestamp(0)".





On Wed, Jul 11, 2018 at 4:11 PM, David G. Johnston 
david.g.johns...@gmail.com wrote:


On Wednesday, July 11, 2018, David Gauthier
davegauthie...@gmail.com wrote:

OK, the "to_char" gets rid of the timezone extension.  But the
times still don't make sense.

When I go to store this in a DB, I want to store the UTC time. 
How do I do that ? 



Use the data type that represents exactly that, timestamptz.  Using
the timestamp data type is generally not what you want even if you
can get the manipulation logic figured out.

David J.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Christopher Browne
On Wed, 11 Jul 2018 at 16:37, Ron  wrote:
>
> On 07/11/2018 03:21 PM, Christopher Browne wrote:
> > I have built one that I call Mahout
> > (https://github.com/cbbrowne/mahout) which has the merit of involving
> > just two shell scripts, one of which is an auditing tool (pgcmp).
> >
> > It implements a "little language" to indicate dependencies between the
> > SQL scripts that implement the DDL changes.
> >
> > The notable thing that I have seen "downthread" that it does not
> > attempt to implement is "rollback scripts."  I find that "necessity"
> > to be a ruby-on-rails dogma that does not seem to fit what I see
> > people doing.
>
> Where I work, the requirement to have rollback scripts is part of the ITIL
> requirement for Changes to have a backout procedure.

ITIL surely does NOT specify the use of database rollback scripts as
THE SPECIFIED MECHANISM for a backout procedure.

In practice, we tend to take database snapshots using filesystem
tools, as that represents a backout procedure that will work regardless
of the complexity of an upgrade.

It is quite possible for an upgrade script to not be reversible.

After all, not all matrices are invertible; there are a surprisingly large
number of preconditions that are required for that in linear algebra.

And in databases, not all upgrades may be reversed via rollback scripts.
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: How to tell which event was fired in Trigger function

2018-07-11 Thread Igal @ Lucee.org

On 7/11/2018 11:02 AM, David G. Johnston wrote:
On Wed, Jul 11, 2018 at 10:54 AM, Igal @ Lucee.org wrote:


On 7/11/2018 10:38 AM, Adrian Klaver wrote:

On 07/11/2018 10:36 AM, Igal @ Lucee.org wrote:

How can I tell inside the trigger function if the event
was DELETE or INSERT/UPDATE?


https://www.postgresql.org/docs/10/static/plpgsql-trigger.html



This looks like it has all of the information that I need.  For
some reason search engines bring up many other pages but not that one.


​For these situations I recommend bookmarking and navigating to the 
Table of Contents [1] for the documentation and looking for relevant 
chapter titles - in this case the "V. 38 - Triggers" one is right 
there on the main page.


https://www.postgresql.org/docs/10/static/index.html

On a related note, maybe the pl/pgsql link from the Triggers chapter 
should link directly to the Triggers section under pl/pgsql instead of 
the top of the chapter...


+1

Should link to ¶42.9 - 
https://www.postgresql.org/docs/10/static/plpgsql-trigger.html





Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread David G. Johnston
On Wednesday, July 11, 2018, David Gauthier 
wrote:
>
> I want to store the current UTC date/time in the DB.  Does PG
> unconditionally store something like UTC, then let the queries figure out
> how they want to look at it (with "at time zone" and "to_char()" etc...) ?
> Or do I have to intentionally store the UTC value somehow?
>

If you use timestamptz the server interprets the stored value as being
UTC.  If you use timestamp the server interprets the stored value using
whatever the current Time Zone happens to be when the value is read (or it
uses the at time zone value if specified).
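
A small sketch of that difference; the session time zone and the timestamps
shown are illustrative:

SET timezone = 'America/New_York';
SELECT now()::timestamptz;        -- e.g. 2018-07-11 15:27:12-04 (stored as UTC, shown in local time)
SELECT now()::timestamp;          -- e.g. 2018-07-11 15:27:12    (no zone; reinterpreted on every read)
SELECT now() AT TIME ZONE 'utc';  -- e.g. 2018-07-11 19:27:12    (timestamp, expressed as UTC wall time)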

David J.


Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 03:39 PM, Ravi Krishna wrote:

Where I work, the requirement to have rollback scripts is part of the ITIL 
requirement for Changes to have a backout procedure.


Liquibase provides that ability, but IMO rollback for an RDBMS is always a bit
tricky.  Certain DDL operations can take a long time if they involve
a table rewrite.  PG is actually better than others.


Yeah, that's true.  Sometimes I just dump the whole table, and reload if a 
rollback is necessary.




--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ravi Krishna
> 
> Where I work, the requirement to have rollback scripts is part of the ITIL 
> requirement for Changes to have a backout procedure.
> 

Liquibase provides that ability, but IMO rollback for an RDBMS is always a bit
tricky.  Certain DDL operations can take a long time if they involve
a table rewrite.  PG is actually better than others.




Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 03:21 PM, Christopher Browne wrote:

I have built one that I call Mahout
(https://github.com/cbbrowne/mahout) which has the merit of involving
just two shell scripts, one of which is an auditing tool (pgcmp).

It implements a "little language" to indicate dependencies between the
SQL scripts that implement the DDL changes.

The notable thing that I have seen "downthread" that it does not
attempt to implement is "rollback scripts."  I find that "necessity"
to be a ruby-on-rails dogma that does not seem to fit what I see
people doing.


Where I work, the requirement to have rollback scripts is part of the ITIL 
requirement for Changes to have a backout procedure.


--
Angular momentum makes the world go 'round.



Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread David Gauthier
 Thanks Everyone, they all work, but TL's seems to be the simplest...
select current_timestamp(0) at time zone 'utc'

I'm kinda stuck with the timestamp data type (vs timestamptz).  Wondering
if I can stick with that.

One last question...
I want to store the current UTC date/time in the DB.  Does PG
unconditionally store something like UTC, then let the queries figure out
how they want to look at it (with "at time zone" and "to_char()" etc...) ?
Or do I have to intentionally store the UTC value somehow?

Right now the code is just inserting and updating records using
"localtimestamp(0)".




On Wed, Jul 11, 2018 at 4:11 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, July 11, 2018, David Gauthier 
> wrote:
>
>> OK, the "to_char" gets rid of the timezone extension.  But the times
>> still don't make sense.
>>
>> When I go to store this in a DB, I want to store the UTC time.  How do I
>> do that ?
>>
>
> Use the data type that represents exactly that, timestamptz.  Using the
> timestamp data type is generally not what you want even if you can get the
> manipulation logic figured out.
>
> David J.
>


Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Christopher Browne
I have built one that I call Mahout
(https://github.com/cbbrowne/mahout) which has the merit of involving
just two shell scripts, one of which is an auditing tool (pgcmp).

It implements a "little language" to indicate dependencies between the
SQL scripts that implement the DDL changes.

The notable thing that I have seen "downthread" that it does not
attempt to implement is "rollback scripts."  I find that "necessity"
to be a ruby-on-rails dogma that does not seem to fit what I see
people doing.



Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread David G. Johnston
On Wednesday, July 11, 2018, David Gauthier 
wrote:

> OK, the "to_char" gets rid of the timezone extension.  But the times still
> don't make sense.
>
> When I go to store this in a DB, I want to store the UTC time.  How do I do
> that ?
>

Use the data type that represents exactly that, timestamptz.  Using the
timestamp data type is generally not what you want even if you can get the
manipulation logic figured out.

David J.


Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread Adrian Klaver

On 07/11/2018 12:59 PM, David Gauthier wrote:
OK, the "to_char" gets rid of the timezone extension.  But the times 
still don't make sense.


UTC should be 5 hours ahead, not behind.  It should be EST plus 5 hours 
(or 4 for DST), not minus.  That's why I said I expected 20:27 .



When I go to store this in a DB, I want to store the UTC time.  How do I
do that ?


insert into foo (dt) values (localtimestamp(0) at time zone 'utc') ???


What is the data type for foo.dt?




On Wed, Jul 11, 2018 at 3:45 PM, David G. Johnston 
david.g.johns...@gmail.com wrote:


On Wednesday, July 11, 2018, David Gauthier
davegauthie...@gmail.com wrote:

Hi:

I would like to get the utc timestamp, 24-hr clock (military
time), without the time zone suffix.

Below commands were run nearly at the same time...

sqfdev=> select now()::timestamp(0)  ;
          now
-
  2018-07-11 15:27:12
(1 row)

...then immediately...

sqfdev=> select now()::timestamp(0) at time zone 'utc' ;
         timezone

  2018-07-11 11:27:12-04
(1 row)


15:27:12 makes sense (it's about 3:30 in the afternoon EST).
11:27:12 doesn't make sense.  UTC is 5 hours ahead.

Apparently it's only four hours ahead of your server's time zone
setting.

   I would have expected either 20:27 (if it stuck to military
time, which I want), or 08:27 (P.M., non-military time)

And I want to get rid of the -04 suffix.

Is there a way to do this ?


Specify an appropriate format string with the to_char function.

https://www.postgresql.org/docs/10/static/functions-formatting.html


David J.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread David Gauthier
OK, the "to_char" gets rid of the timezone extension.  But the times still
don't make sense.

UTC should be 5 hours ahead, not behind.  It should be EST plus 5 hours (or
4 for DST), not minus.  That's why I said I expected 20:27 .


When I go to store this in a DB, I want to store the UTC time.  How do I do
that ?

insert into foo (dt) values (localtimestamp(0) at time zone 'utc') ???


On Wed, Jul 11, 2018 at 3:45 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, July 11, 2018, David Gauthier 
> wrote:
>
>> Hi:
>>
>> I would like to get the utc timestamp, 24-hr clock (military time),
>> without the time zone suffix.
>>
>> Below commands were run nearly at the same time...
>>
>> sqfdev=> select now()::timestamp(0)  ;
>>  now
>> -
>>  2018-07-11 15:27:12
>> (1 row)
>>
>> ...then immediately...
>>
>> sqfdev=> select now()::timestamp(0) at time zone 'utc' ;
>> timezone
>> 
>>  2018-07-11 11:27:12-04
>> (1 row)
>>
>>
>> 15:27:12 makes sense (it's about 3:30 in the afternoon EST).
>> 11:27:12 doesn't make sense.  UTC is 5 hours ahead.
>>
>
> Apparently it's only four hours ahead of your server's time zone setting.
>
>
>>
>   I would have expected either 20:27 (if it stuck to military time, which
>> I want), or 08:27 (P.M., non-military time)
>>
>> And I want to get rid of the -04 suffix.
>>
>> Is there a way to do this ?
>>
>
> Specify an appropriate format string with the to_char function.
>
>  https://www.postgresql.org/docs/10/static/functions-formatting.html
>
> David J.
>
>


Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread Adrian Klaver

On 07/11/2018 12:36 PM, David Gauthier wrote:

Hi:



And I want to get rid of the -04 suffix.

Is there a way to do this ?


For the details see:

https://www.postgresql.org/docs/10/static/functions-datetime.html#FUNCTIONS-DATETIME-ZONECONVERT



Thanks !



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Optimizing execution of expensive subqueries

2018-07-11 Thread Hellmuth Vargas
Hi

Try this way:

SELECT
  tbl.field1, tbl.field2, tbl.field3, ...,
  b.Thingy1Sum,
  ... repeat for multiple thingies ...
FROM
  tbl
  LEFT JOIN LATERAL (
    SELECT anothertbl.UserId, SUM(Duration) as Thingy1Sum
    FROM anothertbl
    WHERE anothertbl.UserId = tbl.UserId AND anothertbl.ThingyId = 1
    GROUP BY 1) as b ON tbl.UserId = b.UserId
ORDER BY tbl.field1 LIMIT 20


On Wed., Jul. 11, 2018 at 09:25, Mathieu Fenniak (
mathieu.fenn...@replicon.com) wrote:

> Hi pgsql-general!
>
> I'm currently looking at a query that is generally selecting a bunch of
> simple columns from a table, and also performing some subqueries to
> aggregate related data, and then sorting by one of the simple columns and
> paginating the result.
>
> eg.
>
> SELECT
>   tbl.field1, tbl.field2, tbl.field3, ...,
>   (SELECT SUM(Duration) FROM anothertbl WHERE anothertbl.UserId = tbl.UserId
> AND anothertbl.ThingyId = 1) as Thingy1Sum,
>   ... repeat for multiple thingies ...
> FROM
>   tbl
> ORDER BY tbl.field1 LIMIT 20
>
> I'm finding that if "tbl" contains hundreds of thousands of rows, the
> subqueries are being executed hundreds of thousands of times.  Because of
> the sorting and pagination, this appears to be unnecessary, and the
> result is slow performance.  (PostgreSQL 9.5.9 server)
>
> I've only found one solution so far, which is to perform the sort &
> pagination in a CTE, and the subqueries externally.  Are there any other
> approaches that can be taken to optimize this and prevent the unnecessary
> computation?
>
> CTE rewrite:
>
> WITH cte AS (
> SELECT
>   tbl.field1, tbl.field2, tbl.field3, tbl.UserId
> FROM
>   tbl
> ORDER BY tbl.field1 LIMIT 20
> )
> SELECT cte.*,
>   (SELECT SUM(Duration) FROM anothertbl WHERE anothertbl.UserId
> = cte.UserId AND anothertbl.ThingyId = 1) as Thingy1Sum,
>   ... repeat for multiple thingies ...
> FROM cte;
>
> Thanks for any thoughts you have,
>
> Mathieu Fenniak
>


-- 
Cordially,

Ing. Hellmuth I. Vargas S.


Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread Tom Lane
David Gauthier  writes:
> sqfdev=> select now()::timestamp(0) at time zone 'utc' ;
> timezone
> 
>  2018-07-11 11:27:12-04
> (1 row)

You're doing it wrong: coercing to timestamp already involves a rotation
to local time, and then "at time zone" says to interpret that as a time
in UTC, and the output is a timestamptz which will be displayed in your
local time.

I think the result you want is more like

select (now() at time zone 'utc')::timestamp(0);

although personally I'd choose some other way of dropping the fractional
second, probably

select current_timestamp(0) at time zone 'utc';

regards, tom lane



Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread Adrian Klaver

On 07/11/2018 12:36 PM, David Gauthier wrote:

select now()::timestamp(0) at time zone 'utc' ;

Or:

test=> select now();
  now
---
 2018-07-11 12:51:50.498416-07
(1 row)

test=>  select now()::timestamptz(0) at time zone 'utc' ;
  timezone
-
 2018-07-11 19:51:52
(1 row)


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: timestamp (military) at time zone without the suffix

2018-07-11 Thread David G. Johnston
On Wednesday, July 11, 2018, David Gauthier 
wrote:

> Hi:
>
> I would like to get the utc timestamp, 24-hr clock (military time),
> without the time zone suffix.
>
> Below commands were run nearly at the same time...
>
> sqfdev=> select now()::timestamp(0)  ;
>  now
> -
>  2018-07-11 15:27:12
> (1 row)
>
> ...then immediately...
>
> sqfdev=> select now()::timestamp(0) at time zone 'utc' ;
> timezone
> 
>  2018-07-11 11:27:12-04
> (1 row)
>
>
> 15:27:12 makes sense (it's about 3:30 in the afternoon EST).
> 11:27:12 doesn't make sense.  UTC is 5 hours ahead.
>

Apparently it's only four hours ahead of your server's time zone setting.


>
  I would have expected either 20:27 (if it stuck to military time, which I
> want), or 08:27 (P.M., non-military time)
>
> And I want to get rid of the -04 suffix.
>
> Is there a way to do this ?
>

Specify an appropriate format string with the to_char function.

 https://www.postgresql.org/docs/10/static/functions-formatting.html

David J.


timestamp (military) at time zone without the suffix

2018-07-11 Thread David Gauthier
Hi:

I would like to get the utc timestamp, 24-hr clock (military time), without
the time zone suffix.

Below commands were run nearly at the same time...

sqfdev=> select now()::timestamp(0)  ;
         now
---------------------
 2018-07-11 15:27:12
(1 row)

...then immediately...

sqfdev=> select now()::timestamp(0) at time zone 'utc' ;
        timezone
------------------------
 2018-07-11 11:27:12-04
(1 row)


15:27:12 makes sense (it's about 3:30 in the afternoon EST).
11:27:12 doesn't make sense.  UTC is 5 hours ahead.  I would have expected
either 20:27 (if it stuck to military time, which I want), or 08:27 (P.M.,
non-military time)

And I want to get rid of the -04 suffix.

Is there a way to do this ?

Thanks !


Re: How to tell which event was fired in Trigger function

2018-07-11 Thread David G. Johnston
On Wed, Jul 11, 2018 at 10:54 AM, Igal @ Lucee.org  wrote:

> On 7/11/2018 10:38 AM, Adrian Klaver wrote:
>
>> On 07/11/2018 10:36 AM, Igal @ Lucee.org wrote:
>>
>>> How can I tell inside the trigger function if the event was DELETE or
>>> INSERT/UPDATE?
>>>
>>
>> https://www.postgresql.org/docs/10/static/plpgsql-trigger.html
>>
>
> This looks like it has all of the information that I need.  For some
> reason search engines bring up many other pages but not that one.
>

​For these situations I recommend bookmarking and navigating to the Table
of Contents [1] for the documentation and looking for relevant chapter
titles - in this case the "V. 38 - Triggers" one is right there on the main
page.

https://www.postgresql.org/docs/10/static/index.html

On a related note, maybe the pl/pgsql link from the Triggers chapter should
link directly to the Triggers section under pl/pgsql instead of the top of
the chapter...

David J.


Re: How to tell which event was fired in Trigger function

2018-07-11 Thread Igal @ Lucee.org

On 7/11/2018 10:38 AM, Adrian Klaver wrote:

On 07/11/2018 10:36 AM, Igal @ Lucee.org wrote:
How can I tell inside the trigger function if the event was DELETE or 
INSERT/UPDATE?


https://www.postgresql.org/docs/10/static/plpgsql-trigger.html


This looks like it has all of the information that I need.  For some 
reason search engines bring up many other pages but not that one.


Thanks Adrian!

Igal



Re: How to tell which event was fired in Trigger function

2018-07-11 Thread Adrian Klaver

On 07/11/2018 10:36 AM, Igal @ Lucee.org wrote:

Hi,

I am writing a trigger function that is used after DELETE, INSERT, and 
UPDATE, like so:


     CREATE TRIGGER tr_name AFTER DELETE OR INSERT OR UPDATE ...

How can I tell inside the trigger function if the event was DELETE or 
INSERT/UPDATE?


The table has a `NOT NULL id` column, so I am thinking that maybe a 
DELETE will have a NULL value in NEW.id?  Will that work?  Is there a 
better way?


https://www.postgresql.org/docs/10/static/plpgsql-trigger.html
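
A minimal sketch of that, using the TG_OP variable described in the linked
docs; the table name and the RAISE NOTICE bodies are illustrative:

CREATE OR REPLACE FUNCTION tr_name_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        RAISE NOTICE 'deleted id %', OLD.id;   -- NEW is unassigned for DELETE
        RETURN OLD;
    ELSE  -- 'INSERT' or 'UPDATE'
        RAISE NOTICE '% on id %', TG_OP, NEW.id;
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_name AFTER DELETE OR INSERT OR UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE tr_name_fn();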



Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 




--
Adrian Klaver
adrian.kla...@aklaver.com



How to tell which event was fired in Trigger function

2018-07-11 Thread Igal @ Lucee.org

Hi,

I am writing a trigger function that is used after DELETE, INSERT, and 
UPDATE, like so:


    CREATE TRIGGER tr_name AFTER DELETE OR INSERT OR UPDATE ...

How can I tell inside the trigger function if the event was DELETE or 
INSERT/UPDATE?


The table has a `NOT NULL id` column, so I am thinking that maybe a 
DELETE will have a NULL value in NEW.id?  Will that work?  Is there a 
better way?


Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: Using CTE vs temporary tables

2018-07-11 Thread Adam Brusselback
One thing to note, if this is a query you would like to run on a replica,
temp tables are a non-starter.

I really wish that wasn't the case. I have quite a few analytical queries I
had to optimize with temp tables and indexes, and I really wish I could run
them on my hot standby.

In most cases I can't refactor them to use a CTE for performance reasons.

Anyways, my 2¢.
- Adam


Re: Using CTE vs temporary tables

2018-07-11 Thread David G. Johnston
On Wed, Jul 11, 2018 at 9:35 AM, Ravi Krishna 
wrote:

> Do temp tables also suffer from the optimization fence we see in CTEs?
>
>>
​I suppose it depends on how they end up being referenced in the query.  It
is not possible for the auto-vacuum daemon to vacuum/analyze them so if you
aren't doing that manually there will be a different kind of problem (bad
stats) preventing the query from being executed efficiently.  But in terms
of the "barrier", no, they are no different than any other table added to a
query FROM list.
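
A small sketch of doing that by hand, with hypothetical table and column names:

-- temp tables are never visited by autovacuum, so analyze (and index) manually
CREATE TEMPORARY TABLE tmp_recent_orders AS
SELECT * FROM orders WHERE created_at > now() - interval '30 days';

CREATE INDEX ON tmp_recent_orders (customer_id);
ANALYZE tmp_recent_orders;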

David J.


Re: Using CTE vs temporary tables

2018-07-11 Thread Ravi Krishna
Do temp tables also suffer from the optimization fence we see in CTEs?

>


Re: Using CTE vs temporary tables

2018-07-11 Thread David G. Johnston
On Wed, Jul 11, 2018 at 9:30 AM, hmidi slim  wrote:

> Hi,
> I have a big query that uses about 15 CTEs and its execution time is
> acceptable. I'm trying to optimize my query because it contains about 150
> lines of code and is becoming hard to understand and to add new filters or
> conditions to.
> I am thinking of replacing some CTEs with temporary tables and using
> indexes. I found this answer about performance between CTE and temp tables:
> https://dba.stackexchange.com/questions/78253/postgresql-
> common-table-expressions-vs-a-temporary-table
>
> Any suggestion will be welcome.
>
>
​Consider views and functions too.
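
A sketch of that idea with hypothetical names: one of the CTEs becomes a view
that can be reused (and indexed via its base tables) across queries.

CREATE VIEW recent_orders AS
SELECT o.id, o.customer_id, o.total
FROM orders o
WHERE o.created_at > now() - interval '30 days';

-- the big query can then reference recent_orders instead of repeating the CTE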

David J.


Using CTE vs temporary tables

2018-07-11 Thread hmidi slim
Hi,
I have a big query that uses about 15 CTEs and its execution time is
acceptable. I'm trying to optimize my query because it contains about 150
lines of code and is becoming hard to understand and to add new filters or
conditions to.
I am thinking of replacing some CTEs with temporary tables and using indexes.
I found this answer about performance between CTE and temp tables:
https://dba.stackexchange.com/questions/78253/postgresql-common-table-expressions-vs-a-temporary-table

Any suggestion will be welcome.
Best regards.


Optimizing execution of expensive subqueries

2018-07-11 Thread Mathieu Fenniak
Hi pgsql-general!

I'm currently looking at a query that is generally selecting a bunch of
simple columns from a table, and also performing some subqueries to
aggregate related data, and then sorting by one of the simple columns and
paginating the result.

eg.

SELECT
  tbl.field1, tbl.field2, tbl.field3, ...,
  (SELECT SUM(Duration) FROM anothertbl WHERE anothertbl.UserId = tbl.UserId
AND anothertbl.ThingyId = 1) as Thingy1Sum,
  ... repeat for multiple thingies ...
FROM
  tbl
ORDER BY tbl.field1 LIMIT 20

I'm finding that if "tbl" contains hundreds of thousands of rows, the
subqueries are being executed hundreds of thousands of times.  Because of
the sorting and pagination, this appears to be unnecessary, and the
result is slow performance.  (PostgreSQL 9.5.9 server)

I've only found one solution so far, which is to perform the sort &
pagination in a CTE, and the subqueries externally.  Are there any other
approaches that can be taken to optimize this and prevent the unnecessary
computation?

CTE rewrite:

WITH cte AS (
SELECT
  tbl.field1, tbl.field2, tbl.field3, tbl.UserId
FROM
  tbl
ORDER BY tbl.field1 LIMIT 20
)
SELECT cte.*,
  (SELECT SUM(Duration) FROM anothertbl WHERE anothertbl.UserId
= cte.UserId AND anothertbl.ThingyId = 1) as Thingy1Sum,
  ... repeat for multiple thingies ...
FROM cte;

Thanks for any thoughts you have,

Mathieu Fenniak


RE: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Rijo Roy
Thanks a lot Steve, that really answers my question.. 
Many thanks, Rijo Roy 

Sent from Yahoo Mail on Android 
 
  On Wed, 11 Jul 2018 at 7:50 pm, Steven Winfield wrote:
 
 
From: Rijo Roy [mailto:rjo_...@yahoo.com]
Sent: 11 July 2018 15:14
To: Steven Winfield
Cc: pgsql-gene...@postgresql.org
Subject: RE: Extremely slow autovacuum:vacuum to prevent wraparound
 
  
 
Hi Steve,

Apologies, I misread it as 2 billion, it is as you said set as 200 million
which is the default value for the parameter autovacuum_freeze_max_age.

I just wanted to confirm whether there would be any impact if I cancel or
terminate the backend for the existing autovacuum :vacuum table_name (to
prevent wraparound). I have initiated a normal vacuum manually but it is stuck
on the table where the autovacuum is processing it.

Thanks,
Rijo Roy


Stack Overflow confirms my memories about killing autovacuum -
pg_cancel_backend() should be fine.

https://stackoverflow.com/questions/18048842/danger-in-killing-autovacuum-vacuum-queries-to-prevent-wraparound

Steve.
 


RE: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Steven Winfield
From: Rijo Roy [mailto:rjo_...@yahoo.com]
Sent: 11 July 2018 15:14
To: Steven Winfield
Cc: pgsql-gene...@postgresql.org
Subject: RE: Extremely slow autovacuum:vacuum to prevent wraparound

Hi Steve,

Apologies, I misread it as 2 billion, it is as you said set as 200 million 
which is the default value for the parameter autovacuum_freeze_max_age.
I just wanted to confirm whether there would be any impact if I cancel or 
terminate the backend for the existing autovacuum :vacuum table_name (to 
prevent wraparound). I have initiated a normal vacuum manually but it is stuck 
on the table where the autovacuum is processing it.

Thanks,
Rijo Roy


Stack Overflow confirms my memories about killing autovacuum - 
pg_cancel_backend() should be fine.
https://stackoverflow.com/questions/18048842/danger-in-killing-autovacuum-vacuum-queries-to-prevent-wraparound

Steve.


RE: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Rijo Roy
Hi Steve, 
Apologies, I misread it as 2 billion, it is as you said set as 200 million 
which is the default value for the parameter autovacuum_freeze_max_age. I just 
wanted to confirm whether there would be any impact if I cancel or terminate 
the backend for the existing autovacuum :vacuum table_name (to prevent 
wraparound). I have initiated a normal vacuum manually but it is stuck on the 
table where the autovacuum is processing it. 
Thanks, Rijo Roy 
Sent from Yahoo Mail on Android 
 
  On Wed, 11 Jul 2018 at 7:30 pm, Steven Winfield wrote:
  
 
 
 
From: Rijo Roy [mailto:rjo_...@yahoo.com]
Sent: 11 July 2018 14:30
To: srkris...@yahoo.com
Cc: pgsql-gene...@postgresql.org
Subject: Re: Extremely slow autovacuum:vacuum to prevent wraparound
 
  
 
No, I have 3 sessions of autovacuum against 3 tables which are doing a vacuum
to prevent wraparound as they hit the limit of autovacuum_freeze_max_age of 2
billion.

I also have a vacuumdb session which is invoked by me on these tables which is
currently in a paused state. So, I want to know whether I can kill the
autovacuum sessions, which have been running for 5 days and are extremely slow,
and just run a manual vacuum against these tables instead.


Is your autovacuum_freeze_max_age really set to 2 billion? The default value is
200 million. Setting it that high and disabling autovacuum isn’t just silly -
it borders on sabotage!

I’ve used pg_cancel_backend() (NOT pg_terminate_backend()) on autovacuum jobs
before without issue.

Good luck!

Steve.
 
  
 


Re: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Ravi Krishna
>  Setting it that high and disabling autovacuum isn’t just silly - it borders 
> on sabotage!


LOL.  My thoughts too.  Perhaps some disgruntled employee's parting shot before 
quitting :-)

Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Jeremy Finzel
On Tue, Jul 10, 2018 at 5:13 PM Hustler DBA  wrote:

> Hi Community,
> A client of mine is looking for an open source tool to deploy and promote
> PostgreSQL DDL changes through database environments as part of SDLC. What
> tools (open source) do the community members use? I normally use scripts,
> but they want something open source.
>
> Thanks,
> Neil Barrett
>

A popular one is ActiveRecord Migrations. It is open source. Although
ActiveRecord is also an ORM it’s not required to use it - you can just use
SQL. FWIW:
http://edgeguides.rubyonrails.org/active_record_migrations.html

>


RE: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Steven Winfield

From: Rijo Roy [mailto:rjo_...@yahoo.com]
Sent: 11 July 2018 14:30
To: srkris...@yahoo.com
Cc: pgsql-gene...@postgresql.org
Subject: Re: Extremely slow autovacuum:vacuum to prevent wraparound

No, I have 3 sessions of autovacuum against 3 tables, each doing a vacuum to
prevent wraparound because they hit the autovacuum_freeze_max_age limit of 2
billion.
I also have a vacuumdb session, invoked by me on these tables, which is
currently in a paused state. So I want to know whether I can kill the
autovacuum sessions, which have been running for 5 days and are extremely
slow, and just run a manual vacuum against these tables instead.


Is your autovacuum_freeze_max_age really set to 2 billion? The default value is 
200 million. Setting it that high and disabling autovacuum isn’t just silly - 
it borders on sabotage!
I’ve used pg_cancel_backend() (NOT pg_terminate_backend()) on autovacuum jobs 
before without issue.
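
For reference, a quick way to see those autovacuum workers before cancelling
them is something like the following (just a sketch - the pid passed to
pg_cancel_backend() is a placeholder you would take from the first query):

  -- list running autovacuum workers and what they are working on
  SELECT pid, query_start, state, query
  FROM pg_stat_activity
  WHERE query LIKE 'autovacuum:%';

  -- cancel one of them; pg_cancel_backend() lets it exit cleanly
  SELECT pg_cancel_backend(12345);  -- 12345 = pid taken from the query above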

Good luck!

Steve.



Re: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Olivier Gautherot
On Wed, Jul 11, 2018 at 8:17 AM, Rijo Roy  wrote:

> +pgsql-general
>
> Sent from Yahoo Mail on Android
> 
>
> On Wed, 11 Jul 2018 at 5:43 pm, Rijo Roy
>  wrote:
> Hi Ravi,
>
> It was the application team's call to disable it, fearing changes in
> execution plans and slowness, as per their explanation. I joined 2 days ago
> and have tried to persuade them to re-enable it, but they said it will be
> enabled again in the future, once enough testing has been done and evidence
> collected.
> As of now, I have started the manual vacuum operation on the databases, but
> in one of them my vacuum jobs for 3 tables are paused because there are 3
> autovacuum jobs that have been running for 5 days on the same tables to
> prevent wraparound.
> Could you please confirm whether I can stop these autovacuum jobs so that
> I can run my manual vacuum job instead.
>
Rijo, from a technical perspective (consider whether this can apply in
production), one thing you can do is disable autovacuum in postgresql.conf and
restart the cluster. This will kill the vacuum processes in an orderly fashion.

One thing you could try is to recreate these big tables (the new copy starts
out clean, as if freshly vacuumed). There are 2 ways:
- COPY (SELECT * FROM bigtable) TO 'tempfile'; COPY bigtable2 FROM 'tempfile';
- SELECT * INTO bigtable2 FROM (SELECT * FROM bigtable) AS t;

See which one works best for you. Note that this will only copy the columns
and the data - you will have to recreate the indexes and constraints by
hand afterwards.
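
As a rough sketch of the second approach (table names here are placeholders,
and the old table is kept around until the new one has been verified):

  BEGIN;
  -- copy the live rows into a brand-new table
  SELECT * INTO bigtable2 FROM bigtable;
  -- swap the two tables
  ALTER TABLE bigtable RENAME TO bigtable_old;
  ALTER TABLE bigtable2 RENAME TO bigtable;
  COMMIT;
  -- indexes, constraints, triggers and grants are NOT copied, so recreate
  -- them on the new table before dropping bigtable_old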

As a comment, NEVER disable autovacuum unless you're doing heavy maintenance
(like deleting millions of rows - and run the vacuum by hand afterwards). In
our experience, it can be a lot lighter to perform the SELECT INTO described
above if you happen to delete a majority of the rows (it happened to us).

Olivier


>
> Thanks,
> Rijo Roy
>
> Sent from Yahoo Mail on Android
> 
>
> On Wed, 11 Jul 2018 at 5:37 pm, Ravi Krishna
>  wrote:
> Who disabled auto vacuum and why ?  Basically you are paying the price for
> it now
>
> Sent from my iPad
>
> On Jul 11, 2018, at 7:33 AM, Rijo Roy  wrote:
>
> Hi Experts,
>
> I am running PG 10.4 on RHEL 6.9. The tables in my database are mainly
> used for reads and see very few writes. I recently joined this
> environment, and as soon as I logged into the PostgreSQL servers I checked
> for dead tuples and relfrozenxid ages. Almost all tables have reached the
> autovacuum_freeze_max_age of 2 billion, and autovacuum was disabled. The
> last vacuum was run 3 months back.
> Today I got permission to vacuum the tables, but for 3 tables PostgreSQL
> has already started a vacuum freeze via the autovacuum daemon to prevent
> wraparound, and these tables hold TBs of data.
> Experts, please confirm whether I can kill these sessions and run a manual
> vacuum job with parallel jobs.
>
> Thanks in advance.
>
> Regards,
> Rijo Roy
>
> Sent from Yahoo Mail on Android
> 
>
>


Re: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Rijo Roy
No, I have 3 sessions of autovacuum against 3 tables, each doing a vacuum to
prevent wraparound because they hit the autovacuum_freeze_max_age limit of 2
billion. I also have a vacuumdb session, invoked by me on these tables, which
is currently in a paused state. So I want to know whether I can kill the
autovacuum sessions, which have been running for 5 days and are extremely
slow, and just run a manual vacuum against these tables instead.

Sent from Yahoo Mail on Android 
 
  On Wed, 11 Jul 2018 at 6:36 pm, Ravi Krishna wrote:   
Are you saying that you have multiple autovacuum jobs on the same table?  If
yes, that causes lock waits.
=== Original Message===

On Jul 11, 2018, at 08:13 , Rijo Roy  wrote:
Hi Ravi, 
It was the application team's call to disable it, fearing changes in execution
plans and slowness, as per their explanation. I joined 2 days ago and have
tried to persuade them to re-enable it, but they said it will be enabled again
in the future, once enough testing has been done and evidence collected. As of
now, I have started the manual vacuum operation on the databases, but in one
of them my vacuum jobs for 3 tables are paused because there are 3 autovacuum
jobs that have been running for 5 days on the same tables to prevent
wraparound. Could you please confirm whether I can stop these autovacuum jobs
so that I can run my manual vacuum job instead.
Thanks, Rijo Roy 

Sent from Yahoo Mail on Android 
 
  On Wed, 11 Jul 2018 at 5:37 pm, Ravi Krishna wrote:   
Who disabled auto vacuum and why ?  Basically you are paying the price for it 
now 

Sent from my iPad
On Jul 11, 2018, at 7:33 AM, Rijo Roy  wrote:


Hi Experts, 
I am running PG 10.4 on RHEL 6.9. The tables in my database are mainly used
for reads and see very few writes. I recently joined this environment, and as
soon as I logged into the PostgreSQL servers I checked for dead tuples and
relfrozenxid ages. Almost all tables have reached the autovacuum_freeze_max_age
of 2 billion, and autovacuum was disabled. The last vacuum was run 3 months
back. Today I got permission to vacuum the tables, but for 3 tables PostgreSQL
has already started a vacuum freeze via the autovacuum daemon to prevent
wraparound, and these tables hold TBs of data. Experts, please confirm whether
I can kill these sessions and run a manual vacuum job with parallel jobs.
Thanks in advance. 
Regards, Rijo Roy 

Sent from Yahoo Mail on Android
  


  


Re: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Rijo Roy
+pgsql-general

Sent from Yahoo Mail on Android 
 
  On Wed, 11 Jul 2018 at 5:43 pm, Rijo Roy wrote:

Hi Ravi,
It was the application team's call to disable it, fearing changes in execution
plans and slowness, as per their explanation. I joined 2 days ago and have
tried to persuade them to re-enable it, but they said it will be enabled again
in the future, once enough testing has been done and evidence collected. As of
now, I have started the manual vacuum operation on the databases, but in one
of them my vacuum jobs for 3 tables are paused because there are 3 autovacuum
jobs that have been running for 5 days on the same tables to prevent
wraparound. Could you please confirm whether I can stop these autovacuum jobs
so that I can run my manual vacuum job instead.
Thanks, Rijo Roy 

Sent from Yahoo Mail on Android 
 
  On Wed, 11 Jul 2018 at 5:37 pm, Ravi Krishna wrote:   
Who disabled auto vacuum and why ?  Basically you are paying the price for it 
now 

Sent from my iPad
On Jul 11, 2018, at 7:33 AM, Rijo Roy  wrote:


Hi Experts, 
I am running PG 10.4 on RHEL 6.9. The tables in my database are mainly used
for reads and see very few writes. I recently joined this environment, and as
soon as I logged into the PostgreSQL servers I checked for dead tuples and
relfrozenxid ages. Almost all tables have reached the autovacuum_freeze_max_age
of 2 billion, and autovacuum was disabled. The last vacuum was run 3 months
back. Today I got permission to vacuum the tables, but for 3 tables PostgreSQL
has already started a vacuum freeze via the autovacuum daemon to prevent
wraparound, and these tables hold TBs of data. Experts, please confirm whether
I can kill these sessions and run a manual vacuum job with parallel jobs.
Thanks in advance. 
Regards, Rijo Roy 

Sent from Yahoo Mail on Android
  
  


Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Rijo Roy
Hi Experts, 
I am running PG 10.4 on RHEL 6.9. The tables in my database are mainly used
for reads and see very few writes. I recently joined this environment, and as
soon as I logged into the PostgreSQL servers I checked for dead tuples and
relfrozenxid ages. Almost all tables have reached the autovacuum_freeze_max_age
of 2 billion, and autovacuum was disabled. The last vacuum was run 3 months
back. Today I got permission to vacuum the tables, but for 3 tables PostgreSQL
has already started a vacuum freeze via the autovacuum daemon to prevent
wraparound, and these tables hold TBs of data. Experts, please confirm whether
I can kill these sessions and run a manual vacuum job with parallel jobs.
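
For anyone who wants to reproduce the check mentioned above, a query along
these lines (nothing installation-specific assumed) lists the tables whose
relfrozenxid age is closest to the wraparound threshold:

  SELECT c.oid::regclass AS table_name,
         age(c.relfrozenxid) AS xid_age,
         pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
  WHERE c.relkind = 'r'
  ORDER BY age(c.relfrozenxid) DESC
  LIMIT 10;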
Thanks in advance. 
Regards, Rijo Roy 

Sent from Yahoo Mail on Android

Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Raymond O'Donnell

On 10/07/18 23:13, Hustler DBA wrote:

Hi Community,
A client of mine is looking for an open source tool to deploy and 
promote PostgreSQL DDL changes through database environments as part of 
the SDLC. What tools (open source) do the community members use? I
normally use scripts, but they want something open source.


I use Depesz Versioning[1] - simple concept and easy to use, and quite 
sufficient for my (simple) needs.


Ray.

[1] https://www.depesz.com/2010/08/22/versioning

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: Create event triger

2018-07-11 Thread Ken Tanzer
On Tue, Jul 10, 2018 at 5:45 PM Adrian Klaver 
wrote:

> select add_trigger('trg_test');
>
> test=> \d trg_test
>                     Table "public.trg_test"
>  Column |       Type        | Collation | Nullable | Default
> --------+-------------------+-----------+----------+---------
>  id     | integer           |           |          |
>  fld_1  | character varying |           |          |
> Triggers:
>     trg_test_change AFTER UPDATE ON trg_test FOR EACH STATEMENT EXECUTE
> PROCEDURE ts_update()
>
To take this a step further, if you really have a lot of tables and want to
do it automatically, you could do something like this:

SELECT table_name,add_trigger(table_name) FROM
information_schema.tables WHERE table_schema='public';

This assumes that you want to add the trigger to _all_ your tables, and
that you haven't made use of schemas and so your tables are all in the
public schema.
If that's not the case, you could adjust accordingly.  It would be safest
to just pull the table names first, make sure the list is what you want,
and then run with the add_trigger.  So start with this:

SELECT table_name FROM information_schema.tables WHERE table_schema='public';

and if the list of tables is what you want, then run with the add_trigger
included.
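
If the tables are spread over several schemas instead, the same idea still
works - here is a variant (assuming add_trigger() is happy to receive a
schema-qualified name, which you would need to verify):

  SELECT table_schema, table_name,
         add_trigger(quote_ident(table_schema) || '.' || quote_ident(table_name))
  FROM information_schema.tables
  WHERE table_type = 'BASE TABLE'
    AND table_schema NOT IN ('pg_catalog', 'information_schema');

The table_type filter also keeps views out of the list, which
information_schema.tables would otherwise include.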

Cheers,
Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Thomas Kellerer
Hustler DBA schrieb am 11.07.2018 um 00:13:
> A client of mine is looking for an open source tool to
> deploy and promote PostgreSQL DDL changes through database
> environments as part of the SDLC. What tools (open source) do the
> community members use? I normally use scripts, but they want
> something open source.
We are using Liquibase (with the XML format) and that has served us well.
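
For what it's worth, Liquibase can also read plain SQL changelogs, which may
feel more natural for DBAs than the XML format. A minimal sketch (file name,
author and changeset id are made up):

  --liquibase formatted sql

  --changeset neil:1
  CREATE TABLE customer (
      id   integer PRIMARY KEY,
      name text NOT NULL
  );
  --rollback DROP TABLE customer;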