Re: Need help with trigger

2021-01-23 Thread Melvin Davidson
Maybe this example will help.
From https://www.postgresql.org/docs/current/sql-insert.html

INSERT INTO distributors AS d (did, dname) VALUES (8, 'Anvil Distribution')
ON CONFLICT (did) DO UPDATE
SET dname = EXCLUDED.dname || ' (formerly ' || d.dname || ')'
WHERE d.zipcode <> '21201';
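
To make the documentation example self-contained, here is a minimal sketch with hypothetical starting data showing how EXCLUDED (the proposed row) and the alias d (the existing row) interact:

```sql
-- Hypothetical setup so the documentation example above is runnable
CREATE TABLE distributors (
    did     integer PRIMARY KEY,
    dname   text NOT NULL,
    zipcode text
);

INSERT INTO distributors VALUES (8, 'Acme Widgets', '10001');

-- This insert conflicts on did = 8; EXCLUDED is the row we tried to
-- insert, d is the row already in the table
INSERT INTO distributors AS d (did, dname)
VALUES (8, 'Anvil Distribution')
ON CONFLICT (did) DO UPDATE
SET dname = EXCLUDED.dname || ' (formerly ' || d.dname || ')'
WHERE d.zipcode <> '21201';
-- dname is now 'Anvil Distribution (formerly Acme Widgets)'
```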

On Sat, Jan 23, 2021 at 3:47 PM Adrian Klaver 
wrote:

> On 1/23/21 12:14 PM, Condor wrote:
> > On 23-01-2021 20:49, Adrian Klaver wrote:
> >> On 1/23/21 10:20 AM, Condor wrote:
>
> >
> > Sorry,
> >
> > I'm sorry, I don't understand something. Do you mean to do a pure INSERT
> > ON CONFLICT, or to modify the trigger?
>
> No, I meant in the external program you use to fetch the data from
> the other table and reorganize the fields. Do your test there and do
> either the INSERT or the UPDATE.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Date created for tables

2019-12-25 Thread Melvin Davidson
AFAICT, this is something that should have been designed into the initial
release of PostgreSQL. Rather than go down that road, I have thought
about this and will attempt to explain the "complexity" of implementing it,
which to date has only been described as "too hard" or "we don't have
enough developers or resources."

In order to record the CREATE and ALTER dates, it would require adding two
columns to pg_class, i.e. date_created & date_altered.
However, that being said, it would also then require a routine (or generic
routine) to populate those columns for EVERY object type that gets placed
in pg_class when created or altered: tables, views, materialized views,
etc.

Now, AFAICT the implementation of new features has nothing to do with
complexity, but rather what has been determined to be required or desirable.

Indeed, there exists a feature-request URL, but AFAICT little attention
has been paid to it.

user feature request <https://postgresql.uservoice.com/forums/21853-general>
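
Not a substitute for a catalog-level feature, but a rough sketch of the kind of generic routine described above is possible today with event triggers. All names here are mine, not an official mechanism:

```sql
-- Sketch only: record DDL times in a user table via an event trigger
CREATE TABLE ddl_history (
    object_identity text,
    command_tag     text,
    ddl_time        timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_history (object_identity, command_tag)
        VALUES (r.object_identity, r.command_tag);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Fires after every CREATE/ALTER, covering tables, views,
-- materialized views, etc.
CREATE EVENT TRIGGER track_ddl ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();
```

This records creation and alteration times going forward, though unlike a pg_class column it starts empty and is not maintained by the server itself.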


On Tue, Dec 24, 2019 at 10:11 PM Ron  wrote:

> On 12/24/19 8:58 PM, Rob Sargent wrote:
>
>
> On Dec 24, 2019, at 11:48 AM, Ron 
>  wrote:
>
>  On 12/24/19 1:14 PM, Rob Sargent wrote:
>
> If there's not enough time and motivation for the developers to implement 
> CREATED_ON and LAST_ALTERED in pg_class, then you should have said that in 
> the first place.  We're adults; we understand that OSS projects have limited 
> resources, and won't go off and pout in the corner.
>
> But that's not what y'all said.  "It's too complicated, mission creep, blah 
> blah blah" just extended way too long.
>
> Is there a list of purported use cases for these two attributes (other than 
> auditing)?  Especially anything to do with managing the data as they 
> currently exist?
>
>
> I've used last_altered for comparing tables on Staging and Prod database.
>
> If, for example, the last_altered on a prod table is *earlier* than
> last_altered on the staging table, then that's a *strong hint* that the
> staging and prod schema are out of sync, and more detailed examination is
> required.
>
> Another example is that -- since username is also recorded in other RDBMSs
> --it's useful when the customer is screaming at your boss asking who made
> that unauthorized modification to production that's breaking their
> application.  You then show them that the table hasn't been altered in X
> months, and point the finger back at their incompetent developers.
>
> All in all, it's not something that you use every day, but when it *is*
> useful, it's *very* useful.
>
> Don’t both of those examples hi-light flaws in the release procedures?
>
>
> And bugs highlight flaws in the development process.  We're human, after
> all.
>
> --
> Angular momentum makes the world go 'round.
>


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Date created for tables

2019-12-23 Thread Melvin Davidson
>You all are *grossly* over-complicating this.
Agree +1


On Mon, Dec 23, 2019 at 9:14 PM Ron  wrote:

> On 12/23/19 7:01 PM, Bruce Momjian wrote:
>
> On Thu, Dec  5, 2019 at 05:10:20PM +, Chloe Dives wrote:
>
> Having moved to PostgreSQL from Oracle a few years ago I have been generally
> very impressed by Postgres, but there are a few things that I still miss. One
> of those is being able to see the created and last modified dates for database
> objects.
>
>
>
> Is this something that has been considered for implementation?
>
> I wrote a blog about this:
>
>   https://momjian.us/main/blogs/pgblog/2017.html#November_21_2017
>
>
> You all are *grossly* over-complicating this.
>
> By creation time, "we DBAs" mean the time we ran "CREATE object", not
> when pg_dump, pg_basebackup, or pg_upgrade ran.
>
> Likewise, modification time is when we last ran an ALTER command, not
> when VACUUM ran (that's tracked elsewhere) or DML ran.
>
> That's all.
>
> --
> Angular momentum makes the world go 'round.
>


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Schema Copy

2019-12-05 Thread Melvin Davidson
> Can someone please help in schema copy in same database ?

A while ago, I created the clone_schema function which will duplicate one
schema to a new schema with the option to copy or not copy data. However,
since there are two versions depending on the version of PostgreSQL you are
using, and you have not specified which version of PostgreSQL you have, I
cannot help you at this time.
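
For reference, the per-table ALTER TABLE ... SET SCHEMA approach discussed below can be applied to a whole schema with a short DO block. The schema names here are placeholders:

```sql
-- Move every table from old_schema into new_schema (placeholder names);
-- note this moves tables rather than copying them
DO $$
DECLARE
    t text;
BEGIN
    FOR t IN SELECT tablename FROM pg_tables
             WHERE schemaname = 'old_schema'
    LOOP
        EXECUTE format('ALTER TABLE old_schema.%I SET SCHEMA new_schema', t);
    END LOOP;
END;
$$;
```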

On Thu, Dec 5, 2019 at 9:37 AM Justin  wrote:

> Hi Sonam
>
> If you want to copy the data, do a SELECT * INTO newschema.table FROM
> oldschema.table
> https://www.postgresql.org/docs/9.1/sql-selectinto.html
>
> If you want to just move the table
>
> https://www.postgresql.org/docs/current/sql-altertable.html
>
> Alter Table oldschema.table Set Schema  newschema
>
> On Thu, Dec 5, 2019 at 9:31 AM Sonam Sharma  wrote:
>
>> Hi Justin,
>>
>> What can be done to move the data..
>>
>> On Thu, Dec 5, 2019, 7:57 PM Justin  wrote:
>>
>>> Hi Sonam
>>>
>>> As long as the edited sql script has been changed from
>>> oldschema.tables, oldschema.functions etc. to newschema.tables,
>>> newschema.functions etc.
>>>
>>> This does not move  data
>>>
>>> On Thu, Dec 5, 2019 at 5:07 AM Sonam Sharma 
>>> wrote:
>>>
>>>> Can someone please help in schema copy in same database ?
>>>>
>>>> I have taken db dump of current schema and in the ddl file , I have
>>>> edited it with new schema.
>>>> Will this work?
>>>>
>>>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: SOLVED Re: Recovering disk space

2019-10-10 Thread Melvin Davidson
>OK, after a period of time, I recovered my disk space.

Stan, for those of us who have lost the ability to long-distance mind
read, would you please inform us of the steps you took to solve the
problem? It might be helpful in the future to someone else in the same
situation. It would also be useful to include the O/S and PostgreSQL
version.
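
For anyone hitting the same wall, a query along these lines shows where the space is actually going, before and after cleanup:

```sql
-- Ten largest relations in the current database,
-- sizes include indexes and TOAST data
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```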


On Thu, Oct 10, 2019 at 4:01 AM stan  wrote:

> OK, after a period of time, I recovered my disk space.
>
> On Thu, Oct 10, 2019 at 03:57:00AM -0400, stan wrote:
> > Sorry to repeat this, I am still in trouble on it.
> >
> > I made the mistake of trying to import the mouse genome database in an
> > effort to do some testing with very large data sets.
> >
> > This import failed, probably because I ran out of disk space.  I deleted
> the
> > dump file, and a few other things, and now I have a bit of disk space.
> > Looks like I see the mgd schema both in the postgres db, and in my own
> > stan db. I did a drop schema cascade on both as the db superuser, and ran
> > vacuum full on both. This command came back almost instantly, which
> > surprises me.
> >
> > However, my 50G disk is still 96% full. How can I recover the disk space
> I
> > seem to have used up doing this?
> >
> >
> > --
> > "They that would give up essential liberty for temporary safety deserve
> > neither liberty nor safety."
> >   -- Benjamin Franklin
> >
> >
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: problems importing from csv

2019-09-13 Thread Melvin Davidson
>ERROR:  invalid input syntax for type numeric: "$1.00"
>CONTEXT:  COPY employee, line 2, column hourly_rate: "$1.00"

Try changing the format of the column from currency to numeric BEFORE
exporting.
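
If the spreadsheet cannot be changed, another option is to stage the raw text and strip the currency symbol in SQL. The column and file names below are guesses at the poster's schema, not confirmed details:

```sql
-- Load hourly_rate as text first, then cast after removing the '$'
CREATE TEMP TABLE employee_staging (
    name        text,
    hourly_rate text
);

-- \copy is a psql metacommand, as used in the original import
\copy employee_staging FROM 'employees.csv' WITH (FORMAT csv, HEADER true)

INSERT INTO employee (name, hourly_rate)
SELECT name, replace(hourly_rate, '$', '')::numeric(5,2)
FROM employee_staging;
```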

On Fri, Sep 13, 2019 at 2:22 PM stan  wrote:

>
> I am trying to import some data from spreadsheets. Included in the data
> sets are US monetary values. These appear in the CSV file, like this: $1.00
> The column is defined like this: NUMERIC(5,2) NOT NULL. When I try to
> import this data using the \copy functionality, i get the following error;
>
> stan@smokey:~/pm_db/live_data$ psql < import_employee.sql
> ERROR:  invalid input syntax for type numeric: "$1.00"
> CONTEXT:  COPY employee, line 2, column hourly_rate: "$1.00"
>
> How can I work around this?
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to determine what is preventing deleting a rule

2019-08-17 Thread Melvin Davidson
> How can I find out what this object is?
This query might help if the object is a table:

SELECT c.relname AS "table",
       a.rolname AS owner,
       c.relacl  AS permits
  FROM pg_class c
  JOIN pg_authid a ON ( a.oid = c.relowner )
 WHERE relname NOT LIKE 'pg_%'
   AND relname NOT LIKE 'information%'
   AND relname NOT LIKE 'sql_%'
   AND relkind = 'r'
   AND a.rolname = ''   -- fill in the role name in question
ORDER BY relname;

Since you have not stated the PostgreSQL version, I can only
assure you this will work for 9.6 and below.

On Sat, Aug 17, 2019 at 7:49 PM Tom Lane  wrote:

> stan  writes:
> > I have deleted,  what i thought were all the objects referenced by these
> roles,
> > but I still get an error saying there is one object they reference.
> > How can I find out what this object is?
>
> The error should tell you, if you're connected to the database where
> that object is.
>
> regression=# create database d1;
> CREATE DATABASE
> regression=# create database d2;
> CREATE DATABASE
> regression=# create user joe;
> CREATE ROLE
> regression=# \c d2 joe
> You are now connected to database "d2" as user "joe".
> d2=> create table tt(d1 int);
> CREATE TABLE
> d2=> \c d1 postgres
> You are now connected to database "d1" as user "postgres".
> d1=# drop user joe;
> ERROR:  role "joe" cannot be dropped because some objects depend on it
> DETAIL:  1 object in database d2
> d1=# \c d2
> You are now connected to database "d2" as user "postgres".
> d2=# drop user joe;
> ERROR:  role "joe" cannot be dropped because some objects depend on it
> DETAIL:  owner of table tt
>
> Unfortunately, we can't see into the catalogs of a different database
> to explain about dependencies there ...
>
> regards, tom lane
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Probably a newbie question

2019-08-10 Thread Melvin Davidson
>ERROR:  more than one row returned by a subquery used as an expression

Without knowledge as to the contents of your data, the best I can suggest is
to use *SELECT DISTINCT* in your subqueries.

On Sat, Aug 10, 2019 at 3:42 PM Rob Sargent  wrote:

>
>
> I'd look here:
>
> SELECT
> name
> FROM
> vendor
> WHERE
> bom_item.vendor_key =
> (
> SELECT
> vendor_key
> FROM
> mfg_vendor_relationship
> WHERE
> bom_item.mfg_key = mfg_key
> AND
> prefered = TRUE
> AND
> bom_item.project_key = project_key
>
> )
>
>
> --
> Angular momentum makes the world go ‘round.
>
>
> You might get away with adding
> group by vendor_key
> if it turns out you’re simply getting many copies of vendor key from that
> inner select.
> Run it alone to see.
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: FW: Undelivered Mail Returned to Sender

2019-08-10 Thread Melvin Davidson
>ERROR:  more than one row returned by a subquery used as an expression

Well, we really do not have any insight as to the contents of your data, but
have you thought about using *SELECT DISTINCT* in your subqueries?
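
To see which correlated subquery is returning multiple rows, it can help to run the inner query standalone for one key pair; the values here are taken from the sample row quoted below in the thread:

```sql
-- For the view's scalar subquery to be legal, this must return
-- at most one row per (mfg_key, project_key) pair
SELECT vendor_key
FROM mfg_vendor_relationship
WHERE mfg_key = 1
  AND project_key = 2
  AND prefered = TRUE;
```

If more than one row comes back, DISTINCT merely collapses duplicates; a stronger fix is a partial unique index such as `CREATE UNIQUE INDEX ON mfg_vendor_relationship (mfg_key, project_key) WHERE prefered;` so that only one preferred vendor can exist per manufacturer and project.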

On Sat, Aug 10, 2019 at 2:53 PM stan  wrote:

> I apologize for asking, what I suspect will turn out to be a newbie
> question, but I have managed to get myself quite confused on this.
>
> I am defining a view as follows
>
>
> CREATE OR REPLACE view purchase_view as
> select
> project.proj_no ,
> qty ,
> mfg_part.mfg_part_no ,
> mfg.name as m_name ,
> mfg_part.descrip as description ,
> (
> SELECT
> name
> FROM
> vendor
> WHERE
> bom_item.vendor_key =
> (
> SELECT
> vendor_key
> FROM
> mfg_vendor_relationship
> WHERE
> bom_item.mfg_key = mfg_key
> AND
> prefered = TRUE
> AND
> bom_item.project_key = project_key
>
> )
> )
> as v_name ,
> /*
> vendor.name as v_name ,
> */
> cost_per_unit ,
> costing_unit.unit,
> need_date ,
> order_date ,
> recieved_date ,
> po_no ,
> po_line_item
> from
> bom_item
> right join project on
> project.project_key = bom_item.project_key
> inner join mfg_part on
> mfg_part.mfg_part_key = bom_item.mfg_part_key
> inner join vendor on
> vendor.vendor_key = bom_item.vendor_key
> inner join costing_unit on
> costing_unit.costing_unit_key = bom_item.costing_unit_key
> inner join mfg on
> mfg.mfg_key = bom_item.mfg_key
> WHERE bom_item is NOT NULL
> ORDER BY
> project.proj_no ,
> mfg_part
> ;
>
> Most of the tables are pretty much simple key -> value relationships for
> normalization. I can add the create statements to this thread if it adds
> clarity.
>
> The exception is:
>
>
>
> CREATE TABLE mfg_vendor_relationship (
> mfg_vendor_relationship_key_serial integer DEFAULT
> nextval('mfg_vendor_relationship_key_serial')
> PRIMARY KEY ,
> mfg_key   integer NOT NULL,
> vendor_keyinteger NOT NULL,
> project_key   integer NOT NULL,
> prefered  boolean NOT NULL ,
> modtime   timestamptz DEFAULT current_timestamp ,
> FOREIGN KEY (mfg_key) references mfg(mfg_key) ,
> FOREIGN KEY (vendor_key) references vendor(vendor_key) ,
> FOREIGN KEY (project_key) references project(project_key) ,
> CONSTRAINT mfg_vendor_constraint
> UNIQUE (
> mfg_key ,
> vendor_key ,
> project_key
> )
> );
>
>
> I am down to having a single row in the mfg_vendor_relationship as follows:
>
>  mfg_vendor_relationship_key_serial | mfg_key | vendor_key | project_key | prefered |            modtime
> ------------------------------------+---------+------------+-------------+----------+-------------------------------
>                                 164 |       1 |          1 |           2 | t        | 2019-08-10 14:21:04.896619-04
>
> But trying to do a select * from this view returns:
>
> ERROR:  more than one row returned by a subquery used as an expression
>
> Can someone please enlighten me as to the error of my ways?
>
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
> - End forwarded message -
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Does pgadmin4 work with postgresql 8.4?

2019-08-05 Thread Melvin Davidson
 >What I really dont want is some strange
> feature that is unique to 8.4 not have support.

Actually, the reverse is more likely. Newer versions of PostgreSQL add
features on top of older ones, so it is highly unlikely that pgAdmin4
would be unable to support the features present in 8.4.

That being said, what is the specific reason you cannot upgrade to a newer
version of PostgreSQL?




On Mon, Aug 5, 2019 at 5:11 PM Adrian Klaver 
wrote:

> On 8/5/19 1:51 PM, Benedict Holland wrote:
> > Yea. I did look in the doc's. What I really dont want is some strange
> > feature that is unique to 8.4 not have support. Like, a select option
> > for example. I am fairly sure most of it will work but my "not
> > supported" does it mean that it wasnt tested but will probably work or
> > that it really isnt supported at all. Either is fine as I can use
> > pgadmin3 if I have to but I would prefer at least 1 piece of software to
> > have support rather than none.
>
> You might have more luck getting a definitive answer here:
>
> https://www.postgresql.org/list/pgadmin-support/
>
> >
> > Thanks,
> > ~Ben
> >
> > On Mon, Aug 5, 2019, 4:48 PM Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 8/5/19 1:15 PM, Benedict Holland wrote:
> >  > The subject says it all. I am using a very old database that I
> cant
> >  > upgrade. Do I have to manage it with pgadmin3 or can I use
> > postgresql 4?
> >
> > Trolling the docs does not offer any insight. If you have pgAdmin4
> > already installed I would just set up a connection to the 8.4
> instance
> > and see if it works:)
> >
> >  >
> >  > Thanks,
> >  > ~Ben
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Possible Values of Command Tag in PG Log file

2019-07-18 Thread Melvin Davidson
The values and explanations are listed here:
https://www.postgresql.org/docs/11/protocol-message-formats.html

Or simply INSERT, UPDATE, DELETE, SELECT, MOVE, FETCH and/or COPY
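
Once the csv log is loaded into a postgres_log table (as described in the runtime-config-logging page linked below in the thread), the tags actually present can be listed directly:

```sql
-- Count each distinct command tag seen in the loaded log
SELECT command_tag, count(*)
FROM postgres_log
WHERE command_tag IS NOT NULL
GROUP BY command_tag
ORDER BY count(*) DESC;
```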

On Thu, Jul 18, 2019 at 5:04 PM Kumar, Virendra 
wrote:

> Here is it:
> --
> https://www.postgresql.org/docs/10/protocol-message-formats.html
>
> CommandComplete (B), string value of this section.
>
> I loaded log file (csv format) into postgres_log table as per below
> document and can see entries like:
> https://www.postgresql.org/docs/10/runtime-config-logging.html
>
> postgres=# select command_tag from postgres_log where command_tag is not
> null limit 5;
>   command_tag
> ----------------
>  authentication
>  authentication
> (5 rows)
>
> Regads,
> Virendra
>
>
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
> Sent: Thursday, July 18, 2019 4:55 PM
> To: Kumar, Virendra; pgsql-general@lists.postgresql.org
> Subject: Re: Possible Values of Command Tag in PG Log file
>
> On 7/18/19 12:58 PM, Kumar, Virendra wrote:
> > Hello Team,
> >
> > Can somebody redirect me to document which lists all possible value of
> > command tag in instance log of PostgreSQL instance.
>
> Can you show example of a command tag in the log?
>
> >
> > Regards,
> >
> > Virendra
> >
> >
> > 
> >
> > This message is intended only for the use of the addressee and may
> contain
> > information that is PRIVILEGED AND CONFIDENTIAL.
> >
> > If you are not the intended recipient, you are hereby notified that any
> > dissemination of this communication is strictly prohibited. If you have
> > received this communication in error, please erase all copies of the
> message
> > and its attachments and notify the sender immediately. Thank you.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
> 
>
> This message is intended only for the use of the addressee and may contain
> information that is PRIVILEGED AND CONFIDENTIAL.
>
> If you are not the intended recipient, you are hereby notified that any
> dissemination of this communication is strictly prohibited. If you have
> received this communication in error, please erase all copies of the
> message
> and its attachments and notify the sender immediately. Thank you.
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Back Slash \ issue

2019-05-02 Thread Melvin Davidson
Have you tried adding

ESCAPE '\' to the PostgreSQL COPY command?

https://www.postgresql.org/docs/10/sql-copy.html
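
An alternative worth noting: in text-format COPY the backslash is the escape character, which is exactly why Vinod\G loses its backslash, whereas CSV format does not treat backslash specially. A minimal sketch, with a placeholder file path:

```sql
CREATE TABLE import_target (name text);

-- CSV format: backslashes in the data arrive intact
COPY import_target FROM '/path/to/db2dump.csv' WITH (FORMAT csv);
```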


On Thu, May 2, 2019 at 12:54 PM Rob Sargent  wrote:

>
> On 5/2/19 9:19 AM, Guntry Vinod wrote:
> > Hi Adrian\Kiran,
> >
> > Below is the issue.
> >
> > We are migrating from Db2 to Postgres.
> >
> > The Db2 dump consists of backslashes \ within the data [dump], but
> Postgres is not escaping the slash.
> >
> > For example, if a name consists of Vinod\G, after inserting the dump the
> value in Postgres is VinodG, whereas I need it as Vinod\G.
>
> Have you tried something like
>
>  sed 's/\\/\\\\/g' db2dump > db2dump.fixed
>
> >
> > Since the dump is huge, I can't replace \ within the data to escape the
> slash.
> >
> > We are using the import utility provided by Postgres.
> >
> > Can you please give me any solution in this regard?.
> >
> > @kiran-let me also try solution you provided in the below mail.
> >
> > Regards,
> > Vinod
> > ___
> > From: Adrian Klaver [adrian.kla...@aklaver.com]
> > Sent: 02 May 2019 20:04
> > To: ravikris...@mail.com; Guntry Vinod
> > Cc: pgsql-gene...@postgresql.org; Venkatamurali Krishna Gottuparthi;
> Biswa Ranjan Dash
> > Subject: Re: Back Slash \ issue
> >
> > On 5/2/19 7:30 AM, ravikris...@mail.com wrote:
> >>   > We have dump from DB2 and trying to upload it Postgre.
> >> DB2 export command has an option to export it as CSV which quotes data
> >> so that any embedded
> >> lines or special characters inside the data is treated fine.  Does the
> >> csv format has quotes enclosing it ?
> > Please show the command you use to import into Postgres.
> > Also the error message you get.
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com
> >
> 
> >
> > Disclaimer:  This message and the information contained herein is
> proprietary and confidential and subject to the Tech Mahindra policy
> statement, you may review the policy at
> http://www.techmahindra.com/Disclaimer.html <
> http://www.techmahindra.com/Disclaimer.html> externally
> http://tim.techmahindra.com/tim/disclaimer.html <
> http://tim.techmahindra.com/tim/disclaimer.html> internally within
> TechMahindra.
> >
> >
> 
> >
> >
> >
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Transactions

2019-04-09 Thread Melvin Davidson
In addition to what Achilleas has suggested, you should also verify user
claims by:
A. Having them show you the query they executed.
B. Verifying they have the appropriate SELECT, INSERT, UPDATE and/or DELETE
permissions on the tables involved.
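
Point B can be checked quickly from SQL. The user and table names below are made up for illustration:

```sql
-- Returns true/false for each privilege of interest
SELECT has_table_privilege('web_user', 'public.subscriber_settings', 'UPDATE'),
       has_table_privilege('web_user', 'public.subscriber_settings', 'DELETE');
```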


On Tue, Apr 9, 2019 at 6:10 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 9/4/19 12:26 μ.μ., Karl Martin Skoldebrand wrote:
>
> Hi,
>
>
>
> Is there a way to track "transactions" by default (i.e. without anyone
> having set up anything specific)? The problem I am facing is that users are
> claiming that settings are disappearing without them doing anything to
> affect them. It would be good to be able to see what PostgreSQL thinks
> is going on.
>
> * Subscriber adds Severity/BU/Service by ticking the corresponding box in
> subscriber configuration in WEBAPP. This works for some time.
>
> * Subscriber stops receiving selected [tickets].
>
> * Upon checking settings, the selected Severity/BU/Service has been
> unselected.
>
> If what you need is a business-level audit trail on specific relations
> (tables) — most probably what you need — then you may find pgaudit very
> useful. Its author, dwsteele, also wrote pgbackrest; as a tool it works
> wonders and the support is superb.
>
>
>
> /M.
>
>
>
>
> 
>
> Disclaimer:  This message and the information contained herein is
> proprietary and confidential and subject to the Tech Mahindra policy
> statement, you may review the policy at
> http://www.techmahindra.com/Disclaimer.html externally
> http://tim.techmahindra.com/tim/disclaimer.html internally within
> TechMahindra.
>
>
> ========
>
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: pg_upgrade --jobs

2019-04-07 Thread Melvin Davidson
> The original scheduled downtime for one installation was 24 hours. By 21
hours it had not completed the pg_dump schema-only, so it was returned to
operation.

To me, your best option is to create a Slony cluster with the version you
need to upgrade to. When Slony is in sync, simply make it the master and
switch to it. It may take a while for Slony replication to get in sync,
but once it is, there will be very little downtime for the switchover.

Slony <http://www.slony.info/>

On Sun, Apr 7, 2019 at 3:36 PM Adrian Klaver 
wrote:

> On 4/7/19 12:05 PM, senor wrote:
> > Thank you Adrian. I'm not sure if I can provide as much as you'd need
> for a definite answer but I'll give you what I have.
> >
> > The original scheduled downtime for one installation was 24 hours. By 21
> hours it had not completed the pg_dump schema-only so it was returned to
> operation.
>
> So this is more then one cluster?
>
> I am assuming the below was repeated at different sites?
>
> > The amount of data per table is widely varied. Some daily tables are
> 100-200GB and thousands of reports tables with stats are much smaller. I'm
> not connected to check now but I'd guess 1GB max. We chose to use the
> --link option partly because some servers do not have the disk space to
> copy. The time necessary to copy 1-2TB was also going to be an issue.
> > The vast majority of activity is on current day inserts and stats
> reports of that data. All previous days and existing reports are read only.
> > As is all too common, the DB usage grew with no redesign so it is a
> single database on a single machine with a single schema.
> > I get the impression there may be an option of getting the schema dump
> while in service but possibly not in this scenario. Plan B is to drop a lot
> of tables and deal with imports later.
>
> I take the above to mean that a lot of the tables are cruft, correct?
>
> >
> > I appreciate the help.
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Reg: Pg_Ctl command help

2019-04-03 Thread Melvin Davidson
When you open a command shell to execute pg_ctl, all commands, including
PostgreSQL, will be terminated once you close the command shell. As Rene
stated, the proper way to start and stop PostgreSQL on Windows is as a
service, via the Services tab of the Task Manager.
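
For completeness, pg_ctl itself can register the instance as a Windows service, after which it survives closing the console. The service name below is arbitrary, and the commands should be run from an elevated prompt:

```shell
pg_ctl register -N postgresql-9.6 -D "C:\Program Files\PostgreSQL\9.6\data"
net start postgresql-9.6
```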

On Wed, Apr 3, 2019 at 8:30 PM Rene Romero Benavides <
rene.romer...@gmail.com> wrote:

> How did you install it? It should be installed as a service.
>
> On Wed, Apr 3, 2019 at 4:24 AM Nadeem Akbar basha 
> wrote:
>
>> Hello,
>>
>>
>>
>> I have a query regarding starting the Postgres server using the ‘pg_ctl’
>> command in the command prompt (Windows OS).
>>
>>
>>
>> I try to start  the postgres server using the following command,
>>
>> pg_ctl -D "C:\Program Files\PostgreSQL\9.6\data" start -w
>>
>>
>>
>> After the server gets started, I'm closing the command prompt. But as
>> soon as I close the console, the server gets forcefully shut down. Again I
>> have to restart the server through the command prompt.
>>
>> Is there any way that, after starting the server, I can exit the command
>> prompt and still keep the server running in the background?
>>
>>
>>
>> Please help me to resolve this issue.
>>
>>
>>
>> *Thanks & Regards,*
>>
>> *A. Nadeem Ahmed*
>>
>>
>> ::DISCLAIMER::
>>
>> --
>> The contents of this e-mail and any attachment(s) are confidential and
>> intended for the named recipient(s) only. E-mail transmission is not
>> guaranteed to be secure or error-free as information could be intercepted,
>> corrupted, lost, destroyed, arrive late or incomplete, or may contain
>> viruses in transmission. The e mail and its contents (with or without
>> referred errors) shall therefore not attach any liability on the originator
>> or HCL or its affiliates. Views or opinions, if any, presented in this
>> email are solely those of the author and may not necessarily reflect the
>> views or opinions of HCL or its affiliates. Any form of reproduction,
>> dissemination, copying, disclosure, modification, distribution and / or
>> publication of this message without the prior written consent of authorized
>> representative of HCL is strictly prohibited. If you have received this
>> email in error please delete it and notify the sender immediately. Before
>> opening any email and/or attachments, please check them for viruses and
>> other defects.
>> ------
>>
>>
>
>
> --
> El genio es 1% inspiración y 99% transpiración.
> Thomas Alva Edison
> http://pglearn.blogspot.mx/
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Copy entire schema A to a different schema B

2019-02-21 Thread Melvin Davidson
Tiffany, have you tried the clone_schema function? It seems to me it does
exactly what you need, with no dumping or restoring. There is even an
option to copy the data or not; the default is not to.
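
For reference, the function is typically invoked like this. The three-argument signature is an assumption based on the version of clone_schema circulating on the PostgreSQL wiki, so check the copy you actually install:

```sql
-- source schema, destination schema, copy data? (true/false)
SELECT clone_schema('public', 'staging_copy', true);
```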

On Thu, Feb 21, 2019 at 3:23 PM Adrian Klaver 
wrote:

> On 2/21/19 11:52 AM, Tiffany Thang wrote:
> > Thanks everyone. Unfortunately the schema rename would not work since
> > the source database will be our production system. We have not gone live
> > yet but the system is expected to be constantly used.
> >
> > I have multiple tables that I need to export ranging from 20GB to 60GB
> > each. The parallel will not work for a single table but would be
> > beneficial if I have multiple tables to dump.
> >
> > I'm thinking maybe using what Adrian has suggested with the -f option
> > and then modify the file or maybe use a common public schema everywhere
> > on the source and target databases. I would have to restrict who has
> > access to the public schema.
>
> You can further break this down by using -s and -a switches to only work
> with the table definitions and table data respectively. This can also be
> done on the pg_dump end.
>
> >
> > Thanks.
> >
> > Tiff
> >
>
> >  >> --
> >  >> Adrian Klaver
> >  >> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Alter table column constraint

2018-12-17 Thread Melvin Davidson
See https://www.postgresql.org/docs/current/tutorial-fk.html
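
For the narrower question actually asked, revising a CHECK constraint is a drop-and-re-add, which can be done in one ALTER statement. The table name here is a guess; the constraint name and value list come from the DDL quoted below in the thread:

```sql
ALTER TABLE companies
    DROP CONSTRAINT invalid_industry,
    ADD CONSTRAINT invalid_industry
        CHECK (industry IN ('Agriculture', 'Business, other', 'Chemicals',
                            'Energy', 'Law', 'Manufacturing', 'Mining',
                            'Government', 'Ports/Marine Services',
                            'Transportation'));
```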

On Mon, Dec 17, 2018 at 3:32 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Dec 17, 2018 at 1:20 PM Rich Shepard 
> wrote:
> >
> > On Mon, 17 Dec 2018, Melvin Davidson wrote:
> >
> > > Yes, you must drop then add the revised constraint. However, from your
> > > statement above, it sounds to me as if you would be better off using a
> > > FOREIGN KEY CONSTRAINT. It makes things a lot simpler.
> >
> > Melvin,
> >
> >I don't follow. Here's the DDL for that column:
> >
> > industry varchar(24) NOT NULL
> >  CONSTRAINT invalid_industry
> >  CHECK (industry in ('Agriculture', 'Business, other', 'Chemicals',
> >  'Energy', 'Law', 'Manufacturing', 'Mining', 'Municipalities',
> >  'Ports/Marine Services', 'Transportation')),
> >
> > and I want to remove Municipalities for the more general Government.
>
> --not tested
>
> CREATE TABLE industry (
> industry_name text PRIMARY KEY
> );
>
> CREATE TABLE company (
> company_id serial PRIMARY KEY,
> industry_name text REFERENCES industry (industry_name)
> ON UPDATE CASCADE
> ON DELETE RESTRICT
> );
>
> UPDATE industries SET industry_name = 'Government' WHERE industry_name
> = 'Municipalities';
> -- All records in company have changed now too thanks to the ON UPDATE
> CASCADE
>
> To avoid the effective table rewrite use surrogate keys and turn the
> text into a simple label.  It should still have a UNIQUE index on it
> though as it is your real key.
>
> David J.
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Alter table column constraint

2018-12-17 Thread Melvin Davidson
So CREATE a table eg:
CREATE TABLE fks_for_tables
( fks_id serial,
  fks_values varchar(24),
  CONSTRAINT fks_pkey PRIMARY KEY (fks_id),
  CONSTRAINT fks_unique UNIQUE (fks_values)
);

Then
INSERT INTO fks_for_tables
(fks_values)
VALUES
('Agriculture'),
('Business, other'),
('Chemicals'),
...
...
('Transportation');

Then you can
ALTER TABLE your_table
ADD CONSTRAINT your_table_industry_fkey
FOREIGN KEY (industry)
REFERENCES fks_for_tables (fks_values);
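
To seed the lookup table from values already present instead of typing the
list by hand, something like this works (a hypothetical sketch using the
table and column names from the thread):

```sql
-- Hypothetical sketch: backfill the lookup table from the existing column,
-- then the FK can be added. ON CONFLICT relies on the UNIQUE constraint
-- on fks_values defined above.
INSERT INTO fks_for_tables (fks_values)
SELECT DISTINCT industry
  FROM your_table
ON CONFLICT (fks_values) DO NOTHING;
```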




On Mon, Dec 17, 2018 at 3:20 PM Rich Shepard 
wrote:

> On Mon, 17 Dec 2018, Melvin Davidson wrote:
>
> > Yes, you must drop then add the revised constraint. However, from your
> > statement above, it sounds to me as if you would be better off using A
> > FOREIGN kEY CONSTRAINT. It makes things a lot simpler.
>
> Melvin,
>
>I don't follow. Here's the DDL for that column:
>
> industry varchar(24) NOT NULL
>  CONSTRAINT invalid_industry
>  CHECK (industry in ('Agriculture', 'Business, other', 'Chemicals',
>  'Energy', 'Law', 'Manufacturing', 'Mining', 'Municipalities',
>  'Ports/Marine Services', 'Transportation')),
>
> and I want to remove Municipalities for the more general Government.
>
> Regards,
>
> Rich
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Alter table column constraint

2018-12-17 Thread Melvin Davidson
>I want to alter a term in a column's constraint to allow only specified
> strings as attributes
Yes, you must drop then add the revised constraint. However, from your
statement above, it sounds to me as if you would be better off using a
FOREIGN KEY constraint. It makes things a lot simpler.
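
For the drop-then-add approach, a minimal sketch would be (the table name
is hypothetical; the constraint name and value list come from the thread):

```sql
-- Hypothetical sketch: swap 'Municipalities' for 'Government' in one
-- transaction. The UPDATE must run before the new CHECK is added,
-- otherwise existing rows would violate it.
BEGIN;
ALTER TABLE your_table DROP CONSTRAINT invalid_industry;
UPDATE your_table SET industry = 'Government' WHERE industry = 'Municipalities';
ALTER TABLE your_table ADD CONSTRAINT invalid_industry
  CHECK (industry IN ('Agriculture', 'Business, other', 'Chemicals',
                      'Energy', 'Government', 'Law', 'Manufacturing',
                      'Mining', 'Ports/Marine Services', 'Transportation'));
COMMIT;
```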

On Mon, Dec 17, 2018 at 3:07 PM Joshua D. Drake 
wrote:

> On 12/17/18 12:01 PM, Rich Shepard wrote:
> > On Mon, 17 Dec 2018, Rich Shepard wrote:
> >
> >> I want to alter a term in a column's constraint to allow only specified
> >> strings as attributes and have not found how to do this in the docs
> >> (using
> >> version 10 docs now). There is an alter table command that allows
> >> renaming
> >> a constraint but I've not seen how to modify the constraint itself.
> >
> >   Is the procedure to drop the current check constraint then add the
> > revised
> > one?
>
> Or the other way around but yes.
>
> JD
>
>
> >
> > Rich
> >
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
> ***  A fault and talent of mine is to tell it exactly how it is.  ***
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Code of Conduct plan

2018-09-15 Thread Melvin Davidson
How about we just simplify the code of conduct to the following:
Any member of the various PostgreSQL lists is expected to maintain
respect for others and not use foul language. Any deviation from
the previous sentence shall be considered a violation of the CoC.


On Sat, Sep 15, 2018 at 11:51 AM Tom Lane  wrote:

> Bruce Momjian  writes:
> > There is a risk that if we adopt a CoC, and nothing happens, and the
> > committee does nothing, that they will feel like a failure, and get
> > involved when it was best they did nothing.  I think the CoC tries to
> > address that, but nothing is perfect.
>
> Yeah, a busybody CoC committee could do more harm than good.
> The way the CoC tries to address that is that the committee can't
> initiate action of its own accord: somebody has to bring it a complaint.
>
> Of course, a member of the committee could go out and find a "problem"
> and then file a complaint --- but then they'd have to recuse themselves
> from dealing with that complaint, so there's an incentive not to.
>
> regards, tom lane
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: extracting the sql for a function

2018-08-23 Thread Melvin Davidson
*>is there something equivalent for extracting the sql for a function from
the database? *

*Have a look at pg_extractor. It is essentially a pg_dump but with many
more options.*

*https://github.com/omniti-labs/pg_extractor*
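
The catalog function mentioned downthread can also be scripted from psql to
write one function's SQL to a file without installing anything (the function
name and signature here are hypothetical):

```sql
-- Run inside psql: \t suppresses headers/footers, \o redirects output
-- to a file, and pg_get_functiondef() returns the CREATE FUNCTION text.
\t on
\o my_func.sql
SELECT pg_get_functiondef('public.my_func(integer)'::regprocedure);
\o
\t off
```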

On Thu, Aug 23, 2018 at 2:53 PM Charles Clavadetscher <
clavadetsc...@swisspug.org> wrote:

> Hello
>
> > -Original Message-
> > From: Paul Tilles [mailto:paul.til...@noaa.gov]
> > Sent: Donnerstag, 23. August 2018 19:18
> > To: pgsql-gene...@postgresql.org
> > Subject: extracting the sql for a function
> >
> > Using postgres Version 9.5
> >
> > I can extract the sql for a table using
> >
> > pg_dump  -d database_name  -s  -t  table_name  -f  table_name.sql
> >
> > Is there something equivalent for extracting the sql for a function from
> the database?
> >
> >
> > Paul Tilles
> >
>
> I don't know if there is but you can get the function definition:
>
> SELECT * FROM pg_get_functiondef('functionname'::regproc);
>
> or
>
> SELECT * FROM pg_get_functiondef('functionname(list of parameters data
> types)'::regprocedure);
>
> If you want to export you may use the -t and -o switches.
>
> Regards
> Charles
>
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Restore relhaspkey in PostgreSQL Version 11 Beta

2018-07-31 Thread Melvin Davidson
> [2] https://www.postgresql.org/message-id/flat/20140317185255.20724.49675%
> 40wrigleys.postgresql.org
>
> --
>  David Rowley   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>

> https://www.postgresql.org/message-id/flat/20140317185255.20724.49675%
40wrigleys.postgresql.org

*Wow, so BUG #9606 has been known since 2014-03-17, but no one has bothered
to fix it? As I've said before, instead of fixing the problem, the
resolution seems to be to "make it go away". I've seen that logic echoed in
other situations outside of the IT environment.*

*The fact remains, my code works (or worked) because I was only interested
in finding tables that were newly created without a primary key. While I
acknowledge that bug, the situation where the primary key is dropped is
extremely rare and would only happen in the rare case where the primary key
needed to be altered or replaced by another key, in which case the problem
is moot. Since PostgreSQL is a relational database, I cannot think of a
situation where a DBA would allow the primary key to just be
dropped/removed altogether.*

*I was hoping that at least one other person would see my point of view,
but by the harsh replies I've been getting, I feel more like a whistle
blower that insiders think should also be made to "go away". Well, you are
right. This old Viet Vet shall now end this conversation and his career. I
just need a way to do so quietly and painlessly. The truth is absolute and
cannot be changed. Perception is not the truth. Flerp!*

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Restore relhaspkey in PostgreSQL Version 11 Beta

2018-07-30 Thread Melvin Davidson
On Mon, Jul 30, 2018 at 6:21 PM, Andres Freund  wrote:

> Hi,
>
> On 2018-07-30 17:21:25 -0400, Melvin Davidson wrote:
> > * >it has never been the case that relhaspkey meant that the table
> > *currently* has a primary key. *
>
> > *Hmmm, I guess it's a lot harder to fix "squishy semantics"from
> "True
> > if the table has (or once had) a primary key"  to"True if the table
> has
> > a primary key after vacuum"rather than just dropping a column that has
> > existed from version 7.2.So now I guess the policy is break code
> instead of
> > fix documention.That meakes sense...NOT!*
>
> A large portion of the system catalogs (i.e. objects within
> pg_catalog.*) are essentially internal implementation details and we'll
> change them if it makes our live easier. If you want stability use
> information_schema which we'll try very hard to not ever break.  Keeping
> random atavistic things around, would slow us down, which will be a
> price everybody is paying.
>
> Greetings,
>
> Andres Freund
>


*> If you want stability use information_schema which we'll try very hard
to not ever break.  *

*Of course. Would you be so kind as to point out where in the
information_schema it indicates if a table has a primary key or not.
Oh wait, now I remember... no place.*



*> Keeping random atavistic things around, would slow us down, which will
be a price everybody is paying.*

*Random atavistic things? I hardly think relhaspkey is random. It's been
there since version 7.2.*

*Exactly how does keeping it around slow you/us down?*


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Restore relhaspkey in PostgreSQL Version 11 Beta

2018-07-30 Thread Melvin Davidson
* >it has never been the case that relhaspkey meant that the table
*currently* has a primary key. *

*Hmmm, I guess it's a lot harder to fix "squishy semantics" from "True
if the table has (or once had) a primary key" to "True if the table has
a primary key after vacuum" rather than just dropping a column that has
existed since version 7.2. So now I guess the policy is break code instead
of fix documentation. That makes sense... NOT!*


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Restore relhaspkey in PostgreSQL Version 11 Beta

2018-07-30 Thread Melvin Davidson
On Mon, Jul 30, 2018 at 10:31 AM, Tom Lane  wrote:

> Melvin Davidson  writes:
> > In the release notes for Version 11 Beta, under changes, I see these
> scary
> > remarks:
> > Remove relhaspkey column from system table pg_class (Peter Eisentraut)
> > Applications needing to check for a primary key should consult pg_index.
> >
> > That absolutely breaks my code (and I'm guessing others), as I have a
> cron
> > job that checks for tables that were created with no pkey.
>
> Well, I'd say your code was broken anyway, because it has never been the
> case that relhaspkey meant that the table *currently* has a primary key.
> We got rid of it on the grounds that its semantics were too squishy to
> be useful.
>
> What you want is something like
>
> select relname from pg_class c where relkind = 'r' and
>   not exists (select 1 from pg_index where indrelid = c.oid and
> indisprimary);
>
> which will give the right answer in all PG versions.
>
> regards, tom lane
>

Tom,


* >it has never been the case that relhaspkey meant that the table
*currently* has a primary key. *

*That is a poor excuse, because that is exactly what I am looking for!*

* squishy semantics or not, dropping columns from system catalogs is
ridiculous. *

*It appears to me that the developers are going rogue. Why should I, and
others, have to change my code (which absolutely works), simply because
the developers feel it's OK to drop columns from system catalogs based
on semantics?*




-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Restore relhaspkey in PostgreSQL Version 11 Beta

2018-07-30 Thread Melvin Davidson
In the release notes for Version 11 Beta, under changes, I see these scary
remarks:

Remove relhaspkey column from system table pg_class (Peter Eisentraut)

Applications needing to check for a primary key should consult pg_index.

That absolutely breaks my code (and I'm guessing others), as I have a cron
job that checks for tables that were created with no pkey.
IE: SELECT n.nspname,
   c.relname as table,
   c.reltuples::bigint
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid =c.relnamespace )
 WHERE relkind = 'r' AND
   relname NOT LIKE 'pg_%' AND
   relname NOT LIKE 'sql_%' AND
   relhaspkey = FALSE
ORDER BY n.nspname, c.relname;

relhaspkey has been in pg_class since the earliest version of PostgreSQL.
AFAIK
there is NO NEED to remove it! In fact, the system catalogs should only be
changed when there is an absolute requirement, not at someone's whim.
Adding a column is fine, but dropping columns that breaks code is
ridiculous.

Please restore that column before the final release!

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: User documentation vs Official Docs

2018-07-20 Thread Melvin Davidson
On Fri, Jul 20, 2018 at 6:59 PM, Alvaro Herrera 
wrote:

> On 2018-Jul-20, Adrian Klaver wrote:
>
> > On 07/20/2018 11:45 AM, Joshua D. Drake wrote:
>
> > > Back to the original idea, it would be great if those participating
> > > would be willing to help even a little in determining an actual
> > > direction to take this.
> >
> > I would say that discussion should take place in --docs:
> >
> > https://www.postgresql.org/list/pgsql-docs/
>
> I don't see why we need this thread to continue.  This sounds like
> somebody looking for a solution when they don't yet know what the
> problem is.
>
> If people want to contribute, there are already some places where they
> can do so.  Articles can be drafted in the wiki initially or, heck, even
> sites like StackOverflow[1], and if something gets to a level so great
> that they think it should be enshrined in DocBook, they can turn it into
> a documentation patch.
>
> [1] for extra points, write in SO and then add a link to the question to
> FAQ in the wiki.
>
> --
> Álvaro Herrerahttps://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> This sounds like somebody looking for a solution when they don't yet know
what the problem is.
+1

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: User documentation vs Official Docs

2018-07-19 Thread Melvin Davidson
On Thu, Jul 19, 2018 at 8:54 PM, Adrian Klaver 
wrote:

> On 07/19/2018 05:43 PM, Melvin Davidson wrote:
>
>>
>>
>>
>
>>
>>  > Then again people might use shared, university or library computers
>> Would you please be so kind as to inform us which university or library
>> allows users to install software on a _shared_ computer.
>>
>
> Pretty sure Ken was referring to looking up documentation, not running
> Postgres.
>
>
>> BTW, since you mention library, that is an excellent way to have the
>> books ordered and shared.>FOR FREE<.  AFAIK, all that is required is for
>> someone to request the library purchase the book, to be used for shared
>> learning.
>>
>>
>> --
>> *Melvin Davidson**
>> Maj. Database & Exploration Specialist**
>> Universe Exploration Command – UXC***
>> Employment by invitation only!
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>

> Pretty sure Ken was referring to looking up documentation, not running
Postgres.
That does not correlate. To have the need to look up documentation implies
that the user has a computer running PostgreSQL.
As universities DO NOT ALLOW software to be installed on shared computers,
and this is the case especially in a library, it implies
the user has their own computer. As libraries allow users/citizens to
request books be purchased >at no cost to the user/citizen, the
argument that someone cannot afford a book is now a moot point.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: User documentation vs Official Docs

2018-07-19 Thread Melvin Davidson
On Thu, Jul 19, 2018 at 8:09 PM, Ken Tanzer  wrote:

> On Thu, Jul 19, 2018 at 11:35 AM Melvin Davidson 
> wrote:
>
>> >> Politely tell them to buy some of the many well written books that are
>> available on these very topics...
>> >Fair enough but what about those that cant afford it? I think us in the
>> Western World tend to forget that by >far the majority of users cant afford
>> a latte from Starbucks let alone a 60.00 USD dead tree.
>>
>> Seriously? So someone can afford a $500 computer ( in addition to
>> accessories ), but they wouldn't be able to afford a $60 book (which BTW,
>> there are several for lesser amount) ?
>>
>>
> Seriously?  With global median household income at less than $200 / week
> (and ranging by country from $2 to $400)[1], you can't understand that $60
> could be a barrier for lots of people? And computers can be had for far
> less than $500, even before you get to used or refurbished.  Then again
> people might use shared, university or library computers.  You've nicely
> illustrated Josh's point which you yourself quoted:  "I think us in the
> Western World tend to forget that by far the majority of users cant afford
> a latte from Starbucks let alone a 60.00 USD dead tree."
>
> My comment, BTW, applies to all versions of Postgresql, whether current,
> unsupported or yet-to-be-developed.
>
> Cheers,
> Ken
>
>
> [1] https://news.gallup.com/poll/166211/worldwide-median-
> household-income-000.aspx
>
> --
> AGENCY Software
> A Free Software data system
> By and for non-profits
> *http://agency-software.org/ <http://agency-software.org/>*
> *https://demo.agency-software.org/client
> <https://demo.agency-software.org/client>*
> ken.tan...@agency-software.org
> (253) 245-3801
>
> Subscribe to the mailing list
>  to
> learn more about AGENCY or
> follow the discussion.
>

> Then again people might use shared, university or library computers
Would you please be so kind as to inform us which university or library
allows users to install software on a _shared_ computer.

BTW, since you mention library, that is an excellent way to have the books
ordered and shared.>FOR FREE<.  AFAIK, all that is required is for
someone to request the library purchase the book, to be used for shared
learning.


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: User documentation vs Official Docs

2018-07-19 Thread Melvin Davidson
>> Politely tell them to buy some of the many well written books that are
available on these very topics...
>Fair enough but what about those that cant afford it? I think us in the
Western World tend to forget that by >far the majority of users cant afford
a latte from Starbucks let alone a 60.00 USD dead tree.

Seriously? So someone can afford a $500 computer ( in addition to
accessories ), but they wouldn't be able to afford a $60 book (which BTW,
there are several for lesser amount) ?

I have been monitoring this thread for many days now. It originally started
as a request for users to write a separate user tutorial. But a tutorial
for what? That was never stated.
What exactly is missing from the official documentation? Isn't it the
purpose of pgsql-general to answer user questions? Why is there such a
great need to create yet a third form to guide
PostgreSQL users, which would only end up being included in a google search
anyway, with no guarantee of priority?


Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Melvin Davidson
> We recently did a test on COPY and found that on large tables (47 million
rows , 20GB of raw data) the
> difference in COPY with 16 indexes...

*I am very suspicious of why you need 16 indexes. Are you sure all those
indexes are actually being utilized?*

*Try executing the attached query, You may find find some are really not
needed.*

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
SELECT n.nspname as schema,
   i.relname as table,
   i.indexrelname as index,
   i.idx_scan,
   i.idx_tup_read,
   i.idx_tup_fetch,
   pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' || 
quote_ident(i.relname))) AS table_size, 
   pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' || 
quote_ident(i.indexrelname))) AS index_size,
   pg_get_indexdef(idx.indexrelid) as idx_definition
  FROM pg_stat_all_indexes i
  JOIN pg_class c ON (c.oid = i.relid)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
 WHERE i.idx_scan = 0
   AND n.nspname <> 'pg_catalog'
   AND NOT idx.indisprimary
   AND NOT idx.indisunique
 ORDER BY 1, 2, 3;
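
If the query above flags an index as unused across a representative
workload, it can be dropped without blocking writes (the index name here is
hypothetical):

```sql
-- DROP INDEX CONCURRENTLY avoids taking a blocking lock on the table,
-- but cannot run inside a transaction block.
DROP INDEX CONCURRENTLY IF EXISTS public.idx_orders_unused;
```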




Re: Cloning schemas

2018-07-09 Thread Melvin Davidson
Dias,

As an experiment, I commented out the code that creates the comment on
indexes and it still works flawlessly, so that part is redundant.
I have attached the modified function below.
Please retry and see if the problem still exists.
If it does, then please do a schema-only pg_dump of the source schema and
send me the call to clone_schema that you are using.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_connametext;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_deftext;
  v_stat   integer;
  seqval   bigint;
  sq_last_valuebigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_cycledchar(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
   AND objsubid = 0;
  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;

SET search_path TO dest_schema, source_schema, 'public', pg_catalog;  
SELECT current_schemas(TRUE) INTO v_path;
RAISE NOTICE 'search path = %', v_path;

  -- Create sequences
  -- TODO: Find a way to make each sequence's owner the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs 
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || 
', ' || sq_is_called || ');' ; 
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value 
|| ', ' || sq_is_called || ');' ;
END IF;

  -- add sequence comments
SELECT oid INTO tbl_oid
  FROM pg_class 
 WHERE relkind = 'S'
   AND relnamespace = src_oid
   AND relname = quote_ident(object);

SELECT description INTO v_def
  FROM pg_description
 WHERE objoid = tbl_oid
   AND objsubid = 0;

IF FOUND
  THEN  
EXECUTE 'COMMENT ON SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_iden

Re: Cloning schemas

2018-07-09 Thread Melvin Davidson
Adrian,
The code that CREATES the TABLE is

EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' ||
quote_ident(source_schema) || '.' || quote_ident(object)
|| ' INCLUDING ALL)';

The schema names are supposed to be changed!

This function HAS been tested and does WORK. Please do not muddle the
problem without testing yourself.


On Mon, Jul 9, 2018 at 2:56 PM, Adrian Klaver 
wrote:

> On 07/09/2018 09:49 AM, Melvin Davidson wrote:
>
>
>>
>> On Mon, Jul 9, 2018 at 12:32 PM, DiasCosta > <mailto:diasco...@diascosta.org>> wrote:
>>
>> Hi Melvin,
>>
>> Trying run 9.6 clone_schema on a different schema and I get the
>> following error:
>>
>> NOTICE:  search path = {public,pg_catalog}
>> CONTEXT:  PL/pgSQL function clone_schema(text,text,boolean) line 79
>> at RAISE
>> ERROR:  relation "bilhetica_logic_schema.idx_unq_cod_operador_aml"
>> does not exist
>> CONTEXT:  SQL statement "COMMENT ON INDEX
>> bilhetica_logic_schema.idx_unq_cod_operador_aml IS 'garante
>> unicidade do Cod_Operador_AML';"
>> PL/pgSQL function clone_schema(text,text,boolean) line 401 at EXECUTE
>> ** Error **
>>
>> ERROR: relation "bilhetica_logic_schema.idx_unq_cod_operador_aml"
>> does not exist
>> SQL state: 42P01
>> Context: SQL statement "COMMENT ON INDEX
>> bilhetica_logic_schema.idx_unq_cod_operador_aml IS 'garante
>> unicidade do Cod_Operador_AML';"
>> PL/pgSQL function clone_schema(text,text,boolean) line 401 at EXECUTE
>>
>>
>> Can you help me, please?
>> Thanks in advance
>> Dias Costa
>>
>>
>> Dias
>>  > NOTICE:  search path = {public,pg_catalog}
>>  >ERROR: relation "bilhetica_logic_schema.idx_unq_cod_operador_aml" does
>> not exist
>>
>> This is not related to the clone_schema function. It looks like you may
>> have corruption in your syste catalogs,
>> Try reindexing your system_catalogs.
>>
>
> Or from clone_schema.sql:
>
> EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' ||
> quote_ident(source_schema) || '.' || quote_ident(object)
> || ' INCLUDING ALL)';
>
> https://www.postgresql.org/docs/10/static/sql-createtable.html
>
> "LIKE source_table [ like_option ... ]
>
> ...
>
> Indexes, PRIMARY KEY, UNIQUE, and EXCLUDE constraints on the original
> table will be created on the new table only if INCLUDING INDEXES is
> specified. <*/Names for the new indexes and constraints are chosen
> according to the default rules, regardless of how the originals were named.
> (This behavior avoids possible duplicate-name failures for the new
> indexes.)/*>
>
> ...
> INCLUDING ALL is an abbreviated form of INCLUDING COMMENTS INCLUDING
> CONSTRAINTS INCLUDING DEFAULTS INCLUDING IDENTITY INCLUDING INDEXES
> INCLUDING STATISTICS INCLUDING STORAGE.
>
> ..."
>
> See tagged part(<*/ /*>) part above. I could see where the indexes in the
> new schema have new names while the index comments in the old schema refer
> to the old name. Then you would get the error the OP showed.
>
>
>> REINDEX VERBOSE SYSTEM  ;
>>
>>
>>
>> --
>> *Melvin Davidson**
>> Maj. Database & Exploration Specialist**
>> Universe Exploration Command – UXC***
>> Employment by invitation only!
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Cloning schemas

2018-07-09 Thread Melvin Davidson
On Mon, Jul 9, 2018 at 12:32 PM, DiasCosta  wrote:

> Hi Melvin,
>
> Trying run 9.6 clone_schema on a different schema and I get the following
> error:
>
> NOTICE:  search path = {public,pg_catalog}
> CONTEXT:  PL/pgSQL function clone_schema(text,text,boolean) line 79 at
> RAISE
> ERROR:  relation "bilhetica_logic_schema.idx_unq_cod_operador_aml" does
> not exist
> CONTEXT:  SQL statement "COMMENT ON INDEX 
> bilhetica_logic_schema.idx_unq_cod_operador_aml
> IS 'garante unicidade do Cod_Operador_AML';"
> PL/pgSQL function clone_schema(text,text,boolean) line 401 at EXECUTE
> ** Error **
>
> ERROR: relation "bilhetica_logic_schema.idx_unq_cod_operador_aml" does
> not exist
> SQL state: 42P01
> Context: SQL statement "COMMENT ON INDEX 
> bilhetica_logic_schema.idx_unq_cod_operador_aml
> IS 'garante unicidade do Cod_Operador_AML';"
> PL/pgSQL function clone_schema(text,text,boolean) line 401 at EXECUTE
>
>
> Can you help me, please?
> Thanks in advance
> Dias Costa
>
>
Dias
> NOTICE:  search path = {public,pg_catalog}
>ERROR: relation "bilhetica_logic_schema.idx_unq_cod_operador_aml" does not
exist

This is not related to the clone_schema function. It looks like you may
have corruption in your system catalogs.
Try reindexing your system catalogs:

REINDEX (VERBOSE) SYSTEM your_database;



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Cloning schemas

2018-07-09 Thread Melvin Davidson
On Mon, Jul 9, 2018 at 5:14 AM, Łukasz Jarych  wrote:

> Hi Melvin,
>
> i am trying to run postgresql 10 cloning schema function but still i am
> getting error...
>
> [image: image.png]
>
> Error: Error in syntax near "SYSTEM"
> Context: Function PL/pgSQL, row 212 in EXECUTE
>
> What is happening?
>
> Best,
> Jacek
>
> > Error: Error in syntax near "SYSTEM"

Jacek,
I have changed the code from OVERRIDING SYSTEM VALUE to OVERRIDING USER
VALUE
and attached the revised version below.

If that does not fix the problem, then I will need you to do a
pg_dump -F p -t public.t_cpuinfo your_database

and send the output to me.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!

-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2018-07-03 by Melvin Davidson
--  Added ' OVERRIDING USER VALUE' for INSERT records
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  seq_oid  oid;
  func_oid oid;
  owner_id oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_connametext;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_deftext;
  v_stat   integer;
  seqval   bigint;
  seq_name name;
  sq_type  oid;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
--  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_typname   name;
  seq_cycled   text;
  seq_owner    name;
--  sq_cycled    char(10);
  
BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
   
  IF NOT FOUND
THEN 
  RAISE NOTICE 'source schema % does not exist!', source_schema;
  RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
   
  IF FOUND
THEN 
  RAISE NOTICE 'dest schema % already exists!', dest_schema;
  RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;
   
  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
 AND objsubid = 0;

  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;

  SET search_path TO dest_schema, source_schema, 'public', pg_catalog;  
  SELECT current_schemas(TRUE) INTO v_path;
  RAISE NOTICE 'search path = %', v_path;

  -- Create sequences
  -- TODO: Find a way to make this sequence's owner the correct table.
  FOR object IN
 SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)

  LOOP

SELECT oid, relowner into seq_oid, owner_id
  FROM pg_class 
 WHERE relname = quote_ident(object)
   AND relkind = 'S';

SELECT seqtypid,
   seqstart,
   seqincrement,
   seqmax,
   seqmin,
   seqcache,
   seqcycle
  INTO sq_type, 
   sq_start_value,
   sq_increment_by,
   sq_max_value,
   sq_min_value,
   sq_cache_value,
   sq_is_cycled
  FROM pg_sequence
 WHERE seqrelid = seq_oid;
 
SELECT typname INTO sq_typname
  FROM pg_type 
 WHERE oid = sq_type;

SELECT rolname INTO seq_owner
  FROM pg_authid WHERE oid = owner_id;

--EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || quote_ident(object);
--srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

IF sq_is_cycled 
  THEN 
seq_cycled = ' CYCLE'; 
  ELSE 
seq_cycled = ' NO CYCLE';
END IF;

EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object)
 || ' AS ' || sq_typname
 || ' INCREMENT ' ||  sq_increment_by
 || ' MINVALUE ' || sq

Re: Cloning schemas

2018-07-07 Thread Melvin Davidson
2018-07-07 4:32 GMT-04:00 DiasCosta :

> Hi Melvin,
>
> Thank you.
>
> Dias Costa
>
> On 04-07-2018 23:38, Melvin Davidson wrote:
>
>
>
> On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta  wrote:
>
>> Hi Melvin,
>>
>> I'm new to clone_schema.
>> Can I use it on PostgreSQL 9.6?
>>
>> TIA
>> DCosta
>>
>
> > Can I use it on PostgreSQL 9.6?
> Yes, but because the developer(s) once again monkeyed with the system
> catalogs, there are now
> two versions. One for 10 and one for 9.6 and below. I've attached the 9.6
> version for you.
>
>
>
>
> --
> J. M. Dias Costa
> Telef. 214026948 Telem. 939307421
>
> If you forward this message to third parties, please:
> 1. Delete my e-mail address and my name.
> 2. Also delete your friends' addresses before distributing.
> 3. Address YOUR recipients as blind carbon copy (Cc or Bcc).
> Acting this way will hinder the spread of "viruses", "spam" and
> "banners" and help preserve everyone's privacy. Thank you.
>
> Note: The occurrence in my writing of words that do not respect the
> ill-fated orthographic agreement is no accident.
>
>
You are welcome Dias!
Good to have positive feedback.


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Role problem in Windows

2018-07-06 Thread Melvin Davidson
On Fri, Jul 6, 2018 at 10:01 AM, Moreno Andreo 
wrote:

> Hi,
> Running 9.1 on Windows 10, upgrading to 10 with pg_upgrade.
>
> "Once upon a time" there was a bug in our automatic role creation
> procedure that did not mask vowels with accent (used in Italian language),
> like "ò, è" and the result was a role with an empty name.
> We are now upgrading to 10, and pg_dumpall exits complaining with this
> role, showing its name (with mis-encoded UTF-8 accented vowel) as an
> invalid utf-8 character.
>
> Trying to get rid of the role, that can't be deleted with a drop role
> because of the empty name, I did
> delete from pg_authid where oid = 
>
> Role disappeared from role list.
>
> At the next execution of the pg_upgrade it complains that role "" does
> not exist while dumping a trigger function. I tried remove the privilege
> from function ACL, but "role n does not exists".
>
> Is there a way to recreate the deleted role, either as a dummy, so I can
> finish upgrade?
> Is there another way to bypass the problem?
>
> Any help would be appreciated.
>
> Cheers,
> Moreno.-
>
>
>
>Is there a way to recreate the deleted role, either as a dummy, so I can
finish upgrade?
I can't really suggest how to recreate the dummy role, but I do have an
alternate solution.
Most probably pg_dump is complaining that role 'xxx' owns some tables. So
you can use the
attached script and add AND a.rolname = 'xxx' to the WHERE clause.
Then as a superuser you can use ALTER TABLE xyz OWNER TO new_owner for each
table found.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
SELECT c.relname as table, 
   a.rolname as owner,
   c.relacl as permits
  FROM pg_class c
  JOIN pg_authid a ON ( a.OID = c.relowner )
WHERE relname NOT LIKE 'pg_%' 
  AND relname NOT LIKE 'information%' 
  AND relname NOT LIKE 'sql_%' 
  AND relkind = 'r'
  --AND position('cp' in ARRAY_TO_STRING(c.relacl, '') )> 0
ORDER BY relname;



Re: How to watch for schema changes

2018-07-05 Thread Melvin Davidson
>As I said I'm writing the client application with libpq/ODBC. How will I
get
>the results?

Igor,
You DO NOT need libpq/ODBC. Just use the count option of grep (-c).
If it is greater than zero, then send a mail to yourself and/or attach the
log for review.
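The approach above can be sketched as a small shell script, suitable for a cron job. This is only a sketch: the log path, match pattern, and notification step are illustrative placeholders (not from this thread), and it assumes log_statement = 'ddl' is set in postgresql.conf so DDL actually reaches the server log. The script below fabricates a small log fragment so the sketch is self-contained:

```shell
#!/bin/sh
# Simulate a log fragment so the sketch is self-contained; in practice
# point LOG at the real server log produced with log_statement = 'ddl'.
LOG=./pg_ddl_sample.log            # placeholder path
cat > "$LOG" <<'EOF'
LOG:  statement: CREATE TABLE t1 (id int);
LOG:  statement: SELECT 1;
LOG:  statement: ALTER TABLE t1 ADD COLUMN note text;
EOF

# grep -c prints the number of matching lines
COUNT=$(grep -c -E 'CREATE TABLE|ALTER TABLE' "$LOG")
echo "DDL statements found: $COUNT"

if [ "$COUNT" -gt 0 ]; then
    # here one would mail the log to oneself, e.g. with mail(1) or sendmail
    echo "would notify: $COUNT schema change(s) logged"
fi
```

In a real deployment the here-document goes away and only the grep/notify part runs from cron against the live log (or the slice of it since the last run).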

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Cloning schemas

2018-07-05 Thread Melvin Davidson
On Thu, Jul 5, 2018 at 10:38 AM, Łukasz Jarych  wrote:

> You gave me working example.
>
> the function from here is not working:
>
> https://www.postgresql.org/message-id/CANu8FiyJtt-0q%
> 3DbkUxyra66tHi6FFzgU8TqVR2aahseCBDDntA%40mail.gmail.com
>
> Best,
> Jacek
>
> Thu, 5 Jul 2018 at 16:02 Melvin Davidson 
> wrote:
>
>>
>>
>> On Thu, Jul 5, 2018 at 1:49 AM, Łukasz Jarych  wrote:
>>
>>> From link function is not working.
>>>
>>
There is no "From link" in PostgreSQL.
Would you please be more specific and provide a working example?
>>
>>
>the function from here is not working:

>https://www.postgresql.org/message-id/CANu8FiyJtt-0q%
3DbkUxyra66tHi6FFzgU8TqVR2aahseCBDDntA%40mail.gmail.com
<https://www.postgresql.org/message-id/CANu8FiyJtt-0q%3DbkUxyra66tHi6FFzgU8TqVR2aahseCBDDntA%40mail.gmail.com>


Jacek,

That is NOT the version for PostgreSQL 10. It is for 9.6 and below only!
I attached the working function for 10 and it is attached again to this
response.


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!

-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2018-07-03 by Melvin Davidson
--  Added ' OVERRIDING SYSTEM VALUE' for INSERT records
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  seq_oid  oid;
  func_oid oid;
  owner_id oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname    text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_deftext;
  v_stat   integer;
  seqval   bigint;
  seq_name name;
  sq_type  oid;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
--  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_typname   name;
  seq_cycled   text;
  seq_owner    name;
--  sq_cycled    char(10);
  
BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
   
  IF NOT FOUND
THEN 
  RAISE NOTICE 'source schema % does not exist!', source_schema;
  RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
   
  IF FOUND
THEN 
  RAISE NOTICE 'dest schema % already exists!', dest_schema;
  RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;
   
  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
 AND objsubid = 0;

  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;

  SET search_path TO dest_schema, source_schema, 'public', pg_catalog;  
  SELECT current_schemas(TRUE) INTO v_path;
  RAISE NOTICE 'search path = %', v_path;

  -- Create sequences
  -- TODO: Find a way to make this sequence's owner the correct table.
  FOR object IN
 SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)

  LOOP

SELECT oid, relowner into seq_oid, owner_id
  FROM pg_class 
 WHERE relname = quote_ident(object)
   AND relkind = 'S';

SELECT seqtypid,
   seqstart,
   seqincrement,
   seqmax,
   seqmin,
   seqcache,
   seqcycle
  INTO sq_type, 
   sq_start_value,
   sq_increment_by,
   sq_max_value,
   sq_min_value,
   sq_cache_value,
   sq_is_cycled
  FROM pg_sequence
 WHERE seqrelid = seq_oid;
 
SELECT typname INTO sq_typname
  FROM pg_type 
 WHERE oid = sq_type;

SELECT rolname INTO seq_owner
  FROM pg_authid WHERE oid = owner_id;

--EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(ob

Re: Cloning schemas

2018-07-05 Thread Melvin Davidson
On Thu, Jul 5, 2018 at 1:49 AM, Łukasz Jarych  wrote:

> From link function is not working.
>

There is no "From link" in PostgreSQL.
Would you please be more specific and provide a working example?


Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
>The folks that wanted transactional ALTER SEQUENCE might disagree:):
Ah, so you mean the previous version was not working or sufficient?

https://www.postgresql.org/docs/9.6/static/sql-altersequence.html

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
On Wed, Jul 4, 2018 at 6:48 PM, Adrian Klaver 
wrote:

> On 07/04/2018 03:38 PM, Melvin Davidson wrote:
>
>>
>>
>> On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta > <mailto:diasco...@diascosta.org>> wrote:
>>
>> Hi Melvin,
>>
>> I'm new to clone_schema.
>> Can I use it on PostgreSQL 9.6?
>>
>> TIA
>> DCosta
>>
>>
>>  > Can I use it on PostgreSQL 9.6?
>>
>> Yes, but because the developer(s) once again monkeyed with the system
>> catalogs, there are now
>>
>
> Well that is one of the things that distinguish a major release so it
> should be no surprise.
>
>
>
The problem is that, AFAICS, none of the changes introduced were really
necessary or improved performance.


*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta  wrote:

> Hi Melvin,
>
> I'm new to clone_schema.
> Can I use it on PostgreSQL 9.6?
>
> TIA
> DCosta
>

> Can I use it on PostgreSQL 9.6?
Yes, but because the developer(s) once again monkeyed with the system
catalogs, there are now
two versions. One for 10 and one for 9.6 and below. I've attached the 9.6
version for you.
-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname    text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_deftext;
  v_stat   integer;
  seqval   bigint;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_cycled    char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
   AND objsubid = 0;
  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;

SET search_path TO dest_schema, source_schema, 'public', pg_catalog;  
SELECT current_schemas(TRUE) INTO v_path;
RAISE NOTICE 'search path = %', v_path;

  -- Create sequences
  -- TODO: Find a way to make this sequence's owner the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs 
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || 
', ' || sq_is_called || ');' ; 
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value 
|| ', ' || sq_is_called || ');' ;
END IF;

  -- add sequence comments
SELECT oid INTO tbl_oid
  FROM pg_class 
 WHERE relkind = 'S'
   AND relnamespace = src_oid
   AND relname = quote_ident(object);

SELECT description INTO v_def
  FROM pg_description
 WHERE objoid = tbl_oid
   AND objsubid = 0;

IF FOUND
  THEN  
EXECUTE 'COMMENT ON SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object)
 || ' IS ''' || v_def || ''';';
END IF;


  END LOOP;

-- Create tables 
  FOR 

Re: Cloning schemas

2018-07-04 Thread Melvin Davidson
On Wed, Jul 4, 2018 at 2:27 PM, Łukasz Jarych  wrote:

> Hi Melvin,
>
>
>
> Best,
> Jacek
>
>
> Tue, 3 Jul 2018 at 15:34 Melvin Davidson  wrote:
>
>>
>>
>>
>>> ERROR: cannot insert into column "TopoToVersion_ID" DETAIL: Column
>>> "TopoToVersion_ID" is an identity column defined as GENERATED ALWAYS. HINT:
>>> Use OVERRIDING SYSTEM VALUE to override. CONTEXT: SQL statement "INSERT
>>> INTO version4.t_topotoversions SELECT * FROM public.t_topotoversions;"
>>> PL/pgSQL function clone_schema(text,text,boolean), line 212 in EXECUTE
>>>
>>
>> Lukasz,
>> That ERROR is occurring because you chose to copy the data (include_recs
>> = TRUE).
>> I have added OVERRIDING SYSTEM VALUE to the insert statement and
>> attached revised version.
>> CAUTION: The value of TopoToVersion_ID and any other IDENTITY columns
>> may be changed.
>>
>> The revised version is attached.
>>
>>
> thank you very much. Awesome!!!

Jacek,
You are quite welcome.



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to watch for schema changes

2018-07-03 Thread Melvin Davidson
>I'm writing a client in C++ with libpq. So I will have to do a lot of
polling.
Can't you just run a cron job?

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to watch for schema changes

2018-07-03 Thread Melvin Davidson
>Unfortunately I'm stuck with 9.1.

Have you thought about just setting   *log_statement = 'ddl'* in
postgresql.conf
and just grepping the log for CREATE and ALTER?


Re: Cloning schemas

2018-07-03 Thread Melvin Davidson
> ERROR: cannot insert into column "TopoToVersion_ID" DETAIL: Column
> "TopoToVersion_ID" is an identity column defined as GENERATED ALWAYS. HINT:
> Use OVERRIDING SYSTEM VALUE to override. CONTEXT: SQL statement "INSERT
> INTO version4.t_topotoversions SELECT * FROM public.t_topotoversions;"
> PL/pgSQL function clone_schema(text,text,boolean), line 212 in EXECUTE
>

Lukasz,
That ERROR is occurring because you chose to copy the data (include_recs =
TRUE).
I have added OVERRIDING SYSTEM VALUE to the insert statement and attached
revised version.
CAUTION: The value of TopoToVersion_ID and any other IDENTITY columns may
be changed.

The revised version is attached.

-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2018-07-03 by Melvin Davidson
--  Added ' OVERRIDING SYSTEM VALUE' for INSERT records
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  seq_oid  oid;
  func_oid oid;
  owner_id oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname    text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_deftext;
  v_stat   integer;
  seqval   bigint;
  seq_name name;
  sq_type  oid;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
--  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_typname   name;
  seq_cycled   text;
  seq_owner    name;
--  sq_cycled    char(10);
  
BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
   
  IF NOT FOUND
THEN 
  RAISE NOTICE 'source schema % does not exist!', source_schema;
  RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
   
  IF FOUND
THEN 
  RAISE NOTICE 'dest schema % already exists!', dest_schema;
  RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;
   
  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
 AND objsubid = 0;

  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;

  SET search_path TO dest_schema, source_schema, 'public', pg_catalog;  
  SELECT current_schemas(TRUE) INTO v_path;
  RAISE NOTICE 'search path = %', v_path;

  -- Create sequences
  -- TODO: Find a way to make this sequence's owner the correct table.
  FOR object IN
 SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)

  LOOP

SELECT oid, relowner into seq_oid, owner_id
  FROM pg_class 
 WHERE relname = quote_ident(object)
   AND relkind = 'S';

SELECT seqtypid,
   seqstart,
   seqincrement,
   seqmax,
   seqmin,
   seqcache,
   seqcycle
  INTO sq_type, 
   sq_start_value,
   sq_increment_by,
   sq_max_value,
   sq_min_value,
   sq_cache_value,
   sq_is_cycled
  FROM pg_sequence
 WHERE seqrelid = seq_oid;
 
SELECT typname INTO sq_typname
  FROM pg_type 
 WHERE oid = sq_type;

SELECT rolname INTO seq_owner
  FROM pg_authid WHERE oid = owner_id;

--EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || quote_ident(object);
--srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

IF sq_is_cycled 
  THEN 
seq_cycled = ' CYCLE'; 
  ELSE 
seq_cycled = ' NO CYCLE';
END IF;

EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object)
 || ' AS ' || sq_typname
 || ' INCREMENT ' ||  sq_increment_by
 || ' MINVALUE ' || sq_min_value
 || ' MAXVALUE ' || sq_max_value
 || ' START W

Re: Cloning schemas

2018-07-02 Thread Melvin Davidson
On Mon, Jul 2, 2018 at 10:14 AM, Łukasz Jarych  wrote:

>
> Hi,
>
> i see. thank you
>
> I am using:
>
> [image: image.png]
>
> Best,
> Jacek
>
>> Mon, 2 Jul 2018 at 16:03 Adrian Klaver 
>> wrote:
>
>> On 07/02/2018 06:57 AM, Łukasz Jarych wrote:
>> > Hi,
>> >
>> > "
>> > Strange. "audit_sq" looks like an invalid sequence table. I went
>> > here,https://www.postgresql.org/docs/9.6/static/sql-createsequence.html,
>>
>> > and checked all the way back to version 7.1 and "maxvalue" has been a
>> > column since back then.
>>
>> What version of Postgres are you actually doing the cloning in?
>>
>> Per Tom's post:
>>
>> https://www.postgresql.org/docs/10/static/release-10.html
>>
>> "
>> Move sequences' metadata fields into a new pg_sequence system catalog
>> (Peter Eisentraut)
>>
>> ...
>>
>> A sequence relation now stores only the fields that can be modified by
>> nextval(), that is last_value, log_cnt, and is_called.
>>
>> ...
>>
>> The main incompatibility introduced by this change is that selecting
>> from a sequence relation now returns only the three fields named above.
>> To obtain the sequence's other properties, applications must look into
>> pg_sequence. The new system view pg_sequences can also be used for this
>> purpose; it provides column names that are more compatible with existing
>> code.
>> "
>>
>> >
>> > Maybe skip that table for now? It even says the last value is 1. You
>> > should also check the other sequence tables. You can get them by doing:
>> >
>> > select * from information_schema.sequences;"
>> >
>> > Result of select:
>> >
>> > image.png
>> >
>> > Are you sure that I can skip " audit_sq" seq?
>> >
>> > I wrote here because i am newbie and i can not update this. Hope for
>> > your help Guys.
>> >
>> > Best,
>> > Jacek
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Mon, 2 Jul 2018 at 15:30 Tom Lane <t...@sss.pgh.pa.us> wrote:
>> >
>> > Łukasz Jarych <jarys...@gmail.com> writes:
>> >  > I am trying to use :
>> >
>> >  > "select * from clone_schema('public','Version8',true) but i am
>> > getting
>> >  > error:
>> >
>> >  > "Column "max_value" does not exist.
>> >  > LINE 1: SELECT last_value, max_value, start_value, increment_by,
>> > min...
>> >  > HINT: Maybe you wanted to point to column "
>> "audit_sq.last_value"?
>> >  > QUERY: SELECT last_value, max_value, start_value, increment_by,
>> > min_value,
>> >  > cache_value, log_cnt, is_cycled, is_called FROM public.audit_sq;
>> >  > CONTEXT: function PL/pgSQL clone_schema(text,text,boolean), row
>> 66 in
>> >  > EXECUTE
>> >
>> > I guess audit_sq is a sequence?  It looks to me like this function
>> has
>> > not been taught about the changes in sequence metadata in PG v10.
>> > You need to update it, or talk to its author about an update.
>> >
>> >  regards, tom lane
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
Lukasz,

Once again, in Version 10, the developers have changed the system catalogs.
Please use the attached clone_schema_10.sql which
has been adjusted for PostgreSQL 10.



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!

-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  seq_oid  oid;
  func_oid oid;
  owner_id oid;
  con_oid  

Re: dumping only table definitions

2018-06-29 Thread Melvin Davidson
On Fri, Jun 29, 2018 at 6:30 PM, Kevin Brannen  wrote:

> I’m trying to figure out how to dump only the table definitions, well
> those and things they need directly, like sequences & types. What I do NOT
> want are all the millions (not literally but it feels like it J) of
> functions we have. Triggers would be all right if I must, as we only have a
> few of those.
>
>
>
> I’ve tried various combinations of args to pg_dump, with -s being what I’d
> like to work, but I still get all the functions.
>
>
>
> I suppose I could filter it with Perl or Awk, but that could be tricky. In
> fact, the best my searching could find is to do:
>
>
>
>  pg_dump -s databasename | awk 'RS="";/CREATE TABLE[^;]*;/'
>
>
>
> which fails in some interesting ways.
>
>
>
> I could dump the schema (with functions) then load it into another DB then
> programmatically drop all the functions before dumping that with pg_dump,
> but again why should I have to.
>
>
>
> Is there any Pg tool that gives me just the table defs or am I going to
> have to write my own?
>
>
>
> Thanks,
>
> Kevin
>

> What I do NOT want are all the millions (not literally but it feels like
it :) ) of functions we have.

It sounds like you would best be served by installing pg_extractor. In
essence, it is a super flexible version of pg_dump. You have your choice of
exactly what you want to dump.

https://github.com/omniti-labs/pg_extractor
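If installing another tool is not an option, the awk idea from the original post can be made to mostly work by treating each dumped statement (ending in ";" at end of line) as one record. This is only a sketch: the sample dump text below is fabricated for illustration, a regex-capable RS (GNU awk or mawk) is assumed, and — as the thread notes — it will still fail "in interesting ways" on function bodies that happen to contain ";" at end of line:

```shell
#!/bin/sh
# A toy stand-in for `pg_dump -s mydb` output, so the filter can be shown
# self-contained; in practice pipe the real dump through the awk program.
cat > sample_dump.sql <<'EOF'
CREATE FUNCTION f() RETURNS integer AS $$ SELECT 1 $$ LANGUAGE sql;
CREATE SEQUENCE s1;
CREATE TABLE t1 (
    id integer NOT NULL,
    name text
);
CREATE TYPE mood AS ENUM ('sad', 'ok');
EOF

# Each record ends at ";\n"; keep only table/sequence/type definitions,
# dropping functions and everything else.
awk 'BEGIN { RS = ";\n" }
     /^CREATE (TABLE|SEQUENCE|TYPE)/ { print $0 ";" }' sample_dump.sql
```

pg_extractor remains the more robust choice, since dollar-quoted function bodies defeat any record-separator trick like this.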

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Melvin Davidson
On Tue, Jun 26, 2018 at 1:26 PM, amandeep singh  wrote:

> Hi Andreas
>
> The value for $1 is same in all queries.
>
> @Edson: number of such running queries are always  3.
>
> Get Outlook for Android <https://aka.ms/ghei36>
>
>
>
> From: Andreas Kretschmer
> Sent: Tuesday, 26 June, 6:04 PM
> Subject: Re: We find few queries running three times simultaneously with
> same parameters on postgres db
> To: pgsql-general@lists.postgresql.org
>
>
> On 26 June 2018 12:32:44 CEST, Andreas Kretschmer wrote:
> >> On 26.06.2018 at 12:19, amandeep singh wrote:
> >>> We have been observing our postgres database for the past few days. We
> >>> found a few queries running three times simultaneously with the same
> >>> parameters. I would like to back-track how a query comes to run
> >>> multiple times.
> >> They are independent of each other. You can run those queries with
> >> EXPLAIN ANALYSE to see the execution plan.
> >> Regards, Andreas
> It seems I misunderstood your query. Are you sure that the parameter $1
> is equal across all the different connections?
> Andreas
> -- 2ndQuadrant - The PostgreSQL Support Company
>
>
This query will show you the pid, client_addr and user that submitted the
queries, in addition to other useful information.

SELECT backend_start as be_start,
   datname,
   pid as pid,
   client_addr,
   usename as user,
   state,
   query,
   wait_event_type,
   query_start,
   current_timestamp - query_start as duration
  FROM pg_stat_activity
 WHERE pg_backend_pid() <> pid
ORDER BY 1,
 datname,
 query_start;


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Is there a way to be notified on the CREATE TABLE execution?

2018-06-20 Thread Melvin Davidson
On Wed, Jun 20, 2018 at 1:28 PM, Francisco Olarte 
wrote:

> Melvin:
>
> Maybe old eyes, but ...
>
> On Wed, Jun 20, 2018 at 7:03 PM, Melvin Davidson 
> wrote:
> >>Is there a way to be notified on the CREATE TABLE execution?
> > Here is sample code that will notify for a CREATE or DROP table:
>
> Doesn't this trigger just log the events? I think it's missing
> something like "Notify create_or_drop" somewhere after the logging (
> so a listener knows to look at the log table ).
>
> ( Normally I would use something like this, listen to a channel, do a
> first scan from the saved last tbl_cl_key, and then do another scan
> from the last each time listen fires, it seems the difficult work is
> done but it misses the notify to signal listeners and avoid having to
> rescan on a timer or a similar thing ).
>
> ( Maybe I missed the line, it would not be the first time, that's why I
> ask ).
>
> Francisco Olarte.
>

> Maybe old eyes, but ...
>I think it's missing
>something like "Notify create_or_drop" somewhere after the logging (
>so a listener knows to look at the log table ).

Uh, I said it was a SAMPLE. It's not that difficult to add RAISE INFO... or
NOTIFY...
which, if you look, is commented out with generic information.
Of course, the exact message is left to the creator.
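For completeness, a minimal sketch of what that notify could look like (the
channel name 'create_or_drop' and the payload format are arbitrary choices,
not part of the original sample):

```sql
-- Inside fn_notify_ddl(), after the INSERT INTO tbl_create_log:
PERFORM pg_notify('create_or_drop', obj.command_tag || ': ' || obj.object_identity);

-- Any interested session subscribes with:
LISTEN create_or_drop;
-- and rescans tbl_create_log from its last-seen tbl_cl_key when woken up.
```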

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Is there a way to be notified on the CREATE TABLE execution?

2018-06-20 Thread Melvin Davidson
>Is there a way to be notified on the CREATE TABLE execution?

Here is sample code that will notify for a CREATE or DROP table:

CREATE TABLE public.tbl_create_log
(
  tbl_cl_key bigserial,
  tbl_cre8_time timestamp without time zone DEFAULT now(),
  log_table_schema name,
  log_table_name name,
  log_session_user name,
  CONSTRAINT tbl_create_log_pk PRIMARY KEY (tbl_cl_key)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.tbl_create_log
  OWNER TO postgres;
GRANT ALL ON TABLE public.tbl_create_log TO postgres;


CREATE OR REPLACE FUNCTION public.fn_notify_ddl()
  RETURNS event_trigger AS
$BODY$
DECLARE
obj record;
BEGIN
/*
RAISE INFO 'Type: %', TG_TAG;
RAISE INFO 'Command: %', current_query();
RAISE INFO 'DB Name: %', current_database();
RAISE INFO 'DB User: %', session_user;
RAISE INFO 'DB Port: %', inet_server_port();
RAISE INFO 'Server Host: %', inet_server_addr();
RAISE INFO 'Client Host: %', inet_client_addr();
*/
FOR obj IN SELECT *
 FROM pg_event_trigger_ddl_commands() LOOP
IF obj.command_tag = 'CREATE TABLE'
    OR obj.command_tag = 'DROP TABLE' THEN
--  RAISE INFO 'we got a % event for object "%"', obj.command_tag, obj.object_identity;
  INSERT INTO tbl_create_log
  ( log_table_schema,
log_table_name,
log_session_user
  )
  SELECT n.nspname,
 c.relname,
 session_user
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
   WHERE c.oid = obj.objid
 AND c.relkind = 'r';
END IF;
END LOOP;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION public.fn_notify_ddl()
  OWNER TO postgres;

CREATE EVENT TRIGGER table_created_dropped ON DDL_COMMAND_END
EXECUTE PROCEDURE public.fn_notify_ddl();

ALTER EVENT TRIGGER table_created_dropped
OWNER TO postgres;

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: PostgreSQL Volume Question

2018-06-15 Thread Melvin Davidson
On Fri, Jun 15, 2018 at 12:26 PM, Data Ace  wrote:

> Well I think my question is somewhat away from my intention cause of my
> poor understanding and questioning :(
>
>
>
> Actually, I have 1TB data and have hardware spec enough to handle this
> amount of data, but the problem is that it needs too many join operations
> and the analysis process is going too slow right now.
>
>
>
> I've searched and found that graph model nicely fits for network data like
> social data in query performance.
>
>
>
> Should I change my DB (I mean my DB for analysis)? or do I need some other
> solutions or any extension?
>
>
> Thanks
>
> On Thu, Jun 14, 2018 at 3:36 PM, Melvin Davidson 
> wrote:
>
>>
>>
>> On Thu, Jun 14, 2018 at 6:30 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>>
>>> On 06/14/2018 02:33 PM, Data Ace wrote:
>>>
>>>> Hi, I'm new to the community.
>>>>
>>>> Recently, I've been involved in a project that develops a social
>>>> network data analysis service (and my client's DBMS is based on 
>>>> PostgreSQL).
>>>> I need to gather huge volume of unstructured raw data for this project,
>>>> and the problem is that with PostgreSQL, it would be so difficult to handle
>>>> this kind of data. Are there any PG extension modules or methods that are
>>>> recommended for my project?
>>>>
>>>
>>> In addition to Ravi's questions:
>>>
>>> What does the data look like?
>>>
>>> What Postgres version?
>>>
>>> How is the data going to get from A <--> B, local or remotely or both?
>>>
>>> Is there another database or program involved in the process?
>>>
>>>
>>>> Thanks in advance.
>>>>
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>>
>> In addition to Ravi's and Adrian's questions:
>>
>> What is the hardware configuration?
>>
>> --
>> *Melvin Davidson*
>> *Maj. Database & Exploration Specialist*
>> *Universe Exploration Command – UXC*
>> Employment by invitation only!
>>
>
>
At this point, you are still giving general answers instead of specific
ones. It is most important to answer Adrian's and my questions:
What does the data look like?
What Postgres version?
How is the data going to get from A <--> B, local or remotely or both?
Is there another database or program involved in the process?
What is the O/S?
What does the hardware configuration look like?

>the problem is that it needs too many join operations and the analysis
process is going too slow right now.

 So what is the structure of the tables involved, including indexes?

What is the actual query?


We cannot help unless you give us specifics to work with.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: PostgreSQL Volume Question

2018-06-14 Thread Melvin Davidson
On Thu, Jun 14, 2018 at 6:30 PM, Adrian Klaver 
wrote:

> On 06/14/2018 02:33 PM, Data Ace wrote:
>
>> Hi, I'm new to the community.
>>
>> Recently, I've been involved in a project that develops a social network
>> data analysis service (and my client's DBMS is based on PostgreSQL).
>> I need to gather huge volume of unstructured raw data for this project,
>> and the problem is that with PostgreSQL, it would be so difficult to handle
>> this kind of data. Are there any PG extension modules or methods that are
>> recommended for my project?
>>
>
> In addition to Ravi's questions:
>
> What does the data look like?
>
> What Postgres version?
>
> How is the data going to get from A <--> B, local or remotely or both?
>
> Is there another database or program involved in the process?
>
>
>> Thanks in advance.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
In addition to Ravi's and Adrian's questions:

What is the hardware configuration?

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to get postmaster shut down time in postgres?

2018-06-04 Thread Melvin Davidson
On Mon, Jun 4, 2018 at 9:44 AM, pavan95  wrote:

> Hi Adrian/Melvin,
>
> Thanks for your prompt replies. Yeah, I'm aware of that way.
>
> But my requirement is to get the server shutdown time whenever that event
> occurs and insert into a table dynamically!! Is it possible?
>
> The main reason for my requirement is to find the time swing between server
> stop and start.
>
> Thanks in advance.
>
> Regards,
> Pavan
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
>... my requirement is to get the server shutdown time whenever that event
>occurs and insert into a table dynamically!! Is it possible?

Since you have not stated your O/S, we cannot provide a specific solution.
However, I can suggest that you use a command-line script to grab the time
from the log and then write it to a file, e.g. with grep or find.
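As a sketch of that approach (the log path and history-file name are
assumptions; a fake log line is written first so the example is
self-contained):

```shell
# Hypothetical log location; check log_directory/log_filename in postgresql.conf.
LOG=./postgresql.log
# Fake a shutdown line so the example runs without a real server:
printf '%s\n' '2018-06-04 09:21:27 EDT LOG:  database system is shut down' > "$LOG"
# Append the timestamp of every shutdown event to a history file:
grep 'database system is shut down' "$LOG" | cut -d' ' -f1-3 >> shutdown_times.txt
tail -n 1 shutdown_times.txt   # -> 2018-06-04 09:21:27 EDT
```

A cron job (or a systemd unit hook) running this periodically gives you a
table-loadable record of shutdown times.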

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to get postmaster shut down time in postgres?

2018-06-04 Thread Melvin Davidson
On Mon, Jun 4, 2018 at 9:17 AM, pavan95  wrote:

> Hi all,
>
> Is there any way to get postmaster(postgres database) shutdown time?
> Please
> help!!
>
> Thanks in Advance.
>
>
> Regards,
> Pavan
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
Look in your postgresql log. You should see a line similar to this:

2018-06-04 09:21:27 EDT LOG:  database system is shut down

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Question on disk contention

2018-05-31 Thread Melvin Davidson
On Thu, May 31, 2018 at 1:13 AM, Charles Clavadetscher <
clavadetsc...@swisspug.org> wrote:

> Hi Melvin
>
> As an answer to a previous post you wrote:
>
> "Also, your main problem is that when you have two exact same queries
> executing at the same time, they will cause contention in
> the disk, and neither one will make much progress."
>
> Could you elaborate a little more on the meaning of "contention in the
> disk"?
> What is it that happens?
>
> Thank you and have a good day.
> Regards
> Charles
>
>
>
>Could you elaborate a little more on the meaning of "contention in the
disk"?
>What is it that happens?

To simplify, you have two users/jobs, both wanting the exact same
information. So the system instructs the disk to get
that information from the disk, which causes the disk head to "seek" to the
position of the first eligible row and
continues positioning to other eligible rows. Now the job is not exclusive,
so the system temporarily switches to the
other job, which causes the disk to go back to the first row and work from
there. The switching back and forth continues,
so that instead of one job finishing quickly, they both have to take turns
waiting for needed information. That takes a lot longer.

Try this, Select a table that has a lot of rows, ideally 1M+. Then start a
query with a WHERE clause and see how long
it takes. Then submit the same query from 5 separate connections
simultaneously and see how long that takes.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Query running for 12 hours

2018-05-30 Thread Melvin Davidson
> Execution time: 5431.817 ms
>
Two things:

From your explain, it looks like you need an index on this_.tarih, since
it is doing a sequential scan when it should be using an index.

Also, your main problem is that when you have two exact same queries
executing at the same time, they will cause contention in the disk, and
neither one will make much progress.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to drop a value from an ENUM?

2018-05-29 Thread Melvin Davidson
On Mon, May 28, 2018 at 11:08 PM, Torsten Förtsch 
wrote:

> Hi,
>
> I am absolutely sure a certain value of one of my ENUM types is not used
> in the entire database. Now I am asked to drop that value. Unfortunately,
> there is no ALTER TYPE DROP VALUE.
>
> On my development box I tried
>
> delete from pg_enum
>  where enumtypid='my_type_name'::regtype
>and enumlabel='my_label'
>
> It worked and I could not find any adverse effects.
>
> Given the value is not used anywhere, is this a save command?
>
> Thanks,
> Torsten
>




Well, IMHO, you should avoid enums at all cost. As you have discovered,
enums are hard to maintain and have long been replaced by foreign keys.
With that being said, apparently your command was safe. However, the best
way to drop/delete an enum value is the following:

1. Determine that the particular enum value is NOT referenced by any
   column of any table in the database.
2. As a superuser, use the following queries:

SELECT t.typname,
       e.enumlabel,
       e.enumsortorder,
       e.enumtypid
  FROM pg_type t
  JOIN pg_enum e ON e.enumtypid = t.oid
 WHERE t.typtype = 'e'
   AND e.enumlabel = 'your_enum_value'
 ORDER BY 1, e.enumsortorder;

DELETE FROM pg_enum
 WHERE enumtypid = <enumtypid from the query above>
   AND enumlabel = 'your_enum_value';
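Step 1 can itself be done with a catalog query. A sketch (it finds plain
table columns of the enum type; columns using domains or composite types
built on the enum would need additional checks):

```sql
-- Find every table column whose declared type is the enum:
SELECT n.nspname AS schema,
       c.relname AS table,
       a.attname AS column
  FROM pg_attribute a
  JOIN pg_class c ON c.oid = a.attrelid
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE a.atttypid = 'my_type_name'::regtype
   AND c.relkind = 'r'
   AND a.attnum > 0
   AND NOT a.attisdropped;
-- Then, for each row returned, confirm the label is unused:
--   SELECT count(*) FROM <table> WHERE <column> = 'your_enum_value';
```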


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Help in Postgresql

2018-05-22 Thread Melvin Davidson
On Tue, May 22, 2018 at 8:58 AM, mooh Rash <moohr...@yahoo.com> wrote:

> Hi ,
>
> I am a PhD student searching in query optimization. I want to access the
> information about queries and statistics. Especially, I am interested in   
> finding
> out how I can collect the history information (user, when, query,
> query_plan) from postgreqsl. Also, I wonder if there is a plan-table in
> postgresql than I can access, and I can access it?. I appreciate if you
> guide me in getting the mentioned information that are necessary for my
> research.
>
> Best,
> Moohanad
>

> I want to access the information about queries and statistics


I strongly recommend you install pgBadger, which analyzes the log file to
provide you the information you need.

https://sourceforge.net/projects/pgbadger/

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: When use triggers?

2018-05-18 Thread Melvin Davidson
On Fri, May 18, 2018 at 9:35 AM, Vick Khera <vi...@khera.org> wrote:

> On Wed, May 16, 2018 at 6:19 PM, hmidi slim <hmidi.sl...@gmail.com> wrote:
>
>> HI,
>>
>> I'm working on a microservice application and I avoid using triggers
>> because they will not be easy to maintain and need an experimented person
>> in database administration to manage them. So I prefer to manage the work
>> in the application using ORM and javascript.
>> However I want to get some opinions and advices about using triggers:
>> when should I use them? How to manage them when there are some problems?
>>
>
> I have used triggers to keep audit-logs of changes to certain columns in a
> table. For example, I want to know when a customer went "overdue" and then
> back to "active". The best place to create that log is in the database
> itself, since that also captures any manually updated rows (ie, those
> actions not initiated by the application code itself).
>
> I have also used triggers to ensure data consistency and enforce state
> diagram transition rules for status columns in a table. These help capture
> logic errors in application code. For example, if your state diagram allows
> A -> B <-> C, then the trigger would disallow a transition from B or C  to
> A, disallow A -> C, but allow C -> B and B -> C and A -> B.
>
> To manage them, we treat them like all DDL changes: everything is done via
> SQL script, and those are tracked using our version control software, go
> through developer testing then staging testing, then finally production.
>

> I have used triggers to keep audit-logs of changes to certain columns in
a table
Another good use for triggers is to maintain a customer balance. E.g., an
INSERT, UPDATE or DELETE involving a customer payment (or, in the case of
banks, a deposit or withdrawal) would automatically maintain the balance
in the customer master record.
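As a sketch of that idea (the table and column names — customer(id,
balance) and payment(customer_id, amount) — are made up for illustration):

```sql
CREATE OR REPLACE FUNCTION fn_maintain_balance() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE customer SET balance = balance + NEW.amount
         WHERE id = NEW.customer_id;
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE customer SET balance = balance - OLD.amount + NEW.amount
         WHERE id = NEW.customer_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE customer SET balance = balance - OLD.amount
         WHERE id = OLD.customer_id;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_payment_balance
AFTER INSERT OR UPDATE OR DELETE ON payment
FOR EACH ROW EXECUTE PROCEDURE fn_maintain_balance();
```

This keeps the balance correct even for rows changed outside the
application, which is exactly the point made above.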

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: sql function with empty row

2018-05-16 Thread Melvin Davidson
On Wed, May 16, 2018 at 2:25 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/16/2018 11:07 AM, Philipp Kraus wrote:
>
>> Hello,
>>
>> I have defined a SQL function
>>
>> CREATE OR REPLACE FUNCTION substancetrivialname(text)
>>  RETURNS substance
>>  LANGUAGE 'sql'
>>  COST 100
>>  VOLATILE
>> AS $BODY$
>> select s.* from substancetrivialname n
>>  join substance s on s.id = n.idsubstance
>> where lower(btrim(n.name)) = lower(btrim($1));
>> $BODY$;
>>
>> substance and substancetrivialname have got a 1-to-N relationship (for
>> each substance can exist multiple trivial names).
>> If I call the function with a non-existing trivial name it returns a
>> single row with all fields are set to NULL.
>>
>
> Ignore my previous post I got turned around on what was being returned.
>
>
>
>> If I run the join query directly it returns an empty record set on a
>> non-existing trivial name.
>> I expected equal behavior on my function, so my question is, how can I
>> fix this?
>>
>> Thanks
>>
>> Phil
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
I would start by changing
  RETURNS substance
to
  RETURNS SETOF substance
so that a non-matching trivial name returns zero rows instead of a single
all-NULL row.

Note: you might also consider using RETURNS TABLE(...)

https://www.postgresql.org/docs/10/static/sql-createfunction.html

You might also want to consider adding LIMIT 1 to the end of the query.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Add schema to the query

2018-05-06 Thread Melvin Davidson
On Sun, May 6, 2018 at 10:33 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
> On Sun, May 6, 2018 at 10:19 PM, Igor Korot <ikoro...@gmail.com> wrote:
>
>> Hi, ALL,
>>
>> Is there an easy way to add the schema to the following query:
>>
>> SELECT u.usename FROM pg_class c, pg_user u WHERE u.usesysid =
>> c.relowner AND relname = ?
>>
>> Thank you.
>>
>>
> >Is there an easy way to add the schema to the following query:
>
> You mean like this?
> SELECT u.usename,
>*n.nspname AS schema*
>   FROM pg_class c
> *JOIN pg_namespace n ON n.oid = c.relnamespace,*
>pg_user u
>  WHERE u.usesysid = c.relowner
>AND relname = ?
>
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>

OR do you mean like this?
SELECT u.usename
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace,
       pg_user u
 WHERE u.usesysid = c.relowner
   AND relname = ?
   AND n.nspname = 'public'

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Add schema to the query

2018-05-06 Thread Melvin Davidson
On Sun, May 6, 2018 at 10:19 PM, Igor Korot <ikoro...@gmail.com> wrote:

> Hi, ALL,
>
> Is there an easy way to add the schema to the following query:
>
> SELECT u.usename FROM pg_class c, pg_user u WHERE u.usesysid =
> c.relowner AND relname = ?
>
> Thank you.
>
>
>Is there an easy way to add the schema to the following query:

You mean like this?
SELECT u.usename,
       n.nspname AS schema
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace,
       pg_user u
 WHERE u.usesysid = c.relowner
   AND relname = ?


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to find the hits on the databases and tables in Postgres

2018-05-05 Thread Melvin Davidson
 >How to monitor the Hits on database and how many hits on each user tables
> Through query.


Do these help? You can run them through a cron job.

-- For all databases:

SELECT pg_stat_database.datname,
       pg_stat_database.blks_read,
       pg_stat_database.blks_hit,
       round((pg_stat_database.blks_hit::double precision
              / (pg_stat_database.blks_read
                 + pg_stat_database.blks_hit + 1)::double precision
              * 100::double precision)::numeric, 2) AS cachehitratio
  FROM pg_stat_database
 WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
 ORDER BY cachehitratio DESC;

-- For all tables in a specific database (you need to iterate through all
-- DBs):

SELECT n.nspname,
       s.relname,
       c.reltuples::bigint,
       c.relfrozenxid,
       age(c.relfrozenxid) AS age_frozenxid,
       n_live_tup,
       n_tup_ins,
       n_tup_upd,
       n_tup_del,
       date_trunc('second', last_vacuum) AS last_vacuum,
       date_trunc('second', last_autovacuum) AS last_autovacuum,
       date_trunc('second', last_analyze) AS last_analyze,
       date_trunc('second', last_autoanalyze) AS last_autoanalyze,
       round(current_setting('autovacuum_vacuum_threshold')::integer
             + current_setting('autovacuum_vacuum_scale_factor')::numeric
               * c.reltuples) AS av_threshold,
       CASE WHEN c.reltuples > 0
            THEN round(100.0 * n_dead_tup / c.reltuples)
            ELSE 0
       END AS pct_dead,
       CASE WHEN n_dead_tup > round(current_setting('autovacuum_vacuum_threshold')::integer
                                    + current_setting('autovacuum_vacuum_scale_factor')::numeric
                                      * c.reltuples)
            THEN 'VACUUM'
            ELSE 'ok'
       END AS "av_needed"
  FROM pg_stat_all_tables s
  JOIN pg_class c ON c.oid = s.relid
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE s.relname NOT LIKE 'pg_%'
   AND s.relname NOT LIKE 'sql_%'
 ORDER BY 1, 2;

On Sat, May 5, 2018 at 8:13 AM, Pavel Stehule <pavel.steh...@gmail.com>
wrote:

>
>
> 2018-05-05 13:03 GMT+02:00 PT <wmo...@potentialtech.com>:
>
>> On Fri, 4 May 2018 17:14:39 +0530
>> nikhil raj <nikhilraj...@gmail.com> wrote:
>>
>> >  Hi,
>> > Any one can please help me out
>> >
>> > How to monitor the Hits on database and how many hits on each user
>> tables
>> > Through query.
>> > Is there any other tools for that so it can full fill my requirement
>> for it
>>
>> pgBadger has always been my goto tool for that:
>> https://github.com/dalibo/pgbadger
>>
>>
> There are some statistics per table: select * from pg_stat_user_tables,
> indexes: select * from pg_stat_user_indexes, and databases: select * from
> pg_stat_database;
>
> Regards
>
> Pavel
>
> --
>> Bill Moran <wmo...@potentialtech.com>
>>
>>
>


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Inconsistent compilation error

2018-04-19 Thread Melvin Davidson
On Thu, Apr 19, 2018 at 8:13 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, April 18, 2018, Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
>>
>> Hmm, wonder if there is an oops in the below:
>>
>> http://www.pygresql.org/contents/changelog.html
>>
>> Version 5.0 (2016-03-20)
>> Changes in the DB-API 2 module (pgdb):
>> "SQL commands are always handled as if they include parameters, i.e.
>> literal percent signs must always be doubled. This consistent behavior is
>> necessary for using pgdb with wrappers like SQLAlchemy."
>
>
> I'd hope not, as far as the driver is concerned the percent signs are text
> content.  It's plpgsql that is interpreting them directly in the server.
>
> David J.
>


Hmmm, looking at the code you provided us, it is obviously not what is
actually in production. So if you copy the actual function from the system
that works and use that to drop and rebuild the function on the system that
fails, does that resolve the problem? I suspect there is an error in the
failing system that you have overlooked.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Inconsistent compilation error

2018-04-18 Thread Melvin Davidson
On Wed, Apr 18, 2018 at 9:02 PM, <r...@raf.org> wrote:

> Hi,
>
> postgresql-9.5.12 on debian-9
>
> I have a stored function with code that looks like:
>
> create or replace function tla_audit_delete_thing()
> returns boolean stable language plpgsql as $$
> declare
> r record;
> status boolean := 1;
> begin
> for r in select _.* from blah_history _ where _.original_id not in
> (select id from blah)
> loop
> raise notice '% %', 'blah_history.original_id', r;
> status := 0;
> end loop;
> [...]
> end
> $$
> security definer
> set search_path = public, pg_temp;
> revoke all on function tla_audit_delete_thing() from public;
> grant execute on function tla_audit_delete_thing() to staff;
>
> And I have a program that loads stored functions from disk
> when they are different to what's in the database and I have
> just loaded a very old database backup, brought the schema up
> to date, and tried to bring the stored functions up to date.
>
> But I'm getting this compilation error when it tries to load this
> function:
>
> ERROR:  too many parameters specified for RAISE
> CONTEXT:  compilation of PL/pgSQL function "tla_audit_delete_thing"
> near line 9
>
> Traceback (most recent call last):
>   File "lib/loadfunc.py", line 228, in main
> db.cursor().execute(src)
>   File "/usr/lib/python2.7/dist-packages/pgdb.py", line 1026, in
> execute
> return self.executemany(operation, [parameters])
>   File "/usr/lib/python2.7/dist-packages/pgdb.py", line 1050, in
> executemany
> rows = self._src.execute(sql)
> ProgrammingError: ERROR:  too many parameters specified for RAISE
> CONTEXT:  compilation of PL/pgSQL function "tla_audit_delete_thing"
> near line 9
>
> The line in question is:
>
> raise notice '% %', 'blah_history.original_id', r;
>
> Which looks fine. The really weird thing is that this happens when done on
> a
> debian9 host but when I load the function from another host (my macos
> laptop)
> with the same function into the same database, it works fine.
>
> I've never encountered an inconsistency like this before.
>
> Any suggestions as to what might be causing it?
>
> The python versions are slightly different and the pgdb module versions
> are different but I wouldn't have thought that that would affect the
> compilation performed by the database server itself:
>
>   debian9:   python-2.7.13 pgdb-5.0.3
>   macos-10.11.6: python-2.7.14 pgdb-4.2.2
>
> And the sql sent to the database server is identical from both hosts.
>
> And I don't think anything much has changed on the debian host recently.
>
> And it's not just the old backup. The same is happening with other copies
> of
> essentially the same database.
>
> And all the other stored functions were loaded fine. It's just this one
> that
> went wrong.
>
> Thanks in advance for any insights you can share.
>
> cheers,
> raf
>
>
>









> The line in question is:
>
> raise notice '% %', 'blah_history.original_id', r;
>
> Which looks fine.

It is not fine. You have specified TWO percent signs (%), which require TWO
arguments, but you have only provided ONE -> r.

Hence -> ERROR:  too many parameters specified for RAISE

https://www.postgresql.org/docs/9.6/static/plpgsql-errors-and-messages.html#PLPGSQL-STATEMENTS-RAISE
"Inside the format string, % is replaced by the string representation of the
next optional argument's value"
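For the archives, the rule is simply that the number of % placeholders must
match the number of arguments. A self-contained illustration (the values
are arbitrary):

```sql
DO $$
BEGIN
    RAISE NOTICE '% %', 'blah_history.original_id', 42;  -- two %, two arguments: OK
    -- RAISE NOTICE '%', 'a', 'b';  -- one %, two arguments:
    --                                 "too many parameters specified for RAISE"
END;
$$;
```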
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: How to fetch data from tables in PostgreSQL

2018-04-18 Thread Melvin Davidson
On Wed, Apr 18, 2018 at 9:48 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 04/18/2018 12:38 AM, vaibhav zaveri wrote:
>
>> Hi,
>>
>> Thanks for your reply
>> Which are the important tables from which we need to get data?
>>
>
> Important for what?
>
>>
>> Regards,
>> Vaibhav Zaveri
>>
>> On 18 Apr 2018 12:52, "vaibhav zaveri" <vaibhavzave...@gmail.com> wrote:
>>
>> Hi,
>>
>> How to fetch data from tables in PostgreSQL.
>>
>> Regards,
>> Vaibhav Zaveri
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


> Which are the important tables from which we need to get data?

The whole idea of a relational database such as PostgreSQL is that YOU
create the schema/tables that are important to you, and you store and
retrieve the information that is important to you. When PostgreSQL is
first installed it cannot possibly have any tables/information that you
would consider important.
-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Postgresql Split Brain: Which one is latest

2018-04-10 Thread Melvin Davidson
On Tue, Apr 10, 2018 at 11:04 AM, Vikas Sharma <shavi...@gmail.com> wrote:

> Hi Adrian,
>
> This can be a good example: Application server e.g. tomcat having two
> entries to connect to databases, one for master and 2nd for Slave (ideally
> used when slave becomes master). If application is not able to connect to
> first, it will try to connect to 2nd.
>
> Regards
> Vikas
>
> On 10 April 2018 at 15:26, Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
>
>> On 04/10/2018 06:50 AM, Vikas Sharma wrote:
>>
>>> Hi,
>>>
>>> We have postgresql 9.5 with streaming replication(Master-slave) and
>>> automatic failover. Due to network glitch we are in master-master situation
>>> for quite some time. Please, could you advise best way to confirm which
>>> node is latest in terms of updates to the postgres databases.
>>>
>>
>> It might help to know how the two masters received data when they where
>> operating independently.
>>
>>
>>> Regards
>>> Vikas Sharma
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


Vikas,

Presuming that the real "master" will have additional records/rows inserted
in the tables, if you run ANALYZE on the database(s) in both "masters" and
then execute the following query on both, whichever returns the higher
count would be the real "master":

SELECT sum(c.reltuples::bigint)
  FROM pg_stat_all_tables s
  JOIN pg_class c ON c.oid = s.relid
 WHERE s.relname NOT LIKE 'pg_%'
   AND s.relname NOT LIKE 'sql_%';


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Rationale for aversion to the central database?

2018-04-09 Thread Melvin Davidson
On Mon, Apr 9, 2018 at 9:45 AM, Ray Cote <rgac...@appropriatesolutions.com>
wrote:

> Maintaining your database logic in version control and versioning the
> deployed code can be a bit problematic.
>
> Conversely, providing a standardized pgsql module through which data is
> updated and retrieved can help standardize access logic across multiple
> languages and libraries.
>
> And I concur that database portability is a thing people like to discuss,
> but rarely occurs.
> Portability is important for general ORM tools, less so for corporate
> projects (there are always exceptions).
>
> Like any tool, needs to be used wisely.
> I've worked on a project that has 10s of thousands of lines of business
> logic in the database and it makes perfect sense for that environment.
> --Ray
>
>
>
>



While this discussion is very interesting, it is important to realize that
ultimately, it is the needs and policy of the company that decide how the
database is used. The primary purpose of a DBA is to install the proper
security, protect the integrity of the data, and maintain performance.
Determining whether it is better to place business logic in the database
or the application is strictly a case-by-case decision.
-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: decompose big queries

2018-04-06 Thread Melvin Davidson
On Fri, Apr 6, 2018 at 9:35 AM, pinker <pin...@onet.eu> wrote:

> Edson Carlos Ericksson Richter wrote
> > I don't know if there are best practices (each scenario requires its own
> > solution), but for plain complex SELECT queries, I do use "WITH"
> > queries... They work really well.
>
> Be cautious with CTE's. They weren't meant to be an alternative to
> subqueries and will probably change the way your query is executed, because
> they are optimisation fences:
> https://blog.2ndquadrant.com/postgresql-ctes-are-optimization-fences/
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f184378
> 0.html
>
>
Oftentimes, large queries like the example you have provided
were written as a generic query to obtain information for an
application. So developers should be cautioned to write queries
that are specific to the data needed for each situation.

Objectively speaking, you should look at two main areas.
First, examine the WHERE clause. Sometimes there are
redundant restrictions which can be removed.
Next, look at the columns that are selected. Are all those
columns really needed?
After you have cleaned the WHERE clause and columns, you
may find it is no longer necessary to join so many tables.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!




Re: Problem with connection to host (wrong host)

2018-03-31 Thread Melvin Davidson
On Sat, Mar 31, 2018 at 11:44 AM, Mike Martin <redt...@gmail.com> wrote:

> Hi
> I am just setting up a postgres server, details
> Host 192.168.0.3
> pg_hba.conf
> # TYPE  DATABASE        USER            ADDRESS                 METHOD
>
> # "local" is for Unix domain socket connections only
> local   all             all                                     peer
> # IPv4 local connections:
> host    all             all             192.168.0.0/32          ident
> # IPv6 local connections:
> host    all             all             127.0.0.1/32            ident
>
> host    all             all             ::1/128                 ident
>
> postgresql.conf
> listen_addresses = '*'
>
> however when I try to connect from my laptop (ip 192.168.0.2) I get
>
> psql -h 192.168.0.3 -U usevideo -W
> Password for user usevideo:
> psql: FATAL:  no pg_hba.conf entry for host "192.168.0.2", user
> "usevideo", database "usevideo", SSL off
>
> So a bit confused, is psql ignoring the host parameter
>
> thanks
>
>
>




>So a bit confused, is psql ignoring the host parameter

It is not ignoring your entry; you simply have no entry for the laptop you
are connecting from. So simply add the following:

host    all             all             192.168.0.2/32          ident

<- note, you may have to change the authentication method depending on your
environment
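After editing pg_hba.conf, the server must re-read the file before the new rule takes effect. A minimal sketch of the reload step (pg_reload_conf() is the standard superuser call; the commented rule is the one suggested above):

```sql
-- Append the new rule to pg_hba.conf (exact path varies by install):
--   host    all    all    192.168.0.2/32    ident
-- Then reload the configuration without restarting the server:
SELECT pg_reload_conf();   -- returns true when the signal was sent

-- On PostgreSQL 10+ you can verify which rules were loaded:
-- SELECT line_number, type, database, user_name, address, auth_method
--   FROM pg_hba_file_rules;
```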



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Introducing brand-new dbForge Studio for PostgreSQL, a powerful IDE for working with PostgreSQL databases

2018-03-22 Thread Melvin Davidson
The "Free" version doesn't do much. Just lets you connect to a DB,
create/edit and execute SQL.
oh hum.

On Thu, Mar 22, 2018 at 9:55 AM, Tim Clarke <tim.cla...@manifest.co.uk>
wrote:

> On 22/03/18 11:53, Steven Hirsch wrote:
> > On Thu, 22 Mar 2018, Devart wrote:
> >
> >> /Devart announced the first release of dbForge Studio for PostgreSQL
> >> that
> >> allows to manage and develop database objects in PostgreSQL. This new
> >> IDE
> >> offers many robust features like Code Completion, Object Explorer and
> >> Data
> >> Editor which help users create, develop and execute queries as well
> >> as edit
> >> and adjust the code to their requirements in a convenient and
> >> user-friendly
> >> interface./
> >
> > (snip...)
> >
> >  Windows only.  Wake me up when you have a Mac or Linux version.
> >
> >
>
> +1
>
> Tim Clarke
>
>


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Melvin Davidson
On Mon, Mar 19, 2018 at 1:17 PM, Jimmy Augustine <jimmy.august...@enyx.fr>
wrote:

> Hi,
>
> I used this command and I found the same value in total_size column.
>
> 2018-03-19 18:01 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:
>
>>
>>
>> On Mon, Mar 19, 2018 at 12:45 PM, Adrian Klaver <
>> adrian.kla...@aklaver.com> wrote:
>>
>>> On 03/19/2018 09:31 AM, Jimmy Augustine wrote:
>>>
>>>> Dear Friends,
>>>>
>>>> I am newbie to postgresql.
>>>> I have 162 GB on my database but when I check size of all tables, I
>>>> approximately obtain 80 GB.
>>>> I also see that I have 68GB of temporary files however I only found
>>>> 2.4MB at postgres/data/base/pgsql_tmp.
>>>>
>>>
>>> Exactly how did you determine this?
>>>
>>>
>>>> Could you tell me what are those temporary files and where are they at?
>>>> Can I delete some of them?
>>>>
>>>> All values come from pgAdmin 4 and checked by my own SQL
>>>> queries(postgresql-9.6).
>>>>
>>>
>>> Can you show actual queries used?
>>>
>>> I already run vacuum full and there is few dead tuples.
>>>>
>>>> Best regards,
>>>> Jimmy AUGUSTINE
>>>>
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>> > I have 162 GB on my database but when I check size of all tables, I
>> approximately obtain 80 GB.
>> >I also see that I have 68GB of temporary files however I only found
>> 2.4MB at postgres/data/base/pgsql_tmp.
>>
>>
>> *I am not sure what your query was that determined table and index
>> sizes, but try using this query instead.*
>>
>> *Note that total_size is the size of the table and all its indexes.*
>>
>>
>> *SELECT n.nspname as schema,
>>        c.relname as table,
>>        a.rolname as owner,
>>        c.relfilenode as filename,
>>        c.reltuples::bigint,
>>        pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname))) as size,
>>        pg_size_pretty(pg_total_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname))) as total_size,
>>        pg_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname)) as size_bytes,
>>        pg_total_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname)) as total_size_bytes,
>>        CASE WHEN c.reltablespace = 0
>>             THEN 'pg_default'
>>             ELSE (SELECT t.spcname FROM pg_tablespace t WHERE (t.oid = c.reltablespace))
>>        END as tablespace
>>   FROM pg_class c
>>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>>   JOIN pg_authid a ON (a.oid = c.relowner)
>>  WHERE quote_ident(nspname) NOT LIKE 'pg_%'
>>    AND quote_ident(relname) NOT LIKE 'pg_%'
>>    AND quote_ident(relname) NOT LIKE 'information%'
>>    AND quote_ident(relname) NOT LIKE 'sql_%'
>>    AND quote_ident(relkind) IN ('r')
>>  ORDER BY total_size_bytes DESC, 1, 2;*
>>
>> --
>> *Melvin Davidson*
>> *Maj. Database & Exploration Specialist*
>> *Universe Exploration Command – UXC*
>> *Employment by invitation only!*
>>
>
>


*>I used this command and I found the same value in total_size column. *

*Please be specific. Exactly WHAT is the SQL query? *


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Melvin Davidson
On Mon, Mar 19, 2018 at 1:12 PM, Jimmy Augustine <jimmy.august...@enyx.fr>
wrote:

>
>
> 2018-03-19 18:09 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
>
>> On 03/19/2018 10:04 AM, Jimmy Augustine wrote:
>>
>>>
>>>
>>> 2018-03-19 17:45 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com
>>> <mailto:adrian.kla...@aklaver.com>>:
>>>
>>> On 03/19/2018 09:31 AM, Jimmy Augustine wrote:
>>>
>>> Dear Friends,
>>>
>>> I am newbie to postgresql.
>>> I have 162 GB on my database but when I check size of all
>>> tables, I approximately obtain 80 GB.
>>> I also see that I have 68GB of temporary files however I only
>>> found 2.4MB at postgres/data/base/pgsql_tmp.
>>>
>>>
>>> Exactly how did you determine this?
>>>
>>> I used this command and sum result for all database :
>>> SELECT pg_size_pretty(pg_total_relation_size('table_name'));
>>>
>>> And this for complete database :
>>> SELECT pg_size_pretty(pg_database_size('Database Name'));
>>>
>>>
>> So where did the 68GB number for temporary files come from?
>>
>> I don't measure this value by my own. I was disappointed by the gap
> between the two queries, so I checked pgAdmin 4 and I saw this value.
>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


*>I don't measure this value by my own. I was disappointed by the gap
between the two queries, so I checked pgAdmin 4 and I saw this value. *

*I think your problem is that summing SELECT
pg_size_pretty(pg_total_relation_size('table_name')); over your tables only
counts those tables (with their indexes and TOAST data),*


*but SELECT pg_size_pretty(pg_database_size('Database Name')); measures the
entire on-disk size of that database, including system catalogs and free
space not yet reclaimed by VACUUM.*
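A quick way to see the gap is to compare the two measurements side by side while connected to the database in question; a minimal sketch:

```sql
-- Sum of user tables (incl. indexes and TOAST) vs. the whole database
SELECT pg_size_pretty(SUM(pg_total_relation_size(relid))) AS tables_total,
       pg_size_pretty(pg_database_size(current_database())) AS database_total
  FROM pg_stat_user_tables;
```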












*Try this query instead to show individual database sizes.*

SELECT oid,
       datname,
       pg_size_pretty(pg_database_size(datname)) as size_pretty,
       pg_database_size(datname) as size,
       (SELECT pg_size_pretty(SUM(pg_database_size(datname))::bigint)
          FROM pg_database) AS total,
       ((pg_database_size(datname) / (SELECT SUM(pg_database_size(datname))
                                        FROM pg_database)) * 100)::numeric(6,3) AS pct
  FROM pg_database
 ORDER BY datname;

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Prompt for parameter value in psql

2018-03-16 Thread Melvin Davidson
On Fri, Mar 16, 2018 at 11:12 AM, Tiffany Thang <tiffanyth...@gmail.com>
wrote:

> Hi,
> Would it be possible to prompt for a user input in psql like in Oracle
> sqlplus?
>
> In oracle, we use the & sign, for example,
> select * from emp where empid=
>
> Thanks.
>





*https://www.postgresql.org/docs/9.6/static/app-psql.html
<https://www.postgresql.org/docs/9.6/static/app-psql.html>*

*\prompt [ text ] name*

*Prompts the user to supply text, which is assigned to the variable name.
An optional prompt string, text, can be specified. (For multiword prompts,
surround the text with single quotes.)*

*By default, \prompt uses the terminal for input and output. However, if
the -f command line switch was used, \prompt uses standard input and
standard output.*
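As a small illustration of the \prompt approach described above (the table and variable names are made up for the example):

```sql
-- prompt_demo.sql — run with:  psql -d mydb -f prompt_demo.sql
-- \prompt reads a value into the psql variable :empid
\prompt 'Enter employee id: ' empid

-- Reference the variable as :empid (or :'empid' for a quoted literal)
SELECT *
  FROM emp
 WHERE empid = :empid;
```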

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: SELECT .. FOR UPDATE: find out who locked a row

2018-03-16 Thread Melvin Davidson
Tom,
this whole discussion started because Enrico did not originally specify the
PostgreSQL version he was working with. So after he did advise it was for
9.6, I felt it necessary to explain to him why a certain section of my
query was commented out and that it would also work for 10. I have
previously made it a policy to request ops include the PostgreSQL version
and O/S when submitting to this list, but I was berated by others for
always requesting ops to provide that extremely difficult information to
obtain.
I also felt it important that I express my opinion that the changes needed
were caused by what I felt was cosmetic and unnecessary changes to the
catalog. There is an old saying "If it ain't broke, don't fix it" and that
certainly applies here.

Now, as to your request for me to read the thread in the url's you
suggested, I did read most of the content. I note that the entire
discussion was amongst
PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>, So at no time
was the generic population of PostgreSQL users and DBA's involved.
Therefore, said population, myself included, had no foreknowledge of the
intended changes which is the cause of the problem. Therefore your
statement "you might want to consider speaking up in some reasonable time
frame, not six years later" is abrasive at best, since I, and others, only
found out about it after the fact. Not to mention, even if I did complain
earlier, I seriously doubt the changes could or would be reversed.

At this point I have said all I have to say and will discuss it no further.
I can only strongly recommend that in the future, proposed changes to
system catalogs that could adversely affect existing scripts and
applications be sent to the generic PostgreSQL population (IE:
pgsql-general@lists.postgresql.org) for comment BEFORE said changes are
implemented.

On Thu, Mar 15, 2018 at 11:23 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Melvin Davidson <melvin6...@gmail.com> writes:
> > Yes, Stephen, I certainly understand making changes to system catalogs
> > _when necessary_.  That being said, the first change was the renaming of
> > procpid to pid in pg_stat_activity.  However, I contend that was more
> > because someone felt that it was more to make the column names
> > consistent across catalogs, rather than necessity.
>
> Please read all of
> https://www.postgresql.org/message-id/flat/201106091554.
> p59Fso314146%40momjian.us
> where this was discussed to death (and rejected), and then read all of
> https://www.postgresql.org/message-id/flat/CAKq0gvK8PzMWPv19_o7CGg8ZQ0G%
> 2BUuAWor5RrAg0SOmWTqqLwg%40mail.gmail.com
> which is the thread in which the change was agreed to after all
> (on the grounds that we were breaking backwards compatibility of
> the view anyway with respect to other, more important, columns).
>
> If you still feel that we make incompatible changes without adequate
> consideration, that's your right, but you might want to consider
> speaking up in some reasonable time frame, not six years later.
> This could have been objected to as late as 9.2 beta, so it's not
> like you need to be drinking from the pgsql-hackers firehose continually
> in order to weigh in.  But 9.2 is not just released, it's EOL, so it's
> really kinda late to be objecting.
>
> regards, tom lane
>



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: SELECT .. FOR UPDATE: find out who locked a row

2018-03-15 Thread Melvin Davidson
On Thu, Mar 15, 2018 at 10:14 PM, Stephen Frost <sfr...@snowman.net> wrote:

> Greetings Melvin,
>
> * Melvin Davidson (melvin6...@gmail.com) wrote:
> > >I guess with your query I can figure out which connection holds a lock,
> > but it seems I cannot correlate those locks to the rows which actually
> are
> > locked, since pg_locks seems not to reference this in any way.
> >
> > *FWIW, I really don't understand your need to identify the actual rows
> that
> > are locked. Once you have identified the query that is causing a block
> > (which is usually due to "Idle in Transaction"), AFAIK the only way to
> > remedy the problem is to kill the offending query, or wait for it to
> > complete. I am not aware of any way available to a user to "unlock"
> > individual rows". Indeed, if you could, it would probably lead to
> > corruption of some form.*
>
> No, locks are not able to be released mid-transaction.  That said, it
> can be difficult sometimes to determine which of the many sessions is
> holding a lock on a specific row, hence the two approaches I provided,
> which actually address the question which was raised.  While the
> use-case might not be on completely solid ground here, I don't think
> it's entirely unreasonable, so I don't think there's any need to tell
> the OP that what they're asking for isn't really what they want, in this
> case.
>
> > *BTW, the query I provided WILL work in version 10.  The commented
> section
> > was for v9.1 and prior, as "someone" felt it necessary to rename some
> > fields in pg_stat_activity*
> > *and remove/replace another field. Hopefully they will refrain from doing
> > so in the future, as it breaks queries and applications.*
>
> Changes will continue to be made between major versions of PostgreSQL
> when they're deemed necessary; I'd suggest those applications be
> prepared to adjust on a per-major-version basis when future changes
> happen.  We do have quite a bit of discussion about changes which are
> made and they are not done so without good justification, but they can
> and do happen.
>
> Thanks!
>
> Stephen
>

> Changes will continue to be made between major versions of PostgreSQL
>when they're deemed necessary; I'

Yes, Stephen, I certainly understand making changes to system catalogs
_when necessary_.
That being said, the first change was the renaming of procpid to pid in
pg_stat_activity.
However, I contend that was done more because someone felt it would make
the column names consistent across catalogs, rather than out of necessity.
Care should be taken to
consider the need and
effect of changing EXISTING system columns. It may have been a mistake in
originally naming it,
but it was certainly working and not causing any problems at the time.
Just my personal opinion.



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: SELECT .. FOR UPDATE: find out who locked a row

2018-03-15 Thread Melvin Davidson
>I guess with your query I can figure out which connection holds a lock,
but it seems I cannot correlate those locks to the rows which actually are
locked, since pg_locks seems not to reference this in any way.



*Enrico,*



*FWIW, I really don't understand your need to identify the actual rows that
are locked. Once you have identified the query that is causing a block
(which is usually due to "Idle in Transaction"), AFAIK the only way to
remedy the problem is to kill the offending query, or wait for it to
complete. I am not aware of any way available to a user to "unlock"
individual rows. Indeed, if you could, it would probably lead to
corruption of some form.*


*BTW, the query I provided WILL work in version 10.  The commented section
was for v9.1 and prior, as "someone" felt it necessary to rename some
fields in pg_stat_activity*

*and remove/replace another field. Hopefully they will refrain from doing
so in the future, as it breaks queries and applications.*

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: SELECT .. FOR UPDATE: find out who locked a row

2018-03-15 Thread Melvin Davidson
On Thu, Mar 15, 2018 at 4:48 PM, Stephen Frost <sfr...@snowman.net> wrote:

> Greetings,
>
> * Enrico Thierbach (e...@open-lab.org) wrote:
> > I am using `SELECT * FROM queue ... FOR UPDATE SKIP LOCKED` to implement
> a
> > queueing system.
> >
> > Now I wonder if it is possible, given the id of one of the locked rows in
> > the queue table, to find out which connection/which transaction owns the
> > lock.
>
> Sure, you can attempt to lock the record and then run pg_blocking_pids()
> (in another session) against the pid which is trying to acquire the
> lock.
>
> Session #1:
>
> Connect
> SELECT * FROM queue ... FOR UPDATE SKIP LOCKED;
> ... gets back some id X
> ... waits
>
> Session #2:
>
> Connect
> SELECT pg_backend_pid(); -- save this for the 3rd session
> SELECT * FROM queue WHERE id = X FOR UPDATE;
> ... get blocked waiting for #1
> ... waits
>
> Session #3:
>
> SELECT pg_blocking_pids(SESSION_2_PID);
> -- returns PID of Session #1
>
> Obviously there's race conditions and whatnot (what happens if session
> #1 releases the lock?), but that should work to figure out who is
> blocking who.
>
> If you're on a version of PG without pg_blocking_pids then you can look
> in the pg_locks view, though that's a bit more annoying to decipher.
>
> Thanks!
>
> Stephen
>

> Now I wonder if it is possible, given the id of one of the locked rows in
> the queue table, to find out which connection/which transaction owns

You have not specified which version of PostgreSQL, but try this query.

SELECT c.datname,
   c.pid as pid,
   c.client_addr,
   c.usename as user,
   c.query,
   c.wait_event,
   c.wait_event_type,
/*   CASE WHEN c.waiting = TRUE
THEN 'BLOCKED'
ELSE 'no'
END as waiting,
*/
  l.pid as blocked_by,
   c.query_start,
   current_timestamp - c.query_start as duration
  FROM pg_stat_activity c
  LEFT JOIN pg_locks l1 ON (c.pid = l1.pid and not l1.granted)
  LEFT JOIN pg_locks l2 on (l1.relation = l2.relation and l2.granted)
  LEFT JOIN pg_stat_activity l ON (l2.pid = l.pid)
  LEFT JOIN pg_stat_user_tables t ON (l1.relation = t.relid)
 WHERE pg_backend_pid() <> c.pid
ORDER BY datname,
 query_start;
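On 9.6 and later, the pg_blocking_pids() approach Stephen describes can also be run across all sessions at once; a minimal sketch:

```sql
-- List every waiting backend together with the PIDs blocking it
-- (pg_blocking_pids() is available from PostgreSQL 9.6 onward).
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
  FROM pg_stat_activity
 WHERE cardinality(pg_blocking_pids(pid)) > 0;
```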

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-13 Thread Melvin Davidson
On Tue, Mar 13, 2018 at 1:47 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
>
>> Thank you Melvin, I forgot to mention I've already found your script
>> before I asked here, but I didn’t think it was robust enough (please don't
>> offend :-). Particularly, it didn't work well on PostgreSQL 10.
>>
>>
> Aldrin,
>
> I apologize. I just tested and found that the reason it is failing is
> because, once again, the catalogs have been changed. In this case the
> structure of sequences.
> I am trying to find a workaround. I will get back to you when I do.
>
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>



*Aldrin,*


*I've solved the problem with the sequences. *

*The attached clone_schema_10.sql has been tested on my system and now
works. *

*Let me know if you find any bugs. As you also said it is not robust
enough, please let me know what additional features you think it
needs.*


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!

-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid          oid;
  tbl_oid          oid;
  seq_oid          oid;
  func_oid         oid;
  owner_id         oid;
  con_oid          oid;
  v_path           text;
  v_func           text;
  v_args           text;
  v_conname        text;
  v_rule           text;
  v_trig           text;
  object           text;
  buffer           text;
  srctbl           text;
  default_         text;
  v_column         text;
  qry              text;
  dest_qry         text;
  v_def            text;
  v_stat           integer;
  seqval           bigint;
  seq_name         name;
  sq_type          oid;
  sq_last_value    bigint;
  sq_max_value     bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value     bigint;
  sq_cache_value   bigint;
  sq_log_cnt       bigint;
--  sq_is_called     boolean;
  sq_is_cycled     boolean;
  sq_typname       name;
  seq_cycled       text;
  seq_owner        name;
--  sq_cycled        char(10);
  
BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
   
  IF NOT FOUND
THEN 
  RAISE NOTICE 'source schema % does not exist!', source_schema;
  RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
   
  IF FOUND
THEN 
  RAISE NOTICE 'dest schema % already exists!', dest_schema;
  RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;
   
  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
 AND objsubid = 0;

  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;
  
  -- Create sequences
  -- TODO: Find a way to make this sequence's owner is the correct table.
  FOR object IN
 SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)

  LOOP

SELECT oid, relowner into seq_oid, owner_id
  FROM pg_class 
 WHERE relname = quote_ident(object)
   AND relkind = 'S';

SELECT seqtypid,
   seqstart,
   seqincrement,
   seqmax,
   seqmin,
   seqcache,
   seqcycle
  INTO sq_type, 
   sq_start_value,
   sq_increment_by,
   sq_max_value,
   sq_min_value,
   sq_cache_value,
   sq_is_cycled
  FROM pg_sequence
 WHERE seqrelid = seq_oid;
 
SELECT typname INTO sq_typname
  FROM pg_type 
 WHERE oid = sq_type;

SELECT rolname INTO seq_owner
  FROM pg_authid WHERE oid = owner_id;

--EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
--srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

IF sq_is_cycled 
  THEN 
seq_cycled = ' CYCLE'; 
  ELSE 
seq_cycled = 

Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-13 Thread Melvin Davidson
> Thank you Melvin, I forgot to mention I've already found your script
> before I asked here, but I didn’t think it was robust enough (please don't
> offend :-). Particularly, it didn't work well on PostgreSQL 10.
>
>
Aldrin,

I apologize. I just tested and found that the reason it is failing is
because, once again, the catalogs have been changed. In this case the
structure of sequences.
I am trying to find a workaround. I will get back to you when I do.


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: UPSERT on a view

2018-03-13 Thread Melvin Davidson
On Tue, Mar 13, 2018 at 10:02 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Mar 13, 2018 at 6:47 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> On Mon, Mar 12, 2018 at 2:26 PM, Steven Roth <st...@rothskeller.net>
>> wrote:
>>
>>> This code raises the error 'foo', even though the insert says DO NOTHING
>>> and the error type is unique_violation.  Why?
>>>
>>> More generally:  how can one write trigger functions for a view (that is
>>> not automatically updatable), such that INSERT ... ON CONFLICT DO UPDATE
>>> will work with the correct semantics?  What can one do in the INSERT
>>> trigger that will cause PostgreSQL to execute the caller-supplied UPDATE
>>> clause?
>>>
>>>
> Sorry, not sure...
> ​
>
>> >CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON *vtest*
>> >FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
>> >*INSERT INTO vtest *VALUES (1) ON CONFLICT DO NOTHING;
>>
>>
>>
>> *Your problem is that A. TRIGGERS are meant for TABLES, not views*
>>
>> *and *
>>
>> *B. You CANNOT insert into a VIEW.*
>>
>>
> Your knowledge in this area is out-of-date...you should read the page you
> linked to again.
>
> I'll admit "table_name" probably could be labelled
> "relation_name"...though that is then too broad.
>
> David J.
> ​
>
>

>Your knowledge in this area is out-of-date...you should read the page you
linked to again.
Yes, you can create an INSTEAD OF trigger on a view, and an INSERT on the
view will then fire it. But ON CONFLICT DO NOTHING only suppresses unique
violations raised while inserting into a table's unique index; it does not
suppress an error raised inside the trigger function itself.

IE:
CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
*INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;*

Here the trigger's RAISE is an ordinary error, which is the cause of the
error!



-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: UPSERT on a view

2018-03-13 Thread Melvin Davidson
On Mon, Mar 12, 2018 at 2:26 PM, Steven Roth <st...@rothskeller.net> wrote:

> Why does the following code raise an error?
>
> CREATE TABLE ttest (x integer);
> CREATE VIEW vtest AS SELECT x FROM ttest;
> CREATE FUNCTION vtest_insert() RETURNS trigger LANGUAGE plpgsql AS $$
> RAISE 'foo' USING ERRCODE='unique_violation';
> END $$;
> CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
> FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
> INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;
>
> This code raises the error 'foo', even though the insert says DO NOTHING
> and the error type is unique_violation.  Why?
>
> More generally:  how can one write trigger functions for a view (that is
> not automatically updatable), such that INSERT ... ON CONFLICT DO UPDATE
> will work with the correct semantics?  What can one do in the INSERT
> trigger that will cause PostgreSQL to execute the caller-supplied UPDATE
> clause?
>
> Thanks,
> Steve
>
>
>CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON *vtest*
>FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
>*INSERT INTO vtest *VALUES (1) ON CONFLICT DO NOTHING;



*Your problem is that A. the ON CONFLICT clause is resolved against a
table's unique index, so it cannot suppress an error that your INSTEAD OF
trigger raises itself,*

*and *

*B. the RAISE in vtest_insert() runs unconditionally on every INSERT into
the view.*



*https://www.postgresql.org/docs/9.6/static/sql-createtrigger.html
<https://www.postgresql.org/docs/9.6/static/sql-createtrigger.html>*

CREATE [ CONSTRAINT ] TRIGGER name { BEFORE | AFTER | INSTEAD OF } { event [ OR ... ] }
ON *table_name*
[ FROM referenced_table_name ]
[ NOT DEFERRABLE | [ DEFERRABLE ] [ INITIALLY IMMEDIATE |
INITIALLY DEFERRED ] ]
    [ FOR [ EACH ] { ROW | STATEMENT } ]
    [ WHEN ( condition ) ]
EXECUTE PROCEDURE function_name ( arguments )


*Please also note that it is very helpful if you specify PostgreSQL version
and O/S when submitting to this list.*
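For reference, the usual way to get upsert-like behavior through a view is to have the INSTEAD OF trigger itself perform the ON CONFLICT insert against the base table; a minimal sketch (the names follow Steven's example, and the added primary key is an assumption needed for conflict arbitration):

```sql
CREATE TABLE ttest (x integer PRIMARY KEY);
CREATE VIEW vtest AS SELECT x FROM ttest;

-- The trigger, not the caller, carries the ON CONFLICT clause,
-- because conflict arbitration needs a unique index on the base table.
CREATE FUNCTION vtest_insert() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO ttest (x) VALUES (NEW.x)
    ON CONFLICT (x) DO NOTHING;
    RETURN NEW;
END $$;

CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
    FOR EACH ROW EXECUTE PROCEDURE vtest_insert();

-- Plain INSERTs into the view are now idempotent:
INSERT INTO vtest VALUES (1);
INSERT INTO vtest VALUES (1);  -- silently does nothing
```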


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Programmatically duplicating a schema

2018-03-13 Thread Melvin Davidson
> What is a reliable way to programmatically & generically populate an
> empty schema with all the objects in the public schema as a template?

The simplest way is just to load the attached clone_schema function. It was
originally created by Emanuel '3manuek', and I enhanced it. It now copies
all sequences, tables, indexes, rules, triggers, data (optional), views &
functions from any existing schema to a new schema. Then just call the
function.

SAMPLE CALL:
SELECT clone_schema('public', 'new_schema', TRUE);

Use FALSE if you do not want to copy data.
-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid          oid;
  tbl_oid          oid;
  func_oid         oid;
  con_oid          oid;
  v_path           text;
  v_func           text;
  v_args           text;
  v_conname        text;
  v_rule           text;
  v_trig           text;
  object           text;
  buffer           text;
  srctbl           text;
  default_         text;
  v_column         text;
  qry              text;
  dest_qry         text;
  v_def            text;
  v_stat           integer;
  seqval           bigint;
  sq_last_value    bigint;
  sq_max_value     bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value     bigint;
  sq_cache_value   bigint;
  sq_log_cnt       bigint;
  sq_is_called     boolean;
  sq_is_cycled     boolean;
  sq_cycled        char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
   AND objsubid = 0;
  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;
  
  -- Create sequences
  -- TODO: Find a way to make this sequence's owner is the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs 
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || 
', ' || sq_is_called || ');' ; 
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value 
|| ', ' || sq_is_called || ');' ;
END IF;

  -- add sequence comments
SELECT oid INTO tbl_oid
  FROM pg_class 
 WHERE relkind = 'S'
   AND relnamespace = src_oid
   AND relname = quote_ident(object);

SELECT description INTO v_def
  FROM pg_description
 WHERE objoid = tbl_oid
   AND objsubid = 0;

IF FOUND
  THEN  
EXECUTE 'COMMENT ON SEQUENCE ' || quote_ident(dest_schema) || 

Re: psql in a bash function

2018-03-12 Thread Melvin Davidson
On Mon, Mar 12, 2018 at 2:14 PM, Ron Johnson <ron.l.john...@cox.net> wrote:

> Hi,
>
> Because I need to log into many servers, I created functions as keyboard
> shortcuts (not aliases, since I will want to embed these shortcuts in other
> functions).
>
> psqlxyz ()
> {
> echo "P1=$1";
> echo "P2=$2";
> psql -U postgres -h XYZ $@
> }
>
> This is the (simple, test) command that I want to run, which works when
> run explicitly using psql, but not my function.  Any ideas why the function
> isn't properly passing the "-c" and '"select ..."' to psql?
>
> $ psql -U postgres -h XYZ -c "select oid, datname from pg_database;"
>oid   |datname
> -+
>1 | template1
>11563 | template0
>11564 | postgres
>16404 | test1
>  3039800 | ABCD
>   319011 | EFGH
>   649861 | IJKL
> (7 rows)
>
> $ psqldba -c *'*"select oid, datname from pg_database;"*'*
> P1=*-c*
> P2=*"select oid, datname from pg_database;"*
> psql: warning: extra command-line argument "datname" ignored
> psql: warning: extra command-line argument "from" ignored
> psql: warning: extra command-line argument "pg_database;"" ignored
> psql: FATAL:  database "oid," does not exist
>
> Thanks
>
> --
> Angular momentum makes the world go 'round.
>

Ron,

Here is a model that works in any LINUX environment. You can tweak for your
commands:
Please include your PostgreSQL version and O/S in future submissions to
this list.

#!/bin/bash
# Reports sizes for all or selected database

PORT=""
USER=""
DBNAME="%"
usage() {
echo "Usage: $0 [-d <dbname>] [-U <user>] [-p <port>]"
exit 1
}

while getopts "d:p:uU:" OPT;
do case "${OPT}" in
  d) DBNAME=$OPTARG
 ;;
  p) PORT="-p $OPTARG"
 ;;
  U) USER="-U $OPTARG"
 ;;
  u) usage
 ;;
[?]) usage
   esac;
done

if [ "$DBNAME" = "" ]
  then
usage
exit 1
fi


psql $PORT $USER postgres <<_EOF_
SELECT txid_current() AS txid_current;
SELECT datname,
   rolname as owner,
   pg_size_pretty(pg_database_size(datname)) as size_pretty,
   pg_database_size(datname) as size,
   (SELECT pg_size_pretty (SUM( pg_database_size(datname))::bigint)
  FROM pg_database)  AS total,
       (pg_database_size(datname) /
          (SELECT SUM(pg_database_size(datname)) FROM pg_database)
       ) * 100::numeric(6,3) AS pct
  FROM pg_database d
  JOIN pg_authid a ON a.oid = datdba
  WHERE datname LIKE '%$DBNAME%'
ORDER BY datname;
_EOF_
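For readers unfamiliar with the option handling above, here is a minimal, standalone sketch of the same getopts pattern. The `parse` function name and the option values are illustrative only; the real script feeds the resulting flags into psql.

```shell
# Minimal sketch of the getopts pattern used in the script above.
parse() {
  local OPTIND OPT DBNAME="%" PORT="" USEROPT=""
  while getopts "d:p:U:" OPT; do
    case "$OPT" in
      d) DBNAME=$OPTARG ;;        # database name pattern
      p) PORT="-p $OPTARG" ;;     # becomes a psql flag
      U) USEROPT="-U $OPTARG" ;;  # becomes a psql flag
    esac
  done
  echo "dbname=$DBNAME port=$PORT user=$USEROPT"
}

parse -d mydb -p 5433 -U admin
# prints: dbname=mydb port=-p 5433 user=-U admin
```

Declaring OPTIND local is what lets the function be called more than once; without it, getopts would resume from the previous call's position.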


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-09 Thread Melvin Davidson
On Fri, Mar 9, 2018 at 10:00 AM, Aldrin Martoq Ahumada <
aldrin.mar...@gmail.com> wrote:

> Hi Andre,
>
> Yes, here is the issue: https://github.com/influitive/apartment/issues/532
>
> It happens if you configured apartment with use_sql=true, which means it
> clones the schema from pg_dump. My first attempt was to “fix” the script
> generated by pg_dump, but I feel it will be a mess. We solved our issue
> going back to use_sql=false, which is the default (creates the schema from
> db/schema.rb). But there is people that have other requirements, like
> functions, so the easier way for them is to keep use_sql and replace
> strings in the script.
>
>
> Cheers,
>
> On Mar 9, 2018, at 11:51 AM, Andre Oliveira Freitas <
> afrei...@callixbrasil.com> wrote:
>
> Hello Aldrin,
>
> I'm also using apartment with postgresql 9.6.6, and I don't see any
> issue with it. Are you using Apartment::Tenant.create?
>
> 2018-03-09 10:26 GMT-03:00 Aldrin Martoq Ahumada <aldrin.mar...@gmail.com
> >:
>
> Hi,
>
> For a multi tenant system, we are using the following command to blindly
> clone a schema into another:
> pg_dump -s -x -O -n #{default_tenant} #{dbname}
>
>
> This is done for us by a rails gem, which then feeds that script into the
> new created schema for the new tenant.
> https://github.com/influitive/apartment/blob/
> 80a21f2e1cdcbe5b0bd976f88c14332657804536/lib/apartment/
> adapters/postgresql_adapter.rb#L150
>
>
>
>
> When we upgraded from 9.5.11 to 9.5.12, the format of the dump changed (it
> now always appends the schema name), so this is broken. We could patch the
> SQL generated, but that’s not a generic/robust solution.
>
> # pg_dump postgresql 9.5.11
> SET search_path = public, pg_catalog;
> CREATE TABLE ahoy_events (
>id bigint NOT NULL,
>visit_id integer,
>user_id integer,
>name character varying,
>properties jsonb,
>"time" timestamp without time zone
> );
>
> # pg_dump postgresql 9.5.12
> CREATE TABLE public.ahoy_events (
>id bigint NOT NULL,
>visit_id integer,
>user_id integer,
>name character varying,
>properties jsonb,
>"time" timestamp without time zone
> );
>
>
>
> Thinking in the long term, how could be the best way to clone a schema into
> another?
>
>
>
>
> —
> Aldrin
>
>
>
>
> --
>
> André Luis O. Freitas
> System Architect
>
> Rua do Rócio, 220 - Cj. 72
> São Paulo - SP - 04552-000
> 55 11 4063 4222
>
> afrei...@callix.com.br
> www.callix.com.br
>
>
>
> ...how could be the best way to clone a schema into another?

The safest way is to use pg_dump -F p -n <old_schema> > schema.sql
Then edit schema.sql and change all references to the old_schema name to
the new_schema name. Finally, use psql < schema.sql to create the
new_schema.

That being said, a year ago I optimized a function originally written by
Emanuel '3manuek' called clone_schema, which is added to the public schema.
It clones all sequences, tables, indexes, rules, triggers, data (optional),
views & functions from any existing schema to a new schema.
SAMPLE CALL: SELECT clone_schema('public', 'new_schema', TRUE);

I've attached it for your convenience.
Disclaimer: I do not accept any responsibility for any unknown bugs in the
function. Test first and use at your own risk.
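The dump-edit-restore approach described above can be sketched in shell. This is an illustrative sketch only: the sample dump line and the schema names are stand-ins for real pg_dump output, and a blind textual rename can also rewrite row data that happens to contain the old schema name, so the edited file should always be reviewed before restoring.

```shell
# Stand-in for plain-format pg_dump output; in practice this would come from:
#   pg_dump -F p -n old_schema dbname > schema.sql
dump='CREATE TABLE old_schema.ahoy_events (id bigint NOT NULL);'

# Rename every qualified reference old_schema.* to new_schema.*
renamed=$(printf '%s\n' "$dump" | sed 's/old_schema\./new_schema./g')

printf '%s\n' "$renamed"
# prints: CREATE TABLE new_schema.ahoy_events (id bigint NOT NULL);
```

The edited text would then be fed back with `psql dbname < schema.sql` after creating the new schema.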


*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid          oid;
  tbl_oid          oid;
  func_oid         oid;
  con_oid          oid;
  v_path           text;
  v_func           text;
  v_args           text;
  v_conname        text;
  v_rule           text;
  v_trig           text;
  object           text;
  buffer           text;
  srctbl           text;
  default_         text;
  v_column         text;
  qry              text;
  dest_qry         text;
  v_def            text;
  v_stat           integer;
  seqval           bigint;
  sq_last_value    bigint;
  sq_max_value     bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value     bigint;
  sq_c

Re: save query as sql file

2018-03-07 Thread Melvin Davidson
On Wed, Mar 7, 2018 at 8:46 AM, Łukasz Jarych <jarys...@gmail.com> wrote:

> thank you !
>
> Jacek
>
> 2018-03-07 14:45 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
>
>> On 03/06/2018 11:07 PM, Łukasz Jarych wrote:
>>
>>> Hello,
>>>
>>> I ma trying to save query as sql file in pg_admin4 but file --> save as
>>> not exists like here:
>>>
>>> https://www.youtube.com/watch?v=L4KJ_Kpymh4
>>>
>>> where can i do it ?
>>>
>>>
>> Well according to manual:
>>
>> https://www.pgadmin.org/docs/pgadmin4/2.x/query_tool.html
>>
>> Query tool toolbar
>>
>>
>> Click the Save icon to perform a quick-save of a previously saved query,
>> or to access the Save menu:
>>
>> Select Save to save the selected content of the SQL Editor panel
>> in a file.
>> Select Save As to open a new browser dialog and specify a new
>> location to which to save the selected content of the SQL Editor panel.
>>
>>
>>
>> Best,
>>> Jacek
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>

> trying to save query as sql file in pg_admin4 but file --> save as not
exists like here:

As previously answered, you have to use the icon(s) for all options in the
query window

FYI, in October 2017 I entered Feature request #2797 in pgAdmin's Redmine
to add a MENU to the query window in addition to the icons.
That request is still open.
https://redmine.postgresql.org/issues/2797
-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Can I grant permissions to specific set of uids (linux) ?

2018-03-06 Thread Melvin Davidson
On Tue, Mar 6, 2018 at 3:26 PM, David Gauthier <davegauthie...@gmail.com>
wrote:

> Hi:
>
> I'd like to grant select, insert, update, delete to a table for a specific
> set of uids (linux).  All others get select only.  Can the DB authenticate
> the current linux user and grant access based on the fact that they are
> logged in (IOW, no passwords)? Is this possible? If so, how?
>
> Thanks for any help
>


> I'd like to grant select, insert, update, delete to a table for a
> specific set of uids (linux).

PostgreSQL does not allow permissions based on uids.

The accepted/implemented way of doing that is to:

1. CREATE the appropriate _user_ ROLEs.
2. Create a GROUP (a role that cannot log in) with the permissions needed.
3. GRANT that GROUP/ROLE to the user ROLEs that need it:

   GRANT role_name [, ...] TO role_name [, ...] [ WITH ADMIN OPTION ]

https://www.postgresql.org/docs/current/static/sql-creategroup.html
https://www.postgresql.org/docs/current/static/sql-createrole.html
https://www.postgresql.org/docs/current/static/sql-grant.html

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Enforce primary key on every table during dev?

2018-03-01 Thread Melvin Davidson
On Thu, Mar 1, 2018 at 2:00 PM, Ron Johnson <ron.l.john...@cox.net> wrote:

> On 03/01/2018 12:32 PM, Daevor The Devoted wrote:
>
>
>
> On Thu, Mar 1, 2018 at 8:18 PM, Ron Johnson <ron.l.john...@cox.net> wrote:
>
>>
>> On 03/01/2018 11:47 AM, Daevor The Devoted wrote:
>>
>>
>> On Thu, Mar 1, 2018 at 2:07 PM, Rakesh Kumar <rakeshkumar...@aol.com>
>> wrote:
>>
>>>
>>> >Adding a surrogate key to such a table just adds overhead, although
>>> that could be useful
>>> >in case specific rows need updating or deleting without also modifying
>>> the other rows with
>>> >that same data - normally, only insertions and selections happen on
>>> such tables though,
>>> >and updates or deletes are absolutely forbidden - corrections happen by
>>> inserting rows with
>>> >an opposite transaction.
>>>
>>> I routinely add surrogate keys like serial col to a table already having
>>> a nice candidate keys
>>> to make it easy to join tables.  SQL starts looking ungainly when you
>>> have a 3 col primary
>>> key and need to join it with child tables.
>>>
>>>
>> I was always of the opinion that a mandatory surrogate key (as you
>> describe) is good practice.
>> Sure there may be a unique key according to business logic (which may be
>> consist of those "ungainly" multiple columns), but guess what, business
>> logic changes, and then you're screwed!
>>
>>
>> And so you drop the existing index and build a new one.  I've done it
>> before, and I'll do it again.
>>
>> So using a primary key whose sole purpose is to be a primary key makes
>> perfect sense to me.
>>
>>
>> I can't stand synthetic keys.  By their very nature, they're so
>> purposelessly arbitrary, and allow you to insert garbage into the table.
>>
>
> Could you perhaps elaborate on how a surrogate key allows one to insert
> garbage into the table? I'm afraid I don't quite get what you're saying.
>
>
> If your only unique index is a synthetic key, then you can insert the same
> "business data" multiple times with different synthetic keys.
>
>
> --
> Angular momentum makes the world go 'round.
>

If you are going to go to the trouble of having a surrogate/synthetic
key, then you may as well make it a primary key, which is much better.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Version upgrade: is restoring the postgres database needed?

2018-03-01 Thread Melvin Davidson
On Thu, Mar 1, 2018 at 12:22 PM, Ron Johnson <ron.l.john...@cox.net> wrote:

> On 03/01/2018 11:03 AM, Melvin Davidson wrote:
>
>
>
> On Thu, Mar 1, 2018 at 11:51 AM, Ron Johnson <ron.l.john...@cox.net>
> wrote:
>
>> On 03/01/2018 10:37 AM, Vick Khera wrote:
>>
>> On Thu, Mar 1, 2018 at 11:15 AM, Ron Johnson <ron.l.john...@cox.net>
>> wrote:
>>
>>> No, I do:
>>>
>>> $ pg_dump -Fc PROD > PROD.pgdump
>>> $ pg_dump --globals-only postgres > globals.sql
>>> $ pg_dump -Fc postgres > postgres.pgdump
>>>
>>>
>> That's how I back them up as well. You are correct that all you need to
>> do is restore the globals.sql, then each "pgdump" file individually. Just
>> ignore the warning when it tries to restore your initial postgres
>> superuser, since it was created by the initdb already.
>>
>> You probably don't need the "postgres" db at all, since it is just there
>> to allow the client to connect to something on initial install. Normally
>> you don't use it in production.
>>
>>
>> Good.  What, then, have I forgotten to restore such that the "Access
>> privileges" are showing on my current 9.2 servers, but not on the
>> newly-restored 9.6.6 server?
>>
>> *Current*
>> postgres=# \l
>>List of databases
>> Name |  Owner   | Encoding |   Collate   |Ctype|   Access
>> privileges
>> -+--+--+-+--
>> ---+---
>> CSSCAT_STI   | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>> CSS=CTc/CSS  +
>>  |  |  | | |
>> =Tc/CSS  +
>>  |  |  | | |
>> app_user=CTc/CSS
>> CSSCAT_STIB  | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>> CSS=CTc/CSS  +
>>  |  |  | | |
>> =Tc/CSS  +
>>  |  |  | | |
>> app_user=CTc/CSS
>> CSSCAT_STIC  | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>> CSS=CTc/CSS  +
>>  |  |  | | |
>> =Tc/CSS  +
>>  |  |  | | |
>> app_user=CTc/CSS
>>
>> *Newly restored*
>> postgres=# \l
>>List of databases
>> Name |  Owner   | Encoding |   Collate   |Ctype|   Access
>> privileges
>> -+--+--+-+--
>> ---+---
>> CSSCAT_STIB | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>> CSSCAT_STIC | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>> postgres| postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>>
>>
>> --
>> Angular momentum makes the world go 'round.
>>
>
>
>
>
>
> >$ pg_dump -Fc PROD > PROD.pgdump
> >$ pg_dump --globals-only postgres > globals.sql
> >$ pg_dump -Fc postgres > postgres.pgdump
>
> The last I looked, pg_dump does not have a "--globals-only" option.
>
> Did you mean?
> $ pg_dump -Fc PROD > PROD.pgdump
> $ pg_dumpall --globals-only postgres > globals.sql
>   OR
> $ pg_dumpall -g > globals.sql
> $ pg_dump -Fc postgres > postgres.pgdump
>
>
> Hmmm.  I just looked at the script, and it says:
>
> $ pg_dumpall --schema-only > globals.sql
>
> That's not good.
>
>
> --
> Angular momentum makes the world go 'round.
>








> Hmmm.  I just looked at the script, and it says:
> $ pg_dumpall --schema-only > globals.sql
> That's not good.

No, that's actually correct. pg_dumpall can and will dump the globals;
pg_dump cannot.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Version upgrade: is restoring the postgres database needed?

2018-03-01 Thread Melvin Davidson
On Thu, Mar 1, 2018 at 11:51 AM, Ron Johnson <ron.l.john...@cox.net> wrote:

> On 03/01/2018 10:37 AM, Vick Khera wrote:
>
> On Thu, Mar 1, 2018 at 11:15 AM, Ron Johnson <ron.l.john...@cox.net>
> wrote:
>
>> No, I do:
>>
>> $ pg_dump -Fc PROD > PROD.pgdump
>> $ pg_dump --globals-only postgres > globals.sql
>> $ pg_dump -Fc postgres > postgres.pgdump
>>
>>
> That's how I back them up as well. You are correct that all you need to do
> is restore the globals.sql, then each "pgdump" file individually. Just
> ignore the warning when it tries to restore your initial postgres
> superuser, since it was created by the initdb already.
>
> You probably don't need the "postgres" db at all, since it is just there
> to allow the client to connect to something on initial install. Normally
> you don't use it in production.
>
>
> Good.  What, then, have I forgotten to restore such that the "Access
> privileges" are showing on my current 9.2 servers, but not on the
> newly-restored 9.6.6 server?
>
> *Current*
> postgres=# \l
>List of databases
> Name |  Owner   | Encoding |   Collate   |Ctype|   Access
> privileges
> -+--+--+-+--
> ---+---
> CSSCAT_STI   | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
> CSS=CTc/CSS  +
>  |  |  | | |
> =Tc/CSS  +
>  |  |  | | |
> app_user=CTc/CSS
> CSSCAT_STIB  | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
> CSS=CTc/CSS  +
>  |  |  | | |
> =Tc/CSS  +
>  |  |  | | |
> app_user=CTc/CSS
> CSSCAT_STIC  | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
> CSS=CTc/CSS  +
>  |  |  | | |
> =Tc/CSS  +
>  |  |  | | |
> app_user=CTc/CSS
>
> *Newly restored*
> postgres=# \l
>List of databases
> Name |  Owner   | Encoding |   Collate   |Ctype|   Access
> privileges
> -+--+--+-+--
> ---+---
> CSSCAT_STIB | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
> CSSCAT_STIC | CSS  | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
> postgres| postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
>
>
> --
> Angular momentum makes the world go 'round.
>





>$ pg_dump -Fc PROD > PROD.pgdump
>$ pg_dump --globals-only postgres > globals.sql
>$ pg_dump -Fc postgres > postgres.pgdump

The last I looked, pg_dump does not have a "--globals-only" option.

Did you mean?
$ pg_dump -Fc PROD > PROD.pgdump
$ pg_dumpall --globals-only postgres > globals.sql
  OR
$ pg_dumpall -g > globals.sql
$ pg_dump -Fc postgres > postgres.pgdump

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Enforce primary key on every table during dev?

2018-03-01 Thread Melvin Davidson
On Thu, Mar 1, 2018 at 11:43 AM, Alban Hertroys <haram...@gmail.com> wrote:

> On 1 March 2018 at 17:22, Steven Lembark <lemb...@wrkhors.com> wrote:
> >
> >> On 03/01/2018 02:20 AM, Alban Hertroys wrote:
> >> [snip]
> >> > Not to mention that not all types of tables necessarily have
> >> > suitable candidates for a primary key. You could add a surrogate
> >> > key based on a serial type, but in such cases that may not serve
> >> > any purpose other than to have some arbitrary primary key.
> >> >
> >> > An example of such tables is a monetary transaction table that
> >> > contains records for deposits and withdrawals to accounts.
>
> (...)
>
> > Start with Date's notion that a database exists to correclty represent
> > data about the real world. Storing un-identified data breaks this
> > since we have no idea what the data means or have any good way of
> > getting it back out. Net result is that any workable relational
> > database will have at least one candidate key for any table in it.
>
> (...)
>
> > If you have a design with un-identified data it means that you havn't
> > normalized it properly: something is missing from the table with
> > un-identifiable rows.
>
> While that holds true for a relational model, in reporting for
> example, it is common practice to denormalize data without a
> requirement to be able to identify a single record. The use case for
> such tables is providing quick aggregates on the data. Often this
> deals with derived data. It's not that uncommon to not have a primary
> or even a uniquely identifiable key on such tables.
>
> I do not disagree that having a primary key on a table is a bad thing,
> but I do disagree that a primary key is a requirement for all tables.
>
> More generally: For every rule there are exceptions. Even for this one.
> --
> If you can't see the forest for the trees,
> Cut the trees and you'll see there is no forest.
>
>



> it is common practice to denormalize data without a
> requirement to be able to identify a single record

You may perceive that to be "common practice", but in reality it is not,
and in fact it is a bad one. As was previously stated, PostgreSQL is a
_relational_ database, and breaking that premise will eventually land you
in very big trouble. There is no solid reason not to have a primary key
for every table.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Version upgrade: is restoring the postgres database needed?

2018-03-01 Thread Melvin Davidson
On Thu, Mar 1, 2018 at 5:24 AM, Ron Johnson <ron.l.john...@cox.net> wrote:

>
> Or do we just apply the globals.sql created by "pg_dumpall --globals-only"?
>
> (We're upgrading by restoring all databases on a new server, that,
> naturally, has it's own new postgres, template0 and template1 databases.)
>
>
> Thanks
>
> --
> Angular momentum makes the world go 'round.
>
>
> is restoring the postgres database needed?

That would depend on how you did the dump. If you did a complete pg_dumpall
(did not use -g or any other limiting flags), then all roles and databases
are contained in the output file created. NOTE: restoring from the dumped
file will rebuild all indexes, because only the index definitions, not the
index data, are dumped.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Enforce primary key on every table during dev?

2018-02-28 Thread Melvin Davidson
On Wed, Feb 28, 2018 at 4:07 PM, Tim Cross <theophil...@gmail.com> wrote:

>
> Jeremy Finzel <finz...@gmail.com> writes:
>
> > We want to enforce a policy, partly just to protect those who might
> forget,
> > for every table in a particular schema to have a primary key.  This can't
> > be done with event triggers as far as I can see, because it is quite
> > legitimate to do:
> >
> > BEGIN;
> > CREATE TABLE foo (id int);
> > ALTER TABLE foo ADD PRIMARY KEY (id);
> > COMMIT;
> >
> > It would be nice to have some kind of "deferrable event trigger" or some
> > way to enforce that no transaction commits which added a table without a
> > primary key.
> >
>
> I think you would be better off having an automated report which alerts
> you to tables lacking a primary key and deal with that policy through
> other means. Using triggers in this way often leads to unexpected
> behaviour and difficult to identify bugs. The policy is a management
> policy and probably should be dealt with via management channels rather
> than technical ones. Besides, the likely outcome will be your developers
> will just adopt the practice of adding a serial column to every table,
> which in itself doesn't really add any value.
>
> Tim
>
>
> --
> Tim Cross
>
>










> I think you would be better off having an automated report which alerts
> you to tables lacking a primary key and deal with that policy through
> other means.

Perhaps a better solution is to have a meeting with the developers and
explain to them WHY the policy of enforcing a primary key is important.
Also, explain the purpose of primary keys and why it is not always suitable
to just use an integer or serial as the key, but rather why natural unique
(even multi-column) keys are better. But this begs the question: why are
"developers" allowed to design database tables? That should be the job of
the DBA! At the very minimum, the DBA should be reviewing, and have the
authority to approve or disapprove of, table/schema designs/changes.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Enforce primary key on every table during dev?

2018-02-28 Thread Melvin Davidson
On Wed, Feb 28, 2018 at 8:57 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 02/28/2018 05:52 AM, John McKown wrote:
>
>> On Wed, Feb 28, 2018 at 7:34 AM, Jeremy Finzel <finz...@gmail.com
>> <mailto:finz...@gmail.com>>wrote:
>>
>> We want to enforce a policy, partly just to protect those who might
>> forget, for every table in a particular schema to have a primary
>> key.  This can't be done with event triggers as far as I can see,
>> because it is quite legitimate to do:
>>
>> BEGIN;
>> CREATE TABLE foo (id int);
>> ALTER TABLE foo ADD PRIMARY KEY (id);
>> COMMIT;
>>
>> It would be nice to have some kind of "deferrable event trigger" or
>> some way to enforce that no transaction commits which added a table
>> without a primary key.
>>
>> Any ideas?
>>
>> Thanks,
>> Jeremy
>>
>>
>>
>> ​What stops somebody from doing:
>>
>> CREATE TABLE foo (filler text primary key default null, realcol1 int,
>> realcol2 text);
>>
>> And then just never bother to ever insert anything into the column
>> FILLER? It fulfills your stated requirement​ of every table having a
>>
>
> Then you would get this:
>
> test=# CREATE TABLE foo (filler text primary key default null, realcol1
> int, realcol2 text);
> CREATE TABLE
> test=# insert into  foo (realcol1, realcol2) values (1, 'test');
> ERROR:  null value in column "filler" violates not-null constraint
> DETAIL:  Failing row contains (null, 1, test).
>
>
> primary key. Of course, you could amend the policy to say a "non-NULL
>> primary key".
>>
>>
>>
>> --
>> I have a theory that it's impossible to prove anything, but I can't prove
>> it.
>>
>> Maranatha! <><
>> John McKown
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
As Adrian pointed out, by definition, PKs create a constraint which is
NOT NULLABLE.

Here is the SQL to check for tables with no primary key:

SELECT n.nspname,
       c.relname AS table,
       c.reltuples::bigint
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
 WHERE relkind = 'r'
   AND relhaspkey = FALSE
 ORDER BY n.nspname, c.relname;

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: system catalog permissions

2018-02-26 Thread Melvin Davidson
On Mon, Feb 26, 2018 at 7:50 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Feb 26, 2018 at 4:55 PM, Paul Jungwirth <
> p...@illuminatedcomputing.com> wrote:
>
>> On 02/26/2018 03:47 PM, Tom Lane wrote:
>>
>>> PropAAS DBA <d...@propaas.com> writes:
>>>
>>>> We have a client which is segmenting their multi-tenant cluster
>>>> (PostgreSQL 9.6) by schema, however if one of their clients connects via
>>>> pgadmin they see ALL schemas, even the ones they don't have access to
>>>> read.
>>>>
>>> PG generally doesn't assume that anything in the system catalogs is
>>> sensitive.  If you don't want user A looking at user B's catalog
>>> entries, give them separate databases, not just separate schemas.
>>>
>>
>> I'm sure this is what you meant, but you need to give them separate
>> *clusters*, right? Even with separate databases you can still get a list of
>> the other databases and other roles in the cluster. I would actually love
>> to be mistaken but when I looked at it a year or two ago I couldn't find a
>> way to lock that down (without breaking a lot of tools anyway).
>>
>
> ​Yes, both the database and role namespace is global to an individual
> cluster.  Its another level of trade-off; database and role names could
> realistically and easily be done UUID-style so knowing the labels doesn't
> really tell anything except a vague impression of host size.
>
> Assuming clients don't want to see their log files...
>
> David J.
>
>
> ... both the database and role namespace is global to an individual
> cluster

Slight correction to that.

https://www.postgresql.org/docs/10/static/runtime-config-connection.html

By default, db_user_namespace = off.
However, if set = on, then "you should create users as username@dbname",
which makes role names specific to each database.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: PostgreSQL backup stategies

2018-02-21 Thread Melvin Davidson
On Wed, Feb 21, 2018 at 10:00 PM, Luis Marin <luismarina...@gmail.com>
wrote:

> Dear Friends,
>
> Please, somebody knows a good source of information about PostgreSQL 9
> backup strategies for production systems, if you have read a good book with
> this subject, could you share the author and name of the book ?
>
> Thanks
>

> ... a good source of information about PostgreSQL 9 backup strategies

That is a very broad topic. It is also highly dependent on your particular
configuration.

What is your O/S?
How big is your database?
What is your high availability (replication) configuration?
What are the hours for production access?
Do you intend to use a 3rd-party solution? E.g.: Barman.

You might want to start here ->
https://www.compose.com/articles/postgresql-backups-and-everything-you-need-to-know/
-- 
*Melvin Davidson*

*Maj. Database & Exploration Specialist*

*Universe Exploration Command – UXC*

Employment by invitation only!


Re: gathering ownership and grant permissions

2018-02-16 Thread Melvin Davidson
On Fri, Feb 16, 2018 at 3:50 PM, chris <chr...@pgsqlrocket.com> wrote:

> I'm sorry I realized that I only need to know which users have permissions
> to the table which I can do through
>
> $ psql -t
>
> SELECT grantee
> FROM information_schema.role_table_grants
> WHERE table_name='table_name'
> GROUP BY grantee;
>
> thanks!
>
>
>
> On 02/16/2018 01:13 PM, chris wrote:
>
> Thanks for the quick response.
>
> That does not work for what I need because I only need the owner and
> permissions of one table, I need the grant to look like the output that
> pg_dump displays.
>
> ex:
>
> GRANT ALL ON TABLE testing_cdc TO bob;
>
>
> --
> -- PostgreSQL database dump complete
> --
>
> I need a way which my script can isolate the owner's name and set it to a
> variable on its own. Same with grant.
>
> Thanks
>
> On 02/16/2018 01:05 PM, Melvin Davidson wrote:
>
>
>
> On Fri, Feb 16, 2018 at 2:47 PM, chris <chr...@pgsqlrocket.com> wrote:
>
>> HI,
>>
>> I would like to know if there is a better way to grab the grant
>> permissions  as well as the "owner to" of a table.
>>
>> I can currently do this through a pg_dumb with greps for "^grant" and
>> "^alter" but than I need to do a word search of those lines looking for the
>> specific answers which gets much more involved.
>>
>> I essentially need to know what grant command was ran and use that grant
>> permission to set to a variable for a script.
>>
>> Ex: GRANT ALL ON TABLE testing TO bob; then set only the "all" to a
>> variable.
>>
>> And then same for the ALTER  OWNER TO bob.
>>
>> This is on postgresl 9.6.
>>
>> Thank you,
>>
>> Chris
>>
>>
>>
>
>
> >... is a better way to grab the grant permissions as well as the "owner
> >to" of a table.
>
> Chris, see if the query below will help. Note, you need to execute as a
> superuser.
>
> SELECT n.nspname,
>        c.relname,
>        o.rolname AS owner,
>        array_to_string(ARRAY[c.relacl], '|') AS permits
>   FROM pg_class c
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_authid o ON (o.oid = c.relowner)
>  WHERE n.nspname NOT LIKE 'pg_%'
>    AND n.nspname NOT LIKE 'inform_%'
>    AND relkind = 'r'
>  ORDER BY 1;
>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>
>
>
>


*>I'm sorry I realized that I only need to know which users have
permissions to the table*

*No need to apologize. No harm, no foul. Thank you for your query.*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
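For what it's worth, the "grep the pg_dump output" approach described earlier in the thread can be done without word-by-word searching by matching the whole GRANT line at once. The sketch below is a hypothetical illustration (the regex and the sample lines are assumptions, not taken from the thread):

```python
import re

# Match a pg_dump-style line such as "GRANT ALL ON TABLE testing TO bob;"
# and capture the privilege list, table name, and grantee in one pass.
GRANT_RE = re.compile(r"^GRANT (.+) ON TABLE (\S+) TO (.+);$")

def parse_grant(line: str):
    """Return (privileges, table, grantee), or None if the line is not a GRANT."""
    m = GRANT_RE.match(line)
    return m.groups() if m else None

print(parse_grant("GRANT ALL ON TABLE testing TO bob;"))
# -> ('ALL', 'testing', 'bob')
```

Each captured group can then be assigned to a script variable directly, instead of grepping and re-scanning the line.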


Re: gathering ownership and grant permissions

2018-02-16 Thread Melvin Davidson
On Fri, Feb 16, 2018 at 2:47 PM, chris <chr...@pgsqlrocket.com> wrote:

> HI,
>
> I would like to know if there is a better way to grab the grant
> permissions  as well as the "owner to" of a table.
>
> I can currently do this through a pg_dump with greps for "^grant" and
> "^alter", but then I need to do a word search of those lines looking for the
> specific answers, which gets much more involved.
>
> I essentially need to know what grant command was run and set that grant
> permission to a variable for a script.
>
> Ex: GRANT ALL ON TABLE testing TO bob; then set only the "all" to a
> variable.
>
> And then same for the ALTER  OWNER TO bob.
>
> This is on postgresql 9.6.
>
> Thank you,
>
> Chris
>
>
>


*>... is a better way to grab the grant permissions as well as the "owner
to" of a table.*

*Chris, see if the query below will help. Note, you need to execute as a
superuser.*

SELECT n.nspname,
       c.relname,
       o.rolname AS owner,
       array_to_string(ARRAY[c.relacl], '|') as permits
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_authid o ON (o.oid = c.relowner)
 WHERE n.nspname not like 'pg_%'
   AND n.nspname not like 'inform_%'
   AND relkind = 'r'
 ORDER BY 1;




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
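A note on reading the `permits` column from the catalog query above: PostgreSQL stores each grant as an aclitem string like `bob=arwdDxt/postgres` (grantee, privilege letters, grantor). A minimal, hypothetical sketch of decoding one entry, using the letter meanings documented for GRANT (a=INSERT, r=SELECT, w=UPDATE, d=DELETE, D=TRUNCATE, x=REFERENCES, t=TRIGGER):

```python
# Decode a single aclitem entry such as "bob=arwdDxt/postgres" into
# (grantee, privilege names, grantor). An empty grantee means PUBLIC.
PRIV_LETTERS = {
    "a": "INSERT", "r": "SELECT", "w": "UPDATE", "d": "DELETE",
    "D": "TRUNCATE", "x": "REFERENCES", "t": "TRIGGER",
}

def parse_aclitem(item: str):
    """Split 'grantee=privs/grantor' and expand the privilege letters."""
    grantee, _, rest = item.partition("=")
    privs, _, grantor = rest.partition("/")
    names = [PRIV_LETTERS[c] for c in privs if c in PRIV_LETTERS]
    return (grantee or "PUBLIC", names, grantor)

print(parse_aclitem("bob=arwdDxt/postgres"))
```

This is the same information pg_dump renders as `GRANT ALL ON TABLE ... TO bob;`, just in catalog form.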

