Re: [GENERAL] Incremental refresh - Materialized view

2017-11-07 Thread Melvin Davidson
On Tue, Nov 7, 2017 at 7:08 AM, Rakesh Kumar <rakeshkumar...@outlook.com>
wrote:

> You have already been informed. PG, as yet, does not allow incremental
> refresh of a MV.  It allows online refresh of a MV, but that it does by
> doing a full table scan of the base table and rebuilding the MV.
>
>
> --
> *From:* Krithika Venkatesh <krithikavenkates...@gmail.com>
> *To:* John R Pierce <pie...@hogranch.com>
> *Cc:* pgsql-general@postgresql.org
> *Sent:* Tuesday, November 7, 2017 2:35 AM
> *Subject:* Re: [GENERAL] Incremental refresh - Materialized view
>
> Materialized view log is one of the features in Oracle. It creates a log in
> which the changes made to the table are recorded. This log is required for
> an asynchronous materialized view that is refreshed incrementally.
>
> I read in the below link about incrementally refreshing the materialized
> view in postgresql:
>
> https://medium.com/@hariprasathnallsamy/postgresql-materialized-view-incremental-refresh-44d1ca742599
>
> Can someone let me know how to do an incremental refresh using the Write Ahead Log?
>
> Thanks,
> Krithika
>
>
>
>
>
> On 07-Nov-2017 12:37 PM, "John R Pierce" <pie...@hogranch.com> wrote:
>
> On 11/6/2017 10:38 PM, Krithika Venkatesh wrote:
>
> I need to implement incremental refresh of materialized view.
>
> Please let me know how to do the incremental refresh of materialized view
> in postgresql 9.5.9 version.
>
> Is there anything similar to materialized view log in PostgreSQL?
>
>
>
> you refresh a materialized view with REFRESH MATERIALIZED VIEW name;
> There are no 'incremental' methods, as views can be quite complex.
>
> I do not know what you mean by 'materialized view log', is this a feature
> of some other database server ?
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>
>
>
>
>Please let me know how to do the incremental refresh of materialized view
in postgresql 9.5.9 version.

As previously stated, there is currently no such thing as an incremental
refresh of a materialized view.
I believe what you are looking for is:

REFRESH MATERIALIZED VIEW CONCURRENTLY mat_view_name;

REFRESH MATERIALIZED VIEW
<https://www.postgresql.org/docs/9.5/static/sql-refreshmaterializedview.html>

Specifying CONCURRENTLY will prevent locking of the underlying table(s),
but will extend the time it takes to complete the refresh.
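
One caveat: REFRESH ... CONCURRENTLY only works when the materialized view
has at least one UNIQUE index on column names (not expressions) that covers
all rows. A minimal sketch, assuming a hypothetical view mat_view_name with
an id column:

CREATE UNIQUE INDEX mat_view_name_id_idx ON mat_view_name (id);
REFRESH MATERIALIZED VIEW CONCURRENTLY mat_view_name;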

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Naming conventions for column names

2017-11-06 Thread Melvin Davidson
On Mon, Nov 6, 2017 at 10:04 AM, Karsten Hilbert <karsten.hilb...@gmx.net>
wrote:

> On Mon, Nov 06, 2017 at 08:23:07PM +0530, Sachin Kotwal wrote:
>
> > You are right. Those naming conventions are old and that is why we have
> to
> > improve those where ever and when ever required.
>
> I'd love to see the "requirement" defined.
>
> Regards,
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

All,

Please take note that when PG 9.2 was released, a column rename of
"procpid" to just "pid" in pg_stat_activity DID break prior user code;
therefore I must strongly advise against any other unnecessary column
renames in system catalogs.
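
For anyone maintaining monitoring scripts, the break looks like this
(a minimal illustration):

-- works on 9.1 and earlier, fails on 9.2+:
SELECT procpid, current_query FROM pg_stat_activity;

-- 9.2 and later (current_query was likewise renamed to query):
SELECT pid, query FROM pg_stat_activity;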

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Re: Restoring tables with circular references dumped to separate files

2017-10-21 Thread Melvin Davidson
On Sat, Oct 21, 2017 at 4:48 PM, doganmeh <meh...@edgle.com> wrote:

> The list approach for partial restore is also useful, thank you.
>
> On another note, I used to take full backups (entire database), however
> switched to table by table scheme in order to make it more VCS friendly.
> Namely, so I only check into github the dumps of the tables that are
> updated
> only.
>
> So, from that perspective, is there a dump-restore scenario that is widely
> used, but is also VCS friendly? To my knowledge, pg_restore does not
> restore
> backups that are in "plain text" format, and compressed formats such as
> "tar" would not be github friendly.
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


>...is there a dump-restore scenario that is widely used, but is also VCS
friendly?

You might want to give consideration to pg_extractor:

https://github.com/omniti-labs/pg_extractor
https://www.keithf4.com/pg_extractor/
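
As a rough sketch of how it is typically invoked (the option names here are
recalled from the project README, so treat them as assumptions and verify
against --help):

pg_extractor.py --host=localhost --dbname=yourdb --basedir=/path/to/git/repo --getall

Each database object then lands in its own file under the base directory,
which diffs cleanly in version control.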

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Re: Restoring tables with circular references dumped to separate files

2017-10-21 Thread Melvin Davidson
On Sat, Oct 21, 2017 at 8:24 AM, doganmeh <meh...@edgle.com> wrote:

> Seems that would be easier and less error prone. Thanks,
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


>I was dumping each table to a separate file so I could pick and choose
when restoring...

It seems to me that instead of multiple single-table dumps, you could take
advantage of restoring from a list-file.

Just:

1. do a dump with custom format ( -F c )

   EG: pg_dump -U postgres -F c -t table1 -t table2 -t table3 yourdb > yourdb.dmp

2. use pg_restore -l to create a list-file

   EG: pg_restore -l yourdb.dmp > yourdb.lis

3. edit yourdb.lis and comment out ( prefix with ; ) any line you don't want

4. then use pg_restore with the edited list-file

   EG: pg_restore -L yourdb.lis yourdb.dmp

See examples at the end of

https://www.postgresql.org/docs/9.6/static/app-pgrestore.html
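
For illustration, an edited yourdb.lis might contain entries like the sketch
below (the entry numbers and OIDs are made up; pg_restore ignores any line
that starts with a semicolon):

185; 1259 16386 TABLE public table1 postgres
;186; 1259 16390 TABLE public table2 postgres
187; 1259 16394 TABLE public table3 postgres

Here table2 is commented out, so it will not be restored.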

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to find out extension directory

2017-10-20 Thread Melvin Davidson
On Fri, Oct 20, 2017 at 4:12 PM, rakeshkumar464 <rakeshkumar...@outlook.com>
wrote:

> I am documenting on automating installation of pgaudit extension for
> containers.  On my laptop I see that the directory where the files
> pgaudit.control and pgaudit--1.2.sql needs to be present is
>
> /usr/share/postgresql/10/extension.
>
> How do I know beforehand where the dir path is ?
>
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

> How do I know beforehand where the dir path is ?

That would depend on the O/S, but it is not really necessary to know the
path, as PostgreSQL will know it.
Simply establish a connection to the database you want to create the
extension in and do

CREATE EXTENSION extension_name;

https://www.postgresql.org/docs/10/static/sql-createextension.html
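
If you do need the path beforehand (e.g., to copy pgaudit.control and its
SQL scripts into a container image), pg_config will report it; a sketch
assuming the pg_config belonging to the target installation is on the PATH:

pg_config --sharedir
# e.g. /usr/share/postgresql/10
# extension files belong in $(pg_config --sharedir)/extension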

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Melvin Davidson
On Wed, Oct 18, 2017 at 11:46 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Oct 18, 2017 at 8:16 AM, Igal @ Lucee.org <i...@lucee.org> wrote:
>
>> On 10/18/2017 7:45 AM, Ron Johnson wrote:
>>
>> On 10/18/2017 09:34 AM, Igal @ Lucee.org wrote:
>>
>> A bit off-topic here, but why upgrade to 9.6 when you can upgrade to
>> 10.0?
>>
>>
>> There's no way we're going to put an x.0.0 version into production.
>>
>>
>> Then think of it as 9.7.0 but with an easier name to pronounce ;)
>>
>
> The OP likely intended to say an "x.0" version, for which a "[9.7].0"
> version would be just the same as a "[10].0" version.
>
> The contributors do an excellent job but the reality of this community is
> that a critical mass of people do not start seriously testing and using a
> new version until it is officially released.  The first couple of bug-fix
> releases are thus, unfortunately, likely to be non-trivial as the masses
> flex the system at scales and using workloads that were not known or
> available to the developers.  Its a balancing act for most and falling on
> the side of waiting for a few point releases before promoting to production
> is, I suspect, common.
>
> David J.
>
>
I support the policy of using caution with regard to new versions. They
are often thought of as "bleeding edge" for the reason described by David G.
Johnston. The fact that PostgreSQL 10 was only released this month is
critical, and therefore it should not be on a production server. It should
be used for development, or QA, at best.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Adding identity column to a non-empty table

2017-10-15 Thread Melvin Davidson
On Sun, Oct 15, 2017 at 9:09 PM, Igal @ Lucee.org <i...@lucee.org> wrote:

> Melvin,
>
> On 10/15/2017 5:56 PM, Melvin Davidson wrote:
>
>
> On Sun, Oct 15, 2017 at 8:24 PM, Igal @ Lucee.org <i...@lucee.org> wrote:
>
>> On 10/15/2017 4:01 PM, Igal @ Lucee.org wrote:
>>
>>>
>>> Hello,
>>>
>>> I'm trying to add an identity column to a table that has records
>>> (previously had a bigserial column which I removed):
>>>
>>>
>> There is probably a better solution, but the one I came up with is to add
>> the column as BIGSERIAL and DROP the SEQUENCE CASCADE, SELECT the max(rid)
>> + 1, and then convert the column to IDENTITY:
>>
>>
> The correct way to make r_id the primary key would be:
>
> ALTER TABLE  event_log
>   ADD COLUMN r_id SERIAL;
>
> ALTER TABLE  event_log
>   ALTER COLUMN r_id TYPE BIGINT,
>   ADD CONSTRAINT dummy_pk PRIMARY KEY (r_id);
>
> That automatically generates the column as
>
> r_id bigint NOT NULL DEFAULT nextval('dummy_r_id_seq'::regclass),
>   CONSTRAINT dummy_pk PRIMARY KEY (r_id)
>
> and creates the appropriate sequence for you.
>
>
> Does that use the new IDENTITY construct that was added in Postgres 10?  I
> do not really care for the PRIMARY KEY constraint.  I just want the
> sequence with the benefits of the new IDENTITY "type".
>
> Thanks,
>
>
> Igal
>

> Does that use the new IDENTITY construct that was added in Postgres 10?

I cannot say, as I do not yet have PostgreSQL 10 installed because it was
very recently released.
However, the method I supplied works for all prior versions of PostgreSQL.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Adding identity column to a non-empty table

2017-10-15 Thread Melvin Davidson
On Sun, Oct 15, 2017 at 8:24 PM, Igal @ Lucee.org <i...@lucee.org> wrote:

> On 10/15/2017 4:01 PM, Igal @ Lucee.org wrote:
>
>>
>> Hello,
>>
>> I'm trying to add an identity column to a table that has records
>> (previously had a bigserial column which I removed):
>>
>>   ALTER TABLE event_log
>> ADD COLUMN r_id BIGINT GENERATED BY DEFAULT AS IDENTITY;
>>
>> But I'm getting an error `column r_id contains null values`.
>>
>> How can I add the column and populate it for the existing rows?
>>
>>
> There is probably a better solution, but the one I came up with is to add
> the column as BIGSERIAL and DROP the SEQUENCE CASCADE, SELECT the max(rid)
> + 1, and then convert the column to IDENTITY:
>
>   ALTER TABLE transient.event_log ADD COLUMN r_id BIGSERIAL;
>
>   -- find the sequence name and then
>   DROP sequence  CASCADE;
>
>   -- find min value by executing select max(r_id) + 1
>   ALTER table transient.event_log
>   ALTER COLUMN r_id
>   ADD GENERATED BY DEFAULT AS IDENTITY (MINVALUE );
>
> If anyone has a better suggestion then please let me know.
>
> Thanks,
>
>
> Igal
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

The correct way to make r_id the primary key would be:

ALTER TABLE  event_log
  ADD COLUMN r_id SERIAL;

ALTER TABLE  event_log
  ALTER COLUMN r_id TYPE BIGINT,
  ADD CONSTRAINT dummy_pk PRIMARY KEY (r_id);

That automatically generates the column as

r_id bigint NOT NULL DEFAULT nextval('dummy_r_id_seq'::regclass),
  CONSTRAINT dummy_pk PRIMARY KEY (r_id)

and creates the appropriate sequence for you.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Non-overlapping updates blocking each other

2017-10-15 Thread Melvin Davidson
On Sun, Oct 15, 2017 at 8:01 AM, Seamus Abshere <sea...@abshere.net> wrote:

> > On Sat, Oct 14, 2017 at 10:30 AM, Seamus Abshere <sea...@abshere.net>
> > > UPDATE [...] WHERE id BETWEEN 'ff00----'
> AND
> > > 'ff0f----'
> > > and
> > > UPDATE [...] WHERE id BETWEEN 'f8c0----'
> AND
> > > 'f8ff----'
> > > Yet one blocks the other one. How is this possible?
>
> On Sat, Oct 14, 2017, at 12:32 PM, Melvin Davidson wrote:
> > More than likely, the optimizer has determined that a table scan is best,
> > in which case it will use a table lock.
> > You can also execute the following query and check the wait_event_type to
> > verify.
>
> hi Melvin,
>
> Very interesting! The result:
>
> wait_event  | page
> wait_event_type | Lock
>
> So I guess this means that the ids don't overlap, but they are sometimes
> found in the same page, and the whole page gets locked?
>
> Any narrative (pretending I don't know anything) would be very helpful.
>
> Thanks!
> Seamus
>
> PS. I do a SELECT 1 FROM x WHERE [ID_RANGE] FOR UPDATE right before the
> update, but that's to prevent a race condition. The id ranges still
> don't overlap.
>

Seamus,

As Tom suggests, to get an exact cause of your problem, it is very
important we get the following additional information

1. Exact PostgreSQL version. IE: SELECT  version();
2. Your O/S
3. The full structure of your table in query, including constraints
4. The full, exact queries.
5. The FULL output from the query I gave you.
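
In the meantime, page-level lock waits like the one shown above can be
inspected directly in pg_locks; a minimal sketch:

SELECT locktype, relation::regclass AS rel, page, tuple, pid, mode, granted
  FROM pg_locks
 WHERE locktype IN ('page', 'tuple');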

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Non-overlapping updates blocking each other

2017-10-14 Thread Melvin Davidson
On Sat, Oct 14, 2017 at 12:01 PM, Thomas Kellerer <spam_ea...@gmx.net>
wrote:

> Melvin Davidson schrieb am 14.10.2017 um 17:32:
>
>> More than likely, the optimizer has determined that a table scan is best,
>> in which case it will use a table lock.
>> That means the updates will block each other.
>>
>
> Since when does Postgres lock the whole table during an update?
>
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


>Since when does Postgres lock the whole table during an update?
When the optimizer thinks it needs to do a TABLE SCAN!

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Non-overlapping updates blocking each other

2017-10-14 Thread Melvin Davidson
On Sat, Oct 14, 2017 at 10:30 AM, Seamus Abshere <sea...@abshere.net> wrote:

> hi,
>
> I've got 2 updates on non-overlapping uuid (primary key) ranges. For
> example:
>
> UPDATE [...] WHERE id BETWEEN 'ff00----' AND
> 'ff0f----'
> and
> UPDATE [...] WHERE id BETWEEN 'f8c0----' AND
> 'f8ff----'
>
> Yet one blocks the other one. How is this possible?
>
> Thanks,
> Seamus
>
> --
> Seamus Abshere, SCEA
> https://www.faraday.io
> https://github.com/seamusabshere
> https://linkedin.com/in/seamusabshere
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Have you done an EXPLAIN on each of the updates?
More than likely, the optimizer has determined that a table scan is best,
in which case it will use a table lock.
That means the updates will block each other.

You can also execute the following query and check the wait_event_type to
verify.

SELECT c.datname,
   c.pid as pid,
   c.client_addr,
   c.usename as user,
   c.query,
   c.wait_event,
   c.wait_event_type,
  l.pid as blocked_by,
   c.query_start,
   current_timestamp - c.query_start as duration
  FROM pg_stat_activity c
  LEFT JOIN pg_locks l1 ON (c.pid = l1.pid and not l1.granted)
  LEFT JOIN pg_locks l2 on (l1.relation = l2.relation and l2.granted)
  LEFT JOIN pg_stat_activity l ON (l2.pid = l.pid)
  LEFT JOIN pg_stat_user_tables t ON (l1.relation = t.relid)
 WHERE pg_backend_pid() <> c.pid
ORDER BY datname,
 query_start;

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Multiple Schemas vs. Multiple Databases

2017-10-13 Thread Melvin Davidson
On Fri, Oct 13, 2017 at 3:29 PM, Igal @ Lucee.org <i...@lucee.org> wrote:

> Hello,
>
> I have read quite a few articles about multiple schemas vs. multiple
> databases, but they are all very generic so I wanted to ask here for a
> specific use case:
>
> I am migrating a Web Application from MS SQL Server to PostgreSQL.  For
> the sake of easier maintenance, on SQL Server I have two separate databases:
>
>   1) Primary database containing the data for the application
>
>   2) Secondary database containing "transient" data, e.g. logging of
> different activities on the website in order to generate statistics etc.
>
> Both databases belong to the same application with the same roles and
> permissions.
>
> The secondary database grows much faster, but the data in it is not
> mission-critical , and so the data is aggregated daily and the summaries
> are posted to the primary database, because only the aggregates are
> important here.
>
> To keep the database sizes from growing too large, I periodically delete
> old data from the secondary database since the data becomes obsolete after
> a certain period of time.
>
> At first I thought of doing the same in Postgres, but now it seems like
> the better way to go would be to keep one database with two schemas:
> primary and transient.
>
> The main things that I need to do is:
>
>   a) Be able to backup/restore each "part" separately.  Looks like pg_dump
> allows that for schemas via the --schema=schema argument.
>
>   b) Be able to query aggregates from the secondary "part" and store the
> results in the primary one, which also seems easier with multiple schemas
> than multiple databases.
>
> Am I right to think that two schemas are better in this use case or am I
> missing something important?
>
> Thanks,
>
> Igal Sapir
> Lucee Core Developer
> Lucee.org <http://lucee.org/>
>

>b) Be able to query aggregates from the secondary "part" and store the
results in the primary one, which also seems easier with multiple schemas
than multiple databases.

If that is what you need to do, then definitely use multiple schemas. In
PostgreSQL, the only way to do cross db queries / DML, is with the dblink
extension, and from personal use, it is a PIA to use.
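
For contrast, a cross-database aggregate via dblink would look roughly like
this (the connection string, table, and column names are illustrative):

CREATE EXTENSION dblink;

SELECT cnt
  FROM dblink('dbname=transient_db',
              'SELECT count(*) FROM activity_log')
       AS t(cnt bigint);

With two schemas in one database it collapses to a plain query:

SELECT count(*) FROM transient.activity_log;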

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] "Shared strings"-style table

2017-10-13 Thread Melvin Davidson
On Fri, Oct 13, 2017 at 12:52 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
> On Fri, Oct 13, 2017 at 12:43 PM, Seamus Abshere <sea...@abshere.net>
> wrote:
>
>> > > On Fri, Oct 13, 2017 at 8:49 AM, Seamus Abshere wrote:
>> > >> Theoretically / blue sky, could there be a table or column type that
>> > >> transparently handles "shared strings" like this, reducing size on
>> disk
>> > >> at the cost of lookup overhead for all queries?
>> > >> (I guess maybe it's like TOAST, but content-hashed and de-duped and
>> not
>> > >> only for large objects?)
>>
>> On Fri, Oct 13, 2017, at 01:29 PM, Melvin Davidson wrote:
>> > What was described is exactly what relations and Foreign Keys are for.
>>
>> hi Melvin, appreciate the reminder. Our issue is that we have 300+
>> columns and frequently include them in the SELECT or WHERE clauses... so
>> traditional normalization would involve hundreds of joins.
>>
>> That's why I ask about a new table or column type that handles basic
>> translation and de-duping transparently, keeping the coded values
>> in-table.
>>
>
> >I ask about a new table or column type that handles basic translation
>
> AFAIK, there is no such thing currently available. Your initial post
> indicated you were working with spreadsheets and were
> looking to translate to a PostgreSQL database. There is no shortcut to
> normalizing, but the time you spend doing so in the initial
> design will be well worthwhile once it is implemented.
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Seamus,

Just a thought. As I mentioned previously, there is no shortcut to
optimizing your database. However, you can do it in increments.

First, create all your foreign key / repetitive data tables.

Next, add additional FK columns to your current tables to reference the
FK / repetitive data tables.

Modify your application / queries to utilize the new columns.

Do extensive testing to make sure your modifications work properly.

VERY IMPORTANT: Before the next step, make a backup of the existing
database and verify you have a good copy.

Finally, drop all the old repetitive data columns.
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] "Shared strings"-style table

2017-10-13 Thread Melvin Davidson
On Fri, Oct 13, 2017 at 12:43 PM, Seamus Abshere <sea...@abshere.net> wrote:

> > > On Fri, Oct 13, 2017 at 8:49 AM, Seamus Abshere wrote:
> > >> Theoretically / blue sky, could there be a table or column type that
> > >> transparently handles "shared strings" like this, reducing size on
> disk
> > >> at the cost of lookup overhead for all queries?
> > >> (I guess maybe it's like TOAST, but content-hashed and de-duped and
> not
> > >> only for large objects?)
>
> On Fri, Oct 13, 2017, at 01:29 PM, Melvin Davidson wrote:
> > What was described is exactly what relations and Foreign Keys are for.
>
> hi Melvin, appreciate the reminder. Our issue is that we have 300+
> columns and frequently include them in the SELECT or WHERE clauses... so
> traditional normalization would involve hundreds of joins.
>
> That's why I ask about a new table or column type that handles basic
> translation and de-duping transparently, keeping the coded values
> in-table.
>

>I ask about a new table or column type that handles basic translation

AFAIK, there is no such thing currently available. Your initial post
indicated you were working with spreadsheets and were
looking to translate to a PostgreSQL database. There is no shortcut to
normalizing, but the time you spend doing so in the initial
design will be well worthwhile once it is implemented.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] "Shared strings"-style table

2017-10-13 Thread Melvin Davidson
On Fri, Oct 13, 2017 at 12:12 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Fri, Oct 13, 2017 at 8:49 AM, Seamus Abshere <sea...@abshere.net>
> wrote:
>
>> Theoretically / blue sky, could there be a table or column type that
>> transparently handles "shared strings" like this, reducing size on disk
>> at the cost of lookup overhead for all queries?
>>
>> (I guess maybe it's like TOAST, but content-hashed and de-duped and not
>> only for large objects?)
>>
>
> Row-independence is baked into PostgreSQL pretty deeply...
>
> I think an enum type is about as close are you are likely to get if you
> don't wish to setup your own foreign-key relationships with surrogate keys.
>
> David J.
>

I STRONGLY advise against the use of ENUMS.

What was described is exactly what relations and Foreign Keys are for.

Example:
CREATE TABLE residence_type
(
  residence_type_id   INTEGER NOT NULL,
  residence_type_desc TEXT NOT NULL,
  CONSTRAINT residence_type_pk PRIMARY KEY (residence_type_id)
);

CREATE TABLE state
(
  state_id   CHAR(02) NOT NULL,
  state_name TEXT NOT NULL,
  CONSTRAINT state_pk PRIMARY KEY (state_id)
);

CREATE TABLE residence
(
  residence_id      BIGINT NOT NULL,
  residence_type_id INTEGER,
  street_num        CHARACTER(10),
  street_name       CHARACTER(20),
  city              CHARACTER(40),
  state_id          CHAR(02),
  CONSTRAINT residence_pk PRIMARY KEY (residence_id),
  CONSTRAINT fk_residence_state FOREIGN KEY (state_id)
    REFERENCES state (state_id),
  CONSTRAINT fk_residence_type FOREIGN KEY (residence_type_id)
    REFERENCES residence_type (residence_type_id)
);

SELECT t.residence_type_desc,
       r.street_num,
       r.street_name,
       r.city,
       s.state_name
  FROM residence r
  JOIN residence_type t ON t.residence_type_id = r.residence_type_id
  JOIN state s ON s.state_id = r.state_id
 WHERE r.residence_id = 12345;

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Determine size of table before it's committed?

2017-10-11 Thread Melvin Davidson
On Wed, Oct 11, 2017 at 9:43 AM, Seamus Abshere <sea...@abshere.net> wrote:

> hi,
>
> I've had an `INSERT INTO x SELECT FROM [...]` query running for more
> then 2 days.
>
> Is there a way to see how big x has gotten? Even a very rough estimate
> (off by a gigabyte) would be fine.
>
> Best,
> Seamus
>
>
> --
> Seamus Abshere, SCEA
> https://www.faraday.io
> https://github.com/seamusabshere
> https://linkedin.com/in/seamusabshere
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>Is there a way to see how big x has gotten?...

Try:

SELECT n.nspname as schema,
   c.relname as table,
   a.rolname as owner,
   c.relfilenode as filename,
   c.reltuples::bigint,
   pg_size_pretty(pg_relation_size(n.nspname|| '.' || c.relname)) as
size,
   pg_size_pretty(pg_total_relation_size(n.nspname|| '.' || c.relname))
as total_size,
   pg_relation_size(n.nspname|| '.' || c.relname) as size_bytes,
   pg_total_relation_size(n.nspname|| '.' || c.relname) as
total_size_bytes,
   CASE WHEN c.reltablespace = 0
THEN 'pg_default'
ELSE (SELECT t.spcname
FROM pg_tablespace t WHERE (t.oid = c.reltablespace)
)
END as tablespace
FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_authid a ON ( a.oid = c.relowner )
  WHERE relname = 'x'
ORDER BY total_size_bytes DESC, 1, 2;
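
If a rough number is all you need, a shorter check is (table name assumed
to be x; schema-qualify as necessary):

SELECT pg_size_pretty(pg_total_relation_size('x'));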

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Melvin Davidson
On Mon, Oct 9, 2017 at 9:51 AM, Larry Rosenman <l...@lerctr.org> wrote:

> If you want a consistent database (you **REALLY** do), pg_dump is the
> correct tool.
>
>
>
> --
>
> Larry Rosenman                  http://www.lerctr.org/~ler
>
> Phone: +1 214-642-9640          E-Mail: l...@lerctr.org
>
> US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106
>
>
>
>
>
> *From: *<pgsql-general-ow...@postgresql.org> on behalf of Ron Johnson <
> ron.l.john...@cox.net>
> *Date: *Monday, October 9, 2017 at 8:41 AM
> *To: *"pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
> *Subject: *[GENERAL] Using cp to back up a database?
>
>
>
> Hi,
>
> v8.4.20
>
> This is what the current backup script uses:
>
> /usr/bin/psql -U postgres -c "SELECT pg_start_backup('Incrementalbackup',true);"
>
> cp -r /var/lib/pgsql/data/* $dumpdir/data/
>
> /usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"
>
>
>
> Should it use rsync or pg_dump instead?
>
> Thanks
>
>
> --
>
> World Peace Through Nuclear Pacification
>
>
>Should it use rsync or pg_dump instead?

It is a matter of choice. What you are doing with the script is making a
backup of the entire PostgreSQL data file directory. That includes all
PostgreSQL system catalogs as well as user objects. A restore of your data
directory would be faster than a full restore of pg_dump as indexes and
constraints would not have to be recreated. However, it does not give you
the option to restore one or more selected objects.
pg_dump, on the other hand, gives you the flexibility of selecting what to
backup and what to restore. FWIW, I much prefer pg_dump.
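
A minimal sketch of the pg_dump route, using the custom format so that
pg_restore can later select individual objects (paths and names are
illustrative):

pg_dump -U postgres -F c -f /backups/mydb.dmp mydb

# later, restore just one table from it:
pg_restore -U postgres -d mydb -t some_table /backups/mydb.dmp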

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] delete a file everytime pg server starts/crashes

2017-10-08 Thread Melvin Davidson
On Sun, Oct 8, 2017 at 10:01 AM, athinivas <athini...@gmail.com> wrote:

> Yes, will take the epoch value from same and create a new file. Upon
> subsequent calls, will access with that filename... if it fails (in case
> the postmaster is restarted), will create a new one.
>
> @Melvin Does this capture all failure scenarios?? Or you have any other
> better ways to do it. Your comments are much appreciated !!
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

athinivas,

This is covered in the documentation:

https://www.postgresql.org/docs/9.6/static/functions-info.html

9.25. System Information Functions
pg_postmaster_start_time()

It does not matter what caused the failure; it is the time Postgres is
started again.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] delete a file everytime pg server starts/crashes

2017-10-08 Thread Melvin Davidson
On Sun, Oct 8, 2017 at 8:33 AM, athinivas <athini...@gmail.com> wrote:

> Hi,
>
> Thank you...will try it :) ...As of now, I'm creating the filename as
> pg_start_time so that, every time the server is up, a new file will be
> created.
>
>
> Regards,
> Athi
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>... I'm creating the filename as pg_start_time...

Just an FYI, you do know that

SELECT pg_postmaster_start_time();

will return start time for postgres?
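
For example, a filename-friendly rendering of that timestamp (the format
pattern is just one choice):

SELECT to_char(pg_postmaster_start_time(), 'YYYYMMDD_HH24MISS');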

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] delete a file everytime pg server starts/crashes

2017-10-05 Thread Melvin Davidson
On Thu, Oct 5, 2017 at 10:04 AM, athinivas <athini...@gmail.com> wrote:

> Hi,
>
> I'm having a requirement to delete a file in system whenever pg server is
> started/crashed. Any idea?
>
> Thanks,
> Athi
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

It would be nice to know your Operating System and PostgreSQL version().

That being said, have you looked at the documentation for
"shared_preload_libraries" ?

https://www.postgresql.org/docs/9.6/static/runtime-config-client.html

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] time series data

2017-10-02 Thread Melvin Davidson
On Sun, Oct 1, 2017 at 6:20 PM, Clifford Snow <cliff...@snowandsnow.us>
wrote:

> I have a stream that updates every minute with a trigger that updates
> another table with information from the stream. That way I'm constantly
> updated with no need to run a script to update before I want a report.
>
> Clifford
>
> On Sun, Oct 1, 2017 at 10:08 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>>
>> On Sun, Oct 1, 2017 at 4:17 AM, Khalil Khamlichi <
>> khamlichi.kha...@gmail.com> wrote:
>>
>>> Hi everyone,
>>>
>>> I have a data stream of a call center application coming in  to postgres
>>> in this format :
>>>
>>> user_name, user_status, event_time
>>>
>>> 'user1', 'ready', '2017-01-01 10:00:00'
>>> 'user1', 'talking', '2017-01-01 10:02:00'
>>> 'user1', 'after_call', '2017-01-01 10:07:00'
>>> 'user1', 'ready', '2017-01-01 10:08:00'
>>> 'user1', 'talking', '2017-01-01 10:10:00'
>>> 'user1', 'after_call', '2017-01-01 10:15:00'
>>> 'user1', 'paused', '2017-01-01 10:20:00'
>>> ...
>>> ...
>>>
>>> so as you see each new insert of an "event" is in fact the start_time of
>>> that event and also the end_time of the previous one so should be used to
>>> calculate the duration of this previous one.
>>>
>>> What is the best way to get user_status statistics like total duration,
>>> frequency, avg ...etc , does any body have an experience with this sort of
>>> data streams ?
>>>
>>>
>>> Thanks in advance.
>>>
>>
>> Just a suggestion, but here is what I would do.
>> First, create your tables similar to as follows
>>
>> CREATE TABLE status
>> (
>>  call_status varchar(10) NOT NULL,
>>  CONSTRAINT status_pk PRIMARY KEY (call_status)
>> );
>>
>> INSERT INTO status
>> (call_status)
>> VALUES
>> ('ready'),
>> ('talking'),
>> ('after_call');
>>
>> CREATE TABLE user_sessions
>> (
>>  username    name        NOT NULL,
>>  session_id  bigint      NOT NULL,
>>  call_status varchar(10) NOT NULL,
>>  call_time   timestamp   NOT NULL,
>>  CONSTRAINT user_sessions_pk PRIMARY KEY (username, session_id, call_status),
>>  CONSTRAINT user_sessions_fk_status FOREIGN KEY (call_status)
>>    REFERENCES status(call_status)
>> );
>>
>> Next, you will need to generate a unique session_id for each
>> user, but only for when call_status is 'ready'. So probably
>> a table of the form:
>>
>> CREATE TABLE current_session
>> (
>>  username    name   NOT NULL,
>>  session_id  serial NOT NULL,
>>  CONSTRAINT current_session_pk PRIMARY KEY (username)
>> );
>>
>> Then all you need to do is:
>> 1. Update current_session and get the new session_id each time a user
>> connects (call_status = 'ready'.
>>Probably best to use a BEFORE trigger to do this, but you will need to
>> code it yourself.
>>
>> 2. You can then do
>>
>> SELECT u.username,
>>        age ( (SELECT call_time FROM user_sessions
>>                WHERE username = u.username AND session_id = u.session_id
>>                  AND call_status = 'after_call'),
>>              (SELECT call_time FROM user_sessions
>>                WHERE username = u.username AND session_id = u.session_id
>>                  AND call_status = 'talking')
>>            ) as duration
>>   FROM user_sessions u
>>  WHERE u.username = 'actual_user_name'
>>    AND u.session_id = actual_session_id
>>    AND u.call_status = 'ready';
>>
>> You can use similar queries for avg and frequency.
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
>
> --
> @osm_seattle
> osm_seattle.snowandsnow.us
> OpenStreetMap: Maps with a human touch
>

I thought about the table design for user_sessions and came up
with a better one:

CREATE TABLE user_sessions
(
 username        name        NOT NULL,
 session_id      bigserial   NOT NULL,
 call_status     varchar(10) NOT NULL DEFAULT 'ready',
 call_ready      timestamp   NOT NULL,
 call_talking    timestamp,
 call_after_call timestamp,
 call_duration   interval,
 CONSTRAINT user_sessions_pk PRIMARY KEY (username, session_id),
 CONSTRAINT user_sessions_fk_status FOREIGN KEY (call_status)
    REFERENCES status(call_status)
);

So in essence, when the call starts, just do:

INSERT INTO user_sessions
(username, call_ready)
VALUES
('actual_user_name', now() );

Note that session_id fills in from its bigserial default and call_status
from its 'ready' default, so only username and call_ready need to be
supplied.

Then
SELECT max(session_id) AS current_session
  FROM user_sessions
 WHERE username = 'actual_user_name';

When talking starts:
UPDATE user_sessions
   SET call_status = 'talking',
   call_talking = now()
 WHERE username = 'actual_user_name'
   AND session_id = current_session;

When call ends:
UPDATE user_sessions
   SET call_status = 'after_call',
   call_after_call = now()
 WHERE username = 'actual_user_name'
   AND session_id = current_session;

Now all you have to do to get call length is:

SELECT username,
   age ( call_after_call, call_talking ) as duration
  FROM user_sessions
 WHERE username = 'actual_user_name'
   AND session_id =  current_session;
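
From there, per-user statistics are plain aggregates; a sketch over
completed calls:

SELECT username,
       count(*) AS calls,
       avg(age(call_after_call, call_talking)) AS avg_duration,
       sum(age(call_after_call, call_talking)) AS total_duration
  FROM user_sessions
 WHERE call_after_call IS NOT NULL
 GROUP BY username;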


Re: [GENERAL] time series data

2017-10-01 Thread Melvin Davidson
On Sun, Oct 1, 2017 at 4:17 AM, Khalil Khamlichi <khamlichi.kha...@gmail.com
> wrote:

> Hi everyone,
>
> I have a data stream of a call center application coming in  to postgres
> in this format :
>
> user_name, user_status, event_time
>
> 'user1', 'ready', '2017-01-01 10:00:00'
> 'user1', 'talking', '2017-01-01 10:02:00'
> 'user1', 'after_call', '2017-01-01 10:07:00'
> 'user1', 'ready', '2017-01-01 10:08:00'
> 'user1', 'talking', '2017-01-01 10:10:00'
> 'user1', 'after_call', '2017-01-01 10:15:00'
> 'user1', 'paused', '2017-01-01 10:20:00'
> ...
> ...
>
> so as you see each new insert of an "event" is in fact the start_time of
> that event and also the end_time of the previous one so should be used to
> calculate the duration of this previous one.
>
> What is the best way to get user_status statistics like total duration,
> frequency, avg ...etc , does any body have an experience with this sort of
> data streams ?
>
>
> Thanks in advance.
>

Just a suggestion, but here is what I would do.
First, create your tables similar to as follows

CREATE TABLE status
(
 call_status varchar(10) NOT NULL,
 CONSTRAINT status_pk PRIMARY KEY (call_status)
);

INSERT INTO status
(call_status)
VALUES
('ready'),
('talking'),
('after_call');

CREATE TABLE user_sessions
(
 username    name        NOT NULL,
 session_id  bigint      NOT NULL,
 call_status varchar(10) NOT NULL,
 call_time   timestamp   NOT NULL,
 CONSTRAINT user_sessions_pk PRIMARY KEY (username, session_id, call_status),
 CONSTRAINT user_sessions_fk_status FOREIGN KEY (call_status)
    REFERENCES status(call_status)
);

Next, you will need to generate a unique session_id for each
user, but only for when call_status is 'ready'. So probably
a table of the form:

CREATE TABLE current_session
(
 username    name   NOT NULL,
 session_id  serial NOT NULL,
 CONSTRAINT current_session_pk PRIMARY KEY (username)
);

Then all you need to do is:
1. Update current_session and get the new session_id each time a user
connects (call_status = 'ready').
   Probably best to use a BEFORE trigger to do this, but you will need to
code it yourself.

2. You can then do

SELECT u.username,
       age ( (SELECT call_time FROM user_sessions
               WHERE username = u.username AND session_id = u.session_id
                 AND call_status = 'after_call'),
             (SELECT call_time FROM user_sessions
               WHERE username = u.username AND session_id = u.session_id
                 AND call_status = 'talking')
           ) as duration
  FROM user_sessions u
 WHERE u.username = 'actual_user_name'
   AND u.session_id = actual_session_id
   AND u.call_status = 'ready';

You can use similar queries for avg and frequency.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Converting inherited partitions into declarative ones

2017-09-29 Thread Melvin Davidson
On Fri, Sep 29, 2017 at 1:32 PM, Victor Yegorov <vyego...@gmail.com> wrote:

> Greetings.
>
> I am looking into new partitioning of 10rc1 on a copy of a production
> system.
> And I'm having tough times with the full scan.
>
> Per documentation:
> > It is possible to avoid this scan by adding a valid CHECK constraint to
> the table
> > that would allow only the rows satisfying the desired partition
> constraint before
> > running this command. It will be determined using such a constraint that
> the table
> > need not be scanned to validate the partition constraint.
>
>
> So I have this table with CHECK constraint:
>
> test=# \d stats_201503
>  Table "public.stats_201503"
>Column   Type Collation
> Nullable Default
>  --- -
>  
> …
> created_at   timestamp without time zone
> …
> Check constraints:
> "stats_201503_created_at_check" CHECK (created_at >= '2015-02-28
> 19:00:00'::timestamp without time zone AND created_at < '2015-03-31
> 20:00:00'::timestamp without time zone)
>
>
>
> Still, if I try to attach it, I get Full Scan:
>
> test=# ALTER TABLE jsm ATTACH PARTITION stats_201503 FOR VALUES FROM
> ('2015-02-28 19:00:00') TO ('2015-03-31 20:00:00');
> ALTER TABLE
> Time: 55502.875 ms (00:55.503)
>
>
>
> Is it possible to avoid Full Scan here? I have TBs worth of data in
> partitions,
> so it'll takes ages to switch to the declarative partitioning the way
> things stand now.
>
> Thanks in advance.
>
>
> --
> Victor Yegorov
>


>Is it possible to avoid Full Scan here?

Have you verified that constraint_exclusion is set to "on" or "partition"?

https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html
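
A quick way to check and, if needed, change it for the current session
(a sketch):

SHOW constraint_exclusion;
SET constraint_exclusion = partition;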
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Function to return per-column counts?

2017-09-28 Thread Melvin Davidson
On Thu, Sep 28, 2017 at 3:31 PM, Seamus Abshere <sea...@abshere.net> wrote:

> > > > Does anybody have a function lying around (preferably pl/pgsql) that
> > > > takes a table name and returns coverage counts?
> > >
> > > What is "coverage count"?
>
> Ah, I should have explained better. I meant how much of a column is
> null.
>
> Basically you have to
>
> 0. count how many total records in a table
> 1. discover the column names in a table
> 2. for each column name, count how many nulls and subtract from total
> count
>
> If nobody has one written, I'll write one and blog it.
>
> Thanks!
> Seamus
>
> PS. In a similar vein, we published
> http://blog.faraday.io/how-to-do-histograms-in-postgresql/ which gives
> plpgsql so you can do:
>
> SELECT * FROM histogram($table_name_or_subquery, $column_name)
>
> --
> Seamus Abshere, SCEA
> https://www.faraday.io
> https://github.com/seamusabshere
> https://linkedin.com/in/seamusabshere
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

I can't really do the full query for you, but the following should be able
to give you a head start:

SELECT c.relname AS table,
   a.attname AS column,
   a.attnum   AS colnum,
   s.stanullfrac as pct_null,
   s.stadistinct
  FROM pg_class c
  JOIN pg_attribute a ON a.attrelid = c.oid
  JOIN pg_statistic s ON (s.starelid = c.oid AND s.staattnum = a.attnum)
 WHERE c.relname = 'your_table_name'
   AND a.attnum > 0
 ORDER BY 3
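
Bear in mind that pg_statistic holds sampled estimates from the last
analyze. For exact figures on one column, a direct count works (names
assumed):

SELECT count(*) AS total,
       count(*) - count(your_column) AS null_count
  FROM your_table;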

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Is auto-analyze as thorough as manual analyze?

2017-09-23 Thread Melvin Davidson
On Sat, Sep 23, 2017 at 6:10 PM, Jack Christensen <j...@jackchristensen.com>
wrote:

> Just had an issue where a prepared query would occasionally choose a very
> bad plan in production. The same data set in a different environment
> consistently would choose the index scan. As would be expected, running
> analyze on that table in production resolved the issue.
>
> However, before I ran the analyze I checked pg_stat_user_tables to see
> last_autoanalyze for that table. It had run today. But the problem existed
> before that. I would have expected that the auto-analyze would have
> corrected this (or prevented it entirely if run enough).
>
> So that leaves me wondering: is an auto-analyze the same as manually
> running analyze or is a manual analyze more thorough? This is running
> version 9.6.3 on Heroku.
>
> Thanks,
>
> Jack
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>is an auto-analyze the same as manually running analyze or is a manual
analyze more thorough?

It's not that one is "more thorough" than the other; it's that auto-analyze
only kicks in once the number of tuple inserts, updates, or deletes since
the last analyze exceeds a threshold derived from these settings:

autovacuum_analyze_scale_factor = 0.1  # fraction of reltuples added to the
base threshold
autovacuum_analyze_threshold = 50      # base number of tuple inserts,
updates, or deletes

https://www.postgresql.org/docs/9.6/static/runtime-config-autovacuum.html

Note: You can adjust the settings for individual tables.
EG:

ALTER TABLE some_schema.your_table SET (autovacuum_analyze_scale_factor = 0.5);
ALTER TABLE some_schema.your_table SET (autovacuum_analyze_threshold = 1000);

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Why can't the database owner create schemas and how can I enable that?

2017-09-23 Thread Melvin Davidson
On Sat, Sep 23, 2017 at 2:33 AM, John R Pierce <pie...@hogranch.com> wrote:

> On 9/22/2017 10:29 PM, Tim Uckun wrote:
>
>> I am setting up a database for an app. So I create a user without
>> superuser but with login priveleges
>>
>> I then create a database and set it's owner to that user like this...
>>
>> dropdb --if-exists api_development
>> dropuser --if-exists api_user
>>
>> createuser api_user -P -d
>> createdb api_development -O api_user
>>
>> The app can now connect to the database but it can't create any tables,
>> schemas or anything else unless I give it superuser privileges.
>>
>> Is there any way I can make this user a superuser for this database
>> without making it a superuser on other databases?
>>
>
>
> that should have worked just fine.
>
>
> [root@new ~]# useradd fred
> [root@new ~]# su - postgres
> $ createuser fred
> $ createdb fred -O fred
> $ logout
> [root@new ~]# su - fred
> [fred@new ~]$ psql
> psql (9.3.19)
> Type "help" for help.
>
> fred=> create schema xyzzy;
> CREATE SCHEMA
> fred=> create table xyzzy.abc (id serial, dat text);
> CREATE TABLE
> fred=> \q
>
> .
>
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Make sure you are the DB owner when you connect.

Either
psql -U api_user  -d api_development

OR
psql -d api_development

SET ROLE api_user;
SELECT current_user;

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL

2017-09-19 Thread Melvin Davidson
On Tue, Sep 19, 2017 at 1:28 PM, Stephen Frost <sfr...@snowman.net> wrote:

> Tom,
>
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
> > chiru r <chir...@gmail.com> writes:
> > > We are looking for User profiles in open source PostgreSQL.
> > > For example, If a  user password failed n+ times while login ,the user
> > > access has to be blocked few seconds.
> > > Please let us know, is there any plan to implement user profiles in
> feature
> > > releases?.
> >
> > Not particularly.  You can do that sort of thing already via PAM,
> > for example.
>
> Ugh, hardly and it's hokey and a huge pain to do, and only works on
> platforms that have PAM.
>
> Better is to use an external authentication system (Kerberos, for
> example) which can deal with this, but I do think this is also something
> we should be considering for core, especially now that we've got a
> reasonable password-based authentication method with SCRAM.
>
> Thanks!
>
> Stephen
>

Perhaps, as an alternative, although not currently supported, connection
attempts can be added in the future to "Event Triggers"?
Users could then create a trigger function to enable/disable logins.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Selecting a daily puzzle record - which type of column to add?

2017-09-18 Thread Melvin Davidson
On Mon, Sep 18, 2017 at 9:30 AM, Ron Johnson <ron.l.john...@cox.net> wrote:

> On 09/18/2017 08:17 AM, Melvin Davidson wrote:
> [snip]
>
> I don't have any specific suggestion for an additional column, other than
> Berend's idea. However, I strongly advise against the use
> of ENUM's. They can create a major problem in the event one needs to be
> removed.
>
>
> Because it will internally renumber them?
>
> --
> World Peace Through Nuclear Pacification
>
>
>Because it will internally renumber them?
No, because while there is a facility to ADD a value to an enum, there is
none to delete/drop a value.
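
To illustrate the asymmetry (using a hypothetical enum type):

ALTER TYPE some_enum ADD VALUE 'new_value';     -- supported (9.1+)
-- but there is no corresponding
-- ALTER TYPE some_enum DROP VALUE 'new_value';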


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Selecting a daily puzzle record - which type of column to add?

2017-09-18 Thread Melvin Davidson
On Mon, Sep 18, 2017 at 7:59 AM, Berend Tober <bto...@computer.org> wrote:

> Alexander Farber wrote:
>
>> Good evening,
>>
>> In a word game I store all player moves in the table:
>>
>> CREATE TYPE words_action AS ENUM (
>> 'play',
>> 'skip',
>> 'swap',
>> 'resign',
>> 'ban',
>> 'expire'
>> );
>>
>> CREATE TABLE words_moves (
>> mid BIGSERIAL PRIMARY KEY,
>> action  words_action NOT NULL,
>> gid integer NOT NULL REFERENCES words_games ON DELETE CASCADE,
>> uid integer NOT NULL REFERENCES words_users ON DELETE CASCADE,
>> played  timestamptz NOT NULL,
>> tiles   jsonb,
>> score   integer CHECK(score >= 0)
>> );
>>
>> I could run a cron job on all moves played each day and select the
>> "spectacular" ones by it,
>> i.e. when a very big score has been achieved in the move or all 7 tiles
>> have been played...
>>
>> Then I (as admin of the game) would manually review the daily mails sent
>> by that cronjob and
>> select the few I have found interesting - for later publishing them as
>> "daily puzzle" in my day.
>>
>> However I don't want to do the reviewing every day as that would be
>> tedious, but more like once
>> per week and then select several such moves at once (for the future
>> dates).
>>
>> My question (and thanks for reading my mail sofar) is: which column would
>> you add to the
>> words_moves table for such a purpose?
>>
>> If I add a boolean (is a daily puzzle move: true or false) - then it is
>> difficult to keep the
>> order of the daily puzzles, I think.
>>
>> If I add a timestamptz, then to which date to set it, when I do my manual
>> review once a week?
>>
>> I need to add a useful column, so that it would be easy to me to create a
>> web script which would
>> display today's and all past "daily puzzle" records - and wouldn't change
>> the already published
>> puzzles...
>>
>> If you have a good idea here, please share with me. If not, sorry for the
>> maybe offtopic
>> question.
>>
>
> I like the idea of a new column in words_games that allows nulls and to be
> filled in subsequently with the review date, but here's another idea to
> consider:
>
> If you have another place to store control information, you could store
> the mid value of the last-reviewed  words_moves table row. That serial
> column also keeps track of the order, btw.
>
> Or maybe you define another table capturing more detail, if you need it,
> such as
>
> CREATE TABLE spectacular_moves (
> mid bigint REFERENCES words_moves,
> review_date  timestamptz NOT NULL,
> publication_date timestamptz /* NULL allowed ... date to be filled
> in subsequently */
> );
>
> Or those last two columns could both be appended to the word_games table,
> again, allowing NULL, but then filled in as the events occur.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

I don't have any specific suggestion for an additional column, other than
Berend's idea. However, I strongly advise against the use
of ENUM's. They can create a major problem in the event one needs to be
removed. It is a lot easier and simpler to use a
Foreign Key table instead.
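
For the words_action enum from the original post, the foreign-key-table
version would be a sketch like:

CREATE TABLE words_action
(
  action_name varchar(10) NOT NULL,
  CONSTRAINT words_action_pk PRIMARY KEY (action_name)
);

INSERT INTO words_action (action_name)
VALUES ('play'), ('skip'), ('swap'), ('resign'), ('ban'), ('expire');

-- words_moves.action then becomes varchar(10) with
-- FOREIGN KEY (action) REFERENCES words_action (action_name)

Retiring a value is then an ordinary DELETE once no rows reference it.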

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Numeric numbers

2017-09-02 Thread Melvin Davidson
On Sat, Sep 2, 2017 at 11:54 AM, Francisco Olarte <fola...@peoplecall.com>
wrote:

> On Sat, Sep 2, 2017 at 4:16 PM, Олег Самойлов <ol...@mipt.ru> wrote:
> > Hi all. I have silly question. Look at "numeric" type. According to
> > docs it must be "up to 131072 digits before the decimal point; up to
> > 16383 digits after the decimal point". Well, lets see.
> >
> > => select 1::numeric/3;
> >         ?column?
> > ------------------------
> >  0.33333333333333333333
>
> => select 1::numeric(100,90)/3;
>                                             ?column?
> ----------------------------------------------------------------------------------------------
>  0.333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
> (1 row)
>
> It's probably doing 1(integer) => double precision => numeric(20) or
> something similar if you do not specify.
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



Francisco,

Per the docs, it is not "must be", it is "up to 131072 digits before the
decimal point; up to 16383 digits after the decimal point".

https://www.postgresql.org/docs/9.6/static/datatype-numeric.html#DATATYPE-NUMERIC-TABLE

YOU have specified a precision of numeric(100,90), which means 90 decimals,
and that is exactly what you got!

The result is correct, so what is your question?

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Table create time

2017-08-31 Thread Melvin Davidson
On Thu, Aug 31, 2017 at 11:46 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
> On Thu, Aug 31, 2017 at 11:26 AM, Achilleas Mantzios <
> ach...@matrix.gatewaynet.com> wrote:
>
>> On 31/08/2017 18:20, Melvin Davidson wrote:
>>
>> >you could just create an event trigger looking for CREATE TABLE as
>> filter_value:
>>
>> I have tried that. Unfortunately, I have been unable to extract the table
>> name from the event because TG_TABLE_NAME is not
>> available during an event trigger, albeit perhaps I am missing something?
>>
>> You need to use ddl_command_end event and then select from
>> pg_event_trigger_ddl_commands() . Search for some example how to do this.
>>
>> That being said, I still believe it is extra work that could easily be
>> avoided and should be added to the postgresql catalogs simply to be
>> more feature competitive with  Oracle & SQL Server, as well as a boost to
>> the PostgreSQL community.
>>
>> On Thu, Aug 31, 2017 at 10:54 AM, Michael Paquier <
>> michael.paqu...@gmail.com> wrote:
>>
>>> On Thu, Aug 31, 2017 at 10:21 PM, Melvin Davidson <melvin6...@gmail.com>
>>> wrote:
>>> > Wolfgang, as David said, a column in pg_class for the creation time of
>>> a table does not exist. I long ago requested that feature as it is
>>> > in other DB's (Oracle & MS SQL Server), but the main reason that it
>>> was not done was that no one was interested in doing it.
>>>
>>> Is there any need for a column in pg_class for that? You could just
>>> create an event trigger looking for CREATE TABLE as filter_value:
>>> https://www.postgresql.org/docs/9.6/static/sql-createeventtrigger.html
>>> And then have this event trigger just save the timestamp value of
>>> now() in a custom table with the name and/or OID of the relation
>>> involved.
>>> --
>>> Michael
>>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>> Achilleas Mantzios
>> IT DEV Lead
>> IT DEPT
>> Dynacom Tankers Mgmt
>>
>>
> >You need to use ddl_command_end event and then select from
> >pg_event_trigger_ddl_commands()
>
> I have, but the result for CREATE TABLE is an error.
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> *CREATE OR REPLACE FUNCTION public.fn_notify_ddl()  RETURNS event_trigger
> AS$BODY$DECLAREobj record;BEGINRAISE NOTICE 'Type: %', TG_TAG;
> RAISE NOTICE 'Command: %', current_query();RAISE NOTICE 'Table: %',
> (pg_event_trigger_ddl_commands()).objid;   -- < causes an error?RAISE
> NOTICE 'DB Name: %', current_database();RAISE NOTICE 'DB User: %',
> session_user;    RAISE NOTICE 'DB Port: %', inet_server_port();RAISE
> NOTICE 'Server Host: %', inet_server_addr();RAISE NOTICE 'Client Host:
> %', inet_client_addr();END;$BODY$  LANGUAGE plpgsql VOLATILE  COST
> 100;ALTER FUNCTION public.fn_notify_ddl()  OWNER TO postgres;*
>
>
>
>
>
> *CREATE TABLE sneaky_pete( id_col varchar(5),  col2   varchar(1),
> CONSTRAINT sneaky_pete_pk PRIMARY KEY (id_col));*
>
> *ERROR:  query "SELECT (pg_event_trigger_ddl_commands()).objid" returned
> more than one row*
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>

*Wolfgang,*


*Kudos to Álvaro Herrera! Thanks to his slide presentation Capturing DDL
Events
<https://pgday.ru/files/papers/22/pgday.2015.alvaro.herrera.capturing-ddl.pdf>*
*I was able to figure out how to trap and log table creates. See the
attached log_tbl_cre8.sql for my solution*

*that I am passing on to help you and others.*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.

CREATE SEQUENCE public.log_seq
  INCREMENT 1
  MINVALUE 1
  MAXVALUE 9223372036854775807
  START 1
  CACHE 1;
ALTER TABLE public.log_seq
  OWNER TO postgres;

DROP TABLE IF EXISTS public.tbl_create_log;
CREATE TABLE public.tbl_create_log
(
  log_id  bigint NOT NULL DEFAULT nextval('log_seq'::regclass),
  log_table_schema name NOT NULL,
  log_table_name name NOT NULL,
  log_table_cre8_time timestamp without time zone NOT NULL DEFAULT 
clock_timestamp(),
  CONSTRAINT tbl_create_log_pk PRIMARY KEY (log_table_schema, log_table_name)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.tbl_create_log
  OWNER TO postgres;


-- Function: public.fn_notify_ddl()

-- DROP FUNCTION public.fn_notify_ddl();

CREATE O
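
*For reference, a minimal sketch of such a logging event trigger (not the
attached script verbatim; it assumes 9.5+ for pg_event_trigger_ddl_commands()
and the tbl_create_log table above, and loops over the set-returning
function, which avoids the "returned more than one row" error quoted
earlier):*

CREATE OR REPLACE FUNCTION public.fn_log_table_create()
RETURNS event_trigger AS
$BODY$
DECLARE
    obj record;
BEGIN
    -- the function is set-returning: iterate instead of selecting a scalar
    FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        IF obj.command_tag = 'CREATE TABLE' THEN
            INSERT INTO public.tbl_create_log (log_table_schema, log_table_name)
            SELECT obj.schema_name, c.relname
              FROM pg_class c
             WHERE c.oid = obj.objid
            ON CONFLICT (log_table_schema, log_table_name) DO NOTHING;
        END IF;
    END LOOP;
END;
$BODY$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER trg_log_table_create
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE')
    EXECUTE PROCEDURE public.fn_log_table_create();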

Re: [GENERAL] Table create time

2017-08-31 Thread Melvin Davidson
On Thu, Aug 31, 2017 at 11:26 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 31/08/2017 18:20, Melvin Davidson wrote:
>
> >you could just create an event trigger looking for CREATE TABLE as
> filter_value:
>
> I have tried that. Unfortunately, I have been unable to extract the table
> name from the event because TG_TABLE_NAME is not
> available during an event trigger, albeit perhaps I am missing something?
>
> You need to use ddl_command_end event and then select from
> pg_event_trigger_ddl_commands() . Search for some example how to do this.
>
> That being said, I still believe it is extra work that could easily be
> avoided and should be added to the postgresql catalogs simply to be
> more feature competitive with  Oracle & SQL Server, as well as a boost to
> the PostgreSQL community.
>
> On Thu, Aug 31, 2017 at 10:54 AM, Michael Paquier <
> michael.paqu...@gmail.com> wrote:
>
>> On Thu, Aug 31, 2017 at 10:21 PM, Melvin Davidson <melvin6...@gmail.com>
>> wrote:
>> > Wolfgang, as David said, a column in pg_class for the creation time of
>> a table does not exist. I long ago requested that feature as it is
>> > in other DB's (Oracle & MS SQL Server), but the main reason that it was
>> not done was that no one was interested in doing it.
>>
>> Is there any need for a column in pg_class for that? You could just
>> create an event trigger looking for CREATE TABLE as filter_value:
>> https://www.postgresql.org/docs/9.6/static/sql-createeventtrigger.html
>> And then have this event trigger just save the timestamp value of
>> now() in a custom table with the name and/or OID of the relation
>> involved.
>> --
>> Michael
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
*>You need to use ddl_command_end event and then select from
pg_event_trigger_ddl_commands()*

*I have, but the result for CREATE TABLE is an error.*

*CREATE OR REPLACE FUNCTION public.fn_notify_ddl()
  RETURNS event_trigger AS
$BODY$
DECLARE
    obj record;
BEGIN
    RAISE NOTICE 'Type: %', TG_TAG;
    RAISE NOTICE 'Command: %', current_query();
    RAISE NOTICE 'Table: %', (pg_event_trigger_ddl_commands()).objid;   -- <== causes an error?
    RAISE NOTICE 'DB Name: %', current_database();
    RAISE NOTICE 'DB User: %', session_user;
    RAISE NOTICE 'DB Port: %', inet_server_port();
    RAISE NOTICE 'Server Host: %', inet_server_addr();
    RAISE NOTICE 'Client Host: %', inet_client_addr();
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

ALTER FUNCTION public.fn_notify_ddl()
  OWNER TO postgres;*

*CREATE TABLE sneaky_pete
(
  id_col varchar(5),
  col2   varchar(1),
  CONSTRAINT sneaky_pete_pk PRIMARY KEY (id_col)
);*

*ERROR:  query "SELECT (pg_event_trigger_ddl_commands()).objid" returned
more than one row*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Table create time

2017-08-31 Thread Melvin Davidson
On Thu, Aug 31, 2017 at 11:19 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Melvin Davidson <melvin6...@gmail.com> writes:
> > Wolfgang, as David said, a column in pg_class for the creation time of a
> > table does not exist. I long ago requested that feature as it is
> > in other DB's (Oracle & MS SQL Server), but the main reason that it was
> not
> > done was that no one was interested in doing it.
>
> No, I'd say the big reason it hasn't happened is lack of consensus on
> details of how the feature should work --- notably, ought the creation
> date be preserved across a dump/restore?  Doing that would require
> exposing some method to set the creation date from SQL, whereupon it loses
> value for some purposes because you can never be sure that what you're
> looking at is the "real" date and not something somebody frobbed later.
> OTOH, losing all your creation date info during dump/restore is annoying
> too.
>
> regards, tom lane
>

>lack of consensus on details of how the feature should work --- notably,
ought the creation
>date be preserved across a dump/restore?

Tom, I have already countered that there is no need to preserve a creation
date on dump/restore.

A. Only the creation time of an object is tracked, not mods.
B. When doing a dump, the catalogs (and relcreated time) are NOT included
in the dump.
C. When restoring an object that was deleted, or to a new database, the
   relcreated = now();
D. When restoring data to an existing object, relcreated is not/cannot be
updated.

The _only_ time it would be a factor is restoring to a _new_ database. In
that case, C (from above) takes effect, as new tables/objects
are being created.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Table create time

2017-08-31 Thread Melvin Davidson
>you could just create an event trigger looking for CREATE TABLE as
filter_value:

I have tried that. Unfortunately, I have been unable to extract the table
name from the event because TG_TABLE_NAME is not
available during an event trigger, albeit perhaps I am missing something?
That being said, I still believe it is extra work that could easily be
avoided and should be added to the postgresql catalogs simply to be
more feature competitive with  Oracle & SQL Server, as well as a boost to
the PostgreSQL community.

On Thu, Aug 31, 2017 at 10:54 AM, Michael Paquier <michael.paqu...@gmail.com
> wrote:

> On Thu, Aug 31, 2017 at 10:21 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
> > Wolfgang, as David said, a column in pg_class for the creation time of a
> table does not exist. I long ago requested that feature as it is
> > in other DB's (Oracle & MS SQL Server), but the main reason that it was
> not done was that no one was interested in doing it.
>
> Is there any need for a column in pg_class for that? You could just
> create an event trigger looking for CREATE TABLE as filter_value:
> https://www.postgresql.org/docs/9.6/static/sql-createeventtrigger.html
> And then have this event trigger just save the timestamp value of
> now() in a custom table with the name and/or OID of the relation
> involved.
> --
> Michael
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Table create time

2017-08-31 Thread Melvin Davidson
On Thu, Aug 31, 2017 at 8:29 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, August 30, 2017, <haman...@t-online.de> wrote:
>
>>
>> Hi,
>>
>> is there a way to add a table create (and perhaps schema modify)
>> timestamp to the system?
>>
>>
> There is not.  You may wish to search the archives for discussions as to
> why previous requests for this feature have not resulted in patches.
>
> David J.
>

Wolfgang, as David said, a column in pg_class for the creation time of a
table does not exist. I long ago requested that feature as it is
in other DB's (Oracle & MS SQL Server), but the main reason that it was not
done was that no one was interested in doing it.

As for finding tables with no comments, you can use the following query:

SELECT DISTINCT ON (n.nspname, c.relname)
       n.nspname as schema,
       c.relname,
       a.rolname as owner,
       d.description as comment
  FROM pg_class c
  LEFT JOIN pg_attribute col ON (col.attrelid = c.oid)
  LEFT JOIN pg_description d ON (d.objoid = col.attrelid AND d.objsubid = 0)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_authid a ON (a.oid = c.relowner)
 WHERE n.nspname NOT LIKE 'information%'
   AND relname NOT LIKE 'pg_%'
   AND relname NOT LIKE 'information%'
   AND relname NOT LIKE 'sql_%'
   AND relkind = 'r'
   AND d.description IS NULL
 ORDER BY 1, 2;

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clustered index benchmark comparing Postgresql vs Mariadb

2017-08-30 Thread Melvin Davidson
On Wed, Aug 30, 2017 at 10:03 PM, 유상지 <y0...@naver.com> wrote:

>
>
> I want to get help with Postgresql.
>
> I investigated that Postgresql could be rather fast in an environment
> using a secondary index, but it came up with different results in my benchmark.
>
> The database I compared was mariadb, and the benchmark tool was sysbench
> 1.0.8 with the postgresql driver.
>
> Server environment: vmware, Ubuntu 17.04, processor: 4, RAM: 4 GB,
> Harddisk: 40 GB, Mariadb (v10.3), PostgreSQL (v9.6.4)
>
>
>
> The created sysbench progress statement is as follows.
>
> sysbench /usr/share/sysbench/oltp_read_only.lua --db-driver=mysql
> --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root
> --mysql-password=ajdajddl75 --mysql-db=sysbench --tables=3
> --table_size=10 --report-interval=10 --secondary=on --time=60
>
>
>
> Used options
>
> Select only, Num of threads = 1, num of tables = 3, table-size = 10
> and Table-size = 100, secondary index select instead of primary key.
>
>
>
>
>
>
>
>  My hypothesis was that  selecting by secondary index in postgresql is
> faster than in Mariadb. However, the results depend on table size.
>
>
>
> Postgresql was faster than Mariadb when the table size was 100, but
> slower at 10.
>
>
>
> Cluster secondary indexes were faster than those without cluster indexes
> in pg, but slower than mariadb.
>
>
>
> I'd like to see the difference in architecture rather than optimization,
> so every benchmark executed with default options except for clustered index.
>
>  I wonder if there are any settings I missed.
>
>
>
> I would be very pleased if someone could explain why these results came up.
>

>Postgresql was faster than Mariadb when the table size was 100, but
slower at 10.

You made a general statement, but you left out a lot of important
information.

A. Did you do an ANALYZE table_name BEFORE running your test?
B. Did you verify the index was being used with EXPLAIN your_query?
C. What was the exact query you used?
D. Most important, what is the structure of the table and index?
E. How much system memory is there?
F. In postgresql.conf, what are the settings for:
1. shared_buffers
2. work_mem
3. All Planner Cost Constants values, All Genetic Query Optimizer values
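
For A and B, a quick hedged sketch (sysbench's default table and column
names are assumed here — sbtest1 with a secondary index on column k):

ANALYZE sbtest1;                       -- refresh planner statistics
EXPLAIN (ANALYZE, BUFFERS)
SELECT c FROM sbtest1 WHERE k = 42;    -- confirm the index is actually used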
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Get user defined type OID (PostgreSQL extension in C)

2017-08-25 Thread Melvin Davidson
On Fri, Aug 25, 2017 at 2:34 PM, Fabiana Zioti <fabi_zi...@hotmail.com>
wrote:

>
> I'm developing an extension to PostgreSQL using C. I created a
> user-defined type called geo_trajc_elem. I also created a function to
> generate an array of this new type.
>
>
> ()
>
>   ArrayType *result_array;
>   struct geo_trajc_elem *traje = (struct geo_trajc_elem
> *)palloc(sizeof(struct geo_trajc_elem));
>
>   Timestamp time_el = PG_GETARG_TIMESTAMP(1);
>   struct geo_point *pt = PG_GETARG_GEOPOINT_TYPE_P(2);
>
>   int16 typlen;
>   bool typbyval;
>   char typalign;
>
>   Datum datum_element;
>
>
>   traje = DatumGetGeoTrajETypeP(DirectFunctionCall2(get_trajectory_elem,
> PointerGetDatum(time_el), PointerGetDatum(pt)));
>   datum_element = PointerGetDatum(traje);
>
>   /* Oid element_type = get_fn_expr_argtype(fcinfo->flinfo, 0); */
>
>   Oid element_type = ?
>
>   get_typlenbyvalalign(element_type, , , );
> ()
>
>
> Is there a function to get the id of my new type? Like get_oid_elem
> (struct geo_trajc_elem)
>
> In this case, it is not possible to use get_fn_expr_argtype because I am
> not passing the new type as argument but creating it in the function
> itself, correct ?
>

*>Is there a function to get the id of my new type? Like get_oid_elem
(struct geo_trajc_elem)*

*A simple*

SELECT oid, typname
  FROM pg_type
 WHERE typname = 'geo_trajc_elem';

*should do the trick for you. (Note the SQL type name is just
geo_trajc_elem; C's "struct" keyword is not part of it.)*
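
*A hedged alternative straight from SQL — the regtype cast resolves a type
name to its OID (and raises an error if the type does not exist):*

SELECT 'geo_trajc_elem'::regtype::oid;

*From C server-side code, the catalog helper TypenameGetTypid() in
catalog/namespace.h performs the same search-path-aware lookup.*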

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] What is the proper query

2017-08-22 Thread Melvin Davidson
*While the information_schema is useful, there is no substitute for
learning how to use the pg_catalog and system information functions.*


*See if this query gives you what you are looking for:*






SELECT rel.relname,
       con.conname,
       con.contype,
       con.consrc,
       pg_get_constraintdef(con.oid, true)
  FROM pg_class rel
  JOIN pg_constraint con ON (con.conrelid = rel.oid)
 ORDER BY relname, contype, conname;


On Tue, Aug 22, 2017 at 12:42 PM, Igor Korot <ikoro...@gmail.com> wrote:

> Hi, David,
>
> On Tue, Aug 22, 2017 at 12:06 PM, David G. Johnston
> <david.g.johns...@gmail.com> wrote:
> > On Tue, Aug 22, 2017 at 8:43 AM, Igor Korot <ikoro...@gmail.com> wrote:
> >>
> >> Or this is the bug in 9.1?
> >> Since it looks like there are 2 columns with the same info in 1
> >> table/view
> >
> >
> > This old email thread sounds similar to what you are describing here.
> >
> > https://www.postgresql.org/message-id/56D0C4B8.7020200%40aklaver.com
>
> Consider following table creation command:
>
> CREATE TABLE leaguescorehitter(id integer, playerid integer, scoreid
> integer, value double, foreign key(id) references leagues(id), foreign
> key(id, playerid) references playersinleague(id, playerid), foreign
> key(scoreid) references scorehits(scoreid));
>
> There are 3 foreign keys in this table for which there are 4 rows
> displayed in my query as it should be:
>
> 1 for leagues(id)
> 1 for scorehits(scoreid)
> 2 for playersinleague(id,playerid) - 1 row per field
>
> However what I would expect to see is:
>
> [code]
> ordinal_position  |position_in_unique_constraint
>   0  1
>- this is for leagues(id)
>   1  1
>   1  2
>- those 2 are for
> playersinleague(id,playerid)
>   2  1
>- this is for scorehits(scoreid)
> [/code]
>
> Instead I got ordinal_positionv = position_in_unique_constraints and can't
> tell
> which constraint is which, or more precisely, when the one ends and
> second starts.
>
> Hopefully this above will not be mangled and the spacing will be kept.
>
> Thank you.
>
> >
> > David J.
> >
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Deadlocks

2017-08-22 Thread Melvin Davidson
On Tue, Aug 22, 2017 at 9:42 AM, Martin Moore <martin.mo...@avbrief.com>
wrote:

> Hi, I’m having issues with deadlocks.
>
> v9.6 on Debian Jessie.
>
> I have a number of ‘select’ functions which for logging purposes also call
> another fn that inserts a record into a ‘logging’ table.
>
> However, even with only 1 or 2 users making very infrequent calls to the
> parent fn, deadlocks are occurring. I’m hoping there’s something I’m
> unaware of as this shouldn’t happen!
>
> What could this be? I don’t have any logs at present as I removed the
> insert statement some time ago, but need to get this issue resolved.
>
> Thanks,
>
> Martin.
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*I don't think you will get much help unless you show us the query and the
structure*
*of the tables involved.*
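
*In the meantime, since you are on 9.6, a hedged way to see which sessions
are blocking which (pg_blocking_pids() is built in as of 9.6):*

SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
  FROM pg_stat_activity
 WHERE cardinality(pg_blocking_pids(pid)) > 0;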

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] make postgresql 9.5 default on centos 7

2017-08-18 Thread Melvin Davidson
On Fri, Aug 18, 2017 at 1:50 PM, Steve Clark <steve.cl...@netwolves.com>
wrote:

> Hi List,
>
> I loaded 9.5 on CentOS 7 but by default every thing wants to use the
> default
> 9.2 version that comes with CentOS 7.
>
> Is there a simple way to fix this so the 9.5 version of tools and
> libraries are used.
>
> Thanks,
> Steve
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*> Is there a simple way to fix this so the 9.5 version of tools and
libraries are used.*


*Have you looked at the PGPORT environment variable?*

*https://www.postgresql.org/docs/9.5/static/libpq-envars.html
<https://www.postgresql.org/docs/9.5/static/libpq-envars.html>*



*Also *




*PATH=/usr/local/pgsql/bin:$PATH
export PATHhttps://www.postgresql.org/docs/9.1/static/install-post.html
<https://www.postgresql.org/docs/9.1/static/install-post.html>*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Queries for Diagramming Schema Keys

2017-08-14 Thread Melvin Davidson
On Mon, Aug 14, 2017 at 6:38 PM, <stim...@comcast.net> wrote:

> ...
> > Just to add that running psql with the -E switch is REALLY handy for
> seeing how psql executes queries to
> > find how tables etc are put together.
>
> I can't actually use that feature to gather the information I'm interested
> in since all I have are tables and data with no formal information on key
> relations and allowed load order. There is so much data in so many tables
> that testing even a single load iteration takes many hours and there are
> literally many thousands of load order combinations possible. Logs of
> hundreds of thousands (or millions) of XML loads would take a very long
> time to go through, and would then only give one possible load order.
>
> Automating a diagram of key relations and visualizing it is the first step
> to methodically computing a correct load order, but I can't do that until I
> figure out how to use the system tables to describe (1) columns which are
> not keys, (2) columns which are primary keys not referring to another
> column, and (3) columns which are foreign keys and the table/column they
> are pointed at. My SQL knowledge is somewhat limited and I am struggling
> with the system tables.
>


*For*
*> (3) columns which are foreign keys and the table/column they are pointed
at:*

*This should do the trick, you can tweak as needed.*

SELECT nsp.nspname,
       rel.relname,
       con.conname,
       con.contype,
       pg_get_constraintdef(con.oid, true)
  FROM pg_class rel
  JOIN pg_namespace nsp ON (nsp.oid = rel.relnamespace)
  JOIN pg_constraint con ON (con.conrelid = rel.oid)
 WHERE contype = 'f'
   AND rel.relname = 'account'
 ORDER BY relname, contype, conname;
*However, for the others, I have no intention of creating the queries for
you. I encourage you to learn the PostgreSQL system catalogs.*
*You have not provided us with the version of PostgreSQL you are using, so
I'll just point you to the relevant part in the latest doc.*


*https://www.postgresql.org/docs/9.6/static/catalogs.html
<https://www.postgresql.org/docs/9.6/static/catalogs.html>*


*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Queries for Diagramming Schema Keys

2017-08-14 Thread Melvin Davidson
On Mon, Aug 14, 2017 at 2:46 PM, <stim...@comcast.net> wrote:

> Hi,
>
> I'm trying to write an application (using libpqxx/C++) which creates
> graphical images of large and complex relations between tables (basically
> an SVG image with hot spots for drilling down on details). I need to
> essentially create icons of tables and their columns, and then draw
> interconnecting lines where foreign keys are involved, and to distinctly
> label primary keys, foreign keys, and non-key columns. Eventually this will
> have knowledge of an XML file loading scheme and be able to reverse
> engineer the required load order (imagine approximately 1,000 tables with
> many foreign keys and file loads which may take hours for each failure to
> load). I need some advice on using ANSI/information_schema queries to
> accomplish this.
>
> Thus I have this query to list all tables:
>
> SELECT table_name
>   FROM information_schema.tables
> WHERE table_schema='public'
>AND table_type='BASE TABLE';
>
>
> ...this seems to work ok. The next query is to find all foreign
> keys...this seems to sort of work, but may have issues:
>
> SELECT
> tc.table_name AS local_table,
> kcu.column_name AS key_column,
> ccu.table_name AS fk_table,
> ccu.column_name AS fk_column
> FROM
> information_schema.table_constraints AS tc
> JOIN information_schema.key_column_usage AS kcu
>   ON tc.constraint_name = kcu.constraint_name
> JOIN information_schema.constraint_column_usage AS ccu
>   ON ccu.constraint_name = tc.constraint_name
> WHERE constraint_type = 'FOREIGN KEY' AND tc.table_name
> IN (SELECT table_name
>   FROM information_schema.tables
> WHERE table_schema='public'
>AND table_type='BASE TABLE');
>
>
> This is my query to find all primary keys which are not foreign keys, and
> this definitely is not 100% correct:
> SELECT DISTINCT
> tc.table_name AS local_table,
> kcu.column_name AS key_column
> FROM
> information_schema.table_constraints AS tc
> JOIN information_schema.key_column_usage AS kcu
>   ON tc.constraint_name = kcu.constraint_name
> JOIN information_schema.constraint_column_usage AS ccu
>   ON ccu.constraint_name = tc.constraint_name
> WHERE constraint_type = 'PRIMARY KEY' AND tc.table_name
> IN (SELECT table_name
>   FROM information_schema.tables
>  WHERE table_schema='public'
>AND table_type='BASE TABLE')
> AND (tc.table_name, kcu.column_name)
> NOT IN (
> SELECT
> tc.table_name,
> kcu.column_name
> FROM
> information_schema.table_constraints AS tc
> JOIN information_schema.key_column_usage AS kcu
>   ON tc.constraint_name = kcu.constraint_name
> JOIN information_schema.constraint_column_usage AS ccu
>   ON ccu.constraint_name = tc.constraint_name
> WHERE constraint_type = 'FOREIGN KEY' AND tc.table_name
> IN (SELECT table_name
>   FROM information_schema.tables
>  WHERE table_schema='public'
>AND table_type='BASE TABLE')
> )
> ORDER BY local_table, key_column;
>
> I am completely at a loss how I would query for all columns which are
> neither primary nor foreign keys. Would anyone have a suggestion for
> something like this:
> SELECT table_name, non_key_column
> FROM ...
> WHERE ...
> ORDER BY table_name, non_key_column
>
> Any advice on any of the queries would be appreciated!
>
> Thanks!
>

>I am completely at a loss how I would query for all columns

*Does this help?*


*-- TABLES AND COLUMNS*

SELECT c.table_schema as schema,
       c.table_name as table,
       c.ordinal_position as order,
       c.column_name as column,
       CASE WHEN c.data_type IN ('character', 'varchar')
              THEN c.data_type || '(' || c.character_maximum_length || ')'
            WHEN TRIM(c.data_type) IN ('numeric')
              THEN c.data_type || '(' || c.numeric_precision || ',' || c.numeric_scale || ')'
            ELSE c.data_type
       END,
       c.is_nullable as null,
       col_description(t.oid, c.ordinal_position) as comment
  FROM information_schema.columns c
  JOIN pg_class t ON (t.relname = c.table_name)
 WHERE table_schema = 'public'
   AND c.table_name = 'album'
 ORDER BY 1, 2, 3;

*-- TABLES AND PKEYS*

SELECT n.nspname,
       t.relname as table,
       c.conname as pk_name
  FROM pg_class t
  JOIN pg_constraint c ON (c.conrelid = t.oid AND c.contype = 'p')
  JOIN pg_namespace n ON (n.oid = t.relnamespace)
 WHERE relkind = 'r'
   AND t.relname NOT LIKE 'pg_%'
   AND t.relname NOT LIKE 'sql_%'
 ORDER BY n.nspname, t.relname, c.conname;


*-- TABLES and FKEYS*

SELECT n.nspname as schema,
       t.relname as table,
       c.conname as fk_name
  FROM pg_class t
  JOIN pg_namespace n ON n.oid = t.relnamespace
  JOIN pg_constraint c ON (c.conrelid = t.oid AND c.contype = 'f')
 ORDER BY n.nspname, t.relname, c.conname;

Re: [GENERAL] Help on Index only scan

2017-08-13 Thread Melvin Davidson
On Sun, Aug 13, 2017 at 7:37 PM, Ertan Küçükoğlu <
ertan.kucuko...@1nar.com.tr> wrote:

>
> On 14 Aug 2017, at 01:15, Melvin Davidson <melvin6...@gmail.com> wrote:
>
>
> On Sun, Aug 13, 2017 at 5:59 PM, Ertan Küçükoğlu <
> ertan.kucuko...@1nar.com.tr> wrote:
>
>> Hello,
>>
>> My table details:
>> robox=# \dS+ updates
>>Table "public.updates"
>> Column |  Type   | Modifiers
>> | Storage  | Stats target | Description
>> ---+-+--
>> 
>> -+--+--+-
>>  autoinc   | integer | not null default
>> nextval('updates_autoinc_seq'::regclass) | plain|  |
>>  filename  | text|
>> | extended |  |
>>  dateofrelease | date|
>> | plain|  |
>>  fileversion   | text|
>> | extended |  |
>>  afile | text|
>> | extended |  |
>>  filehash  | text|
>> | extended |  |
>>  active| boolean |
>> | plain|  |
>> Indexes:
>> "updates_pkey" PRIMARY KEY, btree (autoinc)
>> "update_filename" btree (filename)
>> "updates_autoinc" btree (autoinc DESC)
>> "updates_dateofrelease" btree (dateofrelease)
>> "updates_filename_dateofrelease" btree (filename, dateofrelease)
>>
>>
>> robox=# select count(autoinc) from updates;
>>  count
>> ---
>>   2003
>> (1 row)
>>
>> robox=# select autoinc, filename, fileversion from updates limit 10;
>>  autoinc | filename | fileversion
>> -+--+-
>>   18 | Robox.exe| 1.0.1.218
>>   19 | Robox.exe| 1.0.1.220
>>   20 | Robox.exe| 1.0.1.220
>>   21 | 8423bfc5a669864f9b66b6b15ce908b9 | 1.1.1.1
>>   22 | 4fdabb0c7adbc5a89fbe679ce76ccef9 | 1.1.1.1
>>   23 | f469d77bfa86c8917c7846c0f871137c | 1.1.1.1
>>   24 | bc10af4c8789718a9ca6565ea14cb17d | 1.1.1.1
>>   25 | d9f87ee46cdb41cd15c2f71ed599faf9 | 1.1.1.1
>>   26 | 6f7428a5364aae1d5914a66cba3e6f3b | 1.1.1.1
>>   27 | 66ec4cdb8d64ca1414f75c1fb9eaa518 | 1.1.1.1
>> (10 rows)
>>
>> I want to have an index only scan for my below query:
>> select autoinc, fileversion from updates where filename = 'Robox.exe'
>> order
>> by autoinc desc;
>>
>> I simply could not understand planner and cannot provide right index for
>> it.
>> Below index names "update_filename" and "updates_autoinc" are added just
>> for
>> the query that I would like to have a index only scan plan. I also failed
>> with following indexes
>> "autoinc desc, filename, fileversion"
>> "autoinc desc, filename"
>>
>> First 3 rows in above select results are actual data. You will find that I
>> have inserted about 2000 rows of dummy data to have somewhat meaningful
>> plan
>> for the query.
>>
>> Current planner result:
>> robox=# vacuum full;
>> VACUUM
>> robox=# explain analyze
>> robox-# select autoinc, fileversion
>> robox-# from updates
>> robox-# where filename = 'Robox.exe'
>> robox-# order by autoinc desc;
>>   QUERY PLAN
>> 
>> 
>> --
>>  Sort  (cost=12.79..12.79 rows=3 width=12) (actual time=0.047..0.047
>> rows=3
>> loops=1)
>>Sort Key: autoinc DESC
>>Sort Method: quicksort  Memory: 25kB
>>->  Bitmap Heap Scan on updates  (cost=4.30..12.76 rows=3 width=12)
>> (actual time=0.040..0.040 rows=3 loops=1)
>>  Recheck Cond: (filename = 'Robox.exe'::text)
>>  Heap Blocks: exact=1
>>  ->  Bitmap Index Scan on update_filename  (cost=0.00..4.30 rows=3
>> width=0) (actual time=0.035..0.035 rows=3 loops=1)
>>Index Cond: (filename = 'Robox.exe'::text)
>>  Planning time: 1.873 ms
>>  Execution time: 0.076 ms
>> (10 rows)
>>
>>
>> I appreciate any help on having right index(es) as I simply failed myself.
>>
>> Regards,
>> Ertan Küçükoğlu
>>
>> *First, you do not need index "updates_autoinc", since autoinc is the
>> Primary Key, you are just duplicating the index.*

Re: [GENERAL] Help on Index only scan

2017-08-13 Thread Melvin Davidson
On Sun, Aug 13, 2017 at 5:59 PM, Ertan Küçükoğlu <
ertan.kucuko...@1nar.com.tr> wrote:

> Hello,
>
> My table details:
> robox=# \dS+ updates
>Table "public.updates"
> Column |  Type   | Modifiers
> | Storage  | Stats target | Description
> ---+-+--
> 
> -+--+--+-
>  autoinc   | integer | not null default
> nextval('updates_autoinc_seq'::regclass) | plain|  |
>  filename  | text|
> | extended |  |
>  dateofrelease | date|
> | plain|  |
>  fileversion   | text|
> | extended |  |
>  afile | text|
> | extended |  |
>  filehash  | text|
> | extended |  |
>  active| boolean |
> | plain|  |
> Indexes:
> "updates_pkey" PRIMARY KEY, btree (autoinc)
> "update_filename" btree (filename)
> "updates_autoinc" btree (autoinc DESC)
> "updates_dateofrelease" btree (dateofrelease)
> "updates_filename_dateofrelease" btree (filename, dateofrelease)
>
>
> robox=# select count(autoinc) from updates;
>  count
> ---
>   2003
> (1 row)
>
> robox=# select autoinc, filename, fileversion from updates limit 10;
>  autoinc | filename | fileversion
> -+--+-
>   18 | Robox.exe| 1.0.1.218
>   19 | Robox.exe| 1.0.1.220
>   20 | Robox.exe| 1.0.1.220
>   21 | 8423bfc5a669864f9b66b6b15ce908b9 | 1.1.1.1
>   22 | 4fdabb0c7adbc5a89fbe679ce76ccef9 | 1.1.1.1
>   23 | f469d77bfa86c8917c7846c0f871137c | 1.1.1.1
>   24 | bc10af4c8789718a9ca6565ea14cb17d | 1.1.1.1
>   25 | d9f87ee46cdb41cd15c2f71ed599faf9 | 1.1.1.1
>   26 | 6f7428a5364aae1d5914a66cba3e6f3b | 1.1.1.1
>   27 | 66ec4cdb8d64ca1414f75c1fb9eaa518 | 1.1.1.1
> (10 rows)
>
> I want to have an index only scan for my below query:
> select autoinc, fileversion from updates where filename = 'Robox.exe' order
> by autoinc desc;
>
> I simply could not understand planner and cannot provide right index for
> it.
> Below index names "update_filename" and "updates_autoinc" are added just
> for
> the query that I would like to have a index only scan plan. I also failed
> with following indexes
> "autoinc desc, filename, fileversion"
> "autoinc desc, filename"
>
> First 3 rows in above select results are actual data. You will find that I
> have inserted about 2000 rows of dummy data to have somewhat meaningful
> plan
> for the query.
>
> Current planner result:
> robox=# vacuum full;
> VACUUM
> robox=# explain analyze
> robox-# select autoinc, fileversion
> robox-# from updates
> robox-# where filename = 'Robox.exe'
> robox-# order by autoinc desc;
>   QUERY PLAN
> 
> 
> --
>  Sort  (cost=12.79..12.79 rows=3 width=12) (actual time=0.047..0.047 rows=3
> loops=1)
>Sort Key: autoinc DESC
>Sort Method: quicksort  Memory: 25kB
>->  Bitmap Heap Scan on updates  (cost=4.30..12.76 rows=3 width=12)
> (actual time=0.040..0.040 rows=3 loops=1)
>  Recheck Cond: (filename = 'Robox.exe'::text)
>  Heap Blocks: exact=1
>  ->  Bitmap Index Scan on update_filename  (cost=0.00..4.30 rows=3
> width=0) (actual time=0.035..0.035 rows=3 loops=1)
>Index Cond: (filename = 'Robox.exe'::text)
>  Planning time: 1.873 ms
>  Execution time: 0.076 ms
> (10 rows)
>
>
> I appreciate any help on having right index(es) as I simply failed myself.
>
> Regards,
> Ertan Küçükoğlu
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

*First, you do not need index "updates_autoinc", since autoinc is the
Primary Key, you are just duplicating the index.*

*As far as "Index only scan" , since the table only has 2003 rows, the
optimizer has determined it is faster just to*
*load all the rows into memory and then filter. If you really want to force
an index scan, then you would have to do*
*SET enable_seqscan = off; Before doing the query, however you are just
shooting yourself in the foot by doing that*

*as it will make the query slower.*
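
*If the goal is an index-only scan once the table grows, a hedged sketch: a
covering index over every column the query touches (works on 9.2 or later;
on 2003 rows the planner will still prefer the plans above):*

CREATE INDEX updates_filename_autoinc_fileversion
    ON updates (filename, autoinc DESC, fileversion);
VACUUM updates;   -- keeps the visibility map fresh, a precondition
                  -- for index-only scans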


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_stat_statements -- Historical Query

2017-08-10 Thread Melvin Davidson
On Thu, Aug 10, 2017 at 3:30 AM, Julien Rouhaud <rjuju...@gmail.com> wrote:

> On Thu, Aug 10, 2017 at 6:41 AM, Michael Paquier
> <michael.paqu...@gmail.com> wrote:
> > On Thu, Aug 10, 2017 at 6:23 AM, anand086 <anand...@gmail.com> wrote:
> >> I was looking for a way to maintain historical query details in
> Postgres to
> >> answer questions like
> >>
> >> What was the sql call rate between time X and Y?
> >> Did the execution count increase for the query increase between time X
> and
> >> Y?
> >> In past 10mins what all queries were run in the db?
> >>
> >> and few others like this.
> >>
> >> What would be best way to do it? Any thoughts?
> >
> > pg_stat_statements has a function allowing to reset what the view
> > pg_stat_statements holds as information. You could copy periodically
> > the data of pg_stat_statements and then invoke
> > pg_stat_statements_reset to put everything back to zero. Then you
> > would just need to do your analysis work based on the amount of data
> > copied into your custom table.
>
> You can also use powa-archivist extension which does the aggregation,
> data retention and so on with a bgworker:
> https://github.com/dalibo/powa-archivist.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*If you are interested in historical stats, you would probably fare a lot
better with PgBadger. It is free*
*and highly customizable.  In addition to SQL call rates at different
times, it provides analysis of*

*most used queries, slowest queries, etc.*


*https://sourceforge.net/projects/pgbadger/
<https://sourceforge.net/projects/pgbadger/>*
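
*And for the copy-and-reset approach Michael describes above, a minimal
sketch (the stmt_history table name is invented for illustration):*

CREATE TABLE stmt_history AS
SELECT now() AS captured_at, * FROM pg_stat_statements LIMIT 0;

-- run periodically, e.g. from cron:
INSERT INTO stmt_history SELECT now(), * FROM pg_stat_statements;
SELECT pg_stat_statements_reset();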
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread Melvin Davidson
On Wed, Aug 9, 2017 at 3:32 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Aug 9, 2017 at 12:26 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> *>I'm am wondering whether "REASSIGNED OWNED" **needs fixing as well*
>>
>> *Possibly, but as the op is on 9.3, it is not available to him.*
>>
>
> ​You should check the docs again...​
>
>
>> *I would also argue that since* *"OWNER TO new_owner" is available in
>> all other ALTER object statements, it is an omission and should be*
>> *included for extenstions as well..*
>>
>
> ​As am I, but omission or not I don't recall that we've ever back-patched
> new SQL grammar.
>
> David J.
> ​
>
*>You should check the docs again...​*

*Yes, you are correct, it is in 9.3*

*>I don't recall that we've ever back-patched new SQL grammar.*

*I am not saying back patch, I am saying an enhancement for version 10 or
11.*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread Melvin Davidson
On Wed, Aug 9, 2017 at 3:00 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Aug 9, 2017 at 11:30 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>>
>> On Wed, Aug 9, 2017 at 1:56 PM, David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Wed, Aug 9, 2017 at 10:37 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>>
>>>> Scott Marlowe <scott.marl...@gmail.com> writes:
>>>> > Seems like something that should be handled by alter doesn't it?
>>>>
>>>> I have some vague memory that we intentionally didn't implement
>>>> ALTER EXTENSION OWNER because we were unsure what it ought to do
>>>> about ownership of objects belonging to the extension.  If the answer
>>>> is "nothing" then it wouldn't be hard to add such a statement.
>>>>
>>>
>>> The documented contract of CREATE EXTENSION(1)/ALTER EXTENSION ADD
>>> MEMBER(2) requires that the extension owner and the owner of the member
>>> objects be one-and-the-same (I suppose the inclusion of DROP in (2) makes
>>> this debatable).  I do not know what happens today if someone tries to
>>> ALTER OBJECT SET OWNER on a member object to a role other than the owner of
>>> the extension.  From the docs I'd suggest that it should fail.  Likewise,
>>> ALTER EXTENSION OWNER should cascade to all members - which (3), and normal
>>> dependency tracking, seems to make straight-forward.
>>>
>>> 1>The user who runs CREATE EXTENSION becomes the owner of the extension
>>> for purposes of later privilege checks, as well as the owner of any objects
>>> created by the extension's script.
>>>
>>> 2>You must own the extension to use ALTER EXTENSION. The ADD/DROP forms
>>> require ownership of the added/dropped object as well.
>>>
>>> 3>CREATE EXTENSION additionally records the identities of all the
>>> created objects, so that they can be dropped again if DROP EXTENSION is
>>> issued.
>>>
>>> David J.
>>>
>>>
>>
>>
>> *David,*
>>
>> *The problem is, The current owner of the extension needs to be dropped.
>> No one should have to jump through hoops*
>> *just to be able to do that. There is definitely a need for an*
>>
>> *ALTER EXTENSION name OWNER TO new_owner.*
>> *As Tom Lane has already pointed out, it would not be hard to add that.*
>>
>>
> ​I'm not sure what it is you think I'm missing here.  My only point was
> I'm tending to think that "nothing", while workable, diverges from what I
> would expect - that an extension and all of its member objects should, at
> all times, share a common owner.  I don't imagine that either definition
> would be abnormally difficult to implement for v11.
>
> I'm am wondering whether "REASSIGNED OWNED" needs fixing as well...since
> that command is specifically designed to handle this use case.
>
> https://www.postgresql.org/docs/9.6/static/sql-reassign-owned.html
> ​
> ​
>
> D
> ​avid J.
> ​
>

*>I'm am wondering whether "REASSIGNED OWNED" **needs fixing as well*

*Possibly, but as the op is on 9.3, it is not available to him.*
*I would also argue that since* *"OWNER TO new_owner" is available in all
other ALTER object statements, it is an omission and should be*
*included for extenstions as well..*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread Melvin Davidson
On Wed, Aug 9, 2017 at 1:56 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Aug 9, 2017 at 10:37 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Scott Marlowe <scott.marl...@gmail.com> writes:
>> > Seems like something that should be handled by alter doesn't it?
>>
>> I have some vague memory that we intentionally didn't implement
>> ALTER EXTENSION OWNER because we were unsure what it ought to do
>> about ownership of objects belonging to the extension.  If the answer
>> is "nothing" then it wouldn't be hard to add such a statement.
>>
>
> The documented contract of CREATE EXTENSION(1)/ALTER EXTENSION ADD
> MEMBER(2) requires that the extension owner and the owner of the member
> objects be one-and-the-same (I suppose the inclusion of DROP in (2) makes
> this debatable).  I do not know what happens today if someone tries to
> ALTER OBJECT SET OWNER on a member object to a role other than the owner of
> the extension.  From the docs I'd suggest that it should fail.  Likewise,
> ALTER EXTENSION OWNER should cascade to all members - which (3), and normal
> dependency tracking, seems to make straight-forward.
>
> 1>The user who runs CREATE EXTENSION becomes the owner of the extension
> for purposes of later privilege checks, as well as the owner of any objects
> created by the extension's script.
>
> 2>You must own the extension to use ALTER EXTENSION. The ADD/DROP forms
> require ownership of the added/dropped object as well.
>
> 3>CREATE EXTENSION additionally records the identities of all the created
> objects, so that they can be dropped again if DROP EXTENSION is issued.
>
> David J.
>
>


*David,*

*The problem is, the current owner of the extension needs to be dropped. No
one should have to jump through hoops*
*just to be able to do that. There is definitely a need for an*

*ALTER EXTENSION name OWNER TO new_owner.*
*As Tom Lane has already pointed out, it would not be hard to add that.*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread Melvin Davidson
On Wed, Aug 9, 2017 at 12:19 PM, Scott Marlowe <scott.marl...@gmail.com>
wrote:

> On Wed, Aug 9, 2017 at 10:10 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > Melvin Davidson <melvin6...@gmail.com> writes:
> >> *UPDATE pg_extensionSET extowner = {oid_of_new_owner} WHERE
> extowner =
> >> {oid_from_above_statement};*
> >
> > Note you'll also have to modify the rows in pg_shdepend that reflect
> > this ownership property.
>
> Seems like something that should be handled by alter doesn't it?
>








*In keeping with what Tom advised, the SQL to do that would be:*

UPDATE pg_shdepend
   SET refobjid = {oid_of_new_owner}
 WHERE refobjid = {oid_of_old_owner}
   AND deptype = 'o';

*However, as Scott suggested, there should definitely be an ALTER statement
that changes the owner of the extension and does the required work:*

*IE: ALTER EXTENSION name OWNER TO new_owner;*

*Perhaps in version 10 or 11?*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread Melvin Davidson
On Wed, Aug 9, 2017 at 11:20 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
> On Wed, Aug 9, 2017 at 10:42 AM, Colin 't Hart <colinth...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Why does
>> ALTER EXTENSION name OWNER TO new_owner;
>> not exist?
>>
>> I have a bunch of extensions that were installed by a role that I want
>> to drop. So I thought I would do like I do for other object types:
>> ALTER  name OWNER TO new_owner;
>>
>> But that doesn't exist for extensions. I also can't drop the extension
>> and recreate it because other objects depend on it.
>>
>> What can I do?
>>
>> This is on PostgreSQL 9.3.
>>
>> Thanks,
>>
>> Colin
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
> *AFAIK, extensions do not have an owner.  They just exist and are
> available to everyone.*
>
> *If you are having a particular problem (other than owner) with an
> extension, it would be helpful to*
>
> *post a script to illustrate that.  You should be able to drop the role
> without any problem. *
>
> *If an error occurs, then please advise on that and include the exact
> message.*
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>

*Hmm, I have to retract my previous statement, as the structure of
pg_extension is:*

CREATE TABLE pg_extension
(
  extname name NOT NULL,
  extowner oid NOT NULL,
  extnamespace oid NOT NULL,
  extrelocatable boolean NOT NULL,
  extversion text,
  extconfig oid[],
  extcondition text[]
)
WITH (OIDS=TRUE);
*So to solve your problem, as a superuser you can do:*

SELECT oid, rolname
  FROM pg_authid
 WHERE rolname = '{new_owner}';

SELECT oid, rolname
  FROM pg_authid
 WHERE rolname = '{user_you_want_to_drop}';

*Then:*

UPDATE pg_extension
   SET extowner = {oid_of_new_owner}
 WHERE extowner = {oid_from_above_statement};




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread Melvin Davidson
On Wed, Aug 9, 2017 at 10:42 AM, Colin 't Hart <colinth...@gmail.com> wrote:

> Hi,
>
> Why does
> ALTER EXTENSION name OWNER TO new_owner;
> not exist?
>
> I have a bunch of extensions that were installed by a role that I want
> to drop. So I thought I would do like I do for other object types:
> ALTER  name OWNER TO new_owner;
>
> But that doesn't exist for extensions. I also can't drop the extension
> and recreate it because other objects depend on it.
>
> What can I do?
>
> This is on PostgreSQL 9.3.
>
> Thanks,
>
> Colin
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*AFAIK, extensions do not have an owner.  They just exist and are available
to everyone.*

*If you are having a particular problem (other than owner) with an
extension, it would be helpful to*

*post a script to illustrate that.  You should be able to drop the role
without any problem. *

*If an error occurs, then please advise on that and include the exact
message.*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Any thoughts on making a psql meta-command "hide (or show alt text) if no results"?

2017-08-08 Thread Melvin Davidson
On Tue, Aug 8, 2017 at 9:16 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> Hey all, looking for thoughts on a feature request:
>
> I run quite a few queries, using psql, that are intended for exceptional
> situations.  When there are no results, which is expected, I still get the
> table header and basic frame showing up in the output.  The option I'd like
> is to be able to suppress the output of the empty table (and header if
> there is one) or possibly substitute the empty table with user-supplied
> text.
>
> Thinking something that is used like \g
>
> SELECT * FROM (VALUES (1)) vals (v) WHERE v = 0 \ghideifempty
>
> [SQL] \galtifempty 'No values matching 0 in vals'
>
> The names are descriptive, not suggestions...
>
> David J.
>

>The option I'd like is to be able to suppress the output of the empty
table (and header if there is one)
*Have you looked at the TUPLES ONLY option?*

*-t* *--tuples-only*

*Turn off printing of column names and result row count footers, etc. This
is equivalent to the \t command.*
*https://www.postgresql.org/docs/9.4/static/app-psql.html
<https://www.postgresql.org/docs/9.4/static/app-psql.html>*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Audit based on role

2017-08-07 Thread Melvin Davidson
*You can tweak the following query to help you determine if your user is a
member of role/group 'module_dml'. Then you can use it in a trigger
function that does the logging.*

SELECT g.rolname as group,
       u.rolname as user,
       r.admin_option as admin,
       g.rolsuper as g_super,
       u.rolsuper as u_super
  FROM pg_auth_members r
  JOIN pg_authid g ON (r.roleid = g.oid)
  JOIN pg_authid u ON (r.member = u.oid)
 WHERE u.rolname = '{your_user}'
   AND g.rolname = 'module_dml'
 ORDER BY 1, 2;
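
*A hedged sketch of such a trigger function — pg_has_role() does the
membership test, and the audit table and all names here are invented for
illustration:*

CREATE OR REPLACE FUNCTION audit_if_module_dml()
RETURNS trigger AS
$$
BEGIN
    -- log only when the acting role is a member of module_dml
    IF pg_has_role(session_user, 'module_dml', 'MEMBER') THEN
        INSERT INTO audit_log (logged_at, db_user, operation, table_name)
        VALUES (now(), session_user, TG_OP, TG_TABLE_NAME);
    END IF;
    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_audit
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE audit_if_module_dml();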


On Mon, Aug 7, 2017 at 8:05 PM, Joe Conway <m...@joeconway.com> wrote:

> On 08/07/2017 04:47 PM, anand086 wrote:
> > Only Insert/Update/Delete sqls are to be audited.
>
> You could get close to what you want, I think, by setting log_statement
> = mod for the users of interest, e.g. by doing:
>
>  ALTER USER whomever SET log_statement = mod;
>
> See:
>
> https://www.postgresql.org/docs/9.6/static/runtime-
> config-logging.html#GUC-LOG-STATEMENT
>
> Note: "mod logs all ddl statements, plus data-modifying statements such
>as INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM."
>
> Caveat: You would have to do this per user in that group. However you
> could write a query against the system catalogs though to loop through
> the members of the group and execute this statement against each one.
> Maybe rerun it periodically.
>
> HTH,
>
> Joe
>
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL with PowerBuilder, and Identity keys (serials)

2017-08-05 Thread Melvin Davidson
On Sat, Aug 5, 2017 at 6:26 PM, Igor Korot <ikoro...@gmail.com> wrote:

> Hi,
>
> Did you try bringing it to SAP?
>
> Thank you.
>
> On Sat, Aug 5, 2017 at 2:39 PM, Dan Cooperstock at Software4Nonprofits
> <i...@software4nonprofits.com> wrote:
> > Yes my direct SQL testing used all caps and worked fine.
> >
> > There is no error message. It's just that PowerBuilder's built-in
> mechanism that should retrieve the identity key column's value after an
> insert is done using its DataWindow control, that is based on the setting I
> gave in my first post,
> >
> > GetIdentity="Select currval('GEN_')"
> >
> > isn't working - the value doesn't get filled in. I have no way of
> knowing whether that code isn't getting called, isn't working, or what. (I
> tried a trace option in PowerBuilder, and it didn't show any call to that
> code, but it also didn't show any call to the corresponding code in my
> setup for Firebird SQL, where this works perfectly.)
> >
> > I really need responses from people who have successfully done this with
> PowerBuilder, because I think it's an issue about the combination of
> PowerBuilder and PostgreSQL, not PostgreSQL alone.
> >
> > 
> > Dan Cooperstock
> > DONATION and ACCOUNTS web site: http://www.Software4Nonprofits.com
> > Email: i...@software4nonprofits.com
> > Phone: 416-423-7722
> > Mail: 57 Gledhill Ave., Toronto ON M4C 5K7, Canada
> >
> > If you do not want to receive any further emails from
> Software4Nonprofits, please reply to this email with the subject line
> "UNSUBSCRIBE", and we will immediately remove you from our mailing list, if
> you are on it.
> >
> >
> > -Original Message-
> > From: Rob Sargent [mailto:robjsarg...@gmail.com]
> > Sent: August 5, 2017 5:30 PM
> > To: Dan Cooperstock at Software4Nonprofits <i...@software4nonprofits.com
> >
> > Cc: Forums postgresql <pgsql-general@postgresql.org>
> > Subject: Re: [GENERAL] PostgreSQL with PowerBuilder, and Identity keys
> (serials)
> >
> >
> >> On Aug 5, 2017, at 3:12 PM, Dan Cooperstock at Software4Nonprofits <
> i...@software4nonprofits.com> wrote:
> >>
> >> I’m on PostgreSQL 9.6, 64-bit Windows.
> >>
> >> That really is the correct name for the sequence, because I’m not using
> SERIAL. (I needed everything to match the naming in my existing DB I’m
> using for the app, Firebird SQL, so the changes to make it work with either
> DB would be as minimal as possible.) The setup of tables I’m using with
> this sort of thing are like the following example:
> >>
> >> CREATE SEQUENCE GEN_CATEGORY MINVALUE 0;
> >>
> >> CREATE TABLE CATEGORY(
> >>   CATEGORYNUMSMALLINT NOT NULL DEFAULT NEXTVAL('GEN_CATEGORY'),
> >>   DESCRIPTIONVARCHAR(20) NOT NULL,
> >>   CONSTRAINT PK_CATEGORY PRIMARY KEY (CATEGORYNUM) );
> >>
> >> So as you can see GEN_ plus the tablename is indeed correct. The
> default on the CATEGORYNUM column is definitely working, which I tested
> with direct SQL commands: after inserting a row (with the CATEGORYNUM not
> specified in the INSERT), if I SELECT currval(‘GEN_CATEGORY’), it gives me
> the correct value, which is also what got saved in that column.
> >>
> >> 
> >> Dan Cooperstock
> >> DONATION and ACCOUNTS web site: http://www.Software4Nonprofits.com
> >> Email: i...@software4nonprofits.com
> >> Phone: 416-423-7722
> >> Mail: 57 Gledhill Ave., Toronto ON M4C 5K7, Canada
> >>
> >> If you do not want to receive any further emails from
> Software4Nonprofits, please reply to this email with the subject line
> "UNSUBSCRIBE", and we will immediately remove you from our mailing list, if
> you are on it.
> >>
> >>
> >> From: Melvin Davidson [mailto:melvin6...@gmail.com]
> >> Sent: August 5, 2017 4:46 PM
> >> To: Dan Cooperstock at Software4Nonprofits
> >> <i...@software4nonprofits.com>
> >> Cc: pgsql-general@postgresql.org
> >> Subject: Re: [GENERAL] PostgreSQL with PowerBuilder, and Identity keys
> >> (serials)
> >>
> >> >GetIdentity="Select currval('GEN_')"
> >>
> >> FYI, it would be helpful to specify the PostgreSQL version & O/S, but
> >> generically speaking, in PostgreSQL, when you generate a sequence by
> >> specifying serial as data type, the name takews the form of
> >> tab

Re: [GENERAL] PostgreSQL with PowerBuilder, and Identity keys (serials)

2017-08-05 Thread Melvin Davidson
>GetIdentity="Select currval('GEN_')"


*FYI, it would be helpful to specify the PostgreSQL version & O/S, but
generically speaking, in PostgreSQL, when you generate a sequence by
specifying serial as the data type, the name takes the form
tablename_columnname_seq, so in your case try:*

GetIdentity="Select currval('tablename_columnname_seq')"

*https://www.postgresql.org/docs/9.4/static/datatype-numeric.html#DATATYPE-SERIAL
<https://www.postgresql.org/docs/9.4/static/datatype-numeric.html#DATATYPE-SERIAL>*

*BTW, in PostgreSQL, it is NOT recommended to use mixed case object names,
as all names are converted to lowercase unless enclosed in double quotes.*
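
*A hedged illustration with the GEN_CATEGORY sequence from your schema —
the case folding happens inside the regclass string too:*

SELECT currval('gen_category');     -- the folded name, as actually stored
SELECT currval('GEN_CATEGORY');     -- also works: unquoted names are folded
SELECT currval('"GEN_CATEGORY"');   -- fails: the quoted form is case-sensitive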

On Sat, Aug 5, 2017 at 4:09 PM, Dan Cooperstock at Software4Nonprofits <
i...@software4nonprofits.com> wrote:

> I’m trying to get a Postgres DB version of an application I write in
> PowerBuilder working. The thing I’m stuck on is Identity keys – what you
> set up with the SERIAL attribute or SEQUENCEs / GENERATORs in Postgres.
>
>
>
> I have the sequence set up and clearly working. And in PowerBuilder, I
> have added a section I found online to a file it uses to know how to deal
> with different aspects of different DBs (PBODBxxx.INI) for Postgres,
> including the following line for retrieving an identity key after saving a
> record, when the DB automatically fills in the value:
>
>
>
> GetIdentity="Select currval('GEN_')"
>
>
>
> That obviously depends on the generator being named “GEN_” plus the
> table’s name – which is true in our case.
>
>
>
> But nothing like that is happening. Does anyone else have PostgresSQL
> working with PowerBuilder and identity keys, who can give me some pointers
> on how get this to work?
>
>
>
> Thanks.
>
>
>
> 
>
> Dan Cooperstock
> DONATION and ACCOUNTS web site: http://www.Software4Nonprofits.com
> Email: i...@software4nonprofits.com
> Phone: 416-423-7722 <(416)%20423-7722>
> Mail: 57 Gledhill Ave., Toronto ON M4C 5K7, Canada
>
> *If you do not want to receive any further emails from
> Software4Nonprofits, please reply to this email with the subject line
> "UNSUBSCRIBE", and we will immediately remove you from our mailing list, if
> you are on it.*
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] org.postgresql.util.PSQLException: FATAL: terminating connection due to administrator command

2017-08-02 Thread Melvin Davidson
On Wed, Aug 2, 2017 at 9:42 AM, Emi <em...@encs.concordia.ca> wrote:

> Hello,
>
> Running psql table updates() by using 
> org.springframework.scheduling.quartz.JobDetailFactoryBean
> cronjob from web application. Got the following exception:
>
> org.postgresql.util.PSQLException: FATAL: terminating connection due to
> administrator command
>
>
> Re-run the same cronjob several times, and cannot re-generate the error.
>
> May I know what might cause the above error message please? And which log
> file(online doc) could help provide more detail information about this
> please?
>
> Thanks a lot.
> --
> psql version: PostgreSQL 8.3.18 on x86_64-unknown-linux-gnu
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


This says it all:  "terminating connection due to administrator command"
It means someone killed your session, either from the command line with
kill {the pid}  or with SELECT pg_terminate_backend(pid);

https://www.postgresql.org/docs/9.4/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL
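If it happens again, a quick sketch for seeing who is connected and, if
necessary, terminating a session yourself (the pid below is hypothetical;
note that on a server as old as 8.3 the columns are procpid and
current_query rather than pid and query, and there is no state column):

SELECT pid, usename, client_addr, query
  FROM pg_stat_activity;

SELECT pg_terminate_backend(12345);  -- 12345 is a hypothetical pid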
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Schemas and serials

2017-07-29 Thread Melvin Davidson
On Sat, Jul 29, 2017 at 3:38 PM, tel medola <tel.med...@gmail.com> wrote:

> Depends.
> When you create your tables in new schema, the script was the same from
> "qa"?
> Sequences, tables, etc.. belong to the schema where was created.
>
> Roberto.
>
> On Sat, Jul 29, 2017 at 4:17 PM, marcelo <marcelo.nico...@gmail.com>
> wrote:
>
>> Some days ago I asked regarding tables located in different schemas.
>> Now, my question is
>> Suppose I have two schemas (other than public): "qa" and "production".
>> Initially I create all my tables in "qa". All of them have a primary key
>> of type serial.
>> Later, I will copy the tables definitions to production.
>> It will automatically create the sequences in the new schema, starting
>> at zero?
>> TIA
>> Marcelo
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>




*Marcelo,*

*>Initially I create all my tables in "qa". All of them have a primary key
of type serial.*
*>Later, I will copy the tables definitions to production.*

*A word of caution: creating tables in a qa "schema" and then transferring
to production is not the normal/correct (or safe) way to do development.*

*The standard procedure is to create a separate "qa" database (and/or
server) with the exact same schema(s) as production. Then, after testing
is completed, the schemas/tables are copied to production.*
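For that final copy step, something along these lines works (the host and
database names are hypothetical; note that sequences restored schema-only
start over at their START value, they do not carry the qa counters):

pg_dump -h qa_host -d qa_db --schema-only --no-owner > schema.sql
psql -h prod_host -d prod_db -f schema.sql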

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] I can't cancel/terminate query.

2017-07-13 Thread Melvin Davidson
On Thu, Jul 13, 2017 at 2:45 PM, Edmundo Robles <edmu...@sw-argos.com>
wrote:

> i  executed the commands   many times like  superuser but  that queries
> still running :(
>
> On Thu, Jul 13, 2017 at 11:10 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>>
>> On Thu, Jul 13, 2017 at 11:57 AM, Edmundo Robles <edmu...@sw-argos.com>
>> wrote:
>>
>>> Hi!  i have  many  too long time queries,  the oldest  is  almost 16
>>> days, so i  tried to cancel and terminate with pg_cancel_backend and
>>> pg_terminate_backend  but queries is still running.
>>>
>>> STIME ELAPSED ELAPSED %CPU   PID COMMAND
>>> jun27 15-23:05:46 1379146  0.3 29660 postgres: argos_admin bdkairos
>>> 127.0.0.1(55605) SELECT
>>> jun27 15-23:05:46 1379146  0.3 29659 postgres: argos_admin bdkairos
>>> 127.0.0.1(55604) SELECT
>>> jun27 15-23:05:46 1379146  0.3 29658 postgres: argos_admin bdkairos
>>> 127.0.0.1(55603) SELECT
>>> jun27 15-23:05:16 1379116  0.3 29909 postgres: argos_admin bdkairos
>>> 127.0.0.1(55767) SELECT
>>> jun27 15-23:04:51 1379091  0.3 30097 postgres: argos_admin bdkairos
>>> 127.0.0.1(55909) SELECT
>>> jun27 15-23:03:49 1379029  0.3 30661 postgres: argos_admin bdkairos
>>> 127.0.0.1(56303) SELECT
>>> ...
>>> What should  i do to safely close that  queries, before to use kill
>>> command in linux?
>>>
>>> I have Postgres 9.4  running on debian jessie.
>>>
>>>
>>>
>>> --
>>>
>>>
>> *To cancel a process with pg_terminate_backend, you need to be a
>> superuser.*
>> *So first;*
>>
>> *SET ROLE postgres;*
>>
>> Then you should be able to
>> *SELECT pg_terminate_backend(<pid>);*  Where <pid> is the pid of
>> the process you want to terminate.
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
>
> --
>
>



*If you are running on Linux, then:*

*sudo su*

*kill <pid>*

*Where <pid> is the same pid as in pg_terminate_backend(<pid>)*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] I can't cancel/terminate query.

2017-07-13 Thread Melvin Davidson
On Thu, Jul 13, 2017 at 11:57 AM, Edmundo Robles <edmu...@sw-argos.com>
wrote:

> Hi!  i have  many  too long time queries,  the oldest  is  almost 16 days,
> so i  tried to cancel and terminate with pg_cancel_backend and
> pg_terminate_backend  but queries is still running.
>
> STIME ELAPSED ELAPSED %CPU   PID COMMAND
> jun27 15-23:05:46 1379146  0.3 29660 postgres: argos_admin bdkairos
> 127.0.0.1(55605) SELECT
> jun27 15-23:05:46 1379146  0.3 29659 postgres: argos_admin bdkairos
> 127.0.0.1(55604) SELECT
> jun27 15-23:05:46 1379146  0.3 29658 postgres: argos_admin bdkairos
> 127.0.0.1(55603) SELECT
> jun27 15-23:05:16 1379116  0.3 29909 postgres: argos_admin bdkairos
> 127.0.0.1(55767) SELECT
> jun27 15-23:04:51 1379091  0.3 30097 postgres: argos_admin bdkairos
> 127.0.0.1(55909) SELECT
> jun27 15-23:03:49 1379029  0.3 30661 postgres: argos_admin bdkairos
> 127.0.0.1(56303) SELECT
> ...
> What should  i do to safely close that  queries, before to use kill
> command in linux?
>
> I have Postgres 9.4  running on debian jessie.
>
>
>
> --
>
>
*To cancel a process with pg_terminate_backend, you need to be a superuser.*
*So first;*

*SET ROLE postgres;*

Then you should be able to
*SELECT pg_terminate_backend(<pid>);*  Where <pid> is the pid of the process
you want to terminate.
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Get table OID

2017-07-13 Thread Melvin Davidson
On Thu, Jul 13, 2017 at 11:06 AM, Igor Korot <ikoro...@gmail.com> wrote:

> Hi, Melvin,
>
> On Thu, Jul 13, 2017 at 10:42 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>> On Thu, Jul 13, 2017 at 10:36 AM, Igor Korot <ikoro...@gmail.com> wrote:
>>
>>>  Hi, ALL,
>>> Is it possible to get the table ID (or OID) from information_schema
>>> somewhere?
>>>
>>> Thank you.
>>>
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>>
>> It 's in pg_class!
>>
>> https://www.postgresql.org/docs/9.4/static/catalog-pg-class.html
>>
>
> But pg_class is in postgres database.
> Can I do something like this:
>
> SELECT oid FROM postgres.pg_class WHERE relname = ;
>
> Or I will have to connect to postgres database?
>
> Thank you.
>
>
>>
>> IOW:
>> SELECT relname, oid  FROM pg_class WHERE relkind = 'r' ORDER BY 1;
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


*Igor,*

*You do not need to specify the "postgres" schema (postgres.pg_class). That
is wrong anyway.*

*FYI, the correct schema is pg_catalog, but ALL postgres CATALOGS are
always available regardless of which database you are connected to,*

*and pg_catalog is in the default search path, so you do not have to
specify it.*

*Just do:*

SELECT oid
  FROM pg_class
 WHERE relname = '<table_name>';

*It will work just fine!  I highly encourage you to RTFM.*
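As an aside, a regclass cast returns the same OID and also honors the
search path (the table name here is hypothetical):

SELECT 'my_table'::regclass::oid;
SELECT 'public.my_table'::regclass::oid;  -- schema-qualified also works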


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to drop column from interrelated views

2017-07-09 Thread Melvin Davidson
On Sun, Jul 9, 2017 at 9:56 AM, Berend Tober <bto...@computer.org> wrote:

> Guyren Howe wrote:
>
>> On Jul 8, 2017, at 16:11 , Berend Tober <bto...@computer.org > bto...@computer.org>>
>> wrote:
>>
>>>
>>> Guyren Howe wrote:
>>>
>>>> I’ve a set of interrelated views. I want to drop a column from a table
>>>> and from all the views
>>>> that cascade from it.
>>>>
>>>> I’ve gone to the leaf dependencies and removed the field from them. But
>>>> I can’t remove the
>>>> field from the intermediate views because Postgres doesn’t appear to be
>>>> clever enough to see
>>>> that the leafs no longer depend on the column. Or did I just miss one?
>>>>
>>>> In general, this seems like a major weakness expressing a model in
>>>> Postgres (I get that any
>>>> such weakness derives from SQL; that doesn’t stop me wanting a
>>>> solution).
>>>>
>>>> Thoughts? Comments?
>>>>
>>>
>>> This usually involves a pg_dump in the custom format, editing the list
>>> file, creating a script
>>> with pg_restore.
>>>
>>> I described a way I have had success with it at one point at
>>>
>>>
>>> https://www.postgresql.org/message-id/55C3F0B4.5010600%40computer.org
>>>
>>
>> I was wondering if I changed up all the things that interrelate in a
>> transaction, whether that
>> would bundle them up so they’re all correct afterward. I was hoping so.
>>
>
>
> Well, nothing beats empirical evidence ... set up a test case and try it!
>
> You definitely want to do it in a transaction anyway, so that if you get
> it wrong the first few times and have to iterate, the data base rolls back
> to where you started.
>
> Note the method suggested in the earlier link appears to have a error.
> Step 4 should be
>
>
>   pg_restore -c -1  -L mydatabase.list mydatabase.dump > sql
>
>
> The lower case "c" flag will include DROP statements for the views. The
> "1" will wrap in a transaction, like you want.
>
>
> BTW, please generally use the "reply-all" functionality of your email
> client when interacting with this list ... the server puts the list alias
> in the CC line, so you have to reply all to keep the conversation publicly
> available for others to learn from.
>
> -- B
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Another alternative is to simply extract all the view defs  with the column
name you want to drop,
edit the viewdef to remove the columns and then use that to redefine the
views BEFORE
dropping the column from the table.

IE: in the following query, replace <column_name> with the name of the
column to be dropped.

1. Make a SQL pg_dump of the database before proceeding

2.
\o edit_views.sql

SELECT 'CREATE OR REPLACE VIEW  '
   || n.nspname || '.' || c.relname || ' AS '
   || pg_get_viewdef(c.oid, true)
  FROM pg_attribute a
  JOIN pg_class c ON c.oid = a.attrelid
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
 WHERE c.relkind = 'v'
   AND n.nspname NOT LIKE 'pg_%'
   AND n.nspname NOT LIKE 'sql_%'
   AND n.nspname NOT LIKE 'information%'
   AND a.attname = '<column_name>'
 ORDER BY 1;

3. Edit edit_views.sql to remove all occurrences of the column being dropped

4. psql <database> < edit_views.sql

5. Then drop the column from the table.

*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Trying to reduce xlog wal files

2017-07-06 Thread Melvin Davidson
On Thu, Jul 6, 2017 at 10:44 AM, Steve DeLong <sdel...@saucontech.com>
wrote:

> I am running Postgresql 9.3.15 and I am trying to reduce the amount of wal
> files being recycled in pg_xlog.  Archive is set up and working correctly.
> A while ago we had problems with the streaming slave falling behind because
> of hardware being slower and tuned postgres to keep over 1000k wal files so
> it could catch up without us manually moving files from the backup server.
> We have since upgraded the slave so it never falls behind and would like to
> reduce the wals files kept in pg_xlog.  I have updated postgresql.conf
> checkpoint_segments from 1000 to 250 and checkpoint_timeout from 16min to
> 5min.  I ran SELECT pg_reload_conf(); and using pg_controldata I have
> confirmed the settings and checkpoints are happening at 5 min intervals.
> However, file count in pg_xlog stays at 1152 and never changes. How can I
> get this to reduce the file count?
>
> Thanks
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*>I have updated postgresql.conf checkpoint_segments from 1000 to 250*

*But what is the value of wal_keep_segments?*
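If it turns out wal_keep_segments is still set near 1000, that alone pins
the pg_xlog file count regardless of the checkpoint settings. A sketch of
checking and lowering it (the new value is hypothetical; this parameter
only needs a reload, not a restart):

SHOW wal_keep_segments;

-- then, in postgresql.conf:
--   wal_keep_segments = 100
SELECT pg_reload_conf();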

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_start/stop_backup non-exclusive scripts to snapshot

2017-07-05 Thread Melvin Davidson
On Wed, Jul 5, 2017 at 10:14 AM, Stephen Frost <sfr...@snowman.net> wrote:

> Greetings,
>
> * Melvin Davidson (melvin6...@gmail.com) wrote:
> > Stephen,
> > >This script is a good example of why trying to take a PG backup using
> > shell scripts isn't a good idea.
> >
> > Your criticism is noted, however, I have used it many times in the past
> > with absolutely no problem. I submitted that script as a possible
> solution
> > to the op's problem/question. If you have an alternate solution or can
> make
> > improvements to it, then I am sure the op and I would welcome them.
>
> Part of my concern is that such a script is unlikely to show any
> problems until it comes time to do a restore- it could be failing now
> due to the issues I noted previously without any obvious error being
> thrown but with the resulting backup not being viable.  Hopefully that
> isn't the case and ideally you're performing test restores of each
> backup you take to ensure that it works.
>
> Further, it doesn't address the OP's question, which was specifically
> how to avoid using the now-deprecated exclusive backup method that the
> script you posted uses.
>
> * Michael Paquier (michael.paqu...@gmail.com) wrote:
> > On Wed, Jul 5, 2017 at 10:47 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
> > > Your criticism is noted, however, I have used it many times in the
> past with absolutely no problem.
> >
> > Plug off the server on which is stored the backup just after your
> > script finishes, you have a good chance to be surprised if you try to
> > restore from this backup later on.
>
> What might be worse would be to pull the plug while the backup is
> running and then try to bring the primary back online. :/  That issue is
> part of why the API used in this script is now deprecated.
>
> > > I submitted that script as a possible solution
> > > to the op's problem/question. If you have an alternate solution or can
> make improvements to it, then I am sure the op and I would welcome them.
> >
> > Stephen has mentioned two of them, with hundreds of man hours spent in
> developing those backup tools to be robust solutions, done by
> > specialists on the matter.
>
> Right, there's little sense in trying to perfect a shell script when
> proper solutions exist.
>
> Thanks!
>
> Stephen
>

>Part of my concern is that such a script is unlikely to show any problems
until it comes time to do a restore
As previously stated, the script was used to set up a slave and has done so
successfully many times. There are subsequent scripts
that check results.

>What might be worse would be to pull the plug while the backup is running
and then try to bring the primary back online.
Uh, who would be so stupid as to do that?

>Right, there's little sense in trying to perfect a shell script when
proper solutions exist.
>>It's better to create something that others criticise than to create
nothing and criticise others. Go create, have fun!!

http://www.azquotes.com/quote/874849





-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_start/stop_backup non-exclusive scripts to snapshot

2017-07-05 Thread Melvin Davidson
Stephen,
>This script is a good example of why trying to take a PG backup using
shell scripts isn't a good idea.

Your criticism is noted, however, I have used it many times in the past
with absolutely no problem. I submitted that script as a possible solution
to the op's problem/question. If you have an alternate solution or can make
improvements to it, then I am sure the op and I would welcome them.


On Wed, Jul 5, 2017 at 9:10 AM, Stephen Frost <sfr...@snowman.net> wrote:

> Greetings,
>
> * Melvin Davidson (melvin6...@gmail.com) wrote:
> > On Tue, Jul 4, 2017 at 5:55 PM, Stephen Frost <sfr...@snowman.net>
> wrote:
> > > I'd recommend considering one of the existing PG backup tools which
> know
> > > how to properly perform WAL archiving and tracking the start/stop
> points
> > > in the WAL of the backup.  Trying to write your own using shell
> scripts,
> > > even with ZFS snapshots, isn't trivial.  If you trust the ZFS snapshot
> > > to be perfectly atomic across all filesystems/tablespaces used for PG,
> > > you could just take a snapshot and forget the rest- PG will do crash
> > > recovery when you have to restore from that snapshot but that's not
> much
> > > different from having to do WAL replay of the WAL generated during the
> > > backup.
> > >
> > > As for existing solutions, my preference/bias is for pgBackRest, but
> > > there are other options out there which also work, such as barman.
> >
> > Here is a model shell script I use to do a base backup to set up a slave.
> > See attached ws_base_backup.sh
>
> This script is a good example of why trying to take a PG backup using
> shell scripts isn't a good idea.  Offhand, there's issues like:
>
> - No check that start_backup was successful
> - No check that stop_backup was successful
> - No syncing of files to disk anywhere
> - Requires running as root (without any particular clear reason why)
> - Doesn't check if the database is already in 'exclusive backup' mode
> - Doesn't check the return codes for the main 'tar' command
> - Uses pipes without checking return codes through PIPESTATUS
> - Doesn't capture the output from pg_start/stop_backup
> - Doesn't verify that all of the WAL required for the backup was
>   archvied
> - Doesn't check the exit code of the rsync
>
> I'm sure there's other issues also and I do hope it's working enough
> that you have viable backups, but I wouldn't use such a script today
> (though I wrote plenty like it in the distant past).
>
> Thanks!
>
> Stephen
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pg_start/stop_backup non-exclusive scripts to snapshot

2017-07-04 Thread Melvin Davidson
On Tue, Jul 4, 2017 at 5:55 PM, Stephen Frost <sfr...@snowman.net> wrote:

> Greetings,
>
> * hvjunk (hvj...@gmail.com) wrote:
> > I’ve previously done ZFS snapshot backups like this:
> >
> > psql -c “select pg_start_backup(‘snapshot’);”
> > zfs snapshot TANK/postgresql@`date ‘+%Ymd’`
> > psql -c “select * from  pg_stop_backup();”
>
> Hopefully you are also doing WAL archiving...
>
> > Reading the PostgreSQL9.6 documentation, the advice/future is to use the
> non-exclusive method, where I’ll need to keep a session *open* while the
> snapshot takes place, and after that I’ll have to issue the
> pg_stop_backup(false); in that active connection that issued the
> pg_start_backup(‘backup’,false,false);
>
> Right.
>
> > How is this done inside a shell script?
>
> Generally, it's not.  I suppose it might be possible to use '\!' with
> psql and then have a shell snippet that looks for some file that's
> touched when the snapshot has finished, but really, trying to perform a
> PG backup using hacked together shell scripts isn't recommended and
> tends to have problems.
>
> In particular WAL archiving- there's no simple way for a shell script
> which is being used for archiving to confirm that the WAL it has
> "archived" has been completely written out to disk (which is required
> for proper archiving).  Further, simple shell scripts also don't check
> that all of the WAL has been archived and that there aren't any holes in
> the WAL between the starting point of the backup and the end point.
>
> > Especially how to do error checking from the commands as psql -c “select
> pg_start_backup{‘test’,false,false);” not going to work?
>
> I'd recommend considering one of the existing PG backup tools which know
> how to properly perform WAL archiving and tracking the start/stop points
> in the WAL of the backup.  Trying to write your own using shell scripts,
> even with ZFS snapshots, isn't trivial.  If you trust the ZFS snapshot
> to be perfectly atomic across all filesystems/tablespaces used for PG,
> you could just take a snapshot and forget the rest- PG will do crash
> recovery when you have to restore from that snapshot but that's not much
> different from having to do WAL replay of the WAL generated during the
> backup.
>
> As for existing solutions, my preference/bias is for pgBackRest, but
> there are other options out there which also work, such as barman.
>
> Thanks!
>
> Stephen
>

Here is a model shell script I use to do a base backup to set up a slave.
See attached ws_base_backup.sh

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


ws_base_backup.sh
Description: Bourne shell script

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] duplicate key value violates unique constraint and duplicated records

2017-07-01 Thread Melvin Davidson
On Sat, Jul 1, 2017 at 10:05 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 06/30/2017 09:42 PM, Steven Chang wrote:
>
>> Uh...we also met duplicate rows with primary key column through
>>  restoring database by pg_basebackup.
>> H.
>> I don't think its an issue with primary key index corruption.
>>
>
> That is interesting, more information would be helpful though:
>
> Postgres version?
>
> OS and version?
>
> The pg_basebackup command line invocation?
>
> Why you don't think it is index corruption?
>
>
>
>>
>>
>> 2017-07-01 7:30 GMT+08:00 Adrian Klaver <adrian.kla...@aklaver.com
>> <mailto:adrian.kla...@aklaver.com>>:
>>
>> On 06/30/2017 07:33 AM, Timokhin Maxim wrote:
>>
>> Sure, here it is.
>>
>> pg_basebackup -h servername -R -P -D /data/upgrade/94 -U pgsql
>> -v —xlog-method=stream —checkpoint=fast
>>
>> /usr/pgsql-9.5/bin/initdb -D /data/upgrade/95/ —encoding=utf8
>> —locale=ru_RU.utf8 —lc-collate=ru_RU.utf8 —lc-ctype=ru_RU.utf8
>> —lc-messages=en_US.utf8
>>
>> Then updating:
>> /usr/pgsql-9.5/bin/pg_upgrade -b /usr/pgsql-9.4/bin/ -d
>> /data/upgrade/94 -B /usr/pgsql-9.5/bin/ -D /data/upgrade/95 -k
>>
>> and so on to 9.6
>>
>>
>> The original 9.4 database has the same encoding setup?
>>
>> FYI, you can use pg_upgrade to go straight from 9.4 to 9.6.
>>
>> https://www.postgresql.org/docs/9.6/static/pgupgrade.html
>> <https://www.postgresql.org/docs/9.6/static/pgupgrade.html>
>>
>> "pg_upgrade supports upgrades from 8.4.X and later to the current
>> major release of PostgreSQL, including snapshot and alpha releases."
>>
>>
>>
>> after that server starts normally.
>>
>>
>> -- Timokhin 'maf' Maxim
>>
>>
>>
>>
>>
>> -- Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


>I don't think its an issue with primary key index corruption.

Well, have you verified that? Try running the following query and make sure
the status column shows "valid" for ALL indexes.

SELECT n.nspname as schema,
       i.relname as table,
       i.indexrelname as index,
       i.idx_scan,
       i.idx_tup_read,
       i.idx_tup_fetch,
       CASE WHEN idx.indisprimary THEN 'pkey'
            WHEN idx.indisunique  THEN 'uidx'
            ELSE 'idx'
       END AS type,
       idx.indisexclusion,
       pg_get_indexdef(idx.indexrelid),
       CASE WHEN idx.indisvalid THEN 'valid'
            ELSE 'INVALID'
       END as status,
       pg_relation_size(quote_ident(n.nspname) || '.' ||
quote_ident(i.relname)) as size_in_bytes,
       pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' ||
quote_ident(i.relname))) as size
  FROM pg_stat_all_indexes i
  JOIN pg_class c ON (c.oid = i.relid)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_index idx ON (idx.indexrelid = i.indexrelid)
 WHERE i.relname LIKE '%%'
   AND n.nspname NOT LIKE 'pg_%'
 ORDER BY 1, 2, 3;
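If any row comes back INVALID, the index can be rebuilt; for example (the
names here are hypothetical):

REINDEX INDEX public.items_pkey;
-- or rebuild every index on the table at once:
REINDEX TABLE public.items;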
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres: dbname dbuser 9.9.9.9[2222] PARSE waiting

2017-06-30 Thread Melvin Davidson
On Fri, Jun 30, 2017 at 11:36 AM, DrakoRod <drakofla...@hotmail.com> wrote:

> > Do you control the app?
>
> Nop Just I know how it's developed.
>
> > The app has a pooling component and you still are having problems, have
> > you looked at what the pooler is actually doing?
>
> As far as I know, the wildfly's jdbc pool. No really I don't know what are
> doing. I suspect that problem is that in DAO's not are closing the sessions
> or not beginning transactions properly. I going to ask them send me the
> logfile or I'll could verify the pool behavior.
>
> > Not sure what the above means. Are you saying the application you refer
> > to above has a history of not correctly closing connections or are you
> > talking in general terms about applications interacting with databases.
>
> Sorry, it's not like that, just was a comment, The problem I have is with a
> specific application.
>
> > I've attached two files that may be helpful to you.
>
> Melvin , Thanks for the scripts! I owe one!
>
> I have another question, I've was reading about the lock_timeout, Somehow
> this parameter will help or could affect all the behaviour?
>
> Thanks!
>
>
>
>
>
>
> -
> Dame un poco de fe, eso me bastará.
> Rozvo Ware Solutions
> --
> View this message in context: http://www.postgresql-archive.
> org/postgres-dbname-dbuser-9-9-9-9--PARSE-waiting-
> tp5968923p5969552.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>I've was reading about the lock_timeout, Somehow this parameter will help
or could affect all the behaviour?
This affects all behavior and will abort statements that wait too long to
get locks.

https://www.postgresql.org/docs/9.4/static/runtime-config-client.html

lock_timeout (integer)

Abort any statement that waits longer than the specified number of
milliseconds while attempting to acquire a lock on a table, index, row, or
other database object. The time limit applies separately to each lock
acquisition attempt. The limit applies both to explicit locking requests
(such as LOCK TABLE, or SELECT FOR UPDATE without NOWAIT) and to
implicitly-acquired locks. If log_min_error_statement is set to ERROR or
lower, the statement that timed out will be logged. A value of zero (the
default) turns this off.

Unlike statement_timeout, this timeout can only occur while waiting for
locks. Note that if statement_timeout is nonzero, it is rather pointless to
set lock_timeout to the same or larger value, since the statement timeout
would always trigger first.

*Setting lock_timeout in postgresql.conf is not recommended because it
would affect all sessions.*

*You are probably much better off using the tcp_keepalives... parameters,
provided your system supports them. I don't remember if you ever gave us
the O/S.*

*Have a look at them. You might possibly be able to use them to force a
disconnect after a set amount of inactivity time.*

https://www.postgresql.org/docs/current/static/runtime-config-connection.html
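A minimal sketch of what those might look like in postgresql.conf (the
values are hypothetical; 0 means "use the operating system's default", and
the settings are ignored on platforms without per-socket keepalive support):

tcp_keepalives_idle = 300       # seconds of inactivity before the first probe
tcp_keepalives_interval = 30    # seconds between probes
tcp_keepalives_count = 3        # unanswered probes before the connection is dropped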
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] duplicate key value violates unique constraint and duplicated records

2017-06-30 Thread Melvin Davidson
On Fri, Jun 30, 2017 at 9:07 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 06/30/2017 04:58 AM, Timokhin Maxim wrote:
>
>> BTW, we are moving using:
>>
>> pg_basebackup -h servername -R -P -D /data/upgrade/94 -U pgsql -v
>> —xlog-method=stream —checkpoint=fast
>>
>
> Going from 9.4 to 9.6 is a major version upgrade and you cannot use
> pg_basebackup for that. Besides I can't see how you even got the 9.6.3
> server to start:
>
> /usr/local/pgsql94/bin/pg_basebackup -D /home/aklaver/pgback_test94 -U
> postgres -p 5412 -v --xlog-method=stream
>
> /usr/local/pgsql96/bin/pg_ctl -D /home/aklaver/pgback_test94/ start
> server starting
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 9.4,
> which is not compatible with this version 9.6.3
>
>
>> After that we are upping version to 9.6.3.
>>
>
> Given the above how did you actually get 9.6.3 to start?
>
>
> I've looked through the documentation https://postgrespro.ru/docs/po
>> stgrespro/9.6/app-pgbasebackup and didn't find details about how
>> pg_basebackup works with b-tree indexes.
>> Is it possible that pg_basebackup just copies indexes as is and that is
>> cause of corruption. Or it pass indexes as instruction that says "after
>> upping db make indexes" ?
>>
>> Thank you.
>>
>>
>> --
>> Timokhin 'maf' Maxim
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


*Since you are doing a major version upgrade, the correct way to do that,
depending on the size of your DB, is:*

*A. pg_dumpall on the old version and restore with psql on the new version*

*OR*

*B. pg_upgrade*

*https://www.postgresql.org/docs/9.6/static/upgrading.html*
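A rough sketch of both options (the ports, paths, and binary locations are
hypothetical; -k uses hard links, so keep a backup of the old cluster):

# Option A: dump and restore
pg_dumpall -p 5432 > cluster.sql            # run against the old 9.4 cluster
psql -p 5433 -d postgres -f cluster.sql     # restore into a freshly initdb'd 9.6 cluster

# Option B: in-place upgrade
pg_upgrade -b /usr/pgsql-9.4/bin -B /usr/pgsql-9.6/bin \
           -d /data/94 -D /data/96 -k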
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres: dbname dbuser 9.9.9.9[2222] PARSE waiting

2017-06-29 Thread Melvin Davidson
On Thu, Jun 29, 2017 at 7:30 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 06/29/2017 10:03 AM, DrakoRod wrote:
>
>> To expand information, the application are written in Grails on wildfly
>> with
>> pool connections.
>>
>
> Do you control the app?
>
> The app has a pooling component and you still are having problems, have
> you looked at what the pooler is actually doing?
>
>
>> I didn't have time to check pg_locks with detail, I'll configure the
>> connections logs to monitoring those.
>>
>> I can't close connections on the application side. How I close connections
>> on the database side? With pg_terminate_backend, pg_cancel_backend or
>> exists
>> other function? I didn't want terminate backends because all connections
>> state was active.
>>
>
> https://www.postgresql.org/docs/9.4/static/functions-admin.
> html#FUNCTIONS-ADMIN-SIGNAL
>
> "pg_cancel_backend(pid int) boolean Cancel a backend's current
> query. You can execute this against another backend that has exactly the
> same role as the user calling the function. In all other cases, you must be
> a superuser.
>
> ...The process ID of an active backend can be found from the pid column of
> the pg_stat_activity view, ..."
>
>
>> I refer only to "idle" because almost in every database that I've saw the
>> application doesn't close correctly the connections. If are "idle in
>> transaction" is not normal.
>>
>
> Not sure what the above means. Are you saying the application you refer to
> above has a history of not correctly closing connections or are you talking
> in general terms about applications interacting with databases.
>
>
>> Your right Adrian, I need to know why the connections are not closing
>> properly.
>>
>> I can't apply idle_in_transation_session_timeout because the version of
>> PostgreSQL is 9.4.4 and the paramater not yet i'ts included. But sounds
>> good
>> the upgrade.
>>
>> Thanks for your help!
>>
>>
>>
>>
>> -
>> Dame un poco de fe, eso me bastará.
>> Rozvo Ware Solutions
>> --
>> View this message in context: http://www.postgresql-archive.
>> org/postgres-dbname-dbuser-9-9-9-9--PARSE-waiting-tp5968
>> 923p5969262.html
>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>>
>>
>>
>
>
>
> *DrakoRod,*
>
> *If you are concerned that the application is leaving connections 'idle in
> transaction', I've attached two files that may be helpful to you. *
>
> *The first is a SQL query -> current_queries.sql  'idle in transaction'
> will appear in the "state" column. *
>
>
> *CAVEAT EMPTOR! In PG 9.6 the "CASE WHEN waiting ..." needs to be
> commented out.*
> *The second is a Linux script "kill_long_idles.sh" that will kill any
> connection that is *
> *'idle in transaction' for longer than max_time. Currently it is set to 30
> minutes, but you can adjust to*
>
> * your desire. Just run it from a root cron job.*
>
> *Don't forget to   chmod +x  **kill_long_idles.sh*
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
SELECT backend_start as be_start,
   datname,
   pid as pid,
   client_addr,
   usename as user,
   state,
   query,
   CASE WHEN waiting = TRUE
THEN 'BLOCKED'
ELSE 'no'
END as waiting,
   query_start,
   current_timestamp - query_start as duration 
  FROM pg_stat_activity
 WHERE pg_backend_pid() <> pid
ORDER BY 1, 
 datname,
 query_start;



kill_long_idles.sh
Description: Bourne shell script

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] duplicate key value violates unique constraint and duplicated records

2017-06-29 Thread Melvin Davidson
t; CONSTRAINT "quotations_fkey3" FOREIGN KEY (item_id)
> REFERENCES items(id) ON UPDATE CASCADE ON DELETE CASCADE
> TABLE "summaries" CONSTRAINT "summaries_fkey2" FOREIGN KEY (item_id)
> REFERENCES items(id) ON UPDATE CASCADE ON DELETE CASCADE
>
>
> Everything would be not bad if in the table weren't appeared duplicated
> records in url column.
> Any idea how is it possible?
>
> Thank you!
>
> --
> Timokhin 'maf' Maxim
>
>













*It's possible you have index corruption on the 9.4.8 version that was not
detected. Try the following query on 9.4.8 to see if any rows are selected.
Then you can decide how to fix it from there.*

SELECT a.id, a.url,
       b.id, b.url
  FROM items a,
       items b
 WHERE a.id <> b.id
   AND a.url = b.url
 ORDER BY a.id;
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres: dbname dbuser 9.9.9.9[2222] PARSE waiting

2017-06-27 Thread Melvin Davidson
On Tue, Jun 27, 2017 at 6:32 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 06/27/2017 01:10 PM, DrakoRod wrote:
>
>> Hi folks.
>>
>> Today I had a problem with production's database PostgreSQL version
>> 9.4.4.9.
>> The server have max_connections set to 200, but today I reviewed
>> pg_stat_activity and saw 199 active connections, obviously the server
>> rejected any new connection and the production stopped.
>>
>> I saw another posts with a similar problems, but this was because the
>> pg_xlog was full or disk does'nt write, but the directory and disk  had no
>> problems.
>>
>> I just canceled some SELECTs querys and the server returned to normality.
>> Now a monitoring activity of server and I can see some backends like this:
>>
>> postgres  9737 23340  2 14:55 ?00:00:15 postgres: dbname user
>> 8.8.8.8[37082] idle in transaction
>> postgres  9741 23340  9 14:55 ?00:00:47 postgres: dbname user
>> 8.8.8.8[54286] idle in transaction
>>
>> Any suggestions?
>>
>
> https://www.postgresql.org/docs/9.4/static/monitoring-stats.
> html#PG-STAT-ACTIVITY-VIEW
>
> SELECT * FROM pg_stat_activity WHERE state = 'idle in transaction';
>
> To get more detail on what is holding these transactions open.
>
>
>>
>>
>> -
>> Dame un poco de fe, eso me bastará.
>> Rozvo Ware Solutions
>> --
>> View this message in context: http://www.postgresql-archive.
>> org/postgres-dbname-dbuser-9-9-9-9--PARSE-waiting-tp5968923.html
>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

*His problem is NOT 'idle in transaction' per se. It is that all
connections are used up.*
*Hence the need for PgBouncer for connection pooling.*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres: dbname dbuser 9.9.9.9[2222] PARSE waiting

2017-06-27 Thread Melvin Davidson
On Tue, Jun 27, 2017 at 4:10 PM, DrakoRod <drakofla...@hotmail.com> wrote:

> Hi folks.
>
> Today I had a problem with production's database PostgreSQL version
> 9.4.4.9.
> The server have max_connections set to 200, but today I reviewed
> pg_stat_activity and saw 199 active connections, obviously the server
> rejected any new connection and the production stopped.
>
> I saw another posts with a similar problems, but this was because the
> pg_xlog was full or disk does'nt write, but the directory and disk  had no
> problems.
>
> I just canceled some SELECTs querys and the server returned to normality.
> Now a monitoring activity of server and I can see some backends like this:
>
> postgres  9737 23340  2 14:55 ?00:00:15 postgres: dbname user
> 8.8.8.8[37082] idle in transaction
> postgres  9741 23340  9 14:55 ?00:00:47 postgres: dbname user
> 8.8.8.8[54286] idle in transaction
>
> Any suggestions?
>
>
>
> -
> Dame un poco de fe, eso me bastará.
> Rozvo Ware Solutions
> --
> View this message in context: http://www.postgresql-archive.
> org/postgres-dbname-dbuser-9-9-9-9--PARSE-waiting-tp5968923.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*Are you asking how to track down the user using up all the connections?
With the information you provided, that cannot be done.*

*If you are asking how to prevent problems in the future, then install
PgBouncer and use that to pool connections.*

https://pgbouncer.github.io/
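A minimal pgbouncer.ini sketch (the database name, port, and pool sizes are
hypothetical):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20

The application then connects to port 6432 instead of 5432, and PgBouncer
multiplexes those clients over at most default_pool_size server connections
per database/user pair.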
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] 10beta1 role

2017-06-22 Thread Melvin Davidson
On Thu, Jun 22, 2017 at 4:07 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 06/22/2017 12:06 PM, Ray Stell wrote:
>
>> I used "initdb -U" to specify an alternate superuser.  On startup it
>>
>
> Can you show your complete initdb command?
>
> throws these msgs:
>>
>> 2017-06-22 14:36:34 EDT,0,startup FATAL:  28000: role "postgresql" does
>> not exist
>>
>> 2017-06-22 14:36:34 EDT,0,startup LOCATION: InitializeSessionUserId,
>> miscinit.c:503
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

It would be useful to know the O/S and PostgreSQL version.
That way we can direct you to where to find that startup command so you can
modify it.
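For reference, a sketch of the sequence involved (the paths and role name
are hypothetical). One possible cause of the FATAL is that something, e.g.
pg_ctl's -w readiness probe or a monitoring script, is connecting with the
default role, i.e. the OS user name, which does not exist in a cluster
initialized with an alternate superuser:

initdb -D /data/pg10 -U admin      # the only superuser will be "admin"
pg_ctl -D /data/pg10 -w start
psql -U admin -d postgres          # works
psql -d postgres                   # fails if the OS user has no matching role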

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Schedule

2017-06-20 Thread Melvin Davidson
On Tue, Jun 20, 2017 at 11:12 AM, Steve Clark <steve.cl...@netwolves.com>
wrote:

> On 06/20/2017 10:38 AM, Adrian Klaver wrote:
> > On 06/20/2017 07:00 AM, Steve Clark wrote:
> >> On 06/20/2017 09:02 AM, Adrian Klaver wrote:
> >>> On 06/20/2017 05:35 AM, Steve Clark wrote:
> >>>> Hello,
> >>>>
> >>>> We have customers whose equipment we monitor. Some of the customers
> don't run a 24/7 operation
> >>>> and turn their equipment off when the go home. We need to create a
> schedule for them of when we
> >>>> can ignore alerts from their equipment. We use postgresql in our
> monitoring environment to maintain
> >>>> alerts and equipment to be monitored. Each piece of equipment has a
> unique unit serial number so
> >>>> the schedule would be tied to this unit serial no.
> >>>>
> >>>> I would be very interested in what might be the best was to organize
> a scheduling table(s) in postgresql.
> >>> Some questions:
> >>>
> >>> 1) What happens if someone ignores the schedule and the alert is real?
> >> That is up in the air for now, probably if our NOC wasn't informed by
> the customer they
> >> were working outside of the schedule the alert would be ignored, but
> then the customer
> >> would probably call us because something wasn't working.
> > It might be just me, but looks like a finger pointing exercise in the
> > making. The classic '(Person 1)I thought you had it. (Person 2)No, I
> > thought you had it'. The whole idea of ignoring an alert makes me
> > nervous anyway. It seems that it should be possible to have the
> > equipment produce an manual off state and the monitoring to acknowledge
> > that. That being said, see more below.
> >
> >>> 2) What are the alerts and how many are there?
> >> Device not pingable, as an example. The alerts continue to be sent to
> our
> >> monitoring system, typically at 2 minute intervals, the monitoring
> system would look at the schedule for that
> >> unit a decide whether or not to ignore the alert.
> >>> 3) How is planned downtime during scheduled work times handled?
> >> They would get a call from our NOC if the unit was down during
> scheduled uptimes.
> > Could they not schedule a downtime?
> Yes that would certainly be an option.
> >
> >>> 4) Do you want to organize by customers or equipment or both?
> >> We have one piece of equipment at each customer that monitors one to
> many devices at the customer.
> > So when you where talking about unique serial numbers where you talking
> > about the monitoring equipment only or does that include the monitored
> > equipment?
> >
> >>> 5) What is the equipment and do you or the customer provide it?
> >> We provide the monitoring equipment, we or the customer could provide
> the equipment being monitored.
> > My first draft of an idea(I'm guessing some of this exists already):
> >
> > 1) Location/customer table. Not sure if a customer can have more then
> > one location.
> >
> > 2) Table of alerts and what they mean.
> >
> > 3) Schedule table keyed to location.
> > To make life a good simpler I would use range types for the schedule:
> > https://www.postgresql.org/docs/9.6/static/rangetypes.html
> >
> > Then you could use the range type operators and functions:
> >
> > https://www.postgresql.org/docs/9.6/static/functions-
> range.html#RANGE-OPERATORS-TABLE
> >
> > to verify whether an alert occurs in or out of the schedule.
> >
> > What I have not taken into account is whether a location has multiple
> > schedules e.g. weekday vs weekend. Then there is the holidays issue. Is
> > this something that needs to be dealt with?
> >
> > 4) Equipment table keyed to location.
> We already have a monitoring system in place that has been in operation
> circa 2003. Just recently we have
> added a new class of customer whose operation is not 24/7.
>
> I envision the schedule could be fairly complicated
> including WE and holidays, plus the enduser might shut down for lunch etc.
> I am looking for more on how to organize the
> schedule, EG a standard weekly schedule then exceptions for holidays etc,
> or a separate individual schedule for
> each week, also need to consider how easy it is to maintain the schedule,
> etc.
>
> Thanks,
> Steve
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgre

Re: [GENERAL] Remote connection to PostgreSQL

2017-06-19 Thread Melvin Davidson
On Mon, Jun 19, 2017 at 12:21 PM, Igor Korot <ikoro...@gmail.com> wrote:

> Thx, David.
>
> On Mon, Jun 19, 2017 at 12:09 PM, David G. Johnston
> <david.g.johns...@gmail.com> wrote:
> > On Mon, Jun 19, 2017 at 9:02 AM, Igor Korot <ikoro...@gmail.com> wrote:
> >>
> >>  Hi, ALL,
> >> Is there some magic in order to turn on remote connection to PostgreSQL?
> >>
> >> There are some extra steps to turn it on for MS SQL and MySQL, so I
> >> figured
> >> it should be the same for Postgre.
> >
> >
> > See "listen_addresses"
> >
> > https://www.postgresql.org/docs/9.6/static/runtime-
> config-connection.html
> >
> > It forward links you to setting up client authentication for the same.
> >
> > David J.
> >
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

In addition to enabling  listen_addresses, you also have to add entries in
pg_hba.conf.
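Roughly (the network range is hypothetical; changing listen_addresses
requires a server restart, while pg_hba.conf changes only need a reload):

# postgresql.conf
listen_addresses = '*'                     # or a specific interface address

# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS           METHOD
host    all       all   192.168.1.0/24    md5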

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] effective_io_concurrency increasing

2017-06-18 Thread Melvin Davidson
*As per the docs:*

*1. This is dependent on whether or not you are using a RAID disk.*
*2. "Some experimentation may be needed to find the best value."*

*IOW, there is no general recommendation.*

On Sun, Jun 18, 2017 at 9:24 PM, Patrick B <patrickbake...@gmail.com> wrote:

>
>
> 2017-06-19 13:19 GMT+12:00 Melvin Davidson <melvin6...@gmail.com>:
>
>>
>>
>> On Sun, Jun 18, 2017 at 9:02 PM, Patrick B <patrickbake...@gmail.com>
>> wrote:
>>
>>> Hi guys.
>>>
>>> I just wanna understand the effective_io_concurrency value better.
>>>
>>> My current Master database server has 16 vCPUS and I
>>> use effective_io_concurrency = 0.
>>>
>>> What can be the benefits of increasing that number? Also, do you guys
>>> have any recommendations?
>>>
>>> I'm using PG 9.2 and the official doc does not say much about which
>>> value you should use.
>>>
>>> If I put it to 1, does it mean I can have a query spread into 1
>>> processor?
>>>
>>> Thanks
>>> P
>>>
>>
>>
>>
>> *Perhaps you should read the doc:
>> https://www.postgresql.org/docs/9.2/static/runtime-config-resource.html*
>> 18.4.6. Asynchronous Behavior *effective_io_concurrency (integer)*
>>
>
>
> I've done that! But I'm looking for some personal experiences and
> suggestions!!
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] effective_io_concurrency increasing

2017-06-18 Thread Melvin Davidson
On Sun, Jun 18, 2017 at 9:02 PM, Patrick B <patrickbake...@gmail.com> wrote:

> Hi guys.
>
> I just wanna understand the effective_io_concurrency value better.
>
> My current Master database server has 16 vCPUS and I
> use effective_io_concurrency = 0.
>
> What can be the benefits of increasing that number? Also, do you guys have
> any recommendations?
>
> I'm using PG 9.2 and the official doc does not say much about which value
> you should use.
>
> If I put it to 1, does it mean I can have a query spread into 1 processor?
>
> Thanks
> P
>



*Perhaps you should read the doc:
https://www.postgresql.org/docs/9.2/static/runtime-config-resource.html*
18.4.6. Asynchronous Behavior: *effective_io_concurrency (integer)*
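Two things worth keeping in mind when experimenting (hedged, since
workloads vary): in 9.2 this setting only drives read-ahead for bitmap heap
scans, via posix_fadvise, and it does not spread a query across CPUs. It
can also be tried per session, which makes experimenting cheap (the value
below is hypothetical; the docs suggest roughly the number of drives in a
stripe):

SHOW effective_io_concurrency;
SET effective_io_concurrency = 2;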
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] ERROR: type " " does not exist

2017-06-12 Thread Melvin Davidson
On Mon, Jun 12, 2017 at 4:28 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Jun 12, 2017 at 1:03 PM, Alexander Farber <
> alexander.far...@gmail.com> wrote:
>
>> ERROR:  type " " does not exist at character 149
>>
>
> ​Nothing shown would explain what you are seeing.  Of course, white-space
> issues are really hard to communicate via email.
>
> As a brute-force suggestion I'd say you want to re-write the function one
> statement at a time to confirm that, one, you are indeed calling the
> correct function body, and, two figure out the exact statement that is
> problematic.  I'd do this via copy-paste until the problem statement
> appears then I'd re-type in the problem statement by hand.
>
> There are probably more efficient ways to do all this but given a lack of
> experience and a relatively small function I'd spend more effort trying to
> figure out a better way than just doing it brute force.
>
> David J.
>
>

*You did not state the O/S you are using, but if it's LINUX, just use vi
and do a "set list".*

*That should reveal any strange characters that might be the cause.*
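For example (the file name is hypothetical; in vi, :set list displays tabs
as ^I and line ends as $, which makes stray whitespace visible):

vi my_function.sql
:set list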

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Does NUMERIC lose precision?

2017-05-29 Thread Melvin Davidson
On Mon, May 29, 2017 at 4:19 PM, Kaiting Chen <ktche...@gmail.com> wrote:

> Hi everyone. I’m trying to perform some exact precision arithmetic with
> PostgreSQL’s NUMERIC type. However I can’t seem to get the unparameterized
> NUMERIC type to perform exact precision arithmetic:
>
> # SELECT 2::NUMERIC ^ 64;
>?column?
> ---
>  18446744073709551616.
> (1 row)
>
> While the above operation works fine once I divide 1 by that number the
> result is an inexact decimal number:
>
> # SELECT 1 / (2::NUMERIC ^ 64);
> ?column?
> 
>  0.00054210108624275222
> (1 row)
>
> It doesn't seem to be an issue with the output either as taking the
> reciprocal yields a different number than I started with:
>
> # SELECT 1 / (1 / (2::NUMERIC ^ 64));
>  ?column?
> ---
>  18446744073709551514.042092759729171265910020841463748922
> (1 row)
>
> The only way to get an exact result is by specifying an explicit precision
> and scale:
>
> # SELECT 1 / (2::NUMERIC(96, 64) ^ 64);
>   ?column?
> 
>  0.000542101086242752217003726400434970855712890625
> (1 row)
>
> # SELECT 1 / (1 / (2::NUMERIC(96, 64) ^ 64));
>?column?
> 
> ---
>  18446744073709551616.00
> 00
> (1 row)
>
> However this does not seem intuitive from the documentation which states
> that:
>
> Specifying:
>
> NUMERIC
>
> without any precision or scale creates a column in which numeric values of
> any precision and scale can be stored, up to the implementation limit on
> precision. A column of this kind will not coerce input values to any
> particular scale...
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>While the above operation works fine once I divide 1 by that number the
result is an inexact decimal number:
># SELECT 1 / (2::NUMERIC ^ 64);
>        ?column?
>
>0.00054210108624275222
>(1 row)


*That is the same answer you get when you use any calculator.*

*Are you sure you did not mean:*

SELECT 2::NUMERIC ^ 64 / 1;

       ?column?
 18446744073709551616.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Memorial Day

2017-05-29 Thread Melvin Davidson
*On this Memorial Day, let us not forget the many that gave their lives so
that we can enjoy our Freedom today. I give thanks to all my fellow
veterans, and those still on active duty. May future generations still
remember us and grow to never know the horrors of war.*

*Da Nang 1969->'70*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Help with terminology to describe what my software does please?

2017-05-28 Thread Melvin Davidson
On Sun, May 28, 2017 at 9:51 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/28/2017 05:49 AM, Neil Anderson wrote:
>
>> Hi,
>>
>> I'm working on a tool that can compare the properties of Postgres
>> objects from different instances, finding the differences and
>> outputting the update SQL.
>>
>> It can compare objects that are defined at the cluster, database or
>> schema level. As such I'm finding it difficult to describe what the
>> tool does simply and accurately. I've tried 'compares PostgreSQL
>> schemas' but that doesn't capture the database and cluster parts,
>> 'compares PostgreSQL schema and database objects'. That sort of thing.
>> Right now I have a mix of terms on my website and I would prefer to
>> tighten it up.
>>
>> I guess I don't know what is the most common way to say that it
>> compares everything but the data. Any suggestions from your
>> experience?
>>
>
> From above the first sentence of the second paragraph seems to me the best
> description of what you are doing.
>
>
>> Thanks,
>> Neil
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Cluster comparison would only occur if you have two or more clusters on the
same server, although it's possible to compare across servers,
but that would involve a lot more work. AFAIK, the only differences for a
cluster would be:
1. PostgreSQL version
2. path to database
3. database users (note: it is also possible to make users database
specific)
4. list of defined databases

Database comparison would involve db names, owners, encodings, tablespaces
and acl's
You might also want to include sizes. You can use the following two queries
to help
with that

SELECT db.datname,
   au.rolname as datdba,
   pg_encoding_to_char(db.encoding) as encoding,
   db.datallowconn,
   db.datconnlimit,
   db.datfrozenxid,
   tb.spcname as tblspc,
   db.datacl
  FROM pg_database db
  JOIN pg_authid au ON au.oid = db.datdba
  JOIN pg_tablespace tb ON tb.oid = db.dattablespace
 ORDER BY 1;

SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size_pretty,
       pg_database_size(datname) AS size,
       (SELECT pg_size_pretty(SUM(pg_database_size(datname))::bigint)
          FROM pg_database) AS total,
       ((pg_database_size(datname) /
         (SELECT SUM(pg_database_size(datname)) FROM pg_database)) *
        100)::numeric(6,3) AS pct
  FROM pg_database
 ORDER BY datname;

 Schema comparison is a lot more complicated, as it involves comparing:
 collations
 domains
 functions
 trigger functions
 sequences
 tables
 types
 views
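A starting point for inventorying some of those objects per schema (a
sketch; relkind only covers tables, sequences, and views here, while
functions, domains, and types live in other catalogs such as pg_proc and
pg_type):

SELECT n.nspname AS schema,
       c.relkind,          -- r = table, S = sequence, v = view
       count(*)
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
 GROUP BY 1, 2
 ORDER BY 1, 2;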

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Call for users to talk about table partitioning

2017-05-21 Thread Melvin Davidson
On Thu, May 18, 2017 at 5:41 PM, Scott Marlowe <scott.marl...@gmail.com>
wrote:

> On Thu, May 18, 2017 at 3:40 PM, Scott Marlowe <scott.marl...@gmail.com>
> wrote:
> > I would say that the best thing to do is to run 9.6 grab pgadmin4 and do
> all
> > the examples in the doc page on partitioning.
> >
> > https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html
> >
> > If that works well then the question becomes are there any esoteric cases
> > where pgadmin4 won't quite get you there?
>
> Or maybe what can we automate from pgadmin4 that you currently need to
> script etc?
>













*FWIW, it's not much of a problem creating partitioned tables. You simply
follow the documentation in
https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html
The problem occurs when you are already in production and have data in the
master table. In that case, you need a function (or script) that reads
records from the old master and inserts them into the appropriate child
tables. Verification of success is needed, and then the old master table
needs to be truncated. This involves a timeout period in production. One
technique which minimizes that, and which I personally have done, is to
create a new master that is empty, and then create all children from that.
When inserts are complete and verified (from the old master), the old master
is renamed and the new master is renamed in place of the old master. That
means the only downtime is during the renaming, which is minimal. In the
event a problem occurs, you can easily reverse the renames, and since the
old master will still contain all original records, the risk is minimal.*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Melvin Davidson
Shirley,
I am a bit confused. AFAIK, it is PostgreSQL that is responsible for table
partitioning. PgAdmin4 is just an administrative tool.
Are you saying that PgAdmin4 can now create partitioned tables automatically?

On Thu, May 18, 2017 at 2:10 PM, Shirley Wang <sw...@pivotal.io> wrote:

> Hello!
>
> We're a team from Pivotal, working with members of the Postgres community
> on table partitioning in pgAdmin4. We're looking to chat with some Postgres
> users on their expectations with table partitioning within pgAdmin4.
>
> If you have some availability next week, we'd love to set up some time to
> learn about your needs with this feature. Let us know some times that work
> for you and we'll send over details for the call.
>
> Thanks!
> Shirley
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] database is not accepting commands

2017-05-16 Thread Melvin Davidson
On Tue, May 16, 2017 at 11:03 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/16/2017 01:28 AM, reem wrote:
>
>> We have 1.5 TB database that's shown an error and block all commands.
>> The error is :
>> "ERROR:  database is not accepting commands to avoid wraparound data loss
>> in
>> database "dbname"
>> HINT:  Stop the postmaster and use a standalone backend to vacuum that
>> database.
>> You might also need to commit or roll back old prepared transactions."
>>
>> I tried to do vacuum in the backend mode. Also I tried to set
>> zero_damaged_pages = on then do the vacuum again but same error appeared.
>> The error appeared after two hours of vacuuming where verbose shows
>> passing
>> tables.
>>
>
> Postgres version?
>
> So was all the above done in the standalone backend?
>
>
>> The cause of error could be :
>> 1- a lot of insertion queries for not existing table
>> or
>> 2- truncating data with 40 GB in a week,
>> we used to do that but this is the first time we had this error.
>> also, autovacuume is turned on.
>>
>> Please , I need help or any suggestion?
>>
>>
>>
>>
>>
>> --
>> View this message in context: http://www.postgresql-archive.
>> org/database-is-not-accepting-commands-tp5961831.html
>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

As Adrian requested:
PostgreSQL Version?
O/S?

Did you?
A. Stop PostgreSQL
B. postgres --single -D <data_directory> <database_name>
C. In the standalone backend, run: VACUUM FULL;
   (vacuumdb -f -d <database_name>, or vacuumdb -f -a for all databases,
   works only against a running server)
D. Restart PostgreSQL normally
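
Once you are back up, this standard catalog query shows which databases are
closest to the wraparound threshold:

-- Age of the oldest unfrozen transaction ID per database; values
-- approaching ~2 billion are the danger zone, and autovacuum normally
-- freezes long before that.
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;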
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL]

2017-05-11 Thread Melvin Davidson
On Thu, May 11, 2017 at 9:24 AM, Igor Korot <ikoro...@gmail.com> wrote:

> Melvin et al,
>
> On Thu, May 11, 2017 at 8:50 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>> On Wed, May 10, 2017 at 11:26 PM, Igor Korot <ikoro...@gmail.com> wrote:
>>
>>> Hi, John et al,
>>>
>>> On Wed, May 10, 2017 at 11:02 PM, John R Pierce <pie...@hogranch.com>
>>> wrote:
>>> > On 5/10/2017 7:45 PM, Igor Korot wrote:
>>> >>
>>> >> I found
>>> >> this:https://wiki.postgresql.org/wiki/Retrieve_primary_key_columns,
>>> >> but now I need
>>> >> to connect this with information_schema.columns.
>>> >>
>>> >> What is best way to do it?
>>> >>
>>> >> Or maybe that query I referenced is completely wrong?
>>> >
>>> >
>>> >
>>> > if you're using pg_catalog stuff there's little point in using the
>>> > information_schema views, which exist for compatability with the SQL
>>> > standard.
>>> >
>>> > information_schema.columns is a view, like...
>>>
>>> Like I said, what I expect to see from the query is:
>>>
>>> id | integer | | 5| 2 | 0 | P |
>>> name | varchar | 50| 2 | | | | 
>>>
>>> So I need the information about the field and whether the field is a
>>> primary/foreign key or not.
>>>
>>> And this is according to the schema.table.
>>>
>>> Thank you.
>>>
>>> >
>>> > View definition:
>>> >  SELECT current_database()::information_schema.sql_identifier AS
>>> > table_catalog,
>>> > nc.nspname::information_schema.sql_identifier AS table_schema,
>>> > c.relname::information_schema.sql_identifier AS table_name,
>>> > a.attname::information_schema.sql_identifier AS column_name,
>>> > a.attnum::information_schema.cardinal_number AS ordinal_position,
>>> > pg_get_expr(ad.adbin, ad.adrelid)::information_schema.character_data
>>> AS
>>> > column_default,
>>> > CASE
>>> > WHEN a.attnotnull OR t.typtype = 'd'::"char" AND
>>> t.typnotnull
>>> > THEN 'NO'::text
>>> > ELSE 'YES'::text
>>> > END::information_schema.yes_or_no AS is_nullable,
>>> > CASE
>>> > WHEN t.typtype = 'd'::"char" THEN
>>> > CASE
>>> > WHEN bt.typelem <> 0::oid AND bt.typlen = (-1) THEN
>>> > 'ARRAY'::text
>>> > WHEN nbt.nspname = 'pg_catalog'::name THEN
>>> > format_type(t.typbasetype, NULL::integer)
>>> > ELSE 'USER-DEFINED'::text
>>> > END
>>> > ELSE
>>> > CASE
>>> > WHEN t.typelem <> 0::oid AND t.typlen = (-1) THEN
>>> > 'ARRAY'::text
>>> > WHEN nt.nspname = 'pg_catalog'::name THEN
>>> > format_type(a.atttypid, NULL::integer)
>>> > ELSE 'USER-DEFINED'::text
>>> > END
>>> > END::information_schema.character_data AS data_type,
>>> > information_schema._pg_char_max_length(information_schema._p
>>> g_truetypid(a.*,
>>> > t.*), information_schema._pg_truetypmod(a.*,
>>> > t.*))::information_schema.cardinal_numb
>>> > er AS character_maximum_length,
>>> > information_schema._pg_char_octet_length(information_schema.
>>> _pg_truetypid(a.*,
>>> > t.*), information_schema._pg_truetypmod(a.*,
>>> > t.*))::information_schema.cardinal_nu
>>> > mber AS character_octet_length,
>>> > information_schema._pg_numeric_precision(information_schema.
>>> _pg_truetypid(a.*,
>>> > t.*), information_schema._pg_truetypmod(a.*,
>>> > t.*))::information_schema.cardinal_nu
>>> > mber AS numeric_precision,
>>> > information_schema._pg_numeric_precision_radix(information_s
>>> chema._pg_truetypid(a.*,
>>> > t.*), information_schema._pg_truetypmod(a.*,
>>> t.*))::information_schema.cardi
>>> > nal_number AS numeric_precision_radix,
>>> > information_schema._pg_numeric_scale(information_schema._pg_
>>> truetypid(a.*,
>>> > t.*), information_schema._pg_truetypmod(a.*,
>>> > t.*))::informa

Re: [GENERAL]

2017-05-11 Thread Melvin Davidson
IS NOT NULL THEN current_database()
> > ELSE NULL::name
> > END::information_schema.sql_identifier AS collation_catalog,
> > nco.nspname::information_schema.sql_identifier AS collation_schema,
> > co.collname::information_schema.sql_identifier AS collation_name,
> > CASE
> > WHEN t.typtype = 'd'::"char" THEN current_database()
> > ELSE NULL::name
> > END::information_schema.sql_identifier AS domain_catalog,
> > CASE
> > WHEN t.typtype = 'd'::"char" THEN nt.nspname
> > ELSE NULL::name
> > END::information_schema.sql_identifier AS domain_schema,
> > CASE
> > WHEN t.typtype = 'd'::"char" THEN t.typname
> > ELSE NULL::name
> > END::information_schema.sql_identifier AS domain_name,
> > current_database()::information_schema.sql_identifier AS
> udt_catalog,
> > COALESCE(nbt.nspname, nt.nspname)::information_schema.sql_identifier
> AS
> > udt_schema,
> > COALESCE(bt.typname, t.typname)::information_schema.sql_identifier
> AS
> > udt_name,
> > NULL::character varying::information_schema.sql_identifier AS
> > scope_catalog,
> > NULL::character varying::information_schema.sql_identifier AS
> > scope_schema,
> > NULL::character varying::information_schema.sql_identifier AS
> > scope_name,
> > NULL::integer::information_schema.cardinal_number AS
> > maximum_cardinality,
> > a.attnum::information_schema.sql_identifier AS dtd_identifier,
> > 'NO'::character varying::information_schema.yes_or_no AS
> > is_self_referencing,
> > 'NO'::character varying::information_schema.yes_or_no AS
> is_identity,
> > NULL::character varying::information_schema.character_data AS
> > identity_generation,
> > NULL::character varying::information_schema.character_data AS
> > identity_start,
> > NULL::character varying::information_schema.character_data AS
> > identity_increment,
> > NULL::character varying::information_schema.character_data AS
> > identity_maximum,
> > NULL::character varying::information_schema.character_data AS
> > identity_minimum,
> > NULL::character varying::information_schema.yes_or_no AS
> identity_cycle,
> > 'NEVER'::character varying::information_schema.character_data AS
> > is_generated,
> > NULL::character varying::information_schema.character_data AS
> > generation_expression,
> > CASE
> > WHEN c.relkind = 'r'::"char" OR (c.relkind = ANY
> > (ARRAY['v'::"char", 'f'::"char"])) AND
> > pg_column_is_updatable(c.oid::regclass, a.attnum, false) THEN 'YE
> > S'::text
> > ELSE 'NO'::text
> > END::information_schema.yes_or_no AS is_updatable
> >FROM pg_attribute a
> >  LEFT JOIN pg_attrdef ad ON a.attrelid = ad.adrelid AND a.attnum =
> > ad.adnum
> >  JOIN (pg_class c
> >  JOIN pg_namespace nc ON c.relnamespace = nc.oid) ON a.attrelid =
> c.oid
> >  JOIN (pg_type t
> >  JOIN pg_namespace nt ON t.typnamespace = nt.oid) ON a.atttypid =
> t.oid
> >  LEFT JOIN (pg_type bt
> >  JOIN pg_namespace nbt ON bt.typnamespace = nbt.oid) ON t.typtype =
> > 'd'::"char" AND t.typbasetype = bt.oid
> >  LEFT JOIN (pg_collation co
> >  JOIN pg_namespace nco ON co.collnamespace = nco.oid) ON
> a.attcollation
> > = co.oid AND (nco.nspname <> 'pg_catalog'::name OR co.collname <>
> > 'default'::name)
> >   WHERE NOT pg_is_other_temp_schema(nc.oid) AND a.attnum > 0 AND NOT
> > a.attisdropped AND (c.relkind = ANY (ARRAY['r'::"char", 'v'::"char",
> > 'f'::"char"])) AND (pg_has_
> > role(c.relowner, 'USAGE'::text) OR has_column_privilege(c.oid, a.attnum,
> > 'SELECT, INSERT, UPDATE, REFERENCES'::text));
> >
> >
> >
> > --
> > john r pierce, recycling bits in santa cruz
> >
> >
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



*Igor,*

*as others have suggested, you would be better off querying the system
catalogs to get constraint information. The query below is what I use.*

*Perhaps it will help you to modify it to your needs:*

SELECT cn.conname,
       CASE WHEN cn.contype = 'c' THEN 'check'
            WHEN cn.contype = 'f' THEN 'foreign key'
            WHEN cn.contype = 'p' THEN 'primary key'
            WHEN cn.contype = 'u' THEN 'unique'
            WHEN cn.contype = 't' THEN 'trigger'
            WHEN cn.contype = 'x' THEN 'exclusion'
       END AS type,
       cn.condeferrable,
       CASE WHEN cn.conrelid > 0
            THEN (SELECT nspname || '.' || relname
                    FROM pg_class c
                    JOIN pg_namespace n ON n.oid = c.relnamespace
                   WHERE c.oid = cn.conrelid)
            ELSE ''
       END AS "table",
       confkey,
       consrc
  FROM pg_constraint cn
 ORDER BY 1;
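
To restrict that output to a single table (public.my_table is a hypothetical
name), add a WHERE clause before the ORDER BY:

 WHERE cn.conrelid = 'public.my_table'::regclass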

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] relation create time

2017-05-10 Thread Melvin Davidson
On Wed, May 10, 2017 at 12:31 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 5/10/17 12:05, Hu, Patricia wrote:
> > I am trying to find out when a table was created in postgresql. Thought
> > it would be easy (coming from Oracle world), but haven’t had any luck,
> > especially since we are on RDS and can’t peek at the timestamp on the
> > file system. Is this information stored anywhere in the catalog?
>
> It is not.
>
> > Or I
> > need to store it myself? Is there any plan to add such meta data
> > information to the catalog as a feature? Thanks a lot!
>
> You could write an event trigger to record it.
>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



*One option would be to set log_statement = 'ddl', and make sure
log_line_prefix includes the time; then you can review the log for creation
time.*
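
If you go the event-trigger route Peter mentioned, a minimal sketch could
look like the following (table and function names are hypothetical; it needs
9.5+ for pg_event_trigger_ddl_commands(), and superuser to create the
trigger, which may rule it out on RDS):

-- Hypothetical table to record object creation times.
CREATE TABLE IF NOT EXISTS ddl_create_log (
    object_identity text,
    command_tag     text,
    created_at      timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_create() RETURNS event_trigger AS $$
BEGIN
    INSERT INTO ddl_create_log (object_identity, command_tag)
    SELECT object_identity, command_tag
      FROM pg_event_trigger_ddl_commands();
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_create ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE')
    EXECUTE PROCEDURE log_create();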



*Also, I have been requesting that feature for a long time, so if you want
to add your support, vote for it at
https://postgresql.uservoice.com/forums/21853-general?filter=top=2*

*and look for relcreate*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] all serial type was changed to 1

2017-05-01 Thread Melvin Davidson
On Mon, May 1, 2017 at 7:55 PM, Max Wang <mw...@1080agile.com> wrote:

> Hi Melvin,
>
>
>
> I already reset id to correct value after this happen. This is a
> production database. We could not stop and wait for trouble shooting. I
> manually reset sequence of id to correct value.
>
>
>
> Below is current result:
>
>
>
> sequence_name | xx_id_seq
>
> last_value| 190996
>
> start_value   | 1
>
> increment_by  | 1
>
> max_value | 9223372036854775807
>
> min_value | 1
>
> cache_value   | 1
>
> log_cnt   | 29
>
> is_cycled | f
>
> is_called | t
>
>
>
> Regards,
>
> Max
>
>
>
> *From:* Melvin Davidson [mailto:melvin6...@gmail.com]
> *Sent:* Tuesday, 2 May 2017 9:49 AM
> *To:* Adrian Klaver <adrian.kla...@aklaver.com>
> *Cc:* Max Wang <mw...@1080agile.com>; Amitabh Kant <amitabhk...@gmail.com>;
> pgsql-general@postgresql.org
>
> *Subject:* Re: [GENERAL] all serial type was changed to 1
>
>
>
>
>
>
>
> On Mon, May 1, 2017 at 7:45 PM, Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
>
> On 05/01/2017 04:36 PM, Max Wang wrote:
>
> Hi Amitabh,
>
> I mean the serial sequence that controls the id value has been set to 1
> for all tables. That's why I got the duplicate key value error when I tried
> to insert the new record to table.
>
>
> So what does the sequence query show?
>
> As an example in psql:
>
> test=# create table serial_test(id serial PRIMARY KEY, fld_1 varchar);
> CREATE TABLE
>
> test=# \d serial_test
>   Table "public.serial_test"
>  Column |   Type|Modifiers
> +---+---
> ---
>  id | integer   | not null default
> nextval('serial_test_id_seq'::regclass)
>  fld_1  | character varying |
> Indexes:
> "serial_test_pkey" PRIMARY KEY, btree (id)
>
> The above shows that the sequence associated with the serial type is:
> 'serial_test_id_seq'
>
> The below shows how to select from that sequence:
>
> test=# select * from serial_test_id_seq ;
> -[ RECORD 1 ]-+
> sequence_name | serial_test_id_seq
> last_value| 1
> start_value   | 1
> increment_by  | 1
> max_value | 9223372036854775807
> min_value | 1
> cache_value   | 1
> log_cnt   | 0
> is_cycled | f
> is_called | f
>
>
> Can you do that on the serial column from one the affected tables and post
> the results here?
>
>
>
>
> Thanks.
>
> Regards,
> Max
>
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
> Sent: Tuesday, 2 May 2017 9:31 AM
> To: Max Wang <mw...@1080agile.com>; Amitabh Kant <amitabhk...@gmail.com>
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] all serial type was changed to 1
>
> On 05/01/2017 04:11 PM, Max Wang wrote:
>
> Hi Amitabh,
>
>
>
> Thank you for suggestion. We did not reach the limit of serial type.
> Some tables only have hundreds of rows.
>
>
> It would helpful if you ran the query I showed in my previous post on one
> the sequences just so we can see.
>
>  From subsequent post of yours:
>
> "Sorry. I mean all tables’ id column were reset to 1."
>
> I thought I understood on this, now I am not sure. Do you mean that the
> actual values in the id column in all the tables have been set to 1 or that
> the serial sequence that controls the id value has been set to 1?
>
>
>
>
> Regards,
>
> Max
>
>
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>
>
> *Hmmm, it's beginning to look like someone did a "SELECT setval('seqname',
> 1);"  For every sequence. *
>
> *Is that a possibility?*
>
>
> --
>
> *Melvin Davidson*
>
> *I reserve the right to fantasize.  Whether or not you wish to share my
> fantasy is entirely up to you. *
>


*>I already reset id to correct value after this happen. *
*Then, as Adrian or I suggested, someone did a manual "SELECT
setval('seqname', 1);" for every sequence.*

*Since you have reset to the correct values already, I seriously doubt we
can trace this any further.*
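
For anyone repairing this by hand, a minimal sketch (my_table and id are
hypothetical names):

-- Reset a serial column's sequence to the table's current maximum id;
-- the third argument (is_called = true) makes the next nextval()
-- return max + 1.
SELECT setval(pg_get_serial_sequence('my_table', 'id'),
              COALESCE((SELECT max(id) FROM my_table), 1),
              true);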
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] all serial type was changed to 1

2017-05-01 Thread Melvin Davidson
On Mon, May 1, 2017 at 7:45 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/01/2017 04:36 PM, Max Wang wrote:
>
>> Hi Amitabh,
>>
>> I mean the serial sequence that controls the id value has been set to 1
>> for all tables. That's why I got the duplicate key value error when I tried
>> to insert the new record to table.
>>
>
> So what does the sequence query show?
>
> As an example in psql:
>
> test=# create table serial_test(id serial PRIMARY KEY, fld_1 varchar);
> CREATE TABLE
>
> test=# \d serial_test
>   Table "public.serial_test"
>  Column |   Type|Modifiers
> +---+---
> ---
>  id | integer   | not null default
> nextval('serial_test_id_seq'::regclass)
>  fld_1  | character varying |
> Indexes:
> "serial_test_pkey" PRIMARY KEY, btree (id)
>
> The above shows that the sequence associated with the serial type is:
> 'serial_test_id_seq'
>
> The below shows how to select from that sequence:
>
> test=# select * from serial_test_id_seq ;
> -[ RECORD 1 ]-+
> sequence_name | serial_test_id_seq
> last_value| 1
> start_value   | 1
> increment_by  | 1
> max_value | 9223372036854775807
> min_value | 1
> cache_value   | 1
> log_cnt   | 0
> is_cycled | f
> is_called | f
>
>
> Can you do that on the serial column from one the affected tables and post
> the results here?
>
>
>
>
>> Thanks.
>>
>> Regards,
>> Max
>>
>> -Original Message-
>> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
>> Sent: Tuesday, 2 May 2017 9:31 AM
>> To: Max Wang <mw...@1080agile.com>; Amitabh Kant <amitabhk...@gmail.com>
>> Cc: pgsql-general@postgresql.org
>> Subject: Re: [GENERAL] all serial type was changed to 1
>>
>> On 05/01/2017 04:11 PM, Max Wang wrote:
>>
>>> Hi Amitabh,
>>>
>>>
>>>
>>> Thank you for suggestion. We did not reach the limit of serial type.
>>> Some tables only have hundreds of rows.
>>>
>>
>> It would helpful if you ran the query I showed in my previous post on one
>> the sequences just so we can see.
>>
>>  From subsequent post of yours:
>>
>> "Sorry. I mean all tables’ id column were reset to 1."
>>
>> I thought I understood on this, now I am not sure. Do you mean that the
>> actual values in the id column in all the tables have been set to 1 or that
>> the serial sequence that controls the id value has been set to 1?
>>
>>
>>>
>>>
>>> Regards,
>>>
>>> Max
>>>
>>>
>>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


*Hmmm, it's beginning to look like someone did a "SELECT setval('seqname',
1);" for every sequence.*

*Is that a possibility?*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] all serial type was changed to 1

2017-05-01 Thread Melvin Davidson
On Mon, May 1, 2017 at 7:31 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/01/2017 04:11 PM, Max Wang wrote:
>
>> Hi Amitabh,
>>
>>
>>
>> Thank you for suggestion. We did not reach the limit of serial type.
>> Some tables only have hundreds of rows.
>>
>
> It would helpful if you ran the query I showed in my previous post on one
> the sequences just so we can see.
>
> From subsequent post of yours:
>
> "Sorry. I mean all tables’ id column were reset to 1."
>
> I thought I understood on this, now I am not sure. Do you mean that the
> actual values in the id column in all the tables have been set to 1 or that
> the serial sequence that controls the id value has been set to 1?
>
>
>
>>
>>
>> Regards,
>>
>> Max
>>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>










*Just to help clarify things, please execute and attach the output from the
following query:*

SELECT sequence_schema,
       sequence_name,
       start_value,
       maximum_value,
       cycle_option
  FROM information_schema.sequences
 ORDER BY 1, 2;


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] all serial type was changed to 1

2017-05-01 Thread Melvin Davidson
On Mon, May 1, 2017 at 7:08 PM, Max Wang <mw...@1080agile.com> wrote:

> Hi Adrian,
>
> Only sequences (id) reset to 1.
>
> Regards,
> Max
>
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
> Sent: Monday, 1 May 2017 11:30 PM
> To: Max Wang <mw...@1080agile.com>; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] all serial type was changed to 1
>
> On 04/30/2017 10:51 PM, Max Wang wrote:
> > Hi All,
> >
> >
> >
> > We have a PostgreSQL database. There are 26 tables and we use serial
> > type as primary key.  We had a insert error as "duplicate key value
> > violates unique constraint, DETAIL:  Key (id)=(1) already exists." one
> > weeks ago. I checked and found all tables' id were reset to 1.
>
> So to be clear:
>
> Every row in each of the 26 tables has an id of 1?
>
> or
>
> Do you mean the sequences where reset to 1?
>
> >
> >
> >
> > I checked database log and did not find any useful information.  I am
> > not sure why this happen. The only script which connect to this
> > database is a Python script and only do normal insert/update/delete
> actions.
> >
> >
> >
> > Please give me some suggestions if you happen to know something about
> > this issue. I appreciate any feedback you might have.
> >
> >
> >
> > I am very new to PostgreSQL and this mail list. Please let me know if
> > I did not something wrong.
> >
> >
> >
> > Thank you.
> >
> >
> >
> > Regards,
> >
> > Max
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



*But as Adrian asked, was the sequence reset to 1 for ALL tables' sequences,
or just one?*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] GENERAL : [Can somebody explain about Base directory]

2017-05-01 Thread Melvin Davidson
On Mon, May 1, 2017 at 11:41 AM, VENKTESH GUTTEDAR <
venkteshgutte...@gmail.com> wrote:

> Hello All,
>
> Base directory is consuming to much memory, leading to no space on
> server and stopping the application
>
> Somebody please explain why it takes so much memory and is it safe to
> delete those files.?
>
> --
> Regards :
> Venktesh Guttedar.
>
>

*Please clarify. Are you talking about file space or physical memory? They
are two different things.*

*Which version of PostgreSQL are you using?*

*What is your O/S?*

*How many databases have you created in your PostgreSQL server?*
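
As a starting point while you gather that: the subdirectories under base/
are named after database OIDs, so this query maps them to databases and
their on-disk sizes:

SELECT oid, datname,
       pg_size_pretty(pg_database_size(oid)) AS size
  FROM pg_database
 ORDER BY pg_database_size(oid) DESC;

-- 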
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Required Monitoring

2017-04-28 Thread Melvin Davidson
On Fri, Apr 28, 2017 at 10:46 AM, Joshua D. Drake <j...@commandprompt.com>
wrote:

> On 04/28/2017 07:39 AM, Andrew Kerber wrote:
>
>> I am a fairly experienced Oracle DBA, and we are starting to move in to
>> the PostgreSQL world.  I would expect the standard monitoring items are
>> required for mission critical postgres apps, Ie, disk space, wal log
>> space, log monitoring, process counts,software running, connection
>> available on the correct port, CPU usage.
>>
>> Are there additional PostgreSQL specific items that need to be
>> monitored?  if so, what items?
>>
>
> Pretty much anything that starts with pg_stat_
>
> JD
>
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Everyone appreciates your honesty, until you are honest with them.
> Unless otherwise stated, opinions are my own.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

One thing I always monitor is index sizes, usage and status. See attached
pg_stat_all_indexes.sh
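
The script itself isn't reproduced in the archive, but a query in the same
spirit flags indexes that have never been scanned, sorted by wasted size:

SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
  FROM pg_stat_all_indexes
 WHERE idx_scan = 0
 ORDER BY pg_relation_size(indexrelid) DESC;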

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


pg_stat_all_indexes.sh
Description: Bourne shell script

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] The Contractor Conundrum

2017-04-26 Thread Melvin Davidson
This article is written from my experience as a PostgreSQL contractor, for
all those who think they might want to be one.

It all began about 4 years ago when, oddly enough, I retired. Shortly after
I moved to my new, and final, home (I hope), I began receiving calls from
headhunters wanting to place me in companies that required me to relocate.
Having had to relocate 4 times in 8 years due to a bad job market, I was in
no mood to do it again, so I politely informed them I was not interested
and hung up the phone. That was until I received a call advising me I only
had to
be on site for two
days and then I could work remotely from home. I was asked what my hourly
rate would be and I
just threw something out there, not expecting them to be ok with it, but
they were. So I took
the job (and trip) and was impressed with the company and personnel I was
working with. The
database was well designed and I was given a laptop to connect from my home
with. I was given
the task of testing new auxillary software for PostgreSQL and writing bash
scripts to insure
backups and restores were being done and were reliable. That position
lasted 6 months and I left
with a feeling of having completed a job well done. However, my next
contract was a nightmare.
The database was poorly designed. Instead of a simple scaled database, each
customer was given
their own schema, with a copy of each table in it. What was really bad was
they had put an index
on every column of every table, only a few of which were actually used in
queries. I begged them
to let me drop the unused indexes so they could save space and optimize
inserts, updates & deletes, but they refused because "they had higher
priorities".
Now, my latest commitment is somewhat better, but not without its own
annoyances. Primarily,
despite my repeated attempts to convince them otherwise, the developers at
that site seem to feel
it is essential to constantly issue queries of "SELECT 1;", so that they do
not lose the connection
to the database. That of course, has resulted at times in no connection
being available for other
users/developers. Let alone the fact that developers should not be allowed
on production systems.
I have advised them they are much better off using a connection manager
like PgBouncer, but they apparently do not understand the concept or
benefit. In summary, I can only advise that aspiring contractors find out
as much as they can about a system's schema and policies before committing
to a contract, no matter how much they offer to pay.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Total ram size study

2017-04-22 Thread Melvin Davidson
Thanks Vick,

Those were my thoughts as well. Your response gives me something to help
convince the client to kick up the hardware.

On Sat, Apr 22, 2017 at 12:20 PM, Vick Khera <vi...@khera.org> wrote:

> I've not done a formal study, but I've always found that throwing hardware
> at the problem does wonders. My current database I made faster by bumping
> RAM until the entire working set fits in memory. The server has 256GB of
> RAM, half of which is used by ZFS for its purposes, and the other half for
> Postgres. The prior iteration of the servers only had 64GB of RAM and the
> difference was very remarkable.
>
> On Sat, Apr 22, 2017 at 11:27 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>>
>>
>> *Has anyone ever done a study on performance increase via ram increase? I
>> have a client on AWS with 8GB total ram (2GB shared_buffers), and I am
>> curious if doubling the ram to 16GB (4GB shared_buffers) will result in
>> minimizing query response time.*
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Total ram size study

2017-04-22 Thread Melvin Davidson
*Has anyone ever done a study on performance increase via ram increase? I
have a client on AWS with 8GB total ram (2GB shared_buffers), and I am
curious if doubling the ram to 16GB (4GB shared_buffers) will result in
minimizing query response time.*
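
One quick check before buying RAM is a rough buffer-cache hit ratio
(a server-lifetime aggregate, so interpret with care):

-- A low hit_pct suggests the working set does not fit in memory,
-- and more RAM (and a larger shared_buffers) may well help.
SELECT sum(blks_hit) * 100.0
       / NULLIF(sum(blks_hit) + sum(blks_read), 0) AS hit_pct
  FROM pg_stat_database;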

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Clone PostgreSQL schema

2017-04-18 Thread Melvin Davidson
On Tue, Apr 18, 2017 at 3:48 AM, R. Reiterer <r.reite...@posteo.at> wrote:

> Hi Melvin,
>
> after a first test, the function seems to work perfect! MANY THX!!!
>
> Regards,
>
> Reinhard
>
> Am 17.04.2017 17:21 schrieb Melvin Davidson:
>
>> On Mon, Apr 17, 2017 at 11:02 AM, Melvin Davidson
>> <melvin6...@gmail.com> wrote:
>>
>> On Mon, Apr 17, 2017 at 9:42 AM, Melvin Davidson
>>> <melvin6...@gmail.com> wrote:
>>>
>>> On Mon, Apr 17, 2017 at 2:20 AM, R. Reiterer <r.reite...@posteo.at>
>>> wrote:
>>> Hi Melvin,
>>>
>>> thanks again for your help! I did some testing, but views in the
>>> new schema still refer to the old schema.
>>>
>>> Regards, Reinhard
>>>
>>> Am 17.04.2017 04:53 schrieb Melvin Davidson:
>>> On Sun, Apr 16, 2017 at 4:42 PM, R. Reiterer <r.reite...@posteo.at>
>>> wrote:
>>>
>>> Unfortunately, I do not have the skills to improve the function.
>>> Maybe someone at dba.stackexchange.com [1] [1] can help me. I'll
>>>
>>> open a
>>>
>>> ticket. I hope this is okay for you.
>>>
>>> Am 16.04.2017 22:31 schrieb Melvin Davidson:
>>> I missed to note that this is a VIEW issue (?)
>>> AH, IN THAT CASE, YOU ARE RIGHT. I AM RETIRED, BUT IF YOU WISH, GO
>>> AHEAD AND APPLY THE FIX YOURSELF. JUST MAKE A COMMENT AT THE TOP TO
>>> INCLUDE THE DATE, YOUR NAME AND THE FIX.
>>>
>>> On Sun, Apr 16, 2017 at 4:24 PM, R. Reiterer <r.reite...@posteo.at>
>>> wrote:
>>>
>>> Hi Melvin,
>>>
>>> thanks for your reply. I missed to note that this is a VIEW issue
>>> (?). After duplicating a schema, views in the cloned schema
>>> (schema_new) refer still to the source schema (schema_old) in the
>>> FROM clause:
>>>
>>> View in cloned schema (schema_new) -->
>>>
>>> CREATE VIEW schema_new.my_view AS
>>> SELECT *
>>> FROM schema_old.my_table;
>>>
>>> To me 'FROM schema_new.my_table' would be more logical.
>>>
>>> Regards, Reinhard
>>>
>>> Am 16.04.2017 22:12 schrieb Melvin Davidson:
>>> On Sun, Apr 16, 2017 at 3:20 PM, R. Reiterer <r.reite...@posteo.at>
>>> wrote:
>>>
>>> Hi Melvin,
>>>
>>> I use your PL/pgSQL function posted at
>>>
>>>
>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%
>> 2Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>
>>> [2]
>>> [2]
>>>
>>> [1]
>>> [1] to clone schemas in PostgreSQL databases. Many thanks for your
>>> work!
>>>
>>> I noticed that in cloned schemas the schema name isn't updated in
>>> the FROM clause:
>>>
>>> schema_old -->
>>>
>>> CREATE VIEW schema_old.my_view AS
>>> SELECT *
>>> FROM schema_old.my_table;
>>>
>>> schema_new -->
>>>
>>> CREATE VIEW schema_new.my_view AS
>>> SELECT *
>>> FROM schema_old.my_table;
>>>
>>> Are you interessted to fix this?
>>>
>>> Regards,
>>>
>>> Reinhard
>>>
>>> FIRST, THANK YOU FOR THE COMPLEMENT.
>>>
>>> However, AFAIC, there is nothing to "fix" with regards to cloning
>>> schema name. In a database, you cannot have two schemas with the
>>> same
>>> name,
>>>
>>> so what would be the point? If you want to "clone" to a different
>>> database, then just use pg_dump and pg_restore.
>>>
>>> --
>>>
>>> MELVIN DAVIDSON
>>> I reserve the right to fantasize. Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>> Links:
>>> --
>>> [1]
>>>
>>>
>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%
>> 2Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>
>>> [2]
>>> [2]
>>> [1]
>>>
>>> --
>>>
>>> MELVIN DAVIDSON
>>> I reserve the right to fantasize. Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>> Links:
>>> --
>>> [1]
>>>
>>>
>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%
>> 2Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>
>>> [2]
>>> [2]
>>>
>>> Reinhard,
>>

Re: [GENERAL] Clone PostgreSQL schema

2017-04-17 Thread Melvin Davidson
On Mon, Apr 17, 2017 at 11:02 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
> On Mon, Apr 17, 2017 at 9:42 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>> On Mon, Apr 17, 2017 at 2:20 AM, R. Reiterer <r.reite...@posteo.at>
>> wrote:
>>
>>> Hi Melvin,
>>>
>>> thanks again for your help! I did some testing, but views in the new
>>> schema still refer to the old schema.
>>>
>>> Regards, Reinhard
>>>
>>> Am 17.04.2017 04:53 schrieb Melvin Davidson:
>>>
>>>> On Sun, Apr 16, 2017 at 4:42 PM, R. Reiterer <r.reite...@posteo.at>
>>>> wrote:
>>>>
>>>> Unfortunately, I do not have the skills to improve the function.
>>>>> Maybe someone at dba.stackexchange.com [1] can help me. I'll open a
>>>>>
>>>>> ticket. I hope this is okay for you.
>>>>>
>>>>> Am 16.04.2017 22:31 schrieb Melvin Davidson:
>>>>> I missed to note that this is a VIEW issue (?)
>>>>> AH, IN THAT CASE, YOU ARE RIGHT. I AM RETIRED, BUT IF YOU WISH, GO
>>>>> AHEAD AND APPLY THE FIX YOURSELF. JUST MAKE A COMMENT AT THE TOP TO
>>>>> INCLUDE THE DATE, YOUR NAME AND THE FIX.
>>>>>
>>>>> On Sun, Apr 16, 2017 at 4:24 PM, R. Reiterer <r.reite...@posteo.at>
>>>>> wrote:
>>>>>
>>>>> Hi Melvin,
>>>>>
>>>>> thanks for your reply. I missed to note that this is a VIEW issue
>>>>> (?). After duplicating a schema, views in the cloned schema
>>>>> (schema_new) refer still to the source schema (schema_old) in the
>>>>> FROM clause:
>>>>>
>>>>> View in cloned schema (schema_new) -->
>>>>>
>>>>> CREATE VIEW schema_new.my_view AS
>>>>> SELECT *
>>>>> FROM schema_old.my_table;
>>>>>
>>>>> To me 'FROM schema_new.my_table' would be more logical.
>>>>>
>>>>> Regards, Reinhard
>>>>>
>>>>> Am 16.04.2017 22:12 schrieb Melvin Davidson:
>>>>> On Sun, Apr 16, 2017 at 3:20 PM, R. Reiterer <r.reite...@posteo.at>
>>>>> wrote:
>>>>>
>>>>> Hi Melvin,
>>>>>
>>>>> I use your PL/pgSQL function posted at
>>>>>
>>>>>
>>>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%2
>>>> Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>>>
>>>>> [2]
>>>>>
>>>>> [1]
>>>>> [1] to clone schemas in PostgreSQL databases. Many thanks for your
>>>>> work!
>>>>>
>>>>> I noticed that in cloned schemas the schema name isn't updated in
>>>>> the FROM clause:
>>>>>
>>>>> schema_old -->
>>>>>
>>>>> CREATE VIEW schema_old.my_view AS
>>>>> SELECT *
>>>>> FROM schema_old.my_table;
>>>>>
>>>>> schema_new -->
>>>>>
>>>>> CREATE VIEW schema_new.my_view AS
>>>>> SELECT *
>>>>> FROM schema_old.my_table;
>>>>>
>>>>> Are you interessted to fix this?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Reinhard
>>>>>
>>>>> FIRST, THANK YOU FOR THE COMPLEMENT.
>>>>>
>>>>> However, AFAIC, there is nothing to "fix" with regards to cloning
>>>>> schema name. In a database, you cannot have two schemas with the
>>>>> same
>>>>> name,
>>>>>
>>>>> so what would be the point? If you want to "clone" to a different
>>>>> database, then just use pg_dump and pg_restore.
>>>>>
>>>>> --
>>>>>
>>>>> MELVIN DAVIDSON
>>>>> I reserve the right to fantasize. Whether or not you
>>>>> wish to share my fantasy is entirely up to you.
>>>>>
>>>>> Links:
>>>>> --
>>>>> [1]
>>>>>
>>>>>
>>>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%2
>>>> Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>>>
>>>>> [2]
>>>>> [1]
>>>>>
>>>>> --
>>>>>
>>>>> MELVIN

Re: [GENERAL] Clone PostgreSQL schema

2017-04-17 Thread Melvin Davidson
On Mon, Apr 17, 2017 at 9:42 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
> On Mon, Apr 17, 2017 at 2:20 AM, R. Reiterer <r.reite...@posteo.at> wrote:
>
>> Hi Melvin,
>>
>> thanks again for your help! I did some testing, but views in the new
>> schema still refer to the old schema.
>>
>> Regards, Reinhard
>>
>> Am 17.04.2017 04:53 schrieb Melvin Davidson:
>>
>>> On Sun, Apr 16, 2017 at 4:42 PM, R. Reiterer <r.reite...@posteo.at>
>>> wrote:
>>>
>>> Unfortunately, I do not have the skills to improve the function.
>>>> Maybe someone at dba.stackexchange.com [1] can help me. I'll open a
>>>>
>>>> ticket. I hope this is okay for you.
>>>>
>>>> Am 16.04.2017 22:31 schrieb Melvin Davidson:
>>>> I missed to note that this is a VIEW issue (?)
>>>> AH, IN THAT CASE, YOU ARE RIGHT. I AM RETIRED, BUT IF YOU WISH, GO
>>>> AHEAD AND APPLY THE FIX YOURSELF. JUST MAKE A COMMENT AT THE TOP TO
>>>> INCLUDE THE DATE, YOUR NAME AND THE FIX.
>>>>
>>>> On Sun, Apr 16, 2017 at 4:24 PM, R. Reiterer <r.reite...@posteo.at>
>>>> wrote:
>>>>
>>>> Hi Melvin,
>>>>
>>>> thanks for your reply. I missed to note that this is a VIEW issue
>>>> (?). After duplicating a schema, views in the cloned schema
>>>> (schema_new) refer still to the source schema (schema_old) in the
>>>> FROM clause:
>>>>
>>>> View in cloned schema (schema_new) -->
>>>>
>>>> CREATE VIEW schema_new.my_view AS
>>>> SELECT *
>>>> FROM schema_old.my_table;
>>>>
>>>> To me 'FROM schema_new.my_table' would be more logical.
>>>>
>>>> Regards, Reinhard
>>>>
>>>> Am 16.04.2017 22:12 schrieb Melvin Davidson:
>>>> On Sun, Apr 16, 2017 at 3:20 PM, R. Reiterer <r.reite...@posteo.at>
>>>> wrote:
>>>>
>>>> Hi Melvin,
>>>>
>>>> I use your PL/pgSQL function posted at
>>>>
>>>>
>>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%2
>>> Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>>
>>>> [2]
>>>>
>>>> [1]
>>>> [1] to clone schemas in PostgreSQL databases. Many thanks for your
>>>> work!
>>>>
>>>> I noticed that in cloned schemas the schema name isn't updated in
>>>> the FROM clause:
>>>>
>>>> schema_old -->
>>>>
>>>> CREATE VIEW schema_old.my_view AS
>>>> SELECT *
>>>> FROM schema_old.my_table;
>>>>
>>>> schema_new -->
>>>>
>>>> CREATE VIEW schema_new.my_view AS
>>>> SELECT *
>>>> FROM schema_old.my_table;
>>>>
>>>> Are you interessted to fix this?
>>>>
>>>> Regards,
>>>>
>>>> Reinhard
>>>>
>>>> FIRST, THANK YOU FOR THE COMPLEMENT.
>>>>
>>>> However, AFAIC, there is nothing to "fix" with regards to cloning
>>>> schema name. In a database, you cannot have two schemas with the
>>>> same
>>>> name,
>>>>
>>>> so what would be the point? If you want to "clone" to a different
>>>> database, then just use pg_dump and pg_restore.
>>>>
>>>> --
>>>>
>>>> MELVIN DAVIDSON
>>>> I reserve the right to fantasize. Whether or not you
>>>> wish to share my fantasy is entirely up to you.
>>>>
>>>> Links:
>>>> --
>>>> [1]
>>>>
>>>>
>>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%2
>>> Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>>
>>>> [2]
>>>> [1]
>>>>
>>>> --
>>>>
>>>> MELVIN DAVIDSON
>>>> I reserve the right to fantasize. Whether or not you
>>>> wish to share my fantasy is entirely up to you.
>>>>
>>>> Links:
>>>> --
>>>> [1]
>>>>
>>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%2
>>> Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>>
>>>> [2]
>>>>
>>>
>>> Reinhard,
>>>
>>> After reviewing things, I note it's possible that you downloaded an
>>

Re: [GENERAL] Clone PostgreSQL schema

2017-04-16 Thread Melvin Davidson
On Sun, Apr 16, 2017 at 4:42 PM, R. Reiterer <r.reite...@posteo.at> wrote:

> Unfortunately, I do not have the skills to improve the function. Maybe
> someone at dba.stackexchange.com can help me. I'll open a ticket. I hope
> this is okay for you.
>
> Am 16.04.2017 22:31 schrieb Melvin Davidson:
>
>> I missed to note that this is a VIEW issue (?)
>>>
>> AH, IN THAT CASE, YOU ARE RIGHT. I AM RETIRED, BUT IF YOU WISH, GO
>> AHEAD AND APPLY THE FIX YOURSELF. JUST MAKE A COMMENT AT THE TOP TO
>> INCLUDE THE DATE, YOUR NAME AND THE FIX.
>>
>>
>> On Sun, Apr 16, 2017 at 4:24 PM, R. Reiterer <r.reite...@posteo.at>
>> wrote:
>>
>> Hi Melvin,
>>>
>>> thanks for your reply. I missed to note that this is a VIEW issue
>>> (?). After duplicating a schema, views in the cloned schema
>>> (schema_new) refer still to the source schema (schema_old) in the
>>> FROM clause:
>>>
>>> View in cloned schema (schema_new) -->
>>>
>>> CREATE VIEW schema_new.my_view AS
>>> SELECT *
>>> FROM schema_old.my_table;
>>>
>>> To me 'FROM schema_new.my_table' would be more logical.
>>>
>>> Regards, Reinhard
>>>
>>> Am 16.04.2017 22:12 schrieb Melvin Davidson:
>>> On Sun, Apr 16, 2017 at 3:20 PM, R. Reiterer <r.reite...@posteo.at>
>>> wrote:
>>>
>>> Hi Melvin,
>>>
>>> I use your PL/pgSQL function posted at
>>>
>>>
>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%
>> 2Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>
>>> [1]
>>> [1] to clone schemas in PostgreSQL databases. Many thanks for your
>>> work!
>>>
>>> I noticed that in cloned schemas the schema name isn't updated in
>>> the FROM clause:
>>>
>>> schema_old -->
>>>
>>> CREATE VIEW schema_old.my_view AS
>>> SELECT *
>>> FROM schema_old.my_table;
>>>
>>> schema_new -->
>>>
>>> CREATE VIEW schema_new.my_view AS
>>> SELECT *
>>> FROM schema_old.my_table;
>>>
>>> Are you interessted to fix this?
>>>
>>> Regards,
>>>
>>> Reinhard
>>>
>>> FIRST, THANK YOU FOR THE COMPLEMENT.
>>>
>>> However, AFAIC, there is nothing to "fix" with regards to cloning
>>> schema name. In a database, you cannot have two schemas with the
>>> same
>>> name,
>>>
>>> so what would be the point? If you want to "clone" to a different
>>> database, then just use pg_dump and pg_restore.
>>>
>>> --
>>>
>>> MELVIN DAVIDSON
>>> I reserve the right to fantasize. Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>> Links:
>>> --
>>> [1]
>>>
>>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%
>> 2Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>
>>> [1]
>>>
>>
>> --
>>
>> MELVIN DAVIDSON
>> I reserve the right to fantasize.  Whether or not you
>>  wish to share my fantasy is entirely up to you.
>>
>>
>> Links:
>> --
>> [1]
>> https://www.postgresql.org/message-id/CANu8FixK9P8UD43nv2s%
>> 2Bc-9jHkyy8wPUc_f5K3HcrmJYVPr-wQ%40mail.gmail.com
>>
>


*Reinhard,*

*After reviewing things, I note it's possible that you downloaded an
earlier version that had some errors in it and was not as complete.*

*Therefore, I've attached the latest, more complete version of the
function. Please let me know if this solves the problem.*
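
For the specific view problem above, one hedged approach outside the
function is to rebuild each cloned view from pg_get_viewdef() with the
schema name rewritten (schema_old, schema_new and my_view are the example
names from this thread; the text replace is naive, so review the generated
SQL before running it):

SELECT 'CREATE OR REPLACE VIEW schema_new.my_view AS '
       || replace(pg_get_viewdef('schema_old.my_view'::regclass, true),
                  'schema_old.', 'schema_new.');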

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: clone_schema(text, text, boolean)

-- DROP FUNCTION clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2015-09-20 by Melvin Davidson
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_connametext;
  v_rule   text;
  v_trig   text;
  object   text;
  buff
