Re: pg_multixact/members growing

2018-05-23 Thread Tiffany Thang
Thanks Tom and Thomas.

Where do I find pg_controldata? I could not locate it on the file system.

pg_clog/ or pg_subtrans/ or pg_multixact/offsets/  are getting larger too
but by only a few hundreds MBs.

This is not a replicated system.

How do I tell if a system is aggressively running "wraparound prevention"
autovacuums?

Sorry, I failed to follow the calculation. How did you get
“~435 million more members can be created.”?

What happens when no more members can be created? Does the database halt or
shut down?

Thanks.

On Tue, May 22, 2018 at 7:20 PM Thomas Munro <thomas.mu...@enterprisedb.com>
wrote:

> On Wed, May 23, 2018 at 7:49 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > Tiffany Thang <tiffanyth...@gmail.com> writes:
> >> Our pg_multixact/members directory has been growing to more than 18GB
> over
> >> the last couple of months. According to the documentation, the files in
> >> there are used to support row locking by multiple transactions and when
> all
> >> tables in all databases are eventually scanned by VACUUM, the older
> >> multixacts are removed. In our case, the files are not removed.
> >
> > Hmm.  What does pg_controldata tell you about NextMultiXactId,
> > NextMultiOffset, oldestMultiXid, oldestMulti's DB?
> > Are pg_clog/ or pg_subtrans/ or pg_multixact/offsets/ getting large?
> > Is there anything at all in pg_twophase/?  Is this system a replication


> > master, and if so are any of its slaves lagging behind?
>
> Some thoughts:
>
> There are MULTIXACT_MEMBERS_PER_PAGE = 1636 members for every 8KB
> page.  The reported directory size implies 18GB / 8KB * 1636 =
> 3,859,808,256 members.  Above MULTIXACT_MEMBER_SAFE_THRESHOLD =
> 2,147,483,647 we should be triggering emergency autovacuums to try to
> reclaim space.  Only ~435 million more members can be created.
>
> Is this system now aggressively running "wraparound prevention"
> autovacuums?
>
> There are MULTIXACT_OFFSETS_PER_PAGE = 2048 multixacts for every 8KB
> page, so the default autovacuum_multixact_freeze_max_age should
> soft-cap the size of pg_multixact/offsets at around 1.5GB ~=
> 400,000,000 / 2048 * 8KB.
>
> Unfortunately autovacuum_multixact_freeze_max_age doesn't impose any
> limit on the number of members.  The totals can be quite explosive
> with high numbers of backends, because when n backends share lock a
> row we make O(n) multixacts and O(n^2) members.  First we make a
> multixact with 2 members, then a new one with 3 members, etc... so
> that's n - 1 multixacts and (n * (n + 1)) / 2 - 1 members.
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>


pg_multixact/members growing

2018-05-22 Thread Tiffany Thang
Hi,
Our pg_multixact/members directory has been growing to more than 18GB over
the last couple of months. According to the documentation, the files in
there are used to support row locking by multiple transactions and when all
tables in all databases are eventually scanned by VACUUM, the older
multixacts are removed. In our case, the files are not removed. Any
suggestions what I should do to purge the files automatically? Can old
files since the last reboot be manually removed?

PG version: 9.6.8

autovacuum_multixact_freeze_max_age is the default 4. After reading
the documentation, I'm still not clear what this parameter does and if it
should be adjusted. Can someone explain?

Thanks.


Database connection log

2018-06-14 Thread Tiffany Thang
Hi,
Does PostgreSQL keep a log of client connections to the database like
Oracle's listener.log? I would like to extract information such as how many
connections are made to the database daily, the IP addresses they
originated from and the schemas they are connected to. Would it be possible
to extract the above information in PostgreSQL?

Thanks.


Re: Replication questions - read-only and temporary read/write slaves

2018-01-04 Thread Tiffany Thang
Michael,
Thanks for your input. What I meant to say was rolling back all the
changes. I was hoping for a way to temporary open the read-only standby in
r/w for testing purpose and then rollback all the changes made during the
test without having to re-create the standby from scratch.

Thanks.

Tiff

On Wed, Jan 3, 2018 at 11:52 PM, Michael Paquier <michael.paqu...@gmail.com>
wrote:

> On Fri, Dec 15, 2017 at 12:03:08PM -0500, Tiffany Thang wrote:
> > 1. set up a read-only slave database? The closest solution I could find
> is
> > Hot Standby but the slave would not be accessible until after a
> > failover.
>
> That's what the parameter hot_standby is for in recovery.conf. When a
> server is in recovery mode and once it has reached a consistent point,
> then it can be accessed for read-only queries if this parameter is
> enabled. You need to be careful about how you want to handle replication
> conflicts though, particularly if you have long read-queries on
> standbys, which can be tuned with hot_standby_feedback. Be careful
> though to not bloat too much the primary: retaining a XID horizon older
> causes tables to retain more past versions of tuples, which costs in
> space as well as in future VACUUM cleanups.
>
> > 2. temporary convert a read-only slave in read-write mode for testing
> > read/write workloads? Currently in Oracle, we can temporary open our
> > read-only standby database in read-write mode to occasionally test our
> > read-write workloads. We would stop the log apply on the standby
> database,
> > convert the read-only database to read-write,
> > perform our read/write test, discard all the changes after testing and
> > reopen and resync the standby database in read-only mode. Is there a
> > similar feature in PostgreSQL or are there ways to achieve something
> close
> > to our needs?
>
> Unfortunately not. You could reach the same kind of behavior by
> promoting a standby, and then do your testing. Then you would need to
> re-create a standby from scratch. What does "discard all the changes"
> mean?
> --
> Michael
>


List of objects owned by a schema/owner

2018-08-03 Thread Tiffany Thang
Hi,
Does anyone have a query that will list all the objects (composite types,
sequences, tables, triggers, functions, indices, etc) owned by a schema or
owner? I find fragments of information here and there that query on
pg_tables, pg_views, etc.

Thanks.


Re: List of objects owned by a schema/owner

2018-08-03 Thread Tiffany Thang
Thanks Tom for giving me the lead. I’ll take a look.


On Fri, Aug 3, 2018 at 10:52 AM Tom Lane  wrote:

> Tiffany Thang  writes:
> > Does anyone have a query that will list all the objects (composite types,
> > sequences, tables, triggers, functions, indices, etc) owned by a schema
> or
> > owner? I find fragments of information here and there that query on
> > pg_tables, pg_views, etc.
>
> Well, you could attack it the hard way:
>
>   select relname from pg_class where relowner = [oid of role of interest]
>   union all
>   ... similar select from every other catalog that has an owner column ...
>
> I don't know if anyone's built such a query before, but a little quality
> time with the system catalog documentation would get you there:
> https://www.postgresql.org/docs/current/static/catalogs.html
>
> Another idea is to rely on owner dependencies recorded in pg_shdepend,
> along the lines of
>
> select pg_describe_object(classid,objid,objsubid)
> from pg_shdepend where deptype = 'o' and
>   refobjid = [oid of role of interest] and
>   dbid = [oid of current database];
>
> That won't work for objects owned by the bootstrap superuser, and
> I think there are some other omissions --- for instance, it looks
> like we only record an owner dependency for a table, not for its
> indexes.
>
> Or, if you're feeling truly lazy, you can do
>
> begin;
> drop user joe;
> -- read the error message whining about what joe owns
> rollback;
>
> That's largely a hacky way to get the same info as the pg_shdepend
> query I sketched before, since the error message is derived from
> exactly that info.
>
> You have the same three options for schemas, though the details of each
> are a bit different (in particular, schema dependencies would be found
> in pg_depend not pg_shdepend).
>
> regards, tom lane
>


Prefixing schema name

2018-03-07 Thread Tiffany Thang
Hi,
Would it be possible to create a table or index in a schema without
prefixing the schema name? I find it a hassle to keep specifying the schema
name when I create an object or performing DMLs in a schema. The
search_path configuration works only for queries.

For example:
Is there a way to run the create/insert statements below without prefixing
the schema name, user1?

As the user, user1:
Create table user1.table1 (id int);
Insert into user1.table1 values (1);

Thanks.


Re: psql output result

2018-03-15 Thread Tiffany Thang
Thanks Geoff and Adrian!

On Thu, Mar 15, 2018 at 11:03 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 03/15/2018 07:57 AM, Tiffany Thang wrote:
>
>> Hi,
>> I have a SQL script that does some DDLs, inserts and counts.
>>
>> The command I ran is
>> psql dbname -c "\i crscript.sql" > output.txt
>>
>> In output.txt, I got something like
>> INSERT 0 1
>> INSERT 0 1
>> CREATE TABLE
>> INSERT 0 2
>>   count
>> ---
>>   9
>> (1 row)
>>
>>
>> Is there a way to output the SQLs and DDLs so that I could easily
>> identify what statements were executed?
>>
>
> aklaver@tito:~> psql -d test -U aklaver -a -f sql_test.sql
> \pset null 'NULL'
> Null display is "NULL".
> CREATE TABLE tbl_test(fld_1 int, fld_2 varchar);
> CREATE TABLE
> INSERT INTO tbl_test VALUES (1, 'dog'), (2, 'cat');
> INSERT 0 2
> DROP TABLE tbl_test;
> DROP TABLE
>
>
>> Thanks.
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Prefixing schema name

2018-03-08 Thread Tiffany Thang
Thanks David and Achilleas. This is no longer an issue. The table
previously went to the public schema. I'm not sure what I did. Anyway,
without a prefix, it defaults to the current schema.

On Thu, Mar 8, 2018 at 2:02 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 08/03/2018 01:13, David G. Johnston wrote:
>
> On Wed, Mar 7, 2018 at 4:05 PM, Tiffany Thang <tiffanyth...@gmail.com>
> wrote:
>
>> ​
>> The search_path configuration works only for queries.
>>
>
> ​Um
>
> ​https://www.postgresql.org/docs/10/static/sql-createschema.html
>
> "​A CREATE command specifying an unqualified object name creates the
> object in the current schema (the one at the front of the search path,
> which can be determined with the function current_schema)"
>
>
> In pg_dump relies heavily on search_path for all CREATE statements.
>
>
>> For example:
>> Is there a way to run the create/insert statements below without
>> prefixing the schema name, user1?​
>>
>
>> As the user, user1:
>> Create table user1.table1 (id int);
>> Insert into user1.table1 values (1);
>>
>
> ​Just omitting "user1" and seeing what happens would be informative.  You
> should find it does exactly what you think - namely because the default
> search_path will cause "user1" to appear first.
>
> Insert is more similar to Select than it is to Create - the object being
> inserted into must already exist
>
> David J.
> ​​
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>


psql output result

2018-03-15 Thread Tiffany Thang
Hi,
I have a SQL script that does some DDLs, inserts and counts.

The command I ran is
psql dbname -c "\i crscript.sql" > output.txt

In output.txt, I got something like
INSERT 0 1
INSERT 0 1
CREATE TABLE
INSERT 0 2
 count
---
 9
(1 row)


Is there a way to output the SQLs and DDLs so that I could easily identify
what statements were executed?

Thanks.


Prompt for parameter value in psql

2018-03-16 Thread Tiffany Thang
Hi,
Would it be possible to prompt for a user input in psql like in Oracle
sqlplus?

In oracle, we use the & sign, for example,
select * from emp where empid=

Thanks.


Re: Copy entire schema A to a different schema B

2019-02-21 Thread Tiffany Thang
Thanks everyone. Unfortunately the schema rename would not work since the
source database will be our production system. We have not gone live yet
but the system is expected to be constantly used.

I have multiple tables that I need to export ranging from 20GB to 60GB
each. The parallel will not work for a single table but would be beneficial
if I have multiple tables to dump.

I'm thinking maybe using what Adrian has suggested with the -f option and
then modify the file or maybe use a common public schema everywhere on the
source and target databases. I would have to restrict who has access to the
public schema.

Thanks.

Tiff

On Wed, Feb 20, 2019 at 11:14 PM Jiří Fejfar  wrote:

> Hi Tiffany, Isn't it possible for you to do:
>
> 1) rename temporarily schema testuser1 to testuser2
> 2) dump that schema into binary format
> 3 rename back schema testuser2 to testuser1
> 4 restore backup to testuser2 schema on other DB?
>
> Jura.
>
> On Wed, 20 Feb 2019 at 23:23, Tiffany Thang 
> wrote:
> >
> > Hi Adrian,
> > I managed to backup my table in parallel using -Fd but I'm back to my
> original issue where I could not restore the table to a different schema.
> >
> > For example,
> > I would like to backup testuser1.mytable and restore it to
> testuser2.mytable.
> >
> > pg_dump -U testuser1 -Fd -f c:\temp\testuser1 -j 8 -t mytable -h
> myserver testdb
> >
> > where mytable is in testuser1 schema
> >
> > The dump completed fine but when I attempted to restore the table using
> pg_restore to another database, it tried to create the table in testuser1
> schema. The restore failed since testuser1 schema does not exist in the
> target database. When I created a testuser1 schema in the target database,
> the restore worked fine. Since the dump toc is in binary format, I could
> not make the change to reflect the new target schema, testuser2.
> >
> > So, how should I go about restoring tables from one schema to a
> different schema name?
> >
> > Thanks.
> >
> > Tiff
> >
> > On Mon, Feb 11, 2019 at 11:53 AM Adrian Klaver <
> adrian.kla...@aklaver.com> wrote:
> >>
> >> On 2/11/19 8:30 AM, Tiffany Thang wrote:
> >> > Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to
> >> > achieve was to dump the schema quickly and be able to restore a single
> >> > or subset of objects from the dump. As far as I understand, the only
> way
> >> > of achieving that is to use the custom format and the -j option. Is
> that
> >> > correct? Are there any other alternatives?
> >>
> >> If you want to use -j then you need to use the -Fd output:
> >>
> >> https://www.postgresql.org/docs/10/app-pgdump.html
> >>
> >> "-j njobs
> >> --jobs=njobs
> >>
> >>  Run the dump in parallel by dumping njobs tables simultaneously.
> >> This option reduces the time of the dump but it also increases the load
> >> on the database server. You can only use this option with the directory
> >> output format because this is the only output format where multiple
> >> processes can write their data at the same time."
> >>
> >> If you need to grab just a subset of the schema then there are options
> >> to do that depending on the object. From above link as examples:
> >>
> >> "-n schema
> >> --schema=schema
> >>
> >>  Dump only schemas matching schema; this selects both the schema
> >> itself, and all its contained objects. ..."
> >>
> >>
> >> "-t table
> >> --table=table
> >>
> >>  Dump only tables with names matching table.  .."
> >>
> >>
> >> >
> >> > Thanks.
> >> >
> >> > Tiff
> >> >
> >> > On Mon, Feb 11, 2019 at 11:10 AM Ron  >> > <mailto:ronljohnso...@gmail.com>> wrote:
> >> >
> >> > On 2/11/19 10:00 AM, Tiffany Thang wrote:
> >> >  > Hi,
> >> >  > To copy the source schema A to target schema B in the same
> >> > database in
> >> >  > PG10.3, I use psql to dump schema A and manually removes
> anything
> >> > specific
> >> >  > to the schema in the text dump file before importing into
> schema
> >> > B. How do
> >> >  > I achieve the same exporting from Schema A and importing into
> >> > schema B
> >> >  > using pg_dump with the -Fc option? Since the dump file
> generated is
> >> >  > binary, I could not make modifications to the file. Is the
> >> > procedure the
> >> >  > same in version 11?
> >> >
> >> > Why do you need to use "--format=custom" instead of
> "--format=plain"?
> >> >
> >> > For example:
> >> > $ pg_dump --format=plain --schema-only --schema=A
> >> >
> >> >
> >> > --
> >> > Angular momentum makes the world go 'round.
> >> >
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
>


Re: Copy entire schema A to a different schema B

2019-02-22 Thread Tiffany Thang
Hi Melvin,
Unfortunately I was not able to use it because I was not able to access
pg_authid in RDS.

Thanks.

Tiff

On Thu, Feb 21, 2019 at 6:09 PM Melvin Davidson 
wrote:

> Tiffany, have you tried the clone_schema function? It seems to me it does
> exactly what you need, no dumping or restoring. There is
> even an option to copy the data or not. Default is not.
>
> On Thu, Feb 21, 2019 at 3:23 PM Adrian Klaver 
> wrote:
>
>> On 2/21/19 11:52 AM, Tiffany Thang wrote:
>> > Thanks everyone. Unfortunately the schema rename would not work since
>> > the source database will be our production system. We have not gone
>> live
>> > yet but the system is expected to be constantly used.
>> >
>> > I have multiple tables that I need to export ranging from 20GB to 60GB
>> > each. The parallel will not work for a single table but would be
>> > beneficial if I have multiple tables to dump.
>> >
>> > I'm thinking maybe using what Adrian has suggested with the -f option
>> > and then modify the file or maybe use a common public schema everywhere
>> > on the source and target databases. I would have to restrict who has
>> > access to the public schema.
>>
>> You can further break this down by using -s and -a switches to only work
>> with the table definitions and table data respectively. This can also be
>> done on the pg_dump end.
>>
>> >
>> > Thanks.
>> >
>> > Tiff
>> >
>>
>> >  >> --
>> >  >> Adrian Klaver
>> >  >> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>


Re: Copy entire schema A to a different schema B

2019-02-20 Thread Tiffany Thang
Hi Ron,
How would that work if I'm only interested in importing/refreshing a single
table in a target schema that contains several other tables?

Thanks.

Tiff

On Wed, Feb 20, 2019 at 5:36 PM Rob Sargent  wrote:

>
> On 2/20/19 3:22 PM, Tiffany Thang wrote:
>
> Hi Adrian,
> I managed to backup my table in parallel using -Fd but I'm back to my
> original issue where I could not restore the table to a different schema.
>
> For example,
> I would like to backup testuser1.mytable and restore it to
> testuser2.mytable.
>
> pg_dump -U testuser1 -Fd -f c:\temp\testuser1 -j 8 -t mytable -h myserver
> testdb
>
> where mytable is in testuser1 schema
>
> The dump completed fine but when I attempted to restore the table using
> pg_restore to another database, it tried to create the table in testuser1
> schema. The restore failed since testuser1 schema does not exist in the
> target database. When I created a testuser1 schema in the target database,
> the restore worked fine. Since the dump toc is in binary format, I could
> not make the change to reflect the new target schema, testuser2.
>
> So, how should I go about restoring tables from one schema to a different
> schema name?
>
> Thanks.
>
> Tiff
>
>
> Can you not simply rename the schema after importing it? Perhaps renaming
> an exiting version first, if necessary. Rename schema A to B; import schema
> A; rename A to C; rename B to A;
>
>
>


Re: Copy entire schema A to a different schema B

2019-02-20 Thread Tiffany Thang
Hi Adrian,
I managed to backup my table in parallel using -Fd but I'm back to my
original issue where I could not restore the table to a different schema.

For example,
I would like to backup testuser1.mytable and restore it to
testuser2.mytable.

pg_dump -U testuser1 -Fd -f c:\temp\testuser1 -j 8 -t mytable -h myserver
testdb

where mytable is in testuser1 schema

The dump completed fine but when I attempted to restore the table using
pg_restore to another database, it tried to create the table in testuser1
schema. The restore failed since testuser1 schema does not exist in the
target database. When I created a testuser1 schema in the target database,
the restore worked fine. Since the dump toc is in binary format, I could
not make the change to reflect the new target schema, testuser2.

So, how should I go about restoring tables from one schema to a different
schema name?

Thanks.

Tiff

On Mon, Feb 11, 2019 at 11:53 AM Adrian Klaver 
wrote:

> On 2/11/19 8:30 AM, Tiffany Thang wrote:
> > Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to
> > achieve was to dump the schema quickly and be able to restore a single
> > or subset of objects from the dump. As far as I understand, the only way
> > of achieving that is to use the custom format and the -j option. Is that
> > correct? Are there any other alternatives?
>
> If you want to use -j then you need to use the -Fd output:
>
> https://www.postgresql.org/docs/10/app-pgdump.html
>
> "-j njobs
> --jobs=njobs
>
>  Run the dump in parallel by dumping njobs tables simultaneously.
> This option reduces the time of the dump but it also increases the load
> on the database server. You can only use this option with the directory
> output format because this is the only output format where multiple
> processes can write their data at the same time."
>
> If you need to grab just a subset of the schema then there are options
> to do that depending on the object. From above link as examples:
>
> "-n schema
> --schema=schema
>
>  Dump only schemas matching schema; this selects both the schema
> itself, and all its contained objects. ..."
>
>
> "-t table
> --table=table
>
>  Dump only tables with names matching table.  .."
>
>
> >
> > Thanks.
> >
> > Tiff
> >
> > On Mon, Feb 11, 2019 at 11:10 AM Ron  > <mailto:ronljohnso...@gmail.com>> wrote:
> >
> > On 2/11/19 10:00 AM, Tiffany Thang wrote:
> >  > Hi,
> >  > To copy the source schema A to target schema B in the same
> > database in
> >  > PG10.3, I use psql to dump schema A and manually removes anything
> > specific
> >  > to the schema in the text dump file before importing into schema
> > B. How do
> >  > I achieve the same exporting from Schema A and importing into
> > schema B
> >  > using pg_dump with the -Fc option? Since the dump file generated
> is
> >  > binary, I could not make modifications to the file. Is the
> > procedure the
> >  > same in version 11?
> >
> > Why do you need to use "--format=custom" instead of "--format=plain"?
> >
> > For example:
> > $ pg_dump --format=plain --schema-only --schema=A
> >
> >
> > --
> > Angular momentum makes the world go 'round.
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Copy entire schema A to a different schema B

2019-02-12 Thread Tiffany Thang
Thanks Adrian!

Tiff

On Mon, Feb 11, 2019 at 11:53 AM Adrian Klaver 
wrote:

> On 2/11/19 8:30 AM, Tiffany Thang wrote:
> > Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to
> > achieve was to dump the schema quickly and be able to restore a single
> > or subset of objects from the dump. As far as I understand, the only way
> > of achieving that is to use the custom format and the -j option. Is that
> > correct? Are there any other alternatives?
>
> If you want to use -j then you need to use the -Fd output:
>
> https://www.postgresql.org/docs/10/app-pgdump.html
>
> "-j njobs
> --jobs=njobs
>
>  Run the dump in parallel by dumping njobs tables simultaneously.
> This option reduces the time of the dump but it also increases the load
> on the database server. You can only use this option with the directory
> output format because this is the only output format where multiple
> processes can write their data at the same time."
>
> If you need to grab just a subset of the schema then there are options
> to do that depending on the object. From above link as examples:
>
> "-n schema
> --schema=schema
>
>  Dump only schemas matching schema; this selects both the schema
> itself, and all its contained objects. ..."
>
>
> "-t table
> --table=table
>
>  Dump only tables with names matching table.  .."
>
>
> >
> > Thanks.
> >
> > Tiff
> >
> > On Mon, Feb 11, 2019 at 11:10 AM Ron  > <mailto:ronljohnso...@gmail.com>> wrote:
> >
> > On 2/11/19 10:00 AM, Tiffany Thang wrote:
> >  > Hi,
> >  > To copy the source schema A to target schema B in the same
> > database in
> >  > PG10.3, I use psql to dump schema A and manually removes anything
> > specific
> >  > to the schema in the text dump file before importing into schema
> > B. How do
> >  > I achieve the same exporting from Schema A and importing into
> > schema B
> >  > using pg_dump with the -Fc option? Since the dump file generated
> is
> >  > binary, I could not make modifications to the file. Is the
> > procedure the
> >  > same in version 11?
> >
> > Why do you need to use "--format=custom" instead of "--format=plain"?
> >
> > For example:
> > $ pg_dump --format=plain --schema-only --schema=A
> >
> >
> > --
> > Angular momentum makes the world go 'round.
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Copy entire schema A to a different schema B

2019-02-11 Thread Tiffany Thang
Hi,
To copy the source schema A to target schema B in the same database in
PG10.3, I use psql to dump schema A and manually removes anything specific
to the schema in the text dump file before importing into schema B. How do
I achieve the same exporting from Schema A and importing into schema B
using pg_dump with the -Fc option? Since the dump file generated is binary,
I could not make modifications to the file. Is the procedure the same in
version 11?

Thank.

Tiff


Re: Copy entire schema A to a different schema B

2019-02-11 Thread Tiffany Thang
Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to achieve
was to dump the schema quickly and be able to restore a single or subset of
objects from the dump. As far as I understand, the only way of achieving
that is to use the custom format and the -j option. Is that correct? Are
there any other alternatives?

Thanks.

Tiff

On Mon, Feb 11, 2019 at 11:10 AM Ron  wrote:

> On 2/11/19 10:00 AM, Tiffany Thang wrote:
> > Hi,
> > To copy the source schema A to target schema B in the same database in
> > PG10.3, I use psql to dump schema A and manually removes anything
> specific
> > to the schema in the text dump file before importing into schema B. How
> do
> > I achieve the same exporting from Schema A and importing into schema B
> > using pg_dump with the -Fc option? Since the dump file generated is
> > binary, I could not make modifications to the file. Is the procedure the
> > same in version 11?
>
> Why do you need to use "--format=custom" instead of "--format=plain"?
>
> For example:
> $ pg_dump --format=plain --schema-only --schema=A
>
>
> --
> Angular momentum makes the world go 'round.
>
>


multiple indexes on the same column

2019-04-12 Thread Tiffany Thang
Hi,
Can someone explain the use of creating multiple indexes on the same
column?

How would the optimizer determine which index to use? From my brief
testing, the optimizer picked the latest created index, testidx3. Can you
provide a scenario where creating multiple indexes on the same column would
be beneficial?

create table test (a int, b int);
create index testidx1 on test (a);
create index testidx2 on test (a);
create index testidx3 on test (a);


Thanks.

Tiff


Re: multiple indexes on the same column

2019-04-12 Thread Tiffany Thang
Got it! Thanks Andres and Tom!

Tiff

On Fri, Apr 12, 2019 at 1:07 PM Tom Lane  wrote:

> Andres Freund  writes:
> > On 2019-04-12 09:51:51 -0400, Tom Lane wrote:
> >> Tiffany Thang  writes:
> >>> Can someone explain the use of creating multiple indexes on the same
> >>> column?
>
> >> There is none, unless the indexes have different properties (e.g.
> >> different opclasses and/or index AMs).
>
> > Well, it can be beneficial to create a new index concurrently, and then
> > drop the old one concurrently.
>
> Right, but in that situation there's no intent to keep both indexes
> in place.  You're just putting up with extra overhead temporarily
> as a means to avoid taking an exclusive lock.
>
> regards, tom lane
>


Restrict user to create only one db with a specific name

2020-03-06 Thread Tiffany Thang
Hi,
Is there a way in PostgreSQL 12 to restrict user to creating a database
with a specific database name?

For example, userA can only create a database with a name called mydb. Any
other names would generate an error.

If that is not possible, will it be possible then to limit userA to
creating only one database? Granting the createdb privilege would allow the
user to create any  number of databases which I want to avoid.

Thanks.

Tiff


Re: Restrict user to create only one db with a specific name

2020-03-07 Thread Tiffany Thang
Thanks David and Paul. Because each user wants the ability to drop and
recreate their own databases, I want to impose a restriction on the
database names. I’ll just implement what Paul has suggested.

Thanks.

Tiff

On Sat, Mar 7, 2020 at 2:35 AM Paul Förster  wrote:

> Hi Tiff,
>
> from what you say, it sounds that each user should have his or her own
> database.
>
> Considering the other answers here already pointing out the difficulties,
> why don't you just create a database for each user with the same name as
> the username and grant him or her access to it.
>
> So, basically like this:
>
> postgres=# create role "userA" login;
> CREATE ROLE
> postgres=# create database "userA" owner "userA";
> CREATE DATABASE
> postgres=# create role "userB" login;
> CREATE ROLE
> postgres=# create database "userB" owner "userB";
> CREATE DATABASE
>
> When, say, "userB" goes away, his or her data will go the way all things
> do:
>
> drop database "userB";
> drop role "userB";
>
> Or did I misunderstand you?
>
> Cheers,
> Paul
>
>
> > On 07. Mar, 2020, at 00:28, Tiffany Thang 
> wrote:
> >
> > Hi,
> > Is there a way in PostgreSQL 12 to restrict user to creating a database
> with a specific database name?
> >
> > For example, userA can only create a database with a name called mydb.
> Any other names would generate an error.
> >
> > If that is not possible, will it be possible then to limit userA to
> creating only one database? Granting the createdb privilege would allow the
> user to create any  number of databases which I want to avoid.
> >
> > Thanks.
> >
> > Tiff
>
>


Re: Capturing just slow queries

2020-07-17 Thread Tiffany Thang
Thank you for the information. That's a comprehensive list of monitoring
solutions. I'll take a look.

Tiff

On Fri, Jul 17, 2020 at 4:14 AM Wim Bertels  wrote:

> You might have a look at:
> https://www.postgresql.org/docs/current/auto-explain.html
>
> Also there are several monitoring solutions:
> https://wiki.postgresql.org/wiki/Monitoring
>
>
> Tiffany Thang schreef op do 16-07-2020 om 13:41 [-0400]:
> > Hi,
> > log_min_duration_statement captures all statements including DMLs
> > that have exceeded the threshold. Is there a way in PG 12 to capture
> > just select statements excluding all DMLs and DDLs? In my
> > environment, it's acceptable for DMLs and DDLs to cross the threshold
> > and we are more interested in capturing poor performing select
> > statements.
> >
> > Thanks.
> >
> > Tiff
> --
> mvg,
> Wim Bertels
> --
> Lector
> UC Leuven-Limburg
> --
> There is no hunting like the hunting of man, and those who have hunted
> armed men long enough and liked it, never care for anything else
> thereafter.
> -- Ernest Hemingway
>
>
>
>


Capturing just slow queries

2020-07-16 Thread Tiffany Thang
Hi,
log_min_duration_statement captures all statements including DMLs that have
exceeded the threshold. Is there a way in PG 12 to capture just select
statements excluding all DMLs and DDLs? In my environment, it's
acceptable for DMLs and DDLs to cross the threshold and we are more
interested in capturing poor performing select statements.

Thanks.

Tiff


Dropping dependent tables

2021-06-03 Thread Tiffany Thang
 Hi,
I would like to write a SQL statement to drop all the tables owned by me
but a problem I’m struggling with is with referential integrity. The
statement I have now to generate the drop statements is

select 'drop table '||tablename||' cascade;' from pg_tables where
tableowner='';

The generated SQLs above might attempt to drop the parent tables first
before the child and to be able to drop all the tables, I had to run the
SQL script in multiple iterations. Not very clean.

Can someone advise how I could formulate the SQL to check for table
dependencies to generate a SQL script that drops the child tables first
before the parent? Or are there any better alternatives?

Thank you.

Tiff


Virus-free.
www.avast.com

<#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>


Re: Dropping dependent tables

2021-06-03 Thread Tiffany Thang
On Thu, Jun 3, 2021 at 10:18 AM Tom Lane  wrote:

> Tiffany Thang  writes:
> > I would like to write a SQL statement to drop all the tables owned by me
> > but a problem I’m struggling with is with referential integrity. The
> > statement I have now to generate the drop statements is
>
> > select 'drop table '||tablename||' cascade;' from pg_tables where
> > tableowner='';
>
> > The generated SQLs above might attempt to drop the parent tables first
> > before the child and to be able to drop all the tables, I had to run the
> > SQL script in multiple iterations. Not very clean.
>
> Uh ... it's not clear to me why that wouldn't work.  CASCADE should
> be able to take care of foreign keys:
>
> postgres=# create table t1 (f1 int primary key);
> CREATE TABLE
> postgres=# create table t2 (f2 int references t1);
> CREATE TABLE
> postgres=# drop table t1;
> ERROR:  cannot drop table t1 because other objects depend on it
> DETAIL:  constraint t2_f2_fkey on table t2 depends on table t1
> HINT:  Use DROP ... CASCADE to drop the dependent objects too.
> postgres=# drop table t1 cascade;
> NOTICE:  drop cascades to constraint t2_f2_fkey on table t2
> DROP TABLE
>
> Could you enlarge on what problem you saw, specifically?
>
> regards, tom lane
>

Thanks Tom for your response. I tried it again and I was not able to
reproduce the issue.

Obviously I did something incorrectly previously.

Thanks.

Tiff


Error creating postgres extensions

2021-05-04 Thread Tiffany Thang
Hi,

I'm having trouble finding in the documentation the steps to install the
postgres extensions such as pg_stat_statements and pg_trgm on
PG13.1/CentOS7. Can someone please assist?



postgres=# create extension pg_stat_statements;

ERROR:  could not open extension control file
"/usr/pgsql-13/share/extension/pg_stat_statements.control": No such file or
directory

postgres=# create extension pg_trgm;

ERROR:  could not open extension control file
"/usr/pgsql-13/share/extension/pg_trgm.control": No such file or directory



I installed PG13 via the YUM repository
https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-7-x86_64 and
then

I ran "yum install postgresql13 postgresql13-server"





Also, what are the steps to update these extensions in the future when
newer releases are made available. For example, updating

pg_stat_statements from 1.7 to 1.8?





Thanks.



Tiff


Re: Error creating postgres extensions

2021-05-04 Thread Tiffany Thang
On Tue, May 4, 2021 at 10:04 PM Ian Lawrence Barwick 
wrote:

> 2021年5月5日(水) 10:43 Tiffany Thang :
> >
> > Hi,
> >
> > I'm having trouble finding in the documentation the steps to install the
> postgres extensions such as pg_stat_statements and pg_trgm on
> PG13.1/CentOS7. Can someone please assist?
> >
> >
> > postgres=# create extension pg_stat_statements;
> >
> > ERROR:  could not open extension control file
> "/usr/pgsql-13/share/extension/pg_stat_statements.control": No such file or
> directory
> >
> > postgres=# create extension pg_trgm;
> >
> > ERROR:  could not open extension control file
> "/usr/pgsql-13/share/extension/pg_trgm.control": No such file or directory
> >
> >
> >
> > I installed PG13 via the YUM repository
> https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-7-x86_64 and
> then
> >
> > I ran "yum install postgresql13 postgresql13-server"
>
> You need the "postgresql13-contrib" package as well.
>
> > Also, what are the steps to update these extensions in the future when
> newer releases are made available. For example, updating
> >
> > pg_stat_statements from 1.7 to 1.8?
>
> Normally new extension versions (at least for the core contrib
> modules) are provided with
> each major version release, so that will be take care of when
> performing a major version upgrade.
>
> If you do ever need to update an extension, the process is:
> - install new binary
> - execute "ALTER EXTENSION UPDATE"
> - if the extension provides a shared library, PostgreSQL will need to
> be restarted to activate the new library
>
> Regards
>
> Ian Barwick
>
>
> --
> EnterpriseDB: https://www.enterprisedb.com


Thanks Ian for your help! I was able to create the extensions.


Managing major PostgreSQL upgrades

2021-11-15 Thread Tiffany Thang
Hi,
Every year we spent a lot of time planning and manually performing major
PostgreSQL upgrade on many of our on-prem and RDS instances. I’m wondering
if there is a better way of managing these upgrades through automation. Can
anyone share their experiences?

Thanks.

Tiff


Re: Managing major PostgreSQL upgrades

2021-11-16 Thread Tiffany Thang
On Mon, Nov 15, 2021 at 8:48 PM Saul Perdomo  wrote:

> Hey Tiff,
>
> We're in a similar boat. We currently lean on (mostly custom) ansible
> scripting to automate all the repeatable tasks we can, but automation of
> major PG version upgrades is something we are yet to tackle -- although we
> plan to start this effort in the short term.
>
> Would you mind sharing a bit more about your own current upgrade process?
> What's your standard, a dump+restore, a pg_upgrade, or replication-based?
> Also if you are able to share any lessons learned (e.g. common pitfalls
> you've run into) will all be useful information to identify ahead of time
> when drafting an automation strategy.
>

Our upgrades (pg_upgrade) have been pretty smooth so far. We have not
encountered any issues other than the occasional missing OS packages. The
issue is I do not know what problems to expect during a major upgrade which
makes automating the process difficult.


>
> On Mon., Nov. 15, 2021, 6:45 a.m. Tiffany Thang, 
> wrote:
>
>> Hi,
>> Every year we spent a lot of time planning and manually performing major
>> PostgreSQL upgrade on many of our on-prem and RDS instances. I’m wondering
>> if there is a better way of managing these upgrades through automation. Can
>> anyone share their experiences?
>>
>> Thanks.
>>
>> Tiff
>>
>>
>>
>>


Re: Profiling a function call

2023-05-22 Thread Tiffany Thang
Thanks, Jan and Pavel. I will check out the tracer.

Tiff

On Sun, May 21, 2023 at 7:43 AM Pavel Stehule 
wrote:

>
>
> ne 21. 5. 2023 v 13:30 odesílatel Jan Wieck  napsal:
>
>> On 5/20/23 00:36, Tiffany Thang wrote:
>> > Hi,
>> > I have a function that has been executing for a long time and not
>> > returning any results. Wait event=NULL so it seems like it is still
>> > executing and not waiting on any specific resources. Is there a way to
>> > profile the function call to get an idea of what it is currently
>> > executing within the function? All I could find in pg_stat_activity is
>> > the function is running. I want to know what query/transaction within
>> > the function is running. Is there a way to obtain that information?
>>
>> This may give you some idea. I haven't touched or used it in many years,
>> but it might get you started:
>>
>> https://github.com/wieck/plbacktrace
>
>
> plpgsql_check has integrated tracer
>
> https://github.com/okbob/plpgsql_check#tracer
>
> Regards
>
> Pavel
>
>
>>
>>
>>
>> Best Regards, Jan
>>
>> >
>> > PostgreSQL 15.2 on Linux.
>> >
>> > Thanks.
>> >
>> > Tiff
>>
>>
>>
>>


Profiling a function call

2023-05-19 Thread Tiffany Thang
Hi,
I have a function that has been executing for a long time and not returning
any results. Wait event=NULL so it seems like it is still executing and not
waiting on any specific resources. Is there a way to profile the function
call to get an idea of what it is currently executing within the function?
All I could find in pg_stat_activity is the function is running. I want to
know what query/transaction within the function is running. Is there a way
to obtain that information?

PostgreSQL 15.2 on Linux.

Thanks.

Tiff