Re: [GENERAL] Request to share information regarding deadlock in postgresql-9.3.6

2016-11-16 Thread Yogesh Sharma
Dear John,

Thanks for sharing solution approaches.

>do each reindex as a separate transaction so only one table gets locked at a 
>time.
The reindexes are already run as separate statements, like REINDEX TABLE table1, 
REINDEX TABLE table2, etc.
But this problem still occurred.

One more question regarding the statement below.
alter table table1 add constraint fk_key foreign key (id, roll_number) 
references table2 (id, roll_number) on delete restrict on update restrict;
It fails with the following error:
ERROR:  there is no unique constraint matching given keys for referenced table 
"table2"

Table1 has the following structure:

create table table1
(
   id            char(6) not null,
   roll_number   varchar(20) not null,
   ...
   primary key (id, roll_number)
);

Table2 has the following structure:

create table table2
(
   id            char(6) not null,
   roll_number   varchar(20) not null,
   account_id    varchar(20) not null default '-',
   ...
   primary key (id, roll_number, account_id)
);

How to resolve this issue?
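For context: a foreign key can only reference columns covered by a unique or 
primary-key constraint, and table2's primary key is (id, roll_number, account_id), 
so (id, roll_number) alone is not guaranteed unique. A hedged sketch of the usual 
fix, using the names from the messages above (the constraint name is made up):

```sql
-- Option A: add a unique constraint on exactly the referenced columns.
ALTER TABLE table2
    ADD CONSTRAINT table2_id_roll_number_key UNIQUE (id, roll_number);

-- The foreign key from the original post can then be created as written:
ALTER TABLE table1
    ADD CONSTRAINT fk_key FOREIGN KEY (id, roll_number)
    REFERENCES table2 (id, roll_number)
    ON DELETE RESTRICT ON UPDATE RESTRICT;
```

Alternatively, the foreign key could reference table2's full primary key, but 
that would require table1 to carry an account_id column as well, which the 
posted schema does not have.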

Regards,
Yogesh

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Thursday, November 17, 2016 12:04 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Request to share information regarding deadlock in 
postgresql-9.3.6

On 11/16/2016 6:22 PM, Yogesh Sharma wrote:
> Process 5764 is waiting for the relation (table) with OID 16459 
> (table2_primary_key); that table is locked by process 4970, and process 
> 4970 is waiting for a lock on another table, OID 16502 (table1), which the 
> first process has a lock on.
> Process 5764: INSERT INTO table2 (id, roll_number, name) VALUES ('1', '4', 'abc')
> Process 4970: REINDEX TABLE table1, table2 etc.
>
> How to resolve this problem?

don't do reindex when the tables are in use.

or

why does process 5764 have a lock on table 1 while it's inserting into table 2?

or

do each reindex as a separate transaction so only one table gets locked at a 
time.
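The advice above can be sketched as follows; each REINDEX is issued as its own 
statement outside any explicit transaction block, so the locks on one table are 
released before the next reindex starts (table names taken from the thread):

```sql
-- Avoid wrapping the reindexes in one transaction, which accumulates
-- locks on every table until COMMIT:
--   BEGIN; REINDEX TABLE table1; REINDEX TABLE table2; COMMIT;

-- Instead, run each as its own implicit transaction:
REINDEX TABLE table1;
REINDEX TABLE table2;
```

With one lock held at a time, a concurrent INSERT can at worst wait on the 
table currently being reindexed, but the circular wait that produces a 
deadlock cannot form.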

--
john r pierce, recycling bits in santa cruz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Check integrity between servers

2016-11-16 Thread Venkata B Nagothi
On Thu, Nov 17, 2016 at 10:19 AM, Patrick B 
wrote:

> Would be possible to check the integrity between two database servers?
>
> Both servers are slaves (streaming replication + wal_files) but I believe
> one of them, when recovered from wal_files in a fast outage we got, got
> recovered not 100%.
>

Did you mean to say the recovery was not successful or not complete? How did you
know that the recovery was not 100%? Any errors, messages, etc.?


> How could I check the data between both DB?
>

That is something the application needs to do. From the database perspective,
you can check whether the databases are fully in sync.

Regards,

Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] pg_class (system) table increasing size.

2016-11-16 Thread dhaval jaiswal

>> Because you are creating (specific) objects.

I have gone through the link; how would I figure out which specific objects are 
causing this? Can you please elaborate?


We do not have much temporary-table usage.


Since pg_class has grown large (5 GB), does it also require maintenance?


It seems it is affecting performance.
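A hedged way to see which kinds of objects dominate pg_class is to group the 
catalog by relkind; this is a generic catalog query, not something prescribed 
in the thread:

```sql
-- Count catalog entries per object type
-- (r = table, i = index, S = sequence, v = view, t = TOAST table).
SELECT relkind, count(*) AS objects
FROM pg_class
GROUP BY relkind
ORDER BY objects DESC;

-- pg_class itself can be vacuumed and analyzed like any other table,
-- which is the usual maintenance for catalog bloat short of VACUUM FULL:
VACUUM (VERBOSE, ANALYZE) pg_class;
```

If one relkind has an unexpectedly large count, that points at the kind of 
object (often temporary tables or their indexes) being created and dropped 
frequently.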


From: David G. Johnston 
Sent: Thursday, November 17, 2016 8:13 AM
To: dhaval jaiswal
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_class (system) table increasing size.

On Wed, Nov 16, 2016 at 7:30 PM, dhaval jaiswal <dhava...@hotmail.com> wrote:

PostgreSQL 9.4.0

Are you generalizing here, or are you really running a 2+-year-old patch version?

Why is the pg_class table getting bigger?

Because you are creating (specific) objects.

See: https://www.postgresql.org/docs/9.6/static/catalog-pg-class.html

How do I stop it from growing?

Stop creating (those specific) objects.

Does it affect performance?

It can - depends greatly on scale.

Note, frequent usage of temporary tables is a common cause for this kind of 
behavior.

David J.



Re: [GENERAL] Request to share information regarding deadlock in postgresql-9.3.6

2016-11-16 Thread John R Pierce

On 11/16/2016 6:22 PM, Yogesh Sharma wrote:

Process 5764 is waiting for the relation (table) with OID 16459 
(table2_primary_key); that table is locked by process 4970, and process 4970 
is waiting for a lock on another table, OID 16502 (table1), which the first 
process has a lock on.
Process 5764: INSERT INTO table2 (id, roll_number, name) VALUES ('1', '4', 'abc')
Process 4970: REINDEX TABLE table1, table2 etc.

How to resolve this problem?


don't do reindex when the tables are in use.

or

why does process 5764 have a lock on table 1 while it's inserting into 
table 2?


or

do each reindex as a separate transaction so only one table gets locked 
at a time.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] pg_class (system) table increasing size.

2016-11-16 Thread David G. Johnston
On Wed, Nov 16, 2016 at 7:30 PM, dhaval jaiswal 
wrote:

> PostgreSQL 9.4.0
>
Are you generalizing here, or are you really running a 2+-year-old patch version?

> Why is the pg_class table getting bigger?
>
Because you are creating (specific) objects.

See: https://www.postgresql.org/docs/9.6/static/catalog-pg-class.html

> How do I stop it from growing?
>
Stop creating (those specific) objects.

> Does it affect performance?
>
It can; it depends greatly on scale.

Note, frequent usage of temporary tables is a common cause for this kind of
behavior.

David J.


[GENERAL] pg_class (system) table increasing size.

2016-11-16 Thread dhaval jaiswal
PostgreSQL 9.4.0


Why is the pg_class table getting bigger?

How do I stop it from growing?

Does it affect performance?



[GENERAL] Request to share information regarding deadlock in postgresql-9.3.6

2016-11-16 Thread Yogesh Sharma
Dear All,

Thanks for your support.

Could you please share your opinion for  deadlock resolution.
Process 5764 waits for AccessShareLock on relation 16459 of database 16385; 
blocked by process 4970.
 Process 4970 waits for ShareLock on relation 16502 of database 16385; blocked 
by process 5764.

It means we have two processes that are each waiting for locks the other holds.

Process 5764 is waiting for the relation (table) with OID 16459 
(table2_primary_key); that table is locked by process 4970, and process 4970 
is waiting for a lock on another table, OID 16502 (table1), which the first 
process has a lock on.
Process 5764: INSERT INTO table2 (id, roll_number, name) VALUES ('1', '4', 'abc')
Process 4970: REINDEX TABLE table1, table2 etc.

How to resolve this problem?


Regards,
Yogesh





Re: [GENERAL] Streaming replication failover/failback

2016-11-16 Thread Adrian Klaver

On 11/16/2016 04:51 PM, Israel Brewster wrote:

I've been playing around with streaming replication, and discovered that
the following series of steps *appears* to work without complaint:

- Start with master on server A, slave on server B, replicating via
streaming replication with replication slots.
- Shut down master on A
- Promote slave on B to master
- Create recovery.conf on A pointing to B
- Start (as slave) on A, streaming from B

After those steps, A comes up as a streaming replica of B, and works as
expected. In my testing I can go back and forth between the two servers
all day using the above steps.

My understanding from my initial research, however, is that this
shouldn't be possible - I should need to perform a new basebackup from B
to A after promoting B to master before I can restart A as a slave. Is
the observed behavior then just a "lucky fluke" that I shouldn't rely


You don't say how active the database is, but I'm going to say it is not 
active enough for the WAL files on B to go out of scope for A in the 
time it takes you to do the switchover.



on? Or is it expected behavior and my understanding about the need for a
new basebackup is simply off? Does the new pg_rewind feature of 9.5
change things? If so, how?

Thanks for your time!
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Check integrity between servers

2016-11-16 Thread Patrick B
2016-11-17 12:19 GMT+13:00 Patrick B :

> Would be possible to check the integrity between two database servers?
>
> Both servers are slaves (streaming replication + wal_files) but I believe
> one of them, when recovered from wal_files in a fast outage we got, got
> recovered not 100%.
>
> How could I check the data between both DB?
>
> I'm using Postgres 9.2
> Cheers
>



As far as I can see, data checksums are only available from Postgres 9.3 onward?
https://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.3#Data_Checksums

So can't use that! Any other ideas?
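Since data checksums are out of reach on 9.2, one hedged low-tech check is to 
compare how far each slave has replayed and to spot-check tables with cheap 
aggregates on both servers. This is a generic approach, not something 
prescribed in the thread; `my_table` is a placeholder name:

```sql
-- On each slave (9.2 function names): compare replay positions
-- and the timestamp of the last replayed transaction.
SELECT pg_last_xlog_replay_location(), pg_last_xact_replay_timestamp();

-- On both servers: run the same cheap aggregate and compare results.
SELECT count(*), max(id) FROM my_table;
```

Matching replay locations mean both slaves have applied the same WAL; 
differing aggregates on a table that should be identical would point at the 
suspect recovery.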


[GENERAL] Streaming replication failover/failback

2016-11-16 Thread Israel Brewster
I've been playing around with streaming replication, and discovered that the following series of steps *appears* to work without complaint:

- Start with master on server A, slave on server B, replicating via streaming replication with replication slots.
- Shut down master on A
- Promote slave on B to master
- Create recovery.conf on A pointing to B
- Start (as slave) on A, streaming from B

After those steps, A comes up as a streaming replica of B, and works as expected. In my testing I can go back and forth between the two servers all day using the above steps.

My understanding from my initial research, however, is that this shouldn't be possible - I should need to perform a new basebackup from B to A after promoting B to master before I can restart A as a slave. Is the observed behavior then just a "lucky fluke" that I shouldn't rely on? Or is it expected behavior and my understanding about the need for a new basebackup is simply off? Does the new pg_rewind feature of 9.5 change things? If so, how?

Thanks for your time!
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---




[GENERAL] Check integrity between servers

2016-11-16 Thread Patrick B
Would be possible to check the integrity between two database servers?

Both servers are slaves (streaming replication + wal_files) but I believe
one of them, when recovered from wal_files in a fast outage we got, got
recovered not 100%.

How could I check the data between both DB?

I'm using Postgres 9.2
Cheers


Re: [GENERAL] Wal files being delayed - Pgsql 9.2

2016-11-16 Thread Patrick B
2016-11-14 15:33 GMT+13:00 Venkata B Nagothi :

>
> On Mon, Nov 14, 2016 at 1:22 PM, Patrick B 
> wrote:
>
>> Hi guys,
>>
>> My current scenario is:
>>
>> master01 - Postgres 9.2 master DB
>> slave01 - Postgres 9.2 streaming replication + wal_files slave server for
>> read-only queries
>> slave02 - Postgres 9.2 streaming replication + wal_files slave server @
>> AWS
>>
>> master01 sends wal_files to both slaves via ssh.
>>
>>
>> *On the master:*
>>
>> select * from pg_current_xlog_location();
>>
>>  pg_current_xlog_location
>>
>> --
>>
>>  1A7C/14AEB2C0
>>
>> (1 row)
>>
>>
>> *On the slaves:*
>>
>> ls -ltr /var/lib/pgsql/archive/
>>
>> -rw--- 1 postgres postgres 16777216 Nov 14 01:21
>> 00021A7A00F9
>>
>>
>> As you can see, the last wal_files on the slaves isn't the
>> 00021A7C* ones. I think the SSH delivery is being delayed. Not sure
>> why tho.
>> How can I see how many files are behind?
>>
>
>
> You can identify the file name by using the function pg_xlogfile_name().
>
> "select pg_xlogfile_name('1A7C/14AEB2C0');"
>
>
>
>
Thanks a lot!
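Putting the suggestion above together: on the master, pg_xlogfile_name() maps 
the current write position to a WAL file name, which can be compared with the 
newest file in each slave's archive directory (9.2 function names; the archive 
path is the one from the thread):

```sql
-- On the master: which WAL segment file is currently being written?
SELECT pg_xlogfile_name(pg_current_xlog_location());

-- Compare the result against the newest file name in
-- /var/lib/pgsql/archive/ on each slave. WAL file names are 24 hex
-- digits (timeline + log + segment), so the numeric gap between the
-- two names is the number of 16 MB segments the slave is behind.
```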


Re: [GENERAL] Change column type from int to bigint - quickest way

2016-11-16 Thread Andreas Brandl
> On Wed, Nov 16, 2016 at 7:49 AM, Merlin Moncure < mmonc...@gmail.com > wrote:

>> On Fri, Nov 11, 2016 at 9:30 AM, Andreas Brandl < m...@andreas-brandl.de >
>> wrote:

>> You just posted the same question a few days ago -- were the answers
>> there unsatisfactory?

> This seems to be a mail system provoked duplicate since Gmail is telling me
> this was sent 11/11 but I just received it yesterday. See my attribution
> quotes
> above.

My first post got bounced but re-appeared today - sorry about this.

The answers were very helpful indeed - thanks Tom and Merlin!

We ended up including the datatype change with the upgrade to 9.6. We're going 
to have a downtime for this anyways and with a dump-restore-style upgrade this 
is fairly easy to change.

Thanks again,
Andreas




Re: [GENERAL] pg_restore --clean failing due to dependancies

2016-11-16 Thread Tom Lane
Arnaud Lesauvage  writes:
> [ dump from problematic database ]

OK, thanks for the test case.  The problem here is that pg_dump is setting
up a circular dependency that it doesn't know how to break correctly.
You've got a couple of views that are implicitly dependent on the primary
keys of their underlying tables, because they use a GROUP BY the primary
key without also grouping by other columns they use post-grouping.  That
means that pg_dump has to dump the view definition after the creation of
the primary key, but it also needs to put the view out sooner than that
for other reasons.  It manages to deal with that okay in the default mode,
but when you have --clean in there, it ends up generating an illegal DROP
RULE command.

This is something we ought to fix, but it's not exactly trivial to do.
In the meantime I'd suggest changing the view definitions to not assume
that the underlying tables have primary keys.  It looks like in
view_temp_export_geo_recherche_extra_sites_projets you need to add
c.official_language_id to the GROUP BY, and similarly in
view_temp_export_geo_recherche_offtrad_sites.
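A generic sketch of the pattern Tom describes, with made-up names (t, u, v, 
extra_col) rather than the actual views from the dump:

```sql
-- A view that groups only by t's primary key but also selects
-- t.extra_col is legal, yet it implicitly depends on the
-- primary-key constraint of t:
CREATE VIEW v AS
SELECT t.id, t.extra_col, count(*) AS n
FROM t JOIN u ON u.t_id = t.id
GROUP BY t.id;               -- allowed only because id is t's primary key

-- Listing the column explicitly removes that hidden dependency,
-- so pg_dump no longer has a circular ordering problem:
CREATE OR REPLACE VIEW v AS
SELECT t.id, t.extra_col, count(*) AS n
FROM t JOIN u ON u.t_id = t.id
GROUP BY t.id, t.extra_col;
```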

regards, tom lane




Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-16 Thread Rich Shepard

On Tue, 15 Nov 2016, Adrian Klaver wrote:


To add to my previous post. If you do decide to follow the pg_upgrade
procedure in the README do a pg_dump of the 9.5 data just before you do
pg_ugrade and store it away in a safe place. The first time through a new
process does not always end well:)


Adrian, et al.:

  Did a pg_dumpall and stored it with my database-backups. Now that I know
where to find the bin/ directories pg_upgrade ran flawlessly. Quick and
easy.

Thanks all,

Rich




Re: [GENERAL] Change column type from int to bigint - quickest way

2016-11-16 Thread David G. Johnston
On Wed, Nov 16, 2016 at 7:49 AM, Merlin Moncure  wrote:

> On Fri, Nov 11, 2016 at 9:30 AM, Andreas Brandl 
> wrote:
>
> You just posted the same question a few days ago -- were the answers
> there unsatisfactory?
>

​This seems to be a mail system provoked duplicate since Gmail is telling
me this was sent 11/11 but I just received it yesterday.  See my
attribution quotes​
 above.

​David J.
​
​


Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-16 Thread Rich Shepard

On Wed, 16 Nov 2016, Alban Hertroys wrote:


pg_upgrade migrates your databases from your old (9.5) cluster to the
new (9.6) one. Initdb doesn't do that.


Alban,

  That's what I assumed to be the case.


If your 9.6 database does indeed contain your databases, then something
must have done the pg_upgrade for you. Perhaps the slackware package
script does something like that,


  No, it only builds the package. I manually install it and run initdb.


What I think what happened is that you are using the new pg 9.6 psql
binary to list the databases in your old 9.5 cluster.


  Not only list them, but access them from the command line. This is what
I'm trying to understand.

  Regardless, I'll stop the running postgres, run pg_upgrade, then start it
from the /9.6/data/ directory.

Thanks,

Rich





Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-16 Thread Alban Hertroys
On 16 November 2016 at 16:33, Rich Shepard  wrote:
> On Tue, 15 Nov 2016, Rich Shepard wrote:

>   If 9.6.1 is currently running after running initdb, and I can access my
> databases, what does pg_upgrade do that's necessary?

pg_upgrade migrates your databases from your old (9.5) cluster to the
new (9.6) one. Initdb doesn't do that.

If your 9.6 database does indeed contain your databases, then
something must have done the pg_upgrade for you. Perhaps the slackware
package script does something like that, but that would cause problems
for people who do _not_ want to migrate their databases, so I doubt
that's what happened.

What I think what happened is that you are using the new pg 9.6 psql
binary to list the databases in your old 9.5 cluster.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




Re: [GENERAL] Upgrade from 9.5.4 to 9.6.1

2016-11-16 Thread Rich Shepard

On Tue, 15 Nov 2016, Rich Shepard wrote:


$ /usr/bin/pg_ctl --version
pg_ctl (PostgreSQL) 9.6.1

ls -al /usr/bin/pg_ctl
lrwxrwxrwx 1 root root 32 Nov 15 14:16 /usr/bin/pg_ctl ->
../lib/postgresql/9.6/bin/pg_ctl*


  To increase my understanding I want to resolve an apparent discrepancy in
versions.

  After installing and initiating 9.6.1 that's reported as the current
running version, yet I've not yet migrated the cluster from 9.5.4 to 9.6.1.

  Last week, after a kernel upgrade, I rebooted the system and re-started
postgres pointing to the 9.5/data/ directory. When I enter the command
'psql -l' I see all the databases and assumed they were running on 9.5.4. But
that's not the case, is it?

  If 9.6.1 is currently running after running initdb, and I can access my
databases, what does pg_upgrade do that's necessary?

A curious mind wants to learn,

Rich




Re: [GENERAL] Change column type from int to bigint - quickest way

2016-11-16 Thread Merlin Moncure
On Fri, Nov 11, 2016 at 9:30 AM, Andreas Brandl  wrote:
> Hi,
>
> we have a pretty big table with an integer-type primary key. I'm looking for 
> the quickest way to change the column type to bigint to avoid hitting the 
> integer limit. We're trying to avoid prolonged lock situations and full table 
> rewrites.
>
> I know I can hack this with an UPDATE on pg_attribute:
>
> -- change id type to bigint
> update pg_attribute set atttypid=20 where attrelid=264782 and attname = 'id';
>
> After that I'd need to reflect the change on dependent objects like views as 
> well.
>
> Is this safe to do? Are there any unwanted consequences to this?
>
> This is still on 9.1 unfortunately - upgrade is going to follow soon after 
> this.

You just posted the same question a few days ago -- were the answers
there unsatisfactory?

merlin




[GENERAL] Full text search tsv column aproach vs concat confusion

2016-11-16 Thread cen

Hi

I am seeking some clarification regarding full-text search across multiple 
tables and what the best approach is. The documentation describes two 
approaches to building a document: on-the-fly concatenation of columns, and a 
dedicated tsv column. Let's say I want to perform a search for 
"txt1 & txt2 & txt3" on columns table1.col1, table1.col2 and table2.col1. I 
see the following solutions:

1. Concatenate all three into a document and perform a FTS:

SELECT * FROM (
    SELECT to_tsvector(table1.col1) || to_tsvector(table1.col2) ||
           to_tsvector(table2.col1) AS document
    FROM table1
    LEFT JOIN table2 ON table1.table2_id = table2.id
) subquery
WHERE subquery.document @@ to_tsquery(unaccent(?));

2. Create a tsv column in each table, concatenate the tsv columns and perform 
FTS on that:

SELECT * FROM table1
LEFT JOIN table2 ON table1.table2_id = table2.id
WHERE table1.tsv || table2.tsv @@ to_tsquery(unaccent(?));

3. Have a tsv column only in table1 and insert table2.col1 into the tsv via 
triggers. Works, but seems very hacky.

It seems to me that option #2 is fast and easy to implement, but I am not sure 
what the concatenation of tsvectors really means from an index-usage and 
performance standpoint. Option #1 is the most flexible, and I'd use it all the 
time if it were not THAT much slower than the tsv-column approach. The 
documentation on tsv columns states: "Another advantage is that searches will 
be faster, since it will not be necessary to redo the to_tsvector calls to 
verify index matches."

The question is, how much faster are tsv columns really? Are there any 
benchmarks about this? If the performance difference is negligible, I'd argue 
that using tsv columns is a waste of time and space in most general cases. But 
since there is no information on how much faster they are, it's hard to decide.
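For context on the index-usage question, a hedged sketch of the two indexing 
styles (table and column names as in the examples above; a text search 
configuration must be named explicitly so the expression is immutable and thus 
indexable):

```sql
-- Option #1 can still use an index if the exact same expression is
-- indexed per table (one index cannot span two tables):
CREATE INDEX table1_fts_idx ON table1
    USING gin (to_tsvector('simple', col1 || ' ' || col2));

-- Option #2 indexes the precomputed column, so verifying a match
-- needs no to_tsvector recomputation at query time:
CREATE INDEX table1_tsv_idx ON table1 USING gin (tsv);
```

Note that a concatenation of tsv columns from two joined tables, as in option 
#2's WHERE clause, cannot be served by a single index either; each side would 
rely on its own table's index at best.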



Best regards,
Klemen
