Dear John,
Thanks for sharing the solution approaches.
>do each reindex as a separate transaction so only one table gets locked at a
>time.
Each REINDEX already runs as a separate transaction, e.g. REINDEX TABLE
table1; REINDEX TABLE table2; etc.
But the problem still occurs.
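For reference, a sketch of what "each REINDEX as its own transaction" looks like (table names taken from the thread; this assumes the statements are sent outside any explicit BEGIN block, so each one autocommits):

```sql
-- Outside an explicit BEGIN/COMMIT, each statement autocommits, so
-- only one table's indexes are exclusively locked at a time.
REINDEX TABLE table1;
REINDEX TABLE table2;

-- By contrast, wrapping them in one transaction holds every lock
-- until COMMIT, widening the window for deadlocks:
-- BEGIN; REINDEX TABLE table1; REINDEX TABLE table2; COMMIT;
```

If the statements are already sent this way and the deadlock still occurs, the conflict is with a concurrent session taking the same locks in the opposite order, not with the transaction boundaries.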
One more question regarding below.
On Thu, Nov 17, 2016 at 10:19 AM, Patrick B
wrote:
> Would it be possible to check the integrity between two database servers?
>
> Both servers are slaves (streaming replication + wal_files), but I believe
> one of them, when it was recovered from wal_files during a brief outage we
> had, was not recovered 100%.
>> Because you are creating (specific) objects.
I have gone through the link; how would I figure out which specific object
is causing this? Can you please elaborate more here.
We do not have much temporary table usage.
Since the size is quite large (5 GB) to maintain, does it require ma
On 11/16/2016 6:22 PM, Yogesh Sharma wrote:
process 5764 is waiting for relation (table) with OID
16459 (table2_primary_key); that relation is locked by process 4970, and
process 4970 is waiting for a lock on another table, OID 16502 (table1),
which the first process has a lock on.
Process 5764: IN
On Wed, Nov 16, 2016 at 7:30 PM, dhaval jaiswal
wrote:
> PostgreSQL 9.4.0
>
Are you generalizing here, or are you really running a 2+ year old patch version?
> Why is the pg_class table getting bigger in size?
>
Because you are creating (specific) objects.
See: https://www.postgresql.org/docs/9.6/stat
PostgreSQL 9.4.0
Why is the pg_class table getting bigger in size?
How can I stop it from growing?
Does it affect performance?
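For anyone investigating this, a couple of catalog queries (standard PostgreSQL functions; a sketch to locate the growth, not a diagnosis) show how big pg_class has become and what kinds of objects its rows describe:

```sql
-- How much space pg_class itself (plus its indexes and TOAST) uses:
SELECT pg_size_pretty(pg_total_relation_size('pg_class'));

-- What kinds of objects make up its rows; a steadily growing count of
-- one relkind ('r' = tables, 'i' = indexes, 'S' = sequences, ...)
-- points at what is being created:
SELECT relkind, count(*)
FROM pg_class
GROUP BY relkind
ORDER BY count(*) DESC;
```

Since every table, index, sequence, view, and temporary table gets a pg_class row, the second query usually makes it obvious which object type is accumulating.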
Dear All,
Thanks for your support.
Could you please share your opinion on how to resolve the following deadlock.
Process 5764 waits for AccessShareLock on relation 16459 of database 16385;
blocked by process 4970.
Process 4970 waits for ShareLock on relation 16502 of database 16385; blocked
by process 5764.
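One common resolution, sketched here with the OIDs from the report mapped back to the table names used earlier in the thread (an assumption based on that analysis): make every job take its locks in the same global order up front, so neither session can hold one relation while waiting for the other.

```sql
-- LOCK TABLE acquires locks in the order the tables are listed, so if
-- every job lists them in the same order the circular wait cannot form.
-- SHARE MODE matches the ShareLock in the report; pick the weakest
-- mode that still conflicts with the concurrent work.
BEGIN;
LOCK TABLE table1, table2 IN SHARE MODE;
-- ... do the work that previously deadlocked ...
COMMIT;
```

The trade-off is that the locks are held for the whole transaction, so keep such transactions short.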
On 11/16/2016 04:51 PM, Israel Brewster wrote:
I've been playing around with streaming replication, and discovered that
the following series of steps *appears* to work without complaint:
- Start with master on server A, slave on server B, replicating via
streaming replication with replication sl
2016-11-17 12:19 GMT+13:00 Patrick B :
> Would it be possible to check the integrity between two database servers?
>
> Both servers are slaves (streaming replication + wal_files), but I believe
> one of them, when it was recovered from wal_files during a brief outage we
> had, was not recovered 100%.
>
> How coul
I've been playing around with streaming replication, and discovered that the following series of steps *appears* to work without complaint:
- Start with master on server A, slave on server B, replicating via streaming replication with replication slots.
- Shut down master on A
- Promote slave on B to
Would it be possible to check the integrity between two database servers?
Both servers are slaves (streaming replication + wal_files), but I believe
one of them, when it was recovered from wal_files during a brief outage we
had, was not recovered 100%.
How could I check the data between both DB?
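A lightweight way to compare table contents across the two slaves (a sketch only; `some_table` and its `id` primary key are assumed names, and it assumes identical column order on both servers) is to compute a per-table digest on each side and compare:

```sql
-- Run the same query on both servers; matching digests mean the
-- visible contents of the table are identical. The ORDER BY inside
-- the aggregate makes the result deterministic.
SELECT md5(string_agg(t::text, '|' ORDER BY t.id))
FROM some_table AS t;
```

For very large tables this should be chunked by key range rather than aggregated in one pass, but the digest-and-compare idea is the same.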
I'm using Postg
2016-11-14 15:33 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Nov 14, 2016 at 1:22 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> My current scenario is:
>>
>> master01 - Postgres 9.2 master DB
>> slave01 - Postgres 9.2 streaming replication + wal_files slave server for
>> read-only queries
>> slave02 -
> On Wed, Nov 16, 2016 at 7:49 AM, Merlin Moncure < mmonc...@gmail.com > wrote:
>> On Fri, Nov 11, 2016 at 9:30 AM, Andreas Brandl < m...@andreas-brandl.de >
>> wrote:
>> You just posted the same question a few days ago -- were the answers
>> there unsatisfactory?
> This seems to be a mail syste
Arnaud Lesauvage writes:
> [ dump from problematic database ]
OK, thanks for the test case. The problem here is that pg_dump is setting
up a circular dependency that it doesn't know how to break correctly.
You've got a couple of views that are implicitly dependent on the primary
keys of their un
On Tue, 15 Nov 2016, Adrian Klaver wrote:
To add to my previous post. If you do decide to follow the pg_upgrade
procedure in the README, do a pg_dump of the 9.5 data just before you run
pg_upgrade and store it away in a safe place. The first time through a new
process does not always end well :)
A
On Wed, Nov 16, 2016 at 7:49 AM, Merlin Moncure wrote:
> On Fri, Nov 11, 2016 at 9:30 AM, Andreas Brandl
> wrote:
>
> You just posted the same question a few days ago -- were the answers
> there unsatisfactory?
>
This seems to be a mail-system-provoked duplicate, since Gmail is telling
me this
On Wed, 16 Nov 2016, Alban Hertroys wrote:
pg_upgrade migrates your databases from your old (9.5) cluster to the
new (9.6) one. Initdb doesn't do that.
Alban,
That's what I assumed to be the case.
If your 9.6 database does indeed contain your databases, then something
must have done the p
On 16 November 2016 at 16:33, Rich Shepard wrote:
> On Tue, 15 Nov 2016, Rich Shepard wrote:
> If 9.6.1 is currently running after running initdb, and I can access my
> databases, what does pg_upgrade do that's necessary?
pg_upgrade migrates your databases from your old (9.5) cluster to the
ne
On Tue, 15 Nov 2016, Rich Shepard wrote:
$ /usr/bin/pg_ctl --version
pg_ctl (PostgreSQL) 9.6.1
ls -al /usr/bin/pg_ctl
lrwxrwxrwx 1 root root 32 Nov 15 14:16 /usr/bin/pg_ctl ->
../lib/postgresql/9.6/bin/pg_ctl*
To increase my understanding I want to resolve an apparent discrepancy in
version
On Fri, Nov 11, 2016 at 9:30 AM, Andreas Brandl wrote:
> Hi,
>
> we have a pretty big table with an integer-type primary key. I'm looking for
> the quickest way to change the column type to bigint to avoid hitting the
> integer limit. We're trying to avoid prolonged lock situations and full tabl
Hi
I am seeking some clarification in regard to full-text search across
multiple tables and what the best approach is. The documentation talks
about two approaches when it comes to building a document: on-the-fly
concatenation of columns and a dedicated tsv column approach. Let's say I
want to perform a
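The two approaches the documentation describes look roughly like this (a sketch; `items`, `title`, `body`, and `tsv` are assumed names):

```sql
-- (a) On-the-fly document: no extra column, but the tsvector is rebuilt
--     per row unless an expression index on the same expression exists.
SELECT * FROM items
WHERE to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
      @@ to_tsquery('english', 'search & term');

-- (b) Dedicated tsv column: kept current by a trigger and indexed with
--     GIN, which is usually the faster option to query.
-- CREATE INDEX items_tsv_idx ON items USING GIN (tsv);
SELECT * FROM items WHERE tsv @@ to_tsquery('english', 'search & term');
```

Spanning multiple tables typically means either concatenating the joined columns as in (a), or maintaining one tsv column per table and combining the results with a join or UNION.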