It seems it was a Postgres bug on the replica; after upgrading the minor
version to 9.1.21 on replica1, the corruption went away.
Thanks, everyone, for the help.
On Tue, Apr 5, 2016 at 1:32 AM, Soni M <diptat...@gmail.com> wrote:
> Hello Adrian, thanks for the response.
>
> master da
messages detected.
On Sun, Apr 3, 2016 at 11:23 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:
> On 04/02/2016 08:38 PM, Soni M wrote:
>
>> Hello Everyone,
>>
>> We face TOAST table corruption.
>>
>> One master and two streaming replicas. The c
, Joshua D. Drake <j...@commandprompt.com>
wrote:
>
> What version of PostgreSQL and which OS?
>
>
> On 04/02/2016 08:38 PM, Soni M wrote:
>
>
>> How can the corruption occur, and how can I resolve it?
>>
>> Thanks so much for the help.
>>
Hello Everyone,
We face TOAST table corruption.
One master and two streaming replicas. The corruption happens only on the two
streaming replicas.
We did find the corrupted rows. Selecting this row returns (on both
replicas): unexpected chunk number 0 (expected 1) for toast value
1100613112 in
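One common way to locate which row holds a damaged TOAST value is a batched scan that forces detoasting (a sketch only; `my_table`, `big_column`, and `id` are placeholder names, not the poster's schema):

```sql
-- Scan in key ranges until the "unexpected chunk number" error reproduces,
-- then narrow the range; length() forces the TOASTed value to be read.
SELECT id
FROM my_table
WHERE id BETWEEN 1 AND 100000
  AND length(big_column::text) >= 0;
```

The range that errors out contains the corrupted row; halving the range repeatedly narrows it to a single id.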
This is hard to tell, but you can get a rough estimate.
1. Estimate the WAL rate from the pg_xlog/ directory, i.e. how many WAL
segments are generated per minute.
2. Estimate how long the pg_basebackup will last. Let's say 3 hours.
Then multiply #1 by #2 to get a rough estimate.
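The arithmetic above can be sketched in shell (the segment rate, backup duration, and the pg_xlog path are made-up example numbers, not measurements from the thread):

```shell
# Count 16 MB WAL segments created recently on the master, e.g.:
#   find /var/lib/pgsql/9.1/data/pg_xlog -cmin -10 -type f | wc -l
# Suppose that works out to ~3 segments/minute.
SEGMENTS_PER_MIN=3
SEGMENT_MB=16          # default WAL segment size
BACKUP_HOURS=3         # estimated pg_basebackup duration
# Rough WAL volume the master will generate while the base backup runs:
echo "$(( SEGMENTS_PER_MIN * SEGMENT_MB * 60 * BACKUP_HOURS )) MB"
# prints "8640 MB"
```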
Hope this would help.
On Sat, Aug 23, 2014 at 2:18 AM, Joseph Kregloh jkreg...@sproutloud.com
wrote:
On Fri, Aug 22, 2014 at 2:21 PM, Jerry Sievers gsiever...@comcast.net
wrote:
Joseph Kregloh jkreg...@sproutloud.com writes:
Hi,
Currently I am doing asynchronous replication from master to
slave. Now if
On Thu, Aug 21, 2014 at 9:26 AM, David G Johnston
david.g.johns...@gmail.com wrote:
Soni M wrote
Hi Everyone,
I have this query :
select t.ticket_id ,
tb.transmission_id
from ticket t,
transmission_base tb
where t.latest_transmission_id = tb.transmission_id
On Fri, Aug 22, 2014 at 9:10 PM, Alban Hertroys haram...@gmail.com wrote:
On 22 August 2014 14:26, Soni M diptat...@gmail.com wrote:
Currently we have only latest_transmission_id as FK, described here :
TABLE ticket CONSTRAINT fkcbe86b0c6ddac9e FOREIGN KEY
(latest_transmission_id
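If the intent is to enforce the two-column join, a composite foreign key would look roughly like this (a sketch: the constraint name is made up, and transmission_base would need a matching unique key on those two columns first):

```sql
ALTER TABLE ticket
  ADD CONSTRAINT ticket_transmission_fk
  FOREIGN KEY (latest_transmission_id, ticket_number)
  REFERENCES transmission_base (transmission_id, ticket_number);
```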
Hi Everyone,
I have this query :
select t.ticket_id ,
tb.transmission_id
from ticket t,
transmission_base tb
where t.latest_transmission_id = tb.transmission_id
and t.ticket_number = tb.ticket_number
and tb.parse_date = '2014-07-31';
Execution plan: http://explain.depesz.com/s/YAak
Indexes on
On each session created by the client, run SET log_statement TO 'all'
before firing your query.
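As a minimal sketch of that suggestion (SET affects only the current session; the SELECT is a placeholder for the query being traced):

```sql
-- Per-session statement logging; statements then appear in the server log
-- under pg_log.
SET log_statement TO 'all';
SELECT now();        -- this statement will be logged
RESET log_statement; -- back to the server default
```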
On Wed, Aug 13, 2014 at 4:21 PM, M Tarkeshwar Rao
m.tarkeshwar@ericsson.com wrote:
Hi all,
Can I see the detailed log of queries fired by a particular Postgres client
on the Postgres server?
blank log file on
pg_log.
*From:* Soni M [mailto:diptat...@gmail.com]
*Sent:* 13 August 2014 15:02
*To:* M Tarkeshwar Rao
*Cc:* pgsql-general@postgresql.org
*Subject:* Re: [GENERAL] Can I see the detailed log of query fired by
particular Postgres client on Postgres server?
On each session
General advice is to set shared_buffers to 25% of total RAM, leaving 75% of
RAM for the OS cache.
In my case (1.5 TB database, 145 GB RAM), setting shared_buffers bigger
than 8GB gave no significant performance improvement.
In some cases, setting it low can even be an advantage.
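As a postgresql.conf fragment reflecting the numbers above (the effective_cache_size value is an assumption following the 75%-for-OS-cache rule, not a figure from the thread):

```
shared_buffers = 8GB           # larger gave no measurable gain on this 145 GB box
effective_cache_size = 110GB   # planner hint about the OS cache, ~75% of RAM
```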
Do you run intensive read queries on the slave?
If yes, query conflicts can cause that:
http://www.postgresql.org/docs/9.1/static/hot-standby.html#HOT-STANDBY-CONFLICT
On a conflict, the xlog stream is saved in the xlog dir on the slave instead
of being replayed. This happens until the slave has the opportunity to write
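The standby-side settings involved can be sketched as follows (the values shown are illustrative, not recommendations from the thread):

```
# How long WAL replay may wait for conflicting standby queries before
# cancelling them:
max_standby_streaming_delay = 30s
# Ask the master to hold back vacuum for rows the standby still needs (9.1+):
hot_standby_feedback = on
```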
Hello All,
This is how I set up the db:
The slave uses streaming replication.
We configured the slave to run pg_dump, which usually lasts about 12 hours.
We have limited pg_xlog space on the slave.
Once pg_xlog on the slave filled up while pg_dump was still in progress:
2014-08-11 09:39:23.226
On Tue, Aug 12, 2014 at 12:37 PM, Michael Paquier michael.paqu...@gmail.com
wrote:
On Tue, Aug 12, 2014 at 2:10 PM, Soni M diptat...@gmail.com wrote:
This is how I set up the db:
Slave using streaming replica.
We configure slave to run pg_dump which usually last for about 12 hours.
We
I think you could try the pg_basebackup tool. It has options to achieve the
same thing you wanted, but it needs PGDATA on the destination emptied. If you
really need to do the exact thing you stated, then you need to set Postgres
to keep a high enough number of xlog files on the master to ensure that the needed xlog
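A sketch of the two pieces mentioned (hostname, user, and paths are placeholders; -x is the 9.1-era pg_basebackup flag for including the required WAL in the backup):

```
# master postgresql.conf: retain enough WAL segments (16 MB each) for the
# standby to reconnect without gaps; 1024 segments is ~16 GB of retained WAL
wal_keep_segments = 1024

# on the standby, into an EMPTY data directory:
#   pg_basebackup -h master.example.com -U replicator \
#       -D /var/lib/pgsql/9.1/data -x -P
```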