Hi,
>> Also, we can see that 9.2.3 has been released now and has a number of fixes
>> relating to WAL replay, so we have decided to try again using that.
>> We will scrub the standby and make a fresh copy using pg_basebackup. If that
>> doesn't work then we may try using rsync instead.
I am pl
We'll let you all know the result.
Regards // Mike
-Original Message-
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: Thursday, 7 February 2013 11:49 PM
To: amutu
Cc: Michael Harris; pgsql-general@postgresql.org; Hari Babu
S
Hi Hari,
Thanks for the tip. We tried applying that patch, however the error recurred
exactly as before.
Regards // Mike
-Original Message-
From: Hari Babu [mailto:haribabu.ko...@huawei.com]
Sent: Tuesday, 5 February 2013 10:07 PM
To: Michael Harris; pgsql-general@postgresql.org
Hi All,
We are having a thorny problem I'm hoping someone will be able to help with.
We have a pair of machines set up as an active / hot standby pair. The database they
contain is quite large - approx. 9TB. They were working fine on 9.1, and we
recently upgraded the active DB to 9.2.1.
After upgra
Hi Vibhor,
>> Not sure about the above wrapper function. However, could you share some
>> information from pg_log from when you started the restore with the
>> backup_label information?
Here it is at the beginning:
[2011-02-25 09:40:11 EST] LOG: database system was interrupted; last known up
at 2011-0
ion until we reached the last WAL file made by the
original database.
Regards // Mike
-Original Message-
From: Vibhor Kumar [mailto:vibhor.ku...@enterprisedb.com]
Sent: Monday, 28 February 2011 3:25 PM
To: Michael Harris
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ERROR: missing chunk number 0 for toast value
>> ERROR: missing chunk number 0 for toast value 382548694 in
>> pg_toast_847386
>
> This seems more like a corrupted toast table.
>
> Did you try to reindex pg_toast_847386?
> REINDEX TABLE pg_toast.pg_toast_847386;
> VACUUM ANALYZE;
Hi Vibhor,
Thanks for the suggestion.
We didn't try th
Hi,
We have a PG 8.4 database approx. 5TB in size.
We were recently testing our restore procedure against our latest dump. The
dumps are taken using the continuous archiving method, with base dumps made
using tar. Our tar script is set up to ignore missing/modified files but should
stop on all
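For context, a tar-based base dump of this kind is normally bracketed by the 8.4-era backup-control functions; a minimal sketch (the label is hypothetical):

```sql
-- Mark the start of the base backup; PostgreSQL will retain enough WAL
-- to make the filesystem copy consistent on restore.
SELECT pg_start_backup('nightly_base');

-- (Outside psql: tar up the data directory. tar is expected to see files
--  change underneath it, which is why the script tolerates modified files.)

-- Mark the end of the base backup and archive the final WAL segment.
SELECT pg_stop_backup();
```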
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Tuesday, 10 November 2009 12:04 PM
To: Michael Harris
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database Startup Failure: FATAL: could not read block 6
of relation 16390/16391/5153282: Success
Michael Harris writes:
>
Hi,
I have been asked to help recover a database that seems to have been corrupted
after a power failure.
This is what we see when psql tries to start up:
[2009-11-10 10:39:17 EST] LOG: checkpoint record is at 41E/BF2D5DC0
[2009-11-10 10:39:17 EST] LOG: redo record is at 41E/BF008F28; undo re
Hi,
I recently had to do something similar: change one column from INT to BIGINT in
a table with inheritance three levels deep, where some of the child tables
had millions of records.
All affected tables have to be rewritten for such a command. One consequence of
this is that you (temporar
in my application that won't be a problem.
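The kind of command involved is sketched below, with hypothetical table and column names. With inheritance, running it on the parent rewrites every child table as well, which is what makes it slow and lock-heavy on large tables:

```sql
-- Rewrites the parent and all inherited child tables while holding
-- an ACCESS EXCLUSIVE lock on each of them.
ALTER TABLE measurements ALTER COLUMN counter_id TYPE BIGINT;
```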
Thanks again,
Regards // Mike
-Original Message-
From: arta...@comcast.net [mailto:arta...@comcast.net]
Sent: Saturday, 23 May 2009 1:23 AM
To: Michael Harris
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Aggregate Function to return m
Hi Experts,
I want to use an aggregate function that will return the most commonly
occurring value in a column.
The column consists of VARCHAR(32) values.
Is it possible to construct such an aggregate using PL/pgSQL?
If I was trying to do something like this in Perl I would use a hash
table t
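Before writing a custom PL/pgSQL aggregate, it is worth noting that a plain query can already compute this; a sketch with hypothetical table and column names:

```sql
-- Most frequently occurring value in a VARCHAR(32) column.
SELECT val
FROM my_table
GROUP BY val
ORDER BY count(*) DESC
LIMIT 1;
```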
Hi,
First you need to identify the correct postgresql process. Postgresql
spawns an individual server process for each database connection. They
look something like this:
postgres 27296  7089  9 08:00 ?        00:05:52 postgres: username
databasename [local] idle
If a query was running
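The same information is also available from inside the database via the statistics views; a sketch using the 8.x-era column names (later releases renamed procpid to pid and current_query to query):

```sql
-- One row per server process; shows who is connected to which database
-- and what each backend is currently running.
SELECT procpid, usename, datname, current_query
FROM pg_stat_activity;
```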
Hi,
I had a similar problem and overcame it by temporarily setting
zero_damaged_pages, then doing a full vacuum and re-index on the affected table.
The rows contained in the corrupted page were lost but the rest of the table
was OK after this.
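A sketch of that recovery sequence, with a hypothetical table name. Note that zero_damaged_pages makes PostgreSQL zero out corrupt pages as it reads them, so any rows on those pages are permanently lost:

```sql
SET zero_damaged_pages = on;   -- corrupt pages are zeroed (data on them is lost)
VACUUM FULL my_table;          -- rewrites the table, reading every page
REINDEX TABLE my_table;        -- rebuilds indexes against the salvaged heap
SET zero_damaged_pages = off;  -- back to the safe default
```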
Regards // Mike
-Original Message-
From:
Hi,
I'm not sure if this is something we've done wrong or maybe a bug.
Whenever any kind of query is done on the table below, this is the
result:
ispdb_vxe=> select * from pm.carrier_on_13642;
ERROR: cache lookup failed for type 0
I first noticed it when the regular backups were
. I will try to locate the corrupted row(s), maybe
pg_filedump can help with that.
Regards // Mike
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Saturday, 26 May 2007 9:38 AM
To: Michael Harris (BR/EPA)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ERROR: cache
,
complaining about another table pm.carrier_oo_13642 with the same error.
I then excluded that table also, after which the dump succeeded.
What does "ERROR: cache lookup failed for type 0" mean? I searched all
over the place for a good description but could not find one.
Regards // Mike
---