Hi!
We have a PostgreSQL cluster that is running on tens of servers across
different colocations.
Failovers / switchovers between colocations can be done per server.
Uptime has been >99.9%. All the software is open source and has gone
through intensive live testing.
Our cluster is running on:
HP was providing CA (Continuous Access) software that was claimed to provide
WAN SAN replication by replaying I/O on the slave in exactly the sequence it
was generated on the master. So while there was a delay, updates on the
slave would be sequentially intact, providing a good level of integrity.
Jon Colverson wrote:
I've been testing my DB backup and restore procedure and I've run into
something I can't figure out. After recovering from a PITR backup, when
I do another pg_start_backup, PostgreSQL attempts to re-archive the last
log, which fails because it already exists in the arc
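The usual recommendation is an archive_command that refuses to overwrite an
existing file, which is likely what produces the failure described above; a
minimal postgresql.conf sketch (the archive path here is a placeholder, not
from the message):

```
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
```

The `test ! -f` guard makes the command exit non-zero if the segment was
already archived, so PostgreSQL reports the conflict instead of silently
clobbering a good copy.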
On Wed, May 30, 2007 at 06:12:02PM -0400, Adam Tauno Williams wrote:
>
> Sure it can be done. Get two SANs that support replication, redundant
> high-speed WAN links, high end servers, large UPSs, and generators.
Most SANs that I've seen aren't in "geographically separate"
locations in the way m
On Wed, 2007-05-30 at 17:18 -0400, Andrew Sullivan wrote:
> On Wed, May 30, 2007 at 04:42:08PM -0300, Fernando Ike de Oliveira wrote:
> > was 99.7%, but considering current necessities, we are changing the
> > percentage to 99.99%. I am thinking the solution will probably be
> > pgpool-2 or Heartbeat + GFS. The Postgre
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> I am getting the following error when trying to run a reindex on one of my
> databases.
> reindexdb: reindexing of database "xxx" failed: ERROR: out of memory
> DETAIL: Failed on request of size 268435456.
> Can someone advise on what memory paramete
Roman Chervotkin wrote:
> Hi list.
>
> Did a usual pg_dump today and got an error.
>
> -
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: compressed data is corrupt
> pg_dump: The command was: COPY public.candidates (id, name, surname,
> mid_nam
On Wed, May 30, 2007 at 04:42:08PM -0300, Fernando Ike de Oliveira wrote:
> was 99.7%, but considering current necessities, we are changing the
> percentage to 99.99%. I am thinking the solution will probably be
> pgpool-2 or Heartbeat + GFS. The PostgreSQL servers will be in different physical places.
I would be ver
Please help,
I cannot understand what should be done to fix the issue.
Any hints? I tried searching and IRC but have found nothing useful yet.
On 5/30/07, Roman Chervotkin <[EMAIL PROTECTED]> wrote:
Did a usual pg_dump today and got an error.
-
pg_dump: SQL command fai
Different physical places, I hope high bandwidth + low latency :)
On Wed, 30 May 2007, Fernando Ike de Oliveira wrote:
Hi,
I need a solution for PostgreSQL high availability; originally the target
was 99.7%, but considering current necessities, we are changing the
percentage to 99.99%. I am thinking the solution
Hi,
I need a solution for PostgreSQL high availability; originally the target
was 99.7%, but considering current necessities, we are changing the
percentage to 99.99%. I am thinking the solution will probably be
pgpool-2 or Heartbeat + GFS. The PostgreSQL servers will be in different physical places.
Suggestions
Chris Hoover wrote:
Any ideas?
Drop it and try to recreate it. As for the parameter, it is
maintenance_work_mem, but that should spill to disk, which means you ran
out of actual memory too.
Joshua D. Drake
P.S. You *need* to upgrade to 8.1.9
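For what it's worth, the 268435456 bytes in the error works out to exactly
256 MB, which suggests the failed allocation was sized from
maintenance_work_mem. A sketch of the arithmetic (nothing here is specific
to the poster's setup):

```shell
# The error reports "Failed on request of size 268435456" (bytes).
failed_request=268435456
# Convert bytes to megabytes: 268435456 / 1024 / 1024 = 256.
echo $(( failed_request / 1024 / 1024 ))
# prints 256
```

If 256 MB matches the configured maintenance_work_mem, lowering it for the
session (e.g. `SET maintenance_work_mem = '128MB';`) before retrying the
REINDEX is a common workaround when the OS cannot satisfy the allocation.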
-- Forwarded message --
From:
Any ideas?
-- Forwarded message --
From: Chris Hoover <[EMAIL PROTECTED]>
Date: May 29, 2007 11:36 AM
Subject: Out of Memory on Reindex
To: "pgsql-admin@postgresql.org"
I am getting the following error when trying to run a reindex on one of my
databases.
reindexdb: reindexing o
On Wed, May 30, 2007 at 12:57:55PM +0200, Peter Hausmann wrote:
> Hi,
>
> After a node is dropped due to failover, it takes days to recover the
> database, because it is built up from scratch.
This is from Slony? You know that that's a built-in limitation, and
that the preferred method is switch
Hi,
After a node is dropped due to failover, it takes days to recover the
database, because it is built up from scratch.
We would prefer another approach:
Make an online backup of the provider database.
The provider database continues writing new data and the Slony logs.
The backup is restored to
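The online-backup step described above can be done with the standard
hot-backup sequence of that era. A rough sketch; the database name, backup
label, paths, and host are placeholders, and psql/rsync access to both
machines is assumed:

```shell
# Mark the start of an online backup; the provider keeps accepting writes.
psql -d mydb -c "SELECT pg_start_backup('slony_reseed');"
# Copy the live data directory to the machine being (re)built.
rsync -a /var/lib/pgsql/data/ standby:/var/lib/pgsql/data/
# Mark the end of the backup; the copied files plus archived WAL
# then form a consistent base to recover from.
psql -d mydb -c "SELECT pg_stop_backup();"
```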
Hi list.
Did a usual pg_dump today and got an error.
-
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: compressed data is corrupt
pg_dump: The command was: COPY public.candidates (id, name, surname,
mid_name, compensation, created, birthday, updated,
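A rough way to localize the damaged row(s) behind a "compressed data is
corrupt" error is to read the table out in id ranges until one range fails,
then halve the failing range. A sketch, assuming an integer id column, a
database named mydb, and PostgreSQL 8.2+ for COPY (SELECT ...):

```shell
# Each slice that COPYs cleanly is fine; the slice that errors out
# brackets the corrupt TOASTed value. Halve the failing range to isolate it.
for start in 0 10000 20000 30000; do
  psql -d mydb -c "COPY (SELECT * FROM public.candidates \
      WHERE id >= $start AND id < $start + 10000) TO STDOUT" > /dev/null \
    || echo "corruption in id range [$start, $((start + 10000)))"
done
```

Rows isolated this way can often be deleted (after recovering what data you
can) so the rest of the table dumps normally.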