Hi,
On 09/02/11 17:53, CS DBA wrote:
One of the main considerations for Hot Standby vs. SLONY is replication
scope. With Hot Standby you get everything that occurs in the cluster,
across all databases.
Yep, I agree with you, Kevin, regarding the replication scope. I assumed
that Jai was loo
One of the main considerations for Hot Standby vs. SLONY is replication
scope. With Hot Standby you get everything that occurs in the cluster,
across all databases. With SLONY you are limited to at most a single
database per "SLONY Cluster", and you can define replication sets which
only contain
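For reference, the per-database replication sets described above are defined with a slonik script. A minimal sketch (cluster name, node conninfo, and table name are hypothetical):

```
cluster name = myrep;
node 1 admin conninfo = 'dbname=app host=master user=slony';
node 2 admin conninfo = 'dbname=app host=slave1 user=slony';

-- A replication set containing only the tables you choose to replicate
create set (id = 1, origin = 1, comment = 'app tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.orders');
```

Only tables explicitly added to a set are replicated, which is the flexibility (and the maintenance burden) Slony trades against Hot Standby's whole-cluster scope.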
Hi,
On 09/02/11 01:34, Rangi, Jai wrote:
Hello,
I am looking for a replication solution for PG 9.x. The idea is to have
one master replication server and multiple (around 20) read-only slave
servers. I see PG 9 has built-in streaming replication. Is this the
best replication solution? How a
On Tue, Feb 8, 2011 at 5:34 PM, Rangi, Jai wrote:
> I am looking for a replication solution for PG 9.x. The idea is to have one
> master replication server and multiple (around 20) read-only slave servers.
I know (anecdotally) of at least one organization that's using Bucardo
[1] to synchronize many
Hello,
I am looking for a replication solution for PG 9.x. The idea is to have one
master replication server and multiple (around 20) read-only slave
servers. I see PG 9 has built-in streaming replication. Is this the best
replication solution? How about Slony? Which option will keep the slave
nodes in
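For reference, a minimal PG 9.0-style streaming replication setup for one master and read-only standbys might look like this (host names, user, and addresses are hypothetical):

```
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 20        # one slot per standby
wal_keep_segments = 64      # retain WAL so lagging standbys can catch up

# pg_hba.conf on the master: allow the standbys to connect for replication
host  replication  repuser  192.168.1.0/24  md5

# postgresql.conf on each standby
hot_standby = on            # accept read-only queries during recovery

# recovery.conf on each standby
standby_mode = 'on'
primary_conninfo = 'host=master port=5432 user=repuser password=secret'
```

Each standby starts from a base backup of the master and then streams WAL continuously, serving read-only queries in the meantime.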
Thursday, January 10, 2008 8:48 pm
Subject: Re: [ADMIN] Postgres replication
To: nalini <[EMAIL PROTECTED]>
Cc: pgsql-admin@postgresql.org
> On Jan 10, 2008 1:38 AM, nalini <[EMAIL PROTECTED]> wrote:
> > Dear All
> >
> > I have a application running at various location
On Jan 10, 2008 1:38 AM, nalini <[EMAIL PROTECTED]> wrote:
> Dear All
>
> I have an application running at various locations with Postgres as the backend.
> Slony is configured at all locations for replication at the local site and a
> central server, where each site has its own database instance.
>
> Now we w
On Thursday 10 January 2008 00:38:38 nalini wrote:
> Dear All
>
> I have an application running at various locations with Postgres as the backend.
> Slony is configured at all locations for replication at the local site and a
> central server, where each site has its own database instance.
>
> Now we wish to m
Dear All
I have an application running at various locations with Postgres as the backend.
Slony is configured at all locations for replication at the local site and a central
server, where each site has its own database instance.
Now we wish to merge data collected from all the sites into one database
inst
Mario Splivalo wrote:
On Fri, 2006-07-28 at 11:01 -0700, Jeff Frost wrote:
Mario,
There's also Command Prompt's Mammoth replicator:
http://commandprompt.com/products/mammothreplicator/
I sent an email asking if they have an evaluation version of some
sort...
We do, but our 1.7 8.1 version
On Sat, 29 Jul 2006, Mario Splivalo wrote:
PITR restores are not working for me. I did it like this: I issued
pg_start_backup, then I 'tar cvf - pg_data/ | nc destination 9876'-ed
the cluster directory; when that was done I did pg_stop_backup; after
that I deleted the pg_xlog directory, and put new
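For comparison, the base-backup procedure described above corresponds roughly to the following (host name and port are from the message; the psql invocations are an assumed way of calling the pre-9.x SQL backup functions):

```shell
# On the primary: mark the start of the base backup
psql -c "SELECT pg_start_backup('base');"

# Ship the cluster directory to the destination host
# (assumes an nc listener writing to disk on the other end)
tar cvf - pg_data/ | nc destination 9876

# Mark the end of the backup
psql -c "SELECT pg_stop_backup();"
```

One likely cause of the failure described above: pg_xlog should not be removed outright on the restore side. Recovery expects an empty pg_xlog directory (with its archive_status subdirectory) to exist, plus a recovery.conf whose restore_command fetches the archived WAL segments.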
On Fri, 2006-07-28 at 11:01 -0700, Jeff Frost wrote:
> Mario,
>
> There's also Command Prompt's Mammoth replicator:
> http://commandprompt.com/products/mammothreplicator/
I sent an email asking if they have an evaluation version of some
sort...
> Also, you could batch up PITR restores or even j
Mario,
There's also Command Prompt's Mammoth replicator:
http://commandprompt.com/products/mammothreplicator/
Also, you could batch up PITR restores or even just dump restores if the
timing was acceptable and you could afford the replicated DB to be down during
those batch updates.
Another
Besides Slony, is there any other active Postgres replication project?
I need a solution where I could mirror a database to a 'spare' server
for statistical analysis (just SELECTs being done there), and the 'lag'
between the actual data and the slave server synchronisation can be even a
few hours.
It w
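Since a lag of hours is acceptable here, era-appropriate (8.x) WAL shipping would also fit. A minimal sketch (archive path is hypothetical):

```
# postgresql.conf on the primary: continuous WAL archiving
archive_command = 'cp %p /mnt/wal_archive/%f'

# recovery.conf on the spare server
restore_command = 'cp /mnt/wal_archive/%f %p'
```

With a plain cp as restore_command the spare replays the archive once and then finishes recovery; keeping it perpetually a few hours behind requires a restore script that waits for new segments (an approach later formalized as pg_standby in 8.2). The spare cannot serve SELECTs while recovering, so queries run only between restore cycles.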