I had set up this streaming replication pair of systems a few days ago and
everything seemed pretty happy as changes were being replicated. I set
it up without WAL archiving turned on. The backup log this morning caught
my eye: the standby appears to have activated itself.
I don't see a
On Wed, Feb 09, 2011 at 08:55:48AM -0500, Ray Stell wrote:
> I had set up this streaming replication pair of systems a few days ago and
> everything seemed pretty happy as changes were being replicated. I set
> it up without WAL archiving turned on. The backup log this morning caught
> my eye. I
The account has a password. Apache is working. I never configured the vhost on
the live Red Hat server. This is what is so confusing: everything that I can see
is the same on the live server (which I configured as well), and it works there,
but not on this test box.
I must be missing something sim
I just found my problem. In the config.inc.php file, I needed to change
$conf['servers'][0]['host'] = 'host'; to $conf['servers'][0]['host'] =
'localhost';. After that change I was able to log in with the account that I
created.
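For reference, the working line in phpPgAdmin's config.inc.php as described above would read something like this (the surrounding file is not shown here):

```
// phpPgAdmin connects to the server named here; 'localhost' (TCP to the
// local machine) was the fix, replacing the literal placeholder 'host'.
$conf['servers'][0]['host'] = 'localhost';
```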
-Original Message-
From: Guillaume Lelarge [mailto:guilla
On Wed, Feb 09, 2011 at 09:28:18AM -0500, Ray Stell wrote:
> On Wed, Feb 09, 2011 at 08:55:48AM -0500, Ray Stell wrote:
> My bad: this was a path issue; I was using a v8 pg_controldata command.
> The v9 pg_controldata looks good. Sorry.
Come to think of it, could pg_controldata check the versi
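A check like that could also be scripted around pg_controldata's output. A minimal sketch, assuming the usual "pg_control version number:" line format; the sample text below is illustrative, not captured from a real cluster:

```python
# Hypothetical sketch: parse the "pg_control version number" line out of
# pg_controldata output so a wrapper script could refuse to run a binary
# of the wrong version against a cluster.
def control_version(output):
    for line in output.splitlines():
        if line.startswith('pg_control version number'):
            return int(line.split(':', 1)[1].strip())
    return None

sample = ("pg_control version number:            903\n"
          "Catalog version number:               201008051\n")
print(control_version(sample))  # prints 903
```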
One of the main considerations for Hot Standby vs. Slony is replication
scope. With Hot Standby you get everything that occurs in the cluster,
across all databases. With Slony you are limited to at most a single
database per Slony cluster, and you can define replication sets which
only contain
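To make the scope difference concrete, here is a hypothetical slonik fragment defining a replication set that carries only two tables from one database. The set IDs, table IDs, and table names are invented for illustration; check the Slony-I documentation for your version's exact syntax:

```
# Only these two tables replicate; everything else in the database is ignored.
create set (id = 1, origin = 1, comment = 'orders tables only');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.orders');
set add table (set id = 1, origin = 1, id = 2,
               fully qualified name = 'public.order_lines');
```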
I am considering running Postgres with the database hosted on a NAS via NFS.
I have read a few things on the Web saying this is not recommended, as it will
be slow and could potentially cause data corruption.
My goal is to have the database on a shared filesystem so in case of server
failure,
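If NFS is used despite the caveats, the mount options matter. A hypothetical /etc/fstab entry follows (the export path and hostname are invented); a hard mount makes I/O errors surface rather than being silently dropped as they can be on a soft mount. Verify the options against your NAS vendor's guidance:

```
# Hypothetical entry: hard mount so NFS errors are reported, not swallowed.
nas01:/export/pgdata  /var/lib/pgsql/data  nfs  rw,hard,noatime  0 0
```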
I think a SAN (block access) is a better fit for this than a NAS (file access).
-Original Message-
From: pgsql-admin-ow...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On behalf of Bryan Keller
Sent: Wednesday, February 09, 2011 05:00 p.m.
To: pgsql-admin@postgresql.org
Subject:
Hi,
Once I've shut down a streaming replication standby, is there a way, from the
master, to tell which WAL file will be the earliest one needed for the replica
to catch back up?
What I want to do is be able to automatically detect if that WAL file is still
around so that the repl
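The arithmetic from a replay location to a WAL segment file name is simple enough to script. A minimal sketch in Python, assuming the default 16 MB WAL segment size and an LSN string of the form the server reports (for example, from pg_last_xlog_replay_location() on the standby before shutdown); the sample value is illustrative:

```python
# Hypothetical helper: map an LSN such as '0/3A0002D8' to the WAL segment
# file name the master would need to keep for the standby to catch up.
# Assumes the stock 16 MB (0x1000000-byte) segment size.
def wal_segment_name(lsn, timeline=1):
    logid_hex, offset_hex = lsn.split('/')
    logid = int(logid_hex, 16)
    seg = int(offset_hex, 16) // 0x1000000
    return '%08X%08X%08X' % (timeline, logid, seg)

print(wal_segment_name('0/3A0002D8'))  # prints 00000001000000000000003A
```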
On Wed, Feb 9, 2011 at 2:59 PM, Bryan Keller wrote:
> I am considering running Postgres with the database hosted on a NAS via
> NFS. I have read a few things on the Web saying this is not recommended, as
> it will be slow and could potentially cause data corruption.
>
> My goal is to have the
On Wed, Feb 9, 2011 at 3:37 PM, Nicholson, Brad (Toronto, ON, CA)
wrote:
> Hi,
>
> Once I’ve shut down a streaming replica standby, is there a way from the
> master I can tell which WAL file is going to be the earliest one needed in
> order for the replica to catch back up?
>
that info is on the
Howdy,
Environment:
Postgres 8.3.13
Solaris 10
I have a SELECT query that runs with no problem standalone, but when run
within a Perl script it intermittently core dumps. It's random; there is no
pattern to the timing of the core dumps. The Perl script processes the rows
from the query; if the rows satisfy a