[ADMIN] How to track down locks held by recovery process on a slave?

2013-10-04 Thread Amador Alvarez
Hi there,

I am using a postgres 9.2 slave for some reports, and I noticed that
they are sometimes held up because certain activity on the master, such as
vacuums, can conflict with long queries on the slave.

Looking at the pg_locks view, I can see that the recovery process on the
slave is holding some AccessExclusiveLock locks on relations whose OIDs
do not match any pg_class.oid.

Any idea how I could find out which relations are locked by the recovery
process?
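
One possibility I am still checking (a sketch, not a confirmed answer): pg_locks
is cluster-wide while pg_class only covers the current database, so OIDs that do
not resolve may simply belong to another database. Something like this, run while
connected to the database reported in pg_locks.database, is what I have been using:

SELECT l.database,
       d.datname,
       l.relation,
       c.relname,   -- NULL when the relation lives in another database
       l.mode,
       l.granted,
       l.pid
FROM pg_locks l
LEFT JOIN pg_database d ON d.oid = l.database
LEFT JOIN pg_class    c ON c.oid = l.relation
WHERE l.mode = 'AccessExclusiveLock';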


Thanks in advance,
A.A.


Re: [ADMIN] postgresql patching

2013-09-10 Thread Amador Alvarez
Check this out :

http://www.postgresql.org/support/versioning/


Cheers,
A.A.


On Mon, Sep 9, 2013 at 8:17 PM, rajkphb rajkph...@gmail.com wrote:

 Hi, I am relatively new to postgresql. I did not find any detailed
 documentation on patching. Please provide me some links where I can get
 some notes on postgresql patching. Why does the postgres documentation not
 have a patching topic? Thanks, Suhas



[ADMIN] Looking for a better approach for pacemaker, DRBD vs pgsql 9.1 standby feature.

2012-08-27 Thread Amador Alvarez

Hi there,
I am setting up a 2-node database cluster (master/standby) on pgsql 9.1,
and it looks to me that the DRBD option could be easier to maintain and keep
running after several failovers in a row, especially because after a
failover operation it is not very friendly to automatically set the former
primary back up as the new standby. Besides, I would not like to automate
an rsync over the same data directory, as it could be destroyed by a
'wrong' sync.

Any comments are welcome !

Thanks in advance,
A.A.




Re: [ADMIN] standby with a fixed lag behind the master

2012-08-08 Thread Amador Alvarez

Hi there,

And why not ship older WAL files to the target on a regular basis?
On the master, a crontab job can ship just the wanted WAL files (those more
than n hours older than the current time) and clean up the ones already
shipped (check the rsync options) on a regular schedule.


A.A.



On 07/26/2012 02:24 AM, Alexey Klyukin wrote:

Hello,

I've recently come across the task of setting up a PostgreSQL 9.1 standby
server that is N hours behind the master, i.e. only transactions that finished
N hours in the past or older should be replayed on the standby. The goal is to
have a known-good-state server to perform backups from and possibly revert to
in case of unwanted changes on the primary. It seems that there is no mechanism in
PostgreSQL to just ask the standby to keep a fixed distance (in terms of either
WAL segments or time) behind the primary, so these are possible solutions:

1. Use restore command on standby to fetch the current WAL segment only if it 
has been created not less than N hours in the past (according to ctime).
2. Pause the restore process on standby if the lag * is less than N hours (with 
pg_xlog_replay_pause()) and resume if it is more than that.
3. Set recovery_target_time to current - 6 hours and pause_at_recovery_target 
to true, periodically check whether the recovery is paused, reset the recovery 
target time to a new value (and restart the standby) if it is.

* - the lag would be calculated as now() - pg_last_xact_replay_timestamp() on 
standby.
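
A rough sketch of how option 2 could look in SQL, using that check (the
six-hour threshold is just an example, and the cron wrapper around it is
left out):

DO $$
DECLARE
   lag interval := now() - pg_last_xact_replay_timestamp();
BEGIN
   IF lag < interval '6 hours' THEN
      IF NOT pg_is_xlog_replay_paused() THEN
         PERFORM pg_xlog_replay_pause();
      END IF;
   ELSIF pg_is_xlog_replay_paused() THEN
      PERFORM pg_xlog_replay_resume();
   END IF;
END$$;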

Both 2 and 3 require an external cron job to pause/resume the recovery, and 1,
while being the easiest of all, doesn't work with SR (unless it's combined with
WAL shipping). I wonder if there are other well-established approaches to
solving this problem, and if there is interest in adding such a feature to the
-core?

Thank you,
--
Alexey Klyukin http://www.commandprompt.com
The PostgreSQL Company – Command Prompt, Inc.









Re: [ADMIN] Merging two databases

2012-06-14 Thread Amador Alvarez

Hi,
Obviously it will always depend on the constraints in your schema, but
why not dump the databases out and import them into a single one
properly? Once you have the new database, it should be easy to catch up
with the two source databases before switching over (with statements like
INSERT/UPDATE into the new tables ... SELECT from the old tables).
It is up to you; this approach avoids setting up replication, although using
replication could be good practice for future replication needs.
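
Something along these lines, assuming both source databases get restored into
separate schemas of the merged one (a sketch only; the schema and table names
here are hypothetical, and conflicting keys still need handling):

-- second source database restored into schema legacy_b of the merged database;
-- public.customers is the merged target table
INSERT INTO public.customers (id, name, created_at)
SELECT b.id, b.name, b.created_at
FROM legacy_b.customers b
WHERE NOT EXISTS (
   SELECT 1 FROM public.customers p WHERE p.id = b.id
);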


A.A.



On 06/12/2012 03:23 AM, Lukasz Brodziak wrote:

Hi there,

Is there a way of merging two databases in postgresql in the way of
differential replication? The thing is that we have two DBs with same
schemas and we want to merge the data within them into one single
replica. Are there any tools/ways of doing this? Of course when the
merged db is created it will be only updated/inserted with new data
from each of source DBs.





[ADMIN] Hot backup for postgres 8.4

2012-06-12 Thread Amador Alvarez

Hi there,

I have been looking for a good solution to back up a postgresql 8.4
database server (not pg_dump), and the only options I seem to have are
either OmniPITR or a custom-coded solution.
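
By custom-coded I mean something built around the low-level backup API that
8.4 already has, roughly like this (a sketch only; the label is arbitrary and
the file-level copy in between happens outside SQL):

-- run as superuser before copying $PGDATA with tar/rsync
SELECT pg_start_backup('nightly_base_backup');

-- ... tar/rsync the data directory here, with continuous WAL archiving enabled ...

-- run once the copy has finished; completes the backup and switches the WAL segment
SELECT pg_stop_backup();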
I am a little bit afraid of setting up OmniPITR in production for
archiving and backup, even though it has been running seamlessly in a
testing environment.

I wish I could use pg_basebackup, but unfortunately it is version 9 oriented.

Does anyone have production experience with OmniPITR?
If not, is there something out there trustworthy enough for production use?

Thanks in advance,

A.A.




Re: [ADMIN] Can schemas be ordered regarding their creation time ?

2012-06-07 Thread Amador Alvarez

Thanks a lot Hari,
very resourceful, you have been very helpful.

cheers,
A.A.

On 06/07/2012 12:47 AM, hari.fu...@gmail.com wrote:

Amador Alvarez aalva...@d2.com writes:


Any idea on doing (COMMENT ON SCHEMA x IS 'y') as 'y' variable?

You could use PL/pgSQL's EXECUTE for that:

DO $$BEGIN
   EXECUTE 'COMMENT ON SCHEMA myschema IS ''Created '
           || current_timestamp || '''';
END$$;
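
Reading the timestamps back later is then a matter of parsing the comments
(a sketch, assuming every schema comment follows the 'Created <timestamp>'
convention above, so the text after the prefix casts back to timestamptz):

SELECT n.nspname,
       obj_description(n.oid, 'pg_namespace') AS schema_comment
FROM pg_namespace n
WHERE obj_description(n.oid, 'pg_namespace') LIKE 'Created %'
ORDER BY substring(obj_description(n.oid, 'pg_namespace') FROM 9)::timestamptz DESC;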






Re: [ADMIN] Can schemas be ordered regarding their creation time ?

2012-06-06 Thread Amador Alvarez


Thanks Hari and Matthias,
It is a very good idea; however, the schema names are meaningful and cannot
have a date attached to them.
Regarding the comment solution (COMMENT ON SCHEMA x IS 'y'), it sounds
great, but I tried to run different examples without success, as 'y'
must be a literal string.


I ran commands like these :

comment on schema users is select to_char(current_timestamp,'DD Mon ');

comment on schema users is select 
quote_literal(to_char(current_timestamp,'DD Mon '));


... and even simple concatenated strings:

comment on schema users is 'a'||'b';

ERROR:  syntax error at or near "||"
LINE 1: comment on schema users is 'a'||'b';


... and even with a function returning a text ... : (


Any idea on doing (COMMENT ON SCHEMA x IS 'y') as 'y' variable?


Thanks again,
I appreciate it a lot,
A.A.




I have been trying a lot of different options to do this :

COMMENT ON SCHEMA x IS 'y'.




On 06/06/2012 01:06 AM, hari.fu...@gmail.com wrote:

Amador Alvarez aalva...@d2.com writes:


Hi there,
I would like to know if it is possible to get the date when the
different schemas were created, as I want to write a script to dump
only the latest schemas (those created in the latest month, for
instance).

You could attach the creation date to the schema name or use
COMMENT ON SCHEMA x IS 'y'.






Re: [ADMIN] Can schemas be ordered regarding their creation time ?

2012-06-05 Thread Amador Alvarez

Thanks Tom,
I will figure out, then, how to add newly created schemas to the
schema list to be backed up (dumped), though not as directly as I expected.


Cheers,
A.A.


On 06/05/2012 05:43 PM, Tom Lane wrote:

Amador Alvarez aalva...@d2.com writes:

I would like to know if it is possible to get the date when the
different schemas were created, as I want to write a script to dump
only the latest schemas (those created in the latest month, for instance).

No, sorry, Postgres does not track creation time of database objects.

regards, tom lane




Re: [ADMIN] Data split -- Creating a copy of database without outage

2012-05-30 Thread Amador Alvarez

Hi ,

I would start with a single, highly tuned database, focusing mainly on
dealing efficiently with concurrent activity and identifying the real hot
spots.
If you find that you really do need more database capacity, consider adding
new databases and relocating some users to them whenever you need to.


Take a look at the different options for replication, tuning and
database load balancers, set up some tests, and get a deeper understanding
of your real needs and performance.


A.A.



On 05/30/2012 08:14 AM, Igor Shmain wrote:


Thank you, Liu, for your suggestion.

I might be missing something (I am new to postgres), but it seems that 
your suggestion will not help much in my case.  Since the number of db 
requests will grow with time (more users will come into the system), 
the current server will become incapable of serving all the requests 
quickly enough.


The idea is to increase the overall processing capacity of the system by
adding more servers to it. Throwing more hardware at a single main
server is not an option in my case. Creating multiple replicas
(slaves) is also not a good option -- it would be way more efficient
to have a group of db servers, each serving only some subset of users
and hosting data for those users only. Buying new servers in advance
is not an option either.


What I am looking for is switching some of the users to another db 
server when the capacity of the existing server(s) is not enough. The 
point is to do it without interrupting the users' work (so they do not 
see that horrible Sorry! This site is under maintenance...).


If I missed something it would be very kind of you to point this out.

Thank you once again,

-igorS

*From:*Haifeng Liu [mailto:liuhaif...@live.com]
*Sent:* May-29-12 9:13 PM
*To:* Igor Shmain
*Cc:* pgsql-admin@postgresql.org
*Subject:* Re: [ADMIN] Data split -- Creating a copy of database 
without outage


Why not use a partitioned table? You can write a trigger to control
into which partition the incoming data should be inserted.
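
A minimal sketch of that idea (the table, shard column and ranges here are
hypothetical, not from this thread):

-- parent table plus one child per shard range; the trigger routes rows by shard number
CREATE TABLE accounts (id bigint, shard int, payload text);
CREATE TABLE accounts_shard_0 (CHECK (shard BETWEEN 0 AND 99))    INHERITS (accounts);
CREATE TABLE accounts_shard_1 (CHECK (shard BETWEEN 100 AND 199)) INHERITS (accounts);

CREATE OR REPLACE FUNCTION accounts_insert_router() RETURNS trigger AS $$
BEGIN
   IF NEW.shard BETWEEN 0 AND 99 THEN
      INSERT INTO accounts_shard_0 VALUES (NEW.*);
   ELSE
      INSERT INTO accounts_shard_1 VALUES (NEW.*);
   END IF;
   RETURN NULL;  -- the row has already been stored in a child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_insert_trg
   BEFORE INSERT ON accounts
   FOR EACH ROW EXECUTE PROCEDURE accounts_insert_router();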


Regards,



Liu Haifeng

Home: http://liuhaifeng.com



On May 30, 2012, at 4:13 AM, Igor Shmain wrote:



Can you please help with advice?

I need to design a solution for a database which will grow and will 
require a horizontal split at some moment.


Here is how I am planning to do it: Every record in every table has a 
shard number. After the database becomes too large (gets too many 
requests), the tables need to be horizontally split. It means that for 
every table all records with some specific shard numbers need to be 
moved to a new database.


My plan is to (1) create a copy of the database on a new server 
(backup/restore?), (2) synchronize the databases (how?), and then (3) 
forward all requests for the moved shards to the new database. (I will 
get rid of the unneeded shards later). An important requirement: there 
should be no outage for the users.


What are the ways to achieve it?

Thank you in advance,
-igorS



Re: [ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread amador alvarez

Hi Scott,
Why not replicate this master to the other location(s) using another
method, such as Bucardo? You can pick just the tables you really want
replicated there.
For the backup, switch to a hot backup (tar of $PGDATA) plus WAL archiving:
easier, faster and more efficient than a logical copy with pg_dump.


A.A

On 04/25/2012 09:11 AM, Scott Whitney wrote:
Hello, everyone. I want to throw a scenario out there to see what 
y'all think.


Soon, my cluster backups will be increasing in size inordinately. 
They're going to immediately go to 3x as large as they currently are 
with the potential to be about 20x within a year or so.


My current setup uses a single PG 8.x server doing nightly dumps (not 
ideal but sufficient for the moment, and one of the main reasons to 
move to PG 9) which are then downloaded from my hosting center to our 
offices for DR purposes. Each night I pull down roughly 5GB of 
compressed pg_dump data. Dumping this takes about 1.5hrs. Downloading 
this at 15Mbps takes about an hour. Soon I'll be looking at somewhere 
around 7hrs for the dumps to complete and downloading a 12GB file 
(which will take about 3 hrs). Oh, and I'll have to pay for 
significant bandwidth overage since I'm charged on a 95%, and while an 
hour a day does NOT kick me up to 15Mbps usage at 95%, 3hrs per night 
certainly will, so there's a real cost associated with this strategy 
as well.


While the time of the actual dumps is not a huge issue, the time of 
the download IS a large concern, especially since my support folks use 
that file daily to extract individual customer databases for restore 
in assisting customer support issues.


So, while now I have my pg_dumps completed around 2AM and downloaded 
to my local network at about 3AM, with the increase in our database 
sizes, what will be happening is that my pg_dump will not be completed 
until around 7AM, and the download would not be completed until around 
10AM, best-case scenario. Add into that support trying to restore a 
database...more on that in a moment.


My _new_ setup will instead be 2 PG 9.x servers with hot-standby 
enabled (at my hosting center) and a 3rd PG 9.x server at my local 
office also replicating off of the master. Each one of those servers 
will perform his own pg_dumps of the individual databases for 
backup/disaster recovery purposes, and while each dump might not be 
consistent with one another, each SERVER will have dumps consistent to 
itself, which is viable for our situation, and does not require me to 
download 12GB (or more) each night with all of those associated 
nightmares, costs and other problems.


Alright, well, I've got that part all thought out, and it seems like a 
good way to do it to me, but I'm _still_ running into the situation 
that I've got to take 8hrs-ish to run the pg_dump no matter where it 
runs, and when my support folks need it (which they do daily), this 
basically means that if they have to have a customer database up NOW 
NOW NOW for support reasons, they simply cannot have it within an hour 
in many cases. Specifically, one database takes between 2 and 7.5hrs 
to pg_dump depending on which format I use, so if they need a CURRENT 
copy, they're at least 4 hours out. Additionally, they can't directly 
use the replicating server at my local office, because they need to 
test the problems the customers are having which include pesky things 
like INSERT, UPDATE and DELETE, so they have to restore this data to 
another internal PG backend.


Enter my outside-the-box thinking.

I rather assume that you cannot do a start/stop backup on a 
hot-standby server. HOWEVER, what if


I set up a 4th database server internally at my office. Each night I 
stop PG on my 3rd server (the local one replicating off of the master) 
and rsync my pg_data directory to this new 4th server. I bring up the 
4th server NOT as a standby, but as a master. They would then have all 
customer data on an internal, usable PG system from the time of the 
rsync, and while it might not reflect the immediate state of the 
database, that's pretty well always true, and they're used to that, 
since whenever they clone a site, they're using the dumps done 
around midnight anyway.


I believe, then, that when I restart server #3 (the standby who is 
replicating), he'll say "oh, geez, I was down, let me catch up on all 
that crap that happened while I was out of the loop," he'll replay the 
WAL files that were written while he was down, and then he'll catch 
back up.


Does this sound like a viable option? Or does someone have additional 
suggestions?


Re: [ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread amador alvarez
I mean Bucardo (even though there are more tools like it) just for the
replication part, and the hot database backup only for the backup part.
Only one bounce is needed to turn archiving on; you do not need to shut
anything down during the backup itself.


A.A

On 04/25/2012 10:23 AM, Scott Ribe wrote:

On Apr 25, 2012, at 10:11 AM, Scott Whitney wrote:


I believe, then, that when I restart server #3 (the standby who is replicating), he'll 
say oh, geez, I was down, let me catch up on all that crap that happened while I 
was out of the loop, he'll replay the WAL files that were written while he was 
down, and then he'll catch back up.

Does this sound like a viable option? Or does someone have additional 
suggestions?

Perfectly viable. However, the WAL files must exist for this to happen. So you 
need to set wal_keep_segments appropriately, or set up WAL archiving.

Note that you could even provide a more up-to-date database for your people to 
work with. If the testbed is nearly up to date, then an rsync to update it 
would take very little time. So you could shut down the replica, rsync, and 
bring the replica back up.






Re: [ADMIN] Urgent help required

2012-04-19 Thread amador alvarez
Usually the standard location for data is /var/lib/pgsql/data for 
postgresql 8.

So try to restore this directory, and everything underneath it, first.

With that alone you can hopefully restore the whole system, assuming that 
any tablespaces that were created live under the standard location.


I would ask around to make sure there really are no backups; sometimes 
there is a guy who knows...


Good luck
A.A.


On 04/19/2012 10:47 AM, Nagaraj Shindagi wrote:

Hi team,

I am Nagaraj, and I am a newbie in this database world. I require your help.
Two days back I formatted one of my client's systems, which had a 
postgresql 8.2 database containing data, but I had not taken a 
backup of the data.

1) How can I recover the data from the formatted hard disk?
2) How many folders or files will there be in the base folder when 
postgresql 8.2 is installed?

3) How can I identify which folder contains which file?

Please help, it is very urgent.
thanks and regards


--
Nagaraj V Shindagi




Re: [ADMIN] Recreate primary key without dropping foreign keys?

2012-04-17 Thread amador alvarez


Unfortunately, I verified that the deferrable option does not let us 
drop the PK (postgres 8.4) while the FK's remain; I did not try the 
NOT VALID constraint option, as it is not supported by postgres 8.
So unless you are on a 9.x release, or you try the parallel-table approach, 
you have to follow the manual procedure (sketched below):


Generate the new index
Drop the FK's
Drop the PK
Recreate the PK, switching to the new index
Recreate the FK's
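
In SQL terms, roughly (a sketch using the 9.1 USING INDEX form; distributors/dist_id
follow the example from the original post below, while the referencing orders table
and its constraint name are hypothetical, and each FK referencing the table needs its
own drop/add pair):

CREATE UNIQUE INDEX CONCURRENTLY dist_id_new_idx ON distributors (dist_id);

BEGIN;
ALTER TABLE orders DROP CONSTRAINT orders_dist_id_fkey;
ALTER TABLE distributors
   DROP CONSTRAINT distributors_pkey,
   ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_new_idx;
ALTER TABLE orders
   ADD CONSTRAINT orders_dist_id_fkey
   FOREIGN KEY (dist_id) REFERENCES distributors (dist_id);
COMMIT;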

Can you afford a quick temporary user access to the database?




On 04/17/2012 06:43 AM, Igor Neyman wrote:

-Original Message-
From: Chris Ernst [mailto:cer...@zvelo.com]
Sent: Monday, April 16, 2012 10:55 PM
To: pgsql-admin@postgresql.org
Subject: Re: Recreate primary key without dropping foreign keys?

On 04/16/2012 07:02 PM, amador alvarez wrote:

How about deferring the FK's while recreating the PK ?
or using a temporary parallel table to be pointed by the other tables
(FK) and swap it up on the recreation.

Hmm.. Interesting.   But it appears that you have to declare the
foreign
key as deferrable at creation.  Is there any way to set an existing
foreign key as deferrable?

- Chris

May be this (from the docs) would help:

ADD table_constraint [ NOT VALID ]

 This form adds a new constraint to a table using the same syntax as CREATE 
TABLE, plus the option NOT VALID, which is currently only allowed for foreign key 
constraints. If the constraint is marked NOT VALID, the potentially-lengthy initial 
check to verify that all rows in the table satisfy the constraint is skipped. The 
constraint will still be enforced against subsequent inserts or updates (that is, 
they'll fail unless there is a matching row in the referenced table). But the 
database will not assume that the constraint holds for all rows in the table, until 
it is validated by using the VALIDATE CONSTRAINT option.

Using this option you can drop and recreate the corresponding FKs in a very short time, 
and start using them, while postponing VALIDATE CONSTRAINT until later.

It's similar to Oracle's adding an FK with the NOCHECK option, though IIRC there is no 
need to run VALIDATE CONSTRAINT later in that case.
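
In other words, something along these lines (a sketch; the orders table and its
constraint name are hypothetical, while distributors/dist_id mirror the original post):

ALTER TABLE orders DROP CONSTRAINT orders_dist_id_fkey;

-- re-add immediately without scanning the table; only a brief lock is needed
ALTER TABLE orders
   ADD CONSTRAINT orders_dist_id_fkey
   FOREIGN KEY (dist_id) REFERENCES distributors (dist_id) NOT VALID;

-- later, at a quiet moment, check the pre-existing rows
ALTER TABLE orders VALIDATE CONSTRAINT orders_dist_id_fkey;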

Regards,
Igor Neyman





Re: [ADMIN] Recreate primary key without dropping foreign keys?

2012-04-16 Thread amador alvarez

How about deferring the FK's while recreating the PK?
Or using a temporary parallel table for the other tables' FK's to point at, 
and swapping it in when the PK is recreated.


Cheers,
A.A

On 04/16/2012 06:54 AM, Chris Ernst wrote:

On 04/16/2012 02:39 AM, Frank Lanitz wrote:

Am 16.04.2012 10:32, schrieb Chris Ernst:

On 04/15/2012 10:57 PM, Frank Lanitz wrote:

On Sun, 15 Apr 2012 18:41:05 -0600 Chris Ernst cer...@zvelo.com
wrote:


Hi all,

In PostgreSQL 9.1.3, I have a few fairly large tables with
bloated primary key indexes.  I'm trying to replace them using
newly created unique indexes as outlined in the docs.  Something
like:

CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx ON distributors (dist_id);

ALTER TABLE distributors
    DROP CONSTRAINT distributors_pkey,
    ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_temp_idx;

However, the initial drop of the primary key constraint fails
because there are a whole bunch of foreign keys depending on it.

I've done some searching and haven't found a workable solution.
Is there any way to swap in the new index for the primary key
constraint without dropping all dependent foreign keys?  Or am I
pretty much stuck with dropping and recreating all of the foreign
keys?

REINDEX is not working here?

Hi Frank,

Thanks, but REINDEX is not an option as it would take an exclusive
lock on the table for several hours.

Well, from my limited view I guess any index-rebuilding action would
require such a lock, as it's the primary key with uniqueness. I'd think of a
complete re-init of the cluster with pg_dump and a restore, but this
would also need downtime, at least for write access.

Why is the index so bloated?

As in my original post, you can create a unique index concurrently and
then replace the primary key index with it.  This way, the index
creation doesn't require an exclusive lock.  You only need a very brief
exclusive lock to drop and recreate the primary key constraint using the
new index.

However, the index creation is not the issue here.  That part is done.
The issue is that there are several foreign keys depending on the
primary key index that I want to drop and replace with the newly built
unique index.  I would prefer not to drop and recreate all of the
foreign keys as that would require many hours of down time as well (the
very situation I was trying to avoid by building the index concurrently
and swapping it in).

I believe the index bloat is due to a combination of under-aggressive
autovacuum settings and recently deleting about 30% of the table.

- Chris





Re: [ADMIN] custom conflict resolution information request

2012-04-02 Thread amador alvarez


What are you using for replication?

-Kevin


Hi Kevin,

I set up a master-master asynchronous replication of one database with bucardo 
4.4.5 and testing right now 4.99.3.
I know the standard settings for when a conflict between rows with the same id comes up:

source - the rows on the source database always win (in a conflict, we copy rows from source to target)

target - the rows on the target database always win

skip - any conflicted rows are simply not replicated. Not recommended for most cases.

random - each database has an equal chance of winning each time

latest - the row that was most recently changed wins

abort - the sync is aborted on a conflict

You can also provide custom conflict handlers, which allow you to use business logic 
for better conflict resolution.



I have not found info or examples about the custom option yet, and I am still 
searching ...

Cheers,

Amador A.



On 04/02/2012 11:57 AM, Kevin Grittner wrote:

amador alvarez aalva...@d2.com wrote:


I am trying to find any kind of information or examples to deal
with custom conflict resolution on swap syncs in a master-master
replication.


What are you using for replication?

-Kevin




[ADMIN] custom conflict resolution information request

2012-03-29 Thread amador alvarez

Hi there,
I am trying to find any kind of information or examples dealing with 
custom conflict resolution on swap syncs in a master-master replication.


Any clue will be very welcome,

Thanks in advance,

Amador A.




Re: [ADMIN] clustering with shared storage

2012-03-20 Thread amador alvarez

Hi there,
I wonder why you are considering this solution: if something goes wrong 
within the data (logical corruption, user error), it will be propagated to 
both locations. Would a delayed standby database not be better?


I am curious because I am setting this up right now and do not see all 
the advantages of the shared-disk solution.


Thanks in advance,
AA

On 03/20/2012 12:58 PM, Devrim GÜNDÜZ wrote:

Hi,

On Tue, 2012-03-20 at 15:26 -0400, Gary Webster wrote:


The best I've found is here:
http://wiki.postgresql.org/images/5/58/06.5_-_Devrim_Gunduz_-_PostgreSQLClusteringWithRedHatClusterSuite--LT.pdf
but, it doesn't go into setup details.

http://www.gunduz.org/download.php?dlid=190

has a newer version (still not that detailed, but much better)


I have a few questions/issues, but figure I will attack it piecemeal, in
chronological order.

RHCS is handling the fencing.
I have installed postgres on the local drive of the primary, with pgdata on
the shared drive.
I am generally using the EnterpriseDB install package for v9.1.3 .

So, first, just how to install postgres on the secondary?

Just install the binaries -- don't initdb -- and then use the shared disk
as the $PGDATA. RHCS will mount the shared filesystem on only one
server, and that one will use the $PGDATA.


1)  If I leave it as secondary (non-active), it can't see the shared drive.
  Is this correct/OK ?

It *can* see, but it does not see (or it should not see) and it *cannot*
use.


If so, it will create its own pgdata locally, which is later ignored?

or

2)  Do I make it primary, then point its pgdata to the already existing
data during install ?!?

See above.

Regards,

