On Fri, Jan 20, 2012 at 16:45, Nicholson, Brad (Toronto, ON, CA) wrote:
> In the past I've used Slony to upgrade much larger database clusters
> than yours with minimal downtime (I'm talking seconds for the actual
> master switch). You set up a new replica on the new version and then
> move the [...]
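The master switch itself comes down to a short slonik script. A
minimal sketch, assuming an existing Slony cluster where set 1 already
replicates from node 1 (old version) to a caught-up subscriber on
node 2 (new version); the cluster name, hosts, and user below are
made up:

    #!/bin/sh
    # Hypothetical cluster name, hosts, and user.
    slonik <<'EOF'
    cluster name = upgrade;
    node 1 admin conninfo = 'dbname=app host=old-master user=slony';
    node 2 admin conninfo = 'dbname=app host=new-replica user=slony';

    # Block writes on the old origin, wait for node 2 to confirm it
    # has caught up, then hand the set over. Once clients reconnect
    # to node 2 they are on the new version.
    lock set (id = 1, origin = 1);
    wait for event (origin = 1, confirmed = 2, wait on = 1);
    move set (id = 1, old origin = 1, new origin = 2);
    EOF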
francis picabia wrote:
> That's great information. 9.0 is introducing streaming
> replication, so that is another option I'll look into.
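For reference, a 9.0-era streaming replication setup looks roughly
like the sketch below; hostnames and paths are hypothetical. One
caveat: streaming replication requires the same major version on both
ends, so unlike Slony it cannot by itself carry you across a major
upgrade.

    # Primary postgresql.conf (9.0): wal_level = hot_standby and
    # max_wal_senders = 3, plus a "replication" line in pg_hba.conf.
    # pg_basebackup only arrives in 9.1, so seed the standby with a
    # file-level copy between start/stop backup:
    psql -c "SELECT pg_start_backup('seed');"
    rsync -a --exclude=pg_xlog /var/lib/postgresql/9.0/main/ \
        standby:/var/lib/postgresql/9.0/main/
    psql -c "SELECT pg_stop_backup();"

    # Standby recovery.conf:
    #   standby_mode = 'on'
    #   primary_conninfo = 'host=primary-host user=replication'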
We upgrade multi-TB databases in just a couple of minutes using
pg_upgrade with the hard-link option. That doesn't count
post-upgrade vacuum/analyze time, but [...]
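As an illustration of that hard-link invocation, with Debian-style
paths for an 8.4-to-9.0 move (your bindirs and datadirs will differ):

    # Run as the postgres user with both clusters stopped. --link
    # hard-links data files instead of copying them, so elapsed time
    # is nearly independent of database size (both data directories
    # must live on the same filesystem).
    pg_upgrade \
        --old-bindir=/usr/lib/postgresql/8.4/bin \
        --new-bindir=/usr/lib/postgresql/9.0/bin \
        --old-datadir=/var/lib/postgresql/8.4/main \
        --new-datadir=/var/lib/postgresql/9.0/main \
        --link

    # Optimizer statistics are not migrated; the post-upgrade
    # analyze pass mentioned above is e.g.:
    vacuumdb --all --analyze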
-----Original Message-----
From: francis picabia [mailto:fpica...@gmail.com]
Sent: Friday, January 20, 2012 1:12 PM
To: pgsql-admin@postgresql.org
Subject: [ADMIN] Best practise for upgrade of 24GB+ database
In an academic setting, we have a couple of larger-than-typical
Postgres databases. One for Moodle is now 15GB and another for a
research project is currently 24GB. I notice that while upgrading
PostgreSQL in Debian from 8.3 to 8.4, the downtime on the 24GB
research database is extensive while using pg_upgradecluster.

How do others manage larger database upgrades while minimizing
downtime? Do you avoid pg_upgradecluster and simply do a pg_restore
from a dump made prior [...]
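(Debian's pg_upgradecluster dumps and reloads the cluster by default,
which is why the window grows with database size.) The dump-and-restore
alternative raised above might look like the following; the database
name and job count are illustrative:

    # Custom-format dump taken while the old cluster is still up:
    pg_dump -Fc -f research.dump research

    # After installing the new version, restore with parallel jobs
    # (-j needs the 8.4-or-later pg_restore) to shorten the window:
    createdb research
    pg_restore -j 4 -d research research.dump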