On 24/11/2010, at 5:07 PM, Tom Lane wrote:

> Elliot Chance <elliotcha...@gmail.com> writes:
>> This is a hypothetical problem but not an impossible situation. Just curious 
>> about what would happen.
> 
>> Let's say you have an OLTP server that keeps very busy on a large database. 
>> In this large database you have one or more tables on super fast storage 
>> like a Fusion-io card which is handling (for the sake of argument) 1 million 
>> transactions per second.
> 
>> Even though only one or a few tables are using almost all of the IO, pg_dump 
>> has to export a consistent snapshot of all the tables to somewhere else 
>> every 24 hours. But because it's such a large dataset (or perhaps just 
>> network congestion) the daily backup takes 2 hours.
> 
>> Here's the question: during that 2 hours more than 4 billion transactions 
>> could have occurred - so what's going to happen to your backup and/or database?
> 
> The DB will shut down to prevent wraparound once it gets 2 billion XIDs
> in front of the oldest open snapshot.
> 
>                       regards, tom lane

Wouldn't that mean at some point it would be advisable to move to 64-bit 
transaction IDs? Or would that change too much of the codebase?
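For what it's worth, at 1 million TPS a 2-hour dump window is about 7.2 billion 
transactions, well past the ~4.3 billion values a 32-bit XID can hold, so the 
2-billion-in-front limit Tom describes would bite long before the dump finished. 
In the meantime, a rough sketch (nothing more) of how you can watch how far each 
database is from wraparound using the standard catalogs:

    -- age() reports how many transactions old the oldest unfrozen XID is;
    -- the server starts emitting warnings well before the ~2 billion
    -- shutdown point Tom mentions.
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;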