Hi All,
I have another issue, related to taking a backup of a fairly large database. I have been getting the following error:
anil@ubuntu107:~/Desktop$ pg_dump -Uadmin -h192.168.2.5 dbname > filename.sql
pg_dump: Dumping the contents of table tbl_voucher failed:
PQgetCopyData() failed.
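If it helps to narrow this down, the failing table can be dumped on its own with verbose output (same connection options as above; -t and --verbose are standard pg_dump switches), for example:
anil@ubuntu107:~/Desktop$ pg_dump -Uadmin -h192.168.2.5 --verbose -t tbl_voucher dbname > tbl_voucher.sql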
On 26 August 2011, 11:48, Niyas wrote:
Hi All,
I have another issue, related to taking a backup of a fairly large database. I have been getting the following error:
anil@ubuntu107:~/Desktop$ pg_dump -Uadmin -h192.168.2.5 dbname > filename.sql
pg_dump: Dumping the contents of table tbl_voucher failed: PQgetCopyData() failed.
Actually, the database has not crashed. I can run my application perfectly.
On 26 August 2011, 12:46, Niyas wrote:
Actually, the database has not crashed. I can run my application perfectly.
That does not mean one of the backends did not crash. Check the log.
Tomas
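For example (the log location below is only a guess; it depends on the distribution and on the logging_collector / log_directory settings):
$ tail -n 200 /var/log/postgresql/postgresql-8.4-main.log             # typical Debian/Ubuntu location; adjust to your setup
$ grep -iE 'error|fatal|panic' /var/log/postgresql/postgresql-8.4-main.log   # anything the backend logged around the time of the dump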
Hello
2011/8/26 Niyas cmni...@gmail.com:
Hi All,
I have another issue, related to taking a backup of a fairly large database. I have been getting the following error:
anil@ubuntu107:~/Desktop$ pg_dump -Uadmin -h192.168.2.5 dbname > filename.sql
pg_dump: Dumping the contents of table tbl_voucher failed: PQgetCopyData() failed.
I also guessed the same at the initial stage of debugging, so I tried to export the tbl_voucher data to a file and it worked fine. Then I googled and found a link explaining that the reason is the large size of the database, but I didn't find any proper solution on the internet.
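For reference, that kind of per-table export can be done with COPY; a sketch (the file names are arbitrary):
-- server-side export (requires superuser); the file ends up on the database server
COPY tbl_voucher TO '/tmp/tbl_voucher.csv' WITH CSV HEADER;
-- client-side equivalent, run from psql; the file is written on the client
\copy tbl_voucher TO 'tbl_voucher.csv' WITH CSV HEADER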
On 08/13/2011 05:44 PM, MirrorX wrote:
At the moment, copying the PGDATA folder (excluding the pg_xlog folder), compressing it and storing it on a local storage disk takes about 60 hours, while the resulting file is about 550 GB. The archives are kept in a different location.
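For context, the usual shape of that kind of file-level base backup on 8.4 looks roughly like this (the label and paths below are placeholders, not the actual setup):
$ psql -c "SELECT pg_start_backup('nightly');"                                   # run as a superuser; 'nightly' is just a label
$ tar -czf /backup/base_$(date +%Y%m%d).tar.gz --exclude=pg_xlog -C /var/lib/postgresql/8.4 main
$ psql -c "SELECT pg_stop_backup();"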
I looked into data partitioning and it is definitely something we will use soon. But as far as the backups are concerned, how can I take a backup incrementally? If I get it correctly, the idea is to partition a big table (using a date field, for example) and then take a dump each night of, say, the most recent partition.
On Mon, Aug 15, 2011 at 5:06 PM, MirrorX mirr...@gmail.com wrote:
I looked into data partitioning and it is definitely something we will use soon. But as far as the backups are concerned, how can I take a backup incrementally? If I get it correctly, the idea is to partition a big table (using a date field, for example) and then take a dump each night of, say, the most recent partition.
On 08/15/11 4:12 PM, Scott Marlowe wrote:
Exactly. Sometimes PITR is the right answer, sometimes partitioning is.
Those answer two completely different questions.
--
john r pierce                          N 37, W 122
santa cruz ca                          mid-left coast
Thanks a lot, I will definitely look into that option.
In the meantime, if there are any other suggestions I'd love to hear them.
On Sun, Aug 14, 2011 at 12:44 AM, MirrorX mirr...@gmail.com wrote:
The issue here is that the server is heavily loaded. The daily traffic is heavy, which means the db size is increasing every day (by 30 GB on average) and the size is already pretty large (~2 TB).
At the moment, copying the PGDATA folder (excluding the pg_xlog folder), compressing it and storing it on a local storage disk takes about 60 hours.
Hello to all,
I am trying to find an acceptable solution for a backup strategy on one of our servers (it will be 8.4.8 soon, now it is 8.4.7). I am familiar with both logical backups (dump/restore) and physical backups (pg_start_backup, WAL archives etc.) and have tried both in some other cases.
The issue here is that the server is heavily loaded.
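As a point of reference, the WAL-archiving side of a physical backup on 8.4 mostly comes down to a couple of postgresql.conf settings; the archive path below is a placeholder:
# postgresql.conf (8.4) -- /archive stands in for wherever the WAL segments should go
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'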
One possible answer to your issues is data partitioning. By partitioning your data by date, primary key or some other field, you can back up individual partitions to get incremental backups. I run a stats database that partitions by day, and we can just back up yesterday's partition each night.
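A minimal sketch of that approach on 8.4, which uses inheritance-based partitioning (all table, column and file names below are made up for illustration):
-- parent table plus one child table per day; inserts are routed to the
-- right child by a trigger or by the application
CREATE TABLE stats (
    id      bigint,
    stat_ts timestamp NOT NULL,
    value   numeric
);
CREATE TABLE stats_20110814 (
    CHECK (stat_ts >= '2011-08-14' AND stat_ts < '2011-08-15')
) INHERITS (stats);

Each night, only yesterday's child table needs to be dumped:
$ pg_dump -t stats_20110814 dbname > stats_20110814.sql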