Re: [BackupPC-users] how does backuppc handle DBs

2009-02-07 Thread Nick Smith
OK, I use volume shadow so I don't need to dump.
What does it do for the backup after you have dumped the DB?
Does it only take what has changed since the last dump, or does it take the entire dump?

On Fri, Feb 6, 2009 at 10:26 PM,  m...@ehle.homelinux.org wrote:
 I make my mysql servers dump their DBs to a file before running BackupPC.

--
Create and Deploy Rich Internet Apps outside the browser with Adobe(R)AIR(TM)
software. With Adobe AIR, Ajax developers can use existing skills and code to
build responsive, highly engaging applications that combine the power of local
resources and data with the reach of the web. Download the Adobe AIR SDK and
Ajax docs to start building applications today-http://p.sf.net/sfu/adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] how does backuppc handle DBs

2009-02-07 Thread John Rouillard
On Sat, Feb 07, 2009 at 08:55:27AM -0500, Nick Smith wrote:
 OK, I use volume shadow so I don't need to dump.

Umm, I assume you are telling the database that you are going to
snapshot its volume, so that it syncs all of its data to disk and the
files are consistent when you take the snapshot, right?

If the database supports PITR, then presumably you are doing the
standard thing: backing up the (possibly inconsistent) database files
plus the rollback/journal logs that are needed to recover them.

If you aren't doing one of those two things (manually or
automatically) and you still have restorable backups, you are just lucky.

 What does it do for the backup after you have dumped the DB?
 Does it only take what has changed since the
 last dump, or does it take the entire dump?

If the file has changed, it backs up the entire file, but using rsync
it only transfers the deltas between the prior version of the file and
the current version. Using tar or smb, it copies the entire file over
the wire.

Also, it can compress the file on disk. Do you see files under
/backuppc-root/cpool? Does the compressed size of the source file
approximate the size of the file you find on disk?
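As a rough, standalone way to check whether a ~50% compression figure is plausible, you can gzip a sample of the data yourself and compare sizes. A minimal sketch (the sample file here is synthetic; point it at a copy of a real database file to get a meaningful number):

```shell
# Compress a sample file the same general way BackupPC's compressed pool
# does (zlib/gzip) and compare the before/after sizes.
sample=$(mktemp)
head -c 100000 /dev/zero > "$sample"   # stand-in for compressible DB data
orig=$(wc -c < "$sample")
comp=$(gzip -c "$sample" | wc -c)
echo "original: $orig bytes, compressed: $comp bytes"
rm -f "$sample"
```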

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111



Re: [BackupPC-users] how does backuppc handle DBs

2009-02-07 Thread David Morton

Nick Smith wrote:
 OK, I use volume shadow so I don't need to dump.
 What does it do for the backup after you have dumped the DB?

That doesn't necessarily do the job - the database may be in the middle
of writing a change, and the shadow copy could pick it up in an
inconsistent state.

To back up a database, you have to read the documentation for that
database. For example, a Google search for "mysql backup" gets this as
the first hit:

http://dev.mysql.com/doc/refman/5.1/en/backup.html

YMMV for other databases.

At the very least, you may need to issue commands to flush tables and
take a read lock while you start the shadow snapshot, and then release
the locks afterward.
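For MySQL, that sequence might look roughly like the following sketch (not tested here: the user name and snapshot script are hypothetical, and credentials are assumed to come from an option file). Note that FLUSH TABLES WITH READ LOCK is released as soon as the client session ends, so the snapshot has to be triggered from inside the same session:

```shell
# Hold a global read lock across the snapshot, all within one session.
mysql --user=backup <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! /usr/local/bin/take-snapshot.sh
UNLOCK TABLES;
EOF
```

The `\!` line shells out from the mysql client to run the snapshot command; whether an equivalent hook is available on a Windows client may differ.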

OTOH, doing a dump (such as mysqldump for MySQL) usually means the dump
itself is in a consistent state, and backing up that file means you have
a consistent backup.  While your BackupPC run may pick that up as well
as the binary database files, you don't know for sure whether the
database files are consistent.

The other good thing about a dump is that it can be loaded on newer
versions of the database in case of a rebuild - I've heard horror
stories of when someone tried to restore a binary database file after a
crash, only to have it fail because the database versions didn't match.

Ultimately, the question you want answered is, "Can I restore the
database files and have them work?"  Well, the answer to that is, of
course, "Test it!"  An untested backup is no backup at all.

As for the amount of data transferred: if you are doing a full backup,
it should transmit the entire file, but if you are using rsync and doing
an incremental, it should only transmit what has changed within a file.
tar and smb have to transmit a changed file in its entirety, I believe.



Re: [BackupPC-users] how does backuppc handle DBs

2009-02-07 Thread Nick Smith
Well, please correct me if my thinking is not correct.

I'm using volume shadow, which is supposed to be a way to back up in-use
files without unmounting or corrupting them.

The volume shadow and backups are being done after hours, at a time
when no one is logged on and making changes to the database.

So from what I've read and understand, this is a safe way to back up
SQL and Exchange.

I could edit the pre/post scripts that launch the volume shadow to
unmount the SQL and Exchange DBs if you guys think that would be
a safer way.  It just adds a bit of complexity to a relatively easy
script, and I am no programmer; I thought I was lucky to get this far.

I do understand that an untested backup is no backup at all; I just
need to figure out a good time to bring a doctor's office down that's
two hours away to do the testing.

I am doing only full backups with rsyncd over a site-to-site VPN
connection.  I think what you are saying is that rsync will do a
compare on the original file and only transfer what's changed, correct?
Someone else on here told me that repeated full backups do pretty much
the same as an incremental, with a little more CPU overhead and
slightly increased disk usage; does that still stand as correct?

If it is actually only transmitting the changes in the DB file, then I
could see how the backup was only taking 40 mins.  I just don't
understand why it tells me I'm backing up 10 gigs, unless it's just
telling me the size of the file is 10 gigs and reporting it as backed up.

Thanks for the posts; any other input would be much appreciated.


Re: [BackupPC-users] how does backuppc handle DBs

2009-02-07 Thread Les Mikesell
Nick Smith wrote:
 Well, please correct me if my thinking is not correct.
 
 I'm using volume shadow, which is supposed to be a way to back up in-use
 files without unmounting or corrupting them.

A shadow copy is a snapshot of the volume at the moment it is taken. You 
need to do something to ensure that the database file is consistent at 
that moment.

 The volume shadow and backups are being done after hours, at a time
 when no one is logged on and making changes to the database.

Shutting down the database before the shadow is made might work.

 So from what I've read and understand, this is a safe way to back up
 SQL and Exchange.

It is only part of it. Once you have a consistent snapshot, shadowing 
permits the database to be live during the backup run.

 I could edit the pre/post scripts that launch the volume shadow to
 unmount the SQL and Exchange DBs if you guys think that would be
 a safer way.  It just adds a bit of complexity to a relatively easy
 script, and I am no programmer; I thought I was lucky to get this far.

What you are getting is approximately what you'd have if the machine 
crashed at the time you made your shadow.  If you are lucky, the apps 
can fix it when they start up.  Stopping the apps just for the time it 
takes to make the shadow will make sure the files are closed and clean. 
Or, as someone else already mentioned, doing a dump to a more portable 
format is even better, since in a real disaster scenario you might be 
trying to restore into a somewhat different environment.

 I do understand that an untested backup is no backup at all; I just
 need to figure out a good time to bring a doctor's office down that's
 two hours away to do the testing.

A spare machine (or a VMware image) might be a better place to test. 
You can easily see the restored file size on any machine - and if you 
have the same database software, see if it will start up and find
recent entries.

 I am doing only full backups with rsyncd over a site-to-site VPN
 connection.  I think what you are saying is that rsync will do a
 compare on the original file and only transfer what's changed, correct?

Yes, it walks through the files comparing block checksums and only 
transfers the differing parts.  Since database files may have their 
changes distributed throughout the file, it is hard to predict how well 
this will work.  For something like a growing logfile it is very 
effective at finding and sending only the new part.  Also, if your VPN 
is doing any compression, you might be saving some time there, since 
databases tend to have very compressible data.

 Someone else on here told me that repeated full backups do pretty much
 the same as an incremental, with a little more CPU overhead and
 slightly increased disk usage; does that still stand as correct?

Incrementals skip over files where the length and timestamp match the 
copy in the previous full run.  Changed files (like your big DB) always 
go through the block comparison anyway.

 If it is actually only transmitting the changes in the DB file, then I
 could see how the backup was only taking 40 mins.  I just don't
 understand why it tells me I'm backing up 10 gigs, unless it's just
 telling me the size of the file is 10 gigs and reporting it as backed up.

The receiving side actually reconstructs the full file from the matching 
old blocks and the new changes, so you end up with a new 10 gig file 
which is then compressed and added to the pool.  There's no particular 
relationship between this size and the size of the new/changed blocks 
transmitted.  The one downside of the BackupPC design is that large 
files with small changes between runs can't pool the large portion of 
duplicated content.
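That last point can be seen with a toy experiment: pooling matches whole-file content, so two large files that differ by even one byte get different content hashes and therefore separate pool entries. A minimal sketch, using md5sum as a stand-in for BackupPC's pooling hash:

```shell
# Two nearly identical files still land in different pool entries,
# because pooling is per whole file, not per block.
a=$(mktemp); b=$(mktemp)
head -c 1000000 /dev/zero > "$a"
cp "$a" "$b"
printf 'x' >> "$b"                 # a single-byte difference
ha=$(md5sum "$a" | cut -d' ' -f1)
hb=$(md5sum "$b" | cut -d' ' -f1)
echo "$ha  $hb"                    # the two hashes differ
rm -f "$a" "$b"
```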

-- 
   Les Mikesell
lesmikes...@gmail.com





Re: [BackupPC-users] how does backuppc handle DBs

2009-02-06 Thread Nick Smith
On Fri, Feb 6, 2009 at 9:09 PM, Nick Smith nick.smit...@gmail.com wrote:

And to add to my own post: I'm doing these backups through a pfSense
firewall on both sides, and according to the stats for the last 4 hours
(when the 10 gig backup was supposedly performed), the output
transmitted on the WAN was only 50 megs. So something isn't working
correctly here.



[BackupPC-users] how does backuppc handle DBs

2009-02-06 Thread Nick Smith
I have a couple of clients that back up nightly over the internet.
I'm starting to doubt whether the backups are actually being completed
successfully. BackupPC tells me they are, but the reason I'm questioning
it is that it is supposed to back up 10 gigs a night (their SQL DB
changes daily), and the backup completes in 40 mins over a connection
with a max speed of 600 kb.  The backup results do say it's getting
about 50% compression and that the backup is only 5 gigs when it
completes.

How does BackupPC handle databases? Does it take just what's changed,
or if the DB changes does it have to back up the entire thing again?

I've gotten 14 complete full backups (one a day), and they all have the
same results: completed successfully, about 10 gigs before compression
and pooling.

Is there a way I can make sure that things are working as planned?

I mean without doing a restore and testing what it's backing up.

I just don't want to think I'm getting good backups when I'm really not.

I'm using scripts that use volume shadow to get backups of their
Windows servers after hours.

It just seems like the backups are going way too fast for the speed of
the connection and the size of the data.

Thanks for any light you can shed on the subject.



Re: [BackupPC-users] how does backuppc handle DBs

2009-02-06 Thread mark
I make my mysql servers dump their DBs to a file before running BackupPC.
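A minimal sketch of such a pre-backup dump (not tested here; the output path is illustrative and credentials are assumed to come from the usual option file):

```shell
# Dump all databases to a file that the subsequent BackupPC run picks up.
# --single-transaction gives a consistent dump for InnoDB without blocking
# writers; for MyISAM tables, --lock-all-tables is the safer choice.
mysqldump --single-transaction --all-databases \
  > /var/backups/mysql/nightly-dump.sql
```

In BackupPC this is commonly hooked in via $Conf{DumpPreUserCmd} so the dump runs just before each backup.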
