[BackupPC-users] BackupPC_link got error -4 when calling MakeFileLink

2009-01-14 Thread tagore

The mount was bad.

Good mount:
/var/lib/backuppc

Bad mount:
/var/lib/backuppc/pc

Thanks






[BackupPC-users] Custom schedule

2009-01-14 Thread tagore

Hi!

I would like a custom schedule per host (host*.pl).

Example backup start times:
host1.pl:   19:00
host2.pl:   20:00
host3.pl:   21:00

How can I do this?

Which $Conf parameter do I need?


Thanks






[BackupPC-users] Creating shadow copies in windows

2009-01-14 Thread Kevin Kimani
Hi all,

I have been trying to use the script posted for copying open files in
Windows, but I have not been able to fully make it run. Could someone who
has used the script help me and let me know how to make it work? I would
really appreciate it.

Kind Regards

Kevin


Re: [BackupPC-users] Custom schedule

2009-01-14 Thread Johan Ehnberg
tagore wrote:
 Hi!
 
 I would like a custom schedule per host (host*.pl).
 
 Example backup start times:
 host1.pl:   19:00
 host2.pl:   20:00
 host3.pl:   21:00
 
 How can I do this?
 
 Which $Conf parameter do I need?
 
 
 Thanks
 

This has been asked a few times before. In short you have a few options:

- WakeupSchedule (for specifying when NOT to run anything)
- MaxBackups (judging from your example, you may really just want to spread out the load)
- Blackouts (for specifying when not to run unless necessary)
- cron jobs (for forcing a run outside BackupPC, beware - not ideal)

Cron is the only way to accomplish exactly what you're looking for.

Regards,
Johan
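
As an illustration of the BlackoutPeriods approach (a sketch only; the
values are assumptions, and the actual start time still depends on
$Conf{WakeupSchedule} and on $Conf{BlackoutGoodCnt} consecutive good
pings having been seen), a per-host config like the following blocks
backups from midnight until the desired hour, so host1 is first picked
up at the next wakeup at or after 19:00:

    # host1.pl -- sketch only; host2.pl and host3.pl would use
    # hourEnd => 20.0 and 21.0 respectively.
    $Conf{BlackoutPeriods} = [
        {
            hourBegin =>  0.0,
            hourEnd   => 19.0,                  # no backups before 19:00
            weekDays  => [0, 1, 2, 3, 4, 5, 6], # every day
        },
    ];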



[BackupPC-users] reinstall problems

2009-01-14 Thread Janeks Kamerovskis
Hi!

I successfully installed backuppc using apt-get, but apparently made
a mistake in the config.pl file. I then tried to remove it with apt-get
and install it again, but now I am getting errors:

apt-get install backuppc
Reading package lists... Done
Building dependency tree... Done
backuppc is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
Need to get 0B of archives.
After unpacking 0B of additional disk space will be used.
Setting up backuppc (2.1.2-6) ...
This module is already enabled!
Starting backuppc: Wrong user: my userid is 114, instead of  ()
BackupPC::Lib-new failed
invoke-rc.d: initscript backuppc, action start failed.
dpkg: error processing backuppc (--configure):
 subprocess post-installation script returned error exit status 255
Errors were encountered while processing:
 backuppc

What could be done to correct this?

Thanks in advance,
Janeks





[BackupPC-users] Backuppc timing out

2009-01-14 Thread cantthinkofanickname

This timeout is now happening for the SMB-type backup. Can anyone suggest
what to check? (I suspect my client is somehow misconfigured, because
BackupPC reports the ping as OK.)






Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Les Mikesell
Johan Ehnberg wrote:

 OK. I can see now why this is true. But it seems like one could
 rewrite the backuppc rsync protocol to check the pool for a file with
 same checksum  before syncing. This could give some real speedup on
 long files. This would be possible at least for the cpool where the
 rsync checksums (and full file checksums) are stored at the end of
 each file.
 Now this would be quite the feature - and it fits perfectly with the idea 
 of smart pooling that BackupPC has. The effects are rather interesting:

 - Different incremental levels won't be needed to preserve bandwidth
 - Full backups will indirectly use earlier incrementals as reference

 Definite wishlist item.
 But you'll have to read through millions of files and the common case of 
 a growing logfile isn't going to find a match anyway.  The only way this 
 could work is if the remote rsync could send a starting hash matching 
 the one used to construct the pool filenames - and then you still have 
 to deal with the odds of collisions.

 
 Sure, you are pointing to something, and you are right. What I don't see 
 is why we'd have to do an (extra?) read through millions of files?

You are asking to find an unknown file among millions using a checksum 
that is stored at the end.  How else would you find it?  The normal test 
for a match uses the hashed filename to quickly eliminate the 
possibilities that aren't hash collisions - this only requires reading a 
few of the directories, not each file's contents, and is something the 
OS can do quickly.

  That is
 done with every full anyway,

No, nothing ever searches the contents of the pool.  Fulls compare 
against the previously known matching files from that client.

 and in the case of an incremental it would 
 only be necessary for new/changed files. It would in fact also speed up 
 those logs because of rotation: an old log changes name but is still 
 found on the server.

On the first rotation that would only be true if the log hadn't grown 
since the moment of the last backup.  You'd need file chunking to take 
advantage of partial matches.  After that, a rotation scheme that 
attached a timestamp to the filename would make more sense.

 I suspect there is no problem in getting the hash with some tuning to 
 Rsync::Perl? It's just a command as long as the protocol allows it.

There are two problems.  One is that you have a stock rsync at the other 
end and at least for the protocols that Rsync::Perl understands there is 
not a full hash of the file sent first.  The other is that even if it 
did, it would have to be computed exactly in the same way that backuppc 
does for the pool filenames or you'll spend hours looking up each match.

 And collisions aren't exactly a performance problem, are they? BackupPC 
 handles them nicely from what I've seen.

But it must have access to the contents of the file in question to 
handle them.  It might be possible to do that with an rsync block 
compare across the contents but you'd have to repeat it over each hash 
match to determine which, if any, have the matching content. It might 
not be completely impossible to do remotely, but it would take a well 
designed client-server protocol to match up unknown files.

-- 
   Les Mikesell
 lesmikes...@gmail.com
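
To make the lookup described above concrete: BackupPC names each pool
file after a partial-file MD5 digest, so finding a candidate means
recomputing that digest and checking one small directory. The following
is a simplified sketch of the idea only -- it hashes the length plus the
first and last 128 KB of a large file, whereas BackupPC's real routine
(BackupPC::Lib::File2MD5) differs in detail and also appends a suffix
for hash collisions:

    use strict;
    use warnings;
    use Digest::MD5;

    # Simplified partial-file digest in the spirit of BackupPC's pool
    # naming; NOT the exact algorithm used by BackupPC::Lib::File2MD5.
    sub partial_digest {
        my ($path) = @_;
        my $size = -s $path;
        open my $fh, '<', $path or die "open $path: $!";
        binmode $fh;
        my $md5 = Digest::MD5->new;
        $md5->add($size);
        my $buf;
        read $fh, $buf, 131072;                 # first 128 KB
        $md5->add($buf) if defined $buf;
        if ($size > 262144) {                   # large file: hash the tail too
            seek $fh, -131072, 2;               # 2 == SEEK_END
            read $fh, $buf, 131072;
            $md5->add($buf) if defined $buf;
        }
        close $fh;
        return $md5->hexdigest;
    }

    # Pool files live under X/Y/Z/digest, where X, Y, Z are the first
    # three hex digits of the digest.
    die "usage: $0 file\n" unless @ARGV;
    my $digest = partial_digest($ARGV[0]);
    my ($d1, $d2, $d3) = split //, $digest;
    print "candidate pool file: cpool/$d1/$d2/$d3/$digest\n";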





Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 01:11:06 -0600 on Wednesday, January 14, 2009:
  Johan Ehnberg wrote:
   OK. I can see now why this is true. But it seems like one could
   rewrite the backuppc rsync protocol to check the pool for a file with
   same checksum  before syncing. This could give some real speedup on
   long files. This would be possible at least for the cpool where the
   rsync checksums (and full file checksums) are stored at the end of
   each file.
   
   Now this would be quite the feature - and it fits perfectly with the idea 
   of smart pooling that BackupPC has. The effects are rather interesting:
   
   - Different incremental levels won't be needed to preserve bandwidth
   - Full backups will indirectly use earlier incrementals as reference
   
   Definite wishlist item.
  
  But you'll have to read through millions of files and the common case of 
  a growing logfile isn't going to find a match anyway.  The only way this 
  could work is if the remote rsync could send a starting hash matching 
  the one used to construct the pool filenames - and then you still have 
  to deal with the odds of collisions.
  

First, I agree that this is not necessarily easy and would probably
require some significant changes to the design of how the pool files
are named and structured.

However, collisions are pretty easy to deal with.
Also, my suggestion was to do this on a selective basis -- say large
files when backing up over a slow link, so it would not necessarily
involve millions of files.

Finally, I don't know much at all about the inner workings of
rsync. But the above might be possible if rsync allowed you to
calculate the checksums first before initiating transfer. If that were
true, then it might not be hard to have a corresponding process on the
BackupPC server check the checksum against the existing pool before
deciding to proceed with the data transfer. The big problem would be
that the partial file md5sum used to define pool file names is not
consistent with the rsync checksum calculations -- which all goes back
to my thinking that long-term, a relational database is a better
structure for storing backup information than the kludge of
hard-links plus attrib files. In this case, a relational database would
allow for pool lookup based on rsync md5sums (as well as the existing
partial md5sums).
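
As a sketch of what such an index could look like (purely hypothetical --
BackupPC has nothing like this, and the table, column and module choices
below are assumptions), a small SQLite table keyed on more than one
digest would let the server look a pool file up by whichever checksum
the transfer method can supply:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical side index; not part of BackupPC.
    my $dbh = DBI->connect('dbi:SQLite:dbname=poolindex.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS pool_files (
            pool_path   TEXT PRIMARY KEY,  -- e.g. cpool/1/2/3/123abc...
            size        INTEGER NOT NULL,
            partial_md5 TEXT NOT NULL,     -- BackupPC-style partial-file digest
            full_md5    TEXT               -- whole-file digest, e.g. what rsync could send
        )
    });
    $dbh->do('CREATE INDEX IF NOT EXISTS idx_full
                  ON pool_files (full_md5, size)');

    # Before transferring, look up the checksum the client supplied.
    my ($full_md5, $size) = @ARGV;
    my ($path) = $dbh->selectrow_array(
        'SELECT pool_path FROM pool_files WHERE full_md5 = ? AND size = ?',
        undef, $full_md5, $size);
    print defined $path ? "already pooled: $path\n"
                        : "not pooled, transfer needed\n";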



[BackupPC-users] backups not running on schedule

2009-01-14 Thread Lee A. Connell
I have a server that will not automatically back up any of my hosts; I
have to start each backup manually, and then it works just fine. Here is
one of my configs:

 

$Conf{PingPath} = '/bin/echo';

$Conf{PingMaxMsec} = 1000;

# Minimum period in days between full and incremental backups:
$Conf{FullPeriod} = 6.97;
$Conf{IncrPeriod} = 0.97;

$Conf{FullKeepCnt} = 1;
$Conf{IncrKeepCnt} = 14;
$Conf{ClientTimeout} = 179600;
$Conf{PartialAgeMax} = 3;

# Note that additional fulls will be kept for as long as is necessary
# to support remaining incrementals.

$Conf{BlackoutPeriods} = [
    {
        hourBegin =>  7.0,
        hourEnd   => 19.0,
        weekDays  => [1, 2, 3, 4, 5],
    },
];

# What transport to use to back up the client [smb|rsync|rsyncd|tar|archive]:
$Conf{XferMethod} = 'rsyncd';

# The file system path or the name of the rsyncd module to backup when
# using rsync/rsyncd:
# If this is defined, only these files/paths will be included in the backup:
$Conf{RsyncShareName} = ['WWW'];

 

Lee Connell
Ammonoosuc Computer Services, Inc. 

Network Systems Engineer 

15 Main St. Suite 10
Littleton, NH 03561
603-444-3937 

If you require an immediate response, please send your inquiry to
helpd...@ammocomp.com

 



Re: [BackupPC-users] Creating shadow copies in windows

2009-01-14 Thread Jeffrey J. Kosowsky
Kevin Kimani wrote at about 11:21:44 +0300 on Wednesday, January 14, 2009:
  Hi all,
  
  I have been trying to use the script posted for copying open files in
  Windows, but I have not been able to fully make it run. Could someone who
  has used the script help me and let me know how to make it work? I would
  really appreciate it.
  
  Kind Regards
  
  Kevin
  
I think I'm the author of the script, but (unfortunately) your email
doesn't provide any details. You say only that you have not been able
to fully make it run and that you want to know how to make it work. I do
not know how to answer such an open-ended question.




Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 07:59:36 -0600 on Wednesday, January 14, 2009:
  Johan Ehnberg wrote:
  
   OK. I can see now why this is true. But it seems like one could
   rewrite the backuppc rsync protocol to check the pool for a file with
   same checksum  before syncing. This could give some real speedup on
   long files. This would be possible at least for the cpool where the
   rsync checksums (and full file checksums) are stored at the end of
   each file.
   Now this would be quite the feature - and it fits perfectly with the idea 
   of smart pooling that BackupPC has. The effects are rather interesting:
  
   - Different incremental levels won't be needed to preserve bandwidth
   - Full backups will indirectly use earlier incrementals as reference
  
   Definite wishlist item.
   But you'll have to read through millions of files and the common case of 
   a growing logfile isn't going to find a match anyway.  The only way this 
   could work is if the remote rsync could send a starting hash matching 
   the one used to construct the pool filenames - and then you still have 
   to deal with the odds of collisions.
  
   
   Sure, you are pointing to something, and you are right. What I don't see 
   is why we'd have to do an (extra?) read through millions of files?
  
  You are asking to find an unknown file among millions using a checksum 
  that is stored at the end.  How else would you find it?  The normal test 
for a match uses the hashed filename to quickly eliminate the 
  possibilities that aren't hash collisions - this only requires reading a 
  few of the directories, not each file's contents and is something the OS 
  can do quickly.

That's why I mentioned in my previous post that having a relational
database structure would be very helpful here since the current
hard link-based storage approach allows for only a single way of
efficiently retrieving pool files (other than by their backup path)
and that method depends on a non-standard partial file md5sum. A
relational database would allow for pool files to be found based upon
any number of attributes or md5sum-type labels.

  
That is
   done with every full anyway,
  
  No, nothing ever searches the contents of the pool.  Fulls compare 
  against the previously known matching files from that client.
  
   and in the case of an incremental it would 
   only be necessary for new/changed files. It would in fact also speed up 
   those logs because of rotation: an old log changes name but is still 
   found on the server.
  
  On the first rotation that would only be true if the log hadn't grown 
  since the moment of the last backup.  You'd need file chunking to take 
  advantage of partial matches.  After that, a rotation scheme that 
  attached a timestamp to the filename would make more sense.
  
   I suspect there is no problem in getting the hash with some tuning to 
   Rsync::Perl? It's just a command as long as the protocol allows it.
  
  There are two problems.  One is that you have a stock rsync at the other 
  end and at least for the protocols that Rsync::Perl understands there is 
  not a full hash of the file sent first.  The other is that even if it 
  did, it would have to be computed exactly in the same way that backuppc 
  does for the pool filenames or you'll spend hours looking up each
  match.
Are you sure that you can't get rsync to calculate the checksums (both
block and full-file) before file transfer begins? I don't know -- I'm
just asking.

  
    And collisions aren't exactly a performance problem, are they? BackupPC 
    handles them nicely from what I've seen.
  
  But it must have access to the contents of the file in question to 
  handle them.  It might be possible to do that with an rsync block 
  compare across the contents but you'd have to repeat it over each hash 
  match to determine which, if any, have the matching content. It might 
  not be completely impossible to do remotely, but it would take a well 
  designed client-server protocol to match up unknown files.
  
  -- 
 Les Mikesell
   lesmikes...@gmail.com
  
  
  


Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Craig Barratt
Jeff writes:

 Are you sure that you can't get rsync to calculate the checksums (both
 block and full-file) before file transfer begins? I don't know -- I'm
 just asking.

I believe rsync's --checksum option precomputes and sends the whole-file
checksum (which, as has been noted, is different from BackupPC's
partial-file checksum).

Currently File::RsyncP doesn't handle that option.  Also, it requires
two passes on each changed file on the client.

Craig



Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Les Mikesell
Jeffrey J. Kosowsky wrote:

 However, collisions are pretty easy to deal with.
 Also, my suggestion was to do this on a selective basis -- say large
 files when backing up over a slow link, so it would not necessarily
 involve millions of files.

If you can't map the hash you get to the filename you have to search the 
whole pool, and if that doesn't have millions of files you won't have 
much chance of a random new match.

 The big problem would be
 that the partial file md5sum used to define pool file names is not
 consistent with the rsync checksum calculations -- which all goes back
 to my thinking that long-term, a relational database is a better
 structure for storing backup information than the kludge of
 hard-links plus attrib files. 

I wouldn't call using the inherent properties of a unix filesystem to 
store files in a convenient way a kludge.  It is something that has 
been developed and optimized even longer than relational databases.

 In this case, relational database, would
 allow for pool lookup based on rsync mdsums (as well as the existing
 partial md5sums).

Yes, you could keep multiple keys and indexes in a database but there is 
no reason to think it would be more efficient, and unless the file 
contents are also stored in the database you introduce the problem that 
changes to the database keys aren't atomic with the files themselves.

-- 
   Les Mikesell
 lesmikes...@gmail.com




Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Rich Rauenzahn



Les Mikesell wrote:

Johan Ehnberg wrote:
  

OK. I can see now why this is true. But it seems like one could
rewrite the backuppc rsync protocol to check the pool for a file with
same checksum  before syncing. This could give some real speedup on
long files. This would be possible at least for the cpool where the
rsync checksums (and full file checksums) are stored at the end of
each file.
  
Now this would be quite the feature - and it fits perfectly with the idea 
of smart pooling that BackupPC has. The effects are rather interesting:


- Different incremental levels won't be needed to preserve bandwidth
- Full backups will indirectly use earlier incrementals as reference

Definite wishlist item.



But you'll have to read through millions of files and the common case of 
a growing logfile isn't going to find a match anyway.  The only way this 
could work is if the remote rsync could send a starting hash matching 
the one used to construct the pool filenames - and then you still have 
to deal with the odds of collisions


I thought about this a little a year or so ago -- enough to attempt to 
try to understand the rsync perl modules (failed!).


I thought perhaps what would be best is a berkeley db/tied hash lookup 
table/cache that would map rsync checksums+file size to a pool item.
The local rsync client would request the checksum of each remote file 
before transfer, and if it was in the cache and in the pool, it could be 
used as the local version, then let the rsync protocol take over to 
verify all of the blocks.


I really like that BackupPC doesn't store its data in a database that 
could get corrupted, and the berkeley db would just be a cache whose 
integrity wouldn't be critical to the integrity of the backups.   And 
the cache isn't relied on 100%, but rather the actual pool file the 
cache points to is used as the ultimate authority.


Rich
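
A minimal sketch of such a cache, assuming Perl's DB_File tied-hash
module; the cache location and the "checksum:size" key format are made
up for illustration:

    use strict;
    use warnings;
    use DB_File;
    use Fcntl;

    # Hypothetical advisory cache mapping "rsync-checksum:size" to a pool
    # path.  The pool file is always re-verified by the caller (rsync
    # block compare), so a stale or lost cache cannot hurt backup
    # integrity.
    tie my %cache, 'DB_File', '/var/lib/backuppc/poolcache.db',
        O_CREAT | O_RDWR, 0644, $DB_HASH
        or die "cannot tie cache: $!";

    my ($checksum, $size, $poolfile) = @ARGV;
    my $key = "$checksum:$size";

    if (defined $cache{$key} && -f $cache{$key}) {
        # Cache hit: offer this pool file as the local candidate and let
        # the rsync protocol verify its blocks.
        print "candidate pool file: $cache{$key}\n";
    } elsif (defined $poolfile) {
        # After a successful backup/link, record the new mapping.
        $cache{$key} = $poolfile;
    }

    untie %cache;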




Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Rich Rauenzahn


 I thought about this a little a year or so ago -- enough to attempt to 
 try to understand the rsync perl modules (failed!).

 I thought perhaps what would be best is a berkeley db/tied hash lookup 
 table/cache that would map rsync checksums+file size to a pool 
 item.  The local rsync client would request the checksum of each 
 remote file before transfer, and if it was in the cache and in the 
 pool, it could be used as the local version, then let the rsync 
 protocol take over to verify all of the blocks.

 I really like that BackupPC doesn't store its data in a database that 
 could get corrupted, and the berkeley db would just be a cache whose 
 integrity wouldn't be critical to the integrity of the backups.   And 
 the cache isn't relied on 100%, but rather the actual pool file the 
 cache points to is used as the ultimate authority.


Sorry -- to develop my idea further: the cache would be created/updated 
during backups, and a tool could be written to generate entries in batch.
A weekly routine could walk the cache and remove checksum entries that 
no longer point to pool items.  Small files (below a user-configurable 
size) could also be excluded from the cache to minimize overhead/space.

Rich
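
The weekly cleanup could then be as simple as walking the same tied hash
and dropping entries whose pool file has disappeared -- a sketch under
the same assumptions as the cache example above:

    use strict;
    use warnings;
    use DB_File;
    use Fcntl;

    # Prune cache entries whose pool file no longer exists (for example,
    # after BackupPC_nightly has removed it).
    tie my %cache, 'DB_File', '/var/lib/backuppc/poolcache.db',
        O_CREAT | O_RDWR, 0644, $DB_HASH
        or die "cannot tie cache: $!";

    my @stale;
    while (my ($key, $path) = each %cache) {
        push @stale, $key unless -f $path;
    }
    delete @cache{@stale};   # second pass, so nothing is deleted mid-iteration

    untie %cache;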



Re: [BackupPC-users] Copying in a file instead of backing up

2009-01-14 Thread Adam Goryachev
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

In advance, sorry for messing up the threading, I don't have an actual
email from the thread to reply to...

In any case though, I would like to extend this discussion in two
different directions:
1) The ability of an admin to manually insert a file into the pool and
a host's backup. I imagine it should be fairly simple to do this, e.g.:
BackupPC_addfile2pool -c 3 abc.doc
where -c means add it to the cpool, 3 means the level of compression to
use, and abc.doc is the file to add.

Then, add a hardlink from the pc/machine/234/etc/fabc.doc to the new
cpool file.

Then, finally, add the attrib file entry as needed.

Really should not be difficult, but if someone could assist in providing
a solution for the above 3 steps, it would be greatly appreciated.
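
A very rough sketch of steps 1 and 2 (illustrative only: it uses a plain
whole-file MD5 instead of BackupPC's partial-file digest, stores the file
uncompressed in pool/ rather than cpool/, ignores collision chains and
name mangling beyond the "f" prefix, and leaves step 3, the attrib entry,
to BackupPC's own modules):

    use strict;
    use warnings;
    use Digest::MD5;
    use File::Basename qw(basename);
    use File::Copy qw(copy);
    use File::Path qw(make_path);

    my ($src, $topdir, $backupdir) = @ARGV;   # file, TopDir, pc/host/nnn/f...
    die "usage: $0 file TopDir backupdir\n" unless defined $backupdir;

    # Whole-file MD5 stands in for BackupPC's partial-file pool digest.
    open my $fh, '<', $src or die "open $src: $!";
    binmode $fh;
    my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
    close $fh;

    # Step 1: place the file in the pool under its digest-derived path.
    my ($d1, $d2, $d3) = split //, $digest;
    my $pooldir  = "$topdir/pool/$d1/$d2/$d3";
    my $poolfile = "$pooldir/$digest";
    make_path($pooldir) unless -d $pooldir;
    if (! -e $poolfile) {
        copy($src, $poolfile) or die "copy to $poolfile: $!";
    }

    # Step 2: hardlink the pool file into the backup tree.
    my $dest = "$topdir/$backupdir/f" . basename($src);
    link $poolfile, $dest or die "link $dest: $!";

    # Step 3 (the attrib entry) is deliberately omitted; it needs
    # BackupPC::Attrib and correct type/mode/uid/gid/size/mtime values.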

2) The ability for backuppc to know that file abc.doc on host1 is
identical to file abc.doc on host3 and file def.doc on host2.

During the backuppc_link stage, we could add entries to the berkeley DB
if the uncompressed file is larger than CacheMinSize.

During the backuppc_nightly as we delete a file from the pool, we also
remove the entry from the DB file (if it exists). At the end of the
nightly run, maybe we need to compress the DB file if that is an issue
for Berkeley DB files (i.e., in case it becomes too fragmented, etc.).

The other option would be to change the hashing system used by backuppc
to store files into the pool to match the checksums used by rsync. Yes,
this would be an incompatible change, and a lot of work for people with
a large pool, but overall it should not in itself be so difficult. What is
perhaps more important is whether the result is any better/worse than
what we have, especially in the case where the user is only using tar
for backups. If it is no worse, or is better, then perhaps it is
worth looking into.

In the end, what problem are we solving?
A) Backing up identical files from multiple remote hosts
B) Backing up identical files when renamed/moved

(A) shouldn't really be a huge problem, although these days files being
emailed from person to person can encourage this, but I can't imagine it
really is a massive issue.

(B) This has been and will be an issue for me on at least a few occasions. In
one case, I had to write a script to rename the backup files (dumped
from another program) to ensure they would have the same filename each
day. I'll probably need to do that again in the near future for another
remote server. If we could solve this, it would be useful.
In another case, we have around 30G of image files, which will be
renamed/etc in the near future so that they sit on a different server,
and the directory structure will also change. Again, the above would
solve this issue, but this is a one-off thing, and not really a problem
since each file is small.

Finally, would the original problem be solved by being able to keep a
partial file in a partial backup? This would allow rsync to continue,
and also allow you to have some of your old file instead of none of
it... I understand it may increase your pool/disk use but as soon as the
backup changes from partial to full/incr, then the old portion of the
file would be removed from the backup.

Anyway, gotta get back to work now, just thought I might add my
thoughts/2c worth :)

Regards,
Adam

- --
Adam Goryachev
Website Managers
www.websitemanagers.com.au
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkluneIACgkQGyoxogrTyiWM7wCfd6YP18FF7XngwUZIlBjBihHp
GdUAn1DBYzRUtvDXyFrvT4FQkdo+oNWm
=Ioya
-END PGP SIGNATURE-
