Re: [BackupPC-users] Large directory

2018-07-10 Thread Markus Köberl
On Monday, 9 July 2018 11:09:32 CEST Les Mikesell wrote:
> On Mon, Jul 9, 2018 at 9:42 AM, Bowie Bailey  wrote:
> >
> > There was still plenty of free RAM and no swap usage.  I know it was
> > still doing something because the pool filesystem was slowly growing.  I
> > could try an strace, but I'll have to research that.  I've never used
> > strace before.
> >
> 
> Be sure to check RAM/swap at both ends.   You give strace a process id
> number and it shows every system call and its arguments that the
> process is making.  It's usually an overwhelming stream of text but
> you should be able to see the open() and read()s happening as long as
> it is still sending files.

If you are only interested in open and read calls, you can filter them with:
strace -p $PID -e trace=open,read
You can add the -f option to also watch the child processes.
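
For example, to watch the sending rsync (assuming the sender is a plain
rsync process and is the oldest one matching - adjust to your setup):

strace -f -p $(pgrep -o -x rsync) -e trace=open,openat,read

On newer systems open() usually shows up as openat(), so filtering on
both avoids missing the file opens.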


regards
Markus
-- 
Markus Koeberl
Graz University of Technology
Signal Processing and Speech Communication Laboratory
E-mail: markus.koeb...@tugraz.at



Re: [BackupPC-users] Large directory

2018-07-09 Thread Michael Stowe

On 2018-07-09 09:21, Bowie Bailey wrote:

> On 7/9/2018 12:09 PM, Les Mikesell wrote:
>> On Mon, Jul 9, 2018 at 9:42 AM, Bowie Bailey  wrote:
>>>
>>> There was still plenty of free RAM and no swap usage.  I know it was
>>> still doing something because the pool filesystem was slowly growing.  I
>>> could try an strace, but I'll have to research that.  I've never used
>>> strace before.
>>>
>> Be sure to check RAM/swap at both ends.   You give strace a process id
>> number and it shows every system call and its arguments that the
>> process is making.  It's usually an overwhelming stream of text but
>> you should be able to see the open() and read()s happening as long as
>> it is still sending files.
>
> Both ends are on the same machine for this backup.

rsync still spawns two independent processes.
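
You can see both with something like:

ps -ef | grep '[r]sync'

(The exact process names depend on your setup; on the BackupPC v4 server
side, for instance, the receiving process is rsync_bpc.)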


Re: [BackupPC-users] Large directory

2018-07-09 Thread Bowie Bailey
On 7/9/2018 12:09 PM, Les Mikesell wrote:
> On Mon, Jul 9, 2018 at 9:42 AM, Bowie Bailey  wrote:
>> There was still plenty of free RAM and no swap usage.  I know it was
>> still doing something because the pool filesystem was slowly growing.  I
>> could try an strace, but I'll have to research that.  I've never used
>> strace before.
>>
> Be sure to check RAM/swap at both ends.   You give strace a process id
> number and it shows every system call and its arguments that the
> process is making.  It's usually an overwhelming stream of text but
> you should be able to see the open() and read()s happening as long as
> it is still sending files.

Both ends are on the same machine for this backup.

-- 
Bowie



Re: [BackupPC-users] Large directory

2018-07-09 Thread Les Mikesell
On Mon, Jul 9, 2018 at 9:42 AM, Bowie Bailey  wrote:
>
> There was still plenty of free RAM and no swap usage.  I know it was
> still doing something because the pool filesystem was slowly growing.  I
> could try an strace, but I'll have to research that.  I've never used
> strace before.
>

Be sure to check RAM/swap at both ends.   You give strace a process id
number and it shows every system call and its arguments that the
process is making.  It's usually an overwhelming stream of text but
you should be able to see the open() and read()s happening as long as
it is still sending files.
-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Large directory

2018-07-09 Thread Bowie Bailey
On 7/6/2018 6:18 PM, Les Mikesell wrote:
> On Fri, Jul 6, 2018 at 1:38 PM, Bowie Bailey  wrote:
>> Right now, the only error I've seen is the error that stopped the backup:
>> rsync error: error in rsync protocol data stream (code 12) at io.c(1556)
>> [generator=3.0.9.12]
>>
>> The main annoyance is that I have no way to track progress.  While the
>> backup is running, I can't tell if it's about to finish, or if it's
>> bogged down and is likely to take a few more hours (or days).
> Rsync probably builds a table in memory to track the inodes of files
> with multiple links so you may be running out of RAM or slowing down
> from swapping.  For a brute-force approach to see what is happening
> you could strace the rsync process on the sending side.  You could at
> least tell if it is still opening/reading files.

There was still plenty of free RAM and no swap usage.  I know it was
still doing something because the pool filesystem was slowly growing.  I
could try an strace, but I'll have to research that.  I've never used
strace before.

-- 
Bowie



Re: [BackupPC-users] Large directory

2018-07-09 Thread Bowie Bailey
On 7/6/2018 4:14 PM, Carl W. Soderstrom wrote:
> On 07/06 02:38 , Bowie Bailey wrote:
>> The problem is that the original backup took 2 weeks to fail with no
>> indication of problems that I could see... it was just very slow.  I
>> posted a previous question about it on this list while it was running. 
>> I could not find any bottlenecks or problems.  I'm reluctant to start it
>> again without some idea of what I'm looking for.  How would you suggest
>> I go about collecting more info?  Up the log level in BPC?  Make rsync
>> more verbose?
>>
>> Right now, the only error I've seen is the error that stopped the backup:
>> rsync error: error in rsync protocol data stream (code 12) at io.c(1556)
>> [generator=3.0.9.12]
>>
>> The main annoyance is that I have no way to track progress.  While the
>> backup is running, I can't tell if it's about to finish, or if it's
>> bogged down and is likely to take a few more hours (or days).
>
> Do you have a preferred tool for tracking how much bandwidth is in use, and
> to which ports and hosts it's going to?
>
> For this I use 'iftop' (or, 'iftop -BP' to show output by bytes and to name
> the ports). It's a good way to see if the backup is still moving data or if
> it's hung up on something.

That sounds useful.  I'll give that a try.

-- 
Bowie



Re: [BackupPC-users] Large directory

2018-07-07 Thread G.W. Haywood via BackupPC-users

Hello again,

On Sat, 7 Jul 2018, Bowie Bailey wrote:

> ... The directory is XFS.

I have no experience of XFS, but I've read of strangenesses.  My
understanding is that yours is a fairly newly-copied filesystem, and
the strangenesses I've read about appeared after the filesystems had
seen a lot of use, so at this stage I don't think messing about trying
different filesystems is called for - but things might change.

What's the root filesystem?

> ... There is a single directory with 3 million files ... it has
> grown over time and was never expected to get that large.

Now might be a good time to think about restructuring it - thinking,
as you do so, about what else was unexpected.

> ... The backup is local to the machine and is being done via rsync.
> The data and backup are on separate disks.

First of all I suggest trying to use $Conf{BackupFilesOnly} and/or
$Conf{BackupFilesExclude} to see if you can back up just a few (er, a
few tens of thousands :) of the files in a reasonable time.  Obviously
I know nothing about the way the files are used/modified.  Would it be
out of the question to back up something like 1% of the files per day,
i.e. have a rotation period of three months?
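
For example, in the host's per-client config the syntax looks roughly
like this (the share name and paths below are only placeholders):

  $Conf{BackupFilesOnly} = {
      '/data' => ['/some/manageable/subtree'],
  };

or, going the other way:

  $Conf{BackupFilesExclude} = {
      '/data' => ['/path/to/the/huge/directory'],
  };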


> How would you suggest I go about collecting more info?

To see more log information, increase the value of $Conf{XferLogLevel}.
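
For example:

  $Conf{XferLogLevel} = 5;

The default is 1; anything higher makes the per-backup XferLOG
considerably more verbose.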

--

73,
Ged.



Re: [BackupPC-users] Large directory

2018-07-06 Thread Les Mikesell
On Fri, Jul 6, 2018 at 1:38 PM, Bowie Bailey  wrote:
> >
> Right now, the only error I've seen is the error that stopped the backup:
> rsync error: error in rsync protocol data stream (code 12) at io.c(1556)
> [generator=3.0.9.12]
>
> The main annoyance is that I have no way to track progress.  While the
> backup is running, I can't tell if it's about to finish, or if it's
> bogged down and is likely to take a few more hours (or days).

Rsync probably builds a table in memory to track the inodes of files
with multiple links so you may be running out of RAM or slowing down
from swapping.  For a brute-force approach to see what is happening
you could strace the rsync process on the sending side.  You could at
least tell if it is still opening/reading files.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Large directory

2018-07-06 Thread Carl W. Soderstrom
On 07/06 02:38 , Bowie Bailey wrote:
> The problem is that the original backup took 2 weeks to fail with no
> indication of problems that I could see... it was just very slow.  I
> posted a previous question about it on this list while it was running. 
> I could not find any bottlenecks or problems.  I'm reluctant to start it
> again without some idea of what I'm looking for.  How would you suggest
> I go about collecting more info?  Up the log level in BPC?  Make rsync
> more verbose?
> 
> Right now, the only error I've seen is the error that stopped the backup:
> rsync error: error in rsync protocol data stream (code 12) at io.c(1556)
> [generator=3.0.9.12]
> 
> The main annoyance is that I have no way to track progress.  While the
> backup is running, I can't tell if it's about to finish, or if it's
> bogged down and is likely to take a few more hours (or days).


Do you have a preferred tool for tracking how much bandwidth is in use, and
to which ports and hosts it's going to?

For this I use 'iftop' (or, 'iftop -BP' to show output by bytes and to name
the ports). It's a good way to see if the backup is still moving data or if
it's hung up on something.
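
If you want to narrow it down to a particular interface or port,
something like this works (the interface and port here are only
examples - 873 for rsyncd, 22 if the transfer runs over ssh):

iftop -BP -i eth0 -f 'port 873'

The -f option takes an ordinary pcap filter expression.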

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Large directory

2018-07-06 Thread Bowie Bailey
On 7/6/2018 10:22 AM, G.W. Haywood via BackupPC-users wrote:
> Hi there,
>
> On Fri, 6 Jul 2018, Bowie Bailey wrote:
>
>> I am trying to backup a large directory tree with BackupPC v4.  This
>> directory is 660GB and contains over 25 million files with about 3
>> million hard links.  The initial backup ran for 2 weeks before dying
>> with an rsync error.  It is showing as a partial backup, but it doesn't
>> show a file count.
>
> This is not an unusually large task for BackupPC.  I routinely back up
> directories with similar volumes of contents, but there are caveats.
>
> My 'home' directory on my desktop machine at work for example is half
> that size, and just over a million files, although it contains only a
> few thousand hard links.  Using rsync over a 100Mbit/s Ethernet link
> between a couple of 2.4GHz dual Opteron machines, each with 8-16GBytes
> RAM, I don't recall ever being surprised or disappointed by the time
> it took to complete a backup.  The first backup was 376G in ~1M files,
> and took nearly 16 hours.
>
> Recently I added another user's home directory to the pool.  To back
> up 530GBytes of fresh data in 1.06M files between the same machines
> took 23.5 hours for the first pass.  A full backup with a few tens of
> megabytes of new files takes half a day, an incremental takes only ten
> to 25 minutes depending mostly on the numbers of new files.
>
> Your files average 26.4kBytes, my new user's average ~500kBytes, you
> can expect some differences because of that.
>
> The problem description isn't very clear about the structure of the
> data, and doesn't mention the type(s) of filesystem involved, CPU/RAM,
> the transport mechanism(s).  For example do you have huge numbers of
> files at a single directory level?  Putting even tens of thousands of
> files, let alone millions, in a single ext[234] directory is likely to
> cause performance problems.  Are you using a 56k modem? :)

The directory is XFS.  There is a single directory with 3 million files,
which is where the hardlinks originate, but the rest are scattered
through various subdirectories.  I know it's not a good thing to have
that many files in one directory, but it has grown over time and was
never expected to get that large.

The backup is local to the machine and is being done via rsync.  The
data and backup are on separate disks.

This is a new system and these files were recently copied from another
disk.  A file-level rsync (rsync -aH /orig /new) did not have any
problems copying the files and hard links (although it did take a day or
two).

>
>> Is BackupPC going to be able to deal with this directory...
>
> Yes.  Is the filesystem going to cause trouble?  I don't know.
>
>> do I need to look for a different backup method?
>
> No, you need to find out what's going on.  Make sure you're looking at
> all the logs, and if there isn't enough information in the logs tell
> them to collect more

The problem is that the original backup took 2 weeks to fail with no
indication of problems that I could see... it was just very slow.  I
posted a previous question about it on this list while it was running. 
I could not find any bottlenecks or problems.  I'm reluctant to start it
again without some idea of what I'm looking for.  How would you suggest
I go about collecting more info?  Up the log level in BPC?  Make rsync
more verbose?

Right now, the only error I've seen is the error that stopped the backup:
rsync error: error in rsync protocol data stream (code 12) at io.c(1556)
[generator=3.0.9.12]

The main annoyance is that I have no way to track progress.  While the
backup is running, I can't tell if it's about to finish, or if it's
bogged down and is likely to take a few more hours (or days).

-- 
Bowie



Re: [BackupPC-users] Large directory

2018-07-06 Thread Guillermo Rozas
I think the important point here is the number of hard links: rsync
can have problems in those situations because it has to search them
all and keep track of them
(https://lists.samba.org/archive/rsync/2014-June/029537.html).

> > I am trying to backup a large directory tree with BackupPC v4.  This
> > directory is 660GB and contains over 25 million files with about 3
> > million hard links.  The initial backup ran for 2 weeks before dying
> > with an rsync error.  It is showing as a partial backup, but it doesn't
> > show a file count.
>
> My 'home' directory on my desktop machine at work for example is half
> that size, and just over a million files, although it contains only a
> few thousand hard links.



Re: [BackupPC-users] Large directory

2018-07-06 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 6 Jul 2018, Bowie Bailey wrote:

> I am trying to backup a large directory tree with BackupPC v4.  This
> directory is 660GB and contains over 25 million files with about 3
> million hard links.  The initial backup ran for 2 weeks before dying
> with an rsync error.  It is showing as a partial backup, but it doesn't
> show a file count.


This is not an unusually large task for BackupPC.  I routinely back up
directories with similar volumes of contents, but there are caveats.

My 'home' directory on my desktop machine at work for example is half
that size, and just over a million files, although it contains only a
few thousand hard links.  Using rsync over a 100Mbit/s Ethernet link
between a couple of 2.4GHz dual Opteron machines, each with 8-16GBytes
RAM, I don't recall ever being surprised or disappointed by the time
it took to complete a backup.  The first backup was 376G in ~1M files,
and took nearly 16 hours.

Recently I added another user's home directory to the pool.  To back
up 530GBytes of fresh data in 1.06M files between the same machines
took 23.5 hours for the first pass.  A full backup with a few tens of
megabytes of new files takes half a day, an incremental takes only ten
to 25 minutes depending mostly on the numbers of new files.

Your files average 26.4kBytes, my new user's average ~500kBytes, you
can expect some differences because of that.

The problem description isn't very clear about the structure of the
data, and doesn't mention the type(s) of filesystem involved, CPU/RAM,
the transport mechanism(s).  For example do you have huge numbers of
files at a single directory level?  Putting even tens of thousands of
files, let alone millions, in a single ext[234] directory is likely to
cause performance problems.  Are you using a 56k modem? :)


> Is BackupPC going to be able to deal with this directory...

Yes.  Is the filesystem going to cause trouble?  I don't know.

> do I need to look for a different backup method?


No, you need to find out what's going on.  Make sure you're looking at
all the logs, and if there isn't enough information in the logs tell
them to collect more.

--

73,
Ged.



[BackupPC-users] Large directory

2018-07-05 Thread Bowie Bailey
I am trying to backup a large directory tree with BackupPC v4.  This
directory is 660GB and contains over 25 million files with about 3
million hard links.  The initial backup ran for 2 weeks before dying
with an rsync error.  It is showing as a partial backup, but it doesn't
show a file count.

Is BackupPC going to be able to deal with this directory, or do I need
to look for a different backup method?

-- 
Bowie



[BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Yoann David
Hello,

Mostly everything is in the title.

On a target server, we moved a quite large directory (90 GB).  Because the
target path changed in BackupPC, it tries to re-sync everything (the whole
90 GB), not only the differences.
Our bandwidth between the backed-up target server and the BackupPC server
is low (80 kB/s), so it will take more than 13 days to transfer all the
data!

What can we do?

Yoann DAVID



Re: [BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Johan Ehnberg
Hello Yoann,

Rsync checksum caching will help in transferring only the changes. Are 
you using it? Here are the details:

http://backuppc.sourceforge.net/faq/BackupPC.html#Rsync-checksum-caching
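
If I remember right, it boils down to adding the checksum-seed option to
the rsync arguments in config.pl, roughly like this (check the linked
documentation for the exact recipe for your version):

  $Conf{RsyncArgs} = [
      # ... keep the existing arguments ...
      '--checksum-seed=32761',
  ];
  $Conf{RsyncRestoreArgs} = [
      # ... keep the existing arguments ...
      '--checksum-seed=32761',
  ];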

Best regards,
Johan Ehnberg

On 2015-11-19 13:09, Yoann David wrote:
> Hello,
>
> Mostly everything is in the title.
>
> On a target server, we moved a quite large directory (90 GB).  Because the
> target path changed in BackupPC, it tries to re-sync everything (the whole
> 90 GB), not only the differences.
> Our bandwidth between the backed-up target server and the BackupPC server
> is low (80 kB/s), so it will take more than 13 days to transfer all the
> data!
>
> What can we do?
>
> Yoann DAVID


Re: [BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Yoann David
Hello Johan,

Thanks for your answer.  Unfortunately, checksum caching is not configured
on my BackupPC.

With this system, can rsync/BackupPC detect file moves?

I.e. in my case, will BackupPC detect that the
/var/opt/gitolite/repositories/aa.git folder is now
/home/git/repositories/aa.git?
(The rsync share name was /var/opt/gitolite/repositories and also moved
to /home/git/repositories.)

In the doc you linked, it says that the full performance benefit will only
be noticed on the third full backup, so it may be too late to activate it?

Yoann



Re: [BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Johan Ehnberg
Hello Yoann,

Now I understand better what you are attempting.  No, unfortunately
BackupPC will not detect moves.  This is a feature in the upcoming version 4.
Checksum caching will offload the server, and rsync allows transferring
just the changes in current versions of BackupPC, but move detection (or
rather, opportunistic pool matching based on the full-file checksum) is not
yet available.

I think unison may work for you, or some live sync tools such as
ownCloud, which detect moves.

Best regards,
Johan

On 2015-11-19 15:04, Yoann David wrote:
> Hello Johan,
>
> Thanks for your answer.  Unfortunately, checksum caching is not configured
> on my BackupPC.
>
> With this system, can rsync/BackupPC detect file moves?
>
> I.e. in my case, will BackupPC detect that the
> /var/opt/gitolite/repositories/aa.git folder is now
> /home/git/repositories/aa.git?
> (The rsync share name was /var/opt/gitolite/repositories and also moved
> to /home/git/repositories.)
>
> In the doc you linked, it says that the full performance benefit will only
> be noticed on the third full backup, so it may be too late to activate it?
>
> Yoann


Re: [BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Johan Ehnberg
If it is a one-time move, you may be able to do some tricks with the 
rsync sharename. Or, use a restore tar file to seed the new location. 
See my post from earlier today for details; I'd be happy to hear your
results. You can use archivemount to change the paths of the tar file 
you create.
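
Roughly, the idea on the client would be something like this (untested,
and the tar file name and mount point are only placeholders):

# create a tar restore of the old share with BackupPC, copy it over, then:
mkdir /mnt/restore
archivemount restore-old-path.tar /mnt/restore
rsync -aH /mnt/restore/ /home/git/repositories/
fusermount -u /mnt/restore

With the new location seeded that way, the next backup should only have
to transfer what actually changed.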

For continuous moves, this is not feasible.

Best regards,
Johan

On 2015-11-19 15:23, Yoann David wrote:
> My problem is not finding the tool, BackupPC is great, but how do I avoid
> the 13 days of file transfer due to the move of the files...
> Apparently it is not possible.
>
> I'm sad ;)
>
>
>
On 19/11/2015 14:17, Johan Ehnberg wrote:
>> Hello Yoann,
>>
>> Now I understand better what you are attempting.  No, unfortunately
>> BackupPC will not detect moves.  This is a feature in the upcoming version 4.
>> Checksum caching will offload the server, and rsync allows transferring
>> just the changes in current versions of BackupPC, but move detection (or
>> rather, opportunistic pool matching based on the full-file checksum) is not
>> yet available.
>>
>> I think unison may work for you, or some live sync tools such as
>> ownCloud, which detect moves.
>>
>> Best regards,
>> Johan
>>
>> On 2015-11-19 15:04, Yoann David wrote:
>>> Hello Johan,
>>>
>>> Thanks for your answer.  Unfortunately, checksum caching is not configured
>>> on my BackupPC.
>>>
>>> With this system, can rsync/BackupPC detect file moves?
>>>
>>> I.e. in my case, will BackupPC detect that the
>>> /var/opt/gitolite/repositories/aa.git folder is now
>>> /home/git/repositories/aa.git?
>>> (The rsync share name was /var/opt/gitolite/repositories and also moved
>>> to /home/git/repositories.)
>>>
>>> In the doc you linked, it says that the full performance benefit will only
>>> be noticed on the third full backup, so it may be too late to activate it?
>>>
>>> Yoann