Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Les Mikesell
On Thu, May 30, 2013 at 6:53 AM, Nicola Scattolin  wrote:
>
> i have checked the disk usage and the i/o that backuppc reports on the
> summary page, and 7.37 MB/sec is the value i got.
> The server is virtualized, but the hard disk is attached directly to the
> virtual machine in a mirrored RAID. do you think that is a good speed, or
> could it be better?

I always start with the assumption that disk seek time is the
bottleneck until proven otherwise, because it is orders of magnitude
slower than any other computer operation and backuppc does a lot of
it. What else is competing for the head position on those drives?
It can also be on the sending side, especially if you have a lot of
little files and/or applications running concurrently. It is
possible for CPU/RAM/network to be the problem, but you would have to
have done something wrong.

--
  Les Mikesell
  lesmikes...@gmail.com

--
Introducing AppDynamics Lite, a free troubleshooting tool for Java/.NET
Get 100% visibility into your production application - at no cost.
Code-level diagnostics for performance bottlenecks with <2% overhead
Download for free and get started troubleshooting in minutes.
http://p.sf.net/sfu/appdyn_d2d_ap1
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Exclude directories

2013-05-30 Thread Kameleon
On each of our samba servers, inside each share, is a .deleted folder that
all files a user deletes from the share within Windows go to instead of
actually being deleted immediately. I do not want to back these up, but
they are not all at the same path on all the servers. What is the correct
syntax to exclude these from the backups? Should */.deleted work?
Or will I need to explicitly declare all the paths?
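For reference, a sketch of how this is often configured in BackupPC. The exact wildcard semantics depend on your $Conf{XferMethod} (with rsync the patterns are handed to --exclude; smb's tarmode exclusion support is more limited), so treat the patterns below as assumptions to verify against your setup:

```perl
# Per-host or global config sketch: skip every .deleted folder, wherever
# it sits inside a share.  The '*' key applies the list to all shares.
$Conf{BackupFilesExclude} = {
    '*' => [
        '/.deleted',      # .deleted at the top of the share
        '**/.deleted',    # rsync-style: .deleted at any depth
    ],
};
```

If wildcard matching turns out not to work for your transfer method, explicitly listing each path under a per-share key will always work.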


Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Nicola Scattolin
Il 30/05/2013 14:10, Phil K. ha scritto:
> Just to take things in a different direction;
>
> What do your transfer logs say? Is this an OS disk, or is it strictly
> data? If you're seeing strings of errors when reading files (crypto and
> AV related files are notorious for this) You may want to adjust your
> include / exclude files. This will improve read time and, by proxy,
> transfer times.
> ~Phil
>
> Nicola Scattolin  wrote:
>
> Il 30/05/2013 12:56, Adam Goryachev ha scritto:
>
> On 30/05/13 18:13, Nicola Scattolin wrote:
>
> Il 30/05/2013 10:04, Adam Goryachev ha scritto:
>
> On 30/05/13 16:57, Nicola Scattolin wrote:
>
> hi,
> i have a problem in full backups of a 2TB disk.
> when backuppc do fullbackup it takes on average
> 1866.0 minutes while the
> incremental backup takes around 20 minutes.
> do you think there is something wrong or it's just
> for the amount of
> data to be backupd?
>
> Most likely this is a limitation of bandwidth, CPU, or
> memory on either
> the backuppc server, or the machine being backed up.
>
> Have you enabled checksum-seed in your config?
> Are you even using rsync?
>
> Remember a full backup will read the full content of
> every file (talking
> about rsync because I will assume that is what you are
> using) on both
> the client and backuppc server. A incremental only looks
> at file
> attributes such as size and timestamp.
>
> Can you be more detailed about your configuration, and
> during a full
> backup look at memory utilisation on both backuppc
> server and the client.
>
> PS, this question is asked regularly, so you should also
> look at the
> archives to see the previous discussions (which have
> been very detailed,
> and sometimes heated).
>
> Regards,
> Adam
>
>
> i use smb to transfer file, and there are not be cpu or
> bandwidth
> limitation, it's a local server.
> where is the checksum-seed option? i can't find it
>
>
> OK, so this is even more obvious.
>
> An incremental will only look at the timestamp, and transfer all
> files
> newer than the timestamp of the previous backup.
> A full will transfer ALL files, therefore this is disk I/O + network
> bandwidth limited.
>
> 2TB of data will take 335 minutes at 1Gbps (assuming you can
> read from
> the source disk at least 1Gbps, and write to the destination disk at
> 1Gbps, and utilise 100% of source/destination disk bandwidth as
> well as
> 100% of network bandwidth, and there was nil overhead for
> handling each
> individual filename/etc...
>
> You are getting just under 20MB/sec, which is probably not
> unreasonable.
>
> As mentioned, if you want it faster, you will need to determine
> where
> the bottleneck is, which means looking at disk IO (most likely),
> network
> bandwidth, CPU (especially if you use compression on the backuppc
> server), etc...
>
> Regards,
> Adam
>
>
>
> i have checked the disk usage and the i/o that backuppc output me in the
> summary page, and 7.37 is Mb/sec is the value i got.
> The server is virtualized but the hardisk is linked directly to the
> virtual machine in mirroring raid, do you thing is a good speed or could
> be better?
>
>
> --
> Phil Kennedy
> Yankee Air Museum
> Systems Admin
> phillip.kenn...@yankeeairmuseum.org
>
> Sent from my Android phone with K-9 Mail.
>
>
i got errors reading 1 directory, but i don't think it spin up my backup 
time so much

-- 
Nicola Scattolin
Ser.Tec s.r.l.
Via E. Salgari 14/E
31056 Roncade, Treviso
http://dpidgprinting.com

-

Re: [BackupPC-users] BlackOut during entire days

2013-05-30 Thread Nicola Scattolin
Il 30/05/2013 14:36, Nicolas Cauchie ha scritto:
> Hello all,
>
> I'm looking for a trick to program a blackout during weekends days (from
> saturaday morning 00h to sunday night 24h).
>
> Results : no backups at all saturadays and sundays.
>
>  From Google :
>
>  $Conf{BlackoutPeriods} = [
>  {
>  hourBegin =>  0.0,
>  hourEnd   => 24.0,
>  weekDays  => [0,7],
>  },
>
> seems to don't work..
>
> Any idea ?
>
> Thanks
>
> Nicolas
i have the same problem

-- 
Nicola Scattolin
Ser.Tec s.r.l.
Via E. Salgari 14/E
31056 Roncade, Treviso
http://dpidgprinting.com



Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Phil K.
Just to take things in a different direction;

What do your transfer logs say? Is this an OS disk, or is it strictly data? If
you're seeing strings of errors when reading files (crypto and AV related files
are notorious for this), you may want to adjust your include / exclude files.
This will improve read time and, by proxy, transfer times.
~Phil

Nicola Scattolin  wrote:

>Il 30/05/2013 12:56, Adam Goryachev ha scritto:
>> On 30/05/13 18:13, Nicola Scattolin wrote:
>>> Il 30/05/2013 10:04, Adam Goryachev ha scritto:
 On 30/05/13 16:57, Nicola Scattolin wrote:
> hi,
> i have a problem in full backups of a 2TB disk.
> when backuppc do fullbackup it takes on average 1866.0 minutes
>while the
> incremental backup takes around 20 minutes.
> do you think there is something wrong or it's just for the amount
>of
> data to be backupd?
 Most likely this is a limitation of bandwidth, CPU, or memory on
>either
 the backuppc server, or the machine being backed up.

 Have you enabled checksum-seed in your config?
 Are you even using rsync?

 Remember a full backup will read the full content of every file
>(talking
 about rsync because I will assume that is what you are using) on
>both
 the client and backuppc server. A incremental only looks at file
 attributes such as size and timestamp.

 Can you be more detailed about your configuration, and during a
>full
 backup look at memory utilisation on both backuppc server and the
>client.

 PS, this question is asked regularly, so you should also look at
>the
 archives to see the previous discussions (which have been very
>detailed,
 and sometimes heated).

 Regards,
 Adam

>>> i use smb to transfer file, and there are not be cpu or bandwidth
>>> limitation, it's a local server.
>>> where is the checksum-seed option? i can't find it
>>
>> OK, so this is even more obvious.
>>
>> An incremental will only look at the timestamp, and transfer all
>files
>> newer than the timestamp of the previous backup.
>> A full will transfer ALL files, therefore this is disk I/O + network
>> bandwidth limited.
>>
>> 2TB of data will take 335 minutes at 1Gbps (assuming you can read
>from
>> the source disk at least 1Gbps, and write to the destination disk at
>> 1Gbps, and utilise 100% of source/destination disk bandwidth as well
>as
>> 100% of network bandwidth, and there was nil overhead for handling
>each
>> individual filename/etc...
>>
>> You are getting just under 20MB/sec, which is probably not
>unreasonable.
>>
>> As mentioned, if you want it faster, you will need to determine where
>> the bottleneck is, which means looking at disk IO (most likely),
>network
>> bandwidth, CPU (especially if you use compression on the backuppc
>> server), etc...
>>
>> Regards,
>> Adam
>>
>>
>i have checked the disk usage and the i/o that backuppc output me in
>the 
>summary page, and 7.37 is Mb/sec is the value i got.
>The server is virtualized but the hardisk is linked directly to the 
>virtual machine in mirroring raid, do you thing is a good speed or
>could 
>be better?
>
>-- 
>Nicola Scattolin
>Ser.Tec s.r.l.
>Via E. Salgari 14/E
>31056 Roncade, Treviso
>http://dpidgprinting.com

-- 
Phil Kennedy
Yankee Air Museum
Systems Admin
phillip.kenn...@yankeeairmuseum.org

Sent from my Android phone with K-9 Mail.


Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Adam Goryachev
On 30/05/13 21:53, Nicola Scattolin wrote:
> Il 30/05/2013 12:56, Adam Goryachev ha scritto:
>> On 30/05/13 18:13, Nicola Scattolin wrote:
>>> Il 30/05/2013 10:04, Adam Goryachev ha scritto:
 On 30/05/13 16:57, Nicola Scattolin wrote:
> hi, i have a problem in full backups of a 2TB disk. when
> backuppc do fullbackup it takes on average 1866.0 minutes
> while the incremental backup takes around 20 minutes. do you
> think there is something wrong or it's just for the amount
> of data to be backupd?
 Most likely this is a limitation of bandwidth, CPU, or memory
 on either the backuppc server, or the machine being backed up.

 Have you enabled checksum-seed in your config? Are you even
 using rsync?

 Remember a full backup will read the full content of every file
 (talking about rsync because I will assume that is what you are
 using) on both the client and backuppc server. A incremental
 only looks at file attributes such as size and timestamp.

 Can you be more detailed about your configuration, and during a
 full backup look at memory utilisation on both backuppc server
 and the client.

 PS, this question is asked regularly, so you should also look
 at the archives to see the previous discussions (which have
 been very detailed, and sometimes heated).

 Regards, Adam

>>> i use smb to transfer file, and there are not be cpu or
>>> bandwidth limitation, it's a local server. where is the
>>> checksum-seed option? i can't find it
>>
>> OK, so this is even more obvious.
>>
>> An incremental will only look at the timestamp, and transfer all
>> files newer than the timestamp of the previous backup. A full will
>> transfer ALL files, therefore this is disk I/O + network bandwidth
>> limited.
>>
>> 2TB of data will take 335 minutes at 1Gbps (assuming you can read
>> from the source disk at least 1Gbps, and write to the destination
>> disk at 1Gbps, and utilise 100% of source/destination disk
>> bandwidth as well as 100% of network bandwidth, and there was nil
>> overhead for handling each individual filename/etc...
>>
>> You are getting just under 20MB/sec, which is probably not
>> unreasonable.
>>
>> As mentioned, if you want it faster, you will need to determine
>> where the bottleneck is, which means looking at disk IO (most
>> likely), network bandwidth, CPU (especially if you use compression
>> on the backuppc server), etc...
>>
>> Regards, Adam
>>
>>
> i have checked the disk usage and the i/o that backuppc output me in
> the summary page, and 7.37 is Mb/sec is the value i got. The server
> is virtualized but the hardisk is linked directly to the virtual
> machine in mirroring raid, do you thing is a good speed or could be
> better?


Well, you did not provide complete information in the original post; you said:

2TB disk, full backup takes on average 1866.0 minutes
So, 2,000,000 MB / (1866 * 60 secs) = 17.86 MB/sec

From the above, it would sound like the 2TB disk is only about 50% full:
7.37 MB/s * 1866 mins * 60 sec/min = 825 GB used space

In any case, I would expect you could back up the full 2TB of data in much
less than the 31 hours it is taking you to back up only 825GB. I would
suggest you investigate where the bottleneck is.
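The arithmetic above can be double-checked in a few lines of Python (figures taken from the thread):

```python
# Sanity-check the throughput figures quoted in this thread.
full_minutes = 1866.0                  # reported average full-backup time
seconds = full_minutes * 60

# If the whole 2 TB were transferred, the implied rate would be:
implied_rate = 2_000_000 / seconds     # MB/s, decimal units
print(f"{implied_rate:.2f} MB/s")      # -> 17.86 MB/s

# The summary page reports 7.37 MB/s, implying less data was moved:
used_gb = 7.37 * seconds / 1000        # GB actually transferred
print(f"{used_gb:.0f} GB")             # -> 825 GB
```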

Are the two machines on the same LAN? What speed?
Can the VM actually get decent disk performance? Don't just use dd, test
random read speed.
What speed can you transfer files with smbclient between the backuppc
server and this VM?
Actually look at, and provide information about, CPU utilisation on both
the backuppc server and the VM.
Same for disk IO, network bandwidth, and memory usage.
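The point about not trusting dd can be illustrated with a rough random-read probe (an illustrative sketch only; a purpose-built tool such as fio gives far more trustworthy numbers):

```python
# Rough random-read probe: time small reads at random offsets in a test
# file.  Sequential tools like dd hide seek latency; this does not.
import os
import random
import tempfile
import time

SIZE = 16 * 1024 * 1024    # 16 MiB test file (tiny, purely illustrative)
BLOCK = 4096               # 4 KiB per read
READS = 1000

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(SIZE))
    path = f.name

start = time.monotonic()
with open(path, "rb") as f:
    for _ in range(READS):
        f.seek(random.randrange(0, SIZE - BLOCK))
        f.read(BLOCK)
elapsed = time.monotonic() - start
os.unlink(path)

print(f"{READS} random {BLOCK}-byte reads in {elapsed:.3f}s "
      f"({READS * BLOCK / elapsed / 1e6:.1f} MB/s)")
```

Note that a freshly written 16 MiB file will be served mostly from the page cache, so a real benchmark needs a file much larger than RAM (or direct I/O).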

Consider changing backup protocol to something more efficient. Maybe tar
would be more efficient (or less), or perhaps rsync (reduces network
bandwidth at cost of CPU, but also provides better backups, and less
disk load on the backuppc server due to checksum-seed option).
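A minimal sketch of what the rsync switch might look like. Option names are from BackupPC 3.x; treat the exact argument list as an assumption to check against your version's documentation:

```perl
# Per-host (or global) config sketch -- assumes rsync over ssh works from
# the backuppc server to the client.  --checksum-seed=32761 enables
# BackupPC's cached-checksum optimisation, so later full backups avoid
# re-reading unchanged files in the pool.
$Conf{XferMethod} = 'rsync';
$Conf{RsyncArgs}  = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D', '--links',
    '--hard-links', '--times', '--block-size=2048', '--recursive',
    '--checksum-seed=32761',
];
```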

You will actually need to do a lot more work before any really useful
comments/suggestions can be made. You should verify the achievable
performance outside of backuppc first to ensure you don't have a real
problem somewhere else (eg, the virtualisation layer). Also, consider
other loads on the same physical machine; especially if the disk is
shared with other VMs, check what disk IO they are doing.

Regards,
Adam


-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au


[BackupPC-users] BlackOut during entire days

2013-05-30 Thread Nicolas Cauchie

Hello all,

I'm looking for a trick to program a blackout during weekend days (from
Saturday morning 00h to Sunday night 24h).

Desired result: no backups at all on Saturdays and Sundays.

From Google:

$Conf{BlackoutPeriods} = [
{
hourBegin =>  0.0,
hourEnd   => 24.0,
weekDays  => [0,7],
},

doesn't seem to work..
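Two likely culprits, offered as a sketch rather than a certain fix: BackupPC numbers weekdays 0 (Sunday) through 6 (Saturday), so [0,7] never matches Saturday, and the snippet as pasted is also missing its closing bracket. Note too that blackout periods only take effect once a host has passed $Conf{BlackoutGoodCnt} consecutive good pings:

```perl
# Sketch: no backups all day Saturday (6) and Sunday (0).
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 0.0,
        hourEnd   => 24.0,
        weekDays  => [6, 0],
    },
];
```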

Any idea ?

Thanks

Nicolas


Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Nicola Scattolin
Il 30/05/2013 12:56, Adam Goryachev ha scritto:
> On 30/05/13 18:13, Nicola Scattolin wrote:
>> Il 30/05/2013 10:04, Adam Goryachev ha scritto:
>>> On 30/05/13 16:57, Nicola Scattolin wrote:
 hi,
 i have a problem in full backups of a 2TB disk.
 when backuppc do fullbackup it takes on average 1866.0 minutes while the
 incremental backup takes around 20 minutes.
 do you think there is something wrong or it's just for the amount of
 data to be backupd?
>>> Most likely this is a limitation of bandwidth, CPU, or memory on either
>>> the backuppc server, or the machine being backed up.
>>>
>>> Have you enabled checksum-seed in your config?
>>> Are you even using rsync?
>>>
>>> Remember a full backup will read the full content of every file (talking
>>> about rsync because I will assume that is what you are using) on both
>>> the client and backuppc server. A incremental only looks at file
>>> attributes such as size and timestamp.
>>>
>>> Can you be more detailed about your configuration, and during a full
>>> backup look at memory utilisation on both backuppc server and the client.
>>>
>>> PS, this question is asked regularly, so you should also look at the
>>> archives to see the previous discussions (which have been very detailed,
>>> and sometimes heated).
>>>
>>> Regards,
>>> Adam
>>>
>> i use smb to transfer file, and there are not be cpu or bandwidth
>> limitation, it's a local server.
>> where is the checksum-seed option? i can't find it
>
> OK, so this is even more obvious.
>
> An incremental will only look at the timestamp, and transfer all files
> newer than the timestamp of the previous backup.
> A full will transfer ALL files, therefore this is disk I/O + network
> bandwidth limited.
>
> 2TB of data will take 335 minutes at 1Gbps (assuming you can read from
> the source disk at least 1Gbps, and write to the destination disk at
> 1Gbps, and utilise 100% of source/destination disk bandwidth as well as
> 100% of network bandwidth, and there was nil overhead for handling each
> individual filename/etc...
>
> You are getting just under 20MB/sec, which is probably not unreasonable.
>
> As mentioned, if you want it faster, you will need to determine where
> the bottleneck is, which means looking at disk IO (most likely), network
> bandwidth, CPU (especially if you use compression on the backuppc
> server), etc...
>
> Regards,
> Adam
>
>
i have checked the disk usage and the i/o that backuppc reports on the 
summary page, and 7.37 MB/sec is the value i got.
The server is virtualized, but the hard disk is attached directly to the 
virtual machine in a mirrored RAID. Do you think that is a good speed, or 
could it be better?

-- 
Nicola Scattolin
Ser.Tec s.r.l.
Via E. Salgari 14/E
31056 Roncade, Treviso
http://dpidgprinting.com



Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Adam Goryachev
On 30/05/13 18:13, Nicola Scattolin wrote:
> Il 30/05/2013 10:04, Adam Goryachev ha scritto:
>> On 30/05/13 16:57, Nicola Scattolin wrote:
>>> hi,
>>> i have a problem in full backups of a 2TB disk.
>>> when backuppc do fullbackup it takes on average 1866.0 minutes while the
>>> incremental backup takes around 20 minutes.
>>> do you think there is something wrong or it's just for the amount of
>>> data to be backupd?
>> Most likely this is a limitation of bandwidth, CPU, or memory on either
>> the backuppc server, or the machine being backed up.
>>
>> Have you enabled checksum-seed in your config?
>> Are you even using rsync?
>>
>> Remember a full backup will read the full content of every file (talking
>> about rsync because I will assume that is what you are using) on both
>> the client and backuppc server. A incremental only looks at file
>> attributes such as size and timestamp.
>>
>> Can you be more detailed about your configuration, and during a full
>> backup look at memory utilisation on both backuppc server and the client.
>>
>> PS, this question is asked regularly, so you should also look at the
>> archives to see the previous discussions (which have been very detailed,
>> and sometimes heated).
>>
>> Regards,
>> Adam
>>
> i use smb to transfer file, and there are not be cpu or bandwidth 
> limitation, it's a local server.
> where is the checksum-seed option? i can't find it

OK, so this is even more obvious.

An incremental will only look at the timestamp, and transfer all files
newer than the timestamp of the previous backup.
A full will transfer ALL files, therefore this is disk I/O + network
bandwidth limited.

2TB of data will take 335 minutes at 1Gbps (assuming you can read from
the source disk at least 1Gbps, and write to the destination disk at
1Gbps, and utilise 100% of source/destination disk bandwidth as well as
100% of network bandwidth, and there was nil overhead for handling each
individual filename/etc...

You are getting just under 20MB/sec, which is probably not unreasonable.

As mentioned, if you want it faster, you will need to determine where
the bottleneck is, which means looking at disk IO (most likely), network
bandwidth, CPU (especially if you use compression on the backuppc
server), etc...

Regards,
Adam


-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au




Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Nicola Scattolin
Il 30/05/2013 10:04, Adam Goryachev ha scritto:
> On 30/05/13 16:57, Nicola Scattolin wrote:
>> hi,
>> i have a problem in full backups of a 2TB disk.
>> when backuppc do fullbackup it takes on average 1866.0 minutes while the
>> incremental backup takes around 20 minutes.
>> do you think there is something wrong or it's just for the amount of
>> data to be backupd?
> Most likely this is a limitation of bandwidth, CPU, or memory on either
> the backuppc server, or the machine being backed up.
>
> Have you enabled checksum-seed in your config?
> Are you even using rsync?
>
> Remember a full backup will read the full content of every file (talking
> about rsync because I will assume that is what you are using) on both
> the client and backuppc server. A incremental only looks at file
> attributes such as size and timestamp.
>
> Can you be more detailed about your configuration, and during a full
> backup look at memory utilisation on both backuppc server and the client.
>
> PS, this question is asked regularly, so you should also look at the
> archives to see the previous discussions (which have been very detailed,
> and sometimes heated).
>
> Regards,
> Adam
>
i use smb to transfer files, and there should not be a cpu or bandwidth 
limitation; it's a local server.
where is the checksum-seed option? i can't find it

-- 
Nicola Scattolin
Ser.Tec s.r.l.
Via E. Salgari 14/E
31056 Roncade, Treviso
http://dpidgprinting.com



Re: [BackupPC-users] extremely long backup time

2013-05-30 Thread Adam Goryachev
On 30/05/13 16:57, Nicola Scattolin wrote:
> hi,
> i have a problem in full backups of a 2TB disk.
> when backuppc do fullbackup it takes on average 1866.0 minutes while the 
> incremental backup takes around 20 minutes.
> do you think there is something wrong or it's just for the amount of 
> data to be backupd?
Most likely this is a limitation of bandwidth, CPU, or memory on either
the backuppc server, or the machine being backed up.

Have you enabled checksum-seed in your config?
Are you even using rsync?

Remember a full backup will read the full content of every file (talking
about rsync because I will assume that is what you are using) on both
the client and backuppc server. An incremental only looks at file
attributes such as size and timestamp.

Can you be more detailed about your configuration, and during a full
backup look at memory utilisation on both backuppc server and the client.

PS, this question is asked regularly, so you should also look at the
archives to see the previous discussions (which have been very detailed,
and sometimes heated).

Regards,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au




[BackupPC-users] extremely long backup time

2013-05-30 Thread Nicola Scattolin
hi,
i have a problem with full backups of a 2TB disk.
when backuppc does a full backup it takes on average 1866.0 minutes, while 
the incremental backup takes around 20 minutes.
do you think there is something wrong, or is it just due to the amount of 
data to be backed up?
-- 
Nicola Scattolin
Ser.Tec s.r.l.
Via E. Salgari 14/E
31056 Roncade, Treviso
http://dpidgprinting.com
