Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Les Mikesell
On Wed, Sep 20, 2017 at 10:20 AM, Gandalf Corvotempesta
 wrote:
> 2017-09-20 17:15 GMT+02:00 Ray Frush :
>> You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
>> today's world.  I get bothered when storage is slower than a single 10K RPM
>> drive (~100-120MB/sec).  I wonder how fast metadata operations are.
>> bonnie++ benchmarks might indicate an issue here as BackupPC is metadata
>> intensive, and has to read a lot of metadata to properly place files in the
>> CPOOL.   Compare those results with other storage to gauge how well your ZFS
>> is performing.   I'm not a ZFS expert.
>
> Yes, it's not very fast, but keep in mind that I'm using SATA disks.
> But the issue is not server performance, because all the other software is
> able to back up in a very short time with the same hardware.

BackupPC uses the disk much more intensively than other systems, for the
very reasons you want to use it.  And I'd guess that ZFS block-level
compression would be fairly inefficient on partial blocks, for more or
less the same reasons there is a hit with RAID 5.  I assume that your
attempt to use the --inplace option reflects problems you've noticed with
your other backup systems.  If it works, --whole-file might be better,
given a fast LAN and slow disks: reconstructing a file from copied bits of
the old one and merging in the changes is pretty expensive.
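
For example, if --inplace came in through the rsync arguments in the
BackupPC config, something along these lines would swap it for
--whole-file. This is a sketch from memory, not a tested config: in
BackupPC 4 the extra flags normally go in $Conf{RsyncArgsExtra}, and the
comments here are only illustrative.

  # /etc/BackupPC/config.pl (BackupPC 4.x), illustrative only
  # Let rsync send whole files instead of deltas, trading LAN bandwidth
  # for fewer read-modify-write cycles on slow disks.
  $Conf{RsyncArgsExtra} = [
      '--whole-file',     # skip the delta algorithm entirely
      # '--inplace',      # removed: rewriting files in place is what hurt here
  ];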


Also, I'm not sure anyone has experience with changing the compression
level between runs.  If you've done that, it might add overhead.

--
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Adam Goryachev



On 21/9/17 01:20, Gandalf Corvotempesta wrote:

2017-09-20 17:15 GMT+02:00 Ray Frush :

You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
today's world.  I get bothered when storage is slower than a single 10K RPM
drive (~100-120MB/sec).  I wonder how fast metadata operations are.
bonnie++ benchmarks might indicate an issue here as BackupPC is metadata
intensive, and has to read a lot of metadata to properly place files in the
CPOOL.   Compare those results with other storage to gauge how well your ZFS
is performing.   I'm not a ZFS expert.

Yes, it's not very fast, but keep in mind that I'm using SATA disks.
But the issue is not server performance, because all the other software is
able to back up in a very short time with the same hardware.

Except it probably is, or else it wouldn't take so long ;)


A 4x 1Gbps network link will look exactly like a single 1Gbps link per
network channel (stream) unless you've got some really nice port
aggregation hardware that can spray data at 4Gbps across those.  As such,
unless you have parallel jobs running (multithreaded), I wouldn't expect
to see any product do better than 1Gbps from any single client in your
environment.  The BackupPC server, running multiple backup jobs, could see
a benefit from the bonded connection, being able to manage four 1Gbps
streams at the same time under optimal conditions, which never happens.

I'm running 4 concurrent backups; with plain rsync/rsnapshot I'm able to run 8.

Except, as you know, BPC demands a little more in the way of resources to
do the backup, and running concurrent backups multiplies those demands.
When you get close to, or slightly exceed, your system's capacity, you
will see a massive decrease in overall performance. The more the system
has to "pretend" it has capacity it doesn't, doing more swapping and more
seeking (on the HDDs) and less useful caching, the more useful activity
decreases.
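
A quick way to check whether the box is hitting that wall is to watch
memory and swap while the four backups are running. Standard tools,
nothing BackupPC-specific; the process names are the usual BackupPC 4 ones:

  free -m                      # real free memory vs. page cache
  vmstat 5                     # si/so columns staying non-zero = actively swapping
  ps aux --sort=-rss | head    # which BackupPC_dump / rsync_bpc processes hold the RAM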


Find the performance bottleneck first, then you can decide how best to
proceed, whether that's adding hardware, modifying the config, or changing
the code.


Regards,
Adam



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Adam Goryachev



On 21/9/17 00:34, Gandalf Corvotempesta wrote:

2017-09-20 16:14 GMT+02:00 Ray Frush :

Question: just how big is the host you're trying to back up?  GB?  Number
of files?

From the BackupPC web page: 147466.4 MB, 3465344 files.

FYI, I have a single host with 926764 MB and 5327442 files.

What is the network connection between the client and the backup
server?

4x 1GbE bonded on both sides.

Remote VPN; one end is 100M, the other is 40M.

A full backup is about 11 hours; an incremental is around 8 hours, up to 10 hours.

These stats simply say "it works for me". I see you are saying "it doesn't
work for me", but I guess my statement helps you about as much as your
statement helps us.


I would guess that you have a bottleneck/performance limitation somewhere
in your stack, and I'd suggest finding it and then hitting it with
something appropriate. Check CPU utilisation on the client and server,
memory usage, swap usage, and of course bandwidth usage. Once you rule out
all of those, you get to the fun stuff and start looking at disk
performance. While everyone says it is not accurate, personally I've found
that "/usr/bin/iostat -dmx" and watching the %util column is a pretty good
indicator.
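
For the record, that means something like the following, run while a
backup is in progress (iostat is in the sysstat package; the columns to
watch are %util and await):

  /usr/bin/iostat -dmx 5   # 5-second samples; ignore the first, it is the since-boot average
  # a disk pinned near 100 %util with a high await is the bottleneck,
  # even if the MB/s figures look unimpressive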


Apologies, I certainly haven't followed the full thread, but at this 
point, throwing your hands in the air and name calling isn't going to 
improve the situation. Either you are interested in solving the problem, 
in which case you will probably need to get your hands dirty and work 
with us to find and fix it, or you don't really care, in which case we 
don't either (because it works fine for us).


Regards,
Adam



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 17:15 GMT+02:00 Ray Frush :
> You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
> today's world.  I get bothered when storage is slower than a single 10K RPM
> drive (~100-120MB/sec).  I wonder how fast metadata operations are.
> bonnie++ benchmarks might indicate an issue here as BackupPC is metadata
> intensive, and has to read a lot of metadata to properly place files in the
> CPOOL.   Compare those results with other storage to gauge how well your ZFS
> is performing.   I'm not a ZFS expert.

Yes, it's not very fast, but keep in mind that I'm using SATA disks.
But the issue is not server performance, because all the other software is
able to back up in a very short time with the same hardware.

> 2) rsyncd vs rsync:
> When BackupPC uses the 'rsync' method, it uses ssh to start a dedicated
> rsync server on the client system with parameters picked by the BackupPC
> developers.
> When you use the 'rsyncd' method, the options on the client side are
> picked by you, and may not play well with BackupPC.  It would be easy to
> test around this by setting up BackupPC to use the 'rsync' method instead
> (setting up ssh correctly, of course) and seeing if you note any
> improvement.  That will isolate any issues with your rsyncd configs.

Ok, I can try that.
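
For reference, the switch being suggested boils down to something like
this. It's a sketch, not a tested recipe: the per-host override file and
the ssh arrangement (backuppc user on the server able to ssh to root on
the client) follow the usual BackupPC 4 layout, and the option names
should be checked against your config.pl.

  # /etc/BackupPC/pc/<host>.pl  (per-host override), illustrative
  $Conf{XferMethod}   = 'rsync';                      # was 'rsyncd'
  $Conf{RsyncSshArgs} = ['-e', '$sshPath -l root'];   # BPC 4 default, shown for clarity

  # on the server, as the backuppc user, prove the transport first:
  #   sudo -u backuppc ssh -l root CLIENT whoami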

> A 4x 1Gbps network link will look exactly like a single 1Gbps link per
> network channel (stream) unless you've got some really nice port
> aggregation hardware that can spray data at 4Gbps across those.  As such,
> unless you have parallel jobs running (multithreaded), I wouldn't expect
> to see any product do better than 1Gbps from any single client in your
> environment.  The BackupPC server, running multiple backup jobs, could see
> a benefit from the bonded connection, being able to manage four 1Gbps
> streams at the same time under optimal conditions, which never happens.

I'm running 4 concurrent backups; with plain rsync/rsnapshot I'm able to run 8.



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Ray Frush
Gandalf-

The server you're trying to back up doesn't seem that large.  A similarly
sized server in my environment (210GB, 3M files) does a full backup in
~80-90 minutes, and incrementals run 10-15 minutes.  Monitoring suggests
that the server never exceeds 1Gbps outbound on the network connection.
This indicates to me that BackupPC is capable of doing the job, so that
leaves the elements of your environment as the variables.

Here's a list that comes to mind:

1) target disk performance:
You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
today's world.  I get bothered when storage is slower than a single 10K RPM
drive (~100-120MB/sec).  I wonder how fast metadata operations are.
bonnie++ benchmarks might indicate an issue here, as BackupPC is metadata
intensive and has to read a lot of metadata to properly place files in the
CPOOL (see the example run after point 2).  Compare those results with
other storage to gauge how well your ZFS is performing.  I'm not a ZFS
expert.

2) rsyncd vs rsync:
When BackupPC uses the 'rsync' method, it uses ssh to start a dedicated
rsync server on the client system with parameters picked by the BackupPC
developers.
When you use the 'rsyncd' method, the options on the client side are
picked by you, and may not play well with BackupPC.  It would be easy to
test around this by setting up BackupPC to use the 'rsync' method instead
(setting up ssh correctly, of course) and seeing if you note any
improvement.  That will isolate any issues with your rsyncd configs.
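
On point 1, this is the kind of run I mean. Treat the path, size and user
as placeholders for your setup: -s should be at least twice RAM so the
page cache can't hide the disks, and -n drives the small-file
create/stat/delete tests that roughly approximate pool metadata traffic.

  bonnie++ -d /var/lib/backuppc/tmp -s 32G -n 128 -u backuppc
  # compare the Sequential Create / Random Create results against another
  # box; that's the metadata side BackupPC leans on hardest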


A 4x 1Gbps network link will look exactly like a single 1Gbps link per
network channel (stream) unless you've got some really nice port
aggregation hardware that can spray data at 4Gbps across those.  As such,
unless you have parallel jobs running (multithreaded), I wouldn't expect
to see any product do better than 1Gbps from any single client in your
environment.  The BackupPC server, running multiple backup jobs, could see
a benefit from the bonded connection, being able to manage four 1Gbps
streams at the same time under optimal conditions, which never happens.
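
It's easy to confirm that single-stream ceiling before blaming anything
else: run iperf3 between the client and the BackupPC server and compare
one stream against four parallel streams over the bond (the hostname below
is a placeholder).

  # on the BackupPC server
  iperf3 -s
  # on the client: one stream, then four parallel streams
  iperf3 -c backuppc-server -t 30
  iperf3 -c backuppc-server -t 30 -P 4
  # if -P 4 gets roughly 4x the single-stream figure, the bond is fine and
  # a single rsync stream was never going to beat ~1Gbps anyway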




On Wed, Sep 20, 2017 at 8:34 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-09-20 16:14 GMT+02:00 Ray Frush :
> > Question: just how big is the host you're trying to back up?  GB?
> > Number of files?
>
> From the BackupPC web page: 147466.4 MB, 3465344 files.
>
> > What is the network connection between the client and the backup
> > server?
>
> 4x 1GbE bonded on both sides.
>
>
> > I'm curious about what it is about your environment that is making
> > it so hard to back up.
>
> It's the same with ALL the servers I'm trying to back up.
> BPC is about 12 times slower than any other solution I've tested.
> Obviously: same backup server, same source servers.
>
> > I believe I've mentioned my largest, hairiest server is 770GB with 6.8
> > Million files.   Full backups on that system take 8.5 hours to run.
> > Incrementals take 20-30 minutes.   I have no illusions that the
> > infrastructure I'm using to back things up is the fastest, but it's fast
> > enough for the job.
>
> The long running backup (about 38 hours, still running) is an incremental.
>



-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 16:52 GMT+02:00 Gandalf Corvotempesta
:
> running "zpool iostat" shows about 50-70MB/s

Even more:

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.66T  4.21T    291    393  3.44M  5.88M
rpool       6.66T  4.21T    128  10.2K  14.3M   133M
rpool       6.66T  4.21T    147  7.44K  16.6M   111M
rpool       6.66T  4.21T    118  3.18K  7.55M  58.6M
rpool       6.66T  4.21T    339  3.70K  32.1M   101M
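
For anyone reproducing this: the table is interval mode, e.g. "zpool
iostat rpool 5", where the first line is the since-boot average rather
than a live sample. Adding -v breaks the numbers down per vdev, which is
handy for spotting one slow disk dragging the whole pool.

  zpool iostat rpool 5        # pool-wide, 5-second samples
  zpool iostat -v rpool 5     # per-vdev / per-disk breakdown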



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 16:50 GMT+02:00 Les Mikesell :
> You mentioned using ZFS with compression.  What kind of disk
> performance does that give when working with odd-sized chunks?

Running "zpool iostat" shows about 50-70MB/s.



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Les Mikesell
On Wed, Sep 20, 2017 at 9:34 AM, Gandalf Corvotempesta
 wrote:
>
>> I'm curious about what it is about your environment that is making
>> it so hard to back up.
>
> It's the same with ALL the servers I'm trying to back up.
> BPC is about 12 times slower than any other solution I've tested.
> Obviously: same backup server, same source servers.

You mentioned using ZFS with compression.  What kind of disk performance
does that give when working with odd-sized chunks?
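
For the record, the recordsize and compression settings in play are
visible with zfs get; the dataset name below is only an example.

  zfs get recordsize,compression,compressratio rpool/backuppc
  # a 128K default recordsize against many small, odd-sized BackupPC writes
  # is where the partial-block overhead asked about above would bite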

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 16:14 GMT+02:00 Ray Frush :
> Question: just how big is the host you're trying to back up?  GB?  Number
> of files?

From the BackupPC web page: 147466.4 MB, 3465344 files.

> What is the network connection between the client and the backup
> server?

4x 1GbE bonded on both sides.


> I'm curious about what it is about your environment that is making
> it so hard to back up.

It's the same with ALL the servers I'm trying to back up.
BPC is about 12 times slower than any other solution I've tested.
Obviously: same backup server, same source servers.

> I believe I've mentioned my largest, hairiest server is 770GB with 6.8
> Million files.   Full backups on that system take 8.5 hours to run.
> Incrementals take 20-30 minutes.   I have no illusions that the
> infrastructure I'm using to back things up is the fastest, but it's fast
> enough for the job.

The long running backup (about 38 hours, still running) is an incremental.



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 15:10 GMT+02:00 Craig Barratt via BackupPC-users
:
> Since you have concluded what the problem is, I don't have anything
> constructive to add.

No, I haven't found the exact bug; some help is needed.
Having a backup still running after (right now) 38 hours, when the same
host is able to back up the same server with the same rsync arguments in
about 3 hours, is what makes me suspicious.

I know that BPC does many more things than other software. OK, that's
clear, and I could understand something like a small slowdown (maybe
200%? 300%?), but not 12 times slower. TWELVE TIMES.
That's 1200% slower.



Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Ray Frush
Question: just how big is the host you're trying to back up?  GB?  Number
of files?  What is the network connection between the client and the
backup server?  I'm curious about what it is about your environment that
is making it so hard to back up.

I believe I've mentioned my largest, hairiest server is 770GB with 6.8
Million files.   Full backups on that system take 8.5 hours to run.
Incrementals take 20-30 minutes.   I have no illusions that the
infrastructure I'm using to back things up is the fastest, but it's fast
enough for the job.


--
Ray Frush
Colorado State University.



On Wed, Sep 20, 2017 at 12:41 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-09-19 18:01 GMT+02:00 Gandalf Corvotempesta
> :
> > Removed "--inplace" from the command line and ran the same backup
> > right now from BPC.
> > It's too early to be sure, but it seems to get further. Let's see
> > during the night.
>
> It's still running. This is OK on one side, as "--inplace" may have
> caused the issue, but on the other side there is something not working
> properly in BPC. An incremental backup is still running (about 80%)
> since yesterday at 17:39.
>
> rsync, rsnapshot, bacula, rdiff-backup, bareos, and borg all took about
> 3 hours (some a little more, some a little less) to back up this host in
> much the same way (more or less the same final size of backup).
> BackupPC is taking 10 times longer than any other backup software, and
> this makes BPC unusable with huge hosts. An order of magnitude is
> totally unacceptable and can only mean some bugs in the code.
>
> As I wrote many months ago, I think there is something not working
> properly in BPC; it's impossible that BPC deduplication alone is slowing
> down backups this much.
> Also, I've removed the compression, because I'm using ZFS with native
> compression, so BPC doesn't have to decompress, check the local file,
> compress the new one and so on.
>
> And after the backup, refCnt and fsck are run. For this server, the
> "post-backup" phase takes another hour or two.
>
> Maybe I have a hardware issue on this backup server, but all the other
> backup software that I've tried runs on this server with no issue at
> all. Only BPC is slow as hell.
>



-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Craig Barratt via BackupPC-users
> An order of magnitude is totally unacceptable and can only mean some bugs
> in the code.
>
> Only BPC is slow as hell.


Since you have concluded what the problem is, I don't have anything
constructive to add.

Craig

On Tue, Sep 19, 2017 at 11:41 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-09-19 18:01 GMT+02:00 Gandalf Corvotempesta
> :
> > Removed "--inplace" from the command line and ran the same backup
> > right now from BPC.
> > It's too early to be sure, but it seems to get further. Let's see
> > during the night.
>
> It's still running. This is OK on one side, as "--inplace" may have
> caused the issue, but on the other side there is something not working
> properly in BPC. An incremental backup is still running (about 80%)
> since yesterday at 17:39.
>
> rsync, rsnapshot, bacula, rdiff-backup, bareos, and borg all took about
> 3 hours (some a little more, some a little less) to back up this host in
> much the same way (more or less the same final size of backup).
> BackupPC is taking 10 times longer than any other backup software, and
> this makes BPC unusable with huge hosts. An order of magnitude is
> totally unacceptable and can only mean some bugs in the code.
>
> As I wrote many months ago, I think there is something not working
> properly in BPC; it's impossible that BPC deduplication alone is slowing
> down backups this much.
> Also, I've removed the compression, because I'm using ZFS with native
> compression, so BPC doesn't have to decompress, check the local file,
> compress the new one and so on.
>
> And after the backup, refCnt and fsck are run. For this server, the
> "post-backup" phase takes another hour or two.
>
> Maybe I have a hardware issue on this backup server, but all the other
> backup software that I've tried runs on this server with no issue at
> all. Only BPC is slow as hell.
>


Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-19 18:01 GMT+02:00 Gandalf Corvotempesta
:
> Removed "--inplace" from the command line and ran the same backup
> right now from BPC.
> It's too early to be sure, but it seems to get further. Let's see
> during the night.

It's still running. This is OK on one side, as "--inplace" may have
caused the issue, but on the other side there is something not working
properly in BPC. An incremental backup is still running (about 80%)
since yesterday at 17:39.

rsync, rsnapshot, bacula, rdiff-backup, bareos, and borg all took about
3 hours (some a little more, some a little less) to back up this host in
much the same way (more or less the same final size of backup).
BackupPC is taking 10 times longer than any other backup software, and
this makes BPC unusable with huge hosts. An order of magnitude is
totally unacceptable and can only mean some bugs in the code.

As I wrote many months ago, I think there is something not working
properly in BPC; it's impossible that BPC deduplication alone is slowing
down backups this much.
Also, I've removed the compression, because I'm using ZFS with native
compression, so BPC doesn't have to decompress, check the local file,
compress the new one and so on.
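
For completeness, the combination I'm describing is roughly the following:
compression on the ZFS dataset, BackupPC's own pool compression turned
off. The dataset name and lz4 are examples, not a record of the exact
setup here.

  # ZFS side: let the filesystem do the compression
  zfs set compression=lz4 rpool/backuppc
  zfs get compressratio rpool/backuppc

  # BackupPC side, in config.pl: store pool files uncompressed
  $Conf{CompressLevel} = 0;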

And after the backup, refCnt and fsck are run. For this server, the
"post-backup" phase takes another hour or two.

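The tools behind that phase are BackupPC_refCountUpdate and BackupPC_fsck.
If the post-backup pass is the painful part, it is worth timing it by hand
and checking how often a full fsck is being forced; the path, the -h flag
and the $Conf{RefCntFsck} name below are from memory, so verify them
against your install and config.pl before relying on them.

  # time the per-host reference-count pass on its own (Debian-style path)
  sudo -u backuppc time /usr/share/backuppc/bin/BackupPC_refCountUpdate -h HOST
  # $Conf{RefCntFsck} = 1;   # (verify the name/values) controls how often a
  #                          # full pool fsck runs after backups
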
Maybe I have a hardware issue on this backup server, but all the other
backup software that I've tried runs on this server with no issue at
all. Only BPC is slow as hell.
