Re: transfer speed data

2020-12-24 Thread Michael Stone

On Wed, Dec 23, 2020 at 05:09:47PM +0100, Nicolas George wrote:

> Michael Stone (12020-12-23):
> > No, network speeds are traditionally measured in bits because networks
> > transferred data in bits and telcos dealt with bits, and they sold and
> > billed bits. Computer internals were measured in bytes and words because
> > they transferred data in bytes and words. Some people do now talk about
> > network speeds for computers in byte units, but you're really just swapping
> > one source of confusion for another when you do that. (There's an immense
> > amount of existing tooling for network-related information that already uses
> > bits, so everything that decides bytes are better for networking requires
> > conversion when dealing with most other networking tools even if it
> > eliminates conversion when dealing with filesystem or memory tools.) There
> > isn't one "right answer" that magically simplifies communications.
> 
> I read this paragraph as the defense of a cargo cult.


You may certainly read it as you wish; nevertheless I'll continue to 
explain why things are measured in different ways in different domains 
without resorting to baseless conspiracy theories. I'm also mostly 
interested in communicating with actual people, so whether you believe 
you have a superior approach that is more logical than the way other 
people use language is irrelevant to me--I find it more productive to 
be as clear as possible within the bounds of common usage than to 
insist that everyone else should change. YMMV, have fun with your 
crusade.




Re: transfer speed data

2020-12-24 Thread Michael Stone

On Wed, Dec 23, 2020 at 07:27:49PM -0600, David Wright wrote:

> I thought Michael Stone had already covered that, by suggesting sparse
> files (with which I'm not familiar)


A sparse file is one which has logically allocated empty (zero-filled) 
blocks without allocating physical blocks. You can create one easily 
with something like "truncate -s 1G testfile" and use "ls -l testfile ; 
du testfile" to confirm that it's logically 1G but using 0 disk blocks. 
This is convenient for storing certain data structures with a lot of 
empty space (e.g., /var/log/lastlog). On some ancient unix systems it 
could actually be slower to access sparse files than real files, but 
you're unlikely to run into those anymore and sparse files can be useful 
in certain kinds of testing. You do want to make sure you're not testing 
something that compresses data, as a file full of zeros will skew 
results for that sort of application.
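
Concretely, the sequence looks something like this (a sketch; the exact 
ls output and block counts will vary by system and filesystem):

$ truncate -s 1G testfile
$ ls -l testfile
-rw-r--r-- 1 user user 1073741824 Dec 24 12:00 testfile
$ du testfile
0	testfile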


On Thu, Dec 24, 2020 at 11:06:50AM +0200, Andrei POPESCU wrote:

> I was rather referring to real use ;)
> 
> Speed tests under optimized conditions do have their purpose (e.g. is my
> network interface working properly?[1]), but they might be misleading
> when the bottleneck for real world transfers is elsewhere (like the
> limited storage options on the PINE A64+).


Generally you'd want to test multiple things in isolation to understand 
where the bottlenecks are. I was speaking specifically about the 
encryption algorithms because someone suggested that was the problem. 
Testing the disk I/O in isolation might be a logical next step, if a 
null-disk and null-network copy performed well. If it didn't perform 
well, then you'd have established an upper bound on what to expect from 
scp. (This would be relevant mainly on very low-power hardware these 
days, and though you're talking about an A64 I don't see where the OP 
said that was what he was using.)
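
For instance, the raw read speed of the source disk can be checked in 
isolation with something like dd ('bigfile' is a placeholder here):

# optionally, as root, drop the page cache first for a cold-cache figure:
# echo 3 > /proc/sys/vm/drop_caches
$ dd if=bigfile of=/dev/null bs=1M status=progress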




Re: transfer speed data

2020-12-24 Thread Andrei POPESCU
On Mi, 23 dec 20, 19:27:49, David Wright wrote:
> 
> I thought Michael Stone had already covered that, by suggesting sparse
> files (with which I'm not familiar) and /dev/null for conducting his
> encryption tests. I don't think any other posts had covered what's
> *between* the PCs, rather than in them.
> 
> Common sense would dictate against using SD cards etc for testing
> network speeds (unless, say, you were using such files), because you
> obviously want to reduce to a minimum the number of points of weakness
> (ie slowness), testing the speed of just one link in the chain at a time.

I was rather referring to real use ;)

Speed tests under optimized conditions do have their purpose (e.g. is my 
network interface working properly?[1]), but they might be misleading 
when the bottleneck for real world transfers is elsewhere (like the 
limited storage options on the PINE A64+).

[1] Especially for the PINE A64+, which did have some troubles with the 
network interface due to a hardware bug. Fortunately the issue was 
identified and corrected in later batches.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: transfer speed data

2020-12-23 Thread David Wright
On Wed 23 Dec 2020 at 20:15:59 (+0200), Andrei POPESCU wrote:
> On Mi, 23 dec 20, 11:48:31, David Wright wrote:
> > 
> > Some sort of rough calculation between the expected/nominal bit rate
> > and the actual data rate achieved is certainly useful, if only to
> > ascertain whether the link itself is performing well. For that, you
> > need to reduce the amount of processing (like encryption) of the
> > data, and any other tasks tying up the CPU(s). Just for the test,
> > obviously.
> > 
> > If you're still getting poor transfer rates, you might want to swap
> > cables and (if you're dealing with several hardware systems) cards.
> > For example, on the cable front, a cable with damaged blue or brown
> > pairs will give perfect 100BASE-T performance but not 1000BASE-T,
> > which uses all eight wires. (For the same reason, structural wiring
> > using brown for the telephone sockets will have the same problem.)
> 
> Let's not forget the source and destination storage (connection).
> 
> Even if the Gigabit (per second) interface on the PINE A64+ performs 
> pretty well, a transfer larger than can fit in any buffers that might be 
> involved will quickly slow down to the speed of the SD card and/or its 
> interface, or of the USB2 connection for the hard disk (the board 
> doesn't have SATA).

I thought Michael Stone had already covered that, by suggesting sparse
files (with which I'm not familiar) and /dev/null for conducting his
encryption tests. I don't think any other posts had covered what's
*between* the PCs, rather than in them.

Common sense would dictate against using SD cards etc for testing
network speeds (unless, say, you were using such files), because you
obviously want to reduce to a minimum the number of points of weakness
(ie slowness), testing the speed of just one link in the chain at a time.

Cheers,
David.



Re: transfer speed data

2020-12-23 Thread George Shuklin



On 12/23/20 2:55 AM, mick crane wrote:

> hello,
> I have a buster PC and a bullseye PC which are both supposed to have 
> gigabyte network cards connected via a little Gigabyte switch box.
> Transferring files between them, I forget which shows the transfer 
> speed per file, either scp or rsync the maximum is 50 Mbs per file.
> 
> Would you expect that to be quicker ?
> 
> mick

Rsync is the fastest, I think. If you want fast scp, you can try 
changing the cryptographic algorithm ssh/scp uses for the transfer.

In older times it was possible to force ssh (scp) to use arcfour crypto, 
which could yield about 900 Mbit/s even on an old CPU in a single 
thread with 70% CPU utilization (Dell R220).

I don't know which one is the fastest of those left after the great 
excommunication of obsolete crypto in ssh 1:7.6p1-1.




Re: transfer speed data

2020-12-23 Thread Andrei POPESCU
On Mi, 23 dec 20, 11:48:31, David Wright wrote:
> 
> Some sort of rough calculation between the expected/nominal bit rate
> and the actual data rate achieved is certainly useful, if only to
> ascertain whether the link itself is performing well. For that, you
> need to reduce the amount of processing (like encryption) of the
> data, and any other tasks tying up the CPU(s). Just for the test,
> obviously.
> 
> If you're still getting poor transfer rates, you might want to swap
> cables and (if you're dealing with several hardware systems) cards.
> For example, on the cable front, a cable with damaged blue or brown
> pairs will give perfect 100BASE-T performance but not 1000BASE-T,
> which uses all eight wires. (For the same reason, structural wiring
> using brown for the telephone sockets will have the same problem.)

Let's not forget the source and destination storage (connection).

Even if the Gigabit (per second) interface on the PINE A64+ performs 
pretty well, a transfer larger than can fit in any buffers that might be 
involved will quickly slow down to the speed of the SD card and/or its 
interface, or of the USB2 connection for the hard disk (the board 
doesn't have SATA).

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: transfer speed data

2020-12-23 Thread David Wright
On Wed 23 Dec 2020 at 16:47:21 (+0200), Andrei POPESCU wrote:
> On Mi, 23 dec 20, 10:56:36, Nicolas George wrote:
> > Andy Smith (12020-12-23):
> > > "gigabyte" is not a network speed. You probably mean gigabit
> > 
> > No, gigabit is 10⁹ bits, there is no "per second" involved either.
> > 
> > Anyway, why anybody honest would want to use this kind of unit to
> > measure an actual speed is beyond me. The only point of speaking in
> > kilo/mega/gigabits per second instead is to make the numbers seem larger
> > to attract clueless customers. Moreover, the ratio between these numbers
> > and the actual useful network speed is not eight as one might believe,
> > because they measure below the low-level network protocols.
> 
> I took that to mean the theoretical maximum.

Well, I took it to mean that mick crane had read the sides of the
boxes that the hardware came in, and mistranslated it into the OP.
So I would assume he's talking about 1000BASE-T cards.

Fortunately the engineers that design the electronics know better than
to try and stuff 1Gbps down the wires, particularly my old Cat5 ones,
and they manipulate the bits to reduce the symbol rate (think "baud")
to what the wire can manage.

> For a quick estimation of "good" transfer rates a ten to one ratio is 
> probably sufficient, e.g.:
> 
> 1 Gbit/s = 1000 Mbit/s ~ 100 M(illion) bytes (octets) per second
> 
> (which is approximately 95.367432 MiB/s according to 'qalc')
> 
> 
> If one reaches that in real world transfers (as opposed to specialized 
> tests) any further significant improvements will likely require hardware 
> upgrades.

Some sort of rough calculation between the expected/nominal bit rate
and the actual data rate achieved is certainly useful, if only to
ascertain whether the link itself is performing well. For that, you
need to reduce the amount of processing (like encryption) of the
data, and any other tasks tying up the CPU(s). Just for the test,
obviously.

If you're still getting poor transfer rates, you might want to swap
cables and (if you're dealing with several hardware systems) cards.
For example, on the cable front, a cable with damaged blue or brown
pairs will give perfect 100BASE-T performance but not 1000BASE-T,
which uses all eight wires. (For the same reason, structural wiring
using brown for the telephone sockets will have the same problem.)
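
A quick way to check what the link actually negotiated (a sketch; the 
interface name varies, and ethtool may need root):

$ ethtool eth0 | grep -i speed
	Speed: 1000Mb/s

If a damaged pair has forced a fallback, this will typically report 
100Mb/s even though both ends are gigabit-capable.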

Cheers,
David.



Re: transfer speed data

2020-12-23 Thread Michael Stone

On Thu, Dec 24, 2020 at 12:13:19AM +0800, Jeremy Ardley wrote:
> Getting back to the original question, rsync is inherently slower 
> because both ends do deep file inspection and handshaking to decide 
> what data transfer is required. scp is usually faster.


If you're rsyncing to a non-existent destination there's nothing to 
inspect and reading the files on the source doesn't take longer with 
rsync than scp. If you're rsyncing to an existing file tree to update a 
small number of files, rsync will typically be much faster because it's 
transferring only the delta. For some kinds of files (e.g., sparse 
files) rsync can be much faster because it can handle them 
intelligently. In certain (mostly pathological) cases involving large 
trees on both sides rsync will perform badly because it has to read data 
on the receiver, and if the delta is high that's wasted work. Bottom 
line: YMMV, but in general fear of rsync being slow shouldn't be a 
factor in deciding what to use.
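
As a rough illustration (hypothetical paths; -S is what lets rsync 
handle sparse files efficiently):

$ rsync -avS bigtree/ otherhost:/backup/bigtree/  # first run: full copy
$ rsync -avS bigtree/ otherhost:/backup/bigtree/  # repeat run: deltas only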




Re: transfer speed data

2020-12-23 Thread Jeremy Ardley


On 23/12/20 11:51 pm, Michael Stone wrote:

> On Wed, Dec 23, 2020 at 11:37:07PM +0800, Jeremy Ardley wrote:
> > I did some tests and found there was around a 10-20% difference in speed
> > between runs.
> 
> Yes, if you want more consistent numbers you'd need much larger test 
> file sizes; if the transfer is taking less than a second there's a lot 
> of noise in the data. I didn't bother because this is sufficient to 
> show that the encryption algorithm isn't the bottleneck on 1 Gbit 
> ethernet and I didn't feel like waiting longer for larger transfers. 
> :-) You're also maxing out 2 cores just for crypto doing this, and the 
> actual packet handling will take a bit more CPU--so if the system 
> isn't completely idle other processes can affect the results. Again, 
> not significant for a rough approximation.



I have 12 cores, mostly idle, but I get your point.

Getting back to the original question, rsync is inherently slower 
because both ends do deep file inspection and handshaking to decide what 
data transfer is required. scp is usually faster.



--
Jeremy






Re: transfer speed data

2020-12-23 Thread Nicolas George
Michael Stone (12020-12-23):
> No, network speeds are traditionally measured in bits because networks
> transferred data in bits and telcos dealt with bits, and they sold and
> billed bits. Computer internals were measured in bytes and words because
> they transferred data in bytes and words. Some people do now talk about
> network speeds for computers in byte units, but you're really just swapping
> one source of confusion for another when you do that. (There's an immense
> amount of existing tooling for network-related information that already uses
> bits, so everything that decides bytes are better for networking requires
> conversion when dealing with most other networking tools even if it
> eliminates conversion when dealing with filesystem or memory tools.) There
> isn't one "right answer" that magically simplifies communications.

I read this paragraph as the defense of a cargo cult.

>  "octet" was a term that was actually needed before bytes were
> standardized to 8 bits, but that usage confuses far more people than it
> helps these days.

You are missing part of the issue: many people do not know these
details, not only will they neglect proper capitalization, but they may
also mistake one word for a typo, and possibly "fix" it. Or maybe they
are not native English speakers and their accent will mangle the
difference when speaking.

Octet may confuse somebody *once*, but then they have learned something,
and the risk of ambiguity is much, much lower.

Anyway, this is getting off topic, I will probably not continue
discussing this subthread.

Regards,

-- 
  Nicolas George




Re: transfer speed data

2020-12-23 Thread Nicolas George
Andrei POPESCU (12020-12-23):
> I took that to mean the theoretical maximum.

Not just that. Network protocols have many layers, and each layer adds
overhead. The rates are given at the lowest level, sometimes ATM,
therefore the usable rate at the application level is significantly
lower.

Regards,

-- 
  Nicolas George




Re: transfer speed data

2020-12-23 Thread Michael Stone

On Wed, Dec 23, 2020 at 11:37:07PM +0800, Jeremy Ardley wrote:

> I did some tests and found there was around a 10-20% difference in speed
> between runs.


Yes, if you want more consistent numbers you'd need much larger test 
file sizes; if the transfer is taking less than a second there's a lot 
of noise in the data. I didn't bother because this is sufficient to show 
that the encryption algorithm isn't the bottleneck on 1 Gbit ethernet 
and I didn't feel like waiting longer for larger transfers. :-) You're 
also maxing out 2 cores just for crypto doing this, and the actual 
packet handling will take a bit more CPU--so if the system isn't 
completely idle other processes can affect the results. Again, not 
significant for a rough approximation.




Re: transfer speed data

2020-12-23 Thread Jeremy Ardley


On 23/12/20 11:03 pm, Michael Stone wrote:

> On Wed, Dec 23, 2020 at 09:56:01AM +0800, Jeremy Ardley wrote:
> > Having said that, scp and ssh are affected by the encryption algorithm. The
> > fastest one at the moment is blowfish and it's possible to get up to 50 MB/s on
> > a gig lan.
> 
> That's pretty ancient advice. The fastest on most modern x86 CPUs with 
> AES-NI instructions is aes128-gcm@openssh.com. Without AES-NI your 
> fastest may be chacha20-poly1305@openssh.com. The default is chacha20, 
> which is fast enough in most cases that it doesn't matter, but worth 
> testing & reconfiguring in cases where it does. Blowfish isn't 
> supported in the latest versions of ssh, and even before it was 
> dropped it was much slower than hardware-accelerated AES. It also 
> never got an authenticated encryption mode IIRC, so it had additional 
> MAC overhead that the more modern modes do not.
> 
> The following are on a mid-range Ryzen machine running to localhost, 
> to take the network out of the equation, and are copying a sparse 1G 
> file to /dev/null so there's no disk I/O; either of these algorithms 
> will easily max out a gigabit connection if the disks are fast enough.
> 
> scp -o Ciphers=aes128-gcm@openssh.com testfil localhost:/dev/null
> testfil   100% 1024MB 864.3MB/s   00:01
> 
> scp -o Ciphers=chacha20-poly1305@openssh.com testfil localhost:/dev/null
> testfil   100% 1024MB 475.1MB/s   00:02
> 
> For comparison, here's stretch (still supported blowfish) on a much 
> lower power intel CPU (i3-7100U):
> 
> $ scp -o Ciphers=chacha20-poly1305@openssh.com testfil localhost:/dev/null
> testfil   100% 1024MB 167.7MB/s   00:06
> $ scp -o Ciphers=aes128-gcm@openssh.com testfil localhost:/dev/null
> testfil   100% 1024MB 507.5MB/s   00:02
> $ scp -o Ciphers=blowfish-cbc testfil localhost:/dev/null
> testfil   100% 1024MB  77.8MB/s   00:13
> 
> (see how terrible blowfish is, and how the AES-NI acceleration leads 
> to AES tremendously outperforming CHACHA20?)
> 
> here's an almost 10 year old non-AES-NI desktop cpu:
> 
> $ scp -o Ciphers=aes128-gcm@openssh.com testfil localhost:/dev/null
> testfil   100% 1024MB 224.7MB/s   00:04
> $ scp -o Ciphers=chacha20-poly1305@openssh.com testfil localhost:/dev/null
> testfil   100% 1024MB 184.9MB/s   00:05
> 
> Note that AES & CHACHA20 are much closer in performance, but AES is 
> still faster. Note also that either can still max out gigabit ethernet.




Thanks for the update. Here is my available cipher list:

ssh -Q cipher
3des-cbc
aes128-cbc
aes192-cbc
aes256-cbc
rijndael-cbc@lysator.liu.se
aes128-ctr
aes192-ctr
aes256-ctr
aes128-gcm@openssh.com
aes256-gcm@openssh.com
chacha20-poly1305@openssh.com

I did some tests and found there was around a 10-20% difference in speed 
between runs. This is on a Ryzen 5 with an M.2 PCIe drive, using 
aes128-gcm@openssh.com:

(base) jeremy@client:~$ scp -o Ciphers=aes128-gcm@openssh.com sparse_file localhost:/dev/null
sparse_file 100% 1024MB 770.8MB/s   00:01
(base) jeremy@client:~$ scp -o Ciphers=aes128-gcm@openssh.com sparse_file localhost:/dev/null
sparse_file 100% 1024MB 814.2MB/s   00:01
(base) jeremy@client:~$ scp -o Ciphers=aes128-gcm@openssh.com sparse_file localhost:/dev/null
sparse_file 100% 1024MB 757.6MB/s   00:01


--
Jeremy




Re: transfer speed data

2020-12-23 Thread Michael Stone

On Wed, Dec 23, 2020 at 10:56:36AM +0100, Nicolas George wrote:

> Anyway, why anybody honest would want to use this kind of unit to
> measure an actual speed is beyond me. The only point of speaking in
> kilo/mega/gigabits per second instead is to make the numbers seem larger
> to attract clueless customers.


No, network speeds are traditionally measured in bits because networks 
transferred data in bits and telcos dealt with bits, and they sold and 
billed bits. Computer internals were measured in bytes and words because 
they transferred data in bytes and words. Some people do now talk about 
network speeds for computers in byte units, but you're really just 
swapping one source of confusion for another when you do that. (There's 
an immense amount of existing tooling for network-related information 
that already uses bits, so everything that decides bytes are better for 
networking requires conversion when dealing with most other networking 
tools even if it eliminates conversion when dealing with filesystem or 
memory tools.) There isn't one "right answer" that magically simplifies 
communications. 

The most obvious improvement, as always, is to simply write out "bit" 
and "byte" and not abbreviate them because everybody forgets which is 
capitalized. "octet" was a term that was actually needed before bytes 
were standardized to 8 bits, but that usage confuses far more people 
than it helps these days.




Re: transfer speed data

2020-12-23 Thread Michael Stone

On Wed, Dec 23, 2020 at 09:56:01AM +0800, Jeremy Ardley wrote:

> Having said that, scp and ssh are affected by the encryption algorithm. The
> fastest one at the moment is blowfish and it's possible to get up to 50 MB/s on
> a gig lan.


That's pretty ancient advice. The fastest on most modern x86 CPUs with 
AES-NI instructions is aes128-gcm@openssh.com. Without AES-NI your 
fastest may be chacha20-poly1305@openssh.com. The default is chacha20, 
which is fast enough in most cases that it doesn't matter, but worth 
testing & reconfiguring in cases where it does. Blowfish isn't supported 
in the latest versions of ssh, and even before it was dropped it was 
much slower than hardware-accelerated AES. It also never got an 
authenticated encryption mode IIRC, so it had additional MAC overhead 
that the more modern modes do not.


The following are on a mid-range Ryzen machine running to localhost, to 
take the network out of the equation, and are copying a sparse 1G file 
to /dev/null so there's no disk I/O; either of these algorithms will 
easily max out a gigabit connection if the disks are fast enough.



scp -o Ciphers=aes128-gcm@openssh.com testfil localhost:/dev/null
testfil   100% 1024MB 864.3MB/s   00:01

scp -o Ciphers=chacha20-poly1305@openssh.com testfil localhost:/dev/null
testfil   100% 1024MB 475.1MB/s   00:02

For comparison, here's stretch (still supported blowfish) on a much lower 
power intel CPU (i3-7100U):


$ scp -o Ciphers=chacha20-poly1305@openssh.com testfil localhost:/dev/null
testfil   100% 1024MB 167.7MB/s   00:06
$ scp -o Ciphers=aes128-gcm@openssh.com testfil localhost:/dev/null
testfil   100% 1024MB 507.5MB/s   00:02
$ scp -o Ciphers=blowfish-cbc testfil localhost:/dev/null
testfil   100% 1024MB  77.8MB/s   00:13

(see how terrible blowfish is, and how the AES-NI acceleration leads to 
AES tremendously outperforming CHACHA20?)


here's an almost 10 year old non-AES-NI desktop cpu:

$ scp -o Ciphers=aes128-gcm@openssh.com testfil localhost:/dev/null
testfil   100% 1024MB 224.7MB/s   00:04
$ scp -o Ciphers=chacha20-poly1305@openssh.com testfil localhost:/dev/null
testfil   100% 1024MB 184.9MB/s   00:05

Note that AES & CHACHA20 are much closer in performance, but AES is 
still faster. Note also that either can still max out gigabit ethernet.




Re: transfer speed data

2020-12-23 Thread Andrei POPESCU
On Mi, 23 dec 20, 10:56:36, Nicolas George wrote:
> Andy Smith (12020-12-23):
> > "gigabyte" is not a network speed. You probably mean gigabit
> 
> No, gigabit is 10⁹ bits, there is no "per second" involved either.
> 
> Anyway, why anybody honest would want to use this kind of unit to
> measure an actual speed is beyond me. The only point of speaking in
> kilo/mega/gigabits per second instead is to make the numbers seem larger
> to attract clueless customers. Moreover, the ratio between these numbers
> and the actual useful network speed is not eight as one might believe,
> because they measure below the low-level network protocols.

I took that to mean the theoretical maximum.

For a quick estimation of "good" transfer rates a ten to one ratio is 
probably sufficient, e.g.:

1 Gbit/s = 1000 Mbit/s ~ 100 M(illion) bytes (octets) per second

(which is approximately 95.367432 MiB/s according to 'qalc')
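
For example (the exact qalc output format may differ between versions):

$ qalc "100 MB to MiB"
100 megabytes = 95.367432 mebibytes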


If one reaches that in real world transfers (as opposed to specialized 
tests) any further significant improvements will likely require hardware 
upgrades.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: transfer speed data

2020-12-23 Thread deloptes
Andy Smith wrote:

> Hi Mick,
> 
> On Wed, Dec 23, 2020 at 12:55:58AM +, mick crane wrote:
>> I have a buster PC and a bullseye PC which are both supposed to have
>> gigabyte network cards connected via a little Gigabyte switch box.
> 
> "gigabyte" is not a network speed. You probably mean gigabit; that
> is 10⁹ bits per second, i.e. 1000 * 1000 * 1000 bits per
> second.
> 
> GNU units can be useful to indicate what you can expect:
> 
> $ units gigabit
> Definition: 1e9 bit = 1e+09 bit
> $ units megabyte
> Definition: 1e6 byte = 8e+06 bit
> $ units 1gigabit/sec megabyte/sec
> * 125
> / 0.008
> 
> So on a gigabit network the absolute maximum you could expect is
> 125MByte/sec. Note that's SI prefix mega- meaning million bytes, not IEC
> binary prefix MiB, meaning 1024 * 1024 bytes.
> 
>> Transferring files between them, I forget which shows the transfer speed
>> per file, either scp or rsync the maximum is 50 Mbs per file.
> 
> I shall assume that "50Mbs" means "50 megabytes per second" and not
> what it literally means which is "50 Megabits per second", a
> quantity one eighth as much.
> 
> scp and rsync add a lot of overhead, especially when operating on
> relatively small files. On a gigabit network I find myself lucky to
> get more than about 90MB/sec through ssh or rsync-over-ssh.
> 
> So I find 50MB/s plausible.
> 
>> Would you expect that to be quicker ?
> 
> Not really.
> 
> To get a more realistic idea of your network's performance use
> something like Iperf. You still won't see the full 125MB/s but I'd
> expect it to go over 100.
> 
> If you are trying to transfer files as fast as possible and don't
> need encryption, consider netcat. If you do need the encryption of
> ssh, but don't need the features of rsync, then "tar | ssh" will be
> a little faster.
> 
> On a low latency network (like your local network) at gigabit+
> speeds, compression won't make things faster.
> 
> Cheers,
> Andy

I fully agree with you regarding the units and the rest.
SCP displays the transfer rate in MB/s, not Mbps, so the expected maximum
on a 1 Gbit network would be 125 MB/s.

I managed to get above 50 MB/s on the home network (with luks+LVM, using
SCP) only after I replaced the old discs.
I guess it depends on the machine and disks. I bet I'd get even higher
speeds (close to 125 MB/s) if I used SSDs.

The point is that there is more to it than the mere calculation: boards
with different architectures can have a small bottleneck here or there,
and memory and other things matter too.




Re: transfer speed data

2020-12-23 Thread Nicolas George
Andy Smith (12020-12-23):
> "gigabyte" is not a network speed. You probably mean gigabit

No, gigabit is 10⁹ bits, there is no "per second" involved either.

Anyway, why anybody honest would want to use this kind of unit to
measure an actual speed is beyond me. The only point of speaking in
kilo/mega/gigabits per second instead is to make the numbers seem larger
to attract clueless customers. Moreover, the ratio between these numbers
and the actual useful network speed is not eight as one might believe,
because they measure below the low-level network protocols.

To express a network speed, use bytes per second at the IP level,
because this is what is useful for administrators and users. And to
avoid all ambiguousness, call them octets per second, because byte and
bit are too similar.

To measure it on the same network, we can use:

10.0.1.2 ~ $ socat - udp-listen:1234 < /dev/zero

10.0.1.3 ~ $ {echo;sleep 100} | socat udp:10.0.1.2:1234 - | pv > /dev/null
2.21GiB 0:00:20 [ 113MiB/s] [  <=> ]

Yes, 113 mega-octets per second seems right for this kind of link.

Beware: do not use this on an asymmetrical network. I tried to measure
the bandwidth of a DSL line using a very fast server as peer, and the
network immediately went down, I believe I triggered some kind of flood
protection at the ISP. For this case, rely on TCP flow control, even if
it adds a little overhead:

socat - tcp-listen:1234,reuseaddr=1 < /dev/urandom

and on the other side, mirroring the UDP example:

socat tcp:10.0.1.2:1234 - | pv > /dev/null
53.5MiB 0:00:25 [2.16MiB/s]

(Using /dev/urandom instead of /dev/zero to avoid possible illusory
boosts from transparent compression somewhere; but /dev/urandom could be
CPU-bound on a fast network.)

Or we can just download something from a public server that we know is
very fast:

Length: 783548416 (747M) [application/octet-stream]
Saving to: ‘/dev/null’

/dev/null            21%[===>                ] 157.25M  71.4MB/s

Regards,

-- 
  Nicolas George




Re: transfer speed data

2020-12-22 Thread Andy Smith
Hi Mick,

On Wed, Dec 23, 2020 at 12:55:58AM +, mick crane wrote:
> I have a buster PC and a bullseye PC which are both supposed to have
> gigabyte network cards connected via a little Gigabyte switch box.

"gigabyte" is not a network speed. You probably mean gigabit; that
is 10⁹ bits per second, i.e. 1000 * 1000 * 1000 bits per
second.

GNU units can be useful to indicate what you can expect:

$ units gigabit
Definition: 1e9 bit = 1e+09 bit
$ units megabyte
Definition: 1e6 byte = 8e+06 bit
$ units 1gigabit/sec megabyte/sec
* 125
/ 0.008

So on a gigabit network the absolute maximum you could expect is
125MByte/sec. Note that's SI prefix mega- meaning million bytes, not IEC
binary prefix MiB, meaning 1024 * 1024 bytes.

> Transferring files between them, I forget which shows the transfer speed per
> file, either scp or rsync the maximum is 50 Mbs per file.

I shall assume that "50Mbs" means "50 megabytes per second" and not
what it literally means which is "50 Megabits per second", a
quantity one eighth as much.

scp and rsync add a lot of overhead, especially when operating on
relatively small files. On a gigabit network I find myself lucky to
get more than about 90MB/sec through ssh or rsync-over-ssh.

So I find 50MB/s plausible.

> Would you expect that to be quicker ?

Not really.

To get a more realistic idea of your network's performance use
something like Iperf. You still won't see the full 125MB/s but I'd
expect it to go over 100.

If you are trying to transfer files as fast as possible and don't
need encryption, consider netcat. If you do need the encryption of
ssh, but don't need the features of rsync, then "tar | ssh" will be
a little faster.
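
For example (a sketch; hostnames are placeholders, and netcat flags
differ between the traditional and OpenBSD variants):

# unencrypted raw copy with netcat:
receiver$ nc -l -p 1234 > file
sender$ nc receiver 1234 < file

# encrypted, but with less per-file overhead than scp:
sender$ tar -cf - dir | ssh receiver 'tar -xf - -C /dest'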

On a low latency network (like your local network) at gigabit+
speeds, compression won't make things faster.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: transfer speed data

2020-12-22 Thread Jeremy Ardley


On 23/12/20 9:40 am, Jeremy Ardley wrote:

> rsync is never particularly fast as there is a lot of handshaking and 
> file examination at each end prior to a transfer. I wouldn't be 
> surprised at 50 Mbps.
> 
> scp should be a lot faster as there is no handshaking other than 
> establishing the session; and the encryption/decryption process is not 
> very compute intensive.





Having said that, scp and ssh are affected by the encryption algorithm. 
The fastest one at the moment is blowfish and it's possible to get up to 
50 MB/s on a gig lan.


See 
https://www.systutorials.com/improving-sshscp-performance-by-choosing-ciphers/


--
Jeremy




Re: transfer speed data

2020-12-22 Thread Jeremy Ardley


On 23/12/20 8:55 am, mick crane wrote:

> hello,
> I have a buster PC and a bullseye PC which are both supposed to have 
> gigabyte network cards connected via a little Gigabyte switch box.
> Transferring files between them, I forget which shows the transfer 
> speed per file, either scp or rsync the maximum is 50 Mbs per file.
> 
> Would you expect that to be quicker ?
> 
> mick



rsync is never particularly fast as there is a lot of handshaking and 
file examination at each end prior to a transfer. I wouldn't be 
surprised at 50 Mbps.


scp should be a lot faster as there is no handshaking other than 
establishing the session; and the encryption/decryption process is not 
very compute intensive.


--
Jeremy




Re: transfer speed data

2020-12-22 Thread Georgi Naplatanov
On 12/23/20 2:55 AM, mick crane wrote:
> hello,
> I have a buster PC and a bullseye PC which are both supposed to have
> gigabyte network cards connected via a little Gigabyte switch box.
> Transferring files between them, I forget which shows the transfer speed
> per file, either scp or rsync the maximum is 50 Mbs per file.
> Would you expect that to be quicker ?
> 

Hi Mick,

scp uses encryption, and if your processor is slow then yes, it's
possible for the transfer to be slower. For a raw speed measurement you
can use FTP if possible.

Kind regards
Georgi



Re: transfer speed data

2020-12-22 Thread Bob Weber

On 12/22/20 7:55 PM, mick crane wrote:

> hello,
> I have a buster PC and a bullseye PC which are both supposed to have gigabyte 
> network cards connected via a little Gigabyte switch box.
> Transferring files between them, I forget which shows the transfer speed per 
> file, either scp or rsync the maximum is 50 Mbs per file.
> 
> Would you expect that to be quicker ?
> 
> mick

To check the network try iperf3. Set one PC to be a server and the other a 
client. It will show the transfer rate between the 2 PCs. It uses port 5201, 
so if you have a firewall on the server-side PC you will need to open that 
port.
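
The basic invocation looks like this (iperf3 must be installed on both 
machines; 'server' is a placeholder hostname):

server$ iperf3 -s
client$ iperf3 -c server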



Here is what I get between 2 PCs running gigabit NICs.


Connecting to host bingo, port 5201
[  5] local 172.16.0.3 port 40752 connected to 172.16.0.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   939 Mbits/sec    2    277 KBytes
[  5]   1.00-2.00   sec   112 MBytes   943 Mbits/sec    0    277 KBytes
[  5]   2.00-3.00   sec   112 MBytes   942 Mbits/sec    0    277 KBytes
[  5]   3.00-4.00   sec   112 MBytes   938 Mbits/sec    0    277 KBytes
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec    0    277 KBytes
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0    277 KBytes
[  5]   6.00-7.00   sec   112 MBytes   938 Mbits/sec    0    277 KBytes
[  5]   7.00-8.00   sec   113 MBytes   944 Mbits/sec    0    277 KBytes
[  5]   8.00-9.00   sec   110 MBytes   922 Mbits/sec    0    277 KBytes
[  5]   9.00-10.00  sec   112 MBytes   938 Mbits/sec    0    277 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec    2             sender
[  5]   0.00-10.00  sec  1.09 GBytes   938 Mbits/sec                  receiver

iperf Done.


--



*...Bob*