How can I verify that a 1Gb/s network is indeed
operating at its optimal speed? I tried this:
Transfer a large amount of data using a lightweight protocol
(FTP, say) and time how long it takes.
There are also dedicated testing utilities, for instance ttcp.
[master]$ ping -s 65507 node
65515 bytes from node: icmp_seq=0 ttl=64 time=1.97 ms
65515 bytes from node: icmp_seq=1 ttl=64 time=1.95 ms
65515 bytes from node: icmp_seq=2 ttl=64 time=1.94 ms
65515 bytes from node: icmp_seq=3 ttl=64 time=1.97 ms
This is a measure of latency only.
For instance, I can easily get 10 ms pings on a 512 kbit/s ADSL line,
yet it can only transfer data at ~60 KB/sec.
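The arithmetic behind that ADSL example shows why a small ping says almost nothing about throughput:

```python
# Bandwidth vs. what a small ping actually measures.
link_bits_per_s = 512_000                  # 512 kbit/s ADSL downlink
print(link_bits_per_s / 8 / 1024)          # 62.5 KB/s -- matches the ~60 KB/s observed

# One-way serialization time of a 64-byte ping packet on that link:
ping_bytes = 64
print(ping_bytes * 8 / link_bits_per_s * 1000)  # 1.0 ms -- the 10 ms RTT is mostly latency
```

The 64-byte packet occupies the link for only ~1 ms, so the 10 ms round trip is dominated by propagation and queueing delay, not bandwidth.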
I can get these values on a very lightly loaded 100Mbit/sec network:
$ ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: icmp_seq=0 ttl=128 time=0.844 ms
64 bytes from 10.0.0.5: icmp_seq=1 ttl=128 time=0.740 ms
PS: I verified my calculation method with two
computers here on a 100 Mbit/s network, which gave:
time with ping: 12.4 ms
ideal calculated time: 10 ms
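A rough sanity check of that "ideal calculated time", assuming the echo reply is the same size as the request and ignoring Ethernet framing and fragmentation overhead:

```python
# Ideal round-trip serialization time for a maximum-size ping on 100 Mbit/s.
# 65507 data bytes + 8 bytes ICMP header + 20 bytes IP header = 65535 bytes.
packet_bytes = 65507 + 8 + 20
link_bits_per_s = 100_000_000
one_way_ms = packet_bytes * 8 / link_bits_per_s * 1000
print(2 * one_way_ms)  # ~10.5 ms round trip
```

That puts the measured 12.4 ms within about 20% of the theoretical minimum, which is reasonable once per-fragment headers and stack overhead are counted.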
Sounds like your 100 Mbit/s network is very heavily loaded; you would
expect ~1 ms pings.