>
> *You might want to look at the NFS docs a bit more.  NFS has used UDP
> forever, TCP was only added in version 4 if memory serves, and it had to be
> specifically enabled.*
>

I do not really care what transport NFS uses at the lower level. NFS on
previous kernels was pretty much the fastest network storage protocol (on
this hardware); I've tested many over the last 3 years or so on this
hardware. But here, look . . .

*dd to ramdisk*, just to show how fast dd from /dev/zero can be on this
hardware:
william@beaglebone:~$ df -h ramfs/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           256M     0  256M   0% /home/william/ramfs
william@beaglebone:~$ dd if=/dev/zero bs=1M count=200
of=/home/william/ramfs/test.log
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 1.00041 s, *210 MB/s*

*dd to NFS share*
william@beaglebone:~$ df -h ti/
Filesystem                           Size  Used Avail Use% Mounted on
192.168.254.162:/home/william/share  136G   41G   88G  32% /home/william/ti
william@beaglebone:~$ dd if=/dev/zero bs=1M count=1000
of=/home/william/ti/test.log
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 108.343 s, *9.7 MB/s*

So, something to note here: apparently in this test TCP is faster than
UDP, since I'm using NFS v3 on the server side. And since netcat is TCP .
. . but these are also really basic "tests" that may give a decent
indication of what is fastest, while not being entirely accurate for
different situations.
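Just to sketch the kind of netcat test I have in mind, here is a rough
loopback stand-in in Python (my own sketch, not the actual netcat command
line; the host, port, and 50 MB transfer size are arbitrary choices):

```python
# Loopback TCP throughput check: one thread listens and discards whatever
# arrives, the main thread streams zero-filled 1 MiB blocks at it (like
# dd bs=1M) and times the transfer.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007
NBYTES = 50 * 1024 * 1024  # 50 MiB total, like dd count=50 bs=1M

def sink(server_sock, received):
    # Accept one connection and count bytes until the sender closes.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            chunk = conn.recv(1 << 20)
            if not chunk:
                break
            received[0] += len(chunk)

def measure():
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    received = [0]
    t = threading.Thread(target=sink, args=(srv, received))
    t.start()
    buf = b"\0" * (1 << 20)  # 1 MiB blocks
    start = time.time()
    with socket.create_connection((HOST, PORT)) as cli:
        for _ in range(NBYTES // len(buf)):
            cli.sendall(buf)
    t.join()
    srv.close()
    secs = time.time() - start
    print("%d bytes in %.3f s, %.1f MB/s" % (received[0], secs,
                                             received[0] / secs / 1e6))
    return received[0]
```

Over the wire you'd run the receiving half on the desktop and point the
sender at its address, which is basically what nc -l on one end and
dd if=/dev/zero piped into nc on the other would do.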





On Sat, Mar 12, 2016 at 6:00 PM, Mike <[email protected]> wrote:

> On 03/12/2016 05:08 PM, William Hermans wrote:
>
> Hey Wally,
>
> I don't think TCP/IP would be all that much slower than UDP when
> transmitting data over the wire on this board. The Beaglebone's ethernet is
> incredibly fast for 10/100 fast ethernet. What's more, yes, by comparison
> TCP connections can have more latency than UDP connections, but latency
> does not usually matter. What matters most of the time is bandwidth. So the
> connection speed could be the same; the data may just arrive a few
> milliseconds later.
>
> But, I'll have to devise some sort of test using netcat to see what's
> really up with netcat. That should not be too hard to do. I can say that
> NFS comes really close to the interface's maximum theoretical speed. But
> NFS uses neither UDP nor TCP, and if memory serves it operates at layer 2
> on some level . . .
>
>
> You might want to look at the NFS docs a bit more.  NFS has used UDP
> forever, TCP was only added in version 4 if memory serves, and it had to be
> specifically enabled.
>
> Mike
>
>
> On Sat, Mar 12, 2016 at 12:57 PM, Wally Bkg <[email protected]> wrote:
>
>> TCP/IP connection will guarantee no data loss at the cost of possibly
>> greatly increased latency.  I've done such in the past using UDP which, if
>> client and server are on the same subnet, is about as deterministic as
>> standard Ethernet gets, but is generally blocked by default on most
>> firewalls.  If the data leaves your subnet you will need error correction
>> and likely just end up reinventing TCP badly.
>>
>> I've done pseudo-simultaneous sampling systems (all the A/D channels are
>> sampled at the maximum rate once per tick of a slower timer that sets the
>> system sampling rate) using this method with the server sending each
>> multi-channel sample to a different system via UDP for further processing
>> at every tick of the sample clock.  It works very well when the systems are
>> on the same subnet and the data rate is not too high compared to the
>> network overhead.
>>
>> If your data rate is low enough the "file system" based solutions might
>> be easier to code and troubleshoot, but you have extra overhead from the
>> network transport and the file system layer to contend with.  IMHO the
>> easiest to code, troubleshoot, and document architecture is the way to go
>> as long as all requirements can be met.
>>
>> The UDP client/server solution can be pretty fast -- I've controlled a
>> 6-DOF motion base (Stewart Platform) with a 100Hz servo loop where one
>> system calculated the desired motion profile from user input while another
>> did the matrix calculations to set the actuator link lengths for the
>> desired motion, and a third processed the visual scene presented to the
>> user, all talking via UDP on an isolated network (only these three systems
>> were connected).
>>
>>
>>
>> On Friday, March 11, 2016 at 11:22:18 AM UTC-6, Dhanesh Kothari wrote:
>>>
>>> Thank you @Wally and @William.
>>> My goal is to send a continuous data stream from my system, and my
>>> beaglebone should receive the data serially and then process it as per
>>> my algorithm without any data loss.
>>> We are using sshfs to mount a directory on beaglebone to our system.
>>>
>> --
>> For more options, visit http://beagleboard.org/discuss
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "BeagleBoard" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>
