Hi Graeme,

I tested the patch and it seems to be working.

Thanks a lot for your help!

Nir.

From: Graeme Foot <graeme.f...@touchcut.com>
Sent: Thursday, May 6, 2021 4:39 AM
To: Geller, Nir <nir.gel...@servotronix.com>; Gavin Lambert 
<gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org
Subject: RE: [Etherlab-users] Running a large number of slaves

Hi Nir,

I had a bit more of a look.  The "if (fmmu->logical_domain_offset >= 
candidate_start)" condition ratchets up candidate_start until the FMMU no 
longer overlaps any previous FMMU.

However, the test as to whether the FMMU no longer fits in the current datagram 
should occur for all FMMUs.

I have attached a patch that moves the size check outside of the above 
condition.  In doing so I have also changed datagram_first_fmmu to be set to 
valid_fmmu.

I have also added a number of checks to ensure that the combined size of 
overlapped FMMUs, which need to be handled as a block, does not exceed the 
maximum datagram size.

Please note that I haven’t checked to see if it compiles.
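
To make the intent easier to follow, here is a small stand-alone toy model of 
the idea (this is not the attached patch and not the master's actual loop; the 
variable names, the 1486-byte EC_MAX_DATA_SIZE and the FMMU layout are taken 
from this thread or assumed purely for illustration):

#include <stdio.h>

#define EC_MAX_DATA_SIZE 1486   /* usual per-datagram payload limit (assumed) */

struct toy_fmmu { unsigned int offset, size; };

int main(void)
{
    /* Nir's example below: 1400 bytes of earlier slaves, then one slave whose
     * overlapping Rx (30 bytes) and Tx (150 bytes) FMMUs both start at 1400. */
    struct toy_fmmu fmmus[] = { { 0, 1400 }, { 1400, 30 }, { 1400, 150 } };
    unsigned int n = sizeof(fmmus) / sizeof(fmmus[0]);
    unsigned int datagram_offset = 0;  /* start of the datagram being filled */
    unsigned int candidate_start = 0;  /* end of all data seen so far */
    unsigned int valid_start = 0;      /* last boundary a datagram may end at */
    unsigned int domain_size = 0, i;

    for (i = 0; i < n; i++) {
        if (fmmus[i].offset >= candidate_start) {
            /* FMMU does not overlap earlier data, so its start is a valid
             * place to cut a datagram. */
            valid_start = candidate_start;
            candidate_start = fmmus[i].offset + fmmus[i].size;
        }
        /* The size check now runs for EVERY FMMU, overlapped or not.
         * (Overlapped blocks bigger than EC_MAX_DATA_SIZE are not handled
         * here; that is what the extra checks in the patch are about.) */
        if (fmmus[i].offset + fmmus[i].size - datagram_offset > EC_MAX_DATA_SIZE) {
            printf("datagram at offset %u, %u bytes\n",
                   datagram_offset, valid_start - datagram_offset);
            datagram_offset = valid_start;
        }
        if (fmmus[i].offset + fmmus[i].size > domain_size)
            domain_size = fmmus[i].offset + fmmus[i].size;
    }
    printf("datagram at offset %u, %u bytes\n",
           datagram_offset, domain_size - datagram_offset);
    return 0;
}

This prints a 1400-byte datagram followed by a 150-byte one; with the size 
check kept inside the overlap condition, the Tx FMMU is never tested and 
everything ends up in a single 1550-byte datagram.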

Regards,
Graeme.

From: Graeme Foot
Sent: Thursday, 6 May 2021 10:49
To: 'Geller, Nir' <nir.gel...@servotronix.com>; Gavin Lambert 
<gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org
Subject: RE: [Etherlab-users] Running a large number of slaves


Hi Nir,



I don't use overlapping PDOs and don't know the constraints on them, but 
looking at the code, if you remove the "if (fmmu->logical_domain_offset >= 
candidate_start)" condition there will be a couple of problems.



1) When creating the first datagram the "emplace_datagram()" call will receive 
an incorrect data size.  The data size is calculated as "valid_start - 
datagram_offset".  "valid_start" for the current FMMU assumes it lies at the 
end of the previous FMMU, but due to the overlap it is still at the start of 
the previous FMMU.  On the other hand, it looks like the datagram's working 
counter will include counts for the first FMMU that are then missing from the 
second datagram.  The domain working counter should, however, be correct.



2) It looks like a slave with more than 2 overlapping FMMUs can stack its tx 
and rx data separately.  If a datagram fills up after the first 2 FMMUs, the 
data will be split between two datagrams, which probably won't work at all.  
E.g. if FMMU 3 won't fit, FMMU 1 and part of FMMU 2 will end up in the first 
datagram, and the rest of FMMU 2 plus FMMUs 3 and 4 will end up in the second 
datagram:

rx: ---1---- ---3---

tx: ------2----- ---4----



(I have come across slaves with more than 2 FMMUs, but don't know if they can 
also be overlapped.)



So, I don’t think this is the correct fix.





There is a "TODO overlapping PDOs" section in slave_config.c that looks like a 
thought on how to deal with the problem, but it appears to be trying to align 
pairs of rx/tx data rather than allowing the layout in example 2 above.



I suspect the answer will be for "ec_domain_finish()" to refer back to the 
slave and, if it has overlapping PDOs, make sure all PDOs from that slave end 
up in the same datagram.  I'm out of time to look further at the moment.



Regards,

Graeme.





-----Original Message-----
From: Etherlab-users <etherlab-users-boun...@etherlab.org> On Behalf Of 
Geller, Nir
Sent: Wednesday, 5 May 2021 21:20
To: Gavin Lambert <gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org
Subject: Re: [Etherlab-users] Running a large number of slaves



I think I got caught in a very specific situation that isn't handled properly 
by the code.



I will try to describe it more accurately.



The last slave in the topology supports overlapping PDOs, therefore its Rx and 
Tx FMMUs have the same logical_domain_offset value.

Suppose that up to the last slave the accumulated domain size is 1400 bytes, 
the last slave's Rx PDO size is 30 bytes, and its Tx PDO size is 150 bytes.



The last slave's Rx FMMU is handled first by the loop.

The condition

if (fmmu->logical_domain_offset + fmmu->data_size - datagram_offset > (EC_MAX_DATA_SIZE))

is not met, and candidate_start is set to the value of the Rx FMMU's 
logical_domain_offset.



The Tx FMMU is handled second by the loop, but this time the condition if 
(fmmu->logical_domain_offset >= candidate_start) is not met.



Since this is the last slave in the topology, the loop ends, and only one LRW 
datagram, larger than 1500 bytes, is created by the final check 
"if (domain->data_size > datagram_offset)".
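
(Putting numbers on it, and assuming the usual 1486-byte EC_MAX_DATA_SIZE: the 
domain ends at 1400 + max(30, 150) = 1550 bytes, which exceeds both 1486 and 
the 1500-byte Ethernet payload, so a split is needed; but the size check is 
only reached for FMMUs that pass the candidate_start test, so it never 
triggers.)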



Completely removing the condition

if (fmmu->logical_domain_offset >= candidate_start)



fixes the error, but I'm not sure if it is safe.

What do you think?



Thanks,



Nir.





-----Original Message-----

From: Gavin Lambert <gavin.lamb...@tomra.com>

Sent: Wednesday, May 5, 2021 1:25 AM

To: Geller, Nir <nir.gel...@servotronix.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: RE: [Etherlab-users] Running a large number of slaves



I've never used overlapped PDOs myself, but something doesn't sound right in 
your description below.



As I understand it, overlapping only supports overlapping the input and output 
FMMUs on the same slave; you can never overlap data from different slaves 
(otherwise they would corrupt the data).  As such, while the 
logical_domain_offset of the input and output of one slave might be the same, 
afterwards it should be incremented by the max of both and the next slave 
should always have a strictly higher value.  (That appears to agree with the 
log message output.)
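
(Checking that against the log further down: each overlapping slave advances 
the offset by max(68, 241) = 241 bytes, giving 0, 241, 482, 723, 964, 1205 and 
1446, and the non-overlapping slave's two FMMUs then follow at 1687 and 
1687 + 35 = 1722.)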







Gavin Lambert

Senior Software Developer










From: Geller, Nir <nir.gel...@servotronix.com>

Sent: Wednesday, 5 May 2021 2:27 am

To: Geller, Nir <nir.gel...@servotronix.com>; Gavin Lambert 
<gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: RE: [Etherlab-users] Running a large number of slaves



Hi,



I've made some progress and now I can get all the slaves to OP and exchange 
data with the master.

Fragmentation is working properly and can be seen clearly in a Wireshark 
capture.



I have 7 slaves that support overlapping PDOs with PDO domains of 68 and 241 
bytes, and an 8th slave that doesn't support overlapping PDOs with PDO domains 
of 35 and 33 bytes.

In dmesg, coming out of domain.c, ec_domain_add_fmmu_config():



[   78.299113] Domain 0: fmmu f08d0924 Added 68 bytes at 0.

[   78.306251] Domain 0: fmmu f08d0944 Added 241 bytes at 0.

[   78.317895] Domain 0: fmmu f08d1124 Added 68 bytes at 241.

[   78.325476] Domain 0: fmmu f08d1144 Added 241 bytes at 241.

[   78.337518] Domain 0: fmmu f08d1924 Added 68 bytes at 482.

[   78.345279] Domain 0: fmmu f08d1944 Added 241 bytes at 482.

[   78.357735] Domain 0: fmmu f08d2124 Added 68 bytes at 723.

[   78.365580] Domain 0: fmmu f08d2144 Added 241 bytes at 723.

[   78.378337] Domain 0: fmmu f08d2924 Added 68 bytes at 964.

[   78.386187] Domain 0: fmmu f08d2944 Added 241 bytes at 964.

[   78.403592] Domain 0: fmmu f08d3124 Added 68 bytes at 1205.

[   78.411901] Domain 0: fmmu f08d3144 Added 241 bytes at 1205.

[   78.425531] Domain 0: fmmu f08d3924 Added 68 bytes at 1446.

[   78.433643] Domain 0: fmmu f08d3944 Added 241 bytes at 1446.

[   78.447590] Domain 0: fmmu f08d0124 Added 35 bytes at 1687.

[   78.454539] Domain 0: fmmu f08d0144 Added 33 bytes at 1722.



So in the case of overlapping PDOs, both Rx and Tx FMMUs have equal 
logical_domain_offset values. If a slave does not support overlapping PDOs, the 
FMMUs' logical_domain_offset values differ.



Then in domain.c, ec_domain_finish(), in the loop

list_for_each_entry(fmmu, &domain->fmmu_configs, list)

the condition "if (fmmu->logical_domain_offset >= candidate_start)" is met when 
a slave's first FMMU is evaluated, and fmmu->data_size is then added to 
candidate_start.

The condition is never met for the slave's second FMMU, which bears the same 
logical_domain_offset value as the prior FMMU.



Completely removing the condition



if (fmmu->logical_domain_offset >= candidate_start)



fixed the problem for me, though I'm not sure whether this code change is the 
best way to solve the problem.

Please share if you have any thoughts.



Thanks a lot for your support,



Nir.



From: Etherlab-users <etherlab-users-boun...@etherlab.org> On Behalf Of 
Geller, Nir

Sent: Sunday, April 25, 2021 4:00 PM

To: Gavin Lambert <gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: Re: [Etherlab-users] Running a large number of slaves



Hi Gavin,



Thanks for your reply.



The final system I am developing will run about 80 slaves, each with a PDO 
domain size of ~130 bytes.

I do not have such a number of slaves at my disposal, so I'm trying to somehow 
simulate a PDO domain that exceeds 1486 bytes in order to test the ecat master.



I created 2 slaves, each with a PDO domain of about 920 bytes.



I also asked myself how the data would be split between the Ethernet frames.

(Both slaves support overlapping PDOs).



I expected that 2 LRW datagrams would be created, each carrying the PDO data 
intended for a specific slave and sent in a separate Ethernet frame. Why should 
this be a problem?



The situation is that only 1 LRW datagram is created, with a size of 1830 bytes.

This, of course, is not going to work.



In domain.c, ec_domain_finish(),



The body of the if statement

if (fmmu->logical_domain_offset + fmmu->data_size - datagram_offset > 
EC_MAX_DATA_SIZE) {

is never executed, but

/* Allocate last datagram pair, if data are left (this is also the case if
 * the process data fit into a single datagram) */
if (domain->data_size > datagram_offset) {

is executed only once.



Could you point me more precisely to where in the code a fix should be 
implemented?



Thanks,



Nir.



From: Gavin Lambert <gavin.lamb...@tomra.com>

Sent: Friday, April 23, 2021 1:13 AM

To: Geller, Nir <nir.gel...@servotronix.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: RE: [Etherlab-users] Running a large number of slaves



The EtherCAT bus in general is not designed for that kind of thing.  When you 
have a large PDO domain, it is supposed to be because you have a large number 
of smaller slaves, not a small number of large slaves.



I'm not sure exactly where the cutoff should be, but most likely any single 
slave wanting to exchange more than 300 bytes or so should probably be using 
SDO rather than PDO.  (SDO mailboxes can be configured up to 1484 bytes, 
although that depends on slave implementation too and most only support a much 
smaller size.)  Do you really need that amount of data each and every cycle?



The packet-splitting does work - I have systems where each cycle sends three 
PDO packets - but the largest amount of PDO data in any one slave in my network 
is about 200 bytes.  Most slaves are a lot less.



Gavin Lambert

Senior Software Developer








From: Geller, Nir <nir.gel...@servotronix.com>

Sent: Friday, 23 April 2021 1:39 am

To: Geller, Nir <nir.gel...@servotronix.com>; Gavin Lambert 
<gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: RE: [Etherlab-users] Running a large number of slaves






Hi,



In my setup I have 2 ecat slaves, each with large PDO data of 917 bytes.

An LRW datagram is allocated with a payload size of 1830 bytes:



[   56.206673] EtherCAT DEBUG 0: Adding datagram pair with expected WC 6.

[   56.206690] EtherCAT 0: Domain0: Logical address 0x00000000, 1830 byte, 
expected working counter 6.

[   56.215738] EtherCAT 0:   Datagram domain0-0-main: Logical offset 
0x00000000, 1830 byte, type LRW at f1c6600c.





My suspicion is that in master.c, ec_master_send_datagrams(), the following 
piece of code

    // does the current datagram fit in the frame?
    datagram_size = EC_DATAGRAM_HEADER_SIZE + datagram->data_size
        + EC_DATAGRAM_FOOTER_SIZE;
    if (cur_data - frame_data + datagram_size > ETH_DATA_LEN) {
        more_datagrams_waiting = 1;
        break;
    }

gets stuck in an infinite loop because it can't handle a datagram larger than 
1500 bytes.
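
As a quick sanity check on the numbers (using the usual constants as an 
assumption: a 10-byte datagram header, a 2-byte footer and an ETH_DATA_LEN of 
1500), the 1830-byte datagram can never pass that test, not even in an empty 
frame:

#include <stdio.h>

#define ETH_DATA_LEN            1500  /* Ethernet payload limit */
#define EC_DATAGRAM_HEADER_SIZE 10    /* assumed for illustration */
#define EC_DATAGRAM_FOOTER_SIZE 2     /* working counter */

int main(void)
{
    unsigned int data_size = 1830;   /* the single LRW datagram above */
    unsigned int used_in_frame = 0;  /* cur_data - frame_data for an empty frame */
    unsigned int datagram_size = EC_DATAGRAM_HEADER_SIZE + data_size
        + EC_DATAGRAM_FOOTER_SIZE;   /* 1842 bytes on the wire */

    if (used_in_frame + datagram_size > ETH_DATA_LEN)
        printf("datagram (%u bytes) does not fit even in an empty frame\n",
               datagram_size);
    return 0;
}

If so, the send loop can never make progress on this datagram, which would 
explain it appearing to get stuck.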



Is my assumption correct?



Do you happen to have a code fix for this situation?



Thanks,



Nir.





From: Etherlab-users <etherlab-users-boun...@etherlab.org> On Behalf Of 
Geller, Nir

Sent: Wednesday, April 21, 2021 11:10 AM

To: Gavin Lambert <gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: Re: [Etherlab-users] Running a large number of slaves





Hello again,



I tried running the ethercat master with a large PDO domain, but with no 
success.



Following

https://sourceforge.net/u/uecasm/etherlab-patches/ci/default/tree/#readme

I built the ethercat master with Gavin's patch set.



I'm running on an x86 Intel Atom dual core with Linux kernel 3.18.48. The 
Ethernet adapter is igb.



To achieve a very large PDO volume I created 2 ecat slaves, each with 917 bytes 
of PDO data.



When connecting only 1 slave and running examples/user/ec_user_example, I can 
bring the slave to OP and exchange data between the master and slave over PDO.



When connecting 2 slaves, the start-up process of the ethercat master gets 
stuck after



EtherCAT DEBUG 0-main-0: Checking system time offset.



According to a Wireshark capture, the communication stops completely even 
though the application is still running cyclically.



Can you please help me set up a functional system?



Thanks,



Nir.

________________________________________

From: Etherlab-users <etherlab-users-boun...@etherlab.org> on behalf of 
Geller, Nir <nir.gel...@servotronix.com>

Sent: Wednesday, March 31, 2021 1:48 PM

To: Gavin Lambert <gavin.lamb...@tomra.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: Re: [Etherlab-users] Running a large number of slaves



Hi Gavin,



This sounds promising.



With regard to cyclic real-time performance, does fragmentation work properly 
and efficiently with slaves that support DC?



Thanks,



Nir.



-----Original Message-----

From: Gavin Lambert <gavin.lamb...@tomra.com>

Sent: Tuesday, March 30, 2021 9:40 AM

To: Geller, Nir <nir.gel...@servotronix.com>; Richard Hacker <h...@igh.de>; 
etherlab-users@etherlab.org

Subject: RE: [Etherlab-users] Running a large number of slaves



Yes, it splits into multiple packets automatically.  Just be careful not to use 
more data than your cycle rate will allow.



Note that initialization and configuration of a large number of slaves is very 
slow by default, as it occurs in series.

The unofficial patchset changes this to occur in parallel (for groups at a time 
rather than the whole network, to avoid creating too many packets at once).
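
For reference, the application side looks the same whether one or several 
frames go out per cycle.  Roughly the pattern from the bundled examples (only 
the ecrt_* calls are the real API; the rest is illustrative):

#include <ecrt.h>

/* One cycle of a typical EtherCAT application (sketch; error handling and
 * process data access omitted). */
static void cyclic_task(ec_master_t *master, ec_domain_t *domain)
{
    ecrt_master_receive(master);   /* pick up frames received since last cycle */
    ecrt_domain_process(domain);   /* evaluate datagram states / working counters */

    /* ... read inputs and write outputs in the domain's process data ... */

    ecrt_domain_queue(domain);     /* re-queue the domain's datagrams */
    ecrt_master_send(master);      /* one call; the master packs the queued
                                      datagrams into as many frames as needed */
}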





Gavin Lambert

Senior Software Developer










-----Original Message-----

From: Geller, Nir

Sent: Tuesday, 30 March 2021 1:23 am

To: Richard Hacker <h...@igh.de>; etherlab-users@etherlab.org

Subject: Re: [Etherlab-users] Running a large number of slaves



Hi,



Thanks for your reply.



You mean that in the case of a large amount of PDO data (> 1500 bytes), a 
single invocation of ecrt_master_send(master) will result in several frames 
sent out one after another?



Nir.



-----Original Message-----

From: Etherlab-users <etherlab-users-boun...@etherlab.org> On Behalf Of 
Richard Hacker

Sent: Monday, March 29, 2021 3:09 PM

To: etherlab-users@etherlab.org

Subject: Re: [Etherlab-users] Running a large number of slaves



EtherCAT and the master are not limited to the Ethernet packet size.

EtherCAT frames are automatically divided into smaller Ethernet packets as 
required. As long as you're not exceeding physical limits (like sending ~1.5 kB 
at a rate of 1 kHz), you should be fine.



Physically EtherCAT can address ~64k slaves on a network.



On 2021-03-29 13:22, Geller, Nir wrote:

> Hi There,

>

> I'm trying to setup one ethercat master with a very large number of

> ethercat slaves.

>

> The first obstacle I'm thinking about is a very large amount of data

> sent over PDO each cycle, that will definitely exceed 1500 bytes.

>

> In order to address this issue I want to understand if it is possible

> to send more than one frame each cycle?

>

> Another method could be using jumbo frames. Does the ethercat master

> support that?

>

> Does anybody have practical experience with such a setup?

>

> Thanks,

>

> Nir.

>

>



Kind regards



Richard Hacker



--

------------------------------------------------------------------------



Richard Hacker M.Sc.

richard.hac...@igh.de

Tel.: +49 201 / 36014-16



Ingenieurgemeinschaft IgH

Gesellschaft für Ingenieurleistungen mbH
Nordsternstraße 66

D-45329 Essen



Amtsgericht Essen HRB 11500

USt-Id.-Nr.: DE 174 626 722

Geschäftsführung:

- Dr.-Ing. Siegfried Rotthäuser

- Dr. Sven Beermann, Prokurist

Tel.: +49 201 / 360-14-0

http://www.igh.de/



------------------------------------------------------------------------

-- 
Etherlab-users mailing list
Etherlab-users@etherlab.org
https://lists.etherlab.org/mailman/listinfo/etherlab-users
