Re: [etherlab-users] How to perform DC time synchronisation the right way?

2018-05-16 Thread Michael Ruder
Hi Graeme, hi Gavin,

wow, thanks for the great, detailed and quick replies!

This was extraordinarily helpful!

I like Graeme's suggestion for the cycle very much. I can cope with the 
delayed PDO sending much better than I could cope with a fixed, rather 
long wait in my cycle.

I modified my test program accordingly, including moving the 
application_time call to directly before the send call, and I now get a sync 
error of < 1000 ns (I'd say 200-300 ns RMS) according to 
ecrt_master_sync_monitor_process(). When I look at the 0x92c registers, I 
can see that the reported error is relative to slave 0, while between slave 0 
and slave 1 it is much smaller, around 20-50 ns RMS. Very nice!

I found that ecrt_master_sync_monitor_process() only delivers a "good" 
value if I call it between the master_receive and master_send, so I 
retrieve the value before I send out again (a rough sketch of the loop 
follows the list):

- master receive
- domain process
- save sync monitor value (ecrt_master_sync_monitor_process)
- write cached PDO values
- domain queue
- dc sync
- master send
- perform application calcs (writes to PDO data are cached)
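
In C, the loop then looks roughly like this (a minimal sketch of my test 
program; apply_cached_outputs() and run_application_calcs() are placeholder 
names for my own code, and I am assuming ecrt_master_sync_monitor_queue() 
belongs in the dc sync step so a fresh deviation value is available next 
cycle):

#include <stdint.h>
#include <time.h>
#include <ecrt.h>

extern void apply_cached_outputs(uint8_t *pd);  /* placeholder */
extern void run_application_calcs(void);        /* placeholder */

static void cyclic_task(ec_master_t *master, ec_domain_t *domain1)
{
    struct timespec now;
    uint32_t sync_error_ns;

    ecrt_master_receive(master);
    ecrt_domain_process(domain1);

    /* deviation measured by the monitoring datagram of the previous cycle */
    sync_error_ns = ecrt_master_sync_monitor_process(master);
    (void)sync_error_ns;  /* logged / checked elsewhere */

    /* apply the RxPDO values cached during last cycle's calculations */
    apply_cached_outputs(ecrt_domain_data(domain1));

    ecrt_domain_queue(domain1);

    /* dc sync: set the application time as late as possible, just before send */
    clock_gettime(CLOCK_REALTIME, &now);
    ecrt_master_application_time(master,
        (now.tv_sec - 946684800ULL) * 1000000000ULL + now.tv_nsec);
    ecrt_master_sync_reference_clock(master);
    ecrt_master_sync_slave_clocks(master);
    ecrt_master_sync_monitor_queue(master);  /* queue next deviation datagram */

    ecrt_master_send(master);

    /* remainder of the cycle: calcs, PDO writes go into the cache */
    run_application_calcs();
}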

Is there anything else I need to keep an eye on when doing all the work 
after "master send"? Can I safely assume that all TxPDOs from the drive are 
still valid after the master send, so I can read them during the calculations 
using EC_READ_*? Or should I also save them when I write the cached RxPDOs? 
In my test program it works without caching them, though.

Thanks again a lot for the help!
-- 
-Michael


Re: [etherlab-users] How to perform DC time synchronisation the right way?

2018-05-15 Thread Graeme Foot
Hi,



Firstly, there is no way of checking if the frame transfer is complete.



One comment: to reduce the jitter and offset of the master application time, 
place the clock_gettime(), ecrt_master_application_time(), 
ecrt_master_sync_reference_clock() and ecrt_master_sync_slave_clocks() calls 
after ecrt_domain_queue() (and before ecrt_master_send()).



It doesn't matter how long "do a lot of stuff" takes, as long as the application 
time is set just before the send and your cycle start time stays in sync with 
the initial call to ecrt_master_application_time().
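
In code, the tail of the cycle then looks something like this (a minimal 
sketch; 946684800 is the offset from the Unix epoch to the EtherCAT DC epoch 
of 2000-01-01, and master/domain1 are the usual handles from setup):

struct timespec now;

ecrt_domain_queue(domain1);

/* application time set as late as possible, right before the send */
clock_gettime(CLOCK_REALTIME, &now);
ecrt_master_application_time(master,
    (now.tv_sec - 946684800ULL) * 1000000000ULL + now.tv_nsec);
ecrt_master_sync_reference_clock(master);
ecrt_master_sync_slave_clocks(master);

ecrt_master_send(master);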





Traditional cycle:

- master receive
- domain process
- perform application calcs
- domain queue
- dc sync
- master send



-> as long as the time to master send plus the time on the wire is shorter 
than the sync0 offset, you are all good.
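
For reference, that sync0 offset is the shift time handed to 
ecrt_slave_config_dc() when the slave is configured (sc being the slave 
config returned by ecrt_master_slave_config()); a minimal sketch, where the 
0x0300 assign/activate word and the 400 us shift are example values only and 
depend on the slave:

/* 1 ms sync0 cycle, sync0 event shifted 400 us into the cycle */
ecrt_slave_config_dc(sc, 0x0300, 1000000, 400000, 0, 0);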





Alternate option:

- cycle time 0, wakeup
- domain queue
- dc sync
- master send
- sleep for data over wire time
- wakeup
- master receive
- domain process
- perform application calcs
- sleep



-> you need to determine the appropriate sleep time between the master send and 
receive to allow for software overhead and time on the wire.



Note: You can estimate the time on the wire by using the "ethercat slaves -v" 
command.  Look up the "DC system time transmission delay" of your last slave 
and double it.
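
One way to structure the two wake-ups of the alternate option (a minimal 
sketch, assuming a 1 ms cycle and a guessed 200 us send-to-receive gap; both 
numbers are placeholders to be measured on the real network, and the dc sync 
calls are the same as in the snippets above):

#include <time.h>
#include <ecrt.h>

#define CYCLE_NS     1000000L  /* 1 ms cycle */
#define WIRE_WAIT_NS  200000L  /* send -> receive gap, measure and tune */

static void add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

static void alternate_cycle(ec_master_t *master, ec_domain_t *domain1,
                            volatile int *running)
{
    struct timespec wake;

    clock_gettime(CLOCK_MONOTONIC, &wake);
    while (*running) {
        /* cycle time 0: queue, dc sync, send */
        ecrt_domain_queue(domain1);
        /* ... application time and sync calls go here ... */
        ecrt_master_send(master);

        /* sleep while the frame is on the wire */
        add_ns(&wake, WIRE_WAIT_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wake, NULL);

        ecrt_master_receive(master);
        ecrt_domain_process(domain1);
        /* ... perform application calcs ... */

        /* sleep until the start of the next cycle */
        add_ns(&wake, CYCLE_NS - WIRE_WAIT_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wake, NULL);
    }
}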





I do something a little different.  My cycle:

- master receive
- domain process
- write cached PDO values
- domain queue
- dc sync
- master send
- perform application calcs (writes to PDO data are cached)



This has the advantage of a very short turnaround with minimal jitter between 
the receive and send.  It allows nearly a whole cycle for the data to be on the 
wire.  It also allows for up to the remainder of that cycle to be used for 
application calculations, in parallel to the data being on the wire.  The 
drawbacks are:

- The domain process step will overwrite any PDO data changes you have made 
while performing your application calcs, so you need to cache your changes 
somewhere else and then apply them after the domain process step (see the 
sketch after this list)

- You add 1 extra cycle delay for the PDO data being read.
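
A minimal sketch of that caching (the struct layout, the control word / 
target position entries and the offset variables are just examples; the 
offsets come from ecrt_slave_config_reg_pdo_entry() as usual, and pd is 
ecrt_domain_data() of the domain):

#include <stdint.h>
#include <ecrt.h>

/* shadow copy of the outputs, written by the application calcs at any time */
static struct {
    uint16_t control_word;
    int32_t  target_position;
} out_cache;

/* called after ecrt_domain_process() and before ecrt_domain_queue() */
static void apply_cached_outputs(uint8_t *pd, unsigned int off_ctrl,
                                 unsigned int off_target)
{
    EC_WRITE_U16(pd + off_ctrl,   out_cache.control_word);
    EC_WRITE_S32(pd + off_target, out_cache.target_position);
}

The application calcs then only ever touch out_cache, never the domain memory 
directly, so the next domain process step cannot overwrite anything that has 
not been sent yet.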



However, your cycle time can generally also be reduced by an amount since the 
"app calc time" and "data on the wire time" are now in parallel.



The traditional cycle takes around three cycles between writing data and 
receiving its results (3 * 1 ms = 3 ms turnaround).

My cycle time can often be reduced to half of the traditional cycle or less.  
Even though my approach has the extra cycle of overhead, it still has a better 
turnaround (4 * 0.5 ms = 2 ms).



I personally don't reduce my cycle time (I keep it at 1 ms) as I'm happy with 
the extra cycle delay, and some of our controllers can have quite a large 
calculation overhead.





Just some info on timing from one of our controllers (around 55 slaves):

- 30 us to perform master receive through to master send
- 150 us to perform application calcs





One last comment.  Assuming a linear topology, an EtherCAT frame is sent out 
through all of the slaves and, once it reaches the last slave, it returns back 
through all of the slaves.  The outgoing and returning passes should take a 
similar time.  Only the outgoing data needs to arrive before the Sync0 event.  
So with my method you can allow nearly the whole cycle for the data to be on 
the wire, as long as your Sync0 events are configured to be after the cycle 
half time.  If it's not a linear topology, this does not apply.





Regards,

Graeme.





-Original Message-

From: etherlab-users [mailto:etherlab-users-boun...@etherlab.org] On Behalf Of 
Michael Ruder

Sent: Wednesday, 16 May 2018 4:34 AM

To: etherlab-users@etherlab.org

Subject: [etherlab-users] How to perform DC time synchronisation the right way?



Hello,



I am progressing quite well with EtherLab and am currently working on 
synchronizing outputs/movement with the Master time. We are using the Master 
1.5.2 from the 1.5.2 branch, ec_generic driver with PREEMPT RT (kernel 4.14.28).



In our application, we need to be synchronized to the real time (UTC). We use a 
GPS receiver and Chrony to synchronize our PC clock to within a few 
microseconds.



Now I want to have the slaves also synchronized to this time frame and have the 
following dilemma:



- normally, I would like to call



// cycle begins

ecrt_master_receive(master);
ecrt_domain_process(domain1);

// do a lot of stuff

clock_gettime(CLOCK_REALTIME, &time);

// 946684800 s is the offset from the Unix epoch to the EtherCAT DC
// epoch (2000-01-01); the result is the application time in ns
ecrt_master_application_time(master, ((time.tv_sec - 946684800ULL) *
1000000000ULL + time.tv_nsec));

ecrt_master_sync_reference_clock(master);
ecrt_master_sync_slave_clocks(master);

ecrt_domain_queue(domain1);
ecrt_master_send(master);

// cycle ends, wait for next cycle



However, as the "lot of stuff" takes different amounts of time, this seems to 
be not good, as this means a few hundred microseconds jitter as to when the 
application time is set in our (1 ms long) cycle.



- therefore, I 

Re: [etherlab-users] How to perform DC time synchronisation the right way?

2018-05-15 Thread Gavin Lambert
On 16 May 2018 4:34 a.m., quoth Michael Ruder:
> Now I want to have the slaves also synchronized to this time frame and have
> the following dilemma:
> 
> - normally, I would like to call
> 
> // cycle begins
> 
> ecrt_master_receive(master);
> ecrt_domain_process(domain1);
> 
> // do a lot of stuff
> 
> clock_gettime(CLOCK_REALTIME, &time);
> ecrt_master_application_time(master, ((time.tv_sec - 946684800ULL) *
> 1000000000ULL + time.tv_nsec));
> 
> ecrt_master_sync_reference_clock(master);
> ecrt_master_sync_slave_clocks(master);
> 
> ecrt_domain_queue(domain1);
> ecrt_master_send(master);
> 
> // cycle ends, wait for next cycle
> 
> However, as the "lot of stuff" takes different amounts of time, this seems to
> be not good, as this means a few hundred microseconds jitter as to when the
> application time is set in our (1 ms long) cycle.

While it's true that this produces jitter in terms of when the packets actually 
go through, with the code above it won't adversely affect the slave clocks (or 
at least not much worse than the accuracy of CLOCK_REALTIME).

Technically speaking, the "best" (most accurate you can) time to call 
ecrt_master_application_time is immediately prior to ecrt_master_send; this is 
the call that actually kicks off the datagram transmission and actually sends 
the values to the slaves.  (All of the prior methods, including 
ecrt_master_sync_reference_clock, merely request that the datagram be sent on 
the next ecrt_master_send, without actually capturing the specific data.)

Thus in most cases it makes sense to use the above pattern (perhaps moving 
your ecrt_master_application_time call later, closer to ecrt_master_send, to 
make the time accuracy slightly less dependent on your domain size), since, as 
you pointed out in the rest of your message, the minimum delay between 
ecrt_master_send and ecrt_master_receive is unknown, and you might as well let 
the CPU relax and do other work during this time.

For especially fast RT cycles or for slaves that are highly sensitive to output 
jitter it can sometimes make sense to use an altered cycle pattern like you 
suggested, although this has the caveat of either busy-waiting or imprecise 
waiting between the two calls.
