Hello Gavin,
Thank you very much for your explanations. They made things much clearer to me!
Regarding the Sync0 phase between the slaves and my master, I forgot to mention that 
I'm using a shift time of -150 microseconds. My cycle time is 500 microseconds.
As far as I understand, the slaves should trigger the data exchange and start 
their own PLC cycle 150 microseconds before my master cycle starts.
When I read the input data at the beginning of my cycle, I should get the slave 
I/O data from 150 microseconds earlier. Is that right?
In contrast, the DC slave reference clock time will be the time when the 
telegram passed the reference clock slave (minus the transmission time). Is it 
right that calling the "queue" commands just triggers the desired operation for 
the next "send" command?
So that means I have to compare the time I get from the reference clock with my 
master application time from the moment I called the send function. OK?

I'll rework my calculation of the application time because I think it's wrong.
I'm using a servo drive in cyclic synchronous torque mode, but very rarely I get 
a wrong actual position from the slave. This happens when the jitter for 
starting my communication cycle is 15 microseconds higher than normal.
Actually, the jitter shouldn't matter that much, because I request the data 
triggered by Sync0 150 microseconds earlier...

Have a nice weekend!

From: Gavin Lambert [mailto:gav...@compacsort.com]
Sent: Friday, 2 February 2018 00:22
To: Matthias Bartsch; etherlab-users@etherlab.org
Subject: RE: Synchronizing the EtherCAT application time to the DC reference 
clock

Actually you typically shouldn't have Sync0 simultaneous with your 
communication cycle; that causes problems.  The goal is to get it into a locked 
phase arrangement.

Sync0 is typically when the slave's actions trigger - it asserts outputs and 
captures inputs ideally both at that precise instant (though there might be 
some delay if it needs to do one before the other).  However there is generally 
some setup time before Sync0 required (so you have to provide the next cycle's 
outputs at least this amount of time before Sync0) and some transfer time after 
Sync0 required (so you have to wait that long after Sync0 before you can read 
the inputs).  The slave's documentation should tell you how long each of these 
times are, and you need to allow a little bit of extra time to cope with jitter 
on the master's end and the comms delay of the network itself.  If you have no 
clue, aiming for somewhere in the middle of your sync cycle is usually a fairly 
safe bet.
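As a concrete illustration, the -150 µs shift with a 500 µs cycle mentioned in this thread would typically be configured with ecrt_slave_config_dc. This is only a sketch against the IgH ecrt API; the AssignActivate word 0x0300 is slave-specific and merely an assumed example here:

```c
#include "ecrt.h"  /* IgH EtherCAT master header */

/* Sketch: enable DC SYNC0 on one slave with a 500 us cycle and a
 * -150 us shift, matching the numbers in this thread.  The
 * AssignActivate word 0x0300 is slave-specific and only an assumed
 * example -- take the real value from the slave's ESI file. */
static void configure_dc(ec_slave_config_t *sc)
{
    ecrt_slave_config_dc(sc,
                         0x0300,   /* AssignActivate (slave-specific) */
                         500000,   /* SYNC0 cycle time [ns] */
                         -150000,  /* SYNC0 shift time [ns] */
                         0, 0);    /* SYNC1 unused */
}
```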

The most important aspect of the EtherCAT comms cycle is when you call 
ecrt_master_send.  This is what actually sends (and receives back) the 
datagrams and transfers all data to and from the slaves.  Your goal is always 
to make this call happen consistently with as little latency and jitter as 
possible.  None of the other calls matter in terms of timing.

ecrt_master_reference_clock_time retrieves the 32-bit time of the reference 
slave as of when you called ecrt_master_send, provided that you called 
ecrt_master_sync_slave_clocks at some previous point (each cycle).
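Because that value is only 32 bits, it wraps roughly every 4.3 seconds, so comparisons against it must be done in signed 32-bit arithmetic. A minimal sketch (dc_time_diff is a hypothetical helper name, not part of the ecrt API):

```c
#include <stdint.h>

/* Hypothetical helper (not part of the ecrt API): difference between
 * two 32-bit DC times in nanoseconds.  Casting the unsigned
 * subtraction to int32_t makes the ~4.3 s wraparound come out right. */
static int32_t dc_time_diff(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b);
}
```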

ecrt_master_64bit_reference_clock_time retrieves the 64-bit time of the 
reference slave as of when you called ecrt_master_send, provided that you 
called ecrt_master_64bit_reference_clock_time_queue at some previous point 
(each cycle).

(In principle doing both ecrt_master_64bit_reference_clock_time_queue and 
ecrt_master_sync_slave_clocks on every cycle is a bit wasteful.  It would be 
better if you could just do a 64-bit sync, but this is not supported at 
present.  If you're worried about bandwidth then you should call 
ecrt_master_sync_slave_clocks on every cycle and 
ecrt_master_64bit_reference_clock_time_queue only occasionally.  You cannot 
omit calling ecrt_master_sync_slave_clocks.)

For maximum consistency, you should call ecrt_master_application_time 
immediately prior to ecrt_master_send.  You should also call it on each cycle, 
although in practice it matters most when it's doing the slave DC 
configuration, which will span several cycles across the slaves shortly after 
activating the master.  Also rather than using a monotonically increasing value 
as it appears you're using at the moment, I think it's more typical to use the 
actual PC clock time, corrected for the offset between the master clock and the 
reference clock.  I'm not sure which is "better" though.

(Also note that most things work best when you always use the first DC-capable 
slave as the reference clock, which is the default.  If you're explicitly 
designating a clock elsewhere on the network then there might be complications.)

If you are trying to synchronise the reference clock to the master (which you 
are not; it tends to be less accurate), you also have to call 
ecrt_master_sync_reference_clock between ecrt_master_application_time and 
ecrt_master_send.  It doesn't matter when you call 
ecrt_master_sync_slave_clocks as long as you do it periodically (typically 
recommended once per cycle, unless you have a really fast cycle time).  If 
you're using ecrt_master_reference_clock_time then you have to sync slave 
clocks at least as often as you ask for the time.

ecrt_master_receive can be called at any time after the packets arrive back, 
which will be shortly after ecrt_master_send (exactly how long depends on your 
network size) - but it's most common to use this time to do a proper idle sleep 
and process it at the start of the next cycle rather than the end of the 
previous one.  That can cause higher jitter if your processing times aren't 
consistent, however (unless you have two smaller sleeps rather than one large 
one), so the way you're doing it isn't a bad one.  You do have to call it 
before anything else that uses the results of the datagrams, such as the time 
calls above.
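Putting the ordering advice above together, one cycle might look roughly like this. It is only a sketch: master/domain setup, error handling and the process-data logic are omitted, and the 64-bit clock calls come from the unofficial patch set mentioned below, not the stock ecrt API:

```c
#include "ecrt.h"  /* IgH EtherCAT master header */

/* Sketch of one communication cycle in the recommended order. */
static void one_cycle(ec_master_t *master, ec_domain_t *domain,
                      uint64_t *app_time_ns, uint64_t cycle_ns)
{
    /* start of cycle: fetch last cycle's datagrams first */
    ecrt_master_receive(master);
    ecrt_domain_process(domain);

    /* the time queued before the previous send is now valid */
    uint64_t ref_time_ns;
    ecrt_master_64bit_reference_clock_time(master, &ref_time_ns);

    /* ... application logic: read inputs, compute outputs ... */

    ecrt_domain_queue(domain);

    /* arm the clock datagrams for the next send */
    ecrt_master_sync_slave_clocks(master);
    ecrt_master_64bit_reference_clock_time_queue(master);

    /* set the application time immediately before sending */
    *app_time_ns += cycle_ns;
    ecrt_master_application_time(master, *app_time_ns);
    ecrt_master_send(master);
}
```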

If you can hook a scope up to a slave's SYNC0 and SOF pins (if accessible), 
that will give you the best idea of how your timing cycle looks.

From: Matthias Bartsch
Sent: Friday, 2 February 2018 05:12
To: etherlab-users@etherlab.org
Subject: [etherlab-users] Synchronizing the EtherCAT application time to the DC 
reference clock

Hello everybody!
I'm using the unofficial patch set from https://github.com/ribalda/ethercat 
My RTAI communication cycle is synchronized to the DC slave reference clock 
(average jitter < 1µs). I need to extrapolate the position of servo drives to 
the beginning of my cycle.

I'm not sure about the right use of the functions for reading the reference 
clock time. I want to start my cycle at the time of the "Sync0" interrupt.
My questions are: When is the time sampled that I get by calling
ecrt_master_64bit_reference_clock_time(m_poMaster, &ui64RefClockTime)?

How do I have to initialize my application time (first call of 
ecrt_master_application_time)?
My synchronisation seems to work, but I'm not sure about the phase shift between 
the Sync0 event and the start of my communication cycle.

My code looks something like this:

In the first real time cycle I call:
ecrt_master_application_time(m_poMaster, m_ui64AppTime_ns);
// busy wait 25µs for getting the answer
// .... Later sending the output data
In the next cycles I do this:
// busy wait 25µs for getting the answer

uint64_t ui64RefClockTime;
ecrt_master_64bit_reference_clock_time(m_poMaster, &ui64RefClockTime);
m_ui64AppTime_ns += static_cast<uint64_t>((m_io__dNominalCycleTime) * 1e9);
ecrt_master_application_time(m_poMaster, m_ui64AppTime_ns);
m_out_iDcSysTimeDifferenceMaster = m_ui64AppTime_ns - ui64RefClockTime;  // time drift value

With kind regards

Matthias Bartsch
