Zhang Yuchen wrote:
 > In other words: Multicasting? Or where is the difference?

Yes, you can also call it multicasting. But IMO, IP multicasting on
Ethernet is technically different from multicasting on Firewire. As far
as I know, in IP/Ethernet a node needs to register itself in a certain
multicast group if it wants to receive the packets of that group, while in
Firewire iso the listening node only needs to tune its hardware, so it is
more "radio-like" :-).


No difference so far: For Firewire, you select one or more iso channels. With Ethernet, you set up your NIC to listen to specific additional MAC addresses. The effect is the same: the data is on the medium, the hardware just tunes it in.
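
For reference, the receiver-side registration on the IP/Ethernet side is the standard BSD membership option. A minimal sketch (the group address is just an example value, and this is plain BSD sockets, not the RTnet API):

/* Standard BSD multicast join: the socket registers for a group, and the
 * stack programs the NIC's multicast filter with the matching MAC address. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int join_group(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ip_mreq mreq;

    if (fd < 0)
        return -1;

    memset(&mreq, 0, sizeof(mreq));
    mreq.imr_multiaddr.s_addr = inet_addr("224.0.23.1"); /* example group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);       /* any local interface */

    /* After this call, packets sent to the group are passed up to us. */
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0)
        return -1;
    return fd;
}

On Firewire the corresponding step would simply be selecting the iso channel number for the receive context.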


So much for the Ethernet low-level world. Firewire seems to have a clean
model by allowing those channels to be filtered in hardware, doesn't it? And
how is the periodic transmission managed in Firewire? Does the driver/stack
have to interact for each isochronous packet, or only if anything other than
a linear memory stream is transmitted? I.e. what will you have to do to
always transmit the same memory region (e.g. some process states), and how
do you synchronise any access to this memory?


[Mmh, badly formatted quotations - you should try a different mail client... ;)]

Yes, the Firewire multicasting is more convenient. The period is 125us,
which means the "bus" on each channel goes every 125us, so the application
can have a maximum frequency of 8 kHz. I don't really understand the
synchronization problem you mentioned. My idea is that the application
should be totally decoupled from the lower layer, e.g. the Packet Management
Module. All the application does is deliver its data to the PM Module and/or
wait for the data from other nodes. The PM Module is only responsible for
indexing the data from the applications, if there is any, like a bus stop.
But the bus still goes every 125us even if there is no passenger. So in the
case of process state monitoring, it is the application's responsibility to
deliver the state data to the PM Module in time, i.e. before the bus goes.
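
To make the "bus stop" picture concrete, here is a minimal sketch of what such an application-side loop could look like. pm_submit() is purely a placeholder name for whatever the PM Module will export, and the state structure and the 125us period are example values:

/* Sketch of the "deliver before the bus leaves" model. pm_submit() is a
 * placeholder for the future PM Module interface; the 125 us period
 * matches the Firewire cycle timer. */
#define _POSIX_C_SOURCE 200112L
#include <stddef.h>
#include <stdint.h>
#include <time.h>

#define CYCLE_NS (125 * 1000)          /* 125 us Firewire iso cycle */

struct station_state {                 /* example process data */
    uint32_t actuator_setpoint;
    uint32_t sensor_value;
};

/* hypothetical: queue one payload for the next iso cycle */
extern int pm_submit(const void *data, size_t len);

void control_task(void)
{
    struct station_state state = { 0, 0 };
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        /* ... update 'state' from I/O or control calculations ... */

        /* hand the data over before the next cycle starts; if nothing
         * is submitted, the bus still goes, just without our payload */
        pm_submit(&state, sizeof(state));

        next.tv_nsec += CYCLE_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}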


Well, for me the need for a new API depends on what extra features Firewire may give us compared to multicasting. With respect to packet-oriented multicasting, IP already comes with such an interface; it "only" has to be added to the RTnet core. If Firewire also manages periodic re-transmission of memory regions in hardware, one may think about a fitting API again.

[end of my text]
IP multicasting is different from Firewire multicasting, so I am wondering
if we can use the same BSD interface for Firewire. A possible way is to keep
the naming of the new APIs the same as in BSD, but change the internals.
Again, I don't really understand the "re-transmission of memory regions in
hardware"; could you explain more?
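
Just to illustrate the "same naming, different internals" idea: a purely hypothetical sketch of a channel membership option that keeps the setsockopt() shape. None of the names below (SOL_RTFW, RTFW_ADD_CHANNEL, struct rtfw_mreq) exist in RTnet or anywhere else; they are invented for this sketch only:

#include <stdint.h>
#include <sys/socket.h>

/* made-up types and constants, BSD-style in shape only */
struct rtfw_mreq {
    uint8_t channel;    /* iso channel 0..63 to listen on */
    uint8_t speed;      /* transfer speed code, e.g. S400 */
};

#define SOL_RTFW         0x1394    /* made-up socket level */
#define RTFW_ADD_CHANNEL 1         /* made-up option, analogous to IP_ADD_MEMBERSHIP */

/* Same calling convention as the BSD multicast join, but the "membership"
 * is an iso channel; internally the stack would tune the 1394 receive
 * context instead of programming a MAC filter. */
static int rtfw_join_channel(int fd, uint8_t channel)
{
    struct rtfw_mreq mreq = { .channel = channel, .speed = 2 };

    return setsockopt(fd, SOL_RTFW, RTFW_ADD_CHANNEL, &mreq, sizeof(mreq));
}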


I think Firewire has been designed foremost to transmit data streams. You pick some megabyte-sized data block and say to the hardware: "Go and transmit this for me in an iso channel!" The hardware will then stream the buffer from beginning to end, which is quite useful for any multimedia application with constant bandwidth requirements.

But in the process control scenario, you only have a small buffer reflecting the current local status of a station. This buffer gets updated by the real-time application upon some other I/O activity or as the result of periodic control calculations. So the question is whether I have to re-program the Firewire controller each cycle to take the same buffer again. If this can be done automatically, the question arises how to synchronise the application's and the controller's access to the buffer. If the hardware has to be reprogrammed anyway (e.g. with the buffer start offset), one would have packet-oriented transmission again, but this time with hardware-enforced bandwidth reservation (compared to TDMA's software-based bandwidth management).
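
One conceivable answer to the synchronisation question, sketched without any Firewire specifics, would be plain double buffering: the application writes into the half the controller is not currently reading and then flips an index. This is only an illustration of the idea, not how any existing driver does it:

#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define STATE_SIZE 64                      /* example payload size */

static uint8_t state_buf[2][STATE_SIZE];   /* ping-pong buffers */
static atomic_int active = 0;              /* index the controller reads from */

/* called by the real-time application whenever the local status changes */
void update_state(const void *new_state)
{
    int idle = 1 - atomic_load(&active);

    memcpy(state_buf[idle], new_state, STATE_SIZE);
    atomic_store(&active, idle);           /* publish the new buffer */
}

/* conceptually called when the controller is (re-)armed for the next cycle;
 * whether this happens in an IRQ handler or inside the hardware is exactly
 * the open question above */
const void *buffer_for_next_cycle(void)
{
    return state_buf[atomic_load(&active)];
}

/* note: a complete solution must also make sure the controller is not
 * still reading the buffer that gets overwritten next */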


Shared skbs would be required if we want to have multiple local receivers of multicast packets or broadcast streams. Whatever solution might be found, it will likely increase the complexity of the reception path compared to the current version. If we deny this as a first approach, things would certainly be easier. However, it may significantly degrade the usability of RTnet in non-trivial application scenarios.

[eomt]
Just to give a brainstorming idea :). Maybe we can first deny rtskb sharing,
but still make multiple local receivers for a certain rtskb possible, i.e.
let each socket pay for a "shared" rtskb. The real-time demultiplexing task
in the PM module/stack_mgr in RTnet cannot be preempted by the woken-up
applications. That means the demultiplexing task only copies the rtskb to each

Copying memory is expensive - much more so than the pointer game RTnet has played so far to exchange full buffers for empty ones. It will not be a solution for older or embedded boxes, or if CPU time is restricted due to other requirements.

application and raises the corresponding semaphore without being preempted.
So it can kfree the rtskb (back to the NIC) before returning. Then all the
woken-up applications can be executed in order of priority. Of course, the
downside of this solution is that we need more memory, which may be a
problem for certain embedded systems.
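
A rough sketch of that copy-per-socket scheme, with all names invented for illustration (RTnet's real rtskb path exchanges pointers instead of copying):

#include <stddef.h>
#include <string.h>

struct rx_buffer {
    unsigned char data[1518];   /* one Ethernet-sized frame */
    size_t        len;
};

struct listener {
    struct rx_buffer  copy;     /* per-socket private copy ("paid" by the socket) */
    void            (*wake)(struct listener *l);  /* e.g. raise its semaphore */
    struct listener  *next;
};

/* runs in the non-preemptible demultiplexing task */
void demux_deliver(const struct rx_buffer *rtskb, struct listener *listeners)
{
    struct listener *l;

    for (l = listeners; l != NULL; l = l->next) {
        memcpy(l->copy.data, rtskb->data, rtskb->len);
        l->copy.len = rtskb->len;
        l->wake(l);             /* application wakes up, but cannot preempt yet */
    }

    /* the original buffer can now go straight back to the NIC's pool;
     * afterwards the woken-up applications run in priority order */
}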

Depends. We are still talking about kilobytes here, which can be OK. The performance problem will hit earlier, I think.

Jan
