Jan Kiszka wrote:
> Wolfgang Grandegger wrote:
>> Hello,
>>
>> Siniša Denić wrote:
>>> Hello Jan, we met each other on Real Time Conference in Linz, a few days
>>> ago.
>>> We talked about the rt_8139too driver, and I also told you about my
>>> idea to write some kind of rt_mpc52xx_fec driver for the 2.6 kernel.
>>>  For now I have two questions:
>>> 1. Must the MPC5200 Bestcomm DMA engine be handled as an RTDM device
>>> in order to serve data transfers between the rt_mpc52xx_fec driver and
>>> memory, or can I use the Bestcomm API as is?
>> I believe you can use it as-is. The spin_lock()/spin_unlock() functions
>> are used for SDMA task creation, but they get called at initialization
>> time in secondary mode. I need to check more carefully, though.
> 
> CONFIG_IPIPE_DEBUG_CONTEXT can help to find remaining incompatible
> spinlock usages during runtime.

OK
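
For the archives: on an I-pipe/Xenomai-patched tree, the debug switch Jan
mentions looks roughly like this in the kernel .config (a sketch only; the
exact option set and menu location may vary between I-pipe versions):

```
# Kernel hacking ---> I-pipe debugging
CONFIG_IPIPE=y
CONFIG_IPIPE_DEBUG=y
CONFIG_IPIPE_DEBUG_CONTEXT=y   # warns when Linux-domain services (e.g. plain
                               # spinlocks) are invoked from real-time context
```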

>>> 2. What do I have to do to take the built-in FEC driver out of the
>>> kernel and build it as a module? I think the problem is that the
>>> Bestcomm DMA is a system device and must be built in, hence (I
>>> suppose) the FEC also works only when built in.
>> You need to port the FEC driver from a recent 2.6 kernel to RTnet,
>> using arch/powerpc.
>>
>>> This has already been done for the 2.4 kernel with a kernel patch that
>>> allows building rt_mpc52xx_fec as a module, but I think the patch
>>> differs for 2.6 and I don't know where to get it.
>> The RTnet driver for the FEC on the MPC5200 currently available for 2.4
>> is not usable with 2.6, and it makes little sense to port it.
>>
>>> I hope you understand me. If these questions are rather for Wolfgang,
>>> I'll be pleased to meet him here.
>> You want to use RTnet on the MPC5200 with Linux 2.6, right?
>> Unfortunately, that FEC driver is still missing in Linux 2.6.23. There
>> are patches around, and hopefully they will be included in 2.6.24. The
>> corresponding RTnet driver should then be derived from that Linux
>> driver.
> 
> One further question we discussed in Linz was whether a directly
> attached FEC might provide faster round-trip times than a PCI-attached
> NIC with that controller. Wolfgang, what would you say?

I believe that a "good" PCI NIC is a bit faster. The Bestcomm task may
introduce some overhead and latency because the FEC is not its only
user. But I do not have real numbers to show.

> In any case, the rt_8139 driver used so far is not an optimal choice,
> even ignoring the PCI path for a while. The reason, which I just
> realised again: this hardware requires incoming and outgoing packets to
> be _copied_ between the stack's data structures and special DMA regions
> of the card. So maybe switching to another PCI adapter would already
> provide better numbers!

Such a comparison would be interesting, indeed. I have the gut feeling
that most of the latency is due to the CPU power of the MPC5200 and the
memory bandwidth of the board.

Wolfgang-


_______________________________________________
RTnet-users mailing list
RTnet-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rtnet-users
