Jorge Almeida wrote:
> On Wednesday, 7 February 2007 18:55, Jan Kiszka wrote:
>> Jorge Almeida wrote:
>>> Hello to all.
>>>
>>> I'm running an RT application using RTAI 3.4 and RTnet 0.9.8.
>>>
>>>
>>> I've found two different types of packet-dropping messages during the
>>> execution of my application:
>>>
>>> "memory squeeze: dropping packet"
>>> and 
>>> "dropping packet in rtnetif_rx"
>> What comes first?
> 
> 
> this is the result of the messages file (the first lines):
> 
> Feb  8 10:33:18 nav-aft kernel: Task CLogManagerRT start.       0xe0488220
> Feb  8 10:53:16 nav-aft kernel: rteth0: Memory squeeze, dropping packet.
> Feb  8 10:53:16 nav-aft kernel: rteth0: Memory squeeze, dropping packet.
> Feb  8 10:53:16 nav-aft kernel: RTnet: dropping packet in rtnetif_rx()
> Feb  8 10:53:16 nav-aft kernel: rteth0: Memory squeeze, dropping packet.
> Feb  8 10:53:17 nav-aft kernel: rteth0: Memory squeeze, dropping packet.
> Feb  8 10:53:17 nav-aft kernel: RTnet: dropping packet in rtnetif_rx
> .............
> 
>>>
>>> I have one hard rt_task reading from a RAW socket, and 3 hard rt_tasks
>>> writing to different LANs.
>>>
>>> And several other rt_tasks that do various things in the application.
>>> Several of them are in hard real time and others are in soft real time.
>>>
>>> One thing that I've found strange is that when these dropping-packet
>>> errors occur, the hard real-time tasks are stopped.
>> Hmm, maybe they stop first and _then_ RTnet runs into trouble
>> delivering packets. Where/on what do they block then?
>>
>>> Most of them are periodic, and should be running anyway.
>>>
>>> The soft tasks still run with the correct period.
>>> Is there any known malfunction in the hard real-time scheduler of RTAI?
>>>
>>> Or a problem with the RTnet framework that could cause this strange behaviour?
>> You run into an overload situation w.r.t. incoming frames. To find out if
>> this is application-driven or due to a bug in whatever subsystem, we
>> have to analyse your priority setup first: are the receiver task and
>> RTnet's stack manager task the highest-priority ones? Furthermore,
>> can you run your scenario without further RT tasks except the receiver
>> to reduce potential trouble sources?
> 
> My priority setup for all the tasks in the application is the following:
> 
> #define REAL_TIME_TASK_PRIORITY_NAV                       4
> #define REAL_TIME_TASK_PRIORITY_LOG                       3
> #define REAL_TIME_TASK_PRIORITY_LOG_MANAGER               8
> #define REAL_TIME_TASK_PRIORITY_ETHERNET_MANAGER_READER   10
> #define REAL_TIME_TASK_PRIORITY_GYRO_NAV_MANAGER          12
> #define REAL_TIME_TASK_PRIORITY_TIME_MANAGER_READER       9
> #define REAL_TIME_TASK_PRIORITY_TIME_MANAGER_WRITER       11
> #define REAL_TIME_TASK_PRIORITY_ETHERNET_MANAGER_WRITER   13
> #define REAL_TIME_TASK_PRIORITY_ANALOGIC_STATE            7
> #define REAL_TIME_TASK_PRIORITY_MONITORING                8
> #define REAL_TIME_TASK_PRIORITY_MANAGEMENT                8
> #define REAL_TIME_TASK_PRIORITY_NAV_VALUES_SIMULATION     4
> #define REAL_TIME_TASK_PRIORITY_NAV_SIMULATION_MANAGER    6
> #define REAL_TIME_TASK_PRIORITY_FAILOVER_MANAGER          10
> 
> 
> But presently running are:
> TASK LOG                          (blocked in a semaphore)
> TASK ETHERNET_MANAGER_READER      (blocked in rt_recv_msg)
> 3 x TASK ETHERNET_MANAGER_WRITER  (blocked in queue semaphores)
> TASK GYRO_NAV_MANAGER             (periodic; sends 1 message to LAN A and
>                                    LAN B every 5 ms)
> 
> 
> RTAI LXRT Real Time Task Scheduler.
> 
>     Calibrated CPU Frequency: 2791143000 Hz
>     Calibrated interrupt to scheduler latency: 2943 ns
>     Calibrated oneshot timer setup_to_firing time: 999 ns
> 
> Number of RT CPUs in system: 1
> 
> Real time kthreads in resorvoir (cpu/#): (0/6)
> 
> Number of forced hard/soft/hard transitions: traps 0, syscalls 0
> 
> Priority  Period(ns)  FPU  Sig  State  CPU  Task  HD/SF  PID  RT_TASK *  TIME
> ------------------------------------------------------------------------------
> 1          1           Yes  No  0x9    0:1   1      1    2610  e0698180   6
> 999999999  1           Yes  No  0x9    0:1   2      1    2606  e0698c80   6
> 999999999  1           Yes  No  0x9    0:1   3      1    2848  e005d500   6
> 1000000004 1           Yes  No  0x0    0:1   4      0    4865  e0485220   0
> 1000000003 1           Yes  No  0x9    0:1   5      0    4866  e0485a20   0
> 1000000008 1           Yes  No  0x0    0:1   6      0    4878  e0486220   0
> 10         1           Yes  No  0x9    0:1   7      1    4873  e0486a20   8
> 12         5000001     Yes  No  0x5    0:1   8      1    4877  e0487220   8
> 13         1           Yes  No  0x9    0:1   9      1    4874  e0487a20   8
> 13         1           Yes  No  0x9    0:1   10     1    4875  e0488220   8
> 13         1           Yes  No  0x9    0:1   11     1    4876  e0488a20   8
> TIMED
>> e0487220 
> READY
> 
> 
> 
>> [BTW, what packet burst do you expect at maximum? Check if
>> CONFIG_RTNET_RX_FIFO_SIZE is large enough.]
> 
> CONFIG_RTNET_RX_FIFO_SIZE = 32, the default value.
> 
> I'm running in promisc mode with ETH_P_ALL, and I made some SCP sessions to
> another machine in the same network and no problem arose.
> My application can take care of bursts with no problems.

Your application may, but RTnet has to be tuned for it as well.

See, RTnet is designed to handle predictable traffic. It has guaranteed
but limited resources, like buffers or the RX-FIFO size above. You have
to specify the worst-case load scenario first and then dimension RTnet's
resources appropriately. The rt_8139too driver allows you to tune its
input buffer pool via a module parameter (rx_pool_size); other drivers
currently require recompilation. When increasing those buffers, also
adapt the RX_FIFO_SIZE.
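
For instance, assuming rt_8139too is built as a module (the value below is
only illustrative, and the exact module path or extension depends on your
kernel and RTnet installation):

    insmod rt_8139too.ko rx_pool_size=64

CONFIG_RTNET_RX_FIFO_SIZE is a compile-time option, so raising it above its
default of 32 would mean adjusting the RTnet build configuration and
recompiling the stack.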

So, increasing those buffers may be a first step to rule out real
shortages. Once there are more than sufficient resources, any further
squeeze is an indication of some scheduling issue. Then you would have
to check what blocks the stack manager task (the one that pushes the
buffers into the socket's queue, or drops them when the socket, but not
the whole system, is short on memory).
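
If it turns out that the per-socket pool is what runs dry, RTnet also lets
a socket attach additional rtskbs to its own pool via the
RTNET_RTIOC_EXTPOOL ioctl. A minimal sketch, assuming the RTDM user-space
API (rt_dev_*) as used in RTnet's examples; header locations and the exact
call convention may differ between RTAI and Xenomai builds, and 64 is only
an illustrative value:

    #include <stdio.h>
    #include <sys/socket.h>      /* PF_PACKET, SOCK_DGRAM */
    #include <netinet/in.h>      /* htons() */
    #include <linux/if_ether.h>  /* ETH_P_ALL */
    #include <rtdm/rtdm.h>       /* rt_dev_socket(), rt_dev_ioctl() */
    #include <rtnet.h>           /* RTNET_RTIOC_EXTPOOL */

    /* Open the raw receive socket and give it extra rtskbs so that an
     * incoming burst does not exhaust its pool. */
    static int open_rx_socket(void)
    {
        unsigned int add_rtskbs = 64;   /* illustrative; size for your worst case */
        int sockfd;

        sockfd = rt_dev_socket(PF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL));
        if (sockfd < 0)
            return sockfd;

        /* RTNET_RTIOC_EXTPOOL returns the number of buffers actually added */
        if (rt_dev_ioctl(sockfd, RTNET_RTIOC_EXTPOOL, &add_rtskbs) != (int)add_rtskbs)
            printf("warning: socket pool was only partially extended\n");

        return sockfd;
    }

Whether such a pool extension helps in your case of course still depends on
the stack manager getting scheduled in time, as described above.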

Jan
