I set the maximum number of messages for an mbox to 2, so that it fills up easily. In the function sys_mbox_post I added a retry counter: when it exceeds a limit (say 10), the function simply returns. While testing the system and watching the debug output, I found that my board sometimes dies; the stack prints no more debug information, so I guess the ARM has taken a Data Abort exception. Note that I just return from sys_mbox_post without posting the message; could that be the cause of the error?
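Is the sketch below roughly what sys_mbox_post is supposed to do, i.e. block rather than return? (Written against a FreeRTOS queue purely as an example and assuming sys_mbox_t wraps a queue of void* pointers; my own port is different.)

#include "FreeRTOS.h"
#include "queue.h"
#include "lwip/sys.h"

/* Sketch only: assumes sys_mbox_t is a FreeRTOS queue handle created by
 * sys_mbox_new() with items of size sizeof(void *). */
void sys_mbox_post(sys_mbox_t mbox, void *msg)
{
  /* lwIP expects this call to block until there is room in the mbox.
   * Returning early and dropping msg loses the pbuf/netbuf it points to,
   * which can later show up as a crash such as a Data Abort. */
  while (xQueueSend(mbox, &msg, portMAX_DELAY) != pdPASS) {
    /* keep retrying; portMAX_DELAY should normally never expire */
  }
}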
On 2007-12-05, "Frédéric BERNON" <[EMAIL PROTECTED]> wrote:
Your problem is perhaps that tcpip_thread is blocked somewhere, so your driver can no longer post new input packets because tcpip_thread doesn't fetch them (or fetches them too slowly). That's why you get "memp_malloc: out of memory in pool TCPIP_MSG_INPKT". One of the main causes of tcpip_thread blocking is that it has to post a pbuf or netbuf to netconn::recvmbox. Since sys_mbox_post is required to be blocking, perhaps the problem is that, on one of your incoming connections, netconn::recvmbox becomes full in the window between the application thread fetching a packet and its call to do_recv. If at that moment tcpip_thread has a new packet to post to this netconn::recvmbox, it blocks waiting for free space, but that space will never be freed, because the application thread needs tcpip_thread to execute a do_recv first => deadlock. This is one possible explanation of your problem; there could also be some other problem in another place. But if it is this problem, try increasing your mbox sizes to a higher value (that way, in the worst case, you will get an "out of memory" on your PBUF_POOL, and since that packet is simply dropped, the TCP source will retransmit it). Of course, I am assuming your mboxes have a fixed size that is too low.
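For example, something like this in your lwipopts.h (just a sketch; check opt.h for the exact option names available in your CVS snapshot, and note they only help if your sys_mbox_new() really uses the requested size instead of a hard-coded limit):

/* lwipopts.h -- illustrative values only, not tuned for your application */
#define TCPIP_MBOX_SIZE            16  /* mbox feeding tcpip_thread            */
#define DEFAULT_TCP_RECVMBOX_SIZE  16  /* per-netconn TCP recvmbox             */
#define DEFAULT_UDP_RECVMBOX_SIZE  16  /* per-netconn UDP recvmbox             */
#define DEFAULT_ACCEPTMBOX_SIZE     8  /* pending connections on a listen conn */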
Hope this helps.

====================================
Frédéric BERNON
HYMATOM SA
Chef de projet informatique (IT project manager)
Microsoft Certified Professional
Tel.: +33 (0)4-67-87-61-10
Fax.: +33 (0)4-67-70-85-44
Email: [EMAIL PROTECTED]
Web site: http://www.hymatom.fr
====================================
Please consider the environment before printing.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of Andrew Lukefahr
Sent: Wednesday, 5 December 2007, 05:31
To: Mailing list for lwIP users
Subject: [lwip-users] (no subject)

Hi,

I'm trying to use the sequential API to read and write to several sockets simultaneously. I'm using FreeRTOS on an AT91SAM7X256 and a fairly recent (about a month old) CVS version of lwIP. I'm trying to read in data over several connections, parse out the useful data, and then send data back out over several other connections. The incoming connections are only used for incoming data, and the outgoing connections are only used for outgoing data. None of the streams has much data flowing through it, at most ~1 KB/s.

I got it working pretty consistently with 3 incoming connections and 1-5 outgoing connections. I found I had to put a ~150 ms delay on the output loop in my write thread, otherwise lwIP would lock up. However, now I need to add a fourth incoming connection. With the 4th incoming connection, lwIP works for anywhere from a few seconds to a few minutes before locking up. The more outgoing connections lwIP is trying to service, the quicker the lockup; but even with no outgoing connections at all, lwIP eventually locks up. I've been playing around with debugging, and so far I can't find anything that stands out. The last few lines of my debug output look like this:
LWIP: tcp_receive: window update 1052
LWIP: tcp_receive: dupack averted 77772192 77772083
LWIP: tcp_receive: pcb->rttest 0 rtseq 6509 ackno 6510
...

Then there is a long pause, followed by:

LWIP: memp_malloc: out of memory in pool TCPIP_MSG_INPKT
(the line above repeats 12 times in total)
LWIP: memp_malloc: out of memory in pool PBUF_POOL

So, a few questions.

First, I know that lwip isn't thread safe.
However, I'm wondering what I need to protect with a semaphore or mutex. I've got 4 different recv functions in 4 different threads, and a single write thread, so I can't just serialize everything behind one lock: otherwise only one thread could be blocked in recv at a time, and each recv would have to time out when there is no data, which would take too long. Also, do I need to serialize lwIP at all if I'm only using each netconn for one-directional data flow (each netconn is only ever used for sending OR receiving, never both)?
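To make the pattern concrete, each of my receive threads does roughly this (a simplified sketch, not my actual code; parse_data is just a placeholder for my parser):

#include "lwip/api.h"

static void recv_thread(void *arg)
{
  struct netconn *conn = (struct netconn *)arg;
  struct netbuf *buf;

  for (;;) {
    buf = netconn_recv(conn);          /* blocks here until data arrives  */
    if (buf == NULL) {
      break;                           /* connection closed or error      */
    }
    do {
      void *data;
      u16_t len;
      netbuf_data(buf, &data, &len);   /* get the current fragment        */
      /* parse_data(data, len); */     /* placeholder: application parser */
    } while (netbuf_next(buf) >= 0);   /* walk any chained fragments      */
    netbuf_delete(buf);
  }
}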
Second, I assume that something is going wrong in netconn_recv(). Since it works fine with three recvs but locks up with four, my guess is that incoming data is overwhelming a buffer somewhere. So, my question is: which buffer(s) should I look at increasing to see if that helps? Or is there something else I'm missing? Any debugging recommendations to help nail down what's causing the error? I can post my lwipopts.h and send/recv code if anyone thinks that would help.

Oh, and a completely unrelated question: if FreeRTOS implements malloc and free as pvPortMalloc and vPortFree, what would I need to modify in lwIP to get it to use the FreeRTOS memory manager instead of its own? I assume I would need to define MEM_LIBC_MALLOC correctly, and redefine mem_malloc and mem_free in mem.h?
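Something like this is what I was imagining (an untested sketch; I'm not sure whether the current source honours these overrides, and mem_calloc/mem_realloc would probably need small wrappers since FreeRTOS only provides pvPortMalloc/vPortFree):

/* lwipopts.h -- untested sketch: route lwIP's heap onto the FreeRTOS heap */
#define MEM_LIBC_MALLOC  1             /* use "libc-style" malloc/free...        */
#define mem_malloc       pvPortMalloc  /* ...but point them at the FreeRTOS heap */
#define mem_free         vPortFree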
Thanks,

--
Andrew Lukefahr
[EMAIL PROTECTED]
Electrical and Computer Engineering
University of Missouri
Open Source, Open Minds
_______________________________________________
lwip-users mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/lwip-users
