Sorry for the irrelevant reply, but I am so frustrated trying to find a job.
I used lwIP 1.4.1 plus an RTOS in my project, but ... I can find almost nothing.

Is this industry dead? Are these skill sets no longer useful?



On Thu, Jun 23, 2016 at 7:33 AM, Joel Cunningham <[email protected]>
wrote:

> Hi,
>
> select() can be used by multiple threads at the same time, and you can
> even have the same sockets in multiple calls; that is safe.  The
> limitation comes from using the same socket from multiple threads
> simultaneously in the other socket APIs (select is the exception):
> simultaneous send, recv, or close on the same socket is not supported,
> especially in 1.4.1.
>
> Regarding the maxfd question, I think there is a small misunderstanding.
> The maxfd parameter of select() is the largest FD in your input FD_SETs
> + 1, because maxfd is used to iterate through the FD_SETs provided.  It
> has nothing to do with other threads calling select() or with how many
> sockets exist in the system; only the maximum value of any FD in the
> current select() call matters.
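>
> For illustration, a minimal sketch (untested; fd1 and fd2 are
> placeholder socket descriptors) of how maxfd is derived for a call
> watching two sockets:
>
>     fd_set readset;
>     int maxfd;
>
>     FD_ZERO(&readset);
>     FD_SET(fd1, &readset);              /* first watched socket */
>     FD_SET(fd2, &readset);              /* second watched socket */
>
>     /* maxfd is the numerically largest FD in the sets, plus one */
>     maxfd = ((fd1 > fd2) ? fd1 : fd2) + 1;
>
>     if (select(maxfd, &readset, NULL, NULL, NULL) > 0) {
>         if (FD_ISSET(fd1, &readset)) { /* fd1 is readable */ }
>         if (FD_ISSET(fd2, &readset)) { /* fd2 is readable */ }
>     }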
>
> Joel
>
> On Jun 23, 2016, at 03:59 AM, lampo <[email protected]> wrote:
>
> Hello,
> can someone help me with a multithreading problem?
>
> We use lwIP 1.4.1 in PLC products; if it survives long-term testing
> (one month, for example) in industrial use, I will come back here to
> report.
>
> *our problem*
> We use a multithreaded select mode in our system, and it "seems" to be
> OK so far (one day has passed), but I see in the rawapi.txt doc:
> "the Netconn or socket API is not reentrant at the control block
> granularity level".
> So I really cannot figure out whether I am doing the right thing.
>
> *our application background/restriction*
>
> a. Our system is RTOS based. We need 2 TCP clients (each connecting to
> a separate TCP server, i.e., 2 lasers), 1 TCP server (handling a
> maximum of 4 remote clients, i.e., 4 Modbus clients), and 1 web server.
>
> b. The lwIP-related threads cannot run in polling mode, because there
> is already a polling thread in the RTOS and it has "soft real-time"
> requirements.
>
> c. The current lwIP version is 1.4.1 (we are evaluating 2.0.0).
>
> d. We need to send data to the TCP servers periodically, every 10 ms.
>
> *our design/implementation*
>
> Three threads use 'select', but each socket is closed only by the
> thread that created it. Here are the details:
>
> we use 'select' in "user tcp client send thread" ,we first call
> 'connect',and then call 'select' to determine whether the socket is
> connected correctly or not; if some errors detected in connecting or data
> sending, we call 'close' to close the socket.
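>
> For reference, a minimal sketch (untested; assumes non-blocking sockets
> and SO_ERROR support in your lwIP build, with 'srv' as a placeholder
> for the filled-in server address) of this connect-then-select pattern:
>
>     int s = socket(AF_INET, SOCK_STREAM, 0);
>     fcntl(s, F_SETFL, O_NONBLOCK);          /* make connect() non-blocking */
>
>     if (connect(s, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
>         /* a robust version also checks errno == EINPROGRESS here */
>         fd_set wset;
>         struct timeval tv = { 5, 0 };       /* 5 s connect timeout */
>         FD_ZERO(&wset);
>         FD_SET(s, &wset);
>
>         /* the socket becomes writable once the connect completes or fails */
>         if (select(s + 1, NULL, &wset, NULL, &tv) > 0) {
>             int err = 0;
>             socklen_t len = sizeof(err);
>             getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
>             if (err != 0) {
>                 close(s);                   /* connect failed */
>             }
>         } else {
>             close(s);                       /* timeout or select error */
>         }
>     }
>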
> we use 'select' in "user tcp client receive thread", blocked forever
> waiting for data, and call 'recvfrom' if data received, if some errors
> detected
> in this thread,we don't just clost the socket ,but send message to "user
> tcp
> client send thread" and tell it to close.
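>
> A sketch of that receive loop (untested; notify_send_thread() is a
> placeholder for whatever RTOS message queue you use):
>
>     fd_set rset;
>     for (;;) {
>         FD_ZERO(&rset);
>         FD_SET(s, &rset);
>         /* NULL timeout: block until data arrives or an error occurs */
>         if (select(s + 1, &rset, NULL, NULL, NULL) > 0 &&
>             FD_ISSET(s, &rset)) {
>             char buf[256];
>             int n = recv(s, buf, sizeof(buf), 0);
>             if (n <= 0) {
>                 notify_send_thread(s);      /* ask send thread to close s */
>                 break;
>             }
>             /* process n bytes from buf ... */
>         }
>     }
>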
> we use 'select' in "user tcp server thread",listening for incoming
> requests,
> receiving and responding(sending) data.
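>
> A sketch of the server loop (untested; listen_fd is a placeholder for
> an already listening socket, error handling omitted), which also shows
> that maxfd tracks the largest FD actually in the sets rather than the
> total socket count:
>
>     fd_set rset;
>     int client[4] = { -1, -1, -1, -1 };     /* up to 4 Modbus clients */
>
>     for (;;) {
>         int i, maxfd = listen_fd;
>         FD_ZERO(&rset);
>         FD_SET(listen_fd, &rset);
>         for (i = 0; i < 4; i++) {
>             if (client[i] >= 0) {
>                 FD_SET(client[i], &rset);
>                 if (client[i] > maxfd) maxfd = client[i];
>             }
>         }
>         if (select(maxfd + 1, &rset, NULL, NULL, NULL) <= 0) continue;
>
>         if (FD_ISSET(listen_fd, &rset)) {
>             int c = accept(listen_fd, NULL, NULL);
>             for (i = 0; i < 4 && client[i] >= 0; i++) ;
>             if (i < 4) client[i] = c;       /* store the new connection */
>             else close(c);                  /* table full: reject */
>         }
>         for (i = 0; i < 4; i++) {
>             if (client[i] >= 0 && FD_ISSET(client[i], &rset)) {
>                 /* recv the request, send the response, close on error */
>             }
>         }
>     }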
>
> *our doubt*
> 1. Can 'select' be used from multiple threads or not? If yes, must the
> first parameter, maxfdp1, of lwip_select() be set to the total number
> of sockets across all threads?
> For example, in our situation, maxfdp1 = 2 (clients) + 4 (remote
> clients) + 1 (listen socket) in each select?
>
> 2. Must SYS_ARCH_PROTECT be used in a multithreaded setup?
>
> 3. If lwIP 1.4.1 does not support multithreaded select mode, does lwIP
> 2.0.0 support it?
>
> Thanks a lot!
>
>
>
>
_______________________________________________
lwip-users mailing list
[email protected]
https://lists.nongnu.org/mailman/listinfo/lwip-users
