Carsten, welcome, and thanks for joining.
You are right. Repeating and augmenting your statements covering why we
need to track the destination interface:

1) The stack is REQUIRED to behave differently when we receive a request
   via multicast.
2) Replies need to go out via the interface on which the request was
   received.

If you are on an IPv4 single-homed device, the above can be done with a
simple socket setup, since multicast replies should be sent via their
unicast port. If you are a multihomed host and/or on IPv6, you need one
unicast socket per subnet (IP address) and you MAY need one multicast
socket per subnet. Where supported, IPV6_RECVPKTINFO is a nice
alternative, but the last time I looked there was not a good way to
specify which interface a packet should be sent *from* other than having
a socket bound to each interface.

(Rough sketches of the RECVPKTINFO receive path and of Chandan's two
options are below the quoted mail.)

Pat

> -----Original Message-----
> From: iotivity-dev-bounces at lists.iotivity.org [mailto:iotivity-dev-
> bounces at lists.iotivity.org] On Behalf Of Carsten Bormann
> Sent: Wednesday, April 01, 2015 6:32 AM
> To: Chandan
> Cc: iotivity-dev at lists.iotivity.org; raj.rajan at samsung.com
> Subject: Re: [dev] Timing issue in handling COAP request in
> HandleCoAPRequests
>
> I didn't dive into the code yet, but it looks a lot like a symptom of
> binding a UDP socket to INADDR_ANY. For receiving unicast packets, this
> is almost always wrong, for two reasons:
>
> -- you won't have a way to reply from the same source address,
> -- you are receiving multicasts without knowing it.
>
> For the "almost", see RFC 7252:
>
>    8.1. Messaging Layer
>    [...]
>    A server SHOULD be aware that a request arrived via multicast, e.g.,
>    by making use of modern APIs such as IPV6_RECVPKTINFO [RFC3542], if
>    available.
>
> At some point, I need to write all of this up for draft-ietf-lwig-coap.
>
> Regards, Carsten
>
>
> Chandan wrote:
> > Hi all,
> >
> > Below is an analysis of defect IOT-191
> > (https://jira.iotivity.org/browse/IOT-191).
> >
> > *Step 1*
> > On the client side, perform a multicast discovery using the command
> > "occlientslow -u0 -t3", which means multicast with CON messages.
> >
> > *Step 2*
> > On the server side we register the CoAP request handler as
> > coap_register_request_handler(gCoAPCtx, HandleCoAPRequests);
> >
> > So this request is received on both sockets, i.e. gCoAPCtx->sockfd
> > and gCoAPCtx->sockfd_wellknown.
> >
> > *Expected scenario*
> > i) HandleCoAPRequests is invoked for gCoAPCtx->sockfd.
> > ii) HandleSingleResponse gets invoked for this, but the response is
> > not sent yet.
> > iii) HandleCoAPRequests is invoked for gCoAPCtx->sockfd_wellknown,
> > but it gets rejected at HandleStackRequests because the token is the
> > same as the one received in step i.
> > iv) The unicast response is sent from the server only ONCE, from
> > HandleSingleResponse.
> >
> > *Bug scenario*
> > i) HandleCoAPRequests is invoked for gCoAPCtx->sockfd.
> > ii) HandleSingleResponse gets invoked for this and the RESPONSE is
> > SENT a first time.
> > iii) HandleCoAPRequests is invoked for gCoAPCtx->sockfd_wellknown and
> > it is treated as a new request.
> > iv) The unicast response is sent from the server a SECOND time from
> > HandleSingleResponse.
> >
> > Below are the possible solutions I see to handle this timing problem.
> > Please suggest which is the better option, as I am new to IoTivity.
> >
> > *Option 1*
> > We can cache the requests received for a certain time period so that
> > we can avoid sending both the unicast and the multicast response by
> > comparing request information. A decision can then be taken to ignore
> > the duplicate request, but it is difficult to decide how long to
> > cache.
> >
> > *Option 2*
> > When we detect that coap://224.0.1.187/oc/core?rt=alpha.light was
> > received on port 5683, then we should not send a response from the
> > invocation done for gCoAPCtx->sockfd, as it will definitely be sent
> > from the well-known socket.
> >
> > --
> > Regards,
> > Chandan
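
P.S. For anyone who wants to experiment with the IPV6_RECVPKTINFO route
Carsten and I mention above, here is a minimal sketch (plain POSIX /
RFC 3542 sockets, not IoTivity code; the helper names are mine) of how a
server socket can learn the destination address and interface index of
each datagram it receives:

/* Sketch only: the RFC 3542 IPV6_RECVPKTINFO mechanism, not IoTivity code. */
#define _GNU_SOURCE 1            /* glibc guards struct in6_pktinfo with this */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Ask the kernel to attach IPV6_PKTINFO ancillary data to every datagram. */
static int enable_pktinfo(int sock)
{
    int on = 1;
    return setsockopt(sock, IPPROTO_IPV6, IPV6_RECVPKTINFO, &on, sizeof(on));
}

/* Receive one datagram and report which local address/interface it was
 * addressed to, even if the socket is bound to the wildcard address. */
static ssize_t recv_with_dst(int sock, void *buf, size_t len,
                             struct sockaddr_in6 *src,
                             struct in6_pktinfo *dst)
{
    char cbuf[CMSG_SPACE(sizeof(struct in6_pktinfo))];
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    struct msghdr msg = {
        .msg_name = src,     .msg_namelen = sizeof(*src),
        .msg_iov = &iov,     .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    ssize_t n = recvmsg(sock, &msg, 0);
    if (n < 0)
        return n;
    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL;
         c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == IPPROTO_IPV6 && c->cmsg_type == IPV6_PKTINFO)
            memcpy(dst, CMSG_DATA(c), sizeof(*dst));
    }
    return n;
}

dst->ipi6_addr then tells you which address the client actually targeted
and dst->ipi6_ifindex which interface it arrived on. On platforms that
also implement the sending side of RFC 3542, the same structure can be
handed back to sendmsg() as IPV6_PKTINFO ancillary data to pin the reply
source, but support for that varies, which is why per-interface sockets
remain the safe route.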
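
Once the destination is known, Chandan's option 2 becomes a local check
instead of matching on the literal 224.0.1.187/5683 pair: the server can
see that the request arrived on a multicast address and behave
accordingly (answer only once, or apply the RFC 7252 multicast rules).
Another rough sketch, reusing the in6_pktinfo filled in above; the
v4-mapped branch is platform-dependent and only an assumption about how
an IPv4 multicast shows up on a dual-stack socket:

/* Sketch only: classify a datagram as multicast-addressed, using the
 * in6_pktinfo filled in by recv_with_dst() above. */
#define _GNU_SOURCE 1
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>

static bool arrived_via_multicast(const struct in6_pktinfo *dst)
{
    /* Plain IPv6 multicast destinations. */
    if (IN6_IS_ADDR_MULTICAST(&dst->ipi6_addr))
        return true;
    /* IPv4 multicast such as 224.0.1.187 may appear as a v4-mapped
     * address when received through a dual-stack IPv6 socket. */
    if (IN6_IS_ADDR_V4MAPPED(&dst->ipi6_addr)) {
        const uint8_t *v4 = &dst->ipi6_addr.s6_addr[12];
        return (v4[0] & 0xF0) == 0xE0;   /* 224.0.0.0/4 */
    }
    return false;
}

With that flag in hand, the gCoAPCtx->sockfd invocation could simply skip
replying whenever the request was multicast-addressed, which is
essentially option 2 without depending on the well-known port number.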
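
And for option 1, the cache does not need to be elaborate: something
keyed on the CoAP token plus the remote endpoint, with a short lifetime,
would already catch the second delivery from the well-known socket. A
sketch with arbitrary sizes follows (the lifetime is exactly the
"difficult to decide" knob Chandan mentions; RFC 7252's EXCHANGE_LIFETIME
would be the conservative choice):

/* Sketch only: a tiny time-bounded duplicate-request cache keyed on the
 * CoAP token and remote endpoint. Sizes and lifetime picked arbitrarily. */
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

#define DUP_CACHE_SIZE   16
#define DUP_LIFETIME_SEC 5
#define MAX_TOKEN_LEN    8            /* CoAP tokens are 0..8 bytes (RFC 7252) */

struct dup_entry {
    uint8_t             token[MAX_TOKEN_LEN];
    size_t              token_len;
    struct sockaddr_in6 peer;
    time_t              seen;
};

static struct dup_entry cache[DUP_CACHE_SIZE];

/* Returns true if this (token, peer) was already seen recently; otherwise
 * records it and returns false. Overwrites slot 0 when the cache is full. */
static bool is_duplicate_request(const uint8_t *token, size_t token_len,
                                 const struct sockaddr_in6 *peer)
{
    time_t now = time(NULL);
    int free_slot = 0;

    if (token_len > MAX_TOKEN_LEN)
        token_len = MAX_TOKEN_LEN;

    for (int i = 0; i < DUP_CACHE_SIZE; i++) {
        if (cache[i].seen == 0 || now - cache[i].seen > DUP_LIFETIME_SEC) {
            free_slot = i;                          /* empty or expired */
            continue;
        }
        if (cache[i].token_len == token_len &&
            memcmp(cache[i].token, token, token_len) == 0 &&
            memcmp(&cache[i].peer.sin6_addr, &peer->sin6_addr,
                   sizeof(peer->sin6_addr)) == 0 &&
            cache[i].peer.sin6_port == peer->sin6_port)
            return true;                            /* duplicate */
    }

    cache[free_slot].token_len = token_len;
    memcpy(cache[free_slot].token, token, token_len);
    cache[free_slot].peer = *peer;
    cache[free_slot].seen = now;
    return false;
}

Either approach would stop the double reply in IOT-191; the
destination-based check has the advantage of not needing to pick a cache
lifetime at all.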
