---------- Forwarded message ----------
From: Rick van Haasen <rick.van.haa...@gmail.com>
Date: Thu, Mar 29, 2012 at 7:00 AM
Subject: [Contiki-developers] Another bug in sicslowpan.c, resulting in
dropped fragmented packets...
To: Contiki developer mailing list
<contiki-develop...@lists.sourceforge.net>,
rick.van.haa...@philips.com


Last week I reported the buffer overflow bug in 6lowpan.
This week I noticed another problem, one that causes the reassembly of
packets to fail:
Assume that a fragmented packet is being sent by some node, and the
receiving node misses the first fragment.
When it receives the second fragment, it stores the fragment tag and tries
to reassemble from there, which will obviously never succeed.
The variable processed_ip_len is updated with the fragment size of the
second fragment.
This variable, processed_ip_len, is used to determine that reassembly is ongoing.
The node does not get out of this state until the timeout expires (8 seconds).
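
To make the failure mode concrete, here is a simplified sketch of the stuck
state (a paraphrase, not the literal sicslowpan.c source; the names
handle_fragment, reass_tag, and payload_len are only illustrative):

#include <stdint.h>

static uint16_t processed_ip_len;  /* > 0 is taken to mean "reassembly in progress" */
static uint16_t reass_tag;         /* tag of the packet currently being reassembled */

static void
handle_fragment(uint16_t frag_tag, uint16_t payload_len)
{
  if(processed_ip_len == 0) {
    /* No reassembly in progress: even a later fragment whose FRAG1 was
       lost is accepted and starts one; an ~8 second reassembly timer is
       armed here. */
    reass_tag = frag_tag;
  } else if(frag_tag != reass_tag) {
    /* A reassembly is (believed to be) in progress for another tag, so
       the fragment is dropped: this is why the node stays deaf to new
       packets until the timeout. */
    return;
  }
  /* The counter never reaches the full packet length, because the first
     fragment is missing, so the "in progress" state persists. */
  processed_ip_len += payload_len;
}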

Both the receiving and the sending parts of sicslowpan.c use the global
variable processed_ip_len.
By coincidence, because my application both receives and sends
messages, the sending code sets processed_ip_len to 0 after having
sent its fragments, thereby "resetting" the state for the receiving part!

This "resetting" only happens when fragmented packets are being sent.
If the sender sends unfragmented IP packets, the variable
processed_ip_len is not touched.
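
For illustration, the coupling looks roughly like this (a sketch, not the
literal sending code in sicslowpan.c; total_len and payload_len are
placeholder names):

static void
send_fragmented(uint16_t total_len, uint16_t payload_len)
{
  while(processed_ip_len < total_len) {
    /* ...build and transmit the next fragment... */
    processed_ip_len += payload_len;
  }
  /* Only reached when the outgoing packet was fragmented; because both
     directions share processed_ip_len, this also happens to clear the
     receiver's stuck "reassembly in progress" state. */
  processed_ip_len = 0;
}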

I discovered this situation by testing combinations of nodes.
When all nodes send messages that are large enough to trigger
fragmentation, everything seemed to work fine.
When all nodes send small, unfragmented packets, everything also looked OK.
However, when one node sends large packets and the other node sends small
ones, the node that receives the large messages suddenly fails to
reassemble incoming fragments!
In that case, the state is not reset by the sending part of
sicslowpan.c (again, processed_ip_len is only set to 0 after having
sent all fragments of a multi-fragment IP packet...).

To solve it, the first thing I did was to use separate variables
for the send and receive parts:

input_processed_ip_len
output_processed_ip_len

(This separation is not strictly needed, but it removes the coupling
that caused the unexpected resetting when sending data...)

Next, I modified the code so that input_processed_ip_len is set to 0
when the first fragment of a fragmented packet is received.

There are other solutions, but this was the most straightforward one,
and it solved the problem... The modified FRAG1 case looks like this
(a sketch of the corresponding send-side change follows after the code):

 switch((GET16(RIME_FRAG_PTR, RIME_FRAG_DISPATCH_SIZE) & 0xf800) >> 8) {
    case SICSLOWPAN_DISPATCH_FRAG1:
      //printf("-F1-");
      PRINTFI("sicslowpan input: FRAG1 ");
      frag_offset = 0;
/*       frag_size = (uip_ntohs(RIME_FRAG_BUF->dispatch_size) & 0x07ff); */
      frag_size = GET16(RIME_FRAG_PTR, RIME_FRAG_DISPATCH_SIZE) & 0x07ff;
/*       frag_tag = uip_ntohs(RIME_FRAG_BUF->tag); */
      frag_tag = GET16(RIME_FRAG_PTR, RIME_FRAG_TAG);
      PRINTFI("size %d, tag %d, offset %d)\n",
             frag_size, frag_tag, frag_offset);
      rime_hdr_len += SICSLOWPAN_FRAG1_HDR_LEN;
      /*      printf("frag1 %d %d\n", reass_tag, frag_tag);*/
      first_fragment = 1;

      // RVH: a FRAG1 always starts a fresh reassembly, so clear any stale
      // state left behind by a packet whose first fragment was lost.
      sicslowpan_len = 0;
      input_processed_ip_len = 0;
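
For reference, the single shared global is then split into two independent
counters, roughly like this (illustrative declarations, not the actual diff):

/* One counter per direction instead of one shared global. */
static uint16_t input_processed_ip_len;   /* receive / reassembly progress    */
static uint16_t output_processed_ip_len;  /* transmit / fragmentation progress */

The send path's final reset then becomes output_processed_ip_len = 0, which
only touches the sender's own state and no longer resets (or fails to reset)
the receiver's reassembly state.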


-- 
Jon Smirl
jonsm...@gmail.com

