Hi Zaki,
On Fri, Oct 14, 2011 at 8:00 AM, Zaki <[email protected]> wrote:
> Thanks. Is there any possibility that, while iw is requesting the
> parameters involved, the returned values are not current? Maybe because of
> some locks, for example, so it needs time to get the current value. I ask
> because the observed behaviour suggests that the best path has already
> changed, but the change was not reflected in the mpath dump in a timely
> manner.
>
I've written a test which consistently pegs the recovery time (for A)
in this scenario as 3.5s when using 'ping -f'. An mpath is detected as
failed and deleted in the following code path:
ieee80211_tx_status()
  ieee80211s_update_metric()
    stainfo->fail_avg = ((80 * stainfo->fail_avg + 5) / 100 + 20 * failed)
    if (stainfo->fail_avg > 95)
        mesh_plink_broken();
So about 17 TX failures in a row will sever the plink. If this seems
too high, patches are welcome.
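For illustration, here is a small standalone sketch (ordinary userspace C, not
kernel code, and it assumes fail_avg starts at 0, i.e. a link with no recent
failures) that replays the same integer arithmetic to count how many
consecutive failures it takes to cross the threshold:

#include <stdio.h>

int main(void)
{
        int fail_avg = 0;  /* assumed starting point: no recent failures */
        int failed = 1;    /* every frame in the run fails */
        int n = 0;

        while (fail_avg <= 95) {
                /* same moving-average update as in the snippet above */
                fail_avg = (80 * fail_avg + 5) / 100 + 20 * failed;
                n++;
        }
        printf("plink severed after %d consecutive TX failures (fail_avg=%d)\n",
               n, fail_avg);
        return 0;
}

With those assumptions it prints 17 (fail_avg reaches 96), which is where the
estimate above comes from.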
Thomas
> On Fri, Oct 14, 2011 at 10:35 PM, David Fulgham <[email protected]>
> wrote:
>>
>> Check out the wiki HOWTO page, as it now has descriptions for the
>> mesh_params:
>> http://www.o11s.org/trac/wiki/HOWTO-0.4.2#advancedmesh
>> I think the setting that would speed up the transition is:
>> mesh_holding_timeout (time in ms to hold a peer entry, following a peering
>> close request or other peering timeout, before removing it from the peer
>> list)
>> These may help too:
>> mesh_path_refresh_time (how many ms before a path expires a path refresh
>> should be attempted)
>> mesh_hwmp_active_path_timeout (the length of time, in TU, that derived
>> forwarding path information remains valid)
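>> For example, something like this (the interface name mesh0 and the values
>> are only illustrative, and the exact syntax can differ between iw versions;
>> newer iw takes param=value pairs):
>>
>>   iw dev mesh0 set mesh_param mesh_path_refresh_time=500
>>   iw dev mesh0 set mesh_param mesh_hwmp_active_path_timeout=2500
>>   iw dev mesh0 set mesh_param mesh_holding_timeout=40
>>
>> 'iw dev mesh0 get mesh_param' should then show the values currently in
>> effect.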
>> Regards,
>> David Fulgham
>>
>> Keep in mind that lowering the timeouts creates more overhead on the mesh.
>> Changes to the parameters are made on each node, so you could keep the
>> timeouts low on the nodes that require it and leave the others higher.
>>
>> On Fri, Oct 14, 2011 at 10:27 AM, Zaki <[email protected]> wrote:
>>>
>>> Hi,
>>>
>>> I have a question about which parameter I should change to improve the
>>> mesh path refresh performance as indicated by mpath dump.
>>> In my setup, the best path chosen is A -> B -> C -> F. Another node, E,
>>> is set up to provide an alternative best path should B go down.
>>> A video transmission from A to F runs without any problem. When I lower
>>> the TxPower of B to 0 dBm, I notice a very short interruption to the
>>> video transmission, but it then recovers to its previous speed. During
>>> this time I know that node E is replacing node B to form a new best path,
>>> but an immediate mpath dump does not yet show the path change; it still
>>> shows the old one. Only after about 10s does the mpath dump show the new
>>> best path. I have tried this many times, and the time for the mpath dump
>>> to update ranges between 10-30s. Is this normal? Is there anything I can
>>> do to make it immediate?
>>>
>>> Thanks.
>>> Zaki.
_______________________________________________
Devel mailing list
[email protected]
http://open80211s.com/mailman/listinfo/devel