Jiacheng

1) Sorry about not converting msecs to os time ticks. Good catch! A corrected 
sketch is below these points.
2) I understand using a semaphore to wake up a task, but looking at the exact 
code you have shown, I don't understand why the task would release the semaphore 
in this case. Doesn't the interrupt release the semaphore?
3) Blocking interrupts. If you block for 600-700 usecs you will cause failures 
in the underlying BLE stack. These won't be “catastrophic” (at least, I don't 
think so) but it can cause you to miss things like connection events, scan 
requests/responses, advertising events, etc. If your high priority interrupt 
fires frequently you could possibly cause connections to fail. If you do it 
occasionally you should be ok.
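
For point 1, a minimal sketch of the corrected delay, assuming os_delay holds 
the microsecond value from os_cputime_ticks_to_usecs(); if your tree does not 
have a ms-to-ticks helper that returns the tick count directly, the same 
conversion can be done with OS_TICKS_PER_SEC:

/* Sketch only: round the usec delay up to whole milliseconds, then
 * convert milliseconds to OS ticks before sleeping. */
uint32_t delay_ms;
os_time_t delay_ticks;

delay_ms = (os_delay + 999) / 1000;
delay_ticks = (delay_ms * OS_TICKS_PER_SEC + 999) / 1000;
os_time_delay(delay_ticks);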

> On Jan 24, 2017, at 5:08 PM, WangJiacheng <[email protected]> wrote:
> 
> Thanks, Will, you have helped me a lot.
> 
> Since my task is triggered by a semaphore, and the semaphore is released by 
> an interrupt routine, if my task does not have enough time to run and goes to 
> sleep, it will release the semaphore again after waking up. Another minor 
> change is the time unit conversion (ms -> OS ticks) using 
> os_time_ms_to_ticks().
> 
> The main body of my task will look like:
> /********************************************************************************/
> while (1)
> {
>     t = os_sched_get_current_task();
>     assert(t->t_func == phone_command_read_handler);
>
>     /* Wait for semaphore from ISR */
>     err = os_sem_pend(&g_phone_command_read_sem, OS_TIMEOUT_NEVER);
>     assert(err == OS_OK);
>
>     time_till_next = ll_eventq_free_time_from_now();
>     if (time_till_next > X) {
>         /* Take control of transceiver and do what you want */
>     } else {
>         /* Delay task until LL services event. This assumes time_till_next
>          * is not negative. */
>         os_delay = os_cputime_ticks_to_usecs(time_till_next);
>         os_time_delay(os_time_ms_to_ticks((os_delay + 999) / 1000));
>
>         /* Release the semaphore after wake up */
>         err = os_sem_release(&g_phone_command_read_sem);
>         assert(err == OS_OK);
>     }
> }
> /********************************************************************************/
> 
> I will test whether this works. BTW, current test results show an event 
> collision between the 2 stacks after about 3~4 hours of running.
> 
> I have a question about disabling interrupts: how long can the LL task be 
> blocked by an interrupt disable? The high priority interrupt of Nordic's 
> SoftDevice can only be blocked for up to 10us. I have an interrupt with the 
> highest priority that takes 600us~700us; is it safe to block the LL task and 
> other interrupts such as the Nimble radio and the OS time tick during this 
> time?
> 
> Best Regards,
> 
> Jiacheng 
>                                       
> 
> 
>> On Jan 25, 2017, at 00:37, will sanfilippo <[email protected]> wrote:
>> 
>> Jiacheng:
>> 
>> Given that your task is lower in priority than the LL task, you are going to 
>> run into issues if you don't either disable interrupts or prevent the LL task 
>> from running. Using interrupt disable as an example (since this is easy), 
>> you would do this. The code below is a function that returns the time till 
>> the next event:
>> 
>> os_sr_t sr;
>> uint32_t time_now;
>> int32_t time_free;
>> 
>> time_free = 100000000;
>> OS_ENTER_CRITICAL(sr);
>> time_now = os_cputime_get32();
>> sch = TAILQ_FIRST(&g_ble_ll_sched_q);
>> if (sch) {
>>   time_free = (int32_t)(sch->start_time - time_now);
>> }
>> OS_EXIT_CRITICAL(sr);
>> 
>> /* 
>> * NOTE: if time_free < 0 it means that you have to wait since the LL task
>> * should be waking up and servicing that event soon.
>> */
>> return time_free;
>> 
>> Given that you are in control of what the LL is doing with your app, I guess 
>> you could do something like this in your task:
>> 
>> time_till_next = ll_eventq_free_time_from_now();
>> if (time_till_next > X) {
>>      /* Take control of transceiver and do what you want */
>> } else {
>>      /* Delay task until LL services event. This assumes time_till_next is 
>> not negative. */
>>      os_delay = os_cputime_ticks_to_usecs(time_till_next);
>>      os_time_delay((os_delay + 999) / 1000);
>> }
>> 
>> So the problem with the above code, and also with the code you have below, is 
>> something I mentioned previously. If you check the sched queue and there is 
>> nothing on it, you might think you have time, but in reality you don't 
>> because the LL has pulled the item off the schedule queue and is executing 
>> it. The LL state will tell you if the LL task is doing anything. The API 
>> ble_ll_state_get() will return the current LL state. So you could modify the 
>> above to do this:
>> 
>> OS_ENTER_CRITICAL(sr);
>> time_now = os_cputime_get32();
>> sch = TAILQ_FIRST(&g_ble_ll_sched_q);
>> if (sch) {
>>     time_free = (int32_t)(sch->start_time - time_now);
>> } else {
>>     if (ble_ll_state_get() != BLE_LL_STATE_STANDBY) {
>>         /* LL is busy. You don't know how long it will be busy. You can
>>          * return some small time here and your task will just keep
>>          * polling. */
>>     }
>> }
>> OS_EXIT_CRITICAL(sr);
>> 
>> Not sure if this will work, and I think there are other things that you 
>> would want to do to ensure that the LL task does not grab the transceiver 
>> while your task is using it.
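>> 
>> Put together, the helper might look something like the sketch below. The 1 ms 
>> value returned when the LL is busy is just a placeholder so your task keeps 
>> polling; pick whatever makes sense for you:
>> 
>> int32_t
>> ll_eventq_free_time_from_now(void)
>> {
>>     os_sr_t sr;
>>     struct ble_ll_sched_item *sch;
>>     uint32_t time_now;
>>     int32_t time_free;
>> 
>>     /* Assume a long free period if nothing is scheduled and the LL is idle */
>>     time_free = 100000000;
>> 
>>     OS_ENTER_CRITICAL(sr);
>>     time_now = os_cputime_get32();
>>     sch = TAILQ_FIRST(&g_ble_ll_sched_q);
>>     if (sch) {
>>         /* Signed subtract: negative means the event is overdue */
>>         time_free = (int32_t)(sch->start_time - time_now);
>>     } else if (ble_ll_state_get() != BLE_LL_STATE_STANDBY) {
>>         /* LL is busy with an event that is no longer on the queue.
>>          * Return a small time so the caller just keeps polling. */
>>         time_free = (int32_t)os_cputime_usecs_to_ticks(1000);
>>     }
>>     OS_EXIT_CRITICAL(sr);
>> 
>>     return time_free;
>> }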
>> 
>> Hope this helps.
>> 
>>> On Jan 24, 2017, at 3:00 AM, WangJiacheng <[email protected]> wrote:
>>> 
>>> Hi, Will,
>>> 
>>> My use scenario is that when I have an event (with a known running time) 
>>> ready to run, I'll try to find a free time slot (of the required duration) 
>>> in the Nimble events queue.
>>> 
>>> 1). Only consider Nimble connection events, not scanning events, so only 
>>> look at the scheduled events queue “g_ble_ll_sched_q”;
>>> 2). Only consider the event start_time in the queue, since I only care 
>>> whether there is enough free time between now and the nearest future event 
>>> start_time to run my event; if there is not enough time, just wait.
>>> 3). My code is in a task with lower priority than the Nimble stack task (in 
>>> ble_ll.c, the “ble_ll” task is initialized with task priority 0), so when I 
>>> check the free time slot, no Nimble event should be running.
>>> 4). While I am waiting for the free time slot, the Nimble events in the 
>>> queue will keep running, since the Nimble stack has higher priority.
>>> 5). If an event start_time in the queue has already expired, I will get a 
>>> negative free time from now; since I always require a positive free time, I 
>>> will wait for that event to be done.
>>> 6) For periodic events, the event already in the queue is always “earlier” 
>>> than the following periodic events, so I do not care about the following 
>>> events. The worst case is that the period is shorter than my event's running 
>>> time, in which case I cannot get any opportunity to run my event.
>>> 
>>> The function for checking the free time slot from now is:
>>> /********************************************************************************/
>>> int32_t ll_eventq_free_time_from_now(void)
>>> {
>>>     struct ble_ll_sched_item *sch;
>>>     uint32_t cpu_time_now;
>>>     int32_t time_free;
>>>     int32_t time_diff;
>>>
>>>     time_free = 100000000;
>>>     cpu_time_now = os_cputime_get32();
>>>
>>>     /* Look through the schedule queue */
>>>     TAILQ_FOREACH(sch, &g_ble_ll_sched_q, link)
>>>     {
>>>         time_diff = sch->start_time - cpu_time_now;
>>>         if (time_diff < time_free)
>>>         {
>>>             time_free = time_diff;
>>>         }
>>>     }
>>>
>>>     return (time_free);
>>> }
>>> /********************************************************************************/
>>> 
>>> When I have an event (requiring 10000 CPU time ticks) ready to run, the code 
>>> will be:
>>> /********************************************************************************/
>>> /* Wait for a time slot to run the event */
>>> while (ll_eventq_free_time_from_now() < 10000)
>>> {
>>>     /* Just loop until the free time slot is > 10000 CPU time ticks.
>>>      * Nimble events have higher task priority and will keep running. */
>>>     if (time_out)
>>>     {
>>>         return (1);
>>>     }
>>> }
>>>
>>> /********** my event requiring 10000 CPU time ticks runs here **************/
>>> /********************************************************************************/
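>>> 
>>> If the time_out check above needs to be concrete, one sketch is to record 
>>> os_time_get() before the loop and compare the elapsed OS ticks against a 
>>> limit (the 5 second limit below is only an example):
>>> 
>>> /********************************************************************************/
>>> os_time_t start;
>>> os_time_t limit;
>>> 
>>> start = os_time_get();
>>> limit = 5 * OS_TICKS_PER_SEC;    /* example limit only */
>>> 
>>> /* Wait for a time slot to run the event */
>>> while (ll_eventq_free_time_from_now() < 10000)
>>> {
>>>     /* Nimble events have higher task priority and will keep running */
>>>     if ((os_time_t)(os_time_get() - start) > limit)
>>>     {
>>>         return (1);    /* timed out waiting for a free slot */
>>>     }
>>> }
>>> /********************************************************************************/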
>>> 
>>> Does this make sense?
>>> 
>>> Thanks,
>>> 
>>> Jiacheng
>>> 
>>> 
>>> 
>>> 
>>>> On Jan 24, 2017, at 14:25, WangJiacheng <[email protected]> wrote:
>>>> 
>>>> Thanks, Will,
>>>> 
>>>> It seems I cannot get things to work the simple way. I just wanted to find 
>>>> a free time slot at a high level to access PHY resources such as the CPU 
>>>> and the radio exclusively. With your explanation, I should interleave my 
>>>> events with the BLE events at a low level in the same schedule queue.
>>>> 
>>>> Best Regards,
>>>> 
>>>> Jiacheng
>>>> 
>>>> 
>>>>> On Jan 24, 2017, at 13:48, will sanfilippo <[email protected]> wrote:
>>>>> 
>>>>> Jiacheng:
>>>>> 
>>>>> First thing with the code excerpt below: TAILQ_FIRST always gives you the 
>>>>> head of the queue. To iterate through all the queue elements you would 
>>>>> use TAILQ_FOREACH() or you would modify the code to get the next element 
>>>>> using TAILQ_NEXT. I would just use TAILQ_FOREACH. There is an example of 
>>>>> this in ble_ll_sched.c.
>>>>> 
>>>>> Some other things to note about scheduler queue:
>>>>> 1) It is possible for items to be on the queue that have already expired. 
>>>>> That means that the current cputime might have passed sch->start_time. 
>>>>> Depending on how you want to deal with things, you might be better off 
>>>>> doing a signed 32-bit subtract when calculating time_tmp.
>>>>> 2) You are not taking into account the end time of the scheduled event. 
>>>>> The event starts at sch->start_time and ends at sch->end_time. Well, if 
>>>>> all you care about is the time till the next event you won't have to worry 
>>>>> about the end time of the event, but if you want to iterate through the 
>>>>> schedule, the time between events is the start time of event N minus the 
>>>>> end time of event N - 1 (see the sketch after this list).
>>>>> 3) When an event is executed it is removed from the scheduler queue. 
>>>>> Thus, if you asynchronously look at the first item in the scheduler queue 
>>>>> and compare it to the time now you have to be aware that an event might 
>>>>> be running and that the nimble stack is using the PHY. This could also 
>>>>> cause you to think that nothing is going to be done in the future, but 
>>>>> when the scheduled event is over that item gets rescheduled and might get 
>>>>> put back in the scheduler queue (see #4, below).
>>>>> 4) Events in the scheduler queue appear only once. This is not an issue 
>>>>> if you are only looking at the first item on the queue, but if you 
>>>>> iterate through the queue this could affect you. For example, say there 
>>>>> are two items on the queue (item 1 is at head, item 2 is next and is 
>>>>> last). You see that the gap between the two events is 400 milliseconds (I 
>>>>> just made that number up). When item 1 is executed and done, that event 
>>>>> will get rescheduled. So let's say item 1 is a periodic event that occurs 
>>>>> every 100 msecs. Item 1 will get rescheduled causing you to really only 
>>>>> have 100 msecs between events.
>>>>> 5) The “end_time” of the scheduled item may not be the true end time of 
>>>>> the underlying event. When scheduling connections we schedule them for 
>>>>> some fixed amount of time. This is done to guarantee that all connections 
>>>>> get a place in the scheduler queue. When the schedule item executes at 
>>>>> “start_time” and the item is a connection event, the connection code will 
>>>>> keep the current connection going past the “end_time” of the scheduled 
>>>>> event if there is more data to be sent and the next scheduled item won't 
>>>>> be missed. So you may think you have a gap between scheduled events when 
>>>>> in reality the underlying code is still running.
>>>>> 6) For better or worse, scanning events are not on the scheduler queue; 
>>>>> they are dealt with in an entirely different manner. This means that the 
>>>>> underlying PHY could be used when there is nothing on the schedule queue.
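>>>>> 
>>>>> To illustrate note 2, here is a rough sketch of iterating the gaps between 
>>>>> scheduled items. The caveats in notes 1, 3, 4 and 5 still apply, so treat 
>>>>> the result as an estimate only:
>>>>> 
>>>>> uint32_t prev_end;
>>>>> int32_t gap;
>>>>> int32_t largest_gap;
>>>>> struct ble_ll_sched_item *sch;
>>>>> 
>>>>> largest_gap = 0;
>>>>> prev_end = os_cputime_get32();    /* "now" acts as the end of event 0 */
>>>>> 
>>>>> TAILQ_FOREACH(sch, &g_ble_ll_sched_q, link) {
>>>>>     /* Gap = start of event N minus end of event N - 1 (signed, since
>>>>>      * an item on the queue may have already expired). */
>>>>>     gap = (int32_t)(sch->start_time - prev_end);
>>>>>     if (gap > largest_gap) {
>>>>>         largest_gap = gap;
>>>>>     }
>>>>>     prev_end = sch->end_time;
>>>>> }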
>>>>> 
>>>>> I have an idea of what you are trying to do and it might end up being a 
>>>>> bit tricky given the current code implementation. You may be better 
>>>>> served adding an item to the schedule queue but it all depends on how you 
>>>>> want to prioritize BLE activity with what you want to do.
>>>>> 
>>>>> Will
>>>>> 
>>>>>> On Jan 23, 2017, at 8:56 PM, WangJiacheng <[email protected]> 
>>>>>> wrote:
>>>>>> 
>>>>>> Hi, 
>>>>>> 
>>>>>> I'm trying to find a free time slot between Nimble scheduled events.
>>>>>> 
>>>>>> I try to go through all the items on the schedule queue global variable 
>>>>>> “g_ble_ll_sched_q” to find all the LL events scheduled in the near 
>>>>>> future. The function is:
>>>>>> /********************************************************************************/
>>>>>> uint32_t ll_eventq_free_time_from_now(void)
>>>>>> {
>>>>>>     struct ble_ll_sched_item *sch;
>>>>>>     uint32_t cpu_time_now;
>>>>>>     uint32_t time_free;
>>>>>>     uint32_t time_tmp;
>>>>>>
>>>>>>     time_free = 1000000000;
>>>>>>     cpu_time_now = os_cputime_get32();
>>>>>>
>>>>>>     /* Look through schedule queue */
>>>>>>     while ((sch = TAILQ_FIRST(&g_ble_ll_sched_q)) != NULL)
>>>>>>     {
>>>>>>         time_tmp = sch->start_time - cpu_time_now;
>>>>>>         if (time_tmp < time_free)
>>>>>>         {
>>>>>>             time_free = time_tmp;
>>>>>>         }
>>>>>>     }
>>>>>>
>>>>>>     return (time_free);
>>>>>> }
>>>>>> /********************************************************************************/
>>>>>> 
>>>>>> Does the above function make sense for finding the free time at any given 
>>>>>> time point? Or do you have any suggestion for finding the free time slot 
>>>>>> between LL events?
>>>>>> 
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> Jiacheng
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
> 
