It's on my list.  :)  I should get to it today.

On Fri, Apr 15, 2016 at 7:51 AM, Ivan Khoronzhuk <[email protected]> wrote:

> Hi Bill,
>
> Could you review this one?
>
> On 16.02.16 16:02, Ivan Khoronzhuk wrote:
>
>> When the single-threaded test finishes, the following multi-threaded
>> test can start, and with some implementations an event requested in
>> the previous test can still arrive at the main thread. As a result,
>> one event can be lost for the remaining threads. So it is better to
>> pause scheduling for the main thread while it does not participate
>> in the multi-threaded test. Also move the pause/resume test closer
>> to the beginning, because the pause/resume functionality is now used
>> by tests that run before it.
>>
>> Signed-off-by: Ivan Khoronzhuk <[email protected]>
>> ---
>>   test/validation/scheduler/scheduler.c | 12 +++++++++++-
>>   1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/test/validation/scheduler/scheduler.c
>> b/test/validation/scheduler/scheduler.c
>> index dcf01c0..c1b61c5 100644
>> --- a/test/validation/scheduler/scheduler.c
>> +++ b/test/validation/scheduler/scheduler.c
>> @@ -1000,6 +1000,8 @@ static void schedule_common(odp_schedule_sync_t
>> sync, int num_queues,
>>         args.enable_schd_multi = enable_schd_multi;
>>         args.enable_excl_atomic = 0;    /* Not needed with a single CPU */
>>
>> +       /* resume scheduling in case it was paused */
>> +       odp_schedule_resume();
>>         fill_queues(&args);
>>
>>         schedule_common_(&args);
>> @@ -1037,6 +1039,9 @@ static void parallel_execute(odp_schedule_sync_t
>> sync, int num_queues,
>>         args->enable_schd_multi = enable_schd_multi;
>>         args->enable_excl_atomic = enable_excl_atomic;
>>
>> +       /* disable receive events for main thread */
>> +       exit_schedule_loop();
>> +
>>         fill_queues(args);
>>
>>         /* Create and launch worker threads */
>> @@ -1249,6 +1254,9 @@ void scheduler_test_pause_resume(void)
>>         int i;
>>         int local_bufs = 0;
>>
>> +       /* resume scheduling in case it was paused */
>> +       odp_schedule_resume();
>> +
>>         queue = odp_queue_lookup("sched_0_0_n");
>>         CU_ASSERT(queue != ODP_QUEUE_INVALID);
>>
>> @@ -1296,6 +1304,8 @@ void scheduler_test_pause_resume(void)
>>         }
>>
>>         CU_ASSERT(exit_schedule_loop() == 0);
>> +
>> +       odp_schedule_resume();
>>   }
>>
>>   static int create_queues(void)
>> @@ -1556,6 +1566,7 @@ odp_testinfo_t scheduler_suite[] = {
>>         ODP_TEST_INFO(scheduler_test_num_prio),
>>         ODP_TEST_INFO(scheduler_test_queue_destroy),
>>         ODP_TEST_INFO(scheduler_test_groups),
>> +       ODP_TEST_INFO(scheduler_test_pause_resume),
>>         ODP_TEST_INFO(scheduler_test_parallel),
>>         ODP_TEST_INFO(scheduler_test_atomic),
>>         ODP_TEST_INFO(scheduler_test_ordered),
>> @@ -1586,7 +1597,6 @@ odp_testinfo_t scheduler_suite[] = {
>>         ODP_TEST_INFO(scheduler_test_multi_mq_mt_prio_a),
>>         ODP_TEST_INFO(scheduler_test_multi_mq_mt_prio_o),
>>         ODP_TEST_INFO(scheduler_test_multi_1q_mt_a_excl),
>> -       ODP_TEST_INFO(scheduler_test_pause_resume),
>>         ODP_TEST_INFO_NULL,
>>   };
>>
>>
>>
> --
> Regards,
> Ivan Khoronzhuk
>
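For reference, the main-thread pattern the patch relies on looks roughly like this. This is only a simplified sketch against the public ODP scheduler API (it is not runnable on its own, since it omits ODP global/local init, pool and queue setup, and the thread-launch helpers used by the validation suite):

```c
/* Sketch: keep the main thread from consuming events that belong to
 * the worker threads, then re-enable scheduling for the next test.
 * Assumes an initialized ODP instance with scheduled queues created. */

/* Before launching workers: stop receiving events on this thread. */
odp_schedule_pause();

/* Drain any events already prefetched into this thread's local
 * scheduler cache, so they are not lost to the workers. */
odp_event_t ev;
while ((ev = odp_schedule(NULL, ODP_SCHED_NO_WAIT)) != ODP_EVENT_INVALID)
	odp_event_free(ev);

/* ... create and launch worker threads; wait for them to finish ... */

/* After the multi-threaded test: resume scheduling so later
 * single-threaded tests can receive events on this thread again. */
odp_schedule_resume();
```

This is essentially what the test suite's `exit_schedule_loop()` helper does before `parallel_execute()` fills the queues, and what the added `odp_schedule_resume()` calls undo afterwards.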
_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp
