On 18 May 2015 at 11:37, Radu-Andrei Bulie <[email protected]> wrote:

>  Hi,
>
> I have some observations regarding the odp_scheduler test functionality.
>
> Scheduler validation creates two kinds of tests:
>
> - single-threaded tests, which use the CUnit main thread for
> initialization;
>
> - multithreaded tests, which create a number of threads equal to the
> number of cores in the system.
>
> As I said in an older post (regarding the classification tests), there
> could be a problem on some platforms when the schedule function is
> called on a core that is used by Linux (that is, schedule cannot be
> called on just any core). This issue can occur in both kinds of tests.
>
> The tests should take the same approach as the main ODP applications
> (e.g. core 0 is reserved for Linux and the remaining cores for ODP
> threads); a sketch of such pinning follows below.
>
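
For illustration, reserving core 0 could look roughly like the sketch
below. It uses plain Linux pthread affinity rather than any particular
ODP helper, and worker_fn is a placeholder for the actual test loop, so
treat it as an outline only:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

static void *worker_fn(void *arg)  /* placeholder for the test loop */
{
    (void)arg;
    return NULL;
}

int main(void)
{
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);

    if (ncores < 2)
        return 0;  /* nothing to pin: need core 0 plus workers */

    pthread_t thr[ncores];
    int nworkers = 0;

    /* Skip core 0: leave it to Linux and any control work. */
    for (long c = 1; c < ncores; c++) {
        cpu_set_t set;
        pthread_attr_t attr;

        CPU_ZERO(&set);
        CPU_SET(c, &set);
        pthread_attr_init(&attr);
        /* Pin the worker before it starts running. */
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
        pthread_create(&thr[nworkers++], &attr, worker_fn, NULL);
        pthread_attr_destroy(&attr);
    }
    for (int i = 0; i < nworkers; i++)
        pthread_join(thr[i], NULL);
    return 0;
}

(The ODP examples build an odp_cpumask_t of worker cores for the same
purpose; the pthread calls above are just a portable-Linux way to show
the idea.)
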
> Another possible problem is that the tests use a while(1) loop and
> cycle until all the expected packets are received on the scheduled
> queues. That assumes the queues will be scheduled on every core where
> the odp_threads were created. I think there is no guarantee of this,
> and there is a possibility that some of the threads will remain in the
> while(1) loop (queues are never scheduled on the cores where those
> threads run, so the expected frame count is never reached) and thus
> the tests will hang.
>
As we have noticed in the ODP timer example, there are no guarantees on
how events are dispatched to different CPUs when using the scheduler, so
any attempt at determinism (e.g. stop after N events) is futile.
Applications that should terminate gracefully need some other mechanism
to notify threads to clean up and terminate.
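
One such mechanism is a shared exit flag polled alongside a non-blocking
schedule call. The sketch below is illustrative only: odp_schedule()
with ODP_SCHED_NO_WAIT is the standard scheduler call, but exit_flag,
worker() and request_exit() are placeholder names, not the actual test
code:

#include <stdatomic.h>
#include <odp.h>

static atomic_bool exit_flag;   /* set once by a controlling thread */

static int worker(void *arg)
{
    (void)arg;
    while (!atomic_load(&exit_flag)) {
        /* Poll instead of blocking so the flag is re-checked even
         * when no events are dispatched to this core. */
        odp_event_t ev = odp_schedule(NULL, ODP_SCHED_NO_WAIT);

        if (ev == ODP_EVENT_INVALID)
            continue;
        /* ... process/free the event, update per-thread counters ... */
    }
    return 0;
}

/* Called by the controller when the pass/fail condition is decided,
 * e.g. after a global packet count or a timeout. */
static void request_exit(void)
{
    atomic_store(&exit_flag, true);
}

A bounded wait via odp_schedule_wait_time() would also work in place of
pure polling; the essential point is that no thread blocks forever
waiting for events that may never be dispatched to its core.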


> Regards,
>
> Radu