I just tried to run this in a Docker container to see if Valgrind's thread-locking check tools could help, but I never got that far: I consistently hit failures, and I tried two separate sandboxes.
run1

odp_queue.c:328:odp_queue_destroy():queue "sched_00_47" not empty
odp_schedule.c:271:schedule_term_global():Queue not empty
odp_schedule.c:294:schedule_term_global():Pool destroy fail.
odp_init.c:188:_odp_term_global():ODP schedule term failed.
odp_queue.c:170:odp_queue_term_global():Not destroyed queue: sched_00_47
odp_init.c:195:_odp_term_global():ODP queue term failed.
odp_pool.c:149:odp_pool_term_global():Not destroyed pool: odp_sched_pool
odp_pool.c:149:odp_pool_term_global():Not destroyed pool: msg_pool
odp_init.c:202:_odp_term_global():ODP buffer pool term failed.

run2

odp_queue.c:328:odp_queue_destroy():queue "sched_00_03" not empty
odp_queue.c:328:odp_queue_destroy():queue "sched_00_15" not empty
odp_queue.c:328:odp_queue_destroy():queue "sched_00_47" not empty
odp_queue.c:328:odp_queue_destroy():queue "sched_00_55" not empty
odp_schedule.c:271:schedule_term_global():Queue not empty
odp_schedule.c:271:schedule_term_global():Queue not empty
odp_schedule.c:271:schedule_term_global():Queue not empty
odp_schedule.c:271:schedule_term_global():Queue not empty
odp_schedule.c:294:schedule_term_global():Pool destroy fail.
odp_init.c:188:_odp_term_global():ODP schedule term failed.
odp_queue.c:170:odp_queue_term_global():Not destroyed queue: sched_00_03
odp_queue.c:170:odp_queue_term_global():Not destroyed queue: sched_00_15
odp_queue.c:170:odp_queue_term_global():Not destroyed queue: sched_00_47
odp_queue.c:170:odp_queue_term_global():Not destroyed queue: sched_00_55
odp_init.c:195:_odp_term_global():ODP queue term failed.
odp_pool.c:149:odp_pool_term_global():Not destroyed pool: odp_sched_pool
odp_pool.c:149:odp_pool_term_global():Not destroyed pool: msg_pool
odp_init.c:202:_odp_term_global():ODP buffer pool term failed.
I then did a git clean -xdf on the host and still got:

odp_queue.c:328:odp_queue_destroy():queue "sched_00_03" not empty
odp_queue.c:328:odp_queue_destroy():queue "sched_00_43" not empty
odp_schedule.c:271:schedule_term_global():Queue not empty
odp_schedule.c:271:schedule_term_global():Queue not empty
odp_schedule.c:294:schedule_term_global():Pool destroy fail.
odp_init.c:188:_odp_term_global():ODP schedule term failed.
odp_queue.c:170:odp_queue_term_global():Not destroyed queue: sched_00_03
odp_queue.c:170:odp_queue_term_global():Not destroyed queue: sched_00_43
odp_init.c:195:_odp_term_global():ODP queue term failed.
odp_pool.c:149:odp_pool_term_global():Not destroyed pool: odp_sched_pool
odp_pool.c:149:odp_pool_term_global():Not destroyed pool: msg_pool
odp_init.c:202:_odp_term_global():ODP buffer pool term failed.

*On monarch_lts it is fine for me on the same host:*

  [3] sched_multi_hi enq+deq 196 CPU cycles
  [4] sched_multi_hi enq+deq 201 CPU cycles
Thread 4 exits
Thread 5 exits
Thread 1 exits
Thread 3 exits
Thread 6 exits
Thread 2 exits
ODP example complete

Anders confirmed he saw the same thing, and it appears that make check is not informed that the test failed.

On 12 August 2016 at 07:12, Maxim Uvarov <[email protected]> wrote:
> Looking into the issue with odp_scheduling.c
>
> The problem can be described with the following code.
>
> Each CPU does this:
>
> 1. The code allocates some events and places them on a queue.
> 2. odp_schedule_pause();
> 3. odp_schedule_multi() puts them back on the queue.
> 4. odp_schedule_resume();
> 5. odp_barrier_wait(&globals->barrier);
> 6. clear_sched_queues();
>
> static void clear_sched_queues(void)
> {
>         odp_event_t ev;
>         int cnt = 0;
>
>         while (1) {
>                 ev = odp_schedule(NULL, ODP_SCHED_NO_WAIT);
>                 if (ev == ODP_EVENT_INVALID)
>                         break;
>                 odp_event_free(ev);
>                 cnt++;
>         }
>         printf("clear %d\n", cnt);
> }
>
> Now I see that some threads inside clear_sched_queues() clear more or
> fewer events.
> But all of them exit because an invalid event was returned,
> i.e. there are still some events in the scheduler.
>
> If I call clear_sched_queues() 2 or 3 times, the rest of the events are
> freed. It looks like we have a race between threads somewhere...
>
> Maxim.

--
Mike Holmes
Technical Manager - Linaro Networking Group
Linaro.org <http://www.linaro.org/> │ Open source software for ARM SoCs
"Work should be fun and collaborative, the rest follows"
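To make the workaround Maxim describes ("call clear_sched_queues() 2 or 3 times") explicit, here is a hedged sketch of a drain loop that only stops after several consecutive empty polls, so events that reappear late in the scheduler are still freed. The EMPTY_RETRIES value is an arbitrary assumption of mine, not something from the original test, and this is illustrative only, not a proposed patch:

```c
/* Sketch only: a clear_sched_queues() variant that tolerates events
 * reappearing late in the scheduler. EMPTY_RETRIES is an assumed
 * value, not taken from odp_scheduling.c. */
#define EMPTY_RETRIES 3

static void clear_sched_queues(void)
{
	odp_event_t ev;
	int cnt = 0;
	int empty = 0;

	while (empty < EMPTY_RETRIES) {
		ev = odp_schedule(NULL, ODP_SCHED_NO_WAIT);
		if (ev == ODP_EVENT_INVALID) {
			/* Maybe done, maybe an event is still in flight. */
			empty++;
			continue;
		}
		empty = 0;
		odp_event_free(ev);
		cnt++;
	}
	printf("clear %d\n", cnt);
}
```

Note that this only papers over the symptom: if the scheduler can still hand out events after all threads have left the drain loop, odp_queue_destroy() can still see a non-empty queue, so the underlying pause/resume race would still need to be found.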
