Hi again,

> 3. run/smp test hangs during threads destruction
>
> Like I wrote earlier, the run/smp test is almost working on multiple
> cores. It hangs somewhere in [5] during the destruction of a thread
> running on a non-zero core (destroying the first one does not cause
> problems).
>
> I don't have any concrete questions here. The only ones are:
>
>  a. Does this test work properly on arndale?
>
>  b. Do you have any thoughts on what could be going wrong? Maybe I
>     haven't implemented something important for SMP yet? Are the timer
>     and IPI the only pieces needed?

The situation with this test has changed after I rebased my branch onto
the current staging. It no longer stops at thread destruction in the
Affinity test like it did before, so something was probably fixed since
the 18.11 release.

Now it hangs a little further along:

...
[init -> test-smp] Affinity: Round 09:  A  A  A  A
[init -> test-smp] Affinity:     CPU: 00 01 02 03 
[init -> test-smp] Affinity: Round 10:  A  A  A  A 
[init -> test-smp] Affinity: --- test finished ---
[init -> test-smp] TLB: --- test started ---
[init -> test-smp] TLB: thread started on CPU 1
[init -> test-smp] TLB: thread started on CPU 2
[init -> test-smp] TLB: thread started on CPU 3
[init -> test-smp] TLB: all threads are up and running...
[init -> test-smp] TLB: ram dataspace destroyed, all will fault...
no RM attachment (READ pf_addr=0xc00c pf_ip=0x1000d2c from pager_object: pd='init -> test-smp' thread='tlb_thread')
Warning: page fault, pager_object: pd='init -> test-smp' thread='tlb_thread' ip=0x1000d2c fault-addr=0xc00c type=no-page
Warning: core -> pager_ep: cannot submit unknown signal context
[init -> test-sm
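
For reference, the fault shows up exactly at the step where the TLB test
revokes the backing store underneath the running threads. Here is a
minimal sketch of that sequence as I understand it (illustrative only,
not the actual test-smp sources; the per-CPU thread setup is omitted,
the variable names are my own, and the exact API calls may differ from
what the test uses):

/* sketch of the scenario that produces the reported read fault */
#include <base/component.h>
#include <base/log.h>

void Component::construct(Genode::Env &env)
{
    using namespace Genode;

    /* allocate a RAM dataspace and attach it to the local address space */
    Ram_dataspace_capability ds = env.ram().alloc(4096);
    char *ptr = env.rm().attach(ds);

    /*
     * In the real test, one thread per CPU is started here, each
     * pinned to its core and spinning while reading through the
     * attached dataspace.
     */

    /* revoke the backing store: every reader must now page-fault */
    env.ram().free(ds);

    /*
     * This corresponds to the "ram dataspace destroyed, all will
     * fault..." line in the log above, after which core reports the
     * unresolvable page faults of the spinning readers.
     */
    (void)ptr; /* only the (omitted) threads dereference the mapping */

    log("sketch done");
}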

Can you tell whether this reported fault is something that currently
happens on the tested configurations on staging, or whether it is
specific to my rpi work?

I can investigate it, but if it is something generic it will be much
harder for me than for you, and you will probably get it working more
quickly.

Tomasz Gajewski

