On 04/05/2017 08:57 PM, Mark Rutland wrote:
> On Wed, Apr 05, 2017 at 08:38:52PM +0200, Ralf Ramsauer wrote:
>> On 04/05/2017 07:10 PM, Ralf Ramsauer wrote:
>>> On 04/05/2017 06:57 PM, Mark Rutland wrote:
>>>> It *might* be the case that on Orange Pi, the ldrex generates a PL1
>>>> exception that the guest doesn't handle, leaving it stuck in the
>>>> exception handlers.
>>> Let me verify that. I already registered handlers on the TK1 but none of
>>> them trapped. Didn't try that on the Orange Pi.
>> Here we go!
>>
>> On the Orange Pi, I get a DABT, and then execution continues. The
>> previously installed default inmate handler was just spinning, which is
>> why we only saw "Foo!" on the Orange Pi before. Now we get:
>>
>> Foo!
>> DABT!
>> Bar!
>>
>> Just to confirm my understanding: we receive a DABT because we're not
>> allowed to use ldrex with the MMU off, i.e. on memory that is not
>> mapped write-back cacheable?
> 
> I believe that is the case, yes.
Ok, the conclusion is that I must not use strex/ldrex-based spinlocks in
my context.
> 
> To be able to use exclusives, you'll need active page tables mapping
> memory as write-back cacheable.
> 
>> Well, anyway, on the TK1 there's no DABT :-)
>>>>
>>>> I do not have a good explanation for what's going on with the TK1. If
>>>> the ldrex generates some unhandled PL2 exception, that could explain the
>>> I think I could instrument the hypervisor and check that as well...
>> ... so I tried to instrument the hypervisor code to see if the DABT
>> arrives there. But the HV actually already handles DABTs: it simply
>> panics the cell if one arrives.
>>
>> So it seems the DABT arrives nowhere?
> 
> Given it's UNPREDICTABLE, we're not guaranteed a DABT, and the core
> might do a number of different things. It's allowed to behave as if any
> instructions had been (validly) executed at PL1.
> 
> Maybe the core generates a different exception, maybe it replays the
> instruction, or maybe something else entirely. Regardless, forward
> progress of PL2 shouldn't be affected.
> 
>> Besides that, I added a printk for every call of arch_handle_exit in
>> the HV. On the TK1, I can see (of course) tons of exits, Traps and vIRQs.
> 
> What are the last traps you see when running the guest?
I don't have the hardware within physical reach at the moment, but
nothing special: nothing other than Traps and vIRQs.
> 
> Do you see any interrupts or traps being taken from the guest CPU(s)
> even after the guest has hung?
> 
>> When I try to destroy the cell, I get some more Trap and IRQ exits, and
>> then it suddenly stops reporting.
> 
> Is there anything consistent w.r.t. what you see right before the cell
> destruction hangs?
Let me filter that tomorrow.

But at least I now have a clue what's going on. It's not the first time
that the Tegra architecture behaves somewhat weirdly...

Thank you very much!
  Ralf
> 
> Thanks,
> Mark.
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Jailhouse" group.