It seems that even if I hardcode the addresses in there (to eliminate the
possibility that my registers are getting overwritten somewhere), I still
get the bus error.  Does enabling the OCP Master port work the same way as
on the BBB? It's supposedly being set here:
https://github.com/PocketNC/machinekit-hal/blob/c8b38386d87abc45baa33593681cbae46d996980/src/hal/drivers/hal_pru_generic/pru_generic.p#L174-L176
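
On the BBB that init step is just the usual clear of the STANDBY_INIT bit in
the PRU-ICSS SYSCFG register through the c4 constant pointer. A minimal pasm
sketch of that pattern (bit and offset values are from the common AM335x
examples, not copied from the lines linked above):

    // Enable the OCP master port so the PRU can reach addresses outside
    // the PRU-ICSS: clear STANDBY_INIT (bit 4) of SYSCFG, which sits at
    // offset 0x04 of the CFG block that constant entry c4 points at.
    LBCO    r0, C4, 4, 4    // read SYSCFG
    CLR     r0, r0, 4       // clear STANDBY_INIT
    SBCO    r0, C4, 4, 4    // write it back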

On Tue, Apr 28, 2020 at 11:47 AM John Allwine <[email protected]> wrote:

> It's the hal_pru_generic code. It definitely smells like a bus error. In
> fact, if I comment out the lines that write to the GPIO, it stops
> happening, so it seems like I have the wrong addresses in there, but I'm
> struggling to figure out how that could be.
>
> These lines are where the GPIO ports are written to in memory:
>
> https://github.com/PocketNC/machinekit-hal/blob/c8b38386d87abc45baa33593681cbae46d996980/src/hal/drivers/hal_pru_generic/pru_wait.p#L214-L217
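>
> The store itself is just an SBBO of a pin mask to each bank's clear
> register, i.e. the pattern sketched below. The pin mask and register
> choice are only for illustration, and the GPIO3 address is my unverified
> guess for the AM572x, not a value taken from the code:
>
>     // Illustration of the kind of write pru_wait.p does: a 4-byte store
>     // of a pin mask to one GPIO bank's clear register over the OCP
>     // master port. This store is what would fault if the address or the
>     // port setup is wrong.
>     MOV     r2, 0x00000100    // hypothetical pin mask (bit 8)
>     MOV     r3, 0x48057190    // assumed GPIO3 base 0x48057000 + 0x190; check against the AM572x TRM
>     SBBO    r2, r3, 0, 4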
>
> Theoretically, the addresses should be set to the clear-register
> (CLEARDATAOUT) addresses of GPIO3, GPIO5, GPIO6 and GPIO7:
>
> Addresses defined here:
>
> https://github.com/PocketNC/machinekit-hal/blob/c8b38386d87abc45baa33593681cbae46d996980/src/hal/support/pru/pru.h#L303-L307
>
> Loaded into registers here:
>
> https://github.com/PocketNC/machinekit-hal/blob/c8b38386d87abc45baa33593681cbae46d996980/src/hal/drivers/hal_pru_generic/pru_generic.p#L261-L264
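>
> So after init, those registers should each hold one 32-bit clear-register
> address, loaded more or less like the sketch below. The #define values
> are my assumed AM572x L4_PER1 addresses and the register numbers are just
> placeholders, so don't read either as what's actually in the code:
>
>     // Sketch of loading the four clear-register addresses into PRU
>     // registers for later SBBO writes (all values assumed, not verified).
>     #define GPIO3_CLR 0x48057190
>     #define GPIO5_CLR 0x4805B190
>     #define GPIO6_CLR 0x4805D190
>     #define GPIO7_CLR 0x48051190
>     MOV     r4, GPIO3_CLR    // MOV expands to two LDI ops for a 32-bit constant
>     MOV     r5, GPIO5_CLR
>     MOV     r6, GPIO6_CLR
>     MOV     r7, GPIO7_CLR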
>
> On Tue, Apr 28, 2020 at 10:50 AM Jason Kridner <[email protected]>
> wrote:
>
>> What is the code running on PRUSS2 PRU1?
>>
>> This line kinda spells out an illegal access by that PRU or of that PRU:
>> MASTER PRUSS2 PRU1 TARGET L4_PER1_P3 (Idle): Data Access in Supervisor
>> mode during Functional access
>>
>> Looks like the error is from here:
>> https://github.com/beagleboard/linux/blob/7a920684860a790099061b67961d0b5ffa033fdf/drivers/bus/omap_l3_noc.c#L135
>>
>> Looks like a bus exception to me.
>>
>> On Tue, Apr 28, 2020 at 11:46 AM <[email protected]> wrote:
>>
>>> I'm getting this stack trace in dmesg, but I'm unsure what it means or
>>> how to go about figuring it out. As far as I can tell, the code running on
>>> the PRU is working: I'm generating a 100 kHz signal on a direct output and
>>> am able to measure that signal successfully. The BeagleBone is locking up,
>>> though, and I believe this stack trace is being spammed so heavily that the
>>> logging is taking over the CPU and my SSH session gets locked out.
>>>
>>> I'm using this device tree overlay:
>>> https://github.com/PocketNC/BeagleBoard-DeviceTrees/blob/pocketnc-ai-test/src/arm/am5729-beagleboneai-pocketnc-pro.dts
>>>
>>> The code I'm running is implemented in PRU assembly that is assembled
>>> with pasm. pasm outputs a .bin file and I need a .elf file to run it
>>> with remoteproc, so I'm jumping through some hoops to do that conversion.
>>> The ELF file does seem to work, but I'm not sure whether I need to do more
>>> to specify which resources I need access to, or something like that. I can
>>> go into more detail if need be.
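>>>
>>> Roughly speaking, the signal generation is just the usual
>>> toggle-and-busy-wait pattern on r30; a minimal sketch of that pattern
>>> (not the exact code, and the counts assume the 200 MHz PRU core clock)
>>> looks like:
>>>
>>>     // Toggle a direct PRU output (r30 bit 0) at roughly 100 kHz:
>>>     // ~2000 PRU cycles per period at 200 MHz, ~1000 per half period.
>>> TOGGLE:
>>>     XOR     r30, r30, 1    // flip the output pin
>>>     MOV     r1, 498        // 2-cycle delay loop, ~996 cycles + overhead
>>> DELAY:
>>>     SUB     r1, r1, 1
>>>     QBNE    DELAY, r1, 0
>>>     QBA     TOGGLE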
>>>
>>> The stack trace is below. Any ideas about what is going on are
>>> appreciated!
>>>
>>> [  168.153783] ------------[ cut here ]------------
>>> [  168.153829] WARNING: CPU: 0 PID: 0 at drivers/bus/omap_l3_noc.c:147 l3_interrupt_handler+0x27c/0x39c
>>> [  168.153851] 44000000.ocp:L3 Custom Error: MASTER PRUSS2 PRU1 TARGET L4_PER1_P3 (Idle): Data Access in Supervisor mode during Functional access
>>> [  168.153865] Modules linked in: xt_conntrack ipt_MASQUERADE nf_nat_masquerade_ipv4 rpmsg_rpc rpmsg_proto bnep btsdio bluetooth ecdh_generic brcmfmac pvrsrvkm(O) brcmutil cfg80211 uio_pruss_shmem evdev joydev stmpe_adc omap_remoteproc virtio_rpmsg_bus rpmsg_core 8021q garp mrp stp llc iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat usb_f_acm nf_conntrack u_serial usb_f_ecm usb_f_mass_storage iptable_mangle iptable_filter usb_f_rndis u_ether libcomposite cmemk(O) uio_pdrv_genirq uio spidev pruss_soc_bus pru_rproc pruss pruss_intc ip_tables x_tables
>>> [  168.154474] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W  O    4.14.108-ti-r119 #1
>>> [  168.154490] Hardware name: Generic DRA74X (Flattened Device Tree)
>>> [  168.154538] [<c0113180>] (unwind_backtrace) from [<c010d690>] (show_stack+0x20/0x24)
>>> [  168.154575] [<c010d690>] (show_stack) from [<c0ce54f4>] (dump_stack+0x80/0x94)
>>> [  168.154609] [<c0ce54f4>] (dump_stack) from [<c013f5b8>] (__warn+0xf8/0x110)
>>> [  168.154636] [<c013f5b8>] (__warn) from [<c013f628>] (warn_slowpath_fmt+0x58/0x74)
>>> [  168.154667] [<c013f628>] (warn_slowpath_fmt) from [<c0741e10>] (l3_interrupt_handler+0x27c/0x39c)
>>> [  168.154703] [<c0741e10>] (l3_interrupt_handler) from [<c01abcbc>] (__handle_irq_event_percpu+0xbc/0x280)
>>> [  168.154734] [<c01abcbc>] (__handle_irq_event_percpu) from [<c01abebc>] (handle_irq_event_percpu+0x3c/0x8c)
>>> [  168.154761] [<c01abebc>] (handle_irq_event_percpu) from [<c01abf54>] (handle_irq_event+0x48/0x6c)
>>> [  168.154792] [<c01abf54>] (handle_irq_event) from [<c01aff78>] (handle_fasteoi_irq+0xc8/0x17c)
>>> [  168.154822] [<c01aff78>] (handle_fasteoi_irq) from [<c01aad7c>] (generic_handle_irq+0x34/0x44)
>>> [  168.154850] [<c01aad7c>] (generic_handle_irq) from [<c01ab390>] (__handle_domain_irq+0x8c/0xfc)
>>> [  168.154879] [<c01ab390>] (__handle_domain_irq) from [<c01015e0>] (gic_handle_irq+0x4c/0x88)
>>> [  168.154908] [<c01015e0>] (gic_handle_irq) from [<c0d02bcc>] (__irq_svc+0x6c/0xa8)
>>> [  168.154925] Exception stack(0xc1501ed8 to 0xc1501f20)
>>> [  168.154946] 1ec0:                                                       00000001 00000000
>>> [  168.154973] 1ee0: fe600000 00000000 c1500000 c1504e60 c1504dfc c14cbb78 c1501f48 00000000
>>> [  168.154997] 1f00: 00000000 c1501f34 c1501f14 c1501f28 c012fcb8 c0109768 600f0013 ffffffff
>>> [  168.155031] [<c0d02bcc>] (__irq_svc) from [<c0109768>] (arch_cpu_idle+0x30/0x4c)
>>> [  168.155061] [<c0109768>] (arch_cpu_idle) from [<c0d02044>] (default_idle_call+0x30/0x3c)
>>> [  168.155092] [<c0d02044>] (default_idle_call) from [<c018cc6c>] (do_idle+0x180/0x214)
>>> [  168.155124] [<c018cc6c>] (do_idle) from [<c018d00c>] (cpu_startup_entry+0x28/0x2c)
>>> [  168.155156] [<c018d00c>] (cpu_startup_entry) from [<c0cfb4b0>] (rest_init+0xdc/0xe0)
>>> [  168.155194] [<c0cfb4b0>] (rest_init) from [<c1400eb8>] (start_kernel+0x434/0x45c)
>>> [  168.155217] ---[ end trace d9047b952a20ba7f ]---
>>>
>>
>>
>> --
>> https://beagleboard.org/about - a 501c3 non-profit educating around open
>> hardware computing
>>
>
