Those systemd-udevd entries appear as soon as the PRU is enabled with the device tree overlay, totally independent of my program that later utilizes the PRU.
Erik

On Tuesday, November 24, 2015 at 2:06:36 PM UTC-8, William Hermans wrote:

You're not firing an interrupt every program loop, are you?

On Tue, Nov 24, 2015 at 2:54 PM, William Hermans <[email protected]> wrote:

Just tell me in high-level terms what your program loop does.

On Tue, Nov 24, 2015 at 2:52 PM, William Hermans <[email protected]> wrote:

> William, have you seen this? When I enable the PRU with a device tree
> overlay, I get this if I type top: (something to do with the interrupts
> taking up tons of CPU)

No I haven't, since I have not been looking. Perhaps the interrupts can be
disabled? I'll have to look into it, but I do not really have any code /
binaries built to use both PRUs. What are you doing exactly?

On Tue, Nov 24, 2015 at 2:23 PM, Erik Stauber <[email protected]> wrote:

William,

Have you seen this? When I enable the PRU with a device tree overlay, I get
this if I type top: (something to do with the interrupts taking up tons of
CPU)

  PID USER  PR  NI   VIRT   RES   SHR S %CPU %MEM   TIME+ COMMAND
  678 root  20   0  72872 21368  6372 S 11.2  4.2 0:03.56 node
  687 root  20   0  11324  2252  1552 R  6.2  0.4 0:00.43 systemd-udevd
  691 root  20   0  11324  1644   948 R  6.2  0.3 0:00.42 systemd-udevd
  692 root  20   0  11324  1644   948 R  6.2  0.3 0:00.41 systemd-udevd
  685 root  20   0  11324  2312  1612 R  5.9  0.5 0:00.43 systemd-udevd
  686 root  20   0  11324  2252  1552 R  5.9  0.4 0:00.43 systemd-udevd
  689 root  20   0  11324  2188  1488 R  5.9  0.4 0:00.43 systemd-udevd
  690 root  20   0  11324  2188  1488 R  5.9  0.4 0:00.42 systemd-udevd
  693 root  20   0  11324  1644   948 R  5.9  0.3 0:00.41 systemd-udevd
  680 root  20   0   8076  3524  3040 S  3.6  0.7 0:00.23 laserlux
    3 root  20   0      0     0     0 R  2.6  0.0 0:00.37 ksoftirqd/0
  103 root  20   0      0     0     0 S  0.3  0.0 0:00.04 usb-storage
  696 root  20   0   2980  1644  1288 R  0.3  0.3 0:00.06 top
    1 root  20   0  21836  3204  2120 S  0.0  0.6 0:06.05 systemd
    2 root  20   0      0     0     0 S  0.0  0.0 0:00.00 kthreadd
    4 root  20   0      0     0     0 S  0.0  0.0 0:00.02 kworker/0:0
    5 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 kworker/0:0H
    6 root  20   0      0     0     0 S  0.0  0.0 0:00.33 kworker/u2:0
    7 root  rt   0      0     0     0 S  0.0  0.0 0:00.00 watchdog/0
    8 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 khelper
    9 root  20   0      0     0     0 S  0.0  0.0 0:00.00 kdevtmpfs
   10 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 netns
   11 root  20   0      0     0     0 S  0.0  0.0 0:00.00 kswork
   12 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 perf
   13 root  20   0      0     0     0 S  0.0  0.0 0:00.04 kworker/0:1
   14 root  20   0      0     0     0 S  0.0  0.0 0:00.00 khungtaskd
   15 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 writeback
   16 root  25   5      0     0     0 S  0.0  0.0 0:00.00 ksmd
   17 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 crypto
   18 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 kintegrityd
   19 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 bioset
   20 root   0 -20      0     0     0 S  0.0  0.0 0:00.00 kblockd

Then after a couple minutes, those processes disappear and this appears in
/var/log/syslog:

Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [679] /devices/platform/ocp/4a300000.pruss/uio/uio0 timeout; kill it
Nov 24 13:15:15 beaglebone rsyslogd-2007: action 'action 17' suspended, next retry is Tue Nov 24 13:15:45 2015 [try http://www.rsyslog.com/e/2007 ]
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2030 '/devices/platform/ocp/4a300000.pruss/uio/uio0' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [680] /devices/platform/ocp/4a300000.pruss/uio/uio1 timeout; kill it
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2036 '/devices/platform/ocp/4a300000.pruss/uio/uio1' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [681] /devices/platform/ocp/4a300000.pruss/uio/uio2 timeout; kill it
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2037 '/devices/platform/ocp/4a300000.pruss/uio/uio2' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [683] /devices/platform/ocp/4a300000.pruss/uio/uio3 timeout; kill it
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2038 '/devices/platform/ocp/4a300000.pruss/uio/uio3' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [684] /devices/platform/ocp/4a300000.pruss/uio/uio4 timeout; kill it
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2039 '/devices/platform/ocp/4a300000.pruss/uio/uio4' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [685] /devices/platform/ocp/4a300000.pruss/uio/uio5 timeout; kill it
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2040 '/devices/platform/ocp/4a300000.pruss/uio/uio5' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [686] /devices/platform/ocp/4a300000.pruss/uio/uio6 timeout; kill it
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2041 '/devices/platform/ocp/4a300000.pruss/uio/uio6' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [687] /devices/platform/ocp/4a300000.pruss/uio/uio7 timeout; kill it
Nov 24 13:15:15 beaglebone systemd-udevd[172]: seq 2042 '/devices/platform/ocp/4a300000.pruss/uio/uio7' killed
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [681] terminated by signal 9 (Killed)
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [683] terminated by signal 9 (Killed)
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [679] terminated by signal 9 (Killed)
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [680] terminated by signal 9 (Killed)
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [684] terminated by signal 9 (Killed)
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [685] terminated by signal 9 (Killed)
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [686] terminated by signal 9 (Killed)
Nov 24 13:15:15 beaglebone systemd-udevd[172]: worker [687] terminated by signal 9 (Killed)
Nov 24 13:16:10 beaglebone systemd-timesyncd[209]: interval/delta/delay/jitter/drift 256s/-0.005s/0.075s/0.261s/-34ppm
Nov 24 13:16:10 beaglebone rsyslogd-2007: action 'action 17' suspended, next retry is Tue Nov 24 13:16:40 2015 [try http://www.rsyslog.com/e/2007 ]
Nov 24 13:16:56 beaglebone rsyslogd-2007: action 'action 17' suspended, next retry is Tue Nov 24 13:17:26 2015 [try http://www.rsyslog.com/e/2007 ]
Nov 24 13:17:01 beaglebone CRON[1650]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

What's odd is my program continues to operate normally, and is using both
PRU0 and PRU1 interrupts.

Erik

On Monday, November 23, 2015 at 12:03:04 PM UTC-8, William Hermans wrote:

Micka,

TI 4.x kernels will not work with "traditional" PRU stuff. TI kernels have
remoteproc enabled, which takes over the PRU in a different way.

On Mon, Nov 23, 2015 at 9:41 AM, Micka <[email protected]> wrote:

Hi, did you manage to get this kernel working with the PRU? Because I got
this:

https://www.mail-archive.com/[email protected]/msg32826.html

Micka

On Mon, Nov 23, 2015 at 5:38 PM, Erik Stauber <[email protected]> wrote:

William,

I installed the 4.1.13-bone-rt-r16 kernel, and the /dev/uioX entries showed
up. I guess I'll try using this one. Thanks for the help!
Erik

On Saturday, November 21, 2015 at 9:44:14 PM UTC-8, William Hermans wrote:

By the way, I had to make my own device tree overlay for the PRU. It's
pretty simple...

/dts-v1/;
/plugin/;

/ {
    compatible = "ti,beaglebone", "ti,beaglebone-black";

    /* identification */
    part-number = "pru_enable";
    version = "00A0";

    fragment@0 {
        target = <&pruss>;
        __overlay__ {
            status = "okay";
        };
    };
};

$ dtc -O dtb -o pru_enable-00A0.dtbo -b 0 -@ pru_enable-00A0.dts
$ sudo cp pru_enable-00A0.dtbo /lib/firmware/
$ sudo sh -c "echo 'pru_enable' > /sys/devices/platform/bone_capemgr/slots"
$ dmesg | grep pru_enable
[  886.921624] bone_capemgr bone_capemgr: part_number 'pru_enable', version 'N/A'
[  886.941686] bone_capemgr bone_capemgr: slot #6: 'Override Board Name,00A0,Override Manuf,pru_enable'
[  886.981959] bone_capemgr bone_capemgr: slot #6: dtbo 'pru_enable-00A0.dtbo' loaded; overlay id #0

On Sat, Nov 21, 2015 at 10:36 PM, William Hermans <[email protected]> wrote:

bone-rt has real-time enhancements. I do not know all the differences, but
the kernel latency seems to be reduced.

Anyway, you do not see what?

On Sat, Nov 21, 2015 at 7:08 PM, Erik Stauber <[email protected]> wrote:

Hmmm, I don't see that on 4.1.13-bone16. Maybe I need to use
4.1.13-bone-rt-r16? What is the difference between the bone and bone-rt?
On Friday, November 20, 2015 at 2:26:38 PM UTC-8, William Hermans wrote:

The kernel I'm using, by the way...

$ uname -a
Linux beaglebone 4.1.9-bone-rt-r16 #1 Thu Oct 1 06:19:41 UTC 2015 armv7l GNU/Linux

$ ls /dev/ | grep uio
uio
uio0
uio1
uio2
uio3
uio4
uio5
uio6
uio7

$ ./lsuio
uio7: name=pruss_evt7, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144
uio6: name=pruss_evt6, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144
uio5: name=pruss_evt5, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144
uio4: name=pruss_evt4, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144
uio3: name=pruss_evt3, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144
uio2: name=pruss_evt2, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144
uio1: name=pruss_evt1, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144
uio0: name=pruss_evt0, version=1.0, events=0
        map[0]: addr=0x4A300000, size=524288
        map[1]: addr=0x9E880000, size=262144

On Fri, Nov 20, 2015 at 2:59 PM, William Hermans <[email protected]> wrote:

The TI kernels have remoteproc enabled in the kernel, which will interfere
with uio_pruss. You need to switch to a bone kernel.

On Fri, Nov 20, 2015 at 9:59 AM, Erik Stauber <[email protected]> wrote:

I'm trying to migrate to 4.1 from 3.8, and it seems as if the PRU is up and
running on the latest 4.1 kernel. However, one difference is that I'm not
getting the eight uioX (X = 0-7) entries in the /dev directory, and
therefore the prussdrv library errors out when it can't find those files.

The prussdrv is looking for this:

sprintf(name, "/dev/uio%d", host_interrupt);

The dmesg output on 4.1.13-ti-r33 reports that it is skipping intr mapping:

[   20.830764] pru-rproc 4a334000.pru0: version 0 event_chnl_map_size 1 event_chnl_map 0000039c
[   20.830799] pru-rproc 4a334000.pru0: sysevt-to-ch[60] -> 0
[   20.830812] pru-rproc 4a334000.pru0: chnl-to-host[0] -> 0
[   20.830823] pru-rproc 4a334000.pru0: skip intr mapping for chnl 1
[   20.830833] pru-rproc 4a334000.pru0: skip intr mapping for chnl 2
[   20.830844] pru-rproc 4a334000.pru0: skip intr mapping for chnl 3
[   20.830854] pru-rproc 4a334000.pru0: skip intr mapping for chnl 4
[   20.830864] pru-rproc 4a334000.pru0: skip intr mapping for chnl 5
[   20.830875] pru-rproc 4a334000.pru0: skip intr mapping for chnl 6
[   20.830885] pru-rproc 4a334000.pru0: skip intr mapping for chnl 7
[   20.830896] pru-rproc 4a334000.pru0: skip intr mapping for chnl 8
[   20.830906] pru-rproc 4a334000.pru0: skip intr mapping for chnl 9

Does anyone know how to not skip that? Or a way for me to map them manually?

Thanks,
Erik
--
For more options, visit http://beagleboard.org/discuss
---
You received this message because you are subscribed to the Google Groups "BeagleBoard" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/d/optout.
