Re: [yocto] [meta-selinux][PULL] refpolicy: update to 2.20190201 and git HEAD policies (2019-04-10 10:57:14 -0400)
Hi Yi,

On 19.04.11 (Thu 16:19) Yi Zhao wrote:
> Hi Joe,
>
> Thank you for working on the refpolicy upgrade.
> I had a quick test with your patch. Here are the results:
>
> Machine: qemux86-64
> Image: core-image-selinux
> Init manager: systemd
> Boot command: runqemu qemux86-64 kvm nographic bootparams="selinux=1 enforcing=X" qemuparams="-m 1024"
>
> 1. All refpolicy types of the git version can be built without problems.
>
> 2. With parameters selinux=1 & enforcing=0
> The qemu can boot up and log in for all refpolicy types.

Perfect, that's what I had when testing on my reference hardware, so I'm happy you were able to validate those results.

> 3. With parameters selinux=1 & enforcing=1
> Some services failed to start up when booting. But this issue also exists in the
> old refpolicy version (2.20170204).

Yeah, and given the scope of this change my goal was mainly parity with the old policy, but based on a version that's roughly two years newer. Once that's done I think we can reasonably work at enabling the additional services in some structured way.

> 4. refpolicy stable version (2.20190201)
> I got a do_fetch error with the refpolicy stable version.
> It seems the SRC_URI is not correct. It should be "https://github.com/SELinuxProject/refpolicy/releases/download/RELEASE_2_20190201/refpolicy-${PV}.tar.bz2"

Thanks, good catch, I don't know how that slipped through. Corrected on my end; I'll update it in a bit.

-J.

> Regards,
> Yi

--
-Joe MacDonald. :wq

___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto
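As a minimal sketch, the corrected recipe line for the stable release would look like the following (assuming upstream's RELEASE_2_20190201 tag naming, and that ${PV} expands to 2.20190201 in the recipe):

```
SRC_URI = "https://github.com/SELinuxProject/refpolicy/releases/download/RELEASE_2_20190201/refpolicy-${PV}.tar.bz2"
```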
Re: [linux-yocto] [linux-yocto-rt][PATCH 0/2] Backport two rt patches to fix build error for powerpc
On Tue, Apr 9, 2019 at 11:23 PM Hongzhi.Song wrote:
>
> There are two errors when compiling the powerpc rt kernel.
> 1.
> Error: operand out of range (0x00208690 is not between
> 0x and 0x)
> 2.
> error: implicit declaration of function 'printk_safe_flush_on_panic'
>
> These are for v5.0/standard/preempt-rt/base

I'll do a full refresh of the 5.x -rt support shortly, and it should include these changes.

Bruce

> John Ogness (1):
>   printk: An all-in-one commit to fix build failures
>
> Sebastian Andrzej Siewior (1):
>   powerpc: reshuffle TIF bits
>
>  arch/powerpc/include/asm/thread_info.h | 13 -
>  arch/powerpc/kernel/entry_32.S         | 12 +++-
>  arch/powerpc/kernel/entry_64.S         | 12 +++-
>  arch/powerpc/kernel/traps.c            |  1 -
>  arch/powerpc/kernel/watchdog.c         |  5 -
>  kernel/printk/printk.c                 | 16 ++--
>  lib/printk_ringbuffer.c                |  8 +++-
>  7 files changed, 43 insertions(+), 24 deletions(-)
>
> --
> 2.8.1

--
- Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

___
linux-yocto mailing list
linux-yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/linux-yocto
Re: [yocto] Dependencies on other file system types for custom file system type
On Fri, 2019-04-12 at 20:50 +0700, Eric Grunt wrote:
> > The dependency code is only triggered if the image type is in
> > IMAGE_FSTYPES. Did you add it there?
>
> Yes, in the distro conf file.
> I guess otherwise it wouldn't be built at all (since it isn't a
> dependency of another fs).

Notice how the function names for squashfs are:

    do_image_squashfs_xz

not

    do_image_squashfs-xz

This is because shell functions/variables can't have "-" in their name. You need to change to use IMAGE_CMD_squashfs_xz_ubi.

Cheers,
Richard
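The naming restriction above can be illustrated with a small standalone sketch (the function name here is hypothetical, not taken from the thread's class file): POSIX shell identifiers may only contain letters, digits, and underscores, which is why BitBake flattens a dashed image type name when it generates the shell function.

```shell
# Valid: underscores only, so the shell parser accepts the definition.
do_image_squashfs_xz() {
    echo "squashfs_xz ok"
}
do_image_squashfs_xz

# Invalid: a "-" in the name is a syntax error in POSIX sh, e.g.
#   do_image_squashfs-xz() { :; }
# which is why the dashed image type cannot map directly to a function
# or variable name.
```

Running the sketch prints "squashfs_xz ok"; uncommenting the dashed definition makes the script fail to parse.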
Re: [yocto] Dependencies on other file system types for custom file system type
> The dependency code is only triggered if the image type is in
> IMAGE_FSTYPES. Did you add it there?

Yes, in the distro conf file.
I guess otherwise it wouldn't be built at all (since it isn't a dependency of another fs).
Re: [yocto] Dependencies on other file system types for custom file system type
On Fri, 2019-04-12 at 20:19 +0700, Eric Grunt wrote:
> Dependencies on other file system types for custom file system type
>
> I'd like to add a custom fs type - a squashfs-xz on an ubi (instead
> of ubifs).
> For this purpose I created a new class, inherited image_types and
> added a dependency on squashfs-xz and also on squashfs-tools-native
> and mtd-utils-native:
>
> inherit image_types
>
> IMAGE_TYPEDEP_squashfs-xz-ubi = "squashfs-xz"
>
> do_image_squashfs-xz-ubi[depends] += "mtd-utils-native:do_populate_sysroot"
> do_image_squashfs-xz-ubi[depends] += "squashfs-tools-native:do_populate_sysroot"
>
> (full class file is attached to this mail)
>
> But the dependency handling is not working correctly in my solution:

The dependency code is only triggered if the image type is in IMAGE_FSTYPES. Did you add it there?

Cheers,
Richard
Re: [yocto] Dependencies on other file system types for custom file system type
better formatted class file:

inherit image_types

IMAGE_TYPEDEP_squashfs-xz-ubi = "squashfs-xz"

do_image_squashfs-xz-ubi[depends] += "mtd-utils-native:do_populate_sysroot"
do_image_squashfs-xz-ubi[depends] += "squashfs-tools-native:do_populate_sysroot"

IMAGE_CMD_squashfs-xz-ubi () {
    squashfsubi_mkfs "${MKUBIFS_ARGS}" "${UBINIZE_ARGS}"
}

squashfsubi_mkfs() {
    local mkubifs_args="$1"
    local ubinize_args="$2"

    CFG_NAME=ubinize-${IMAGE_NAME}-squashfs-xz-ubi.cfg

    # Print an error message for ubi and ubifs image creation.
    if [ -z "$mkubifs_args" ] || [ -z "$ubinize_args" ]; then
        bbfatal "MKUBIFS_ARGS and UBINIZE_ARGS have to be set, \
see http://www.linux-mtd.infradead.org/faq/ubifs.html for details"
    fi

    echo \[ubifs\] > ${CFG_NAME}
    echo mode=ubi >> ${CFG_NAME}
    echo image=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz \
        >> ${CFG_NAME}
    echo vol_id=0 >> ${CFG_NAME}
    echo vol_type=dynamic >> ${CFG_NAME}
    echo vol_name=${UBI_VOLNAME} >> ${CFG_NAME}
    echo vol_flags=autoresize >> ${CFG_NAME}

    # Normally we shouldn't need to create the squashfs image ourselves,
    # because we have a dependency declared (IMAGE_TYPEDEP).
    # But if this file is modified, the dependency is _not_ rebuilt,
    # so we have to do it ourselves.
    if [ ! -e ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz ]
    then
        bbwarn \
            "${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz does not exist. Creating."
        ${IMAGE_CMD_squashfs-xz}
    fi

    ubinize -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz-ubi \
        ${ubinize_args} ${CFG_NAME}

    # Clean up cfg file
    mv ${CFG_NAME} ${IMGDEPLOYDIR}/
}
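For reference, the echo lines in squashfsubi_mkfs above would produce a ubinize cfg roughly like the following (the image path and the rootfs volume name are hypothetical placeholder values, not taken from the thread):

```
[ubifs]
mode=ubi
image=/path/to/deploy/core-image-example.rootfs.squashfs-xz
vol_id=0
vol_type=dynamic
vol_name=rootfs
vol_flags=autoresize
```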
Re: [yocto] Dependencies on other file system types for custom file system type
class file:

inherit image_types

IMAGE_TYPEDEP_squashfs-xz-ubi = "squashfs-xz"

do_image_squashfs-xz-ubi[depends] += "mtd-utils-native:do_populate_sysroot"
do_image_squashfs-xz-ubi[depends] += "squashfs-tools-native:do_populate_sysroot"

IMAGE_CMD_squashfs-xz-ubi () {
    squashfsubi_mkfs "${MKUBIFS_ARGS}" "${UBINIZE_ARGS}"
}

squashfsubi_mkfs() {
    local mkubifs_args="$1"
    local ubinize_args="$2"

    CFG_NAME=ubinize-${IMAGE_NAME}-squashfs-xz-ubi.cfg

    # Print an error message for ubi and ubifs image creation.
    if [ -z "$mkubifs_args" ] || [ -z "$ubinize_args" ]; then
        bbfatal "MKUBIFS_ARGS and UBINIZE_ARGS have to be set, see http://www.linux-mtd.infradead.org/faq/ubifs.html for details"
    fi

    echo \[ubifs\] > ${CFG_NAME}
    echo mode=ubi >> ${CFG_NAME}
    echo image=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz >> ${CFG_NAME}
    echo vol_id=0 >> ${CFG_NAME}
    echo vol_type=dynamic >> ${CFG_NAME}
    echo vol_name=${UBI_VOLNAME} >> ${CFG_NAME}
    echo vol_flags=autoresize >> ${CFG_NAME}

    # Normally we shouldn't need to create the squashfs image ourselves,
    # because we have a dependency declared (IMAGE_TYPEDEP).
    # But if this file is modified, the dependency is _not_ rebuilt, so we have to do it ourselves.
    if [ ! -e ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz ]
    then
        bbwarn "${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz does not exist. Creating."
        ${IMAGE_CMD_squashfs-xz}
    fi

    ubinize -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz-ubi ${ubinize_args} ${CFG_NAME}

    # Clean up cfg file
    mv ${CFG_NAME} ${IMGDEPLOYDIR}/
}
[yocto] Dependencies on other file system types for custom file system type
I'd like to add a custom fs type - a squashfs-xz on an ubi (instead of ubifs). For this purpose I created a new class, inherited image_types and added a dependency on squashfs-xz and also on squashfs-tools-native and mtd-utils-native:

inherit image_types

IMAGE_TYPEDEP_squashfs-xz-ubi = "squashfs-xz"

do_image_squashfs-xz-ubi[depends] += "mtd-utils-native:do_populate_sysroot"
do_image_squashfs-xz-ubi[depends] += "squashfs-tools-native:do_populate_sysroot"

(full class file is attached to this mail)

But the dependency handling is not working correctly in my solution:

1. If I modify the class file inheriting image_types only (and change nothing else in the rootfs), the squashfs-xz is not created and thus not found. As a workaround, I have to call ${IMAGE_CMD_squashfs-xz} manually.

2. A symlink to the squashfs-xz image file in tmp/deploy/images/.../ is created no matter whether I add it to IMAGE_FSTYPES or not. (In comparison, if I remove ubifs, the symlink to the ubifs image is not created, but the ubifs image is created anyway.)

So do I need to change the way I handle dependencies on squashfs-xz?

Thank you,
Eric
[linux-yocto] L3 cache way-locking memory zone
Hi Community,

I'd like to ask whether you would see a benefit in adding an extra memory zone for L3 cache lock regions. This is useful on platforms where the L3 cache (LLC) can be configured for way-locking (e.g. as it can in the CCN-512). The benefit is the usual one arising from cache locking: faster memory access. We have such an implementation for arm64, kernel 4.9 and higher. It is in a somewhat dirty state, but if you are interested I could work towards cleaning it up and then share it.

Thanks,
Marek
Re: [yocto] QA cycle report for 2.6.2 RC3
On Wed, 2019-04-10 at 02:38 +, Jain, Sangeeta wrote:
>
> QA cycle report for 2.6.2 RC3:
>
> No high milestone defects.
> Test results are available at the following locations:
> · For results of all automated tests, refer to results at public AB [1].
> · For other test results, refer to attachment [2].
> · For the test report for test cases run by the Intel and WR teams, refer to attachment [3].
> · For the full test report, refer to attachment [4].
> · For ptest results, please refer to results at public AB [5].
> · For the ptest report, refer to attachment [6].
>
> 1 new defect was found in this cycle, on Beaglebone [7].
> QA hint: A similar issue for the systemtap failure (bug 13153) was found in
> 2.7 M2 RC1, which is now resolved for master.
> The number of existing issues observed in this release is 3: toaster [8],
> Build-appliance [9] and qemu-shutdown [10].
> For ptest, regression data is not available for this release. 3
> packages are facing timeout issues: lttng-tools [11], openssh [12]
> and strace [13].
> The test result report on public AB shows the following failures:
> buildgalculator.GalculatorTest.test_galculator
> QA Comment: Expected failure due to hw limitation
> python.PythonTest.test_python3
> QA Comment: Failure due to the "python3" test case not being backported to
> Thud. No bug filed as it is not a real Yocto failure.

Thanks for running QA on this. Firstly, I can comment on the bugs:

> [7] Bug 13273 - [2.6.2 RC3] Systemtap doesn't work on beaglebone
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=13273

We've root caused this on master and believe we know how to fix this; it's a kernel makefile configuration issue. We can likely document this as a known issue and fix it in 2.6.3.

> Previous bugs observed in this release:
>
> [8] Bug 13191 - [Test Case 1439] Build Failure after adding or
> removing packages with and without dependencies
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=13191
>
> [9] Bug 12991 - [Bug] [2.6 M4 RC1][Build-Appliance] Bitbake build-
> appliance-image getting failed during building image due to webkitgtk
> package.
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12991

This is a long-running issue we've been "ignoring"; it's likely resource issues on the machine/VM running the test.

> [10] Bug 13234 - [2.7 M3 RC1] qemumips & qemumips64: failed to
> shutdown
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=13234

This is an issue root caused to be from the 4.19 and 5.0 kernels. I thought 2.6.2 had a 4.18 kernel, so it shouldn't have this issue? The bug wasn't reopened?

> ptest Bugs:
>
> [11] Bug 13255 - [2.7 M3 rc1] lttng-tools ptest facing timeout issue
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=13255
>
> [12] Bug 13256 - [2.7 M3 rc1] openssh ptest facing timeout issue
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=13256
>
> [13] Bug 13274 - [2.6.2 RC3] strace ptest facing timeout issue
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=13274

ptest has been a bit tricky and we've been actively working these bugs on master. Once we have them resolved there, we can backport to older releases.

There is one other critical issue we have with 2.6.2, which is the revert of the boost upgrade. I'm very reluctant to release without that revert, as it caused problems for a lot of people. I'm wondering whether we should rebuild 2.6.2 one commit later and then release that, assuming it passes a rerun of the automated QA (but not the manual QA). Opinions on that?

I'd propose fixing the systemtap and ptest issues for 2.6.3.

Cheers,
Richard
Re: [yocto] [meta-yocto-bsp][PATCH v2] beaglebone-yocto: Update u-boot config to match u-boot 19.04
Hello to all,

I will ask a few questions here regarding U-Boot, since the U-Boot mailing list is too busy/clogged... Regarding U-Boot 2019.04, I have some questions, if I may.

I know the following for a fact: Alexander Graf (until 03.2019 a SUSE developer) said (in September 2018 in Erlangen, Bavaria, DE) that the U-Boot folks would move his UEFI patches into the master U-Boot git somewhere in the middle of January 2019. I am wondering, were these patches moved from Alexander Graf's U-Boot git? If yes, since which release (version number) does UEFI shell support reside in U-Boot?

Here is the pointer to Alex Graf's git I am talking about:
https://github.com/agraf/u-boot

I have a UEFI USB stick with an already pre-compiled, ready EFI shell 2.40, so I would like to program such a U-Boot (2019.04?!) onto my BBB and try to get to the EFI shell with the UEFI USB stick attached. A nice exercise, won't you all agree!? ;-)

Thank you,
Zoran Stojsavljevic

On Fri, Apr 12, 2019 at 12:09 AM Alistair Francis wrote:
>
> [YOCTO #13145]
>
> This was announced at 2019.01:
> https://www.mail-archive.com/u-boot@lists.denx.de/msg305424.html
>
> Basically, am335x_boneblack is just a special subset of the am335x_evm
> config, created and owned by the BeagleBoard.org community. It was not
> migrated to use CONFIG_BLK in time for the 2019.04 release.
>
> Signed-off-by: Alistair Francis
> Acked-by: Denys Dmytriyenko
> ---
>  meta-yocto-bsp/conf/machine/beaglebone-yocto.conf | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf
> b/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf
> index 70d3cfe..bc18ee8 100644
> --- a/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf
> +++ b/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf
> @@ -32,7 +32,7 @@ KERNEL_EXTRA_ARGS += "LOADADDR=${UBOOT_ENTRYPOINT}"
>
>  SPL_BINARY = "MLO"
>  UBOOT_SUFFIX = "img"
> -UBOOT_MACHINE = "am335x_boneblack_config"
> +UBOOT_MACHINE = "am335x_evm_defconfig"
>  UBOOT_ENTRYPOINT = "0x80008000"
>  UBOOT_LOADADDRESS = "0x80008000"
>
> --
> 2.21.0