Re: [yocto] QA cycle report for 2.6 M4 RC1

2018-11-10 Thread akuster808

On 11/10/18 8:25 AM, richard.pur...@linuxfoundation.org wrote:
> On Fri, 2018-11-09 at 09:24 +, Jain, Sangeeta wrote:
>> This is the full report for 2.6 M4 RC1: 
>> https://wiki.yoctoproject.org/wiki/WW44_-_2018-10-30_-_Full_Test_Cycle_2.6_M4_RC1
> Thanks Sangeeta and team!
>
> Now that we have the QA report for YP 2.6 M4 rc1 (Final 2.6), we need to
> make a release go/no-go decision. To do this we have the following:
>  
> QA Report: 
> https://wiki.yoctoproject.org/wiki/WW44_-_2018-10-30_-_Full_Test_Cycle_2.6_M4_RC1
> Release Criteria: 
> https://wiki.yoctoproject.org/wiki/Yocto_Project_v2.6_Status#Milestone_4.2FFinal_-_Target_Oct._26.2C_2018
>
> We'd be happy to take representations from members and the community to
> help reach that decision.

Regarding:

*Bug 12991* - [2.6 M4 RC1][Build-Appliance] Bitbake build-appliance-image getting failed during building image due to webkitgtk package

Does this mean the Build Appliance is non-functional? It was broken at the
time of the Sumo release as well. Should it be dropped as a release criterion?

- Armin

> My personal view is that whilst there are a number of issues present in
> rc1, we should release it, collect up fixes on the thud branch (already
> happening) and plan on a 2.6.1 as soon as it looks like we have enough
> critical mass behind those as opposed to an rc2 and further delays to
> the release.
>
> Cheers,
>
> Richard
>
>
>
>
>


Re: [yocto] QA cycle report for 2.6 M4 RC1

2018-11-10 Thread Randy MacLeod

On 11/10/18 11:25 AM, richard.pur...@linuxfoundation.org wrote:

> On Fri, 2018-11-09 at 09:24 +, Jain, Sangeeta wrote:
>> This is the full report for 2.6 M4 RC1:
>> https://wiki.yoctoproject.org/wiki/WW44_-_2018-10-30_-_Full_Test_Cycle_2.6_M4_RC1
>
> Thanks Sangeeta and team!
>
> Now that we have the QA report for YP 2.6 M4 rc1 (Final 2.6), we need to
> make a release go/no-go decision. To do this we have the following:
>
> QA Report: https://wiki.yoctoproject.org/wiki/WW44_-_2018-10-30_-_Full_Test_Cycle_2.6_M4_RC1
> Release Criteria:
> https://wiki.yoctoproject.org/wiki/Yocto_Project_v2.6_Status#Milestone_4.2FFinal_-_Target_Oct._26.2C_2018
>
> We'd be happy to take representations from members and the community to
> help reach that decision.
>
> My personal view is that whilst there are a number of issues present in
> rc1, we should release it, collect up fixes on the thud branch (already
> happening) and plan on a 2.6.1 as soon as it looks like we have enough
> critical mass behind those as opposed to an rc2 and further delays to
> the release.



Although a few release criteria are not yet in the 'Done' state, I approve
of releasing YP 2.6 M4 on the condition that the Docs and Web site criteria
are taken care of before release.

I have reviewed the open bugs. Several are resolved or will be soon, and
the ones that remain appear to be either limited in impact, such as
12974/systemtap, or likely due to builder problems, such as
12991/webkitgtk on the build appliance.

Build times have crept up somewhat for the rootfs and eSDK tests, but not
so dramatically that I would suggest blocking GA. We haven't had anyone
investigate the root cause yet, AFAIK, but that can happen post-release
and be noted in the release notes.

The package update status is unknown because the tracker is down, but in
M3 we were at 81% done, so we're still in a good, albeit unquantified,
state.

What are the plans for the Documentation checks and the Wiki/Web site
update? Those need to be 'Done', but I expect they will be taken care of
in the coming days.

../Randy




> Cheers,
>
> Richard








--
# Randy MacLeod
# Wind River Linux
--


Re: [yocto] QA cycle report for 2.6 M4 RC1

2018-11-10 Thread richard.purdie
On Fri, 2018-11-09 at 09:24 +, Jain, Sangeeta wrote:
> 
> This is the full report for 2.6 M4 RC1: 
> https://wiki.yoctoproject.org/wiki/WW44_-_2018-10-30_-_Full_Test_Cycle_2.6_M4_RC1

Thanks Sangeeta and team!

Now that we have the QA report for YP 2.6 M4 rc1 (Final 2.6), we need to
make a release go/no-go decision. To do this we have the following:
 
QA Report: 
https://wiki.yoctoproject.org/wiki/WW44_-_2018-10-30_-_Full_Test_Cycle_2.6_M4_RC1
Release Criteria: 
https://wiki.yoctoproject.org/wiki/Yocto_Project_v2.6_Status#Milestone_4.2FFinal_-_Target_Oct._26.2C_2018

We'd be happy to take representations from members and the community to
help reach that decision.

My personal view is that whilst there are a number of issues present in
rc1, we should release it, collect up fixes on the thud branch (already
happening) and plan on a 2.6.1 as soon as it looks like we have enough
critical mass behind those as opposed to an rc2 and further delays to
the release.

Cheers,

Richard







Re: [yocto] [yocto-autobuilder-helper][PATCH] config.json: Enable dedicated sstate during build and remove after

2018-11-10 Thread Richard Purdie
On Sat, 2018-11-10 at 04:25 -0800, Michael Halstead wrote:
> Without any sstate available oe-selftest cannot run in parallel. Set
> sstate to a
> location used only by these builds and remove it as the last step.
> 
> Signed-off-by: Michael Halstead 
> ---
>  config.json | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/config.json b/config.json
> index 8a694d7..d898bf3 100644
> --- a/config.json
> +++ b/config.json
> @@ -147,7 +147,7 @@
>  },
> "nightly-bringup" : {
>  "TEMPLATE" : "nightly-arch",
> -"SSTATEDIR" : [],
> +"SSTATEDIR" : ["${BUILDDIR}/../sstate"],
>  "MACHINE" : "qemuarm64",
>  "step2" : {
>  "MACHINE" : "qemux86-64"
> @@ -168,6 +168,9 @@
>  ],
>  "EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1
> oe-selftest --skip-tests distrodata.Distrodata.test_checkpkg -j 15"],
>  "ADDLAYER" : ["${BUILDDIR}/../meta-selftest"]
> +},
> +"step7" : {
> +   "EXTRACMDS" : ["rm -rf ${BUILDDIR}/../sstate"]
>  }
>  },
>  "nightly-mips" : {

Unfortunately this won't work as BUILDDIR is a shell variable in this
case. I've tweaked the scripts/config a little differently to try and
speed up the bringup tests and fired off another couple of test builds.
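
(A rough illustration of the distinction, assuming nothing about the
helper's internals beyond the usual oe-init-build-env setup: ${BUILDDIR}
only has a value inside a shell that has sourced the environment script,
so it expands in shell command strings but stays a literal string
anywhere else.)

$ . ./oe-init-build-env build      # exports BUILDDIR into this shell
$ echo "${BUILDDIR}/../sstate"     # expanded by the shell at run time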

Cheers,

Richard



[yocto] [yocto-autobuilder-helper][PATCH] config.json: Enable dedicated sstate during build and remove after

2018-11-10 Thread Michael Halstead
Without any sstate available oe-selftest cannot run in parallel. Set sstate to a
location used only by these builds and remove it as the last step.

Signed-off-by: Michael Halstead 
---
 config.json | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index 8a694d7..d898bf3 100644
--- a/config.json
+++ b/config.json
@@ -147,7 +147,7 @@
 },
"nightly-bringup" : {
 "TEMPLATE" : "nightly-arch",
-"SSTATEDIR" : [],
+"SSTATEDIR" : ["${BUILDDIR}/../sstate"],
 "MACHINE" : "qemuarm64",
 "step2" : {
 "MACHINE" : "qemux86-64"
@@ -168,6 +168,9 @@
 ],
 "EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1 oe-selftest 
--skip-tests distrodata.Distrodata.test_checkpkg -j 15"],
 "ADDLAYER" : ["${BUILDDIR}/../meta-selftest"]
+},
+"step7" : {
+   "EXTRACMDS" : ["rm -rf ${BUILDDIR}/../sstate"]
 }
 },
 "nightly-mips" : {
-- 
2.17.2



[yocto] [PATCH] [poky] Bug 8729 - grub: create a script to boot between rootfs and a maintenance partition

2018-11-10 Thread dimtass
From: Dimitris Tassopoulos 

Hi all, I have a patch which is ready for testing and evaluation.

I've also sent it to the poky mailing list, but that list seems a bit
inactive, and I've seen everyone send their patches to the yocto list.

In order to build the QEMU wic image you need to apply the patch
and then add these lines to the end of your local.conf file:

MACHINE ?= "qemux86"
MACHINE_FEATURES += "pcbios efi"
IMAGE_FSTYPES += "wic"
WKS_FILE = "directdisk-multi-rootfs.wks.in"
EFI_PROVIDER = "grub-efi"
IMAGE_INSTALL_append = " \
grub \
grub-efi \
grub-multibootconf-cntr-rst \
"
EXTRA_IMAGEDEPENDS += "ovmf"
PREFERRED_RPROVIDER_virtual/grub-bootconf = "grub-multibootconf"

Then build the core-image-minimal image (that's the only one I've tried).
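The build itself is just the usual invocation, nothing beyond the
local.conf additions above, e.g.:

$ bitbake core-image-minimal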
If the build ends without errors then load the image with:

$ runqemu wic kvm ovmf

You can debug the grub bootloader script by setting the 'GRUB_DEBUG = 1'
flag in poky/meta/recipes-bsp/grub/grub-multibootconf_1.00.bb.

Finally, `grub-multibootconf-cntr-rst` is a service that resets the
boot counter when the OS loads. It is meant only for testing; if you
remove it, the boot counter will keep incrementing until it reaches the
maximum, and Grub will then fall back to the rescue partition.
Therefore, if you want to test that the fallback works properly, you
need to either disable the init script/service or remove it from
IMAGE_INSTALL. Also, the counter timeout value is set in
`poky/meta/recipes-bsp/grub/grub-multibootconf_1.00.bb`
with the GRUB_BOOT_TRIES variable.
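
If you want to inspect or reset the counter by hand, the same
grub-editenv calls that the init script in this patch uses also work
from the booted system, e.g.:

$ grub-editenv /boot/EFI/BOOT/bootcntr.env list            # print the current bootcntr value
$ grub-editenv /boot/EFI/BOOT/bootcntr.env set bootcntr=0  # reset it manually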

Signed-off-by: Dimitris Tassopoulos 
---
 meta/recipes-bsp/grub/files/bootcntr-rst.init |  35 +
 .../grub/files/bootcntr-rst.service   |   9 ++
 .../recipes-bsp/grub/files/grub-multiboot.cfg | 148 ++
 meta/recipes-bsp/grub/grub-efi_2.02.bb|   2 +-
 .../grub/grub-multibootconf-cntr-rst_1.00.bb  |  30 
 .../grub/grub-multibootconf_1.00.bb   | 107 +
 meta/recipes-bsp/grub/grub_2.02.bb|   2 +
 .../canned-wks/directdisk-multi-rootfs.wks|  23 ---
 .../canned-wks/directdisk-multi-rootfs.wks.in |  22 +++
 9 files changed, 354 insertions(+), 24 deletions(-)
 create mode 100755 meta/recipes-bsp/grub/files/bootcntr-rst.init
 create mode 100644 meta/recipes-bsp/grub/files/bootcntr-rst.service
 create mode 100644 meta/recipes-bsp/grub/files/grub-multiboot.cfg
 create mode 100644 meta/recipes-bsp/grub/grub-multibootconf-cntr-rst_1.00.bb
 create mode 100644 meta/recipes-bsp/grub/grub-multibootconf_1.00.bb
 delete mode 100644 scripts/lib/wic/canned-wks/directdisk-multi-rootfs.wks
 create mode 100644 scripts/lib/wic/canned-wks/directdisk-multi-rootfs.wks.in

diff --git a/meta/recipes-bsp/grub/files/bootcntr-rst.init b/meta/recipes-bsp/grub/files/bootcntr-rst.init
new file mode 100755
index 00..e89dbf53e2
--- /dev/null
+++ b/meta/recipes-bsp/grub/files/bootcntr-rst.init
@@ -0,0 +1,35 @@
+#!/bin/sh
+#/etc/init.d/bootcntr-rst: reset the grub boot counter.
+
+### BEGIN INIT INFO
+# Provides:  Grub boot counter reset
+# Short-Description: Resets the bootcntr grub env var
+# Required-Start:$all
+# Required-Stop: $all
+# Should-Start:  
+# Should-Stop:   
+# Default-Start: 2 3 4 5
+# Default-Stop:  0 1 6
+### END INIT INFO
+
+. /etc/init.d/functions
+
+case "$1" in
+start)
+/usr/bin/grub-editenv /boot/EFI/BOOT/bootcntr.env set bootcntr=0
+;;
+stop)
+echo "No stop available"
+   ;;
+status)
+echo $(/usr/bin/grub-editenv /boot/EFI/BOOT/bootcntr.env list)
+;;
+*)
+echo $"Usage: $0 {start|stop|status}"
+exit 2
+esac
+
+
+
+
+
diff --git a/meta/recipes-bsp/grub/files/bootcntr-rst.service b/meta/recipes-bsp/grub/files/bootcntr-rst.service
new file mode 100644
index 00..3582036241
--- /dev/null
+++ b/meta/recipes-bsp/grub/files/bootcntr-rst.service
@@ -0,0 +1,9 @@
+[Unit]
+Description=Resets the grub boot counter. This should be used only for testing
+
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/grub-editenv /boot/EFI/BOOT/bootcntr.env set bootcntr=0
+
+[Install]
+WantedBy=multi-user.target
diff --git a/meta/recipes-bsp/grub/files/grub-multiboot.cfg b/meta/recipes-bsp/grub/files/grub-multiboot.cfg
new file mode 100644
index 00..433b05c2d1
--- /dev/null
+++ b/meta/recipes-bsp/grub/files/grub-multiboot.cfg
@@ -0,0 +1,148 @@
+# @description: This is a simple sample of a grub configuration file that can be
+#   used for multiboot configurations. This script assumes by default that there
+#   is a rootfs partition which is the normal boot partition and a rescue partition
+#   which is used for recovery. Normally, the system boots to the rootfs and every
+#   time Grub increments a bootcntr value. This value should be reset after booting
+#   with either a service or even better from the main application itself or a
+#   special watchdog application/service. If for some reason Grub detects that th