Re: [yocto] [yocto-autobuilder-helper][PATCH] config.json: add a workaround for the "autobuilderlog.json" error

2023-10-18 Thread Jose Quaresma
Hi Yoann,

I ran into the same issue with BBLAYERS; it was fixed [1] by expanding
any relative path that could exist.
Maybe it would be better to also expand BB_LOGCONFIG when
the newbuilddir argument is present.

[1]
https://git.yoctoproject.org/poky/commit/?id=f5f465ff5777eb99941db3dda84e65d4699d97f7
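
For illustration, a minimal shell sketch of what that expansion does
(the build path here is hypothetical):

$ cd /home/pokybuild/yocto-worker/build
$ export BB_LOGCONFIG=conf/autobuilderlog.json
$ readlink -f "$BB_LOGCONFIG"
/home/pokybuild/yocto-worker/build/conf/autobuilderlog.json

Once the value is absolute, it stays valid after oe-selftest switches
to the new build directory.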

Jose

Yoann Congal  wrote on Wednesday, 18/10/2023
at 23:28:

> For the reproducible-meta-oe builder, work around bug #15241 [1] by
> passing BB_LOGCONFIG through "readlink -f" to avoid a relative reference
> to the main build dir.
>
> Also, switch from BBPATH to BUILDDIR to reference the main build dir.
>
> [1]: https://bugzilla.yoctoproject.org/show_bug.cgi?id=15241
>
> Signed-off-by: Yoann Congal 
> ---
>  config.json | 20 ++--
>  1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/config.json b/config.json
> index c01a453..84da67b 100644
> --- a/config.json
> +++ b/config.json
> @@ -303,7 +303,7 @@
>  ],
>  "step1" : {
>  "shortname" : "Repro meta-oe/meta-filesystems",
> -"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-filesystems/
> DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-filesystems -r
> reproducible"],
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-filesystems/
> DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest
> --newbuilddir $BUILDDIR/build-st-meta-filesystems -r reproducible"],
>  "ADDLAYER" : [
>  "${BUILDDIR}/../meta-openembedded/meta-oe",
>  "${BUILDDIR}/../meta-openembedded/meta-python",
> @@ -323,7 +323,7 @@
>  },
>  "step2" : {
>  "shortname" : "Repro meta-oe/meta-gnome",
> -"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-gnome/
> DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-gnome -r
> reproducible"],
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-gnome/
> DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest
> --newbuilddir $BUILDDIR/build-st-meta-gnome -r reproducible"],
>  "ADDLAYER" : [
>  "${BUILDDIR}/../meta-openembedded/meta-oe",
>  "${BUILDDIR}/../meta-openembedded/meta-python",
> @@ -343,14 +343,14 @@
>  },
>  "step3" : {
>  "shortname" : "Repro meta-oe/meta-initramfs",
> -"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-initramfs/
> DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-initramfs -r
> reproducible"],
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-initramfs/
> DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest
> --newbuilddir $BUILDDIR/build-st-meta-initramfs -r reproducible"],
>  "ADDLAYER" : [
>  "${BUILDDIR}/../meta-openembedded/meta-initramfs"
>  ]
>  },
>  "step4" : {
>  "shortname" : "Repro meta-oe/meta-multimedia",
> -"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-multimedia/
> DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-multimedia -r
> reproducible"],
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-multimedia/
> DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest
> --newbuilddir $BUILDDIR/build-st-meta-multimedia -r reproducible"],
>  "ADDLAYER" : [
>  "${BUILDDIR}/../meta-openembedded/meta-oe",
>  "${BUILDDIR}/../meta-openembedded/meta-python",
> @@ -367,7 +367,7 @@
>  },
>  "step5" : {
>  "shortname" : "Repro meta-oe/meta-networking",
> -"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-networking/
> DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-networking -r
> reproducible"],
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc;
> OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-networking/
> DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest
> --newbuilddir $BUILDDIR/build-st-meta-networking -r reproducible"],
>  "ADDLAYER" : [
>  "${BUILDDIR}/../meta-openembedded/meta-oe",
>  

[yocto] [yocto-autobuilder-helper][PATCH] config.json: add a workaround for the "autobuilderlog.json" error

2023-10-18 Thread Yoann Congal
For the reproducible-meta-oe builder, work around bug #15241 [1] by
passing BB_LOGCONFIG through "readlink -f" to avoid a relative reference
to the main build dir.

Also, switch from BBPATH to BUILDDIR to reference the main build dir.

[1]: https://bugzilla.yoctoproject.org/show_bug.cgi?id=15241

Signed-off-by: Yoann Congal 
---
 config.json | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/config.json b/config.json
index c01a453..84da67b 100644
--- a/config.json
+++ b/config.json
@@ -303,7 +303,7 @@
 ],
 "step1" : {
 "shortname" : "Repro meta-oe/meta-filesystems",
-"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-filesystems/
 DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-filesystems -r 
reproducible"],
+"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-filesystems/
 DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest --newbuilddir 
$BUILDDIR/build-st-meta-filesystems -r reproducible"],
 "ADDLAYER" : [
 "${BUILDDIR}/../meta-openembedded/meta-oe",
 "${BUILDDIR}/../meta-openembedded/meta-python",
@@ -323,7 +323,7 @@
 },
 "step2" : {
 "shortname" : "Repro meta-oe/meta-gnome",
-"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-gnome/
 DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-gnome -r 
reproducible"],
+"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-gnome/
 DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest --newbuilddir 
$BUILDDIR/build-st-meta-gnome -r reproducible"],
 "ADDLAYER" : [
 "${BUILDDIR}/../meta-openembedded/meta-oe",
 "${BUILDDIR}/../meta-openembedded/meta-python",
@@ -343,14 +343,14 @@
 },
 "step3" : {
 "shortname" : "Repro meta-oe/meta-initramfs",
-"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-initramfs/
 DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-initramfs -r 
reproducible"],
+"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-initramfs/
 DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest --newbuilddir 
$BUILDDIR/build-st-meta-initramfs -r reproducible"],
 "ADDLAYER" : [
 "${BUILDDIR}/../meta-openembedded/meta-initramfs"
 ]
 },
 "step4" : {
 "shortname" : "Repro meta-oe/meta-multimedia",
-"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-multimedia/
 DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-multimedia -r 
reproducible"],
+"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-multimedia/
 DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest --newbuilddir 
$BUILDDIR/build-st-meta-multimedia -r reproducible"],
 "ADDLAYER" : [
 "${BUILDDIR}/../meta-openembedded/meta-oe",
 "${BUILDDIR}/../meta-openembedded/meta-python",
@@ -367,7 +367,7 @@
 },
 "step5" : {
 "shortname" : "Repro meta-oe/meta-networking",
-"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-networking/
 DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-networking -r 
reproducible"],
+"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-networking/
 DISPLAY=:1 BB_LOGCONFIG=$(readlink -f $BB_LOGCONFIG) oe-selftest --newbuilddir 
$BUILDDIR/build-st-meta-networking -r reproducible"],
 "ADDLAYER" : [
 "${BUILDDIR}/../meta-openembedded/meta-oe",
 "${BUILDDIR}/../meta-openembedded/meta-networking"
@@ -382,7 +382,7 @@
 },
 "step6" : {
 "shortname" : "Repro meta-oe/meta-oe",
-"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-oe/
 DISPLAY=:1 oe-selftest --newbuilddir $BBPATH/build-st-meta-oe -r 
reproducible"],
+"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 

Re: [yocto] [tsc] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3.rc1)

2023-10-18 Thread Richard Purdie
On Wed, 2023-10-18 at 15:32 +, Ross Burton wrote:
> On 18 Oct 2023, at 07:29, Richard Purdie via lists.yoctoproject.org 
>  wrote:
> > 
> > On Wed, 2023-10-18 at 06:16 +, Pokybuild User wrote:
> > >A build flagged for QA (yocto-4.3.rc1) was completed on the 
> > > autobuilder and is available at:
> > > 
> > > 
> > >https://autobuilder.yocto.io/pub/releases/yocto-4.3.rc1
> > > 
> > > 
> > >Build URL: 
> > > https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/6062
> > 
> > There was one failure in the build, a serial login issue on ttyS1, so
> > an occurrence of our annoying 6.5 issue. This is the first time we've
> > seen it with the workaround applied.
> > 
> > The question is whether to proceed with rc1 in testing, or apply the
> > upstream fixes and try an rc2 with that? I'm torn…
> 
> I’d say we should proceed with RC1 whilst testing upstream fixes.
> 
> If they manage to shake out all of the failures then we can decide
> whether to spin another RC with the fixes in, or just release with
> what we have - with copious release notes, obviously.

Let's run with this approach, I agree. I am running test builds in the
background and we'll see how reliable the upstream fix is. I think
moving forward with rc1 as is makes sense.

Cheers,

Richard




Re: [yocto] CDN for sstate.yoctoproject.org

2023-10-18 Thread Richard Purdie
On Wed, 2023-10-18 at 14:02 +0200, Alexander Kanavin wrote:
> On Wed, 18 Oct 2023 at 08:45, Richard Purdie
>  wrote:
> > I'm torn on the targets to test as sato-sdk is a large image and world
> > is a lot of work. I'd be tempted to test sato, weston and full-cmdline?
> > World is a good test I guess and if from sstate, shouldn't have that
> > much of an issue. It does also prove things are working.
> 
> I ran '-S printdiff world' on a blank build directory. First,
> scalability isn't great:
> 
> Initialising tasks: 100% |##########| Time: 0:24:19
> Checking sstate mirror object availability: 100% |##########| Time: 0:12:14
> 
> So it's taking 36 minutes just preparing to fetch the objects, and 2/3
> of that time goes into communicating with the hash equivalence server
> (e.g. BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687").

FWIW the second time an artefact is pulled from the CDN, it will be
much faster. I don't know how much that is a factor here.

We know hashequiv is slow, particularly from Europe. We've ideas on that
we're looking into but for now it is what it is. Part of this work was
to find out what the issues are so we'll write that one up.

> Second, there are significant misses. I don't have a clear theory
> where they come from, just want to list them:
> 
> The differences between the current build and any cached tasks start
> at the following tasks:
> /srv/work/alex/poky/meta/recipes-core/meta/meta-ide-support.bb:do_populate_ide_support
> /srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-synaptics_1.9.2.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.9.bb:do_create_runtime_spdx
> /srv/work/alex/poky/meta/recipes-graphics/igt-gpu-tools/igt-gpu-tools_git.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-devtools/python/python3-dbusmock_0.29.1.bb:do_create_runtime_spdx
> /srv/work/alex/poky/meta/recipes-devtools/dpkg/dpkg_1.22.0.bb:do_configure
> /srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-vmmouse_13.2.0.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-graphics/glew/glew_2.2.0.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-evdev_2.10.6.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-graphics/libva/libva_2.19.0.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-sato/webkit/libwpe_1.14.1.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-meta-base.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python_1.22.6.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-extended/cups/cups_2.4.6.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-gnome/libdazzle/libdazzle_3.44.0.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-graphics/igt-gpu-tools/igt-gpu-tools_git.bb:do_package
> /srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gst-devtools_1.22.6.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-core/packagegroups/packagegroup-core-sdk.bb:do_package
> /srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-mouse_1.9.5.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-graphics/waffle/waffle_1.7.2.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-multimedia/alsa/alsa-tools_1.2.5.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.22.6.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-devtools/apt/apt_2.6.1.bb:do_install
> /srv/work/alex/poky/meta/recipes-gnome/gtk+/gtk4_4.12.3.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-devtools/devel-config/distcc-config.bb:do_create_runtime_spdx
> /srv/work/alex/poky/meta/recipes-gnome/libhandy/libhandy_1.8.2.bb:do_collect_spdx_deps
> /srv/work/alex/poky/meta/recipes-support/vim/vim_9.0.bb:do_collect_spdx_deps

We'll have to dig into this.

> So I think we should start with specific images first - minimal,
> full-cmdline, weston, sato and sato-sdk are all much faster to check.

Agreed, let's start with simpler images: full-cmdline, minimal, sato and
weston. We can increase coverage as we move forward (world, sato-sdk).

> On qemux86_64 none of them show misses, but on qemuarm64 there are
> problems with sato, sato-sdk and weston, i.e. sato-sdk shows:
> 
> The differences between the current build and any cached tasks start
> at the following tasks:
> /srv/work/alex/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.10.bb:do_package_write_rpm
> /srv/work/alex/poky/meta/recipes-connectivity/connman/connman-gnome_0.7.bb:do_package_write_rpm
> 

Re: [yocto] [tsc] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3.rc1)

2023-10-18 Thread Ross Burton
On 18 Oct 2023, at 07:29, Richard Purdie via lists.yoctoproject.org 
 wrote:
> 
> On Wed, 2023-10-18 at 06:16 +, Pokybuild User wrote:
>>A build flagged for QA (yocto-4.3.rc1) was completed on the autobuilder 
>> and is available at:
>> 
>> 
>>https://autobuilder.yocto.io/pub/releases/yocto-4.3.rc1
>> 
>> 
>>Build URL: 
>> https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/6062
> 
> There was one failure in the build, a serial login issue on ttyS1, so
> an occurrence of our annoying 6.5 issue. This is the first time we've
> seen it with the workaround applied.
> 
> The question is whether to proceed with rc1 in testing, or apply the
> upstream fixes and try an rc2 with that? I'm torn…

I’d say we should proceed with RC1 whilst testing upstream fixes.

If they manage to shake out all of the failures then we can decide whether to 
spin another RC with the fixes in, or just release with what we have - with 
copious release notes, obviously. 

Ross



Re: [yocto] [yocto-autobuilder-helper][PATCH v2] meta-oe-mirror: Use a 2 step job to fetch and verify meta-openembedded mirror.

2023-10-18 Thread David Pierret
Hi,
This patch is v2 of this one:
https://lists.yoctoproject.org/g/yocto/message/61306

Regards
David

On Wed, Oct 18, 2023 at 3:46 PM David Pierret via
lists.yoctoproject.org 
wrote:
>
> Inspired by trigger-build and trigger-build-post-trigger.
> The branch must be selected in the build configuration.
>
> Signed-off-by: David Pierret 
> Reviewed-by: Yoann Congal 
> ---
>  config.json | 30 ++
>  1 file changed, 30 insertions(+)
>
> diff --git a/config.json b/config.json
> index 3acb710..9e6779f 100644
> --- a/config.json
> +++ b/config.json
> @@ -1420,6 +1420,36 @@
>  "${SCRIPTSDIR}/setup-auh ${HELPERBUILDDIR}; 
> ${SCRIPTSDIR}/run-auh ${HELPERBUILDDIR} ${WEBPUBLISH_DIR}/pub/auh/"
>  ]
>  },
> +"meta-oe-mirror" : {
> +"SDKMACHINE" : "x86_64",
> +"MACHINE" : "qemux86-64",
> +"NEEDREPOS" : ["poky", "meta-openembedded"],
> +"ADDLAYER" : [
> +"${BUILDDIR}/../meta-selftest",
> +
> +"${BUILDDIR}/../meta-openembedded/meta-oe",
> +"${BUILDDIR}/../meta-openembedded/meta-python",
> +"${BUILDDIR}/../meta-openembedded/meta-perl",
> +"${BUILDDIR}/../meta-openembedded/meta-networking",
> +"${BUILDDIR}/../meta-openembedded/meta-multimedia",
> +"${BUILDDIR}/../meta-openembedded/meta-gnome",
> +"${BUILDDIR}/../meta-openembedded/meta-xfce",
> +"${BUILDDIR}/../meta-openembedded/meta-filesystems",
> +"${BUILDDIR}/../meta-openembedded/meta-initramfs",
> +"${BUILDDIR}/../meta-openembedded/meta-webserver"
> +],
> +"step1" : {
> +"shortname" : "Sources pre-fetching",
> +"BBTARGETS" : "universe -c fetch -k",
> +"extravars" : [
> +"SOURCE_MIRROR_FETCH = '1'"
> +]
> +},
> +"step2" : {
> +"shortname" : "Source Mirror Selftest",
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1 
> oe-selftest -r buildoptions.SourceMirroring.test_yocto_source_mirror"],
> +}
> +},
>  "a-quick" : {
>  "TEMPLATE" : "trigger-build"
>  },
> --
> 2.39.2
>
>
> 
>




[yocto] [yocto-autobuilder-helper][PATCH v2] meta-oe-mirror: Use a 2 step job to fetch and verify meta-openembedded mirror.

2023-10-18 Thread David Pierret
Inspired by trigger-build and trigger-build-post-trigger.
The branch must be selected in the build configuration.

Signed-off-by: David Pierret 
Reviewed-by: Yoann Congal 
---
 config.json | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/config.json b/config.json
index 3acb710..9e6779f 100644
--- a/config.json
+++ b/config.json
@@ -1420,6 +1420,36 @@
 "${SCRIPTSDIR}/setup-auh ${HELPERBUILDDIR}; 
${SCRIPTSDIR}/run-auh ${HELPERBUILDDIR} ${WEBPUBLISH_DIR}/pub/auh/"
 ]
 },
+"meta-oe-mirror" : {
+"SDKMACHINE" : "x86_64",
+"MACHINE" : "qemux86-64",
+"NEEDREPOS" : ["poky", "meta-openembedded"],
+"ADDLAYER" : [
+"${BUILDDIR}/../meta-selftest",
+
+"${BUILDDIR}/../meta-openembedded/meta-oe",
+"${BUILDDIR}/../meta-openembedded/meta-python",
+"${BUILDDIR}/../meta-openembedded/meta-perl",
+"${BUILDDIR}/../meta-openembedded/meta-networking",
+"${BUILDDIR}/../meta-openembedded/meta-multimedia",
+"${BUILDDIR}/../meta-openembedded/meta-gnome",
+"${BUILDDIR}/../meta-openembedded/meta-xfce",
+"${BUILDDIR}/../meta-openembedded/meta-filesystems",
+"${BUILDDIR}/../meta-openembedded/meta-initramfs",
+"${BUILDDIR}/../meta-openembedded/meta-webserver"
+],
+"step1" : {
+"shortname" : "Sources pre-fetching",
+"BBTARGETS" : "universe -c fetch -k",
+"extravars" : [
+"SOURCE_MIRROR_FETCH = '1'"
+]
+},
+"step2" : {
+"shortname" : "Source Mirror Selftest",
+"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1 
oe-selftest -r buildoptions.SourceMirroring.test_yocto_source_mirror"],
+}
+},
 "a-quick" : {
 "TEMPLATE" : "trigger-build"
 },
-- 
2.39.2





Re: [yocto] [yocto-security] SRTool usage for CVE management at YP

2023-10-18 Thread Steve Sakoman
On Tue, Oct 17, 2023 at 7:43 PM Marta Rybczynska  wrote:
>
> Hello all,
> There's a discussion pending on the usage of SRTool and CVE management
> for the Yocto Project in general. It is related to the need for a list
> of CVEs the project is affected by, those that are fixed, and those
> that we know do not affect us.
>
> In the previous episode, we have had a demo of SRTool by David Reyna.
> This weekend he has updated the code base. The next call is tomorrow
> (Thursday, 19th October, half an hour after the end of the Bug Triage
> meeting) to discuss the conclusions of the first tests and the next
> steps. If you are interested in joining, let us know!

I'd like to join.

Steve




Re: [yocto] CDN for sstate.yoctoproject.org

2023-10-18 Thread Alexander Kanavin
On Wed, 18 Oct 2023 at 08:45, Richard Purdie
 wrote:
> I'm torn on the targets to test as sato-sdk is a large image and world
> is a lot of work. I'd be tempted to test sato, weston and full-cmdline?
> World is a good test I guess and if from sstate, shouldn't have that
> much of an issue. It does also prove things are working.

I ran '-S printdiff world' on a blank build directory. First,
scalability isn't great:

Initialising tasks: 100% |##########| Time: 0:24:19
Checking sstate mirror object availability: 100% |##########| Time: 0:12:14

So it's taking 36 minutes just preparing to fetch the objects, and 2/3
of that time goes into communicating with the hash equivalence server
(e.g. BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687").
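
For reference, the relevant local.conf settings for this kind of test
look roughly like this (a sketch based on the poky defaults; the exact
SSTATE_MIRRORS value is an assumption, check conf/distro/poky.conf):

# sketch, assumed from the poky defaults for the CDN test
BB_SIGNATURE_HANDLER = "OEEquivHash"
BB_HASHSERVE = "auto"
BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687"
SSTATE_MIRRORS ?= "file://.* http://sstate.yoctoproject.org/all/PATH;downloadfilename=PATH"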

Second, there are significant misses. I don't have a clear theory
where they come from, just want to list them:

The differences between the current build and any cached tasks start
at the following tasks:
/srv/work/alex/poky/meta/recipes-core/meta/meta-ide-support.bb:do_populate_ide_support
/srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-synaptics_1.9.2.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.9.bb:do_create_runtime_spdx
/srv/work/alex/poky/meta/recipes-graphics/igt-gpu-tools/igt-gpu-tools_git.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-devtools/python/python3-dbusmock_0.29.1.bb:do_create_runtime_spdx
/srv/work/alex/poky/meta/recipes-devtools/dpkg/dpkg_1.22.0.bb:do_configure
/srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-vmmouse_13.2.0.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-graphics/glew/glew_2.2.0.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-evdev_2.10.6.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-graphics/libva/libva_2.19.0.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-sato/webkit/libwpe_1.14.1.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-meta-base.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python_1.22.6.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-extended/cups/cups_2.4.6.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-gnome/libdazzle/libdazzle_3.44.0.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-graphics/igt-gpu-tools/igt-gpu-tools_git.bb:do_package
/srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gst-devtools_1.22.6.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-core/packagegroups/packagegroup-core-sdk.bb:do_package
/srv/work/alex/poky/meta/recipes-graphics/xorg-driver/xf86-input-mouse_1.9.5.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-graphics/waffle/waffle_1.7.2.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-multimedia/alsa/alsa-tools_1.2.5.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.22.6.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-devtools/apt/apt_2.6.1.bb:do_install
/srv/work/alex/poky/meta/recipes-gnome/gtk+/gtk4_4.12.3.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-devtools/devel-config/distcc-config.bb:do_create_runtime_spdx
/srv/work/alex/poky/meta/recipes-gnome/libhandy/libhandy_1.8.2.bb:do_collect_spdx_deps
/srv/work/alex/poky/meta/recipes-support/vim/vim_9.0.bb:do_collect_spdx_deps

So I think we should start with specific images first - minimal,
full-cmdline, weston, sato and sato-sdk are all much faster to check.
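
(Each of these is a single run against an empty build directory, along
the lines of:

MACHINE=qemux86-64 bitbake -S printdiff core-image-minimal
MACHINE=qemuarm64 bitbake -S printdiff core-image-sato

with the other image/machine combinations substituted in.)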

On qemux86_64 none of them show misses, but on qemuarm64 there are
problems with sato, sato-sdk and weston, i.e. sato-sdk shows:

The differences between the current build and any cached tasks start
at the following tasks:
/srv/work/alex/poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.42.10.bb:do_package_write_rpm
/srv/work/alex/poky/meta/recipes-connectivity/connman/connman-gnome_0.7.bb:do_package_write_rpm
/srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gst-examples_1.18.6.bb:do_package_write_rpm
/srv/work/alex/poky/meta/recipes-sato/images/core-image-sato-sdk.bb:do_deploy_source_date_epoch
/srv/work/alex/poky/meta/recipes-graphics/xorg-font/font-util_1.4.1.bb:do_package_write_rpm
/srv/work/alex/poky/meta/recipes-core/packagegroups/packagegroup-core-sdk.bb:do_package
/srv/work/alex/poky/meta/recipes-gnome/librsvg/librsvg_2.56.3.bb:do_package_write_rpm
/srv/work/alex/poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.22.6.bb:do_package_write_rpm


I'm not sure if that's because the cache for the current master needs to
be populated properly first, or if there's a deeper issue.

> Either we start logging what we've 

Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Enrico Jörns
Am Mittwoch, dem 18.10.2023 um 12:50 +0200 schrieb Alexander Kanavin:
> On Wed, 18 Oct 2023 at 12:37, Alexander Kanavin via
> lists.yoctoproject.org 
> wrote:
> > 
> > On Wed, 18 Oct 2023 at 12:24, Enrico Jörns  wrote:
> > > How is this different from the default packagegroup handling in
> > > oe-core, where packages 'force themselves' into images based on
> > > specific DISTRO_FEATURES or MACHINE_FEATURES settings?
> > > 
> > > Thus, is this a mechanism that is explicitly designed to be used by
> > > oe-core exclusively?
> > 
> > I see that as something of a historical mis-feature. I would take
> > those things out of the packagegroup recipe and into a class used by
> > images if I had a bit of time (core-image.bbclass most likely).
> > 
> > That's what you could do as well, define a class that pulls in rauc
> > for those images that inherit it.
> 
> You can see how core-image-weston.bb does it:
> 
> IMAGE_FEATURES += "splash package-management ssh-server-dropbear
> hwcodecs weston"
> 
> and then in core-image class:
> 
> FEATURE_PACKAGES_weston = "packagegroup-core-weston"
> ... etc

Oh yes, I came across this feature a few weeks ago for the first time.

Before that, I had never noticed that the FEATURE_PACKAGES mechanism exists.

Maybe this could be an alternative to 'hacking' rauc into the base
packagegroup. Thanks for the hint.

My original intention was to reduce the number of manual variable
switches required for building a rauc-compatible image. But maybe this
was sort of 'over-optimized'.


Enrico

> Alex
> 

-- 
Pengutronix e.K.   | Enrico Jörns|
Embedded Linux Consulting & Support| https://www.pengutronix.de/ |
Steuerwalder Str. 21   | Phone: +49-5121-206917-180  |
31137 Hildesheim, Germany  | Fax:   +49-5121-206917-9|




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Alexander Kanavin
On Wed, 18 Oct 2023 at 12:37, Alexander Kanavin via
lists.yoctoproject.org 
wrote:
>
> On Wed, 18 Oct 2023 at 12:24, Enrico Jörns  wrote:
> > How is this different from the default packagegroup handling in
> > oe-core, where packages 'force themselves' into images based on
> > specific DISTRO_FEATURES or MACHINE_FEATURES settings?
> >
> > Thus, is this a mechanism that is explicitly designed to be used by
> > oe-core exclusively?
>
> I see that as something of a historical mis-feature. I would take
> those things out of the packagegroup recipe and into a class used by
> images if I had a bit of time (core-image.bbclass most likely).
>
> That's what you could do as well, define a class that pulls in rauc
> for those images that inherit it.

You can see how core-image-weston.bb does it:

IMAGE_FEATURES += "splash package-management ssh-server-dropbear
hwcodecs weston"

and then in core-image class:

FEATURE_PACKAGES_weston = "packagegroup-core-weston"
... etc
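
A sketch of the same pattern applied to rauc (packagegroup-rauc is the
recipe name suggested earlier in this thread; meta-rauc does not ship
it today):

# in a class inherited by your images
FEATURE_PACKAGES_rauc = "packagegroup-rauc"

# only in the images that actually want rauc
IMAGE_FEATURES += "rauc"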

Alex




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Alexander Kanavin
On Wed, 18 Oct 2023 at 12:24, Enrico Jörns  wrote:
> How is this different from the default packagegroup handling in
> oe-core, where packages 'force themselves' into images based on
> specific DISTRO_FEATURES or MACHINE_FEATURES settings?
>
> Thus, is this a mechanism that is explicitly designed to be used by
> oe-core exclusively?

I see that as something of a historical mis-feature. I would take
those things out of the packagegroup recipe and into a class used by
images if I had a bit of time (core-image.bbclass most likely).

That's what you could do as well, define a class that pulls in rauc
for those images that inherit it.
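
For instance, a minimal sketch (the class name is made up; the package
names are the ones meta-rauc ships):

# rauc-image.bbclass
IMAGE_INSTALL:append = " rauc rauc-mark-good rauc-service"

Then only the images that want rauc would 'inherit rauc-image'.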

Alex




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Enrico Jörns
Hi Alex,

Am Mittwoch, dem 18.10.2023 um 12:09 +0200 schrieb Alexander Kanavin:
> On Wed, 18 Oct 2023 at 11:57, Enrico Jörns  wrote:
> > it's not really that it's 'forced' into all images. The .bbappend just says:
> > 
> > > require ${@bb.utils.contains('DISTRO_FEATURES', 'rauc', '${BPN}_rauc.inc', '', d)}
> > 
> > The assumption was that if you have 'rauc' in your DISTRO_FEATURES, you
> > want to install it. And this was just the best way I saw to automatically
> > end up in base images for this case while ensuring that we do not modify
> > the base packagegroup unconditionally.
> 
> You should provide a packagegroup-rauc recipe, a sample rauc-image
> recipe (or set of recipes) that uses that packagegroup, and leave it
> at that. Users of meta-rauc will figure out the rest, particularly
> when and how they want to install rauc. You've just seen a specific
> case where it is *not* wanted in some images.
> 
> Forcing your way into base packagegroups from oe-core via bbappends is
> really not the right way to go about things.

In how is this different to the default packagegroup handling in oe-core where 
package 'force
themself' into images based on specific DISTRO_FEATURE or MACHINE_FEATURE 
settings?

Thus this is a mechanism that is explicitly designed to be used by oe-core 
exclusively?


Regards, Enrico

> Alex
> 

-- 
Pengutronix e.K.   | Enrico Jörns|
Embedded Linux Consulting & Support| https://www.pengutronix.de/ |
Steuerwalder Str. 21   | Phone: +49-5121-206917-180  |
31137 Hildesheim, Germany  | Fax:   +49-5121-206917-9|




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Alexander Kanavin
On Wed, 18 Oct 2023 at 11:57, Enrico Jörns  wrote:
> it's not really that it's 'forced' into all images. The .bbappend just says:
>
> > require ${@bb.utils.contains('DISTRO_FEATURES', 'rauc', '${BPN}_rauc.inc', '', d)}
>
> The assumption was that if you have 'rauc' in your DISTRO_FEATURES, you
> want to install it. And this was just the best way I saw to automatically
> end up in base images for this case while ensuring that we do not modify
> the base packagegroup unconditionally.

You should provide a packagegroup-rauc recipe, a sample rauc-image
recipe (or set of recipes) that uses that packagegroup, and leave it
at that. Users of meta-rauc will figure out the rest, particularly
when and how they want to install rauc. You've just seen a specific
case where it is *not* wanted in some images.

Forcing your way into base packagegroups from oe-core via bbappends is
really not the right way to go about things.

Alex




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Enrico Jörns
Hi,

Am Mittwoch, dem 18.10.2023 um 11:08 +0200 schrieb Alexander Kanavin:
> Ugh, that's a really ugly hack in meta-rauc. They shouldn't force
> themselves into all images that way. You should file a ticket.

it's not really that it's 'forced' into all images. The .bbappend just says:

> require ${@bb.utils.contains('DISTRO_FEATURES', 'rauc', '${BPN}_rauc.inc', '', d)}

The assumption was that if you have 'rauc' in your DISTRO_FEATURES, you want
to install it. And this was just the best way I saw to automatically end up
in base images for this case while ensuring that we do not modify the base
packagegroup unconditionally.

A step back: I would expect that a recovery image is exactly where you
would want RAUC to be available, to recover your rootfs by installing a
valid bundle, no?

Anyway, feel free to file an issue or a discussion at

https://github.com/rauc/meta-rauc 

Regards, Enrico

> I would suggest either dropping rauc from DISTRO_FEATURES globally
> (that may have unwanted side effects though and you should check what
> else that affects), or using BBMASK as shown in the reference manual to
> blacklist the offending bbappend, e.g. something like:
> 
>  BBMASK += "/meta-ti/recipes-misc/ meta-ti/recipes-ti/packagegroup/"
> 
> 
> Alex
> 
> On Wed, 18 Oct 2023 at 11:00, Ivan Stojanovic  wrote:
> > 
> > We have 2 images, adios-image-standard and adios-image-recovery.
> > In adios-image-standard, I want to have rauc.
> > In adios-image-recovery due to lack of space, I want to remove it. It is a 
> > part of it since it
> > is in DISTRO_FEATURES.
> > 
> > In the meta-rauc layer, they add rauc to packagegroup-base:
> > https://github.com/rauc/meta-rauc/blob/kirkstone/recipes-core/packagegroups/packagegroup-base.bbappend
> >  via
> > https://github.com/rauc/meta-rauc/blob/kirkstone/recipes-core/packagegroups/packagegroup-base_rauc.inc
> > .
> > 
> > I tried removing it with PACKAGE_EXCLUDE:
> > PACKAGE_EXCLUDE = "\
> >   packagegroup-base-rauc \
> > "
> > as well as
> > PACKAGE_EXCLUDE = "\
> >   rauc \
> >   rauc-mark-good \
> >   rauc-service \
> > "
> > I also tried with:
> > RDEPENDS:packagegroup-base:remove = "packagegroup-base-rauc"
> > 
> > But in both cases I get that error.
> > I do not have any idea how I could remove it from DISTRO_FEATURES for that 
> > image. I think that
> > is not possible.
> > 
> > If I remove it "manually" using ROOTFS_POSTPROCESS_COMMAND, I have to take 
> > care of all
> > dependencies as well and it would be still be visible in .manifest file 
> > that it is there
> > although it is not.
> > 
> > That is why I am looking for a more elegant way to achieve it.
> > 
> > Thanks!
> > 
> > 
> > 
> 
> 
> 

-- 
Pengutronix e.K.   | Enrico Jörns|
Embedded Linux Consulting & Support| https://www.pengutronix.de/ |
Steuerwalder Str. 21   | Phone: +49-5121-206917-180  |
31137 Hildesheim, Germany  | Fax:   +49-5121-206917-9|




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Alexander Kanavin
Ugh, that's a really ugly hack in meta-rauc. They shouldn't force
themselves into all images that way. You should file a ticket.

I would suggest either dropping rauc from DISTRO_FEATURES globally
(that may have unwanted side effects though and you should check what
else that affects), or using BBMASK as shown in the reference manual to
blacklist the offending bbappend, e.g. something like:

 BBMASK += "/meta-ti/recipes-misc/ meta-ti/recipes-ti/packagegroup/"
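
For the meta-rauc case discussed here, judging by the paths in the
links quoted below, that would presumably be:

 BBMASK += "meta-rauc/recipes-core/packagegroups/"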


Alex

On Wed, 18 Oct 2023 at 11:00, Ivan Stojanovic  wrote:
>
> We have 2 images, adios-image-standard and adios-image-recovery.
> In adios-image-standard, I want to have rauc.
> In adios-image-recovery due to lack of space, I want to remove it. It is a 
> part of it since it is in DISTRO_FEATURES.
>
> In the meta-rauc layer, they add rauc to packagegroup-base: 
> https://github.com/rauc/meta-rauc/blob/kirkstone/recipes-core/packagegroups/packagegroup-base.bbappend
>  via 
> https://github.com/rauc/meta-rauc/blob/kirkstone/recipes-core/packagegroups/packagegroup-base_rauc.inc.
>
> I tried removing it with PACKAGE_EXCLUDE:
> PACKAGE_EXCLUDE = "\
>   packagegroup-base-rauc \
> "
> as well as
> PACKAGE_EXCLUDE = "\
>   rauc \
>   rauc-mark-good \
>   rauc-service \
> "
> I also tried with:
> RDEPENDS:packagegroup-base:remove = "packagegroup-base-rauc"
>
> But in both cases I get that error.
> I do not have any idea how I could remove it from DISTRO_FEATURES for that 
> image. I think that is not possible.
>
> If I remove it "manually" using ROOTFS_POSTPROCESS_COMMAND, I have to take 
> care of all dependencies as well and it would be still be visible in 
> .manifest file that it is there although it is not.
>
> That is why I am looking for a more elegant way to achieve it.
>
> Thanks!
>
> 
>




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Ivan Stojanovic
We have 2 images, adios-image-standard and adios-image-recovery.
In adios-image-standard, I want to have rauc.
In adios-image-recovery due to lack of space, I want to remove it. It is a part 
of it since it is in DISTRO_FEATURES.

In the meta-rauc layer, they add rauc to packagegroup-base: 
https://github.com/rauc/meta-rauc/blob/kirkstone/recipes-core/packagegroups/packagegroup-base.bbappend
 via 
https://github.com/rauc/meta-rauc/blob/kirkstone/recipes-core/packagegroups/packagegroup-base_rauc.inc

I tried removing it with PACKAGE_EXCLUDE:
PACKAGE_EXCLUDE = "\
packagegroup-base-rauc \
"
as well as
PACKAGE_EXCLUDE = "\
rauc \
rauc-mark-good \
rauc-service \
"
I also tried with:
RDEPENDS:packagegroup-base:remove = "packagegroup-base-rauc"

But in both cases I get that error.
I do not have any idea how I could remove it from DISTRO_FEATURES for that 
image. I think that is not possible.

If I remove it "manually" using ROOTFS_POSTPROCESS_COMMAND, I have to take care 
of all dependencies as well and it would be still be visible in .manifest file 
that it is there although it is not.
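
(What I do today is roughly the following, where the removed file is
just a hypothetical example:

remove_rauc_files() {
    rm -f ${IMAGE_ROOTFS}${sysconfdir}/rauc/system.conf
}
ROOTFS_POSTPROCESS_COMMAND += "remove_rauc_files;"
)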

That is why I am looking for a more elegant way to achieve it.

Thanks!




Re: [yocto] Remove packagegroup from image recipes

2023-10-18 Thread Alexander Kanavin
I suppose you need to make an image recipe where the unneeded
packagegroup is not listed? Can you show the existing recipes?

Alex

On Wed, 18 Oct 2023 at 10:38, Ivan Stojanovic  wrote:
>
> Hi,
>
> I am trying to remove a package group (packagegroup-core-rauc) from a custom 
> image but it fails with the error:
> "package packagegroup-base requires packagegroup-base-rauc, but none of the 
> providers can be installed".
>
> We use Rauc for our "standard" image, but I want to remove it from the 
> "recovery" image. "rauc" is in DISTRO_FEATURES, which is fine for all images 
> except the "recovery" image.
>
> The only way I managed to remove it is using a post-process command, but I am 
> wondering if there is a more elegant way to remove it?
>
> Any ideas?
>
> Regards,
> Ivan
> 
>




[yocto] Remove packagegroup from image recipes

2023-10-18 Thread Ivan Stojanovic
Hi,

I am trying to remove a package group (packagegroup-core-rauc) from a custom 
image but it fails with the error:
"package packagegroup-base requires packagegroup-base-rauc, but none of the 
providers can be installed".

We use Rauc for our "standard" image, but I want to remove it from the 
"recovery" image. "rauc" is in DISTRO_FEATURES, which is fine for all images 
except the "recovery" image.

The only way I managed to remove it is using a post-process command, but I am 
wondering if there is a more elegant way to remove it?

Any ideas?

Regards,
Ivan




Re: [yocto] CDN for sstate.yoctoproject.org

2023-10-18 Thread Richard Purdie
On Tue, 2023-10-17 at 17:33 +0200, Alexander Kanavin wrote:
> Thanks for working on this! I finally got to playing with CDN mirror a
> bit, and it seems to basically work ok, so next I'm going to write AB
> tests that check that it remains useful. Specifically:
> 
> 1. What should the test be?
> 
> I tried 'bitbake -S printdiff core-image-sato' on a blank build
> directory, and the result against current master is:
> 
> ===
> Checking sstate mirror object availability: 100% |##########| Time: 0:05:35
> The differences between the current build and any cached tasks start
> at the following tasks:
> /srv/work/alex/poky/meta/recipes-sato/images/core-image-sato.bb:do_deploy_source_date_epoch
> /srv/work/alex/poky/meta/recipes-core/base-files/base-files_3.0.14.bb:do_install
> ===
> 
> I think this is pretty good! The two missing objects depend on the
> current date, and so should go to the exception list in the test, and
> otherwise there is a 100% match rate. The bitbake targets could be
> 'world core-image-sato-sdk', and target machines could be qemux86_64
> and qemuarm64.

That match does look good. As you say, the revision of the metadata
will change base-files so that is expected.

I'm torn on the targets to test as sato-sdk is a large image and world
is a lot of work. I'd be tempted to test sato, weston and full-cmdline?
World is a good test I guess and if from sstate, shouldn't have that
much of an issue. It does also prove things are working.

> Just to be sure that mismatches elsewhere will be reported as CDN
> misses, adding my shadow 4.14 update and re-running the command shows:
> 
> The differences between the current build and any cached tasks start
> at the following tasks:
> /srv/work/alex/poky/meta/recipes-gnome/hicolor-icon-theme/hicolor-icon-theme_0.17.bb:do_package
> /srv/work/alex/poky/meta/recipes-sato/pulseaudio-sato/pulseaudio-client-conf-sato_1.bb:do_package
> /srv/work/alex/poky/meta/recipes-core/initscripts/initscripts_1.0.bb:do_package
> /srv/work/alex/poky/meta/recipes-core/update-rc.d/update-rc.d_0.8.bb:do_package
> /srv/work/alex/poky/meta/recipes-multimedia/alsa/alsa-ucm-conf_1.2.10.bb:do_package
> /srv/work/alex/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2023.09.01.bb:do_package
> /srv/work/alex/poky/meta/recipes-sato/packagegroups/packagegroup-core-x11-sato.bb:do_package
> /srv/work/alex/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_6.5.bb:do_package
> /srv/work/alex/poky/meta/recipes-core/packagegroups/packagegroup-core-ssh-dropbear.bb:do_package
> /srv/work/alex/poky/meta/recipes-graphics/packagegroups/packagegroup-core-x11.bb:do_package
> /srv/work/alex/poky/meta/recipes-sato/shutdown-desktop/shutdown-desktop.bb:do_package
> virtual:native:/srv/work/alex/poky/meta/recipes-support/libbsd/libbsd_0.11.7.bb:do_prepare_recipe_sysroot
> /srv/work/alex/poky/meta/recipes-support/ca-certificates/ca-certificates_20211016.bb:do_package
> /srv/work/alex/poky/meta/recipes-core/sysvinit/sysvinit-inittab_2.88dsf.bb:do_package
> virtual:native:/srv/work/alex/poky/meta/recipes-extended/shadow/shadow_4.14.0.bb:do_recipe_qa
> ... (a few more missing do_package objects)
> ===
> 
> 2. When to run the test, and against which commit in poky?
> 
> RP suggested that the test could run in antiphase with the nightly
> a-quick (i.e. 12 hours after). We can do that for a start, but I'm
> (perhaps prematurely) worrying that this will be unstable: either
> because the objects from the nightly haven't yet propagated to CDN, or
> because master has meanwhile (e.g. in the 12 hours that have passed)
> gained new commits without corresponding cache objects. Are those real
> concerns?
> 
> Thoughts?

You're right, we need to run the test against a commit we know has been
built on the autobuilder. If we run with a newer commit we could easily
see mismatches since it won't have been built yet.

I'm less worried about CDN propagation, that should happen quickly and
if something isn't present, it might just be slow to obtain/lookup as
it ripples through the system.

Either we start logging what we've built so we get the last known
revisions, or we run the test as part of a-quick/a-full at the end of
the build? I don't really want to extend the build but I'm not sure we
have much choice.

Cheers,

Richard





Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3.rc1)

2023-10-18 Thread Richard Purdie
On Wed, 2023-10-18 at 06:16 +, Pokybuild User wrote:
> A build flagged for QA (yocto-4.3.rc1) was completed on the autobuilder 
> and is available at:
> 
> 
> https://autobuilder.yocto.io/pub/releases/yocto-4.3.rc1
> 
> 
> Build URL: 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/6062

There was one failure in the build, a serial login issue on ttyS1, so
an occurrence of our annoying 6.5 issue. This is the first time we've
seen it with the workaround applied.

The question is whether to proceed with rc1 in testing, or apply the
upstream fixes and try an rc2 with that? I'm torn...

Cheers,

Richard




[yocto] QA notification for completed autobuilder build (yocto-4.3.rc1)

2023-10-18 Thread Pokybuild User

A build flagged for QA (yocto-4.3.rc1) was completed on the autobuilder and 
is available at:


https://autobuilder.yocto.io/pub/releases/yocto-4.3.rc1


Build URL: 
https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/6062

Build hash information: 

bitbake: 5419a8473d6d4cd1d01537de68ad8d72cf5be0b2
meta-agl: 4063b4f9a712e32337c1d9678b2f2481dde2a346
meta-arm: 3ed13d25a065f29bd46ee725c708d12ebc3f175a
meta-aws: a30a2b66f1447dc5abdbca6c5de743e39c08b99b
meta-intel: 1bca60610c597571769edc4a057a04bfdbd2f994
meta-mingw: 65ef95a74f6ae815f63f636ed53e140a26a014ce
meta-openembedded: 35bcd8c6ddfb6bc8729d0006dab887afcc772ec9
meta-virtualization: 827092c2ec925ea3a024dcda9ccfd738e351e6ba
oecore: 4f84537670020a8d902248479efa9f062089c0d3
poky: f65f100bc5379c3153ee00b2aa62ea5c9a66ec79



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.pur...@linuxfoundation.org


 




[linux-yocto] [PATCH] usb: hcd-pci: remove the action of faking interrupt request

2023-10-18 Thread Meng Li via lists.yoctoproject.org
With the steps below, the PCIe-USB driver also hangs even though
commit c548795abe0d("USB: add check to detect host controller
hardware removal") exists in the latest kernel.
- hot-unplug the PCIe-USB card
- echo 1 > /sys/bus/pci/devices//remove
The code hangs in function xhci_urb_dequeue() because it is not able
to get the spin lock xhci->lock. The root cause is that when the PCIe-USB
card is hot-unplugged, an exception ("Internal error: synchronous external
abort") occurs in function xhci_urb_enqueue(), which acquires the xhci->lock
while sending data. After the exception is processed, the usb-storage kthread
is killed and the code is not able to return to function xhci_urb_enqueue()
again, so the xhci->lock is never released and the device removal hangs
in xhci_urb_dequeue().
Even if the code from commit c548795abe0d is moved to function
xhci_pci_remove(), the deadlock is still not resolved, because the exception
("Internal error: synchronous external abort") occurs while the PCIe card is
absent, and the code does not return from it.
Because the above steps are not a correct operation for a PCIe device, the
PCIe-USB card should not be hot-unplugged directly before the device is
removed at the driver level. Moreover, it is better to offer dedicated power
to support PCIe hot-plug, so as to avoid destroying the PCIe hardware.
Based on the above analysis, I think the action of faking an interrupt
request should be removed; it is not needed to support an unreasonable use
case. Removing the fake interrupt request also avoids the calltrace below
in the RT kernel.
Call trace:
 ..
 __might_resched+0x160/0x1c0
 rt_spin_lock+0x38/0xb0
 xhci_irq+0x44/0x16d0
 usb_hcd_irq+0x38/0x5c
 usb_hcd_pci_remove+0x84/0x14c
 xhci_pci_remove+0x78/0xc0
 pci_device_remove+0x44/0xcc
 device_remove+0x54/0x8c
 device_release_driver_internal+0x1ec/0x260
 device_release_driver+0x20/0x30
 pci_stop_bus_device+0x8c/0xcc
 pci_stop_and_remove_bus_device_locked+0x28/0x44
 ..
 el0t_64_sync_handler+0xf4/0x120
 el0t_64_sync+0x18c/0x190

Fixes: c548795abe0d ("USB: add check to detect host controller hardware 
removal")
Cc: sta...@vger.kernel.org
Signed-off-by: Meng Li 
---
 drivers/usb/core/hcd-pci.c | 8 
 1 file changed, 8 deletions(-)

diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
index 9b77f49b3560..3202583b0a40 100644
--- a/drivers/usb/core/hcd-pci.c
+++ b/drivers/usb/core/hcd-pci.c
@@ -322,14 +322,6 @@ void usb_hcd_pci_remove(struct pci_dev *dev)
if (pci_dev_run_wake(dev))
		pm_runtime_get_noresume(&dev->dev);
 
-   /* Fake an interrupt request in order to give the driver a chance
-* to test whether the controller hardware has been removed (e.g.,
-* cardbus physical eject).
-*/
-   local_irq_disable();
-   usb_hcd_irq(0, hcd);
-   local_irq_enable();
-
/* Note: dev_set_drvdata must be called while holding the rwsem */
if (dev->class == CL_EHCI) {
		down_write(&companions_rwsem);
-- 
2.34.1





[linux-yocto]: [kernel/kernel-rt v6.1]: usb: hcd-pci: remove the action of faking interrupt request

2023-10-18 Thread Meng Li via lists.yoctoproject.org
From: Limeng 

Hi Bruce,

This patch removes the action of faking an interrupt request.
Could you please help merge this patch into the linux-yocto kernel first? I
will send it to the mainline upstream later.
The branches are v6.1/standard/nxp-sdk-6.1/nxp-soc and
v6.1/standard/preempt-rt/nxp-sdk-6.1/nxp-soc


diffstat info as below:

 hcd-pci.c |8 
 1 file changed, 8 deletions(-)

thanks,
Limeng
