Re: [yocto] [patchtest-oe][PATCH] test_patch_cve.py: fix cve tag checking logic

2018-11-08 Thread Mittal, Anuj
On Wed, 2018-11-07 at 09:01 +, Richard Purdie wrote:
> On Fri, 2018-11-02 at 14:03 +0800, Chen Qi wrote:
> > The current logic for checking cve tag is not correct. It errors
> > out if and only if the patch contains a line which begins with
> > CVE-- and contains nothing else.
> > 
> > It will not error out if the patch contains no CVE information, nor
> > will it error out if the patch contains line like below.
> > 
> > 'Fix CVE--'
> > 
> > I can see that the cve tag checking logic tries to ensure the patch
> > contains something like 'CVE: CVE--'. So fix it to implement
> > that logic.
> > 
> > Signed-off-by: Chen Qi 
> > ---
> >  tests/test_patch_cve.py | 15 ---
> >  1 file changed, 8 insertions(+), 7 deletions(-)
> 
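[Editor's note: the check described in the commit message above can be sketched as follows. This is an illustrative reconstruction, not the actual patchtest code; the exact tag format and regex are assumptions based on the description.]

```python
import re

# A commit message should pass only if it carries a tag line of the form
# "CVE: CVE-YYYY-NNNN"; merely mentioning a CVE ID elsewhere in the text
# (e.g. "Fix CVE-2018-1234") is not enough.
CVE_TAG_RE = re.compile(r'^\s*CVE:\s*CVE-\d{4}-\d{4,}\s*$', re.MULTILINE)

def has_cve_tag(commit_message):
    """Return True if the message contains a proper 'CVE: CVE-...' tag line."""
    return bool(CVE_TAG_RE.search(commit_message))
```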
> Thanks, good find.
> 
> I've merged this and I believe the instance should have it applied
> now
> too.
> 

Not sure if this is related but it looks like the tests aren't running
at all now ...


https://patchwork.openembedded.org/project/oe-core/series/?ordering=-last_updated

Thanks,

Anuj
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [RFC] Yocto Autobuilder and LAVA Integration

2018-11-08 Thread Chan, Aaron Chun Yew
Hi Anibal/RP,

> In order to do a distributed Boards testing the Yocto Autobuilder 
> needs to publish in some place accessible the artifacts (image, 
> kernel, etc) to flash/boot the board and the test suite expected to 
> execute.

[Reply] That is correct. Linaro already has this in place using
https://archive.validation.linaro.org/directories/ and I have looked into it
as well; we can leverage that, but I am open to any suggestion you might have.
The idea here is that we have a place to store the published artifacts
remotely and deploy them using a native curl command with token access. Then,
based on your LAVA job definitions, we can instruct LAVA to fetch the images
via HTTPS. Having said that, the deploy stage in LAVA must be able to read a
token file in the LAVA job definition and pick up the binaries from a public
repo (e.g. git LFS).
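[Editor's note: a rough sketch of the token-based artifact fetch described above. The URL, token value, and header scheme are all assumptions; the actual hosting service and its authentication are still open questions in this thread.]

```python
import urllib.request

# Hypothetical artifact location and access token; substitute whatever
# the chosen hosting service actually uses.
ARTIFACT_URL = "https://example.org/artifacts/core-image-minimal.wic"
TOKEN = "secret-token"

def fetch_artifact(url, token, dest):
    """Download a published build artifact using a token-style auth header."""
    req = urllib.request.Request(url, headers={"Authorization": "Token " + token})
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as out:
        out.write(resp.read())
```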

In order for distributed board tests to happen, there are two items on my wish
list:

1. Public hosting of the binary repository, with access control.
2. Easy handshaking between two different CI systems (e.g.
   Jenkins/Autobuilder) and LAVA:
   a. Exchange of build properties (metadata), including hardware and
      system info.
   b. Test result reporting.
   
> I created a simple LAVA test definition that allows run testimage 
> (oe-test runtime) in my own LAVA LAB, is very simplistic only has a 
> regex to parse results and uses lava-target-ip and lava-echo-ipv4 to 
> get target and server IP addresses.

[Reply] Although the LAVA test shell has lava-target-ip and/or lava-echo-ipv4,
these only work within LAVA's scope. The way we retrieve the IPv4 address is
by reading the logs from LAVA through XML-RPC and grepping for the
pattern-matching string that contains the IP, even before the hardware is
fully initialized, then passing the IP back to the Yocto Autobuilder.

http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/tree/lava/trigger-lava-jobs
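[Editor's note: in outline, that log-scraping step looks something like the sketch below. The XML-RPC method name and the exact log format are illustrative; the real code is in the trigger-lava-jobs script linked above.]

```python
import re
import xmlrpc.client

def fetch_job_log(server_url, job_id):
    # Hypothetical XML-RPC call; the exact scheduler method name differs
    # between LAVA versions.
    server = xmlrpc.client.ServerProxy(server_url)
    return str(server.scheduler.job_output(job_id))

def find_device_ip(log_text):
    # lava-echo-ipv4 leaves the DUT address in the job log; grep the
    # first dotted-quad out of it and hand it back to the autobuilder.
    m = re.search(r'\b(\d{1,3}(?:\.\d{1,3}){3})\b', log_text)
    return m.group(1) if m else None
```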

> Some of the tasks I identified (if this is accepted):
> 
> - Yocto-autobuilder-helper: Implement/adapt to cover this new behavior,
> move the EXTRA_PLAIN_CMDS to a class.
> - Poky/OE: Review/fix test-export or provide another mechanism to export
> the test suite.

[Reply] I would like to understand further what the implementation here is and
how it addresses the problems we have today. I believe Tim tried in the past
to enable testexport and transfer the exported tests onto the DUT, but it was
not very successful and we found breakage.

> - Yocto-autobuilder-helper: Create a better approach to re-use LAVA job
> templates across boards.

[Reply] I couldn't be more supportive of having a common LAVA job template
across boards, but I would like to stress that we don't know exactly how the
community will define their own LAVA job definitions. What I have in mind as
of today is to create a placeholder where LAVA job templates can be defined,
so that other boards/communities can reuse the same template if it fits their
use cases. In general, the templates we have today were created to fit Yocto
Project use cases.

Lastly, there is some work I've done on provisioning QEMU on LAVA, sourcing
images from Yocto Project public releases; I am looking at where we can
upstream this:
https://github.com/lab-github/yoctoproject-lava-test-shell

Thanks!

Cheers,
Aaron Chan
Open Source Technology Center Intel 

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Thursday, November 8, 2018 6:45 AM
To: Anibal Limon ; yocto@yoctoproject.org
Cc: Nicolas Dechesne ; Chan, Aaron Chun Yew 

Subject: Re: [RFC] Yocto Autobuilder and LAVA Integration

Hi Anibal,

On Wed, 2018-11-07 at 16:25 -0600, Anibal Limon wrote:
> We know the need to execute OE testimage on real HW, not only QEMU.
> 
> I'm aware that currently there is an implementation in the Yocto 
> Autobuilder Helper; this initial implementation looks pretty good, 
> separating the parts for template generation [1] and the script to send 
> jobs to LAVA [2].
> 
> There are some limitations.
> 
> - Requires that the boards are accessible through SSH (same network?) 
> by the Autobuilder, so no distributed LAB testing.
> - LAVA doesn't know about test results because the execution is 
> injected via SSH.
> 
> In order to do a distributed Boards testing the Yocto Autobuilder 
> needs to publish in some place accessible the artifacts (image, 
> kernel, etc) to flash/boot the board and the test suite expected to 
> execute.
> 
> Currently there is a functionality called testexport (not too
> used/maintained) that allows you to export the test suite.

I continue to have mixed feelings ab

Re: [yocto] [OE-core] 2.6 migration guide

2018-11-08 Thread Paul Eggleton
On Friday, 9 November 2018 11:41:42 AM NZDT Paul Eggleton wrote:
> You could accomplish the latter using PACKAGECONFIG_pn-python3 = "pgo" at
> the configuration level 

Oops, I meant PACKAGECONFIG_remove_pn-python3 = "pgo" here.

Cheers,
Paul

-- 

Paul Eggleton
Intel Open Source Technology Centre




Re: [yocto] 2.6 migration guide

2018-11-08 Thread Paul Eggleton
On Monday, 5 November 2018 4:32:26 PM NZDT Paul Eggleton wrote:
> On Wednesday, 31 October 2018 11:06:31 AM NZDT Scott Rifenbark wrote:
> > I have an initial section at
> > https://yoctoproject.org/docs/2.6/ref-manual/ref-manual.html#moving-to-the-yocto-project-2.6-release,
> > which is based on Richard's input.  I am sure there are more items.
> 
> OK, so I have removed Richard's items from the wiki page (since they're now
> in the manual) and added items I have gathered from reviewing all of the 
> git commits in the release. Let me know if you need clarification on anything
> - everyone else, let us know if I missed or messed up something.
> 
>   https://wiki.yoctoproject.org/wiki/FutureMigrationGuide

I forgot one thing:

--- snip ---

Python 3 profile-guided optimisation
------------------------------------

The python3 recipe now enables profile-guided optimisation; this requires a 
little extra build time in exchange for improved performance on the target at 
runtime, and is only enabled if the current MACHINE has support for user-mode 
emulation in QEMU (i.e. "qemu-usermode" is in MACHINE_FEATURES, which it is by 
default, so the machine configuration would need to have opted out for it not 
to be). If you wish to disable Python profile-guided optimisation regardless of 
the value of MACHINE_FEATURES, then ensure that PACKAGECONFIG for python3 does 
not contain "pgo". You could accomplish the latter using 
PACKAGECONFIG_pn-python3 = "pgo" at the configuration level or by setting 
PACKAGECONFIG using a bbappend for the python3 recipe.

--- snip ---
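[Editor's note: for concreteness, the two approaches mentioned above could look like the sketch below, using the _remove override form from the correction posted separately in this thread.]

```
# In local.conf (configuration level):
PACKAGECONFIG_remove_pn-python3 = "pgo"

# Or in a python3_%.bbappend in your own layer:
PACKAGECONFIG_remove = "pgo"
```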


> BTW I think the section on postinstalls (from Richard's input) is a little 
> terse
> and needs expansion - most importantly it needs to describe what actions the
> user might need to take. I'll take care of this on review if nobody else does.

So what I think we need here is to copy part of the "Using exit 1 to explicitly 
defer a postinstall script until first boot..." paragraph that's mentioned in 
the 2.5 migration section. Would it make sense to actually break this out to 
its own place in the documentation (on "Deferring postinstalls to first boot") 
and then link to that from both places?

While I think of it, could you also move the _remove item that's in the 
"Bitbake Changes" section into the "Override Changes" section?

Thanks,
Paul


-- 

Paul Eggleton
Intel Open Source Technology Centre


-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [mono] AOT compile system assemblies into target image

2018-11-08 Thread Smith, Virgil (US)
Is there a way to configure meta-mono to ahead-of-time (AOT) compile the
system assemblies for the target?
Specifically, I would like to not have to run the following on my targets.
sudo mono --aot /usr/lib/mono/4.5/mscorlib.dll
for f in /usr/lib/mono/gac/*/*/*.dll; do sudo mono --aot "$f"; done

Presumably this would require a "mono-cross" recipe to build mono in
cross-compilation mode, setting up an environment in which running mono --aot
finds the appropriate cross versions of the assembler and linker, and a recipe
that uses the target sysroot files as input (or extending the main mono
recipe).


I've also posted this question to Stack Overflow in case someone here wants to 
gain some SO rep.
https://stackoverflow.com/questions/53213420/aot-compile-system-assemblies-using-yocto-meta-mono



Re: [yocto] Set linux capabilities on binary on a recipe in meta-oe layer

2018-11-08 Thread Piotr Tworek
Hi Markus,

Have you tried doing it in a postinst step executed on your target? At rootfs
time, your package's postinst can run before the nodejs package's files are
present, which would match the error you are seeing. Try:

pkg_postinst_ontarget_${PN} () {
setcap cap_net_raw+eip $D${bindir}/node
}

RDEPENDS_${PN} += "libcap-bin"

/ptw

> I have tested to set capabilities on the node binary within a custom recipe
> (custom layer) but that failed.
> 
> pkg_postinst_${PN} () {
> setcap cap_net_raw+eip $D${bindir}/node
> }
> PACKAGE_WRITE_DEPS = "libcap-native"
> RDEPENDS_${PN} = "libcap"
> 
> The error message:
> 
> ERROR: core-image-full-cmdline-1.0-r0 do_rootfs: [log_check]
> core-image-full-cmdline: found 1 error message in the logfile:
> [log_check] Failed to set capabilities on file
> `/home/ubuntu/yocto-sumo/build/tmp/work/raspberrypi3-poky-linux-gnueabi/core
> -image-full-cmdline/1.0-r0/rootfs/usr/bin/node' (No such file or directory)
> 
> When I check, the node binary is there in the rootfs directory. It seems
> that when the pkg_postinst function is executed, the node binary is not
> there.
> 
> What am I missing? Any answer is much appreciated!
> 
> Regards,
> Markus
> 
> On Wed, 7 Nov 2018 at 11:32, Markus W  wrote:
> > Hi!
> > 
> > Background:
> > In my raspberry project I am developing a nodejs app that needs access to
> > bluetooth/ble device. I want to run the node application as non root user
> > for security reasons. In order to get access from within the app, the node
> > binary need to have the following capability cap_net_raw+eip set. I am
> > using the nodejs recipe from meta-oe and added it in my local.conf:
> > 
> > IMAGE_INSTALL_append = " nodejs i2c-tools bluez5 kernel-image
> > kernel-devicetree"
> > 
> > Question:
> > Where should I apply the following command? setcap cap_net_raw+eip
> > /usr/bin/node
> > 
> > What are my options? Can I create a recipe in a different package that
> > will apply the above command on the meta-oe package for the nodejs recipe?
> > 
> > I have been following this thread (
> > https://lists.yoctoproject.org/pipermail/yocto/2016-June/030811.html),
> > but the node binaries and my node-app are in different layers and
> > packages.
> > 
> > Any advice on how to do this is much appreciated!
> > 
> > Regards,
> > Markus






[yocto] QA notification for completed autobuilder build (yocto-2.4.4.rc1)

2018-11-08 Thread Poky Build User


A build flagged for QA (yocto-2.4.4.rc1) was completed on the autobuilder and 
is available at:


https://autobuilder.yocto.io/pub/releases/yocto-2.4.4.rc1


Build hash information: 

bitbake: 8bd16328a9332c57b03198826e22b48fadcd21d9
eclipse-poky-neon: 6d0598b1d567a07bd20c5c99c65a404844a8f8c3
eclipse-poky-oxygen: b9eef4c1b3e14cfc0c81c902c3e52d864dfc3a76
meta-gplv2: f875c60ecd6f30793b80a431a2423c4b98e51548
meta-intel: 4ee8ff5ebe0657bd376d7a79703a21ec070ee779
meta-mingw: 1cc620b38f6f30a0bdd181783297998fe073387f
meta-qt3: 8ef7261a331e0869a0461ab2fb44e39980cedc02
meta-qt4: e290738759ef3f39c9e079eaa9b606a62107e5ba
oecore: 8a2c177c7dad5c838b3c6abd3088a2bc3896a6a3
poky: 940da2e688cc6ae3cc3d95842033c3e51bd9fe29



This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.pur...@linuxfoundation.org


 


[yocto] [yocto-autobuilder-helper][PATCH] config.json: Remove sstate, add oe-selftest, and replace beaglebone

2018-11-08 Thread Michael Halstead
We want to build everything without sstate during bringup testing.
Adding oe-selftest will catch errors other targets wouldn't.  We just want to
test on qemuarm64 and qemux86-64 architectures so remove beaglebone as an
unneeded third. This takes advantage of the template as well.

Signed-off-by: Michael Halstead 
---
 config.json | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/config.json b/config.json
index 2945155..abc5928 100644
--- a/config.json
+++ b/config.json
@@ -146,16 +146,27 @@
 }
 },
"nightly-bringup" : {
-"MACHINE" : "qemuarm",
 "TEMPLATE" : "nightly-arch",
+"SSTATEDIR" : [],
+"MACHINE" : "qemuarm64",
 "step2" : {
-"MACHINE" : "beaglebone-yocto",
-"BBTARGETS" : "core-image-sato core-image-sato-dev core-image-sato-sdk core-image-minimal core-image-minimal-dev core-image-sato-sdk-ptest core-image-sato:do_populate_sdk"
+"MACHINE" : "qemux86-64"
 },
 "step5" : {
 "MACHINE" : "qemux86-64",
 "SDKMACHINE" : "x86_64",
"BBTARGETS" : "core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk"
+},
+"step6" : {
+"MACHINE" : "qemux86-64",
+"SDKMACHINE" : "x86_64",
+"PACKAGE_CLASSES" : "package_rpm",
+"PRSERV" : false,
+"extravars" : [
+"RPM_GPG_SIGN_CHUNK = '1'"
+],
+"EXTRACMDS" : ["checkvnc; DISPLAY=:1 oe-selftest --skip-tests distrodata.Distrodata.test_checkpkg -j 15"],
+"ADDLAYER" : ["${BUILDDIR}/../meta-selftest"]
 }
 },
 "nightly-mips" : {
-- 
2.17.2



Re: [yocto] Set linux capabilities on binary on a recipe in meta-oe layer

2018-11-08 Thread Markus W
I have tested to set capabilities on the node binary within a custom recipe
(custom layer) but that failed.

pkg_postinst_${PN} () {
setcap cap_net_raw+eip $D${bindir}/node
}
PACKAGE_WRITE_DEPS = "libcap-native"
RDEPENDS_${PN} = "libcap"

The error message:

ERROR: core-image-full-cmdline-1.0-r0 do_rootfs: [log_check]
core-image-full-cmdline: found 1 error message in the logfile:
[log_check] Failed to set capabilities on file
`/home/ubuntu/yocto-sumo/build/tmp/work/raspberrypi3-poky-linux-gnueabi/core-image-full-cmdline/1.0-r0/rootfs/usr/bin/node'
(No such file or directory)

When I check, the node binary is there in the rootfs directory. It seems
that when the pkg_postinst function is executed, the node binary is not
there.

What am I missing? Any answer is much appreciated!

Regards,
Markus

On Wed, 7 Nov 2018 at 11:32, Markus W  wrote:

> Hi!
>
> Background:
> In my raspberry project I am developing a nodejs app that needs access to
> bluetooth/ble device. I want to run the node application as non root user
> for security reasons. In order to get access from within the app, the node
> binary need to have the following capability cap_net_raw+eip set. I am
> using the nodejs recipe from meta-oe and added it in my local.conf:
>
> IMAGE_INSTALL_append = " nodejs i2c-tools bluez5 kernel-image
> kernel-devicetree"
>
> Question:
> Where should I apply the following command? setcap cap_net_raw+eip
> /usr/bin/node
>
> What are my options? Can I create a recipe in a different package that
> will apply the above command on the meta-oe package for the nodejs recipe?
>
> I have been following this thread (
> https://lists.yoctoproject.org/pipermail/yocto/2016-June/030811.html),
> but the node binaries and my node-app are in different layers and packages.
>
> Any advice on how to do this is much appreciated!
>
> Regards,
> Markus
>