Re: [yocto] populate_sdk with my image

2019-11-21 Thread Mark Hatle



On 11/21/19 12:00 PM, Mauro Ziliani wrote:
> Thanks.
> 
> Is this true for a Krogoth-based project?

Same class, slightly different semantics.  I don't believe src-pkgs existed yet
at that point, but dev-pkgs would have.

You will have to investigate the class for the parameters, but the general
behavior is the same.

--Mark

> Il 21/11/19 17:40, Mark Hatle ha scritto:
>> populate_sdk uses the same configuration as the regular image, as well as 
>> adding
>> "dev-pkgs dbg-pkgs src-pkgs" and optionally doc-pkgs.
>>
>> See:
>> http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/populate_sdk_base.bbclass
>>
>> Lines 3-11, and 22.
>>
>> If dev-pkgs/src-pkgs isn't including your Qt/Qml development components, then
>> they may not be packaged properly.
>>
>> The way dev-pkgs works (line 5) is by taking a list of each package 
>> installed in
>> the system and then trying to add '-dev' to it, and then install that.  
>> (Roughly)
>>
>> --Mark
>>
>>
>> On 11/21/19 10:31 AM, Mauro Ziliani wrote:
>>> Hi all.
>>>
>>> I have a recipe for my image that depends on Qt/Qml recipes.
>>>
>>> When I do
>>>
>>> bitbake -c populate_sdk myimage.bb
>>>
>>>
>>> the SDK doesn't contain the dev versions of the Qt/Qml libraries installed
>>> in the final image.
>>>
>>>
>>> I am managing the bitbake variables TOOLCHAIN_TARGET_TASK and
>>> TOOLCHAIN_HOST_TASK, adding the dependencies manually.
>>>
>>>
>>> Is there an automatic way to do that?
>>>
>>>
>>> M
>>>
>>>


Re: [yocto] populate_sdk with my image

2019-11-21 Thread Mark Hatle
populate_sdk uses the same configuration as the regular image, as well as adding
"dev-pkgs dbg-pkgs src-pkgs" and optionally doc-pkgs.

See:
http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/populate_sdk_base.bbclass

Lines 3-11, and 22.

If dev-pkgs/src-pkgs isn't including your Qt/Qml development components, then
they may not be packaged properly.

The way dev-pkgs works (line 5) is by taking a list of each package installed in
the system and then trying to add '-dev' to it, and then install that.  
(Roughly)
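
[Editorial note, not part of the original mail: a hedged sketch of how this is
usually driven. SDKIMAGE_FEATURES controls the complementary '-dev'/'-dbg'
globs, and the TOOLCHAIN_*_TASK variables remain available for anything the
mapping misses; the package names below are examples only.]

  # local.conf or image recipe -- sketch for a recent OE-core, verify against
  # your release (pre-honister '_append' syntax shown).
  SDKIMAGE_FEATURES = "dev-pkgs dbg-pkgs src-pkgs"
  # Explicit fallback if a needed package is not named <pkg>-dev:
  TOOLCHAIN_TARGET_TASK_append = " qtbase-dev qtdeclarative-dev"
  TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"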

--Mark


On 11/21/19 10:31 AM, Mauro Ziliani wrote:
> Hi all.
> 
> I have a recipe for my image that depends on Qt/Qml recipes.
> 
> When I do
> 
> bitbake -c populate_sdk myimage.bb
> 
> 
> the SDK doesn't contain the dev versions of the Qt/Qml libraries installed in
> the final image.
> 
> 
> I am managing the bitbake variables TOOLCHAIN_TARGET_TASK and
> TOOLCHAIN_HOST_TASK, adding the dependencies manually.
> 
> 
> Is there an automatic way to do that?
> 
> 
> M
> 
> 


Re: [yocto] busybox + SELinux (warrior) - reboot issue

2019-11-21 Thread Mark Hatle
I've been trying to find time to look into it, but I've not had any so far.

I'd suggest trying it on a more complete Linux system first to see if that resolves
the issue.  If it does, then it's simply a configuration issue and you can use the
audit messages to help figure it out, but the fact it's rebooting suggests to
me that something is incorrect in the initscripts when used with busybox.

--Mark

On 11/21/19 8:54 AM, Yair Itzhaki wrote:
> Anybody?
> 
>  
> 
> Thanks,
> 
> Yair
> 
>  
> 
>  
> 
> 


Re: [yocto] :how to solve the basehash value changed from 'xxx' to 'aaaa' ?

2019-11-18 Thread Mark Hatle
You are changing the value of something in there dynamically.  Most likely
you've done something like embed the current date and time.

If you do something like that, you need to evaluate it -once- during parse time
and not again.  This will fix the hash value at parse time and not change it 
later.
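
[Editorial note, not part of the original mail: a hedged sketch of the usual
pattern; the variable name is illustrative only.]

  # Re-evaluated on every (re)parse, so the basehash keeps changing:
  #   BUILD_ID = "${@time.strftime('%Y%m%d%H%M%S', time.gmtime())}"
  # Deterministic alternative: reuse the build-wide DATETIME and keep it out
  # of the task signature:
  BUILD_ID = "${DATETIME}"
  BUILD_ID[vardepsexclude] = "DATETIME"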

--Mark

On 11/12/19 10:20 PM, www wrote:
> Dear all,
> 
> When I modify the os-release file in my yocto project, some errors appear,
> and how can I solve them? Who can give me some help or advice? Thank you!
> I carried out the recommended commands and it didn't work.
> 
> ERROR: os-release-1.0-r0 do_compile: Taskhash mismatch
> ce133f0458608e03aa55224df28156e523e54903115efbbcd62946f84a867201 versus
> 7269881f0eb1759ed420a2db4c04fb477cd8c1288bc5f82df5c8161bb926ea1f for
> /home/temp/wanghp/wsp/git_s/local-source/obmc-sugon/entity_fruu/meta/recipes-core/os-release/os-release.bb.do_compile
> ERROR: Taskhash mismatch
> ce133f0458608e03aa55224df28156e523e54903115efbbcd62946f84a867201 versus
> 7269881f0eb1759ed420a2db4c04fb477cd8c1288bc5f82df5c8161bb926ea1f for
> /home/temp/wanghp/wsp/git_s/local-source/obmc-sugon/entity_fruu/meta/recipes-core/os-release/os-release.bb.do_compile
> ERROR: When reparsing
> /home/temp/wanghp/wsp/git_s/local-source/obmc-sugon/entity_fruu/meta/recipes-core/os-release/os-release.bb.do_compile,
> the basehash value changed from
> 99a42a1a3b1a151de604267b159558ecaf1031a3bec8917df132c81302e729a5 to
> 4f3288a8763e2e1af78e4b3cdd9c0c0ccb3b0d5c78a3073c188b22200df2a9b0. The
> metadata is not deterministic and this needs to be fixed.
> ERROR: The following commands may help:
> ERROR: $ bitbake os-release -cdo_compile -Snone
> ERROR: Then:
> ERROR: $ bitbake os-release -cdo_compile -Sprintdiff
> 
> ERROR: When reparsing
> /home/temp/wanghp/wsp/git_s/local-source/obmc-sugon/entity_fruu/meta/recipes-core/os-release/os-release.bb.do_compile,
> the basehash value changed from
> 99a42a1a3b1a151de604267b159558ecaf1031a3bec8917df132c81302e729a5 to
> 47c30012daa6aa77be09a93fe21e66995361ef26b4487111005617db8cb4de59. The metadata
> is not deterministic and this needs to be fixed.
> ERROR: The following commands may help:
> ERROR: $ bitbake os-release -cdo_compile -Snone
> ERROR: Then:
> ERROR: $ bitbake os-release -cdo_compile -Sprintdiff
> 
> thanks,
> Byron
> 
> 
> 
>  
> 
> 


Re: [yocto] No Package Provides /bin/awk

2019-11-13 Thread Mark Hatle
Bitbake inspects the binaries and looks at the #! line.  You need to change the
line itself (via a patch) to /usr/bin/awk, and then it will pick up the
dependency automatically on a rebuild.
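
[Editorial note, not part of the original reply: the same rewrite can often be
done at install time instead of with a patch; the script name below is made
up, substitute the real one.]

  # recipe or bbappend -- hedged sketch, assumes the script lands in ${D}${bindir}
  do_install_append() {
      # rewrite '#!/bin/awk -f' (or similar) to '#!/usr/bin/awk -f'
      sed -i -e '1s,#!.*awk,#!/usr/bin/awk,' ${D}${bindir}/your-script
  }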

--Mark

On 11/13/19 12:14 PM, Wayne Li wrote:
> On further inspection of the older image my coworker made, it looks like awk 
> is
> located at "/usr/bin/awk".  I see this by just doing a "which awk" in the
> console of the T4240 RDB when the old image is loaded.  So it looks like the
> most likely case is that Khem Raj is correct and that bitbake is expecting awk
> to be in "/bin" when awk is actually in "/usr/bin".  In that case, I need to
> know where the line telling bitbake to look for awk in the "/bin" directory 
> is.
> 
> On Wed, Nov 13, 2019 at 12:02 PM Wayne Li  > wrote:
> 
> I'd also like to mention that my main concern with the ver_linux 
> files
> that I found in my project was that the shebang line was " #!/bin/sh"
> instead of "#!/bin/awk -f" which is the shebang line in the ver_linux file
> in the patch.  The patch wants to change the shebang line from "#!/bin/awk
> -f" to  "#!/usr/bin/awk -f" so I'm not sure how that change would 
> translate
> when I'm working with the shebang line "#!/bin/sh".  Or perhaps Yocto has
> changed since that patch was posted and maybe the place /bin/awk is
> specified is different?
> 
> On Wed, Nov 13, 2019 at 11:51 AM Wayne Li  > wrote:
> 
> So after further investigation, I'm fairly sure awk is actually 
> present
> in the target image.  Here are my reasons why I feel that this is the 
> case:
> 
> -I looked at the busybox menuconfig GUI that comes up when I run
> "bitbake -c menuconfig busybox" and it says awk is built-in. 
> -I looked at various def-config files I found by just doing a "grep 
> -rn
> "CONFIG_AWK"" and found that CONFIG_AWK seems to have been enabled
> throughout the project (there was the line CONFIG_AWK=y uncommented in
> the various def-config files I mentioned). 
> -I have an older version of the target image that my coworker (who has
> since left the company) created.  I just need to rebuild this image
> because I am trying to add some kernel modules to the image.  When I 
> run
> the command "awk" in the console for the T4240 RDB when the older
> version of the image is loaded, I do see the gawk help info come up.
> This shows awk is present in the older image.
> 
> But looking at what Khem Raj mentioned, perhaps bitbake is just not
> finding the awk because it's actually in /usr/bin when bitbake expects
> it to be in /bin?  Though I am a little confused about the link you
> sent, Khem Raj.  How exactly do I apply this patch?  I'm assuming 
> we
> have to change the file ver_linux?  I did a "find . -name "ver_linux""
> and I see multiple results:
> 
> bash-4.2$ find . -name "ver_linux"
> 
> ./build_t4240rdb-64b/tmp/work/ppc64e6500-fsl-linux/linux-libc-headers/4.1-r0/linux-4.1/scripts/ver_linux
> 
> ./build_t4240rdb-64b/tmp/work/ppce6500-fslmllib32-linux/lib32-linux-libc-headers/4.1-r0/linux-4.1/scripts/ver_linux
> 
> ./build_t4240rdb-64b/tmp/work/t4240rdb_64b-fsl-linux/kernel-devsrc/1.0-r0/package/usr/src/kernel/scripts/ver_linux
> 
> ./build_t4240rdb-64b/tmp/work/t4240rdb_64b-fsl-linux/kernel-devsrc/1.0-r0/image/usr/src/kernel/scripts/ver_linux
> 
> ./build_t4240rdb-64b/tmp/work/t4240rdb_64b-fsl-linux/kernel-devsrc/1.0-r0/packages-split/kernel-devsrc/usr/src/kernel/scripts/ver_linux
> 
> ./build_t4240rdb-64b/tmp/work-shared/t4240rdb-64b/kernel-source/scripts/ver_linux
> 
> ./build_t4240rdb/tmp/work/ppc64e6500-fslmllib64-linux/lib64-linux-libc-headers/4.1-r0/linux-4.1/scripts/ver_linux
> 
> ./build_t4240rdb/tmp/work/ppce6500-fsl-linux/linux-libc-headers/4.1-r0/linux-4.1/scripts/ver_linux
> 
> ./build_t4240rdb/tmp/work-shared/t4240rdb/kernel-source/scripts/ver_linux
> 
> Now the build I'm working on is build_t4240rdb-64b so the last three
> results in that search probably don't matter.  Though there are still
> six more results for when I search ver_linux.  So I'm not sure which 
> one
> I need to change.  Moreover, all of the ver_linux files I found more 
> or
> less look like the following:
> 
> https://gist.github.com/WayneZhenLi/c7475cf382a80bfd2de31e82c40c1677
> 
> Which seems to be very different from the ver_linux file mentioned in
> the patch.  This further confuses me on how to apply the patch. 
> 
> Or maybe do you guys think maybe the patch isn't the solution here? 
> Maybe there's some other reason bitbake isn't finding the awk?
> 
> -Thanks!, Wayne Li
> 
> On Wed, Nov 13, 2019 at 10:57 AM Khem Raj  

Re: [yocto] bitbake SRC_URI fetch Azure DevOps repository Azure DevOps Services Basic

2019-11-05 Thread Mark Hatle
When cloning a repository using the builtin fetcher that is git based, the
default protocol is 'git'.

If you want to use an alternative protocol, such as http or ssh, you must
specify the protocol to use.

In your example below, you have specified http:

SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"

Change the http to 'ssh' to switch to that protocol.  Note, for the fetcher the
system MUST be able to automatically login without user interaction.  This means
that you must have authorized keys enabled.

Note, ssh is not suggested as a protocol -- except for internal private
development -- because you can't easily share your recipes with others.  Many
corporations block ssh, and will only allow http via a proxy.
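
[Editorial note, not part of the original reply: the git fetcher wants the ssh
form written as user@host/path rather than the scp-style user@host:path.  A
hedged sketch -- organization/project/repo are placeholders, and whether this
exact form works against Azure DevOps was not confirmed in this thread:]

  SRC_URI = "git://git@vs-ssh.visualstudio.com/v3/org/project/repo;protocol=ssh;nobranch=1"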

--Mark

On 11/3/19 8:52 PM, Samuel Jiang (江騏先) wrote:
> Dear yocto developer,
> The below disscussed about azure devops clone issue including test result with
> Microsoft support team member.
> We wonder know how Bitbake access repository through SSH. Does it different
> between git and Azue DevOps?
> 
> Thanks,
> 
> Samuel Jiang
> -- Forwarded message --
> *From:* Alex Chong (Shanghai Wicresoft Co,.Ltd.) 
> *Date:* 2019-10-31 5:39 PM [+0800]
> *To:* Samuel Jiang (江騏先) 
> *Cc:* support , Alex Chong (Shanghai
> Wicresoft Co,.Ltd.) 
> *Subject:* RE: 119102923000220 use yocto bitbake SRC_URI fetch Azure DevOps
> repository Azure DevOps Services Basic
> 
>> Hi Samuel,
>>
>>  
>>
>> Thank you for reply.
>>
>>  
>>
>> I test and read the BitBake doc, let me share with you my finding and 
>> analysis.
>>
>>  1. I have verified that our Azure DevOps Organization Repo could be clone
>> through SSH authentication successfully. 
>>
>>  1. I create a new SSH key and add it to Azure DevOps SSH public keys. 
>>
>>  2. Then I use git clone command and clone this repo successfully. The url is
>> 
>> “v-chucho-micros...@vs-ssh.visualstudio.com:v3/v-chucho-microsoft/testCodeCoverage/testCodeCoverage”
>>
>>
>> My testing verify that SSH Authentication is available, and the url given by
>> Azure DevOps is correct.
>>
>>  
>>
>>  2. Then I also test it in GitHub, and succeed. The only difference is the 
>> url
>> of GitHub is “g...@github.com:xx.git” 
>>
>>  
>>
>>  3. So let’s come back to BitBake. I read more of the BitBake doc.(
>> 
>> https://www.yoctoproject.org/docs/1.8/bitbake-user-manual/bitbake-user-manual.html#bb-fetchers)
>> . Your command SRC_URI =
>> 
>> "git://quant...@vs-ssh.visualstudio.com:v3/quanta01/OpenBMC/crashdump;protocol=ssh;nobranch=1".
>> The BitBake example is below. 
>>
>> We can see that the BitBake syntax is like git SSH but not Azure DevOps SSH.
>> Do you try to use this command clone a repo from GitHub?
>>
>>  
>>
>>  4. Then I read the doc you shared. The customer also met your issue when
>> trying to clone Azure DevOps repo with SSH url. In my understanding, 
>> seems
>> BitBake Git fetcher is good for Git SSH but not ready to access the Azure
>> DevOps SSH. 
>>
>>  
>>
>> *_Action Plan_*
>>
>> Could you involve BitBake support team? We can deliver this testing result 
>> and
>> consult them how BitBake access Azure DevOps through SSH.
>>
>>  
>>
>> If you have any conern or query, please feel free to let me know.
>>
>>  
>>
>> Best Regards,
>>
>>  
>>
>> Alex Chong
>>
>>  
>>
>>
>> Support Engineer
>>
>> Microsoft APAC Developer Support Team
>>
>> Customer Service & Support (CSS)
>>
>> Email: v-chu...@microsoft.com 
>>
>> Office: +86 (21) 52638610
>>
>> Time zone: (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi
>>
>> Working time: 9:00am-6:00pm, Mon-Tue-Wed-Thu-Fri
>>
>>  
>>
>> *From:* Samuel Jiang (江騏先) 
>> *Sent:* Thursday, October 31, 2019 12:16 PM
>> *To:* Alex Chong (Shanghai Wicresoft Co,.Ltd.) 
>> *Cc:* support 
>> *Subject:* RE: 119102923000220 use yocto bitbake SRC_URI fetch Azure DevOps
>> repository Azure DevOps Services Basic
>>
>>  
>>
>> Hi Alex,
>>
>> I catch same problem on the yocto mail list
>> link: https://lists.yoctoproject.org/pipermail/yocto/2018-October/042736.html
>> 
>>
>> when I use SRC_URI =
>> "git://quant...@vs-ssh.visualstudio.com:v3/quanta01/OpenBMC/crashdump;protocol=ssh;nobranch=1"
>>
>> the bitbake response below error message:
>> git -c core.fsyncobjectfiles=0 ls-remote
>> ssh://quant...@vs-ssh.visualstudio.com:v3/quanta01/OpenBMC/crashdump  failed
>> with exit code 128, output:
>>
>> ssh: Could not resolve hostname vs-ssh.visualstudio.com:v3: Name or service
>> not known
>>
>> fatal: Could not read from remote repository.
>>
>> I think the bitbake call “git ls-remote” command however it could not 

Re: [yocto] [OE-core] [prelink-cross] Bug 13529 add SPDX identifier

2019-10-22 Thread Mark Hatle
I need to look through it further, but everything I looked at (and your comments
below) look good to me.

I'll try to get this merged soon.  (If you don't see it get merged, please ping
me.)  I'm preparing for ELC-E and am short on time right now.

--Mark

On 10/22/19 4:28 AM, Yann CARDAILLAC wrote:
> 
> 
> On Mon, Oct 21, 2019 at 4:35 PM Mark Hatle  <mailto:mark.ha...@kernel.crashing.org>> wrote:
> 
> On 10/21/19 4:43 AM, Yann CARDAILLAC wrote:
> > Hi Mark Hatle, Jakub Jelinek,
> 
> Jakub is no longer supporting this code, but he may have valuable 
> insights into
> licensing.
> 
> > I'm currently beginning the work on bug 13529:
> >
> > https://bugzilla.yoctoproject.org/show_bug.cgi?id=13529
> >
> > The purpose is to add SPDX identifier to scripts and sources.
> >
> > Most of the sources have licences in the sources, however some of them
> > don't, and I have a question about the others:
> >
> > - src/sha.c does not; shall it be GPLv2-or-later like most of the others?
> 
> /* sha.c - Functions to compute the SHA1 hash (message-digest) of files
>    or blocks of memory.  Complies to the NIST specification FIPS-180-1.
> 
>    Copyright (C) 2000, 2001, 2003 Scott G. Miller
> 
>    Credits:
>       Robert Klep mailto:rob...@ilse.nl>>  -- Expansion
> function fix
>    NOTE: The canonical source of this file is maintained in GNU coreutils.
> */
> 
> The last line is the key.  This apparently came from GNU coreutils.
> 
> From the SCM logs, sha.c was introduced in approx 2003-07-01 from what 
> it looks
> like.  So it's a pretty old version.  You will need to do some detective 
> work,
> and find older versions of coreutils until you find the one that 
> corresponds to
> the code that was checked in.  Start back in 2003 and work backwards as
> necessary.  (The version that matches to the original 2003-07-01 should 
> be the
> reasonable license to use.)
> 
> 
> Ok I found it ! https://github.com/coreutils/coreutils/blob/v4.5.8/lib/sha.c
> 
> from : https://github.com/coreutils/coreutils/blob/v4.5.8/COPYING it looks 
> like
> it's only GPLV2
> 
> 
> > - how to licence m4/libelf.m4 ? I'd prefer you to tell me exactly what 
> to
> add in
> > order to avoid errors
> 
> The original version of the m4/libelf.m4 was introduced 2001-09-27.  It 
> appears
> to me that it was written as part of the prelinker, so would be under the
> overall license of the prelinker.
> 
> Based on this, my assumption is that it is GPL-2.0
> 
> I do not see any 'or-later' clauses anywhere.
> 
> I have just added "dnl SPDX-License-Identifier: GPL-2.0-only" below the
> "Written by" comment; however, should it also be in the resulting template?
> I can also add it at the first line of the resulting file if necessary?
> 
> 
> > - what about *.C files ? They don't have licence header, they look like 
> C file
> > to me so I'd probably add :
> > // SPDX-License-Identifier: GPL-2.0-or-later
> 
> They are each simply test cases.  They would be covered by the overall
> 'COPYING' for the package.  Thus GPL-2.0
> 
> > - what about testsuite/ files ?
> 
> Same; anything with no specifically stated license will be GPL-2.0.
> 
> Done ! 
> 
> > Shall every thing just be GPL-2.0-or-later?
> 
> Also just to be clear.  As I am NOT the original author of this work, I 
> won't
> accept a patch to remove any existing license text from the headers in 
> this
> software, but I will accept the SPDX-License-Identifier to be added in 
> addition
> to the existing license text.
>  
> 
> If an existing file does not have any License text in it, then we will 
> need to
> assume that the COPYING file covers all software unless there is some 
> indicator
> it comes from another source with a different license.  For items w/o 
> existing
> licenses, just adding the SPDX-License-Identifier will be acceptable.
> 
> So in a header similar to:
> 
> /* Copyright (C) 2001, 2002, 2003, 2007 Red Hat, Inc.
>    Written by Jakub Jelinek mailto:ja...@redhat.com>>, 
> 2001.
> 
>    This program is free software; you can redistribute it and/or modify
>    it under the terms of the GNU General Public License as published by
>    the Free Software Foundation; either version 2, or (at your option)
>    any later version.
> 
>    This program is distributed in the hope that it will be useful,
>    

[yocto] toaster - in build watching mode

2019-10-22 Thread Mark Hatle
I'm using toaster in a build watching mode and I'm getting errors to the console
log such as:

File:
'/home/jenkins/workspace/OEBuild/build-32/oe-core/meta/classes/toaster.bbclass',
lineno: 130, function: toaster_package_dumpdata
 0126:    lpkgdata = {}
 0127:    datadir = os.path.join(pkgdatadir, 'runtime')
 0128:
 0129:    # scan and send data for each generated package
 *** 0130:    for datafile in os.listdir(datadir):
 0131:        if not datafile.endswith('.packaged'):
 0132:            lpkgdata = _toaster_load_pkgdatafile(datadir, datafile)
 0133:            # Fire an event containing the pkg data
 0134:            bb.event.fire(bb.event.MetadataEvent("SinglePackageInfo", lpkgdata), d)
Exception: FileNotFoundError: [Errno 2] No such file or directory:
'/home/jenkins/workspace/OEBuild/builds/build-32/tmp-glibc/work/core2-64-oe-linux/libxfixes/1_5.0.3-r0/pkgdata/runtime'


Also lots of messages like:

NOTE: We did not find one recipe for theconfiguration data package libxml2
Recipe matching query does not exist.
NOTE: We did not find one recipe for theconfiguration data package libx11-6
Recipe matching query does not exist.
NOTE: We did not find one recipe for theconfiguration data package
wpa-supplicant Recipe matching query does not exist.


In the toaster itself, I see the % status of the current build (which appears to
be correct).  However, during the build I didn't get any per task information
like I remembered seeing in the past.  Has this changed, or do we have a bug
somewhere?

--Mark


Re: [yocto] Useradd: crypted passwords longer than 8 characters

2019-10-21 Thread Mark Hatle
Crypt the password yourself and pass it in to the adduser command.
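
[Editorial note, not part of the original reply: a hedged sketch of passing a
pre-crypted password; the user name, uid and hash are placeholders, and the
'$' characters in the hash must be escaped so the shell does not expand them.]

  inherit useradd
  USERADD_PACKAGES = "${PN}"
  # hash generated offline with: openssl passwd -6 'verylongpAsswOrd'
  USERADD_PARAM_${PN} = "-u 1200 -d /home/myuser -s /bin/sh \
      -p '\$6\$saltsalt\$exampleHashGoesHere' myuser"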

--Mark

On 10/21/19 10:25 AM, Lukasz Zemla wrote:
> What is the best way in Yocto (warrior) to add crypted password to 
> /etc/shadow during buildtime?
> 
> Using useradd.bbclass in a standard way we may add only passwords <= 8 
> characters.
> "-p" parameterr followed by the output of "openssl passwd -crypt pAsswOrd" 
> works fine.
> 
> I thought that the class might be tricked by providing after "-p" the string returned 
> by "openssl passwd -6 verylongpAsswOrd", but it does not work: the password 
> in /etc/shadow file is truncated:
> 
> myuser:/D163GofCVEpMgZ.w2Ro3Z.b5S8XT1:18190:0:9:7:::
> 
> Any suggestions?
> 
> Thank you in advance.
> Lukasz Zemla
> 
> ***
> The information in this email is confidential and intended solely for the 
> individual or entity to whom it is addressed.  If you have received this 
> email in error please notify the sender by return e-mail, delete this email, 
> and refrain from any disclosure or action based on the information.
> ***
> 


Re: [yocto] [OE-core] [prelink-cross] Bug 13529 add SPDX identifier

2019-10-21 Thread Mark Hatle
On 10/21/19 4:43 AM, Yann CARDAILLAC wrote:
> Hi Mark Hatle, Jakub Jelinek,

Jakub is no longer supporting this code, but he may have valuable insights into
licensing.

> I'm currently beginning the work on bug 13529:
> 
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=13529
> 
> The purpose is to add SPDX identifier to scripts and sources.
> 
> Most of the sources have licences in the sources, however some of them don't,
> and I have a question about the others:
> 
> - src/sha.c does not; shall it be GPLv2-or-later like most of the others?

/* sha.c - Functions to compute the SHA1 hash (message-digest) of files
   or blocks of memory.  Complies to the NIST specification FIPS-180-1.

   Copyright (C) 2000, 2001, 2003 Scott G. Miller

   Credits:
  Robert Klep   -- Expansion function fix
   NOTE: The canonical source of this file is maintained in GNU coreutils.
*/

The last line is the key.  This apparently came from GNU coreutils.

From the SCM logs, sha.c was introduced in approx 2003-07-01 from what it looks
like.  So it's a pretty old version.  You will need to do some detective work,
and find older versions of coreutils until you find the one that corresponds to
the code that was checked in.  Start back in 2003 and work backwards as
necessary.  (The version that matches to the original 2003-07-01 should be the
reasonable license to use.)

> - how to licence m4/libelf.m4 ? I'd prefer you to tell me exactly what to add 
> in
> order to avoid errors

The original version of the m4/libelf.m4 was introduced 2001-09-27.  It appears
to me that it was written as part of the prelinker, so would be under the
overall license of the prelinker.

Based on this, my assumption is that it is GPL-2.0

I do not see any 'or-later' clauses anywhere.

> - what about *.C files ? They don't have licence header, they look like C file
> to me so I'd probably add :
> // SPDX-License-Identifier: GPL-2.0-or-later

They are each simply test cases.  They would be covered by the overall
'COPYING' for the package.  Thus GPL-2.0

> - what about testsuite/ files ?

Same; anything with no specifically stated license will be GPL-2.0.

> Shall every thing just be GPL-2.0-or-later?

Also just to be clear.  As I am NOT the original author of this work, I won't
accept a patch to remove any existing license text from the headers in this
software, but I will accept the SPDX-License-Identifier to be added in addition
to the existing license text.

If an existing file does not have any License text in it, then we will need to
assume that the COPYING file covers all software unless there is some indicator
it comes from another source with a different license.  For items w/o existing
licenses, just adding the SPDX-License-Identifier will be acceptable.

So in a header similar to:

/* Copyright (C) 2001, 2002, 2003, 2007 Red Hat, Inc.
   Written by Jakub Jelinek , 2001.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software Foundation,
   Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.  */

please add the header between the 'Written by' and the existing license text,
such as:

/* Copyright (C) 2001, 2002, 2003, 2007 Red Hat, Inc.
   Written by Jakub Jelinek , 2001.

   SPDX-License-Identifier: GPL-2.0-or-later

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by


> Best Regards,
> 
> Yann CARDAILLAC
> 
> -- 
> SMILE <http://www.smile.eu/>
> 
> 20 rue des Jardins
> 92600 Asnières-sur-Seine
> 
>   
> *Yann CARDAILLAC*
> Ingénieur Systèmes Embarqués
> 
> email yann.cardail...@smile.fr <mailto:yann.cardail...@smile.fr>
> url http://www.smile.eu
> 
> Twitter <https://twitter.com/GroupeSmile> Facebook
> <https://www.facebook.com/smileopensource> LinkedIn
> <https://www.linkedin.com/company/smile> Github <https://github.com/Smile-SA>
> 
> 
> 
> eco For the planet, only print this email if necessary
> 


Re: [yocto] [layerindex-web] Enabled zeus, not working...

2019-10-14 Thread Mark Hatle



On 10/14/19 5:08 PM, Paul Eggleton wrote:
> Hi Mark
> 
> On Tuesday, 15 October 2019 10:21:29 AM NZDT Mark Hatle wrote:
>> I added the zeus branch on the layers.openembedded.org today and it's not
>> showing up and being indexed.  Any idea why?
> 
> The bitbake branch wasn't correctly specified - it needed to be set to "1.44" 
> (grabbed from https://wiki.yoctoproject.org/wiki/Releases if you don't have 
> it 
> in your head - I didn't ;).
>

Great, so it's my fault... much easier to fix than an infrastructure issue.

>> On my own personal layer index I did it and it worked fine.  So it may be
>> something related to the configuration.
> 
> I guess it worked in your case because (based on your patch of the other day) 
> you're picking bitbake out of a repo where zeus is a valid branch.

Yup, exactly.. it -happened- to work.. :)

> FYI on layers.openembedded.org you also need to select the Python environment 
> for the branch - python3 in this case. 
> 
> I've made both of these changes so it should index correctly the next time 
> around.

Ahh I thought python3 was now the default, and python2 was for the older
systems.  I read it backwards.. I'll try to remember in 6 months.  :)

(Part of the reason I included the mailing list here.)

--Mark

> Cheers
> Paul
> 
> 


[yocto] [layerindex-web] Enabled zeus, not working...

2019-10-14 Thread Mark Hatle
I added the zeus branch on the layers.openembedded.org today and it's not
showing up and being indexed.  Any idea why?

On my own personal layer index I did it and it worked fine.  So it may be
something related to the configuration.

Below are the errors from the update log:

Oct. 14, 2019, 7:13 p.m.

openembedded-core zeus
INFO: Collecting data for layer openembedded-core on branch zeus
ERROR: error: pathspec 'origin/zeus' did not match any file(s) known to git.
ERROR: Traceback (most recent call last):
  File "update_layer.py", line 370, in main
(tinfoil, tempdir) = recipeparse.init_parser(settings, branch, bitbakepath,
nocheckout=options.nocheckout, logger=logger)
  File "/opt/layerindex/layerindex/recipeparse.py", line 34, in init_parser
utils.checkout_repo(bitbakepath, bitbake_ref, logger=logger)
  File "/opt/layerindex/layerindex/utils.py", line 246, in checkout_repo
runcmd(['git', 'checkout', commit], repodir, logger=logger)
  File "/opt/layerindex/layerindex/utils.py", line 327, in runcmd
raise e
subprocess.CalledProcessError: Command '['git', 'checkout', 'origin/zeus']'
returned non-zero exit status 1


Re: [yocto] No SELinux security context (/etc/crontab)

2019-10-14 Thread Mark Hatle
The SELinux policy included in meta-selinux is just a starting point.  It's
expected that you will have to update/customize it.

With that said, we will accept patches for these types of issues.

--Mark

On 10/10/19 5:06 AM, Oriya, Raxesh wrote:
> Hi,
> 
>  
> 
> I have enabled SELinux in my yocto project (warrior branch) but *cron* is not
> functioning because of some SELinux context issue. I am using the *minimum*
> SELinux policy. Here is the error from `/var/log/messages`:
> 
>  
> 
>     Oct  9 04:50:01 panther2 cron.info crond[261]: ((null)) No SELinux 
> security
> context (/etc/crontab)   
> 
> Oct  9 04:50:01 panther2 cron.info crond[261]: (root) FAILED (loading cron
> table)  
> 
>  
> 
> Here are some contexts for relevant files,
> 
>  
> 
>     root@panther2:~# ps -efZ | grep cron
> 
>     system_u:system_r:kernel_t:s0   root   464 1  0 04:54 ?    
> 00:00:00
> /usr/sbin/crond -n
> 
>  
> 
>     root@panther2:~# ls -lZ /etc/crontab
> 
> -rw---. 1 root root system_u:object_r:unconfined_t:s0 653 Oct  9  2019
> /etc/crontab
> 
>  
> 
>     root@panther2:~# ls -lZ /usr/sbin/crond
> 
> -rwxr-xr-x. 1 root root system_u:object_r:unlabeled_t:s0 68160 Oct  9  
> 2019
> /usr/sbin/crond
> 
>  
> 
> Any help? Thanks !!
> 
>  
> 
> Regards,
> 
> Thanks
> 
> 


[yocto] [layerindex-web] [PATCH 3/3] RFC: editlayer: Be more specific on the searches

2019-10-12 Thread Mark Hatle
Just because git.yoctoproject.org is in the URL, doesn't mean we can or
should force the vcs_web_url to be a specific value.  If it starts with
git://git.yoctoproject.org then we can do this.  git.openembedded.org
already did this.

This also changes github, gitlab and bitbucket references.

Signed-off-by: Mark Hatle 
---
 layerindex/tools/import_layer.py   |  8 
 layerindex/tools/import_wiki_layers.py | 13 ++---
 templates/layerindex/editlayer.html|  8 
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/layerindex/tools/import_layer.py b/layerindex/tools/import_layer.py
index 8fcbc15..ace58e5 100755
--- a/layerindex/tools/import_layer.py
+++ b/layerindex/tools/import_layer.py
@@ -36,27 +36,27 @@ def set_vcs_fields(layer, repoval):
 layer.vcs_web_tree_base_url = 'http://cgit.openembedded.org/' + 
reponame + '/tree/%path%?h=%branch%'
 layer.vcs_web_file_base_url = 'http://cgit.openembedded.org/' + 
reponame + '/tree/%path%?h=%branch%'
 layer.vcs_web_commit_url = 'http://cgit.openembedded.org/' + reponame 
+ '/commit/?id=%hash%'
-elif 'git.yoctoproject.org/' in repoval:
+elif repoval.startswith('git://git.yoctoproject.org/'):
 reponame = re.sub('^.*/', '', repoval)
 layer.vcs_web_url = 'http://git.yoctoproject.org/cgit/cgit.cgi/' + 
reponame
 layer.vcs_web_tree_base_url = 
'http://git.yoctoproject.org/cgit/cgit.cgi/' + reponame + 
'/tree/%path%?h=%branch%'
 layer.vcs_web_file_base_url = 
'http://git.yoctoproject.org/cgit/cgit.cgi/' + reponame + 
'/tree/%path%?h=%branch%'
 layer.vcs_web_commit_url = 
'http://git.yoctoproject.org/cgit/cgit.cgi/' + reponame + '/commit/?id=%hash%'
-elif 'github.com/' in repoval:
+elif repoval.startswith('git://github.com/') or 
repoval.startswith('http://github.com/') or 
repoval.startswith('https://github.com/'):
 reponame = re.sub('^.*github.com/', '', repoval)
 reponame = re.sub('.git$', '', reponame)
 layer.vcs_web_url = 'http://github.com/' + reponame
 layer.vcs_web_tree_base_url = 'http://github.com/' + reponame + 
'/tree/%branch%/'
 layer.vcs_web_file_base_url = 'http://github.com/' + reponame + 
'/blob/%branch%/'
 layer.vcs_web_commit_url = 'http://github.com/' + reponame + 
'/commit/%hash%'
-elif 'gitlab.com/' in repoval:
+elif repoval.startswith('git://gitlab.com/') or 
repoval.startswith('http://gitlab.com/') or 
repoval.startswith('https://gitlab.com/'):
 reponame = re.sub('^.*gitlab.com/', '', repoval)
 reponame = re.sub('.git$', '', reponame)
 layer.vcs_web_url = 'http://gitlab.com/' + reponame
 layer.vcs_web_tree_base_url = 'http://gitlab.com/' + reponame + 
'/tree/%branch%/'
 layer.vcs_web_file_base_url = 'http://gitlab.com/' + reponame + 
'/blob/%branch%/'
 layer.vcs_web_commit_url = 'http://gitlab.com/' + reponame + 
'/commit/%hash%'
-elif 'bitbucket.org/' in repoval:
+elif repoval.startswith('git://bitbucket.org/') or 
repoval.startswith('http://bitbucket.org/') or 
repoval.startswith('https://bitbucket.org/'):
 reponame = re.sub('^.*bitbucket.org/', '', repoval)
 reponame = re.sub('.git$', '', reponame)
 layer.vcs_web_url = 'http://bitbucket.org/' + reponame
diff --git a/layerindex/tools/import_wiki_layers.py 
b/layerindex/tools/import_wiki_layers.py
index baf0c71..71f26ea 100755
--- a/layerindex/tools/import_wiki_layers.py
+++ b/layerindex/tools/import_wiki_layers.py
@@ -100,20 +100,27 @@ def main():
 layer.vcs_web_tree_base_url = 
'http://cgit.openembedded.org/' + reponame + '/tree/%path%?h=%branch%'
 layer.vcs_web_file_base_url = 
'http://cgit.openembedded.org/' + reponame + '/tree/%path%?h=%branch%'
 layer.vcs_web_commit_url = 
'http://cgit.openembedded.org/' + reponame + '/commit/?id=%hash%'
-elif 'git.yoctoproject.org/' in repoval:
+elif repoval.startswith('git://git.yoctoproject.org/'):
 reponame = re.sub('^.*/', '', repoval)
 layer.vcs_web_url = 
'http://git.yoctoproject.org/cgit/cgit.cgi/' + reponame
 layer.vcs_web_tree_base_url = 
'http://git.yoctoproject.org/cgit/cgit.cgi/' + reponame + 
'/tree/%path%?h=%branch%'
 layer.vcs_web_file_base_url = 
'http://git.yoctoproject.org/cgit/cgit.cgi/' + reponame + 
'/tree/%path%?h=%branch%'
 layer.vcs_web_commit_url = 
'http://git.yoctoproject.org/cgit/cgit.cgi/' + reponame + '/commit/?id=%hash%'
-elif 'github.com/' in repoval:
+elif repoval.startswith('git://github.com/') or 
repoval.startswith('http://github.com/') or 
repoval.startswith('https://github.com/'):
 reponame = re.sub('^.*github.com/', '', repoval

[yocto] [layerindex-web] [PATCH 2/3] update.py: Allow bitbake to live in a subdirectory of a repository

2019-10-12 Thread Mark Hatle
Add a new BITBAKE_PATH to the settings file to specify the path within the
BITBAKE_REPO_URL where bitbake lives.  This is useful when using a combined
repository, such as poky, that contains bitbake, openembedded-core and other
layers.

This change also changes the default path, in the fetch directory, for the
bitbake checkout.  It no longer uses the path 'bitbake', but instead uses the
same URL processing as the layer fetching.

There is a side effect that, when using a shared fetch, the branch of the
layer will be used instead of the specified bitbake branch.  Generally this
is a reasonable compromise, since in a combined repository bitbake and
openembedded-core component should already match.

Signed-off-by: Mark Hatle 
---
 docker/settings.py   |  3 +++
 layerindex/bulkchange.py |  8 +++-
 layerindex/layerconfparse.py |  8 +++-
 layerindex/update.py | 14 +++---
 layerindex/update_layer.py   |  6 +-
 settings.py  |  3 +++
 6 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/docker/settings.py b/docker/settings.py
index 616b67b..2821d82 100644
--- a/docker/settings.py
+++ b/docker/settings.py
@@ -244,6 +244,9 @@ TEMP_BASE_DIR = "/tmp"
 # Fetch URL of the BitBake repository for the update script
 BITBAKE_REPO_URL = "git://git.openembedded.org/bitbake"
 
+# Path within the BITBAKE_REPO_URL, usually empty
+BITBAKE_PATH = ""
+
 # Core layer to be used by the update script for basic BitBake configuration
 CORE_LAYER_NAME = "openembedded-core"
 
diff --git a/layerindex/bulkchange.py b/layerindex/bulkchange.py
index f6506ef..ea1f85c 100644
--- a/layerindex/bulkchange.py
+++ b/layerindex/bulkchange.py
@@ -98,7 +98,13 @@ def main():
 
 branch = utils.get_branch('master')
 fetchdir = settings.LAYER_FETCH_DIR
-bitbakepath = os.path.join(fetchdir, 'bitbake')
+
+from layerindex.models import LayerItem
+bitbakeitem = LayerItem()
+bitbakeitem.vcs_url = settings.BITBAKE_REPO_URL
+bitbakepath = os.path.join(fetchdir, bitbakeitem.get_fetch_dir())
+if settings.BITBAKE_PATH:
+bitbakepath = os.path.join(bitbakepath, settings.BITBAKE_PATH)
 
 if not os.path.exists(bitbakepath):
 sys.stderr.write("Unable to find bitbake checkout at %s" % bitbakepath)
diff --git a/layerindex/layerconfparse.py b/layerindex/layerconfparse.py
index 526d2c2..a0b7e1c 100644
--- a/layerindex/layerconfparse.py
+++ b/layerindex/layerconfparse.py
@@ -20,7 +20,13 @@ class LayerConfParse:
 
 if not bitbakepath:
 fetchdir = settings.LAYER_FETCH_DIR
-bitbakepath = os.path.join(fetchdir, 'bitbake')
+
+from layerindex.models import LayerItem
+bitbakeitem = LayerItem()
+bitbakeitem.vcs_url = settings.BITBAKE_REPO_URL
+bitbakepath = os.path.join(fetchdir, bitbakeitem.get_fetch_dir())
+if settings.BITBAKE_PATH:
+bitbakepath = os.path.join(bitbakepath, settings.BITBAKE_PATH)
 self.bbpath = bitbakepath
 
 # Set up BBPATH.
diff --git a/layerindex/update.py b/layerindex/update.py
index 7faf6b5..57dd830 100755
--- a/layerindex/update.py
+++ b/layerindex/update.py
@@ -268,8 +268,6 @@ def main():
 logger.error("Layer index lock timeout expired")
 sys.exit(1)
 try:
-bitbakepath = os.path.join(fetchdir, 'bitbake')
-
 if not options.nofetch:
 # Make sure oe-core is fetched since recipe parsing requires it
 layerquery_core = 
LayerItem.objects.filter(comparison=False).filter(name=settings.CORE_LAYER_NAME)
@@ -285,7 +283,17 @@ def main():
 if layer.vcs_url not in allrepos:
 allrepos[layer.vcs_url] = (repodir, urldir, fetchdir, 
layer.name)
 # Add bitbake
-allrepos[settings.BITBAKE_REPO_URL] = (bitbakepath, "bitbake", 
fetchdir, "bitbake")
+if settings.BITBAKE_REPO_URL not in allrepos:
+bitbakeitem = LayerItem()
+bitbakeitem.vcs_url = settings.BITBAKE_REPO_URL
+bitbakeurldir = bitbakeitem.get_fetch_dir()
+bitbakepath = os.path.join(fetchdir, bitbakeurldir)
+allrepos[settings.BITBAKE_REPO_URL] = (bitbakepath, 
bitbakeurldir, fetchdir, "bitbake")
+
+(bitbakepath, _, _, _) = allrepos[settings.BITBAKE_REPO_URL]
+if settings.BITBAKE_PATH:
+bitbakepath = os.path.join(bitbakepath, 
settings.BITBAKE_PATH)
+
 # Parallel fetching
 pool = multiprocessing.Pool(int(settings.PARALLEL_JOBS))
 for url in allrepos:
diff --git a/layerindex/update_layer.py b/layerindex/update_layer.py
index 7131d70..f4111bd 100644
--- a/layerindex/update_layer.py
+++ b/layerindex/update_layer.py
@@ -300

[yocto] [layerindex-web] [PATCH 1/3] layerindex/urls.py: Allow branches with a '.' in the name

2019-10-12 Thread Mark Hatle
Without this change the system will fail parsing various URL components

Signed-off-by: Mark Hatle 
---
 layerindex/urls.py | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/layerindex/urls.py b/layerindex/urls.py
index 7f4e545..89e70a2 100644
--- a/layerindex/urls.py
+++ b/layerindex/urls.py
@@ -107,7 +107,7 @@ urlpatterns = [
 BulkChangeDeleteView.as_view(
 template_name='layerindex/deleteconfirm.html'),
 name="bulk_change_delete"),
-url(r'^branch/(?P[-\w]+)/',
+url(r'^branch/(?P[-.\w]+)/',
 include('layerindex.urls_branch')),
 url(r'^updates/$',
 UpdateListView.as_view(
@@ -146,17 +146,17 @@ urlpatterns = [
 ClassicRecipeDetailView.as_view(
 template_name='layerindex/classicrecipedetail.html'),
 name='classic_recipe'),
-url(r'^comparison/recipes/(?P[-\w]+)/$',
+url(r'^comparison/recipes/(?P[-.\w]+)/$',
 ClassicRecipeSearchView.as_view(
 template_name='layerindex/classicrecipes.html'),
 name='comparison_recipe_search'),
-url(r'^comparison/search-csv/(?P[-\w]+)/$',
+url(r'^comparison/search-csv/(?P[-.\w]+)/$',
 ClassicRecipeSearchView.as_view(
 template_name='layerindex/classicrecipes_csv.txt',
 paginate_by=0,
 content_type='text/csv'),
 name='comparison_recipe_search_csv'),
-url(r'^comparison/stats/(?P[-\w]+)/$',
+url(r'^comparison/stats/(?P[-.\w]+)/$',
 ClassicRecipeStatsView.as_view(
 template_name='layerindex/classicstats.html'),
 name='comparison_recipe_stats'),
@@ -185,11 +185,11 @@ urlpatterns = [
 url(r'^stoptask/(?P[-\w]+)/$',
 task_stop_view,
 name='task_stop'),
-url(r'^ajax/layerchecklist/(?P[-\w]+)/$',
+url(r'^ajax/layerchecklist/(?P[-.\w]+)/$',
 LayerCheckListView.as_view(
 template_name='layerindex/layerchecklist.html'),
 name='layer_checklist'),
-url(r'^ajax/classchecklist/(?P[-\w]+)/$',
+url(r'^ajax/classchecklist/(?P[-.\w]+)/$',
 BBClassCheckListView.as_view(
 template_name='layerindex/classchecklist.html'),
 name='class_checklist'),
-- 
2.17.1



[yocto] [layerindex-web] [PATCH 0/3] Some misc changes/fixes..

2019-10-12 Thread Mark Hatle
A few misc changes/fixes.  The first two are well tested.  However, I suspect
the 3/3 may be incorrect and I've labeled it an RFC due to this.

1/3 - '.' wasn't allowed in branch names w/o an error.  This turned out
to be a fairly simple fix.

2/3 - For people who want to use 'poky' repository and not bitbake +
openembedded-core.  I've tested this locally in both configurations.

3/3 - When I was testing, my local git mirror is broken up with
directories that are called 'git.openembedded.org' and 'git.yoctoproject.org'.
Due to this, the system was matching and locking out the edit layer
vcs_web_url submissions... so I tried to make it better, but I'm not
sure it's right.

Mark Hatle (3):
  layerindex/urls.py: Allow branches with a '.' in the name
  update.py: Allow bitbake to live in a subdirectory of a repository
  editlayer: Be more specific on the searches

 docker/settings.py |  3 +++
 layerindex/bulkchange.py   |  8 +++-
 layerindex/layerconfparse.py   |  8 +++-
 layerindex/tools/import_layer.py   |  8 
 layerindex/tools/import_wiki_layers.py | 13 ++---
 layerindex/update.py   | 14 +++---
 layerindex/update_layer.py |  6 +-
 layerindex/urls.py | 12 ++--
 settings.py|  3 +++
 templates/layerindex/editlayer.html|  8 
 10 files changed, 60 insertions(+), 23 deletions(-)

-- 
2.17.1



Re: [yocto] [meta-openssl102-fips][PATCH 3/3] nss: conditionally enable fips

2019-10-12 Thread Mark Hatle
The original goal of this work was to enable a FIPS-140-2 OpenSSL module.  Why
is NSS part of this?

Is something inside of the OpenSSL patches requesting NSS support, or is this a
different -- but related request?

--Mark

On 10/12/19 3:17 AM, Hongxu Jia wrote:
> Add export NSS_FORCE_FIPS=1 to force enable fips, and add the same
> macro limitaition to fips enable test, currently we are not ready
> to support nss fips
> 
> ...
> $ certutil -N -d sql:. --empty-password
> |certutil: function failed: SEC_ERROR_PKCS11_DEVICE_ERROR: A PKCS #11
> module returned CKR_DEVICE_ERROR, indicating that a problem has occurred
> with the token or slot.
> 
> $rpm -h
> |error: Failed to initialize NSS library
> ...
> 
> Signed-off-by: Hongxu Jia 
> ---
>  .../nss/nss/0001-conditionally-enable-fips.patch   | 93 
> ++
>  recipes-support/nss/nss_3.%.bbappend   |  4 +
>  recipes-support/nss/nss_fips.inc   |  4 +
>  3 files changed, 101 insertions(+)
>  create mode 100644 
> recipes-support/nss/nss/0001-conditionally-enable-fips.patch
>  create mode 100644 recipes-support/nss/nss_3.%.bbappend
>  create mode 100644 recipes-support/nss/nss_fips.inc
> 
> diff --git a/recipes-support/nss/nss/0001-conditionally-enable-fips.patch 
> b/recipes-support/nss/nss/0001-conditionally-enable-fips.patch
> new file mode 100644
> index 000..d11db91
> --- /dev/null
> +++ b/recipes-support/nss/nss/0001-conditionally-enable-fips.patch
> @@ -0,0 +1,93 @@
> +From f2cb8bcc556aa1121db7209d433170bd1ab60954 Mon Sep 17 00:00:00 2001
> +From: Hongxu Jia 
> +Date: Sat, 12 Oct 2019 10:49:28 +0800
> +Subject: [PATCH] conditionally enable fips
> +
> +Add export NSS_FORCE_FIPS=1 to force enable fips, and add the same
> +macro limitaition to fips enable test, currently we are not ready
> +to support nss fips
> +
> +...
> +$ certutil -N -d sql:. --empty-password
> +|certutil: function failed: SEC_ERROR_PKCS11_DEVICE_ERROR: A PKCS #11
> +module returned CKR_DEVICE_ERROR, indicating that a problem has occurred
> +with the token or slot.
> +
> +$rpm -h
> +|error: Failed to initialize NSS library
> +...
> +
> +Upstream-Status: Inappropriate [oe specific]
> +
> +Signed-off-by: Hongxu Jia 
> +---
> + nss/coreconf/config.mk   | 2 ++
> + nss/lib/freebl/nsslowhash.c  | 2 +-
> + nss/lib/pk11wrap/pk11util.c  | 2 +-
> + nss/lib/sysinit/nsssysinit.c | 4 
> + 4 files changed, 8 insertions(+), 2 deletions(-)
> +
> +diff --git a/nss/coreconf/config.mk b/nss/coreconf/config.mk
> +index 60a0841..dcca87f 100644
> +--- a/nss/coreconf/config.mk
>  b/nss/coreconf/config.mk
> +@@ -179,6 +179,8 @@ endif
> + # executing the startup tests at library load time.
> + ifndef NSS_FORCE_FIPS
> + DEFINES += -DNSS_NO_INIT_SUPPORT
> ++else
> ++DEFINES += -DNSS_FORCE_FIPS
> + endif
> + 
> + ifdef NSS_SEED_ONLY_DEV_URANDOM
> +diff --git a/nss/lib/freebl/nsslowhash.c b/nss/lib/freebl/nsslowhash.c
> +index 22f9781..baf71c3 100644
> +--- a/nss/lib/freebl/nsslowhash.c
>  b/nss/lib/freebl/nsslowhash.c
> +@@ -26,7 +26,7 @@ struct NSSLOWHASHContextStr {
> + static int
> + nsslow_GetFIPSEnabled(void)
> + {
> +-#ifdef LINUX
> ++#if defined LINUX && defined NSS_FORCE_FIPS
> + FILE *f;
> + char d;
> + size_t size;
> +diff --git a/nss/lib/pk11wrap/pk11util.c b/nss/lib/pk11wrap/pk11util.c
> +index 502c4d0..cd86270 100644
> +--- a/nss/lib/pk11wrap/pk11util.c
>  b/nss/lib/pk11wrap/pk11util.c
> +@@ -98,7 +98,7 @@ SECMOD_Shutdown()
> + int
> + secmod_GetSystemFIPSEnabled(void)
> + {
> +-#ifdef LINUX
> ++#if defined LINUX && defined NSS_FORCE_FIPS
> + FILE *f;
> + char d;
> + size_t size;
> +diff --git a/nss/lib/sysinit/nsssysinit.c b/nss/lib/sysinit/nsssysinit.c
> +index bd0fac2..5c09e8d 100644
> +--- a/nss/lib/sysinit/nsssysinit.c
>  b/nss/lib/sysinit/nsssysinit.c
> +@@ -168,6 +168,7 @@ getFIPSEnv(void)
> + static PRBool
> + getFIPSMode(void)
> + {
> ++#ifdef NSS_FORCE_FIPS
> + FILE *f;
> + char d;
> + size_t size;
> +@@ -186,6 +187,9 @@ getFIPSMode(void)
> + if (d != '1')
> + return PR_FALSE;
> + return PR_TRUE;
> ++#else
> ++return PR_FALSE;
> ++#endif
> + }
> + 
> + #define NSS_DEFAULT_FLAGS "flags=readonly"
> +-- 
> +2.7.4
> +
> diff --git a/recipes-support/nss/nss_3.%.bbappend 
> b/recipes-support/nss/nss_3.%.bbappend
> new file mode 100644
> index 000..9608ca3
> --- /dev/null
> +++ b/recipes-support/nss/nss_3.%.bbappend
> @@ -0,0 +1,4 @@
> +FIPSINC = ""
> +FIPSINC_class-target = "${@'' if d.getVar('OPENSSL_FIPS_ENABLED', True) != 
> '1' else 'nss_fips.inc'}"
> +
> +require ${FIPSINC}
> diff --git a/recipes-support/nss/nss_fips.inc 
> b/recipes-support/nss/nss_fips.inc
> new file mode 100644
> index 000..b183f55
> --- /dev/null
> +++ b/recipes-support/nss/nss_fips.inc
> @@ -0,0 +1,4 @@
> +FILESEXTRAPATHS_prepend := "${THISDIR}/nss:"
> +SRC_URI += " \
> +file://0001-conditionally-enable-fips.patch \
> +"
> 

Re: [yocto] [meta-openssl102-fips][PATCH] README.build/image.inc: add missing openssl-fips to image

2019-10-10 Thread Mark Hatle
merged.

--Mark

On 10/9/19 3:28 AM, Hongxu Jia wrote:
> For Yocto and WRLinux, openssl fips works only if installing
> package openssl-fips
> 
> Signed-off-by: Hongxu Jia 
> ---
>  README.build | 1 +
>  templates/feature/openssl-fips/image.inc | 1 +
>  2 files changed, 2 insertions(+)
>  create mode 100644 templates/feature/openssl-fips/image.inc
> 
> diff --git a/README.build b/README.build
> index c6e..50bd9a5 100644
> --- a/README.build
> +++ b/README.build
> @@ -132,6 +132,7 @@ Building Steps (based on section 4 of the 
> UsersGuide-2.0.pdf):
>  prebuilt tar archive.
>  
>  For Yocto, in your build directory, edit conf/local.conf, add:
> +  IMAGE_INSTALL_append = " openssl-fips"
>OPENSSL_FIPS_ENABLED = "1"
>OPENSSL_FIPS_PREBUILT = ""
>  
> diff --git a/templates/feature/openssl-fips/image.inc 
> b/templates/feature/openssl-fips/image.inc
> new file mode 100644
> index 000..0d62e44
> --- /dev/null
> +++ b/templates/feature/openssl-fips/image.inc
> @@ -0,0 +1 @@
> +IMAGE_INSTALL += "openssl-fips"
> 


[yocto] [layerindex-web] Having problems instantiating a docker image

2019-10-08 Thread Mark Hatle
I've setup the layerindex in the past (without docker).. but I'm attempting to
follow the current instructions w/o much success.

I'm trying to use:

./dockersetup.py -m 8080:80 --no-https

It asks me for my email address and then builds the docker images...

then I get a failure connecting to https://google.com, connectivity check.
Since my layerindex will never leave my network I don't care about google.. so I
re-run it with:

./dockersetup.py -m 8080:80 --no-https --no-connectivity -r

re-enter my email address, it rebuilds the docker images...

it then sits and spins with what looks like a help message for 'exec' as well 
as:

Database server may not be ready; will try again.


If I try to connect to that machine port 8080 with my browser, I get back a "Bad
Request (400)"

So something is running, but it's broken.

Any suggestions on getting this fixed?  Also what will the admin login be once
it DOES come alive?  I know in the past it had asked me for admin login
information and such, is the admin login documented somewhere for this docker
container?

Thanks!
--Mark


Re: [yocto] meta-selinux warrior support

2019-10-07 Thread Mark Hatle
I thought this issue was already fixed:

http://git.yoctoproject.org/cgit/cgit.cgi/meta-selinux/commit/?h=warrior=bb0c9c3abcb935e4b362eb57985e1ee7fec0bfe0

This patch is what specifically adds the enabled/disabled that the system is
saying (in the logs quoted below) is invalid.

Can you try changing these to 'true' and 'false' instead?

In the file: classes/meson-enable-selinux.bbclass
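
[Editorial note, not part of the original reply: a hedged sketch of the kind
of change being suggested, assuming the class passes the option through
EXTRA_OEMESON -- check the actual class before applying.]

  # classes/meson-enable-selinux.bbclass (sketch only)
  # meson 0.49 / glib 2.58 expect a boolean here, so pass true/false
  # rather than enabled/disabled:
  EXTRA_OEMESON_append = " ${@bb.utils.contains('DISTRO_FEATURES', 'selinux', '-Dselinux=true', '-Dselinux=false', d)}"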

--Mark

On 10/1/19 1:39 AM, Oriya, Raxesh wrote:
> Hi,
> 
>  
> 
> I am getting the below error when I am trying to integrate 'meta-selinux' into
> our yocto solution. This error also happens when I just build
> 'core-image-selinux' by including the required layers in warrior branch. Can
> anyone provide a fix for this..
> 
>  
> 
> local.conf contains the following lines:
> 
> -
> 
> DISTRO_FEATURES_append = " acl xattr pam selinux"
> 
> PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-mls"
> 
> -
> 
>  
> 
> ERROR: glib-2.0-native-1_2.58.3-r0 do_configure: meson failed
> 
> ERROR: glib-2.0-native-1_2.58.3-r0 do_configure: Function failed: do_configure
> (log file is located at
> /home/panther2/warrior/build/tmp/work/x86_64-linux/glib-2.0-native/1_2.58.3-r0/temp/log.do_configure.34545)
> 
> ERROR: Logfile of failure stored in:
> /home/panther2/warrior/build/tmp/work/x86_64-linux/glib-2.0-native/1_2.58.3-r0/temp/log.do_configure.34545
> 
> Log data follows:
> 
> | DEBUG: Executing shell function do_configure
> | NOTE: Executing meson -Ddtrace=false -Dfam=false -Dsystemtap=false
> |   -Dselinux=false -Dlibmount=true -Dman=false -Dselinux=disabled
> |   -Dinternal_pcre=false -Dinstalled_tests=false...
> | The Meson build system
> | Version: 0.49.2
> | Source dir: /home/panther2/warrior/build/tmp/work/x86_64-linux/glib-2.0-native/1_2.58.3-r0/glib-2.58.3
> | Build dir: /home/panther2/warrior/build/tmp/work/x86_64-linux/glib-2.0-native/1_2.58.3-r0/build
> | Build type: native build
> |
> | meson.build:1:0: ERROR: Value disabled is not boolean (true or false).
> |
> | A full log can be found at
> | /home/panther2/warrior/build/tmp/work/x86_64-linux/glib-2.0-native/1_2.58.3-r0/build/meson-logs/meson-log.txt
> | ERROR: meson failed
> 
>  
> 
> Thanks,
> 
> Raxesh
> 
>  
> 
> 


Re: [yocto] Xilinx/meta-jupyter layer

2019-09-30 Thread Mark Hatle



On 9/27/19 3:12 PM, Chandana Kalluri wrote:
> Hello all,
> 
> https://github.com/Xilinx/meta-jupyter is a meta-jupyter layer containing 
> recipes for jupyter notebook. The initial recipes are based of Dmitry 
> Kargin's meta-jupyter layer 
> https://layers.openembedded.org/layerindex/branch/master/layer/meta-jupyter/. 
> This layer has not been updated for a while. 
> 
> The Xilinx/meta-jupyter layer also adds recipes for python3 based notebooks 
> apart from existing python2 based notebooks.
> This layer has been tested using Yocto thud layer and  by running jupyter 
> notebooks on Ultra96 community boards. 
> We would like to maintain Xilinx/meta-jupyter  layer actively and welcome 
> contributions to this layer either through pull requests or via patches sent 
> to meta-xilinx mailing list until a mailing list for meta-jupyter is in 
> place. In the next cycle we will deprecate python2 recipes. 
> 
> A question to community,
>  - Would you recommend maintaining this layer separately in the current 
> github.com/Xilinx location or be included under meta- openembedded layers.
> - Would you suggest having a separate mailing list ?

I'd suggest that you consider either keeping it on GitHub, or offering it
to git.yoctoproject.org.  I can assist with this if you need.

Since this is pretty targeted to a specific use-case, either would make sense 
to me.

--Mark

> Thanks,
> Chandana
> 


Re: [yocto] Transfer meta-data between recipes

2019-09-26 Thread Mark Hatle
On 9/26/19 8:59 AM, Westermann, Oliver wrote:
> Hey,
> 
>  
> 
> I’m trying to implement a bootloader-signing mechanism within Yocto for
> extended secure-boot support. The bootloader and its recipes are provided by
> NXP (in this case it’s the imx-boot_*.bb recipe from meta-freescale) and I
> want to use a secondary recipe, which I am creating, to sign the resulting
> boot binary. My issue is that the NXP code signing tool needs some info about
> the binary to sign. These details are sent to stdout by the imx-mkimage tool,
> which is called by the imx-boot makefile (used here:
> https://github.com/Freescale/meta-freescale/blob/master/recipes-bsp/imx-mkimage/imx-boot_0.2.bb#L104).
> 

In the cases I know of, most code signing is either done in the recipe itself
(via an added task from a special class after install and prior to packaging) or
it's being done at rootfs/image generation time.

> 
> I can override the compile step of imx-boot to save stdout into a file, deploy
> this file and later parse it to extract the offset dump, but that feels “ugly”
> and the file is by no means an output of the imx-boot recipe for the target
> system, but for another step. Is there any recommended way to pass such
> variable metadata between recipes?

Namespace is local to a recipe.  The only way you can share data from one namespace
to another is to write it into a file, and then load it in the second.  That
is done occasionally for link-time settings and such (e.g. pkgconfig).

--Mark

>  
> 
> Olli
> 
>  
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH V3 4/16] classes/image-enable-fips.bbclass: enable user space fips mode in image

2019-09-25 Thread Mark Hatle
You are correct.  I had found that earlier today.

Anyway, the code has been verified as functional, and has been pushed.

Thanks!
--Mark

On 9/25/19 9:35 PM, Hongxu Jia wrote:
> Refer Fedora/RedHat's way
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/6.5_technical_notes/dracut
> 
> To enable user space fips mode in the image recipe as part of an
> 'IMAGE_CLASSES'. Basically if FIPS-140-2 is enabled, then we can
> touch the file as a post image generation activity.
> 
> Signed-off-by: Hongxu Jia 
> ---
>  classes/image-enable-fips.bbclass | 5 +
>  conf/layer.conf   | 2 ++
>  2 files changed, 7 insertions(+)
>  create mode 100644 classes/image-enable-fips.bbclass
> 
> diff --git a/classes/image-enable-fips.bbclass 
> b/classes/image-enable-fips.bbclass
> new file mode 100644
> index 000..6c5b370
> --- /dev/null
> +++ b/classes/image-enable-fips.bbclass
> @@ -0,0 +1,5 @@
> +ROOTFS_POSTPROCESS_COMMAND_append = "enable_system_fips;"
> +enable_system_fips() {
> +install -d ${IMAGE_ROOTFS}${sysconfdir}
> +touch ${IMAGE_ROOTFS}${sysconfdir}/system-fips
> +}
> diff --git a/conf/layer.conf b/conf/layer.conf
> index 27a872e..185f422 100644
> --- a/conf/layer.conf
> +++ b/conf/layer.conf
> @@ -18,3 +18,5 @@ LAYERDEPENDS_meta-openssl-one-zero-two-fips = " \
>  meta-openssl-one-zero-two \
>  wr-template \
>  "
> +
> +IMAGE_CLASSES_append = "${@'' if d.getVar('OPENSSL_FIPS_ENABLED', True) != 
> '1' else ' image-enable-fips'}"
> 
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Review request V2 0/16: [meta-openssl102-fips] Enable FIPS mode in Kernel and OpenSSH

2019-09-25 Thread Mark Hatle


On 9/25/19 2:23 AM, Hongxu Jia wrote:
> Changed in V1:
> - Follow Mark H's suggestions
> 
> Hi Mark,
> 
> Once openssh enables FIPS mode, openssh ptest will fail (mess of failure).
> It seems the test case of upstream openssh does not consider FIPS mode 
> support.
> I search fedora, there is nothing about openssh `regress'(test suits) in
> FIPS mode support
> 
> So I do not add additional cavs test to the ptest, just add a note
> to README.enable_fips

Ok, that is good to know.  I suspect the issue is that many of the tests are
trying to use unapproved algorithms and should be skipped in FIPS mode.
Something for a future patch set.  I don't think it's necessary to adjust now.

I did modify patch 4.  We want to use the more generic IMAGE_POSTPROCESS_COMMAND
instead, but otherwise I've taken it as is.  I'm currently running it through a
test pass; once that is complete I'll push the commits.

--Mark

> //Hongxu
> 
> == Comments (indicate scope for each "y" above) ==
> * Git logs
> [meta-openssl102-fips]
> commit 38849c1c52ae04eb2a3931624cd2d1446ab389d6
> Author: Hongxu Jia 
> Date:   Wed Sep 25 15:03:24 2019 +0800
> 
> README.enable_fips: openssh ptest failed in fips mode
> 
> Signed-off-by: Hongxu Jia 
> 
> commit f5b8a66c226541e73cc509a73452bbafc59f2555
> Author: Hongxu Jia 
> Date:   Sun Sep 22 22:40:56 2019 +0800
> 
> README.openssh_cavstest: add CAVS tests for FIPS validation
> 
> Signed-off-by: Hongxu Jia 
> 
> commit bd5de039c60fd2ab89f7925d3801520d742ba09a
> Author: Hongxu Jia 
> Date:   Sun Sep 22 21:54:41 2019 +0800
> 
> openssh: add CAVS tests for FIPS validation
> 
> Refer the latest Fedora to add cavs test binary for the aes-ctr [1]
> and SSH KDF CAVS test driver [2]
> 
> [1] 
> http://pkgs.fedoraproject.org/cgit/rpms/openssh.git/plain/openssh-6.6p1-ctr-cavstest.patch
> [2] 
> http://pkgs.fedoraproject.org/cgit/rpms/openssh.git/plain/openssh-6.7p1-kdf-cavs.patch
> (as of commit 0ca1614ae221578b6b57c61d18fda6cc970a19ce)
> 
> Signed-off-by: Hongxu Jia 
> 
> commit b40cef8f89461342da5c6a621d95cdb19a4d8cff
> Author: Hongxu Jia 
> Date:   Sun Sep 22 20:55:30 2019 +0800
> 
> README.enable_fips: add steps to turn system (kernel and user space) into 
> FIPS mode
> 
> Refer RedHat/Fedora/SUSE/Oracle/IBM ways
> 
> 1. Add `fips=1' to kernel option to enable FIPS mode in kernel
> 
> 2. File /etc/system-fips to determine if a FIPS mode is enabled in user 
> space,
> currently openssh only
> 
> Refer:
> 
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-federal_standards_and_regulations-federal_information_processing_standard
> https://access.redhat.com/discussions/3293631
> 
> https://lists.fedoraproject.org/pipermail/scm-commits/Week-of-Mon-20131007/1124363.html
> 
> https://www.ibm.com/support/knowledgecenter/en/linuxonibm/com.ibm.linux.z.lgdd/lgdd_r_fipsparm.html
> 
> https://support.oracle.com/knowledge/Oracle%20Linux%20and%20Virtualization/2323738_1.html
> 
> Signed-off-by: Hongxu Jia 
> 
> commit a4e3e55688b7a3666bcec95c342dab7984e7e0a3
> Author: Hongxu Jia 
> Date:   Sun Sep 22 19:27:45 2019 +0800
> 
> rng-tools: fix rngd failed in fips mode
> 
> The FIPS test is something done on government or more secure organizations
> for extra security check.
> ...
> root@qemux86-64:~# systemctl status rngd
> Unit rngd-tools.service could not be found.
> root@qemux86-64:~# systemctl status rngd
> rngd.service - Hardware RNG Entropy Gatherer Daemon
>Loaded: loaded (/lib/systemd/system/rngd.service; enabled; vendor 
> preset: enabled)
>Active: inactive (dead) since Sun 2019-09-22 11:10:41 UTC; 18min ago
>   Process: 317 ExecStart=/usr/sbin/rngd -f $EXTRA_ARGS (code=exited, 
> status=0/SUCCESS)
>  Main PID: 317 (code=exited, status=0/SUCCESS)
> 
> Sep 22 11:10:37 qemux86-64 rngd[317]: RNDADDENTROPY failed: Operation not 
> permitted
> Sep 22 11:10:37 qemux86-64 rngd[317]: RNDADDENTROPY failed: Operation not 
> permitted
> Sep 22 11:10:37 qemux86-64 rngd[317]: too many FIPS failures, disabling 
> entropy source
> ...
> 
> From rngd manual, add `-i' to default
> ...
> -i, --ignorefail
>   Ignore repeated fips failures
> ...
> 
> After applying the fix
> ...
> rngd.service - Hardware RNG Entropy Gatherer Daemon
>Loaded: loaded (/lib/systemd/system/rngd.service; enabled; vendor 
> preset: enabled)
>Active: active (running) since Sun 2019-09-22 12:18:31 UTC; 4min 35s 
> ago
>  Main PID: 121 (rngd)
> Tasks: 2
>Memory: 1.8M
>CGroup: /system.slice/rngd.service
>/usr/sbin/rngd -f -r /dev/hwrng -i
> 
> Sep 22 12:23:06 qemux86-64 rngd[121]: RNDADDENTROPY failed: Operation not 
> permitted
> ...
> 
> Refer:
> 
> 

Re: [yocto] [meta-openssl102-fips][PATCH 14/15] openssh: add CAVS tests for FIPS validation

2019-09-23 Thread Mark Hatle
Please include the commit from Fedora for these files.

Also, I like how the CAVS tests were packaged.  An additional test should be added to
the ptest if the CAVS binaries are installed.

--Mark

On 9/22/19 9:57 AM, Hongxu Jia wrote:
> Refer the latest Fedora to add cavs test binary for the aes-ctr [1]
> and SSH KDF CAVS test driver [2]
> 
> [1] 
> http://pkgs.fedoraproject.org/cgit/rpms/openssh.git/plain/openssh-6.6p1-ctr-cavstest.patch
> [2] 
> http://pkgs.fedoraproject.org/cgit/rpms/openssh.git/plain/openssh-6.7p1-kdf-cavs.patch
> 
> Signed-off-by: Hongxu Jia 
> ---
>  .../openssh/openssh-6.6p1-ctr-cavstest.patch   | 289 +
>  .../openssh/openssh/openssh-6.7p1-kdf-cavs.patch   | 654 
> +
>  recipes-connectivity/openssh/openssh_fips.inc  |   9 +
>  3 files changed, 952 insertions(+)
>  create mode 100644 
> recipes-connectivity/openssh/openssh/openssh-6.6p1-ctr-cavstest.patch
>  create mode 100644 
> recipes-connectivity/openssh/openssh/openssh-6.7p1-kdf-cavs.patch
> 
> diff --git 
> a/recipes-connectivity/openssh/openssh/openssh-6.6p1-ctr-cavstest.patch 
> b/recipes-connectivity/openssh/openssh/openssh-6.6p1-ctr-cavstest.patch
> new file mode 100644
> index 000..038efa0
> --- /dev/null
> +++ b/recipes-connectivity/openssh/openssh/openssh-6.6p1-ctr-cavstest.patch
> @@ -0,0 +1,289 @@
> +From a94a3d95439018dc7d276ec72de91af369ea413e Mon Sep 17 00:00:00 2001
> +From: Hongxu Jia 
> +Date: Sun, 22 Sep 2019 21:32:18 +0800
> +Subject: [PATCH 1/2] add CAVS test driver for the aes-ctr ciphers
> +
> +Original submission to Fedora, see:
> +   
> https://lists.fedoraproject.org/pipermail/scm-commits/2012-January/715044.html
> +
> +this version download from:
> +   
> http://pkgs.fedoraproject.org/cgit/rpms/openssh.git/plain/openssh-6.6p1-ctr-cavstest.patch
> +   (as of commit 991b66246f5151884b63c6d1232610a4569642a5)
> +
> +Makefile.in slightly modified for integration
> +
> +This is the makefile.in change for the normal configuration.
> +
> +Signed-off-by: Mark Hatle 
> +
> +Upstream-Status: Inappropriate [oe specific]
> +Signed-off-by: Hongxu Jia 
> +---
> + Makefile.in|   7 +-
> + ctr-cavstest.c | 215 
> +
> + 2 files changed, 221 insertions(+), 1 deletion(-)
> + create mode 100644 ctr-cavstest.c
> +
> +diff --git a/Makefile.in b/Makefile.in
> +index ddd1804..cb34681 100644
> +--- a/Makefile.in
>  b/Makefile.in
> +@@ -23,6 +23,7 @@ SSH_PROGRAM=@bindir@/ssh
> + ASKPASS_PROGRAM=$(libexecdir)/ssh-askpass
> + SFTP_SERVER=$(libexecdir)/sftp-server
> + SSH_KEYSIGN=$(libexecdir)/ssh-keysign
> ++CTR_CAVSTEST=$(libexecdir)/ctr-cavstest
> + SSH_PKCS11_HELPER=$(libexecdir)/ssh-pkcs11-helper
> + PRIVSEP_PATH=@PRIVSEP_PATH@
> + SSH_PRIVSEP_USER=@SSH_PRIVSEP_USER@
> +@@ -60,7 +61,7 @@ EXEEXT=@EXEEXT@
> + MANFMT=@MANFMT@
> + MKDIR_P=@MKDIR_P@
> + 
> +-TARGETS=ssh$(EXEEXT) sshd$(EXEEXT) ssh-add$(EXEEXT) ssh-keygen$(EXEEXT) 
> ssh-keyscan${EXEEXT} ssh-keysign${EXEEXT} ssh-pkcs11-helper$(EXEEXT) 
> ssh-agent$(EXEEXT) scp$(EXEEXT) sftp-server$(EXEEXT) sftp$(EXEEXT)
> ++TARGETS=ssh$(EXEEXT) sshd$(EXEEXT) ssh-add$(EXEEXT) ssh-keygen$(EXEEXT) 
> ssh-keyscan${EXEEXT} ssh-keysign${EXEEXT} ssh-pkcs11-helper$(EXEEXT) 
> ssh-agent$(EXEEXT) scp$(EXEEXT) sftp-server$(EXEEXT) sftp$(EXEEXT) 
> ctr-cavstest$(EXEEXT)
> + 
> + XMSS_OBJS=\
> + ssh-xmss.o \
> +@@ -193,6 +194,9 @@ ssh-keysign$(EXEEXT): $(LIBCOMPAT) libssh.a 
> ssh-keysign.o readconf.o uidswap.o c
> + ssh-pkcs11-helper$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-pkcs11-helper.o 
> ssh-pkcs11.o
> + $(LD) -o $@ ssh-pkcs11-helper.o ssh-pkcs11.o $(LDFLAGS) -lssh 
> -lopenbsd-compat -lssh -lopenbsd-compat $(LIBS)
> + 
> ++ctr-cavstest$(EXEEXT): $(LIBCOMPAT) libssh.a ctr-cavstest.o
> ++$(LD) -o $@ ctr-cavstest.o $(LDFLAGS) -lssh -lopenbsd-compat -lssh 
> -lfipscheck $(LIBS)
> ++
> + ssh-keyscan$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-keyscan.o
> + $(LD) -o $@ ssh-keyscan.o $(LDFLAGS) -lssh -lopenbsd-compat -lssh 
> -lfipscheck $(LIBS)
> + 
> +@@ -343,6 +347,7 @@ install-files:
> + $(INSTALL) -m 0755 $(STRIP_OPT) ssh-keyscan$(EXEEXT) 
> $(DESTDIR)$(bindir)/ssh-keyscan$(EXEEXT)
> + $(INSTALL) -m 0755 $(STRIP_OPT) sshd$(EXEEXT) 
> $(DESTDIR)$(sbindir)/sshd$(EXEEXT)
> + $(INSTALL) -m 4711 $(STRIP_OPT) ssh-keysign$(EXEEXT) 
> $(DESTDIR)$(SSH_KEYSIGN)$(EXEEXT)
> ++$(INSTALL) -m 0755 $(STRIP_OPT) ctr-cavstest$(EXEEXT) 
> $(DESTDIR)$(libexecdir)/ctr-cavstest$(EXEEXT)
> + $(INSTALL) -m 0755 $(STRIP_OPT) ssh-pkcs11-helper$(EXEEXT) 
> $(DESTDIR)$(SSH_PKCS11_HELPER)$(EXEEXT)
> + $(INSTALL) -m 0755 $(STRIP_OPT) sftp$(EXEEXT) 
> $(DESTDIR)$(bindir)/sf

Re: [yocto] [meta-openssl102-fips][PATCH 9/15] openssh: port sshd_check_keys from oe-core

2019-09-23 Thread Mark Hatle
Please include the oe-core commit that this version was taken from.  It'll be
easier to uprev if we need to.

--Mark

On 9/22/19 9:57 AM, Hongxu Jia wrote:
> Signed-off-by: Hongxu Jia 
> ---
>  .../openssh/openssh/sshd_check_keys| 78 
> ++
>  1 file changed, 78 insertions(+)
>  create mode 100644 recipes-connectivity/openssh/openssh/sshd_check_keys
> 
> diff --git a/recipes-connectivity/openssh/openssh/sshd_check_keys 
> b/recipes-connectivity/openssh/openssh/sshd_check_keys
> new file mode 100644
> index 000..1931dc7
> --- /dev/null
> +++ b/recipes-connectivity/openssh/openssh/sshd_check_keys
> @@ -0,0 +1,78 @@
> +#! /bin/sh
> +
> +generate_key() {
> +local FILE=$1
> +local TYPE=$2
> +local DIR="$(dirname "$FILE")"
> +
> +mkdir -p "$DIR"
> +ssh-keygen -q -f "${FILE}.tmp" -N '' -t $TYPE
> +
> +# Atomically rename file public key
> +mv -f "${FILE}.tmp.pub" "${FILE}.pub"
> +
> +# This sync does double duty: Ensuring that the data in the temporary
> +# private key file is on disk before the rename, and ensuring that the
> +# public key rename is completed before the private key rename, since we
> +# switch on the existence of the private key to trigger key generation.
> +# This does mean it is possible for the public key to exist, but be 
> garbage
> +# but this is OK because in that case the private key won't exist and the
> +# keys will be regenerated.
> +#
> +# In the event that sync understands arguments that limit what it tries 
> to
> +# fsync(), we provided them. If it does not, it will simply call sync()
> +# which is just as well
> +sync "${FILE}.pub" "$DIR" "${FILE}.tmp"
> +
> +mv "${FILE}.tmp" "$FILE"
> +
> +# sync to ensure the atomic rename is committed
> +sync "$DIR"
> +}
> +
> +# /etc/default/ssh may set SYSCONFDIR and SSHD_OPTS
> +if test -f /etc/default/ssh; then
> +. /etc/default/ssh
> +fi
> +
> +[ -z "$SYSCONFDIR" ] && SYSCONFDIR=/etc/ssh
> +mkdir -p $SYSCONFDIR
> +
> +# parse sshd options
> +set -- ${SSHD_OPTS} --
> +sshd_config=/etc/ssh/sshd_config
> +while true ; do
> +case "$1" in
> +-f*) if [ "$1" = "-f" ] ; then
> +sshd_config="$2"
> +shift
> +else
> +sshd_config="${1#-f}"
> +fi
> +shift
> +;;
> +--) shift; break;;
> +*) shift;;
> +esac
> +done
> +
> +HOST_KEYS=$(sed -n 's/^[ \t]*HostKey[ \t]\+\(.*\)/\1/p' "${sshd_config}")
> +[ -z "${HOST_KEYS}" ] && HOST_KEYS="$SYSCONFDIR/ssh_host_rsa_key 
> $SYSCONFDIR/ssh_host_ecdsa_key $SYSCONFDIR/ssh_host_ed25519_key"
> +
> +for key in ${HOST_KEYS} ; do
> +[ -f $key ] && continue
> +case $key in
> +*_rsa_key)
> +echo "  generating ssh RSA host key..."
> +generate_key $key rsa
> +;;
> +*_ecdsa_key)
> +echo "  generating ssh ECDSA host key..."
> +generate_key $key ecdsa
> +;;
> +*_ed25519_key)
> +echo "  generating ssh ED25519 host key..."
> +generate_key $key ed25519
> +;;
> +esac
> +done
> 
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH 5/15] openssh: add generation of HMAC checksums in pkg_postinst

2019-09-23 Thread Mark Hatle
Same comment here as on the fipscheck recipe about the post-install stuff.

--Mark

On 9/22/19 9:56 AM, Hongxu Jia wrote:
> Refer 
> https://src.fedoraproject.org/rpms/openssh/c/d93958db19129e0f4615865eab22fb36e1f4fb8a
> 
> Signed-off-by: Hongxu Jia 
> ---
>  recipes-connectivity/openssh/openssh_fips.inc | 26 ++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/recipes-connectivity/openssh/openssh_fips.inc 
> b/recipes-connectivity/openssh/openssh_fips.inc
> index 99a3482..df84c39 100644
> --- a/recipes-connectivity/openssh/openssh_fips.inc
> +++ b/recipes-connectivity/openssh/openssh_fips.inc
> @@ -6,3 +6,29 @@ DEPENDS += " \
>  SRC_URI += " \
>  file://0001-openssh-8.0p1-fips.patch \
>  "
> +
> +do_install_append() {
> +install -d ${D}${libdir}/fipscheck
> +}
> +
> +inherit qemu
> +
> +pkg_postinst_append_${PN}-ssh () {
> +if [ -n "$D" ]; then
> +${@qemu_run_binary(d, '$D', '${bindir}/fipshmac')} \
> +-d $D${libdir}/fipscheck $D${bindir}/ssh.${BPN}
> +else
> +${bindir}/fipshmac -d ${libdir}/fipscheck ${bindir}/ssh.${BPN}
> +fi
> +}
> +
> +pkg_postinst_append_${PN}-sshd () {
> +if [ -n "$D" ]; then
> +${@qemu_run_binary(d, '$D', '${bindir}/fipshmac')} \
> +-d $D${libdir}/fipscheck $D${sbindir}/sshd
> +else
> +${bindir}/fipshmac -d ${libdir}/fipscheck ${sbindir}/sshd
> +fi
> +}
> +
> +FILES_${PN} += "${libdir}/fipscheck"
> 
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH 4/15] fipscheck: enable fipscheck on target

2019-09-23 Thread Mark Hatle



On 9/22/19 9:56 AM, Hongxu Jia wrote:
> Refer Fedora/RedHat's way
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/6.5_technical_notes/dracut
> 
> Signed-off-by: Hongxu Jia 
> ---
>  recipes-connectivity/openssh/fipscheck_1.5.0.bb | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/recipes-connectivity/openssh/fipscheck_1.5.0.bb 
> b/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> index 0a06bd3..23a4123 100644
> --- a/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> +++ b/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> @@ -28,6 +28,10 @@ EXTRA_OEMAKE += " \
>  -I${STAGING_LIBDIR_NATIVE}/ssl/fips-2.0/include \
>  "
>  do_install_append() {
> +# Is't the fedora way to enable fipscheck
> +install -d ${D}${sysconfdir}
> +touch ${D}${sysconfdir}/system-fips
> +

After researching the system-fips file, I'm wondering if it would be better to enable
this in the image recipe via a class added to 'IMAGE_CLASSES'.  Basically if FIPS-140-2
is enabled, then we can touch the file as a post-image-generation activity.

The alternative would be to create an initscript that would check for 'fips=1'
on the kernel command line and then create that file (or remove it?) as well.

I'm not sure which is the better strategy.  (For read-only devices the image
approach is better, since /etc/ is otherwise read-only.)

--Mark

>  install -d ${D}${libdir}/fipscheck
>  }
>  
> 
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH 3/15] fipscheck: add generation of the checksums in pkg_postinst

2019-09-23 Thread Mark Hatle



On 9/22/19 9:56 AM, Hongxu Jia wrote:
> Refer https://pagure.io/fipscheck/c/489bc3ab3f73707e12b6c2644d80af5ff6fbbf70
> 
> Signed-off-by: Hongxu Jia 
> ---
>  recipes-connectivity/openssh/fipscheck_1.5.0.bb | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/recipes-connectivity/openssh/fipscheck_1.5.0.bb 
> b/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> index 68051d2..0a06bd3 100644
> --- a/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> +++ b/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> @@ -27,4 +27,22 @@ EXTRA_OECONF += " \
>  EXTRA_OEMAKE += " \
>  -I${STAGING_LIBDIR_NATIVE}/ssl/fips-2.0/include \
>  "
> +do_install_append() {
> +install -d ${D}${libdir}/fipscheck
> +}
>  
> +inherit qemu
> +
> +pkg_postinst_${PN} () {
> +if [ -n "$D" ]; then
> +${@qemu_run_binary(d, '$D', '${bindir}/fipshmac')} \
> +-d $D${libdir}/fipscheck $D${bindir}/fipscheck 
> $D${libdir}/libfipscheck.so.1.2.1 && \
> +ln -s libfipscheck.so.1.2.1.hmac 
> $D${libdir}/fipscheck/libfipscheck.so.1.hmac
> +else
> +${bindir}/fipshmac -d ${libdir}/fipscheck ${bindir}/fipscheck \
> +${libdir}/libfipscheck.so.1.2.1 && \
> +ln -s libfipscheck.so.1.2.1.hmac 
> ${libdir}/fipscheck/libfipscheck.so.1.hmac
> +fi
> +}

The way this works has changed a bit since I really knew it.  I was looking in
the manpages.bbclass and they have the following:

> if ${@bb.utils.contains('PACKAGECONFIG', 'manpages', 'true', 'false', 
> d)}; then
> if test -n "$D"; then
> if ${@bb.utils.contains('MACHINE_FEATURES', 
> 'qemu-usermode', 'true','false', d)}; then
> sed "s:\(\s\)/:\1$D/:g" 
> $D${sysconfdir}/man_db.conf | ${@qemu_run_binary(d, '$D', '${bindir}/mandb')} 
> -C - -u -q $D${mandir}
> mkdir -p $D${localstatedir}/cache/man
> mv $D${mandir}/index.db 
> $D${localstatedir}/cache/man
> else
> $INTERCEPT_DIR/postinst_intercept 
> delay_to_first_boot ${PKG} mlprefix=${MLPREFIX}
> fi
> else
> mandb -q
> fi
> fi


That is checking for the presence of the MACHINE_FEATURES entry.  I'm not sure I like
that in this case, though, since it makes these recipes machine-specific.

But I do think we need the delay until first boot part.

Jason, I know you've been working on first boot things for a while, any opinion?

--Mark

> +
> +FILES_${PN} += "${libdir}/fipscheck"
> 
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH 2/15] openssh_8.%.bbappend: support fips 140-2

2019-09-23 Thread Mark Hatle



On 9/22/19 9:56 AM, Hongxu Jia wrote:
> Signed-off-by: Hongxu Jia 
> ---
>  .../openssh/openssh/0001-openssh-8.0p1-fips.patch  | 528 
> +
>  recipes-connectivity/openssh/openssh_8.%.bbappend  |   4 +
>  recipes-connectivity/openssh/openssh_fips.inc  |   8 +
>  3 files changed, 540 insertions(+)
>  create mode 100644 
> recipes-connectivity/openssh/openssh/0001-openssh-8.0p1-fips.patch
>  create mode 100644 recipes-connectivity/openssh/openssh_8.%.bbappend
>  create mode 100644 recipes-connectivity/openssh/openssh_fips.inc
> 
> diff --git 
> a/recipes-connectivity/openssh/openssh/0001-openssh-8.0p1-fips.patch 
> b/recipes-connectivity/openssh/openssh/0001-openssh-8.0p1-fips.patch
> new file mode 100644
> index 000..fd0a411
> --- /dev/null
> +++ b/recipes-connectivity/openssh/openssh/0001-openssh-8.0p1-fips.patch
> @@ -0,0 +1,528 @@
> +From 255e5dcdec36df7222f69b253dfc05be63927ed2 Mon Sep 17 00:00:00 2001
> +From: Hongxu Jia 
> +Date: Fri, 20 Sep 2019 17:59:00 +0800
> +Subject: [PATCH] openssh 8.0p1 fips
> +
> +Port openssh-7.7p1-fips.patch from Fedora
> +https://src.fedoraproject.org/rpms/openssh.git

Can you include the commit id of the Fedora version where you pulled the patch 
from?

This will make it easier to update in the future.

--Mark

> +Upstream-Status: Inappropriate [oe specific]
> +
> +Signed-off-by: Hongxu Jia 
> +---
> + Makefile.in  | 14 +++---
> + cipher-ctr.c |  3 ++-
> + clientloop.c |  3 ++-
> + dh.c | 40 
> + dh.h |  1 +
> + kex.c|  5 -
> + kexgexc.c|  5 +
> + myproposal.h | 40 
> + readconf.c   | 17 +
> + sandbox-seccomp-filter.c |  3 +++
> + servconf.c   | 19 ++-
> + ssh-keygen.c |  6 ++
> + ssh.c| 16 
> + sshconnect2.c| 11 ---
> + sshd.c   | 19 +++
> + sshkey.c |  4 
> + 16 files changed, 176 insertions(+), 30 deletions(-)
> +
> +diff --git a/Makefile.in b/Makefile.in
> +index 6f001bb..ddd1804 100644
> +--- a/Makefile.in
>  b/Makefile.in
> +@@ -170,31 +170,31 @@ libssh.a: $(LIBSSH_OBJS)
> + $(RANLIB) $@
> + 
> + ssh$(EXEEXT): $(LIBCOMPAT) libssh.a $(SSHOBJS)
> +-$(LD) -o $@ $(SSHOBJS) $(LDFLAGS) -lssh -lopenbsd-compat $(SSHLIBS) 
> $(LIBS) $(GSSLIBS)
> ++$(LD) -o $@ $(SSHOBJS) $(LDFLAGS) -lssh -lopenbsd-compat -lfipscheck 
> $(SSHLIBS) $(LIBS) $(GSSLIBS)
> + 
> + sshd$(EXEEXT): libssh.a $(LIBCOMPAT) $(SSHDOBJS)
> +-$(LD) -o $@ $(SSHDOBJS) $(LDFLAGS) -lssh -lopenbsd-compat $(SSHDLIBS) 
> $(LIBS) $(GSSLIBS) $(K5LIBS)
> ++$(LD) -o $@ $(SSHDOBJS) $(LDFLAGS) -lssh -lopenbsd-compat -lfipscheck 
> $(SSHDLIBS) $(LIBS) $(GSSLIBS) $(K5LIBS)
> + 
> + scp$(EXEEXT): $(LIBCOMPAT) libssh.a scp.o progressmeter.o
> + $(LD) -o $@ scp.o progressmeter.o $(LDFLAGS) -lssh -lopenbsd-compat 
> $(LIBS)
> + 
> + ssh-add$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-add.o
> +-$(LD) -o $@ ssh-add.o $(LDFLAGS) -lssh -lopenbsd-compat $(LIBS)
> ++$(LD) -o $@ ssh-add.o $(LDFLAGS) -lssh -lopenbsd-compat -lfipscheck 
> $(LIBS)
> + 
> + ssh-agent$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-agent.o ssh-pkcs11-client.o
> +-$(LD) -o $@ ssh-agent.o ssh-pkcs11-client.o $(LDFLAGS) -lssh 
> -lopenbsd-compat $(LIBS)
> ++$(LD) -o $@ ssh-agent.o ssh-pkcs11-client.o $(LDFLAGS) -lssh 
> -lopenbsd-compat -lfipscheck $(LIBS)
> + 
> + ssh-keygen$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-keygen.o
> +-$(LD) -o $@ ssh-keygen.o $(LDFLAGS) -lssh -lopenbsd-compat $(LIBS)
> ++$(LD) -o $@ ssh-keygen.o $(LDFLAGS) -lssh -lopenbsd-compat -lfipscheck 
> $(LIBS)
> + 
> + ssh-keysign$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-keysign.o readconf.o 
> uidswap.o compat.o
> +-$(LD) -o $@ ssh-keysign.o readconf.o uidswap.o $(LDFLAGS) -lssh 
> -lopenbsd-compat $(LIBS)
> ++$(LD) -o $@ ssh-keysign.o readconf.o uidswap.o $(LDFLAGS) -lssh 
> -lopenbsd-compat -lfipscheck $(LIBS)
> + 
> + ssh-pkcs11-helper$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-pkcs11-helper.o 
> ssh-pkcs11.o
> + $(LD) -o $@ ssh-pkcs11-helper.o ssh-pkcs11.o $(LDFLAGS) -lssh 
> -lopenbsd-compat -lssh -lopenbsd-compat $(LIBS)
> + 
> + ssh-keyscan$(EXEEXT): $(LIBCOMPAT) libssh.a ssh-keyscan.o
> +-$(LD) -o $@ ssh-keyscan.o $(LDFLAGS) -lssh -lopenbsd-compat -lssh 
> $(LIBS)
> ++$(LD) -o $@ ssh-keyscan.o $(LDFLAGS) -lssh -lopenbsd-compat -lssh 
> -lfipscheck $(LIBS)
> + 
> + sftp-server$(EXEEXT): $(LIBCOMPAT) libssh.a sftp.o sftp-common.o 
> sftp-server.o sftp-server-main.o
> + $(LD) -o $@ sftp-server.o sftp-common.o sftp-server-main.o $(LDFLAGS) 
> -lssh -lopenbsd-compat $(LIBS)
> +diff --git a/cipher-ctr.c b/cipher-ctr.c
> +index 32771f2..74fac3b 100644
> +--- a/cipher-ctr.c
>  b/cipher-ctr.c
> +@@ -138,7 +138,8 @@ 

Re: [yocto] [meta-openssl102-fips][PATCH 1/15] fipscheck: add 1.5.0

2019-09-23 Thread Mark Hatle
Please include the commit id of the Fedora version that was included.  It will
help us review changes in the future.

On 9/22/19 9:56 AM, Hongxu Jia wrote:
> Port it from fedora:
> https://src.fedoraproject.org/rpms/fipscheck
> 
> It is required by openssh fips.
> 
> Signed-off-by: Hongxu Jia 
> ---
>  .../0001-compat-fip-with-openssl-1.0.2.patch   | 34 
> ++
>  recipes-connectivity/openssh/fipscheck_1.5.0.bb| 30 +++
>  templates/feature/openssl-fips/template.conf   |  2 +-
>  3 files changed, 65 insertions(+), 1 deletion(-)
>  create mode 100644 
> recipes-connectivity/openssh/fipscheck/0001-compat-fip-with-openssl-1.0.2.patch
>  create mode 100644 recipes-connectivity/openssh/fipscheck_1.5.0.bb
> 
> diff --git 
> a/recipes-connectivity/openssh/fipscheck/0001-compat-fip-with-openssl-1.0.2.patch
>  
> b/recipes-connectivity/openssh/fipscheck/0001-compat-fip-with-openssl-1.0.2.patch
> new file mode 100644
> index 000..22e5a62
> --- /dev/null
> +++ 
> b/recipes-connectivity/openssh/fipscheck/0001-compat-fip-with-openssl-1.0.2.patch
> @@ -0,0 +1,34 @@
> +From 3147ae2a63f10f9bbdd0a617b450ff8b9868e60f Mon Sep 17 00:00:00 2001
> +From: Hongxu Jia 
> +Date: Fri, 20 Sep 2019 17:51:09 +0800
> +Subject: [PATCH] compat fip with openssl 1.0.2
> +
> +In /usr/lib64/ssl/fips-2.0/include/openssl/opensslv.h
> +...
> +define OPENSSL_VERSION_NUMBER  0x1010L
> +...
> +Since fips include file compat with openssl 1.1.0, do not include it
> +in Yocto
> +
> +Upstream-Status: Inappropriate [oe specific]
> +
> +Signed-off-by: Hongxu Jia 
> +---
> + src/filehmac.c | 1 -
> + 1 file changed, 1 deletion(-)
> +
> +diff --git a/src/filehmac.c b/src/filehmac.c
> +index a8eef00..0b36cec 100644
> +--- a/src/filehmac.c
>  b/src/filehmac.c
> +@@ -41,7 +41,6 @@
> + #include 
> + 
> + #if defined(WITH_OPENSSL)
> +-#include 
> + #include 
> + #include 
> + #elif defined(WITH_NSS)
> +-- 
> +2.7.4
> +
> diff --git a/recipes-connectivity/openssh/fipscheck_1.5.0.bb 
> b/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> new file mode 100644
> index 000..68051d2
> --- /dev/null
> +++ b/recipes-connectivity/openssh/fipscheck_1.5.0.bb
> @@ -0,0 +1,30 @@
> +SUMMARY = "A library for integrity verification of FIPS validated modules"
> +DESCRIPTION = "FIPSCheck is a library for integrity verification of FIPS 
> validated \
> +modules. The package also provides helper binaries for creation and \
> +verification of the HMAC-SHA256 checksum files."
> +HOMEPAGE = "https://pagure.io/fipscheck"
> +SECTION = "libs/network"
> +
> +LICENSE = "MIT"
> +LIC_FILES_CHKSUM = "file://COPYING;md5=35f2904ce138ac5fa63e7cedf96bbedf"
> +
> +SRC_URI = "https://releases.pagure.org/fipscheck/${BPN}-${PV}.tar.bz2 \
> +   file://0001-compat-fip-with-openssl-1.0.2.patch \
> +"
> +SRC_URI[md5sum] = "86e756a7d2aa15f3f91033fb3eced99b"
> +SRC_URI[sha256sum] = 
> "7ba38100ced187f44b12dd52c8c74db8f366a2a8b9da819bd3e7c6ea17f469d5"
> +
> +DEPENDS = " \
> +openssl \
> +openssl-fips \
> +"
> +
> +inherit autotools pkgconfig
> +
> +EXTRA_OECONF += " \
> +--disable-static \
> +"
> +EXTRA_OEMAKE += " \
> +-I${STAGING_LIBDIR_NATIVE}/ssl/fips-2.0/include \
> +"
> +
> diff --git a/templates/feature/openssl-fips/template.conf 
> b/templates/feature/openssl-fips/template.conf
> index 6da678c..9a551c3 100644
> --- a/templates/feature/openssl-fips/template.conf
> +++ b/templates/feature/openssl-fips/template.conf
> @@ -8,4 +8,4 @@ OPENSSL_FIPS_PREBUILT ??= ""
>  
>  PNWHITELIST_meta-openssl-one-zero-two-fips += 'openssl-fips'
>  PNWHITELIST_meta-openssl-one-zero-two-fips += 'openssl-fips-example'
> -
> +PNWHITELIST_meta-openssl-one-zero-two-fips += 'fipscheck'
> 
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH] README.build: add steps to include openssl102

2019-09-17 Thread Mark Hatle
On 9/17/19 1:37 AM, Hongxu Jia wrote:
> The openssl fips only works with old openssl(<=1.0.2),
> update steps to clarify it for Yocto and Wind River Linux

Merged.

> Signed-off-by: Hongxu Jia 
> ---
>  README.build | 8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/README.build b/README.build
> index bc8fcf3..3da03da 100644
> --- a/README.build
> +++ b/README.build
> @@ -40,13 +40,15 @@ The easiest way to do this with Yocto is include this 
> layer [1]
>  and meta-openssl102 [2], and install packagegroup-core-buildessential
>  to image [3]
>  
> -[1] git://git.yoctoproject.org/meta-openssl102
> -[2] git://git.yoctoproject.org/meta-openssl102-fips
> +[1] git://git.yoctoproject.org/meta-openssl102-fips
> +[2] git://git.yoctoproject.org/meta-openssl102
> +Manually set 1.0.2% to openssl preferred version
> +echo "PREFERRED_VERSION_openssl = '1.0.2%'" >> conf/local.conf
>  [3] echo "IMAGE_INSTALL += 'packagegroup-core-buildessential'" >> 
> conf/local.conf
>  
>  The easiest way to do this with Wind River Linux is include:
>  
> ---templates features/target-toolchain --layers meta-openssl102-fips
> +--templates features/target-toolchain --templates feature/openssl102 
> --layers meta-openssl102-fips
>  
>  Note: do not include template feature/openssl-fips
>  
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH 2/2] README.build: add FAQ to support fips on arm/aarch64/x86

2019-09-17 Thread Mark Hatle
On 9/16/19 9:34 PM, Hongxu Jia wrote:
> Signed-off-by: Hongxu Jia 
> ---
>  README.build | 36 
>  1 file changed, 36 insertions(+)
> 
> diff --git a/README.build b/README.build
> index 9735028..bc8fcf3 100644
> --- a/README.build
> +++ b/README.build
> @@ -245,3 +245,39 @@ Note this sample command is functionally equivalent to:
>  $ env OPENSSL_FIPS=1 openssl sha1 -hmac etaonrishdlcupfm fips_hmac.c
>  HMAC-SHA1(fips_hmac.c)= ae25ad68d9a8cc04075100563a437fa37829afcc
>  
> +===
> +FAQ
> +===
> +1. How to support fips on 32bit arm (such as MACHINE = qemuarm)?
> +Set env MACHINE='arm' before Building the FIPS Object Module
> +(Building Steps 3), which affects fips config not to add option
> +`-march=armv7-a' to avoid failure on gcc8:
> +[snip]
> +|`cc1: error: -mfloat-abi=hard: selected processor lacks an FPU'
> +[snip]
> +
> +2. How to support fips on aarch64 (such as MACHINE = qemuarm64)?
> +For aarch64, FIPS 140-2 module only support android, wrapper gcc
> +at Building the FIPS Object Module(Building Steps 3) to define
> +macro FIPS_REF_POINT_IS_CROSS_COMPILER_AWARE to simulate what
> +android did. Provide a way to add bbappend to wrapper gcc:
> +mkdir -p recipes-devtools/gcc
> +cat << ENDOF > recipes-devtools/gcc/gcc_9.%.bbappend
> +do_install_append_aarch64() {
> +create_cmdline_wrapper \${D}/\${bindir}/gcc 
> -DFIPS_REF_POINT_IS_CROSS_COMPILER_AWARE
> +}
> +
> +FILES_\${PN}-symlinks += "\${bindir}/gcc.real"
> +ENDOF

I'm not sure the above wrapper is really allowed by the FIPS 140-2 User Guide.
However, if it were, the instructions should be different.  Something like

cat > gcc-wrapper.sh << 'EOF'
#!/bin/sh
exec gcc -DFIPS_REF_POINT_IS_CROSS_COMPILER_AWARE "$@"
EOF
chmod +x gcc-wrapper.sh

export CC="$PWD/gcc-wrapper.sh"

I've not tried this though.

I'll give this a try and see if it works.  We will document it with a
caveat that it is unclear whether this is allowed.

--Mark

> +3. How to support fips on 32bit x86? (Such as MACHINE = qemux86,
> +or lib32-image on qemux86-64)
> +Set env MACHINE='i686' before Building the FIPS Object Module
> +(Building Steps 3) which affect fips config not to add option
> +`-m 64' on lib32-image which workaround the following failure
> +[snip]
> +|/usr/include/bits/long-double.h:44:10: fatal error:
> +bits/long-double-64.h: No such file or directory
> +|   44 | #include 
> +[snip]
> +


-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-openssl102-fips][PATCH 1/2] README.build: update steps for communtiy

2019-09-17 Thread Mark Hatle
On 9/16/19 9:34 PM, Hongxu Jia wrote:
> Since the layer is now published via the Yocto Project and
> git.yoctoproject.org, we should update steps in README.build

Merged.

> Signed-off-by: Hongxu Jia 
> ---
>  README.build | 21 -
>  1 file changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/README.build b/README.build
> index df3f4e4..9735028 100644
> --- a/README.build
> +++ b/README.build
> @@ -36,6 +36,14 @@ In order to build a precompiled version of the binary, you 
> must first
>  construct a target system that includes a target development environment
>  and meta-openssl102-fips layer without feature/openssl-fips
>  
> +The easiest way to do this with Yocto is include this layer [1]
> +and meta-openssl102 [2], and install packagegroup-core-buildessential
> +to image [3]
> +
> +[1] git://git.yoctoproject.org/meta-openssl102
> +[2] git://git.yoctoproject.org/meta-openssl102-fips
> +[3] echo "IMAGE_INSTALL += 'packagegroup-core-buildessential'" >> 
> conf/local.conf
> +
>  The easiest way to do this with Wind River Linux is include:
>  
>  --templates features/target-toolchain --layers meta-openssl102-fips
> @@ -112,13 +120,16 @@ Building Steps (based on section 4 of the 
> UsersGuide-2.0.pdf):
>  Move the tar archive back to your host project into a directory 
> accessable
>  by the build system.
>  
> -5.  Configure the build system to include the template feature/openssl-fips
> -and locate your custom prebuilt tar archive:
> -In your build directory, edit conf/local.conf, add:
> +5.  Configure the build system to enable openssl-fips and locate your custom
> +prebuilt tar archive.
>  
> -WRTEMPLATE += "feature/openssl-fips"
> +For Yocto, in your build directory, edit conf/local.conf, add:
> +  OPENSSL_FIPS_ENABLED = "1"
> +  OPENSSL_FIPS_PREBUILT = ""
>  
> -OPENSSL_FIPS_PREBUILT = ""
> +For Wind River Linux, in your build directory, edit conf/local.conf, add:
> +  WRTEMPLATE += "feature/openssl-fips"
> +  OPENSSL_FIPS_PREBUILT = ""
>  
>  Where path is the location on the host with the prebuilt openssl-fips.
>  
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [help] Multiconfig - Depending on Recipe multiple times

2019-08-29 Thread Mark Hatle
On 8/29/19 10:05 AM, Johannes Wiesböck wrote:
> Hello Everyone,
> 
> i am using Yocto on the thud branch to build images for a real-time

This is not implemented in thud.  Thud had multiconfig support, but didn't yet
allow for dependencies between configurations.

Master has support for multiconfig dependencies as well, but I'm not sure how
well tested they are.

--Mark

> operating system based on FreeRTOS. I would like to run multiple images
> on different CPU cores in an asynchronous multi-processing (AMP) 
> configuration.
> Each image will contain the software for only one single core. For
> deployment, i would like to pack all images into one package.
> 
> Therefore, i would like to use multiconfig to build the same recipe four
> times for the same hardware but with different compiler options,
> determined by to core the image should run on. These builds are all based on
> the same source code but differ in compiler options used for building. After
> the images for all configurations are built, i would like to pack all images
> into a single package for deployment.
> 
> I have created two recipes, one called amp-image, that should be built
> once for every multiconfig. A second recipe called master-image,
> depends on amp-image for every configuration, i.e. i have the line
> 
> do_compile[mcdepends] = "multiconfig:amp-c0:amp-c0:amp-image:do_build 
> multiconfig:amp-c0:amp-c1:amp-image:do_build"
> 
> in my master-image.bb file. I also have set up my multiconfigs according
> to [0].
> 
> When i try to build master-image with 
> 
> bitbake multiconfig:amp-c0:master-image
> 
> i get an error, which i have attached in [1].
> 
> My question is: Is it generally possible for a recipe to depend on one
> recipe multiple times but from a different multiconfig, like shown above?
> Also, if possible, are the sysroots of the dependencies automatically
> populated to the recipe depending on them, like with the usual DEPENDS
> variable.
> 
> 
> Thanks for any help!
> Johannes Wiesboeck
> 
> [0] 
> https://www.yoctoproject.org/docs/2.6/dev-manual/dev-manual.html#dev-building-images-for-multiple-targets-using-multiple-configurations
> [1] https://home.in.tum.de/~wiesboec/bitbake.txt
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Error in building new recipe for Jool network translator

2019-06-06 Thread Mark Hatle
On 6/6/19 11:46 AM, Gokul Raj wrote:
> Hi Ross,
> 
> Thanks for the suggestion. I have tried with inheriting autotools-brokensep
> bbclass. Now bitbake got the path for compilation. But, the bitbake compilation is
> referring to the native kernel src path for module compilation. For your 
> information
> this package comes with both kernel module and userspace binary. Single 
> makefile
> helps building both kernel module and userspace binary. Attached log for your
> reference.
> 
> Any suggestion to compile this package.

You will need to break this into two components.  The userspace compile is
likely what you have mostly working.  The kernel module compilation, though, needs
to be a machine-specific recipe, tied to the machine and kernel.

Look at the Yocto Project documentation about building out of tree kernel 
modules.

--Mark

> Regards,
> Gokul Raj K
> 
> On Thu, Jun 6, 2019 at 10:01 PM Burton, Ross  > wrote:
> 
> Sounds like the makefiles are broken and can't handle out-of-tree
> builds.  Try replacing inherit autotools with inherit
> autotools-brokensep (and if that fixes it, file a bug with jool as
> their build is broken).
> 
> 
> Ross
> 
> On Thu, 6 Jun 2019 at 17:27, Gokul Raj  > wrote:
> >
> > Hi All,
> >
> > I'm new to yocto and I'm trying to create recipe for Jool network
> translator package. Tried to compile this package in manually and I'm able
> to compile with following steps.
> > Manual Steps:
> >
> > ./autogen.sh
> > ./configure
> > make
> > make install
> >
> > Yocto Receipe:
> >
> > DESCRIPTION = "Jool is an Open Source IPv4/IPv6 Translator"
> > SECTION = "networking"
> > LICENSE = "GPLv2"
> > LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
> >
> > SRCREV = "v4.0.1"
> > SRC_URI = "git://github.com/NICMx/Jool.git;protocol=http
> "
> >
> > DEPENDS = "libnl iptables"
> >
> > S = "${WORKDIR}/git"
> >
> > inherit autotools module
> >
> > Getting following error message /bin/bash: line 20: cd: src/mod: No such
> file or directory       But, the src/mod is present in the source 
> directory.
> Help me resolve this issue.
> >
> > Yocto Build Error:
> >
> > ERROR: jool-4.0.1-r0 do_compile: oe_runmake failed
> > ERROR: jool-4.0.1-r0 do_compile: Function failed: do_compile (log file 
> is
> located at
> 
> /home/graj/poky/build/tmp/work/qemuarm-enigma-linux-gnueabi/jool/4.0.1-r0/temp/log.do_compile.58423)
> > ERROR: Logfile of failure stored in:
> 
> /home/graj/poky/build/tmp/work/qemuarm-enigma-linux-gnueabi/jool/4.0.1-r0/temp/log.do_compile.58423
> > Log data follows:
> > | DEBUG: SITE files ['endian-little', 'bit-32', 'arm-common', 'arm-32',
> 'common-linux', 'common-glibc', 'arm-linux', 'arm-linux-gnueabi', 
> 'common']
> > | DEBUG: Executing shell function do_compile
> > | NOTE: make -j 6
> KERNEL_SRC=/home/graj/poky/build/tmp/work-shared/qemuarm/kernel-source
> KERNEL_PATH=/home/graj/poky/build/tmp/work-shared/qemuarm/kernel-source
> KERNEL_VERSION=5.0.7-yocto-standard CC=arm-enigma-linux-gnueabi-gcc 
> -mno-thumb-interwork -marm -fuse-ld=bfd
> 
> -fmacro-prefix-map=/home/graj/poky/build/tmp/work/qemuarm-enigma-linux-gnueabi/jool/4.0.1-r0=/usr/src/debug/jool/4.0.1-r0
>  
>                    
> 
> -fdebug-prefix-map=/home/graj/poky/build/tmp/work/qemuarm-enigma-linux-gnueabi/jool/4.0.1-r0=/usr/src/debug/jool/4.0.1-r0
>  
>                    
> 
> -fdebug-prefix-map=/home/graj/poky/build/tmp/work/qemuarm-enigma-linux-gnueabi/jool/4.0.1-r0/recipe-sysroot=
>  
>                    
> 
> -fdebug-prefix-map=/home/graj/poky/build/tmp/work/qemuarm-enigma-linux-gnueabi/jool/4.0.1-r0/recipe-sysroot-native=
>  
> 
> -fdebug-prefix-map=/home/graj/poky/build/tmp/work-shared/qemuarm/kernel-source=/usr/src/kernel
> LD=arm-enigma-linux-gnueabi-ld.bfd   AR=arm-enigma-linux-gnueabi-ar 
> O=/home/graj/poky/build/tmp/work-shared/qemuarm/kernel-build-artifacts
> KBUILD_EXTRA_SYMBOLS=
> > | Making all in src/mod
> > | /bin/bash: line 20: cd: src/mod: No such file or directory
> > | Makefile:343: recipe for target 'all-recursive' failed
> > | make: *** [all-recursive] Error 1
> > | ERROR: oe_runmake failed
> > | WARNING: exit code 1 from a shell command.
> > | ERROR: Function failed: do_compile (log file is located at
> 
> /home/graj/poky/build/tmp/work/qemuarm-enigma-linux-gnueabi/jool/4.0.1-r0/temp/log.do_compile.58423)
> > ERROR: Task
> 
> (/home/graj/poky/meta-enigma/recipes-example/jool/jool_4.0.1.bb:do_compile)
> failed with exit code '1'
> > NOTE: Tasks Summary: Attempted 570 tasks of which 563 didn't need to be
> rerun and 1 failed.
> >
> > 

Re: [yocto] prelink-cross with -fno-plt

2019-06-01 Thread Mark Hatle
Thanks, this shows that the prelinking is still working in this case.  I'll get
your patch queued up.  If you don't see any progress on it this coming week,
please feel free to remind me.

--Mark

On 5/29/19 1:42 PM, Shane Peelar wrote:
> Hi Mark,
> 
> Thank you for your reply and no problem -- I chose to benchmark ssh-add with
> it.  It contains no `.plt`.
> 
> The results are as follows:
> 
> Without prelink (ran prelink -auv):
> 
>      26019:
>      26019:     runtime linker statistics:
>      26019:       total startup time in dynamic loader: 1321674 cycles
>      26019:                 time needed for relocation: 797948 cycles (60.3%)
>      26019:                      number of relocations: 624
>      26019:           number of relocations from cache: 3
>      26019:             number of relative relocations: 9691
>      26019:                time needed to load objects: 389972 cycles (29.5%)
> Could not open a connection to your authentication agent.
>      26019:
>      26019:     runtime linker statistics:
>      26019:                final number of relocations: 630
>      26019:     final number of relocations from cache: 3
> 
> With prelink (ran prelink -av):
> 
>       1930:
>       1930:     runtime linker statistics:
>       1930:       total startup time in dynamic loader: 462288 cycles
>       1930:                 time needed for relocation: 48730 cycles (10.5%)
>       1930:                      number of relocations: 7
>       1930:           number of relocations from cache: 134
>       1930:             number of relative relocations: 0
>       1930:                time needed to load objects: 286076 cycles (61.8%)
> Could not open a connection to your authentication agent.
>       1930:
>       1930:     runtime linker statistics:
>       1930:                final number of relocations: 9
>       1930:     final number of relocations from cache: 134
> 
> I also tested against execstack, which for sure had the assertion fire on.
> Without prelink:
> 
>      27736:
>      27736:     runtime linker statistics:
>      27736:       total startup time in dynamic loader: 1955954 cycles
>      27736:                 time needed for relocation: 755440 cycles (38.6%)
>      27736:                      number of relocations: 247
>      27736:           number of relocations from cache: 3
>      27736:             number of relative relocations: 1353
>      27736:                time needed to load objects: 710384 cycles (36.3%)
> /usr/bin/execstack: no files given
>      27736:
>      27736:     runtime linker statistics:
>      27736:                final number of relocations: 251
>      27736:     final number of relocations from cache: 3
> 
> With prelink:
> 
>       3268:
>       3268:     runtime linker statistics:
>       3268:       total startup time in dynamic loader: 1421206 cycles
>       3268:                 time needed for relocation: 199396 cycles (14.0%)
>       3268:                      number of relocations: 3
>       3268:           number of relocations from cache: 88
>       3268:             number of relative relocations: 0
>       3268:                time needed to load objects: 696886 cycles (49.0%)
> /usr/bin/execstack: no files given
>       3268:
>       3268:     runtime linker statistics:
>       3268:                final number of relocations: 5
>       3268:     final number of relocations from cache: 88
> 
> So, it looks like prelink is working on these :)
> 
> On Tue, May 28, 2019 at 2:57 PM Mark Hatle  <mailto:mark.ha...@windriver.com>> wrote:
> 
> Sorry for my delayed reply.  I was out on a business trip.
> 
> Did you try this with the ld.so statistics to see if the relocations were 
> indeed
> reduced at runtime?
> 
> One of my worries with these changes (since I am not an ELF expert 
> either) is
> that we make a change that doesn't actually do anything -- but people 
> expect
> it to.
> 
> $ LD_DEBUG=help /lib/ld-linux.so.2
> Valid options for the LD_DEBUG environment variable are:
> 
>   libs        display library search paths
>   reloc       display relocation processing
>   files       display progress for input file
>   symbols     display symbol table processing
>   bindings    display information about symbol binding
>   versions    display version dependencies
>   scopes      display scope information
>   all         all previous options combined
>   statistics  display relocation statistics
>   unused      determined unused DSOs
>   help        display this help message and exit
> 
> To direct the debugging output into a file instead of 

Re: [yocto] prelink-cross with -fno-plt

2019-05-28 Thread Mark Hatle
Sorry for my delayed reply.  I was out on a business trip.

Did you try this with the ld.so statistics to see if the relocations were indeed
reduced at runtime?

One of my worries with these changes (since I am not an ELF expert either) is
that we make a change that doesn't actually do anything -- but people expect it 
to.

$ LD_DEBUG=help /lib/ld-linux.so.2
Valid options for the LD_DEBUG environment variable are:

  libsdisplay library search paths
  reloc   display relocation processing
  files   display progress for input file
  symbols display symbol table processing
  bindingsdisplay information about symbol binding
  versionsdisplay version dependencies
  scopes  display scope information
  all all previous options combined
  statistics  display relocation statistics
  unused  determined unused DSOs
  helpdisplay this help message and exit

To direct the debugging output into a file instead of standard output
a filename can be specified using the LD_DEBUG_OUTPUT environment variable.

I believe that it's the 'statistics' option.

LD_DEBUG=statistics 

Should result in something like:

128820: runtime linker statistics:
128820:   total startup time in dynamic loader: 1974661 cycles
128820: time needed for relocation: 354639 cycles (17.9%)
128820:  number of relocations: 90
128820:   number of relocations from cache: 3
128820: number of relative relocations: 1201
128820:time needed to load objects: 1303654 cycles (66.0%)
128820:
128820: runtime linker statistics:
128820:final number of relocations: 94
128820: final number of relocations from cache: 3

If prelink is working, the number of relocations (relative or otherwise) will be
significantly reduced from the original, non-prelinked version.

If you can run this test, it would give me the assurance that the patch is safe,
and I'll get it incorporated into the prelink-cross sources.

--Mark

On 5/25/19 2:53 PM, Shane Peelar wrote:
> Patch is attached.  Thank you!
> 
> On Sat, May 25, 2019 at 2:30 AM Khem Raj  > wrote:
> 
> On Fri, May 24, 2019 at 6:58 PM Shane Peelar  > wrote:
> >
> > Great!  Would you be willing to accept a patch that makes arch-x86_64.c
> handle that condition like the other arches?
> >
> 
> yes certainly.
> 
> > -Shane
> >
> > On Fri, May 24, 2019 at 12:27 PM Khem Raj  > wrote:
> >>
> >>
> >>
> >> On 5/24/19 8:10 AM, Shane Peelar wrote:
> >> > I did some reading into the sources in other architectures.  The 
> closest
> >> > match, arch_i386.c, makes the write conditional as you say.
> >> > So do other arches, including |arch_arm.c, |arch_sh.c, |arch-mips.c,
> >> > |arch-s390.c, |arch-s390x.c, and |arch-ia64.c.||
> >> > ||
> >> > ||
> >> > Notably, |||arch-cris.c||| has the same assert as
> >> > |||arch-x86_64.c||| instead of the conditional.
> >> >
> >> > The code roughly looks like follows:||
> >> > ||
> >> > |||
> >> > |||
> >> > 1. Check for dso->info[DT_PLTGOT].  If it does not exist, return 0
> >> > 2. Call addr_to_sec on dso->info[DT_PLTGOT], return 1 if error
> >> > 3. Look for the section named ".plt" in the ELF.
> >> > 4. If the section cannot be found, return 0
> >> > 5. Otherwise, write the address of .plt + constant (dependent on 
> arch)
> >> > to got[1]||
> >> > ||
> >> > |||
> >> > |||
> >> > In |||arch-x86_64.c and arch-cris.c|||, step (4) above is an
> >> > assert:|||
> >> >
> >> > |||1. Check for dso->info[DT_PLTGOT].  If it does not exist, 
> return 0
> >> > 2. Call addr_to_sec on dso->info[DT_PLTGOT], return 1 if error
> >> > 3. Look for the section named ".plt" in the ELF.
> >> > 4. Assert that the section was found
> >> > 5. Write the address of .plt + constant (dependent on arch) to got[1]
> >> >
> >> > I tested out making the assert conditional and nothing seemed to 
> break
> >> > at least.
> >> > |||
> >> > |||
> >>
> >> It seems ok to me.
> >>
> >> >
> >> > On Fri, May 24, 2019 at 12:08 AM Khem Raj  
> >> > >> wrote:
> >> >
> >> >
> >> >
> >> >     On 5/23/19 7:53 PM, Shane Peelar wrote:
> >> >      > Any of them on the system pretty much, and yes they are also
> >> >     built with
> >> >      > -fno-plt.
> >> >
> >> >     OK, I think its better to them conditionally check for .plt 
> section,
> >> >     can you describe more of whats going on when sections are 
> 

Re: [yocto] long time for starting sshd (wait for crng init done ?)

2019-05-13 Thread Mark Hatle
On 5/13/19 2:07 PM, s...@gmx.li wrote:
> From yocto 2.5 to 2.7 I noticed a change in booting. The kernel stops for 
> around 85 seconds.
> It seems to me that starting sshd takes time until crng init is done.
> In 2.5 it doesn't wait for that. How can I avoid that?
> Maybe I have to add that I use a recipe that adds keys as rootfs is usually 
> r/o.
> 
> Another thing I have observed (which is not clear to me): I don't get a 
> message from system message bus anymore. ???
> 
> Instead of it udevd complains about "specific group 'kvm' unknown. Looking 
> into source there are  mentioned:
> # The static_node is required on s390x and ppc (they are using MODULE_ALIAS)
> So, can I safely ignore that (use ARM).
> 
> 

There was recently a discussion on this in the oe-core mailing list (Search for
"[OE-core] [PATCH 2/2] openssh: usable sshd depends on rngd from rng-tools", be
sure to read the whole thread.)  Assuming you are using certain cryptography
resources, the system is waiting for enough entropy for a good random number 
set.

Often you may need to enable rngd, or up the quality of the kernel hardware
random number generators, as many are set very low.  (Often the hardware random
number generator you have is of sufficient quality that the quality level can be
increased to generate random numbers more quickly.)

Be aware of the ramifications if you make these changes to your system --
faster entropy generation does not necessarily equal quality.  There are
numerous incorrect assumptions floating around about entropy and the kernel.  Above
all else, do not use /dev/urandom as an entropy source for /dev/random.  That is
simply not safe to do.

What you do NOT want to do is figure out that you are booting 10k boards in a
factory and they all end up getting exactly the same random numbers and thus
identical keys.  (Yes this has happened in the past!)

--Mark
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [OE-core] patchwork

2019-05-13 Thread Mark Hatle
On 5/13/19 10:46 AM, Adrian Bunk wrote:
> On Mon, May 13, 2019 at 10:32:48AM +0300, Mark Hatle wrote:
>> On 5/12/19 9:04 PM, akuster808 wrote:
>>> ok, so no Admins. This is unacceptable.
>>>
>>> OE TSC and Board, I believe it's your time to get involved.
>>
>> I've not used patchwork before, do you know who originally configured it?  
>> Was
>> it Paul Eggleton, or Richard, or?  If I have an idea who was originally
>> responsible, I'm pretty sure we can figure out a handoff plan to someone 
>> else.
> 
> http://git.yoctoproject.org/cgit/cgit.cgi/patchwork/

No, I mean who set it up for OE (or the YP) to use.  I don't even know at this
point whose infrastructure the component runs on.  If I had that I'd know to
start with either Tom or Michael.

--Mark

>> --Mark
> 
> cu
> Adrian
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [OE-core] patchwork

2019-05-13 Thread Mark Hatle
On 5/12/19 9:04 PM, akuster808 wrote:
> ok, so no Admins. This is unacceptable.
> 
> OE TSC and Board, I believe it's your time to get involved.

I've not used patchwork before, do you know who originally configured it?  Was
it Paul Eggleton, or Richard, or?  If I have an idea who was originally
responsible, I'm pretty sure we can figure out a handoff plan to someone else.

--Mark

> regards,
> Armin
> 
> On 5/3/19 10:18 AM, akuster808 wrote:
>> Hello OE folk,
>>
>> My apologies for cross posting.
>>
>> Who is the Admin for Patchwork?
>>
>> I would like additional privileges to manage the "Yocto Project Layers"
>> queue.
>>
>> Also, I would like additional  privileges to add new branches for any
>> oe, meta-oe queues & yp layers.
>>
>> kind regards,
>> Armin
>>
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [OE-core] Git commit process question.

2019-04-01 Thread Mark Hatle
On 4/1/19 6:20 PM, akuster808 wrote:
> 
> 
> On 4/1/19 4:02 PM, Richard Purdie wrote:
>> On Mon, 2019-04-01 at 15:33 -0700, akuster808 wrote:
>>> Hello,
>>>
>>> I have noticed a large number of git commits with no header
>>> information being accepted.
>> Can you be more specific about what "no header information" means? You
>> mean a shortlog and no full log message?
> Commits with just a "subject" and signoff. No additional information

If you can convey the reason for the change in just the subject, that is
acceptable, but there is -always- supposed to be a Signed-off-by line according
to our guidelines.

So if you see this, I think we need to step back and figure out where and why
it's happening and get it resolved in the future.

(Places I've seen this in the past were one-off mistakes, and clearly so -- it
wasn't anything that we needed to work on correcting.)

--Mark

> We tend to reference back to how the kernel does things.
> 
> https://www.kernel.org/doc/html/latest/process/submitting-patches.html
> These two sections in particular.
> 
> 
> 2) Describe your changes
> 
> Describe your problem. Whether your patch is a one-line bug fix or 5000 lines 
> of
> a new feature, there must be an underlying problem that motivated you to do 
> this
> work. Convince the reviewer that there is a problem worth fixing and that it
> makes sense for them to read past the first paragraph.
> 
> 
> along with this section.
> 
> 
> 14) The canonical patch format
> 
> This section describes how the patch itself should be formatted. Note that, if
> you have your patches stored in a |git| repository, proper patch formatting 
> can
> be had with |git format-patch|. The tools cannot create the necessary text,
> though, so read the instructions below anyway.
> 
> The canonical patch subject line is:
> 
> Subject: [PATCH 001/123] subsystem: summary phrase
> 
> The canonical patch message body contains the following:
> 
>   * A |from| line specifying the patch author, followed by an empty line
> (only needed if the person sending the patch is not the author).
>   * The body of the explanation, line wrapped at 75 columns, which will be
> copied to the permanent changelog to describe this patch.
>   * An empty line.
>   * The |Signed-off-by:| lines, described above, which will also go in the
> changelog.
>   * A marker line containing simply |---|.
>   * Any additional comments not suitable for the changelog.
>   * The actual patch (|diff| output).
> 
> 
> - Armin
> 
>> Cheers,
>>
>> Richard
>>
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Thud: building SDK fails: cannot find -lssp

2019-03-21 Thread Mark Hatle
On 3/20/19 5:30 PM, Lukasz Zemla wrote:
> Hello All,
> 
> I am trying to build SDK using Yocto thud tagged 2.6.1. Toolchain from 
> meta-linaro (edb7ffc2a121df7596385595abe75180296103e0). Unfortunately it 
> fails during perl build, complaining about missing ssp_nonshared and ssp 
> libraries. 

The ssp component comes from the toolchain.  If you can reproduce this on the
base system (not meta-linaro toolchain), then it may be a broader bug.
Otherwise, it's likely related to either a bug or configuration decision in
meta-linaro.

--Mark

> Do you have any ideas how to fix it? 
> I know it is about stack smash protector, but shouldn't it be automatically 
> included and available for linker during a build?
> 
> 
> | x86_64-pokysdk-linux-gcc  
> --sysroot=/data/home/user/00-projects/proj_myproj_system0/build_myproj_system0/tmp/work/x86_64-nativesdk-pokysdk-linux/nativesdk-perl/5.24.4-r0/recipe-sysroot
>  -Wl,-O1 -fstack-protector -o miniperl \
> | opmini.o perlmini.o  gv.o toke.o perly.o pad.o regcomp.o dump.o util.o 
> mg.o reentr.o mro_core.o keywords.o hv.o av.o run.o pp_hot.o sv.o pp.o 
> scope.o pp_ctl.o pp_sys.o doop.o doio.o regexec.o utf8.o taint.o deb.o 
> universal.o globals.o perlio.o perlapi.o numeric.o mathoms.o locale.o 
> pp_pack.o pp_sort.o caretx.o dquote.o time64.o  miniperlmain.o  -lpthread  
> -ldl -lm -lcrypt -lutil -lc
> | 
> /data/home/user/00-projects/proj_myproj_system0/build_myproj_system0/tmp/work/x86_64-nativesdk-pokysdk-linux/nativesdk-perl/5.24.4-r0/recipe-sysroot-native/usr/bin/x86_64-pokysdk-linux/../../libexec/x86_64-pokysdk-linux/gcc/x86_64-pokysdk-linux/7.1.1/ld:
>  cannot find -lssp_nonshared
> | 
> /data/home/user/00-projects/proj_myproj_system0/build_myproj_system0/tmp/work/x86_64-nativesdk-pokysdk-linux/nativesdk-perl/5.24.4-r0/recipe-sysroot-native/usr/bin/x86_64-pokysdk-linux/../../libexec/x86_64-pokysdk-linux/gcc/x86_64-pokysdk-linux/7.1.1/ld:
>  cannot find -lssp
> | collect2: error: ld returned 1 exit status
> | makefile:391: recipe for target 'lib/buildcustomize.pl' failed
> | make[1]: *** [lib/buildcustomize.pl] Error 1
> | make[1]: Leaving directory 
> '/data/home/user/00-projects/proj_myproj_system0/build_myproj_system0/tmp/work/x86_64-nativesdk-pokysdk-linux/nativesdk-perl/5.24.4-r0/perl-5.24.4'
> | make[1]: Entering directory 
> '/data/home/user/00-projects/proj_myproj_system0/build_myproj_system0/tmp/work/x86_64-nativesdk-pokysdk-linux/nativesdk-perl/5.24.4-r0/perl-5.24.4'
> | ./miniperl -Ilib autodoc.pl
> | make[1]: ./miniperl: Command not found
> | makefile:441: recipe for target 'pod/perlintern.pod' failed
> 
> Thank you in advance,
> Lukasz
> 
> ***
> The information in this email is confidential and intended solely for the 
> individual or entity to whom it is addressed.  If you have received this 
> email in error please notify the sender by return e-mail, delete this email, 
> and refrain from any disclosure or action based on the information.
> ***
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] PREFERRED_VERSION ignored

2019-03-11 Thread Mark Hatle
On 3/11/19 12:46 PM, Marco wrote:
> Hello,
> using Yocto version 'rocko' I have a custom layer defining a new
> recipe and a distro.
> I have a recipe openssl_1.0.1u.bb in my meta-custom layer.
> My meta-custom layer.conf has BBFILE_PRIORITY_meta-custom = "9"
> 
> In my meta-custom layer I have a distro configuration saying
> PREFERRED_VERSION_openssl = "1.0.2o" so I expected to build 1.0.2o but
> when I build bitbake openssl is always selected the 1.0.1u
> 
> Same problem if I set PREFERRED_VERSION_openssl = "1.0.2o" in the local.conf
> 
> Is there an issue or am I doing something wrong?

I've seen this issue before.  There are issues with layer priority as well as
recipe priority.  The only way I've dealt with this is by blacklisting specific
versions of recipes...

PNBLACKLIST[] ?= "${@'Only version ' +
d.getVar('PREFERRED_VERSION_${BPN}') + ' (not ${PV}) is supported in this
configuration. ' if d.getVar('PREFERRED_VERSION_${BPN}') and not
d.getVar('PV').startswith(d.getVar('PREFERRED_VERSION_${BPN}').replace('%', ''))
else ''}"

(note the above is one single line)

--Mark

> --
> Marco
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [opkg-devel] [opkg-utils] Question: why update-alternatives from opkg-utils chooses /usr/lib to hold database?

2019-03-06 Thread Mark Hatle
On 3/5/19 10:44 PM, Alex Kiernan wrote:
> On Tue, Mar 5, 2019 at 10:50 PM Richard Purdie
>  wrote:
>>
>> On Tue, 2019-03-05 at 16:05 +, Alejandro Del Castillo wrote:
>>>
>>> On 3/5/19 12:11 AM, ChenQi wrote:
 Hi All,

 Recently I'm dealing with issue from which some discussion raises.
 I'd like to ask why update-alternatives from opkg-utils chooses
 /usr/lib
 to hold its alternatives database?
 I looked into debian, its update-alternatives chooses /var/lib by
 default.
 Is there some design consideration? Or some historical reason?
>>>
>>> Update-alternatives used to be on the opkg repo. I did a search
>>> there
>>> all the way to the first commit on 2008-12-15 [1], but even then
>>> /usr/lib was used. I can't think of a design consideration that
>>> would
>>> make /usr/lib more palatable than the Debian default.
>>>
>>> Maybe someone with more knowledge of the previous history can chime
>>> in?
>>>
>>> [1]
>>> http://git.yoctoproject.org/clean/cgit.cgi/opkg/commit/?id=8bf49d16a637cca0cd116450dfcabc4c941baf6c
>>
>> I think the history is that the whole of /var was considered volatile
>> and we wanted the alternatives data to stick around so it was put under
>> /usr.
>>
>> That decision doesn't really make sense now since only parts of /var
>> are volatile..
>>
> 
> I don't use opkg (or in fact any package manager on a target), but I
> do use OSTree, where my /var isn't part of what gets deployed to a
> device 
> (https://ostree.readthedocs.io/en/latest/manual/adapting-existing/#adapting-existing-package-managers)
> so having the option to keep it in /usr would be important to anyone
> who has mechanisms like that.
> 

Do you allow a device that has been sent an update via OSTree to then run
update-alternatives to change what has been set by the update mechanism?

In my own uses of both OSTree and update-alternatives, I set this on a global
basis and use it that way.  So no individual user (device) would be different
than what was globally sent out.

If this is desired, then continuing to have a mechanism to allow it to be
overridden for legacy or your purposes seems reasonable... but I think moving
the default still makes a lot of sense.

--Mark
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Removing busybox

2019-02-27 Thread Mark Hatle
On 2/27/19 11:06 AM, Tom Rini wrote:
> On Wed, Feb 27, 2019 at 01:16:56PM +0100, Jean-Christian de Rivaz wrote:
> 
>> Hi all,
>>
>> After reading the thread "Removing busybox completely from the generated
>> image" I tested to reproduce the method but this doesn't work as expected.
>>
>> git clone git://git.yoctoproject.org/poky -b thud
>> cd poky
>> echo 'require conf/distro/poky.conf' > meta-poky/conf/distro/poky-ng.conf
>> echo 'DISTRO = "poky-ng"' >> meta-poky/conf/distro/poky-ng.conf
>> echo 'VIRTUAL-RUNTIME_base-utils = ""' >> meta-poky/conf/distro/poky-ng.conf
>> sed -i 's/^DISTRO.*/DISTRO = "poky-ng"/' conf/local.conf
>> bitbake core-image-minimal
>> runqemu core-image-minimal kvm
>>
>> After login as root there still a lot of links to busybox inside /sbin/
>> /bin/ /usr/sbin/ and /usr/bin/ .
>>
>> Can someone provides a working method ?
> 
> You're missing a few more things, yes.  What I have is:
> # Switch to systemd
> DISTRO_FEATURES += "systemd"
> VIRTUAL-RUNTIME_init_manager = "systemd"
> VIRTUAL-RUNTIME_initscripts = ""
> VIRTUAL-RUNTIME_syslog = ""
> VIRTUAL-RUNTIME_login_manager = "shadow-base"
> DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
> 
> # Replace busybox
> PREFERRED_PROVIDER_virtual/base-utils = "coreutils"
> VIRTUAL-RUNTIME_base-utils = "coreutils"
> VIRTUAL-RUNTIME_base-utils-hwclock = "util-linux-hwclock"
> VIRTUAL-RUNTIME_base-utils-syslog = ""
> 
> If you aren't using systemd you'll need to move the login_manager
> example over as well, otherwise busybox gets pulled for that.
> 
> I'm using the above on thud, today.  And that's not a 1:1 replacement as
> my image pulls in a number of other packages for various things I
> want/need.
> 
> 

You can also blacklist busybox to ensure that it never builds, and thus can't
show up in your target image.

PNBLACKLIST[busybox] = "Don't build this"




-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] gitsm fetcher fixes in thud?

2019-02-12 Thread Mark Hatle
On 2/11/19 10:10 PM, Scott Murray wrote:
> Hi,
> 
> I'm working on upgrading Automotive Grade Linux (AGL) from rocko to thud,
> and there's substantial git submodule breakage with the fetcher as it
> stands in thud ATM.  I've been doing tests with the current gitsm.py from
> master with Mark's recent fixes, and it works fine.  I'm wondering what
> the feasibility of getting those fixes into thud is?  I am willing to do
> the cherry-picking and testing if someone can point me at the mechanism
> for it (poky-contrib?).

The work was being held off from backporting due to concerns about instability
being introduced by the patches.

If you feel that things are improved with the master code, please do a backport.
 There are a small number of patches needed, and I tried to design them in a way
that they would just apply to thud without any additional work [barring 
defuzzing].

(I'm traveling for work currently, and won't be back to my normal schedule until
Monday.  But I will be reading email and will attempt to help any way I can.)

--Mark

> Thanks,
> 
> Scott
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Community support for any Yocto release

2019-01-16 Thread Mark Hatle
On 1/16/19 2:58 AM, Gaurang Shastri wrote:
> Thanks Nicolas for the prompt answer and wiki page information.
> 
> So as you said, "Typically, alongside the latest release the previous two
> releases are also maintained.", do you mean any release will be maintained 
> for 1
> year by community?

Generally there is about 6 months of development between releases, so the rough
estimate is 1 year of general updates from the community as a whole.  Sometimes
it takes slightly longer (or slightly less) than 6 months for the next release,
and that will then adjust the time scales for the prior version slightly as well.

One thing to keep in mind about 'community support': "support" is not intended
in the commercial sense, where there are specific fix time frames, etc.  It
just means there is an assigned maintainer and QA resources for that version
during that time.  It's still up to developers to send fixes for issues
encountered to the maintainers to get the issues fixed.

If you need a level of more dedicated support (time frame, resources, etc), you
should look for a commercial Yocto Project member.

--Mark

> Regards,
> Gaurang
> 
> On Wed, Jan 16, 2019 at 9:50 AM Nicolas Dechesne  > wrote:
> 
> Hello,
> 
> On Wed, Jan 16, 2019 at 9:47 AM Gaurang Shastri  > wrote:
> 
> 
> Hi All,
> 
> For how many years yocto community officially support any Yocto 
> release?
> (This includes like bug fixes, CVE fixes, enhancements, etc etc)
> 
> For example, the latest release happened on 11/15/2018 (Yocto 2.6), so
> how long this will be supported by community?
> 
> Sorry, if this information is already present on some wiki page, but I
> am not able to find it. So pointer to that wiki page will really help.
> 
> 
> This information is indeed available her:
> https://wiki.yoctoproject.org/wiki/Stable_branch_maintenance
> 
> "Typically, alongside the latest release the previous two releases are 
> also
> maintained."
>  
> 
> 
> Regards,
> Gaurang Shastri
> -- 
> ___
> yocto mailing list
> yocto@yoctoproject.org 
> https://lists.yoctoproject.org/listinfo/yocto
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Error while using PSEUDO

2018-12-19 Thread Mark Hatle
On 12/19/18 8:16 AM, madoga wrote:
> Hello everyone, 
> 
> I am trying to use pseudo due to I need to set my entire rootfs. I would like 
> to
> ask you some questions about how to use it, considering that it does not seem 
> to
> work. I am going to explain the process I have followed: 
> 
> I enabled PSEUDO_DISABLED and PSEUDO_PREFIX at meta/conf/bitbake.conf: / /
> /export PSEUDO_DISABLED = "0"//
> /
> /export PSEUDO_PREFIX = "${STAGING_DIR_NATIVE}${prefix_native}"/

The above is most certainly incorrect.  Bitbake manages all of these settings
itself.  Each recipe has its own individual pseudo configuration and database.

The errors below are likely due to incorrect settings above.

> Also, FILESYSTEM_PERMS_TABLE variable is pointing to my custom 
> fs-perms-cle.txt.
> It is declared at build/conf/local.conf. During the building process, I 
> receive
> constantly these kind of warning messages about "host contamination":
> /WARNING: libgcc-7.3.0-r0 do_package_qa: QA Issue: libgcc: /libgcc-
> dbg/usr/src/debug/libgcc/7.3.0-r0/gcc-7.3.0/build.powerpc-clepm-linux-gnuspe.powerpc-distro-linux-gnuspe/gcc/insn-constants.h
> is owned by uid 1002, which is the same as the user running bitbake. This may 
> be
> due to host contamination [host-user-contaminated]/
> 
> When it has finished, I check my rootfs or packages' image folder and
> permissions are always set exactly equal as the user running bitbake. What is
> going on? Have I missed something? Why my fs-perms.txt is ignored? I debugged
> package.bbclass and parameters are parsed OK...

Use the produced tarball from the build system, extract it with root permissions
on your system (or just view it in place) and the permissions will be correct.

You should not be attempting to use pseudo directly to manipulate things outside
of the build system.  That isn't what it's intended for.  Pseudo is to be used
with specific components to manage individual parts of the build system and
emulate the necessary filesystem parts and pieces that a regular user can not
typically access.

> I have also tried to declare LD_PRELOAD variable, pointing directly to my host
> libpseudo.so, but the result is the same: 
> /LD_PRELOAD = "/usr/lib/x86_64-linux/pseudo/libpseudo.so"//
> /
> /LD_PREFIX = "/usr/"/
> 
> Should I call fakeroot somewhere in the middle of the process?
> It looks like an easy process, reading Yocto's manual but I think I have
> forgotten something... I hope someone has had the same failure.

You do not call pseudo.  The build system calls pseudo.

If you are defining custom tasks that attempt to manipulate components prior to
packaging, then you may have to declare the task as a 'fakeroot' task; this will
trigger the build system to load the necessary components and set them up in a
consistent way for that particular recipe.
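
As a rough, untested sketch (the task name and paths here are made up), a custom
task that needs root-style ownership changes would look something like:

   fakeroot do_fix_ownership() {
       # runs under pseudo, so the chown/chmod results are recorded in the
       # recipe's pseudo database rather than applied on the host filesystem
       chown -R root:root ${D}${sysconfdir}/myapp
       chmod 0700 ${D}${sysconfdir}/myapp
   }
   addtask fix_ownership after do_install before do_package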

--Mark

> Thank you!
> Best Regards,
> Mario
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [OE-core] FILESYSTEM_PERMS_TABLE / fs-perms.txt

2018-12-10 Thread Mark Hatle
On 12/10/18 4:14 AM, madoga wrote:
>> On 12/5/18 11:12 AM, madoga wrote:
>>
>>> Hello List,
>>> I am trying to configure my entire filesystem by using 
>>> FILESYSTEM_PERMS_TABLES
>>> variable pointing to my custom fs-perms.txt, but it does not work. While I
>>> debugged package.bbclass looking for any error or failure, I found something
>>> strange with os.chmod & os.lchown methods (at function fix_perms):
>>> # Fix the permission, owner and group of path
>>> def fix_perms(path, mode, uid, gid, dir):
>>> if mode and not os.path.islink(path):
>>> #bb.note("Fixup Perms: chmod 0%o %s" % (mode, dir))
>>> os.chmod(path, mode)
>>> # -1 is a special value that means don't change the uid/gid
>>> # if they are BOTH -1, don't bother to lchown
>>> if not (uid == -1 and gid == -1):
>>> #bb.note("Fixup Perms: lchown %d:%d %s" % (uid, gid, dir))
>>> os.lchown(path, uid, gid)
>>> I have hardcoded mode variable to “0333”, just for testing: os.chmod(path,
>>> 0o333)and I have seen that permissions were been configured into a “0711”. 
>>> Also
>>> I am going to ask about os.lchown, due to my filesystem is still been owned 
>>> by
>>> my user and my group.
>>> Does anyone have an idea about what is going on? Has somebody have the same
>>> problem?
>>
>> The commands run under the pseudo environment. Pseudo captures these commands
>> and stores them in a database it can 'replay' at any time.
>>
>> It only make them in the actual filesystem if permitted by the host.
>>
>> You must look at the filesystem results in the live system (running pseudo) 
>> or
>> in the results of the build -- otherwise what you are looking at is not 
>> valid.
>>
>> (BTW this is the reason that the commended code was left in that function. In
>> case something is wrong, just remove the comments and you'll get notes on 
>> what
>> it is doing to help debug. This shouldn't be necessary unless you are
>> developing the function itself.. but that is why they are there.)
>>
> 
> Thank you for your reply Mark!

pseudo replaces fakeroot.  Fakeroot can't capture some of the items that are
needed (and for a few other reasons).

> Does the pseudo environment allow to set the entire rootfs by using fakeroot? 
> Perhaps this is not the right way to make it, I am not pretty sure about 
> that. I would like to obtain my rfs with a concrete permissions when bitbake 
> finishes, folders, executables... Is there another/better way to accomplish?

Pseudo is just an LD_PRELOAD library that is connected to a database that
captures permission changes to files..

> About debbuging bb.notes, I have already removed the comments and everything 
> seems working OK.

This tells me that it's likely working properly and you are not in the pseudo
environment.  You need to be in that loaded environment with the correct
database attached to look at the set permissions.  Any task in the system
prefixed by 'fakeroot' will do this for you.

--Mark

> Best Regards,
> Mario
> 
>> --Mark
>>
>>> Thank you
>>> Best Regards,
>>> Mario
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [OE-core] FILESYSTEM_PERMS_TABLE / fs-perms.txt

2018-12-05 Thread Mark Hatle
On 12/5/18 11:12 AM, madoga wrote:
> Hello List,
> 
> I am trying to configure my entire filesystem by using FILESYSTEM_PERMS_TABLES
> variable pointing to my custom fs-perms.txt, but it does not work. While I
> debugged package.bbclass looking for any error or failure, I found something
> strange with os.chmod & os.lchown methods (at function fix_perms):
> 
>  # Fix the permission, owner and group of path
> 
> def fix_perms(path, mode, uid, gid, dir):
> 
>     if mode and not os.path.islink(path):
> 
>     #bb.note("Fixup Perms: chmod 0%o %s" % (mode, dir))
> 
>     os.chmod(path, mode)
> 
>     # -1 is a special value that means don't change the uid/gid
> 
>     # if they are BOTH -1, don't bother to lchown
> 
>     if not (uid == -1 and gid == -1):
> 
>     #bb.note("Fixup Perms: lchown %d:%d %s" % (uid, gid, dir))
> 
>     os.lchown(path, uid, gid)
> 
> I have hardcoded mode variable to “0333”, just for testing: os.chmod(path,
> 0o333)and I have seen that permissions were been configured into a “0711”. 
> Also
> I am going to ask about os.lchown, due to my filesystem is still been owned by
> my user and my group.
> 
>  Does anyone have an idea about what is going on? Has somebody have the same
> problem?
> 

The commands run under the pseudo environment.  Pseudo captures these commands
and stores them in a database it can 'replay' at any time.

It only makes the changes in the actual filesystem if permitted by the host.

You must look at the filesystem results in the live system (running pseudo) or
in the results of the build -- otherwise what you are looking at is not valid.

(BTW this is the reason that the commented-out code was left in that function.  In
case something is wrong, just remove the comments and you'll get notes on what
it is doing to help debug.  This shouldn't be necessary unless you are
developing the function itself.. but that is why they are there.)

--Mark

> 
> Thank you
> 
> Best Regards,
> 
> Mario
> 
> 
> 
> 
> 
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] meta-mingw: unable to run executables on Windows

2018-11-15 Thread Mark Hatle
On 11/14/18 11:01 PM, Khem Raj wrote:
> On Wed, Nov 14, 2018 at 8:08 PM Joshua Watt  wrote:
>>
>> On Wed, Nov 14, 2018 at 8:41 PM Mark Hatle  wrote:
>>>
>>> On 11/14/18 9:54 AM, Mark Hatle wrote:
>>>> On 11/13/18 3:56 AM, Samuli Piippo wrote:
>>>>> Hi,
>>>>>
>>>>> I've just upgraded poky and meta-mingw layers from sumo to thud and as a 
>>>>> result
>>>>> a lot of the executables in the toolchain no longer run correctly on 
>>>>> Windows.
>>>>
>>>> Which version of windows?
>>>>
>>>>> I've built meta-toolchain for SDKMACHINE=x86_64-mingw32. From that, 
>>>>> gcc/g++ work
>>>>> fine on Windows 10, but ar, as, objdumb, and others hang for ~30 seconds 
>>>>> and
>>>>> exit without any output.
>>>>>
>>>>> Has anyone else seen this?
>>>>
>>>> I've run a toolchain made on mingw after sumo, but before thud's release.  
>>>> I'll
>>>> see if I can find a VM and give it a try later today.
>>>
>>> I'm running on Windows 7 for my testing (ya, I know old.. but it's what I 
>>> got.)
>>>
>>> Can you try adding the following to your conf/local.conf: GCCPIE_mingw32 = 
>>> ""
> 
> this will be effective just for SDK and native versions I hope, but in
> cases if we
> have this override also applicable for target then this fix is not
> correct. We have
> to keep using it for target builds.

The mingw32 override is only present when building -nativesdk- mingw32 software.
 -Not- cross canadian and similar.

--Mark

>>>
>>> I found that the SDK was not working properly here as well, but only 
>>> binutils.
>>> The above seems to fix the issue.  (You do have to rebuild your SDK.)
>>
>> I also saw this issue on Windows 7, and your described fix corrected
>> it (Thanks!). On the plus side, the automated SDK testing that I'm
>> working on discovered it as well (e.g. the tests failed because of
>> it), which means that the tests are working and should help prevent
>> issues like this in the future once I get it merged.
>>
>>>
>>>> --Mark
>>>>
>>>
>>> --
>>> ___
>>> yocto mailing list
>>> yocto@yoctoproject.org
>>> https://lists.yoctoproject.org/listinfo/yocto
>> --
>> ___
>> yocto mailing list
>> yocto@yoctoproject.org
>> https://lists.yoctoproject.org/listinfo/yocto

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] meta-mingw: unable to run executables on Windows

2018-11-14 Thread Mark Hatle
On 11/14/18 9:54 AM, Mark Hatle wrote:
> On 11/13/18 3:56 AM, Samuli Piippo wrote:
>> Hi,
>>
>> I've just upgraded poky and meta-mingw layers from sumo to thud and as a 
>> result
>> a lot of the executables in the toolchain no longer run correctly on Windows.
> 
> Which version of windows?
> 
>> I've built meta-toolchain for SDKMACHINE=x86_64-mingw32. From that, gcc/g++ 
>> work
>> fine on Windows 10, but ar, as, objdumb, and others hang for ~30 seconds and
>> exit without any output.
>>
>> Has anyone else seen this?
> 
> I've run a toolchain made on mingw after sumo, but before thud's release.  
> I'll
> see if I can find a VM and give it a try later today.

I'm running on Windows 7 for my testing (ya, I know old.. but it's what I got.)

Can you try adding the following to your conf/local.conf: GCCPIE_mingw32 = ""

I found that the SDK was not working properly here as well, but only binutils.
The above seems to fix the issue.  (You do have to rebuild your SDK.)

> --Mark
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] meta-mingw: unable to run executables on Windows

2018-11-14 Thread Mark Hatle
On 11/13/18 3:56 AM, Samuli Piippo wrote:
> Hi,
> 
> I've just upgraded poky and meta-mingw layers from sumo to thud and as a 
> result
> a lot of the executables in the toolchain no longer run correctly on Windows.

Which version of windows?

> I've built meta-toolchain for SDKMACHINE=x86_64-mingw32. From that, gcc/g++ 
> work
> fine on Windows 10, but ar, as, objdumb, and others hang for ~30 seconds and
> exit without any output.
> 
> Has anyone else seen this?

I've run a toolchain made on mingw after sumo, but before thud's release.  I'll
see if I can find a VM and give it a try later today.

--Mark
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Yocto Add Layer Issue

2018-11-06 Thread Mark Hatle
On 11/6/18 2:25 PM, nick wrote:
> Greetings All,
> I am wondering why this error is occuring:
> yocto-layer: command not found

Do you mean "bitbake-layers"?

--Mark

> as I already sourced into my build with oe-init script and therefore add
> wondering why it's not found. I checked the current developer manual and
> that should work cleanly.
> 
> Thanks,
> 
> Nick
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] How to avoid stripping libthread_db to enable gdb thread debugging?

2018-11-02 Thread Mark Hatle
On 11/2/18 1:03 PM, Hui Liu wrote:
> Hi,
> 
> I add gdb to my recipe, and found gdb can't enable thread debugging.
> 
> The warning from gdb is:
> 
> warning: Unable to find libthread_db matching inferior's thread library, 
> thread
> debugging will not be available
> 
> After some investigation, I found it is due to libthread_db is stripped:
> 
> nm lib/libthread_db.so.1
> nm: lib/libthread_db.so.1: no symbols
> 
> Could you let me know how to avoid this?

You need to install the dbg package.  This contains the symbols necessary for
debugging.  You can install it for either cross or local (on target) debugging.
In the cross-debugging case, you should not need it on the target.
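
A couple of ways to pull those in (untested sketch, local.conf syntax):

   # install the -dbg package for everything already in the image
   EXTRA_IMAGE_FEATURES += "dbg-pkgs"

   # or just the one package; the exact package name is a guess here
   IMAGE_INSTALL_append = " glibc-dbg"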

--Mark

> -- 
> Thanks,
> Hui
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [selinux] sumo compilation

2018-10-21 Thread Mark Hatle
On 10/21/18 12:18 AM, Stefano Cappa wrote:
> I reverted the commit as described and finally I'm able to build.
> 
> However, after using -minimal version and booting with selinux=1 and
> enforcing=0, when I run this command: "fixfiles -f -F relabel" I get this 
> error:
> 
> Cleaning out /tmp
> fixfiles: No suitable file systems found
> Cleaning up labels on /tmp
> secon: SELinux is not enabled
> cat: /initial_contexts/unlabeled: No such file or directory
> 
> I don't understand why it's happening.
> 
> I'm trying on the official iMX6 evaluation kit by NXP.

You need to make sure that the filesystem in use has extended attributes
enabled.  A lot of silicon vendor versions have this disabled, or use a
filesystem where it's not supported.

ext*fs, xfs, etc usually support it, with the right kernel configuration.
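
As a rough sketch, for an ext4 rootfs the kernel config fragment would need
something along these lines (exact option names vary by kernel version, so treat
this as an assumption to verify against your kernel):

   CONFIG_SECURITY=y
   CONFIG_SECURITY_SELINUX=y
   CONFIG_AUDIT=y
   CONFIG_EXT4_FS_SECURITY=y
   CONFIG_EXT4_FS_POSIX_ACL=y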

--Mark

> Il giorno gio 18 ott 2018 alle ore 23:13 Stefano Cappa
> mailto:stefano.cappa.k...@gmail.com>> ha 
> scritto:
> 
> Thank you
> 
> Il gio 18 ott 2018, 22:48 Sinan Kaya  > ha scritto:
> 
> On 10/18/2018 3:08 PM, Joe MacDonald wrote:
> >> Sorry, I thought it had been created.  I'm going to be traveling 
> the
> >> next few days to ELC-E, but I will try to get to it if someone else
> >> does not first.
> > Yeah, Mark and I are both going to be at ELC-E this week, we'll get 
> it
> > sorted out.
> 
> thanks.
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [selinux] sumo compilation

2018-10-18 Thread Mark Hatle
On 10/18/18 9:49 AM, Sinan Kaya wrote:
> CC'ing the selinux maintainers:
> 
> I was told that using the master branch and reverting the e2fs change
> (http://git.yoctoproject.org/cgit/cgit.cgi/meta-selinux/commit/?id=78eca8242ea5397c4dc0654d62244453b4260151)
>  
> works on sumo.
> 
> Stefano's suggestion unfortunately didn't work.
> 
> Maybe, it is time to create the sumo branch?

Sorry, I thought it had been created.  I'm going to be traveling the next few
days to ELC-E, but I will try to get to it if someone else does not first.

--Mark

> On 10/18/2018 9:48 AM, Steve Scott wrote:
>> I did not try it on sumo.
>>
>>   
>>
>> From: Stefano Cappa [mailto:stefano.cappa.k...@gmail.com]
>> Sent: Wednesday, October 17, 2018 11:15 PM
>> To: ssc...@san.rr.com
>> Cc: Sinan Kaya ; yocto@yoctoproject.org
>> Subject: Re: [yocto] [selinux] sumo compilation
>>
>>   
>>
>> Exactly the same issue since September.
>>
>>   
>>
>> Here is my discussion 
>> https://lists.yoctoproject.org/pipermail/yocto/2018-September/042711.html. 
>> With that trick did you find a solution also on sumo?
>>
>>   
>>
>> Il gio 18 ott 2018, 01:42 mailto:ssc...@san.rr.com> > ha 
>> scritto:
>>
>>
>> It was broken in rocko too. I added this to local.conf to workaround the 
>> problem:
>>
>> PREFERRED_VERSION_refpolicy-standard = "2.20170204"
>>
>> -steve
>>
>>  Sinan Kaya mailto:ok...@kernel.org> > wrote:
>>> Hi,
>>>
>>> We realized today that SELinux does not compile on sumo branch.
>>>
>>> Is it possible for someone to branch the last working version to a sumo 
>>> branch?
>>>
>>> http://git.yoctoproject.org/cgit/cgit.cgi/meta-selinux/refs/
>>>
>>> NOTE: Running task 352 of 2707
>>> (virtual:native:/sources/poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.8.bb
>>>   :do_patch)
>>> NOTE: recipe e2fsprogs-native-1.43.8-r0: task do_patch: Started
>>> NOTE: Running task 1413 of 2707
>>> (/sources/poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.8.bb:do_patch)
>>> NOTE: recipe e2fsprogs-1.43.8-r0: task do_patch: Started
>>> ERROR: e2fsprogs-native-1.43.8-r0 do_patch: Command Error: 'quilt --quiltrc
>>> Applying patch misc_create_inode.c-label_rootfs.patch
>>> patching file misc/create_inode.c
>>> Hunk #1 FAILED at 979.
>>> Hunk #2 FAILED at 987.
>>>
>>> Sinan
>>> -- 
>>> ___
>>> yocto mailing list
>>> yocto@yoctoproject.org 
>>> https://lists.yoctoproject.org/listinfo/yocto
>>
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] meta-oracle-java still maintained?

2018-10-17 Thread Mark Hatle
On 10/17/18 9:33 AM, Joshua Watt wrote:
> I was wondering if meta-oracle-java is still maintained? It looks like
> there haven't been any release branches created since pyro.
> 
> We are looking at using it to provide virtual/java-native for some
> build host tools we have that were written in Java, but I want to make
> sure that it isn't going to die off on us if we do.

I believe this was abandoned when Oracle changed their licensing.

The 'meta-java' layer however is still being updated and provides an alternative
to the Oracle Java.

--Mark

> Thanks,
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [prelink-cross] [PATCH] x86_64: allow prelinking of PIE executables with COPY relocs

2018-10-12 Thread Mark Hatle
On 10/10/18 3:50 PM, Sergei Trofimovich wrote:
> COPY relocs are fine to have in PIE executables (as opposed to
> shared libraries).
> 
> By enabling prelink on PIEs we achieve a few goals:
> - prelink more PIE files on system: nicer for uniformity,
> - avoid spurious warnings about shared libraries with COPY relocs
> 
> It's usefult to prelink PIEs when kernel ASLR is disabled:
> kernel.randomize_va_space=0 + PIE-randomization patches.
> 
> I have gcc built as `--enable-default-pie` (generates PIE
> executables by default).
> 
> Testsute results before the change:
>   PASS:  14
>   FAIL:  31
> 
> After the change:
>   PASS:  32
>   FAIL:  13

After applying this patch, I think it uncovered another issue.

See:

http://git.yoctoproject.org/cgit/cgit.cgi/prelink-cross/log/?h=cross_prelink_staging

specifically:

http://git.yoctoproject.org/cgit/cgit.cgi/prelink-cross/commit/?h=cross_prelink_staging=b10e14218646d8b74773b82b0f8b395bce698fa2


Running the test suite, I was getting 4 failures that each manifested 
themselves as:

FAIL: layout1.sh  ../src/prelink: testsuite/layout1: section file offsets not
monotonically increasing
FAIL: cxx2.sh ../src/prelink: testsuite/cxx2: section file offsets not
monotonically increasing
FAIL: cxx3.sh ../src/prelink: testsuite/cxx3: section file offsets not
monotonically increasing
FAIL: quick1.sh   ../src/prelink: testsuite/quick1.tree/usr/bin/bin3: section
file offsets not monotonically increasing

Before this patch they would report:

   COPY relocations don't point into .bss or .sbss section

So I suspect that enabling the PIE executable prelinking has either triggered a
corner case or uncovered a bug in the existing function.

What I am seeing happen, if I dump the sections is:

   section 15 .fini file offset range 12f4 and 12fd
   section 16 .gnu.conflict file offset range 1300 and 2080
   section 17 .rodata file offset range 2000 and 20a3
   section 18 .eh_frame_hdr file offset range 20a4 and 2118

When the .gnu.conflict section is added and processed, it ends up inserting
immediately after .fini.   It then ends up taking more space than the gap allows
for (by 0x80 bytes in this case.)

I looked at the code for a couple of hours, and I'm not making any real progress
on it.

Do you have time to investigate this?

My little hack is enough to identify when this overwrite happens and abort..
but I suspect it's probably fixable instead.  (I don't intend to keep the hack
commit when transferring this over)

--Mark

> Signed-off-by: Sergei Trofimovich 
> ---
>  src/arch-x86_64.c |  4 ++--
>  src/dso.c | 19 +++
>  src/prelink.h |  1 +
>  3 files changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/src/arch-x86_64.c b/src/arch-x86_64.c
> index 5c95f47..2f6c551 100644
> --- a/src/arch-x86_64.c
> +++ b/src/arch-x86_64.c
> @@ -179,7 +179,7 @@ x86_64_prelink_rela (struct prelink_info *info, GElf_Rela 
> *rela,
>   value + rela->r_addend - info->resolvetls->offset);
>break;
>  case R_X86_64_COPY:
> -  if (dso->ehdr.e_type == ET_EXEC)
> +  if (dso->ehdr.e_type == ET_EXEC || dso_is_pie(dso))
>   /* COPY relocs are handled specially in generic code.  */
>   return 0;
>error (0, 0, "%s: R_X86_64_COPY reloc in shared library?", 
> dso->filename);
> @@ -503,7 +503,7 @@ x86_64_undo_prelink_rela (DSO *dso, GElf_Rela *rela, 
> GElf_Addr relaaddr)
>write_le32 (dso, rela->r_offset, 0);
>break;
>  case R_X86_64_COPY:
> -  if (dso->ehdr.e_type == ET_EXEC)
> +  if (dso->ehdr.e_type == ET_EXEC || dso_is_pie(dso))
>   /* COPY relocs are handled specially in generic code.  */
>   return 0;
>error (0, 0, "%s: R_X86_64_COPY reloc in shared library?", 
> dso->filename);
> diff --git a/src/dso.c b/src/dso.c
> index a5fcec5..9fcfc3d 100644
> --- a/src/dso.c
> +++ b/src/dso.c
> @@ -1106,6 +1106,25 @@ dso_is_rdwr (DSO *dso)
>return dso->elfro != NULL;
>  }
>  
> +/* Return true is DSO is position independent executable.
> +
> +   There is no simple way to distinct between shared library
> +   and PIE executable.  Use presence of interpreter as a heuristic.  */
> +
> +int dso_is_pie(DSO *dso)
> +{
> +  int i;
> +
> +  if (dso->ehdr.e_type != ET_DYN)
> +return 0;
> +
> +  for (i = 0; i < dso->ehdr.e_phnum; ++i)
> +if (dso->phdr[i].p_type == PT_INTERP)
> +  return 1;
> +
> +  return 0;
> +}
> +
>  GElf_Addr
>  adjust_old_to_new (DSO *dso, GElf_Addr addr)
>  {
> diff --git a/src/prelink.h b/src/prelink.h
> index 93dbf7a..d8a00c6 100644
> --- a/src/prelink.h
> +++ b/src/prelink.h
> @@ -298,6 +298,7 @@ int reopen_dso (DSO *dso, struct section_move *move, 
> const char *);
>  int adjust_symbol_p (DSO *dso, GElf_Sym *sym);
>  int check_dso (DSO *dso);
>  int dso_is_rdwr (DSO *dso);
> +int dso_is_pie(DSO *dso);
>  void read_dynamic (DSO *dso);
>  int set_dynamic (DSO *dso, GElf_Word tag, GElf_Addr value, int fatal);
>  int 

Re: [yocto] [prelink-cross][PATCH] Support copy relocations in .data.rel.ro

2018-10-12 Thread Mark Hatle
On 10/12/18 9:06 AM, Kyle Russell wrote:
> Do you want me to just resent my last two patches?  I don't mind if that would
> be easier, and I'll remember to add the Signed-off-by. :)

I was able to pull them from the mailing list archives and have them applied to
my local tree.  I have a few other patches and then I'll be pushing to the
staging tree.

--Mark

> On Fri, Oct 12, 2018 at 10:00 AM Mark Hatle  <mailto:mark.ha...@windriver.com>> wrote:
> 
> On 10/4/18 9:12 AM, Kyle Russell wrote:
> > Hey Mark,
> >
> > Do you think this approach is reasonable?  If so, I have another patch 
> I'd
> like
> > to propose that would enable us to better catch error scenarios (like 
> the last
> > two patches address) that we might encounter during do_image_prelink.  
> We just
> > happened to detect these last two issues even though image_prelink 
> didn't fail
> > the build, so I'd like to provide an option to enable a little more 
> assertive
> > error path, if desired.
> 
> I'm getting caught back up on my prelink 'TODO' set.  The approach seems
> reasonable to me.  However, I appear to have lost the reference to the 
> original
> patch email.
> 
> I'm going to try to reapply from this email (or the list archive), but I 
> may
> need you to resend it.  I'll let you [and others know] once I get these 
> things
> merged.
> 
> Thanks!
> --Mark
> 
> > Thanks,
> > Kyle
> >
> > On Fri, Sep 28, 2018 at 10:57 AM Kyle Russell  <mailto:bkyleruss...@gmail.com>
> > <mailto:bkyleruss...@gmail.com <mailto:bkyleruss...@gmail.com>>> wrote:
> >
> >     binutils-2.28 (17026142ef35b62ac88bfe517b4160614902cb28) adds 
> support
> >     for copying read-only dynamic symbols into .data.rel.ro
> <http://data.rel.ro> <http://data.rel.ro>
> >     instead of .bss
> >     since .bss is technically writable.  This causes prelink to error 
> out on
> >     any binary containing COPY relocations in .data.rel.ro
> <http://data.rel.ro> <http://data.rel.ro>.
> >
> >     Read-only variables defined in shared libraries should be copied 
> directly
> >     into the space allocated for them in .data.rel.ro 
> <http://data.rel.ro>
> <http://data.rel.ro> by
> >     the linker.
> >
> >     To achieve this, we determine whether either of the two sections
> >     containing copy relocations is .data.rel.ro <http://data.rel.ro>
> <http://data.rel.ro>.  If so, we
> >     relocate the
> >     symbol memory directly into the existing section instead of 
> constructing
> >     a new .(s)dynbss section once prelink_build_conflicts() returns.
> >
> >     Fixes cxx1.sh, cxx2.sh, and cxx3.sh on Fedora 28 (which uses
> >     binutils-2.29).
> >     ---
> >      src/conflict.c | 51 
> +-
> >      src/undo.c     |  9 +
> >      2 files changed, 47 insertions(+), 13 deletions(-)
> >
> >     diff --git a/src/conflict.c b/src/conflict.c
> >     index 9ae2ddb..5613ace 100644
> >     --- a/src/conflict.c
> >     +++ b/src/conflict.c
> >     @@ -450,7 +450,7 @@ get_relocated_mem (struct prelink_info *info, 
> DSO
> *dso,
> >     GElf_Addr addr,
> >      int
> >      prelink_build_conflicts (struct prelink_info *info)
> >      {
> >     -  int i, ndeps = info->ent->ndepends + 1;
> >     +  int i, reset_dynbss = 0, reset_sdynbss = 0, ndeps =
> info->ent->ndepends + 1;
> >        struct prelink_entry *ent;
> >        int ret = 0;
> >        DSO *dso;
> >     @@ -675,6 +675,11 @@ prelink_build_conflicts (struct prelink_info 
> *info)
> >                          dso->filename);
> >                   goto error_out;
> >                 }
> >     +
> >     +         name = strptr (dso, dso->ehdr.e_shstrndx,
> dso->shdr[bss1].sh_name);
> >     +         if (strcmp(name, ".data.rel.ro <http://data.rel.ro>
> <http://data.rel.ro>") == 0)
> >     +           reset_sdynbss = 1;
> >     +
> >               firstbss2 = i;
> >               info->sdynbss_size = cr.rela[i - 1].r_offset -
> cr.rela[0].r_offset;
> >               info->sdynbss_size += cr.rela[i - 1].r_addend

Re: [yocto] [prelink-cross][PATCH] Support copy relocations in .data.rel.ro

2018-10-12 Thread Mark Hatle
On 10/4/18 9:12 AM, Kyle Russell wrote:
> Hey Mark,
> 
> Do you think this approach is reasonable?  If so, I have another patch I'd 
> like
> to propose that would enable us to better catch error scenarios (like the last
> two patches address) that we might encounter during do_image_prelink.  We just
> happened to detect these last two issues even though image_prelink didn't fail
> the build, so I'd like to provide an option to enable a little more assertive
> error path, if desired.

I'm getting caught back up on my prelink 'TODO' set.  The approach seems
reasonable to me.  However, I appear to have lost the reference to the original
patch email.

I'm going to try to reapply from this email (or the list archive), but I may
need you to resend it.  I'll let you [and others] know once I get these things
merged.

Thanks!
--Mark

> Thanks,
> Kyle
> 
> On Fri, Sep 28, 2018 at 10:57 AM Kyle Russell  > wrote:
> 
> binutils-2.28 (17026142ef35b62ac88bfe517b4160614902cb28) adds support
> for copying read-only dynamic symbols into .data.rel.ro 
> 
> instead of .bss
> since .bss is technically writable.  This causes prelink to error out on
> any binary containing COPY relocations in .data.rel.ro 
> .
> 
> Read-only variables defined in shared libraries should be copied directly
> into the space allocated for them in .data.rel.ro  by
> the linker.
> 
> To achieve this, we determine whether either of the two sections
> containing copy relocations is .data.rel.ro .  If so, 
> we
> relocate the
> symbol memory directly into the existing section instead of constructing
> a new .(s)dynbss section once prelink_build_conflicts() returns.
> 
> Fixes cxx1.sh, cxx2.sh, and cxx3.sh on Fedora 28 (which uses
> binutils-2.29).
> ---
>  src/conflict.c | 51 +-
>  src/undo.c     |  9 +
>  2 files changed, 47 insertions(+), 13 deletions(-)
> 
> diff --git a/src/conflict.c b/src/conflict.c
> index 9ae2ddb..5613ace 100644
> --- a/src/conflict.c
> +++ b/src/conflict.c
> @@ -450,7 +450,7 @@ get_relocated_mem (struct prelink_info *info, DSO 
> *dso,
> GElf_Addr addr,
>  int
>  prelink_build_conflicts (struct prelink_info *info)
>  {
> -  int i, ndeps = info->ent->ndepends + 1;
> +  int i, reset_dynbss = 0, reset_sdynbss = 0, ndeps = 
> info->ent->ndepends + 1;
>    struct prelink_entry *ent;
>    int ret = 0;
>    DSO *dso;
> @@ -675,6 +675,11 @@ prelink_build_conflicts (struct prelink_info *info)
>                      dso->filename);
>               goto error_out;
>             }
> +
> +         name = strptr (dso, dso->ehdr.e_shstrndx, 
> dso->shdr[bss1].sh_name);
> +         if (strcmp(name, ".data.rel.ro ") == 0)
> +           reset_sdynbss = 1;
> +
>           firstbss2 = i;
>           info->sdynbss_size = cr.rela[i - 1].r_offset - 
> cr.rela[0].r_offset;
>           info->sdynbss_size += cr.rela[i - 1].r_addend;
> @@ -702,6 +707,10 @@ prelink_build_conflicts (struct prelink_info *info)
>             }
>         }
> 
> +      name = strptr (dso, dso->ehdr.e_shstrndx, dso->shdr[bss2].sh_name);
> +      if (strcmp(name, ".data.rel.ro ") == 0)
> +        reset_dynbss = 1;
> +
>        info->dynbss_size = cr.rela[cr.count - 1].r_offset
>                           - cr.rela[firstbss2].r_offset;
>        info->dynbss_size += cr.rela[cr.count - 1].r_addend;
> @@ -719,9 +728,9 @@ prelink_build_conflicts (struct prelink_info *info)
>           && strcmp (name = strptr (dso, dso->ehdr.e_shstrndx,
>                                     dso->shdr[bss1].sh_name),
>                      ".dynbss") != 0
> -         && strcmp (name, ".sdynbss") != 0)
> +         && strcmp (name, ".sdynbss") != 0 && strcmp (name, ".data.rel.ro
> ") != 0)
>         {
> -         error (0, 0, "%s: COPY relocations don't point into .bss or 
> .sbss
> section",
> +         error (0, 0, "%s: COPY relocations don't point into .bss, .sbss,
> or .data.rel.ro  sections",
>                  dso->filename);
>           goto error_out;
>         }
> @@ -730,9 +739,9 @@ prelink_build_conflicts (struct prelink_info *info)
>           && strcmp (name = strptr (dso, dso->ehdr.e_shstrndx,
>                                     dso->shdr[bss2].sh_name),
>                      ".dynbss") != 0
> -         && strcmp (name, ".sdynbss") != 0)
> +         && strcmp (name, ".sdynbss") != 0 && strcmp (name, ".data.rel.ro
> ") != 0)
>         {
> -         error (0, 0, "%s: COPY relocations don't point into .bss or 
> 

Re: [yocto] Layerindex - actions required for deleted repos?

2018-10-05 Thread Mark Hatle
On 10/4/18 2:33 AM, Andreas Müller wrote:
> Hi,
> 
> I am cleaning my github and want to remove some repos. One of them is
> meta-gumstix-community - it is not maintained any more.
> This layer is listed in layer-index. Will it be removed automatically
> or are there steps I have to take?

Just let one of the admins know and we can delete it.  (I'm not sure who all
are admins, but I am as well as Paul E of course.)

--Mark

> Thanks
> 
> Andreas
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [prelink-cross][PATCH] rtld: get machine from undef_map for protected symbols

2018-09-28 Thread Mark Hatle
On 9/28/18 9:55 AM, Kyle Russell wrote:
> Avoids rtld segfault when _dl_lookup_symbol_x is called with NULL
> for skip_map on a protected symbol relocation.
> 
> Global protected symbols may not actually require a copy relocaton,
> in which case skip_map is undefined, so use the undef_map to determine
> the symbol arch.

Thank you.  Any objections to me adding a 'Signed-off-by: ' line with your 
name?

I'm going to start enforcing that submissions need a signed-off-by line, per the
terms of the "Developer's Certificate of Origin":

https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin

(While it is not yet a requirement, I'd like to start doing it now.)

Thanks!
--Mark

> ---
>  src/rtld/dl-lookupX.h   |  6 +++---
>  testsuite/Makefile.am   |  2 +-
>  testsuite/reloc12.c | 19 +++
>  testsuite/reloc12.h | 11 +++
>  testsuite/reloc12.sh| 20 
>  testsuite/reloc12lib1.c | 11 +++
>  testsuite/reloc12lib2.c | 16 
>  7 files changed, 81 insertions(+), 4 deletions(-)
>  create mode 100644 testsuite/reloc12.c
>  create mode 100644 testsuite/reloc12.h
>  create mode 100755 testsuite/reloc12.sh
>  create mode 100644 testsuite/reloc12lib1.c
>  create mode 100644 testsuite/reloc12lib2.c
> 
> diff --git a/src/rtld/dl-lookupX.h b/src/rtld/dl-lookupX.h
> index 425bb4b..250c509 100644
> --- a/src/rtld/dl-lookupX.h
> +++ b/src/rtld/dl-lookupX.h
> @@ -679,10 +679,10 @@ _dl_lookup_symbol_x (const char *undef_name, struct 
> link_map *undef_map,
>   if (do_lookup_x (undef_name, new_hash, _hash, *ref,
>_value, *scope, i, version, flags,
>skip_map,
> -  
> (ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA(skip_map->machine)
> +  
> (ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA(undef_map->machine)
> && ELFW(ST_TYPE) ((*ref)->st_info) == STT_OBJECT
> -   && type_class == 
> ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA(skip_map->machine))
> -  ? 
> ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA(skip_map->machine)
> +   && type_class == 
> ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA(undef_map->machine))
> +  ? 
> ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA(undef_map->machine)
>: ELF_RTYPE_CLASS_PLT, NULL) != 0)
> break;
>  
> diff --git a/testsuite/Makefile.am b/testsuite/Makefile.am
> index 030f65b..21de6a9 100644
> --- a/testsuite/Makefile.am
> +++ b/testsuite/Makefile.am
> @@ -5,7 +5,7 @@ AM_CFLAGS = -Wall
>  
>  TESTS = movelibs.sh \
>   reloc1.sh reloc2.sh reloc3.sh reloc4.sh reloc5.sh reloc6.sh \
> - reloc7.sh reloc8.sh reloc9.sh reloc10.sh reloc11.sh \
> + reloc7.sh reloc8.sh reloc9.sh reloc10.sh reloc11.sh reloc12.sh \
>   shuffle1.sh shuffle2.sh shuffle3.sh shuffle4.sh shuffle5.sh \
>   shuffle6.sh shuffle7.sh shuffle8.sh shuffle9.sh undo1.sh \
>   layout1.sh layout2.sh unprel1.sh \
> diff --git a/testsuite/reloc12.c b/testsuite/reloc12.c
> new file mode 100644
> index 000..cfa
> --- /dev/null
> +++ b/testsuite/reloc12.c
> @@ -0,0 +1,19 @@
> +#include "reloc12.h"
> +#include 
> +
> +int main()
> +{
> +  A* ptr = find('b');
> +  if(b(ptr) != 0)
> +abort();
> +
> +  ptr = find('a');
> +  if(b(ptr) != 1)
> +abort();
> +
> +  ptr = find('r');
> +  if(b(ptr) != 2)
> +abort();
> +
> +  exit(0);
> +}
> diff --git a/testsuite/reloc12.h b/testsuite/reloc12.h
> new file mode 100644
> index 000..8e09405
> --- /dev/null
> +++ b/testsuite/reloc12.h
> @@ -0,0 +1,11 @@
> +typedef struct
> +  {
> +char a;
> +int b;
> +  } A;
> +
> +extern A foo[] __attribute ((visibility ("protected")));
> +
> +A* find(char a);
> +char a(const A*);
> +int b(const A*);
> diff --git a/testsuite/reloc12.sh b/testsuite/reloc12.sh
> new file mode 100755
> index 000..a8a43c7
> --- /dev/null
> +++ b/testsuite/reloc12.sh
> @@ -0,0 +1,20 @@
> +#!/bin/bash
> +. `dirname $0`/functions.sh
> +rm -f reloc12 reloc12lib*.so reloc12.log
> +rm -f prelink.cache
> +$RUN_HOST $CC -shared -O2 -fpic -o reloc12lib1.so $srcdir/reloc12lib1.c
> +$RUN_HOST $CC -shared -O2 -fpic -o reloc12lib2.so $srcdir/reloc12lib2.c
> +BINS="reloc12"
> +LIBS="reloc12lib1.so reloc12lib2.so"
> +$RUN_HOST $CCLINK -o reloc12 $srcdir/reloc12.c -Wl,--rpath-link,. ${LIBS}
> +savelibs
> +echo $PRELINK ${PRELINK_OPTS--vm} ./reloc12 > reloc12.log
> +$RUN_HOST $PRELINK ${PRELINK_OPTS--vm} ./reloc12 >> reloc12.log 2>&1 || exit 
> 1
> +grep -q ^`echo $PRELINK | sed 's/ .*$/: /'` reloc12.log && exit 2
> +if [ "x$CROSS" = "x" ]; then
> + $RUN LD_LIBRARY_PATH=. ./reloc12 || exit 3
> +fi
> +$RUN_HOST $READELF -a ./reloc12 >> reloc12.log 2>&1 || exit 4
> +# So that it is not prelinked again
> +chmod -x ./reloc12
> +comparelibs >> reloc12.log 2>&1 || exit 5
> diff --git 

Re: [yocto] Intel machine with 64 Bit kernel and 32 Bit user space

2018-07-26 Thread Mark Hatle
On 7/26/18 10:19 AM, Alexander Kanavin wrote:
> 2018-07-26 14:56 GMT+02:00 Ayoub Zaki :
>> Is it possible to define a MACHINE configuration with a 64 Bit kernel and 32
>> Bit user space ?
>>
>> The user space should not be using a  x32 ABI.
> 
> I think (but I am not sure), that you can do it with multilib. Define
> a configuration like this:
> https://github.com/openembedded/openembedded-core/blob/master/meta-skeleton/conf/multilib-example.conf
> 
> Then build lib32-core-image-minimal. That image should include only 32
> bit user space, but the kernel will be 64 bit.

Yes, this is the typical approach.  Enable multilibs, and then build the image
you want with the appropriate multilib prefix.

This works in any multilib configurations, 64-bit, 32-bit, x32, etc..
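
A rough local.conf sketch for a 64-bit x86 machine with a 32-bit userspace (the
machine and tune names are only examples -- adjust for your BSP):

   MACHINE = "qemux86-64"
   require conf/multilib.conf
   MULTILIBS = "multilib:lib32"
   DEFAULTTUNE_virtclass-multilib-lib32 = "x86"

   # then: bitbake lib32-core-image-minimal
   # (the kernel stays 64-bit; the lib32- userspace is 32-bit)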

--Mark

> Alex
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Reg. rpm changelog

2018-07-17 Thread Mark Hatle
On 7/17/18 1:39 PM, Vikram Chhibber wrote:
> Hi All,
> 
> I use the rpm created from my recipe and I need to know if there is a way to 
> add
> changelog to this rpm. I should be able to use "rpm -q --changelog" option to
> display this changelog.
> 
> Please let me know if this is possible.

No.  Others have talked about it, but changelog information is stored in the
respective layers (.bb, .bbappend, .bbclass, etc) that make up the recipes and
would have to be distilled..

It should be possible, but nobody has done this that I know of.

(Rough way to do it.. find all of the files that went into a recipe.. then run a
git scm command looking for changes to those files, capturing them in an RPM
changelog-like format.)
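
Completely untested, but the git half of that could be as simple as something
like the following, run in the layer (the recipe path is just an example):

   # emit an RPM-changelog-style entry per commit touching the recipe's files
   git log --date=format:'%a %b %d %Y' \
       --pretty=format:'* %ad %an <%ae>%n- %s%n' \
       -- recipes-example/myrecipe/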

--Mark

> Thanks
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Issue yocto strip the kernel module signature during packaging

2018-06-14 Thread Mark Hatle
On 6/13/18 12:10 PM, Mathieu Alexandre-Tétreault wrote:
> Hello,
> 
> I am working to activate the kernel module signature for in-tree and 
> out-of-tree packages.
> While I was debugging the cause of the kernel complaining about unsigned 
> modules.
> I noticed that yocto was stripping the signature information from the binary, 
> therefore I've
> set INHIBIT_PACKAGE_STRIP = "1" for all of my out-of-tree module. Then I had 
> to  set the flag
> for the kernel recipe to fix the issue for my in-tree kernel module.

This sounds like a bug; it should not be stripping that part of the data..  only
debug information.

> My question is:
>- is there any issue with turning off the stripping for the linux recipe? 
> I'd like to be sure I haven't overlooked anything about this.

The kernel and modules will become much larger.

>- Would there be a better way to achieve this?

The code runs from meta/classes/package.bbclass, function split_and_strip_files,
near the end you will see:

if (d.getVar('INHIBIT_PACKAGE_STRIP') != '1'):
strip = d.getVar("STRIP")
sfiles = []
for file in elffiles:
elf_file = int(elffiles[file])
#bb.note("Strip %s" % file)
sfiles.append((file, elf_file, strip))
for f in kernmods:
sfiles.append((f, 16, strip))

oe.utils.multiprocess_exec(sfiles, oe.package.runstrip)


The last line executes oe.package.runstrip, which is defined in
meta/lib/oe/package.py.  It uses the arguments defined by:

# kernel module
if elftype & 16:
stripcmd.extend(["--strip-debug", "--remove-section=.comment",
"--remove-section=.note", "--preserve-dates"])
# .so and shared library
elif ".so" in file and elftype & 8:
stripcmd.extend(["--remove-section=.comment", "--remove-section=.note",
"--strip-unneeded"])
# shared or executable:
elif elftype & 8 or elftype & 4:
stripcmd.extend(["--remove-section=.comment", "--remove-section=.note"])

First verify that it's hitting the kernel module case, as expected -- assuming it
is -- you'll need to find the section that holds the information you care about..
and figure out how to preserve it with the right arguments.

--Mark

> Cheers,
> 
> Mathieu
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] LIC_FILES_CHKSUM: spaces in file names

2018-05-15 Thread Mark Hatle
On 5/15/18 7:25 AM, Damien LEFEVRE wrote:
> Hi,
> 
> I have a base recipe generated with devtool.
> 
> The package I build has several license files which contain space characters 
> in
> the file names.
> 
> LIC_FILES_CHKSUM = "file://COPYRIGHT.md;md5=b229ca0c79785e9e86311477e7bdd9ea \
>                                          file://LICENSES/MIT
> License.txt;md5=3ba96e7848c3cedc5df2d00094a0d0f3 \
>                                          file://LICENSES/FreeImage Public
> License.txt;md5=ffcd65468a2d2b3e3e43fbaf63ceedf7 \
>                                          file://LICENSES/Boost Software
> License.txt;md5=2c7a3fa82e66676005cd4ee2608fd7d2 \
>                                      file://LICENSES/zlib-libpng
> License.txt;md5=09b00738058950409d6955872d715416 \
>                                      file://LICENSES/OpenSIFT
> License.txt;md5=7a69fc0ac94076df51f7db9b0c02fe7c \
>                                      file://LICENSES/ISSL
> License.txt;md5=1ba0d78ed416760e4a8ef3dc121e69c8"
> 
> Bitbake fails and seem to break at the first space character
> LIC_FILES_CHKSUM contains an invalid URL:
> License.txt;md5=3ba96e7848c3cedc5df2d00094a0d0f3
> 
> How can I deal with this? Any other options than renaming the files? 
> 
> I tried surrounding with quotes, "\ "using, nothing helps.

I've not tried it, but since these are URIs, did you try %20?
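
i.e. something along these lines (untested; md5 values taken from your message):

   LIC_FILES_CHKSUM = "file://COPYRIGHT.md;md5=b229ca0c79785e9e86311477e7bdd9ea \
                       file://LICENSES/MIT%20License.txt;md5=3ba96e7848c3cedc5df2d00094a0d0f3"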

--Mark

> Thanks,
> -Damien
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-selinux][PATCH] libselinux: python-importlib is now part of python*-core

2018-05-11 Thread Mark Hatle
On 5/11/18 12:28 PM, Rudolf J Streif wrote:
> Echoing this: may I ask what the current maintenance status of
> meta-selinux is. It appears that no updates have been made for more than
> 9 months. This is of course not to blame anybody but out of concern that
> the layer is falling behind even more and to find a solution.

The answer is the current set of people are horribly overworked and busy, so
day-to-day updates have been 'sparse'.

Usually we update meta-selinux about the time of a release, and thus are due.

The last update of meta-selinux was about the time of the Rocko release, so what
is in master is definitely current as of Rocko.  (I did the last set of updates
-- so I know it did work as of Rocko release.)  The master needs to be branched
as Rocko... master needs to be updated to be Sumo compatible.

My assumption is that once Sumo is formally released (any minute now), we'll
collect all of the patches and get them into place and spend some time
cleaning them up...

It looks like Joe is already working through this effort.

(Only speaking for myself,) I don't have time to do day-to-day maintenance of
meta-selinux any longer -- nor do I have the in-depth knowledge to understand
when not to do something.  I filled this role purely out of necessity since
nobody else was doing it.

So with that said, if anyone wants to help, we're all open for help here...  I
doubt there would be any objection to adding or replacing existing maintainers
and/or giving more people push access.

> In addition to Armin's patches there are two patches submitted by Kai
> Kang at Windriver:
> 
> * https://lists.yoctoproject.org/pipermail/yocto/2018-February/039917.html
> * https://lists.yoctoproject.org/pipermail/yocto/2018-February/039918.html
> 
> Curiously enough, the second patch has been applied to master but not
> the first one.
> 
> 
> There is also an issue with building SELinux with systemd. The layer
> enables auditing:
> 
> meta-selinux/classes/enable-audit.bbclass:PACKAGECONFIG[audit] =
> "--enable-audit,--disable-audit,audit,"
> meta-selinux/recipes-core/systemd/systemd_%.bbappend:inherit
> ${@bb.utils.contains('DISTRO_FEATURES', 'selinux', 'enable-audit', '', d)}
> 
> Apparently the --enable-audit switch is passed to meson when running the
> configure task, which meson does not appreciate. I am not that familiar
> with the audit feature nor with meson, so I currently have no idea on
> how to fix this the right way.

The audit feature is useful outside of selinux, so my understanding was that audit
itself was moving into core during the sumo time frame (if it hadn't already
been moved.)

I don't know anything about meson, so I can't speak to that...

> 
> Further, refpolicy_git does not build anymore as the YP specific patches
> do not apply anymore since upstream changed.

The refpolicy is and has always been crap.  I've been talking to a few people on
IRC about working to replace the refpolicy with a policy that can be generated
dynamically based on the contents of the recipes.  I don't know if that is
really going to happen, but I hate the way it's currently implemented.

One of the key issues about the refpolicy is that you need to be an expert at
this (which I never claimed to be) in order to make any reasonable decision --
add to that any specific policy needs to understand overall system design, and I
wouldn't trust any of the refpolicy items as they stand in meta-selinux.

--Mark

> Thanks,
> Rudi
> 
> 
> 
> On 05/07/2018 10:20 AM, akuster808 wrote:
>>
>> On 04/14/2018 07:08 PM, Armin Kuster wrote:
>>> Missing or unbuildable dependency chain was: ['meta-world-pkgdata', 
>>> 'restorecond', 'libselinux', 'python-importlib']
>>>
>>> Signed-off-by: Armin Kuster 
>> ping
>>> ---
>>>  recipes-security/selinux/libselinux.inc | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/recipes-security/selinux/libselinux.inc 
>>> b/recipes-security/selinux/libselinux.inc
>>> index bd5ce8d..51d0875 100644
>>> --- a/recipes-security/selinux/libselinux.inc
>>> +++ b/recipes-security/selinux/libselinux.inc
>>> @@ -8,7 +8,7 @@ LICENSE = "PD"
>>>  inherit lib_package pythonnative
>>>  
>>>  DEPENDS += "libsepol python libpcre swig-native"
>>> -RDEPENDS_${PN}-python += "python-importlib"
>>> +RDEPENDS_${PN}-python += "python-core"
>>>  
>>>  PACKAGES += "${PN}-python"
>>>  FILES_${PN}-python = 
>>> "${libdir}/python${PYTHON_BASEVERSION}/site-packages/*"
> 
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [layerindex-web][PATCH 2/2] Add CSV export for layer recipes

2018-05-07 Thread Mark Hatle
On 5/6/18 10:35 PM, Paul Eggleton wrote:
> Add the ability to export the recipe listing for a layer to a CSV file
> for importing into external tools. At the moment we include name,
> version and license, but there is a parameter that lets you specify the
> fields to include in the URL if desired.
> 
> Implements [YOCTO #12722].

With the bitbake layer library I'm working on, it should be possible to do this
directly out of bitbake.

This wouldn't limit it to just recipes either, but any data.  Loading (querying)
the json data from the layer index is easy, and the library would know how to
filter to the right branch/layer/etc.

Something to consider as we go forward with this and other work.

--Mark

> Signed-off-by: Paul Eggleton 
> ---
>  layerindex/urls_branch.py|  5 -
>  layerindex/views.py  | 22 ++
>  templates/layerindex/detail.html | 14 +++---
>  3 files changed, 33 insertions(+), 8 deletions(-)
> 
> diff --git a/layerindex/urls_branch.py b/layerindex/urls_branch.py
> index 0e41435e..2809147b 100644
> --- a/layerindex/urls_branch.py
> +++ b/layerindex/urls_branch.py
> @@ -7,7 +7,7 @@
>  from django.conf.urls import *
>  from django.views.defaults import page_not_found
>  from django.core.urlresolvers import reverse_lazy
> -from layerindex.views import LayerListView, RecipeSearchView, 
> MachineSearchView, DistroSearchView, ClassSearchView, LayerDetailView, 
> edit_layer_view, delete_layer_view, edit_layernote_view, 
> delete_layernote_view, RedirectParamsView, DuplicatesView, 
> LayerUpdateDetailView
> +from layerindex.views import LayerListView, RecipeSearchView, 
> MachineSearchView, DistroSearchView, ClassSearchView, LayerDetailView, 
> edit_layer_view, delete_layer_view, edit_layernote_view, 
> delete_layernote_view, RedirectParamsView, DuplicatesView, 
> LayerUpdateDetailView, layer_export_recipes_csv_view
>  
>  urlpatterns = [
>  url(r'^$', 
> @@ -20,6 +20,9 @@ urlpatterns = [
>  LayerDetailView.as_view(
>  template_name='layerindex/detail.html'),
>  name='layer_item'),
> +url(r'^layer/(?P[-\w]+)/recipes/csv/$',
> +layer_export_recipes_csv_view,
> +name='layer_export_recipes_csv'),
>  url(r'^recipes/$',
>  RecipeSearchView.as_view(
>  template_name='layerindex/recipes.html'),
> diff --git a/layerindex/views.py b/layerindex/views.py
> index dbf08497..06d35261 100644
> --- a/layerindex/views.py
> +++ b/layerindex/views.py
> @@ -1022,3 +1022,25 @@ class StatsView(TemplateView):
>  machine_count=Count('layerbranch__machine', distinct=True),
>  distro_count=Count('layerbranch__distro', distinct=True))
>  return context
> +
> +
> +def layer_export_recipes_csv_view(request, branch, slug):
> +import csv
> +layer = get_object_or_404(LayerItem, name=slug)
> +layerbranch = layer.get_layerbranch(branch)
> +
> +response = HttpResponse(content_type='text/csv')
> +response['Content-Disposition'] = 'attachment; 
> filename="recipes_%s_%s.csv"' % (layer.name, layerbranch.branch.name)
> +
> +fieldlist = request.GET.get('fields', 'pn,pv,license').split(',')
> +recipe_fields = [f.name for f in Recipe._meta.get_fields() if not 
> (f.auto_created and f.is_relation)]
> +for field in fieldlist:
> +if field not in recipe_fields:
> +return HttpResponse('Field %s is invalid' % field)
> +
> +writer = csv.writer(response)
> +for recipe in layerbranch.sorted_recipes():
> +values = [getattr(recipe, field) for field in fieldlist]
> +writer.writerow(values)
> +
> +return response
> diff --git a/templates/layerindex/detail.html 
> b/templates/layerindex/detail.html
> index 220d475b..67c21126 100644
> --- a/templates/layerindex/detail.html
> +++ b/templates/layerindex/detail.html
> @@ -199,13 +199,13 @@
>  
>  {{ layeritem.name }} 
> recipes ({{ layerbranch.sorted_recipes.count 
> }})
>  
> -
> -
> -
> - placeholder="Search recipes" class="search-query" id="filter">
> -
> -
> -
> +
> + class="icon-file"> Export CSV
> +
> +
> +
> +
> +
>  
>  
>  
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [Yocto] Prelink compilation for ZCU102

2018-04-17 Thread Mark Hatle
On 4/17/18 3:55 AM, Nicolas Salmin wrote:
> Hello guys,
> 
> Someone here have some information to build prelink recipe for ZCU102 board
> because i'm not able to do it ...
> 
> Hello guys,
> 
> Someone here have some information to build prelink recipe for ZCU102 board
> because i'm not able to do it ...
> 
> ERROR: prelink-1.0+gitAUTOINC+aa2985eefa-r0 do_unpack: Fetcher failure: Fetch
> command export PSEUDO_DISABLED=1; export
> DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-KTbsI0WgZ2"; export
> SSH_AUTH_SOCK="/run/user/1000/keyring/ssh"; export
> PATH="/private/path/UltraScale/zcu102/yocto/build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/private/path/UltraScale/zcu102/yocto/poky/scripts:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot/usr/bin/crossscripts:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/usr/sbin:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/usr/bin:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/sbin:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-n
 
ative/bin:/private/path/UltraScale/zcu102/yocto/poky/bitbake/bin:/private/path/UltraScale/zcu102/yocto/build/tmp/hosttools";
> export HOME="/home/nicolas"; git -c core.fsyncobjectfiles=0 checkout -B
> cross_prelink aa2985eefa94625037ad31e9dc5207fd5bf31ca7 failed with exit code
> 128, output:
> fatal: reference is not a tree: aa2985eefa94625037ad31e9dc5207fd5bf31ca7

The commit is most definitely still in the tree.

http://git.yoctoproject.org/cgit/cgit.cgi/prelink-cross/commit/?h=cross_prelink&id=aa2985eefa94625037ad31e9dc5207fd5bf31ca7

You likely have a local mirror that is not up-to-date or something similar.
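A couple of things to try (the paths below are just the usual DL_DIR layout, adjust
them to your setup):

$ git -C downloads/git2/git.yoctoproject.org.prelink-cross \
      cat-file -e aa2985eefa94625037ad31e9dc5207fd5bf31ca7 && echo "commit present"

If the clone is stale, clean the recipe's downloads and fetch again:

$ bitbake -c cleanall prelink
$ bitbake -c fetch prelink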

--Mark

> ERROR: prelink-1.0+gitAUTOINC+aa2985eefa-r0 do_unpack: Function failed:
> base_do_unpack
> ERROR: Logfile of failure stored in:
> /private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/temp/log.do_unpack.85986
> ERROR: Task
> (/private/path/UltraScale/zcu102/yocto/poky/meta/recipes-devtools/prelink/prelink_git.bb:do_unpack)
> failed with exit code '1'
> 
> 
> I don't know if it's because of the target is an aarch64 architecture or not 
> ...
> 
> Any advise ?
> Cheers,
> Nicolas
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] btrfs-tools Requires libgcc_s.so.1

2018-03-08 Thread Mark Hatle
On 3/8/18 4:10 PM, Marcelo E. Magallon wrote:
> On Thu, Mar 08, 2018 at 03:16:44PM -0600, Mark Hatle wrote:
> 
>> RDEPENDS are automatically promoted to DEPENDS (build-time).  I would 
>> normally
>> expect libgcc_s.so.1 to be present via the typical default depends.  Does 
>> your
>> recipe have an INHIBIT_DEFAULT_DEPENDS (I think that is it?) defined?  If so,
>> you would need to manually add all build dependencies then.
> 
> INHIBIT_DEFAULT_DEPS.
> 
> No, it doesn't, but that's a good hint.
> 
>> An executable or library with a stated library dependency (soname) will
>> automatically get an RDEPENDS.  The only time you should have to do an
>> RDEPENDS_${PN} of a library is when that library is 'dlopened'.  (This is the
>> case for things like pam modules.)
> 
> This is also the case in this situation.
> 
> glibc has this bit of code in pthread_cancel_init:
> 
> handle = __libc_dlopen_mode (LIBGCC_S_SO, RTLD_NOW | __RTLD_DLOPEN);
> 
> if (handle == NULL
>     || (resume = __libc_dlsym (handle, "_Unwind_Resume")) == NULL
>     || (personality = __libc_dlsym (handle, "__gcc_personality_v0")) == NULL
>     || (forcedunwind = __libc_dlsym (handle, "_Unwind_ForcedUnwind")) == NULL
>     || (getcfa = __libc_dlsym (handle, "_Unwind_GetCFA")) == NULL
> #ifdef ARCH_CANCEL_INIT
>     || ARCH_CANCEL_INIT (handle)
> #endif
>     )
>   __libc_fatal (LIBGCC_S_SO " must be installed for pthread_cancel to work\n");
> 
> it's dlopen()ing libgcc_s.so.1 in order to get thread 
> cancellation to work via exception unwinding.

Yes, the dlopen means the automated processing can't identify the need.. and
then the RDEPEND is the correct solution.

(This might be a reasonable bug/enhancement request to the system.  Look for
pthread_cancel and automatically infer that libgcc is required.)

> In my case, libgcc_s.so.1 is installed in the image before adding 
> the RDEPENDS.
> 
> What doesn't make sense to me is why in both the OP's and my 
> case, adding RDEPENDS_${PN} += "libgcc" is fixing anything. As 
> you said, this is promoted to a DEPENDS, so libgcc is available 
> at compile time, but that shouldn't change anything that I can 
> see.

I'm guessing if it's not available at compile time that some behavior is
changing (maybe not using pthread_cancel?)

Not sure... but at least the reason the RDEPEND resolves the runtime issue is
now clear to me, and within the design.

--Mark

> Thanks for looking at this,
> 
> Marcelo
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] btrfs-tools Requires libgcc_s.so.1

2018-03-08 Thread Mark Hatle
On 3/8/18 3:00 PM, Marcelo E. Magallon wrote:
> Sorry to go off on a tangent:
> 
> On Fri, Mar 04, 2016 at 04:12:54PM -0800, robert_jos...@selinc.com wrote:
> 
 root@test:~# btrfs scrub start /
 scrub started on /, fsid 79dc4fed-a0f7-43e2-b9e7-056b1a2c4cdd
>> (pid=333)
 libgcc_s.so.1 must be installed for pthread_cancel to work

 I can solve this by adding libgcc to RDEPENDS for btrfs-tools.
> 
> I ran into the same thing with my device, different package. I 
> don't understand the fix:
> 
>> Signed-off-by: Robert Joslyn 
>> ---
>> diff --git a/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.1.2.bb
>> b/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.1.2.bb
>> index 37c622b..cc2ccfc 100644
>> --- a/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.1.2.bb
>> +++ b/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.1.2.bb
>> @@ -11,6 +11,7 @@ LICENSE = "GPLv2"
>> LIC_FILES_CHKSUM = "file://COPYING;md5=fcb02dc552a041dee27e4b85c7396067"
>> SECTION = "base"
>> DEPENDS = "util-linux attr e2fsprogs lzo acl"
>> +RDEPENDS_${PN} = "libgcc"
> 
> What is this doing?
> 
> My understanding until a couple of days ago is that this will 
> simply pull the "libgcc" package into the image, add a dependency 
> in the binary package and NOTHING more. It won't change the way 
> binaries are linked, it won't change flags passed to the 
> compiler, etc.
> 
> I'm confused because in my case libgcc_s.so.1 is already in the 
> image, before this change, but this change seems to be fixing the 
> issue, and I don't understand why.

RDEPENDS are automatically promoted to DEPENDS (build-time).  I would normally
expect libgcc_s.so.1 to be present via the typical default depends.  Does your
recipe have an INHIBIT_DEFAULT_DEPENDS (I think that is it?) defined?  If so,
you would need to manually add all build dependencies then.

An executable or library with a stated library dependency (soname) will
automatically get an RDEPENDS.  The only time you should have to do an
RDEPENDS_${PN} of a library is when that library is 'dlopened'.  (This is the
case for things like pam modules.)
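So for a dlopen()ed library the explicit runtime dependency really is the fix, e.g. in
a bbappend if you'd rather not patch the recipe (the recipe name is just the example
from this thread):

# btrfs-tools_%.bbappend
RDEPENDS_${PN} += "libgcc"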

--Mark

> Any clues?
> 
> Thanks!
> 
> Marcelo
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Are Windows SDKs (mingw layer) supposed to work?

2018-03-06 Thread Mark Hatle
On 3/6/18 4:39 AM, Burton, Ross wrote:
> Have you tried using 2.4 to identify when it broke?  Clearly we need to extend
> the selftest so the mingw SDK is actually tested...
> 
> Ross
> 
> On 6 March 2018 at 05:32, Reyna, David  > wrote:
> 
> Hi all,
> 
> 
> I am trying to enable a customer for using YP SDKs on Windows. It 
> apparently
> is supposed to work, but I am unable to get past fatal errors.
> 
> 
> I have looked for documentation at the YP site and the meta-mingw repo, 
> but
> to no avail.
> 
> 
> 1) My project is a simple default “qemux86-64” with YP-2.5 HEAD, with the
> latest “meta-mingw” layer added and .
> 
> 
> The SDK builds fine and I get the “*.xz” generated file.

The output format from the Yocto Project of a tarball has been that way since
the beginning.  We (WR) had code that would produce a zip format, but that
stopped working a few releases ago when the archive chaining was reworked..

We've not yet resurrected the code for the .zip generation but probably will in
the next few months.

> 
> 2) However, when I use 7ZIP to extract it on my Windows host (which is
> recommended for XY files), I get several fatal issues.
> 
> 
>   (a) I get more than a hundred errors “Can not create symbolic link: 
> Access
> is denied”. 

This is odd.  In the past 7ZIP (when not the admin) would create either copies
or 'shortcuts' instead of actual symbolic links.  It sounds like something has
changed in 7zip.

> 
> While I do not care about the ones for the bin tools in the sysroot, I do
> care that most of the cross toolchain EXE files are thusly broken, plus 
> many
> of the libraries and header files in the sysroot.
> 
> 
> Am I missing a step?
> 
> 
> If I in fact extract this file on my Linux host, I can directly see that 
> it
> is full of symlinks! Why are there symlinks in a Windows-specific 
> tarball?

Often I've extracted it on a linux machine, and then used zip to recompress it.
(You still need to use 7zip to extract because of the long pathnames that blow
up winzip.)

> 
>   (b) If I attempt to build a simple C file from the shell in the SDK
> environment, I either get a silent failure (for the 32-bit toolchain) or a
> blatant error as per:

What version of windows are you using?  The last time I tested this (Rocko) it
was working properly still, but I tested with Win 7.  So it's possible that
something more recent has broken it.

I'm not sure if I'll have time today to retry this with Rocko, but I'll see if I
can.

--Mark
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] useradd-staticids

2018-02-23 Thread Mark Hatle
On 2/23/18 10:37 AM, Anders Montonen wrote:
> Hi,
> 
> I'm looking into using the useradd-staticids class for reproducible 
> builds. Is there any way to delay the warning/error about missing ids 
> until a recipe is actually built rather than getting them during parsing? 
> Having to generate tons of ids for packages you're not even using seems a 
> bit nuts.

The system used to do that, but then people started complaining that whenever
they added items to their image they'd get more warnings -- and why don't we
just tell them up-front.  :/

I do suggest you ask on the openembedded-core mailing list though.  The people
working on revising that code have been active there and may be able to help
more than I can.
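For reference, the usual setup looks roughly like this (the table paths are the class
defaults -- check the class in your release for the exact variable names and values):

USERADDEXTENSION = "useradd-staticids"
USERADD_UID_TABLES = "files/passwd"
USERADD_GID_TABLES = "files/group"
# make a missing entry an error rather than a warning
USERADD_ERROR_DYNAMIC = "error"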

--Mark

> Regards,
> Anders
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] useradd-example.bb

2018-02-14 Thread Mark Hatle
On 2/14/18 2:57 PM, Jeff Osier-Mixon wrote:
> Can anyone help Jean-Pierre? This seems at first like a simple issue during
> do_populate.

Need more information.  Are you creating new users and putting files in those
user directories in /usr/share or elsewhere?

The /usr/share hierarchy is controlled by the system-wide fs-perms.  For many of
its standard subdirectories, all subdirectories are globally set to become
root:root 0755, while files become 0644 root:root.

See the fs-perms.txt file: meta/files/fs-perms.txt

Pay specific attention to the comment:

# Note: all standard config directories are automatically assigned "0755 root
root false - - -"

'standard config directories' refer to the following list:

(from classes/package.bbclass)
# By default all of the standard directories specified in
# bitbake.conf will get 0755 root:root.
target_path_vars = ['base_prefix',
                    'prefix',
                    'exec_prefix',
                    'base_bindir',
                    'base_sbindir',
                    'base_libdir',
                    'datadir',
                    'sysconfdir',
                    'servicedir',
                    'sharedstatedir',
                    'localstatedir',
                    'infodir',
                    'mandir',
                    'docdir',
                    'bindir',
                    'sbindir',
                    'libexecdir',
                    'libdir',
                    'includedir',
                    'oldincludedir' ]
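If you need different ownership somewhere under one of those trees, one option is to
point the build at your own permissions table (untested sketch; the file name and
entry are examples, and this replaces the default table, so carry over any stock
entries you still want -- see meta/files/fs-perms.txt for the column format):

# local.conf or distro config
FILESYSTEM_PERMS_TABLES = "files/my-fs-perms.txt"

# files/my-fs-perms.txt -- keep the application's data directory owned by its user
${datadir}/myapp    0755   myuser  mygroup  false  -  -  -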


> On Tue, Feb 13, 2018 at 1:32 PM, Jean-Pierre Sainfeld  > wrote:
> 
> Hi,
> I am working on yocto system at the jethro rev level. 
> I am looking to use the meta-skeleton useradd-example.bb
>  recipe
> So far I have been successful in adding the required users and group 
> and creating files in the /usr/share directory 
> The current issue I am facing is the ownership of the created file 
> I see the right permissions at the do_install() time 
> however when the image is loaded in the target 
> those files are reverted to the root:root permissions 
> 
> The requirement of our platform is that we can create specific users and 
> make some specific processes  and data files having the proper uid and 
> ueid.
> 
> I have been able to do all this successfully on the target 
> but not through the yocto build system 
> 
> Any assistance in this regard would be fantastic and very cool :-)
> 
> 
> 
> -- 
> Jean-Pierre Sainfeld
> 408-508-1741 
> 
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org 
> https://lists.yoctoproject.org/listinfo/yocto
> 
> 
> 
> 
> 
> -- 
> Jeff Osier-Mixon - Open Source Community Manager, Intel Corporation
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Installing Wind River Linux

2018-02-01 Thread Mark Hatle
On 1/30/18 3:15 AM, 永瀨桂 wrote:
> Hi,
> I'm trying to build the installer for using the WindRiverLinux9 with a  
> development board.
> 
> I want to build the installer's image using the Intel-corei7-64 BSP, but  
> I can't build wrlinux-image-glibc-small like in the guide below.
> https://knowledge.windriver.com/en-us/000_Products/000/010/050/040/000_Wind_River_Linux_Platform_Developer's_Guide%2C_9/0A0/020/020
> 

The question is specific to Wind River Linux, which is derived from the Yocto
Project, but is not the Yocto Project.

I'll respond in a separate email.

--Mark

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Removing /usr/src/debug from image

2017-12-05 Thread Mark Hatle
On 12/5/17 12:40 PM, Koehler, Yannick wrote:
> Hi,
> 
> I have edited my local.conf to remove the debug related EXTRA_IMAGE_FEATURES. 
>  My rootfs still contains a /usr/src/debug folder, which I would like to get 
> rid of.  This likely come from -dbg package inclusion, which I would like to 
> understand how to control/remove.  We are using Yocto 1.9/2.0, and the only 
> thing I found related to this is 
> 
>   INHIBIT_PACKAGE_DEBUG_SPLIT = "1"

The above is definitely not what you want.  It will keep debug symbols in all of
your binaries, increasing the size of the eventual image.

The base-files recipe creates a set of directories.  If this directory is empty
on your resulting image, I would assume it is created from there.  If this is a
concern, create a bbappend and remove that directory in a do_install_append.
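Roughly (untested, and assuming base-files really is what creates it), something like:

# base-files_%.bbappend
do_install_append() {
    rm -rf ${D}${prefix}/src/debug
}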

Is there any particular reason you want it removed?  It should only be taking 1
inode, assuming it is empty.

> Yet, from documentation this appears to be about generating the -dbg packages 
> which I do not mind, but I do not want them install in my image.  I am trying 
> to understand what pulls them in.  My image recipe pulls the 
> packagegroup-core-boot, ROOTFS_PKGMANAGE_BOOTSTRAP, CORE_IMAGE_EXTRA_INSTALL, 
> ca-certificates, package-management and some other private package but none 
> of those include -dbg...
> 
> Any help appreciated.
> 
> --
> Yannick Koehler
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Contribute meta-installer to yocto

2017-11-28 Thread Mark Hatle
On 11/28/17 9:45 AM, akuster808 wrote:
> 
> 
> On 11/27/2017 09:20 AM, Mark Hatle wrote:
>> On 11/21/17 3:24 PM, Burton, Ross wrote:
>>> On 21 November 2017 at 08:55, Hongxu Jia <hongxu@windriver.com
>>> <mailto:hongxu@windriver.com>> wrote:
>>>
>>> If yocto is interested in this layer and will accept it,
>>> I could send pull request or some one directly fetch
>>> from above github master branch.
>>>
>>>
>>> Are you asking for a git repo on git.yoctoproject.org
>>> <http://git.yoctoproject.org>?  If you want one I believe the process is to 
>>> ask
>>> Michael Halstead.  There's no reason why it can't be maintained in this
>>> repository forever though,  just submit it to the layer index.
>> The request is for more then just a repository.  (We can get a repository
>> anywhere..)  What he is asking for is, is this something that the Yocto 
>> Project
>> itself wants to own.  He is still offering to be the maintainer of the layer,
>> but the project being owned by the Yocto Project itself has more 
>> implications.
> That is an interesting question.  Are you suggesting a discussion with
> the YP membership  since they are the ones who are providing the
> resources for the Project?

At present, we (Hongxu) intend to maintain the code and continue to evolve and
do all of the activities you would expect.  But I do think the YP membership
needs to at least be involved in a discussion of 'should this be a YP layer or
not'.  With the understanding that branding and [some day] resources may be
needed to continue the work.

(I want to make sure this isn't just shoveled over a wall and ignored.. that
serves no one.)

>>
>> I.e. using the bugzilla, discussion on the @yoctoproject.org mailing lists,
>> etc... what happens if he is no longer able to willing to maintain the 
>> layer.. etc.
>>
>> In addition, my understanding is a target based installer has places to 
>> insert
>> logos.  Currently these are blank.  If the Yocto Project wants to be the home
>> for this, then I would also hope that specific logos would be approved for 
>> use
>> within the default installer instance.
>>
>> If this is outside of the scope of what the Yocto Project itself wants to 
>> own,
>> then OpenEmbedded is the next place that might see value in this if not,
>> then a github project will be fine.
> 
> 
> Having an open discussion, like this is more in line with open source
> philosophy and I thank you.

This is exactly why I wanted it done this way.  The discussion needs to be open.
 This isn't a vendor specific BSP or vendor specific chunk of code.. (At least
it's not intended to be.)  Thus the broader question being asked.

> Kind regards,
> Armin
>>> Ross
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Contribute meta-installer to yocto

2017-11-27 Thread Mark Hatle
On 11/21/17 3:24 PM, Burton, Ross wrote:
> On 21 November 2017 at 08:55, Hongxu Jia  > wrote:
> 
> If yocto is interested in this layer and will accept it,
> I could send pull request or some one directly fetch
> from above github master branch.
> 
> 
> Are you asking for a git repo on git.yoctoproject.org
> ?  If you want one I believe the process is to 
> ask
> Michael Halstead.  There's no reason why it can't be maintained in this
> repository forever though,  just submit it to the layer index.

The request is for more than just a repository.  (We can get a repository
anywhere..)  What he is asking for is, is this something that the Yocto Project
itself wants to own.  He is still offering to be the maintainer of the layer,
but the project being owned by the Yocto Project itself has more implications.

I.e. using the bugzilla, discussion on the @yoctoproject.org mailing lists,
etc... what happens if he is no longer able or willing to maintain the layer.. 
etc.

In addition, my understanding is a target based installer has places to insert
logos.  Currently these are blank.  If the Yocto Project wants to be the home
for this, then I would also hope that specific logos would be approved for use
within the default installer instance.

If this is outside of the scope of what the Yocto Project itself wants to own,
then OpenEmbedded is the next place that might see value in this; if not,
then a github project will be fine.

> Ross

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [layerindex-web][patch v3 1/1] recipes.html: Require keyword for recipe search

2017-11-08 Thread Mark Hatle
On 11/8/17 1:08 PM, Paul Eggleton wrote:
> On Thursday, 9 November 2017 4:42:50 AM NZDT Mark Hatle wrote:
>> On 11/7/17 8:43 PM, Paul Eggleton wrote:
>>> On Wednesday, 8 November 2017 11:47:49 AM NZDT Mark Hatle wrote:
>>>> On 11/7/17 4:31 PM, Amanda Brindle wrote:
>>>>> Use JavaScript to check if the search box for recipe search is
>>>>> empty before querying the database. This will prevent the "502
>>>>> Bad Gateway" error that occurs when the query takes too long due
>>>>> to the large list of recipes. Since there are so many recipes
>>>>> spread across the layers in the OE index, there's no point in
>>>>> allowing a user to search without a keyword in order to browse
>>>>> the list; it simply isn't digestible as a whole.
>>>>>
>>>>> Add a browse button for the Machines, Classes, and Distros pages.
>>>>
>>>> There are reasons to view all of the recipes, machines, classes, distros,
>>>> etc. (Not necessarily good reasons, but I know people do it.)
>>>>
>>>> If the query is too long, it would be better to figure out a way to get a
>>>> partial response and formulate the first page based on partial
>>>> responses... having a multipage response that the user can look at.
>>>
>>> I'm willing to be proven wrong, but unfortunately it looks to me like this 
>>> will require major re-engineering to sort out for a fairly weak use case.
>>> If you consider this important, would you be able to look into making it
>>> work?
>>
>> I suspect much of the display engine probably needs a rework as the contents
>> of the layer index have grown.  I can't promise anything, but I'll see if I
>> can find anyone who can work on it.
> 
> Could be yes, thanks.
> 
>> For recipes, the use-case is very weak.  I have talked with people though
>> who have just wanted a list of what is available (one big list).  Primarily
>> so they can compare what is available to some magic spec sheet they are
>> looking at.
> 
> If that's the use case though we should provide a proper export capability, 
> because comparing big lists manually 50 items at a time isn't how you'd best 
> support that. Theoretically the API could be used there but an explicit 
> export 
> function would be more accessible.

For my own uses, I export this using the restapi.  So it can certainly be
exported easily enough and processed externally.  (Of course, few people know
about the restapi or how to use it.)
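For anyone curious, the sort of thing I do looks roughly like this (the endpoint and
parameters are from memory and may differ between index versions):

$ curl -s 'https://layers.openembedded.org/layerindex/api/recipes/?format=json' > recipes.json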

>> For the Machines, Distros pages..  people do want a full list here, as they
>> simply won't know what is available or what something is called.  The scope
>> of the machines and distros of course is nowhere near the scope for the full
>> recipe list.
> 
> Right, and this patch doesn't remove the ability to browse those - in fact it 
> makes it easier to do so by adding an explicit browse button, so I think 
> we're 
> OK there.

I misunderstood that.  I thought the machine/distro would be restricted like
recipes.

> Cheers,
> Paul
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [layerindex-web][patch v3 1/1] recipes.html: Require keyword for recipe search

2017-11-08 Thread Mark Hatle
On 11/7/17 8:43 PM, Paul Eggleton wrote:
> On Wednesday, 8 November 2017 11:47:49 AM NZDT Mark Hatle wrote:
>> On 11/7/17 4:31 PM, Amanda Brindle wrote:
>>> Use JavaScript to check if the search box for recipe search is
>>> empty before querying the database. This will prevent the "502
>>> Bad Gateway" error that occurs when the query takes too long due
>>> to the large list of recipes. Since there are so many recipes
>>> spread across the layers in the OE index, there's no point in
>>> allowing a user to search without a keyword in order to browse
>>> the list; it simply isn't digestible as a whole.
>>>
>>> Add a browse button for the Machines, Classes, and Distros pages.
>>
>> There are reasons to view all of the recipes, machines, classes, distros,
>> etc. (Not necessarily good reasons, but I know people do it.)
>>
>> If the query is too long, it would be better to figure out a way to get a
>> partial response and formulate the first page based on partial responses...
>> having a multipage response that the user can look at.
> 
> I'm willing to be proven wrong, but unfortunately it looks to me like this 
> will require major re-engineering to sort out for a fairly weak use case. If 
> you consider this important, would you be able to look into making it work?

I suspect much of the display engine probably needs a rework as the contents of
the layer index have grown.  I can't promise anything, but I'll see if I can
find anyone who can work on it.

For recipes, the use-case is very weak.  I have talked with people though who
have just wanted a list of what is available (one big list).  Primarily so they
can compare what is available to some magic spec sheet they are looking at.

For the Machines, Distros pages..  people do want a full list here, as they
simply won't know what is available or what something is called.  The scope of
the machines and distros of course is nowhere near the scope for the full recipe
list.

--Mark

> Cheers,
> Paul
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [layerindex-web][patch v3 1/1] recipes.html: Require keyword for recipe search

2017-11-07 Thread Mark Hatle
On 11/7/17 4:31 PM, Amanda Brindle wrote:
> Use JavaScript to check if the search box for recipe search is
> empty before querying the database. This will prevent the "502
> Bad Gateway" error that occurs when the query takes too long due
> to the large list of recipes. Since there are so many recipes
> spread across the layers in the OE index, there's no point in
> allowing a user to search without a keyword in order to browse
> the list; it simply isn't digestible as a whole.
> 
> Add a browse button for the Machines, Classes, and Distros pages.

There are reasons to view all of the recipes, machines, classes, distros, etc.
(Not necessarily good reasons, but I know people do it.)

If the query is too long, it would be better to figure out a way to get a
partial response and formulate the first page based on partial responses...
having a multipage response that the user can look at.

--Mark

> Fixes [YOCTO #11930]
> 
> Signed-off-by: Amanda Brindle 
> ---
>  layerindex/views.py| 15 ---
>  templates/layerindex/classes.html  |  3 ++-
>  templates/layerindex/distros.html  |  3 ++-
>  templates/layerindex/machines.html |  3 ++-
>  templates/layerindex/recipes.html  | 27 +--
>  5 files changed, 39 insertions(+), 12 deletions(-)
> 
> diff --git a/layerindex/views.py b/layerindex/views.py
> index 03d47f2..414c770 100644
> --- a/layerindex/views.py
> +++ b/layerindex/views.py
> @@ -656,7 +656,10 @@ class MachineSearchView(ListView):
>  
>  def get_queryset(self):
>  _check_url_branch(self.kwargs)
> -query_string = self.request.GET.get('q', '')
> +if self.request.GET.get('search', ''):
> +query_string = self.request.GET.get('q', '')
> +else:
> +query_string = ""
>  init_qs = 
> Machine.objects.filter(layerbranch__branch__name=self.kwargs['branch'])
>  if query_string.strip():
>  entry_query = simplesearch.get_query(query_string, ['name', 
> 'description'])
> @@ -705,7 +708,10 @@ class DistroSearchView(ListView):
>  
>  def get_queryset(self):
>  _check_url_branch(self.kwargs)
> -query_string = self.request.GET.get('q', '')
> +if self.request.GET.get('search', ''):
> +query_string = self.request.GET.get('q', '')
> +else:
> +query_string = ""
>  init_qs = 
> Distro.objects.filter(layerbranch__branch__name=self.kwargs['branch'])
>  if query_string.strip():
>  entry_query = simplesearch.get_query(query_string, ['name', 
> 'description'])
> @@ -730,7 +736,10 @@ class ClassSearchView(ListView):
>  
>  def get_queryset(self):
>  _check_url_branch(self.kwargs)
> -query_string = self.request.GET.get('q', '')
> +if self.request.GET.get('search', ''):
> +query_string = self.request.GET.get('q', '')
> +else:
> +query_string = ""
>  init_qs = 
> BBClass.objects.filter(layerbranch__branch__name=self.kwargs['branch'])
>  if query_string.strip():
>  entry_query = simplesearch.get_query(query_string, ['name'])
> diff --git a/templates/layerindex/classes.html 
> b/templates/layerindex/classes.html
> index 34ac5aa..574cdb8 100644
> --- a/templates/layerindex/classes.html
> +++ b/templates/layerindex/classes.html
> @@ -35,7 +35,8 @@
>  
>  
>   id="appendedInputButtons" placeholder="Search classes" name="q" value="{{ 
> search_keyword }}" />
> -search
> + value="1">search
> + value="1">browse
>  
>  
>  
> diff --git a/templates/layerindex/distros.html 
> b/templates/layerindex/distros.html
> index 5b6995a..3266bf6 100644
> --- a/templates/layerindex/distros.html
> +++ b/templates/layerindex/distros.html
> @@ -35,7 +35,8 @@
>  
>  
>   id="appendedInputButtons" placeholder="Search distros" name="q" value="{{ 
> search_keyword }}" />
> -search
> + value="1">search
> + value="1">browse
>  
>  
>  
> diff --git a/templates/layerindex/machines.html 
> b/templates/layerindex/machines.html
> index c0c6f33..e963376 100644
> --- a/templates/layerindex/machines.html
> +++ b/templates/layerindex/machines.html
> @@ -34,7 +34,8 @@
>  
>  
>   id="appendedInputButtons" placeholder="Search machines" name="q" value="{{ 
> search_keyword }}" />
> -search
> + value="1">search
> + value="1">browse
>  
>   

Re: [yocto] pseudo - few general questions

2017-11-01 Thread Mark Hatle
On 11/1/17 8:49 AM, Mark Hatle wrote:
> On 11/1/17 4:17 AM, Pavlina Varekova wrote:
>> Hi,
>> thank you very much. It sounds good. So I tried replace fakechroot with 
>> pseudo.
>> I read man pages and replace:
>>
>> FAKECHROOT_BASE="${DIR1}" fakechroot  command
>>
>> with:
>>
>> %{PATH/TO/PSEUDO/}bin/pseudo  -r "${DIR1}" -P %{PATH/TO/PSEUDO/} command
>>
>> But it does not work good. Probably some variable is set differently.
>> Please is the replacement correct?
>>
>> Pavlina
> 
> I always run pseudo, then once the shell opens then run 'chroot ' within
> pseudo.
> 
> The system will still execute commands outside of the 'chroot', but any
> application looking at the file filesystem will appear to be constrained.

I just realized, it -may- still execute commands outside of the chroot.  pseudo
(like fakechroot) can not protect everything like a real chroot.

Below is my typical use:

$ ./bin/pseudo /bin/bash
# chroot /usr
# cd /
# ls
bash: ls: command not found
# bin/ls
bin  etc  games  include  lib  lib64  libexec  local  sbin  share  src  tmp

So it's working here.


But using the -r option, appears to work as well:

$ ./bin/pseudo -r /usr /bin/bash
bash-4.2# cd /
bash-4.2# ls
bash: ls: command not found
bash-4.2# bin/ls
bin  etc  games  include  lib  lib64  libexec  local  sbin  share  src  tmp
bash-4.2#

--Mark

> --Mark
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] pseudo - few general questions

2017-11-01 Thread Mark Hatle
On 11/1/17 4:17 AM, Pavlina Varekova wrote:
> Hi,
> thank you very much. It sounds good. So I tried replace fakechroot with 
> pseudo.
> I read man pages and replace:
> 
> FAKECHROOT_BASE="${DIR1}" fakechroot  command
> 
> with:
> 
> %{PATH/TO/PSEUDO/}bin/pseudo  -r "${DIR1}" -P %{PATH/TO/PSEUDO/} command
> 
> But it does not work good. Probably some variable is set differently.
> Please is the replacement correct?
> 
> Pavlina

I always run pseudo, then once the shell opens, run 'chroot ' within
pseudo.

The system will still execute commands outside of the 'chroot', but any
application looking at the file filesystem will appear to be constrained.

--Mark
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] pseudo - few general questions

2017-10-16 Thread Mark Hatle
On 10/16/17 7:16 AM, Pavlina Varekova wrote:
> Hi,
> I am evaluating to use pseudo in one project so I have several questions
> regarding the tool.
> 
> As I see in the official home for pseudo's git repository the last commit is
> from 2017-04-13. Is pseudo maintained now and will it be in the future? How
> active the community is?
> 
> From its documentation I know that it is proposed for "provide enough
> functionality to allow RPM and similar tools to run file system installs using
> chroot()". But "implementation is far from complete or perfect". Is pseudo
> suitable to use in automated tests that aim to or can set a broad range or
> combinations of file ownerships?

The component is being actively maintained.  The maintainer is often busy and
general maintenance is not required on a daily basis.. but he does respond to
mailing list messages regularly.

As far as testing goes, I've found that pseudo is better for automated tests
than the combination of fakeroot/fakechroot.  But as indicated there are corner
cases where there may be problems.  If you use it and find one of those corner
cases let us know, as we want to improve it.

--Mark

> Thank you very much.
> 
> Pavlina
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-selinux][PATCH] policycoreutils: update AUDITH, PAMH

2017-10-10 Thread Mark Hatle
This is incorrect.  You are not allowed to dynamically determine capabilities
like this, because if another component changes, the system has no way to
determine whether this package should also be recompiled.

policycoreutils should be using 'PACKAGECONFIG', with an audit and pam option.
Then set a default with them enabled, allowing a user to override the settings.
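Something along these lines (a rough, untested sketch -- for a Makefile-based recipe
the selected flags still need routing into EXTRA_OEMAKE, e.g. via
PACKAGECONFIG_CONFARGS):

PACKAGECONFIG ??= "audit pam"
PACKAGECONFIG[audit] = "AUDITH=y,AUDITH=n,audit"
PACKAGECONFIG[pam] = "PAMH=y,PAMH=n,libpam"
EXTRA_OEMAKE += "${PACKAGECONFIG_CONFARGS}"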

--Mark

On 10/10/17 2:46 AM, wenzong@windriver.com wrote:
> From: Wenzong Fan 
> 
> Update definition of AUDITH, PAMH according to the upstream changes
> for Makefiles:
> 
>   commit 89ce96cac6ce5eeed78cb39c58514cd68494d7aa
>   ...
>   -ifeq ($(PAMH), /usr/include/security/pam_appl.h)
>   +ifeq ($(PAMH), y)
>   ...
>   -ifeq ($(AUDITH), /usr/include/libaudit.h)
>   +ifeq ($(AUDITH), y)
> 
> Signed-off-by: Wenzong Fan 
> ---
>  recipes-security/selinux/policycoreutils.inc | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/recipes-security/selinux/policycoreutils.inc 
> b/recipes-security/selinux/policycoreutils.inc
> index 442b086..63ca402 100644
> --- a/recipes-security/selinux/policycoreutils.inc
> +++ b/recipes-security/selinux/policycoreutils.inc
> @@ -118,8 +118,8 @@ export STAGING_LIBDIR
>  export BUILD_SYS
>  export HOST_SYS
>  
> -AUDITH="`ls ${STAGING_INCDIR}/libaudit.h >/dev/null 2>&1 && echo 
> /usr/include/libaudit.h `"
> -PAMH="`ls ${STAGING_INCDIR}/security/pam_appl.h >/dev/null 2>&1 && echo 
> /usr/include/security/pam_appl.h `"
> +AUDITH="`ls ${STAGING_INCDIR}/libaudit.h >/dev/null 2>&1 && echo y`"
> +PAMH="`ls ${STAGING_INCDIR}/security/pam_appl.h >/dev/null 2>&1 && echo y`"
>  EXTRA_OEMAKE += "${@target_selinux(d, 'PAMH=${PAMH} AUDITH=${AUDITH}', 
> 'PAMH= AUDITH= ')} INOTIFYH=n"
>  EXTRA_OEMAKE += "PREFIX=${D}"
>  EXTRA_OEMAKE += "INITDIR=${D}/etc/init.d"
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Framework to implement mirroring in Yocto

2017-10-09 Thread Mark Hatle
On 10/6/17 5:08 PM, Gutierrez, Hernan Ildefonso (Boise R, FW) wrote:
> Hi,
> 
>  
> 
> We are planning to implement a mirror for both source code downloaded and
> sscache in our work environment.
> 

We mirror a ton of code for our customers.  We have two types of mirrors that we
deliver.  Raw 'git servers' and tarball (or equivalent) mirrors.

The way that we populate these mirrors is to use bitbake with a fetchall
'universe'.  That way everything that could be downloaded (with a few
exceptions) is downloaded.
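Roughly (the exact invocation depends on your release -- older releases had a
do_fetchall task, newer bitbake spells it --runall):

# local.conf, so git repos are also captured as mirror tarballs
BB_GENERATE_MIRROR_TARBALLS = "1"

$ bitbake universe -c fetchall       # older releases
$ bitbake universe --runall=fetch    # newer bitbake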

For git archives, we identify them and create git mirrors on an external server.  This
server regularly updates the git servers from the upstream location.  This
allows us to have git mirrors that are 'live' git servers, and not just tarballs
that could get out of sync.

For this work, you can easily setup a 'premirror' in your projects of the 
format:

PREMIRRORS_append = " \
 git://.*/.* git://mirror.hostname.here/git/MIRRORNAME;protocol=git \n \"
"

You can adjust the protocol to be file, git, or http (as necessary for your
environment).  (In our case, it's usually file.)

For the files, it's even easier.  Just have a shared fileserver that contains
all of the downloads:

PREMIRRORS_append = " \
 .*://.*/.* file:///downloads/ \n \
"

> 
> We are planning to use Nexus and Nuget to allow storage and versioning 
> control.
> We don’t know if these are the right tools.
> 

If you want to formalize this further, there are alternative ways to do this.
(We use an approach where we have 'download layers'.)  Between the setup program
(https://github.com/Wind-River/wr-lx-setup) and these special download layers
our users can download (to their local system) both the mirrors and tarball
downloads.

Our typical download layer looks like:

conf/layer.conf:

BBFILE_COLLECTIONS += "core-dl"
BBFILE_PATTERN_core-dl = ""
BBFILE_PATTERN_IGNORE_EMPTY_core-dl = "1"

# This should only be incremented on significant changes that will
# cause compatibility issues with other layers
LAYERVERSION_core-dl = "2.4"

# We have a pre-populated downloads directory, add to PREMIRRORS
PREMIRRORS_append = " \
 .*://.*/.* file://${LAYERDIR}/downloads/ \n \
 git://.*/.* git://${LAYERDIR}/git/MIRRORNAME;protocol=file \n \
"

Then inside the 'downloads' directory we have a copy of all of the tarballs
used by the 'meta' layer (oe-core).  [note this can grow fairly large, so we
generally have a 2.4, then a 2.5, etc. version to keep the size manageable.]

The setup program uses a local layer index to control everything.  We have a
single local patch to our oe-core that adds a 'recommendation' on this new
download layer.

LAYERRECOMMENDS_core = "core-dl (= 2.4)"


The layer index will bring this in as an 'optional' dependency which the setup
program can then use.   So if you do this for all of the layers you use -- then
a user calling setup (with our private layer index) will automatically get the
downloads that match only the layers they choose to bring into their project.

(The setup program also has a --mirror option, which will allow a local user to
mirror down the layer index contents, as well as the layers that they are using.
 This is intended both for our commercial product delivery, but also for
customers to be able to snapshot a given installation so they have all of the
code necessary to reproduce the environment in the future.)

> 
> Since we are about to embark in this project, before starting I wanted to know
> if you have some pointers on how can be implemented a mirroring framework for 
> yocto.
> 

You can start simple, and just setup a local fileserver and git server with the
premirror setup.

You can extend this to include custom delivery and installation to customers if
this is appropriate for your environment.

(Note, the setup program is currently external to OE / YP.  We are working to
integrate this into bitbake.  While there is no specific timeline promised, we
started this work in the 2.4 timeframe, and I expect much of what I explained
above will be implemented in the 2.5 timeframe.  But until then, you can use the
wr-lx-setup repository I mentioned.)

--Mark

> 
> Not sure if there are some common tools (other tools) with which yocto
> integrates nicely.
> 
>  
> 
> Any pointers will be appreciated,
> 
>  
> 
> --Hernan
> 
>  
> 
> PS> We are in morty branch currently.
> 
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Errors building with Windows Subsystem for Linux (aka Bash on Ubuntu on Windows)

2017-09-29 Thread Mark Hatle
On 9/28/17 8:28 AM, Bryan Evenson wrote:
> Ross,
> 
>  
> 
> *From:*Burton, Ross [mailto:ross.bur...@intel.com]
> *Sent:* Wednesday, September 27, 2017 6:43 PM
> *To:* Bryan Evenson 
> *Cc:* yocto@yoctoproject.org
> *Subject:* Re: [yocto] Errors building with Windows Subsystem for Linux (aka
> Bash on Ubuntu on Windows)
> 
>  
> 
> On 27 September 2017 at 21:59, Bryan Evenson  > wrote:
> 
>  
> 
> I think I found the problem.  I started looking at more file properties 
> for
> the files that worked and the ones that didn’t, and I noticed that all the
> ones that failed show a link count of 1024.  The Windows filesystem has a
> link limit of 1023 links per file (at least as reported here:
> 
> https://msdn.microsoft.com/en-us/library/windows/desktop/aa363860(v=vs.85).aspx),
> so I think the hard link is failing due to the Windows link limit.  If 
> that
> is the case, then I don’t think it’ll be a quick fix to get a working
> solution under WSL.
> 
>  
> 
> That link count doesn't seem feasible though...  we hardlink frequently 
> during a
> recipe build, but I'd expect to see 10 links, not over a thousand.  You've
> definitely found the problem, just need to figure out what is causing such
> excessive linking,
> 
>  
> 
> Two files, LC_MEASUREMENT and LC_PAPER, seem to be identical through most the
> locales.  I’m not sure which are copies and which are hard links, but I did a
> md5sum comparison and found over 1200 identical LC_MEASUREMENT files in the
> glibc-locale working directory.  I don’t need all the locales, so I set
> GLIBC_GENERATE_LOCALES = "en_GB.UTF-8 en_US.UTF-8" in my local.conf and now
> glibc-locale completes building.
> 

FYI, this is expected.  Many of the locales contain identical copies of the
same files.  So to save space there is a step that runs and identifies the
duplicates and hard links them.  (This helps a lot for the sstate-cache for
instance).

It likely wouldn't be difficult to determine the link count 'issue' and simply
link to a different file for subsequent links... but I'm no longer sure where
the linking occurs that does all of this.

--Mark

> 
> I’ll report back if the rest of the build completes.
> 
>  
> 
> Bryan
> 
>  
> 
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-selinux] Update announcement

2017-09-18 Thread Mark Hatle
On 9/18/17 9:34 PM, Chanho Park wrote:
> Hi Mark,
> 
> Thanks for the update.
> When I ran the semanage tool, I got below errors.
> 
>> ImportError: No module named lib2to3.pgen2.parse
> 
> Does the setools use python3 instead of the python2?

Are you running this on the target or as part of the build?  Part of the system
requires meta-python components to operate properly.  If this piece requires
additional components that are not set as requirements, it's a bug.

Patches welcome.  (I did not experience this when I did a build.)

--Mark

> Best Regards,
> Chanho Park
> 
> On Fri, 15 Sep 2017 at 6:27 AM Mark Hatle <mark.ha...@windriver.com
> <mailto:mark.ha...@windriver.com>> wrote:
> 
> I have pushed an update to meta-selinux to work with master.  A few key 
> changes:
> 
> The oe-selinux and poky-selinux distro configurations HAVE BEEN REMOVED!  
> It is
> now up to the user to enable the components using the appropriate
> DISTRO_FEATURES as documented in the README file.  (acl xattr pam selinux)
> 
> The layer now requires meta-selinux as part of the 'setools' upgrade.  
> The tool
> is now a python based tool.
> 
> SELinux componetns have been updated to version 2.7.
> 
> Refpolicy is at 2.20170204, git at master as of yesterday.
> 
> The README for the layer has been updated to include some additional 
> information
> about running the included images.
> 
> Changelog:
> 
> f1f0860 openssh: set ChallengeResponseAuthentication to no
> 24cce7b refpolicy: fix a typo in RDEPENDS
> 827b305 initscripts: use the 'i' option for restorecon command
> eeb2c2f audit: fix the wrong packaging for auditd.service
> 2aadc0d audit 2.7.1 -> 2.7.6
> beaaa37 attr: fix ptest failures when selinux enabled
> 232bfeb systemd: Remove inherit enable-selinux, obsolete
> 40a581d selinux: uprev include file to 20170804
> 3aafa96 libsepol: uprev to 2.7 (20170804)
> 375dfa6 libselinux: uprev to 2.7 (20170804)
> b00974f libsemanage: uprev to 2.7 (20170804)
> 43adb0c checkpolicy: uprev to 2.7 (20170804)
> f838032 secilc: uprev to 2.7 (20170804)
> c9186be policycoreutils: uprev to 2.7 (20170804)
> 9b70823 sepolgen: remove package
> d8d6ac6 mcstrans: add package 2.7 (20170804)
> 9a07ac8 restorecond: add package 2.7 (20170804)
> a5b5f5b selinux-sandbox: add package 2.7 (20170804)
> 1d3df56 selinux-python: add package 2.7 (20170804)
> 17cda5a semodule-utils: add package 2.7 (20170804)
> 28b961c selinux-dbus: add package 2.7 (20170804)
> a1f9832 selinux-gui: add package 2.7 (20170804)
> 493b567 policycoreutils: fixes for 2.7 uprev
> fe8bc07 refpolicy_common: depends on semodule-utils-native
> fdf7612 setools: uprev to 4.1.1
> 96b54b4 packagegroup-*: sync package names
> 2c7c0e9 selinux-python: add setools to RDEPENDS
> 8bd72df refpolicy-git: Update to lastest git version
> 694b8d1 README: Update and remove references to distros, replace w/
> DISTRO_FEATURES
> 4fefe83 Refactor to conform to YP Compat requirements
> 6733785 README: Add information about running the system
> dddf265 packagegroups: Fix LIC_FILES_CHKSUM
> bca5c61 refpolicy: Add '/bin/bash.bash', an update-alternative to the 
> policy
> 907e373 policycoreutils: Update fixfile
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org <mailto:yocto@yoctoproject.org>
> https://lists.yoctoproject.org/listinfo/yocto
> 
> -- 
> Best Regards,
> Chanho Park

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-selinux][PATCH 04/21] libsemanage: uprev to 2.7 (20170804)

2017-09-18 Thread Mark Hatle
On 9/18/17 2:48 AM, wenzong fan wrote:
> 
> 
> On 09/14/2017 09:33 PM, Mark Hatle wrote:
>> On 9/14/17 5:31 AM, wenzong fan wrote:
>>>
>>>
>>> On 09/14/2017 08:07 AM, Mark Hatle wrote:
>>>> On 9/12/17 9:19 PM, Mark Hatle wrote:
>>>>> On 9/12/17 9:06 PM, wenzong fan wrote:
>>>>>> On 09/12/2017 06:59 PM, Chanho Park wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I can't apply this patch on top of the master branch. Which revision did
>>>>>>> you make the patches?
>>>>>>
>>>>>> Oops, that's my fault. I did a "sed -i -e 's/Subject: [/Subject:
>>>>>> [meta-selinux][/g' 00*" to add prefix for mail subjects, that also
>>>>>> changed the removed patch files in libsemanage.
>>>>>>
>>>>>> I'll send v2.
>>>>>>
>>>>>> Thanks
>>>>>> Wenzong
>>>>>
>>>>> I don't see the original set of patches in my archives.  When you rebase, 
>>>>> please
>>>>> rebase on top of mgh/master-next.
>>>>
>>>> My mailer finally loaded the original set.  I saw the same problems, but 
>>>> was
>>>> able to get them merged.
>>>>
>>>> I have updated 'mgh/master-next'.  Please verify the contents include all 
>>>> of
>>>> your changes.
>>>
>>> All my changes are there now.
>>>
>>>>
>>>> I tried to build a system and boot it, but it didn't work.  I'm guessing I
>>>> forgot something simple, but I can't make master-next into master without
>>>> knowing I can boot..  Any clue would be useful.  Thanks!
>>>>
>>>>
>>>> My configuration is:
>>>>
>>>> bblayers.conf:
>>>>
>>>> oe-core (master) & meta-selinux (mgh/master-next)
>>>>
>>>>
>>>> local.conf:
>>>>
>>>> IMAGE_FEATURES_append = " debug-tweaks ssh-server-openssh"
>>>>
>>>> DISTRO_FEATURES_append = " opengl x11 wayland acl xattr pam selinux"
>>>>
>>>> PREFERRED_PROVIDER_virtual/refpolicy = "refpolicy-mls"
>>>> PREFERRED_VERSION_refpolicy-mls = "2.20170204"
>>>
>>> Above configs are OK, you can simply use:
>>>
>>> DISTRO = "poky-selinux"
>>> PREFERRED_VERSION_refpolicy-mls ?= "2.20170204"
>>
>> The DISTRO settings in meta-selinux are being removed (they are no longer in 
>> the
>> master-next branch).  Instead the user will be required to set the
>> DISTRO_FEATURE 'selinux' to enable the components.  (It is expected they will
>> also enable acl/xattr and pam.)
>>
>>>>
>>>>
>>>> I ran QEMU using:
>>>>
>>>>
>>>> runqemu qemux86 core-image-selinux ext4 nographic
>>>>
>>>>
>>>
>>> Please run QEMU with:
>>>
>>> $ runqemu qemux86 core-image-selinux ext4 nographic
>>> bootparams="selinux=1 enforcing=0"
>>
>>
>>
>>>>
>>>> Trying to login I get:
>>>>
>>>> qemux86 login: root
>>>> [   23.960609] kauditd_printk_skb: 13 callbacks suppressed
>>>> Cannot execute /bin/sh: Permission denied
>>>> [   23.973922] audit: type=1400 audit(1505347190.805:29): avc:  denied  {
>>>> execute } for  pid=671 comm="login" name="bash.bash" dev="vda" ino=8163
>>>> scontext=system_u:system_r:local_login_t:s0-s15:c0.c1023
>>>> tcontext=system_u:object_r:bin_t:s0 tclass=file permissive=0
>>>> [   23.975463] audit: type=1400 audit(1505347190.813:30): avc:  denied  {
>>>> execute } for  pid=671 comm="login" name="bash.bash" dev="vda" ino=8163
>>>> scontext=system_u:system_r:local_login_t:s0-s15:c0.c1023
>>>> tcontext=system_u:object_r:bin_t:s0 tclass=file permissive=0
>>>>
>>>>
>>>
>>> This should be blocked by refpolicy-mls, please boot with "selinux=1
>>> enforcing=0" to verify if SELinux tools work. For example:
>>
>> I would like to update the README file for the layer on how the user can
>> actually make a bootable system.  If this involves adding a user, that is 
>> fine.
>> But at present there is no way to login w/o turning off enforcing. 

[yocto] [meta-selinux] Update announcement

2017-09-14 Thread Mark Hatle
I have pushed an update to meta-selinux to work with master.  A few key changes:

The oe-selinux and poky-selinux distro configurations HAVE BEEN REMOVED!  It is
now up to the user to enable the components using the appropriate
DISTRO_FEATURES as documented in the README file.  (acl xattr pam selinux)
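
As an illustration only (the README remains the authoritative reference), a
local.conf fragment enabling this, with the MLS policy selected, would look
roughly like:

DISTRO_FEATURES_append = " acl xattr pam selinux"
PREFERRED_PROVIDER_virtual/refpolicy = "refpolicy-mls"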

The layer now requires meta-python as part of the 'setools' upgrade, as the tool
is now Python based.

SELinux components have been updated to version 2.7.

Refpolicy is at 2.20170204, with the git recipe tracking master as of yesterday.

The README for the layer has been updated to include some additional information
about running the included images.
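
For example, booting the reference image under QEMU with SELinux enabled but not
enforcing (as covered there) looks roughly like:

runqemu qemux86 core-image-selinux ext4 nographic bootparams="selinux=1 enforcing=0"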

Changelog:

f1f0860 openssh: set ChallengeResponseAuthentication to no
24cce7b refpolicy: fix a typo in RDEPENDS
827b305 initscripts: use the 'i' option for restorecon command
eeb2c2f audit: fix the wrong packaging for auditd.service
2aadc0d audit 2.7.1 -> 2.7.6
beaaa37 attr: fix ptest failures when selinux enabled
232bfeb systemd: Remove inherit enable-selinux, obsolete
40a581d selinux: uprev include file to 20170804
3aafa96 libsepol: uprev to 2.7 (20170804)
375dfa6 libselinux: uprev to 2.7 (20170804)
b00974f libsemanage: uprev to 2.7 (20170804)
43adb0c checkpolicy: uprev to 2.7 (20170804)
f838032 secilc: uprev to 2.7 (20170804)
c9186be policycoreutils: uprev to 2.7 (20170804)
9b70823 sepolgen: remove package
d8d6ac6 mcstrans: add package 2.7 (20170804)
9a07ac8 restorecond: add package 2.7 (20170804)
a5b5f5b selinux-sandbox: add package 2.7 (20170804)
1d3df56 selinux-python: add package 2.7 (20170804)
17cda5a semodule-utils: add package 2.7 (20170804)
28b961c selinux-dbus: add package 2.7 (20170804)
a1f9832 selinux-gui: add package 2.7 (20170804)
493b567 policycoreutils: fixes for 2.7 uprev
fe8bc07 refpolicy_common: depends on semodule-utils-native
fdf7612 setools: uprev to 4.1.1
96b54b4 packagegroup-*: sync package names
2c7c0e9 selinux-python: add setools to RDEPENDS
8bd72df refpolicy-git: Update to lastest git version
694b8d1 README: Update and remove references to distros, replace w/ 
DISTRO_FEATURES
4fefe83 Refactor to conform to YP Compat requirements
6733785 README: Add information about running the system
dddf265 packagegroups: Fix LIC_FILES_CHKSUM
bca5c61 refpolicy: Add '/bin/bash.bash', an update-alternative to the policy
907e373 policycoreutils: Update fixfile
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-selinux][PATCH 04/21] libsemanage: uprev to 2.7 (20170804)

2017-09-14 Thread Mark Hatle
On 9/14/17 5:31 AM, wenzong fan wrote:
> 
> 
> On 09/14/2017 08:07 AM, Mark Hatle wrote:
>> On 9/12/17 9:19 PM, Mark Hatle wrote:
>>> On 9/12/17 9:06 PM, wenzong fan wrote:
>>>> On 09/12/2017 06:59 PM, Chanho Park wrote:
>>>>> Hi,
>>>>>
>>>>> I can't apply this patch on top of the master branch. Which revision did
>>>>> you make the patches?
>>>>
>>>> Oops, that's my fault. I did a "sed -i -e 's/Subject: [/Subject:
>>>> [meta-selinux][/g' 00*" to add prefix for mail subjects, that also
>>>> changed the removed patch files in libsemanage.
>>>>
>>>> I'll send v2.
>>>>
>>>> Thanks
>>>> Wenzong
>>>
>>> I don't see the original set of patches in my archives.  When you rebase, 
>>> please
>>> rebase on top of mgh/master-next.
>>
>> My mailer finally loaded the original set.  I saw the same problems, but was
>> able to get them merged.
>>
>> I have updated 'mgh/master-next'.  Please verify the contents include all of
>> your changes.
> 
> All my changes are there now.
> 
>>
>> I tried to build a system and boot it, but it didn't work.  I'm guessing I
>> forgot something simple, but I can't make master-next into master without
>> knowing I can boot..  Any clue would be useful.  Thanks!
>>
>>
>> My configuration is:
>>
>> bblayers.conf:
>>
>> oe-core (master) & meta-selinux (mgh/master-next)
>>
>>
>> local.conf:
>>
>> IMAGE_FEATURES_append = " debug-tweaks ssh-server-openssh"
>>
>> DISTRO_FEATURES_append = " opengl x11 wayland acl xattr pam selinux"
>>
>> PREFERRED_PROVIDER_virtual/refpolicy = "refpolicy-mls"
>> PREFERRED_VERSION_refpolicy-mls = "2.20170204"
> 
> Above configs are OK, you can simply use:
> 
> DISTRO = "poky-selinux"
> PREFERRED_VERSION_refpolicy-mls ?= "2.20170204"

The DISTRO settings in meta-selinux are being removed (they are no longer in the
master-next branch).  Instead the user will be required to set the
DISTRO_FEATURE 'selinux' to enable the components.  (It is expected they will
also enable acl/xattr and pam.)

>>
>>
>> I ran QEMU using:
>>
>>
>> runqemu qemux86 core-image-selinux ext4 nographic
>>
>>
> 
> Please run QEMU with:
> 
> $ runqemu qemux86 core-image-selinux ext4 nographic 
> bootparams="selinux=1 enforcing=0"



>>
>> Trying to login I get:
>>
>> qemux86 login: root
>> [   23.960609] kauditd_printk_skb: 13 callbacks suppressed
>> Cannot execute /bin/sh: Permission denied
>> [   23.973922] audit: type=1400 audit(1505347190.805:29): avc:  denied  {
>> execute } for  pid=671 comm="login" name="bash.bash" dev="vda" ino=8163
>> scontext=system_u:system_r:local_login_t:s0-s15:c0.c1023
>> tcontext=system_u:object_r:bin_t:s0 tclass=file permissive=0
>> [   23.975463] audit: type=1400 audit(1505347190.813:30): avc:  denied  {
>> execute } for  pid=671 comm="login" name="bash.bash" dev="vda" ino=8163
>> scontext=system_u:system_r:local_login_t:s0-s15:c0.c1023
>> tcontext=system_u:object_r:bin_t:s0 tclass=file permissive=0
>>
>>
> 
> This should be blocked by refpolicy-mls, please boot with "selinux=1 
> enforcing=0" to verify if SELinux tools work. For example:

I would like to update the README file for the layer on how the user can
actually make a bootable system.  If this involves adding a user, that is fine.
But at present there is no way to login w/o turning off enforcing.  That seems
to defeat the purpose of enabling selinux in a design.

So any help you can give me for the documentation would be appreciated.

> $ sestatus

root@qemux86:~# sestatus
SELinux status: enabled
SELinuxfs mount:/sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: mls
Current mode:   permissive
Mode from config file:  enforcing
Policy MLS status:  enabled
Policy deny_unknown status: allowed
Memory protection checking: requested (insecure)
Max kernel policy version:  30

> OR:
> $ semanage login -l

root@qemux86:~# semanage login -l

Login Name   SELinux User MLS/MCS RangeService

__default__  user_u   s0-s0*
root root s0-s15:c0.c1023  *

(I followed the information below and enabled the python components.)
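
(For the bash.bash denial above, one possible workaround sketch -- untested here,
and assuming the only problem is the bin_t label on the update-alternatives
shell -- would be to add a file context and relabel:

semanage fcontext -a -t shell_exec_t '/bin/bash\.bash'
restorecon -v /bin/bash.bash

The "refpolicy: Add '/bin/bash.bash'" change in the changelog appears to address
the same path in the policy itself.)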

> Actually this do

Re: [yocto] [meta-selinux][PATCH 04/21] libsemanage: uprev to 2.7 (20170804)

2017-09-13 Thread Mark Hatle
On 9/12/17 9:19 PM, Mark Hatle wrote:
> On 9/12/17 9:06 PM, wenzong fan wrote:
>> On 09/12/2017 06:59 PM, Chanho Park wrote:
>>> Hi,
>>>
>>> I can't apply this patch on top of the master branch. Which revision did 
>>> you make the patches?
>>
>> Oops, that's my fault. I did a "sed -i -e 's/Subject: [/Subject: 
>> [meta-selinux][/g' 00*" to add prefix for mail subjects, that also 
>> changed the removed patch files in libsemanage.
>>
>> I'll send v2.
>>
>> Thanks
>> Wenzong
> 
> I don't see the original set of patches in my archives.  When you rebase, 
> please
> rebase on top of mgh/master-next.

My mailer finally loaded the original set.  I saw the same problems, but was
able to get them merged.

I have updated 'mgh/master-next'.  Please verify the contents include all of
your changes.

I tried to build a system and boot it, but it didn't work.  I'm guessing I
forgot something simple, but I can't make master-next into master without
knowing I can boot..  Any clue would be useful.  Thanks!


My configuration is:

bblayers.conf:

oe-core (master) & meta-selinux (mgh/master-next)


local.conf:

IMAGE_FEATURES_append = " debug-tweaks ssh-server-openssh"

DISTRO_FEATURES_append = " opengl x11 wayland acl xattr pam selinux"

PREFERRED_PROVIDER_virtual/refpolicy = "refpolicy-mls"
PREFERRED_VERSION_refpolicy-mls = "2.20170204"


I ran QEMU using:


runqemu qemux86 core-image-selinux ext4 nographic



Trying to login I get:

qemux86 login: root
[   23.960609] kauditd_printk_skb: 13 callbacks suppressed
Cannot execute /bin/sh: Permission denied
[   23.973922] audit: type=1400 audit(1505347190.805:29): avc:  denied  {
execute } for  pid=671 comm="login" name="bash.bash" dev="vda" ino=8163
scontext=system_u:system_r:local_login_t:s0-s15:c0.c1023
tcontext=system_u:object_r:bin_t:s0 tclass=file permissive=0
[   23.975463] audit: type=1400 audit(1505347190.813:30): avc:  denied  {
execute } for  pid=671 comm="login" name="bash.bash" dev="vda" ino=8163
scontext=system_u:system_r:local_login_t:s0-s15:c0.c1023
tcontext=system_u:object_r:bin_t:s0 tclass=file permissive=0



> --Mark
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-selinux][PATCH 04/21] libsemanage: uprev to 2.7 (20170804)

2017-09-12 Thread Mark Hatle
On 9/12/17 9:06 PM, wenzong fan wrote:
> On 09/12/2017 06:59 PM, Chanho Park wrote:
>> Hi,
>>
>> I can't apply this patch on top of the master branch. Which revision did 
>> you make the patches?
> 
> Oops, that's my fault. I did a "sed -i -e 's/Subject: [/Subject: 
> [meta-selinux][/g' 00*" to add prefix for mail subjects, that also 
> changed the removed patch files in libsemanage.
> 
> I'll send v2.
> 
> Thanks
> Wenzong

I don't see the original set of patches in my archives.  When you rebase, please
rebase on top of mgh/master-next.

--Mark
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-selinux] krogoth support

2017-09-12 Thread Mark Hatle
On 9/12/17 1:02 PM, Reach, Jonathon A wrote:
> Hello,
> 
>  
> 
> I currently have a krogoth based project that I need to add the meta-selinux
> layer to.
> 

(I replied to this off-list, but figured it should go to the list as well...)

Currently we do not have a krogoth branch.  I never worked on a specific product
that required it.

I generally branch when I know the system works for a particular branch.

I would suggest starting with either the Jethro or Morty versions.  Most
likely you'll want Morty minus a few commits..
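
Roughly, and assuming the layer's usual hosting on git.yoctoproject.org, that
would mean something like:

git clone -b morty git://git.yoctoproject.org/meta-selinux

then adding the layer to your bblayers.conf and dropping or reverting the few
commits that don't apply to krogoth.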

> 
> Does anyone know what it would take to accomplish this? Most poky patches fail
> as well as compilation issues with kernel features that postdate krogoth.
> 

If anyone else has already gotten a working version, I'm willing to create a
krogoth branch to make it easier for people in the future.

--Mark

> 
> Appreciatively,
> 
>  
> 
> *Jonathon Reach*
> 
> Cyber Security Product Engineer
> Electronics & Safety
> 
>  
> 
> 
>  
> 
> 5825 Delphi Drive
> 
> Troy, MI 48098, USA
> 
>  
> 
> _jonathon.a.reach@delphi.com_
> 
> +1 586.291.0692
> 
>  
> 
> 
> Note: If the reader of this message is not the intended recipient, or an
> employee or agent responsible for delivering this message to the intended
> recipient, you are hereby notified that any dissemination, distribution or
> copying of this communication is strictly prohibited. If you have received 
> this
> communication in error, please notify us immediately by replying to the 
> message
> and deleting it from your computer. Thank you.
> 
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Suitable machine for yocto

2017-09-10 Thread Mark Hatle
On 9/10/17 2:31 PM, Alex Lennon wrote:
> 
> 
> On 10/09/2017 19:17, Mark Hatle wrote:
>> On 9/10/17 11:14 AM, Alex Lennon wrote:
>>>
>>> On 10/09/2017 17:06, Mark Hatle wrote:
>>>> On 9/10/17 2:00 AM, Usman Haider wrote:
>>>>> Hi,
>>>>>
>>>>> Can someone please recommend some good machine for yocto environment and
>>>>> building sdks. I am interested in RAM, hard disk space, processor.
>>>> You want fast I/O, as much RAM and as many (fast) cores as you can afford. 
>>>>  I
>>>> don't think there is a single answer as what is 'best'.  It also depends on
>>>> which Yocto Project versions, and which layers you are using as to which
>>>> combination is best.
>>>>
>>>> I run builds on my laptop, 4-core/8-thread & SSD and 16 GB of ram from a 
>>>> few
>>>> years ago.  It's fast, but I wouldn't want to do all of my development on 
>>>> it.
>>>>
>>>> I've had 8-core/16-thread (32GB ram/standard disk), 16-core/32-thread (72GB
>>>> ram/SAS-3 RAID), 24-core/48-thread (64GB ram/SATA - software RAID), 
>>>> 72-core/144
>>>> thread (256 GB ram/hardware raid/SAS-3), and recently upgraded to
>>>> 96-core/192-thread (256 GB ram/hardware raid/SAS-3).
>>>>
>>>> I would not go below quad-core (8-thread) myself.  You can get a quad 
>>>> core, good
>>>> quality machine for $1000 or less these days.  If you move up to the larger
>>>> machines, you may even be able to get a 24-core for less than $5000.  
>>>> By the
>>>> time you get to 96-core and all of the googles you are likely talking 
>>>> $5 or
>>>> more.
>>>>
>>>> By clock rate, the 24-core machine is the fastest..  While the 96-core 
>>>> monster
>>>> can do the builds the quickest.  But when you figure out 
>>>> cost/performance/etc..
>>>> the 24-core is probably the best performance per dollar, and with adequate 
>>>> RAM
>>>> (I'd say at least 64GB if not 128GB), and fast I/O you'll probably get the
>>>> lowest price for the best performance in that category.
>>>>
>>>> If you need sheer speed and price is no object, then the (4 CPU w/ 24 core 
>>>> each)
>>>> 96-core monster (or even better) is what you want to go with.  256GB ram 
>>>> would
>>>> be a minimum with that configuration (I'm not sure if more is actually 
>>>> helpful,
>>>> I rarely end up in swap -- but I go get into situations where more then 
>>>> 50% of
>>>> ram is used.)  With that many cores, disk I/O starts to become obvious.  So
>>>> faster the better... SSDs would be the fastest, but of course the most 
>>>> expensive.
>>>>
>>>> If your employer is paying for the machine, you may be able to get a 
>>>> better than
>>>> normal machine by explaining how much time a faster machine will save and 
>>>> how,
>>>> compared to your salary, a machine is inexpensive.  (If you are a 
>>>> contractor or
>>>> student, that changes of course.)  :)
>>>>
>>>> So my point is really, figure out how much money you have to spend.  My 
>>>> rule of
>>>> thumb is roughly:
>>>>
>>>> 1) Buy as many cores as you can.  Try to get a CPU that has Hyperthreading 
>>>> or
>>>> equivalent to double the effective core count.  Fastest processing speed 
>>>> helps
>>>> in repetitive cases vs full system builds.
>>>>
>>>> So if the choice is a 24 core @ 2.2GHz vs 22 core @ 2.5, I'd probably go 
>>>> with
>>>> the 22-core.  While if it was 24 core @ 2.2GHz vs 8 core @ 4.2 GHz, I'd go 
>>>> with
>>>> the 24 core.
>>>>
>>>> 2) Try to get at least 1 GB of ram per thread (2 GB per core..)  You can 
>>>> cut
>>>> back on the ram (if necessary) once you hit 72 threads or so.   (72 threads
>>>> right now seems to cover most of the parallelization in a full system 
>>>> build.
>>>> There are points in the system where it can parallelize MUCH more, but 
>>>> they are
>>>> fairly rare.)
>>>>
>>>> 3) You need fast disks.  Software RAID works fine, but you likely need to 
>>>> buy at
>>>> least a couple of disks to boost performance.  SSDs are fast, b

Re: [yocto] Suitable machine for yocto

2017-09-10 Thread Mark Hatle
On 9/10/17 11:14 AM, Alex Lennon wrote:
> 
> 
> On 10/09/2017 17:06, Mark Hatle wrote:
>> On 9/10/17 2:00 AM, Usman Haider wrote:
>>> Hi,
>>>
>>> Can someone please recommend some good machine for yocto environment and
>>> building sdks. I am interested in RAM, hard disk space, processor.
>> You want fast I/O, as much RAM and as many (fast) cores as you can afford.  I
>> don't think there is a single answer as what is 'best'.  It also depends on
>> which Yocto Project versions, and which layers you are using as to which
>> combination is best.
>>
>> I run builds on my laptop, 4-core/8-thread & SSD and 16 GB of ram from a few
>> years ago.  It's fast, but I wouldn't want to do all of my development on it.
>>
>> I've had 8-core/16-thread (32GB ram/standard disk), 16-core/32-thread (72GB
>> ram/SAS-3 RAID), 24-core/48-thread (64GB ram/SATA - software RAID), 
>> 72-core/144
>> thread (256 GB ram/hardware raid/SAS-3), and recently upgraded to
>> 96-core/192-thread (256 GB ram/hardware raid/SAS-3).
>>
>> I would not go below quad-core (8-thread) myself.  You can get a quad core, 
>> good
>> quality machine for $1000 or less these days.  If you move up to the larger
>> machines, you may even be able to get a 24-core for less than $5000.  By 
>> the
>> time you get to 96-core and all of the googles you are likely talking $5 
>> or
>> more.
>>
>> By clock rate, the 24-core machine is the fastest..  While the 96-core 
>> monster
>> can do the builds the quickest.  But when you figure out 
>> cost/performance/etc..
>> the 24-core is probably the best performance per dollar, and with adequate 
>> RAM
>> (I'd say at least 64GB if not 128GB), and fast I/O you'll probably get the
>> lowest price for the best performance in that category.
>>
>> If you need sheer speed and price is no object, then the (4 CPU w/ 24 core 
>> each)
>> 96-core monster (or even better) is what you want to go with.  256GB ram 
>> would
>> be a minimum with that configuration (I'm not sure if more is actually 
>> helpful,
>> I rarely end up in swap -- but I do get into situations where more than 50% 
>> of
>> ram is used.)  With that many cores, disk I/O starts to become obvious.  So
>> faster the better... SSDs would be the fastest, but of course the most 
>> expensive.
>>
>> If your employer is paying for the machine, you may be able to get a better 
>> than
>> normal machine by explaining how much time a faster machine will save and how,
>> compared to your salary, a machine is inexpensive.  (If you are a contractor 
>> or
>> student, that changes of course.)  :)
>>
>> So my point is really, figure out how much money you have to spend.  My rule 
>> of
>> thumb is roughly:
>>
>> 1) Buy as many cores as you can.  Try to get a CPU that has Hyperthreading or
>> equivalent to double the effective core count.  Fastest processing speed 
>> helps
>> in repetitive cases vs full system builds.
>>
>> So if the choice is a 24 core @ 2.2GHz vs 22 core @ 2.5, I'd probably go with
>> the 22-core.  While if it was 24 core @ 2.2GHz vs 8 core @ 4.2 GHz, I'd go 
>> with
>> the 24 core.
>>
>> 2) Try to get at least 1 GB of ram per thread (2 GB per core..)  You can cut
>> back on the ram (if necessary) once you hit 72 threads or so.   (72 threads
>> right now seems to cover most of the parallelization in a full system build.
>> There are points in the system where it can parallelize MUCH more, but they 
>> are
>> fairly rare.)
>>
>> 3) You need fast disks.  Software RAID works fine, but you likely need to 
>> buy at
>> least a couple of disks to boost performance.  SSDs are fast, but lots of 
>> builds
>> take space, so fast SATA or even better SAS drives are the best performance 
>> per
>> cost.
> 
> This brings to mind a related question I keep coming back to as to the 
> economics of a docker (or similar) image running a fast Yocto build in 
> the cloud.
> 
> i.e. set config params -> bring up server image on plaform A/B/C -> 
> perform build taking time X/Y/Z -> store output images -> bring down 
> server  == $ ?
> 
> I find myself asking what the optimal cost per-build would be using this 
> approach...

I helped someone do some very -preliminary- figures a few years ago.  The
processing was 'cheap', but between storage and network transfer costs.. it was
cheaper to buy a reasonable machine.. payback time was only a few months.

(cloud 'storage' is often very slow as well, because there are expectations of
migration and things like that.)

So as of a few years ago at least, the economics didn't favor the cloud -- 
yet.

--Mark

> Cheers,
> 
> Alex
> 

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Suitable machine for yocto

2017-09-10 Thread Mark Hatle
On 9/10/17 2:00 AM, Usman Haider wrote:
> Hi,
> 
> Can someone please recommend some good machine for yocto environment and
> building sdks. I am interested in RAM, hard disk space, processor.

You want fast I/O, as much RAM and as many (fast) cores as you can afford.  I
don't think there is a single answer as what is 'best'.  It also depends on
which Yocto Project versions, and which layers you are using as to which
combination is best.

I run builds on my laptop, 4-core/8-thread & SSD and 16 GB of ram from a few
years ago.  It's fast, but I wouldn't want to do all of my development on it.

I've had 8-core/16-thread (32GB ram/standard disk), 16-core/32-thread (72GB
ram/SAS-3 RAID), 24-core/48-thread (64GB ram/SATA - software RAID), 72-core/144
thread (256 GB ram/hardware raid/SAS-3), and recently upgraded to
96-core/192-thread (256 GB ram/hardware raid/SAS-3).

I would not go below quad-core (8-thread) myself.  You can get a quad core, good
quality machine for $1000 or less these days.  If you move up to the larger
machines, you may even be able to get a 24-core for less than $5000.  By the
time you get to 96-core and all of the googles you are likely talking $5 or
more.

By clock rate, the 24-core machine is the fastest..  While the 96-core monster
can do the builds the quickest.  But when you figure out cost/performance/etc..
the 24-core is probably the best performance per dollar, and with adequate RAM
(I'd say at least 64GB if not 128GB), and fast I/O you'll probably get the
lowest price for the best performance in that category.

If you need sheer speed and price is no object, then the (4 CPU w/ 24 core each)
96-core monster (or even better) is what you want to go with.  256GB ram would
be a minimum with that configuration (I'm not sure if more is actually helpful,
I rarely end up in swap -- but I do get into situations where more than 50% of
ram is used.)  With that many cores, disk I/O starts to become obvious.  So
faster the better... SSDs would be the fastest, but of course the most 
expensive.

If your employer is paying for the machine, you may be able to get a better than
normal machine by explaining how much time a faster machine will save and how,
compared to your salary, a machine is inexpensive.  (If you are a contractor or
student, that changes of course.)  :)

So my point is really, figure out how much money you have to spend.  My rule of
thumb is roughly:

1) Buy as many cores as you can.  Try to get a CPU that has Hyperthreading or
equivalent to double the effective core count.  Fastest processing speed helps
in repetitive cases vs full system builds.

So if the choice is a 24 core @ 2.2GHz vs 22 core @ 2.5, I'd probably go with
the 22-core.  While if it was 24 core @ 2.2GHz vs 8 core @ 4.2 GHz, I'd go with
the 24 core.

2) Try to get at least 1 GB of ram per thread (2 GB per core..)  You can cut
back on the ram (if necessary) once you hit 72 threads or so.   (72 threads
right now seems to cover most of the parallelization in a full system build.
There are points in the system where it can parallelize MUCH more, but they are
fairly rare.)

3) You need fast disks.  Software RAID works fine, but you likely need to buy at
least a couple of disks to boost performance.  SSDs are fast, but lots of builds
take space, so fast SATA or even better SAS drives are the best performance per
cost.
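
(As a rough local.conf sketch for the larger machines -- the numbers below are
only placeholders for your actual thread count -- build parallelism can be
capped with:

BB_NUMBER_THREADS = "48"
PARALLEL_MAKE = "-j 48"

Both typically default to the detected core count, so this is mainly useful if
you want to limit a very large machine.)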
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto

