[yocto] [ANNOUNCEMENT] Milestone 3 for Yocto Project 2.4 (yocto-2.4_M3) now available

2017-09-01 Thread Tracy Graydon
The third milestone release for Yocto Project 2.4 (yocto-2.4_M3) is available 
for download now.

Download:

http://downloads.yoctoproject.org/releases/yocto/milestones/yocto-2.4_M3/

eclipse-poky-mars 92aa0e79e8b01c56f0670af3cd8296ec68b43350
eclipse-poky-neon 83e0083ef3a71e10039ace7d18057dddc154408b
meta-qt3 f33b73a9563f2dfdfd0ee37b61d65d90197a456f
meta-qt4 d52c38ad9f0a617b9ad5048a872a9e97b3af5b44
poky 5f6945f5031e1a4ca116cc1eccf4c2f9dc228547

Test report:

https://wiki.yoctoproject.org/wiki/WW35_-_2017-08-30_-_Full_Test_Cycle_2.4_M3

Thank you.

Tracy Graydon
Yocto Project
Build and Release
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [psplash][PATCH] Fix text width calculation.

2017-09-01 Thread Burton, Ross
Merged to master, thanks.

Ross

On 11 August 2017 at 22:18, Kevin Corry  wrote:

> Psplash: Fix text width calculation.
>
> Using the "psplash-write MSG " command, you can display messages when the
> splash screen is running. As part of this, psplash needs to calculate the
> height and width of the box to render on the screen to hold the text (which
> will appear on top of the background image), which it does in the
> psplash_fb_text_size() function in psplash-fb.c.
>
> If the message contains multiple lines (i.e. it contains one or more newline
> characters), then it looks like the intention is for psplash to use the
> length of the longest single line as the width of this text box. However,
> there's a bug in this calculation that leads to some multi-line messages
> being rendered off the left edge of the screen, instead of properly centered.
>
> To fix this, each time a newline is encountered, if the width of the current
> line (w) is greater than the maximum line width (mw), update the maximum
> width. Also, reset the current line width to zero so we can correctly
> calculate the width of the next line.
>
> Signed-off-by: Kevin Corry 
> ---
>  psplash-fb.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/psplash-fb.c b/psplash-fb.c
> index d344e5a..c064d18 100644
> --- a/psplash-fb.c
> +++ b/psplash-fb.c
> @@ -483,7 +483,8 @@ psplash_fb_text_size (int*width,
>if (*c == '\n')
> {
>   if (w > mw)
> -   mw = 0;
> +   mw = w;
> + w = 0;
>   h += font->height;
>   continue;
> }
> --
> 2.11.0
>
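The corrected loop logic can be illustrated with a small standalone sketch (Python here for brevity; psplash itself is C, and the font metrics below are made up for the example):

```python
# Illustrative re-implementation: compute the bounding box of a
# multi-line message the way the fixed psplash_fb_text_size() does --
# remember the widest line seen so far and reset the running width at
# each newline.
FONT_WIDTH, FONT_HEIGHT = 8, 16  # assumed fixed-width font, in pixels

def text_size(msg):
    w = mw = 0           # current and maximum line width
    h = FONT_HEIGHT      # at least one line of text
    for c in msg:
        if c == "\n":
            if w > mw:
                mw = w   # the fix: keep the widest line (was 'mw = 0')
            w = 0        # reset for the next line (the other half of the fix)
            h += FONT_HEIGHT
        else:
            w += FONT_WIDTH
    return max(mw, w), h

print(text_size("short\nmuch longer line\nmid"))  # -> (128, 48)
```

With the original bug (`mw = 0` and no reset of `w`), the box width would track the total character count instead of the longest line, which is why long multi-line messages ran off the screen edge.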


Re: [yocto] Pyro's uninative and libstdc++ symbols

2017-09-01 Thread Richard Purdie
On Fri, 2017-09-01 at 12:14 -0700, akuster wrote:
> 
> On 08/29/2017 01:03 AM, Richard Purdie wrote:
> > 
> > > On Fri, 2017-08-25 at 14:50 +0200, Raphael Kubo da Costa wrote:
> > > 
> > > I've recently updated my host system to Fedora 26, which has GCC 7.
> > > 
> > > This seems to be causing some issues on Pyro, where I have a -native
> > > recipe that is built with my system's g++ and ends up generating a
> > > binary with the following symbol:
> > > 
> > >    DF *UND*  GLIBCXX_3.4.23  std::basic_string<char, std::char_traits<char>,
> > > std::allocator<char> >::basic_string(std::string const&, unsigned long,
> > > std::allocator<char> const&)
> > > 
> > > GLIBCXX_3.4.23 is not part of Pyro's uninative's libstdc++, so when that
> > > binary is invoked in another (non-native) recipe as part of do_configure
> > > it fails to run:
> > > 
> > >  gn: /data/src/yocto/poky/build/tmp/sysroots-uninative/x86_64-
> > > linux/usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.23' not found
> > > (required by gn)
> > > 
> > > Is there anything I should be doing differently here?
> > We need to update the uninative version in pyro to the more recent one.
> Is this action just a straightforward backport from master?

Yes, should be...

Cheers,

Richard
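The failure Raphael hit can be reasoned about mechanically. A toy sketch (the version strings come from this thread; the tuple comparison is an assumption about how GLIBCXX_x.y.z versions are ordered, not code from uninative or glibc):

```python
# Toy model: a binary fails to load when it requires a GLIBCXX symbol
# version newer than anything the runtime libstdc++ provides.
def ver(sym):
    # "GLIBCXX_3.4.23" -> (3, 4, 23)
    return tuple(int(p) for p in sym.split("_")[1].split("."))

provided_max = ver("GLIBCXX_3.4.22")  # newest version in pyro's uninative libstdc++ (assumed)
required = ["GLIBCXX_3.4.23"]         # symbol emitted by the GCC 7 host build
missing = [s for s in required if ver(s) > provided_max]
print(missing)  # -> ['GLIBCXX_3.4.23'] : the loader's "version not found" error
```

Updating uninative, as Richard suggests, raises `provided_max` so the set of missing versions becomes empty.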



[yocto] modify sources.list file

2017-09-01 Thread yahia farghaly
Hi,
I am enabling the deb package manager for the target image. For that I set
these variables in conf/local.conf:

> PACKAGE_FEED_URIS = "http:///my-repo/yahia-repo/expiremental"
>
> PACKAGE_FEED_BASE_PATHS = "deb"
>
> PACKAGE_FEED_ARCHS = "all"
and as expected the result in sources.list was:

> deb http:///my-repo/yahia-repo/expiremental/deb/all ./

Now I want sources.list to be like this:

> deb http:///my-repo/yahia-repo/expiremental/deb/all yahia main

Is there any way in Yocto to achieve this?
I tried overwriting the file with an existing one on the machine using the
install command in the do_install task, but it doesn't work.
I also tried this in the image recipe:

> ROOTFS_POSTPROCESS_COMMAND += "modifysourcelist;"
>
> modifysourcelist(){
>     install -d ${D}/etc/apt
>     install -m 0755 ${THISDIR}/../addfiles/keyfile/sources.list ${D}/etc/apt
> }

but it had no effect.
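The kind of rewrite being asked for can be sketched as follows (Python used only to illustrate the text transformation; in an image recipe this would be a shell function run via ROOTFS_POSTPROCESS_COMMAND, and per the Yocto docs such functions should operate on ${IMAGE_ROOTFS}/etc/apt/sources.list rather than ${D}, which is a do_install-time variable -- that may be why the attempt above had no effect):

```python
# Sketch: replace the generated "./" suite with "yahia main" in a
# sources.list line. The URL is the one from this thread; the regex and
# suite/component names are illustrative.
import re

line = "deb http:///my-repo/yahia-repo/expiremental/deb/all ./ "
fixed = re.sub(r"^(deb\s+\S+)\s+\./\s*$", r"\1 yahia main", line)
print(fixed)  # -> deb http:///my-repo/yahia-repo/expiremental/deb/all yahia main
```

The equivalent in a rootfs post-process function would be a single `sed` over the file in ${IMAGE_ROOTFS}.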
-- 
Yahia Farghaly
Graduated from Faculty of Engineering - Electronics and Communications
Department at Cairo University.


[yocto] do_package: pseudo_append_element: path too long (wanted 4098 bytes)

2017-09-01 Thread Georgios Gkitsas

Hello,

I have written a recipe that copies files to rootfs and I am getting the 
following error when running bitbake:



   ERROR: pm2-bunyan-0.1-r1.4 do_package: file copy failed with exit
   code 2 (cmd was tar -cf - -C
   
[local-path]/tmp/work/cortexa7hf-neon-dey-linux-gnueabi/pm2-bunyan/0.1-r1.4/image
   -p . | tar -xf - -C
   
[local-path]/tmp/work/cortexa7hf-neon-dey-linux-gnueabi/pm2-bunyan/0.1-r1.4/package):
   pseudo_append_element: path too long (wanted 4098 bytes).
   couldn't allocate absolute path for
   './image/image/image/[ ... "image/" repeated several hundred more times ... ]/configure.sstate'.
   tar:
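One common cause of a recursively nested 'image/image/...' path like this (an assumption; the thread does not show the recipe) is a task that copies its own output tree back into itself, adding one more 'image/' level per run. A toy simulation of how the path outgrows pseudo's limit:

```python
# Toy simulation: each self-copy nests one more "image/" component;
# after a few hundred levels the absolute path exceeds the ~4096-byte
# limit reported by pseudo_append_element.
def nest(path, times):
    for _ in range(times):
        path += "/image"
    return path

p = nest("./image", 700)
print(len(p), len(p) > 4096)  # -> 4207 True
```

Checking the recipe's do_install for a `cp -r` whose destination is inside its source would be a reasonable first step.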
   

Re: [yocto] cannot re-use shared state cache between build hosts

2017-09-01 Thread Maciej Borzęcki
On Fri, Sep 1, 2017 at 5:04 PM, Andrea Galbusera  wrote:
> Hi Maciej,
>
> On Fri, Sep 1, 2017 at 4:08 PM, Maciej Borzęcki 
> wrote:
>>
>> On Fri, Sep 1, 2017 at 3:54 PM, Andrea Galbusera  wrote:
>> > Hi!
>> >
>> > I was trying to share sstate between different hosts, but the consumer
>> > build
>> > system seems to be unable to re-use any sstate object. My scenario
>> > is
>> > setup as follows:
>> >
>> > * The cache was populated by a pristine qemux86 core-image-minimal build
>> > of
>> > morty. This was done in a crops/poky container (running in docker on
>> > Mac)
>> > * The cache was then served via HTTP
>>
>> Make sure that you use a decent HTTP server. Simple `python3 -m
>> http.server` will quickly choke when the mirror is being checked. Also
>> running bitbake -DDD -v makes investigating this much easier.
>
>
> To be honest, the current server was indeed setup with python's
> SimpleHTTPServer... As you suggest, I checked the verbose debug log and
> noticed what's happening behind the apparently happy "Checking sstate mirror
> object availability" step. After a first "SState: Successful fetch test for"
> that I see correctly served with 200 on the server side, tests for any other
> sstate object suddenly and systematically fail with logs like this:
>
> DEBUG: SState: Attempting to fetch
> file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
> DEBUG: Searching for
> 7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
> in paths:
> /home/vagrant/koan/morty/build/sstate-cache
> DEBUG: Defaulting to
> /home/vagrant/koan/morty/build/sstate-cache/7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
> for 7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23b
> db86ac7ab32_package_qa.tgz
> DEBUG: Testing URL
> file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
> DEBUG: For url ['file', '',
> '7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz',
> '', '', OrderedDict()] comparing ['file', '', '.*', '', '', OrderedDict()]
> to ['http', '192.168.33.1:8000', '
> /sstate-cache/PATH', '', '', OrderedDict([('downloadfilename', 'PATH')])]
> DEBUG: For url
> file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
> returning
> http://192.168.33.1:8000/sstate-cache/7d/sstate%3Alibxml2%3Ai586-poky-linux%3A2.9.4%3Ar0%3Ai586%3A3%3A7da8fc
> 3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz;downloadfilename=7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
> DEBUG: checkstatus: trying again
> DEBUG: checkstatus() urlopen failed:  descriptor>
> DEBUG: SState: Unsuccessful fetch test for
> file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
>
> Nothing is reported server-side for any of these failures... As you
> recommend, I'll try to setup something more "decent" for the HTTP server and
> see if it helps.

Yeah, I think this has to do with HTTP keepalive. Anyways, caddy
worked just fine (https://caddyserver.com).
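For anyone wanting to stay with the Python standard library rather than switch servers, a threaded HTTP/1.1 file server (which handles persistent connections) can be sketched like this (port and directory names are illustrative; the keep-alive diagnosis is Maciej's, not confirmed upstream):

```python
# Sketch: a threaded HTTP/1.1 static file server for an sstate-cache
# directory. HTTP/1.1 enables persistent (keep-alive) connections,
# which bitbake's rapid-fire mirror checks appear to rely on.
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

class KeepAliveHandler(SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # default is HTTP/1.0, which closes per request

def serve(directory=".", port=8000):
    handler = partial(KeepAliveHandler, directory=directory)
    return ThreadingHTTPServer(("", port), handler)

# To run: serve(directory="sstate-cache", port=8000).serve_forever()
```

Requires Python 3.7+ (for ThreadingHTTPServer and the handler's `directory` parameter).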

>
>
>>
>> > * The second host is a VM running Ubuntu 16.04 where I set
>> > SSTATE_MIRRORS to
>> > point to the hosted sstate cache like this:
>> >
>> > SSTATE_MIRRORS ?= "\
>> > file://.* http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=PATH"
>> >
>> > * I checked with curl that the VM can successfully get sstate objects
>> > from
>> > the server.
>> > * Then I start a new build (same metadata revisions, default
>> > configuration
>> > for core-image-minimal) and each and every task run from scratch with no
>> > sstate cache re-use.
>> >
>> > Here are the two configurations from bitbake and /etc/lsb-release files:
>> >
>> > On the container used to seed sstate cache:
>> >
>> > Build Configuration:
>> > BB_VERSION= "1.32.0"
>> > BUILD_SYS = "x86_64-linux"
>> > NATIVELSBSTRING   = "universal"
>> > TARGET_SYS= "i586-poky-linux"
>> > MACHINE   = "qemux86"
>> > DISTRO= "poky"
>> > DISTRO_VERSION= "2.2.2"
>> > TUNE_FEATURES = "m32 i586"
>> > TARGET_FPU= ""
>> > meta
>> > meta-poky
>> > meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
>> >
>> > $ cat /etc/lsb-release
>> > DISTRIB_ID=Ubuntu
>> > DISTRIB_RELEASE=16.04
>> > DISTRIB_CODENAME=xenial
>> > DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
>> >
>> > On the VM that should consume the cache:
>> >
>> > Build Configuration:
>> > BB_VERSION= "1.32.0"
>> > BUILD_SYS = "x86_64-linux"
>> > NATIVELSBSTRING   = "Ubuntu-16.04"
>> > TARGET_SYS= "i586-poky-linux"
>> > MACHINE   = "qemux86"
>> > DISTRO= "poky"
>> > DISTRO_VERSION= "2.2.2"
>> > TUNE_FEATURES = "m32 

Re: [yocto] cannot re-use shared state cache between build hosts

2017-09-01 Thread Andrea Galbusera
Hi Maciej,

On Fri, Sep 1, 2017 at 4:08 PM, Maciej Borzęcki 
wrote:

> On Fri, Sep 1, 2017 at 3:54 PM, Andrea Galbusera  wrote:
> > Hi!
> >
> > I was trying to share sstate between different hosts, but the consumer
> build
> > system seems to be unable to re-use any sstate object. My scenario is
> > setup as follows:
> >
> > * The cache was populated by a pristine qemux86 core-image-minimal build
> of
> > morty. This was done in a crops/poky container (running in docker on Mac)
> > * The cache was then served via HTTP
>
> Make sure that you use a decent HTTP server. Simple `python3 -m
> http.server` will quickly choke when the mirror is being checked. Also
> running bitbake -DDD -v makes investigating this much easier.
>

To be honest, the current server was indeed setup with python's
SimpleHTTPServer... As you suggest, I checked the verbose debug log and
noticed what's happening behind the apparently happy "Checking sstate
mirror object availability" step. After a first "SState: Successful fetch
test for" that I see correctly served with 200 on the server side, tests
for any other sstate object suddenly and systematically fail with logs like
this:

DEBUG: SState: Attempting to fetch file://7d/sstate:libxml2:i586-
poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
DEBUG: Searching for 7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:
7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz in paths:
/home/vagrant/koan/morty/build/sstate-cache
DEBUG: Defaulting to /home/vagrant/koan/morty/build/sstate-cache/7d/sstate:
libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
for 7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23b
db86ac7ab32_package_qa.tgz
DEBUG: Testing URL file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:
7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
DEBUG: For url ['file', '', '7d/sstate:libxml2:i586-poky-
linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz', '',
'', OrderedDict()] comparing ['file', '', '.*', '', '', OrderedDict()] to
['http', '192.168.33.1:8000', '
/sstate-cache/PATH', '', '', OrderedDict([('downloadfilename', 'PATH')])]
DEBUG: For url file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:
7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz returning
http://192.168.33.1:8000/sstate-cache/7d/sstate%3Alibxml2%3Ai586-poky-linux%
3A2.9.4%3Ar0%3Ai586%3A3%3A7da8fc
3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz;downloadfilename=7d/sstate:
libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab
32_package_qa.tgz
DEBUG: checkstatus: trying again
DEBUG: checkstatus() urlopen failed: 
DEBUG: SState: Unsuccessful fetch test for file://7d/sstate:libxml2:i586-
poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz

Nothing is reported server-side for any of these failures... As you
recommend, I'll try to setup something more "decent" for the HTTP server
and see if it helps.



> > * The second host is a VM running Ubuntu 16.04 where I set
> SSTATE_MIRRORS to
> > point to the hosted sstate cache like this:
> >
> > SSTATE_MIRRORS ?= "\
> > file://.* http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=
> PATH"
> >
> > * I checked with curl that the VM can successfully get sstate objects
> from
> > the server.
> > * Then I start a new build (same metadata revisions, default
> configuration
> > for core-image-minimal) and each and every task run from scratch with no
> > sstate cache re-use.
> >
> > Here are the two configurations from bitbake and /etc/lsb-release files:
> >
> > On the container used to seed sstate cache:
> >
> > Build Configuration:
> > BB_VERSION= "1.32.0"
> > BUILD_SYS = "x86_64-linux"
> > NATIVELSBSTRING   = "universal"
> > TARGET_SYS= "i586-poky-linux"
> > MACHINE   = "qemux86"
> > DISTRO= "poky"
> > DISTRO_VERSION= "2.2.2"
> > TUNE_FEATURES = "m32 i586"
> > TARGET_FPU= ""
> > meta
> > meta-poky
> > meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
> >
> > $ cat /etc/lsb-release
> > DISTRIB_ID=Ubuntu
> > DISTRIB_RELEASE=16.04
> > DISTRIB_CODENAME=xenial
> > DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
> >
> > On the VM that should consume the cache:
> >
> > Build Configuration:
> > BB_VERSION= "1.32.0"
> > BUILD_SYS = "x86_64-linux"
> > NATIVELSBSTRING   = "Ubuntu-16.04"
> > TARGET_SYS= "i586-poky-linux"
> > MACHINE   = "qemux86"
> > DISTRO= "poky"
> > DISTRO_VERSION= "2.2.2"
> > TUNE_FEATURES = "m32 i586"
> > TARGET_FPU= ""
> > meta
> > meta-poky
> > meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
> >
> > $ cat /etc/lsb-release
> > DISTRIB_ID=Ubuntu
> > DISTRIB_RELEASE=16.04
> > DISTRIB_CODENAME=xenial
> > DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
> >
> >
> > To me, the only differing bit that in my understanding can lead to sstate

Re: [yocto] cannot re-use shared state cache between build hosts

2017-09-01 Thread Maciej Borzęcki
On Fri, Sep 1, 2017 at 3:54 PM, Andrea Galbusera  wrote:
> Hi!
>
> I was trying to share sstate between different hosts, but the consumer build
> system seems to be unable to re-use any sstate object. My scenario is
> setup as follows:
>
> * The cache was populated by a pristine qemux86 core-image-minimal build of
> morty. This was done in a crops/poky container (running in docker on Mac)
> * The cache was then served via HTTP

Make sure that you use a decent HTTP server. Simple `python3 -m
http.server` will quickly choke when the mirror is being checked. Also
running bitbake -DDD -v makes investigating this much easier.

> * The second host is a VM running Ubuntu 16.04 where I set SSTATE_MIRRORS to
> point to the hosted sstate cache like this:
>
> SSTATE_MIRRORS ?= "\
> file://.* http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=PATH"
>
> * I checked with curl that the VM can successfully get sstate objects from
> the server.
> * Then I start a new build (same metadata revisions, default configuration
> for core-image-minimal) and each and every task run from scratch with no
> sstate cache re-use.
>
> Here are the two configurations from bitbake and /etc/lsb-release files:
>
> On the container used to seed sstate cache:
>
> Build Configuration:
> BB_VERSION= "1.32.0"
> BUILD_SYS = "x86_64-linux"
> NATIVELSBSTRING   = "universal"
> TARGET_SYS= "i586-poky-linux"
> MACHINE   = "qemux86"
> DISTRO= "poky"
> DISTRO_VERSION= "2.2.2"
> TUNE_FEATURES = "m32 i586"
> TARGET_FPU= ""
> meta
> meta-poky
> meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
>
> $ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=16.04
> DISTRIB_CODENAME=xenial
> DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
>
> On the VM that should consume the cache:
>
> Build Configuration:
> BB_VERSION= "1.32.0"
> BUILD_SYS = "x86_64-linux"
> NATIVELSBSTRING   = "Ubuntu-16.04"
> TARGET_SYS= "i586-poky-linux"
> MACHINE   = "qemux86"
> DISTRO= "poky"
> DISTRO_VERSION= "2.2.2"
> TUNE_FEATURES = "m32 i586"
> TARGET_FPU= ""
> meta
> meta-poky
> meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
>
> $ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=16.04
> DISTRIB_CODENAME=xenial
> DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
>
>
> To me, the only differing bit that in my understanding can lead to sstate
> cache objects invalidation is the value of NATIVELSBSTRING which is
> "universal" inside the container and "Ubuntu-16.04". This sounds strange to
> me, since both underlying systems are Ubuntu 16.04 (although not exactly the
> same dot release) as confirmed by /etc/lsb-release contents.
>
> Is the different NATIVELSBSTRING the root cause for everything being
> re-built? If so, what's causing them being different in the end and what
> does "universal" exactly mean (to me it looks like a more generic and
> inclusive term than any distro label, so I'm confused...)?
>
>



-- 
Maciej Borzecki
RnDity


Re: [yocto] cannot re-use shared state cache between build hosts

2017-09-01 Thread Andrea Galbusera
On Fri, Sep 1, 2017 at 3:57 PM, Martin Jansa  wrote:

> Why do you use:
>
> ";downloadfilename=PATH
> "
>
> ?
>

Most likely because I copy/pasted from local.conf comments. I see the same
syntax on both "morty" [1] and "current" documentation. Shouldn't this be
part of the "keep the same two-character subdirectories layout of the
mirror" thing?

[1]
http://www.yoctoproject.org/docs/2.2/mega-manual/mega-manual.html#var-SSTATE_MIRRORS
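As a rough illustration of what the docs' PATH placeholder does (a simplification inferred from the debug log earlier in this thread, not bitbake's actual mirror code): PATH is substituted with the object's mirror-relative path, so the two-character subdirectory layout survives both in the URL and in the `downloadfilename` parameter.

```python
# Simplified illustration of PATH expansion in an SSTATE_MIRRORS entry
# for one sstate object (object name shortened for readability).
mirror = "http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=PATH"
obj = "7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8_package_qa.tgz"
url = mirror.replace("PATH", obj)
print(url)
```

So yes, the `downloadfilename=PATH` part is what keeps the fetched file under its `7d/` subdirectory locally.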


> On Fri, Sep 1, 2017 at 3:54 PM, Andrea Galbusera  wrote:
>
>> Hi!
>>
>> I was trying to share sstate between different hosts, but the consumer
>> build system seems to be unable to re-use any sstate object. My
>> scenario is setup as follows:
>>
>> * The cache was populated by a pristine qemux86 core-image-minimal build
>> of morty. This was done in a crops/poky container (running in docker on Mac)
>> * The cache was then served via HTTP
>> * The second host is a VM running Ubuntu 16.04 where I set SSTATE_MIRRORS
>> to point to the hosted sstate cache like this:
>>
>> SSTATE_MIRRORS ?= "\
>> file://.* http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=
>> PATH"
>>
>> * I checked with curl that the VM can successfully get sstate objects
>> from the server.
>> * Then I start a new build (same metadata revisions, default
>> configuration for core-image-minimal) and each and every task run from
>> scratch with no sstate cache re-use.
>>
>> Here are the two configurations from bitbake and /etc/lsb-release files:
>>
>> On the container used to seed sstate cache:
>>
>> Build Configuration:
>> BB_VERSION= "1.32.0"
>> BUILD_SYS = "x86_64-linux"
>> NATIVELSBSTRING   = "universal"
>> TARGET_SYS= "i586-poky-linux"
>> MACHINE   = "qemux86"
>> DISTRO= "poky"
>> DISTRO_VERSION= "2.2.2"
>> TUNE_FEATURES = "m32 i586"
>> TARGET_FPU= ""
>> meta
>> meta-poky
>> meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
>>
>> $ cat /etc/lsb-release
>> DISTRIB_ID=Ubuntu
>> DISTRIB_RELEASE=16.04
>> DISTRIB_CODENAME=xenial
>> DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
>>
>> On the VM that should consume the cache:
>>
>> Build Configuration:
>> BB_VERSION= "1.32.0"
>> BUILD_SYS = "x86_64-linux"
>> NATIVELSBSTRING   = "Ubuntu-16.04"
>> TARGET_SYS= "i586-poky-linux"
>> MACHINE   = "qemux86"
>> DISTRO= "poky"
>> DISTRO_VERSION= "2.2.2"
>> TUNE_FEATURES = "m32 i586"
>> TARGET_FPU= ""
>> meta
>> meta-poky
>> meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
>>
>> $ cat /etc/lsb-release
>> DISTRIB_ID=Ubuntu
>> DISTRIB_RELEASE=16.04
>> DISTRIB_CODENAME=xenial
>> DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
>>
>>
>> To me, the only differing bit that in my understanding can lead to sstate
>> cache objects invalidation is the value of NATIVELSBSTRING which is
>> "universal" inside the container and "Ubuntu-16.04". This sounds strange to
>> me, since both underlying systems are Ubuntu 16.04 (although not exactly
>> the same dot release) as confirmed by /etc/lsb-release contents.
>>
>> Is the different NATIVELSBSTRING the root cause for everything being
>> re-built? If so, what's causing them being different in the end and what
>> does "universal" exactly mean (to me it looks like a more generic and
>> inclusive term than any distro label, so I'm confused...)?
>>
>>
>


Re: [yocto] cannot re-use shared state cache between build hosts

2017-09-01 Thread Martin Jansa
Why do you use:

";downloadfilename=PATH
"

?

On Fri, Sep 1, 2017 at 3:54 PM, Andrea Galbusera  wrote:

> Hi!
>
> I was trying to share sstate between different hosts, but the consumer
> build system seems to be unable to re-use any sstate object. My
> scenario is setup as follows:
>
> * The cache was populated by a pristine qemux86 core-image-minimal build
> of morty. This was done in a crops/poky container (running in docker on Mac)
> * The cache was then served via HTTP
> * The second host is a VM running Ubuntu 16.04 where I set SSTATE_MIRRORS
> to point to the hosted sstate cache like this:
>
> SSTATE_MIRRORS ?= "\
> file://.* http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=PATH
> "
>
> * I checked with curl that the VM can successfully get sstate objects from
> the server.
> * Then I start a new build (same metadata revisions, default configuration
> for core-image-minimal) and each and every task run from scratch with no
> sstate cache re-use.
>
> Here are the two configurations from bitbake and /etc/lsb-release files:
>
> On the container used to seed sstate cache:
>
> Build Configuration:
> BB_VERSION= "1.32.0"
> BUILD_SYS = "x86_64-linux"
> NATIVELSBSTRING   = "universal"
> TARGET_SYS= "i586-poky-linux"
> MACHINE   = "qemux86"
> DISTRO= "poky"
> DISTRO_VERSION= "2.2.2"
> TUNE_FEATURES = "m32 i586"
> TARGET_FPU= ""
> meta
> meta-poky
> meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
>
> $ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=16.04
> DISTRIB_CODENAME=xenial
> DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
>
> On the VM that should consume the cache:
>
> Build Configuration:
> BB_VERSION= "1.32.0"
> BUILD_SYS = "x86_64-linux"
> NATIVELSBSTRING   = "Ubuntu-16.04"
> TARGET_SYS= "i586-poky-linux"
> MACHINE   = "qemux86"
> DISTRO= "poky"
> DISTRO_VERSION= "2.2.2"
> TUNE_FEATURES = "m32 i586"
> TARGET_FPU= ""
> meta
> meta-poky
> meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
>
> $ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=16.04
> DISTRIB_CODENAME=xenial
> DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
>
>
> To me, the only differing bit that in my understanding can lead to sstate
> cache objects invalidation is the value of NATIVELSBSTRING which is
> "universal" inside the container and "Ubuntu-16.04". This sounds strange to
> me, since both underlying systems are Ubuntu 16.04 (although not exactly
> the same dot release) as confirmed by /etc/lsb-release contents.
>
> Is the different NATIVELSBSTRING the root cause for everything being
> re-built? If so, what's causing them being different in the end and what
> does "universal" exactly mean (to me it looks like a more generic and
> inclusive term than any distro label, so I'm confused...)?
>
>


[yocto] cannot re-use shared state cache between build hosts

2017-09-01 Thread Andrea Galbusera
Hi!

I was trying to share sstate between different hosts, but the consumer
build system seems to be unable to re-use any sstate object. My scenario
is set up as follows:

* The cache was populated by a pristine qemux86 core-image-minimal build of
morty. This was done in a crops/poky container (running in docker on Mac)
* The cache was then served via HTTP
* The second host is a VM running Ubuntu 16.04 where I set SSTATE_MIRRORS
to point to the hosted sstate cache like this:

SSTATE_MIRRORS ?= "\
file://.* http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=PATH"

* I checked with curl that the VM can successfully get sstate objects from
the server.
* Then I start a new build (same metadata revisions, default configuration
for core-image-minimal) and each and every task run from scratch with no
sstate cache re-use.

Here are the two configurations from bitbake and /etc/lsb-release files:

On the container used to seed sstate cache:

Build Configuration:
BB_VERSION= "1.32.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING   = "universal"
TARGET_SYS= "i586-poky-linux"
MACHINE   = "qemux86"
DISTRO= "poky"
DISTRO_VERSION= "2.2.2"
TUNE_FEATURES = "m32 i586"
TARGET_FPU= ""
meta
meta-poky
meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"

On the VM that should consume the cache:

Build Configuration:
BB_VERSION= "1.32.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING   = "Ubuntu-16.04"
TARGET_SYS= "i586-poky-linux"
MACHINE   = "qemux86"
DISTRO= "poky"
DISTRO_VERSION= "2.2.2"
TUNE_FEATURES = "m32 i586"
TARGET_FPU= ""
meta
meta-poky
meta-yocto-bsp= "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"


To me, the only differing bit that in my understanding can lead to sstate
cache objects invalidation is the value of NATIVELSBSTRING which is
"universal" inside the container and "Ubuntu-16.04". This sounds strange to
me, since both underlying systems are Ubuntu 16.04 (although not exactly
the same dot release) as confirmed by /etc/lsb-release contents.

Is the different NATIVELSBSTRING the root cause for everything being
rebuilt? If so, what causes the values to differ, and what does
"universal" actually mean? (To me it looks like a more generic and
inclusive term than any distro label, so I'm confused...)
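[Editorial note: for reference, the identifier that ends up in NATIVELSBSTRING is derived from the host's lsb-release data roughly as sketched below. This is a simplified Python sketch in the spirit of meta/lib/oe/lsb.py, not the exact code; also note that when the uninative mechanism is enabled, NATIVELSBSTRING is forced to "universal" regardless of the host distro, which would explain the difference seen above.]

```python
# Simplified sketch (not the real oe.lsb code) of how a host identifier
# like "Ubuntu-16.04" is built from /etc/lsb-release contents.
def distro_identifier(lsb_release_text):
    fields = {}
    for line in lsb_release_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip().strip('"')
    distro = fields.get("DISTRIB_ID", "Unknown")
    release = fields.get("DISTRIB_RELEASE", "")
    # The dot release (16.04.2 vs 16.04.3) lives in DISTRIB_DESCRIPTION,
    # not DISTRIB_RELEASE, so both hosts above yield the same identifier.
    return "{0}-{1}".format(distro, release) if release else distro

sample = 'DISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=16.04\nDISTRIB_CODENAME=xenial\n'
print(distro_identifier(sample))  # Ubuntu-16.04
```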
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [meta-security][v2][PATCH] fail2Ban: Add new package

2017-09-01 Thread Armin Kuster
Fail2Ban scans log files like /var/log/auth.log and bans IP addresses having 
too many failed login attempts. It does this by updating system firewall rules 
to reject new connections from those IP addresses, for a configurable amount of 
time. Fail2Ban comes out-of-the-box ready to read many standard log files, such 
as those for sshd and Apache, and is easy to configure to read any log file you 
choose, for any error you choose.

Though Fail2Ban is able to reduce the rate of incorrect authentication 
attempts, it cannot eliminate the risk that weak authentication presents. 
Configure services to use only two-factor or public/private key authentication 
mechanisms if you really want to protect services.
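[Editorial note: as a generic illustration of what the package does once installed — this is standard fail2ban configuration, not something shipped by this recipe — a minimal jail enabling the sshd filter might look like:]

```ini
# /etc/fail2ban/jail.local -- generic example, not part of this recipe
[sshd]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600
```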

Signed-off-by: Armin Kuster 
---
 recipes-security/fail2ban/fail2ban_0.10.0.bb  |  41 +
 recipes-security/fail2ban/files/fail2ban_setup.py | 175 ++
 recipes-security/fail2ban/files/initd |  98 
 3 files changed, 314 insertions(+)
 create mode 100644 recipes-security/fail2ban/fail2ban_0.10.0.bb
 create mode 100755 recipes-security/fail2ban/files/fail2ban_setup.py
 create mode 100644 recipes-security/fail2ban/files/initd

diff --git a/recipes-security/fail2ban/fail2ban_0.10.0.bb b/recipes-security/fail2ban/fail2ban_0.10.0.bb
new file mode 100644
index 000..465316c
--- /dev/null
+++ b/recipes-security/fail2ban/fail2ban_0.10.0.bb
@@ -0,0 +1,41 @@
+SUMMARY = "Daemon to ban hosts that cause multiple authentication errors."
+DESCRIPTION = "Fail2Ban scans log files like /var/log/auth.log and bans IP addresses having too \
+many failed login attempts. It does this by updating system firewall rules to reject new \
+connections from those IP addresses, for a configurable amount of time. Fail2Ban comes \
+out-of-the-box ready to read many standard log files, such as those for sshd and Apache, \
+and is easy to configure to read any log file you choose, for any error you choose."
+HOMEPAGE = "http://www.fail2ban.org"
+
+LICENSE = "GPL-2.0"
+LIC_FILES_CHKSUM = "file://COPYING;md5=ecabc31e90311da843753ba772885d9f"
+
+SRCREV = "c60784540c5307d16cdc136ace5b395961492e73"
+SRC_URI = " \
+   git://github.com/fail2ban/fail2ban.git;branch=0.10 \
+   file://initd \
+   file://fail2ban_setup.py \
+"
+
+inherit update-rc.d setuptools
+
+S = "${WORKDIR}/git"
+
+INITSCRIPT_PACKAGES = "${PN}"
+INITSCRIPT_NAME = "fail2ban-server"
+INITSCRIPT_PARAMS = "defaults 25"
+
+do_compile_prepend () {
+cp ${WORKDIR}/fail2ban_setup.py ${S}/setup.py
+}
+
+do_install_append () {
+   install -d ${D}/${sysconfdir}/fail2ban
+   install -d ${D}/${sysconfdir}/init.d
+   install -m 0755 ${WORKDIR}/initd ${D}${sysconfdir}/init.d/fail2ban-server
+}
+
+FILES_${PN} += "/run"
+
+INSANE_SKIP_${PN}_append = " already-stripped"
+
+RDEPENDS_${PN} = "sysklogd iptables sqlite3 python python-pyinotify"
diff --git a/recipes-security/fail2ban/files/fail2ban_setup.py b/recipes-security/fail2ban/files/fail2ban_setup.py
new file mode 100755
index 000..a5d4ed6
--- /dev/null
+++ b/recipes-security/fail2ban/files/fail2ban_setup.py
@@ -0,0 +1,175 @@
+#!/usr/bin/env python
+# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
+# vi: set ft=python sts=4 ts=4 sw=4 noet :
+
+# This file is part of Fail2Ban.
+#
+# Fail2Ban is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# Fail2Ban is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Fail2Ban; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+
+__author__ = "Cyril Jaquier, Steven Hiscocks, Yaroslav Halchenko"
+__copyright__ = "Copyright (c) 2004 Cyril Jaquier, 2008-2016 Fail2Ban Contributors"
+__license__ = "GPL"
+
+import platform
+
+try:
+   import setuptools
+   from setuptools import setup
+   from setuptools.command.install import install
+   from setuptools.command.install_scripts import install_scripts
+except ImportError:
+   setuptools = None
+   from distutils.core import setup
+
+# all versions
+from distutils.command.build_py import build_py
+from distutils.command.build_scripts import build_scripts
+if setuptools is None:
+   from distutils.command.install import install
+   from distutils.command.install_scripts import install_scripts
+try:
+   # python 3.x
+   from distutils.command.build_py import build_py_2to3
+   from distutils.command.build_scripts import build_scripts_2to3
+   _2to3 = True
+except ImportError:
+  

Re: [yocto] [meta-security][PATCH] fail2bin: Add new package

2017-09-01 Thread akuster808


Hello Paul,

On 08/31/2017 10:35 PM, Paul Eggleton wrote:

Hi Armin,

On Friday, 1 September 2017 5:09:23 PM NZST Armin Kuster wrote:

Fail2Ban scans log files like /var/log/auth.log and bans IP addresses having too
many failed login attempts. It does this by updating system firewall rules to
reject new connections from those IP addresses, for a configurable amount of
time. Fail2Ban comes out-of-the-box ready to read many standard log files, such
as those for sshd and Apache, and is easy to configure to read any log file you
choose, for any error you choose.
...
+++ b/recipes-security/fail2ban/fail2ban_0.10.0.bb
@@ -0,0 +1,41 @@
+SUMMARY = "Daemon to ban hosts that cause multiple authentication errors."
+DESCIPTION = "Fail2Ban scans log files like /var/log/auth.log and bans IP addresses having too \

Typo ^. Also typo "fail2bin" in the shortlog.

ah.. thanks for the corrections.


Great to see this added though, and that it's alive upstream - I wrote a recipe
for fail2ban a few years ago (around the 0.8.4 times) and then noticed it had
a number of security issues and so I dropped it. I just found I still have the 
recipe
and I was doing a few things like sed'ing the hardcoded paths in the config
and setting CONFFILES that you don't have here, so I could send you a patch
afterwards with those tweaks if you like.

sure.

kind regards,
Armin


Cheers,
Paul





Re: [yocto] apt-key not found

2017-09-01 Thread Alexander Kanavin

On 09/01/2017 06:17 AM, yahia farghaly wrote:


So, what the right way to do this ?


You should study how signing RPM/IPK package feeds is done in Yocto, and 
modify the code in the same way. Start from


meta/recipes-core/meta/signing-keys.bb
meta/classes/sign_package_feed.bbclass
meta/lib/oe/package_manager.py (RpmIndexer/OpkgIndexer classes)

Then modify DpkgIndexer class to use apt-key to do the signing in a 
similar way. It even has a stub:


        if self.d.getVar('PACKAGE_FEED_SIGN') == '1':
            raise NotImplementedError('Package feed signing not implementd for dpkg')
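[Editorial note: for a rough idea of the shape such an implementation takes, the other indexers end up doing something like the sketch below after writing each index file. This is a hypothetical, self-contained illustration: StubSigner stands in for the real signer backend in meta/lib/oe/gpg_sign.py, and the file names and key id are made up.]

```python
# Hypothetical sketch of the feed-signing step, modelled on what the
# RPM/OPKG indexers do: detach-sign every generated index file.
# StubSigner is a stand-in for oe.gpg_sign's signer; nothing here is
# the real BitBake API.
class StubSigner:
    def __init__(self):
        self.calls = []

    def detach_sign(self, path, keyid, passphrase_file, armor=True):
        # The real signer shells out to gpg; here we only record the request.
        self.calls.append((path, keyid, armor))


def sign_feed(index_files, signer, keyid, armor=True):
    """Detach-sign each index file and return the expected signature paths."""
    for path in index_files:
        signer.detach_sign(path, keyid, passphrase_file=None, armor=armor)
    return [path + (".asc" if armor else ".sig") for path in index_files]


signer = StubSigner()
sigs = sign_feed(["dists/main/binary-amd64/Packages"], signer, "feed-key")
print(sigs)  # ['dists/main/binary-amd64/Packages.asc']
```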



Alex


Re: [yocto] Building Custom Python 3 Packages

2017-09-01 Thread Alexander Kanavin

On 08/31/2017 04:54 PM, Seilis, Aaron wrote:

This clearly indicates that the issue is that the build is looking
for setup.py in the ${B} location, but it is only present in the ${S}
location when `devtool modify` has been run. I have tried setting
${B} to ${S} explicitly in the recipe, but this doesn't result in
${B} being changed when I run `bitbake -e mytool`. I could always
copy ${S} to ${B} in the recipe, but that seems a bit hack-ish.

Did I miss something or is there another way that Python builds are
intended to work?


I think this might be a limitation of setuptools: they do not support 
out-of-tree builds (which is a must for 'devtool modify'). If you can 
figure out how to solve it, that would be nice!
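[Editorial note: for reference, the copy-based workaround mentioned in the question could be written as a recipe fragment like the untested sketch below — it simply mirrors the source into ${B}, since setuptools only builds in-tree.]

```
# Untested sketch of the copy workaround: setuptools only builds in-tree,
# so mirror the source into ${B} before do_compile runs.
do_configure_prepend() {
    cp -r ${S}/. ${B}/
}
```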


Alex


Re: [yocto] [meta-gplv2][PATCH] gmp_4.2.1: prevent calls to mpn_add_nc() if HAVE_NATIVE_mpn_sub_nc is false

2017-09-01 Thread Burton, Ross
Sorry I'd merged but not pushed.

Now pushed.

Ross

On 31 August 2017 at 19:49, Andre McCurdy  wrote:

> On Fri, Aug 25, 2017 at 5:50 PM, Andre McCurdy 
> wrote:
> > When building for aarch64 (ie relying only on generic C code rather
> > than asm) libgmp.so contains undefined references to __gmpn_add_nc
> > and __gmpn_sub_nc which causes attempts to link with -lgmp to fail:
> >
> >  | .../usr/lib/libgmp.so: undefined reference to `__gmpn_sub_nc'
> >  | .../usr/lib/libgmp.so: undefined reference to `__gmpn_add_nc'
> >
> > Solution based on a historical patch posted to the gmp mailing list:
> >
> >   https://gmplib.org/list-archives/gmp-discuss/2006-May/002344.html
> >
> > Signed-off-by: Andre McCurdy 
> > ---
>
> Ping.
>