[OE-core] Shared Sstate-cache with qemu issue

2020-05-20 Thread vygu via lists.openembedded.org
Hello,

We are observing an issue with qemu (runqemu) when we use our shared sstate-cache.
If we build the sstate-cache on a Debian x86_64 machine and then use it on an 
Ubuntu x86_64 machine, qemu looks for several libraries that it cannot find.
We have reproduced this problem on two different PCs running Ubuntu. We do not 
have this problem if we use the sstate-cache on another Debian machine; in all 
cases, ldd shows the use of local/host libraries, not the Yocto-built ones.
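A quick way to confirm which libraries come from the host is to run ldd on the native qemu binary and list every dependency resolved from host paths. This is only a sketch; the binary path and the host-path pattern are assumptions to adapt to your build tree:

```shell
#!/bin/sh
# List dependencies a binary resolves from host library paths.
# BIN defaults to /bin/sh only so the sketch runs standalone; point it
# at e.g. tmp/work/.../recipe-sysroot-native/usr/bin/qemu-system-aarch64.
BIN="${1:-/bin/sh}"
# A dependency resolved from /lib or /usr/lib (the host) rather than the
# build's native sysroot suggests host contamination.
ldd "$BIN" | awk '$2 == "=>" && $3 ~ "^/(usr/)?lib" { print $1, "->", $3 }'
```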

Regards,

vygu
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#138507): 
https://lists.openembedded.org/g/openembedded-core/message/138507
Mute This Topic: https://lists.openembedded.org/mt/74351811/21656
Group Owner: openembedded-core+ow...@lists.openembedded.org
Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [OE-core] Issue with qemu and a shared sstate-cache used by different linux distribution supported by yocto

2020-05-27 Thread vygu via lists.openembedded.org
libnfs, for example.

‐‐‐ Original Message ‐‐‐
On Wednesday 27 May 2020 14:43, Alexander Kanavin  
wrote:

> Can you please provide the lib names which are problematic?
>
> Alex
>
> On Wed, 27 May 2020 at 14:29, vygu via lists.openembedded.org 
>  wrote:
>
>> Hello,
>>
>> Since the zeus series (and also with dunfell), we have observed an issue with 
>> runqemu when we share the sstate-cache, via a mirror, between the different 
>> Linux distributions supported by Yocto.
>>
>> If we build an sstate-cache on a Debian 10 x86_64 build farm and then use it 
>> on Ubuntu 18.04 x86_64, runqemu fails to find several libraries.
>> We have reproduced this problem on two different PCs, running Ubuntu 18.04 
>> and 16.04.
>>
>> We do not have this problem if we use the shared sstate-cache on another 
>> Debian machine.
>>
>> In all cases, running ldd on the qemu binary shows the use of local/host 
>> libraries, not the Yocto-built ones.
>>
>> Is this expected behaviour or not?
>> Should runqemu's libraries come from the Linux distribution or from the 
>> Yocto build environment?
>>
>> Regards,
>>
>> vygu

View/Reply Online (#138781): 
https://lists.openembedded.org/g/openembedded-core/message/138781
-=-=-=-=-=-=-=-=-=-=-=-


[OE-core] Issue with qemu and a shared sstate-cache used by different linux distribution supported by yocto

2020-05-27 Thread vygu via lists.openembedded.org
Hello,

Since the zeus series (and also with dunfell), we have observed an issue with 
runqemu when we share the sstate-cache, via a mirror, between the different 
Linux distributions supported by Yocto.

If we build an sstate-cache on a Debian 10 x86_64 build farm and then use it on 
Ubuntu 18.04 x86_64, runqemu fails to find several libraries.
We have reproduced this problem on two different PCs, running Ubuntu 18.04 and 16.04.

We do not have this problem if we use the shared sstate-cache on another Debian 
machine.

In all cases, running ldd on the qemu binary shows the use of local/host 
libraries, not the Yocto-built ones.

Is this expected behaviour or not?
Should runqemu's libraries come from the Linux distribution or from the Yocto 
build environment?

Regards,

vygu

View/Reply Online (#138776): 
https://lists.openembedded.org/g/openembedded-core/message/138776
-=-=-=-=-=-=-=-=-=-=-=-


Re: [OE-core] Issue with qemu and a shared sstate-cache used by different linux distribution supported by yocto

2020-05-27 Thread vygu via lists.openembedded.org
libplc4.so => /lib/x86_64-linux-gnu/libplc4.so (0x7f5330c7f000)
libplds4.so => /lib/x86_64-linux-gnu/libplds4.so (0x7f5330c7a000)
libnl-route-3.so.200 => /lib/x86_64-linux-gnu/libnl-route-3.so.200 
(0x7f5330a01000)
libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 
(0x7f53307e)
libkrb5support.so.0 => /lib/x86_64-linux-gnu/libkrb5support.so.0 
(0x7f53307cf000)
libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 
(0x7f53307c8000)

Without the sstate-cache, on Ubuntu 16.04, we have:

ldd  
../build/tmp/work/x86_64-linux/qemu-system-native/4.1.0-r0/sysroot-destdir/home/user/yocto/build/tmp/work/x86_64-linux/qemu-system-native/4.1.0-r0/recipe-sysroot-native/usr/bin/qemu-system-aarch64
linux-vdso.so.1 (0x6b9b308f2000)
libasound.so.2 => /usr/lib/libasound.so.2 (0x6b9b307d)
libz.so.1 => /usr/lib/libz.so.1 (0x6b9b307b6000)
libpixman-1.so.0 => /usr/lib/libpixman-1.so.0 (0x6b9b3070e000)
libutil.so.1 => /usr/lib/libutil.so.1 (0x6b9b30709000)
libfdt.so.1 => /usr/lib/libfdt.so.1 (0x6b9b306ff000)
libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x6b9b305d4000)
librt.so.1 => /usr/lib/librt.so.1 (0x6b9b305c9000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x6b9b303ec000)
libm.so.6 => /usr/lib/libm.so.6 (0x6b9b302a7000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x6b9b3028d000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x6b9b3026b000)
libc.so.6 => /usr/lib/libc.so.6 (0x6b9b300a2000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x6b9b3009c000)

/home/user/yocto/buildv1.8/tmp/sysroots-uninative/x86_64-linux/lib/ld-linux-x86-64.so.2
 => /usr/lib64/ld-linux-x86-64.so.2 (0x6b9b308f4000)
libpcre.so.1 => /usr/lib/libpcre.so.1 (0x00006b9b3002a000)

‐‐‐ Original Message ‐‐‐
On Wednesday 27 May 2020 14:43, Alexander Kanavin  
wrote:

> Can you please provide the lib names which are problematic?
>
> Alex
>
> On Wed, 27 May 2020 at 14:29, vygu via lists.openembedded.org 
>  wrote:
>
>> Hello,
>>
>> Since the zeus series (and also with dunfell), we have observed an issue with 
>> runqemu when we share the sstate-cache, via a mirror, between the different 
>> Linux distributions supported by Yocto.
>>
>> If we build an sstate-cache on a Debian 10 x86_64 build farm and then use it 
>> on Ubuntu 18.04 x86_64, runqemu fails to find several libraries.
>> We have reproduced this problem on two different PCs, running Ubuntu 18.04 
>> and 16.04.
>>
>> We do not have this problem if we use the shared sstate-cache on another 
>> Debian machine.
>>
>> In all cases, running ldd on the qemu binary shows the use of local/host 
>> libraries, not the Yocto-built ones.
>>
>> Is this expected behaviour or not?
>> Should runqemu's libraries come from the Linux distribution or from the 
>> Yocto build environment?
>>
>> Regards,
>>
>> vygu

View/Reply Online (#138784): 
https://lists.openembedded.org/g/openembedded-core/message/138784
-=-=-=-=-=-=-=-=-=-=-=-


Re: [OE-core] Issue with qemu and a shared sstate-cache used by different linux distribution supported by yocto

2020-05-28 Thread vygu via lists.openembedded.org
After some investigation on the Debian build farm, we can see "libnfs support yes" 
in build/tmp/work/x86_64-linux/qemu-system-native/4.2.0-r0/temp/log.do_configure.
If we comment out the do_configure_prepend_class-native() prepend in 
poky/meta/recipes-devtools/qemu/qemu.inc, we obtain "libnfs support no".
According to its comments, do_configure_prepend_class-native() is there to find 
SDL, but we can now see that it enables more than just SDL support.
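Comparing feature detection between hosts is easiest from the do_configure log itself. A hedged sketch, assuming a log path like the one quoted above (the version directory will vary):

```shell
#!/bin/sh
# Print the "<feature> support yes/no" summary lines from a qemu
# do_configure log, so two hosts' logs can be diffed directly.
# LOG defaults to a local stand-in so the sketch runs standalone; the real
# path is e.g. tmp/work/x86_64-linux/qemu-system-native/4.2.0-r0/temp/log.do_configure
LOG="${1:-./log.do_configure}"
grep -E 'support[[:space:]]*(yes|no)' "$LOG" | sort
```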


Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Thursday 28 May 2020 03:17, Andre McCurdy  wrote:

> On Wed, May 27, 2020 at 5:29 AM vygu via lists.openembedded.org
> vygu=protonmail...@lists.openembedded.org wrote:
>
> > Hello,
> > Since the zeus series (and also with dunfell), we have observed an issue with 
> > runqemu when we share the sstate-cache, via a mirror, between the different 
> > Linux distributions supported by Yocto.
> > If we build an sstate-cache on a Debian 10 x86_64 build farm and then use 
> > it on Ubuntu 18.04 x86_64, runqemu fails to find several libraries.
> > We have reproduced this problem on two different PCs, running Ubuntu 18.04 
> > and 16.04.
> > We do not have this problem if we use the shared sstate-cache on another 
> > Debian machine.
> > In all cases, running ldd on the qemu binary shows the use of local/host 
> > libraries, not the Yocto-built ones.
> > Is this expected behaviour or not?
> > Should runqemu's libraries come from the Linux distribution or from the 
> > Yocto build environment?
>
> There are two cases, depending on whether you have uninative enabled
> or not. It's disabled by default in oe-core but enabled by default in
> poky (the distro aimed at testing).
>
> With uninative disabled, native binaries link with host libc. Other
> link dependencies are either native packages provided by OE or they
> come from the host. In this case, sstate for native packages is stored
> within a host specific subdirectory of sstate-cache (e.g.
> sstate-cache/ubuntu-18.04). It should be quite safe to share
> sstate-cache between different hosts (since different hosts should
> each use a different subdirectory of sstate-cache). The downside is
> that because different hosts don't share sstate, build times may be
> slower.
>
> With uninative enabled, native binaries link with uninative libc.
> Other link dependencies should only be native packages provided by OE
> (ie they should NOT come directly from the host). In this case, sstate
> for native packages is stored within a common subdirectory of
> sstate-cache (ie sstate-cache/universal). The assumption is that
> because native packages never link with libs from the host, sstate for
> native packages no longer needs to be host specific.
>
> Unfortunately problems happen if uninative is enabled but a link
> dependency is found from the host. That causes host dependent sstate
> files to pollute sstate-cache/universal, making it unsafe to reuse
> between different hosts. This doesn't happen often, but it does happen
> sometimes, e.g:
>
> https://git.openembedded.org/openembedded-core/commit/?id=4a996574464028bd5d57b90920d0887d1a81e9e9
>
> It looks like maybe it's happened in your case too. If you want to
> share sstate-cache between different hosts the safest way is to
> disable uninative. If you are happy to test and debug uninative then
> of course give it a try, but be aware that it's not bug free.
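If, following the advice above, you choose to disable uninative so that native sstate is keyed per host distro rather than shared via sstate-cache/universal, a minimal local.conf sketch looks like this (assuming the 2020-era `_remove` override syntax, and that your distro config, e.g. poky, is what inherits uninative; verify against your release):

```
# conf/local.conf
# Opt out of uninative: native sstate then lands in host-specific
# subdirectories (e.g. sstate-cache/ubuntu-18.04) and is safe to share
# between hosts, at the cost of less sstate reuse across distros.
INHERIT_remove = "uninative"
```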


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#138840): 
https://lists.openembedded.org/g/openembedded-core/message/138840
-=-=-=-=-=-=-=-=-=-=-=-


Re: [OE-core] Issue with qemu and a shared sstate-cache used by different linux distribution supported by yocto

2020-05-29 Thread vygu via lists.openembedded.org
>>  librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f533c3f4000)
>>  libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 
>> (0x7f533c27)
>>  libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f533c0ed000)
>>  libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
>> (0x7f533c0d3000)
>>  libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
>> (0x7f533c0b)
>>  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f533beef000)
>>  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f533beea000)
>>  
>> /home/user/yocto/build/tmp/sysroots-uninative/x86_64-linux/lib/ld-linux-x86-64.so.2
>>  => /lib64/ld-linux-x86-64.so.2 (0x7f533e447000)
>>  libceph-common.so.0 => 
>> /usr/lib/x86_64-linux-gnu/ceph/libceph-common.so.0 (0x7f5333422000)
>>  libboost_system.so.1.67.0 => 
>> /lib/x86_64-linux-gnu/libboost_system.so.1.67.0 (0x7f533341b000)
>>  libboost_thread.so.1.67.0 => 
>> /lib/x86_64-linux-gnu/libboost_thread.so.1.67.0 (0x7f5ed000)
>>  libattr.so.1 => /lib/x86_64-linux-gnu/libattr.so.1 (0x7f5e5000)
>>  libtirpc.so.3 => /lib/x86_64-linux-gnu/libtirpc.so.3 
>> (0x7f5b1000)
>>  libcrypto.so.1.1 => /lib/x86_64-linux-gnu/libcrypto.so.1.1 
>> (0x7f53330c8000)
>>  libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x7f5333054000)
>>  libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 
>> (0x7f5333038000)
>>  libboost_regex.so.1.67.0 => 
>> /lib/x86_64-linux-gnu/libboost_regex.so.1.67.0 (0x7f5332f23000)
>>  libboost_iostreams.so.1.67.0 => 
>> /lib/x86_64-linux-gnu/libboost_iostreams.so.1.67.0 (0x7f5332f05000)
>>  libblkid.so.1 => /lib/x86_64-linux-gnu/libblkid.so.1 
>> (0x7f5332eb)
>>  libsmime3.so => /lib/x86_64-linux-gnu/libsmime3.so (0x7f5332e81000)
>>  libnss3.so => /lib/x86_64-linux-gnu/libnss3.so (0x7f5332d33000)
>>  libnspr4.so => /lib/x86_64-linux-gnu/libnspr4.so (0x7f5332cf)
>>  libibverbs.so.1 => /lib/x86_64-linux-gnu/libibverbs.so.1 
>> (0x7f5332cd5000)
>>  libboost_atomic.so.1.67.0 => 
>> /lib/x86_64-linux-gnu/libboost_atomic.so.1.67.0 (0x7f5332cd)
>>  libgssapi_krb5.so.2 => /lib/x86_64-linux-gnu/libgssapi_krb5.so.2 
>> (0x7f5332c83000)
>>  libkrb5.so.3 => /lib/x86_64-linux-gnu/libkrb5.so.3 (0x7f5332ba3000)
>>  libk5crypto.so.3 => /lib/x86_64-linux-gnu/libk5crypto.so.3 
>> (0x7f5332b6d000)
>>  libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 
>> (0x7f5332b67000)
>>  libicudata.so.63 => /lib/x86_64-linux-gnu/libicudata.so.63 
>> (0x7f5331177000)
>>  libicui18n.so.63 => /lib/x86_64-linux-gnu/libicui18n.so.63 
>> (0x7f5330e9c000)
>>  libicuuc.so.63 => /lib/x86_64-linux-gnu/libicuuc.so.63 
>> (0x7f5330ccd000)
>>  libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 
>> (0x7f5330cb8000)
>>  libnssutil3.so => /lib/x86_64-linux-gnu/libnssutil3.so 
>> (0x7f5330c86000)
>>  libplc4.so => /lib/x86_64-linux-gnu/libplc4.so (0x7f5330c7f000)
>>  libplds4.so => /lib/x86_64-linux-gnu/libplds4.so (0x7f5330c7a000)
>>  libnl-route-3.so.200 => /lib/x86_64-linux-gnu/libnl-route-3.so.200 
>> (0x7f5330a01000)
>>  libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 
>> (0x7f53307e)
>>  libkrb5support.so.0 => /lib/x86_64-linux-gnu/libkrb5support.so.0 
>> (0x7f53307cf000)
>>  libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 
>> (0x7f53307c8000)
>>
>> without the sstate-cache, on an ubuntu 16.04, we have:
>>
>> ldd  
>> ../build/tmp/work/x86_64-linux/qemu-system-native/4.1.0-r0/sysroot-destdir/home/user/yocto/build/tmp/work/x86_64-linux/qemu-system-native/4.1.0-r0/recipe-sysroot-native/usr/bin/qemu-system-aarch64
>> linux-vdso.so.1 (0x6b9b308f2000)
>> libasound.so.2 => /usr/lib/libasound.so.2 (0x6b9b307d)
>> libz.so.1 => /usr/lib/libz.so.1 (0x6b9b307b6000)
>> libpixman-1.so.0 => /usr/lib/libpixman-1.so.0 (0x6b9b3070e000)
>> libutil.so.1 => /usr/lib/libutil.so.1 (0x6b9b30709000)
>> libfdt.so.1 => /usr/lib/libfdt.so.1 (0x00006b9b306ff000)
>> libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x6b9b305d4000)
>> librt.so.1 => /usr/lib/librt.so.1 (0x6b9b305c9000)
>> libst

[OE-core] error about cve_check after a 'do_populate_sdk: Succeeded' on poky master since 20 july on ubuntu18.04/Debian 10/Debian 9.12

2020-07-24 Thread vygu via lists.openembedded.org
Hello,

We observe the following error from cve_check after a populate_sdk:

ERROR: Execution of event handler 'cve_save_summary_handler' failed
Traceback (most recent call last):
  File "/home/user/poky/meta/classes/cve-check.bbclass", line 65, in cve_save_summary_handler(e=):
    > shutil.copyfile(cve_tmp_file, cve_summary_file)
  File "/usr/lib/python3.6/shutil.py", line 120, in copyfile(src='/home/user/poky/build/tmp/cve_check', dst='/home/user/poky/build/tmp/log/cve/cve-summary-20200724111814.txt', follow_symlinks=True):
        else:
    >       with open(src, 'rb') as fsrc:
                with open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/poky/build/tmp/cve_check'
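The handler fails because it copies tmp/cve_check unconditionally, even when no recipe produced it (as can happen in a populate_sdk-only run). The guard is straightforward; sketched here in shell for brevity (the real handler is Python in cve-check.bbclass, and the paths are taken from the traceback as examples):

```shell
#!/bin/sh
# Copy the CVE summary only if the temp file was actually written.
src="${1:-tmp/cve_check}"
dst="${2:-tmp/log/cve/cve-summary.txt}"
if [ -f "$src" ]; then
    mkdir -p "$(dirname "$dst")"
    cp "$src" "$dst"
else
    # Nothing to summarize; skip instead of raising FileNotFoundError.
    echo "no CVE data at $src; skipping summary" >&2
fi
```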
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#140916): 
https://lists.openembedded.org/g/openembedded-core/message/140916
-=-=-=-=-=-=-=-=-=-=-=-