Re: [yocto] lib32-ncurses not installing in rootfs

2018-11-07 Thread Mohammad, Jamal M
Thanks, Ross, for the clear explanation. It solved my issue, and I learned
about oe-pkgdata-util along the way.

Once again, thanks.

From: Burton, Ross [mailto:ross.bur...@intel.com]
Sent: Wednesday, November 7, 2018 6:19 PM
To: Mohammad, Jamal M 
Cc: ChenQi ; Yocto-mailing-list 
Subject: Re: [yocto] lib32-ncurses not installing in rootfs

The curses libraries are split up further; have a look at what else is in
packages-split, or even better, use oe-pkgdata-util:

$ oe-pkgdata-util list-pkg-files -p lib32-ncurses
...
lib32-ncurses-libncurses:
/lib/libncurses.so.5
/lib/libncurses.so.5.9
lib32-ncurses-libncursesw:
/lib/libncursesw.so.5
/lib/libncursesw.so.5.9
lib32-ncurses-libpanel:
/usr/lib/libpanel.so.5
/usr/lib/libpanel.so.5.9
...

$ oe-pkgdata-util find-path */libncurses.so.*
lib32-ncurses-dbg: /lib/.debug/libncurses.so.5.9
lib32-ncurses-libncurses: /lib/libncurses.so.5
lib32-ncurses-libncurses: /lib/libncurses.so.5.9
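As a minimal sketch of acting on that output (package name taken from the oe-pkgdata-util listing above; paths are illustrative and the command should be run from the build directory):

```shell
# Append the split package that actually contains the libraries
# (lib32-ncurses-libncurses) to the image via local.conf.
mkdir -p conf
echo 'IMAGE_INSTALL_append = " lib32-ncurses-libncurses"' >> conf/local.conf
tail -n 1 conf/local.conf
```

A rebuild of the image after this should then pull the libraries into the rootfs.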

Ross

On Wed, 7 Nov 2018 at 09:39, Mohammad, Jamal M
<mohammadjamal.mohiud...@ncr.com> wrote:
There are many directories inside the packages-split folder: lib32-ncurses,
lib32-ncurses-dbg, lib32-ncurses-dev, lib32-ncurses-doc.

Looking into lib32-ncurses,

lib32-ncurses
└── usr
├── bin
│   ├── tput
│   └── tset
└── share
└── tabset
├── std
├── stdcrt
├── vt100
└── vt300

It doesn't have a lib folder, so where are the libncurses.so libraries?

From: ChenQi [mailto:qi.c...@windriver.com]
Sent: Wednesday, November 7, 2018 3:04 PM
To: Mohammad, Jamal M <mohammadjamal.mohiud...@ncr.com>;
Yocto-mailing-list <yocto@yoctoproject.org>
Subject: Re: [yocto] lib32-ncurses not installing in rootfs


Check the packages-split/ directory to see how files are placed in each package.
I guess these files are packaged into other packages derived from the ncurses
recipe.

Best Regards,
Chen Qi

On 11/07/2018 05:21 PM, Mohammad, Jamal M wrote:
Hi Guys,
I am trying to add 32-bit ncurses into my root file system.
I am using the Intel Yocto BSP, sumo branch.
Here is my local.conf:

require conf/multilib.conf
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
IMAGE_INSTALL_append = " dpkg gnutls lib32-glibc lib32-libgcc lib32-libstdc++ 
lib32-gnutls lib32-freetype lib32-libx11 lib32-ncurses lib32-dpkg 
python3-six"

The ncurses folder is present in tmp:
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0

The image folder is created and contains the libraries:
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0/image/lib
libncurses.so.5  libncurses.so.5.9  libncursesw.so.5  libncursesw.so.5.9  
libtinfo.so.5  libtinfo.so.5.9

But these files are not present in the root file system.
How can I debug this, or what should be my next step to get them into the root
file system? Which log files should I look at?
Thanks for your time.

Regards,
Jamal,
Software Specialist,
NCR Corporation



--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[linux-yocto] [V2 PATCH 0/2] Move some configs to intel-x86-64

2018-11-07 Thread Hongzhi.Song
v2:
Modified the commit log to be clearer.

Hongzhi.Song (2):
  intel-x86-64: Move some configs from x86 and x86-64 shared to x86-64
  intel-x86-64: Move 'CONFIG_NR_CPUS=256' to intel-x86-64.cfg

 bsp/intel-x86/intel-x86-64.cfg  | 3 +++
 bsp/intel-x86/intel-x86-64.scc  | 1 +
 bsp/intel-x86/intel-x86.cfg | 1 -
 bsp/intel-x86/intel-x86.scc | 1 +
 features/ixgbe/ixgbe-x86-64.cfg | 1 +
 features/ixgbe/ixgbe-x86-64.scc | 3 +++
 features/ixgbe/ixgbe.cfg| 1 -
 features/ixgbe/ixgbe.scc| 1 -
 8 files changed, 9 insertions(+), 3 deletions(-)
 create mode 100644 features/ixgbe/ixgbe-x86-64.cfg
 create mode 100644 features/ixgbe/ixgbe-x86-64.scc

-- 
2.8.1

-- 
___
linux-yocto mailing list
linux-yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/linux-yocto


[linux-yocto] [PATCH 2/2] intel-x86-64: Move 'CONFIG_NR_CPUS=256' to intel-x86-64.cfg

2018-11-07 Thread Hongzhi.Song
The maximum number of CPUs is 64 on intel-x86-32,
but intel-x86-64 supports a range from 1 to 8192.
So we should move this config to intel-x86-64.cfg.

Signed-off-by: Hongzhi.Song 
---
 bsp/intel-x86/intel-x86-64.cfg | 3 +++
 bsp/intel-x86/intel-x86.cfg| 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-64.cfg
index 4e8a4d7..215c0f0 100644
--- a/bsp/intel-x86/intel-x86-64.cfg
+++ b/bsp/intel-x86/intel-x86-64.cfg
@@ -49,3 +49,6 @@ CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
 
 # Intel Resource Director Technology support
 CONFIG_INTEL_RDT=y
+
+# Processor type and features
+CONFIG_NR_CPUS=256
diff --git a/bsp/intel-x86/intel-x86.cfg b/bsp/intel-x86/intel-x86.cfg
index 6919179..080bca1 100644
--- a/bsp/intel-x86/intel-x86.cfg
+++ b/bsp/intel-x86/intel-x86.cfg
@@ -17,7 +17,6 @@
 CONFIG_MCORE2=y
 CONFIG_SMP=y
 CONFIG_SCHED_SMT=y
-CONFIG_NR_CPUS=256
 
 CONFIG_NUMA=y
 CONFIG_ACPI_NUMA=y
-- 
2.8.1



[linux-yocto] [PATCH 1/2] intel-x86-64: Move some configs from x86 and x86-64 shared to x86-64

2018-11-07 Thread Hongzhi.Song
These configs are specific to x86-64:
CONFIG_IXGBE_DCA depends on CONFIG_DCA.
CONFIG_DCA depends on x86-64.

So I extracted the DCA-related configs and put them into *-x86-64.cfg.

Signed-off-by: Hongzhi.Song 
---
 bsp/intel-x86/intel-x86-64.scc  | 1 +
 bsp/intel-x86/intel-x86.scc | 1 +
 features/ixgbe/ixgbe-x86-64.cfg | 1 +
 features/ixgbe/ixgbe-x86-64.scc | 3 +++
 features/ixgbe/ixgbe.cfg| 1 -
 features/ixgbe/ixgbe.scc| 1 -
 6 files changed, 6 insertions(+), 2 deletions(-)
 create mode 100644 features/ixgbe/ixgbe-x86-64.cfg
 create mode 100644 features/ixgbe/ixgbe-x86-64.scc

diff --git a/bsp/intel-x86/intel-x86-64.scc b/bsp/intel-x86/intel-x86-64.scc
index c23611e..50ebf0e 100644
--- a/bsp/intel-x86/intel-x86-64.scc
+++ b/bsp/intel-x86/intel-x86-64.scc
@@ -7,3 +7,4 @@ include cfg/x86_64.scc nopatch
 include intel-x86.scc
 kconf hardware intel-x86-64.cfg
 include features/x2apic/x2apic.scc
+include features/ixgbe/ixgbe-x86-64.scc
diff --git a/bsp/intel-x86/intel-x86.scc b/bsp/intel-x86/intel-x86.scc
index bad7c65..aee6d5e 100644
--- a/bsp/intel-x86/intel-x86.scc
+++ b/bsp/intel-x86/intel-x86.scc
@@ -7,6 +7,7 @@ include cfg/sound.scc
 include cfg/efi-ext.scc
 include cfg/boot-live.scc
 include cfg/intel.scc
+include cfg/dmaengine.scc
 
 include features/netfilter/netfilter.scc
 include features/profiling/profiling.scc
diff --git a/features/ixgbe/ixgbe-x86-64.cfg b/features/ixgbe/ixgbe-x86-64.cfg
new file mode 100644
index 000..36c6076
--- /dev/null
+++ b/features/ixgbe/ixgbe-x86-64.cfg
@@ -0,0 +1 @@
+CONFIG_IXGBE_DCA=y
diff --git a/features/ixgbe/ixgbe-x86-64.scc b/features/ixgbe/ixgbe-x86-64.scc
new file mode 100644
index 000..2bd2d7c
--- /dev/null
+++ b/features/ixgbe/ixgbe-x86-64.scc
@@ -0,0 +1,3 @@
+kconf hardware ixgbe-x86-64.cfg
+
+include features/dca/dca.scc
diff --git a/features/ixgbe/ixgbe.cfg b/features/ixgbe/ixgbe.cfg
index c7abf3b..31d8b1d 100644
--- a/features/ixgbe/ixgbe.cfg
+++ b/features/ixgbe/ixgbe.cfg
@@ -1,6 +1,5 @@
 CONFIG_IXGBE=m
 
 CONFIG_DCB=y
-CONFIG_IXGBE_DCA=y
 CONFIG_IXGBE_DCB=y
 CONFIG_IXGBEVF=m
diff --git a/features/ixgbe/ixgbe.scc b/features/ixgbe/ixgbe.scc
index d22aa5c..a256a6e 100644
--- a/features/ixgbe/ixgbe.scc
+++ b/features/ixgbe/ixgbe.scc
@@ -1,3 +1,2 @@
 kconf hardware ixgbe.cfg
 
-include features/dca/dca.scc
-- 
2.8.1




Re: [yocto] [RFC] Yocto Autobuilder and LAVA Integration

2018-11-07 Thread richard . purdie
Hi Anibal,

On Wed, 2018-11-07 at 16:25 -0600, Anibal Limon wrote:
> We know the need to execute OE testimage on real HW, not only QEMU.
> 
> I'm aware that there is currently an implementation in the Yocto
> Autobuilder Helper; this initial implementation looks well thought out,
> separating the parts for template generation [1] from the script that
> sends jobs to LAVA [2].
> 
> There are some limitations.
> 
> - Requires that the boards are accessible through SSH (same network?)
> by the Autobuilder, so no distributed lab testing.
> - LAVA doesn't know about test results because the execution is
> injected via SSH.
> 
> In order to do distributed board testing, the Yocto Autobuilder
> needs to publish the artifacts (image, kernel, etc.) needed to
> flash/boot the board, plus the test suite to execute, somewhere
> accessible.
> 
> Currently there is functionality called testexport (not widely
> used/maintained) that allows you to export the test suite.

I continue to have mixed feelings about testexport. It adds complexity
but I'm not sure it's actually worth it.

An alternative would be to specify a set of commit hashes for the
configuration under test (poky or oe-core+bitbake and any other
layers), then have LAVA obtain those pieces and run the tests directly.

It's worth considering that we now have two different pieces of
code trying to package up the build system/layers: eSDK and testexport.
Ideally if we had some kind of standardised layer setup/configuration
approach we'd then just have a config file to share, then the tools
could recreate the environment and allow the tests to be run there
without testexport. Layer-setup is itself a harder subject but for
example the layer setup code in autobuilder-helper could easily be
reused as things stand today...

In fact the more I think about it, the more I think we may want to do
it that way...

> I created a simple LAVA test definition that allows run testimage
> (oe-test runtime) in my own LAVA LAB, is very simplistic only has a
> regex to parse results and uses lava-target-ip and lava-echo-ipv4 to
> get target and server IP addresses.
> 
> In this way the LAVA server handles all the testing and finally the
> Yocto Autobuilder can get/poll an event to know what was the actual
> result of the job and the job could be send to different LAVA LAB's. 

That does sound useful and is likely a way we could end up doing this.
It's probably worth highlighting that we now have a way of summarising
the result of the test in the form of the json file the tests all
generate. Sharing that back to the Yocto autobuilder would give us the
test results we need.
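As a rough sketch of what polling that shared result file could look like on the autobuilder side (the JSON schema below is invented for illustration and is not the actual testresults format):

```shell
# Fabricate a tiny results file in an assumed schema, then summarise it
# the way an autobuilder-side poller might.
cat > testresults.json <<'EOF'
{"ping": {"status": "PASSED"}, "ssh": {"status": "PASSED"}, "scp": {"status": "FAILED"}}
EOF
passed=$(grep -o 'PASSED' testresults.json | wc -l | tr -d ' ')
failed=$(grep -o 'FAILED' testresults.json | wc -l | tr -d ' ')
echo "passed=$passed failed=$failed"   # -> passed=2 failed=1
```

A real consumer would of course parse the actual schema the tests emit rather than grep for status strings.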

> Some of the tasks I identified (if this is accepted):
> 
> - yocto-autobuilder-helper: Implement/adapt to cover this new
> behaviour, and move the EXTRA_PLAIN_CMDS to a class.
> - yocto-autobuilder-helper: Create a better approach to re-using LAVA
> job templates across boards.
> - Poky/OE: Review/fix testexport or provide another mechanism to
> export the test suite.

I think some of these are also independent of each other and good
things to work on regardless...

I would like to hear feedback from those at Intel using LAVA who
submitted the existing code.

Cheers,

Richard



[yocto] [RFC] Yocto Autobuilder and LAVA Integration

2018-11-07 Thread Anibal Limon
Hi,

We know the need to execute OE testimage on real HW, not only QEMU.

I'm aware that there is currently an implementation in the Yocto
Autobuilder Helper; this initial implementation looks well thought out,
separating the parts for template generation [1] from the script that
sends jobs to LAVA [2].

There are some limitations.

- Requires that the boards are accessible through SSH (same network?) by the
Autobuilder, so no distributed lab testing.
- LAVA doesn't know about test results because the execution is injected
via SSH.

In order to do distributed board testing, the Yocto Autobuilder needs to
publish the artifacts (image, kernel, etc.) needed to flash/boot the board,
plus the test suite to execute, somewhere accessible.

Currently there is functionality called testexport (not widely
used/maintained) that allows you to export the test suite.

I created a simple LAVA test definition that allows running testimage (oe-test
runtime) in my own LAVA lab. It is very simplistic: it only has a regex to
parse results, and it uses lava-target-ip and lava-echo-ipv4 to get the target
and server IP addresses.

In this way the LAVA server handles all the testing, and finally the Yocto
Autobuilder can get/poll an event to know what the actual result of the job
was; the job could be sent to different LAVA labs.

Some of the tasks I identified (if this is accepted):

- yocto-autobuilder-helper: Implement/adapt to cover this new behaviour, and
move the EXTRA_PLAIN_CMDS to a class.
- yocto-autobuilder-helper: Create a better approach to re-using LAVA job
templates across boards.
- Poky/OE: Review/fix testexport or provide another mechanism to export the
test suite.
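The regex-parsing approach described above can be sketched in a few lines (the log format here is hypothetical, invented to illustrate the kind of one-line transform a LAVA test definition might apply before calling lava-test-case):

```shell
# Hypothetical oe-test runtime log lines (format assumed for illustration)
log='RESULTS - ping.PingTest.test_ping: PASSED
RESULTS - ssh.SSHTest.test_ssh: FAILED'
# Turn them into "<test-case> <result>" pairs for result reporting.
printf '%s\n' "$log" | sed -n 's/^RESULTS - \([^:]*\): \(.*\)$/\1 \2/p'
```

Each emitted pair could then be fed to LAVA's result-recording helper in the test definition's run step.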

Cheers,
Anibal

[1]
http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/tree/lava-templates/generate-jobconfig.jinja2
[2]
http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/tree/lava
[3]
https://github.com/alimon/test-definitions/commit/4691b67daca26658b669ac0e79e4f27cbf6ed88d
[4]
http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/tree/config-intelqa-x86_64-lava.json#n127


Re: [linux-yocto] [PATCH] aufs: fix "dynamic" goto "sibling call"

2018-11-07 Thread Bruce Ashfield

On 2018-11-06 1:00 p.m., Mark Asselstine wrote:

If you build with CONFIG_STACK_VALIDATION you will get

   CC  fs/aufs/cpup.o
   fs/aufs/cpup.o: warning: objtool: au_cp_regular()+0x24c: sibling call from 
callable instruction with modified stack frame

As stated in tools/objtool/Documentation/stack-validation.txt, the use
of "Dynamic jumps and jumps to undefined symbols" comes with several
conditions, none of which we meet. The use of .label to dictate
which label we 'goto' can be implemented in several ways that are
'safe' from the stack-validation point of view. Here we drop the
.label and instead use the .flags to decide which label to
goto. This gives the same end result while ensuring we are a good
citizen with respect to stack validation.

Signed-off-by: Mark Asselstine 
---

This has been sent to the aufs-users ML and will hopefully be merged
upstream for upcoming releases. This patch is created for 4.18.y but
should apply (with possibly some fuzz) to 4.19.x.


merged.

Bruce



  fs/aufs/cpup.c | 10 ++
  1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/fs/aufs/cpup.c b/fs/aufs/cpup.c
index 93d6496aaf68..9a77b1217933 100644
--- a/fs/aufs/cpup.c
+++ b/fs/aufs/cpup.c
@@ -438,13 +438,11 @@ static int au_cp_regular(struct au_cp_generic *cpg)
{
.bindex = cpg->bsrc,
.flags = O_RDONLY | O_NOATIME | O_LARGEFILE,
-   .label = &
},
{
.bindex = cpg->bdst,
.flags = O_WRONLY | O_NOATIME | O_LARGEFILE,
.force_wr = !!au_ftest_cpup(cpg->flags, RWDST),
-   .label = &_src
}
};
struct super_block *sb, *h_src_sb;
@@ -459,8 +457,12 @@ static int au_cp_regular(struct au_cp_generic *cpg)
f->file = au_h_open(cpg->dentry, f->bindex, f->flags,
/*file*/NULL, f->force_wr);
err = PTR_ERR(f->file);
-   if (IS_ERR(f->file))
-   goto *f->label;
+   if (IS_ERR(f->file)) {
+   if (f->flags & O_RDONLY)
+   goto out;
+   else
+   goto out_src;
+   }
}
  
  	/* try stopping to update while we copyup */






Re: [linux-yocto] [PATCH] aufs: tiny, suppress a warning

2018-11-07 Thread Bruce Ashfield

On 2018-11-07 3:55 a.m., zhe...@windriver.com wrote:

From: "J. R. Okajima" 

commit 3a33601796d4139286c57cd15bf7d88d00aa7674 upstream

Signed-off-by: J. R. Okajima 

fs/aufs/vdir.c: In function 'fillvdir':
fs/aufs/vdir.c:493:15: warning: 'ino' may be used uninitialized in this
function [-Wmaybe-uninitialized]
  arg->err = au_nhash_append_wh
 (&arg->whlist, name, nlen, ino, d_type,
  arg->bindex, shwh);


merged.

Bruce



Signed-off-by: He Zhe 
---
  fs/aufs/vdir.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/fs/aufs/vdir.c b/fs/aufs/vdir.c
index 5b78b5d..0b713a5 100644
--- a/fs/aufs/vdir.c
+++ b/fs/aufs/vdir.c
@@ -484,6 +484,7 @@ static int fillvdir(struct dir_context *ctx, const char 
*__name, int nlen,
if (au_nhash_test_known_wh(>whlist, name, nlen))
goto out; /* already whiteouted */
  
+		ino = 0; /* just to suppress a warning */

if (shwh)
arg->err = au_wh_ino(sb, arg->bindex, h_ino, d_type,
 );





Re: [linux-yocto] [PATCH] intel-x86-64: Move 'CONFIG_NR_CPUS=256' to intel-x86-64.cfg

2018-11-07 Thread Bruce Ashfield

On 2018-11-06 9:06 p.m., Hongzhi.Song wrote:

The maximum number of CPUs is 64 on intel-x86-32.
But intel-x86-64 can support 8192.

Signed-off-by: Hongzhi.Song 
---
  bsp/intel-x86/intel-x86-64.cfg | 3 +++
  bsp/intel-x86/intel-x86.cfg| 1 -
  2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-64.cfg
index 4e8a4d7..215c0f0 100644
--- a/bsp/intel-x86/intel-x86-64.cfg
+++ b/bsp/intel-x86/intel-x86-64.cfg
@@ -49,3 +49,6 @@ CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
  
  # Intel Resource Director Technology support

  CONFIG_INTEL_RDT=y
+
+# Processor type and features
+CONFIG_NR_CPUS=256


The change seems to be doing the opposite of your
commit message.

You are setting the 64-bit config to NR_CPUS=256, but in the
commit message you indicate it can support 8192.

Can you clarify?

Bruce


diff --git a/bsp/intel-x86/intel-x86.cfg b/bsp/intel-x86/intel-x86.cfg
index 6919179..080bca1 100644
--- a/bsp/intel-x86/intel-x86.cfg
+++ b/bsp/intel-x86/intel-x86.cfg
@@ -17,7 +17,6 @@
  CONFIG_MCORE2=y
  CONFIG_SMP=y
  CONFIG_SCHED_SMT=y
-CONFIG_NR_CPUS=256
  
  CONFIG_NUMA=y

  CONFIG_ACPI_NUMA=y





Re: [linux-yocto] [PATCH] intel-x86-64: Move configs from x86 and x86-64 shared to x86-64

2018-11-07 Thread Bruce Ashfield

On 2018-11-06 4:22 a.m., Hongzhi.Song wrote:

These configs are specific to x86-64.


It isn't clear from the options themselves. Can you elaborate
in the commit message as to why IXGBE_DCA is 64-bit only?

Bruce



Signed-off-by: Hongzhi.Song 
---
  bsp/intel-x86/intel-x86-64.scc  | 1 +
  bsp/intel-x86/intel-x86.scc | 1 +
  features/ixgbe/ixgbe-x86-64.cfg | 1 +
  features/ixgbe/ixgbe-x86-64.scc | 3 +++
  features/ixgbe/ixgbe.cfg| 1 -
  features/ixgbe/ixgbe.scc| 1 -
  6 files changed, 6 insertions(+), 2 deletions(-)
  create mode 100644 features/ixgbe/ixgbe-x86-64.cfg
  create mode 100644 features/ixgbe/ixgbe-x86-64.scc

diff --git a/bsp/intel-x86/intel-x86-64.scc b/bsp/intel-x86/intel-x86-64.scc
index c23611e..50ebf0e 100644
--- a/bsp/intel-x86/intel-x86-64.scc
+++ b/bsp/intel-x86/intel-x86-64.scc
@@ -7,3 +7,4 @@ include cfg/x86_64.scc nopatch
  include intel-x86.scc
  kconf hardware intel-x86-64.cfg
  include features/x2apic/x2apic.scc
+include features/ixgbe/ixgbe-x86-64.scc
diff --git a/bsp/intel-x86/intel-x86.scc b/bsp/intel-x86/intel-x86.scc
index bad7c65..aee6d5e 100644
--- a/bsp/intel-x86/intel-x86.scc
+++ b/bsp/intel-x86/intel-x86.scc
@@ -7,6 +7,7 @@ include cfg/sound.scc
  include cfg/efi-ext.scc
  include cfg/boot-live.scc
  include cfg/intel.scc
+include cfg/dmaengine.scc
  
  include features/netfilter/netfilter.scc

  include features/profiling/profiling.scc
diff --git a/features/ixgbe/ixgbe-x86-64.cfg b/features/ixgbe/ixgbe-x86-64.cfg
new file mode 100644
index 000..36c6076
--- /dev/null
+++ b/features/ixgbe/ixgbe-x86-64.cfg
@@ -0,0 +1 @@
+CONFIG_IXGBE_DCA=y
diff --git a/features/ixgbe/ixgbe-x86-64.scc b/features/ixgbe/ixgbe-x86-64.scc
new file mode 100644
index 000..2bd2d7c
--- /dev/null
+++ b/features/ixgbe/ixgbe-x86-64.scc
@@ -0,0 +1,3 @@
+kconf hardware ixgbe-x86-64.cfg
+
+include features/dca/dca.scc
diff --git a/features/ixgbe/ixgbe.cfg b/features/ixgbe/ixgbe.cfg
index c7abf3b..31d8b1d 100644
--- a/features/ixgbe/ixgbe.cfg
+++ b/features/ixgbe/ixgbe.cfg
@@ -1,6 +1,5 @@
  CONFIG_IXGBE=m
  
  CONFIG_DCB=y

-CONFIG_IXGBE_DCA=y
  CONFIG_IXGBE_DCB=y
  CONFIG_IXGBEVF=m
diff --git a/features/ixgbe/ixgbe.scc b/features/ixgbe/ixgbe.scc
index d22aa5c..a256a6e 100644
--- a/features/ixgbe/ixgbe.scc
+++ b/features/ixgbe/ixgbe.scc
@@ -1,3 +1,2 @@
  kconf hardware ixgbe.cfg
  
-include features/dca/dca.scc






Re: [yocto] [yocto-autobuilder-helper][PATCH] config.json: Add steps to test new workers before adding to the main pool

2018-11-07 Thread Richard Purdie
On Wed, 2018-11-07 at 11:15 -0800, akuster wrote:
> On 11/7/18 10:33 AM, Michael Halstead wrote:
> > We add workers to the nightly-bringup pool to test them in a
> > production-like environment. Include one completely emulated target
> > and one to test virtualization extensions.
> 
> 
> Would this be a good indicator to add a new host to the sanity
> checker or start that process?

Yes, that is another piece of this puzzle. I'm trying to deal with it
piece by piece, as if we try to do everything at once there is a risk
of insanity at this point. Even getting this bring-up testing sorted
out is proving tricky and is highlighting other issues :/

Cheers,

Richard



Re: [yocto] [yocto-autobuilder-helper][PATCH] config.json: Add steps to test new workers before adding to the main pool

2018-11-07 Thread akuster


On 11/7/18 10:33 AM, Michael Halstead wrote:
> We add workers to the nightly-bringup pool to test them in a
> production-like environment. Include one completely emulated target and
> one to test virtualization extensions.


Would this be a good indicator to add a new host to the sanity checker
or start that process?


- armin

>
> Signed-off-by: Michael Halstead 
> ---
>  config.json | 13 +
>  1 file changed, 13 insertions(+)
>
> diff --git a/config.json b/config.json
> index f912abf..2945155 100644
> --- a/config.json
> +++ b/config.json
> @@ -145,6 +145,19 @@
>  "SANITYTARGETS" : "core-image-minimal:do_testsdkext"
>  }
>  },
> +   "nightly-bringup" : {
> +"MACHINE" : "qemuarm",
> +"TEMPLATE" : "nightly-arch",
> +"step2" : {
> +"MACHINE" : "beaglebone-yocto",
> +"BBTARGETS" : "core-image-sato core-image-sato-dev 
> core-image-sato-sdk core-image-minimal core-image-minimal-dev 
> core-image-sato-sdk-ptest core-image-sato:do_populate_sdk"
> +},
> +"step5" : {
> +"MACHINE" : "qemux86-64",
> +"SDKMACHINE" : "x86_64",
> +"BBTARGETS" : "core-image-minimal:do_populate_sdk_ext 
> core-image-sato:do_populate_sdk"
> +}
> +},
>  "nightly-mips" : {
>  "MACHINE" : "qemumips",
>  "TEMPLATE" : "nightly-arch",


[yocto] [yocto-autobuilder-helper][PATCH] config.json: Add steps to test new workers before adding to the main pool

2018-11-07 Thread Michael Halstead
We add workers to the nightly-bringup pool to test them in a production-like
environment. Include one completely emulated target and one to test
virtualization extensions.

Signed-off-by: Michael Halstead 
---
 config.json | 13 +
 1 file changed, 13 insertions(+)

diff --git a/config.json b/config.json
index f912abf..2945155 100644
--- a/config.json
+++ b/config.json
@@ -145,6 +145,19 @@
 "SANITYTARGETS" : "core-image-minimal:do_testsdkext"
 }
 },
+   "nightly-bringup" : {
+"MACHINE" : "qemuarm",
+"TEMPLATE" : "nightly-arch",
+"step2" : {
+"MACHINE" : "beaglebone-yocto",
+"BBTARGETS" : "core-image-sato core-image-sato-dev 
core-image-sato-sdk core-image-minimal core-image-minimal-dev 
core-image-sato-sdk-ptest core-image-sato:do_populate_sdk"
+},
+"step5" : {
+"MACHINE" : "qemux86-64",
+"SDKMACHINE" : "x86_64",
+"BBTARGETS" : "core-image-minimal:do_populate_sdk_ext 
core-image-sato:do_populate_sdk"
+}
+},
 "nightly-mips" : {
 "MACHINE" : "qemumips",
 "TEMPLATE" : "nightly-arch",
-- 
2.17.2
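Since config.json is hand-edited in changes like this, a quick syntax check before submitting can catch mistakes early (a generic check, not part of the autobuilder tooling; the snippet below is a stand-in fragment):

```shell
# Validate the JSON syntax of a config fragment before pushing it.
cat > config-snippet.json <<'EOF'
{"nightly-bringup": {"MACHINE": "qemuarm", "TEMPLATE": "nightly-arch"}}
EOF
python3 -m json.tool config-snippet.json > /dev/null && echo "JSON OK"
```

The same command run against the real config.json will fail loudly on a missing comma or brace.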



[yocto] [error-report-web][PATCH] Allow autobuilder filter string to match anywhere

2018-11-07 Thread Michael Halstead
Multiple build clusters use the same prefix, so we now match a more
distinctive string anywhere in the submitter name.

Signed-off-by: Michael Halstead 
---
 Post/feed.py  | 2 +-
 Post/views.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Post/feed.py b/Post/feed.py
index 745ee23..45c5327 100644
--- a/Post/feed.py
+++ b/Post/feed.py
@@ -25,7 +25,7 @@ class LatestEntriesFeed(Feed):
 if self.mode == results_mode.SPECIAL_SUBMITTER and 
hasattr(settings,"SPECIAL_SUBMITTER"):
 #Special submitter mode see settings.py to enable
 name = settings.SPECIAL_SUBMITTER['name']
-queryset = 
BuildFailure.objects.order_by('-BUILD__DATE').filter(BUILD__NAME__istartswith=name)[:self.limit]
+queryset = 
BuildFailure.objects.order_by('-BUILD__DATE').filter(BUILD__NAME__icontains=name)[:self.limit]
 
 else:
 queryset = 
BuildFailure.objects.order_by('-BUILD__DATE')[:self.limit]
diff --git a/Post/views.py b/Post/views.py
index 7f2cffb..cabd29b 100644
--- a/Post/views.py
+++ b/Post/views.py
@@ -205,7 +205,7 @@ def search(request, mode=results_mode.LATEST, **kwargs):
 if mode == results_mode.SPECIAL_SUBMITTER and 
hasattr(settings,"SPECIAL_SUBMITTER"):
 #Special submitter mode see settings.py to enable
 name = settings.SPECIAL_SUBMITTER['name']
-items = items.filter(BUILD__NAME__istartswith=name)
+items = items.filter(BUILD__NAME__icontains=name)
 
 elif mode == results_mode.SEARCH and request.GET.has_key("query"):
 query = request.GET["query"]
-- 
2.17.2
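The istartswith-to-icontains change is the difference between anchored and unanchored matching; a quick shell analogy (file contents invented for illustration):

```shell
printf '%s\n' 'yocto-worker1' 'ubuntu1804-yocto-worker2' > submitters.txt
# Prefix-only match (like istartswith): misses the second entry
grep -c '^yocto' submitters.txt      # -> 1
# Match anywhere (like icontains): finds both
grep -c 'yocto' submitters.txt       # -> 2
```

Hence a submitter name configured as "yocto" now matches workers from every cluster, whatever their hostname prefix.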



Re: [yocto] [opkg-devel] [opkg-utils PATCH] opkg-make-index: use ctime instead of mtime

2018-11-07 Thread Alejandro Del Castillo



On 11/7/18 6:07 AM, Stefan Agner wrote:
> Hi Alejandro,
> 
> On 22.10.2018 16:45, Alejandro Del Castillo wrote:
>> makes sense, sounds like this is going to fix a bunch of nasty
>> intermittent failures, thanks!
>>
>> merged
> 
> Thanks for merging!

np

> With the merge in opkg-utils this is not yet actively used in OE of
> course. So my question: Is there a new release of opkg-utils planned
> anytime soon? Or should that be added as a patch in the OE recipe for
> now?

I plan to release opkg and opkg-utils version 0.4.0 in mid-December.
Maybe it's worth adding the patch now?

-- 
Cheers,

Alejandro


[linux-yocto] [linux-yocto-4.4][PATCH 3/3] linux-yocto-rt/4.4: update to 4.4.162

2018-11-07 Thread Armin Kuster
Signed-off-by: Armin Kuster 
---
 meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb 
b/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb
index f61a479..cd6e0aa 100644
--- a/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb
+++ b/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb
@@ -11,13 +11,13 @@ python () {
 raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to 
linux-yocto-rt to enable it")
 }
 
-SRCREV_machine ?= "68a7377ccf81ae642a9f95042fd1a44e3fc61587"
-SRCREV_meta ?= "b41a36ffe53f73c86a0f3672d32b5ebec59ab15e"
+SRCREV_machine ?= "515e72c4bbb5d99964669052220fe459177b7329"
+SRCREV_meta ?= "69ebea34250696ebe2d8c87c553480974e56d922"
 
 SRC_URI = 
"git://git.yoctoproject.org/linux-yocto-4.4.git;branch=${KBRANCH};name=machine \

git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.4;destsuffix=${KMETA}"
 
-LINUX_VERSION ?= "4.4.141"
+LINUX_VERSION ?= "4.4.162"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
-- 
2.7.4



[linux-yocto] [linux-yocto-4.4][PATCH 2/3] linux-yocto-tiny/4.4: update to 4.4.162

2018-11-07 Thread Armin Kuster
Signed-off-by: Armin Kuster 
---
 meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb 
b/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb
index 96df596..fcf0c6a 100644
--- a/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb
+++ b/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb
@@ -4,13 +4,13 @@ KCONFIG_MODE = "--allnoconfig"
 
 require recipes-kernel/linux/linux-yocto.inc
 
-LINUX_VERSION ?= "4.4.141"
+LINUX_VERSION ?= "4.4.162"
 
 KMETA = "kernel-meta"
 KCONF_BSP_AUDIT_LEVEL = "2"
 
-SRCREV_machine ?= "7c4dd5edc287abd270a97eb38cee98f0d0318418"
-SRCREV_meta ?= "b41a36ffe53f73c86a0f3672d32b5ebec59ab15e"
+SRCREV_machine ?= "a575843cceb539c7b0514e7d74b7936ca104b623"
+SRCREV_meta ?= "69ebea34250696ebe2d8c87c553480974e56d922"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
-- 
2.7.4



[linux-yocto] [linux-yocto-4.4][PATCH 1/3] linux-yocto/4.4: update to 4.4.162

2018-11-07 Thread Armin Kuster
Signed-off-by: Armin Kuster 
---
 meta/recipes-kernel/linux/linux-yocto_4.4.bb | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/meta/recipes-kernel/linux/linux-yocto_4.4.bb 
b/meta/recipes-kernel/linux/linux-yocto_4.4.bb
index 094e4fb..9d07247 100644
--- a/meta/recipes-kernel/linux/linux-yocto_4.4.bb
+++ b/meta/recipes-kernel/linux/linux-yocto_4.4.bb
@@ -11,20 +11,20 @@ KBRANCH_qemux86  ?= "standard/base"
 KBRANCH_qemux86-64 ?= "standard/base"
 KBRANCH_qemumips64 ?= "standard/mti-malta64"
 
-SRCREV_machine_qemuarm ?= "acf04f09f4f2d2b568618b20171cf45afd6b2e28"
-SRCREV_machine_qemuarm64 ?= "7c4dd5edc287abd270a97eb38cee98f0d0318418"
-SRCREV_machine_qemumips ?= "df50bef6ecd2df365c203cc400920f6560e26a8c"
-SRCREV_machine_qemuppc ?= "7c4dd5edc287abd270a97eb38cee98f0d0318418"
-SRCREV_machine_qemux86 ?= "7c4dd5edc287abd270a97eb38cee98f0d0318418"
-SRCREV_machine_qemux86-64 ?= "7c4dd5edc287abd270a97eb38cee98f0d0318418"
-SRCREV_machine_qemumips64 ?= "27acba48690468efe0a888fcd68493c82658c7c2"
-SRCREV_machine ?= "7c4dd5edc287abd270a97eb38cee98f0d0318418"
-SRCREV_meta ?= "b41a36ffe53f73c86a0f3672d32b5ebec59ab15e"
+SRCREV_machine_qemuarm ?= "a68a73dbd3c37ec21239dd97060eef308f1ff958"
+SRCREV_machine_qemuarm64 ?= "a575843cceb539c7b0514e7d74b7936ca104b623"
+SRCREV_machine_qemumips ?= "3c0e62ea8803a1757e389dcd6233e3d6acba8d2c"
+SRCREV_machine_qemuppc ?= "a575843cceb539c7b0514e7d74b7936ca104b623"
+SRCREV_machine_qemux86 ?= "a575843cceb539c7b0514e7d74b7936ca104b623"
+SRCREV_machine_qemux86-64 ?= "a575843cceb539c7b0514e7d74b7936ca104b623"
+SRCREV_machine_qemumips64 ?= "eaed2a94a20c7f65afa342d9243f19337f63b434"
+SRCREV_machine ?= "a575843cceb539c7b0514e7d74b7936ca104b623"
+SRCREV_meta ?= "69ebea34250696ebe2d8c87c553480974e56d922"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.4.git;name=machine;branch=${KBRANCH}; \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.4;destsuffix=${KMETA}"
 
-LINUX_VERSION ?= "4.4.141"
+LINUX_VERSION ?= "4.4.162"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
-- 
2.7.4



Re: [yocto] thud, beaglebone-yocto.conf: SERIAL_CONSOLES setting

2018-11-07 Thread Kevin Hao
On Wed, Nov 07, 2018 at 08:33:51AM +0100, Heiko Schocher wrote:
> Hello Kevin, Robert,
> 
> Am 06.11.2018 um 09:10 schrieb Heiko Schocher:
> > Hello Kevin, Robert,
> > 
> > Am 05.11.2018 um 06:26 schrieb Kevin Hao:
> > > On Sun, Nov 04, 2018 at 12:10:00PM +0200, Robert Berger wrote:
> > > > Hi,
> > > > 
> > > > On 02.11.18 16:27, Khem Raj wrote:
> > > > > 
> > > > > omap serial is obsolete; why does linux-yocto keep using it?
> > > > > Secondly, machine config should enable both consoles ttyO0 and ttyS0 if
> > > > > you know that at least one kernel is using ttyO0
> > > > > 
> > > > How about picking whatever works for you in the kernel conf and in 
> > > > machine
> > > > conf?
> > > > 
> > > > SERIAL_CONSOLES = "115200;ttyS0 115200;ttyO0"
> > > > SERIAL_CONSOLES_CHECK = "${SERIAL_CONSOLES}"
> > > > 
> > > > Like this on the first boot either ttyO0 or ttyS0 should be picked
> > > > automatically.
> > > 
> > > Yes, this is doable. Would you mind sending a patch?
> > 
> > Sorry for answering so late... good hint, I missed SERIAL_CONSOLES_CHECK
> > 
> > I'll try this change and report back; give me some time...
> 
> 
> diff --git a/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf b/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf
> index e911e75004..def3a2ae06 100644
> --- a/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf
> +++ b/meta-yocto-bsp/conf/machine/beaglebone-yocto.conf
> @@ -20,7 +20,8 @@ WKS_FILE ?= "beaglebone-yocto.wks"
>  IMAGE_INSTALL_append = " kernel-devicetree kernel-image-zimage"
>  do_image_wic[depends] += "mtools-native:do_populate_sysroot dosfstools-native:do_populate_sysroot"
> 
> -SERIAL_CONSOLES = "115200;ttyO0"
> +SERIAL_CONSOLES = "115200;ttyS0 115200;ttyO0"
> +SERIAL_CONSOLES_CHECK = "${SERIAL_CONSOLES}"
> 
>  PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
>  PREFERRED_VERSION_linux-yocto ?= "4.18%"
> 
> and on my BeagleBone Black, kernels with the 8250 serial driver enabled
> (console ttyS0) and kernels with the omap_serial driver enabled (console ttyO0)
> both boot.
> 
> Unfortunately it took me some time until I realized that my settings
> in auto.conf do not work, because in beaglebone-yocto.conf
> 
> SERIAL_CONSOLES = "115200;ttyO0"
> 
> is set ... Maybe a
> 
> SERIAL_CONSOLES ?= "115200;ttyS0 115200;ttyO0"
> 
> would be friendlier?

Yes, "?=" is better.

> 
> Should I send a formal patch?

Yes, please.

Thanks,
Kevin

> 
> bye,
> Heiko
> -- 
> DENX Software Engineering GmbH,  Managing Director: Wolfgang Denk
> HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
> Phone: +49-8142-66989-52   Fax: +49-8142-66989-80   Email: h...@denx.de


-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Fwd: [meta-raspberrypi] Problem with adding udev rules

2018-11-07 Thread Markus W
Probably, I'm learning ;-)

On Wed, 7 Nov 2018 at 12:59, Outback Dingo  wrote:

> couldn't this have been a udev_append.bb
>
> instead of writing your own my-rules
> On Wed, Nov 7, 2018 at 6:46 PM Markus W  wrote:
> >
> > This is my bb file, and then I added the following to local.conf:
> > IMAGE_INSTALL_append = " my-rules ...".
> >
> > SUMMARY = "My rules"
> > LICENSE = "CLOSED"
> > PR = "r1"
> >
> > SRC_URI = "file://90-interfaces.rules"
> >
> > do_install[nostamp] = "1"
> > do_unpack[nostamp] = "1"
> >
> > do_install () {
> > install -d ${D}${sysconfdir}/udev/rules.d
> > install -m 0666 ${WORKDIR}/90-interfaces.rules
> ${D}/etc/udev/rules.d/90-interfaces.rules
> >
> > }
> >
> > FILES_${PN} += " /etc/udev/rules.d/90-interfaces.rules"
> >
> > PACKAGES = "${PN}"
> > PROVIDES = "my-rules"
> >
> > Hope this helps
> >
> > /Markus
> >
> > On Wed, 7 Nov 2018 at 12:41, Outback Dingo 
> wrote:
> >>
> >> so curious, as im trying to add 2 udev rules also that i require in my
> >> build, so how was your recipe configured?
> >>
> >> mine keeps telling me i have a conflict between packages as they both
> >> install files to the same name and same place
> >> On Wed, Nov 7, 2018 at 4:53 PM Outback Dingo 
> wrote:
> >> >
> >> >
> >> >
> >> > On Wed, Nov 7, 2018, 16:44 Markus W  wrote:
> >> >> I have resolved this issue. My problem was that in my layer I have a
> >> >> recipe-core and within that I had the following structure
> >> >> udev/udev-extra-rules and udev-extra-rules.bb file and a files dir
> on
> >> >> the same level.
> >> >>
> >> >> By renaming udev/udev-extra-rules to my-udev/my-udev-extra-rules it
> >> >> suddenly worked.
> >> >>
> >> >> Cool
> >> >>
> >> >>
> >> >> Regards,
> >> >> Markus
> >> >>
> >> >> On Tue, 6 Nov 2018 at 14:06, Outback Dingo 
> wrote:
> >> >> >
> >> >> > On Tue, Nov 6, 2018 at 11:57 AM Markus W 
> wrote:
> >> >> > >
> >> >> > > Hi!
> >> >> > >
> >> >> > > I want to append the rules in the
> >> >> > > recipe-core/udev/udev-rules-rpi/99-com.rules with the rules
> below from
> >> >> > > within my own recipe. I can't figure out how to do that.
> >> >> > >
> >> >> > > I have tried to add those rules as separate rules file in a
> recipe in
> >> >> > > my own layer. After the build I can see that the rules file is
> in the
> >> >> > > correct directory /etc/udev/rules.d (next to 99-com.rules) but
> the
> >> >> > > rules didn't get applied. The groups below I have created by
> >> >> > > inheriting the useradd class (GROUPADD_PARAM_${PN} = "-r spi; -r
> i2c;
> >> >> > > -r gpio") in a different layer with a higher priority than the
> layer
> >> >> > > with the rules recipe.
> >> >> > >
> >> >> > > Not sure why this is not working. Any suggestions?
> >> >> > >
> >> >> > > 90-interfaces.rules file:
> >> >> > >
> >> >> > > SUBSYSTEM=="input", GROUP="input", MODE="0660"
> >> >> > > SUBSYSTEM=="i2c-dev", GROUP="i2c", MODE="0660"
> >> >> > > SUBSYSTEM=="spidev", GROUP="spi", MODE="0660"
> >> >> > > SUBSYSTEM=="bcm2835-gpiomem", GROUP="gpio", MODE="0660"
> >> >> > >
> >> >> > > SUBSYSTEM=="gpio", GROUP="gpio", MODE="0660"
> >> >> > > SUBSYSTEM=="gpio*", PROGRAM="/bin/sh -c '\
> >> >> > > chown -R root:gpio /sys/class/gpio && chmod -R 770
> /sys/class/gpio;\
> >> >> > > chown -R root:gpio /sys/devices/virtual/gpio && chmod -R 770
> >> >> > > /sys/devices/virtual/gpio;\
> >> >> > > chown -R root:gpio /sys$devpath && chmod -R 770 /sys$devpath\
> >> >> > > '"
> >> >> > >
> >> >> >
> >> >> > might help to post the recipe used.
> >> >> >
> >> >> >
> >> >> > > Regards,
> >> >> > > Markus
> >> >> > > --
> >> >> > > ___
> >> >> > > yocto mailing list
> >> >> > > yocto@yoctoproject.org
> >> >> > > https://lists.yoctoproject.org/listinfo/yocto
>


Re: [yocto] lib32-ncurses not installing in rootfs

2018-11-07 Thread Burton, Ross
The curses libraries are split up further; have a look at what else is in
packages-split, or even better use oe-pkgdata-util:

$ oe-pkgdata-util list-pkg-files -p lib32-ncurses
...
lib32-ncurses-libncurses:
/lib/libncurses.so.5
/lib/libncurses.so.5.9
lib32-ncurses-libncursesw:
/lib/libncursesw.so.5
/lib/libncursesw.so.5.9
lib32-ncurses-libpanel:
/usr/lib/libpanel.so.5
/usr/lib/libpanel.so.5.9
...

$ oe-pkgdata-util find-path */libncurses.so.*
lib32-ncurses-dbg: /lib/.debug/libncurses.so.5.9
lib32-ncurses-libncurses: /lib/libncurses.so.5
lib32-ncurses-libncurses: /lib/libncurses.so.5.9

Ross
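[Editorial note: given the package split shown above, the libraries land in the image only when the corresponding sub-package is installed explicitly. A hedged local.conf sketch, with package names taken from the oe-pkgdata-util output above:]

```
# local.conf fragment -- install the split runtime packages explicitly
IMAGE_INSTALL_append = " lib32-ncurses-libncurses lib32-ncurses-libncursesw"
```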

On Wed, 7 Nov 2018 at 09:39, Mohammad, Jamal M <
mohammadjamal.mohiud...@ncr.com> wrote:

> There are many directories inside the packages-split folder: lib32-ncurses,
> lib32-ncurses-dbg, lib32-ncurses-dev, lib32-ncurses-doc
>
>
>
> Looking into lib32-ncurses,
>
>
>
> lib32-ncurses
>
> └── usr
>
> ├── bin
>
> │   ├── tput
>
> │   └── tset
>
> └── share
>
> └── tabset
>
> ├── std
>
> ├── stdcrt
>
> ├── vt100
>
> └── vt300
>
>
>
> It doesn’t have a lib folder, so where are the libncurses.so files?
>
>
>
> *From:* ChenQi [mailto:qi.c...@windriver.com]
> *Sent:* Wednesday, November 7, 2018 3:04 PM
> *To:* Mohammad, Jamal M ;
> Yocto-mailing-list 
> *Subject:* Re: [yocto] lib32-ncurses not installing in rootfs
>
>
>
> **External Message* - Use caution before opening links or attachments*
>
>
>
> Check the packages-split/ directory to see how files are put in each
> package.
> I guess these files are packaged into other packages derived from the
> ncurses recipe.
>
> Best Regards,
> Chen Qi
>
> On 11/07/2018 05:21 PM, Mohammad, Jamal M wrote:
>
> Hi Guys,
>
> I am trying to add 32-bit ncurses into my root file system
>
> I am using intel yocto bsp sumo branch
>
> Here is my local.conf:
>
>
>
> require conf/multilib.conf
>
> DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
>
> IMAGE_INSTALL_append = " dpkg gnutls lib32-glibc lib32-libgcc
> lib32-libstdc++ lib32-gnutls lib32-freetype lib32-libx11 lib32-ncurses
> lib32-dpkg python3-six"
>
>
>
> ncurses folder is present in tmp
>
> build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0
>
>
>
> The image folder is created and has the libraries
>
>
> build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0/image/lib
>
> libncurses.so.5  libncurses.so.5.9  libncursesw.so.5  libncursesw.so.5.9
> libtinfo.so.5  libtinfo.so.5.9
>
>
>
> But these files are not present in the root file system.
>
> How can I debug this, or what should be my next step to get them into the root
> file system? Which log files should I look at?
>
> Thanks for your time.
>
>
>
> Regards,
>
> Jamal,
>
> Software Specialist,
>
> NCR Corporation
>
>
>
>
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
>


Re: [yocto] [opkg-devel] [opkg-utils PATCH] opkg-make-index: use ctime instead of mtime

2018-11-07 Thread Stefan Agner
Hi Alejandro,

On 22.10.2018 16:45, Alejandro Del Castillo wrote:
> makes sense, sounds like this is going to fix a bunch of nasty 
> intermittent failures, thanks!
> 
> merged

Thanks for merging!

With the merge in opkg-utils this is not yet actively used in OE of
course. So my question: Is there a new release of opkg-utils planned
anytime soon? Or should that be added as a patch in the OE recipe for
now?

--
Stefan
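[Editorial note: the mtime-vs-ctime behaviour the patch below relies on can be demonstrated with a short Python sketch (an illustration, not from the thread):]

```python
import os
import tempfile
import time

# Show why ctime catches a change that mtime misses (Linux semantics).
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo.ipk")

with open(path, "w") as f:
    f.write("payload from build A")
before = os.stat(path)

time.sleep(1.1)  # ensure timestamps can differ at 1-second resolution

# Rewrite the file, then force mtime back to the original value --
# mimicking two builds that emit packages with identical mtimes.
with open(path, "w") as f:
    f.write("payload from build B")
os.utime(path, (before.st_atime, before.st_mtime))
after = os.stat(path)

# mtime matches, so an mtime-keyed index would wrongly keep the stale entry...
assert int(after.st_mtime) == int(before.st_mtime)
# ...but ctime moved forward, because the rewrite and utime() touch the inode.
assert after.st_ctime > before.st_ctime
```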

> 
> On 10/19/18 10:38 AM, Stefan Agner wrote:
>> From: Stefan Agner 
>>
>> When using sstate, two parallel builds can produce two packages
>> with the same mtime but different checksums. When later one of
>> those two builds fetches the other's ipk, the package index does
>> not get updated properly (since mtime matches). This ends up with
>> messages such as:
>>Downloading file:/../tmp/work/../image/...ipk.
>>Removing corrupt package file 
>> /../sysroot/../var/cache/opkg/volatile/...ipk
>>
>> However, in that case, ctime is different. Use ctime instead of
>> mtime to prevent failures like this.
>>
>> Suggested-by: Khem Raj 
>> Signed-off-by: Stefan Agner 
>> ---
>> This addresses the issue discussed here:
>> http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156348.html
>>
>>   opkg-make-index | 6 +++---
>>   1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/opkg-make-index b/opkg-make-index
>> index 3227fc0..db7bf64 100755
>> --- a/opkg-make-index
>> +++ b/opkg-make-index
>> @@ -115,12 +115,12 @@ for abspath in files:
>>pkg = None
>>fnameStat = os.stat(abspath)
>>if filename in old_pkg_hash:
>> -  if filename in pkgsStamps and int(fnameStat.st_mtime) == 
>> pkgsStamps[filename]:
>> +  if filename in pkgsStamps and int(fnameStat.st_ctime) == 
>> pkgsStamps[filename]:
>>   if (verbose):
>>  sys.stderr.write("Found %s in Packages\n" % (filename,))
>>   pkg = old_pkg_hash[filename]
>> else:
>> -   sys.stderr.write("Found %s in Packages, but mtime differs - 
>> re-reading\n" % (filename,))
>> +   sys.stderr.write("Found %s in Packages, but ctime differs - 
>> re-reading\n" % (filename,))
>>
>>if not pkg:
>> if (verbose):
>> @@ -137,7 +137,7 @@ for abspath in files:
>>else:
>> old_filename = ""
>>s = packages.add_package(pkg, opt_a)
>> - pkgsStamps[filename] = fnameStat.st_mtime
>> + pkgsStamps[filename] = fnameStat.st_ctime
>>if s == 0:
>> if old_filename:
>>  # old package was displaced by newer
>>
> 
> -- 
> Cheers,
> 
> Alejandro


Re: [yocto] Fwd: [meta-raspberrypi] Problem with adding udev rules

2018-11-07 Thread Outback Dingo
couldn't this have been a udev_append.bb

instead of writing your own my-rules
On Wed, Nov 7, 2018 at 6:46 PM Markus W  wrote:
>
> This is my bb file, and then I added the following to local.conf:
> IMAGE_INSTALL_append = " my-rules ...".
>
> SUMMARY = "My rules"
> LICENSE = "CLOSED"
> PR = "r1"
>
> SRC_URI = "file://90-interfaces.rules"
>
> do_install[nostamp] = "1"
> do_unpack[nostamp] = "1"
>
> do_install () {
> install -d ${D}${sysconfdir}/udev/rules.d
> install -m 0666 ${WORKDIR}/90-interfaces.rules 
> ${D}/etc/udev/rules.d/90-interfaces.rules
>
> }
>
> FILES_${PN} += " /etc/udev/rules.d/90-interfaces.rules"
>
> PACKAGES = "${PN}"
> PROVIDES = "my-rules"
>
> Hope this helps
>
> /Markus
>
> On Wed, 7 Nov 2018 at 12:41, Outback Dingo  wrote:
>>
>> so curious, as im trying to add 2 udev rules also that i require in my
>> build, so how was your recipe configured?
>>
>> mine keeps telling me i have a conflict between packages as they both
>> install files to the same name and same place
>> On Wed, Nov 7, 2018 at 4:53 PM Outback Dingo  wrote:
>> >
>> >
>> >
> >> > On Wed, Nov 7, 2018, 16:44 Markus W  wrote:
>> >> I have resolved this issue. My problem was that in my layer I have a
>> >> recipe-core and within that I had the following structure
>> >> udev/udev-extra-rules and udev-extra-rules.bb file and a files dir on
>> >> the same level.
>> >>
>> >> By renaming udev/udev-extra-rules to my-udev/my-udev-extra-rules it
>> >> suddenly worked.
>> >>
>> >> Cool
>> >>
>> >>
>> >> Regards,
>> >> Markus
>> >>
>> >> On Tue, 6 Nov 2018 at 14:06, Outback Dingo  wrote:
>> >> >
>> >> > On Tue, Nov 6, 2018 at 11:57 AM Markus W  wrote:
>> >> > >
>> >> > > Hi!
>> >> > >
>> >> > > I want to append the rules in the
>> >> > > recipe-core/udev/udev-rules-rpi/99-com.rules with the rules below from
> >> >> > > within my own recipe. I can't figure out how to do that.
>> >> > >
>> >> > > I have tried to add those rules as separate rules file in a recipe in
>> >> > > my own layer. After the build I can see that the rules file is in the
>> >> > > correct directory /etc/udev/rules.d (next to 99-com.rules) but the
>> >> > > rules didn't get applied. The groups below I have created by
>> >> > > inheriting the useradd class (GROUPADD_PARAM_${PN} = "-r spi; -r i2c;
>> >> > > -r gpio") in a different layer with a higher priority than the layer
>> >> > > with the rules recipe.
>> >> > >
>> >> > > Not sure why this is not working. Any suggestions?
>> >> > >
>> >> > > 90-interfaces.rules file:
>> >> > >
>> >> > > SUBSYSTEM=="input", GROUP="input", MODE="0660"
>> >> > > SUBSYSTEM=="i2c-dev", GROUP="i2c", MODE="0660"
>> >> > > SUBSYSTEM=="spidev", GROUP="spi", MODE="0660"
>> >> > > SUBSYSTEM=="bcm2835-gpiomem", GROUP="gpio", MODE="0660"
>> >> > >
>> >> > > SUBSYSTEM=="gpio", GROUP="gpio", MODE="0660"
>> >> > > SUBSYSTEM=="gpio*", PROGRAM="/bin/sh -c '\
>> >> > > chown -R root:gpio /sys/class/gpio && chmod -R 770 /sys/class/gpio;\
>> >> > > chown -R root:gpio /sys/devices/virtual/gpio && chmod -R 770
>> >> > > /sys/devices/virtual/gpio;\
>> >> > > chown -R root:gpio /sys$devpath && chmod -R 770 /sys$devpath\
>> >> > > '"
>> >> > >
>> >> >
>> >> > might help to post the recipe used.
>> >> >
>> >> >
>> >> > > Regards,
>> >> > > Markus
>> >> > > --
>> >> > > ___
>> >> > > yocto mailing list
>> >> > > yocto@yoctoproject.org
>> >> > > https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Fwd: [meta-raspberrypi] Problem with adding udev rules

2018-11-07 Thread Markus W
This is my bb file, and then I added the following to local.conf:
IMAGE_INSTALL_append = " my-rules ...".

SUMMARY = "My rules"
LICENSE = "CLOSED"
PR = "r1"

SRC_URI = "file://90-interfaces.rules"

do_install[nostamp] = "1"
do_unpack[nostamp] = "1"

do_install () {
install -d ${D}${sysconfdir}/udev/rules.d
install -m 0666 ${WORKDIR}/90-interfaces.rules
${D}/etc/udev/rules.d/90-interfaces.rules

}

FILES_${PN} += " /etc/udev/rules.d/90-interfaces.rules"

PACKAGES = "${PN}"
PROVIDES = "my-rules"

Hope this helps

/Markus
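[Editorial note: the nostamp flags in the recipe above defeat sstate caching and are not needed once the recipe works, and udev rules only need to be readable, so 0644 is a safer mode than 0666. A tightened, untested sketch of the same recipe:]

```
SUMMARY = "My rules"
LICENSE = "CLOSED"

SRC_URI = "file://90-interfaces.rules"

do_install () {
    install -d ${D}${sysconfdir}/udev/rules.d
    # 0644 is enough: udev only needs to read the rules file.
    install -m 0644 ${WORKDIR}/90-interfaces.rules ${D}${sysconfdir}/udev/rules.d/
}

FILES_${PN} += "${sysconfdir}/udev/rules.d/90-interfaces.rules"
```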

On Wed, 7 Nov 2018 at 12:41, Outback Dingo  wrote:

> so curious, as im trying to add 2 udev rules also that i require in my
> build, so how was your recipe configured?
>
> mine keeps telling me i have a conflict between packages as they both
> install files to the same name and same place
> On Wed, Nov 7, 2018 at 4:53 PM Outback Dingo 
> wrote:
> >
> >
> >
> > On Wed, Nov 7, 2018, 16:44 Markus W  wrote:
> >> I have resolved this issue. My problem was that in my layer I have a
> >> recipe-core and within that I had the following structure
> >> udev/udev-extra-rules and udev-extra-rules.bb file and a files dir on
> >> the same level.
> >>
> >> By renaming udev/udev-extra-rules to my-udev/my-udev-extra-rules it
> >> suddenly worked.
> >>
> >> Cool
> >>
> >>
> >> Regards,
> >> Markus
> >>
> >> On Tue, 6 Nov 2018 at 14:06, Outback Dingo 
> wrote:
> >> >
> >> > On Tue, Nov 6, 2018 at 11:57 AM Markus W 
> wrote:
> >> > >
> >> > > Hi!
> >> > >
> >> > > I want to append the rules in the
> >> > > recipe-core/udev/udev-rules-rpi/99-com.rules with the rules below
> from
> >> > > within my own recipe. I can't figure out how to do that.
> >> > >
> >> > > I have tried to add those rules as separate rules file in a recipe
> in
> >> > > my own layer. After the build I can see that the rules file is in
> the
> >> > > correct directory /etc/udev/rules.d (next to 99-com.rules) but the
> >> > > rules didn't get applied. The groups below I have created by
> >> > > inheriting the useradd class (GROUPADD_PARAM_${PN} = "-r spi; -r
> i2c;
> >> > > -r gpio") in a different layer with a higher priority than the layer
> >> > > with the rules recipe.
> >> > >
> >> > > Not sure why this is not working. Any suggestions?
> >> > >
> >> > > 90-interfaces.rules file:
> >> > >
> >> > > SUBSYSTEM=="input", GROUP="input", MODE="0660"
> >> > > SUBSYSTEM=="i2c-dev", GROUP="i2c", MODE="0660"
> >> > > SUBSYSTEM=="spidev", GROUP="spi", MODE="0660"
> >> > > SUBSYSTEM=="bcm2835-gpiomem", GROUP="gpio", MODE="0660"
> >> > >
> >> > > SUBSYSTEM=="gpio", GROUP="gpio", MODE="0660"
> >> > > SUBSYSTEM=="gpio*", PROGRAM="/bin/sh -c '\
> >> > > chown -R root:gpio /sys/class/gpio && chmod -R 770 /sys/class/gpio;\
> >> > > chown -R root:gpio /sys/devices/virtual/gpio && chmod -R 770
> >> > > /sys/devices/virtual/gpio;\
> >> > > chown -R root:gpio /sys$devpath && chmod -R 770 /sys$devpath\
> >> > > '"
> >> > >
> >> >
> >> > might help to post the recipe used.
> >> >
> >> >
> >> > > Regards,
> >> > > Markus
> >> > > --
> >> > > ___
> >> > > yocto mailing list
> >> > > yocto@yoctoproject.org
> >> > > https://lists.yoctoproject.org/listinfo/yocto
>


Re: [yocto] Yocto and Debian package repositories

2018-11-07 Thread Mauro Ziliani
Thank you for your answer.

My system is built only from Yocto Krogoth with deb packages, and
package-management turned on in EXTRAS_DISTRO_FEATURES.

The package repository is only my own server, and it contains only the debs
built with Yocto.

So I think the dependencies are always OK, because no other repos are used.


Mauro


Il 30/10/2018 22:21, Khem Raj ha scritto:
> On Tue, Oct 30, 2018 at 8:14 AM Mauro Ziliani  wrote:
>> Hi all.
>>
>> I often work with Debian and I build my own repository accessible with apt.
>>
>> I see that when I build the image recipe, many deb packages are
>> produced, with the Packages and Release files ready.
>>
>> How can I use these packages to upload them all to a Debian customer
>> server?
> So I am assuming you are thinking of installing these .debs onto a
> Debian-based target.
> If so, then it's not quite the right direction. With the OE build system we
> can generate output packages
> as ipks, rpms, debs and tarballs; it's just another output format, but
> it is not meant to be consumed
> on rpm-like or deb-like Linux systems which are built using non-OE
> distros, because packages have
> dependencies, sometimes version-specific ones, which may not
> match between the two ecosystems.
>
>> Best regards,
>>
>>   MZ
>> --
>> ___
>> yocto mailing list
>> yocto@yoctoproject.org
>> https://lists.yoctoproject.org/listinfo/yocto



[yocto] Check final recipe state

2018-11-07 Thread Alan Martinovic
Hi,
I'm looking for a way to inspect the final form of a recipe
after all the .bbappend files have been applied.

Example:
meta-openembedded/meta-oe/recipes-extended/rsyslog/rsyslog_8.29.0.bb

gets appended with:
meta-mylayer/recipes-extended/rsyslog/rsyslog_8.29.0.bbappend

I conceptually think of that as being merged together into a
single "final recipe" during the build.
Is there a way to see what that "final recipe" looks like?

(I know about the -e flag; I'm exploring whether there is something less verbose.)

Be Well,
Alan
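[Editorial note: two commands that help here, run from an initialized build directory. In recent releases bitbake-layers accepts a recipe name; older ones list all appends:]

```
$ bitbake-layers show-appends rsyslog     # which .bbappend files apply, in order
$ bitbake -e rsyslog > rsyslog-env.txt    # fully expanded metadata (verbose)
$ grep -v '^#' rsyslog-env.txt | less     # drop comment lines for a shorter view
```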


[yocto] [meta-oe] Set linux capabilities on binary on a recipe in meta-oe layer

2018-11-07 Thread Markus W
Hi!

*Background:*
In my Raspberry Pi project I am developing a nodejs app that needs access to
a bluetooth/BLE device. I want to run the node application as a non-root user
for security reasons. In order to get access from within the app, the node
binary needs to have the following capability set: *cap_net_raw+eip*. I am
using the nodejs recipe from meta-oe and added it in my local.conf:

IMAGE_INSTALL_append = " *nodejs* i2c-tools bluez5 kernel-image
kernel-devicetree"

*Question:*
Where should I apply the following command? *setcap cap_net_raw+eip
/usr/bin/node*

What are my options? Can I create a recipe in a different layer that will
apply the above command to the nodejs package from meta-oe?

I have been following this thread (
https://lists.yoctoproject.org/pipermail/yocto/2016-June/030811.html), but
the node binaries and my node-app are in different layers and packages.

Any advice on how to do this is much appreciated.

Regards,
Markus
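[Editorial note: one pattern sometimes used for this is a bbappend in your own layer — a sketch under the assumption that your package backend and image type preserve extended attributes; untested here:]

```
# nodejs_%.bbappend (hypothetical file in your own layer)
DEPENDS_append = " libcap-native"

do_install_append() {
    # Grant CAP_NET_RAW on the staged binary. The capability is stored
    # as an xattr, so it must survive packaging and image creation.
    setcap cap_net_raw+eip ${D}${bindir}/node
}
```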


Re: [yocto] lib32-ncurses not installing in rootfs

2018-11-07 Thread Mohammad, Jamal M
I see the libncurses.so file in the ncurses-dev folder inside packages-split. Do I
need to add lib32-ncurses-dev in local.conf?

From: Mohammad, Jamal M
Sent: Wednesday, November 7, 2018 3:08 PM
To: 'ChenQi' ; Yocto-mailing-list 

Subject: RE: [yocto] lib32-ncurses not installing in rootfs

There are many directories inside the packages-split folder: lib32-ncurses,
lib32-ncurses-dbg, lib32-ncurses-dev, lib32-ncurses-doc

Looking into lib32-ncurses,

lib32-ncurses
└── usr
├── bin
│   ├── tput
│   └── tset
└── share
└── tabset
├── std
├── stdcrt
├── vt100
└── vt300

It doesn't have a lib folder, so where are the libncurses.so files?

From: ChenQi [mailto:qi.c...@windriver.com]
Sent: Wednesday, November 7, 2018 3:04 PM
To: Mohammad, Jamal M 
mailto:mohammadjamal.mohiud...@ncr.com>>; 
Yocto-mailing-list mailto:yocto@yoctoproject.org>>
Subject: Re: [yocto] lib32-ncurses not installing in rootfs

*External Message* - Use caution before opening links or attachments


Check the packages-split/ directory to see how files are put in each package.
I guess these files are packaged into other packages derived from the ncurses
recipe.

Best Regards,
Chen Qi

On 11/07/2018 05:21 PM, Mohammad, Jamal M wrote:
Hi Guys,
I am trying to add 32-bit ncurses into my root file system
I am using intel yocto bsp sumo branch
Here is my local.conf:

require conf/multilib.conf
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
IMAGE_INSTALL_append = " dpkg gnutls lib32-glibc lib32-libgcc lib32-libstdc++ 
lib32-gnutls lib32-freetype lib32-libx11 lib32-ncurses lib32-dpkg 
python3-six"

ncurses folder is present in tmp
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0

The image folder is created and has the libraries
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0/image/lib
libncurses.so.5  libncurses.so.5.9  libncursesw.so.5  libncursesw.so.5.9  
libtinfo.so.5  libtinfo.so.5.9

But these files are not present in the root file system.
How can I debug this, or what should be my next step to get them into the root
file system? Which log files should I look at?
Thanks for your time.

Regards,
Jamal,
Software Specialist,
NCR Corporation





Re: [yocto] [patchtest-oe][PATCH] test_patch_cve.py: fix cve tag checking logic

2018-11-07 Thread Richard Purdie
On Fri, 2018-11-02 at 14:03 +0800, Chen Qi wrote:
> The current logic for checking the CVE tag is not correct. It errors
> out if and only if the patch contains a line which begins with
> CVE-- and contains nothing else.
> 
> It will not error out if the patch contains no CVE information, nor
> will it error out if the patch contains line like below.
> 
> 'Fix CVE--'
> 
> I can see that the cve tag checking logic tries to ensure the patch
> contains something like 'CVE: CVE--'. So fix it to implement
> such logic.
> 
> Signed-off-by: Chen Qi 
> ---
>  tests/test_patch_cve.py | 15 ---
>  1 file changed, 8 insertions(+), 7 deletions(-)

Thanks, good find.

I've merged this and I believe the instance should have it applied now
too.

Cheers,

Richard
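[Editorial note: the tag format discussed above can be sketched as a small Python check — an illustration only; the actual patchtest-oe implementation may differ in detail:]

```python
import re

# A patch should carry a tag line of the form "CVE: CVE-<year>-<id>",
# not merely mention a CVE identifier somewhere in prose.
CVE_TAG = re.compile(r'^\s*CVE:\s*CVE-\d{4}-\d{4,}\s*$', re.MULTILINE)

def has_cve_tag(patch_text):
    """Return True only for a proper 'CVE: CVE-...' tag line."""
    return bool(CVE_TAG.search(patch_text))

# A proper tag line passes; a mere mention in prose does not.
assert has_cve_tag("Backport upstream fix.\n\nCVE: CVE-2017-1000117\n")
assert not has_cve_tag("Fix CVE-2017-1000117 in http subsystem\n")
```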



Re: [yocto] Install a pre-build ipkg package at build time

2018-11-07 Thread João Gonçalves
Thanks, I didn't know that I could point SRC_URI to an ipk just like a
regular tar file.
I did that, pointed it to our ipk server, and it could download the ipk file.
However during the do_rootfs task i got the following error:

ERROR: kelvin-base-image-2.8b4-r0 do_rootfs: Error executing a python
function in exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: 
 0001:
 *** 0002:license_create_manifest(d)
 0003:
File:
'/home/joao/imx6/build/../layers/openembedded-core/meta/classes/license.bbclass',
lineno: 48, function: license_create_manifest
 0044:pkg_dic = {}
 0045:for pkg in sorted(image_list_installed_packages(d)):
 0046:pkg_info = os.path.join(d.getVar('PKGDATA_DIR'),
 0047:'runtime-reverse', pkg)
 *** 0048:pkg_name = os.path.basename(os.readlink(pkg_info))
 0049:
 0050:pkg_dic[pkg_name] =
oe.packagedata.read_pkgdatafile(pkg_info)
 0051:if not "LICENSE" in pkg_dic[pkg_name].keys():
 0052:pkg_lic_name = "LICENSE_" + pkg_name
Exception: FileNotFoundError: [Errno 2] No such file or directory:
'/home/joao/imx6/build/tmp-glibc/pkgdata/apalis-imx6/runtime-reverse/python36'

ERROR: kelvin-base-image-2.8b4-r0 do_rootfs: Function failed:
license_create_manifest
ERROR: Logfile of failure stored in:
/home/joao/imx6/build/tmp-glibc/work/apalis_imx6-angstrom-linux-gnueabi/kelvin-base-image/2.8b4-r0/temp/log.do_rootfs.1303
ERROR: Task
(/home/joao/imx6/layers/meta-kelvin/recipes-images/images/kelvin-base-image_0.2.bb:do_rootfs)
failed with exit code '1'

I used "CLOSED" as the license; I also tried putting an MIT license on the "file:.."
entry of the recipe. I got the same error. Maybe it is because I do not have a
license file inside the ipk package; I'll try putting a license file in a
package and installing it.
I'm using a set of layers provided by our board vendor, and they are using
the rocko version.

I could "hide" the problem by putting the whole body of
the license_create_manifest function of license.bbclass inside a
try-except block.
It worked but that's not a solution.
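[Editorial note: for reference, the minimal recipe shape being discussed looks roughly like this — feed URL and version hypothetical, checksum lines omitted:]

```
SUMMARY = "Prebuilt python36 package"
LICENSE = "CLOSED"

# Point SRC_URI at the prebuilt ipk on your package server.
SRC_URI = "http://example.com/feed/python36_3.6.0-r0_armhf.ipk"

inherit bin_package
```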

Khem Raj  wrote on Friday, 2/11/2018 at 16:50:

> On Fri, Nov 2, 2018 at 8:48 AM João Gonçalves
>  wrote:
> >
> > Hi all,
> >
> > I have some pre-built opkg packages and I need to install them on my
> > target image; is this possible?
> > I tried to use the bin_package class but I didn't find any example. I'm
> > also not sure if it can be used this way or just to install the data files
> > extracted from the ipkg file. Anyway, I also tried to extract the binary
> > files from the package and use the bin_package class to install them, but
> > with no success.
> >
> > Does anyone have any example of this?
> >
>
> once you point SRC_URI to your ipk
> inherit bin_package should be then able to help in repackaging it
> you have to post specific errors so we can see what might be going on
>
> > Thanks,
> > João Gonçalves
> > --
> > ___
> > yocto mailing list
> > yocto@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/yocto
>


Re: [yocto] Fwd: [meta-raspberrypi] Problem with adding udev rules

2018-11-07 Thread Outback Dingo
On Wed, Nov 7, 2018, 16:44 Markus W  wrote:
> I have resolved this issue. My problem was that in my layer I have a
> recipe-core and within that I had the following structure
> udev/udev-extra-rules and udev-extra-rules.bb file and a files dir on
> the same level.
>
> By renaming udev/udev-extra-rules to my-udev/my-udev-extra-rules it
> suddenly worked.
>
> Cool
>
>
> Regards,
> Markus
>
> On Tue, 6 Nov 2018 at 14:06, Outback Dingo  wrote:
> >
> > On Tue, Nov 6, 2018 at 11:57 AM Markus W  wrote:
> > >
> > > Hi!
> > >
> > > I want to append the rules in the
> > > recipe-core/udev/udev-rules-rpi/99-com.rules with the rules below from
> > > within my own recipe. I can't figure out how to do that.
> > >
> > > I have tried to add those rules as separate rules file in a recipe in
> > > my own layer. After the build I can see that the rules file is in the
> > > correct directory /etc/udev/rules.d (next to 99-com.rules) but the
> > > rules didn't get applied. The groups below I have created by
> > > inheriting the useradd class (GROUPADD_PARAM_${PN} = "-r spi; -r i2c;
> > > -r gpio") in a different layer with a higher priority than the layer
> > > with the rules recipe.
> > >
> > > Not sure why this is not working. Any suggestions?
> > >
> > > 90-interfaces.rules file:
> > >
> > > SUBSYSTEM=="input", GROUP="input", MODE="0660"
> > > SUBSYSTEM=="i2c-dev", GROUP="i2c", MODE="0660"
> > > SUBSYSTEM=="spidev", GROUP="spi", MODE="0660"
> > > SUBSYSTEM=="bcm2835-gpiomem", GROUP="gpio", MODE="0660"
> > >
> > > SUBSYSTEM=="gpio", GROUP="gpio", MODE="0660"
> > > SUBSYSTEM=="gpio*", PROGRAM="/bin/sh -c '\
> > > chown -R root:gpio /sys/class/gpio && chmod -R 770 /sys/class/gpio;\
> > > chown -R root:gpio /sys/devices/virtual/gpio && chmod -R 770
> > > /sys/devices/virtual/gpio;\
> > > chown -R root:gpio /sys$devpath && chmod -R 770 /sys$devpath\
> > > '"
> > >
> >
> > might help to post the recipe used.
> >
> >
> > > Regards,
> > > Markus
>
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto
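For what it's worth, a minimal sketch of such a recipe (every name here is illustrative, not taken from the thread, and the sketch is untested) that installs one extra rules file and creates the groups via the useradd class:

```bitbake
# Hypothetical my-udev-extra-rules.bb -- a sketch, not a tested recipe.
SUMMARY = "Extra udev rules for interface device permissions"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://90-interfaces.rules"
S = "${WORKDIR}"

# Create the groups the rules refer to in the same recipe,
# so the rules and the groups cannot get out of sync.
inherit useradd
USERADD_PACKAGES = "${PN}"
GROUPADD_PARAM_${PN} = "-r spi; -r i2c; -r gpio"

do_install() {
    install -d ${D}${sysconfdir}/udev/rules.d
    install -m 0644 ${WORKDIR}/90-interfaces.rules ${D}${sysconfdir}/udev/rules.d/
}
```

Files under ${sysconfdir} are packaged into ${PN} by default, so no extra FILES entry is needed.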


Re: [yocto] Fwd: [meta-raspberrypi] Problem with adding udev rules

2018-11-07 Thread Markus W
I have resolved this issue. My problem was that in my layer I have a
recipe-core and within that I had the following structure
udev/udev-extra-rules and udev-extra-rules.bb file and a files dir on
the same level.

By renaming udev/udev-extra-rules to my-udev/my-udev-extra-rules it
suddenly worked.
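In other words, the layout that worked looks roughly like this (layer name illustrative):

```
meta-mylayer/
└── recipe-core/
    └── my-udev/
        └── my-udev-extra-rules/
            ├── udev-extra-rules.bb
            └── files/
                └── 90-interfaces.rules
```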

Regards,
Markus

On Tue, 6 Nov 2018 at 14:06, Outback Dingo  wrote:
>
> On Tue, Nov 6, 2018 at 11:57 AM Markus W  wrote:
> >
> > Hi!
> >
> > I want to append the rules in the
> > recipe-core/udev/udev-rules-rpi/99-com.rules with the rules below from
> > within my own recipe. I can't figure out how to do that.
> >
> > I have tried to add those rules as separate rules file in a recipe in
> > my own layer. After the build I can see that the rules file is in the
> > correct directory /etc/udev/rules.d (next to 99-com.rules) but the
> > rules didn't get applied. The groups below I have created by
> > inheriting the useradd class (GROUPADD_PARAM_${PN} = "-r spi; -r i2c;
> > -r gpio") in a different layer with a higher priority than the layer
> > with the rules recipe.
> >
> > Not sure why this is not working. Any suggestions?
> >
> > 90-interfaces.rules file:
> >
> > SUBSYSTEM=="input", GROUP="input", MODE="0660"
> > SUBSYSTEM=="i2c-dev", GROUP="i2c", MODE="0660"
> > SUBSYSTEM=="spidev", GROUP="spi", MODE="0660"
> > SUBSYSTEM=="bcm2835-gpiomem", GROUP="gpio", MODE="0660"
> >
> > SUBSYSTEM=="gpio", GROUP="gpio", MODE="0660"
> > SUBSYSTEM=="gpio*", PROGRAM="/bin/sh -c '\
> > chown -R root:gpio /sys/class/gpio && chmod -R 770 /sys/class/gpio;\
> > chown -R root:gpio /sys/devices/virtual/gpio && chmod -R 770
> > /sys/devices/virtual/gpio;\
> > chown -R root:gpio /sys$devpath && chmod -R 770 /sys$devpath\
> > '"
> >
>
> might help to post the recipe used.
>
>
> > Regards,
> > Markus


Re: [yocto] lib32-ncurses not installing in rootfs

2018-11-07 Thread Mohammad, Jamal M
There are many directories inside the packages-split folder: lib32-ncurses,
lib32-ncurses-dbg, lib32-ncurses-dev, lib32-ncurses-doc

Looking into lib32-ncurses,

lib32-ncurses
└── usr
    ├── bin
    │   ├── tput
    │   └── tset
    └── share
        └── tabset
            ├── std
            ├── stdcrt
            ├── vt100
            └── vt300

It doesn't have a lib folder, so where are the libncurses.so files?
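Given the package split shown by oe-pkgdata-util elsewhere in the thread, the shared libraries land in a sub-package rather than in lib32-ncurses itself; pulling them into the image explicitly would be a local.conf fragment along these lines (sub-package name assumed from that split):

```
# Sketch: install the runtime library sub-package explicitly.
IMAGE_INSTALL_append = " lib32-ncurses-libncurses"
```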

From: ChenQi [mailto:qi.c...@windriver.com]
Sent: Wednesday, November 7, 2018 3:04 PM
To: Mohammad, Jamal M ; Yocto-mailing-list 

Subject: Re: [yocto] lib32-ncurses not installing in rootfs

*External Message* - Use caution before opening links or attachments


Check the packages-split/ directory to see how files are put in each package.
I guess these files are packaged into other packages derived from the ncurses
recipe.

Best Regards,
Chen Qi

On 11/07/2018 05:21 PM, Mohammad, Jamal M wrote:
Hi Guys,
I am trying to add 32-bit ncurses into my root file system
I am using intel yocto bsp sumo branch
Here is my local.conf:

require conf/multilib.conf
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
IMAGE_INSTALL_append = " dpkg gnutls lib32-glibc lib32-libgcc lib32-libstdc++ 
lib32-gnutls lib32-freetype lib32-libx11 lib32-ncurses lib32-dpkg 
python3-six"

ncurses folder is present in tmp
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0

The image folder is created and has the libraries
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0/image/lib
libncurses.so.5  libncurses.so.5.9  libncursesw.so.5  libncursesw.so.5.9  
libtinfo.so.5  libtinfo.so.5.9

But these files are not present in the root file system.
How can I debug this, or what should be my next step to get them into the root
file system? Which log files should I look at?
Thanks for your time.

Regards,
Jamal,
Software Specialist,
NCR Corporation






Re: [yocto] lib32-ncurses not installing in rootfs

2018-11-07 Thread ChenQi
Check the packages-split/ directory to see how files are put in each 
package.
I guess these files are packaged into other packages derived from the
ncurses recipe.


Best Regards,
Chen Qi

On 11/07/2018 05:21 PM, Mohammad, Jamal M wrote:


Hi Guys,

I am trying to add 32-bit ncurses into my root file system

I am using intel yocto bsp sumo branch

Here is my local.conf:

require conf/multilib.conf

DEFAULTTUNE_virtclass-multilib-lib32 = "x86"

IMAGE_INSTALL_append = " dpkg gnutls lib32-glibc lib32-libgcc 
lib32-libstdc++ lib32-gnutls lib32-freetype lib32-libx11 
lib32-ncurses lib32-dpkg python3-six"


ncurses folder is present in tmp

build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0

The image folder is created and has the libraries

build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0/image/lib

libncurses.so.5  libncurses.so.5.9 libncursesw.so.5  
libncursesw.so.5.9  libtinfo.so.5 libtinfo.so.5.9


But these files are not present in the root file system.

How can I debug this, or what should be my next step to get them into the
root file system? Which log files should I look at?


Thanks for your time.

Regards,

Jamal,

Software Specialist,

NCR Corporation







[yocto] lib32-ncurses not installing in rootfs

2018-11-07 Thread Mohammad, Jamal M
Hi Guys,
I am trying to add 32-bit ncurses into my root file system
I am using intel yocto bsp sumo branch
Here is my local.conf:

require conf/multilib.conf
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
IMAGE_INSTALL_append = " dpkg gnutls lib32-glibc lib32-libgcc lib32-libstdc++ 
lib32-gnutls lib32-freetype lib32-libx11 lib32-ncurses lib32-dpkg 
python3-six"

ncurses folder is present in tmp
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0

The image folder is created and has the libraries
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0/image/lib
libncurses.so.5  libncurses.so.5.9  libncursesw.so.5  libncursesw.so.5.9  
libtinfo.so.5  libtinfo.so.5.9

But these files are not present in the root file system.
How can I debug this, or what should be my next step to get them into the root
file system? Which log files should I look at?
Thanks for your time.

Regards,
Jamal,
Software Specialist,
NCR Corporation


Re: [yocto] [poky] bug #8729 grub bootloader

2018-11-07 Thread Dimitris Tassopoulos
Hi Anuj,

thanks for the reply. First, it was insightful on your side to use
virtual/grub-bootconf, and this made things much easier. I only had to add
`BBCLASSEXTEND = "native"` to the grub-efi recipe in order to get
`grub-editenv` for the host and create environment files on the fly.

I had a look at your recommendation and it works after some tweaking. The
`efi-bootdisk.wks.in` file
has an example of this case, as your links suggested. It seems though that
the environment variable
`${IMAGE_ROOTFS}` in the file is not translated correctly during the
bitbake build and it fails with an
error. For example in my case the error was:

Couldn't get bitbake variable from
/rnd/yocto/grub-dev/build/tmp/sysroots/qemux86/imgdata/${IMAGE_ROOTFS}/boot.env.

After I replaced `${IMAGE_ROOTFS}` in `--rootfs-dir=${IMAGE_ROOTFS}/boot` with
the absolute path, it worked. For me it's not an issue, but it's good for you
to know. Without someone pointing this out, it would be very difficult to
discover the above use case, as it's not clearly documented.

Also, it would be nice if bitbake, when it detects that both `loader=grub-efi`
and `grub-efi_%.bb` are in use, printed a warning about this overlap and
suggested the use of `--rootfs-dir` and `--exclude-path`.

I'll continue from this point now.

Thanks again,
Dimitris

On Wed, Nov 7, 2018 at 1:58 AM Mittal, Anuj  wrote:

> Hi Dimitris
>
> On Tue, 2018-11-06 at 16:12 +0100, Dimitris Tassopoulos wrote:
> >
> > In this case the bootimg-efi.py script creates the extra partition, a
> > different grub.cfg and also
> > edits the /etc/fstab and mounts this partition over /boot and
> > therefore virtual/grub-bootconf
> > becomes unused.
> >
> > This raises the following questions:
> > - Why there are two different facilities to achieve the same thing?
> > - Why the second overrides the first one, instead of sharing the same
> > files?
> > - How to proceed from now on?
> >
>
> I think this needed --exclude-path in wks to be used to take effect.
> Please see:
>
>
>
> http://lists.openembedded.org/pipermail/openembedded-core/2018-February/147837.html
>
> and bug:
>
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=10073
>


Re: [yocto] [patchtest-oe][PATCH] test_patch_cve.py: fix cve tag checking logic

2018-11-07 Thread Richard Purdie
On Fri, 2018-11-02 at 14:03 +0800, Chen Qi wrote:
> The current logic for checking cve tag is not correct. It errors
> out if and only if the patch contains a line which begins with
> CVE-- and contains nothing else.
> 
> It will not error out if the patch contains no CVE information, nor
> will it error out if the patch contains line like below.
> 
> 'Fix CVE--'
> 
> I can see that the cve tag checking logic tries to ensure the patch
> contains something like 'CVE: CVE--'. So fix to implement
> such
> logic.
> 
> Signed-off-by: Chen Qi 
> ---
>  tests/test_patch_cve.py | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)

Thanks, good find.

I've merged this and I believe the instance should have it applied now
too.

Cheers,

Richard
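The intended check described above amounts to requiring a properly formatted 'CVE:' tag line. A standalone sketch of that logic (not the actual patchtest code; the exact regex is an assumption):

```python
import re

# Require a tag line of the form "CVE: CVE-YYYY-NNNN" somewhere in the
# commit message; merely mentioning a CVE id elsewhere does not count.
CVE_TAG = re.compile(r'^\s*CVE:\s*CVE-\d{4}-\d{4,}\s*$', re.MULTILINE)

def has_cve_tag(commit_message: str) -> bool:
    """Return True only if the message carries a proper CVE tag line."""
    return bool(CVE_TAG.search(commit_message))
```

A message containing only 'Fix CVE-2018-1000' fails the check, while one carrying 'CVE: CVE-2018-1000' on its own line passes.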



[linux-yocto] [PATCH] aufs: tiny, suppress a warning

2018-11-07 Thread zhe.he
From: "J. R. Okajima" 

commit 3a33601796d4139286c57cd15bf7d88d00aa7674 upstream

Signed-off-by: J. R. Okajima 

fs/aufs/vdir.c: In function 'fillvdir':
fs/aufs/vdir.c:493:15: warning: 'ino' may be used uninitialized in this
function [-Wmaybe-uninitialized]
   arg->err = au_nhash_append_wh
              ^~~~~~~~~~~~~~~~~~
      (&arg->whlist, name, nlen, ino, d_type,
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
       arg->bindex, shwh);
       ~~~~~~~~~~~~~~~~~~

Signed-off-by: He Zhe 
---
 fs/aufs/vdir.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/aufs/vdir.c b/fs/aufs/vdir.c
index 5b78b5d..0b713a5 100644
--- a/fs/aufs/vdir.c
+++ b/fs/aufs/vdir.c
@@ -484,6 +484,7 @@ static int fillvdir(struct dir_context *ctx, const char 
*__name, int nlen,
	if (au_nhash_test_known_wh(&arg->whlist, name, nlen))
goto out; /* already whiteouted */
 
+   ino = 0; /* just to suppress a warning */
if (shwh)
		arg->err = au_wh_ino(sb, arg->bindex, h_ino, d_type,
				     &ino);
-- 
2.7.4

-- 
___
linux-yocto mailing list
linux-yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/linux-yocto