Re: [yocto] [PATCH V2 0/2] update-rc.d: support enable/disable function

2019-02-26 Thread Changqing Li

Ping

On 1/21/19 5:09 PM, changqing...@windriver.com wrote:

These patches add enable/disable support to update-rc.d.

Changes in V2:
* add some comments and fix the commit message

Changes in V1:
* change the update-rc.d script to support enabling/disabling init script links
* remove the preinst from update-rc.d.bbclass so that disable/enable still
works after an upgrade
* fix the reference manual

Changqing Li (2):
   update-rc.d.bbclass: remove preinst and remove -f for postinst
   ref-variables.xml: update manual page for update-rc.d

  documentation/ref-manual/ref-variables.xml |  2 +-
  meta/classes/update-rc.d.bbclass           | 28 ++++------------------------
  2 files changed, 5 insertions(+), 25 deletions(-)
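
For readers new to the feature, a sketch of the Debian-style enable/disable
semantics this series targets ("myservice" is a hypothetical init script name):

  # install the default start/stop links for an init script
  update-rc.d myservice defaults
  # disable the service without deleting its links (S* links become K* links)
  update-rc.d myservice disable
  # re-enable it later
  update-rc.d myservice enable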


--
BRs

Sandy (Li Changqing)



Re: [yocto] Storing Sstate in S3 success stories?

2019-02-26 Thread Timothy Froehlich
This doesn't seem to be an issue. I have multiple files with plus signs in
their names that made it back down to my local cache without requiring a
rebuild (including the whole Linux kernel).

On Tue, Feb 26, 2019 at 11:35 AM Brian Walsh wrote:

> On Mon, Feb 25, 2019 at 8:46 PM Timothy Froehlich wrote:
> >
> > I've been spending a bit too long this past week trying to build up a
> > reproducible build infrastructure in AWS, and I've got very little
> > experience with cloud infrastructure, so I'm wondering if I'm going in the
> > wrong direction. I'm attempting to host my sstate_cache as a mirror in a
> > private S3 bucket, and I believe I have everything configured properly,
> > including exposing the bucket to http requests, since I can wget files
> > that I've previously synced up to the bucket. However, if I add
> > SSTATE_MIRRORS to my build, bitbake slows to a crawl (it's a powerful VM)
> > and barely seems to get anything. The EC2 instance is in the same region
> > as the S3 bucket, roles have been configured properly to allow access,
> > etc.
> >
> > I'm not looking for help debugging this; I just want to know whether I'm
> > right that hosting my sstate in an S3 bucket should work. I've only been
> > able to find one mention of it being done, with no reproduction hints.
> >
>
> A lot of the files end up with plus signs in the name. This causes
> problems with retrieving files through HTTP access with S3: S3
> translates all plus signs to spaces, even those in the file path. So
> if my-file_v1.0+g1241876 actually exists as named in S3, an HTTP
> request for that file will make the server look for
> "my-file_v1.0 g1241876".
>
> I ran into this problem trying to host an opkg repository in S3 for
> upgrading.
>
> It may mostly work for you, but there will be many files that it will
> never be able to find in your S3-hosted sstate.
>
> Maybe this has been fixed by AWS. I noticed the problem a year or two ago.
>
>
> https://stackoverflow.com/questions/36734171/how-to-decide-if-the-filename-has-a-plus-sign-in-it#36758133
> https://news.ycombinator.com/item?id=15398804
>


-- 
Tim Froehlich
Embedded Linux Engineer
tfroehl...@archsys.io
215-218-8955


Re: [yocto] Storing Sstate in S3 success stories?

2019-02-26 Thread Erik Hoogeveen
Hi Timothy,

The S3 protocol is HTTP(S) based, so the overhead per object is quite
significant. This is not much of a problem for large files, but the
sstate_cache mostly contains lots of really small files. I think in this case
you're better off storing the cache on a secondary EBS volume that you can
attach as a regular block device to the EC2 instance. You can switch on
deletion protection to make it survive EC2 termination.

Since EBS volumes are quite a bit more expensive than S3 buckets, you could
make snapshots to transfer the state between build runs instead; then you can
destroy the EBS volume when nothing is running.

All the documentation about EBS is here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
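
A minimal sketch of that flow with the AWS CLI (the volume, instance, and
device IDs below are hypothetical; newer instance types may expose the volume
as /dev/xvdf or /dev/nvme1n1 rather than /dev/sdf):

  # attach an existing EBS volume to the build instance
  aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0123456789abcdef0 --device /dev/sdf
  # on the instance: create a filesystem once, then mount it over the cache
  sudo mkfs.ext4 /dev/xvdf    # first use only
  sudo mount /dev/xvdf /home/builder/build/sstate-cache
  # snapshot between runs so the volume itself can be destroyed
  aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
      --description "sstate-cache after build"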

Cheers,
Erik
On 26 Feb 2019, 02:45 +0100, Timothy Froehlich wrote:
I've been spending a bit too long this past week trying to build up a
reproducible build infrastructure in AWS, and I've got very little experience
with cloud infrastructure, so I'm wondering if I'm going in the wrong
direction. I'm attempting to host my sstate_cache as a mirror in a private S3
bucket, and I believe I have everything configured properly, including
exposing the bucket to http requests, since I can wget files that I've
previously synced up to the bucket. However, if I add SSTATE_MIRRORS to my
build, bitbake slows to a crawl (it's a powerful VM) and barely seems to get
anything. The EC2 instance is in the same region as the S3 bucket, roles have
been configured properly to allow access, etc.

I'm not looking for help debugging this; I just want to know whether I'm right
that hosting my sstate in an S3 bucket should work. I've only been able to
find one mention of it being done, with no reproduction hints.



[yocto] [meta-selinux][thud][PATCH] refpolicy: Forward patch to apply cleanly on thud

2019-02-26 Thread Khem Raj
Also fix devtool-generated warnings by refreshing the patches

Signed-off-by: Khem Raj 
---
 ...poky-fc-update-alternatives_sysklogd.patch | 17 +++--
 ...add-rules-for-var-log-symlink-apache.patch | 10 +++---
 ...m-systemd-fix-for-systemd-tmp-files-.patch | 19 +--
 3 files changed, 11 insertions(+), 35 deletions(-)

diff --git a/recipes-security/refpolicy/refpolicy-2.20170204/poky-fc-update-alternatives_sysklogd.patch b/recipes-security/refpolicy/refpolicy-2.20170204/poky-fc-update-alternatives_sysklogd.patch
index e9a0464..aa928c6 100644
--- a/recipes-security/refpolicy/refpolicy-2.20170204/poky-fc-update-alternatives_sysklogd.patch
+++ b/recipes-security/refpolicy/refpolicy-2.20170204/poky-fc-update-alternatives_sysklogd.patch
@@ -17,8 +17,7 @@ Signed-off-by: Joe MacDonald 
 
 --- a/policy/modules/system/logging.fc
 +++ b/policy/modules/system/logging.fc
-@@ -1,9 +1,10 @@
- /dev/log  -s  
gen_context(system_u:object_r:devlog_t,mls_systemhigh)
+@@ -2,6 +2,7 @@
  
  /etc/rsyslog.conf gen_context(system_u:object_r:syslog_conf_t,s0)
  /etc/syslog.conf  gen_context(system_u:object_r:syslog_conf_t,s0)
@@ -26,11 +25,7 @@ Signed-off-by: Joe MacDonald 
  /etc/audit(/.*)?  
gen_context(system_u:object_r:auditd_etc_t,mls_systemhigh)
  /etc/rc\.d/init\.d/auditd --  
gen_context(system_u:object_r:auditd_initrc_exec_t,s0)
  /etc/rc\.d/init\.d/rsyslog -- 
gen_context(system_u:object_r:syslogd_initrc_exec_t,s0)
- 
- /usr/bin/audispd  --  gen_context(system_u:object_r:audisp_exec_t,s0)
-@@ -27,14 +28,16 @@
- /usr/sbin/audispd --  gen_context(system_u:object_r:audisp_exec_t,s0)
- /usr/sbin/audisp-remote   --  
gen_context(system_u:object_r:audisp_remote_exec_t,s0)
+@@ -27,10 +28,12 @@
  /usr/sbin/auditctl--  
gen_context(system_u:object_r:auditctl_exec_t,s0)
  /usr/sbin/auditd  --  gen_context(system_u:object_r:auditd_exec_t,s0)
  /usr/sbin/klogd   --  
gen_context(system_u:object_r:klogd_exec_t,s0)
@@ -43,13 +38,9 @@ Signed-off-by: Joe MacDonald 
  /usr/sbin/syslog-ng   --  gen_context(system_u:object_r:syslogd_exec_t,s0)
  /usr/sbin/syslogd --  gen_context(system_u:object_r:syslogd_exec_t,s0)
  
- /var/lib/misc/syslog-ng.persist-? -- 
gen_context(system_u:object_r:syslogd_var_lib_t,s0)
- /var/lib/syslog-ng(/.*)?  
gen_context(system_u:object_r:syslogd_var_lib_t,s0)
 --- a/policy/modules/system/logging.te
 +++ b/policy/modules/system/logging.te
-@@ -390,10 +390,12 @@ allow syslogd_t self:unix_dgram_socket s
- allow syslogd_t self:fifo_file rw_fifo_file_perms;
- allow syslogd_t self:udp_socket create_socket_perms;
+@@ -390,6 +390,8 @@ allow syslogd_t self:udp_socket create_s
  allow syslogd_t self:tcp_socket create_stream_socket_perms;
  
  allow syslogd_t syslog_conf_t:file read_file_perms;
@@ -58,5 +49,3 @@ Signed-off-by: Joe MacDonald 
  
  # Create and bind to /dev/log or /var/run/log.
  allow syslogd_t devlog_t:sock_file manage_sock_file_perms;
- files_pid_filetrans(syslogd_t, devlog_t, sock_file)
- init_pid_filetrans(syslogd_t, devlog_t, sock_file, "dev-log")
diff --git a/recipes-security/refpolicy/refpolicy-2.20170204/poky-policy-add-rules-for-var-log-symlink-apache.patch b/recipes-security/refpolicy/refpolicy-2.20170204/poky-policy-add-rules-for-var-log-symlink-apache.patch
index fb912b5..6c96e33 100644
--- a/recipes-security/refpolicy/refpolicy-2.20170204/poky-policy-add-rules-for-var-log-symlink-apache.patch
+++ b/recipes-security/refpolicy/refpolicy-2.20170204/poky-policy-add-rules-for-var-log-symlink-apache.patch
@@ -17,15 +17,11 @@ Signed-off-by: Joe MacDonald 
 
 --- a/policy/modules/contrib/apache.te
 +++ b/policy/modules/contrib/apache.te
-@@ -407,10 +407,11 @@ allow httpd_t httpd_lock_t:file manage_f
- files_lock_filetrans(httpd_t, httpd_lock_t, { file dir })
- 
- manage_dirs_pattern(httpd_t, httpd_log_t, httpd_log_t)
- manage_files_pattern(httpd_t, httpd_log_t, httpd_log_t)
+@@ -411,6 +411,7 @@ create_files_pattern(httpd_t, httpd_log_
+ append_files_pattern(httpd_t, httpd_log_t, httpd_log_t)
+ read_files_pattern(httpd_t, httpd_log_t, httpd_log_t)
  read_lnk_files_pattern(httpd_t, httpd_log_t, httpd_log_t)
 +read_lnk_files_pattern(httpd_t, var_log_t, var_log_t)
  logging_log_filetrans(httpd_t, httpd_log_t, file)
  
  allow httpd_t httpd_modules_t:dir list_dir_perms;
- mmap_files_pattern(httpd_t, httpd_modules_t, httpd_modules_t)
- read_files_pattern(httpd_t, httpd_modules_t, httpd_modules_t)
diff --git a/recipes-security/refpolicy/refpolicy-minimum/0008-refpolicy-minimum-systemd-fix-for-systemd-tmp-files-.patch b/recipes-security/refpolicy/refpolicy-minimum/0008-refpolicy-minimum-systemd-fix-for-systemd-tmp-files-.patch
index a7338e1..f5a767d 100644
--- a/recipes-security/refpolicy/refpolicy-minimum/0008-refpolicy-minimum-systemd-fix-for-systemd-tmp-files-.patch
+++ 

Re: [yocto] Storing Sstate in S3 success stories?

2019-02-26 Thread Brian Walsh
On Mon, Feb 25, 2019 at 8:46 PM Timothy Froehlich wrote:
>
> I've been spending a bit too long this past week trying to build up a
> reproducible build infrastructure in AWS, and I've got very little experience
> with cloud infrastructure, so I'm wondering if I'm going in the wrong
> direction. I'm attempting to host my sstate_cache as a mirror in a private S3
> bucket, and I believe I have everything configured properly, including
> exposing the bucket to http requests, since I can wget files that I've
> previously synced up to the bucket. However, if I add SSTATE_MIRRORS to my
> build, bitbake slows to a crawl (it's a powerful VM) and barely seems to get
> anything. The EC2 instance is in the same region as the S3 bucket, roles have
> been configured properly to allow access, etc.
>
> I'm not looking for help debugging this; I just want to know whether I'm
> right that hosting my sstate in an S3 bucket should work. I've only been able
> to find one mention of it being done, with no reproduction hints.
>

A lot of the files end up with plus signs in the name. This causes
problems with retrieving files through HTTP access with S3: S3
translates all plus signs to spaces, even those in the file path. So
if my-file_v1.0+g1241876 actually exists as named in S3, an HTTP
request for that file will make the server look for
"my-file_v1.0 g1241876".

I ran into this problem trying to host an opkg repository in S3 for upgrading.
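
A quick way to check whether this is biting you ("my-bucket" and the object
key below are hypothetical; in a URL path a literal '+' must be sent
percent-encoded as %2B for S3 to match it):

  # may 404 if the server decodes '+' as a space:
  wget https://my-bucket.s3.amazonaws.com/sstate-cache/my-file_v1.0+g1241876
  # percent-encode the plus sign so the key matches exactly:
  wget https://my-bucket.s3.amazonaws.com/sstate-cache/my-file_v1.0%2Bg1241876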

It may mostly work for you, but there will be many files that it will
never be able to find in your S3-hosted sstate.

Maybe this has been fixed by AWS. I noticed the problem a year or two ago.

https://stackoverflow.com/questions/36734171/how-to-decide-if-the-filename-has-a-plus-sign-in-it#36758133
https://news.ycombinator.com/item?id=15398804


Re: [yocto] Storing Sstate in S3 success stories?

2019-02-26 Thread Timothy Froehlich
Well, based on the responses above I did some more research, and it didn't
seem like the file sizes should be causing problems on the scale that I was
seeing, so I investigated further. I realized that despite my build/sstate
directory slowly getting larger, it wasn't actually getting the files and
was leaving empty files behind. I tried changing the https in my
SSTATE_MIRRORS line to plain http and it worked perfectly, pulling down half
a gig of sstate before I could tell it was even working. So there was likely
some other misconfiguration in our AWS account that caused the https to
fail (despite being able to wget individual files using https). Thanks for
the responses!
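
For anyone who finds this thread later, a sketch of the local.conf line
involved, following the standard SSTATE_MIRRORS pattern from the reference
manual (the bucket name is hypothetical):

  SSTATE_MIRRORS ?= "file://.* http://my-sstate-bucket.s3.amazonaws.com/sstate-cache/PATH;downloadfilename=PATH"

The empty-file symptom described above is also easy to spot with
"find sstate-cache -type f -size 0".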

On Tue, Feb 26, 2019 at 12:19 AM Erik Hoogeveen wrote:

> Hi Timothy,
>
> The S3 protocol is HTTP(S) based, so the overhead per object is quite
> significant. This is not much of a problem for large files, but the
> sstate_cache mostly contains lots of really small files. I think in this
> case you're better off storing the cache on a secondary EBS volume that you
> can attach as a regular block device to the EC2 instance. You can switch on
> deletion protection to make it survive EC2 termination.
>
> Since EBS volumes are quite a bit more expensive than S3 buckets, you could
> make snapshots to transfer the state between build runs instead; then you
> can destroy the EBS volume when nothing is running.
>
> All the documentation about EBS is here:
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
>
> Cheers,
> Erik
> On 26 Feb 2019, 02:45 +0100, Timothy Froehlich wrote:
>
> I've been spending a bit too long this past week trying to build up a
> reproducible build infrastructure in AWS, and I've got very little
> experience with cloud infrastructure, so I'm wondering if I'm going in the
> wrong direction. I'm attempting to host my sstate_cache as a mirror in a
> private S3 bucket, and I believe I have everything configured properly,
> including exposing the bucket to http requests, since I can wget files that
> I've previously synced up to the bucket. However, if I add SSTATE_MIRRORS
> to my build, bitbake slows to a crawl (it's a powerful VM) and barely seems
> to get anything. The EC2 instance is in the same region as the S3 bucket,
> roles have been configured properly to allow access, etc.
>
> I'm not looking for help debugging this; I just want to know whether I'm
> right that hosting my sstate in an S3 bucket should work. I've only been
> able to find one mention of it being done, with no reproduction hints.
>

-- 
Tim Froehlich
Embedded Linux Engineer
tfroehl...@archsys.io
215-218-8955


[yocto] Yocto Project Status WW09'19

2019-02-26 Thread sjolley.yp.pm
Current Dev Position: YP 2.7 M3 (New feature freeze has begun.)

Next Deadline: YP 2.7 M3 Cutoff was Feb. 25, 2019

 

SWAT Team Rotation:

*   SWAT lead is currently: Ross
*   SWAT team rotation: Ross -> Chen on Mar. 1, 2019
*   SWAT team rotation: Chen -> Armin on Mar. 8, 2019
*   https://wiki.yoctoproject.org/wiki/Yocto_Build_Failure_Swat_Team

 

Key Status/Updates:

*   We have now passed the feature freeze point for 2.7
*   YP 2.7 M2 rc2 is out of QA and being readied for release. See:
https://wiki.yoctoproject.org/wiki/WW07_-_2019-02-14_-_Full_Test_Cycle_2.7_M2_RC2 and
https://wiki.yoctoproject.org/wiki/Yocto_Project_v2.7_Status#Milestone_2_-_Target_Feb._1.2C_2019
*   We now have resulttool, buildhistory and build-performance all
working with the autobuilder and will be using this as the primary QA
mechanism going forward. More details can be found here:
http://lists.openembedded.org/pipermail/openembedded-architecture/2019-February/001591.html

*   There were some serious usability issues found with multiconfig,
those have been fixed in master and thud.
*   Various recipe upgrades made it in, thanks to all who contributed.
Several recipes also converted to meson including glib-2.0 and gdk-pixbuf.
*   We now need to build M3 as we're past the feature freeze point for 2.7.
Right now we're aware of the following things which ideally need to be
resolved first:

*   Bug 13178 "X server and matchbox desktop don't start up properly on
beaglebone" needs to be fixed
*   Find the framebuffer problem with qemuarmv7 and switch to that
*   Several ptest issues need to be addressed (missing openssl tests, 3
timeouts)
*   Arm build host issues identified and resolved

*   It's unlikely we'll switch to virgl by default for 2.7 due to lack
of testing from the community, although the base patches for this have
merged.

 

Planned Releases for YP 2.7:

*   YP 2.7 M2 rc2 is out of QA.
*   YP 2.7 M2 Release Target was Feb. 1, 2019
*   YP 2.7 M3 Cutoff is Feb. 25, 2019
*   YP 2.7 M3 Release Target is Mar. 8, 2019
*   YP 2.7 M4 Cutoff is Apr. 1, 2019
*   YP 2.7 M4 Release Target is Apr. 26, 2019

 

Planned upcoming dot releases:

*   YP 2.5.3 (Sumo) will be targeted after YP 2.7 M2 is done.
*   YP 2.5.4 (Sumo) will be targeted after YP 2.7 M4 is done.
*   YP 2.6.2 (Thud) will be targeted after YP 2.5.4 is done.

 

Tracking Metrics:

*   WDD 2415 (last week 2392)
(https://wiki.yoctoproject.org/charts/combo.html)
*   Poky Patch Metrics  

*   Total patches found: 1516 (last week 1527)
*   Patches in the Pending State: 660 (44%) [last week 663 (43%)]

 

Key Status Links for YP:

 
https://wiki.yoctoproject.org/wiki/Yocto_Project_v2.7_Status

 
https://wiki.yoctoproject.org/wiki/Yocto_2.7_Schedule

 
https://wiki.yoctoproject.org/wiki/Yocto_2.7_Features

 

The Status reports are now stored on the wiki at:

https://wiki.yoctoproject.org/wiki/Weekly_Status

 

[If anyone has suggestions for other information you'd like to see on this
weekly status update, let us know!]

 

Thanks,

 

Stephen K. Jolley

Yocto Project Program Manager

Cell: (208) 244-4460

Email: sjolley.yp...@gmail.com
 

 



Re: [yocto] [meta-java] Illegal instruction (core dumped) when compiling jaxp1.3-native-1.4.01-r0

2019-02-26 Thread Belisko Marek
Hi,

It looks like things start working when I use this proposal [0] (I just need
to update multiple recipes that call javac in the configure phase to pass
-Xnoinlining). Disabling inlining at compile time doesn't work.

Is there some pending fix for that? When using host gcc 5.5 (from Ubuntu
16.04) everything works fine, while it doesn't with gcc 7.3.

[0] - https://sourceforge.net/p/jamvm/mailman/message/25051239/
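
For reference, the shape of the workaround (purely illustrative; which wrapper
script actually invokes the VM varies by recipe): the JamVM invocation behind
javac gains the flag, e.g.

  jamvm -Xnoinlining <compiler-main-class> <javac-arguments>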

On Mon, Feb 25, 2019 at 9:53 PM Belisko Marek wrote:

> Hi,
>
> I'm using the meta-java sumo branch, and on Ubuntu 18.04 I have this issue
> (building for the beaglebone-yocto machine):
> ERROR: jaxp1.3-native-1.4.01-r0 do_compile: Function failed: do_compile
> (log file is located at
> /home/jenkins/my_build/tmp/work/x86_64-linux/jaxp1.3-native/1.4.01-r0/temp/log.do_compile.15390)
> ERROR: Logfile of failure stored in:
> /home/jenkins/my_build/tmp/work/x86_64-linux/jaxp1.3-native/1.4.01-r0/temp/log.do_compile.15390
> Log data follows:
> | DEBUG: Executing shell function do_compile
> | Illegal instruction (core dumped)
> | WARNING: exit code 132 from a shell command.
> | ERROR: Function failed: do_compile (log file is located at
> /home/jenkins/my_build/tmp/work/x86_64-linux/jaxp1.3-native/1.4.01-r0/temp/log.do_compile.15390)
> ERROR: Task
> (virtual:native:/home/jenkins/meta-java/recipes-core/xml-commons/jaxp1.3_1.4.01.bb:do_compile)
> failed with exit code '1'
> ERROR: gnujaf-native-1.1.1-r1 do_compile: Function failed: do_compile (log
> file is located at
> /home/jenkins/my_build/tmp/work/x86_64-linux/gnujaf-native/1.1.1-r1/temp/log.do_compile.15475)
> ERROR: Logfile of failure stored in:
> /home/jenkins/my_build/tmp/work/x86_64-linux/gnujaf-native/1.1.1-r1/temp/log.do_compile.15475
> Log data follows:
> | DEBUG: Executing shell function do_compile
> | Illegal instruction (core dumped)
> | WARNING: exit code 132 from a shell command.
> | ERROR: Function failed: do_compile (log file is located at
> /home/jenkins/my_build/tmp/work/x86_64-linux/gnujaf-native/1.1.1-r1/temp/log.do_compile.15475)
>
> Looks like javac is failing (though when I run devshell and call java, it
> prints its help). I tried the same setup on Ubuntu 16.04 and there it works
> perfectly fine. Any idea what can cause this - a different toolchain,
> perhaps? Thanks.
>
> BR,
>
> marek
>
> --
> as simple and primitive as possible
> -
> Marek Belisko - OPEN-NANDRA
> Freelance Developer
>
> Ruska Nova Ves 219 | Presov, 08005 Slovak Republic
> Tel: +421 915 052 184
> skype: marekwhite
> twitter: #opennandra
> web: http://open-nandra.com
>

BR,

marek