[Group.of.nepali.translators] [Bug 1709670] Re: logrotate never recovers if the statefile is corrupted

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package logrotate - 3.8.7-2ubuntu3.1

---
logrotate (3.8.7-2ubuntu3.1) zesty; urgency=medium

  * logrotate does not ever recover from a corrupted statefile (LP: #1709670)
- d/p/do-not-treat-failure-of-readState-as-fatal.patch
(Backported from commit b9d82003002c98370e4131a7e43c76afcd23306a)

 -- Eric Desrochers   Wed, 09 Aug 2017
11:25:51 -0400

** Changed in: logrotate (Ubuntu Zesty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1709670

Title:
  logrotate never recovers if the statefile is corrupted

Status in logrotate package in Ubuntu:
  Fix Released
Status in logrotate source package in Trusty:
  Fix Released
Status in logrotate source package in Xenial:
  Fix Released
Status in logrotate source package in Zesty:
  Fix Released
Status in logrotate source package in Artful:
  Fix Released
Status in logrotate package in Debian:
  New

Bug description:
  [Impact]

  logrotate never recovers if the statefile is corrupted unless you
  remove it or fix the corruption by hand.

  Impact scenarios:

  - The system could eventually run out of disk space if "/var" or
  "/var/log" is mounted on a separate partition, or, even worse, if
  "/var/log" shares the same partition as "/", where exhausting the
  remaining free space can cause even more damage.

  - The system keeps updating the same logfiles over and over, so they
  grow very large.

  - ...

  [Test Case]

  - Install logrotate
  - Run "/etc/cron.daily/logrotate" ## The first logrotate run generates the
statefile "/var/lib/logrotate/status"
  - Corrupt "/var/lib/logrotate/status" by removing its first line
  - Re-run "/etc/cron.daily/logrotate"; every subsequent run now fails with
"error: bad top line in state file /var/lib/logrotate/status"

  The error recurs until you remove the statefile and start again, or fix
  the corruption by hand (a shell sketch of these steps follows below).
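
  A minimal shell sketch of the reproduction and recovery above (paths as
  given in the test case; the exact error text can vary by version):

    # assumes logrotate is installed and has already run at least once
    sudo /etc/cron.daily/logrotate       # creates /var/lib/logrotate/status
    sudo sed -i '1d' /var/lib/logrotate/status   # drop the header line to corrupt it
    sudo /etc/cron.daily/logrotate       # unpatched: "error: bad top line in state file ..."

    # manual recovery without the fix: remove the statefile so it is recreated
    sudo rm /var/lib/logrotate/status
    sudo /etc/cron.daily/logrotate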

   * Additionally, I will run the /path_to_source/test/test script
  (which runs ~72 tests) as a dogfooding exercise.

  [Regression Potential]

   * The risk of regression is low and, IMHO, couldn't be worse than the
  current situation, where logrotate simply doesn't recover from a
  corrupt statefile.

   * The patch does recover (verified) and has been through upstream CI
  validation, community feedback, et al.

   * Additionally, I will run the /path_to_source/test/test script
  (which runs ~72 tests) as a dogfooding exercise.

  [Other Info]

  * Upstream commit:
  
https://github.com/logrotate/logrotate/commit/b9d82003002c98370e4131a7e43c76afcd23306a

  * Upstream bug:
  https://github.com/logrotate/logrotate/issues/45

  * Debian bug:
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=871592

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/logrotate/+bug/1709670/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1602192] Re: when starting many LXD containers, they start failing to boot with "Too many open files"

2017-08-22 Thread Stéphane Graber
** Also affects: lxd (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Description changed:

- Reported by Uros Jovanovic here: https://bugs.launchpad.net/juju-
- core/+bug/1593828/comments/18
+ == SRU
+ === Rationale
+ LXD containers using systemd will use a very large amount of inotify watches. 
This means that a system will typically run out of global watches with as 
little as 15 Ubuntu 16.04 containers.
+ 
+ An easy fix for the issue is to bump the number of user watches up to
+ 1024, making it possible to run around 100 containers before hitting the
+ limit again.
+ 
+ To do so, LXD is now shipping a sysctl.d file which bumps that
+ particular limit on systems that have LXD installed.
+ 
+ === Testcase
+ 1) Upgrade LXD
+ 2) Spawn about 50 Ubuntu 16.04 containers ("lxc launch ubuntu:16.04")
+ 3) Check that they all get an IP address ("lxc list"), that's a pretty good 
sign that they booted properly
+ 
+ === Regression potential
+ Not expecting anything here. Juju has shipped a similar configuration for a 
while now and so have the LXD feature releases.
+ 
+ We pretty much just forgot to include this particular change in our LTS
+ packaging branch
+ 
+ 
+ == Original bug report
+ Reported by Uros Jovanovic here: 
https://bugs.launchpad.net/juju-core/+bug/1593828/comments/18
  
  "...
  However, if you bootstrap LXD and do:
  
  juju bootstrap localxd lxd --upload-tools
  for i in {1..30}; do juju deploy ubuntu ubuntu$i; sleep 90; done
  
  Somewhere between 10-20-th deploy fails with machine in pending state
  (nothin useful in logs) and none of the new deploys after that first
  pending succeeds. Might be a different bug, but it's easy to verify with
  running that for loop.
  
  So, this particular error was not in my logs, but the controller still
  ends up unable to provision at least 30 machines ..."
  
  I can reproduce this. Looking on the failed machine I can see that jujud
  isn't running, which is why juju considers the machine not up, and in
  fact nothing of juju seems to be installed. There's nothing about juju
  in /var/log.
  
  Comparing cloud-init-output.log between a stuck-pending machine and one
  which has started up fine, they both start with some key-generation
  messages, but the successful machine then has the line:
  
  Cloud-init v. 0.7.7 running 'init' at Tue, 12 Jul 2016 08:32:00 +.
  Up 4.0 seconds.
  
  ...and then a whole lot of juju-installation gubbins, while the failed
  machine log just stops.

** Changed in: lxd (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: lxd (Ubuntu Xenial)
   Status: Triaged => In Progress

** Changed in: lxd (Ubuntu Xenial)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1602192

Title:
  when starting many LXD containers, they start failing to boot with
  "Too many open files"

Status in lxd package in Ubuntu:
  Fix Released
Status in lxd source package in Xenial:
  In Progress

Bug description:
  == SRU
  === Rationale
  LXD containers using systemd consume a very large number of inotify 
watches. This means that a system will typically run out of global watches 
with as few as 15 Ubuntu 16.04 containers.

  An easy fix for the issue is to bump the number of user watches up to
  1024, making it possible to run around 100 containers before hitting
  the limit again.

  To do so, LXD is now shipping a sysctl.d file which bumps that
  particular limit on systems that have LXD installed.
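
  As a rough illustration (the file name and sysctl key below are
  assumptions, not quoted from the package), such a sysctl.d drop-in
  could look like this:

    # /etc/sysctl.d/10-lxd-inotify.conf (hypothetical path)
    # raise the per-user inotify limit so ~100 systemd containers can boot
    fs.inotify.max_user_instances = 1024

  It can be applied without a reboot with "sudo sysctl --system".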

  === Testcase
  1) Upgrade LXD
  2) Spawn about 50 Ubuntu 16.04 containers ("lxc launch ubuntu:16.04")
  3) Check that they all get an IP address ("lxc list"); that is a pretty good 
sign that they booted properly (a loop sketch follows below)
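
  A minimal sketch of steps 2) and 3), assuming the default "ubuntu:"
  image remote and hypothetical container names c1..c50:

    for i in $(seq 1 50); do lxc launch ubuntu:16.04 "c$i"; done
    lxc list   # every container should eventually show an IPv4 address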

  === Regression potential
  Not expecting anything here. Juju has shipped a similar configuration for a 
while now and so have the LXD feature releases.

  We pretty much just forgot to include this particular change in our
  LTS packaging branch.

  
  == Original bug report
  Reported by Uros Jovanovic here: 
https://bugs.launchpad.net/juju-core/+bug/1593828/comments/18

  "...
  However, if you bootstrap LXD and do:

  juju bootstrap localxd lxd --upload-tools
  for i in {1..30}; do juju deploy ubuntu ubuntu$i; sleep 90; done

  Somewhere between 10-20-th deploy fails with machine in pending state
  (nothin useful in logs) and none of the new deploys after that first
  pending succeeds. Might be a different bug, but it's easy to verify
  with running that for loop.

  So, this particular error was not in my logs, but the controller still
  ends up unable to provision at least 30 machines ..."

  I can reproduce this. Looking on the failed machine I can see that
  jujud isn't running, which is why juju considers the machine not up,
  and in fact nothing of juju seems to be installed. There's nothing
  about juju in /var/log.

[Group.of.nepali.translators] [Bug 1667198] Re: gpu-manager should also support using custom xorg.conf files for Nvidia

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package ubuntu-drivers-common - 1:0.4.17.3

---
ubuntu-drivers-common (1:0.4.17.3) xenial-proposed; urgency=medium

  * gpu-manager.{c|py}:
- Add support for using custom xorg.confs with the nvidia
  driver (LP: #1667198).
  Custom xorg files can be named "non-hybrid" (for non-hybrid
  systems), "hybrid-performance", and "hybrid-power-saving",
  and will have to be placed in the /usr/share/gpu-manager.d
  directory.
  The directory can be overridden by passing another directory
  along with the "--custom-xorg-conf-path" parameter.
- Add tests for the custom xorg.confs code and for amdgpu-pro
  hybrid support.

 -- Alberto Milone   Mon, 19 Jun 2017
17:27:33 +0200

** Changed in: ubuntu-drivers-common (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1667198

Title:
  gpu-manager should also support using custom xorg.conf files for
  Nvidia

Status in HWE Next:
  Incomplete
Status in OEM Priority Project:
  Confirmed
Status in ubuntu-drivers-common package in Ubuntu:
  Fix Released
Status in ubuntu-drivers-common source package in Xenial:
  Fix Released
Status in ubuntu-drivers-common source package in Zesty:
  Fix Released

Bug description:
  SRU Request:

  [Impact]
  Different GPUs may require different options in the xorg.conf file. Currently 
customising these options is not possible, since they are hardcoded in 
gpu-manager, which configures the system automatically.

  This new (backported) feature allows users to tell gpu-manager to use
  their own custom xorg.confs for their NVIDIA GPUs.

  [Test Case]
  1) Enable the xenial-proposed repository, and install the 
ubuntu-drivers-common and the nvidia-prime packages.

  2) Install the nvidia driver.

  3) Place your configuration file(s) in the /usr/share/gpu-manager.d
  directory.

  If you are using a non-hybrid system, you can create the
  /usr/share/gpu-manager.d/non-hybrid file. In case of a hybrid system,
  you can specify /usr/share/gpu-manager.d/hybrid-performance and/or
  /usr/share/gpu-manager.d/hybrid-power-saving, depending on the power
  profile you intend to apply your settings to.

  4) Restart the system and see if it boots correctly. If unsure, please
  attach your /var/log/gpu-manager.log and /var/log/Xorg.0.log

  5) Make sure that the /etc/X11/xorg.conf matches the file you
  specified in 3)
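
  A minimal sketch of steps 3) to 5) for a non-hybrid system. The
  xorg.conf content is only an illustration built around the
  "nvidiaXineramaInfo" option mentioned further below; it is not a file
  shipped by the package:

    sudo mkdir -p /usr/share/gpu-manager.d
    printf '%s\n' \
      'Section "Device"' \
      '    Identifier "Nvidia Card"' \
      '    Driver     "nvidia"' \
      '    Option     "nvidiaXineramaInfo" "off"' \
      'EndSection' | sudo tee /usr/share/gpu-manager.d/non-hybrid
    sudo reboot
    # after the reboot, confirm gpu-manager applied the custom file:
    diff -u /usr/share/gpu-manager.d/non-hybrid /etc/X11/xorg.conf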

  [Regression Potential]
  Low, the above mentioned changes are already in Artful, and will not affect 
the system's behaviour unless new configuration files are put in the 
/usr/share/gpu-manager.d/ directory.

  __
  This feature has already been implemented for Intel (e.g. gpumanager_uxa), 
which parses a kernel parameter to add an option to xorg.conf (commit ff3a4e54).

  Nvidia now also needs this feature, in order to add Option
  "nvidiaXineramaInfo" "off" to fix some issues.

  Btw, it might be more flexible to have a config file to which users
  could add whatever options they want, so that we avoid having to keep
  changing gpu-manager whenever more options are needed in the future.

  Needed by LP: #1653592

To manage notifications about this bug go to:
https://bugs.launchpad.net/hwe-next/+bug/1667198/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1667198] Re: gpu-manager should also support using custom xorg.conf files for Nvidia

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package ubuntu-drivers-common - 1:0.4.22.1

---
ubuntu-drivers-common (1:0.4.22.1) zesty-proposed; urgency=medium

  * gpu-manager.{c|py}:
- Add support for using custom xorg.confs with the nvidia
  driver (LP: #1667198).
  Custom xorg files can be named "non-hybrid" (for non-hybrid
  systems), "hybrid-performance", and "hybrid-power-saving",
  and will have to be placed in the /usr/share/gpu-manager.d
  directory.
  The directory can be overridden by passing another directory
  along with the "--custom-xorg-conf-path" parameter.
- Add tests for the custom xorg.confs code and for amdgpu-pro
  hybrid support.

 -- Alberto Milone   Wed, 02 Aug 2017
17:17:13 +0200

** Changed in: ubuntu-drivers-common (Ubuntu Zesty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1667198

Title:
  gpu-manager should also support using custom xorg.conf files for
  Nvidia

Status in HWE Next:
  Incomplete
Status in OEM Priority Project:
  Confirmed
Status in ubuntu-drivers-common package in Ubuntu:
  Fix Released
Status in ubuntu-drivers-common source package in Xenial:
  Fix Released
Status in ubuntu-drivers-common source package in Zesty:
  Fix Released

Bug description:
  SRU Request:

  [Impact]
  Different GPUs may require different options in the xorg.conf file. Currently 
customising these options is not possible, since they are hardcoded in 
gpu-manager, which configures the system automatically.

  This new (backported) feature allows users to tell gpu-manager to use
  their own custom xorg.confs for their NVIDIA GPUs.

  [Test Case]
  1) Enable the xenial-proposed repository, and install the 
ubuntu-drivers-common and the nvidia-prime packages.

  2) Install the nvidia driver.

  3) Place your configuration file(s) in the /usr/share/gpu-manager.d
  directory.

  If you are using a non-hybrid system, you can create the
  /usr/share/gpu-manager.d/non-hybrid file. In case of a hybrid system,
  you can specify /usr/share/gpu-manager.d/hybrid-performance and/or
  /usr/share/gpu-manager.d/hybrid-power-saving, depending on the power
  profile you intend to apply your settings to.

  4) Restart the system and see if it boots correctly. If unsure, please
  attach your /var/log/gpu-manager.log and /var/log/Xorg.0.log

  5) Make sure that the /etc/X11/xorg.conf matches the file you
  specified in 3)

  [Regression Potential]
  Low, the above mentioned changes are already in Artful, and will not affect 
the system's behaviour unless new configuration files are put in the 
/usr/share/gpu-manager.d/ directory.

  __
  This feature has already been implemented for Intel (e.g. gpumanager_uxa), 
which parses a kernel parameter to add an option to xorg.conf (commit ff3a4e54).

  Nvidia now also needs this feature, in order to add Option
  "nvidiaXineramaInfo" "off" to fix some issues.

  Btw, it might be more flexible to have a config file to which users
  could add whatever options they want, so that we avoid having to keep
  changing gpu-manager whenever more options are needed in the future.

  Needed by LP: #1653592

To manage notifications about this bug go to:
https://bugs.launchpad.net/hwe-next/+bug/1667198/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1712455] [NEW] LXD 2.0.10 doesn't properly auto-update images

2017-08-22 Thread Stéphane Graber
Public bug reported:

A number of issues interfere with LXD 2.0.10's ability to update images:
 - The auto_update flag doesn't properly get set on newly downloaded images
 - The cached flag doesn't get properly copied when an image gets refreshed

The combination of those means that LXD effectively only updates images
when a user requests a new container. This at least means that there is
no security impact from this, but this also slows things down quite a
bit and is certainly not the expected behavior.

This fix cherry-picks two upstream fixes that resolve the two
highlighted issues and applies an extra change which will automatically
restore the auto_update flag for any image that is marked as "cached" in
the store.

== Rationale
LXD regressed in its background update code, leading to most LXD hosts storing 
stale images and only refreshing them when a user asks for a new container to 
be created.

This is pretty different from the expected behavior of LXD refreshing
all its images every 6 hours.

The fix restores the old behavior and attempts to reset the update flag
on images which are supposed to auto-update.

== Test case
1) Setup two LXD hosts on LXD 2.0.10
2) On the first LXD host, copy an image ("lxc image copy ubuntu:16.04 local: 
--alias blah")
3) Add the first LXD host as a remote on the second LXD host
4) Create a container on the second LXD host using ("lxc init :blah c1")
5) On the first host, change the target of the alias ("lxc image copy 
ubuntu:14.04 local: --alias blah")
6) On the second host, look at the content of the image store ("lxc image list")
7) On the second host, restart LXD ("systemctl restart lxd")
8) On the second host, confirm that LXD is done starting ("tail -f 
/var/log/lxd/lxd.log")
9) On the second host, look at the content of the image store again ("lxc image 
list")

On a properly functioning LXD, at step 9) you'll see the new image in
the image store with the old one gone. On a broken LXD, the initial
image is there and there's no sign of the new one.
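
A condensed sketch of the second-host side of the test, assuming the
first host was registered in step 3) as a remote named "host1" (the
remote name and its address below are placeholders, not part of the
original report):

  lxc remote add host1 <first-host-address>  # step 3
  lxc init host1:blah c1                     # step 4: pulls the image via its alias
  lxc image list                             # step 6: note the cached image
  sudo systemctl restart lxd                 # step 7
  tail -f /var/log/lxd/lxd.log               # step 8: wait for startup to finish
  lxc image list                             # step 9: a fixed LXD shows the refreshed image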

== Regression potential
The changes being pushed here are coming from existing LXD releases (2.16) so 
have seen significant use already. The main risk I can think of is in the DB 
patch that's included in this change.

That code is very restrictive and will only set the auto-update flag on
an image which is marked as cached (was automatically downloaded) and
has a known source (as required for auto-update).

This will not fix images which have been already updated by LXD through
container launch as those will have lost their cached flag due to the
other bug we're fixing here, but attempting to fix those would just be
guesswork as any information on how the image ended up in the store is
lost at that point.

** Affects: lxd (Ubuntu)
 Importance: Undecided
 Status: Fix Released

** Affects: lxd (Ubuntu Trusty)
 Importance: Undecided
 Status: New

** Affects: lxd (Ubuntu Xenial)
 Importance: High
 Assignee: Stéphane Graber (stgraber)
 Status: In Progress

** Also affects: lxd (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: lxd (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: lxd (Ubuntu)
   Status: New => Fix Released

** Changed in: lxd (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: lxd (Ubuntu Xenial)
 Assignee: (unassigned) => Stéphane Graber (stgraber)

** Changed in: lxd (Ubuntu Xenial)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1712455

Title:
  LXD 2.0.10 doesn't properly auto-update images

Status in lxd package in Ubuntu:
  Fix Released
Status in lxd source package in Trusty:
  New
Status in lxd source package in Xenial:
  In Progress

Bug description:
  A number of issues interfere with LXD 2.0.10's ability to update images:
   - The auto_update flag doesn't properly get set on newly downloaded images
   - The cached flag doesn't get properly copied when an image gets refreshed

  The combination of those means that LXD effectively only updates
  images when a user requests a new container. This at least means that
  there is no security impact from this, but this also slows things down
  quite a bit and is certainly not the expected behavior.

  This fix cherry-picks two upstream fixes that resolve the two
  highlighted issues and applies an extra change which will
  automatically restore the auto_update flag for any image that is
  marked as "cached" in the store.

  == Rationale
  LXD regressed in its background update code, leading to most LXD hosts 
storing stale images and only refreshing them when a user asks for a new 
container to be created.

  This is pretty different from the expected behavior of LXD refreshing
  all its images every 6 hours.

  The fix restores the old behavior and attempts to reset the update
  flag on images which are supposed to auto-update.

[Group.of.nepali.translators] [Bug 1709885] Re: package libxfont1-dev (not installed) failed to install/upgrade: trying to overwrite '/usr/share/doc/libxfont-dev/fontlib.html', which is also in packag

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package libxfont - 1:1.5.1-1ubuntu0.16.04.2

---
libxfont (1:1.5.1-1ubuntu0.16.04.2) xenial; urgency=medium

  * Install developer documentation under the correct path. (LP:
#1709885)

 -- Timo Aaltonen   Fri, 11 Aug 2017 01:16:51 +0300

** Changed in: libxfont (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1709885

Title:
  package libxfont1-dev (not installed) failed to install/upgrade:
  trying to overwrite '/usr/share/doc/libxfont-dev/fontlib.html', which
  is also in package libxfont-dev 1:2.0.1-3~ubuntu16.04.1

Status in libxfont package in Ubuntu:
  Invalid
Status in libxfont source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  A package update failed:

   package:libxfont1-dev:(not installed)
   Unpacking libxfont1-dev (1:1.5.1-1ubuntu0.16.04.1) ...
   dpkg: error processing archive 
/var/cache/apt/archives/libxfont1-dev_1%3a1.5.1-1ubuntu0.16.04.1_amd64.deb 
(--unpack):
    trying to overwrite '/usr/share/doc/libxfont-dev/fontlib.html', which is 
also in package libxfont-dev 1:2.0.1-3~ubuntu16.04.1

  The developer documentation is installed in the wrong directory and needs to be moved.

  [Test case]
  try to install both libxfont1-dev and libxfont-dev
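
  For example (without the fix, the second install fails with the dpkg
  "trying to overwrite" error quoted above):

    sudo apt-get install libxfont-dev
    sudo apt-get install libxfont1-dev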

  [Regression potential]
  None; the fix just moves two files to the correct location, that's all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libxfont/+bug/1709885/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1707400] Re: libvirt-bin doesn't regenerate apparmor cache in postinst

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package libvirt - 1.2.2-0ubuntu13.1.21

---
libvirt (1.2.2-0ubuntu13.1.21) trusty; urgency=medium

  * d/libvirt-bin.postinst: call apparmor_parser with options to
ignore the apparmor cache and rebuild it, otherwise old apparmor
rules are used and this might break upgrades (LP: #1707400)

 -- Andreas Hasenack   Tue, 01 Aug 2017 11:58:38
-0300

** Changed in: libvirt (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1707400

Title:
  libvirt-bin doesn't regenerate apparmor cache in postinst

Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Trusty:
  Fix Released
Status in libvirt source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  TL;DR
  libvirt-bin stops working after a release upgrade from Trusty to Xenial. 
Other scenarios possible.

  Workaround after it breaks:
  sudo touch /etc/apparmor.d/{usr.lib.libvirt.virt-aa-helper,usr.sbin.libvirtd}
  sudo apt install --reinstall libvirt-bin

  The libvirt-bin package in Trusty and Xenial reloads the apparmor
  profile in postinst, but without taking into account possible apparmor
  caches. It uses this call:

  apparmor_parser -r <profile> || true

  instead of what dh_apparmor and every other package is using nowadays,
  and is also recommended in the apparmor_parser manpage:

  apparmor_parser -r -T -W <profile> || true

  Where:
  -T: skip reading any existing cache
  -W: write out the cache

  The apparmor_parser(8) manpage has this to say about how the apparmor
  cache is considered:

  """
  By default, if a profile's cache is found in the location specified by 
--cache-loc and the timestamp is newer than the profile, it will be loaded from 
the cache.
  """

  That is reasonable behaviour. After all, the cache is generated from
  the profile file. If someone wants to change the profile, it will be
  edited and thus have a more recent timestamp than the cache.

  Furthermore, since the libvirt-bin packages in Trusty and Xenial do
  not specify -W, that is, they do not write out a cache file, then
  using just "-r" to load a profile is consistent.

  But if something outside the libvirt-bin package decides to generate
  apparmor caches, then we might have a problem.

  One such scenario is an Ubuntu release upgrade from Trusty to Xenial. Here is 
what was observed during such an upgrade (here is a pastebin: 
http://pastebin.ubuntu.com/25222966/. It shows libvirt apparently restarting 
successfully at the end, but it didn't):
  - new libvirt-bin is unpacked
  - new apparmor is unpacked
  - new apparmor is set up. This sets up new abstractions, and also generates 
an apparmor cache for all profiles. This is new behaviour: the trusty apparmor 
package does not generate caches. Crucially, at this time the old libvirt-bin 
apparmor profiles are still installed.
  - new libvirt-bin is set up. This installs the new apparmor profile for this 
version of libvirt-bin. Crucially, the profile's timestamp is not $(now) but 
whatever timestamp the file has inside the debian package, which is *older* 
than the cache generated above.
  - libvirt-bin reloads the apparmor profile with -r. apparmor picks the cached 
profile because its timestamp is newer than the profile.
  - libvirt-bin fails to start.

  The fix is to call apparmor_parser with -T and -W in postinst. That
  will always invalidate the apparmor cache and generate a new one based
  on the current contents of the profile file.
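
  A quick way to observe the stale-cache condition by hand (the cache
  path below is the usual default on these releases and is an
  assumption; it differs if --cache-loc is overridden):

    # if the cached profile is newer than the profile file,
    # "apparmor_parser -r" will load the stale cache:
    ls -l /etc/apparmor.d/usr.sbin.libvirtd \
          /etc/apparmor.d/cache/usr.sbin.libvirtd

    # the fixed postinst instead forces a cache rebuild:
    sudo apparmor_parser -r -T -W /etc/apparmor.d/usr.sbin.libvirtd || true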

  Another fix would be to use dh_apparmor to install the two profiles
  libvirt-bin uses. In fact, debian/rules even has those calls, but
  they are commented out. We believe that doing that would be a more
  invasive fix, and that just adding the -T and -W options to the
  existing apparmor_parser call has the same effect and is less
  invasive, being more in the spirit of an SRU to an LTS release.

  In Yakkety and Zesty, dh_apparmor is used, but the call with just "-r"
  remains in postinst. That was only removed in artful, where we finally
  only rely on dh_apparmor for this.

  [Test Case]

   * install libvirt-bin
   * check it's working. This command should work and return an empty list of 
virtual machines:
     - sudo virsh list
   * break the apparmor profile /etc/apparmor.d/usr.sbin.libvirtd by editing it 
and commenting the line "network inet stream," like this:
     #network inet stream,
   * generate a cache file for it:
     - sudo apparmor_parser -r -T -W /etc/apparmor.d/usr.sbin.libvirtd
   * purge libvirt-bin:
  - sudo apt purge libvirt-bin
   * install libvirt-bin back. It will fail to start the service:
  - sudo apt install libvirt-bin
   * verify that virsh list fails to connect to libvirt:
  - sudo virsh list
   * verify that service status shows the service being down:
  

[Group.of.nepali.translators] [Bug 1707400] Re: libvirt-bin doesn't regenerate apparmor cache in postinst

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package libvirt - 1.3.1-1ubuntu10.13

---
libvirt (1.3.1-1ubuntu10.13) xenial; urgency=medium

  * d/libvirt-bin.postinst: call apparmor_parser with options to
ignore the apparmor cache and rebuild it, otherwise old apparmor
rules are used and this might break upgrades (LP: #1707400)

 -- Andreas Hasenack   Tue, 01 Aug 2017 10:50:20
-0300

** Changed in: libvirt (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1707400

Title:
  libvirt-bin doesn't regenerate apparmor cache in postinst

Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Trusty:
  Fix Released
Status in libvirt source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  TL;DR
  libvirt-bin stops working after a release upgrade from Trusty to Xenial. 
Other scenarios possible.

  Workaround after it breaks:
  sudo touch /etc/apparmor.d/{usr.lib.libvirt.virt-aa-helper,usr.sbin.libvirtd}
  sudo apt install --reinstall libvirt-bin

  The libvirt-bin package in Trusty and Xenial reloads the apparmor
  profile in postinst, but without taking into account possible apparmor
  caches. It uses this call:

  apparmor_parser -r <profile> || true

  instead of what dh_apparmor and every other package is using nowadays,
  and is also recommended in the apparmor_parser manpage:

  apparmor_parser -r -T -W <profile> || true

  Where:
  -T: skip reading any existing cache
  -W: write out the cache

  The apparmor_parser(8) manpage has this to say about how the apparmor
  cache is considered:

  """
  By default, if a profile's cache is found in the location specified by 
--cache-loc and the timestamp is newer than the profile, it will be loaded from 
the cache.
  """

  That is reasonable behaviour. After all, the cache is generated from
  the profile file. If someone wants to change the profile, it will be
  edited and thus have a more recent timestamp than the cache.

  Furthermore, since the libvirt-bin packages in Trusty and Xenial do
  not specify -W, that is, they do not write out a cache file, then
  using just "-r" to load a profile is consistent.

  But if something outside the libvirt-bin package decides to generate
  apparmor caches, then we might have a problem.

  One such scenario is an Ubuntu release upgrade from Trusty to Xenial. Here is 
what was observed during such an upgrade (here is a pastebin: 
http://pastebin.ubuntu.com/25222966/. It shows libvirt apparently restarting 
successfully at the end, but it didn't):
  - new libvirt-bin is unpacked
  - new apparmor is unpacked
  - new apparmor is set up. This sets up new abstractions, and also generates 
an apparmor cache for all profiles. This is new behaviour: the trusty apparmor 
package does not generate caches. Crucially, at this time the old libvirt-bin 
apparmor profiles are still installed.
  - new libvirt-bin is set up. This installs the new apparmor profile for this 
version of libvirt-bin. Crucially, the profile's timestamp is not $(now) but 
whatever timestamp the file has inside the debian package, which is *older* 
than the cache generated above.
  - libvirt-bin reloads the apparmor profile with -r. apparmor picks the cached 
profile because its timestamp is newer than the profile.
  - libvirt-bin fails to start.

  The fix is to call apparmor_parser with -T and -W in postinst. That
  will always invalidate the apparmor cache and generate a new one based
  on the current contents of the profile file.

  Another fix would be to use dh_apparmor to install the two profiles
  libvirt-bin uses. In fact, debian/rules even has those calls, but
  they are commented out. We believe that doing that would be a more
  invasive fix, and that just adding the -T and -W options to the
  existing apparmor_parser call has the same effect and is less
  invasive, being more in the spirit of an SRU to an LTS release.

  In Yakkety and Zesty, dh_apparmor is used, but the call with just "-r"
  remains in postinst. That was only removed in artful, where we finally
  only rely on dh_apparmor for this.

  [Test Case]

   * install libvirt-bin
   * check it's working. This command should work and return an empty list of 
virtual machines:
     - sudo virsh list
   * break the apparmor profile /etc/apparmor.d/usr.sbin.libvirtd by editing it 
and commenting the line "network inet stream," like this:
     #network inet stream,
   * generate a cache file for it:
     - sudo apparmor_parser -r -T -W /etc/apparmor.d/usr.sbin.libvirtd
   * purge libvirt-bin:
  - sudo apt purge libvirt-bin
   * install libvirt-bin back. It will fail to start the service:
  - sudo apt install libvirt-bin
   * verify that virsh list fails to connect to libvirt:
  - sudo virsh list
   * verify that service status shows the service being down:
  

[Group.of.nepali.translators] [Bug 1700373] Re: intel-microcode is out of date, version 20170707 fixes errata on 6th and 7th generation platforms

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package intel-microcode -
3.20170707.1~ubuntu16.04.0

---
intel-microcode (3.20170707.1~ubuntu16.04.0) xenial; urgency=medium

  * Sync of new upstream microcode release to address Skylake, Kaby Lake
Hyper Threading bug.  This is a sync of the dat files from artful
version 3.20170707.1 (LP: #1700373)
  * New upstream microcode datafile 20170707
+ New Microcodes:
  sig 0x00050654, pf_mask 0x97, 2017-06-01, rev 0x222, size 25600
  sig 0x000806e9, pf_mask 0xc0, 2017-04-27, rev 0x0062, size 97280
  sig 0x000806ea, pf_mask 0xc0, 2017-05-23, rev 0x0066, size 95232
  sig 0x000906e9, pf_mask 0x2a, 2017-04-06, rev 0x005e, size 97280
+ This release fixes the nightmare-level errata SKZ7/SKW144/SKL150/
  SKX150 (Skylake) KBL095/KBW095 (Kaby Lake) for all affected Kaby
  Lake and Skylake processors: Skylake D0/R0 were fixed since the
  previous upstream release (20170511).  This new release adds the
  fixes for Kaby Lake Y0/B0/H0 and Skylake H0 (Skylake-E/X).
+ Fix undisclosed errata in Skylake H0 (0x50654), Kaby Lake Y0
  (0x806ea), Kaby Lake H0 (0x806e9), Kaby Lake B0 (0x906e9)
  * source: removed superseded upstream dat files: 20101123, 20151106.
This brings the dat files in sync with those shipped in Artful.
  * Updated Intel changelog to reflect dat file sync.

 -- Dave Chiluk   Wed, 12 Jul 2017 21:46:36 -0500

** Changed in: intel-microcode (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

** Changed in: intel-microcode (Ubuntu Zesty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1700373

Title:
  intel-microcode is out of date, version 20170707 fixes errata on 6th
  and 7th generation platforms

Status in intel-microcode package in Ubuntu:
  Fix Released
Status in intel-microcode source package in Trusty:
  Won't Fix
Status in intel-microcode source package in Xenial:
  Fix Released
Status in intel-microcode source package in Yakkety:
  Won't Fix
Status in intel-microcode source package in Zesty:
  Fix Released
Status in intel-microcode source package in Artful:
  Fix Released

Bug description:
  [Impact]

  * A security fix has been made available as part of intel-microcode
  * It is advisable to apply it
  * Thus an SRU of the latest intel-microcode is desirable for all stable 
releases

  [Test Case]

  * Upgrade intel-microcode package, if it is already installed / one is
  running on Intel CPUs

  * Reboot and verify no averse results, and/or that microcode for your
  cpu was loaded as expected.

  * Ocaml crash reproducer

  Download report.tar.gz from https://caml.inria.fr/mantis/view.php?id=7452 and 
place in your schroot scratch directory.
  $ mk-sbuild artful --arch=amd64
  $ schroot -c artful -u root
  // Artful was chosen as it contains the required versions of Ocaml for the 
reproducer.
  $ apt install ocaml opam ocaml-findlib m4
  $ opam init
  $ opam install extprot
  $ eval `opam config env`
  $ while ocamlfind opt -c -g -bin-annot -ccopt -g -ccopt -O2 -ccopt -Wextra 
-ccopt '-Wstrict-overflow=5' -thread -w +a-4-40..42-44-45-48-58 -w -27-32 
-package extprot test.ml -o test.cmx; do echo "ok"; done

  [Test case reporting]
  * Please paste the output of:

  dpkg-query -W intel-microcode
  grep -E 'model|stepping' /proc/cpuinfo | sort -u
  journalctl -k | grep microcode

  [Regression Potential]
  Microcode updates are proprietary blobs and can cause any number of new 
errors and regressions. Microcode bugs have been reported before; therefore, 
longer-than-usual phasing and monitoring of intel-microcode bugs should be 
done with extra care.

  Additional notes from ~racb, wearing an ~ubuntu-sru hat:

  SRU verification needs to take care to consider CPUs actually tested.
  We should have a representative sample of CPUs tested in SRU
  verification reports before considering release to the updates
  pockets.

  Given the potential severity of regressions, we should keep this in
  the proposed pockets for longer than the usual minimum ageing period.
  Let's have users opt in to this update first, and only recommend it
  once we have confidence that a reasonable number (and representative
  CPU sample) of opted-in users have not hit any problems.

  Testers: please mark verification-done-* only after you consider that
  the above additional requirements have been met.

  [Other]
  caml discussion describing test case to reproduce the crash.
  https://caml.inria.fr/mantis/view.php?id=7452

  * I did not backport the full debian/changelog, as some of the changes
  were omitted for SRU purposes, and I don't like the idea of modifying
  the changelog of others.

  * I did not backport this below change but I feel as though the SRU team 
should evaluate including it.  I left it out due to the change as little a

[Group.of.nepali.translators] [Bug 1700833] Re: linux-azure: 4.11.0-1003.3 -proposed tracker

2017-08-22 Thread Marcelo Cerri
*** This bug is a duplicate of bug 1712446 ***
https://bugs.launchpad.net/bugs/1712446

** This bug is no longer a duplicate of bug 1710944
   linux-azure: 4.11.0-1006.6 -proposed tracker
** This bug has been marked a duplicate of bug 1712446
   linux-azure: 4.11.0-1007.7 -proposed tracker

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1700833

Title:
  linux-azure: 4.11.0-1003.3 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Incomplete
Status in Kernel SRU Workflow certification-testing series:
  Invalid
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  Invalid
Status in Kernel SRU Workflow promote-to-updates series:
  Invalid
Status in Kernel SRU Workflow regression-testing series:
  Fix Released
Status in Kernel SRU Workflow security-signoff series:
  Invalid
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Confirmed
Status in linux-azure package in Ubuntu:
  Invalid
Status in linux-azure source package in Xenial:
  Confirmed

Bug description:
  This bug is for tracking the 4.11.0-1003.3 upload package. This bug
  will contain status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see:
  https://wiki.ubuntu.com/Kernel/kernel-sru-workflow

  -- swm properties --
  boot-testing-requested: true
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1700833/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1707061] Re: linux-azure: 4.11.0-1004.4 -proposed tracker

2017-08-22 Thread Marcelo Cerri
*** This bug is a duplicate of bug 1712446 ***
https://bugs.launchpad.net/bugs/1712446

** This bug is no longer a duplicate of bug 1710944
   linux-azure: 4.11.0-1006.6 -proposed tracker
** This bug has been marked a duplicate of bug 1712446
   linux-azure: 4.11.0-1007.7 -proposed tracker

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1707061

Title:
  linux-azure: 4.11.0-1004.4 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Confirmed
Status in Kernel SRU Workflow certification-testing series:
  Confirmed
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  Invalid
Status in Kernel SRU Workflow promote-to-updates series:
  Invalid
Status in Kernel SRU Workflow regression-testing series:
  Confirmed
Status in Kernel SRU Workflow security-signoff series:
  Invalid
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Confirmed
Status in linux-azure package in Ubuntu:
  Invalid
Status in linux-azure source package in Xenial:
  Confirmed

Bug description:
  This bug is for tracking the 4.11.0-1004.4 upload package. This bug
  will contain status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see:
  https://wiki.ubuntu.com/Kernel/kernel-sru-workflow

  -- swm properties --
  boot-testing-requested: true
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1707061/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1700531] Re: linux-azure: -proposed tracker

2017-08-22 Thread Marcelo Cerri
*** This bug is a duplicate of bug 1712446 ***
https://bugs.launchpad.net/bugs/1712446

** This bug is no longer a duplicate of bug 1710944
   linux-azure: 4.11.0-1006.6 -proposed tracker
** This bug has been marked a duplicate of bug 1712446
   linux-azure: 4.11.0-1007.7 -proposed tracker

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1700531

Title:
  linux-azure:  -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  New
Status in Kernel SRU Workflow certification-testing series:
  New
Status in Kernel SRU Workflow prepare-package series:
  New
Status in Kernel SRU Workflow prepare-package-meta series:
  New
Status in Kernel SRU Workflow promote-to-proposed series:
  New
Status in Kernel SRU Workflow promote-to-security series:
  New
Status in Kernel SRU Workflow promote-to-updates series:
  New
Status in Kernel SRU Workflow regression-testing series:
  New
Status in Kernel SRU Workflow security-signoff series:
  New
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  New
Status in linux-azure package in Ubuntu:
  Invalid
Status in linux-azure source package in Xenial:
  Confirmed

Bug description:
  This bug is for tracking the  upload package.
  This bug will contain status and testing results related to that
  upload.

  For an explanation of the tasks and the associated workflow see: 
https://wiki.ubuntu.com/Kernel/kernel-sru-workflow
  -- swm properties --
  kernel-stable-master-bug: 1700528

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1700531/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1708017] Re: linux-azure: 4.11.0-1005.5 -proposed tracker

2017-08-22 Thread Marcelo Cerri
*** This bug is a duplicate of bug 1712446 ***
https://bugs.launchpad.net/bugs/1712446

** This bug is no longer a duplicate of bug 1710944
   linux-azure: 4.11.0-1006.6 -proposed tracker
** This bug has been marked a duplicate of bug 1712446
   linux-azure: 4.11.0-1007.7 -proposed tracker

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1708017

Title:
  linux-azure: 4.11.0-1005.5 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Incomplete
Status in Kernel SRU Workflow certification-testing series:
  Invalid
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  Invalid
Status in Kernel SRU Workflow promote-to-updates series:
  Invalid
Status in Kernel SRU Workflow regression-testing series:
  Fix Released
Status in Kernel SRU Workflow security-signoff series:
  Invalid
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Fix Released
Status in linux-azure package in Ubuntu:
  Invalid
Status in linux-azure source package in Xenial:
  Confirmed

Bug description:
  This bug is for tracking the 4.11.0-1005.5 upload package. This bug
  will contain status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see:
  https://wiki.ubuntu.com/Kernel/kernel-sru-workflow

  -- swm properties --
  boot-testing-requested: true
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1708017/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1710944] Re: linux-azure: 4.11.0-1006.6 -proposed tracker

2017-08-22 Thread Marcelo Cerri
*** This bug is a duplicate of bug 1712446 ***
https://bugs.launchpad.net/bugs/1712446

** This bug has been marked a duplicate of bug 1712446
   linux-azure: 4.11.0-1007.7 -proposed tracker

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1710944

Title:
  linux-azure: 4.11.0-1006.6 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Incomplete
Status in Kernel SRU Workflow certification-testing series:
  Confirmed
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  Invalid
Status in Kernel SRU Workflow promote-to-updates series:
  Invalid
Status in Kernel SRU Workflow regression-testing series:
  Fix Released
Status in Kernel SRU Workflow security-signoff series:
  Invalid
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Confirmed
Status in linux-azure package in Ubuntu:
  Invalid
Status in linux-azure source package in Xenial:
  Confirmed

Bug description:
  This bug is for tracking the 4.11.0-1006.6 upload package. This bug
  will contain status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see:
  https://wiki.ubuntu.com/Kernel/kernel-sru-workflow

  -- swm properties --
  boot-testing-requested: true
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1710944/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1699331] Re: linux-azure: 4.11.0-1002.2 -proposed tracker

2017-08-22 Thread Marcelo Cerri
*** This bug is a duplicate of bug 1712446 ***
https://bugs.launchpad.net/bugs/1712446

** This bug is no longer a duplicate of bug 1710944
   linux-azure: 4.11.0-1006.6 -proposed tracker
** This bug has been marked a duplicate of bug 1712446
   linux-azure: 4.11.0-1007.7 -proposed tracker

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1699331

Title:
  linux-azure: 4.11.0-1002.2 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Incomplete
Status in Kernel SRU Workflow certification-testing series:
  Confirmed
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  New
Status in Kernel SRU Workflow promote-to-updates series:
  New
Status in Kernel SRU Workflow regression-testing series:
  Fix Released
Status in Kernel SRU Workflow security-signoff series:
  Fix Released
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Invalid
Status in linux-azure package in Ubuntu:
  Invalid
Status in linux-azure source package in Xenial:
  New

Bug description:
  This bug is for tracking the 4.11.0-1002.2 upload package. This bug
  will contain status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see:
  https://wiki.ubuntu.com/Kernel/kernel-sru-workflow

  -- swm properties --
  boot-testing-requested: true
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1699331/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1705132] Re: Large memory guests, "error: monitor socket did not show up: No such file or directory"

2017-08-22 Thread Ryan Beisner
This bug was fixed in the package libvirt - 2.5.0-3ubuntu5.4~cloud0
---

 libvirt (2.5.0-3ubuntu5.4~cloud0) xenial-ocata; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 libvirt (2.5.0-3ubuntu5.4) zesty; urgency=medium
 .
   * d/p/ubuntu/bug-1705132-* qemu: Adaptive timeout for connecting to
 monitor (LP: #1705132)
 - includes backports that make backing off on timeouts exponentially
   but cap the exponential increase on 1s.


** Changed in: cloud-archive/ocata
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1705132

Title:
  Large memory guests, "error: monitor socket did not show up: No such
  file or directory"

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Xenial:
  Fix Released
Status in libvirt source package in Yakkety:
  Won't Fix
Status in libvirt source package in Zesty:
  Fix Released

Bug description:
  [Description]

  - Configured a machine with 32 static VCPUs, 160GB of RAM using 1G
  hugepages on a NUMA capable machine.

  Domain definition (http://pastebin.ubuntu.com/25121106/)

  - Once started (virsh start).

  Libvirt log.

  LC_ALL=C
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name reproducer2 -S -machine
  pc-i440fx-2.5,accel=kvm,usb=off -cpu host -m 124928 -realtime
  mlock=off -smp 32,sockets=16,cores=1,threads=2 -object memory-backend-
  file,id=ram-node0,prealloc=yes,mem-
  path=/dev/hugepages/libvirt/qemu,share=yes,size=64424509440,host-
  nodes=0,policy=bind -numa node,nodeid=0,cpus=0-15,memdev=ram-node0
  -object memory-backend-file,id=ram-node1,prealloc=yes,mem-
  path=/dev/hugepages/libvirt/qemu,share=yes,size=66571993088,host-
  nodes=1,policy=bind -numa node,nodeid=1,cpus=16-31,memdev=ram-node1
  -uuid d7a4af7f-7549-4b44-8ceb-4a6c951388d4 -no-user-config -nodefaults
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
  reproducer2/monitor.sock,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
  -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
  -drive
  file=/var/lib/uvtool/libvirt/images/test.qcow,format=qcow2,if=none,id
  =drive-virtio-disk0,cache=none -device virtio-blk-
  pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-
  disk0,bootindex=1 -chardev pty,id=charserial0 -device isa-
  serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-
  vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-
  pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on

  Then the following error is raised.

  virsh start reproducer2
  error: Failed to start domain reproducer2
  error: monitor socket did not show up: No such file or directory

  - The fix is done via backports; as a TL;DR, the change does the following:
    1. Instead of sleeping for a very short time (1ms) in a loop for a very 
       long time, start small but increase the sleep exponentially for the 
       few cases that need longer. That way fast actions are still done fast, 
       but long actions are no longer CPU hogs.
    2. Huge guests get ~1s of extra timeout per 1GB of memory to come up, 
       which allows huge guests to initialize properly.

  [Impact]

    * Cannot start virtual machines with large pools of memory allocated
  on NUMA nodes.

  [Test Case]

   * this is a tradeoff of memory clearing speed vs guest size.
 Once the clearing of guest memory exceeds ~30 seconds the issue will 
 trigger.
   * Guest must be backed by huge pages as otherwise the kernel will fault 
 in on demand instead of needing the initial clear.
   * One way to "slow down" is to Configure a Machine with multiple NUMA 
 nodes.
     root@buneary:/home/ubuntu# virsh freepages 0 1G
     1048576KiB: 60
     root@buneary:/home/ubuntu# virsh freepages 1 1G
     1048576KiB: 62
   * Another one to slow down the init is to just use a really heg guest. In 
 the example 122G guest was enough. (full guest definition: 
 http://paste.ubuntu.com/25125500/)

  (libvirt guest XML excerpt stripped of its tags in the archive: a 120 GiB
  hugepage-backed guest definition; see the full guest XML at
  http://paste.ubuntu.com/25125500/ above.)

   * Define the guest, and try to start it.

    $ virsh define reproducer.xml
    $ virsh start reproducer

  * Verify that the following error is raised:

  root@buneary:/home/ubuntu# virsh start reproducer2
  error: Failed to start domain reproducer2
  error: monitor socket did not show up: No such file or directory

  [Expected Behavior]

  * Machine is started without issues as displayed
  https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1705132/comments/7

  [Regression Potential]

   * The behavior on timeo

[Group.of.nepali.translators] [Bug 1567557] Re: Performance degradation of "zfs clone"

2017-08-22 Thread Stéphane Graber
** No longer affects: lxd (Ubuntu Xenial)

** No longer affects: lxd (Ubuntu Zesty)

** No longer affects: lxd (Ubuntu Artful)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1567557

Title:
  Performance degradation of "zfs clone"

Status in Native ZFS for Linux:
  New
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Zesty:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Released

Bug description:
  [SRU Justification]

  Creating thousands of clones can be prohibitively slow. The
  underlying mechanism to gather clone information uses a 16K buffer,
  which limits performance.  Also, the initial approach is to pass a
  zero-sized buffer to the underlying ioctl() to get an idea of the
  size of the buffer required to fetch the information back to userspace.
  If we bump the initial buffer to a larger size, we reduce the need
  for two ioctl calls, which improves performance.
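
  As a hedged illustration of why the larger initial buffer helps (generic
  Python pseudocode, not the actual libzfs implementation), the userspace
  pattern is roughly:

  def get_clone_info(fetch, initial_size=256 * 1024):
      # `fetch(bufsize)` stands in for the ioctl(): it returns (data, needed),
      # where `needed` is the buffer size the kernel actually required.
      data, needed = fetch(initial_size)
      if needed > initial_size:
          # Only with very many clones (roughly >5000) is a second call with
          # the exact size reported by the kernel still necessary.
          data, _ = fetch(needed)
      return data

  With a zero-sized or 16K initial buffer the second call is needed far more
  often, which is where the performance win of the 256K default comes from.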

  [Fix]
  Bump initial buffer size from 16K to 256K

  [Regression Potential]
  This is minimal as this is just a tweak in the initial buffer size and larger 
sizes are handled correctly by ZFS since they are normally used on the second 
ioctl() call once we have established the size of the buffer required from the 
first ioctl() call. Larger initial buffers just remove the need for the initial 
size estimation for most cases where the number of clones is less than ~5000.  
There is a risk that a larger buffer size could lead to an ENOMEM issue when
allocating the buffer, but the size of buffer used is still trivial for modern 
large 64 bit servers running ZFS.

  [Test case]
  Create 4000 clones. With the fix this takes 35-40% less time than without the 
fix. See the example test.sh script as an example of how to create this many 
clones.
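
  The test.sh script itself is not attached to this mail; the following
  Python sketch shows the kind of loop it performs (pool and dataset names
  are made up for illustration):

  import subprocess, time

  def clone_benchmark(base="tank/base", pool="tank", count=4000):
      # Snapshot a base filesystem once, then time how long it takes to
      # create `count` clones of that snapshot.
      subprocess.run(["zfs", "snapshot", base + "@origin"], check=True)
      start = time.monotonic()
      for i in range(count):
          subprocess.run(
              ["zfs", "clone", base + "@origin", "%s/clone%04d" % (pool, i)],
              check=True)
      print("created %d clones in %.1fs" % (count, time.monotonic() - start))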


  
  --

  I've been running some scale tests for LXD and what I've noticed is
  that "zfs clone" gets slower and slower as the zfs filesystem is
  getting busier.

  It feels like "zfs clone" requires some kind of pool-wide lock or
  something, and so needs all in-flight operations to complete before it can
  clone a new filesystem.

  A basic LXD scale test with btrfs vs zfs shows what I mean, see below
  for the reports.

  The test is run on a completely dedicated physical server with the
  pool on a dedicated SSD, the exact same machine and SSD was used for
  the btrfs test.

  The zfs filesystem is configured with those settings:
   - relatime=on
   - sync=disabled
   - xattr=sa

  So it shouldn't be related to pending sync() calls...

  The workload in this case is ultimately 1024 containers running busybox as 
their init system and udhcpc grabbing an IP.
  The problem gets significantly worse if spawning busier containers, say a 
full Ubuntu system.

  === zfs ===
  root@edfu:~# /home/ubuntu/lxd-benchmark spawn --count=1024 
--image=images:alpine/edge/amd64 --privileged=true
  Test environment:
    Server backend: lxd
    Server version: 2.0.0.rc8
    Kernel: Linux
    Kernel architecture: x86_64
    Kernel version: 4.4.0-16-generic
    Storage backend: zfs
    Storage version: 5
    Container backend: lxc
    Container version: 2.0.0.rc15

  Test variables:
    Container count: 1024
    Container mode: privileged
    Image: images:alpine/edge/amd64
    Batches: 128
    Batch size: 8
    Remainder: 0

  [Apr  3 06:42:51.170] Importing image into local store: 
64192037277800298d8c19473c055868e0288b039349b1c6579971fe99fdbac7
  [Apr  3 06:42:52.657] Starting the test
  [Apr  3 06:42:53.994] Started 8 containers in 1.336s
  [Apr  3 06:42:55.521] Started 16 containers in 2.864s
  [Apr  3 06:42:58.632] Started 32 containers in 5.975s
  [Apr  3 06:43:05.399] Started 64 containers in 12.742s
  [Apr  3 06:43:20.343] Started 128 containers in 27.686s
  [Apr  3 06:43:57.269] Started 256 containers in 64.612s
  [Apr  3 06:46:09.112] Started 512 containers in 196.455s
  [Apr  3 06:58:19.309] Started 1024 containers in 926.652s
  [Apr  3 06:58:19.309] Test completed in 926.652s

  === btrfs ===
  Test environment:
    Server backend: lxd
    Server version: 2.0.0.rc8
    Kernel: Linux
    Kernel architecture: x86_64
    Kernel version: 4.4.0-16-generic
    Storage backend: btrfs
    Storage version: 4.4
    Container backend: lxc
    Container version: 2.0.0.rc15

  Test variables:
    Container count: 1024
    Container mode: privileged
    Image: images:alpine/edge/amd64
    Batches: 128
    Batch size: 8
    Remainder: 0

  [Apr  3 07:42:12.053] Importing image into local store: 
64192037277800298d8c19473c055868e0288b039349b1c6579971fe99fdbac7
  [Apr  3 07:42:13.351] Starting the test
  [Apr  3 07:42:14.793] Started 8 containers in 1.442s
  [Apr  3 07:42:16.495] Started 16 conta

[Group.of.nepali.translators] [Bug 1701073] Re: CVE-2017-2619 regression breaks symlinks to directories

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package samba - 2:4.6.7+dfsg-1ubuntu1

---
samba (2:4.6.7+dfsg-1ubuntu1) artful; urgency=medium

  * Merge with Debian unstable (LP: #1710281).
- Upstream version 4.6.7 fixes the CVE-2017-2619 regression with non-wide
  symlinks to directories (LP: #1701073)
  * Remaining changes:
- debian/VERSION.patch: Update vendor string to "Ubuntu".
- debian/smb.conf;
  + Add "(Samba, Ubuntu)" to server string.
  + Comment out the default [homes] share, and add a comment about
"valid users = %s" to show users how to restrict access to
\\server\username to only username.
- debian/samba-common.config:
  + Do not change priority to high if dhclient3 is installed.
- Add apport hook:
  + Created debian/source_samba.py.
  + debian/rules, debian/samba-common-bin.install: install hook.
- Add extra DEP8 tests to samba (LP #1696823):
  + d/t/control: enable the new DEP8 tests
  + d/t/smbclient-anonymous-share-list: list available shares anonymously
  + d/t/smbclient-authenticated-share-list: list available shares using
an authenticated connection
  + d/t/smbclient-share-access: create a share and download a file from it
  + d/t/cifs-share-access: access a file in a share using cifs
- Ask the user if we can run testparm against the config file. If yes,
  include its stderr and exit status in the bug report. Otherwise, only
  include the exit status. (LP #1694334)
- If systemctl is available, use it to query the status of the smbd
  service before trying to reload it. Otherwise, keep the same check
  as before and reload the service based on the existence of the
  initscript. (LP #1579597)
- d/rules: Compile winbindd/winbindd statically.
- Disable glusterfs support because it's not in main.
  MIR bug is https://launchpad.net/bugs/1274247

 -- Andreas Hasenack   Mon, 21 Aug 2017 17:27:08
-0300

** Changed in: samba (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1701073

Title:
  CVE-2017-2619 regression breaks symlinks to directories

Status in samba:
  Unknown
Status in samba package in Ubuntu:
  Fix Released
Status in samba source package in Xenial:
  Fix Released
Status in samba source package in Yakkety:
  Fix Released
Status in samba source package in Zesty:
  Fix Released

Bug description:
  Found in current version in Xenial (4.3.11+dfsg-0ubuntu0.16.04.7).
  When the share's path is '/', symlinks do not work properly from a Windows
  client; it gives a "Cannot Access" error.

  To reproduce:

  1. Install samba and related dependencies

  apt install -y samba

  2. Add a share at the end of the default file that uses '/' as the
  path:

  [reproducer]
  comment = share
  browseable = no
  writeable = yes
  create mode = 0600
  directory mode = 0700
  path = /

  3. Attempt to access a symlink somewhere within the path of the share
  with a Windows client.

  4. Receive "Windows cannot access..." related error

To manage notifications about this bug go to:
https://bugs.launchpad.net/samba/+bug/1701073/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1559072] Re: [SRU] exceptions.from_response with webob 1.6.0 results in "AttributeError: 'unicode' object has no attribute 'get'"

2017-08-22 Thread Ryan Beisner
This bug was fixed in the package python-cinderclient - 1:1.6.0-2ubuntu1~cloud0
---

 python-cinderclient (1:1.6.0-2ubuntu1~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 python-cinderclient (1:1.6.0-2ubuntu1) xenial; urgency=medium
 .
   * d/p/review-462204.patch: Handle error response for webob>=1.6.0
 (LP: #1559072).


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1559072

Title:
  [SRU] exceptions.from_response with webob 1.6.0 results in
  "AttributeError: 'unicode' object has no attribute 'get'"

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Invalid
Status in python-cinderclient package in Ubuntu:
  Fix Released
Status in python-novaclient package in Ubuntu:
  Fix Released
Status in python-cinderclient source package in Xenial:
  Fix Released
Status in python-novaclient source package in Xenial:
  Fix Released
Status in python-cinderclient source package in Yakkety:
  Won't Fix
Status in python-novaclient source package in Yakkety:
  Fix Released
Status in python-cinderclient source package in Zesty:
  Fix Released
Status in python-novaclient source package in Zesty:
  Fix Released

Bug description:
  [Impact] [Testcase]
  Running on Ubuntu 14.04.
  After installing nova from source in either the Liberty release or Mitaka, 
with WebOb 1.6.0, running any nova command generated this error:
  root@openstack-ubu-controller:~# nova service-list
  ERROR (AttributeError): 'unicode' object has no attribute 'get'

  The equivalent openstack commands work correctly. After downgrading
  WebOb to 1.5.1 AND restarting the nova-api service everything works.
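
  The client-side fix (the review-462204.patch mentioned above) amounts to
  tolerating a plain string error body. A rough Python sketch of that idea,
  not the exact python-cinderclient code:

  def error_kwargs_from_body(status_code, body):
      # With WebOb >= 1.6.0 an error body may arrive as a plain (unicode)
      # string instead of a dict, so only call .get() on real mappings.
      kwargs = {"code": status_code}
      if isinstance(body, dict):
          error = body.get("badRequest", body)
          if isinstance(error, dict):
              kwargs["message"] = error.get("message")
              kwargs["details"] = error.get("details")
      else:
          kwargs["message"] = body
      return kwargs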

  Detailed output from nova -debug service-list with the error:

  root@openstack-ubu-controller:~# nova --debug service-list
  DEBUG (extension:157) found extension EntryPoint.parse('v2token = 
keystoneauth1.loading._plugins.identity.v2:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('admin_token = 
keystoneauth1.loading._plugins.admin_token:AdminToken')
  DEBUG (extension:157) found extension EntryPoint.parse('v3oidcauthcode = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode')
  DEBUG (extension:157) found extension EntryPoint.parse('v2password = 
keystoneauth1.loading._plugins.identity.v2:Password')
  DEBUG (extension:157) found extension EntryPoint.parse('v3password = 
keystoneauth1.loading._plugins.identity.v3:Password')
  DEBUG (extension:157) found extension EntryPoint.parse('v3oidcpassword = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
  DEBUG (extension:157) found extension EntryPoint.parse('token = 
keystoneauth1.loading._plugins.identity.generic:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('v3token = 
keystoneauth1.loading._plugins.identity.v3:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('password = 
keystoneauth1.loading._plugins.identity.generic:Password')
  DEBUG (session:248) REQ: curl -g -i -X GET http://10.0.1.3:5000/v2.0 -H 
"Accept: application/json" -H "User-Agent: keystoneauth1/2.3.0 
python-requests/2.9.1 CPython/2.7.6"
  INFO (connectionpool:207) Starting new HTTP connection (1): 10.0.1.3
  DEBUG (connectionpool:387) "GET /v2.0 HTTP/1.1" 200 334
  DEBUG (session:277) RESP: [200] Content-Length: 334 Vary: X-Auth-Token 
Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) Connection: 
Keep-Alive Date: Fri, 18 Mar 2016 12:41:58 GMT Content-Type: application/json 
x-openstack-request-id: req-a0c68cd5-ea29-4391-942f-130cc69d15f8
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://10.0.1.3:5000/v2.0/";, "rel": "self"}, {"href": 
"http://docs.openstack.org/";, "type": "text/html", "rel": "describedby"}]}}

  DEBUG (v2:63) Making authentication request to 
http://10.0.1.3:5000/v2.0/tokens
  DEBUG (connectionpool:387) "POST /v2.0/tokens HTTP/1.1" 200 2465
  DEBUG (session:248) REQ: curl -g -i -X GET 
http://10.0.1.3:8774/v1.1/b77d640e127e488fb42a7c0716ba53a5 -H "User-Agent: 
python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}381893576ad46c62b587f4963d769b89441b919a"
  INFO (connectionpool:207) Starting new HTTP connection (1): 10.0.1.3
  DEB

[Group.of.nepali.translators] [Bug 1570491] Re: Network client (neutron) initialized without specifying region

2017-08-22 Thread Ryan Beisner
This bug was fixed in the package python-openstackclient - 2.3.1-0ubuntu1~cloud0
---

 python-openstackclient (2.3.1-0ubuntu1~cloud0) trusty-mitaka; urgency=medium
 .
   * New upstream release for the Ubuntu Cloud Archive.
 .
 python-openstackclient (2.3.1-0ubuntu1) xenial; urgency=medium
 .
   * d/gbp.conf: Update for stable/mitaka branch.
   * New upstream point release (LP: #1703372).
   * d/p/init-neutron-client-with-region-name.patch: Initialize neutron
 client with region name to enable use with multi-region deployments
 (LP: #1570491).


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1570491

Title:
  Network client (neutron) initialized without specifying region

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in python-openstackclient package in Ubuntu:
  Invalid
Status in python-openstackclient source package in Xenial:
  Fix Released

Bug description:
  None of the network (neutron) related commands take the region into
  account. In configurations with more than one region, the first region's
  network service endpoint is always selected.
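
  The gist of the patch is simply to pass the configured region through when
  the network client is constructed. A hedged sketch of the idea (not the
  actual python-openstackclient code):

  from neutronclient.v2_0 import client as neutron_client

  def make_network_client(session, region_name):
      # Passing region_name keeps endpoint selection from silently falling
      # back to the first region in the service catalog.
      return neutron_client.Client(session=session, region_name=region_name)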

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1570491/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1494992] Re: 100% CPU when the History or Wallpaper Selector panels are open

2017-08-22 Thread James Lu
** Changed in: variety
   Status: Fix Committed => Fix Released

** Changed in: variety
Milestone: None => 0.6.5

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1494992

Title:
  100% CPU when the History or Wallpaper Selector panels are open

Status in Variety:
  Fix Released
Status in variety package in Ubuntu:
  Fix Released
Status in variety source package in Xenial:
  Fix Committed
Status in variety source package in Yakkety:
  Won't Fix
Status in variety source package in Zesty:
  Fix Committed
Status in variety source package in Artful:
  Fix Released

Bug description:
  Version:0.5.4
  DE:Cinnamon 2.6.13
  OS:Linux Mint 17.2

  When you change the background in the history panel or scroll in it, the CPU
utilisation goes up to 100% and stays there.
  I need to close the panel and reopen it to restore the utilisation back to
normal.
  The problem also seems to affect the download history panel and the
background selector.

  

  Below is the SRU information by James Lu
  (https://launchpad.net/~tacocat)

  [Impact]

   * The autoscroll feature in Variety's Wallpaper Selector dialog
  before commit
  https://bazaar.launchpad.net/~variety/variety/trunk/revision/592
  consumes excessive amounts of CPU after leaving the autoscroll area.
  This affects both the "History" and "Wallpaper Selector" options found
  in Variety's menu.

   * Although this bug doesn't cause any serious damage, pegging a
  machine's CPU is quite annoying and users will notice whirring fans
  and reduced battery life as a result.

   * The proposed fix adds a missing line to clear the autoscroll state
  when leaving the wallpaper selector. This way, the code in
  _autoscroll_thread() (which polls for whether the mouse is over the
  wallpaper selector) doesn't instantly succeed and create an infinite
  loop.
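
  A rough Python sketch of the pattern involved (class and method names are
  invented; Variety's actual GTK code differs):

  import threading, time

  class ThumbsWindow:
      def __init__(self):
          self.mouse_in = threading.Event()
          threading.Thread(target=self._autoscroll_thread, daemon=True).start()

      def on_mouse_enter(self, *args):
          self.mouse_in.set()       # start autoscrolling

      def on_mouse_leave(self, *args):
          self.mouse_in.clear()     # the missing step: without this the
                                    # polling thread keeps "succeeding"

      def _autoscroll_thread(self):
          while True:
              self.mouse_in.wait()  # sleeps while the pointer is elsewhere
              self.scroll_one_step()
              time.sleep(0.05)

      def scroll_one_step(self):
          pass                      # placeholder for the real scrolling logic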

  [Test Case]

   1) Select one or more wallpaper sources so that in the Wallpaper
  Selector, scrolling is needed to show all items.

   2) Open the wallpaper selector, either by focusing on a wallpaper
  source in the preferences dialog, or by choosing the "Wallpaper
  Selector" option in Variety's tray menu.

   3) Move the mouse over any of the images in the wallpaper selector.

   4) Move the mouse away from the wallpaper selector. A CPU spike in Variety 
should appear now.
     - Note that this CPU spike is different from any initial CPU spikes when 
the wallpaper selector first opens, as that is due to Variety generating all 
the thumbnails on the spot. The CPU spike mentioned in this bug lasts as long 
as the wallpaper selector is open and the mouse is not over it, while the 
initial spikes are temporary (they always last less than 5 seconds for me)

  [Regression Potential]

   * This patch affects the autoscroll portion of the wallpaper
  selector. Should this patch be erroneous, symptoms could include
  autoscroll, or the entire wallpaper selector, not working at all.

   * Syntax or variable name errors will, on the other hand, raise
  Python exceptions and possibly cause Variety to fail to start
  entirely.

  [Other Info]

   * The original patch fixing this bug was included in Debian release
  0.6.3-5, which has been in Debian stretch for about 2 months and
  artful for a similar time (I don't remember the exact date of the
  relevant sync). No new bugs related to this issue have been opened
  since in Debian and Ubuntu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/variety/+bug/1494992/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1614054] Re: [SRU] Incorrect host cpu is given to emulator threads when cpu_realtime_mask flag is set

2017-08-22 Thread Ryan Beisner
This bug was fixed in the package nova - 2:13.1.4-0ubuntu2~cloud0
---

 nova (2:13.1.4-0ubuntu2~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:13.1.4-0ubuntu2) xenial; urgency=medium
 .
   * d/p/libvirt-fix-incorrect-host-cpus-giving-to-emulator-t.patch:
 Backport fix for cpu pinning libvirt config incorrect emulator
 pin cpuset (LP: #1614054).


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1614054

Title:
  [SRU] Incorrect host cpu is given to emulator threads when
  cpu_realtime_mask flag is set

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  This bug affects users of Openstack Nova who want to create instances
  that will leverage the realtime functionality that libvirt/qemu offers
  by, amongst other things, pinning guest vcpus and qemu emulator
  threads to specific pcpus. Nova provides the means for the user to
  control, via the flavor hw:cpu_realtime_mask or image property
  hw_cpu_realtime_mask, which physical cpus these resources will pinned
  to. This mask allows you to mask the set of N pins that Nova selects
  such that 1 or more of your vcpus can be declared "real-time" by
  ensuring that they do not have emulator threads also pinned to them.
  The remaining "non-realtime" vcpus will have vcpu and emulator threads
  colocated. The bug fixes the case where e.g. you have a guest that has
  2 vcpus (logically 0 and 1) and Nova selects pcpus 14 and 22 and you
  have mask ^0 to indicate that you want all but the first vcpu to be
  realtime. This should result in the following being present in your
  libvirt xml for the guest:

    (libvirt <cputune> XML excerpt stripped of its tags in the archive: the
    expected output pins vcpu 0 to pcpu 14, vcpu 1 to pcpu 22, and the
    emulator threads to pcpu 14, the pcpu backing the non-realtime vcpu.)

  But (currently only Mitaka, since it does not have this patch) you will
  get this:

    (libvirt <cputune> XML excerpt stripped of its tags in the archive: the
    buggy output sets the emulatorpin cpuset to the vcpu id instead of the
    corresponding pcpu.)

  i.e. Nova will always set the emulator pin to the id of the vcpu
  instead of the corresponding pcpu that it is pinned to.

  In terms of actual impact this could result in vcpus that are supposed
  to be isolated not being so and therefore not behaving as expected.
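
  To make the mask arithmetic concrete, here is a small self-contained Python
  sketch (it mirrors the behaviour described above, not Nova's actual code):

  def realtime_vcpus(mask, nvcpus):
      # "^0" with 2 vcpus: vcpu 0 is non-realtime, vcpu 1 is realtime.
      rt = set(range(nvcpus))
      for tok in mask.split(","):
          tok = tok.strip()
          if tok.startswith("^"):
              rt.discard(int(tok[1:]))
      return rt

  def emulator_pin_set(vcpu_to_pcpu, mask):
      # Emulator threads must land on the *physical* cpus backing the
      # non-realtime vcpus, e.g. {0: 14, 1: 22} with "^0" -> {14},
      # not {0} as the buggy code produced.
      rt = realtime_vcpus(mask, len(vcpu_to_pcpu))
      return {pcpu for vcpu, pcpu in vcpu_to_pcpu.items() if vcpu not in rt}

  assert emulator_pin_set({0: 14, 1: 22}, "^0") == {14}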

  [Test Case]

   * deploy openstack mitaka and configure nova.conf with vcpu-pin-
  set=0,1,2,3

     https://pastebin.ubuntu.com/25133260/

   * configure compute host kernel opts with "isolcpus=0,1,2,3" + reboot

   * create flavor with:

     openstack flavor create --public --ram 2048 --disk 10 --vcpus 2 --swap 0 
test_flavor
     openstack flavor set --property hw:cpu_realtime_mask='^0' test_flavor
     openstack flavor set --property hw:cpu_policy=dedicated test_flavor
     openstack flavor set --property hw:cpu_thread_policy=prefer test_flavor
     openstack flavor set --property hw:cpu_realtime=yes test_flavor

   * boot instance with ^^ flavor

   * check that libvirt xml for vm has correct emulator pin cpuset #

  [Regression Potential]

  Since the patch being backported only touches the specific area of
  code that was causing the original problem, and that code only serves
  to select cpusets based on flavor filters, I can't think of any
  regressions that it would introduce. However, one potential side
  effect/change to be aware of is that once nova-compute is upgraded to
  this newer version, any new instances created will have the
  correct/expected cpuset assignments whereas instances created prior to
  upgrade will remain unchanged i.e. they will all likely still have
  their emulation threads pinned to the wrong pcpu. In terms of side
  effects this will mean less load on the pcpu that was previously
  incorrectly chosen for existing guests but it will mean that older
  instances will need to be recreated in order to benefit from the fix.

  

  Description of problem:
  When using the cpu_realtime and cpu_realtime_mask flags to create a new
instance, the 'cpuset' of the 'emulatorpin' option uses the id of the vcpu,
which is incorrect. The id of the host cpu should be used here.

  e.g.

    (libvirt <cputune> XML excerpt stripped of its tags in the archive; the
    surviving inline comment read: "### the cpuset should be '2' here, when
    cpu_realtime_mask=^0".)

  How reproducible:
  Boot new instance with cpu_realtime_mask flavor.

  Steps to Reproduce:
  1. Create RT flavor
  nova flavor-create m1.small.performance 6 2048 20 2
  nova flavor-key m1.small.performance set hw:cpu_realtime=yes
  nova flavor-key m1.small.performance set hw:cpu_realtime_mask=^0
  nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
  2. Boot a instance with this flavor
  3. Check the xml of the new 

[Group.of.nepali.translators] [Bug 1705132] Re: Large memory guests, "error: monitor socket did not show up: No such file or directory"

2017-08-22 Thread Ryan Beisner
This bug was fixed in the package libvirt - 1.3.1-1ubuntu10.12~cloud0
---

 libvirt (1.3.1-1ubuntu10.12~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 libvirt (1.3.1-1ubuntu10.12) xenial; urgency=medium
 .
   * d/p/ubuntu/bug-1705132-* qemu: Adaptive timeout for connecting to
 monitor (LP: #1705132)
 - includes backports that make backing off on timeouts exponentially
   but cap the exponential increase on 1s.


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1705132

Title:
  Large memory guests, "error: monitor socket did not show up: No such
  file or directory"

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Xenial:
  Fix Released
Status in libvirt source package in Yakkety:
  Won't Fix
Status in libvirt source package in Zesty:
  Fix Released

Bug description:
  [Description]

  - Configured a machine with 32 static VCPUs, 160GB of RAM using 1G
  hugepages on a NUMA capable machine.

  Domain definition (http://pastebin.ubuntu.com/25121106/)

  - Once started (virsh start).

  Libvirt log.

  LC_ALL=C
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name reproducer2 -S -machine
  pc-i440fx-2.5,accel=kvm,usb=off -cpu host -m 124928 -realtime
  mlock=off -smp 32,sockets=16,cores=1,threads=2 -object memory-backend-
  file,id=ram-node0,prealloc=yes,mem-
  path=/dev/hugepages/libvirt/qemu,share=yes,size=64424509440,host-
  nodes=0,policy=bind -numa node,nodeid=0,cpus=0-15,memdev=ram-node0
  -object memory-backend-file,id=ram-node1,prealloc=yes,mem-
  path=/dev/hugepages/libvirt/qemu,share=yes,size=66571993088,host-
  nodes=1,policy=bind -numa node,nodeid=1,cpus=16-31,memdev=ram-node1
  -uuid d7a4af7f-7549-4b44-8ceb-4a6c951388d4 -no-user-config -nodefaults
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-
  reproducer2/monitor.sock,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
  -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
  -drive
  file=/var/lib/uvtool/libvirt/images/test.qcow,format=qcow2,if=none,id
  =drive-virtio-disk0,cache=none -device virtio-blk-
  pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-
  disk0,bootindex=1 -chardev pty,id=charserial0 -device isa-
  serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-
  vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-
  pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on

  Then the following error is raised.

  virsh start reproducer2
  error: Failed to start domain reproducer2
  error: monitor socket did not show up: No such file or directory

  - The fix is done via backports; as a TL;DR the change does:
1. Instead of sleeping for a fixed, very short interval (1ms) in a loop,
   start small but back off exponentially for the few cases that take long.
   That way fast actions still complete fast, while long-running ones no
   longer hog the CPU.
2. Huge guests get ~1s of extra timeout per GiB of RAM to come up, which
   gives them enough time to initialize properly.

  [Impact]

    * Cannot start virtual machines with large pools of memory allocated
  on NUMA nodes.

  [Test Case]

   * This is a tradeoff of memory clearing speed vs guest size.
 Once the clearing of guest memory exceeds ~30 seconds the issue will
 trigger.
   * Guest must be backed by huge pages, as otherwise the kernel will fault
 pages in on demand instead of needing the initial clear.
   * One way to "slow down" the init is to configure a machine with multiple
 NUMA nodes.
     root@buneary:/home/ubuntu# virsh freepages 0 1G
     1048576KiB: 60
     root@buneary:/home/ubuntu# virsh freepages 1 1G
     1048576KiB: 62
   * Another way to slow down the init is to just use a really huge guest. In
 the example a 122G guest was enough. (full guest definition:
 http://paste.ubuntu.com/25125500/)

  (libvirt guest XML excerpt stripped of its tags in the archive: a 120 GiB
  hugepage-backed guest definition; see the full guest XML at
  http://paste.ubuntu.com/25125500/ above.)

   * Define the guest, and try to start it.

    $ virsh define reproducer.xml
    $ virsh start reproducer

  * Verify that the following error is raised:

  root@buneary:/home/ubuntu# virsh start reproducer2
  error: Failed to start domain reproducer2
  error: monitor socket did not show up: No such file or directory

  [Expected Behavior]

  * Machine is started without issues as displayed
  https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1705132/comments/7

  [Regression Potential]

   * The behavio

[Group.of.nepali.translators] [Bug 1712170] Re: linux-gcp: 4.10.0-1004.4 -proposed tracker

2017-08-22 Thread Brad Figg
** Changed in: kernel-sru-workflow/automated-testing
   Status: New => Confirmed

** Changed in: kernel-sru-workflow/certification-testing
   Status: New => Confirmed

** Changed in: kernel-sru-workflow/promote-to-proposed
   Status: Fix Committed => Fix Released

** Changed in: kernel-sru-workflow/regression-testing
   Status: New => Confirmed

** Changed in: kernel-sru-workflow/security-signoff
   Status: New => Fix Released

** Changed in: kernel-sru-workflow/verification-testing
   Status: New => Confirmed

** Description changed:

  This bug is for tracking the 4.10.0-1004.4 upload package. This bug will
  contain status and testing results related to that upload.
  
  For an explanation of the tasks and the associated workflow see: 
https://wiki.ubuntu.com/Kernel/kernel-sru-workflow
  -- swm properties --
  boot-testing-requested: true
  kernel-stable-master-bug: 1709303
  phase: Uploaded
+ kernel-stable-phase:Promoted to proposed
+ kernel-stable-phase-changed:Tuesday, 22. August 2017 18:30 UTC

** Description changed:

  This bug is for tracking the 4.10.0-1004.4 upload package. This bug will
  contain status and testing results related to that upload.
  
  For an explanation of the tasks and the associated workflow see: 
https://wiki.ubuntu.com/Kernel/kernel-sru-workflow
  -- swm properties --
  boot-testing-requested: true
  kernel-stable-master-bug: 1709303
- phase: Uploaded
- kernel-stable-phase:Promoted to proposed
- kernel-stable-phase-changed:Tuesday, 22. August 2017 18:30 UTC
+ phase: Promoted to proposed
+ proposed-announcement-sent: true
+ proposed-testing-requested: true

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1712170

Title:
  linux-gcp: 4.10.0-1004.4 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Confirmed
Status in Kernel SRU Workflow certification-testing series:
  Confirmed
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  New
Status in Kernel SRU Workflow promote-to-updates series:
  New
Status in Kernel SRU Workflow regression-testing series:
  Confirmed
Status in Kernel SRU Workflow security-signoff series:
  Fix Released
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Confirmed
Status in linux-gcp package in Ubuntu:
  Invalid
Status in linux-gcp source package in Xenial:
  New

Bug description:
  This bug is for tracking the 4.10.0-1004.4 upload package. This bug
  will contain status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see: 
https://wiki.ubuntu.com/Kernel/kernel-sru-workflow
  -- swm properties --
  boot-testing-requested: true
  kernel-stable-master-bug: 1709303
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1712170/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1567557] Re: Performance degradation of "zfs clone"

2017-08-22 Thread Stéphane Graber
** No longer affects: lxd (Ubuntu)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1567557

Title:
  Performance degradation of "zfs clone"

Status in Native ZFS for Linux:
  New
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in lxd source package in Xenial:
  New
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in lxd source package in Zesty:
  New
Status in zfs-linux source package in Zesty:
  Fix Committed
Status in lxd source package in Artful:
  Confirmed
Status in zfs-linux source package in Artful:
  Fix Released

Bug description:
  [SRU Justification]

  Creating thousands of clones can be prohibitively slow. The
  underlying mechanism to gather clone information uses a 16K buffer,
  which limits performance.  Also, the initial approach is to pass a
  zero-sized buffer to the underlying ioctl() to get an idea of the
  size of the buffer required to fetch the information back to userspace.
  If we bump the initial buffer to a larger size, we reduce the need
  for two ioctl calls, which improves performance.

  [Fix]
  Bump initial buffer size from 16K to 256K

  [Regression Potential]
  This is minimal as this is just a tweak in the initial buffer size and larger 
sizes are handled correctly by ZFS since they are normally used on the second 
ioctl() call once we have established the size of the buffer required from the 
first ioctl() call. Larger initial buffers just remove the need for the initial 
size estimation for most cases where the number of clones is less than ~5000.  
There is a risk that a larger buffer size could lead to an ENOMEM issue when
allocating the buffer, but the size of buffer used is still trivial for modern 
large 64 bit servers running ZFS.

  [Test case]
  Create 4000 clones. With the fix this takes 35-40% less time than without the 
fix. See the example test.sh script as an example of how to create this many 
clones.


  
  --

  I've been running some scale tests for LXD and what I've noticed is
  that "zfs clone" gets slower and slower as the zfs filesystem is
  getting busier.

  It feels like "zfs clone" requires some kind of pool-wide lock or
  something, and so needs all in-flight operations to complete before it can
  clone a new filesystem.

  A basic LXD scale test with btrfs vs zfs shows what I mean, see below
  for the reports.

  The test is run on a completely dedicated physical server with the
  pool on a dedicated SSD, the exact same machine and SSD was used for
  the btrfs test.

  The zfs filesystem is configured with those settings:
   - relatime=on
   - sync=disabled
   - xattr=sa

  So it shouldn't be related to pending sync() calls...

  The workload in this case is ultimately 1024 containers running busybox as 
their init system and udhcpc grabbing an IP.
  The problem gets significantly worse if spawning busier containers, say a 
full Ubuntu system.

  === zfs ===
  root@edfu:~# /home/ubuntu/lxd-benchmark spawn --count=1024 
--image=images:alpine/edge/amd64 --privileged=true
  Test environment:
    Server backend: lxd
    Server version: 2.0.0.rc8
    Kernel: Linux
    Kernel architecture: x86_64
    Kernel version: 4.4.0-16-generic
    Storage backend: zfs
    Storage version: 5
    Container backend: lxc
    Container version: 2.0.0.rc15

  Test variables:
    Container count: 1024
    Container mode: privileged
    Image: images:alpine/edge/amd64
    Batches: 128
    Batch size: 8
    Remainder: 0

  [Apr  3 06:42:51.170] Importing image into local store: 
64192037277800298d8c19473c055868e0288b039349b1c6579971fe99fdbac7
  [Apr  3 06:42:52.657] Starting the test
  [Apr  3 06:42:53.994] Started 8 containers in 1.336s
  [Apr  3 06:42:55.521] Started 16 containers in 2.864s
  [Apr  3 06:42:58.632] Started 32 containers in 5.975s
  [Apr  3 06:43:05.399] Started 64 containers in 12.742s
  [Apr  3 06:43:20.343] Started 128 containers in 27.686s
  [Apr  3 06:43:57.269] Started 256 containers in 64.612s
  [Apr  3 06:46:09.112] Started 512 containers in 196.455s
  [Apr  3 06:58:19.309] Started 1024 containers in 926.652s
  [Apr  3 06:58:19.309] Test completed in 926.652s

  === btrfs ===
  Test environment:
    Server backend: lxd
    Server version: 2.0.0.rc8
    Kernel: Linux
    Kernel architecture: x86_64
    Kernel version: 4.4.0-16-generic
    Storage backend: btrfs
    Storage version: 4.4
    Container backend: lxc
    Container version: 2.0.0.rc15

  Test variables:
    Container count: 1024
    Container mode: privileged
    Image: images:alpine/edge/amd64
    Batches: 128
    Batch size: 8
    Remainder: 0

  [Apr  3 07:42:12.053] Importing image into local store: 
64192037277800298d8c19473c055868e0288b039349b1c6579971fe99fdbac7
  [Apr  3 07:42:13.351] Starting the test
  [Apr  3 07:42:14.793] Started 8 containers

[Group.of.nepali.translators] [Bug 1636593] Re: Please update nginx for Xenial, Yakkety, and Zesty to 1.10.2

2017-08-22 Thread Thomas Ward
** Changed in: nginx (Ubuntu Xenial)
   Status: Triaged => Fix Released

** Changed in: nginx (Ubuntu Yakkety)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1636593

Title:
  Please update nginx for Xenial, Yakkety, and Zesty to 1.10.2

Status in nginx package in Ubuntu:
  Fix Released
Status in nginx source package in Xenial:
  Fix Released
Status in nginx source package in Yakkety:
  Won't Fix
Status in nginx source package in Zesty:
  Fix Released

Bug description:
  1.10.2 was released back in October.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1636593/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1705582] Re: Update Package request for lsvpd

2017-08-22 Thread Steve Langasek
** Changed in: lsvpd (Ubuntu)
   Status: In Progress => Fix Released

** Also affects: lsvpd (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: lsvpd (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: lsvpd (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: lsvpd (Ubuntu Xenial)
 Assignee: (unassigned) => Canonical Foundations Team 
(canonical-foundations)

** Changed in: ubuntu-power-systems
 Assignee: Canonical Foundations Team (canonical-foundations) => 
(unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1705582

Title:
  Update Package request for lsvpd

Status in The Ubuntu-power-systems project:
  Incomplete
Status in lsvpd package in Ubuntu:
  Fix Released
Status in lsvpd source package in Xenial:
  In Progress

Bug description:
  SRU
  

  ---Problem Description---
  Update Package request for lsvpd

  ---uname output---
  Linux tuleta4u-lp9 4.10.0-27-generic #30~16.04.2-Ubuntu SMP Thu Jun 29 
16:06:52 UTC 2017 ppc64le ppc64le ppc64le GNU/Linux

  ---Steps to Reproduce---
   lsvpd

  Userspace tool common name: lsvpd

  Please pull below patches for lsvpd package.

  --
  commit 7499dcc199da96befef7bb788d62e446833137b2
  Author: Ankit Kumar 
  Date:   Mon Dec 7 11:59:05 2015 +0530

  lsvpd: Unique name for temp Device file given and file closed
  before deleting it.

  This piece of code used mkstemp to get unique name of file in that 
particular
  directory. As file name is not hardcoded, so parameter for device_close 
and
  device_open is changed to pass device location path.

  This patch also fixes the issue of opened file which is not closed by 
closing
  file descriptor.

  Signed-off-by: Ankit Kumar 
  [Removed redundant variable - Vasant]
  Signed-off-by: Vasant Hegde 

  -

  Backported patch on top of 1.7.6 provided

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-power-systems/+bug/1705582/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1712170] Re: linux-gcp: 4.10.0-1004.4 -proposed tracker

2017-08-22 Thread Steve Langasek
** Also affects: linux-gcp (Ubuntu Xenial)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1712170

Title:
  linux-gcp: 4.10.0-1004.4 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  New
Status in Kernel SRU Workflow certification-testing series:
  New
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Confirmed
Status in Kernel SRU Workflow promote-to-security series:
  New
Status in Kernel SRU Workflow promote-to-updates series:
  New
Status in Kernel SRU Workflow regression-testing series:
  New
Status in Kernel SRU Workflow security-signoff series:
  New
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  New
Status in linux-gcp package in Ubuntu:
  Invalid
Status in linux-gcp source package in Xenial:
  New

Bug description:
  This bug is for tracking the 4.10.0-1004.4 upload package. This bug
  will contain status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see: 
https://wiki.ubuntu.com/Kernel/kernel-sru-workflow
  -- swm properties --
  boot-testing-requested: true
  kernel-stable-master-bug: 1709303
  phase: Uploaded

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1712170/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1635597] Re: Ubuntu16.10:talclp1: Kdump failed with multipath disk

2017-08-22 Thread Andrew Cloke
** Changed in: linux (Ubuntu)
   Status: Invalid => New

** Changed in: linux (Ubuntu Trusty)
   Status: Invalid => New

** Changed in: linux (Ubuntu Xenial)
   Status: Invalid => New

** Changed in: linux (Ubuntu Zesty)
   Status: Invalid => New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1635597

Title:
  Ubuntu16.10:talclp1: Kdump failed with multipath disk

Status in The Ubuntu-power-systems project:
  In Progress
Status in linux package in Ubuntu:
  New
Status in makedumpfile package in Ubuntu:
  Fix Released
Status in linux source package in Trusty:
  New
Status in makedumpfile source package in Trusty:
  Confirmed
Status in linux source package in Xenial:
  New
Status in makedumpfile source package in Xenial:
  Confirmed
Status in linux source package in Zesty:
  New
Status in makedumpfile source package in Zesty:
  Confirmed

Bug description:
  Problem  Description
  ==
  On talclp1, I enabled kdump. But kdump failed and it dropped to BusyBox.

  root@talclp1:~# echo c> /proc/sysrq-trigger
  [  132.643690] sysrq: SysRq : Trigger a crash
  [  132.643739] Unable to handle kernel paging request for data at address 
0x
  [  132.643745] Faulting instruction address: 0xc05c28f4
  [  132.643749] Oops: Kernel access of bad area, sig: 11 [#1]
  [  132.643753] SMP NR_CPUS=2048 NUMA pSeries
  [  132.643758] Modules linked in: fuse ufs qnx4 hfsplus hfs minix ntfs msdos 
jfs rpadlpar_io rpaphp rpcsec_gss_krb5 nfsv4 dccp_diag cifs nfs dns_resolver 
dccp tcp_diag fscache udp_diag inet_diag unix_diag af_packet_diag netlink_diag 
binfmt_misc xfs libcrc32c pseries_rng rng_core ghash_generic gf128mul 
vmx_crypto sg nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables x_tables 
autofs4 ext4 crc16 jbd2 fscrypto mbcache crc32c_generic btrfs xor raid6_pq 
dm_round_robin sr_mod sd_mod cdrom ses enclosure scsi_transport_sas ibmveth 
crc32c_vpmsum ipr scsi_dh_emc scsi_dh_rdac scsi_dh_alua dm_multipath dm_mod
  [  132.643819] CPU: 49 PID: 10174 Comm: bash Not tainted 4.8.0-15-generic 
#16-Ubuntu
  [  132.643824] task: c00111767080 task.stack: c000d82e
  [  132.643828] NIP: c05c28f4 LR: c05c39d8 CTR: 
c05c28c0
  [  132.643832] REGS: c000d82e3990 TRAP: 0300   Not tainted  
(4.8.0-15-generic)
  [  132.643836] MSR: 80009033   CR: 28242422  
XER: 0001
  [  132.643848] CFAR: c00087d0 DAR:  DSISR: 4200 
SOFTE: 1
  GPR00: c05c39d8 c000d82e3c10 c0f67b00 0063
  GPR04: c0011d04a9b8 c0011d05f7e0 c0047fb0 00015998
  GPR08: 0007 0001  0001
  GPR12: c05c28c0 c7b4b900  2200
  GPR16: 10170dc8 01002b566368 10140f58 100c7570
  GPR20:  1017dd58 10153618 1017b608
  GPR24: 3e87a294 0001 c0ebff60 0004
  GPR28: c0ec0320 0063 c0e72a90 
  [  132.643906] NIP [c05c28f4] sysrq_handle_crash+0x34/0x50
  [  132.643911] LR [c05c39d8] __handle_sysrq+0xe8/0x280
  [  132.643914] Call Trace:
  [  132.643917] [c000d82e3c10] [c0a245e8] 0xc0a245e8 
(unreliable)
  [  132.643923] [c000d82e3c30] [c05c39d8] __handle_sysrq+0xe8/0x280
  [  132.643928] [c000d82e3cd0] [c05c4188] 
write_sysrq_trigger+0x78/0xa0
  [  132.643935] [c000d82e3d00] [c03ad770] proc_reg_write+0xb0/0x110
  [  132.643941] [c000d82e3d50] [c030fc3c] __vfs_write+0x6c/0xe0
  [  132.643946] [c000d82e3d90] [c0311144] vfs_write+0xd4/0x240
  [  132.643950] [c000d82e3de0] [c0312e5c] SyS_write+0x6c/0x110
  [  132.643957] [c000d82e3e30] [c00095e0] system_call+0x38/0x108
  [  132.643961] Instruction dump:
  [  132.643963] 38425240 7c0802a6 f8010010 f821ffe1 6000 6000 3d220019 
3949ba60
  [  132.643972] 3921 912a 7c0004ac 3940 <992a> 38210020 
e8010010 7c0803a6
  [  132.643981] ---[ end trace eed6bbcd2c3bdfdf ]---
  [  132.646105]
  [  132.646176] Sending IPI to other CPUs
  [  132.647490] IPI complete
  I'm in purgatory
   -> smp_release_cpus()
  spinning_secondaries = 104
   <- smp_release_cpus()
  [2.011346] alg: hash: Test 1 failed for crc32c-vpmsum
  [2.729254] sd 0:2:0:0: [sda] Assuming drive cache: write through
  [2.731554] sd 1:2:5:0: [sdn] Assuming drive cache: write through
  [2.739087] sd 1:2:4:0: [sdm] Assuming drive cache: write through
  [2.739089] sd 1:2:6:0: [sdo] Assuming drive cache: write through
  [2.739110] sd 1:2:7:0: [sdp] Assuming drive cache: write through
  [2.739115] sd 1:2:0:0: [sdi] Assuming drive cache: write through
  [2.7

[Group.of.nepali.translators] [Bug 1708354] Re: [CVE] Correctly handle bogusly large chunk sizes

2017-08-22 Thread Launchpad Bug Tracker
This bug was fixed in the package varnish - 5.0.0-7ubuntu0.1

---
varnish (5.0.0-7ubuntu0.1) zesty-security; urgency=medium

  * SECURITY UPDATE: Correctly handle bogusly large chunk sizes (LP: #1708354)
- 5.0-Correctly-handle-bogusly-large-chunk-sizes.patch
- CVE-2017-12425

 -- Simon Quigley   Mon, 07 Aug 2017 12:57:31 -0500

** Changed in: varnish (Ubuntu Zesty)
   Status: Fix Committed => Fix Released

** Changed in: varnish (Ubuntu Xenial)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1708354

Title:
  [CVE] Correctly handle bogusly large chunk sizes

Status in varnish package in Ubuntu:
  Fix Released
Status in varnish source package in Xenial:
  Fix Released
Status in varnish source package in Zesty:
  Fix Released

Bug description:
  https://varnish-cache.org/security/VSV1.html

  CVE-2017-12425

  Date: 2017-08-02

  A wrong if statement in the varnishd source code means that particular
  invalid requests from the client can trigger an assert.

  This causes the varnishd worker process to abort and restart, losing
  the cached contents in the process.

  An attacker can therefore crash the varnishd worker process on demand
  and effectively keep it from serving content - a Denial-of-Service
  attack.

  Mitigation is possible from VCL or by updating to a fixed version of Varnish 
Cache.
  Versions affected

  4.0.1 to 4.0.4
  4.1.0 to 4.1.7
  5.0.0
  5.1.0 to 5.1.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/varnish/+bug/1708354/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1709784] Re: KVM on 16.04.3 throws an error

2017-08-22 Thread Joseph Salisbury
** Changed in: linux (Ubuntu Zesty)
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1709784

Title:
  KVM on 16.04.3 throws an error

Status in The Ubuntu-power-systems project:
  In Progress
Status in linux package in Ubuntu:
  In Progress
Status in qemu package in Ubuntu:
  Won't Fix
Status in linux source package in Xenial:
  In Progress
Status in linux source package in Zesty:
  Invalid

Bug description:
  Problem Description
  
  KVM on Ubuntu 16.04.3 throws an error when used
   
  ---uname output---
  Linux bastion-1 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:37:08 UTC 2017 
ppc64le ppc64le ppc64le GNU/Linux
   
  Machine Type =  8348-21C Habanero 
   
  ---Steps to Reproduce---
   Install 16.04.3

  install KVM like:

  apt-get install libvirt-bin qemu qemu-slof qemu-system qemu-utils

  then exit and log back in so virsh will work without sudo

  then run my spawn script

  $ cat spawn.sh
  #!/bin/bash

  img=$1
  qemu-system-ppc64 \
  -machine pseries,accel=kvm,usb=off -cpu host -m 512 \
  -display none -nographic \
  -net nic -net user \
  -drive "file=$img"

  with a freshly downloaded ubuntu cloud image

  sudo ./spawn.sh xenial-server-cloudimg-ppc64el-disk1.img

  And I get nothing on the output.

  and errors in dmesg

  
  ubuntu@bastion-1:~$ [  340.180295] Facility 'TM' unavailable, exception at 
0xd000148b7f10, MSR=90009033
  [  340.180399] Oops: Unexpected facility unavailable exception, sig: 6 [#1]
  [  340.180513] SMP NR_CPUS=2048 NUMA PowerNV
  [  340.180547] Modules linked in: xt_CHECKSUM iptable_mangle ipt_MASQUERADE 
nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 
nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp 
bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables 
iptable_filter ip_tables x_tables kvm_hv kvm binfmt_misc joydev input_leds 
mac_hid opal_prd ofpart cmdlinepart powernv_flash ipmi_powernv ipmi_msghandler 
mtd at24 uio_pdrv_genirq uio ibmpowernv powernv_rng vmx_crypto ib_iser rdma_cm 
iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi 
scsi_transport_iscsi autofs4 btrfs raid10 raid456 async_raid6_recov 
async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath 
linear mlx4_en hid_generic usbhid hid uas usb_storage ast i2c_algo_bit bnx2x 
ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops mlx4_core drm 
ahci vxlan libahci ip6_udp_tunnel udp_tunnel mdio libcrc32c
  [  340.181331] CPU: 46 PID: 5252 Comm: qemu-system-ppc Not tainted 
4.4.0-89-generic #112-Ubuntu
  [  340.181382] task: c01e34c30b50 ti: c01e34ce4000 task.ti: 
c01e34ce4000
  [  340.181432] NIP: d000148b7f10 LR: d00014822a14 CTR: 
d000148b7e40
  [  340.181475] REGS: c01e34ce77b0 TRAP: 0f60   Not tainted  
(4.4.0-89-generic)
  [  340.181519] MSR: 90009033   CR: 22024848  
XER: 
  [  340.181629] CFAR: d000148b7ea4 SOFTE: 1 
  GPR00: d00014822a14 c01e34ce7a30 d000148cc018 c01e37bc 
  GPR04: c01db9ac c01e34ce7bc0   
  GPR08: 0001 c01e34c30b50 0001 d000148278f8 
  GPR12: d000148b7e40 cfb5b500  001f 
  GPR16: 3fff91c3 0080 3fffa8e34390 3fff9242f200 
  GPR20: 3fff92430010 01001de5c030 3fff9242eb60 100c1ff0 
  GPR24: 3fffc91fe990 3fff91c10028  c01e37bc 
  GPR28:  c01db9ac c01e37bc c01db9ac 
  [  340.182315] NIP [d000148b7f10] kvmppc_vcpu_run_hv+0xd0/0xff0 [kvm_hv]
  [  340.182357] LR [d00014822a14] kvmppc_vcpu_run+0x44/0x60 [kvm]
  [  340.182394] Call Trace:
  [  340.182413] [c01e34ce7a30] [c01e34ce7ab0] 0xc01e34ce7ab0 
(unreliable)
  [  340.182468] [c01e34ce7b70] [d00014822a14] 
kvmppc_vcpu_run+0x44/0x60 [kvm]
  [  340.182522] [c01e34ce7ba0] [d0001481f674] 
kvm_arch_vcpu_ioctl_run+0x64/0x170 [kvm]
  [  340.182581] [c01e34ce7be0] [d00014813918] 
kvm_vcpu_ioctl+0x528/0x7b0 [kvm]
  [  340.182634] [c01e34ce7d40] [c02fffa0] do_vfs_ioctl+0x480/0x7d0
  [  340.182678] [c01e34ce7de0] [c03003c4] SyS_ioctl+0xd4/0xf0
  [  340.182723] [c01e34ce7e30] [c0009204] system_call+0x38/0xb4
  [  340.182766] Instruction dump:
  [  340.182788] e92d02a0 e9290a50 e9290108 792a07e3 41820058 e92d02a0 e9290a50 
e9290108 
  [  340.182863] 7927e8a4 78e71f87 40820ed8 e92d02a0 <7d4022a6> f9490ee8 
e92d02a0 7d4122a6 
  [  340.182938] ---[ end trace bc5080cb7d18f102 ]---
  [  340.276202] 

  
  This was with the latest ubuntu cloud image. I get the same thing when trying 
to use virt-install with an ISO image. 

  I have no way of 

[Group.of.nepali.translators] [Bug 1709299] Re: linux-snapdragon: 4.4.0-1073.78 -proposed tracker

2017-08-22 Thread Gavin Lin
Hardware Certification has completed testing this -proposed kernel. No
regressions were observed; results are available here:
http://people.canonical.com/~hwcert/sru-testing/snapdragon/4.4.0-1073.78/snapdragon-4.4-proposed-published.html

** Tags added: certification-testing-passed

** Changed in: kernel-sru-workflow/certification-testing
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1709299

Title:
  linux-snapdragon: 4.4.0-1073.78 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Fix Released
Status in Kernel SRU Workflow certification-testing series:
  Fix Released
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  New
Status in Kernel SRU Workflow promote-to-updates series:
  New
Status in Kernel SRU Workflow regression-testing series:
  Invalid
Status in Kernel SRU Workflow security-signoff series:
  In Progress
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Confirmed
Status in linux-snapdragon package in Ubuntu:
  Invalid
Status in linux-snapdragon source package in Xenial:
  Confirmed

Bug description:
  This bug is for tracking the upload of this package. It will contain
  status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see: 
https://wiki.ubuntu.com/Kernel/kernel-sru-workflow
  -- swm properties --
  boot-testing-requested: true
  kernel-stable-master-bug: 1709296
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1709299/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1709298] Re: linux-raspi2: 4.4.0-1071.79 -proposed tracker

2017-08-22 Thread Gavin Lin
Hardware Certification has completed testing this -proposed kernel. No
regressions were observed; results are available here:
http://people.canonical.com/~hwcert/sru-testing/raspi2/4.4.0-1071.79/raspi2-4.4-proposed-published.html

** Tags added: certification-testing-passed

** Changed in: kernel-sru-workflow/certification-testing
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1709298

Title:
  linux-raspi2: 4.4.0-1071.79 -proposed tracker

Status in Kernel SRU Workflow:
  In Progress
Status in Kernel SRU Workflow automated-testing series:
  Fix Released
Status in Kernel SRU Workflow certification-testing series:
  Fix Released
Status in Kernel SRU Workflow prepare-package series:
  Fix Released
Status in Kernel SRU Workflow prepare-package-meta series:
  Fix Released
Status in Kernel SRU Workflow promote-to-proposed series:
  Fix Released
Status in Kernel SRU Workflow promote-to-security series:
  New
Status in Kernel SRU Workflow promote-to-updates series:
  New
Status in Kernel SRU Workflow regression-testing series:
  Invalid
Status in Kernel SRU Workflow security-signoff series:
  In Progress
Status in Kernel SRU Workflow upload-to-ppa series:
  New
Status in Kernel SRU Workflow verification-testing series:
  Confirmed
Status in linux-raspi2 package in Ubuntu:
  Invalid
Status in linux-raspi2 source package in Xenial:
  Confirmed

Bug description:
  This bug is for tracking the upload of this package. It will contain
  status and testing results related to that upload.

  For an explanation of the tasks and the associated workflow see: 
https://wiki.ubuntu.com/Kernel/kernel-sru-workflow
  -- swm properties --
  boot-testing-requested: true
  kernel-stable-master-bug: 1709296
  phase: Promoted to proposed
  proposed-announcement-sent: true
  proposed-testing-requested: true

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel-sru-workflow/+bug/1709298/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp