[Kernel-packages] [Bug 2003714] Re: Azure: TDX enabled hyper-visors cause segfault

2023-01-24 Thread Dexuan Cui
FYI, the glibc bug is not
https://sourceware.org/bugzilla/show_bug.cgi?id=28784; instead, it's Bug
30037 - glibc 2.34 and newer segfault if CPUID leaf 0x2 reports zero
(https://sourceware.org/bugzilla/show_bug.cgi?id=30037)
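
For reference, the condition that trips glibc can be checked from inside the guest by dumping CPUID leaf 0x2 (the leaf named in that bug title). A minimal sketch, assuming the cpuid(1) utility from the Ubuntu "cpuid" package is available:

sudo apt-get install -y cpuid
# Raw dump of leaf 0x2 on one CPU; per the bug report above, an affected
# TDX guest reports this leaf as zero, which is what glibc 2.34+ chokes on.
cpuid -1 -r -l 2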

** Bug watch added: Sourceware.org Bugzilla #28784
   https://sourceware.org/bugzilla/show_bug.cgi?id=28784

** Bug watch added: Sourceware.org Bugzilla #30037
   https://sourceware.org/bugzilla/show_bug.cgi?id=30037

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/2003714

Title:
  Azure: TDX enabled hyper-visors cause segfault

Status in linux-azure package in Ubuntu:
  In Progress

Bug description:
  SRU Justification

  [Impact]

  Microsoft TDX-enabled hypervisors cause a segfault due to an upstream
  glibc bug. This can be worked around with a kernel patch.

  Issue Description:

  When I start an Intel TDX Ubuntu 22.04 (or RHEL 9.0) guest on Hyper-V,
  the guest always hits segfaults and can’t boot up. Here the kernel
  running in the guest is the upstream kernel + my TDX patchset, or the
  5.19.0-azure kernel + the same TDX patchset:

  [Fix]

  We confirmed the segfault also happens to TDX guests on the KVM
  hypervisor. After I checked with more Intel folks, it turns out this
  is indeed a glibc bug
  (https://sourceware.org/bugzilla/show_bug.cgi?id=28784), which has
  been fixed in the upstream glibc, but Ubuntu 22.04 and newer haven’t
  picked up the glibc fix yet.

  I got a kernel-side temporary workaround from Intel:
  https://github.com/dcui/tdx/commit/16218cf73491e867fd39c16c9e4b8aa926cbda68,
  which is on the same existing branch “decui/upstream-
  kinetic-22.10/master-next/1209”.

  [   21.081453] Run /init as init process
  [   21.086896]   with arguments:
  [   21.095790] /init
  [   21.100982]   with environment:
  [   21.106611] HOME=/
  [   21.112463] TERM=linux
  [   21.119850] BOOT_IMAGE=/boot/vmlinuz-6.1.0-rc7-decui+

  Loading, please wait...

  Starting version 249.11-0ubuntu3.6

  [   21.253908] udevadm[144]: segfault at 56538d61e0c0 ip 7f8f5899efeb sp 
7ffd08fb7648 error 6 in libc.so.6[7f8f5882+195000] likely on CPU 0 
(core 0, socket 0)
  [   21.316549] Code: 07 62 e1 7d 48 e7 4f 01 62 e1 7d 48 e7 67 40 62 e1 7d 48 
e7 6f 41 62 61 7d 48 e7 87 00 20 00 00 62 61 7d 48 e7 8f 40 20 00 00 <62> 61 7d 
48 e7 a7 00 30 00 00 62 61 7d 48 e7 af 40 30 00 00 48 83

  Segmentation fault

  [   22.499317] setfont[153]: segfault at 55ef3b91b000 ip 7f5899899fa4 sp 
7ffc8008f628 error 4 in libc.so.6[7f589971b000+195000] likely on CPU 0 
(core 0, socket 0)
  [   22.602677] Code: 06 62 e1 fe 48 6f 4e 01 62 e1 fe 48 6f 66 40 62 e1 fe 48 
6f 6e 41 62 61 fe 48 6f 86 00 20 00 00 62 61 fe 48 6f 8e 40 20 00 00 <62> 61 fe 
48 6f a6 00 30 00 00 62 61 fe 48 6f ae 40 30 00 00 48 83
  [   22.732413] loadkeys[156]: segfault at 563ffe292000 ip 7fbff957afa4 sp 
7ffe31453808 error 4 in libc.so.6[7fbff93fc000+195000] likely on CPU 0 
(core 0, socket 0)
  [   22.833061] Code: 06 62 e1 fe 48 6f 4e 01 62 e1 fe 48 6f 66 40 62 e1 fe 48 
6f 6e 41 62 61 fe 48 6f 86 00 20 00 00 62 61 fe 48 6f 8e 40 20 00 00 <62> 61 fe 
48 6f a6 00 30 00 00 62 61 fe 48 6f ae 40 30 00 00 48 83

  The segfault only happens to recent glibc versions (e.g. v2.35 in
  Ubuntu 22.04, and v2.34 in RHEL 9.0). It doesn’t happen to v2.31 in
  Ubuntu 20.04 or v2.32 in Ubuntu 20.10. So something in glibc must
  have changed between v2.32 (good) and 2.34+ (not working for TDX). The
  oddity is: when I run the same Ubuntu 22.04/RHEL 9.0 image as a
  regular non-TDX guest, the segfault never happens.
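
  A quick way to confirm which glibc a given image ships (a trivial check;
  either command works on Ubuntu/Debian-style systems):

      ldd --version | head -n1         # reports the GNU libc version
      dpkg -s libc6 | grep '^Version'  # package version of the installed libc6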

  If I boot up a Ubuntu 20.04 TDX guest (which works fine), mount a
  Ubuntu 22.04 VHD image (“mount /dev/sdd1 /mnt”) and try to run “chroot
  /mnt”, I hit the same segfault:

  [  109.478556] EXT4-fs (sdd1): mounted filesystem with ordered data mode. 
Quota mode: none.
  [  129.22] bash[2112]: segfault at 556987854000 ip 7f88468c4ea4 sp 
7ffc22ecf158 error 6 in libc.so.6[7f8846828000+195000] likely on CPU 48 
(core 0, socket 48)
  [  129.242434] Code: e7 bf 30 10 00 00 66 44 0f e7 87 00 20 00 00 66 44 0f e7 
8f 10 20 00 00 66 44 0f e7 97 20 20 00 00 66 44 0f e7 9f 30 20 00 00 <66> 44 0f 
e7 a7 00 30 00 00 66 44 0f e7 af 10 30 00 00 66 44 0f e7

  It looks like the application is referencing a memory location that
  somehow triggers a page fault, which is converted to a SIGSEGV signal
  that terminates the application (I’m not sure where the below
  “movntdq” instructions come from):

  root@decui-u2004-u28:/opt/linus-0824# echo 'Code: e7 bf 30 10 00 00 66
  44 0f e7 87 00 20 00 00 66 44 0f e7 8f 10 20 00 00 66 44 0f e7 97 20
  20 00 00 66 44 0f e7 9f 30 20 00 00 <66> 44 0f e7 a7 00 30 00 00 66 44
  0f e7 af 10 30 00 00 66 44 0f e7' | scripts/decodecode

  Code: e7 bf 30 10 00 00 66 44 0f e7 87 00 20 00 00 66 44 0f e7 8f 10
  20 00 00 66 44 0f

[Kernel-packages] [Bug 1902531] Re: [linux-azure] IP forwarding issue in netvsc

2022-12-04 Thread Dexuan Cui
I guess the SYN/ACK will get hashed on the same tunnel link if you
disable accelerated networking.

When accelerated networking is enabled on Azure, all incoming TCP SYN
packets are still received through the software NIC (netvsc) and get their
RX hash values from the Hyper-V virtual switch, but the ACK packets are
received through the Mellanox NIC (and then passed on to the hv_netvsc
driver), so the ACK packets get hash values from the Mellanox NIC, and
those values may differ from the ones computed for the SYN packets.

In the future (maybe it's already available on some nodes), when
accelerated networking is enabled, the incoming SYN packets will also be
received through the hardware NIC rather than through netvsc.
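
A quick way to tell whether accelerated networking is actually in effect in a
given VM (a sketch; "eth0" is just an example interface name):

lspci | grep -i mellanox   # with AN, the VF shows up as a Mellanox PCI device
ip -br link                # an extra VF interface appears next to the synthetic NIC
ethtool -i eth0            # the synthetic NIC reports driver: hv_netvsc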

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1902531

Title:
  [linux-azure] IP forwarding issue in netvsc

Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-azure-4.15 package in Ubuntu:
  New
Status in linux-azure source package in Bionic:
  Invalid
Status in linux-azure-4.15 source package in Bionic:
  Fix Released
Status in linux-azure source package in Focal:
  Fix Released
Status in linux-azure-4.15 source package in Focal:
  Invalid

Bug description:
  [Impact]

  We identified an issue with the Linux netvsc driver when used in IP
  forwarding mode.  The problem is that the RSS hash value is not
  propagated to the outgoing packet, and so such packets go out on
  channel 0.  This produces an imbalance across outgoing channels, and a
  possible overload on the single host-side CPU that is processing
  channel 0.   The problem does not occur when Accelerated Networking is
  used because the packets go out through the Mellanox driver.  Because
  it is tied to IP forwarding, the problem is presumably most likely to
  be visible in a virtual appliance device that is doing network load
  balancing or other kinds of packet filtering and redirection.

  We would like to request fixes to this issue in 16.04, 18.04 and
  20.04.

  Two fixes are already in the upstream v5.5+, so they’re already in
  5.8.0-1011.11.

  For 5.4.0-1031.32, the 2 fixes can apply cleanly:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 5.0.0-1036.38, we need 1 more patch applied first, so the list is:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b441f79532ec13dc82d05c55badc4da1f62a6141
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 4.15.0-1098.109~16.04.1, the 2 patches cannot apply cleanly, so Dexuan backported them here:
  https://github.com/dcui/linux/commit/4ed58762a56cccfd006e633fac63311176508795
  https://github.com/dcui/linux/commit/40ad7849a6365a5a485f05453e10e3541025e25a
  (The 2 patches are on the branch 
https://github.com/dcui/linux/commits/decui/ubuntu_16.04/linux-azure/Ubuntu-azure-4.15.0-1098.109_16.04.1)

  
  [Test Case]

  As described in https://bugs.launchpad.net/ubuntu/+source/linux-
  azure/+bug/1902531/comments/6

  
  [Where problems could occur]

  A potential regression would affect Azure instances using netvsc
  without accelerated networking.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1902531/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1973758] Re: Azure: Mellanox VF NIC crashes when removed

2022-05-18 Thread Dexuan Cui
I checked with Matthew and found that Matthew had only applied the first
patch [1]; after I applied the second patch [2], I'm no longer seeing any
crash or memory corruption issue in Matthew's VM.

BTW, the Windows Server 2019 host running Matthew's VM doesn't handle NIC
SR-IOV correctly: when SR-IOV is enabled, the host offers an Intel VF NIC
to the VM, then immediately removes/rescinds the VF (this causes
hv_pci_probe() to fail and triggers the bug on its error-handling path),
and never re-offers the VF. In other words, NIC SR-IOV doesn't work on
this host, but that's a host bug and the host team needs to investigate it.


[0] https://lists.ubuntu.com/archives/kernel-team/2022-May/130378.html
[1] https://lists.ubuntu.com/archives/kernel-team/2022-May/130379.html
[2] https://lists.ubuntu.com/archives/kernel-team/2022-May/130380.html
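
The offer/rescind sequence described above is easy to spot in the guest's
kernel log; a grep along these lines is usually enough (illustrative patterns
only, the exact message text depends on the kernel version):

dmesg | grep -iE 'hv_pci|pci-hyperv|rescind'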

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1973758

Title:
  Azure:  Mellanox VF NIC crashes when removed

Status in linux-azure package in Ubuntu:
  Invalid
Status in linux-azure source package in Focal:
  In Progress

Bug description:
  SRU Justification

  [Impact]

  The 5.4.0-1075-azure and newer kernels are broken in that the VM can
  easily panic when the Mellanox VF NIC is removed and re-added due to
  Azure host servicing events, or by the manual "unbind/bind" test below
  (here the GUID can be different in different VMs):

  for i in `seq 1 1000`; do
      cd /sys/bus/vmbus/drivers/hv_pci
      echo abdc2107-402e-4704-8c88-c2b850696c3c > unbind
      echo abdc2107-402e-4704-8c88-c2b850696c3c > bind
  done

  A sample panic call-trace is:
  [ 107.359954] kernel BUG at 
/build/linux-azure-5.4-4I3kFs/linux-azure-5.4-5.4.0/mm/slub.c:4020!
  [ 107.363858] invalid opcode:  [#1] SMP NOPTI
  [ 107.365870] CPU: 0 PID: 334 Comm: kworker/0:2 Not tainted 5.4.0-1077-azure 
#80~18.04.1-Ubuntu
  [ 107.369589] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine, BIOS 090008 12/07/2018
  [ 107.373811] Workqueue: events vmbus_onmessage_work
  [ 107.375909] RIP: 0010:kfree+0x1d2/0x240
  …
  [ 107.413789] Call Trace:
  [ 107.414867] kobject_uevent_env+0x1b5/0x7e0
  [ 107.416747] kobject_uevent+0xb/0x10
  [ 107.418327] device_release_driver_internal+0x191/0x1c0
  [ 107.420653] device_release_driver+0x12/0x20
  [ 107.422523] bus_remove_device+0xe1/0x150
  [ 107.424279] device_del+0x167/0x380
  [ 107.425824] device_unregister+0x1a/0x60
  [ 107.427536] vmbus_device_unregister+0x27/0x50
  [ 107.429528] vmbus_onoffer_rescind+0x1d0/0x1f0
  [ 107.431474] vmbus_onmessage+0x2c/0x70
  [ 107.433104] vmbus_onmessage_work+0x22/0x30
  [ 107.434919] process_one_work+0x209/0x400
  [ 107.436661] worker_thread+0x34/0x40

  It turns out there is a bug in https://git.launchpad.net/~canonical-
  kernel/ubuntu/+source/linux-
  azure/+git/bionic/commit/?id=16a3c750a78d8, which misses the second
  hunk of the upstream patch
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=877b911a5ba0.

  Please apply the below patch to fix the issue:

  --- a/drivers/pci/controller/pci-hyperv.c
  +++ b/drivers/pci/controller/pci-hyperv.c
  @@ -3653,7 +3653,7 @@ static int hv_pci_remove(struct hv_device *hdev)

         hv_put_dom_num(hbus->bridge->domain_nr);

  -      free_page((unsigned long)hbus);
  +      kfree(hbus);
         return ret;
   }

  BTW, please apply this patch as well (Note: this patch is not strictly
required, as it only affects the error-handling path, which is rarely hit):
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=42c3d41832ef4fcf60aaa6f748de01ad99572adf

  [Test Case]

  Microsoft tested

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1973758/+subscriptions




[Kernel-packages] [Bug 1965618] Re: linux-azure: Focal 5.4 arm64 support

2022-05-16 Thread Dexuan Cui
The 5.4.0-1075-azure and newer kernels are broken in that the VM can
easily panic when the Mellanox VF NIC is removed and re-added due to Azure
host servicing events, or by the manual "unbind/bind" test below (here the
GUID can be different in different VMs):

for i in `seq 1 1000`; do
    cd /sys/bus/vmbus/drivers/hv_pci
    echo abdc2107-402e-4704-8c88-c2b850696c3c > unbind
    echo abdc2107-402e-4704-8c88-c2b850696c3c > bind
done

A sample panic call-trace is:
[  107.359954] kernel BUG at 
/build/linux-azure-5.4-4I3kFs/linux-azure-5.4-5.4.0/mm/slub.c:4020!
[  107.363858] invalid opcode:  [#1] SMP NOPTI
[  107.365870] CPU: 0 PID: 334 Comm: kworker/0:2 Not tainted 5.4.0-1077-azure 
#80~18.04.1-Ubuntu
[  107.369589] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine, BIOS 090008  12/07/2018
[  107.373811] Workqueue: events vmbus_onmessage_work
[  107.375909] RIP: 0010:kfree+0x1d2/0x240
…
[  107.413789] Call Trace:
[  107.414867]  kobject_uevent_env+0x1b5/0x7e0
[  107.416747]  kobject_uevent+0xb/0x10
[  107.418327]  device_release_driver_internal+0x191/0x1c0
[  107.420653]  device_release_driver+0x12/0x20
[  107.422523]  bus_remove_device+0xe1/0x150
[  107.424279]  device_del+0x167/0x380
[  107.425824]  device_unregister+0x1a/0x60
[  107.427536]  vmbus_device_unregister+0x27/0x50
[  107.429528]  vmbus_onoffer_rescind+0x1d0/0x1f0
[  107.431474]  vmbus_onmessage+0x2c/0x70
[  107.433104]  vmbus_onmessage_work+0x22/0x30
[  107.434919]  process_one_work+0x209/0x400
[  107.436661]  worker_thread+0x34/0x40

It turns out there is a bug in https://git.launchpad.net/~canonical-
kernel/ubuntu/+source/linux-azure/+git/bionic/commit/?id=16a3c750a78d8,
which misses the second hunk of the upstream patch
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=877b911a5ba0.

Please apply the below patch to fix the issue:

--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -3653,7 +3653,7 @@ static int hv_pci_remove(struct hv_device *hdev)

       hv_put_dom_num(hbus->bridge->domain_nr);

-      free_page((unsigned long)hbus);
+      kfree(hbus);
       return ret;
 }

BTW, please apply this patch as well (Note: this patch is not strictly
required, as it only affects the error-handling path, which is rarely hit):
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=42c3d41832ef4fcf60aaa6f748de01ad99572adf

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1965618

Title:
  linux-azure: Focal 5.4 arm64 support

Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-azure source package in Focal:
  Fix Released

Bug description:
  SRU Justification

  [Impact]

  Focal linux-azure does not support arm64

  [Fix]

  Backport/cherry-pick approximately 100 patches to support arm64 on
  hyperv

  [Where things could go wrong]

  A number of the patches reorganize hyperv CPU support. Therefore amd64
  could be affected.

  [Other Info]

  SF: #00310705

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1965618/+subscriptions




[Kernel-packages] [Bug 1928788] Re: linux-azure: Add Mana network driver

2021-06-14 Thread Dexuan Cui
I installed and tested the 5.8.0-1034-azure kernel and it worked as
expected.

I created a Ubuntu 20.04 VM and installed the “5.8.0-1034” kernel this way:
1. Enable the “proposed” kernel by running the below as “root” (refer to 
https://wiki.ubuntu.com/Testing/EnableProposed):

cat 
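
A typical way to enable the -proposed pocket on a focal VM, following the wiki
page above, looks roughly like the sketch below; the sources file name and the
final install command are illustrative assumptions, not the original steps:

cat <<'EOF' | sudo tee /etc/apt/sources.list.d/focal-proposed.list
deb http://archive.ubuntu.com/ubuntu focal-proposed main restricted universe multiverse
EOF
sudo apt-get update
sudo apt-get install -t focal-proposed linux-azure   # then reboot into the new kernel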

[Kernel-packages] [Bug 1928269] Re: netfilter: iptables-restore: setsockopt(3, SOL_IP, IPT_SO_SET_REPLACE, "security...", ...) return -EAGAIN

2021-05-12 Thread Dexuan Cui
I reported the issue to the mailing list:
https://lwn.net/ml/linux-kernel/MW2PR2101MB0892FC0F67BD25661CDCE149BF529%40MW2PR2101MB0892.namprd21.prod.outlook.com/

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1928269

Title:
  netfilter: iptables-restore: setsockopt(3, SOL_IP, IPT_SO_SET_REPLACE,
  "security...", ...) return -EAGAIN

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Hi,
  I'm debugging an iptables-restore failure, which happens about 5% of the
  time when I keep stopping and starting the Linux VM. The VM has only 1
  CPU, and kernel version is 4.15.0-1098-azure, but I suspect the issue may
  also exist in the mainline Linux kernel.

  When the failure happens, it's always caused by line 27 of the rule
  file:

1 # Generated by iptables-save v1.6.0 on Fri Apr 23 09:22:59 2021
2 *raw
3 :PREROUTING ACCEPT [0:0]
4 :OUTPUT ACCEPT [0:0]
5 -A PREROUTING ! -s 168.63.129.16/32 -p tcp -j NOTRACK
6 -A OUTPUT ! -d 168.63.129.16/32 -p tcp -j NOTRACK
7 COMMIT
8 # Completed on Fri Apr 23 09:22:59 2021
9 # Generated by iptables-save v1.6.0 on Fri Apr 23 09:22:59 2021
   10 *filter
   11 :INPUT ACCEPT [2407:79190058]
   12 :FORWARD ACCEPT [0:0]
   13 :OUTPUT ACCEPT [1648:2190051]
   14 -A OUTPUT -d 169.254.169.254/32 -m owner --uid-owner 33 -j DROP
   15 COMMIT
   16 # Completed on Fri Apr 23 09:22:59 2021
   17 # Generated by iptables-save v1.6.0 on Fri Apr 23 09:22:59 2021
   18 *security
   19 :INPUT ACCEPT [2345:79155398]
   20 :FORWARD ACCEPT [0:0]
   21 :OUTPUT ACCEPT [1504:2129015]
   22 -A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
   23 -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW 
-j DROP
   24 -A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
   25 -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW 
-j DROP
   26 -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW 
-j DROP
   27 COMMIT

  The related part of the strace log is:

1 socket(PF_INET, SOCK_RAW, IPPROTO_RAW) = 3
2 getsockopt(3, SOL_IP, IPT_SO_GET_INFO, 
"security\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., [84]) = 0
3 getsockopt(3, SOL_IP, IPT_SO_GET_ENTRIES, 
"security\0\357B\16Z\177\0\0Pg\355\0\0\0\0\0Pg\355\0\0\0\0\0"..., [880]) = 0
4 setsockopt(3, SOL_IP, IPT_SO_SET_REPLACE, 
"security\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 2200) = -1 
EAGAIN (Resource temporarily unavailable)
5 close(3)  = 0
6 write(2, "iptables-restore: line 27 failed"..., 33) = 33

  The -EAGAIN error comes from line 1240 of xt_replace_table():

do_ipt_set_ctl
  do_replace
__do_replace
  xt_replace_table

  1216 xt_replace_table(struct xt_table *table,
  1217   unsigned int num_counters,
  1218   struct xt_table_info *newinfo,
  1219   int *error)
  1220 {
  1221 struct xt_table_info *private;
  1222 unsigned int cpu;
  1223 int ret;
  1224
  1225 ret = xt_jumpstack_alloc(newinfo);
  1226 if (ret < 0) {
  1227 *error = ret;
  1228 return NULL;
  1229 }
  1230
  1231 /* Do the substitution. */
  1232 local_bh_disable();
  1233 private = table->private;
  1234
  1235 /* Check inside lock: is the old number correct? */
  1236 if (num_counters != private->number) {
  1237 pr_debug("num_counters != table->private->number 
(%u/%u)\n",
  1238  num_counters, private->number);
  1239 local_bh_enable();
  1240 *error = -EAGAIN;
  1241 return NULL;
  1242 }

  When the function returns -EAGAIN, the 'num_counters' is 5 while
  'private->number' is 6.

  If I re-run the iptables-restore program upon the failure, the program
  will succeed.

  I checked the function xt_replace_table() in the recent mainline kernel and it
  looks like the function is the same.

  It looks like there is a race condition between iptables-restore calling
  getsockopt() to get the number of table entries and iptables calling
  setsockopt() to replace the entries? At first it looks like some other
  program is concurrently calling getsockopt()/setsockopt() -- but that is
  not the case according to the messages I print via trace_printk() around
  do_replace() in do_ipt_set_ctl(): when the -EAGAIN error happens, there is
  no other program calling do_replace(); the table entry number was changed
  to 5 by another program ('iptables') about 1.3 milliseconds earlier, and
  then this program ('iptables-restore') calls setsockopt() and the kernel
  sees 'num_counters' being 5 while 'private->number' is 6 (how can this
  happen??); the next setsockopt() call for the same 'security' table
  happens in about 1 minute with both the numbers being 6.

[Kernel-packages] [Bug 1928269] [NEW] netfilter: iptables-restore: setsockopt(3, SOL_IP, IPT_SO_SET_REPLACE, "security...", ...) return -EAGAIN

2021-05-12 Thread Dexuan Cui
Public bug reported:

Hi,
I'm debugging an iptables-restore failure, which happens about 5% of the
time when I keep stopping and starting the Linux VM. The VM has only 1
CPU, and kernel version is 4.15.0-1098-azure, but I suspect the issue may
also exist in the mainline Linux kernel.

When the failure happens, it's always caused by line 27 of the rule
file:

  1 # Generated by iptables-save v1.6.0 on Fri Apr 23 09:22:59 2021
  2 *raw
  3 :PREROUTING ACCEPT [0:0]
  4 :OUTPUT ACCEPT [0:0]
  5 -A PREROUTING ! -s 168.63.129.16/32 -p tcp -j NOTRACK
  6 -A OUTPUT ! -d 168.63.129.16/32 -p tcp -j NOTRACK
  7 COMMIT
  8 # Completed on Fri Apr 23 09:22:59 2021
  9 # Generated by iptables-save v1.6.0 on Fri Apr 23 09:22:59 2021
 10 *filter
 11 :INPUT ACCEPT [2407:79190058]
 12 :FORWARD ACCEPT [0:0]
 13 :OUTPUT ACCEPT [1648:2190051]
 14 -A OUTPUT -d 169.254.169.254/32 -m owner --uid-owner 33 -j DROP
 15 COMMIT
 16 # Completed on Fri Apr 23 09:22:59 2021
 17 # Generated by iptables-save v1.6.0 on Fri Apr 23 09:22:59 2021
 18 *security
 19 :INPUT ACCEPT [2345:79155398]
 20 :FORWARD ACCEPT [0:0]
 21 :OUTPUT ACCEPT [1504:2129015]
 22 -A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
 23 -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j 
DROP
 24 -A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
 25 -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j 
DROP
 26 -A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j 
DROP
 27 COMMIT

The related part of the strace log is:

  1 socket(PF_INET, SOCK_RAW, IPPROTO_RAW) = 3
  2 getsockopt(3, SOL_IP, IPT_SO_GET_INFO, 
"security\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., [84]) = 0
  3 getsockopt(3, SOL_IP, IPT_SO_GET_ENTRIES, 
"security\0\357B\16Z\177\0\0Pg\355\0\0\0\0\0Pg\355\0\0\0\0\0"..., [880]) = 0
  4 setsockopt(3, SOL_IP, IPT_SO_SET_REPLACE, 
"security\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 2200) = -1 
EAGAIN (Resource temporarily unavailable)
  5 close(3)  = 0
  6 write(2, "iptables-restore: line 27 failed"..., 33) = 33

The -EAGAIN error comes from line 1240 of xt_replace_table():

  do_ipt_set_ctl
do_replace
  __do_replace
xt_replace_table

1216 xt_replace_table(struct xt_table *table,
1217   unsigned int num_counters,
1218   struct xt_table_info *newinfo,
1219   int *error)
1220 {
1221 struct xt_table_info *private;
1222 unsigned int cpu;
1223 int ret;
1224
1225 ret = xt_jumpstack_alloc(newinfo);
1226 if (ret < 0) {
1227 *error = ret;
1228 return NULL;
1229 }
1230
1231 /* Do the substitution. */
1232 local_bh_disable();
1233 private = table->private;
1234
1235 /* Check inside lock: is the old number correct? */
1236 if (num_counters != private->number) {
1237 pr_debug("num_counters != table->private->number 
(%u/%u)\n",
1238  num_counters, private->number);
1239 local_bh_enable();
1240 *error = -EAGAIN;
1241 return NULL;
1242 }

When the function returns -EAGAIN, the 'num_counters' is 5 while
'private->number' is 6.

If I re-run the iptables-restore program upon the failure, the program
will succeed.

I checked the function xt_replace_table() in the recent mainline kernel and it
looks like the function is the same.

It looks like there is a race condition between iptables-restore calling
getsockopt() to get the number of table entries and iptables calling
setsockopt() to replace the entries? At first it looks like some other
program is concurrently calling getsockopt()/setsockopt() -- but that is
not the case according to the messages I print via trace_printk() around
do_replace() in do_ipt_set_ctl(): when the -EAGAIN error happens, there is
no other program calling do_replace(); the table entry number was changed
to 5 by another program ('iptables') about 1.3 milliseconds earlier, and
then this program ('iptables-restore') calls setsockopt() and the kernel
sees 'num_counters' being 5 while 'private->number' is 6 (how can this
happen??); the next setsockopt() call for the same 'security' table
happens in about 1 minute with both the numbers being 6.

Can you please shed some light on the issue? Thanks!

BTW, iptables does have a retry mechanism for getsockopt():
2f93205b375e ("Retry ruleset dump when kernel returns EAGAIN.")
(https://git.netfilter.org/iptables/commit/libiptc?id=2f93205b375e&context=10&ignorews=0&dt=0)

But it looks like this is not enough? E.g. here getsockopt() returns 0, but
setsockopt() returns -EAGAIN.
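
For anyone hitting this in the meantime, a crude userspace workaround is to
simply retry iptables-restore a few times on failure, since a plain re-run
succeeds. A sketch (the rules path is a placeholder):

for i in 1 2 3 4 5; do
    iptables-restore < /etc/iptables/rules.v4 && break   # placeholder path
    echo "iptables-restore failed (attempt $i), retrying" >&2
    sleep 0.2
done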

Thanks,
Dexuan

** Affects: linux-azure (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1928269

[Kernel-packages] [Bug 1904632] Re: Ubuntu 18.04 Azure VM host kernel panic

2020-12-16 Thread Dexuan Cui
Sure, will do. But AFAICT, there is no ETA yet. Even if the fix was made
today, it would take quite some time (at least a few months?) to deploy
the fix to the whole Azure fleet. :-(

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1904632

Title:
  Ubuntu 18.04 Azure VM host kernel panic

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Running a container on a DV3 Standard_D8_v3 Azure host, as the
  container comes up, the Azure host VM kernel panics per the logs
  below.

  Isolated the issue to a process in the container which uses the
  virtual NICs available on the Azure host. The container also is
  running Ubuntu 18.04 based packages. The problem happens every single
  time the container is started, unless its NIC access process is not
  started.

  Has this sort of kernel panic on Azure been seen, and what is the root
  cause and remedy, please?

  Also the kernel logs on the Azure host show it vulnerable to the
  following CVE. There are other VMs and containers that can run on the
  Azure host without a kernel panic on it, but providing this info in
  case there is some tie-in to the panic.

  https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3646

  Kernel panic from the Azure Host console:

  
Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:11.537914Z INFO MonitorHandler ExtHandler Stopped tracking 
cgroup: Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux-1.13.33, path: 
/sys/fs/cgroup/memory/system.slice/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:23.291433Z INFO ExtHandler ExtHandler Checking for agent 
updates (family: Prod)
  2020-11-17T00:51:11.677191Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent 
WALinuxAgent-2.2.52 is running as the goal state agent [DEBUG HeartbeatCounter: 
7;HeartbeatId: 8A2DD5B7-02E5-46E2-9EDB-F8CCBA274479;DroppedPackets: 
0;UpdateGSErrors: 0;AutoUpdate: 1]
  [11218.537937] PANIC: double fault, error_code: 0x0
  [11218.541423] Kernel panic - not syncing: Machine halted.
  [11218.541423] CPU: 0 PID: 9281 Comm: vmxt Not tainted 4.15.18+test #1
  [11218.541423] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine, BIOS 090008  12/07/2018
  [11218.541423] Call Trace:
  [11218.541423]  <#DF>
  [11218.541423]  dump_stack+0x63/0x8b
  [11218.541423]  panic+0xe4/0x244
  [11218.541423]  df_debug+0x2d/0x30
  [11218.541423]  do_double_fault+0x9a/0x130
  [11218.541423]  double_fault+0x1e/0x30
  [11218.541423] RIP: 0010:0x1a80
  [11218.541423] RSP: 0018:2200 EFLAGS: 00010096
  [11218.541423] RAX: 0102 RBX: f7a40768 RCX: 
002f
  [11218.541423] RDX: f7ee9970 RSI: f7a40700 RDI: 
f7c3a000
  [11218.541423] RBP: fffd6430 R08:  R09: 

  [11218.541423] R10:  R11:  R12: 

  [11218.541423] R13:  R14:  R15: 

  [11218.541423]  
  [11218.541423] Kernel Offset: 0x2a40 from 0x8100 (relocation 
range: 0x8000-0xbfff)
  [11218.541423] ---[ end Kernel panic - not syncing: Machine halted.
  [11218.636804] [ cut here ]
  [11218.640802] sched: Unexpected reschedule of offline CPU#2!
  [11218.640802] WARNING: CPU: 0 PID: 9281 at arch/x86/kernel/smp.c:128 
native_smp_send_reschedule+0x3f/0x50
  [11218.640802] Modules linked in: xt_nat xt_u32 vxlan ip6_udp_tunnel 
udp_tunnel veth nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype 
br_netfilter xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 
iptable_nat ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter 
ebtables ip6table_filter ip6_tables iptable_filter aufs xt_owner 
iptable_security xt_conntrack overlay openvswitch nsh nf_conntrack_ipv6 
nf_nat_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_defrag_ipv6 nf_nat 
nf_conntrack nls_iso8859_1 joydev input_leds mac_hid kvm_intel hv_balloon kvm 
serio_raw irqbypass intel_rapl_perf sch_fq_codel ib_iser rdma_cm iw_cm ib_cm 
ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables 
autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov
  [11218.640802]  async_memcpy async_pq async_xor async_tx xor raid6_pq 
libcrc32c raid1 raid0 multipath linear hid_generic crct10dif_pclmul 
crc32_pclmul hid_hyperv ghash_clmulni_intel hv_utils hv_storvsc pcbc ptp 
hv_netvsc hid pps_core scsi_transport_fc hyperv_keyboard aesni_intel aes_x86_64 
crypto_simd hyperv_fb floppy glue_helper cryptd psmouse hv_vmbus i2c_piix4 
pata_acpi
  [11218.640802] CPU: 0 PID: 9281 Comm: vmxt Not tainted 4.15.18+test #1
  [11218.640802] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Ma

[Kernel-packages] [Bug 1904632] Re: Ubuntu 18.04 Azure VM host kernel panic

2020-12-16 Thread Dexuan Cui
VM exits are pretty frequent and normal. "VM exits occur in response to certain
instructions and events in VMX non-root operation" (see Chapter 27, "VM Exits", of
https://software.intel.com/content/www/us/en/develop/download/intel-64-and-ia-32-architectures-sdm-volume-3c-system-programming-guide-part-3.html).

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1904632

Title:
  Ubuntu 18.04 Azure VM host kernel panic

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Running a container on a DV3 Standard_D8_v3 Azure host, as the
  container comes up, the Azure host VM kernel panics per the logs
  below.

  Isolated the issue to a process in the container which uses the
  virtual NICs available on the Azure host. The container also is
  running Ubuntu 18.04 based packages. The problem happens every single
  time the container is started, unless its NIC access process is not
  started.

  Has this sort of kernel panic on Azure been seen, and what is the root
  cause and remedy, please?

  Also the kernel logs on the Azure host show it vulnerable to the
  following CVE. There are other VMs and containers that can run on the
  Azure host without a kernel panic on it, but providing this info in
  case there is some tie-in to the panic.

  https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3646

  Kernel panic from the Azure Host console:

  
Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:11.537914Z INFO MonitorHandler ExtHandler Stopped tracking 
cgroup: Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux-1.13.33, path: 
/sys/fs/cgroup/memory/system.slice/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:23.291433Z INFO ExtHandler ExtHandler Checking for agent 
updates (family: Prod)
  2020-11-17T00:51:11.677191Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent 
WALinuxAgent-2.2.52 is running as the goal state agent [DEBUG HeartbeatCounter: 
7;HeartbeatId: 8A2DD5B7-02E5-46E2-9EDB-F8CCBA274479;DroppedPackets: 
0;UpdateGSErrors: 0;AutoUpdate: 1]
  [11218.537937] PANIC: double fault, error_code: 0x0
  [11218.541423] Kernel panic - not syncing: Machine halted.
  [11218.541423] CPU: 0 PID: 9281 Comm: vmxt Not tainted 4.15.18+test #1
  [11218.541423] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine, BIOS 090008  12/07/2018
  [11218.541423] Call Trace:
  [11218.541423]  <#DF>
  [11218.541423]  dump_stack+0x63/0x8b
  [11218.541423]  panic+0xe4/0x244
  [11218.541423]  df_debug+0x2d/0x30
  [11218.541423]  do_double_fault+0x9a/0x130
  [11218.541423]  double_fault+0x1e/0x30
  [11218.541423] RIP: 0010:0x1a80
  [11218.541423] RSP: 0018:2200 EFLAGS: 00010096
  [11218.541423] RAX: 0102 RBX: f7a40768 RCX: 
002f
  [11218.541423] RDX: f7ee9970 RSI: f7a40700 RDI: 
f7c3a000
  [11218.541423] RBP: fffd6430 R08:  R09: 

  [11218.541423] R10:  R11:  R12: 

  [11218.541423] R13:  R14:  R15: 

  [11218.541423]  
  [11218.541423] Kernel Offset: 0x2a40 from 0x8100 (relocation 
range: 0x8000-0xbfff)
  [11218.541423] ---[ end Kernel panic - not syncing: Machine halted.
  [11218.636804] [ cut here ]
  [11218.640802] sched: Unexpected reschedule of offline CPU#2!
  [11218.640802] WARNING: CPU: 0 PID: 9281 at arch/x86/kernel/smp.c:128 
native_smp_send_reschedule+0x3f/0x50
  [11218.640802] Modules linked in: xt_nat xt_u32 vxlan ip6_udp_tunnel 
udp_tunnel veth nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype 
br_netfilter xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 
iptable_nat ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter 
ebtables ip6table_filter ip6_tables iptable_filter aufs xt_owner 
iptable_security xt_conntrack overlay openvswitch nsh nf_conntrack_ipv6 
nf_nat_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_defrag_ipv6 nf_nat 
nf_conntrack nls_iso8859_1 joydev input_leds mac_hid kvm_intel hv_balloon kvm 
serio_raw irqbypass intel_rapl_perf sch_fq_codel ib_iser rdma_cm iw_cm ib_cm 
ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables 
autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov
  [11218.640802]  async_memcpy async_pq async_xor async_tx xor raid6_pq 
libcrc32c raid1 raid0 multipath linear hid_generic crct10dif_pclmul 
crc32_pclmul hid_hyperv ghash_clmulni_intel hv_utils hv_storvsc pcbc ptp 
hv_netvsc hid pps_core scsi_transport_fc hyperv_keyboard aesni_intel aes_x86_64 
crypto_simd hyperv_fb floppy glue_helper cryptd psmouse hv_vmbus i2c_piix4 
pata_acpi
  [11218.640802] CPU: 0 PID: 

[Kernel-packages] [Bug 1904632] Re: Ubuntu 18.04 Azure VM host kernel panic

2020-12-16 Thread Dexuan Cui
VM Exit is a term in the Intel CPU's Virtualization support (VMX). It
means the execution of the guest CPU is interrupted and the execution
"jumps" to some function in the hypervisor; the hypervisor analyzes the
reason of the VM Exit, and handles the VM exit properly, and then the
execution "jumps" back to wherever the guest CPU was interrupted. Here
the issue is: when the Level-2 guest CPU's VM Exit happens, somehow the
hypervisor messes up the Level-1 guest's 32-bit related state (i.e. the
SYSENTER instruction related state), so later when the 32-bit program
starts to run, the Level-1 guest kernel crashes due to double-fault. The
investigation is still ongoing.
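
For anyone who wants to look at that state from inside the Level-1 guest, the
SYSENTER MSRs can be dumped with rdmsr from the msr-tools package (a sketch;
the MSR numbers are the architectural IA32_SYSENTER_CS/ESP/EIP):

sudo modprobe msr
sudo rdmsr -p 0 0x174   # IA32_SYSENTER_CS
sudo rdmsr -p 0 0x175   # IA32_SYSENTER_ESP
sudo rdmsr -p 0 0x176   # IA32_SYSENTER_EIP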

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1904632

Title:
  Ubuntu 18.04 Azure VM host kernel panic

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Running a container on a DV3 Standard_D8_v3 Azure host, as the
  container comes up, the Azure host VM kernel panics per the logs
  below.

  Isolated the issue to a process in the container which uses the
  virtual NICs available on the Azure host. The container also is
  running Ubuntu 18.04 based packages. The problem happens every single
  time the container is started, unless its NIC access process is not
  started.

  Has this sort of kernel panic on Azure been seen, and what is the root
  cause and remedy, please?

  Also the kernel logs on the Azure host show it vulnerable to the
  following CVE. There are other VMs and containers that can run on the
  Azure host without a kernel panic on it, but providing this info in
  case there is some tie-in to the panic.

  https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3646

  Kernel panic from the Azure Host console:

  
Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:11.537914Z INFO MonitorHandler ExtHandler Stopped tracking 
cgroup: Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux-1.13.33, path: 
/sys/fs/cgroup/memory/system.slice/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:23.291433Z INFO ExtHandler ExtHandler Checking for agent 
updates (family: Prod)
  2020-11-17T00:51:11.677191Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent 
WALinuxAgent-2.2.52 is running as the goal state agent [DEBUG HeartbeatCounter: 
7;HeartbeatId: 8A2DD5B7-02E5-46E2-9EDB-F8CCBA274479;DroppedPackets: 
0;UpdateGSErrors: 0;AutoUpdate: 1]
  [11218.537937] PANIC: double fault, error_code: 0x0
  [11218.541423] Kernel panic - not syncing: Machine halted.
  [11218.541423] CPU: 0 PID: 9281 Comm: vmxt Not tainted 4.15.18+test #1
  [11218.541423] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine, BIOS 090008  12/07/2018
  [11218.541423] Call Trace:
  [11218.541423]  <#DF>
  [11218.541423]  dump_stack+0x63/0x8b
  [11218.541423]  panic+0xe4/0x244
  [11218.541423]  df_debug+0x2d/0x30
  [11218.541423]  do_double_fault+0x9a/0x130
  [11218.541423]  double_fault+0x1e/0x30
  [11218.541423] RIP: 0010:0x1a80
  [11218.541423] RSP: 0018:2200 EFLAGS: 00010096
  [11218.541423] RAX: 0102 RBX: f7a40768 RCX: 
002f
  [11218.541423] RDX: f7ee9970 RSI: f7a40700 RDI: 
f7c3a000
  [11218.541423] RBP: fffd6430 R08:  R09: 

  [11218.541423] R10:  R11:  R12: 

  [11218.541423] R13:  R14:  R15: 

  [11218.541423]  
  [11218.541423] Kernel Offset: 0x2a40 from 0x8100 (relocation 
range: 0x8000-0xbfff)
  [11218.541423] ---[ end Kernel panic - not syncing: Machine halted.
  [11218.636804] [ cut here ]
  [11218.640802] sched: Unexpected reschedule of offline CPU#2!
  [11218.640802] WARNING: CPU: 0 PID: 9281 at arch/x86/kernel/smp.c:128 
native_smp_send_reschedule+0x3f/0x50
  [11218.640802] Modules linked in: xt_nat xt_u32 vxlan ip6_udp_tunnel 
udp_tunnel veth nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype 
br_netfilter xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 
iptable_nat ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter 
ebtables ip6table_filter ip6_tables iptable_filter aufs xt_owner 
iptable_security xt_conntrack overlay openvswitch nsh nf_conntrack_ipv6 
nf_nat_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_defrag_ipv6 nf_nat 
nf_conntrack nls_iso8859_1 joydev input_leds mac_hid kvm_intel hv_balloon kvm 
serio_raw irqbypass intel_rapl_perf sch_fq_codel ib_iser rdma_cm iw_cm ib_cm 
ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables 
autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov
  [11218.640802]  async_memcpy async_pq async_xo

[Kernel-packages] [Bug 1904632] Re: Ubuntu 18.04 Azure VM host kernel panic

2020-12-16 Thread Dexuan Cui
Hyper-V team just identified a bug where the Hyper-V hypervisor can
truncate the host SYSENTER_ESP/EIP to 16 bits on VMexit for some reason.
A further investigation is ongoing.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1904632

Title:
  Ubuntu 18.04 Azure VM host kernel panic

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Running a container on a DV3 Standard_D8_v3 Azure host, as the
  container comes up, the Azure host VM kernel panics per the logs
  below.

  Isolated the issue to a process in the container which uses the
  virtual NICs available on the Azure host. The container also is
  running Ubuntu 18.04 based packages. The problem happens every single
  time the container is started, unless its NIC access process is not
  started.

  Has this sort of kernel panic on Azure been seen, and what is the root
  cause and remedy, please?

  Also the kernel logs on the Azure host show it vulnerable to the
  following CVE. There are other VMs and containers that can run on the
  Azure host without a kernel panic on it, but providing this info in
  case there is some tie-in to the panic.

  https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3646

  Kernel panic from the Azure Host console:

  
Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:11.537914Z INFO MonitorHandler ExtHandler Stopped tracking 
cgroup: Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux-1.13.33, path: 
/sys/fs/cgroup/memory/system.slice/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux_1.13.33_e857c609-bc35-4b66-9a8b-e86fd8707e82.scope
  2020-11-17T00:50:23.291433Z INFO ExtHandler ExtHandler Checking for agent 
updates (family: Prod)
  2020-11-17T00:51:11.677191Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent 
WALinuxAgent-2.2.52 is running as the goal state agent [DEBUG HeartbeatCounter: 
7;HeartbeatId: 8A2DD5B7-02E5-46E2-9EDB-F8CCBA274479;DroppedPackets: 
0;UpdateGSErrors: 0;AutoUpdate: 1]
  [11218.537937] PANIC: double fault, error_code: 0x0
  [11218.541423] Kernel panic - not syncing: Machine halted.
  [11218.541423] CPU: 0 PID: 9281 Comm: vmxt Not tainted 4.15.18+test #1
  [11218.541423] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine, BIOS 090008  12/07/2018
  [11218.541423] Call Trace:
  [11218.541423]  <#DF>
  [11218.541423]  dump_stack+0x63/0x8b
  [11218.541423]  panic+0xe4/0x244
  [11218.541423]  df_debug+0x2d/0x30
  [11218.541423]  do_double_fault+0x9a/0x130
  [11218.541423]  double_fault+0x1e/0x30
  [11218.541423] RIP: 0010:0x1a80
  [11218.541423] RSP: 0018:2200 EFLAGS: 00010096
  [11218.541423] RAX: 0102 RBX: f7a40768 RCX: 
002f
  [11218.541423] RDX: f7ee9970 RSI: f7a40700 RDI: 
f7c3a000
  [11218.541423] RBP: fffd6430 R08:  R09: 

  [11218.541423] R10:  R11:  R12: 

  [11218.541423] R13:  R14:  R15: 

  [11218.541423]  
  [11218.541423] Kernel Offset: 0x2a40 from 0x8100 (relocation 
range: 0x8000-0xbfff)
  [11218.541423] ---[ end Kernel panic - not syncing: Machine halted.
  [11218.636804] [ cut here ]
  [11218.640802] sched: Unexpected reschedule of offline CPU#2!
  [11218.640802] WARNING: CPU: 0 PID: 9281 at arch/x86/kernel/smp.c:128 
native_smp_send_reschedule+0x3f/0x50
  [11218.640802] Modules linked in: xt_nat xt_u32 vxlan ip6_udp_tunnel 
udp_tunnel veth nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype 
br_netfilter xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 
iptable_nat ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter 
ebtables ip6table_filter ip6_tables iptable_filter aufs xt_owner 
iptable_security xt_conntrack overlay openvswitch nsh nf_conntrack_ipv6 
nf_nat_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_defrag_ipv6 nf_nat 
nf_conntrack nls_iso8859_1 joydev input_leds mac_hid kvm_intel hv_balloon kvm 
serio_raw irqbypass intel_rapl_perf sch_fq_codel ib_iser rdma_cm iw_cm ib_cm 
ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables 
autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov
  [11218.640802]  async_memcpy async_pq async_xor async_tx xor raid6_pq 
libcrc32c raid1 raid0 multipath linear hid_generic crct10dif_pclmul 
crc32_pclmul hid_hyperv ghash_clmulni_intel hv_utils hv_storvsc pcbc ptp 
hv_netvsc hid pps_core scsi_transport_fc hyperv_keyboard aesni_intel aes_x86_64 
crypto_simd hyperv_fb floppy glue_helper cryptd psmouse hv_vmbus i2c_piix4 
pata_acpi
  [11218.640802] CPU: 0 PID: 9281 Comm: vmxt Not tainted 4.15.18+test #1
  [11218.640802] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine,

[Kernel-packages] [Bug 1902531] Re: [linux-azure] IP forwarding issue in netvsc

2020-12-11 Thread Dexuan Cui
Thanks, Marcelo! I tested all the 3 kernels and they worked as we
expected.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1902531

Title:
  [linux-azure] IP forwarding issue in netvsc

Status in linux-azure package in Ubuntu:
  New
Status in linux-azure-4.15 package in Ubuntu:
  New
Status in linux-azure source package in Bionic:
  Invalid
Status in linux-azure-4.15 source package in Bionic:
  In Progress
Status in linux-azure source package in Focal:
  In Progress
Status in linux-azure-4.15 source package in Focal:
  Invalid

Bug description:
  We identified an issue with the Linux netvsc driver when used in IP
  forwarding mode.  The problem is that the RSS hash value is not
  propagated to the outgoing packet, and so such packets go out on
  channel 0.  This produces an imbalance across outgoing channels, and a
  possible overload on the single host-side CPU that is processing
  channel 0.   The problem does not occur when Accelerated Networking is
  used because the packets go out through the Mellanox driver.  Because
  it is tied to IP forwarding, the problem is presumably most likely to
  be visible in a virtual appliance device that is doing network load
  balancing or other kinds of packet filtering and redirection.

  We would like to request fixes to this issue in 16.04, 18.04 and
  20.04.

  Two fixes are already in the upstream v5.5+, so they’re already in
  5.8.0-1011.11.

  For 5.4.0-1031.32, the 2 fixes can apply cleanly:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 5.0.0-1036.38, we need 1 more patch applied first, so the list is:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b441f79532ec13dc82d05c55badc4da1f62a6141
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 4.15.0-1098.109~16.04.1, the 2 patches cannot apply cleanly, so Dexuan backported them here:
  https://github.com/dcui/linux/commit/4ed58762a56cccfd006e633fac63311176508795
  https://github.com/dcui/linux/commit/40ad7849a6365a5a485f05453e10e3541025e25a
  (The 2 patches are on the branch 
https://github.com/dcui/linux/commits/decui/ubuntu_16.04/linux-azure/Ubuntu-azure-4.15.0-1098.109_16.04.1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1902531/+subscriptions



[Kernel-packages] [Bug 1902531] Re: [linux-azure] IP forwarding issue in netvsc

2020-11-03 Thread Dexuan Cui
This is the network config. Let me know if you need more info.

** Attachment added: "network-config.png"
   
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1902531/+attachment/5430820/+files/network-config.png

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1902531

Title:
  [linux-azure] IP forwarding issue in netvsc

Status in linux-azure package in Ubuntu:
  New

Bug description:
  We identified an issue with the Linux netvsc driver when used in IP
  forwarding mode.  The problem is that the RSS hash value is not
  propagated to the outgoing packet, and so such packets go out on
  channel 0.  This produces an imbalance across outgoing channels, and a
  possible overload on the single host-side CPU that is processing
  channel 0.   The problem does not occur when Accelerated Networking is
  used because the packets go out through the Mellanox driver.  Because
  it is tied to IP forwarding, the problem is presumably most likely to
  be visible in a virtual appliance device that is doing network load
  balancing or other kinds of packet filtering and redirection.

  We would like to request fixes to this issue in 16.04, 18.04 and
  20.04.

  Two fixes are already in the upstream v5.5+, so they’re already in
  5.8.0-1011.11.

  For 5.4.0-1031.32, the 2 fixes can apply cleanly:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 5.0.0-1036.38, we need 1 more patch applied first, so the list is:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b441f79532ec13dc82d05c55badc4da1f62a6141
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 4.15.0-1098.109~16.04.1, the 2 patches cannot apply cleanly, so Dexuan backported them here:
  https://github.com/dcui/linux/commit/4ed58762a56cccfd006e633fac63311176508795
  https://github.com/dcui/linux/commit/40ad7849a6365a5a485f05453e10e3541025e25a
  (The 2 patches are on the branch 
https://github.com/dcui/linux/commits/decui/ubuntu_16.04/linux-azure/Ubuntu-azure-4.15.0-1098.109_16.04.1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1902531/+subscriptions



[Kernel-packages] [Bug 1902531] Re: [linux-azure] IP forwarding issue in netvsc

2020-11-03 Thread Dexuan Cui
To use Azure UDR, I referred to this page:
https://campus.barracuda.com/product/cloudgenfirewall/doc/72516173/how-
to-configure-azure-route-tables-udr-using-azure-portal-and-arm/

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1902531

Title:
  [linux-azure] IP forwarding issue in netvsc

Status in linux-azure package in Ubuntu:
  New

Bug description:
  We identified an issue with the Linux netvsc driver when used in IP
  forwarding mode.  The problem is that the RSS hash value is not
  propagated to the outgoing packet, and so such packets go out on
  channel 0.  This produces an imbalance across outgoing channels, and a
  possible overload on the single host-side CPU that is processing
  channel 0.   The problem does not occur when Accelerated Networking is
  used because the packets go out through the Mellanox driver.  Because
  it is tied to IP forwarding, the problem is presumably most likely to
  be visible in a virtual appliance device that is doing network load
  balancing or other kinds of packet filtering and redirection.

  We would like to request fixes to this issue in 16.04, 18.04 and
  20.04.

  Two fixes are already in the upstream v5.5+, so they’re already in
  5.8.0-1011.11.

  For 5.4.0-1031.32, the 2 fixes can apply cleanly:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 5.0.0-1036.38, we need 1 more patch applied first, so the list is:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b441f79532ec13dc82d05c55badc4da1f62a6141
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 4.15.0-1098.109~16.04.1, the 2 patches cannot apply cleanly, so Dexuan backported them here:
  https://github.com/dcui/linux/commit/4ed58762a56cccfd006e633fac63311176508795
  https://github.com/dcui/linux/commit/40ad7849a6365a5a485f05453e10e3541025e25a
  (The 2 patches are on the branch 
https://github.com/dcui/linux/commits/decui/ubuntu_16.04/linux-azure/Ubuntu-azure-4.15.0-1098.109_16.04.1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1902531/+subscriptions



[Kernel-packages] [Bug 1902531] Re: [linux-azure] IP forwarding issue in netvsc

2020-11-03 Thread Dexuan Cui
Here is how I reproduce the bug:

Create 3 Ubuntu 16.04 VMs (VM-1, VM gateway, VM-2) on Azure in the same
Resource Group. The kernel should be the linux-azure kernel
4.15.0-1098.109~16.04.1 (or newer). I use a Gen1 VM but Gen2 should also
have the same issue; I use the "East US 2" region, but the issue should
reproduce in any region.

Note: none of the VMs use Accelerated Networking, i.e. all of the 3 VMs
use the software NIC "NetVSC".

In my setup, VM-1 and VM-2 are "Standard D8s v3 (8 vcpus, 32 GiB
memory)", and VM-gateway is "Standard DS3 v2 (4 vcpus, 14 GiB memory)".
I happen to name the gateway VM "decui-dpdk", but here actually DPDK is
not used at all (I do intend to use this setup for DPDK in future).


The gateway VM has 3 NICs:
The main NIC (10.0.0.4) is not used in ip-forwarding.
NIC-1's IP is 192.168.80.5
NIC-2's IP is 192.168.81.5.
The gateway VM receives packets from VM-1(192.168.80.4) and forwards 
the packets to VM-2 (192.168.81.4).
No firewall rule is used.

The client VM (VM-1, 192.168.80.4) has 1 NIC. It's running iperf2 client. 
The server VM (VM-2, 192.168.81.4) has 1 NIC. It's running iperf2 server: 
"nohup iperf -s &"

The client VM is sending traffic, through the gateway VM (192.168.80.5, 
192.168.81.5), to the server VM.
Note: all 3 subnets here are in the same VNET (Virtual Network), and 2 Azure UDR 
(User Defined Routing) rules must be used to force the traffic to go through 
the gateway VM. IP forwarding on the gateway VM's NIC-1 and NIC-2 must be 
enabled from the Azure portal (the setting can only be changed when the VM is 
"stopped"), and IP forwarding must also be enabled inside the gateway VM (i.e. echo 1 > 
/proc/sys/net/ipv4/ip_forward). I'll attach some screenshots showing the 
network topology and the configuration.
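
For reference, the in-guest part of the gateway setup amounts to something
like this (a sketch; the persistent-config path is the usual Ubuntu one, and
the Azure-side UDR and NIC IP-forwarding settings still have to be done in
the portal as described above):

# Sketch: enable IP forwarding on the gateway VM.
echo 1 > /proc/sys/net/ipv4/ip_forward                # current boot only
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf    # keep it across reboots
sysctl -p
# Sanity check: NIC-1/NIC-2 should have the expected addresses.
ip addr | grep -E '192\.168\.8[01]\.5'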


iperf2 uses 512 TCP connections, and I limit the bandwidth used by iperf to 
<=70% of the per-VM bandwidth limit (Note: if the VM uses >70% of the limit, 
even with the 2 patches, the ping latency between VM-1 and VM-2 can still 
easily go very high, e.g. >200ms -- we'll try to investigate that further).


It looks like the per-VM bandwidth limit of the gateway VM (DS3_v2) is 2.6 Gbps, so 
70% of it is about 1.8 Gbps.

In the client VM, run something like:
iperf -c 192.168.81.4 -b 3.5m -t 120 -P512
(-b sets the per-TCP-connection bandwidth limit; -P512 means 512 connections, so the 
total throughput should be around 3.5*512 = 1792 Mbps; "-t 120" means the test 
lasts for 2 minutes. We can abort the test at any time with Ctrl+C.)

In the "Server VM, run: 
nohup iperf -s &
ping 192.168.80.4 (we can terminate the program by Ctrl+C), and observe the 
latency.

In the gateway VM, run "nload" to check the current throughput (if the
current device is not the NIC we want to check, press the Right Arrow and
Left Arrow keys), and run "top" to check the CPU utilization (with 512
connections, the utilization should still be low, e.g. <25%).

When the iperf2 test is running, the ping latency between VM-1 and VM-2
can easily exceed 100ms or even 300ms, but with the 2 patches applied,
the latency should typically be <20ms.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1902531

Title:
  [linux-azure] IP forwarding issue in netvsc

Status in linux-azure package in Ubuntu:
  New

Bug description:
  We identified an issue with the Linux netvsc driver when used in IP
  forwarding mode.  The problem is that the RSS hash value is not
  propagated to the outgoing packet, and so such packets go out on
  channel 0.  This produces an imbalance across outgoing channels, and a
  possible overload on the single host-side CPU that is processing
  channel 0.   The problem does not occur when Accelerated Networking is
  used because the packets go out through the Mellanox driver.  Because
  it is tied to IP forwarding, the problem is presumably most likely to
  be visible in a virtual appliance device that is doing network load
  balancing or other kinds of packet filtering and redirection.
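
  One way to observe the imbalance described above is to watch netvsc's
  per-queue counters on the forwarding VM while traffic is flowing; a rough
  sketch (the interface name and the exact counter names are assumptions and
  may differ between kernel versions):

  # Sketch: check how forwarded traffic is spread across the send queues.
  ethtool -S eth1 | grep -i 'tx_queue.*packets'
  # With the bug, almost all forwarded packets land on queue/channel 0;
  # with the fixes, the counts should grow across several queues.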

  We would like to request fixes to this issue in 16.04, 18.04 and
  20.04.

  Two fixes are already in the upstream v5.5+, so they’re already in
  5.8.0-1011.11.

  For 5.4.0-1031.32, the 2 fixes can apply cleanly:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 5.0.0-1036.38, we need 1 more patch applied first, so the list is:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b441f79532ec13dc82d05c55badc4da1f62a6141
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

[Kernel-packages] [Bug 1902531] Re: [linux-azure] IP forwarding issue in netvsc

2020-11-03 Thread Dexuan Cui
I'll provide the instructions to reproduce the bug on Azure.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1902531

Title:
  [linux-azure] IP forwarding issue in netvsc

Status in linux-azure package in Ubuntu:
  New

Bug description:
  We identified an issue with the Linux netvsc driver when used in IP
  forwarding mode.  The problem is that the RSS hash value is not
  propagated to the outgoing packet, and so such packets go out on
  channel 0.  This produces an imbalance across outgoing channels, and a
  possible overload on the single host-side CPU that is processing
  channel 0.   The problem does not occur when Accelerated Networking is
  used because the packets go out through the Mellanox driver.  Because
  it is tied to IP forwarding, the problem is presumably most likely to
  be visible in a virtual appliance device that is doing network load
  balancing or other kinds of packet filtering and redirection.

  We would like to request fixes to this issue in 16.04, 18.04 and
  20.04.

  Two fixes are already in the upstream v5.5+, so they’re already in
  5.8.0-1011.11.

  For 5.4.0-1031.32, the 2 fixes can apply cleanly:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 5.0.0-1036.38, we need 1 more patch applied first, so the list is:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b441f79532ec13dc82d05c55badc4da1f62a6141
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 4.15.0-1098.109~16.04.1, the 2 patches cannot be applied cleanly, so Dexuan 
backported them here:
  https://github.com/dcui/linux/commit/4ed58762a56cccfd006e633fac63311176508795
  https://github.com/dcui/linux/commit/40ad7849a6365a5a485f05453e10e3541025e25a
  (The 2 patches are on the branch 
https://github.com/dcui/linux/commits/decui/ubuntu_16.04/linux-azure/Ubuntu-azure-4.15.0-1098.109_16.04.1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1902531/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1902531] Re: [linux-azure] IP forwarding issue in netvsc

2020-11-03 Thread Dexuan Cui
Since the 5.0 linux-azure kernel is not maintained anymore, IMO we don't
have to fix this bug for it.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1902531

Title:
  [linux-azure] IP forwarding issue in netvsc

Status in linux-azure package in Ubuntu:
  New

Bug description:
  We identified an issue with the Linux netvsc driver when used in IP
  forwarding mode.  The problem is that the RSS hash value is not
  propagated to the outgoing packet, and so such packets go out on
  channel 0.  This produces an imbalance across outgoing channels, and a
  possible overload on the single host-side CPU that is processing
  channel 0.   The problem does not occur when Accelerated Networking is
  used because the packets go out through the Mellanox driver.  Because
  it is tied to IP forwarding, the problem is presumably most likely to
  be visible in a virtual appliance device that is doing network load
  balancing or other kinds of packet filtering and redirection.

  We would like to request fixes to this issue in 16.04, 18.04 and
  20.04.

  Two fixes are already in the upstream v5.5+, so they’re already in
  5.8.0-1011.11.

  For 5.4.0-1031.32, the 2 fixes can apply cleanly:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 5.0.0-1036.38, we need 1 more patch applied first, so the list is:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b441f79532ec13dc82d05c55badc4da1f62a6141
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1fac7ca4e63bf935780cc632ccb6ba8de5f22321
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f3aeb1ba05d41320e6cf9a60f698d9c4e44348e

  For 4.15.0-1098.109~16.04.1, the 2 patches cannot be applied cleanly, so Dexuan 
backported them here:
  https://github.com/dcui/linux/commit/4ed58762a56cccfd006e633fac63311176508795
  https://github.com/dcui/linux/commit/40ad7849a6365a5a485f05453e10e3541025e25a
  (The 2 patches are on the branch 
https://github.com/dcui/linux/commits/decui/ubuntu_16.04/linux-azure/Ubuntu-azure-4.15.0-1098.109_16.04.1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1902531/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1894895] Re: [linux-azure][hibernation] ]VM hangs after hibernation/resume if the VM has SRIOV NIC and has been deallocated

2020-10-08 Thread Dexuan Cui
The fix is in the mainline kernel now:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=19873eec7e13fda140a0ebc75d6664e57c00bfb1

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1894895

Title:
  [linux-azure][hibernation] ]VM hangs after hibernation/resume if the
  VM has SRIOV NIC and has been deallocated

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Description of problem:
  On Azure, if the VM is Stopped(deallocated) and later Started, the VF NIC's 
VMBus Instance GUID may change, and as a result hibernation/resume can hang 
forever.

  This happens to the latest stable release of the linux-azure
  5.4.0-1023.23 kernel and the latest mainline linux kernel.

  How reproducible:
  100%

  Steps to Reproduce:
  1. Start a VM in Azure that supports Accelerated Networking, and enable 
hibernation properly (please refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 )

  2. Do hibernation from serial console
  # systemctl hibernate

  4. After the VM state changes to "Stopped", click "Stop" button from
  Azure portal to change the VM state to Stopped(deallocated)

  5. Wait for some time (e.g. 10 minutes? 1 hour?), and click the
  "Start" button to start the VM, and then check the boot-up process
  from the serial console.

  Actual results:
  Can not boot up. VM hangs after resume.

  Starting Resume from hibernation us…6c7-2c0c-491e-adcf-b625d69faf76...
  [   19.822747] PM: resume from hibernation
  [   19.836693] Freezing user space processes ... (elapsed 0.003 seconds) done.
  [   19.846968] OOM killer disabled.
  [   19.850236] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) 
done.
  [   20.542934] PM: Using 1 thread(s) for decompression
  [   20.548250] PM: Loading and decompressing image data (559580 pages)...
  [   22.844964] PM: Image loading progress:   0%
  [   28.131327] PM: Image loading progress:  10%
  [   32.346480] PM: Image loading progress:  20%
  [   37.453971] PM: Image loading progress:  30%
  [   40.834525] PM: Image loading progress:  40%
  [   42.980629] PM: Image loading progress:  50%
  [   44.342959] PM: Image loading progress:  60%
  [   45.506197] PM: Image loading progress:  70%
  [   46.800445] PM: Image loading progress:  80%
  [   48.010185] PM: Image loading progress:  90%
  [   49.045671] PM: Image loading done
  [   49.050419] PM: Read 2238320 kbytes in 28.48 seconds (78.59 MB/s)
  [   49.074198] printk: Suspending console(s) (use no_console_suspend to debug)

  (The VM hangs here forever)

  BUG FIX:
  A workaround patch is available and is being reviewed: 
https://lkml.org/lkml/2020/9/4/1270

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1894895/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1894893] Re: [linux-azure][hibernation] GPU device no longer working after resume from hibernation in NV6 VM size

2020-10-08 Thread Dexuan Cui
The fix is in the PCI tree now:

"PCI: hv: Fix hibernation in case interrupts are not re-create" (
https://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/commit/?h=pci/hv&id=915cff7f38c5e4d47f187f8049245afc2cb3e503
 )

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1894893

Title:
  [linux-azure][hibernation] GPU device no longer working after resume
  from hibernation in NV6 VM size

Status in linux-azure package in Ubuntu:
  New

Bug description:
  There are failed logs after resume from hibernation in NV6 (GPU passthrough 
size) VM in Azure:
  [ 1432.153730] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5
  [ 1432.167910] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5

  This happens to the latest stable release of the linux-azure
  5.4.0-1023.23 kernel and the latest mainline linux kernel.

  How reproducible: 
  100%

  Steps to Reproduce:
  1. Start a Standard_NV6 VM in Azure and enable hibernation properly (please 
refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 )

  E.g. here I create a Generation-1 Ubuntu 20.04 Standard NV6_Promo (6
  vcpus, 56 GiB memory) VM in East US 2.

  2. Make sure the in-kernel open-source nouveau driver is loaded, or
  blacklist the nouveau driver and install the official Nvidia GPU
  driver (please follow https://docs.microsoft.com/en-us/azure/virtual-
  machines/linux/n-series-driver-setup : "Install GRID drivers on NV or
  NVv3-series VMs" -- the most important step to run the "./NVIDIA-
  Linux-x86_64-grid.run".)

  3. Run hibernation from serial console
  # systemctl hibernate

  4. After hibernation finishes, start VM and check dmesg
  # dmesg|grep fail

  Actual results:
  [ 1432.153730] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5
  [ 1432.167910] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5

  And /proc/interrupts shows that the GPU interrupts are no longer
  happening.

  Expected results:
  No failed logs, and the GPU interrupt should still happen after hibernation.

  
  BUG FIX:
  I made a fix here: https://lkml.org/lkml/2020/9/4/1268.

  Without the patch, we see the error "hv_pci
  47505500-0001--3130-444531334632: hv_irq_unmask() failed: 0x5"
  during hibernation when the VM has the Nvidia GPU driver loaded, and
  after hibernation the GPU driver can no longer receive any MSI/MSI-X
  interrupts when we check /proc/interrupts.

  With the patch, we should no longer see the error, and the GPU driver
  should still receive interrupts after hibernation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1894893/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1894896] Re: [linux-azure][hibernation] Mellanox CX4 NIC's TX/RX packets stop increasing after hibernation/resume

2020-09-10 Thread Dexuan Cui
We also need the second and the third patch:

https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=de214e52de1bba5392b5b7054924a08dbd57c2f6

https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=da26658c3d7005aa67a706dceff7b2807b59e123

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1894896

Title:
  [linux-azure][hibernation] Mellanox CX4 NIC's TX/RX packets stop
  increasing after hibernation/resume

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Description of problem:
  In a VM with a CX4 VF NIC on Azure, after hibernation/resume, the TX/RX packet 
counters stop increasing.
  This issue doesn't exist in a VM with a CX3 VF NIC.

  This happens to the latest stable release of the linux-azure
  5.4.0-1023.23 kernel and the latest mainline linux kernel.

  How reproducible:
  100%

  Steps to Reproduce:
  1. Start a VM in Azure that supports Accelerated Networking, and enable 
hibernation properly (please refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 
). Please make sure the VF NIC is CX-4 since the issue doesn't happen to CX-3.

  2. Do hibernation from serial console
  # systemctl hibernate

  3. After the VM resumes back, check the MSI interrupt counters in
  /proc/interrupts for the CX-4 NIC, and also check “ifconfig” (e.g.
  “ifconfig enP2642s2”) for the RX/TX counters. These counters stop
  increasing while they should.

  
  BUG FIX:
  The fix is in the net.git tree now: 
  
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=19162fd4063a3211843b997a454b505edb81d5ce

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1894896/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1894896] [NEW] [linux-azure][hibernation] Mellanox CX4 NIC's TX/RX packets stop increasing after hibernation/resume

2020-09-08 Thread Dexuan Cui
Public bug reported:

Description of problem:
In a VM with a CX4 VF NIC on Azure, after hibernation/resume, the TX/RX packet 
counters stop increasing.
This issue doesn't exist in a VM with a CX3 VF NIC.

This happens to the latest stable release of the linux-azure
5.4.0-1023.23 kernel and the latest mainline linux kernel.

How reproducible:
100%

Steps to Reproduce:
1. Start a VM in Azure that supports Accelerated Networking, and enable 
hibernation properly (please refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 
). Please make sure the VF NIC is CX-4 since the issue doesn't happen to CX-3.

2. Do hibernation from serial console
# systemctl hibernate

3. After the VM resumes back, check the MSI interrupt counters in
/proc/interrupts for the CX-4 NIC, and also check “ifconfig” (e.g.
“ifconfig enP2642s2”) for the RX/TX counters. These counters stop
increasing while they should.
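
A quick way to verify step 3 is to snapshot the counters around the
hibernate/resume cycle; a sketch (the interface name enP2642s2 is just the
example used above):

# Sketch: compare the VF NIC's packet counters before and after resume.
IF=enP2642s2
cat /sys/class/net/$IF/statistics/{rx_packets,tx_packets} > /tmp/before
systemctl hibernate
# ... start the VM again, generate some traffic, then:
cat /sys/class/net/$IF/statistics/{rx_packets,tx_packets} > /tmp/after
diff /tmp/before /tmp/after    # with the bug the counters barely change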


BUG FIX:
The fix is in the net.git tree now: 
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=19162fd4063a3211843b997a454b505edb81d5ce

** Affects: linux-azure (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1894896

Title:
  [linux-azure][hibernation] Mellanox CX4 NIC's TX/RX packets stop
  increasing after hibernation/resume

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Description of problem:
  In a VM with a CX4 VF NIC on Azure, after hibernation/resume, the TX/RX packet 
counters stop increasing.
  This issue doesn't exist in a VM with a CX3 VF NIC.

  This happens to the latest stable release of the linux-azure
  5.4.0-1023.23 kernel and the latest mainline linux kernel.

  How reproducible:
  100%

  Steps to Reproduce:
  1. Start a VM in Azure that supports Accelerated Networking, and enable 
hibernation properly (please refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 
). Please make sure the VF NIC is CX-4 since the issue doesn't happen to CX-3.

  2. Do hibernation from serial console
  # systemctl hibernate

  3. After the VM resumes back, check the MSI interrupt counters in
  /proc/interrupts for the CX-4 NIC, and also check “ifconfig” (e.g.
  “ifconfig enP2642s2”) for the RX/TX counters. These counters stop
  increasing while they should.

  
  BUG FIX:
  The fix is in the net.git tree now: 
  
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=19162fd4063a3211843b997a454b505edb81d5ce

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1894896/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1894895] [NEW] [linux-azure][hibernation] ]VM hangs after hibernation/resume if the VM has SRIOV NIC and has been deallocated

2020-09-08 Thread Dexuan Cui
Public bug reported:

Description of problem:
On Azure, if the VM is Stopped(deallocated) and later Started, the VF NIC's 
VMBus Instance GUID may change, and as a result hibernation/resume can hang 
forever.
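
The GUID change can be observed from inside the guest; a sketch (the sysfs
layout and attribute names are assumptions based on recent kernels, where
each VMBus device appears under its instance GUID):

# Sketch: record the VMBus device instance GUIDs and their class IDs.
for d in /sys/bus/vmbus/devices/*; do
    echo "$(basename $d)  class_id=$(cat $d/class_id 2>/dev/null)"
done > /tmp/vmbus_before
# ... Stop(deallocate) and Start the VM (without hibernating), run the same
# loop into /tmp/vmbus_after, and diff: the VF NIC's instance GUID may differ.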

This happens to the latest stable release of the linux-azure
5.4.0-1023.23 kernel and the latest mainline linux kernel.

How reproducible:
100%

Steps to Reproduce:
1. Start a VM in Azure that supports Accelerated Networking, and enable 
hibernation properly (please refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 )

2. Do hibernation from serial console
# systemctl hibernate

4. After the VM state changes to "Stopped", click "Stop" button from
Azure portal to change the VM state to Stopped(deallocated)

5. Wait for some time (e.g. 10 minutes? 1 hour?), and click the "Start"
button to start the VM, and then check the boot-up process from the
serial console.

Actual results:
Can not boot up. VM hangs after resume.

Starting Resume from hibernation us…6c7-2c0c-491e-adcf-b625d69faf76...
[   19.822747] PM: resume from hibernation
[   19.836693] Freezing user space processes ... (elapsed 0.003 seconds) done.
[   19.846968] OOM killer disabled.
[   19.850236] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) 
done.
[   20.542934] PM: Using 1 thread(s) for decompression
[   20.548250] PM: Loading and decompressing image data (559580 pages)...
[   22.844964] PM: Image loading progress:   0%
[   28.131327] PM: Image loading progress:  10%
[   32.346480] PM: Image loading progress:  20%
[   37.453971] PM: Image loading progress:  30%
[   40.834525] PM: Image loading progress:  40%
[   42.980629] PM: Image loading progress:  50%
[   44.342959] PM: Image loading progress:  60%
[   45.506197] PM: Image loading progress:  70%
[   46.800445] PM: Image loading progress:  80%
[   48.010185] PM: Image loading progress:  90%
[   49.045671] PM: Image loading done
[   49.050419] PM: Read 2238320 kbytes in 28.48 seconds (78.59 MB/s)
[   49.074198] printk: Suspending console(s) (use no_console_suspend to debug)

(The VM hangs here forever)

BUG FIX:
A workaround patch is available and is being reviewed: 
https://lkml.org/lkml/2020/9/4/1270

** Affects: linux-azure (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1894895

Title:
  [linux-azure][hibernation] ]VM hangs after hibernation/resume if the
  VM has SRIOV NIC and has been deallocated

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Description of problem:
  On Azure, if the VM is Stopped(deallocated) and later Started, the VF NIC's 
VMBus Instance GUID may change, and as a result hibernation/resume can hang 
forever.

  This happens to the latest stable release of the linux-azure
  5.4.0-1023.23 kernel and the latest mainline linux kernel.

  How reproducible:
  100%

  Steps to Reproduce:
  1. Start a VM in Azure that supports Accelerated Networking, and enable 
hibernation properly (please refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 )

  2. Do hibernation from serial console
  # systemctl hibernate

  4. After the VM state changes to "Stopped", click "Stop" button from
  Azure portal to change the VM state to Stopped(deallocated)

  5. Wait for some time (e.g. 10 minutes? 1 hour?), and click the
  "Start" button to start the VM, and then check the boot-up process
  from the serial console.

  Actual results:
  Can not boot up. VM hangs after resume.

  Starting Resume from hibernation us…6c7-2c0c-491e-adcf-b625d69faf76...
  [   19.822747] PM: resume from hibernation
  [   19.836693] Freezing user space processes ... (elapsed 0.003 seconds) done.
  [   19.846968] OOM killer disabled.
  [   19.850236] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) 
done.
  [   20.542934] PM: Using 1 thread(s) for decompression
  [   20.548250] PM: Loading and decompressing image data (559580 pages)...
  [   22.844964] PM: Image loading progress:   0%
  [   28.131327] PM: Image loading progress:  10%
  [   32.346480] PM: Image loading progress:  20%
  [   37.453971] PM: Image loading progress:  30%
  [   40.834525] PM: Image loading progress:  40%
  [   42.980629] PM: Image loading progress:  50%
  [   44.342959] PM: Image loading progress:  60%
  [   45.506197] PM: Image loading progress:  70%
  [   46.800445] PM: Image loading progress:  80%
  [   48.010185] PM: Image loading progress:  90%
  [   49.045671] PM: Image loading done
  [   49.050419] PM: Read 2238320 kbytes in 28.48 seconds (78.59 MB/s)
  [   49.074198] printk: Suspending console(s) (use no_console_suspend to debug)

  (The VM hangs here forever)

  BUG FIX:
  A workaround patch is available and is being reviewed: 
https://lkml.org/lkml/2020/9/4/1270

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1894895/+subscriptions

[Kernel-packages] [Bug 1894893] [NEW] [linux-azure][hibernation] GPU device no longer working after resume from hibernation in NV6 VM size

2020-09-08 Thread Dexuan Cui
Public bug reported:

There are failed logs after resume from hibernation in NV6 (GPU passthrough 
size) VM in Azure:
[ 1432.153730] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5
[ 1432.167910] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5

This happens to the latest stable release of the linux-azure
5.4.0-1023.23 kernel and the latest mainline linux kernel.

How reproducible: 
100%

Steps to Reproduce:
1. Start a Standard_NV6 VM in Azure and enable hibernation properly (please 
refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 )

E.g. here I create a Generation-1 Ubuntu 20.04 Standard NV6_Promo (6
vcpus, 56 GiB memory) VM in East US 2.

2. Make sure the in-kernel open-source nouveau driver is loaded, or
blacklist the nouveau driver and install the official Nvidia GPU driver
(please follow https://docs.microsoft.com/en-us/azure/virtual-
machines/linux/n-series-driver-setup : "Install GRID drivers on NV or
NVv3-series VMs" -- the most important step to run the "./NVIDIA-Linux-
x86_64-grid.run".)

3. Run hibernation from serial console
# systemctl hibernate

4. After hibernation finishes, start VM and check dmesg
# dmesg|grep fail

Actual results:
[ 1432.153730] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5
[ 1432.167910] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5

And /proc/interrupts shows that the GPU interrupts are no longer
happening.

Expected results:
No failed logs, and the GPU interrupt should still happen after hibernation.


BUG FIX:
I made a fix here: https://lkml.org/lkml/2020/9/4/1268.

Without the patch, we see the error "hv_pci
47505500-0001--3130-444531334632: hv_irq_unmask() failed: 0x5"
during hibernation when the VM has the Nvidia GPU driver loaded, and
after hibernation the GPU driver can no longer receive any MSI/MSI-X
interrupts when we check /proc/interrupts.
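
One way to check this is to diff the GPU's interrupt counts around the
hibernation cycle; a sketch (the grep pattern for the GPU's MSI/MSI-X lines
is an assumption and may need adjusting for the driver in use):

# Sketch: compare the GPU interrupt counts before and after hibernation.
grep -i nvidia /proc/interrupts > /tmp/irq_before
systemctl hibernate
# ... after the VM resumes, generate some GPU activity, then:
grep -i nvidia /proc/interrupts > /tmp/irq_after
diff /tmp/irq_before /tmp/irq_after   # without the fix the counts don't change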

With the patch, we should no longer see the error, and the GPU driver
should still receive interrupts after hibernation.

** Affects: linux-azure (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1894893

Title:
  [linux-azure][hibernation] GPU device no longer working after resume
  from hibernation in NV6 VM size

Status in linux-azure package in Ubuntu:
  New

Bug description:
  There are failed logs after resume from hibernation in NV6 (GPU passthrough 
size) VM in Azure:
  [ 1432.153730] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5
  [ 1432.167910] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5

  This happens to the latest stable release of the linux-azure
  5.4.0-1023.23 kernel and the latest mainline linux kernel.

  How reproducible: 
  100%

  Steps to Reproduce:
  1. Start a Standard_NV6 VM in Azure and enable hibernation properly (please 
refer to 
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/comments/14 )

  E.g. here I create a Generation-1 Ubuntu 20.04 Standard NV6_Promo (6
  vcpus, 56 GiB memory) VM in East US 2.

  2. Make sure the in-kernel open-source nouveau driver is loaded, or
  blacklist the nouveau driver and install the official Nvidia GPU
  driver (please follow https://docs.microsoft.com/en-us/azure/virtual-
  machines/linux/n-series-driver-setup : "Install GRID drivers on NV or
  NVv3-series VMs" -- the most important step to run the "./NVIDIA-
  Linux-x86_64-grid.run".)

  3. Run hibernation from serial console
  # systemctl hibernate

  4. After hibernation finishes, start VM and check dmesg
  # dmesg|grep fail

  Actual results:
  [ 1432.153730] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5
  [ 1432.167910] hv_pci 47505500-0001--3130-444531334632: hv_irq_unmask() 
failed: 0x5

  And /proc/interrupts shows that the GPU interrupts are no longer
  happening.

  Expected results:
  No failed logs, and the GPU interrupt should still happen after hibernation.

  
  BUG FIX:
  I made a fix here: https://lkml.org/lkml/2020/9/4/1268.

  Without the patch, we see the error "hv_pci
  47505500-0001--3130-444531334632: hv_irq_unmask() failed: 0x5"
  during hibernation when the VM has the Nvidia GPU driver loaded, and
  after hibernation the GPU driver can no longer receive any MSI/MSI-X
  interrupts when we check /proc/interrupts.

  With the patch, we should no longer see the error, and the GPU driver
  should still receive interrupts after hibernation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1894893/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp

[Kernel-packages] [Bug 1891931] Re: [linux-azure] Panic when triggering hibernation

2020-08-31 Thread Dexuan Cui
I can confirm that hibernation now works with 5.4.0-1023, despite a
harmless warning:

root@decui-tmp-2004:~# echo disk >/sys/power/state
[   56.945758] PM: hibernation entry
[   57.165520] Filesystems sync: 0.007 seconds
[   57.169492] Freezing user space processes ... (elapsed 0.001 seconds) done.
[   57.177529] OOM killer disabled.
[   57.180702] PM: Marking nosave pages: [mem 0x-0x0fff]
[   57.185925] PM: Marking nosave pages: [mem 0x0009f000-0x000f]
[   57.191239] PM: Marking nosave pages: [mem 0x3fff-0x]
[   57.197810] PM: Basic memory bitmaps created
[   57.201563] PM: Preallocating image memory... done (allocated 210160 pages)
[   57.623616] PM: Allocated 840640 kbytes in 0.41 seconds (2050.34 MB/s)
[   57.629195] Freezing remaining freezable tasks ... (elapsed 0.000 seconds) 
done.
[   57.637795] serial 00:04: disabled
[   58.847939] Disabling non-boot CPUs ...
[   58.852140] smpboot: CPU 1 is now offline
[   58.857921] smpboot: CPU 2 is now offline
[   58.863623] smpboot: CPU 3 is now offline
[   58.869363] unchecked MSR access error: WRMSR to 0x4106 (tried to write 
0x412d4f49 000100ee) at rIP: 0x9ee1d9b8 (hv_cpu_die+0xe8/0x110)
[   58.870052] Call Trace:
[   58.870052]  hv_suspend+0x5a/0x87
[   58.870052]  syscore_suspend+0x59/0x1a0
[   58.870052]  hibernation_snapshot+0x1bc/0x460
[   58.870052]  hibernate.cold+0x6d/0x1f6
[   58.870052]  state_store+0xde/0xe0
[   58.870052]  kobj_attr_store+0x12/0x20
[   58.870052]  sysfs_kf_write+0x3e/0x50
[   58.870052]  kernfs_fop_write+0xda/0x1b0
[   58.870052]  __vfs_write+0x1b/0x40
[   58.870052]  vfs_write+0xb9/0x1a0
[   58.870052]  ksys_write+0x67/0xe0
[   58.870052]  __x64_sys_write+0x1a/0x20
[   58.870052]  do_syscall_64+0x5e/0x200
[   58.870052]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   58.870052] RIP: 0033:0x7f2f9dfcb057
[   58.870052] Code: 64 89 02 48 c7 c0 ff ff ff ff eb bb 0f 1f 80 00 00 00 00 
f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 
f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
[   58.870052] RSP: 002b:7ffe96046608 EFLAGS: 0246 ORIG_RAX: 
0001
[   58.870052] RAX: ffda RBX: 0005 RCX: 7f2f9dfcb057
[   58.870052] RDX: 0005 RSI: 55ca5250c450 RDI: 0001
[   58.870052] RBP: 55ca5250c450 R08: 000a R09: 0004
[   58.870052] R10: 55ca50a2d017 R11: 0246 R12: 0005
[   58.870052] R13: 7f2f9e0a66a0 R14: 7f2f9e0a74a0 R15: 7f2f9e0a68a0
[   58.870052] PM: Creating hibernation image:
[   58.870052] PM: Need to copy 201788 pages
[   58.870052] PM: Normal pages needed: 201788 + 1024, available pages: 3992087
[   58.870052] PM: Hibernation image created (201788 pages copied)
[   58.870052] Enabling non-boot CPUs ...
[   58.870052] x86: Booting SMP configuration:
[   58.871862] smpboot: Booting Node 0 Processor 1 APIC 0x1
[   58.875719] CPU1 is up
[   58.877194] smpboot: Booting Node 0 Processor 2 APIC 0x2
[   58.881047] CPU2 is up
[   58.882499] smpboot: Booting Node 0 Processor 3 APIC 0x3
[   58.886033] CPU3 is up
[   58.891099] hv_utils: KVP IC version 4.0
[   58.893181] hv_utils: Shutdown IC version 3.2
[   58.896580] hv_balloon: Using Dynamic Memory protocol version 2.0
[   60.186366] hv_utils: Heartbeat IC version 3.0
[   61.952674] hv_utils: TimeSync IC version 4.0
[   68.108243] hv_balloon: Max. dynamic memory size: 16384 MB
[   70.552511] serial 00:03: activated
[   70.620778] serial 00:04: activated
[   70.692760] PM: Using 3 thread(s) for compression
[   70.716148] ata1.01: host indicates ignore ATA devices, ignored
[   70.760736] PM: Compressing and saving image data (202183 pages)...
[   70.760749] PM: Image saving progress:   0%
[   70.831492] ata1.00: host indicates ignore ATA devices, ignored
[   74.568857] PM: Image saving progress:  10%
[   89.707652] PM: Image saving progress:  20%
[  109.659651] PM: Image saving progress:  30%
[  125.565315] PM: Image saving progress:  40%
[  140.112605] PM: Image saving progress:  50%
[  146.074334] PM: Image saving progress:  60%
[  152.507964] PM: Image saving progress:  70%
[  161.068827] PM: Image saving progress:  80%
[  170.115167] PM: Image saving progress:  90%
[  177.616417] PM: Image saving progress: 100%
[  178.566922] PM: Image saving done
[  178.623924] PM: Wrote 808732 kbytes in 107.80 seconds (7.50 MB/s)
[  178.686742] PM: S|
[  178.791430] kvm: exiting hardware virtualization
[  178.851852] sd 0:0:0:0: [sdb] Synchronizing SCSI cache
[  178.913444] ACPI: Preparing to enter system sleep state S5
[  178.975244] reboot: Power down
[  179.043250] acpi_power_off called

This warning can be fixed by this upstream fix:
38dce4195f0d ("x86/hyperv: Properly suspend/resume reenlightenment 
notifications")

How to reproduce the warning: before following
https://bugs.launchpad.net/ubuntu/+source/linux-
azure/+bug/1880032/comments/14 to test hibernation, make sure that
"lsmod" s

[Kernel-packages] [Bug 1888715] Re: UDP data corruption caused by buggy udp_recvmsg() -> skb_copy_and_csum_datagram_msg()

2020-08-23 Thread Dexuan Cui
FYI: the fix is in the upstream linux-4.4.y branch now:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=v4.4.233&id=c514bb4147e2c667cf82f9aa7689cf442078c13f

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1888715

Title:
  UDP data corruption caused by buggy udp_recvmsg() ->
  skb_copy_and_csum_datagram_msg()

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  The Xenial v4.4 kernel (https://kernel.ubuntu.com/git/ubuntu/ubuntu-
  xenial.git/tag/?h=Ubuntu-4.4.0-184.214) lacks this upstream bug
  fix("make skb_copy_datagram_msg() et.al. preserve ->msg_iter on erro"
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3278682123811dd8ef07de5eb701fc4548fcebf2);
  as a result, the v4.4 kernel can deliver corrupt data to the
  application when a corrupt packet is closely followed by a valid
  packet, and actually the UDP payload of the corrupt packet is
  delivered to the application with the "from IP/Port" of the valid
  packet, so this is actually a security vulnerability that can be used
  to trick the application to think the corrupt packet’s payload is sent
  from the valid packet’s IP address/port, i.e. a source IP based
  security authentication mechanism can be bypassed.

  Details:

  For a packet longer than 76 bytes (see line 3951,
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/skbuff.h?h=v5.8-rc6#n3948),
  Linux delays the UDP checksum verification until the application
  invokes the syscall recvmsg().

  In the recvmsg() syscall handler, while Linux is copying the UDP
  payload to the application’s memory, it calculates the UDP checksum.
  If the calculated checksum doesn’t match the received checksum, Linux
  drops the corrupt UDP packet, and then starts to process the next
  packet (if any), and if the next packet is valid (i.e. the checksum is
  correct), Linux will copy the valid UDP packet's payload to the
  application’s receiver buffer.

  The bug is: before Linux starts to copy the valid UDP packet, the data
  structure used to track how many more bytes should be copied to the
  application memory is not reset to what it was when the application
  just entered the kernel by the syscall! Consequently, only a small
  portion or none of the valid packet’s payload is copied to the
  application’s receive buffer, and later when the application exits
  from the kernel, actually most of the application’s receive buffer
  contains the payload of the corrupt packet while recvmsg() returns the
  size of the UDP payload of the valid packet. Note: this is actually a
  security vulnerability that can be used to trick the application to
  think the corrupt packet’s payload is sent from the valid packet’s IP
  address/port -- so a source IP based security authentication mechanism
  can be bypassed.

  The bug was fixed in this 2017 patch “make skb_copy_datagram_msg()
  et.al. preserve ->msg_iter on error
  
(https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3278682123811dd8ef07de5eb701fc4548fcebf2)”,
  but unluckily the patch is only backported to the upstream v4.9+
  kernels. I'm reporting this bug to request the bugfix to be backported
  to the v4.4 Xenial kernel, which is still used by some users and has
  not been EOL'ed (https://ubuntu.com/about/release-cycle).

  I have the below one-line workaround patch to force the recvmsg() syscall 
handler to return to the userspace when Linux detects a corrupt UDP packet, so 
the application will invoke the syscall again to receive the next good UDP 
packet:
  --- a/net/ipv4/udp.c
  +++ b/net/ipv4/udp.c
  @@ -1367,6 +1367,7 @@ csum_copy_err:
  /* starting over for a new packet, but check if we need to yield */
  cond_resched();
  msg->msg_flags &= ~MSG_TRUNC;
  +   return -EAGAIN;
  goto try_again;
  }

  Note: the patch may not work well with blocking sockets, for which
  typically the application doesn’t expect an error of -EAGAIN. I guess
  it would be safer to return -EINTR instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1888715/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1880032] Re: [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

2020-08-12 Thread Dexuan Cui
Detailed steps to repro the issue on Azure:
1. Create a VM with the image "Ubuntu Server 20.04 LTS - Gen1". Any VM size 
should be fine. Here I use "Standard E4-2ds_v4 (2 vcpus, 32 GiB memory)".

2. Add an extra disk of 64GB to the VM via Azure portal.

3. Login the VM via ssh and check the kernel version: here I get
5.4.0-1022-azure.

4. In the VM, the 64GB disk may show up as sdc. Let's create a swap partition
on it, i.e. sdc1.

5. mkswap /dev/sdc1
root@decui-tmp-2004:~# mkswap /dev/sdc1
Setting up swapspace version 1, size = 64 GiB (68718424064 bytes)
no label, UUID=544831e4-72ab-4d2c-81aa-6dac3a8e20ad

6. Add the swap partition info into /etc/fstab:
UUID=544831e4-72ab-4d2c-81aa-6dac3a8e20ad   none swap   sw  0 0

7. Use "swapon -a; swapon -s" to confirm that the swap partition works.

8. Add the kernel parameter resume= into 
/etc/default/grub.d/50-cloudimg-settings.cfg:
 GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 earlyprintk=ttyS0 
resume=UUID=544831e4-72ab-4d2c-81aa-6dac3a8e20ad ignore_loglevel 
no_console_suspend"

   Note: here I also add "ignore_loglevel no_console_suspend", which are
*required* to see the error messages during hibernation.

9. Comment out the only line in /etc/default/grub.d/40-force-partuuid.cfg:
 GRUB_FORCE_PARTUUID=bf00dea3-136e-49cb-a640-0df7ce49d6db
   Note: this step is required; otherwise the generated grub.cfg doesn't 
contain the "initrd ..." line, which is needed for resuming to work.

10. Run "update-grub2; reboot".
 Note: this 'reboot' may be required, because we need to re-generate the 
initramfs while the running kernel has the resume= parameter. 

11. Login the VM again and run "update-initramfs -u".

12. Run "echo disk > /sys/power/state". Note: we'd better run this
command from Azure serial console (we need to set a password for root
and use that to login via the serial console) so we can easily watch
what will be happening.

root@decui-tmp-2004:~# echo disk > /sys/power/state
[   67.838749] PM: hibernation entry
[   68.266627] Filesystems sync: 0.041 seconds
[   68.271740] Freezing user space processes ... (elapsed 0.001 seconds) done.
[   68.281528] OOM killer disabled.
[   68.286475] PM: Marking nosave pages: [mem 0x-0x0fff]
[   68.293459] PM: Marking nosave pages: [mem 0x0009f000-0x000f]
[   68.300306] PM: Marking nosave pages: [mem 0x3fff-0x]
[   68.308250] PM: Basic memory bitmaps created
[   68.313082] PM: Preallocating image memory... done (allocated 298659 pages)
[   69.303864] PM: Allocated 1194636 kbytes in 0.98 seconds (1219.01 MB/s)
[   69.311605] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) 
done.
[   69.322486] serial 00:04: disabled
[   69.345193] [ cut here ]
[   69.345199] WARNING: CPU: 1 PID: 1495 at kernel/workqueue.c:3040 
__flush_work+0x1b5/0x1d0
...
[   70.047238] CPU1 is up
[   70.054474] hv_utils: KVP IC version 4.0
[   70.056763] hv_utils: Shutdown IC version 3.2
[   70.061009] hv_balloon: Using Dynamic Memory protocol version 2.0

It looks like the kernel hangs here forever. Normally the VM is expected to
save its state to disk and power off; later, when we start the VM from
the portal, it is expected to resume back to the 'echo' command on
the serial console.

If I build a kernel with the same source code but revert 0a14dbaa0736,
the above suspending and resuming work fine.
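
For convenience, steps 5-11 above amount to roughly the following (a sketch
only; the swap UUID and /dev/sdc1 are the examples from this comment, and the
two grub snippet files are the Azure cloud-image defaults mentioned above):

# Sketch: consolidate the swap + resume= setup (device/UUID are examples).
mkswap /dev/sdc1
UUID=544831e4-72ab-4d2c-81aa-6dac3a8e20ad
echo "UUID=$UUID none swap sw 0 0" >> /etc/fstab
swapon -a && swapon -s
# Add "resume=UUID=$UUID ignore_loglevel no_console_suspend" to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub.d/50-cloudimg-settings.cfg,
# and comment out GRUB_FORCE_PARTUUID in
# /etc/default/grub.d/40-force-partuuid.cfg (steps 8 and 9), then:
update-grub2
reboot
# After the reboot:
update-initramfs -u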

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1880032

Title:
  [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-azure source package in Focal:
  Fix Released

Bug description:
  Microsoft would like to request commits to enable VM hibernation in
  the Azure 5.4 kernels for 18.04 and 20.04.

  Some of the commits needed to enable VM hibernation were included in
  mainline 5.4 and older.  However, 24 commits were added in 5.5 and
  later, which are required in the 5.4 kernel.  The list of commits
  requested are:

  38dce4195f0d  x86/hyperv: Properly suspend/resume reenlightenment 
notifications
  2351f8d295ed  PM: hibernate: Freeze kernel threads in software_resume()
  421f090c819d  x86/hyperv: Suspend/resume the VP assist page for hibernation
  1a06d017fb3f  Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
  3704a6a44579  PM: hibernate: Propagate the return value of 
hibernation_restore()
  54e19d34011f  hv_utils: Add the support of hibernation
  ffd1d4a49336  hv_utils: Support host-initiated hibernation request
  3e9c72056ed5  hv_utils: Support host-initiated restart request
  9fc3c01a1fae6 Tools: hv: Reopen the devices if read() or write() returns
  05bd330a7fd8  x86/hyperv: Suspend/resume the hypercall page for hibernation
  382a46221757  video: hyperv_fb: Fix hibernation for the deferred IO feature
  e2379b30324c  Input:

[Kernel-packages] [Bug 1880032] Re: [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

2020-08-12 Thread Dexuan Cui
Hi Marcelo, yes, please revert 
0a14dbaa0736 ("video: hyperv_fb: Fix hibernation for the deferred IO feature").
No other change is needed.

In the future, when a4ddb11d297e is included, 0a14dbaa0736 should also
be included.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1880032

Title:
  [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-azure source package in Focal:
  Fix Released

Bug description:
  Microsoft would like to request commits to enable VM hibernation in
  the Azure 5.4 kernels for 18.04 and 20.04.

  Some of the commits needed to enable VM hibernation were included in
  mainline 5.4 and older.  However, 24 commits were added in 5.5 and
  later, which are required in the 5.4 kernel.  The list of commits
  requested are:

  38dce4195f0d  x86/hyperv: Properly suspend/resume reenlightenment 
notifications
  2351f8d295ed  PM: hibernate: Freeze kernel threads in software_resume()
  421f090c819d  x86/hyperv: Suspend/resume the VP assist page for hibernation
  1a06d017fb3f  Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
  3704a6a44579  PM: hibernate: Propagate the return value of 
hibernation_restore()
  54e19d34011f  hv_utils: Add the support of hibernation
  ffd1d4a49336  hv_utils: Support host-initiated hibernation request
  3e9c72056ed5  hv_utils: Support host-initiated restart request
  9fc3c01a1fae6 Tools: hv: Reopen the devices if read() or write() returns
  05bd330a7fd8  x86/hyperv: Suspend/resume the hypercall page for hibernation
  382a46221757  video: hyperv_fb: Fix hibernation for the deferred IO feature
  e2379b30324c  Input: hyperv-keyboard: Add the support of hibernation
  ac82fc8327088 PCI: hv: Add hibernation support
  a8e37506e79a  PCI: hv: Reorganize the code in preparation of hibernation
  1349401ff1aa4 clocksource/drivers/hyper-v: Suspend/resume Hyper-V clocksource 
for hibernation
  af13f9ed6f9a  HID: hyperv: Add the support of hibernation
  25bd2b2f1f053 hv_balloon: Add the support of hibernation
  b96f86534fa31 x86/hyperv: Implement hv_is_hibernation_supported()
  4df4cb9e99f83 x86/hyperv: Initialize clockevents earlier in CPU onlining
  0efeea5fb1535 hv_netvsc: Add the support of hibernation
  2194c2eb6717f hv_sock: Add the support of hibernation
  1ecf302021040 video: hyperv_fb: Add the support of hibernation
  56fb105859345 scsi: storvsc: Add the support of hibernation
  f2c33ccacb2d4 PCI/PM: Always return devices to D0 when thawing

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1880032] Re: [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

2020-08-12 Thread Dexuan Cui
To reproduce the issue, I created an Ubuntu 20.04 VM on Azure (the kernel
version was "5.4.0-1022-azure #22-Ubuntu"), ran "echo disk >
/sys/power/state" in the VM, and then checked the Azure serial console of
the VM: I found the warning in comment #8, and suspending couldn't finish
normally (it looks like the VM got a fatal page fault later). I suppose
the issue can also be reproduced on a local Hyper-V host.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1880032

Title:
  [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-azure source package in Focal:
  Fix Released

Bug description:
  Microsoft would like to request commits to enable VM hibernation in
  the Azure 5.4 kernels for 18.04 and 20.04.

  Some of the commits needed to enable VM hibernation were included in
  mainline 5.4 and older.  However, 24 commits were added in 5.5 and
  later, which are required in the 5.4 kernel.  The list of commits
  requested are:

  38dce4195f0d  x86/hyperv: Properly suspend/resume reenlightenment 
notifications
  2351f8d295ed  PM: hibernate: Freeze kernel threads in software_resume()
  421f090c819d  x86/hyperv: Suspend/resume the VP assist page for hibernation
  1a06d017fb3f  Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
  3704a6a44579  PM: hibernate: Propagate the return value of 
hibernation_restore()
  54e19d34011f  hv_utils: Add the support of hibernation
  ffd1d4a49336  hv_utils: Support host-initiated hibernation request
  3e9c72056ed5  hv_utils: Support host-initiated restart request
  9fc3c01a1fae6 Tools: hv: Reopen the devices if read() or write() returns
  05bd330a7fd8  x86/hyperv: Suspend/resume the hypercall page for hibernation
  382a46221757  video: hyperv_fb: Fix hibernation for the deferred IO feature
  e2379b30324c  Input: hyperv-keyboard: Add the support of hibernation
  ac82fc8327088 PCI: hv: Add hibernation support
  a8e37506e79a  PCI: hv: Reorganize the code in preparation of hibernation
  1349401ff1aa4 clocksource/drivers/hyper-v: Suspend/resume Hyper-V clocksource 
for hibernation
  af13f9ed6f9a  HID: hyperv: Add the support of hibernation
  25bd2b2f1f053 hv_balloon: Add the support of hibernation
  b96f86534fa31 x86/hyperv: Implement hv_is_hibernation_supported()
  4df4cb9e99f83 x86/hyperv: Initialize clockevents earlier in CPU onlining
  0efeea5fb1535 hv_netvsc: Add the support of hibernation
  2194c2eb6717f hv_sock: Add the support of hibernation
  1ecf302021040 video: hyperv_fb: Add the support of hibernation
  56fb105859345 scsi: storvsc: Add the support of hibernation
  f2c33ccacb2d4 PCI/PM: Always return devices to D0 when thawing

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1880032] Re: [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

2020-08-11 Thread Dexuan Cui
Unluckily this commit breaks hibernation:
0a14dbaa0736 ("video: hyperv_fb: Fix hibernation for the deferred IO feature"):
https://git.launchpad.net/~canonical-kernel/ubuntu/+source/linux-azure/+git/focal/commit/?h=Ubuntu-azure-5.4.0-1022.22&id=0a14dbaa0736a6021c02e74d42cf3a7ca5438da6

The kernel here doesn't include 
a4ddb11d297e ("video: hyperv: hyperv_fb: Support deferred IO for Hyper-V frame 
buffer driver", so it should not include
0a14dbaa0736 ("video: hyperv_fb: Fix hibernation for the deferred IO feature").

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1880032

Title:
  [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-azure source package in Focal:
  Fix Released

Bug description:
  Microsoft would like to request commits to enable VM hibernation in
  the Azure 5.4 kernels for 18.04 and 20.04.

  Some of the commits needed to enable VM hibernation were included in
  mainline 5.4 and older.  However, 24 commits were added in 5.5 and
  later, which are required in the 5.4 kernel.  The list of commits
  requested are:

  38dce4195f0d  x86/hyperv: Properly suspend/resume reenlightenment 
notifications
  2351f8d295ed  PM: hibernate: Freeze kernel threads in software_resume()
  421f090c819d  x86/hyperv: Suspend/resume the VP assist page for hibernation
  1a06d017fb3f  Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
  3704a6a44579  PM: hibernate: Propagate the return value of 
hibernation_restore()
  54e19d34011f  hv_utils: Add the support of hibernation
  ffd1d4a49336  hv_utils: Support host-initiated hibernation request
  3e9c72056ed5  hv_utils: Support host-initiated restart request
  9fc3c01a1fae6 Tools: hv: Reopen the devices if read() or write() returns
  05bd330a7fd8  x86/hyperv: Suspend/resume the hypercall page for hibernation
  382a46221757  video: hyperv_fb: Fix hibernation for the deferred IO feature
  e2379b30324c  Input: hyperv-keyboard: Add the support of hibernation
  ac82fc8327088 PCI: hv: Add hibernation support
  a8e37506e79a  PCI: hv: Reorganize the code in preparation of hibernation
  1349401ff1aa4 clocksource/drivers/hyper-v: Suspend/resume Hyper-V clocksource 
for hibernation
  af13f9ed6f9a  HID: hyperv: Add the support of hibernation
  25bd2b2f1f053 hv_balloon: Add the support of hibernation
  b96f86534fa31 x86/hyperv: Implement hv_is_hibernation_supported()
  4df4cb9e99f83 x86/hyperv: Initialize clockevents earlier in CPU onlining
  0efeea5fb1535 hv_netvsc: Add the support of hibernation
  2194c2eb6717f hv_sock: Add the support of hibernation
  1ecf302021040 video: hyperv_fb: Add the support of hibernation
  56fb105859345 scsi: storvsc: Add the support of hibernation
  f2c33ccacb2d4 PCI/PM: Always return devices to D0 when thawing

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1880032] Re: [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

2020-07-31 Thread Dexuan Cui
Unluckily this commit breaks hibernation:
0a14dbaa0736 ("video: hyperv_fb: Fix hibernation for the deferred IO feature"):
https://git.launchpad.net/~canonical-kernel/ubuntu/+source/linux-azure/+git/focal/commit/?h=Ubuntu-azure-5.4.0-1022.22&id=0a14dbaa0736a6021c02e74d42cf3a7ca5438da6

We should include the patch only if the kernel also includes 
a4ddb11d297e ("video: hyperv: hyperv_fb: Support deferred IO for Hyper-V frame 
buffer driver"

Now I'm seeing a hang/panic issue when hibernating the VM ("5.4.0-1022-azure 
#22-Ubuntu"):
[   67.736061] [ cut here ]
[   67.736068] WARNING: CPU: 5 PID: 1358 at kernel/workqueue.c:3040 
__flush_work+0x1b5/0x1d0
[   67.736068] Modules linked in: xt_owner iptable_security xt_conntrack 
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c bpfilter nls_iso8859_1 
dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua sb_edac crct10dif_pclmul 
crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper 
joydev hid_generic hyperv_fb cfbfillrect hid_hyperv intel_rapl_perf serio_raw 
hyperv_keyboard pata_acpi hv_netvsc hv_balloon hid cfbimgblt pci_hyperv 
cfbcopyarea hv_utils pci_hyperv_intf sch_fq_codel drm 
drm_panel_orientation_quirks i2c_core ip_tables x_tables autofs4
[   67.736088] CPU: 5 PID: 1358 Comm: bash Not tainted 5.4.0-1022-azure 
#22-Ubuntu
[   67.736089] Hardware name: Microsoft Corporation Virtual Machine/Virtual 
Machine, BIOS 090007  06/02/2017
[   67.736091] RIP: 0010:__flush_work+0x1b5/0x1d0
[   67.736092] Code: f0 eb e3 4d 8b 7c 24 20 e9 f3 fe ff ff 8b 0b 48 8b 53 08 
83 e1 08 48 0f ba 2b 03 80 c9 f0 e9 4f ff ff ff 0f 0b e9 68 ff ff ff <0f> 0b 45 
31 f6 e9 5e ff ff ff e8 ec e0 fd ff 66 66 2e 0f 1f 84 00
[   67.736095] RSP: 0018:a7ce8a8ffb78 EFLAGS: 00010246
[   67.736096] RAX:  RBX: 8be3621f02a0 RCX: 
[   67.736096] RDX: 0001 RSI: 0001 RDI: 8be3621f02a0
[   67.736097] RBP: a7ce8a8ffbf0 R08:  R09: ff010101
[   67.736098] R10: 8be363f7a320 R11: 0001 R12: 8be3621f02a0
[   67.736098] R13: 0001 R14: 0001 R15: bc390fd1
[   67.736099] FS:  7f6df35fe740() GS:8be375d4() 
knlGS:
[   67.736100] CS:  0010 DS:  ES:  CR0: 80050033
[   67.736100] CR2: 561eef2c1b50 CR3: 000e40a14004 CR4: 001706e0
[   67.736102] Call Trace:
[   67.736108]  __cancel_work_timer+0x107/0x180
[   67.736119]  cancel_delayed_work_sync+0x13/0x20
[   67.736121]  hvfb_suspend+0x48/0x80 [hyperv_fb]
[   67.736122]  vmbus_suspend+0x2a/0x40
[   67.736125]  dpm_run_callback+0x5b/0x150
[   67.736127]  __device_suspend_noirq+0x9e/0x2f0
[   67.736128]  dpm_suspend_noirq+0x101/0x2d0
[   67.736130]  dpm_suspend_end+0x53/0x80
[   67.736132]  hibernation_snapshot+0xd8/0x460
[   67.736133]  hibernate.cold+0x6d/0x1f6
[   67.736135]  state_store+0xde/0xe0
[   67.736138]  kobj_attr_store+0x12/0x20
[   67.736141]  sysfs_kf_write+0x3e/0x50
[   67.736142]  kernfs_fop_write+0xda/0x1b0
[   67.736145]  __vfs_write+0x1b/0x40
[   67.736147]  vfs_write+0xb9/0x1a0
[   67.736149]  ksys_write+0x67/0xe0
[   67.736150]  __x64_sys_write+0x1a/0x20
[   67.736152]  do_syscall_64+0x5e/0x200
[   67.736156]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   67.736157] RIP: 0033:0x7f6df3712057


After I revert 0a14dbaa0736, hibernation works.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1880032

Title:
  [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-azure source package in Focal:
  Fix Released

Bug description:
  Microsoft would like to request commits to enable VM hibernation in
  the Azure 5.4 kernels for 18.04 and 20.04.

  Some of the commits needed to enable VM hibernation were included in
  mainline 5.4 and older.  However, 24 commits were added in 5.5 and
  later, which are required in the 5.4 kernel.  The list of commits
  requested are:

  38dce4195f0d  x86/hyperv: Properly suspend/resume reenlightenment 
notifications
  2351f8d295ed  PM: hibernate: Freeze kernel threads in software_resume()
  421f090c819d  x86/hyperv: Suspend/resume the VP assist page for hibernation
  1a06d017fb3f  Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
  3704a6a44579  PM: hibernate: Propagate the return value of 
hibernation_restore()
  54e19d34011f  hv_utils: Add the support of hibernation
  ffd1d4a49336  hv_utils: Support host-initiated hibernation request
  3e9c72056ed5  hv_utils: Support host-initiated restart request
  9fc3c01a1fae6 Tools: hv: Reopen the devices if read() or write() returns
  05bd330a7fd8  x86/hyperv: Suspend/resume the hypercall page for hibernation
  382a46221757  video: hyperv_fb: Fix hibernation for the deferred IO feature
  e2379b30324c  Input: hyperv-keyboard: Add the support of hibernation

[Kernel-packages] [Bug 1888715] Re: UDP data corruption caused by buggy udp_recvmsg() -> skb_copy_and_csum_datagram_msg()

2020-07-28 Thread Dexuan Cui
https://lore.kernel.org/netdev/20200728015505.37830-1-de...@microsoft.com/

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1888715

Title:
  UDP data corruption caused by buggy udp_recvmsg() ->
  skb_copy_and_csum_datagram_msg()

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  The Xenial v4.4 kernel (https://kernel.ubuntu.com/git/ubuntu/ubuntu-
  xenial.git/tag/?h=Ubuntu-4.4.0-184.214) lacks this upstream bug
  fix("make skb_copy_datagram_msg() et.al. preserve ->msg_iter on erro"
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3278682123811dd8ef07de5eb701fc4548fcebf2);
  as a result, the v4.4 kernel can deliver corrupt data to the
  application when a corrupt packet is closely followed by a valid
  packet, and actually the UDP payload of the corrupt packet is
  delivered to the application with the "from IP/Port" of the valid
  packet, so this is actually a security vulnerability that can be used
  to trick the application to think the corrupt packet’s payload is sent
  from the valid packet’s IP address/port, i.e. a source IP based
  security authentication mechanism can be bypassed.

  Details:

  For a packet longer than 76 bytes (see line 3951,
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/skbuff.h?h=v5.8-rc6#n3948),
  Linux delays the UDP checksum verification until the application
  invokes the syscall recvmsg().
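
  (For reference, the 76-byte threshold mentioned above corresponds to the
  CHECKSUM_BREAK constant in include/linux/skbuff.h; the snippet below is a
  hedged paraphrase of the idea, using a hypothetical helper name, not the
  exact kernel macro:)

  /* Small packets are checksummed right away in the receive path;
   * larger ones defer verification to the copy done in recvmsg(). */
  #define CHECKSUM_BREAK 76

  if (!skb_csum_unnecessary(skb) && skb->len <= CHECKSUM_BREAK)
  	verify_checksum_now(skb);	/* hypothetical helper, for illustration */
  /* else: verified later, e.g. via skb_copy_and_csum_datagram_msg() */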

  In the recvmsg() syscall handler, while Linux is copying the UDP
  payload to the application’s memory, it calculates the UDP checksum.
  If the calculated checksum doesn’t match the received checksum, Linux
  drops the corrupt UDP packet, and then starts to process the next
  packet (if any), and if the next packet is valid (i.e. the checksum is
  correct), Linux will copy the valid UDP packet's payload to the
  application’s receive buffer.

  The bug is: before Linux starts to copy the valid UDP packet, the data
  structure used to track how many more bytes should be copied to the
  application memory is not reset to what it was when the application
  just entered the kernel by the syscall! Consequently, only a small
  portion or none of the valid packet’s payload is copied to the
  application’s receive buffer, and later when the application exits
  from the kernel, actually most of the application’s receive buffer
  contains the payload of the corrupt packet while recvmsg() returns the
  size of the UDP payload of the valid packet. Note: this is actually a
  security vulnerability that can be used to trick the application to
  think the corrupt packet’s payload is sent from the valid packet’s IP
  address/port -- so a source IP based security authentication mechanism
  can be bypassed.

  The bug was fixed in this 2017 patch “make skb_copy_datagram_msg()
  et.al. preserve ->msg_iter on error
  
(https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3278682123811dd8ef07de5eb701fc4548fcebf2)”,
  but unluckily the patch is only backported to the upstream v4.9+
  kernels. I'm reporting this bug to request the bugfix to be backported
  to the v4.4 Xenial kernel, which is still used by some users and has
  not been EOL'ed (https://ubuntu.com/about/release-cycle).
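
  To illustrate the idea behind that upstream fix (a minimal sketch only,
  assuming the usual skb_copy_and_csum_datagram_msg() calling convention; it
  is not the literal patch): keep the caller's msg_iter intact when the
  checksum-and-copy fails, so the next (valid) packet is copied from the
  beginning of the user buffer:

  /* Illustrative sketch: snapshot the iterator and restore it on error,
   * so msg->msg_iter is unchanged when the next packet is processed. */
  struct iov_iter saved_iter = msg->msg_iter;

  if (skb_copy_and_csum_datagram_msg(skb, sizeof(struct udphdr), msg)) {
  	msg->msg_iter = saved_iter;	/* don't consume the user buffer on error */
  	goto csum_copy_err;
  }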

  I have the below one-line workaround patch to force the recvmsg() syscall 
handler to return to the userspace when Linux detects a corrupt UDP packet, so 
the application will invoke the syscall again to receive the next good UDP 
packet:
  --- a/net/ipv4/udp.c
  +++ b/net/ipv4/udp.c
  @@ -1367,6 +1367,7 @@ csum_copy_err:
  /* starting over for a new packet, but check if we need to yield */
  cond_resched();
  msg->msg_flags &= ~MSG_TRUNC;
  +   return -EAGAIN;
  goto try_again;
  }

  Note: the patch may not work well with blocking sockets, for which
  typically the application doesn’t expect an error of -EAGAIN. I guess
  it would be safer to return -EINTR instead.
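
  For completeness, the same hedged workaround with -EINTR instead of -EAGAIN
  (an untested variant of the diff above, shown only to make the suggestion
  concrete):
  --- a/net/ipv4/udp.c
  +++ b/net/ipv4/udp.c
  @@ -1367,6 +1367,7 @@ csum_copy_err:
  /* starting over for a new packet, but check if we need to yield */
  cond_resched();
  msg->msg_flags &= ~MSG_TRUNC;
  +   return -EINTR;
  goto try_again;
  }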

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1888715/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1888715] Re: UDP data corruption caused by buggy udp_recvmsg() -> skb_copy_and_csum_datagram_msg()

2020-07-24 Thread Dexuan Cui
rcu_access_pointer(sk->sk_filter) is basically the same as
sk->sk_filter.

If sk->sk_filter is true, the change makes no difference.
If sk->sk_filter is false, the change also drops a UDP packet with incorrect 
UDP checksum by "goto csum_error;". Without the change, the packet is dropped 
in udp_recvmsg(); with the change, the packet is dropped earlier.
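
For context, this is roughly the v4.4-era check being discussed in
udp_queue_rcv_skb() (a sketch from memory; treat the exact lines as an
assumption rather than a quote of the Ubuntu source):

	/* The checksum is verified this early only when a socket filter is
	 * attached; otherwise verification is deferred to udp_recvmsg(). */
	if (rcu_access_pointer(sk->sk_filter) &&
	    udp_lib_checksum_complete(skb))
		goto csum_error;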

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1888715

Title:
  UDP data corruption caused by buggy udp_recvmsg() ->
  skb_copy_and_csum_datagram_msg()

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  The Xenial v4.4 kernel (https://kernel.ubuntu.com/git/ubuntu/ubuntu-
  xenial.git/tag/?h=Ubuntu-4.4.0-184.214) lacks this upstream bug
  fix("make skb_copy_datagram_msg() et.al. preserve ->msg_iter on erro"
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3278682123811dd8ef07de5eb701fc4548fcebf2);
  as a result, the v4.4 kernel can deliver corrupt data to the
  application when a corrupt packet is closely followed by a valid
  packet, and actually the UDP payload of the corrupt packet is
  delivered to the application with the "from IP/Port" of the valid
  packet, so this is actually a security vulnerability that can be used
  to trick the application to think the corrupt packet’s payload is sent
  from the valid packet’s IP address/port, i.e. a source IP based
  security authentication mechanism can be bypassed.

  Details:

  For a packet longer than 76 bytes (see line 3951,
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/skbuff.h?h=v5.8-rc6#n3948),
  Linux delays the UDP checksum verification until the application
  invokes the syscall recvmsg().

  In the recvmsg() syscall handler, while Linux is copying the UDP
  payload to the application’s memory, it calculates the UDP checksum.
  If the calculated checksum doesn’t match the received checksum, Linux
  drops the corrupt UDP packet, and then starts to process the next
  packet (if any), and if the next packet is valid (i.e. the checksum is
  correct), Linux will copy the valid UDP packet's payload to the
  application’s receive buffer.

  The bug is: before Linux starts to copy the valid UDP packet, the data
  structure used to track how many more bytes should be copied to the
  application memory is not reset to what it was when the application
  just entered the kernel by the syscall! Consequently, only a small
  portion or none of the valid packet’s payload is copied to the
  application’s receive buffer, and later when the application exits
  from the kernel, actually most of the application’s receive buffer
  contains the payload of the corrupt packet while recvmsg() returns the
  size of the UDP payload of the valid packet. Note: this is actually a
  security vulnerability that can be used to trick the application to
  think the corrupt packet’s payload is sent from the valid packet’s IP
  address/port -- so a source IP based security authentication mechanism
  can be bypassed.

  The bug was fixed in this 2017 patch “make skb_copy_datagram_msg()
  et.al. preserve ->msg_iter on error
  
(https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3278682123811dd8ef07de5eb701fc4548fcebf2)”,
  but unluckily the patch is only backported to the upstream v4.9+
  kernels. I'm reporting this bug to request the bugfix to be backported
  to the v4.4 Xenial kernel, which is still used by some users and has
  not been EOL'ed (https://ubuntu.com/about/release-cycle).

  I have the below one-line workaround patch to force the recvmsg() syscall 
handler to return to the userspace when Linux detects a corrupt UDP packet, so 
the application will invoke the syscall again to receive the next good UDP 
packet:
  --- a/net/ipv4/udp.c
  +++ b/net/ipv4/udp.c
  @@ -1367,6 +1367,7 @@ csum_copy_err:
  /* starting over for a new packet, but check if we need to yield */
  cond_resched();
  msg->msg_flags &= ~MSG_TRUNC;
  +   return -EAGAIN;
  goto try_again;
  }

  Note: the patch may not work well with blocking sockets, for which
  typically the application doesn’t expect an error of -EAGAIN. I guess
  it would be safer to return -EINTR instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1888715/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1880032] Re: [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

2020-06-01 Thread Dexuan Cui
FYI: the patch "net/mlx5: Fix crash upon suspend/resume" is in v5.7 now
(i.e. today's latest mainline):
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v5.7&id=8fc3e29be9248048f449793502c15af329f35c6e

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1880032

Title:
  [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Microsoft would like to request commits to enable VM hibernation in
  the Azure 5.4 kernels for 18.04 and 20.04.

  Some of the commits needed to enable VM hibernation were included in
  mainline 5.4 and older.  However, 24 commits were added in 5.5 and
  later, which are required in the 5.4 kernel.  The list of commits
  requested are:

  38dce4195f0d  x86/hyperv: Properly suspend/resume reenlightenment 
notifications
  2351f8d295ed  PM: hibernate: Freeze kernel threads in software_resume()
  421f090c819d  x86/hyperv: Suspend/resume the VP assist page for hibernation
  1a06d017fb3f  Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
  3704a6a44579  PM: hibernate: Propagate the return value of 
hibernation_restore()
  54e19d34011f  hv_utils: Add the support of hibernation
  ffd1d4a49336  hv_utils: Support host-initiated hibernation request
  3e9c72056ed5  hv_utils: Support host-initiated restart request
  9fc3c01a1fae6 Tools: hv: Reopen the devices if read() or write() returns
  05bd330a7fd8  x86/hyperv: Suspend/resume the hypercall page for hibernation
  382a46221757  video: hyperv_fb: Fix hibernation for the deferred IO feature
  e2379b30324c  Input: hyperv-keyboard: Add the support of hibernation
  ac82fc8327088 PCI: hv: Add hibernation support
  a8e37506e79a  PCI: hv: Reorganize the code in preparation of hibernation
  1349401ff1aa4 clocksource/drivers/hyper-v: Suspend/resume Hyper-V clocksource 
for hibernation
  af13f9ed6f9a  HID: hyperv: Add the support of hibernation
  25bd2b2f1f053 hv_balloon: Add the support of hibernation
  b96f86534fa31 x86/hyperv: Implement hv_is_hibernation_supported()
  4df4cb9e99f83 x86/hyperv: Initialize clockevents earlier in CPU onlining
  0efeea5fb1535 hv_netvsc: Add the support of hibernation
  2194c2eb6717f hv_sock: Add the support of hibernation
  1ecf302021040 video: hyperv_fb: Add the support of hibernation
  56fb105859345 scsi: storvsc: Add the support of hibernation
  f2c33ccacb2d4 PCI/PM: Always return devices to D0 when thawing

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1880032] Re: [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

2020-05-29 Thread Dexuan Cui
There is another important bug fix for hibernation: 
net/mlx5: Fix crash upon suspend/resume 
(https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=8fc3e29be9248048f449793502c15af329f35c6e).

So far the fix is only present in the net.git tree, but I expect it will
be in the mainline tree’s v5.8-rc1 (or even v5.7, if we’re lucky).

Please consider picking it up. Thanks!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1880032

Title:
  [linux-azure] Enable Hibernation on The 18.04 and 20.04 5.4 Kernels

Status in linux-azure package in Ubuntu:
  New

Bug description:
  Microsoft would like to request commits to enable VM hibernation in
  the Azure 5.4 kernels for 18.04 and 20.04.

  Some of the commits needed to enable VM hibernation were included in
  mainline 5.4 and older.  However, 24 commits were added in 5.5 and
  later, which are required in the 5.4 kernel.  The list of commits
  requested are:

  38dce4195f0d  x86/hyperv: Properly suspend/resume reenlightenment 
notifications
  2351f8d295ed  PM: hibernate: Freeze kernel threads in software_resume()
  421f090c819d  x86/hyperv: Suspend/resume the VP assist page for hibernation
  1a06d017fb3f  Drivers: hv: vmbus: Fix Suspend-to-Idle for Generation-2 VM
  3704a6a44579  PM: hibernate: Propagate the return value of 
hibernation_restore()
  54e19d34011f  hv_utils: Add the support of hibernation
  ffd1d4a49336  hv_utils: Support host-initiated hibernation request
  3e9c72056ed5  hv_utils: Support host-initiated restart request
  9fc3c01a1fae6 Tools: hv: Reopen the devices if read() or write() returns
  05bd330a7fd8  x86/hyperv: Suspend/resume the hypercall page for hibernation
  382a46221757  video: hyperv_fb: Fix hibernation for the deferred IO feature
  e2379b30324c  Input: hyperv-keyboard: Add the support of hibernation
  ac82fc8327088 PCI: hv: Add hibernation support
  a8e37506e79a  PCI: hv: Reorganize the code in preparation of hibernation
  1349401ff1aa4 clocksource/drivers/hyper-v: Suspend/resume Hyper-V clocksource 
for hibernation
  af13f9ed6f9a  HID: hyperv: Add the support of hibernation
  25bd2b2f1f053 hv_balloon: Add the support of hibernation
  b96f86534fa31 x86/hyperv: Implement hv_is_hibernation_supported()
  4df4cb9e99f83 x86/hyperv: Initialize clockevents earlier in CPU onlining
  0efeea5fb1535 hv_netvsc: Add the support of hibernation
  2194c2eb6717f hv_sock: Add the support of hibernation
  1ecf302021040 video: hyperv_fb: Add the support of hibernation
  56fb105859345 scsi: storvsc: Add the support of hibernation
  f2c33ccacb2d4 PCI/PM: Always return devices to D0 when thawing

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1880032/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-19 Thread Dexuan Cui
So it looks like this is considered a feature rather than a bug for the
Ubuntu 20.04 VM image in Azure Marketplace. To whoever uses such an
image on Azure: if you're installing a new kernel that doesn't have the
necessary drivers built-in (CONFIG_HYPERV=y, CONFIG_HYPERV_STORAGE=y),
you're supposed to comment out the GRUB_FORCE_PARTUUID line in
/etc/default/grub.d/40-force-partuuid.cfg and run 'sudo update-grub'.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Won't Fix
Status in linux-azure package in Ubuntu:
  Confirmed
Status in livecd-rootfs package in Ubuntu:
  Triaged

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-12 Thread Dexuan Cui
BTW, the symptom described in the Bug Description also exists in the
Ubuntu 20.04 image in Azure Marketplace.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Won't Fix
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-12 Thread Dexuan Cui
> If someone is using a kernel other than the one we provide for the
cloud, or in the case of a bug, the system will still boot (slower)
after a panic and a reboot to try again with the initrd.

Hi Steve, I guess you assume the pattern is
"panic/success/success/success/...", but actually the pattern is
“panic/success/panic/success/panic/success/...” -- this is pretty
confusing. Please refer to the Bug Description for details.

Ideally grub should be configured to omit the 'initrd' line only for
the cloud kernels. Is there a way for grub to tell if the kernel is a
cloud kernel or not?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Won't Fix
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-06 Thread Dexuan Cui
Does anyone know who maintains the grub package shipped in the cloud images?
Should we report a bug at https://bugs.launchpad.net/ubuntu/+source/grub2 ?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-06 Thread Dexuan Cui
I think comment #6 is correct: it looks like the 2018 patch introduced the
issue for us, but that patch was originally for "initrd-less boot
capabilities", and here we do need the initramfs file.

I guess the patch "ubuntu-add-initrd-less-boot-fallback.patch" is not
included in the grub shipped in the 20.04 .iso file ubuntu-20.04-live-
server-amd64.iso, but somehow it is included in the grub shipped in the
cloud image?  If so, I guess we can fix this bug by removing the patch
for the cloud image?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-04 Thread Dexuan Cui
Today I just repeated the test "Create a Gen-1 Ubuntu 19.10 VM on Azure,
and upgrade it to Ubuntu 20.04 by “do-release-upgrade –d" and I
reproduced this bug again, and the grub version is also 2.04-1ubuntu26!

So I suspect grub itself should be good, but some grub config file (i.e. 
/etc/grub.d/10_linux?) causes the bug? 
I checked my /etc/grub.d/10_linux: after I added line 263, "grub-mkconfig" can 
generate the needed initrd line correctly:

257 fi
258
259 sed "s/^/$submenu_indentation/" << EOF
260   initrd ${rel_dirname}/${initrd}
261 else
262   linux ${rel_dirname}/${basename} 
root=${linux_root_device_thisversion} ro ${args} panic=-1
263   initrd ${rel_dirname}/${initrd}
264 fi
265 initrdfail
266 EOF

My /etc/grub.d/10_linux is from the grub2-common package
(2.04-1ubuntu26). It looks like this file in my VM that was upgraded from 19.10
to 20.04 is different from the version of the file in a VM that was
created from https://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso.

That's why I suspected it is specific to the cloud-image version of
Ubuntu 20.04. I don't know how exactly “do-release-upgrade -d" works and
where the upgrade procedure pulls the grub2 that lacks the initrd line
in the /etc/grub.d/10_linux.


In summary, 
1. 
https://cloud-images.ubuntu.com/focal/20200430.1/focal-server-cloudimg-amd64.img
 and 
https://cloud-images.ubuntu.com/focal/20200430.1/focal-server-cloudimg-amd64-azure.vhd.zip
 have the bug.
2.  https://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso  does 
not have the bug.
3. A quick fix is to add the needed line 263 (see above), but I think we need to 
understand how the bug was introduced.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages

[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-04 Thread Dexuan Cui
Sorry, this statement is wrong:
==
Today I also created a VM on my host from 
https://cloud-images.ubuntu.com/focal/20200430.1/focal-server-cloudimg-amd64-azure.vhd.zip
 and can not see the bug either, and the grub version is also 2.04-1ubuntu26.
==

Actually I do see the bug as well with the vhd.zip file.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-04 Thread Dexuan Cui
Today I installed a Generation-1 Ubuntu 20.04 VM on my local Hyper-V
host from the ISO file https://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso
(released on 4/23/2020), and I don't see this bug; the grub version is
2.04-1ubuntu26.

Today I also created a VM on my host from https://cloud-
images.ubuntu.com/focal/20200430.1/focal-server-cloudimg-
amd64-azure.vhd.zip and can not see the bug either, and the grub version
is also 2.04-1ubuntu26.

When the bug was originally reported on Apr 1 against my Azure VM (a
Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu 20.04 by
“do-release-upgrade –d"), the grub version was 2.04-1ubuntu22.

So it looks like the issue has been fixed in the 2.04-1ubuntu26 version?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-05-04 Thread Dexuan Cui
When the bug was originally reported on Apr 1, "We installed a Ubuntu
20.04 VM from the .iso file from http://cdimage.ubuntu.com/daily-
live/pending/ and don’t see the strange grub issue". It looks like the grub
version used in the .iso file (on Apr 1) does not have the bug.

I don't think the patch in the link mentioned in comment #6 causes the
bug, because that patch was made 2 years ago and we started to see this
bug just recently. Of course I can be wrong, since I don't really have a
lot of grub knowledge. :-)

I'm not sure if the commit 6a814c759e10 ("Import patches-unapplied
version 2.04-1ubuntu1 to ubuntu/eoan-proposed", made on Jul 16 11:31:29
2019) causes this bug, either, since it was made almost 10 months ago.
Again, I can be wrong. :-)  BTW, this commit is huge -- more than 12K
lines.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade –d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, a panic with the v5.6 kernel occurred because the
  rootfs cannot be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os 
$menuentry_id_option 'gnulinux-simple-3d2737e8-
  b95a-42bf-bac1-bb6fb4cda87f' {
  …
  if [ "${initrdfail}" = 1 ]; then
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic
initrd/boot/initrd.img-5.6.0-050600-generic
  else
linux /boot/vmlinuz-5.6.0-050600-generic 
root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro  console=tty1 
console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled 
unknown_nmi_panic panic=-1
#Dexuan: here the initrd line is missing!
  fi
  initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” here marks initrdfail=1, so when the kernel
  boots for the 2nd time, the kernel should successfully boot up.  Next,
  when the kernel boots for the 3rd time, it panics again since the
  userspace program resets initrdfail to 0, and next time when the
  kernel boots, it can boot up successfully -- this
  “panic/success/panic/success” pattern repeats forever…

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so is affected here.

  
  It looks the setting may be intentional, but we should not assume a customer 
kernel must have the necessary vmbus/storage drivers built-in. 

  This issue only happens to the Ubuntu Marketplace image (19.10 and maybe 
19.04 as well?) on Azure. 
  We installed a Ubuntu  20.04 VM from the .iso file from 
http://cdimage.ubuntu.com/daily-live/pending/ and don’t see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-26 Thread Dexuan Cui
Thanks for the reminder! I just realized Ubuntu 20.04 was already
released on 4/23. We should try it.

For the CPU firmware (CPU microcode?) update issue: sorry, it's
completely out of my scope -- I only work on Linux. Hopefully that issue
will be resolved in the near future.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  After the upgrade, the system shows graphic artefacts for a moment and then
  a text cursor for about a minute (it looks like it has hung), and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/1848534/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-24 Thread Dexuan Cui
Sorry, I made a typo above: systemd.dsystemd.default_standard_output=kmsg ==> 
systemd.default_standard_output=kmsg.
BTW, it looks like systemd.show_status=true makes no difference for me. I don't 
see any status info during boot-up -- not sure if I did something wrong.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/1848534/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.la

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-24 Thread Dexuan Cui
I don't have much knowledge about systemd, either :-) I just did a "man
systemd" and found the options of systemd. "man systemd" says that we
can pass these kernel parameters to systemd:

systemd.service_watchdogs=true systemd.show_status=true
systemd.log_level=debug systemd.dsystemd.default_standard_output=kmsg
systemd.default_standard_error=kmsg

I tried these by adding them to /boot/grub/grub.cfg manually, at the end of 
the line "linux /boot/vmlinuz-5.3.0-23-generic ...". 
I also replaced "quiet splash $vt_handoff" with "ignore_loglevel". So I do get 
more messages from systemd, but not as many as I expected. Not sure if this 
would help troubleshoot the long delay issue for you, and I'm not even sure I 
enabled the systemd logging completely correctly -- again, I'm not really 
familiar with systemd. :-)
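A more persistent variant of the same experiment (just a sketch -- not
exactly what I did, since I edited grub.cfg directly) is to put the
parameters into /etc/default/grub and regenerate grub.cfg:

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="ignore_loglevel systemd.log_level=debug systemd.default_standard_output=kmsg systemd.default_standard_error=kmsg"

  # then rebuild /boot/grub/grub.cfg from it:
  sudo update-grub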

To stop/disable a systemd "service", I think we can use something like this 
(taking setvtrgb.service as an example):
  systemctl stop setvtrgb.service
  systemctl disable setvtrgb.service
  systemctl status setvtrgb.service

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  ve

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-24 Thread Dexuan Cui
Since Alt-SysRq-w gives nothing, I'm sure the long delay is not a
kernel/driver issue but a user space issue. It looks like, for some reason,
I just cannot reproduce the long delay. :-(

In the Hyper-V Virtual Machine Connection window's "View" menu, there is
an item "Enhanced Session". In my Ubuntu 19.10 VM created by "Quick
Create...", the xrdp daemon/service is configured to run automatically
during the boot-up procedure; I think as soon as the xrdp daemon starts
to run, the "Enhanced Session" item becomes clickable/usable, and I can
click it to toggle between "Enhanced Session" mode (i.e. xrdp mode) and
the native Xorg GUI mode. When I'm in the xrdp mode and I click the VM
Connection window's Action | Shut Down, then Start, the VM boots up to
the xrdp login screen in about 14 seconds; when I'm in the Xorg mode and
I click Shut Down then Start, the VM boots up to the Xorg GUI desktop in
about 30 seconds. If I shut down the VM, close the VM Connection window,
then start the VM and open the VM Connection window again, I'm prompted
by a small pop-up window to choose a resolution when (I think) the xrdp
daemon starts to run: 1) if I click the close icon of the small window,
I end up in the Xorg GUI mode; 2) if I accept the default resolution (or
change to a different resolution) and click "Connect" in the small
window, I end up in the xrdp mode. So all of this works pretty well for
me.
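If the "Enhanced Session" item stays greyed out, a quick way to check
whether the xrdp side came up (just a sketch, assuming the units are
named as in the standard Ubuntu xrdp package) is:

  systemctl status xrdp.service xrdp-sesman.service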

Note: after I just created the 19.10 VM by "Quick Create..." and set up
the host name and user name/password stuff, I rebooted the VM, and when
the VM booted up, I found the "Enhanced Session" item was not
clickable/usable -- this looks like a bug -- while I still don't know
the root cause, it looks like this can be resolved by manually adding the
line "initrd /boot/initrd.img-5.3.0-23-generic  #This line is
added by Dexuan manually" into "/boot/grub/grub.cfg":

menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-55829715-0091-4b86-b060-1cb88f342faf' {
...
  if [ "${initrdfail}" = 1 ]; then
    linux  /boot/vmlinuz-5.3.0-23-generic root=PARTUUID=43e99d31-1277-402c-a13b-6cc8fb93169b ro quiet splash $vt_handoff
    initrd /boot/initrd.img-5.3.0-23-generic
  else
    linux  /boot/vmlinuz-5.3.0-23-generic root=PARTUUID=43e99d31-1277-402c-a13b-6cc8fb93169b ro quiet splash $vt_handoff panic=-1
    initrd /boot/initrd.img-5.3.0-23-generic  #This line is added by Dexuan manually!!!
  fi
  initrdfail
}

With the addition of the line, it looks like "GRUB_TIMEOUT=0" in 
"/etc/default/grub" is always applied every time I reboot the VM.
Without the line, it looks like the grub timeout is sometimes 30 seconds and 
sometimes 0 seconds.
BTW, I reported a bug for the missing initrd line a few weeks ago for a 
different issue: https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189.

I suggest you also manually add the line, then I guess you should be
able to reliably toggle between xrdp mode and Xorg mode.

Note: "/boot/grub/grub.cfg" is overwritten when update-grub is run by us
or some automatic-update daemon, so we may want to check if the line is
still there when we see something unexpected (i.e. unable to use xrdp
mode, or see a grub timeout of 30s). I hope Bug 1870189 will be fixed by
somebody ASAP...
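A quick re-check after any update-grub run could look like this (just a
rough sketch; adjust the kernel version to whatever the VM boots):

  grep -n 'initrd.*initrd.img-5.3.0-23-generic' /boot/grub/grub.cfg
  sudo grub-editenv list | grep -i fail   # shows recordfail/initrdfail flags, if set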

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-22 Thread Dexuan Cui
I also tried xrdp mode and the VM booted up to the xrdp login window in
14 seconds, which is faster than the "native Xorg GUI mode" (which needs
about 30 seconds).

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/1848534/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-22 Thread Dexuan Cui
Sorry, I did miss this part of your previous reply:

root@stock19:~# systemctl list-jobs
JOB UNIT                                 TYPE  STATE
 48 setvtrgb.service                     start waiting
137 system-getty.slice                   start waiting
  1 graphical.target                     start waiting
102 systemd-update-utmp-runlevel.service start waiting
 83 plymouth-quit-wait.service           start running
  2 multi-user.target                    start waiting

I'm wondering if you can disable setvtrgb.service, system-getty.slice,
systemd-update-utmp-runlevel.service, and plymouth-quit-wait.service,
and see if the long delay disappears. I guess these 4 units don't
look critical to the GUI desktop.
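Something like this could be used for a temporary experiment (just a
sketch -- "mask" is used because some of these units are static and
can't simply be disabled; undo it afterwards with "systemctl unmask ..."):

  sudo systemctl mask setvtrgb.service systemd-update-utmp-runlevel.service plymouth-quit-wait.service
  # system-getty.slice is a slice, not a service, so it is left alone here
  sudo reboot
  systemctl list-jobs   # after the reboot, see whether the waiting jobs are gone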

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouve

[Kernel-packages] [Bug 1870189] Re: initramfs does not get loaded

2020-04-22 Thread Dexuan Cui
I agree with David. IMO this bug should be fixed ASAP. Thanks!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1870189

Title:
  initramfs does not get loaded

Status in grub2 package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  Confirmed

Bug description:
  A Gen-1 Ubuntu 19.10 VM on Azure was created and upgraded to Ubuntu
  20.04 by “do-release-upgrade -d”.

  Then the latest Ubuntu v5.6 kernel was installed from
  https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/. As soon as a
  reboot was performed, the v5.6 kernel panicked because the rootfs
  could not be found.

  It turns out by default, initramfs does not get loaded:

  /boot/grub/grub.cfg:
  menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-3d2737e8-b95a-42bf-bac1-bb6fb4cda87f' {
  …
    if [ "${initrdfail}" = 1 ]; then
      linux  /boot/vmlinuz-5.6.0-050600-generic root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled unknown_nmi_panic
      initrd /boot/initrd.img-5.6.0-050600-generic
    else
      linux  /boot/vmlinuz-5.6.0-050600-generic root=PARTUUID=bc3d472f-401e-4774-affa-df1acba65a73 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 ignore_loglevel sysrq_always_enabled unknown_nmi_panic panic=-1
      #Dexuan: here the initrd line is missing!
    fi
    initrdfail
  }

  
  As we can see, Ubuntu only uses the initrd.img if initrdfail=1. Normally, 
initrdfail = 0, so when we boot the v5.6 kernel for the first time, we must hit 
the “fail to mount rootfs” panic and the kernel will automatically reboot….   

  Also, the “initrdfail” command here sets initrdfail=1, so when the
  kernel boots for the 2nd time, it should boot up successfully.  Next,
  when the kernel boots for the 3rd time, it panics again, because the
  userspace program has reset initrdfail to 0; the time after that, the
  kernel boots up successfully again -- this
  “panic/success/panic/success” pattern repeats forever…
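  To see which state a given boot is in, the flag can be read from the
  grub environment block (just a sketch; on these Ubuntu images the
  initrdfail flag used by the grub.cfg logic above is saved there):

    sudo grub-editenv /boot/grub/grubenv list   # look for initrdfail=...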

  
  The linux-azure kernels are not affected since they have the vmbus driver and 
storage drivers built-in (i.e. “=y”):
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.3.0-1013-azure:CONFIG_HYPERV=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV_STORAGE=y
  /boot/config-5.4.0-1006-azure:CONFIG_HYPERV=y
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV_STORAGE=m
  /boot/config-5.6.0-050600-generic:CONFIG_HYPERV=m
  The v5.6 kernel uses =m rather than =y, so it is affected here.

  
  It looks like the setting may be intentional, but we should not assume a 
customer's kernel has the necessary vmbus/storage drivers built in. 

  This issue only happens with the Ubuntu Marketplace image (19.10, and maybe 
19.04 as well?) on Azure. 
  We installed an Ubuntu 20.04 VM from the .iso file at 
http://cdimage.ubuntu.com/daily-live/pending/ and don't see the strange grub 
issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1870189/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-21 Thread Dexuan Cui
I created an Ubuntu 19.10 VM via "Quick Create..." and still cannot
reproduce the long delay of > 1 minute: the VM boots up to the Xorg
GUI desktop in 26 seconds.

My Windows 10 has the same version info: Version 1909 (OS Build
18363.778).

At the grub screen, can you press 'e' and manually edit the kernel
parameters: please remove "quiet splash $vt_handoff" and add
"ignore_loglevel sysrq_always_enabled". You may want to enable
serial console logging by adding the kernel parameter "console=ttyS0",
and attach putty (Run as Administrator) to the named pipe
\\.\pipe\debug_slow_vm, assuming you configure the VM serial console with
"Set-VMComPort -VMName your_vm_name -number 1 -path
\\.\pipe\debug_slow_vm".

This way, you should get more messages on the VM serial console when the
VM boots up. When you see the long delay, you can press SysRq+w (i.e.
the Right Alt + SysRq + w) to show the blocked processes, if any. This
may provide more info about the long delay. BTW, here I assume you have
a keyboard that has the SysRq key. :-)

It looks like systemd can be configured to use "--log-level=debug --default-
standard-output=kmsg --default-standard-error=kmsg", which may provide
more info as well, if we check 'dmesg' and/or the VM serial console.
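If the keyboard has no SysRq key, the same 'w' dump can be triggered from
a shell (e.g. over ssh or the serial console) instead -- just a sketch;
the sysrq_always_enabled parameter above also enables this:

  echo 1 | sudo tee /proc/sys/kernel/sysrq    # enable all SysRq functions
  echo w | sudo tee /proc/sysrq-trigger       # dump blocked (uninterruptible) tasks
  dmesg | tail -n 50                          # the dump lands in the kernel log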

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Mach

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-21 Thread Dexuan Cui
It looks like comment #48 shows that some service is causing the long delay --
can you try 'systemctl list-jobs' to see the active jobs, as the "Hint" says? :-)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/1848534/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-12 Thread Dexuan Cui
@msgallery: BTW, you mentioned 'The "restart" button is not functional'
-- actually it is not functional only when we try to click the button
with the mouse -- if we press Tab to focus on the button and then press
Enter, the VM should reboot. :-)  I'll try to mention this to the Hyper-V
team, but I'm not sure when they will fix this minor issue, since it
should be of low priority.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/1848534/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packa

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-12 Thread Dexuan Cui
BTW, my Linux kernel version is 5.3.0-46-generic #38-Ubuntu  (17:37:05,
3/27/2020).

The "graphic artifact" is somehow caused by the "$vt_hanoff" kernel parameter 
(check "cat /proc/cmdline").
If I manually remove the "$vt_hanoff" at the grub screen, I won't see the 
"graphic artifact" -- Ubuntu guys should take a look and fix the issue, as I'm 
not familiar with "vt_handoff".

@msgallery: I never see the "1:40" (1 minute and 40 seconds) delay
reported in comment #40.  Maybe you can use "systemd-analyze critical-
chain" (mentioned in comment #25) to figure out why the delay happened.

To recap, my experience with the fresh Desktop installation of Ubuntu
19.10 (Gen-2 VM) on Hyper-V is good, except for the minor "graphic
artifact" issue. I don't see any long delay.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-vid

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-12 Thread Dexuan Cui
Today I installed a Generation-2 VM (4 virtual CPUs, 4 GB memory) from this 
.iso file: 
http://releases.ubuntu.com/19.10/ubuntu-19.10-desktop-amd64.iso.

My host is Win10: Version 1909 (OS Build 18363.720) -- I got the info by
running the built-in "winver.exe" program.

The CPU type is Intel Core i7-7600 (2.80 GHz). There are 2 cores and the
cores support SMT, so there are 4 logical processors in total.

I can see the "graphic artifact" (I will upload a screenshot soon), but
it looks overall the boot-up is fast (it takes 30 seconds) and it looks
the VM works fine for me.

When the VM boots up:
1. First, the screen with the purple background (I think it's from grub) 
remains 4 seconds.
2. The screen background becomes black, and there is a "Hyper-V" logo in the 
center of the screen. This remains about 1 second.
3. The screen with the "graphic artifact" appears, and remains about 4 seconds.
4. The screen background becomes purple and the "Ubuntu" logo with 5 dots 
appears. This screen remains 8 seconds.
5. The screen becomes completely black. This screen remains 9 seconds.
6. The screen becomes kind of purple again, and in about 2 seconds the GUI 
desktop appears (I set Ubuntu to automatically login in to the desktop). 

So the overall time spent on the 6 steps is about 30 seconds. IMO this looks
normal.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  AFter upgrade system shows graphic artefacts for a moment and then
  text cursor for about minute (it looks like hanged) and then starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.p

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2020-04-12 Thread Dexuan Cui
This is the screenshot of the graphic artifact mentioned in the previous
comment.

** Attachment added: "graphic_artifact.png"
   
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/1848534/+attachment/5352858/+files/graphic_artifact.png

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in gdm3 package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  After the upgrade, the system shows graphic artifacts for a moment and then
  a text cursor for about a minute (it looks like it hanged), and then it starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/1848534/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867220] Re: Assignment of VDEV Somtimes Fails using Intel QAT

2020-04-02 Thread Dexuan Cui
Hi Marcelo, I'm not sure which v5.3 kernel you mean -- the v5.3 in
Ubuntu 19.10, the v5.4 in Ubuntu 20.04, or the upstream stable tree's v5.3
and newer? :-)

Here we need to make sure the 3 patches in the Bug Description are
included, and also make sure the line "if (list_empty(&hbus->children))
hbus->sysdata.domain = desc->ser" in new_pcichild_device() is
completely removed. BTW, this line never made it into the upstream
kernel; it only appears in some (if not all) Ubuntu versions.
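
For anyone preparing a test kernel, here is a minimal sketch (bash) of how
to double-check a tree; it assumes a local Ubuntu kernel git checkout, and
the commit IDs are the three listed in the Bug Description:

# Are the three upstream fixes already in the branch being built?
for c in f73f8a504e27 be700103efd1 6ae91579061c; do
    git merge-base --is-ancestor "$c" HEAD 2>/dev/null \
        && echo "$c: present" || echo "$c: missing"
done

# Has the SAUCE line quoted above been removed? (its tail is truncated in
# this comment, so grep only for the stable prefix)
git grep -n 'hbus->sysdata.domain = desc->ser' -- drivers/pci/ \
    && echo "SAUCE line still present" || echo "SAUCE line removed"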

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1867220

Title:
  Assignment of VDEV Somtimes Fails using Intel QAT

Status in linux-azure package in Ubuntu:
  New

Bug description:
  The QAT is a PCIe device which supports SR-IOV.  It is used to
  accelerate the crypto and compression operations by hardware.

  There is additional info for this hardware on the Intel website: 
  
https://www.intel.com/content/www/us/en/products/docs/network-io/ethernet/10-25-40-gigabit-adapters/quickassist-adapter-for-servers.html
  https://01.org/intel-quick-assist-technology

  On Ubuntu, when the QAT device is enabled with SR-IOV, one device will have 
16 VFs.
  When we assign the VDEV to a Linux VM, sometimes it will fail and sometimes it 
will not.

  
  This issue was debugged and is resolved by applying three patches and 
removing some buggy code that was added as SAUCE.  The following three patches 
are needed:

  f73f8a504e27 PCI: hv: Use bytes 4 and 5 from instance ID as the PCI domain 
numbers
  be700103efd1 PCI: hv: Detect and fix Hyper-V PCI domain number collision
  6ae91579061c PCI: hv: Add __aligned(8) to struct retarget_msi_interrupt

  
  Also, revert the patch Revert "PCI: hv: Make sure the bus domain is really 
unique", i.e. the line "if (list_empty(&hbus->children)) hbus->sysdata.domain = 
desc->ser" in new_pcichild_device() should be completely removed.

  The patch “UBUNTU: SAUCE: pci-hyperv: Use only 16 bit integer for PCI
  domai” is also pointless now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1867220/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867220] Re: Assignment of VDEV Somtimes Fails using Intel QAT

2020-03-13 Thread Dexuan Cui
BTW, the bug also applies to hwe-4.15.0-91.92_16.04.1 and Ubuntu-
hwe-5.0.0-37.40_18.04.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1867220

Title:
  Assignment of VDEV Somtimes Fails using Intel QAT

Status in linux-azure package in Ubuntu:
  New

Bug description:
  The QAT is a PCIe device which supports SR-IOV.  It is used to
  accelerate the crypto and compression operations by hardware.

  There is additional info for this hardware on the Intel website: 
  
https://www.intel.com/content/www/us/en/products/docs/network-io/ethernet/10-25-40-gigabit-adapters/quickassist-adapter-for-servers.html
  https://01.org/intel-quick-assist-technology

  On Ubuntu, when the QAT device is enabled with SR-IOV, one device will have 
16 VFs.
  When we assign the VDEV to a Linux VM, sometimes it will fail and sometimes it 
will not.

  
  This issue was debugged and is resolved by applying three patches and 
removing some buggy code that was added as SAUCE.  The following three patches 
are needed:

  f73f8a504e27 PCI: hv: Use bytes 4 and 5 from instance ID as the PCI domain 
numbers
  be700103efd1 PCI: hv: Detect and fix Hyper-V PCI domain number collision
  6ae91579061c PCI: hv: Add __aligned(8) to struct retarget_msi_interrupt

  
  Also, revert the patch Revert "PCI: hv: Make sure the bus domain is really 
unique", i.e. the line "if (list_empty(&hbus->children)) hbus->sysdata.domain = 
desc->ser" in new_pcichild_device() should be completely removed.

  The patch “UBUNTU: SAUCE: pci-hyperv: Use only 16 bit integer for PCI
  domai” is also pointless now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1867220/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867220] Re: Assignment of VDEV Somtimes Fails using Intel QAT

2020-03-13 Thread Dexuan Cui
The bug applies to both linux-azure-5.0.0-1032 and linux-azure
4.15.0-1074.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1867220

Title:
  Assignment of VDEV Somtimes Fails using Intel QAT

Status in linux-azure package in Ubuntu:
  New

Bug description:
  The QAT is a PCIe device which supports SR-IOV.  It is used to
  accelerate the crypto and compression operations by hardware.

  There is additional info for this hardware on the Intel website: 
  
https://www.intel.com/content/www/us/en/products/docs/network-io/ethernet/10-25-40-gigabit-adapters/quickassist-adapter-for-servers.html
  https://01.org/intel-quick-assist-technology

  On Ubuntu, when the QAT device is enabled with SR-IOV, one device will have 
16 VFs.
  When we assign the VDEV to a Linux VM, sometimes it will fail and sometimes it 
will not.

  
  This issue was debugged and is resolved by applying three patches and 
removing some buggy code that was added as SAUCE.  The following three patches 
are needed:

  f73f8a504e27 PCI: hv: Use bytes 4 and 5 from instance ID as the PCI domain 
numbers
  be700103efd1 PCI: hv: Detect and fix Hyper-V PCI domain number collision
  6ae91579061c PCI: hv: Add __aligned(8) to struct retarget_msi_interrupt

  
  Also, revert the patch Revert "PCI: hv: Make sure the bus domain is really 
unique", i.e. the line "if (list_empty(&hbus->children)) hbus->sysdata.domain = 
desc->ser" in new_pcichild_device() should be completely removed.

  The patch “UBUNTU: SAUCE: pci-hyperv: Use only 16 bit integer for PCI
  domai” is also pointless now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1867220/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2019-10-24 Thread Dexuan Cui
So let me summarize your findings on your host (I suppose your VMs use
the same config for the number of vCPUs and the memory size, and that you
only tested Hyper-V Generation 2 VMs or confirmed that Gen-1 vs. Gen-2
makes no difference):

("fast" means you can see the GUI desktop or the text terminal prompt in
about 1~2 seconds, and "slow" means you need a much longer time, e.g. 1
minute (?))

fresh Server 19.10 ==> fast
fresh Server 19.10 + the ubuntu-desktop package ==> slow
fresh Desktop 19.10 ==> slow
fresh Desktop 19.04 ==> fast
fresh Desktop 19.04 upgraded to 19.10 ==> slow

So it looks like a change related to Xorg in 19.10 causes the slowness.

However, I cannot reproduce the issue, because both my fresh 19.10 and
19.04 VMs boot up in 20+ seconds, and I never see a boot-up time of 1~2
seconds.

Hi M, can you please check this case:

fresh Desktop 19.04 upgraded to 19.10 ==> slow

What if you boot the VM with the 19.04 kernel + 19.10's userspace (including 
Xorg)?
If it's also slow, then we have more confidence that the 19.10 Xorg has an 
issue.
If it's fast, then it's more likely that the interaction between the 19.10 
Xorg and the 19.10 kernel is causing the issue.

Can you also please try a "good" 19.04 VM with (ONLY) the kernel upgraded
to 19.10's?
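
(A minimal sketch of that kernel-swap test, assuming the pre-upgrade 19.04
kernel is still installed -- usually the case after a release upgrade; the
version number below is only an example, use whatever dpkg reports:)

dpkg -l 'linux-image-*' | grep '^ii'     # which kernels are still installed?
# If a 19.04 (5.0.x) kernel is listed, reboot and pick it under
# "Advanced options for Ubuntu" in the GRUB menu; otherwise re-install it, e.g.:
sudo apt-get install linux-image-5.0.0-32-generic
# For the reverse test (19.04 userspace + 19.10 kernel), installing the eoan
# 5.3 kernel package on the 19.04 VM would be the equivalent step.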

In the slow cases, can you check the log files (/var/log/Xorg*,
/var/log/syslog*) and see if there is any obvious error/warning?
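
For example, something along these lines should surface the most likely
suspects in those logs (a rough sketch; on 19.10 the Xorg log may live
under ~/.local/share/xorg/ instead of /var/log):

grep -nE '\((EE|WW)\)' /var/log/Xorg.0.log ~/.local/share/xorg/Xorg.*.log 2>/dev/null
grep -iE 'error|warn|fail|timed out' /var/log/syslog | tail -n 100
journalctl -b -p warning --no-pager | tail -n 100    # same idea via the journal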

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in linux package in Ubuntu:
  Incomplete
Status in xorg-server package in Ubuntu:
  New

Bug description:
  After the upgrade, the system shows graphic artifacts for a moment and then
  a text cursor for about a minute (it looks like it hanged), and then it starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2019-10-24 Thread Dexuan Cui
Can Ubuntu devs please try to repro the issue? I cannot repro it. :-(

Hi M, I assume you can also repro the issue with a VM created from
scratch from the server .iso (see comment #28) with a minimal
installation? If yes, can you please share the vhdx file? If you
configure the disk size to 15GB and use xfs (rather than ext4) in the
installation process, the generated vhdx file should be 1.5GB or so
(IIRC), so I guess there might be a way for you to share the file
somewhere for me to download? Please also use fewer CPUs (e.g. 2) and
less memory (e.g. 2GB) for the VM, if this doesn't prevent you from
reproducing the issue.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in linux package in Ubuntu:
  Incomplete
Status in xorg-server package in Ubuntu:
  New

Bug description:
  After the upgrade, the system shows graphic artifacts for a moment and then
  a text cursor for about a minute (it looks like it hanged), and then it starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau:

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2019-10-20 Thread Dexuan Cui
The typical boot-up time of my Ubuntu VM on Hyper-V is 20~30 seconds for
a Desktop version of Ubuntu, and 10~20 seconds for a Server version. I
tried Ubuntu 19.04 just now and it also took 20+ seconds.

I have never achieved a boot-up time of 2 seconds.

I do know Ubuntu can boot up in 2~3 seconds in WSL (Windows
Subsystem for Linux), though I haven't tried it myself.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in linux package in Ubuntu:
  Incomplete
Status in xorg-server package in Ubuntu:
  New

Bug description:
  After the upgrade, the system shows graphic artifacts for a moment and then
  a text cursor for about a minute (it looks like it hanged), and then it starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1848534/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launc

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2019-10-20 Thread Dexuan Cui
Hi M, since I cannot reproduce the delay issue, I don't know what I can
do now. :-(

Do you think it's related to Xorg?

Can you install a new VM from scratch from the server .iso
(http://releases.ubuntu.com/19.10/ubuntu-19.10-live-server-amd64.iso)
and see if you can reproduce the same issue?

The server .iso doesn't install Xorg, or a lot of the other packages used
in a Desktop environment. I hope you cannot repro the issue with it;
then we'll have a good starting point.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in linux package in Ubuntu:
  Incomplete
Status in xorg-server package in Ubuntu:
  New

Bug description:
  After the upgrade, the system shows graphic artifacts for a moment and then
  a text cursor for about a minute (it looks like it hanged), and then it starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.0.1-1ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20190815-1
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.16-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/18485

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2019-10-19 Thread Dexuan Cui
I'm not sure what exact issues you're reporting.

Your VM takes too much time to boot up? How long? "systemd-analyze
blame" should collect the info for your VM.
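
(Concretely, a short sketch using the standard systemd tools that ship
with 19.10 would be:)

systemd-analyze                     # firmware + loader + kernel + userspace totals
systemd-analyze blame | head -n 15  # slowest units first
systemd-analyze critical-chain      # which units gated graphical.target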

Your VM's screen is somehow messed up temporarily during the boot-up
process? Or the boot-up process is stuck on the "text cursor" screen
for a long period of time (about 1 minute?) and you'd like to figure out
what's happening during that period of time? But since it looks like your
VM is able to boot up in 2 minutes or so (?), it seems there is no fatal
issue?

You're saying you can reproduce the issue with a fresh VM created from
the .iso file (ubuntu-19.10-desktop-amd64.iso)?

It would be helpful if you can share your output of the same commands I
ran.

A video of your VM's boot-up process would be helpful, if that's not
difficult. I don't know how you would share the video. :-)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in linux package in Ubuntu:
  Incomplete
Status in xorg-server package in Ubuntu:
  New

Bug description:
  After the upgrade, the system shows graphic artifacts for a moment and then
  a text cursor for about a minute (it looks like it hanged), and then it starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-18-generic N/A
   linux-backports-modules-5.3.0-18-generic  N/A
   linux-firmware1.183
  RfKill:
   
  Tags:  eoan ubuntu
  Uname: Linux 5.3.0-18-generic x86_64
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 01/30/2019
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v4.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v4.0
  dmi.chassis.asset.tag: 2831-3616-6111-5725-4803-1162-28
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v4.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev4.0:bd01/30/2019:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev4.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev4.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev4.0:
  dmi.product.family: Virtual Machine
  dmi.product.name: Virtual Machine
  dmi.product.sku: None
  dmi.product.version: Hyper-V UEFI Release v4.0
  dmi.sys.vendor: Microsoft Corporation
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.99-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 19.2.1-1ubuntu1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:1.20.5+git20191008-0ubuntu1
  version.xserver-xorg-input-evdev: xs

[Kernel-packages] [Bug 1848534] Re: [Microsoft Hyper-V guest] System shows graphic artifacts for a moment, then text cursor for about minute and then starts

2019-10-19 Thread Dexuan Cui
Hi M, I can not reproduce the issue: just now I downloaded
http://dt0cinyuc0rrg.cloudfront.net/ubuntu-19.10-desktop-amd64.iso and
created a Generation-2 VM on Hyper-V with the .iso file. The VM boots up
fast: it boots up to the Xorg desktop in 28 seconds with 1 CPU and 2GB
memory, and in 21 seconds with 8 CPUs and 8GB memory.

I don't see anything unusual.

I believe you're also using a Generation-2 VM, since your "lspci" returns
nothing.
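
(If in doubt, this can be confirmed from inside the guest -- a Gen-2 VM
boots via UEFI, so a quick check is:)

[ -d /sys/firmware/efi ] && echo "UEFI firmware => Generation-2 VM" || echo "legacy BIOS => Generation-1 VM"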

FYI: I got the below in my VM:

decui@decui-u1910-gen2:~$ uname -a
Linux decui-u1910-gen2 5.0.0-32-generic #34-Ubuntu SMP Wed Oct 2 02:06:48 UTC 
2019 x86_64 x86_64 x86_64 GNU/Linux

decui@decui-u1910-gen2:~$ dmesg | grep "Hyper-V Host Build"
[0.00] Hyper-V Host Build:18928-10.0-1-0.1044

decui@decui-u1910-gen2:~$ systemd-analyze
Startup finished in 303ms (firmware) + 663ms (loader) + 1.087s (kernel) + 
35.041s (userspace) = 37.096s
graphical.target reached after 34.963s in userspace

decui@decui-u1910-gen2:~$ systemd-analyze blame
 31.530s plymouth-quit-wait.service
 10.598s gdm.service
  1.817s dev-sda2.device
   944ms networkd-dispatcher.service
   866ms NetworkManager-wait-online.service
  .

decui@decui-u1910-gen2:~$  systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

graphical.target @34.963s
└─multi-user.target @34.963s
  └─kerneloops.service @4.213s +34ms
└─network-online.target @4.170s
  └─NetworkManager-wait-online.service @3.301s +866ms
└─NetworkManager.service @2.983s +317ms
  └─dbus.service @2.912s
└─basic.target @2.765s
  └─sockets.target @2.765s
└─snapd.socket @2.754s +10ms
  └─sysinit.target @2.731s
└─apparmor.service @2.199s +530ms
  └─local-fs.target @2.173s
└─run-user-1000-gvfs.mount @17.395s
  └─run-user-1000.mount @14.119s
└─local-fs-pre.target @456ms
  └─keyboard-setup.service @223ms +232ms
└─systemd-journald.socket @218ms
  └─-.mount @206ms
└─system.slice @206ms
  └─-.slice @206ms

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848534

Title:
  [Microsoft Hyper-V guest] System shows graphic artifacts for a moment,
  then text cursor for about minute and then starts

Status in linux package in Ubuntu:
  Incomplete
Status in xorg-server package in Ubuntu:
  New

Bug description:
  After the upgrade, the system shows graphic artifacts for a moment and then
  a text cursor for about a minute (it looks like it hanged), and then it starts.

  In 19.04 startup required 1 or 2 seconds.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: ubuntu-release-upgrader-core 1:19.10.15
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CrashDB: ubuntu
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 17:42:27 2019
  InstallationDate: Installed on 2019-07-04 (104 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  PackageArchitecture: all
  SourcePackage: ubuntu-release-upgrader
  Symptom: dist-upgrade
  UpgradeStatus: Upgraded to eoan on 2019-10-17 (0 days ago)
  VarLogDistupgradeLspcitxt:
   
  VarLogDistupgradeXorgFixuplog:
   INFO:root:/usr/bin/do-release-upgrade running
   INFO:root:No xorg.conf, exiting
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
  CompositorRunning: None
  CurrentDesktop: ubuntu:GNOME
  DistUpgraded: 2019-10-17 17:03:47,139 DEBUG Running PostInstallScript: 
'./xorg_fix_proprietary.py'
  DistroCodename: eoan
  DistroRelease: Ubuntu 19.10
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes
  GraphicsCard:
   
  InstallationDate: Installed on 2019-07-04 (105 days ago)
  InstallationMedia: Ubuntu 19.04 "Disco Dingo" - Release amd64 (20190416)
  IwConfig:
   eth0  no wireless extensions.
   
   lono wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  MachineType: Microsoft Corporation Virtual Machine
  Package: xorg-server (not installed)
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-18-generic 
root=UUID=17409d40-25e9-4051-9fd9-758e2a02ebc3 ro quiet splash 
video=hyperv_fb:1900x900 elevator=noop vt.handoff=7
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  RelatedPackage

[Kernel-packages] [Bug 1822133] Re: Azure Instance never recovered during series of instance reboots.

2019-09-05 Thread Dexuan Cui
@Vald: This is from your attachment:

[21965.367843] kernel BUG at 
/build/linux-azure-njdnVX/linux-azure-4.15.0/net/ipv4/ip_output.c:636!
[21965.377590] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.15.0-1056-azure 
#61-Ubuntu
[21965.435777] RIP: 0010:ip_do_fragment+0x571/0x860
[21965.435777]  ip_fragment.constprop.47+0x43/0x90
[21965.435777]  ip_finish_output+0xf6/0x270
[21965.435777]  ip_output+0x75/0xf0
[21965.435777]  ip_forward_finish+0x51/0x80
[21965.435777]  ip_forward+0x38a/0x480
[21965.435777]  ip_rcv_finish+0x122/0x410
[21965.435777]  ip_rcv+0x292/0x360
[21965.435777]  __netif_receive_skb_core+0x809/0xbc0
[21965.435777]  __netif_receive_skb+0x18/0x60
[21965.435777]  netif_receive_skb_internal+0x37/0xe0
[21965.435777]  napi_gro_receive+0xd0/0xf0
[21965.435777]  netvsc_recv_callback+0x16d/0x220 [hv_netvsc]
[21965.435777]  rndis_filter_receive+0x23b/0x580 [hv_netvsc]
[21965.435777]  netvsc_poll+0x17e/0x630 [hv_netvsc]
[21965.435777]  net_rx_action+0x265/0x3b0
[21965.435777]  __do_softirq+0xf5/0x2a8
[21965.435777]  irq_exit+0x106/0x110
[21965.435777]  hyperv_vector_handler+0x63/0x76

So you're running a 4.15.0-1056-azure kernel, which panics at RIP:
0010:ip_do_fragment+0x571/0x860 (net/ipv4/ip_output.c:636).

This is a known issue in this version of the kernel and we're working on
a new version which will fix the issue:

The identified upstream fix is:

commit 5d407b071dc369c26a38398326ee2be53651cfe4
Author: Taehee Yoo 
Date: Mon Sep 10 02:47:05 2018 +0900
Subject: ip: frags: fix crash in ip_do_fragment()
( 
https://github.com/torvalds/linux/commit/5d407b071dc369c26a38398326ee2be53651cfe4
 )

Meanwhile, if possible, please downgrade the kernel from -1056 to -1052,
which per support does not crash.
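
A rough sketch of that downgrade (the package names follow the usual
linux-azure naming, but please check them against what apt actually offers):

uname -r                             # confirm the affected 4.15.0-1056-azure is running
apt-cache policy linux-image-azure   # see which versions the archive offers
sudo apt-get install linux-image-4.15.0-1052-azure linux-modules-4.15.0-1052-azure
# The newest installed kernel stays the GRUB default, so select the -1052
# entry (or adjust GRUB_DEFAULT) before rebooting, then verify with "uname -r".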

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1822133

Title:
  Azure Instance never recovered during series of instance reboots.

Status in linux-azure package in Ubuntu:
  Incomplete

Bug description:
  Description: During SRU Testing of various Azure Instances, there will
  be some cases where the instance will not respond following a system
  reboot.  SRU Testing only restarts a giving instance once, after it
  preps all of the necessary files to-be-tested.

  Series: Disco
  Instance Size: Basic_A3
  Region: (Default) US-WEST-2
  Kernel Version: 4.18.0-1013-azure #13-Ubuntu SMP Thu Feb 28 22:54:16 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  I initiated a series of tests which rebooted Azure Cloud instances 50
  times. During the 49th Reboot, an Instance failed to return from a
  reboot.. Upon grabbing the console output the following was seen
  scrolling endlessly. I have seen this failure in cases where the
  instance only restarted a handful of times >5

  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus

  In another test attempt I saw the following failure:

  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes

  
  Both of these failures broke networking, Both of these failures were seen at 
least twice to three times, thus may explain why in some cases we never recover 
from an instance reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1822133/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1837661] Re: [linux-azure] CRI-RDOS | Live migration only takes 10 seconds, but the VM was unavailable for 2 hours

2019-08-01 Thread Dexuan Cui
I guess you might have already included this patch:
15becc2b56c6 ("PCI: hv: Add hv_pci_remove_slots() when we unload the 
driver")

Unfortunately it turns out to be buggy, and just now I had to post a further 
patch for it:
[PATCH] PCI: hv: Fix panic by calling hv_pci_remove_slots() earlier
( https://lkml.org/lkml/2019/8/1/1173)

Please consider including this further patch as well.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1837661

Title:
  [linux-azure] CRI-RDOS | Live migration only takes 10 seconds, but the
  VM was unavailable for 2 hours

Status in linux-azure package in Ubuntu:
  New
Status in linux-azure source package in Xenial:
  New
Status in linux-azure source package in Disco:
  In Progress

Bug description:
  Can you please pick up the following 4 patches? They resolve this live
  migration that was reported by a mutual customer.

  PCI: hv: Add pci_destroy_slot() in pci_devices_present_work(), if necessary
  PCI: hv: Add hv_pci_remove_slots() when we unload the driver
  PCI: hv: Fix a memory leak in hv_eject_device_work()
  PCI: hv: Fix a use-after-free bug in hv_eject_device_work()pci/hv

  
  This is a known issue in linux pci-hyperv driver and is fixed by these 
patches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1837661/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1826416] Re: [Xenial] Customer can not SSH to Linux VM due to "VSC State Unhealthy"

2019-06-18 Thread Dexuan Cui
I can confirm the fix is included in the kernel "4.4.0-152.179":
https://kernel.ubuntu.com/git/ubuntu/ubuntu-
xenial.git/tree/include/linux/hyperv.h?h=Ubuntu-4.4.0-152.179

I installed the kernel, did some network tests, and it worked fine for
me:

#apt-get install linux-image-4.4.0-152-generic
#reboot

# uname -a
Linux localhost 4.4.0-152-generic #179-Ubuntu SMP Thu Jun 13 10:05:07 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux
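
(The "network tests" above were basic sanity checks, roughly along these
lines; the peer address and iperf3 availability are assumptions:)

ping -c 10 8.8.8.8                                   # outbound connectivity
curl -sSI https://archive.ubuntu.com | head -n 1     # DNS + TCP through the synthetic NIC
iperf3 -c <peer-ip> -t 30                            # sustained throughput, if a peer is available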



** Tags removed: verification-needed-xenial
** Tags added: verification-done-xenial

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1826416

Title:
  [Xenial] Customer can not SSH to Linux VM due to "VSC State Unhealthy"

Status in linux package in Ubuntu:
  Incomplete
Status in linux source package in Xenial:
  Fix Committed

Bug description:
  [Impact]

  A mutual customer is reporting ssh is not working on a Xenial based
  VM.  This VM is running the 4.4 based Xenial kernel and not a custom
  linux-azure kernel.

  After an investigation this is a known old signaling issue.  The 4.4
  based Xenial kernels require the following patch:

  vmbus: fix missing signaling in hv_signal_on_read()
  
(https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=13c5e97701091f9b02ded0c68809f8a6b08c747a).

  The patch is not in the upstream stable 4.4 tree, because it’s not
  needed there.

  Compared to the upstream 4.4 tree, Ubuntu 4.4.0-124-generic integrated
  more hv patches from the mainline kernel, e.g. Drivers: hv: vmbus:
  finally fix hv_need_to_signal_on_read(), so it must pick up the above
  patch.

  We checked the latest Ubuntu 4.4 kernel
  (https://kernel.ubuntu.com/git/ubuntu/ubuntu-
  xenial.git/tree/?h=Ubuntu-4.4.0-146.172) and the patch is also absent
  there.

  The patch (13c5e97701091f9b02ded0c68809f8a6b08c747a) can be cleanly
  cherry-picked into Ubuntu-4.4.0-146.172.

  [Test Case]

  When the issue happens, there is no error message in dmesg or syslog,
  and it's just the host side NetVSP driver stops reading from the
  guest-to-host ring, and the guest network stops working. So we don't
  really have any logs to provide here.

  [Regression Potential]

  Low risk since that's a simple patch from stable upstream touching
  only Hyper-V specific code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1826416/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1822133] Re: Azure Instance never recovered during series of instance reboots.

2019-06-17 Thread Dexuan Cui
@rnsc:  Can you please share the VM's full serial log, which can be
obtained from Azure portal's Boot Diagnostics -> Serial log?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1822133

Title:
  Azure Instance never recovered during series of instance reboots.

Status in linux-azure package in Ubuntu:
  In Progress

Bug description:
  Description: During SRU Testing of various Azure Instances, there will
  be some cases where the instance will not respond following a system
  reboot.  SRU Testing only restarts a giving instance once, after it
  preps all of the necessary files to-be-tested.

  Series: Disco
  Instance Size: Basic_A3
  Region: (Default) US-WEST-2
  Kernel Version: 4.18.0-1013-azure #13-Ubuntu SMP Thu Feb 28 22:54:16 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  I initiated a series of tests which rebooted Azure Cloud instances 50
  times. During the 49th Reboot, an Instance failed to return from a
  reboot.. Upon grabbing the console output the following was seen
  scrolling endlessly. I have seen this failure in cases where the
  instance only restarted a handful of times >5

  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus

  In another test attempt I saw the following failure:

  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes

  
  Both of these failures broke networking, Both of these failures were seen at 
least twice to three times, thus may explain why in some cases we never recover 
from an instance reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1822133/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1822133] Re: Azure Instance never recovered during series of instance reboots.

2019-05-30 Thread Dexuan Cui
I tried to reproduce the bug but couldn't.

My Ubuntu 18.04 VM (in West US 2, the VM size is: Basic A3 (4 vcpus, 7
GiB memory)) is still running fine, after I rebooted the VM 100 times
with the below /etc/rc.local script:

#!/bin/bash
# Append a timestamp for every boot, then reboot again until 100 boots
# have been logged.
date >> /root/reboot.log
NUM=`wc -l /root/reboot.log | cut -d' ' -f1`
[ $NUM -le 100 ] && reboot

In Kirk's log, it looks like the VM failed to obtain an IP via DHCP (or
somehow the VM's network went DOWN?), and there are a lot of lines like:

WARNING ExtHandler CGroup walinuxagent.service: Crossed the Memory
Threshold. Current Value: 627945472 bytes, Threshold: 512 megabytes.

I don't know what that line means.

I don't see any issue in the Linux kernel or drivers. I guess the issue may
be in the waagent/cloud-init daemons, or somewhere outside the VM.

Since I cannot reproduce the issue, and it looks like nobody can reproduce
it now (Kirk's VM appears to work fine now, after the VM was Stopped and
Started again), I cannot debug it further.

If somebody manages to repro the issue again, please check if you're
still able to log in to the VM via the Azure serial console; if yes, please
check whether the VM has an IP address with "ifconfig -a". If the VM has no
IP, we'll have to check the syslog (and/or the network manager's log?) to
find out what goes wrong; if the VM has an IP but is unable to
communicate with the outside world, something may be wrong with the Linux
network device driver.
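
In shell form, those checks would look roughly like this (all runnable from
the Azure serial console; which network daemon is in use on the image is an
assumption, so both are queried):

ip addr show          # or "ifconfig -a": does eth0 have an IPv4 address at all?
ip route              # an empty table would match the "/proc/net/route contains no routes" errors
grep -iE 'dhcp|eth0|netvsc|link' /var/log/syslog | tail -n 100
journalctl -u systemd-networkd -u NetworkManager --no-pager | tail -n 100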

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1822133

Title:
  Azure Instance never recovered during series of instance reboots.

Status in linux-azure package in Ubuntu:
  In Progress

Bug description:
  Description: During SRU Testing of various Azure Instances, there will
  be some cases where the instance will not respond following a system
  reboot.  SRU Testing only restarts a giving instance once, after it
  preps all of the necessary files to-be-tested.

  Series: Disco
  Instance Size: Basic_A3
  Region: (Default) US-WEST-2
  Kernel Version: 4.18.0-1013-azure #13-Ubuntu SMP Thu Feb 28 22:54:16 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  I initiated a series of tests which rebooted Azure Cloud instances 50
  times. During the 49th Reboot, an Instance failed to return from a
  reboot.. Upon grabbing the console output the following was seen
  scrolling endlessly. I have seen this failure in cases where the
  instance only restarted a handful of times >5

  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus

  In another test attempt I saw the following failure:

  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes

  
  Both of these failures broke networking, Both of these failures were seen at 
least twice to three times, thus may explain why in some cases we never recover 
from an instance reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1822133/+subscriptions



[Kernel-packages] [Bug 1822133] Re: Azure Instance never recovered during series of instance reboots.

2019-05-29 Thread Dexuan Cui
@Kirk: I suppose you can get your VM back by force-restarting it via the
Azure Portal (or the Azure command line)?
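
For example, via the Azure CLI (resource group and VM name are placeholders;
the --force option, if available in your CLI version, redeploys an unresponsive VM):

az vm restart --resource-group my-rg --name my-vm --force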

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1822133

Title:
  Azure Instance never recovered during series of instance reboots.

Status in linux-azure package in Ubuntu:
  In Progress

Bug description:
  Description: During SRU Testing of various Azure Instances, there will
  be some cases where the instance will not respond following a system
  reboot.  SRU Testing only restarts a giving instance once, after it
  preps all of the necessary files to-be-tested.

  Series: Disco
  Instance Size: Basic_A3
  Region: (Default) US-WEST-2
  Kernel Version: 4.18.0-1013-azure #13-Ubuntu SMP Thu Feb 28 22:54:16 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  I initiated a series of tests which rebooted Azure Cloud instances 50
  times. During the 49th Reboot, an Instance failed to return from a
  reboot.. Upon grabbing the console output the following was seen
  scrolling endlessly. I have seen this failure in cases where the
  instance only restarted a handful of times >5

  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus

  In another test attempt I saw the following failure:

  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes

  
  Both of these failures broke networking, Both of these failures were seen at 
least twice to three times, thus may explain why in some cases we never recover 
from an instance reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1822133/+subscriptions



[Kernel-packages] [Bug 1822133] Re: Azure Instance never recovered during series of instance reboots.

2019-05-29 Thread Dexuan Cui
@Kirk: Can you please share the VM's serial log, which can be obtained
from Azure portal's Boot Diagnostics -> Serial log?
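
For reference, the same log can typically also be fetched from the Azure CLI
(names below are placeholders):

az vm boot-diagnostics get-boot-log --resource-group my-rg --name my-vm > serial.log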

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1822133

Title:
  Azure Instance never recovered during series of instance reboots.

Status in linux-azure package in Ubuntu:
  In Progress

Bug description:
  Description: During SRU Testing of various Azure Instances, there will
  be some cases where the instance will not respond following a system
  reboot.  SRU Testing only restarts a giving instance once, after it
  preps all of the necessary files to-be-tested.

  Series: Disco
  Instance Size: Basic_A3
  Region: (Default) US-WEST-2
  Kernel Version: 4.18.0-1013-azure #13-Ubuntu SMP Thu Feb 28 22:54:16 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  I initiated a series of tests which rebooted Azure Cloud instances 50
  times. During the 49th Reboot, an Instance failed to return from a
  reboot.. Upon grabbing the console output the following was seen
  scrolling endlessly. I have seen this failure in cases where the
  instance only restarted a handful of times >5

  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus

  In another test attempt I saw the following failure:

  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes

  
  Both of these failures broke networking, Both of these failures were seen at 
least twice to three times, thus may explain why in some cases we never recover 
from an instance reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1822133/+subscriptions



[Kernel-packages] [Bug 1822133] Re: Azure Instance never recovered during series of instance reboots.

2019-05-29 Thread Dexuan Cui
It's good to see that the issue does not reproduce with the 5.0.0-1007.7 kernel.

@sfeole: The line "[ 84.247704]hyperv_fb: unable to send packet via
vmbus" usually means the VM has panicked. Do you happen to still have
the full serial log containing this error line? It would be good to
understand this error on the 4.18.0-1013-azure kernel.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1822133

Title:
  Azure Instance never recovered during series of instance reboots.

Status in linux-azure package in Ubuntu:
  In Progress

Bug description:
  Description: During SRU Testing of various Azure Instances, there will
  be some cases where the instance will not respond following a system
  reboot.  SRU Testing only restarts a giving instance once, after it
  preps all of the necessary files to-be-tested.

  Series: Disco
  Instance Size: Basic_A3
  Region: (Default) US-WEST-2
  Kernel Version: 4.18.0-1013-azure #13-Ubuntu SMP Thu Feb 28 22:54:16 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  I initiated a series of tests which rebooted Azure Cloud instances 50
  times. During the 49th Reboot, an Instance failed to return from a
  reboot.. Upon grabbing the console output the following was seen
  scrolling endlessly. I have seen this failure in cases where the
  instance only restarted a handful of times >5

  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus
  [84.247704]hyperv_fb: unable to send packet via vmbus

  In another test attempt I saw the following failure:

  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes
  ERROR ExtHandler /proc/net/route contains no routes

  
  Both of these failures broke networking, Both of these failures were seen at 
least twice to three times, thus may explain why in some cases we never recover 
from an instance reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1822133/+subscriptions



[Kernel-packages] [Bug 1826416] Re: [Xenial] Customer can not SSH to Linux VM due to "VSC State Unhealthy"

2019-05-07 Thread Dexuan Cui
When the issue happens, there is no error message in dmesg or syslog;
the host-side NetVSP driver simply stops reading from the guest-to-host
ring buffer, and the guest network stops working. So we don't really
have any logs to provide here.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1826416

Title:
  [Xenial] Customer can not SSH to Linux VM due to "VSC State Unhealthy"

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  A mutual customer is reporting ssh is not working on a Xenial based
  VM.  This VM is running the 4.4 based Xenial kernel and not a custom
  linux-azure kernel.

  After an investigation this is a known old signaling issue.  The 4.4
  based Xenial kernels require the following patch:

  vmbus: fix missing signaling in hv_signal_on_read()
  
(https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=13c5e97701091f9b02ded0c68809f8a6b08c747a).

  The patch is not in the upstream stable 4.4 tree, because it’s not
  needed there.

  Compared to the upstream 4.4 tree, Ubuntu 4.4.0-124-generic integrated
  more hv patches from the mainline kernel, e.g. Drivers: hv: vmbus:
  finally fix hv_need_to_signal_on_read(), so it must pick up the above
  patch.

  We checked the latest Ubuntu 4.4 kernel
  (https://kernel.ubuntu.com/git/ubuntu/ubuntu-
  xenial.git/tree/?h=Ubuntu-4.4.0-146.172) and the patch is also absent
  there.

  The patch (13c5e97701091f9b02ded0c68809f8a6b08c747a) can be cleanly
  cherry-picked into Ubuntu-4.4.0-146.172.
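
  An illustrative sequence for that cherry-pick (clone URL and tag taken from
  the links above; the remote name is arbitrary):

  git clone https://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git
  cd ubuntu-xenial
  git checkout Ubuntu-4.4.0-146.172
  git remote add stable https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
  git fetch stable
  git cherry-pick 13c5e97701091f9b02ded0c68809f8a6b08c747a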

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1826416/+subscriptions



[Kernel-packages] [Bug 1805304] Re: [Hyper-V] Additional patches for Lv2 storage performance

2018-11-26 Thread Dexuan Cui
The link to "[PATCH] scsi: storvsc: Fix a race in sub-channel creation
that can cause panic" is

https://lkml.org/lkml/2018/11/26/159 
or
https://lore.kernel.org/patchwork/patch/1016903/

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1805304

Title:
  [Hyper-V] Additional patches for Lv2 storage performance

Status in linux-azure package in Ubuntu:
  New

Bug description:
  After analysis of the first 4.15 kernel for Lv2 performance, and while
  we are delayed getting to 4.18, we have identified and backported the
  following patches for the 4.15 linux-azure kernel:

  commit 1268ed0c474a5c8f165ef386f3310521b5e00e27
  Author: K. Y. Srinivasan 
  Date:   Tue Jul 3 16:01:55 2018 -0700
  x86/hyper-v: Fix the circular dependency in IPI enlightenment
  linux-next: 
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=1268ed0c474a5c8f165ef386f3310521b5e00e27

  commit 366f03b0cf90ef55f063d4a54cf62b0ac9b6da9d
  Author: K. Y. Srinivasan 
  Date:   Wed May 16 14:53:32 2018 -0700
  X86/Hyper-V: Enhanced IPI enlightenment
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=366f03b0cf90ef55f063d4a54cf62b0ac9b6da9d

  commit 68bb7bfb7985df2bd15c2dc975cb68b7a901488a
  Author: K. Y. Srinivasan 
  Date:   Wed May 16 14:53:31 2018 -0700
  X86/Hyper-V: Enable IPI enlightenments
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=68bb7bfb7985df2bd15c2dc975cb68b7a901488a

  commit 6b48cb5f8347bc0153ff1d7b075db92e6723ffdb
  Author: K. Y. Srinivasan 
  Date:   Wed May 16 14:53:30 2018 -0700
  X86/Hyper-V: Enlighten APIC access
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=6b48cb5f8347bc0153ff1d7b075db92e6723ffdb

  commit 68d1eb72ee99e26576913aa6824f7a703ca06b90
  Author: Vitaly Kuznetsov 
  Date:   Tue Mar 20 15:02:09 2018 +0100
  x86/hyper-v: define struct hv_enlightened_vmcs and clean field bits
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=68d1eb72ee99e26576913aa6824f7a703ca06b90

  commit a46d15cc1ae5af905afac2af4cc0c188c2eb59b0
  Author: Vitaly Kuznetsov 
  Date:   Tue Mar 20 15:02:08 2018 +0100
  x86/hyper-v: allocate and use Virtual Processor Assist Pages
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=a46d15cc1ae5af905afac2af4cc0c188c2eb59b0

  commit 415bd1cd3a42897f61a92cda0a9f9d7b04c28fb7
  Author: Vitaly Kuznetsov 
  Date:   Tue Mar 20 15:02:06 2018 +0100
  x86/hyper-v: move definitions from TLFS to hyperv-tlfs.h
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=415bd1cd3a42897f61a92cda0a9f9d7b04c28fb7

  commit 5a485803221777013944cbd1a7cd5c62efba3ffa
  Author: Vitaly Kuznetsov 
  Date:   Tue Mar 20 15:02:05 2018 +0100
  x86/hyper-v: move hyperv.h out of uapi
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=5a485803221777013944cbd1a7cd5c62efba3ffa

  commit e7c4e36c447daca2b7df49024f6bf230871cb155
  Author: Vitaly Kuznetsov 
  Date:   Wed Jan 24 14:23:34 2018 +0100
  x86/hyperv: Redirect reenlightment notifications on CPU offlining
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=e7c4e36c447daca2b7df49024f6bf230871cb155

  commit 93286261de1b46339aa27cd4c639b21778f6cade
  Author: Vitaly Kuznetsov 
  Date:   Wed Jan 24 14:23:33 2018 +0100
  x86/hyperv: Reenlightenment notifications support
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=93286261de1b46339aa27cd4c639b21778f6cade

  commit e2768eaa1ca4fbb7b778da5615cce3dd310352e6
  Author: Vitaly Kuznetsov 
  Date:   Wed Jan 24 14:23:32 2018 +0100
  x86/hyperv: Add a function to read both TSC and TSC page value 
simulateneously
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20181126&id=e2768eaa1ca4fbb7b778da5615cce3dd310352e6

  commit 4a5f3cde4d51c7afce859aed9d74d197751896d5
  Author: Michael Kelley 
  Date:   Fri Dec 22 11:19:02 2017 -0700
  Drivers: hv: vmbus: Remove x86-isms from arch independent drivers
  
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/drivers/hv?h=next-20181126&id=4a5f3cde4d51c7afce859aed9d74d197751896d5

  From: Dexuan Cui 

  We can concurrently try to open the same sub-channel from 2 paths:

  path #1: vmbus_onoffer() -> vmbus_process_offer() -> handle_sc_creation().
  path #2: storvsc_probe() -> storvsc_connect_to_vsp() -> storvsc_channel_init() ->
   handle_multichannel_storage() -> vmbus_are_subchannels_present() -> handle_sc_creation().

[Kernel-packages] [Bug 1792349] Re: Memory leaking when running kubernetes cronjobs

2018-11-01 Thread Dexuan Cui
More patches are required: https://lkml.org/lkml/2018/11/2/182
It looks like we'll have to wait for some time before the kernel stabilizes...

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1792349

Title:
  Memory leaking when running kubernetes cronjobs

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Bionic:
  Triaged
Status in linux source package in Cosmic:
  Triaged

Bug description:
  We are using Kubernetes V1.8.15 with docker 18.03.1-ce.
  We schedule 50 Kubernetes cronjobs to run every 5 minutes. Each cronjob will 
create a simple busybox container, echo hello world, then terminate.

  In the data attached to the bug I let this run for 1 hour, and in this
  time the Available memory had reduced from 31256704 kB to 30461224 kB
  - so a loss of 776 MB. From previous longer runs we observe the
  available memory continues to drop.

  There doesn't appear to be any processes left behind, or any growth in
  any other processes to explain where the memory has gone.

  echo 3 > /proc/sys/vm/drop_caches causes some of the memory to be
  returned, but the majority remains leaked, and the only way to free it
  appears to be to reboot the system.

  We are currently running Ubuntu 4.15.0-32.35-generic 4.15.18 and have
  previously observed similar issues on Ubuntu 16.04 with Kernel
  4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 and
  Debian 9.4 running 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3
  (2018-03-02)

  The leak was more severe on the Debian system, and investigations
  there showed leaks in pcpu_get_vm_areas and were related to memory
  cgroups. Running with Kernel 4.17 on debian showed a leak at a similar
  rate to what we now observe on Ubuntu 18. This leak causes us issues
  as we need to run the cronjobs regularly and want the systems to
  remain up for months.

  Kubernetes will create a new cgroup each time the cronjob runs, but
  these are removed when the job completes (which takes a few seconds).
  If I use systemd-cgtop I don't see any increase in cgroups over time -
  but if I monitor /proc/cgroups over time I can see num_cgroups for
  memory increases.

  For the duration of the test I collected slabinfo, meminfo,
  vmallocinfo & cgroups - which I will attach to the bug. Each file is
  suffixed with the number of seconds since the start.

  *.0 & *.600 were taken before the test was started. The test was
  stopped shortly after the *.4200 files were generated. I then left the
  system idle for 10 minutes. I then ran echo 3 >
  /proc/sys/vm/drop_caches after *.4800 was generated. This seemed to
  free ~240MB - but this still leaves ~500MB lost. I then left the
  system idle for a further 20 minutes, and MemoryAvailable didn't seem
  to be increasing significantly.
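
  For illustration, a minimal loop that captures the same counters described
  above (file names and the 600-second interval are arbitrary):

  while true; do
      ts=$(date +%s)
      grep MemAvailable /proc/meminfo    > "meminfo.$ts"    # available memory sample
      awk '$1 == "memory"' /proc/cgroups > "cgroups.$ts"    # num_cgroups for the memory controller
      sleep 600
  done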

  Note, the data attached is from running on kernel 4.18.7-041807-generic 
#201809090930 SMP Sun Sep 9 09:33:16 UTC 2018 (which I ran to verify the issue 
still exists in latest kernel) - however I was unable to run ubuntu-bug linux 
on this kernel as it complained about:
  *** Problem in linux-image-4.18.7-041807-generic

  The problem cannot be reported:

  This report is about a package that is not installed.

  So I switched back to 4.15.0-32.35-generic to raise the bug.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: linux-image-4.15.0-32-generic 4.15.0-32.35
  ProcVersionSignature: Ubuntu 4.15.0-32.35-generic 4.15.18
  Uname: Linux 4.15.0-32-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Sep 13 08:55 seq
   crw-rw 1 root audio 116, 33 Sep 13 08:55 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
  ApportVersion: 2.20.9-0ubuntu7.2
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 
'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  Date: Thu Sep 13 08:55:46 2018
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig': 'iwconfig'
  Lsusb:
   Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd 
   Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  MachineType: Xen HVM domU
  PciMultimedia:
   
  ProcEnviron:
   LANG=C.UTF-8
   SHELL=/bin/bash
   TERM=xterm
   PATH=(custom, no user)
  ProcFB:
   
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-32-generic 
root=UUID=6a84f0e4-8522-41cd-8ecb-d4a6fbecef8a ro earlyprintk
  RelatedPackageVersions:
   linux-restricted-modules-4.15.0-32-generic N/A
   linux-backports-modules-4.15.0-32-generic  N/A
   linux-firmware N/A
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill': 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  WifiSyslog:
   
  dmi.bios.date: 08/13/2018
  dmi.bios.vendor: Xen
  dmi.bios.version: 4

[Kernel-packages] [Bug 1777128] Re: [Hyper-V] patches for SR-IOV post-bionic GA

2018-09-06 Thread Dexuan Cui
I guess we can close the bug now?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1777128

Title:
  [Hyper-V] patches for SR-IOV post-bionic GA

Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Bionic:
  Fix Committed

Bug description:
  For 18.04 we're seeing some SR-IOV functionality issues.

  Please pick these patches for the next SRU.
  Some of them are in ubuntu master-next tree, but still wanting to track these 
for inclusion into 18.04 kernel.

  "PCI: hv: Fix 2 hang issues in hv_compose_msi_msg()" is the most
  critical one to be picked up, the rest is part of the series or
  related that should be added along.

  Full list:
  2018-03-28 Drivers: hv: vmbus: do not mark HV_PCIE as perf_device
  2018-03-16 PCI: hv: Only queue new work items in hv_pci_devices_present() if 
necessary
  2018-03-16 PCI: hv: Remove the bogus test in hv_eject_device_work()
  2018-03-16 PCI: hv: Fix a comment typo in _hv_pcifront_read_config()
  2018-03-16 PCI: hv: Fix 2 hang issues in hv_compose_msi_msg()
  2018-03-16 PCI: hv: Serialize the present and eject work items

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1777128/+subscriptions



[Kernel-packages] [Bug 1786313] Re: [Hyper-V] hv_netvsc: Fix napi reschedule while receive completion is busy

2018-08-28 Thread Dexuan Cui
Thanks, Marcelo!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1786313

Title:
  [Hyper-V] hv_netvsc: Fix napi reschedule while receive completion is
  busy

Status in linux-azure package in Ubuntu:
  New
Status in linux-azure source package in Bionic:
  Fix Committed

Bug description:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6b81b193b83e87da1ea13217d684b54fccf8ee8a

  hv_netvsc: Fix napi reschedule while receive completion is busy
  If out ring is full temporarily and receive completion cannot go out,
  we may still need to reschedule napi if certain conditions are met.
  Otherwise the napi poll might be stopped forever, and cause network
  disconnect.

  Fixes: 7426b1a51803 ("netvsc: optimize receive completions")

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1786313/+subscriptions



[Kernel-packages] [Bug 1786313] Re: [Hyper-V] hv_netvsc: Fix napi reschedule while receive completion is busy

2018-08-24 Thread Dexuan Cui
4.15.0-1022 (https://git.launchpad.net/~canonical-kernel/ubuntu/+source
/linux-azure/tree/drivers/net/hyperv/netvsc.c?h=master-next&id=Ubuntu-
azure-4.15.0-1022.22_16.04.1) does NOT have the fix
(6b81b193b83e87da1ea13217d684b54fccf8ee8a).

I'm not sure why the bug status was changed to Fix Committed on Aug 17.
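
For reference, one way to check is to search the packaged tree's history for
the patch subject, e.g. (assuming a local clone of the linux-azure tree linked
above):

git log --oneline Ubuntu-azure-4.15.0-1022.22_16.04.1 -- drivers/net/hyperv/netvsc.c \
    | grep -i 'napi reschedule' || echo 'fix not present'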

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/1786313

Title:
  [Hyper-V] hv_netvsc: Fix napi reschedule while receive completion is
  busy

Status in linux-azure package in Ubuntu:
  New
Status in linux-azure source package in Bionic:
  Fix Committed

Bug description:
  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6b81b193b83e87da1ea13217d684b54fccf8ee8a

  hv_netvsc: Fix napi reschedule while receive completion is busy
  If out ring is full temporarily and receive completion cannot go out,
  we may still need to reschedule napi if certain conditions are met.
  Otherwise the napi poll might be stopped forever, and cause network
  disconnect.

  Fixes: 7426b1a51803 ("netvsc: optimize receive completions")

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1786313/+subscriptions



[Kernel-packages] [Bug 1679898] Re: [Hyper-V] Ubuntu VM crash during Hyper-V backup or live migration after installing kernel 4.4.0-72

2017-10-19 Thread Dexuan Cui
People are working on this issue: e.g. it looks like this patch may work around it:
https://patchwork.kernel.org/patch/10012603/ (it would be great if somebody could
test the patch).

Long will send one more patch: 
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1517902.html

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1679898

Title:
  [Hyper-V] Ubuntu VM crash during Hyper-V backup or live migration
  after installing kernel 4.4.0-72

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Xenial:
  In Progress

Bug description:
  Description:Ubuntu 14.04.5 LTS
  Release:14.04

  Hi, after installing kernel 4.4.0-67 or later I cannot backup my Ubuntu VM's 
on Hyper-V.
  Microsoft Hyper-v 2012r2 Gen2 VMs

  See Attachment for what happens is immediately after backup starts I
  get an error.

  Eventually the kernel reports it has run out of memory and the machine
  just continuously scrolls errors message related to page allocation.

  When reseting the virtual machine no logs can be found of the problem.

  kernel 4.4.0-72-generic problem still here

  DistroRelease: Ubuntu 14.04
  InstallationDate: Installed on 2016-02-20 (409 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta amd64 
(20150805)
  Package: linux-image-4.4.0-72-generic 4.4.0-72.93~14.04.1
  PackageArchitecture: amd64
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=ru_RU.UTF-8
   SHELL=/bin/bash
  ProcVersionSignature: Ubuntu 4.4.0-72.93~14.04.1-generic 4.4.49
  SourcePackage: linux-lts-xenial
  Tags:  trusty
  Uname: Linux 4.4.0-72-generic x86_64
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Apr  5 12:05 seq
   crw-rw 1 root audio 116, 33 Apr  5 12:05 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.1-0ubuntu3.23
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory
  DistroRelease: Ubuntu 14.04
  HibernationDevice: RESUME=/dev/mapper/ubuntu--vg-swap_1
  InstallationDate: Installed on 2016-02-20 (409 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta amd64 
(20150805)
  IwConfig:
   lono wireless extensions.
   
   eth1  no wireless extensions.
   
   eth0  no wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize 
libusb: -99
  MachineType: Microsoft Corporation Virtual Machine
  Package: linux (not installed)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=C
   SHELL=/bin/bash
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.4.0-72-generic.efi.signed 
root=/dev/mapper/ubuntu--vg-root ro
  ProcVersionSignature: Ubuntu 4.4.0-72.93~14.04.1-generic 4.4.49
  RelatedPackageVersions:
   linux-restricted-modules-4.4.0-72-generic N/A
   linux-backports-modules-4.4.0-72-generic  N/A
   linux-firmware1.127.23
  RfKill: Error: [Errno 2] No such file or directory
  Tags:  trusty
  Uname: Linux 4.4.0-72-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups:
   
  _MarkForUpload: True
  dmi.bios.date: 11/26/2012
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v1.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v1.0
  dmi.chassis.asset.tag: 5894-4187-8369-8212-0547-2747-15
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v1.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev1.0:bd11/26/2012:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev1.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev1.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev1.0:
  dmi.product.name: Virtual Machine
  dmi.product.version: Hyper-V UEFI Release v1.0
  dmi.sys.vendor: Microsoft Corporation
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Apr 10 10:33 seq
   crw-rw 1 root audio 116, 33 Apr 10 10:33 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.1-0ubuntu3.23
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory
  DistroRelease: Ubuntu 14.04
  HibernationDevice: RESUME=/dev/mapper/ubuntu--vg-swap_1
  InstallationDate: Installed on 2016-02-20 (414 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta 

[Kernel-packages] [Bug 1713884] Re: [CIFS] Fix maximum SMB2 header size

2017-09-05 Thread Dexuan Cui
First I created an Ubuntu 16.04 VM on Azure, which could reproduce the bug; "uname -a" showed:
Linux decui-u1604-hwe 4.4.0-92-generic #115~14.04.1-Ubuntu SMP Thu Aug 10 15:06:53 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

I installed the xenial/ kernel, and confirmed it resolved the bug:
Linux decui-u1604-hwe 4.4.0-93-generic #116~lp1713884 SMP Wed Aug 30 14:16:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

And the zesty kernel resolved the bug too:
Linux decui-u1604-hwe 4.10.0-33-generic #37~lp1713884 SMP Wed Aug 30 14:15:48 UTC 2017 x86_64 x86_64 x86_64

And the artful kernel resolved the bug too:
Linux decui-u1604-hwe 4.12.0-11-generic #12~lp1713884 SMP Wed Aug 30 14:14:10 
UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1713884

Title:
  [CIFS] Fix maximum SMB2 header size

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Vivid:
  Triaged
Status in linux source package in Xenial:
  Triaged
Status in linux source package in Zesty:
  Triaged
Status in linux source package in Artful:
  Triaged

Bug description:
  Currently the maximum size of SMB2/3 header is set incorrectly which
  leads to hanging of directory listing operations on encrypted SMB3
  connections. Fix this by setting the maximum size to 170 bytes that
  is calculated as RFC1002 length field size (4) + transform header
  size (52) + SMB2 header size (64) + create response size (56).

  https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-
  next.git/commit/?id=47690ab81f4f29b12bbb0676d3579e61ab4d84de

  This applies across the board 3.16, 4.4, 4.10, artful, and azure.
  Microsoft would be happy to help test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1713884/+subscriptions



[Kernel-packages] [Bug 1713884] Re: [CIFS] Fix maximum SMB2 header size

2017-09-01 Thread Dexuan Cui
The patch has been in the mainline tree:
https://github.com/torvalds/linux/commit/e89ce1f89f62c7e527db3850a91dab3389772af3

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1713884

Title:
  [CIFS] Fix maximum SMB2 header size

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Vivid:
  Triaged
Status in linux source package in Xenial:
  Triaged
Status in linux source package in Zesty:
  Triaged
Status in linux source package in Artful:
  Triaged

Bug description:
  Currently the maximum size of SMB2/3 header is set incorrectly which
  leads to hanging of directory listing operations on encrypted SMB3
  connections. Fix this by setting the maximum size to 170 bytes that
  is calculated as RFC1002 length field size (4) + transform header
  size (52) + SMB2 header size (64) + create response size (56).

  https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-
  next.git/commit/?id=47690ab81f4f29b12bbb0676d3579e61ab4d84de

  This applies across the board 3.16, 4.4, 4.10, artful, and azure.
  Microsoft would be happy to help test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1713884/+subscriptions



[Kernel-packages] [Bug 1679898] Re: [Hyper-V] Ubuntu VM crash during Hyper-V backup or live migration after installing kernel 4.4.0-72

2017-08-28 Thread Dexuan Cui
@fastlanejb: are you on Windows Server 2012 R2 or 2016? Is your VM
running an I/O-intensive workload when the live backup happens? It
looks like you hit the OOM issue every time you do a live backup? I'm
digging into the issue and trying to reproduce it first.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1679898

Title:
  [Hyper-V] Ubuntu VM crash during Hyper-V backup or live migration
  after installing kernel 4.4.0-72

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Xenial:
  In Progress

Bug description:
  Description:Ubuntu 14.04.5 LTS
  Release:14.04

  Hi, after installing kernel 4.4.0-67 or later I cannot backup my Ubuntu VM's 
on Hyper-V.
  Microsoft Hyper-v 2012r2 Gen2 VMs

  See Attachment for what happens is immediately after backup starts I
  get an error.

  Eventually the kernel reports it has run out of memory and the machine
  just continuously scrolls errors message related to page allocation.

  When reseting the virtual machine no logs can be found of the problem.

  kernel 4.4.0-72-generic problem still here

  DistroRelease: Ubuntu 14.04
  InstallationDate: Installed on 2016-02-20 (409 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta amd64 
(20150805)
  Package: linux-image-4.4.0-72-generic 4.4.0-72.93~14.04.1
  PackageArchitecture: amd64
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=ru_RU.UTF-8
   SHELL=/bin/bash
  ProcVersionSignature: Ubuntu 4.4.0-72.93~14.04.1-generic 4.4.49
  SourcePackage: linux-lts-xenial
  Tags:  trusty
  Uname: Linux 4.4.0-72-generic x86_64
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Apr  5 12:05 seq
   crw-rw 1 root audio 116, 33 Apr  5 12:05 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.1-0ubuntu3.23
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory
  DistroRelease: Ubuntu 14.04
  HibernationDevice: RESUME=/dev/mapper/ubuntu--vg-swap_1
  InstallationDate: Installed on 2016-02-20 (409 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta amd64 
(20150805)
  IwConfig:
   lono wireless extensions.
   
   eth1  no wireless extensions.
   
   eth0  no wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize 
libusb: -99
  MachineType: Microsoft Corporation Virtual Machine
  Package: linux (not installed)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=C
   SHELL=/bin/bash
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.4.0-72-generic.efi.signed 
root=/dev/mapper/ubuntu--vg-root ro
  ProcVersionSignature: Ubuntu 4.4.0-72.93~14.04.1-generic 4.4.49
  RelatedPackageVersions:
   linux-restricted-modules-4.4.0-72-generic N/A
   linux-backports-modules-4.4.0-72-generic  N/A
   linux-firmware1.127.23
  RfKill: Error: [Errno 2] No such file or directory
  Tags:  trusty
  Uname: Linux 4.4.0-72-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups:
   
  _MarkForUpload: True
  dmi.bios.date: 11/26/2012
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v1.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v1.0
  dmi.chassis.asset.tag: 5894-4187-8369-8212-0547-2747-15
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v1.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev1.0:bd11/26/2012:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev1.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev1.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev1.0:
  dmi.product.name: Virtual Machine
  dmi.product.version: Hyper-V UEFI Release v1.0
  dmi.sys.vendor: Microsoft Corporation
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Apr 10 10:33 seq
   crw-rw 1 root audio 116, 33 Apr 10 10:33 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.1-0ubuntu3.23
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory
  DistroRelease: Ubuntu 14.04
  HibernationDevice: RESUME=/dev/mapper/ubuntu--vg-swap_1
  InstallationDate: Installed on 2016-02-20 (414 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta amd64 
(20150805)
  IwC

[Kernel-packages] [Bug 1679898] Re: [Hyper-V] Ubuntu VM crash during Hyper-V backup or live migration after installing kernel 4.4.0-72

2017-08-17 Thread Dexuan Cui
I read through the long bug log and found an interesting thing:

In #36, Andrey Vertexx (vertexx) reported that the issue was fixed by the
kernel in #30, but later Andrey thought the same kernel no longer worked?

In #47, #48, #54 and #59, several people (Aleksey (noirfry), Khallaf
(mkhallaf), Eric (jumpiem), etc.) said removing the virtual DVD could
resolve the issue, or the boot issue? But it looks like this workaround
later stopped working?

Please correct me if I didn't get this right.

PS, I'm debugging a similar (or the same?) issue with RHEL 7.3 + LIS
4.2.2, and I happened to find this Ubuntu bug.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1679898

Title:
  [Hyper-V] Ubuntu VM crash during Hyper-V backup or live migration
  after installing kernel 4.4.0-72

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Xenial:
  In Progress

Bug description:
  Description:Ubuntu 14.04.5 LTS
  Release:14.04

  Hi, after installing kernel 4.4.0-67 or later I cannot backup my Ubuntu VM's 
on Hyper-V.
  Microsoft Hyper-v 2012r2 Gen2 VMs

  See Attachment for what happens is immediately after backup starts I
  get an error.

  Eventually the kernel reports it has run out of memory and the machine
  just continuously scrolls errors message related to page allocation.

  When reseting the virtual machine no logs can be found of the problem.

  kernel 4.4.0-72-generic problem still here

  DistroRelease: Ubuntu 14.04
  InstallationDate: Installed on 2016-02-20 (409 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta amd64 
(20150805)
  Package: linux-image-4.4.0-72-generic 4.4.0-72.93~14.04.1
  PackageArchitecture: amd64
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=ru_RU.UTF-8
   SHELL=/bin/bash
  ProcVersionSignature: Ubuntu 4.4.0-72.93~14.04.1-generic 4.4.49
  SourcePackage: linux-lts-xenial
  Tags:  trusty
  Uname: Linux 4.4.0-72-generic x86_64
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Apr  5 12:05 seq
   crw-rw 1 root audio 116, 33 Apr  5 12:05 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.1-0ubuntu3.23
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory
  DistroRelease: Ubuntu 14.04
  HibernationDevice: RESUME=/dev/mapper/ubuntu--vg-swap_1
  InstallationDate: Installed on 2016-02-20 (409 days ago)
  InstallationMedia: Ubuntu-Server 14.04.3 LTS "Trusty Tahr" - Beta amd64 
(20150805)
  IwConfig:
   lono wireless extensions.
   
   eth1  no wireless extensions.
   
   eth0  no wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize 
libusb: -99
  MachineType: Microsoft Corporation Virtual Machine
  Package: linux (not installed)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=C
   SHELL=/bin/bash
  ProcFB: 0 hyperv_fb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.4.0-72-generic.efi.signed 
root=/dev/mapper/ubuntu--vg-root ro
  ProcVersionSignature: Ubuntu 4.4.0-72.93~14.04.1-generic 4.4.49
  RelatedPackageVersions:
   linux-restricted-modules-4.4.0-72-generic N/A
   linux-backports-modules-4.4.0-72-generic  N/A
   linux-firmware1.127.23
  RfKill: Error: [Errno 2] No such file or directory
  Tags:  trusty
  Uname: Linux 4.4.0-72-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups:
   
  _MarkForUpload: True
  dmi.bios.date: 11/26/2012
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v1.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v1.0
  dmi.chassis.asset.tag: 5894-4187-8369-8212-0547-2747-15
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v1.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev1.0:bd11/26/2012:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev1.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev1.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev1.0:
  dmi.product.name: Virtual Machine
  dmi.product.version: Hyper-V UEFI Release v1.0
  dmi.sys.vendor: Microsoft Corporation
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Apr 10 10:33 seq
   crw-rw 1 root audio 116, 33 Apr 10 10:33 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.1-0ubuntu3.23
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v',

[Kernel-packages] [Bug 1650058] Re: [Hyper-V/Azure] Please include Mellanox OFED drivers in Azure kernel and image

2017-02-23 Thread Dexuan Cui
Thanks, Joseph!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1650058

Title:
  [Hyper-V/Azure] Please include Mellanox OFED drivers in Azure kernel
  and image

Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Xenial:
  Fix Committed

Bug description:
  In order to have the correct VF driver to support SR-IOV in Azure, the
  Mellanox OFED distribution needs to be included in the kernel and the
  image. Mellanox's drivers are not upstream, but they are available
  from here:

  
https://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers&ssn=3nnhohirh5htv6dnp1uk7tf487

  While this configuration is not explicitly listed as supported in the
  release notes, Microsoft and Mellanox engineers are working on the
  corresponding Windows Server 2016 PF driver to support this VF driver
  in operation in Ubuntu guests.

  I file file a corresponding rebase request to pick up the PCI
  passthrough and other SR-IOV work done for the Hyper-V capabilities in
  the upstream 4.9 kernel.

  Only 64-bit support for Ubuntu 16.04's HWE kernel is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1650058/+subscriptions



[Kernel-packages] [Bug 1650058] Re: [Hyper-V/Azure] Please include Mellanox OFED drivers in Azure kernel and image

2017-02-22 Thread Dexuan Cui
@Joseph,
Can you please confirm that the patch
(https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1665097/comments/4) is
included in the test kernel in #25?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1650058

Title:
  [Hyper-V/Azure] Please include Mellanox OFED drivers in Azure kernel
  and image

Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Xenial:
  Fix Committed

Bug description:
  In order to have the correct VF driver to support SR-IOV in Azure, the
  Mellanox OFED distribution needs to be included in the kernel and the
  image. Mellanox's drivers are not upstream, but they are available
  from here:

  
https://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers&ssn=3nnhohirh5htv6dnp1uk7tf487

  While this configuration is not explicitly listed as supported in the
  release notes, Microsoft and Mellanox engineers are working on the
  corresponding Windows Server 2016 PF driver to support this VF driver
  in operation in Ubuntu guests.

  I file file a corresponding rebase request to pick up the PCI
  passthrough and other SR-IOV work done for the Hyper-V capabilities in
  the upstream 4.9 kernel.

  Only 64-bit support for Ubuntu 16.04's HWE kernel is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1650058/+subscriptions



[Kernel-packages] [Bug 1650058] Re: [Hyper-V/Azure] Please include Mellanox OFED drivers in Azure kernel and image

2017-02-22 Thread Dexuan Cui
@Joseph, "the test kernel in #25" means the one in comment #4 of the link in #25,
i.e. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1667007/comments/4
It looks to me like the patch is not included. I just want to confirm my guess.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1650058

Title:
  [Hyper-V/Azure] Please include Mellanox OFED drivers in Azure kernel
  and image

Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Xenial:
  Fix Committed

Bug description:
  In order to have the correct VF driver to support SR-IOV in Azure, the
  Mellanox OFED distribution needs to be included in the kernel and the
  image. Mellanox's drivers are not upstream, but they are available
  from here:

  
https://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers&ssn=3nnhohirh5htv6dnp1uk7tf487

  While this configuration is not explicitly listed as supported in the
  release notes, Microsoft and Mellanox engineers are working on the
  corresponding Windows Server 2016 PF driver to support this VF driver
  in operation in Ubuntu guests.

  I file file a corresponding rebase request to pick up the PCI
  passthrough and other SR-IOV work done for the Hyper-V capabilities in
  the upstream 4.9 kernel.

  Only 64-bit support for Ubuntu 16.04's HWE kernel is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1650058/+subscriptions



[Kernel-packages] [Bug 1665097] Re: [Hyper-V] SAUCE: pci-hyperv fixes for SR-IOV on Azure

2017-02-15 Thread Dexuan Cui
I happened to see this bug and want to add one more patch:
https://git.kernel.org/cgit/linux/kernel/git/helgaas/pci.git/commit/?h=pci/host-hv&id=60e2e2fbafdd1285ae1b4ad39ded41603e0c74d0

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1665097

Title:
  [Hyper-V] SAUCE: pci-hyperv fixes for SR-IOV on Azure

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Xenial:
  In Progress

Bug description:
  The following two patches to pci-hyperv were submitted upstream but
  missed getting pulled into Bjorn's tree. They are needed to fix an
  issue discovered working on SR-IOV on Azure.

  Patch 1: hv_pci_devices_present is called in hv_pci_remove when we
  remove a PCI device from host (e.g. by disabling SRIOV on a device).
  In hv_pci_remove, the bus is already removed before the call, so we
  don't need to rescan the bus in the workqueue scheduled from
  hv_pci_devices_present. By introducing status hv_pcibus_removed, we
  can avoid this situation.

  Patch 2: A PCI_EJECT message can arrive at the same time we are
  calling pci_scan_child_bus in the workqueue for the previous
  PCI_BUS_RELATIONS message or in create_root_hv_pci_bus(), in this case
  we could potentailly modify the bus from multiple places. Properly
  lock the bus access.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1665097/+subscriptions



[Kernel-packages] [Bug 1555786] Re: [Hyper-V] VM with ubuntu 32bit with linux-next does not boot

2017-02-02 Thread Dexuan Cui
@jrp, it looks like the patch in comment #26 is unrelated to this bug?
The patch is for cxlflash (Support for IBM CAPI Flash), which doesn't exist in
a VM running on Hyper-V.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1555786

Title:
  [Hyper-V] VM with ubuntu 32bit with linux-next does not boot

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Yakkety:
  In Progress
Status in linux source package in Zesty:
  In Progress

Bug description:
  We identified that at least ubuntu (let's take 15.10) does not boot if
  installed in the 32bit architecture, and using the linux-next upstream
  kernel 4.5 on it. This has been seen on other distributions as well.

  We took both the official linux-next branch from kernel.org and the one from 
kernel.ubuntu.com v4.5-rc7.
  For the compilation process we went either with the 4.2 kernel config file 
from the installed running kernel, and also built the config separately. 
  No errors have been encountered during the build or install process of the 
4.5 kernel, however at boot, the VM will just hang.
  There are no call traces or messages at boot to show any pottential issue.

  Have you seen this on your end when testing the 32bit build?
  Attaching the full serial log for reference, however I don't see any errors 
in the kernel boot process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1555786/+subscriptions



[Kernel-packages] [Bug 1549601] Re: [Hyper-V] x86, pageattr: prevent overflow in slow_virt_to_phys() for X86_PAE

2016-02-25 Thread Dexuan Cui
It turns out the issue also exists in the latest mainline kernel!

The fix "x86, pageattr: Prevent overflow in slow_virt_to_phys() for X86_PAE" is 
there, but a later patch "x86/mm: Fix slow_virt_to_phys() to handle large PAT 
bit"
 
(https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=34437e67a6727885bdf6cbfd8441b1ac43a1ee65)
 
actually removed the fix unintentionally, so we have the regression...

I have made a new fix and post it to LKML just now (sta...@vger.kernel.org was 
Cc-ed):
 http://marc.info/?l=linux-kernel&m=145638841908383&w=2

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1549601

Title:
  [Hyper-V] x86,pageattr: prevent overflow in slow_virt_to_phys() for
  X86_PAE

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1cd1210834649ce1ca6bafe5ac25d2f40331343

  x86, pageattr: Prevent overflow in slow_virt_to_phys() for X86_PAE
  pte_pfn() returns a PFN of long (32 bits in 32-PAE), so "long <<
  PAGE_SHIFT" will overflow for PFNs above 4GB.

  Due to this issue, some Linux 32-PAE distros, running as guests on Hyper-V,
  with 5GB memory assigned, can't load the netvsc driver successfully and
  hence the synthetic network device can't work (we can use the kernel parameter
  mem=3000M to work around the issue).

  Cast pte_pfn() to phys_addr_t before shifting.
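
  A small userspace demo of the overflow (illustrative only: uint32_t
  models the 32-bit unsigned long of an X86_PAE kernel, and the PFN
  value is made up):

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12
  typedef uint64_t phys_addr_t;

  int main(void)
  {
          uint32_t pfn = 0x140000;  /* PFN of a physical address at 5GB */

          /* Shift done in 32 bits: the high bits are lost. */
          uint32_t bad = pfn << PAGE_SHIFT;

          /* Cast to phys_addr_t first, as the fix does: shift in 64 bits. */
          phys_addr_t good = (phys_addr_t)pfn << PAGE_SHIFT;

          printf("bad=0x%x good=0x%llx\n", bad, (unsigned long long)good);
          return 0;
  }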

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1549601/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1470250] Re: [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based Backups

2016-01-12 Thread Dexuan Cui
Sorry, I was moved to another project so I couldn't debug the issue
full-time.

Hi Joshua R. Poulson (jrp), can you please find more resources for this
bug?

My previous debugging made me think the root cause might be in the
storvsc driver code, but unfortunately I'm not an expert in that area. :-(

Recently some storvsc fixes were posted on LKML, and some of them haven't
been accepted into the upstream kernel yet.
I can re-try the bug once all of the recent fixes are upstream to see if
the situation changes.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1470250

Title:
  [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
  Backups

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in linux source package in Wily:
  In Progress

Bug description:
  Customers have reported running various versions of Ubuntu 14.04.2 LTS
  on Generation 2 Hyper-V Hosts.On a random Basis, the file system
  will be mounted Read-Only due to a "disk error" (which really isn't
  the case here).As a result, they must reboot the Ubuntu guest to
  get the file system to mount RW again.

  The errors seen are the following:
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968142] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968145] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968161] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed. The Linux SCSI layer does not automatically adjust these 
parameters.
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584164] hv_storvsc 
vmbus_0_4: cmd 0x2a scsi status 0x2 srb status 0x82
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584178] hv_storvsc 
vmbus_0_4: stor pkt 88006eb6c700 autosense data valid - len 18
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584180] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584183] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584198] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed.  The Linux SCSI layer does not automatically adjust these 
parameters.

  This relates to the VSS "Windows Server Backup" process that kicks off at 
midnight on the host and finishes an hour and half later.   
  Yes, we do have hv_vss_daemon and hv_kvp_daemon running for the correct 
kernel version we have.   We're currently running kernel version 
3.13.0-49-generic #83 on one system and 3.16.0-34-generic #37 on the other. -- 
We see the same errors on both.
  As a result, we've been hesitant to drop any more ubuntu guests on our 2012R2 
hyper-v system because of this.   We can stop the backup process and all is 
good, but we need nightly backups to image all of our VM's.   All the windows 
guests have no issues of course.   We also have some CentOS based guests 
running without issues from what we've seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1470250/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1470250] Re: [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based Backups

2015-12-24 Thread Dexuan Cui
My update:
It looks like the issue is somehow related to the backup, but I tend to think
there is a bug somewhere in the storvsc driver code -- it's very hard to track
down because, before the ext4 read-only issue happens, the ext4 file system may
already have been somewhat corrupted.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1470250

Title:
  [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
  Backups

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in linux source package in Wily:
  In Progress

Bug description:
  Customers have reported running various versions of Ubuntu 14.04.2 LTS
  on Generation 2 Hyper-V Hosts.On a random Basis, the file system
  will be mounted Read-Only due to a "disk error" (which really isn't
  the case here).As a result, they must reboot the Ubuntu guest to
  get the file system to mount RW again.

  The errors seen are the following:
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968142] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968145] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968161] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed. The Linux SCSI layer does not automatically adjust these 
parameters.
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584164] hv_storvsc 
vmbus_0_4: cmd 0x2a scsi status 0x2 srb status 0x82
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584178] hv_storvsc 
vmbus_0_4: stor pkt 88006eb6c700 autosense data valid - len 18
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584180] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584183] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584198] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed.  The Linux SCSI layer does not automatically adjust these 
parameters.

  This relates to the VSS "Windows Server Backup" process that kicks off at 
midnight on the host and finishes an hour and half later.   
  Yes, we do have hv_vss_daemon and hv_kvp_daemon running for the correct 
kernel version we have.   We're currently running kernel version 
3.13.0-49-generic #83 on one system and 3.16.0-34-generic #37 on the other. -- 
We see the same errors on both.
  As a result, we've been hesitant to drop any more ubuntu guests on our 2012R2 
hyper-v system because of this.   We can stop the backup process and all is 
good, but we need nightly backups to image all of our VM's.   All the windows 
guests have no issues of course.   We also have some CentOS based guests 
running without issues from what we've seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1470250/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1292400] Re: task systemd-udevd:1906 blocked for more than 120 seconds.

2015-12-12 Thread Dexuan Cui
If the patch is already in Wily, I don't think Wily should have this bug.
Please test Wily to confirm.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1292400

Title:
  task systemd-udevd:1906 blocked for more than 120 seconds.

Status in linux package in Ubuntu:
  Incomplete
Status in linux source package in Trusty:
  Incomplete
Status in linux source package in Vivid:
  Incomplete

Bug description:
  System log shows repeated incidents of "task systemd-udevd:1906
  blocked for more than 120 seconds."

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: linux-image-3.13.0-17-generic 3.13.0-17.37 [modified: 
boot/vmlinuz-3.13.0-17-generic]
  ProcVersionSignature: Ubuntu 3.13.0-17.37-generic 3.13.6
  Uname: Linux 3.13.0-17-generic x86_64
  ApportVersion: 2.13.3-0ubuntu1
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CurrentDesktop: XFCE
  Date: Fri Mar 14 08:54:36 2014
  HibernationDevice: RESUME=UUID=0d223c65-8c7e-41b4-95b3-05b22ff4679b
  InstallationDate: Installed on 2014-03-12 (1 days ago)
  InstallationMedia: Xubuntu 14.04 LTS "Trusty Tahr" - Alpha amd64 (20140312)
  IwConfig:
   eth0  no wireless extensions.
   
   lo    no wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize 
libusb: -99
  MachineType: Microsoft Corporation Virtual Machine
  ProcFB: 0 EFI VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.13.0-17-generic.efi.signed 
root=UUID=61b52c1b-300d-4c30-8966-db1745f4a4bc ro video:hyperv_fb=1920x1080 
quiet splash vt.handoff=7
  RelatedPackageVersions:
   linux-restricted-modules-3.13.0-17-generic N/A
   linux-backports-modules-3.13.0-17-generic  N/A
   linux-firmware 1.126
  RfKill:
   
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 11/26/2012
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v1.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v1.0
  dmi.chassis.asset.tag: 6485-6574-9248-6162-4701-6267-50
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v1.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev1.0:bd11/26/2012:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev1.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev1.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev1.0:
  dmi.product.name: Virtual Machine
  dmi.product.version: Hyper-V UEFI Release v1.0
  dmi.sys.vendor: Microsoft Corporation
  --- 
  ApportVersion: 2.13.3-0ubuntu1
  Architecture: amd64
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CurrentDesktop: XFCE
  DistroRelease: Ubuntu 14.04
  HibernationDevice: RESUME=UUID=0d223c65-8c7e-41b4-95b3-05b22ff4679b
  InstallationDate: Installed on 2014-03-12 (1 days ago)
  InstallationMedia: Xubuntu 14.04 LTS "Trusty Tahr" - Alpha amd64 (20140312)
  IwConfig:
   eth0  no wireless extensions.
   
   lo    no wireless extensions.
  Lspci:
   
  Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize 
libusb: -99
  MachineType: Microsoft Corporation Virtual Machine
  Package: linux (not installed)
  ProcFB: 0 EFI VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.13.0-17-generic.efi.signed 
root=UUID=61b52c1b-300d-4c30-8966-db1745f4a4bc ro video:hyperv_fb=1920x1080 
quiet splash vt.handoff=7
  ProcVersionSignature: Ubuntu 3.13.0-17.37-generic 3.13.6
  RelatedPackageVersions:
   linux-restricted-modules-3.13.0-17-generic N/A
   linux-backports-modules-3.13.0-17-generic  N/A
   linux-firmware 1.126
  RfKill:
   
  Tags:  trusty
  Uname: Linux 3.13.0-17-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 11/26/2012
  dmi.bios.vendor: Microsoft Corporation
  dmi.bios.version: Hyper-V UEFI Release v1.0
  dmi.board.asset.tag: None
  dmi.board.name: Virtual Machine
  dmi.board.vendor: Microsoft Corporation
  dmi.board.version: Hyper-V UEFI Release v1.0
  dmi.chassis.asset.tag: 6485-6574-9248-6162-4701-6267-50
  dmi.chassis.type: 3
  dmi.chassis.vendor: Microsoft Corporation
  dmi.chassis.version: Hyper-V UEFI Release v1.0
  dmi.modalias: 
dmi:bvnMicrosoftCorporation:bvrHyper-VUEFIReleasev1.0:bd11/26/2012:svnMicrosoftCorporation:pnVirtualMachine:pvrHyper-VUEFIReleasev1.0:rvnMicrosoftCorporation:rnVirtualMachine:rvrHyper-VUEFIReleasev1.0:cvnMicrosoftCorporation:ct3:cvrHyper-VUEFIReleasev1.0:
  dmi.product.name: Virtual Machine
  dmi.product.version: 

[Kernel-packages] [Bug 1521053] Re: Network Performance dropping between vms on different location in Azure

2015-12-09 Thread Dexuan Cui
BTW, I'm not sure whether comment #10 could help or not -- just FYI. :-)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1521053

Title:
  Network Performance dropping between vms on different location in
  Azure

Status in linux package in Ubuntu:
  Invalid
Status in linux source package in Vivid:
  In Progress

Bug description:
  [Impact]

  Ubuntu VMs in different Azure locations, especially North Europe and
East Europe in this case, have a network performance issue.
  It should be around 100MB/s between them, but it's around 0.3MB/s when the
dropping happens.

  [Fix]

  Upstream development
  0d158852a8089099a6959ae235b20f230871982f ("hv_netvsc: Clean up two unused 
variables")

  Kernels from 3.19.0-28-generic (ubuntu-vivid) onward are affected.

  [Testcase]

  Create 2 VMs, one in North Europe and one in West Europe.
  Then run the test script below.

  NE VM

  - netcat & nload
   while true; do netcat -l 8080 < /dev/zero; done;
   nload -u M eth0 ( need nload pkg )

  - iperf
   iperf -s -f M

  WE VM

  - netcat
   for i in {1..1000}
   do
    timeout 30s nc NE_HOST 8080 > /dev/null
   done

  - iperf
   iperf -c HOST -f M

  Network performance dropping can be seen frequently.

  More Tests
  http://pastebin.ubuntu.com/13657083/

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1521053/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1521053] Re: Network Performance dropping between vms on different location in Azure

2015-12-09 Thread Dexuan Cui
When the issue happens (it looks like it's somehow due to the layout of the
struct...), can you try the small workaround patch at
https://patchwork.ozlabs.org/patch/518469/ ?

I paste it below:

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 88a0069..7233790 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -132,7 +132,9 @@  static inline bool dev_xmit_complete(int rc)
  * used.
  */
 
-#if defined(CONFIG_WLAN) || IS_ENABLED(CONFIG_AX25)
+#if IS_ENABLED(CONFIG_HYPERV_NET)
+# define LL_MAX_HEADER 224
+#elif defined(CONFIG_WLAN) || IS_ENABLED(CONFIG_AX25)
 # if defined(CONFIG_MAC80211_MESH)
 #  define LL_MAX_HEADER 128
 # else

If this works, please use the formal fixes from KY, which are already in
linux-next:
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/log/?qt=grep&q=hv_netvsc
(please check the patches from the past week)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1521053

Title:
  Network Performance dropping between vms on different location in
  Azure

Status in linux package in Ubuntu:
  Invalid
Status in linux source package in Vivid:
  In Progress

Bug description:
  [Impact]

  Ubuntu VMs in different Azure locations, especially North Europe and
East Europe in this case, have a network performance issue.
  It should be around 100MB/s between them, but it's around 0.3MB/s when the
dropping happens.

  [Fix]

  Upstream development
  0d158852a8089099a6959ae235b20f230871982f ("hv_netvsc: Clean up two unused 
variables")

  Kernels from 3.19.0-28-generic (ubuntu-vivid) onward are affected.

  [Testcase]

  Create 2 VMs, one in North Europe and one in West Europe.
  Then run the test script below.

  NE VM

  - netcat & nload
   while true; do netcat -l 8080 < /dev/zero; done;
   nload -u M eth0 ( need nload pkg )

  - iperf
   iperf -s -f M

  WE VM

  - netcat
   for i in {1..1000}
   do
    timeout 30s nc NE_HOST 8080 > /dev/null
   done

  - iperf
   iperf -c HOST -f M

  Network performance dropping can be seen frequently.

  More Tests
  http://pastebin.ubuntu.com/13657083/

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1521053/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1470250] Re: [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based Backups

2015-12-09 Thread Dexuan Cui
The patch mentioned in #72 doesn't help -- still bad luck. :-(
But I can confirm: before the issue happens, somehow the host stops sending us
freeze/thaw commands.

We need further debugging...

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1470250

Title:
  [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
  Backups

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in linux source package in Wily:
  In Progress

Bug description:
  Customers have reported running various versions of Ubuntu 14.04.2 LTS
  on Generation 2 Hyper-V Hosts.On a random Basis, the file system
  will be mounted Read-Only due to a "disk error" (which really isn't
  the case here).As a result, they must reboot the Ubuntu guest to
  get the file system to mount RW again.

  The errors seen are the following:
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968142] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968145] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968161] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed. The Linux SCSI layer does not automatically adjust these 
parameters.
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584164] hv_storvsc 
vmbus_0_4: cmd 0x2a scsi status 0x2 srb status 0x82
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584178] hv_storvsc 
vmbus_0_4: stor pkt 88006eb6c700 autosense data valid - len 18
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584180] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584183] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584198] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed.  The Linux SCSI layer does not automatically adjust these 
parameters.

  This relates to the VSS "Windows Server Backup" process that kicks off at 
midnight on the host and finishes an hour and half later.   
  Yes, we do have hv_vss_daemon and hv_kvp_daemon running for the correct 
kernel version we have.   We're currently running kernel version 
3.13.0-49-generic #83 on one system and 3.16.0-34-generic #37 on the other. -- 
We see the same errors on both.
  As a result, we've been hesitant to drop any more ubuntu guests on our 2012R2 
hyper-v system because of this.   We can stop the backup process and all is 
good, but we need nightly backups to image all of our VM's.   All the windows 
guests have no issues of course.   We also have some CentOS based guests 
running without issues from what we've seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1470250/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1470250] Re: [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based Backups

2015-12-09 Thread Dexuan Cui
I suspect the race condition may be in vss_on_msg() with the non-thread-
safe variable vss_transaction.state.

And I guess the patch below may have fixed the issue (the patch is not in
the upstream yet):
http://lkml.iu.edu/hypermail/linux/kernel/1510.3/04218.html
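
Just to illustrate the kind of race I suspect (this is not the patch above,
and everything except the vss_transaction.state name is made up): if two
contexts check and update the state without serialization, both can think
they own the transaction. A lock-protected transition would look roughly
like this:

#include <linux/mutex.h>
#include <linux/types.h>

/* Hypothetical stand-in for the driver's shared transaction state. */
static struct {
        int state;  /* 0 = idle, 1 = freeze/thaw in flight (made-up encoding) */
} vss_transaction;

static DEFINE_MUTEX(vss_state_lock);  /* hypothetical lock */

static bool vss_try_start_transaction(void)
{
        bool started = false;

        mutex_lock(&vss_state_lock);
        if (vss_transaction.state == 0) {  /* idle: we may start one */
                vss_transaction.state = 1;
                started = true;
        }
        mutex_unlock(&vss_state_lock);

        return started;
}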

I can only test the patch tomorrow, so it would be great if somebody could
help test it today, at your convenience. Or just wait for my result. :-)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1470250

Title:
  [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
  Backups

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress
Status in linux source package in Utopic:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in linux source package in Wily:
  In Progress

Bug description:
  Customers have reported running various versions of Ubuntu 14.04.2 LTS
  on Generation 2 Hyper-V Hosts.On a random Basis, the file system
  will be mounted Read-Only due to a "disk error" (which really isn't
  the case here).As a result, they must reboot the Ubuntu guest to
  get the file system to mount RW again.

  The errors seen are the following:
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968142] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968145] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968161] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed. The Linux SCSI layer does not automatically adjust these 
parameters.
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584164] hv_storvsc 
vmbus_0_4: cmd 0x2a scsi status 0x2 srb status 0x82
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584178] hv_storvsc 
vmbus_0_4: stor pkt 88006eb6c700 autosense data valid - len 18
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584180] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584183] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584198] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed.  The Linux SCSI layer does not automatically adjust these 
parameters.

  This relates to the VSS "Windows Server Backup" process that kicks off at 
midnight on the host and finishes an hour and half later.   
  Yes, we do have hv_vss_daemon and hv_kvp_daemon running for the correct 
kernel version we have.   We're currently running kernel version 
3.13.0-49-generic #83 on one system and 3.16.0-34-generic #37 on the other. -- 
We see the same errors on both.
  As a result, we've been hesitant to drop any more ubuntu guests on our 2012R2 
hyper-v system because of this.   We can stop the backup process and all is 
good, but we need nightly backups to image all of our VM's.   All the windows 
guests have no issues of course.   We also have some CentOS based guests 
running without issues from what we've seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1470250/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1470250] Re: [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based Backups

2015-12-08 Thread Dexuan Cui
> "BTW, Since Ubuntu 15.04's "
typo.. 15.04 -> 15.10.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1470250

Title:
  [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
  Backups

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress
Status in linux source package in Utopic:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in linux source package in Wily:
  In Progress

Bug description:
  Customers have reported running various versions of Ubuntu 14.04.2 LTS
  on Generation 2 Hyper-V Hosts.On a random Basis, the file system
  will be mounted Read-Only due to a "disk error" (which really isn't
  the case here).As a result, they must reboot the Ubuntu guest to
  get the file system to mount RW again.

  The errors seen are the following:
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968142] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968145] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968161] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed. The Linux SCSI layer does not automatically adjust these 
parameters.
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584164] hv_storvsc 
vmbus_0_4: cmd 0x2a scsi status 0x2 srb status 0x82
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584178] hv_storvsc 
vmbus_0_4: stor pkt 88006eb6c700 autosense data valid - len 18
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584180] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584183] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584198] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed.  The Linux SCSI layer does not automatically adjust these 
parameters.

  This relates to the VSS "Windows Server Backup" process that kicks off at 
midnight on the host and finishes an hour and half later.   
  Yes, we do have hv_vss_daemon and hv_kvp_daemon running for the correct 
kernel version we have.   We're currently running kernel version 
3.13.0-49-generic #83 on one system and 3.16.0-34-generic #37 on the other. -- 
We see the same errors on both.
  As a result, we've been hesitant to drop any more ubuntu guests on our 2012R2 
hyper-v system because of this.   We can stop the backup process and all is 
good, but we need nightly backups to image all of our VM's.   All the windows 
guests have no issues of course.   We also have some CentOS based guests 
running without issues from what we've seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1470250/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1470250] Re: [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based Backups

2015-12-08 Thread Dexuan Cui
@f-bosch @jsalisbury 
I can reproduce the issue consistently within 5~6 hours with an Ubuntu 15.10 VM.

In /var/log/syslog, several minutes before the file system is remounted
as read-only, the hv_vss_daemon has stopped working: the daemon just
hangs in poll(), not receiving freeze/thaw commands from the hv_utils
driver at all.
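
To show what "hangs in poll()" means, here is a much-simplified sketch of the
daemon's wait loop (illustrative only, not the real tools/hv/hv_vss_daemon.c;
the /dev/vmbus/hv_vss path is assumed from the char-device transport):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main(void)
{
        char buf[256];
        int fd = open("/dev/vmbus/hv_vss", O_RDWR);
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        if (fd < 0)
                return 1;

        for (;;) {
                pfd.revents = 0;
                /* If the driver never forwards the host's freeze/thaw
                 * request, the daemon sits here forever. */
                if (poll(&pfd, 1, -1) < 0)
                        break;
                if (read(fd, buf, sizeof(buf)) <= 0)
                        break;
                /* ...handle the freeze/thaw request and reply... */
        }
        return 0;
}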

I guess there might be a race condition in the hv_utils.ko driver, so
the commands from the host are not received properly, or not forwarded
to the daemon properly, so the daemon isn't woken up.

Trying to track it down.
BTW, since Ubuntu 15.04's code for the hv_utils driver and the daemon is the
same as the upstream Linux code, I think upstream should have the same issue.


BTW, the message below looks like a benign warning -- I get it every time the
backup begins, but I think it has nothing to do with the issue here:
[  967.339810] sd 2:0:0:0: [storvsc] Sense Key : Unit Attention [current]
[  967.339891] sd 2:0:0:0: [storvsc] Add. Sense: Changed operating definition
[  967.340111] sd 2:0:0:0: Warning! Received an indication that the operating 
parameters on this target have changed. The Linux SCSI layer does not automa

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1470250

Title:
  [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
  Backups

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress
Status in linux source package in Utopic:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in linux source package in Wily:
  In Progress

Bug description:
  Customers have reported running various versions of Ubuntu 14.04.2 LTS
  on Generation 2 Hyper-V Hosts.On a random Basis, the file system
  will be mounted Read-Only due to a "disk error" (which really isn't
  the case here).As a result, they must reboot the Ubuntu guest to
  get the file system to mount RW again.

  The errors seen are the following:
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968142] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968145] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968161] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed. The Linux SCSI layer does not automatically adjust these 
parameters.
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584164] hv_storvsc 
vmbus_0_4: cmd 0x2a scsi status 0x2 srb status 0x82
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584178] hv_storvsc 
vmbus_0_4: stor pkt 88006eb6c700 autosense data valid - len 18
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584180] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584183] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584198] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed.  The Linux SCSI layer does not automatically adjust these 
parameters.

  This relates to the VSS "Windows Server Backup" process that kicks off at 
midnight on the host and finishes an hour and half later.   
  Yes, we do have hv_vss_daemon and hv_kvp_daemon running for the correct 
kernel version we have.   We're currently running kernel version 
3.13.0-49-generic #83 on one system and 3.16.0-34-generic #37 on the other. -- 
We see the same errors on both.
  As a result, we've been hesitant to drop any more ubuntu guests on our 2012R2 
hyper-v system because of this.   We can stop the backup process and all is 
good, but we need nightly backups to image all of our VM's.   All the windows 
guests have no issues of course.   We also have some CentOS based guests 
running without issues from what we've seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1470250/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1470250] Re: [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based Backups

2015-12-03 Thread Dexuan Cui
Thanks @f-bosch for your clarification in #62. So my understanding is:
the (temporary) I/O degradation during the backup might be caused by the
fact that the disk space has almost been used up recently (?), but it might
also be somehow related to the backup. Let's focus on the backup issue for
now.

Thanks @jsalisbury for the detailed test steps & script!
I suppose the result in #66 implies a real issue, which can't be fixed by the
patches mentioned in bug 1519917. We're going to investigate the issue further
with the help of your test script.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1470250

Title:
  [Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
  Backups

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress
Status in linux source package in Utopic:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in linux source package in Wily:
  In Progress

Bug description:
  Customers have reported running various versions of Ubuntu 14.04.2 LTS
  on Generation 2 Hyper-V Hosts.On a random Basis, the file system
  will be mounted Read-Only due to a "disk error" (which really isn't
  the case here).As a result, they must reboot the Ubuntu guest to
  get the file system to mount RW again.

  The Error seen are the following:
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968142] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968145] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 00:02:01 balticnetworkstraining kernel: [640153.968161] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed. The Linux SCSI layer does not automatically adjust these 
parameters.
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584164] hv_storvsc 
vmbus_0_4: cmd 0x2a scsi status 0x2 srb status 0x82
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584178] hv_storvsc 
vmbus_0_4: stor pkt 88006eb6c700 autosense data valid - len 18
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584180] storvsc: Sense 
Key : Unit Attention [current] 
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584183] storvsc: Add. 
Sense: Changed operating definition
  Apr 30 01:23:26 balticnetworkstraining kernel: [645039.584198] sd 0:0:0:0: 
Warning! Received an indication that the operating parameters on this target 
have changed.  The Linux SCSI layer does not automatically adjust these 
parameters.

  This relates to the VSS "Windows Server Backup" process that kicks off at 
midnight on the host and finishes an hour and half later.   
  Yes, we do have hv_vss_daemon and hv_kvp_daemon running for the correct 
kernel version we have.   We're currently running kernel version 
3.13.0-49-generic #83 on one system and 3.16.0-34-generic #37 on the other. -- 
We see the same errors on both.
  As a result, we've been hesitant to drop any more ubuntu guests on our 2012R2 
hyper-v system because of this.   We can stop the backup process and all is 
good, but we need nightly backups to image all of our VM's.   All the windows 
guests have no issues of course.   We also have some CentOS based guests 
running without issues from what we've seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1470250/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp

