[Bug 2064859] Re: GNOME's automatic timezone doesn't work with Failed to query location: No WiFi networks found

2024-05-06 Thread Nobuto Murata
> Do you use wpa_supplicant or iwd on that system? I'm with wpa_supplicant (default). And netplan status says the Wi-Fi connection is up, so I'm not sure whether the "No WiFi" line is the root cause or just a red herring. ● 3: wlp2s0 wifi UP (NetworkManager: NM-94eee488-50b3-42db-8b93-cc8d7dcad210)

[Bug 2064859] Re: GNOME's automatic timezone doesn't work with Failed to query location: No WiFi networks found

2024-05-06 Thread Nobuto Murata
There is no feedback in the UI anywhere. The line was from journalctl > May 05 21:57:22 t14 geoclue[71430]: Failed to query location: No WiFi > networks found and nothing happens. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.

[Bug 2064859] [NEW] GNOME's automatic timezone doesn't work with Failed to query location: No WiFi networks found

2024-05-05 Thread Nobuto Murata
Public bug reported: I'm aware that the underlying service is going to be retired as covered by: https://bugs.launchpad.net/ubuntu/+source/gnome-control-center/+bug/2062178 However, the service is still active as of writing, but somehow the GNOME desktop environment cannot determine the timezone. It's worth

[Bug 2056387] Re: [T14 Gen 3 AMD] Fail to suspend/resume for the second time

2024-05-02 Thread Nobuto Murata
It's no longer reproducible at least with linux-image-6.8.0-31-generic, closing. ** Changed in: linux (Ubuntu) Status: Confirmed => Invalid

[Bug 2056387] Re: [T14 Gen 3 AMD] Fail to suspend/resume for the second time

2024-04-10 Thread Nobuto Murata
** Summary changed: - Fail to suspend/resume for the second time + [T14 Gen 3 AMD] Fail to suspend/resume for the second time

[Bug 2059386] Re: curtin-install.log and curtin-install-cfg.yaml are not collected

2024-03-28 Thread Nobuto Murata
** Attachment added: "curtin-install-cfg.yaml" https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/2059386/+attachment/5760167/+files/curtin-install-cfg.yaml

[Bug 2059386] Re: curtin-install.log and curtin-install-cfg.yaml are not collected

2024-03-28 Thread Nobuto Murata
It's worth noting that those files contain some MAAS token. ** Attachment added: "curtin-install.log" https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/2059386/+attachment/5760166/+files/curtin-install.log

[Bug 2059386] [NEW] curtin-install.log and curtin-install-cfg.yaml are not collected

2024-03-28 Thread Nobuto Murata
Public bug reported: Installed: 4.5.6-0ubuntu1~22.04.2 When a server was provisioned by MAAS, troubleshooting of the installation process or a configuration issue requires the logs of the curtin process. It's usually stored in /root # ll -h /root/curtin-install* -r 1 root root 5.4K Mar 28

[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-22 Thread Nobuto Murata
** Description changed: - python-rtslib-fb needs to properly handle the new kernel module - attribute cpus_allowed_list. + [ Impact ] + + * getting information about "attached_luns" fails via python3-rtslib-fb + when running the HWE kernel on jammy due to the new kernel module + attribute

[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-21 Thread Nobuto Murata
Ceph-iSCSI is a bit of a complicated example as a reproducer https://docs.ceph.com/en/quincy/rbd/iscsi-overview/ But the simplest reproducer is `targetctl clear` with the jammy HWE kernel. $ sudo targetctl clear Traceback (most recent call last): File "/usr/bin/targetctl", line 82, in main() File
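The truncated traceback above comes from rtslib-fb failing on a sysfs attribute it doesn't recognize. As a hypothetical illustration (not the actual rtslib-fb code), the difference between a strict attribute reader (the failure) and a tolerant one (the fix) looks like this — the attribute names other than cpus_allowed_list are assumed:

```python
# Hypothetical sketch, not rtslib-fb's real implementation: a newer
# kernel exposes an attribute (cpus_allowed_list) that the userspace
# library does not know about.
KNOWN_ATTRIBUTES = {"cmd_time_out", "qfull_time_out"}  # assumed names

def read_attributes(attrs, strict=True):
    """Return only recognized attributes; raise or skip unknown ones."""
    result = {}
    for name, value in attrs.items():
        if name not in KNOWN_ATTRIBUTES:
            if strict:
                # mirrors the reported failure on the HWE kernel
                raise KeyError(f"unknown attribute: {name}")
            continue  # tolerant: ignore attributes added by newer kernels
        result[name] = value
    return result

attrs = {"cmd_time_out": "30", "cpus_allowed_list": "0-3"}
tolerant = read_attributes(attrs, strict=False)  # {"cmd_time_out": "30"}
```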

[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-21 Thread Nobuto Murata
The workaround is to switch back to the GA kernel (v5.15), but that's far from ideal for newer generations of servers (less than two years old).

[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-21 Thread Nobuto Murata
The latest LTS (jammy) is missing this patch, and causes a failure in LUN operations when the host is running the HWE kernel, v6.5. python3-rtslib-fb | 2.1.74-0ubuntu4 | jammy | all python3-rtslib-fb | 2.1.74-0ubuntu5 | mantic | all python3-rtslib-fb |

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-13 Thread Nobuto Murata
It's not the apt-news nor the esm-cache service that was modified. It looks like systemd warns about daemon-reload in any case where any systemd unit file was modified and daemon-reload wasn't run afterwards.

[Bug 2056387] Re: Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
Random pointers, although I'm not sure those are identical to my issue: https://www.reddit.com/r/archlinux/comments/199am0a/thinkpad_t14_suspend_broken_in_kernel_670/ https://discussion.fedoraproject.org/t/random-resume-after-suspend-issue-on-thinkpad-t14s-amd-gen3-radeon-680m-ryzen-7/103452

[Bug 2056387] Re: Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
Multiple suspends in a row worked without an external monitor connected, but after connecting it the machine failed to suspend/resume. ** Attachment added: "failed_on_suspend_after_connecting_monitor.log"

[Bug 2056387] Re: Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
kernel log when trying suspend/resume twice in a row. The machine got frozen while the power LED is still on in the second suspend and there is no second "PM: suspend entry (s2idle)" in the kernel log. ** Attachment added: "failed_on_second_suspend.log"

[Bug 2056387] [NEW] Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
Public bug reported: I had a similar issue before: https://bugs.launchpad.net/ubuntu/+source/linux-hwe-5.19/+bug/2007718 However, I haven't seen the issue with later kernels until getting 6.8.0-11.11+1 recently. * 6.8.0-11 - fails to suspend/resume for the second time although the first

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
Hmm, it happened again between those two `apt update` runs. It might be snapd related. 2024-03-05T10:49:54.513356+09:00 t14 sudo: nobuto : TTY=pts/0 ; PWD=/home/nobuto ; USER=root ; COMMAND=/usr/bin/apt update 2024-03-05T11:00:47.422897+09:00 t14 sudo: nobuto : TTY=pts/0 ; PWD=/home/nobuto ;

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
The list of files modified in the last two hours (if I increase the range to the last 2 days, it lists almost everything). $ find /etc/systemd /lib/systemd/ -mmin -7200 /etc/systemd/system /etc/systemd/system/snap-chromium-2768.mount /etc/systemd/system/snap-hugo-18726.mount

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
Just for completeness. $ sudo apt update Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units. Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
> @nobotu - was yours really an empty file or did you not copy more than one? Are you referring to the `systemctl cat apt-news.service` in the bug description? If so, my apologies. I just pasted the first line of the content on purpose, just to confirm the full path of the service. The file

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
** Description changed: I recently started seeing the following warning messages when I run `apt update`. $ sudo apt update Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units. Warning: The

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-02-29 Thread Nobuto Murata
I tried to minimize the test case, but no luck so far. I will report back whenever I find something additional.

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-02-28 Thread Nobuto Murata
It was puzzling indeed, but now I have a reproduction step. $ sudo apt update -> no warning $ sudo apt upgrade -> to install something to invoke the rsyslog trigger. Processing triggers for rsyslog (8.2312.0-3ubuntu3) ... Warning: The unit file, source configuration file or drop-ins of
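The reproduction above can be modeled simply: a unit is flagged as "changed on disk" when any of its files has a modification time newer than the last daemon-reload. This is a simplified assumption about the mechanism for illustration, not systemd's actual implementation:

```python
# Simplified model (an assumption, not systemd's real code) of the
# "changed on disk" warning: any unit file newer than the last
# daemon-reload triggers it.
def needs_daemon_reload(unit_file_mtimes, last_reload_time):
    """True if any unit file changed on disk after the last reload."""
    return any(m > last_reload_time for m in unit_file_mtimes)

# A package trigger touches a unit file at t=105 after a reload at
# t=100, so every later systemctl interaction warns until a
# `systemctl daemon-reload` moves the reload timestamp forward.
warns = needs_daemon_reload([90, 105], last_reload_time=100)   # True
quiet = needs_daemon_reload([90, 105], last_reload_time=110)   # False
```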

[Bug 2055239] [NEW] Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-02-27 Thread Nobuto Murata
Public bug reported: I recently started seeing the following warning messages when I run `apt update`. $ sudo apt update Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units. Warning: The unit file,

[Bug 1939390] Re: Missing dependency: lsscsi

2022-04-11 Thread Nobuto Murata
To accommodate the upstream change, we need backporting down to Victoria. os-brick (master=)$ git branch -r --contains fc6ca22bdb955137d97cb9bcfc84104426e53842 origin/HEAD -> origin/master origin/master origin/stable/victoria origin/stable/wallaby origin/stable/xena

[Bug 1967131] Re: CONFIG_NR_CPUS=64 in -kvm is too low compared to -generic

2022-03-30 Thread Nobuto Murata
Thank you Stefan for the prompt response. I'm marking this as Invalid for the time being assuming the value was intended. ** Changed in: linux-kvm (Ubuntu) Status: New => Invalid

[Bug 1967131] [NEW] CONFIG_NR_CPUS=64 in -kvm is too low compared to -generic

2022-03-30 Thread Nobuto Murata
Public bug reported: -kvm flavor has CONFIG_NR_CPUS=64 although -generic has CONFIG_NR_CPUS=8192 these days. It will be a problem especially when launching a VM on top of a hypervisor with more than 64 CPU threads available. Then the guest can only use up to 64 vCPUs even when more vCPUs are
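For reference, the limit can be confirmed by parsing the kernel config (e.g. /boot/config-$(uname -r)). A minimal sketch; the config snippets are illustrative, though the CONFIG_NR_CPUS values match the ones reported in this bug:

```python
# Sketch: extract CONFIG_NR_CPUS from kernel config text. The config
# contents below are illustrative fragments, not full configs.
import re

def nr_cpus(config_text):
    """Return CONFIG_NR_CPUS as an int, or None if not set."""
    m = re.search(r"^CONFIG_NR_CPUS=(\d+)$", config_text, re.MULTILINE)
    return int(m.group(1)) if m else None

generic = "CONFIG_SMP=y\nCONFIG_NR_CPUS=8192\n"
kvm = "CONFIG_SMP=y\nCONFIG_NR_CPUS=64\n"
# A guest kernel built with CONFIG_NR_CPUS=64 caps usable vCPUs at 64
# even when the hypervisor offers more.
```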

[Bug 1963698] Re: ovn-controller on Wallaby creates high CPU usage after moving port

2022-03-06 Thread Nobuto Murata
In this specific case (the environment Olivier described), we tested focal-xena and the issue was NOT reproducible. We've decided to go with Xena so field-high can be dropped (I'm not able to remove the subscription by myself here). Assuming that it might be focal-wallaby specific since we

[Bug 1963698] Re: ovn-controller on Wallaby creates high CPU usage after moving port

2022-03-04 Thread Nobuto Murata
** Project changed: networking-ovn => ovn (Ubuntu)

[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-24 Thread Nobuto Murata
Tested and verified with cloud-archive:ussuri-proposed. apt-cache policy cinder-common cinder-common: Installed: 2:16.4.2-0ubuntu2~cloud0 Candidate: 2:16.4.2-0ubuntu2~cloud0 Version table: *** 2:16.4.2-0ubuntu2~cloud0 500 500 http://ubuntu-cloud.archive.canonical.com/ubuntu

[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-24 Thread Nobuto Murata
Tested and verified with cloud-archive:victoria-proposed. apt-cache policy cinder-common cinder-common: Installed: 2:17.2.0-0ubuntu1~cloud1 Candidate: 2:17.2.0-0ubuntu1~cloud1 Version table: *** 2:17.2.0-0ubuntu1~cloud1 500 500 http://ubuntu-cloud.archive.canonical.com/ubuntu

[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-24 Thread Nobuto Murata
Tested and verified with focal-proposed. apt-cache policy cinder-common cinder-common: Installed: 2:16.4.2-0ubuntu2 Candidate: 2:16.4.2-0ubuntu2 Version table: *** 2:16.4.2-0ubuntu2 500 500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages 100

[Bug 1947063] Re: Missing dependency to sysfsutils, nvme-cli

2022-02-22 Thread Nobuto Murata
There is a separate bug for `lsscsi` since it's pertinent to iSCSI use cases: https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1939390

[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-22 Thread Nobuto Murata
Okay, I've added a comment there: https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1947063

[Bug 1947063] Re: Missing dependency to sysfsutils

2022-02-22 Thread Nobuto Murata
Upstream refreshed the list of dependencies by adding more commands, etc. The "nvme" command from the nvme-cli package is one of them. This is a warning in the NVMe-oF code path, but it's invoked regardless of whether NVMe-oF is used or not. 2022-02-22 11:00:42.531 713772 WARNING

[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
> I *think* we also had this problem on systems that had NVMe volumes. The nvme-cli package is not pulled in, even though it is used by os-brick: Did it block any operation by missing the nvme command? It looks like it's in a critical path for the NVMe-oF use case, but it generates a warning instead

[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-21 Thread Nobuto Murata
Hi Raghavendra, First of all, thank you for your effort trying to move things forward. I'm afraid devstack works in this specific case because devstack pulls Cinder from the git repository directly instead of using Ubuntu's binary packages (.deb basically), if I'm not mistaken. This validation requires

[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
Subscribing ~field-medium

[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
Similar to this one: https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1947063

[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
Adding an Ubuntu packaging task. It seems the lsscsi dependency was added fairly recently (July 2020), so it looks like it's something the os-brick binary package should install as a dependency: https://bugs.launchpad.net/os-brick/+bug/1793259

[Bug 1953052] Re: In Jammy sound level reset after reboot

2022-02-20 Thread Nobuto Murata
> TL;DR: pipewire-pulse and PulseAudio should *not* be installed at the same time as they serve the same function and applications don't know the difference. This is not the case at least for the default Ubuntu flavor (w/ GNOME). $ curl -s

[Bug 1953052] Re: In Jammy sound level reset after reboot

2022-02-17 Thread Nobuto Murata
I can confirm that after stopping pipewire temporarily by: $ systemctl --user stop pipewire.socket pipewire.service The volume level is properly recovered across plugging in and out a headset for example, which is good. Both pulseaudio and pipewire are installed out of the box and running if I'm

[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-01 Thread Nobuto Murata
** Description changed: [Description] OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE Primera 4.2 and higher. This is now supported in Cinder and we would like to enable it in Ubuntu Focal as well as OpenStack Ussuri. The rationale for this SRU falls under

[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-01 Thread Nobuto Murata
** Description changed: [Description] OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE Primera 4.2 and higher. This is now supported in Cinder and we would like to enable it in Ubuntu Focal as well as OpenStack Ussuri. The rationale for this SRU falls under

[Bug 1936842] Re: agent cannot be up on LXD/Fan network on OpenStack OVN/geneve mtu=1442

2021-10-18 Thread Nobuto Murata
Let me know what log / log level you want to see to compare. I'm attaching the machine log of the VM for the time being. ** Attachment added: "machine-0.log" https://bugs.launchpad.net/juju/+bug/1936842/+attachment/5533786/+files/machine-0.log

[Bug 1936842] Re: agent cannot be up on LXD/Fan network on OpenStack OVN/geneve mtu=1442

2021-10-18 Thread Nobuto Murata
Hmm, I'm not sure where the difference comes from. With Juju 2.9.16 I still see mtu=1442 on VM NIC (expected) and mtu=1450 (bigger than underlying NIC) on fan-252 bridge. ubuntu@juju-913ba4-k8s-on-openstack-0:~$ brctl show bridge name bridge id STP enabled interfaces
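The mismatch above can be sanity-checked with basic MTU arithmetic. A sketch under stated assumptions: 58 bytes is a typical Geneve encapsulation overhead over IPv4, and 50 bytes is assumed here for the fan overlay (the value that would make a 1450 default correct on a 1500-byte underlay); neither value comes from this bug report:

```python
# Overlay MTU must be the underlay MTU minus encapsulation overhead.
# Overhead values are illustrative assumptions, not measured ones.
GENEVE_OVERHEAD = 58  # typical OVN geneve overhead over IPv4
FAN_OVERHEAD = 50     # assumed overhead behind the fan bridge's 1450 default

def overlay_mtu(underlay_mtu, overhead):
    return underlay_mtu - overhead

nic_mtu = overlay_mtu(1500, GENEVE_OVERHEAD)      # 1442, as on the VM NIC
fan_default = overlay_mtu(1500, FAN_OVERHEAD)     # 1450, assumes a 1500-byte underlay
fan_correct = overlay_mtu(nic_mtu, FAN_OVERHEAD)  # what the bridge would need
# fan_default > nic_mtu is the reported problem: the fan bridge MTU
# exceeds the NIC it runs over, so encapsulated packets can be dropped.
```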

[Bug 1947063] [NEW] Missing dependency to sysfsutils

2021-10-13 Thread Nobuto Murata
Public bug reported: At the moment, python3-os-brick pulls iSCSI dependencies such as open-iscsi but doesn't pull FC dependencies such as sysfsutils at all. os-brick actively uses the "systool" command to detect HBAs and bails if it's not installed. It would be nice to add the sysfsutils package at least

[Bug 1940957] Re: DPDK ports get disabled after Open vSwitch restart with Intel XXV710(i40e) and 25G AOC cables

2021-09-03 Thread Nobuto Murata
** Description changed: - Ubuntu 20.04 LTS - dpdk 19.11.7-0ubuntu0.20.04.1 + - Ubuntu 20.04 LTS + - dpdk 19.11.7-0ubuntu0.20.04.1 + (we tested it with 19.11.10~rc1, but the problem persists) + - Intel XXV710 + - Cisco 25G AOC cables - We are seeing issues with link status of ports as

[Bug 1940957] Re: DPDK ports get disabled after Open vSwitch restart with Intel XXV710(i40e) and 25G AOC cables

2021-09-03 Thread Nobuto Murata
** Summary changed: - i40e: support 25G AOC/ACC cables + DPDK ports get disabled after Open vSwitch restart with Intel XXV710(i40e) and 25G AOC cables

[Bug 1940957] Re: i40e: support 25G AOC/ACC cables

2021-08-25 Thread Nobuto Murata
A test build for testing: https://launchpad.net/~nobuto/+archive/ubuntu/dpdk

[Bug 1940957] [NEW] i40e: support 25G AOC/ACC cables

2021-08-24 Thread Nobuto Murata
Public bug reported: Ubuntu 20.04 LTS dpdk 19.11.7-0ubuntu0.20.04.1 We are seeing issues with link status of ports as DPDK-bond members and those links suddenly go away and marked as down. There are multiple parameters that could cause this issue, but one of the suggestions we've got from a

[Bug 1939898] Re: Unnatended postgresql-12 upgrade caused MAAS internal error

2021-08-16 Thread Nobuto Murata
A deployment method improvement in the field will be tracked as a private bug LP: #1889498.

[Bug 1939898] Re: Unnatended postgresql-12 upgrade caused MAAS internal error

2021-08-16 Thread Nobuto Murata
Closing MAAS task since MAAS just connects to PostgreSQL through tcp connections. One correction to my previous statement: $ sudo lsof / | grep plpgsql.so postgres 21822 postgres memREG 252,1 202824 1295136 /usr/lib/postgresql/12/lib/plpgsql.so postgres 21948 postgres

[Bug 1939898] Re: Unnatended postgresql-12 upgrade caused MAAS internal error

2021-08-16 Thread Nobuto Murata
The only scenario I can think of is NOT restarting postgres after the package update. This could happen when the postgres process is managed outside of init (systemd), such as by pacemaker, etc. for HA purposes. $ sudo lsof / | grep plpgsql.so postgres 21822 postgres DELREG 252,1

[Bug 1870829] Re: AptX and AptX HD unavailable as Bluetooth audio quality options

2021-08-15 Thread Nobuto Murata
From a duplicate of this bug, as tldr: https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/1939933 > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597 > > > Does that mean that enabling it, would only add some dependencies but > > > not actually do anything? > > > > Yes, a (soft)

[Bug 1939933] Re: pulseaudio is not built with AptX support

2021-08-14 Thread Nobuto Murata
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597 > > Does that mean that enabling it, would only add some dependencies but > > not actually do anything? > > Yes, a (soft) dependency should probably be added against > gstreamer1.0-plugins-bad, but as I said, the needed version (>= 1.19)

[Bug 1939933] Re: pulseaudio is not built with AptX support

2021-08-14 Thread Nobuto Murata
** Bug watch added: Debian Bug tracker #991597 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597 ** Also affects: pulseaudio (Debian) via https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597 Importance: Unknown Status: Unknown

[Bug 1939933] [NEW] pulseaudio is not built with AptX support

2021-08-14 Thread Nobuto Murata
Public bug reported: The changelog mentions AptX, but it's not actually enabled in the build if I'm not mistaken. Aptx support seems to require gstreamer in the build dependency at least. [changelog] pulseaudio (1:15.0+dfsg1-1ubuntu1) impish; urgency=medium * New upstream version

[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2021-08-04 Thread Nobuto Murata
Previously reported as https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1904745

[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2021-08-04 Thread Nobuto Murata
root@casual-condor:/var/lib/nova# ll .ssh/ total 28 drwxr-xr-x 2 nova root 4096 Aug 3 10:43 ./ drwxr-xr-x 10 nova nova 4096 Aug 3 10:25 ../ -rw-r--r-- 1 root root 1197 Aug 3 10:54 authorized_keys -rw--- 1 nova root 1823 Aug 3 10:25 id_rsa -rw-r--r-- 1 nova root 400 Aug 3 10:25

[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2021-08-04 Thread Nobuto Murata
> Charms were not upgraded while this broke. We simply upgrade the packages. If that's the case, package maintainer script might be related? For example, $ grep /var/lib/nova /var/lib/dpkg/info/nova-common.postinst --home /var/lib/nova \ chown -R nova:nova /var/lib/nova/
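The postinst snippet above runs `chown -R` but doesn't touch file modes, so a key written with default permissions can stay at 0644, which ssh's strict-modes check rejects. A small sketch of that check — an assumption modeled on OpenSSH's behavior, not nova's or the maintainer script's actual code:

```python
# Sketch (assumption modeled on ssh strict modes): a private key must
# not be group- or world-readable, i.e. 0644 is too open and 0600 is ok.
import os
import stat
import tempfile

def is_private_key_mode_ok(path):
    """True if the file has no group/other permission bits set."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0

with tempfile.NamedTemporaryFile(delete=False) as f:
    key = f.name
os.chmod(key, 0o644)
too_open = is_private_key_mode_ok(key)  # 0644: rejected
os.chmod(key, 0o600)
fixed = is_private_key_mode_ok(key)     # 0600: accepted
os.unlink(key)
```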

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
[focal-victoria] All of the uploads succeeded. And -proposed shortened time for the larger sizes. $ sudo apt-get install python3-glance-store/focal-proposed $ sudo systemctl restart glance-api $ apt-cache policy python3-glance-store python3-glance-store: Installed: 2.3.0-0ubuntu1~cloud1

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
[bionic-ussuri] All of the uploads succeeded. And -proposed shortened time for the larger sizes. $ sudo apt-get install python3-glance-store/bionic-proposed $ sudo systemctl restart glance-api $ apt-cache policy python3-glance-store python3-glance-store: Installed: 2.0.0-0ubuntu2~cloud0

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
Just for the record, this is the current status with focal-victoria. No diff between -updates and -proposed. $ apt-cache policy python3-glance-store python3-glance-store: Installed: 2.3.0-0ubuntu1~cloud0 Candidate: 2.3.0-0ubuntu1~cloud0 Version table: *** 2.3.0-0ubuntu1~cloud0 500

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
@Corey, Somehow the binary package for cloud-archive:victoria-proposed is not published yet. Can you please double-check the build status of the package? I just don't know where to look. cloud1 in the source vs cloud0 in the binary. $ curl -s

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-26 Thread Nobuto Murata
[focal-wallaby] All of the uploads succeeded. And -proposed shortened time for the larger sizes. $ sudo apt-get install python3-glance-store/focal-proposed $ sudo systemctl restart glance-api $ apt-cache policy python3-glance-store python3-glance-store: Installed: 2.5.0-0ubuntu2~cloud0

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-26 Thread Nobuto Murata
[focal] All of the uploads succeeded. And -proposed shortened time for the larger sizes. $ sudo apt-get install python3-glance-store/focal-proposed $ sudo systemctl restart glance-api $ apt-cache policy python3-glance-store python3-glance-store: Installed: 2.0.0-0ubuntu2 Candidate:

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-26 Thread Nobuto Murata
[hirsute] All of the uploads succeeded. And -proposed shortened time for the larger sizes. $ sudo apt-get install python3-glance-store/hirsute-proposed $ sudo systemctl restart glance-api $ apt-cache policy python3-glance-store python3-glance-store: Installed: 2.5.0-0ubuntu2 Candidate:

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-20 Thread Nobuto Murata
My update in the bug description was somehow rolled back (by me according to the record), trying again. ** Description changed: [Impact] - [Test Case] - I have a test Ceph cluster as an object storage with both Swift and S3 protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-19 Thread Nobuto Murata
** Description changed: [Impact] - - Glance with S3 backend cannot accept image uploads in a realistic time - frame. For example, an 1GB image upload takes ~60 minutes although other - backends such as swift can complete it with 10 seconds. - - [Test Plan] - - 1. Deploy a partial OpenStack

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-19 Thread Nobuto Murata
** Description changed: [Impact] - [Test Case] - I have a test Ceph cluster as an object storage with both Swift and S3 protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an image upload completes quickly enough. But with S3 backend Glance, it takes much more time

[Bug 1885169] Re: Some arping version only accept integer number as -w argument

2021-07-16 Thread Nobuto Murata
It's likely iputils-arping. $ apt rdepends arping arping Reverse Depends: Conflicts: iputils-arping Depends: netconsole Depends: ifupdown-extra $ apt rdepends iputils-arping iputils-arping Reverse Depends: Depends: neutron-l3-agent Recommends: python3-networking-arista Recommends:

[Bug 1885169] Re: Some arping version only accept integer number as -w argument

2021-07-16 Thread Nobuto Murata
On focal, there are two packages offering an arping binary: [iputils-arping(main)] $ sudo arping -U -I eth0 -c 1 -w 1.5 10.48.98.1 arping: invalid argument: '1.5' [arping(universe)] $ arping -U -I eth0 -c 1 -w 1.5 10.48.98.1 ARPING 10.48.98.1 I don't know which one our charms install.
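The incompatibility shown above reduces to argument parsing: iputils-arping parses `-w` as an integer, while the universe arping (Thomas Habets') accepts a float. These are simplified Python models of the two behaviors, not the actual C parsers:

```python
# Simplified models of the two arping implementations' -w parsing.
def iputils_timeout(arg):
    """iputils-arping: integer seconds only; "1.5" is rejected."""
    try:
        return int(arg)
    except ValueError:
        raise SystemExit(f"arping: invalid argument: '{arg}'")

def habets_timeout(arg):
    """universe arping: fractional seconds are accepted."""
    return float(arg)

# iputils_timeout("1.5") exits with an error; habets_timeout("1.5") is fine.
```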

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-11 Thread Nobuto Murata
Subscribing Canonical's ~field-high to initiate the Ubuntu package's SRU process in a timely manner.

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
> I *think* hash calculation and verifier have to be outside of the loop to avoid the overhead. I will confirm it with a manual testing. This hypothesis wasn't true, it was really about the chunk size.

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
I *think* hash calculation and verifier have to be outside of the loop to avoid the overhead. I will confirm it with a manual testing. for chunk in utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE): image_data += chunk image_size += len(chunk)

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
Yeah, I put the same config on purpose for both s3 and swift. But tweaking large_object_size didn't make any difference. [swift] large_object_size = 5120 large_object_chunk_size = 200 [s3] s3_store_large_object_size = 5120 s3_store_large_object_chunk_size = 200 After digging into the actual

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
And by using "4 * units.Mi" it can be 20s.

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
Okay, as the utils.chunkreadable loop is taking time I've tried a larger WRITE_CHUNKSIZE by hand. It can decrease the amount of time of uploading a 512MB image from 14 minutes to 60 seconds. $ git diff diff --git a/glance_store/_drivers/s3.py b/glance_store/_drivers/s3.py index 1c18531..576c573
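The chunk-size effect is easy to reproduce in isolation. A sketch with illustrative sizes (not the driver's real constants) showing how a larger chunk size cuts the number of loop iterations, and with them the repeated bytes concatenations the driver's loop performs on each pass:

```python
# Sketch: the same stream read with a small vs a large chunk size.
# Sizes are illustrative, not glance_store's actual constants.
import io

def count_chunks(image_file, chunk_size):
    """Iterate the stream the way a chunked reader would."""
    iterations = 0
    while image_file.read(chunk_size):
        iterations += 1
    return iterations

image = b"x" * (8 * 1024 * 1024)                          # pretend 8 MiB image
small = count_chunks(io.BytesIO(image), 64 * 1024)        # 64 KiB chunks
large = count_chunks(io.BytesIO(image), 4 * 1024 * 1024)  # 4 MiB chunks
# Each iteration that also does `image_data += chunk` copies the whole
# buffer so far, which is why many small chunks blow up the runtime.
```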

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
The code part in question is this for loop: https://opendev.org/openstack/glance_store/src/branch/stable/ussuri/glance_store/_drivers/s3.py#L638-L644 2021-07-07 11:50:06.735 - def _add_singlepart 2021-07-07 11:50:06.736 - getting into utils.chunkreadable loop 2021-07-07 11:50:06.736 - loop

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
S3 performance itself is not bad; uploading a 512MB object completes within a few seconds. So I suspect the issue is in how the Glance S3 driver uses boto3.

$ time python3 upload.py
real    0m3.644s
user    0m3.124s
sys     0m1.835s

$ cat upload.py
import boto3
s3 = boto3.client(
    "s3",

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-06 Thread Nobuto Murata
Debug log from uploading a 512MB image with the S3 backend.

** Attachment added: "glance-api.log"
   https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1934849/+attachment/5509534/+files/glance-api.log

** Also affects: glance-store
   Importance: Undecided
   Status: New

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-06 Thread Nobuto Murata
python3-boto3 1.9.253-1

[Bug 1934849] [NEW] s3 backend takes time exponentially

2021-07-06 Thread Nobuto Murata
Public bug reported: I have a test Ceph cluster serving as object storage, with both the Swift and S3 protocols enabled for Glance (Ussuri). When I use the Swift backend with Glance, an image upload completes quickly enough. But with the S3 backend, it takes much more time to upload an image and it seems to

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16) or "snapd" snap

2021-04-14 Thread Nobuto Murata
Now that "snapd" snap is seeded into the base image of focal along with core18 for "lxd" snap . That actually solves the original issue in a different way. We no longer have to upload "snapd" snap using a charm resource. Bionic is still affected, but I don't think it's common for new deployments

[Bug 1903221] Re: ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-NN/: (11) Resource temporarily unavailable

2021-04-05 Thread Nobuto Murata
Adding the Ubuntu Ceph packaging task here. The 30-ceph-osd.conf file is owned by the ceph-osd package as follows.

$ dpkg -S /etc/sysctl.d/30-ceph-osd.conf
ceph-osd: /etc/sysctl.d/30-ceph-osd.conf

However, as far as I can see in 15.2.8-0ubuntu0.20.04.1/focal there is no place in

[Bug 1657202] Re: OpenStack not installed with consistent uid/gid for glance/cinder/nova users

2021-02-28 Thread Nobuto Murata
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983635
> 64065 | gnocchi | Gnocchi - Metric as a Service

** Bug watch added: Debian Bug tracker #983635
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983635

[Bug 1657202] Re: OpenStack not installed with consistent uid/gid for glance/cinder/nova users

2021-02-25 Thread Nobuto Murata
Here is the current maintainer code: https://git.launchpad.net/ubuntu/+source/openstack-pkg-tools/tree/pkgos_func?h=ubuntu/focal-proposed#n786 and the previous upstream bug in Debian: https://bugs.debian.org/884178

[Bug 1657202] Re: OpenStack not installed with consistent uid/gid for glance/cinder/nova users

2021-02-25 Thread Nobuto Murata
Excuse me for reviving an old bug report, but Gnocchi also requires a static uid/gid to support the NFS use case. https://gnocchi.xyz/intro.html
> If you need to scale the number of servers with the file driver, you can
> export and share the data via NFS among all Gnocchi processes.
** Also

[Bug 1916610] Re: MAAS booting with ga-20.04 kernel results in nvme bcache device can't be registered

2021-02-23 Thread Nobuto Murata
Initially we thought we were hit by https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1910201 but it looks like some patches are already in the focal GA kernel, e.g. https://kernel.ubuntu.com/git/ubuntu/ubuntu-focal.git/commit/?id=d256617be44956fe4f048295a71b31d44d9104d9

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16) or "snapd" snap

2021-01-06 Thread Nobuto Murata
** Summary changed:
- snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16)
+ snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd

[Bug 1908770] [NEW] [pull-pkg] unnecessary crash report with ubuntutools.pullpkg.InvalidPullValueError in parse_pull(): Must specify --pull

2020-12-18 Thread Nobuto Murata
Public bug reported: When invoking the command without `--pull`, the command tells me "Must specify --pull", which is good. However, it also triggers the crash-file collection process (writing /var/crash/_usr_bin_pull-pkg.1000.crash) and prints an unnecessary traceback as follows.

$ pull-pkg my-package

[Bug 1900668] Re: MAAS PXE Boot stalls with grub 2.02

2020-12-09 Thread Nobuto Murata
** Tags added: ps5

[Bug 1900668] Re: MAAS PXE Boot stalls with grub 2.02

2020-12-07 Thread Nobuto Murata
> Michał Ajduk based on the above comments #13 and #15, can you please confirm if there is anything outstanding or not working w.r.t. this issue?

Michał's comment #17 invalidated comment #13, so there is still an outstanding issue here indeed.

** Changed in: maas-images
   Status:

[Bug 1900668] Re: MAAS PXE Boot stalls with grub 2.02

2020-11-13 Thread Nobuto Murata
** Also affects: maas-images
   Importance: Undecided
   Status: New

[Bug 1773765] Re: There is a possibility that 'running' notification will remain

2020-09-16 Thread Nobuto Murata
** Also affects: masakari (Ubuntu)
   Importance: Undecided
   Status: New

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16)

2020-08-20 Thread Nobuto Murata
> On top of the already existing snaps, can you also download the snapd snap
> and ack its assertion?
>
> So the command sequence looks like this:
>
> snap download snapd
> snap download core18
> snap download etcd
>
> # on the host
> snap ack ./snapd_*.assert
> snap ack ./core18_*.assert
>

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16)

2020-08-19 Thread Nobuto Murata
** Summary changed:
- snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments
+ snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16)

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments

2020-08-19 Thread Nobuto Murata
Hi,

> In this environment, can you show all of the places that snapd could be
> re-execing to?

The assumption here is that the unit is deployed by Juju and based on a cloud image instead of a standard desktop or server image. Thus, there is no core snap available out of the box.

> What is the
