Today I did some more tests and investigations.

First of all, I moved to the new mantic ISO image (20230928), which improved
things quite a lot!
The installation (on z/VM so far) is again very snappy and quick,
including the enablement of a DASD device in the zDev activation screen.

At the end of the installation, before hitting 'Reboot now', I went to
the console and had a look at top to see whether any udev-related
processes were active, but I couldn't identify any:

top - 10:47:51 up 9 min,  2 users,  load average: 0.44, 0.41, 0.20
Tasks: 137 total,   1 running, 136 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   3988.9 total,    214.3 free,   1957.6 used,   3233.2 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   2031.3 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  15026 root      20   0    8788   4864   2816 R   0.7   0.1   0:00.28 top
   1439 root      20   0  129824  74216  21376 S   0.3   1.8   0:03.76 /snap/subiquity/5183/usr/bin/python3.10 /snap/subiquity/518+
   1441 root      20   0  129788  74312  21376 S   0.3   1.8   0:03.57 /snap/subiquity/5183/usr/bin/python3.10 /snap/subiquity/518+
   2381 root      20   0   12216   5760   4864 S   0.3   0.1   0:00.10 sudo snap run subiquity
   2383 root      20   0  206204  77680  21632 S   0.3   1.9   0:04.96 /snap/subiquity/5183/usr/bin/python3.10 -m subiquity
      1 root      20   0  103300  13676   8940 S   0.0   0.3   0:03.49 /sbin/init ---
      2 root      20   0       0      0      0 S   0.0   0.0   0:00.00 [kthreadd]
      3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 [rcu_gp]
      4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 [rcu_par_gp]
      5 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 [slub_flushwq]
      6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 [netns]
      8 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 [kworker/0:0H-events_highpri]
      9 root      20   0       0      0      0 I   0.0   0.0   0:00.00 [kworker/0:1-cgroup_destroy]
     10 root      20   0       0      0      0 I   0.0   0.0   0:01.11 [kworker/u128:0-events_unbound]
     11 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 [mm_percpu_wq]
     12 root      20   0       0      0      0 I   0.0   0.0   0:00.00 [rcu_tasks_rude_kthread]
     13 root      20   0       0      0      0 I   0.0   0.0   0:00.00 [rcu_tasks_trace_kthread]

Then, after the post-install reboot had completed, looking at top I
could observe the udev activity:

top - 10:55:26 up 6 min,  1 user,  load average: 2.16, 1.75, 0.84
Tasks: 108 total,   5 running, 103 sleeping,   0 stopped,   0 zombie
%Cpu(s): 18.1 us, 22.7 sy,  0.0 ni, 48.8 id,  8.1 wa,  0.6 hi,  0.8 si,  0.9 st
MiB Mem :   3988.9 total,   3274.2 free,    271.4 used,    539.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   3717.5 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1177 root      20   0   24884   5724   2560 R  33.2   0.1   1:50.24 (udev-worker)
      1 root      20   0  168592  12912   8432 R  20.6   0.3   1:07.93 /sbin/init
    690 root      20   0  467964  13112  10752 D  15.6   0.3   0:52.91 /usr/libexec/udisks2/udisksd
    660 message+  20   0    9328   4480   3840 S  14.6   0.1   0:52.68 @dbus-daemon --system --address=systemd: --nofork --nopidfile -+
  16285 ubuntu    20   0   18004   9984   8448 S  10.6   0.2   0:33.06 /lib/systemd/systemd --user
    385 root      20   0   24764   7492   4676 S   9.3   0.2   0:30.40 /lib/systemd/systemd-udevd
    373 root      rt   0  288412  26368   8064 S   6.6   0.6   0:21.50 /sbin/multipathd -d -s
    686 root      20   0   16096   7424   6528 S   4.0   0.2   0:13.20 /lib/systemd/systemd-logind
    681 root      20   0 2066332  34480  21248 S   1.7   0.8   0:05.48 /usr/lib/snapd/snapd
     14 root      20   0       0      0      0 S   0.7   0.0   0:01.90 [ksoftirqd/0]
     20 root      20   0       0      0      0 S   0.7   0.0   0:01.87 [ksoftirqd/1]
     25 root      20   0       0      0      0 S   0.7   0.0   0:01.27 [ksoftirqd/2]
    333 root      19  -1   56736  23936  23040 S   0.7   0.6   0:03.42 /lib/systemd/systemd-journald
     30 root      20   0       0      0      0 S   0.3   0.0   0:01.32 [ksoftirqd/3]
    104 root       0 -20       0      0      0 I   0.3   0.0   0:00.55 [kworker/1:1H-kblockd]
    138 root       0 -20       0      0      0 I   0.3   0.0   0:00.39 [kworker/2:1H-kblockd]
    256 root      20   0       0      0      0 S   0.3   0.0   0:00.55 [jbd2/dasda1-8]

I did the same (on the exact same system) with jammy 22.04.3: nothing
suspicious about udev, not even after the post-install reboot.
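For a quicker spot check than the full top UI, the udev CPU load can also be sampled with a one-liner; this is just a sketch, and the 5% threshold is an arbitrary value picked for illustration:

```shell
# Count processes whose command name contains "udev" and that are
# currently above 5% CPU (threshold chosen arbitrarily for this sketch).
ps -eo pcpu,comm --sort=-pcpu \
  | awk 'NR > 1 && $2 ~ /udev/ && $1 + 0 > 5 { c++ } END { print c + 0 }'
```

On a healthy system this should print 0; during the problem phase it should report the busy udev workers.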

On mantic I've noticed a flood of warning messages like:
root@hwe0003:~# udisksctl monitor
** (udisksctl monitor:424911): WARNING **: 11:14:33.100: (udisksctl.c:2811):monitor_on_interface_proxy_properties_changed: runtime check failed: (g_strv_length ((gchar **) invalidated_properties) == 0)
^C
(endlessly repeating messages)
which I think fits your observations, Dan.
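To quantify the flood rather than eyeball it, one could count the warnings over a fixed window; a rough sketch, assuming the flood reproduces while it runs:

```shell
# Run the monitor for 10 seconds and count the runtime-check warnings;
# the warnings go to stderr, hence the 2>&1 redirect.
timeout 10 udisksctl monitor 2>&1 | grep -c 'runtime check failed'
```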

I've searched upstream for any already known udisks2 issue, but couldn't
find anything that fits ...

Interestingly, I get everything back to normal with just a 'systemd-udevd' restart:
# systemctl restart systemd-udevd

top - 12:16:45 up 20 min,  3 users,  load average: 0.00, 0.33, 0.72
Tasks: 110 total,   1 running, 109 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   3988.9 total,   3260.3 free,    274.9 used,    549.6 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   3714.0 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 166794 root      20   0   11784   5248   3200 R   0.7   0.1   0:00.65 top
      1 root      20   0  103052  13040   8560 S   0.0   0.3   1:49.42 systemd
      2 root      20   0       0      0      0 S   0.0   0.0   0:00.00 kthreadd
      3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_gp
      4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_par_gp
      5 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 slub_flushwq
      6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 netns
      8 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H-events_highpri
      9 root      20   0       0      0      0 I   0.0   0.0   0:00.39 kworker/0:1-rcu_par_gp
     11 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 mm_percpu_wq
     12 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_tasks_rude_kthread
     13 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_tasks_trace_kthread
     14 root      20   0       0      0      0 S   0.0   0.0   0:02.99 ksoftirqd/0
     15 root      20   0       0      0      0 I   0.0   0.0   0:00.16 rcu_sched
     16 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 migration/0
     17 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/0
     18 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/1

After a reboot, I'm hitting the same situation again:

top - 12:23:24 up 1 min,  1 user,  load average: 2.48, 0.86, 0.31
Tasks: 138 total,   4 running, 134 sleeping,   0 stopped,   0 zombie
%Cpu(s): 19.7 us, 24.1 sy,  0.0 ni, 47.8 id,  6.0 wa,  0.6 hi,  0.9 si,  1.0 st
MiB Mem :   3988.9 total,   3527.3 free,    285.1 used,    268.7 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   3703.8 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    375 root      20   0   25096   5860   2816 R  33.6   0.1   0:30.19 (udev-worker)
      1 root      20   0  101752  11760   8560 S  20.9   0.3   0:19.96 /sbin/init
    591 root      20   0  467924  13060  10752 R  17.9   0.3   0:16.32 /usr/libexec/udisks2/udisksd
    558 message+  20   0    9316   4608   3840 S  16.9   0.1   0:15.39 @dbus-daemon --system --address=systemd: --nofork --nopid+
  12185 ubuntu    20   0   17996   9984   8448 S  12.3   0.2   0:07.06 /lib/systemd/systemd --user
    373 root      20   0   24116   6784   4608 S   9.6   0.2   0:09.17 /lib/systemd/systemd-udevd
    358 root      rt   0  288412  26624   8192 S   6.6   0.7   0:06.34 /sbin/multipathd -d -s
    589 root      20   0   16268   7424   6528 S   4.3   0.2   0:04.15 /lib/systemd/systemd-logind
    585 root      20   0 1918612  31388  20352 S   2.0   0.8   0:01.89 /usr/lib/snapd/snapd
    324 root      19  -1   40344  13880  13112 S   1.0   0.3   0:01.04 /lib/systemd/systemd-journald
     14 root      20   0       0      0      0 S   0.7   0.0   0:00.45 [ksoftirqd/0]
     20 root      20   0       0      0      0 S   0.3   0.0   0:00.43 [ksoftirqd/1]
     25 root      20   0       0      0      0 S   0.3   0.0   0:00.47 [ksoftirqd/2]
     30 root      20   0       0      0      0 S   0.3   0.0   0:00.43 [ksoftirqd/3]
     36 root      20   0       0      0      0 I   0.3   0.0   0:00.10 [kworker/0:2-rcu_gp]
     61 root      20   0       0      0      0 I   0.3   0.0   0:00.17 [kworker/u128:2-flush-94:0]
     71 root      20   0       0      0      0 I   0.3   0.0   0:00.07 [kworker/3:1-events]

Restart (or stop / start) of "systemd-udevd" helps again.
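The effect of the restart can be captured a bit more reproducibly than by watching top; here is a minimal sketch (`udevadm settle` just blocks until the udev event queue has drained, and the 60-second sleep is an arbitrary settling period for the load average):

```shell
# Record the 1-minute load average, restart udevd, wait for the event
# queue to drain, give the load average time to decay, then record again.
before=$(cut -d' ' -f1 /proc/loadavg)
systemctl restart systemd-udevd
udevadm settle --timeout=30
sleep 60
after=$(cut -d' ' -f1 /proc/loadavg)
echo "load1: before=$before after=$after"
```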

I'm wondering now:
- what is different during the installation that prevents udev from going crazy?
  (is systemd-udevd, for example, restarted during the installation at some
point in time?)
- and why does the initial start of systemd-udevd cause a different behavior
than a later restart? (is there something missing when it's started at boot
time?)
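One way to probe the first question would be to check the journal for udevd unit state changes since boot; a hypothetical sketch (the exact journal message wording may differ between systemd versions):

```shell
# Count how many times the systemd-udevd unit was started this boot;
# more than one "Starting" entry would indicate an in-between restart.
journalctl -b -u systemd-udevd --no-pager | grep -c 'Starting'
```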

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to udisks2 in Ubuntu.
https://bugs.launchpad.net/bugs/2037569

Title:
  udev issues with mantic beta

Status in Ubuntu on IBM z Systems:
  Confirmed
Status in systemd package in Ubuntu:
  Confirmed
Status in udisks2 package in Ubuntu:
  Confirmed

Bug description:
  While installing mantic beta (on s390x, LPAR and z/VM - but this might not
be architecture-specific) I faced issues with udev.
  In my installation I've updated the installer to "edge/lp-2009141"
(subiquity 22.02.2+git1762.1b1ee6f4 5164).

  During my installations I first noticed bad response times in case of
  dealing with devices (like enabling new devices with chzdev). chzdev
  is used during the installation, hence the installation procedure is
  also affected by this. (I mainly notice this issue in case of DASD
  ECKD disk enablements.)

  But even after a successful (though, due to this issue, less snappy)
installation, i.e. after the post-install reboot, I can find several
udev-related processes in the installed system, like:
    69448 root      20   0   31280  11944   2560 S  39.2   0.0   2:51.67 (udev-worker)
      509 root      20   0   31276  13812   4600 S  20.6   0.0   2:07.76 systemd-udevd
      893 root      20   0  469016  13544  10496 R  17.3   0.0   1:43.53 udisksd
        1 root      20   0  168664  12748   8396 S  16.3   0.0   1:40.47 systemd
  which is not only unusual, but (as one can see) they consume quite some
resources.
  Even remote ssh access into that system is impacted by this high load.

  So far I only see this in mantic.
  I tried 20.04.3 as well as lunar, but neither seems to be affected by this
udev problem.
  I neither face the bad response times on device enablement, nor can I see
any udev-related processes still running after the post-install reboot in the
installed system.

  (Sometimes I could also see a growing 'syslog' log file.)

  I cannot say yet what is causing this, but since I see 'systemd-udevd'
  as a prominent process in top, I'll first of all mark this as affecting
  systemd-udevd (or systemd).

  I've attached the outcome of some more investigations I did ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/2037569/+subscriptions

