TadiosAbebe commented on issue #11141:
URL: https://github.com/apache/cloudstack/issues/11141#issuecomment-3575316479

   @weizhouapache, very interesting. I checked the resource utilization of the Java process on one of the hosts currently exhibiting the issue:
   ```
   # run on host3
   pid=$(pgrep -f /usr/bin/java)
   echo "CPU: $(ps -p $pid -o %cpu --no-headers)%"
   echo "MEM: $(ps -p $pid -o %mem --no-headers)%"
   echo "FD: $(ls /proc/$pid/fd 2>/dev/null | wc -l)"
   echo "Threads: $(ps -L -p $pid -o tid --no-headers 2>/dev/null | wc -l)"
   echo "Conn: $(ss -tanp | grep "pid=$pid" | grep ESTAB | wc -l)"
   ```
   which printed:
   ```
   CPU: 73.9%
   MEM:  0.1%
   FD: 253
   Threads: 134
   Conn: 1
   ```
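   (Caveat on my own numbers: `ps -o %cpu` reports the CPU-time/elapsed-time ratio over the whole lifetime of the process, so 73.9% is a lifetime average rather than the current load. For a live per-second view, something like the following works, assuming the sysstat package is installed on the host:)
   ```
   pid=$(pgrep -f /usr/bin/java)
   # sample the process's CPU usage once per second, ten times
   pidstat -p $pid 1 10
   ```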
   I also have a test all-in-one ACS installation on Ubuntu 24.04 with libvirt 10.0.0, but there I couldn't reproduce the issue I'm seeing in the production environment. I repeatedly ran your test script:
   ```
   for i in $(seq 1 20); do
       # redirect before '&' so the deploy output is actually discarded
       cmk deploy virtualmachine name=L2-wei-test-$i serviceofferingid=xxx zoneid=xxx templateid=xxx networkids=xxx >/dev/null &
       sleep 2
   done
   ```
   to generate load, and the results were consistently fast:
   ```
   mysql> select id,name,created,update_time,(update_time-created) from vm_instance where removed is null and name like "L2-wei%";
   +-----+----------------+---------------------+---------------------+-----------------------+
   | id  | name           | created             | update_time         | (update_time-created) |
   +-----+----------------+---------------------+---------------------+-----------------------+
   | 191 | L2-wei-test-1  | 2025-11-25 11:22:07 | 2025-11-25 11:22:14 |                     7 |
   | 192 | L2-wei-test-2  | 2025-11-25 11:22:09 | 2025-11-25 11:22:16 |                     7 |
   | 193 | L2-wei-test-3  | 2025-11-25 11:22:11 | 2025-11-25 11:22:17 |                     6 |
   | 194 | L2-wei-test-4  | 2025-11-25 11:22:13 | 2025-11-25 11:22:22 |                     9 |
   | 195 | L2-wei-test-5  | 2025-11-25 11:22:15 | 2025-11-25 11:22:20 |                     5 |
   | 196 | L2-wei-test-6  | 2025-11-25 11:22:17 | 2025-11-25 11:22:23 |                     6 |
   | 197 | L2-wei-test-7  | 2025-11-25 11:22:19 | 2025-11-25 11:22:26 |                     7 |
   | 198 | L2-wei-test-8  | 2025-11-25 11:22:21 | 2025-11-25 11:22:27 |                     6 |
   | 199 | L2-wei-test-9  | 2025-11-25 11:22:23 | 2025-11-25 11:22:29 |                     6 |
   | 200 | L2-wei-test-10 | 2025-11-25 11:22:25 | 2025-11-25 11:22:31 |                     6 |
   | 201 | L2-wei-test-11 | 2025-11-25 11:22:27 | 2025-11-25 11:22:34 |                     7 |
   | 202 | L2-wei-test-12 | 2025-11-25 11:22:29 | 2025-11-25 11:22:36 |                     7 |
   | 203 | L2-wei-test-13 | 2025-11-25 11:22:31 | 2025-11-25 11:22:38 |                     7 |
   | 204 | L2-wei-test-14 | 2025-11-25 11:22:33 | 2025-11-25 11:22:41 |                     8 |
   | 205 | L2-wei-test-15 | 2025-11-25 11:22:35 | 2025-11-25 11:22:42 |                     7 |
   | 206 | L2-wei-test-16 | 2025-11-25 11:22:37 | 2025-11-25 11:22:45 |                     8 |
   | 207 | L2-wei-test-17 | 2025-11-25 11:22:39 | 2025-11-25 11:22:48 |                     9 |
   | 208 | L2-wei-test-18 | 2025-11-25 11:22:41 | 2025-11-25 11:22:49 |                     8 |
   | 209 | L2-wei-test-19 | 2025-11-25 11:22:43 | 2025-11-25 11:22:51 |                     8 |
   | 210 | L2-wei-test-20 | 2025-11-25 11:22:45 | 2025-11-25 11:22:55 |                    10 |
   +-----+----------------+---------------------+---------------------+-----------------------+
   ```
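   (One small aside on the query itself: subtracting DATETIME values numerically only behaves within the same minute, e.g. 11:23:02 - 11:22:55 comes out as 47 rather than 7. It happened not to matter for these rows, but TIMESTAMPDIFF is the safe way to get the duration in seconds:)
   ```
   select id, name, created, update_time,
          timestampdiff(second, created, update_time) as deploy_seconds
   from vm_instance
   where removed is null and name like "L2-wei%";
   ```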
   Later this week, I'll try to build a full test environment with Ceph storage and multiple KVM hosts, to see if I can replicate the issue there and whether libvirt 10.6.0 fixes it.

