gw769 opened a new issue, #12090:
URL: https://github.com/apache/cloudstack/issues/12090
### problem
### Environment
- CloudStack Version: 4.20.1.0
- CPU Architecture: aarch64
- Hypervisor: KVM
### Problem Description
When attaching multiple disks to an aarch64 VM, CloudStack allocates one `virtio-scsi` controller for every 7 disks, but it incorrectly adds an **extra lsilogic SCSI controller** instead. This causes the **last disk** to be unrecognized after a VM reboot.
**Key Evidence:** The second SCSI controller switches from `virtio-scsi` to `lsilogic` after reboot.
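To make the 7-disk boundary concrete, here is a minimal sketch (an assumption about the allocation scheme described above, not CloudStack's actual code) of the device-id to controller/unit mapping; under it, the hot-added disk with device id 7 lands on controller 1, unit 0, which matches the `controller='1' ... unit='0'` address in the attachments.

```java
// Illustrative sketch only -- an assumption based on the "one virtio-scsi
// controller per 7 disks" behaviour described above, not CloudStack's code.
public class ScsiControllerMapping {

    // Assumed: 7 drive units are placed on each SCSI controller.
    private static final int UNITS_PER_CONTROLLER = 7;

    static int controllerIndex(int deviceId) {
        return deviceId / UNITS_PER_CONTROLLER; // device id 7 -> controller 1
    }

    static int unitOnController(int deviceId) {
        return deviceId % UNITS_PER_CONTROLLER; // device id 7 -> unit 0
    }

    public static void main(String[] args) {
        // Device ids taken from the dumpxml output in the attachments.
        for (int deviceId : new int[] {0, 1, 2, 4, 5, 6, 7}) {
            System.out.printf("deviceId=%d -> controller=%d, unit=%d%n",
                    deviceId, controllerIndex(deviceId), unitOnController(deviceId));
        }
    }
}
```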
### Reproduction Steps
1. Deploy an aarch64 VM with 5 disks → all disks are recognized correctly under the `virtio-scsi` controller
2. Hot-add a 6th disk while the VM is running → it works temporarily and a second `virtio-scsi` controller is created
3. Shut down and restart the VM → the second controller becomes `lsilogic` and the 6th disk disappears
### Expected Behavior
All disks assigned to `virtio-scsi` controllers, no `lsilogic` controller
added.
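One way to verify this expectation against a running VM (a minimal sketch; it assumes the domain XML was first saved with `virsh dumpxml i-2-971-VM > i-2-971-VM.xml`, and the file name is only an example) is to flag any SCSI controller whose model is not `virtio-scsi`:

```java
// Minimal sketch: flag any <controller type='scsi'> whose model is not
// virtio-scsi in a saved libvirt domain XML dump. The file name is an example.
import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class CheckScsiModels {
    public static void main(String[] args) throws Exception {
        File dump = new File(args.length > 0 ? args[0] : "i-2-971-VM.xml");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(dump);

        NodeList controllers = doc.getElementsByTagName("controller");
        for (int i = 0; i < controllers.getLength(); i++) {
            Element c = (Element) controllers.item(i);
            if ("scsi".equals(c.getAttribute("type"))
                    && !"virtio-scsi".equals(c.getAttribute("model"))) {
                System.out.printf("SCSI controller index=%s has model=%s (expected virtio-scsi)%n",
                        c.getAttribute("index"), c.getAttribute("model"));
            }
        }
    }
}
```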
### Actual Behavior (with Evidence)
```xml
<!-- After reboot (wrong): -->
<controller type='scsi' index='1' model='lsilogic'>
<!-- Expected: -->
<controller type='scsi' index='1' model='virtio-scsi'>
```
### Related Code
PR #9823 (the bug appears to be in the final part of that implementation)
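The reboot behaviour suggests that when the domain XML is regenerated on a cold start, only the first SCSI controller may be defined explicitly, leaving libvirt to add the second one with its default `lsilogic` model. The sketch below is a hypothetical illustration of that fix direction (it is not the code from PR #9823 or CloudStack's actual implementation): emit an explicitly modelled `virtio-scsi` controller for every index referenced by the attached disks.

```java
// Hypothetical sketch, not CloudStack's actual implementation: build one explicit
// virtio-scsi <controller> element per controller index referenced by the attached
// disks, so libvirt never auto-adds a controller with its default (lsilogic) model.
import java.util.List;

public class ScsiControllerXml {

    static String buildScsiControllers(List<Integer> diskDeviceIds) {
        // Highest controller index needed, assuming 7 units per controller.
        int maxIndex = diskDeviceIds.stream()
                .mapToInt(id -> id / 7)
                .max()
                .orElse(0);

        StringBuilder xml = new StringBuilder();
        for (int index = 0; index <= maxIndex; index++) {
            xml.append("<controller type='scsi' index='").append(index)
               .append("' model='virtio-scsi'/>\n");
        }
        return xml.toString();
    }

    public static void main(String[] args) {
        // Device ids from the report, including the hot-added disk (device id 7).
        System.out.print(buildScsiControllers(List.of(0, 1, 2, 4, 5, 6, 7)));
    }
}
```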
### Attachments
```xml
Step 1: Initial State (5 disks attached)
root@NODE159:/var/log/cloudstack/agent# virsh dumpxml i-2-971-VM | grep controller
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
<address type='drive' controller='0' bus='0' target='0' unit='4'/>
<address type='drive' controller='0' bus='0' target='0' unit='5'/>
<address type='drive' controller='0' bus='0' target='0' unit='6'/>
<controller type='usb' index='0' model='qemu-xhci'>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
</controller>
<controller type='pci' index='0' model='pcie-root'>
</controller>
<controller type='virtio-serial' index='0'>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
</controller>
<controller type='pci' index='7' model='pcie-to-pci-bridge'>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
</controller>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
Step 2: Hot-add the 6th disk while the VM is running (its device id is 7)
root@NODE159:/var/log/cloudstack/agent# virsh dumpxml i-2-971-VM | grep controller
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
<address type='drive' controller='0' bus='0' target='0' unit='4'/>
<address type='drive' controller='0' bus='0' target='0' unit='5'/>
<address type='drive' controller='0' bus='0' target='0' unit='6'/>
<address type='drive' controller='1' bus='0' target='0' unit='0'/>
<controller type='usb' index='0' model='qemu-xhci'>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
</controller>
<controller type='scsi' index='1' model='virtio-scsi'>
</controller>
<controller type='pci' index='0' model='pcie-root'>
</controller>
<controller type='virtio-serial' index='0'>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
</controller>
<controller type='pci' index='7' model='pcie-to-pci-bridge'>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
</controller>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
Step 3: Reboot VM
root@NODE159:/var/log/cloudstack/agent# virsh dumpxml i-2-971-VM | grep controller
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
<address type='drive' controller='0' bus='0' target='0' unit='4'/>
<address type='drive' controller='0' bus='0' target='0' unit='5'/>
<address type='drive' controller='0' bus='0' target='0' unit='6'/>
<address type='drive' controller='1' bus='0' target='0' unit='0'/>
<controller type='usb' index='0' model='qemu-xhci'>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
</controller>
<controller type='pci' index='0' model='pcie-root'>
</controller>
    <controller type='scsi' index='1' model='lsilogic'> <!-- BUG: should be virtio-scsi -->
</controller>
<controller type='virtio-serial' index='0'>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
</controller>
<controller type='pci' index='7' model='pcie-to-pci-bridge'>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
</controller>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
root@NODE159:/var/log/cloudstack/agent#
```
### versions
- CloudStack Version: 4.20.1.0
- CPU Architecture: aarch64
- Hypervisor: KVM
### The steps to reproduce the bug
1. Initial state: 5 disks attached
2. Hot-add a 6th disk while the VM is running
3. Reboot the VM
...
![Image](https://github.com/user-attachments/assets/d662cb6b-5c97-4e76-878a-98a1d33c2245)
### What to do about it?
_No response_