It's generally a bad idea to attach disks that CloudStack doesn't know about.
CloudStack can hot-attach disks for you, but I assume you have a specific,
temporary reason to do this (perhaps there is existing data on the disk and
you don't want to import it through the supported 'volume upload' path?).
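
For reference, the supported route would look roughly like this via
cloudmonkey (the name, URL, and IDs below are placeholders, not from your
setup):

  # register the existing disk image as a CloudStack volume
  upload volume name=mydata zoneid=<zone-uuid> format=RAW url=http://fileserver/mydata.raw
  # once it is uploaded, attach it to the instance
  attach volume id=<volume-uuid> virtualmachineid=<vm-uuid>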

I can't immediately think of a reason why it would make the console fail,
but you might want to start by looking at how the VM's XML changes, and
perhaps turn on debug logging in the agent and watch what happens when it
looks up the VNC port during console open.
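
Something along these lines should show the difference (VM and volume names
taken from your mail; the log and config paths are the usual 4.x defaults
and may differ on your install):

  # capture the domain XML before and after the manual attach
  virsh dumpxml i-3-132-VM > /tmp/before.xml
  virsh attach-disk --domain i-3-132-VM --source /dev/lvm-pool-sdb/vol1 \
      --target vdb --targetbus virtio
  virsh dumpxml i-3-132-VM > /tmp/after.xml
  diff -u /tmp/before.xml /tmp/after.xml

  # check the VNC display libvirt reports, then watch the agent log while
  # opening the console from the CS UI
  virsh vncdisplay i-3-132-VM
  tail -f /var/log/cloudstack/agent/agent.log

To enable debug, the usual way is to switch the log level from INFO to DEBUG
in /etc/cloudstack/agent/log4j-cloud.xml and restart the cloudstack-agent
service.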

Also, CentOS 7 is the guest, right? As a hypervisor, CentOS 7 barely works
in 4.5.0 even with a bunch of tweaking, but it should be just fine as a
guest in any recent CloudStack release.
On Mar 10, 2015 6:00 PM, "Star Guo" <st...@ceph.me> wrote:

> Hi,
>
> I hope to get some advice on debugging this issue. Thanks :) .
>
> Best Regards,
> Star Guo
>
> ===================================================
>
> Hi, Dev Team
>
> I'm playing with CloudStack 4.4.2 + CentOS 7 KVM. Deploying an instance to
> the KVM host works fine, and the console works too.
> When I directly add a vdisk to this instance via a virsh command, the disk
> is recognised, but the console in the CS UI fails.
> After I remove the vdisk from the instance via virsh, the console in the
> CS UI works again.
> How can I debug this issue? Thanks.
>
> My Env:
>
> virsh # pool-list --all
>  Name                 State      Autostart
> -------------------------------------------
>  797914bf-50ae-328e-9530-cf379313b216 active     no
>  bcf4214b-059d-4017-861c-17d6d9306e2b active     no
>  lvm-pool-sdb         active     no
>
> virsh # pool-dumpxml lvm-pool-sdb
> <pool type='logical'>
>   <name>lvm-pool-sdb</name>
>   <uuid>229dee22-af67-4a11-be61-a664676afdce</uuid>
>   <capacity unit='bytes'>1999839952896</capacity>
>   <allocation unit='bytes'>10737418240</allocation>
>   <available unit='bytes'>1989102534656</available>
>   <source>
>     <device path='/dev/sdb1'/>
>     <name>lvm-pool-sdb</name>
>     <format type='lvm2'/>
>   </source>
>   <target>
>     <path>/dev/lvm-pool-sdb</path>
>     <permissions>
>       <mode>0755</mode>
>       <owner>-1</owner>
>       <group>-1</group>
>     </permissions>
>   </target>
> </pool>
>
> virsh # vol-list lvm-pool-sdb
>  Name                 Path
> ------------------------------------------------
>  vol1                 /dev/lvm-pool-sdb/vol1
>
> I attach a disk to i-3-132-VM:
> "virsh attach-disk --domain i-3-132-VM --source /dev/lvm-pool-sdb/vol1
> --target vdb --targetbus virtio"
>
> And then I open the console of this instance, but it fails with the
> message: "Server Internal Error".
>
> Best Regards,
> Star Guo
>
>
