Re: Getting errors while adding ceph storage to cloudstack

2017-02-23 Thread Simon Weller
Install this repo and the associated qemu packages for Ceph/RBD on CentOS 6:


http://docs.ceph.com/docs/emperor/install/qemu-rpm/


Is there any chance you could upgrade the hosts to CentOS 7? Ceph works out 
of the box on 7.
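
As a quick sanity check after installing those packages, you could verify that 
RBD support is actually present (a rough sketch; /usr/libexec/qemu-kvm is the 
usual emulator path on CentOS 6, so adjust it for your setup):

qemu-img -h | grep rbd                   # rbd should now appear in the supported formats list
ldd /usr/libexec/qemu-kvm | grep librbd  # an RBD-enabled emulator build links against librbd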


From: Kurt K 
Sent: Wednesday, February 22, 2017 11:50 PM
To: users@cloudstack.apache.org
Subject: Re: Getting errors while adding ceph storage to cloudstack

Hi Simon,

Thanks for the reply.

 >> Also run ceph health and make sure your agent can talk to your ceph
monitors.

ceph health reports HEALTH_OK, and we can reach our OSDs from the monitor
server. Snippets are pasted below.

===
[root@mon1 ~]# ceph -s
 cluster ebac75fc-e631-4c9f-a310-880cbcdd1d25
  health HEALTH_OK
  monmap e1: 1 mons at {mon1=10.10.48.7:6789/0}
 election epoch 3, quorum 0 mon1
  osdmap e12: 2 osds: 2 up, 2 in
 flags sortbitwise,require_jewel_osds
   pgmap v3376: 192 pgs, 2 pools, 0 bytes data, 0 objects
 73108 kB used, 1852 GB / 1852 GB avail
  192 active+clean
==
[root@mon1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.80878 root default
-2 0.90439 host osd4
  0 0.90439 osd.0  up  1.0  1.0
-3 0.90439 host osdceph3
  1 0.90439 osd.1  up  1.0  1.0


 >> Which OS are you running on your hosts?

Our CloudStack servers are on CentOS 6, and the Ceph admin/mon/OSD servers
are running on CentOS 7.

After setting the CloudStack agent log level to DEBUG on the hypervisor, we
see the error below while adding the Ceph primary storage.

==
2017-02-22 21:01:00,444 DEBUG [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-4:null) 
==

In a quick search we found that the KVM hypervisor had no RBD kernel module
loaded, so we upgraded the kernel from elrepo and loaded the module with
modprobe. Rebuilding the rbd module against the existing libvirtd
configuration did not work. Apart from that, we have custom-compiled libvirtd
with RBD support, but we have no idea how to connect the custom libvirtd to
the qemu image utility (see the checks sketched after the output below).

=
[root@hyperkvm3 ~]# libvirtd --version (default)
libvirtd (libvirt) 0.10.2
=
[root@hyperkvm3 ~]# lsmod | grep rbd
rbd                    56743  0
libceph               148605  1 rbd
=
[root@hyperkvm3 ~]# qemu-img -h  | grep "Supported formats"
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2
qed vhdx parallels nbd blkdebug null host_cdrom  host_floppy host_device
file gluster
== // no rbd support
[root@hyperkvm3 ~]# /usr/bin/sbin/libvirtd  --version  (custom)
/usr/bin/sbin/libvirtd (libvirt) 1.3.5
=
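
To narrow this down further, we plan to check whether the emulator that
libvirt actually launches has RBD compiled in, and whether the custom libvirtd
can reach the cluster on its own by defining an RBD pool by hand. A rough
sketch of what we intend to run (the secret UUID is a placeholder, and this
assumes the cephx key for the 'cloudstack' user has already been loaded into
libvirt via virsh secret-define / secret-set-value):

ldd /usr/libexec/qemu-kvm | grep librbd   # an RBD-enabled emulator links against librbd
ps -ef | grep libvirtd                    # confirm which libvirtd binary is actually running

cat > /tmp/rbd-pool.xml <<'EOF'
<pool type='rbd'>
  <name>cloudstack</name>
  <source>
    <host name='10.10.48.7' port='6789'/>
    <name>cloudstack</name>
    <auth username='cloudstack' type='ceph'>
      <secret uuid='REPLACE-WITH-LIBVIRT-SECRET-UUID'/>
    </auth>
  </source>
</pool>
EOF
virsh pool-define /tmp/rbd-pool.xml
virsh pool-start cloudstack
virsh pool-info cloudstack                # should report the Ceph pool capacity if RBD works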

Do you have any ideas/suggestions?

-Kurt

On 02/22/2017 08:20 PM, Simon Weller wrote:
> I agree,  agent logs would be good to look at.
>
>
> You can enable kvm agent debugging by running this: sed -i 's/INFO/DEBUG/g' 
> /etc/cloudstack/agent/log4j-cloud.xml
>
> Restart the agent and then tail -f /var/log/cloudstack/agent/agent.log
>
>
> Also run ceph health and make sure your agent can talk to your ceph monitors.
>
> Which OS are you running on your hosts?
>
>
> - Si
>
> 
> From: Abhinandan Prateek 
> Sent: Wednesday, February 22, 2017 12:45 AM
> To: users@cloudstack.apache.org
> Subject: Re: Getting errors while adding ceph storage to cloudstack
>
> Take a look at the agent logs on kvm host there will be more clues.
>
>
>
>
> On 22/02/17, 8:10 AM, "Kurt K"  wrote:
>
>> Hello,
>>
>> I have created a ceph cluster with one admin server, one monitor and two
>> osd's. The setup is completed. But when trying to add the ceph as
>> primary storage of cloudstack, I am getting the below error in error logs.
>>
>> Am I missing something ? Please help.
>>
>> 
>> 2017-02-20 21:03:02,842 DEBUG
>> [o.a.c.s.d.l.CloudStackPrimaryDataStoreLifeCycleImpl]
>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) In createPool Adding the
>> pool to each of the hosts
>> 2017-02-20 21:03:02,843 DEBUG [c.c.s.StorageManagerImpl]
>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) Adding pool null to host 1
>> 2017-02-20 21:03:02,845 DEBUG [c.c.a.t.Request]
>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) Seq 1-653584895922143294:
>> Sending  { Cmd , MgmtId: 207381009036, via: 1(hyperkvm.x.com), Ver:
>> v1, Flags: 100011,
>> [{"com.cloud.agent.api.ModifyStoragePoolCommand":{"add":true,"pool":{"id":14,"uuid":"9c51d737-3a6f-3bb3-8f28-109954fc2ef0","host":"mon1..com","path":"cloudstack","userInfo":"cloudstack:AQDagqZYgSSpOBAATFvSt4tz3cOUWhNtR-NaoQ==","port":6789,"type":"RBD"},"localPath":"/mnt//ac5436a6-5889-30eb-b079-ac1e05a30526","wait":0}}]
>> }
>> 2017-02-20 21:03:02,944 DEBUG [c.c.a.t.Request]
>> (AgentManager-Handler-15:null) Seq 1-653584895922143294: Processing:  {
>> Ans: , MgmtId: 207381009036, via: 1, Ver: v1, Flags: 10,
>> 

AW: XenServer VM does no longer start

2017-02-23 Thread Martin Emrich
Yes, that worked. After detaching one volume the VM starts (although it's 
unusable, as that volume is part of a larger LVM volume).

I'm trying to get my build environment up for Abhinandan's patch, but with no 
success so far. I cloned the 4.9.2.0 branch and ran (cd packaging ; ./package.sh -p 
oss -d centos63).
This used to work with 4.6.0, but now I get:

+ cp 'tools/marvin/dist/Marvin-*.tar.gz' 
/opt/csbuild/cs/cloudstack/dist/rpmbuild/BUILDROOT/cloudstack-4.9.2.0-1.el6.x86_64/usr/share/cloudstack-marvin/
cp: cannot stat `tools/marvin/dist/Marvin-*.tar.gz': No such file or directory
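
My first guess is that the Marvin tarball simply isn't being built before the
spec file tries to copy it; the next thing I'll try is something along these
lines (just a guess on my part, the profile and module path may well be wrong):

ls tools/marvin/dist/                                 # is the tarball there at all?
mvn -P developer -pl tools/marvin -am clean install   # try building the marvin module explicitly
(cd packaging ; ./package.sh -p oss -d centos63)      # then retry the packaging run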

Any quick idea? Or should I start a new thread for that?

Thanks,

Martin

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Thursday, February 23, 2017 14:39
To: Martin Emrich ; users@cloudstack.apache.org
Subject: AW: XenServer VM does no longer start

Hi Martin,

As Abhinandan pointed out in a previous mail, it looks like you have hit a bug. 
Take a look at the link he provided in his mail.
Please detach all data disks and try to start the VM. Does that work?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Martin Emrich [mailto:martin.emr...@empolis.com]
Sent: Thursday, February 23, 2017 13:49
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Subject: AW: XenServer VM does no longer start

Hi!

How can I check that? 

I tried starting the VM; not a single line appeared in the SMlog during that 
attempt.

Thanks,

Martin

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
Sent: Wednesday, February 22, 2017 12:41
To: users@cloudstack.apache.org
Subject: AW: XenServer VM does no longer start

Hi Martin,

does the volume still exist on primary storage? You can also take a look at 
SMlog on XenServer.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Martin Emrich [mailto:martin.emr...@empolis.com]
Sent: Wednesday, February 22, 2017 12:27
To: users@cloudstack.apache.org
Subject: XenServer VM does no longer start

Hi!

After shutting down a VM for resizing, it no longer starts. The GUI reports 
insufficient capacity (but there's plenty), and in the log I see this:

2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking 
if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 
for i-18-2998-VM
2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae created 
for com.cloud.agent.api.to.DiskTO@1d82661a
2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c created 
for com.cloud.agent.api.to.DiskTO@5bfd4418
2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 created 
for com.cloud.agent.api.to.DiskTO@5081b2d6
2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 created 
for com.cloud.agent.api.to.DiskTO@64992bda
2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) Catch Exception: class 
com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid The 
device name is invalid
at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
at 

AW: XenServer VM does no longer start

2017-02-23 Thread S . Brüseke - proIO GmbH
Hi Martin,

As Abhinandan pointed out in a previous mail, it looks like you have hit a bug. 
Take a look at the link he provided in his mail.
Please detach all data disks and try to start the VM. Does that work?
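
If you prefer the API over the GUI, something like this via CloudMonkey should
do it (a sketch; the IDs are placeholders for your VM and volumes):

cloudmonkey list volumes virtualmachineid=<vm-uuid> type=DATADISK
cloudmonkey detach volume id=<volume-uuid>
cloudmonkey start virtualmachine id=<vm-uuid>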

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Martin Emrich [mailto:martin.emr...@empolis.com] 
Gesendet: Donnerstag, 23. Februar 2017 13:49
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: AW: XenServer VM does no longer start

Hi!

How can I check that? 

I tried starting the VM; not a single line appeared in the SMlog during that 
attempt.

Thanks,

Martin

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Wednesday, February 22, 2017 12:41
To: users@cloudstack.apache.org
Subject: AW: XenServer VM does no longer start

Hi Martin,

does the volume still exist on primary storage? You can also take a look at 
SMlog on XenServer.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Martin Emrich [mailto:martin.emr...@empolis.com]
Sent: Wednesday, February 22, 2017 12:27
To: users@cloudstack.apache.org
Subject: XenServer VM does no longer start

Hi!

After shutting down a VM for resizing, it no longer starts. The GUI reports 
insufficient capacity (but there's plenty), and in the log I see this:

2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking 
if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 
for i-18-2998-VM
2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae created 
for com.cloud.agent.api.to.DiskTO@1d82661a
2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c created 
for com.cloud.agent.api.to.DiskTO@5bfd4418
2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 created 
for com.cloud.agent.api.to.DiskTO@5081b2d6
2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 created 
for com.cloud.agent.api.to.DiskTO@64992bda
2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) Catch Exception: class 
com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid The 
device name is invalid
at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
at 
com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
at com.xensource.xenapi.VBD.create(VBD.java:322)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1156)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:121)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1687)
at 

AW: XenServer VM does no longer start

2017-02-23 Thread Martin Emrich
Hi!

How can I check that? 

I tried starting the VM; not a single line appeared in the SMlog during that 
attempt.

Thanks,

Martin

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Wednesday, February 22, 2017 12:41
To: users@cloudstack.apache.org
Subject: AW: XenServer VM does no longer start

Hi Martin,

does the volume still exist on primary storage? You can also take a look at 
SMlog on XenServer.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Martin Emrich [mailto:martin.emr...@empolis.com]
Sent: Wednesday, February 22, 2017 12:27
To: users@cloudstack.apache.org
Subject: XenServer VM does no longer start

Hi!

After shutting down a VM for resizing, it no longer starts. The GUI reports 
insufficient capacity (but there's plenty), and in the log I see this:

2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking 
if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a pool 
assigned: 29, adding disk to VM
2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 
for i-18-2998-VM
2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae created 
for com.cloud.agent.api.to.DiskTO@1d82661a
2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c created 
for com.cloud.agent.api.to.DiskTO@5bfd4418
2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 created 
for com.cloud.agent.api.to.DiskTO@5081b2d6
2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 created 
for com.cloud.agent.api.to.DiskTO@64992bda
2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) Catch Exception: class 
com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid The 
device name is invalid
at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
at 
com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
at com.xensource.xenapi.VBD.create(VBD.java:322)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1156)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:121)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1687)
at 
com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 

Re: XenServer VM does no longer start

2017-02-23 Thread Abhinandan Prateek
Hi Martin,

  Looks like you have hit a bug; you can patch it with this PR: 
https://github.com/apache/cloudstack/pull/1829
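
One way to pull that fix into a local build (a rough sketch; the branch names
are only examples, use whichever release branch you build from):

git clone https://github.com/apache/cloudstack.git
cd cloudstack
git checkout -b vm-start-fix origin/4.9
git fetch origin pull/1829/head:pr-1829   # GitHub exposes every PR under refs/pull/<id>/head
git merge pr-1829                         # or cherry-pick the individual commits if the merge conflicts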




On 22/02/17, 4:56 PM, "Martin Emrich"  wrote:

>Hi!
>
>After shutting down a VM for resizing, it no longer starts. The GUI reports 
>insufficient Capacity (but there's plenty), and in the Log I see this:
>
>2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking 
>if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
>2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool 
>assigned: 29, adding disk to VM
>2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a 
>pool assigned: 29, adding disk to VM
>2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a 
>pool assigned: 29, adding disk to VM
>2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a 
>pool assigned: 29, adding disk to VM
>2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
>(DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
>2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 
>for i-18-2998-VM
>2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae 
>created for com.cloud.agent.api.to.DiskTO@1d82661a
>2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c 
>created for com.cloud.agent.api.to.DiskTO@5bfd4418
>2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 
>created for com.cloud.agent.api.to.DiskTO@5081b2d6
>2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] 
>(DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
>2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 
>created for com.cloud.agent.api.to.DiskTO@64992bda
>2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
>(DirectAgent-469:ctx-d6e5768e) Catch Exception: class 
>com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid
>The device name is invalid
>at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
>at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
>at 
> com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
>at com.xensource.xenapi.VBD.create(VBD.java:322)
>at 
> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1156)
>at 
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:121)
>at 
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
>at 
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
>at 
> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1687)
>at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
>at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>at 
>