Can you mount the secondary storage on your KVM host?
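To test this by hand on the KVM host, a minimal sketch (the NFS server, export path, and mount point below are placeholders, not values from this thread; substitute your own):

```shell
#!/bin/sh
# Placeholders -- replace with your actual secondary-storage NFS details.
NFS_SERVER=nfs.example.com
NFS_PATH=/export/secondary
MNT=/mnt/secondary-test

# Succeeds (exit 0) only if the directory exists and a file can be
# created and removed in it -- the same kind of write test ssvm-check.sh does.
check_writable() {
    dir=$1
    [ -d "$dir" ] && touch "$dir/.cs-write-test" 2>/dev/null &&
        rm -f "$dir/.cs-write-test"
}

# Uncomment to actually mount and test on the KVM host (as root):
# mkdir -p "$MNT"
# mount -t nfs "$NFS_SERVER:$NFS_PATH" "$MNT"
# check_writable "$MNT" && echo "secondary storage mounts and is writable"
```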

On 12/14/15 12:32 PM, Cristian Ciobanu wrote:
> OK,
> 
>    I did a fresh re-install, but I again hit the same error for the 
> second IP class: IPs from the second class are not allocated to the VM. 
> If I manually set the IP that should have been allocated, it works.
> 
>   System configuration:
> 
>   CS Version : Cloudstack 4.6 
>   OS: CentOS 6
>   Hypervisor: KVM
>   Network: Basic - DefaultSharedNetworkOffer 
>   IP :  2 x /29 
> 
>   I think this is an issue in CS 4.6
> 
> 
> 
> Regards,
> Cristian
> 
>  
> On 12/14/2015 7:46:17 PM, Cristian Ciobanu <[email protected]> wrote:
> Hi,
> 
>     I have one primary and one secondary storage.
> 
>      In my CloudStack management interface I see only one primary and one 
> secondary storage (SSVM).
> 
>     Yes, I run with local storage (KVM).
> 
> 
> Regards,
> Cristian
> www.istream.today
> www.shape.host
> +40.733.955.922
>  
> On 12/14/2015 7:36:49 PM, Nux! <[email protected]> wrote:
> Cristian,
> 
> Do you have 2 secondary storages?
> Do you run with local storage in KVM?
> 
> "could not open disk image 
> /var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e: Is a directory" 
> <- this is really wrong.
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> ----- Original Message -----
>> From: "Cristian Ciobanu"
>> To: [email protected]
>> Sent: Monday, 14 December, 2015 16:43:11
>> Subject: Re: InsufficientServerCapacityException
> 
>> Hello,
>>
>>     First of all, thank you.
>>
>>     The host is UP in CloudStack.
>>
>>
>>
>> Logs from SSVM:
>>
>> root@s-28-VM:~# /usr/local/cloud/systemvm/ssvm-check.sh
>> ================================================
>> First DNS server is  8.8.8.8
>> PING 8.8.8.8 (8.8.8.8): 48 data bytes
>> 60 bytes from 172.20.255.39: Destination Host Unreachable
>> Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst Data
>>  4  5  00 4c00 c4ed   0 0040  40  01 a091 172.xx.255.39  8.8.8.8
>> 60 bytes from 172.20.255.39: Destination Host Unreachable
>> Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst Data
>>  4  5  00 4c00 53ee   0 0040  40  01 1191 172.xx.255.39  8.8.8.8
>> --- 8.8.8.8 ping statistics ---
>> 2 packets transmitted, 0 packets received, 100% packet loss
>> WARNING: cannot ping DNS server
>> route follows
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>> 0.0.0.0         158.xx.xxx.166  0.0.0.0         UG    0      0        0 eth2
>> 8.8.8.8         172.xx.255.1    255.255.255.255 UGH   0      0        0 eth1
>> 158.xx.xxx.160  0.0.0.0         255.255.255.248 U     0      0        0 eth2
>> 169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
>> 172.xx.255.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
>> ================================================
>> Good: DNS resolves download.cloud.com
>> ================================================
>> nfs is currently mounted
>> Mount point is /mnt/SecStorage/821f0c6f-4a92-362d-916f-979a693231d9
>> Good: Can write to mount point
>> Mount point is /mnt/SecStorage/a1ece0a6-d7f9-3e78-bf27-47360eb58d4b
>> Good: Can write to mount point
>> ================================================
>> Management server is 172.xx.255.2. Checking connectivity.
>> Good: Can connect to management server port 8250
>> ================================================
>> Good: Java process is running
>> ================================================
>> Tests Complete. Look for ERROR or WARNING above.
>>
>>
>> Agent Logs:
>>
>>
>> 2015-12-14 17:38:48,219 INFO  [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) Trying to fetch storage pool
>> 1b5e1ff6-19d4-44b7-ae63-0b7b039edb14 from libvirt
>> 2015-12-14 17:38:48,710 WARN  [resource.wrapper.LibvirtStartCommandWrapper]
>> (agentRequest-Handler-2:null) LibvirtException
>> org.libvirt.LibvirtException: internal error Process exited while reading
>> console log output: char device redirected to /dev/pts/4
>> 2015-12-14T16:38:48.499386Z qemu-kvm: -drive
>> file=/var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e,if=none,id=drive-virtio-disk0,format=qcow2,serial=0269e267c80f4d23a079,cache=none:
>> could not open disk image
>> /var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e: Is a directory
>>
>>         at org.libvirt.ErrorHandler.processError(Unknown Source)
>>         at org.libvirt.Connect.processError(Unknown Source)
>>         at org.libvirt.Connect.processError(Unknown Source)
>>         at org.libvirt.Connect.domainCreateXML(Unknown Source)
>>         at
>>         
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.startVM(LibvirtComputingResource.java:1292)
>>         at
>>         
>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStartCommandWrapper.execute(LibvirtStartCommandWrapper.java:82)
>>         at
>>         
>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStartCommandWrapper.execute(LibvirtStartCommandWrapper.java:46)
>>         at
>>         
>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:75)
>>         at
>>         
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1316)
>>         at com.cloud.agent.Agent.processRequest(Agent.java:518)
>>         at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:823)
>>         at com.cloud.utils.nio.Task.call(Task.java:83)
>>         at com.cloud.utils.nio.Task.call(Task.java:29)
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>         at
>>         
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at
>>         
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:745)
>> 2015-12-14 17:38:48,710 INFO  [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) Trying to fetch storage pool
>> 1b5e1ff6-19d4-44b7-ae63-0b7b039edb14 from libvirt
>> 2015-12-14 17:38:48,767 WARN  [kvm.resource.LibvirtConnection]
>> (agentRequest-Handler-1:null) Can not find a connection for Instance 
>> i-2-42-VM.
>> Assuming the default connection.
>>
>> Libvirtd Log:
>>
>> 2015-12-14 16:38:48.312+0000: 7014: warning : qemuDomainObjTaint:1459 : 
>> Domain
>> id=31 name='i-2-42-VM' uuid=2bde9674-d90b-4d8e-ab82-669c155155ff is tainted:
>> high-privileges
>> 2015-12-14 16:38:48.615+0000: 7014: error : qemuProcessReadLogOutput:1583 :
>> internal error Process exited while reading console log output: char device
>> redirected to /dev/pts/4
>> 2015-12-14T16:38:48.499386Z qemu-kvm: -drive
>> file=/var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e,if=none,id=drive-virtio-disk0,format=qcow2,serial=0269e267c80f4d23a079,cache=none:
>> could not open disk image
>> /var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e: Is a directory
>>
>> 2015-12-14 16:38:58.224+0000: 7016: warning : qemuDomainObjTaint:1459 : 
>> Domain
>> id=32 name='i-2-42-VM' uuid=2bde9674-d90b-4d8e-ab82-669c155155ff is tainted:
>> high-privileges
>> 2015-12-14 16:38:58.596+0000: 7016: error : qemuProcessReadLogOutput:1583 :
>> internal error Process exited while reading console log output: char device
>> redirected to /dev/pts/4
>> 2015-12-14T16:38:58.479426Z qemu-kvm: -drive
>> file=/var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e,if=none,id=drive-virtio-disk0,format=qcow2,serial=0269e267c80f4d23a079,cache=none:
>> could not open disk image
>> /var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e: Is a directory
>>
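The "Is a directory" error in the libvirtd log above means the path qemu was given exists but is a directory rather than a qcow2 file (this can happen when something, such as a storage pool or a stale mount point, created a directory at the volume's path). A quick way to check what is actually there, sketched with the UUID taken from the log:

```shell
#!/bin/sh
# Volume path taken from the qemu error in the log above.
vol=/var/lib/libvirt/images/0269e267-c80f-4d23-a079-be188f814d0e

# Classifies what sits at a volume path: a directory (qemu's complaint),
# a qcow2 file (magic bytes "QFI\xfb"), some other file, or nothing.
diagnose_volume() {
    path=$1
    if [ -d "$path" ]; then
        echo "directory"
    elif [ -f "$path" ]; then
        if head -c 3 "$path" | grep -q 'QFI'; then
            echo "qcow2"
        else
            echo "not qcow2"
        fi
    else
        echo "missing"
    fi
}

# diagnose_volume "$vol"
```

If it reports "directory", removing or renaming the directory (after checking nothing is mounted there) lets CloudStack recreate the volume as a proper image file.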
>> Thank you.
>>
>> Regards,
>> Cristian
>>
>>  
>> On 12/14/2015 6:29:13 PM, Nux! wrote:
>> Hi,
>>
>> Are your hosts with UP status?
>>
>> Other things I would check:
>> agent.log and libvirtd.log on the hosts
>>
>> Run this in the Sec Storage VM: /usr/local/cloud/systemvm/ssvm-check.sh
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>>
>> ----- Original Message -----
>>> From: "Cristian Ciobanu"
>>> To: [email protected]
>>> Sent: Monday, 14 December, 2015 16:17:49
>>> Subject: InsufficientServerCapacityException
>>
>>> Hello,
>>>
>>>      I re-installed the KVM host (CloudStack 4.6, CentOS 6.6), but I have 
>>> an issue; I'm not sure why every install brings a new issue.
>>>
>>> Right now I'm not able to create a new VM using the default CentOS 
>>> template (all system VMs are working).
>>>
>>>      Please see the logs:
>>>
>>> 2015-12-14 17:07:56,840 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) The 
>>> specified
>>> host is in avoid set
>>> 2015-12-14 17:07:56,840 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) Cannot 
>>> deploy
>>> to specified host, returning.
>>>
>>> ==> apilog.log <==
>>> 2015-12-14 17:07:56,850 INFO  [a.c.c.a.ApiServer] 
>>> (catalina-exec-15:ctx-35c33515
>>> ctx-bf6ef33a) (userId=2 accountId=2 
>>> sessionId=D7AC10C9BA7873B42C6A71D94DC9DF5F)
>>> 79.xx.xx.128 -- GET
>>> command=queryAsyncJobResult&jobId=140796c1-c42f-45bc-bd57-d420f121413d&response=json&_=1450109270793
>>> 200
>>> {"queryasyncjobresultresponse":{"accountid":"fcd8ab12-a25b-11e5-9097-0cc47a69688e","userid":"fcd8b152-a25b-11e5-9097-0cc47a69688e","cmd":"org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin","jobstatus":0,"jobprocstatus":0,"jobresultcode":0,"jobinstancetype":"VirtualMachine","jobinstanceid":"894e4fe6-477d-4b97-baa4-221ca729b5f1","created":"2015-12-14T17:07:47+0100","jobid":"140796c1-c42f-45bc-bd57-d420f121413d"}}
>>>
>>> ==> management-server.log <==
>>> 2015-12-14 17:07:56,850 DEBUG [c.c.a.ApiServlet] 
>>> (catalina-exec-15:ctx-35c33515
>>> ctx-bf6ef33a) ===END===  79.xx.xx.128 -- GET
>>>  
>>> command=queryAsyncJobResult&jobId=140796c1-c42f-45bc-bd57-d420f121413d&response=json&_=1450109270793
>>> 2015-12-14 17:07:56,881 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) VM state
>>> transitted from :Starting to Stopped with event: OperationFailedvm's 
>>> original
>>> host id: null new host id: null host id before state transition: 1
>>> 2015-12-14 17:07:56,885 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) Hosts's 
>>> actual
>>> total CPU: 29592 and CPU after applying overprovisioning: 29592
>>> 2015-12-14 17:07:56,885 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) Hosts's 
>>> actual
>>> total RAM: 32621350912 and RAM after applying overprovisioning: 32621350912
>>> 2015-12-14 17:07:56,885 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) release cpu
>>> from host: 1, old used: 2500,reserved: 0, actual total: 29592, total with
>>> overprovisioning: 29592; new used: 1500,reserved:0; movedfromreserved:
>>> false,moveToReserveredfalse
>>> 2015-12-14 17:07:56,885 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) release mem
>>> from host: 1, old used: 2927624192,reserved: 0, total: 32621350912; new 
>>> used:
>>> 1879048192,reserved:0; movedfromreserved: false,moveToReserveredfalse
>>> 2015-12-14 17:07:56,932 ERROR [c.c.v.VmWorkJobHandlerProxy]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) Invocation
>>> exception, caused by: 
>>> com.cloud.exception.InsufficientServerCapacityException:
>>> Unable to create a deployment for VM[User|i-2-41-VM]Scope=interface
>>> com.cloud.dc.DataCenter; id=1
>>> 2015-12-14 17:07:56,932 INFO  [c.c.v.VmWorkJobHandlerProxy]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223 ctx-6cc32ca4) Rethrow
>>> exception com.cloud.exception.InsufficientServerCapacityException: Unable to
>>> create a deployment for VM[User|i-2-41-VM]Scope=interface
>>> com.cloud.dc.DataCenter; id=1
>>> 2015-12-14 17:07:56,932 DEBUG [c.c.v.VmWorkJobDispatcher]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223) Done with run of VM work
>>> job: com.cloud.vm.VmWorkStart for VM 41, job origin: 222
>>> 2015-12-14 17:07:56,932 ERROR [c.c.v.VmWorkJobDispatcher]
>>> (Work-Job-Executor-6:ctx-4e51fe56 job-222/job-223) Unable to complete
>>> AsyncJobVO {id:223, userId: 2, accountId: 2, instanceType: null, instanceId:
>>> null, cmd: com.cloud.vm.VmWorkStart, cmdInfo:
>>> rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAAKXQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFxAH4ACnBwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA,
>>> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result:
>>> null, initMsid: 14038006851726, completeMsid: null, lastUpdated: null,
>>> lastPolled: null, created: Mon Dec 14 17:07:47 CET 2015}, job origin:222
>>> com.cloud.exception.InsufficientServerCapacityException: Unable to create a
>>> deployment for VM[User|i-2-41-VM]Scope=interface com.cloud.dc.DataCenter; 
>>> id=1
>>>         at
>>>         
>>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:959)
>>>         at
>>>         
>>> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4580)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>         at 
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>         at
>>>         
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>         at java.lang.reflect.Method.invoke(Method.java:606)
>>>         at
>>>         
>>> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>>>         at
>>>         
>>> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4736)
>>>         at 
>>> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
>>>         at
>>>         
>>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
>>>         at
>>>         
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>>>         at
>>>         
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>>>         at
>>>         
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>>>         at
>>>         
>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>>>         at
>>>         
>>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>>>         at
>>>         
>>> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
>>>         at 
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>         at
>>>         
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at
>>>         
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
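The InsufficientServerCapacityException above is generic: it only says the planner found no usable host. The real reason is logged just before it ("The specified host is in avoid set", i.e. the earlier start attempt on that host failed, which ties back to the "Is a directory" error later in the thread). A grep sketch for pulling the allocator's reasoning out of the management log (the path is the usual CentOS package default; adjust if your install differs):

```shell
#!/bin/sh
# Default management-server log location on CentOS packages; adjust if needed.
LOG=/var/log/cloudstack/management/management-server.log

# Prints the last few allocator decisions that explain a capacity exception.
scan_alloc_failures() {
    grep -nE 'avoid set|Cannot deploy|InsufficientServerCapacity' "$1" |
        tail -n 20
}

# scan_alloc_failures "$LOG"
```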
>>>    Thank you.
>>>
>>>
>>> Regards,
>>> Cristian
> 
