Re: Can't KVM migrate between local storage.

2018-05-16 Thread Daznis
Hi Marc,


I have attached all the XML responses, with some of the information in them redacted.





[Attached API response; the XML tags were lost in the archive. Recoverable fields: system VM host v-65-VM (id 4, 0fddab91-151f-4669-aa0b-8a0baf379c25), state Up, type ConsoleProxy, IP 10.24.51.122, agent version 4.9.2.0, management server id 159304575802479, created 2018-05-07T10:38:37+0200, resource state Enabled; some identifiers hidden by the sender.]

Re: Can't KVM migrate between local storage.

2018-05-14 Thread Marc-Aurèle Brothier
Can you give us the result of those API calls:

listZones
listZones id=2
listHosts
listHosts id=5
listStoragePools
listStoragePools id=1
listVirtualMachines id=19
listVolumes id=70
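These list calls can also be scripted against the management-server API. Below is a minimal sketch assuming CloudStack's standard request-signing scheme (sorted, lower-cased query string signed with HMAC-SHA1 of the secret key, base64-encoded); the APIKEY and SECRETKEY values and the endpoint in the comment are placeholders, not values from this thread:

```python
import base64
import hashlib
import hmac
import urllib.parse

def signed_query(command, params, api_key, secret_key):
    """Build a signed CloudStack API query string (sketch)."""
    all_params = dict(params, command=command, apikey=api_key, response="json")
    # Sort by key and URL-encode values; CloudStack signs the sorted,
    # lower-cased query with HMAC-SHA1 of the secret key, base64-encoded.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(all_params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{query}&signature={signature}"

# Hypothetical keys; a real call is a GET to http://<mgmt-server>:8080/client/api?<query>
q = signed_query("listHosts", {"id": 5}, "APIKEY", "SECRETKEY")
```

The same helper covers all the calls above by swapping the `command` and `params` arguments.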



Re: Can't KVM migrate between local storage.

2018-05-14 Thread Daznis
Hi,

It has one zone. I'm not sure how it got zone id 2; the zone probably failed to be added the first time and was added again. We have four hosts with local storage on them for system VMs and VMs that need SSD storage, Ceph primary storage for everything else, plus one secondary storage server.



Re: Can't KVM migrate between local storage.

2018-05-14 Thread Marc-Aurèle Brothier
Hi Daznis,

Reading the logs I see some inconsistency in the values. Can you describe
the infrastructure you set up? The things that disturb me are a zoneid=2
and a destination pool id=1. Aren't you trying to migrate a VM's volume
between two regions/zones?



Re: Can't KVM migrate between local storage.

2018-05-12 Thread Daznis
Hi,
Actually, that's the whole log; above it there is just the job starting. I
have attached the missing part of the log. Which tables do you need from
the database?
There are multiple records in the allocated/creating state inside
volume_store_ref. Nothing looks wrong in volumes, snapshots, or
snapshot_store_ref.
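For readers hitting the same symptom, a minimal sketch of the kind of filter one might apply to rows exported from volume_store_ref. The column names, states, and age threshold here are assumptions for illustration, not taken from Darius's database:

```python
from datetime import datetime, timedelta

def stale_cache_rows(rows, now, max_age=timedelta(hours=1)):
    """Flag volume_store_ref rows stuck in a transient state (sketch).

    Rows sitting in Allocated/Creating long after creation usually
    belong to aborted copies and can block new state transitions.
    """
    stuck_states = {"Allocated", "Creating"}
    return [
        r for r in rows
        if r["state"] in stuck_states and now - r["created"] > max_age
    ]

now = datetime(2018, 5, 12, 12, 0)
rows = [
    {"id": 70, "state": "Creating", "created": datetime(2018, 5, 10, 9, 41)},
    {"id": 71, "state": "Ready", "created": datetime(2018, 5, 10, 9, 41)},
]
stuck = stale_cache_rows(rows, now)  # flags row 70 only
```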

2018-05-10 09:41:38,382 DEBUG [c.c.a.ApiServlet] 
(catalina-exec-11:ctx-bcc608a4) (logid:4600778d) ===START===  172.16.16.34 -- 
GET  
command=migrateVirtualMachine&storageid=cc86b3e2-3a7d-4025-b1c4-b3ad19e4a566&virtualmachineid=f52b6905-f6b5-46bd-bed9-2fa169e2b83a&response=json&_=1525938294460
2018-05-10 09:41:38,430 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-3:ctx-64fdce4d job-2148) (logid:cb2e3ab2) Add job-2148 into 
job monitoring
2018-05-10 09:41:38,433 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(catalina-exec-11:ctx-bcc608a4 ctx-9456d552) (logid:4600778d) submit async 
job-2148, details: AsyncJobVO {id:2148, userId: 2, accountId: 2, instanceType: 
None, instanceId: null, cmd: 
org.apache.cloudstack.api.command.admin.vm.MigrateVMCmd, cmdInfo: 
{"virtualmachineid":"f52b6905-f6b5-46bd-bed9-2fa169e2b83a","response":"json","ctxUserId":"2","httpmethod":"GET","ctxStartEventId":"15250","ctxDetails":"{\"interface
 
com.cloud.vm.VirtualMachine\":\"f52b6905-f6b5-46bd-bed9-2fa169e2b83a\",\"interface
 
com.cloud.storage.StoragePool\":\"cc86b3e2-3a7d-4025-b1c4-b3ad19e4a566\"}","ctxAccountId":"2","cmdEventType":"VM.MIGRATE","storageid":"cc86b3e2-3a7d-4025-b1c4-b3ad19e4a566","_":"1525938294460"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 159304575802479, completeMsid: null, lastUpdated: null, 
lastPolled: null, created: null}
2018-05-10 09:41:38,434 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(API-Job-Executor-3:ctx-64fdce4d job-2148) (logid:132e1fe7) Executing 
AsyncJobVO {id:2148, userId: 2, accountId: 2, instanceType: None, instanceId: 
null, cmd: org.apache.cloudstack.api.command.admin.vm.MigrateVMCmd, cmdInfo: 
{"virtualmachineid":"f52b6905-f6b5-46bd-bed9-2fa169e2b83a","response":"json","ctxUserId":"2","httpmethod":"GET","ctxStartEventId":"15250","ctxDetails":"{\"interface
 
com.cloud.vm.VirtualMachine\":\"f52b6905-f6b5-46bd-bed9-2fa169e2b83a\",\"interface
 
com.cloud.storage.StoragePool\":\"cc86b3e2-3a7d-4025-b1c4-b3ad19e4a566\"}","ctxAccountId":"2","cmdEventType":"VM.MIGRATE","storageid":"cc86b3e2-3a7d-4025-b1c4-b3ad19e4a566","_":"1525938294460"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 159304575802479, 

Re: Can't KVM migrate between local storage.

2018-05-10 Thread Suresh Kumar Anaparti
Hi Darius,

From the logs, I can see that the image volume is already in the Creating
state, and the same volume is being used for copying the volume between
pools, so the state transition failed. Could you please provide the
complete log for this use case so we can root-cause the issue? Also,
include the volumes and snapshots DB details for the volume and snapshot
in question.

-Suresh




Re: Can't KVM migrate between local storage.

2018-05-10 Thread Daznis
Snapshots work fine. I can make a snapshot, convert it to a template, and
start the VM on a new node from that template, which is what I did when I
needed to move one VM for balancing purposes. But I want to fix the
migration process itself. I have attached the error log to this email.
Maybe I'm looking in the wrong place for the error?

2018-05-10 09:41:38,555 DEBUG [c.c.v.VmWorkJobDispatcher] 
(Work-Job-Executor-3:ctx-69344343 job-2148/job-2149) (logid:132e1fe7) Run VM 
work job: com.cloud.vm.VmWorkStorageMigration for VM 19, job origin: 2148
2018-05-10 09:41:38,556 DEBUG [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-3:ctx-69344343 job-2148/job-2149 ctx-cfe56a50) 
(logid:132e1fe7) Execute VM work job: 
com.cloud.vm.VmWorkStorageMigration{"destPoolId":1,"userId":2,"accountId":2,"vmId":19,"handlerName":"VirtualMachineManagerImpl"}
2018-05-10 09:41:38,569 DEBUG [c.c.c.CapacityManagerImpl] 
(Work-Job-Executor-3:ctx-69344343 job-2148/job-2149 ctx-cfe56a50) 
(logid:132e1fe7) VM state transitted from :Stopped to Migrating with event: 
StorageMigrationRequestedvm's original host id: 5 new host id: null host id before state transition: null
2018-05-10 09:41:38,594 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] 
(Work-Job-Executor-3:ctx-69344343 job-2148/job-2149 ctx-cfe56a50) 
(logid:132e1fe7) copyAsync inspecting src type VOLUME copyAsync inspecting dest 
type VOLUME
2018-05-10 09:41:38,598 DEBUG [o.a.c.s.c.a.StorageCacheRandomAllocator] 
(Work-Job-Executor-3:ctx-69344343 job-2148/job-2149 ctx-cfe56a50) 
(logid:132e1fe7) Can't find staging storage in zone: 2
2018-05-10 09:41:38,608 DEBUG [o.a.c.s.v.VolumeObject] 
(Work-Job-Executor-3:ctx-69344343 job-2148/job-2149 ctx-cfe56a50) 
(logid:132e1fe7) Failed to update state
com.cloud.utils.fsm.NoTransitionException: Unable to transition to a new state 
from Creating via CreateOnlyRequested
at 
com.cloud.utils.fsm.StateMachine2.getTransition(StateMachine2.java:108)
at com.cloud.utils.fsm.StateMachine2.getNextState(StateMachine2.java:94)
at com.cloud.utils.fsm.StateMachine2.transitTo(StateMachine2.java:124)
at 
org.apache.cloudstack.storage.datastore.ObjectInDataStoreManagerImpl.update(ObjectInDataStoreManagerImpl.java:307)
at 
org.apache.cloudstack.storage.volume.VolumeObject.processEvent(VolumeObject.java:292)
at 
org.apache.cloudstack.storage.motion.AncientDataMotionStrategy.copyVolumeBetweenPools(AncientDataMotionStrategy.java:317)
at 
org.apache.cloudstack.storage.motion.AncientDataMotionStrategy.copyAsync(AncientDataMotionStrategy.java:440)
at 
org.apache.cloudstack.storage.motion.DataMotionServiceImpl.copyAsync(DataMotionServiceImpl.java:68)
at 
org.apache.cloudstack.storage.motion.DataMotionServiceImpl.copyAsync(DataMotionServiceImpl.java:73)
at 
org.apache.cloudstack.storage.volume.VolumeServiceImpl.copyVolume(VolumeServiceImpl.java:1372)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.migrateVolume(VolumeOrchestrator.java:952)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.storageMigration(VolumeOrchestrator.java:1056)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1791)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:5073)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
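The NoTransitionException in the trace above comes from CloudStack's table-driven state machine (StateMachine2): the cached image volume is already in Creating, so a second CreateOnlyRequested event has no legal transition. A toy Python reproduction of the check follows; the transition table is an illustrative subset, not CloudStack's full table:

```python
class NoTransitionError(Exception):
    pass

# Illustrative subset: CreateOnlyRequested is only legal when the object
# is not already mid-copy.
TRANSITIONS = {
    ("Allocated", "CreateOnlyRequested"): "Creating",
    ("Creating", "OperationSuccessed"): "Ready",
    ("Creating", "OperationFailed"): "Failed",
}

def transit(state, event):
    """Return the next state, or raise like StateMachine2.transitTo does."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise NoTransitionError(
            f"Unable to transition to a new state from {state} via {event}"
        ) from None

next_state = transit("Allocated", "CreateOnlyRequested")  # "Creating"
```

A volume left in Creating by an earlier failed copy reproduces exactly the message in the log when the next copy asks for CreateOnlyRequested again.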

Re: Can't KVM migrate between local storage.

2018-05-09 Thread Marc-Aurèle Brothier
Can you try to perform a snapshot of a volume on the VMs that are on your
host, to see if they get copied correctly over the NFS too?

Otherwise you need to look into the management logs to catch the exception
(stack trace) to have a better understanding of the issue.



Re: Can't KVM migrate between local storage.

2018-05-09 Thread Daznis
Hello,


Yeah, it's offline. I'm running version 4.9.2. It's within the same
zone, with the one and only NFS secondary storage.



Re: Can't KVM migrate between local storage.

2018-05-09 Thread Marc-Aurèle Brothier
Hi Darius,

Are you trying to perform an offline migration within the same zone,
meaning that the source and destination hosts have the same set of NFS
secondary storage?

Marc-Aurèle



Can't KVM migrate between local storage.

2018-05-08 Thread Daznis
Hi,


I'm having an issue with offline migration of a VM disk between local
storages. The particular error that has me baffled is "Can't find
staging storage in zone". From what I have gathered, "staging storage"
refers to secondary storage in CloudStack, and secondary storage is
working perfectly fine with both the source and the destination node.
Not sure where to go next. Any help would be appreciated.


Regards,
Darius
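For readers landing here with the same message: "Can't find staging storage in zone" is raised when the zone-scoped lookup for a secondary storage that can act as a cache (staging) store comes back empty. A conceptual sketch with a made-up registry, not CloudStack's real allocator:

```python
# Hypothetical registry: zone id -> secondary storage usable as cache/staging.
staging_stores = {1: ["nfs://secstore/cache"]}

def find_staging_store(zone_id):
    """Sketch of StorageCacheRandomAllocator's zone-scoped lookup."""
    stores = staging_stores.get(zone_id, [])
    if not stores:
        # The condition behind "Can't find staging storage in zone: 2":
        # the volume resolved to zone 2 while secondary storage is
        # registered only in zone 1.
        return None
    return stores[0]

hit = find_staging_store(1)   # found
miss = find_staging_store(2)  # None
```

This matches the thread's diagnosis: a stray zoneid=2 on a one-zone installation makes the lookup miss even though secondary storage itself is healthy.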