Re: [one-users] Live migration fail

2014-10-06 Thread Amier Anis
Thanks, it works.


On Fri, Oct 3, 2014 at 8:14 PM, Campbell, Bill <
bcampb...@axcess-financial.com> wrote:

> Amier,
> Yes, each KVM node will need to be able to authenticate with each other
> directly, so you'll need to do one of the following:
>
>- Create a key on each node and ssh-copy-id to each kvm node
>- Use the same key/pair that is used to communicate from OpenNebula to
>the nodes between nodes (i.e. copy the /var/lib/one/.ssh directory to each
>node).
>
> We use the second one and it works well, but either should work.
>
> --
> *From: *"Amier Anis" 
> *To: *users@lists.opennebula.org
> *Sent: *Thursday, October 2, 2014 10:24:46 PM
> *Subject: *[one-users] Live migration fail
>
>
> Dear Team,
>
> I can't do a live migration; the error is this:
>
> Fri Oct 3 10:20:32 2014 [Z0][LCM][I]: New VM state is MIGRATE
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Successfully execute transfer
> manager driver operation: tm_premigrate.
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: ExitCode: 0
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Successfully execute network driver
> operation: pre.
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Command execution fail:
> /var/tmp/one/vmm/kvm/migrate 'one-6' 'ds12.myserver.lan'
> 'ds13.myserver.lan' 6 ds13.myserver.lan
> Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: migrate: Command "virsh --connect
> qemu:///system migrate --live one-6 qemu+ssh://ds12/system" failed: error:
> Cannot recv data: Warning: Permanently added 
> 'ds12.cloud.skali.net,172.20.11.12'
> (RSA) to the list of known hosts.
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied, please try again.
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied, please try again.
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied
> (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer
> Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: Could not migrate one-6 to
> ds12.cloud.skali.net
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: ExitCode: 1
> Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Failed to execute virtualization
> driver operation: migrate.
> Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: Error live migrating VM: Could not
> migrate one-6 to ds12.cloud.skali.net
> Fri Oct 3 10:20:33 2014 [Z0][LCM][I]: Fail to live migrate VM. Assuming
> that the VM is still RUNNING (will poll VM).
>
>
> Do I need to set up SSH keys between each pair of worker nodes? I have no
> issue deploying VMs from my frontend to each of the worker nodes.
>
> I have also put all the nodes into the same cluster with the same
> datastore, but it still doesn't work; same error log.
>
>
> Thank you.
>
> Regards & Best Wishes,
>
>
> *.: Amier Anis :.*
> Mobile: +6012-260-0819
> --
> IMPORTANT:
> This e-mail (including any attachment hereto) is intended solely for the
> addressee and is confidential and privileged. If this should have been sent
> to you in error, you are not to reproduce, distribute or take any action in
> reliance on it. Kindly notify us and delete the e-mail and all attachments
> immediately.
>
> As e-mail and/or attachments may contain viruses and other interfering or
> damaging elements, the receipt and/or downloading of e-mail and/or
> attachments will be at your own risk and we accept no liability for any
> damage sustained as a result of any such viruses; you should carry out your
> own virus checks before opening any attachment.
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
> *NOTICE: Protect the information in this message in accordance with the
> company's security policies. If you received this message in error,
> immediately notify the sender and destroy all copies.*
>
>


Re: [one-users] Live migration fail

2014-10-03 Thread Campbell, Bill
Amier, 
Yes, each KVM node will need to be able to authenticate with each other 
directly, so you'll need to do one of the following: 


* Create a key on each node and ssh-copy-id to each kvm node 
* Use the same key/pair that is used to communicate from OpenNebula to the 
nodes between nodes (i.e. copy the /var/lib/one/.ssh directory to each node). 

We use the second one and it works well, but either should work. 
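The two options above can be sketched as shell steps. This is a hedged sketch, not the exact OpenNebula procedure: the scratch directory stands in for oneadmin's real ~/.ssh, node names are illustrative, and `ssh-copy-id`/`scp` of course need the other nodes to be reachable:

```shell
# Option 1: give oneadmin on each node its own key, then push it to every
# other node. Demonstrated against a scratch dir; on a real node the target
# is ~oneadmin/.ssh, and you would run `ssh-copy-id oneadmin@<other-node>`
# once per node.
SSH_DIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"

# What ssh-copy-id does on the remote side: append the public key to
# authorized_keys and keep the permissions tight.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"

# Option 2 (the one described above): reuse the front-end's key pair on
# every node, e.g.:
#   scp -rp /var/lib/one/.ssh oneadmin@<node>:/var/lib/one/
```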

- Original Message -

From: "Amier Anis"  
To: users@lists.opennebula.org 
Sent: Thursday, October 2, 2014 10:24:46 PM 
Subject: [one-users] Live migration fail 

Dear Team, 

I can't do a live migration; the error is this:

Fri Oct 3 10:20:32 2014 [Z0][LCM][I]: New VM state is MIGRATE 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Successfully execute transfer manager 
driver operation: tm_premigrate. 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: ExitCode: 0 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Successfully execute network driver 
operation: pre. 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Command execution fail: 
/var/tmp/one/vmm/kvm/migrate 'one-6' 'ds12.myserver.lan' 'ds13.myserver.lan' 6 
ds13.myserver.lan 
Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: migrate: Command "virsh --connect 
qemu:///system migrate --live one-6 qemu+ssh://ds12/system" failed: error: 
Cannot recv data: Warning: Permanently added ' ds12.cloud.skali.net 
,172.20.11.12' (RSA) to the list of known hosts. 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied, please try again. 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied, please try again. 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied 
(publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer 
Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: Could not migrate one-6 to 
ds12.cloud.skali.net 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: ExitCode: 1 
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Failed to execute virtualization driver 
operation: migrate. 
Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: Error live migrating VM: Could not 
migrate one-6 to ds12.cloud.skali.net 
Fri Oct 3 10:20:33 2014 [Z0][LCM][I]: Fail to live migrate VM. Assuming that 
the VM is still RUNNING (will poll VM). 


Do I need to set up SSH keys between each pair of worker nodes? I have no issue
deploying VMs from my frontend to each of the worker nodes.

I have also put all the nodes into the same cluster with the same datastore,
but it still doesn't work; same error log.




Thank you.



Regards & Best Wishes, 




.: Amier Anis :. 
Mobile: +6012-260-0819 






[one-users] Live migration fail

2014-10-02 Thread Amier Anis
Dear Team,

I can't do a live migration; the error is this:

Fri Oct 3 10:20:32 2014 [Z0][LCM][I]: New VM state is MIGRATE
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Successfully execute transfer manager
driver operation: tm_premigrate.
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: ExitCode: 0
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Successfully execute network driver
operation: pre.
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Command execution fail:
/var/tmp/one/vmm/kvm/migrate 'one-6' 'ds12.myserver.lan'
'ds13.myserver.lan' 6 ds13.myserver.lan
Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: migrate: Command "virsh --connect
qemu:///system migrate --live one-6 qemu+ssh://ds12/system" failed: error:
Cannot recv data: Warning: Permanently added
'ds12.cloud.skali.net,172.20.11.12'
(RSA) to the list of known hosts.
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied, please try again.
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied, please try again.
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Permission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer
Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: Could not migrate one-6 to
ds12.cloud.skali.net
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: ExitCode: 1
Fri Oct 3 10:20:33 2014 [Z0][VMM][I]: Failed to execute virtualization
driver operation: migrate.
Fri Oct 3 10:20:33 2014 [Z0][VMM][E]: Error live migrating VM: Could not
migrate one-6 to ds12.cloud.skali.net
Fri Oct 3 10:20:33 2014 [Z0][LCM][I]: Fail to live migrate VM. Assuming
that the VM is still RUNNING (will poll VM).


Do I need to set up SSH keys between each pair of worker nodes? I have no
issue deploying VMs from my frontend to each of the worker nodes.

I have also put all the nodes into the same cluster with the same datastore,
but it still doesn't work; same error log.


Thank you.

Regards & Best Wishes,


*.: Amier Anis :.*
Mobile: +6012-260-0819
--


Re: [one-users] Live migration fails

2014-09-08 Thread Johan Kooijman
Javier,

Tested it already, but that works without any issue:

oneadmin@hv8:~$ ssh 10.23.24.19
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-35-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Sep  8 08:08:46 CEST 2014

  System load:  1.25   Users logged in:   1
  Usage of /:   1.3% of 117.21GB   IP address for p1p1:   10.23.24.19
  Memory usage: 23%IP address for bond0:  10.0.24.2
  Swap usage:   0% IP address for virbr0: 192.168.122.1
  Processes:327

  Graph this data and manage this system at:
https://landscape.canonical.com/

9 packages can be updated.
1 update is a security update.

Last login: Mon Sep  8 08:08:47 2014 from admin.one.gs.cloud.lan



On Mon, Sep 8, 2014 at 9:53 AM, Javier Fontan 
wrote:

> Go to machine 10.23.24.13 and as oneadmin execute:
>
> ssh 10.23.24.19
>
> Most probably you have an old fingerprint for that host in the known_hosts
> file
>
> On Mon, Sep 8, 2014 at 8:09 AM, Johan Kooijman 
> wrote:
> > Yup, not a problem at all.
> >
> >
> > On Mon, Sep 8, 2014 at 8:05 AM, Sander Klein  wrote:
> >>
> >> Hi,
> >>
> >> Can the destination host oneadmin ssh to the source host?
> >>
> >> Greets,
> >>
> >> Sander
> >>
> >> On 8 sep. 2014, at 07:54, Johan Kooijman 
> wrote:
> >>
> >> Hey All,
> >>
> >> I just tried to live migrate a VM, but got a message it failed:
> >>
> >> migrate: Command "virsh --connect qemu:///system migrate --live one-810
> >> qemu+ssh://10.23.24.19/system" failed: error: Cannot recv data: Host
> key
> >> verification failed.: Connection reset by peer
> >>
> >> See https://plakbord.cloud.nl/p/hBkytAtcPG627NLYrUS18AAB.
> >>
> >> The qemu process is running as user oneadmin, and user oneadmin can
> >> successfully ssh to the node. Am I missing something here?
> >>
> >> --
> >> Met vriendelijke groeten / With kind regards,
> >> Johan Kooijman
> >>
> >
> >
> >
> >
> > --
> > Met vriendelijke groeten / With kind regards,
> > Johan Kooijman
> >
> >
>
>
>
> --
> Javier Fontán Muiños
> Developer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | @OpenNebula | github.com/jfontan
>



-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman


Re: [one-users] Live migration fails

2014-09-08 Thread Javier Fontan
Go to machine 10.23.24.13 and as oneadmin execute:

ssh 10.23.24.19

Most probably you have an old fingerprint for that host in the known_hosts file.
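The fix for a stale fingerprint can be sketched as follows. The demo works on a scratch file with a fake key; on the node the real file is oneadmin's ~/.ssh/known_hosts (drop the `-f` flag to use the default location):

```shell
# Simulate a known_hosts file holding a stale entry for the destination host.
KNOWN_HOSTS=$(mktemp)
echo "10.23.24.19 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAASTALE" > "$KNOWN_HOSTS"

# Remove the old fingerprint for that host.
ssh-keygen -R 10.23.24.19 -f "$KNOWN_HOSTS"

# The entry is gone; the next interactive `ssh 10.23.24.19` will record the
# host's current key.
grep "10.23.24.19" "$KNOWN_HOSTS" || true
```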

On Mon, Sep 8, 2014 at 8:09 AM, Johan Kooijman  wrote:
> Yup, not a problem at all.
>
>
> On Mon, Sep 8, 2014 at 8:05 AM, Sander Klein  wrote:
>>
>> Hi,
>>
>> Can the destination host oneadmin ssh to the source host?
>>
>> Greets,
>>
>> Sander
>>
>> On 8 sep. 2014, at 07:54, Johan Kooijman  wrote:
>>
>> Hey All,
>>
>> I just tried to live migrate a VM, but got a message it failed:
>>
>> migrate: Command "virsh --connect qemu:///system migrate --live one-810
>> qemu+ssh://10.23.24.19/system" failed: error: Cannot recv data: Host key
>> verification failed.: Connection reset by peer
>>
>> See https://plakbord.cloud.nl/p/hBkytAtcPG627NLYrUS18AAB.
>>
> >> The qemu process is running as user oneadmin, and user oneadmin can
> >> successfully ssh to the node. Am I missing something here?
>>
>> --
>> Met vriendelijke groeten / With kind regards,
>> Johan Kooijman
>>
>
>
>
>
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
>
>



-- 
Javier Fontán Muiños
Developer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | @OpenNebula | github.com/jfontan


Re: [one-users] Live migration fails

2014-09-07 Thread Johan Kooijman
Yup, not a problem at all.


On Mon, Sep 8, 2014 at 8:05 AM, Sander Klein  wrote:

> Hi,
>
> Can the destination host oneadmin ssh to the source host?
>
> Greets,
>
> Sander
>
> On 8 sep. 2014, at 07:54, Johan Kooijman  wrote:
>
> Hey All,
>
> I just tried to live migrate a VM, but got a message it failed:
>
> migrate: Command "virsh --connect qemu:///system migrate --live one-810
> qemu+ssh://10.23.24.19/system" failed: error: Cannot recv data: Host key
> verification failed.: Connection reset by peer
>
> See https://plakbord.cloud.nl/p/hBkytAtcPG627NLYrUS18AAB.
>
> The qemu process is running as user oneadmin, and user oneadmin can
> successfully ssh to the node. Am I missing something here?
>
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
>
>
>


-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman


Re: [one-users] Live migration fails

2014-09-07 Thread Sander Klein
Hi,

Can the destination host oneadmin ssh to the source host?

Greets,

Sander

> On 8 sep. 2014, at 07:54, Johan Kooijman  wrote:
> 
> Hey All,
> 
> I just tried to live migrate a VM, but got a message it failed:
> 
> migrate: Command "virsh --connect qemu:///system migrate --live one-810 
> qemu+ssh://10.23.24.19/system" failed: error: Cannot recv data: Host key 
> verification failed.: Connection reset by peer
> 
> See https://plakbord.cloud.nl/p/hBkytAtcPG627NLYrUS18AAB.
> 
> The qemu process is running as user oneadmin, and user oneadmin can successfully
> ssh to the node. Am I missing something here?
> 
> -- 
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman


[one-users] Live migration fails

2014-09-07 Thread Johan Kooijman
Hey All,

I just tried to live migrate a VM, but got a message it failed:

migrate: Command "virsh --connect qemu:///system migrate --live one-810
qemu+ssh://10.23.24.19/system" failed: error: Cannot recv data: Host key
verification failed.: Connection reset by peer

See https://plakbord.cloud.nl/p/hBkytAtcPG627NLYrUS18AAB.

The qemu process is running as user oneadmin, and user oneadmin can successfully
ssh to the node. Am I missing something here?

-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman


Re: [one-users] live migration without shared storage

2014-08-13 Thread Jaime Melis
Hi,

yes, you need shared storage for live migration. However, please note that
regular migration will work perfectly and the machine state will be
preserved.
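For reference, the difference shows up in the CLI. The VM and host identifiers below are illustrative, and the commands need a running front-end, so this is an untested sketch:

```shell
# Cold migration: the VM is saved, its files are transferred, and it is
# resumed on the target host; this works with the SSH transfer manager
# and preserves the machine state.
onevm migrate 11 host02

# Live migration: requires the system datastore to be shared between hosts.
onevm migrate --live 11 host02
```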

cheers,
Jaime


On Fri, Jul 11, 2014 at 8:19 PM, Thomas Stein  wrote:

> Hello
>
> Is it right that OpenNebula currently does not support live migration without
> shared storage? I get this error when trying:
>
> Fri Jul 11 22:14:32 2014 [VMM][E]: migrate: Command "virsh --connect
> qemu:///system migrate --live one-11 qemu+ssh://192.168.122.200/system"
> failed: error: unable to resolve '/var/lib/one//datastores/0/11/disk.1': No
> such file or directory
>
> thanks and best regards
> t.
>



-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org


Re: [one-users] (live) migration

2014-08-13 Thread Johan Kooijman
Ah, got it. Too bad CephFS isn't stable enough yet to replace NFS/iSCSI.


On Mon, Aug 11, 2014 at 9:34 PM, Campbell, Bill <
bcampb...@axcess-financial.com> wrote:

> What are you using as a system datastore (shared or SSH)?
>
> If you are using shared, you'll need to ensure the /var/lib/one directory
> is shared among all of your nodes (particularly the ONE server and your
> hypervisors).  If you are using SSH as your system datastore, then you need
> to modify the SSH transfer manager pre and post migrate scripts to include
> moving the checkpoint files from one hypervisor node to another (and
> cleaning up after migration).
>
>
> http://lists.opennebula.org/pipermail/users-opennebula.org/2013-April/022705.html
>
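The SSH-datastore case above can be sketched roughly as follows. This is pseudocode in the spirit of the linked thread, not the actual TM driver code: the variable names and paths are placeholders, and the real scripts receive their arguments from OpenNebula:

```shell
# (sketch) tm/ssh pre-migrate: push the VM's directory -- checkpoint and
# deployment file included -- to the destination before libvirt migrates.
VM_DIR=/var/lib/one/datastores/$DS_ID/$VM_ID   # placeholder variables
ssh "$DST_HOST" "mkdir -p $VM_DIR"
scp -rp "$VM_DIR/." "$DST_HOST:$VM_DIR/"

# (sketch) tm/ssh post-migrate: clean up the now-stale copy on the source.
ssh "$SRC_HOST" "rm -rf $VM_DIR"
```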
> --
>
>
> *Bill Campbell *Infrastructure Architect
>
> Axcess Financial Services, Inc.
> 7755 Montgomery Rd., Suite 400
> Cincinnati, OH  45236
>
> --
> *From: *"Johan Kooijman" 
> *To: *users@lists.opennebula.org
> *Sent: *Monday, August 11, 2014 3:18:37 PM
> *Subject: *[one-users] (live) migration
>
>
> All,
>
> I'm testing with ONE right now, and running into an issue I can't
> explain. I created VMs on a Ceph datastore, and that works fine; so do
> cloning & snapshotting. But when I want to migrate a VM to another host:
>
> Mon Aug 11 21:16:24 2014 : Error restoring VM: Could not restore from
> /var/lib/one//datastores/105/99/checkpoint
>
> Am I missing something here?
>
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
>
>
>
>
>
>


-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman

T +31(0) 6 43 44 45 27
E m...@johankooijman.com


Re: [one-users] (live) migration

2014-08-12 Thread Hamada, Ondrej
Hi,
Is the checkpoint file accessible on the host? Have you checked the hypervisor logs?
(for KVM something like /var/log/libvirt/qemu/one-99.log)

O.

From: Users [mailto:users-boun...@lists.opennebula.org] On Behalf Of Johan 
Kooijman
Sent: Monday, August 11, 2014 9:19 PM
To: users@lists.opennebula.org
Subject: [one-users] (live) migration

All,

I'm testing with ONE right now, and running into an issue I can't explain. I
created VMs on a Ceph datastore, and that works fine; so do cloning &
snapshotting. But when I want to migrate a VM to another host:

Mon Aug 11 21:16:24 2014 : Error restoring VM: Could not restore from 
/var/lib/one//datastores/105/99/checkpoint

Am I missing something here?

--
Met vriendelijke groeten / With kind regards,
Johan Kooijman

This e-mail and any attachment is for authorised use by the intended 
recipient(s) only. It may contain proprietary material, confidential 
information and/or be subject to legal privilege. It should not be copied, 
disclosed to, retained or used by, any other party. If you are not an intended 
recipient then please promptly delete this e-mail and any attachment and all 
copies and inform the sender. Thank you for understanding.


[one-users] (live) migration

2014-08-11 Thread Johan Kooijman
All,

I'm testing with ONE right now, and running into an issue I can't
explain. I created VMs on a Ceph datastore, and that works fine; so do
cloning & snapshotting. But when I want to migrate a VM to another host:

Mon Aug 11 21:16:24 2014 : Error restoring VM: Could not restore from
/var/lib/one//datastores/105/99/checkpoint

Am I missing something here?

-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman


[one-users] live migration without shared storage

2014-07-11 Thread Thomas Stein
Hello

Is it right that OpenNebula currently does not support live migration without
shared storage? I get this error when trying:

Fri Jul 11 22:14:32 2014 [VMM][E]: migrate: Command "virsh --connect 
qemu:///system migrate --live one-11 qemu+ssh://192.168.122.200/system" 
failed: error: unable to resolve '/var/lib/one//datastores/0/11/disk.1': No 
such file or directory

thanks and best regards
t.


Re: [one-users] Live migration leads to UNKNOWN state

2014-05-26 Thread Jaime Melis
Hi Stefan,

Apologies, but I still don't understand if there's a bug here or not.

Could you please open a bug report with instructions on how to replicate
the issue?

Regards,
Jaime


On Wed, May 7, 2014 at 10:00 AM, Stefan Ivanov wrote:

> We have 3 datastores with the following ids:
>
> 0 - system
>
> 101 - ceph_data
>
> 102 - system_mc
>
> Live migration happens without a problem when we move the data from the
> datastore directory to the migration target machine datastore directory
> before that.
>
>
>
> *From:* Jaime Melis [mailto:jme...@opennebula.org]
> *Sent:* 05 май 2014 г. 12:54 ч.
>
> *To:* Stefan Ivanov
> *Cc:* users@lists.opennebula.org
> *Subject:* Re: [one-users] Live migration leads to UNKNOWM state
>
>
>
> Hi Stefan,
>
>
>
> What changes did you make? Is this a bug you're reporting?
>
>
>
> cheers,
> Jaime
>
>
>
> On Wed, Apr 30, 2014 at 2:57 PM, Stefan Ivanov 
> wrote:
>
> Hello Jaime
>
>
>
> The problem is resolved.
>
> After a small modification of the live-migration scripts everything is fine.
> The problem was that the checkpoint and deployment files were not copied to
> the target node.
>
> Right now I am upgrading to 4.6 and testing how live migration behaves in
> this version.
>
>
>
> Thanks and best regards,
>
> Stefan Ivanov
>
>
>
> *From:* Jaime Melis [mailto:jme...@opennebula.org]
> *Sent:* 30 април 2014 г. 15:50 ч.
> *To:* Stefan Ivanov
> *Cc:* users@lists.opennebula.org
> *Subject:* Re: [one-users] Live migration leads to UNKNOWN state
>
>
>
> Hi Stefan,
>
>
>
> Can you verify whether, after a while, the VM reverts back to the RUNNING state.
>
>
>
> Can you also manually confirm that the VM is indeed on the target server
> (by running virsh -c qemu:///system list)
>
>
>
> cheers,
> Jaime
>
>
>
> On Thu, Apr 24, 2014 at 11:46 AM, Stefan Ivanov 
> wrote:
>
> I'm running OpenNebula + CEPH + KVM. When I try to do a live migration
> from one host to another, everything looks good: there are no errors and
> the process is running on the right host, but the virtual machine goes to
> the UNKNOWN state (RUNNING(host1) -> MIGRATE -> RUNNING(host2) ->
> UNKNOWN(host2)). In the VM log I see this: VM running but it was not found.
> Boot and delete actions available or try to recover it manually, New VM
> state is UNKNOWN.
>
> About my configuration:
> Ceph datastore:
> ID  101
> Nameceph_data
> Cluster SunSystem
> Base path   /var/lib/one/datastores/101
> Capacity
> Total   36.4TB
> Used3.4TB
> Free33TB
> Limit   -
>
> System datastores:
> system
> /var/lib/one//datastores/0
> SHARED  NO
> TM_MAD  ssh
> TYPESYSTEM_DS
> system_mc
> /var/lib/one//datastores/102
> SHARED  NO
> TM_MAD  ssh
> TYPESYSTEM_DS
>
> VM LOG:
> Thu Apr 24 12:15:49 2014 [LCM][I]: New VM state is MIGRATE
> Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute transfer
> manager driver operation: tm_premigrate.
> Thu Apr 24 12:15:49 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute network
> driver operation: pre.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute
> virtualization driver operation: migrate.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network
> driver operation: clean.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network
> driver operation: post.
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute transfer
> manager driver operation: tm_postmigrate.
> Thu Apr 24 12:15:55 2014 [LCM][I]: New VM state is RUNNING
> Thu Apr 24 12:16:04 2014 [VMM][I]: VM running but it was not
> found. Boot and delete actions available or try to recover it manually
> Thu Apr 24 12:16:04 2014 [LCM][I]: New VM state is UNKNOWN
>
> Version of OpenNebula 4.4.1
>
>
> CONFIDENTIALITY NOTICE
> The information contained in this message (including any attachments) is
> confidential and may be legally privileged or otherwise protected from
> disclosure. This message is intended solely for the addressee(s). If you
> are not the intended recipient, please notify the sender by return e-mail
> and delete this message from your 

Re: [one-users] Live Migration Fail

2014-05-15 Thread Christophe Duez
Sorry, I see that I need to place Node1KVM in my /etc/hosts file;
the migration process uses the hostnames of the nodes.
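For anyone hitting the same "Unable to resolve address" error: the destination libvirtd reports its own hostname for the migration stream, so every node's name must resolve on every other node. A minimal /etc/hosts entry on each KVM node could look like this (addresses are illustrative):

```
# /etc/hosts on every KVM node
192.168.0.11   Node1KVM kvmnode1
192.168.0.12   Node2KVM kvmnode2
```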


On Thu, May 15, 2014 at 4:36 PM, Christophe Duez <
christophe.d...@student.uantwerpen.be> wrote:

> Hello,
> When I tried a live migration it failed; however, I can do a normal migration
> without any problems. This is the error:
> migrate: Command "virsh --connect qemu:///system migrate --live one-72
> qemu+ssh://kvmnode1/system" failed: error: Unable to resolve address
> 'Node1KVM' service '49152': Name or service not known
>
> see pastebin for full log: http://pastebin.com/rH10zDfQ
>
> Does anybody know the solution? :/
>
> --
> Kind regards,
> Duez Christophe
> Student at University of Antwerp :
> Master of Industrial Sciences: Electronics-ICT
>
> E christophe.d...@student.uantwperen.be
> L linkedin duez-christophe
>



-- 
Kind regards,
Duez Christophe
Student at University of Antwerp :
Master of Industrial Sciences: Electronics-ICT

E christophe.d...@student.uantwperen.be
L linkedin duez-christophe
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Live Migration Fail

2014-05-15 Thread Christophe Duez
Hello,
When I tried a live migration it failed; however, I can do a normal migration
without any problems. This is the error:
migrate: Command "virsh --connect qemu:///system migrate --live one-72
qemu+ssh://kvmnode1/system" failed: error: Unable to resolve address
'Node1KVM' service '49152': Name or service not known

see pastebin for full log: http://pastebin.com/rH10zDfQ

Does anybody know the solution? :/

-- 
Kind regards,
Duez Christophe
Student at University of Antwerp :
Master of Industrial Sciences: Electronics-ICT

E christophe.d...@student.uantwperen.be
L linkedin duez-christophe
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Live migration leads to UNKNOWN state

2014-05-07 Thread Stefan Ivanov
We have 3 datastores with the following ids:
0 - system
101 - ceph_data
102 - system_mc
Live migration happens without a problem when we move the data from the 
datastore directory to the migration target machine datastore directory before 
that.

From: Jaime Melis [mailto:jme...@opennebula.org]
Sent: 05 май 2014 г. 12:54 ч.
To: Stefan Ivanov
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Live migration leads to UNKNOWN state

Hi Stefan,

What changes did you make? Is this a bug you're reporting?

cheers,
Jaime

On Wed, Apr 30, 2014 at 2:57 PM, Stefan Ivanov <s.iva...@maxtelecom.bg> wrote:
Hello Jaime

The problem is resolved.

After a small modification of the live-migration scripts everything is fine. The
problem was that the checkpoint and deployment files were not copied to the
target node.

Right now I am upgrading to 4.6 and testing how live migration behaves in this
version.

Thanks and best regards,
Stefan Ivanov

From: Jaime Melis [mailto:jme...@opennebula.org]
Sent: 30 април 2014 г. 15:50 ч.
To: Stefan Ivanov
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Live migration leads to UNKNOWN state

Hi Stefan,

Can you verify whether, after a while, the VM reverts back to the RUNNING state.

Can you also manually confirm that the VM is indeed on the target server (by
running virsh -c qemu:///system list)

cheers,
Jaime

On Thu, Apr 24, 2014 at 11:46 AM, Stefan Ivanov <s.iva...@maxtelecom.bg> wrote:
I'm running OpenNebula + CEPH + KVM. When I try to do a live migration from one
host to another, everything looks good: there are no errors and the process is
running on the right host, but the virtual machine goes to the UNKNOWN state
(RUNNING(host1) -> MIGRATE -> RUNNING(host2) -> UNKNOWN(host2)). In the VM log
I see this: VM running but it was not found. Boot and delete actions available
or try to recover it manually, New VM state is UNKNOWN.

About my configuration:
Ceph datastore:
ID  101
Nameceph_data
Cluster SunSystem
Base path   /var/lib/one/datastores/101
Capacity
Total   36.4TB
Used3.4TB
Free33TB
Limit   -

System datastores:
system
/var/lib/one//datastores/0
SHARED  NO
TM_MAD  ssh
TYPESYSTEM_DS
system_mc
/var/lib/one//datastores/102
SHARED  NO
TM_MAD  ssh
TYPESYSTEM_DS

VM LOG:
Thu Apr 24 12:15:49 2014 [LCM][I]: New VM state is MIGRATE
Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute transfer 
manager driver operation: tm_premigrate.
Thu Apr 24 12:15:49 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute network driver 
operation: pre.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute virtualization 
driver operation: migrate.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network driver 
operation: clean.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network driver 
operation: post.
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute transfer 
manager driver operation: tm_postmigrate.
Thu Apr 24 12:15:55 2014 [LCM][I]: New VM state is RUNNING
Thu Apr 24 12:16:04 2014 [VMM][I]: VM running but it was not found. 
Boot and delete actions available or try to recover it manually
Thu Apr 24 12:16:04 2014 [LCM][I]: New VM state is UNKNOWN

Version of OpenNebula 4.4.1


CONFIDENTIALITY NOTICE
The information contained in this message (including any attachments) is 
confidential and may be legally privileged or otherwise protected from 
disclosure. This message is intended solely for the addressee(s). If you are 
not the intended recipient, please notify the sender by return e-mail and 
delete this message from your system. Any unauthorised use, reproduction, or 
dissemination of this message is strictly prohibited. Any liability arising 
from any third party acting, or refraining from acting, on any information 
contained in this e-mail is hereby excluded. Please note that e-mails are 
susceptible to change. Max Telecom shall not be liable for the improper or 
incomplete transmission of the information contained in this communication, nor 
shall it be liable for any delay in its receipt.


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org

Re: [one-users] Live migration leads to UNKNOWN state

2014-05-05 Thread Jaime Melis
Hi Stefan,

What changes did you make? Is this a bug you're reporting?

cheers,
Jaime


On Wed, Apr 30, 2014 at 2:57 PM, Stefan Ivanov wrote:

> Hello Jaime
>
>
>
> Problem is resolved,
>
>
>
> After little modification of live migrate scripts everything is fine. The
> problem is that: checkpoint and deployment file not copied to target node.
>
>
>
> Right now I upgrade to 4.6 and test how is live migration I this version.
>
>
>
> Thanks and best regards,
>
> Stefan Ivanov
>
>
>
> *From:* Jaime Melis [mailto:jme...@opennebula.org]
> *Sent:* 30 April 2014, 15:50
> *To:* Stefan Ivanov
> *Cc:* users@lists.opennebula.org
> *Subject:* Re: [one-users] Live migration leads to UNKNOWN state
>
>
>
> Hi Stefan,
>
>
>
> can you verify if after a while the vms reverts back to the RUNNING state.
>
>
>
> Can you also manually confirm that the VM is indeed in the target server
> (by running virsh -c qemu:///system list)
>
>
>
> cheers,
> Jaime
>
>
>
> On Thu, Apr 24, 2014 at 11:46 AM, Stefan Ivanov 
> wrote:
>
> I`m running OpenNebula + CEPH + KVM. When I try to make live migration
> from one host to other everything looks good, no have errors, process is
> running on right host but Virtual machine go to UNKNOWN
> state(RUNNING(host1) -> MIGRATE -> RUNNING(host2) -> UNKNOWN(host2)). In vm
> log I see this: VM running but it was not found. Boot and delete actions
> available or try to recover it manually, New VM state is UNKNOWN.
>
> About my configuration:
> Ceph datastore:
> ID  101
> Nameceph_data
> Cluster SunSystem
> Base path   /var/lib/one/datastores/101
> Capacity
> Total   36.4TB
> Used3.4TB
> Free33TB
> Limit   -
>
> System datastores:
> system
> /var/lib/one//datastores/0
> SHARED  NO
> TM_MAD  ssh
> TYPESYSTEM_DS
> system_mc
> /var/lib/one//datastores/102
> SHARED  NO
> TM_MAD  ssh
> TYPESYSTEM_DS
>
> VM LOG:
> Thu Apr 24 12:15:49 2014 [LCM][I]: New VM state is MIGRATE
> Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute transfer
> manager driver operation: tm_premigrate.
> Thu Apr 24 12:15:49 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute network
> driver operation: pre.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute
> virtualization driver operation: migrate.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network
> driver operation: clean.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network
> driver operation: post.
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute transfer
> manager driver operation: tm_postmigrate.
> Thu Apr 24 12:15:55 2014 [LCM][I]: New VM state is RUNNING
> Thu Apr 24 12:16:04 2014 [VMM][I]: VM running but it was not
> found. Boot and delete actions available or try to recover it manually
> Thu Apr 24 12:16:04 2014 [LCM][I]: New VM state is UNKNOWN
>
> Version of OpenNebula 4.4.1
>
>
>
>
>
>
>
>
>
>
>
> --
>
> Jaime Melis
> Project Engineer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | jme...@opennebula.org
>

Re: [one-users] Live migration leads to UNKNOWN state

2014-04-30 Thread Stefan Ivanov
Hello Jaime

Problem is resolved.

After a small modification of the live-migration scripts everything is fine. The
problem was that the checkpoint and deployment files were not copied to the target node.

Right now I am upgrading to 4.6 and will test live migration in this version.

Thanks and best regards,
Stefan Ivanov

From: Jaime Melis [mailto:jme...@opennebula.org]
Sent: 30 April 2014, 15:50
To: Stefan Ivanov
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Live migration leads to UNKNOWN state

Hi Stefan,

can you verify whether, after a while, the VM reverts back to the RUNNING state?

Can you also manually confirm that the VM is indeed on the target server (by 
running virsh -c qemu:///system list)?

cheers,
Jaime

On Thu, Apr 24, 2014 at 11:46 AM, Stefan Ivanov <s.iva...@maxtelecom.bg> wrote:
I'm running OpenNebula + Ceph + KVM. When I try to live-migrate a VM from one 
host to another everything looks good: there are no errors and the process is 
running on the right host, but the virtual machine goes to the UNKNOWN state 
(RUNNING(host1) -> MIGRATE -> RUNNING(host2) -> UNKNOWN(host2)). In the VM log 
I see this: VM running but it was not found. Boot and delete actions available 
or try to recover it manually, New VM state is UNKNOWN.

About my configuration:
Ceph datastore:
ID  101
Nameceph_data
Cluster SunSystem
Base path   /var/lib/one/datastores/101
Capacity
Total   36.4TB
Used3.4TB
Free33TB
Limit   -

System datastores:
system
/var/lib/one//datastores/0
SHARED  NO
TM_MAD  ssh
TYPESYSTEM_DS
system_mc
/var/lib/one//datastores/102
SHARED  NO
TM_MAD  ssh
TYPESYSTEM_DS

VM LOG:
Thu Apr 24 12:15:49 2014 [LCM][I]: New VM state is MIGRATE
Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute transfer 
manager driver operation: tm_premigrate.
Thu Apr 24 12:15:49 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute network driver 
operation: pre.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute virtualization 
driver operation: migrate.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network driver 
operation: clean.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network driver 
operation: post.
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute transfer 
manager driver operation: tm_postmigrate.
Thu Apr 24 12:15:55 2014 [LCM][I]: New VM state is RUNNING
Thu Apr 24 12:16:04 2014 [VMM][I]: VM running but it was not found. 
Boot and delete actions available or try to recover it manually
Thu Apr 24 12:16:04 2014 [LCM][I]: New VM state is UNKNOWN

Version of OpenNebula 4.4.1







--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org


Re: [one-users] Live migration leads to UNKNOWN state

2014-04-30 Thread Jaime Melis
Hi Stefan,

can you verify whether, after a while, the VM reverts back to the RUNNING state?

Can you also manually confirm that the VM is indeed on the target server
(by running virsh -c qemu:///system list)?

cheers,
Jaime


On Thu, Apr 24, 2014 at 11:46 AM, Stefan Ivanov wrote:

> I`m running OpenNebula + CEPH + KVM. When I try to make live migration
> from one host to other everything looks good, no have errors, process is
> running on right host but Virtual machine go to UNKNOWN
> state(RUNNING(host1) -> MIGRATE -> RUNNING(host2) -> UNKNOWN(host2)). In vm
> log I see this: VM running but it was not found. Boot and delete actions
> available or try to recover it manually, New VM state is UNKNOWN.
>
> About my configuration:
> Ceph datastore:
> ID  101
> Nameceph_data
> Cluster SunSystem
> Base path   /var/lib/one/datastores/101
> Capacity
> Total   36.4TB
> Used3.4TB
> Free33TB
> Limit   -
>
> System datastores:
> system
> /var/lib/one//datastores/0
> SHARED  NO
> TM_MAD  ssh
> TYPESYSTEM_DS
> system_mc
> /var/lib/one//datastores/102
> SHARED  NO
> TM_MAD  ssh
> TYPESYSTEM_DS
>
> VM LOG:
> Thu Apr 24 12:15:49 2014 [LCM][I]: New VM state is MIGRATE
> Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute transfer
> manager driver operation: tm_premigrate.
> Thu Apr 24 12:15:49 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute network
> driver operation: pre.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute
> virtualization driver operation: migrate.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network
> driver operation: clean.
> Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network
> driver operation: post.
> Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute transfer
> manager driver operation: tm_postmigrate.
> Thu Apr 24 12:15:55 2014 [LCM][I]: New VM state is RUNNING
> Thu Apr 24 12:16:04 2014 [VMM][I]: VM running but it was not
> found. Boot and delete actions available or try to recover it manually
> Thu Apr 24 12:16:04 2014 [LCM][I]: New VM state is UNKNOWN
>
> Version of OpenNebula 4.4.1
>
>
>
>



-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org


[one-users] Live migration leads to UNKNOWN state

2014-04-24 Thread Stefan Ivanov
I'm running OpenNebula + Ceph + KVM. When I try to live-migrate a VM from one 
host to another everything looks good: there are no errors and the process is 
running on the right host, but the virtual machine goes to the UNKNOWN state 
(RUNNING(host1) -> MIGRATE -> RUNNING(host2) -> UNKNOWN(host2)). In the VM log 
I see this: VM running but it was not found. Boot and delete actions available 
or try to recover it manually, New VM state is UNKNOWN.

About my configuration:
Ceph datastore:
ID  101 
Nameceph_data   
Cluster SunSystem   
Base path   /var/lib/one/datastores/101 
Capacity
Total   36.4TB
Used3.4TB
Free33TB
Limit   -

System datastores:
system
/var/lib/one//datastores/0
SHARED  NO  
TM_MAD  ssh 
TYPESYSTEM_DS
system_mc
/var/lib/one//datastores/102
SHARED  NO  
TM_MAD  ssh 
TYPESYSTEM_DS   

VM LOG:
Thu Apr 24 12:15:49 2014 [LCM][I]: New VM state is MIGRATE
Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute transfer 
manager driver operation: tm_premigrate.
Thu Apr 24 12:15:49 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:49 2014 [VMM][I]: Successfully execute network driver 
operation: pre.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute virtualization 
driver operation: migrate.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network driver 
operation: clean.
Thu Apr 24 12:15:55 2014 [VMM][I]: ExitCode: 0
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute network driver 
operation: post.
Thu Apr 24 12:15:55 2014 [VMM][I]: Successfully execute transfer 
manager driver operation: tm_postmigrate.
Thu Apr 24 12:15:55 2014 [LCM][I]: New VM state is RUNNING
Thu Apr 24 12:16:04 2014 [VMM][I]: VM running but it was not found. 
Boot and delete actions available or try to recover it manually
Thu Apr 24 12:16:04 2014 [LCM][I]: New VM state is UNKNOWN

Version of OpenNebula 4.4.1 






Re: [one-users] live migration error

2013-08-14 Thread 김 경륜

Hi Hyun Woo,

All my hosts (gcloud01-03) have the virsh command:



[root@gcloud03 ~]# which virsh
/usr/bin/virsh
[root@gcloud03 ~]# virsh -c qemu:///system nodeinfo
CPU model:           x86_64
CPU(s):              6
CPU frequency:       2660 MHz
CPU socket(s):       1
Core(s) per socket:  6
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         24597092 KiB

[root@gcloud03 ~]#



Thanks
Gyeong-Ryoon Kim.


On Aug 13, 2013, at 10:57 PM, Hyun Woo Kim wrote:

> This error seems to say that gcloud03 does not have the virsh command 
> installed yet.
> HyunWoo
> 
> From: 김 경륜 
> Date: Tuesday, August 13, 2013 3:22 AM
> To: Users OpenNebula 
> Subject: [one-users] live migration error
> 
> HI all
> 
> When I do live migration command  it doesn't work 
> 
> I have this error message on vm log.
> 
> Mon Aug 12 20:39:39 2013 [LCM][I]: New VM state is MIGRATE
> Mon Aug 12 20:39:39 2013 [VMM][I]: Successfully execute transfer manager 
> driver operation: tm_premigrate.
> Mon Aug 12 20:39:39 2013 [VMM][I]: ExitCode: 0
> Mon Aug 12 20:39:39 2013 [VMM][I]: Successfully execute network driver 
> operation: pre.
> Mon Aug 12 20:39:39 2013 [VMM][I]: Command execution fail: 
> /gcloud/one//var/remotes/vmm/kvm/migrate_local 'one-43' 'gcloud01' 'gcloud03' 
> 43 gcloud03
> Mon Aug 12 20:39:39 2013 [VMM][I]: 
> /gcloud/one//var/remotes/vmm/kvm/migrate_local: line 25: virsh: command not 
> found
> Mon Aug 12 20:39:39 2013 [VMM][I]: ExitCode: 127
> Mon Aug 12 20:39:39 2013 [VMM][I]: Failed to execute virtualization driver 
> operation: migrate.
> Mon Aug 12 20:39:39 2013 [VMM][E]: Error live migrating VM
> Mon Aug 12 20:39:40 2013 [LCM][I]: Fail to live migrate VM. Assuming that the 
> VM is still RUNNING (will poll VM).
> 
> 
> 
> 
> 
> I'm using scientific linux for all the host and kvm driver . The OpenNubula 
> version is 4.2.
> 
> 
> basic configuration like below:
> 
> >> front-end machine
> [oneadmin@gcloud-front etc]$ grep -vE '^(#|$)' oned.conf
> LOG = [
>   system  = "file",
>   debug_level = 3
> ]
> MONITORING_INTERVAL  = 300
> SCRIPTS_REMOTE_DIR=/var/tmp/one
> PORT = 2633
> DB = [ backend = "sqlite" ]
> VNC_BASE_PORT = 5900
> NETWORK_SIZE = 254
> MAC_PREFIX   = "02:00"
> DATASTORE_CAPACITY_CHECK = "yes"
> DEFAULT_IMAGE_TYPE= "OS"
> DEFAULT_DEVICE_PREFIX = "hd"
> IM_MAD = [
>   name   = "kvm",
>   executable = "one_im_ssh",
>   arguments  = "-r 0 -t 15 kvm" ]
> IM_MAD = [
>   name   = "ec2",
>   executable = "one_im_ec2",
>   arguments  = "im_ec2/im_ec2.conf" ]
> VM_MAD = [
> name   = "kvm",
> executable = "one_vmm_exec",
> arguments  = "-t 15 -r 0 kvm -l migrate=migrate_local",
> #arguments  = "-t 15 -r 0 kvm -l ",
> default= "vmm_exec/vmm_exec_kvm.conf",
> type   = "kvm" ]
> VM_MAD = [
> name   = "ec2",
> executable = "one_vmm_ec2",
> arguments  = "vmm_ec2/vmm_ec2.conf",
> type   = "xml" ]
> TM_MAD = [
> executable = "one_tm",
> arguments  = "-t 15 -d dummy,lvm,shared,qcow2,ssh,vmfs,iscsi,ceph" ]
> DATASTORE_MAD = [
> executable = "one_datastore",
> arguments  = "-t 15 -d dummy,fs,vmfs,iscsi,lvm,ceph"
> ]
> HM_MAD = [
> executable = "one_hm" ]
> AUTH_MAD = [
> executable = "one_auth_mad",
> authn = "ssh,x509,ldap,server_cipher,server_x509"
> ]
> SESSION_EXPIRATION_TIME = 900
> DEFAULT_UMASK = 177
> VM_RESTRICTED_ATTR = "NIC/MAC"
> VM_RESTRICTED_ATTR = "NIC/VLAN_ID"
> IMAGE_RESTRICTED_ATTR = "SOURCE"
> [oneadmin@gcloud-front etc]$
> 
> 
> 
> 
> 
> 
> >>> host machines
> 
>  [root@gcloud01 ~]# grep  -Ev '^(#|$)' /etc/libvirt/qemu.conf
> user = "oneadmin"
> group = "oneadmin"
> dynamic_ownership = 0
> [root@gcloud01 ~]#
> 
>  [root@gcloud01 ~]# grep -vE '^(#|$)' libvirtd
> LIBVIRTD_ARGS="--listen"
> 
>  [root@gcloud01 ~]#grep -vE '^(#|$)' libvirtd.conf
> unix_sock_group = "libvirt"
> unix_sock_ro_perms = "0777"
> unix_sock_rw_perms= "0777"
> auth_unix_ro = "none"
> auth_unix_rw = "none"
> 
> 
> 
> 
> Thanks 
> Gyeong-Ryoon Kim.



Re: [one-users] live migration error

2013-08-13 Thread Jaime Melis
Hi,
Why do you have this line in your oned.conf?


> arguments  = "-t 15 -r 0 kvm -l migrate=migrate_local",
>

I think you should comment that line out.
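For reference, the stock 4.2 KVM stanza without the migrate override would look roughly like this (a sketch; check your distribution's shipped oned.conf before editing):

```
VM_MAD = [
    name       = "kvm",
    executable = "one_vmm_exec",
    arguments  = "-t 15 -r 0 kvm",
    default    = "vmm_exec/vmm_exec_kvm.conf",
    type       = "kvm" ]
```

Restart oned after the change so the driver picks up the new arguments.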

regards,
Jaime

-- 
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org


[one-users] live migration error

2013-08-13 Thread 김 경륜
HI all

When I run the live migration command it doesn't work.

I get this error message in the VM log.

Mon Aug 12 20:39:39 2013 [LCM][I]: New VM state is MIGRATE
Mon Aug 12 20:39:39 2013 [VMM][I]: Successfully execute transfer manager driver 
operation: tm_premigrate.
Mon Aug 12 20:39:39 2013 [VMM][I]: ExitCode: 0
Mon Aug 12 20:39:39 2013 [VMM][I]: Successfully execute network driver 
operation: pre.
Mon Aug 12 20:39:39 2013 [VMM][I]: Command execution fail: 
/gcloud/one//var/remotes/vmm/kvm/migrate_local 'one-43' 'gcloud01' 'gcloud03' 
43 gcloud03
Mon Aug 12 20:39:39 2013 [VMM][I]: 
/gcloud/one//var/remotes/vmm/kvm/migrate_local: line 25: virsh: command not 
found
Mon Aug 12 20:39:39 2013 [VMM][I]: ExitCode: 127
Mon Aug 12 20:39:39 2013 [VMM][I]: Failed to execute virtualization driver 
operation: migrate.
Mon Aug 12 20:39:39 2013 [VMM][E]: Error live migrating VM
Mon Aug 12 20:39:40 2013 [LCM][I]: Fail to live migrate VM. Assuming that the 
VM is still RUNNING (will poll VM).
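Exit code 127 in the log above means the remote shell could not find the command at all. A quick triage sketch (host names are from this thread; the helper mapping exit codes to hints is purely illustrative):

```shell
# Map common OpenNebula driver exit codes to a first-guess diagnosis.
explain_exit() {
    case "$1" in
        127) echo "command not found in the remote PATH" ;;
        42)  echo "driver script missing on the node" ;;
        *)   echo "see the VM log for details" ;;
    esac
}

# Triage on the failing node (gcloud03 in this thread):
#   ssh gcloud03 'command -v virsh || echo missing'   # is virsh there at all?
#   ssh gcloud03 'echo $PATH'                         # non-interactive PATH
# If virsh is missing on Scientific Linux:  yum install -y libvirt-client
```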





I'm using Scientific Linux on all the hosts with the KVM driver. The OpenNebula 
version is 4.2.


The basic configuration is below:

>> front-end machine
[oneadmin@gcloud-front etc]$ grep -vE '^(#|$)' oned.conf
LOG = [
  system  = "file",
  debug_level = 3
]
MONITORING_INTERVAL  = 300
SCRIPTS_REMOTE_DIR=/var/tmp/one
PORT = 2633
DB = [ backend = "sqlite" ]
VNC_BASE_PORT = 5900
NETWORK_SIZE = 254
MAC_PREFIX   = "02:00"
DATASTORE_CAPACITY_CHECK = "yes"
DEFAULT_IMAGE_TYPE= "OS"
DEFAULT_DEVICE_PREFIX = "hd"
IM_MAD = [
  name   = "kvm",
  executable = "one_im_ssh",
  arguments  = "-r 0 -t 15 kvm" ]
IM_MAD = [
  name   = "ec2",
  executable = "one_im_ec2",
  arguments  = "im_ec2/im_ec2.conf" ]
VM_MAD = [
name   = "kvm",
executable = "one_vmm_exec",
arguments  = "-t 15 -r 0 kvm -l migrate=migrate_local",
#arguments  = "-t 15 -r 0 kvm -l ",
default= "vmm_exec/vmm_exec_kvm.conf",
type   = "kvm" ]
VM_MAD = [
name   = "ec2",
executable = "one_vmm_ec2",
arguments  = "vmm_ec2/vmm_ec2.conf",
type   = "xml" ]
TM_MAD = [
executable = "one_tm",
arguments  = "-t 15 -d dummy,lvm,shared,qcow2,ssh,vmfs,iscsi,ceph" ]
DATASTORE_MAD = [
executable = "one_datastore",
arguments  = "-t 15 -d dummy,fs,vmfs,iscsi,lvm,ceph"
]
HM_MAD = [
executable = "one_hm" ]
AUTH_MAD = [
executable = "one_auth_mad",
authn = "ssh,x509,ldap,server_cipher,server_x509"
]
SESSION_EXPIRATION_TIME = 900
DEFAULT_UMASK = 177
VM_RESTRICTED_ATTR = "NIC/MAC"
VM_RESTRICTED_ATTR = "NIC/VLAN_ID"
IMAGE_RESTRICTED_ATTR = "SOURCE"
[oneadmin@gcloud-front etc]$






>>> host machines

 [root@gcloud01 ~]# grep  -Ev '^(#|$)' /etc/libvirt/qemu.conf
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0
[root@gcloud01 ~]#

 [root@gcloud01 ~]# grep -vE '^(#|$)' libvirtd
LIBVIRTD_ARGS="--listen"

 [root@gcloud01 ~]#grep -vE '^(#|$)' libvirtd.conf
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms= "0777"
auth_unix_ro = "none"
auth_unix_rw = "none"




Thanks 
Gyeong-Ryoon Kim.


Re: [one-users] Live migration doesn't work _ SOLVED

2012-07-18 Thread Jan Benadik

  
  
This error description in oned.log is a little bit confusing.
The issue was wrong DNS resolution (or, to be honest, my mistake
in /etc/hosts).

Live migration works well when IPs and names are set correctly.

Jan
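Since wrong /etc/hosts entries come up repeatedly in these threads, a small sketch for cross-checking them (the helper and the example names and IPs are illustrative):

```shell
# Print the IP that hosts(5)-format input assigns to a given hostname.
hosts_ip_for() {
    awk -v h="$1" '{for (i = 2; i <= NF; i++) if ($i == h) {print $1; exit}}'
}

# On each node, compare the static entry with what the resolver returns:
#   hosts_ip_for node2 </etc/hosts
#   getent hosts node2
# Every node must resolve every peer to the address libvirt will actually use.
```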

On 18.07.2012 10:57, haseni...@gmx.de wrote:


  
Hi,

please have look in the libvirtd.log. (less /var/log/libvirt/libvirtd.log)
Maybe you have some permission-problems or somethinge else.

Best regards

Stefan


 Original message 
Date: Tue, 17 Jul 2012 17:50:36 +0200
From: Jan Benadik 
To: "users@lists.opennebula.org" 
Subject: [one-users] Live migration doesn't work

  
  

  

  


-- 
Ján Beňadik
Managed Services - Solution Design Architect
+421 46 5151 332 | +421 903 691 634
jan.bena...@atos.net
Vinohradnícka 6, 971 01 Prievidza
www.sk.atos.net


  



Re: [one-users] Live migration doesn't work

2012-07-18 Thread haseningo
Hi,

please have look in the libvirtd.log. (less /var/log/libvirt/libvirtd.log)
Maybe you have some permission-problems or somethinge else.

Best regards

Stefan


 Original message 
> Date: Tue, 17 Jul 2012 17:50:36 +0200
> From: Jan Benadik 
> To: "users@lists.opennebula.org" 
> Subject: [one-users] Live migration doesn't work



[one-users] Live migration doesn't work

2012-07-17 Thread Jan Benadik

  
  
Hi all,

I have two nodes with Ubuntu 12.04 Server (KVM), OpenNebula 3.6.0
(SQLite), and a shared datastore (NFS), with a small number of VMs running
on them. If I try to run a live migration (via Sunstone), nothing happens,
and this error message appears in oned.log:

Tue Jul 17 19:33:26 2012 [ONE][E]: SQL command was: INSERT INTO vm_monitoring (vmid, last_poll, body) VALUES (45,1342546187,'4500oneadminoneadminmint-occi11000134254618733013425436620one-451048576312055752530457one213425464060vmm_kvmdummyshared0001342546406'), error: columns vmid, last_poll are not unique
Tue Jul 17 19:33:31 2012 [InM][I]: --Mark--

Can somebody help me where I did something wrong?

Jan

-- 
Ján Beňadik
Managed Services - Solution Design Architect
+421 46 5151 332 | +421 903 691 634
jan.bena...@atos.net
Vinohradnícka 6, 971 01 Prievidza
www.sk.atos.net



Re: [one-users] Live migration problem

2012-05-29 Thread Javier Fontan
Can you give us more information, like the log files for the VM that
failed and the OpenNebula configuration?

On Mon, May 28, 2012 at 1:55 PM, Juanra  wrote:
> Hello list.
>
> In opennebula 3.4.1 I have this error when trying live migration:
>
> MIGRATE FAILURE 39 Network action pre needs a ssh stream.
>
>
> Any ideas?
>
>



-- 
Javier Fontán Muiños
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | jfon...@opennebula.org | @OpenNebula


[one-users] Live migration problem

2012-05-28 Thread Juanra
Hello list.

In OpenNebula 3.4.1 I get this error when trying live migration:

MIGRATE FAILURE 39 Network action pre needs a ssh stream.


Any ideas?


Re: [one-users] live migration fails on ubuntu 11.04

2011-06-30 Thread samuel
I did a full verification and it turned out to be the same problem: a wrong
entry in /etc/hosts. One of the nodes' entries was not properly set
(a misspelled domain), and it made it impossible for one node's KVM to
connect to the other one.

In order to find the problem I increased libvirt's debug level to
maximum, and I saw the wrong remote host.domain in the error output.

Thank you very much for the support, and apologies for the noise,

Samuel.
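For reference, raising libvirt's log level is done in libvirtd.conf on the node (a sketch; level 1 is extremely verbose, so revert it after debugging):

```
# /etc/libvirt/libvirtd.conf
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
```

Restart libvirtd afterwards (e.g. service libvirtd restart) and retry the migration while watching the log.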

On 30 June 2011 19:15, Javier Fontan  wrote:

> I cannot see any info that leads me to find the problem. Have you
> tried migrating VM's manually, that is, using libvirt/kvm manually,
> not OpenNebula. Also check that both machines have the same processor
> and libvirt/kvm versions.
>
> On Fri, Jun 17, 2011 at 5:58 PM, samuel  wrote:
> >
> > The error happened to be a wrong entry in the file /etc/hosts, where the
> > remote node's IP was set to the local one and there were several errors.
> >
> > However, it is not yet possible to perform live migration on the same
> > escenario (normal migration works perfectly), I always end up with the
> > following error:
> > Fri Jun 17 17:47:58 2011 [LCM][I]: New VM state is MIGRATE
> > Fri Jun 17 17:51:09 2011 [VMM][I]: Command execution fail: 'if [ -x
> > "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate
> one-21
> > node2; else  exit 42; fi'
> > Fri Jun 17 17:51:09 2011 [VMM][I]: STDERR follows.
> > Fri Jun 17 17:51:09 2011 [VMM][I]: error: operation failed: migration
> job:
> > unexpectedly failed
> > Fri Jun 17 17:51:09 2011 [VMM][I]: ExitCode: 1
> > Fri Jun 17 17:51:09 2011 [VMM][E]: Error live-migrating VM, error:
> operation
> > failed: migration job: unexpectedly failed
> > Fri Jun 17 17:51:09 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> > the VM is still RUNNING (will poll VM).
> >
> > This is the output of the file /var/log/libvirt/qemu/one-21.log
> > 2011-06-17 17:48:02.232: starting up
> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
> > QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -cpu qemu32 -enable-kvm -m
> > 2048 -smp 1,sockets=1,cores=1,threads=1 -name one-21 -uuid
> > b9330d8d-3d2e-666a-c9e5-5e32e81c29dc -nodefconfig -nodefaults -chardev
> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-21.monitor,server,nowait
> > -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c
> > -drive
> > file=/srv/cloud/one/var//21/images/disk.0,if=none,id=drive-ide0-0-0,format=raw
> > -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
> > tap,fd=18,id=hostnet0 -device
> > rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:32:03,bus=pci.0,addr=0x3
> > -usb -vnc 0.0.0.0:21 -vga cirrus -incoming tcp:0.0.0.0:49152 -device
> > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
> > 2011-06-17 17:51:11.997: shutting down
> >
> > And in /var/log/syslog, the following line:
> > Jun 17 17:51:46 node2 libvirtd: 17:51:46.798: 1200: error :
> > qemuDomainWaitForMigrationComplete:4218 : operation failed: migration job:
> > unexpectedly failed
> >
> > Can anyone provide help on this issue? How can I debug the live migration?
> >
> > Thank you very much in advance,
> > Samuel.
> >
> > On 7 June 2011 17:22, samuel  wrote:
> >>
> >> Hi folks,
> >>
> >> After a few tricks, the standard configuration (controller exporting
> >> OpenNebula directories via NFS to two other nodes) seems to work except
> >> for one point: live migration.
> >>
> >> When starting live migration (from the Sunstone web interface), the
> >> following problem appears:
> >>
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
> >> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-131
> >> node1; else  exit 42; fi'
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not
> >> valid: domain is already active as 'one-131'
> >> Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
> >> Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error:
> >> Requested operation is not valid: domain is already active as 'one-131'
> >> Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> >> the VM is still RUNNING (will poll VM).
> >>
> >> I'm using the qemu+ssh transport with the following versions:
> >> $ virsh version
> >> Compiled against library: libvir 0.8.8
> >> Using library: libvir 0.8.8
> >> Using API: QEMU 0.8.8
> >> Running hypervisor: QEMU 0.14.0
> >>
> >> The installed version of OpenNebula is 2.2.
> >>
> >> Could anyone shed some light on this issue? I've looked on the Internet
> >> and found some posts relating to QEMU bugs, but I'd like to know whether
> >> I can get more information about this issue.
> >>
> >> Thank you very much in advance,
> >> Samuel.
> >
> >
> > ___
> > Users mailing list
> > Users@lists.opennebula.org
> 

Re: [one-users] live migration fails on ubuntu 11.04

2011-06-30 Thread Javier Fontan
I cannot see any info that leads me to the problem. Have you tried
migrating VMs manually, that is, using libvirt/KVM directly rather than
through OpenNebula? Also check that both machines have the same processor
and the same libvirt/KVM versions.
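To try the migration by hand, one can reconstruct the command the OpenNebula KVM driver runs (cf. the `/var/tmp/one/vmm/kvm/migrate one-21 node2` call in the log below). A minimal sketch; the deploy ID `one-21` and host `node2` are taken from this thread, and the oneadmin user and qemu+ssh transport are assumptions about the setup:

```shell
# Build the virsh command the KVM driver would issue for this VM.
# Substitute your own deploy ID and destination host, then run the
# printed command as the oneadmin user on the source host.
deploy_id="one-21"
dest_host="node2"
cmd="virsh --connect qemu:///system migrate --live $deploy_id qemu+ssh://$dest_host/system"
echo "$cmd"
```

If the manual run fails too, the problem is in libvirt/KVM or SSH, not in OpenNebula.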

On Fri, Jun 17, 2011 at 5:58 PM, samuel  wrote:
>
> The error happened to be a wrong entry in the file /etc/hosts, where the
> remote node's IP was set to the local one and there were several errors.
>
> However, it is still not possible to perform live migration in the same
> scenario (normal migration works perfectly); I always end up with the
> following error:
> Fri Jun 17 17:47:58 2011 [LCM][I]: New VM state is MIGRATE
> Fri Jun 17 17:51:09 2011 [VMM][I]: Command execution fail: 'if [ -x
> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-21
> node2; else  exit 42; fi'
> Fri Jun 17 17:51:09 2011 [VMM][I]: STDERR follows.
> Fri Jun 17 17:51:09 2011 [VMM][I]: error: operation failed: migration job:
> unexpectedly failed
> Fri Jun 17 17:51:09 2011 [VMM][I]: ExitCode: 1
> Fri Jun 17 17:51:09 2011 [VMM][E]: Error live-migrating VM, error: operation
> failed: migration job: unexpectedly failed
> Fri Jun 17 17:51:09 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> the VM is still RUNNING (will poll VM).
>
> This is the output of the file /var/log/libvirt/qemu/one-21.log
> 2011-06-17 17:48:02.232: starting up
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
> QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -cpu qemu32 -enable-kvm -m
> 2048 -smp 1,sockets=1,cores=1,threads=1 -name one-21 -uuid
> b9330d8d-3d2e-666a-c9e5-5e32e81c29dc -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-21.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c
> -drive
> file=/srv/cloud/one/var//21/images/disk.0,if=none,id=drive-ide0-0-0,format=raw
> -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
> tap,fd=18,id=hostnet0 -device
> rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:32:03,bus=pci.0,addr=0x3
> -usb -vnc 0.0.0.0:21 -vga cirrus -incoming tcp:0.0.0.0:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
> 2011-06-17 17:51:11.997: shutting down
>
> And in /var/log/syslog, the following line:
> Jun 17 17:51:46 node2 libvirtd: 17:51:46.798: 1200: error :
> qemuDomainWaitForMigrationComplete:4218 : operation failed: migration job:
> unexpectedly failed
>
> Can anyone provide help on this issue? How can I debug the live migration?
>
> Thank you very much in advance,
> Samuel.
>
> On 7 June 2011 17:22, samuel  wrote:
>>
>> Hi folks,
>>
>> After a few tricks, the standard configuration (controller exporting
>> OpenNebula directories via NFS to two other nodes) seems to work except for
>> one point: live migration.
>>
>> When starting live migration (from the Sunstone web interface), the
>> problem appears:
>>
>> Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
>> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-131
>> node1; else  exit 42; fi'
>> Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
>> Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not
>> valid: domain is already active as 'one-131'
>> Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
>> Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error:
>> Requested operation is not valid: domain is already active as 'one-131'
>> Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming that
>> the VM is still RUNNING (will poll VM).
>>
>> I'm using the qemu+ssh transport with the following versions:
>> $ virsh version
>> Compiled against library: libvir 0.8.8
>> Using library: libvir 0.8.8
>> Using API: QEMU 0.8.8
>> Running hypervisor: QEMU 0.14.0
>>
>> The installed version of OpenNebula is 2.2.
>>
>> Could anyone shed some light on this issue? I've looked on the Internet
>> and found some posts relating to QEMU bugs, but I'd like to know whether
>> I can get more information about this issue.
>>
>> Thank you very much in advance,
>> Samuel.
>
>



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org


Re: [one-users] live migration fails on ubuntu 11.04

2011-06-17 Thread samuel
The error turned out to be a wrong entry in the file /etc/hosts, where the
remote node's IP was set to the local one, and there were several such errors.

However, it is still not possible to perform live migration in the same
scenario (normal migration works perfectly); I always end up with the
following error:
Fri Jun 17 17:47:58 2011 [LCM][I]: New VM state is MIGRATE
Fri Jun 17 17:51:09 2011 [VMM][I]: Command execution fail: 'if [ -x
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-21
node2; else  exit 42; fi'
Fri Jun 17 17:51:09 2011 [VMM][I]: STDERR follows.
Fri Jun 17 17:51:09 2011 [VMM][I]: error: operation failed: migration job:
unexpectedly failed
Fri Jun 17 17:51:09 2011 [VMM][I]: ExitCode: 1
Fri Jun 17 17:51:09 2011 [VMM][E]: Error live-migrating VM, error: operation
failed: migration job: unexpectedly failed
Fri Jun 17 17:51:09 2011 [LCM][I]: Fail to life migrate VM. Assuming that
the VM is still RUNNING (will poll VM).

This is the output of the file /var/log/libvirt/qemu/one-21.log
2011-06-17 17:48:02.232: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -cpu qemu32 -enable-kvm -m
2048 -smp 1,sockets=1,cores=1,threads=1 -name one-21 -uuid
b9330d8d-3d2e-666a-c9e5-5e32e81c29dc -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-21.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c
-drive
file=/srv/cloud/one/var//21/images/disk.0,if=none,id=drive-ide0-0-0,format=raw
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=18,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:32:03,bus=pci.0,addr=0x3
-usb -vnc 0.0.0.0:21 -vga cirrus -incoming tcp:0.0.0.0:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
2011-06-17 17:51:11.997: shutting down

And in /var/log/syslog, the following line:
Jun 17 17:51:46 node2 libvirtd: 17:51:46.798: 1200: error :
qemuDomainWaitForMigrationComplete:4218 : operation failed: migration job:
unexpectedly failed

Can anyone provide help on this issue? How can I debug the live migration?

Thank you very much in advance,
Samuel.

On 7 June 2011 17:22, samuel  wrote:

> Hi folks,
>
> After a few tricks, the standard configuration (controller exporting
> OpenNebula directories via NFS to two other nodes) seems to work except for
> one point: live migration.
>
> When starting live migration (from the Sunstone web interface), the following
> problem appears:
>
> Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-131
> node1; else  exit 42; fi'
> Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
> Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not valid:
> domain is already active as 'one-131'
> Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
> Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error:
> Requested operation is not valid: domain is already active as 'one-131'
> Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> the VM is still RUNNING (will poll VM).
>
> I'm using the qemu+ssh transport with the following versions:
> $ virsh version
> Compiled against library: libvir 0.8.8
> Using library: libvir 0.8.8
> Using API: QEMU 0.8.8
> Running hypervisor: QEMU 0.14.0
>
> The installed version of OpenNebula is 2.2.
>
> Could anyone shed some light on this issue? I've looked on the Internet and
> found some posts relating to QEMU bugs, but I'd like to know whether I can
> get more information about this issue.
>
> Thank you very much in advance,
> Samuel.
>


Re: [one-users] live-/migration not working, unknown error

2011-06-07 Thread Adnan Pasic
Hey everyone,
sorry for spamming, but I thought it might be a good idea to share with you how
I fixed this issue!!! It was quite tedious, actually.

After reviewing the logging documentation, I set the following environment variables:

export LIBVIRT_DEBUG=4
export LIBVIRT_LOG_OUTPUTS="1:file:/var/log/virsh.log"

Once that was set, I ran the following to monitor the log file: tail -f 
/var/log/virsh.log

When I ran the live migration this time around, this was the output I received
for that error:

23:35:47.102: error : server_error:7231 : operation failed: migrate failed: 
migrate "tcp:scott:49154"
migration failed

As you can see, it was trying to live migrate using the hostname
"scott" instead of the actual IP once it had established the session on both
ends over SSH. To fix this, I simply modified the /etc/hosts file so that
scott resolves to the remote system, and voilà, it worked like a charm, and
quickly.
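The check boils down to making sure the peer's name maps to its real remote address on both nodes. A small sketch using a hypothetical hosts file (the name `scott` is taken from the log above; the addresses are invented):

```shell
# Sample /etc/hosts contents (hypothetical) -- the migration peer's name
# must map to the remote node's address, not a local or loopback one.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost
192.168.0.3 scott
EOF
# Print the address the peer's name maps to:
awk '$2 == "scott" {print $1}' /tmp/hosts.sample
```

On a real node, `getent hosts <peer>` shows what the resolver (hosts file plus DNS) actually returns.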

Hope this is helpful to anyone getting random UNKNOWN ERROR messages, and 
thanks to the mailing list for helping so selflessly every time!
Keep up the good work!

Regards, Adnan

-Original Message-
From: users-boun...@lists.opennebula.org
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Adnan Pasic
Sent: Monday, 30 May 2011 21:41
To: 'Steffen Neumann'
Cc: users@lists.opennebula.org
Subject: Re: [one-users] live-/migration not working, unknown error

Hey,
I already changed that part, and also performed a "onehost sync" afterwards, 
but still the "unknown error" issue appears. 
I wonder if I overlooked something else... although that seems unlikely, because I
really read the tutorials thoroughly.
Also, I checked the lists already - I check them every time before I post 
something new. I don't wanna end up spamming the whole list here! :) 

Thanks very much, for the help up to now! 

-Original Message-
From: Steffen Neumann [mailto:sneum...@ipb-halle.de]
Sent: Monday, 30 May 2011 21:34
To: Adnan Pasic
Cc: 'Carlos Martín Sánchez'; users@lists.opennebula.org
Subject: Re: [one-users] live-/migration not working, unknown error

Hi,

On Mon, 2011-05-30 at 13:02 +0200, Adnan Pasic wrote:
> machines normally. The only problem I'm still having is the Live
> Migration!
...
> Is it possible that I somehow messed up the TLS encryption? I 

Check out the list archives: in OpenNebula 2.2
you need to change qemu to qemu+ssh in one of the scripts.

If you don't find it, ask again.

Yours,
Steffen

-- 
IPB Halle    AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3   http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470
  +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE Fax. +49 (0) 345 5582 - 1409




[one-users] live migration fails on ubuntu 11.04

2011-06-07 Thread samuel
Hi folks,

After a few tricks, the standard configuration (controller exporting
OpenNebula directories via NFS to two other nodes) seems to work except for
one point: live migration.

When starting live migration (from the Sunstone web interface), the following
problem appears:

Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-131
node1; else  exit 42; fi'
Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not valid:
domain is already active as 'one-131'
Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error: Requested
operation is not valid: domain is already active as 'one-131'
Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming that
the VM is still RUNNING (will poll VM).

I'm using the qemu+ssh transport with the following versions:
$ virsh version
Compiled against library: libvir 0.8.8
Using library: libvir 0.8.8
Using API: QEMU 0.8.8
Running hypervisor: QEMU 0.14.0

The installed version of OpenNebula is 2.2.

Could anyone shed some light on this issue? I've looked on the Internet and
found some posts relating to QEMU bugs, but I'd like to know whether I can
get more information about this issue.

Thank you very much in advance,
Samuel.


Re: [one-users] live-/migration not working, unknown error

2011-05-30 Thread Adnan Pasic
Hey,
I already changed that part, and also performed a "onehost sync" afterwards, 
but still the "unknown error" issue appears. 
I wonder if I overlooked something else... although that seems unlikely, because I
really read the tutorials thoroughly.
Also, I checked the lists already - I check them every time before I post 
something new. I don't wanna end up spamming the whole list here! :) 

Thanks very much, for the help up to now! 

-Original Message-
From: Steffen Neumann [mailto:sneum...@ipb-halle.de]
Sent: Monday, 30 May 2011 21:34
To: Adnan Pasic
Cc: 'Carlos Martín Sánchez'; users@lists.opennebula.org
Subject: Re: [one-users] live-/migration not working, unknown error

Hi,

On Mon, 2011-05-30 at 13:02 +0200, Adnan Pasic wrote:
> machines normally. The only problem I'm still having is the Live
> Migration!
...
> Is it possible that I somehow messed up the TLS encryption? I 

Check out the list archives: in OpenNebula 2.2
you need to change qemu to qemu+ssh in one of the scripts.

If you don't find it, ask again.

Yours,
Steffen

-- 
IPB Halle    AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3   http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470
  +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE Fax. +49 (0) 345 5582 - 1409




Re: [one-users] live-/migration not working, unknown error

2011-05-30 Thread Steffen Neumann
Hi,

On Mon, 2011-05-30 at 13:02 +0200, Adnan Pasic wrote:
> machines normally. The only problem I'm still having is the Live
> Migration!
...
> Is it possible that I somehow messed up the TLS encryption? I 

Check out the list archives: in OpenNebula 2.2
you need to change qemu to qemu+ssh in one of the scripts.

If you don't find it, ask again.
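For anyone searching later: the script in question is the KVM migrate driver pushed to the nodes (`/var/tmp/one/vmm/kvm/migrate` in the error logs on this list). The line below is a hypothetical reconstruction, not the verbatim 2.2 script; the fix is switching the destination URI scheme:

```shell
# Hypothetical virsh invocation from the migrate driver script:
line='virsh --connect qemu:///system migrate --live $deploy_id qemu://$dest_host/system'
# The change described above: use the qemu+ssh:// transport for the
# destination URI so libvirt tunnels the migration over SSH.
fixed=$(printf '%s' "$line" | sed 's#qemu://\$dest_host#qemu+ssh://$dest_host#')
echo "$fixed"
```

Remember to `onehost sync` (or re-copy the driver scripts) so the nodes pick up the edited version.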

Yours,
Steffen

-- 
IPB Halle    AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3   http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470
  +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE Fax. +49 (0) 345 5582 - 1409




Re: [one-users] live-/migration not working, unknown error

2011-05-30 Thread Adnan Pasic
Hey Carlos,

thank you so much for your help. By now I can finally migrate my machines
normally. The only problem I'm still having is the Live Migration!

This is what the log says:

 

Mon May 30 12:35:39 2011 [LCM][I]: New VM state is MIGRATE

Mon May 30 12:35:40 2011 [VMM][I]: Command execution fail: 'if [ -x
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-31
192.168.0.3; else  exit 42; fi'

Mon May 30 12:35:40 2011 [VMM][I]: STDERR follows.

Mon May 30 12:35:40 2011 [VMM][I]: error: Unknown failure

Mon May 30 12:35:40 2011 [VMM][I]: ExitCode: 1

Mon May 30 12:35:40 2011 [VMM][E]: Error live-migrating VM, error: Unknown
failure

Mon May 30 12:35:40 2011 [LCM][I]: Fail to life migrate VM. Assuming that
the VM is still RUNNING (will poll VM).

Mon May 30 12:35:41 2011 [VMM][D]: Monitor Information:

CPU   : 18

Memory: 524288

Net_TX: 300

Net_RX: 1248

 

 

Is it possible that I somehow messed up the TLS encryption? I configured my
cloud according to this website
http://hpc.uamr.de/wissen/opennebula-workshop/

 

In one point on the website it says: “For live migration and secured VNC,
Transport Layer Security needs to be set up. A step by step guide can be
found in the TLS Setup Guide of Libvirt”.

However, I didn't do this part of the tutorial since I didn't find these
steps in any of the other guides I was reading, including the official
OpenNebula guide. Is there something I missed? Do I have to perform this TLS
setup? A tiny bit of further help would be more than appreciated :)

Thanks in advance!

 

 

From: Carlos Martín Sánchez [mailto:cmar...@opennebula.org]
Sent: Monday, 30 May 2011 12:22
To: Adnan Pasic
Cc: users@lists.opennebula.org
Subject: Re: [one-users] live-/migration not working, unknown error

 

Hi Adnan,

This may be of help:
http://lists.opennebula.org/pipermail/users-opennebula.org/2010-September/002680.html

Regards,
Carlos.
--
Carlos Martín, MSc
Project Major Contributor
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | cmar...@opennebula.org



On Mon, May 30, 2011 at 11:52 AM, Adnan Pasic  wrote:

Hello,

unfortunately the problem is still there.

It says: error: unable to set ownership of
“/srv/cloud/one/var//29/images/checkpoint” to user 0:0: Operation not
permitted

Error saving VM state, error: Failed to save domain one-29 to
/srv/cloud/one/var//29/images/checkpoint

 

Please help, this is slowly making me crazy…

 

From: Neumann, Steffen [mailto:sneum...@ipb-halle.de]
Sent: Tuesday, 24 May 2011 16:59
To: Adnan Pasic; users@lists.opennebula.org
Subject: RE: [one-users] live-/migration not working, unknown error

 

Hi,

check out the list archives:
http://lists.opennebula.org/htdig.cgi/users-opennebula.org/2011-April/005013.html

Steffen


From: users-boun...@lists.opennebula.org
[users-boun...@lists.opennebula.org] on behalf of Adnan Pasic
[pq...@yahoo.de]
Sent: 24 May 2011 14:47
To: users@lists.opennebula.org
Subject: [one-users] live-/migration not working, unknown error

Update: Okay, the issue with "virsh list" is gone. But the problem with
migration / live migration persists.

Do you have to do some extra steps for live migration to work? Do you need
to create TLS keys or something, or is the standard tutorial enough for
everything to work?

 

The error still says: 

[VMM] [E]: Error saving VM state, error: Failed to save domain one-26 to
/srv/cloud/one/var//26/images/checkpoint

 

It seems the folder checkpoint can't get created, or the creation of the
folder is missing somewhere in the scripts, because when I browse through to
the folder /images there is no checkpoint folder inside.

 

I hope those infos are sufficient to narrow down the possible mistakes?




Re: [one-users] live-/migration not working, unknown error

2011-05-30 Thread Carlos Martín Sánchez
Hi Adnan,

This may be of help:
http://lists.opennebula.org/pipermail/users-opennebula.org/2010-September/002680.html

Regards,
Carlos.
--
Carlos Martín, MSc
Project Major Contributor
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | cmar...@opennebula.org


On Mon, May 30, 2011 at 11:52 AM, Adnan Pasic  wrote:

> Hello,
>
> unfortunately the problem is still there.
>
> It says: error: unable to set ownership of
> “/srv/cloud/one/var//29/images/checkpoint” to user 0:0: Operation not
> permitted
>
> Error saving VM state, error: Failed to save domain one-29 to
> /srv/cloud/one/var//29/images/checkpoint
>
>
>
> Please help, this is slowly  making me crazy…
>
>
>
> *From:* Neumann, Steffen [mailto:sneum...@ipb-halle.de]
> *Sent:* Tuesday, 24 May 2011 16:59
> *To:* Adnan Pasic; users@lists.opennebula.org
> *Subject:* RE: [one-users] live-/migration not working, unknown error
>
>
>
> Hi,
>
> check out the list archives:
>
> http://lists.opennebula.org/htdig.cgi/users-opennebula.org/2011-April/005013.html
>
> Steffen
> --
>
> *From:* users-boun...@lists.opennebula.org [
> users-boun...@lists.opennebula.org] on behalf of Adnan Pasic [
> pq...@yahoo.de]
> *Sent:* 24 May 2011 14:47
> *To:* users@lists.opennebula.org
> *Subject:* [one-users] live-/migration not working, unknown error
>
> Update: Okay, the issue with "virsh list" is gone. But still the problem
> with migration / live migration pertains.
>
> Do you have to do some extra steps for live migration to work? Do you need
> to create TLS keys or something, or is the standard tutorial enough for
> everything to work?
>
>
>
> The error still says:
>
> [VMM] [E]: Error saving VM state, error: Failed to save domain one-26 to
> /srv/cloud/one/var//26/images/checkpoint
>
>
>
> It seems the folder checkpoint can't get created, or the creation of the
> folder is missing somewhere in the scripts, because when I browse through to
> the folder /images there is no checkpoint folder inside.
>
>
>
> I hope those infos are sufficient to narrow down the possible mistakes?
>


Re: [one-users] live-/migration not working, unknown error

2011-05-30 Thread Adnan Pasic
Hello,

unfortunately the problem is still there.

It says: error: unable to set ownership of
"/srv/cloud/one/var//29/images/checkpoint" to user 0:0: Operation not
permitted

Error saving VM state, error: Failed to save domain one-29 to
/srv/cloud/one/var//29/images/checkpoint

 

Please help, this is slowly making me crazy.

 

From: Neumann, Steffen [mailto:sneum...@ipb-halle.de]
Sent: Tuesday, 24 May 2011 16:59
To: Adnan Pasic; users@lists.opennebula.org
Subject: RE: [one-users] live-/migration not working, unknown error

 

Hi,

check out the list archives:
http://lists.opennebula.org/htdig.cgi/users-opennebula.org/2011-April/005013.html

Steffen


From: users-boun...@lists.opennebula.org
[users-boun...@lists.opennebula.org] on behalf of Adnan Pasic
[pq...@yahoo.de]
Sent: 24 May 2011 14:47
To: users@lists.opennebula.org
Subject: [one-users] live-/migration not working, unknown error

Update: Okay, the issue with "virsh list" is gone. But the problem with
migration / live migration persists.

Do you have to do some extra steps for live migration to work? Do you need
to create TLS keys or something, or is the standard tutorial enough for
everything to work?

 

The error still says: 

[VMM] [E]: Error saving VM state, error: Failed to save domain one-26 to
/srv/cloud/one/var//26/images/checkpoint

 

It seems the folder checkpoint can't get created, or the creation of the
folder is missing somewhere in the scripts, because when I browse through to
the folder /images there is no checkpoint folder inside.

 

I hope those infos are sufficient to narrow down the possible mistakes?



Re: [one-users] live-/migration not working, unknown error

2011-05-24 Thread Adnan Pasic
Hi,

first of all, thanks for your help, Steffen. Unfortunately, I had already found
this particular page and tried what was written there, but nevertheless it
still wouldn't work.

Any other ideas???

 

Regards!

Adnan Pasic

 

From: Neumann, Steffen [mailto:sneum...@ipb-halle.de]
Sent: Tuesday, 24 May 2011 16:59
To: Adnan Pasic; users@lists.opennebula.org
Subject: RE: [one-users] live-/migration not working, unknown error

 

Hi,

check out the list archives:
http://lists.opennebula.org/htdig.cgi/users-opennebula.org/2011-April/005013.html

Steffen


From: users-boun...@lists.opennebula.org
[users-boun...@lists.opennebula.org] on behalf of Adnan Pasic
[pq...@yahoo.de]
Sent: 24 May 2011 14:47
To: users@lists.opennebula.org
Subject: [one-users] live-/migration not working, unknown error

Update: Okay, the issue with "virsh list" is gone. But the problem with
migration / live migration persists.

Do you have to do some extra steps for live migration to work? Do you need
to create TLS keys or something, or is the standard tutorial enough for
everything to work?

 

The error still says: 

[VMM] [E]: Error saving VM state, error: Failed to save domain one-26 to
/srv/cloud/one/var//26/images/checkpoint

 

It seems the folder checkpoint can't get created, or the creation of the
folder is missing somewhere in the scripts, because when I browse through to
the folder /images there is no checkpoint folder inside.

 

I hope those infos are sufficient to narrow down the possible mistakes?



Re: [one-users] live-/migration not working, unknown error

2011-05-24 Thread Neumann, Steffen
Hi,

check out the list archives:
http://lists.opennebula.org/htdig.cgi/users-opennebula.org/2011-April/005013.html

Steffen


From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] 
on behalf of Adnan Pasic [pq...@yahoo.de]
Sent: 24 May 2011 14:47
To: users@lists.opennebula.org
Subject: [one-users] live-/migration not working, unknown error

Update: Okay, the issue with "virsh list" is gone. But the problem with
migration / live migration persists.
Do you have to do some extra steps for live migration to work? Do you need to 
create TLS keys or something, or is the standard tutorial enough for everything 
to work?

The error still says:
[VMM] [E]: Error saving VM state, error: Failed to save domain one-26 to 
/srv/cloud/one/var//26/images/checkpoint

It seems the folder checkpoint can't get created, or the creation of the folder 
is missing somewhere in the scripts, because when I browse through to the 
folder /images there is no checkpoint folder inside.

I hope those infos are sufficient to narrow down the possible mistakes?


[one-users] live-/migration not working, unknown error

2011-05-24 Thread Adnan Pasic
Update: Okay, the issue with "virsh list" is gone. But the problem with
migration / live migration persists.

Do you have to do some extra steps for live migration to work? Do you need to 
create TLS keys or something, or is the standard tutorial enough for everything 
to work?

The error still says: 

[VMM] [E]: Error saving VM state, error: Failed to save domain one-26 to 
/srv/cloud/one/var//26/images/checkpoint

It seems the folder checkpoint can't get created, or the creation of the folder 
is missing somewhere in the scripts, because when I browse through to the 
folder /images there is no checkpoint folder inside.

I hope those infos are sufficient to narrow down the possible mistakes?


[one-users] live-/migration not working, unknown error

2011-05-24 Thread Adnan Pasic
Hello everybody,
a new day, a new issue :)
I was able to successfully create my cloud and connect the cluster nodes to my
front-end. Even the test-VM "ttylinux" is running as expected on one of the 
nodes. Or so it seems... 

The problem I am having right now is, that when I start a migration or 
live-migration I get the following errors:

Tue May 24 12:58:32 2011 [LCM][I]: New VM state is MIGRATE
Tue May 24 12:58:33 2011 [VMM][I]: Command execution fail: 'if [ -x 
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-16 
192.168.0.3; else  exit 42; fi'
Tue May 24 12:58:33 2011 [VMM][I]: STDERR follows.
Tue May 24 12:58:33 2011 [VMM][I]: error: Unknown failure
Tue May 24 12:58:33 2011 [VMM][I]: ExitCode: 1
Tue May 24 12:58:33 2011 [VMM][E]: Error live-migrating VM, error: Unknown 
failure
Tue May 24 12:58:33 2011 [LCM][I]: Fail to life migrate VM. Assuming that the 
VM is still RUNNING (will poll VM).
Tue May 24 12:58:34 2011 [VMM][D]: Monitor Information:
CPU   : 6
Memory: 65536
Net_TX: 0
Net_RX: 6657
Tue May 24 13:01:16 2011 [LCM][I]: New VM state is SAVE_MIGRATE
Tue May 24 13:01:17 2011 [VMM][I]: Command execution fail: 'if [ -x 
"/var/tmp/one/vmm/kvm/save" ]; then /var/tmp/one/vmm/kvm/save one-16 
/srv/cloud/one/var//16/images/checkpoint; else  
exit 42; fi'
Tue May 24 13:01:17 2011 [VMM][I]: STDERR follows.
Tue May 24 13:01:17 2011 [VMM][I]: error: Failed to save domain one-16 to 
/srv/cloud/one/var//16/images/checkpoint
Tue May 24 13:01:17 2011 [VMM][I]: error: unable to set ownership of 
'/srv/cloud/one/var//16/images/checkpoint' to user 0:0: Operation not permitted
Tue May 24 13:01:17 2011 [VMM][I]: ExitCode: 1
Tue May 24 13:01:17 2011 [VMM][E]: Error saving VM state, error: Failed to save 
domain one-16 to /srv/cloud/one/var//16/images/checkpoint
Tue May 24 13:01:17 2011 [LCM][I]: Fail to save VM state while migrating. 
Assuming that the VM is still RUNNING (will poll VM).
Tue May 24 13:01:17 2011 [VMM][I]: VM running but new state from monitor is 
PAUSED.
Tue May 24 13:01:17 2011 [LCM][I]: VM is suspended.
Tue May 24 13:01:17 2011 [DiM][I]: New VM state is SUSPENDED

As you can see in the log, I first tried to do a live migration and then a
normal migration right afterwards. Unfortunately, both attempts ended
unsuccessfully.
I was searching for an error, but all I could find was one strange thing: When 
I initiate a "virsh list" on the node where the VM is running, I don't get any 
results.
Could this be the problem? And if so, how could I resolve it???


Thank you!
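[Archive note] One frequently reported cause of the "unable to set ownership ... to user 0:0: Operation not permitted" line in the log above is libvirt running QEMU as root and trying to chown the checkpoint file on a root-squashed NFS export. A sketch of the /etc/libvirt/qemu.conf settings often suggested for this; the user and group names are assumptions for a typical OpenNebula install, and libvirtd must be restarted on each node after changing them:

```
# /etc/libvirt/qemu.conf on every KVM node
user = "oneadmin"          # run QEMU as the OpenNebula user, not root
group = "cloud"
dynamic_ownership = 0      # don't chown images/checkpoints (fails on root-squashed NFS)
```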


Re: [one-users] live migration

2011-01-14 Thread Tino Vazquez
Dear Paolo,

You can extend the TM driver to query the core for the list of
registered physical hosts, but it requires a bit of fiddling with the
OCA library.

As an alternative, maybe libvirt hooks can be used to refresh the configuration.
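A minimal sketch of the first suggestion: querying the core for registered hosts via the OCA Ruby bindings. The `OpenNebula::Client`/`HostPool` names in the comment are assumptions drawn from the OCA library of this era and should be checked against your version; the helper itself only assumes host objects that respond to `#name`, so it is demonstrated here with stubs.

```ruby
require 'ostruct'

# Return the names of all hosts in a pool-like collection whose
# elements respond to #name (OCA's HostPool iterates host objects).
def registered_host_names(host_pool)
  host_pool.map { |host| host.name }
end

# On the front-end, with the OCA library installed and ONE_AUTH set,
# the pool would be fetched roughly like this (hypothetical sketch):
#
#   require 'OpenNebula'
#   client = OpenNebula::Client.new
#   pool   = OpenNebula::HostPool.new(client)
#   pool.info                                  # XML-RPC call to oned
#   puts registered_host_names(pool)

# Stand-in demonstration with stub hosts:
stub_pool = [OpenStruct.new(name: "v"), OpenStruct.new(name: "b")]
puts registered_host_names(stub_pool).inspect
```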

Regards,

-Tino

--
Constantino Vázquez Blanco | dsa-research.org/tinova
Virtualization Technology Engineer / Researcher
OpenNebula Toolkit | opennebula.org



On Thu, Jan 13, 2011 at 5:31 PM, Paolo Smiraglia
 wrote:
>> since livemigrate functionality assumes shared FS, no transfer manager
>> call is performed by the OpenNebula core during livemigration.
>
> I'm using a custom LVM transfer manager driver which needs to refresh
> the configuration each time a VM is moved to another host.
>
> At the moment, my driver works correctly only with offline migration.
>
> Is there a way to obtain the list of OpenNebula-registered nodes from
> the transfer manager driver?
>
> Thanks...
>
>
> --
> PAOLO SMIRAGLIA
> http://portale.isf.polito.it/paolo-smiraglia
>


Re: [one-users] live migration

2011-01-13 Thread Paolo Smiraglia
> since livemigrate functionality assumes shared FS, no transfer manager
> call is performed by the OpenNebula core during livemigration.

I'm using a custom LVM transfer manager driver which needs to refresh
the configuration each time a VM is moved to another host.

At the moment, my driver works correctly only with offline migration.

Is there a way to obtain the list of OpenNebula-registered nodes from
the transfer manager driver?

Thanks...


-- 
PAOLO SMIRAGLIA
http://portale.isf.polito.it/paolo-smiraglia


Re: [one-users] live migration

2011-01-13 Thread Tino Vazquez
Hi there,

since livemigrate functionality assumes shared FS, no transfer manager
call is performed by the OpenNebula core during livemigration.

Hope it helps,

-Tino

--
Constantino Vázquez Blanco | dsa-research.org/tinova
Virtualization Technology Engineer / Researcher
OpenNebula Toolkit | opennebula.org



On Thu, Jan 13, 2011 at 5:18 PM,   wrote:
>> "PS" == Paolo Smiraglia  writes:
>
> PS> Hi everyone!  Does OpenNebula call any transfer manager driver
> PS> during live migration?
>
> AFAIK, it ssh-launches, on the machine where the migration begins, a
> command called "migrate" that (at least for the KVM hypervisor) issues a
> virsh "migrate --live" toward the destination machine.
>
> Only the devil knows why one of my machines cannot be used as the starting
> point for a migration (and no, I was wrong in a previous post on this
> list, I did not solve it :[ ).
>
> --
> ing. Gian Uberto Lauri
> Ricercatore / Researcher
> Divisione Ricerca ed Innovazione / Research & Innovation Division
> gianuberto.la...@eng.it
>
> Engineering Ingegneria Informatica spa
> Corso Stati Uniti 23/C, 35127 Padova (PD)
>
> Tel. +39-049.8283.538         | main(){printf(&unix["\021%six\012\0"],
> Fax  +39-049.8283.569             |    (unix)["have"]+"fun"-0x60);}
> Skype: gian.uberto.lauri          |          David Korn, AT&T Bell Labs
> http://www.eng.it                         |          ioccc best One Liner, 
> 1987


Re: [one-users] live migration

2011-01-13 Thread saint
> "PS" == Paolo Smiraglia  writes:

PS> Hi everyone!  Does OpenNebula call any transfer manager driver
PS> during live migration?

AFAIK, it ssh-launches, on the machine where the migration begins, a
command called "migrate" that (at least for the KVM hypervisor) issues a
virsh "migrate --live" toward the destination machine.

Only the devil knows why one of my machines cannot be used as the starting
point for a migration (and no, I was wrong in a previous post on this
list, I did not solve it :[ ).

--
ing. Gian Uberto Lauri
Ricercatore / Researcher
Divisione Ricerca ed Innovazione / Research & Innovation Division
gianuberto.la...@eng.it

Engineering Ingegneria Informatica spa
Corso Stati Uniti 23/C, 35127 Padova (PD) 

Tel. +39-049.8283.538 | main(){printf(&unix["\021%six\012\0"], 
Fax  +39-049.8283.569 |(unix)["have"]+"fun"-0x60);}   
Skype: gian.uberto.lauri  |  David Korn, AT&T Bell Labs 

http://www.eng.it |  ioccc best One Liner, 1987 



[one-users] live migration

2011-01-13 Thread Paolo Smiraglia
Hi everyone!

Does OpenNebula call any transfer manager driver during live migration?

From oned.log I don't see any message about that.

Thanks in advance for replies!



PAOLO

-- 
PAOLO SMIRAGLIA
http://portale.isf.polito.it/paolo-smiraglia


Re: [one-users] live migration using occi-storage fails

2010-07-23 Thread Harder, Stefan
Hi Javier,

I've added the chmod line to the script but nothing changes. The images
still have 600 permissions instead of the 644 needed for a working live
migration. I also restarted one.

By the way, thank you all again for the tutorial!

Regards,

Stefan

> -Ursprüngliche Nachricht-
> Von: users-boun...@lists.opennebula.org [mailto:users-
> boun...@lists.opennebula.org] Im Auftrag von Javier Fontan
> Gesendet: Donnerstag, 22. Juli 2010 15:53
> An: Strutz, Marco
> Cc: users@lists.opennebula.org
> Betreff: Re: [one-users] live migration using occi-storage fails
> 
> Hello Marco,
> 
> To change the permissions of the image uploaded by OCCI server you can
> edit $ONE_LOCATION/lib/ruby/cloud/image.rb. Around line 102 there is
> this function:
> 
> --8<--
> def copy_image(path, move=false)
> if move
> FileUtils.mv(path, image_path)
> else
> FileUtils.cp(path, image_path)
> end
> self.path=image_path
> end
> -->8--
> 
> You have to add there a line so it looks like this:
> 
> --8<--
> def copy_image(path, move=false)
> if move
> FileUtils.mv(path, image_path)
> else
> FileUtils.cp(path, image_path)
> FileUtils.chmod(0666, image_path)
> end
> self.path=image_path
> end
> -->8--
> 
> Feel free to change the permissions parameter to suit your needs and tell
> me if that solves the problem.
> 
> Bye
> 
> On Fri, Jun 25, 2010 at 11:13 AM, Strutz, Marco
>  wrote:
> > Hello Javier.
> >
> > As described in the documentation[1] umask is not set in
> "/etc/exports":
> >        /srv/cloud      10.0.0.6(rw)
> >
> > If I upload an image via "occi-storage create " an
> > image will be created in "/srv/cloud/images". This image has rw
> > permission only for "oneadmin":
> >        -rw--- 1 oneadmin cloud
> >
> > The migration fails with those permissions until I change them to
> >        -rw-r--r-- 1 oneadmin cloud
> > Then the migration works fine.
> >
> > If I manually create a file as oneadmin in "/srv/cloud/images" via
> "touch testfile", then "testfile" has correct (read) permission which
> works fine for migration:
> >        onead...@b:/srv/cloud/images$ touch testfile && ls -la
> testfile
> >        -rw-r--r-- 1 oneadmin cloud 0 2010-06-25 10:57 testfile
> >
> >
> >        The OCCI server runs as the "oneadmin" user:
> >        $ ps aux | grep "ruby"
> >        oneadmin  3038  0.0  0.0  31032  4472 ?        SNl  Jun11
> 8:17 ruby /srv/cloud/one/lib/mads/one_vmm_kvm.rb
> >        oneadmin  3049  0.0  0.0  37860  5140 ?        SNl  Jun11
> 9:39 ruby /srv/cloud/one/lib/mads/one_im_ssh.rb im_kvm/im_kvm.conf
> >        oneadmin  3063  0.0  0.0  30560  3988 ?        SNl  Jun11
> 7:44 ruby /srv/cloud/one/lib/mads/one_tm.rb tm_nfs/tm_nfs.conf
> >        oneadmin  3077  0.0  0.0  30320  3652 ?        SNl  Jun11
> 7:35 ruby /srv/cloud/one/lib/mads/one_hm.rb
> >        oneadmin  3091  0.1  0.4 115116 37400 ?        Rl   Jun11
>  35:22 ruby /srv/cloud/one/lib/ruby/cloud/occi/occi-server.rb
> >
> >
> > I'm clueless about further testing. Could you please assist? I would
> appreciate it.
> >
> >
> >
> > [1] http://www.opennebula.org/documentation:rel1.4:plan  -->
> Preparing the Cluster : storage :
> >    $ cat /etc/exports
> >    /srv/cloud 192.168.0.0/255.255.255.0(rw)
> >
> >
> >
> > Thanks + bye
> > Marco
> >
> >
> > -Original Message-
> > From: Javier Fontan [mailto:jfon...@gmail.com]
> > Sent: Thursday, June 24, 2010 12:23 PM
> > To: Strutz, Marco
> > Cc: users@lists.opennebula.org
> > Subject: Re: [one-users] live migration using occi-storage fails
> >
> > Hello,
> >
> > We don't explicitly set image file permissions, take a look at umask
> > for oneadmin user.
> >
> > Bye
> >
> >
> > On Wed, Jun 23, 2010 at 2:23 PM, Strutz, Marco
> >  wrote:
>> I have added read permission... now live migration works! (My setup uses
>> KVM as hypervisor)
> >> Thanks!
> >>
> >> onead...@v:~/var/36/images$ ls -la disk.0
> >> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:13 disk.0
> >>
> >>
> >> What can I do to have the read-permission automatically set by
> OpenNebula every time a virtual machine is de

Re: [one-users] live migration using occi-storage fails

2010-07-22 Thread Javier Fontan
Hello Marco,

To change the permissions of the image uploaded by OCCI server you can
edit $ONE_LOCATION/lib/ruby/cloud/image.rb. Around line 102 there is
this function:

--8<--
def copy_image(path, move=false)
if move
FileUtils.mv(path, image_path)
else
FileUtils.cp(path, image_path)
end
self.path=image_path
end
-->8--

You have to add there a line so it looks like this:

--8<--
def copy_image(path, move=false)
if move
FileUtils.mv(path, image_path)
else
FileUtils.cp(path, image_path)
FileUtils.chmod(0666, image_path)
end
self.path=image_path
end
-->8--

Feel free to change the permissions parameter to suit your needs and tell
me if that solves the problem.

Bye
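The effect of that extra FileUtils.chmod line can be reproduced in isolation (a sketch using temporary files, not OpenNebula's real image paths):

```ruby
require 'fileutils'
require 'tmpdir'

Dir.mktmpdir do |dir|
  src  = File.join(dir, "uploaded.img")
  dest = File.join(dir, "disk.0")

  File.write(src, "dummy image data")
  File.chmod(0600, src)          # what a restrictive umask would produce

  FileUtils.cp(src, dest)        # copy_image without the fix: mode stays 0600
  FileUtils.chmod(0644, dest)    # the added line: make the copy world-readable

  printf("%o\n", File.stat(dest).mode & 0777)   # prints 644
end
```

The key point is that FileUtils.cp creates the destination with the source's restrictive mode, so the explicit chmod afterwards is what opens it up for the hypervisor.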

On Fri, Jun 25, 2010 at 11:13 AM, Strutz, Marco
 wrote:
> Hello Javier.
>
> As described in the documentation[1] umask is not set in "/etc/exports":
>        /srv/cloud      10.0.0.6(rw)
>
> If I upload an image via "occi-storage create " an image will
> be created in "/srv/cloud/images". This image has rw permission only for
> "oneadmin":
>        -rw--- 1 oneadmin cloud
>
> The migration fails with those permissions until I change them to
>        -rw-r--r-- 1 oneadmin cloud
> Then the migration works fine.
>
> If I manually create a file as oneadmin in "/srv/cloud/images" via "touch 
> testfile", then "testfile" has correct (read) permission which works fine for 
> migration:
>        onead...@b:/srv/cloud/images$ touch testfile && ls -la testfile
>        -rw-r--r-- 1 oneadmin cloud 0 2010-06-25 10:57 testfile
>
>
> The OCCI server runs as the "oneadmin" user:
>        $ ps aux | grep "ruby"
>        oneadmin  3038  0.0  0.0  31032  4472 ?        SNl  Jun11   8:17 ruby 
> /srv/cloud/one/lib/mads/one_vmm_kvm.rb
>        oneadmin  3049  0.0  0.0  37860  5140 ?        SNl  Jun11   9:39 ruby 
> /srv/cloud/one/lib/mads/one_im_ssh.rb im_kvm/im_kvm.conf
>        oneadmin  3063  0.0  0.0  30560  3988 ?        SNl  Jun11   7:44 ruby 
> /srv/cloud/one/lib/mads/one_tm.rb tm_nfs/tm_nfs.conf
>        oneadmin  3077  0.0  0.0  30320  3652 ?        SNl  Jun11   7:35 ruby 
> /srv/cloud/one/lib/mads/one_hm.rb
>        oneadmin  3091  0.1  0.4 115116 37400 ?        Rl   Jun11  35:22 ruby 
> /srv/cloud/one/lib/ruby/cloud/occi/occi-server.rb
>
>
> I'm clueless about further testing. Could you please assist? I would 
> appreciate it.
>
>
>
> [1] http://www.opennebula.org/documentation:rel1.4:plan  --> Preparing the 
> Cluster : storage :
>    $ cat /etc/exports
>    /srv/cloud 192.168.0.0/255.255.255.0(rw)
>
>
>
> Thanks + bye
> Marco
>
>
> -Original Message-
> From: Javier Fontan [mailto:jfon...@gmail.com]
> Sent: Thursday, June 24, 2010 12:23 PM
> To: Strutz, Marco
> Cc: users@lists.opennebula.org
> Subject: Re: [one-users] live migration using occi-storage fails
>
> Hello,
>
> We don't explicitly set image file permissions, take a look at umask
> for oneadmin user.
>
> Bye
>
>
> On Wed, Jun 23, 2010 at 2:23 PM, Strutz, Marco
>  wrote:
>> I have added read permission... now live migration works! (My setup uses
>> KVM as hypervisor)
>> Thanks!
>>
>> onead...@v:~/var/36/images$ ls -la disk.0
>> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:13 disk.0
>>
>>
>> What can I do to have the read-permission automatically set by OpenNebula 
>> every time a virtual machine is deployed via OCCI? Is this read-only file 
>> permission a bug in the OCCI implementation, should I open a ticket?
>>
>>
>>
>> Marco
>>
>> -Original Message-
>> From: Javier Fontan [mailto:jfon...@gmail.com]
>> Sent: Tuesday, June 22, 2010 4:54 PM
>> To: Strutz, Marco
>> Cc: users@lists.opennebula.org
>> Subject: Re: [one-users] live migration using occi-storage fails
>>
>> Hello,
>>
>> Those write only permissions are probably causing that error. xen
>> daemon uses root permissions to read disk image files. As the
>> filesystem is nfs mounted these permissions are enforced for root user
>> and that can cause the problem.
>>
>> On Mon, Jun 21, 2010 at 9:15 PM, Strutz, Marco
>>  wrote:
>>> Hello Javier.
>>>
>>>
>>> The destination node "v" uses a shared storage (via nfs) to access
>>> /srv/cloud and "disk.0" can be accessed from both machines ("v" and "b"). A
>>

Re: [one-users] live migration using occi-storage fails

2010-06-25 Thread Strutz, Marco
Hello Javier.

As described in the documentation[1] umask is not set in "/etc/exports":
/srv/cloud  10.0.0.6(rw)

If I upload an image via "occi-storage create " an image will be
created in "/srv/cloud/images". This image has rw permission only for
"oneadmin":
-rw--- 1 oneadmin cloud

The migration fails with those permissions until I change them to
-rw-r--r-- 1 oneadmin cloud
Then the migration works fine.

If I manually create a file as oneadmin in "/srv/cloud/images" via "touch 
testfile", then "testfile" has correct (read) permission which works fine for 
migration:
onead...@b:/srv/cloud/images$ touch testfile && ls -la testfile
-rw-r--r-- 1 oneadmin cloud 0 2010-06-25 10:57 testfile


The OCCI server runs as the "oneadmin" user:
$ ps aux | grep "ruby"
oneadmin  3038  0.0  0.0  31032  4472 ?SNl  Jun11   8:17 ruby 
/srv/cloud/one/lib/mads/one_vmm_kvm.rb
oneadmin  3049  0.0  0.0  37860  5140 ?SNl  Jun11   9:39 ruby 
/srv/cloud/one/lib/mads/one_im_ssh.rb im_kvm/im_kvm.conf
oneadmin  3063  0.0  0.0  30560  3988 ?SNl  Jun11   7:44 ruby 
/srv/cloud/one/lib/mads/one_tm.rb tm_nfs/tm_nfs.conf
oneadmin  3077  0.0  0.0  30320  3652 ?SNl  Jun11   7:35 ruby 
/srv/cloud/one/lib/mads/one_hm.rb
oneadmin  3091  0.1  0.4 115116 37400 ?Rl   Jun11  35:22 ruby 
/srv/cloud/one/lib/ruby/cloud/occi/occi-server.rb


I'm clueless about further testing. Could you please assist? I would appreciate 
it.



[1] http://www.opennebula.org/documentation:rel1.4:plan  --> Preparing the 
Cluster : storage :
$ cat /etc/exports
/srv/cloud 192.168.0.0/255.255.255.0(rw)



Thanks + bye
Marco


-Original Message-
From: Javier Fontan [mailto:jfon...@gmail.com] 
Sent: Thursday, June 24, 2010 12:23 PM
To: Strutz, Marco
Cc: users@lists.opennebula.org
Subject: Re: [one-users] live migration using occi-storage fails

Hello,

We don't explicitly set image file permissions, take a look at umask
for oneadmin user.

Bye


On Wed, Jun 23, 2010 at 2:23 PM, Strutz, Marco
 wrote:
> I have added read permission... now live migration works! (My setup uses KVM
> as hypervisor)
> Thanks!
>
> onead...@v:~/var/36/images$ ls -la disk.0
> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:13 disk.0
>
>
> What can I do to have the read-permission automatically set by OpenNebula 
> every time a virtual machine is deployed via OCCI? Is this read-only file 
> permission a bug in the OCCI implementation, should I open a ticket?
>
>
>
> Marco
>
> -Original Message-
> From: Javier Fontan [mailto:jfon...@gmail.com]
> Sent: Tuesday, June 22, 2010 4:54 PM
> To: Strutz, Marco
> Cc: users@lists.opennebula.org
> Subject: Re: [one-users] live migration using occi-storage fails
>
> Hello,
>
> Those write only permissions are probably causing that error. xen
> daemon uses root permissions to read disk image files. As the
> filesystem is nfs mounted these permissions are enforced for root user
> and that can cause the problem.
>
> On Mon, Jun 21, 2010 at 9:15 PM, Strutz, Marco
>  wrote:
>> Hello Javier.
>>
>>
>> The destination node "v" uses a shared storage (via nfs) to access
>> /srv/cloud and "disk.0" can be accessed from both machines ("v" and "b"). A
>> symlink does not seem to be used for the image(s):
>>
>>
>> id=36:
>>
>> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/36/images/disk.0
>> -rw--w--w- 1 oneadmin cloud 41943040 2010-06-11 14:13
>> /srv/cloud/one/var/36/images/disk.0
>>
>>
>>
>> id=38:
>>
>> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/38/images/disk.0
>> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:55
>> /srv/cloud/one/var/38/images/disk.0
>>
>> onead...@v:~/var/36/images$  ls -la /srv/cloud/images/2
>> -rw--- 1 oneadmin cloud 41943040 2010-06-09 10:36 /srv/cloud/images/2
>>
>> onead...@v:~/var/36/images$ ls -la /srv/cloud/images/ttylinux.img
>> -rw-r--r-- 1 oneadmin cloud 41943040 2010-03-30 13:57
>> /srv/cloud/images/ttylinux.img
>>
>>
>>
>> The file permissions seem to be different. Could that be a potential
>> problem?
>>
>>
>>
>>
>> thanks
>> Marco
>>
>>
>>
>>
>> -Ursprüngliche Nachricht-
>> Von: Javier Fontan [mailto:jfon...@gmail.com]
>> Gesendet: Mo 21.06.2010 17:58
>> An: Strutz, Marco
>> Cc: users@lists.opennebula.org
>> Betreff: Re: [one-users] live migration using occi-storage fails
>>
>> Hello,
>>
>> Can

Re: [one-users] live migration using occi-storage fails

2010-06-24 Thread Javier Fontan
Hello,

We don't explicitly set image file permissions, take a look at umask
for oneadmin user.

Bye
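The umask hint can be illustrated directly: the mode of a newly created file is 0666 masked by the creating process's umask, so a umask of 0077 yields the problematic 600 images while 0022 yields the 644 that migration needs (a sketch with throwaway files; the umask values are illustrative assumptions):

```ruby
require 'tmpdir'

Dir.mktmpdir do |dir|
  saved = File.umask               # remember the process umask

  File.umask(0077)                 # a restrictive umask, as oneadmin might have
  File.open(File.join(dir, "restrictive.img"), "w") {}
  mode1 = File.stat(File.join(dir, "restrictive.img")).mode & 0777

  File.umask(0022)                 # the usual default
  File.open(File.join(dir, "shared.img"), "w") {}
  mode2 = File.stat(File.join(dir, "shared.img")).mode & 0777

  File.umask(saved)                # restore the original umask
  printf("%o %o\n", mode1, mode2)  # prints 600 644
end
```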


On Wed, Jun 23, 2010 at 2:23 PM, Strutz, Marco
 wrote:
> I have added read permission... now live migration works! (My setup uses KVM
> as hypervisor)
> Thanks!
>
> onead...@v:~/var/36/images$ ls -la disk.0
> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:13 disk.0
>
>
> What can I do to have the read-permission automatically set by OpenNebula 
> every time a virtual machine is deployed via OCCI? Is this read-only file 
> permission a bug in the OCCI implementation, should I open a ticket?
>
>
>
> Marco
>
> -Original Message-
> From: Javier Fontan [mailto:jfon...@gmail.com]
> Sent: Tuesday, June 22, 2010 4:54 PM
> To: Strutz, Marco
> Cc: users@lists.opennebula.org
> Subject: Re: [one-users] live migration using occi-storage fails
>
> Hello,
>
> Those write only permissions are probably causing that error. xen
> daemon uses root permissions to read disk image files. As the
> filesystem is nfs mounted these permissions are enforced for root user
> and that can cause the problem.
>
> On Mon, Jun 21, 2010 at 9:15 PM, Strutz, Marco
>  wrote:
>> Hello Javier.
>>
>>
>> The destination node "v" uses a shared storage (via nfs) to access
>> /srv/cloud and "disk.0" can be accessed from both machines ("v" and "b"). A
>> symlink does not seem to be used for the image(s):
>>
>>
>> id=36:
>>
>> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/36/images/disk.0
>> -rw--w--w- 1 oneadmin cloud 41943040 2010-06-11 14:13
>> /srv/cloud/one/var/36/images/disk.0
>>
>>
>>
>> id=38:
>>
>> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/38/images/disk.0
>> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:55
>> /srv/cloud/one/var/38/images/disk.0
>>
>> onead...@v:~/var/36/images$  ls -la /srv/cloud/images/2
>> -rw--- 1 oneadmin cloud 41943040 2010-06-09 10:36 /srv/cloud/images/2
>>
>> onead...@v:~/var/36/images$ ls -la /srv/cloud/images/ttylinux.img
>> -rw-r--r-- 1 oneadmin cloud 41943040 2010-03-30 13:57
>> /srv/cloud/images/ttylinux.img
>>
>>
>>
>> The file permissions seem to be different. Could that be a potential
>> problem?
>>
>>
>>
>>
>> thanks
>> Marco
>>
>>
>>
>>
>> -Ursprüngliche Nachricht-
>> Von: Javier Fontan [mailto:jfon...@gmail.com]
>> Gesendet: Mo 21.06.2010 17:58
>> An: Strutz, Marco
>> Cc: users@lists.opennebula.org
>> Betreff: Re: [one-users] live migration using occi-storage fails
>>
>> Hello,
>>
>> Can you check that /srv/cloud/one/var//36/images/disk.0 is accessible
>> from destination node (I suppose "v")? Also check if that it is a
>> symlink the target file is readable there.
>>
>> Bye
>>
>> On Fri, Jun 11, 2010 at 3:21 PM, Strutz, Marco
>>  wrote:
>>> Hi everyone.
>>>
>>> I have deployed ttyLinux twice, once via occi (id=36) and the other via
>>> cli (onevm create ...).
>>> Both machines are up and running.
>>>
>>> Unfortunately, live migration doesn't work with the occi machine id=36,
>>> BUT live migration for id=38 works like a charm.
>>>
>>>
>>> The ttyLinux image for Id=36 was uploaded via occi as a storage resource
>>> (disk-id=2).
>>> The ttyLinux image for Id=38 never got in contact with occi ->
>>> /srv/cloud/images/ttyLinux.img
>>>
>>> (both images are identical, confirmed via the 'diff' command)
>>>
>>> Strange: if I deploy a third ttyLinux (same configuration as id=38) but
>>> point its source to the occi-storage "SOURCE=/srv/cloud/images/2" then
>>> live migration fails as well.
>>>
>>>
>>> Any guesses? (log files see below)
>>>
>>>
>>>
>>> thanks in advance
>>> Marco
>>>
>>>
>>>
>>> environment:
>>> Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010
>>> x86_64 GNU/Linux
>>> OpenNebula v1.4 (Last Stable Release)
>>>
>>>
>>>
>>> -/srv/cloud/one/var/36/vm.log--
>>> (...)
>>> Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
>>> Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh
>>> --connect qemu:///system migrate --live o

Re: [one-users] live migration using occi-storage fails

2010-06-23 Thread Strutz, Marco
I have added read permission... now live migration works! (My setup uses KVM
as hypervisor)
Thanks!

onead...@v:~/var/36/images$ ls -la disk.0 
-rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:13 disk.0


What can I do to have the read-permission automatically set by OpenNebula every 
time a virtual machine is deployed via OCCI? Is this read-only file permission 
a bug in the OCCI implementation, should I open a ticket?



Marco

-Original Message-
From: Javier Fontan [mailto:jfon...@gmail.com] 
Sent: Tuesday, June 22, 2010 4:54 PM
To: Strutz, Marco
Cc: users@lists.opennebula.org
Subject: Re: [one-users] live migration using occi-storage fails

Hello,

Those write only permissions are probably causing that error. xen
daemon uses root permissions to read disk image files. As the
filesystem is nfs mounted these permissions are enforced for root user
and that can cause the problem.

On Mon, Jun 21, 2010 at 9:15 PM, Strutz, Marco
 wrote:
> Hello Javier.
>
>
> The destination node "v" uses a shared storage (via nfs) to access
> /srv/cloud and "disk.0" can be accessed from both machines ("v" and "b"). A
> symlink does not seem to be used for the image(s):
>
>
> id=36:
>
> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/36/images/disk.0
> -rw--w--w- 1 oneadmin cloud 41943040 2010-06-11 14:13
> /srv/cloud/one/var/36/images/disk.0
>
>
>
> id=38:
>
> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/38/images/disk.0
> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:55
> /srv/cloud/one/var/38/images/disk.0
>
> onead...@v:~/var/36/images$  ls -la /srv/cloud/images/2
> -rw--- 1 oneadmin cloud 41943040 2010-06-09 10:36 /srv/cloud/images/2
>
> onead...@v:~/var/36/images$ ls -la /srv/cloud/images/ttylinux.img
> -rw-r--r-- 1 oneadmin cloud 41943040 2010-03-30 13:57
> /srv/cloud/images/ttylinux.img
>
>
>
> The file permissions seem to be different. Could that be a potential
> problem?
>
>
>
>
> thanks
> Marco
>
>
>
>
> -Ursprüngliche Nachricht-
> Von: Javier Fontan [mailto:jfon...@gmail.com]
> Gesendet: Mo 21.06.2010 17:58
> An: Strutz, Marco
> Cc: users@lists.opennebula.org
> Betreff: Re: [one-users] live migration using occi-storage fails
>
> Hello,
>
> Can you check that /srv/cloud/one/var//36/images/disk.0 is accessible
> from destination node (I suppose "v")? Also check if that it is a
> symlink the target file is readable there.
>
> Bye
>
> On Fri, Jun 11, 2010 at 3:21 PM, Strutz, Marco
>  wrote:
>> Hi everyone.
>>
>> I have deployed ttyLinux twice, once via occi (id=36) and the other via
>> cli (onevm create ...).
>> Both machines are up and running.
>>
>> Unfortunately, live migration doesn't work with the occi machine id=36,
>> BUT live migration for id=38 works like a charm.
>>
>>
>> The ttyLinux image for Id=36 was uploaded via occi as a storage resource
>> (disk-id=2).
>> The ttyLinux image for Id=38 never got in contact with occi ->
>> /srv/cloud/images/ttyLinux.img
>>
>> (both images are identical, confirmed via the 'diff' command)
>>
>> Strange: if I deploy a third ttyLinux (same configuration as id=38) but
>> point its source to the occi-storage "SOURCE=/srv/cloud/images/2" then
>> live migration fails as well.
>>
>>
>> Any guesses? (log files see below)
>>
>>
>>
>> thanks in advance
>> Marco
>>
>>
>>
>> environment:
>> Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010
>> x86_64 GNU/Linux
>> OpenNebula v1.4 (Last Stable Release)
>>
>>
>>
>> -/srv/cloud/one/var/36/vm.log--
>> (...)
>> Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
>> Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh
>> --connect qemu:///system migrate --live one-36 qemu+ssh://v/session
>> Fri Jun 11 14:24:35 2010 [VMM][I]: STDERR follows.
>> Fri Jun 11 14:24:35 2010 [VMM][I]: /usr/lib/ruby/1.8/open3.rb:67:
>> warning: Insecure world writable dir /srv/cloud in PATH, mode 040777
>> Fri Jun 11 14:24:35 2010 [VMM][I]: Connecting to uri: qemu:///system
>> Fri Jun 11 14:24:35 2010 [VMM][I]: error: operation failed: failed to
>> start listening VM
>> Fri Jun 11 14:24:35 2010 [VMM][I]: ExitCode: 1
>> Fri Jun 11 14:24:35 2010 [VMM][E]: Error live-migrating VM, -
>> Fri Jun 11 14:24:35 2010 [LCM][I]: Fail to life migrate VM. Assuming
>> that the VM is still RUNNING (will poll VM).
>> (...)
>> --

Re: [one-users] live migration using occi-storage fails

2010-06-22 Thread Javier Fontan
Hello,

Those write only permissions are probably causing that error. xen
daemon uses root permissions to read disk image files. As the
filesystem is nfs mounted these permissions are enforced for root user
and that can cause the problem.

On Mon, Jun 21, 2010 at 9:15 PM, Strutz, Marco
 wrote:
> Hello Javier.
>
>
> The destination node "v" uses a shared storage (via nfs) to access
> /srv/cloud and "disk.0" can be accessed from both machines ("v" and "b"). A
> symlink does not seem to be used for the image(s):
>
>
> id=36:
>
> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/36/images/disk.0
> -rw--w--w- 1 oneadmin cloud 41943040 2010-06-11 14:13
> /srv/cloud/one/var/36/images/disk.0
>
>
>
> id=38:
>
> onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/38/images/disk.0
> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:55
> /srv/cloud/one/var/38/images/disk.0
>
> onead...@v:~/var/36/images$  ls -la /srv/cloud/images/2
> -rw--- 1 oneadmin cloud 41943040 2010-06-09 10:36 /srv/cloud/images/2
>
> onead...@v:~/var/36/images$ ls -la /srv/cloud/images/ttylinux.img
> -rw-r--r-- 1 oneadmin cloud 41943040 2010-03-30 13:57
> /srv/cloud/images/ttylinux.img
>
>
>
> The file permissions seem to be different. Could that be a potential
> problem?
>
>
>
>
> thanks
> Marco
>
>
>
>
> -Ursprüngliche Nachricht-
> Von: Javier Fontan [mailto:jfon...@gmail.com]
> Gesendet: Mo 21.06.2010 17:58
> An: Strutz, Marco
> Cc: users@lists.opennebula.org
> Betreff: Re: [one-users] live migration using occi-storage fails
>
> Hello,
>
> Can you check that /srv/cloud/one/var//36/images/disk.0 is accessible
> from destination node (I suppose "v")? Also check if that it is a
> symlink the target file is readable there.
>
> Bye
>
> On Fri, Jun 11, 2010 at 3:21 PM, Strutz, Marco
>  wrote:
>> Hi everyone.
>>
>> I have deployed ttyLinux twice, once via occi (id=36) and the other via
>> cli (onevm create ...).
>> Both machines are up and running.
>>
>> Unfortunately, live migration doesn't work with the occi machine id=36,
>> BUT live migration for id=38 works like a charm.
>>
>>
>> The ttyLinux image for Id=36 was uploaded via occi as a storage resource
>> (disk-id=2).
>> The ttyLinux image for Id=38 never got in contact with occi ->
>> /srv/cloud/images/ttyLinux.img
>>
>> (both images are identical, confirmed via the 'diff' command)
>>
>> Strange: if I deploy a third ttyLinux (same configuration as id=38) but
>> point its source to the occi-storage "SOURCE=/srv/cloud/images/2" then
>> live migration fails as well.
>>
>>
>> Any guesses? (log files see below)
>>
>>
>>
>> thanks in advance
>> Marco
>>
>>
>>
>> environment:
>> Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010
>> x86_64 GNU/Linux
>> OpenNebula v1.4 (Last Stable Release)
>>
>>
>>
>> -/srv/cloud/one/var/36/vm.log--
>> (...)
>> Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
>> Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh
>> --connect qemu:///system migrate --live one-36 qemu+ssh://v/session
>> Fri Jun 11 14:24:35 2010 [VMM][I]: STDERR follows.
>> Fri Jun 11 14:24:35 2010 [VMM][I]: /usr/lib/ruby/1.8/open3.rb:67:
>> warning: Insecure world writable dir /srv/cloud in PATH, mode 040777
>> Fri Jun 11 14:24:35 2010 [VMM][I]: Connecting to uri: qemu:///system
>> Fri Jun 11 14:24:35 2010 [VMM][I]: error: operation failed: failed to
>> start listening VM
>> Fri Jun 11 14:24:35 2010 [VMM][I]: ExitCode: 1
>> Fri Jun 11 14:24:35 2010 [VMM][E]: Error live-migrating VM, -
>> Fri Jun 11 14:24:35 2010 [LCM][I]: Fail to life migrate VM. Assuming
>> that the VM is still RUNNING (will poll VM).
>> (...)
>> ---
>>
>>
>> -/srv/cloud/one/var/38/vm.log--
>> (...)
>> Fri Jun 11 14:56:52 2010 [LCM][I]: New VM state is MIGRATE
>> Fri Jun 11 14:56:53 2010 [LCM][I]: New VM state is RUNNING
>> (...)
>> ---
>>
>>
>>
>> -$onevm list---
>>  ID USER NAME STAT CPU MEM    HOSTNAME    TIME
>>  36 oneadmin ttyLinux runn   0   65536   b 00 00:01:03
>>  38 oneadmin ttylinux runn   0   65536  

Re: [one-users] live migration using occi-storage fails

2010-06-21 Thread Strutz, Marco
Hello Javier.


The destination node "v" uses shared storage (via NFS) to access /srv/cloud,
and "disk.0" can be accessed from both machines ("v" and "b"). A symlink does
not seem to be used for the image(s):


id=36:

onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/36/images/disk.0 
-rw--w--w- 1 oneadmin cloud 41943040 2010-06-11 14:13 
/srv/cloud/one/var/36/images/disk.0



id=38:

onead...@v:~/var/36/images$ ls -la /srv/cloud/one/var/38/images/disk.0 
-rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:55 
/srv/cloud/one/var/38/images/disk.0

onead...@v:~/var/36/images$  ls -la /srv/cloud/images/2
-rw--- 1 oneadmin cloud 41943040 2010-06-09 10:36 /srv/cloud/images/2

onead...@v:~/var/36/images$ ls -la /srv/cloud/images/ttylinux.img
-rw-r--r-- 1 oneadmin cloud 41943040 2010-03-30 13:57 
/srv/cloud/images/ttylinux.img



The file permissions seem to be different. Could that be a potential problem?
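One way to make the comparison mechanical: for the hypervisor (often acting as root over root-squashed NFS) to read an image, the "others" read bit must be set. A sketch checking the modes listed above; the octal values are transcribed from the ls output, so treat them as assumptions:

```ruby
# True if the "others" read bit (0o004) is set, i.e. any user can read.
def world_readable?(mode)
  (mode & 0o004) != 0
end

# Octal modes transcribed from the listings above:
modes = {
  "var/36/images/disk.0" => 0o622,  # -rw--w--w-  (the failing VM)
  "var/38/images/disk.0" => 0o666,  # -rw-rw-rw-  (the working VM)
  "images/2"             => 0o600,  # -rw-------
  "images/ttylinux.img"  => 0o644,  # -rw-r--r--
}

modes.each do |path, mode|
  status = world_readable?(mode) ? "readable" : "NOT readable"
  puts format("%-22s %s by others", path, status)
end
```

Under this check, only the images that migrated successfully are readable by others, which matches the pattern in the thread.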




thanks
Marco




-Ursprüngliche Nachricht-
Von: Javier Fontan [mailto:jfon...@gmail.com]
Gesendet: Mo 21.06.2010 17:58
An: Strutz, Marco
Cc: users@lists.opennebula.org
Betreff: Re: [one-users] live migration using occi-storage fails
 
Hello,

Can you check that /srv/cloud/one/var//36/images/disk.0 is accessible
from destination node (I suppose "v")? Also check if that it is a
symlink the target file is readable there.

Bye

On Fri, Jun 11, 2010 at 3:21 PM, Strutz, Marco
 wrote:
> Hi everyone.
>
> I have deployed ttyLinux twice, once via occi (id=36) and the other via
> cli (onevm create ...).
> Both machines are up and running.
>
> Unfortunately live-migration doesn't work with the occi machine id=36.
> BUT the live migration for id=38 work like a charme.
>
>
> The ttyLinux image for id=36 was uploaded via occi as a storage resource
> (disk-id=2).
> The ttyLinux image for id=38 never came in contact with occi ->
> /srv/cloud/images/ttyLinux.img
>
> (both images are identical, confirmed via the 'diff' command)
>
> Strange: if I deploy a third ttyLinux (same configuration as id=38) but
> point its source to the occi-storage "SOURCE=/srv/cloud/images/2", then
> the live migration fails as well.
>
>
> Any guesses? (log files see below)
>
>
>
> thanks in advance
> Marco
>
>
>
> environment:
> Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010
> x86_64 GNU/Linux
> OpenNebula v1.4 (Last Stable Release)
>
>
>
> -/srv/cloud/one/var/36/vm.log--
> (...)
> Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
> Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh
> --connect qemu:///system migrate --live one-36 qemu+ssh://v/session
> Fri Jun 11 14:24:35 2010 [VMM][I]: STDERR follows.
> Fri Jun 11 14:24:35 2010 [VMM][I]: /usr/lib/ruby/1.8/open3.rb:67:
> warning: Insecure world writable dir /srv/cloud in PATH, mode 040777
> Fri Jun 11 14:24:35 2010 [VMM][I]: Connecting to uri: qemu:///system
> Fri Jun 11 14:24:35 2010 [VMM][I]: error: operation failed: failed to
> start listening VM
> Fri Jun 11 14:24:35 2010 [VMM][I]: ExitCode: 1
> Fri Jun 11 14:24:35 2010 [VMM][E]: Error live-migrating VM, -
> Fri Jun 11 14:24:35 2010 [LCM][I]: Fail to life migrate VM. Assuming
> that the VM is still RUNNING (will poll VM).
> (...)
> ---
>
>
> -/srv/cloud/one/var/38/vm.log--
> (...)
> Fri Jun 11 14:56:52 2010 [LCM][I]: New VM state is MIGRATE
> Fri Jun 11 14:56:53 2010 [LCM][I]: New VM state is RUNNING
> (...)
> ---
>
>
>
> -$onevm list---
>  ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
>  36 oneadmin ttyLinux runn   0   65536               b 00 00:01:03
>  38 oneadmin ttylinux runn   0   65536               b 00 00:01:14
> ---
>
>
>
> $onehost list--
>  ID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM
> STAT
>   2 v                           0    400    400    400 8078448 8006072
> on
>   3 b                           2    400    394    394 8078448 7875748
> on
> ---
>
>
>
>
> ---$ onevm show 36-
> VIRTUAL MACHINE 36 INFORMATION
>
> ID : 36
> NAME   : ttyLinux01
> STATE  : ACTIVE
> LCM_STATE  : RUNNING
> START TIME : 06/11 1

Re: [one-users] live migration using occi-storage fails

2010-06-21 Thread Javier Fontan
Hello,

Can you check that /srv/cloud/one/var//36/images/disk.0 is accessible
from the destination node (I suppose "v")? Also check that, if it is a
symlink, the target file is readable there.
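A minimal sketch of that check (a throwaway symlink stands in for the image; on node "v" you would point `img` at /srv/cloud/one/var/36/images/disk.0 and run it as the oneadmin user, which is an assumption about the setup):

```shell
# Throwaway demo of the suggested check; on the destination node,
# set img to the real disk.0 path and run as the oneadmin user.
tmp=$(mktemp -d)
printf 'x' > "$tmp/real.img"
ln -s "$tmp/real.img" "$tmp/disk.0"
img="$tmp/disk.0"
target=$(readlink -f "$img")   # resolve a possible symlink to its real target
if [ -r "$target" ]; then
    echo "readable: $target"
else
    echo "NOT readable: $target"
fi
rm -rf "$tmp"
```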

Bye

On Fri, Jun 11, 2010 at 3:21 PM, Strutz, Marco
 wrote:
> Hi everyone.
>
> I have deployed ttyLinux twice, once via occi (id=36) and the other via
> cli (onevm create ...).
> Both machines are up and running.
>
> Unfortunately, live migration doesn't work with the occi machine id=36,
> BUT the live migration for id=38 works like a charm.
>
>
> The ttyLinux image for id=36 was uploaded via occi as a storage resource
> (disk-id=2).
> The ttyLinux image for id=38 never came in contact with occi ->
> /srv/cloud/images/ttyLinux.img
>
> (both images are identical, confirmed via the 'diff' command)
>
> Strange: if I deploy a third ttyLinux (same configuration as id=38) but
> point its source to the occi-storage "SOURCE=/srv/cloud/images/2", then
> the live migration fails as well.
>
>
> Any guesses? (log files see below)
>
>
>
> thanks in advance
> Marco
>
>
>
> environment:
> Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010
> x86_64 GNU/Linux
> OpenNebula v1.4 (Last Stable Release)
>
>
>
> -/srv/cloud/one/var/36/vm.log--
> (...)
> Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
> Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh
> --connect qemu:///system migrate --live one-36 qemu+ssh://vodka/session
> Fri Jun 11 14:24:35 2010 [VMM][I]: STDERR follows.
> Fri Jun 11 14:24:35 2010 [VMM][I]: /usr/lib/ruby/1.8/open3.rb:67:
> warning: Insecure world writable dir /srv/cloud in PATH, mode 040777
> Fri Jun 11 14:24:35 2010 [VMM][I]: Connecting to uri: qemu:///system
> Fri Jun 11 14:24:35 2010 [VMM][I]: error: operation failed: failed to
> start listening VM
> Fri Jun 11 14:24:35 2010 [VMM][I]: ExitCode: 1
> Fri Jun 11 14:24:35 2010 [VMM][E]: Error live-migrating VM, -
> Fri Jun 11 14:24:35 2010 [LCM][I]: Fail to life migrate VM. Assuming
> that the VM is still RUNNING (will poll VM).
> (...)
> ---
>
>
> -/srv/cloud/one/var/38/vm.log--
> (...)
> Fri Jun 11 14:56:52 2010 [LCM][I]: New VM state is MIGRATE
> Fri Jun 11 14:56:53 2010 [LCM][I]: New VM state is RUNNING
> (...)
> ---
>
>
>
> -$onevm list---
>  ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
>  36 oneadmin ttyLinux runn   0   65536               b 00 00:01:03
>  38 oneadmin ttylinux runn   0   65536               b 00 00:01:14
> ---
>
>
>
> $onehost list--
>  ID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM
> STAT
>   2 v                           0    400    400    400 8078448 8006072
> on
>   3 b                           2    400    394    394 8078448 7875748
> on
> ---
>
>
>
>
> ---$ onevm show 36-
> VIRTUAL MACHINE 36 INFORMATION
>
> ID             : 36
> NAME           : ttyLinux01
> STATE          : ACTIVE
> LCM_STATE      : RUNNING
> START TIME     : 06/11 14:11:15
> END TIME       : -
> DEPLOY ID:     : one-36
>
> VIRTUAL MACHINE TEMPLATE
>
> CPU=1
> DISK=[
>  IMAGE_ID=2,
>  READONLY=no,
>  SOURCE=/srv/cloud/images/2,
>  TARGET=hda ]
> FEATURES=[
>  ACPI=no ]
> INSTANCE_TYPE=small
> MEMORY=64
> NAME=ttyLinux01
> NIC=[
>  BRIDGE=br0,
>  IP=10.0.0.2,
>  MAC=00:03:c1:00:00:ca,
>  NETWORK=network,
>  VNID=0 ]
> VMID=36
> ---
>
>
>
>
>
>
> -$ virsh dumpxml one-36
> Connecting to uri: qemu:///system
> 
>  one-36
>  fd9dde78-1033-986e-003b-b353b9eaf8b3
>  65536
>  65536
>  1
>  
>    hvm
>    
>  
>  
>  destroy
>  restart
>  destroy
>  
>    /usr/bin/kvm
>    
>      
>      
>    
>    
>      
>      
>      
>    
>  
> 
> ---
>
>
> ---$ onevm show 38-
> VIRTUAL MACHINE 38 INFORMATION
>
> ID             : 38
> NAME           : ttylinux
> STATE          : ACTIVE
> LCM_STATE      : RUNNING
> START TIME     : 06/11 14:54:30
> END TIME       : -
> DEPLOY ID:     : one-38
>
> VIRTUAL MACHINE TEMPLATE
>
> CPU=0.1
> DISK=[
>  READONLY=no,
>  SOURCE=/srv/cloud/images/ttylinux.img,
>  TARGET=hda ]
> FEATURES=[
>  ACPI=no ]
> MEMORY=64
> NAME=ttylinux
> NIC=[
>  BRIDGE=br0,
>  IP=10.0.0.3,
>  MAC=00:03:c1:00:00:cb,
>  NETWORK=network,
>  VNID=0 ]
> VMID=38
> ---
>
>
>
>
> -$ virsh dumpxml one-38
> 
>  one