[Openstack-operators] : Public cloud operators group in

2016-09-28 Thread Rochelle Grober
I've followed "cloud for at least as long as OpenStack has existed, but back 
then I followed whatever/whoever called themselves "cloud" or 
"cloud-{app|service|etc}" and at one point there was a heated discussion 
(mostly that the rest of the group agreed with) that you couldn't claim you ran 
in the/a cloud if you utilized your own equipment in your own data center.



So, yeah.  The rest of the world doesn't always see cloud the way we do.



--Rocky





From: Silence Dogood 

I figure if you have entity Y's workloads running on entity X's hardware...

and that's 51% or greater portion of gross revenue... you are a public

cloud.



On Mon, Sep 26, 2016 at 11:35 AM, Kenny Johnston 
>

wrote:



> That seems like a strange definition. It doesn't incorporate the usual
> multi-tenancy requirement that traditionally separates private from public
> clouds. By that definition, Rackspace's Private Cloud offer, where we
> design, deploy and operate a single-tenant cloud on behalf of customers (in
> their data-center or ours) would be considered a "public" cloud.
>
> On Fri, Sep 23, 2016 at 3:54 PM, Rochelle Grober <
> rochelle.gro...@huawei.com> wrote:

>
>> Hi Matt,
>>
>> At considerable risk of heading down a rabbit hole... how are you
>> defining "public" cloud for these purposes?
>>
>> Cheers,
>> Blair
>>
>> Any cloud that provides a cloud to a third party in exchange for money.
>> So, rent a VM, rent a collection of VMs, lease a fully operational cloud
>> spec'ed to your requirements, lease a team and HW with your cloud on
>> them.
>>
>> So any cloud that provides offsite IaaS to lessees.
>>
>> --Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [ansible] Best practice for single interface config

2016-09-28 Thread Adam Lawson
It's been a while since I deployed OpenStack from scratch, so I'm using the
Ansible install guide [1] and ran into a problem:

How should networking be configured on hosts that have only one NIC? I'm
following the test environment layout with one interface [2], but the
/etc/network/interfaces example [3] does not match that layout (it presumes
the presence of four interfaces).

How would the network be configured on hosts with only one NIC to ensure
Ansible runs properly? (A sketch of one possible layout follows the links
below.)

[1]
http://docs.openstack.org/developer/openstack-ansible/install-guide/index.html
[2]
http://docs.openstack.org/developer/openstack-ansible/install-guide/overview-host-layout.html#test-environment
[3]
http://docs.openstack.org/developer/openstack-ansible/install-guide/app-targethosts-networkexample.html#test-environment
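
(Not from the guide — a minimal sketch of one common approach, assuming Ubuntu
ifupdown with the vlan and bridge-utils packages; the NIC name, VLAN IDs and
addresses below are examples, and the bridge names are the ones the guide
expects. The idea is to trunk VLAN subinterfaces on the single physical NIC
and attach each required bridge to its own VLAN:)

# /etc/network/interfaces fragment
auto eth0
iface eth0 inet manual

# one VLAN subinterface per OpenStack network, e.g. container management
auto eth0.10
iface eth0.10 inet manual
    vlan-raw-device eth0

# the bridge name the playbooks expect, on top of the VLAN
auto br-mgmt
iface br-mgmt inet static
    bridge_ports eth0.10
    bridge_stp off
    address 172.29.236.11
    netmask 255.255.252.0

# repeat the VLAN/bridge pair for br-vxlan, br-storage, etc., per [3]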

//adam

*Adam Lawson*

Principal Architect, CEO
Office: +1-916-794-5706
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread Mike Smith
Yeah, I saw that in his response.  Sounds like a good option if you want to try 
it.  I hadn’t tried it only because I didn’t know about it at the time, and the 
older version was working fine for us on our other hosts.

Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Sep 28, 2016, at 11:05 AM, William Josefsson wrote:

Thank you Mike! I see that setting mem_stats_period_seconds = 0 in the libvirt
section of nova.conf is mentioned in the bug ticket. This has also been
mentioned as a workaround by Corbin. Is this something that you have tested?
This may be the least intrusive workaround. Thx will

On Wed, Sep 28, 2016 at 11:15 PM, Mike Smith wrote:
William -

That is probably what you have hit then.  There are several related bug tickets
out there; I don't have the specific one I found handy, but it's similar to the
one Marcus just posted.  I found it by Googling around for the error late one
night.

In our case, we had most of our hypervisors on the 1.5.3-105.el7_2.4 version 
but had a couple of new ones that ended up with 1.5.3-105.el7_2.7.  Migrating 
to those new hypervisors resulted in the VMs being shut down, but migrating 
from the new to the old worked fine.  So for my case I live-migrated all the 
VMs from the new compute hosts to the old ones, did the yum downgrade of those 
packages on the new hosts, and then rebooted those hosts for good measure.  
After that I no longer had the problem.
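
(Roughly, the recovery Mike describes — a sketch; the instance/host names are
placeholders, and you should verify the downgrade target against your own
repos:)

# move VMs off the affected host first, e.g.
nova live-migration <instance-uuid> <known-good-host>
# then roll qemu back on the affected host and reboot it
yum downgrade qemu-kvm qemu-kvm-common qemu-img
reboot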

Good luck!

Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Sep 28, 2016, at 8:55 AM, William Josefsson wrote:

thanks Mike! yes, I checked and I've got the versions you mentioned installed:

yum list installed | grep "qemu-"
qemu-img.x86_64   10:1.5.3-105.el7_2.7 @updates
qemu-kvm.x86_64  10:1.5.3-105.el7_2.7 @updates
qemu-kvm-common.x86_64   10:1.5.3-105.el7_2.7 @updates


Is there any reference or bugzilla ticket for this issue, or how did you track 
it down?

Can you also please advise on the command "yum downgrade qemu-kvm
qemu-kvm-common qemu-img"? As I have running VMs in my environment, do I
need to restart libvirtd after this, and would it affect any of my instances?
thx will



On Wed, Sep 28, 2016 at 10:47 PM, Mike Smith wrote:
There is a bug in the following:

qemu-kvm-1.5.3-105.el7_2.7
qemu-img-1.5.3-105.el7_2.7
qemu-kvm-common-1.5.3-105.el7_2.7

Are you running these versions?  We encountered this same issue and fixed it by
running "yum downgrade qemu-kvm qemu-kvm-common qemu-img", which took us back to
the el7_2.4 packages.


Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Sep 28, 2016, at 8:30 AM, William Josefsson wrote:

Hi,

I have problems with nova live-migration for my CentOS7 and RHEL7 VMs.   The
live migrations work fine for Windows2012R2 and Ubuntu1404 VMs.

For CentOS7/RHEL-based images I get this error on the destination node in
nova-compute.log. Also, the console freezes for users logged in to the VM.



2016-09-28 10:49:24.101 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Resumed (Lifecycle Event)
2016-09-28 10:49:24.339 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Resumed (Lifecycle Event)
2016-09-28 10:49:25.866 353935 INFO nova.compute.manager [instance: cd0b605d] 
Post operation of migration started
2016-09-28 10:49:39.410 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Stopped (Lifecycle Event)
2016-09-28 10:49:39.532 353935 WARNING nova.compute.manager [instance: 
cd0b605d] Instance shutdown by itself. Calling the stop API. Current vm_state: 
active, current task_state: None, original DB power_state: 4, current VM 
power_state: 4
2016-09-28 10:49:39.668 353935 INFO nova.compute.manager [instance: cd0b605d] 
Instance is already powered off in the hypervisor when stop is called.
2016-09-28 10:49:39.736 353935 INFO nova.virt.libvirt.driver [instance: 
cd0b605d] Instance already shutdown.
2016-09-28 10:49:39.743 353935 INFO nova.virt.libvirt.driver [instance: 
cd0b605d] Instance destroyed successfully.


Eventually the VM ends up in SHUTDOWN state on the destination node.

I'm on CentOS7/Liberty, and my storage backend is Ceph (Hammer 0.94.9).

Please advise. thx will
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread William Josefsson
yes thx Corbin, I should try setting mem_stats_period_seconds = 0 in the
libvirt section of nova.conf and then restart nova-compute on all hosts.
Then I can try the live-migration again and see if that prevents the VMs
from shutting down unexpectedly.

Hopefully this works, and maybe this goes away with 7.3 when it comes, and
I can enable the setting again then. thx will
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread Corbin Hendrickson
You can read it in the bug thread, but I forgot to mention: if you put
mem_stats_period_seconds = 0 in your nova.conf under the libvirt section
and restart nova on the destination (although I'd say just do it on both),
it will no longer hit the bug. I tested this a couple of weeks back with
success.
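
(Concretely, the workaround looks like this — a sketch; the option is read by
nova-compute, and the service name below is the CentOS 7 one, which may differ
on other distros:)

# /etc/nova/nova.conf on the compute hosts
[libvirt]
# stop polling the virtio balloon for memory stats; the polling triggers the bug
mem_stats_period_seconds = 0

# then restart the compute service, e.g.
systemctl restart openstack-nova-compute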

Corbin Hendrickson
Endurance Cloud Development Lead - Manager
Cell: 801-400-0464

On Wed, Sep 28, 2016 at 9:34 AM, Corbin Hendrickson <
corbin.hendrick...@endurance.com> wrote:

> Unfortunately it affects virtually all of Red Hat's latest qemu-kvm
> packages. The bug was unintentionally introduced in response to
> CVE-2016-5403 - Qemu: virtio: unbounded memory allocation on host via
> guest leading to DoS.
>
> Late in the bug thread, they finally posted a new bug for the breaking
> of live migration: *Bug 1371943*
>  - RHSA-2016-1756 breaks migration of instances.
>
> Based on the posts I've been following, it's likely going to "hit the
> shelves" when RHEL 7.3 / CentOS 7.3 comes out. It does look like they
> are backporting it to all their versions of RHEL, so that's good.
>
> But yes, this does affect 2.3 as well.
>
> Corbin Hendrickson
> Endurance Cloud Development Lead - Manager
> Cell: 801-400-0464
>
> On Wed, Sep 28, 2016 at 9:13 AM, Van Leeuwen, Robert <
> rovanleeu...@ebay.com> wrote:
>
>> > There is a bug in the following:
>> >
>> > qemu-kvm-1.5.3-105.el7_2.7
>> > qemu-img-1.5.3-105.el7_2.7
>> > qemu-kvm-common-1.5.3-105.el7_2.7
>>
>> You might be better off using the RHEV qemu packages.
>> They are more recent (2.3) and have more features compiled into them.
>>
>> Cheers,
>>
>> Robert van Leeuwen
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread Corbin Hendrickson
Unfortunately it affects virtually all of Red Hat's latest qemu-kvm
packages. The bug was unintentionally introduced in response to
CVE-2016-5403 - Qemu: virtio: unbounded memory allocation on host via
guest leading to DoS.

Late in the bug thread, they finally posted a new bug for the breaking
of live migration: *Bug 1371943* - RHSA-2016-1756 breaks migration of
instances.

Based on the posts I've been following, it's likely going to "hit the
shelves" when RHEL 7.3 / CentOS 7.3 comes out. It does look like they
are backporting it to all their versions of RHEL, so that's good.

But yes, this does affect 2.3 as well.

Corbin Hendrickson
Endurance Cloud Development Lead - Manager
Cell: 801-400-0464

On Wed, Sep 28, 2016 at 9:13 AM, Van Leeuwen, Robert wrote:

> > There is a bug in the following:
> >
> > qemu-kvm-1.5.3-105.el7_2.7
> > qemu-img-1.5.3-105.el7_2.7
> > qemu-kvm-common-1.5.3-105.el7_2.7
>
> You might be better off using the RHEV qemu packages.
> They are more recent (2.3) and have more features compiled into them.
>
> Cheers,
> Robert van Leeuwen
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread Mike Smith
William -

That is probably what you have hit then.  There are several related bug tickets
out there; I don't have the specific one I found handy, but it's similar to the
one Marcus just posted.  I found it by Googling around for the error late one
night.

In our case, we had most of our hypervisors on the 1.5.3-105.el7_2.4 version 
but had a couple of new ones that ended up with 1.5.3-105.el7_2.7.  Migrating 
to those new hypervisors resulted in the VMs being shut down, but migrating 
from the new to the old worked fine.  So for my case I live-migrated all the 
VMs from the new compute hosts to the old ones, did the yum downgrade of those 
packages on the new hosts, and then rebooted those hosts for good measure.  
After that I no longer had the problem.

Good luck!

Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Sep 28, 2016, at 8:55 AM, William Josefsson wrote:

thanks Mike! yes, I checked and I've got the versions you mentioned installed:

yum list installed | grep "qemu-"
qemu-img.x86_64   10:1.5.3-105.el7_2.7 @updates
qemu-kvm.x86_64  10:1.5.3-105.el7_2.7 @updates
qemu-kvm-common.x86_64   10:1.5.3-105.el7_2.7 @updates


Is there any reference or bugzilla ticket for this issue, or how did you track 
it down?

Can you also please advise on the command "yum downgrade qemu-kvm
qemu-kvm-common qemu-img"? As I have running VMs in my environment, do I
need to restart libvirtd after this, and would it affect any of my instances?
thx will



On Wed, Sep 28, 2016 at 10:47 PM, Mike Smith wrote:
There is a bug in the following:

qemu-kvm-1.5.3-105.el7_2.7
qemu-img-1.5.3-105.el7_2.7
qemu-kvm-common-1.5.3-105.el7_2.7

Are you running these versions?  We encountered this same issue and fixed it by
running "yum downgrade qemu-kvm qemu-kvm-common qemu-img", which took us back to
the el7_2.4 packages.


Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Sep 28, 2016, at 8:30 AM, William Josefsson wrote:

Hi,

I have problems with nova live-migration for my CentOS7 and RHEL7 VMs.   The
live migrations work fine for Windows2012R2 and Ubuntu1404 VMs.

For CentOS7/RHEL-based images I get this error on the destination node in
nova-compute.log. Also, the console freezes for users logged in to the VM.



2016-09-28 10:49:24.101 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Resumed (Lifecycle Event)
2016-09-28 10:49:24.339 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Resumed (Lifecycle Event)
2016-09-28 10:49:25.866 353935 INFO nova.compute.manager [instance: cd0b605d] 
Post operation of migration started
2016-09-28 10:49:39.410 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Stopped (Lifecycle Event)
2016-09-28 10:49:39.532 353935 WARNING nova.compute.manager [instance: 
cd0b605d] Instance shutdown by itself. Calling the stop API. Current vm_state: 
active, current task_state: None, original DB power_state: 4, current VM 
power_state: 4
2016-09-28 10:49:39.668 353935 INFO nova.compute.manager [instance: cd0b605d] 
Instance is already powered off in the hypervisor when stop is called.
2016-09-28 10:49:39.736 353935 INFO nova.virt.libvirt.driver [instance: 
cd0b605d] Instance already shutdown.
2016-09-28 10:49:39.743 353935 INFO nova.virt.libvirt.driver [instance: 
cd0b605d] Instance destroyed successfully.


Eventually the VM ends up in SHUTDOWN state on the destination node.

I'm on CentOS7/Liberty, and my storage backend is Ceph (Hammer 0.94.9).

Please advise. thx will
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




CONFIDENTIALITY NOTICE: This message is intended only for the use and review of 
the individual or entity to which it is addressed and may contain information 
that is privileged and confidential. If the reader of this message is not the 
intended recipient, or the employee or agent responsible for delivering the 
message solely to the intended recipient, you are hereby notified that any 
dissemination, distribution or copying of this communication is strictly 
prohibited. If you have received this communication in error, please notify 
sender immediately by telephone or return email. Thank you.


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread Van Leeuwen, Robert
> There is a bug in the following:
>
> qemu-kvm-1.5.3-105.el7_2.7
> qemu-img-1.5.3-105.el7_2.7
> qemu-kvm-common-1.5.3-105.el7_2.7

You might be better off using the RHEV qemu packages.
They are more recent (2.3) and have more features compiled into them.

Cheers,
Robert van Leeuwen
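
(On CentOS, one way to get those newer packages — an assumption, not something
Robert specifies — is the Virt SIG qemu-kvm-ev repository:)

yum install centos-release-qemu-ev
yum install qemu-kvm-ev qemu-img-ev   # qemu 2.x builds that replace qemu-kvm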
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread William Josefsson
thanks Mike! yes, I checked and I've got the versions you mentioned
installed:

yum list installed | grep "qemu-"
qemu-img.x86_64   10:1.5.3-105.el7_2.7 @updates
qemu-kvm.x86_64  10:1.5.3-105.el7_2.7 @updates
qemu-kvm-common.x86_64   10:1.5.3-105.el7_2.7 @updates


Is there any reference or bugzilla ticket for this issue, or how did you
track it down?

Can you also please advise on the command "yum downgrade qemu-kvm
qemu-kvm-common qemu-img"? As I have running VMs in my environment, do
I need to restart libvirtd after this, and would it affect any of my
instances? thx will



On Wed, Sep 28, 2016 at 10:47 PM, Mike Smith  wrote:

> There is a bug in the following:
>
> qemu-kvm-1.5.3-105.el7_2.7
> qemu-img-1.5.3-105.el7_2.7
> qemu-kvm-common-1.5.3-105.el7_2.7
>
> Are you running these versions?  We encountered this same issue and fixed
> it by running "yum downgrade qemu-kvm qemu-kvm-common qemu-img", which took
> us back to the el7_2.4 packages.
>
>
> Mike Smith
> Lead Cloud Systems Architect
> Overstock.com 
>
>
>
> On Sep 28, 2016, at 8:30 AM, William Josefsson wrote:
>
> Hi,
>
> I have problems with nova live-migration for my CentOS7 and RHEL7 VMs.
> The live migrations work fine for Windows2012R2 and Ubuntu1404 VMs.
>
> For CentOS7/RHEL-based images I get this error on the destination node in
> nova-compute.log. Also, the console freezes for users logged in to the VM.
>
>
>
> 2016-09-28 10:49:24.101 353935 INFO nova.compute.manager [instance:
> cd0b605d] VM Resumed (Lifecycle Event)
> 2016-09-28 10:49:24.339 353935 INFO nova.compute.manager [instance:
> cd0b605d] VM Resumed (Lifecycle Event)
> 2016-09-28 10:49:25.866 353935 INFO nova.compute.manager [instance:
> cd0b605d] Post operation of migration started
> 2016-09-28 10:49:39.410 353935 INFO nova.compute.manager [instance:
> cd0b605d] VM Stopped (Lifecycle Event)
> *2016-09-28 10:49:39.532 353935 WARNING nova.compute.manager [instance:
> cd0b605d] Instance shutdown by itself. Calling the stop API. Current
> vm_state: active, current task_state: None, original DB power_state: 4,
> current VM power_state: 4*
> 2016-09-28 10:49:39.668 353935 INFO nova.compute.manager [instance:
> cd0b605d] Instance is already powered off in the hypervisor when stop is
> called.
> 2016-09-28 10:49:39.736 353935 INFO nova.virt.libvirt.driver [instance:
> cd0b605d] Instance already shutdown.
> 2016-09-28 10:49:39.743 353935 INFO nova.virt.libvirt.driver [instance:
> cd0b605d] Instance destroyed successfully.
>
>
> Eventually the VM ends up in SHUTDOWN state on the destination node.
>
> I'm on CentOS7/Liberty, and my storage backend is Ceph (Hammer 0.94.9).
>
> Please advise. thx will
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> --
>
> CONFIDENTIALITY NOTICE: This message is intended only for the use and
> review of the individual or entity to which it is addressed and may contain
> information that is privileged and confidential. If the reader of this
> message is not the intended recipient, or the employee or agent responsible
> for delivering the message solely to the intended recipient, you are hereby
> notified that any dissemination, distribution or copying of this
> communication is strictly prohibited. If you have received this
> communication in error, please notify sender immediately by telephone or
> return email. Thank you.
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread Marcus Furlong
What's your qemu version, and what does the libvirt log on the destination
say?

You may have hit this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1371943

There are some workarounds listed there, and downgrading also fixes it.

Marcus.
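
(To gather what Marcus asks for — a sketch; the log path is libvirt's default
on CentOS 7, and the instance name is an example:)

rpm -q qemu-kvm qemu-img qemu-kvm-common
tail -n 100 /var/log/libvirt/qemu/instance-0000abcd.log   # on the destination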

On 29 Sep 2016 00:35, "William Josefsson" wrote:

> Hi,
>
> I have problems with nova live-migration for my CentOS7 and RHEL7 VMs.
> The live migrations work fine for Windows2012R2 and Ubuntu1404 VMs.
>
> For CentOS7/RHEL-based images I get this error on the destination node in
> nova-compute.log. Also, the console freezes for users logged in to the VM.
>
>
>
> 2016-09-28 10:49:24.101 353935 INFO nova.compute.manager [instance:
> cd0b605d] VM Resumed (Lifecycle Event)
> 2016-09-28 10:49:24.339 353935 INFO nova.compute.manager [instance:
> cd0b605d] VM Resumed (Lifecycle Event)
> 2016-09-28 10:49:25.866 353935 INFO nova.compute.manager [instance:
> cd0b605d] Post operation of migration started
> 2016-09-28 10:49:39.410 353935 INFO nova.compute.manager [instance:
> cd0b605d] VM Stopped (Lifecycle Event)
> *2016-09-28 10:49:39.532 353935 WARNING nova.compute.manager [instance:
> cd0b605d] Instance shutdown by itself. Calling the stop API. Current
> vm_state: active, current task_state: None, original DB power_state: 4,
> current VM power_state: 4*
> 2016-09-28 10:49:39.668 353935 INFO nova.compute.manager [instance:
> cd0b605d] Instance is already powered off in the hypervisor when stop is
> called.
> 2016-09-28 10:49:39.736 353935 INFO nova.virt.libvirt.driver [instance:
> cd0b605d] Instance already shutdown.
> 2016-09-28 10:49:39.743 353935 INFO nova.virt.libvirt.driver [instance:
> cd0b605d] Instance destroyed successfully.
>
>
> Eventually the VM ends up in SHUTDOWN state on the destination node.
>
> I'm on CentOS7/Liberty, and my storage backend is Ceph (Hammer 0.94.9).
>
> Please advise. thx will
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread Mike Smith
There is a bug in the following:

qemu-kvm-1.5.3-105.el7_2.7
qemu-img-1.5.3-105.el7_2.7
qemu-kvm-common-1.5.3-105.el7_2.7

Are you running these versions?  We encountered this same issue and fixed it by
running "yum downgrade qemu-kvm qemu-kvm-common qemu-img", which took us back to
the el7_2.4 packages.


Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Sep 28, 2016, at 8:30 AM, William Josefsson wrote:

Hi,

I have problems with nova live-migration for my CentOS7 and RHEL7 VMs.   The
live migrations work fine for Windows2012R2 and Ubuntu1404 VMs.

For CentOS7/RHEL-based images I get this error on the destination node in
nova-compute.log. Also, the console freezes for users logged in to the VM.



2016-09-28 10:49:24.101 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Resumed (Lifecycle Event)
2016-09-28 10:49:24.339 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Resumed (Lifecycle Event)
2016-09-28 10:49:25.866 353935 INFO nova.compute.manager [instance: cd0b605d] 
Post operation of migration started
2016-09-28 10:49:39.410 353935 INFO nova.compute.manager [instance: cd0b605d] 
VM Stopped (Lifecycle Event)
2016-09-28 10:49:39.532 353935 WARNING nova.compute.manager [instance: 
cd0b605d] Instance shutdown by itself. Calling the stop API. Current vm_state: 
active, current task_state: None, original DB power_state: 4, current VM 
power_state: 4
2016-09-28 10:49:39.668 353935 INFO nova.compute.manager [instance: cd0b605d] 
Instance is already powered off in the hypervisor when stop is called.
2016-09-28 10:49:39.736 353935 INFO nova.virt.libvirt.driver [instance: 
cd0b605d] Instance already shutdown.
2016-09-28 10:49:39.743 353935 INFO nova.virt.libvirt.driver [instance: 
cd0b605d] Instance destroyed successfully.


Eventually the VM ends up in SHUTDOWN state on the destination node.

I'm on CentOS7/Liberty, and my storage backend is Ceph (Hammer 0.94.9).

Please advise. thx will
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] L3 HA Router

2016-09-28 Thread Davíð Örn Jóhannsson
Thank you, this was very helpful

From: Tobias Urdin
Date: Wednesday 28 September 2016 at 12:43
To: David Orn Johannsson
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] L3 HA Router

Hello,
Please note that this is not recommended, but it's currently the least invasive
way (i.e. not killing the machine or the keepalived process).

* Get the UUID of the L3 router.
* Go to the node that hosts your L3 routers (running the neutron-l3-agent
service) and show all interfaces in that namespace.
* Shut down the ha-xxx interface in that router's namespace.

Example:
List all interfaces in this router namespace.
$ ip netns exec qrouter-ebb5939f-b8b2-4351-8995-27e4ccf9ebe2 ifconfig -a

Then just kill the "ha-" interface with:
$ ip netns exec qrouter-ebb5939f-b8b2-4351-8995-27e4ccf9ebe2 ifconfig ha-xxx down

You can find the current master router using
$ neutron l3-agent-list-hosting-router ebb5939f-b8b2-4351-8995-27e4ccf9ebe2

Read up more on how these things work; you will need it.
Google some stuff and you'll get something like this:
https://developer.rackspace.com/blog/neutron-networking-l3-agent/

There are a lot of resources at your disposal out there.
Good luck!
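
(If the ha- name isn't obvious from ifconfig -a, one way to pick it out —
a sketch, using the same router UUID as above:)

$ ip netns exec qrouter-ebb5939f-b8b2-4351-8995-27e4ccf9ebe2 ip -o link show | grep ha-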

On 09/28/2016 01:58 PM, Davíð Örn Jóhannsson wrote:
I figured this might be the case, but can you tell me how I can locate the
interface for the router namespace? If I do an ifconfig -a on the network node,
I only see the br-* interfaces and the physical ones.

I assume I'd need to take down one of the interfaces that keepalived is
responsible for, but I'm not sure how to find them and map each interface to
its router in order to choose the right one to take down.

From: Tobias Urdin
Date: Wednesday 28 September 2016 at 11:11
To: David Orn Johannsson
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] L3 HA Router

Hello,

Some work was done in that area; however, it was never completed.
https://bugs.launchpad.net/neutron/+bug/1370033

You can issue an ugly failover by taking down the "ha" interface in the router 
namespace of the master with ifconfig down. But it's not pretty.

Best regards

On 09/28/2016 11:40 AM, Davíð Örn Jóhannsson wrote:
OS: Ubuntu 14.04
OpenStack Liberty

Is it possible to perform a manual failover of HA routers between master and
backup by any sensible means?


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Nova live-migration failing for RHEL7/CentOS7 VMs

2016-09-28 Thread William Josefsson
Hi,

I have problems with nova live-migration for my CentOS7 and RHEL7 VMs.
The live migrations work fine for Windows2012R2 and Ubuntu1404 VMs.

For CentOS7/RHEL-based images I get this error on the destination node in
nova-compute.log. Also, the console freezes for users logged in to the VM.



2016-09-28 10:49:24.101 353935 INFO nova.compute.manager [instance:
cd0b605d] VM Resumed (Lifecycle Event)
2016-09-28 10:49:24.339 353935 INFO nova.compute.manager [instance:
cd0b605d] VM Resumed (Lifecycle Event)
2016-09-28 10:49:25.866 353935 INFO nova.compute.manager [instance:
cd0b605d] Post operation of migration started
2016-09-28 10:49:39.410 353935 INFO nova.compute.manager [instance:
cd0b605d] VM Stopped (Lifecycle Event)
*2016-09-28 10:49:39.532 353935 WARNING nova.compute.manager [instance:
cd0b605d] Instance shutdown by itself. Calling the stop API. Current
vm_state: active, current task_state: None, original DB power_state: 4,
current VM power_state: 4*
2016-09-28 10:49:39.668 353935 INFO nova.compute.manager [instance:
cd0b605d] Instance is already powered off in the hypervisor when stop is
called.
2016-09-28 10:49:39.736 353935 INFO nova.virt.libvirt.driver [instance:
cd0b605d] Instance already shutdown.
2016-09-28 10:49:39.743 353935 INFO nova.virt.libvirt.driver [instance:
cd0b605d] Instance destroyed successfully.


Eventually the VM ends up in SHUTDOWN state on the destination node.

I'm on CentOS7/Liberty, and my storage backend is Ceph (Hammer 0.94.9).

Please advise. thx will
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] L3 HA Router

2016-09-28 Thread Tobias Urdin
Hello,
Please note that this is not recommended, but it's currently the least invasive
way (i.e. not killing the machine or the keepalived process).

* Get the UUID of the L3 router.
* Go to the node that hosts your L3 routers (running the neutron-l3-agent
service) and show all interfaces in that namespace.
* Shut down the ha-xxx interface in that router's namespace.

Example:
List all interfaces in this router namespace.
$ ip netns exec qrouter-ebb5939f-b8b2-4351-8995-27e4ccf9ebe2 ifconfig -a

Then just kill the "ha-" interface with:
$ ip netns exec qrouter-ebb5939f-b8b2-4351-8995-27e4ccf9ebe2 ifconfig ha-xxx down

You can find the current master router using
$ neutron l3-agent-list-hosting-router ebb5939f-b8b2-4351-8995-27e4ccf9ebe2

Read up more on how these things work; you will need it.
Google some stuff and you'll get something like this:
https://developer.rackspace.com/blog/neutron-networking-l3-agent/

There are a lot of resources at your disposal out there.
Good luck!

On 09/28/2016 01:58 PM, Davíð Örn Jóhannsson wrote:
I figured this might be the case, but can you tell me how I can locate the
interface for the router namespace? If I do an ifconfig -a on the network node,
I only see the br-* interfaces and the physical ones.

I assume I'd need to take down one of the interfaces that keepalived is
responsible for, but I'm not sure how to find them and map each interface to
its router in order to choose the right one to take down.

From: Tobias Urdin
Date: Wednesday 28 September 2016 at 11:11
To: David Orn Johannsson
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] L3 HA Router

Hello,

Some work was done in that area; however, it was never completed.
https://bugs.launchpad.net/neutron/+bug/1370033

You can issue an ugly failover by taking down the "ha" interface in the router 
namespace of the master with ifconfig down. But it's not pretty.

Best regards

On 09/28/2016 11:40 AM, Davíð Örn Jóhannsson wrote:
OS: Ubuntu 14.04
OpenStack Liberty

Is it possible to perform a manual failover of HA routers between master and
backup by any sensible means?


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] L3 HA Router

2016-09-28 Thread Davíð Örn Jóhannsson
I figured this might be the case, but can you tell me how I can locate the
interface for the router namespace? If I do an ifconfig -a on the network node,
I only see the br-* interfaces and the physical ones.

I assume I'd need to take down one of the interfaces that keepalived is
responsible for, but I'm not sure how to find them and map each interface to
its router in order to choose the right one to take down.

From: Tobias Urdin
Date: Wednesday 28 September 2016 at 11:11
To: David Orn Johannsson
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] L3 HA Router

Hello,

Some work was done in that area; however, it was never completed.
https://bugs.launchpad.net/neutron/+bug/1370033

You can issue an ugly failover by taking down the "ha" interface in the router 
namespace of the master with ifconfig down. But it's not pretty.

Best regards

On 09/28/2016 11:40 AM, Davíð Örn Jóhannsson wrote:
OS: Ubuntu 14.04
OpenStack Liberty

Is it possible to perform a manual failover of HA routers between master and
backup by any sensible means?

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] L3 HA Router

2016-09-28 Thread Tobias Urdin
Hello,

Some work was done in that area; however, it was never completed.
https://bugs.launchpad.net/neutron/+bug/1370033

You can issue an ugly failover by taking down the "ha" interface in the router 
namespace of the master with ifconfig down. But it's not pretty.

Best regards

On 09/28/2016 11:40 AM, Davíð Örn Jóhannsson wrote:
OS: Ubuntu 14.04
OpenStack Liberty

Is it possible to perform a manual failover of HA routers between master and
backup by any sensible means?

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] L3 HA Router

2016-09-28 Thread Davíð Örn Jóhannsson
OS: Ubuntu 14.04
OpenStack Liberty

Is it possible to perform a manual failover of HA routers between master and
backup by any sensible means?
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators