[openstack-dev] [Trove] No Weekly Trove Meeting on Wednesday, Dec 31st

2014-12-31 Thread Nikhil Manchanda
Hey folks:

Just a quick reminder that there will be no Weekly Trove Meeting
on Wednesday, Dec 31st. We will resume the weekly meeting next
year on January 7th.

See you in the new year!

Thanks,
Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Meeting on 01/06 is canceled

2014-12-31 Thread Serg Melikyan
Hi folks,

We agreed to cancel the next meeting, scheduled for 01/06, due to the
extended holidays in Russia. The next meeting is scheduled for 01/13.

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836


Re: [openstack-dev] [Ironic] thoughts on the midcycle

2014-12-31 Thread Lucas Alvares Gomes
Hi

I probably won't be able to make it to the SF Bay Area, but I think it's a
good idea for those who can't go to Grenoble.

Lucas

On Tue, Dec 30, 2014 at 9:58 PM, Clif Houck m...@clifhouck.com wrote:

 I'll attend. Whether it's in-person or remote is up in the air though.

 Clif

 On 12/30/2014 10:51 AM, Jay Faulkner wrote:
 
  On Dec 29, 2014, at 2:45 PM, Devananda van der Veen devananda@gmail.com wrote:
 
  That being said, I'd also like to put forth this idea: if we had a
  second gathering (with the same focus on writing code) the following
  week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend?
  Would we be able to get the other half of the core team together and
  get more work done? Is this a good idea?
 
 
  +1 I’d be willing and able to attend this.
 
 
  -
  Jay Faulkner
 
 


[openstack-dev] [neutron] Need help getting DevStack setup working for VPN testing

2014-12-31 Thread Paul Michali (pcm)
I’ve been playing a bit with trying to get VPNaaS working post-repo split, and 
haven’t been successful. I’m trying it a few ways with DevStack, and I’m not 
sure whether I have a config error, setup issue, or there is something due to 
the split.

In the past (and it’s been a few months since I verified VPN operation), I used 
two bare-metal machines and an external switch connecting them, with a DevStack 
cloud running on each. That configuration is currently set up for a vendor VPN 
solution, so I wanted to try different methods to test the reference VPN 
implementation. I’ve got two ideas for doing this:

A) Run DevStack and create two routers with a shared “public” network, and two 
private networks, setting up a VPN connection between the private nets.
B) Run two DevStack instances (on two VMs) and try to setup a provider network 
between them.

I’m starting with A (though I did try B quickly, but it didn’t work), and I 
spun up the stack, added a second router (all under the same tenant), created 
another private network, and booted a Cirros VM in each private net.

Before even trying VPN, I checked pings. From the first private net VM 
(10.1.0.4), I could ping on the public net, including the public IP of the 
second private net’s router. I cannot ping the VM from the host. That all seems 
expected to me.

What seems wrong is the other VM (this is on the post-stack net I created). 
Like the other VM, I can ping public net IPs. However, I can also ping the 
private net address of the first network’s router (10.1.0.1)! Shouldn’t that 
have failed (at least, that was what I was expecting)? I can’t ping the VM on 
that side, though. Another curiosity is that the VM got the second IP on the 
subnet (10.2.0.2), unlike the other private net, where DHCP and a compute probe 
got the 2nd and 3rd IPs. DHCP is enabled on this private network.

When I tried VPN, both connections show as DOWN, and all I see are phase 1 
ident packets. I cannot ping from VM to VM. I don’t see any logging for the 
OpenSwan processes, so I’m not sure how to debug. Maybe I can try some ipsec 
show command?

I’m not too sure what is wrong with this setup.

For comparison, I decided to do the same thing using stable/juno. So I fired up 
a VM, cloned DevStack with stable/juno, and stacked. This time, things are even 
worse! When I try to boot a VM and then check the status, the VM is in PAUSED 
power state. I can’t seem to unpause it (nor do I know why it is in this 
state). I verified this with Cirros 3.3, 3.2, and Ubuntu cloud images:

+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | juno                                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | juno                                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-0001                                                  |
| OS-EXT-STS:power_state               | 3                                                              |
| OS-EXT-STS:task_state                | -                                                              |
| OS-EXT-STS:vm_state                  | active                                                         |
| OS-SRV-USG:launched_at               | 2014-12-31T15:15:33.00                                         |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| config_drive                         |                                                                |
| created                              | 2014-12-31T15:15:24Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f       |
| id                                   | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6                           |
| image                                | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
[openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2014-12-31 Thread Paul Michali (pcm)
Not sure if I’m going crazy or what. I’m using DevStack and, after stacking, I 
tried booting Cirros 3.2, Cirros 3.3, and Ubuntu cloud 14.04 images. Each time, 
the instance ends up in the PAUSED power state:

ubuntu@juno:/opt/stack/neutron$ nova show peter
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | juno                                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | juno                                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-0001                                                  |
| OS-EXT-STS:power_state               | 3                                                              |
| OS-EXT-STS:task_state                | -                                                              |
| OS-EXT-STS:vm_state                  | active                                                         |
| OS-SRV-USG:launched_at               | 2014-12-31T15:15:33.00                                         |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| config_drive                         |                                                                |
| created                              | 2014-12-31T15:15:24Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f       |
| id                                   | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6                           |
| image                                | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name                                 | peter                                                          |
| os-extended-volumes:volumes_attached | []                                                             |
| private network                      | 10.0.0.4                                                       |
| progress                             | 0                                                              |
| security_groups                      | default                                                        |
| status                               | ACTIVE                                                         |
| tenant_id                            | 7afb5bc1d88d462c8d57178437d3c277                               |
| updated                              | 2014-12-31T15:15:34Z                                           |
| user_id                              | 4ff18bdbeb4d436ea4ff1bcd29e269a9                               |
+--------------------------------------+----------------------------------------------------------------+
ubuntu@juno:/opt/stack/neutron$ nova list
+--------------------------------------+-------+--------+------------+-------------+------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks         |
+--------------------------------------+-------+--------+------------+-------------+------------------+
| ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | -          | Paused      | private=10.0.0.4 |
+--------------------------------------+-------+--------+------------+-------------+------------------+


I don’t see this with the latest Kilo images. Any idea what I may be doing 
wrong, or whether there is an issue (I didn’t see anything in a Google search)?

IMAGE_ID=`nova image-list | grep 'cloudimg-amd64 ' | cut -d' ' -f 2`
PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' '`

nova boot peter --flavor 3 --image $IMAGE_ID --user-data \
    ~/devstack/user_data.txt --nic net-id=$PRIVATE_NET
nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=$PRIVATE_NET \
    paul
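(For reference: OS-EXT-STS:power_state is a raw integer, and 3 is PAUSED. A tiny self-contained sketch of the mapping — I'm assuming the values in nova's compute/power_state module are unchanged on these branches:)

```python
# Numeric power states as reported in OS-EXT-STS:power_state.
# Assumption: these mirror nova's compute/power_state values on Juno/Kilo.
POWER_STATES = {
    0: 'NOSTATE',
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}


def power_state_name(code):
    """Translate a raw power_state integer into a readable name."""
    return POWER_STATES.get(code, 'UNKNOWN(%d)' % code)


print(power_state_name(3))  # -> PAUSED (the state shown above)
```

If the instance really is paused at the hypervisor level, `nova unpause <server>` (or `virsh resume` on the compute host for libvirt/KVM) might recover it, though that wouldn't explain why it pauses right after boot.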

Thanks.


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83







Re: [openstack-dev] [Ironic] thoughts on the midcycle

2014-12-31 Thread Joshua Harlow
I'm not a core, but I'll most definitely show up to the SF Bay Area one, 
since it's close by for me. Elsewhere would be unlikely, but since it's 
nearby it would be very easy to just go to wherever it is in the area 
and help out...


My 2 cents.

-Josh

Devananda van der Veen wrote:

I'm sending the details of the midcycle in a separate email. Before you
reply that you won't be able to make it, I'd like to share some thoughts
/ concerns.

In the last few weeks, several people who I previously thought would
attend told me that they can't. By my informal count, it looks like we
will have at most 5 of our 10 core reviewers in attendance. I don't
think we should cancel based on that, but it does mean that we need to
set our expectations accordingly.

Assuming that we will be lacking about half the core team, I think it
will be more practical as a focused sprint, rather than a planning &
design meeting. While that's a break from precedent, planning should be
happening via the spec review process *anyway*. Also, we already have a
larger backlog of specs and work than we had this time last cycle, but
with the same size review team. Rather than adding to our backlog, I
would like us to use this gathering to burn through some specs and land
some code.

That being said, I'd also like to put forth this idea: if we had a
second gathering (with the same focus on writing code) the following
week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend?
Would we be able to get the other half of the core team together and
get more work done? Is this a good idea?

OK. That's enough of my musing for now...

Once again, if you will be attending the midcycle sprint in Grenoble the
week of Feb 3rd, please sign up HERE
https://www.eventbrite.com/e/openstack-ironic-kilo-midcycle-sprint-in-grenoble-tickets-15082886319.



Regards,
Devananda



[openstack-dev] [Containers][Magnum] Questions on dbapi

2014-12-31 Thread Hongbin Lu
Hi all,

I am writing tests for the Magnum dbapi. I have several questions about its
implementation and would appreciate it if someone could comment on them.

* Exceptions: The exceptions below were ported from Ironic but don't seem
to make sense in Magnum. I think we should purge them from the code, except
InstanceAssociated and NodeAssociated. Does everyone agree?

class InstanceAssociated(Conflict):
    message = _("Instance %(instance_uuid)s is already associated with a node, "
                "it cannot be associated with this other node %(node)s")

class BayAssociated(InvalidState):
    message = _("Bay %(bay)s is associated with instance %(instance)s.")

class ContainerAssociated(InvalidState):
    message = _("Container %(container)s is associated with "
                "instance %(instance)s.")

class PodAssociated(InvalidState):
    message = _("Pod %(pod)s is associated with instance %(instance)s.")

class ServiceAssociated(InvalidState):
    message = _("Service %(service)s is associated with "
                "instance %(instance)s.")

NodeAssociated: it is used but its definition is missing

BayModelAssociated: it is used but its definition is missing

* APIs: the APIs below seem to be ported from Ironic Node, but it seems we
won't need them all. Again, I think we should purge the ones that don't
make sense. In addition, these APIs are defined without ever being called.
Does it make sense to remove them for now, and add them back one by one
later when they are actually needed?

def reserve_bay(self, tag, bay_id):
    """Reserve a bay."""

def release_bay(self, tag, bay_id):
    """Release the reservation on a bay."""

def reserve_baymodel(self, tag, baymodel_id):
    """Reserve a baymodel."""

def release_baymodel(self, tag, baymodel_id):
    """Release the reservation on a baymodel."""

def reserve_container(self, tag, container_id):
    """Reserve a container."""

def reserve_node(self, tag, node_id):
    """Reserve a node."""

def release_node(self, tag, node_id):
    """Release the reservation on a node."""

def reserve_pod(self, tag, pod_id):
    """Reserve a pod."""

def release_pod(self, tag, pod_id):
    """Release the reservation on a pod."""

def reserve_service(self, tag, service_id):
    """Reserve a service."""

def release_service(self, tag, service_id):
    """Release the reservation on a service."""
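Side note on testing what remains: these exception classes are just message templates, so they can be unit-tested without any database. A minimal self-contained sketch (the MagnumException/Conflict bases here are stand-ins for illustration, not Magnum's actual base classes):

```python
class MagnumException(Exception):
    """Stand-in base exception: interpolates kwargs into the message template."""
    message = "An unknown exception occurred."

    def __init__(self, **kwargs):
        super(MagnumException, self).__init__(self.message % kwargs)


class Conflict(MagnumException):
    pass


class InstanceAssociated(Conflict):
    message = ("Instance %(instance_uuid)s is already associated with a node, "
               "it cannot be associated with this other node %(node)s")


# A test simply checks the interpolated message:
exc = InstanceAssociated(instance_uuid='abc123', node='node-1')
print(str(exc))
```

A test like this also catches the missing-definition problem (NodeAssociated, BayModelAssociated) early, since referencing an undefined class fails immediately.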


Re: [openstack-dev] [Ironic] thoughts on the midcycle

2014-12-31 Thread David Shrewsbury

 On Dec 29, 2014, at 5:45 PM, Devananda van der Veen devananda@gmail.com 
 wrote:
 
 [snip]
 That being said, I'd also like to put forth this idea: if we had a second 
 gathering (with the same focus on writing code) the following week (let's 
 say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able to 
 get the other half of the core team together and get more work done? Is 
 this a good idea?

I could (and likely would) attend the Bay area one.

-Dave




Re: [openstack-dev] [neutron] Need help getting DevStack setup working for VPN testing

2014-12-31 Thread Paul Michali (pcm)
Just more data…

I consistently see that on the private subnet, the VM can only access the 
router (as expected), but on the privateB subnet, the VM can access the private 
interface of router1 on the private subnet. From the router’s namespace, I 
cannot ping the local VM (why not?). Oddly, I can ping router1’s private IP 
from the router2 namespace!

I tried these commands to create security group rules (are they wrong?):

# There are two default security groups created by DevStack
group=`neutron security-group-list | grep default | cut -f 2 -d' ' | head -1`
neutron security-group-rule-create --protocol ICMP $group
neutron security-group-rule-create --protocol tcp --port-range-min 22 \
    --port-range-max 22 $group
group=`neutron security-group-list | grep default | cut -f 2 -d' ' | tail -1`
neutron security-group-rule-create --protocol ICMP $group
neutron security-group-rule-create --protocol tcp --port-range-min 22 \
    --port-range-max 22 $group

The only change after these commands is that the VM on the privateB subnet can 
now ping the VM on the private subnet, but not vice versa. From the router1 
namespace, I can then access local VMs. From the router2 namespace, I can 
access local VMs and VMs on the private subnet (all access).

It seems like I have some issue with security groups, and I need to square 
that away before I can test VPN.

Am I creating the security group rules correctly?
My goal is that the private nets can access the public net, but not each other 
(until a VPN connection is established).

Lastly, in this latest try, I set OVS_PHYSICAL_BRIDGE=br-ex. In earlier runs 
without that, there were QVO interfaces, but no QVB or QBR interfaces at all. 
It didn’t seem to change connectivity, however.

Ideas?

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 31, 2014, at 10:33 AM, Paul Michali (pcm) p...@cisco.com wrote:

 [snip]
Re: [openstack-dev] [Containers][Magnum] Questions on dbapi

2014-12-31 Thread Steven Dake

On 12/31/2014 10:54 AM, Hongbin Lu wrote:

Hi all,

I am writing tests for the Magnum dbapi. I have several questions 
about its implementation and appreciate if someone could comment on them.


* Exceptions: The exceptions below were ported from Ironic but don't 
seem to make sense in Magnum. I think we should purge them from the 
code except InstanceAssociated and NodeAssociated. Do everyone agree?



Hongbin,

Agree we should remove any exceptions that were from Ironic that don't 
make any sense in Magnum.


The only reason I copied a lot of the Ironic code base was to pull in the 
versioned-objects support, which should be heading to oslo at some point.



[snip]

* APIs: the APIs below seem to be ported from Ironic Node, but it 
seems we won't need them all. Again, I think we should purge the ones 
that don't make sense. In addition, these APIs are defined without 
ever being called. Does it make sense to remove them for now, and add 
them back one by one later when they are actually needed?



Agree they should be removed now and added as needed later.


[snip]




[openstack-dev] [Ironic] [Agent] Breaking HardwareManager API Change proposed

2014-12-31 Thread Jay Faulkner
Hi all,

I proposed https://review.openstack.org/#/c/143193 to ironic-python-agent, in 
an attempt to make Hardware Manager loading more sane. As it works today, the 
most specific hardware manager is the only one chosen. This means in order to 
use a mix of hardware managers, you have to compose a custom interface. This is 
not the way I originally thought it worked, and not the way Josh and I 
presented it at the summit[1].

This change makes it so we will try each method, in priority order (from most 
specific to least specific hardware manager). If the method exists and doesn’t 
throw NotImplementedError, it will be allowed to complete and errors bubble up. 
If an AttributeError or NotImplementedError is thrown, the next most generic 
method is called until all methods have been attempted (in which case we fail) 
or a method does not raise the exceptions above.
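For anyone skimming, the proposed dispatch behaves roughly like this sketch (names and structure are illustrative only; see the review for the actual implementation):

```python
class GenericHardwareManager(object):
    """Least-specific fallback manager (illustrative stand-in)."""
    def erase_block_device(self, device):
        return 'generic erase of %s' % device


class VendorHardwareManager(object):
    """More specific manager: defers methods it doesn't implement."""
    def erase_block_device(self, device):
        raise NotImplementedError()  # fall through to a more generic manager


def dispatch_to_managers(managers, method, *args, **kwargs):
    """Try each manager from most to least specific.

    AttributeError (method missing) and NotImplementedError fall through
    to the next manager; any other error bubbles up to the caller.
    """
    for manager in managers:  # assumed pre-sorted, most specific first
        try:
            return getattr(manager, method)(*args, **kwargs)
        except (AttributeError, NotImplementedError):
            continue
    raise RuntimeError('no manager implements %s' % method)


managers = [VendorHardwareManager(), GenericHardwareManager()]
print(dispatch_to_managers(managers, 'erase_block_device', '/dev/sda'))
# -> generic erase of /dev/sda
```

The key behavioral difference from today is that the vendor manager no longer has to implement (or proxy) every method; it only overrides what it supports.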

The downside is that this will change behavior for anyone using hardware 
managers downstream. As of today, the only hardware manager I know of external 
to Ironic is the one we use at Rackspace for OnMetal[2]. I’m sending this email 
to check whether anyone objects to the interface changing in this way, and 
generally to ask for comment.

Thanks,
Jay Faulkner

1: https://www.youtube.com/watch?v=2Oi2T2pSGDU
2: https://github.com/rackerlabs/onmetal-ironic-hardware-manager


Re: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?

2014-12-31 Thread Chen CH Ji
Hi,

Sorry if I didn't understand it clearly, but it looks to me like the
hypervisor itself hosts the instances, so it should have an IP (as with
Linux hosting KVM instances: Linux is the hypervisor, the PC is the host),
while the host is the physical node used only by the 'hypervisor' concept.
So I think maybe we don't need an IP for the 'host'?
Thanks a lot.

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Lingxian Kong anlin.k...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   12/31/2014 07:22 AM
Subject:Re: [openstack-dev] [Nova] should 'ip address' be retrived when
decribe host?



Thanks Kevin for your clarification, which further affirms my belief
that ip address should be included in the host info.

I will contact Jay Pipes on IRC, to see what can I help towards this
effort, soon after the New Year's Day in China. :)

2014-12-31 0:34 GMT+08:00 Kevin L. Mitchell kevin.mitch...@rackspace.com:
 On Tue, 2014-12-30 at 14:52 +0800, Lingxian Kong wrote:
 Just as Jay Lau said, 'nova hypervisor-show hypervisor_id'
 indeed returns the host IP address, and more information is included
 there than in 'nova host-describe hostname'. I feel a little
 confused about 'host' and 'hypervisor': what's the difference
 between them? For a cloud operator, maybe 'host' is more useful and
 intuitive for management than 'hypervisor'. From the implementation
 perspective, both the 'compute_nodes' and 'services' database tables are
 used for them. Should they be combined for more common use cases?

 Well, the host and the hypervisor are conceptually distinct objects.
 The hypervisor is, obviously, the thing on which all the VMs run.  The
 host, though, is the node running the corresponding nova-compute
 service, which may be separate from the hypervisor.  For instance, on
 Xen-based setups, the host runs in a VM on the hypervisor.  There has
 also been discussion of allowing one host to be responsible for multiple
 hypervisors, which would be useful for providers with large numbers of
 hypervisors.
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace





--
Regards!
---
Lingxian Kong



Re: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?

2014-12-31 Thread Kevin L. Mitchell
On Wed, 2014-12-31 at 20:56 +0100, Chen CH Ji wrote:
   Sorry If I didn't understand clearly about it , looks to
 me the hypervisor itself hosts the instances and it should have a IP
 with it (like Linux host KVM instances, Linux is the hypervisor, the
 PC is the host)
   while the host is physical node and only to be used by
 'hypervisor' concept ,so I think maybe we don't need ip for the
 'host' ?  thanks a lot

The hypervisor hosts the VMs, yes, but the component that sits between
the hypervisor and the rest of nova—that is, nova-compute—does not
necessarily reside on the hypervisor.  It is the nova-compute node
(which may be either a VM or a physical host) that is referred to by the
nova term host.  For KVM, I believe the host is often the same as the
hypervisor, meaning that nova-compute runs directly on the hypervisor…
but this is not necessarily the case for all virt drivers.  For example,
the host for Xen-based installations is often a separate VM on the same
hypervisor, which would have its own distinct IP address.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?

2014-12-31 Thread Chen CH Ji
OK, that makes sense to me, thanks a lot.

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Kevin L. Mitchell kevin.mitch...@rackspace.com
To: openstack-dev@lists.openstack.org
Date:   12/31/2014 09:37 PM
Subject:Re: [openstack-dev] [Nova] should 'ip address' be retrived when
decribe host?



[snip]


Re: [openstack-dev] [Containers][Magnum] Questions on dbapi

2014-12-31 Thread Adrian Otto
I would welcome any patches to true this code up to be more appropriate for our 
needs. We might as well trim cruft out now if we notice it. Our milestone-2 
will add a lot of tests, so it would be great to get a clean start.

Adrian


 Original message 
From: Steven Dake
Date:12/31/2014 11:46 AM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Containers][Magnum] Questions on dbapi

On 12/31/2014 10:54 AM, Hongbin Lu wrote:
Hi all,

I am writing tests for the Magnum dbapi. I have several questions about its 
implementation and appreciate if someone could comment on them.

* Exceptions: The exceptions below were ported from Ironic but don't seem to 
make sense in Magnum. I think we should purge them from the code except 
InstanceAssociated and NodeAssociated. Do everyone agree?

Hongbin,

Agree we should remove any exceptions that were from Ironic that don't make any 
sense in Magnum.

The only reason I copied alot of Ironic code base was to pull in the versioned 
objects support which should be heading to oslo at some point.

class InstanceAssociated(Conflict):
    message = _("Instance %(instance_uuid)s is already associated with a "
                "node, it cannot be associated with this other node "
                "%(node)s")

class BayAssociated(InvalidState):
    message = _("Bay %(bay)s is associated with instance %(instance)s.")

class ContainerAssociated(InvalidState):
    message = _("Container %(container)s is associated with "
                "instance %(instance)s.")

class PodAssociated(InvalidState):
    message = _("Pod %(pod)s is associated with instance %(instance)s.")

class ServiceAssociated(InvalidState):
    message = _("Service %(service)s is associated with "
                "instance %(instance)s.")

NodeAssociated: it is used but definition missing

BayModelAssociated: it is used but definition missing
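For reference, the two missing definitions could plausibly follow the same pattern as the exceptions quoted above. Here is a minimal self-contained sketch; the `MagnumException`/`Conflict` bases and the `_()` marker are stand-ins for this example (in the real tree they live in `magnum.common.exception`), and the exact base class and message wording for these two are assumptions, not code from the Magnum repo:

```python
def _(msg):
    # Stand-in for the oslo i18n translation marker used in Magnum.
    return msg


class MagnumException(Exception):
    """Simplified stand-in for Magnum's base exception class."""
    message = _("An unknown exception occurred.")

    def __init__(self, **kwargs):
        # Format the class-level message template with keyword args,
        # mirroring how the real base exception builds its string.
        super().__init__(self.message % kwargs)


class Conflict(MagnumException):
    pass


# Sketches of the two referenced-but-undefined exceptions, modeled on
# the InstanceAssociated pattern above (base class chosen by analogy
# with Ironic's NodeAssociated, which is a Conflict).
class NodeAssociated(Conflict):
    message = _("Node %(node)s is associated with instance %(instance)s.")


class BayModelAssociated(Conflict):
    message = _("BayModel %(baymodel)s is associated with bay %(bay)s.")
```

If the purge discussed above goes ahead, adding these two (and only these two) would make the existing call sites resolve again.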

* APIs: the APIs below seem to be ported from the Ironic Node code, but it 
seems we won't need them all. Again, I think we should purge the ones that do 
not make sense. In addition, these APIs are defined without ever being called. 
Does it make sense to remove them for now and add them back one by one later 
when they are actually needed?

Agree they should be removed now and added as needed later.

def reserve_bay(self, tag, bay_id):
    """Reserve a bay."""

def release_bay(self, tag, bay_id):
    """Release the reservation on a bay."""

def reserve_baymodel(self, tag, baymodel_id):
    """Reserve a baymodel."""

def release_baymodel(self, tag, baymodel_id):
    """Release the reservation on a baymodel."""

def reserve_container(self, tag, container_id):
    """Reserve a container."""

def reserve_node(self, tag, node_id):
    """Reserve a node."""

def release_node(self, tag, node_id):
    """Release the reservation on a node."""

def reserve_pod(self, tag, pod_id):
    """Reserve a pod."""

def release_pod(self, tag, pod_id):
    """Release the reservation on a pod."""

def reserve_service(self, tag, service_id):
    """Reserve a service."""

def release_service(self, tag, service_id):
    """Release the reservation on a service."""
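For context on what these stubs would do once implemented: in Ironic the reserve/release pair is a conductor lock, implemented as an atomic compare-and-swap on a reservation column. A hedged sketch of that pattern against a plain in-memory store rather than the real SQLAlchemy layer follows; `FakeDbApi`, `BayNotFound`, and `BayLocked` are invented names for this illustration, not real Magnum classes:

```python
class BayNotFound(Exception):
    """Raised when the bay id is unknown (illustrative)."""


class BayLocked(Exception):
    """Raised when another tag already holds the reservation (illustrative)."""


class FakeDbApi:
    """In-memory stand-in showing the reserve/release semantics only."""

    def __init__(self):
        # bay_id -> reservation tag (None means unreserved)
        self._bays = {}

    def create_bay(self, bay_id):
        self._bays[bay_id] = None

    def reserve_bay(self, tag, bay_id):
        """Reserve a bay for the caller identified by tag.

        In the real pattern this is a single UPDATE ... WHERE
        reservation IS NULL OR reservation = :tag, so it is atomic.
        """
        if bay_id not in self._bays:
            raise BayNotFound(bay_id)
        current = self._bays[bay_id]
        if current is not None and current != tag:
            raise BayLocked("bay %s is held by %s" % (bay_id, current))
        self._bays[bay_id] = tag  # compare-and-swap succeeds

    def release_bay(self, tag, bay_id):
        """Release the reservation, verifying the caller owns it."""
        if self._bays.get(bay_id) != tag:
            raise BayLocked("bay %s is not held by %s" % (bay_id, tag))
        self._bays[bay_id] = None
```

This also illustrates why the stubs are safe to drop for now: the pattern only pays off once multiple conductors actually compete for the same object.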



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] should 'ip address' be retrieved when describing a host?

2014-12-31 Thread James Downs

On Dec 31, 2014, at 12:35 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
wrote:

 but this is not necessarily the case for all virt drivers.  For example,
 the host for Xen-based installations is often a separate VM on the same
 hypervisor, which would have its own distinct IP address.

This is quite similar to how OpenStack / xCAT / z/VM would work together.

Cheers,
-j


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2014-12-31 Thread Kevin Benton
Any exceptions on the Neutron side? It might not be notifying nova
that the network is ready.
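For reference while reading the output quoted below: the raw `OS-EXT-STS:power_state` integer comes from nova's power-state table (`nova.compute.power_state`), and 3 maps to PAUSED, which is why `nova list` shows the instance as Paused even though its vm_state is still active. A small sketch of that mapping follows; the values match nova's Juno-era source, but treat the exact table as an assumption on other branches:

```python
# Mapping of nova's raw power_state codes (from nova.compute.power_state)
# to their symbolic names. Note the gaps at 2 and 5: those codes were
# retired and are intentionally absent.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}


def power_state_name(code):
    """Return the symbolic name for a raw power_state integer."""
    return POWER_STATES.get(code, "UNKNOWN(%d)" % code)
```

So a show output with `OS-EXT-STS:power_state | 3` means the hypervisor reports the domain as paused, independent of what the API-level status says.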

On Wed, Dec 31, 2014 at 8:41 AM, Paul Michali (pcm) p...@cisco.com wrote:
 Not sure if I’m going crazy or what. I’m using DevStack and, after stacking
 I tried booting a Cirros 3.2, 3.3, and Ubuntu cloud 14.04 image. Each time,
 the image ends up in PAUSED power state:

 ubuntu@juno:/opt/stack/neutron$ nova show peter
 +--------------------------------------+----------------------------------------------------------------+
 | Property                             | Value                                                          |
 +--------------------------------------+----------------------------------------------------------------+
 | OS-DCF:diskConfig                    | MANUAL                                                         |
 | OS-EXT-AZ:availability_zone          | nova                                                           |
 | OS-EXT-SRV-ATTR:host                 | juno                                                           |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  | juno                                                           |
 | OS-EXT-SRV-ATTR:instance_name        | instance-0001                                                  |
 | OS-EXT-STS:power_state               | 3                                                              |
 | OS-EXT-STS:task_state                | -                                                              |
 | OS-EXT-STS:vm_state                  | active                                                         |
 | OS-SRV-USG:launched_at               | 2014-12-31T15:15:33.00                                         |
 | OS-SRV-USG:terminated_at             | -                                                              |
 | accessIPv4                           |                                                                |
 | accessIPv6                           |                                                                |
 | config_drive                         |                                                                |
 | created                              | 2014-12-31T15:15:24Z                                           |
 | flavor                               | m1.tiny (1)                                                    |
 | hostId                               | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f       |
 | id                                   | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6                           |
 | image                                | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) |
 | key_name                             | -                                                              |
 | metadata                             | {}                                                             |
 | name                                 | peter                                                          |
 | os-extended-volumes:volumes_attached | []                                                             |
 | private network                      | 10.0.0.4                                                       |
 | progress                             | 0                                                              |
 | security_groups                      | default                                                        |
 | status                               | ACTIVE                                                         |
 | tenant_id                            | 7afb5bc1d88d462c8d57178437d3c277                               |
 | updated                              | 2014-12-31T15:15:34Z                                           |
 | user_id                              | 4ff18bdbeb4d436ea4ff1bcd29e269a9                               |
 +--------------------------------------+----------------------------------------------------------------+
 ubuntu@juno:/opt/stack/neutron$ nova list
 +--------------------------------------+-------+--------+------------+-------------+------------------+
 | ID                                   | Name  | Status | Task State | Power State | Networks         |
 +--------------------------------------+-------+--------+------------+-------------+------------------+
 | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | -          | Paused      | private=10.0.0.4 |
 +--------------------------------------+-------+--------+------------+-------------+------------------+


 I don’t see this with Kilo latest images. Any idea what I may be doing
 wrong, or if there is an issue (I didn’t see anything on Google search)?

 IMAGE_ID=`nova image-list | grep 'cloudimg-amd64 ' | cut -d' ' -f 2`
 PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' '`

 nova boot peter --flavor 3 --image $IMAGE_ID --user-data
 ~/devstack/user_data.txt --nic net-id=$PRIVATE_NET
 nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic
 net-id=$PRIVATE_NET paul

 Thanks.


 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] Fuel 6.0 plugin for Pacemaker STONITH (HA fencing)

2014-12-31 Thread Andrew Woodward
Bogdan,

Do you think that the existing post-deployment hook is sufficient to
implement this, or does additional plugin development need to be done to
support it?
On Dec 30, 2014 3:39 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote:

 Hello.
 There is a long-living blueprint [0] about HA fencing of failed nodes
 in a Corosync/Pacemaker cluster. Happily, the 6.0 release of Fuel
 supports a pluggable architecture.

 I propose the following implementation [1] (WIP repo [2]) for this
 feature as a Puppet plugin. It addresses the related blueprint for
 HA fencing in the Puppet manifests of the Fuel library [3].

 For the initial version, all the data definitions for power management
 devices should be done manually in YAML files (see the plugin's
 README.md file). Later this could be done in a more user-friendly way,
 perhaps as a part of the Fuel UI.

 Note that a similar approach (YAML data structures filled in by the
 cloud admin and passed to the Fuel Orchestrator automatically at the
 PXE provisioning stage) could be used as well for the Power management
 blueprint; see the related ML thread [4].

 Please also note that dev docs for Fuel plugins were merged recently
 [5]; they describe how to build and install this plugin.

 [0] https://blueprints.launchpad.net/fuel/+spec/ha-fencing
 [1] https://review.openstack.org/#/c/144425/
 [2]

 https://github.com/bogdando/fuel-plugins/tree/fencing_puppet_newprovider/ha_fencing
 [3]
 https://blueprints.launchpad.net/fuel/+spec/fencing-in-puppet-manifests
 [4]

 http://lists.openstack.org/pipermail/openstack-dev/2014-November/049794.html
 [5]

 http://docs.mirantis.com/fuel/fuel-6.0/plugin-dev.html#what-is-pluggable-architecture

 --
 Best regards,
 Bogdan Dobrelya,
 Skype #bogdando_at_yahoo.com
 Irc #bogdando

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel]

2014-12-31 Thread Andrew Woodward
+1, we need more pre-deployment network testing so we can recommend usable
network settings the first time. Settings like MTU, GRO, and GSO need to be
considered too.
On Dec 16, 2014 8:30 AM, Sergey Vasilenko svasile...@mirantis.com wrote:

 Guys, it's a big and complicated architecture issue.

 An issue like this was carefully researched about a month ago (while P***).

 Root cause of the issue:

- Now we use OVS to build the virtual network topology on each node.
- OVS suffers performance degradation while passing a large volume of
small network packets.
- We can't abandon OVS entirely and forever, because it's the most
popular Neutron solution.
- We can't partially abandon OVS now either, because the low-level
modules aren't ready for this yet. I started a blueprint (
 https://blueprints.launchpad.net/fuel/+spec/l23network-refactror-to-provider-based-resources)
aimed at making it possible to combine using OVS for Neutron purposes
while not using it for management, storage, etc.

 Together with the L2 support team, the Neutron team, and other network
 experts, we tuned one of the existing production-like envs after
 deployment and achieved the following values on a bond of two 10G cards:

- vm-to-vm speed (on different compute nodes): 2.56 Gbits/sec (GRE
segmentation)
- node-to-node speed: 17.6 Gbits/s

 These values are close to the theoretical maximum for OVS 1.xx with GRE.
 Some performance improvements may also be achieved by upgrading Open
 vSwitch to the latest LTS branch (2.3.1 at this time) and using the
 megaflow feature (
 http://networkheresy.com/2014/11/13/accelerating-open-vswitch-to-ludicrous-speed/
 ).


 After this research we concluded:


- OVS can't pass a large volume of small packets without network
performance degradation
- to fix this we should re-design the network topology on the env nodes
- even a re-designed network topology can't fully fix this issue. Some
network parameters, like MTU, disabling NIC offloading, buffers,
etc., can be tuned only on a real environment.


 My opinion: in Fuel we should add a new component (or extend the
 existing network-checker). This component should test network
 performance on the real customer's pre-configured env using different
 (already defined) performance test cases and recommend a better setup
 BEFORE the main deployment cycle runs.
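As a concrete illustration of one check such a component could run: measure node-to-node throughput with an iperf-style tool and flag any link that falls short of an expected fraction of line rate. A minimal sketch of the evaluation side follows; the parsing format, the helper names, and the 50% threshold are assumptions for illustration, not part of any existing Fuel checker:

```python
import re

# Unit multipliers for the throughput figure in an iperf-style summary.
_UNITS = {"Kbits": 1e3, "Mbits": 1e6, "Gbits": 1e9}


def parse_throughput_bps(summary_line):
    """Extract bits/sec from a summary line like '... 2.56 Gbits/sec'."""
    m = re.search(r"([\d.]+)\s+([KMG]bits)/sec", summary_line)
    if not m:
        raise ValueError("no throughput found in: %r" % summary_line)
    return float(m.group(1)) * _UNITS[m.group(2)]


def link_ok(summary_line, nic_speed_bps, min_fraction=0.5):
    """True if measured throughput reaches min_fraction of the NIC speed.

    With the thread's numbers: 17.6 Gbits/s node-to-node on a 2x10G bond
    would pass, while 2.56 Gbits/sec vm-to-vm over GRE would be flagged.
    """
    return parse_throughput_bps(summary_line) >= min_fraction * nic_speed_bps
```

Running checks like this against each configured network role before deployment would let the checker recommend MTU/offload changes while they are still cheap to apply.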

 /sv

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev