Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Christopher Yeoh
On Fri, 11 Oct 2013 08:27:54 -0700
Dan Smith  wrote:
> 
> Agreed, a stable virt driver API is not feasible or healthy at this
> point, IMHO. However, it doesn't change that much as it is. I know
> I'll be making changes to virt drivers in the coming cycle due to
> objects and I have no problem submitting the corresponding changes to
> the nova-extra-drivers tree for those drivers alongside any that go
> for the main one.

If the idea is to gate with nova-extra-drivers this could lead to a
rather painful process to change the virt driver API. When all the
drivers are in the same tree all of them can be updated at the same
time as the infrastructure. 

If they are in separate trees and Nova gates on nova-extra-drivers then
at least temporarily a backwards compatible API would have to remain so
the nova-extra-drivers tests still passed. The changes would then be
applied to nova-extra-drivers and finally a third changeset to remove
the backwards compatible code. 

We see this in tempest/nova or tempest/cinder occasionally (not often
as the APIs are stable) and it's not very pretty. Ideally we'd be able to
link two changesets for different projects so they can be processed as
one. But without that ability I think splitting any drivers out and
continuing to gate on them would be bad.

Chris



Re: [openstack-dev] [nova][Libvirt] Disabling nova-compute when a connection to libvirt is broken.

2013-10-12 Thread Lingxian Kong
+1 for me. And I am willing to be a volunteer.


2013/10/12 Joe Gordon 

> On Thu, Oct 10, 2013 at 4:47 AM, Vladik Romanovsky <
> vladik.romanov...@enovance.com> wrote:
>
>> Hello everyone,
>>
>> I have been recently working on a migration bug in nova (Bug #1233184).
>>
>> I noticed that compute service remains available, even if a connection to
>> libvirt is broken.
>> I thought that it might be better to disable the service (using
>> conductor.manager.update_service()) and resume it once it's connected again.
>> (maybe keep the host_stats periodic task running or create a dedicated
>> one; once it succeeds, the service will become available again).
>> This way new VMs won't be scheduled nor migrated to the disconnected host.
>>
>> Any thoughts on that?
>>
>
> Sounds reasonable to me. If we can't reach libvirt there isn't much that
> nova-compute can / should do.
>
>
>> Is anyone already working on that?
>>
>> Thank you,
>> Vladik
>>
>
>


-- 
**
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
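
A minimal sketch of the approach Vladik outlines above - a periodic check
that disables the compute service record while libvirt is unreachable and
re-enables it once a connection succeeds, so the scheduler skips the host.
The callables `conn_check` and `update_service` are illustrative stand-ins,
not actual Nova APIs (conductor.manager.update_service() would back the
persistence step):

    import time

    class LibvirtWatchdog(object):
        """Sketch: disable nova-compute while libvirt is unreachable."""

        def __init__(self, conn_check, update_service, interval=10):
            # conn_check: callable returning True when libvirt responds
            # update_service: callable persisting the service's disabled
            # flag, e.g. via conductor.manager.update_service()
            self._conn_check = conn_check
            self._update_service = update_service
            self._interval = interval
            self._disabled = False

        def poll(self):
            alive = self._conn_check()
            if not alive and not self._disabled:
                self._update_service(disabled=True)   # scheduler skips host
                self._disabled = True
            elif alive and self._disabled:
                self._update_service(disabled=False)  # resume scheduling
                self._disabled = False

        def run(self):
            # Could equally be wired up as a dedicated periodic task.
            while True:
                self.poll()
                time.sleep(self._interval)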


[openstack-dev] [Nova] [Heat] Havana RC2 available

2013-10-12 Thread Thierry Carrez
Happy Saturday everyone,

Due to major issues detected in key features during RC1 testing, we just
published new Havana release candidates for OpenStack Compute ("Nova")
and OpenStack Orchestration ("Heat").

You can find RC2 tarballs and lists of fixed bugs at:

https://launchpad.net/nova/havana/havana-rc2
https://launchpad.net/heat/havana/havana-rc2

This is hopefully the last Havana release candidate for Nova and Heat.
Unless a last-minute release-critical regression is found that warrants
another release candidate respin, those RC2s will be formally included
in the common OpenStack 2013.2 final release Thursday. You are therefore
strongly encouraged to test and validate these tarballs.

Alternatively, you can grab the code at:
https://github.com/openstack/nova/tree/milestone-proposed
https://github.com/openstack/heat/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/nova/+filebug (or
https://bugs.launchpad.net/heat/+filebug if the bug is in Heat) and tag
it *havana-rc-potential* to bring it to the release crew's attention.

Happy regression hunting,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] "Thanks for fixing my patch"

2013-10-12 Thread Lingxian Kong
Really a good idea! It's painful for us to submit a patch, then wait for
review because of the time difference. It's even more painful if we get a -1
after getting up. It would be much appreciated if someone could help, and we
can help others, too.


2013/10/12 Nikhil Manchanda 

> Just wanted to chime in that Trove also follows this approach and it's
> worked pretty well for us.
> +1 on Doug's suggestion to leave a comment on the patch so that two
> reviewers don't end up doing the same work fixing it.
>
> Cheers,
> -Nikhil
>
>
>
> On Fri, Oct 11, 2013 at 12:17 PM, Dolph Mathews 
> wrote:
>
>>
>> On Fri, Oct 11, 2013 at 1:34 PM, Clint Byrum  wrote:
>>
>>> Recently in the TripleO meeting we identified situations where we need
>>> to make it very clear that it is ok to pick up somebody else's patch
>>> and finish it. We are broadly distributed, time-zone-wise, and I know
>>> other teams working on OpenStack projects have the same situation. So
>>> when one of us starts the day and sees an obvious issue with a patch,
>>> we have decided to take action, rather than always -1 and move on. We
>>> clarified for our core reviewers that this does not mean that now both
>>> of you cannot +2. We just need at least one person who hasn't been in
>>> the code to also +2 for an approval*.
>>>
>>> I think all projects can benefit from this model, as it will raise
>>> velocity. It is not perfect for everything, but it is really great when
>>> running up against deadlines or when a patch has a lot of churn and thus
>>> may take a long time to get through the "rebase gauntlet".
>>>
>>> So, all of that said, I want to encourage all OpenStack developers to
>>> say "thanks for fixing my patch" when somebody else does so. It may seem
>>> obvious, but publicly expressing gratitude will make it clear that you
>>> do not take things personally and that we're all working together.
>>>
>>> Thanks for your time -Clint
>>>
>>> * If all core reviewers have been in on the patch, then any two +2's
>>> work.
>>>
>>>
>> +1 across the board -- keystone-core follows this approach, especially
>> around feature freeze / release candidate time.
>>
>>
>>
>>
>>
>> --
>>
>> -Dolph
>>
>>
>
>


-- 
**
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Robert Collins
On 12 October 2013 21:35, Christopher Yeoh  wrote:
> On Fri, 11 Oct 2013 08:27:54 -0700
> Dan Smith  wrote:
>
> If the idea is to gate with nova-extra-drivers this could lead to a
> rather painful process to change the virt driver API. When all the
> drivers are in the same tree all of them can be updated at the same
> time as the infrastructure.
>
> If they are in separate trees and Nova gates on nova-extra-drivers then
> at least temporarily a backwards compatible API would have to remain so
> the nova-extra-drivers tests still passed. The changes would then be
> applied to nova-extra-drivers and finally a third changeset to remove
> the backwards compatible code.
>
> We see this in tempest/nova or tempest/cinder occasionally (not often
> as the APIs are stable) and it's not very pretty. Ideally we'd be able to
> link two changesets for different projects so they can be processed as
> one. But without that ability I think splitting any drivers out and
> continuing to gate on them would be bad.

A fairly fundamental thing in SOA architectures - which we have here -
is to make all changes backwards compatibly. It's pretty easy if
you're in the habit of it - there's only a handful of basic primitives
around evolving APIs gracefully - and it results in a much smoother
deployment story, and ultimately that's what we're aiming at.
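
For instance, one of those primitives, sketched on a hypothetical driver
method (all names invented for illustration): grow the new argument with a
default so old callers and implementations keep working, then drop the shim
in a final changeset once every tree has moved.

    class ComputeDriverBase(object):
        def attach_volume(self, instance, volume):
            raise NotImplementedError()

    class EvolvedDriver(ComputeDriverBase):
        # Step 1 (main tree): add the new argument with a default, so
        # out-of-tree drivers and old callers keep working unchanged.
        def attach_volume(self, instance, volume, mountpoint=None):
            if mountpoint is None:
                mountpoint = '/dev/vdb'  # previous hard-coded behaviour
            return (instance, volume, mountpoint)

    # Step 2 (other trees): drivers and callers adopt the new argument.
    # Step 3 (main tree): the default, i.e. the backwards-compatibility
    # shim, is removed in a final changeset.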

I recognise the potential for angst around longevity of APIs and so
forth, but even in a single tree with multiple patches the discipline
of being careful about API evolution can reduce gating issues with
patches that would otherwise conflict semantically.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Neutron] Havana RC2 available

2013-10-12 Thread Thierry Carrez
Hi,

Probably the last before Monday: due to various issues detected in RC1
testing, we just created a new Havana release candidate for OpenStack
Networking ("Neutron").

You can find the RC2 tarball and the list of fixed bugs at:

https://launchpad.net/neutron/havana/havana-rc2

This is hopefully the last Havana release candidate for Neutron.
Unless a last-minute release-critical regression is found that warrants
another release candidate respin, this RC2 will be formally included in
the common OpenStack 2013.2 final release next Thursday. You are
therefore strongly encouraged to test and validate this tarball.

Alternatively, you can grab the code at:
https://github.com/openstack/neutron/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/neutron/+filebug and tag
it *havana-rc-potential* to bring it to the release crew's attention.

NB: we still have RC2 windows opened for Keystone, Ceilometer and
Horizon. Those should all be published very early next week.

Happy regression hunting,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova] odd behaviour from sqlalchemy

2013-10-12 Thread Roman Podolyaka
Hello Chris,

I thought it was a bug in SQLAlchemy code, so I wrote a snippet [1] to
check my assumption, but I haven't managed to reproduce the problem with
SQLAlchemy versions 0.7.9, 0.7.10 and 0.8.2.

I would suggest you start by enabling logging of all the SQL queries
SQLAlchemy issues [2] and, if needed, examine the session/model instance
state with pdb.

For your second question: you can set a column to its current value by
using the literal_column() expression [3].

Can you elaborate a bit more on your use case? Why do you update the table
row but keep the updated_at column value unchanged?

Thanks,
Roman

[1] http://paste.openstack.org/show/48335/
[2]
https://github.com/openstack/nova/blob/stable/grizzly/nova/openstack/common/db/sqlalchemy/session.py#L296
[3]
https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/api.py#L1088
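
A condensed sketch of the literal_column() approach referenced in [3] (the
model and column names are assumptions for illustration): naming updated_at
explicitly in the SET clause keeps onupdate from firing, and the database
writes the column's current value back in a single atomic statement,
avoiding the read-then-write race Chris mentions.

    from sqlalchemy import literal_column

    def update_without_touching_timestamp(session, model, row_id, values):
        # model: a typical declarative class with an updated_at column
        # that has an onupdate handler. Because 'updated_at' now appears
        # explicitly in the SET clause, onupdate is not invoked and the
        # DB keeps the existing value.
        values = dict(values, updated_at=literal_column('updated_at'))
        (session.query(model)
                .filter_by(id=row_id)
                .update(values, synchronize_session=False))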


On Sat, Oct 12, 2013 at 2:31 AM, Chris Friesen
wrote:

> Hi,
>
> I'm using grizzly with sqlalchemy 0.7.9.
>
> I'm seeing some funny behaviour related to the automatic update of
> "updated_at" column for the Service class in the sqlalchemy model.
>
> I added a new column to the Service class, and I want to be able to update
> that column without triggering the automatic update of the "updated_at"
> field.
>
> While trying to do this, I noticed the following behaviour.  If I do
>
> values = {'updated_at': new_value}
> self.service_update(context, service, values)
>
> this sets the "updated_at" column to new_value as expected.  However, if I
> do
>
> values = {'updated_at': new_value, 'other_key': other_value}
> self.service_update(context, service, values)
>
> then the other key is set as expected, but "updated_at" gets auto-updated
> to the current timestamp.
>
> The "onupdate" description in the sqlalchemy docs indicates that it "will
> be invoked upon update if this column is not present in the SET clause of
> the update".  Anyone know why it's being invoked even though I'm passing in
> an explicit value?
>
>
> On a slightly different note, does anyone have a good way to update a
> column in the Service class without triggering the "updated_at" field to be
> changed?  Is there a way to tell the database "set this column to this
> value, and set the updated_at column to its current value"?  I don't want
> to read the "updated_at" value and then write it back in another operation
> since that leads to a potential race with other entities accessing the
> database.
>
> Thanks,
> Chris
>


Re: [openstack-dev] [Openstack] quantum network node in KVM guest - connectivity issues

2013-10-12 Thread Jay Pipes
On Sat, Oct 12, 2013 at 8:59 AM, Nick Maslov  wrote:

> Hi,
>
> I have following setup:
>
> 1) infrastructure node, IP in bond, hosting following KVM guests:
> 1.1) Postgres KVM guest
> 1.2) MQ KVM guest
> 1.3) DNS KVM guest
> 1.4) Control node with Nova API, Cinder API, Quantum Server, etc.
> ...
> 1.8) Quantum network node with quantum agents
>
> Agents on this network node are always dying and starting up again:
>
> # quantum agent-list
>
> +--------------------------------------+--------------------+-----------------------+-------+----------------+
> | id                                   | agent_type         | host                  | alive | admin_state_up |
> +--------------------------------------+--------------------+-----------------------+-------+----------------+
> | 5656392b-b6fe-4570-802f-97d2154acf31 | L3 agent           | net01-001.int.net.net | xxx   | True           |
> | 1093fb73-6622-448e-8dad-558a36cca306 | DHCP agent         | net01-001.int.net.net | xxx   | True           |
> | 4518830d-e112-439f-a629-7defa7bd29e9 | Open vSwitch agent | net01-001.int.net.net | xxx   | True           |
> | 86ee6d24-2e6a-4f58-addb-290fefc26401 | Open vSwitch agent | nova05                | :-)   | True           |
> | b67697bb-3ec1-49fc-8f3c-7e4e7892e83a | Open vSwitch agent | nova04                | :-)   | True           |
> +--------------------------------------+--------------------+-----------------------+-------+----------------+
>
> Few minutes after, those agents will be up again, one may die - while
> others not.
>
> ping net01-001
> PING net01-001.int.net.net (10.10.146.34) 56(84) bytes of data.
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=1 ttl=64 time=0.912 ms
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=2 ttl=64 time=0.273 ms
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=2 ttl=64 time=0.319 ms (DUP!)
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=3 ttl=64 time=0.190 ms
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=4 ttl=64 time=0.230 ms
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=4 ttl=64 time=0.305 ms (DUP!)
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=5 ttl=64 time=0.199 ms
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=7 ttl=64 time=0.211 ms
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=8 ttl=64 time=0.322 ms
> 64 bytes from net01-001.int.net.net (10.10.146.34): icmp_req=8 ttl=64 time=0.409 ms (DUP!)
> ^C
> --- net01-001.int.net.net ping statistics ---
> 8 packets transmitted, 7 received, +3 duplicates, 12% packet loss, time 7017ms
>
> SSH'ing to network node is also difficult - constant freezes. Nothing
> suspicious in the logs.
>

Those DUP!'s are suspicious, since you aren't pinging a broadcast domain.
That might indicate there's something up with the OVS GRE mesh.


> Since DHCP agent may be down, spawning a VM may end in "waiting for
> network device" state. Then, it might get the internal IP and then floating
> - but accessing it also proves to be very troublesome - I believe because
> of L3 agent flapping.
>
> My OpenStack was set up under this manual -
> https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
>
> Only thing I did - I added HAproxy/keepalived on top of it, balancing API
>> requests on control nodes. But this shouldn't impact networking...
>>
>
Agreed, it should not affect network connectivity for the network node.

Not sure what the issue is. Perhaps you might try following through
Darragh's excellent tutorial on debugging L3 issues in OVS/Quantum here:

http://techbackground.blogspot.com/2013/05/the-quantum-l3-router-and-floating-ips.html

Did you manually set up KVM instances for all these nodes, or are you using
something like Triple-O?

Best,
-jay


>
> Anyone have any thoughts about this?
>
> Cheers,
> NM
>


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Dan Smith
> If the idea is to gate with nova-extra-drivers this could lead to a
> rather painful process to change the virt driver API. When all the
> drivers are in the same tree all of them can be updated at the same
> time as the infrastructure. 

Right, and I think if we split those drivers out, then we do *not* gate
on them for the main tree. It's asymmetric, which means potentially more
trouble for the maintainers of the extra drivers. However, as has been
said, we *want* the drivers in the tree as we have them now. Being moved
out would be something the owners of a driver would choose in order to
achieve a faster pace of development, with the consequence of having to
play catch-up if and when we change the driver API.

Like I said, I'll be glad to submit patches to the extra tree in unison
with patches to the main tree to make some of the virt API changes that
will be coming soon, which should minimize the troubles.

I believe Alex has already said that he'd prefer the occasional catch-up
activities over what he's currently experiencing.

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Tim Bell

From the user perspective, splitting off the projects seems to be focussing on 
the ease of commit compared to the final user experience. An 'extras' project 
without *strong* testing co-ordination with packagers such as SUSE and RedHat 
would end up with the consumers of the product facing the integration problems 
rather than resolving them where they should be resolved, within the OpenStack 
project itself.

I am sympathetic to the 'extra' drivers problem such as Hyper-V and powervm, 
but I do not feel the right solution is to split.

As CERN uses the Hyper-V driver (we have a dual KVM/Hyper-V approach), we want 
this configuration to be certified before it reaches us.

Assuming there is a summit session on how to address this, I can arrange for 
user representation in that session.

Tim

> -Original Message-
> From: Dan Smith [mailto:d...@danplanet.com]
> Sent: 12 October 2013 18:31
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Hyper-V] Havana status
> 
> > If the idea is to gate with nova-extra-drivers this could lead to a
> > rather painful process to change the virt driver API. When all the
> > drivers are in the same tree all of them can be updated at the same
> > time as the infrastructure.
> 
> Right, and I think if we split those drivers out, then we do *not* gate on 
> them for the main tree. It's asymmetric, which means potentially
> more trouble for the maintainers of the extra drivers. However, as has been 
> said, we *want* the drivers in the tree as we have them now.
> Being moved out would be something the owners of a driver would choose in 
> order to achieve a faster pace of development, with the
> consequence of having to play catch-up if and when we change the driver API.
> 
> Like I said, I'll be glad to submit patches to the extra tree in unison with 
> patches to the main tree to make some of the virt API changes
> that will be coming soon, which should minimize the troubles.
> 
> I believe Alex has already said that he'd prefer the occasional catch-up 
> activities over what he's currently experiencing.
> 
> --Dan
> 



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Dan Smith
> From the user perspective, splitting off the projects seems to be 
> focussing on the ease of commit compared to the final user 
> experience.

I think what you describe is specifically the desire that originally
spawned the thread: making the merging of changes to the hyper-v driver
faster by having them not reviewed by the rest of the Nova team. It
seems to be what the hyper-v developers want, not necessarily what the
Nova team as a whole wants.

> An 'extras' project without *strong* testing co-ordination with
> packagers such as SUSE and RedHat would end up with the consumers of
> the product facing the integration problems rather than resolving
> where they should be, within the OpenStack project itself.

I don't think splitting out to -extras means that it loses strong
testing coordination (note that strong testing coordination does not
exist with the hyper-v driver at this point in time). Every patch to the
-extras tree could still be unit (and soon, integration) tested against
the current nova tree, using the proposed patch applied to the -extras
tree. It just means that a change against nova wouldn't trigger the
same tests, which is why the occasional "catch up" work would be required.

> I am sympathetic to the 'extra' drivers problem such as Hyper-V and 
> powervm, but I do not feel the right solution is to split.
> 
> Assuming there is a summit session on how to address this, I can 
> arrange a user representation in that session.

Cool, I really think we're at the point where we know the advantages and
disadvantages of the various options and further face-to-face discussion
at the summit is what is going to move us to the next stage.

--Dan



Re: [openstack-dev] [Nova] Icehouse design summit proposal deadline

2013-10-12 Thread Gary Kotton
FYI

On 10/10/13 9:43 PM, "Russell Bryant"  wrote:

>Greetings,
>
>We already have more proposals for the Nova design summit track than
>time slots.  Please get your proposals in as soon as possible, and
>ideally no later than 1 week from today - Thursday, October 17.  At that
>point we will be focusing on putting a schedule together in order to
>have the schedule completed at least a week in advance of the summit.
>
>Thanks!
>
>-- 
>Russell Bryant
>




Re: [openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-12 Thread Gary Kotton
Yup, it seems to be hypervisor specific. I have added the VMware support 
following your corrections to the VMware driver.
Thanks
Gary

From: Matt Riedemann <mrie...@us.ibm.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Thursday, October 10, 2013 10:17 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

Looks like this has been brought up a couple of times:

https://lists.launchpad.net/openstack/msg09138.html

https://lists.launchpad.net/openstack/msg08555.html

But they seem to kind of end up in the same place I already am - it seems to be 
an open-ended API that is hypervisor-specific.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States






From: Matt Riedemann/Rochester/IBM
To: "OpenStack Development Mailing List" <openstack-dev@lists.openstack.org>,
Date: 10/10/2013 02:12 PM
Subject: [nova] Looking for clarification on the diagnostics API



Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't implement 
that API.  I started looking at other drivers that did and found that libvirt, 
vmware and xenapi at least had code for the get_diagnostics method.  I found 
that the vmware driver was re-using its get_info method for get_diagnostics 
which led to bug 1237622 [2] but overall caused some confusion about the 
difference between the compute driver's get_info and get_diagnostics methods.  
It looks like get_info is mainly just used to get the power_state of the 
instance.

First, the get_info method has a nice docstring for what it needs returned [3] 
but the get_diagnostics method doesn't [4].  From looking at the API docs [5], 
the diagnostics API basically gives an example of values to get back which is 
completely based on what the libvirt driver returns.  Looking at the xenapi 
driver code, it looks like it does things a bit differently than the libvirt 
driver (maybe doesn't return the exact same keys, but it returns information 
based on what Xen provides).

I'm thinking about implementing the diagnostics API for the powervm driver but 
I'd like to try and get some help on defining just what should be returned from 
that call.  There are some IVM commands available to the powervm driver for 
getting hardware resource information about an LPAR so I think I could 
implement this pretty easily.

I think it basically comes down to providing information about the processor, 
memory, storage and network interfaces for the instance but if anyone has more 
background information on that API I'd like to hear it.

[1] 
https://github.com/openstack/tempest/commit/da0708587432e47f85241201968e6402190f0c5d
[2] https://bugs.launchpad.net/nova/+bug/1237622
[3] https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L144
[4] https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L299
[5] http://paste.openstack.org/show/48236/
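
For reference, the libvirt-style shape in the API docs example [5] boils
down to a flat, hypervisor-specific dict; the sketch below follows that
example's key style and is illustrative rather than a defined contract,
which is exactly the gap being discussed:

    def get_diagnostics(self, instance_name):
        # Free-form, hypervisor-specific counters: per-vCPU time,
        # memory, per-disk and per-NIC I/O. Other drivers return
        # whatever their platform exposes.
        return {
            'cpu0_time': 17300000000,   # ns
            'memory': 524288,           # kB
            'vda_read': 262144,
            'vda_read_req': 112,
            'vda_write': 5778432,
            'vda_write_req': 488,
            'vnet0_rx': 2070139,
            'vnet0_tx': 140208,
        }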



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States





Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


> On 12.10.2013, at 20:04, "Tim Bell"  wrote:
> 
> 
> From the user perspective, splitting off the projects seems to be focussing 
> on the ease of commit compared to the final user experience. An 'extras' 
> project without *strong* testing co-ordination with packagers such as SUSE 
> and RedHat would end up with the consumers of the product facing the 
> integration problems rather than resolving where they should be, within the 
> OpenStack project itself.
> 
> I am sympathetic to the 'extra' drivers problem such as Hyper-V and powervm, 
> but I do not feel the right solution is to split.
> 
> As CERN uses the Hyper-V driver (we have a dual KVM/Hyper-V approach), we 
> want that this configuration is certified before it reaches us.
> 
I don't see your point here. From any practical perspective, most of the Nova 
core review work in the sub-project areas consists of formal validation of the 
patches (beyond the basic pep8 / pylinting done by Jenkins) or unit test 
requests while 99% of the authoritative work on the patches is done by the 
"de-facto" sub-project maintainers, simply because those are the people knowing 
the domain. This wouldn't change with a separate project. It would actually 
improve. 

Informal "Certification", to call it this way, is eventually coming from the 
users (including CERN of course), not from the reviewers: in the end you (the 
users) are the ones using this stuff in production environments and you are 
filing bugs and asking for new features.

On the other side, if by "extra" you mean a repo outside of OpenStack (the 
vendor repo suggested in previous replies in this thread), I totally agree, as 
it would move the project outside of the focus of the largest part of the 
community in most cases.

> Assuming there is a summit session on how to address this, I can arrange a 
> user representation in that session.
> 
> Tim
> 
>> -Original Message-
>> From: Dan Smith [mailto:d...@danplanet.com]
>> Sent: 12 October 2013 18:31
>> To: OpenStack Development Mailing List
>> Subject: Re: [openstack-dev] [Hyper-V] Havana status
>> 
>>> If the idea is to gate with nova-extra-drivers this could lead to a
>>> rather painful process to change the virt driver API. When all the
>>> drivers are in the same tree all of them can be updated at the same
>>> time as the infrastructure.
>> 
>> Right, and I think if we split those drivers out, then we do *not* gate on 
>> them for the main tree. It's asymmetric, which means potentially
>> more trouble for the maintainers of the extra drivers. However, as has been 
>> said, we *want* the drivers in the tree as we have them now.
>> Being moved out would be something the owners of a driver would choose in 
>> order to achieve a faster pace of development, with the
>> consequence of having to play catch-up if and when we change the driver API.
>> 
>> Like I said, I'll be glad to submit patches to the extra tree in unison with 
>> patches to the main tree to make some of the virt API changes
>> that will be coming soon, which should minimize the troubles.
>> 
>> I believe Alex has already said that he'd prefer the occasional catch-up 
>> activities over what he's currently experiencing.
>> 
>> --Dan
>> 
> 



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


On 12.10.2013, at 20:22, "Dan Smith"  wrote:

>> From the user perspective, splitting off the projects seems to be 
>> focussing on the ease of commit compared to the final user 
>> experience.
> 
> I think what you describe is specifically the desire that originally
> spawned the thread: making the merging of changes to the hyper-v driver
> faster by having them not reviewed by the rest of the Nova team. It
> seems to be what the hyper-v developers want, not necessarily what the
> Nova team as a whole wants.
> 
>> An 'extras' project without *strong* testing co-ordination with
>> packagers such as SUSE and RedHat would end up with the consumers of
>> the product facing the integration problems rather than resolving
>> where they should be, within the OpenStack project itself.
> 
> I don't think splitting out to -extras means that it loses strong
> testing coordination (note that strong testing coordination does not
> exist with the hyper-v driver at this point in time). Every patch to the
> -extras tree could still be unit (and soon, integration) tested against
> the current nova tree, using the proposed patch applied to the -extras
> tree. It just means that a change against nova wouldn't trigger the
> same, which is why the potential for "catch up" behavior would be required.
> 
>> I am sympathetic to the 'extra' drivers problem such as Hyper-V and 
>> powervm, but I do not feel the right solution is to split.
>> 
>> Assuming there is a summit session on how to address this, I can 
>> arrange a user representation in that session.
> 
> Cool, I really think we're at the point where we know the advantages and
> disadvantages of the various options and further face-to-face discussion
> at the summit is what is going to move us to the next stage.
> 

I agree. Looks like we are converging towards a common ground. I'm summing it 
up here, including a few additional details, for the benefit of who will not 
join us in HK (sorry, we'll party for you as well :-)):

1) All the drivers will still be part of Nova.

2) One official project (nova-drivers-incubator?) or more than one will be 
created for the purpose of supporting a leaner and faster development pace of 
the drivers.

3) Current driver sub-project teams will informally elect their maintainer(s) 
which will have +2a rights on the new project or specific subtrees.

4) Periodically, code from the new project(s) must be merged into Nova. 
Only Nova core reviewers will have obviously +2a rights here.
I propose to do it on scheduled days before every milestone, differentiated per 
driver to distribute the review effort (what about also having Nova core 
reviewers assigned to each driver? Dan was suggesting something similar some 
time ago).

5) All drivers will be treated equally and new features and bug fixes for 
master (except security ones) should land in the new project before moving to 
Nova.

6) CI gates for all drivers, once available, will be added to the new project 
as well. Only drivers code with a CI gate will be merged in Nova (starting with 
the Icehouse release as we already discussed).

7) Active communication should be maintained between the Nova core team and the 
drivers maintainers. This means something more than: "I wrote it on the ML 
didn't you see it?" :-)

A couple of questions: will we keep version branches on the new project or just 
master?

Bug fixes for older releases will be proposed to the incubator for the current 
release in development and to Nova for past version branches?

Please correct me if I missed something!

Thanks,

Alessandro

> --Dan
> 



Re: [openstack-dev] [neutron] VPNaaS questions

2013-10-12 Thread Eugene Nikanorov
Hi folks,

> I was wondering in general how providers can customize service features,
> based on their capabilities (better or worse than reference). I could
create
> a Summit session topic on this, but wanted to know if this is something
that
> has already been addressed or if a different architectural approach has
> already been defined.

This seems to be a multilayered feature that needs to be discussed.
Mark McClain will be speaking about vendor cli extensions in
http://summit.openstack.org/cfp/details/10.
It requires an API counterpart on the server side. I was planning to speak about
this in this session:
http://summit.openstack.org/cfp/details/22
Feel free to add your suggestions to the etherpad.

more specifically:
> 7) If a provider has additional attributes (can't think of any yet), how
can
> the attribute be extended, only for that provider (or is that the wrong
way
> to handle this)?
I think it should be an additional extension mechanism different from the
framework that we're using right now.
Service plugin should gather extended resources or attribute maps from
supported drivers and return them to the layer that will make wsgi
controllers for the collections. So it should be pretty much the same as
extension framework but instead of loading common extensions, it should
load resources from the service plugin.
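
A rough sketch of that gathering step (all names illustrative): the service
plugin merges per-driver attribute maps and returns the result to the layer
that builds the WSGI controllers.

    class VPNServicePlugin(object):
        def __init__(self, drivers):
            self.drivers = drivers

        def get_extended_resources(self, version):
            resources = {}
            for driver in self.drivers:
                # Each driver contributes only the attributes it
                # supports, keyed by collection name.
                driver_map = driver.get_extended_resources(version)
                for collection, attrs in driver_map.items():
                    resources.setdefault(collection, {}).update(attrs)
            return resources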


Thanks,
Eugene.


On Sat, Oct 12, 2013 at 1:40 AM, Nachi Ueno  wrote:

> Hi Paul
>
> 2013/10/11 Paul Michali :
> > Hi folks,
> >
> > I have a bunch of questions for you on VPNaaS in specific, and services
> in
> > general...
> >
> > Nachi,
> >
> > 1) You had a bug fix to do service provider framework support for VPN
> > (41827). It was held for Icehouse. Is that pretty much a working patch?
> > 2) When are you planning on reopening the review?
>
> I'm not sure it will work without rebase.
> I'll rebase, and test it again in next week.
>
> >
> > Anyone,
> >
> > I see that there is an agent.py file for VPN that has a main() and it
> starts
> > up an L3 agent, specifying the VPNAgent class (in same file).
> >
> > 3) How does this file get invoked? IOW how does the main() get invoked?
>
> we should use neutron-vpn-agent command to run vpn-agent.
> This command invoke vpn agent class.
> It is defined in setup.cfg:
>
> https://github.com/openstack/neutron/blob/master/setup.cfg#L98
>
> > 4) I take it we can specify multiple device drivers in the config file
> for
> > the agent?
>
> Yes.
>
> >
> > Currently, for the reference device driver, the hierarchy is currently
> > DeviceDriver [ABC] -> IPsecDriver [Swan based logic] -> OpenSwanDriver
> [one
> > function, OpenSwan specific]. The ABC has a specific set of APIs.
> Wondering
> > how to incorporate provider based device drivers.
>
> It was designed when we knew of only one Swan-based driver,
> so it won't fit other device drivers.
> If so, you can also extend or modify DeviceDriver.
>
> > 5) Should I push up more general methods from IPsecDriver to
> DeviceDriver,
> > so that they can be reused by other providers?
>
> That would be great
>
> > 6) Should I push down the swan based methods from DeviceDriver to
> > IPsecDriver and maybe name it SwanDeviceDriver?
>
> yes
>
> >
> > I see that vpnaas.py is an extension for VPN that defines attributes and
> the
> > base plugin functions.
> >
> > 7) If a provider as additional attributes (can't think of any yet), how
> can
> > the attribute be extended, only for that provider (or is that the wrong
> way
> > to handle this)?
>
> You can extend the existing extension.
>
> > For VPN, there are several attributes, each with varying ranges of values
> > allowed. This is reflected in the CLI help messages, the database (e.g.
> > enums), and is validated (some) in the client code and in the VPN
> service.
>
> Changing existing attributes may be challenging on the client side.
> But let's discuss this with a concrete example.
>
> > 8) How do we provide different limits/allowed values for attributes, for
> a
> > specific provider (e.g. let's say the provider supports or doesn't
> support
> > an encryption method, or doesn't support IKE v1 or v2)?
>
> The driver can throw an unsupported exception (it is not defined yet).
>
> > 9) Should the code be changed not to do any client validation, and to
> have
> > generic help, so that different values could be provided, or is there a
> way
> > to customize this based on provider?
>
> That's could be one way.
>
> > 10) If customized, is it possible to reflect the difference in allowed
> > values in the help strings (and client validation)?
>
> Maybe the server side can tell the client "hey, I'm supporting this set of
> values"
> Then client can use it as the help string.
> # This change may need bp.
>
> > 11) How do we handle the variation in the database (e.g. when enums
> > specifying a fixed set of values)? Do we need to change the database to
> be
> > more generic (strings and ints) or do we somehow extend the database?
>
> more than one driver will use the same DB,
> so I'm +1 for gene

[openstack-dev] [TripleO] All TripleO ATC's - getting accounts on the TripleOCloud

2013-10-12 Thread Robert Collins
There are reviews up now to add user accounts for the TripleO run
OpenStack reference cloud
(https://wiki.openstack.org/wiki/TripleO/TripleOCloud).

To be eligible for an account, you need to be a TripleO ATC, or have
some use case which the TripleO PTL considers worthwhile (e.g. just
ask :)).

To setup an account, submit a review to the tripleo incubator. This
https://review.openstack.org/#/c/51410/ is an example of such a review
adding accounts for openstack CI to be able to spin up nodepool and
other services in the cloud. Until the base infrastructure has landed,
you'll need to build on top of review 51354 (e.g. git review -d 51354,
add a commit, git review -y).

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron] VPNaaS questions

2013-10-12 Thread Paul Michali
Thanks for the responses Nachi! I combined them into one message. See @PCM 
inline…


PCM (Paul Michali)

MAIL p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW   @pmichali

On Oct 11, 2013, at 5:40 PM, Nachi Ueno  wrote:

> Hi Paul
> 
> 2013/10/11 Paul Michali :
>> Hi folks,
>> 
>> I have a bunch of questions for you on VPNaaS in specific, and services in
>> general...
>> 
>> Nachi,
>> 
>> 1) You had a bug fix to do service provider framework support for VPN
>> (41827). It was held for Icehouse. Is that pretty much a working patch?
>> 2) When are you planning on reopening the review?
> 
> I'm not sure it will work without rebase.
> I'll rebase, and test it again in next week.

@PCM Thanks! I did pull new Neutron code and then patched in the code and it 
seems to be working fine, BTW.


> 
>> 
>> Anyone,
>> 
>> I see that there is an agent.py file for VPN that has a main() and it starts
>> up an L3 agent, specifying the VPNAgent class (in same file).
>> 
>> 3) How does this file get invoked? IOW how does the main() get invoked?
> 
> we should use neutron-vpn-agent command to run vpn-agent.
> This command invoke vpn agent class.
> It is defined in setup.cfg:
> 
> https://github.com/openstack/neutron/blob/master/setup.cfg#L98
> 
>> 4) I take it we can specify multiple device drivers in the config file for
>> the agent?
> 
> Yes.
> 
>> 
>> Currently, for the reference device driver, the hierarchy is currently
>> DeviceDriver [ABC] -> IPsecDriver [Swan based logic] -> OpenSwanDriver [one
>> function, OpenSwan specific]. The ABC has a specific set of APIs. Wondering
>> how to incorporate provider based device drivers.
> 
> It was designed when we knew of only one Swan-based driver,
> so it won't fit other device drivers.
> If so, you can also extend or modify DeviceDriver.
> 
>> 5) Should I push up more general methods from IPsecDriver to DeviceDriver,
>> so that they can be reused by other providers?
> 
> That would be great
> 
>> 6) Should I push down the swan based methods from DeviceDriver to
>> IPsecDriver and maybe name it SwanDeviceDriver?
> 
> yes
> 
>> 
>> I see that vpnaas.py is an extension for VPN that defines attributes and the
>> base plugin functions.
>> 
>> 7) If a provider has additional attributes (can't think of any yet), how can
>> the attribute be extended, only for that provider (or is that the wrong way
>> to handle this)?
> 
> You can extend the existing extension.

@PCM Would like more info on how to do that. Any examples? Would the attribute 
be static and always available (versus only being available when a certain 
provider is selected)?
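
A hedged illustration of what "extending the existing extension" could look
like, following the shape of Neutron's attribute maps; the attribute and
class below are invented for the example, and an attribute declared this way
would be static unless the plugin filters it by the loaded provider:

    # Invented example: a second extension adds a provider-only
    # attribute to the existing 'ikepolicies' collection; the extension
    # framework merges this into the base attribute map.
    EXTENDED_ATTRIBUTES_2_0 = {
        'ikepolicies': {
            'vendor_ike_profile': {'allow_post': True, 'allow_put': True,
                                   'default': None, 'is_visible': True},
        },
    }

    class Vendor_vpn(object):  # would subclass extensions.ExtensionDescriptor
        @classmethod
        def get_extended_resources(cls, version):
            return EXTENDED_ATTRIBUTES_2_0 if version == "2.0" else {}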


> 
>> For VPN, there are several attributes, each with varying ranges of values
>> allowed. This is reflected in the CLI help messages, the database (e.g.
>> enums), and is validated (some) in the client code and in the VPN service.
> 
> Changing existing attributes may be challenging on the client side.
> But let's discuss this with a concrete example.
> 
>> 8) How do we provide different limits/allowed values for attributes, for a
>> specific provider (e.g. let's say the provider supports or doesn't support
>> an encryption method, or doesn't support IKE v1 or v2)?
> 
> The driver can throw an unsupported exception (it is not defined yet).

@PCM Yeah, I was figuring at execution time we could reject the selection, but 
I'm wondering if we can be proactive and not show the option in the help and 
maybe validate it at the client side.  The other side of this (and probably 
worse) is if a provider supports more options than the reference VPN. The new 
value would be hidden from the help, and we have the added issue that the 
database is not set up to allow the value (say an enum is being used).

I have an example, and that is of a provider not (yet) supporting (via a 
programmatic interface) IKE v2. Not sure if this is a good example though, as, 
for VPN, it seems like the IKE and IPsec policies are not associated with 
the provider, right? If that is the case, then how do we handle the case where 
a provider has a limitation (in this case) or more capabilities than the 
reference VPN?

 
> 
>> 9) Should the code be changed not to do any client validation, and to have
>> generic help, so that different values could be provided, or is there a way
>> to customize this based on provider?
> 
> That's could be one way.
> 
>> 10) If customized, is it possible to reflect the difference in allowed
>> values in the help strings (and client validation)?
> 
> Maybe the server side can tell the client "hey, I'm supporting this set of 
> values"
> Then client can use it as the help string.
> # This change may need bp.

@PCM That was what I was thinking… having an API that the client can use to 
determine the server's capabilities. Was thinking of that for a summit session, 
but not sure how the community feels about doing something like that, and 
whether the complexity is worth the effort.  Mark? Kyle?

I had similar thoughts about that info being used for the attribute values,

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Bob Ball

From: Alessandro Pilotti [apilo...@cloudbasesolutions.com]
Sent: 12 October 2013 20:21
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Hyper-V] Havana status

> 1) All the drivers will still be part of Nova.
> 
> 2) One official project (nova-drivers-incubator?) or more than one will be 
> created for
> the purpose of supporting a leaner and faster development pace of the drivers.

I still think that all drivers should be treated equally; if we are to create a 
separate repository for drivers then I think we should officially split the 
driver repository out, including KVM and XenAPI drivers.  Certainly the XenAPI 
team have experienced a very similar issue with the time it takes to get 
reviews in - although I fully accept it may be to a lesser degree than Hyper-V.

> 3) Current driver sub-project teams will informally elect their maintainer(s) 
> which will
> have +2a rights on the new project or specific subtrees.

The more I've thought about it the more I think we need common +2a's across all 
drivers to identify commonality before a one-big-drop and not per-driver +2a's. 
 Perhaps if there were dedicated "nova driver core" folk then the pace of driver 
development would be increased without sacrificing the good things we get by 
having people familiar with the expectations of the API, how other drivers 
implement things or identifying code that should not be written in drivers but 
moved to oslo or the main nova repository for the good of everyone rather than 
the specific driver.

> 4) Periodically, code from the new project(s) must be merged into Nova.
> Only Nova core reviewers will have obviously +2a rights here.
> I propose to do it on scheduled days before every milestone, differentiated 
> per
> driver to distribute the review effort (what about also having Nova core 
> reviewers
> assigned to each driver? Dan was suggesting something similar some time ago).

I don't think this is maintainable.  Assuming there is a high rate of change in 
the drivers, the number of changes that would likely need to be reviewed before 
each milestone could be huge and completely impossible to review - which could 
cause an even bigger issue.  I worry that if the Nova core reviewers aren't 
convinced by the code coming from this separate repository their choice would 
either be to reject the lot or just accept it without review.

> 5) All drivers will be treated equally and new features and bug fixes for 
> master
> (except security ones) should land in the new project before moving to Nova.

Perhaps I don't understand this in relation to "nova-drivers-incubator" - but 
are you suggesting that new APIs are added to Nova, but their implementation is 
only added to nova-drivers-incubator until the scheduled day before the 
milestone, when the functionality can be moved into Nova?  If so I'm not sure 
of the benefit of having any drivers in Nova at all is, since the expectation 
would be you must always deploy the matching nova-drivers to get API 
compatibility.  Or are you suggesting that it is the developers choice about 
whether to push the new code to both repositories at the same time, or whether 
they want to wait for the big merge pre-milestone?

> 6) CI gates for all drivers, once available, will be added to the new project 
> as
> well. Only drivers code with a CI gate will be merged in Nova (starting with 
> the
> Icehouse release as we already discussed).

I think we can all agree on this one - although I thought the Icehouse 
expectation was not CI gate, but unit test gate and automated test (possibly 
through an external system) posting review comments.  Having said that, I would 
be very happy with enforcing CI gate for all drivers.

> 7) Active communication should be maintained between the Nova core team
> and the drivers maintainers. This means something more than: "I wrote it on 
> the
> ML didn't you see it?" :-)

Definitely.  I'd suggest an IRC meeting - they are fun.

Bob


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Dan Smith
> 4) Periodically, code from the new project(s) must be merged into Nova. 
> Only Nova core reviewers will have obviously +2a rights here.
> I propose to do it on scheduled days before every milestone, differentiated 
> per driver to distribute the review effort (what about also having Nova core 
> reviewers assigned to each driver? Dan was suggesting something similar some 
> time ago).

FWIW, this is not what I had intended. I think that if you want (or
need) to be in the "extras" tree, then that's where you are. Periodic
syncs generate extra work and add the previously-mentioned confusion of
"which driver is the official/best one?"

I think that any driver that gets put into -extra gets removed from the
mainline nova tree. If that driver has full CI testing and wants to be
moved into the main tree, then that happens once.

Having commit rights to the extras tree and periodic nearly-unattended
or too-large-to-reasonably-review sync patches just sidesteps the
process. That gains you the recognition of being in the tree, without
having to undergo the aggressive review and participate in the planning
and coordination of the process that goes with it. That is NOT okay, IMHO.

Sorry if that was unclear with the previous discussion. I'm not sure who
else was thinking that those drivers would exist in both places, but I
definitely was not.

--Dan



Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Joe Gordon
On Sat, Oct 12, 2013 at 12:21 PM, Alessandro Pilotti <
apilo...@cloudbasesolutions.com> wrote:

>
>
> On 12.10.2013, at 20:22, "Dan Smith"  wrote:
>
> >> From the user perspective, splitting off the projects seems to be
> >> focussing on the ease of commit compared to the final user
> >> experience.
> >
> > I think what you describe is specifically the desire that originally
> > spawned the thread: making the merging of changes to the hyper-v driver
> > faster by having them not reviewed by the rest of the Nova team. It
> > seems to be what the hyper-v developers want, not necessarily what the
> > Nova team as a whole wants.
> >
> >> An 'extras' project without *strong* testing co-ordination with
> >> packagers such as SUSE and RedHat would end up with the consumers of
> >> the product facing the integration problems rather than resolving
> >> where they should be, within the OpenStack project itself.
> >
> > I don't think splitting out to -extras means that it loses strong
> > testing coordination (note that strong testing coordination does not
> > exist with the hyper-v driver at this point in time). Every patch to the
> > -extras tree could still be unit (and soon, integration) tested against
> > the current nova tree, using the proposed patch applied to the -extras
> > tree. It just means that a change against nova wouldn't trigger the
> > same, which is why the potential for "catch up" behavior would be
> required.
> >
> >> I am sympathetic to the 'extra' drivers problem such as Hyper-V and
> >> powervm, but I do not feel the right solution is to split.
> >>
> >> Assuming there is a summit session on how to address this, I can
> >> arrange a user representation in that session.
> >
> > Cool, I really think we're at the point where we know the advantages and
> > disadvantages of the various options and further face-to-face discussion
> > at the summit is what is going to move us to the next stage.
> >
>
> I agree. Looks like we are converging towards a common ground. I'm summing
> it up here, including a few additional details, for the benefit of who will
> not join us in HK (sorry, we'll party for you as well :-)):
>


This sounds like a very myopic solution to the issue you originally raised,
and I don't think it will solve the underlying issues.



Taking a step back, you originally raised a concern about how we prioritize
reviews with the havana-rc-potential tag.

"In the past weeks we diligently marked bugs that are related to Havana
features with the "havana-rc-potential" tag, which at least for what Nova
is concerned, had absolutely no effect.
Our code is sitting in the review queue as usual and, not being tagged for
a release or prioritised, there's no guarantee that anybody will take a
look at the patches in time for the release. Needless to say, this starts
to feel like a Kafka novel. :-)" [1]

If the issue is just better bug triage and prioritizing reviews, help us do
that!

[2] shows the current status of your hyper-v havana-rc-potential bugs.
Currently there are only 7 bugs that have both tags.  Of those 7, 3 have no
pending patches to trunk, and one doesn't sound like it warrants a back
port (https://bugs.launchpad.net/nova/+bug/1220256).

Looking at the remaining 4, one is marked as a WIP by you (
https://bugs.launchpad.net/nova/+bug/1231911
https://review.openstack.org/#/c/48645/) which leaves three patches for
nova team to review.  Three reviews open for a week doesn't sound like an
issue that warrants a whole new repository.

You went on to clarify your position.

"I'm not putting into discussion how much and well you guys are working (I
actually firmly believe that you DO work very well), I'm just discussing
about the way in which blueprints and bugs get prioritised.



On the other side, to get our code reviewed and merged we are always
dependent on the good will and best effort of core reviewers that don't
necessarily know or care about specific driver, plugin or agent internals.
This brings to even longer review cycles even considering that reviewers
are clearly doing their best in understanding the patches and we couldn't
be more thankful.

"Best effort" has also a very specific meaning: in Nova all the Havana
Hyper-V blueprints were marked as "low priority" (which can be translated
in: "the only way to get them merged is to beg for reviews or maybe commit
them on day 1 of the release cycle and pray") while most of the Hyper-V
bugs had no priority at all (which can be translated in "make some noise on
the ML and IRC or nobody will care"). :-)

This reality unfortunately applies to most of the sub-projects (not only
Hyper-V) and can IMHO be solved only by delegating more autonomy to the
sub-project teams on their specific area of competence across OpenStack as
a whole. Hopefully we'll manage to find a solution during the design summit,
as we are definitely not the only ones feeling this way, judging by
various threads in this ML." [3]


Once again you raise the issue of bug triage.

[openstack-dev] [Swift] Porting swiftclient to Python 3?

2013-10-12 Thread Brian Curtin
Hi,

I just had a look at the python-swiftclient reviews in Gerrit and noticed that 
Kui Shi and I are working on the same stuff, but I'm guessing Kui didn't see 
that I had proposed a number of Python 3 changes from a few weeks ago. Now that 
there are reviews and a web of dependent branches being maintained by both of 
us, how should this proceed?

I don't want to waste anyone's time with two sets of branches to develop and 
two sets of patches to review.

Brian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Porting swiftclient to Python 3?

2013-10-12 Thread John Dickinson
Co-reviewing each other's patches and discussing changes in #openstack-swift 
would be good ways to ensure that you are working in the same direction.

--John


On Oct 12, 2013, at 3:49 PM, Brian Curtin  wrote:

> Hi,
> 
> I just had a look at the python-swiftclient reviews in Gerrit and noticed 
> that Kui Shi and I are working on the same stuff, but I'm guessing Kui didn't 
> see that I had proposed a number of Python 3 changes from a few weeks ago. 
> Now that there are reviews and a web of dependent branches being maintained 
> by both of us, how should this proceed?
> 
> I don't want to waste anyone's time with two sets of branches to develop and 
> two sets of patches to review.
> 
> Brian
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


On 13.10.2013, at 01:26, "Joe Gordon" <joe.gord...@gmail.com> wrote:




On Sat, Oct 12, 2013 at 12:21 PM, Alessandro Pilotti <apilo...@cloudbasesolutions.com> wrote:


On 12.10.2013, at 20:22, "Dan Smith" <d...@danplanet.com> wrote:

>> From the user perspective, splitting off the projects seems to be
>> focussing on the ease of commit compared to the final user
>> experience.
>
> I think what you describe is specifically the desire that originally
> spawned the thread: making the merging of changes to the hyper-v driver
> faster by having them not reviewed by the rest of the Nova team. It
> seems to be what the hyper-v developers want, not necessarily what the
> Nova team as a whole wants.
>
>> An 'extras' project without *strong* testing co-ordination with
>> packagers such as SUSE and RedHat would end up with the consumers of
>> the product facing the integration problems rather than resolving
>> where they should be, within the OpenStack project itself.
>
> I don't think splitting out to -extras means that it loses strong
> testing coordination (note that strong testing coordination does not
> exist with the hyper-v driver at this point in time). Every patch to the
> -extras tree could still be unit (and soon, integration) tested against
> the current nova tree, using the proposed patch applied to the -extras
> tree. It just means that a change against nova wouldn't trigger the
> same, which is why the potential for "catch up" behavior would be required.
>
>> I am sympathetic to the 'extra' drivers problem such as Hyper-V and
>> powervm, but I do not feel the right solution is to split.
>>
>> Assuming there is a summit session on how to address this, I can
>> arrange a user representation in that session.
>
> Cool, I really think we're at the point where we know the advantages and
> disadvantages of the various options and further face-to-face discussion
> at the summit is what is going to move us to the next stage.
>

I agree. Looks like we are converging towards a common ground. I'm summing it
up here, including a few additional details, for the benefit of those who will
not join us in HK (sorry, we'll party for you as well :-)):


This sounds like a very myopic solution to the issue you originally raised, and 
I don't think it will solve the underlying issues.

The solution I just proposed was based on the feedback received in this thread,
trying to make everybody happy, so if you find it "myopic" please be my guest
and find a better one that suits all the different positions. :-)



Taking a step back, you originally raised a concern about how we prioritize 
reviews with the havana-rc-potential tag.

"In the past weeks we diligently marked bugs that are related to Havana 
features with the "havana-rc-potential" tag, which at least for what Nova is 
concerned, had absolutely no effect.
Our code is sitting in the review queue as usual and, not being tagged for a 
release or prioritised, there's no guarantee that anybody will take a look at 
the patches in time for the release. Needless to say, this starts to feel like 
a Kafka novel. :-)" [1]

If the issue is just better bug triage and prioritizing reviews, help us do 
that!

[2] shows the current status of your hyper-v havana-rc-potential bugs. 
Currently there are only 7 bugs that have both tags.  Of those 7, 3 have no 
pending patches to trunk, and one doesn't sound like it warrants a back port 
(https://bugs.launchpad.net/nova/+bug/1220256).

Looking at the remaining 4, one is marked as a WIP by you 
(https://bugs.launchpad.net/nova/+bug/1231911 
https://review.openstack.org/#/c/48645/) which leaves three patches for nova 
team to review.  Three reviews open for a week doesn't sound like an issue that 
warrants a whole new repository.


Sure, the volume of reviews is not the subject here. This is just the icing on
the cake of something that has been going on for a while (see the Havana
feature freeze).

You went on to clarify your position.

"I'm not putting into discussion how much and well you guys are working (I 
actually firmly believe that you DO work very well), I'm just discussing about 
the way in which blueprints and bugs get prioritised.



On the other hand, to get our code reviewed and merged we are always dependent
on the good will and best effort of core reviewers who don't necessarily know
or care about specific driver, plugin or agent internals. This leads to even
longer review cycles, even though reviewers are clearly doing their best to
understand the patches and we couldn't be more thankful.

"Best effort" has also a very specific meaning: in Nova all the Havana Hyper-V 
blueprints were marked as "low priority" (which can be translated in: "the only 
way to get them merged is to beg for reviews or maybe commit them on day 1 of 
the release cycle and pray") while most of the Hyper-V bugs had no priority at 
all (which can be translated in "make some noise on the ML and IRC or nobody 
will care")

Re: [openstack-dev] [Hyper-V] Havana status

2013-10-12 Thread Alessandro Pilotti


On 13.10.2013, at 01:09, "Dan Smith"  wrote:

>> 4) Periodically, code from the new project(s) must be merged into Nova. 
>> Obviously only Nova core reviewers will have +2a rights here.
>> I propose to do it on scheduled days before every milestone, differentiated 
>> per driver to distribute the review effort (what about also having Nova core 
>> reviewers assigned to each driver? Dan was suggesting something similar some 
>> time ago).
> 
> FWIW, this is not what I had intended. I think that if you want (or
> need) to be in the "extras" tree, then that's where you are. Periodic
> syncs generate extra work and add the previously-mentioned confusion of
> "which driver is the official/best one?"
> 
> I think that any driver that gets put into -extra gets removed from the
> mainline nova tree. If that driver has full CI testing and wants to be
> moved into the main tree, then that happens once.
> 

"extra" sounds to me like a ghetto for drivers which are not good enough to 
stay in Nova. No thanks.

My suggestion in the previous email was just to also satisfy those who wanted
to keep the drivers in Nova.
At this point, based on your reply, why not a clear and simple 
"nova-driver-hyperv" project as Russell was initially suggesting? What's the 
practical difference from "extra"?

It'd be an official project, we won't have to beg you for reviews, you won't 
need to understand the Hyper-V internals, the community would still support it 
(definitely more than now), users would have TIMELY bug fixes and new features 
instead of this mess, the sun would shine, etc etc.

As a side note, the stability of the driver's interface is IMO an irrelevant
issue here compared to all the drawbacks of the opposite approach.

> Having commit rights to the extras tree and periodic nearly-unattended
> or too-large-to-reasonably-review sync patches just sidesteps the
> process. That gains you the recgonition of being in the tree, without
> having to undergo the aggressive review and participate in the planning
> and coordination of the process that goes with it. That is NOT okay, IMHO.
> 
> Sorry if that was unclear with the previous discussion. I'm not sure who
> else was thinking that those drivers would exist in both places, but I
> definitely was not.
> 
> --Dan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Porting swiftclient to Python 3?

2013-10-12 Thread Brian Curtin
On Oct 12, 2013, at 5:59 PM, John Dickinson  wrote:

> Co-reviewing each other's patches and discussing changes in #openstack-swift 
> would be good ways to ensure that you are working in the same direction.
> 
> --John

Kui ended up abandoning his changes and I'm going to review them to incorporate 
them into mine, then we'll move forward that way. Easy enough :)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-12 Thread Matt Riedemann
There is also a tempest patch now to relax some of the libvirt-specific
keys checked in the new diagnostics tests:

https://review.openstack.org/#/c/51412/ 

To relay some of my concerns that I put in that patch:

I'm not sure how I feel about this. It should probably be more generic but 
I think we need more than just a change in tempest to enforce it, i.e. we 
should have a nova patch that changes the doc strings for the abstract 
compute driver method to specify what the minimum keys are for the info 
returned, maybe a doc api sample change, etc?
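
For example, the abstract method's docstring could spell out a minimum
contract along these lines (just a sketch: the key set is my suggestion, not
something we've agreed on, and the exact signature may differ):

class ComputeDriver(object):
    # Hypothetical docstring for the abstract method in nova/virt/driver.py;
    # the minimum key set named here is a suggestion, not an existing contract.
    def get_diagnostics(self, instance):
        """Return diagnostics data about the given instance.

        :param instance: the instance to inspect
        :returns: a flat dict of hypervisor-specific counters covering, at
            minimum, CPU time, memory usage and per-device disk and network
            I/O (e.g. cpu0_time, memory, vda_read, vnet0_rx)
        """
        raise NotImplementedError()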

For reference, here is the mailing list post I started on this last week:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html

There are also docs here (these examples use xen and libvirt):

http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

And under procedure 4.4 here:

http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#section_manage-the-cloud


=

I also found this wiki page related to metering and the nova diagnostics 
API:

https://wiki.openstack.org/wiki/EfficientMetering/FutureNovaInteractionModel 


So it seems like, if at some point this will be used with ceilometer, it
should be standardized a bit, which is what the Tempest patch starts, but I
don't want it to get lost there.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Gary Kotton 
To: OpenStack Development Mailing List 
, 
Date:   10/12/2013 01:42 PM
Subject:Re: [openstack-dev] [nova] Looking for clarification on 
the diagnostics API



Yup, it seems to be hypervisor specific. I have added in the VMware
support following your correction in the VMware driver.
Thanks
Gary 

From: Matt Riedemann 
Reply-To: OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>
Date: Thursday, October 10, 2013 10:17 PM
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

Looks like this has been brought up a couple of times:

https://lists.launchpad.net/openstack/msg09138.html

https://lists.launchpad.net/openstack/msg08555.html

But they seem to kind of end up in the same place I already am - it seems 
to be an open-ended API that is hypervisor-specific.



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From: Matt Riedemann/Rochester/IBM
To: "OpenStack Development Mailing List" <openstack-dev@lists.openstack.org>,
Date: 10/10/2013 02:12 PM
Subject: [nova] Looking for clarification on the diagnostics API


Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't 
implement that API.  I started looking at other drivers that did and found 
that libvirt, vmware and xenapi at least had code for the get_diagnostics 
method.  I found that the vmware driver was re-using its get_info method 
for get_diagnostics, which led to bug 1237622 [2] but overall caused some 
confusion about the difference between the compute driver's get_info and 
get_diagnostics methods.  It looks like get_info is mainly just used to get 
the power_state of the instance.

First, the get_info method has a nice docstring for what it needs returned 
[3] but the get_diagnostics method doesn't [4].  From looking at the API 
docs [5], the diagnostics API basically gives an example of values to get 
back which is completely based on what the libvirt driver returns. Looking 
at the xenapi driver code, it looks like it does things a bit differently 
than the libvirt driver (maybe doesn't return the exact same keys, but it 
returns information based on what Xen provides). 

I'm thinking about implementing the diagnostics API for the powervm driver 
but I'd like to try and get some help on defining just what should be 
returned from that call.  There are some IVM commands available to the 
powervm driver for getting hardware resource information about an LPAR so 
I think I could implement this pretty easily.

I think it basically comes down to providing information about the 
processor, memory, storage and network interfaces for the instance but if 
anyone has more background information on that API I'd like to hear it.
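
To make the question concrete, here is roughly the shape I have in mind for
powervm, modeled on the flat key/value dict the libvirt driver returns
(sketch only: the _get_lpar_stats helper is a placeholder for the real IVM
queries and the key names are assumptions):

class PowerVMDriver(object):
    # Sketch only; _get_lpar_stats stands in for the real IVM queries and
    # the key names mirror the flat dict the libvirt driver returns.
    def get_diagnostics(self, instance):
        stats = self._get_lpar_stats(instance)   # placeholder IVM call
        return {
            'cpu0_time': stats['cpu_time'],         # processor
            'memory': stats['memory_kb'],           # memory
            'vda_read': stats['disk_read_bytes'],   # storage
            'vda_write': stats['disk_write_bytes'],
            'vnet0_rx': stats['net_rx_bytes'],      # network interfaces
            'vnet0_tx': stats['net_tx_bytes'],
        }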

[1] 
https://github.com/openstack/tempest/commit/da0708587432e47f85241201968e6402190f0c5d

[2] https://bugs.launchpad.net/nova/+bug/1237622
[3] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L144
[4] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L299
[5] http://paste.openstac

[openstack-dev] Advanced Services and Core plugins

2013-10-12 Thread Rajesh Mohan
I wrote a vendor specific fwaas-driver for our firewall (I also wrote the
iptables reference fwaas driver).

As per my understanding, the driver demux happens in the L3 agent. Since we
also wanted to enable our physical appliance for FWaaS, I had to extend the
L3 agent to insert the firewall into the Neutron router.

Ignoring service-insertion changes for now, if we just take the
fwaas-driver demux happening in L3 agent, will this work with all
core-plugins?

When I look at other core-plugins (NVP or Big Switch), I see that the L3 agent
is not used (or not started). Does this mean that we have to have multiple
implementations of the fwaas-driver?

As a firewall vendor, I would have expected that I write one driver and
it would work with all core-plugins.
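
For reference, the driver surface in question is small and does not itself
depend on any particular core plugin; a simplified sketch (method names
follow the reference iptables driver, treat the exact signatures as
approximations):

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class FwaasDriverBase(object):
    @abc.abstractmethod
    def create_firewall(self, apply_list, firewall):
        """Apply the firewall's rules on the routers in apply_list."""

    @abc.abstractmethod
    def update_firewall(self, apply_list, firewall):
        """Re-apply the rules after the firewall or its policy changed."""

    @abc.abstractmethod
    def delete_firewall(self, apply_list, firewall):
        """Remove the firewall from the routers in apply_list."""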

Is there some way this could be done? Is it possible with multi-host mode,
where one host runs the L3 agent (and all advanced services can be used
through this host) while other network services are provided through
vendor-specific core plugins?

I am not sure if this is a good analogy: in Nova, we have different
hypervisors being supported at the same time. For Neutron, can we have
different core-plugins working at the same time?

With the focus slowly increasing on advanced services and service
insertion, is there a framework that we could follow that will work with
all core-plugins?

Thanks,
-Rajesh Mohan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] baremetal nova boot issue

2013-10-12 Thread Robert Collins
Have you read the docs about nova baremetal? The questions you're
asking - about bootstrapping and about a baremetal agent - don't make
any sense to me ;)

These are the needed steps:
 - install openstack
 - build a deploy ramdisk and kernel
 - put them in glance
 - configure nova baremetal as your driver
 - configure a flavor with the deploy ramdisk and kernel
 - install tftpd pointing at /tftproot
 - register machines
 - add your own image to glance (must be a partition image with
separate kernel and ramdisk, not a whole disk image)

nova boot, done.
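
For the glance steps, something along these lines works (a sketch against the
glanceclient v1 API; the endpoint, token and file names are placeholders):

import glanceclient

glance = glanceclient.Client('1', 'http://GLANCE_HOST:9292', token='TOKEN')

# The deploy kernel and ramdisk built earlier.
kernel = glance.images.create(name='deploy-kernel', is_public=True,
                              disk_format='aki', container_format='aki',
                              data=open('deploy.kernel', 'rb'))
ramdisk = glance.images.create(name='deploy-ramdisk', is_public=True,
                               disk_format='ari', container_format='ari',
                               data=open('deploy.ramdisk', 'rb'))

# Your own image: a partition image that references its own kernel and
# ramdisk through image properties.
glance.images.create(name='my-image', is_public=True,
                     disk_format='qcow2', container_format='bare',
                     data=open('my-image.qcow2', 'rb'),
                     properties={'kernel_id': kernel.id,
                                 'ramdisk_id': ramdisk.id})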

-Rob



On 12 October 2013 13:58, Ravikanth Samprathi  wrote:
> I fixed the quantum issue. Now I am able to successfully do 'nova boot':
>
> root@os:/etc/init.d# nova boot --flavor 9 --image
> 278f9721-1354-4c04-9798-65835398e027 mybmnode
> +-+--+
> | Property| Value
> |
> +-+--+
> | status  | BUILD
> |
> | updated | 2013-10-12T00:56:28Z
> |
> | OS-EXT-STS:task_state   | scheduling
> |
> | OS-EXT-SRV-ATTR:host| None
> |
> | key_name| None
> |
> | image   | my-image
> |
> | hostId  |
> |
> | OS-EXT-STS:vm_state | building
> |
> | OS-EXT-SRV-ATTR:instance_name   | instance-0020
> |
> | OS-EXT-SRV-ATTR:hypervisor_hostname | None
> |
> | flavor  | my-baremetal-flavor
> |
> | id  | beeb7ffd-ed81-44e0-91ae-62435442769a
> |
> | security_groups | [{u'name': u'default'}]
> |
> | user_id | 251bd0a9388a477b9c24c99b223e7b2a
> |
> | name| mybmnode
> |
> | adminPass   | xWurDrbi5E8X
> |
> | tenant_id   | 8a34123d83824f3ea52527c5a28ad81e
> |
> | created | 2013-10-12T00:56:28Z
> |
> | OS-DCF:diskConfig   | MANUAL
> |
> | metadata| {}
> |
> | accessIPv4  |
> |
> | accessIPv6  |
> |
> | progress| 0
> |
> | OS-EXT-STS:power_state  | 0
> |
> | OS-EXT-AZ:availability_zone | nova
> |
> | config_drive|
> |
> +-+--+
> root@os:/etc/init.d#
>
>
> Can you please help me on how to go from here? I think I could do all the
> provisioning steps listed in the baremetal wiki successfully.
>
> How do I now load the images on the baremetal server (bootstrap) and then
> load my own image on the baremetal server?
>
> Thanks
> Ravi
>
>
>
> On Fri, Oct 11, 2013 at 5:41 PM, Ravikanth Samprathi wrote:
>>
>> Hi Joe
>> Thanks, I fixed that; now I see this issue. I have always been
>> confused about this: which credentials should I use? Can you
>> please help?
>>
>> nova-api.log:
>> ==
>> 14 2013-10-11 17:35:44.806 4034 INFO nova.osapi_compute.wsgi.server [-]
>> (4034) accepted ('10.40.0.99', 45381)
>>  15
>>  16 2013-10-11 17:35:44.892 ERROR nova.api.openstack
>> [req-12f8de18-544b-4cde-b46a-55fea30d0057 251bd0a9388a477b9c24c99b223e7b2a
>> 8a34123d83824f3ea52527c5a28ad81e] Caught error: 401 Unauthorized
>>  17
>>  18 This server could not verify that you are authorized to access the
>> document you requested. Either you supplied the wrong credentials (e.g., bad
>> password), or your browser does not understand how to supply the
>> credentials required.
>>  19
>>  20  Authentication required
>>  21 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack Traceback (most
>> recent call last):
>>  22 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack   File
>> "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 81,
>> in __call__
>>  23 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack return
>> req.get_response(self.application)
>>  24 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack   File
>> "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
>>  25 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack application,
>> catch_exc_info=False)
>>  26 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack   File
>> "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in
>> call_application
>>  27 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack app_iter =
>> application(self.environ, start_response)
>>  28 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack   File
>> "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
>>  29 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack return
>> resp(environ, start_response)
>>  30 2013-10-11 17:35:44.892 4034 TRACE nova.api.openstack  

Re: [openstack-dev] Need suggestions and pointers to start contributing for development :

2013-10-12 Thread Mayank Mittal
Thanks Dolph and Mark for the welcome and guidance,

Will get started based on the pointers.

-Mayank


On Fri, Oct 11, 2013 at 8:06 PM, Mark McClain wrote:

>
> On Oct 11, 2013, at 9:14 AM, Dolph Mathews wrote:
>
>
> On Friday, October 11, 2013, Mayank Mittal wrote:
>
>> Hi Teams,
>>
>> Please suggest and guide me on starting to contribute to development. About
>> me - I have been working on L2/L3 protocol, SNMP, NMS development and am
>> ready to contribute as a full-timer to openstack.
>>
>> PS : My interest lies in LB and MPLS. Any pointers to respective teams
>> will help a lot.
>>
>
> Welcome! It sounds like you'd be interested in contributing to neutron:
> https://github.com/openstack/neutron
>
> This should get you pointed in the right direction:
> https://wiki.openstack.org/wiki/How_To_Contribute
>
>
>
>
> Mayank-
>
> Dolph is correct that Neutron matches up with your interests.  Here's bit
> more specific information on Neutron development:
> https://wiki.openstack.org/wiki/NeutronDevelopment
>
> mark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-12 Thread Gary Kotton
Hi,
I agree with Matt here. This is not broad enough. One option is to have a
tempest base class that is overridden for various backend plugins. Then the
test can be hardened for each driver. I am not sure if that is something that has been
talked about.
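
Something along these lines (a rough sketch only; the class names and key
sets are invented for illustration, not actual tempest code):

import unittest


class DiagnosticsTestBase(unittest.TestCase):
    expected_keys = set()  # generic minimum, until the API is standardized

    def _verify_keys(self, diagnostics):
        missing = self.expected_keys - set(diagnostics)
        self.assertFalse(missing, 'diagnostics missing keys: %s' % missing)


class LibvirtDiagnosticsTest(DiagnosticsTestBase):
    # Hardened for libvirt's flat key naming.
    expected_keys = {'cpu0_time', 'memory', 'vda_read', 'vnet0_rx'}


class PowerVMDiagnosticsTest(DiagnosticsTestBase):
    # Another backend overrides with the keys it actually reports.
    expected_keys = {'cpu_time', 'memory_mb'}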
Thanks
Gary

From: Matt Riedemann <mrie...@us.ibm.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Sunday, October 13, 2013 6:13 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

There is also a tempest patch now to relax some of the libvirt-specific keys
checked in the new diagnostics tests:

https://review.openstack.org/#/c/51412/

To relay some of my concerns that I put in that patch:

I'm not sure how I feel about this. It should probably be more generic but I 
think we need more than just a change in tempest to enforce it, i.e. we should 
have a nova patch that changes the doc strings for the abstract compute driver 
method to specify what the minimum keys are for the info returned, maybe a doc 
api sample change, etc?

For reference, here is the mailing list post I started on this last week:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html

There are also docs here (these examples use xen and libvirt):

http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

And under procedure 4.4 here:

http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#section_manage-the-cloud

=

I also found this wiki page related to metering and the nova diagnostics API:

https://wiki.openstack.org/wiki/EfficientMetering/FutureNovaInteractionModel

So it seems like, if at some point this will be used with ceilometer, it should
be standardized a bit, which is what the Tempest patch starts, but I don't want
it to get lost there.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States






From: Gary Kotton <gkot...@vmware.com>
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>,
Date: 10/12/2013 01:42 PM
Subject: Re: [openstack-dev] [nova] Looking for clarification on the
diagnostics API




Yup, it seems to be hypervisor specific. I have added in the VMware support
following your correction in the VMware driver.
Thanks
Gary

From: Matt Riedemann <mrie...@us.ibm.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Thursday, October 10, 2013 10:17 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

Looks like this has been brought up a couple of times:

https://lists.launchpad.net/openstack/msg09138.html

https://lists.launchpad.net/openstack/msg08555.html

But they seem to kind of end up in the same place I already am - it seems to be 
an open-ended API that is hypervisor-specific.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States







From: Matt Riedemann/Rochester/IBM
To: "OpenStack Development Mailing List" <openstack-dev@lists.openstack.org>,
Date: 10/10/2013 02:12 PM
Subject: [nova] Looking for clarification on the diagnostics API



Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't implement 
that API.  I started looking at other drivers that did and found that libvirt, 
vmware and xenapi at least had code for the get_diagnostics method.  I found 
that the vmware driver was re-using its get_info method for get_diagnostics, 
which led to bug 1237622 [2] but overall caused some confusion about the 
difference between the compute driver's get_info and get_diagnostics methods.  
It looks like get_info is mainly just used to get the power_state of the 
instance.

First, the get_info method has a nice docstring for what it needs returned [3] 
but the get_diagnostics method doesn't [4].  From looking at the API docs [5], 
the diagnostics API basically gives an example of values to get back which is 
completely based on what the libvirt driver returns.  Looking at the xenapi 
driver code, it looks like it does things a bit differently than the libvirt 
driver (maybe doesn't return the exact same keys, but it returns information 
based on what Xen provides).