Re: [openstack-dev] [Nova] Future meeting times

2013-12-18 Thread Day, Phil
+1, I would make the 14:00 meeting. I often have good intentions of making the 
21:00 meeting, but it's tough to work it in around family life.


Sent from Samsung Mobile



 Original message 
From: Joe Gordon 
Date:
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [Nova] Future meeting times



On Dec 18, 2013 6:38 AM, "Russell Bryant" <rbry...@redhat.com> wrote:
>
> Greetings,
>
> The weekly Nova meeting [1] has been held on Thursdays at 2100 UTC.
> I've been getting some requests to offer an alternative meeting time.
> I'd like to try out alternating the meeting time between two different
> times to allow more people in our global development team to attend
> meetings and engage in some real-time discussion.
>
> I propose the alternate meeting time as 1400 UTC.  I realize that
> doesn't help *everyone*, but it should be an improvement for some,
> especially for those in Europe.
>
> If we proceed with this, we would meet at 2100 UTC on January 2nd, 1400
> UTC on January 9th, and alternate from there.  Note that we will not be
> meeting at all on December 26th as a break for the holidays.
>
> If you can't attend either of these times, please note that the meetings
> are intended to be supplementary to the openstack-dev mailing list.  In
> the meetings, we check in on status, raise awareness of important
> issues, and progress some discussions with real-time debate, but the
> most important discussions and decisions will always be brought to the
> openstack-dev mailing list, as well.  With that said, active Nova
> contributors are always encouraged to attend and participate if they are
> able.
>
> Comments welcome, especially some acknowledgement that there are people
> that would attend the alternate meeting time.  :-)

I am fine with this, but I will never be attending the 1400 UTC meetings, as I 
live in UTC-8.

>
> Thanks,
>
> [1] https://wiki.openstack.org/wiki/Meetings/Nova
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified Agents - what is the actual problem?

2013-12-18 Thread Clint Byrum
So I've seen a lot of really great discussion of the unified agents, and
it has made me think a lot about the problem that we're trying to solve.

I just wanted to reiterate that we should be trying to solve real problems
and not get distracted by doing things "right" or even "better".

I actually think there are three problems to solve.

* Private network guest to cloud service communication.
* Narrow scope highly responsive lean guest agents (Trove, Savanna).
* General purpose in-instance management agent (Heat).

Since the private network guests problem is the only one they all share,
perhaps this is where the three projects should collaborate, and the
other pieces should be left to another discussion.

Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template

2013-12-18 Thread bjzzu...@gmail.com
Hi Arnaud, 

It's really good to know that your team is proposing the vCenter driver with 
OVA + Glance datastore backend support.  Thanks for sharing the information.  OVA 
would be a good choice, as it frees users from the flat-image-only limitation of 
the current driver.

But in my opinion, it need not conflict with deploying from a template. From an 
end-user perspective, if there is already a set of templates within vCenter, it 
is useful to have OpenStack deploy VMs from them directly.  The user can create 
an empty image in Glance whose metadata points to the template name, and boot a 
VM from it.  Alternatively, the user can generate an *.ova with a 
stream-optimized VMDK, place it in a certain datastore, and deploy a VM from an 
image location pointing to that datastore.  These are two different usage 
scenarios, in my understanding.

And to go further, if there were some mechanism for OpenStack to sync existing 
vCenter VM templates into Glance images, it would make this function even more 
useful.




Best Regards
Zarric(Zhu Zhu)

From: Arnaud Legendre
Date: 2013-12-18 01:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template
Hi Qing Xin,



We are planning to address the vCenter template issue by leveraging the OVF/OVA 
capabilities.
Kiran's implementation is tied to a specific vCenter and requires adding Glance 
properties that are not generic.
For existing templates, the workflow will be:
. generate an *.ova tarball (containing the *.ovf descriptor + *.vmdk 
stream-optimized) out of the template,
. register the *.ova as a Glance image location (using the VMware Glance driver 
see bp [1]) or simply upload the *.ova to another Glance store,
. The VMware driver in Nova will be able to consume the *.ova (either through 
the location or by downloading the content to the cache):  see bp [2]. However, 
for Icehouse, we are not planning to actually consume the *.ovf descriptor 
(work scheduled for the J/K release).
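For the first step, note that an OVA is simply a tar archive whose first member 
is the *.ovf descriptor. A minimal packaging sketch (not part of the blueprint 
code; the file names are placeholders):

    import tarfile

    def make_ova(ovf_path, vmdk_path, ova_path):
        # An OVA is an uncompressed tar; the .ovf descriptor must come first.
        with tarfile.open(ova_path, "w") as tar:
            tar.add(ovf_path, arcname="template.ovf")
            tar.add(vmdk_path, arcname="template-disk1.vmdk")

    make_ova("template.ovf", "template-disk1.vmdk", "template.ova")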


[1]  
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend
[2] https://blueprints.launchpad.net/nova/+spec/vmware-driver-ova-support 


If you have questions about [1], please send me an email. For [2], please reach 
vuil.




Thanks,
Arnaud





From: "Shawn Hartsock" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, December 16, 2013 9:37:34 AM
Subject: Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template



IIRC someone who shows up at 
https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Meetings is planning on 
working on that again for Icehouse-3 but there's some new debate on the best 
way to implement the desired effect. The goal of that change would be to avoid 
streaming the disk image out of vCenter for the purpose of then streaming the 
same image back into the same vCenter. That's really inefficient.


So there's a Nova level change that could happen (that's the patch you saw) and 
there's a Glance level change that could happen, and there's a combination of 
both approaches that could happen.


If you want to discuss it informally with the group that's looking into the 
problem I could probably make sure you end up talking to the right people on 
#openstack-vmware or if you pop into the weekly team meeting on IRC you could 
mention it during open discussion time.




On Mon, Dec 16, 2013 at 3:27 AM, Qing Xin Meng  wrote:

I saw a commit for Deploying from VMware vCenter template and found it's 
abandoned.
https://review.openstack.org/#/c/34903 

Anyone knows the plan to support the deployment from VMware vCenter template?


Thanks!



Best Regards

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







-- 

# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Subject: [nova][vmware] VMwareAPI sub-team status 2013-12-08

2013-12-18 Thread Gary Kotton
Would it be possible that an additional core please take a look at 
https://review.openstack.org/#/c/51793/?
Thanks
Gary

From: Shawn Hartsock <harts...@acm.org>
Date: Wednesday, December 18, 2013 6:32 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Subject: [openstack-dev][nova][vmware] VMwareAPI sub-team status 
2013-12-08


Greetings Stackers!

BTW: Reviews by fitness at the end.

It's Wednesday so it's time for me to cheer-lead for our VMwareAPI subteam. Go 
team! Our normal Wednesday meetings fall on December 25th and January 1st 
coming up so, no meetings until January 8th. If there's a really strong 
objection to that we can organize an impromptu meeting.

Here are the community priorities so far for Icehouse.

== Blueprint priorities ==

Icehouse-2

Nova

*. 
https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management

*. 
https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support

*. 
https://blueprints.launchpad.net/nova/+spec/autowsdl-repair

Glance

*. 
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend

Icehouse-3

*. 
https://blueprints.launchpad.net/nova/+spec/config-validation-script


== Bugs by priority: ==

The priority here is an aggregate, Nova Priority / VMware Driver priority where 
the priorities are determined independently.


* High/Critical, needs review : 'vmware driver does not work with more than one 
datacenter in vC'

https://review.openstack.org/62587

* High/High, needs one more +2/approval : 'VMware: NotAuthenticated occurred in 
the call to RetrievePropertiesEx'

https://review.openstack.org/61555

* High/High, needs review : 'VMware: spawning large amounts of VMs concurrently 
sometimes causes "VMDK lock" error'

https://review.openstack.org/58598

* High/High, needs review : 'VMWare: AssertionError: Trying to re-send() an 
already-triggered event.'

https://review.openstack.org/54808

* High/High, needs review : 'VMware: timeouts due to nova-compute stuck at 100% 
when using deploying 100 VMs'

https://review.openstack.org/60259

[openstack-dev] [swift][python-switfclient][oslo] bufferedhttp in glance and oslo

2013-12-18 Thread Amala Basha Alungal
Hi,

There have been a few discussions/patches/reviews on pulling bufferedhttp.py
(currently in use in swift) into python-swiftclient and even into oslo, so that
we can leverage features like buffering HTTP headers and using
'Expect: 100-continue', which are currently not supported by the httplib
library. Please take a look at the patch sets that address this in swift and
oslo.

A use case of this approach is in python-swiftclient, where it makes use of the
100-continue header to receive a fast fail during an upload, rather than
uploading the entire chunk before learning of an auth failure.
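To illustrate the idea, here is a minimal sketch of the fast-fail handshake over
a raw socket (this is not the swift/oslo bufferedhttp code; host, path and token
are placeholders, and body is a bytes object):

    import socket

    def put_with_expect_continue(host, port, path, token, body):
        sock = socket.create_connection((host, port))
        try:
            headers = ("PUT %s HTTP/1.1\r\n"
                       "Host: %s\r\n"
                       "X-Auth-Token: %s\r\n"
                       "Content-Length: %d\r\n"
                       "Expect: 100-continue\r\n\r\n"
                       % (path, host, token, len(body)))
            sock.sendall(headers.encode("ascii"))
            # Read the interim response *before* sending any body bytes.
            status = sock.recv(4096).decode("ascii", "replace").split("\r\n")[0]
            if status.split()[1] != "100":
                return status     # e.g. "HTTP/1.1 401 Unauthorized": fast fail
            sock.sendall(body)    # server agreed, stream the chunk
            return sock.recv(4096).decode("ascii", "replace").split("\r\n")[0]
        finally:
            sock.close()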

The intention behind this mail is to gain some traction around these
patches which have been hanging around for quite some time now.



-- 
Thanks And Regards
Amala Basha
+91-7760972008
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Jay Pipes

On 12/18/2013 12:34 PM, Doug Hellmann wrote:

I have more of an issue with a project failing *after* becoming
integrated than during incubation. That's why we have the incubation
period to begin with. For the same reason, I'm leaning towards allowing
projects into incubation without a very diverse team, as long as there
is some recognition that they won't be able to graduate in that state,
no matter the technical situation.


This precisely sums up my view as well.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2013-12-18 Thread Yi Sun

Ian,
Could you unlock your doc at 
https://docs.google.com/document/d/16DDJLYHxMmbCPO5LxW_kp610oj4goiic_oTakJiXjTs?

It requires permission to read.
Thanks
Yi

On 12/18/13, 4:20 AM, Ian Wells wrote:
A Neutron network is analogous to a wire between ports.  We can do 
almost everything with this wire - we can pass both IP and non-IP 
traffic, I can even pass MPLS traffic over it (yes, I tried).  For no 
rational reason, at least if you live north of the API, I sometimes 
can't pass VLAN traffic over it.  You would think this would be in the 
specification for what a network is, but as it happens I don't think 
we have a specification for what a network is in those terms.


I have a counterproposal that I wrote up yesterday [1]. This is the 
absurdly simple approach, taking the position that implementing trunks 
*should* be easy.  That's actually not such a bad position to take, 
because the problem lies with certain plugins (OVS-based setups, 
basically) - it's not a problem with Neutron.


It's very uncompromising, though - it just allows you to request a 
VLAN-clean network.  It would work with OVS code because it allows 
plugins to decline a request, but it doesn't solve the VLAN problem 
for you, it just ensures that you don't run somewhere where your 
application doesn't work, and gives plugins with problems an 
opportunity for special case code.  You could expand it so that you're 
requesting either a VLAN-safe network or a network that passes 
*specified* VLANs - which is the starting position of Eric's document, 
a plugin-specific solution to a plugin-specific problem.


I accept that, for as long as we use VLAN based infrastructure, we 
have to accommodate the fact that VLANs are a special case, but this 
is very much an artifact of the plugin implementation - Linux bridge 
based network infrastructure simply doesn't have this problem, for 
instance.


On 17 December 2013 06:17, Isaku Yamahata wrote:


- 2 Modeling proposal
  What's the purpose of trunk network?
  Can you please add a use case that trunk network can't be
optimized away?


Even before I read the document I could list three use cases.  Eric's 
covered some of them himself.


The reasons you might want to have a trunked network passing VLAN traffic:
1: You're replicating a physical design for simulation purposes [2]

2: There are any number of reasons to use VLANs in a physical design, 
but generally it's a port reduction thing.  In Openstack, clearly I 
can do this a different way - instead of using 30 VLANs over one 
network with two ports, I can use 30 networks with two ports each.  
Ports are cheaper when you're virtual, but they're not free - KVM has 
a limitation of, from memory, 254 ports per VM. So I might well still 
want to use VLANs.  I could arbitrarily switch to another encap 
technology, but this is the tail wagging the dog - I have to change my 
design because Neutron's contract is inconsistent.


3: I want to condense many tenant networks into a single VM or 
physical box because I'm using a single VM to offer logically 
separated services to multiple tenants. This has all the points of (2) 
basically, that VLANs are not the only encap I could use, but they're 
the conventional one and widely supported.  Provider networks do 
actually offer the functionality you need for this already - if you're 
talking physical - but they would, I think, be awkward to use in 
practice, and they would eat NIC ports on your hosts.


--
Ian.

[1] 
https://docs.google.com/document/d/16DDJLYHxMmbCPO5LxW_kp610oj4goiic_oTakJiXjTs
[2] http://blogs.cisco.com/wp-content/uploads/network1-550x334.png - a 
network simulator (search for 'Cisco VIRL'). Shameless plug, sorry, 
but it's an Openstack based application and I'm rather proud of it.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack]

2013-12-18 Thread Ganpat Agarwal
Thank you Dean and Anne for your responses.


On Wed, Dec 18, 2013 at 8:27 PM, Dean Troyer  wrote:

> On Wed, Dec 18, 2013 at 8:23 AM, Anne Gentle <
> annegen...@justwriteclick.com> wrote:
>
>> Also, Dean correct me if I'm wrong, but I don't believe devstack does a
>> precise milestone release. To guestimate it, you could figure out what
>>
>
> This is correct, only stable/* releases.
>
>
>> devstack looked like on 12/5 and get all the matching _BRANCH=
>> like this example localrc does:
>> https://gist.github.com/everett-toews/4004430 I use this method to get
>> stable/havana for example. Hat tip to Everett's blog at
>> http://blog.phymata.com/2012/11/08/openstack-devstack-on-the-rackspace-open-cloud/
>> .
>>
>
> Actually, to do a stable/* release, check out that branch of DevStack, it
> has the correct stable branch settings in stackrc already.  But to do
> milestones, you would want DevStack somewhat close (for example, current
> DevStack master is still OK for the I-1 milestone) then set the XX_BRANCH
> variables manually.
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Mike Perez
On Wed, Dec 18, 2013 at 2:40 AM, Thierry Carrez wrote:

> I guess there are 3 options:
>
> 1. Require diversity for incubation, but find ways to bless or recommend
> projects pre-incubation so that this diversity can actually be achieved
>
> 2. Do not require diversity for incubation, but require it for
> graduation, and remove projects from incubation if they fail to attract
> a diverse community
>
> 3. Do not require diversity at incubation time, but at least judge the
> interest of other companies: are they signed up to join in the future ?
> Be ready to drop the project from incubation if that was a fake support
> and the project fails to attract a diverse community
>
> Personally I'm leaning towards (3) at the moment. Thoughts ?



3 gets my vote. As I've voiced over and over on the Barbican incubation
thread, diversity is going to help the project gain contributors. You'll have a
hard time gaining contributors if the project consists of one-sidedly designed
solutions that only some companies can use.

-Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2013-12-18 Thread Isaku Yamahata

Hi Ian.

I can't see your proposal. Can you please make it publicly viewable?
Some comments inlined below.

On Wed, Dec 18, 2013 at 01:20:50PM +0100,
Ian Wells  wrote:

> A Neutron network is analogous to a wire between ports.  We can do almost
> everything with this wire - we can pass  both IP and non-IP traffic, I can
> even pass MPLS traffic over it (yes, I tried).  For no rational reason, at
> least if you live north of the API, I sometimes can't pass VLAN traffic
> over it.  You would think this would be in the specification for what a
> network is, but as it happens I don't think we have a specification for
> what a network is in those terms.
> 
> I have a counterproposal that I wrote up yesterday [1].  This is the
> absurdly simple approach, taking the position that implementing trunks
> *should* be easy.  That's actually not such a bad position to take, because
> the problem lies with certain plugins (OVS-based setups, basically) - it's
> not a problem with Neutron.
> 
> It's very uncompromising, though - it just allows you to request a
> VLAN-clean network.  It would work with OVS code because it allows plugins
> to decline a request, but it doesn't solve the VLAN problem for you, it
> just ensures that you don't run somewhere where your application doesn't
> work, and gives plugins with problems an opportunity for special case
> code.  You could expand it so that you're requesting either a VLAN-safe
> network or a network that passes *specified* VLANs - which is the starting
> position of Eric's document, a plugin-specific solution to a
> plugin-specific problem.
> 
> I accept that, for as long as we use VLAN based infrastructure, we have to
> accommodate the fact that VLANs are a special case, but this is very much
> an artifact of the plugin implementation - Linux bridge based network
> infrastructure simply doesn't have this problem, for instance.
> 
> On 17 December 2013 06:17, Isaku Yamahata  wrote:
> 
> > - 2 Modeling proposal
> >   What's the purpose of trunk network?
> >   Can you please add a use case that trunk network can't be optimized away?
> >
> 
> Even before I read the document I could list three use cases.  Eric's
> covered some of them himself.

I'm not against trunking.
I'm trying to understand which requirements need a "trunk network" in
figure 1, in addition to an "L2 gateway" directly connected to the VM via a
"trunk port".

thanks,

> The reasons you might want to have a trunked network passing VLAN traffic:
> 1: You're replicating a physical design for simulation purposes [2]
> 
> 2: There are any number of reasons to use VLANs in a physical design, but
> generally it's a port reduction thing.  In Openstack, clearly I can do this
> a different way - instead of using 30 VLANs over one network with two
> ports, I can use 30 networks with two ports each.  Ports are cheaper when
> you're virtual, but they're not free - KVM has a limitation of, from
> memory, 254 ports per VM.  So I might well still want to use VLANs.  I
> could arbitrarily switch to another encap technology, but this is the tail
> wagging the dog - I have to change my design because Neutron's contract is
> inconsistent.
> 
> 3: I want to condense many tenant networks into a single VM or physical box
> because I'm using a single VM to offer logically separated services to
> multiple tenants.  This has all the points of (2) basically, that VLANs are
> not the only encap I could use, but they're the conventional one and widely
> supported.  Provider networks do actually offer the functionality you need
> for this already - if you're talking physical - but they would, I think, be
> awkward to use in practice, and they would eat NIC ports on your hosts.
> 
> -- 
> Ian.
> 
> [1]
> https://docs.google.com/document/d/16DDJLYHxMmbCPO5LxW_kp610oj4goiic_oTakJiXjTs
> [2] http://blogs.cisco.com/wp-content/uploads/network1-550x334.png - a
> network simulator (search for 'Cisco VIRL'). Shameless plug, sorry, but
> it's an Openstack based application and I'm rather proud of it.

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-18 Thread Mike Perez
On Tue, Dec 17, 2013 at 1:59 PM, Mike Perez  wrote:

> On Tue, Dec 17, 2013 at 11:44 AM, Jarret Raim 
> wrote:
>
>> On 12/13/13, 4:50 AM, "Thierry Carrez"  wrote:
>>
>>
>> >If you remove Jenkins and attach Paul Kehrer, jqxin2006 (Michael Xin),
>> >Arash Ghoreyshi, Chad Lung and Steven Gonzales to Rackspace, then the
>> >picture is:
>> >
>> >67% of commits come from a single person (John Wood)
>> >96% of commits come from a single company (Rackspace)
>> >
>> >I think that's a bit brittle: if John Wood or Rackspace were to decide
>> >to place their bets elsewhere, the project would probably die instantly.
>> >I would feel more comfortable if a single individual didn't author more
>> >than 50% of the changes, and a single company didn't sponsor more than
>> >80% of the changes.
>>
>>
>> I think these numbers somewhat miss the point. It is true that Rackspace
>> is the primary sponsor of Barbican and that John Wood is the developer
>> that has been on the project the longest. However, % of commits is not the
>> only measure of contributions to the project. That number doesn't include
>> the work on our chef-automation scripts or design work to figure out the
>> HSM interfaces or work on the testing suite or writing our documentation
>> or the million other tasks for the project.
>>
>> Rackspace is committed to this project. If John Wood leaves, we'll hire
>> additional developers to replace him. There is no risk of the project
>> lacking resources because a single person decides to work on something
>> else.
>>
>> We've seen other folks from HP, RedHat, Nebula, etc. say that they are
>> interested in contributing and we are getting outside contributions today.
>> That will only continue, but I think the risk of the project somehow
>> collapsing is being overstated.
>>
>> There are problems that aren't necessarily the sexiest things to work on,
>> but need to be done. It may be hard to get a large number of people
>> interested in such a project in a short period of time. I think it would
>> be a mistake to reject projects that solve important problems just because
>> the team is a bit one sided at the time.
>>
>
>
> Besides it being considered "brittle" because there is one major code
> contributor, I would be worried about the project being one-sided, solving
> use cases that work for one company but may not work for the use cases of
> other companies if they have not chimed in yet. Do you feel this is not the
> case? Can anyone from somewhere other than Rackspace speak up and say they
> have been involved with the design/discussions of Barbican?
>
>
> -Mike Perez
>


I reviewed the TC meeting notes, and my question still stands.

It seems the committee is touching on the worry that if a single company is
running the show, it can pull resources away and the project collapses. My
worry is that having one company design solutions for the use cases that work
for it may later not work for the potential companies that would provide
contributors.

-Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-18 Thread Angus Salkeld

On 19/12/13 15:21 +1300, Steve Baker wrote:

I would like to nominate Bartosz Górski to be a heat-core reviewer. His
reviews to date have been valuable and his other contributions to the
project have shown a sound understanding of how heat works.

Here is his review history:
https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z

If you are heat-core please reply with your vote.


+1



cheers



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-18 Thread Clint Byrum
Excerpts from Steve Baker's message of 2013-12-18 18:21:46 -0800:
> I would like to nominate Bartosz Górski to be a heat-core reviewer. His
> reviews to date have been valuable and his other contributions to the
> project have shown a sound understanding of how heat works.
> 
> Here is his review history:
> https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z
> 
> If you are heat-core please reply with your vote.
> 
> cheers

+1

Keep it up Bartosz!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding DB migration items to the common review checklist

2013-12-18 Thread Jay Pipes

On 12/18/2013 02:14 PM, Brant Knudson wrote:

Matt -

Could a test be added that goes through the models and checks these
things? Other projects could use this too.

Here's an example of a test that checks if the tables are all InnoDB:
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/db/test_migrations.py?id=6e455cd97f04bf26bbe022be17c57e089cf502f4#n430


Actually, there's already work done for this.

https://review.openstack.org/#/c/42307/

I was initially put off by the unique constraint naming convention (and 
it's still a little problematic due to constraint name length 
constraints in certain RDBMS), but the patch above is an excellent start.
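In the meantime, a rough sketch of such a check (this is neither the nova test 
linked above nor Svetlana's patch; the connection URL and schema name are 
whatever the deployment uses):

    import sqlalchemy

    def assert_all_tables_innodb(connection_url, schema):
        # Find any table in the schema that is not InnoDB, ignoring the
        # sqlalchemy-migrate bookkeeping table.
        engine = sqlalchemy.create_engine(connection_url)
        with engine.connect() as conn:
            rows = conn.execute(
                sqlalchemy.text(
                    "SELECT table_name, engine FROM information_schema.tables "
                    "WHERE table_schema = :schema AND engine != 'InnoDB' "
                    "AND table_name != 'migrate_version'"),
                {"schema": schema},
            ).fetchall()
        assert not rows, "Non-InnoDB tables found: %s" % rows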


Please show Svetlana's work a little review love :)

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Policy-Rules discussions based on Dec.12 network policy meeting

2013-12-18 Thread Mohammad Banikazemi

Please have a look at the agenda for tomorrow at
https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy
It would be great if we could at least close on the following two items
tomorrow:

1.  Converged model by allowing policies to have a destination group and a
    source group. Each of these groups can have one or more endpoints.
2.  Minimum set of actions to support: security, redirect, (and possibly qos)

We don't need to be perfect and we can always revisit but if we agree on
these we can start thinking about a PoC implementation which itself may
lead to more design issues that we will need to consider and discuss.

Based on the discussions we have had, other items that we need to discuss
include the conflict resolution and also the capability to query a plugin
for supported actions.

Mohammad



From:   Anees A Shaikh/Watson/IBM@IBMUS
To: Stephen Wong ,
Cc: openstack-dev@lists.openstack.org
Date:   12/18/2013 08:02 PM
Subject:Re: [openstack-dev] [neutron][policy] Policy-Rules discussions
based on Dec.12 network policy meeting



folks, sorry for the late input ... a few additional thoughts...

> Hi Prasad,
>
> Thanks for the comments, please see responses inline.
>
> On Mon, Dec 16, 2013 at 2:11 PM, Prasad Vellanki
>  wrote:
> > Hi
> > Please see inline 
> >
> >
> > On Sun, Dec 15, 2013 at 8:49 AM, Stephen Wong  midokura.com> wrote:
> >>
> >> Hi,
> >>
> >> During Thursday's  group-policy meeting[1], there are several
> >> policy-rules related issues which we agreed should be posted on the
> >> mailing list to gather community comments / consensus. They are:
> >>
> >> (1) Conflict resolution between policy-rules
> >> --- a priority field was added to the policy-rules attributes
> >> list[2]. Is this enough to resolve conflict across policy-rules (or
> >> even across policies)? Please state cases where a cross policy-rules
> >> conflict can occur.
> >> --- conflict resolution was a major discussion point during
> >> Thursday's meeting - and there was even suggestion on setting
priority
> >> on endpoint groups; but I would like to have this email thread
focused
> >> on conflict resolution across policy-rules in a single policy first.

I agree with keeping the focus on intra-policy conflicts and even there
would suggest we try to keep things dead simple to start at the expense of
some flexibility in handling every use case.  This is a classic problem in
policy frameworks and I hope we don't grind to a halt trying to address
it.

> >>
> >> (2) Default policy-rule actions
> >> --- there seems to be consensus from the community that we need
to
> >> establish some basic set of policy-rule actions upon which all
> >> plugins/drivers would have to support
> >> --- just to get the discussion going, I am proposing:
> >>
> >
> > Or should this be a query the plugin for supported actions and thus
the user
> > knows what functionality the plugin can support.  Hence there is no
default
> > supported list.
>
> I think what we want is a set of "must-have" actions which
> application can utilize by default while using the group-policy APIs.
> Without this, application would need to perform many run time checks
> and have unpredictable behavior across different deployments.
>
> As for querying for a capability list - I am not against having
> such API, but what is the common use case? Having a script querying
> for the supported action list and generate policies based on that?
> Should we expect policy definition to be so dynamic?

My view is that the query capability may be where we try to go eventually,
but we should start with a must-have list that is very small, e.g., just
the security policy.  Other action types would be optional but
well-defined.


>
> >
> >> a.) action_type: 'security'action: 'allow' | 'drop'
> >> b.) action_type: 'qos'action: {'qos_class': {'critical' |
> >> 'low-priority' | 'high-priority' |
> >>
> >>'low-immediate' | 'high-immediate' |
> >>
> >>'expedite-forwarding'}
> >>  (a subset of DSCP values - hopefully in language that
can
> >> be well understood by those performing application deployments)
> >> c.) action_type:'redirect'   action: {UUID, [UUID]...}
> >>  (a list of Neutron objects to redirect to, and the list
> >> should contain at least one element)
> >>
> >
> > I am not sure making the UUIDs a list of neutron objects or endpoints
will
> > work well. It seems that it should more higher level such as list of
> > services that form a chain. Lets say one forms a chain of services,
> > firewall, IPS, LB. It would be tough to expect user to derive the
neutron
> > ports create a chain of them. It could be a VM UUID.
>
> Service chain is a Neutron object with UUID:
>
> https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit#
>
> so this is not defined by the group-policy subgroup.

Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the "FloatingIPChecker" control point

2013-12-18 Thread Jay Pipes

On 12/18/2013 10:21 PM, Brent Eagles wrote:

Hi,

Yair and I were discussing a change that I initiated and was
incorporated into the test_network_basic_ops test. It was intended as a
configuration control point for floating IP address assignments before
actually testing connectivity. The question we were discussing was
whether this check was a valid pass/fail criteria for tests like
test_network_basic_ops.

The initial motivation for the change was that test_network_basic_ops
had a less than 50/50 chance of passing in my local environment for
whatever reason. After looking at the test, it seemed ridiculous that it
should be failing. The problem is that more often than not the data that
was available in the logs all pointed to it being set up correctly but
the ping test for connectivity was timing out. From the logs it wasn't
clear that the test was failing because neutron did not do the right
thing, did not do it fast enough, or whether something else was happening.  Of
course if I paused the test for a short bit between setup and the checks
to manually verify everything the checks always passed. So it's a timing
issue right?

Two things: adding more timeout to a check is as appealing to me as
gargling glass AND I was less "annoyed" that the test was failing as I
was that it wasn't clear from reading logs what had gone wrong. I tried
to find an additional intermediate control point that would "split"
failure modes into two categories: neutron is too slow in setting things
up and neutron failed to set things up correctly. Granted it still is
adding timeout to the test, but if I could find a control point based on
"settling" so that if it passed, then there is a good chance that if the
next check failed it was because neutron actually screwed up what it was
trying to do.

Waiting until the query on the nova for the floating IP information
seemed a relatively reasonable, if imperfect, "settling" criteria before
attempting to connect to the VM. Testing to see if the floating IP
assignment gets to the nova instance details is a valid test and,
AFAICT, missing from the current tests. However, Yair has the reasonable
point that connectivity is often available long before the floating IP
appears in the nova results and that it could be considered invalid to
use non-network specific criteria as pass/fail for this test.


But, Tempest is all about functional integration testing. Using a call 
to Nova's server details to determine whether a dependent call to 
Neutron succeeded (setting up the floating IP) is exactly what I think 
Tempest is all about. It's validating that the integration between Nova 
and Neutron is working as expected.


So, I actually think the assertion on the floating IP address appearing 
(after some timeout/timeout-backoff) is entirely appropriate.



In general, the validity of checking for the presence of a floating IP
in the server details is a matter of interpretation. I think it is a
given that it must be tested somewhere and that if it causes a test to
fail then it is as valid a failure than a ping failing. Certainly I have
seen scenarios where an IP appears, but doesn't actually work and others
where the IP doesn't appear (ever, not just in really long while) but
magically works. Both are bugs. Which is more appropriate to tests like
test_network_basic_ops?


I believe both assertions should be part of the test cases, but since the 
latter condition (good ping connectivity, but no floater ever appears attached 
to the instance) necessarily depends on the first failure (floating IP does not 
appear in the server details after a timeout), perhaps one way to handle this 
would be the following (sketched in code after the list):


a) create server instance
b) assign floating ip
c) query server details looking for floater in a timeout-backoff loop
c1) floater does appear
 c1-a) assert ping connectivity
c2) floater does not appear
 c2-a) check ping connectivity. if ping connectivity succeeds, use a 
call to testtools.TestCase.addDetail() to provide some "interesting" 
feedback

 c2-b) raise assertion that floater did not appear in the server details
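A rough sketch of that ordering (get_server_details, ping_ip and add_detail are 
placeholder helpers, not real tempest APIs):

    import time

    def check_floater_then_connectivity(server_id, floating_ip,
                                        get_server_details, ping_ip,
                                        add_detail, timeout=180):
        deadline = time.time() + timeout
        interval, attached = 2, False
        while time.time() < deadline:
            addresses = get_server_details(server_id)['addresses']
            if any(a['addr'] == floating_ip
                   for addrs in addresses.values() for a in addrs):
                attached = True
                break
            time.sleep(interval)
            interval = min(interval * 2, 30)  # back off instead of hammering nova
        if attached:
            assert ping_ip(floating_ip), "floater attached but not pingable"
        else:
            if ping_ip(floating_ip):
                add_detail("ping works although the floating IP never appeared "
                           "in the server details")
            raise AssertionError(
                "floating IP not in server details after %ss" % timeout)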


Currently, the polling interval for the checks in the gate should be
tuned. They are borrowing other polling configuration and I can see it
is ill-advised. It is currently polling at an interval of a second and
if the intent is to wait for the entire system to settle down before
proceeding then polling nova that quickly is too often. It simply
increases the load while we are waiting to adapt to a loaded system. For
example in the course of a three minute timeout, the floating IP check
polled nova for server details 180 times.


Agreed completely.

Best,
-jay


All this aside, it is granted that checking for the floating IP in the
nova instance details is imperfect in itself. There is nothing that
assures that the presence of that information indicates that the
networking backend has finished its work.

Comments, suggestions, queries, foam bricks?

Cheers,

Brent


Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-18 Thread Steven Dake

On 12/18/2013 07:21 PM, Steve Baker wrote:
I would like to nominate Bartosz Górski to be a heat-core reviewer. 
His reviews to date have been valuable and his other contributions to 
the project have shown a sound understanding of how heat works.


Here is his review history:
https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z

If you are heat-core please reply with your vote.

cheers


lgtm +1



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Steven Dake

On 12/18/2013 05:57 PM, Tim Simpson wrote:

python:
consumes 12.5MB of virt memory and 4.3MB of resident memory.

Very few Python only processes will take up such a small amount of virtual 
memory unless the authors are disciplined about not pulling in any dependencies 
at all and writing code in a way that isn't necessarily idiomatic. For an 
example of a project where code is simply written the obvious way, take the 
Trove reference guest, which uses just shy of 200MB of virtual memory and 39MB 
of resident. In defense of the reference guest,  there are some things we can 
do there to make that figure better, I'm certain. I just want to use it as an 
example of how large a Python process can get when the authors proceed doing 
things the way they normally would.


C:
4MB of virt memory and 328k of resident memory
C++:
12.5MB of virt memory and 784k of resident memory

Much of the space you're seeing is from the C++ standard library. Building a 
process normally, I get similar results. However it is also possible to 
statically link the standard libraries and knock off roughly half of that, to 
6.42MB virtual and 400kb resident.

Additionally, the C++ standard library can be omitted if necessary. At this 
point, you might argue that you'd just be writing C code, but even then you'd 
have the advantages of template metaprogramming and other features not present 
in plain C. Even without those features, there's no shame in writing C style 
code assuming you *have* to- C++ was designed to be compatible with C to take 
advantages of its strengths. The only loss would be some of the C99 stuff like 
named initializers.

Additionally, in a vast number of contexts the virtual memory "used" for the 
standard library is not going to matter as other processes will be including that code 
anyway.

Going back to the Trove C++ Agent, it takes 4MB of resident and 28MB of virtual 
memory. This is with some fairly non-trivial dependencies, such as Curl, libz, 
the MySQL and Rabbit client libraries. No special effort was expended making 
sure we kept the process small as in C++ things naturally stay tiny.
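For reference, the virtual/resident figures being compared in this thread can be 
read on Linux from /proc/<pid>/status; a small sketch (values are in kB):

    def memory_usage(pid):
        # VmSize is virtual memory, VmRSS is resident memory, both in kB.
        usage = {}
        with open("/proc/%d/status" % pid) as status:
            for line in status:
                if line.startswith(("VmSize:", "VmRSS:")):
                    key, value = line.split(":", 1)
                    usage[key] = int(value.split()[0])
        return usage

    print(memory_usage(1))  # e.g. {'VmSize': 12500, 'VmRSS': 4300}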


C++ is full of fail in a variety of ways and offers no useful advantage for 
something as small as an agent ;-)

If you haven't recently, I recommend you read up on modern C++. The language, 
and how it's written and explained, has changed a lot over the past ten years.
I am intimately familiar with C++.  I believe you missed the 
constraint "for something as small as an agent" in my response.


Regards
-steve


Thanks,

Tim


-Original Message-
From: Steven Dake [mailto:sd...@redhat.com]
Sent: Wednesday, December 18, 2013 4:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

On 12/18/2013 12:27 PM, Tim Simpson wrote:

Please provide proof of that assumption or at least a general hypothesis that 
we can test.

I can't prove that the new agent will be larger as it doesn't exist yet.


Since nothing was agreed upon anyway, I don't know how you came to that 
conclusion.  I would suggest that any agent framework be held to an extremely 
high standard for footprint for this very reason.

Sorry, I formed a conclusion based on what I'd read so far. There has been talk 
to add Salt to this Unified Agent along with several other things. So I think 
its a valid concern to state that making this thing small is not as high on the 
list of priorities as adding extra functionality.

The C++ agent is just over 3 megabytes of real memory and takes up less than 30 
megabytes  of virtual memory. I don't think an agent has to be *that* small. 
However it won't get near that small unless making it tiny is made a priority, 
and I'm skeptical that's possible while also deciding an agent will be capable 
of interacting with all major OpenStack projects as well as Salt.


Nobody has suggested writing an agent that does everything.

Steven Dake just said:

"A unified agent addresses the downstream viewpoint well, which is 'There is only 
one agent to package and maintain, and it supports all the integrated OpenStack Program 
projects'."

So it sounds like some people are saying there will only be one. Or that it is 
at least an idea.


If Trove's communication method is in fact superior to all others, then perhaps 
we should discuss using that in the unified agent framework.

My point is every project should communicate to an agent in its own interface, 
which can be swapped out for whatever implementations people need.


   In fact I've specifically been arguing to keep it focused on facilitating 
guest<->service communication and limiting its in-guest capabilities to 
narrowly focused tasks.

I like this idea better than creating one agent to rule them all, but I would 
like to avoid forcing a single method of communicating between agents.


Also I'd certainly be interested in hearing about whether or not you think the 
C++ agent could m

Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-18 Thread Shixiong Shang
It is up to Sean to make the call, but I would love to see the IBM team in the 
meeting.



On Dec 18, 2013, at 7:09 PM, Xuhan Peng  wrote:

> Shixiong and guys, 
> 
> The sub team meeting is too early for china IBM folks to join although we 
> would like to participate the discussion very much. Any chance to rotate the 
> time so we can comment?
> 
> Thanks, Xuhan
> 
> On Thursday, December 19, 2013, Shixiong Shang wrote:
> Hi, Ian:
> 
> I agree with you on the point that the way we implement it should be app 
> agnostic. In addition, it should cover both CLI and Dashboard, so the system 
> behavior should be consistent to end users.
> 
> The keywords is just one of the many ways to implement the concept. It is 
> based on the reality that dnsmasq is the only driver available today to the 
> community. By the end of the day, the input from customer should be 
> translated to one of those mode keywords. It doesn't imply the same constants 
> have to be used as part of the CLI or Dashboard.
> 
> Randy and I had lengthy discussion/debating about this topic today. We have 
> straw-man proposal and will share with the team tomorrow. 
> 
> That being said, what concerned me the most at this moment is, we are not on 
> the same page. I hope tomorrow during the sub-team meeting, we can reach 
> consensus. If you cannot make it, then please set up a separate meeting to 
> invite the key stakeholders so we have a chance to sort it out.
> 
> Shixiong
> 
> 
> 
> 
> On Dec 18, 2013, at 8:25 AM, Ian Wells  wrote:
> 
>> On 18 December 2013 14:10, Shixiong Shang  
>> wrote:
>> Hi, Ian:
>> 
>> I won’t say the intent here is to replace dnsmasq-mode-keyword BP. Instead, 
>> I was trying to leverage and enhance those definitions so when dnsmasq is 
>> launched, it knows which mode it should run in. 
>> 
>> That being said, I see the value of your points and I also had lengthy 
>> discussion with Randy regarding this. We did realize that the keyword itself 
>> may not be sufficient to properly configure dnsmasq.
>> 
>> I think the point is that the attribute on whatever object (subnet or 
>> router) that defines the behaviour should define the behaviour, in precisely 
>> the terms you're talking about, and then we should find the dnsmasq options 
>> to suit.  Talking to Sean, he's good with this too, so we're all working to 
>> the same ends and it's just a matter of getting code in.
>>  
>> Let us discuss that on Thursday’s IRC meeting.
>> 
>> Not sure if I'll be available or not this Thursday, unfortunately.  I'll try 
>> to attend but I can't make promises.
>> 
>> Randy and I had a quick glance over your document. Much of it parallels the 
>> work we did on our POC last summer, and is now being addressed across 
>> multiple BP being implemented by ourselves or with Sean Collins and IBM 
>> team's work. I will take a closer look and provide my comments.
>> 
>> That's great.  I'm not wedded to the details in there, I'm actually more 
>> interested that we've covered everything.
>> 
>> If you have blueprint references, add them as comments - the 
>> ipv6-feature-parity BP could do with work and if we get the links together 
>> in one place we can update it.
>> -- 
>> Ian.
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-18 Thread Gao, Fengqian
Hi, Alan,
I think, for nova-scheduler, it is better if we gather more information. In 
today's data centers, power and temperature are very important factors to 
consider. CPU/memory utilization is not enough to describe a node's status; 
power and inlet temperature should also be taken into account.

Best Wishes

--fengqian

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Thursday, December 19, 2013 2:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi Gao

What is the reason why you see it would be important to have these two 
additional metrics "power and temperature" for Nova to base scheduling on?

Alan

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 1:00 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, all,
I am planning to extend bp 
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling with 
power and temperature. In other words, power and temperature can be collected 
and used by nova-scheduler just as CPU utilization is.
I have a question here. As you know, IPMI is used to get power and temperature, 
and baremetal implements the IPMI functions in Nova. But the baremetal driver is 
being split out of Nova, so if I want to change something in the IPMI code, 
which part should I choose now? Nova or Ironic?


Best wishes

--fengqian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][qa] test_network_basic_ops and the "FloatingIPChecker" control point

2013-12-18 Thread Brent Eagles

Hi,

Yair and I were discussing a change that I initiated and was 
incorporated into the test_network_basic_ops test. It was intended as a 
configuration control point for floating IP address assignments before 
actually testing connectivity. The question we were discussing was 
whether this check was a valid pass/fail criteria for tests like 
test_network_basic_ops.


The initial motivation for the change was that test_network_basic_ops 
had a less than 50/50 chance of passing in my local environment for 
whatever reason. After looking at the test, it seemed ridiculous that it 
should be failing. The problem is that more often than not the data that 
was available in the logs all pointed to it being set up correctly but 
the ping test for connectivity was timing out. From the logs it wasn't 
clear that the test was failing because neutron did not do the right 
thing, did not do it fast enough, or is something else happening?  Of 
course if I paused the test for a short bit between setup and the checks 
to manually verify everything the checks always passed. So it's a timing 
issue right?


Two things: adding more timeout to a check is as appealing to me as 
gargling glass AND I was less "annoyed" that the test was failing as I 
was that it wasn't clear from reading logs what had gone wrong. I tried 
to find an additional intermediate control point that would "split" 
failure modes into two categories: neutron is too slow in setting things 
up and neutron failed to set things up correctly. Granted it still is 
adding timeout to the test, but if I could find a control point based on 
"settling" so that if it passed, then there is a good chance that if the 
next check failed it was because neutron actually screwed up what it was 
trying to do.


Waiting until the query on the nova for the floating IP information 
seemed a relatively reasonable, if imperfect, "settling" criteria before 
attempting to connect to the VM. Testing to see if the floating IP 
assignment gets to the nova instance details is a valid test and, 
AFAICT, missing from the current tests. However, Yair has the reasonable 
point that connectivity is often available long before the floating IP 
appears in the nova results and that it could be considered invalid to 
use non-network specific criteria as pass/fail for this test.


In general, the validity of checking for the presence of a floating IP 
in the server details is a matter of interpretation. I think it is a 
given that it must be tested somewhere and that if it causes a test to 
fail then it is as valid a failure than a ping failing. Certainly I have 
seen scenarios where an IP appears, but doesn't actually work and others 
where the IP doesn't appear (ever, not just in really long while) but 
magically works. Both are bugs. Which is more appropriate to tests like 
test_network_basic_ops?


Currently, the polling interval for the checks in the gate should be 
tuned. They are borrowing other polling configuration and I can see it 
is ill-advised. It is currently polling at an interval of a second and 
if the intent is to wait for the entire system to settle down before 
proceeding then polling nova that quickly is too often. It simply 
increases the load while we are waiting to adapt to a loaded system. For 
example in the course of a three minute timeout, the floating IP check 
polled nova for server details 180 times.


All this aside, it is granted that checking for the floating IP in the 
nova instance details is imperfect in itself. There is nothing that 
assures that the presence of that information indicates that the 
networking backend has finished its work.


Comments, suggestions, queries, foam bricks?

Cheers,

Brent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-18 Thread Gao, Fengqian
Hi, Devananda,
I agree with you that new features should be geared towards Ironic.
As you asked why use Ironic instead of lm-sensors: actually, I just want to use 
IPMI instead of lm-sensors. I think it is reasonable to put the IPMI part into 
Ironic, and we already did :).

To get the sensors' information, I think IPMI is much more powerful than 
lm-sensors.
Firstly, IPMI is flexible. Generally speaking, it provides two kinds of 
connections, in-band and out-of-band.
An out-of-band connection allows us to get sensor status even without an OS or CPU.
An in-band connection is quite similar to lm-sensors; it needs the OS kernel to 
get sensor data.
Secondly, IPMI can gather more sensor information than lm-sensors, and it is 
easy to use. From my own experience, IPMI can get all the sensor 
information that lm-sensors could get, such as temperature/voltage/fan. Besides 
that, IPMI can get power data and some OEM-specific sensor data.
Thirdly, I think IPMI is a common spec for most OEMs, and most servers 
are integrated with an IPMI interface.

As you said, nova-compute is already supplying information to the scheduler, and 
power/temperature should be gathered locally. IPMI can be used locally, via the 
in-band connection. And there are a lot of open source libraries, such as 
OpenIPMI and FreeIPMI, which provide interfaces to the OS, just like lm-sensors.
So, I prefer IPMI to lm-sensors. Please leave your comments if you 
disagree :).
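As an illustration of the in-band path, a small sketch that shells out to the 
ipmitool CLI on the local host (this is not Ironic or Nova code, and the sensor 
name "Inlet Temp" is vendor-specific, just an example):

    import subprocess

    def read_ipmi_sensor(name):
        # "ipmitool sensor reading <name>" prints e.g. "Inlet Temp | 23"
        out = subprocess.check_output(["ipmitool", "sensor", "reading", name])
        label, _, value = out.decode().partition("|")
        return float(value.strip())

    print(read_ipmi_sensor("Inlet Temp"))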

Best wishes

--fengqian


From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Thursday, December 19, 2013 3:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

On Tue, Dec 17, 2013 at 10:00 PM, Gao, Fengqian <fengqian@intel.com> wrote:
Hi, all,
I am planning to extend bp 
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling with 
power and temperature. In other words, power and temperature can be collected 
and used for nova-scheduler just as CPU utilization.
I have a question here. As you know, IPMI is used to get power and temperature 
and baremetal implements IPMI functions in Nova. But baremetal driver is being 
split out of nova, so if I want to change something to the IPMI, which part 
should I choose now? Nova or Ironic?


Hi!

A few thoughts... Firstly, new features should be geared towards Ironic, not 
the nova baremetal driver as it will be deprecated soon 
(https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver). That 
being said, I actually don't think you want to use IPMI for what you're 
describing at all, but maybe I'm wrong.

When scheduling VMs with Nova, in many cases there is already an agent running 
locally, eg. nova-compute, and this agent is already supplying information to 
the scheduler. I think this is where the facilities for gathering 
power/temperature/etc (eg, via lm-sensors) should be placed, and it can 
reported back to the scheduler along with other usage statistics.

If you think there's a compelling reason to use Ironic for this instead of 
lm-sensors, please clarify.

Cheers,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] datastore migration issues

2013-12-18 Thread Robert Myers
There is the database migration for datastores. We should add a function
to backfill the existing data, either with dummy data or by setting it to
'mysql', as that was the only possibility before datastores.
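
Something along these lines could work (a rough sqlalchemy-migrate style
sketch only; the table/column names and the idea of seeding a single legacy
version row are assumptions, not an agreed design):

    from sqlalchemy import MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        versions = Table('datastore_versions', meta, autoload=True)

        # Use a pre-seeded legacy (e.g. 'mysql') datastore version row.
        legacy = migrate_engine.execute(
            versions.select().limit(1)).fetchone()
        if legacy is None:
            return  # nothing to backfill against; operator must seed first

        migrate_engine.execute(
            instances.update()
            .where(instances.c.datastore_version_id == None)  # noqa
            .values(datastore_version_id=legacy.id))
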
On Dec 18, 2013 3:23 PM, "Greg Hill"  wrote:

>  I've been working on fixing a bug related to migrating existing
> installations to the new datastore code:
>
>  https://bugs.launchpad.net/trove/+bug/1259642
>
>  The basic gist is that existing instances won't have any data in the
> datastore_version_id field in the database unless we somehow populate that
> data during migration, and not having that data populated breaks a lot of
> things (including the ability to list instances or delete or resize old
> instances).  It's impossible to populate that data in an automatic, generic
> way, since it's highly vendor-dependent on what database and version they
> currently support, and there's not enough data in the older schema to
> populate the new tables automatically.
>
>  So far, we've come up with some non-optimal solutions:
>
>  1. The first iteration was to assume 'mysql' as the database manager on
> instances without a datastore set.
> 2. The next iteration was to make the default value be configurable in
> trove.conf, but default to 'mysql' if it wasn't set.
> 3. It was then proposed that we could just use the 'default_datastore'
> value from the config, which may or may not be set by the operator.
>
>  My problem with any of these approaches beyond the first is that
> requiring people to populate config values in order to successfully migrate
> to the newer code is really no different than requiring them to populate
> the new database tables with appropriate data and updating the existing
> instances with the appropriate values.  Either way, it's now highly
> dependent on people deploying the upgrade to know about this change and
> react accordingly.
>
>  Does anyone have a better solution that we aren't considering?  Is this
> even worth the effort given that trove has so few current deployments that
> we can just make sure everyone is populating the new tables as part of
> their upgrade path and not bother fixing the code to deal with the legacy
> data?
>
>  Greg
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Nomination for heat-core

2013-12-18 Thread Steve Baker
I would like to nominate Bartosz Górski to be a heat-core reviewer. His
reviews to date have been valuable and his other contributions to the
project have shown a sound understanding of how heat works.

Here is his review history:
https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z

If you are heat-core please reply with your vote.

cheers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Alternative time for the Heat/orchestration meeting

2013-12-18 Thread Steve Baker
The Heat meeting is currently weekly on Wednesdays at 2000 UTC. This is
not very friendly for contributors in some timezones, specifically Asia.

In today's Heat meeting we decided to try having every second meeting at
an alternate time. I've set up this poll to gather opinions on some
(similar) time options.
http://doodle.com/rdrb7gpnb2wydbmg

In this case Europe has drawn the short straw, since we have a large
number of US contributors and a PTL in New Zealand (me).

The meeting on January the 8th will be at the usual time, and the one
after that can be at the alternate time which we end up agreeing to.

cheers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Tim Simpson
>> python:
>> consumes 12.5MB of virt memory and 4.3MB of resident memory.

Very few Python-only processes will take up such a small amount of virtual 
memory unless the authors are disciplined about not pulling in any dependencies 
at all and writing code in a way that isn't necessarily idiomatic. For an 
example of a project where code is simply written the obvious way, take the 
Trove reference guest, which uses just shy of 200MB of virtual memory and 39MB 
of resident. In defense of the reference guest, there are some things we can 
do there to make that figure better, I'm certain. I just want to use it as an 
example of how large a Python process can get when the authors proceed to do 
things the way they normally would.

>> C:
>> 4MB of virt memory and 328k of resident memory

>> C++:
>> 12.5MB of virt memory and 784k of resident memory

Much of the space you're seeing is from the C++ standard library. Building a 
process normally, I get similar results. However it is also possible to 
statically link the standard libraries and knock off roughly half of that, to 
6.42MB virtual and 400kb resident.

Additionally, the C++ standard library can be omitted if necessary. At this 
point, you might argue that you'd just be writing C code, but even then you'd 
have the advantages of template metaprogramming and other features not present 
in plain C. Even without those features, there's no shame in writing C style 
code assuming you *have* to- C++ was designed to be compatible with C to take 
advantages of its strengths. The only loss would be some of the C99 stuff like 
named initializers.

Additionally, in a vast number of contexts the virtual memory "used" for the 
standard library is not going to matter as other processes will be including 
that code anyway.

Going back to the Trove C++ Agent, it takes 4MB of resident and 28MB of virtual 
memory. This is with some fairly non-trivial dependencies, such as Curl, libz, 
the MySQL and Rabbit client libraries. No special effort was expended making 
sure we kept the process small as in C++ things naturally stay tiny.
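
For anyone who wants to reproduce these figures, here is a minimal Linux-only 
sketch of how they can be sampled; the fields come from /proc/<pid>/status and 
the helper name is purely illustrative:

    def memory_footprint(pid):
        # Return (virtual_kb, resident_kb) as reported by procfs.
        sizes = {}
        with open('/proc/%d/status' % pid) as status:
            for line in status:
                if line.startswith(('VmSize:', 'VmRSS:')):
                    key, value = line.split(':', 1)
                    sizes[key] = int(value.strip().split()[0])  # value in kB
        return sizes.get('VmSize', 0), sizes.get('VmRSS', 0)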

>> C++ is full of fail in a variety of ways and offers no useful advantage for 
>> something as small as an agent ;-)

If you haven't recently, I recommend you read up on modern C++. The language, 
and how it's written and explained, has changed a lot over the past ten years.

Thanks,

Tim


-Original Message-
From: Steven Dake [mailto:sd...@redhat.com] 
Sent: Wednesday, December 18, 2013 4:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

On 12/18/2013 12:27 PM, Tim Simpson wrote:
>>> Please provide proof of that assumption or at least a general hypothesis 
>>> that we can test.
> I can't prove that the new agent will be larger as it doesn't exist yet.
>
>>> Since nothing was agreed upon anyway, I don't know how you came to that 
>>> conclusion.  I would suggest that any agent framework be held to an 
>>> extremely high standard for footprint for this very reason.
> Sorry, I formed a conclusion based on what I'd read so far. There has been 
> talk to add Salt to this Unified Agent along with several other things. So I 
> think its a valid concern to state that making this thing small is not as 
> high on the list of priorities as adding extra functionality.
>
> The C++ agent is just over 3 megabytes of real memory and takes up less than 
> 30 megabytes  of virtual memory. I don't think an agent has to be *that* 
> small. However it won't get near that small unless making it tiny is made a 
> priority, and I'm skeptical that's possible while also deciding an agent will 
> be capable of interacting with all major OpenStack projects as well as Salt.
>
>>> Nobody has suggested writing an agent that does everything.
> Steven Dake just said:
>
> "A unified agent addresses the downstream viewpoint well, which is 'There is 
> only one agent to package and maintain, and it supports all the integrated 
> OpenStack Program projects'."
>
> So it sounds like some people are saying there will only be one. Or that it 
> is at least an idea.
>
>>> If Trove's communication method is in fact superior to all others, then 
>>> perhaps we should discuss using that in the unified agent framework.
> My point is every project should communicate to an agent in its own 
> interface, which can be swapped out for whatever implementations people need.
>
>>>   In fact I've specifically been arguing to keep it focused on facilitating 
>>> guest<->service communication and limiting its in-guest capabilities to 
>>> narrowly focused tasks.
> I like this idea better than creating one agent to rule them all, but I would 
> like to avoid forcing a single method of communicating between agents.
>
>>> Also I'd certainly be interested in hearing about whether or not you think 
>>> the C++ agent could made generic enough for any project to use.
> I certainly believe much of the c

Re: [openstack-dev] [neutron][policy] Policy-Rules discussions based on Dec.12 network policy meeting

2013-12-18 Thread Anees A Shaikh
folks, sorry for the late input ... a few additional thoughts...

> Hi Prasad,
> 
> Thanks for the comments, please see responses inline.
> 
> On Mon, Dec 16, 2013 at 2:11 PM, Prasad Vellanki
>  wrote:
> > Hi
> > Please see inline 
> >
> >
> > On Sun, Dec 15, 2013 at 8:49 AM, Stephen Wong  midokura.com> wrote:
> >>
> >> Hi,
> >>
> >> During Thursday's  group-policy meeting[1], there are several
> >> policy-rules related issues which we agreed should be posted on the
> >> mailing list to gather community comments / consensus. They are:
> >>
> >> (1) Conflict resolution between policy-rules
> >> --- a priority field was added to the policy-rules attributes
> >> list[2]. Is this enough to resolve conflict across policy-rules (or
> >> even across policies)? Please state cases where a cross policy-rules
> >> conflict can occur.
> >> --- conflict resolution was a major discussion point during
> >> Thursday's meeting - and there was even suggestion on setting 
priority
> >> on endpoint groups; but I would like to have this email thread 
focused
> >> on conflict resolution across policy-rules in a single policy first.

I agree with keeping the focus on intra-policy conflicts and even there 
would suggest we try to keep things dead simple to start at the expense of 
some flexibility in handling every use case.  This is a classic problem in 
policy frameworks and I hope we don't grind to a halt trying to address 
it.

> >>
> >> (2) Default policy-rule actions
> >> --- there seems to be consensus from the community that we need 
to
> >> establish some basic set of policy-rule actions upon which all
> >> plugins/drivers would have to support
> >> --- just to get the discussion going, I am proposing:
> >>
> >
> > Or should this be a query the plugin for supported actions and thus 
the user
> > knows what functionality the plugin can support.  Hence there is no 
default
> > supported list.
> 
> I think what we want is a set of "must-have" actions which
> application can utilize by default while using the group-policy APIs.
> Without this, application would need to perform many run time checks
> and have unpredictable behavior across different deployments.
> 
> As for querying for a capability list - I am not against having
> such API, but what is the common use case? Having a script querying
> for the supported action list and generate policies based on that?
> Should we expect policy definition to be so dynamic?

My view is that the query capability may be where we try to go eventually, 
but we should start with a must-have list that is very small, e.g., just 
the security policy.  Other action types would be optional but 
well-defined.


> 
> >
> >> a.) action_type: 'security'   action: 'allow' | 'drop'
> >> b.) action_type: 'qos'        action: {'qos_class': {'critical' | 'low-priority' |
> >>                                        'high-priority' | 'low-immediate' |
> >>                                        'high-immediate' | 'expedite-forwarding'}
> >>     (a subset of DSCP values - hopefully in language that can
> >>     be well understood by those performing application deployments)
> >> c.) action_type: 'redirect'   action: {UUID, [UUID]...}
> >>     (a list of Neutron objects to redirect to, and the list
> >>     should contain at least one element)
> >>
> >
> > I am not sure making the UUIDs a list of neutron objects or endpoints 
will
> > work well. It seems that it should more higher level such as list of
> > services that form a chain. Lets say one forms a chain of services,
> > firewall, IPS, LB. It would be tough to expect user to derive the 
neutron
> > ports create a chain of them. It could be a VM UUID.
> 
> Service chain is a Neutron object with UUID:
> 
> https://docs.google.com/document/d/
> 1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit#
> 
> so this is not defined by the group-policy subgroup, but from a
> different project. We expect operator / tenant to define a service
> chain for the users, and users simply pick that as one of the
> "redirect action" object to send traffic to.
> 
> >
> >> Please discuss. In the document, there is also 'rate-limit' and
> >> 'policing' for 'qos' type, but those can be optional instead of
> >> required for now
> >>
> >> (3) Prasad asked for clarification on 'redirect' action, I propose to
> >> add the following text to document regarding 'redirect' action:
> >>
> >> "'redirect' action is used to mirror traffic to other 
destinations
> >> - destination can be another endpoint group, a service chain, a port,
> >> or a network. Note that 'redirect' action type can be used with other
> >> forwarding related action type such as 'security'; therefore, it is
> >> entirely possible that one can specify {'security':'deny'} and still
> >> do {'redirect':{'uuid-1', 'uuid-2'...}. Note that the destination
> >> specified on the list CANNOT be the endpoint-group who provides this
> >> policy. Also, in case of destination being another end

Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-18 Thread Xuhan Peng
Shixiong and guys,

The sub-team meeting is too early for the China IBM folks to join, although we
would like to participate in the discussion very much. Any chance to rotate
the time so we can comment?

Thanks, Xuhan

On Thursday, December 19, 2013, Shixiong Shang wrote:

> Hi, Ian:
>
> I agree with you on the point that the way we implement it should be app
> agnostic. In addition, it should cover both CLI and Dashboard, so the
> system behavior should be consistent to end users.
>
> The keyword approach is just one of many ways to implement the concept. It is
> based on the reality that dnsmasq is the only driver available today to the
> community. By the end of the day, the input from customer should be
> translated to one of those mode keywords. It doesn't imply the same
> constants have to be used as part of the CLI or Dashboard.
>
> Randy and I had lengthy discussion/debating about this topic today. We
> have straw-man proposal and will share with the team tomorrow.
>
> That being said, what concerned me the most at this moment is, we are not
> on the same page. I hope tomorrow during sub-team meeting, we can reach
> consensus. If you cannot make it, then please set up a separate meeting to
> invite the key stakeholders so we have a chance to sort it out.
>
> Shixiong
>
>
>
>
> On Dec 18, 2013, at 8:25 AM, Ian Wells 
> >
> wrote:
>
> On 18 December 2013 14:10, Shixiong Shang 
>  'sparkofwisdom.cl...@gmail.com');>
> > wrote:
>
>> Hi, Ian:
>>
>> I won’t say the intent here is to replace dnsmasq-mode-keyword BP.
>> Instead, I was trying to leverage and enhance those definitions so when
>> dnsmasq is launched, it knows which mode it should run in.
>>
>> That being said, I see the value of your points and I also had lengthy
>> discussion with Randy regarding this. We did realize that the keyword
>> itself may not be sufficient to properly configure dnsmasq.
>>
>
> I think the point is that the attribute on whatever object (subnet or
> router) that defines the behaviour should define the behaviour, in
> precisely the terms you're talking about, and then we should find the
> dnsmasq options to suit.  Talking to Sean, he's good with this too, so
> we're all working to the same ends and it's just a matter of getting code
> in.
>
>
>> Let us discuss that on Thursday’s IRC meeting.
>>
>
> Not sure if I'll be available or not this Thursday, unfortunately.  I'll
> try to attend but I can't make promises.
>
> Randy and I had a quick glance over your document. Much of it parallels
>> the work we did on our POC last summer, and is now being addressed across
>> multiple BP being implemented by ourselves or with Sean Collins and IBM
>> team's work. I will take a closer look and provide my comments.
>>
>
> That's great.  I'm not wedded to the details in there, I'm actually more
> interested that we've covered everything.
>
> If you have blueprint references, add them as comments - the
> ipv6-feature-parity BP could do with work and if we get the links together
> in one place we can update it.
> --
> Ian.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org  'OpenStack-dev@lists.openstack.org');>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][lbaas] plugin driver

2013-12-18 Thread Subrahmanyam Ongole
I was going through the latest common agent patch from Eugene.
The plugin driver appears to be specific to the driver provider; for example,
haproxy has a plugin driver (in addition to an agent driver). How does this
work with a single common agent and many providers? Would they need to use a
separate topic?

Who owns the plugin driver? Is it the appliance driver provider (such as
f5/radware, for example) or the OpenStack cloud service provider (such as
Rackspace, for example)?

-- 

Thanks
Subra
(Subrahmanyam Ongole)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-18 Thread Christopher Yeoh
So the short answer: as long as the api samples continue to generate
normally I don't think it will.

The long answer is more complicated. The V2 API documentation was generated
and updated manually.
We generate api samples for the documentation automatically, but that's it.
I've been trying to improve on that
for V3 and have some experimental code here to add extra information along
with the api samples so
we can autogenerate the documentation entirely from what is generated in
the api sample directory. Kersten
Richter is working on what is required to convert from the api samples and
meta information into
wadl.

https://github.com/cyeoh/nova-v3-api-doc/tree/metafiles_2

But if we're going to end up using WSME in J, perhaps it's not worth
merging in Icehouse.

So for Icehouse it's likely the documentation will either be generated
manually like the V2
or hopefully at least partially automated. But I don't think Pecan will
change this (though I'd need to
see more examples of what the Pecan changes to the plugins actually look
like).

Regards,

Chris





On Thu, Dec 19, 2013 at 1:38 AM, Ryan Petrello
wrote:

> Additionally, can anyone here summarize the current status of v3
> documentation?  Is there a process I can currently run against Nova to
> generate a WADL (I want to make sure the Pecan changes work with it)?
>
> ---
> Ryan Petrello
> Senior Developer, DreamHost
> ryan.petre...@dreamhost.com
>
> On Dec 18, 2013, at 9:46 AM, Ryan Petrello 
> wrote:
>
> > Sounds like what I’m hearing is “Let’s see something that uses this
> (that works)”?  I’ll work on that :)
> >
> > ---
> > Ryan Petrello
> > Senior Developer, DreamHost
> > ryan.petre...@dreamhost.com
> >
> > On Dec 18, 2013, at 9:45 AM, Ryan Petrello 
> wrote:
> >
> >> Jamie:
> >>
> >> Your approach makes sense, but it still uses both pecan and Routes.
>  One of the goals of my patch was to (eventually) be able to remove the use
> of Routes entirely in v3 for Nova once all of the extensions are
> re-implemented with pecan (so that we’re not defining a mix of object
> dispatch and regular-expression style routes).
> >>
> >> ---
> >> Ryan Petrello
> >> Senior Developer, DreamHost
> >> ryan.petre...@dreamhost.com
> >>
> >> On Dec 18, 2013, at 6:05 AM, Jamie Lennox 
> wrote:
> >>
> >>> I attempted this in keystone as part of a very simple extension [1]. I
> understand that it is a much simpler case but nesting the Pecan within the
> existing routing infrastructure, rather than have a single Pecan app was
> fairly simple (though there are some limiting situations).
> >>>
> >>> Any reason you decided to go this way rather than the one in my review?
> >>>
> >>> [1] https://review.openstack.org/#/c/62810/
> >>>
> >>> - Original Message -
>  From: "Ryan Petrello" 
>  To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>  Sent: Wednesday, 18 December, 2013 7:08:09 AM
>  Subject: Re: [openstack-dev] [Nova] Support for Pecan in Nova
> 
>  So any additional feedback on this patch?  I’d love to start working
> on
>  porting some of the other extensions to pecan, but want to make sure
> I’ve
>  got approval on this approach first.
> 
>  https://review.openstack.org/#/c/61303/7
> 
>  ---
>  Ryan Petrello
>  Senior Developer, DreamHost
>  ryan.petre...@dreamhost.com
> 
>  On Dec 14, 2013, at 10:45 AM, Doug Hellmann <
> doug.hellm...@dreamhost.com>
>  wrote:
> 
> >
> >
> >
> > On Sat, Dec 14, 2013 at 7:55 AM, Christopher Yeoh  >
> > wrote:
> >
> > On Sat, Dec 14, 2013 at 8:48 AM, Doug Hellmann
> >  wrote:
> > That covers routes. What about the properties of the inputs and
> outputs?
> >
> >
> > I think the best way for me to describe it is that as the V3 API
> core and
> > all the extensions
> > are written, both the routes and input and output parameters are
> from a
> > client's perspective fixed at application
> > startup time. Its not an inherent restriction of the framework (an
> > extension could for
> > example dynamically load another extension at runtime if it really
> wanted
> > to), but we just don't do that.
> >
> > OK, good.
> >
> >
> >
> > Note that values of parameters returned can be changed by an
> extension
> > though. For example os-hide-server-addresses
> > can based on a runtime policy check and the vm_state of the server,
> filter
> > whether the values in the
> > addresses field are filtered out or not when returning information
> about a
> > server. This isn't a new thing in the
> > V3 API though, it already existed in the V2 API.
> >
> > OK, it seems like as long as the fields are still present that makes
> the
> > API at least consistent for a given deployment's configuration.
> >
> > Doug
> >
> >
> >
> > Chris
> >
> >
> > On Fri, Dec 13, 201

Re: [openstack-dev] [neutron] [third-party-testing] Reminder: Meeting tomorrow

2013-12-18 Thread Don Kehn
Wouldn't 2200 UTC be in about 20 mins?


On Wed, Dec 18, 2013 at 3:32 PM, Itsuro ODA  wrote:

> Hi,
>
> It seems the meeting was not held on 2200 UTC on Wednesday (today).
>
> Do you mean 2200 UTC on Thursday ?
>
> Thanks.
>
> On Thu, 12 Dec 2013 11:43:03 -0600
> Kyle Mestery  wrote:
>
> > Hi everyone:
> >
> > We had a meeting around Neutron Third-Party testing today on IRC.
> > The logs are available here [1]. We plan to host another meeting
> > next week, and it will be at 2200 UTC on Wednesday in the
> > #openstack-meeting-alt channel on IRC. Please attend and update
> > the etherpad [2] with any items relevant to you before then.
> >
> > Thanks again!
> > Kyle
> >
> > [1]
> http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/
> > [2] https://etherpad.openstack.org/p/multi-node-neutron-tempest
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> On Wed, 18 Dec 2013 15:10:46 -0600
> Kyle Mestery  wrote:
>
> > Just a reminder, we'll be meeting at 2200 UTC on #openstack-meeting-alt.
> > We'll be looking at this etherpad [1] again, and continuing discussions
> from
> > last week.
> >
> > Thanks!
> > Kyle
> >
> > [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Itsuro ODA 
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Don Kehn
303-442-0060
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Reminder: Meeting tomorrow

2013-12-18 Thread Itsuro ODA
Hi,

It seems the meeting was not held on 2200 UTC on Wednesday (today).

Do you mean 2200 UTC on Thursday ?

Thanks.

On Thu, 12 Dec 2013 11:43:03 -0600
Kyle Mestery  wrote:

> Hi everyone:
> 
> We had a meeting around Neutron Third-Party testing today on IRC.
> The logs are available here [1]. We plan to host another meeting
> next week, and it will be at 2200 UTC on Wednesday in the
> #openstack-meeting-alt channel on IRC. Please attend and update
> the etherpad [2] with any items relevant to you before then.
> 
> Thanks again!
> Kyle
> 
> [1] 
> http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/
> [2] https://etherpad.openstack.org/p/multi-node-neutron-tempest
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

On Wed, 18 Dec 2013 15:10:46 -0600
Kyle Mestery  wrote:

> Just a reminder, we'll be meeting at 2200 UTC on #openstack-meeting-alt.
> We'll be looking at this etherpad [1] again, and continuing discussions from
> last week.
> 
> Thanks!
> Kyle
> 
> [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Neutron resources and hidden dependencies

2013-12-18 Thread Zane Bitter

On 18/12/13 05:21, Thomas Herve wrote:



At the design summit we had a discussion[1] about redesigning Neutron
resources to avoid the need for hidden dependencies (that is to say,
dependencies which cannot be inferred from the template).

Since then we've got close to fixing one of those issues[2] (thanks
Bartosz!), but the patch is currently held up by a merge conflict
because... we managed to add two more[3][4].


Sorry, I'm one of the reviewers of those, and [4] was a bit suspicious indeed, 
but it sounded like the best solution. [3] doesn't add dependencies, though.


Quite right, my mistake, and I can definitely see how it would not have 
looked suspicious at the time. I can easily imagine though that there 
might be a bug raised in the future saying we need to add some magic 
dependencies to [3] to make sure the DHCP agent is selected before any 
instances/ports get created. Actually there's already an inline comment 
from the author in patch set 1 of [4] that says "I will add dependency 
between subnet and NetDHCPAgent."



There's also another patch currently under review[5] that adds the most
impressively complex hidden dependencies we've yet seen (though,
fortunately, the worst of this is actually redundant and can be removed
without effect).

I know that due to the number of Neutron resources that currently
override add_dependencies(), this may look like a normal part of a
resource implementation. However, this is absolutely not the case. If
you feel the need to override add_dependencies() for any reason then
some part of the design is wrong. If you feel the need to do it for any
reason other than not breaking existing soon-to-be-deprecated wrong
designs (i.e. RouterGateway), then your part of the design is the part
that's wrong.

Core reviewers should treat any attempt to override add_dependencies()
as a red flag IMO.

It's unfortunate that many parts of the Neutron API are not great,
especially that some pretty core functionality is currently balkanised
into various 'extensions' that don't have a coherent interface.
Nevertheless, that just means that we need to work harder to come up
with resource designs that express the appropriate relationships between
resources *in the template*, not with hidden relationships in the code.
This means that orchestration will actually work regardless of whether
all of the related resources are defined in the same template, and in
fact regardless of whether they are defined in templates at all.

I'd like to propose that we revert NetDHCPAgent and RouterL3Agent (which
are somewhat misnamed, and serve only to connect a Net or Router to an
existing agent, not to create an agent) and replace them with properties
on the Net and Router classes, respectively.


I'm not sure about NetDHCPAgent, as it doesn't add a dependency.


That's true, although I suspect it may only be working by coincidence 
now. It also seems to me that we should have a consistent approach for 
selecting agents. This resource does nothing but connect two other 
resources together in a way that, if it makes any difference at all, 
makes the dependency resolution worse not better. (Contrast this with 
VolumeAttachment or EIPAssociation, which exist to solve some specific 
dependency issues.) Conceptually, it is just a property of the Network IMO.



Regarding RouterL3Agent, the only objection I have is if you want to use a 
router defined outside of the template, which if I remember correctly was the 
justification of this class. Do we have an answer for that?


I haven't seen that justification advanced anywhere in relation to this 
particular resource type, but maybe I missed it. In any event, if you 
want to use a Router defined outside of the template, you should also 
assign it to the correct agent outside of the template. It's difficult 
to get upset about that, especially given that it needs to be done 
before you use the Router for anything anyway. Basically there's no 
safe way to do this other than at the time you create the 
router... that's the nice thing about not hacking the dependencies - 
when they're all explicit in the template then everything works the same 
way whether it's created in Heat or not, in the same template or not.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Steven Dake

On 12/18/2013 12:27 PM, Tim Simpson wrote:

Please provide proof of that assumption or at least a general hypothesis that 
we can test.

I can't prove that the new agent will be larger as it doesn't exist yet.


Since nothing was agreed upon anyway, I don't know how you came to that 
conclusion.  I would suggest that any agent framework be held to an extremely 
high standard for footprint for this very reason.

Sorry, I formed a conclusion based on what I'd read so far. There has been talk 
to add Salt to this Unified Agent along with several other things. So I think 
its a valid concern to state that making this thing small is not as high on the 
list of priorities as adding extra functionality.

The C++ agent is just over 3 megabytes of real memory and takes up less than 30 
megabytes  of virtual memory. I don't think an agent has to be *that* small. 
However it won't get near that small unless making it tiny is made a priority, 
and I'm skeptical that's possible while also deciding an agent will be capable 
of interacting with all major OpenStack projects as well as Salt.


Nobody has suggested writing an agent that does everything.

Steven Dake just said:

"A unified agent addresses the downstream viewpoint well, which is 'There is only 
one agent to package and maintain, and it supports all the integrated OpenStack Program 
projects'."

So it sounds like some people are saying there will only be one. Or that it is 
at least an idea.


If Trove's communication method is in fact superior to all others, then perhaps 
we should discuss using that in the unified agent framework.

My point is every project should communicate to an agent in its own interface, 
which can be swapped out for whatever implementations people need.
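
To make that concrete, here is a purely illustrative sketch (none of this is 
an agreed-upon OpenStack API) of a per-project interface with a swappable 
transport behind it:

    import abc

    class GuestAgentClient(object):
        # What Trove (or another service) codes against.
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def create_database(self, name):
            pass

        @abc.abstractmethod
        def get_diagnostics(self):
            pass

    class RpcGuestAgentClient(GuestAgentClient):
        # One possible implementation, delegating to an injected RPC proxy;
        # a C++ agent speaking AMQP could sit behind the same interface.
        def __init__(self, rpc_proxy):
            self._rpc = rpc_proxy

        def create_database(self, name):
            return self._rpc.call('create_database', name=name)

        def get_diagnostics(self):
            return self._rpc.call('get_diagnostics')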


  In fact I've specifically been arguing to keep it focused on facilitating 
guest<->service communication and limiting its in-guest capabilities to 
narrowly focused tasks.

I like this idea better than creating one agent to rule them all, but I would 
like to avoid forcing a single method of communicating between agents.


Also I'd certainly be interested in hearing about whether or not you think the 
C++ agent could made generic enough for any project to use.

I certainly believe much of the code could be reused for other projects. Right 
now it communicates over RabbitMQ, Oslo RPC style, so I'm not sure how much it 
will fall in line with what the Unified Agent group wants. However, I would 
love to talk more about this. So far my experience has been that no one wants 
to pursue using / developing an agent that was written in C++.

Thanks,

Tim

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, December 18, 2013 11:36 AM
To: openstack-dev
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Excerpts from Tim Simpson's message of 2013-12-18 07:34:14 -0800:

I've been following the Unified Agent mailing list thread for awhile
now and, as someone who has written a fair amount of code for both of
the two existing Trove agents, thought I should give my opinion about
it. I like the idea of a unified agent, but believe that forcing Trove
to adopt this agent for use as its by default will stifle innovation
and harm the project.


"Them's fightin words". ;)

That is a very strong position to take. So I am going to hold your
statements of facts and assumptions to a very high standard below.


There are reasons Trove has more than one agent currently. While
everyone knows about the "Reference Agent" written in Python, Rackspace
uses a different agent written in C++ because it takes up less memory. The
concerns which led to the C++ agent would not be addressed by a unified
agent, which if anything would be larger than the Reference Agent is
currently.


"Would be larger..." - Please provide proof of that assumption or at least
a general hypothesis that we can test. Since nothing was agreed upon
anyway, I don't know how you came to that conclusion. I would suggest
that any agent framework be held to an extremely high standard for
footprint for this very reason.


I also believe a unified agent represents the wrong approach
philosophically. An agent by design needs to be lightweight, capable
of doing exactly what it needs to and no more. This is especially true
for a project like Trove whose goal is to not to provide overly general
PAAS capabilities but simply installation and maintenance of different
datastores. Currently, the Trove daemons handle most logic and leave
the agents themselves to do relatively little. This takes some effort
as many of the first iterations of Trove features have too much logic
put into the guest agents. However through perseverance the subsequent
designs are usually cleaner and simpler to follow. A community approved,
"do everything" agent would endorse the wrong balance and lead to
developers piling up logic on the guest side. Over time, features would
become dependent on the Unified Agent, makin

Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-18 Thread Michael Basnight
I think this is a good idea and I support it. In today's meeting [1] there
were some questions, and I encourage them to be brought up here. My only
question is in regard to the "tail" of a file we discussed in IRC. After
talking about it with other trovesters, I think it doesn't make sense to tail
the log for most datastores. I can't imagine finding anything useful in, say,
a Java application's last 100 lines (especially if a stack trace was
present). But I don't want to derail, so let's try to focus on the "deliver
to swift" option first.

[1]
http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt
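
As a starting point for that discussion, here is a hedged sketch of what
"deliver to swift" could look like from the guest side using
python-swiftclient (the container naming, auth handling and function name are
placeholders of mine, not the proposed Trove API):

    from swiftclient import client as swift_client

    def publish_log_to_swift(auth_url, user, key, tenant,
                             instance_id, log_path):
        # Upload one log file into a per-instance container.
        conn = swift_client.Connection(
            authurl=auth_url, user=user, key=key,
            tenant_name=tenant, auth_version='2.0')
        container = 'dblogs-%s' % instance_id
        conn.put_container(container)
        with open(log_path, 'rb') as log_file:
            conn.put_object(container, log_path.rsplit('/', 1)[-1],
                            contents=log_file, content_type='text/plain')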

On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon wrote:

> Greetings, OpenStack DBaaS community.
>
> I'd like to start discussion around a new feature in Trove. The
> feature I would like to propose covers manipulating  database log files.
>

> Main idea. Give user an ability to retrieve database log file for any
> purposes.
>
> Goals to achieve. Suppose we have an application (binary application,
> without source code) which requires a DB connection to perform data
> manipulations and a user would like to perform development, debbuging of an
> application, also logs would be useful for audit process. Trove itself
> provides access only for CRUD operations inside of database, so the user
> cannot access the instance directly and analyze its log files. Therefore,
> Trove should be able to provide ways to allow a user to download the
> database log for analysis.
>
>
> Log manipulations are designed to let user perform log investigations.
> Since Trove is a PaaS - level project, its user cannot interact with the
> compute instance directly, only with database through the provided API
> (database operations).
>
> I would like to propose the following API operations:
>
>1.
>
>Create DBLog entries.
>2.
>
>Delete DBLog entries.
>3.
>
>List DBLog entries.
>
> Possible API, models, server, and guest configurations are described at
> wiki page. [1]
>
> [1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Michael Basnight
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-18 Thread Gabriel Hurley
From my experience, directly adding incubated projects to the main Horizon 
codebase prior to graduation has been fraught with peril. That said, the 
closer they can be together prior to the graduation merge, the better.

I like the idea of these types of projects being under the OpenStack Dashboard 
Program umbrella. Ideally I think it would be a jointly-managed resource in 
Gerrit. The Horizon Core folks would have +2 power, but the Tuskar core folks 
would also have +2 power. (I'm 90% certain that can be done in the Gerrit 
admin...)

That way development speed isn't bottlenecked by Horizon Core, but there's a 
closer tie-in with the people who may ultimately be maintaining it. It becomes 
easier to keep track of, and can be more easily guided in the right directions. 
With a little work incubated dashboard components like this could even be made 
to be a non-gating part of the testing infrastructure to indicate when things 
change or break.

Adding developers to Horizon Core just for the purpose of reviewing an 
incubated umbrella project is not the right way to do things at all.  If my 
proposal of two separate groups having the +2 power in Gerrit isn't technically 
feasible then a new group should be created for management of umbrella projects.

All the best,

 - Gabriel

> -Original Message-
> From: Julie Pichon [mailto:jpic...@redhat.com]
> Sent: Wednesday, December 18, 2013 4:50 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and
> Tuskar-UI codebase merge
> 
> "Jaromir Coufal"  wrote:
> > Hi All,
> >
> > After yesterday's weekly meeting it seems that majority of us is
> > leaning towards this approach (codebase merge). However there are few
> > concerns regarding speed of development and resources especially for
> reviews.
> > With this e-mail, I'd like to kick off discussion about how we might
> > assure to get enough people so that we can fulfill Horizon as well as
> > Tuskar-UI goals.
> 
> It seems to me the opinions were still divided in the meeting, and this is why
> we are continuing the discussion. Personally I'm more favourable to the 
> initial
> proposal, of moving the tuskar-ui repository under the Horizon program.
> Existing Horizon cores add it to their Watched Changes under Gerrit, just like
> for Horizon and django_openstack_auth now, and get familiar with the
> project during its development. Tuskar-ui cores can move their discussions to
> the #openstack-horizon channel and that's another way that we all become
> part of the same horizon community, and another chance for the horizon
> folks to understand what's important to Tuskar.
> 
> There's a number of exceptions that are being requested, that make me
> think now is not the right time to just merge the code into the main horizon
> codebase. A few that have come up during the IRC meeting:
> 
>  - Request for more attention due to the need to move very fast
>  - (Possibly) Request for an exception to the same company approval
>rule
>  - Request to be able to check in broken stuff at times
> 
> In my mind these requirements come from being an incubated project with
> different priorities, which is fine but make the project not suitable for the
> main horizon codebase yet.
> 
> I think it makes more sense to merge once the project is integrated, like
> we've done so far. Another discussion on list ([1]) makes it clear that 
> there's
> no promise an incubated project will become integrated, and that it can be
> dropped from incubation. IMHO that's another argument for waiting for a
> project to be integrated before merging it. This doesn't mean it doesn't get
> any attention from the existing Horizon team.
> 
> I think extending this rule to all - moving dashboard components from
> incubated projects under the Horizon program - would be more reasonable
> and easier to scale, than the proposed alternative.
> 
> > With respect to the e-mail which was sent Dec 17 (quoted below), I
> > think all of what was written applies, we just need to clarify couple
> > more details. And I hope we can get to consensus by the end of this
> > week so that we can make things happen.
> >
> >
> > *Blueprints:*
> > Currently there is couple of already existing blueprints for Tuskar-UI
> > and there will appear more based on wireframes around deployment
> setup
> > (which are not finished yet).
> >
> > https://blueprints.launchpad.net/tuskar-ui
> >
> > We want to make sure, that Horizoners are fine with these blueprints
> > being merged with Horizon ones, keeping their priorities and that
> > there will appear couple more in time.
> >
> >
> > *Cores:*
> > To make sure that we have enough people to get the code in and also to
> > help Horizoners to free load of reviews on their side, we propose to
> > merge our core team (plus rdopieralski). All of them contributes to
> > Horizon so they know the code well.
> >
> > People we are talking about

Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-12-18 Thread Anita Kuno
Matt Trenish wins this round of spot Anita's typo to the -dev ml.

9am - 5pm, 8 hours of work (lunch in the middle) for 3 days

Thanks Matt,
Anita.

On 12/18/2013 04:17 PM, Anita Kuno wrote:
> Okay time for a recap.
> 
> What: Neutron Tempest code sprint
> Where: Montreal, QC, Canada
> When: January 15, 16, 17 2014
> Location: I am about to sign the contract for Salle du Parc at 3625 Parc
> avenue, a room in a residence of McGill University.
> Time: 9am - 5am
> 
> I am expecting to see the following people in Montreal in January:
> Mark McClain
> Salvatore Orlando
> Sean Dague
> Matt Trenish
> Jay Pipes
> Sukhdev Kapur
> Miguel Lavelle
> Oleg Bondarev
> Rossella Sblendido
> Emilien Macchi
> Sylvain Afchain
> Nicolas Planel
> Kyle Mestery
> Dane Leblanc
> Sumit Naiksatam
> Henry Gessau
> Don Kehn
> Carl Baldwin
> Justin Hammond
> Anita Kuno
> 
> If you are on the above list and can't attend, please email me so I have
> an up-to-date list. If you are planning on attending and I don't have
> your name listed, please email me without delay so that I can add you
> and you get done what you need to get done to attend.
> 
> I have the contract for the room and will be signing it and sending it
> in with the room deposit tomorrow. Monty has about 6 more hours to get
> back to me on this, then I just have to go ahead and do it.
> 
> Caterer is booked and I will be doing menu selection over the holidays.
> I can post the intended, _the intended_ menu once I have decided. Soup,
> salad, sandwich - not glamourous but hopefully filling. If the menu on
> the day isn't the same as what I post, please forgive me. Unforeseen
> circumstances may crop up and I will do my best to get you fed. One
> person has identified they have a specific food request, if there are
> any more out there, please email me now. This covers breakfast, lunch
> and tea/coffee all day.
> 
> Henry Gessau will be social convener for dinners. If you have some
> restaurant suggestions, please contact Henry. Organization of dinners
> will take place once we congregate in our meeting room.
> 
> T-shirts: we decided that the code quality of Neutron was a higher
> priority than t-shirts.
> 
> One person required a letter of invitation for visa purposes and
> received it. I hope the visa has been granted.
> 
> Individuals arrangements for hotels seem to be going well from what I
> have been hearing. A few people will be staying at Le Nouvel Hotel,
> thanks for finding that one, Rosella.
> 
> Weather: well you got me on this one. This winter is colder than we have
> had in some time and more snow too. So it will be beautiful but bring or
> buy warm clothes. A few suggestions:
> * layer your clothes (t-shirt, turtleneck, sweatshirt)
> * boots with removable liners (this is my boot of choice:
> http://amzn.to/19ddJve) remove the liners at the end of each day to dry them
> * warm coat
> * toque (wool unless you are allergic) I'm seeing them for $35, don't
> pay that much, you should be able to get something warm for $15 or less
> * warm socks (cotton socks and wool over top)- keep your feet dry
> * mitts (mitts keep my fingers warmer than gloves)
> * scarf
> If the weather is making you panic, talk to me and I will see about
> bringing some of my extra accessories with me. The style might not be
> you but you will be warm.
> 
> Remember, don't lick the flagpole. It doesn't matter what your friends
> tell you.
> 
> That's all I can think of, if I missed something, email me.
> 
> Oh, and best to consider me offline from Jan.2 until the code sprint.
> Make sure you have all the information you need prior to that time.
> 
> See you in Montreal,
> Anita.
> 
> 
> On 11/19/2013 11:31 AM, Rossella Sblendido wrote:
>> Hi all,
>>
>> sorry if this is a bit OT now.
>> I contacted some hotels to see if we could get a special price if we book
>> many rooms. According to my research the difference in price is not much.
>> Also, as Anita was saying, booking for everybody is more complicated.
>> So I decided to booked a room for myself.
>> I share the name of the hotel, in case you want to stay in the same place
>> http://www.lenouvelhotel.com/.
>> It's close to the meeting room and the price is one of the best I have
>> found.
>>
>> cheers,
>>
>> Rossella
>>
>>
>>
>>
>> On Sat, Nov 16, 2013 at 7:39 PM, Anita Kuno  wrote:
>>
>>>  On 11/16/2013 01:14 PM, Anita Kuno wrote:
>>>
>>> On 11/16/2013 12:37 PM, Sean Dague wrote:
>>>
>>> On 11/15/2013 10:36 AM, Russell Bryant wrote:
>>>
>>>  On 11/13/2013 11:10 AM, Anita Kuno wrote:
>>>
>>>  Neutron Tempest code sprint
>>>
>>> In the second week of January in Montreal, Quebec, Canadathere will be a
>>> Neutron Tempest code sp

[openstack-dev] [trove] datastore migration issues

2013-12-18 Thread Greg Hill
I've been working on fixing a bug related to migrating existing installations 
to the new datastore code:

https://bugs.launchpad.net/trove/+bug/1259642

The basic gist is that existing instances won't have any data in the 
datastore_version_id field in the database unless we somehow populate that data 
during migration, and not having that data populated breaks a lot of things 
(including the ability to list instances or delete or resize old instances).  
It's impossible to populate that data in an automatic, generic way, since it's 
highly vendor-dependent on what database and version they currently support, 
and there's not enough data in the older schema to populate the new tables 
automatically.

So far, we've come up with some non-optimal solutions:

1. The first iteration was to assume 'mysql' as the database manager on 
instances without a datastore set.
2. The next iteration was to make the default value be configurable in 
trove.conf, but default to 'mysql' if it wasn't set.
3. It was then proposed that we could just use the 'default_datastore' value 
from the config, which may or may not be set by the operator.

My problem with any of these approaches beyond the first is that requiring 
people to populate config values in order to successfully migrate to the newer 
code is really no different than requiring them to populate the new database 
tables with appropriate data and updating the existing instances with the 
appropriate values.  Either way, it's now highly dependent on people deploying 
the upgrade to know about this change and react accordingly.

Does anyone have a better solution that we aren't considering?  Is this even 
worth the effort given that trove has so few current deployments that we can 
just make sure everyone is populating the new tables as part of their upgrade 
path and not bother fixing the code to deal with the legacy data?

Greg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-12-18 Thread Anita Kuno
Okay time for a recap.

What: Neutron Tempest code sprint
Where: Montreal, QC, Canada
When: January 15, 16, 17 2014
Location: I am about to sign the contract for Salle du Parc at 3625 Parc
avenue, a room in a residence of McGill University.
Time: 9am - 5am

I am expecting to see the following people in Montreal in January:
Mark McClain
Salvatore Orlando
Sean Dague
Matt Trenish
Jay Pipes
Sukhdev Kapur
Miguel Lavelle
Oleg Bondarev
Rossella Sblendido
Emilien Macchi
Sylvain Afchain
Nicolas Planel
Kyle Mestery
Dane Leblanc
Sumit Naiksatam
Henry Gessau
Don Kehn
Carl Baldwin
Justin Hammond
Anita Kuno

If you are on the above list and can't attend, please email me so I have
an up-to-date list. If you are planning on attending and I don't have
your name listed, please email me without delay so that I can add you
and you get done what you need to get done to attend.

I have the contract for the room and will be signing it and sending it
in with the room deposit tomorrow. Monty has about 6 more hours to get
back to me on this, then I just have to go ahead and do it.

Caterer is booked and I will be doing menu selection over the holidays.
I can post the intended, _the intended_ menu once I have decided. Soup,
salad, sandwich - not glamourous but hopefully filling. If the menu on
the day isn't the same as what I post, please forgive me. Unforeseen
circumstances may crop up and I will do my best to get you fed. One
person has identified they have a specific food request, if there are
any more out there, please email me now. This covers breakfast, lunch
and tea/coffee all day.

Henry Gessau will be social convener for dinners. If you have some
restaurant suggestions, please contact Henry. Organization of dinners
will take place once we congregate in our meeting room.

T-shirts: we decided that the code quality of Neutron was a higher
priority than t-shirts.

One person required a letter of invitation for visa purposes and
received it. I hope the visa has been granted.

Individuals arrangements for hotels seem to be going well from what I
have been hearing. A few people will be staying at Le Nouvel Hotel,
thanks for finding that one, Rosella.

Weather: well you got me on this one. This winter is colder than we have
had in some time and more snow too. So it will be beautiful but bring or
buy warm clothes. A few suggestions:
* layer your clothes (t-shirt, turtleneck, sweatshirt)
* boots with removable liners (this is my boot of choice:
http://amzn.to/19ddJve) remove the liners at the end of each day to dry them
* warm coat
* toque (wool unless you are allergic) I'm seeing them for $35, don't
pay that much, you should be able to get something warm for $15 or less
* warm socks (cotton socks and wool over top)- keep your feet dry
* mitts (mitts keep my fingers warmer than gloves)
* scarf
If the weather is making you panic, talk to me and I will see about
bringing some of my extra accessories with me. The style might not be
you but you will be warm.

Remember, don't lick the flagpole. It doesn't matter what your friends
tell you.

That's all I can think of, if I missed something, email me.

Oh, and best to consider me offline from Jan.2 until the code sprint.
Make sure you have all the information you need prior to that time.

See you in Montreal,
Anita.


On 11/19/2013 11:31 AM, Rossella Sblendido wrote:
> Hi all,
> 
> sorry if this is a bit OT now.
> I contacted some hotels to see if we could get a special price if we book
> many rooms. According to my research the difference in price is not much.
> Also, as Anita was saying, booking for everybody is more complicated.
> So I decided to booked a room for myself.
> I share the name of the hotel, in case you want to stay in the same place
> http://www.lenouvelhotel.com/.
> It's close to the meeting room and the price is one of the best I have
> found.
> 
> cheers,
> 
> Rossella
> 
> 
> 
> 
> On Sat, Nov 16, 2013 at 7:39 PM, Anita Kuno  wrote:
> 
>>  On 11/16/2013 01:14 PM, Anita Kuno wrote:
>>
>> On 11/16/2013 12:37 PM, Sean Dague wrote:
>>
>> On 11/15/2013 10:36 AM, Russell Bryant wrote:
>>
>>  On 11/13/2013 11:10 AM, Anita Kuno wrote:
>>
>>  Neutron Tempest code sprint
>>
>> In the second week of January in Montreal, Quebec, Canadathere will be a
>> Neutron Tempest code sprint to improve the status of Neutron tests in
>> Tempest and to add new tests.
>>
>>  First off, I think anything regarding putting more effort into this is
>> great.  However, I *beg* the Neutron team not to wait until this week to
>> make significant progress.
>>
>> To be clear, IMO, this is already painfully late.  It's one of the
>> largest items blocking deprecation of nova-network and movin

[openstack-dev] [neutron] [third-party-testing] Reminder: Meeting tomorrow

2013-12-18 Thread Kyle Mestery
Just a reminder, we'll be meeting at 2200 UTC on #openstack-meeting-alt.
We'll be looking at this etherpad [1] again, and continuing discussions from
last week.

Thanks!
Kyle

[1] https://etherpad.openstack.org/p/multi-node-neutron-tempest

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] next two weekly meetings

2013-12-18 Thread Michael Basnight
We are canceling our next two weekly meetings. They occur on Dec 25 and Jan
1. See you all on the 8th for our next regularly scheduled trove meeting.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Unit test coverage

2013-12-18 Thread Amir Sadoughi
I’ve run the tests again with `coverage report -m` to show the line-by-line 
numbers of statements that weren’t executed.

(.venv)amir@dev:~/neutron$ git log HEAD  --pretty=oneline | head -n 1
51bef83fecfd6ccf7bf1f9ba3d14bbab0c205949 Merge "Midonet plugin: Fix source NAT"

tox -e cover neutron.tests.unit.ml2: http://paste.openstack.org/show/55372/
tox -e cover neutron.tests.unit.openvswitch.test_ovs_neutron_agent: 
http://paste.openstack.org/show/55373/
tox -e cover neutron.tests.unit.linuxbridge.test_lb_neutron_agent: 
http://paste.openstack.org/show/55374/

tox -e cover neutron.tests.unit.ml2 (filtered, sorted):
http://paste.openstack.org/show/55375/

Amir

On Dec 12, 2013, at 4:48 PM, Amir Sadoughi  wrote:

> Mathieu,
> 
> Here are my results for running the unit tests for the agents.
> 
> I ran `tox -e cover neutron.tests.unit.openvswitch.test_ovs_neutron_agent` at 
> 3b4233873539bad62d202025529678a5b0add412 with the following result:
> 
> Name                                                   Stmts   Miss  Branch  BrMiss  Cover
> …
> neutron/plugins/openvswitch/agent/ovs_neutron_agent      639    257     237     123    57%
> …
> 
> and `tox -e cover neutron.tests.unit.linuxbridge.test_lb_neutron_agent` with 
> the following result:
> 
> ...
> neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent    607    134     255      73    76%
> ...
> 
> Amir
> 
> 
> On Dec 11, 2013, at 3:01 PM, Mathieu Rohon  wrote:
> 
>> the coverage is quite good on the ML2 plugin.
>> it looks like the biggest effort should be done on the ovs and lb agents, no?
>> 
>> On Wed, Dec 11, 2013 at 9:00 PM, Amir Sadoughi
>>  wrote:
>>> From today’s ML2 meeting, I had an action item to produce coverage report
>>> for ML2 unit tests.
>>> 
>>> Here is the command line output of the tests and report I produced:
>>> 
>>> http://paste.openstack.org/show/54845/
>>> 
>>> Amir Sadoughi
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding DB migration items to the common review checklist

2013-12-18 Thread Matt Riedemann



On 12/18/2013 2:11 PM, Dan Prince wrote:



- Original Message -

From: "Matt Riedemann" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, December 18, 2013 12:27:49 PM
Subject: [openstack-dev] Adding DB migration items to the common review 
checklist

I've seen this come up a few times in reviews and was thinking we should
put something in the general review checklist wiki for it [1].

Basically I have three things I'd like to have in the list for DB
migrations:

1. Unique constraints should be named. Different DB engines and
SQLAlchemy dialects automatically name the constraint their own way,
which can be troublesome for universal migrations. We should avoid this
by enforcing that UCs are named when they are created. This means not
using the unique=True arg in UniqueConstraint if the name arg isn't
provided.

2. Foreign keys should be named for the same reasons in #1.

3. Foreign keys shouldn't be created against nullable columns. Some DB
engines don't allow unique constraints over nullable columns and if you
can't create the unique constraint you can't create the foreign key, so
we should avoid this. If you need the FK, then the pre-req is to make
the target column non-nullable. Think of the instances.uuid column in
nova for example.
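
To illustrate, a migration that follows all three items might look roughly
like this (a sketch with made-up table and column names, not an actual Nova
migration):

    from sqlalchemy import Column, ForeignKeyConstraint, Integer, MetaData
    from sqlalchemy import String, Table, UniqueConstraint


    def upgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine

        # Hypothetical tables, purely for illustration.
        widgets = Table(
            'widgets', meta,
            Column('id', Integer, primary_key=True, nullable=False),
            # The column the FK will point at is non-nullable (item #3).
            Column('uuid', String(36), nullable=False),
            # Named unique constraint instead of unique=True (item #1).
            UniqueConstraint('uuid', name='uniq_widgets0uuid'),
            mysql_engine='InnoDB')
        widgets.create()

        widget_parts = Table(
            'widget_parts', meta,
            Column('id', Integer, primary_key=True, nullable=False),
            Column('widget_uuid', String(36), nullable=False),
            # Named foreign key (item #2) against the non-nullable column.
            ForeignKeyConstraint(['widget_uuid'], ['widgets.uuid'],
                                 name='fk_widget_parts0widget_uuid'),
            mysql_engine='InnoDB')
        widget_parts.create()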

Unless anyone has a strong objection to this, I'll update the review
checklist wiki with these items.


No objection to these.

One possible addition would be to make sure that migrations stand on their own 
as much as possible. Code sharing, while good in many cases, can bite you in DB 
migrations because fixing a bug in the shared code may change the behavior of 
an old (released) migration. So by sharing migration code it can then become 
easier to break upgrade paths down the road. If we make some exceptions to 
this rule with nova.db.sqlalchemy we need to be very careful that we don't 
change the behavior in those functions. Automated tests help here too.

Dan


Not sure if this is pure coincidence, but case in point:

https://review.openstack.org/#/c/62965/





[1] https://wiki.openstack.org/wiki/ReviewChecklist

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding DB migration items to the common review checklist

2013-12-18 Thread Matt Riedemann



On 12/18/2013 1:14 PM, Brant Knudson wrote:

Matt -

Could a test be added that goes through the models and checks these
things? Other projects could use this too.

Here's an example of a test that checks if the tables are all InnoDB:
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/db/test_migrations.py?id=6e455cd97f04bf26bbe022be17c57e089cf502f4#n430

- Brant



Brant, I could see automating #3 since you could trace the FK to the UC 
and then check whether the columns in the UC are nullable or not. I'm not 
sure how easy it would be to generically test 1 and 2 because we don't 
have strict naming conventions on UC/FK names as far as I know, but I 
guess we could start enforcing that with a test and whitelist any 
existing UC/FK names that don't fit the new convention.
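
Something along these lines might be a starting point (a rough sketch that 
walks the model metadata rather than the real migrated schema, and the 
whitelist is hypothetical):

    from sqlalchemy import ForeignKeyConstraint, UniqueConstraint

    from nova.db.sqlalchemy import models
    from nova import test


    class TestSchemaConventions(test.NoDBTestCase):
        # Hypothetical whitelist of (table, constraint) names that predate
        # whatever naming convention we settle on.
        _NAME_WHITELIST = set()

        def test_ucs_and_fks_are_named(self):
            # Items #1 and #2: constraints should carry explicit names.
            for table in models.BASE.metadata.tables.values():
                for constraint in table.constraints:
                    if not isinstance(constraint, (UniqueConstraint,
                                                   ForeignKeyConstraint)):
                        continue
                    if (table.name, constraint.name) in self._NAME_WHITELIST:
                        continue
                    self.assertIsNotNone(
                        constraint.name,
                        'Unnamed constraint on table %s' % table.name)

        def test_fk_targets_are_not_nullable(self):
            # Item #3: the column an FK references must be non-nullable.
            for table in models.BASE.metadata.tables.values():
                for fk in table.foreign_keys:
                    self.assertFalse(
                        fk.column.nullable,
                        'FK %s.%s references nullable column %s.%s' %
                        (table.name, fk.parent.name,
                         fk.column.table.name, fk.column.name))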


Thoughts on that or other ideas how to automate checking for this?





On Wed, Dec 18, 2013 at 11:27 AM, Matt Riedemann
mailto:mrie...@linux.vnet.ibm.com>> wrote:

I've seen this come up a few times in reviews and was thinking we
should put something in the general review checklist wiki for it [1].

Basically I have three things I'd like to have in the list for DB
migrations:

1. Unique constraints should be named. Different DB engines and
SQLAlchemy dialects automatically name the constraint their own way,
which can be troublesome for universal migrations. We should avoid
this by enforcing that UCs are named when they are created. This
means not using the unique=True arg in UniqueConstraint if the name
arg isn't provided.

2. Foreign keys should be named for the same reasons in #1.

3. Foreign keys shouldn't be created against nullable columns. Some
DB engines don't allow unique constraints over nullable columns and
if you can't create the unique constraint you can't create the
foreign key, so we should avoid this. If you need the FK, then the
pre-req is to make the target column non-nullable. Think of the
instances.uuid column in nova for example.

Unless anyone has a strong objection to this, I'll update the review
checklist wiki with these items.

    [1] https://wiki.openstack.org/wiki/ReviewChecklist


--

Thanks,

Matt Riedemann


_
OpenStack-dev mailing list
    OpenStack-dev@lists.openstack.org

    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-18 Thread Shixiong Shang
Hi, Ian:

I agree with you on the point that the way we implement it should be app 
agnostic. In addition, it should cover both CLI and Dashboard, so the system 
behavior should be consistent to end users.

The keywords are just one of the many ways to implement the concept. They are 
based on the reality that dnsmasq is the only driver available to the 
community today. At the end of the day, the input from the customer should be 
translated to one of those mode keywords. That doesn't imply the same 
constants have to be used as part of the CLI or Dashboard.

Randy and I had a lengthy discussion/debate about this topic today. We have a 
straw-man proposal and will share it with the team tomorrow.

That being said, what concerns me the most at this moment is that we are not 
on the same page. I hope we can reach consensus during tomorrow's sub-team 
meeting. If you cannot make it, then please set up a separate meeting and 
invite the key stakeholders so we have a chance to sort it out.

Shixiong




> On Dec 18, 2013, at 8:25 AM, Ian Wells  wrote:
> 
>> On 18 December 2013 14:10, Shixiong Shang  
>> wrote:
>> Hi, Ian:
>> 
>> I won’t say the intent here is to replace dnsmasq-mode-keyword BP. Instead, 
>> I was trying to leverage and enhance those definitions so when dnsmasq is 
>> launched, it knows which mode it should run in. 
>> 
>> That being said, I see the value of your points and I also had lengthy 
>> discussion with Randy regarding this. We did realize that the keyword itself 
>> may not be sufficient to properly configure dnsmasq.
> 
> I think the point is that the attribute on whatever object (subnet or router) 
> that defines the behaviour should define the behaviour, in precisely the 
> terms you're talking about, and then we should find the dnsmasq options to 
> suit.  Talking to Sean, he's good with this too, so we're all working to the 
> same ends and it's just a matter of getting code in.
>  
>> Let us discuss that on Thursday’s IRC meeting.
> 
> Not sure if I'll be available or not this Thursday, unfortunately.  I'll try 
> to attend but I can't make promises.
> 
>> Randy and I had a quick glance over your document. Much of it parallels the 
>> work we did on our POC last summer, and is now being addressed across 
>> multiple BP being implemented by ourselves or with Sean Collins and IBM 
>> team's work. I will take a closer look and provide my comments.
> 
> That's great.  I'm not wedded to the details in there, I'm actually more 
> interested that we've covered everything.
> 
> If you have blueprint references, add them as comments - the 
> ipv6-feature-parity BP could do with work and if we get the links together in 
> one place we can update it.
> -- 
> Ian.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday December 19th at 17:00 UTC

2013-12-18 Thread Matthew Treinish
Just a reminder that the weekly OpenStack QA team IRC meeting will be tomorrow,
December 19th at 17:00 UTC in the #openstack-meeting channel.

The meeting agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding DB migration items to the common review checklist

2013-12-18 Thread Dan Prince


- Original Message -
> From: "Matt Riedemann" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, December 18, 2013 12:27:49 PM
> Subject: [openstack-dev] Adding DB migration items to the common review   
> checklist
> 
> I've seen this come up a few times in reviews and was thinking we should
> put something in the general review checklist wiki for it [1].
> 
> Basically I have three things I'd like to have in the list for DB
> migrations:
> 
> 1. Unique constraints should be named. Different DB engines and
> SQLAlchemy dialects automatically name the constraint their own way,
> which can be troublesome for universal migrations. We should avoid this
> by enforcing that UCs are named when they are created. This means not
> using the unique=True arg in UniqueConstraint if the name arg isn't
> provided.
> 
> 2. Foreign keys should be named for the same reasons in #1.
> 
> 3. Foreign keys shouldn't be created against nullable columns. Some DB
> engines don't allow unique constraints over nullable columns and if you
> can't create the unique constraint you can't create the foreign key, so
> we should avoid this. If you need the FK, then the pre-req is to make
> the target column non-nullable. Think of the instances.uuid column in
> nova for example.
> 
> Unless anyone has a strong objection to this, I'll update the review
> checklist wiki with these items.

No objection to these.

One possible addition would be to make sure that migrations stand on their own 
as much as possible. Code sharing, while good in many cases, can bite you in DB 
migrations because fixing a bug in the shared code may change the behavior of 
an old (released) migration. So by sharing migration code it can then become 
easier to break upgrade paths down the road. If we make some exceptions to 
this rule with nova.db.sqlalchemy we need to be very careful that we don't 
change the behavior in those functions. Automated tests help here too.
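
A rough illustration of the difference (the helper here is made up, not real
Nova code):

    # Fragile: importing a shared helper means a later bug fix to that
    # helper silently changes what this already-released migration does.
    from nova.db.sqlalchemy import utils

    def upgrade(migrate_engine):
        utils.add_widget_index(migrate_engine)    # made-up shared helper


    # Safer: the migration carries its own small copy of the logic, so
    # its behaviour is frozen at the point the release shipped.
    def _add_widget_index(migrate_engine):
        pass    # the same few lines, duplicated locally on purpose

    def upgrade(migrate_engine):
        _add_widget_index(migrate_engine)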

Dan

> 
> [1] https://wiki.openstack.org/wiki/ReviewChecklist
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Fox, Kevin M
How about a different approach then... OpenStack has thus far been very 
successful providing an API and plugins for dealing with things that cloud 
providers need to be able to switch out to suit their needs.

There seems to be two different parts to the unified agent issue:
 * How to get rpc messages to/from the VM from the thing needing to control it.
 * How to write a plugin to go from a generic rpc mechanism, to doing something 
useful in the vm.

How about standardising what a plugin looks like, "python api, c++ api, etc". 
It won't have to deal with transport at all.

Also standardize the api the controller uses to talk to the system, rest or 
amqp.

Then the mechanism is an implementation detail. If rackspace wants to do a VM 
serial driver, thats cool. If you want to use the network, that works too. 
Savanna/Trove/etc don't have to care which mechanism is used, only the cloud 
provider.

It's not quite as good as one and only one implementation to rule them all, but 
would allow providers to choose what's best for their situation and get as much 
code shared as can be.
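
Very roughly, the kind of split I mean (names are made up, just to show the
shape):

    import abc


    class AgentPlugin(object):
        """What Trove/Savanna/Heat would write: named calls, no transport."""

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def get_methods(self):
            """Return a dict of {name: callable} this plugin exposes."""


    class Transport(object):
        """What the cloud provider swaps out: AMQP, network, VM serial, ..."""

        def __init__(self, plugins):
            self._methods = {}
            for plugin in plugins:
                self._methods.update(plugin.get_methods())

        def dispatch(self, method, **kwargs):
            # However the request arrived, hand it to the right plugin call.
            return self._methods[method](**kwargs)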

What do you think?

Thanks,
Kevin
 




From: Tim Simpson [tim.simp...@rackspace.com]
Sent: Wednesday, December 18, 2013 11:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Thanks for the summary Dmitry. I'm ok with these ideas, and while I still 
disagree with having a single, forced standard for RPC communication, I should 
probably let things pan out a bit before being too concerned.

- Tim



From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Wednesday, December 18, 2013 11:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Tim,

The unified agent we are proposing is based on the following ideas:
  * the core agent has _no_ functionality at all. It is a pure RPC mechanism 
with the ability to add whichever API needed on top of it.
  * the API is organized into modules which could be reused across different 
projects.
  * there will be no single package: each project (Trove/Savanna/Others) 
assembles its own agent based on API project needs.

I hope that covers your concerns.

Dmitry


2013/12/18 Tim Simpson 
mailto:tim.simp...@rackspace.com>>
I've been following the Unified Agent mailing list thread for awhile now and, 
as someone who has written a fair amount of code for both of the two existing 
Trove agents, thought I should give my opinion about it. I like the idea of a 
unified agent, but believe that forcing Trove to adopt this agent for use as 
its by default will stifle innovation and harm the project.

There are reasons Trove has more than one agent currently. While everyone knows 
about the "Reference Agent" written in Python, Rackspace uses a different agent 
written in C++ because it takes up less memory. The concerns which led to the 
C++ agent would not be addressed by a unified agent, which if anything would be 
larger than the Reference Agent is currently.

I also believe a unified agent represents the wrong approach philosophically. 
An agent by design needs to be lightweight, capable of doing exactly what it 
needs to and no more. This is especially true for a project like Trove whose 
goal is not to provide overly general PAAS capabilities but simply 
installation and maintenance of different datastores. Currently, the Trove 
daemons handle most logic and leave the agents themselves to do relatively 
little. This takes some effort as many of the first iterations of Trove 
features have too much logic put into the guest agents. However through 
perseverance the subsequent designs are usually cleaner and simpler to follow. 
A community approved, "do everything" agent would endorse the wrong balance and 
lead to developers piling up logic on the guest side. Over time, features would 
become dependent on the Unified Agent, making it impossible to run or even 
contemplate light-weight agents.

Trove's interface to agents today is fairly loose and could stand to be made 
stricter. However, it is flexible and works well enough. Essentially, the duck 
typed interface of the trove.guestagent.api.API class is used to send messages, 
and Trove conductor is used to receive them at which point it updates the 
database. Because both of these components can be swapped out if necessary, the 
code could support the Unified Agent when it appears as well as future agents.

It would be a mistake however to alter Trove's standard method of communication 
to please the new Unified Agent. In general, we should try to keep Trove 
speaking to guest agents in Trove's terms alone to prevent bloat.

Thanks,

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone] domain admin role query

2013-12-18 Thread Ravi Chunduru
Thanks Dolph,
 It worked now. I specified domain id in the scope.

-Ravi.


On Wed, Dec 18, 2013 at 12:05 PM, Ravi Chunduru  wrote:

> Hi Dolph,
>   I don't have a project yet to use in the scope. The intention is to get a
> token using domain admin credentials and create a project using it.
>
> Thanks,
> -Ravi.
>
>
> On Wed, Dec 18, 2013 at 11:39 AM, Dolph Mathews 
> wrote:
>
>>
>> On Wed, Dec 18, 2013 at 12:48 PM, Ravi Chunduru wrote:
>>
>>> Thanks all for the information.
>>> I have now v3 policies in place, the issue is that as a domain admin I
>>> could not create a project in the domain. I get 403 unauthorized status.
>>>
>>> I see that when I, as a 'domain admin', request a token, the response did
>>> not have any roles.  In the token request, I couldn't specify the project -
>>> as we are about to create the project in the next step.
>>>
>>
>> Specify a domain as the "scope" to obtain domain-level authorization in
>> the resulting token.
>>
>> See the third example under Scope:
>>
>>
>> https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#scope-scope
>>
>>
>>>
>>> Here is the complete request/response of all the steps done.
>>> https://gist.github.com/kumarcv/8015275
>>>
>>> I am assuming it's a bug. Please let me know your opinions.
>>>
>>> Thanks,
>>> -Ravi.
>>>
>>>
>>>
>>>
>>> On Thu, Dec 12, 2013 at 3:00 PM, Henry Nash 
>>> wrote:
>>>
 Hi

 So the idea wasn't the you create a domain with the id of
 'domain_admin_id', rather that you create the domain that you plan to use
 for your admin domain, and then paste its (auto-generated) domain_id into
 the policy file.

 Henry
 On 12 Dec 2013, at 03:11, Paul Belanger 
 wrote:

 > On 13-12-11 11:18 AM, Lyle, David wrote:
 >> +1 on moving the domain admin role rules to the default policy.json
 >>
 >> -David Lyle
 >>
 >> From: Dolph Mathews [mailto:dolph.math...@gmail.com]
 >> Sent: Wednesday, December 11, 2013 9:04 AM
 >> To: OpenStack Development Mailing List (not for usage questions)
 >> Subject: Re: [openstack-dev] [keystone] domain admin role query
 >>
 >>
 >> On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox <
 jamielen...@redhat.com> wrote:
 >> Using the default policies it will simply check for the admin role
 and not care about the domain that admin is limited to. This is partially a
 left over from the V2 api when there wasn't domains to worry > about.
 >>
 >> A better example of policies are in the file
 etc/policy.v3cloudsample.json. In there you will see the rule for
 create_project is:
 >>
 >>   "identity:create_project": "rule:admin_required and
 domain_id:%(project.domain_id)s",
 >>
 >> as opposed to (in policy.json):
 >>
 >>   "identity:create_project": "rule:admin_required",
 >>
 >> This is what you are looking for to scope the admin role to a domain.
 >>
 >> We need to start moving the rules from policy.v3cloudsample.json to
 the default policy.json =)
 >>
 >>
 >> Jamie
 >>
 >> - Original Message -
 >>> From: "Ravi Chunduru" 
 >>> To: "OpenStack Development Mailing List" <
 openstack-dev@lists.openstack.org>
 >>> Sent: Wednesday, 11 December, 2013 11:23:15 AM
 >>> Subject: [openstack-dev] [keystone] domain admin role query
 >>>
 >>> Hi,
 >>> I am trying out Keystone V3 APIs and domains.
 >>> I created an domain, created a project in that domain, created an
 user in
 >>> that domain and project.
 >>> Next, gave an admin role for that user in that domain.
 >>>
 >>> I am assuming that user is now admin to that domain.
 >>> Now, I got a scoped token with that user, domain and project. With
 that
 >>> token, I tried to create a new project in that domain. It worked.
 >>>
 >>> But, using the same token, I could also create a new project in a
 'default'
 >>> domain too. I expected it should throw authentication error. Is it
 a bug?
 >>>
 >>> Thanks,
 >>> --
 >>> Ravi
 >>>
 >
 > One of the issues I had this week while using the
 policy.v3cloudsample.json was I had no easy way of creating a domain with
 the id of 'admin_domain_id'.  I basically had to modify the SQL directly to
 do it.
 >
 > Any chance we can create a 2nd domain using 'admin_domain_id' via
 keystone-manage sync_db?
 >
 > --
 > Paul Belanger | PolyBeacon, Inc.
 > Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
 > Github: https://github.com/pabelanger | Twitter:
 https://twitter.com/pabelanger
 >
 > ___
 > OpenStack-dev mailing list
 > OpenStack-dev@lists.openstack.org
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


 ___

Re: [openstack-dev] [keystone] domain admin role query

2013-12-18 Thread Ravi Chunduru
Hi Dolph,
  I don't have a project yet to use in the scope. The intention is to get a
token using domain admin credentials and create a project using it.

Thanks,
-Ravi.


On Wed, Dec 18, 2013 at 11:39 AM, Dolph Mathews wrote:

>
> On Wed, Dec 18, 2013 at 12:48 PM, Ravi Chunduru  wrote:
>
>> Thanks all for the information.
>> I have now v3 policies in place, the issue is that as a domain admin I
>> could not create a project in the domain. I get 403 unauthorized status.
>>
>> I see that when I, as a 'domain admin', request a token, the response did
>> not have any roles.  In the token request, I couldn't specify the project -
>> as we are about to create the project in the next step.
>>
>
> Specify a domain as the "scope" to obtain domain-level authorization in
> the resulting token.
>
> See the third example under Scope:
>
>
> https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#scope-scope
>
>
>>
>> Here is the complete request/response of all the steps done.
>> https://gist.github.com/kumarcv/8015275
>>
>> I am assuming it's a bug. Please let me know your opinions.
>>
>> Thanks,
>> -Ravi.
>>
>>
>>
>>
>> On Thu, Dec 12, 2013 at 3:00 PM, Henry Nash wrote:
>>
>>> Hi
>>>
>>> So the idea wasn't the you create a domain with the id of
>>> 'domain_admin_id', rather that you create the domain that you plan to use
>>> for your admin domain, and then paste its (auto-generated) domain_id into
>>> the policy file.
>>>
>>> Henry
>>> On 12 Dec 2013, at 03:11, Paul Belanger 
>>> wrote:
>>>
>>> > On 13-12-11 11:18 AM, Lyle, David wrote:
>>> >> +1 on moving the domain admin role rules to the default policy.json
>>> >>
>>> >> -David Lyle
>>> >>
>>> >> From: Dolph Mathews [mailto:dolph.math...@gmail.com]
>>> >> Sent: Wednesday, December 11, 2013 9:04 AM
>>> >> To: OpenStack Development Mailing List (not for usage questions)
>>> >> Subject: Re: [openstack-dev] [keystone] domain admin role query
>>> >>
>>> >>
>>> >> On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox <
>>> jamielen...@redhat.com> wrote:
>>> >> Using the default policies it will simply check for the admin role
>>> and not care about the domain that admin is limited to. This is partially a
>>> left over from the V2 api when there wasn't domains to worry > about.
>>> >>
>>> >> A better example of policies are in the file
>>> etc/policy.v3cloudsample.json. In there you will see the rule for
>>> create_project is:
>>> >>
>>> >>   "identity:create_project": "rule:admin_required and
>>> domain_id:%(project.domain_id)s",
>>> >>
>>> >> as opposed to (in policy.json):
>>> >>
>>> >>   "identity:create_project": "rule:admin_required",
>>> >>
>>> >> This is what you are looking for to scope the admin role to a domain.
>>> >>
>>> >> We need to start moving the rules from policy.v3cloudsample.json to
>>> the default policy.json =)
>>> >>
>>> >>
>>> >> Jamie
>>> >>
>>> >> - Original Message -
>>> >>> From: "Ravi Chunduru" 
>>> >>> To: "OpenStack Development Mailing List" <
>>> openstack-dev@lists.openstack.org>
>>> >>> Sent: Wednesday, 11 December, 2013 11:23:15 AM
>>> >>> Subject: [openstack-dev] [keystone] domain admin role query
>>> >>>
>>> >>> Hi,
>>> >>> I am trying out Keystone V3 APIs and domains.
>>> >>> I created an domain, created a project in that domain, created an
>>> user in
>>> >>> that domain and project.
>>> >>> Next, gave an admin role for that user in that domain.
>>> >>>
>>> >>> I am assuming that user is now admin to that domain.
>>> >>> Now, I got a scoped token with that user, domain and project. With
>>> that
>>> >>> token, I tried to create a new project in that domain. It worked.
>>> >>>
>>> >>> But, using the same token, I could also create a new project in a
>>> 'default'
>>> >>> domain too. I expected it should throw authentication error. Is it a
>>> bug?
>>> >>>
>>> >>> Thanks,
>>> >>> --
>>> >>> Ravi
>>> >>>
>>> >
>>> > One of the issues I had this week while using the
>>> policy.v3cloudsample.json was I had no easy way of creating a domain with
>>> the id of 'admin_domain_id'.  I basically had to modify the SQL directly to
>>> do it.
>>> >
>>> > Any chance we can create a 2nd domain using 'admin_domain_id' via
>>> keystone-manage sync_db?
>>> >
>>> > --
>>> > Paul Belanger | PolyBeacon, Inc.
>>> > Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
>>> > Github: https://github.com/pabelanger | Twitter:
>>> https://twitter.com/pabelanger
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Ravi
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Dmitry Mescheryakov
Tim, we definitely don't want to force projects to migrate to the unified
agent. I've started making the PoC with the idea that Savanna needs the
agent anyway, and we want it to be ready for Icehouse. On the other hand, I
believe it will be much easier to drive the discussion further with the
PoC ready, as we will have something material to talk over.

Dmitry


2013/12/18 Tim Simpson 

>  Thanks for the summary Dmitry. I'm ok with these ideas, and while I
> still disagree with having a single, forced standard for RPC communication,
> I should probably let things pan out a bit before being too concerned.
>
> - Tim
>
>
>  --
> *From:* Dmitry Mescheryakov [dmescherya...@mirantis.com]
> *Sent:* Wednesday, December 18, 2013 11:51 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [trove] My thoughts on the Unified Guest
> Agent
>
>   Tim,
>
>  The unified agent we are proposing is based on the following ideas:
>   * the core agent has _no_ functionality at all. It is a pure RPC
> mechanism with the ability to add whichever API needed on top of it.
>   * the API is organized into modules which could be reused across
> different projects.
>   * there will be no single package: each project (Trove/Savanna/Others)
> assembles its own agent based on API project needs.
>
>  I hope that covers your concerns.
>
>  Dmitry
>
>
> 2013/12/18 Tim Simpson 
>
>>  I've been following the Unified Agent mailing list thread for awhile
>> now and, as someone who has written a fair amount of code for both of the
>> two existing Trove agents, thought I should give my opinion about it. I
>> like the idea of a unified agent, but believe that forcing Trove to adopt
>> this agent as its default will stifle innovation and harm the
>> project.
>>
>> There are reasons Trove has more than one agent currently. While everyone
>> knows about the "Reference Agent" written in Python, Rackspace uses a
>> different agent written in C++ because it takes up less memory. The
>> concerns which led to the C++ agent would not be addressed by a unified
>> agent, which if anything would be larger than the Reference Agent is
>> currently.
>>
>> I also believe a unified agent represents the wrong approach
>> philosophically. An agent by design needs to be lightweight, capable of
>> doing exactly what it needs to and no more. This is especially true for a
>> project like Trove whose goal is not to provide overly general PAAS
>> capabilities but simply installation and maintenance of different
>> datastores. Currently, the Trove daemons handle most logic and leave the
>> agents themselves to do relatively little. This takes some effort as many
>> of the first iterations of Trove features have too much logic put into the
>> guest agents. However through perseverance the subsequent designs are
>> usually cleaner and simpler to follow. A community approved, "do
>> everything" agent would endorse the wrong balance and lead to developers
>> piling up logic on the guest side. Over time, features would become
>> dependent on the Unified Agent, making it impossible to run or even
>> contemplate light-weight agents.
>>
>> Trove's interface to agents today is fairly loose and could stand to be
>> made stricter. However, it is flexible and works well enough. Essentially,
>> the duck typed interface of the trove.guestagent.api.API class is used to
>> send messages, and Trove conductor is used to receive them at which point
>> it updates the database. Because both of these components can be swapped
>> out if necessary, the code could support the Unified Agent when it appears
>> as well as future agents.
>>
>> It would be a mistake however to alter Trove's standard method of
>> communication to please the new Unified Agent. In general, we should try to
>> keep Trove speaking to guest agents in Trove's terms alone to prevent bloat.
>>
>> Thanks,
>>
>> Tim
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding DB migration items to the common review checklist

2013-12-18 Thread Michael Still
I don't have a problem with any of these requirements, but I'd like to
explore automating the checks. Would it be possible to write a unit
test that verified this for all migrations? Then we don't need to add
it to the checklist...

Michael

On Thu, Dec 19, 2013 at 4:27 AM, Matt Riedemann
 wrote:
> I've seen this come up a few times in reviews and was thinking we should put
> something in the general review checklist wiki for it [1].
>
> Basically I have three things I'd like to have in the list for DB
> migrations:
>
> 1. Unique constraints should be named. Different DB engines and SQLAlchemy
> dialects automatically name the constraint their own way, which can be
> troublesome for universal migrations. We should avoid this by enforcing that
> UCs are named when they are created. This means not using the unique=True
> arg in UniqueConstraint if the name arg isn't provided.
>
> 2. Foreign keys should be named for the same reasons in #1.
>
> 3. Foreign keys shouldn't be created against nullable columns. Some DB
> engines don't allow unique constraints over nullable columns and if you
> can't create the unique constraint you can't create the foreign key, so we
> should avoid this. If you need the FK, then the pre-req is to make the
> target column non-nullable. Think of the instances.uuid column in nova for
> example.
>
> Unless anyone has a strong objection to this, I'll update the review
> checklist wiki with these items.
>
> [1] https://wiki.openstack.org/wiki/ReviewChecklist
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] domain admin role query

2013-12-18 Thread Dolph Mathews
On Wed, Dec 18, 2013 at 12:48 PM, Ravi Chunduru  wrote:

> Thanks all for the information.
> I have now v3 policies in place, the issue is that as a domain admin I
> could not create a project in the domain. I get 403 unauthorized status.
>
> I see that when I, as a 'domain admin', request a token, the response did not
> have any roles.  In the token request, I couldn't specify the project - as
> we are about to create the project in the next step.
>

Specify a domain as the "scope" to obtain domain-level authorization in the
resulting token.

See the third example under Scope:


https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#scope-scope
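
For example, something along these lines against the v3 API (a sketch using
python-requests; the endpoint, names and password are placeholders, and a
domain id can be used in place of the name):

    import json

    import requests

    KEYSTONE = 'http://localhost:5000/v3'    # placeholder endpoint

    body = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': 'domain_admin',          # placeholder
                        'domain': {'name': 'mydomain'},  # placeholder
                        'password': 'secret',
                    },
                },
            },
            # Domain scope: no project involved, the token carries the
            # roles the user has on the domain itself.
            'scope': {'domain': {'name': 'mydomain'}},
        },
    }

    resp = requests.post(KEYSTONE + '/auth/tokens', data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    token = resp.headers['X-Subject-Token']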


>
> Here is the complete request/response of all the steps done.
> https://gist.github.com/kumarcv/8015275
>
> I am assuming it's a bug. Please let me know your opinions.
>
> Thanks,
> -Ravi.
>
>
>
>
> On Thu, Dec 12, 2013 at 3:00 PM, Henry Nash wrote:
>
>> Hi
>>
>> So the idea wasn't the you create a domain with the id of
>> 'domain_admin_id', rather that you create the domain that you plan to use
>> for your admin domain, and then paste its (auto-generated) domain_id into
>> the policy file.
>>
>> Henry
>> On 12 Dec 2013, at 03:11, Paul Belanger 
>> wrote:
>>
>> > On 13-12-11 11:18 AM, Lyle, David wrote:
>> >> +1 on moving the domain admin role rules to the default policy.json
>> >>
>> >> -David Lyle
>> >>
>> >> From: Dolph Mathews [mailto:dolph.math...@gmail.com]
>> >> Sent: Wednesday, December 11, 2013 9:04 AM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: Re: [openstack-dev] [keystone] domain admin role query
>> >>
>> >>
>> >> On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox 
>> wrote:
>> >> Using the default policies it will simply check for the admin role and
>> not care about the domain that admin is limited to. This is partially a
>> left over from the V2 api when there wasn't domains to worry > about.
>> >>
>> >> A better example of policies are in the file
>> etc/policy.v3cloudsample.json. In there you will see the rule for
>> create_project is:
>> >>
>> >>   "identity:create_project": "rule:admin_required and
>> domain_id:%(project.domain_id)s",
>> >>
>> >> as opposed to (in policy.json):
>> >>
>> >>   "identity:create_project": "rule:admin_required",
>> >>
>> >> This is what you are looking for to scope the admin role to a domain.
>> >>
>> >> We need to start moving the rules from policy.v3cloudsample.json to
>> the default policy.json =)
>> >>
>> >>
>> >> Jamie
>> >>
>> >> - Original Message -
>> >>> From: "Ravi Chunduru" 
>> >>> To: "OpenStack Development Mailing List" <
>> openstack-dev@lists.openstack.org>
>> >>> Sent: Wednesday, 11 December, 2013 11:23:15 AM
>> >>> Subject: [openstack-dev] [keystone] domain admin role query
>> >>>
>> >>> Hi,
>> >>> I am trying out Keystone V3 APIs and domains.
>> >>> I created an domain, created a project in that domain, created an
>> user in
>> >>> that domain and project.
>> >>> Next, gave an admin role for that user in that domain.
>> >>>
>> >>> I am assuming that user is now admin to that domain.
>> >>> Now, I got a scoped token with that user, domain and project. With
>> that
>> >>> token, I tried to create a new project in that domain. It worked.
>> >>>
>> >>> But, using the same token, I could also create a new project in a
>> 'default'
>> >>> domain too. I expected it should throw authentication error. Is it a
>> bug?
>> >>>
>> >>> Thanks,
>> >>> --
>> >>> Ravi
>> >>>
>> >
>> > One of the issues I had this week while using the
>> policy.v3cloudsample.json was I had no easy way of creating a domain with
>> the id of 'admin_domain_id'.  I basically had to modify the SQL directly to
>> do it.
>> >
>> > Any chance we can create a 2nd domain using 'admin_domain_id' via
>> keystone-manage sync_db?
>> >
>> > --
>> > Paul Belanger | PolyBeacon, Inc.
>> > Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
>> > Github: https://github.com/pabelanger | Twitter:
>> https://twitter.com/pabelanger
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Ravi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Future meeting times

2013-12-18 Thread Dan Smith
> I am fine with this, but I will never be attending the 1400 UTC
> meetings, as I live in utc-8

I too will miss the 1400UTC meetings during the majority of the year.
During PDT I will be able to make them, but will be uncaffeinated.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] API spec for OS-NS-ROLES extension

2013-12-18 Thread Dolph Mathews
Services already own their own policy enforcement, and therefore own their
own definitions of roles. A service deployment can already require roles
that are prefixed by a specific string (compute-*), and can already map
actual capabilities onto those roles ({"compute-create":
"role:compute-manager"}).

Centralizing or duplicating some aspect of that enforcement into keystone
does nothing for OpenStack, AFAICT.

At this point, if you've somehow changed your proposal as the message below
implies, I'd appreciate a summary of the revision that addresses the
questions and issues that have repeatedly been raised against it and
ignored.


On Wed, Dec 18, 2013 at 11:19 AM, Tiwari, Arvind wrote:

>  Hi Adam,
>
>
>
> I would like to request that you revisit the link below and provide your
> opinion, so that we can move forward and try to find common ground that
> works for everyone.
>
>
>
> https://review.openstack.org/#/c/61897
>
>
>
>
>
> *Below is my justification for service_id in role model:*
>
> In a public cloud deployment model, service teams (or service deployers)
> defines the roles along with other artifacts (service and endpoint) and
> they need full control on these artifacts including roles. This way they
> can control the life cycle of these artifacts without depending on IAM
> service providers. (more details in
> https://blueprints.launchpad.net/keystone/+spec/name-spaced-roles)
>
> As an IAM service provider in a public cloud deployment, it is our
> responsibility to facilitate them so that they can control the full life cycle
> of their service-specific artifacts. To make that happen we need tight access
> control on these artifacts, so that a service deployer is not able, accidentally
> or maliciously, to mess up other services' artifacts.
>
> To achieve that level of fine granularity and to isolate service deployers
> from each other's artifacts, we need to associate entity models (services,
> endpoints and roles) with a service.  This way we can define entity ownership
> and define access control policy based on the service. Currently, the role
> data model does not support any association, which is why I am requesting
> that we introduce some way to associate a role with a domain, project and
> service. This
> association also helps to define a namespace for making the role name
> globally unique.
>
>
>
> Previously I was trying to achieve tight linking of roles with service_id,
> and that might have been off-putting for some community members. Now, after
> much effort and help from David Chadwick, we have generalized the role model
> and come up with a generic design, so that it can fit in with everyone's use
> case. As I mentioned in the spec, it will be backward compatible so that it
> won’t break existing deployments.
>
>
>
> I would appreciate it if you could revisit the link and provide comments and
> suggestions; there may still be some room for improvement and I am open
> to it.
>
>
>
>
>
> Dolph, I would also like you to review the specs, so that we can make some
> progress.
>
>
>
>
>
> Regards,
>
> Arvind
>
>
>
>
>
>
>
>
>



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Tim Simpson
Thanks for the summary Dmitry. I'm ok with these ideas, and while I still 
disagree with having a single, forced standard for RPC communication, I should 
probably let things pan out a bit before being too concerned.

- Tim



From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Wednesday, December 18, 2013 11:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Tim,

The unified agent we are proposing is based on the following ideas:
  * the core agent has _no_ functionality at all. It is a pure RPC mechanism 
with the ability to add whichever API needed on top of it.
  * the API is organized into modules which could be reused across different 
projects.
  * there will be no single package: each project (Trove/Savanna/Others) 
assembles its own agent based on API project needs.

I hope that covers your concerns.

Dmitry


2013/12/18 Tim Simpson 
mailto:tim.simp...@rackspace.com>>
I've been following the Unified Agent mailing list thread for awhile now and, 
as someone who has written a fair amount of code for both of the two existing 
Trove agents, thought I should give my opinion about it. I like the idea of a 
unified agent, but believe that forcing Trove to adopt this agent as its 
default will stifle innovation and harm the project.

There are reasons Trove has more than one agent currently. While everyone knows 
about the "Reference Agent" written in Python, Rackspace uses a different agent 
written in C++ because it takes up less memory. The concerns which led to the 
C++ agent would not be addressed by a unified agent, which if anything would be 
larger than the Reference Agent is currently.

I also believe a unified agent represents the wrong approach philosophically. 
An agent by design needs to be lightweight, capable of doing exactly what it 
needs to and no more. This is especially true for a project like Trove whose 
goal is not to provide overly general PAAS capabilities but simply 
installation and maintenance of different datastores. Currently, the Trove 
daemons handle most logic and leave the agents themselves to do relatively 
little. This takes some effort as many of the first iterations of Trove 
features have too much logic put into the guest agents. However through 
perseverance the subsequent designs are usually cleaner and simpler to follow. 
A community approved, "do everything" agent would endorse the wrong balance and 
lead to developers piling up logic on the guest side. Over time, features would 
become dependent on the Unified Agent, making it impossible to run or even 
contemplate light-weight agents.

Trove's interface to agents today is fairly loose and could stand to be made 
stricter. However, it is flexible and works well enough. Essentially, the duck 
typed interface of the trove.guestagent.api.API class is used to send messages, 
and Trove conductor is used to receive them at which point it updates the 
database. Because both of these components can be swapped out if necessary, the 
code could support the Unified Agent when it appears as well as future agents.

It would be a mistake however to alter Trove's standard method of communication 
to please the new Unified Agent. In general, we should try to keep Trove 
speaking to guest agents in Trove's terms alone to prevent bloat.

Thanks,

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Future meeting times

2013-12-18 Thread Joe Gordon
On Dec 18, 2013 6:38 AM, "Russell Bryant"  wrote:
>
> Greetings,
>
> The weekly Nova meeting [1] has been held on Thursdays at 2100 UTC.
> I've been getting some requests to offer an alternative meeting time.
> I'd like to try out alternating the meeting time between two different
> times to allow more people in our global development team to attend
> meetings and engage in some real-time discussion.
>
> I propose the alternate meeting time as 1400 UTC.  I realize that
> doesn't help *everyone*, but it should be an improvement for some,
> especially for those in Europe.
>
> If we proceed with this, we would meet at 2100 UTC on January 2nd, 1400
> UTC on January 9th, and alternate from there.  Note that we will not be
> meeting at all on December 26th as a break for the holidays.
>
> If you can't attend either of these times, please note that the meetings
> are intended to be supplementary to the openstack-dev mailing list.  In
> the meetings, we check in on status, raise awareness of important
> issues, and progress some discussions with real-time debate, but the
> most important discussions and decisions will always be brought to the
> openstack-dev mailing list, as well.  With that said, active Nova
> contributors are always encouraged to attend and participate if they are
> able.
>
> Comments welcome, especially some acknowledgement that there are people
> that would attend the alternate meeting time.  :-)

I am fine with this, but I will never be attending the 1400 UTC meetings,
as I live in utc-8

>
> Thanks,
>
> [1] https://wiki.openstack.org/wiki/Meetings/Nova
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Tim Simpson
>> Please provide proof of that assumption or at least a general hypothesis 
>> that we can test.

I can't prove that the new agent will be larger as it doesn't exist yet. 

>> Since nothing was agreed upon anyway, I don't know how you came to that 
>> conclusion.  I would suggest that any agent framework be held to an 
>> extremely high standard for footprint for this very reason.

Sorry, I formed a conclusion based on what I'd read so far. There has been talk 
of adding Salt to this Unified Agent along with several other things. So I think 
it's a valid concern to state that making this thing small is not as high on the 
list of priorities as adding extra functionality.

The C++ agent is just over 3 megabytes of real memory and takes up less than 30 
megabytes  of virtual memory. I don't think an agent has to be *that* small. 
However it won't get near that small unless making it tiny is made a priority, 
and I'm skeptical that's possible while also deciding an agent will be capable 
of interacting with all major OpenStack projects as well as Salt.

>> Nobody has suggested writing an agent that does everything.

Steven Dake just said:

"A unified agent addresses the downstream viewpoint well, which is 'There is 
only one agent to package and maintain, and it supports all the integrated 
OpenStack Program projects'."

So it sounds like some people are saying there will only be one. Or that it is 
at least an idea.

>> If Trove's communication method is in fact superior to all others, then 
>> perhaps we should discuss using that in the unified agent framework.

My point is that every project should communicate with an agent through its own interface, 
which can be swapped out for whatever implementations people need.
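
As a sketch of what I mean (class and method names are made up, nothing
Trove-specific):

    class RabbitGuestAPI(object):
        """Sends guest calls over Rabbit (made-up client, for illustration)."""

        def __init__(self, rpc_client, instance_id):
            self._client = rpc_client
            self._instance_id = instance_id

        def restart_datastore(self):
            self._client.cast('restart_datastore',
                              instance_id=self._instance_id)


    class SerialGuestAPI(object):
        """Same duck-typed surface, completely different transport."""

        def __init__(self, channel):
            self._channel = channel

        def restart_datastore(self):
            self._channel.send('restart_datastore')


    def do_restart(guest_api):
        # The caller neither knows nor cares which implementation it got.
        guest_api.restart_datastore()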

>>  In fact I've specifically been arguing to keep it focused on facilitating 
>> guest<->service communication and limiting its in-guest capabilities to 
>> narrowly focused tasks.

I like this idea better than creating one agent to rule them all, but I would 
like to avoid forcing a single method of communicating between agents.

>> Also I'd certainly be interested in hearing about whether or not you think 
>> the C++ agent could made generic enough for any project to use.

I certainly believe much of the code could be reused for other projects. Right 
now it communicates over RabbitMQ, Oslo RPC style, so I'm not sure how much it 
will fall in line with what the Unified Agent group wants. However, I would 
love to talk more about this. So far my experience has been that no one wants 
to pursue using / developing an agent that was written in C++.

Thanks,

Tim

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, December 18, 2013 11:36 AM
To: openstack-dev
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Excerpts from Tim Simpson's message of 2013-12-18 07:34:14 -0800:
> I've been following the Unified Agent mailing list thread for awhile
> now and, as someone who has written a fair amount of code for both of
> the two existing Trove agents, thought I should give my opinion about
> it. I like the idea of a unified agent, but believe that forcing Trove
> to adopt this agent for use as its by default will stifle innovation
> and harm the project.
>

"Them's fightin words". ;)

That is a very strong position to take. So I am going to hold your
statements of facts and assumptions to a very high standard below.

> There are reasons Trove has more than one agent currently. While
> everyone knows about the "Reference Agent" written in Python, Rackspace
> uses a different agent written in C++ because it takes up less memory. The
> concerns which led to the C++ agent would not be addressed by a unified
> agent, which if anything would be larger than the Reference Agent is
> currently.
>

"Would be larger..." - Please provide proof of that assumption or at least
a general hypothesis that we can test. Since nothing was agreed upon
anyway, I don't know how you came to that conclusion. I would suggest
that any agent framework be held to an extremely high standard for
footprint for this very reason.

> I also believe a unified agent represents the wrong approach
> philosophically. An agent by design needs to be lightweight, capable
> of doing exactly what it needs to and no more. This is especially true
> for a project like Trove whose goal is not to provide overly general
> PAAS capabilities but simply installation and maintenance of different
> datastores. Currently, the Trove daemons handle most logic and leave
> the agents themselves to do relatively little. This takes some effort
> as many of the first iterations of Trove features have too much logic
> put into the guest agents. However through perseverance the subsequent
> designs are usually cleaner and simpler to follow. A community approved,
> "do everything" agent would endorse the wrong balance and lead to
> developers piling up logic on the guest side. Over time, features would
> becom

Re: [openstack-dev] Adding DB migration items to the common review checklist

2013-12-18 Thread Brant Knudson
Matt -

Could a test be added that goes through the models and checks these things?
Other projects could use this too.

Here's an example of a test that checks if the tables are all InnoDB:
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/db/test_migrations.py?id=6e455cd97f04bf26bbe022be17c57e089cf502f4#n430

- Brant




On Wed, Dec 18, 2013 at 11:27 AM, Matt Riedemann  wrote:

> I've seen this come up a few times in reviews and was thinking we should
> put something in the general review checklist wiki for it [1].
>
> Basically I have three things I'd like to have in the list for DB
> migrations:
>
> 1. Unique constraints should be named. Different DB engines and SQLAlchemy
> dialects automatically name the constraint their own way, which can be
> troublesome for universal migrations. We should avoid this by enforcing
> that UCs are named when they are created. This means not using the
> unique=True arg in UniqueConstraint if the name arg isn't provided.
>
> 2. Foreign keys should be named for the same reasons in #1.
>
> 3. Foreign keys shouldn't be created against nullable columns. Some DB
> engines don't allow unique constraints over nullable columns and if you
> can't create the unique constraint you can't create the foreign key, so we
> should avoid this. If you need the FK, then the pre-req is to make the
> target column non-nullable. Think of the instances.uuid column in nova for
> example.
>
> Unless anyone has a strong objection to this, I'll update the review
> checklist wiki with these items.
>
> [1] https://wiki.openstack.org/wiki/ReviewChecklist
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How do we format/version/deprecate things from notifications?

2013-12-18 Thread Sandy Walsh


On 12/18/2013 03:00 PM, Russell Bryant wrote:

> We really need proper versioning for notifications.  We've had a
> blueprint open for about a year, but AFAICT, nobody is actively working
> on it.
> 
> https://blueprints.launchpad.net/nova/+spec/versioned-notifications
> 

IBM is behind this effort now and is keen to get CADF support around
notifications. It seems to handle all of our use cases.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How do we format/version/deprecate things from notifications?

2013-12-18 Thread Sandy Walsh


On 12/18/2013 01:44 PM, Nikola Đipanov wrote:
> On 12/18/2013 06:17 PM, Matt Riedemann wrote:
>>
>>
>> On 12/18/2013 9:42 AM, Matt Riedemann wrote:
>>> The question came up in this patch [1], how do we deprecate and remove
>>> keys in the notification payload?  In this case I need to deprecate and
>>> replace the 'instance_type' key with 'flavor' per the associated
>>> blueprint.
>>>
>>> [1] https://review.openstack.org/#/c/62430/
>>>
>>
>> By the way, my thinking is it's handled like a deprecated config option,
>> you deprecate it for a release, make sure it's documented in the release
>> notes and then drop it in the next release. For anyone that hasn't
>> switched over they are broken until they start consuming the new key.
>>
> 
> FWIW - I am OK with this approach - but we should at least document it.
> I am also thinking that we may want to make it explicit like oslo.config
> does it.

Likewise ... until we get defined schemas and versioning on
notifications, it seems reasonable.

A post to the ML is nice too :)




> 
> Thanks,
> 
> N.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] hybrid cloud & bursting question

2013-12-18 Thread Zane Bitter

On 18/12/13 04:26, 이준원 wrote:

Hi, stackers,

I know only little about Heat, and I can't wholly follow recent
discussions around multi-region support in this mailing list.
Please help me understand the roadmap or plan about hybrid cloud
and "bursting" in particular.

I'd like to ask the following questions to the developers who have
the same interests.

1) Can we use Heat to deploy to the AWS using the same template
as in the OpenStack cloud?


Short answer, no. Heat calls only OpenStack APIs.

In principle, you could write a CloudFormation template for AWS, and one 
of the design goals for Heat is that in most cases you should be able to 
use it with Heat on OpenStack also. Of course you miss out on all of the 
nice features that Heat has which are not present in CloudFormation.



2) Will Heat support "bursting" when multi-region is supported?
(i.e., auto scaling from the private cloud to the public cloud)


Not in an automated way, where a single autoscaling group spans multiple 
clouds. It's not clear to me how that could work, given that 
configurations would likely be different in different clouds.


What it will allow is to manage a tree of templates (i.e. a top-level 
stack and any nested stacks or providers) that spans multiple clouds.



If these are not prepared in the Icehouse release, is it possible
in the J-release? Who's interested in these? Will it be able to
burst into Rackspace cloud or HP cloud using HOT within next year?


I'm not aware of any plans in this area, but I'm certain there are a 
bunch of things we could do to make this smoother. e.g. (half-baked idea 
off the top of my head) some sort of alarm chaining, so if an 
autoscaling group has reached its maximum capacity it redirects its 
alarms to some other place (e.g. an autoscaling group in some other cloud).



We're also considering the development of the related features
if possible and want to know how to be involved step by step.


Hybrid cloud bursting is the holy grail of cloud computing, so 
contributions in this area would be extremely welcome :)


One caveat: to the extent that we can support this within the scope of 
Heat's mission, we should definitely do it. However, it's possible there 
might be things that make more sense for some higher-level service to 
do, and if that turned out to be the case then IMO we would want to 
focus more on features to make Heat a great base on which to build that 
service rather than a place to integrate that service itself. We don't 
need to have that discussion yet though; right now it would be great to 
hear whatever ideas you have.


cheers,
Zane.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-18 Thread Devananda van der Veen
On Tue, Dec 17, 2013 at 10:00 PM, Gao, Fengqian wrote:

>  Hi, all,
>
> I am planning to extend bp
> https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
> with power and temperature. In other words, power and temperature can be
> collected and used for nova-scheduler just as CPU utilization.
>
> I have a question here. As you know, IPMI is used to get power and
> temperature and baremetal implements IPMI functions in Nova. But baremetal
> driver is being split out of nova, so if I want to change something to the
> IPMI, which part should I choose now? Nova or Ironic?
>
>
>

Hi!

A few thoughts... Firstly, new features should be geared towards Ironic,
not the nova baremetal driver as it will be deprecated soon (
https://blueprints.launchpad.net/nova/+spec/deprecate-baremetal-driver).
That being said, I actually don't think you want to use IPMI for what
you're describing at all, but maybe I'm wrong.

When scheduling VMs with Nova, in many cases there is already an agent
running locally, eg. nova-compute, and this agent is already supplying
information to the scheduler. I think this is where the facilities for
gathering power/temperature/etc (eg, via lm-sensors) should be placed, and
it can reported back to the scheduler along with other usage statistics.

If you think there's a compelling reason to use Ironic for this instead of
lm-sensors, please clarify.

Cheers,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How do we format/version/deprecate things from notifications?

2013-12-18 Thread Russell Bryant
On 12/18/2013 12:44 PM, Nikola Đipanov wrote:
> On 12/18/2013 06:17 PM, Matt Riedemann wrote:
>>
>>
>> On 12/18/2013 9:42 AM, Matt Riedemann wrote:
>>> The question came up in this patch [1], how do we deprecate and remove
>>> keys in the notification payload?  In this case I need to deprecate and
>>> replace the 'instance_type' key with 'flavor' per the associated
>>> blueprint.
>>>
>>> [1] https://review.openstack.org/#/c/62430/
>>>
>>
>> By the way, my thinking is it's handled like a deprecated config option,
>> you deprecate it for a release, make sure it's documented in the release
>> notes and then drop it in the next release. For anyone that hasn't
>> switched over they are broken until they start consuming the new key.
>>
> 
> FWIW - I am OK with this approach - but we should at least document it.
> I am also thinking that we may want to make it explicit like oslo.config
> does it.

We really need proper versioning for notifications.  We've had a
blueprint open for about a year, but AFAICT, nobody is actively working
on it.

https://blueprints.launchpad.net/nova/+spec/versioned-notifications

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] domain admin role query

2013-12-18 Thread Ravi Chunduru
Thanks all for the information.
I now have the v3 policies in place; the issue is that as a domain admin I
could not create a project in the domain. I get a 403 Unauthorized status.

I see that when I request a token as a 'domain admin', the response did not
have any roles.  In the token request, I couldn't specify the project - as
we are about to create the project in the next step.

Here is the complete request/response of all the steps done.
https://gist.github.com/kumarcv/8015275

I am assuming it's a bug. Please let me know your opinions.
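
For reference, a domain-scoped token request looks roughly like this (names
and endpoint below are placeholders, not the exact values from the gist); my
expectation is that the admin role granted on the domain would show up in
the 'roles' list of the resulting token:

    import json
    import requests

    auth = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "domain_admin",                # placeholder
                        "domain": {"name": "example_domain"},  # placeholder
                        "password": "secret",
                    }
                }
            },
            # Scope to the domain, not a project - the project doesn't exist yet.
            "scope": {"domain": {"name": "example_domain"}},
        }
    }

    resp = requests.post("http://localhost:5000/v3/auth/tokens",
                         data=json.dumps(auth),
                         headers={"Content-Type": "application/json"})
    print(resp.status_code, resp.headers.get("X-Subject-Token"))
    print(json.dumps(resp.json().get("token", {}).get("roles", []), indent=2))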

Thanks,
-Ravi.




On Thu, Dec 12, 2013 at 3:00 PM, Henry Nash wrote:

> Hi
>
> So the idea wasn't that you create a domain with the id of
> 'domain_admin_id', but rather that you create the domain that you plan to use
> for your admin domain, and then paste its (auto-generated) domain_id into
> the policy file.
>
> Henry
> On 12 Dec 2013, at 03:11, Paul Belanger 
> wrote:
>
> > On 13-12-11 11:18 AM, Lyle, David wrote:
> >> +1 on moving the domain admin role rules to the default policy.json
> >>
> >> -David Lyle
> >>
> >> From: Dolph Mathews [mailto:dolph.math...@gmail.com]
> >> Sent: Wednesday, December 11, 2013 9:04 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [keystone] domain admin role query
> >>
> >>
> >> On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox 
> wrote:
> >> Using the default policies it will simply check for the admin role and
> >> not care about the domain that admin is limited to. This is partially a
> >> leftover from the V2 API when there weren't domains to worry about.
> >>
> >> A better example of policies are in the file
> etc/policy.v3cloudsample.json. In there you will see the rule for
> create_project is:
> >>
> >>   "identity:create_project": "rule:admin_required and
> domain_id:%(project.domain_id)s",
> >>
> >> as opposed to (in policy.json):
> >>
> >>   "identity:create_project": "rule:admin_required",
> >>
> >> This is what you are looking for to scope the admin role to a domain.
> >>
> >> We need to start moving the rules from policy.v3cloudsample.json to the
> default policy.json =)
> >>
> >>
> >> Jamie
> >>
> >> - Original Message -
> >>> From: "Ravi Chunduru" 
> >>> To: "OpenStack Development Mailing List" <
> openstack-dev@lists.openstack.org>
> >>> Sent: Wednesday, 11 December, 2013 11:23:15 AM
> >>> Subject: [openstack-dev] [keystone] domain admin role query
> >>>
> >>> Hi,
> >>> I am trying out Keystone V3 APIs and domains.
> >>> I created an domain, created a project in that domain, created an user
> in
> >>> that domain and project.
> >>> Next, gave an admin role for that user in that domain.
> >>>
> >>> I am assuming that user is now admin to that domain.
> >>> Now, I got a scoped token with that user, domain and project. With that
> >>> token, I tried to create a new project in that domain. It worked.
> >>>
> >>> But, using the same token, I could also create a new project in a
> 'default'
> >>> domain too. I expected it should throw authentication error. Is it a
> bug?
> >>>
> >>> Thanks,
> >>> --
> >>> Ravi
> >>>
> >
> > One of the issues I had this week while using the
> policy.v3cloudsample.json was I had no easy way of creating a domain with
> the id of 'admin_domain_id'.  I basically had to modify the SQL directly to
> do it.
> >
> > Any chance we can create a 2nd domain using 'admin_domain_id' via
> keystone-manage sync_db?
> >
> > --
> > Paul Belanger | PolyBeacon, Inc.
> > Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
> > Github: https://github.com/pabelanger | Twitter:
> https://twitter.com/pabelanger
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Stefano Maffulli
On 12/18/2013 07:01 AM, Dolph Mathews wrote:
> Options 2 and 3 sound identical to me, when realistically applied.
> Option 3 just makes the common sense aspect mandatory.

Indeed, Option 3 gets my vote. One aspect I'd like to mention: the
diversity of contributors in any open source project is a crucial element
that gives credibility to its "open sourceness".  Many of the tools
that do software evaluation pay close attention to this element.

'Contributors' should be understood in a wider sense than just code
developers or commits: having users very involved in design discussions
or providing test/use cases, for example, would IMHO be evaluated
positively towards a diversity score.

Option 3 is a good balance because it helps projects get started and
puts pressure on them to grow beyond the team that started them. It's no
secret that the outward-looking aspect of OpenStack since its
inception is what made it so successful so rapidly. One team coding, one
team recruiting contributors (users and developers): let's keep the
model going.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Fox, Kevin M
Someone's gotta make/maintain the trove/savanna images though. They usually are 
built from packages. If there is a unified agent, then it only has to be 
packaged once. If there is one per special type of agent, its one package per 
special type of agent. I don't think there is a free lunch here, just a 
question of who does the work.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, December 18, 2013 9:38 AM
To: openstack-dev
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Excerpts from Steven Dake's message of 2013-12-18 08:05:09 -0800:
> On 12/18/2013 08:34 AM, Tim Simpson wrote:
> > I've been following the Unified Agent mailing list thread for awhile
> > now and, as someone who has written a fair amount of code for both of
> > the two existing Trove agents, thought I should give my opinion about
> > it. I like the idea of a unified agent, but believe that forcing Trove
> > to adopt this agent for use as its by default will stifle innovation
> > and harm the project.
> >
> > There are reasons Trove has more than one agent currently. While
> > everyone knows about the "Reference Agent" written in Python,
> > Rackspace uses a different agent written in C++ because it takes up
> > less memory. The concerns which led to the C++ agent would not be
> > addressed by a unified agent, which if anything would be larger than
> > the Reference Agent is currently.
> >
> > I also believe a unified agent represents the wrong approach
> > philosophically. An agent by design needs to be lightweight, capable
> > of doing exactly what it needs to and no more. This is especially true
> > for a project like Trove whose goal is to not to provide overly
> > general PAAS capabilities but simply installation and maintenance of
> > different datastores. Currently, the Trove daemons handle most logic
> > and leave the agents themselves to do relatively little. This takes
> > some effort as many of the first iterations of Trove features have too
> > much logic put into the guest agents. However through perseverance the
> > subsequent designs are usually cleaner and simpler to follow. A
> > community approved, "do everything" agent would endorse the wrong
> > balance and lead to developers piling up logic on the guest side. Over
> > time, features would become dependent on the Unified Agent, making it
> > impossible to run or even contemplate light-weight agents.
> >
> > Trove's interface to agents today is fairly loose and could stand to
> > be made stricter. However, it is flexible and works well enough.
> > Essentially, the duck typed interface of the trove.guestagent.api.API
> > class is used to send messages, and Trove conductor is used to receive
> > them at which point it updates the database. Because both of these
> > components can be swapped out if necessary, the code could support the
> > Unified Agent when it appears as well as future agents.
> >
> > It would be a mistake however to alter Trove's standard method of
> > communication to please the new Unified Agent. In general, we should
> > try to keep Trove speaking to guest agents in Trove's terms alone to
> > prevent bloat.
> >
> > Thanks,
> >
> > Tim
>
> Tim,
>
> You raise very valid points that I'll summarize into bullet points:
> * memory footprint of a python-based agent
> * guest-agent feature bloat with no clear path to refactoring
> * an agent should do one thing and do it well
>
> The competing viewpoint is from downstream:
> * How do you get those various agents into the various linux
> distributions cloud images and maintain them
>

Only Heat cares about this. Trove and Savanna would be pretty silly if
they tried to run on general purpose images. IMO Heat shouldn't care
either and Heat operators should just make images that include Heat tools.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-18 Thread Alan Kavanagh
Hi Gao

What is the reason you think it would be important to have these two 
additional metrics, "power and temperature", for Nova to base scheduling on?

Alan

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 1:00 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, all,
I am planning to extend bp 
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling with 
power and temperature. In other words, power and temperature can be collected 
and used for nova-scheduler just as CPU utilization.
I have a question here. As you know, IPMI is used to get power and temperature 
and baremetal implements IPMI functions in Nova. But baremetal driver is being 
split out of nova, so if I want to change something to the IPMI, which part 
should I choose now? Nova or Ironic?


Best wishes

--fengqian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] hybrid cloud & bursting question

2013-12-18 Thread Clint Byrum
Excerpts from 이준원's message of 2013-12-18 01:26:22 -0800:
> Hi, stackers,
> 
> I know only little about Heat, and I can't wholly follow recent 
> discussions around multi-region support in this mailing list.
> Please help me understand the roadmap or plan about hybrid cloud 
> and "bursting" in particular. 
> 
> I'd like to ask the following questions to the developers who have 
> the same interests.
> 
> 1) Can we use Heat to deploy to the AWS using the same template
>as in the OpenStack cloud?
> 

"Sort of". If you stick to CloudFormation syntax then yes, that should
continue to work as long as cloudformation syntax is supported.

> 2) Will Heat support "bursting" when multi-region is supported?
>(i.e., auto scaling from the private cloud to the public cloud)
>

That was not discussed at the Icehouse summit. What is planned is to
allow separate resources in separate regions.

> If these are not prepared in the Icehouse release, is it possible 
> in the J-release? Who's interested in these? Will it be able to 
> burst into Rackspace cloud or HP cloud using HOT within next year?
> 
> We're also considering the development of the related features 
> if possible and want to know how to be involved step by step.
> 

I think many people would like a feature like that. The best place to
discuss this would be at the weekly IRC Meeting:

https://wiki.openstack.org/wiki/Meetings/HeatAgenda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-18 Thread Dmitry Mescheryakov
That sounds like a very good idea, I'll do it.


2013/12/18 Clint Byrum 

> Excerpts from Dmitry Mescheryakov's message of 2013-12-18 09:32:30 -0800:
> > Clint, do you mean
> >   * use os-collect-config and its HTTP transport as a base for the PoC
> > or
> >   * migrate os-collect-config on PoC after it is implemented on
> > oslo.messaging
> >
> > I presume the later, but could you clarify?
> >
>
> os-collect-config speaks two HTTP API's: EC2 metadata and
> CloudFormation. I am suggesting that it would be fairly easy to teach
> it to also speak oslo.messaging. It currently doesn't have a two-way
> communication method, but that is only because we haven't needed that.
> It wouldn't be difficult at all to have responders instead of collectors
> and send back a response after the command is run.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Dmitry Mescheryakov
2013/12/18 Steven Dake 

>  On 12/18/2013 08:34 AM, Tim Simpson wrote:
>
> I've been following the Unified Agent mailing list thread for awhile now
> and, as someone who has written a fair amount of code for both of the two
> existing Trove agents, thought I should give my opinion about it. I like
> the idea of a unified agent, but believe that forcing Trove to adopt this
> agent for use as its by default will stifle innovation and harm the project.
>
> There are reasons Trove has more than one agent currently. While everyone
> knows about the "Reference Agent" written in Python, Rackspace uses a
> different agent written in C++ because it takes up less memory. The
> concerns which led to the C++ agent would not be addressed by a unified
> agent, which if anything would be larger than the Reference Agent is
> currently.
>
> I also believe a unified agent represents the wrong approach
> philosophically. An agent by design needs to be lightweight, capable of
> doing exactly what it needs to and no more. This is especially true for a
> project like Trove whose goal is to not to provide overly general PAAS
> capabilities but simply installation and maintenance of different
> datastores. Currently, the Trove daemons handle most logic and leave the
> agents themselves to do relatively little. This takes some effort as many
> of the first iterations of Trove features have too much logic put into the
> guest agents. However through perseverance the subsequent designs are
> usually cleaner and simpler to follow. A community approved, "do
> everything" agent would endorse the wrong balance and lead to developers
> piling up logic on the guest side. Over time, features would become
> dependent on the Unified Agent, making it impossible to run or even
> contemplate light-weight agents.
>
> Trove's interface to agents today is fairly loose and could stand to be
> made stricter. However, it is flexible and works well enough. Essentially,
> the duck typed interface of the trove.guestagent.api.API class is used to
> send messages, and Trove conductor is used to receive them at which point
> it updates the database. Because both of these components can be swapped
> out if necessary, the code could support the Unified Agent when it appears
> as well as future agents.
>
> It would be a mistake however to alter Trove's standard method of
> communication to please the new Unified Agent. In general, we should try to
> keep Trove speaking to guest agents in Trove's terms alone to prevent bloat.
>
> Thanks,
>
> Tim
>
>
> Tim,
>
> You raise very valid points that I'll summarize into bullet points:
> * memory footprint of a python-based agent
> * guest-agent feature bloat with no clear path to refactoring
> * an agent should do one thing and do it well
>
> The competing viewpoint is from downstream:
> * How do you get those various agents into the various linux distributions
> cloud images and maintain them
>
> A unified agent addresses the downstream viewpoint well, which is "There
> is only one agent to package and maintain, and it supports all the
> integrated OpenStack Program projects".
>
> Putting on my Fedora Hat for a moment, I'm not a big fan of an agent per
> OpenStack project going into the Fedora 21 cloud images.
>
> Another option that we really haven't discussed on this long long thread
> is injecting the per-project agents into the vm on bootstrapping of the
> vm.  If we developed common code for this sort of operation and placed it
> into oslo, *and* agreed to use it as our common unifying mechanism of agent
> support, each project would be free to ship whatever agents they wanted in
> their packaging, use the proposed oslo.bootstrap code to bootstrap the VM
> via cloudinit with the appropriate agents installed in the proper
> locations, whamo, problem solved for everyone.
>

Funny thing is, the same idea was proposed and discussed among my
colleagues and me recently. We saw it as a Heat extension which could be
asked to inject a guest agent into the VM. The list of required modules
could be passed as a request parameter. That would make life easier for us
Savanna devs, because we would not have to pre-install the agent on our images.



> Regards
> -steve
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-18 Thread Avishay Traeger
For me, 04:00/05:00 UTC is currently 6/7 AM (an hour later when daylight 
savings messes things up) - exactly wake up/get ready/kid to school/me to 
work time.
I'd rather leave it as is (6/7 PM, depending on daylight savings).
If you alternate, I may have to miss some.

Thanks,
Avishay



From:   John Griffith 
To: OpenStack Development Mailing List 
, 
Date:   12/17/2013 05:08 AM
Subject:[openstack-dev] [cinder] weekly meeting



Hi All,

Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
some interest in either changing the weekly Cinder meeting time, or
proposing a second meeting to accomodate folks in other time-zones.

A large number of folks are already in time-zones that are not
"friendly" to our current meeting time.  I'm wondering if there is
enough of an interest to move the meeting time from 16:00 UTC on
Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
willing to look at either moving the meeting for a trial period or
holding a second meeting to make sure folks in other TZ's had a chance
to be heard.

Let me know your thoughts, if there are folks out there that feel
unable to attend due to TZ conflicts and we can see what we might be
able to do.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Dmitry Mescheryakov
Tim,

The unified agent we are proposing is based on the following ideas:
  * the core agent has _no_ functionality at all. It is a pure RPC
mechanism with the ability to add whichever API is needed on top of it.
  * the API is organized into modules which can be reused across
different projects.
  * there will be no single package: each project (Trove/Savanna/others)
assembles its own agent based on the APIs that project needs (a rough
sketch follows below).
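
A very rough sketch of that shape, with made-up module, topic and server
names, assuming oslo.messaging as the transport:

    import oslo.messaging as messaging
    from oslo.config import cfg

    class MySQLModule(object):
        """Example Trove-style module; Savanna would ship its own modules."""

        def create_database(self, ctxt, name):
            # ... real work would happen here ...
            return {'created': name}

    def build_agent(server_id, modules):
        transport = messaging.get_transport(cfg.CONF)
        target = messaging.Target(topic='guest_agent', server=server_id)
        # The core adds no behaviour of its own - it only dispatches RPC
        # calls to whatever modules the project assembled into its agent.
        return messaging.get_rpc_server(transport, target, modules,
                                        executor='blocking')

    if __name__ == '__main__':
        agent = build_agent('instance-0001', [MySQLModule()])
        agent.start()
        agent.wait()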

I hope that covers your concerns.

Dmitry


2013/12/18 Tim Simpson 

>  I've been following the Unified Agent mailing list thread for awhile now
> and, as someone who has written a fair amount of code for both of the two
> existing Trove agents, thought I should give my opinion about it. I like
> the idea of a unified agent, but believe that forcing Trove to adopt this
> agent for use as its by default will stifle innovation and harm the project.
>
> There are reasons Trove has more than one agent currently. While everyone
> knows about the "Reference Agent" written in Python, Rackspace uses a
> different agent written in C++ because it takes up less memory. The
> concerns which led to the C++ agent would not be addressed by a unified
> agent, which if anything would be larger than the Reference Agent is
> currently.
>
> I also believe a unified agent represents the wrong approach
> philosophically. An agent by design needs to be lightweight, capable of
> doing exactly what it needs to and no more. This is especially true for a
> project like Trove whose goal is to not to provide overly general PAAS
> capabilities but simply installation and maintenance of different
> datastores. Currently, the Trove daemons handle most logic and leave the
> agents themselves to do relatively little. This takes some effort as many
> of the first iterations of Trove features have too much logic put into the
> guest agents. However through perseverance the subsequent designs are
> usually cleaner and simpler to follow. A community approved, "do
> everything" agent would endorse the wrong balance and lead to developers
> piling up logic on the guest side. Over time, features would become
> dependent on the Unified Agent, making it impossible to run or even
> contemplate light-weight agents.
>
> Trove's interface to agents today is fairly loose and could stand to be
> made stricter. However, it is flexible and works well enough. Essentially,
> the duck typed interface of the trove.guestagent.api.API class is used to
> send messages, and Trove conductor is used to receive them at which point
> it updates the database. Because both of these components can be swapped
> out if necessary, the code could support the Unified Agent when it appears
> as well as future agents.
>
> It would be a mistake however to alter Trove's standard method of
> communication to please the new Unified Agent. In general, we should try to
> keep Trove speaking to guest agents in Trove's terms alone to prevent bloat.
>
> Thanks,
>
> Tim
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Flavio Percoco

On 18/12/13 12:35 -0500, Doug Hellmann wrote:


On Wed, Dec 18, 2013 at 10:58 AM, Sean Dague  wrote:
   On 12/18/2013 10:37 AM, Steven Dake wrote:
   > But I can tell you one thing for certain, an actual incubation
   > commitment from the OpenStack Technical Committee has a huge impact - it
   > says "Yes we think this project has great potential for improving
   > OpenStack's scope in a helpful useful way and we plan to support the
   > program to make it happen".  Without that commitment, managers at
   > companies have a harder time justifying R&D expenses.
   >
   > That is why I am not a big fan of approach #3 - companies are unlikely
   > to commit without a commitment from the TC first ;-) (see chicken/egg in
   > your original argument ;)
   >
   > We shouldn't be afraid of a project failing to graduate to Integrated.
   > Even though it hasn't happened yet, it will undoubtedly happen at some
   > point in the future.  We have a way for projects to leave incubation if
   > they fail to become a strong emergent system, as described in option #2.

   Agreed.

   One of the things I think we are missing with the incubation project is
   checkpoints. We really should be reviewing incubated projects at the end
   of every cycle (and maybe at milestone-2 as an interim) and give them
   some feedback on how they are doing on integration requirements, or even
   how they are doing keeping up with what's needed for incubation (and
   de-incubate if required).


+1



FWIW, Marconi's team sticks to OpenStack's milestones. As part of the
incubation process, projects should start 'behaving' as if they were
integrated. This will teach the members of the team how the release
cycle works, and it'll also help the TC review the work going on in
the project based on the OpenStack milestones.

Also, incubated projects can ask for reviews / support at anytime.

Cheers,
FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How do we format/version/deprecate things from notifications?

2013-12-18 Thread Nikola Đipanov
On 12/18/2013 06:17 PM, Matt Riedemann wrote:
> 
> 
> On 12/18/2013 9:42 AM, Matt Riedemann wrote:
>> The question came up in this patch [1], how do we deprecate and remove
>> keys in the notification payload?  In this case I need to deprecate and
>> replace the 'instance_type' key with 'flavor' per the associated
>> blueprint.
>>
>> [1] https://review.openstack.org/#/c/62430/
>>
> 
> By the way, my thinking is it's handled like a deprecated config option,
> you deprecate it for a release, make sure it's documented in the release
> notes and then drop it in the next release. For anyone that hasn't
> switched over they are broken until they start consuming the new key.
> 

FWIW - I am OK with this approach - but we should at least document it.
I am also thinking that we may want to make it explicit like oslo.config
does it.

Thanks,

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-18 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-18 09:32:30 -0800:
> Clint, do you mean
>   * use os-collect-config and its HTTP transport as a base for the PoC
> or
>   * migrate os-collect-config on PoC after it is implemented on
> oslo.messaging
> 
> I presume the later, but could you clarify?
> 

os-collect-config speaks two HTTP API's: EC2 metadata and
CloudFormation. I am suggesting that it would be fairly easy to teach
it to also speak oslo.messaging. It currently doesn't have a two-way
communication method, but that is only because we haven't needed that.
It wouldn't be difficult at all to have responders instead of collectors
and send back a response after the command is run.
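
To make that concrete, here is a loose sketch of the service-side half of
such a two-way path (not existing os-collect-config code; the topic, server
and command are placeholders):

    import oslo.messaging as messaging
    from oslo.config import cfg

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='os-collect-config', server='instance-uuid')
    client = messaging.RPCClient(transport, target)

    # call() blocks until the in-instance responder replies with its result,
    # e.g. {'returncode': 0, 'stdout': '...', 'stderr': '...'}.
    result = client.call({}, 'run_command', cmd=['os-refresh-config'])
    print(result['returncode'])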

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Clint Byrum
Excerpts from Steven Dake's message of 2013-12-18 08:05:09 -0800:
> On 12/18/2013 08:34 AM, Tim Simpson wrote:
> > I've been following the Unified Agent mailing list thread for awhile 
> > now and, as someone who has written a fair amount of code for both of 
> > the two existing Trove agents, thought I should give my opinion about 
> > it. I like the idea of a unified agent, but believe that forcing Trove 
> > to adopt this agent for use as its by default will stifle innovation 
> > and harm the project.
> >
> > There are reasons Trove has more than one agent currently. While 
> > everyone knows about the "Reference Agent" written in Python, 
> > Rackspace uses a different agent written in C++ because it takes up 
> > less memory. The concerns which led to the C++ agent would not be 
> > addressed by a unified agent, which if anything would be larger than 
> > the Reference Agent is currently.
> >
> > I also believe a unified agent represents the wrong approach 
> > philosophically. An agent by design needs to be lightweight, capable 
> > of doing exactly what it needs to and no more. This is especially true 
> > for a project like Trove whose goal is to not to provide overly 
> > general PAAS capabilities but simply installation and maintenance of 
> > different datastores. Currently, the Trove daemons handle most logic 
> > and leave the agents themselves to do relatively little. This takes 
> > some effort as many of the first iterations of Trove features have too 
> > much logic put into the guest agents. However through perseverance the 
> > subsequent designs are usually cleaner and simpler to follow. A 
> > community approved, "do everything" agent would endorse the wrong 
> > balance and lead to developers piling up logic on the guest side. Over 
> > time, features would become dependent on the Unified Agent, making it 
> > impossible to run or even contemplate light-weight agents.
> >
> > Trove's interface to agents today is fairly loose and could stand to 
> > be made stricter. However, it is flexible and works well enough. 
> > Essentially, the duck typed interface of the trove.guestagent.api.API 
> > class is used to send messages, and Trove conductor is used to receive 
> > them at which point it updates the database. Because both of these 
> > components can be swapped out if necessary, the code could support the 
> > Unified Agent when it appears as well as future agents.
> >
> > It would be a mistake however to alter Trove's standard method of 
> > communication to please the new Unified Agent. In general, we should 
> > try to keep Trove speaking to guest agents in Trove's terms alone to 
> > prevent bloat.
> >
> > Thanks,
> >
> > Tim
> 
> Tim,
> 
> You raise very valid points that I'll summarize into bullet points:
> * memory footprint of a python-based agent
> * guest-agent feature bloat with no clear path to refactoring
> * an agent should do one thing and do it well
> 
> The competing viewpoint is from downstream:
> * How do you get those various agents into the various linux 
> distributions cloud images and maintain them
> 

Only Heat cares about this. Trove and Savanna would be pretty silly if
they tried to run on general purpose images. IMO Heat shouldn't care
either and Heat operators should just make images that include Heat tools.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Clint Byrum
Excerpts from Tim Simpson's message of 2013-12-18 07:34:14 -0800:
> I've been following the Unified Agent mailing list thread for awhile
> now and, as someone who has written a fair amount of code for both of
> the two existing Trove agents, thought I should give my opinion about
> it. I like the idea of a unified agent, but believe that forcing Trove
> to adopt this agent for use as its by default will stifle innovation
> and harm the project.
> 

"Them's fightin words". ;)

That is a very strong position to take. So I am going to hold your
statements of facts and assumptions to a very high standard below.

> There are reasons Trove has more than one agent currently. While
> everyone knows about the "Reference Agent" written in Python, Rackspace
> uses a different agent written in C++ because it takes up less memory. The
> concerns which led to the C++ agent would not be addressed by a unified
> agent, which if anything would be larger than the Reference Agent is
> currently.
>

"Would be larger..." - Please provide proof of that assumption or at least
a general hypothesis that we can test. Since nothing was agreed upon
anyway, I don't know how you came to that conclusion. I would suggest
that any agent framework be held to an extremely high standard for
footprint for this very reason.

> I also believe a unified agent represents the wrong approach
> philosophically. An agent by design needs to be lightweight, capable
> of doing exactly what it needs to and no more. This is especially true
> for a project like Trove whose goal is to not to provide overly general
> PAAS capabilities but simply installation and maintenance of different
> datastores. Currently, the Trove daemons handle most logic and leave
> the agents themselves to do relatively little. This takes some effort
> as many of the first iterations of Trove features have too much logic
> put into the guest agents. However through perseverance the subsequent
> designs are usually cleaner and simpler to follow. A community approved,
> "do everything" agent would endorse the wrong balance and lead to
> developers piling up logic on the guest side. Over time, features would
> become dependent on the Unified Agent, making it impossible to run or
> even contemplate light-weight agents.
> 

Nobody has suggested writing an agent that does everything. A
framework for agents to build on is what has been suggested. In fact
I've specifically been arguing to keep it focused on facilitating
guest<->service communication and limiting its in-guest capabilities to
narrowly focused tasks.

> Trove's interface to agents today is fairly loose and could stand to be
> made stricter. However, it is flexible and works well enough. Essentially,
> the duck typed interface of the trove.guestagent.api.API class is used
> to send messages, and Trove conductor is used to receive them at which
> point it updates the database. Because both of these components can be
> swapped out if necessary, the code could support the Unified Agent when
> it appears as well as future agents.
> 
> It would be a mistake however to alter Trove's standard method of
> communication to please the new Unified Agent. In general, we should
> try to keep Trove speaking to guest agents in Trove's terms alone to
> prevent bloat.
>

If Trove's communication method is in fact superior to all others,
then perhaps we should discuss using that in the unified agent framework.

Also I'd certainly be interested in hearing about whether or not you
think the C++ agent could be made generic enough for any project to use.
That would be a nice win.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Doug Hellmann
On Wed, Dec 18, 2013 at 10:20 AM, Jarret Raim wrote:

> > It is a difficult thing to measure, and I don't think the intent is to
> > set a hard % for contributions. I think the numbers for Barbican were
> > just illustrating the fact that the concrete contributions are coming
> > very heavily from one source. That's only one data point, though, and as
> > you point out there are a lot of other ways to contribute.
>
> Agreed. I don't think the conclusion (e.g. Barbican is mostly a Rackspace
> driven endeavor at the moment) was incorrect, just that if we are going to
> codify the social requirement, we need to have a larger definition than
> just
> code commit %.
>
> > For example, another criterion could be whether the project has core
> > reviewers from multiple companies, where each is doing some significant
> > proportion of reviews.
>
> A good option. Still leaves the chicken and the egg problem of how to get
> other ATCs interested in reviewing patches for a product they don't work on
> that may not be incubated.
>

I don't remember having that much trouble when we started ceilometer. What
is the perceived risk of working on a project before it is "accepted" at
some level?



>
>
> > I think we've been lucky on that count. I'm not sure how I feel about the
> requirement
> > for entering incubation, yet, but I do feel strongly that we need to have
> diverse contributors
> > when a project graduates from incubation.
>
> I don't know about lucky or not. If the project has been working in the
> mode
> of not requiring it for years and there have been no problems, why would we
> assume they would somehow start? Shouldn't we optimize for the data we have
> rather than a guess of future pain? On the technical side, most of the
>

We do this sort of risk assessment all the time. For internal projects at
DreamHost (and I'm sure at other companies), we lower the risk by having
more than one developer who understands the code. For OpenStack
requirements, we look for a healthy and active development community
supporting the library. Applying similar criteria to new OpenStack projects
seems perfectly reasonable.

More specifically, I would like to have the commitment to a potential new
OpenStack project spread across more than one company. We've recently had
IBM ask to withdraw a driver from nova because they changed directions
internally. I wouldn't want a change in direction (or profitability, or
whatever) in a single driving company behind a project cause it to fail.

The question is when to put that "diversity" requirement in place.


> requirements should be driven from things that have caused pain to
> OpenStack
> projects (rather than sacred cows / opinions / guesses of future pain).
> They
> should be discussed, agreed upon and documented before projects get into
> the
> incubation cycle so that everyone knows the goals going in. The social
> requirements seem like they should meet the same standard.
>

Yes, hence this email thread. :-)


>
> To me, we just need to codify the risk of a project failing to integrate
> after incubation. What would be the downside if we incubate a product that
> subsequently needs to be removed (for technical or social reasons)? How
> much
> pain does this cause? Is it worth turning away or slowing projects that
> solve needs for OpenStack to avoid this pain?
>

I have more of an issue with a project failing *after* becoming integrated
than during incubation. That's why we have the incubation period to begin
with. For the same reason, I'm leaning towards allowing projects into
incubation without a very diverse team, as long as there is some
recognition that they won't be able to graduate in that state, no matter
the technical situation.



>
> We already have data that says there is a low chance of this happening, so
> how much do we want to optimize to reduce that risk before we just accept
> that it is there and deal with it when it happens?
>

What data are you citing? We haven't done this very many times.

Doug



>
> There seems to be broad support for the 'required for graduation' bar,
> which
> I'm fine with. It seems like we just need to nail down whether it is
> required pre-incubation or not.
>
>
> Jarret
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Doug Hellmann
On Wed, Dec 18, 2013 at 10:58 AM, Sean Dague  wrote:

> On 12/18/2013 10:37 AM, Steven Dake wrote:
> > On 12/18/2013 03:40 AM, Thierry Carrez wrote:
> >> Hi everyone,
> >>
> >> The TC meeting yesterday uncovered an interesting question which, so
> >> far, divided TC members.
> >>
> >> We require that projects have a number of different developers involved
> >> before they apply for incubation, mostly to raise the bus factor. But we
> >> also currently require some level of diversity in that development team:
> >> we tend to reject projects where all the development team comes from a
> >> single company.
> >>
> >> There are various reasons for that: we want to make sure the project
> >> survives the loss of interest of its main corporate sponsor, we want to
> >> make sure it takes into account more than just one company's use case,
> >> and we want to make sure there is convergence, collaboration and open
> >> development at play there, before we start spending common resources in
> >> helping them integrate with the rest of OpenStack.
> >>
> >> That said, it creates a chicken-and-egg issue: other companies are less
> >> likely to assign resources and converge to a project unless it gets
> >> blessed as THE future solution. And it's true that in the past a lot of
> >> projects really ramped up their communities AFTER being incubated.
> >>
> >> I guess there are 3 options:
> >>
> >> 1. Require diversity for incubation, but find ways to bless or recommend
> >> projects pre-incubation so that this diversity can actually be achieved
> >>
> >> 2. Do not require diversity for incubation, but require it for
> >> graduation, and remove projects from incubation if they fail to attract
> >> a diverse community
> >
> > I apparently posted on the prior thread regarding Barbican my
> > experiences with Heat incubation.  Option #2 is best.
> >
> > Managers at companies are much more willing to invest R&D money into a
> > project which is judged to atleast have a potential path to success.  I
> > think this is because at many companies Managers are judged in their
> > reviews and compensated based upon how effectively they manage their
> > people resources to achieve their goals.
> >
> > I spent countless hours on the phone in the early days of Heat trying to
> > fulfill the unwritten "diversity" requirement that I forsaw being a
> > potential problem for Heat to enter Incubation.  In the end, I was
> > completely unsuccessful at convincing any company to invest any people
> > in an R&D effort, even though Red Hat was all-in on OpenStack and the
> > Heat eight person development team at Red Hat was all-in on Heat as
> > well.  In the end, I think the various managers at different companies I
> > spoke to just couldn't justify it in their organization if Heat failed.
> >
> > This radically changed once we entered incubation.  Suddenly a bunch of
> > people from many companies were committing patches, doing reviews.  Our
> > IRC channel exploded with people.  The small core team was bombarded
> > with questions.  This all happened because of the commitment the
> > OpenStack TC made when we were incubated to indicate "yes Heat is in
> > OpenStack's scope, and yes we plan to support the project from an
> > infrastructure perspective to make it successful, and yes, we think the
> > implementation meets our community quality guidelines."
> >
> > I will grant that if this behavior doesn't happen after incubation, it
> > should block integration, and maybe seen as an exit path out of
> incubation.
>
> I think this is really good concrete feedback, and definitely puts me on
> the #2/3 path.
>
>
> >> 3. Do not require diversity at incubation time, but at least judge the
> >> interest of other companies: are they signed up to join in the future ?
> >> Be ready to drop the project from incubation if that was a fake support
> >> and the project fails to attract a diverse community
> >>
> >> Personally I'm leaning towards (3) at the moment. Thoughts ?
> >
> > In the early days of incubation requests, I got the distinct impression
> > managers at companies believed that actually getting a project incubated
> > in OpenStack was not possible, even though it was sparsely documented as
> > an option.  Maybe things are different now that a few projects have
> > actually run the gauntlet of incubation and proven that it can be done
> > ;)   (see ceilometer, heat as early examples).
> >
> > But I can tell you one thing for certain, an actual incubation
> > commitment from the OpenStack Technical Committee has a huge impact - it
> > says "Yes we think this project has great potential for improving
> > OpenStack's scope in a helpful useful way and we plan to support the
> > program to make it happen".  Without that commitment, managers at
> > companies have a harder time justifying R&D expenses.
> >
> > That is why I am not a big fan of approach #3 - companies are unlikely
> > to commit without a commitment from the TC first ;-) (see chicken/egg

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-18 Thread Dmitry Mescheryakov
Clint, do you mean
  * use os-collect-config and its HTTP transport as a base for the PoC
or
  * migrate os-collect-config on PoC after it is implemented on
oslo.messaging

I presume the later, but could you clarify?



2013/12/18 Clint Byrum 

> Excerpts from Dmitry Mescheryakov's message of 2013-12-17 08:01:38 -0800:
> > Folks,
> >
> > The discussion didn't result in a consensus, but it did revealed a great
> > number of things to be accounted. I've tried to summarize top-level
> points
> > in the etherpad [1]. It lists only items everyone (as it seems to me)
> > agrees on, or suggested options where there was no consensus. Let me know
> > if i misunderstood or missed something. The etherpad does not list
> > advantages/disadvantages of options, otherwise it just would be too long.
> > Interested people might search the thread for the arguments :-) .
> >
> > I've thought it over and I agree with people saying we need to move
> > further. Savanna needs the agent and I am going to write a PoC for it.
> Sure
> > the PoC will be implemented in project-independent way. I still think
> that
> > Salt limitations overweight its advantages, so the PoC will be done on
> top
> > of oslo.messaging without Salt. At least we'll have an example on how it
> > might look.
> >
> > Most probably I will have more questions in the process, for instance we
> > didn't finish discussion on enabling networking for the agent yet. In
> that
> > case I will start a new, more specific thread in the list.
>
> If you're not going to investigate using salt, can I suggest you base
> your POC on os-collect-config? It it would not take much to add two-way
> communication to it.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Adding DB migration items to the common review checklist

2013-12-18 Thread Matt Riedemann
I've seen this come up a few times in reviews and was thinking we should 
put something in the general review checklist wiki for it [1].


Basically I have three things I'd like to have in the list for DB 
migrations:


1. Unique constraints should be named. Different DB engines and 
SQLAlchemy dialects automatically name the constraint their own way, 
which can be troublesome for universal migrations. We should avoid this 
by enforcing that UCs are named when they are created. This means not 
using the unique=True arg in UniqueConstraint if the name arg isn't 
provided.


2. Foreign keys should be named for the same reasons in #1.

3. Foreign keys shouldn't be created against nullable columns. Some DB 
engines don't allow unique constraints over nullable columns and if you 
can't create the unique constraint you can't create the foreign key, so 
we should avoid this. If you need the FK, then the pre-req is to make 
the target column non-nullable. Think of the instances.uuid column in 
nova for example.
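
To illustrate, roughly in the sqlalchemy-migrate style nova uses (table,
column and constraint names here are made up):

    from sqlalchemy import Column, ForeignKeyConstraint, Integer, MetaData
    from sqlalchemy import String, Table, UniqueConstraint

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)

        # FK target table; its uuid column must already be non-nullable (#3).
        Table('instances', meta, autoload=True)

        widgets = Table(
            'widgets', meta,
            Column('id', Integer, primary_key=True, nullable=False),
            Column('instance_uuid', String(36), nullable=False),
            Column('name', String(255), nullable=False),
            # 1. Name the unique constraint instead of using unique=True.
            UniqueConstraint('instance_uuid', 'name',
                             name='uniq_widgets0instance_uuid0name'),
            # 2. Name the foreign key as well.
            ForeignKeyConstraint(['instance_uuid'], ['instances.uuid'],
                                 name='fk_widgets_instance_uuid'),
            mysql_engine='InnoDB',
            mysql_charset='utf8',
        )
        widgets.create()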


Unless anyone has a strong objection to this, I'll update the review 
checklist wiki with these items.


[1] https://wiki.openstack.org/wiki/ReviewChecklist

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] API spec for OS-NS-ROLES extension

2013-12-18 Thread Tiwari, Arvind
Hi Adam,

I would like to request that you revisit the link below and provide your opinion, 
so that we can move forward and try to find common ground that works for everyone.

https://review.openstack.org/#/c/61897


Below is my justification for service_id in the role model:
In a public cloud deployment model, service teams (or service deployers) 
define the roles along with other artifacts (service and endpoint), and they 
need full control over these artifacts, including roles. This way they can control 
the life cycle of these artifacts without depending on IAM service providers. 
(more details in 
https://blueprints.launchpad.net/keystone/+spec/name-spaced-roles)
As an IAM service provider in a public cloud deployment, it is our 
responsibility to facilitate them so that they can control full life cycle of 
their service specific artifacts. To make it happen we need tight access 
control on these artifacts, so that a service deployer accidently or 
maliciously not able to mess-up with other services.
To achieve that level fine granularity and to isolate service deployers from 
artifacts, we need to associate entity models (service, endpoints and roles) 
with a service.  This way we can define entity ownership and define access 
control policy based on service. Currently, role data model  does not support 
any association and that is why I am requesting  to introduce some way to 
associate a role with domain, project and service. This association also helps 
to define a namespace for making the role name globally unique.
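
For illustration only, the kind of namespaced role representation being 
discussed might look roughly like the sketch below; the attribute names are 
hypothetical and the spec linked above is authoritative:

    # Hypothetical sketch of a namespaced role; attribute names are
    # illustrative only, see the linked spec for the actual proposal.
    global_role = {
        'id': 'a1b2c3',
        'name': 'admin',   # no association: behaves like today's global role
    }

    service_scoped_role = {
        'id': 'd4e5f6',
        'name': 'admin',
        'service_id': 'object-store-svc',  # owned by this service's deployers
        # The (service_id, name) pair acts as the namespace, so this 'admin'
        # does not collide with the global 'admin' above.
    }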

Previously I was trying to achieve a tight linking of roles with service_id, 
which may have been off-putting to some community members. Now, after much 
effort and help from David Chadwick, we have generalized the role model and 
come up with a generic design, so that it can fit everyone's use case. As I 
mentioned in the spec, it will be backward compatible so that it won't break 
existing deployments.

I would appreciate it if you could revisit the link and provide comments and 
suggestions; there may still be some room for improvement, and I am open to 
it.


Dolph, I would also like you to review the specs, so that we can make some 
progress.


Regards,
Arvind




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How do we format/version/deprecate things from notifications?

2013-12-18 Thread Matt Riedemann



On 12/18/2013 9:42 AM, Matt Riedemann wrote:

The question came up in this patch [1], how do we deprecate and remove
keys in the notification payload?  In this case I need to deprecate and
replace the 'instance_type' key with 'flavor' per the associated blueprint.

[1] https://review.openstack.org/#/c/62430/



By the way, my thinking is that it's handled like a deprecated config option: 
you deprecate it for a release, make sure it's documented in the release 
notes, and then drop it in the next release. Anyone that hasn't switched over 
by then is broken until they start consuming the new key.
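
As a rough sketch of that transition on the producer side (the helper and the 
other payload keys here are illustrative, not the actual nova code):

    def build_usage_payload(instance):
        """Illustrative payload builder during the deprecation window.

        Emit both the old and the new key for one release so consumers
        have time to switch; 'instance_type' is then dropped the release
        after.
        """
        flavor_name = instance['flavor']['name']
        return {
            'instance_id': instance['uuid'],
            'flavor': flavor_name,           # new key
            'instance_type': flavor_name,    # deprecated, kept for one cycle
        }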


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Flavio Percoco

On 18/12/13 11:40 +0100, Thierry Carrez wrote:

Hi everyone,

The TC meeting yesterday uncovered an interesting question which, so
far, divided TC members.

We require that projects have a number of different developers involved
before they apply for incubation, mostly to raise the bus factor. But we
also currently require some level of diversity in that development team:
we tend to reject projects where all the development team comes from a
single company.

There are various reasons for that: we want to make sure the project
survives the loss of interest of its main corporate sponsor, we want to
make sure it takes into account more than just one company's use case,
and we want to make sure there is convergence, collaboration and open
development at play there, before we start spending common resources in
helping them integrate with the rest of OpenStack.

That said, it creates a chicken-and-egg issue: other companies are less
likely to assign resources and converge to a project unless it gets
blessed as THE future solution. And it's true that in the past a lot of
projects really ramped up their communities AFTER being incubated.

I guess there are 3 options:

1. Require diversity for incubation, but find ways to bless or recommend
projects pre-incubation so that this diversity can actually be achieved

2. Do not require diversity for incubation, but require it for
graduation, and remove projects from incubation if they fail to attract
a diverse community

3. Do not require diversity at incubation time, but at least judge the
interest of other companies: are they signed up to join in the future ?
Be ready to drop the project from incubation if that was a fake support
and the project fails to attract a diverse community



My vote goes for #3



Personally I'm leaning towards (3) at the moment. Thoughts ?

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgp1SYfXAXnZd.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Julien Danjou
On Wed, Dec 18 2013, Thierry Carrez wrote:

> 1. Require diversity for incubation, but find ways to bless or recommend
> projects pre-incubation so that this diversity can actually be achieved
>
> 2. Do not require diversity for incubation, but require it for
> graduation, and remove projects from incubation if they fail to attract
> a diverse community
>
> 3. Do not require diversity at incubation time, but at least judge the
> interest of other companies: are they signed up to join in the future ?
> Be ready to drop the project from incubation if that was a fake support
> and the project fails to attract a diverse community
>
> Personally I'm leaning towards (3) at the moment. Thoughts ?

Option 2 is definitely the way to go IMHO. As it has already been
stated, companies may not get on board until the project is incubated,
so adding such a requirement is not going to help.

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Subject: [nova][vmware] VMwareAPI sub-team status 2013-12-08

2013-12-18 Thread Shawn Hartsock
Greetings Stackers!

BTW: Reviews by fitness at the end.

It's Wednesday so it's time for me to cheer-lead for our VMwareAPI subteam.
Go team! Our normal Wednesday meetings fall on December 25th and January
1st, so there will be no meetings until January 8th. If there's a really
strong objection to that we can organize an impromptu meeting.

Here's the community priorities so far for IceHouse.

== Blueprint priorities ==

Icehouse-2

Nova

*. https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management

*. https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support

*. https://blueprints.launchpad.net/nova/+spec/autowsdl-repair

Glance

*.
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend

Icehouse-3

*. https://blueprints.launchpad.net/nova/+spec/config-validation-script

== Bugs by priority: ==

The priority here is an aggregate, Nova Priority / VMware Driver priority
where the priorities are determined independently.


* High/Critical, needs review : 'vmware driver does not work with more than
one datacenter in vC'

 https://review.openstack.org/62587

* High/High, needs one more +2/approval : 'VMware: NotAuthenticated
occurred in the call to RetrievePropertiesEx'

 https://review.openstack.org/61555

* High/High, needs review : 'VMware: spawning large amounts of VMs
concurrently sometimes causes "VMDK lock" error'

 https://review.openstack.org/58598

* High/High, needs review : 'VMWare: AssertionError: Trying to re-send() an
already-triggered event.'

 https://review.openstack.org/54808

* High/High, needs review : 'VMware: timeouts due to nova-compute stuck at
100% when using deploying 100 VMs'

 https://review.openstack.org/60259

* Medium/High, needs review : 'VMware: instance names can be edited, breaks
nova-driver lookup'

 https://review.openstack.org/59571

* Medium/High, needs revision : '_check_if_folder_file_exists only checks
for metadata file'

 https://review.openstack.org/48544


=


= Reviews By fitness for core: =

I know a lot of you just look for this part...


== needs one more +2/approval ==

* https://review.openstack.org/61555

title: 'VMware: use session.call_method to invoke api's'

votes: +2:1, +1:5, -1:0, -2:0. +7 days in progress, revision: 2 is 7 days
old

* https://review.openstack.org/53990

title: 'VMware ESX: Boot from volume must not relocate vol'

votes: +2:1, +1:2, -1:0, -2:0. +53 days in progress, revision: 5 is 16 days
old

* https://review.openstack.org/54361

title: 'VMware: fix datastore selection when token is returned'

votes: +2:1, +1:8, -1:0, -2:0. +50 days in progress, revision: 5 is 49 days
old


== ready for core ==

* https://review.openstack.org/55070

title: 'VMware: fix rescue with disks are not hot-addable'

votes: +2:0, +1:5, -1:0, -2:0. +45 days in progress, revision: 3 is 6 days
old

* https://review.openstack.org/60010

title: 'VMware: prefer shared datastores over unshared'

votes: +2:0, +1:5, -1:0, -2:0. +14 days in progress, revision: 1 is 12 days
old

* https://review.openstack.org/57376

title: 'VMware: delete vm snapshot after nova snapshot'

votes: +2:0, +1:5, -1:0, -2:0. +28 days in progress, revision: 4 is 23 days
old

* https://review.openstack.org/57519

title: 'VMware: use .get() to access 'summary.accessible''

votes: +2:0, +1:5, -1:0, -2:0. +28 days in progress, revision: 1 is 23 days
old


== needs review ==

* https://review.openstack.org/62820

title: 'VMWare: bug fix for Vim exception handling'

votes: +2:0, +1:1, -1:0, -2:0. +0 days in progress, revision: 1 is 0 days
old

* https://review.openstack.org/52557

title: 'VMware Driver update correct disk usage stat'

votes: +2:0, +1:1, -1:0, -2:0. +62 days in progress, revision: 3 is 15 days
old

* https://review.openstack.org/59571

title: 'VMware: fix instance lookup against vSphere'

votes: +2:0, +1:3, -1:0, -2:0. +16 days in progress, revision: 11 is 2 days
old

* https://review.openstack.org/62118

title: 'VMware: Only include connected hosts in cluster stats'

votes: +2:0, +1:1, -1:0, -2:0. +5 days in progress, revision: 2 is 3 days
old

* https://review.openstack.org/60259

title: 'VMware: fix bug causing nova-compute CPU to spike to 100%'

votes: +2:0, +1:2, -1:0, -2:0. +13 days in progress, revision: 6 is 2 days
old

* https://review.openstack.org/54808

title: 'VMware: fix bug for exceptions thrown in _wait_for_task'

votes: +2:0, +1:2, -1:0, -2:0. +48 days in progress, revision: 4 is 10 days
old

* https://review.openstack.org/60652

title: 'VMware: fix disk extend bug when no space on datastore'

votes: +2:0, +1:1, -1:0, -2:0. +11 days in progress, revision: 2 is 11 days
old

* https://review.openstack.org/55038

title: 'VMware: bug fix for VM rescue when config drive is config...'

votes: +2:0, +1:4, -1:0, -2:0. +46 days in progress, revision: 5 is 6 days
old

* https://review.openstack.org/58994

title: 'VMware: fix the VNC port allocation'

 votes: +2:0, +1:4, -1:0, -2:0. +2

Re: [openstack-dev] [Nova] Future meeting times

2013-12-18 Thread Nikola Đipanov
On 12/18/2013 03:29 PM, Russell Bryant wrote:
> Greetings,
> 
> The weekly Nova meeting [1] has been held on Thursdays at 2100 UTC.
> I've been getting some requests to offer an alternative meeting time.
> I'd like to try out alternating the meeting time between two different
> times to allow more people in our global development team to attend
> meetings and engage in some real-time discussion.
> 
> I propose the alternate meeting time as 1400 UTC.  I realize that
> doesn't help *everyone*, but it should be an improvement for some,
> especially for those in Europe.
> 
> If we proceed with this, we would meet at 2100 UTC on January 2nd, 1400
> UTC on January 9th, and alternate from there.  Note that we will not be
> meeting at all on December 26th as a break for the holidays.
> 
> If you can't attend either of these times, please note that the meetings
> are intended to be supplementary to the openstack-dev mailing list.  In
> the meetings, we check in on status, raise awareness of important
> issues, and progress some discussions with real-time debate, but the
> most important discussions and decisions will always be brought to the
> openstack-dev mailing list, as well.  With that said, active Nova
> contributors are always encouraged to attend and participate if they are
> able.
> 
> Comments welcome, especially some acknowledgement that there are people
> that would attend the alternate meeting time.  :-)
> 

It was very difficult for me to attend the current one since it's 10 PM
my time and my value add capacity at that point is low even by my
standards :).

Booking the new time now.

Thanks for this!

Nikola

> Thanks,
> 
> [1] https://wiki.openstack.org/wiki/Meetings/Nova
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Steven Dake

On 12/18/2013 08:34 AM, Tim Simpson wrote:
I've been following the Unified Agent mailing list thread for a while 
now and, as someone who has written a fair amount of code for both of 
the two existing Trove agents, thought I should give my opinion about 
it. I like the idea of a unified agent, but believe that forcing Trove 
to adopt this agent as its default will stifle innovation and harm the 
project.


There are reasons Trove has more than one agent currently. While 
everyone knows about the "Reference Agent" written in Python, 
Rackspace uses a different agent written in C++ because it takes up 
less memory. The concerns which led to the C++ agent would not be 
addressed by a unified agent, which if anything would be larger than 
the Reference Agent is currently.


I also believe a unified agent represents the wrong approach 
philosophically. An agent by design needs to be lightweight, capable 
of doing exactly what it needs to and no more. This is especially true 
for a project like Trove, whose goal is not to provide overly general 
PaaS capabilities but simply installation and maintenance of different 
datastores. Currently, the Trove daemons handle most logic and leave 
the agents themselves to do relatively little. This takes some effort, 
as many of the first iterations of Trove features have too much logic 
put into the guest agents. However, through perseverance, the 
subsequent designs are usually cleaner and simpler to follow. A 
community-approved, "do everything" agent would endorse the wrong 
balance and lead to developers piling up logic on the guest side. Over 
time, features would become dependent on the Unified Agent, making it 
impossible to run or even contemplate light-weight agents.


Trove's interface to agents today is fairly loose and could stand to 
be made stricter. However, it is flexible and works well enough. 
Essentially, the duck-typed interface of the trove.guestagent.api.API 
class is used to send messages, and Trove conductor is used to receive 
them, at which point it updates the database. Because both of these 
components can be swapped out if necessary, the code could support the 
Unified Agent when it appears, as well as future agents.


It would be a mistake however to alter Trove's standard method of 
communication to please the new Unified Agent. In general, we should 
try to keep Trove speaking to guest agents in Trove's terms alone to 
prevent bloat.


Thanks,

Tim


Tim,

You raise very valid points that I'll summarize into bullet points:
* memory footprint of a python-based agent
* guest-agent feature bloat with no clear path to refactoring
* an agent should do one thing and do it well

The competing viewpoint is from downstream:
* How do you get those various agents into the various Linux 
distributions' cloud images and maintain them?


A unified agent addresses the downstream viewpoint well, which is "There 
is only one agent to package and maintain, and it supports all the 
integrated OpenStack Program projects".


Putting on my Fedora Hat for a moment, I'm not a big fan of an agent per 
OpenStack project going into the Fedora 21 cloud images.


Another option that we really haven't discussed on this long, long thread 
is injecting the per-project agents into the VM when it is bootstrapped.  
If we developed common code for this sort of operation and placed it into 
oslo, *and* agreed to use it as our common unifying mechanism of agent 
support, each project would be free to ship whatever agents it wanted in 
its packaging and use the proposed oslo.bootstrap code to bootstrap the 
VM via cloud-init with the appropriate agents installed in the proper 
locations.  Whamo, problem solved for everyone.
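
A minimal sketch of the kind of cloud-init user-data such a helper might 
emit (the function and the package/service names below are hypothetical, 
not an existing oslo API):

    # Illustrative only: 'build_agent_user_data' and the names passed to it
    # are made up for the example.
    def build_agent_user_data(agent_package, agent_service):
        """Return #cloud-config user-data installing and starting one agent."""
        return "\n".join([
            "#cloud-config",
            "packages:",
            "  - %s" % agent_package,
            "runcmd:",
            "  - [ systemctl, enable, %s ]" % agent_service,
            "  - [ systemctl, start, %s ]" % agent_service,
        ])

    # A service like Trove or Savanna could pass this as user_data when it
    # boots the guest through Nova.
    user_data = build_agent_user_data("trove-guestagent", "trove-guestagent")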


Regards
-steve





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-18 Thread Sean Dague
On 12/18/2013 10:23 AM, Jarret Raim wrote:
>> -Original Message-
>> From: Steven Dake [mailto:sd...@redhat.com]
>> In this particular case, I believe Barbican is not ready for incubation
>> because of their dependence on celery, but ultimately I don't make the
>> decision :)
> 
> We've landed the PR that removes celery and replaces it with oslo.messaging.
> 
> 
> As Thierry said, once we are finished with the other couple of requests from
> the TC, we'll re-apply after the holidays.
> 
> Other than that, I agree with everything you said :)

I also just want to note that Jarret and the Barbican team have been
very responsive to the technical requirements we've talked about for
incubation. And I think this is progressing really great in going and
working on that list and coming back for the actual incubation request
once it's done.

Thanks folks. The responsiveness there has been very appreciated.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Weekly meeting time

2013-12-18 Thread Lyle, David
Thanks Liz for setting up the poll.

I'm not sure we're locked to Tuesday as the meeting day either.  

On Tuesdays, 20:00 UTC (TC meeting) and 21:00 UTC (Project) are meetings I 
don't want to schedule over.
Those times are more attractive on other days.

-David

> -Original Message-
> From: Liz Blanchard [mailto:lsure...@redhat.com]
> Sent: Tuesday, December 17, 2013 4:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Horizon] Weekly meeting time
> 
> Hi All,
> 
> In today's weekly meeting we discussed the possibility of changing the time
> of the weekly meetings to make it easier for everyone to attend. This seems
> to be a difficult task since we are worldwide! Nevertheless, let's see if
> Doodle can help us understand which times might be a better fit for the
> majority of the group.
> 
> Here is what you need to do:
> 
> 1) Click on this link to open the Doodle:
> http://www.doodle.com/q8q6iu8gcqp6c4xa
> 2) Ignore the fact that this says for "Dec 31st" only.
> 3) Choose your time zone over on the right.
> 4) Expand the list to show all time options.
> 5) Enter your name and check off the times that work for you for a meeting.
> 6) Click save.
> 
> Thanks for your participation!
> 
> Liz
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Sean Dague
On 12/18/2013 10:37 AM, Steven Dake wrote:
> On 12/18/2013 03:40 AM, Thierry Carrez wrote:
>> Hi everyone,
>>
>> The TC meeting yesterday uncovered an interesting question which, so
>> far, divided TC members.
>>
>> We require that projects have a number of different developers involved
>> before they apply for incubation, mostly to raise the bus factor. But we
>> also currently require some level of diversity in that development team:
>> we tend to reject projects where all the development team comes from a
>> single company.
>>
>> There are various reasons for that: we want to make sure the project
>> survives the loss of interest of its main corporate sponsor, we want to
>> make sure it takes into account more than just one company's use case,
>> and we want to make sure there is convergence, collaboration and open
>> development at play there, before we start spending common resources in
>> helping them integrate with the rest of OpenStack.
>>
>> That said, it creates a chicken-and-egg issue: other companies are less
>> likely to assign resources and converge to a project unless it gets
>> blessed as THE future solution. And it's true that in the past a lot of
>> projects really ramped up their communities AFTER being incubated.
>>
>> I guess there are 3 options:
>>
>> 1. Require diversity for incubation, but find ways to bless or recommend
>> projects pre-incubation so that this diversity can actually be achieved
>>
>> 2. Do not require diversity for incubation, but require it for
>> graduation, and remove projects from incubation if they fail to attract
>> a diverse community
> 
> I apparently posted on the prior thread regarding Barbican my
> experiences with Heat incubation.  Option #2 is best.
> 
> Managers at companies are much more willing to invest R&D money into a
> project which is judged to at least have a potential path to success.  I
> think this is because at many companies Managers are judged in their
> reviews and compensated based upon how effectively they manage their
> people resources to achieve their goals.
> 
> I spent countless hours on the phone in the early days of Heat trying to
> fulfill the unwritten "diversity" requirement that I foresaw being a
> potential problem for Heat to enter Incubation.  In the end, I was
> completely unsuccessful at convincing any company to invest any people
> in an R&D effort, even though Red Hat was all-in on OpenStack and the
> Heat eight person development team at Red Hat was all-in on Heat as
> well.  In the end, I think the various managers at different companies I
> spoke to just couldn't justify it in their organization if Heat failed.
> 
> This radically changed once we entered incubation.  Suddenly a bunch of
> people from many companies were committing patches, doing reviews.  Our
> IRC channel exploded with people.  The small core team was bombarded
> with questions.  This all happened because of the commitment the
> OpenStack TC made when we were incubated to indicate "yes Heat is in
> OpenStack's scope, and yes we plan to support the project from an
> infrastructure perspective to make it successful, and yes, we think the
> implementation meets our community quality guidelines."
> 
> I will grant that if this behavior doesn't happen after incubation, it
> should block integration, and maybe be seen as an exit path out of incubation.

I think this is really good concrete feedback, and definitely puts me on
the #2/3 path.


>> 3. Do not require diversity at incubation time, but at least judge the
>> interest of other companies: are they signed up to join in the future ?
>> Be ready to drop the project from incubation if that was a fake support
>> and the project fails to attract a diverse community
>>
>> Personally I'm leaning towards (3) at the moment. Thoughts ?
> 
> In the early days of incubation requests, I got the distinct impression
> managers at companies believed that actually getting a project incubated
> in OpenStack was not possible, even though it was sparsely documented as
> an option.  Maybe things are different now that a few projects have
> actually run the gauntlet of incubation and proven that it can be done
> ;)   (see ceilometer, heat as early examples).
> 
> But I can tell you one thing for certain, an actual incubation
> commitment from the OpenStack Technical Committee has a huge impact - it
> says "Yes we think this project has great potential for improving
> OpenStack's scope in a helpful useful way and we plan to support the
> program to make it happen".  Without that commitment, managers at
> companies have a harder time justifying R&D expenses.
> 
> That is why I am not a big fan of approach #3 - companies are unlikely
> to commit without a commitment from the TC first ;-) (see chicken/egg in
> your original argument ;)
> 
> We shouldn't be afraid of a project failing to graduate to Integrated. 
> Even though it hasn't happened yet, it will undoubtedly happen at some
> point in the future.  We have a way for projects to leave incubation if they
> fail to become a strong emergent system, as described in option #2.

Re: [openstack-dev] olso.config error on running Devstack

2013-12-18 Thread Sayali Lunkad
Also forgot to mention I can access the dashboard but I am unable to
retrieve any information about the volumes.
On Dec 18, 2013 8:56 PM, "Sayali Lunkad"  wrote:

> Hello,
>
> I get the following error when I run stack.sh on Devstack
>
> Traceback (most recent call last):
>   File "/usr/local/bin/ceilometer-dbsync", line 6, in 
> from ceilometer.storage import dbsync
>   File "/opt/stack/ceilometer/ceilometer/storage/__init__.py", line 23, in
> 
> from oslo.config import cfg
> ImportError: No module named config
> ++ failed
> ++ local r=1
> +++ jobs -p
> ++ kill
> ++ set +o xtrace
>
> Searching shows that oslo.config is installed. Please let me know of any
> solution.
>
> Thanks,
> Sayali.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-18 Thread John Griffith
On Wed, Dec 18, 2013 at 8:35 AM, Duncan Thomas  wrote:
> 04:00 or 05:00 UTC would basically preclude European participation for
> most people... that's 4 am for Dosaboy and myself for example.
>
> Alternating meetings on different weeks would probably work, though we
> would need to encourage people to get stuff on the agenda in advance
> rather than an hour before the meeting, so that people can send their
> comments ahead if they can't attend.
>
> On 17 December 2013 17:03, Walter A. Boring IV  wrote:
>> 4 or 5 UTC works better for me.   I can't attend the current meeting
>> time, due to taking my kids to school in the morning at 1620UTC
>>
>> Walt
>>
>>> Hi All,
>>>
>>> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
>>> some interest in either changing the weekly Cinder meeting time, or
>>> proposing a second meeting to accommodate folks in other time-zones.
>>>
>>> A large number of folks are already in time-zones that are not
>>> "friendly" to our current meeting time.  I'm wondering if there is
>>> enough of an interest to move the meeting time from 16:00 UTC on
>>> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
>>> willing to look at either moving the meeting for a trial period or
>>> holding a second meeting to make sure folks in other TZ's had a chance
>>> to be heard.
>>>
>>> Let me know your thoughts, if there are folks out there that feel
>>> unable to attend due to TZ conflicts and we can see what we might be
>>> able to do.
>>>
>>> Thanks,
>>> John
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Duncan Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Good feedback, thanks everyone for the input.  I have to say I am
beginning to feel a bit like "trying to solve a problem that doesn't
exist".  I'll think about this some more and see what we come up with.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2013-12-18 Thread Sayali Lunkad
Hey,

I do need ceilometer so making changes to localrc won't help me here.
And no corporate proxy!
What could possibly be the issue because my Devstack had been working fine
till now.

Thanks,
Sayali
On Dec 18, 2013 9:14 PM, "iKhan"  wrote:

> If you are behind corporate proxy, make sure the proxy is set.
>
>
> On Wed, Dec 18, 2013 at 8:56 PM, Sayali Lunkad wrote:
>
>> Hello,
>>
>> I get the following error when I run stack.sh on Devstack
>>
>> Traceback (most recent call last):
>>   File "/usr/local/bin/ceilometer-dbsync", line 6, in 
>> from ceilometer.storage import dbsync
>>   File "/opt/stack/ceilometer/ceilometer/storage/__init__.py", line 23,
>> in 
>> from oslo.config import cfg
>> ImportError: No module named config
>> ++ failed
>> ++ local r=1
>> +++ jobs -p
>> ++ kill
>> ++ set +o xtrace
>>
>> Searching shows that oslo.config is installed. Please let me know of any
>> solution.
>>
>> Thanks,
>> Sayali.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks,
> Ibad Khan
> 9686594607
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Git workflow meeting reminder & agenda

2013-12-18 Thread Krishna Raman
Hi Ibad,

The Solum Git workflow meeting will be on #solum channel on IRC (freenode).

—Kr

On Dec 18, 2013, at 7:30 AM, iKhan  wrote:

> I am newbie here and I am from +5:30 GMT. Can any one direct me how to attend 
> this meeting?
> 
> 
> On Wed, Dec 18, 2013 at 12:46 PM, Krishna Raman  wrote:
> Hi,
> 
> The next Git-workflow meeting is tomorrow at 8 AM PST. 
> (http://www.worldtimebuddy.com/?qm=1&lid=8,524901,2158177,100&h=8&date=2013-12-18&sln=8-9)
> 
> Agenda:
>   Administrative:
>   * Skip meetings for next 2 weeks. Reconvene on Jan 8th.
>   Topics:
>   * Krishna and Monty to summarize offline discussion
>   * Use it for git pull/push -> DU build flow
>   * Not exposed to user. Access always through 
> authenticated Solum APIs.
>   * Not for generic workflow in rest of Solum
>   - Not used to orchestrate HEAT workflow
>   * Discussion on suggested Zuul workflow
>   * Other workflow suggestions?
> 
> —Krishna
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Thanks,
> Ibad Khan
> 9686594607
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Git workflow meeting reminder & agenda

2013-12-18 Thread Adrian Otto
Use IRC to connect to irc.freenode.net
Join the #solum channel

The meeting starts in roughly 15 minutes from now.

On Dec 18, 2013, at 7:30 AM, iKhan 
mailto:ik.ibadk...@gmail.com>>
 wrote:

I am newbie here and I am from +5:30 GMT. Can any one direct me how to attend 
this meeting?


On Wed, Dec 18, 2013 at 12:46 PM, Krishna Raman 
mailto:kra...@gmail.com>> wrote:
Hi,

The next Git-workflow meeting is tomorrow at 8 AM PST. 
(http://www.worldtimebuddy.com/?qm=1&lid=8,524901,2158177,100&h=8&date=2013-12-18&sln=8-9)

Agenda:
Administrative:
* Skip meetings for next 2 weeks. Reconvene on Jan 8th.
Topics:
* Krishna and Monty to summarize offline discussion
* Use it for git pull/push -> DU build flow
* Not exposed to user. Access always through authenticated Solum APIs.
* Not for generic workflow in rest of Solum
- Not used to orchestrate HEAT workflow
* Discussion on suggested Zuul workflow
* Other workflow suggestions?

—Krishna

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How do we format/version/deprecate things from notifications?

2013-12-18 Thread Matt Riedemann
The question came up in this patch [1], how do we deprecate and remove 
keys in the notification payload?  In this case I need to deprecate and 
replace the 'instance_type' key with 'flavor' per the associated blueprint.


[1] https://review.openstack.org/#/c/62430/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-18 Thread Steven Dake

On 12/18/2013 03:40 AM, Thierry Carrez wrote:

Hi everyone,

The TC meeting yesterday uncovered an interesting question which, so
far, divided TC members.

We require that projects have a number of different developers involved
before they apply for incubation, mostly to raise the bus factor. But we
also currently require some level of diversity in that development team:
we tend to reject projects where all the development team comes from a
single company.

There are various reasons for that: we want to make sure the project
survives the loss of interest of its main corporate sponsor, we want to
make sure it takes into account more than just one company's use case,
and we want to make sure there is convergence, collaboration and open
development at play there, before we start spending common resources in
helping them integrate with the rest of OpenStack.

That said, it creates a chicken-and-egg issue: other companies are less
likely to assign resources and converge to a project unless it gets
blessed as THE future solution. And it's true that in the past a lot of
projects really ramped up their communities AFTER being incubated.

I guess there are 3 options:

1. Require diversity for incubation, but find ways to bless or recommend
projects pre-incubation so that this diversity can actually be achieved

2. Do not require diversity for incubation, but require it for
graduation, and remove projects from incubation if they fail to attract
a diverse community


I apparently posted on the prior thread regarding Barbican my 
experiences with Heat incubation.  Option #2 is best.


Managers at companies are much more willing to invest R&D money into a 
project which is judged to at least have a potential path to success.  I 
think this is because at many companies managers are judged in their 
reviews and compensated based upon how effectively they manage their 
people resources to achieve their goals.


I spent countless hours on the phone in the early days of Heat trying to 
fulfill the unwritten "diversity" requirement that I foresaw being a 
potential problem for Heat to enter Incubation.  In the end, I was 
completely unsuccessful at convincing any company to invest any people 
in an R&D effort, even though Red Hat was all-in on OpenStack and the 
Heat eight person development team at Red Hat was all-in on Heat as 
well.  In the end, I think the various managers at different companies I 
spoke to just couldn't justify it in their organization if Heat failed.


This radically changed once we entered incubation.  Suddenly a bunch of 
people from many companies were committing patches, doing reviews.  Our 
IRC channel exploded with people.  The small core team was bombarded 
with questions.  This all happened because of the commitment the 
OpenStack TC made when we were incubated to indicate "yes Heat is in 
OpenStack's scope, and yes we plan to support the project from an 
infrastructure perspective to make it successful, and yes, we think the 
implementation meets our community quality guidelines."


I will grant that if this behavior doesn't happen after incubation, it 
should block integration, and maybe be seen as an exit path out of incubation.



3. Do not require diversity at incubation time, but at least judge the
interest of other companies: are they signed up to join in the future ?
Be ready to drop the project from incubation if that was a fake support
and the project fails to attract a diverse community

Personally I'm leaning towards (3) at the moment. Thoughts ?


In the early days of incubation requests, I got the distinct impression 
managers at companies believed that actually getting a project incubated 
in OpenStack was not possible, even though it was sparsely documented as 
an option.  Maybe things are different now that a few projects have 
actually run the gauntlet of incubation and proven that it can be done 
;)   (see ceilometer, heat as early examples).


But I can tell you one thing for certain, an actual incubation 
commitment from the OpenStack Technical Committee has a huge impact - it 
says "Yes we think this project has great potential for improving 
OpenStack's scope in a helpful useful way and we plan to support the 
program to make it happen".  Without that commitment, managers at 
companies have a harder time justifying R&D expenses.


That is why I am not a big fan of approach #3 - companies are unlikely 
to commit without a commitment from the TC first ;-) (see chicken/egg in 
your original argument ;)


We shouldn't be afraid of a project failing to graduate to Integrated.  
Even though it hasn't happened yet, it will undoubtedly happen at some 
point in the future.  We have a way for projects to leave incubation if 
they fail to become a strong emergent system, as described in option #2.


Regards
-steve



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [cinder] weekly meeting

2013-12-18 Thread Duncan Thomas
04:00 or 05:00 UTC would basically preclude European participation for
most people... that's 4 am for Dosaboy and myself for example.

Alternating meetings on different weeks would probably work, though we
would need to encourage people to get stuff on the agenda in advance
rather than an hour before the meeting, so that people can send their
comments ahead if they can't attend.

On 17 December 2013 17:03, Walter A. Boring IV  wrote:
> 4 or 5 UTC works better for me.   I can't attend the current meeting
> time, due to taking my kids to school in the morning at 1620UTC
>
> Walt
>
>> Hi All,
>>
>> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
>> some interest in either changing the weekly Cinder meeting time, or
>> proposing a second meeting to accomodate folks in other time-zones.
>>
>> A large number of folks are already in time-zones that are not
>> "friendly" to our current meeting time.  I'm wondering if there is
>> enough of an interest to move the meeting time from 16:00 UTC on
>> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
>> willing to look at either moving the meeting for a trial period or
>> holding a second meeting to make sure folks in other TZ's had a chance
>> to be heard.
>>
>> Let me know your thoughts, if there are folks out there that feel
>> unable to attend due to TZ conflicts and we can see what we might be
>> able to do.
>>
>> Thanks,
>> John
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

