Re: [openstack-dev] [Keystone] ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-07 Thread Morgan Fainberg
In general I'd say that CASCADE is the right approach; there are only some very
limited cases where RESTRICT should be used. Overall, I'd like to see less
reliance on FK constraints anywhere. The reason for using CASCADE is that, if
we want to prevent cascading deletes, we should be explicit about that in our
code, independent of the backend (move these checks to the controller level).
In short, we should not rely on an implementation-specific detail to know
whether we can or cannot delete something.
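
For illustration, a minimal, hypothetical sketch of that kind of
controller-level guard (the names do not correspond to actual Keystone code):

    # Hypothetical controller-level check: refuse the delete explicitly in
    # code rather than relying on the backend's ON DELETE behaviour.
    class EndpointGroupController(object):
        def __init__(self, manager):
            self.manager = manager

        def delete_endpoint_group(self, group_id):
            # Look for dependent rows ourselves, independent of the backend.
            if self.manager.list_project_associations(group_id):
                raise ValueError(
                    'endpoint group %s still has project associations'
                    % group_id)
            self.manager.delete_endpoint_group(group_id)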

--Morgan

On Sat, Mar 7, 2015 at 7:37 PM, Chen, Wei D  wrote:

> Hi,
>
> I did some homework to follow up on the inline comment about the ON DELETE
> CASCADE subclause of the foreign key clause [1]. When 'ON
> DELETE CASCADE' is given, deleting a record from the parent table will DELETE
> all the corresponding rows from the CHILD table
> automatically, *without any warning*. 'ON DELETE RESTRICT' behaves differently:
> the delete fails, complaining about the existing child rows.
> This is the default foreign key relationship behavior, and it seems to give
> the end user a chance to double-check the data.
>
> I did a quick test against the table 'endpoint_group'; the error message is
> shown below:
> mysql> delete from endpoint_group;
> ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key
> constraint fails (`keystone`.`project_endpoint_group`,
> CONSTRAINT `project_endpoint_group_ibfk_1` FOREIGN KEY
> (`endpoint_group_id`) REFERENCES `endpoint_group` (`id`))
>
> I am a little confused about the two different subclauses, as both of them can
> be found in the table definitions of the SQL backends, and it is hard
> to say which one is better. Is it worthwhile to move all of them to "ON
> DELETE CASCADE" or "ON DELETE RESTRICT"?
>
>
> [1]
> https://review.openstack.org/#/c/151931/5/keystone/contrib/endpoint_filter/migrate_repo/versions/002_add_endpoint_groups.py
>
> Best Regards,
> Dave Chen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-07 Thread Chen, Wei D
Hi,

I did some homework to follow up on the inline comment about the ON DELETE
CASCADE subclause of the foreign key clause [1]. When 'ON DELETE CASCADE' is
given, deleting a record from the parent table will DELETE all the
corresponding rows from the CHILD table automatically, *without any warning*.
'ON DELETE RESTRICT' behaves differently: the delete fails, complaining about
the existing child rows. This is the default foreign key relationship behavior,
and it seems to give the end user a chance to double-check the data.

I did a quick test against the table 'endpoint_group'; the error message is
shown below:
mysql> delete from endpoint_group;
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key 
constraint fails (`keystone`.`project_endpoint_group`,
CONSTRAINT `project_endpoint_group_ibfk_1` FOREIGN KEY (`endpoint_group_id`) 
REFERENCES `endpoint_group` (`id`))

I am a little confused about the two different subclauses, as both of them can
be found in the table definitions of the SQL backends, and it is hard to say
which one is better. Is it worthwhile to move all of them to "ON DELETE
CASCADE" or "ON DELETE RESTRICT"?


[1] 
https://review.openstack.org/#/c/151931/5/keystone/contrib/endpoint_filter/migrate_repo/versions/002_add_endpoint_groups.py
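
For reference, an illustrative SQLAlchemy snippet showing the two subclauses
side by side; the table and column names follow the example above, but this is
not the actual migration from [1]:

    # Illustrative only: the same FK declared with each ON DELETE subclause.
    from sqlalchemy import Column, ForeignKey, MetaData, String, Table

    meta = MetaData()

    endpoint_group = Table(
        'endpoint_group', meta,
        Column('id', String(64), primary_key=True))

    # CASCADE: deleting a parent row silently removes its child rows.
    project_endpoint_group_cascade = Table(
        'project_endpoint_group_cascade', meta,
        Column('endpoint_group_id', String(64),
               ForeignKey('endpoint_group.id', ondelete='CASCADE')))

    # RESTRICT: deleting a parent row fails while child rows still reference it.
    project_endpoint_group_restrict = Table(
        'project_endpoint_group_restrict', meta,
        Column('endpoint_group_id', String(64),
               ForeignKey('endpoint_group.id', ondelete='RESTRICT')))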

Best Regards,
Dave Chen



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-07 Thread Jay Pipes

Hi Stackers,

Now that microversions have been introduced to the Nova API (meaning we 
can now have novaclient request, say, version 2.3 of the Nova API using 
the special X-OpenStack-Nova-API-Version HTTP header), is there any good 
reason to require API extensions at all for *new* functionality?


Sergey Nikitin is currently in the process of code review for the final 
patch that adds server instance tagging to the Nova API:


https://review.openstack.org/#/c/128940

Unfortunately, for some reason I really don't understand, Sergey is 
being required to create an API extension called "os-server-tags" in 
order to add the server tag functionality to the API. The patch 
implements the 2.4 Nova API microversion, though, as you can see from 
this part of the patch:


https://review.openstack.org/#/c/128940/43/nova/api/openstack/compute/plugins/v3/server_tags.py

What is the point of creating a new "plugin"/API extension for this new 
functionality? Why can't we just modify the 
nova/api/openstack/compute/server.py Controller.show() method and 
decorate it with a 2.4 microversion that adds a "tags" attribute to the 
returned server dictionary?


https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L369
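
As a self-contained sketch of the idea (gate the new response field on the
requested microversion rather than bolting it on via an extension) -- the
header name is the one mentioned above, everything else is illustrative and
not actual nova code:

    MIN_TAGS_VERSION = (2, 4)

    def parse_version(headers):
        # Default to the base microversion if the client did not ask for one.
        raw = headers.get('X-OpenStack-Nova-API-Version', '2.1')
        major, minor = raw.split('.')
        return int(major), int(minor)

    def show_server(headers, server):
        """Return the server dict, including 'tags' only for 2.4+ requests."""
        body = dict(server)
        if parse_version(headers) >= MIN_TAGS_VERSION:
            body['tags'] = server.get('tags', [])
        else:
            body.pop('tags', None)
        return body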

Because we're using an API extension for this new server tags 
functionality, we are instead having the extension "extend" the server 
dictionary with an "os-server-tags:tags" key containing the list of 
string tags.


This is ugly and pointless. We don't need to use API extensions any more 
for this stuff.


A client knows that server tags are supported by the 2.4 API 
microversion. If the client requests the 2.4+ API, then we should just 
include the "tags" attribute in the server dictionary.


Similarly, new microversion API functionality should live in a module, 
as a top-level (or subcollection) Controller in 
/nova/api/openstack/compute/, and should not be in the 
/nova/api/openstack/compute/plugins/ directory. Why? Because it's not a 
plugin.


Why are we continuing to use these awkward, messy, and cumbersome API 
extensions?


Please, I am begging the Nova core team. Let us stop this madness. No 
more API extensions.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Thinking about our python client UX

2015-03-07 Thread Devananda van der Veen
Hi folks,

Recently, I've been thinking more about how users of our python client
will interact with the service, and in particular, how they might
expect different instances of Ironic to behave.

We added several extensions to the API this cycle, and along with
that, also landed microversion support (I'll say more on that in
another thread). However, I don't feel like we've collectively given
nearly enough thought to the python client. It seems to work well
enough for our CI testing, but is that really enough? What about user
experience?

In my own testing of the client versioning patch that landed on
Friday, I noticed some pretty appalling errors (some unrelated to that
patch) when pointing the current client at a server running the
stable/juno code...

http://paste.openstack.org/show/u91DtCf0fwRyv0auQWpx/


I haven't filed specific bugs from this yet because I think the issue
is large enough that we should talk about a plan first. I think that
starts by agreeing on who the intended audience is and what level of
forward-and-backward compatibility we are going to commit to [*],
documenting that agreement, and then coming up with a plan to deliver
that during the L cycle. I'd like to start the discussion now, so I
have put it on the agenda for Monday, but I also expect it will be a
topic at the Vancouver summit.

-Devananda


[*] full disclosure

I believe we have to commit to building a client that works well with
every release since Icehouse, and the changes we've introduced in the
client in this cycle do not.
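
For illustration, one generic way a client can stay compatible with older
servers is to negotiate downward to the newest version both sides support.
This is only a sketch under that assumption, not actual python-ironicclient
code:

    SUPPORTED = [(1, 0), (1, 5), (1, 6)]   # versions this client knows about

    def negotiate(server_min, server_max):
        """Pick the newest client-supported version the server also accepts."""
        candidates = [v for v in SUPPORTED if server_min <= v <= server_max]
        if not candidates:
            raise RuntimeError('no mutually supported API version')
        return max(candidates)

    # e.g. against an older server that only speaks 1.0:
    assert negotiate((1, 0), (1, 0)) == (1, 0)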

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs

2015-03-07 Thread Jay Pipes

Thanks very much, David, appreciated!

-jay

On 03/07/2015 02:25 PM, David Lyle wrote:

I agree that Horizon should not be requiring optional headers. Changing
status of bug.

On Tue, Mar 3, 2015 at 5:51 PM, Jay Pipes <jaypi...@gmail.com> wrote:

Added [swift] to topic.

On 03/03/2015 07:41 AM, Matthew Farina wrote:

Radoslaw,

Unfortunately the documentation for OpenStack has some holes.
What you
are calling a private API may be something missed in the
documentation.
Is there a documentation bug on the issue? If not, one should be
created.


There is no indication that the X-Timestamp or X-Object-Meta-Mtime
HTTP headers are part of the public Swift API:

http://developer.openstack.org/api-ref-objectstorage-v1.html


I don't believe this is a bug in the Swift API documentation,
either. John Dickinson (cc'd) mentioned that the X-Timestamp HTTP
header is required for the Swift implementation of container
replication (John, please do correct me if wrong on that).

But that is the private implementation and not part of the public API.

In practice OpenStack isn't a specification and an implementation. The
documentation has enough missing information that you can't treat it this
way. If you want to contribute to improving the documentation, I'm sure
the documentation team would appreciate it. The last time I looked there
were a number of undocumented public swift API details.


The bug here is not in the documentation. The bug is that Horizon is
coded to rely on HTTP headers that are not in the Swift API. Horizon
should be fixed to use .get('X-Timestamp') instead of doing
['X-Timestamp'] in its view pages for container details. There
are already patches up that the Horizon developers have, IMO
erroneously, rejected, stating this is a problem in Ceph RadosGW for
not properly following the Swift API.
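
A minimal sketch of that defensive header access; the function and variable
names are illustrative, not actual Horizon code:

    def object_timestamp(headers):
        # X-Timestamp and X-Object-Meta-Mtime are optional, so never index
        # into the header dict directly -- fall back gracefully instead.
        ts = headers.get('X-Timestamp')
        if ts is None:
            ts = headers.get('X-Object-Meta-Mtime')
        return ts  # may be None for implementations such as Ceph RadosGW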

Best,
-jay

Best of luck,
Matt Farina

On Tue, Mar 3, 2015 at 9:59 AM, Radoslaw Zarzynski
<rzarzyn...@mirantis.com> wrote:

    Guys,

    I would like to discuss a problem which can be seen in Horizon: breaking
    the boundaries of the public, well-specified Object Storage API in favour
    of utilizing Swift-specific extensions. Ticket #1297173 [1] may serve
    as a good example of such a violation. It is about relying on a
    non-standard (in terms of the OpenStack Object Storage API v1) and
    undocumented HTTP header provided by Swift. In order to make
    Ceph RADOS Gateway work correctly with Horizon, developers had to
    inspect the sources of Swift and implement the same behaviour.

    From my perspective, that practise breaks the mission of OpenStack,
    which is much more than delivering yet another IaaS/PaaS implementation.
    I think its main goal is to provide a universal set of APIs covering all
    functional areas relevant for cloud computing, and to place that set
    of APIs in front of as many implementations as possible. Having an open
    source reference implementation of a particular API is required to prove
    its viability, but is secondary to having an open and documented API.

    I have full understanding that situations where the public OpenStack
    interfaces are insufficient to get the work done might exist.
    However, introducing a dependency on an implementation-specific feature
    (especially without giving the users a choice via e.g. some
    configuration option) is not the proper way to deal with the problem.
    From my point of view, such cases should be handled with the adoption of
    a new, carefully designed and documented version of the given API.

    In any case I think that Horizon, at least its basic functionality, should
    work with any storage which provides the Object Storage API.
    That being said, I'm willing to contribute such patches, if we decide
    to go that way.

    Best regards,
    Radoslaw Zarzynski

    [1] https://bugs.launchpad.net/horizon/+bug/1297173



__
    OpenStack Development Mailing List (not for usage questions)
    Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Glance] Core nominations.

2015-03-07 Thread Nikhil Komawar
Thank you for the response, Hemanth! Those are some excellent questions.


In order to avoid diverging the conversation, I would like to give my general 
sense of direction. Please do keep in mind that a lot of these thoughts need to 
be better formulated, preferably on a different thread.


Core-members would be a generic concept, unlike core-reviewers. The one important 
thing that this should achieve is a clear understanding, for individuals (usually 
ones who are new or interact less often in Glance), of who actually is a "Core" in 
the program. There are a few things that could be part of their rights, like being 
able to vote on important decisions (like the current thread); they may or may not 
have core-reviewer rights based on their participation area. For example, they 
could be the security liaison, or they may _officially_ do release management for 
the libraries without being a core-reviewer, etc. The responsibilities should 
complement the rights.


Those are just initial thoughts and can be better formulated. I will attempt to 
craft out the details of the core-member concept in the near future and you all 
are welcome to join me in doing so.


Hope that answered your questions, at least for the time being!


Cheers
-Nikhil

From: Hemanth Makkapati 
Sent: Friday, March 6, 2015 7:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.


I like the idea of a 'core-member'. But, how are core-members different from 
core-reviewers? For instance, with core-reviewers it is very clear that these 
are folks you would trust with merging code because they are supposed to have a 
good understanding of the overall project. What about core-members? Are 
core-members essentially just contributors who somehow don't fit the criteria 
of core-reviewers but are good candidates nevertheless? Just trying to 
understand here ... no offense meant.


Also, +1 to both the criteria for removing existing cores and addition of new 
cores.


-Hemanth.


From: Nikhil Komawar 
Sent: Friday, March 6, 2015 4:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.


Thank you all for the input outside of the program: Kyle, Ihar, Thierry, Daniel!


Mike, Ian: It's a good idea to have the policy; however, we need to craft one 
that is custom to the Glance program. It will be a bit different from the ones 
out there, as we have contributors who are dedicated to only a subset of the 
code - for example just glance_store or python-glanceclient or metadefs. From 
here on, we may see that for Artifacts and other such features. It's already 
being observed for metadefs.


While I like Mike's suggestion to (semi-)adopt what Nova and Neutron are doing, 
it also makes me wonder if that's going to help us in the long term. If not, 
then what can we do now to set a good path forward?


Flavio, Erno, Malini, Louis, Mike: Drafting a guideline policy and implementing 
rotation based on it was my intent, so that everyone is aware of the changes in 
the program. That would let the core reviewers know what their duties are and 
let non-cores know what they need to do to become cores. Moreover, I have an 
idea for proposing a "core-member" status for our program beyond just 
core-reviewer. That seems more applicable for a few strong regular contributors 
like Travis and Lakshmi who work on bug fixes, bug triaging and client 
improvements, but do not seem to keep momentum on reviews. The core status can 
affect project decisions; hence, this change may be important. This process may 
involve some interactions with governance, so it will take more time.


Malini: I wish to take a strategic decision here rather than an agile one. That 
needs a lot of brainpower before implementation. While warning and then acting 
is good, it seems less applicable in this case, simply because we need to make a 
positive difference in the interactions of the community, and we have a chance 
of doing that here.


Nevertheless, I do not want to block the new-core additions or ask Flavio 
et al. to make up for the reviews that the new members would have been able 
to do (just kidding).


Tweaking Flavio's criterion of cleaning up the list of cores who have not done 
any reviews in the last two cycles (Icehouse and Juno), I've prepared a new list 
below (as Flavio's list did not match up even if we take the cycles to be Juno 
and Kilo). They can be added back to the list faster in the future if they 
consider contributing to Glance again.


The criterion is:

Reviews <= 50 in combined cycles.


Proposal to remove the following members(review_count) from the glance-core 
list:

  *   Brian Lamar (0+15)
  *   Brian Waldon (0+0)
  *   Dan Prince (3+1)
  *   Eoghan Glynn (0+3)
  *   John Bresnahan (31+12)

And we would add the following new members:

  *   Ian Cordasco
  *   Louis Taylor
  *   Mike Fedosin
  *   H

Re: [openstack-dev] [horizon] Do No Evil

2015-03-07 Thread Davanum Srinivas
Possibly a better venue would be the legal-discuss@ mailing list?

-- dims

On Sat, Mar 7, 2015 at 12:57 PM, Michael Krotscheck
 wrote:
> On Sat, Mar 7, 2015 at 8:15 AM Ian Wells  wrote:
>>
>> With apologies for derailing the question, but would you care to tell us
>> what evil you're planning on doing?  I find it's always best to be informed
>> about these things.
>
>
> Me? What? Me? Evil? None, of course. Nope. Nothing at all. Do not look
> behind the curtain.
>
> If someone like BigEvilCorp wanted to install Horizon though, and saw that
> we used tooling that included a "do no evil" license in it? Lawyers get
> touchy, so do governments. There's some discussion on this here that
> suggests no small amount of consternation:
>
> https://github.com/jshint/jshint/issues/1234
>
> The actual license in question.
>
> https://github.com/jshint/jshint/blob/master/src%2Fjshint.js
>
>>
>> (Why yes, it *is* a Saturday morning.)
>
>
> Anyone wanna hack on a bower mirror puppet module with me?
>
> Michael
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] bower

2015-03-07 Thread Richard Jones
On Sun, 8 Mar 2015 at 04:59 Michael Krotscheck  wrote:

> Anyone wanna hack on a bower mirror puppet module with me?
>

BTW are you aware of this spec for bower/Horizon/infra?

https://review.openstack.org/#/c/154297/


Richard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs

2015-03-07 Thread David Lyle
I agree that Horizon should not be requiring optional headers. Changing
status of bug.

On Tue, Mar 3, 2015 at 5:51 PM, Jay Pipes  wrote:

> Added [swift] to topic.
>
> On 03/03/2015 07:41 AM, Matthew Farina wrote:
>
>> Radoslaw,
>>
>> Unfortunately the documentation for OpenStack has some holes. What you
>> are calling a private API may be something missed in the documentation.
>> Is there a documentation bug on the issue? If not, one should be created.
>>
>
> There is no indication that the X-Timestamp or X-Object-Meta-Mtime HTTP
> headers are part of the public Swift API:
>
> http://developer.openstack.org/api-ref-objectstorage-v1.html
>
> I don't believe this is a bug in the Swift API documentation, either. John
> Dickinson (cc'd) mentioned that the X-Timestamp HTTP header is required for
> the Swift implementation of container replication (John, please do correct
> me if wrong on that).
>
> But that is the private implementation and not part of the public API.
>
>  In practice OpenStack isn't a specification and an implementation. The
>> documentation has enough missing information that you can't treat it this
>> way. If you want to contribute to improving the documentation, I'm sure
>> the documentation team would appreciate it. The last time I looked there
>> were a number of undocumented public swift API details.
>>
>
> The bug here is not in the documentation. The bug is that Horizon is coded
> to rely on HTTP headers that are not in the Swift API. Horizon should be
> fixed to use .get('X-Timestamp') instead of doing
> ['X-Timestamp'] in its view pages for container details. There are
> already patches up that the Horizon developers have, IMO erroneously,
> rejected, stating this is a problem in Ceph RadosGW for not properly
> following the Swift API.
>
> Best,
> -jay
>
>  Best of luck,
>> Matt Farina
>>
>> On Tue, Mar 3, 2015 at 9:59 AM, Radoslaw Zarzynski
>> mailto:rzarzyn...@mirantis.com>> wrote:
>>
>> Guys,
>>
>> I would like to discuss a problem which can be seen in Horizon: breaking
>> the boundaries of the public, well-specified Object Storage API in favour
>> of utilizing Swift-specific extensions. Ticket #1297173 [1] may
>> serve
>> as a good example of such a violation. It is about relying on a
>> non-standard (in terms of the OpenStack Object Storage API v1) and
>> undocumented HTTP header provided by Swift. In order to make
>> Ceph RADOS Gateway work correctly with Horizon, developers had to
>> inspect sources of Swift and implement the same behaviour.
>>
>>  From my perspective, that practise breaks the mission of
>> OpenStack,
>> which is much more than delivering yet another IaaS/PaaS
>> implementation.
>> I think its main goal is to provide a universal set of APIs covering
>> all
>> functional areas relevant for cloud computing, and to place that set
>> of APIs in front of as many implementations as possible. Having an open
>> source reference implementation of a particular API is required to
>> prove
>> its viability, but is secondary to having an open and documented API.
>>
>> I have full understanding that situations where the public OpenStack
>> interfaces are insufficient to get the work done might exist.
>> However, introducing a dependency on an implementation-specific feature
>> (especially without giving the users a choice via e.g. some
>> configuration option) is not the proper way to deal with the problem.
>>  From my point of view, such cases should be handled with the adoption of
>> a new, carefully designed and documented version of the given API.
>>
>> In any case I think that Horizon, at least its basic functionality, should
>> work with any storage which provides the Object Storage API.
>> That being said, I'm willing to contribute such patches, if we decide
>> to go that way.
>>
>> Best regards,
>> Radoslaw Zarzynski
>>
>> [1] https://bugs.launchpad.net/horizon/+bug/1297173
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__

Re: [openstack-dev] [neutron] Neutron agent internal data structures

2015-03-07 Thread Salvatore Orlando
Hi Leo,

Every agent keeps an in-memory state throughout its execution anyway.
The agents indeed have no persistent storage - at least not in the usual
form of a database. They do, however, rely on data other than the neutron
database.

For instance, for the l2 agent, ovsdb itself is a source of information. The
agent periodically scans it to detect interfaces which are brought up or
down.
As another example, the dhcp agent stores its current state in a 'data'
directory (if you're using devstack it's usually
/opt/stack/data/neutron/dhcp).
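
As a generic, illustrative sketch of that in-memory pattern (not taken from
the actual agent code), the periodic scan boils down to comparing the latest
ovsdb snapshot with the previously cached one:

    class PortCache(object):
        """Track the ports last seen on the local integration bridge."""

        def __init__(self):
            self.current_ports = set()

        def update_from_scan(self, scanned_ports):
            # Diff a fresh ovsdb scan against the previous snapshot.
            added = scanned_ports - self.current_ports
            removed = self.current_ports - scanned_ports
            self.current_ports = set(scanned_ports)
            return added, removed

    cache = PortCache()
    cache.update_from_scan({'port-a', 'port-b'})
    added, removed = cache.update_from_scan({'port-b', 'port-c'})
    # added == {'port-c'}, removed == {'port-a'}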

Hope this helps,
Salvatore





On 7 March 2015 at 13:05, Leo Y  wrote:

> Hello,
>
> Where within the code of the neutron agents can I find the structure(s) that
> store network information? The agent has to know all current networks and
> ports in use by all VMs that are running on its compute node. Does anyone
> know where this information is stored other than in the neutron DB?
>
> Thank you
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Adding vendor drivers in Ironic

2015-03-07 Thread Devananda van der Veen
On Sun, Mar 1, 2015 at 8:45 AM, Clint Byrum  wrote:
> Excerpts from Gary Kotton's message of 2015-03-01 02:32:37 -0800:
>> Hi,
>> I am just relaying pain-points that we encountered in neutron. As I have
>> said below, it makes the development process a lot quicker for people
>> working on external drivers. I personally believe that it fragments the
>> community, and I feel that the external drivers lose the community
>> contributions and inputs.
>
> I think you're right that this does change the dynamic in the
> community. One way to lower the barrier is to go ahead and define the
> plugin API very strongly, but then delegate control of drivers in-tree
> to active maintainers, rather than in external repositories. If a driver
> falls below the line in terms of maintenance, then it can be deprecated.
> And if a maintainer feels strongly that they cannot include the driver
> with Ironic for whatever reason, the plugin API being strongly defined
> will allow them to do so.
>


++ on all counts.

Even with delegation of existing drivers to active driver
maintainer(s), there is still a cost to the core review team: new
driver submissions have generally not come from existing core
reviewers. That could be mitigated if we were to encourage new drivers
to be developed and proven out of tree while the author becomes active
in the "parent" project; then, when the core team feels ready, allow
the driver in-tree and delegate its maintenance.

-D

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Adding vendor drivers in Ironic

2015-03-07 Thread Devananda van der Veen
I know I'm arriving late to this thread, but I want to chime in with a
resounding "yes!" to this approach.

We've held to a fairly strict policy around maintaining compatibility
within the driver API since early on, and we'll continue to do that as
we add new interfaces (see the new ManagementInterface) or enhance
existing ones (like the new @clean_step decorator). I believe that, as
a hardware abstraction layer, we need to enable authors of
vendor-specific plugins/drivers to innovate out of tree -- whether
that is through a driver + library, such as stackforge/proliantutils,
or in a completely out of tree driver (I'm not aware of any open
source examples of this yet, but I have heard of private ones).
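
As a self-contained sketch of that driver + library split (the method names
loosely mirror the proliantutils operations quoted later in this thread, but
none of this is actual Ironic or proliantutils code):

    class VendorClient(object):
        """Stands in for the out-of-tree vendor library."""

        def get_host_power_status(self):
            return 'OFF'   # a real client would talk to the BMC here

        def set_host_power(self, state):
            pass           # vendor-specific protocol details live in the library

    class VendorPowerInterface(object):
        """The thin in-tree driver: it only translates between the power
        interface and the library's small, stable API."""

        def __init__(self, client):
            self.client = client

        def get_power_state(self, task):
            return self.client.get_host_power_status()

        def set_power_state(self, task, state):
            self.client.set_host_power(state)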

As far as the two (dis)advantages you list:

1) vendor drivers reviewed by the community

On the one hand, I believe there is some benefit from the broader
OpenStack community reviewing vendor code. Some... but not much.
Review of architecture and general principles (is it scalable? are you doing
something that operators will hate?) is valuable, but that's where it
stops. As you pointed out, it is impractical to expect open source
developers (or developers at other hardware companies) to have the
knowledge and expertise necessary (or hardware available to them) to
determine whether or not vendor-specific code will work with specific
vendor hardware.

As an interesting data point, we now have support for 8 different
types of hardware drivers in tree (10 if you count the ssh and vbox
drivers that we use in testing).

2) distros (not) packaging dependencies

Because we cannot actually test every driver's dependencies in the
upstream gate (there's no hardware to exercise against), we do not
install those packages in devstack-gate, and so we do not list
drivers' python dependencies in requirements.txt. We mock those
libraries and test the in-tree driver code, and rely on the library
and driver authors to ensure their fitness for use against real
hardware. So, yes, third-party-CI is essential to this -- and I wish
more vendors would step up and run CI on their drivers. But I
digress...

After talking with Thomas Goirand about debian packaging of Ironic, we
realized that there wasn't a clear list of our driver's dependencies.
So to provide distros with a single location for this information, we
are now maintaining a "driver-requirements.txt" file. It's fairly new
-- I added it about a month ago. See
https://github.com/openstack/ironic/blob/master/driver-requirements.txt


Thanks for writing up your experiences with this, Ramakrish. I believe
it is a great example to other driver authors -- and it would be great
to have a wiki or doc page describing the how's and why's of this
approach.

Cheers,
Devananda


On Fri, Feb 27, 2015 at 10:28 PM, Ramakrishnan G
 wrote:
>
> Hello All,
>
> This is about adding vendor drivers in Ironic.
>
> In Kilo, we have many vendor drivers getting added in Ironic, which is a very
> good thing.  But something I noticed is that most of these reviews have
> lots of hardware-specific code in them.  This is something most of the other
> Ironic folks cannot understand unless they go and read the vendor's hardware
> manuals to learn what is being done.  Otherwise we just need to
> blindly mark the file as reviewed.
>
> Now let me pitch in with our story about this.  We added a vendor driver for
> HP Proliant hardware (the *ilo drivers in Ironic).  Initially we proposed
> the same thing in Ironic: that we would add all the hardware-specific code in
> Ironic itself under the directory drivers/modules/ilo.  But a few of the
> Ironic folks didn't agree with this (Devananda especially, who is from my
> company :)). So we created a new module, proliantutils, hosted it on our own
> github and recently moved it to stackforge.  We gave a limited set of APIs
> for Ironic to use - like get_host_power_status(), set_host_power(),
> get_one_time_boot(), set_one_time_boot(), etc. (Entire list is here
> https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/operations.py).
>
> We have only seen benefits in doing it.  Let me bring in some examples:
>
> 1) We tried to add support for some lower versions of servers.  We could do
> this without making any changes in Ironic (review in proliantutils:
> https://review.openstack.org/#/c/153945/)
> 2) We are adding support for newer models of servers (earlier we used to talk
> to servers using a protocol called RIBCL; for newer servers we will use a
> protocol called RIS) - we could do this with just 14 lines of actual code
> change in Ironic (this was needed mainly because we didn't think we would
> have to use a new protocol at all when we started) -
> https://review.openstack.org/#/c/154403/
>
> Now talking about the advantages of putting hardware-specific code in
> Ironic:
>
> 1) It's reviewed by the OpenStack community and tested:
> No. I doubt if I throw in 600 lines of new iLO specific code that is here
> (https://github.com/stackforge/proliantutils/blob/master/proli

Re: [openstack-dev] [keystone] [oslo] Further details for the Oslo Cache Updated to use dogpile.cache spec

2015-03-07 Thread Doug Hellmann


On Fri, Mar 6, 2015, at 06:35 PM, xiaoyuan wrote:
> Hi everyone,
> 
> I am Xiaoyuan Lu, a recipient of the HP Helion OpenStack scholarship, and 
> Elizabeth K. Joseph is my mentor from HP. I would like to work on projects 
> related to Keystone. Considering the amount of work and the time limit, 
> after going through the Keystone specs, we finally decided to work on 
> "oslo-cache-using-dogpile". [0]
> 
> At the Oslo meeting this week, I met with the Oslo team and we 
> identified the need to add some ideas for what the library API might 
> need to include with regard to the "Oslo Cache Updated to use 
> dogpile.cache" spec.
> 
> Can we get some feedback to help flesh this out?
> 
> [0]http://specs.openstack.org/openstack/oslo-specs/specs/kilo/oslo-cache-using-dogpile.html

Looking at the documentation for the backends in the dogpile.cache docs
[1], I see that each backend driver may take different arguments. Some
of those values, like the URL to the memcached server, could become
configuration options defined in oslo.cache. So one thing you could
start with is figuring out a reasonable starting list of those
configuration options. We won't need every single option, just some of
the basics to start with.

Then we need to decide what an API in oslo.cache that uses those
configuration options might look like. For example, there are places in
keystone where someone might want a "memory" cache and other places
where they might want a "persistent" cache. So we might have 2 functions
in oslo.cache with names like get_memory_cache_region() and
get_persistent_cache_region(). Those could be implemented as thin
wrappers around dogpile.cache.make_region(), calling it with the
appropriate arguments taken from the configuration options. They would
return the region, and then the application developer would use it
directly.
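
A rough sketch of what such thin wrappers might look like -- the function
names come from the paragraph above, while the backends, options, and defaults
are just illustrative examples of what the spec could settle on:

    from dogpile.cache import make_region

    def get_memory_cache_region(expiration_time=300):
        # An in-process cache, for places that don't need persistence.
        return make_region().configure(
            'dogpile.cache.memory',
            expiration_time=expiration_time)

    def get_persistent_cache_region(url='127.0.0.1:11211', expiration_time=600):
        # A cache backed by an external store chosen by the deployer;
        # memcached is used here purely as an example backend.
        return make_region().configure(
            'dogpile.cache.memcached',
            expiration_time=expiration_time,
            arguments={'url': [url]})

    region = get_memory_cache_region()
    region.set('answer', 42)
    assert region.get('answer') == 42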

I haven't thought much about how many different versions of those
functions we need, though. We don't want one for each dogpile backend,
because we don't necessarily need to support each backend. And we don't
need multiple versions of the function for the persistent storage, so
how does the deployer (not the application developer) pick which
persistent storage mechanism to use? And do we need different memory
configurations, too? Probably, for devstack, but maybe not.

If you start by adding details like this to the existing spec, the rest
of the team can help work out missing details and make suggestions to
keep the oslo.cache API consistent with some of the other Oslo
libraries.

Doug

[1]
http://dogpilecache.readthedocs.org/en/latest/api.html#module-dogpile.cache.backends.memory

> 
> -- 
> Xiaoyuan Lu
> Computer Science M.S.
> UC Santa Cruz
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Do No Evil

2015-03-07 Thread Michael Krotscheck
On Sat, Mar 7, 2015 at 8:15 AM Ian Wells  wrote:

> With apologies for derailing the question, but would you care to tell us
> what evil you're planning on doing?  I find it's always best to be informed
> about these things.
>

Me? What? Me? Evil? None, of course. Nope. Nothing at all. Do not look
behind the curtain.

If someone like BigEvilCorp wanted to install Horizon though, and saw that
we used tooling that included a "do no evil" license in it? Lawyers get
touchy, so do governments. There's some discussion on this here that
suggests no small amount of consternation:

https://github.com/jshint/jshint/issues/1234

The actual license in question.

https://github.com/jshint/jshint/blob/master/src%2Fjshint.js


> (Why yes, it *is* a Saturday morning.)
>

Anyone wanna hack on a bower mirror puppet module with me?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Do No Evil

2015-03-07 Thread Ian Wells
With apologies for derailing the question, but would you care to tell us
what evil you're planning on doing?  I find it's always best to be informed
about these things.
-- 
Ian.

(Why yes, it *is* a Saturday morning.)

On 6 March 2015 at 12:23, Michael Krotscheck  wrote:

> Heya!
>
> So, a while ago Horizon pulled in JSHint to do javascript linting, which
> is awesome, but has a rather obnoxious "Do no evil" licence in the
> codebase: https://github.com/jshint/jshint/blob/master/src/jshint.js
>
> StoryBoard had the same issue, and I've recently replaced JSHint with
> ESlint for just that reason, but I'm not certain it matters as far as
> OpenStack license compatibility. I'm personally of the opinion that tools
> used != code shipped, but I am neither a lawyer nor a liable party should
> my opinion be wrong. Is this something worth revisiting?
>
> Michael
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB

2015-03-07 Thread Leo Y
Hello,

What happens when the neutron DB is updated to change network settings (e.g.
via the Dashboard or manually) while there are communication sessions open on
compute nodes? Does it influence those sessions? When is the update
propagated to the compute nodes?

Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Neutron agent internal data structures

2015-03-07 Thread Leo Y
Hello,

Where within the code of the neutron agents can I find the structure(s) that
store network information? The agent has to know all current networks and
ports in use by all VMs that are running on its compute node. Does anyone know
where this information is stored other than in the neutron DB?

Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] How neutron calculate routing path

2015-03-07 Thread Leo Y
Hello, I am looking to learn how the neutron agent (probably L3) calculates a
new routing path when a VM on a compute node wants to communicate with some
destination. Does it use the neutron API to learn about the network topology,
or does it use its internal structures to simulate path resolution as in a
real network? If the latter is correct, then what happens when the network
topology is changed in the neutron DB (due to user intervention or other
actions) and the "local" data is invalid?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev