Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-10 Thread Robert Collins
On 10 January 2014 01:46, Sean Dague s...@dague.net wrote:
 I think we are all agreed that the current state of Gate Resets isn't good.
...
 Specifically I'd like to get commitments from as many PTLs as possible that
 they'll both directly participate in the day, as well as encourage the rest
 of their project to do the same.

Am in.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-10 Thread Robert Collins
On 9 January 2014 07:05, Samuel Merritt s...@swiftstack.com wrote:
 On 1/7/14 2:53 PM, Michael Still wrote:

 So applying migration 206 took slightly over a minute (67 seconds).
 Our historical data (mean + 2 standard deviations) says that this
 migration should take no more than 63 seconds. So this only just
 failed the test.


 It seems to me that requiring a runtime less than (mean + 2 stddev) leads to
 a false-positive rate of 1 in 40, right? If the runtimes have a normal(-ish)
 distribution, then 95% of them will be within 2 standard deviations of the
 mean, so that's 1 in 20 falling outside that range. Then discard the ones
 that are faster than (mean - 2 stddev), and that leaves 1 in 40. Please
 correct me if I'm wrong; I'm no statistician.

Your math is right, but performance distributions aren't necessarily
normal - there's some minimum time the operation takes (call this the
ideal time) and then there are things, like having to do I/O, which
make it worse. So if you're testing on idle systems, most of the time
you're near the ideal, and sometimes you're worse - but you're never
better.

The acid-test question is whether the things that make the time worse are
things we should consider when evaluating the time. For instance, I/O
contention with other VMs - ignore it. Database engines deciding to do
garbage collection at just the wrong time - we probably want to
consider that, because that is something prod systems may encounter
(or we should put a gc step in the deploy process and test it, etc.).

I think we should set some confidence interval - e.g. 95% - and then
from that we can calculate how many runs we need to be confident it
won't occur more than that often. The number of runs will be more than
3 though :).
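
For concreteness, a minimal sketch of the kind of check being discussed
(plain Python, not turbo-hipster's actual code) - flag a run as slow when it
exceeds the historical mean plus two standard deviations:

    import statistics

    def is_too_slow(historical_runtimes, new_runtime):
        # Threshold derived from history: mean + 2 * sample standard deviation.
        mean = statistics.mean(historical_runtimes)
        stdev = statistics.stdev(historical_runtimes)
        return new_runtime > mean + 2 * stdev

    # Made-up history: most runs near the ideal time, with occasional slow ones.
    history = [55, 56, 55, 57, 58, 56, 55, 70]
    print(is_too_slow(history, 67))

Under a normal distribution only about 2.5% of runs would exceed that
one-sided threshold, which is where the 1-in-40 figure comes from; with a
skewed never-better-than-ideal distribution the real rate will differ.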

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Gantt] Looking for some answers...

2014-01-10 Thread Robert Collins
On 7 January 2014 08:50, Dugger, Donald D donald.d.dug...@intel.com wrote:
 Pretty much what Vish said.

 In re: History.  I think this was the right way, these scheduler files didn't 
 just spring up from nowhere, maintaining the history is a good thing.  Even 
 when this becomes a separate service knowing where the files came from is a 
 good thing.

 In re: Changes to the current scheduler - I intend to track the nova tree and 
 port over any changes to the nova scheduler code into the gantt tree.  
 Hopefully, by the time the gantt code has diverged enough that this becomes a 
 burden we will have deprecated the nova scheduler code and have moved to 
 gantt.


We need to do the gantt client tree too; it's up, but not sorted out like
the server tree is. That client tree is what nova should import to get the
RPC definitions needed to talk to gantt.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] where to expose network quota

2014-01-10 Thread Robert Collins
On 8 January 2014 03:01, Christopher Yeoh cbky...@gmail.com wrote:
 On Mon, Jan 6, 2014 at 4:47 PM, Yaguang Tang yaguang.t...@canonical.com
...

 For the V3 API clients should access neutron directly for quota information.
 The V3 API will no longer proxy quota related information for neutron. Also
 novaclient will not get the quota information from neutron, but users should
 use neutronclient or python-openstackclient instead.

 The V3 API mode for novaclient will only be accessing Nova - with one big
 exception for querying glance
 so images can be specified by name. And longer term I think we need to think
 about how we share client code amongst clients because I think there will be
 more cases where it's useful to access other servers so things can be
 specified by name rather than UUID but we don't want to duplicate code in
 the clients.

Also I think we shouldn't change v2 for this.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2014-01-10 Thread Robert Collins
On 4 January 2014 08:31, Jeremy Stanley fu...@yuggoth.org wrote:

 I really don't understand the aversion to allowing contributors to
 police on their own what files they do and don't commit in a review
 to an OpenStack project. It all boils down to the following
 balancing act:

I have *no* aversion to allowing contributors to police things on
their own. I have an aversion to forcing them to do so.

  * Reviewing changes to each project's .gitignore for the trashfile
patterns of every editor and IDE known to man is a waste of
reviewers' collective time.

This is a strawman. If we have to review a trashfile pattern, it's because
a contributor is actually using that editor - we'll never need to handle
every editor known to man. There are more editors in existence than we
have contributors :).

  * Having to point out to contributors that they've accidentally
added trashfiles created by their arbitrary choice of tools to a
change in review is also a waste of reviewers' collective time.

 Since there are ways for a contributor to configure their
 development environment in a manner which prevents them from
 inadvertently putting these files into a change for review, I feel
 like it's perfectly reasonable to suggest that as an alternative. It
 is just one of the many ways a contributor avoids wasting reviewer
 time by neither polluting their changes nor every project's
 .gitignore with details potentially relevant only to their own
 personal development system and nowhere else.

I don't understand why you call it polluting. Pollution is toxic. What
is toxic about the few rules needed to handle common editors?

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2014-01-10 Thread Robert Collins
On 6 January 2014 05:18, Jeremy Stanley fu...@yuggoth.org wrote:

 I think people are conflating two different global concepts
 here...

 There had been a discussion about synchronizing the .gitignore files
 of all projects into one central list (a la openstack/requirements
 synchronization): global across our entire developer community.

 There were also suggestions that contributors could adjust their own
 ~/.gitconfig to ignore the particular trashfiles created by the
 tools that they commonly use: global across all local git
 repositories on a particular developer's computer.

 The first is something which I'm pretty sure will flat out not work,
 and would almost certainly annoy a great many people if we did find

Out of curiosity, why wouldn't it work?

 enough workarounds to get it to sort-of work. The second I see as
 no different than configuring Git to know your preferred E-mail
 address or OpenPGP key, but Sam was expressing a concern that we as
 a project should never educate contributors about available options
 for configuring their development tools.

*everyone* I know gets git's preferred email and gpg config wrong to
start with. Recent versions of git make this explicit by refusing to work
in a broken fashion. I see having common defaults in trees as an analogous
thing - rather than beating people up when they get it wrong, make it
harder for them to get it wrong.
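
For reference, the per-developer approach mentioned above boils down to two
commands (the file path here is just an example - any path works):

    git config --global core.excludesfile ~/.gitignore_global
    printf '*.swp\n.idea/\n' >> ~/.gitignore_global

git then ignores those patterns in every local repository without touching
any project's .gitignore.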

 I'm not opposed to projects adding random development tool droppings
 to their .gitignore files, though I personally would prefer to just
 configure my development environment to ignore those sorts of files
 for any project I happen to touch rather than go on a crusade to
 patch every project under the sun to ignore them.

This is another strawman, no? Is anyone suggesting a crusade?

 I also disagree that they require no reviewer time... we have
 release tooling which takes the patterns in .gitignore into account
 so it knows to skip files which get generated as part of the build
 process. A too-greedy pattern in a .gitignore file can very quickly
 end in broken release tarballs if reviewers are not especially
 careful to confirm those patterns match *only* what's intended
 (which also means gaining confidence in the nuances of git's pattern
 matcher).

I read that as 'we don't test that our tarballs work'. No?

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Spark plugin status

2014-01-10 Thread Daniele Venzano

On 01/09/14 19:12, Matthew Farrellee wrote:

This is definitely great news!

+2 to the things Sergey mentioned below.

Additionally, will you fill out the blueprint or wiki w/ details that
will help others write integration tests for your plugin?


We have already implemented at least part of the integration tests for
Spark, mimicking the ones provided with the Vanilla plugin. The Spark
plugin works almost exactly like the Vanilla one: it can install a
datanode, a namenode, a Spark master or a Spark worker, and resize the cluster.

What kind of documentation is needed?



And, did you integrate (or have plans to integrate) Spark into the EDP
workflows in Horizon?


We would like to have that functionality. Currently we are limited by 
the lack of a Swift service in our cluster. We will have one test 
installation in a short while and then we will see. What is the status 
of the HDFS datasource? We are very interested in that, but I lost track 
of the development during the holidays.





On 01/09/2014 03:41 AM, Sergey Lukjanov wrote:

Hi,

I'm really glad to hear that!

Answers inlined.

Thanks.


On Thu, Jan 9, 2014 at 11:33 AM, Daniele Venzano
daniele.venz...@eurecom.fr wrote:

Hello,

we are finishing up the development of the Spark plugin for Savanna.
In the next few days we will deploy it on an OpenStack cluster with
real users to iron out the last few things. Hopefully next week we
will put the code on a public github repository in beta status.

[SL] Awesome! Could you please share some info about this installation if
possible? Like the OpenStack cluster version and size, Savanna version,
expected Spark cluster sizes and lifecycle, etc.


You can find the blueprint here:
https://blueprints.launchpad.net/savanna/+spec/spark-plugin

There are two things we need to release, the VM image and the code
itself.
For the image we created one ourselves and for the code we used the
Vanilla plugin as a base.

[SL] You can use diskimage-builder [0] to prepare such images, we're
already using it for building images for vanilla plugin [1].


We feel that our work could be interesting for others and we would
like to see it integrated in Savanna. What is the best way to
proceed?

[SL] Absolutely, it's a very interesting tool for data processing. IMO
the best way is to create a change request to savanna for code review
and discussion in gerrit, it'll be really the most effective way to
collaborate. As for the best way of integration with Savanna - we're
expecting to see it in the openstack/savanna repo like vanilla, HDP and
IDH (which will be landed soon) plugins.


We did not follow the Gerrit workflow until now because development
happened internally.
I will prepare the repo on github with git-review and reference the
blueprint in the commit. After that, do you prefer that I send
immediately the code for review or should I send a link here on the
mailing list first for some feedback/discussion?

[SL] It'll be better to immediately send the code for review.


Thank you,
Daniele Venzano, Hoang Do and Vo Thanh Phuc




[0] https://github.com/openstack/diskimage-builder
[1] https://github.com/openstack/savanna-image-elements

Please, feel free to ping me if some help needed with gerrit or savanna
internals stuff.

Thanks.

--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.










___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-10 Thread Robert Collins
On 5 January 2014 02:02, Sean Dague s...@dague.net wrote:

 So we used to do that: test the apps against released libraries. And the
 result was more and more full-day gate breaks. We did 2 consecutive ones
 in 2 weeks.

 Basically, once you get to a certain level of coupling in OpenStack we can
 no longer let you manage your own requirements file. We need a global lever
 on it, because people were doing it wrong, and slowly (we could go through
 specific examples of how bad this was). This was a top issue at nearly
 every summit I'd been at, going back to Essex.
..
 (It was about 14 days to resolve the python client issue, there was a django
 issue around the same time that never made it to the list, as we did it all
 under fire in IRC)

 And we have a solution now. Which is one list of requirements that we can
 test everything with, that we can propose requirements updates
 speculatively, and see what works and what doesn't. And *after* we know they
 work, we propose the changes back into the projects, now automatically.

So the flip-flop thing is certainly very interesting. We wouldn't want
that to happen again.

 I do see the issue Sean is pointing at, which is that we have to fix
 the libraries first and then the things that use them. OTOH that's
 normal in the software world; I don't see anything unique about it.


 Well, as the person that normally gets stuck figuring this out when .eu has
 been gate blocked for a day, and I'm one of the first people up on the east
 coast, I find the normal state of affairs unsatisfying. :)

:)

 I also think that what we are basically dealing with is the classical N^2
 comms problem. With N git trees that we need to all get working together,
 this gets exponentially more difficult over time. Which is why we created
 the integrated gate and the global requirements lever.

I don't think we are - I think we're dealing with the fact that we've
had no signal - no backpressure - on projects that have upper caps set
to remove those caps. So they stick there and we all suffer.

 Another solution would be reduce the number of OpenStack git trees to make
 N^2 more manageable, and let us with single commits affect multiple
 components. But that's not the direction we've taken.

I don't think that's necessary.

What I'd like to see is:
A) two test sets for every commit:
 - commit with latest-release of all deps
 - commit with latest-trunk [or dependent zuul ref] of all deps

B) *if* there are upper version caps for any reason, some signal back
to developers that this exists and that we need to fix our code to
work with that newer release.
  - Possibly we should allow major version caps where major releases
are anticipated to be incompatible without warning about that *until*
there is a [pre-]release of the new major version available
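
To make (B) concrete, a capped entry in the requirements list looks something
like this (version numbers illustrative only), and the idea is to keep
warning developers for as long as such a cap exists:

    # capped until the code works with SQLAlchemy 0.8.x
    SQLAlchemy>=0.7.8,<=0.7.99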

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Imre Farkas

Thanks Jay, this is a very useful summary! Some comments inline:

On 01/09/2014 06:22 PM, Jay Dobies wrote:

I'm trying to hash out where data will live for Tuskar (both long term
and for its Icehouse deliverables). Based on the expectations for
Icehouse (a combination of the wireframes and what's in Tuskar client's
api.py), we have the following concepts:


= Nodes =
A node is a baremetal machine on which the overcloud resources will be
deployed. The ownership of this information lies with Ironic. The Tuskar
UI will accept the needed information to create them and pass it to
Ironic. Ironic is consulted directly when information on a specific node
or the list of available nodes is needed.


= Resource Categories =
A specific type of thing that will be deployed into the overcloud.
These are static definitions that describe the entities the user will
want to add to the overcloud and are owned by Tuskar. For Icehouse, the
categories themselves are added during installation for the four types
listed in the wireframes.

Since this is a new model (as compared to other things that live in
Ironic or Heat), I'll go into some more detail. Each Resource Category
has the following information:

== Metadata ==
My intention here is that we do things in such a way that if we change
one of the original four categories or, more importantly, add more or allow
users to add their own, the information about the category is centralized
and we are not reliant on the UI to tell the user what it is.

ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - Equally self-explanatory.

== Count ==
In the Tuskar UI, the user selects how many of each category are desired.
This is stored in Tuskar's domain model for the category and is used when
generating the template that is passed to Heat to make it happen.

These counts are what is displayed to the user in the Tuskar UI for each
category. The staging concept has been removed for Icehouse; in other
words, the wireframes that cover the 'waiting to be deployed' state aren't
relevant for now.

== Image ==
For Icehouse, each category will have one image associated with it. Last
I remember, there was discussion on whether or not we need to support
multiple images for a category, but for Icehouse we'll limit it to 1 and
deal with it later.

Metadata for each Resource Category is owned by the Tuskar API. The
images themselves are managed by Glance, with each Resource Category
keeping track of just the UUID for its image.
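
As a rough illustration (the field names are mine, not Tuskar's actual
schema), a Resource Category record would carry something like:

    # Illustrative only - not Tuskar's actual model definition.
    resource_category = {
        "id": "compute",                    # unique ID
        "display_name": "Compute",          # user-friendly name shown in the UI
        "description": "Nova compute nodes for the overcloud",
        "count": 4,                         # how many to ask Heat to deploy
        "image_id": "<glance-image-uuid>",  # one image per category in Icehouse
    }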


= Stack =
There is a single stack in Tuskar, the overcloud.

A small nit here: in the long term Tuskar will support multiple overclouds.

 The Heat template
for the stack is generated by the Tuskar API based on the Resource
Category data (image, count, etc.). The template is handed to Heat to
execute.

Heat owns information about running instances and is queried directly
when the Tuskar UI needs to access that information.

--

Next steps for me are to start to work on the Tuskar APIs around
Resource Category CRUD and their conversion into a Heat template.
There's some discussion to be had there as well, but I don't want to put
too much into one e-mail.


Thoughts?


There are a few concepts which I think are missing from the list:
- overclouds: after Heat successfully creates the stack, Tuskar needs to
keep track of whether it has applied the post-configuration steps (Keystone
initialization, registering services, etc.) or not. It also needs to know
the name of the stack (only one stack, named 'overcloud', for Icehouse).
- service endpoints of an overcloud: e.g. Tuskar-ui in the undercloud
will need the URL of the overcloud Horizon. The overcloud Keystone owns
this information (after post-configuration is done) and Heat owns the
information about the overcloud Keystone.
- user credentials for an overcloud: these will be used by Heat during
stack creation, by Tuskar during post-configuration, by Tuskar-ui when
querying various information (e.g. running VMs on a node) and finally by
the user logging in to the overcloud Horizon. Currently they can be found
in the Tuskar-ui settings file [1].


Imre

[1] 
https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option (was: [oslo.config] Centralized config management)

2014-01-10 Thread Flavio Percoco

On 09/01/14 23:56 +0100, Julien Danjou wrote:

On Thu, Jan 09 2014, Jay Pipes wrote:


Hope you don't mind, I'll jump in here :)

On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:

Hi Jeremy

Don't you think it is burden for operators if we should choose correct
combination of config for multiple nodes even if we have chef and
puppet?


It's more of a burden for operators to have to configure OpenStack in
multiple ways.


I also think projects should try to keep configuration options to a
minimum so operators are not completely lost. Opening the sample
nova.conf and seeing 696 options is not what I would call user friendly.

And also have working defaults. I know it's hard, but we should really
try to think about that sometimes.

Sorry to hijack the thread a bit.


IMHO, not a hijack!

+100 to this!


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-10 Thread Thierry Carrez
Jay Pipes wrote:
 Personally, I think sooner is better. The severity of the disruption is
 quite high, and action is needed ASAP.

Having the bug day organized shouldn't prevent people from working on
the most pressing issues and get the disruption under control ASAP...

I'm confident we'll be left with enough gate-wedging bugs to make for an
interesting gate blocking bug day at the end of the month :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Not able to launch VM From Windows Server 2008 Template.

2014-01-10 Thread Mardan Raghuwanshi
Hello all, please help me.

I exported a template of Windows Server 2008 R2 from the Xen server as a VHD file,
created a template in CloudStack with this VHD file,
and tried to launch an instance from this template.
After downloading the template, it tried to install for half an hour; after
that, it removed the template file.
I am using Xen Server as the host machine.









Thanks,
--Mardan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option

2014-01-10 Thread Thierry Carrez
Flavio Percoco wrote:
 On 09/01/14 23:56 +0100, Julien Danjou wrote:
 I also think projects should try to keep configuration options to a
 minimum so operators are not completely lost. Opening the sample
 nova.conf and seeing 696 options is not what I would call user friendly.

 And also have working defaults. I know it's hard, but we should really
 try to think about that sometimes.

 Sorry to hijack the thread a bit.
 
 IMHO, not a hijack!

It's a hijack, because it deserves a thread of its own, rather than be
lost in the last breaths of the configserver thread.

Adding a config option is a good way to avoid saying NO - you just say
'yes, but not by default: it's enabled with a config option instead'. The
trouble begins when the sheer number of options makes it difficult to
document, find and configure the right options, and the trouble
continues when you're unable to test the explosive matrix of option
combinations. Classifying options as basic or advanced is a way
to mitigate that, but not a magic bullet.

Personally I think we should (and we can) say NO more often. As we get
stronger as a dev community it becomes easier, and I think we see more
opinionated choices in younger projects. That said, it's just harder
for old projects which already have a lot of options to suddenly start
denying someone's feature instead of just adding another option...

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-10 Thread Flavio Percoco

On 09/01/14 13:28 -0500, Jay Pipes wrote:

On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:

On 08/01/14 17:13 -0800, Nachi Ueno wrote:
Hi folks

OpenStack processes tend to have many config options, and many hosts.
It is a pain to manage these tons of config options.
Centralizing this management helps operations.

We can use Chef- or Puppet-like tools; however,
sometimes each process depends on other processes' configuration.
For example, nova depends on neutron configuration, etc.

My idea is to have a config server in oslo.config, and let cfg.CONF get
its config from the server.
This approach has several benefits:

- We can get centralized management without modifications to each
project (nova, neutron, etc.)
- We can provide a Horizon UI for configuration

This is bp for this proposal.
https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

I'm very appreciate any comments on this.

I've thought about this as well. I like the overall idea of having a
config server. However, I don't like the idea of having it within
oslo.config. I'd prefer oslo.config to remain a library.

Also, I think it would be more complex than just having a server that
provides the configs. It'll need authentication like all other
services in OpenStack, and perhaps even support for encryption.

I like the idea of a config registry but, as mentioned above, IMHO it
should live under its own project.


Hi Nati and Flavio!

So, I'm -1 on this idea, just because I think it belongs in the realm of
configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
tools are built to manage multiple configuration files and changes in
them. Adding a config server would dramatically change the way that
configuration management tools would interface with OpenStack services.
Instead of managing the config file templates as all of the tools
currently do, the tools would essentially need to forgo the
tried-and-true INI files and instead write a bunch of code in order to
deal with REST API set/get operations for changing configuration data.

In summary, while I agree that OpenStack services have an absolute TON
of configurability -- for good and bad -- there are ways to improve the
usability of configuration without changing the paradigm that most
configuration management tools expect. One such example is having
include.d/ support -- similar to the existing oslo.cfg module's support
for a --config-dir, but more flexible and more like what other open
source programs (like Apache) have done for years.
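
As a sketch of the direction (paths are illustrative; oslo.config already
understands --config-dir, and the proposal is to make this more flexible),
tools would drop snippets into a directory instead of rewriting one file:

    /etc/nova/nova.conf              # base configuration
    /etc/nova/nova.conf.d/
        01-database.conf             # owned by the database tooling/module
        02-neutron.conf              # owned by the networking tooling/module

    nova-api --config-file /etc/nova/nova.conf --config-dir /etc/nova/nova.conf.d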


FWIW, this is the exact reason why I didn't propose the idea. Although
I like the idea, I'm not fully convinced.

I don't want to reinvent existing configuration management tools, nor
tie OpenStack services to this server. In my head I thought about it
as an optional thing that could help deployments that are not already
using other tools, but let's be realistic: who isn't using configuration
tools nowadays? It'd be very painful to manage the whole thing without
them.

Anyway, all this to say, I agree with you and that I think
implementing this service would be more complex than just serving
configurations. :)

Cheers,
FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Bug Triage Event 2014-01-13 11:00 UTC

2014-01-10 Thread Ekaterina Fedorova
Hi everyone!

This is the announcement of the Murano Bug Triage Event that will be held in
the #murano channel at 11:00 UTC on Monday.
The current bug state can be found on our Launchpad page:
https://bugs.launchpad.net/murano/+bugs

See you there!
Kate.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Spark plugin status

2014-01-10 Thread Sergey Lukjanov
Answers inlined.


On Fri, Jan 10, 2014 at 1:05 PM, Daniele Venzano daniele.venz...@eurecom.fr
 wrote:

 On 01/09/14 19:12, Matthew Farrellee wrote:

 This is definitely great news!

 +2 to the things Sergey mentioned below.

 Additionally, will you fill out the blueprint or wiki w/ details that
 will help others write integration tests for your plugin?


 We already implemented at least some part of the integration tests for
 Spark, mimicking the ones that are provided with the Vanilla plugin. The
 Spark plugin works almost exactly as the Vanilla one, it can install a
 datanode, namenode, Spark master or Spark worker and resize the cluster.
 What kind of documentation is needed?


[SL] Are you installing HDFS too? I think that some docs about how your
plugin works and about Spark's requirements would be great.





  And, did you integrate (or have plans to integrate) Spark into the EDP
 workflows in Horizon?


 We would like to have that functionality. Currently we are limited by the
 lack of a Swift service in our cluster. We will have one test installation
 in a short while and then we will see. What is the status of the HDFS
 datasource? We are very interested in that, but I lost track of the
 development during the holidays.


Is it possible to run Spark workloads using Oozie? Here is the external
HDFS support change request - https://review.openstack.org/#/c/47828/.






  On 01/09/2014 03:41 AM, Sergey Lukjanov wrote:

 Hi,

 I'm really glad to hear that!

 Answers inlined.

 Thanks.


 On Thu, Jan 9, 2014 at 11:33 AM, Daniele Venzano
 daniele.venz...@eurecom.fr wrote:

 Hello,

 we are finishing up the development of the Spark plugin for Savanna.
 In the next few days we will deploy it on an OpenStack cluster with
 real users to iron out the last few things. Hopefully next week we
 will put the code on a public github repository in beta status.

 [SL] Awesome! Could you please share some info about this installation if
 possible? Like the OpenStack cluster version and size, Savanna version,
 expected Spark cluster sizes and lifecycle, etc.


 You can find the blueprint here:
 https://blueprints.launchpad.net/savanna/+spec/spark-plugin

 There are two things we need to release, the VM image and the code
 itself.
 For the image we created one ourselves and for the code we used the
 Vanilla plugin as a base.

 [SL] You can use diskimage-builder [0] to prepare such images, we're
 already using it for building images for vanilla plugin [1].


 We feel that our work could be interesting for others and we would
 like to see it integrated in Savanna. What is the best way to
 proceed?

 [SL] Absolutely, it's a very interesting tool for data processing. IMO
 the best way is to create a change request to savanna for code review
 and discussion in gerrit, it'll be really the most effective way to
 collaborate. As for the best way of integration with Savanna - we're
 expecting to see it in the openstack/savanna repo like vanilla, HDP and
 IDH (which will be landed soon) plugins.


 We did not follow the Gerrit workflow until now because development
 happened internally.
 I will prepare the repo on github with git-review and reference the
 blueprint in the commit. After that, do you prefer that I send
 immediately the code for review or should I send a link here on the
 mailing list first for some feedback/discussion?

 [SL] It'll be better to immediately send the code for review.


 Thank you,
 Daniele Venzano, Hoang Do and Vo Thanh Phuc




 [0] https://github.com/openstack/diskimage-builder
 [1] https://github.com/openstack/savanna-image-elements

 Please, feel free to ping me if some help needed with gerrit or savanna
 internals stuff.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.













-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] where to expose network quota

2014-01-10 Thread Day, Phil


 -Original Message-
 From: Robert Collins [mailto:robe...@robertcollins.net]
 Sent: 10 January 2014 08:54
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] where to expose network quota
 
 On 8 January 2014 03:01, Christopher Yeoh cbky...@gmail.com wrote:
  On Mon, Jan 6, 2014 at 4:47 PM, Yaguang Tang
  yaguang.t...@canonical.com
 ...
 
  For the V3 API clients should access neutron directly for quota information.
  The V3 API will no longer proxy quota related information for neutron.
  Also novaclient will not get the quota information from neutron, but
  users should use neutronclient or python-openstackclient instead.
 
  The V3 API mode for novaclient will only be accessing Nova - with one
  big exception for querying glance so images can be specified by name.
  And longer term I think we need to think about how we share client
  code amongst clients because I think there will be more cases where
  it's useful to access other servers so things can be specified by name
  rather than UUID but we don't want to duplicate code in the clients.
 
 Also I think we shouldn't change v2 for this.
 
 -Rob
 
If you mean we shouldn't fix the V2 API to report Neutron quotas (rather than
that we shouldn't change the V2 API to remove network quotas) then I disagree -
currently the V2 API contains information on network quotas, and can be used on
systems configured for either nova-network or Neutron. It should provide the
same consistent information regardless of the network backend configured - so
it's a bug that the V2 API doesn't provide network quotas when using Neutron.

I know we want to deprecate the V2 API but it will still be around for a while 
- and in the meantime if people want to put the effort into working on bug 
fixes then that should still be allowed.

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option

2014-01-10 Thread Flavio Percoco

On 10/01/14 11:28 +0100, Thierry Carrez wrote:

Flavio Percoco wrote:

On 09/01/14 23:56 +0100, Julien Danjou wrote:

I also think projects should try to keep configuration options to a
minimum so operators are not completely lost. Opening the sample
nova.conf and seeing 696 options is not what I would call user friendly.

And also have working defaults. I know it's hard, but we should really
try to think about that sometimes.

Sorry to hijack the thread a bit.


IMHO, not a hijack!


It's a hijack, because it deserves a thread of its own, rather than be
lost in the last breaths of the configserver thread.


I didn't consider it a bad hijack because it's still relevant to
what was being discussed in the previous thread. Anyway, thanks for
creating a new one.


Adding a config option is a good way to avoid saying NO - you just say
yes, not by default and enabled with a config option instead. The
trouble begins when the sheer number of options make it difficult to
document, find and configure the right options, and the trouble
continues when you're unable to test the explosive matrix of option
combinations. Classifying options between basic and advanced are a way
to mitigate that, but not a magic bullet.


Agreed, differentiating the options sounds like a good idea. I don't
think the issue is just related to whether the option is
well-documented or not. Good documentation helps for sure but it's
still difficult to know all the options when there are so many.

One thing that should be considered is that we have to make sure the
default values are sane. By sane I mean they have to be good enough to
support rather big deployments. I've seen - and unfortunately I don't
have a reference to this because my memory sucks - default values that
are good just to 'get it running', which basically means that all
'serious' deployments will have to tweak that value. This is something
we should all keep in mind.


Personally I think we should (and we can) say NO more often. As we get
stronger as a dev community it becomes easier, and I think we see more
opinionated choices in younger projects. That said, it's just harder
for old projects which already have a lot of options to suddenly start
denying someone's feature instead of just adding another option...


I've seen NOs flying around, which is a good sign. I think one of the
issues right now is that we already have many configuration options in
some projects and we should try to shrink them, if possible.

I also think we should keep things as non-opinionated as possible.
Unless the new project is very specific, this is something it should
stick to.

Cheers,
FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option (was: [oslo.config] Centralized config management)

2014-01-10 Thread Mark McLoughlin
On Thu, 2014-01-09 at 16:34 -0800, Joe Gordon wrote:
 On Thu, Jan 9, 2014 at 3:01 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  On Thu, 2014-01-09 at 23:56 +0100, Julien Danjou wrote:
   On Thu, Jan 09 2014, Jay Pipes wrote:
  
Hope you don't mind, I'll jump in here :)
   
On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
Hi Jeremy
   
Don't you think it is burden for operators if we should choose correct
combination of config for multiple nodes even if we have chef and
puppet?
   
It's more of a burden for operators to have to configure OpenStack in
multiple ways.
  
   I also think projects should try to keep configuration options to a
   minimum so operators are not completely lost. Opening the sample
   nova.conf and seeing 696 options is not what I would call user friendly.
  
 
 
 
 There was talk a while back about marking different config options as basic
 and advanced (or something along those lines) to help make it easier for
 operators.

You might be thinking of this summit session I led:

  https://etherpad.openstack.org/p/grizzly-nova-config-options

My thinking was that we would first move config options into groups to make
it easier for operators to make sense of the available options, and then we
would classify them (e.g. as tuning, experimental, or debug) and
exclude some classifications from the sample config file.
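
A minimal sketch of that kind of grouping with oslo.config (the option names
here are invented for illustration):

    # Sketch only: registering options under a named group with oslo.config.
    from oslo.config import cfg

    CONF = cfg.CONF

    libvirt_group = cfg.OptGroup(name='libvirt', title='Libvirt driver options')
    libvirt_opts = [
        cfg.StrOpt('virt_type', default='kvm',
                   help='Virtualization type (kvm, qemu, ...)'),
        cfg.IntOpt('wait_soft_reboot_seconds', default=120,
                   help='Seconds to wait for a soft reboot before '
                        'falling back to a hard reboot'),
    ]

    CONF.register_group(libvirt_group)
    CONF.register_opts(libvirt_opts, group=libvirt_group)

The options then show up in the sample config under a [libvirt] section,
which is the kind of structure that makes a later basic/advanced (or
tuning/experimental/debug) classification much easier to apply.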

Sadly, I never even made good progress on Tedious Task 2 :: Group.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Ian Wells
On 10 January 2014 07:40, Jiang, Yunhong yunhong.ji...@intel.com wrote:

  Robert, sorry that I'm not a fan of the *your group* term. To me, *your
 group* mixes two things. It's an extra property provided by configuration,
 and also it's a very inflexible mechanism for selecting devices (you can only
 select devices based on the 'group name' property).


It is exactly that.  It's 0 new config items, 0 new APIs, just an extra tag
on the whitelists that are already there (although the proposal suggests
changing the name of them to be more descriptive of what they now do).  And
you talk about flexibility as if this changes frequently, but in fact the
grouping / aliasing of devices almost never changes after installation,
which is, not coincidentally, when the config on the compute nodes gets set
up.

  1)   A dynamic group is much better. For example, user may want to
 select GPU device based on vendor id, or based on vendor_id+device_id. In
 another word, user want to create group based on vendor_id, or
 vendor_id+device_id and select devices from these group.  John’s proposal
 is very good, to provide an API to create the PCI flavor(or alias). I
 prefer flavor because it’s more openstack style.

I disagree with this.  I agree that what you're saying offers more
flexibility after initial installation, but I have various issues with
it.

This is directly related to the hardware configuration on each compute
node.  For (some) other things of this nature, like provider networks, the
compute node is the only thing that knows what it has attached to it, and
it is the store (in configuration) of that information.  If I add a new
compute node then it's my responsibility to configure it correctly on
attachment, but when I add a compute node (when I'm setting the cluster up,
or sometime later on) then it's at that precise point that I know how I've
attached it and what hardware it's got on it.  Also, it's at this
point in time that I write out the configuration file (not by hand, note;
there's almost certainly automation when configuring hundreds of nodes so
arguments that 'if I'm writing hundreds of config files one will be wrong'
are moot).

I'm also not sure there's much reason to change the available devices
dynamically after that, since that's normally an activity that results from
changing the physical setup of the machine which implies that actually
you're going to have access to and be able to change the config as you do
it.  John did come up with one case where you might be trying to remove old
GPUs from circulation, but it's a very uncommon case that doesn't seem
worth coding for, and it's still achievable by changing the config and
restarting the compute processes.

This also reduces the autonomy of the compute node in favour of centralised
tracking, which goes against the 'distributed where possible' philosophy of
Openstack.

Finally, you're not actually removing configuration from the compute node.
You still have to configure a whitelist there; in the grouping design you
also have to configure grouping (flavouring) on the control node as well.
The groups proposal adds one extra piece of information to the whitelists
that are already there to mark groups, not a whole new set of config lines.


To compare scheduling behaviour:

If I  need 4G of RAM, each compute node has reported its summary of free
RAM to the scheduler.  I look for a compute node with 4G free, and filter
the list of compute nodes down.  This is a query on n records, n being the
number of compute nodes.  I schedule to the compute node, which then
confirms it does still have 4G free and runs the VM or rejects the request.

If I need 3 PCI devices and use the current system, each machine has
reported its device allocations to the scheduler.  With SRIOV multiplying
up the number of available devices, it's reporting back hundreds of records
per compute node to the schedulers, and the filtering activity is 3
queries on (n * number of PCI devices in the cloud) records, which could easily
end up in the tens or even hundreds of thousands of records for a
moderately sized cloud.  The compute node also has a record of its device
allocations, which is checked and updated before the final request is
run.

If I need 3 PCI devices and use the groups system, each machine has
reported its device *summary* to the scheduler.  With SRIOV multiplying up
the number of available devices, it's still reporting one or a small number
of categories, e.g. {net: 100}.  The difficulty of scheduling is a query
on (num groups * n) records - fewer, in fact, if some machines have no
passthrough devices.

You can see that there's quite a cost to be paid for having those flexible
alias APIs.
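
A toy sketch of the difference (the data shapes are invented for
illustration and are not the actual scheduler code):

    # Per-device reporting: one record per PCI device (or SRIOV VF) per host.
    per_device_records = [
        {'host': 'cn1', 'vendor_id': '8086', 'device_id': '10ed', 'status': 'free'},
        # ... potentially hundreds of rows per compute node ...
    ]

    # Group/summary reporting: one small summary per host.
    summaries = {
        'cn1': {'net': 100},
        'cn2': {'net': 0, 'gpu': 2},
    }

    def hosts_for_request(summaries, group, wanted):
        # Filter on one summary record per host instead of per-device rows.
        return [host for host, free in summaries.items()
                if free.get(group, 0) >= wanted]

    print(hosts_for_request(summaries, 'net', 3))  # ['cn1']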

 4)   IMHO, the core for nova PCI support is **PCI property**. The
 property means not only generic PCI devices like vendor id, device id,
 device type, compute specific property like BDF address, the adjacent
 switch IP address,  but also user defined property like 

Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-10 Thread Noorul Islam Kamal Malmiyoda
On Wed, Jan 8, 2014 at 11:20 PM, Noorul Islam Kamal Malmiyoda
noo...@noorul.com wrote:
 On Wed, Jan 8, 2014 at 11:02 PM, Sean Dague s...@dague.net wrote:
 On 01/08/2014 11:40 AM, Noorul Islam Kamal Malmiyoda wrote:

 On Jan 8, 2014 9:58 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com mailto:gokrokvertsk...@mirantis.com wrote:

 Hi,

 I do understand why there is push back on this patch. This patch is
 for an infrastructure project which works for multiple projects. Infra
 maintainers should not have to know the specifics of each project in
 detail. If this patch is a temporary solution, then who will be
 responsible for removing it?


 I am not sure who is responsible for solum-related configuration in the
 infra project. I see that almost all the infra config for the solum project
 is done by solum members. So I think any solum member can submit a patch
 to revert this once we have a permanent solution.

 If we need to start this gate, I propose reverting all patches which led
 to this inconsistent state and applying the workaround in the Solum
 repository, which is under the Solum team's full control and review. We
 need to open a bug in the Solum project to track this.


 The problematic patch [1] solves a specific problem. Do we have other
 ways to solve it?

 Regards,
 Noorul

 [1] https://review.openstack.org/#/c/64226

 Why is test-requirements.txt getting installed in pre_test instead of
 post_test? That installing test-requirements prior to installing devstack
 itself causes issues in no way surprises me. You can see that the
 command is literally the first thing in the console -
 http://logs.openstack.org/66/62466/7/gate/gate-solum-devstack-dsvm/49bac35/console.html#_2014-01-08_13_46_15_161

 It should be installed right before tests get run, which I assume is L34
 of this file -
 https://review.openstack.org/#/c/64226/3/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml

 Given that is where ./run_tests.sh is run.


 This might help, but run_tests.sh will import oslo.config anyhow. I
 need to test this and see.


Tested and this is working. Thank you Sean.

Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Ian Wells
In any case, we don't have to decide this now.  If we simply allowed the
whitelist to add extra arbitrary properties to the PCI record (like a group
name) and return it to the central server, we could use that in scheduling
for the minute as a group name, we wouldn't implement the APIs for flavors
yet, and we could get a working system that would be minimally changed from
what we already have.  We could worry about the scheduling in the
scheduling group, and we could leave the APIs (which, as I say, are a
minimally useful feature) until later.  Then we'd have something useful in
short order.
-- 
Ian.


On 10 January 2014 13:08, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 10 January 2014 07:40, Jiang, Yunhong yunhong.ji...@intel.com wrote:

  Robert, sorry that I’m not fan of * your group * term. To me, *your
 group” mixed two thing. It’s an extra property provided by configuration,
 and also it’s a very-not-flexible mechanism to select devices (you can only
 select devices based on the ‘group name’ property).


 It is exactly that.  It's 0 new config items, 0 new APIs, just an extra
 tag on the whitelists that are already there (although the proposal
 suggests changing the name of them to be more descriptive of what they now
 do).  And you talk about flexibility as if this changes frequently, but in
 fact the grouping / aliasing of devices almost never changes after
 installation, which is, not coincidentally, when the config on the compute
 nodes gets set up.

  1)   A dynamic group is much better. For example, user may want to
 select GPU device based on vendor id, or based on vendor_id+device_id. In
 another word, user want to create group based on vendor_id, or
 vendor_id+device_id and select devices from these group.  John’s proposal
 is very good, to provide an API to create the PCI flavor(or alias). I
 prefer flavor because it’s more openstack style.

 I disagree with this.  I agree that what you're saying offers more
 flexibility after initial installation, but I have various issues with
 it.

 This is directly related to the hardware configuration on each compute
 node.  For (some) other things of this nature, like provider networks, the
 compute node is the only thing that knows what it has attached to it, and
 it is the store (in configuration) of that information.  If I add a new
 compute node then it's my responsibility to configure it correctly on
 attachment, but when I add a compute node (when I'm setting the cluster up,
 or sometime later on) then it's at that precise point that I know how I've
 attached it and what hardware it's got on it.  Also, it's at this
 point in time that I write out the configuration file (not by hand, note;
 there's almost certainly automation when configuring hundreds of nodes so
 arguments that 'if I'm writing hundreds of config files one will be wrong'
 are moot).

 I'm also not sure there's much reason to change the available devices
 dynamically after that, since that's normally an activity that results from
 changing the physical setup of the machine which implies that actually
 you're going to have access to and be able to change the config as you do
 it.  John did come up with one case where you might be trying to remove old
 GPUs from circulation, but it's a very uncommon case that doesn't seem
 worth coding for, and it's still achievable by changing the config and
 restarting the compute processes.

 This also reduces the autonomy of the compute node in favour of
 centralised tracking, which goes against the 'distributed where possible'
 philosophy of Openstack.

 Finally, you're not actually removing configuration from the compute
 node.  You still have to configure a whitelist there; in the grouping
 design you also have to configure grouping (flavouring) on the control node
 as well.  The groups proposal adds one extra piece of information to the
 whitelists that are already there to mark groups, not a whole new set of
 config lines.


 To compare scheduling behaviour:

 If I  need 4G of RAM, each compute node has reported its summary of free
 RAM to the scheduler.  I look for a compute node with 4G free, and filter
 the list of compute nodes down.  This is a query on n records, n being the
 number of compute nodes.  I schedule to the compute node, which then
 confirms it does still have 4G free and runs the VM or rejects the request.

 If I need 3 PCI devices and use the current system, each machine has
 reported its device allocations to the scheduler.  With SRIOV multiplying
 up the number of available devices, it's reporting back hundreds of records
 per compute node to the schedulers, and the filtering activity is a 3
 queries on n * number of PCI devices in cloud records, which could easily
 end up in the tens or even hundreds of thousands of records for a
 moderately sized cloud.  There compute node also has a record of its device
 allocations which is also checked and updated before the final request is
 run.

 If I need 3 PCI 

[openstack-dev] [Glance][All] Pecan migration strategies

2014-01-10 Thread Flavio Percoco

Greetings,

More discussions around the adoption of Pecan.

I'd like to know what is the feeling of other folks about migrating
existing APIs to Pecan as opposed to waiting for a new API version as
an excuse to migrate the API implementation to Pecan?

We discussed this in one of the sessions at the summit, I'd like to
get a final consensus on what the desired migration path is for the
overall community.

IIRC, Cinder has a working version of the API with Pecan but there's
not a real motivation to release a new version of it that will use
the new implementation. Am I right?

Nova, instead, will start migrating some parts but not all of them and
it'll happen as part of the API v3. AFAIU.

Recently a new patch was proposed in glance [0] and it contains a base
implementation for the existing API v2. I love that patch and the fact
that Oleh Anufriiev is working on it. What worries me is that the
patch re-implements an existing API, and I don't think we should just
swap them.

Yes, we have tests (unit and functional) and that should be enough to
make sure the new implementation works like the old one - should it? -
but...

This most likely has to be evaluated on a per-project basis. But:

   - What are the thoughts of other folks on this matter?

Cheers,
FF

[0] https://review.openstack.org/#/c/62911/

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] - taskflow preventing sqla 0.8 upgrade

2014-01-10 Thread Sean Dague

On 01/10/2014 04:13 AM, Robert Collins wrote:

On 5 January 2014 02:02, Sean Dague s...@dague.net wrote:


So we used to do that: test the apps against released libraries. And the result
was more and more full-day gate breaks. We did 2 consecutive ones in 2 weeks.

Basically, once you get to a certain level of coupling in OpenStack we can
no longer let you manage your own requirements file. We need a global lever
on it, because people were doing it wrong, and slowly (we could go through
specific examples of how bad this was). This was a top issue at nearly
every summit I'd been at, going back to Essex.
..
(It was about 14 days to resolve the python client issue, there was a django
issue around the same time that never made it to the list, as we did it all
under fire in IRC)

And we have a solution now. Which is one list of requirements that we can
test everything with, that we can propose requirements updates
speculatively, and see what works and what doesn't. And *after* we know they
work, we propose the changes back into the projects, now automatically.


So the flip-flop thing is certainly very interesting. We wouldn't want
that to happen again.


I do see the issue Sean is pointing at, which is that we have to fix
the libraries first and then the things that use them. OTOH thats
normal in the software world, I don't see anything unique about it.



Well, as the person that normally gets stuck figuring this out when .eu has
been gate blocked for a day, and I'm one of the first people up on the east
coast, I find the normal state of affairs unsatisfying. :)


:)


I also think that what we are basically dealing with is the classical N^2
comms problem. With N git trees that we need to all get working together,
this gets exponentially more difficult over time. Which is why we created
the integrated gate and the global requirements lever.


I don't think we are - I think we're dealing with the fact that we've
had no signal - no backpressure - on projects that have upper caps set
to remove those caps. So they stick there and we all suffer.


Another solution would be reduce the number of OpenStack git trees to make
N^2 more manageable, and let us with single commits affect multiple
components. But that's not the direction we've taken.


I don't think thats necessary.

What I'd like to see is:
A) two test sets for every commit:
  - commit with latest-release of all deps
  - commit with latest-trunk [or dependent zuul ref] of all deps

B) *if* there are upper version caps for any reason, some signal back
to developers that this exists and that we need to fix our code to
work with that newer release.
   - Possibly we should allow major version caps where major releases
are anticipated to be incompatible without warning about that *until*
there is a [pre-]release of the new major version available


Honestly, I would too. I actually proposed just this at Summit. :) But 
it's a ton of work, and it's lacking volunteers right now. The earliest 
I'm going to look at it is Juno, but I'm not sure I can really commit to 
this one. And I expect this all by itself needs two or three people for 
whom this is a top priority.


So consider this a call for volunteers. If you want to be an OpenStack 
hero, here is a great way to do so.


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day

2014-01-10 Thread Sean Dague

On 01/10/2014 05:06 AM, Thierry Carrez wrote:

Jay Pipes wrote:

Personally, I think sooner is better. The severity of the disruption is
quite high, and action is needed ASAP.


Having the bug day organized shouldn't prevent people from working on
the most pressing issues and get the disruption under control ASAP...

I'm confident we'll be left with enough gate-wedging bugs to make for an
interesting gate blocking bug day at the end of the month :)


Agreed. And the more familiar people are with some of the issues up 
front, the more progress I expect we'll make that day.


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Aggregation discussion

2014-01-10 Thread Nadya Privalova
Hi team,

I've decided to move the discussion about aggregation to the mailing list.
Here is a description of my idea, and I really need your comments.

*Idea:*
The goal is to improve performance when a user gets statistics for a meter. Now
we have a fixed list of statistics (min, max and so on). During a request a
user may specify the following params:
1. query
2. group_by
3. period

The idea of the bp is to pre-calculate some kinds of requests and store the
results in a separate table in the database.
The pre-calculated statistics are called aggregates. Aggregates may be
merged with each other and with any Statistics objects.
Note that aggregates will be transparent to users. No changes in the API are
required for get_statistics.

Example:
Let's assume we have 6 Samples for the 'image' meter. All of them belong to
one day (e.g. 1st May) but happened at different times:
11.50, 12.25, 12.50, 13.25, 13.50 and 14.25.  A user would like to get
statistics for this meter from start = 11.30 till end = 14.30, so we need
to process all samples.
But we may process these samples earlier and already have pre-calculated
results for the full hours 12.00 and 13.00. In this case we only get Samples
11.50 and 14.25 from the meters table and merge their statistics with the
already calculated Statistics results from the aggregates table.
This example saves only 2 reads from the DB. But if we consider metrics from
pollsters with interval = 5 sec (720 Samples per hour) we will save 719
reads per hour with aggregate usage.
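
As a rough illustration of the merge step (plain Python sketch; the dict
layout here is made up for the example and is not the proposed HBase schema):

    def make_stats(volumes):
        # Build a stats record from raw sample volumes.
        return {'min': min(volumes), 'max': max(volumes),
                'sum': sum(volumes), 'count': len(volumes)}

    def merge_stats(a, b):
        # Merge two stats records; avg is derived from sum/count.
        merged = {'min': min(a['min'], b['min']),
                  'max': max(a['max'], b['max']),
                  'sum': a['sum'] + b['sum'],
                  'count': a['count'] + b['count']}
        merged['avg'] = merged['sum'] / float(merged['count'])
        return merged

    # Raw edge samples (11.50 and 14.25) read from the meters table...
    edge = make_stats([1.0, 1.0])
    # ...merged with the pre-calculated aggregates for hours 12.00 and 13.00.
    hour_12 = {'min': 1.0, 'max': 1.0, 'sum': 2.0, 'count': 2}
    hour_13 = {'min': 1.0, 'max': 1.0, 'sum': 2.0, 'count': 2}
    result = merge_stats(merge_stats(edge, hour_12), hour_13)
    # result: min=1.0, max=1.0, sum=6.0, count=6, avg=1.0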

*Limitations: *
Of course we cannot aggregate data for all periods, group_by's and queries.
But we may allow the user to configure which queries and group_by's he or she
is interested in. For instance, it may be useful for a UI where we show a
graph with statistics for each hour.  I think that the period should not be
configurable; the period may be only hour or day.

Example of entries in db:
 image_9223372035472681807  column=H:avg,
timestamp=1389255460712, value=1.0   (I will not copy all columns. The list
of columns is [column=H:min, column=H:max, column=H:sum and so on])
Example of filtered_aggregates in db (filter is image by project):
image_project_8c62fb0cd16c41498245095761b1a263_9223372035472681807
column=H:avg, timestamp=1389255460712, value=1.0


More details here: https://etherpad.openstack.org/p/ceilometer-aggregation
Draft implementation for HBase is here:
https://review.openstack.org/#/c/65681/1

Thanks for your attention,
Nadya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack on Fedora 20

2014-01-10 Thread Dean Troyer
On Thu, Jan 9, 2014 at 10:27 PM, Adam Young ayo...@redhat.com wrote:

  Tried wiping out the (installed) python-greenlet rpm and re-running, and
 that was not installed afterwards, either.  I am guessing that the package
 install step is getting skipped somehow, after the first run.


That sounds like you need to remove the .prereqs file to force the
install_prereqs.sh script to run again...normally this lets you skip
running through all of the package checking on subsequent stack.sh
runs...but if you change the package config within the time window (default
is 2 hours) it gets missed until the end of the window.  Setting
FORCE_PREREQ=1 in local.conf will turn off the whole mechanism.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option

2014-01-10 Thread Dean Troyer
On Fri, Jan 10, 2014 at 5:16 AM, Flavio Percoco fla...@redhat.com wrote:

 On 10/01/14 11:28 +0100, Thierry Carrez wrote:

 Personally I think we should (and we can) say NO more often. As we get

 stronger as a dev community it becomes easier, and I think we see more
 opinionated choices in younger projects. That said, it's just harder
 for old projects which already have a lot of options to suddenly start
 denying someone's feature instead of just adding another option...


+1


 I've seen NOs flying around, which is a good sign. I think one of the
 issues right now is that we already have many configuration options in
 some projects and we should try to shrink them, if possible.


+1 or maybe +1000 after looking at how to test all of those options...

The trend of attempting to accommodate all possible options in
configuration has led us down the slippery slope of complexity that is
extremely hard to recover from.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [OpenStack][Sentry]

2014-01-10 Thread Soren Hansen
I've not read the blueprint yet, but I think we'll need another name
for it. I'm sure lots of us are running this Sentry in production:

https://github.com/getsentry/sentry



Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/


2014/1/10 Anastasia Latynskaya alatynsk...@mirantis.com:
 Hello, OpenStack folks,

 we have one more idea how to improve our wonderful OpenStack =) We've made a
 new concept named Sentry for host security attestation. And we need your
 review and comments, please.

 There is a link
 https://blueprints.launchpad.net/sentry/+spec/sentry-general-architecture


 Thanks!

 --
 Anastasia Latynskaya
 Junior Software Engineer
 Mirantis, Inc.

 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-10 Thread Ryan Petrello
Georgy,

Pecan hook functions (http://pecan.readthedocs.org/en/latest/hooks.html) are 
passed a `state` argument, which has a couple of attributes you can make use 
of.  Starting at the `before` hook, you have access to `state.controller`, 
which is the @pecan.expose() decorated controller/function that pecan 
discovered in its routing algorithm (if any):

class MyHook(pecan.hooks.PecanHook):

def before(self, state):
assert isinstance(state.request, webob.Request)
assert state.controller.__func__ is MyController.index.__func__  # for 
examples’ sake, to illustrate the *type* of the controller attribute.  This 
could be False, depending on the URL path :)

Important to note is that `state.controller` will be `None` in the `on_route` 
hook, because the routing of the path to controller hasn’t actually happened at 
that point.
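
For the policy question quoted below, here is a very rough sketch (not Solum's 
actual policy wiring; check_policy and the key naming are just placeholders for 
whatever rule engine gets plugged in) of a hook built on state.controller:

    import pecan
    from pecan import hooks

    def check_policy(action, request):
        # Placeholder: call into whatever policy engine the project uses
        # (e.g. the oslo/Keystone policy rules discussed below).
        return True

    class PolicyHook(hooks.PecanHook):

        def before(self, state):
            controller = state.controller
            if controller is None:
                return  # nothing routed yet (cf. on_route above)
            # Build a policy key from the routed controller and method,
            # e.g. "mycontroller:index" -- the naming is purely illustrative.
            action = '%s:%s' % (
                controller.__self__.__class__.__name__.lower(),
                controller.__name__)
            if not check_policy(action, state.request):
                pecan.abort(403)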

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Jan 9, 2014, at 6:44 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi Ryan,
 
 Thank you for sharing your view on SecureController. It is always good to 
 hear info from the developers who are deeply familiar with the code base.
 
 I like the idea with hooks. If we go down this path, we will need to have 
 information about the method of a particular controller which will be called if 
 authorization is successful. In the current keystone implementation this is done 
 by a wrapper which knows the actual method name it wraps. This allows one to 
 write simple rules for specific methods like  "identity:get_policy": 
 "rule:admin_required",
 
 Do you know if, inside the hook code, there is a way to obtain information 
 about the route and method which will be called after the hook?
 
 Thanks
 Georgy
 
 
 On Thu, Jan 9, 2014 at 2:48 PM, Ryan Petrello ryan.petre...@dreamhost.com 
 wrote:
 As a Pecan developer, I’ll chime in and say that I’m actually *not* a fan of 
 SecureController and its metaclass approach.  Maybe it’s just too magical for 
 my taste.  I’d give a big thumbs up to an approach that involves utilizing 
 pecan’s hooks.  Similar to Kurt’s suggestion with middleware, they give you 
 the opportunity to hook in security *before* the controller call, but they 
 avoid the nastiness of parsing the WSGI environ by hand and writing code that 
 duplicates pecan’s route-to-controller resolution.
 
 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com
 
 On Jan 9, 2014, at 3:04 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:
 
  Hi Adam,
 
  This looks very interesting. When do you expect to have this code available 
  in oslo? Do you have a development guide which describes best practices for 
  using this authorization approach?
 
  I think that for Pecan it will be possible to get rid of @protected wrapper 
  and use SecureController class as a parent. It has a method which will be 
  called before each controller method call. I saw Pecan was moved to 
  stackforge, so probably it is a good idea to talk with Pecan developers and 
  discuss how this part of keystone can be integrated / supported by the Pecan 
  framework.
 
 
  On Wed, Jan 8, 2014 at 8:34 PM, Adam Young ayo...@redhat.com wrote:
  We are working on cleaning up the Keystone code with an eye to Oslo and 
  reuse:
 
  https://review.openstack.org/#/c/56333/
 
 
  On 01/08/2014 02:47 PM, Georgy Okrokvertskhov wrote:
  Hi,
 
   Keeping policy control in one place is a good idea. We can use the standard 
   policy approach and keep the access control configuration in a json file as is 
   done in Nova and other projects.
   Keystone uses a wrapper function for methods. Here is the wrapper code: 
   https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L111.
    Each controller method has a @protected() wrapper, so the method information 
   is available through python f.__name__ instead of URL parsing. It means 
   that some RBAC parts are anyway scattered among the code.
 
   If we want to avoid RBAC being scattered among the code we can use the URL 
   parsing approach and have all the logic inside a hook. In a pecan hook the WSGI 
   environment is already created and there is full access to request 
   parameters/content. We can map a URL to a policy key.
 
  So we have two options:
  1. Add wrapper to each API method like all other project did
  2. Add a hook with URL parsing which maps path to policy key.
 
 
  Thanks
  Georgy
 
 
 
  On Wed, Jan 8, 2014 at 9:05 AM, Kurt Griffiths 
  kurt.griffi...@rackspace.com wrote:
  Yeah, that could work. The main thing is to try and keep policy control in 
  one place if you can rather than sprinkling it all over the place.
 
  From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
  Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
  Date: Wednesday, January 8, 2014 at 10:41 AM
 
  To: OpenStack Dev openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan 
  SecureController vs. Nova policy
 
  Hi Kurt,
 
  As for WSGI 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread John Garbutt
Apologies for this top post, I just want to move this discussion towards action.

I am traveling next week so it is unlikely that I can make the meetings. Sorry.

Can we please agree on some concrete actions, and who will do the coding?
This also means raising new blueprints for each item of work.
I am happy to review and eventually approve those blueprints, if you
email me directly.

Ideas are taken from what we started to agree on, mostly written up here:
https://wiki.openstack.org/wiki/Meetings/Passthrough#Definitions


What doesn't need doing...


We have PCI whitelist and PCI alias at the moment, let's keep those
names the same for now.
I personally prefer PCI-flavor, rather than PCI-alias, but let's
discuss any rename separately.

We seemed happy with the current system (roughly) around GPU passthrough:
nova flavor-key three_GPU_attached_30GB set
"pci_passthrough:alias"="large_GPU:1,small_GPU:2"
nova boot --image some_image --flavor three_GPU_attached_30GB some_name

Again, we seemed happy with the current PCI whitelist.

Sure, we could optimise the scheduling, but again, please keep that a
separate discussion.
Something in the scheduler needs to know how many of each PCI alias
are available on each host.
How that information gets there can be change at a later date.

PCI alias is in config, but it's probably better defined using host
aggregates, or some custom API.
But let's leave that for now, and discuss it separately.
If the need arises, we can migrate away from the config.


What does need doing...
==

1) API  CLI changes for nic-type, and associated tempest tests

* Add a user visible nic-type so users can express one of several
network types.
* We need a default nic-type, for when the user doesn't specify one
(might default to SRIOV in some cases)
* We can easily test the case where the default is virtual and the
user expresses a preference for virtual
* Above is much better than not testing it at all.

nova boot --flavor m1.large --image image_id
  --nic net-id=net-id-1
  --nic net-id=net-id-2,nic-type=fast
  --nic net-id=net-id-3,nic-type=fast vm-name

or

neutron port-create
  --fixed-ip subnet_id=subnet-id,ip_address=192.168.57.101
  --nic-type=slow | fast | foobar
  net-id
nova boot --flavor m1.large --image image_id --nic port-id=port-id

Where nic-type is just an extra bit of metadata (a string) that is passed to
nova and the VIF driver.


2) Expand PCI alias information

We need extensions to PCI alias so we can group SRIOV devices better.

I still think we are yet to agree on a format, but I would suggest
this as a starting point:

{
 "name": "GPU_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
  {"vendor_id": "1137", "product_id": "0072", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {}
}

{
 "name": "NIC_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "0:[1-50]:2:*",
   "attach-type": "macvtap"},
  {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {
  "nic_type": "fast",
  "network_ids": ["net-id-1", "net-id-2"]
 }
}

{
 "name": "NIC_slower",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
  {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {
  "nic_type": "fast",
  "network_ids": ["*"]  # this means it could attach to any network
 }
}

The idea being the VIF driver gets passed this info, when network_info
includes a nic that matches.
Any other details, like VLAN id, would come from neutron, and be passed
to the VIF driver as normal.


3) Reading nic_type and doing the PCI passthrough of NIC user requests

Not sure we are agreed on this, but basically:
* network_info contains nic-type from neutron
* need to select the correct VIF driver
* need to pass matching PCI alias information to VIF driver
* neutron passes details other details (like VLAN id) as before
* nova gives VIF driver an API that allows it to attach PCI devices
that are in the whitelist to the VM being configured
* with all this, the VIF driver can do what it needs to do
* let's keep it simple, and expand it as the need arises (rough sketch below)
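
As a purely illustrative sketch of (3) -- every name below is hypothetical,
none of this is an existing nova or neutron API:

    class SriovVifDriver(object):
        # Hypothetical VIF driver selected when the port's nic-type asks
        # for a passthrough/SR-IOV attachment.

        def plug(self, instance, vif, pci_allocator):
            # vif comes from network_info and carries the nic-type plus
            # the usual neutron details (VLAN id, mac address, etc.).
            nic_type = vif.get('nic_type', 'virtual')
            alias = vif.get('pci_alias')   # matching alias info from (2)
            if nic_type == 'fast' and alias:
                # pci_allocator stands in for the nova-provided API that
                # hands out whitelisted devices to the VM being configured.
                device = pci_allocator.claim(alias['name'])
                self._attach_device(instance, device,
                                    attach_type=device.get('attach-type'))
            else:
                self._plug_virtual(instance, vif)

        def _attach_device(self, instance, device, attach_type):
            pass  # direct / macvtap attach would happen here

        def _plug_virtual(self, instance, vif):
            pass  # fall back to the normal virtual switch plumbing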

4) Make changes to VIF drivers, so the above is implemented

Depends on (3)



These seem like some good steps to get the basics in place for PCI
passthrough networking.
Once it's working, we can review it and see if there are things that
need to evolve further.

Does that seem like a workable approach?
Who is willing to implement any of (1), (2) and (3)?


Cheers,
John


On 9 January 2014 17:47, Ian Wells ijw.ubu...@cack.org.uk wrote:
 I think I'm in agreement with all of this.  Nice summary, Robert.

 It may not be where the work ends, but if we could get this done the rest is
 just refinement.


 On 9 January 2014 17:49, Robert Li (baoli) ba...@cisco.com wrote:

 Hi Folks,


 With John joining the IRC, so far, we had a couple of productive meetings
 in an effort to come to consensus and move forward. Thanks John for doing
 that, and I appreciate everyone's effort to make it to the daily 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-10 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2014-01-09 12:21:05 -0700:
 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi folks
 
  Thank you for your input.
 
  The key difference from external configuration system (Chef, puppet
  etc) is integration with
  openstack services.
  There are cases where a process should know the config value on other hosts.
  If we could have centralized config storage api, we can solve this issue.
 
  One example of such a case is neutron + nova vif parameter configuration
  regarding security groups.
  The workflow is something like this.
 
  nova asks the neutron server for vif configuration information.
  The neutron server asks for configuration from the neutron l2-agent on the
  same host as nova-compute.
 
 
 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.
 

That is where I think my resistance to such a change starts. If Nova and
Neutron need to share a value, they should just do that via their API's.
There is no need for a config server in the middle. If it is networking
related, it lives in Neutron's configs, and if it is compute related,
Nova's configs.

Is there any example where values need to be in sync but are not
sharable via normal API chatter?

 Running a configuration service also introduces what could be a single
 point of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.
 

Configuration shouldn't ever have a rapid pattern of change, so even if
this service existed I'd suggest that it would be used just like current
config management solutions: scrape values out, write to config files.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-10 Thread Dan Smith
 If an object A contains another object or object list (called a 
 sub-object), any change that happened in the sub-object can't be detected 
 by obj_what_changed() in object A.

Well, like the Instance object does, you can override obj_what_changed()
to expose that fact to the caller. However, I think it might be good to
expand the base class to check, for any NovaObject fields, for the
obj_what_changed() of the child.
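
Something along these lines (rough sketch only, not the actual patch --
written as a subclass override the way Instance does it today):

    from nova.objects import base

    class MyObject(base.NovaObject):
        # fields would include one or more nested object fields

        def obj_what_changed(self):
            changed = super(MyObject, self).obj_what_changed()
            for name in self.fields:
                if not self.obj_attr_is_set(name):
                    continue
                value = getattr(self, name)
                # A child NovaObject with unsaved changes of its own marks
                # the parent field as changed too.
                if (isinstance(value, base.NovaObject) and
                        value.obj_what_changed()):
                    changed.add(name)
            return changed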

How does that sound?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Reminder: Meeting today at 1500 UTC

2014-01-10 Thread Sylvain Bauza
Hi folks,

Please keep in mind that our weekly meeting changed its timeslot from
Mondays to Fridays 1500 UTC.

#openstack-meeting should be available at this time, booking it.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-10 Thread Jaromir Coufal

Hi everybody,

there is a first stab at the Deployment Management section with future 
direction (note that it was discussed as the scope for Icehouse).


I tried to add functionality over time and break it down into steps. This 
will help us to focus on one functionality at a time, and if we are under 
time pressure for the Icehouse release, we can cut off the last steps.


Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf

Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please 
focus on our initial steps the most (which doesn't mean that we should 
neglect the direction).


Every feedback is very welcome, thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Building a new open source NFV system for Neutron

2014-01-10 Thread Luke Gorrie
Howdy Stackers!

We are developing a new open source Network Functions Virtualization
driver for Neutron. I am writing to you now to ask for early advice
that could help us to smoothly bring this work upstream into OpenStack
Juno.

The background is that we are open source developers working to
satisfy the NFV requirements of large service provider networks
including Deutsche Telekom's TeraStream project [1] [2]. We are
developing a complete NFV stack for this purpose: from the DPDK-like
traffic plane all the way up to the Neutron ML2 driver.

We are developing against Havana, we attended the Icehouse summit and
had a lot of great discussions in Hong Kong, and our ambition is to
start bringing running code upstream into Juno.

Our work is 100% open source and we want to work in the open with the
wider OpenStack community. Currently we are in heads-down hacking
mode on the core functionality, but it would be wonderful to connect
with the upstream communities who we hope to be working with more in
the future (that's you guys).

More details on Github:
https://github.com/SnabbCo/snabbswitch/tree/snabbnfv-readme/src/designs/nfv

Thanks for reading!

Cheers,
-Luke

[1] Ivan Pepelnjak on TeraStream:
http://blog.ipspace.net/2013/11/deutsche-telekom-terastream-designed.html
[2] Peter Löthberg's presentation on TeraStream at RIPE 67:
https://ripe67.ripe.net/archives/video/3/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-10 Thread Walls, Jeffrey Joel (Cloud OS RD)
Jarda,

I love how this is progressing.  It will be very nice once it's implemented!

The iconography seems to be inconsistent.  The ! triangle is used for error 
conditions and warning conditions; and the x hexagon is also used for error 
conditions.   

For the Roles usage, will the user be able to scroll back and see more than the 
last month's usage?

For configuration, will it be possible to supply default values to these and 
let the user change them only if they want to?  For some values it's probably 
not possible, but for others it will be.  The fewer things the user has to 
enter the better.

Jeff

-Original Message-
From: Jaromir Coufal [mailto:jcou...@redhat.com] 
Sent: Friday, January 10, 2014 7:58 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - 
Wireframes

Hi everybody,

there is first stab of Deployment Management section with future direction 
(note that it was discussed as a scope for Icehouse).

I tried to add functionality in time and break it down to steps. This will help 
us to focus on one functionality at a time and if we will be in time pressure 
for Icehouse release, we can cut off last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf

Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please focus on 
our initial steps the most (which doesn't mean that we should neglect the 
direction).

Every feedback is very welcome, thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-10 Thread Doug Hellmann
On Thu, Jan 9, 2014 at 12:02 AM, Jamie Lennox jamielen...@redhat.comwrote:

 Is there any way to have WSME pass through arbitrary attributes to the
 created object? There is nothing that i can see in the documentation or
 code that would seem to support this.

 In keystone we have the situation where arbitrary data was able to be
 attached to our resources. For example there are a certain number of
 predefined attributes for a user including name, email but if you want to
 include an address you just add an 'address': 'value' to the resource
 creation and it will be saved and returned to you when you request the
 resource.

 Ignoring whether this is a good idea or not (it's done), is the option
 there that i missed - or is there any plans/way to support something like
 this?


There's a change in WSME trunk (I don't think we've released it yet) that
allows the schema for a type to be changed after the class is defined.
There isn't any facility for allowing the caller to pass arbitrary data,
though. Part of the point of WSME is to define the inputs and outputs of
the API for validation.

How are the arbitrary values being stored in keystone? What sorts of things
can be done with them? Can an API caller query them, for example?

Doug




 Thanks,

 Jamie

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re-using Horizon bits in OpenDaylight

2014-01-10 Thread Endre Karlson
Hello everyone.

I would like to know if anyone here has knowledge of how easy it is to use
Horizon for something other than OpenStack things?

I'm the starter of the dlux project that aims to consume the OpenDaylight
SDN controller Northbound REST APIs instead of the integrated UI it has
now. Though the current PoC is done using AngularJS, I ran into issues such as
how to make it easy for third-party things that are not core to plug their
things into the app, which I know can be done using panels and the like in
Horizon.

So the question boils down to, can I easily re-use Horizon for ODL?

Endre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Jay Dobies

Thanks for the feedback  :)


= Stack =
There is a single stack in Tuskar, the overcloud.

A small nit here: in the long term Tuskar will support multiple overclouds.


Yes, absolutely. I should have added "For Icehouse" like I did in other 
places. Good catch.



There's few pieces of concepts which I think is missing from the list:
- overclouds: after Heat successfully created the stack, Tuskar needs to
keep track whether it applied the post configuration steps (Keystone
initialization, registering services, etc) or not. It also needs to know
the name of the stack (only 1 stack named 'overcloud' for Icehouse).


I assumed this sort of thing was captured by the resource status, though 
I'm far from a Heat expert. Is it not enough to assume that if the 
resource started successfully, all of that took place?



- service endpoints of an overcloud: eg. Tuskar-ui in the undercloud
will need the url of the overcloud Horizon. The overcloud Keystone owns
the information about this (after post configuration is done) and Heat
owns the information about the overcloud Keystone.



- user credentials for an overcloud: it will be used by Heat during
stack creation, by Tuskar during post configuration, by Tuskar-ui
querying various information (eg. running vms on a node) and finally by
the user logging in to the overcloud Horizon. Now it can be found in the
Tuskar-ui settings file [1].


Both of these are really good points that I haven't seen discussed yet. 
The wireframes cover the allocation of nodes and displaying basic 
details of what's created (even that is still placeholder) but not much 
beyond that.


I'd like to break that into a separate thread. I'm not saying it's 
unrelated, but since it's not even wireframed out I'd like to have a 
dedicated discussion about what it might look like. I'll start that 
thread up as soon as I collect my thoughts.



Imre

[1]
https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa]API testing update

2014-01-10 Thread Miguel Lavalle
Sukhdev,

Thanks for your comment. Eugene summarized very well the reason I didn't
specify any testing dealing with the ml2 plugin. It

Cheers


On Thu, Jan 9, 2014 at 11:50 PM, Eugene Nikanorov
enikano...@mirantis.comwrote:

 Sukhdev,

 API tests are really not for end-to-end testing; also, tempest tests (both
 API and scenario) should not make any
 assumptions about neutron configuration (e.g. ml2 mechanism drivers).
 End-to-end testing for particular ml2 drivers seems to fit in 3rd party
 testing
 where you can run additional tests which are configuration-specific.

 Thanks,
 Eugene.


 On Wed, Jan 8, 2014 at 4:36 AM, Sukhdev Kapur sukhdevka...@gmail.comwrote:

 Hi Miguel,

 As I am using neutron API tempest tests, I notice that in the create_port
 tests, the port context is set partially - i.e. only network Id is
 available.
 ML2 drivers expect more in formation in the port context in order to test
 the API on the back-ends.

 I noticed such an enhancement is not listed in the etherpad.
 This is really not a new test, but, enhancement of the test coverage to
 allow third party ML2 drivers to perform end-to-end API testing.

 If you like, I will be happy to update the etherpad to include this
 information.

 regards..
 -Sukhdev




 On Mon, Jan 6, 2014 at 10:37 AM, Miguel Lavalle mig...@mlavalle.comwrote:

 As described in a previous message, the community is focusing efforts in
 developing a comprehensive set of API tests in Tempest for Neutron. We are
 keeping track of this effort in the API tests gap analysis section at
 https://etherpad.openstack.org/p/icehouse-summit-qa-neutron

 These are recent developments in this regard:

 1) The gap analysis is complete as of January 5th. The analysis takes
 into consideration what already exists in Tempest and what is in the Gerrit
 review process
 2) Soon there is going to be a generative (i.e. non manual) tool to
 create negative tests in Tempest. As a consequence, all negative tests
 specifications were removed from the gap analysis described in the previous
 point

 If you are interested in helping in this effort, please go to the
 etherpad indicated above and select from the API tests gap analysis
 section the tests you want to contribute. Please put your name and email
 address next to the selected tests. Also, when your code merges, please
 come back to the etherpad and update it indicating that your test is done.

 If you are new to OpenStack, Neutron or Tempest, implementing tests is
 an excellent way to learn an API. We have put together the following guide
 to help you get started
 https://wiki.openstack.org/wiki/Neutron/TempestAPITests



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Jay Dobies

As much as the Tuskar Chassis model is lacking compared to the Tuskar
Rack model, the opposite problem exists for each project's model of
Node. In Tuskar, the Node model is pretty bare and useless, whereas
Ironic's Node model is much richer.


Thanks for looking that deeply into it :)


So, it's not as simple as it may initially seem :)


Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.


That was my understanding as well. The existing Tuskar domain model was 
largely placeholder/proof of concept and didn't necessarily reflect 
exactly what was desired/expected.



Mainn


Best,
-jay

[1]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py
[2]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py#L83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re-using Horizon bits in OpenDaylight

2014-01-10 Thread Walls, Jeffrey Joel (Cloud OS RD)
I have used the Horizon framework for an application other than the OpenStack 
Dashboard and it worked really well.  There is an effort to create a separation 
between the Horizon framework and the OpenStack Dashboard and once that happens 
it will be even easier.  How difficult it is will depend on exactly 
what you're trying to do and how well its UI metaphor matches that of the 
OpenStack Dashboard.

Jeff

From: Endre Karlson [mailto:endre.karl...@gmail.com]
Sent: Friday, January 10, 2014 8:20 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Re-using Horizon bits in OpenDaylight

Hello everyone.

I would like to know if anyone here has knowledge on how easy it is to use 
Horizon for something else then OpenStack things?

I'm the starter of the dlux project that aims to consume the OpenDaylight SDN 
controller Northbound REST APIs instead of the integrated UI it has now. Though 
the current PoC is done using AngularJS i came into issues like how we make it 
easy for third part things that are not core to plugin it's things into the app 
which I know that can be done using panels and alike in Horizon.

So the question boils down to, can I easily re-use Horizon for ODL?

Endre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [climate] Meeting minutes

2014-01-10 Thread Dina Belova
Thanks to everyone who was at our Climate weekly meeting.
Meeting minutes:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-10-15.00.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-10-15.00.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-10-15.00.log.html


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Alan Kavanagh
+1 PCI Flavor.

From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: January-10-14 1:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

BTW, I like the PCI flavor :)

From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Thursday, January 09, 2014 10:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Hi, Ian, when you say you are in agreement with all of this, do you agree with the 'group 
name', or with John's pci flavor?
I'm against the PCI group and will send out a reply later.

--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Thursday, January 09, 2014 9:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

I think I'm in agreement with all of this.  Nice summary, Robert.
It may not be where the work ends, but if we could get this done the rest is 
just refinement.

On 9 January 2014 17:49, Robert Li (baoli) 
ba...@cisco.commailto:ba...@cisco.com wrote:
Hi Folks,

With John joining the IRC, so far, we had a couple of productive meetings in an 
effort to come to consensus and move forward. Thanks John for doing that, and I 
appreciate everyone's effort to make it to the daily meeting. Let's reconvene 
on Monday.

But before that, and based on today's conversation on IRC, I'd like to say 
a few things. I think that first of all, we need to get agreement on the 
terminology that we have been using so far. With the current nova PCI passthrough:

PCI whitelist: defines all the available PCI passthrough devices on a 
compute node. pci_passthrough_whitelist=[{ 
vendor_id:,product_id:}]
PCI Alias: criteria defined on the controller node with which requested 
PCI passthrough devices can be selected from all the PCI passthrough devices 
available in a cloud.
Currently it has the following format: 
pci_alias={vendor_id:, product_id:, name:str}

nova flavor extra_specs: request for PCI passthrough devices can be 
specified with extra_specs in the format for 
example:pci_passthrough:alias=name:count

As you can see, currently a PCI alias has a name and is defined on the 
controller. The implication is that when matching it against the PCI 
devices, it has to match the vendor_id and product_id against all the available 
PCI devices until one is found. The name is only used for reference in the 
extra_specs. On the other hand, the whitelist is basically the same as the 
alias without a name.

What we have discussed so far is based on something called PCI groups (or PCI 
flavors as Yongli puts it). Without introducing other complexities, and with a 
little change of the above representation, we will have something like:

pci_passthrough_whitelist=[{ vendor_id:,product_id:, 
name:str}]

By doing so, we eliminated the PCI alias. And we call the name above a 
PCI group name. You can think of it as combining the definitions of the 
existing whitelist and PCI alias. And believe it or not, a PCI group is 
actually a PCI alias. However, with that change of thinking, a lot of benefits 
can be harvested:

 * the implementation is significantly simplified
 * provisioning is simplified by eliminating the PCI alias
 * a compute node only needs to report stats with something like: PCI 
group name:count. A compute node processes all the PCI passthrough devices 
against the whitelist, and assign a PCI group based on the whitelist definition.
 * on the controller, we may only need to define the PCI group names. 
if we use a nova api to define PCI groups (could be private or public, for 
example), one potential benefit, among other things (validation, etc),  they 
can be owned by the tenant that creates them. And thus a wholesale of PCI 
passthrough devices is also possible.
 * scheduler only works with PCI group names.
 * request for PCI passthrough device is based on PCI-group
 * deployers can provision the cloud based on the PCI groups
 * Particularly for SRIOV, deployers can design SRIOV PCI groups based 
on network connectivities.

Further, to support SRIOV, we are saying that PCI group names can not only be 
used in the extra specs, but can also be used in the --nic option and the neutron 
commands. This allows the most flexibility and functionality afforded by 
SRIOV.

Further, we are saying that we can define default PCI groups based on the PCI 
device's class.

For vnic-type (or nic-type), we are saying that it defines the link 
characteristics of the nic that is attached to a VM: a nic that's connected to 
a virtual switch, a nic that is connected to a physical switch, or a nic that 
is connected to a physical switch, but has a host macvtap device in between. 
The actual 

Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Imre Farkas

On 01/10/2014 04:27 PM, Jay Dobies wrote:

Thanks for the feedback  :)


= Stack =
There is a single stack in Tuskar, the overcloud.

A small nit here: in the long term Tuskar will support multiple
overclouds.


Yes, absolutely. I should have added For Icehouse like I did in other
places. Good catch.


There's few pieces of concepts which I think is missing from the list:
- overclouds: after Heat successfully created the stack, Tuskar needs to
keep track whether it applied the post configuration steps (Keystone
initialization, registering services, etc) or not. It also needs to know
the name of the stack (only 1 stack named 'overcloud' for Icehouse).


I assumed this sort of thing was captured by the resource status, though
I'm far from a Heat expert. Is it not enough to assume that if the
resource started successfully, all of that took place?



I am also far from a Heat expert, I just had some really hard times 
when I previously expected my Tuskar-deployed overcloud to be 
ready to use. :-)


In short, having the resources started is not enough; Heat stack-create 
is only a part of the deployment story. There were a few emails on the 
mailing list about this:

http://lists.openstack.org/pipermail/openstack-dev/2013-December/022217.html
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022887.html

There was also a discussion during the last TripleO meeting in December, 
check the topic 'After heat stack-create init operations (lsmola)'
http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html 




- service endpoints of an overcloud: eg. Tuskar-ui in the undercloud
will need the url of the overcloud Horizon. The overcloud Keystone owns
the information about this (after post configuration is done) and Heat
owns the information about the overcloud Keystone.



- user credentials for an overcloud: it will be used by Heat during
stack creation, by Tuskar during post configuration, by Tuskar-ui
querying various information (eg. running vms on a node) and finally by
the user logging in to the overcloud Horizon. Now it can be found in the
Tuskar-ui settings file [1].


Both of these are really good points that I haven't seen discussed yet.
The wireframes cover the allocation of nodes and displaying basic
details of what's created (even that is still placeholder) but not much
beyond that.

I'd like to break that into a separate thread. I'm not saying it's
unrelated, but since it's not even wireframed out I'd like to have a
dedicated discussion about what it might look like. I'll start that
thread up as soon as I collect my thoughts.



Fair point, sorry about that. I haven't seen the latest wireframes, I 
had a few expectations based on the previous version.



Imre

[1]
https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread James Slagle
On Fri, Jan 10, 2014 at 10:27 AM, Jay Dobies jason.dob...@redhat.com wrote:
 There's few pieces of concepts which I think is missing from the list:
 - overclouds: after Heat successfully created the stack, Tuskar needs to
 keep track whether it applied the post configuration steps (Keystone
 initialization, registering services, etc) or not. It also needs to know
 the name of the stack (only 1 stack named 'overcloud' for Icehouse).


 I assumed this sort of thing was captured by the resource status, though I'm
 far from a Heat expert. Is it not enough to assume that if the resource
 started successfully, all of that took place?

Not currently.  Those steps are done separately from a different host
after Heat reports the stack as completed and running.  In the Tuskar
model, that host would be the undercloud.  Tuskar would have to know
what steps to run to do the post configuration/setup of the overcloud.

I believe It would be possible to instead automate that so that it
happens as part of the os-refresh-config cycle that runs scripts at
boot time in an image.  At the end of the initial os-refresh-config
run there is a callback to Heat to indicate success.  So, if we did
that, the Overcloud would basically configure itself then callback to
Heat to indicate it all worked.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-10 Thread Jay Dobies

Thanks for recording this. A few questions:

- I'm guessing the capacity metrics will come from Ceilometer. Will 
Ceilometer provide the averages for the role or is that calculated by 
Tuskar?


- When on the change deployments screen, after making a change but not 
yet applying it, how are the projected capacity changes calculated?


- For editing a role, does it make a new image with the changes to what 
services are deployed each time it's saved?


- When a role is edited, if it has existing nodes deployed with the old 
version, are they automatically/immediately updated? If not, how do we 
reflect that there's a difference between how the role is currently 
configured and the nodes that were previously created from it?


- I don't see any indication that the role scaling process is taking 
place. That's a potentially medium/long running operation, we should 
have some sort of way to inform the user it's running and if any errors 
took place.


That last point is a bit of a concern for me. I like the simplicity of 
what the UI presents, but the nature of what we're doing doesn't really 
fit with that. I can click the count button to add 20 nodes in a few 
seconds, but the execution of that is a long running, asynchronous 
operation. We have no means of reflecting that it's running, nor finding 
any feedback on it as it runs or completes.


Related question. If I have 20 instances and I press the button to scale 
it out to 50, if I immediately return to the My Deployment screen what 
do I see? 20, 50, or the current count as they are stood up?


It could all be written off as a future feature, but I think we should 
at least start to account for it in the wireframes. The initial user 
experience could be off putting if it's hard to discern the difference 
between what I told the UI to do and when it's actually finished being done.


It's also likely to influence the ultimate design as we figure out who 
keeps track of the running operations and their results (for both simple 
display purposes to the user and auditing reasons).



On 01/10/2014 09:58 AM, Jaromir Coufal wrote:

Hi everybody,

there is first stab of Deployment Management section with future
direction (note that it was discussed as a scope for Icehouse).

I tried to add functionality in time and break it down to steps. This
will help us to focus on one functionality at a time and if we will be
in time pressure for Icehouse release, we can cut off last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf


Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please
focus on our initial steps the most (which doesn't mean that we should
neglect the direction).

Every feedback is very welcome, thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread James Slagle
On Fri, Jan 10, 2014 at 11:01 AM, Imre Farkas ifar...@redhat.com wrote:
 On 01/10/2014 04:27 PM, Jay Dobies wrote:
 There's few pieces of concepts which I think is missing from the list:
 - overclouds: after Heat successfully created the stack, Tuskar needs to
 keep track whether it applied the post configuration steps (Keystone
 initialization, registering services, etc) or not. It also needs to know
 the name of the stack (only 1 stack named 'overcloud' for Icehouse).


 I assumed this sort of thing was captured by the resource status, though
 I'm far from a Heat expert. Is it not enough to assume that if the
 resource started successfully, all of that took place?


 I am also far from a Heat expert, I just had a some really hard times when I
 previously expected from my Tuskar deployed overcloud that it's ready to
 use. :-)

 In short, having the resources started is not enough, Heat stack-create is
 only a part of the deployment story. There was a few emails on the mailing
 list about this:
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022217.html
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022887.html

 There was also a discussion during the last TripleO meeting in December,
 check the topic 'After heat stack-create init operations (lsmola)'
 http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html

Thanks for posting the links :) Very helpful.  There are some really
good points there in the irc log about *not* doing what I suggested
with the local machine os-refresh-config scripts :).

So, I think it's likely that Tuskar will need to orchestrate this
setup in some fashion.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-10 Thread Murray, Paul (HP Cloud Services)
Sounds good to me. The list base objects don't have methods to make changes to 
the list - so it would be a case of iterating over the list, looking at each 
object. That would be ok. 

Do we need the contents of the lists to be modified without assigning a new 
list? - that would need a little more work to allow the changes and to track 
them there too.

Paul.

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: 10 January 2014 14:42
To: Wang, Shane; OpenStack Development Mailing List (not for usage questions)
Cc: Murray, Paul (HP Cloud Services); Lee, Alexis; Tan, Lin
Subject: Re: [Nova] Detect changes in object model

 If an object A contains another object or object list (called 
 sub-object), any change happened in the sub-object can't be detected 
 by obj_what_changed() in object A.

Well, like the Instance object does, you can override obj_what_changed() to 
expose that fact to the caller. However, I think it might be good to expand the 
base class to check, for any NovaObject fields, for the
obj_what_changed() of the child.

How does that sound?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-10 Thread Dan Smith
 Sounds good to me. The list base objects don't have methods to make changes 
 to the list - so it would be a case of iterating looking at each object in 
 the list. That would be ok. 

Hmm? You mean for NovaObjects that are lists? I hesitate to expose lists
as changed when one of the objects inside has changed because I think
that sends the wrong message. However, I think it makes sense to have a
different method on lists for "are any of your contents changed?"

I'll cook up a patch to implement what I'm talking about so you can take
a look.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-10 Thread Dan Prince
One more idea related to real packages in TripleO. While I still think 
using packages is totally cool, we may want to make an exception for 
systemd/upstart scripts. We have some non-standard ordering in our TripleO init 
scripts that is meaningful, and blindly switching to a distro-specific version 
would almost certainly cause issues.

Dan

- Original Message -
 From: James Slagle james.sla...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, January 8, 2014 10:03:39 AM
 Subject: Re: [openstack-dev] [TripleO] Installing from packages in
 tripleo-image-elements
 
 On Tue, Jan 7, 2014 at 11:20 PM, Robert Collins
 robe...@robertcollins.net wrote:
  On 8 January 2014 12:18, James Slagle james.sla...@gmail.com wrote:
  Sure, the crux of the problem was likely that versions in the distro
  were too old and they needed to be updated.  But unless we take on
  building the whole OS from source/git/whatever every time, we're
  always going to have that issue.  So, an additional benefit of
  packages is that you can install a known good version of an OpenStack
  component that is known to work with the versions of dependent
  software you already have installed.
 
  The problem is that OpenStack is building against newer stuff than is
  in distros, so folk building on a packaging toolchain are going to
  often be in catchup mode - I think we need to anticipate package based
  environments running against releases rather than CD.
 
 I just don't see anyone not building on a packaging toolchain, given
 that we're all running the distro of our choice and pip/virtualenv/etc
 are installed from distro packages.  Trying to isolate the building of
 components with pip installed virtualenvs was still a problem.  Short
 of uninstalling the build tools packages from the cloud image and then
 wget'ing the pip tarball, I don't think there would have been a good
 way around this particular problem.  Which, that approach may
 certainly make some sense for a CD scenario.
 
 Agreed that packages against releases makes sense.
 
 --
 -- James Slagle
 --
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Building a new open source NFV system for Neutron

2014-01-10 Thread Michael Bright
Hi Luke,

Very pleased to see this initiative in the OpenStack/NFV space.

A dumb question - how do you see this related to the ongoing
 [openstack-dev] [nova] [neutron] PCI pass-through network support

discussion on this list?

Do you see that work as one component within your proposed architecture,
for example, or as an alternative implementation?

Regards,
Mike.

SDN/NFV Solution Architect




On 10 January 2014 16:11, Luke Gorrie l...@snabb.co wrote:

 Howdy Stackers!

 We are developing a new open source Network Functions Virtualization
 driver for Neutron. I am writing to you now to ask for early advice
 that could help us to smoothly bring this work upstream into OpenStack
 Juno.

 The background is that we are open source developers working to
 satisfy the NFV requirements of large service provider networks
 including Deutsche Telekom's TeraStream project [1] [2]. We are
 developing a complete NFV stack for this purpose: from the DPDK-like
 traffic plane all the way up to the Neutron ML2 driver.

 We are developing against Havana, we attended the Icehouse summit and
 had a lot of great discussions in Hong Kong, and our ambition is to
 start bringing running code upstream into Juno.

 Our work is 100% open source and we want to work in the open with the
 wider OpenStack community. Currently we are in heads-down hacking
 mode on the core functionality, but it would be wonderful to connect
 with the upstream communities who we hope to be working with more in
 the future (that's you guys).

 More details on Github:
 https://github.com/SnabbCo/snabbswitch/tree/snabbnfv-readme/src/designs/nfv

 Thanks for reading!

 Cheers,
 -Luke

 [1] Ivan Pepelnjak on TeraStream:
 http://blog.ipspace.net/2013/11/deutsche-telekom-terastream-designed.html
 [2] Peter Löthberg's presentation on TeraStream at RIPE 67:
 https://ripe67.ripe.net/archives/video/3/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option (was: [oslo.config] Centralized config management)

2014-01-10 Thread Joe Gordon
On Fri, Jan 10, 2014 at 4:01 AM, Mark McLoughlin mar...@redhat.com wrote:

 On Thu, 2014-01-09 at 16:34 -0800, Joe Gordon wrote:
  On Thu, Jan 9, 2014 at 3:01 PM, Jay Pipes jaypi...@gmail.com wrote:
 
   On Thu, 2014-01-09 at 23:56 +0100, Julien Danjou wrote:
On Thu, Jan 09 2014, Jay Pipes wrote:
   
 Hope you don't mind, I'll jump in here :)

 On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
 Hi Jeremy

 Don't you think it is burden for operators if we should choose
 correct
 combination of config for multiple nodes even if we have chef and
 puppet?

 It's more of a burden for operators to have to configure OpenStack
 in
 multiple ways.
   
I also think projects should try to minimize configuration options to
their minimum so operators are not completely lost. Opening the sample
nova.conf and seeing 696 options is not what I would call user
friendly.
   
  
 
 
  There was talk a while back about marking different config options as
 basic
  and advanced (or something along those lines) to help make it easier for
  operators.

 You might be thinking of this session summit I led:

   https://etherpad.openstack.org/p/grizzly-nova-config-options

 My thinking was we first move config options into groups to make it
 easier for operators to make sense of the available options and then we
 would classify them (as e.g. tuning, experimental, debug) and
 exclude some classifications from the sample config file.

 Sadly, I never even made good progress on Tedious Task 2 :: Group.



That is exactly what I was thinking of.
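
As a concrete illustration of that grouping idea, oslo.config already lets
options be registered under named groups; the group and option names below are
invented purely for the example:

    from oslo.config import cfg

    tuning_group = cfg.OptGroup(name='tuning',
                                title='Advanced tuning options')
    tuning_opts = [
        cfg.IntOpt('workers', default=4,
                   help='Number of worker processes'),
        cfg.BoolOpt('experimental_cache', default=False,
                    help='Experimental; could be excluded from a basic sample'),
    ]

    conf = cfg.ConfigOpts()
    conf.register_group(tuning_group)
    conf.register_opts(tuning_opts, group=tuning_group)
    conf([])  # parse an empty command line

    print(conf.tuning.workers)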



 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Partially Shared Networks

2014-01-10 Thread CARVER, PAUL
If anyone is giving any thought to networks that are available to multiple 
tenants (controlled by a configurable list of tenants) but not visible to all 
tenants, I'd like to hear about it.

I'm especially thinking of scenarios where specific networks exist outside of 
OpenStack and have specific purposes and rules for who can deploy servers on 
them. We'd like to enable the use of OpenStack to deploy to these sorts of 
networks, but we can't do that with the current "shared or not shared" binary 
choice.

--
Paul Carver

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Icehouse roadmap and Graduation tracking

2014-01-10 Thread Kurt Griffiths
Hi folks, I put together a tracking blueprint for us to refer to in our team 
meetings:

https://blueprints.launchpad.net/marconi/+spec/graduation

Also, here is an outline of what I want to accomplish for Icehouse:

https://wiki.openstack.org/wiki/Marconi/roadmaps/icehouse

Feedback is welcome, as always.

Cheers,
@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack on Fedora 20

2014-01-10 Thread Adam Young

That worked.  I incorporated the

FORCE_PREREQ=1

change and all good.


On 01/10/2014 04:54 AM, Flavio Percoco wrote:

On 09/01/14 23:27 -0500, Adam Young wrote:

On 01/09/2014 04:58 PM, Sean Dague wrote:

   On 01/09/2014 04:12 PM, Dean Troyer wrote:

   On Thu, Jan 9, 2014 at 2:16 PM, Adam Young ayo...@redhat.com
   mailto:ayo...@redhat.com wrote:

    That didn't seem to make a difference, still no cache.  The RPMS are
    not getting installed, even if I deliberately add a line for
    python-dogpile-cache
    Shouldn't it get installed via pip without the rpm line?


    Yes pip should install it based on requirements.txt.  I just tried this
    and see the install in /opt/stack/logs/stack.sh.log and then see the
    import fail later.  God I love pip.  And there it is...xslt-config isn't
    present so a whole batch of installs fails.

   Add this to files/rpms/keystone:

   libxslt-devel   # dist:f20

    There are some additional tweaks that I'll ask Flavio to add
    to https://review.openstack.org/63647 as it needs at least one more
    patch set anyway.

    So that would be the work around if the rpm doesn't work, but I really
    think we should use the rpm instead. I'm sort of confused that it didn't
    get picked up.

   Got more detailed output Adam?

   -Sean



 ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Let me see...

I erased the two rpms and reran unstack then ./stack.sh
seems to have worked.  I have a running Keystone.  Let me try again on a
virgin machine.

Nope...with that patch applied, I have no python-lxml or python-dogpile-cache
rpms.

Tried wiping out the (installed) python-greenlet rpm and re-running, and that
was not installed afterwards, either.  I am guessing that the package install
step is getting skipped somehow, after the first run.


I just updated the patch with the latest feedback from this thread.
Could you give it a try again?

Cheers,
FF



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Robert Li (baoli)
Hi Yunhong,

I appreciate your comments. Please see inline…

--Robert

On 1/10/14 1:40 AM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

Robert, sorry that I’m not fan of * your group * term. To me, *your group” 
mixed two thing. It’s an extra property provided by configuration, and also 
it’s a very-not-flexible mechanism to select devices (you can only select 
devices based on the ‘group name’ property).


1)   A dynamic group is much better. For example, user may want to select 
GPU device based on vendor id, or based on vendor_id+device_id. In another 
word, user want to create group based on vendor_id, or vendor_id+device_id and 
select devices from these group.  John’s proposal is very good, to provide an 
API to create the PCI flavor(or alias). I prefer flavor because it’s more 
openstack style.



I'm not sure what you mean by a dynamic group. But a PCI group can be 
dynamically created on the controller. The whitelist definition allows the 
grouping based on vendor_id or vendor_id + product_id, etc. The name of PCI 
group makes more sense in terms of SRIOV, but the name of PCI flavor may make 
more sense for GPU because a user may want something from a specific vendor as 
you have indicated.

So far, our discussion has been largely based on the infrastructure that is 
currently existing in nova, or largely confined within the existing PCI 
passthrough implemenation. If my understanding is correct, then devices 
belonging to different aliases shouldn't overlap. Otherwise, the stats 
accounting would become useless. So the question is do we allow overlapping of 
devices that can be classified into different aliases at the same time. If the 
answer is yes, then some fundamental change would be required.

Talking about the flexibility you mentioned earlier, let me try to describe 
this if I understand you correctly:
 -- whitelist defines devices available in a compute node. The 
collection of them determines all the devices available in a cloud.
 -- At any time, PCI groups (or PCI flavors) can be defined on the 
controller that defines criteria (in terms of vendor_id, product_id, bdf, etc) 
to locate a particular device.

I don't think it's a bad idea. But would it require the controller to manage 
all the PCI devices available in the cloud? And/or how would stats be managed 
per PCI flavor? Can we clearly define how to enable this maximum flexibility? 
It's certainly not there today.


2)   As for the second thing of your 'group', I'd understand it as an extra 
property provided by configuration.  I don't think we should put it into the 
whitelist, which is to configure devices that are assignable.  I'd add another 
configuration option to provide an extra attribute to devices. When nova compute 
is up, it will parse this configuration and add the attributes to the corresponding PCI 
devices. I don't think adding another configuration option will cause too much trouble 
for deployment. OpenStack already has a lot of configuration items :)

Not sure how exactly it's going to be done. But the patches that Yongli has 
posted seem to be adding the pci-flavor into the whitelist. We are just trying 
to see the pci-flavor from a different angle (as posted in this thread), and that 
would make things a lot different.




3)   I think currently we mix up the neutron and nova design. To me, Neutron 
SRIOV support is a user of nova PCI support. Thus we should first analyze 
the requirements from neutron PCI support on nova PCI support in a more generic 
way, and then we can discuss how we enhance the nova PCI support, or, if you 
want, re-design the nova PCI support. IMHO, if we don't consider networking, the 
current implementation should be ok.



I don't see that we are trying to mix the design. But I agree that we should 
provide SRIOV requirements, which we have already discussed in our previous 
threads. Let me try it here, and folks, please add yours if I'm missing 
anything:
   1. A SRIOV device can be used as a NIC to be attached to a VM (or 
domain). This implies that a PCI passthrough device is recognized as an SRIOV 
device and corresponding networking handling as required by the domain is 
performed to attach it to the VM as a NIC.
   2. A SRIOV device should be selected to be attached to a VM based on 
the VM's network connectivity.
   3. If a VM has multiple SRIOV NICs, it should be possible to locate 
the SRIOV device assigned to the corresponding NIC.
   4. A SRIOV-capable compute node may not be used as a host for VMs 
that don't require SRIOV capability.
   5. Specifically, as required by 2 & 3, pci-flavor (or pci-alias, or 
pci-group, whatever it's called) should be allowed in --nic and neutron commands.

When exploring the existing nova PCI passthrough, we figured out how to meet 
those requirements, and as a result we started the conversation. SRIOV 
requirements would certainly influence the overall PCI passthrough 

Re: [openstack-dev] [oslo] Common SSH

2014-01-10 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 2:31 PM, Sergey Skripnick sskripn...@mirantis.comwrote:





  On Wed, Jan 8, 2014 at 10:43 AM, Eric Windisch ewindi...@docker.com
 wrote:








 About spur: spur is looks ok, but it a bit complicated inside (it uses

 separate threads for non-blocking stdin/stderr reading [1]) and I don't

 know how it would work with eventlet.


 That does sound like it might cause issues. What would we need to do to
 test it?





 Looking at the code, I don't expect it to be an issue. The
 monkey-patching will cause eventlet.spawn to be called for
 threading.Thread. The code looks eventlet-friendly enough on the surface.
 Error handing around file read/write could be affected, but it also looks
 fine.


 Thanks for that analysis Eric.

 Is there any reason for us to prefer one approach over the other, then?

 Doug


 So, there is only one reason left -- oslo lib is more simple and
 lightweight
 (not using threads). Anyway this class is used by stackforge/rally and
 may be used by other projects instead of buggy oslo.processutils.ssh.


I appreciate that we want to fix the ssh client. I'm not certain that
writing our own is the best answer.

In his comments on your pull request, the paramiko author recommended
looking at Fabric. I know that Fabric has a long history in production.
Does it provide the required features?

Doug







 --
 Regards,
 Sergey Skripnick


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Robert Li (baoli)
Hi Yongli,

Please also see my response to Yunhong. Here, I just want to add a comment 
about your local versus global argument. I took a brief look at your patches, 
and the PCI-flavor is added into the whitelist. The compute node needs to know 
these pci-flavors in order to report PCI stats based on them. Please correct me 
if I'm wrong.

Another comment is that a compute node doesn't need to consult with the 
controller, but its report or registration of resources may be rejected by the 
controller due to non-existing PCI groups.

thanks,
Robert

On 1/10/14 2:11 AM, yongli he yongli...@intel.com wrote:

On 2014-01-10 00:49, Robert Li (baoli) wrote:
Hi Folks,
Hi all,

Basically I favor the pci-flavor style and am against messing with the whitelist. 
Please see my inline comments.



With John joining the IRC, so far, we had a couple of productive meetings in an 
effort to come to consensus and move forward. Thanks John for doing that, and I 
appreciate everyone's effort to make it to the daily meeting. Let's reconvene 
on Monday.

But before that, and based on our today's conversation on IRC, I'd like to say 
a few things. I think that first of all, we need to get agreement on the 
terminologies that we are using so far. With the current nova PCI passthrough

PCI whitelist: defines all the available PCI passthrough devices on a 
compute node. pci_passthrough_whitelist=[{ 
vendor_id:,product_id:}]
PCI Alias: criteria defined on the controller node with which requested 
PCI passthrough devices can be selected from all the PCI passthrough devices 
available in a cloud.
Currently it has the following format: 
pci_alias={vendor_id:, product_id:, name:str}

nova flavor extra_specs: request for PCI passthrough devices can be 
specified with extra_specs in the format for 
example:pci_passthrough:alias=name:count
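
(For a concrete illustration of that last item, setting such an extra_spec via
python-novaclient looks roughly like the sketch below; the credentials, flavor
name and alias name are placeholders.)

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')
    flavor = nova.flavors.find(name='m1.large.gpu')
    # Request one device matching the 'large_GPU' PCI alias for this flavor.
    flavor.set_keys({'pci_passthrough:alias': 'large_GPU:1'})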

As you can see, currently a PCI alias has a name and is defined on the 
controller. The implications for it is that when matching it against the PCI 
devices, it has to match the vendor_id and product_id against all the available 
PCI devices until one is found. The name is only used for reference in the 
extra_specs. On the other hand, the whitelist is basically the same as the 
alias without a name.

What we have discussed so far is based on something called PCI groups (or PCI 
flavors as Yongli puts it). Without introducing other complexities, and with a 
little change of the above representation, we will have something like:

pci_passthrough_whitelist=[{ vendor_id:,product_id:, 
name:str}]

By doing so, we eliminated the PCI alias. And we call the name in above as a 
PCI group name. You can think of it as combining the definitions of the 
existing whitelist and PCI alias. And believe it or not, a PCI group is 
actually a PCI alias. However, with that change of thinking, a lot of
the whitelist configuration is mostly local to a host, so only local things 
should be addressed in there; on that point John's proposal is good. Mixing the group 
into the whitelist means we make a global thing per-host style, and this is maybe wrong.

benefits can be harvested:

 * the implementation is significantly simplified
but it is more of a mess -- refer to my new patches already sent out.
 * provisioning is simplified by eliminating the PCI alias
A PCI alias provides a good way to define a globally referenceable name for PCI devices; we 
need this, and this is also true for John's pci-flavor.
 * a compute node only needs to report stats with something like: PCI 
group name:count. A compute node processes all the PCI passthrough devices 
against the whitelist, and assign a PCI group based on the whitelist definition.
Simplifying this seems good, but it does not actually simplify; separating the local 
and the global is the natural simplification.
 * on the controller, we may only need to define the PCI group names. 
if we use a nova api to define PCI groups (could be private or public, for 
example), one potential benefit, among other things (validation, etc),  they 
can be owned by the tenant that creates them. And thus a wholesale of PCI 
passthrough devices is also possible.
This means you would have to consult the controller to deploy your host; if we keep the 
whitelist local, we simplify the deployment.
 * scheduler only works with PCI group names.
 * request for PCI passthrough device is based on PCI-group
 * deployers can provision the cloud based on the PCI groups
 * Particularly for SRIOV, deployers can design SRIOV PCI groups based 
on network connectivities.

Further, to support SRIOV, we are saying that PCI group names can not only be 
used in the extra specs, they can also be used in the --nic option and the neutron 
commands. This allows the most flexibility and functionality afforded by 
SRIOV.
I still feel that using the alias/pci flavor is the better solution.

Further, we are saying that we can define default PCI groups based on the PCI 
device's class.
default 

Re: [openstack-dev] [oslo] Common SSH

2014-01-10 Thread Sergey Skripnick


I appreciate that we want to fix the ssh client. I'm not certain that  
writing our own is the best answer.


I was supposed to fix oslo.processutils.ssh with this class, but it may
be fixed without it, not big deal.




In his comments on your pull request, the paramiko author recommended  
looking at Fabric. I know that Fabric has a long history in production.  
Does it provide the required features?




Fabric is too much for just command execution on a remote server. Spur seems
like a good choice for this.

But I still don't understand: why do we need oslo.processutils.execute? We
can use the subprocess module. Why do we need oslo.processutils.ssh_execute? We can
use paramiko instead.
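
For reference, running a remote command directly with paramiko is roughly the
sketch below; the host, user and key path are placeholders, and real code would
want timeouts and error handling:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('10.0.0.5', username='stack', key_filename='/path/to/key')

    stdin, stdout, stderr = client.exec_command('uname -a')
    print(stdout.read())
    exit_status = stdout.channel.recv_exit_status()
    client.close()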


--
Regards,
Sergey Skripnick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-10 Thread Dougal Matthews

Hi,

Thanks for the wireframes and the walkthrough. Very useful. I've a few 
comments.


- I'd like to echo the comments from the recording about "Role". I think 
the term probably isn't specific enough, but I don't have a great 
suggestion. However, this is probably suited better to the other thread.


- We will have a number of long processes, for example, when a deploy or 
re-size is happening. How do we keep the user informed of the progress 
and errors? I don't see anything in the wireframes, but maybe there is a 
Horizon standard approach I'm less familiar with. For example, I have 50 
compute nodes, then I add 10 but I want to know how many are ready etc.


- If I remove some instances, do I as the administrator need to care 
which are removed? Do we need to choose or be informed at the end?




On 10/01/14 14:58, Jaromir Coufal wrote:

Hi everybody,

there is first stab of Deployment Management section with future
direction (note that it was discussed as a scope for Icehouse).

I tried to add functionality in time and break it down to steps. This
will help us to focus on one functionality at a time and if we will be
in time pressure for Icehouse release, we can cut off last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf


Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please
focus on our initial steps the most (which doesn't mean that we should
neglect the direction).

Every feedback is very welcome, thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-10 Thread Doug Hellmann
On Fri, Jan 10, 2014 at 12:54 PM, Sergey Skripnick
sskripn...@mirantis.comwrote:


  I appreciate that we want to fix the ssh client. I'm not certain that
 writing our own is the best answer.


 I was supposed to fix oslo.processutils.ssh with this class, but it may
 be fixed without it, not big deal.




 In his comments on your pull request, the paramiko author recommended
 looking at Fabric. I know that Fabric has a long history in production.
 Does it provide the required features?


 Fabric is too much for just command execution on remote server. Spur seems
 like
 good choice for this.


 But I still don't understand: why do we need oslo.processutils.execute? We
 can use
 subprocess module. Why do we need oslo.processutils.ssh_execute? We can
 use paramiko
 instead.


Well, as you've shown, having a wrapper around subprocess to deal with the
I/O properly is useful, especially commands that produce a lot of it. :-)

As far as ssh_execute goes, I don't know the origin but I imagine the
author didn't know about paramiko.

Doug






 --
 Regards,
 Sergey Skripnick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Building a new open source NFV system for Neutron

2014-01-10 Thread Luke Gorrie
Hi Mike,

On 10 January 2014 17:35, Michael Bright mjbrigh...@gmail.com wrote:

 Very pleased to see this initiative in the OpenStack/NFV space.

Glad to hear it!

 A dumb question - how do you see this related to the ongoing
  [openstack-dev] [nova] [neutron] PCI pass-through network support

 discussion on this list?

 Do you see that work as one component within your proposed architecture for
 example or an alternative implementation?

Good question. I'd like to answer separately about the underlying
technology on the one hand and the OpenStack API on the other.

The underlying technology of SR-IOV and IOMMU hardware capabilities
are the same in PCI pass-through and Snabb NFV. The difference is that
we introduce a very thin layer of software over the top that preserves
the basic zero-copy operation while adding a Virtio-net abstraction
towards the VM, packet filtering, tunneling, and policing (to start
off with). The design goal is to add quite a bit of functionality with
only a modest processing cost.

The OpenStack API question is more open. How should we best map our
functionality onto Neutron APIs? This is something we need to thrash
out together with the community. Our current best guess - which surely
needs much revision, and is not based on the PCI pass-through
blueprint - is here:
https://github.com/SnabbCo/snabbswitch/tree/snabbnfv-readme/src/designs/nfv#neutron-configuration

Cheers,
-Luke

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-10 Thread Sergey Skripnick







 On Fri, Jan 10, 2014 at 12:54 PM, Sergey Skripnick
 sskripn...@mirantis.com wrote:

   I appreciate that we want to fix the ssh client. I'm not certain that
   writing our own is the best answer.

  I was supposed to fix oslo.processutils.ssh with this class, but it may
  be fixed without it, not big deal.

   In his comments on your pull request, the paramiko author recommended
   looking at Fabric. I know that Fabric has a long history in production.
   Does it provide the required features?

  Fabric is too much for just command execution on remote server. Spur
  seems like good choice for this.

  But I still don't understand: why do we need oslo.processutils.execute?
  We can use subprocess module. Why do we need oslo.processutils.ssh_execute?
  We can use paramiko instead.

 Well, as you've shown, having a wrapper around subprocess to deal with
 the I/O properly is useful, especially commands that produce a lot of
 it. :-)

 As far as ssh_execute goes, I don't know the origin but I imagine the
 author didn't know about paramiko.

 Doug



ssh_execute is using paramiko :)



--
Regards,
Sergey Skripnick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Jiang, Yunhong
Ian, thanks for your reply. Please check my responses prefixed with 'yjiang5'.

--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, January 10, 2014 4:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

On 10 January 2014 07:40, Jiang, Yunhong yunhong.ji...@intel.com wrote:
Robert, sorry that I'm not a fan of the *your group* term. To me, *your group* 
mixes two things. It's an extra property provided by configuration, and it's also 
a very-not-flexible mechanism to select devices (you can only select 
devices based on the 'group name' property).

It is exactly that.  It's 0 new config items, 0 new APIs, just an extra tag on 
the whitelists that are already there (although the proposal suggests changing 
the name of them to be more descriptive of what they now do).  And you talk 
about flexibility as if this changes frequently, but in fact the grouping / 
aliasing of devices almost never changes after installation, which is, not 
coincidentally, when the config on the compute nodes gets set up.

1)   A dynamic group is much better. For example, user may want to select 
GPU device based on vendor id, or based on vendor_id+device_id. In another 
word, user want to create group based on vendor_id, or vendor_id+device_id and 
select devices from these group.  John's proposal is very good, to provide an 
API to create the PCI flavor(or alias). I prefer flavor because it's more 
openstack style.
I disagree with this.  I agree that what you're saying offers more 
flexibility after initial installation, but I have various issues with it.
[yjiang5] I think what you are talking about is mostly the whitelist, not the PCI 
flavor. The PCI flavor is more about the PCI request, like 'I want to have a device with 
vendor_id = cisco, device_id = 15454E', or 'vendor_id=intel device_class=nic' 
(because the image has the driver for all Intel NIC cards :) ). The 
whitelist, on the other hand, is to decide which devices are assignable on a host.


This is directly related to the hardware configuation on each compute node.  
For (some) other things of this nature, like provider networks, the compute 
node is the only thing that knows what it has attached to it, and it is the 
store (in configuration) of that information.  If I add a new compute node then 
it's my responsibility to configure it correctly on attachment, but when I add 
a compute node (when I'm setting the cluster up, or sometime later on) then 
it's at that precise point that I know how I've attached it and what hardware 
it's got on it.  Also, it's at this that point in time that I write out the 
configuration file (not by hand, note; there's almost certainly automation when 
configuring hundreds of nodes so arguments that 'if I'm writing hundreds of 
config files one will be wrong' are moot).

I'm also not sure there's much reason to change the available devices 
dynamically after that, since that's normally an activity that results from 
changing the physical setup of the machine which implies that actually you're 
going to have access to and be able to change the config as you do it.  John 
did come up with one case where you might be trying to remove old GPUs from 
circulation, but it's a very uncommon case that doesn't seem worth coding for, 
and it's still achievable by changing the config and restarting the compute 
processes.
[yjiang5] I totally agree with you that the whitelist is statically defined at 
provisioning time. I just want to separate the 'provider network' information into 
another configuration (like extra information). The whitelist is just a whitelist to 
decide which devices are assignable. The provider network is information about the 
device; it's not in the scope of the whitelist.
This also reduces the autonomy of the compute node in favour of centralised 
tracking, which goes against the 'distributed where possible' philosophy of 
Openstack.
Finally, you're not actually removing configuration from the compute node.  You 
still have to configure a whitelist there; in the grouping design you also have 
to configure grouping (flavouring) on the control node as well.  The groups 
proposal adds one extra piece of information to the whitelists that are already 
there to mark groups, not a whole new set of config lines.
[yjiang5] Still, the whitelist is to decide which devices are assignable, not to provide 
device information. We should not mix that functionality into the whitelist configuration. 
If it's ok, I simply want to discard the 'group' term :) The nova PCI flow is simple: the 
compute node provides PCI devices (based on the whitelist), the scheduler tracks the 
PCI device information (abstracted as pci_stats for performance reasons), and the API 
provides a method for the user to specify the devices they want (the PCI flavor). 
The current implementation needs enhancement at each step of the flow, but I really 
see no reason to have the Group. Yes, the 'PCI flavor' in fact creates a group 
based on PCI 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Jiang, Yunhong
Brian, the issue with 'class name' is that currently libvirt does not 
provide such information, otherwise we would be glad to add that :(
But this is a good point and we have already considered it. One solution is to 
retrieve it through some code that reads the configuration space directly. But 
that's not so easy, especially considering that different platforms have different 
methods to get at the configuration space. A workaround (at least as a first step) is 
to use a user-defined property, so that the user can define it through 
configuration.

The issue with udev is that it's Linux specific, and it may even vary between 
distributions.

Thanks
--jyh

From: Brian Schott [mailto:brian.sch...@nimbisservices.com]
Sent: Thursday, January 09, 2014 11:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Ian,

The idea of pci flavors is a great one, and using vendor_id and product_id makes 
sense, but I could see a case for adding the class name such as 'VGA compatible 
controller'. Otherwise, slightly different generations of hardware will mean 
custom whitelist setups on each compute node.

01:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900 GTX] 
(rev a1)

On the flip side, vendor_id and product_id might not be sufficient.  Suppose I 
have two identical NICs, one for nova internal use and the second for guest 
tenants?  So, bus numbering may be required.

01:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900 GTX] 
(rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation G71 [GeForce 7900 GTX] 
(rev a1)

Some possible combinations:

# take 2 gpus
pci_passthrough_whitelist=[
 { vendor_id:NVIDIA Corporation G71,product_id:GeForce 7900 GTX, 
name:GPU},
]

# only take the GPU on PCI 2
pci_passthrough_whitelist=[
 { vendor_id:NVIDIA Corporation G71,product_id:GeForce 7900 GTX, 
'bus_id': '02:', name:GPU},
]
pci_passthrough_whitelist=[
 {bus_id: 01:00.0, name: GPU},
 {bus_id: 02:00.0, name: GPU},
]

pci_passthrough_whitelist=[
 {class: VGA compatible controller, name: GPU},
]

pci_passthrough_whitelist=[
 { product_id:GeForce 7900 GTX, name:GPU},
]

I know you guys are thinking of PCI devices, but any thought of mapping to 
something like udev rather than pci?  Supporting udev rules might be easier and 
more robust rather than making something up.

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060



On Jan 9, 2014, at 12:47 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:


I think I'm in agreement with all of this.  Nice summary, Robert.
It may not be where the work ends, but if we could get this done the rest is 
just refinement.

On 9 January 2014 17:49, Robert Li (baoli) ba...@cisco.com wrote:

Hi Folks,

With John joining the IRC, so far, we had a couple of productive meetings in an 
effort to come to consensus and move forward. Thanks John for doing that, and I 
appreciate everyone's effort to make it to the daily meeting. Let's reconvene 
on Monday.

But before that, and based on our today's conversation on IRC, I'd like to say 
a few things. I think that first of all, we need to get agreement on the 
terminologies that we are using so far. With the current nova PCI passthrough

PCI whitelist: defines all the available PCI passthrough devices on a 
compute node. pci_passthrough_whitelist=[{ 
vendor_id:,product_id:}]
PCI Alias: criteria defined on the controller node with which requested 
PCI passthrough devices can be selected from all the PCI passthrough devices 
available in a cloud.
Currently it has the following format: 
pci_alias={vendor_id:, product_id:, name:str}

nova flavor extra_specs: request for PCI passthrough devices can be 
specified with extra_specs in the format for 
example:pci_passthrough:alias=name:count

As you can see, currently a PCI alias has a name and is defined on the 
controller. The implications for it is that when matching it against the PCI 
devices, it has to match the vendor_id and product_id against all the available 
PCI devices until one is found. The name is only used for reference in the 
extra_specs. On the other hand, the whitelist is basically the same as the 
alias without a name.

What we have discussed so far is based on something called PCI groups (or PCI 
flavors as Yongli puts it). Without introducing other complexities, and with a 
little change of the above representation, we will have something like:

pci_passthrough_whitelist=[{ vendor_id:,product_id:, 
name:str}]

By doing so, we eliminated the PCI alias. And we call the name in above as a 
PCI group name. You can think of it as combining the definitions of the 
existing whitelist and PCI 

Re: [openstack-dev] [horizon] User registrations

2014-01-10 Thread James Nzomo
Hi

I've been thinking of ideas on how to fulfill this user self registration
requirement for our startup's private beta.
So far, i'm of the opinion that storage of customer data (contacts,
physical address, billing info, etc) by commercial entities can be handled
by Keystone via extension(s), (
http://docs.openstack.org/developer/keystone/EXTENSIONS_HOWTO.html)

Such an extension could at least :-
-- implement API Extension to facilitate CRUD ops on customer data.
-- implement a backend to store customer data, say in an additional
cust-info table in keystone's db.
-- have a customizable customer model/schema to allow different OpenStack
IaaS providers to store whatever info they require on their clients (and
employees).
-- be capable of being disabled for those who do not need this extended
functionality e.g some private clouds.

There should be a corresponding client lib for this extended API that can
be used by:-
-- a horizon django self-registration app,
-- a billing system
-- a CRM system
-- (the list goes on  on)


I'll avail a proof of concept for the above in a few days. Peer
review and scrutiny will be very much appreciated.
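
Purely as an illustration of the extra backend table mentioned above, a minimal
SQLAlchemy model could look like the sketch below; the table and column names
are invented, and a real extension would build on keystone's own model base and
migrations:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class CustomerInfo(Base):
        __tablename__ = 'cust_info'

        id = sa.Column(sa.String(64), primary_key=True)
        user_id = sa.Column(sa.String(64), nullable=False, index=True)
        contact_email = sa.Column(sa.String(255))
        physical_address = sa.Column(sa.Text())
        billing_reference = sa.Column(sa.String(255))

    # Create the table in an in-memory database just to show the model works.
    engine = sa.create_engine('sqlite://')
    Base.metadata.create_all(engine)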
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Ian Wells
On 10 January 2014 15:30, John Garbutt j...@johngarbutt.com wrote:

 We seemed happy with the current system (roughly) around GPU passthrough:
 nova flavor-key three_GPU_attached_30GB set
 pci_passthrough:alias= large_GPU:1,small_GPU:2
 nova boot --image some_image --flavor three_GPU_attached_30GB some_name


Actually, I think we pretty solidly disagree on this point.  On the other
hand, Yongli's current patch (with pci_flavor in the whitelist) is pretty
OK.


 nova boot --flavor m1.large --image image_id
   --nic net-id=net-id-1
   --nic net-id=net-id-2,nic-type=fast

  --nic net-id=net-id-3,nic-type=fast vm-name


With flavor defined (wherever it's defined):

nova boot ..
   --nic net-id=net-id-1,pci-flavor=xxx# ok, presumably defaults to
PCI passthrough
   --nic net-id=net-id-1,pci-flavor=xxx,vnic-attach=macvtap # ok
   --nic net-id=net-id-1 # ok - no flavor = vnic
   --nic port-id=net-id-1,pci-flavor=xxx# ok, gets vnic-attach from
port
   --nic port-id=net-id-1 # ok - no flavor = vnic



 or

 neutron port-create
   --fixed-ip subnet_id=subnet-id,ip_address=192.168.57.101
   --nic-type=slow | fast | foobar
   net-id
 nova boot --flavor m1.large --image image_id --nic port-id=port-id


No, I think not - specifically because flavors are a nova concept and not a
neutron one, so putting them on the port is inappropriate. Conversely,
vnic-attach is a Neutron concept (fine, nova implements it, but Neutron
tells it how) so I think it *is* a port field, and we'd just set it on the
newly created port when doing nova boot ..,vnic-attach=thing

2) Expand PCI alias information

{
    "name": "NIC_fast",
    "sriov_info": {
        "nic_type": "fast",
        "network_ids": ["net-id-1", "net-id-2"]
    }
}


Why can't we use the flavor name in --nic (because multiple flavors might
be on one NIC type, I guess)?  Where does e.g. switch/port information go,
particularly since it's per-device (not per-group) and non-scheduling?

I think the issue here is that you assume we group by flavor, then add
extra info, then group into a NIC group.  But for a lot of use cases there
is information that differs on every NIC port, so it makes more sense to
add extra info to a device, then group into flavor and that can also be
used for the --nic.

network_ids is interesting, but this is a nova config file and network_ids
are (a) from Neutron (b) ephemeral, so we can't put them in config.  They
could be provider network names, but that's not the same thing as a neutron
network name and not easily discoverable, outside of Neutron i.e. before
scheduling.

Again, Yongli's current change with pci-flavor in the whitelist records
leads to a reasonable way to make this work here, I think;
straightforward extra_info would be fine (though it would perhaps be nice if it were
easier to spot it as being of a different type from the whitelist regex fields).
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day - Mon Jan 27th

2014-01-10 Thread Steve Baker
On 10/01/14 04:30, Sean Dague wrote:
 Minor correction, we're going to do this on Jan 27th, to be after the
 i2 push, as I don't think there is time organize this prior.


 Specifically I'd like to get commitments from as many PTLs as possible
 that they'll both directly participate in the day, as well as encourage
 the rest of their project to do the same.

I am keen for this.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-10 Thread Jay Dobies

Another question:

- A Role (sounds like we're moving away from that so I'll call it 
Resource Category) can have multiple Node Profiles defined (assuming I'm 
interpreting the + and the tabs in the Create a Role wireframe 
correctly). But I don't see anywhere where a profile is selected when 
scaling the Resource Category. Is the idea behind the profiles that you 
can select how much power you want to provide in addition to how many nodes?



On 01/10/2014 09:58 AM, Jaromir Coufal wrote:

Hi everybody,

there is first stab of Deployment Management section with future
direction (note that it was discussed as a scope for Icehouse).

I tried to add functionality in time and break it down to steps. This
will help us to focus on one functionality at a time and if we will be
in time pressure for Icehouse release, we can cut off last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf


Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please
focus on our initial steps the most (which doesn't mean that we should
neglect the direction).

Every feedback is very welcome, thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Ian Wells
Hey Yunhong,

The thing about 'group' and 'flavor' and 'whitelist' is that they once
meant distinct things (and I think we've been trying to reduce them back
from three things to two or one):

- group: equivalent devices at a host level - use any one, no-one will
care, because they're either identical or as near as makes no difference
- flavor: equivalent devices to an end user - we may re-evaluate our
offerings and group them differently on the fly
- whitelist: either 'something to match the devices you may assign'
(originally) or 'something to match the devices you may assign *and* put
them in the group' (in the group proposal)

Bearing in mind what you said about scheduling, and if we skip 'group' for
a moment, then can I suggest (or possibly restate, because your comments
are pointing in this direction):

- we allow extra information to be added at what is now the whitelisting
stage, that just gets carried around with the device
- when we're turning devices into flavors, we can also match on that extra
information if we want (which means we can tag up the devices on the
compute node if we like, according to taste, and then bundle them up by tag
to make flavors; or we can add Neutron specific information and ignore it
when making flavors)
- we would need to add a config param on the control host to decide which
flags to group on when doing the stats (and they would additionally be the
only params that would work for flavors, I think)
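
A toy model of that matching, with invented field and flavor names throughout,
might look like:

    # Devices as a compute node could report them, already tagged with
    # extra info at the whitelisting stage:
    devices = [
        {'vendor_id': '8086', 'product_id': '10fb', 'physical_network': 'physnet1'},
        {'vendor_id': '8086', 'product_id': '10fb', 'physical_network': 'physnet2'},
        {'vendor_id': '8086', 'product_id': '1520', 'physical_network': 'physnet2'},
    ]

    # Flavors defined centrally, free to match on any of those fields:
    flavors = {
        'NIC_physnet1': {'physical_network': 'physnet1'},
        'NIC_82599':    {'vendor_id': '8086', 'product_id': '10fb'},
    }

    def matches(device, spec):
        return all(device.get(k) == v for k, v in spec.items())

    def flavor_stats(devices):
        return dict((name, sum(1 for d in devices if matches(d, spec)))
                    for name, spec in flavors.items())

    print(flavor_stats(devices))  # {'NIC_physnet1': 1, 'NIC_82599': 2}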
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-10 Thread Doug Hellmann
On Fri, Jan 10, 2014 at 1:32 PM, Sergey Skripnick
sskripn...@mirantis.comwrote:






 On Fri, Jan 10, 2014 at 12:54 PM, Sergey Skripnick 
 sskripn...@mirantis.com wrote:




 I appreciate that we want to fix the ssh client. I'm not certain that
 writing our own is the best answer.




 I was supposed to fix oslo.processutils.ssh with this class, but it may

 be fixed without it, not big deal.









 In his comments on your pull request, the paramiko author recommended
 looking at Fabric. I know that Fabric has a long history in production.
 Does it provide the required features?






 Fabric is too much for just command execution on remote server. Spur
 seems like

 good choice for this.



 But I still don't understand: why do we need oslo.processutils.execute?
 We can use

 subprocess module. Why do we need oslo.processutils.ssh_execute? We can
 use paramiko

 instead.


 Well, as you've shown, having a wrapper around subprocess to deal with
 the I/O properly is useful, especially commands that produce a lot of it.
 :-)


 As far as ssh_execute goes, I don't know the origin but I imagine the
 author didn't know about paramiko.


 Doug



 ssh_execute is using paramiko :)


See, this is what I get for not looking at git blame. :-)

Doug







 --
 Regards,
 Sergey Skripnick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re-using Horizon bits in OpenDaylight

2014-01-10 Thread Gabriel Hurley
I've also used the core Horizon bits for dashboards other than the OpenStack 
dashboard. I can't speak for any current bugs you may run into, but 
by-and-large the ability to create arbitrary dashboards, tables, workflows, 
etc. to interact with RESTful APIs works perfectly without the OpenStack bits. 
My goal has always been to keep a clean separation between the generic 
(horizon) and specific (openstack_dashboard) code.
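
As a rough sketch of what that looks like in practice (it only does something
inside a Django project configured for horizon, and the dashboard and panel
names here are invented):

    # dashboard.py
    import horizon

    class MyDashboard(horizon.Dashboard):
        name = "My Dashboard"        # shown in the navigation
        slug = "mydashboard"
        panels = ('mypanel',)
        default_panel = 'mypanel'

    horizon.register(MyDashboard)

    # panel.py
    class MyPanel(horizon.Panel):
        name = "My Panel"
        slug = "mypanel"

    MyDashboard.register(MyPanel)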

I have always encouraged people to file any problems they have using the horizon 
module for non-OpenStack purposes as bugs on the project. Someday the modules 
may actually be split into two separate packages, but for now the design goal 
stands, at least.

All the best,


-  Gabriel

From: Walls, Jeffrey Joel (Cloud OS RD) [mailto:jeff.wa...@hp.com]
Sent: Friday, January 10, 2014 7:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Re-using Horizon bits in OpenDaylight

I have used the Horizon framework for an application other than the OpenStack 
Dashboard and it worked really well.  There is an effort to create a separation 
between the Horizon framework and the OpenStack Dashboard and once that happens 
it will be even easier.  How hard or difficult it is will depend on exactly 
what you're trying to do and how well its UI metaphor matches that of the 
OpenStack Dashboard.

Jeff

From: Endre Karlson [mailto:endre.karl...@gmail.com]
Sent: Friday, January 10, 2014 8:20 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Re-using Horizon bits in OpenDaylight

Hello everyone.

I would like to know if anyone here has knowledge of how easy it is to use 
Horizon for something other than OpenStack things?

I'm the starter of the dlux project that aims to consume the OpenDaylight SDN 
controller Northbound REST APIs instead of the integrated UI it has now. Though 
the current PoC is done using AngularJS, I ran into issues such as how to make it 
easy for third-party things that are not core to plug their pieces into the app, 
which I know can be done using panels and the like in Horizon.

So the question boils down to, can I easily re-use Horizon for ODL?

Endre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][documentation][devstack] Confused about how to set up a Nova development environment

2014-01-10 Thread Dan Genin

On 01/09/2014 06:14 PM, Brant Knudson wrote:




On Thu, Jan 9, 2014 at 12:21 PM, Mike Spreitzer mspre...@us.ibm.com wrote:


Brant Knudson b...@acm.org wrote on
01/09/2014 10:07:27 AM:


 When I was starting out, I ran devstack (
http://devstack.org/) on
 an Ubuntu VM. You wind up with a system where you've got a basic
 running OpenStack so you can try things out with the command-line
 utilities, and also do development because it checks out all the
 repos. I learned a lot, and it's how I still do development.

What sort(s) of testing do you do in that environment, and how?


Just running devstack exercises quite a bit of code, because it's 
setting up users, project, and loading images. Now you've got a system 
that's set up so you can add your own images and boot them using 
regular OpenStack commands, and you can use the command-line utilities 
or REST API to exercise your changes. The command-line utilities and 
REST API are documented. There are some things that you aren't going 
to be able to do with devstack because it's a single node, but that 
hasn't affected my development.


 Does your code editing interfere with the running DevStack?


Code editing doesn't interfere with a running DevStack. After you make 
a change you can find the process's window in devstack's screen session and 
restart it to pick up your changes. For example if you made a change that 
affects nova-api, you can restart the process in the n-api screen window.


 Can you run the unit tests without interference from/to the
running DevStack?


Running unit tests doesn't interfere with DevStack. The unit tests run 
in their own processes and also run in a virtual environment.


 How do you do bigger tests?


For bigger tests I'd need a cluster which I don't have, so I don't do 
bigger tests.
It is possible to set up a multi-node devstack cloud using multiple VMs; 
see http://devstack.org/guides/multinode-lab.html. Depending on what you 
are trying to test, e.g., VM migration, this may be sufficient.


 What is the process for switching from running the merged code to
running your modified code?


I don't know what the merged code is? I use eclipse, so I create a 
project for the different directories in /opt/stack so I can edit the 
code right there.


 Are the answers documented someplace I have not found?

Thanks,
Mike


Not that I know of... the wiki pages are editable, so you or I could 
update them to help out others.


- Brant






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Less option (was: [oslo.config] Centralized config management)

2014-01-10 Thread Nachi Ueno
+100 also :)

2014/1/10 Joe Gordon joe.gord...@gmail.com:



 On Fri, Jan 10, 2014 at 4:01 AM, Mark McLoughlin mar...@redhat.com wrote:

 On Thu, 2014-01-09 at 16:34 -0800, Joe Gordon wrote:
  On Thu, Jan 9, 2014 at 3:01 PM, Jay Pipes jaypi...@gmail.com wrote:
 
   On Thu, 2014-01-09 at 23:56 +0100, Julien Danjou wrote:
On Thu, Jan 09 2014, Jay Pipes wrote:
   
 Hope you don't mind, I'll jump in here :)

 On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
 Hi Jeremy

 Don't you think it is burden for operators if we should choose
 correct
 combination of config for multiple nodes even if we have chef and
 puppet?

 It's more of a burden for operators to have to configure OpenStack
 in
 multiple ways.
   
I also think projects should try to minimize configuration options to
their minimum so operators are not completely lost. Opening the sample
nova.conf and seeing 696 options is not what I would call user
friendly.
   
  
 
 
  There was talk a while back about marking different config options as
  basic
  and advanced (or something along those lines) to help make it easier for
  operators.

 You might be thinking of this session summit I led:

   https://etherpad.openstack.org/p/grizzly-nova-config-options

 My thinking was we first move config options into groups to make it
 easier for operators to make sense of the available options and then we
 would classify them (as e.g. tuning, experimental, debug) and
 exclude some classifications from the sample config file.

 Sadly, I never even made good progress on Tedious Task 2 :: Group.



 That is exactly what I was thinking of.



 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-10 Thread Nachi Ueno
Hi Flavio, Clint

I agree with you guys.
sorry, maybe I wasn't clear. My opinion is to remove every
configuration item from the node,
and every configuration should be done via API from a central resource
manager (nova-api or neutron server etc).

This is how you add new hosts in CloudStack, vCenter, and OpenStack.

Cloudstack: Go to web UI, add Host/ID/PW.
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html

vCenter: Go to vsphere client, Host/ID/PW.
https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html

Openstack,
- Manual
   - set up mysql connection config, rabbitmq/qpid connection config,
keystone config, neutron config,
http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html

We have some deployment systems, including Chef/Puppet, Packstack, and TripleO:
- Chef/Puppet
   Setup chef node
   Add node/ apply role
- Packstack
   -  Generate answer file
  
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
   -  packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
- TripleO
   - UnderCloud
   nova baremetal node add
   - OverCloud
   modify heat template

For residents of this mailing list, Chef/Puppet or a third-party tool is easy
to use. However, I believe they are magical tools for many operators.
Furthermore, these deployment systems tend to take time to support the newest
release, so for most users an OpenStack release doesn't mean it is something
they can actually use yet.

IMO, the current way of managing configuration is the cause of this issue.
If we manage everything via API, we can manage the cluster from Horizon.
Then the user can just go to Horizon and add a host.

It may take time to migrate all config to APIs, so one easy first step is to
convert the existing config into API resources. That is the purpose of this
proposal.
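
As a purely hypothetical illustration of the direction (neither this endpoint
nor this payload exists today; the URL, fields and token are all made up),
"just add a host" could be as small as a single API call:

# Hypothetical only: neither this endpoint nor this payload exists today.
import json

import requests

payload = {
    'host': {
        'address': '192.168.1.10',
        'username': 'stack',
        'password': 'secret',
        'roles': ['compute'],
    }
}
resp = requests.post('http://controller:9999/v1/hosts',
                     headers={'Content-Type': 'application/json',
                              'X-Auth-Token': 'ADMIN-TOKEN'},
                     data=json.dumps(payload))
print(resp.status_code)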

Best
Nachi


2014/1/10 Clint Byrum cl...@fewbar.com:
 Excerpts from Doug Hellmann's message of 2014-01-09 12:21:05 -0700:
 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

  Hi folks
 
  Thank you for your input.
 
  The key difference from external configuration system (Chef, puppet
  etc) is integration with
  openstack services.
  There are cases a process should know the config value in the other hosts.
  If we could have centralized config storage api, we can solve this issue.
 
  One example of such a case is the neutron + nova VIF parameter
  configuration regarding security groups.
  The workflow is something like this:

  nova asks the neutron server for VIF configuration information.
  The neutron server asks for the configuration from the neutron l2-agent on
  the same host as nova-compute.
 

 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.


 That is where I think my resistance to such a change starts. If Nova and
 Neutron need to share a value, they should just do that via their API's.
 There is no need for a config server in the middle. If it is networking
 related, it lives in Neutron's configs, and if it is compute related,
 Nova's configs.

 Is there any example where values need to be in sync but are not
 sharable via normal API chatter?

 Running a configuration service also introduces what could be a single
 point of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.


 Configuration shouldn't ever have a rapid pattern of change, so even if
 this service existed I'd suggest that it would be used just like current
 config management solutions: scrape values out, write to config files.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes Jan 9

2014-01-10 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-09-18.10.html
Log: http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-09-18.10.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-10 Thread Jay Pipes
On Fri, 2014-01-10 at 17:06 +, CARVER, PAUL wrote:
 If anyone is giving any thought to networks that are available to
 multiple tenants (controlled by a configurable list of tenants) but
 not visible to all tenants I’d like to hear about it.
 
 I’m especially thinking of scenarios where specific networks exist
 outside of OpenStack and have specific purposes and rules for who can
 deploy servers on them. We’d like to enable the use of OpenStack to
 deploy to these sorts of networks but we can’t do that with the
 current “shared or not shared” binary choice. 

Hi Paul :) Please see here:

https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg07268.html

for a similar discussion.

best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][qa] The 'spec' parameter of mock.patch()

2014-01-10 Thread Maru Newby
I recently saw a case [1] where a misspelled assertion method 
(asoptt_called_once_with vs assert_called_once_with) did not result in a test 
failure because the object it was called on was created by mock.patch() without 
any of the spec/spec_set/autospec parameters being set.  Might it make sense to 
require that calls to mock.patch() set autospec=True [2]?
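
For illustration, here is a minimal self-contained sketch of the failure mode
(the class and test names are made up, not taken from the review above):

import mock


class OVSBridge(object):
    """Stand-in for the real class under test (name is made up)."""
    def add_port(self, name):
        pass


def test_without_spec():
    with mock.patch('__main__.OVSBridge') as bridge_cls:
        bridge = bridge_cls.return_value
        bridge.add_port('eth0')
        # Typo: this is not an assertion at all.  The MagicMock happily
        # records an attribute access and the test silently passes.
        bridge.asoptt_called_once_with('eth0')


def test_with_autospec():
    with mock.patch('__main__.OVSBridge', autospec=True) as bridge_cls:
        bridge = bridge_cls.return_value
        bridge.add_port('eth0')
        # With autospec the mock only exposes attributes of the real class
        # (plus Mock's own assert_* helpers), so the same typo raises
        # AttributeError and the test fails as it should.
        bridge.asoptt_called_once_with('eth0')


if __name__ == '__main__':
    test_without_spec()
    try:
        test_with_autospec()
    except AttributeError as exc:
        print('autospec caught the typo: %s' % exc)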


m.

1: 
https://review.openstack.org/#/c/61105/7/neutron/tests/unit/openvswitch/test_ovs_lib.py
 (line 162)
2: http://www.voidspace.org.uk/python/mock/patch.html#mock.patch


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][qa] The 'spec' parameter of mock.patch()

2014-01-10 Thread Nachi Ueno
+1, but fixing this looks like it will take no small amount of time.

2014/1/10 Maru Newby ma...@redhat.com:
 I recently saw a case [1] where a misspelled assertion method 
 (asoptt_called_once_with vs assert_called_once_with) did not result in a test 
 failure because the object it was called on was created by mock.patch() 
 without any of the spec/spec_set/autospec parameters being set.  Might it 
 make sense to require that calls to mock.patch() set autospec=True [2]?


 m.

 1: 
 https://review.openstack.org/#/c/61105/7/neutron/tests/unit/openvswitch/test_ovs_lib.py
  (line 162)
 2: http://www.voidspace.org.uk/python/mock/patch.html#mock.patch


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Jiang, Yunhong
Ian, thanks for your reply. Please check the comments prefixed with [yjiang5].

Thanks
--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, January 10, 2014 12:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Hey Yunhong,

The thing about 'group' and 'flavor' and 'whitelist' is that they once meant 
distinct things (and I think we've been trying to reduce them back from three 
things to two or one):

- group: equivalent devices at a host level - use any one, no-one will care, 
because they're either identical or as near as makes no difference
- flavor: equivalent devices to an end user - we may re-evaluate our offerings 
and group them differently on the fly
- whitelist: either 'something to match the devices you may assign' 
(originally) or 'something to match the devices you may assign *and* put them 
in the group' (in the group proposal)

[yjiang5] Thanks for the summary, it is quite clear. So what is the purpose of 
'equivalent devices at the host level'? 'Equivalent devices *to an end user*' 
is the flavor, so is this 'equivalent to the *scheduler*' or 'equivalent to 
*something else*'? If it is equivalent to the scheduler, then I'd take 
pci_stats as a flexible grouping for the scheduler, and I'd treat 'equivalent 
for the scheduler' as a restriction on 'equivalent to the end user' because of 
performance issues; otherwise it's needless.  Secondly, for your definition of 
'whitelist', I hesitate over your '*and*' because IMHO 'and' means two things 
mixed together; otherwise we could state it in one simple sentence. For 
example, I would prefer another configuration option to define 'put devices in 
the group', or, if we extend it, to define extra information like a 'group 
name' for the devices.

Bearing in mind what you said about scheduling, and if we skip 'group' for a 
moment, then can I suggest (or possibly restate, because your comments are 
pointing in this direction):
- we allow extra information to be added at what is now the whitelisting stage, 
that just gets carried around with the device
[yjiang5] For 'added at ... whitelisting stage', see my statement above about 
the configuration. However, if you do want to use the whitelist, I'm OK with 
that, but please keep in mind that it combines two functions: the devices you 
may assign *and* the group name for those devices.

- when we're turning devices into flavors, we can also match on that extra 
information if we want (which means we can tag up the devices on the compute 
node if we like, according to taste, and then bundle them up by tag to make 
flavors; or we can add Neutron specific information and ignore it when making 
flavors)
[yjiang5] Agree. Currently we can only use vendor_id and device_id for the 
flavor/alias, but we can extend it to cover such extra information since it is 
now an API.

- we would need to add a config param on the control host to decide which flags 
to group on when doing the stats (and they would additionally be the only 
params that would work for flavors, I think)
[yjiang5] Agree. And this is achievable because we switch the flavor to an 
API, so we can control the flavor creation process.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Building a new open source NFV system for Neutron

2014-01-10 Thread Ian Wells
Hey Luke,

If you look at the passthrough proposals, the overview is that part of the
passthrough work is to ensure there's an PCI function available to allocate
to the VM, and part is to pass that function on to the Neutron plugin via
conventional means.  There's nothing that actually mandates that you
connect the SRIOV port using the passthrough mechanism, and we've been
working on the assumption that we would be supporting the 'macvtap' method
of attachment that Mellanox came up with some time ago.

I think what we'll probably have is a set of standard attachments
(including passthrough) added to the Nova drivers - you'll see in the
virtualisation drivers that Neutron already gets to tell Nova how to attach
the port and can pass auxiliary information - and we will pass the PCI path
and, optionally, other parameters to Neutron in the port-update that
precedes VIF plugging.  That would leave you with the option of passing the
path back and requesting an actual passthrough or coming up with some other
mechanism of your own choosing (which may not involve changing Nova at all,
if you're using your standard virtual plugging mechanism).
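
Just to make the port-update step concrete, a rough sketch of what such a call
could look like with python-neutronclient is below; the 'binding:profile'
contents (in particular the 'pci_path' key) and the credentials are
assumptions for illustration only, not an agreed interface:

# Illustrative only: the 'pci_path' key and its value format are
# assumptions, not an agreed interface; credentials are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

body = {
    'port': {
        'binding:profile': {
            # auxiliary information passed to Neutron ahead of VIF plugging
            'pci_path': '0000:06:00.1',
        },
    },
}
neutron.update_port('PORT-UUID', body)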

-- 
Ian.


On 10 January 2014 19:26, Luke Gorrie l...@snabb.co wrote:

 Hi Mike,

 On 10 January 2014 17:35, Michael Bright mjbrigh...@gmail.com wrote:

  Very pleased to see this initiative in the OpenStack/NFV space.

 Glad to hear it!

  A dumb question - how do you see this related to the ongoing
   [openstack-dev] [nova] [neutron] PCI pass-through network support
 
  discussion on this list?
 
  Do you see that work as one component within your proposed architecture
 for
  example or an alternative implementation?

 Good question. I'd like to answer separately about the underlying
 technology on the one hand and the OpenStack API on the other.

 The underlying technology of SR-IOV and IOMMU hardware capabilities
 are the same in PCI pass-through and Snabb NFV. The difference is that
 we introduce a very thin layer of software over the top that preserves
 the basic zero-copy operation while adding a Virtio-net abstraction
 towards the VM, packet filtering, tunneling, and policing (to start
 off with). The design goal is to add quite a bit of functionality with
 only a modest processing cost.

 The OpenStack API question is more open. How should we best map our
 functionality onto Neutron APIs? This is something we need to thrash
 out together with the community. Our current best guess - which surely
 needs much revision, and is not based on the PCI pass-through
 blueprint - is here:

 https://github.com/SnabbCo/snabbswitch/tree/snabbnfv-readme/src/designs/nfv#neutron-configuration

 Cheers,
 -Luke

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The extra_resource in compute node object

2014-01-10 Thread Jiang, Yunhong
Hi, Paul/Dan
For the extra_resource (refer to Dan's comments in 
https://review.openstack.org/#/c/60258/ for more information), I created a 
patch set 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:extra_resources,n,z
 and want to get some feedback.

    This patch set makes extra_resources a list of objects instead of an 
opaque JSON string. What do you think about that?

    However, the compute resource object is different from the current 
NovaObject: a) it has no corresponding table, just a field in another table, 
and I assume it will have no save/update functions; b) it defines functions on 
the object, like alloc/free etc. I'm not sure if this is the correct 
direction.

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-10 Thread Jiang, Yunhong
I have to use [yjiang5_1] prefix now :)

--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, January 10, 2014 3:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

On 11 January 2014 00:04, Jiang, Yunhong yunhong.ji...@intel.com wrote:
[yjiang5] Really thanks for the summary and it is quite clear. So what's the 
object of equivalent devices at host level? Because 'equivalent device * to 
an end user * is flavor, so is it 'equivalent to *scheduler* or 'equivalent 
to *xxx*'? If equivalent to scheduler, then I'd take the pci_stats as a 
flexible group for scheduler

To the scheduler, indeed.  And with the group proposal the scheduler and end 
user equivalences are one and the same.
[yjiang5_1] Once we use that proposal, we lose the flexibility of 'end user' 
equivalences, and that's the reason I'm against the group :)


Secondly, for your definition of 'whitelist', I'm hesitate to your '*and*' 
because IMHO, 'and' means mixed two things together, otherwise, we can state in 
simply one sentence. For example, I prefer to have another configuration option 
to define the 'put devices in the group', or, if we extend it , be define 
extra information like 'group name' for devices.

I'm not stating what we should do, or what the definitions should mean; I'm 
saying how they've been interpreted as we've discussed this in the past.  We've 
had issues in the past where we've had continuing difficulties in describing 
anything without coming back to a 'whitelist' (generally meaning 'matching 
expression'), as an actual 'whitelist' is implied, rather than separately 
required, in a grouping system.
 Bearing in mind what you said about scheduling, and if we skip 'group' for a 
moment, then can I suggest (or possibly restate, because your comments are 
pointing in this direction):
- we allow extra information to be added at what is now the whitelisting stage, 
that just gets carried around with the device
[yjiang5] For 'added at ... whitelisting stage', see my above statement about 
the configuration. However, if you do want to use whitelist, I'm ok, but please 
keep in mind that it's two functionality combined: device you may assign *and* 
the group name for these devices.

Indeed - which is in fact what we've been proposing all along.


- when we're turning devices into flavors, we can also match on that extra 
information if we want (which means we can tag up the devices on the compute 
node if we like, according to taste, and then bundle them up by tag to make 
flavors; or we can add Neutron specific information and ignore it when making 
flavors)
[yjiang5] Agree. Currently we can only use vendor_id and device_id for 
flavor/alias, but we can extend it to cover such extra information since now 
it's a API.

- we would need to add a config param on the control host to decide which flags 
to group on when doing the stats (and they would additionally be the only 
params that would work for flavors, I think)
[yjiang5] Agree. And this is achievable because we switch the flavor to be API, 
then we can control the flavor creation process.

OK - so if this is good then I think the question is how we could change the 
'pci_whitelist' parameter we have - which, as you say, should either *only* do 
whitelisting or be renamed - to allow us to add information.  Yongli has 
something along those lines but it's not flexible and it distinguishes poorly 
between which bits are extra information and which bits are matching 
expressions (and it's still called pci_whitelist) - but even with those 
criticisms it's very close to what we're talking about.  When we have that I 
think a lot of the rest of the arguments should simply resolve themselves.

[yjiang5_1] The reason it is not easy to find a flexible/distinguishable change 
to pci_whitelist is that it combines two things. So a stupid/naive solution in 
my head is: change it to a VERY generic name, 'pci_devices_information', and 
change the schema to an array of {'device_property': regex exp, 'group_name': 
'g1'} dictionaries, where the device_property expression can be 'address == 
xxx, vendor_id == xxx' (i.e. similar to the current whitelist), and we can 
squeeze more into pci_devices_information in future, like 'network_information' 
= xxx or the Neutron-specific information you required in a previous mail. All 
keys other than 'device_property' become extra information, i.e. 
software-defined properties. This extra information will be carried with the 
PCI devices. Some implementation details: A) we can limit the acceptable keys, 
e.g. we only support 'group_name' and 'network_id', or we can accept any keys 
other than the reserved ones (vendor_id, device_id, etc.); B) if a device 
matches 'device_property' in several entries, raise an exception, or use the 
first one.
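
[yjiang5_1] To make that concrete, the schema above might render as something 
like the following (the option name, keys and values here are all hypothetical, 
just an illustration of the proposal, not an existing Nova option):

# Hypothetical rendering of the proposed schema; neither the option name
# nor these keys exist in Nova today.
pci_devices_information = [
    {'device_property': 'vendor_id == 8086, device_id == 10fb',
     'group_name': 'g1'},
    {'device_property': 'address == 0000:06:00.*',
     'group_name': 'g2',
     'network_id': 'physnet1'},
]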

[yjiang5_1] Another thing that needs to be discussed is, as you pointed out, 
that we would need to add a config 

[openstack-dev] [Swift] Metadata Search API

2014-01-10 Thread Thomas, Lincoln (HP Storage RD)
Rebooting this thread now that I've reorg'd the Wiki page.

The proposed REST API spec for searching system and custom metadata in Swift, 
across accounts, containers, and objects, is now posted at:

https://wiki.openstack.org/wiki/MetadataSearchAPI

I've also made the first modification to the API since the Icehouse design 
summit where I introduced this project: adding metadata: per conversation 
with Paula Ta-Shma's team at IBM Storage Research.

The home page for this project remains at:

https://wiki.openstack.org/wiki/MetadataSearch 

See that home page for further details, and the history of this email thread. 
Feel free to edit the Wiki as described on the home page!

As Brian Cline (SoftLayer) said so eloquently in this thread:

 Today, about the best one can do is iterate through everything and inspect 
 metadata 
 along the way - obviously an infinitely expensive (and hilariously insane) 
 operation.

 If there are any others who have implemented search in Swift, please speak up 
 and 
 help shape this. We both want to get community consensus on a standard search 
 API, 
 then get a pluggable reference implementation into Swift.

 This is all work-in-progress stuff, but we'd welcome any feedback, concerns, 
 literal jumps for joy, etc. in this thread, both on the API and on a 
 reference 
 architecture.
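
To make that cost concrete, here is a rough, purely illustrative sketch of the
brute-force walk Brian describes, using python-swiftclient (the endpoint,
credentials and metadata key below are placeholders):

# Naive account-wide scan: list every container, list every object, and
# HEAD each object to inspect its metadata.  This is exactly the
# O(everything) operation a search API should replace.
from swiftclient import client


def find_objects_with_meta(conn, key, value):
    header = 'x-object-meta-%s' % key
    matches = []
    _headers, containers = conn.get_account(full_listing=True)
    for container in containers:
        _headers, objects = conn.get_container(container['name'],
                                               full_listing=True)
        for obj in objects:
            meta = conn.head_object(container['name'], obj['name'])
            if meta.get(header) == value:
                matches.append((container['name'], obj['name']))
    return matches


conn = client.Connection(authurl='http://swift.example.com/auth/v1.0',
                         user='account:user', key='secret')
print(find_objects_with_meta(conn, 'color', 'blue'))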

Thanks in advance,
Lincoln Thomas (IRC lincolnt)
System/Software Engineer, HP Storage R&D
Portland, OR, USA,  +1 (503) 757-6274


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2014-01-10 Thread Jeremy Stanley
On 2014-01-10 21:57:33 +1300 (+1300), Robert Collins wrote:
 I have *no* aversion to allowing contributors to police things on
 their own.
[...]

I know you don't. It was stated in the message I was replying to (in
context you trimmed) that ...the community should not accept or
promote any policy which suggests a configuration that alters the
behavior of systems beyond the scope of a local workspace used while
working with OpenStack... I disagree, and think we as a collective
of individuals should feel free to exchange tips and suggestions on
configuring our development environments even if they may have
(potentially positive) implications outside of just work on
OpenStack code.

 If we have to review for a trashfile pattern then we have
 contributors using that. There are more editors than contributors
 :).
[...]
 I don't understand why you call it polluting. Pollution is toxic.
 What is toxic about the few rules needed to handle common editors?

For me, the ignore list is there so that someone doesn't have to
worry about accidentally committing *.o files because they ran make
and forgot to make clean when they were done. I'm less keen on it
being used so that developers don't need to know that visual studio
is leaving project directories all over the place.

Anyway I was using the term polluting more in reference to
accidentally committing unwanted files to the repository, and only
to a lesser extent inserting implementation details of this week's
most popular code flosser. How do you determine when it's okay to
clean up entries in the ever-growing .gitignore file (that one
person who ran a tool once and added pattern for it has moved on to
less messy choices)? A file with operational implications which
grows in complexity without bounds worries me, even if only in
principle.

Anyway, it's not a huge deal. I'm just unlikely to review these
sorts of additions unless I've really run out of actual improvements
to review or bugs to fix. (And I already feel bad for wasting time
replying to several messages on the topic, but I couldn't let the
should not...promote any policy which suggests a configuration that
alters the behavior of systems comment go unanswered.)
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2014-01-10 Thread Robert Collins
On 11 January 2014 15:39, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-01-10 22:00:40 +1300 (+1300), Robert Collins wrote:
 [...synchronized .gitignore across all projects...]
 Out of curiousity, why wouldn't it work?

 The example I gave earlier in the thread... one project wants
 autogenerated ChangeLog so it's in their .gitignore but another
 project wants a hand-curated ChangeLog so they commit it to the
 repository. The two projects can't share a common .gitignore file as

Yes they can. Ignore ChangeLog in .gitignore, the added one will
override .gitignore and it's all fine.

 a result. There are almost certainly other examples, that's just the
 first to spring to mind. Could work around it by dynamically
 proposing semi-synchronized .gitignore updates based on a number of
 other preferences expressed individually by each project, but this
 seems like overengineering.

There *may* be some examples, but we don't have one yet :).

 Do you have a recommendation for a canned .gitignore which safely
 covers the files left behind by most free software editors, IDEs,
 debuggers, test tools, et cetera? Something we should incorporate
 into a recommended initial list in openstack-dev/cookiecutter's
 template perhaps?

I've added putting one together to my todo list.


 I read that as 'we don't test that our tarballs work'. No?

 We don't test that changes won't break our tarballs in some ways,
 no. I suppose we could add new jobs to generate throwaway tarballs
 and then re-run all other tests using source extracted from those in
 addition to the source obtained from the VCS, but that's probably
 duplicating a lot of the current tests we run. Could be worthwhile
 to explore anyway.

I'd be very keen to see *something* test that our tarballs work and
meet some basic criteria (such as perhaps we want to guarantee a
ChangeLog is actually in each tarball...)

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][Tuskar] Tuskar-UI navigation

2014-01-10 Thread Tzu-Mainn Chen
Hey all,

I have a question regarding the development of the tuskar-ui navigation.

So, to give some background: we are currently working off the wireframes that 
Jaromir Coufal has developed:

http://people.redhat.com/~jcoufal/openstack/tripleo/2013-12-03_tripleo-ui_02-resources.pdf

In these wireframes, you can see a left-hand navigation for Resources (which we 
have since renamed Nodes).  This
left-hand navigation includes sub-navigation for Resources: Overview, Resource 
Nodes, Unallocated, etc.

It seems like the Horizon way to implement this would be to create a 'nodes/' 
directory within our dashboard.
We would create a tabs.py with a Tab for Overview, Resource Nodes, Unallocated, 
etc, and views.py would contain
a single TabbedTableView populated by our tabs.
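
For reference, a rough sketch of what that tabs.py / views.py pair might look
like is below (labels, slugs and template paths are placeholders, not our
actual tuskar-ui code):

# tabs.py (sketch)
from django.utils.translation import ugettext_lazy as _

from horizon import tabs


class OverviewTab(tabs.Tab):
    name = _("Overview")
    slug = "overview"
    template_name = "infrastructure/nodes/_overview.html"

    def get_context_data(self, request):
        return {}


class ResourceNodesTab(tabs.Tab):
    name = _("Resource Nodes")
    slug = "resource"
    template_name = "infrastructure/nodes/_resource.html"

    def get_context_data(self, request):
        return {}


class NodeTabs(tabs.TabGroup):
    slug = "nodes"
    tabs = (OverviewTab, ResourceNodesTab)


# views.py (sketch)
class IndexView(tabs.TabbedTableView):
    tab_group_class = NodeTabs
    template_name = "infrastructure/nodes/index.html"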

However, this prevents us from using left-hand navigation.  As a result, our 
nodes/ directory currently looks like this:
https://github.com/openstack/tuskar-ui/tree/master/tuskar_ui/infrastructure/nodes

'overview', 'resource', and 'free' are subdirectories within nodes, and they 
each define their own panel.py,
enabling them to appear in the left-hand navigation.

This leads to the following questions:

* Would our current workaround be acceptable?  Or should we follow Horizon 
precedent more closely?
* I understand that a more flexible navigation system is currently under 
development
  (https://blueprints.launchpad.net/horizon/+spec/navigation-enhancement) - 
would it be preferred that
  we follow Horizon precedent until that navigation system is ready, rather 
than use our own workarounds?

Thanks in advance for any opinions!


Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev