[openstack-dev] [cinder] Ask for review for cinder driver patch

2015-02-27 Thread liuxinguo
Hi Mike,

I have fixed the patch as your comments and have committed it at 
https://review.openstack.org/#/c/152116/.

Would you please have a review at it, thanks!

Best regards,
Liu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Adding vendor drivers in Ironic

2015-02-27 Thread Ramakrishnan G
Hello All,

This is about adding vendor drivers in Ironic.

In Kilo we have many vendor drivers getting added to Ironic, which is a
very good thing.  But something I noticed is that most of these reviews
have lots of hardware-specific code in them.  This is something most
other Ironic folks cannot understand unless they go and read the vendor's
hardware manuals to see what is being done.  Otherwise we just have to
blindly mark the file as reviewed.

Now let me pitch in with our story about this.  We added a vendor driver
for HP ProLiant hardware (the *ilo drivers in Ironic).  Initially we
proposed the same thing in Ironic: that we would add all the hardware-specific
code in Ironic itself under the directory drivers/modules/ilo.
But a few of the Ironic folks didn't agree with this (Devananda especially,
who is from my company :)). So we created a new module, proliantutils, hosted
it on our own GitHub, and recently moved it to stackforge.  We gave Ironic a
limited set of APIs to use - like get_host_power_status(), set_host_power(),
get_one_time_boot(), set_one_time_boot(), etc. (The entire list is here:
https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/operations.py
).
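The limited-API idea above can be sketched in a few lines. Only the method names come from the proliantutils operations list; the classes and the fake client below are purely illustrative, not the library's actual code:

```python
class IloOperations(object):
    """Hypothetical sketch of the narrow interface a vendor wrapper
    library can expose to Ironic.  Method names are from the
    proliantutils operations module; the rest is made up."""

    def get_host_power_status(self):
        raise NotImplementedError()

    def set_host_power(self, power):
        raise NotImplementedError()


class FakeIloClient(IloOperations):
    """In-memory fake so the Ironic-side driver code can be exercised
    without real hardware."""

    def __init__(self):
        self._power = 'OFF'

    def get_host_power_status(self):
        return self._power

    def set_host_power(self, power):
        self._power = power


client = FakeIloClient()
client.set_host_power('ON')
print(client.get_host_power_status())  # -> ON
```

With an interface this small, Ironic's driver only ever sees a handful of calls, and everything hardware-specific stays behind the wrapper.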

We have seen only benefits from doing it.  Let me bring in some examples:

1) We tried to add support for some older versions of servers.  We could do
this without making any changes in Ironic (review in proliantutils:
https://review.openstack.org/#/c/153945/)
2) We are adding support for newer models of servers (we used to talk to
servers with a protocol called RIBCL; for newer servers we will use a
protocol called RIS).  We could do this with just 14 lines of actual code
change in Ironic (needed mainly because we didn't anticipate having to use
a new protocol at all when we started):
https://review.openstack.org/#/c/154403/

Now talking about the advantages of putting hardware-specific code in
Ironic:

*1) It's reviewed by the OpenStack community and tested:*
No. If I throw 600 lines of new iLO-specific code like this (
https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/ris.py)
at Ironic folks, I doubt they will do more than glance at it.  And regarding
testing, it's not tested in the gate unless we have third-party CI for it.
 [We (the iLO drivers) don't have third-party CI right now either, but we are
working on it.]

*2) Everything gets packaged into distributions automatically:*
Now the hardware-specific code that we add in Ironic under
drivers/modules/vendor/ will get packaged into distributions, but this
code in turn will have dependencies which need to be installed manually
by the operator (I assume vendor-specific dependencies are not considered
by Linux distributions when packaging OpenStack Ironic). Anyone installing
Ironic and wanting to manage my company's servers will again need to
install these dependencies manually.  So why not install the wrapper too,
if there is one?

As I see it, we get these advantages by moving all of the hardware-specific
code to a wrapper module in stackforge and just exposing some APIs for
Ironic to use:
* Ironic code would be much cleaner and easier to maintain
* Any changes related to your hardware - support for newer hardware, bug
fixes for particular models - become very easy. You don't need to change
Ironic code for that; you just fix the bug in your module, release a new
version, and ask your users to install the newer version.
* python-fooclient could be used outside Ironic to easily manage foo
servers.
* OpenStack CI for free if you are in stackforge - unit tests, flake8 checks,
doc generation, merging, PyPI releases - everything is handled automatically.

I don't see any disadvantages.

Now, regarding the time taken to do this: if you have all the code ready in
Ironic (which I assume you already do), it will perhaps take a day - half a
day to put it into a separate Python module on GitHub and half a day for
stackforge. The request to add a stackforge project should get approved the
same day (if everything is all right).

Let me know all of your thoughts on this.  If we agree, I feel we should
have some documentation on it in our Ironic docs directory.

Thanks for reading :)

Regards,
Ramesh


Re: [openstack-dev] [Heat] Stepping down from core

2015-02-27 Thread Qiming Teng
On Fri, Feb 27, 2015 at 03:23:13PM -0500, Jeff Peeler wrote:
 As discussed during the previous Heat meeting, I'm going to be
 stepping down from core on the Heat project. My day-to-day work is
 going to be more focused on TripleO for the foreseeable future, and
 I hope to be able to focus on reviews there soon.
 
 Being part of Heat core since day 0 has been a good experience, but
 keeping up with multiple projects is a lot to manage. I don't know
 how some of you do it!
 
 Jeff

Best wishes to you, Jeff.  Thanks for your contributions and your
mentoring over the years!

Regards,
 Qiming
 




Re: [openstack-dev] [Magnum] Mid-Cycle Meetup Planning

2015-02-27 Thread Weidong Shao
Could you post the detailed agenda for both days?  Thanks!

On Mon, Jan 26, 2015 at 3:54 PM Adrian Otto adrian.o...@rackspace.com
wrote:

 Team,

 Thanks for participating in the poll. Due to considerable scheduling
 conflicts, I am expanding the poll to include the following Monday
 2015-03-02+Tuesday 2015-03-03. Hopefully these alternate dates can get more
 of us together on the same days.

 Please take a moment to respond to the poll a second time to indicate your
 availability on the newly proposed dates:

 http://doodle.com/ddgsptuex5u3394m

 Thanks,

 Adrian

 On Jan 8, 2015, at 2:24 PM, Adrian Otto adrian.o...@rackspace.com wrote:

  Team,
 
   If you have been watching the Magnum project, you know that things have
  really taken off recently. At Paris we did not contemplate a mid-cycle
  meetup, but now that we have come this far so quickly and have such a
  broad base of participation, it makes sense to ask if you would like to
  attend a face-to-face mid-cycle meetup. I propose the following for your
  consideration:
 
  - Two full days to allow for discussion of Magnum architecture, and
 implementation of our use cases.
  - Located in San Francisco.
  - Open to using Los Angeles or another west coast city to drive down
 travel expenses, if that is a concern that may materially impact
 participation.
  - Dates: February 23+24 or 25+26
 
  If you think you can attend (with 80+% certainty) please indicate your
 availability on the proposed dates using this poll:
 
  http://doodle.com/ddgsptuex5u3394m
 
  Please also add a comment on the Doodle Poll indicating what Country/US
 City you will be traveling FROM in order to attend.
 
  I will tabulate the responses, and follow up to this thread. Feel free
 to respond to this thread to discuss your thoughts about if we should meet,
 or if there are other locations or times that we should consider.
 
  Thanks,
 
  Adrian
 
   PS: I do recognize that some of our contributors reside in countries
  that require visas to travel to the US, and those take a long time to
  acquire. The reverse is also true. For those of you who cannot attend in
  person, we will explore options for remote participation using
  teleconferencing, IRC, Etherpad, etc. for limited portions of the
  agenda.





[openstack-dev] [openstack-docs] Question about the patch of QoS configuration to commit

2015-02-27 Thread liuxinguo
Recently I committed a patch to
openstack-manuals/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml
(https://review.openstack.org/#/c/150761/8/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml)
that contains the QoS configuration, and the reviewers said that I should not
document the QoS configuration as Huawei-specific information but should put
it in the Cloud Admin Guide.

The problem is that I think the QoS configuration I want to commit in the
patch really is Huawei-specific information. For example, I described how to
create a "minIOPS" QoS in the patch because "minIOPS" is exactly the name the
user must use with the Huawei driver. So I think this configuration really is
Huawei-specific, and it may not be appropriate to put it in the Cloud Admin
Guide.

I am not sure where the QoS configuration should go, and I would be very
appreciative if anyone can give me some ideas.

Thanks and best regards,
Liu


Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-27 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2015-02-27 09:25:37 -0800:
 On 02/27/2015 03:54 AM, Thierry Carrez wrote:
  Doug Hellmann wrote:
  Maybe some of the folks in the meeting who felt more strongly that it
  should be a separate document can respond with their thoughts?
  
  I don't feel very strongly and could survive this landing in
  openstack-specs. My objection was the following:
  
  - Specs are for designing the solution and implementation plan to a
  specific problem. They are mainly used by developers and reviewers
  during implementation as a clear reference rationale for change and
  approved plan. Once they are fully implemented, they are kept for
  history purpose, not for constant reference.
  
  - Guidelines/developer doc are for all developers (old and new) to
  converge on best practices on topics that are not directly implemented
  as hacking rules. They are constantly used by everyone (not just
  developers/reviewers of a given feature) and never become history.
  
  Putting guidelines doc in the middle of specs makes it a bit less
  discoverable imho, especially by our new developers. It's harder to
  determine which are still current and you should read. An OpenStack
  developer doc sounds like a much better entry point.
  
  That said, the devil is in the details, and some efforts start as specs
  (for existing code to catch up with the recommendation) and become
  guidelines (for future code being written). That is the case of the log
  levels spec: it is both a spec and a guideline. Personally I wouldn't
  object if that was posted in both areas, or if the relevant pieces were
  copied, once the current code has caught up, from the spec to a dev
  guideline.
  
  In the eventlet case, it's only a set of best practices / guidelines:
  there is no specific problem to solve, no catch-up plan for existing
  code to implement. Only a collection of recommendations if you get to
  write future eventlet-based code. Those won't start or end. Which is why
  I think it should go straight to a developer doc.
  
 
 Well, this whole spec arose because we found out there was existing code
 that was doing bad things with eventlet monkey patching that needed to
 be fixed.  The specific problem is actually being worked concurrently
 with the spec because everyone involved has agreed on a solution, which
 became one of the guidelines in the spec.  I'd be surprised if there
 aren't other projects that need similar changes to be in line with the
 new recommendations though.  I'd hope that future projects will follow
 the guidelines, but they were actually written for the purpose of
 eliminating as many potential eventlet gotchas in our _current_ code as
 possible.  Coming up with a specific list of changes needed is tough
 until we have agreement on the best practices though, which is why the
 first work item is a somewhat vague audit and fix all the things point.
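The "audit and fix all the things" work item mentioned above can at least be mechanized a little. This is a hand-rolled sketch, not anything from the spec: a helper that reports which monkey-patchable modules were already imported, which you could call just before `eventlet.monkey_patch()` (the module list here is illustrative, not the spec's actual rule set):

```python
import sys

# Modules eventlet commonly monkey-patches; importing them before
# eventlet.monkey_patch() runs is a classic source of subtle bugs.
PATCHED_MODULES = ('socket', 'threading', 'time', 'select')


def already_imported(modules=PATCHED_MODULES):
    """Return the listed modules that are already present in
    sys.modules, i.e. were imported before this check ran."""
    return [name for name in modules if name in sys.modules]


# In a real service you would log this right before monkey patching.
print(already_imported())
```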
 
 Personally, I would expect most best practice/guideline type specs to be
 similar.  Nobody's going to take the time to write up a spec about
 something everyone's already doing - they're going to do it because one
 or a few projects have found something that works well and they think
 everyone should be doing it.  So I think your point about most of these
 things moving from spec to guideline throughout their lifetime is spot
 on, I'm just wondering if it's worth complicating the workflow for that
 process.  Herding the cats for something big like the log guidelines is
 hard enough without requiring two separate documents for the immediate
 work and the long-term information.
 
 That said, I agree with the points about publishing this stuff under a
 developer reference doc rather than specs, and if that can't be done in
 a single repo maybe we do have to split.  I'd still prefer to keep it
 all in one repo though - moving a doc between directories is a lot
 simpler than moving it between repos (and also doesn't lose any previous
 discussion in Gerrit).
 

There are numerous wiki pages that hold a wealth of knowledge but have
very poor history attached, such as:

  * https://wiki.openstack.org/wiki/Python3
  * https://wiki.openstack.org/wiki/GitCommitMessages
  * https://wiki.openstack.org/wiki/Gerrit_Workflow
  * https://wiki.openstack.org/wiki/Getting_The_Code
  * https://wiki.openstack.org/wiki/Testr

Just having these in git would be useful, and having the full change
history, with the same care given to commit messages and with reviewers,
would, I think, improve the content and usability of these pages.

Since these are not even close to specs, but make excellent static
documents, I think having them in a cross-project developer
documentation repository makes a lot of sense.

That said, I do think we would need a much lower bar for reviews (maybe
just one +2, but let a change sit at least 3 days or so to allow for
exposure).

I'd be quite happy to help with an effort to convert the wiki pages
above into rst and move forward with things 

Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-02-27 Thread Huangyong (Oliver)
Hi Ruby,

  I agree with your point below:
  "The policy (warning): when will it be evaluated? This should be done
periodically? Then if the table has even one True entry, then the action should
be to generate the LP, solve, activate the migrations etc."

=> The "LP" cannot be generated when the VM-placement engine receives the
policy snippet.

 When we have multiple constraints and want to find solutions that meet all
of them, for example VM placement, there may be a huge number of candidates
to check and filter.  In theory, this kind of problem is NP-complete/NP-hard.
Many methods use a SAT solver to do the work.
 I also want to briefly introduce the Rule-X blueprint proposal for Congress.
It is a kind of enhanced SAT solver designed to solve this class of problem:
https://blueprints.launchpad.net/congress/+spec/rule-x
We are planning to give a presentation at the upcoming OpenStack summit in
Vancouver in May, presented by John Strassner.  I hope you will find it
interesting.

Best Regards,
Oliver
---
Huangyong (Oliver)
Network research, Huawei

From: ruby.krishnasw...@orange.com [mailto:ruby.krishnasw...@orange.com]
Sent: Friday, February 27, 2015 10:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress][Delegation] Initial workflow design

My first suggestion: why don't we set up a call together with Ramki, Yathi
and Debo as soon as possible?

- How to go forward concretely with the 8 steps for the PoC (details within
  each step)
  - Including Nova integration points
- Identify "friction points" in the details above to resolve beyond the PoC


Tim: Where does the rest of the program originate?  I’m not saying the entire 
LP program is generated from the Datalog constraints; some of it is generated 
by the solver independent of the Datalog.  In the text, I gave the example of 
defining hMemUse[j].
Tim: The VM-placement engine does 2 things: (i) translates Datalog to LP and
(ii) generates additional LP constraints.  (Both work items could leverage any
constraints that are built in to a specific solver, e.g. the solver-scheduler.)
The point is that there are 2 distinct, conceptual origins of the LP
constraints: those that represent the Datalog and those that codify the domain.
Tim: Each domain-specific solver can do whatever it wants, so it's not clear to
me what the value of choosing a modeling language actually is - unless we want
to build a library of common functionality that makes the construction of
domain-specific engine (wrappers) easier.  I'd prefer to spend our energy
understanding whether the proposed workflow/interface works for a couple of
different domain-specific policy engines, OR to flesh this one out and build it.




=> The value of choosing a modeling language is related to how best to
automate translations from Datalog constraints (to LP)?

- We can look for one unique way of generation, rather than having "some of it
  generated by the VM-placement engine solver independent of the Datalog".
- Datalog imposes most constraints (== policies).
- Two constraints are not "policies":
  - A VM is allocated to only one host.
  - Host capacity is not exceeded.
    - Over-subscription

=> Otherwise, what was your suggestion? As follows?

- Use (extend) the framework the nova-solver-scheduler currently implements
  (itself using PuLP). This framework specifies an API to write constraints
  and cost functions (in a domain-specific way). Modifying this framework:
  - To read data in from the DSE
  - To obtain the cost function from Datalog (e.g. minimize Y[host1]...)
  - To obtain Datalog constraints (e.g. the 75% memory allocation constraint
    for hosts of the special zone)
- We need to specify the "format" for this? It will likely be a string of the
  form (?):
  - "hMemUse[0] - 0.75*hMemCap[0] <= 100*y[0]", "Memory allocation constraint
    on Host 0",


=> From your doc (page 5, section 4):


warning(id) :-
nova:host(id, name, service, zone, memory_capacity),
legacy:special_zone(zone),
ceilometer:statistics(id, memory, avg, count, duration,
durstart, durend, max, min, period, perstart, perend,
sum, unit),
avg > 0.75 * memory_capacity


Notice that this is a soft constraint, identified by the use of warning instead 
of error.  When compiling to LP, the VM-placement engine will attempt to 
minimize the number of rows in the warning table.  That is, for each possible 
row r it will create a variable Y[r] and assign it True if the row is a warning 
and False otherwise.

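The Y[r] encoding above can be illustrated with a toy, solver-free sketch. All data and names here are made up; a real VM-placement engine would emit these as LP constraints for a solver (e.g. via PuLP, mentioned earlier in the thread) rather than evaluate them by hand:

```python
# Hand-rolled illustration of the soft constraint: Y[r] is 1 when host r
# violates the 75% memory policy, 0 otherwise.  The engine's objective is
# to minimize sum(Y), i.e. the number of rows in the warning table.
hosts = [
    {'name': 'host0', 'mem_avg': 80.0, 'mem_capacity': 100.0},
    {'name': 'host1', 'mem_avg': 40.0, 'mem_capacity': 100.0},
]

# Indicator per host: avg > 0.75 * memory_capacity triggers a warning.
Y = {h['name']: int(h['mem_avg'] > 0.75 * h['mem_capacity'])
     for h in hosts}

objective = sum(Y.values())  # number of warnings to minimize
print(Y, objective)  # -> {'host0': 1, 'host1': 0} 1
```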

The policy (warning) : when will it be evaluated? This should be done 
periodically? Then if the table has even one True entry, then the action should 
be to generate the LP, solve, activate the migrations etc.

=> The "LP" cannot be generated when the VM-placement engine receives the
policy snippet.


Ruby


From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Thursday, 26 

Re: [openstack-dev] [openstack-docs] Question about the patch of QoS configuration to commit

2015-02-27 Thread Anne Gentle
On Fri, Feb 27, 2015 at 8:12 PM, liuxinguo liuxin...@huawei.com wrote:

  Recently I have commit a patch in “openstack-manuals/
 doc/config-reference/block-storage/drivers/huawei-storage-driver.xml
 https://review.openstack.org/#/c/150761/8/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml”
 contains the QoS configuration and the reviewers said that I should not put
 the QoS configuration as Huawei specific information but in the Cloud Admin
 Guide.



 The problem is I think the QoS configuration I want to commit in the patch
 might be really Huawei specific information. For example, I described how
 to create “minIOPS” QoS in the patch because the name “minIOPS” is the
 exactly name the user should use in Huawei driver. So I think these
 configuration is really Huawei specific information and I think maybe it is
 not appropriate to put these information in “Cloud Admin Guide”.




I looked through the patch at
https://review.openstack.org/#/c/150761/2/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml
to try to determine the concern Andreas had.

Is Quality of Service policy only used for the Huawei storage driver? Even
if so, there are examples of QoS drivers for networking here:

http://docs.openstack.org/admin-guide-cloud/content/section_plugin_specific_extensions.html

With that as a pattern, please patch here to add QoS information for
storage drivers.

New file like this one:
https://github.com/openstack/openstack-manuals/blob/master/doc/admin-guide-cloud/blockstorage/section_consistency_groups.xml

with an xi:include statement in this file:

https://github.com/openstack/openstack-manuals/blob/master/doc/admin-guide-cloud/ch_blockstorage.xml
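Concretely, the xi:include line Anne describes would look something like the following (the file name section_qos.xml is hypothetical, chosen to mirror the consistency-groups example above):

```xml
<!-- In doc/admin-guide-cloud/ch_blockstorage.xml: pull the new QoS
     section in the same way section_consistency_groups.xml is
     included.  The href below is only an example. -->
<xi:include href="blockstorage/section_qos.xml"/>
```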

Hope this direction makes sense.
Anne


  I am confused where to put the QoS configuration and I would be very
 appreciative if anyone can give me some ideas.



 Thanks and best regards,

 Liu





-- 
Anne Gentle
annegen...@justwriteclick.com


[openstack-dev] [Devstack] Can't start service nova-novncproxy

2015-02-27 Thread Li, Chen
Hi all,

I'm trying to install a fresh all-in-one OpenStack environment with devstack.
After the installation all the services look fine, but I can't open an
instance console in Horizon.

I did a little checking and found that the nova-novncproxy service was not
started!  Does anyone have an idea why this happened?

Here is my local.conf : http://paste.openstack.org/show/183344/
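One thing worth checking (an assumption on my part, since devstack's defaults have varied between releases): whether the noVNC proxy service is enabled at all in local.conf, e.g. something like:

```ini
# Hypothetical local.conf fragment -- a common cause of a missing
# nova-novncproxy is that the n-novnc service was never enabled.
[[local|localrc]]
enable_service n-novnc n-cauth
```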

My os is:
Ubuntu 14.04 trusty
3.13.0-24-generic

Thanks.
-chen



Re: [openstack-dev] [nova] Nova compute will delete all your instances if you change its hostname

2015-02-27 Thread Dan Smith
Did we really need another top-level thread for this?

 1. _destroy_evacuated_instances() should do a better job of sanity 
 checking before performing such a drastic action.

I agree, and no amount of hostname checking will actually address this
problem. If we don't have a record of an evacuation having been scheduled
for the host, then there is no legitimate reason to delete data, IMHO.
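The sanity check being asked for could be as simple as refusing to act without a positive record. A toy sketch (all names and record shapes here are hypothetical, not nova's actual data model):

```python
def instances_safe_to_destroy(local_instances, evacuation_records):
    """Only instances with an explicit record of a completed evacuation
    away from this host are candidates for deletion; everything else is
    left alone, even if it looks out of place."""
    evacuated = {r['instance_uuid'] for r in evacuation_records
                 if r.get('status') == 'done'}
    return [inst for inst in local_instances
            if inst['uuid'] in evacuated]


# No record of an evacuation for 'def', so it is never touched.
records = [{'instance_uuid': 'abc', 'status': 'done'}]
instances = [{'uuid': 'abc'}, {'uuid': 'def'}]
print(instances_safe_to_destroy(instances, records))  # -> [{'uuid': 'abc'}]
```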

 2. The underlying issue is the definition and use of instance.host, 
 instance.node, compute_node.host and compute_node.hypervisor_hostname.

I disagree. I think the underlying issues are:

1. Evacuate assumes too much
2. Nova has a model of one compute per hypervisor and we have some
   drivers that make it easy to violate that in dangerous ways, and
   which don't do their due diligence to avoid catastrophe.

 Note that in the above case the libvirt driver changed the hypervisor
 identifier despite the fact that the hypervisor had not changed, only
 its communication endpoint.

I'd argue they're one and the same, and that's just fine. We just
shouldn't erroneously delete things when that happens unexpectedly.

 VMware[1] and Ironic don't require any changes here.

But they're broken! If they are managing things from another point that
can be duplicated and they don't provide assurances that it's not being
done twice, then that's a problem. I (and others) have argued that
nova's model is one compute per hypervisor. I don't think it should be
up to nova to ensure that, I think it should be up to the driver.

Nova needs to stop deleting things based on cheap guessing. However, if
two hypervisor drivers claim they're different and that they have
deleted running instances (which is what is going on here), I have
little sympathy.

 Other drivers will need to be modified so that get_available_nodes() 
 returns a persistent value rather than just the hostname.

-1 on making the non-problematic drivers (potentially) maintain state
and leaving the problematic ones unchanged.

 A reasonable default implementation of this would be to write a uuid
 to a file which lives with VM data and return its contents. If the 
 hypervisor has a native concept of a globally unique identifier,
 that should be used instead.

Those drivers shouldn't have to maintain state. And they already have a
unique identifier: the hostname.

--Dan





Re: [openstack-dev] [nova] Nova compute will delete all your instances if you change its hostname

2015-02-27 Thread Daniel P. Berrange
On Fri, Feb 27, 2015 at 08:57:53AM -0800, Dan Smith wrote:
  Note that in the above case the libvirt driver changed the hypervisor
  identifier despite the fact that the hypervisor had not changed, only
  its communication endpoint.
 
 I'd argue they're one and the same, and that's just fine. We just
 shouldn't erroneously delete things when that happens unexpectedly.

[snip]

  A reasonable default implementation of this would be to write a uuid
  to a file which lives with VM data and return its contents. If the 
  hypervisor has a native concept of a globally unique identifier,
  that should be used instead.
 
 Those drivers shouldn't have to maintain state. And they already have a
 unique identifier: the hostname.

The hostname is a unique identifier, however, it isn't a /stable/ unique
identifier because it is determined at the whim of the administrator. We
could instead use the host UUID which is both unique and stable. That
would avoid nova having to worry about administrator reconfigurations in
this respect.  IMHO if we can make nova more robust against admin changes
like this, it'd be a good thing.  I agree that we don't want to maintain
state and indeed this is not needed - every HV I know of has a concept of
a host UUID it can report that'd do the job just fine.
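Daniel's suggestion might look something like the sketch below: prefer a hardware-reported UUID, fall back to the hostname. The sysfs path is Linux-specific and the helper is purely illustrative, not nova's actual behaviour:

```python
import socket


def stable_host_id(dmi_path='/sys/class/dmi/id/product_uuid'):
    """Return a host identifier: the DMI product UUID when readable,
    otherwise the hostname.  Illustrative only."""
    try:
        with open(dmi_path) as f:
            return f.read().strip()
    except (IOError, OSError):
        # Not Linux, no DMI, or no permission: fall back to hostname.
        return socket.gethostname()


print(stable_host_id())
```

Note Dan's objection still applies: a DMI UUID follows the motherboard, not the disks, so neither choice is unambiguously "stable" for every failure scenario.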

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [tempest] UUID Tagging Requirement and Big Bang Patch

2015-02-27 Thread Chris Hoge
This work has landed. New tests will now be gated on the existence of an
idempotent_id.
If you have open submissions to Tempest, there's a good chance you'll have
to rebase.
-Chris

 On Feb 26, 2015, at 2:30 PM, Chris Hoge chris+openstack...@openstack.org 
 wrote:
 
 Update on this:
 
 The tools for checking for and adding UUIDs has been completed and reviewed.
 
 https://review.openstack.org/#/c/157273
 
 A new patch has been sent up that adds UUIDs to all tests
 
 https://review.openstack.org/#/c/159633
 
 Note that after discussion with the openstack-qa team the decorator has
 changed to be of the form
 
 @test.idempotent_id('12345678-1234-5678-1234-123456789abc')
 
 Once the second patch lands you will most certainly need to rebase your work
 and include ids in all new tests. When refactoring tests, please preserve the
 id value so that various projects (Defcore, Refstack, Rally) can track the 
 actual
 location of tests for capability testing.
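The shape of such a decorator can be re-created in a few lines. This is an illustrative sketch, not Tempest's real implementation (which also feeds the id into test reporting); it just shows how the uuid4-style id is validated and attached:

```python
import uuid


def idempotent_id(test_id):
    """Attach a stable, unique id to a test function; reject malformed
    ids up front so the gate can enforce well-formed UUIDs."""
    uuid.UUID(test_id)  # raises ValueError on a malformed id

    def decorator(func):
        func.idempotent_id = test_id
        return func
    return decorator


@idempotent_id('12345678-1234-5678-1234-123456789abc')
def test_server_create():
    pass


print(test_server_create.idempotent_id)
# -> 12345678-1234-5678-1234-123456789abc
```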
 
 Thanks,
 Chris Hoge
 Interop Engineer
 OpenStack Foundation
 
 On Feb 22, 2015, at 11:47 PM, Chris Hoge chris+openstack...@openstack.org wrote:
 
 Once the gate settles down this week I’ll be sending up a major 
 “big bang” patch to Tempest that will tag all of the tests with unique
 identifiers, implementing this spec: 
 
 https://github.com/openstack/qa-specs/blob/master/specs/meta-data-and-uuid-for-tests.rst
 
 The work in progress is here, and includes a change to the gate that
 every test developer should be aware of.
 
 https://review.openstack.org/#/c/157273/
 
 All tests will now require a UUID metadata identifier, generated from the
 uuid.uuid4 function. The form of the identifier is a decorator like:
 
 @test.meta(uuid='12345678-1234-5678-1234-567812345678')
 
 To aid in hacking rules, the @test.meta decorator must be directly before the
 function definition and after the @test.services decorator, which itself
 must appear after all other decorators.
 
 The gate will now require that every test have a uuid that is indeed
 unique.
 
 This work is meant to give a stable point of reference to tests that will
 persist through test refactoring and moving.
 
 Thanks,
 Chris Hoge
 Interop Engineer
 OpenStack Foundation
 



Re: [openstack-dev] [nova] Nova compute will delete all your instances if you change its hostname

2015-02-27 Thread Dan Smith
 The hostname is a unique identifier, however, it isn't a /stable/ 
 unique identifier because it is determined at the whim of the 
 administrator.

Honestly, I think it's as stable as the administrator wants it to be. At
the level of automation that any reasonable deployment will be running,
I think it's the right thing to use, personally.

 We could instead use the host UUID which is both unique and stable. 
 That would avoid nova having to worry about administrator 
 reconfigurations in this respect.

Assuming you're talking about the DMI UUID, this is a problem if I have
a failing DIMM, swap the disks out of a box into an identical box (which
will have a different UUID) and reboot. If not, then it's an OS-level
UUID, which is persisted state, and which would change if I re-install
my boxes (puppet run) and expect my instances to remain because I
persist images, config, etc.

 IMHO if we can make nova more robust against admin changes like this,
 it'd be a good thing.  I agree that we don't want to maintain state
 and indeed this is not needed - every HV I know of has a concept of a
 host UUID it can report that'd do the job just fine.

The hostname is the unique identifier that is under the control of the
administrator. It's easily adjustable if need-be, it's as static as it
needs to be, and it's well-understood.

--Dan





Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-27 Thread Ben Nemec
On 02/27/2015 03:54 AM, Thierry Carrez wrote:
 Doug Hellmann wrote:
 Maybe some of the folks in the meeting who felt more strongly that it
 should be a separate document can respond with their thoughts?
 
 I don't feel very strongly and could survive this landing in
 openstack-specs. My objection was the following:
 
 - Specs are for designing the solution and implementation plan to a
 specific problem. They are mainly used by developers and reviewers
 during implementation as a clear reference rationale for change and
approved plan. Once they are fully implemented, they are kept for
historical purposes, not for constant reference.
 
 - Guidelines/developer doc are for all developers (old and new) to
 converge on best practices on topics that are not directly implemented
 as hacking rules. They are constantly used by everyone (not just
 developers/reviewers of a given feature) and never become history.
 
 Putting guidelines doc in the middle of specs makes it a bit less
 discoverable imho, especially by our new developers. It's harder to
 determine which are still current and you should read. An OpenStack
 developer doc sounds like a much better entry point.
 
 That said, the devil is in the details, and some efforts start as specs
 (for existing code to catch up with the recommendation) and become
 guidelines (for future code being written). That is the case of the log
 levels spec: it is both a spec and a guideline. Personally I wouldn't
 object if that was posted in both areas, or if the relevant pieces were
 copied, once the current code has caught up, from the spec to a dev
 guideline.
 
 In the eventlet case, it's only a set of best practices / guidelines:
 there is no specific problem to solve, no catch-up plan for existing
 code to implement. Only a collection of recommendations if you get to
 write future eventlet-based code. Those won't start or end. Which is why
 I think it should go straight to a developer doc.
 

Well, this whole spec arose because we found out there was existing code
that was doing bad things with eventlet monkey patching that needed to
be fixed.  The specific problem is actually being worked concurrently
with the spec because everyone involved has agreed on a solution, which
became one of the guidelines in the spec.  I'd be surprised if there
aren't other projects that need similar changes to be in line with the
new recommendations though.  I'd hope that future projects will follow
the guidelines, but they were actually written for the purpose of
eliminating as many potential eventlet gotchas in our _current_ code as
possible.  Coming up with a specific list of changes needed is tough
until we have agreement on the best practices though, which is why the
first work item is a somewhat vague "audit and fix all the things" point.

Personally, I would expect most best practice/guideline type specs to be
similar.  Nobody's going to take the time to write up a spec about
something everyone's already doing - they're going to do it because one
or a few projects have found something that works well and they think
everyone should be doing it.  So I think your point about most of these
things moving from spec to guideline throughout their lifetime is spot
on, I'm just wondering if it's worth complicating the workflow for that
process.  Herding the cats for something big like the log guidelines is
hard enough without requiring two separate documents for the immediate
work and the long-term information.

That said, I agree with the points about publishing this stuff under a
developer reference doc rather than specs, and if that can't be done in
a single repo maybe we do have to split.  I'd still prefer to keep it
all in one repo though - moving a doc between directories is a lot
simpler than moving it between repos (and also doesn't lose any previous
discussion in Gerrit).

-Ben



[openstack-dev] [nova] Nova compute will delete all your instances if you change its hostname

2015-02-27 Thread Matthew Booth
Gary Kotton originally posted this bug against the VMware driver:

https://bugs.launchpad.net/nova/+bug/1419785

I posted a proposed patch to fix this here:

https://review.openstack.org/#/c/158269/1

However, Dan Smith pointed out that the bug can actually be triggered
against any driver in a manner not addressed by the above patch alone. I
have confirmed this against a libvirt setup as follows:

1. Create some instances
2. Shutdown n-cpu
3. Change hostname
4. Restart n-cpu

Nova compute will delete all instances in libvirt, but continue to
report them as ACTIVE and Running.

There are 2 parts to this issue:

1. _destroy_evacuated_instances() should do a better job of sanity
checking before performing such a drastic action.

2. The underlying issue is the definition and use of instance.host,
instance.node, compute_node.host and compute_node.hypervisor_hostname.
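One possible shape for the stricter guard in (1) is sketched below. The function name and the idea of consulting an explicit evacuation-migration record are assumptions for illustration, not nova's actual internals:

```python
def should_destroy_locally(instance_host, instance_uuid, local_host,
                           evacuation_records):
    """Hedged sketch of a stricter guard for _destroy_evacuated_instances():
    require positive evidence of an evacuation (an explicit record) rather
    than destroying guests merely because instance.host no longer matches
    our hostname, which also happens after an innocent hostname change."""
    if instance_host == local_host:
        # The instance still belongs to this host; never destroy it.
        return False
    # Only destroy when an evacuation record exists for this instance.
    return instance_uuid in evacuation_records
```

With a guard like this, a renamed host sees a hostname mismatch but no evacuation record, and leaves its guests alone.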

(1) is belt and braces. It's very important, but I want to focus on (2)
here. Instantly you'll notice some inconsistent naming here, so to clarify:

* instance.host == compute_node.host == Nova compute's 'host' value.
* instance.node == compute_node.hypervisor_hostname == an identifier
which represents a hypervisor.

Architecturally, I'd argue that these mean:

* Host: A Nova communication endpoint for a hypervisor.
* Hypervisor: The physical location of a VM.

Note that in the above case the libvirt driver changed the hypervisor
identifier despite the fact that the hypervisor had not changed, only
its communication endpoint. I propose the following:

* ComputeNode describes 1 hypervisor.
* ComputeNode maps 1 hypervisor to 1 compute host.
* A ComputeNode is identified by a hypervisor_id.
* hypervisor_id represents the physical location of running VMs,
independent of a compute host.

We've renamed compute_node.hypervisor_hostname to
compute_node.hypervisor_id. This resolves some confusion, because it
asserts that the identity of the hypervisor is tied to the data
describing VMs, not the host which is running it. In fact, for the
VMware and Ironic drivers it has never been a hostname.

VMware[1] and Ironic don't require any changes here. Other drivers will
need to be modified so that get_available_nodes() returns a persistent
value rather than just the hostname. A reasonable default implementation
of this would be to write a uuid to a file which lives with VM data and
return its contents. If the hypervisor has a native concept of a
globally unique identifier, that should be used instead.

ComputeNode.hypervisor_id is unique. The hypervisor is unique (there is
physically only 1 of it) so it does not make sense to have multiple
representations of it and its associated resources.

An Instance's location is its hypervisor, wherever that may be, so
Instance.host could be removed. This isn't strictly necessary, but it is
redundant as the communication endpoint is available via ComputeNode. If
we wanted to support the possibility of changing a communication
endpoint at some point, it would also make that operation trivial.
Thinking blue sky, it would also open the future possibility for
multiple communication endpoints for a single hypervisor.

There is a data migration issue associated with changing a driver's
reported hypervisor id. The bug linked below fudges it, but if we were
doing it for all drivers I believe it could be handled efficiently by
passing the instance list already collected by ComputeManager.init_host
to the driver at startup.

My proposed patch above fixes a potentially severe issue for users of
the VMware and Ironic drivers. In conjunction with a move to a
persistent hypervisor id for other drivers, it also fixes the related
issue described above across the board. I would like to go forward with
my proposed fix as it has an immediate benefit, and I'm happy to work on
the persistent hypervisor id for other drivers.

Matt

[1] Modulo bugs: https://review.openstack.org/#/c/159481/
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-27 Thread Deepak Shetty
On Fri, Feb 27, 2015 at 4:02 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Wed, Feb 25, 2015 at 11:48 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 8:42 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley fu...@yuggoth.org
 wrote:

 On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
 [...]
  Run 2) We removed glusterfs backend, so Cinder was configured with
  the default storage backend i.e. LVM. We re-created the OOM here
  too
 
 So that proves that glusterfs doesn't cause it, as it's happening
  without glusterfs too.

 Well, if you re-ran the job on the same VM then the second result is
 potentially contaminated. Luckily this hypothesis can be confirmed
 by running the second test on a fresh VM in Rackspace.


 Maybe true, but we did the same on an hpcloud provider VM too, and both times
 it ran successfully with glusterfs as the cinder backend. Also, before
 starting the 2nd run, we did unstack and saw that free memory went back to
 5G+ before re-invoking your script. I believe the contamination could result
 in some additional testcase failures (which we did see) but shouldn't affect
 whether the system can OOM or not, since that's a runtime thing.

 I see that the VM is up again. We will execute the 2nd run afresh now
 and update
 here.


 Ran tempest configured with the default backend, i.e. LVM, and was able to
 recreate the OOM issue, so running tempest without gluster against a fresh VM
 reliably recreates the OOM issue; snip below from syslog.

 Feb 25 16:58:37 devstack-centos7-rax-dfw-979654 kernel: glance-api
 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0

 Had a discussion with clarkb on IRC and given that F20 is discontinued,
 F21 has issues with tempest (under debug by ianw)
 and centos7 also has issues on rax (as evident from this thread), the
 only option left is to go with an ubuntu-based CI job, which
 BharatK is working on now.


 Quick Update:

 Cinder-GlusterFS CI job on ubuntu was added (
 https://review.openstack.org/159217)

 We ran it 3 times against our stackforge repo patch @
 https://review.openstack.org/159711
 and it works fine (2 testcase failures, which are expected and we're
 working towards fixing them)

 For the logs of the 3 experimental runs, look @

 http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/

 Of the 3 jobs, 1 was scheduled on rax and 2 on hpcloud, so it's working
 nicely across the different cloud providers.


Clarkb, Fungi,
  Given that the ubuntu job is stable, I would like to propose adding it as
an experimental job to openstack cinder while we work on fixing the 2 failed
test cases in parallel.

thanx,
deepak


Re: [openstack-dev] [Keystone] How to check admin authentication?

2015-02-27 Thread Dolph Mathews
On Fri, Feb 27, 2015 at 8:39 AM, Dmitry Tantsur dtant...@redhat.com wrote:

 Hi all!

 This (presumably) pretty basic question has been torturing me for several
 months now, so I kindly seek help here.

 I'm working on a Flask-based service [1] and I'd like to use Keystone
 tokens for authentication. This is an admin-only API, so we need to check
 for an admin role. We ended up with code [2] first accessing Keystone with
 a given token and (configurable) admin tenant name, then checking 'admin'
 role. Things went well for a while.

 Now I'm writing an Ironic driver accessing the API of [1]. Pretty naively, I
 was trying to use the Ironic service user credentials that we use for
 accessing all other services. For TripleO-based installations it's a user
 with the name 'ironic' and a special tenant 'service'. Here is where the
 problems are. Our code perfectly authenticates a mere user (that has tenant
 'admin'), but asks Ironic to go away.

 We've spent some time researching documentation and keystone middleware
 source code, but didn't find any more clues. Neither did we find a way to
 use keystone middleware without rewriting half of the project. What we need is
 2 simple things in a simple Flask application:
 1. validate a token
 2. make sure it belongs to admin


I'm not really clear on what problem you're having, because I'm not sure if
you care about an admin username, admin tenant name, or admin role
name. If you're implementing RBAC, you only really need to care about the
user having an admin role in their list of roles.

You can wrap your flask application with a configured instance of
auth_token middleware; this is about the simplest way to do it, and this
also demos the environment variables exposed to your application that you
can use to validate authorization:


https://github.com/dolph/keystone-deploy/blob/master/playbooks/roles/http/templates/echo.py#L33-L41
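For illustration, the admin check against those environment variables can be reduced to a small helper. The HTTP_X_IDENTITY_STATUS and HTTP_X_ROLES variables are the ones auth_token exposes to the wrapped application; the helper name itself is mine:

```python
def is_admin_request(environ):
    """Decide whether a request that passed through keystonemiddleware's
    auth_token filter is an authenticated admin. The filter validates the
    token and records the outcome in the WSGI environment."""
    # 1. The token must have validated successfully.
    if environ.get('HTTP_X_IDENTITY_STATUS') != 'Confirmed':
        return False
    # 2. The user must carry the 'admin' role in the comma-separated list.
    roles = environ.get('HTTP_X_ROLES', '')
    return 'admin' in (r.strip().lower() for r in roles.split(','))
```

In Flask this can be wired up as a before_request hook that aborts with 401/403 when the check fails; request.environ carries the same variables.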



 I'll thankfully appreciate any ideas how to fix our situation.
 Thanks in advance!

 Dmitry.

 [1] https://github.com/stackforge/ironic-discoverd
 [2] https://github.com/stackforge/ironic-discoverd/blob/master/
 ironic_discoverd/utils.py#L50-L65



Re: [openstack-dev] [nova] Nova compute will delete all your instances if you change its hostname

2015-02-27 Thread Daniel P. Berrange
On Fri, Feb 27, 2015 at 04:24:36PM +, Matthew Booth wrote:
 Gary Kotton originally posted this bug against the VMware driver:
 
 https://bugs.launchpad.net/nova/+bug/1419785
 
 I posted a proposed patch to fix this here:
 
 https://review.openstack.org/#/c/158269/1
 
 However, Dan Smith pointed out that the bug can actually be triggered
 against any driver in a manner not addressed by the above patch alone. I
 have confirmed this against a libvirt setup as follows:
 
 1. Create some instances
 2. Shutdown n-cpu
 3. Change hostname
 4. Restart n-cpu
 
 Nova compute will delete all instances in libvirt, but continue to
 report them as ACTIVE and Running.
 
 There are 2 parts to this issue:
 
 1. _destroy_evacuated_instances() should do a better job of sanity
 checking before performing such a drastic action.
 
 2. The underlying issue is the definition and use of instance.host,
 instance.node, compute_node.host and compute_node.hypervisor_hostname.
 
 (1) is belt and braces. It's very important, but I want to focus on (2)
 here. Instantly you'll notice some inconsistent naming here, so to clarify:
 
 * instance.host == compute_node.host == Nova compute's 'host' value.
 * instance.node == compute_node.hypervisor_hostname == an identifier
 which represents a hypervisor.
 
 Architecturally, I'd argue that these mean:
 
 * Host: A Nova communication endpoint for a hypervisor.
 * Hypervisor: The physical location of a VM.
 
 Note that in the above case the libvirt driver changed the hypervisor
 identifier despite the fact that the hypervisor had not changed, only
 its communication endpoint. I propose the following:
 
 * ComputeNode describes 1 hypervisor.
 * ComputeNode maps 1 hypervisor to 1 compute host.
 * A ComputeNode is identified by a hypervisor_id.
 * hypervisor_id represents the physical location of running VMs,
 independent of a compute host.
 
 We've renamed compute_node.hypervisor_hostname to
 compute_node.hypervisor_id. This resolves some confusion, because it
 asserts that the identity of the hypervisor is tied to the data
 describing VMs, not the host which is running it. In fact, for the
 VMware and Ironic drivers it has never been a hostname.
 
 VMware[1] and Ironic don't require any changes here. Other drivers will
 need to be modified so that get_available_nodes() returns a persistent
 value rather than just the hostname. A reasonable default implementation
 of this would be to write a uuid to a file which lives with VM data and
 return its contents. If the hypervisor has a native concept of a
 globally unique identifier, that should be used instead.

I don't think there's any need to write state in that way. Every hypervisor
I've come across has a way to report a globally unique identifier, which is
typically the host UUID coming from the BIOS, or some equivalent. For libvirt
you can get the host UUID from the capabilities XML, so we could pretty
easily handle that.
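For libvirt, extracting that UUID could look roughly like the sketch below. conn.getCapabilities() in libvirt-python returns a capabilities XML document; the sample document and helper name here are illustrative:

```python
import xml.etree.ElementTree as ET


def host_uuid_from_capabilities(caps_xml):
    """Extract the host UUID from a libvirt capabilities XML document,
    i.e. the string returned by conn.getCapabilities()."""
    root = ET.fromstring(caps_xml)
    node = root.find('./host/uuid')
    return node.text.strip() if node is not None else None


# Abbreviated example of the document shape (real output has many more
# elements under <host>).
SAMPLE_CAPS = """
<capabilities>
  <host>
    <uuid>8a27b6b2-1f18-4d3e-9c31-5f0b7c0e2a11</uuid>
    <cpu><arch>x86_64</arch></cpu>
  </host>
</capabilities>
"""
```

Whether that BIOS-derived UUID survives a disk swap into a different chassis is exactly the trade-off Dan raised earlier in the thread.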

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [api][all] - Openstack.error common library

2015-02-27 Thread Eugeniya Kudryashova
Thanks for the replies!

 Could you elaborate a bit on what you want to have in the headers and why?
Our plan has so far been to have the error code in the message payload so that
it's easily available for users and

 operators.  What would this library you're proposing actually be doing?

I’ve already read the specs proposed in glance about error codes. Actually, I
would prefer not to keep such content in the message payload, at least because
error messages are translated and can change in transit. Also, as far as I
remember, in some openstack projects (I can't say more precisely which) error
messages are generated in strange ways, so putting something new in them would
be hard, and extracting it would also be problematic.

 We're more than happy to take extra hands on this so please follow up the
[log] discussion

 and feel free to contact me (IRC: jokke_) or Rockyg (in cc as well)
 around what has been

 done and planned in case you need more clarification.

Thanks, I’ll contact you via IRC.

 I'm not sure a single library as a home to all of the various error

 messages is the right approach. I thought, based on re-reading the

 thread you link to, that the idea was to come up with a standard schema

 for error payloads and then let the projects fill in the details. We

 might need a library for utility functions, but that wouldn't actually

 include the error messages.

Sure, maybe I phrased it badly. The library definitely shouldn't contain error
messages, errors or anything like that - if it did, it would make managing
errors much more difficult. It should have utilities to put/extract the header
from a response and, as you rightly say, other needed helpers.

On Wed, Feb 25, 2015 at 4:33 PM, Eugeniya Kudryashova 
ekudryash...@mirantis.com wrote:

 Hi, stackers!

 As was suggested in topic [1], using an HTTP header was a good solution
 for communicating common/standardized OpenStack API error codes.

 So I’d like to begin working on a common library, which will collect all
 openstack HTTP API errors, and assign them string error codes. My suggested
 name for library is openstack.error, but please feel free to propose
 something different.

 The other question is where we should allocate such project, in openstack
 or stackforge, or maybe oslo-incubator? I think such project will be too
 massive (due to dealing with lots and lots of exceptions)  to allocate it
 as a part of oslo, so I propose developing the project on Stackforge and
 then eventually have it moved into the openstack/ code namespace when the
 other projects begin using the library.

 Let me know your feedback, please!


 [1] -
 http://lists.openstack.org/pipermail/openstack-dev/2015-January/055549.html



Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-27 Thread Miguel Ángel Ajo
Ok, I moved the document here [1], and I will eventually submit another patch
with the testing scripts when those are ready.

Let’s move the discussion to the review!,


Best,
Miguel Ángel Ajo
[1] https://review.openstack.org/#/c/159840/


On Friday, 27 de February de 2015 at 7:03, Kevin Benton wrote:

 Sounds promising. We'll have to evaluate it for feature parity when the time 
 comes.
  
 On Thu, Feb 26, 2015 at 8:21 PM, Ben Pfaff b...@nicira.com 
 (mailto:b...@nicira.com) wrote:
  This sounds quite similar to the planned support in OVN to gateway a
  logical network to a particular VLAN on a physical port, so perhaps it
  will be sufficient.
   
  On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
   If a port is bound with a VLAN segmentation type, it will get a VLAN id 
   and
   a name of a physical network that it corresponds to. In the current 
   plugin,
   each agent is configured with a mapping between physical networks and OVS
   bridges. The agent takes the bound port information and sets up rules to
   forward traffic from the VM port to the OVS bridge corresponding to the
   physical network. The bridge usually then has a physical interface added 
   to
   it for the tagged traffic to use to reach the rest of the network.
  
   On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com 
   (mailto:b...@nicira.com) wrote:
  
What kind of VLAN support would you need?
   
On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
 If OVN chooses not to support VLANs, we will still need the current 
 OVS
 reference anyway so it definitely won't be wasted work.

 On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
 majop...@redhat.com (mailto:majop...@redhat.com) wrote:

 
   Sharing thoughts that I was having:
  
   Maybe during the next summit it's worth discussing the future of the
   reference agent(s); I feel we'll be replicating a lot of work across
   OVN/OVS/RYU(ofagent) and maybe other plugins.
  
   I guess until OVN and its integration are ready we can't stop, so it makes
   sense to keep development on our side. Also, having an independent plugin
   can help us iterate faster for new features, yet I expect that OVN will be
   more fluent at working with OVS and OpenFlow, as their designers have
   a very deep knowledge of OVS under the hood, and it's C. ;)
 
  Best regards,
 
   On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com 
   (mailto:majop...@redhat.com) wrote:
  
   On Thursday, 26 de February de 2015 at 7:48, Miguel Ángel Ajo wrote:
 
   Inline comments follow after this, but I wanted to respond to Brian's
   question which has been cut out:
   We're talking here of doing a preliminary analysis of the networking
   performance, before writing any real code at neutron level.
  
   If that looks right, then we should go into a preliminary (and orthogonal
   to iptables/LB) implementation. At that moment we will be able to examine
   the scalability of the solution in regards to switching openflow rules,
   which is going to be severely affected by the way we handle OF rules
   in the bridge:
     * via OpenFlow, making the agent a "real" OF controller, with the
       current effort to use the ryu framework plugin to do that.
     * via cmdline (would be alleviated with the current rootwrap work, but
       the former one would be preferred).
   Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben
   Pfaff for the explanation, if you're reading this ;-))
  
   Best, Miguel Ángel
 
 
  On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote:
 
  Hi,
 
  The RFC2544 with near zero packet loss is a pretty standard 
  performance
  benchmark. It is also used in the OPNFV project (
 
https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
  ).
 
  Does this mean that OpenStack will have stateful firewalls (or 
  security
  groups)? Any other ideas planned, like ebtables type filtering?
 
   What I am proposing is, in terms of maintaining the statefulness we
   have now regarding security groups (RELATED/ESTABLISHED connections are
   allowed back on open ports), adding a new firewall driver working only
   with OVS+OF (no iptables or linux bridge).
  
   That will be possible (without auto-populating OF rules in opposite
   directions) due to the new connection tracker functionality to be
   eventually merged into ovs.
 
 
  -Tapio
 
  On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
  (mailto:rick.jon...@hp.com)
wrote:
 
  On 02/25/2015 05:52 AM, Miguel 

Re: [openstack-dev] Need help in configuring keystone

2015-02-27 Thread Marco Fargetta
Hi Akshik,

The metadata error is in your SP; if the error were on testshib, you
would not be redirected back after login. Maybe there is a configuration
problem with shibboleth. Try restarting the service and look at the
shibboleth logs. Also check that the testshib metadata is downloaded
correctly, because the error suggests you do not have it.

Cheers,
Marco

On Fri, Feb 27, 2015 at 06:39:30PM +0530, Akshik DBK wrote:
 Hi Marek,
 I've registered with testshib; this is the keystone-apache-error.log entry I get:
 [error] [client 121.243.33.212] No MetadataProvider available., referer: 
 https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO
 From: aks...@outlook.com
 To: openstack-dev@lists.openstack.org
 Date: Fri, 27 Feb 2015 15:56:57 +0530
 Subject: [openstack-dev] Need help in configuring keystone
 
 
 
 
  Hi, I'm new to SAML and trying to integrate keystone with it. I'm using
  Ubuntu 12.04 with Icehouse, following http://docs.openstack.org/developer/k...
  When I try to configure keystone with two IdPs and access
  https://MYSERVER:5000/v3/OS-FEDERATIO... it gets redirected to testshib.org
  and prompts for username and password. When those are given, I get
  shibsp::ConfigurationException at
  https://MYSERVER:5000/Shibboleth.sso/... "No MetadataProvider available."
  
  Here is my shibboleth2.xml content:
  
  <SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
      xmlns:conf="urn:mace:shibboleth:2.0:native:sp:config"
      xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
      xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
      xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
      clockSkew="180">
  
      <ApplicationDefaults entityID="https://MYSERVER:5000/Shibboleth">
          <Sessions lifetime="28800" timeout="3600" checkAddress="false"
                    relayState="ss:mem" handlerSSL="false">
              <SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
                  SAML2 SAML1
              </SSO>
  
              <Logout>SAML2 Local</Logout>
  
              <Handler type="MetadataGenerator" Location="/Metadata" signing="false"/>
              <Handler type="Status" Location="/Status"/>
              <Handler type="Session" Location="/Session" showAttributeValues="false"/>
              <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
          </Sessions>
  
          <Errors supportContact="root@localhost"
                  logoLocation="/shibboleth-sp/logo.jpg"
                  styleSheet="/shibboleth-sp/main.css"/>
  
          <AttributeExtractor type="XML" validate="true" path="attribute-map.xml"/>
          <AttributeResolver type="Query" subjectMatch="true"/>
          <AttributeFilter type="XML" validate="true" path="attribute-policy.xml"/>
          <CredentialResolver type="File" key="sp-key.pem" certificate="sp-cert.pem"/>
  
          <ApplicationOverride id="idp_1" entityID="https://MYSERVER:5000/Shibboleth">
              <Sessions lifetime="28800" timeout="3600" checkAddress="false"
                        relayState="ss:mem" handlerSSL="false">
                  <SSO entityID="https://portal4.mss.internalidp.com/idp/shibboleth" ECP="true">
                      SAML2 SAML1
                  </SSO>
                  <Logout>SAML2 Local</Logout>
              </Sessions>
  
              <MetadataProvider type="XML"
                  uri="https://portal4.mss.internalidp.com/idp/shibboleth"
                  backingFilePath="/tmp/tata.xml" reloadInterval="18"/>
          </ApplicationOverride>
  
          <ApplicationOverride id="idp_2" entityID="https://MYSERVER:5000/Shibboleth">
              <Sessions lifetime="28800" timeout="3600" checkAddress="false"
                        relayState="ss:mem" handlerSSL="false">
                  <SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
                      SAML2 SAML1
                  </SSO>
                  <Logout>SAML2 Local</Logout>
              </Sessions>
  
              <MetadataProvider type="XML"
                  uri="https://idp.testshib.org/idp/shibboleth"
                  backingFilePath="/tmp/testshib.xml" reloadInterval="18"/>
          </ApplicationOverride>
      </ApplicationDefaults>
  
      <SecurityPolicyProvider type="XML" validate="true" path="security-policy.xml"/>
      <ProtocolProvider type="XML" validate="true" reloadChanges="false" path="protocols.xml"/>
  </SPConfig>
  
  Here is my wsgi-keystone config:
  
  WSGIScriptAlias /keystone/main  /var/www/cgi-bin/keystone/main
  WSGIScriptAlias /keystone/admin /var/www/cgi-bin/keystone/admin
  
  <Location /keystone>
      # NSSRequireSSL
      SSLRequireSSL
      Authtype none
  </Location>
  
  <Location /Shibboleth.sso>
      SetHandler shib
  </Location>
  
  <Location /v3/OS-FEDERATION/identity_providers/idp_1/protocols/saml2/auth>
      ShibRequestSetting requireSession 1
      ShibRequestSetting applicationId idp_1
      AuthType shibboleth
      ShibRequireAll On
      ShibRequireSession On
      ShibExportAssertion Off
      Require valid-user
  </Location>
  
  <Location /v3/OS-FEDERATION/identity_providers/idp_2/protocols/saml2/auth>
      ShibRequestSetting requireSession 1
      ShibRequestSetting applicationId idp_2
      AuthType shibboleth
      ShibRequireAll On
      ShibRequireSession On
  

Re: [openstack-dev] Ideas about Openstack Cinder for GSOC 2015

2015-02-27 Thread Anne Gentle
Hi Harry,
I don't see a cinder mentor listed but you could certainly contact the
Project Technical Lead for Cinder, Mike Perez, and see if he knows of a
mentor and possible project.
Anne

On Fri, Feb 27, 2015 at 3:50 AM, harryxiyou harryxi...@gmail.com wrote:

 Hi all,

 I cannot find a suitable OpenStack Cinder idea for GSoC 2015
 here [1]. Could anyone please give me some clues?

 [1] https://wiki.openstack.org/wiki/GSoC2015


 Thanks, Harry





-- 
Anne Gentle
annegen...@justwriteclick.com


Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle [metrics]

2015-02-27 Thread Kyle Mestery
On Fri, Feb 27, 2015 at 4:02 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Fri, Feb 27, 2015 at 09:51:34AM +1100, Michael Still wrote:
  On Fri, Feb 27, 2015 at 9:41 AM, Stefano Maffulli stef...@openstack.org
 wrote:
 
   Does it make sense to purge old stuff regularly so we have a better
   overview? Or maybe we should chart a distribution of age of proposed
   changesets, too in order to get a better understanding of where the
   outliers are?
 
  Given that abandoning a review isn't binding (a proposer can easily
  unabandon), I do think we should abandon more than we do now. The
  problem at the moment being that it's a manual process which isn't much
  fun for the person doing the work.
 
  Another factor to consider here is that abandoned patches against bugs
  make the bug look like someone is working on a fix, which probably
  isn't the case.
 
  Nova has been trying some very specific things to try and address
  these issues, and I think we're improving. Those things are:
 
  * specs
  * priority features

 This increased level of process in Nova has actually made the negative
 effects of the 6 month cycle noticeably worse on balance. If you aren't
 able to propose your feature in the right window of the dev cycle your
 chances of getting stuff merged has gone down significantly and the time
 before users are likely to see your feature has correspondingly gone up.
 Previously people could come along with simple features at the end of
 the cycle and we had the flexibility to be pragmatic and review and
 approve them. Now we lack that ability even if we have the
 spare review cycles to consider it. The processes adopted have merely
 made us more efficient at disappointing contributors earlier in the
 cycle. There's been no changes made that would  solve the bigger problem
 of the fact that Nova is far too large vs the size of the core review
 team, so we have an ongoing major bottleneck in our development. That
 bottleneck, combined with the length of the 6 month cycle, is an ongoing
 disaster for our contributors.

 This is part of the reason we have moved to split Neutron into smaller,
bite-sized chunk repositories with sometimes overlapping core reviewer
teams. It's also why we're spinning out the backend logic from in-tree
drivers and plugins to allow faster iteration for the maintainers. Early
evidence indicates this has been successful; we'll see how it looks once we
get into the Liberty development cycle.

For a bit more context, you can see the blog I wrote on this [1].

Thanks,
Kyle

[1]
http://www.siliconloons.com/posts/2015-02-26-scaling-openstack-neutron-development/

Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
 |: http://libvirt.org  -o- http://virt-manager.org
 :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
 :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
 :|

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ideas about Openstack Cinder for GSOC 2015

2015-02-27 Thread Davanum Srinivas
Harry,

hop on to #openstack-gsoc and  #openstack-cinder as well.

-- dims

On Fri, Feb 27, 2015 at 9:26 AM, Anne Gentle
annegen...@justwriteclick.com wrote:
 Hi Harry,
 I don't see a cinder mentor listed but you could certainly contact the
 Project Technical Lead for Cinder, Mike Perez, and see if he knows of a
 mentor and possible project.
 Anne

 On Fri, Feb 27, 2015 at 3:50 AM, harryxiyou harryxi...@gmail.com wrote:

 Hi all,

  I cannot find a suitable project idea for OpenStack Cinder for GSoC 2015
  here [1]. Could anyone please give me some clues?

 [1] https://wiki.openstack.org/wiki/GSoC2015


 Thanks, Harry

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Anne Gentle
 annegen...@justwriteclick.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help in configuring keystone

2015-02-27 Thread Akshik DBK
Hi,
I did upload the Metadata generated by keystone by accessing 
https://115.112.68.53:5000/Shibboleth.sso/Metadata
have attached the copy of it, and did uploaded it to the 
http://testshib.org/register.html
Regards,Akshik

 Date: Fri, 27 Feb 2015 14:31:36 +0100
 From: marek.de...@cern.ch
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Need help in configuring keystone
 
 Hi again,
 
 Did you upload Metadata generated by your Service Provider (Keystone) to 
 testshib Identity Providers?
 How did you generate /etc/shibboleth2/shibboleth2.xml file?
 
 Did you read http://testshib.org/register.html ?
 
 cheers,
 
 Marek
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

[Attachment: 115.112.68.53 -- binary data]
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] How to check admin authentication?

2015-02-27 Thread Dmitry Tantsur

Hi all!

This (presumably) pretty basic question has been torturing me for several
months now, so I kindly seek help here.


I'm working on a Flask-based service [1] and I'd like to use Keystone 
tokens for authentication. This is an admin-only API, so we need to 
check for an admin role. We ended up with code [2] first accessing 
Keystone with a given token and (configurable) admin tenant name, then 
checking 'admin' role. Things went well for a while.


Now I'm writing an Ironic driver accessing API of [1]. Pretty naively I 
was trying to use an Ironic service user credentials, that we use for 
accessing all other services. For TripleO-based installations it's a 
user with name 'ironic' and a special tenant 'service'. Here is where 
problems are. Our code perfectly authenticates a mere user (that has 
tenant 'admin'), but asks Ironic to go away.


We've spent some time researching documentation and keystone middleware 
source code, but didn't find any more clues. Neither did we find a way 
to use keystone middleware without rewriting half of project. What we 
need is 2 simple things in a simple Flask application:

1. validate a token
2. make sure it belongs to admin

I'll thankfully appreciate any ideas how to fix our situation.
Thanks in advance!

Dmitry.

[1] https://github.com/stackforge/ironic-discoverd
[2] 
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/utils.py#L50-L65
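A minimal sketch of the two steps (validate the token, then check for the admin role) against the Keystone v3 API. The helper names `validate_token` and `check_is_admin` are hypothetical, the `admin` role name is an assumption that should come from configuration, and this is not a substitute for keystonemiddleware:

```python
import json
import urllib.request

ADMIN_ROLE = "admin"  # assumption: the role name to require; make configurable


def check_is_admin(token_info):
    """Return True if a validated Keystone v3 token document carries the
    admin role (token_info is the {"token": {...}} response body)."""
    roles = token_info.get("token", {}).get("roles", [])
    return any(r.get("name") == ADMIN_ROLE for r in roles)


def validate_token(keystone_url, service_token, user_token):
    """Ask Keystone v3 to validate user_token (network call -- sketch only).

    GET /v3/auth/tokens with X-Subject-Token returns the token document,
    including its roles, when the subject token is valid."""
    req = urllib.request.Request(
        keystone_url.rstrip("/") + "/v3/auth/tokens",
        headers={"X-Auth-Token": service_token,
                 "X-Subject-Token": user_token})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# The role check can be exercised on a canned v3 response, no network needed:
sample = {"token": {"roles": [{"name": "admin"}, {"name": "_member_"}]}}
print(check_is_admin(sample))  # True
```

In a Flask app the two calls would run in a before_request hook, returning 401 on an invalid token and 403 when the role check fails.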


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-02-27 Thread ruby.krishnaswamy
My first suggestion: why don’t we set up a call together with Ramki, Yathi,
Debo, as soon as possible?

-  How to go forward concretely with the 8 steps for the PoC (details 
within each step),

o Including nova integration points

-  Identify “friction points” in the details above to resolve for  
beyond PoC


Tim: Where does the rest of the program originate?  I’m not saying the entire 
LP program is generated from the Datalog constraints; some of it is generated 
by the solver independent of the Datalog.  In the text, I gave the example of 
defining hMemUse[j].
Tim: The VM-placement engine does 2 things: (I) translates Datalog to LP and 
(ii) generates additional LP constraints.  (Both work items could leverage any 
constraints that are builtin to a specific solver, e.g. the solver-scheduler.  
The point is that there are 2 distinct, conceptual origins of the LP 
constraints: those that represent the Datalog and those that codify the domain.
Tim: Each domain-specific solver can do whatever it wants, so it’s not clear to 
me what the value of choosing a modeling language actually is—unless we want to 
build a library of common functionality that makes the construction of 
domain-specific engine (wrappers) easier.  I’d prefer to spend our energy 
understanding whether the proposed workflow/interface works for a couple of 
different domain-specific policy engines OR to flesh this one out and build it.




ð  The value of choosing a modeling language is related to how “best to 
automate translations” from Datalog constraints (to LP)?

o We can look for one unique way of generation, and not “some of it is 
generated by the VM-placement engine solver independent of the Datalog”.

o Datalog imposes most constraints (== policies)

o Two constraints are not “policies”

§  A VM is allocated to only one host.

§  Host capacity is not exceeded.

· Over subscription

ð  Otherwise what was your suggestion?  As follows?

o Use (extend) the framework the nova-solver-scheduler currently implements 
(itself using PuLP). This framework specifies an API to write constraints and 
cost functions (in a domain-specific way). Modifying this framework:

§  To read data in from DSE

§  To obtain the cost function from Datalog (e.g. minimize Y[host1]…)

§  To obtain Datalog constraints (e.g. 75% memory allocation constraint for 
hosts of special zone)

o We need to specify the “format” for this? It will likely be a string of 
the form (?)

§  “hMemUse[0] – 0.75*hMemCap[0] <= 100*y[0]”, “Memory allocation constraint on 
Host 0”,









ð  From your doc (page 5, section 4)


warning(id) :-
nova:host(id, name, service, zone, memory_capacity),
legacy:special_zone(zone),
ceilometer:statistics(id, memory, avg, count, duration,
durstart, durend, max, min, period, perstart, perend,
sum, unit),
avg > 0.75 * memory_capacity


Notice that this is a soft constraint, identified by the use of warning instead 
of error.  When compiling to LP, the VM-placement engine will attempt to 
minimize the number of rows in the warning table.  That is, for each possible 
row r it will create a variable Y[r] and assign it True if the row is a warning 
and False otherwise


The policy (warning): when will it be evaluated? Should this be done 
periodically? Then, if the table has even one True entry, the action should 
be to generate the LP, solve it, activate the migrations, etc.

ð  The “LP” cannot be generated when the VM-placement engine receives the 
policy snippet.
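The string format proposed above can be sketched as a small translation step. This is only an illustration under stated assumptions: `warning_lp` is a hypothetical helper name, the hosts list and the big-M value of 100 are made up, and it emits constraint/objective strings rather than driving an actual solver such as PuLP:

```python
def warning_lp(hosts, big_m=100, threshold=0.75):
    """Translate the Datalog 'warning' rule into big-M LP constraint
    strings: y[i] is forced to 1 whenever hMemUse[i] exceeds
    threshold * hMemCap[i], and the objective minimizes sum(y)."""
    constraints = []
    for i, _ in enumerate(hosts):
        constraints.append((
            "hMemUse[%d] - %.2f*hMemCap[%d] <= %d*y[%d]"
            % (i, threshold, i, big_m, i),
            "Memory allocation constraint on Host %d" % i))
    # Soft constraints: minimize the number of warning rows.
    objective = "minimize " + " + ".join(
        "y[%d]" % i for i in range(len(hosts)))
    return objective, constraints


obj, cons = warning_lp(["host0", "host1"])
print(obj)         # minimize y[0] + y[1]
print(cons[0][0])  # hMemUse[0] - 0.75*hMemCap[0] <= 100*y[0]
```

A real VM-placement engine would feed these rows into its solver API instead of strings, but the shape of the translation is the same.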


Ruby


De : Tim Hinrichs [mailto:thinri...@vmware.com]
Envoyé : jeudi 26 février 2015 19:17
À : OpenStack Development Mailing List (not for usage questions)
Objet : Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Inline.

From: ruby.krishnasw...@orange.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, February 25, 2015 at 8:53 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Hi Tim, All,


1)  Step 3: The VM-placement engine is also a “datalog engine”. Right?

When are policies delegated:

when policies are inserted? When the VM-placement engine has already 
registered itself, are all policies given to it?




“In our example, this would mean the domain-specific policy engine executes the 
following API call over the DSE”

ð  “domain-agnostic” ….



Done.


2)  Step 4:



Ok

But finally: if Congress will likely “delegate”



Not sure what you’re suggesting here.


3)  Step 5:  Compilation of subpolicy to LP in VM-placement engine



For the PoC, it is likely that the LP program (in PuLP or some other ML) is 
*not* completely generated by the compiler/translator.

[openstack-dev] [sahara] Feedback on default-templates implementation

2015-02-27 Thread Trevor McKay
Hi Sahara folks,

  please checkout

https://review.openstack.org/#/c/159872/

and respond there in comments, or here on the email thread. We have some
things to figure out for this new CLI, and I want to make sure that we
make
sane choices and set a good precedent.

Once we have a consensus we can go back and extend the spec with more
detail and merge approved changes.  The original spec did not get into
this
kind of detail (because nobody had sat down and tried to do it :) )

The only other CLI we have at this point that touches Sahara components
(other than the python-saharaclient) is sahara-db-manage, but that uses
alembic commands to drive the database directly.  It doesn't really
touch
Sahara in any semantic way, it just drives migration.  It is blissfully
ignorant
of any Sahara object relationships and semantics outside of the table
definitions.

The default-templates CLI will be a new kind of tool, I believe, that we
don't
have yet. It will be tightly integrated with sahara-all and horizon. So
how it
is designed matters a lot imho.

Best,

Trevor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo] plan till end-of-Kilo

2015-02-27 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Some update on oslo/neutron patches in Kilo.

On 01/20/2015 03:17 PM, Kyle Mestery wrote:
 
 
 On Mon, Jan 19, 2015 at 10:54 AM, Ihar Hrachyshka
 ihrac...@redhat.com mailto:ihrac...@redhat.com wrote:
 
 Hi Kyle/all,
 
 (we were going to walk thru that on Mon, but since US is on
 vacation today, sending it via email to openstack-dev@.)
 
 Great, thanks Ihar!
 
 
 So I've talked to Doug Hellmann from oslo, and here is what we
 have in our oslo queue to consider:
 
 1. minor oslo.concurrency cleanup for *aas repos (we need to drop 
 lockutils-wrapper usage now that base test class sets lockutils 
 fixture);

In review:
https://review.openstack.org/#/q/I2a82504ad19f6daddd24a3c2254e8feefc9399ca,n,z

(vpnaas is already fixed)

 2. migration to namespace-less oslo libraries (this is blocked by 
 pending oslo.messaging release scheduled this week, will revive 
 patches for all four branches the end of the week) [1];

Done.

 3. oslo/kilo-2: graduation of oslo.policy;

For this one, some obstacles were found. Specifically, neutron uses
symbols from the module that are now considered private by oslo team.
Neutron implements sub-attribute checks by using private API, so to
migrate to the library, we'll need to move the sub-attr feature to the
library first, and this will require going thru proper oslo-specs
process. So not in this cycle.

Also of potential interest in Kilo scope is oslo.log that was released
recently. I have patches for all four repos. If we think it's still a
good time to do the switch to it, then I'm happy to see reviews at:

https://review.openstack.org/#/q/I310e059a815377579de6bb2aa204de168e72571e,n,z

Cheers,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU8JMCAAoJEC5aWaUY1u57iKQH/jz1cUhJJT2hhie5E1oQcInV
9I14wL630HE0jiC7h5Y5P2Zt4psB0+9JLyEBp4dt/J+YySpU0ztwlCp8q8lTJcjF
KRgM0nlXmN5UOxu3xhEyP1xgGwTGkjzDxLc28vj70W/sFppj/D5SDfIjgm7L7c55
aVThT3aiI4f7sY4dyzoPQr9oQXWZVuAWvIUuyI1gtvd9J11gJF259wV/GKPQeczj
cR31YThP/wf3a38uHDo9wJiXCRUIJLWFDZoLvw63torY+kb5y/T3YsQ+yBR3+tT+
QRtu6v/BguYXqCiRY1UoKLx29+dsxykP2/nchtzE9DsXwZdCWTIVpxvZi/Bh9d8=
=tze1
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-27 Thread Felipe Reyes
On Wed, 25 Feb 2015 15:47:46 -0500
Doug Hellmann d...@doughellmann.com wrote:
 I think the rule originally came from the way mock works. If you
 import a thing in your module and then a test tries to mock where it
 came from, your module still uses the version it imported because the
 name lookup isn't done again at the point when the test runs. If all
 external symbols are accessed through the module that contains them,
 then the lookup is done at runtime instead of import time and mocks
 can replace the symbols. The same benefit would apply to monkey
 patching like what eventlet does, though that's less likely to come
 up in our own code than it is for third-party and stdlib modules.

I strongly agree: when source code does "from module import SomeClass",
it is a nightmare to patch it at runtime.

Those were my 2 cents to the discussion :)
-- 
Felipe Reyes (GPG:0x9B1FFF39)
http://tty.cl
lp:~freyes | freyes@freenode | freyes@github
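The binding behavior Doug describes can be demonstrated with a throwaway module (built here via `types.ModuleType` purely so the example is self-contained; the module and helper names are made up):

```python
import sys
import types
from unittest import mock

# Fake "mylib" module with a helper() that a test will want to replace.
mylib = types.ModuleType("mylib")
mylib.helper = lambda: "real"
sys.modules["mylib"] = mylib

# Style H302 discourages: the name is bound once, at import time.
from mylib import helper as bound_helper

# Style H302 enforces: the name is looked up through the module at call time.
import mylib as m


def call_via_module():
    return m.helper()


with mock.patch("mylib.helper", return_value="mocked"):
    print(bound_helper())     # real   -- early binding dodges the patch
    print(call_via_module())  # mocked -- module lookup sees the patch
```

mock.patch replaces the attribute on the module object, so only code that re-reads the attribute at call time observes the replacement.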

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-02-27 Thread Yathiraj Udupi (yudupi)
Hi,

It will be good to simplify the PoC scenario.  In terms of policies and other 
constraints related to VM Placement,  I guess we agree that not all 
policies/constraints have to originate from the policy framework such as 
Congress.  The existing placement engine logic that is present in the default 
Nova scheduler or say the Nova solver scheduler will be adding its own set of 
constraints to the placement calculations.

My idea would be make the Nova placement engine,  (for e.g., solver scheduler) 
talk to Congress to get the Datalog rules / translated LP constraints, based on 
the defined policies pertaining to a particular tenant/user.  This of course 
needs to be worked out in terms of translation logic, constraint 
specifications, etc.  Also, this workflow will be part of the 
scheduling/Placement workflow as part of the Nova boot instance API call (for 
the initial placement).

As the next phase, for migrations scenario, Congress can periodically trigger a 
check, if any of the violations/warnings are triggered, (corresponding tables 
getting populated, as you show in your example),  if so, then trigger 
migrations, which will have to go through another round of placement decisions  
for figuring out the best destinations, without violating the policies and 
other existing constraints.

Happy to discuss more and simplify a PoC scenario.

Thanks,
Yathi.


On 2/27/15, 6:40 AM, ruby.krishnasw...@orange.com wrote:

My first suggestion: why don’t we set up a call together with Ramki, Yathi, 
Debo, as soon as possible?

-  How to go forward concretely with the 8 steps for the PoC (details 
within each step),

o Including nova integration points

-  Identify “friction points” in the details above to resolve for  
beyond PoC


Tim: Where does the rest of the program originate?  I’m not saying the entire 
LP program is generated from the Datalog constraints; some of it is generated 
by the solver independent of the Datalog.  In the text, I gave the example of 
defining hMemUse[j].
Tim: The VM-placement engine does 2 things: (I) translates Datalog to LP and 
(ii) generates additional LP constraints.  (Both work items could leverage any 
constraints that are builtin to a specific solver, e.g. the solver-scheduler.  
The point is that there are 2 distinct, conceptual origins of the LP 
constraints: those that represent the Datalog and those that codify the domain.
Tim: Each domain-specific solver can do whatever it wants, so it’s not clear to 
me what the value of choosing a modeling language actually is—unless we want to 
build a library of common functionality that makes the construction of 
domain-specific engine (wrappers) easier.  I’d prefer to spend our energy 
understanding whether the proposed workflow/interface works for a couple of 
different domain-specific policy engines OR to flesh this one out and build it.




ð  The value of choosing a modeling language is related to how “best to 
automate translations” from Datalog constraints (to LP)?

o We can look for one unique way of generation, and not “some of it is 
generated by the VM-placement engine solver independent of the Datalog”.

o Datalog imposes most constraints (== policies)

o Two constraints are not “policies”

§  A VM is allocated to only one host.

§  Host capacity is not exceeded.

· Over subscription

ð  Otherwise what was your suggestion?  As follows?

o Use (extend) the framework the nova-solver-scheduler currently implements 
(itself using PuLP). This framework specifies an API to write constraints and 
cost functions (in a domain-specific way). Modifying this framework:

§  To read data in from DSE

§  To obtain the cost function from Datalog (e.g. minimize Y[host1]…)

§  To obtain Datalog constraints (e.g. 75% memory allocation constraint for 
hosts of special zone)

o We need to specify the “format” for this? It will likely be a string of 
the form (?)

§  “hMemUse[0] – 0.75*hMemCap[0] <= 100*y[0]”, “Memory allocation constraint on 
Host 0”,









ð  From your doc (page 5, section 4)


warning(id) :-
nova:host(id, name, service, zone, memory_capacity),
legacy:special_zone(zone),
ceilometer:statistics(id, memory, avg, count, duration,
durstart, durend, max, min, period, perstart, perend,
sum, unit),
avg > 0.75 * memory_capacity


Notice that this is a soft constraint, identified by the use of warning instead 
of error.  When compiling to LP, the VM-placement engine will attempt to 
minimize the number of rows in the warning table.  That is, for each possible 
row r it will create a variable Y[r] and assign it True if the row is a warning 
and False otherwise


The policy (warning): when will it be evaluated? Should this be done 
periodically? Then, if the table has even one True entry, the action should 
be to generate the LP, solve it, activate the migrations, etc.

Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-02-27 Thread Ramki Krishnan
@Ramki: regarding VM size, could we say that only the in-memory state is 
migrated? The VM’s disk resides on network-attached storage. The disk will not 
be migrated.

That is exactly right Ruby.

Thanks,
Ramki

From: ruby.krishnasw...@orange.com [mailto:ruby.krishnasw...@orange.com]
Sent: Thursday, February 26, 2015 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Tim:“So you’re saying we won’t have fresh enough data to make policy decisions? 
 If the data changes so frequently that we can’t get an accurate view, then I’m 
guessing we shouldn’t be migrating based on that data anyway.”

Ramki: We have to keep in mind that VM migration could be an expensive 
operation depending on the size of the VM and various other factors; such an 
operation cannot be performed frequently.

I do not know at what frequency data may change.
@Ramki: regarding VM size, could we say that only the in-memory state is 
migrated? The VM’s disk resides on network-attached storage. The disk will not 
be migrated.
In any case I agree that migration (probably) cannot be performed frequently.



De : Tim Hinrichs [mailto:thinri...@vmware.com]
Envoyé : jeudi 26 février 2015 19:17
À : OpenStack Development Mailing List (not for usage questions)
Objet : Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Inline.

From: ruby.krishnasw...@orange.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, February 25, 2015 at 8:53 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Hi Tim, All,


1) Step 3: The VM-placement engine is also a “datalog engine”. Right?

When policies are delegated:

when policies are inserted? When the VM-placement engine has already registered 
itself all policies are given to it?




“In our example, this would mean the domain-specific policy engine executes the 
following API call over the DSE”

ð “domain-agnostic” ….



Done.


2) Step 4:



Ok

But finally: if Congress will likely “delegate”



Not sure what you’re suggesting here.


3) Step 5:  Compilation of subpolicy to LP in VM-placement engine



For the PoC, it is likely that the LP program (in PuLP or some other ML) is 
*not* completely generated by the compiler/translator.

ð Right?

Where does the rest of the program originate?  I’m not saying the entire LP 
program is generated from the Datalog constraints; some of it is generated by 
the solver independent of the Datalog.  In the text, I gave the example of 
defining hMemUse[j].


 You also indicate that some category of constraints (“the LP solver 
doesn’t know what the relationship between assign[i][j], hMemUse[j], and 
vMemUse[i] actually is, so the VM-placement engine must also include 
constraints”) .
 These constraints must be “explicitly” written?  (e.g. max_ram_allocation 
etc that are constraints used in the solver-scheduler’s package).

The VM-placement engine does 2 things: (I) translates Datalog to LP and (ii) 
generates additional LP constraints.  (Both work items could leverage any 
constraints that are builtin to a specific solver, e.g. the solver-scheduler.  
The point is that there are 2 distinct, conceptual origins of the LP 
constraints: those that represent the Datalog and those that codify the domain.



 So what “parts” will be generated:
Cost function :
Constraint from Policy : memory usage  75%

 Then the rest should be “filled” up?

 Could we convene on an intermediary “modeling language”?
@Yathi: do you think we could use some thing like AMPL ? Is this 
proprietary?


A detail: the example “Y[host1] = hMemUse[host1] > 0.75 * hMemCap[host1]”


ð To be changed to a linear form (mi – Mi > 0 then Yi = 1 else Yi = 0), so 
something like (mi – Mi) <= 100 yi

Each domain-specific solver can do whatever it wants, so it’s not clear to me 
what the value of choosing a modeling language actually is—unless we want to 
build a library of common functionality that makes the construction of 
domain-specific engine (wrappers) easier.  I’d prefer to spend our energy 
understanding whether the proposed workflow/interface works for a couple of 
different domain-specific policy engines OR to flesh this one out and build it.



4) Step 6: This is completely internal to the VM-placement engine (and we 
could say this is “transparent” to Congress)



We should allow configuration of a solver (this itself could be a policy ☺ )

How to invoke the solver API ?



The domain-specific placement engine could send out to DSE (action_handler: 
data)?



I had 

Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle [metrics]

2015-02-27 Thread Stefano Maffulli
On Thu, 2015-02-26 at 16:44 -0800, James E. Blair wrote:
 It is good to recognize the impact of this, however, I would suggest
 that if having open changes that are not actively being worked is a
 problem for statistics,

I don't think it's a problem for the statistics per se. The reports are
only a tool to analyze complex phenomenons and translate them into
manageable items. In fact, we keep adding more stats as we go because
every chart and table leaves us with more questions.

  let's change the statistics calculation.  Please do not abandon the
 work of contributors to improve the appearance of
 these metrics.  Instead, simply decide what criteria you think should
 apply and exclude those changes from your calculations.

I'm currently thinking that it would be informative to plot the
distribution of the efficiency metrics, instead of simply coming up with a
filter to ignore long-standing changes with slow/null activity over some
arbitrary amount of time. I think it would be more interesting to see
how many 'inactive' vs 'active' there are at a given time.

In any case, since Sean said that nova (and other projects) already
remove unmergeable changesets regularly, I think the data are already
clean enough to give us food for thoughts.

Why do owners seem to be getting slower and slower at providing new patches,
despite the fact that the number of patches per changeset is fairly
stable? I'll look into the data more carefully with Daniel Izquierdo, as
I think there are huge outliers skewing the data (the diff between
median and average is huge).

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Stepping down from core

2015-02-27 Thread Jeff Peeler
As discussed during the previous Heat meeting, I'm going to be stepping 
down from core on the Heat project. My day-to-day work is going to be 
focused more on TripleO for the foreseeable future, and I hope to be 
able to focus on reviews there soon.


Being part of Heat core since day 0 has been a good experience, but 
keeping up with multiple projects is a lot to manage. I don't know how 
some of you do it!


Jeff

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.db 1.5.0 release

2015-02-27 Thread Matt Riedemann



On 2/26/2015 3:22 AM, Victor Sergeyev wrote:

Hi folks!

The Oslo team is pleased to announce the release of: oslo.db - OpenStack
common DB library

Changes from the previous release:

$ git log --oneline --no-merges 1.4.1..1.5.0
7bfdb6a Make DBAPI class work with mocks correctly
96cabf4 Updated from global requirements
a3a1bdd Imported Translations from Transifex
ab20754 Fix PyMySQL reference error detection
99e2ab6 Updated from global requirements
6ccea34 Organize provisioning to use testresources
eeb7ea2 Add retry decorator allowing to retry DB operations on request
d78e3aa Imported Translations from Transifex
dcd137a Implement backend-specific drop_all_objects for provisioning.
3fb5098 Refactor database migration manager to use given engine
afcc3df Fix 0 version handling in migration_cli manager
f81653b Updated from global requirements
efdefa9 Fix PatchStacktraceTest for pypy
c0a4373 Update Oslo imports to remove namespace package
1b7c295 Retry query if db deadlock error is received
046e576 Ensure DBConnectionError is raised on failed revalidate


Please report issues through launchpad:
http://bugs.launchpad.net/oslo.db
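The "retry decorator allowing to retry DB operations" in that changelog can be approximated as follows. This is a sketch only: `DBDeadlock` here is a stand-in for `oslo_db.exception.DBDeadlock`, and the real `oslo.db` decorator (`wrap_db_retry`) has more options than this toy version:

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock (assumption)."""


def wrap_db_retry(max_retries=3, delay=0.01):
    """Retry the wrapped DB operation on deadlock, with exponential backoff."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            attempt = 0
            while True:
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    attempt += 1
                    if attempt > max_retries:
                        raise  # give up after max_retries attempts
                    time.sleep(delay * 2 ** (attempt - 1))
        return wrapper
    return deco


calls = {"n": 0}


@wrap_db_retry(max_retries=3)
def flaky_update():
    # Simulate a transaction that deadlocks twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise DBDeadlock()
    return "committed"


print(flaky_update())  # committed
print(calls["n"])      # 3
```

Retrying the whole transaction from the top (rather than a single statement) is the important design point: after a deadlock rollback, earlier statements in the transaction are gone too.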


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Looks like something here might be related to a spike in DBDeadlock 
errors in neutron in the last 24 hours:


https://bugs.launchpad.net/neutron/+bug/1426543

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-27 Thread Clay Gerrard
So, Swift doesn't enforce H302 - and our imports are sorta messy frankly -
but it doesn't really bother me, and I do rather enjoy the terseness of not
having to spell out the module name.  It's not really a chore to maintain:
if you don't know where a name came from, split the window (or drop a
marker) and pop up to the top of the file and there it is - mystery solved.
But I've been living in the code base too long to say whether it hurts
newcomers trying to grok where things are coming from.  I'd be willing to
entertain feedback on this.

But one thing that I'd really love to hear feedback on is whether people
using H302 ever find it inconvenient to enforce the rule *all the time*.
Particularly with the stdlib, where it'd be *such* bad form to collide with
a common name like `defaultdict` or `datetime` anyway: if you see one of
those names without the module, you *know* where it came from (hopefully?):

 * `collections.defaultdict(collections.defaultdict(list))` - no thank you
 * `datetime.datetime` - meh

Anyway, every time I start some project greenfield I try to hold myself to
H302 (I *do* get so sick of wondering whether it's time.time() or time() in
this file) - but I normally break down as soon as I get to a name I'd
rather just have right there in my globals... @contextlib.contextmanager,
functools.partial, itertools.ifilter - maybe it's just stdlib names?

Not sure if there's any compromise; probably better to either *just import
modules*, or live with the inconsistency (you eventually get nose-blind to
it ;P)
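
For readers new to the thread, here is a tiny self-contained contrast of the
two styles being debated; both behave identically, H302 just keeps the
defining module visible at each call site:

```python
# H302 style: import modules only, and qualify every use.
import collections
import datetime

nested = collections.defaultdict(list)
nested["a"].append(1)
stamp = datetime.datetime(2015, 2, 27)

# Non-H302 style: import the names directly. Call sites are terser,
# but the defining module is only visible at the top of the file.
from collections import defaultdict
from datetime import datetime as dt

flat = defaultdict(list)
flat["a"].append(1)

print(nested["a"] == flat["a"], dt(2015, 2, 27) == stamp)  # True True
```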

-Clay

On Wed, Feb 25, 2015 at 10:51 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 Hi

 So a review [1] was recently submitted to cinder to fix up all of the H302
 violations and turn on the automated check for them. This is certainly a
 reasonable suggestion given the number of manual reviews that -1 for this
 issue; however, I'm far from convinced it actually makes the code more
 readable.

 Is there anybody who'd like to step forward in defence of this rule and
 explain why it is an improvement? I don't discount for a moment the
 possibility I'm missing something, and welcome the education in that case.

 Thanks


 [1] https://review.openstack.org/#/c/145780/
 --
 Duncan Thomas

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ideas about Openstack Cinder for GSOC 2015

2015-02-27 Thread Ivan Kolodyazhny
Hi Harry,

Please, ping me in IRC (e0ne) if you are still in Cinder as a part of GSoC.

Regards,
Ivan Kolodyazhny.

On Fri, Feb 27, 2015 at 4:34 PM, Davanum Srinivas dava...@gmail.com wrote:

 Harry,

 hop on to #openstack-gsoc and  #openstack-cinder as well.

 -- dims

 On Fri, Feb 27, 2015 at 9:26 AM, Anne Gentle
 annegen...@justwriteclick.com wrote:
  Hi Harry,
  I don't see a cinder mentor listed but you could certainly contact the
  Project Technical Lead for Cinder, Mike Perez, and see if he knows of a
  mentor and possible project.
  Anne
 
  On Fri, Feb 27, 2015 at 3:50 AM, harryxiyou harryxi...@gmail.com
 wrote:
 
  Hi all,
 
  I cannot find the proper idea about Openstack Cinder for GSOC 2015
  here[1]. Could anyone please give me some clues?
 
  [1] https://wiki.openstack.org/wiki/GSoC2015
 
 
  Thanks, Harry
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Anne Gentle
  annegen...@justwriteclick.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: https://twitter.com/dims

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Flavor] Constraint in creating flavor with the same name and different ID

2015-02-27 Thread Abhishek Talwar/HYD/TCS
Hi All,

I am writing in reference to a bug (Bug #1423885) I have been working on, 
regarding the inconsistency shown by the nova flavor show command. I have a 
doubt about why there is a constraint preventing the creation of two flavors 
with the same name, whereas we can do the same for instances, images, volumes 
etc. 
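
One plausible rationale for the constraint (my illustration, not taken from
the bug discussion): flavors are commonly selected by name, and a duplicated
name would make that lookup ambiguous, while IDs stay unique:

```python
# Hypothetical flavor records with a duplicated name but unique IDs.
flavors = [{"id": "1", "name": "small"}, {"id": "2", "name": "small"}]

by_name = [f for f in flavors if f["name"] == "small"]
by_id = [f for f in flavors if f["id"] == "2"]

# Name lookup matches two records and cannot pick one; id lookup can.
print(len(by_name), len(by_id))  # 2 1
```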



-- 
Thanks and Regards
Abhishek Talwar
Employee ID : 770072
Assistant System Engineer
Tata Consultancy Services,Gurgaon
India
Contact Details : +918377882003


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][horizon]Proper error handling/propagation to UI

2015-02-27 Thread Eduard Matei
Hi,

We've been testing our cinder driver extensively and found some strange
behavior in the UI:
- when trying to delete a snapshot that has clones (a volume created from
the snapshot), an error is raised in our driver which turns into
error_deleting in cinder and the UI; further actions on that snapshot are
impossible from the UI, and the user has to go to the CLI and run cinder
snapshot-reset-state to be able to delete it (after having deleted the
clones)
- to help with that we implemented a check in the driver and now we raise
exception.SnapshotIsBusy; now the snapshot remains available (as it should
be) but no error bubble is shown in the UI (only the green one: Success.
Scheduled deleting of...). So the user has to go to the c-vol screen and
check the cause of the error

So question: how should we handle this so that
a. The snapshot remains in state available
b. An error bubble is shown in the UI stating the cause.
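
A self-contained sketch of the driver-side check described above; SnapshotIsBusy
here is a local stand-in for cinder's exception.SnapshotIsBusy, and the clone
bookkeeping is hypothetical:

```python
class SnapshotIsBusy(Exception):
    """Stand-in for cinder.exception.SnapshotIsBusy."""

def delete_snapshot(snapshot, clones):
    # Refuse the delete while clones exist so the snapshot stays
    # 'available' instead of ending up in 'error_deleting'.
    if clones:
        raise SnapshotIsBusy("%s has dependent volumes" % snapshot)
    return "deleted"

try:
    delete_snapshot("snap-1", clones=["vol-1"])
except SnapshotIsBusy as exc:
    print("busy:", exc)  # this is what the UI should surface as an error
print(delete_snapshot("snap-1", clones=[]))  # deleted
```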

Thanks,
Eduard

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [devstack] About _member_ role

2015-02-27 Thread Pasquale Porreca
As I said in your code review, I don't really like an approach that
requires saving a randomly generated id in the config file and a keystone
restart.
I would like you to review my proposal if you don't mind:
https://review.openstack.org/156527

I think this is quite an important bug in devstack since it prevents
users from creating or managing projects, so I would ask anyone interested -
especially devstack core reviewers - to take a look at the bug report
https://bugs.launchpad.net/devstack/+bug/1421616 and the proposed fixes.

On 02/27/15 06:38, Jamie Lennox wrote:

 - Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, February 19, 2015 3:24:03 AM
 Subject: Re: [openstack-dev] [Keystone] [devstack] About _member_ role

 Analyzing Horizon code I can confirm that the existence of _member_ role
 is required, so the commit https://review.openstack.org/#/c/150667/
 introduced the bug in devstack. More details and a fix proposal in my
 change submission: https://review.openstack.org/#/c/156527/

 On 02/18/15 10:04, Pasquale Porreca wrote:
 I saw 2 different bug report that Devstack dashboard gives an error when
 trying to manage projects
 https://bugs.launchpad.net/devstack/+bug/1421616 and
 https://bugs.launchpad.net/horizon/+bug/1421999
 In my devstack environment projects were working just fine, so I tried a
 fresh installation to see if I could reproduce the bug and I could
 confirm that actually the bug is present in current devstack deployment.
 Both reports attribute this error to the lack of a _member_ role, so I just
 tried to manually (i.e. via CLI) add a _member_ role and I verified that
 just having it - even if not assigned to any user - fixes the project
 management in Horizon.

 I haven't deeply analyzed the root cause yet, but this behaviour
 seemed quite weird, which is the reason I sent this mail to the dev list.
 Your explanation somewhat confirmed my doubts: I presume that adding a
 _member_ role is merely a workaround and the real bug is somewhere else
 - in Horizon code most likely.
 Ok, so I dug into this today. The problem is that the 'member_role_name' that 
 is set in keystone CONF is only read the first time, when keystone creates 
 the default member role if it is not already present. At all other times 
 keystone works with the role id set by 'member_role_id', which has a default 
 value. So even though horizon looks up and finds a member_role_name, it 
 doesn't match up with what keystone is doing when it uses member_role_id. 
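
The name/id mismatch can be modeled in a few lines (all values below are
hypothetical, chosen only to illustrate the disagreement):

```python
# Keystone created the default role under one name at bootstrap, but
# later matches only on the configured id; horizon searches by name.
conf = {"member_role_name": "_member_",
        "member_role_id": "9fe2ff9ee4384b1894a90878d3e92bab"}  # hypothetical
roles = [{"id": conf["member_role_id"], "name": "Member"}]  # name differs

horizon_hits = [r for r in roles if r["name"] == conf["member_role_name"]]
keystone_hits = [r for r in roles if r["id"] == conf["member_role_id"]]

print(len(horizon_hits), len(keystone_hits))  # 0 1
```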

 IMO this is wrong and i filed a bug against keystone: 
 https://bugs.launchpad.net/keystone/+bug/1426184

 In the meantime it works if you add both the member_role_name and 
 member_role_id to the config file. Unfortunately adding an ID means you need 
 to get the value from keystone and then set it into keystone's own config 
 file, then restart keystone. This is similar to a review I had for policy, so 
 I modified that and put up my own review: 
 https://review.openstack.org/#/c/159690

 Given the keystone restart I don't know if it's cleaner; however, that's the 
 way I know to solve this 'properly'. 


 Jamie

 On 02/17/15 21:01, Jamie Lennox wrote:
 - Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 17 February, 2015 9:07:14 PM
 Subject: [openstack-dev]  [Keystone] [devstack] About _member_ role

 I proposed a fix for a bug in devstack
 https://review.openstack.org/#/c/156527/ caused by the fact the role
 _member_ was not anymore created due to a recent change.

 But why is the existence of _member_ role necessary, even if it is not
 necessary to be used? Is this a know/wanted feature or a bug by itself?
 So the way to be a 'member' of a project so that you can get a token
 scoped to that project is to have a role defined on that project.
 The way we would handle that from keystone for default_projects is to
 create a default role _member_ which had no permissions attached to it,
 but by assigning it to the user on the project we granted membership of
 that project.
 If the user has any other roles on the project then the _member_ role is
 essentially ignored.

 In that devstack patch I removed the default project because we want our
 users to explicitly ask for the project they want to be scoped to.
 This patch shouldn't have caused any issues though because in each of
 those cases the user is immediately granted a different role on the
 project - therefore having 'membership'.

 Creating the _member_ role manually won't cause any problems, but what
 issue are you seeing where you need it?


 Jamie


 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr


 

[openstack-dev] [Fuel] Stop distributing IMG artifact and start using hybrid ISO.

2015-02-27 Thread Stanislaw Bogatkin
Hi everyone,

we have merged code that will create a hybrid ISO. The current 6.1 #147 ISO
can already be booted from USB by the standard method (just using dd
if=/path/to/iso of=/path/to/usb/stick).

Creating the IMG artifact will be disabled soon, so please be aware of it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle [metrics]

2015-02-27 Thread Sean Dague
On 02/26/2015 05:41 PM, Stefano Maffulli wrote:
 On Thu, 2015-02-26 at 15:58 -0600, Kevin L. Mitchell wrote:
 One thing that comes to mind is that there are a lot of reviews that
 appear to have been abandoned; I just cleared several from the
 novaclient review queue (or commented on them to see if they were still
 alive).  I also know of a few novaclient changes that are waiting for
 corresponding nova changes before they can be merged.  Could these be
 introducing a skew factor?
 
 Maybe, depending on how many they are and how old are we talking about.
 How much cruft is there? Maybe the fact that we don't autoabandon
 anymore is a relevant factor?
 
 Looking at Nova time to merge (not the client, since clients are not
 analyzed individually), the median is over 10 days (the mean wait is
 29). But if you look at the trends of time to wait for reviewers, they've
 been trending down for 3 quarters in a row (both average and median)
 while time to wait for submitter is trending up.
 
 http://git.openstack.org/cgit/openstack-infra/activity-board/plain/reports/2014-q4/pdf/projects/nova.pdf
 
 Does it make sense to purge old stuff regularly so we have a better
 overview? Or maybe we should chart a distribution of age of proposed
 changesets, too in order to get a better understanding of where the
 outliers are?

We already purge old stuff that's unmergable (no activity in > 4 weeks
with either a core -2 or Jenkins -1). The last purge was about 4 weeks
ago. So effectively abandoned code isn't in the system.

The merge conflict detector will also mean that all patches eventually
get a Jenkins -1 if they aren't maintained. So you should consider
everything in the system active for some definition.
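
The purge criterion described above can be expressed directly; this is a
sketch of the rule, not the actual purge script:

```python
import datetime

def is_purgeable(change, now, idle=datetime.timedelta(weeks=4)):
    """Abandoned-candidate test: blocked by a core -2 or a Jenkins -1,
    and no activity for more than `idle`."""
    blocked = change["core_review"] <= -2 or change["jenkins"] < 0
    stale = now - change["last_activity"] > idle
    return blocked and stale

now = datetime.datetime(2015, 2, 27)
old = {"core_review": -2, "jenkins": 1,
       "last_activity": datetime.datetime(2015, 1, 1)}
fresh = {"core_review": 0, "jenkins": 1,
         "last_activity": datetime.datetime(2015, 2, 20)}
print(is_purgeable(old, now), is_purgeable(fresh, now))  # True False
```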

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ideas about Openstack Cinder for GSOC 2015

2015-02-27 Thread Jay Bryant
Fyi ...

This is something that Mike Perez was thinking about so you can ping
thingee on irc if you can't find e0ne.

Jay
On Feb 27, 2015 3:51 AM, harryxiyou harryxi...@gmail.com wrote:

 Hi all,

 I cannot find the proper idea about Openstack Cinder for GSOC 2015
 here[1]. Could anyone please give me some clues?

 [1] https://wiki.openstack.org/wiki/GSoC2015


 Thanks, Harry

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack]Failure retrieving TENANT_ID for demo

2015-02-27 Thread Guo, Ruijing
1.   Problem description (compute/network node without keystone service fails to deploy)

I use devstack to deploy a mini-cluster. On the controller node, the keystone 
service is enabled; on the compute/network node, keystone is not enabled.

The controller node is deployed successfully, but the compute/network node 
deployment fails with the following error:

2015-02-27 06:32:20.337 | ERROR: openstack Authentication type must be selected 
with --os-auth-type
2015-02-27 06:32:20.355 | + TENANT_ID=
2015-02-27 06:32:20.359 | + die_if_not_set 530 TENANT_ID 'Failure retrieving 
TENANT_ID for demo'
2015-02-27 06:32:20.359 | + local exitcode=0
2015-02-27 06:32:20.366 | [Call Trace]
2015-02-27 06:32:20.366 | ./stack.sh:1203:create_neutron_initial_network
2015-02-27 06:32:20.366 | /opt/stack/devstack/lib/neutron:530:die_if_not_set
2015-02-27 06:32:20.366 | /opt/stack/devstack/functions-common:308:die
2015-02-27 06:32:20.363 | [ERROR] /opt/stack/devstack/functions-common:530 
Failure retrieving TENANT_ID for demo


2.   Root-cause



a.   The following variables are exported when keystone is enabled:

if is_service_enabled keystone; then
    …
    export OS_AUTH_URL=$SERVICE_ENDPOINT
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=$ADMIN_PASSWORD
    export OS_REGION_NAME=$REGION_NAME
fi


b.  Neutron uses the above exported variables:

function create_neutron_initial_network {

    TENANT_ID=$(openstack project list | grep " demo " | get_field 1)

    die_if_not_set $LINENO TENANT_ID "Failure retrieving TENANT_ID for demo"


3.   Solutions

Solution 1: enable keystone for all nodes when using devstack.


Solution 2: export the above variables if keystone is not enabled.

Solution 3: fix in devstack as:

--- stack.sh.orig   2015-02-28 00:24:31.784321431 +
+++ stack.sh2015-02-28 00:26:22.594307057 +
@@ -973,13 +973,14 @@
 # Begone token-flow auth
 unset OS_TOKEN OS_URL

-# Set up password-flow auth creds now that keystone is bootstrapped
-export OS_AUTH_URL=$SERVICE_ENDPOINT
-export OS_TENANT_NAME=admin
-export OS_USERNAME=admin
-export OS_PASSWORD=$ADMIN_PASSWORD
-export OS_REGION_NAME=$REGION_NAME
fi
+
+# Set up password-flow auth creds now that keystone is bootstrapped
+export OS_AUTH_URL=$SERVICE_ENDPOINT
+export OS_TENANT_NAME=admin
+export OS_USERNAME=admin
+export OS_PASSWORD=$ADMIN_PASSWORD
+export OS_REGION_NAME=$REGION_NAME

I’d like to file a bug in devstack and provide a patch for it. Any thoughts?

Thanks,
-Ruijing


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-02-27 Thread Stefano Maffulli
I'm not expressing myself clearly enough. I don't advocate for the
removal of anything because I like pretty charts. I'm changing the
subject to be even more clear.

On Fri, 2015-02-27 at 13:26 -0800, James E. Blair wrote:
 I am asking you to please independently remove changes that you don't
 think should be considered from your metrics.  

I'm saying that the reports have indicators that seem to show struggle.
In a previous message Kevin hinted that a probable reason for those
bad-looking numbers was a lot of reviews that appear to have been
abandoned. This doesn't seem to be the case, because some projects have
a habit of 'purging'. 

I have never explicitly ordered developers to purge anything. If their
decision to purge is due to the numbers they may have seen on the
reports I'd like to know. 

That said, the problem that the reports highlight remains confirmed so
far: contributors seem to be left in some cases hanging, for many many
days, *after a comment* and they don't come back.

 I think abandoning changes so that the metrics look the way we want is a
 terrible experience for contributors.

I agree, that should not be a motivator. Also, after chatting with you
on IRC I tend to think that instead of abandoning the reviews we should
highlight them and have humans act on them. Maybe build a dashboard of
'old stuff' to periodically sift through and see if there are things
worth picking up again, or to ping the owner, or something else managed
by humans. 

I happened to find one such review, automatically closed, that
could have been fixed with a trivial edit to the commit message instead:

https://review.openstack.org/#/c/98735/

(that owner had a bunch of auto-abandoned patches 
https://review.openstack.org/#/q/owner:%22Mh+Raies+%253Craiesmh08%
2540gmail.com%253E%22,n,z). I have made a note to reach out to him and
get more anecdotes.

 Especially as it appears some projects, such as nova, are in a position
 where they are actually leaving -2 votes on changes which will not be
 lifted for 2 or 3 months.  That means that if someone runs a script like
 Sean's, these changes will be abandoned, yet there is nothing that the
 submitter can do to progress the change in the mean time.  Abandoning
 such a review is making an already bad experience for contributors even
 worse.

this sounds like a different issue :(


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-02-27 Thread John Griffith
On Fri, Feb 27, 2015 at 5:40 PM, Stefano Maffulli stef...@openstack.org
wrote:

 I'm not expressing myself cleary enough. I don't advocate for the
 removal of anything because I like pretty charts. I'm changing the
 subject to be even more clear.

 On Fri, 2015-02-27 at 13:26 -0800, James E. Blair wrote:
  I am asking you to please independently remove changes that you don't
  think should be considered from your metrics.

 I'm saying that the reports have indicators that seem to show struggle.
 In a previous message Kevin hinted that probably a reason for those bad
 looking numbers was due to a lot of reviews that appear to have been
 abandoned. This doesn't seem the case because some projects have a
 habit of 'purging'.

 I have never explicitly ordered developers to purge anything. If their
 decision to purge is due to the numbers they may have seen on the
 reports I'd like to know.

 That said, the problem that the reports highlight remains confirmed so
 far: contributors seem to be left in some cases hanging, for many many
 days, *after a comment* and they don't come back.

  I think abandoning changes so that the metrics look the way we want is a
  terrible experience for contributors.

 I agree, that should not be a motivator. Also, after chatting with you
 on IRC I tend to think that instead of abandoning the review we should
 highlight them and have humans act on them. Maybe build a dashboard of
 'old stuff' to periodically sift through and see if there are things
 worth picking up again or to ping the owner or something else managed by
 humans.

 I happened to have found one of such review automatically closed that
 could have been fixed with a trivial edit in commit message instead:

 https://review.openstack.org/#/c/98735/

 (that owner had a bunch of auto-abandoned patches
 https://review.openstack.org/#/q/owner:%22Mh+Raies+%253Craiesmh08%
 2540gmail.com%253E%22,n,z). I have made a note to reach out to him and
 get more anecdotes.

  Especially as it appears some projects, such as nova, are in a position
  where they are actually leaving -2 votes on changes which will not be
  lifted for 2 or 3 months.  That means that if someone runs a script like
  Sean's, these changes will be abandoned, yet there is nothing that the
  submitter can do to progress the change in the mean time.  Abandoning
  such a review is making an already bad experience for contributors even
  worse.

 this sounds like a different issue :(


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

For what it's worth, at one point the Cinder project set up an auto-abandon
job that purged items that had a negative mark, either from a reviewer or
from Jenkins, and had not been updated in over two weeks.  This had
absolutely nothing to do with metrics or statistical analysis of the
project.  We simply had a hard time dealing with patches that the submitters
didn't care about.  If somebody takes the time to review a patch, then I
don't think it's too much to ask that the submitter respond to questions or
comments within a two week period.  Note, the auto purge in our case only
removed items that had no updates or activity at all.

We were actually in a position where we had patches that were submitted,
failed unit tests in the gate (valid failures that occurred 100% of the
time) and had sat for an entire release without the submitter ever updating
the patch. I don't think it's unreasonable at all to abandon these and
remove them from the queue.  I don't think this is a bad thing, I think
it's worse to leave them as active when they're bit-rotted and the
submitter doesn't even care about them any longer.  The other thing is,
those patches are still there, they can still be accessed and reinstated.

There's a lot of knocks against core teams regarding time to review and
keeping up with the workload.  That's fine, but at the same time, if you
submit something you should follow through on it and respond to comments or
test failures in a timely manner.  Also, there should be some scaling factor
here: if a patch that needs updating can be expected to sit in the queue for
a month, for example, then we should allow one month for each reviewer - so
a minimum of three months for a +1, +2 and +A.

I don't think it's reasonable to say "hey, you all have to review faster
and get more done" and then also say "by the way, you need to babysit and
reach out and contact owners of patches that have been idle for long
periods" - especially considering MANY of the patches, in Cinder at least,
that end up falling into this category are from folks who aren't on IRC
and do not have public email addresses in Launchpad.

Just providing another perspective.

[openstack-dev] Ideas about Openstack Cinder for GSOC 2015

2015-02-27 Thread harryxiyou
Hi all,

I cannot find the proper idea about Openstack Cinder for GSOC 2015
here[1]. Could anyone please give me some clues?

[1] https://wiki.openstack.org/wiki/GSoC2015


Thanks, Harry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Need help in configuring keystone

2015-02-27 Thread Akshik DBK
Hi,

I'm new to SAML and trying to integrate keystone with SAML. I'm using Ubuntu 
12.04 with Icehouse, following http://docs.openstack.org/developer/k... When 
I try to configure keystone with two idps and access 
https://MYSERVER:5000/v3/OS-FEDERATIO... it gets redirected to testshib.org 
and prompts for username and password. When those are given, I get:

shibsp::ConfigurationException at ( https://MYSERVER:5000/Shibboleth.sso/... ) 
No MetadataProvider available.

Here is my shibboleth2.xml content:

<SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
    xmlns:conf="urn:mace:shibboleth:2.0:native:sp:config"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    clockSkew="180">

    <ApplicationDefaults entityID="https://MYSERVER:5000/Shibboleth">
        <Sessions lifetime="28800" timeout="3600" checkAddress="false"
            relayState="ss:mem" handlerSSL="false">
            <SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
                SAML2 SAML1
            </SSO>

            <Logout>SAML2 Local</Logout>

            <Handler type="MetadataGenerator" Location="/Metadata" signing="false"/>
            <Handler type="Status" Location="/Status"/>
            <Handler type="Session" Location="/Session" showAttributeValues="false"/>
            <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
        </Sessions>

        <Errors supportContact="root@localhost"
            logoLocation="/shibboleth-sp/logo.jpg"
            styleSheet="/shibboleth-sp/main.css"/>

        <AttributeExtractor type="XML" validate="true" path="attribute-map.xml"/>
        <AttributeResolver type="Query" subjectMatch="true"/>
        <AttributeFilter type="XML" validate="true" path="attribute-policy.xml"/>
        <CredentialResolver type="File" key="sp-key.pem" certificate="sp-cert.pem"/>

        <ApplicationOverride id="idp_1" entityID="https://MYSERVER:5000/Shibboleth">
            <Sessions lifetime="28800" timeout="3600" checkAddress="false"
                relayState="ss:mem" handlerSSL="false">
                <SSO entityID="https://portal4.mss.internalidp.com/idp/shibboleth" ECP="true">
                    SAML2 SAML1
                </SSO>
                <Logout>SAML2 Local</Logout>
            </Sessions>

            <MetadataProvider type="XML"
                uri="https://portal4.mss.internalidp.com/idp/shibboleth"
                backingFilePath="/tmp/tata.xml" reloadInterval="18"/>
        </ApplicationOverride>

        <ApplicationOverride id="idp_2" entityID="https://MYSERVER:5000/Shibboleth">
            <Sessions lifetime="28800" timeout="3600" checkAddress="false"
                relayState="ss:mem" handlerSSL="false">
                <SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
                    SAML2 SAML1
                </SSO>
                <Logout>SAML2 Local</Logout>
            </Sessions>

            <MetadataProvider type="XML"
                uri="https://idp.testshib.org/idp/shibboleth"
                backingFilePath="/tmp/testshib.xml" reloadInterval="18"/>
        </ApplicationOverride>
    </ApplicationDefaults>

    <SecurityPolicyProvider type="XML" validate="true" path="security-policy.xml"/>
    <ProtocolProvider type="XML" validate="true" reloadChanges="false" path="protocols.xml"/>
</SPConfig>

And here is my wsgi-keystone config:

WSGIScriptAlias /keystone/main  /var/www/cgi-bin/keystone/main
WSGIScriptAlias /keystone/admin  /var/www/cgi-bin/keystone/admin

<Location /keystone>
    # NSSRequireSSL
    SSLRequireSSL
    Authtype none
</Location>

<Location /Shibboleth.sso>
    SetHandler shib
</Location>

<Location /v3/OS-FEDERATION/identity_providers/idp_1/protocols/saml2/auth>
    ShibRequestSetting requireSession 1
    ShibRequestSetting applicationId idp_1
    AuthType shibboleth
    ShibRequireAll On
    ShibRequireSession On
    ShibExportAssertion Off
    Require valid-user
</Location>

<Location /v3/OS-FEDERATION/identity_providers/idp_2/protocols/saml2/auth>
    ShibRequestSetting requireSession 1
    ShibRequestSetting applicationId idp_2
    AuthType shibboleth
    ShibRequireAll On
    ShibRequireSession On
    ShibExportAssertion Off
    Require valid-user
</Location>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-27 Thread Deepak Shetty
On Wed, Feb 25, 2015 at 11:48 PM, Deepak Shetty dpkshe...@gmail.com wrote:



 On Wed, Feb 25, 2015 at 8:42 PM, Deepak Shetty dpkshe...@gmail.com
 wrote:



 On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley fu...@yuggoth.org
 wrote:

 On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
 [...]
  Run 2) We removed glusterfs backend, so Cinder was configured with
  the default storage backend i.e. LVM. We re-created the OOM here
  too
 
  So that proves that glusterfs doesn't cause it, as its happening
  without glusterfs too.

 Well, if you re-ran the job on the same VM then the second result is
 potentially contaminated. Luckily this hypothesis can be confirmed
 by running the second test on a fresh VM in Rackspace.


 Maybe true, but we did the same on an hpcloud provider VM too, and both
 times it ran successfully with glusterfs as the cinder backend. Also, before
 starting the 2nd run, we did unstack and saw that free memory did go back to
 5G+ and then re-invoked your script. I believe the contamination could
 result in some additional testcase failures (which we did see) but shouldn't
 be related to whether the system can OOM or not, since that's a runtime thing.

 I see that the VM is up again. We will execute the 2nd run afresh now and
 update here.


 Ran tempest configured with the default backend (i.e. LVM) and was able to
 recreate the OOM issue, so running tempest without gluster against a fresh VM
 reliably recreates the OOM issue; snip below from syslog:

 Feb 25 16:58:37 devstack-centos7-rax-dfw-979654 kernel: glance-api invoked
 oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
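For what it's worth, OOM events like the one above can be pulled out of a captured syslog mechanically. A small sketch (it assumes the standard kernel oom-killer message shape shown in the excerpt):

```python
import re

# Matches kernel oom-killer invocation lines like the syslog excerpt above,
# capturing the offending process name and the gfp_mask value.
OOM_RE = re.compile(r"kernel: (\S+) invoked oom-killer: gfp_mask=(\S+),")

def find_oom_events(lines):
    """Return (process, gfp_mask) pairs for each oom-killer hit."""
    events = []
    for line in lines:
        m = OOM_RE.search(line)
        if m:
            events.append((m.group(1), m.group(2)))
    return events

log = [
    "Feb 25 16:58:37 devstack-centos7-rax-dfw-979654 kernel: glance-api "
    "invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0",
]
print(find_oom_events(log))  # [('glance-api', '0x201da')]
```

Pointing this at the full syslog from a failed job run makes it easy to confirm which service triggered the OOM killer.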

 Had a discussion with clarkb on IRC and given that F20 is discontinued,
 F21 has issues with tempest (under debug by ianw)
 and centos7 also has issues on rax (as evident from this thread), the only
 option left is to go with ubuntu based CI job, which
 BharatK is working on now.


Quick Update:

Cinder-GlusterFS CI job on ubuntu was added (
https://review.openstack.org/159217)

We ran it 3 times against our stackforge repo patch @
https://review.openstack.org/159711
and it works fine (2 testcase failures, which are expected and we're
working towards fixing them)

For the logs of the 3 experimental runs, look @
http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/

Of the 3 jobs, 1 was scheduled on rax and 2 on hpcloud, so it's working
nicely across the different cloud providers.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help in configuring keystone

2015-02-27 Thread Marek Denis

Hi Akshik,

Did you upload your Metadata file to the testshib server?
You are advised to follow steps starting from here: 
http://testshib.org/register.html


For the record, Keystone will act here as a Service Provider, so you 
need to follow the testshib docs/tutorials for setting up your SP (Service
Provider).


Let me know if that was your issue.
If not, more detailed steps on how you configured your Keystone 
acting as a Service Provider would be helpful.
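A quick way to grab the SP metadata you need to register is to fetch it from the MetadataGenerator handler that the shibboleth2.xml below declares at Location=/Metadata. A minimal sketch, where MYSERVER stands in for your SP host:

```python
# Build the URL of the Shibboleth SP's generated metadata. The SP exposes it
# under the handler prefix /Shibboleth.sso plus the configured Location.
from urllib.request import urlopen  # only needed for the optional download

def sp_metadata_url(base):
    """Return the metadata URL for a Shibboleth SP rooted at `base`."""
    return base.rstrip("/") + "/Shibboleth.sso/Metadata"

url = sp_metadata_url("https://MYSERVER:5000")
print(url)  # https://MYSERVER:5000/Shibboleth.sso/Metadata

# To actually download it for upload at http://testshib.org/register.html:
#     with urlopen(url) as resp:
#         open("sp-metadata.xml", "wb").write(resp.read())
```

The saved file is what testshib expects on its registration page.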


Marek Denis

On 27.02.2015 11:26, Akshik DBK wrote:


Hi, I'm new to SAML and trying to integrate Keystone with SAML. I'm using 
Ubuntu 12.04 with Icehouse.


I'm following
http://docs.openstack.org/developer/keystone/extensions/shibboleth.html


I'm trying to configure Keystone with two IdPs.

When I access
https://MYSERVER:5000/v3/OS-FEDERATION/identity_providers/idp_2/protocols/saml2/auth

it gets redirected to testshib.org, which prompts for a username and
password; when those are given, I get:


*shibsp::ConfigurationException at
(https://MYSERVER:5000/Shibboleth.sso/SAML2/POST): No MetadataProvider
available.*


here is my shibboleth2.xml content

<SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
xmlns:conf="urn:mace:shibboleth:2.0:native:sp:config"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
clockSkew="180">

<ApplicationDefaults entityID="https://MYSERVER:5000/Shibboleth">
<Sessions lifetime="28800" timeout="3600" checkAddress="false"
relayState="ss:mem" handlerSSL="false">
<SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
SAML2 SAML1
</SSO>

<Logout>SAML2 Local</Logout>

<Handler type="MetadataGenerator" Location="/Metadata" signing="false"/>
<Handler type="Status" Location="/Status"/>
<Handler type="Session" Location="/Session" showAttributeValues="false"/>
<Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
</Sessions>

<Errors supportContact="root@localhost"
logoLocation="/shibboleth-sp/logo.jpg"
styleSheet="/shibboleth-sp/main.css"/>

<AttributeExtractor type="XML" validate="true" path="attribute-map.xml"/>
<AttributeResolver type="Query" subjectMatch="true"/>
<AttributeFilter type="XML" validate="true" path="attribute-policy.xml"/>
<CredentialResolver type="File" key="sp-key.pem" certificate="sp-cert.pem"/>

<ApplicationOverride id="idp_1" entityID="https://MYSERVER:5000/Shibboleth">
<Sessions lifetime="28800" timeout="3600" checkAddress="false"
relayState="ss:mem" handlerSSL="false">
<SSO entityID="https://portal4.mss.internalidp.com/idp/shibboleth" ECP="true">
SAML2 SAML1
</SSO>
<Logout>SAML2 Local</Logout>
</Sessions>

<MetadataProvider type="XML"
uri="https://portal4.mss.internalidp.com/idp/shibboleth"
backingFilePath="/tmp/tata.xml" reloadInterval="18"/>
</ApplicationOverride>

<ApplicationOverride id="idp_2" entityID="https://MYSERVER:5000/Shibboleth">
<Sessions lifetime="28800" timeout="3600" checkAddress="false"
relayState="ss:mem" handlerSSL="false">
<SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
SAML2 SAML1
</SSO>

<Logout>SAML2 Local</Logout>
</Sessions>

<MetadataProvider type="XML" uri="https://idp.testshib.org/idp/shibboleth"
backingFilePath="/tmp/testshib.xml" reloadInterval="18"/>
</ApplicationOverride>
</ApplicationDefaults>

<SecurityPolicyProvider type="XML" validate="true"
path="security-policy.xml"/>
<ProtocolProvider type="XML" validate="true" reloadChanges="false"
path="protocols.xml"/>
</SPConfig>

here is my wsgi-keystone

WSGIScriptAlias /keystone/main /var/www/cgi-bin/keystone/main
WSGIScriptAlias /keystone/admin /var/www/cgi-bin/keystone/admin

<Location /keystone>
# NSSRequireSSL
SSLRequireSSL
Authtype none
</Location>

<Location /Shibboleth.sso>
SetHandler shib
</Location>

<Location /v3/OS-FEDERATION/identity_providers/idp_1/protocols/saml2/auth>
ShibRequestSetting requireSession 1
ShibRequestSetting applicationId idp_1
AuthType shibboleth
ShibRequireAll On
ShibRequireSession On
ShibExportAssertion Off
Require valid-user
</Location>

<Location /v3/OS-FEDERATION/identity_providers/idp_2/protocols/saml2/auth>
ShibRequestSetting requireSession 1
ShibRequestSetting applicationId idp_2
AuthType shibboleth
ShibRequireAll On
ShibRequireSession On
ShibExportAssertion Off
Require valid-user
</Location>
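As a side note for anyone debugging the endpoints protected above: per the Keystone OS-FEDERATION API, a GET on the .../protocols/saml2/auth path (once a Shibboleth session exists) should return an unscoped token in the X-Subject-Token response header. A minimal sketch of building those paths; treat the exact token flow as something to verify against your Keystone release:

```python
# Compose the federated auth path guarded by the <Location> blocks above.
def federation_auth_path(idp, protocol="saml2"):
    return ("/v3/OS-FEDERATION/identity_providers/%s/protocols/%s/auth"
            % (idp, protocol))

# Print both endpoints configured in this thread (idp_1 and idp_2).
for idp in ("idp_1", "idp_2"):
    print(federation_auth_path(idp))
```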



Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-27 Thread Thierry Carrez
Doug Hellmann wrote:
 Maybe some of the folks in the meeting who felt more strongly that it
 should be a separate document can respond with their thoughts?

I don't feel very strongly and could survive this landing in
openstack-specs. My objection was the following:

- Specs are for designing the solution and implementation plan to a
specific problem. They are mainly used by developers and reviewers
during implementation as a clear reference rationale for change and
approved plan. Once they are fully implemented, they are kept for
historical purposes, not for constant reference.

- Guidelines/developer doc are for all developers (old and new) to
converge on best practices on topics that are not directly implemented
as hacking rules. They are constantly used by everyone (not just
developers/reviewers of a given feature) and never become history.

Putting guidelines doc in the middle of specs makes it a bit less
discoverable imho, especially by our new developers. It's harder to
determine which are still current and worth reading. An OpenStack
developer doc sounds like a much better entry point.

That said, the devil is in the details, and some efforts start as specs
(for existing code to catch up with the recommendation) and become
guidelines (for future code being written). That is the case of the log
levels spec: it is both a spec and a guideline. Personally I wouldn't
object if that was posted in both areas, or if the relevant pieces were
copied, once the current code has caught up, from the spec to a dev
guideline.

In the eventlet case, it's only a set of best practices / guidelines:
there is no specific problem to solve, no catch-up plan for existing
code to implement. Only a collection of recommendations if you get to
write future eventlet-based code. Those won't start or end. Which is why
I think it should go straight to a developer doc.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle [metrics]

2015-02-27 Thread Daniel P. Berrange
On Fri, Feb 27, 2015 at 09:51:34AM +1100, Michael Still wrote:
 On Fri, Feb 27, 2015 at 9:41 AM, Stefano Maffulli stef...@openstack.org 
 wrote:
 
  Does it make sense to purge old stuff regularly so we have a better
  overview? Or maybe we should chart a distribution of age of proposed
  changesets, too in order to get a better understanding of where the
  outliers are?
 
 Given the abandon of a review isn't binding (a proposer can easily
 unabandon), I do think we should abandon more than we do now. The
 problem at the moment being that its a manual process which isn't much
 fun for the person doing the work.
 
 Another factor to consider here is that abandoned patches against bugs
 make the bug look like someone is working on a fix, which probably
 isn't the case.
 
 Nova has been trying some very specific things to try and address
 these issues, and I think we're improving. Those things are:
 
 * specs
 * priority features

This increased level of process in Nova has actually made the negative
effects of the 6 month cycle noticeably worse on balance. If you aren't
able to propose your feature in the right window of the dev cycle, your
chances of getting stuff merged go down significantly and the time
before users are likely to see your feature correspondingly goes up.
Previously people could come along with simple features at the end of
the cycle and we had the flexibility to be pragmatic and review and
approve them. Now we lack that ability even if we have the spare review
cycles to consider them. The processes adopted have merely made us more
efficient at disappointing contributors earlier in the cycle. There have
been no changes made that would solve the bigger problem: Nova is far
too large relative to the size of the core review team, so we have an
ongoing major bottleneck in our development. That bottleneck, combined
with the length of the 6 month cycle, is an ongoing disaster for our
contributors.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-27 Thread Gorka Eguileor

On Wed, Feb 25, 2015 at 02:54:35PM -0500, Doug Hellmann wrote:
 
 That leads to two questions, then:
 
 1. Should we have a unified developer guide for the project?

Sounds like a great idea to me, I think we should.

 2. Where should it live and how should we manage it?
 

I like Stefano's idea of it being under docs.openstack.org/developer

 An alternative is to designate a subsection of the openstack-specs
 repository for the content, as we’ve done in Oslo. In this case,
 though, I think it makes more sense to create a new repository. If
 there is a general agreement to go ahead with the plan, I will set
 that up with a Sphinx project framework to get us started.
 
 Comments?
 
 Doug

Gorka Eguileor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Stop distributing IMG artifact and start using hybrid ISO.

2015-02-27 Thread Mike Scherbakov
Thanks, Stas.
Eugene, please ensure that all teams are prepared for it.

On Fri, Feb 27, 2015 at 1:48 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 Hi everyone,

 we have merged code that will create a hybrid ISO. The current 6.1 #147 ISO
 can already be booted from USB by the standard method (using dd
 if=/path/to/iso of=/path/to/usb/stick).

 Creating IMG artifact will be disabled soon, so, please, be aware of it.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Nova][Cinder] Operations: adding new nodes in disabled state, allowed for test tenant only

2015-02-27 Thread Bogdan Dobrelya
On 23.02.2015 15:13, Bogdan Dobrelya wrote:
 + [Fuel] tag
 + openstack-operators ML
 
 Joe Gordon joe.gordon0 at gmail.com
 Thu Dec 4 13:26:59 UTC 2014
 
 On Wed, Dec 3, 2014 at 3:31 PM, Mike Scherbakov mscherbakov at
 mirantis.com
 wrote:
 
 Hi all,
 enable_new_services in nova.conf seems to allow add new compute nodes in
 disabled state:


 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L507-L508,
 so it would allow to check everything first, before allowing production
 workloads host a VM on it. I've filed a bug to Fuel to use this by
 default
 when we scale up the env (add more computes) [1].

 A few questions:

1. can we somehow enable compute service for test tenant first? So
cloud administrator would be able to run test VMs on the node, and
 after
ensuring that everything is fine - to enable service for all tenants


 
 Although there may be more then one way to set this up in nova, this can
 definitely be done via nova host aggregates. Put new compute services into
 an aggregate that only specific tenants can access (controlled via
 scheduler filter).
 
 Looks reasonable, +1 for Nova host aggregates [0].
 
 There is still a question, though, about an enable_new_services
 parameter, cinder and other OpenStack services. It is not clear how to
 use this parameter from an operator perspective, for example:
 1) While deploying or scaling the OpenStack environment, we should set
 enable_new_services=false for all services which support it.

Just a note, it looks like Nova doesn't honor the
enable_new_services=false setting [0].

[0] https://bugs.launchpad.net/nova/+bug/1426332
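To sketch the host-aggregate approach referenced above: the AggregateMultiTenancyIsolation scheduler filter keys off a filter_tenant_id metadata entry on the aggregate, so freshly added computes can be parked in a staging aggregate usable only by the test tenant. The commented-out novaclient calls are assumptions to check against your client version:

```python
# Staging-aggregate sketch: only the named tenant may schedule instances
# onto hosts in an aggregate carrying this metadata, when the
# AggregateMultiTenancyIsolation filter is enabled in the scheduler.

def staging_aggregate_metadata(test_tenant_id):
    """Metadata restricting an aggregate to a single (test) tenant."""
    return {"filter_tenant_id": test_tenant_id}

meta = staging_aggregate_metadata("TEST_TENANT_ID")  # placeholder tenant id
print(meta)  # {'filter_tenant_id': 'TEST_TENANT_ID'}

# Applying it (assumed python-novaclient calls, commented out):
# from novaclient import client
# nova = client.Client("2", ...)
# agg = nova.aggregates.create("staging", None)
# nova.aggregates.set_metadata(agg, meta)
# nova.aggregates.add_host(agg, "new-compute-01")
```

Once the new compute passes its health checks, it can be moved out of the staging aggregate into production.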

 2) Once the deploy/scale is done, re-enable disabled services. But how
 exactly that should be done?
 * Set enable_new_services=True and restart the schedulers and conductor
 services? Or API services as well?
 * Keep enable_new_services=false in configs, but issue - for nova
 example - 'nova-manage service enable ...' commands for added compute
 nodes? And what about cinder and other ones (there are no *-manage
 service enable commands)?
 * Some another way?
 
 And regarding plans for implementing this improvement in Fuel, I believe
 for nova computes it could be done as a workaround for the 6.1 release
 with the help of enable_new_services configuration parameter and a
 separate Fuel post-deploy granular task which should re-enable the
 disabled compute services. But this should be re-implemented later with
 separate host aggregate for deployment and health checks, see the
 related blueprint [1].
 
 [0]
 http://docs.openstack.org/havana/config-reference/content/host-aggregates.html
 [1] https://blueprints.launchpad.net/fuel/+spec/disable-new-computes
 

1.
2. What about Cinder? Is there a similar option / ability?
3. What about other OpenStack projects?

 What is your opinion, how we should approach the problem (if there is a
 problem)?

 [1] https://bugs.launchpad.net/fuel/+bug/1398817
 --
 Mike Scherbakov
 #mihgen

 


-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][Rally][HA-testing][multi-scenarios-load-gen] Proposal to change Rally input task format

2015-02-27 Thread Mikhail Dubov
Hi Boris,

nice job! I like how this new task format looks. I have commented on your
patch with a couple of suggestions to make it even easier to understand.

Best regards,
Mikhail Dubov

Engineering OPS
Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov

On Thu, Feb 26, 2015 at 12:24 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi stackers,


 When we started Rally we had just a small idea: make a tool that
 generates load and measures performance. Over almost 2 years a lot has
 changed in Rally, and it is now quite a general testing framework that
 can cover various topics: stress, load, volume, performance, negative
 and functional testing. Since the beginning we have had a
 scenario-centric approach, where a scenario method is called multiple
 times simultaneously to generate load, and its duration is collected.

 This is a huge limitation that doesn't allow us to easily generate
 real-life load (e.g. loading a few components simultaneously) or do HA
 testing (where we need, during load generation, to disable/kill
 processes, reboot or power off physical nodes). To make this possible
 we should just run multiple scenarios in parallel, but this change will
 require a change in the input task format.

 I made proposal of new Rally task input format in this patch:

 https://review.openstack.org/#/c/159065/3/specs/new_rally_input_task_format.yaml

 Please review it. Let's try to resolve all UX issues before starting
 working on it.

 P.S. I hope this will be the last big change in Rally input task format..


 Best regards,
 Boris Pavlovic

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][Rally][HA-testing][multi-scenarios-load-gen] Proposal to change Rally input task format

2015-02-27 Thread Aleksandr Maretskiy
Hi

Patch set #3 looks good to me (but I have a proposal about the `description'
field).

Thanks!

On Fri, Feb 27, 2015 at 2:02 PM, Mikhail Dubov mdu...@mirantis.com wrote:

 Hi Boris,

 nice job! I like how this new task format looks like. I have commented
 your patch with a couple of suggestions to make it even easier to
 understand.

 Best regards,
 Mikhail Dubov

 Engineering OPS
 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov

 On Thu, Feb 26, 2015 at 12:24 AM, Boris Pavlovic bo...@pavlovic.me
 wrote:

 Hi stackers,


 When we started Rally we have a just small idea to make some tool that
 generates load and measures performance. During almost 2 years a lot of
 changed in Rally, so now it's quite common testing framework that allows to
 cover various topics like: stress, load, volume, performance, negative,
 functional testing. Since the begging we have idea of scenario centric
 approach, where scenario method that is called multiple times
 simultaneously to generate load and duration of it is collected.

 This is huge limitation that doesn't allow us to easily generate real
 life load. (e.g. loading simultaneously few components) or HA testing
 (where we need during the load generation to disable/kill process, reboot
 or power off physical nodes). To make this possible we should just run
 multiple scenarios in parallel, but this change will require change in
 input task format.

 I made proposal of new Rally task input format in this patch:

 https://review.openstack.org/#/c/159065/3/specs/new_rally_input_task_format.yaml

 Please review it. Let's try to resolve all UX issues before starting
 working on it.

 P.S. I hope this will be the last big change in Rally input task format..


 Best regards,
 Boris Pavlovic

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev