Re: [openstack-dev] [Mistral] Global Context and Execution Environment

2014-12-10 Thread Renat Akhmerov
Agree with Nikolay. 

Winson, did you see
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment ?
The concept described there is pretty simple, and syntactically we can have a
predefined key in the workflow context, for example accessible as $.__env (similar
to $.__execution), which contains environment variables. They are the same for
the whole workflow, including subworkflows.
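A tiny sketch of that idea (the helper names are hypothetical, not Mistral's actual code): the environment is injected under a reserved "__env" key, and the same mapping is handed down to subworkflows so they all see identical values:

```python
# Sketch: injecting an execution environment into a workflow context
# under a reserved "__env" key, so expressions like $.__env.region can
# resolve against it. Illustrative only, not Mistral's implementation.

def build_context(workflow_input, env):
    """Merge user input with a shared environment section."""
    ctx = dict(workflow_input)
    ctx["__env"] = dict(env)  # this dict is reused for subworkflows
    return ctx

def build_subworkflow_context(parent_ctx, subworkflow_input):
    """Subworkflows get their own input but inherit the environment."""
    ctx = dict(subworkflow_input)
    ctx["__env"] = parent_ctx["__env"]
    return ctx

ctx = build_context({"param1": "x"}, {"region": "us-west"})
sub = build_subworkflow_context(ctx, {"param2": "y"})
# sub["__env"]["region"] == "us-west"
```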

One additional BP that we filed after the Paris Summit is
https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-constants
which is a little bit related to it. According to what is described in this BP, we
can just define workflow-scoped constants for convenience, which are accessible
as regular workflow input variables. Btw, as an idea: they can be initialized
by variables from the execution environment.

And there’s even one more BP,
https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values,
that suggests having default workflow input values and is related to those
two. Btw, it can be extended to having default values for action input values
as well.
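The default-input idea is easy to picture as a resolution step (a minimal sketch under assumed semantics; the BP itself defines the real behavior):

```python
# Sketch: resolving supplied workflow input against declared inputs,
# where a declared item is either a bare name (required) or a
# {name: default} mapping (optional). Illustrative only.

def resolve_input(declared, supplied):
    result = {}
    for item in declared:
        if isinstance(item, dict):
            (name, default), = item.items()
            result[name] = supplied.get(name, default)
        elif item in supplied:
            result[item] = supplied[item]
        else:
            raise ValueError("missing required input: %s" % item)
    return result

inputs = resolve_input(["param1", {"param2": 42}], {"param1": "a"})
# inputs == {"param1": "a", "param2": 42}
```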


So, I would suggest you take a look at all these BPs and continue to discuss 
this topic. I feel it’s really important since all these things are intended to 
improve usability.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 10 Dec 2014, at 00:17, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Guys, 
 
 Maybe I misunderstood something here, but what is the difference between this
 one and
 https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment ?
 
 On Tue, Dec 9, 2014 at 5:35 PM, Dmitri Zimine dzim...@stackstorm.com wrote:
 Winson, 
 
 thanks for filing the blueprint:
 https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
 
 some clarification questions:
 1) how exactly would the user describe these global variables syntactically? 
 In DSL? What can we use as syntax? In the initial workflow input? 
 2) what is the visibility scope: this and child workflows, or truly "global"?
 3) What is a good default behavior?
 
 Let’s detail it a bit more. 
 
 DZ 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Best Regards,
 Nikolay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Global Context and Execution Environment

2014-12-10 Thread Renat Akhmerov
Winson, ok, I got the idea.

Just a crazy idea that came to my mind. What if we just mark some of the input 
parameters as “global”? For example,

wf:
  type: direct
  input:
    - param1
    - param2: global

One way or another we’re talking about different scopes. I see the following 
possible scopes:

* local - default scope, only current workflow tasks can see it
* global - all entities can see it: this workflow itself (its tasks), its 
nested workflows and actions
* workflow - only this workflow and actions called from this workflow can see it

However, if we follow that path we would need to change how Mistral validates
workflow input parameters. Currently, if we pass something into a workflow it
must be declared as an input parameter. In the case of “global” scope and nested
workflows this mechanism is too primitive, because a nested workflow may get
something that it doesn’t expect. So it may not be that straightforward.
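One way the scope rules above could play out when building a nested workflow's context (purely illustrative; the scope names follow the proposal, but the code is hypothetical):

```python
# Sketch: filtering context variables by scope before starting a nested
# workflow. "global" variables pass through implicitly; "local" ones
# must be declared as input of the nested workflow. Illustrative only.

def context_for_nested_workflow(ctx, scopes, declared_input):
    visible = {}
    for name, value in ctx.items():
        scope = scopes.get(name, "local")
        if scope == "global":
            visible[name] = value      # always inherited
        elif name in declared_input:
            visible[name] = value      # passed explicitly
    return visible

ctx = {"param1": 1, "param2": 2}
scopes = {"param2": "global"}
nested = context_for_nested_workflow(ctx, scopes, declared_input={"param1"})
# nested == {"param1": 1, "param2": 2}
```

Note how the "global" variable arrives even though the nested workflow never declared it, which is exactly why the current input-validation mechanism would need to change.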

Thoughts?

Just in case, I’ll repeat related BPs from another thread:

* https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
* https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
* https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-constants
* https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values

Renat Akhmerov
@ Mirantis Inc.



 On 10 Dec 2014, at 13:12, W Chan m4d.co...@gmail.com wrote:
 
 Nikolay,
 
 Regarding whether the execution environment BP is the same as this global
 context BP, I think the difference is in the scope of the variables.  The
 global context that I'm proposing is provided to the workflow at execution
 and is only relevant to that execution.  For example, some contextual
 information about this specific workflow execution (i.e. a reference to a
 related record in an external system, such as a service ticket ID or CMDB
 record ID).  The values do not necessarily carry across multiple executions.
 But as I understand it, the execution environment configuration is a set of
 reusable configuration that can be shared across multiple workflow
 executions.  Having action parameters specified explicitly over
 and over again is a common problem in the DSL.
 
 Winson
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-10 Thread Daniel P. Berrange
On Wed, Dec 10, 2014 at 07:41:55AM +, Irena Berezovsky wrote:
 Hi Daniel,
 Please see inline
 
 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com] 
 Sent: Tuesday, December 09, 2014 4:04 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech 
 driver/L2 and vif_driver
 
 The VIF parameters are mapped into the nova.network.model.VIF class, which is
 doing some crude validation. I would anticipate that this validation will be
 increasing over time, because this is functional data flowing over the API and
 so needs to be carefully managed for upgrade reasons.
 
 Even if the Neutron impl is out of tree, I would still expect both Nova and 
 Neutron core to sign off on any new VIF type name and its associated details 
 (if any).
 
 [IB] This may be the reasonable integration point. But it requires nova team
 review and approval. From my experience the nova team is extremely overloaded,
 therefore getting this code reviewed becomes a very difficult mission.
  
  What other reasons am I missing to not have VIF driver classes as a 
  public extension point ?
 
 Having to find & install VIF driver classes from countless different vendors,
 each hiding their code away on their own obscure website, will lead to an awful
 end user experience when deploying Nova. Users are better served by having it
 all provided when they deploy Nova IMHO.
 
 If every vendor goes off & works in their own isolated world we also lose
 the scope to align the implementations, so that common concepts work the same
 way in all cases and allow us to minimize the number of new VIF types
 required. The proposed vhostuser VIF type is a good example of this - it
 allows a single Nova VIF driver to be capable of potentially supporting
 multiple different impls on the Neutron side.
 If every vendor worked in their own world, we would have ended up with
 multiple VIF drivers doing the same thing in Nova, each with their own set of
 bugs & quirks.
 
 [IB] I think that most of the vendors that maintain a vif_driver out of nova
 do not do it on purpose and would prefer to see it upstream. Sometimes host
 side binding is not fully integrated with libvirt and requires some temporary
 additional code, till libvirt provides complete support. Sometimes, it is
 just lack of nova team attention to the proposed spec/code to be reviewed
 and accepted on time, which ends up with a fully supported neutron part and a
 missing small but critical vif_driver piece.


So the problem of Nova review bandwidth is a constant problem across all
areas of the code. We need to solve this problem for the team as a whole
in a much broader fashion than just for people writing VIF drivers. The
VIF drivers are really small pieces of code that should be straightforward
to review & get merged in any release cycle in which they are proposed.
I think we need to make sure that we focus our energy on doing this and
not ignore the problem by breaking stuff off out of tree.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-congressclient 1.0.1 released

2014-12-10 Thread Aaron Rosen
The congress team is pleased to announce the release of the
python-congressclient 1.0.1.

This release includes several bug fixes as well as many other changes - a
few highlights:

- python34 compatibility
- New CLI command to simulate results of a rule:
  - openstack congress policy simulate  (show the result of simulation)
- New CLI command to check the status of a datasource:
  - openstack congress datasource status list
- New CLI commands for viewing schemas:
  - openstack congress datasource table schema show  (show schema for a
    datasource table)
  - openstack congress datasource schema show  (show schema for a datasource)
- Added missing CLI command:
  - openstack congress policy rule show


$ git log --abbrev-commit --pretty=oneline --no-merges 1.0.0..1.0.1
1e31e9d Fix version issue
53dccd7 Workflow documentation is now in infra-manual
bab2c9e Updated from global requirements
9941dd6 Updated from global requirements
7be067e Used schema to compute columns for datasource rows
7a81c74 Added datasource schema
bcb1b90 Added datasource status command
f5fe21a Add news file about what was added in each release
3c4867d Make client work with python34
aa9eb14 Use a more simple policy rule for test
f5e0a70 Adding missing CLI command congress policy rule show
d8d9adc Updated from global requirements
d46cee8 Added command for policy engine's simulation functionality
255d834 Work toward Python 3.4 support and testing

Please report issues through launchpad:
https://launchpad.net/python-congressclient

Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-12-10 Thread Richard Jones
Sorry I didn't respond to this earlier today, I had intended to.

What you're describing isn't REST, and the principles of REST are what have
been guiding the design of the new API so far. I see a lot of value in
using REST approaches, mostly around clarity of the interface.

While the idea of a very thin proxy seemed like a great idea at one point,
my conversations at the summit convinced me that there was value in both
using the client interfaces present in the openstack_dashboard/api code
base (since they abstract away many issues in the apis including across
versions) and also value in us being able to clean up (for example, using
project_id rather than project in the user API we've already
implemented) and extend those interfaces (to allow batched operations).

We want to be careful about what we expose in Horizon to the JS clients
through this API. That necessitates some amount of code in Horizon. About
half of the current API for keystone represents that control (the other half
is docstrings :)


 Richard


On Tue Dec 09 2014 at 9:37:47 PM Tihomir Trifonov t.trifo...@gmail.com
wrote:

 Sorry for the late reply, just few thoughts on the matter.

 IMO the REST middleware should be as thin as possible. And I mean thin in
 terms of processing - it should not do pre/post processing of the requests,
 but just unpack/pack. So here is an example:

 instead of making AJAX calls that contain instructions:

 POST --json --data {"action": "delete", "data": [{"name": "item1"},
 {"name": "item2"}, {"name": "item3"}]}


 I think a better approach is just to pack/unpack batch commands, and leave
 execution to the frontend/backend and not middleware:

 POST --json --data {"batch": [
     {"action": "delete", "payload": {"name": "item1"}},
     {"action": "delete", "payload": {"name": "item2"}},
     {"action": "delete", "payload": {"name": "item3"}}
 ]}


 The idea is that the middleware should not know the actual data. It
 should ideally just unpack the data:

 responses = []
 for cmd in request.POST['batch']:
     responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
 return responses



 and the frontend (JS) will just send batches of simple commands, and will
 receive a list of responses for each command in the batch. The error
 handling will be done in the frontend (JS) as well.

 For the more complex example of 'put()' where we have dependent objects:

 project = api.keystone.tenant_get(request, id)
 kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
 api.keystone.tenant_update(request, project, **kwargs)



 In practice the project data should be already present in the
 frontend (assuming that we already loaded it to render the project
 form/view), so

 POST --json --data {"batch": [
     {"action": "tenant_update", "payload": {"project": js_project_object.id,
      "name": "some name", "prop1": "some prop", "prop2": "other prop", ...}}
 ]}

 So in general we don't need to recreate the full state on each REST call,
 if we make the frontend a full-featured application. This way the frontend
 will construct the object, will hold the cached value, and will just send
 the needed requests as single ones or in batches, will receive the response
 from the API backend, and will render the results. The whole processing
 logic will be held in the frontend (JS), while the middleware will just act
 as a proxy (un/packer). This way we will maintain just the logic in the
 frontend, and will not need to duplicate some logic in the middleware.




 On Tue, Dec 2, 2014 at 4:45 PM, Adam Young ayo...@redhat.com wrote:

  On 12/02/2014 12:39 AM, Richard Jones wrote:

 On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran tqt...@us.ibm.com wrote:

  I agree that keeping the API layer thin would be ideal. I should add
 that having discrete API calls would allow dynamic population of table.
 However, I will make a case where it *might* be necessary to add
 additional APIs. Consider that you want to delete 3 items in a given table.

 If you do this on the client side, you would need to perform: n * (1 API
 request + 1 AJAX request)
 If you have some logic on the server side that batch delete actions: n *
 (1 API request) + 1 AJAX request

 Consider the following:
 n = 1, client = 2 trips, server = 2 trips
 n = 3, client = 6 trips, server = 4 trips
 n = 10, client = 20 trips, server = 11 trips
 n = 100, client = 200 trips, server 101 trips

 As you can see, this does not scale very well - something to
 consider...
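The round-trip counts above follow directly from the two formulas: client-side batching costs one AJAX plus one API request per item, while a server-side batch endpoint pays one API request per item plus a single AJAX request:

```python
# Trip-count arithmetic from the comparison above.

def client_side_trips(n):
    return n * (1 + 1)   # each item: 1 AJAX request + 1 API request

def server_side_trips(n):
    return n * 1 + 1     # each item: 1 API request; plus 1 AJAX for the batch

# reproduces the table above
table = [(client_side_trips(n), server_side_trips(n)) for n in (1, 3, 10, 100)]
# table == [(2, 2), (6, 4), (20, 11), (200, 101)]
```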

  This is not something Horizon can fix.  Horizon can make matters worse,
 but cannot make things better.

 If you want to delete 3 users,   Horizon still needs to make 3 distinct
 calls to Keystone.

 To fix this, we need either batch calls or a standard way to do multiples
 of the same operation.

 The unified API effort is the right place to drive this.







  Yep, though in the 

Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Matthew Gilliard
So, are we agreed that http://www.openstack.org/community/members/ is
the authoritative place for IRC lookups? In which case, I'll take the
old content out of https://wiki.openstack.org/wiki/People and leave a
message directing people where to look.

I don't have the imagination to use anything other than my real name
on IRC but for people who do, should we try to encourage putting the
IRC nick in the gerrit name?

On Tue, Dec 9, 2014 at 11:56 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Angus Salkeld's message of 2014-12-09 15:25:59 -0800:
 On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli stef...@openstack.org
 wrote:

  On 12/09/2014 06:04 AM, Jeremy Stanley wrote:
   We already have a solution for tracking the contributor-IRC
   mapping--add it to your Foundation Member Profile. For example, mine
   is in there already:
  
   http://www.openstack.org/community/members/profile/5479
 
  I recommend updating the openstack.org member profile and add IRC
  nickname there (and while you're there, update your affiliation history).
 
  There is also a search engine on:
 
  http://www.openstack.org/community/members/
 
 
 Except that info doesn't appear nicely in review. Some people put their
 nick in their Full Name in
 gerrit. Hopefully Clint doesn't mind:

 https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z


 Indeed, I really didn't like that I'd be reviewing somebody's change,
 and talking to them on IRC, and not know if they knew who I was.

 It also has the odd side effect that gerritbot triggers my IRC filters
 when I 'git review'.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] HDP2 testing in Sahara CI

2014-12-10 Thread Sergey Lukjanov
Hi folks,

we have some issues with testing HDP2 in Sahara CI starting from the
weekend, so please consider the HDP2 job unstable, but do not approve
changes that could directly affect the HDP2 plugin with a failed job. We're
now trying to make it work.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-10 Thread Stephen Balukoff
Hi Keshava,

For the purposes of Octavia, it's going to be service VMs (or containers or
what have you). However, service VM or tenant VM the concept is roughly
similar:  We need some kind of layer-3 routing capability which works
something like Neutron floating IPs (though not just a static NAT in this
case) but which can distribute traffic to a set of back-end VMs running on
a Neutron network according to some predictable algorithm (probably a
distributed hash).

The idea behind ACTIVE-ACTIVE is that you have many service VMs (we call
them amphorae) which service the same public IP in some way-- this allows
for horizontal scaling of services which need it (ie. anything which does
TLS termination with a significant amount of load).
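The "predictable algorithm (probably a distributed hash)" mentioned above can be sketched as a consistent hash ring that maps each flow to one of many amphorae servicing the same public IP (illustrative only; Octavia's actual design is not settled here):

```python
# Sketch: a consistent hash ring distributing flows (e.g. keyed by
# source IP:port) across amphorae. The same flow always reaches the
# same amphora, and adding/removing an amphora only remaps a fraction
# of flows. Names are illustrative.

import bisect
import hashlib

def _h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing(object):
    def __init__(self, nodes, replicas=100):
        # each node appears at many points on the ring for even spread
        self.ring = sorted((_h("%s:%d" % (n, i)), n)
                           for n in nodes for i in range(replicas))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, flow_key):
        idx = bisect.bisect(self.keys, _h(flow_key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["amphora-1", "amphora-2", "amphora-3"])
# deterministic: the same flow always lands on the same amphora
node = ring.node_for("10.0.0.5:443")
```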

Does this make sense to you?

Thanks,
Stephen


On Mon, Dec 8, 2014 at 9:56 PM, A, Keshava keshav...@hp.com wrote:

  Stephen,



  Interesting to know what is an “ACTIVE-ACTIVE topology of load balancing VMs”.

  What is the scenario: is it a Service-VM (of NFV) or a Tenant VM?

  Curious to know the background of these thoughts.



 keshava





 *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 *Sent:* Tuesday, December 09, 2014 7:18 AM
 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [Neutron] [RFC] Floating IP idea
 solicitation and collaboration



  For what it's worth, I know that the Octavia project will need something
  which can do more advanced layer-3 networking in order to deliver an
  ACTIVE-ACTIVE topology of load balancing VMs / containers / machines.
  That's still a down-the-road feature for us, but it would be great to be
  able to do more advanced layer-3 networking in earlier releases of Octavia
  as well. (Without this, we might have to go through back doors to get
  Neutron to do what we need it to, and I'd rather avoid that.)



 I'm definitely up for learning more about your proposal for this project,
 though I've not had any practical experience with Ryu yet. I would also
 like to see whether it's possible to do the sort of advanced layer-3
 networking you've described without using OVS. (We have found that OVS
 tends to be not quite mature / stable enough for our needs and have moved
 most of our clouds to use ML2 / standard linux bridging.)



 Carl:  I'll also take a look at the two gerrit reviews you've linked. Is
 this week's L3 meeting not happening then? (And man-- I wish it were an
 hour or two later in the day. Coming at y'all from PST timezone here.)



 Stephen



 On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Ryan,

 I'll be traveling around the time of the L3 meeting this week.  My
 flight leaves 40 minutes after the meeting and I might have trouble
 attending.  It might be best to put it off a week or to plan another
 time -- maybe Friday -- when we could discuss it in IRC or in a
 Hangout.

 Carl


 On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger
 ryan.cleven...@rackspace.com wrote:
  Thanks for getting back, Carl. I think we may be able to make this week's
  meeting. Jason Kölker is the engineer doing all of the lifting on this
 side.
  Let me get with him to review what you all have so far and check our
  availability.
 
  
 
  Ryan Clevenger
  Manager, Cloud Engineering - US
  m: 678.548.7261
  e: ryan.cleven...@rackspace.com
 
  
  From: Carl Baldwin [c...@ecbaldwin.net]
  Sent: Sunday, December 07, 2014 4:04 PM
  To: OpenStack Development Mailing List
  Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea
 solicitation
  and collaboration
 
  Ryan,
 
  I have been working with the L3 sub team in this direction.  Progress has
  been slow because of other priorities but we have made some.  I have written
  a blueprint detailing some changes needed to the code to enable the
  flexibility to one day run floating IPs on an L3 routed network [1].  Jaime
  has been working on one that integrates ryu (or other speakers) with
  neutron [2].  DVR was also a step in this direction.
 
  I'd like to invite you to the l3 weekly meeting [3] to discuss further.
 I'm
  very happy to see interest in this area and have someone new to
 collaborate.
 
  Carl
 
  [1] https://review.openstack.org/#/c/88619/
  [2] https://review.openstack.org/#/c/125401/
  [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
 
  On Dec 3, 2014 4:04 PM, Ryan Clevenger ryan.cleven...@rackspace.com
  wrote:
 
  Hi,
 
  At Rackspace, we have a need to create a higher level networking service
  primarily for the purpose of creating a Floating IP solution in our
  environment. The current solutions for Floating IPs, being tied to plugin
  implementations, do not meet our needs at scale for the following reasons:
 
  1. Limited endpoint H/A mainly targeting failover only and not
  multi-active endpoints,
  2. Lack of noisy neighbor and DDOS mitigation,
  3. IP fragmentation (with cells, public connectivity is 

Re: [openstack-dev] [horizon] REST and Django

2014-12-10 Thread Tihomir Trifonov
Richard, thanks for the reply,


I agree that the given example is not a real REST. But we already have the
REST API - that's Keystone, Nova, Cinder, Glance, Neutron etc, APIs. So
what do we plan to do here? To add a new REST layer to communicate with another
REST API? Do we really need a Frontend-REST-REST architecture? My opinion is
that we don't need another REST layer, as we are currently trying to move away
Although we call it REST proxy or whatever - it doesn't need to be a real
REST, but just an aggregation proxy that combines and forwards some
requests with adding minimal processing overhead. What makes sense for me
is to keep the authentication in this layer as it is now - push a cookie to
the frontend, but the REST layer will extract the auth tokens from the
session storage and prepare the auth context for the REST API request to OS
services. This way we will not expose the tokens to the JS frontend, and
will have strict control over the authentication. The frontend will just
send data requests, they will be wrapped with auth context and forwarded.
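The token-wrapping step described above is small enough to sketch (framework-agnostic and illustrative; a real Horizon view would use Django's session machinery and an HTTP client):

```python
# Sketch: the proxy reads the auth token from the server-side session
# (never exposing it to the JS frontend) and attaches it to the
# outgoing OpenStack API request. Names are illustrative.

def proxy_request(session_store, session_id, method, url, body=None):
    token = session_store[session_id]["auth_token"]  # token stays server-side
    headers = {"X-Auth-Token": token,
               "Content-Type": "application/json"}
    # A real implementation would now send the request, e.g. with the
    # requests library:
    #   return requests.request(method, url, headers=headers, json=body)
    return {"method": method, "url": url, "headers": headers, "body": body}

store = {"cookie123": {"auth_token": "gAAAA-example"}}
req = proxy_request(store, "cookie123", "GET",
                    "http://keystone:5000/v3/projects")
```

The frontend only ever holds the session cookie; the token itself never crosses to the browser.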

Regarding the existing issues with versions in the API - for me the
existing approach is wrong. All these fixes were made as workarounds. What
should have been done is to create abstractions for each version and to use
a separate class for each version. This was partially done for the
keystoneclient in api/keystone.py, but not for the forms/views, where we
still have if-else for versions. What I suggest here is to have
different(concrete) views/forms for each version, and to use them according
the context. If the Keystone backend is v2.0 - then in the Frontend use
keystone2() object, otherwise use keystone3() object. This of course needs
some more coding, but is much cleaner in terms of customization and
testing. For me the current hacks with 'if keystone.version == 3.0' are
wrong at many levels. And this can be solved now. *The problem till now was
that we had one frontend that had to be backed by different versions of
backend components*. *Now we can have different frontends that map to a
specific backend*. That's how I understand the power of Angular with its
views and directives. That's where I see the real benefit of using a
full-featured frontend. Also imagine how easy it will then be to deprecate a
component version, compared to what we need to do now for the same.
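The per-version abstraction suggested here is straightforward to sketch (class and method names are hypothetical, not the actual openstack_dashboard/api code): one concrete class per backend version, selected once by context, instead of scattered version checks:

```python
# Sketch: one concrete API wrapper per Keystone version instead of
# "if keystone.version == 3" branches throughout forms/views.

class KeystoneV2(object):
    version = "2.0"
    def tenant_display(self, tenant):
        return tenant["tenantName"]   # v2-style field name

class KeystoneV3(object):
    version = "3"
    def tenant_display(self, tenant):
        return tenant["name"]         # v3-style field name

def keystone_api(version):
    """Pick the concrete wrapper once, based on the deployed backend."""
    return {"2.0": KeystoneV2, "3": KeystoneV3}[version]()

api = keystone_api("3")
display = api.tenant_display({"name": "demo"})   # "demo"
```

Deprecating a version then means deleting one class, not hunting down every branch.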

Otherwise we just rewrite the current Django middleware with another
DjangoRest middleware and don't change anything, we don't fix the problems.
We just move them to another place.

I still think that in Paris we talked about a new generation of the
Dashboard, a different approach on building the frontend for OpenStack.
What I've heard there from users/operators of Horizon was that it was
extremely hard to add customizations and new features to the Dashboard, as
all these needed to go through upstream changes and to wait until next
release cycle to get them. Do we still want to address these concerns and
how? Please, correct me if I got things wrong.


On Wed, Dec 10, 2014 at 11:56 AM, Richard Jones r1chardj0...@gmail.com
wrote:

 Sorry I didn't respond to this earlier today, I had intended to.

 What you're describing isn't REST, and the principles of REST are what
 have been guiding the design of the new API so far. I see a lot of value in
 using REST approaches, mostly around clarity of the interface.

 While the idea of a very thin proxy seemed like a great idea at one point,
 my conversations at the summit convinced me that there was value in both
 using the client interfaces present in the openstack_dashboard/api code
 base (since they abstract away many issues in the apis including across
 versions) and also value in us being able to clean up (for example, using
 project_id rather than project in the user API we've already
 implemented) and extend those interfaces (to allow batched operations).

 We want to be careful about what we expose in Horizon to the JS clients
 through this API. That necessitates some amount of code in Horizon. About
 half of the current API for keystone represents that control (the other half
 is docstrings :)


  Richard


 On Tue Dec 09 2014 at 9:37:47 PM Tihomir Trifonov t.trifo...@gmail.com
 wrote:

 Sorry for the late reply, just few thoughts on the matter.

 IMO the REST middleware should be as thin as possible. And I mean thin in
 terms of processing - it should not do pre/post processing of the requests,
 but just unpack/pack. So here is an example:

  instead of making AJAX calls that contain instructions:

  POST --json --data {"action": "delete", "data": [{"name": "item1"},
  {"name": "item2"}, {"name": "item3"}]}


  I think a better approach is just to pack/unpack batch commands, and
  leave execution to the frontend/backend and not middleware:

  POST --json --data {"batch": [
      {"action": "delete", "payload": {"name": "item1"}},
      {"action": "delete", "payload": {"name": "item2"}},
      {"action": "delete", "payload": {"name": "item3"}}
  ]}

Re: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time

2014-12-10 Thread Erlon Cruz
Both are fine, but A is better.

On Tue, Dec 9, 2014 at 10:46 PM, Kurt Taylor kurt.r.tay...@gmail.com
wrote:

 So far it looks like we have centered around 2 options:
 Option A 1200 and 2200 UTC
 Option D 1500 and 0400 UTC

 There is still time to pick your best time. Please vote at
 https://www.google.com/moderator/#16/e=21b93c

 Special thanks to Steve, Daya, Markus, Mikhail, Emily, Nurit, Edwin and
 Ramy for taking the time to vote.

 Kurt Taylor (krtaylor)


 On Tue, Dec 9, 2014 at 9:32 AM, Kurt Taylor kurt.r.tay...@gmail.com
 wrote:

 All of the feedback so far has supported moving the existing IRC
 Third-party CI meeting to better fit a worldwide audience.

 The consensus is that we will have only 1 meeting per week at
 alternating times. You can see examples of other teams with alternating
 meeting times at: https://wiki.openstack.org/wiki/Meetings

 This way, one week we are good for one part of the world, the next week
 for the other. You will not need to attend both meetings, just the meeting
 time every other week that fits your schedule.

 Proposed times in UTC are being voted on here:
 https://www.google.com/moderator/#16/e=21b93c

 Please vote on the time that is best for you. I would like to finalize
 the new times this week.

 Thanks!
 Kurt Taylor (krtaylor)



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-10 Thread Sean Dague
On 12/09/2014 06:18 PM, Eric Windisch wrote:
 
 While gating on nova-docker will prevent patches that cause
 nova-docker to break 100% to land, it won't do a lot to prevent
 transient failures. To fix those we need people dedicated to making
 sure nova-docker is working.
  
 
 
 What would be helpful for me is a way to know that our tests are
 breaking without manually checking Kibana, such as an email.

I know that periodic jobs can do this kind of notification, if you ask
about it in #openstack-infra there might be a solution there.

However, having a job in infra on Nova is a thing that comes with an
expectation that someone is staying engaged on the infra and Nova sides
to ensure that it's running correctly, and debug it when it's wrong.
It's not a set it and forget it.

It's already past the 2 weeks politeness boundary before it's considered
fair game to just delete it.

Creating the job is < 10% of the work. Long term maintenance is
important. I'm still not getting the feeling that there is really a long
term owner for this job. I'd love for that not to be the case, but simple
things like the fact that the directory structure was all out of whack
make it clear no one was regularly looking at it.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Murano] Cannot find murano.conf

2014-12-10 Thread Timur Nurlygayanov
Hi,

it looks like these are issues with the installation of the Murano requirements
with pip.

To fix them, I suggest the following:

$ pip install -U pip
$ pip install -U setuptools

and after this, try to install the Murano requirements again.


On Wed, Dec 10, 2014 at 10:46 AM, raghavendra@accenture.com wrote:

  HI Team,



 I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the
 below install murano-api I encounter the below error. Please assist.



 When I try to install



 $ tox -e venv -- murano-api --config-file ./etc/murano/murano.conf



 pip can't proceed with requirement 'pycrypto>=2.6 (from -r
 /home/ubuntu/murano/murano/requirements.txt (line 18))' due to a pre-existing build directory.

 location: /home/ubuntu/murano/murano/.tox/venv/build/pycrypto

 This is likely due to a previous installation that failed.

 pip is being responsible and not assuming it can delete this.

 Please delete it and try again.

 Storing debug log for failure in /home/ubuntu/.pip/pip.log



 ERROR: could not install deps
 [-r/home/ubuntu/murano/murano/requirements.txt,
 -r/home/ubuntu/murano/murano/test-requirements.txt]







 Warm Regards,

 Raghavendra Lad



 --

 This message is for the designated recipient only and may contain
 privileged, proprietary, or otherwise confidential information. If you have
 received it in error, please notify the sender immediately and delete the
 original. Any other use of the e-mail by you is prohibited. Where allowed
 by local law, electronic communications with Accenture and its affiliates,
 including e-mail and instant messaging (including content), may be scanned
 by our systems for the purposes of information security and assessment of
 internal compliance with Accenture policy.

 __

 www.accenture.com





-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc

My OpenStack summit schedule:
http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-10 Thread Murugan, Visnusaran


-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com] 
Sent: Tuesday, December 9, 2014 3:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

On 08/12/14 07:00, Murugan, Visnusaran wrote:

 Hi Zane & Michael,

 Please have a look @
 https://etherpad.openstack.org/p/execution-stream-and-aggregator-based-convergence

 Updated with a combined approach which does not require persisting graph and 
 backup stack removal.

Well, we still have to persist the dependencies of each version of a resource 
_somehow_, because otherwise we can't know how to clean them up in the correct 
order. But what I think you meant to say is that this approach doesn't require 
it to be persisted in a separate table where the rows are marked as traversed 
as we work through the graph.

[Murugan, Visnusaran] 
In case of rollback where we have to cleanup earlier version of resources, we 
could get the order from old template. We'd prefer not to have a graph table.

 This approach reduces DB queries by waiting for completion notification on a 
 topic. The drawback I see is that delete stack stream will be huge as it will 
 have the entire graph. We can always dump such data in ResourceLock.data Json 
 and pass a simple flag load_stream_from_db to converge RPC call as a 
 workaround for delete operation.

This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with the 
key difference that the data is stored in-memory in a Heat engine rather than 
the database.

I suspect it's probably a mistake to move it in-memory for similar reasons to 
the argument Clint made against synchronising the marking off of dependencies 
in-memory. The database can handle that and the problem of making the DB robust 
against failures of a single machine has already been solved by someone else. 
If we do it in-memory we are just creating a single point of failure for not 
much gain. (I guess you could argue it doesn't matter, since if any Heat engine 
dies during the traversal then we'll have to kick off another one anyway, but 
it does limit our options if that changes in the future.)
[Murugan, Visnusaran] Resource completes, removes itself from resource_lock and 
notifies engine. Engine will acquire parent lock and initiate parent only if 
all its children are satisfied (no child entry in resource_lock). This will 
come in place of Aggregator.
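
A toy sketch of that bookkeeping (class and attribute names here are illustrative, not Heat's actual schema): each child removes its lock entry on completion, and the parent is initiated only once no child entries remain.

```python
class ResourceLockTable:
    """Toy in-memory stand-in for the resource_lock table described above."""

    def __init__(self, children_of):
        # children_of maps a parent resource to the set of children it
        # must wait for, e.g. {'stack': {'server', 'volume'}}.
        self.pending = {parent: set(kids) for parent, kids in children_of.items()}
        self.triggered = []

    def complete(self, parent, child):
        # A child finished: remove its lock entry, and initiate the
        # parent only when no child entries remain.
        kids = self.pending[parent]
        kids.discard(child)
        if not kids:
            self.triggered.append(parent)

table = ResourceLockTable({'stack': {'server', 'volume'}})
table.complete('stack', 'server')   # parent still waiting on 'volume'
table.complete('stack', 'volume')   # last child done -> parent triggered
print(table.triggered)  # ['stack']
```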

It's not clear to me how the 'streams' differ in practical terms from just 
passing a serialisation of the Dependencies object, other than being 
incomprehensible to me ;). The current Dependencies implementation
(1) is a very generic implementation of a DAG, (2) works and has plenty of unit 
tests, (3) has, with I think one exception, a pretty straightforward API, (4) 
has a very simple serialisation, returned by the edges() method, which can be 
passed back into the constructor to recreate it, and (5) has an API that is to 
some extent relied upon by resources, and so won't likely be removed outright 
in any event. 
Whatever code we need to handle dependencies ought to just build on this 
existing implementation.
[Murugan, Visnusaran] Our thought was to reduce payload size (template/graph),
just planning for the worst case scenario (a million-resource stack). We could
always dump them in ResourceLock.data to be loaded by the Worker.

I think the difference may be that the streams only include the
*shortest* paths (there will often be more than one) to each resource. i.e.

  A <--- B <--- C
  ^             |
  |             |
  +-------------+

can just be written as:

  A <--- B <--- C

because there's only one order in which that can execute anyway. (If we're 
going to do this though, we should just add a method to the dependencies.Graph 
class to delete redundant edges, not create a whole new data structure.) There 
is a big potential advantage here in that it reduces the theoretical maximum 
number of edges in the graph from O(n^2) to O(n). (Although in practice real 
templates are typically not likely to have such dense graphs.)

There's a downside to this too though: say that A in the above diagram is 
replaced during an update. In that case not only B but also C will need to 
figure out what the latest version of A is. One option here is to pass that 
data along via B, but that will become very messy to implement in a non-trivial 
example. The other would be for C to go search in the database for resources 
with the same name as A and the current traversal_id marked as the latest. But 
that not only creates a concurrency problem we didn't have before (A could have 
been updated with a new traversal_id at some point after C had established that 
the current traversal was still valid but before it went looking for A), it 
also eliminates all of the performance gains from removing that edge in the 
first place.
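
For illustration, deleting the redundant edges of a small dependency graph could look like the sketch below (a generic stand-in, not the actual dependencies.Graph API; edges point from requirer to requirement, so ('C', 'B') means C depends on B).

```python
def remove_redundant_edges(edges):
    """Keep an edge only if removing it would disconnect src from dst."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)

    def reachable(start, goal, skip_edge):
        # Depth-first search that ignores the edge under test.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node in seen:
                continue
            seen.add(node)
            for nxt in graph.get(node, ()):
                if (node, nxt) != skip_edge:
                    stack.append(nxt)
        return False

    return {(s, d) for s, d in edges if not reachable(s, d, (s, d))}

# C depends on B, B depends on A, plus the redundant shortcut C -> A:
edges = {('C', 'B'), ('B', 'A'), ('C', 'A')}
print(sorted(remove_redundant_edges(edges)))  # [('B', 'A'), ('C', 'B')]
```

Note that checking each edge against the whole graph is O(edges * nodes); a production version would want something smarter, but the result on a DAG (the transitive reduction) is unique.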

[1]

Re: [openstack-dev] [Murano] Oslo.messaging error

2014-12-10 Thread Timur Nurlygayanov
Hi Raghavendra Lad,

looks like the Murano services can't connect to the RabbitMQ server.
Could you please share the configuration parameters for RabbitMQ from
./etc/murano/murano.conf ?
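
For reference, the relevant options usually look something like the fragment below (section and option names vary between murano/oslo.messaging versions, and the values here are only placeholders):

```ini
[DEFAULT]
# oslo.messaging RabbitMQ transport settings (illustrative values)
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = murano
rabbit_password = MURANO_RABBIT_PASS
rabbit_virtual_host = /
```

The "closed the connection. Check login credentials" errors in the log typically mean the userid/password/vhost do not match what the RabbitMQ server on controller:5672 expects.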


On Wed, Dec 10, 2014 at 10:55 AM, raghavendra@accenture.com wrote:





 HI Team,



 I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the
 below install murano-api I encounter the below error. Please assist.



 When I try to install



 I am using the Murano guide link provided below:

 https://murano.readthedocs.org/en/latest/install/manual.html





 I am trying to execute the section 7



 1.Open a new console and launch Murano API. A separate terminal is
 required because the console will be locked by a running process.

 2. $ cd ~/murano/murano

 3. $ tox -e venv -- murano-api \

 4.  --config-file ./etc/murano/murano.conf





 I am getting the below error : I have a Juno Openstack ready and trying to
 integrate Murano





 2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-] neutron.endpoint_type = publicURL log_opt_values /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048

 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] neutron.insecure = False log_opt_values /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048

 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] log_opt_values /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050

 2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672

 2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672

 2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting up on http://0.0.0.0:8082/

 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating statistic information. update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:57

 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats object: <murano.api.v1.request_statistics.RequestStatisticsCollection object at 0x7fada950a510> update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:58

 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats: Requests:0 Errors: 0 Ave.Res.Time 0. Per tenant: {} update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:64

 2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509

 2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed

 2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed

 2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed

 2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed





 Warm Regards,

 Raghavendra Lad







-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc

My OpenStack summit schedule:
http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI
OpenStack-dev mailing list

Re: [openstack-dev] [Murano] Oslo.messaging error

2014-12-10 Thread Ilya Pekelny
Please provide the RabbitMQ logs from the controller and the oslo.messaging
version. Do you use the upstream oslo.messaging version? It looks like the
well-known heartbeat bug.

On Wed, Dec 10, 2014 at 1:45 PM, Timur Nurlygayanov 
tnurlygaya...@mirantis.com wrote:

 Hi Raghavendra Lad,

 looks like Murano services can't connect ot the RabbitMQ server.
 Could you please share the configuration parameters for RabbitMQ  from
 ./etc/murano/murano.conf ?



Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-10 Thread Jeremy Stanley
On 2014-12-10 06:37:02 -0500 (-0500), Sean Dague wrote:
 I know that periodic jobs can do this kind of notification, if you
 ask about it in #openstack-infra there might be a solution there.
[...]

E-mail reporting in Zuul is currently implemented per pipeline,
so the nova-docker tests would need to be in their own job in a
dedicated pipeline with reporting set to the relevant contact
address. This may be an excessive level of overhead, so we should
have a separate infra discussion on whether that's a realistic
solution, or whether it's worth looking at new Zuul functionality to
tack E-mail reporting addresses onto specific jobs in arbitrary
pipelines.
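
For a sense of the overhead involved, a dedicated periodic pipeline with an SMTP reporter might look roughly like the fragment below in a Zuul (v2) layout.yaml. The pipeline name, schedule, and address are hypothetical; the real infra configuration should be consulted for exact syntax.

```yaml
pipelines:
  - name: periodic-nova-docker
    manager: IndependentPipelineManager
    trigger:
      timer:
        - time: '0 6 * * *'   # run once a day
    failure:
      smtp:
        to: nova-docker-maintainers@example.org
        subject: 'Periodic nova-docker job failed'
```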
-- 
Jeremy Stanley



Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-12-10 Thread Evgeniy L
Hi,

First let me describe our plans for the nearest release. We want to deliver
roles as simple plugins: a plugin developer can define his own role with YAML,
and it should also work fine with our current approach, where the user can
define several fields on the settings tab.

Also I would like to mention another thing which we should probably discuss
in a separate thread: how plugins should be implemented. We have two types
of plugins, simple and complicated. The definition of simple: I can do
everything I need with YAML. The definition of complicated: I probably have
to write some Python code. That doesn't mean this Python code can do
absolutely everything it wants, but it means we should implement a stable,
documented interface where the plugin is connected to the core.

Now let's talk about the UI flow. Our current problem is how to determine
whether a plugin is used in an environment or not. This information is
required by the backend, which generates the appropriate tasks for the task
executor; it can also be used in the future if we decide to implement a
plugin deletion mechanism.

I didn't come up with a new solution; as before, we have two options to
solve the problem:

# 1

Use conditional language which is currently used on UI, it will look like
Vitaly described in the example [1].
Plugin developer should:

1. describe at least one element for UI, which he will be able to use in
task

2. add condition which is written in our own programming language

Example of the condition for LBaaS plugin:

condition: settings:lbaas.metadata.enabled == true

3. add condition to metadata.yaml a condition which defines if plugin is
enabled

is_enabled: settings:lbaas.metadata.enabled == true
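
Putting both pieces together, the plugin's metadata might look roughly like this (the field names and task layout are illustrative sketches, not a finalized format):

```yaml
# metadata.yaml (sketch)
is_enabled: "settings:lbaas.metadata.enabled == true"
tasks:
  - role: ['controller']
    stage: post_deployment
    condition: "settings:lbaas.metadata.enabled == true"
```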

This approach has good flexibility, but it also has problems:

a. It's complicated and not intuitive for a plugin developer.
b. It doesn't cover the case when the user installs a 3rd party plugin
which doesn't have any conditions (because of # a) and the
user doesn't have a way to disable it for the environment if it
breaks his configuration.

# 2

As we discussed from the very beginning, after the user selects a release he
can choose a set of plugins which he wants to be enabled for the environment.
After that we can say that the plugin is enabled for the environment, and we
send the tasks related to this plugin to the task executor.

 My approach also allows to eliminate enableness of plugins which will
cause UX issues and issues like you described above. vCenter and Ceph also
don't have enabled state. vCenter has hypervisor and storage, Ceph
provides backends for Cinder and Glance which can be used simultaneously or
only one of them can be used.

Both of the described plugins have an enabled/disabled state: vCenter is
enabled when vCenter is selected as the hypervisor; Ceph is enabled when it's
selected as a backend for Cinder or Glance.

If you don't like the idea of having Ceph/vCenter checkboxes on the first
page, I can suggest as an idea (research is required) to define groups like
Storage Backend or Network Manager, and we will allow the plugin developer to
embed his option in a radiobutton field on the wizard pages. But the plugin
developer should not describe conditions; he should just declare that his
plugin is a Storage Backend, Hypervisor, or new Network Manager.
And plugins such as Zabbix or Nagios, which don't belong to any of these
groups, should be shown as checkboxes on the first page of the wizard.


[1]
https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf

On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:



 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak dshul...@mirantis.com:


- environment_config.yaml should contain the exact config which will be
  mixed into cluster_attributes. No need to implicitly generate any controls
  like it is done now.

  Initially i had the same thoughts and wanted to use it the way it is,
 but now i completely agree with Evgeniy that additional DSL will cause a lot
 of problems with compatibility between versions and developer experience.

 As far as I understand, you want to introduce another approach to describe
 UI part or plugins?

 We need to search for alternatives..
 1. for UI i would prefer separate tab for plugins, where user will be
 able to enable/disable plugin explicitly.

 Of course, we need a separate page for plugin management.

 Currently settings tab is overloaded.
 2. on backend we need to validate plugins against certain env before
 enabling it,
and for simple case we may expose some basic entities like
 network_mode.
 For case where you need complex logic - python code is far more flexible
 that new DSL.


- metadata.yaml should also contain an is_removable field. This field
  is needed to determine whether it is possible to remove an installed plugin.
  It is impossible to remove plugins in the current implementation.
  This field should contain an expression written in our DSL which we already
  use in 
Re: [openstack-dev] [neutron] Linux capabilities vs sudo/rootwrap?

2014-12-10 Thread Thierry Carrez
Angus Lees wrote:
 How crazy would it be to just give neutron CAP_NET_ADMIN (where
 required), and allow it to make network changes via ip (netlink) calls
 directly?

I don't think that's completely crazy. Given what neutron is expected to
do, and what it is already empowered to do (through lazy and less lazy
rootwrap filters), relying on CAP_NET_ADMIN instead should have limited
security impact.

It would be worth precisely analyzing the delta (what will a
capability-enhanced neutron be able to do to the system that the
rootwrap-powered neutron can't already do), and trying to get performance
numbers... That would help in making the right choice, although I expect
the best gains here are in avoiding the whole external executable call
and result parsing. You could even maintain parallel code paths (use the
capability if present).
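
For the parallel-code-paths idea, the process first needs to know whether it holds the capability at all; on Linux the effective set is exposed as a hex mask in /proc/self/status. A sketch (CAP_NET_ADMIN is capability bit 12 per capabilities(7); the function name is ours):

```python
def has_cap_net_admin():
    """Return True if the current process has CAP_NET_ADMIN in its
    effective capability set (Linux only; False elsewhere)."""
    CAP_NET_ADMIN = 12  # bit index defined in linux/capability.h
    try:
        with open('/proc/self/status') as status:
            for line in status:
                if line.startswith('CapEff:'):
                    effective = int(line.split()[1], 16)
                    return bool(effective & (1 << CAP_NET_ADMIN))
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

# Choose the fast netlink path when privileged, else fall back to
# the existing rootwrap-mediated external commands.
print('netlink path' if has_cap_net_admin() else 'rootwrap path')
```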

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-10 Thread Davanum Srinivas
Sean,

fyi, got it stable now for the moment.
http://logstash.openstack.org/#eyJzZWFyY2giOiIgYnVpbGRfbmFtZTpcImNoZWNrLXRlbXBlc3QtZHN2bS1kb2NrZXJcIiBBTkQgbWVzc2FnZTpcIkZpbmlzaGVkOlwiIEFORCBidWlsZF9zdGF0dXM6XCJGQUlMVVJFXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIiLCJzdGFtcCI6MTQxODIyMzEwMjcyOX0=

with https://review.openstack.org/#/c/138714/

thanks,
dims

On Wed, Dec 10, 2014 at 6:37 AM, Sean Dague s...@dague.net wrote:
 On 12/09/2014 06:18 PM, Eric Windisch wrote:

 While gating on nova-docker will prevent patches that cause
 nova-docker to break 100% to land, it won't do a lot to prevent
 transient failures. To fix those we need people dedicated to making
 sure nova-docker is working.



 What would be helpful for me is a way to know that our tests are
 breaking without manually checking Kibana, such as an email.

 I know that periodic jobs can do this kind of notification, if you ask
 about it in #openstack-infra there might be a solution there.

 However, having a job in infra on Nova is a thing that comes with an
 expectation that someone is staying engaged on the infra and Nova sides
 to ensure that it's running correctly, and debug it when it's wrong.
 It's not a set it and forget it.

 It's already past the 2 weeks politeness boundary before it's considered
 fair game to just delete it.

 Creating the job is < 10% of the work. Long term maintenance is
 important. I'm still not getting the feeling that there is really a long
 term owner on this job. I'd love that not to be the case, but simple
 things like the fact that the directory structure was all out of whack
 make it clear no one was regularly looking at it.

 -Sean

 --
 Sean Dague
 http://dague.net




-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-10 Thread Murali B
Hi keshava,

We would like to contribute towards service chaining and NFV.

Could you please share any documents you have related to service VMs?

The service chain can be achieved if we are able to redirect the traffic to
the service VM using OVS flows;

in this case we don't need routing enabled on the service VM (traffic is
redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by
adding the OVS rules in OVS.


Thanks
-Murali


[openstack-dev] [sahara] team meeting Dec 11 1800 UTC

2014-12-10 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141211T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-12-10 Thread Pasquale Porreca
Well, one of the main reasons to choose an open source product is to 
avoid vendor lock-in. I think it is not
advisable to embed in the software running in an instance a call to
OpenStack-specific services.


On 12/10/14 00:20, Joe Gordon wrote:


On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca 
pasquale.porr...@dektech.com.au wrote:


The use case we were thinking about is a Network Function (e.g.
IMS Nodes) implementation in which the high availability is based
on OpenSAF. In this scenario there is an Active/Standby cluster of
2 System Controllers (SC) plus several Payloads (PL) that boot
from network, controlled by the SC. The logic of which service to
deploy on each payload is inside the SC.

In OpenStack both SCs and PLs will be instances running in the
cloud, anyway the PLs should still boot from network under the
control of the SC. In fact to use Glance to store the image for
the PLs and keep the control of the PLs in the SC, the SC should
trigger the boot of the PLs with requests to Nova/Glance, but an
application running inside an instance should not directly
interact with a cloud infrastructure service like Glance or Nova.


Why not? This is a fairly common practice.


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr



Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-12-10 Thread Vitaly Kramskikh
2014-12-10 16:57 GMT+03:00 Evgeniy L e...@mirantis.com:

 Hi,

 First let me describe what our plans for the nearest release. We want to
 deliver
 role as a simple plugin, it means that plugin developer can define his own
 role
 with yaml and also it should work fine with our current approach when user
 can
 define several fields on the settings tab.

 Also I would like to mention another thing which we should probably discuss
 in separate thread, how plugins should be implemented. We have two types
 of plugins, simple and complicated, the definition of simple - I can do
 everything
 I need with yaml, the definition of complicated - probably I have to write
 some
 python code. It doesn't mean that this python code should do absolutely
 everything it wants, but it means we should implement stable, documented
 interface where plugin is connected to the core.

 Now lets talk about UI flow, our current problem is how to get the
 information
 if plugins is used in the environment or not, this information is required
 for
 backend which generates appropriate tasks for task executor, also this
 information can be used in the future if we decide to implement plugins
 deletion
 mechanism.

 I didn't come up with a some new solution, as before we have two options to
 solve the problem:

 # 1

 Use conditional language which is currently used on UI, it will look like
 Vitaly described in the example [1].
 Plugin developer should:

 1. describe at least one element for UI, which he will be able to use in
 task

 2. add condition which is written in our own programming language

 Example of the condition for LBaaS plugin:

 condition: settings:lbaas.metadata.enabled == true

 3. add condition to metadata.yaml a condition which defines if plugin is
 enabled

 is_enabled: settings:lbaas.metadata.enabled == true

 This approach has good flexibility, but also it has problems:

 a. It's complicated and not intuitive for plugin developer.

It is less complicated than python code

 b. It doesn't cover case when the user installs 3rd party plugin
 which doesn't have any conditions (because of # a) and
 user doesn't have a way to disable it for environment if it
 breaks his configuration.

If plugin doesn't have conditions for tasks, then it has invalid metadata.


 # 2

 As we discussed from the very beginning after user selects a release he can
 choose a set of plugins which he wants to be enabled for environment.
 After that we can say that plugin is enabled for the environment and we
 send
 tasks related to this plugin to task executor.

  My approach also allows us to eliminate the enableness of plugins, which
 would cause UX issues and issues like you described above. vCenter and Ceph
 also don't have an enabled state. vCenter has a hypervisor and storage;
 Ceph provides backends for Cinder and Glance, which can be used
 simultaneously, or only one of them can be used.

 Both of the described plugins have an enabled/disabled state: vCenter is
 enabled when vCenter is selected as the hypervisor; Ceph is enabled when it
 is selected as a backend for Cinder or Glance.

Nope, Ceph for Volumes can be used without Ceph for Images. Both of these
plugins can also have some granular tasks which are enabled by various
checkboxes (like VMware vCenter for volumes). How would you determine
whether the tasks which install VMware vCenter for volumes should run?


 If you don't like the idea of having Ceph/vCenter checkboxes on the first
 page, I can suggest as an idea (research is required) defining groups like
 Storage Backend and Network Manager, and allowing the plugin developer to
 embed his option as a radio-button field on the wizard pages. But the
 plugin developer should not describe conditions; he should just declare
 that his plugin is a Storage Backend, a Hypervisor or a new Network
 Manager. And the plugins which don't belong to any of these groups, e.g.
 Zabbix or Nagios, should be shown as checkboxes on the first page of the
 wizard.

Why don't you just ditch the enableness of plugins and get rid of this
complex stuff? Can you explain why you need to know whether a plugin is
enabled? Let me summarize my opinion on this:

   - You don't need to know whether a plugin is enabled or not. You need to
   know what tasks should be run and whether the plugin is removable (anything
   else?). These conditions can be described by the DSL.
   - Explicitly asking the user to enable a plugin for a new environment
   should be considered a last-resort solution because it significantly
   impairs our UX for inexperienced users. Just imagine: a new user who
   barely knows about OpenStack chooses a name for the environment and an OS
   release, and then he needs to choose plugins. Really?

My proposal for a complex plugin interface: there should be Python classes
with exactly the same fields as in the YAML files: plugin name, version,
etc. But the condition for cluster deletion and the conditions for tasks,
which are written in the DSL in the simple YAML config case, should become
methods which the plugin writer can make as 

[openstack-dev] [cinder] 3rd Party CI for drivers

2014-12-10 Thread Duncan Thomas
Hi All

Hopefully this shouldn't come as a surprise to anybody, but the cinder team
is requiring working third party CI for all drivers.

For any driver that was merged before the start of Kilo, we expect working
3rd party CI posting on every commit before k-2, that is, the 5th of Feb, or
that driver is at risk of being removed from the tree.

Please join #openstack-cinder to discuss CI requirements.

-- 
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps

2014-12-10 Thread Li Ma


On 2014/12/9 22:07, Doug Hellmann wrote:

On Dec 8, 2014, at 11:25 PM, Li Ma skywalker.n...@gmail.com wrote:


Hi all, I tried to deploy ZeroMQ with devstack and it definitely failed with lots 
of problems, like dependencies, topics, matchmaker setup, etc. I've already 
registered a blueprint for devstack-zeromq [1].

I added the [devstack] tag to the subject of this message so that the team will 
see the thread.

Thanks for helping fix this critical bug, Doug. :-)

@devstack:
Currently, I cannot find any devstack-specs related repo for proposing 
blueprint details. So, If any devstack guys here, please help review 
this blueprint [1] and welcome to leave any comments. This is really 
important for us.


Actually, I thought that I could provide some bug fixes to make it work, 
but after evaluation I would like to make it a blueprint, because 
there is a lot of work and a blueprint is suitable for tracking 
everything.


[1] https://blueprints.launchpad.net/devstack/+spec/zeromq


Besides, I suggest building a wiki page in order to track all the work items 
related to ZeroMQ. The general sections may be [Why ZeroMQ], [Current Bugs & 
Reviews], [Future Plan & Blueprints], [Discussions], [Resources], etc.

Coordinating the work on this via a wiki page makes sense. Please post the link 
when you’re ready.

Doug

OK. I'll get it done soon.

Any comments?

[1] https://blueprints.launchpad.net/devstack/+spec/zeromq

cheers,
Li Ma

On 2014/11/18 21:46, James Page wrote:


On 18/11/14 00:55, Denis Makogon wrote:

So if zmq driver support in devstack is fixed, we can easily add a
new job to run them in the same way.


Btw this is a good question. I will take look at current state of
zmq in devstack.

I don't think it's that far off, and it's broken rather than missing -
the rpc backend code needs updating to use oslo.messaging rather than
project-specific copies of the rpc common codebase (pre-oslo).
Devstack should be able to run with the local matchmaker in most
scenarios, but it looks like there was support for the redis matchmaker
as well.

If you could take some time to fix it up, that would be awesome!

-- James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org



Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-12-10 Thread Evgeniy L
On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:



 2014-12-10 16:57 GMT+03:00 Evgeniy L e...@mirantis.com:

 Hi,

 First let me describe what our plans for the nearest release. We want to
 deliver
 role as a simple plugin, it means that plugin developer can define his
 own role
 with yaml and also it should work fine with our current approach when
 user can
 define several fields on the settings tab.

 Also I would like to mention another thing which we should probably
 discuss
 in separate thread, how plugins should be implemented. We have two types
 of plugins, simple and complicated, the definition of simple - I can do
 everything
 I need with yaml, the definition of complicated - probably I have to
 write some
 python code. It doesn't mean that this python code should do absolutely
 everything it wants, but it means we should implement stable, documented
 interface where plugin is connected to the core.

 Now lets talk about UI flow, our current problem is how to get the
 information
 if plugins is used in the environment or not, this information is
 required for
 backend which generates appropriate tasks for task executor, also this
 information can be used in the future if we decide to implement plugins
 deletion
 mechanism.

 I didn't come up with a some new solution, as before we have two options
 to
 solve the problem:

 # 1

 Use conditional language which is currently used on UI, it will look like
 Vitaly described in the example [1].
 Plugin developer should:

 1. describe at least one element for UI, which he will be able to use in
 task

 2. add condition which is written in our own programming language

 Example of the condition for LBaaS plugin:

 condition: settings:lbaas.metadata.enabled == true

 3. add condition to metadata.yaml a condition which defines if plugin is
 enabled

 is_enabled: settings:lbaas.metadata.enabled == true

 This approach has good flexibility, but also it has problems:

 a. It's complicated and not intuitive for plugin developer.

 It is less complicated than python code


I'm not sure why you are talking about Python code here; my point
is that we should not force the developer to use these conditions in any
language.

Anyway, I don't agree with that statement - there are more people who know
Python than the Fuel UI conditional language.


 b. It doesn't cover case when the user installs 3rd party plugin
 which doesn't have any conditions (because of # a) and
 user doesn't have a way to disable it for environment if it
 breaks his configuration.

 If plugin doesn't have conditions for tasks, then it has invalid metadata.


Yep, and it's a problem of the platform, which provides a bad interface.



 # 2

 As we discussed from the very beginning after user selects a release he
 can
 choose a set of plugins which he wants to be enabled for environment.
 After that we can say that plugin is enabled for the environment and we
 send
 tasks related to this plugin to task executor.

  My approach also allows to eliminate enableness of plugins which
 will cause UX issues and issues like you described above. vCenter and Ceph
 also don't have enabled state. vCenter has hypervisor and storage, Ceph
 provides backends for Cinder and Glance which can be used simultaneously or
 only one of them can be used.

 Both of described plugins have enabled/disabled state, vCenter is enabled
 when vCenter is selected as hypervisor. Ceph is enabled when it's selected
 as a backend for Cinder or Glance.

 Nope, Ceph for Volumes can be used without Ceph for Images. Both of these
 plugins can also have some granular tasks which are enabled by various
 checkboxes (like VMware vCenter for volumes). How would you determine
 whether tasks which installs VMware vCenter for volumes should run?


Why nope? I can have Cinder OR Glance.
It can easily be handled in the deployment script.


 If you don't like the idea of having Ceph/vCenter checkboxes on the first
 page,
 I can suggest as an idea (research is required) to define groups like
 Storage Backend,
 Network Manager and we will allow plugin developer to embed his option in
 radiobutton
 field on wizard pages. But plugin developer should not describe
 conditions, he should
 just write that his plugin is a Storage Backend, Hypervisor or new
 Network Manager.
 And the plugins e.g. Zabbix, Nagios, which don't belong to any of this
 groups
 should be shown as checkboxes on the first page of the wizard.

 Why don't you just ditch enableness of plugins and get rid of this
 complex stuff? Can you explain why do you need to know if plugin is
 enabled? Let me summarize my opinion on this:


I described why we need it many times. Also it looks like you skipped
another option, and I would like to see some more information about why
you don't like it and why it's
bad from a UX standpoint.


- You don't need to know whether plugin is enabled or not. You need to
know what tasks should be run and whether plugin is 

Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-10 Thread Stephen Wong
Hi Murali,

There is already a ServiceVM project (Tacker), currently under
development on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki
page above and see if the project's goals align with yours. If so, you are
certainly welcome to join the IRC meeting and start to contribute to the
project's direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B mbi...@gmail.com wrote:

 Hi keshava,

 We would like to contribute towards service chaining and NFV.

 Could you please share the document, if you have any, related to the
 service VM.

 The service chain can be achieved if we are able to redirect the traffic
 to the service VM using OVS flows;

 in this case we don't need to have routing enabled on the service VM
 (traffic is redirected at L2).

 All the tenant VMs in the cloud could use this service VM's services by
 adding the OVS rules in OVS.


 Thanks
 -Murali
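As a hedged sketch of the L2 redirection Murali describes, the flow rules could be generated as ovs-ofctl commands along these lines. The bridge name and OpenFlow port numbers are made-up examples (on a real node they would come from `ovs-ofctl show <bridge>`); this is not a tested OVS configuration:

```python
# Hypothetical sketch of L2 redirection to a service VM via OVS flows.
# Bridge name and OpenFlow port numbers are placeholder examples.

def redirect_flows(bridge, tenant_port, svc_port, priority=100):
    """Build ovs-ofctl commands that steer tenant traffic through a
    service VM port and let return traffic be switched normally,
    without enabling routing on the service VM."""
    return [
        f"ovs-ofctl add-flow {bridge} "
        f"priority={priority},in_port={tenant_port},actions=output:{svc_port}",
        f"ovs-ofctl add-flow {bridge} "
        f"priority={priority},in_port={svc_port},actions=normal",
    ]

for cmd in redirect_flows("br-int", tenant_port=5, svc_port=7):
    print(cmd)
```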






Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-10 Thread Kevin Benton
What would the port binding operation do in this case? Just mark the port
as bound and nothing else?

On Wed, Dec 10, 2014 at 12:48 AM, henry hly henry4...@gmail.com wrote:

 Hi Kevin,

 Does it make sense to introduce a GeneralvSwitch mechanism driver, working
 with VIF_TYPE_TAP? It would just do a very simple port binding, like OVS
 and the Linux bridge. Then anyone could implement their backend and agent
 without patching Neutron drivers.

 Best Regards
 Henry

 On Fri, Dec 5, 2014 at 4:23 PM, Kevin Benton blak...@gmail.com wrote:
  I see the difference now.
  The main concern I see with the NOOP type is that creating the virtual
  interface could require different logic for certain hypervisors. In that
  case Neutron would now have to know things about Nova, and to me that
  seems slightly too far in the other direction.
 
  On Thu, Dec 4, 2014 at 8:00 AM, Neil Jerram neil.jer...@metaswitch.com
  wrote:
 
  Kevin Benton blak...@gmail.com writes:
 
   What you are proposing sounds very reasonable. If I understand
   correctly, the idea is to make Nova just create the TAP device and get
   it attached to the VM and leave it 'unplugged'. This would work well
   and might eliminate the need for some drivers. I see no reason to
   block adding a VIF type that does this.
 
  I was actually floating a slightly more radical option than that: the
  idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does
  absolutely _nothing_, not even create the TAP device.
 
  (My pending Nova spec at https://review.openstack.org/#/c/130732/
  proposes VIF_TYPE_TAP, for which Nova _does_ creates the TAP device, but
  then does nothing else - i.e. exactly what you've described just above.
  But in this email thread I was musing about going even further, towards
  providing a platform for future networking experimentation where Nova
  isn't involved at all in the networking setup logic.)
 
   However, there is a good reason that the VIF type for some OVS-based
   deployments require this type of setup. The vSwitches are connected to
   a central controller using openflow (or ovsdb) which configures
   forwarding rules/etc. Therefore they don't have any agents running on
   the compute nodes from the Neutron side to perform the step of getting
   the interface plugged into the vSwitch in the first place. For this
   reason, we will still need both types of VIFs.
 
  Thanks.  I'm not advocating that existing VIF types should be removed,
  though - rather wondering if similar function could in principle be
  implemented without Nova VIF plugging - or what that would take.
 
  For example, suppose someone came along and wanted to implement a new
  OVS-like networking infrastructure?  In principle could they do that
  without having to enhance the Nova VIF driver code?  I think at the
  moment they couldn't, but that they would be able to if VIF_TYPE_NOOP
  (or possibly VIF_TYPE_TAP) was already in place.  In principle I think
  it would then be possible for the new implementation to specify
  VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind
  of configuration and vSwitch plugging that you've described above.
 
  Does that sound correct, or am I missing something else?
 
   1. When the port is created in the Neutron DB, and handled (bound
   etc.)
   by the plugin and/or mechanism driver, the TAP device name is already
   present at that time.
  
   This is backwards. The tap device name is derived from the port ID, so
   the port has already been created in Neutron at that point. It is just
   unbound. The steps are roughly as follows: Nova calls neutron for a
   port, Nova creates/plugs VIF based on port, Nova updates port on
   Neutron, Neutron binds the port and notifies agent/plugin/whatever to
   finish the plumbing, Neutron notifies Nova that port is active, Nova
   unfreezes the VM.
  
   None of that should be affected by what you are proposing. The only
   difference is that your Neutron agent would also perform the
   'plugging' operation.
 
  Agreed - but thanks for clarifying the exact sequence of events.
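As an aside, the derivation Kevin mentions ("the tap device name is derived from the port ID") can be sketched as follows. The 11-character truncation matches Neutron's convention at the time (Linux interface names are limited to 15 characters), but treat the exact constant as illustrative rather than authoritative:

```python
# Sketch of deriving a tap device name from a Neutron port UUID.
# Linux network device names are length-limited, so the name is a
# "tap" prefix plus a truncated port UUID. The truncation length is
# an assumption based on Neutron's convention, not a spec guarantee.

def tap_device_name(port_id, prefix="tap"):
    # 14 chars total: 3 for the prefix + 11 from the port UUID
    return prefix + str(port_id)[:11]

port_id = "1f2b4a1c-9d2e-4c21-8f33-0a1b2c3d4e5f"  # example UUID
print(tap_device_name(port_id))  # tap1f2b4a1c-9d
```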
 
  I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP)
  might fit as part of the Nova-network/Neutron Migration priority
  that's just been announced for Kilo.  I'm aware that a part of that
  priority is concerned with live migration, but perhaps it could also
  include the goal of future networking work not having to touch Nova
  code?
 
  Regards,
  Neil
 
 
 
 
  --
  Kevin Benton
 




-- 
Kevin Benton

[openstack-dev] [QA] Meeting Thursday December 11th at 22:00 UTC

2014-12-10 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, December 11th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that a few weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

17:00 EST
07:00 JST
08:30 ACDT
23:00 CET
16:00 CST
14:00 PST

-Matt Treinish




Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-10 Thread Jay Pipes

On 12/10/2014 04:31 AM, Daniel P. Berrange wrote:

So the problem of Nova review bandwidth is a constant problem across all
areas of the code. We need to solve this problem for the team as a whole
in a much broader fashion than just for people writing VIF drivers. The
VIF drivers are really small pieces of code that should be straightforward
to review & get merged in any release cycle in which they are proposed.
I think we need to make sure that we focus our energy on doing this and
not ignoring the problem by breaking stuff off out of tree.


+1 well said.

-jay



Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-10 Thread Kevin Benton
 Remove everything out of tree, and leave only Neutron API framework as
 integration platform, would lower the attractions of the whole Openstack
 Project. Without a default good enough reference backend from community,
 customers have to depends on packagers to fully test all backends for them.

That's not what's being proposed. Please read the spec.
There will still be a tested reference implementation from the community
that gates all changes. Where the code lives has no impact on customers.

On Wed, Dec 10, 2014 at 12:32 AM, loy wolfe loywo...@gmail.com wrote:

 Remove everything out of tree, and leave only Neutron API framework as
 integration platform, would lower the attractions of the whole
 Openstack Project. Without a default good enough reference backend
 from community, customers have to depends on packagers to fully test
 all backends for them. Can we image nova without kvm, glance without
 swift? Cinder is weak because of default lvm backend, if in the future
 Ceph became the default it would be much better.

 If the goal of this decomposition is eventually moving default
 reference driver out, and the in-tree OVS backend is an eyesore, then
 it's better to split the Neutron core with base repo and vendor repo.
 They only share common base API/DB model, each vendor can extend their
 API, DB model freely, using a shim proxy to delegate all the service
 logic to their backend controller. They can choose to keep out of
 tree, or in tree (vendor repo) with the previous policy that
 contribute code reviewing for their code being reviewed by other
 vendors.





-- 
Kevin Benton


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-10 Thread Vladimir Kozhukalov
Devananda,

Thank you for such a constructive letter,

First of all, just to make sure we are on the same page: we are totally +1
for using any tool which meets our requirements, and we are totally +1 for
working together on the same problems. As you remember, we suggested adding
advanced partitioning capabilities (md and lvm) to IPA. I see that it is a
layering violation for Ironic and that it is not in the cloud scope, but we
need these features because our users want them and because our use case is
deployment. To me it seems OK for a tool to have features which are not
mandatory to use.

And we didn't start Fuel Agent until these features were rejected for
merging into IPA. If we had had a chance to implement them in terms of IPA,
that would have been the preferred way for us.

Some details:

* Power management

For power management, Cobbler uses so-called 'fence agents'. This is just an
OS package which provides a bunch of scripts using ILO, IPMI and DRAC
clients. We have extended this set of agents with a so-called 'ssh' agent,
which is able to run the 'reboot' command inside the OS via ssh. We use this
agent by default because many of our users do their experiments on BMC-free
hardware. That is why this spec
https://review.openstack.org/#/c/138115/ refers to the SSH power driver.

I know Ironic already has an SSH power driver which runs the 'virsh' command
(a little confusing) via ssh, and it is supposed to be used for experimental
environments. The suggestion to implement another SSH power driver could
confuse people. My suggestion is to extend Ironic's SSH power driver so as
to make it able to run any command from a set (virsh, vbox or even reboot).
And maybe renaming this driver to something like 'experimental' or
'development' is not a bad idea. I am aware that Ironic wants to remove this
driver entirely, as it is used for tests only. But there are lots of
different power cases (including hardware without a BMC), and we definitely
need a place to put this non-standard power-related stuff. I believe many
people are interested in having such a workaround.
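A rough sketch of the extension being suggested: one SSH power driver parameterized by a named command set, so that 'virsh', 'vbox', or a plain 'reboot' are just entries in a table. The names and command templates below are hypothetical examples; they do not reflect Ironic's actual SSH power driver interface:

```python
# Hypothetical sketch of a single SSH power driver parameterized by a
# command set. The templates are examples only, not Ironic's real API.

COMMAND_SETS = {
    "virsh": "virsh reset {node}",
    "vbox": "VBoxManage controlvm {node} reset",
    "reboot": "reboot",  # run inside the node's OS, for BMC-free hardware
}

def build_reboot_command(command_set, node):
    """Return the remote command the driver would run over ssh."""
    template = COMMAND_SETS[command_set]
    return template.format(node=node)

print(build_reboot_command("virsh", "node-1"))  # virsh reset node-1
```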

And we certainly need other Ironic power management capabilities like ILO,
DRAC, IPMI. We are also potentially very interested in developing other
hardware management capabilities like configuring hardware RAIDs,
BIOS/UEFI, etc.

* DHCP, TFTP, DNS management

We are aware of the way Ironic manages DHCP (not directly). As you
mentioned, Ironic currently has a pluggable framework for DHCP, and the
only in-tree driver is Neutron. And we are aware that implementing a kind
of dnsmasq wrapper immediately breaks the Ironic scaling scheme (many
conductors). When I wrote 'it is planned to implement a dnsmasq plugin' in
this spec
https://review.openstack.org/#/c/138301 I didn't mean that Ironic is
planning to do this. I meant that the Fuel team is planning to implement
this dnsmasq plugin out of the Ironic tree (we will point this out
explicitly), just to be able to fit the Fuel release cycle (iterative
development). Maybe in the future we will consider switching to Neutron for
managing networks (out of scope of this discussion). This Ironic Fuel Agent
driver is supposed to use Ironic abstractions to configure DHCP, i.e. call
the plugin methods update_port_dhcp_opts, update_port_address,
update_dhcp_opts and get_ip_addresses, NOT changing the Ironic core (again,
we will point this out explicitly).

* IPA vs. Fuel Agent

My suggestion here is to stop thinking of Fuel Agent as Fuel-only stuff. I
hope it is clear by now that Fuel Agent is just a generic tool which is
about the 'operator == user within a traditional IT shop' use case. And
this use case requires all that stuff like LVM and enormous flexibility,
which does not even have a chance to be considered as a part of IPA in the
next few months. A good decision here might be implementing a Fuel Agent
driver and then working on distinguishing the common IPA and Fuel Agent
parts and putting them into one tree (long-term perspective). If it is a
big deal, we can even rename Fuel Agent into something which sounds more
neutral (not related to Fuel) and put it into a separate git repository.

If this is what FuelAgent is about, why is there so much resistance to
 contributing that functionality to the component which is already
 integrated with Ironic? Why complicate matters for both users and
 developers by adding *another* deploy agent that does (or will soon do) the
 same things?


Briefly, we are glad to contribute to IPA, but let's do things iteratively.
I need to deliver power and DHCP management plus image-based
provisioning by March 2015. According to my previous experience of
contributing to IPA, it is almost impossible to merge everything I need by
that time. It is possible to implement a Fuel Agent driver by that time. It
is also possible to implement something on my own, not integrating Ironic
into Fuel at all. As a long-term perspective, if it's OK to land MD and LVM
support in IPA, we definitely can do that.

In summary, if I understand correctly, it seems as though you're trying to
 fit Ironic into 

Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-10 Thread Jason Kölker
On Mon, Dec 8, 2014 at 7:57 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 I'll be traveling around the time of the L3 meeting this week.  My
 flight leaves 40 minutes after the meeting and I might have trouble
 attending.  It might be best to put it off a week or to plan another
 time -- maybe Friday -- when we could discuss it in IRC or in a
 Hangout.

Carl,

Very glad to see the work the L3 team has been working towards in
this. I'm still digesting the specs/blueprints, but as you stated they
are very much in the direction we'd like to head as well. I'll start
lurking in the L3 meetings to get more familiar with the current state
of things as I've been disconnected from upstream for a while. I'm
`jkoelker` on freenode or `jkoel...@gmail.com` for hangouts if you
wanna chat.

Happy Hacking!

7-11



Re: [openstack-dev] [oslo] interesting problem with config filter

2014-12-10 Thread Ihar Hrachyshka

On 08/12/14 21:58, Doug Hellmann wrote:
 As we’ve discussed a few times, we want to isolate applications
 from the configuration options defined by libraries. One way we
 have of doing that is the ConfigFilter class in oslo.config. When a
 regular ConfigOpts instance is wrapped with a filter, a library can
 register new options on the filter that are not visible to anything
 that doesn’t have the filter object. Unfortunately, the Neutron
 team has identified an issue with this approach. We have a bug
 report [1] from them about the way we’re using config filters in
 oslo.concurrency specifically, but the issue applies to their use
 everywhere.
 
 The neutron tests set the default for oslo.concurrency’s lock_path
 variable to “$state_path/lock”, and the state_path option is
 defined in their application. With the filter in place,
 interpolation of $state_path to generate the lock_path value fails
 because state_path is not known to the ConfigFilter instance.
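The failure mode can be illustrated with plain-Python string templating as an analogy. oslo.config's interpolation is its own mechanism, and this is not its API; the sketch just shows why substitution fails when the referenced option is not visible in the same namespace:

```python
from string import Template

# Analogy for the oslo.config issue: "$state_path/lock" can only be
# expanded if state_path is visible in the same namespace. A filtered
# view that hides state_path makes the substitution fail.

full_view = {"state_path": "/var/lib/neutron"}
filtered_view = {}  # state_path hidden, as with a ConfigFilter

tmpl = Template("$state_path/lock")
print(tmpl.substitute(full_view))  # /var/lib/neutron/lock

try:
    tmpl.substitute(filtered_view)
except KeyError as exc:
    print(f"interpolation failed: missing {exc}")
```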

It's not just unit tests. It's also in generic /etc/neutron.conf file
installed with the rest of neutron:
https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L23

There is nothing wrong in the way neutron sets it up, so I expect the
fix to go in either oslo.concurrency or oslo.config, whichever is
achievable.

 
 The reverse would also happen (if the value of state_path was
 somehow defined to depend on lock_path), and that’s actually a
 bigger concern to me. A deployer should be able to use
 interpolation anywhere, and not worry about whether the options are
 in parts of the code that can see each other. The values are all in
 one file, as far as they know, and so interpolation should “just
 work”.

+1. It's not a deployer's job to read code and determine which options
are substitution-aware and which are not.

 
 I see a few solutions:
 
 1. Don’t use the config filter at all.

+1. And that's not just for oslo.concurrency case, but globally.

 2. Make the config filter able to add new options and still see
 everything else that is already defined (only filter in one
 direction).
 3. Leave things as they are, and make the error message better.
 
 Because of the deployment implications of using the filter, I’m
 inclined to go with choice 1 or 2. However, choice 2 leaves open
 the possibility of a deployer wanting to use the value of an option
 defined by one filtered set of code when defining another. I don’t
 know how frequently that might come up, but it seems like the error
 would be very confusing, especially if both options are set in the
 same config file.
 
 I think that leaves option 1, which means our plans for hiding
 options from applications need to be rethought.
 
 Does anyone else see another solution that I’m missing?

I'm not an oslo guy, so I leave the resolution to you.

 
 Doug
 
 [1] https://bugs.launchpad.net/oslo.config/+bug/1399897 
 



Re: [openstack-dev] [TripleO] mid-cycle details final draft

2014-12-10 Thread Clint Byrum
Just FYI, we ran into a last minute scheduling conflict with the venue
and are sorting it out, so please _do not book travel yet_. Worst case
it will move to Feb 16 - 18 instead of 18 - 20.

Excerpts from Clint Byrum's message of 2014-12-01 14:58:58 -0800:
 Hello! I've received confirmation that our venue, the HP offices in
 downtown Seattle, will be available for the most-often-preferred
 least-often-cannot week of Feb 16 - 20.
 
 Our venue has a maximum of 20 participants, but I only have 16 possible
 attendees now. Please add yourself to that list _now_ if you will be
 joining us.
 
 I've asked our office staff to confirm Feb 18 - 20 (Wed-Fri). When they
 do, I will reply to this thread to let everyone know so you can all
 start to book travel. See the etherpad for travel details.
 
 https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup



Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-12-10 Thread Vitaly Kramskikh
2014-12-10 19:31 GMT+03:00 Evgeniy L e...@mirantis.com:



 On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh vkramsk...@mirantis.com
  wrote:



 2014-12-10 16:57 GMT+03:00 Evgeniy L e...@mirantis.com:

 Hi,

 First let me describe what our plans for the nearest release. We want to
 deliver
 role as a simple plugin, it means that plugin developer can define his
 own role
 with yaml and also it should work fine with our current approach when
 user can
 define several fields on the settings tab.

 Also I would like to mention another thing which we should probably
 discuss
 in separate thread, how plugins should be implemented. We have two types
 of plugins, simple and complicated, the definition of simple - I can do
 everything
 I need with yaml, the definition of complicated - probably I have to
 write some
 python code. It doesn't mean that this python code should do absolutely
 everything it wants, but it means we should implement stable, documented
 interface where plugin is connected to the core.

 Now lets talk about UI flow, our current problem is how to get the
 information
 if plugins is used in the environment or not, this information is
 required for
 backend which generates appropriate tasks for task executor, also this
 information can be used in the future if we decide to implement plugins
 deletion
 mechanism.

 I didn't come up with any new solution; as before, we have two options
 to solve the problem:

 # 1

 Use conditional language which is currently used on UI, it will look like
 Vitaly described in the example [1].
 Plugin developer should:

 1. describe at least one element for UI, which he will be able to use in
 task

 2. add condition which is written in our own programming language

 Example of the condition for LBaaS plugin:

 condition: settings:lbaas.metadata.enabled == true

 3. add condition to metadata.yaml a condition which defines if plugin is
 enabled

 is_enabled: settings:lbaas.metadata.enabled == true
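The condition form shown above can be evaluated against the environment's settings roughly as follows. This is only a sketch of the `<scope>:<dotted.path> == <literal>` shape used in the thread — the real Fuel UI expression language is richer, and the settings structure here is hypothetical:

```python
# Minimal sketch of evaluating a Fuel-UI-style condition such as
# "settings:lbaas.metadata.enabled == true" against nested settings.
# The real expression language supports more operators; this handles
# only the equality form quoted in the thread.

def evaluate(condition, scopes):
    lhs, rhs = [part.strip() for part in condition.split('==')]
    scope_name, path = lhs.split(':', 1)
    value = scopes[scope_name]
    for key in path.split('.'):
        value = value[key]          # walk the nested dict
    expected = {'true': True, 'false': False}.get(rhs, rhs)
    return value == expected


scopes = {'settings': {'lbaas': {'metadata': {'enabled': True}}}}
print(evaluate('settings:lbaas.metadata.enabled == true', scopes))  # True
```

The same evaluator would serve both per-task conditions and the `is_enabled` flag in metadata.yaml, since both are expressions over the same settings tree.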

 This approach has good flexibility, but also it has problems:

 a. It's complicated and not intuitive for plugin developer.

 It is less complicated than python code


 I'm not sure why you are talking about Python code here; my point
 is that we should not force developers to use these conditions in any language.

 But that's how current plugin-like stuff works. There are various tasks
which are run only if some checkboxes are set, so stuff like Ceph and
vCenter will need conditions to describe tasks.

 Anyway, I don't agree with the statement that there are more people who know
 Python than the Fuel UI conditional language.


 b. It doesn't cover case when the user installs 3rd party plugin
 which doesn't have any conditions (because of # a) and
 user doesn't have a way to disable it for environment if it
 breaks his configuration.

 If plugin doesn't have conditions for tasks, then it has invalid metadata.


 Yep, and it's a problem of the platform, which provides a bad interface.

Why is it bad? If the plugin writer doesn't provide a plugin name or version,
then the metadata is invalid too. It is the plugin writer's fault that he didn't
write the metadata properly.




 # 2

 As we discussed from the very beginning after user selects a release he
 can
 choose a set of plugins which he wants to be enabled for environment.
 After that we can say that plugin is enabled for the environment and we
 send
 tasks related to this plugin to task executor.

  My approach also allows to eliminate enableness of plugins which
 will cause UX issues and issues like you described above. vCenter and Ceph
 also don't have enabled state. vCenter has hypervisor and storage, Ceph
 provides backends for Cinder and Glance which can be used simultaneously or
 only one of them can be used.

 Both of described plugins have enabled/disabled state, vCenter is enabled
 when vCenter is selected as hypervisor. Ceph is enabled when it's
 selected
 as a backend for Cinder or Glance.

 Nope, Ceph for Volumes can be used without Ceph for Images. Both of these
 plugins can also have some granular tasks which are enabled by various
 checkboxes (like VMware vCenter for volumes). How would you determine
 whether tasks which installs VMware vCenter for volumes should run?


 Why nope? I have Cinder OR Glance.

Oh, I missed it. So there are 2 checkboxes, how would you determine
enableness?

 It can be easily handled in deployment script.

I don't know much about the status of granular deployment blueprint, but
AFAIK that's what we are going to get rid of.



 If you don't like the idea of having Ceph/vCenter checkboxes on the
 first page,
 I can suggest as an idea (research is required) to define groups like
 Storage Backend,
 Network Manager and we will allow plugin developer to embed his option
 in radiobutton
 field on wizard pages. But plugin developer should not describe
 conditions, he should
 just write that his plugin is a Storage Backend, Hypervisor or new
 Network Manager.
 And the plugins e.g. Zabbix, 

Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Jay Faulkner
Often times I find myself in need of going the other direction — which IRC nick 
goes to which person. Does anyone know how to do that with the Foundation 
directory?

Thanks,
Jay

 On Dec 10, 2014, at 2:30 AM, Matthew Gilliard matthew.gilli...@gmail.com 
 wrote:
 
 So, are we agreed that http://www.openstack.org/community/members/ is
 the authoritative place for IRC lookups? In which case, I'll take the
 old content out of https://wiki.openstack.org/wiki/People and leave a
 message directing people where to look.
 
 I don't have the imagination to use anything other than my real name
 on IRC but for people who do, should we try to encourage putting the
 IRC nick in the gerrit name?
 
 On Tue, Dec 9, 2014 at 11:56 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Angus Salkeld's message of 2014-12-09 15:25:59 -0800:
 On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli stef...@openstack.org
 wrote:
 
 On 12/09/2014 06:04 AM, Jeremy Stanley wrote:
 We already have a solution for tracking the contributor-IRC
 mapping--add it to your Foundation Member Profile. For example, mine
 is in there already:
 
http://www.openstack.org/community/members/profile/5479
 
 I recommend updating the openstack.org member profile and add IRC
 nickname there (and while you're there, update your affiliation history).
 
 There is also a search engine on:
 
 http://www.openstack.org/community/members/
 
 
 Except that info doesn't appear nicely in review. Some people put their
 nick in their Full Name in
 gerrit. Hopefully Clint doesn't mind:
 
 https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z
 
 
 Indeed, I really didn't like that I'd be reviewing somebody's change,
 and talking to them on IRC, and not know if they knew who I was.
 
 It also has the odd side effect that gerritbot triggers my IRC filters
 when I 'git review'.
 
 



Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-10 Thread A, Keshava
Hi Murali,

There are many unknowns w.r.t. 'Service-VM' and how it should look from an NFV
perspective.
In my opinion it has not been decided how the Service-VM framework should look.
Depending on this, we at OpenStack will also see an impact on 'Service Chaining'.
Please find attached the mail w.r.t. that discussion with NFV on 'Service-VM +
OpenStack OVS related discussion'.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B 
mbi...@gmail.com wrote:
Hi keshava,

We would like contribute towards service chain and NFV

Could you please share the document if you have any related to service VM

The service chain can be achieved if we are able to redirect the traffic to the
service VM using OVS flows;

in this case we don't need routing enabled on the service VM (traffic is
redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by adding
the OVS rules in OVS.
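To make the L2-redirect idea concrete, below is a sketch of the kind of flow rules that could steer tenant traffic through a service VM, expressed as strings to be passed to `ovs-ofctl add-flow`. The bridge name and port numbers are purely illustrative, not taken from any real deployment:

```python
# Sketch of OpenFlow rules that redirect tenant traffic through a
# service VM at L2, composed as "ovs-ofctl add-flow" command lines.
# Bridge name and port numbers are hypothetical.

def redirect_flows(bridge, tenant_port, svc_in_port, svc_out_port):
    flows = [
        # tenant traffic entering the bridge is sent to the service VM
        'priority=100,in_port=%d,actions=output:%d' % (tenant_port, svc_in_port),
        # traffic returning from the service VM resumes normal switching
        'priority=100,in_port=%d,actions=NORMAL' % svc_out_port,
    ]
    return ['ovs-ofctl add-flow %s "%s"' % (bridge, f) for f in flows]


for cmd in redirect_flows('br-int', 1, 10, 11):
    print(cmd)
```

Since the redirect happens purely on ports, the service VM needs no routing function of its own, which matches the point above about redirection at L2.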


Thanks
-Murali





---BeginMessage---
Some of my perspective with [RP]

From: A, Keshava keshav...@hp.com
Date: Wednesday, December 10, 2014 at 8:59 AM
To: Christopher Price christopher.pr...@ericsson.com,
opnfv-tech-disc...@lists.opnfv.org, opnfv-...@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Service VM v/s its basic framework

Hi Chris,

Thanks for your reply.
In my opinion it is very important to have common understanding how the 
Service-VM should look .


1.   There are many question coming-in like ‘OVS can be also part of the 
Service-VM”?

[RP] It can, but the only advantage is if it can process NSH headers.


If multiple features (services) are running within one Service-VM, use 'local
OVS to do service-chaining'.

(Of course it can also be handled by the internal routing table, by setting the
next-hop to the next service running within that local VM, if routing is
running in that VM.)

[RP] Then local OVS becomes a SFF. This is fine, I see no issues with it. Your 
Service Path topology would include each (SFF, SF) tuple inside your service-VM 
as a service hop.



2.   OVS running in compute node:

a.   Can be used to do ‘service chaining across Service-VM’ running with in 
same compute Node ?

b.  Service-VM running in different compute-nodes needs can be chained by 
Service Layer.

[RP] As soon as a OVS (irrespective where it is) sends packets to Service 
Functions it becomes a SFF. If you think like that everything becomes simpler.

   With both 1 + 2 and ‘Service Layer running in NFV orchestration’ +  
‘Service topology’ .
This ‘Service Layer’ will configures

a.   ‘OpenStack Controller’ to configure OVS which it manages for Service 
Chaining.

b.  Service-VM , to chain the service within that VM itself.

[RP] I think OpenStack's current layer-2 hop-by-hop Service Path diverges
considerably from IETF's proposal and consequently from the ODL implementation.
I think this is a good opportunity to align everything.


3.   HA framework :

a.   Service VMs will run in Active-Active mode or Active-Standby mode ?

b.  How the incoming packet should be  delivered ?

c.   OpenStack should deliver the packet only to Active-VM?

i.  Or to both Active and Standby-VM together?

ii. Or first to Standby-VM, which then delivers to Active-VM?

d.  Active-VM should control Standby-VM ?

[RP] Let’s think about SFF and SF. SFF controls where the packet are sent, 
period. SFs has no saying in it.


e.  Active-VM will control the network ?

f.   Active-VM will be ahead of Standby-VM as far as live network information
is concerned?


4.   Can the Service-VM feed routing/forwarding information to the OpenStack
infrastructure? Or should it stay within that Service-VM

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-10 Thread Zane Bitter

You really need to get a real email client with quoting support ;)

On 10/12/14 06:42, Murugan, Visnusaran wrote:

Well, we still have to persist the dependencies of each version of a resource 
_somehow_, because otherwise we can't know how to clean them up in the correct 
order. But what I think you meant to say is that this approach doesn't require 
it to be persisted in a separate table where the rows are marked as traversed 
as we work through the graph.

[Murugan, Visnusaran]
In case of rollback where we have to cleanup earlier version of resources, we 
could get the order from old template. We'd prefer not to have a graph table.


In theory you could get it by keeping old templates around. But that 
means keeping a lot of templates, and it will be hard to keep track of 
when you want to delete them. It also means that when starting an update 
you'll need to load every existing previous version of the template in 
order to calculate the dependencies. It also leaves the dependencies in 
an ambiguous state when a resource fails, and although that can be 
worked around it will be a giant pain to implement.


I agree that I'd prefer not to have a graph table. After trying a couple 
of different things I decided to store the dependencies in the Resource 
table, where we can read or write them virtually for free because it 
turns out that we are always reading or updating the Resource itself at 
exactly the same time anyway.



This approach reduces DB queries by waiting for completion notification on a topic. The 
drawback I see is that delete stack stream will be huge as it will have the entire graph. 
We can always dump such data in ResourceLock.data Json and pass a simple flag 
load_stream_from_db to converge RPC call as a workaround for delete operation.


This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with the 
key difference that the data is stored in-memory in a Heat engine rather than 
the database.

I suspect it's probably a mistake to move it in-memory for similar reasons to 
the argument Clint made against synchronising the marking off of dependencies 
in-memory. The database can handle that and the problem of making the DB robust 
against failures of a single machine has already been solved by someone else. 
If we do it in-memory we are just creating a single point of failure for not 
much gain. (I guess you could argue it doesn't matter, since if any Heat engine 
dies during the traversal then we'll have to kick off another one anyway, but 
it does limit our options if that changes in the future.)
[Murugan, Visnusaran] Resource completes, removes itself from resource_lock and 
notifies engine. Engine will acquire parent lock and initiate parent only if 
all its children are satisfied (no child entry in resource_lock). This will 
come in place of Aggregator.


Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what 
I did. The three differences I can see are:


1) I think you are proposing to create all of the sync points at the 
start of the traversal, rather than on an as-needed basis. This is 
probably a good idea. I didn't consider it because of the way my 
prototype evolved, but there's now no reason I can see not to do this. 
If we could move the data to the Resource table itself then we could 
even get it for free from an efficiency point of view.
2) You're using a single list from which items are removed, rather than 
two lists (one static, and one to which items are added) that get 
compared. Assuming (1) then this is probably a good idea too.
3) You're suggesting to notify the engine unconditionally and let the 
engine decide if the list is empty. That's probably not a good idea - 
not only does it require extra reads, it introduces a race condition 
that you then have to solve (it can be solved, it's just more work). 
Since the update to remove a child from the list is atomic, it's best to 
just trigger the engine only if the list is now empty.
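Point (3) above — atomically removing a child and triggering the engine only when the list empties — can be sketched as follows. A lock stands in for the atomic DB update; names are illustrative, not Heat's actual classes:

```python
# Sketch of "trigger only if the list is now empty": each child removes
# itself under a lock (standing in for an atomic DB update), and only
# the child that empties the set notifies the engine -- no extra reads
# and no race over who starts the parent.

import threading

class SyncPoint:
    def __init__(self, children, on_ready):
        self._remaining = set(children)
        self._lock = threading.Lock()
        self._on_ready = on_ready

    def child_done(self, child):
        with self._lock:                  # atomic remove-and-check
            self._remaining.discard(child)
            now_empty = not self._remaining
        if now_empty:
            self._on_ready()              # exactly one caller triggers


triggered = []
sp = SyncPoint(['a', 'b', 'c'], on_ready=lambda: triggered.append(True))
sp.child_done('a')
sp.child_done('b')
assert triggered == []      # parent not started yet
sp.child_done('c')
assert triggered == [True]  # started exactly once, by the last child
```

The unconditional-notify alternative would require the engine to re-read the list on every notification and then deduplicate concurrent "it's empty" observations — the race this scheme avoids.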



It's not clear to me how the 'streams' differ in practical terms from just 
passing a serialisation of the Dependencies object, other than being 
incomprehensible to me ;). The current Dependencies implementation
(1) is a very generic implementation of a DAG, (2) works and has plenty of unit 
tests, (3) has, with I think one exception, a pretty straightforward API, (4) 
has a very simple serialisation, returned by the edges() method, which can be 
passed back into the constructor to recreate it, and (5) has an API that is to 
some extent relied upon by resources, and so won't likely be removed outright 
in any event.
Whatever code we need to handle dependencies ought to just build on this 
existing implementation.
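The round-trip property described in (4) — edges() output feeding straight back into the constructor — looks like this in miniature. This is a simplified stand-in, not Heat's actual Dependencies class:

```python
# Simplified stand-in for the Dependencies DAG described above: its
# serialisation is just the list of edges, and passing edges() back
# into the constructor recreates an equivalent graph.

class Dependencies:
    def __init__(self, edges=()):
        self._edges = set(edges)

    def edges(self):
        # trivially serialisable representation of the whole graph
        return sorted(self._edges)

    def successors(self, node):
        return sorted(dst for src, dst in self._edges if src == node)


g = Dependencies([('server', 'volume'), ('server', 'port')])
flat = g.edges()            # e.g. ready to store or put on the wire
g2 = Dependencies(flat)     # round-trip
assert g2.edges() == g.edges()
assert g2.successors('server') == ['port', 'volume']
```

For a graph, a list of edges is already close to the minimal representation, which is the point being made against inventing a separate 'streams' format.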
[Murugan, Visnusaran] Our thought was to reduce payload size (template/graph),
just planning for the worst-case scenario (a million-resource stack). We could
always dump them in ResourceLock.data to be loaded by the Worker.


If there's a smaller representation of a graph than a list of edges then 

Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-10 Thread Cedric OLLIVIER
 https://review.openstack.org/#/c/140191/

2014-12-09 18:32 GMT+01:00 Armando M. arma...@gmail.com:


 By the way, if Kyle can do it in his teeny tiny time that he has left
 after his PTL duties, then anyone can do it! :)

 https://review.openstack.org/#/c/140191/

 Fully cloning Dave Tucker's repository [1] and the outdated fork of the
ODL ML2 MechanismDriver included raises some questions (e.g. [2]).
I wish the next patch set removes some files. At least it should take the
mainstream work into account (e.g. [3]) .

[1] https://github.com/dave-tucker/odl-neutron-drivers
[2] https://review.openstack.org/#/c/113330/
[3] https://review.openstack.org/#/c/96459/


Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Jeremy Stanley
On 2014-12-10 17:55:37 + (+), Jay Faulkner wrote:
 Often times I find myself in need of going the other direction —
 which IRC nick goes to which person. Does anyone know how to do
 that with the Foundation directory?

I don't think there's a lookup for that (might be worth logging a
feature request) but generally I rely on using the /whois command to
ask the IRC network for details on a particular nick and look at the
realname returned with it. I would encourage people to make sure
their IRC clients are configured to set that metadata to something
useful unless they really prefer to interact entirely pseudonymously
there (that's of course a legitimate preference too).
-- 
Jeremy Stanley



Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Stefano Maffulli
On 12/10/2014 02:30 AM, Matthew Gilliard wrote:
 So, are we agreed that http://www.openstack.org/community/members/ is
 the authoritative place for IRC lookups? In which case, I'll take the
 old content out of https://wiki.openstack.org/wiki/People and leave a
 message directing people where to look.

Yes, please, let me know if you need help.

 I don't have the imagination to use anything other than my real name
 on IRC but for people who do, should we try to encourage putting the
 IRC nick in the gerrit name?

That's hard to enforce. A better way to solve this would be to link
directly gerrit IDs to openstack.org profile URL but I have no idea how
that would work. Gerrit seems only to show full name and email address
as a fly-over, when you hover on the reviewer/owner name in the UI.



Re: [openstack-dev] [horizon] REST and Django

2014-12-10 Thread Thai Q Tran
I think we're arguing for the same thing, but maybe with slightly different
approaches. I think we can both agree that a middle layer is required, whether
we intend to use it as a proxy or as REST endpoints. Regardless of the
approach, the client needs to relay which API it wants to invoke, and you can
do that either via RPC or REST. I personally prefer the REST approach because
it shields the client: the client just needs to know which URL to hit in order
to invoke a certain API, and does not need to know the procedure name or
parameter ordering. Having said all of that, I do believe we should keep it as
thin as possible. I do like the idea of having separate classes for different
API versions. What we have today is a thin REST layer that acts like a proxy:
you hit a certain URL, and the middle layer forwards the API invocation. The
only exception to this rule is support for batch deletions.

-Tihomir Trifonov t.trifo...@gmail.com wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
From: Tihomir Trifonov t.trifo...@gmail.com
Date: 12/10/2014 03:04AM
Subject: Re: [openstack-dev] [horizon] REST and Django

Richard, thanks for the reply,

I agree that the given example is not real REST. But we already have REST APIs
- the Keystone, Nova, Cinder, Glance, Neutron, etc. APIs. So what do we plan to
do here? Add a new REST layer to communicate with other REST APIs? Do we really
need a Frontend-REST-REST architecture? My opinion is that we don't need
another REST layer, as we are currently trying to move away from the Django
layer, which is the same thing - another processing layer. Although we call it
a REST proxy or whatever - it doesn't need to be real REST, but just an
aggregation proxy that combines and forwards some requests while adding minimal
processing overhead.

What makes sense to me is to keep the authentication in this layer as it is now
- push a cookie to the frontend, while the REST layer extracts the auth tokens
from the session storage and prepares the auth context for the REST API request
to the OS services. This way we will not expose the tokens to the JS frontend,
and will have strict control over the authentication. The frontend will just
send data requests; they will be wrapped with the auth context and forwarded.

Regarding the existing issues with versions in the API - for me the existing
approach is wrong. All these fixes were made as workarounds. What should have
been done is to create abstractions for each version and to use a separate
class for each version. This was partially done for the keystoneclient in
api/keystone.py, but not for the forms/views, where we still have if-else for
versions. What I suggest here is to have different (concrete) views/forms for
each version, and to use them according to the context: if the Keystone backend
is v2.0, then in the frontend use a keystone2() object, otherwise use a
keystone3() object. This of course needs some more coding, but is much cleaner
in terms of customization and testing. For me the current hacks with
'if keystone.version == 3.0' are wrong at many levels. And this can be solved
now. The problem until now was that we had one frontend that had to be backed
by different versions of backend components. Now we can have different
frontends that map to a specific backend. That's how I understand the power of
Angular with its views and directives, and that's where I see the real benefit
of using a full-featured frontend. Also imagine how much easier it will then be
to deprecate a component version, compared to what we need to do now for the
same.

Otherwise we just rewrite the current Django middleware with another DjangoRest
middleware and don't change anything; we don't fix the problems, we just move
them to another place.

I still think that in Paris we talked about a new generation of the Dashboard,
a different approach to building the frontend for OpenStack. What I heard there
from users/operators of Horizon was that it was extremely hard to add
customizations and new features to the Dashboard, as all of these needed to go
through upstream changes and wait until the next release cycle. Do we still
want to address these concerns, and how? Please correct me if I got things
wrong.

On Wed, Dec 10, 2014 at 11:56 AM, Richard Jones r1chardj0...@gmail.com wrote:
Sorry I didn't respond to this earlier today, I had intended to.

What you're describing isn't REST, and the principles of REST are what have
been guiding the design of the new API so far. I see a lot of value in using
REST approaches, mostly around clarity of the interface.

While the idea of a very thin proxy seemed like a great idea at one point, my
conversations at the summit convinced me that there was value in both using the
client interfaces present in the openstack_dashboard/api code base (since they
abstract away many issues in the apis including across versions) and also
value in us being able to clean up (for example, using "project_id" rather
than "project" in the user

Re: [openstack-dev] [neutron][lbaas] Kilo Midcycle Meetup

2014-12-10 Thread Susanne Balle
Cool! Thx

Susanne

On Wed, Dec 10, 2014 at 12:48 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 It's set.  We'll be having the meetup on Feb 2-6 in San Antonio at RAX
 HQ.  I'll add a list of hotels and the address on the etherpad.

 https://etherpad.openstack.org/p/lbaas-kilo-meetup

 Thanks,
 Brandon

 On Tue, 2014-12-02 at 17:27 +, Brandon Logan wrote:
  Per the meeting, put together an etherpad here:
 
  https://etherpad.openstack.org/p/lbaas-kilo-meetup
 
  I would like to get the location and dates finalized ASAP (preferrably
  the next couple of days).
 
  We'll also try to do the same as the neutron and octava meetups for
  remote attendees.




[openstack-dev] [Ironic] 0.3.2 client release

2014-12-10 Thread Devananda van der Veen
Hi folks,

Just a quick announcement that I've tagged an incremental release of our
client library to catch up with the changes so far in Kilo in preparation
for the k-1 milestone next week. Here are the release notes:

- Add keystone v3 CLI support
- Add tty password entry to CLI
- Add node-set-maintenance command to CLI
- Include maintenance_reason in CLI output of node-show
- Add option to specify node uuid in node-create subcommand
- Add GET support for vendor_passthru to the library

It should be winding its way through the build pipeline right now, and
available on pypi later today.

Regards,
Devananda


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-10 Thread Yuriy Zveryanskyy

New version of the spec:
https://review.openstack.org/#/c/138115/
Problem description updated.
Power interface part removed (not in scope of deploy driver).

On 12/09/2014 12:23 AM, Devananda van der Veen wrote:


I'd like to raise this topic for a wider discussion outside of the 
hallway track and code reviews, where it has thus far mostly remained.



In previous discussions, my understanding has been that the Fuel team 
sought to use Ironic to manage pets rather than cattle - and doing 
so required extending the API and the project's functionality in ways 
that no one else on the core team agreed with. Perhaps that 
understanding was wrong (or perhaps not), but in any case, there is 
now a proposal to add a FuelAgent driver to Ironic. The proposal 
claims this would meet that teams' needs without requiring changes to 
the core of Ironic.



https://review.openstack.org/#/c/138115/


The Problem Description section calls out four things, which have all 
been discussed previously (some are here [0]). I would like to address 
each one, invite discussion on whether or not these are, in fact, 
problems facing Ironic (not whether they are problems for someone, 
somewhere), and then ask why these necessitate a new driver be added 
to the project.



They are, for reference:


1. limited partition support

2. no software RAID support

3. no LVM support

4. no support for hardware that lacks a BMC


#1.

When deploying a partition image (eg, QCOW format), Ironic's PXE 
deploy driver performs only the minimal partitioning necessary to 
fulfill its mission as an OpenStack service: respect the user's 
request for root, swap, and ephemeral partition sizes. When deploying 
a whole-disk image, Ironic does not perform any partitioning -- such 
is left up to the operator who created the disk image.



Support for arbitrarily complex partition layouts is not required by, 
nor does it facilitate, the goal of provisioning physical servers via 
a common cloud API. Additionally, as with #3 below, nothing prevents a 
user from creating more partitions in unallocated disk space once they 
have access to their instance. Therefore, I don't see how Ironic's 
minimal support for partitioning is a problem for the project.
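The "minimal partitioning" policy described above amounts to a simple computation: honour the requested root/swap/ephemeral sizes and leave the remainder of the disk unallocated for the user. A sketch, with purely illustrative sizes:

```python
# Sketch of the minimal partition layout the PXE deploy driver is
# described as producing: root, swap, and ephemeral as requested, and
# everything else left unallocated.  Sizes in GiB; values illustrative.

def minimal_layout(disk_gib, root_gib, swap_gib, ephemeral_gib):
    used = root_gib + swap_gib + ephemeral_gib
    if used > disk_gib:
        raise ValueError('requested partitions exceed disk size')
    return {
        'root': root_gib,
        'swap': swap_gib,
        'ephemeral': ephemeral_gib,
        'unallocated': disk_gib - used,   # free for the user afterwards
    }


layout = minimal_layout(disk_gib=500, root_gib=40, swap_gib=8, ephemeral_gib=100)
print(layout)  # {'root': 40, 'swap': 8, 'ephemeral': 100, 'unallocated': 352}
```

Anything fancier than this — arbitrary layouts, LVM, software RAID — is what the argument above places out of scope for the provisioning phase.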



#2.

There is no support for defining a RAID in Ironic today, at all, 
whether software or hardware. Several proposals were floated last 
cycle; one is under review right now for DRAC support [1], and there 
are multiple call outs for RAID building in the state machine 
mega-spec [2]. Any such support for hardware RAID will necessarily be 
abstract enough to support multiple hardware vendor's driver 
implementations and both in-band creation (via IPA) and out-of-band 
creation (via vendor tools).



Given the above, it may become possible to add software RAID support 
to IPA in the future, under the same abstraction. This would closely 
tie the deploy agent to the images it deploys (the latter image's 
kernel would be dependent upon a software RAID built by the former), 
but this would necessarily be true for the proposed FuelAgent as well.



I don't see this as a compelling reason to add a new driver to the 
project. Instead, we should (plan to) add support for software RAID to 
the deploy agent which is already part of the project.



#3.

LVM volumes can easily be added by a user (after provisioning) within 
unallocated disk space for non-root partitions. I have not yet seen a 
compelling argument for doing this within the provisioning phase.



#4.

There are already in-tree drivers [3] [4] [5] which do not require a 
BMC. One of these uses SSH to connect and run pre-determined commands. 
Like the spec proposal, which states at line 122, Control via SSH 
access feature intended only for experiments in non-production 
environment, the current SSHPowerDriver is only meant for testing 
environments. We could probably extend this driver to do what the 
FuelAgent spec proposes, as far as remote power control for cheap 
always-on hardware in testing environments with a pre-shared key.



(And if anyone wonders about a use case for Ironic without external 
power control ... I can only think of one situation where I would 
rationally ever want to have a control-plane agent running inside a 
user-instance: I am both the operator and the only user of the cloud.)






In summary, as far as I can tell, all of the problem statements upon 
which the FuelAgent proposal is based are solvable through 
incremental changes in existing drivers, or are out of scope for the 
project entirely. As another software-based deploy agent, FuelAgent 
would duplicate the majority of the functionality which 
ironic-python-agent has today.



Ironic's driver ecosystem benefits from a diversity of 
hardware-enablement drivers. Today, we have two divergent software 
deployment drivers which approach image deployment differently: 
agent drivers use a local agent to prepare a system and download the 
image; pxe drivers use a 

[openstack-dev] Lack of quota - security bug or not?

2014-12-10 Thread George Shuklin
I have some small discussion in launchpad: is lack of a quota for 
unprivileged user counted as security bug (or at least as a bug)?


If a user can create 100500 objects in the database via the normal API and ops 
have no way to restrict this, is it OK for OpenStack or not?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-10 Thread Ian Wells
On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
wrote:


 So the problem of Nova review bandwidth is a constant problem across all
 areas of the code. We need to solve this problem for the team as a whole
 in a much broader fashion than just for people writing VIF drivers. The
 VIF drivers are really small pieces of code that should be straightforward
 to review and get merged in any release cycle in which they are proposed.
 I think we need to make sure that we focus our energy on doing this and
 not ignoring the problem by breaking stuff off out of tree.


The problem is that we effectively prevent running an out-of-tree Neutron
driver (which *is* perfectly legitimate) if it uses a VIF plugging
mechanism that isn't in Nova, as we can't use out-of-tree code and we won't
accept in-tree VIF code for out-of-tree drivers.  This will get more confusing
as *all* of the Neutron drivers and plugins move out of the tree, as that
constraint becomes essentially arbitrary.

Your issue is one of testing.  Is there any way we could set up a better
testing framework for VIF drivers where Nova interacts with something to
test that the plugging mechanism actually passes traffic?  I don't believe
there's any specific limitation on it being *Neutron* that uses the
plugging interaction.
-- 
Ian.


[openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-10 Thread Joshua Harlow

Hi oslo folks (and others),

I've recently put up a review for some common deprecation patterns:

https://review.openstack.org/#/c/140119/

In summary, this is a common set of patterns that can be used by oslo 
libraries, other libraries... This is different from the versionutils 
one (which is more of a developer-operator deprecation interaction) 
and is more focused on the developer-to-developer deprecation 
interaction (developers, say, using oslo libraries).


Doug had the question about why not just put this out there on pypi with 
a useful name not so strongly connected to oslo, since that review is 
more of a common set of patterns that can be used by libraries outside 
openstack/oslo as well. There weren't many (if any) similar libraries that I 
found (zope.deprecation is probably the closest), and Twisted has 
something similar built in. So, in order to avoid 
creating our own version of zope.deprecation in that review, we might as 
well create a neat name that can be useful for oslo/openstack/elsewhere...


Some ideas that were thrown around on IRC (check 
'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not 
registered):


* debtcollector
* bagman
* deprecate
* deprecation
* baggage
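That availability check is easy to script. A quick sketch (written for Python 3 here, though the thread predates it; `name_is_free` is a hypothetical helper, and note that calling it makes a live HTTP request):

```python
import urllib.error
import urllib.request


def pypi_url(name):
    # Same format string as mentioned above.
    return 'https://pypi.python.org/pypi/%s' % name


def name_is_free(name):
    """Return True if the PyPI project page 404s, i.e. the name is
    likely unregistered. Performs a network request when called."""
    try:
        urllib.request.urlopen(pypi_url(name))
    except urllib.error.HTTPError as err:
        return err.code == 404
    return False
```

For example, `name_is_free('debtcollector')` would have returned True at the time of this thread, since no project of that name was registered yet.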

Any other neat names people can think about?

Or in general any other comments/ideas about providing such a 
deprecation pattern library?


-Josh




Re: [openstack-dev] [oslo] [taskflow] sprint review day

2014-12-10 Thread Doug Hellmann

On Dec 10, 2014, at 2:12 PM, Joshua Harlow harlo...@outlook.com wrote:

 Hi everyone,
 
 The OpenStack oslo team will be hosting a virtual sprint in the
 Freenode IRC channel #openstack-oslo for the taskflow subproject on
 Wednesday 12-17-2014 starting at 16:00 UTC and going for ~8 hours.
 
 The goal of this sprint is to work on any open reviews, documentation
 or any other integration questions, development and so-on, so that we
 can help progress the taskflow subproject forward at a good rate.
 
 Live version of the current documentation is available here:
 
 http://docs.openstack.org/developer/taskflow/
 
 The code itself lives in the openstack/taskflow repository.
 
 http://git.openstack.org/cgit/openstack/taskflow/tree
 
 Please feel free to join if interested, curious, or able.
 
 Much appreciated,
 
 Joshua Harlow
 

Thanks for setting this up, Josh!

This day works for me. We need to make sure a couple of other Oslo cores can 
make it that day for the sprint to really be useful, so everyone please let us 
know if you can make it.

Doug




Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-10 Thread Jay Pipes

On 12/10/2014 02:43 PM, George Shuklin wrote:

I have some small discussion in launchpad: is lack of a quota for
unprivileged user counted as security bug (or at least as a bug)?

If user can create 100500 objects in database via normal API and ops
have no way to restrict this, is it OK for OpenStack or not?


That would be a major security bug. Please do file one and we'll get on 
it immediately.


Thanks,
-jay



Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-10 Thread Jay Pipes

On 12/10/2014 03:26 PM, Joshua Harlow wrote:

Hi oslo folks (and others),

I've recently put up a review for some common deprecation patterns:

https://review.openstack.org/#/c/140119/

In summary, this is a common set of patterns that can be used by oslo
libraries, other libraries... This is different from the versionutils
one (which is more of a developer-operator deprecation interaction)
and is more focused on the developer-to-developer deprecation
interaction (developers, say, using oslo libraries).

Doug had the question about why not just put this out there on pypi with
a useful name not so strongly connected to oslo; since that review is
more of a common set of patterns that can be used by libraries outside
openstack/oslo as well. There weren't many (if any) similar libraries that I
found (zope.deprecation is probably the closest), and Twisted has
something similar built in. So in order to avoid
creating our own version of zope.deprecation in that review we might as
well create a neat name that can be useful for oslo/openstack/elsewhere...

Some ideas that were thrown around on IRC (check
'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not
registered):

* debtcollector


This would be my choice :)

Best,
-jay


* bagman
* deprecate
* deprecation
* baggage

Any other neat names people can think about?

Or in general any other comments/ideas about providing such a
deprecation pattern library?

-Josh




Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-10 Thread Doug Hellmann

On Dec 10, 2014, at 3:26 PM, Joshua Harlow harlo...@outlook.com wrote:

 Hi oslo folks (and others),
 
 I've recently put up a review for some common deprecation patterns:
 
 https://review.openstack.org/#/c/140119/
 
 In summary, this is a common set of patterns that can be used by oslo 
 libraries, other libraries... This is different from the versionutils one 
 (which is more of a developer-operator deprecation interaction) and is more 
  focused on the developer-to-developer deprecation interaction (developers, 
  say, using oslo libraries).
 
 Doug had the question about why not just put this out there on pypi with a 
 useful name not so strongly connected to oslo; since that review is more of a 
 common set of patterns that can be used by libraries outside openstack/oslo 
  as well. There weren't many (if any) similar libraries that I found 
 (zope.deprecation is probably the closest) and twisted has something in-built 
 to it that is something similar. So in order to avoid creating our own 
 version of zope.deprecation in that review we might as well create a neat 
 name that can be useful for oslo/openstack/elsewhere...
 
 Some ideas that were thrown around on IRC (check 
 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not 
 registered):
 
 * debtcollector

+1

I suspect we’ll want a minimal spec for the new lib, but let’s wait and hear 
what some of the other cores think.

Doug

 * bagman
 * deprecate
 * deprecation
 * baggage
 
 Any other neat names people can think about?
 
 Or in general any other comments/ideas about providing such a deprecation 
 pattern library?
 
 -Josh
 
 




Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Matthew Gilliard
 I'll take the
 old content out of https://wiki.openstack.org/wiki/People and leave a
 message directing people where to look.
 Yes, please, let me know if you need help.

Done.

 to link
 directly gerrit IDs to openstack.org profile URL

This may be possible with a little javascript hackery in gerrit - I'll
see what I can do there.

 which IRC nick goes to which person. Does anyone know how to do
 that with the Foundation directory?
 I don't think there's a lookup for that (might be worth logging a
 feature request)

Done: https://bugs.launchpad.net/openstack-org/+bug/1401264

Thanks for your time everyone.


  Matthew



Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Jeremy Stanley
On 2014-12-10 10:39:36 -0800 (-0800), Stefano Maffulli wrote:
[...]
 A better way to solve this would be to link directly gerrit IDs to
 openstack.org profile URL but I have no idea how that would work.
 Gerrit seems only to show full name and email address as a
 fly-over, when you hover on the reviewer/owner name in the UI.

I suppose a Javascript overlay to place REST query calls to the
member system might be an option down the road, but there will be
opposition to that as long as that system is not maintained by the
Infra team. Also, keep in mind that it's possible to have a Gerrit
account without having a Foundation account (though once Gerrit's
authenticating via openstackid.org that should also be tractable).
-- 
Jeremy Stanley



Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-10 Thread Jeremy Stanley
On 2014-12-10 15:34:57 -0500 (-0500), Jay Pipes wrote:
 On 12/10/2014 02:43 PM, George Shuklin wrote:
  I have some small discussion in launchpad: is lack of a quota
  for unprivileged user counted as security bug (or at least as a
  bug)?
  
  If user can create 100500 objects in database via normal API and
   ops have no way to restrict this, is it OK for OpenStack or not?
 
 That would be a major security bug. Please do file one and we'll
 get on it immediately.

I think the bigger question is whether the lack of a quota
implementation for everything a tenant could ever possibly create is
something we should have reported in secret, worked under embargo,
backported to supported stable branches, and announced via
high-profile security advisories once fixed.
-- 
Jeremy Stanley



Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-10 Thread Jay Pipes

On 12/10/2014 04:05 PM, Jeremy Stanley wrote:

On 2014-12-10 15:34:57 -0500 (-0500), Jay Pipes wrote:

On 12/10/2014 02:43 PM, George Shuklin wrote:

I have some small discussion in launchpad: is lack of a quota
for unprivileged user counted as security bug (or at least as a
bug)?

If user can create 100500 objects in database via normal API and
ops have no way to restrict this, is it OK for OpenStack or not?


That would be a major security bug. Please do file one and we'll
get on it immediately.


I think the bigger question is whether the lack of a quota
implementation for everything a tenant could ever possibly create is
something we should have reported in secret, worked under embargo,
backported to supported stable branches, and announced via
high-profile security advisories once fixed.


Sure, fine.

-jay



Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-10 Thread Jeremy Stanley
On 2014-12-10 16:07:35 -0500 (-0500), Jay Pipes wrote:
 On 12/10/2014 04:05 PM, Jeremy Stanley wrote:
  I think the bigger question is whether the lack of a quota
  implementation for everything a tenant could ever possibly
  create is something we should have reported in secret, worked
  under embargo, backported to supported stable branches, and
  announced via high-profile security advisories once fixed.
 
 Sure, fine.

Any tips for how to implement new quota features in a way that the
patches won't violate our stable backport policies?
-- 
Jeremy Stanley



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-10 Thread Russell Bryant
 On Fri, Dec 5, 2014 at 8:23 AM, joehuang joehu...@huawei.com wrote:
 Dear all & TC & PTL,

 In the 40 minutes cross-project summit session “Approaches for
 scaling out”[1], almost 100 people attended the meeting, and the
 conclusion is that cells can not cover the use cases and
 requirements which the OpenStack cascading solution[2] aim to
 address, the background including use cases and requirements is
 also described in the mail.

I must admit that this was not the reaction I came away from the
discussion with.  There was a lot of confusion, and as we started
looking closer, many (or perhaps most) people speaking up in the room
did not agree that the requirements being stated are things we want to
try to satisfy.

On 12/05/2014 06:47 PM, joehuang wrote:
 Hello, Davanum,
 
 Thanks for your reply.
 
 Cells can't meet the demand for the use cases and requirements described in 
 the mail. 

You're right that cells doesn't solve all of the requirements you're
discussing.  Cells addresses scale in a region.  My impression from the
summit session and other discussions is that the scale issues addressed
by cells are considered a priority, while the global API bits are not.

 1. Use cases
 a). Vodafone use case[4](OpenStack summit speech video from 9'02
 to 12'30 ), establishing globally addressable tenants which result
 in efficient services deployment.

Keystone has been working on federated identity.  That part makes sense,
and is already well under way.

 b). Telefonica use case[5], create virtual DC( data center) cross
 multiple physical DCs with seamless experience.

If we're talking about multiple DCs that are effectively local to each
other with high bandwidth and low latency, that's one conversation.  My
impression is that you want to provide a single OpenStack API on top of
globally distributed DCs.  I honestly don't see that as a problem we
should be trying to tackle.  I'd rather continue to focus on making
OpenStack work *really* well split into regions.

I think some people are trying to use cells in a geographically
distributed way, as well.  I'm not sure that's a well understood or
supported thing, though.  Perhaps the folks working on the new version
of cells can comment further.

 c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6,
 #8. For an NFV cloud, it is in its nature that the cloud will be
 distributed but inter-connected across many data centers.

I'm afraid I don't understand this one.  In many conversations about
NFV, I haven't heard this before.


 2.requirements
 a). The operator has multiple sites cloud; each site can use one or
 multiple vendor’s OpenStack distributions.

Is this a technical problem, or is it a business problem of vendors not
wanting to support a mixed environment that you're trying to work around
with a technical solution?

 b). Each site with its own requirements and upgrade schedule while
 maintaining standard OpenStack API
 c). The multi-site cloud must provide unified resource management
 with global Open API exposed, for example create virtual DC cross
 multiple physical DCs with seamless experience.

 Although a proprietary orchestration layer could be developed for
 the multi-site cloud, it would be a proprietary API on the north-bound
 interface. The cloud operators want an ecosystem-friendly global
 open API for the multi-site cloud for global access.

I guess the question is, do we see a global API as something we want
to accomplish.  What you're talking about is huge, and I'm not even sure
how you would expect it to work in some cases (like networking).

In any case, to be as clear as possible, I'm not convinced this is
something we should be working on.  I'm going to need to see much more
overwhelming support for the idea before helping to figure out any
further steps.

-- 
Russell Bryant



[openstack-dev] [TripleO] Bug Squashing Day

2014-12-10 Thread Gregory Haynes
A couple weeks ago we discussed having a bug squash day. AFAICT we all
forgot, and we still have a huge bug backlog. I'd like to propose we
make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone)
a bug squashing day. Hopefully we can add this as an item to our weekly
meeting on Tues. to help remind everyone the day before.

Cheers,
Greg

-- 
  Gregory Haynes
  g...@greghaynes.net



[openstack-dev] [nova] Kilo specs review day

2014-12-10 Thread Michael Still
Hi,

at the design summit we said that we would not approve specifications
after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
had a lot of specifications proposed this cycle (166 to my count), and
haven’t kept up with the review workload.

Therefore, I propose that Friday this week be a specs review day. We
need to burn down the queue of specs needing review, as well as
abandoning those which aren’t getting regular updates based on our
review comments.

I’d appreciate nova-specs-core doing reviews on Friday, but its always
super helpful when non-cores review as well. A +1 for a developer or
operator gives nova-specs-core a good signal of what might be ready to
approve, and that helps us optimize our review time.

For reference, the specs to review may be found at:


https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z

Thanks heaps,
Michael

-- 
Rackspace Australia



[openstack-dev] [neutron] FYI: VPNaaS Sub-team meeting setup...

2014-12-10 Thread Paul Michali (pcm)
I created a Wiki page entry and reserved the IRC openstack-meeting-3 channel 
for Tuesday’s 1500 UTC. I’ll flesh out the meeting page with info on Friday, 
when I return from the Neutron mid-cycle sprint.

Let me know if you have any agenda topics (or edit the page directly).

Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83








Re: [openstack-dev] [TripleO] Bug Squashing Day

2014-12-10 Thread James Polley
How do you find the Australian at the international online meeting?

You don't, they'll find you and make loud pointed remarks about your lack
of understanding of the ramifications of the world being round, the IDL,
and so on.

On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes g...@greghaynes.net
wrote:

 A couple weeks ago we discussed having a bug squash day. AFAICT we all
 forgot, and we still have a huge bug backlog. I'd like to propose we
  make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone)
 a bug squashing day. Hopefully we can add this as an item to our weekly
 meeting on Tues. to help remind everyone the day before.


Luckily next week's meeting is the UTC1900 meeting - so for Europe that's
Tuesday night, and for Christchurch and Sydney that's 9am and 6am
respectively. The meeting we had earlier today was at 9am Wednesday CET
(still time for a reminder) - 10pm/8pm Wednesday in Christchurch/Sydney.

In any case, I've added a note to the agenda (
https://wiki.openstack.org/wiki/Meetings/TripleO#One-off_agenda_items) and
linked it back to the original discussion (
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-12-02-19.06.log.html#l-32
)


 Cheers,
 Greg

 --
   Gregory Haynes
   g...@greghaynes.net




Re: [openstack-dev] [nova] Kilo specs review day

2014-12-10 Thread Joe Gordon
On Wed, Dec 10, 2014 at 1:41 PM, Michael Still mi...@stillhq.com wrote:

 Hi,

 at the design summit we said that we would not approve specifications
 after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
 had a lot of specifications proposed this cycle (166 to my count), and
 haven’t kept up with the review workload.

 Therefore, I propose that Friday this week be a specs review day. We
 need to burn down the queue of specs needing review, as well as
 abandoning those which aren’t getting regular updates based on our
 review comments.

 I’d appreciate nova-specs-core doing reviews on Friday, but its always
 super helpful when non-cores review as well. A +1 for a developer or
 operator gives nova-specs-core a good signal of what might be ready to
 approve, and that helps us optimize our review time.

 For reference, the specs to review may be found at:


 https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z


++, count me in!




 Thanks heaps,
 Michael

 --
 Rackspace Australia




[openstack-dev] Announcing the openstack ansible deployment repo

2014-12-10 Thread Kevin Carter
Hello all,


The RCBOPS team at Rackspace has developed a repository of Ansible roles, 
playbooks, scripts, and libraries to deploy OpenStack inside containers for 
production use. We’ve been running this deployment for a while now,
and at the last OpenStack summit we discussed moving the repo into Stackforge 
as a community project. Today, I’m happy to announce that the 
os-ansible-deployment repo is online within Stackforge. This project is a 
work in progress and we welcome anyone who’s interested in contributing.

This project includes:
  * Ansible playbooks for deployment and orchestration of infrastructure 
resources.
  * Isolation of services using LXC containers.
  * Software deployed from source using python wheels.

Where to find us:
  * IRC: #openstack-ansible
  * Launchpad: https://launchpad.net/openstack-ansible
  * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The 
meeting schedule is not fully formalized and may be subject to change.)
  * Code: https://github.com/stackforge/os-ansible-deployment

Thanks and we hope to see you in the channel.

—

Kevin





Re: [openstack-dev] [TripleO] Bug Squashing Day

2014-12-10 Thread James Polley
My previous email is a long-winded whinging-aussie way of saying that I
think the bug-squashing day is a great idea, and I think Wednesday sounds
like a great day for it.

On Wed, Dec 10, 2014 at 11:01 PM, James Polley j...@jamezpolley.com wrote:

 How do you find the Australian at the international online meeting?

 You don't, they'll find you and make loud pointed remarks about your lack
 of understanding of the ramifications of the world being round, the IDL,
 and so on.

 On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes g...@greghaynes.net
 wrote:

 A couple weeks ago we discussed having a bug squash day. AFAICT we all
 forgot, and we still have a huge bug backlog. I'd like to propose we
  make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone)
 a bug squashing day. Hopefully we can add this as an item to our weekly
 meeting on Tues. to help remind everyone the day before.


 Luckily next week's meeting is the UTC1900 meeting - so for Europe that's
 Tuesday night, and for Christchurch and Sydney that's 9am and 6am
  respectively. The meeting we had earlier today was at 9am Wednesday CET
 (still time for a reminder) - 10pm/8pm Wednesday in Christchurch/Sydney.

  In any case, I've added a note to the agenda (
 https://wiki.openstack.org/wiki/Meetings/TripleO#One-off_agenda_items)
 and linked it back to the original discussion (
 http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-12-02-19.06.log.html#l-32
 )


 Cheers,
 Greg

 --
   Gregory Haynes
   g...@greghaynes.net






Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-10 Thread Sean Dague
On 12/10/2014 04:00 PM, Doug Hellmann wrote:
 
 On Dec 10, 2014, at 3:26 PM, Joshua Harlow harlo...@outlook.com wrote:
 
 Hi oslo folks (and others),

 I've recently put up a review for some common deprecation patterns:

 https://review.openstack.org/#/c/140119/

 In summary, this is a common set of patterns that can be used by oslo 
 libraries, other libraries... This is different from the versionutils one 
 (which is more of a developer-operator deprecation interaction) and is 
  more focused on the developer-to-developer deprecation interaction 
  (developers, say, using oslo libraries).

 Doug had the question about why not just put this out there on pypi with a 
 useful name not so strongly connected to oslo; since that review is more of 
 a common set of patterns that can be used by libraries outside 
  openstack/oslo as well. There weren't many (if any) similar libraries that I found 
 (zope.deprecation is probably the closest) and twisted has something 
 in-built to it that is something similar. So in order to avoid creating our 
 own version of zope.deprecation in that review we might as well create a 
 neat name that can be useful for oslo/openstack/elsewhere...

 Some ideas that were thrown around on IRC (check 
 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not 
 registered):

 * debtcollector
 
 +1
 
 I suspect we’ll want a minimal spec for the new lib, but let’s wait and hear 
 what some of the other cores think.

Not a core, but as someone that will be using it, that seems reasonable.

The biggest issue with the deprecation patterns in projects is that
aggressive cleaning tended to clean out all the deprecations at the
beginning of a cycle... and then all the deprecation-assist code as
well, since it was by then unused. Sad panda.

Having it in a common lib as a bunch of decorators would be great.
Especially if we can work out things like *not* spamming deprecation
load warnings on every worker start.
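Within a single Python process, the stdlib `warnings` filters already support this kind of de-duplication; a sketch of the "warn once" behaviour Sean is asking for (the `deprecated_load` function is a made-up stand-in for a deprecated code path, and this only de-duplicates per process — separate worker processes would each still warn once):

```python
import warnings


def deprecated_load():
    # Stand-in for a deprecated module load or entry point.
    warnings.warn("this code path is deprecated", DeprecationWarning,
                  stacklevel=2)


with warnings.catch_warnings(record=True) as caught:
    # "once": emit each distinct deprecation message a single time,
    # no matter how many times it is triggered in this process.
    warnings.simplefilter("once", DeprecationWarning)
    for _ in range(5):
        deprecated_load()

print(len(caught))  # only the first call produced a warning
```

A shared deprecation library could install such a filter (or manage its own seen-message registry) so that repeated imports of a deprecated module do not spam the logs.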

-Sean

 
 Doug
 
 * bagman
 * deprecate
 * deprecation
 * baggage

 Any other neat names people can think about?

 Or in general any other comments/ideas about providing such a deprecation 
 pattern library?

 -Josh


 
 
 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Openstack-operators] Announcing the openstack ansible deployment repo

2014-12-10 Thread Alex Leonhardt
This is great. FWIW, I'd also suggest looking at SaltStack, which also
supports and is working on features for OpenStack.

Cheers!
Alex

On Wed, 10 Dec 2014 22:18 Kevin Carter kevin.car...@rackspace.com wrote:

 Hello all,


 The RCBOPS team at Rackspace has developed a repository of Ansible roles,
  playbooks, scripts, and libraries to deploy OpenStack inside containers for
 production use. We’ve been running this deployment for a while now,
 and at the last OpenStack summit we discussed moving the repo into
 Stackforge as a community project. Today, I’m happy to announce that the
 os-ansible-deployment repo is online within Stackforge. This project is a
 work in progress and we welcome anyone who’s interested in contributing.

 This project includes:
   * Ansible playbooks for deployment and orchestration of infrastructure
 resources.
   * Isolation of services using LXC containers.
   * Software deployed from source using python wheels.

 Where to find us:
   * IRC: #openstack-ansible
   * Launchpad: https://launchpad.net/openstack-ansible
   * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC.
 (The meeting schedule is not fully formalized and may be subject to change.)
   * Code: https://github.com/stackforge/os-ansible-deployment

 Thanks and we hope to see you in the channel.

 —

 Kevin

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-10 Thread Joshua Harlow

Sean Dague wrote:

On 12/10/2014 04:00 PM, Doug Hellmann wrote:

On Dec 10, 2014, at 3:26 PM, Joshua Harlow harlo...@outlook.com wrote:


Hi oslo folks (and others),

I've recently put up a review for some common deprecation patterns:

https://review.openstack.org/#/c/140119/

In summary, this is a common set of patterns that can be used by oslo libraries and other 
libraries. This is different from the versionutils one (which is more of a 
developer-operator deprecation interaction) and is more focused on the 
developer-to-developer deprecation interaction (developers, say, using oslo libraries).

Doug had the question about why not just put this out there on PyPI with a 
useful name not so strongly connected to oslo, since that review is more of a 
common set of patterns that can be used by libraries outside openstack/oslo as 
well. There weren't many (if any) similar libraries that I found (zope.deprecation is 
probably the closest), and twisted has something built in that is 
similar. So in order to avoid creating our own version of 
zope.deprecation in that review, we might as well create a neat name that can be 
useful for oslo/openstack/elsewhere...

Some ideas that were thrown around on IRC (check 
'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not 
registered):

* debtcollector

+1

I suspect we’ll want a minimal spec for the new lib, but let’s wait and hear 
what some of the other cores think.


Not a core, but as someone that will be using it, that seems reasonable.

The biggest issue with the deprecation patterns in projects is that
aggressive cleaning tended to clean out all the deprecations at the
beginning of a cycle... and then all the deprecation-assist code as well, as
it was unused. Sad panda.

Having it in a common lib as a bunch of decorators would be great.
Especially if we can work out things like *not* spamming deprecation
load warnings on every worker start.


We should be able to adjust the deprecation warnings here.

Although I'd almost want these kinds of warnings to not occur/appear at 
worker start, since at that point the operator can't do anything about 
them... An idea was to have the jenkins/gerrit/zuul logs have these 
deprecation warnings turned on (perhaps in a blinky red/green color) to 
have them appear at development time (since these would be targeted at 
deprecated features that are only really relevant to developers, not 
operators). Once released they can just stay off (which I believe 
is the python default[1]: these are off unless '-Wonce' or '-Wall' is 
passed to the worker/runtime on python startup)...


[1] https://docs.python.org/2/using/cmdline.html#cmdoption-W 
(DeprecationWarning and its descendants are ignored by default since 
Python 2.7+)
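As a rough illustration of the pattern being discussed (a minimal sketch of a deprecation decorator built on the stdlib warnings module — not the API actually proposed in the review, and `old_api`/`new_api` are made-up names), including the default-off behaviour of DeprecationWarning described above:

```python
import functools
import warnings


def deprecated(replacement=None):
    """Emit a DeprecationWarning whenever the wrapped callable is used."""
    def decorator(func):
        msg = "%s is deprecated" % func.__name__
        if replacement:
            msg += "; use %s instead" % replacement

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # stacklevel=2 points the warning at the caller, not the wrapper.
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecated(replacement="new_api")
def old_api(x):
    return x * 2


# DeprecationWarning is ignored by default (Python 2.7+), so it only shows up
# once a filter enables it -- e.g. in CI, or via -W on interpreter startup.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)

print(result)                          # 42
print(caught[0].category.__name__)     # DeprecationWarning
```

A library like the one proposed would mostly be packaging variations of this decorator (for classes, properties, moved modules, renamed kwargs) behind one consistent interface.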


-Josh



-Sean


Doug


* bagman
* deprecate
* deprecation
* baggage

Any other neat names people can think about?

Or in general any other comments/ideas about providing such a deprecation 
pattern library?

-Josh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-10 Thread Kyle Mestery
Folks, just a heads up that we have completed splitting out the services
(FWaaS, LBaaS, and VPNaaS) into separate repositories [1][2][3]. This was
all done in accordance with the spec approved here [4]. Thanks to all
involved, but a special thanks to Doug and Anita, as well as infra. Without
all of their work and help, this wouldn't have been possible!

Neutron and the services repositories are now open for merges again. We're
going to be landing some major L3 agent refactoring across the 4
repositories in the next four days, look for Carl to be leading that work
with the L3 team.

In the meantime, please report any issues you have in launchpad [5] as
bugs, and find people in #openstack-neutron or send an email. We've
verified things come up and all the tempest and API tests for basic neutron
work fine.

In the coming week, we'll be getting all the tests working for the services
repositories. Medium term, we need to also move all the advanced services
tempest tests out of tempest and into the respective repositories. We also
need to beef these tests up considerably, so if you want to help out on a
critical project for Neutron, please let me know.

Thanks!
Kyle

[1] http://git.openstack.org/cgit/openstack/neutron-fwaas
[2] http://git.openstack.org/cgit/openstack/neutron-lbaas
[3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
[4]
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
[5] https://bugs.launchpad.net/neutron
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-10 Thread Edgar Magana
Great Work Team!

Congratulations..

Edgar

From: Kyle Mestery mest...@mestery.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, December 10, 2014 at 3:10 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] Services are now split out and neutron is 
open for commits!

Folks, just a heads up that we have completed splitting out the services 
(FWaaS, LBaaS, and VPNaaS) into separate repositories [1][2][3]. This was all 
done in accordance with the spec approved here [4]. Thanks to all involved, but 
a special thanks to Doug and Anita, as well as infra. Without all of their work 
and help, this wouldn't have been possible!

Neutron and the services repositories are now open for merges again. We're 
going to be landing some major L3 agent refactoring across the 4 repositories 
in the next four days, look for Carl to be leading that work with the L3 team.

In the meantime, please report any issues you have in launchpad [5] as bugs, 
and find people in #openstack-neutron or send an email. We've verified things 
come up and all the tempest and API tests for basic neutron work fine.

In the coming week, we'll be getting all the tests working for the services 
repositories. Medium term, we need to also move all the advanced services 
tempest tests out of tempest and into the respective repositories. We also need 
to beef these tests up considerably, so if you want to help out on a critical 
project for Neutron, please let me know.

Thanks!
Kyle

[1] http://git.openstack.org/cgit/openstack/neutron-fwaas
[2] http://git.openstack.org/cgit/openstack/neutron-lbaas
[3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
[4] 
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
[5] https://bugs.launchpad.net/neutron
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] mid-cycle update

2014-12-10 Thread Kyle Mestery
The Neutron mid-cycle [1] is now complete, I wanted to let everyone know
how it went. Thanks to all who attended, we got a lot done. I admit to
being skeptical of mid-cycles, especially given the cross project meeting a
month back on the topic. But this particular one was very useful. We had
defined tasks to complete, and we made a lot of progress! What we
accomplished was:

1. We finished splitting out neutron advanced services and got things
working again post-split.
2. We had a team refactoring the L3 agent who now have a batch of commits
to merge post services-split.
3. We worked on refactoring the core API and WSGI layer, and produced
multiple specs on this topic and some POC code.
4. We had someone working on IPV6 tempest tests for the gate who made good
progress here.
5. We had multiple people working on plugin decomposition who are close to
getting this working.

Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a
beautiful state.

Looking forward to the rest of Kilo!

Kyle

[1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Moving _conf and _scripts to dashboard

2014-12-10 Thread Thai Q Tran
The way we are structuring our javascripts today is complicated. All of our static javascripts reside in /horizon/static and are imported through _conf.html and _scripts.html. Notice that there are already some panel specific javascripts like: horizon.images.js, horizon.instances.js, horizon.users.js. They do not belong in horizon. They belong in openstack_dashboard because they are specific to a panel.

Why am I raising this issue now? In Angular, we need controllers written in javascript for each panel. As we angularize more and more panels, we need to store them in a way that make sense. To me, it make sense for us to move _conf and _scripts to openstack_dashboard. Or if this is not possible, then provide a mechanism to override them in openstack_dashboard.

Thoughts?
Thai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Moving _conf and _scripts to dashboard

2014-12-10 Thread Richard Jones
+1 to moving application configuration to the application, out of the
library.


 Richard

On Thu Dec 11 2014 at 10:38:20 AM Thai Q Tran tqt...@us.ibm.com wrote:

 The way we are structuring our javascripts today is complicated. All of
 our static javascripts reside in /horizon/static and are imported through
 _conf.html and _scripts.html. Notice that there are already some panel
 specific javascripts like: horizon.images.js, horizon.instances.js,
 horizon.users.js. They do not belong in horizon. They belong in
 openstack_dashboard because they are specific to a panel.

 Why am I raising this issue now? In Angular, we need controllers written
 in javascript for each panel. As we angularize more and more panels, we
 need to store them in a way that make sense. To me, it make sense for us to
 move _conf and _scripts to openstack_dashboard. Or if this is not possible,
 then provide a mechanism to override them in openstack_dashboard.

 Thoughts?
 Thai


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-10 Thread Vishvananda Ishaya

On Dec 4, 2014, at 4:05 PM, Michael Still mi...@stillhq.com wrote:

 One of the things that happens over time is that some of our core
 reviewers move on to other projects. This is a normal and healthy
 thing, especially as nova continues to spin out projects into other
 parts of OpenStack.
 
 However, it is important that our core reviewers be active, as it
 keeps them up to date with the current ways we approach development in
 Nova. I am therefore removing some no longer sufficiently active cores
 from the nova-core group.
 
 I’d like to thank the following people for their contributions over the years:
 
 * cbehrens: Chris Behrens
 * vishvananda: Vishvananda Ishaya

Thank you Michael. I knew this would happen eventually.  I am around and I
still do reviews from time to time, so everyone feel free to ping me on irc
if there are specific reviews that need my historical knowledge!

Vish

 * dan-prince: Dan Prince
 * belliott: Brian Elliott
 * p-draigbrady: Padraig Brady
 
 I’d love to see any of these cores return if they find their available
 time for code reviews increases.
 
 Thanks,
 Michael
 
 -- 
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] mid-cycle update

2014-12-10 Thread Michael Still
On Thu, Dec 11, 2014 at 10:14 AM, Kyle Mestery mest...@mestery.com wrote:
 The Neutron mid-cycle [1] is now complete, I wanted to let everyone know how
 it went. Thanks to all who attended, we got a lot done. I admit to being
 skeptical of mid-cycles, especially given the cross project meeting a month
 back on the topic. But this particular one was very useful. We had defined
 tasks to complete, and we made a lot of progress! What we accomplished was:

 1. We finished splitting out neutron advanced services and got things
 working again post-split.
 2. We had a team refactoring the L3 agent who now have a batch of commits to
 merge post services-split.
 3. We worked on refactoring the core API and WSGI layer, and produced
 multiple specs on this topic and some POC code.
 4. We had someone working on IPV6 tempest tests for the gate who made good
 progress here.
 5. We had multiple people working on plugin decomposition who are close to
 getting this working.

This all sounds like good work. Did you manage to progress the
nova-network to neutron migration tasks as well?

 Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a
 beautiful state.

 Looking forward to the rest of Kilo!

 Kyle

 [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-10 Thread henry hly
On Thu, Dec 11, 2014 at 12:36 AM, Kevin Benton blak...@gmail.com wrote:
 What would the port binding operation do in this case? Just mark the port as
 bound and nothing else?


Also to set the vif type to tap, without caring what the real backend switch is.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing the openstack ansible deployment repo

2014-12-10 Thread John Griffith
On Wed, Dec 10, 2014 at 3:16 PM, Kevin Carter
kevin.car...@rackspace.com wrote:
 Hello all,


 The RCBOPS team at Rackspace has developed a repository of Ansible roles, 
 playbooks, scripts, and libraries to deploy OpenStack inside containers for 
 production use. We’ve been running this deployment for a while now,
 and at the last OpenStack summit we discussed moving the repo into Stackforge 
 as a community project. Today, I’m happy to announce that the 
 os-ansible-deployment repo is online within Stackforge. This project is a 
 work in progress and we welcome anyone who’s interested in contributing.

 This project includes:
   * Ansible playbooks for deployment and orchestration of infrastructure 
 resources.
   * Isolation of services using LXC containers.
   * Software deployed from source using python wheels.

 Where to find us:
   * IRC: #openstack-ansible
   * Launchpad: https://launchpad.net/openstack-ansible
   * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The 
 meeting schedule is not fully formalized and may be subject to change.)
   * Code: https://github.com/stackforge/os-ansible-deployment

 Thanks and we hope to see you in the channel.

 —

 Kevin


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Kevin,

Really cool!  I have some questions though, I've been trying to do
this exact sort of thing on my own with Cinder but can't get iscsi
daemon running in a container.  In fact I run into a few weird
networking problems that I haven't sorted, but the storage piece seems
to be a big stumbling point for me even when I cut some of the extra
stuff I was trying to do with devstack out of it.

Anyway, are you saying that this enables running the reference LVM
impl c-vol service in a container as well?  I'd love to hear/see more
and play around with this.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo specs review day

2014-12-10 Thread Kenichi Oomichi


On 2014/12/11 6:41, Michael Still wrote:
 Hi,

 at the design summit we said that we would not approve specifications
 after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
 had a lot of specifications proposed this cycle (166 to my count), and
 haven’t kept up with the review workload.

 Therefore, I propose that Friday this week be a specs review day. We
 need to burn down the queue of specs needing review, as well as
 abandoning those which aren’t getting regular updates based on our
 review comments.

 I’d appreciate nova-specs-core doing reviews on Friday, but its always
 super helpful when non-cores review as well. A +1 for a developer or
 operator gives nova-specs-core a good signal of what might be ready to
 approve, and that helps us optimize our review time.

 For reference, the specs to review may be found at:

  
 https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z

+1 for the review day, and the list is very long.

Thanks
Ken'ichi Ohmichi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo specs review day

2014-12-10 Thread Stefano Maffulli
On 12/10/2014 01:41 PM, Michael Still wrote:
 at the design summit we said that we would not approve specifications
 after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
 had a lot of specifications proposed this cycle (166 to my count), and
 haven’t kept up with the review workload.

Great idea, mikal, thanks for raising this topic. I have asked the
Product and Win The Enterprise working groups to help out, too.

cheers,
stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-10 Thread henry hly
On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
 wrote:


 So the problem of Nova review bandwidth is a constant problem across all
 areas of the code. We need to solve this problem for the team as a whole
 in a much broader fashion than just for people writing VIF drivers. The
 VIF drivers are really small pieces of code that should be straightforward
  to review and get merged in any release cycle in which they are proposed.
 I think we need to make sure that we focus our energy on doing this and
 not ignoring the problem by breaking stuff off out of tree.


 The problem is that we effectively prevent running an out of tree Neutron
 driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism
  that isn't in Nova, as we can't use out-of-tree code and we won't accept
  in-tree code for out-of-tree drivers.

The question is, do we really need such flexibility for so many nova vif types?

I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER are good examples:
nova shouldn't know too many details about the switch backend; it should
only care about the VIF itself. How the VIF is plugged into the switch
belongs to the Neutron half.

However, I'm not saying we should move the existing vif drivers out; those
open backends have been used widely. But from now on the tap and vhostuser
modes should be encouraged: one common vif driver for many long-tail
backends.
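A tiny sketch of the dispatch model being argued for here (hypothetical code, not Nova's actual VIF driver interface): the compute side keys only off the generic VIF type that port binding reports, so any number of backend switches can share the same plugging code.

```python
def plug_tap(vif):
    # Create a plain tap device; the Neutron agent -- whatever switch backend
    # it drives -- is responsible for wiring it up afterwards.
    return "plugged tap device %s" % vif["devname"]


def plug_vhostuser(vif):
    return "plugged vhost-user socket %s" % vif["socket"]


# Compute-side handlers keyed only by the generic VIF type from port binding;
# adding a new Neutron backend requires no new entry here.
VIF_HANDLERS = {
    "tap": plug_tap,
    "vhostuser": plug_vhostuser,
}


def plug(vif):
    try:
        handler = VIF_HANDLERS[vif["type"]]
    except KeyError:
        raise ValueError("unsupported vif type: %s" % vif["type"])
    return handler(vif)


print(plug({"type": "tap", "devname": "tap0"}))
```

The point of the sketch is that the long tail of switch backends lives entirely on the Neutron side; the compute node's table stays small and stable.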

Best Regards,
Henry

 This will get more confusing as *all* of
 the Neutron drivers and plugins move out of the tree, as that constraint
 becomes essentially arbitrary.

 Your issue is one of testing.  Is there any way we could set up a better
 testing framework for VIF drivers where Nova interacts with something to
 test the plugging mechanism actually passes traffic?  I don't believe
 there's any specific limitation on it being *Neutron* that uses the plugging
 interaction.
 --
 Ian.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Sean Roberts
I re-noticed that the free-form "projects involved" field doesn't show up on 
the personal wiki page. Some people like me do more than the normal 
stuff. It would be nice to add that free-form field, so others know what we 
unusual folks do too, for elections and such. 

~sean

On Dec 10, 2014, at 1:01 PM, Matthew Gilliard matthew.gilli...@gmail.com 
wrote:

 I'll take the
 old content out of https://wiki.openstack.org/wiki/People and leave a
 message directing people where to look.
 Yes, please, let me know if you need help.
 
 Done.
 
 to link
 directly gerrit IDs to openstack.org profile URL
 
 This may be possible with a little javascript hackery in gerrit - I'll
 see what I can do there.
 
 which IRC nick goes to which person. Does anyone know how to do
 that with the Foundation directory?
 I don't think there's a lookup for that (might be worth logging a
 feature request)
 
 Done: https://bugs.launchpad.net/openstack-org/+bug/1401264
 
 Thanks for your time everyone.
 
 
  Matthew
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Some questions about Ironic service

2014-12-10 Thread xianchaobo
Hi, Fox Kevin M,

Thanks for your help.
Also, I want to know whether these features will be implemented in Ironic.
Do we have a plan to implement them?

Thanks
Xianchaobo



-----Original Message-----
From: openstack-dev-requ...@lists.openstack.org 
[mailto:openstack-dev-requ...@lists.openstack.org] 
Sent: December 9, 2014 18:36
To: openstack-dev@lists.openstack.org
Subject: OpenStack-dev Digest, Vol 32, Issue 25

Send OpenStack-dev mailing list submissions to
openstack-dev@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
or, via email, send a message with subject or body 'help' to
openstack-dev-requ...@lists.openstack.org

You can reach the person managing the list at
openstack-dev-ow...@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than Re: Contents of OpenStack-dev digest...


Today's Topics:

   1.  [Mistral] Query on creating multiple resources (Sushma Korati)
   2. Re: [neutron] Changes to the core team
  (trinath.soman...@freescale.com)
   3. [Neutron][OVS] ovs-ofctl-to-python blueprint (YAMAMOTO Takashi)
   4. Re: [api] Using query string or request body to   pass
  parameter (Alex Xu)
   5. [Ironic] Some questions about Ironic service (xianchaobo)
   6. [Ironic] How to get past pxelinux.0 bootloader? (Peeyush Gupta)
   7. Re: [neutron] Changes to the core team (Gariganti, Sudhakar Babu)
   8. Re: [neutron][lbaas] Shared Objects in LBaaS - Use Cases that
  led us to adopt this. (Samuel Bercovici)
   9. [Mistral] Action context passed to all action executions by
  default (W Chan)
  10. Cross-Project meeting, Tue December 9th, 21:00 UTC
  (Thierry Carrez)
  11. Re: [Mistral] Query on creating multiple resources
  (Renat Akhmerov)
  12. Re: [Mistral] Query on creating multiple resources
  (Renat Akhmerov)
  13. Re: [Mistral] Event Subscription (Renat Akhmerov)
  14. Re: [Mistral] Action context passed to all action executions
  by default (Renat Akhmerov)
  15. Re: Cross-Project meeting, Tue December 9th, 21:00 UTC (joehuang)
  16. Re: [Ironic] Some questions about Ironic service (Fox, Kevin M)
  17. [Nova][Neutron] out-of-tree plugin for Mech   driver/L2 and
  vif_driver (Maxime Leroy)
  18. Re: [Ironic] How to get past pxelinux.0 bootloader? (Fox, Kevin M)
  19. Re: [Ironic] Fuel agent proposal (Roman Prykhodchenko)
  20. Re: [Ironic] How to get past pxelinux.0 bootloader?
  (Peeyush Gupta)
  21. Re: Cross-Project meeting, Tue December 9th, 21:00 UTC
  (Thierry Carrez)
   22.  [neutron] mid-cycle hot reviews (Miguel Ángel Ajo)
  23. Re: [horizon] REST and Django (Tihomir Trifonov)


--

Message: 1
Date: Tue, 9 Dec 2014 05:57:35 +
From: Sushma Korati sushma_kor...@persistent.com
To: gokrokvertsk...@mirantis.com gokrokvertsk...@mirantis.com,
zbit...@redhat.com zbit...@redhat.com
Cc: openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org
Subject: [openstack-dev]  [Mistral] Query on creating multiple
resources
Message-ID: 1418105060569.62...@persistent.com
Content-Type: text/plain; charset=iso-8859-1


Hi,


Thank you guys.


Yes I am able to do this with heat, but I faced issues while trying the same 
with mistral.

As suggested, I will try with the latest mistral branch. Thank you once again.


Regards,

Sushma





From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Tuesday, December 09, 2014 6:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources

Hi Sushma,

Did you explore Heat templates? As Zane mentioned you can do this via Heat 
template without writing any workflows.
Do you have any specific use cases which you can't solve with Heat template?

Create VM workflow was a demo example. Mistral potentially can be used by Heat 
or other orchestration tools to do actual interaction with API, but for user it 
might be easier to use Heat functionality.

Thanks,
Georgy

On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin 
nmakhot...@mirantis.com wrote:
Hi, Sushma!
Can we create multiple resources using a single task, like multiple keypairs or 
security-groups or networks etc?

Yes, we can. This feature is in development now and is considered 
experimental - 
https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections

Just clone the last master branch from mistral.

You can specify the for-each task property and provide an array of data to your 
workflow:

 

version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
type: direct
input:
  - array_with_names_and_descriptions

tasks:
  create_secgroups:

for-each:

  data: 
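The digest cuts the example off at this point. Purely for illustration, a hedged sketch of how such a completed for-each workflow might look — the for-each syntax follows the experimental blueprint's description, and the action name and its parameters are assumptions, not verified against the actual implementation:

```yaml
version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
    type: direct
    input:
      - array_with_names_and_descriptions

    tasks:
      create_secgroups:
        # Hypothetical: iterate over each item of the input array, running
        # the (assumed) action once per element.
        for-each:
          data: $.array_with_names_and_descriptions
        action: nova.security_groups_create name={$.data.name} description={$.data.description}
```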

[openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-10 Thread Thai Q Tran
Sorry for duplicate mail, forgot the subject.

-Thai Q Tran/Silicon Valley/IBM wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
From: Thai Q Tran/Silicon Valley/IBM
Date: 12/10/2014 03:37PM
Subject: Moving _conf and _scripts to dashboard

The way we are structuring our javascripts today is complicated. All of our static javascripts reside in /horizon/static and are imported through _conf.html and _scripts.html. Notice that there are already some panel specific javascripts like: horizon.images.js, horizon.instances.js, horizon.users.js. They do not belong in horizon. They belong in openstack_dashboard because they are specific to a panel.

Why am I raising this issue now? In Angular, we need controllers written in javascript for each panel. As we angularize more and more panels, we need to store them in a way that make sense. To me, it make sense for us to move _conf and _scripts to openstack_dashboard. Or if this is not possible, then provide a mechanism to override them in openstack_dashboard.

Thoughts?
Thai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing the openstack ansible deployment repo

2014-12-10 Thread Kevin Carter
Hey John,

We too ran into the same issue with iSCSI, and after a lot of digging and 
chasing red herrings we found that the cinder-volume service wasn’t the cause 
of the issues; it was “iscsiadm login” that caused the problem, and it was 
happening from within the nova-compute container. If we weren’t running cinder 
there were no issues with nova-compute running VMs from within a container; 
however, once we attempted to attach a volume to a running VM, iscsiadm would 
simply refuse to initiate. We followed up on an existing upstream bug regarding 
the issue, but it’s gotten little traction at present: 
“https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855”. In testing we’ve 
found that if we give the compute container the raw device instead of a 
bridge on a veth-type interface we didn’t see the same issues; however, doing 
that was less than ideal, so we opted to simply leave compute nodes as physical 
hosts. From within the playbooks we can set any service to run on bare metal as 
the “container” type, so that’s what we’ve done with nova-compute, but hopefully 
sometime soon-ish we’ll be able to move nova-compute back into a container, 
assuming the upstream bugs are fixed.

I’d love to chat some more on this or anything else, hit me up anytime; I’m 
@cloudnull in the channel.

—

Kevin


 On Dec 10, 2014, at 19:01, John Griffith john.griffi...@gmail.com wrote:
 
 On Wed, Dec 10, 2014 at 3:16 PM, Kevin Carter
 kevin.car...@rackspace.com wrote:
 Hello all,
 
 
 The RCBOPS team at Rackspace has developed a repository of Ansible roles, 
 playbooks, scripts, and libraries to deploy OpenStack inside containers for 
 production use. We’ve been running this deployment for a while now,
 and at the last OpenStack summit we discussed moving the repo into 
 Stackforge as a community project. Today, I’m happy to announce that the 
 os-ansible-deployment repo is online within Stackforge. This project is a 
 work in progress and we welcome anyone who’s interested in contributing.
 
 This project includes:
  * Ansible playbooks for deployment and orchestration of infrastructure 
 resources.
  * Isolation of services using LXC containers.
  * Software deployed from source using python wheels.
 
 Where to find us:
  * IRC: #openstack-ansible
  * Launchpad: https://launchpad.net/openstack-ansible
  * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The 
 meeting schedule is not fully formalized and may be subject to change.)
  * Code: https://github.com/stackforge/os-ansible-deployment
 
 Thanks and we hope to see you in the channel.
 
 —
 
 Kevin
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Hey Kevin,
 
 Really cool! I have some questions, though. I've been trying to do
 this exact sort of thing on my own with Cinder but can't get the iSCSI
 daemon running in a container. In fact, I run into a few weird
 networking problems that I haven't sorted out, but the storage piece seems
 to be a big stumbling block for me even when I cut out some of the extra
 stuff I was trying to do with devstack.
 
 Anyway, are you saying that this enables running the reference LVM
 c-vol implementation in a container as well? I'd love to hear/see more
 and play around with this.
 
 Thanks,
 John
 



signature.asc
Description: Message signed with OpenPGP using GPGMail


[openstack-dev] [Neutron] XenAPI questions

2014-12-10 Thread YAMAMOTO Takashi
hi,

i have questions for XenAPI folks:

- what's the status of XenAPI support in neutron?
- is there any CI covering it?  i want to look at logs.
- is it possible to write a small program which runs with the xen
  rootwrap and proxies OpenFlow channel between domains?
  (cf. https://review.openstack.org/#/c/138980/)
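
As a rough illustration of what such a proxy would do (a generic sketch only, not tied to the xen rootwrap or to the patch above; the relay logic and addresses are assumptions of mine), a bidirectional TCP relay for an OpenFlow-style channel can be quite small:

```python
import select
import socket

def relay(listen_addr, upstream_addr):
    """Accept one client and shuttle bytes both ways until either side closes."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(listen_addr)
    lsock.listen(1)
    client, _ = lsock.accept()
    upstream = socket.create_connection(upstream_addr)
    peers = {client: upstream, upstream: client}  # map each end to the other
    try:
        while True:
            readable, _, _ = select.select(list(peers), [], [])
            for sock in readable:
                data = sock.recv(4096)
                if not data:          # one side closed: tear everything down
                    return
                peers[sock].sendall(data)
    finally:
        for s in (client, upstream, lsock):
            s.close()
```

A real helper would additionally need to run under the rootwrap's restricted environment and handle multiple connections, but the core of the job is just this byte-shuttling loop.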

thank you.

YAMAMOTO Takashi



[openstack-dev] [Murano] Oslo.messaging error

2014-12-10 Thread raghavendra.lad



Hi Team,

I am installing Murano on an Ubuntu 14.04 Juno setup, and when I try to launch 
murano-api I encounter the error below. Please assist.

I am using the Murano guide linked below:
https://murano.readthedocs.org/en/latest/install/manual.html


I am trying to execute section 7:

1. Open a new console and launch the Murano API. A separate terminal is required 
because the console will be locked by a running process.
2. $ cd ~/murano/murano
3. $ tox -e venv -- murano-api \
4.      --config-file ./etc/murano/murano.conf


I am getting the error below; I have a Juno OpenStack ready and am trying to 
integrate Murano


2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-] 
neutron.endpoint_type = publicURL log_opt_values 
/home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048
2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] 
neutron.insecure = False log_opt_values 
/home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048
2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] 
log_opt_values 
/home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050
2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on controller:5672
2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on controller:5672
2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting up on 
http://0.0.0.0:8082/
2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating 
statistic information. update_stats 
/home/ubuntu/murano/murano/murano/common/statservice.py:57
2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats object: 
murano.api.v1.request_statistics.RequestStatisticsCollection object at 
0x7fada950a510 update_stats 
/home/ubuntu/murano/murano/murano/common/statservice.py:58
2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats: 
Requests:0  Errors: 0 Ave.Res.Time 0.
Per tenant: {} update_stats 
/home/ubuntu/murano/murano/murano/common/statservice.py:64
2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL server 
mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _check_effective_sql_mode 
/home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509
2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server controller:5672 closed the connection. Check login credentials: Socket 
closed
2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server controller:5672 closed the connection. Check login credentials: Socket 
closed
2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server controller:5672 closed the connection. Check login credentials: Socket 
closed
2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server controller:5672 closed the connection. Check login credentials: Socket 
closed


Warm Regards,
Raghavendra Lad




This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited. Where allowed by local law, electronic 
communications with Accenture and its affiliates, including e-mail and instant 
messaging (including content), may be scanned by our systems for the purposes 
of information security and assessment of internal compliance with Accenture 
policy.
__

www.accenture.com


Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-10 Thread Doug Wiegley
Hi all,

I’d like to echo the thanks to all involved, and thanks for the patience during 
this period of transition.

And a logistical note: if you have any outstanding reviews against the now 
missing files/directories (db/{loadbalancer,firewall,vpn}, services/, or 
tests/unit/services), you must re-submit your review against the new repos.  
Existing neutron reviews for service code will be summarily abandoned in the 
near future.

Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews.  I’ll have that 
branch merged in the morning, and ping in channel when it’s ready for 
submissions.

Finally, if any tempest lovers want to take a crack at splitting the tempest 
runs into four, perhaps using salv’s reviews of splitting them in two as a 
guide, and then creating jenkins jobs, we need some help getting those going.  
Please ping me directly (IRC: dougwig).

Thanks,
doug


From: Kyle Mestery <mest...@mestery.com>
Reply-To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: Wednesday, December 10, 2014 at 4:10 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [neutron] Services are now split out and neutron is 
open for commits!

Folks, just a heads up that we have completed splitting out the services 
(FWaaS, LBaaS, and VPNaaS) into separate repositories [1] [2] [3]. This was all 
done in accordance with the spec approved here [4]. Thanks to all involved, but 
a special thanks to Doug and Anita, as well as infra. Without all of their work 
and help, this wouldn't have been possible!

Neutron and the services repositories are now open for merges again. We're 
going to be landing some major L3 agent refactoring across the 4 repositories 
in the next four days, look for Carl to be leading that work with the L3 team.

In the meantime, please report any issues you have in launchpad [5] as bugs, 
and find people in #openstack-neutron or send an email. We've verified things 
come up and all the tempest and API tests for basic neutron work fine.

In the coming week, we'll be getting all the tests working for the services 
repositories. Medium term, we need to also move all the advanced services 
tempest tests out of tempest and into the respective repositories. We also need 
to beef these tests up considerably, so if you want to help out on a critical 
project for Neutron, please let me know.

Thanks!
Kyle

[1] http://git.openstack.org/cgit/openstack/neutron-fwaas
[2] http://git.openstack.org/cgit/openstack/neutron-lbaas
[3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
[4] 
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
[5] https://bugs.launchpad.net/neutron


Re: [openstack-dev] [Murano] Oslo.messaging error

2014-12-10 Thread Georgy Okrokvertskhov
Hi,

Could you please check what is in your murano.conf file? There are two
sections with RabbitMQ configuration. Both of them should have the proper IP
address of the RabbitMQ service, as well as the proper user/password and vhost.
Also, you could check whether RabbitMQ is actually up and running and listening
on this port/IP; the output of the netstat -ltpn command will help you check
whether a process is listening on port 5672.
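
For reference, a minimal sketch of the two RabbitMQ-related sections in a
Juno-era murano.conf (the option names below are from memory and the host and
credentials are placeholders, so verify them against the sample config shipped
with your checkout):

```ini
[DEFAULT]
# oslo.messaging transport used by the API/engine
rabbit_host = 192.168.x.x        ; real IP of the RabbitMQ host
rabbit_port = 5672
rabbit_userid = murano
rabbit_password = MURANO_RABBIT_PASS
rabbit_virtual_host = /

[rabbitmq]
; separate broker settings used for communication with murano-agent
host = 192.168.x.x
port = 5672
login = murano
password = MURANO_RABBIT_PASS
virt_host = /
```

A "Socket closed" immediately after "Connecting to AMQP server", as in the log
above, is typically the broker rejecting the credentials or vhost rather than a
network problem.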

Hope this helps,
Gosha




-- 
Georgy Okrokvertskhov

[openstack-dev] [diskimage-builder] ramdisk-image-create fails for creating Centos/rhel images.

2014-12-10 Thread Harshada Kakad
Hi All,

I am trying to build a CentOS/RHEL image for baremetal deployment using
ramdisk-image-create. My build host is CentOS release 6.5 (Final).
It fails, saying no busybox package is available.

Here are the logs for more information; can anyone please help me with this?

Running ramdisk-image-create for centos7 using the command below, it fails to
install busybox. See the attached output for more details.

sudo bin/ramdisk-image-create -a amd64 centos7 deploy-ironic -o
/tmp/deploy-ramdisk-centos7

Total
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 12:dhcp-libs-4.2.5-27.el7.centos.2.x86_64
  Updating   : 12:dhcp-common-4.2.5-27.el7.centos.2.x86_64
  Updating   : 12:dhclient-4.2.5-27.el7.centos.2.x86_64
  Cleanup: 12:dhclient-4.2.5-27.el7.centos.x86_64
  Cleanup: 12:dhcp-common-4.2.5-27.el7.centos.x86_64
  Cleanup: 12:dhcp-libs-4.2.5-27.el7.centos.x86_64
  Verifying  : 12:dhcp-libs-4.2.5-27.el7.centos.2.x86_64
  Verifying  : 12:dhclient-4.2.5-27.el7.centos.2.x86_64
  Verifying  : 12:dhcp-common-4.2.5-27.el7.centos.2.x86_64
  Verifying  : 12:dhclient-4.2.5-27.el7.centos.x86_64
  Verifying  : 12:dhcp-libs-4.2.5-27.el7.centos.x86_64
  Verifying  : 12:dhcp-common-4.2.5-27.el7.centos.x86_64

Updated:
  dhclient.x86_64 12:4.2.5-27.el7.centos.2

Dependency Updated:
  dhcp-common.x86_64 12:4.2.5-27.el7.centos.2
   dhcp-libs.x86_64 12:4.2.5-27.e

Complete!
dib-run-parts Thu Oct 9 09:19:08 UTC 2014 20-install-dhcp-client completed
dib-run-parts Thu Oct 9 09:19:08 UTC 2014 Running
/tmp/in_target.d/install.d/50-store-build-settings
dib-run-parts Thu Oct 9 09:19:08 UTC 2014 50-store-build-settings completed
dib-run-parts Thu Oct 9 09:19:08 UTC 2014 Running
/tmp/in_target.d/install.d/52-ramdisk-install-busybox
Running install-packages install. Package list: busybox
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: dallas.tx.mirror.xygenhosting.com
 * epel: mirror.its.dal.ca
 * extras: mirror.cs.vt.edu
 * updates: mirrors.advancedhosters.com
No package busybox available.
Error: Nothing to do
-

Running ramdisk-image-create for rhel  and rhel7 using below commands.
sudo bin/ramdisk-image-create -a amd64 rhel deploy-ironic -o
/tmp/deploy-ramdisk-rhel

sudo bin/ramdisk-image-create -a amd64 rhel7 deploy-ironic -o
/tmp/deploy-ramdisk-rhel


Here is the output.

* subject: serialNumber=dmox-zPOCChZGgYyWu9xg8JTHSbjFg9P; C=US;
ST=North Carolina; L=Raleigh; O=Red Hat Inc; OU=Web Operations; CN=*.redhat.com
* start date: 2013-09-09 18:07:24 GMT
* expire date: 2015-12-12 02:08:43 GMT
* subjectAltName: rhn.redhat.com matched
* issuer: C=US; O=GeoTrust, Inc.; CN=GeoTrust SSL CA
* SSL certificate verify ok.
> GET /rhel-guest-image-6.5-20140603.0.x86_64.qcow2 HTTP/1.0
> User-Agent: curl/7.35.0
> Host: rhn.redhat.com
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Thu, 09 Oct 2014 09:40:48 GMT
* Server Apache is not blacklisted
< Server: Apache
< X-Frame-Options: SAMEORIGIN
< Set-Cookie:
pxt-session-cookie=4683981779x5ee55672220e170244faf07ecc0e558b; path=/;
domain=rhn.redhat.com; expires=Fri, 10-Oct-2014 09:40:48 GMT; secure
< Pragma: no-cache
< Cache-control: no-cache
< Content-Length: 50884
< X-Trace: 1B6697A2F0D89CF2871A25B9CC6CA3D7A60410E1666F0EAEAB8E81F1FD
< Connection: close
< Content-Type: text/html; charset=UTF-8
< Expires: Thu, 09 Oct 2014 09:40:48 GMT
<
{ [data not shown]
100 50884  100 50884    0     0  48390      0  0:00:01  0:00:01 --:--:-- 48390
* Closing connection 1
* SSLv3, TLS alert, Client hello (1):
} [data not shown]
Server returned an unexpected response code. [404]


-- 
Regards,
Harshada Kakad
Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
Mobile-9689187388
Email-Id : harshada.ka...@izeltech.com
website : www.izeltech.com

-- 
Disclaimer
The information contained in this e-mail and any attachment(s) to this 
message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information of Izel 
Technologies Pvt. Ltd. If you are not the intended recipient, you are 
notified that any review, use, any form of reproduction, dissemination, 
copying, disclosure, modification, distribution and/or publication of this 
e-mail message, contents or its attachment(s) is strictly prohibited and 
you are requested to notify us the same immediately by e-mail and delete 
this mail immediately. Izel Technologies Pvt. Ltd accepts no liability for 
virus infected e-mail or errors or omissions or consequences which may 
arise as a result of this e-mail transmission.
End of Disclaimer
___

Re: [openstack-dev] [Murano] Oslo.messaging error

2014-12-10 Thread raghavendra.lad



Hi Team,

I am installing Murano on an Ubuntu 14.04 Juno setup, and when I try to launch 
murano-api I encounter the error below. Please assist.

I am using the Murano guide linked below:
https://murano.readthedocs.org/en/latest/install/manual.html


I am trying to execute section 7:

1. Open a new console and launch the Murano API. A separate terminal is required 
because the console will be locked by a running process.
2. $ cd ~/murano/murano
3. $ tox -e venv -- murano-api \
4.      --config-file ./etc/murano/murano.conf


I am getting the error below; I have a Juno OpenStack ready and am trying to 
integrate Murano


2014-12-11 12:28:03.676 9524 INFO eventlet.wsgi [-] (9524) wsgi starting up on 
http://0.0.0.0:8082/
2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Updating 
statistic information. update_stats 
/root/murano/murano/murano/common/statservice.py:57
2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats object: 
murano.api.v1.request_statistics.RequestStatisticsCollection object at 
0x7ff72837d410 update_stats /root/murano/murano/murano/common/statservice.py:58
2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats: 
Requests:0  Errors: 0 Ave.Res.Time 0.
Per tenant: {} update_stats /root/murano/murano/murano/common/statservice.py:64
2014-12-11 12:28:03.692 9524 DEBUG oslo.db.sqlalchemy.session [-] MySQL server 
mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _check_effective_sql_mode 
/root/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509
2014-12-11 12:28:06.721 9524 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server 192.168.x.x:5672 closed the connection. Check login credentials: Socket 
closed

Warm Regards,
Raghavendra Lad



