Re: [openstack-dev] [nova] issues with fakelibvirt in tests

2015-02-12 Thread Daniel P. Berrange
On Thu, Feb 12, 2015 at 12:32:10PM -0500, Sean Dague wrote:
 Looking recently at the following failure -
 http://logs.openstack.org/04/154804/1/gate/gate-nova-python27/1fe94bf/console.html#_2015-02-12_15_02_19_593
 
 It appears that the fakelibvirt fixture is potentially causing races in
 tests because after the first test in a worker starts a libvirt
 connection, the libvirt python library spawns a thread which keeps
 running in a loop for the duration of the tests. This is happening
 regardless of whether or not the test in question is using libvirt (as
 in this case). Having threads thumping around in the background means
 that doing things like testing for when sleep is called can fail because
 libvirt's thread is getting in the way.

libvirt-python shouldn't be spawning any threads itself - any threads
will have been spawned by Nova.

 
 What's the proper method of completely tearing down all the libvirt
 resources so that when this fixture exits it will actually do that
 correctly -
 https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/fakelibvirt.py#L1181-L1202
 and not impact unrelated tests?

Most likely the thread will have been created when the libvirt driver
is setup in the tests.

eg nova.virt.libvirt.driver.LibvirtDriver.init_host() method will
call nova.virt.libvirt.host.Host.initialize() which in turn spawns
a background *native* thread to receive event notifications from
libvirt.

Assuming this is indeed the root cause of the thread you see, I'd
say we want to arrange for the nova.virt.libvirt.host.Host._init_events
method to be a no-op for the tests. This async events thread is not
something any of the tests should need to have around in general.
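
Something along these lines ought to do it (a rough sketch only, using the
fixtures library the unit tests already rely on, and with a made-up fixture
name):

import fixtures


class NoLibvirtEventsFixture(fixtures.Fixture):
    """Stop the libvirt Host object from spawning its native event thread."""

    def setUp(self):
        super(NoLibvirtEventsFixture, self).setUp()
        # Host._init_events() is what starts the background thread when
        # LibvirtDriver.init_host() runs; make it a no-op for unit tests.
        self.useFixture(fixtures.MonkeyPatch(
            'nova.virt.libvirt.host.Host._init_events',
            lambda self: None))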

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][PTLs] Stop releasing libraries/clients without capping stable global requirements

2015-02-12 Thread Joe Gordon
On Wed, Feb 11, 2015 at 7:53 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Tue, Feb 10, 2015, at 07:12 PM, Joe Gordon wrote:
  Hi,
 
  As you know, a few of us have been spending way too much time digging
  stable/juno out of the ditch it's currently in. And just when we thought
  we were in the clear, a new library was released without a requirements
  cap in stable global-requirements and broke stable/juno grenade. Every
  time this happens we risk breaking everything. While there is a good
  long-term fix in progress (pin all of stable/juno,
  https://review.openstack.org/#/c/147451/), this will take a bit of time
  to get right and land.
 
  The good news is there is a nice, easy interim solution. Before releasing
  a new library, go to the stable/juno and stable/icehouse global
  requirements and check whether $library has a version cap; if not, add
  one. Once that lands, go ahead and release your library. For example:
  https://review.openstack.org/#/c/154715/2
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 The Oslo team has several libraries we're holding for release until this
 is resolved. We do have projects blocked on those releases, though, so
 if Joe asks you for help with anything related to stable branch
 maintenance, please make it a priority so we can get the caps in place.


We have landed the patch to cap all stable/juno requirements that are
installed in a tempest-dsvm-neutron-full job. So we should be out of the
woods for now (unless you are a project that uses one of the still uncapped
requirements).

https://review.openstack.org/#/c/147451/


Implications:

* Until Dean's patches to install CLI tools (python-*clients) inside of
venvs land, we are not testing master clients with stable/juno.
* An indirect dependency can change and still break us, but hopefully this
won't happen.


 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cache for packages on master node

2015-02-12 Thread Andrew Woodward
On Thu, Feb 12, 2015 at 3:59 AM, Tomasz Napierala tnapier...@mirantis.com
wrote:


  On 10 Feb 2015, at 23:02, Andrew Woodward xar...@gmail.com wrote:
 
   previously we used squid in 3.0 and before. The main problem is that the
  deployment would proceed even if not all the packages were cached or
  even available on the remote. This often led to broken deployments that
  were hard to debug and a waste of a lot of time. This _MUST_ be resolved
  or we will re-introduce the horrible workflow that we had placed all the
  packages on the system to avoid in the first place.

 Anyway, we need to ensure our QA is run against a fresh mirror; that would
 prevent a lot of problems. We are also thinking about how the situation in
 the field can differ from our labs and QA infra - there might be
 differences indeed.

  I think we need to add a requirement that we be able to:
  a) pre-populate the cacher
  b) not start the deployment until we either have every package in the
 cache (eww) or at least know every package is currently reachable
 (or allow the user to select either as a deployment criterion)

 This sounds to me like creating a local mirror ;) We don’t want to do this.
 We are thinking about a mirror verification tool; it was mentioned by
 different people already. Do you really think we should pre-populate the
 cache?


By pre-populate, I mean that we need some form of task that can be started
to create a repo/mirror of the packages we know we need for the
installation. The source this would be built from could be an ISO, or
equally any other mirror site. The user should be able to use this as a
base population for the packages. If the mirror is incomplete that should
be OK too, as long as the user is told that their nodes will attempt to get
the remaining packages from the internet. The task should be able to be run
at any time, and if desired the user would be able to make the deployment
require it to finish first.

So yes, we need both a repo/mirror like we have now and a passthrough that
might use a squid proxy to help with multiple accesses. Keep in mind that
the squid proxy would have to work with the virtual-router-for-nodes
blueprint [1].


 I think the first node deployment will fetch a lot of packages, and other
 nodes will be easier. Once we have a prototype, we will see some numbers.


The first OS install will fetch packages, then later the first node of each
role will fetch different packages; it's possible we could get all the way
to compute and fail there because we can't get a package. I can personally
promise that without something else this will have the same problems we had
before with 3.0 (I could run two squid layers, one on my host and one on my
Fuel VM, and still have problems, usually cache misses). When this occurs
the result is terrible and hard for non-Fuel people to diagnose, and you
will end up restarting the whole deployment. The user experience (UX) from
this is horrible. We need the tools to prevent this from occurring at all.


 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[1] https://blueprints.launchpad.net/fuel/+spec/virtual-router-for-env-nodes


-- 
Andrew
Mirantis
Fuel community ambassador
Ceph community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [defcore] Proposal for new openstack/defcore repository

2015-02-12 Thread Russell Bryant
On 02/12/2015 12:47 PM, Chris Hoge wrote:
 In the most recent Defcore committee meeting we discussed the need
 for a repository to host artifacts related to the Defcore process[1].
 These artifacts will include documentation of the Defcore process,
 lists of advisory and required capabilities for releases, and useful
 tools and instructions for configuring and testing clouds against the
 capability lists.
 
 One of our goals is increased community awareness and participation
 around the process, and we feel that a Gerrit backed repository helps
 to achieve this goal by providing a well understood mechanism for
 community members to comment on policies and capabilities before they
 are merged. For members of the community who aren’t familiar with the
 Gerrit workflow, I would be more than happy to help them out with
 understanding the process or acting as a proxy for their comments and
 concerns.
 
 We're proposing to host the repository at openstack/defcore, as this
 is work being done by a board-backed committee with cross cutting
 concerns for all OpenStack projects. All projects are owned by some
 parent organization within the OpenStack community. One possibility
 for ownership that we considered was the Technical Committee, with
 precedent set by the ownership of the API Working Group
 repository[2]. However, we felt that there is a need to allow for
 projects that are owned by the Board, and are also proposing a new
 Board ownership group.
 
 The core reviewers of the repository will be the Defcore Committee
 co-chairs, with additional core members added at the discretion of
 board members leading the committee.

+1 to using the openstack namespace, and having it explicitly listed as
a board owned repository.  Having it listed as a TC owned repository
would be quite confusing since the TC does not own any part of the
process.  I'd also expect concerns about that confusion from both board
and TC members.

 In the coming weeks we're going to be working hard on defining new
 capabilities for the Icehouse, Juno, and future releases. We're
 looking forward to meeting with the developer and operator community
 to help define the new capabilities with an eye towards
 interoperability across the entire OpenStack ecosystem. The creation
 of this repository is an important step in that direction.

I'm a bit concerned about the amount of manual effort needed to define
capabilities for each release over the long term.  It seems like there
is an opportunity for some close collaboration between the defcore group
and tempest (and whatever other test source defcore wants to draw from)
to have test grouping metadata maintained alongside the tests and
updated as tests are added/changed/removed over time.  Then for each
release, defcore could extract that information and then do its magic to
decide which ones to use.
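
As a purely illustrative sketch (none of these names exist in tempest; it is
just to make the idea concrete), the metadata could be as lightweight as a
decorator that tags each test with a capability identifier:

import functools

# Hypothetical registry mapping capability names to the tests that cover them.
CAPABILITY_REGISTRY = {}


def capability(name):
    """Tag a test with the DefCore capability it exercises."""
    def decorator(func):
        CAPABILITY_REGISTRY.setdefault(name, []).append(
            '%s.%s' % (func.__module__, func.__name__))

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper
    return decorator


@capability('compute-servers-list')
def test_list_servers():
    pass


# A DefCore tool could then dump CAPABILITY_REGISTRY per release to build
# the capability lists, instead of curating them entirely by hand.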

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-12 Thread Yathiraj Udupi (yudupi)
Hi Tim,

Thanks for your response.  I'm excited too to extend the collaboration and ensure
there is no need to duplicate effort in the open-source community.
 My responses are inline.

1)  Choice of LP solver.

I see solver-scheduler uses Pulp, which was on the Congress short list as well. 
 So we’re highly aligned on the choice of underlying solver.

YATHI - This makes me wonder why we can’t easily adapt the solver-scheduler to
your needs, rather than duplicating the effort!


2) User control over VM-placement.


To choose the criteria for VM-placement, the solver-scheduler user picks from a 
list of predefined options, e.g. ActiveHostConstraint, 
MaxRamAllocationPerHostConstraint.

We’re investigating a slightly different approach, where the user defines the 
criteria for VM-placement by writing any policy they like in Datalog.  Under 
the hood we then convert that Datalog to an LP problem.  From the developer’s 
perspective, with the Congress approach we don’t attempt to anticipate the 
different policies the user might want and write code for each policy; instead, 
we as developers write a translator from Datalog to LP.  From the user’s 
perspective, the difference is that if the option they want isn’t on the 
solver-scheduler's list, they’re out of luck or need to write the code 
themselves.  But with the Congress approach, they can write any VM-placement 
policy they like.

What I’d like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they’ve 
written is on the solver-scheduler’s list of options, we use the hard-coded 
implementation, but if the policy isn’t on that list we translate directly to 
LP.  This approach gives us the ability to write custom code to handle common 
cases while at the same time letting users write whatever policy they like.


YATHI - The idea of providing some default constraint classes in Solver
Scheduler was to enable easy pluggability for various placement policy
scenarios. We can easily add a custom constraint class in solver scheduler
that enables adding additional constraints at runtime (to the PuLP model or
any other models we can use and support). It would take in any external
policy (say, Datalog in the Congress example) and add the resulting set of
translated constraints via this custom constraint builder class. This is
somewhere we can definitely add value to the solver scheduler.
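
Just to make the common ground concrete, here is a rough, self-contained PuLP
sketch of the kind of LP such a policy boils down to (the data and the single
RAM constraint below are invented for illustration; this is not
solver-scheduler code):

import pulp

hosts = ['host1', 'host2']
vms = ['vm1', 'vm2', 'vm3']
ram = {'vm1': 2048, 'vm2': 4096, 'vm3': 1024}   # MB required per VM
capacity = {'host1': 8192, 'host2': 4096}       # MB available per host

prob = pulp.LpProblem('vm_placement', pulp.LpMinimize)

# place[vm][host] == 1 means the VM goes on that host.
place = pulp.LpVariable.dicts('place', (vms, hosts), cat=pulp.LpBinary)

# Every VM must land on exactly one host.
for vm in vms:
    prob += pulp.lpSum(place[vm][h] for h in hosts) == 1

# A MaxRamAllocationPerHostConstraint-style rule: the RAM of the VMs placed
# on a host must not exceed that host's capacity.
for h in hosts:
    prob += pulp.lpSum(ram[vm] * place[vm][h] for vm in vms) <= capacity[h]

# Trivial objective: prefer host1, i.e. minimize what ends up on host2.
prob += pulp.lpSum(place[vm]['host2'] for vm in vms)

prob.solve()
for vm in vms:
    for h in hosts:
        if pulp.value(place[vm][h]) == 1:
            print('%s -> %s' % (vm, h))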


3) API and architecture.

Today the solver-scheduler's VM-placement policy is defined at config-time 
(i.e. not run-time).  Am I correct that this limitation is only because there’s 
no API call to set the solver-scheduler’s policy?  Or is there some other 
reason the policy is set at config-time?

Congress policies change at runtime, so we’ll definitely need a VM-placement 
engine whose policy can be changed at run-time as well.

YATHI - We have working code to set VM placement policies at run-time to
dynamically select the constraint or cost classes to use. It has yet to be
upstreamed to the solver-scheduler StackForge repo, but will be soon. But
yeah, I agree with you, this is definitely needed for any policy-driven VM
placement engine, as the policies are dynamic. Short answer: yes, solver
scheduler has the ability to support this.


If we focus on just migration (and not provisioning), we can build a 
VM-placement engine that sits outside of Nova that has an API call that allows 
us to set policy at runtime.  We can also set up that engine to get data 
updates that influence the policy.  We were planning on creating this kind of 
VM-placement engine within Congress as a node on the DSE (our message bus).  
This is convenient because all nodes on the DSE run in their own thread, any 
node on the DSE can subscribe to any data from any other node (e.g. 
ceilometer’s data), and the algorithms for translating Datalog to LP look to be 
quite similar to the algorithms we’re using in our domain-agnostic policy 
engine.

YATHI - The entire scheduling community in Nova is planning on an external
scheduler (Gantt), and we are also pitching solver scheduler as a stand-alone
placement engine, a scheduler as a service. Nova integration is just to
ensure it fits within the Nova workflow. I am not quite familiar with the DSE
architecture yet, but the simple idea we have is this: Congress policies, as
part of the enforcement workflow, should set the VM placement constraints and
feed in any additional data to be used for scheduling/placement decisions.
That input will be consumed dynamically by the Solver Scheduler, and after
the delegation the Solver Scheduler module will calculate the placement
decisions and either complete the initial VM placement or call the VM
migration APIs to enable the required migrations.



Thanks,
Yathi.


On 2/12/15, 10:02 AM, Tim Hinrichs thinri...@vmware.com wrote:

Hi Debo and Yathiraj,

I took a third look at the 

[openstack-dev] [API] Do we need to specify follow the HTTP RFCs?

2015-02-12 Thread Chris Dent


I meant to get to this in today's meeting[1] but we ran out of time
and based on the rest of the conversation it was likely to lead to a
spiral of different interpretations, so I thought I'd put it up here.

$SUBJECT says it all: When writing guidelines to what extent do we
think we should be recapitulating the HTTP RFCs and restating things
said there in a form applicable to OpenStack APIs?

For example should we say:

Here are some guidelines, for all else please refer to RFCs
7230-5.

Or should we say something like:

Here are some guidelines, including:

If your API has a resource at /foo which responds to an authentic
request with method GET but not with method POST, PUT, DELETE or PATCH
then when an authentic request is made to /foo that is not a GET it must
respond with a 405 and must include an Allow header listing the
currently supported methods.[2]
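
To make that concrete, here is a tiny standalone sketch of a /foo resource
behaving that way (plain Python 3 stdlib, nothing OpenStack-specific, and it
skips path routing for brevity):

from http.server import BaseHTTPRequestHandler, HTTPServer


class FooHandler(BaseHTTPRequestHandler):
    """A /foo resource that only supports GET."""

    def do_GET(self):
        body = b'{"foo": "bar"}'
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def _method_not_allowed(self):
        # RFC 7231 section 6.5.5: a 405 response must carry an Allow header
        # listing the methods the resource currently supports.
        self.send_response(405)
        self.send_header('Allow', 'GET')
        self.send_header('Content-Length', '0')
        self.end_headers()

    do_POST = do_PUT = do_DELETE = do_PATCH = _method_not_allowed


if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8080), FooHandler).serve_forever()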

I ask because I've been fleshing out my gabbi testing tool[3] by running
it against a variety of APIs. Gabbi makes it very easy to write what I
guess the officials call negative tests -- Throw some unexpected but well-
formed input, see if there is a reasonable response -- just by making
exploratory inquiries into the API and then traversing the discovered links
with various methods and content types.

What I've found is too often the response is not reasonable. Some of
the problems arise from the frameworks being used, in other cases it
is the implementing project.

We can fix the existing stuff in a relatively straightforward but
time consuming fashion: Use tools like gabbi to make more negative tests,
fix the bugs as they come up. Same as it ever was.

For new stuff, however, does there need to be increased awareness of
the rules and is it the job of the working group to help that
increasing along?

[1]
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-02-12-16.00.html

[2] This is a paraphrase of:
http://tools.ietf.org/html/rfc7231#section-6.5.5

[3] https://pypi.python.org/pypi/gabbi

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-12 Thread John Belamaric


From: Salvatore Orlando sorla...@nicira.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 12, 2015 at 8:36 AM
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Update on DB IPAM driver

Hi,

I have updated the patch; although it is not complete yet, it's getting
closer to being an allocator decent enough to replace the built-in logic.

I will be unable to attend today's L3/IPAM meeting due to a conflict, so here 
are some highlights from me on which your feedback is more than welcome:

- I agree with Carl that the IPAM driver should not have explicit code paths
for autoaddress subnets, such as DHCPv6 stateless ones. In that case, the
consumer of the driver will generate the address and pass it to the IPAM
driver, for which it would just be allocation of a specific address. However,
I have the impression the driver still needs to be aware of whether the
subnet has an automatic address mode or not - since in this case 'any'
address allocation won't be possible. There are already comments about this
in the review [1].

I think the auto-generated case should be a base class as you described in [1], 
but each subclass would implement the specific auto-generation. See the 
discussion at line 468 in [2] and see what you think. Of course for addresses 
that come from RA there would be no IPAM.

[1] https://review.openstack.org/#/c/150485/
[2] 
https://review.openstack.org/#/c/153236/2/neutron/db/db_base_plugin_v2.py,unified


- We had a discussion last week on whether the IPAM driver and neutron should
'share' database tables. I went back and forth a lot, but now it seems to me
the best thing to do is to have the IPAM driver maintain an 'ip_requests'
table, where it stores allocation info. This table partially duplicates data
in IPAllocation, but on the plus side it makes the IPAM driver
self-sufficient. The next step would be to decide whether we want to go a
step further and also assume the driver should not access Neutron's DB at
all, but I would defer that discussion to the next iteration (for both the
driver and the IPAM interface).

- I promised a non-blocking algorithm for IP allocation. The one I was
developing was based on specifying the primary key on the ip_requests table
in a way that would prevent two concurrent requests from getting the same
address, and it would just retry getting an address until the primary key
constraint was satisfied. However, information that recently emerged on MySQL
Galera's (*) data set certification [2] clarified that this kind of algorithm
would still result in a deadlock error from failed data set certification. It
is worth noting that in this case a solution based on traditional
compare-and-swap is not possible because concurrent requests would be
inserting data at the same time. I am now working on an alternative solution,
and I would like to first implement a PoC for it (so that I can prove it
works).
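
For clarity, the idea was roughly the following (a toy sketch with made-up
table and column names, using plain SQLAlchemy; it is not the actual patch):

from sqlalchemy import Column, MetaData, String, Table, create_engine
from sqlalchemy.exc import IntegrityError

metadata = MetaData()
ip_requests = Table(
    'ip_requests', metadata,
    # The composite primary key is what stops two concurrent requests from
    # ever committing the same address for the same subnet.
    Column('subnet_id', String(36), primary_key=True),
    Column('ip_address', String(64), primary_key=True),
)


def allocate(engine, subnet_id, candidate_ips):
    """Try candidate addresses until one INSERT wins the primary-key race."""
    for ip in candidate_ips:
        try:
            with engine.begin() as conn:
                conn.execute(ip_requests.insert().values(
                    subnet_id=subnet_id, ip_address=ip))
            return ip
        except IntegrityError:
            # Someone else grabbed this address first; retry with the next.
            continue
    raise RuntimeError('no addresses available')


engine = create_engine('sqlite://')
metadata.create_all(engine)
print(allocate(engine, 'subnet-1', ['10.0.0.%d' % i for i in range(2, 10)]))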

- The db base refactoring being performed by Pavel is under way [3]. It is 
worth noting that this is a non-negligible change to some of Neutron's basic 
and more critical workflows. We should expect pushback from the community 
regarding the introduction of this change in the 3rd milestone. At this stage I 
would suggest either:
A) consider a strategy for running pluggable IPAM as optional
B) consider delaying to Liberty.
(and that's where I get virtually jeered and pelted with rotten tomatoes)

I wish I had some old tomatoes! Seriously, I think A is a reasonable 
approach. To make this really explicit we may want to basically replace the DB 
plugin class with a shim that delegates to either the current implementation or 
the new implementation, depending on the flag.
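
Something as simple as this would do (hypothetical names, just to illustrate
the shape of the shim):

class BuiltInIpamBackend(object):
    """Stands in for today's hard-wired allocation logic."""
    def allocate_ip(self, subnet_id):
        return 'built-in allocation for %s' % subnet_id


class PluggableIpamBackend(object):
    """Stands in for the new driver-based implementation."""
    def allocate_ip(self, subnet_id):
        return 'pluggable allocation for %s' % subnet_id


class IpamShim(object):
    """Delegates every IPAM call to one backend, chosen by a config flag."""

    def __init__(self, use_pluggable_ipam=False):
        # In real code the flag would come from neutron.conf.
        self._impl = (PluggableIpamBackend() if use_pluggable_ipam
                      else BuiltInIpamBackend())

    def allocate_ip(self, subnet_id):
        return self._impl.allocate_ip(subnet_id)


print(IpamShim(use_pluggable_ipam=True).allocate_ip('subnet-1'))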


Thanks for reading this post,
Salvatore

[1] https://review.openstack.org/#/c/150485/
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-February/056007.html
[3] https://review.openstack.org/#/c/153236/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What should openstack-specs review approval rules be ?

2015-02-12 Thread James E. Blair
Thierry Carrez thie...@openstack.org writes:

 So what is it we actually want for that repository ? In a world where
 Gerrit can do anything, what would you like to have ?

 Personally, I want our technical community in general, and our PTLs/CPLs
 in particular, to be able to record their opinion on the proposed
 cross-project spec. Then, if consensus is reached, the spec should be
 approved.

 This /could/ be implemented in Gerrit by giving +1/-1 to everyone to
 express technical opinion and +2/-2 to TC members to evaluate consensus
 (with Workflow+1 to the TC chair to mark when all votes are collected
 and consensus is indeed reached).

Thanks for starting this.  Despite the fact that I was explicitly
looking for this thread, I still missed it.

I think in general though, it boils down to the fact that we need to
answer these questions for each of the repos:

A) Should the broader community register +/-1 or simply comments? (Now
   that we may distinguish them from TC member votes.)
B) Should individual TC members get a veto?

I personally think the answer to A is votes and to B is no in both
cases.  I'm also okay with sticking with comments for the governance
repo.  I feel pretty strongly about not having a veto.

Below I am including a whole bunch of text which is both my analysis of
all of the requirements and potential requirements, the current state,
and technical implementations of changes we might want.  Sorry it's so
long and complicated, but the gist is that we do have options, and if we
can agree on the above 2 questions, I think the next steps are fairly
obvious.

-Jim

==

Since upgrading to Gerrit 2.8, we have some additional tools at our
disposal for configuring voting categories.  For some unique
repositories such as governance and cross-project specs, we may want
to reconfigure voting there.

Governance Repo Requirements


I believe that the following are requirements for the Governance
repository:

* TC members can express approval or disapproval in a way that
  identifies their vote as a vote of a member of the TC.
* TC members may not veto.
* Anyone should be able to express their opinion.
* Only the TC chair may approve the change.  This is so that the chair
  is responsible for the procedural aspects of the vote (ie, when it
  is finalized).

Current Governance Repo Rules
-

These are currently satisfied by the following rules in Gerrit:

* Anyone may comment on a change without leaving a vote.
* Only TC members may vote +1/-1 in Code-Review.
* Only the TC chair may vote Code-Review +2 and Workflow +1.

Unsatisfied Governance Repo Requirements


This does not satisfy the following potential requirements:

* The TC chair may vote -1 and still approve a disputed change with 7
  yes votes (the chair currently would need to leave a comment
  indicating the actual vote tally).
* Non-TC members may register their approval or disapproval with a
  vote (they currently may only leave comments to that effect).

Cross-Project Repo Requirements
---

* TC members can express approval or disapproval in a way that
  identifies their vote as a vote of a member of the TC.
* TC members may not veto.  (This requirement has not achieved
  consensus.)
* Non-TC members may register their approval or disapproval with a
  vote (we must be able to easily see that PTLs of affected projects
  have weighed in).
* Only the TC chair may approve the change.  This is so that the chair
  is responsible for the procedural aspects of the vote (ie, when it
  is finalized).

Current Cross-Project Repo Rules


These are currently satisfied by the following rules in Gerrit:

* Anyone may comment on a change and leave a vote.
* Only TC members may vote +2 in Code-Review.
* Only the TC chair may vote Workflow +1.

Unsatisfied Cross-Project Repo Requirements

The following potential requirements are not satisfied:

* TC members may veto with a -2 Code-Review vote.  (This requirement
  has not achieved consensus.)

Potential Changes
=

To address the unsatisfied requirements, we could make the following
changes, which would only apply to the repos in question:

To address this requirement:
* The TC chair may vote -1 and still approve a disputed change with 7
  yes votes (the chair currently would need to leave a comment
  indicating the actual vote tally).

We could change the Code-Review label function from MaxWithBlock to
NoBlock, so that the votes in Code-Review are ignored by Gerrit, and
only enforced by the chair.

Additionally, we could write a custom submit rule that requires at
least 7 +1 votes in order for the change to be submittable.

Additionally, we could change the name of the review category from
Code-Review to something else, for instance, TC 

Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Chris Dent

On Thu, 12 Feb 2015, Flavio Percoco wrote:

On 11/02/15 11:24 +, Chris Dent wrote:

I think it is time we recognize and act on the fact that the corporate
landlords that pay many of us to farm on this land need to provide more
resources. This will help to ensure the health of the semi-artificial
opensource ecology that is OpenStack. At the moment many things are
packed tight with very little room to breathe. We need some air.


I agree with lots of what you said except for this last bit here. I
don't believe OpenStack is a semi-artificial opensource ecology.
OpenStack has demonstrated throughout the years the ability to grow
without sacrificing openness.


Sorry that probably comes across sounding much more negative than I
intended. What I was trying to say was that there is an avenue that
is probably worth exploring to help with some of the issues that
overwhelm each of as individuals: Implore the corporate entities that
pay us to provide more resources so that there is more room within the
community for people to work on things with less pressure.

There are significant numbers of us who work on OpenStack because it
is our job. Mind you, it's a pretty cool job with lots of interesting
people and good stuff to learn, but it is a job; one in which money
is a factor.

That money is being applied by the corporate entities because it is in
their interest for this thing called OpenStack to be created _and_ that
it be created in the collaborative fashion provided by opensource.

A lot of people are finding it hard to be as effective as they'd
like to be. One way (of presumably many) to deal with that is to
make sure the economic beneficiaries are fully aware of the
situation. If they are rational actors they may wish to do something
to improve the situation.


Saying OpenStack is semi-artificial opensource degrades some of
the things most of us have been fighting for. I'm not offended, just
worried. We've seen many similar messages from outside the community, and
having them come from within the community is worrisome.


a) I'm relatively new, so am fairly fresh-faced and naive and willing
   to make somewhat stupid generalities based on things not being
   like what I'm used to. This has its pros and cons...

b) I've been doing some form of FLOSS software on unix-like machines since
   long before the term opensource was popularized. I'm not
   scratching an itch or working on a problem that is solved by
   making OpenStack better. I made a lot of changes to PAM a long
   time ago because I needed better auth on the servers I managed.
   Today I work on OpenStack because the combination of pay and
   learning opportunities make it a reasonable job. There are lots
   of people like me.

b is what makes it semi-artificial. I'm not stating it as a
pejorative. Corporate opensource is a grand thing and I'm very happy
to see it exist, but it's _different_ from old(er) school itch-
scratching opensource, more...constructed?

All I'm saying is that we should recognize that difference and use
it where it could be useful. In practical terms: let's get the landlords
to open up the purse a bit. I think this is a reasonable request: If
your computer no longer has enough memory to do your job, you ask
your manager to get you more RAM. Pretty similar thing going on
here.

What the OpenStack community did and does is truly remarkable and that
it has done it while maintaining its opensource cred is a credit to
people like yourself who have kept up the good fight. It's a very
complex environment.


That said, I may have misunderstood what you meant, so please correct
me if I did. I'm tired and should've probably waited 'til tomorrow
before replying. Oh well, :D


I may be in the same boat.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] issues with fakelibvirt in tests

2015-02-12 Thread Matt Riedemann



On 2/12/2015 11:32 AM, Sean Dague wrote:

Looking recently at the following failure -
http://logs.openstack.org/04/154804/1/gate/gate-nova-python27/1fe94bf/console.html#_2015-02-12_15_02_19_593

It appears that the fakelibvirt fixture is potentially causing races in
tests because after the first test in a worker starts a libvirt
connection, the libvirt python library spawns a thread which keeps
running in a loop for the duration of the tests. This is happening
regardless of whether or not the test in question is using libvirt (as
in this case). Having threads thumping around in the background means
that doing things like testing for when sleep is called can fail because
libvirt's thread is getting in the way.

What's the proper method of completely tearing down all the libvirt
resources so that when this fixture exits it will actually do that
correctly -
https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/fakelibvirt.py#L1181-L1202
and not impact unrelated tests?

-Sean



fakelibvirt shouldn't be using libvirt-python at all since this change:

https://review.openstack.org/#/c/150148/

I'm not saying there isn't something going on, but I'm not sure how
libvirt-python would be involved since it's not in
test-requirements.txt, unless it's in site-packages on the test nodes.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [defcore] Proposal for new openstack/defcore repository

2015-02-12 Thread Chris Hoge
In the most recent Defcore committee meeting we discussed the need for a 
repository to host artifacts related to the Defcore process[1]. These artifacts 
will include documentation of the Defcore process, lists of advisory and 
required capabilities for releases, and useful tools and instructions for 
configuring and testing clouds against the capability lists.

One of our goals is increased community awareness and participation around the 
process, and we feel that a Gerrit backed repository helps to achieve this goal 
by providing a well understood mechanism for community members to comment on 
policies and capabilities before they are merged. For members of the community 
who aren’t familiar with the Gerrit workflow, I would be more than happy to 
help them out with understanding the process or acting as a proxy for their 
comments and concerns.

We're proposing to host the repository at openstack/defcore, as this is work 
being done by a board-backed committee with cross cutting concerns for all 
OpenStack projects. All projects are owned by some parent organization within 
the OpenStack community. One possibility for ownership that we considered was 
the Technical Committee, with precedent set by the ownership of the API Working 
Group repository[2]. However, we felt that there is a need to allow for 
projects that are owned by the Board, and are also proposing a new Board 
ownership group.

The core reviewers of the repository will be the Defcore Committee co-chairs, 
with additional core members added at the discretion of board members leading 
the committee.

In the coming weeks we're going to be working hard on defining new capabilities 
for the Icehouse, Juno, and future releases. We're looking forward to meeting 
with the developer and operator community to help define the new capabilities 
with an eye towards interoperability across the entire OpenStack ecosystem. The 
creation of this repository is an important step in that direction.

Thanks,
Chris Hoge
Interop Engineer
OpenStack Foundation

[1] https://etherpad.openstack.org/p/DefCoreScale.4
[2] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/technical-committee-repos.yaml
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Flavio Percoco

On 12/02/15 12:04 +, Chris Dent wrote:

On Thu, 12 Feb 2015, Flavio Percoco wrote:


The important bit, though, is that email is meant for asynchronous
communication and IRC isn't. If things that require the intervention
of other folks from the community are being discussed and those folks
are not on IRC, it'd be wrong to consider the topic as discussed.


This is really the crux of the biscuit and thank you for continuing
to bring it back round to this point.

My personal experience of OpenStack has been that unless I am

* on IRC (too) many hours per day
* going to (too) many IRC meetings when I should be doing something
 interesting with my family
* watching a fair few spec and governance gerrits

then I will miss out on not just the decision making _process_ for
things which are relevant to the work I need or want to do and plan
for but also the _decisions_ themselves.

For example how many people really know the extent and impact of the
big tent governance plans?

Ideally I should be able to delegate a lot of this farming for
information to other people in the community but that only works if
there is a habit by those others of summarizing to the mailing list.

(Which goes back to my earlier point about gosh, aren't we all a
bit busy?)


These are good observations and they impact two things: how things are
communicated, and our *physical* ability to cover many things. W.r.t.
the latter, it's hard to know when something simple is not part of our
responsibilities and should be delegated to others (this goes back
to what you said in your other email).

That said, I think a key point in understanding when something is not
OK with the way your community (in this case project) communicates is
analyzing how much effort you need to put into keeping yourself
updated. If you need ninja skills to avoid missing things in the
project you're working on, then IMHO there's something wrong.

The above is why I mentioned in one of my previous replies that email
should be the default. I hate emails, really, but it'd take me way
more effort to dig into all the IRC logs and ping people than just
reading more emails.

If that weren't enough, there're also timezones and a whole bunch of
other things related to this.

I guess what I want to say here - besides that I should probably stop
for today - is that we should strive to make it easier for people to
participate in discussions - keeping in mind all the things related to
this; Nikola elaborated quite a good list in one of his replies - but
we also should be very careful about burnout.

But that probably deserves a different thread.
Flavio



--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Jeremy Stanley
On 2015-02-12 18:34:56 +0100 (+0100), Flavio Percoco wrote:
[...]
 we *don't* have a public voip channel
[...]

Well, technically we do if you want one.

https://wiki.openstack.org/wiki/Infrastructure/Conferencing

But of course the logistics around all the projects connecting in and
talking at once would be a bit nightmarish.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] FFE Request: Proxy neutron configuration to guest instance

2015-02-12 Thread Jay Faulkner
Hi Nova cores,

We’d like to request an FFE for this added nova feature. It gives a real 
interface - a JSON file - to network data inside the instance. This is a patch 
Rackspace carries downstream, and we’ve had lots of interested users, including 
the OpenStack Infra team and upstream cloud-init. We’d love to get this in for 
Kilo so all can benefit from the better interface.

There are a few small patches remaining to implement this functionality:
https://review.openstack.org/#/c/155116/ Updates the testing portion of the 
spec to reflect we can’t tempest test this, and will instead add functional 
tests to Nova for it.

Core Functionality
https://review.openstack.org/#/c/143755/ - Adds IPv6 support to Nova’s network 
unit tests so we can test the functionality in IPv6.
https://review.openstack.org/#/c/102649/ - Builds and prepares the neutron 
network data to expose
https://review.openstack.org/#/c/153097/ - Exposes the Neutron network data 
built in the last patch to Configdrive/Metadata service

VLAN Support
As a note: while we’d like all these patches to be merged, it’s clear the VLAN 
support is a bit more complex than the other patches, and we’d be OK with the 
other patches receiving an FFE without this one (although obviously we’d prefer 
to get everything in K).

https://review.openstack.org/#/c/152703/ - Adds VLAN support for Neutron 
network data generation.

Please let me or Josh know if you have any questions.

Thanks,
Jay Faulkner (JayF) & Josh Gachnang (JoshNang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova]

2015-02-12 Thread Adam Young

On 02/12/2015 10:40 AM, Alexander Makarov wrote:

A trust token cannot be used to get another token:
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L154-L156
You have to make your Nova client use the very same trust-scoped token 
obtained from authenticating with the trust, without trying to 
authenticate with it one more time.



Actually, there have been some recent changes to allow re-delegation of 
Trusts, but for older deployments, you are correct.  I hadn't seen 
anywhere here that he was trying to use a trust token to get another 
token, though.




On Wed, Feb 11, 2015 at 9:10 PM, Adam Young ayo...@redhat.com wrote:


On 02/11/2015 12:16 PM, Nikolay Makhotkin wrote:

No, I just checked it. Nova receives trust token and raise this
error.

In my script, I see:

http://paste.openstack.org/show/171452/

And as you can see, token from trust differs from direct user's
token.


The original user needs to have the appropriate role to perform
the operation on the specified project.  I see the admin role is
created on the trust. If the trustor did not have that role, the
trustee would not be able to execute the trust and get a token.
It looks like you were able to execute the trust and get a token,
but I would like you to confirm that, and not just trust the
keystone client: either put debug statements in Keystone or call
the POST to tokens from curl with the appropriate options to get a
trust token.  In short, make sure you have not fooled yourself.
You can also look in the token table inside Keystone to see the
data for the trust token, or validate the token via curl to see
the data in it.  In all cases, there should be an OS-TRUST stanza
in the token data.


If it is still failing, there might be some issue on the Policy
side.  I have been assuming that you are running with the default
policy for Nova.

http://git.openstack.org/cgit/openstack/nova/tree/etc/nova/policy.json

I'm not sure which rule matches for list servers (Nova developer
input would be appreciated), but I'm guessing it is executing the
rule

admin_or_owner: is_admin:True or project_id:%(project_id)s,

since that is the default. I am guessing that the project_id in
question comes from the token here, as that seems to be common,
but if not, it might be that the two values are mismatched.
Perhaps the Project ID value from the client env var is sent,
and matches what the trustor normally works in, not the project in
question.  If these two values don't match, then, yes, the rule
would fail.





On Wed, Feb 11, 2015 at 7:55 PM, Adam Young ayo...@redhat.com
mailto:ayo...@redhat.com wrote:

On 02/11/2015 10:52 AM, Nikolay Makhotkin wrote:

Hi !

I investigated trusts' use cases and encountered a
problem: when I use an auth_token obtained from keystoneclient
using a trust, I get a *403* Forbidden error: *You are not
authorized to perform the requested action.*

Steps to reproduce:

- Import v3 keystoneclient (used keystone and keystoneclient
from master, tried also to use stable/icehouse)
- Import v3 novaclient
- initialize the keystoneclient:
 keystone = keystoneclient.Client(username=username,
password=password, tenant_name=tenant_name, auth_url=auth_url)

- create a trust:
  trust = keystone.trusts.create(
  keystone.user_id,
  keystone.user_id,
  impersonation=True,
  role_names=['admin'],
  project=keystone.project_id
)

- initialize new keystoneclient:
  client_from_trust = keystoneclient.Client(
username=username, password=password,
trust_id=trust.id, auth_url=auth_url,
  )

- create nova client using new token from new client:
  nova = novaclient.Client(
auth_token=client_from_trust.auth_token,
auth_url=auth_url_v2,
project_id=from_trust.project_id,
service_type='compute',
username=None,
api_key=None
  )

- do simple request to nova:
nova.servers.list()

- get the error described above.


Maybe I misunderstood something but what is wrong? I
supposed I just can work with nova like it was initialized
using direct token.


From what you wrote here it should work, but since Heat has
been doing stuff like this for a while, I'm pretty sure it is
your setup and not a fundamental problem.

I'd take a look at what is going back and forth on the wire
and make sure the right token is being sent to Nova.  If it
is the original user's token and not the trust token, then you
would see that error.



-- 

Re: [openstack-dev] [API] Do we need to specify follow the HTTP RFCs?

2015-02-12 Thread Ryan Brown
On 02/12/2015 01:08 PM, Jay Pipes wrote:
 On 02/12/2015 01:01 PM, Chris Dent wrote:
 I meant to get to this in today's meeting[1] but we ran out of time
 and based on the rest of the conversation it was likely to lead to a
 spiral of different interpretations, so I thought I'd put it up here.

 $SUBJECT says it all: When writing guidelines to what extent do we
 think we should be recapitulating the HTTP RFCs and restating things
 said there in a form applicable to OpenStack APIs?

 For example should we say:

  Here are some guidelines, for all else please refer to RFCs
  7230-5.

 Or should we say something like:

  Here are some guidelines, including:

  If your API has a resource at /foo which responds to an authentic
  request with method GET but not with method POST, PUT, DELETE or
 PATCH
  then when an authentic request is made to /foo that is not a GET it
 must
  respond with a 405 and must include an Allow header listing the
  currently supported methods.[2]

 I ask because I've been fleshing out my gabbi testing tool[3] by running
 it against a variety of APIs. Gabbi makes it very easy to write what I
 guess the officials call negative tests -- Throw some unexpected but
 well-
 formed input, see if there is a reasonable response -- just by making
 exploratory inquiries into the API and then traversing the discovered
 links
 with various methods and content types.

 What I've found is too often the response is not reasonable. Some of
 the problems arise from the frameworks being used, in other cases it
 is the implementing project.

 We can fix the existing stuff in a relatively straightforward but
 time consuming fashion: Use tools like gabbi to make more negative tests,
 fix the bugs as they come up. Same as it ever was.

 For new stuff, however, does there need to be increased awareness of
 the rules and is it the job of the working group to help that
 increasing along?
 
 I think it's definitely the role of the API WG to identify places in our
 API implementations that are not following the rules, yes.
 
 I think paraphrasing particular parts of RFCs would be my preference,
 along with examples of bad or incorrect usage.
 
 Best,
 -jay

+1 I think the way to go would be:

We suggest (pretty please) that you comply with RFCs 7230-5 and if you
have any questions ask us. Also here are some examples of usage that
is/isn't RFC compliant for clarity

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Ceilometer] Real world experience with Ceilometer deployments - Feedback requested

2015-02-12 Thread Jordan Pittier
Hi,
My experience with Ceilometer is that MongoDB is/was a major bottleneck.
You need sharding + servers with a lot of RAM. You need to set a TTL on your
samples, and only save in the DB the metrics that really matter to you.
MongoDB v3 should also help.
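
For what it's worth, the TTL bit can be as simple as a TTL index on the
samples collection (an illustration only; the database, collection and field
names here are assumptions, not necessarily Ceilometer's actual schema):

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')
samples = client.ceilometer.meter   # database/collection names are a guess

# MongoDB expires a document once its 'timestamp' datetime is older than
# expireAfterSeconds; here samples are kept for 30 days.
samples.create_index('timestamp', expireAfterSeconds=30 * 24 * 3600)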

Regarding RabbitMQ pressure, I think this blueprint helps a lot
https://blueprints.launchpad.net/ceilometer/+spec/multiple-rabbitmq

And also, you should make your own tests because there has been a lot of
FUD around Ceilometer.

Jordan

On Thu, Feb 12, 2015 at 6:23 PM, Diego Parrilla Santamaría 
diego.parrilla.santama...@gmail.com wrote:

 Hi Mash,

 we dropped Ceilometer as the core tool to gather metrics for our rating
 and billing system. I must admit it has improved, but I think it's broken
 by design: a metering system and a monitoring system are not the same thing.

 We have built a component that listens directly to rabbit notifications
 (a la StackTach). This tool stores all the events in a database (but
 anything could work, it's just a logging system) and then we process these
 events and store them in a datamart-style database every hour. The rating
 and billing system reads this database and processes it every hour too. We
 decided to implement this pipeline processing of data because we knew in
 advance that processing such an amount of data was a challenge.

 I think Ceilometer should be used just to trigger alarms for Heat, for
 example, and something else should be used for rating and billing.

 Cheers
 Diego



  --
 Diego Parrilla
 CEO, StackOps
 www.stackops.com
 diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla



 On Wed, Feb 11, 2015 at 8:37 PM, Maish Saidel-Keesing mais...@maishsk.com
  wrote:

 Is Ceilometer ready for prime time?

 I would be interested in hearing from people who have deployed OpenStack
 clouds with Ceilometer, and their experience. Some of the topics I am
 looking for feedback on are:

 - Database Size
 - MongoDB management, Sharding, replica sets etc.
 - Replication strategies
 - Database backup/restore
 - Overall useability
 - Gripes, pains and problems (things to look out for)
 - Possible replacements for Ceilometer that you have used instead


 If you are willing to share - I am sure it will be beneficial to the
 whole community.

 Thanks in Advance


 With best regards,


 Maish Saidel-Keesing
 Platform Architect
 Cisco




 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Google doc for working notes

2015-02-12 Thread Tim Hinrichs
Hi Debo and Yathiraj,

I took a third look at the solver-scheduler docs and code with your comments in 
mind.  A few things jumped out.

1)  Choice of LP solver.

I see solver-scheduler uses Pulp, which was on the Congress short list as well. 
 So we’re highly aligned on the choice of underlying solver.

2) User control over VM-placement.

To choose the criteria for VM-placement, the solver-scheduler user picks from a 
list of predefined options, e.g. ActiveHostConstraint, 
MaxRamAllocationPerHostConstraint.

We’re investigating a slightly different approach, where the user defines the 
criteria for VM-placement by writing any policy they like in Datalog.  Under 
the hood we then convert that Datalog to an LP problem.  From the developer’s 
perspective, with the Congress approach we don’t attempt to anticipate the 
different policies the user might want and write code for each policy; instead, 
we as developers write a translator from Datalog to LP.  From the user’s 
perspective, the difference is that if the option they want isn’t on the 
solver-scheduler's list, they’re out of luck or need to write the code 
themselves.  But with the Congress approach, they can write any VM-placement 
policy they like.

What I’d like to see is the best of both worlds.  Users write Datalog policies 
describing whatever VM-placement policy they want.  If the policy they’ve 
written is on the solver-scheduler’s list of options, we use the hard-coded 
implementation, but if the policy isn’t on that list we translate directly to 
LP.  This approach gives us the ability to write custom code to handle common 
cases while at the same time letting users write whatever policy they like.

3) API and architecture.

Today the solver-scheduler's VM-placement policy is defined at config-time 
(i.e. not run-time).  Am I correct that this limitation is only because there’s 
no API call to set the solver-scheduler’s policy?  Or is there some other 
reason the policy is set at config-time?

Congress policies change at runtime, so we’ll definitely need a VM-placement 
engine whose policy can be changed at run-time as well.

If we focus on just migration (and not provisioning), we can build a 
VM-placement engine that sits outside of Nova that has an API call that allows 
us to set policy at runtime.  We can also set up that engine to get data 
updates that influence the policy.  We were planning on creating this kind of 
VM-placement engine within Congress as a node on the DSE (our message bus).  
This is convenient because all nodes on the DSE run in their own thread, any 
node on the DSE can subscribe to any data from any other node (e.g. 
ceilometer’s data), and the algorithms for translating Datalog to LP look to be 
quite similar to the algorithms we’re using in our domain-agnostic policy 
engine.

Tim


On Feb 11, 2015, at 4:50 PM, Debojyoti Dutta ddu...@gmail.com wrote:

Hi Tim: moving our thread to the mailer. Excited to collaborate!



From: Debo~ Dutta dedu...@cisco.com
Date: Wednesday, February 11, 2015 at 4:48 PM
To: Tim Hinrichs thinri...@vmware.com
Cc: Yathiraj Udupi (yudupi) yud...@cisco.com, Gokul B Kandiraju
go...@us.ibm.com, Prabhakar Kudva ku...@us.ibm.com,
ruby.krishnasw...@orange.com, dilik...@in.ibm.com, Norival Figueira
nfigu...@brocade.com, Ramki Krishnan r...@brocade.com, Xinyuan Huang
(xinyuahu) xinyu...@cisco.com, Rishabh Jain -X (rishabja - AAP3 INC at
Cisco) risha...@cisco.com
Subject: Re: Nova solver scheduler and Congress

Hi Tim

To address your particular questions:

  1.  Translate some policy language into constraints for the LP/CVP: we had 
left that to Congress, hoping to integrate once the policy efforts in OpenStack 
were ready (our initial effort was pre-Congress).
  2.  For migrations: we are currently doing that – it’s about feeding 
incremental constraints into the same solver, hence it’s a small deal.

Joining forces is a terrific idea. Would love to join the IRC call and see how 
we can build cool stuff in the community together. I hope we don’t have to 
replicate the VM-placement engine when work already done in the community does 
something very similar (and can be adapted).

debo

From: Tim Hinrichs thinri...@vmware.com
Date: Wednesday, February 11, 2015 at 4:43 PM
To: Debo~ Dutta dedu...@cisco.com
Cc: Yathiraj Udupi (yudupi) yud...@cisco.com, Gokul B Kandiraju go...@us.ibm.com, 
Prabhakar Kudva ku...@us.ibm.com, 

Re: [openstack-dev] [nova] issues with fakelibvirt in tests

2015-02-12 Thread Sean Dague
On 02/12/2015 01:09 PM, Daniel P. Berrange wrote:
 On Thu, Feb 12, 2015 at 12:32:10PM -0500, Sean Dague wrote:
 Looking recently at the following failure -
 http://logs.openstack.org/04/154804/1/gate/gate-nova-python27/1fe94bf/console.html#_2015-02-12_15_02_19_593

 It appears that the fakelibvirt fixture is potentially causing races in
 tests because after the first test in a worker starts a libvirt
 connection, the libvirt python library spawns a thread which keeps
 running in a loop for the duration of the tests. This is happening
 regardless of whether or not the test in question is using libvirt (as
 in this case). Having threads thumping around in the background means
 that doing things like testing for when sleep is called can fail because
 libvirt's thread is getting in the way.
 
 libvirt-python shouldn't be spawning any threads itself - any threads
 will have been spawned by Nova.
 

 What's the proper method of completely tearing down all the libvirt
 resources so that when this fixture exits it will actually do that
 correctly -
 https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/fakelibvirt.py#L1181-L1202
 and not impact unrelated tests?
 
 Most likely the thread will have been created when the libvirt driver
 is setup in the tests.
 
 eg nova.virt.libvirt.driver.LibvirtDriver.init_host() method will
 call nova.virt.ibvirt.host.Host.initialize() which in turn spawns
 a background *native* thread to receive event notifications from
 libvirt.
 
 Assuming this is indeed the root cause of the thread you see, I'd
 say we want to arrange for the nova.virt.libvirt.host.Host._init_events
 method to be a no-op for the tests. This async events thread is not
 something any of the tests should need to have around in general.

Yeh, we just got to a similar place after mriedem's email.

I'll propose patching that out in the fakelibvirt fixture once I get
some lunch, and make sure there is no other fall out from that.

Thanks for diving in.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] notification listener; same target with multiple executors

2015-02-12 Thread Boden Russell
Is it possible to have multiple oslo messaging notification listeners
using different executors on the same target?

For example, I want to create multiple notification listeners [1], each
using a different executor, for the same set of targets (e.g.
glance/notifications).


When I try this [2], only the endpoints associated with the 1st listener
(server) are executed. Note that I'm using rabbitmq for this very ad-hoc
test and listening for glance notifications. Also, I've tried returning
different results in my endpoint methods.

Finally - when I try [2] and set each listener pool name to a different
value, none of my endpoints are executed. This ?might? be a bug, but
needs additional investigation.


I'm still digging into the oslo.messaging impl, but hoping someone here
has existing knowledge on this topic.


Thanks for any assistance.


[1] http://goo.gl/9JHUkz
[2] http://paste.openstack.org/show/172265/
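
In rough outline, the kind of setup I mean looks like this (a simplified
sketch, not the exact code in [2]; the transport URL and endpoint names are
placeholders):

  import oslo_messaging
  from oslo_config import cfg

  transport = oslo_messaging.get_transport(
      cfg.CONF, 'rabbit://guest:guest@localhost:5672/')
  targets = [oslo_messaging.Target(topic='notifications', exchange='glance')]

  class EndpointOne(object):
      def info(self, ctxt, publisher_id, event_type, payload, metadata):
          print('one: %s from %s' % (event_type, publisher_id))

  class EndpointTwo(object):
      def info(self, ctxt, publisher_id, event_type, payload, metadata):
          print('two: %s from %s' % (event_type, publisher_id))

  # Same targets, different executor per listener.
  # (The eventlet executor assumes the process is monkey-patched.)
  listener1 = oslo_messaging.get_notification_listener(
      transport, targets, [EndpointOne()], executor='eventlet')
  listener2 = oslo_messaging.get_notification_listener(
      transport, targets, [EndpointTwo()], executor='blocking')

  listener1.start()
  listener2.start()
  listener2.wait()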

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [API] Do we need to specify follow the HTTP RFCs?

2015-02-12 Thread Jay Pipes

On 02/12/2015 01:01 PM, Chris Dent wrote:

I meant to get to this in today's meeting[1] but we ran out of time
and based on the rest of the conversation it was likely to lead to a
spiral of different interpretations, so I thought I'd put it up here.

$SUBJECT says it all: When writing guidelines to what extent do we
think we should be recapitulating the HTTP RFCs and restating things
said there in a form applicable to OpenStack APIs?

For example should we say:

 Here are some guidelines, for all else please refer to RFCs
 7230-5.

Or should we say something like:

 Here are some guidelines, including:

 If your API has a resource at /foo which responds to an authentic
 request with method GET but not with method POST, PUT, DELETE or PATCH
 then when an authentic request is made to /foo that is not a GET it
must
 respond with a 405 and must include an Allow header listing the
 currently support methods.[2]

I ask because I've been fleshing out my gabbi testing tool[3] by running
it against a variety of APIs. Gabbi makes it very easy to write what I
guess the officials call negative tests -- Throw some unexpected but well-
formed input, see if there is a reasonable response -- just by making
exploratory inquiries into the API and then traversing the discovered links
with various methods and content types.

What I've found is too often the response is not reasonable. Some of
the problems arise from the frameworks being used, in other cases it
is the implementing project.

We can fix the existing stuff in a relatively straightforward but
time consuming fashion: Use tools like gabbi to make more negative tests,
fix the bugs as they come up. Same as it ever was.

For new stuff, however, does there need to be increased awareness of
the rules and is it the job of the working group to help that
increasing along?


I think it's definitely the role of the API WG to identify places in our 
API implementations that are not following the rules, yes.


I think paraphrasing particular parts of RFCs would be my preference, 
along with examples of bad or incorrect usage.
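
For instance, the 405 guideline above can be checked with something as small as
this (a plain Python/requests sketch against an assumed endpoint and token;
gabbi expresses the same thing declaratively):

  import requests

  # Endpoint and token are assumptions, purely for illustration.
  BASE = 'http://localhost:8774/v2.1'
  TOKEN = 'an-auth-token'

  resp = requests.post(BASE + '/foo', headers={'X-Auth-Token': TOKEN})

  # A resource that only supports GET should answer 405 and advertise what
  # it does support, rather than a 404, a 500, or a framework traceback.
  assert resp.status_code == 405
  assert 'GET' in resp.headers.get('Allow', '')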


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] issues with fakelibvirt in tests

2015-02-12 Thread Daniel P. Berrange
On Thu, Feb 12, 2015 at 01:17:55PM -0500, Sean Dague wrote:
 On 02/12/2015 01:09 PM, Daniel P. Berrange wrote:
  On Thu, Feb 12, 2015 at 12:32:10PM -0500, Sean Dague wrote:
  Looking recently at the following failure -
  http://logs.openstack.org/04/154804/1/gate/gate-nova-python27/1fe94bf/console.html#_2015-02-12_15_02_19_593
 
  It appears that the fakelibvirt fixture is potentially causing races in
  tests because after the first test in a worker starts a libvirt
  connection, the libvirt python library spawns a thread which keeps
  running in a loop for the duration of the tests. This is happening
  regardless of whether or not the test in question is using libvirt (as
  in this case). Having threads thumping around in the background means
  that doing things like testing for when sleep is called can fail because
  libvirt's thread is getting in the way.
  
  libvirt-python shouldn't be spawning any threads itself - any threads
  will have been spawned by Nova.
  
 
  What's the proper method of completely tearing down all the libvirt
  resources so that when this fixture exits it will actually do that
  correctly -
  https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/fakelibvirt.py#L1181-L1202
  and not impact unrelated tests?
  
  Most likely the thread will have been created when the libvirt driver
  is setup in the tests.
  
  eg nova.virt.libvirt.driver.LibvirtDriver.init_host() method will
  call nova.virt.ibvirt.host.Host.initialize() which in turn spawns
  a background *native* thread to receive event notifications from
  libvirt.
  
  Assuming this is indeed the root cause of the thread you see, I'd
  say we want to arrange for the nova.virt.libvirt.host.Host._init_events
  method to be a no-op for the tests. This async events thread is not
  something any of the tests should need to have around in general.
 
 Yeh, we just got to a similar place after mriedem's email.
 
 I'll propose patching that out in the fakelibvirt fixture once I get
 some lunch, and make sure there is no other fall out from that.
 
 Thanks for diving in.

It is probably worth making fakelibvirt.virEventRunDefaultImpl
raise an exception by default too, so we clearly see if anything
in the test suite mistakenly runs it in the future.
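
Something along these lines in the fixture would cover both points (a rough
sketch only, not an actual patch):

  import fixtures

  class FakeLibvirtFixture(fixtures.Fixture):
      """Sketch: keep libvirt's native event thread out of unit tests."""

      def setUp(self):
          super(FakeLibvirtFixture, self).setUp()

          # Fail loudly if any test ever runs the fake event loop.
          def _explode(*args, **kwargs):
              raise Exception('virEventRunDefaultImpl must not run in tests')

          self.useFixture(fixtures.MonkeyPatch(
              'nova.tests.unit.virt.libvirt.fakelibvirt.virEventRunDefaultImpl',
              _explode))

          # Stop Host.initialize() from spawning the background event thread.
          self.useFixture(fixtures.MonkeyPatch(
              'nova.virt.libvirt.host.Host._init_events',
              lambda *args, **kwargs: None))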

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] issues with fakelibvirt in tests

2015-02-12 Thread Daniel P. Berrange
On Thu, Feb 12, 2015 at 11:49:12AM -0600, Matt Riedemann wrote:
 
 
 On 2/12/2015 11:32 AM, Sean Dague wrote:
 Looking recently at the following failure -
 http://logs.openstack.org/04/154804/1/gate/gate-nova-python27/1fe94bf/console.html#_2015-02-12_15_02_19_593
 
 It appears that the fakelibvirt fixture is potentially causing races in
 tests because after the first test in a worker starts a libvirt
 connection, the libvirt python library spawns a thread which keeps
 running in a loop for the duration of the tests. This is happening
 regardless of whether or not the test in question is using libvirt (as
 in this case). Having threads thumping around in the background means
 that doing things like testing for when sleep is called can fail because
 libvirt's thread is getting in the way.
 
 What's the proper method of completely tearing down all the libvirt
 resources so that when this fixture exits it will actually do that
 correctly -
 https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/fakelibvirt.py#L1181-L1202
 and not impact unrelated tests?
 
  -Sean
 
 
 fakelibvirt shouldn't be using libvirt-python at all since this change:
 
 https://review.openstack.org/#/c/150148/
 
 I'm not saying there isn't a thing going on, but not sure how libvirt-python
 would be involved since it's not in test-requirements.txt, unless it's in
 site-packages on the test nodes.

The log Sean links to does indeed show fakelibvirt in the stack trace

2015-02-12 15:02:19.595 |   File 
nova/tests/unit/virt/libvirt/fakelibvirt.py, line 1144, in 
virEventRunDefaultImpl
2015-02-12 15:02:19.595 | time.sleep(1)

Which almost certainly comes from the '_init_events' method on
nova.virt.libvirt.host.Host class spawning the events processing
thread.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] monkey patching strategy

2015-02-12 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all,

there were some moves recently to make monkey patching strategy sane
in neutron.

This was triggered by some bugs found when interacting with external
oslo libraries [1], and a cross project spec to make eventlet usage
sane throughout the project [2].

Specifically, instead of monkey patching stdlib in each of services
and agents (and forgetting to do so for some of them [3]), we should
monkey patch it as part of a common import (ideally, it would be any
neutron.* import).

Initially, we tried to patch it inside neutron/__init__.py [4], but
it didn't play nice with some advanced services importing from
neutron while not expecting stdlib to be patched, and so it was reverted.

So an alternative that I'm currently looking into is the Nova way.
Specifically, moving all main() functions for all agents and services
into neutron/cmd/... and monkey patching stdlib through
neutron/cmd/__init__.py.
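
Roughly, the cmd/__init__.py would then just do something like this (a sketch,
not the exact content of the patches below):

  # neutron/cmd/__init__.py
  import eventlet

  # Patch the stdlib exactly once, before any console-script entry point
  # under neutron/cmd/ imports the rest of neutron.
  eventlet.monkey_patch()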

I've sent a series of patches to do just that [5]. It was rightfully
blocked by Mark to seek broader agreement.

I encourage the community to weigh in on the direction.

[1]: https://bugs.launchpad.net/oslo.concurrency/+bug/1418541
[2]: https://review.openstack.org/154642
[3]:
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/mlnx/agent/eswitch_neutron_agent.py
[4]: https://review.openstack.org/153699
[5]:
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bug/1418541,n,z

Cheers,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU3P4QAAoJEC5aWaUY1u57A/cH/AuKbkewZy5Z0Hus2m4bClGp
4DJ37ygcY9HwGmJTLpvUyfRcDWnaO9S+6sj28Ebv49MN1w9qJ4MuQmaYA1xsFERb
aR6uKgnkiIT0FS8CVjbClEC7gN5elHCe2dcB8cakrk7uUsTJ2LP5A6rdNQqly/uN
2hkdfa1WBcAYMX6raFWD8DJ49R1MhbPr09YXXU9ApoflMY6ZywvNBzwIZEw5gqPO
Vpjb9DwevaFZ9kqzjHTrXk47CqOSYS7ZXQjS1bOGUOJFOBqNRLzl2qPX7wkBiA2N
12U4Qe3/3MvWwBig0O+mY2RwN2OtnxhK8X5tP6kbrybyOKLGUe4ZgIlvfQHI33Q=
=8pX5
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing

2015-02-12 Thread Jaume Devesa
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Following the conversation...

We have seen that glusterfs[1] and ec2api[2] use different approaches
when it comes to repository management: whereas glusterfs is a single
'devstack' directory repository, ec2api is a whole project with a
'devstack' directory in it.

We plan to migrate the 'python-neutron-plugin-midonet'[3] project to
Stackforge too. Does it make sense to add the 'devstack' directory to it?
Or do you recommend having two different repositories in
Stackforge: one for the neutron plugin and the other for the
devstack plugin?

We cannot see any big advantage or disadvantage in either of them... so
we have decided to ask the community in case someone is able to see what
we cannot.

Regards,

[1]: https://github.com/stackforge/devstack-plugin-glusterfs
[2]: https://github.com/stackforge/ec2-api
[3]: https://github.com/midonet/python-neutron-plugin-midonet

El 11/02/15 a las 17:43, Jaume Devesa escribió:
 Hello,
 
 I'm working in the same job as Kyle for the midonet plugin, but
 first I need to do some changes in devstack. (Sean's review on my
 patch[1] has lead me to this conversation).
 
 After talking with Lucas, (Midokura's responsible of Third-party 
 testing), we have a question about this that involve the
 third-party folks: if we get our own Jenkins job that tests
 devstack with midonet and we include this job in the Neutron's gate
 (as non-voting, of course), that would be considered Neutron
 Third-party testing?
 
 Can we chat about this on next Monday's third party meeting?
 
 Regards,
 
 [1]: https://review.openstack.org/#/c/152876
 
 El 06/02/15 a las 22:54, Kyle Mestery escribió:
 On Fri, Feb 6, 2015 at 1:36 PM, Sean Dague s...@dague.net
 wrote:
 
 For those that didn't notice, on the Devstack team we've
 started to push back on new in-tree support for all the
 features. That's intentional. We've got an external plugin
 interface now -
 
 http://docs.openstack.org/developer/devstack/plugins.html#externally-hosted-plugins




 
,
 and have a few projects like the ec2api and glusterfs that are
  successfully using it. Our Future direction is to do more of 
 this - https://review.openstack.org/#/c/150789/
 
 The question people ask a lot is 'but, how do I do a gate job 
 with the external plugin?'.
 
 Starting with the stackforge/ec2api we have an example up on
 how to do that: https://review.openstack.org/#/c/153659/
 
 The important bits are as follows:
 
 1. the git repo that you have your external plugin in *must* be
  in gerrit. stackforge is fine, but it has to be hosted in the
  OpenStack infrastructure.
 
 2. The job needs to add your PROJECT to the projects list,
 i.e.:
 
 export PROJECTS=stackforge/ec2-api $PROJECTS
 
 3. The job needs to add a DEVSTACK_LOCAL_CONFIG line for the 
 plugin enablement:
 
 export DEVSTACK_LOCAL_CONFIG=enable_plugin ec2-api 
 git://git.openstack.org/stackforge/ec2-api
 
 Beyond that you can define your devstack job however you like. 
 It can test with Tempest. It can instead use a post_test_hook 
 for functional testing. Whatever is appropriate for your 
 project.
 
 This is awesome Sean! Thanks for the inspiration here. In
 fact, I just
 pushed a series of patches [1] [2] which do the same for the 
 networking-odl stackforge project.
 
 Thanks, Kyle
 
 [1] https://review.openstack.org/#/c/153704/ [2] 
 https://review.openstack.org/#/c/153705/
 
 -Sean
 
 -- Sean Dague http://dague.net
 
 __




 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 
 
 __

 
 
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 

- -- 
Jaume Devesa
Software Engineer, Midokura
-BEGIN PGP SIGNATURE-
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJU3IopAAoJEPvO/ogkgqHBRRIP/0PcoPvFBBOg+TdWLGQFS7jf
4cgPe+PAnUFUtaEn78YTYSp5UkK6TzDEA9I0TqD88a2MK7rP+x0XAxtlTBRpIB0s
AbSIJd6Eg0GCnqdPxztzeRcxagr+oTSys79cYd49JWMBfUYgCDjdQBYHq5IacvgM
PG/8EiHWdQAo1nmxXQhy18p7+nMnntt0BBkLBwFPFmo/QnjCqveN0bvHSCcIhLRv
heVnK/clvPzeyx+5tRT67dEmaP0X+xtSdA2Y/HNJIwUl1m6Z6L9JwIyuffHi/py5
GUxScogIHjids4i9IU+EmO/ltrfwZ/Kad5EjareSSPenvuv0Bk78RiYGUhBjvwdw
CrnlK6fCq5SACtw47kgWMyc1UM+Y5gIw/DguX1eQjaKXtr2Mzp0ij4p/Swi5Meo+
Q09tsKbcNWKDtDUiwmKWTkuhe1+V3iYwLARivs7DygthHWaAs2JnDwPiLO/L07i/
dGVel6hrwsHmIJGafLyxGD8pVeFcIHOx0Bc8/AxzybH8Un8kokwNmTMqfZwthWOK
Az61ofATI53+lDy6jo9ilcm1FNifuUhOekRJqBhyEzJqd5ZZwtGCI1sJv5YGanA/
GYAFGPP0noL+nnavQoEbIjrlurXrYX7Rb5AZT643KSuerMy29J7ReChcuV9olkZd
bqQ3f1PqIv/ZUx+qVvHu
=45Da
-END PGP SIGNATURE-


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-12 Thread Attila Fazekas


- Original Message -
From: Attila Fazekas afaze...@redhat.com
To: Jay Pipes jaypi...@gmail.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, Pavel Kholkin pkhol...@mirantis.com
Sent: Thursday, February 12, 2015 11:52:39 AM
Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody 
should know about Galera





- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: Attila Fazekas afaze...@redhat.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, Pavel
 Kholkin pkhol...@mirantis.com
 Sent: Wednesday, February 11, 2015 9:52:55 PM
 Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody 
 should know about Galera
 
 On 02/11/2015 06:34 AM, Attila Fazekas wrote:
  - Original Message -
  From: Jay Pipes jaypi...@gmail.com
  To: Attila Fazekas afaze...@redhat.com
  Cc: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org, Pavel
  Kholkin pkhol...@mirantis.com
  Sent: Tuesday, February 10, 2015 7:32:11 PM
  Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody
  should know about Galera
 
  On 02/10/2015 06:28 AM, Attila Fazekas wrote:
  - Original Message -
  From: Jay Pipes jaypi...@gmail.com
  To: Attila Fazekas afaze...@redhat.com, OpenStack Development
  Mailing
  List (not for usage questions)
  openstack-dev@lists.openstack.org
  Cc: Pavel Kholkin pkhol...@mirantis.com
  Sent: Monday, February 9, 2015 7:15:10 PM
  Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things
  everybody
  should know about Galera
 
  On 02/09/2015 01:02 PM, Attila Fazekas wrote:
  I do not see why not to use `FOR UPDATE` even with multi-writer or
  Is the retry/swap way really solves anything here.
  snip
  Am I missed something ?
 
  Yes. Galera does not replicate the (internal to InnnoDB) row-level locks
  that are needed to support SELECT FOR UPDATE statements across multiple
  cluster nodes.
 
  Galere does not replicates the row-level locks created by UPDATE/INSERT
  ...
  So what to do with the UPDATE?
 
  No, Galera replicates the write sets (binary log segments) for
  UPDATE/INSERT/DELETE statements -- the things that actually
  change/add/remove records in DB tables. No locks are replicated, ever.
 
  Galera does not do any replication at UPDATE/INSERT/DELETE time.
 
  $ mysql
  use test;
  CREATE TABLE test (id integer PRIMARY KEY AUTO_INCREMENT, data CHAR(64));
 
  $(echo 'use test; BEGIN;'; while true ; do echo 'INSERT INTO test(data)
  VALUES (test);'; done )  | mysql
 
  The writer1 is busy, the other nodes did not noticed anything about the
  above pending
  transaction, for them this transaction does not exists as long as you do
  not call a COMMIT.
 
  Any kind of DML/DQL you issue without a COMMIT does not happened in the
  other nodes perspective.
 
  Replication happens at COMMIT time if the `write sets` is not empty.
 
 We're going in circles here. I was just pointing out that SELECT ... FOR
 UPDATE will never replicate anything. INSERT/UPDATE/DELETE statements
 will cause a write-set to be replicated (yes, upon COMMIT of the
 containing transaction).
 
 Please see my repeated statements in this thread and others that the
 compare-and-swap technique is dependent on issuing *separate*
 transactions for each SELECT and UPDATE statement...
 
  When a transaction wins a voting, the other nodes rollbacks all transaction
  which had a local conflicting row lock.
 
 A SELECT statement in a separate transaction does not ever trigger a
 ROLLBACK, nor will an UPDATE statement that does not match any rows.
 That is IMO how increased throughput is achieved in the compare-and-swap
 technique versus the SELECT FOR UPDATE technique.
 
Yes, I mentioned this approach in one bug [0].

But the related changes on review actually work as I said [1][2][3],
and the SELECT is not in a separate, dedicated transaction.


[0] https://bugs.launchpad.net/neutron/+bug/1410854 [sorry I sent a wrong link 
before]
[1] https://review.openstack.org/#/c/143837/
[2] https://review.openstack.org/#/c/153558/
[3] https://review.openstack.org/#/c/149261/
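
For clarity, the compare-and-swap pattern being discussed is roughly the
following (a sketch with SQLAlchemy Core and made-up table/column names, not
the code in the reviews above):

  def compare_and_swap(engine, table, row_id, expected_status, new_status):
      # No SELECT ... FOR UPDATE: the UPDATE itself carries the "compare"
      # in its WHERE clause and runs in its own short transaction.
      upd = (table.update()
                  .where(table.c.id == row_id)
                  .where(table.c.status == expected_status)   # compare
                  .values(status=new_status))                 # swap
      with engine.begin() as conn:
          result = conn.execute(upd)
      # rowcount == 0 means another writer changed the row first; the caller
      # re-reads the record and retries instead of relying on a row lock.
      return result.rowcount == 1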

 -jay
 
 -jay
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Ask for help with devstack error

2015-02-12 Thread liuxinguo
Our CI failed when building devstack; the error is about “Unauthorized”. 
The following is a segment of the devstack log:


2015-02-12 11:16:14.639 | + is_service_enabled c-api
2015-02-12 11:16:14.646 | + return 0
2015-02-12 11:16:14.646 | + is_service_enabled tls-proxy
2015-02-12 11:16:14.646 | + _run_process c-vol '/usr/local/bin/cinder-volume 
--config-file /etc/cinder/cinder.conf' ''
2015-02-12 11:16:14.647 | + local service=c-vol
2015-02-12 11:16:14.647 | + local 'command=/usr/local/bin/cinder-volume 
--config-file /etc/cinder/cinder.conf'
2015-02-12 11:16:14.647 | + local group=
2015-02-12 11:16:14.647 | + exec
2015-02-12 11:16:14.647 | + exec
2015-02-12 11:16:14.658 | + return 1
2015-02-12 11:16:14.658 | + create_volume_types
2015-02-12 11:16:14.659 | + is_service_enabled c-api
2015-02-12 11:16:14.687 | + return 0
2015-02-12 11:16:14.688 | + [[ -n lvm:default ]]
2015-02-12 11:16:14.688 | + local be be_name be_type
2015-02-12 11:16:14.688 | + for be in '${CINDER_ENABLED_BACKENDS//,/ }'
2015-02-12 11:16:14.688 | + be_type=lvm
2015-02-12 11:16:14.688 | + be_name=default
2015-02-12 11:16:14.689 | + cinder type-create default
2015-02-12 11:16:22.734 | ERROR: Unauthorized (HTTP 401) (Request-ID: 
req-33c9392a-046f-4894-b22a-1a119eacec62)‍

In c-api.log, I found the following error from “auth_token”:

2015-02-12 03:16:19.722 19912 WARNING keystonemiddleware.auth_token [-] 
Retrying on HTTP connection exception: SSL exception connecting to 
https://127.0.0.1:35357/
2015-02-12 03:16:20.723 19912 DEBUG keystoneclient.session [-] REQ: curl -g -i 
--cacert /opt/stack/data/ca-bundle.pem -X GET https://127.0.0.1:35357/ -H 
Accept: application/json -H User-Agent: python-keystoneclient 
_http_log_request 
/opt/stack/new/python-keystoneclient/keystoneclient/session.py:190
2015-02-12 03:16:20.724 19912 DEBUG urllib3.util.retry [-] Converted retries 
value: 0 - Retry(total=0, connect=None, read=None, redirect=0) from_int 
/usr/local/lib/python2.7/dist-packages/urllib3/util/retry.py:155
2015-02-12 03:16:22.730 19912 ERROR keystonemiddleware.auth_token [-] HTTP 
connection exception: SSL exception connecting to https://127.0.0.1:35357/
2015-02-12 03:16:22.731 19912 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token‍

Can anyone give me some help?
Thanks!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing

2015-02-12 Thread Sean Dague
On 02/12/2015 06:10 AM, Jaume Devesa wrote:
 Following the conversation...
 
 We have seen that glusterfs[1] and ec2api[2] use different approach
 when it comes to repository managing: whereas glusterfs is a single
 'devstack' directory repository, ec2api is a whole project with a
 'devstack' directory on it.
 
 We plan to migrate 'python-neutron-plugin-midonet'[3] project to
 Stackforge too. It makes sense to add the 'devstack' directory on it?

Yes, the intent was always to put the devstack directory inside existing
git trees that you would want to clone to add to your devstack environment.

 Or do you recommend us to have two different repositories in
 Stackforge: one for the neutron plugin and the other one for the
 devstack plugin?
 
 We can not see any big advantage or disadvantage in any of them... so
 we have decided to ask to the community if someone is able see what we
 can not see.

I believe in your case if you *don't* do it this way, your devstack
plugin would then need to clone your 'python-neutron-plugin-midonet' git
tree. Which is kind of janky. It also *won't* work in the OpenStack CI
system.

The reason 'devstack-plugin-glusterfs' is done that way is because there
is no other active git trees for glusterfs support. Glusterfs driver
support is in the main Cinder tree -
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/glusterfs.py.
For projects with a vast array of in tree drivers, doing this as a
dedicated stackforge project per driver is probably best practice.

However, in the neutron case where the drivers are out of tree, I
believe putting the devstack support in the driver tree is best practice.

-Sean

 
 Regards,
 
 [1]: https://github.com/stackforge/devstack-plugin-glusterfs
 [2]: https://github.com/stackforge/ec2-api
 [3]: https://github.com/midonet/python-neutron-plugin-midonet
 
 El 11/02/15 a las 17:43, Jaume Devesa escribió:
 Hello,
 
 I'm working in the same job as Kyle for the midonet plugin, but
 first I need to do some changes in devstack. (Sean's review on my
 patch[1] has lead me to this conversation).
 
 After talking with Lucas, (Midokura's responsible of Third-party 
 testing), we have a question about this that involve the
 third-party folks: if we get our own Jenkins job that tests
 devstack with midonet and we include this job in the Neutron's gate
 (as non-voting, of course), that would be considered Neutron
 Third-party testing?
 
 Can we chat about this on next Monday's third party meeting?
 
 Regards,
 
 [1]: https://review.openstack.org/#/c/152876
 
 El 06/02/15 a las 22:54, Kyle Mestery escribió:
 On Fri, Feb 6, 2015 at 1:36 PM, Sean Dague s...@dague.net
 wrote:
 
 For those that didn't notice, on the Devstack team we've
 started to push back on new in-tree support for all the
 features. That's intentional. We've got an external plugin
 interface now -

 http://docs.openstack.org/developer/devstack/plugins.html#externally-hosted-plugins



 

 ,
 and have a few projects like the ec2api and glusterfs that are
  successfully using it. Our Future direction is to do more of 
 this - https://review.openstack.org/#/c/150789/

 The question people ask a lot is 'but, how do I do a gate job 
 with the external plugin?'.

 Starting with the stackforge/ec2api we have an example up on
 how to do that: https://review.openstack.org/#/c/153659/

 The important bits are as follows:

 1. the git repo that you have your external plugin in *must* be
  in gerrit. stackforge is fine, but it has to be hosted in the
  OpenStack infrastructure.

 2. The job needs to add your PROJECT to the projects list,
 i.e.:

 export PROJECTS=stackforge/ec2-api $PROJECTS

 3. The job needs to add a DEVSTACK_LOCAL_CONFIG line for the 
 plugin enablement:

 export DEVSTACK_LOCAL_CONFIG=enable_plugin ec2-api 
 git://git.openstack.org/stackforge/ec2-api

 Beyond that you can define your devstack job however you like. 
 It can test with Tempest. It can instead use a post_test_hook 
 for functional testing. Whatever is appropriate for your 
 project.

 This is awesome Sean! Thanks for the inspiration here. In
 fact, I just
 pushed a series of patches [1] [2] which do the same for the 
 networking-odl stackforge project.
 
 Thanks, Kyle
 
 [1] https://review.openstack.org/#/c/153704/ [2] 
 https://review.openstack.org/#/c/153705/
 
 -Sean

 -- Sean Dague http://dague.net

 __



 

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 



 
 
 __
 

 
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 
 
 
 
 

Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Nikhil Manchanda
On Wed, Feb 11, 2015 at 1:55 AM, Flavio Percoco fla...@redhat.com wrote:
 [...]

 ## Keep discussions open

 I don't believe there's anything wrong about kicking off some
 discussions in private channels about specs/bugs. I don't believe
 there's anything wrong in having calls to speed up some discussions.
 HOWEVER, I believe it's *completely* wrong to consider those private
 discussions sufficient. If you have had that kind of private
 discussions, if you've discussed a spec privately and right after you
 went upstream and said: This has been discussed in a call and it's
 good to go, I beg you to stop for 2 seconds and reconsider that. I
 don't believe you were able to fit all the community in that call and
 that you had enough consensus.


Completely agree with what you've said here. I think there's a place for
private conversation (eg. discussing a security issue that corresponds
to a CVE, giving folks honest feedback without public shaming, quickly
pinging someone, etc.) but when it comes to discussions that have a
bearing on a project (albeit however minimal) we need to ensure that all
of those happen in the open, so that any interested parties are able to
participate. Personally, I have not seen any examples of private talks
which have led to making decisions in the absence of community
discussion, but if this is happening -- we need to put a definitive stop
to it.


 [...]

 ## Mailing List vs IRC Channel

 I get it, our mailing list is freaking busy, keeping up with it is
 hard and time consuming and that leads to lots of IRC discussions. I
 don't think there's anything wrong with that but I believe it's wrong
 to expect *EVERYONE* to be in the IRC channel when those discussions
 happen.

 If you are discussing something on IRC that requires the attention of
 most of your project's community, I highly recommend you to use the
 mailing list as oppose to pinging everyone independently and fighting
 with time zones. Using IRC bouncers as a replacement for something
 that should go to the mailing list is absurd. Please, use the mailing
 list and don't be afraid of having a bigger community chiming in in
 your discussion.  *THAT'S A GOOD THING*

 Changes, specs, APIs, etc. Everything is good for the mailing list.
 We've fought hard to make this community grow, why shouldn't we take
 advantage of it?


We should absolutely take advantage of all forms of communication, and
all the tools that we have at our disposal so that we can foster more
open and clear communication. However, I do realize that different
strokes work for different folks. While many might find it more
effective to communicate over email, others find IRC, or even a
VOIP call a better way of ironing out differences. I don't think that
makes any one method of communication better than others. While
I personally believe that every discussion or design conversation that
happens on IRC does not need to be taken to the mailing list, there's
absolutely nothing that should prohibit anyone in the community from
taking a discussion from IRC (or anywhere else) to the mailing list at
_any_ time.


 ## Cores are *NOT* special

 At some point, for some reason that is unknown to me, this message
 changed and the feeling of core's being some kind of superheros became
 a thing. It's gotten far enough to the point that I've came to know
 that some projects even have private (flagged with +s), password
 protected, irc channels for core reviewers.
 [...]

Completely agree with you about cores not being super-heroes. On the
latter point though, I'd consider that there's certainly a reasonable
subset of conversations that are okay to have in private (like security
related issues, and some other examples already cited above). However,
if the existence of machinery which makes having such conversations
convenient (hangout, private IRC, face-to-face in a closed room,
whatever) seems to have a detrimental effect on the spirit of openness
in our community, then I would err on the side of caution and dismantle
that machinery rather than let our commitment to openness come under
fire.


 [...]

 All the above being said, I'd like to thank everyone who fights for
 the openness of our community and encourage everyone to make that a
 must have thing in each sub-community. You don't need to be
 core-reviewer or PTL to do so. Speak up and help keeping the community
 as open as possible.

 Cheers,
 Flavio

Thanks for putting this together Flavio -- a timely reminder to strive
towards keeping our community open and inclusive. It's much appreciated!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Priority resizing instance on same host

2015-02-12 Thread Rui Chen
@Manickam, thank you for the information :)

+1 for the use case
-1 for the approach in patch https://review.openstack.org/#/c/117116/

I think we should try to filter the current host and automatically fall back
to selecting a host in nova-scheduler if the current host is not suitable.
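
In other words, something along these lines on the scheduler side (assumed
helper names, just to illustrate the idea; not real nova-scheduler code):

  def pick_resize_destination(context, instance, new_flavor, candidate_hosts):
      if CONF.allow_resize_to_same_host:
          current = [h for h in candidate_hosts if h.host == instance.host]
          if current and passes_filters(current[0], new_flavor):
              return current[0]          # resize in place, no image copy
      return schedule(context, new_flavor, candidate_hosts)  # normal selection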

2015-02-12 16:17 GMT+08:00 Manickam, Kanagaraj kanagaraj.manic...@hp.com:

  Hi,



 There is a patch on resize https://review.openstack.org/#/c/117116/

 To address the resize,  there are some suggestions and please refer the
 review comments on this patch.



 Regards

 Kanagaraj M



 *From:* Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
 *Sent:* Thursday, February 12, 2015 1:25 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] Priority resizing instance on same
 host



 On Thursday, February 12, 2015, Rui Chen chenrui.m...@gmail.com wrote:

  Currently, resizing instance cause migrating from the host that the
 instance run on to other host, but maybe the current host is suitable for
 new flavor. Migrating will lead to copy image between hosts if no shared
 storage, it waste time.

 I think that priority resizing instance on the current host may be
 better if the host is suitable.

 The logic like this:



 if CONF.allow_resize_to_same_host:
     filter current host
     if suitable:
         resize on current host
     else:
         select a host
         resize on the host



 I don't know whether there have been some discussion about this
 question. Please let me know what do you think. If the idea is no problem,
 maybe I can register a blueprint to implement it.



 But the nova.conf flag for that already exists?



 What I would suggest, however, is that some logic is put in to determine
 whether the disk size remains the same while the cpu/ram size is changing -
 if so, then resize the instance on the host without the disk snapshot and
 copy.



 --

 Jesse Pretorius
 mobile: +44 7586 906045
 email: jesse.pretor...@gmail.com
 skype: jesse.pretorius



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] following up on releasing kilo milestone 2

2015-02-12 Thread Thierry Carrez
sean roberts wrote:
 so 
 git checkout master
 git pull https://git.openstack.org/stackforge/congress.git
 dbef982ea72822e0b7acc16da9b6ac89d3cf3530
 git tag -s 2015.1.0b2
 git push gerrit 2015.1.0b2

You could also try to use the milestone.sh release script I use:

http://git.openstack.org/cgit/openstack-infra/release-tools/tree/

It will push the tag, wait for the tarball to be generated, turn all
FixCommitted bugs in Launchpad into FixReleased for the milestone, and
upload the tarball to Launchpad.

It relies on a few assumptions on the Launchpad part (in particular that
only the Launchpad bugs fixed during the current milestone are in
FixCommitted state, and that your Launchpad milestone is called
kilo-2), but otherwise should work.

Don't hesitate to ask me questions about it if you have any.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Gary Kotton
Hi,
I think that the filters should be applied to the list of hosts that are in 
'force_hosts'. I am not sure if this is what you are suggesting. If this is not 
the case, then it sounds like a bug.
Thanks
Gary

From: Rui Chen chenrui.m...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, February 12, 2015 at 11:05 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] Question about force_host skip filters

Hi:

   If we boot instance with 'force_hosts', the force host will skip all 
filters, looks like that it's intentional logic, but I don't know the reason.

   I'm not sure that the skipping logic is apposite, I think we should remove 
the skipping logic, and the 'force_hosts' should work with the scheduler, test 
whether the force host is appropriate ASAP. Skipping filters and postponing the 
booting failure to nova-compute is not advisable.

On the other side, more and more options had been added into flavor, like 
NUMA, cpu pinning, pci and so on, forcing a suitable host is more and more 
difficult.


Best Regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2015-02-12 00:13:35 -0800:
 On 11/02/15 09:37 -0800, Clint Byrum wrote:
 Excerpts from Stefano Maffulli's message of 2015-02-11 06:14:39 -0800:
  On Wed, 2015-02-11 at 10:55 +0100, Flavio Percoco wrote:
   This email is dedicated to the openness of our community/project.
 
  It's good to have a reminder every now and then. Thank you Flavio for
  caring enough to notice bad patterns and for raising a flag.
 
   ## Keep discussions open
  
   I don't believe there's anything wrong about kicking off some
   discussions in private channels about specs/bugs. I don't believe
   there's anything wrong in having calls to speed up some discussions.
   HOWEVER, I believe it's *completely* wrong to consider those private
   discussions sufficient.
  [...]
 
  Well said. Conversations can happen anywhere and any time, but they
  should stay in open and accessible channels. Consensus needs to be built
  and decisions need to be shared, agreed upon by the community at large
  (and mailing lists are the most accessible media we have).
 
  That said, it's is very hard to generalize and I'd rather deal/solve
  specific examples. Sometimes, I'm sure there are episodes when a fast
  decision was needed and a limited amount of people had to carry the
  burden of responsibility. Life is hard, software development is hard and
  general rules sometimes need to be adapted to the reality. Again, too
  much generalization here for what I'm confortable with.
 
  Maybe it's worth repeating that I'm personally (and in my role)
  available to listen and mediate in cases when communication seems to
  happen behind closed doors. If you think something unhealthy is
  happening, talk to me (confidentiality assured).
 
   ## Mailing List vs IRC Channel
  
   I get it, our mailing list is freaking busy, keeping up with it is
   hard and time consuming and that leads to lots of IRC discussions.
 
  Not sure I agree with the causality but, the facts are those: traffic on
  the list and on IRC is very high (although not increasing anymore
  [1][2]).
 
I
   don't think there's anything wrong with that but I believe it's wrong
   to expect *EVERYONE* to be in the IRC channel when those discussions
   happen.
 
  Email is hard, I have the feeling that the vast majority of people use
  bad (they all suck, no joke) email clients. Lots and lots of email is
  even worse. Most contributors commit very few patches: the investment
  for them to configure their MUA to filter our traffic is too high.
 
  I have added more topics today to the openstack-dev list[3]. Maybe,
  besides filtering on the receiving end, we may spend some time
  explaining how to use mailman topics? I'll draft something on Ask, it
  may help those that have limited interest in OpenStack.
 
  What else can we do to make things better?
 
 
 I am one of those people who has a highly optimized MUA for mailing list
 reading. It is still hard. Even with one keypress to kill threads from
 view forever, and full text index searching, I still find it takes me
 an hour just to filter the don't want to see from the want to see
 threads each day.
 
 The filtering on the list-server side I think is not known by everybody,
 and it might be a good idea to socialize it even more, and maybe even
 invest in making the UI for it really straight forward for people to
 use.
 
 That said, even if you just choose [all], and [yourproject], some
 [yourproject] tags are pretty busy.
 
 Would it be helpful if we share our email clients configs so that
 others can use them? I guess we could have a section for this in the
 wiki page.
 
 I'm sure each one of us has his/her own server-side filters so, I
 guess we could start with those.
 

Great idea Flavio. I went ahead and created a github repository with my
sup-mail hook which tags everything with openstack-dev. The mail client
itself is where most of the magic happens, but being able to read all
the openstack-dev things and then all the not openstack-dev things
is quite important to my email workflow.

I called the repository FERK for Firehose Email Reading Kit. I'm
happy to merge pull requests if people want to share their other email
client configurations and also things like procmail filters.

https://github.com/SpamapS/ferk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Flavio Percoco

On 12/02/15 01:41 -0800, Nikhil Manchanda wrote:


On Wed, Feb 11, 2015 at 1:55 AM, Flavio Percoco fla...@redhat.com wrote:

[...]

## Keep discussions open

I don't believe there's anything wrong about kicking off some
discussions in private channels about specs/bugs. I don't believe
there's anything wrong in having calls to speed up some discussions.
HOWEVER, I believe it's *completely* wrong to consider those private
discussions sufficient. If you have had that kind of private
discussions, if you've discussed a spec privately and right after you
went upstream and said: This has been discussed in a call and it's
good to go, I beg you to stop for 2 seconds and reconsider that. I
don't believe you were able to fit all the community in that call and
that you had enough consensus.



Completely agree with what you've said here. I think there's a place for
private conversation (eg. discussing a security issue that corresponds
to a CVE, giving folks honest feedback without public shaming, quickly
pinging someone, etc.) but when it comes to discussions that have a
bearing on a project (albeit however minimal) we need to ensure that all
of those happen in the open, so that any interested parties are able to
participate. Personally, I have not seen any examples of private talks
which have led to making decisions in the absence of community
discussion, but if this is happening -- we need to put a definitive stop
to it.


I have seen it and I've also seen things like: This was discussed in
a call and it's good to go

CVE's are a special exception and I'd even argue on the need of
private conversations there. However, lets say there's a private IRC
discussion to quickly solve the CVE. Right after such discussion, the
feedback *has* to be put on the bug otherwise people reviewing the
patch - or even just following the bug - will be missing some context
on the proposed solution or state of the discussion. This fallsback to
the point that it'll probably take as much time to discuss something
privately and then explain it to others than simply keep it open.

That's why we have private bugs for CVEs.

As far as giving honest feedback goes, that's a personal conversation
and I don't really care how/where that happens, as long as there are no
discussions about the project itself. If feedback w.r.t. the project -
not an individual's comments, performance, work, code, etc. - is being
discussed, it can perfectly well happen in the public channel.


[...]

## Mailing List vs IRC Channel

I get it, our mailing list is freaking busy, keeping up with it is
hard and time consuming and that leads to lots of IRC discussions. I
don't think there's anything wrong with that but I believe it's wrong
to expect *EVERYONE* to be in the IRC channel when those discussions
happen.

If you are discussing something on IRC that requires the attention of
most of your project's community, I highly recommend you to use the
mailing list as oppose to pinging everyone independently and fighting
with time zones. Using IRC bouncers as a replacement for something
that should go to the mailing list is absurd. Please, use the mailing
list and don't be afraid of having a bigger community chiming in in
your discussion.  *THAT'S A GOOD THING*

Changes, specs, APIs, etc. Everything is good for the mailing list.
We've fought hard to make this community grow, why shouldn't we take
advantage of it?



We should absolutely take advantage of all forms of communication, and
all the tools that we have at our disposal so that we can foster more
open and clear communication. However, I do realize that different
strokes work for different folks. While many might find it more
effective to communicate over email, others find IRC, or even a
VOIP call a better way of ironing out differences. I don't think that
makes any one method of communication better than others. While
I personally believe that every discussion or design conversation that
happens on IRC does not need to be taken to the mailing list, there's
absolutely nothing that should prohibit anyone in the community from
taking a discussion from IRC (or anywhere else) to the mailing list at
_any_ time.


Probably not every decision but I'd go as far as saying that almost
all of them. The reason goes even beyond just openness. The mailing
list also brings history, indexed contents, etc. Good thing that many
channels have logging enabled.

The important bit, though, is that email is meant for asynchronous
communication and IRC isn't. If things that require the intervention
of other folks from the community are being discussed and those folks
are not on IRC, it'd be wrong to consider the topic as discussed.

Will that slow down the work? Yes, likely, but that's the trade-off
we're paying to keep things right and keep this community as a place
where we all feel comfortable to work in.

There's a lot of common sense in the decision of moving discussions to
the m-l or not. However, when in doubt, I'd say the mailing list is
the 

Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-12 Thread Attila Fazekas




- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: Attila Fazekas afaze...@redhat.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, Pavel
 Kholkin pkhol...@mirantis.com
 Sent: Wednesday, February 11, 2015 9:52:55 PM
 Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody 
 should know about Galera
 
 On 02/11/2015 06:34 AM, Attila Fazekas wrote:
  - Original Message -
  From: Jay Pipes jaypi...@gmail.com
  To: Attila Fazekas afaze...@redhat.com
  Cc: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org, Pavel
  Kholkin pkhol...@mirantis.com
  Sent: Tuesday, February 10, 2015 7:32:11 PM
  Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody
  should know about Galera
 
  On 02/10/2015 06:28 AM, Attila Fazekas wrote:
  - Original Message -
  From: Jay Pipes jaypi...@gmail.com
  To: Attila Fazekas afaze...@redhat.com, OpenStack Development
  Mailing
  List (not for usage questions)
  openstack-dev@lists.openstack.org
  Cc: Pavel Kholkin pkhol...@mirantis.com
  Sent: Monday, February 9, 2015 7:15:10 PM
  Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things
  everybody
  should know about Galera
 
  On 02/09/2015 01:02 PM, Attila Fazekas wrote:
  I do not see why not to use `FOR UPDATE` even with multi-writer or
  Is the retry/swap way really solves anything here.
  snip
  Am I missed something ?
 
  Yes. Galera does not replicate the (internal to InnnoDB) row-level locks
  that are needed to support SELECT FOR UPDATE statements across multiple
  cluster nodes.
 
  Galere does not replicates the row-level locks created by UPDATE/INSERT
  ...
  So what to do with the UPDATE?
 
  No, Galera replicates the write sets (binary log segments) for
  UPDATE/INSERT/DELETE statements -- the things that actually
  change/add/remove records in DB tables. No locks are replicated, ever.
 
  Galera does not do any replication at UPDATE/INSERT/DELETE time.
 
  $ mysql
  use test;
  CREATE TABLE test (id integer PRIMARY KEY AUTO_INCREMENT, data CHAR(64));
 
  $(echo 'use test; BEGIN;'; while true ; do echo 'INSERT INTO test(data)
  VALUES (test);'; done )  | mysql
 
  The writer1 is busy, the other nodes did not noticed anything about the
  above pending
  transaction, for them this transaction does not exists as long as you do
  not call a COMMIT.
 
  Any kind of DML/DQL you issue without a COMMIT does not happened in the
  other nodes perspective.
 
  Replication happens at COMMIT time if the `write sets` is not empty.
 
 We're going in circles here. I was just pointing out that SELECT ... FOR
 UPDATE will never replicate anything. INSERT/UPDATE/DELETE statements
 will cause a write-set to be replicated (yes, upon COMMIT of the
 containing transaction).
 
 Please see my repeated statements in this thread and others that the
 compare-and-swap technique is dependent on issuing *separate*
 transactions for each SELECT and UPDATE statement...
 
  When a transaction wins a voting, the other nodes rollbacks all transaction
  which had a local conflicting row lock.
 
 A SELECT statement in a separate transaction does not ever trigger a
 ROLLBACK, nor will an UPDATE statement that does not match any rows.
 That is IMO how increased throughput is achieved in the compare-and-swap
 technique versus the SELECT FOR UPDATE technique.
 
Yes, I mentioned this approach in one bug [0].

But the related changes on review actually work as I said [1][2][3],
and the SELECT is not in a separate, dedicated transaction.


[0] https://blueprints.launchpad.net/nova/+spec/lock-free-quota-management
[1] https://review.openstack.org/#/c/143837/
[2] https://review.openstack.org/#/c/153558/
[3] https://review.openstack.org/#/c/149261/

 -jay
 
 -jay
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cache for packages on master node

2015-02-12 Thread Bartłomiej Piotrowski
On 02/10/2015 03:24 PM, Tomasz Napierala wrote:
 Hi,
 
 We are currently redesigning our apporach to upstream distributions and 
 obviusly we will need some cache system for packages on master node. It 
 should work for deb and rpm packages, and be able to serve up to 200 nodes.
 I know we had bad experience in the past, can you guys share your thought on 
 that?
 I just collected what was mentioned in other discussions:
 - approx
 - squid
 - apt-cacher-ng
 - ?
 
 Regards,
 

Yesterday I tested apt-cacher-ng on my personal laptop with the help of
a bunch of virtual machines running Ubuntu 14.04 and a proxy on the host
system. When 14 nodes started to request packages simultaneously,
apt-cacher-ng got stuck and package installation failed with a
timeout.

Approx doesn't seem to be actively developed and won't give us any
advantage if we decide to use a similar approach for CentOS.

I vote for squid as a transparent proxy.

Cheers,
Bartłomiej

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cache for packages on master node

2015-02-12 Thread Tomasz Napierala

 On 10 Feb 2015, at 23:02, Andrew Woodward xar...@gmail.com wrote:
 
 previously we used squid in 3.0 and before. The main problem is that the 
 deployment would proceed even if not all the packages were cached or even 
 available on the remote. This often led to broken deployments that were 
 hard to debug and a waste of a lot of time. This _MUST_ be resolved or we will 
 re-introduce the horrible workflow that we put all the packages on 
 the system to avoid in the first place.

Anyway, we need to ensure our QA is run against a fresh mirror; that would prevent 
a lot of problems. We are also thinking about how the situation in the field can differ 
from our labs and QA infra - there might be differences indeed.

 I think we need to add a requirement that we need to be able to:
 a) pre-populate the cacher 
 b) not start the deployment until we either have every package in 
 the cache (eiew) or at least know every package is currently reachable (or 
 allow the user to select either as a deployment criterion)

This sounds to me like creating a local mirror ;) We don’t want to do this.
We are thinking about a mirror verification tool, it was mentioned by different 
guys already. Do you really think we should prepopulate the cache? I think the first 
node deployment will fetch a lot of packages, and the other nodes will be easier. 
Once we have a prototype, we will see some numbers.

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2015-02-12 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow
Friday UTC . 

We encourage cloud operators and those who use the REST API, such as
SDK developers and others who are interested in the future of the
API, to participate.

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 10:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Feature Freeze Exception request for x509 keypairs

2015-02-12 Thread Claudiu Belu
Hello.

I would like to ask for a FFE for the x509 keypairs blueprint: 
https://blueprints.launchpad.net/nova/+spec/keypair-x509-certificates

This blueprint is split up into 3 commits:

[1] Database migration: previously merged, but had to be reverted because of a 
small issue. Everything is fixed, original reverter Johannes Erdfelt gave his 
+1, currently the commit has a +2. https://review.openstack.org/#/c/150800/

[2] Nova-API change: It uses the microversioning API and it has been decided to 
be the first microversioning commit, since it is closest to merge. Christopher 
Yeoh reviewed and helped with this commit. https://review.openstack.org/#/c/140313/

[3] X509 keypair implementation: Simple commit, it had a +2 on a previous 
commit. https://review.openstack.org/#/c/136869/

I also want to point out that this blueprint targets all the drivers, not just 
Hyper-V. This blueprint targets all the users that desire to deploy instances 
with Windows guests and desire password-less authentication, the same way users 
can ssh into Linux-type guests.

Best regards,

Claudiu Belu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing

2015-02-12 Thread Chmouel Boudjnah
Jaume Devesa devv...@gmail.com writes:

 Following the conversation...

 We have seen that glusterfs[1] and ec2api[2] use different approaches
 when it comes to repository management: whereas glusterfs is a single
 'devstack' directory repository, ec2api is a whole project with a
 'devstack' directory in it.

 We plan to migrate the 'python-neutron-plugin-midonet'[3] project to
 Stackforge too. Does it make sense to add the 'devstack' directory to it?
 Or do you recommend that we have two different repositories in
 Stackforge: one for the neutron plugin and the other one for the
 devstack plugin?

as you stated, I don't think there is a clear advantage or disadvantage,
but IMO having too many repositories is not very user friendly and I would
recommend having the plugin directly in the repo.

For things like glusterfs, which is not a native openstack project, it
makes sense that the plugin is hosted outside of the project.

Chmouel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Rui Chen
 filters should be applied to the list of hosts that are in ‘force_hosts’.

Yes, @Gray, that's my point.

An operator can live-migrate an instance to a specified host and skip filters;
that's appropriate and important, I agree with you.

But when we boot an instance, we always want to launch it successfully
or get a clear failure reason. If the filters are applied to the forced
host, the operator may find out that he is doing something wrong at an earlier
time. For example, he couldn't boot a PCI instance on a forced host that
doesn't have a PCI device.

Also, I don't think 'force_hosts' is only an operator action; the default value is
'is_admin:True' in policy.json, but in some cases the value may be changed
so that a regular user can boot an instance on a specified host.
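
To illustrate what I am proposing (the names below are made up, this is not
actual Nova scheduler code): keep the forced host, but still run it through
the filters so a bad choice fails at scheduling time with a clear reason.

# Illustrative sketch only, not Nova code.
class NoValidHost(Exception):
    pass


def select_destination(hosts, request_spec, filters, force_hosts=None):
    candidates = list(hosts)
    if force_hosts:
        # Narrow the candidates to the forced host(s)...
        candidates = [h for h in candidates if h.hostname in force_hosts]

    # ...but still apply every filter, even for a forced host, so e.g. a
    # PCI flavor on a host without a PCI device fails here with a clear
    # reason instead of later on nova-compute.
    for flt in filters:
        candidates = [h for h in candidates
                      if flt.host_passes(h, request_spec)]

    if not candidates:
        raise NoValidHost('no host satisfies the request')
    return candidates[0]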

2015-02-12 17:44 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 12/02/2015 10:05, Rui Chen a écrit :

 Hi:

 If we boot an instance with 'force_hosts', the forced host will skip all
 filters; it looks like this is intentional logic, but I don't know the
 reason.

 I'm not sure that the skipping logic is appropriate. I think we should
 remove it, and 'force_hosts' should work with the
 scheduler to test whether the forced host is suitable as soon as possible. Skipping
 filters and postponing the boot failure to nova-compute is not advisable.

  On the other hand, more and more options have been added to flavors,
 like NUMA, CPU pinning, PCI and so on, so forcing a suitable host is more and
 more difficult.


 Any action done by the operator is always more important than what the
 Scheduler could decide. So if, in an emergency situation, the operator wants
 to force a migration to a host, we need to accept it and do it, even if it
 doesn't match what the Scheduler would decide (and could violate any policy).

 That's a *force* action, so please let the operator decide.

 -Sylvain



  Best Regards.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Kuvaja, Erno
 -Original Message-
 From: Donald Stufft [mailto:don...@stufft.io]
 Sent: Wednesday, February 11, 2015 4:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tc] Lets keep our community open, lets
 fight for it
 
 
  On Feb 11, 2015, at 11:15 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
  On 2015-02-11 11:31:13 + (+), Kuvaja, Erno wrote:
  [...]
  If you don't belong to the group of privileged living in the area and
  receiving free ticket somehow or company paying your participation
  you're not included. $600 + travel + accommodation is quite hefty
  premium to be included, not really FOSS.
  [...]
 
  Here I have to respectfully disagree. Anyone who uploads a change to
  an official OpenStack source code repository for review and has it
  approved/merged since Juno release day gets a 100% discount comp
  voucher for the full conference and design summit coming up in May.
  In addition, much like a lot of other large free software projects do
  for their conferences, the OpenStack Foundation sets aside funding[1]
  to cover travel and lodging for participants who need it.
  Let's (continue to) make sure this _is_ really FOSS, and that any of
  our contributors who want to be involved can be involved.
 
  [1] https://wiki.openstack.org/wiki/Travel_Support_Program
 
 For whatever it's worth, I totally agree that the summits don't make
 Openstack not really FOSS and I think the travel program is great, but I do
 just want to point out (as someone for whom travel is not monetarily
 difficult, but logistically) that decision making which requires travel can
 be exclusive. I don't personally get too bothered by it, but it feels like
 maybe the fundamental issue that some are experiencing is when there are
 decisions being made via a single channel, regardless of whether that
 channel is a phone call, IRC, a mailing list, or a design summit. The more
 channels any particular decision involves, the more likely it is that nobody
 is going to feel like they didn't get a chance to participate.
 
 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

Thanks Donald,

My point exactly even I now see it did not come out really that way.

Thanks Jeremy,

I'd like to point out that this discussion has been pushing an all-inclusive, 
open approach. Not ATCs, not specially approved individuals, but everyone. 
A mailing list can easily facilitate participation by everyone who wishes to do 
so. Summits cannot. If we draw the line at ATCs and specially invited 
individuals, we can throw this whole topic in the trash, as 90% of what was discussed 
was just dismissed.

All,

I'm not attacking having summits; I think the face to face time is 
incredibly valuable for all kinds of things. My point was to bring up a general 
flaw in the flow between all-inclusive decision making and decisions made in summit 
sessions.

- Erno

 
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Nikola Đipanov
On 02/11/2015 06:20 PM, Clint Byrum wrote:
 Excerpts from Nikola Đipanov's message of 2015-02-11 05:26:47 -0800:
 On 02/11/2015 02:13 PM, Sean Dague wrote:

 If core team members start dropping off external IRC where they are
 communicating across corporate boundaries, then the local tribal effects
 start taking over. You get people start talking about the upstream as
 them. The moment we get into us vs. them, we've got a problem.
 Especially when the upstream project is them.


 A lot of assumptions being presented as fact here.

 I believe the technical term for the above is 'slippery slope fallacy'.

 
 I don't see that fallacy, though it could descend into that if people
 keep pushing in that direction. Where I think Sean did a nice job
 stopping short of the slippery slope is that he only identified the step
 that is happening _now_, not the next step.
 
 I tend to agree that right now, if core team members are not talking
 on IRC to other core members in the open, whether inside or outside
 corporate boundaries, then we do see an us vs. them mentality happen.
 It's not, I think, the next step. I have personally seen that
 happening and will work hard to stop it. I think Sean has probably seen
 his share of it too,  as that is what he described in detail without
 publicly shaming anyone or any company (well done Sean).
 

There are several things I don't agree with in Sean's email, but this
one strikes me as particularly annoying, and potentially dangerous. You
also reinforce it in your reply.

Both of you seem to imply that there is the right way to do OpenStack,
and be core outside of following the development process. The notion
is annoying because it leads to exclusivity that Flavio complains about,
and is making our community a worse place for that. Different people who
can be valuable contributors, have wildly different (to name only a
few): personal styles of working, obligations to their own employer,
obligations to their family, level of command of the English language,
possibility to travel to remote parts of the world, possibility to cross
borders without additional strain on time and finances, possibility to
engage in a real-time written discussion, possibility to engage in a
real time discussion in person in a language that is not their own in a
room full of native speakers of the used language, possibility to engage
in real-time discussions effectively. Need I go on...

Not only does your and Sean's argument not acknowledge these differences
that can easily lead to exclusion of valuable contributors - you
actually go as far as to say that unless everyone does it the right
way, the community will be worse for it, and try to back it up with
made up stuff like local tribe effects (really?! We are talking about
adult professional people here).

So yes, there is an us and them - but the divide is not where you
think it is. This is why I believe an argument like this dropped smack
in the middle of a discussion like the one Flavio started is deeply
toxic, all fallacies aside.

 We can and _must_ do much better than this on this mailing list! Let's
 drag the discussion level back up!
 
 I'm certain we can always improve, and I appreciate you taking the time
 to have a Gandalf moment to stop the Balrog of fallacy from  entering
 this thread. We seriously can't let the discussion slip down that
 slope.. oh wait.
 

LOL on the LOTR reference (I look nothing like Gandalf though I may
dress like that sometimes). I hope I explained what I meant when I said
that this kind of argument really has no place in a discussion about
making the community more open by nurturing open communication.

 That said, I do want us to talk about uncomfortable things when
 necessary. I think this thread is not something where it will be entirely
 productive to stay 100% positive throughout. We might just have to use
 some negative language along side our positive suggestions to make sure
 people have an efficient way to measure their own behavior.


By all means - I only wish there would be more level-headed discussion
about the negatives around here.

N.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Feature Freeze Exception for hyper-v unit tests refactoring

2015-02-12 Thread Claudiu Belu
Hello.

I would like to request a FFE for the Hyper-V unit tests refactoring blueprint: 
https://blueprints.launchpad.net/nova/+spec/hyper-v-test-refactoring

The point of the blueprint was to get rid of the ancient test_hypervapi.py 
tests, that use mox, as they prove more and more difficult to maintain, 
especially when adding new features or fixing bugs. Those tests would be 
replaced with mock unit tests, per Ops class.

There were 11 commits in total, 6 already merged, 5 remain. Out of these 5, the 
last 2 are trivial:

[1] https://review.openstack.org/#/c/138934/
[2] https://review.openstack.org/#/c/139796/
[3] https://review.openstack.org/#/c/139797/

[4] https://review.openstack.org/148980 - unit tests for methods that have 1 
instruction each. Just to have coverage on all the modules.

[5] https://review.openstack.org/139798 - just removes test_hypervapi.py

The commits have been reviewed and already have a couple of +1s.


Note: this blueprint is limited to the Hyper-V unit tests and does not change 
the functionality of the Driver in any way. It is barely worthy of the name 
blueprint and I consider it more of a bug, rather than a blueprint. This will 
improve maintainability, readability and coverage for the Hyper-V classes.

Best regards,

Claudiu Belu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A question about strange behavior of oslo.config in eclipse

2015-02-12 Thread Joshua Zhang
Hi Doug,

   Thank you very much for your reply. I don't have any code of my own, so no
special code either.
   The only thing I did is:
   1, use devstack to install a fresh openstack env, all is ok.
   2, import the neutron-vpnaas directory (none of my own code) into eclipse as
a pydev project and, for example, run a unit test
(neutron_vpnaas.tests.unit.services.vpn.test_vpn_service) in eclipse; it
throws the following exception.
   3, but this unit test runs well in bash, see
http://paste.openstack.org/show/172016/
   4, this unit test can also be run well in eclipse as long as I edit the
neutron/openstack/common/policy.py file to change oslo.config into
oslo_config.


==
ERROR: test_add_nat_rule
(neutron_vpnaas.tests.unit.services.vpn.test_vpn_service.TestVPNDeviceDriverCallsToService)
neutron_vpnaas.tests.unit.services.vpn.test_vpn_service.TestVPNDeviceDriverCallsToService.test_add_nat_rule
--
_StringException: Traceback (most recent call last):
  File
/bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/unit/services/vpn/test_vpn_service.py,
line 98, in setUp
super(TestVPNDeviceDriverCallsToService, self).setUp()
  File
/bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/unit/services/vpn/test_vpn_service.py,
line 53, in setUp
super(VPNBaseTestCase, self).setUp()
  File /bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/base.py, line
36, in setUp
override_nvalues()
  File /bak/openstack/neutron-vpnaas/neutron_vpnaas/tests/base.py, line
30, in override_nvalues
cfg.CONF.set_override('policy_file', neutron_policy)
  File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line
1679, in __inner
result = f(self, *args, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line
1949, in set_override
opt_info = self._get_opt_info(name, group)
  File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line
2262, in _get_opt_info
raise NoSuchOptError(opt_name, group)
NoSuchOptError: no such option: policy_file

On Tue, Feb 10, 2015 at 10:38 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Tue, Feb 10, 2015, at 04:29 AM, Joshua Zhang wrote:
  Hi Stacker,
 A question about oslo.config, maybe a very silly question. but pls
 tell
  me if you know, thanks in advance.
 
  I know oslo has removed the 'oslo' namespace; oslo.config has been changed
   to oslo_config, and it also retains backwards compat.
 
  I found I can run openstack successfully, but as long as I run
  something
   in eclipse/pydev it always fails with something like 'NoSuchOptError: no such option:
   policy_file'. I can change 'oslo.config' to 'oslo_config' in
   neutron/openstack/common/policy.py temporarily to bypass this problem
   when
   I want to debug something in eclipse. But I want to know why; who can
   help
   explain it to me? Thanks.

 It sounds like you have code in one module using an option defined
 somewhere else and relying on import ordering to cause that option to be
 defined. The import_opt() method of the ConfigOpts class is meant to
 help make these cross-module option dependencies explicit [1]. If you
 provide a more detailed traceback I may be able to give more specific
 advice about where changes are needed.

 Doug

 [1]

 http://docs.openstack.org/developer/oslo.config/configopts.html?highlight=import_opt#oslo_config.cfg.ConfigOpts.import_opt
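
For illustration, the explicit cross-module dependency described above looks
roughly like this; the option name matches the error in my traceback, but the
module path is only a guess and would need to be whichever module actually
registers the option.

from oslo_config import cfg

CONF = cfg.CONF

# Make the dependency explicit instead of relying on import ordering:
# import_opt() imports the module that registers 'policy_file', so the
# option is guaranteed to exist before it is used or overridden.
# The module path below is illustrative.
CONF.import_opt('policy_file', 'neutron.openstack.common.policy')


def current_policy_file():
    return CONF.policy_file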

 
 
  --
  Best Regards
  Zhang Hua(张华)
  Software Engineer | Canonical
  IRC:  zhhuabj
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards
Zhang Hua(张华)
Software Engineer | Canonical
IRC:  zhhuabj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What should openstack-specs review approval rules be ?

2015-02-12 Thread Doug Hellmann


On Wed, Jan 28, 2015, at 08:25 AM, Thierry Carrez wrote:
 Hi everyone,
 
 When we first introduced the cross-project specs (specs for things that
 may potentially affect all OpenStack projects, or where more convergence
 is desirable), we defaulted to rather simple rules for approval:
 
 - discuss the spec in a cross-project meeting
 - let everyone +1/-1 and seek consensus
 - wait for the affected PTLs to vote
 - wait even more
 - tally the votes (and agree a consensus is reached) during a TC meeting
 - give +2/Workflow+1 to all TC members to let them push the Go button
 
 However, the recent approval of the Log guidelines
 (https://review.openstack.org/#/c/132552/) revealed that those may not
 be the rules we are looking for.
 
 Sean suggested that only the TC chair should be able to workflow+1 to
 avoid accidental approval.
 
 Doug suggested that we should use the TC voting rules (7 YES, or at
 least 5 YES and more YES than NO) on those.
 
 In yesterday's meeting, Sean suggested that TC members should still have
 a -2-like veto (if there is no TC consensus on the fact that community
 consensus is reached, there probably is no real consensus).

In the past we've shown -1 votes to be a sign of a lack of consensus,
and I'm not aware of any cases where a close vote went through. In fact,
the only close vote I can remember since we started voting in gerrit was
the Zaqar graduation vote, and that outcome kept the status quo in the
face of no clear consensus.

Given that, I'm not sure we need a true veto but I could accept it if
that's the consensus.

 
 There was little time to discuss this more in yesterday's TC meeting, so
 I took the action to push that discussion to the ML.
 
 So what is it we actually want for that repository ? In a world where
 Gerrit can do anything, what would you like to have ?
 
 Personally, I want our technical community in general, and our PTLs/CPLs
 in particular, to be able to record their opinion on the proposed
 cross-project spec. Then, if consensus is reached, the spec should be
 approved.
 
 This /could/ be implemented in Gerrit by giving +1/-1 to everyone to
 express technical opinion and +2/-2 to TC members to evaluate consensus
 (with Workflow+1 to the TC chair to mark when all votes are collected
 and consensus is indeed reached).
 
 Other personal opinions on how you'd like this repository reviews to be
 run ?
 
 -- 
 Thierry Carrez (ttx)
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][PTLs] Stop releasing libraries/clients without capping stable global requirements

2015-02-12 Thread Doug Hellmann


On Thu, Feb 12, 2015, at 03:47 PM, Dean Troyer wrote:
 On Thu, Feb 12, 2015 at 2:22 PM, Doug Hellmann d...@doughellmann.com
 wrote:
 
  It's Thursday, so we're outside of the Oslo team's release window, so we
 
 
 I didn't know this was a thing. Thank You!  Now let's point some other
 upstream maintainers in this direction... ;)

Yeah, the Oslo team doesn't like to work weekends so we try not to cut
releases late in the week if it means we'll need to be around to fight
fires. :-)

Doug

 
 dt
 /me crawls back into venv-land
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][PTLs] Stop releasing libraries/clients without capping stable global requirements

2015-02-12 Thread Dean Troyer
On Thu, Feb 12, 2015 at 2:22 PM, Doug Hellmann d...@doughellmann.com
wrote:

 It's Thursday, so we're outside of the Oslo team's release window, so we


I didn't know this was a thing. Thank You!  Now let's point some other
upstream maintainers in this direction... ;)

dt
/me crawls back into venv-land

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Shell Action, Re: Running HBase Jobs (was: About Sahara Oozie plan)

2015-02-12 Thread Trevor McKay
Hi folks,

Here is another way to do this.  Lu had mentioned Oozie shell actions
previously.
Sahara doesn't support them, but I played with it from the Oozie command
line
to verify that it solves our hbase problem, too.

We can potentially create a blueprint to build a simple Shell action
around a
user-supplied script and supporting files.  The script and files would
all be stored
in Sahara as job binaries (Swift or internal db) and referenced the same
way. The exec
target must be on the path at runtime, or included in the working dir.

To do this, I simply put workflow.xml, doit.sh, and the test jar into
a directory in hdfs.  Then I ran it with the Oozie cli using the job.xml
config file
configured to point at the hdfs dir.  Nothing special here, just
standard Oozie
job execution.

I've attached everything here but the test jar.

$ oozie job -oozie http://localhost:11000/oozie -config job.xml -run

Best,

Trev

On Thu, 2015-02-12 at 08:39 -0500, Trevor McKay wrote:

 Hi Lu, folks,
 
 I've been investigating how to run Java actions in Sahara EDP that
 depend on 
 HBase libraries (see snippet from the original question by Lu below).
 
 In a nutshell, we need to use Oozie sharelibs for this. I am working
 on a spec now, thinking 
 about the best way to support this in Sahara, but here is a
 semi-manual intermediate solution
 that will work if you would like to run such a job from Sahara.
 
 1) Create your own Oozie sharelib that contains the HBase jars.
 
 This ultimately is just an HDFS dir holding the jars.  On any node in
 your cluster with 
 HBase installed, run the attached script or something like it (I like
 Python better than bash :) )
 It simply separates the classpath and uploads all the jars to the
 specified HDFS dir.
 
 $ parsePath.py /user/myhbaselib
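
 In case the attachment does not come through, here is a rough sketch of what
 such a helper could look like (an example only, not necessarily the attached
 script):

#!/usr/bin/env python
# Rough example: split `hbase classpath` and copy every jar it references
# into an HDFS directory that can then be used as an Oozie sharelib.
import glob
import subprocess
import sys


def main(hdfs_dir):
    classpath = subprocess.check_output(['hbase', 'classpath']).decode().strip()
    jars = set()
    for entry in classpath.split(':'):
        # Entries may be plain jars or globs like /usr/lib/hbase/lib/*
        for path in glob.glob(entry):
            if path.endswith('.jar'):
                jars.add(path)

    subprocess.check_call(['hdfs', 'dfs', '-mkdir', '-p', hdfs_dir])
    for jar in sorted(jars):
        subprocess.check_call(['hdfs', 'dfs', '-put', '-f', jar, hdfs_dir])


if __name__ == '__main__':
    main(sys.argv[1])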
 
 2) Run your Java action from EDP, but use the oozie.libpath
 configuration value when you
 launch the job.  For example, on the job configure tab set
 oozie.libpath like this:
 
 Name             Value
 
 oozie.libpath    hdfs://namenode:8020/user/myhbaselib
 
 (note, support for this was added in
 https://review.openstack.org/#/c/154214/)
 
 That's it! In general, you can add any jars that you want to a
 sharelib and then set the
 oozie.libpath for the job to access them.
 
 Here is a good blog entry about sharelibs and extra jars in Oozie
 jobs:
 
 http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/
 
 Best,
 
 Trevor
 
 --- original question
 (1) EDP job in Java action
 
The background is that we want to write integration test cases for
 newly added services like HBase and ZooKeeper, just like the way the
 edp-examples do (sample code under sahara/etc/edp-examples/). So I
 thought I could write an example via an EDP job with a Java action to test
 the HBase service, so I wrote HBaseTest.java and packaged it as a jar
 file, and ran this jar manually with the command java -cp `hbase
 classpath` HBaseTest.jar HBaseTest; it works well in the
 vm (provisioned by sahara with the cdh plugin). 
 “/usr/lib/jvm/java-7-oracle-cloudera/bin/java -cp
 HBaseTest.jar:`hbase classpath` HBaseTest”
 So I want to run this job via horizon on the sahara job execution page, but
 found no place to pass the `hbase classpath` parameter (I have tried
 java_opt, configuration and args, all failed). When I pass the “-cp
 `hbase classpath`” to java_opts on the horizon job execution page, Oozie
 raises the error below
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




workflow.xml
Description: XML document


doit.sh
Description: application/shellscript


job.xml
Description: XML document
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Shell Action, Re: Running HBase Jobs (was: About Sahara Oozie plan)

2015-02-12 Thread Trevor McKay
Hmm, my attachments were removed :)

Well, the interesting parts were the doit.sh and workflow.xml:

$ more doit.sh 
#!/bin/bash
/usr/lib/jvm/java-7-oracle-cloudera/bin/java -cp HBaseTest.jar:`hbase
classpath` HBaseTest

$ more workflow.xml
<workflow-app xmlns='uri:oozie:workflow:0.3' name='shell-wf'>
<start to='shell1' />
<action name='shell1'>
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
  <name>mapred.job.queue.name</name>
  <value>default</value>
</property>
</configuration>
<exec>doit.sh</exec>
<file>HBaseTest.jar</file>
<file>doit.sh</file>
</shell>
<ok to="end" />
<error to="fail" />
</action>
<kill name="fail">
<message>Script failed, error
message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end' />
</workflow-app>



On Thu, 2015-02-12 at 17:15 -0500, Trevor McKay wrote:

 Hi folks,
 
 Here is another way to do this.  Lu had mentioned Oozie shell actions
 previously.
 Sahara doesn't support them, but I played with it from the Oozie
 command line
 to verify that it solves our hbase problem, too.
 
 We can potentially create a blueprint to build a simple Shell action
 around a
 user-supplied script and supporting files.  The script and files would
 all be stored
 in Sahara as job binaries (Swift or internal db) and referenced the
 same way. The exec
 target must be on the path at runtime, or included in the working dir.
 
 To do this, I simply put workflow.xml, doit.sh, and the test jar into
 a directory in hdfs.  Then I ran it with the Oozie cli using the
 job.xml config file
 configured to point at the hdfs dir.  Nothing special here, just
 standard Oozie
 job execution.
 
 I've attached everything here but the test jar.
 
 $ oozie job -oozie http://localhost:11000/oozie -config job.xml -run
 
 Best,
 
 Trev
 
 On Thu, 2015-02-12 at 08:39 -0500, Trevor McKay wrote:
 
  Hi Lu, folks,
  
  I've been investigating how to run Java actions in Sahara EDP that
  depend on 
  HBase libraries (see snippet from the original question by Lu
  below).
  
  In a nutshell, we need to use Oozie sharelibs for this. I am working
  on a spec now, thinking 
  about the best way to support this in Sahara, but here is a
  semi-manual intermediate solution
  that will work if you would like to run such a job from Sahara.
  
  1) Create your own Oozie sharelib that contains the HBase jars.
  
  This ultimately is just an HDFS dir holding the jars.  On any node
  in your cluster with 
  HBase installed, run the attached script or something like it (I
  like Python better than bash :) )
  It simply separates the classpath and uploads all the jars to the
  specified HDFS dir.
  
  $ parsePath.py /user/myhbaselib
  
  2) Run your Java action from EDP, but use the oozie.libpath
  configuration value when you
  launch the job.  For example, on the job configure tab set
  oozie.libpath like this:
  
  Name             Value
  
  oozie.libpath    hdfs://namenode:8020/user/myhbaselib
  
  (note, support for this was added in
  https://review.openstack.org/#/c/154214/)
  
  That's it! In general, you can add any jars that you want to a
  sharelib and then set the
  oozie.libpath for the job to access them.
  
  Here is a good blog entry about sharelibs and extra jars in Oozie
  jobs:
  
  http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/
  
  Best,
  
  Trevor
  
  --- original question
  (1) EDP job in Java action
  
 The background is that we want write integration test case for
  newly added services like HBase, zookeeper just like the way the
  edp-examples does( sample code under sahara/etc/edp-examples/). So I
  thought I can wrote an example via edp job by Java action to test
  HBase Service, then I wrote the HBaseTest.java and packaged as a jar
  file, and run this jar manually with the command java -cp `hbase
  classpath` HBaseTest.jar HBaseTest, it works well in the
  vm(provisioned by sahara with cdh plugin). 
  “/usr/lib/jvm/java-7-oracle-cloudera/bin/java -cp
  HBaseTest.jar:`hbase classpath` HBaseTest”
  So I want run this job via horizon in sahara job execution page, but
  found no place to pass the `hbase classpath` parameter.(I have tried
  java_opt and configuration and args, all failed). When I pass the
  “-cp `hbase classpath`” to java_opts in horizon job execution page.
  Oozie raise this error as below
  
  
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
 OpenStack Development 

Re: [openstack-dev] [TripleO] update on Puppet integration in Kilo

2015-02-12 Thread James Slagle
On Wed, Feb 11, 2015 at 12:06 PM, Dan Prince dpri...@redhat.com wrote:
 I wanted to take a few minutes to go over the progress we've made with
 TripleO Puppet in Kilo so far.

 For those unfamilar with the efforts our initial goal was to be able to
 use Puppet as the configuration tool for a TripleO deployment stack.
 This is largely built around a Heat capability added in Icehouse called
 Software Deployments. By making use of the Software Deployment
 Puppet hook and building our images with a few puppet specific elements
 we can integrate with puppet as a configuration tool. There has been no
 blueprint on this effort... blueprints seemed a bit rigid for the task
 at hand. After demo'ing the proof of concept patches in Paris we've been
 tracking progress on an etherpad here:

 https://etherpad.openstack.org/p/puppet-integration-in-heat-tripleo

 Lots of details in that etherpad. But I would like to highlight a few
 things:

 As of a week or so ago, all of the code needed to run devtest_overcloud.sh to
 completion using Puppet (and Fedora packages) has landed. Several
 upstream TripleO developers have been successful in setting up a Puppet
 overcloud using this process.

 As of last Friday we have a running CI job! I'm actually very excited
 about this one for several reasons. First CI is going to be crucial in
 completing the rest of the puppet feature work around HA, etc. Second
 because this job does require packages... and a fairly recent Heat
 release we are using a new upstream packaging tool called Delorean.
 Delorean makes it very easy to go back in time, so if the upstream
 packages break for some reason, plugging in a stable repo from yesterday,
 or 5 minutes ago, should be a quick fix... Lots of things to potentially
 talk about in this area around CI on various projects.


I also wanted to point out that I've posted a WIP element that can be
used in place of the existing seed-stack-config element to build a
seed vm using the Puppet modules:

https://review.openstack.org/#/c/153375/

I'm going to keep working on this and aim to get the CI job updated to
use this as well, so that we can go seed - overcloud with the
configuration/installation all driven by the Puppet modules.

 The puppet deployment is also proving to be quite configurable. We have
 a Heat template parameter called 'EnablePackageInstall' which can be
 used to enable or disable Yum package installation at deployment time.
 So if you want to do traditional image based deployment with images
 containing all of your packages you can do that (no Yum repositories
 required). Or if you want to roll out images and install or upgrade
 packages at deployment time (rather than image build time) you can do
 that too... all by simply modifying this parameter. I think this sort of
 configurability should prove useful to those who want a bit of choice
 with regards to how packages and the like get installed.

I like how this reduces the need for customized images upfront for
those who would choose this route.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][PTLs] Stop releasing libraries/clients without capping stable global requirements

2015-02-12 Thread Doug Hellmann


On Thu, Feb 12, 2015, at 02:17 PM, Joe Gordon wrote:
 On Wed, Feb 11, 2015 at 7:53 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Tue, Feb 10, 2015, at 07:12 PM, Joe Gordon wrote:
   Hi,
  
   As you know a few of us have been spending way too much time digging
   stable/juno out of the ditch its currently in. And just when we thought
   we
   were in the clear a new library was released without a requirements cap
   in
   stable global-requirements and broke stable/juno grenade.  Everytime this
   happens we risk breaking everything. While there is a good long term fix
   in
   progress (pin all of stable/juno
   https://review.openstack.org/#/c/147451/),
   this will take a bit of time to get right and land.
  
   The  good news is there is a nice easy interim solution. Before releasing
   a
   new library go to stable/juno and stable/icehouse global requirements and
   check if $library has a version cap, if not add one. And once that lands
   go
   ahead and release your library. For example:
   https://review.openstack.org/#/c/154715/2
  
  __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  The Oslo team has several libraries we're holding for release until this
  is resolved. We do have projects blocked on those releases, though, so
  if Joe asks you for help with anything related to stable branch
  maintenance, please make it a priority so we can get the caps in place.
 
 
 We have landed the patch to cap all stable/juno requirements that are
 installed in a tempest-dsvm-neutron-full job. So we should be out of the
 woods for now (unless you are a project that uses one of the still
 uncapped
 requirements).

Kudos to Joe and Matt for their time this week. Thank you.

It's Thursday, so we're outside of the Oslo team's release window, so we
will continue to hold our releases until next week. Expect several new
library versions on Monday and Tuesday.

Doug

 
 https://review.openstack.org/#/c/147451/
 
 
 Implications:
 
 * Until Dean's patches to install CLI tools (python-*clients) inside of
 venvs, we are not testing master clients with stable/juno.
 * An indirect dependency can change and still break us, but hopefully
 this
 won't happen.
 
 
  Doug
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nova api.fault notification isn't collected by ceilometer

2015-02-12 Thread gordon chung
Yep, that's what i'm looking for, thanks, 
another notification from nova that is missed in ceilometer is info from nova 
api:
https://github.com/openstack/nova/blob/master/nova/notifications.py#L64
this notify_decorator will decorate every nova/ec2 rest api and send out a 
notification for each api action:
https://github.com/openstack/nova/blob/master/nova/utils.py#L526
which will send out notifications named like %s.%s.%s % (module, key, method),
and there is no notification plugin in ceilometer to deal with them.
Let me know if i should file a bug for this.

sorry, i missed this. so as i understand it, we do capture these values in 
ceilometer as they are published on the same INFO topic. that said, they aren't 
converted to meters/samples but only stored as events (if you have events 
enabled). is there something specifically measurable in these notifications? 
if so, we could look at adding it to ceilometer.

cheers,
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress][Policy][Copper]Collaboration between OpenStack Congress and OPNFV Copper

2015-02-12 Thread Tim Hinrichs
Bryan and Zhipeng,

Sean Roberts (CCed) is planning to be in Santa Rosa.   Sean’s definitely there 
on Wed.  Less clear about Thu/Fri.

I don’t know if I’ll make the trip yet, but I’m guessing Wed early afternoon if 
I can.

Tim

On Feb 11, 2015, at 9:04 PM, SULLIVAN, BRYAN L 
bs3...@att.com wrote:

Hi Tim,

It would be great to meet with members of the Congress project if possible at 
our meetup in Santa Rosa. I plan by then to have a basic understanding of 
Congress and some test driver apps / use cases to demo at the meetup. The goal 
is to assess the current state of Congress support for the use cases on the 
OPNFV wiki: 
https://wiki.opnfv.org/copper/use_cases

I would be doing the same with ODL but I’m not as far along on getting ready with it. 
So the opportunity to discuss the use cases under Copper and the other 
policy-related projects
(fault management https://wiki.opnfv.org/doctor, 
resource management https://wiki.opnfv.org/promise, 
resource scheduler https://wiki.opnfv.org/requirements_projects/resource_scheduler)
with Congress experts would be great.

Once we understand the gaps in what we are trying to build in OPNFV, the goal 
for our first OPNFV release is to create blueprints for new work in Congress. 
We might also just find some bugs and get directly involved in Congress to 
address them, or start a collaborative development project in OPNFV for that. 
TBD

Thanks,
Bryan Sullivan | Service Standards | AT&T

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Wednesday, February 11, 2015 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: SULLIVAN, BRYAN L; HU, BIN; Rodriguez, Iben; Howard Huang
Subject: Re: [openstack-dev] [congress][Policy][Copper]Collaboration between 
OpenStack Congress and OPNFV Copper

Hi Zhipeng,

We’d be happy to meet.  Sounds like fun!

I don’t know of anyone on the Congress team who is planning to attend the LF 
collaboration summit.  But we might be able to send a couple of people if it’s 
the only real chance to have a face-to-face.  Otherwise, there are a bunch of 
us in and around Palo Alto.  And of course, phone/google hangout/irc are fine 
options as well.

Tim



On Feb 11, 2015, at 8:59 AM, Zhipeng Huang 
zhipengh...@gmail.com wrote:

Hi Congress Team,

As you might already know, we had a project in OPNFV covering deployment policy 
called Copper (https://wiki.opnfv.org/copper), in which we identify Congress as 
one of the upstream projects that we need to put our requirements to. Our team 
has been working on setting up a simple openstack environment with congress 
integrated that could demo simple use cases for policy deployment.

Would it be possible for you guys and our team to find a time to do a 
Copper/Congress interlock meeting, during which the Congress team could introduce 
how to best integrate congress with vanilla openstack? Will some of you 
attend the LF Collaboration Summit?

Thanks a lot :)

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OpenDaylight, OpenCompute aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-12 Thread Michael Still
This was discussed in the nova meeting this morning. In that meeting
we declared ourselves unwedged and ready to do a release, and I said
I'd do that today.

On reflection, I want to recant just a little -- I think it's a bad
idea for me to do a release on a Friday. So, I'll do this early next
week instead.

Michael

On Wed, Feb 11, 2015 at 8:51 AM, Joe Gordon joe.gord...@gmail.com wrote:


 On Mon, Feb 9, 2015 at 7:55 PM, Michael Still mi...@stillhq.com wrote:

 The previous policy is that we do a release when requested or when a
 critical bug fix merges. I don't see any critical fixes awaiting
 release, but I am not opposed to a release.

 The reason I didn't do this yesterday is that Joe wanted some time to
 pin the stable requirements, which I believe he is still working on.
 Let's give him some time unless this is urgent.


 So to move this forward, let's just pin novaclient on stable branches, so the
 longer term pin all the reqs work isn't blocking this.

 Icehouse already has a cap, so we just need to wait for the juno cap to
 land:

 https://review.openstack.org/154680



 Michael

 On Tue, Feb 10, 2015 at 2:45 PM, melanie witt melwi...@gmail.com wrote:
  On Feb 6, 2015, at 8:17, Matt Riedemann mrie...@linux.vnet.ibm.com
  wrote:
 
  We haven't done a release of python-novaclient in awhile (2.20.0 was
  released on 2014-9-20 before the Juno release).
 
  It looks like there are some important feature adds and bug fixes on
  master so we should do a release, specifically to pick up the change for
  keystone v3 support [1].
 
  So can this be done now or should this wait until closer to the Kilo
  release (library releases are cheap so I don't see why we'd wait).
 
  Thanks for bringing this up -- there are indeed a lot of important
  features and fixes on master.
 
  I agree we should do a release as soon as possible, and I don't think
  there's any reason to wait until closer to Kilo.
 
  melanie (melwitt)
 
 
 
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Rackspace Australia

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception request for x509 keypairs

2015-02-12 Thread Christopher Yeoh
I'm happy to sponsor this. I've reviewed all the patches as well, and as
Claudiu mentions, we have this lined up as the first API change to use
microversions.

Regards,

Chris

On Thu, Feb 12, 2015 at 10:50 PM, Claudiu Belu cb...@cloudbasesolutions.com
 wrote:


  Hello.

 I would like to ask for a FFE for the x509 keypairs blueprint:
 https://blueprints.launchpad.net/nova/+spec/keypair-x509-certificates

 This blueprint is split up into 3 commits:

 [1] Database migration: previously merged, but had to be reverted because
 of a small issue. Everything is fixed, original reverter Johannes Erdfelt
 gave his +1, currently the commit has a +2.
 https://review.openstack.org/#/c/150800/

 [2] Nova-API change: It uses the microversioning API and it has been
 decided to be the first microversioning commit, since it is closest to
 merge. Christopher Yeoh reviewed helped with this commit.
 https://review.openstack.org/#/c/140313/

 [3] X509 keypair implementation: Simple commit, it had a +2 on a previous
 commit. https://review.openstack.org/#/c/136869/

 I also want to point out that this blueprint targets all the drivers, not
 just Hyper-V. This blueprint targets all the users that desire to deploy
 instances with Windows guests and desire password-less authentication, the
 same way users can ssh into Linux-type guests.

 Best regards,

 Claudiu Belu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception request for x509 keypairs

2015-02-12 Thread Ken'ichi Ohmichi
2015-02-12 21:20 GMT+09:00 Claudiu Belu cb...@cloudbasesolutions.com:

 Hello.

 I would like to ask for a FFE for the x509 keypairs blueprint:
 https://blueprints.launchpad.net/nova/+spec/keypair-x509-certificates

 This blueprint is split up into 3 commits:

 [1] Database migration: previously merged, but had to be reverted because of
 a small issue. Everything is fixed, original reverter Johannes Erdfelt gave
 his +1, currently the commit has a +2.
 https://review.openstack.org/#/c/150800/

 [2] Nova-API change: It uses the microversioning API and it has been decided
 to be the first microversioning commit, since it is closest to merge.
 Christopher Yeoh reviewed helped with this commit.
 https://review.openstack.org/#/c/140313/

 [3] X509 keypair implementation: Simple commit, it had a +2 on a previous
 commit. https://review.openstack.org/#/c/136869/

 I also want to point out that this blueprint targets all the drivers, not
 just Hyper-V. This blueprint targets all the users that desire to deploy
 instances with Windows guests and desire password-less authentication, the
 same way users can ssh into Linux-type guests.

The patches have been reviewed a lot and this feature will be the first
microversion, so I'm happy to support this development in Kilo.

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] monkey patching strategy

2015-02-12 Thread Kevin Benton
Why did the services fail with the stdlib patched? Are they incompatible
with eventlet?

On Thu, Feb 12, 2015 at 11:25 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi all,

 there were some moves recently to make monkey patching strategy sane
 in neutron.

 This was triggered by some bugs found when interacting with external
 oslo libraries [1], and a cross project spec to make eventlet usage
 sane throughout the project [2].

 Specifically, instead of monkey patching stdlib in each of the services
 and agents (and forgetting to do so for some of them [3]), we should
 monkey patch it as part of a common import (ideally, it would be any
 neutron.* import).

 Initially, we tried to patch it inside neutron/__init__.py [4], but
 it didn't play nice with some advanced services importing from
 neutron while not expecting stdlib to be patched, and so was reverted.

 So an alternative that I am currently looking into is the Nova way.
 Specifically, moving all main() functions for all agents and services
 into neutron/cmd/... and monkey patching stdlib through
 neutron/cmd/__init__.py.
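
 As an illustration only (not the actual patch series), the Nova-style
 approach boils down to something like this in neutron/cmd/__init__.py:

# neutron/cmd/__init__.py -- sketch of the idea; see the patches for details.
import eventlet

# Patch the standard library exactly once, before any agent or service code
# under neutron.cmd is imported, so every console script entry point gets a
# consistently monkey patched stdlib.
eventlet.monkey_patch()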

 I've sent a series of patches to do just that [5]. It was rightfully
 blocked by Mark to seek broader agreement.

 I encourage the community to weigh in on the direction.

 [1]: https://bugs.launchpad.net/oslo.concurrency/+bug/1418541
 [2]: https://review.openstack.org/154642
 [3]:

 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/mlnx/agent/eswitch_neutron_agent.py
 [4]: https://review.openstack.org/153699
 [5]:

 https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bug/1418541,n,z

 Cheers,
 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJU3P4QAAoJEC5aWaUY1u57A/cH/AuKbkewZy5Z0Hus2m4bClGp
 4DJ37ygcY9HwGmJTLpvUyfRcDWnaO9S+6sj28Ebv49MN1w9qJ4MuQmaYA1xsFERb
 aR6uKgnkiIT0FS8CVjbClEC7gN5elHCe2dcB8cakrk7uUsTJ2LP5A6rdNQqly/uN
 2hkdfa1WBcAYMX6raFWD8DJ49R1MhbPr09YXXU9ApoflMY6ZywvNBzwIZEw5gqPO
 Vpjb9DwevaFZ9kqzjHTrXk47CqOSYS7ZXQjS1bOGUOJFOBqNRLzl2qPX7wkBiA2N
 12U4Qe3/3MvWwBig0O+mY2RwN2OtnxhK8X5tP6kbrybyOKLGUe4ZgIlvfQHI33Q=
 =8pX5
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Priority resizing instance on same host

2015-02-12 Thread Lingxian Kong
Hi, Rui,

I think resize VM to the same host if the host could pass scheduler
filters makes sense to me.

2015-02-12 15:01 GMT+08:00 Rui Chen chenrui.m...@gmail.com:
 Hi:

  Currently, resizing an instance causes a migration from the host the
  instance runs on to another host, but the current host may be suitable for
  the new flavor. Migration leads to copying the image between hosts if there
  is no shared storage, which wastes time.
  I think that giving priority to resizing the instance on the current host
  may be better if the host is suitable.
 The logic like this:

  if CONF.allow_resize_to_same_host:
      filter the current host
      if suitable:
          resize on the current host
      else:
          select a host
          resize on that host

  I don't know whether there has been any discussion about this
  question. Please let me know what you think. If the idea is no problem,
 maybe I can register a blueprint to implement it.

 Best Regards.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Flavio Percoco

On 11/02/15 09:37 -0800, Clint Byrum wrote:

Excerpts from Stefano Maffulli's message of 2015-02-11 06:14:39 -0800:

On Wed, 2015-02-11 at 10:55 +0100, Flavio Percoco wrote:
 This email is dedicated to the openness of our community/project.

It's good to have a reminder every now and then. Thank you Flavio for
caring enough to notice bad patterns and for raising a flag.

 ## Keep discussions open

 I don't believe there's anything wrong about kicking off some
 discussions in private channels about specs/bugs. I don't believe
 there's anything wrong in having calls to speed up some discussions.
 HOWEVER, I believe it's *completely* wrong to consider those private
 discussions sufficient.
[...]

Well said. Conversations can happen anywhere and any time, but they
should stay in open and accessible channels. Consensus needs to be built
and decisions need to be shared, agreed upon by the community at large
(and mailing lists are the most accessible media we have).

That said, it is very hard to generalize and I'd rather deal/solve
specific examples. Sometimes, I'm sure there are episodes when a fast
decision was needed and a limited amount of people had to carry the
burden of responsibility. Life is hard, software development is hard and
general rules sometimes need to be adapted to the reality. Again, too
much generalization here for what I'm comfortable with.

Maybe it's worth repeating that I'm personally (and in my role)
available to listen and mediate in cases when communication seems to
happen behind closed doors. If you think something unhealthy is
happening, talk to me (confidentiality assured).

 ## Mailing List vs IRC Channel

 I get it, our mailing list is freaking busy, keeping up with it is
 hard and time consuming and that leads to lots of IRC discussions.

Not sure I agree with the causality, but the facts are these: traffic on
the list and on IRC is very high (although not increasing anymore
[1][2]).

  I
 don't think there's anything wrong with that but I believe it's wrong
 to expect *EVERYONE* to be in the IRC channel when those discussions
 happen.

Email is hard, I have the feeling that the vast majority of people use
bad (they all suck, no joke) email clients. Lots and lots of email is
even worse. Most contributors commit very few patches: the investment
for them to configure their MUA to filter our traffic is too high.

I have added more topics today to the openstack-dev list[3]. Maybe,
besides filtering on the receiving end, we may spend some time
explaining how to use mailman topics? I'll draft something on Ask, it
may help those that have limited interest in OpenStack.

What else can we do to make things better?



I am one of those people who has a highly optimized MUA for mailing list
reading. It is still hard. Even with one keypress to kill threads from
view forever, and full text index searching, I still find it takes me
an hour just to filter the "don't want to see" from the "want to see"
threads each day.

The filtering on the list-server side I think is not known by everybody,
and it might be a good idea to socialize it even more, and maybe even
invest in making the UI for it really straightforward for people to
use.

That said, even if you just choose [all], and [yourproject], some
[yourproject] tags are pretty busy.


Would it be helpful if we share our email clients configs so that
others can use them? I guess we could have a section for this in the
wiki page.

I'm sure each one of us has his/her own server-side filters so, I
guess we could start with those.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Compute-node-only installation fails

2015-02-12 Thread Alexander Schmidt
On Wed, 11 Feb 2015 17:27:19 +
Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Feb 11, 2015 at 08:32:31AM -0800, Clark Boylan wrote:
  On Wed, Feb 11, 2015, at 08:18 AM, Alexander Schmidt wrote:
   Hi Daniel,
   
   with your recent change[1] to error handling in stack.sh, compute
   node only installations via devstack fail because there is
   no database selected. A database should not be required on
   compute nodes.
   
   Was this done intentionally? lib/database explicitly says:
   
   # If ``DATABASE_TYPE`` is unset here no database was selected
   # This is not an error as multi-node installs will do this on the
   compute nodes
   
   [1] https://review.openstack.org/#/c/149288/
   
   Regards,
   Alex
  
  We have been setting DATABASE_TYPE [0] so did not notice. Seems
  like a reasonable workaround for now and does not install a
  database server (the enabled services list seems to do that). If
  the intended behavior is to not need to set DATABASE_TYPE we should
  probably revert the devstack change then update devstack-gate's
  compute node localrc generation to remove DATABASE_TYPE so that
  breakage of the behavior has a chance of being caught early.
 
 I posted a revert review for my change
 
https://review.openstack.org/#/c/154966/
 

Thanks, working fine again now.

Regards, Alex

 Regards,
 Daniel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar] currectly status

2015-02-12 Thread Jin, Yuntong
Thanks for the info. I will surely let you know when I have questions as I dive deeper
into the code.

Thanks
-yuntong

From: Nikolay Starodubtsev [mailto:nstarodubt...@mirantis.com]
Sent: Wednesday, February 11, 2015 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [blazar] currectly status

Hi,

I agree with Sylvain.
Also, if you want to contribute to Blazar, ping me on IRC and I can tell you where
you can start with it.




Nikolay Starodubtsev

Software Engineer

Mirantis Inc.



Skype: dark_harlequine1

2015-02-11 11:18 GMT+03:00 Sylvain Bauza sba...@redhat.com:

On 11/02/2015 04:24, Jin, Yuntong wrote:
Hello,
May I ask about the current status of the Blazar project? It's been very quiet there
for the past half year; could part of the reason be related to the Gantt project?
The way I see it, this project is very useful for NFV use cases, and I would really like
to know its status, and maybe also that of the Gantt project.
Thanks



Hi,

Thanks for your interest in Blazar. The existing core team has been reallocated
to various other projects, so we stopped doing regular updates to the repository
around 6 months ago. That said, as it is an open-source project, anybody can
contribute and I would be glad to review some changes, provided they are not 
time-consuming.

Last discussion with TC members in Atlanta (for the Juno summit) showed that 
there are benefits to have a reservation system in OpenStack, but the thought 
was that it would probably be something related to the Compute program, ie. 
something that Nova could leverage.

As the current Nova scheduler is about to be spun off into a separate project
called Gantt, I'm IMHO thinking (and that's my sole opinion) that Blazar could 
maybe merge with Gantt so that the existing backend would allow new APIs for 
Gantt by asking to select a destination later in time than now.

That said, Gantt is far from being a separate repository now, as we're 
struggling to reduce the technical debt on Nova for splitting out the scheduler 
so I wouldn't expect any immediate benefit for Gantt nor Blazar now.

As many people are looking around Blazar and Gantt, I think it would be 
interesting to setup a BoF session during the Vancouver Summit about 
reservations and SLA in OpenStack so that we could see how we could move on.

-Sylvain




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] following up on releasing kilo milestone 2

2015-02-12 Thread Thierry Carrez
Thierry Carrez wrote:
 You could also try to use the milestone.sh release script I use:
 
 http://git.openstack.org/cgit/openstack-infra/release-tools/tree/

Hrm... I now realize the script is forcing openstack/* namespace and
won't work as-is for stackforge projects.

That repo is accepting patches, though :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Sylvain Bauza


On 12/02/2015 10:05, Rui Chen wrote:

Hi:

    If we boot an instance with 'force_hosts', the forced host will skip
all filters. It looks like that is intentional logic, but I don't know
the reason.


    I'm not sure that the skipping logic is apposite. I think we should
remove the skipping logic, and 'force_hosts' should work with the
scheduler, testing whether the forced host is appropriate as early as
possible. Skipping filters and postponing the boot failure to nova-compute
is not advisable.


On the other side, more and more options have been added to the
flavor, like NUMA, CPU pinning, PCI and so on, so forcing a suitable host
is more and more difficult.




Any action done by the operator is always more important than what the
Scheduler could decide. So if, in an emergency situation, the operator
wants to force a migration to a host, we need to accept it and do it,
even if it doesn't match what the Scheduler could decide (and could
violate any policy).


That's a *force* action, so please leave the operator decide.

-Sylvain




Best Regards.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Thierry Carrez
Flavio Percoco wrote:
 On 11/02/15 17:19 +, Amrith Kumar wrote:
 Personally, I think the focus on password protected IRC channels is a
 distraction from the real issue that we need to ensure that the
 rapidly growing community is one where public discussion and decision
 making are still the norm. Let's be adult about it and realize that
 people will have private conversations. What we need to focus on is
 ensuring that the community rejects private decision making.
 
 I personally don't care if you have private discussions with other
 folks regardless of what their ATC status and impact on the community
 is. You're free to do so, I don't plan to criticize that and that's
 entirely your problem. However, I do care when those discussions
 happen in a private IRC channel because I don't believe that's either
 good for our community or necessary.
 
 It's not good for our community because it *excludes* people that are
 not in such channels and it creates the wrong message around what core
 means, just like it happened with integrated projects and like it
 happens with PTLs. In addition to that, it isolates discussions which
 is something we've been encouraging people not to do because not
 everyone sees it the same way.

Right. You can't prevent occasional private discussions and pings, and
you shouldn't. It's when you encourage and officialize them (by for
example creating a channel for them) that things start to go bad.

I've been using IRC for more than 20 years, and with various FOSS
communities. I've been in a number of private channels, and they
*always* are a slippery slope to a private club, which quickly turns
into a clique. Those are cozy and convenient: only your friends are
listening, nobody objects with you. It really takes a non-trivial amount
of effort on all participants to continue having public discussions
where they belong, because it's easier and more natural to talk to a
controlled group. When I was working at Canonical, we continually
struggled to have the Ubuntu Server discussions in the Freenode
#ubuntu-server channel instead of on the Canonical IRC #server channel.
That's only human nature.

We can't avoid companies setting up private IRC channels. But we can
avoid OpenStack project teams from setting those up. And I really think
we should. Private discussions should be exceptional rather than the
norm, and avoiding setting up IRC channels for them is a great way to
ensure they stay exceptional.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception Request (libvirt vhostuser vif driver)

2015-02-12 Thread Daniel P. Berrange
On Mon, Feb 09, 2015 at 10:04:49AM +, Czesnowicz, Przemyslaw wrote:
 Hi,
 
 I would like to request FFE for vhostuser vif driver.
 
 2 reviews : 
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/libvirt-vif-vhost-user,n,z
 
 BP: https://blueprints.launchpad.net/nova/+spec/libvirt-vif-vhost-user
 Spec: https://review.openstack.org/138736
 
 Blueprint was approved but its status was changed because of FF.
 Vhostuser is a Qemu feature that allows fastpath into the VM for userspace 
 vSwitches.
 The changes are small and mostly contained to libvirt driver.
 Vhostuser support was proposed for Juno by the Snabb switch guys but didn't make
 it; this implementation supports their use case as well.

As Nikola says, this is self-contained code, and a pretty simple bit of
code to understand, so should be straightforward to merge. So I'm happy
to sponsor it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Priority resizing instance on same host

2015-02-12 Thread Manickam, Kanagaraj
Hi,

There is a patch on resize https://review.openstack.org/#/c/117116/
To address the resize, there are some suggestions; please refer to the review
comments on this patch.

Regards
Kanagaraj M

From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
Sent: Thursday, February 12, 2015 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Priority resizing instance on same host

On Thursday, February 12, 2015, Rui Chen chenrui.m...@gmail.com wrote:
Currently, resizing an instance causes a migration from the host the
instance runs on to another host, but the current host may be suitable for the new
flavor. Migration leads to copying the image between hosts if there is no shared storage,
which wastes time.
I think that giving priority to resizing the instance on the current host may be better
if the host is suitable.
The logic like this:

if CONF.allow_resize_to_same_host:
    filter the current host
    if suitable:
        resize on the current host
    else:
        select a host
        resize on that host

I don't know whether there has been any discussion about this question.
Please let me know what you think. If the idea is no problem, maybe I can
register a blueprint to implement it.

But the nova.conf flag for that already exists?

What I would suggest, however, is that some logic is put in to determine 
whether the disk size remains the same while the cpu/ram size is changing - if 
so, then resize the instance on the host without the disk snapshot and copy.
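
A rough sketch of that check, with illustrative names (root_gb/ephemeral_gb
mirror the nova flavor fields loosely; this is not actual nova code):

    def can_resize_in_place(old_flavor, new_flavor, allow_resize_to_same_host):
        """Return True when a resize could stay on the current host.

        If only CPU/RAM change while the disk sizes stay the same, there is
        nothing to copy, so the snapshot-and-copy step could be skipped.
        """
        if not allow_resize_to_same_host:
            return False
        return (old_flavor['root_gb'] == new_flavor['root_gb'] and
                old_flavor['ephemeral_gb'] == new_flavor['ephemeral_gb'])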


--
Jesse Pretorius
mobile: +44 7586 906045
email: jesse.pretor...@gmail.com
skype: jesse.pretorius

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Question about EC2 Tempest tests

2015-02-12 Thread Yaroslav Lobankov
Joe, thank you for the note!

Regards,
Yaroslav Lobankov.

On Thu, Feb 12, 2015 at 2:43 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Wed, Feb 11, 2015 at 3:58 AM, Alexandre Levine 
 alev...@cloudscaling.com wrote:

  Yaroslav,

 The bug:
 https://bugs.launchpad.net/nova/+bug/1410622

 And the review:
 https://review.openstack.org/#/c/152112/

 It's recently fixed.


 Note, AFAIK this has not been backported to stable/juno or stable/icehouse
 so running trunk EC2 tempest tests against stable/juno nova is still not
 working.

 In fact to unwedge stable branches we turned these tests off:
 https://review.openstack.org/#/c/154575/


 Best regards,
   Alex Levine


 On 2/11/15 2:23 PM, Yaroslav Lobankov wrote:

 Hello everyone,

  I have a question about EC2 Tempest tests. When I run these tests, I
 regularly have the same error for all tests:

  EC2ResponseError: EC2ResponseError: 400 Bad Request
 ?xml version=1.0?
 ResponseErrorsErrorCodeAuthFailure/CodeMessageSignature not
 provided/Message/Error/Errors

  My environment is OpenStack (the Juno release) deployed by Fuel 6.0.
 Tempest is from master branch.

  I found that the issue was related to boto (Tempest installs it into
 virtual environment as a dependency). The last available release of boto is
 2.36.0.
 When this version of boto is installed, EC2 tests don't work. But if I
  install boto 2.34.0 instead of 2.36.0, all EC2 tests pass.

  Any thoughts?

  Regards,
 Yaroslav Lobankov.


 __
 OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Rui Chen
Append blueprint link:
https://blueprints.launchpad.net/nova/+spec/verifiable-force-hosts

2015-02-13 10:48 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 I agree with you @Chris.
 A '--force' flag is a good idea; it keeps backward compatibility and
 flexibility.
 We can select whether the filters are applied for force_hosts.
 I will register a blueprint to track the feature.

 The 'force_hosts' feature is so old that I don't know how many users
 have used it.
 Like @Jay says, removing it would settle this once and for all, but I'm not
 sure that this is a suitable occasion.
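
For what it's worth, a hedged sketch of the proposed semantics (names such as
get_filtered_hosts are illustrative, not the exact nova scheduler API):

    def select_destination(filter_handler, hosts, spec, force_host=None,
                           force=False):
        if force_host:
            hosts = [h for h in hosts if h.hostname == force_host]
            if force:
                # Operator override: bypass the filters entirely.
                return hosts[0] if hosts else None
        # Without --force the requested host must still pass the filters.
        filtered = filter_handler.get_filtered_hosts(hosts, spec)
        if not filtered:
            raise Exception("No valid host was found")
        return filtered[0]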

 2015-02-12 23:10 GMT+08:00 Chris Friesen chris.frie...@windriver.com:

 On 02/12/2015 03:44 AM, Sylvain Bauza wrote:

  Any action done by the operator is always more important than what the
 Scheduler
 could decide. So, in an emergency situation, the operator wants to force
 a
 migration to an host, we need to accept it and do it, even if it doesn't
 match
 what the Scheduler could decide (and could violate any policy)

 That's a *force* action, so please leave the operator decide.


 Are we suggesting that the operator would/should only ever specify a
 specific host if the situation is an emergency?

 If not, then perhaps it would make sense to have it go through the
 scheduler filters even if a host is specified.  We could then have a
 --force flag that would proceed anyways even if the filters don't match.

 There are some cases (provider networks or PCI passthrough for example)
 where it really makes no sense to try and run an instance on a compute node
 that wouldn't pass the scheduler filters.  Maybe it would make the most
 sense to specify a list of which filters to override while still using the
 others.

 Chris


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-12 Thread Angus Lees
So inspired by the Rootwrap on root-intensive nodes thread, I went and
wrote a proof-of-concept privsep daemon for neutron:
https://review.openstack.org/#/c/155631
There's nothing neutron-specific in the core mechanism and it could easily
be moved out into a common (oslo) library and reused across other projects.


The basic principles guiding a few design choices were:
- it had to be obvious what code would run with elevated privileges
- the interface between that and the rest of the system had to be easy to
understand and audit
- it had to be about as easy as just implementing a function to add new
functionality

The current code does the predictable things to get a privileged buddy
process: assumes you started as root, creates a socketpair, forks, keeps
limited perms and setuids the privileged process, then reads commands over
the socket.  When the socket closes (for whatever reason), the daemon exits.
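
Roughly, the pattern looks like the following sketch; the uid handling, wire
format and dispatch are simplifications and assumptions rather than the code
in the review:

    import os
    import socket

    import msgpack  # serialization choice in the PoC; any dumb-type codec works

    COMMANDS = {}   # name -> callable, filled by an explicit registration step

    def start(unprivileged_uid):
        parent_sock, child_sock = socket.socketpair()
        if os.fork() == 0:
            # Child: keeps root and serves requests until the socket closes.
            parent_sock.close()
            _serve(child_sock)
            os._exit(0)
        # Parent: drops privileges and keeps the client end of the socket.
        child_sock.close()
        os.setuid(unprivileged_uid)
        return parent_sock

    def _serve(sock):
        unpacker = msgpack.Unpacker()
        while True:
            data = sock.recv(4096)
            if not data:
                return              # peer went away -> daemon exits
            unpacker.feed(data)
            for name, args in unpacker:
                result = COMMANDS[name](*args)
                sock.sendall(msgpack.packb(result))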

Currently it scrobbles around below the neutron.agent.privileged._commands
namespace and allows you to invoke any normal function that doesn't start
with an underscore.  I think I will change this to use some sort of
more explicit decorator, but you get the idea.

On the client side, it generates client stubs of all the same functions
below neutron.agent.privileged.commands at import-time.  Using the daemon
is then as simple as just calling the client stub:

from neutron.agent.privileged import daemon as privsep_daemon
def main():
    privsep_daemon.start()
    ...

from neutron.agent.privileged.commands import ip_lib as priv_ip
def foo():
    # Need to create a new veth interface pair - that usually
    # requires root/NET_ADMIN
    priv_ip.CreateLink('veth', 'veth0', peer='veth1')

Because we now have elevated privileges directly (on the privileged daemon
side) without having to shell out through sudo, we can do all sorts of
nicer things like just using netlink directly to configure networking.
This avoids the overhead of executing subcommands, the ugliness (and
danger) of generating command lines and regex parsing output, and makes us
less reliant on specific versions of command line tools (since the kernel
API should be very stable).
I demonstrate some of that in the above change by a set of privileged ipset
functions that still call out to commands but don't use sh -c or sudo
anywhere (so are immune to shell metacharacters in arguments), and ip_lib
functions that use pyroute2 to just call netlink directly for network
interface create/delete/update.
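
As an illustration of the netlink path, a hedged pyroute2 sketch (call
signatures vary between pyroute2 versions, so treat this as indicative rather
than the code in the review):

    from pyroute2 import IPRoute

    def create_veth_pair(name, peer):
        # One netlink request instead of "ip link add ... type veth peer ..."
        ip = IPRoute()
        try:
            ip.link('add', ifname=name, kind='veth', peer=peer)
        finally:
            ip.close()

    def delete_link(name):
        ip = IPRoute()
        try:
            index = ip.link_lookup(ifname=name)[0]
            ip.link('del', index=index)
        finally:
            ip.close()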

Please discuss.  I could have done this in a spec, but I felt the basic
concept and motivation was obvious and the code specifics were of such
importance that this was better explored in a poc change.  I can post-facto
write a spec if it turns out folks would prefer that.

 - Gus

(if you're curious, it took about a day to write the code, and then about 3
long days of debugging eventlet-related conflicts with the 3rd party
libraries I'd just pulled in.  +1 to removing eventlet, particularly in
low-concurrent-queries agent processes where we can presumably just remove
it and use system threads without any further thought)

On Fri Feb 06 2015 at 5:52:44 PM Steven Dake (stdake) std...@cisco.com
wrote:



 On 2/4/15, 10:24 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Feb 04, 2015 at 09:10:06AM -0800, James E. Blair wrote:
  Thierry Carrez thie...@openstack.org writes:
 
   You make a good point when you mention traditional distro here. I
   would argue that containers are slightly changing the rules of the
   don't-run-as-root game.
  
   Solution (2) aligns pretty well with container-powered OpenStack
   deployments -- running compute nodes as root in a container (and
   embracing abovementioned simplicity/performance gains) sounds like a
   pretty strong combo.
 
  This sounds at least a little like a suggestion that containers are a
  substitute for the security provided by running non-root.  The security
  landscape around containers is complex, and while there are a lot of
  benefits, I believe the general consensus is that uid 0 processes should
  not be seen as fully isolated.
 
  From https://docs.docker.com/articles/security/ :
 
Docker containers are, by default, quite secure; especially if you
take care of running your processes inside the containers as
non-privileged users (i.e., non-root).
 
  Which is not to say that using containers is not a good idea, but
  rather, if one does, one should avoid running as root (perhaps with
  capabilities), and use selinux (or similar).
 
 Yep, I've seen attempts by some folks to run nova-compute and libvirtd
 and QEMU inside a docker container. Because of the inherantly privileged
 nature of what Nova/libvirt/qemu need to do, you end up having to share
 all the host namespaces with the docker container, except for the
 filesystem
 namespace and even that you need to bind mount a bunch of stuff over. As
  a result the container isn't really offering 

Re: [openstack-dev] [congress][Policy][Copper]Collaboration between OpenStack Congress and OPNFV Copper

2015-02-12 Thread Zhipeng Huang
THX Tim!

I think it'd be great if we could have some online discussion ahead of the F2F
LFC summit. We could have the crash course early next week (Monday or
Tuesday), and then Bryan could discuss with Sean in detail when they meet,
with specific questions.

Would this be OK for everyone?

On Fri, Feb 13, 2015 at 7:21 AM, Tim Hinrichs thinri...@vmware.com wrote:

  Bryan and Zhipeng,

  Sean Roberts (CCed) is planning to be in Santa Rosa.   Sean’s definitely
 there on Wed.  Less clear about Thu/Fri.

  I don’t know if I’ll make the trip yet, but I’m guessing Wed early
 afternoon if I can.

  Tim


  On Feb 11, 2015, at 9:04 PM, SULLIVAN, BRYAN L bs3...@att.com wrote:

   Hi Tim,



 It would be great to meet with members of the Congress project if possible
 at our meetup in Santa Rosa. I plan by then to have a basic understanding
 of Congress and some test driver apps / use cases to demo at the meetup.
 The goal is to assess the current state of Congress support for the use
 cases on the OPNFV wiki: https://wiki.opnfv.org/copper/use_cases



 I would be doing the same with ODL but I’m not as far on getting ready
 with it. So the opportunity to discuss the use cases under Copper and the
 other policy-related projects

 (fault management https://wiki.opnfv.org/doctor,
 resource management https://wiki.opnfv.org/promise,
 resource scheduler https://wiki.opnfv.org/requirements_projects/resource_scheduler)
 with Congress experts would be great.



 Once we understand the gaps in what we are trying to build in OPNFV, the
 goal for our first OPNFV release is to create blueprints for new work in
 Congress. We might also just find some bugs and get directly involved in
 Congress to address them, or start a collaborative development project in
 OPNFV for that. TBD



 Thanks,

 Bryan Sullivan | Service Standards | ATT



 *From:* Tim Hinrichs [mailto:thinri...@vmware.com]
 *Sent:* Wednesday, February 11, 2015 10:22 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* SULLIVAN, BRYAN L; HU, BIN; Rodriguez, Iben; Howard Huang
 *Subject:* Re: [openstack-dev] [congress][Policy][Copper]Collaboration
 between OpenStack Congress and OPNFV Copper



 Hi Zhipeng,



 We’d be happy to meet.  Sounds like fun!



 I don’t know of anyone on the Congress team who is planning to attend the
 LF collaboration summit.  But we might be able to send a couple of people
 if it’s the only real chance to have a face-to-face.  Otherwise, there are
 a bunch of us in and around Palo Alto.  And of course, phone/google
 hangout/irc are fine options as well.



 Tim







 On Feb 11, 2015, at 8:59 AM, Zhipeng Huang zhipengh...@gmail.com wrote:



 Hi Congress Team,



 As you might already know, we have a project in OPNFV covering deployment
 policy called Copper (https://wiki.opnfv.org/copper),
 in which we identify Congress as one of the upstream projects to which we need
 to put our requirements. Our team has been working on setting up a simple
 OpenStack environment with Congress integrated that could demo simple use
 cases for policy deployment.



 Would it be possible for you guys and our team to find a time to do a
 Copper/Congress interlock meeting, during which the Congress team could
 introduce how to best integrate Congress with vanilla OpenStack? Will
 some of you attend the LF Collaboration Summit?



 Thanks a lot :)



 --

 Zhipeng (Howard) Huang



 Standard Engineer

 IT Standard & Patent/IT Product Line

 Huawei Technologies Co,. Ltd

 Email: huangzhip...@huawei.com

 Office: Huawei Industrial Base, Longgang, Shenzhen



 (Previous)

 Research Assistant

 Mobile Ad-Hoc Network Lab, Calit2

 University of California, Irvine

 Email: zhipe...@uci.edu

 Office: Calit2 Building Room 2402



 OpenStack, 

Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-12 Thread Robert Collins
On 5 February 2015 at 13:20, Rochelle Grober rochelle.gro...@huawei.com wrote:
 Duncan Thomas [mailto:duncan.tho...@gmail.com] on Wednesday, February 04,
 2015 8:34 AM wrote:



 The downside of numbers rather than camel-case text is that they are less
 likely to stick in the memory of regular users. Not a huge think, but a
 reduction in usability, I think. On the other hand they might lead to less
 guessing about the error with insufficient info, I suppose.

 To make the global registry easier, we can just use a per-service prefix,
 and then keep the error catalogue in the service code repo, pulling them
 into some sort of release periodically



 [Rockyg]  In discussions at the summit about assigning error codes, we
 determined it would be pretty straightforward to build a tool that could be
 called when a new code was needed and it would both assign an unused code
 and insert the error summary for the code in the DB it would keep to ensure
 uniqueness.  If you didn’t provide a summary, it wouldn’t spit out an error
 code;-)  Simple little tool that could be in oslo, or some cross-project
 code location.
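
A hypothetical sketch of such a tool (storage format, prefixes and naming are
all assumptions, not an agreed design):

    import json

    class ErrorCodeRegistry:
        """Hands out the next unused code for a service prefix."""

        def __init__(self, path):
            self.path = path
            try:
                with open(path) as f:
                    self.codes = json.load(f)   # e.g. {"CIN-0001": "summary"}
            except IOError:
                self.codes = {}

        def assign(self, prefix, summary):
            if not summary:
                raise ValueError("no summary, no error code")
            used = [int(code.split('-')[1])
                    for code in self.codes if code.startswith(prefix + '-')]
            new_code = "%s-%04d" % (prefix, max(used or [0]) + 1)
            self.codes[new_code] = summary
            with open(self.path, 'w') as f:
                json.dump(self.codes, f, indent=2, sort_keys=True)
            return new_code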

Apropos of logging, has https://tools.ietf.org/html/rfc5424 been
considered? Combined with https://tools.ietf.org/html/rfc5426 we'd
have a standards-based (and thus already supported by logging and
analysis tools) framework. AKA, we seem to be on the verge of
inventing a thing that's already been invented.
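
For reference, a minimal sketch of what the stdlib already gives us on the
transport side (RFC 5426 is syslog over UDP); full RFC 5424 structured data
would still need a custom formatter, so this is only half the story:

    import logging
    import logging.handlers

    logger = logging.getLogger("nova.example")
    handler = logging.handlers.SysLogHandler(address=("localhost", 514))
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.warning("instance %s failed to resize", "uuid-1234")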

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-12 Thread Robert Collins
On 13 Feb 2015 17:42, Angus Lees g...@inodes.org wrote:

 So inspired by the Rootwrap on root-intensive nodes thread, I went and
wrote a proof-of-concept privsep daemon for neutron:
https://review.openstack.org/#/c/155631
 There's nothing neutron-specific in the core mechanism and it could
easily be moved out into a common (oslo) library and reused across other
projects.

Bravo. My questions are more conceptual than a code review: msgpack rather
than protobuf? Given your previous experience there I'm just curious.

Are you concerned that commands might call into less trusted areas of code?
Would it make sense to have the privileged commands be separate somehow to
avoid this?

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Optional Properties in an Entity

2015-02-12 Thread Jay Pipes

Hi Brian, thanks for the response. Some comments inline :)

On 02/11/2015 09:57 AM, Brian Rosmaita wrote:

On 2/9/15, 8:44 PM, Joe Gordon joe.gord...@gmail.com wrote:


On Mon, Feb 9, 2015 at 1:22 PM, Jay Pipes jaypi...@gmail.com wrote:

On 01/20/2015 10:54 AM, Brian Rosmaita wrote:

From: Kevin L. Mitchell [kevin.mitch...@rackspace.com]
Sent: Monday, January 19, 2015 4:54 PM

When we look at consistency, we look at everything else
in OpenStack.
  From the standpoint of the nova API (with which I am
the most familiar),
I am not aware of any property that is ever omitted from
any payload
without versioning coming in to the picture, even if its
value is null.
Thus, I would argue that we should encourage the first
situation, where
all properties are included, even if their value is null.


That is not the case for the Images API v2:

An image is always guaranteed to have the following
attributes: id,
status, visibility, protected, tags, created_at, file and
self. The other
attributes defined in the image schema below are guaranteed to
be defined, but is only returned with an image entity if
they have
been explicitly set. [1]


This was a mistake, IMHO. Having entirely extensible schemas
means that there is little guaranteed consistency across
implementations of the API.


+1, Subtle hard to discover differences between clouds is a pain for
interchangeability.


Jay and Joe, thanks for weighing in.  I’m still not convinced that the
course taken in the Images v2 API was a mistake, though.  (I wasn’t
involved in its initial design, so this isn’t personal, just curiosity.)
  Here are a few reasons why, maybe someone can set me straight?

(1) Leaving null elements out is parsimonious.
As long as there’s a JSON schema, the client has a good idea what to
expect.  If you include
   “whatever”: null
in the response, I don’t see what that buys you.  If you simply don’t
include the “whatever” element, the recipient knows it’s not set.  If
you do include it set to null, you know that it’s not set … and you
increased the size of the response payload without increasing its
informativeness.  Further, even if you include the “whatever” element
set to null, the client is still going to have to check it to handle the
null case, so it’s really just a matter of how the client checks, not
whether it has to check.


Agreed, it doesn't buy you much at all. I'm more interested in just 
being consistent across APIs regarding this.



(2) Leaving null elements out doesn’t affect interchangeability.
If our convention is that unset elements aren’t included, and we’ve got
a JSON schema, then everyone knows what’s up.  Further, looking
specifically at the use cases for images in Glance, different clouds
have different sets of image properties that they use for specific
purposes that may be unique to their cloud.


And this, right here, is not something we should encourage. Tag images 
with whatever free-form tags you wish, as a user, but deployers of the 
Glance image service should be able to say attribute XYZ means the same 
thing across different deployments of Glance. Otherwise, there's no use 
to having those attributes, IMO, since you cannot rely on them meaning 
the same thing.


  For example, some may put a

hyperlink to licensing info in an image property, or versioning info, or
package lists, or whatever you can fit in 255 chars.  So a client
(intelligent or not) connecting to various clouds can’t expect to find
the same set of properties defined in every cloud (except for the ones
guaranteed by contract, which are listed above).  Thus, you’re going to
have to deal with the problem of non-existent elements when you get to
the additionalProperties in JSON no matter what.  But as long as you
know this, you’re OK.  I think it’s a much bigger problem when you’ve
got a mixture of null, “”, {} and other ways of conveying empty elements
in a response.  By simply leaving properties out, there’s no question
that they’re not set.


I do not think that additionalProperties should ever be anything other 
than false for any public API.
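
To make the point concrete, a small sketch with the jsonschema library (the
schema here is illustrative, not the actual Images v2 schema):

    import jsonschema

    closed_schema = {
        "type": "object",
        "properties": {
            "id": {"type": "string"},
            "status": {"type": "string"},
        },
        "required": ["id", "status"],
        "additionalProperties": False,   # unknown keys are rejected
    }

    image = {"id": "abc", "status": "active", "licensing_url": "http://..."}

    try:
        jsonschema.validate(image, closed_schema)
    except jsonschema.ValidationError as exc:
        # With additionalProperties: False the free-form key above fails
        # validation, which is the consistency-across-clouds argument.
        print(exc.message)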



(3) A little consistency is a good thing.
Jay mentions that having entirely extensible schemas means that there’s
little guaranteed consistency across implementations of the API.  In the
Images API v2 case, the schema isn’t entirely extensible, you can add
string-valued additionalProperties.  So there’s that.  But the bigger
picture is that we’re at the infancy of clouds and cloud management,
there’s no way we can anticipate the set of Image 

[openstack-dev] [cinder] Etherpad for volume replication created ...

2015-02-12 Thread Jay S. Bryant

All,

Several members of the Cinder team and I were discussing the current 
state of volume replication while trying to figure out the best way to 
resolve bug 1383524 [1].  The outcome of the discussion was a decision 
to hold off on integrating volume replication support for additional 
drivers.


I took notes from the discussion and have put them in the etherpad. We 
can use that, first thing in L, as a starting point to rework and fix 
replication support.


Please let me know if you have any questions and feel free to update the 
etherpad with additional thoughts.


Thanks!
Jay


[1] https://bugs.launchpad.net/cinder/+bug/1383524 -- Periodic update
replication status causing issues


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Rui Chen
I agree with you @Chris.
A '--force' flag is a good idea; it keeps backward compatibility and
flexibility.
We can select whether the filters are applied for force_hosts.
I will register a blueprint to track the feature.

The 'force_hosts' feature is so old that I don't know how many users
have used it.
Like @Jay says, removing it would settle this once and for all, but I'm not sure
that this is a suitable occasion.

2015-02-12 23:10 GMT+08:00 Chris Friesen chris.frie...@windriver.com:

 On 02/12/2015 03:44 AM, Sylvain Bauza wrote:

  Any action done by the operator is always more important than what the
 Scheduler
 could decide. So, in an emergency situation, the operator wants to force a
 migration to an host, we need to accept it and do it, even if it doesn't
 match
 what the Scheduler could decide (and could violate any policy)

 That's a *force* action, so please leave the operator decide.


 Are we suggesting that the operator would/should only ever specify a
 specific host if the situation is an emergency?

 If not, then perhaps it would make sense to have it go through the
 scheduler filters even if a host is specified.  We could then have a
 --force flag that would proceed anyways even if the filters don't match.

 There are some cases (provider networks or PCI passthrough for example)
 where it really makes no sense to try and run an instance on a compute node
 that wouldn't pass the scheduler filters.  Maybe it would make the most
 sense to specify a list of which filters to override while still using the
 others.

 Chris


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [API] Do we need to specify follow the HTTP RFCs?

2015-02-12 Thread Ian Cordasco
On 2/12/15, 12:01, Chris Dent chd...@redhat.com wrote:


I meant to get to this in today's meeting[1] but we ran out of time
and based on the rest of the conversation it was likely to lead to a
spiral of different interpretations, so I thought I'd put it up here.

$SUBJECT says it all: When writing guidelines to what extent do we
think we should be recapitulating the HTTP RFCs and restating things
said there in a form applicable to OpenStack APIs?

For example should we say:

 Here are some guidelines, for all else please refer to RFCs
 7230-5.

Or should we say something like:

 Here are some guidelines, including:

 If your API has a resource at /foo which responds to an authentic
 request with method GET but not with method POST, PUT, DELETE or
PATCH
 then when an authentic request is made to /foo that is not a GET it
must
 respond with a 405 and must include an Allow header listing the
  currently supported methods.[2]

I ask because I've been fleshing out my gabbi testing tool[3] by running
it against a variety of APIs. Gabbi makes it very easy to write what I
guess the officials call negative tests -- Throw some unexpected but well-
formed input, see if there is a reasonable response -- just by making
exploratory inquiries into the API and then traversing the discovered
links
with various methods and content types.

What I've found is too often the response is not reasonable. Some of
the problems arise from the frameworks being used, in other cases it
is the implementing project.

We can fix the existing stuff in a relatively straightforward but
time consuming fashion: Use tools like gabbi to make more negative tests,
fix the bugs as they come up. Same as it ever was.

For new stuff, however, does there need to be increased awareness of
the rules and is it the job of the working group to help that
increasing along?

[1]
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-02-12-16.00.html

[2] This is a paraphase of:
http://tools.ietf.org/html/rfc7231#section-6.5.5

[3] https://pypi.python.org/pypi/gabbi

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

So that particular problem you mention is an issue with the Routes
package. It assumes you will define every (method, route) that you wish to
handle and it’ll 404 everything else (because no definition was found). If
we used better (slightly higher level) frameworks, we probably wouldn’t
have to concern ourselves with defining those combinations for things that
/should/ intuitively return a 405. Miguel has run into this before and
only recently was able to fix it in one of the projects.
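
For what it's worth, the guideline itself is framework-neutral; a bare WSGI
sketch of the 405-plus-Allow behaviour (purely illustrative):

    def app(environ, start_response):
        if environ.get("PATH_INFO") != "/foo":
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        if environ["REQUEST_METHOD"] not in ("GET", "HEAD"):
            start_response("405 Method Not Allowed",
                           [("Allow", "GET, HEAD"),
                            ("Content-Type", "text/plain")])
            return [b"method not allowed"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"foo"]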

That said, I think that I’ve been referencing RFCs a lot lately in
discussions with people on topics. I’ll happily constantly
restate/recapitulate the relevant parts of each RFC I need to reference in
a discussion but I’ve been trying to decide how much of it we should
emphasize.

I worry that if we put too much emphasis on one part of an RFC, developers
will expect the rest of it to be less important and unnecessary to follow.
Further, there are more relevant RFCs than just 7230-7235.

3986 is relevant from a URI perspective and what exactly is allowed in
what part of a URI and what expectations application developers who handle
unquoting (a.k.a., percent-decoding) some part of the URI should expect
about what always will be encoded and what /may/ be encoded.

4627 defines exactly what three character sets are allowed to be used for
a JSON body that’s returned with a Content-Type of ‘application/json’.

For people who want to accept multipart/form-data bodies with filenames
(or other metadata) that is in a non-latin-1 character set, they’ll need
to read RFC 2231 to learn how to properly parse and handle an upload that
correctly handles those names.

For people wanting to encode non-latin-1 character sets in regular
request/response headers, they’ll need to read 5987.

Give me enough time and I could probably point you at the rest of the RFCs
that I’ve read while working on requests. That said, expecting everyone to
read each and every one of these RFCs is also unreasonable (especially the
new HTTP/1.1 RFCs of which 7238 is also being added presently to define
the behaviour of a 308 redirect).

There’s a lot to read, but it really is necessary to know the standards so
more discussions like the one that took place on
https://review.openstack.org/#/c/141229/ don’t happen. It seems to me many
of our APIs have been built without proper research into standards for how
they should be built.

Paraphrasing is nice, but pointing people towards good existing tools
would be nicer.

Cheers,
Ian


Re: [openstack-dev] [nova] Feature Freeze Exception request for x509 keypairs

2015-02-12 Thread Alex Xu
Yeah, this patch is in good shape.

2015-02-13 9:05 GMT+08:00 Ken'ichi Ohmichi ken1ohmi...@gmail.com:

 2015-02-12 21:20 GMT+09:00 Claudiu Belu cb...@cloudbasesolutions.com:
 
  Hello.
 
  I would like to ask for a FFE for the x509 keypairs blueprint:
  https://blueprints.launchpad.net/nova/+spec/keypair-x509-certificates
 
  This blueprint is split up into 3 commits:
 
  [1] Database migration: previously merged, but had to be reverted
 because of
  a small issue. Everything is fixed, original reverter Johannes Erdfelt
 gave
  his +1, currently the commit has a +2.
  https://review.openstack.org/#/c/150800/
 
  [2] Nova-API change: It uses the microversioning API and it has been
 decided
  to be the first microversioning commit, since it is closest to merge.
  Christopher Yeoh reviewed and helped with this commit.
  https://review.openstack.org/#/c/140313/
 
  [3] X509 keypair implementation: Simple commit, it had a +2 on a previous
  commit. https://review.openstack.org/#/c/136869/
 
  I also want to point out that this blueprint targets all the drivers, not
  just Hyper-V. This blueprint targets all the users that desire to deploy
  instances with Windows guests and desire password-less authentication,
 the
  same way users can ssh into Linux-type guests.

 The patches have been thoroughly reviewed and this feature will be the first
 microversion, so I'm happy to support this development in Kilo.

 Thanks
 Ken Ohmichi

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-12 Thread Eric Windisch


 from neutron.agent.privileged.commands import ip_lib as priv_ip
 def foo():
     # Need to create a new veth interface pair - that usually
     # requires root/NET_ADMIN
     priv_ip.CreateLink('veth', 'veth0', peer='veth1')

 Because we now have elevated privileges directly (on the privileged daemon
 side) without having to shell out through sudo, we can do all sorts of
 nicer things like just using netlink directly to configure networking.
 This avoids the overhead of executing subcommands, the ugliness (and
 danger) of generating command lines and regex parsing output, and make us
 less reliant on specific versions of command line tools (since the kernel
 API should be very stable).


One of the advantages of spawning a new process is being able to use flags
to clone(2) and to set capabilities. This basically means to create
containers, by some definition. Anything you have in a privileged daemon
or privileged process ideally should reduce its privilege set for any
operation it performs. That might mean it clones itself and executes
Python, or it may execvp an executable, but either way, the new process
would have less-than-full-privilege.

For instance, writing a file might require root access, but does not need
the ability to load kernel modules. Changing network interfaces does not
need access to the filesystem, no more than changes to the filesystem needs
access to the network. The capabilities and namespaces mechanisms resolve
these security conundrums and simplify principle of least privilege.
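
A stdlib-only sketch of that per-operation privilege reduction (the uid/gid
handling is a simplification; real code would also want capabilities and
namespaces, which need extra libraries):

    import os

    def run_unprivileged(func, uid, gid, *args):
        """Run func(*args) in a short-lived child with reduced privileges."""
        pid = os.fork()
        if pid == 0:
            try:
                os.setgroups([])
                os.setgid(gid)
                os.setuid(uid)    # after this the child cannot regain root
                func(*args)
                os._exit(0)
            except Exception:
                os._exit(1)
        _, status = os.waitpid(pid, 0)
        return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0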

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-12 Thread Angus Lees
On Fri Feb 13 2015 at 4:05:33 PM Robert Collins robe...@robertcollins.net
wrote:


 On 13 Feb 2015 17:42, Angus Lees g...@inodes.org wrote:
 
  So inspired by the Rootwrap on root-intensive nodes thread, I went and
 wrote a proof-of-concept privsep daemon for neutron:
 https://review.openstack.org/#/c/155631
  There's nothing neutron-specific in the core mechanism and it could
 easily be moved out into a common (oslo) library and reused across other
 projects.

 Bravo. More conceptual than a code review my questions are. msgpack rather
 than protobuf ? Given your previous experience there I'm just curious.

I have no educated preference between the two, and I didn't know of any
high-performance precedent within openstack.  msgpack was just the first
thing I came across that seemed well supported, fast, and only handled dumb
types (no object auto-vivifying features that might backfire on us).

We could use json too if we wanted to avoid a new dependency, or presumably
numerous other choices.


  Are you concerned that commands might call into less trusted areas of
 code? Would it make sense to have the privileged commands be separate
 somehow to avoid this?

Hrm, not particularly, although we should explore any implications.  If a
standalone chunk of python imported other python libraries, then they may
have a path that ends up with them able to be called - which I figure is
similar to the current situation that also requires an explicit python
import (or some other chain of object references).  If there's a bug that
lets you escape the python level and run arbitrary C code, then it won't
matter what's already loaded and we only have the linux
capabilities/permissions mechanisms to save us.

In addition, the current simple fork/no-exec is also good for sharing most
of the pages in memory - making the overhead extremely minimal.

Oh, if you mean separate just in a filesystem/code organisation sense,
rather than a Linux process sense, then yes I do think they should be in a
separate place for ease of auditing.  In my change above I have them all
below a particular neutron.agent.privileged._commands prefix, and the
communication assumes/restricts it to this.  We can of course pick another
namespace prefix, but I agree that even with some different decorator-based
method, I don't think we should just have privileged commands scattered
anywhere throughout our regular codebase.

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Ryu CI scheduled outage

2015-02-12 Thread YAMAMOTO Takashi
Ryu/ofagent CI will be offline during this weekend.
sorry for inconvenience.

YAMAMOTO Takashi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Stefano Maffulli
On Thu, 2015-02-12 at 10:37 +0100, Thierry Carrez wrote:
 Right. You can't prevent occasional private discussions and pings, and
 you shouldn't. It's when you encourage and officialize them (by for
 example creating a channel for them) that things start to go bad.

Yes, that's very bad. Private IRC channels are a bad habit that
reinforces bad, anti-social behavior. And IRC is mostly a habit: I
join tens of channels but regularly read only one or two. Most people I
know have similar habits.

Private conversations are a fact of life but in OpenStack space they
should be the *exception*, created when needed and destroyed after the
crisis. 

I have private conversations all the time: they are about specific
individuals, include sensitive data, legal issues that cannot be
diffused and similar. I create a private channel or a PM for that
conversation only. 

I don't hang out with others in a private channel: that's a very bad
habit. If you have a private channel you hang out there, you read that
channel, you share jokes there, and you will eventually throw in topics
that would be perfectly safe to discuss publicly. 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Devstack and Grenade futures

2015-02-12 Thread Sean Dague
In the spirit of summarizing conversations that have been had, I'd like
to point people that are interested at the Devstack FUTURE.rst document
- https://github.com/openstack-dev/devstack/blob/master/FUTURE.rst

This is a recent attempt by the Devstack team to write down a "where
are we going" document, which helps contributors and reviewers figure
out where we are headed, and helps us along that path.

A big piece of this is external Devstack plugins, which let a project
keep their Devstack enablement code within their own source tree, and let
people easily add their project to a local devstack with a single
``enable_plugin`` line. (And they even let you do gate jobs with pretty
minimal additional configuration.) I've been working with the ec2api,
glusterfs, and opendaylight folks on their initial plugins over the past
couple of weeks, and the results have been promising enough that this
looks like a much better long-term evolution for everyone. Once we get a
few more examples live and working, I think we'll be ready for a best
practices document in the devstack docs tree.

== Grenade ==

Based on the early successes here, I think Grenade is going to need to
go down a similar path.

Part of what made moving Devstack plugins out of tree a pretty clear win
was that extras.d/ already defined a reasonable set of phases, and we
had a bunch of projects that were already supported in tree in that
model. I'm sure we'll need some additional hook points in the future,
but we can roll those out in a backwards-compatible way, or at least with
a long enough multi-cycle deprecation that everyone can catch up.

For Grenade, we've got two call-out interfaces we need to care about.

1) the upgrade-* scripts

This, plus the from-/within- parts, are all project specific, and you can
imagine how that could be moved out of tree if we guaranteed some
exported variables (like TOP_DIR) and moved things like etc saving and
db-sync into common call paths. It's some work, but not really much new
invention.

2) resource survivability

In grizzly this was done ad-hoc, and we broke it for a while. In juno we
rebuilt a tool (javelin2) to try to give us a unified view of the world.
That turned out more difficult and more coupled than we expected.

I think for Liberty we need to step back on this and define the
interface, and let the implementations evolve behind it.

So we need something like:

resources.sh [create|verify|verify-noapi|delete]

old:
   resources.sh create
   resources.sh verify
downforupgrade:
   resources.sh verify-noapi
new:
   resources.sh verify
done:
   resources.sh delete

In the create phase you build your resources; we call out to verify
before we shut down services; after services are down we call
verify-noapi; and after they are up again we call verify again. We need
two verify calls because we want to test that things like computes
weren't deleted while nova-compute was down.

The back end for these might be openstack cli commands, a chunk of
python code, ansible; it doesn't really matter much. The interface
contract should provide us isolation from that.
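
As a strawman, and purely to illustrate the contract (the phase names
come from the proposal above; everything inside the functions is made
up), a Python back end could be as simple as:

#!/usr/bin/env python
# Hypothetical resources.py: the four phase names come from the proposal
# above, the function bodies are invented.
import sys


def create():
    print("create a server, a volume, ... whatever should survive upgrade")


def verify(noapi=False):
    how = "without touching the APIs" if noapi else "through the APIs"
    print("check the resources created earlier still exist, %s" % how)


def delete():
    print("clean up everything created in the create phase")


PHASES = {
    "create": create,
    "verify": verify,
    "verify-noapi": lambda: verify(noapi=True),
    "delete": delete,
}

if __name__ == "__main__":
    if len(sys.argv) != 2 or sys.argv[1] not in PHASES:
        sys.exit("usage: resources.py [create|verify|verify-noapi|delete]")
    PHASES[sys.argv[1]]()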


I think all of this is probably a Liberty timeline, because I just can't
imagine anyone getting around to it still in this cycle.

Which leaves the open question of additional project support in Grenade.
I think right now we should not add much more to Grenade during this
cycle, because I expect that's just more we'd have to unwind in the next.
So for projects not in upgrade testing yet, let's hold off. I will try to
have a prototype up for discussion in Vancouver and make things a lot more
inclusive for projects in Liberty.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Feb 12 1400 UTC

2015-02-12 Thread Sergey Lukjanov
Log:
http://eavesdrop.openstack.org/meetings/sahara/2015/sahara.2015-02-12-14.00.txt
Minutes:
http://eavesdrop.openstack.org/meetings/sahara/2015/sahara.2015-02-12-14.00.html

On Thu, Feb 12, 2015 at 1:26 AM, Andrew Lazarev alaza...@mirantis.com
wrote:

 Hi guys,

 We'll be having the Sahara team meeting tomorrow at #openstack-meeting-3
 channel.

 Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings


 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20150212T14

 Thanks,
 Andrew

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Missing the next API WG meeting

2015-02-12 Thread Ian Cordasco
I’ll be around so I’ll do this if no one else will be around to do it.

On 2/11/15, 16:39, Everett Toews everett.to...@rackspace.com wrote:

I’ll be missing the next API WG meeting [1] as I’m in some all day
training. Someone else will have to #startmeeting api wg

Cheers,
Everett

[1] https://wiki.openstack.org/wiki/Meetings/API-WG
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Update on DB IPAM driver

2015-02-12 Thread Salvatore Orlando
Hi,

I have updated the patch; albeit not complete yet, it's getting closer to
being an allocator decent enough to replace the built-in logic.

I will be unable to attend today's L3/IPAM meeting due to a conflict, so
here are some highlights from me on which your feedback is more than
welcome:

- I agree with Carl that the IPAM driver should not have explicit code
paths for autoaddress subnets, such as DHCPv6 stateless ones. In that case,
the consumer of the driver will generate the address and pass it to the
IPAM driver, for which this would just be the allocation of a specific
address. However, I have the impression the driver still needs to be aware
of whether the subnet has an automatic address mode or not - since in that
case 'any' address allocation won't be possible. There are already comments
about this in the review [1]

- We had a discussion last week on whether the IPAM driver and neutron
should 'share' database tables. I went back and forth quite a lot, but
now it seems to me the best thing to do is to have the IPAM driver maintain
an 'ip_requests' table, where it stores allocation info. This table
partially duplicates data in IPAllocation, but on the plus side it makes
the IPAM driver self-sufficient. The next step would be to decide whether
we want to go a step further and also assume the driver should not access
Neutron's DB at all, but I would defer that discussion to the next
iteration (for both the driver and the IPAM interface)

- I promised a non-blocking algorithm for IP allocation. The one I was
developing was based on defining the primary key on the ip_requests table
in a way that prevents two concurrent requests from getting the same
address, and simply retrying until an insert satisfies the primary key
constraint. However, recently emerged information on MySQL Galera's data
set certification [2] clarified that this kind of algorithm would still
result in a deadlock error from failed data set certification. It is worth
noting that in this case a solution based on traditional compare-and-swap
is not possible because concurrent requests would be inserting data at the
same time. I am now working on an alternative solution, and I would like
to first implement a PoC for it (so that I can prove it works).
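
For concreteness, here is a toy illustration of the insert-and-retry idea
described above (sqlite stands in for the real backend and the schema is
invented; as noted, per [2] the same pattern still fails certification
under Galera, which is why an alternative is needed):

import sqlite3

# Invented schema: the composite primary key is what stops two concurrent
# requests from recording the same (subnet, address) pair.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ip_requests ("
             "subnet_id TEXT, ip_address TEXT, "
             "PRIMARY KEY (subnet_id, ip_address))")


def allocate(subnet_id, candidates):
    # Try candidate addresses until an INSERT satisfies the primary key
    # constraint; a violation means another request won the race.
    for ip in candidates:
        try:
            with conn:
                conn.execute("INSERT INTO ip_requests VALUES (?, ?)",
                             (subnet_id, ip))
            return ip
        except sqlite3.IntegrityError:
            continue
    raise RuntimeError("no free addresses in this subnet")


print(allocate("subnet-1", ["10.0.0.2", "10.0.0.3"]))  # returns 10.0.0.2
print(allocate("subnet-1", ["10.0.0.2", "10.0.0.3"]))  # skips .2, returns .3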

- The DB base refactoring being performed by Pavel is under way [3]. It is
worth noting that this is a non-negligible change to some of Neutron's
basic and more critical workflows. We should expect pushback from the
community regarding the introduction of this change in the 3rd milestone.
At this stage I would suggest either:
A) considering a strategy for running pluggable IPAM as optional, or
B) delaying it to Liberty.
(and that's where I get virtually jeered and pelted with rotten tomatoes)

Thanks for reading this post,
Salvatore

[1] https://review.openstack.org/#/c/150485/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-February/056007.html
[3] https://review.openstack.org/#/c/153236/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Designate] Documentation Sprint tomorrow @ 16.00 UTC

2015-02-12 Thread Hayes, Graham
Hey

Designate is having a 2 hour docs sprint this Friday @ 16.00 UTC

List of topics is here -
https://etherpad.openstack.org/p/designate-documentation-sprint

If you are interested in helping out, please add your name to an
unclaimed topic :)

We will be co-ordinating via IRC in #openstack-dns

Thanks,

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Stefano Maffulli
On Thu, 2015-02-12 at 10:35 +, Kuvaja, Erno wrote:
 I'm not attacking against having summits, I think the face to face
 time is incredibly valuable for all kind of things. My point was to
 bring up general flaw of the flow between all inclusive decision
 making vs. decided in summit session.

I have the feeling you're assigning too much importance to the
conversations that happen face to face in the summit. Summits are the
apex, the end (or one of the final moments) of conversations that
started months/weeks before the bi-annual event. They're not the place
where an elite shows up, discusses newly revealed topics and decides
without involving anyone else.

With the design summits being the result of longer conversations, there
is very little risk that the people relevant to *that specific*
conversation are not in the room. For those rare occasions, we have
set up VoIP bridges and other tools to include them in the room in real
time and have them participate in the decision-making process in full.

I don't accept the thought that everything has to go back to the mailing
list, because that would slow us down *even more*. We're trying to keep a
fine balancing act in place here between speed of execution and
inclusion. If someone has trouble going to the Summit, let's talk and
solve that individual's problems, because we can't generalize this
issue too much.

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Running HBase Jobs (was: About Sahara Oozie plan)

2015-02-12 Thread Trevor McKay
Hi Lu, folks,

I've been investigating how to run Java actions in Sahara EDP that
depend on 
HBase libraries (see snippet from the original question by Lu below).

In a nutshell, we need to use Oozie sharelibs for this. I am working on
a spec now, thinking 
about the best way to support this in Sahara, but here is a semi-manual
intermediate solution
that will work if you would like to run such a job from Sahara.

1) Create your own Oozie sharelib that contains the HBase jars.

This ultimately is just an HDFS dir holding the jars.  On any node in
your cluster with HBase installed, run the attached script or something
like it (I like Python better than bash :) ). It simply splits the
classpath and uploads all the jars to the specified HDFS dir.

$ parsePath.py /user/myhbaselib

2) Run your Java action from EDP, but use the oozie.libpath
configuration value when you
launch the job.  For example, on the job configure tab set oozie.libpath
like this:

Name           Value
oozie.libpath  hdfs://namenode:8020/user/myhbaselib

(note, support for this was added in
https://review.openstack.org/#/c/154214/)

That's it! In general, you can add any jars that you want to a sharelib
and then set the
oozie.libpath for the job to access them.

Here is a good blog entry about sharelibs and extra jars in Oozie jobs:

http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/

Best,

Trevor

--- original question
(1) EDP job in Java action

   The background is that we want to write integration test cases for newly
added services like HBase and ZooKeeper, just like the edp-examples do
(sample code under sahara/etc/edp-examples/). So I thought I could write an
example as an EDP job with a Java action to test the HBase service. I wrote
HBaseTest.java, packaged it as a jar file, and ran the jar manually with
the command java -cp `hbase classpath` HBaseTest.jar HBaseTest; it works
well in the VM (provisioned by Sahara with the CDH plugin).
“/usr/lib/jvm/java-7-oracle-cloudera/bin/java -cp HBaseTest.jar:`hbase
classpath` HBaseTest”
So I want to run this job via Horizon on the Sahara job execution page, but
found no place to pass the `hbase classpath` parameter (I have tried
java_opt, configuration and args, all failed). When I pass “-cp
`hbase classpath`” to java_opts on the Horizon job execution page, Oozie
raises the error below.


#!/usr/bin/python
import os
import subprocess
import sys


def main():
    # Create the target sharelib directory in HDFS.
    subprocess.Popen("hadoop fs -mkdir %s" % sys.argv[1], shell=True).wait()
    # Ask HBase for its classpath and upload every jar found on it.
    cp, stderr = subprocess.Popen("hbase classpath", shell=True,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE).communicate()
    for p in cp.split(':'):
        if p.endswith(".jar"):
            print(p)
            subprocess.Popen("hadoop fs -put %s %s"
                             % (os.path.realpath(p), sys.argv[1]),
                             shell=True).wait()


if __name__ == "__main__":
    main()
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception request for x509 keypairs

2015-02-12 Thread Claudiu Belu

Hello.

I would like to ask for a FFE for the x509 keypairs blueprint: 
https://blueprints.launchpad.net/nova/+spec/keypair-x509-certificates

This blueprint is split up into 3 commits:

[1] Database migration: previously merged, but had to be reverted because of a 
small issue. Everything is fixed, original reverter Johannes Erdfelt gave his 
+1, currently the commit has a +2. https://review.openstack.org/#/c/150800/

[2] Nova-API change: It uses the microversioning API and it has been decided to 
be the first microversioning commit, since it is closest to merge. Christopher 
Yeoh reviewed and helped with this commit. https://review.openstack.org/#/c/140313/

[3] X509 keypair implementation: Simple commit, it had a +2 on a previous 
commit. https://review.openstack.org/#/c/136869/

I also want to point out that this blueprint targets all the drivers, not just 
Hyper-V. It benefits all users who want to deploy instances with Windows guests 
and desire password-less authentication, the same way users can SSH into 
Linux-type guests.

Best regards,

Claudiu Belu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] expanding sahara maint team

2015-02-12 Thread Sergey Lukjanov
Hi stable maint folks,

I'd like to propose to add the following folks to the sahara stable maint
team:

* Trevor McKay (tmckay)
* Ethan Gafford (egafford)
* Andrew Lazarev (alazarev)
* Sergey Reshetnyak (sreshetniak)

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feature Freeze Exception for hyper-v unit tests refactoring

2015-02-12 Thread Daniel P. Berrange
On Thu, Feb 12, 2015 at 12:18:49PM +, Claudiu Belu wrote:
 Hello.
 
 I would like to request a FFE for the Hyper-V unit tests refactoring 
 blueprint: 
 https://blueprints.launchpad.net/nova/+spec/hyper-v-test-refactoring
 
 The point of the blueprint was to get rid of the ancient test_hypervapi.py 
 tests, that use mox, as they prove more and more difficult to maintain, 
 especially when adding new features or fixing bugs. Those tests would be 
 replaced with mock unit tests, per Ops class.
 
 There were 11 commits in total, 6 already merged, 5 remain. Out of these 5, 
 the last 2 are trivial:
 
 [1] https://review.openstack.org/#/c/138934/
 [2] https://review.openstack.org/#/c/139796/
 [3] https://review.openstack.org/#/c/139797/
 
 [4] https://review.openstack.org/148980 - unit tests for methods that have 1 
 instruction each. Just to have coverage on all the modules.
 
 [5] https://review.openstack.org/139798 - just removes test_hypervapi.py
 
 The commits have been reviewed, already have a couple of +1s.
 
 
 Note: this blueprint is limited to the Hyper-V unit tests and does not
 change the functionality of the Driver in any way. It is barely worthy
 of the name blueprint and I consider it more of a bug, rather than a
 blueprint. This will improve maintainability, readability and coverage
 for the Hyper-V classes.

Yeah, I personally don't think this kind of code cleanup requires a
blueprint at all, and probably doesn't even need a bug either. So
from my POV you don't need to even request this FFE - I'd be happy
 with those test cleanups being merged any time except for during
 the very final code freeze before release. Let's see if other nova
cores agree...

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feature Freeze Exception for hyper-v unit tests refactoring

2015-02-12 Thread Sean Dague
On 02/12/2015 07:28 AM, Daniel P. Berrange wrote:
 On Thu, Feb 12, 2015 at 12:18:49PM +, Claudiu Belu wrote:
 Hello.

 I would like to request a FFE for the Hyper-V unit tests refactoring 
 blueprint: 
 https://blueprints.launchpad.net/nova/+spec/hyper-v-test-refactoring

 The point of the blueprint was to get rid of the ancient test_hypervapi.py 
 tests, that use mox, as they prove more and more difficult to maintain, 
 especially when adding new features or fixing bugs. Those tests would be 
 replaced with mock unit tests, per Ops class.

 There were 11 commits in total, 6 already merged, 5 remain. Out of these 5, 
 the last 2 are trivial:

 [1] https://review.openstack.org/#/c/138934/
 [2] https://review.openstack.org/#/c/139796/
 [3] https://review.openstack.org/#/c/139797/

 [4] https://review.openstack.org/148980 - unit tests for methods that have 1 
 instruction each. Just to have coverage on all the modules.

 [5] https://review.openstack.org/139798 - just removes test_hypervapi.py

 The commits have been reviewed, already have a couple of +1s.


 Note: this blueprint is limited to the Hyper-V unit tests and does not
 change the functionality of the Driver in any way. It is barely worthy
 of the name blueprint and I consider it more of a bug, rather than a
 blueprint. This will improve maintainability, readability and coverage
 for the Hyper-V classes.
 
 Yeah, I personally don't think this kind of code cleanup requires a
 blueprint at all, and probably doesn't even need a bug either. So
 from my POV you don't need to even request this FFE - I'd be happy
 with those test cleanups being merged any time except for during
 the very final code freeze before release. Let's see if other nova
 cores agree...

I believe the policy has always been that test-only patches are fine. I
would agree this does not need an FFE. A blueprint is nice purely for
tracking purposes.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing

2015-02-12 Thread Gary Kotton


On 2/12/15, 1:33 PM, Chmouel Boudjnah chmo...@enovance.com wrote:

Jaume Devesa devv...@gmail.com writes:

 Following the conversation...

 We have seen that glusterfs[1] and ec2api[2] use different approach
 when it comes to repository managing: whereas glusterfs is a single
 'devstack' directory repository, ec2api is a whole project with a
 'devstack' directory on it.

 We plan to migrate 'python-neutron-plugin-midonet'[3] project to
 Stackforge too. It makes sense to add the 'devstack' directory on it?
 Or do you recommend us to have two different repositories in
 Stackforge: one for the neutron plugin and the other one for the
 devstack plugin?

as you stated I don't think there is a clear advantage or disadvantage
but IMO having too many repositories is not very user friendly and I would
recommend to have the plugin directly in the repo.

For things like glusterfs which is not a native openstack project it
makes sense that the plugin is hosted externally of the project.

I am in favor of having these in the devstack repo. I think that keeping
everything under the same umbrella is the healthiest model. Moving things
to different repos is a challenge and leads to endless problems (that is
my two cents)

Chmouel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing

2015-02-12 Thread Chmouel Boudjnah
Sean Dague s...@dague.net writes:

 I'm going to be -1ing most new or substantially redone drivers at this
 point. External plugins are a better model for those.

+1

Chmouel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception for Add config drive support for PCS containers

2015-02-12 Thread Daniel P. Berrange
On Wed, Feb 11, 2015 at 03:28:49PM +0300, aburluka wrote:
 Hello,
 
 I'd like to request a feature freeze exception for the change [1]
 This change implements configuration drive support in Parallels containers.
 It does not change existing Nova behaviour.
 It's a last patch in parallels series, that implements blueprint pcs-support
 [2].
 Previous patches of that blueprint were merged. So it's the last one to
 implement
 initial Parallels Cloud Server support in Nova compute driver.
 
 This change was reviewed by Daniel Berrange and Garry Kotton.
 I am looking forward for your decision about considering this changes for a
 feature freeze exception

I'm happy to sponsor this, given that it lets us complete the intended
level of support for parallels in Kilo. It does touch a bit of shared
code but the changes are straightforward and should not cause regressions
in other drivers.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception for hyper-v unit tests refactoring

2015-02-12 Thread Claudiu Belu

Hello.

I would like to request a FFE for the Hyper-V unit tests refactoring blueprint: 
https://blueprints.launchpad.net/nova/+spec/hyper-v-test-refactoring

The point of the blueprint was to get rid of the ancient test_hypervapi.py 
tests, that use mox, as they prove more and more difficult to maintain, 
especially when adding new features or fixing bugs. Those tests would be 
replaced with mock unit tests, per Ops class.

There were 11 commits in total, 6 already merged, 5 remain. Out of these 5, the 
last 2 are trivial:

[1] https://review.openstack.org/#/c/138934/
[2] https://review.openstack.org/#/c/139796/
[3] https://review.openstack.org/#/c/139797/

[4] https://review.openstack.org/148980 - unit tests for methods that have 1 
instruction each. Just to have coverage on all the modules.

[5] https://review.openstack.org/139798 - just removes test_hypervapi.py

The commits have been reviewed, already have a couple of +1s.


Note: this blueprint is limited to the Hyper-V unit tests and does not change 
the functionality of the Driver in any way. It is barely worthy of the name 
blueprint and I consider it more of a bug, rather than a blueprint. This will 
improve maintainability, readability and coverage for the Hyper-V classes.

Best regards,

Claudiu Belu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Gary Kotton
I understand the fact that an operator can and should be able to place the VM 
where she/he wants. The VM should just adhere to the scheduling constraints :) 
(which are defined in the filters)
:)

From: Rui Chen chenrui.m...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, February 12, 2015 at 1:51 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Question about force_host skip filters

 filters should be applied to the list of hosts that are in 'force_hosts'.

Yes, @Gary, that's my point.

An operator can live-migrate an instance to a specified host and skip filters;
that's appropriate and important, I agree with you.

But when we boot an instance, we always want to launch it successfully or
get a clear failure reason. If the filters are applied to the forced host,
the operator may find out that he is doing something wrong at an earlier
stage. For example, he couldn't boot a PCI instance on a forced host that
doesn't have a PCI device.

And I don't think 'force_hosts' is strictly an operator action: the default
value is 'is_admin:True' in policy.json, but in some cases the value may be
changed so that a regular user can boot an instance on a specified host.

2015-02-12 17:44 GMT+08:00 Sylvain Bauza sba...@redhat.com:

Le 12/02/2015 10:05, Rui Chen a écrit :
Hi:

   If we boot an instance with 'force_hosts', the forced host will skip all
filters. It looks like intentional logic, but I don't know the reason.

   I'm not sure that the skipping logic is appropriate. I think we should
remove it, and 'force_hosts' should work with the scheduler, testing whether
the forced host is suitable as early as possible. Skipping filters and
postponing the boot failure to nova-compute is not advisable.

On the other hand, more and more options have been added to flavors, like
NUMA, CPU pinning, PCI and so on, so forcing a suitable host is getting more
and more difficult.


Any action done by the operator is always more important than what the
Scheduler could decide. So if, in an emergency situation, the operator wants
to force a migration to a host, we need to accept it and do it, even if it
doesn't match what the Scheduler would decide (and could violate any policy).

That's a *force* action, so please let the operator decide.

-Sylvain



Best Regards.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing

2015-02-12 Thread Sean Dague
On 02/12/2015 07:49 AM, Gary Kotton wrote:
 
 
 On 2/12/15, 1:33 PM, Chmouel Boudjnah chmo...@enovance.com wrote:
 
 Jaume Devesa devv...@gmail.com writes:

 Following the conversation...

 We have seen that glusterfs[1] and ec2api[2] use different approach
 when it comes to repository managing: whereas glusterfs is a single
 'devstack' directory repository, ec2api is a whole project with a
 'devstack' directory on it.

 We plan to migrate 'python-neutron-plugin-midonet'[3] project to
 Stackforge too. It makes sense to add the 'devstack' directory on it?
 Or do you recommend us to have two different repositories in
 Stackforge: one for the neutron plugin and the other one for the
 devstack plugin?

 as you stated I don't think there is a clear advantage or disadvantage
 but IMO having too many repositories is not very user friendly and I would
 recommend to have the plugin directly in the repo.

 For things like glusterfs which is not a native openstack project it
 makes sense that the plugin is hosted externally of the project.
 
 I am in favor of having these in the devstack repo. I think that keeping
 everything under the same umbrella is the healthiest model. Moving things
 to different repos is a challenge and leads to endless problems (that is
 my two cents)

I believe the question was really should the devstack plugin be in
'python-neutron-plugin-midonet' repo or in an additional
'python-neutron-plugin-midonet-devstack' repo. Being in the main
devstack tree was never on the table. I -1ed that review and sent them
down this path.

The Long Term Evolution for Devstack is external plugins for most things
- https://github.com/openstack-dev/devstack/blob/master/FUTURE.rst

I'm going to be -1ing most new or substantially redone drivers at this
point. External plugins are a better model for those.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Lets keep our community open, lets fight for it

2015-02-12 Thread Chris Dent

On Thu, 12 Feb 2015, Flavio Percoco wrote:


The important bit, though, is that email is meant for asynchronous
communication and IRC isn't. If things that require the intervention
of other folks from the community are being discussed and those folks
are not on IRC, it'd be wrong to consider the topic as discussed.


This is really the crux of the biscuit and thank you for continuing
to bring it back round to this point.

My personal experience of OpenStack has been that unless I am

* on IRC (too) many hours per day
* going to (too) many IRC meetings when I should be doing something
  interesting with my family
* watching a fair few spec and governance gerrits

then I will miss out on not just the decision-making _process_ for
things which are relevant to the work I need or want to do and plan
for, but also the _decisions_ themselves.

For example how many people really know the extent and impact of the
big tent governance plans?

Ideally I should be able to delegate a lot of this farming for
information to other people in the community but that only works if
there is a habit by those others of summarizing to the mailing list.

(Which goes back to my earlier point about gosh, aren't we all a
bit busy?)

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

