Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-01 Thread Morgan Fainberg
On Tue, Dec 1, 2015 at 1:57 AM, Steve Martinelli 
wrote:

> Trying to summarize here...
>
> - There isn't much interest in keeping eventlet around.
> - Folks are OK with running keystone in a WSGI server, but feel they are
> constrained by Apache.
> - uWSGI could help to support multiple web servers.
>
> My opinion:
>
> - Adding support for uWSGI definitely sounds like it's worth
> investigating, but not achievable in this release (unless someone already
> has something cooked up).
>
uWSGI is trivial to use, but there are a ton of options that go along with
it. Our current wsgi file that works with mod_wsgi pretty much "just works"
under uWSGI. Configuring uWSGI in a sane way, however, is a much more
involved exercise.

> - I'm tempted to let eventlet stick around another release, since removing
> it is causing pain for some of our operators.
> - Other folks have managed to run keystone in a web server (and hopefully
> not feel pain when doing so!), so it might be worth getting technical
> details on just how it was accomplished. If we get an OK from the operator
> community later on in mitaka, I'd still be OK with removing eventlet, but I
> don't want to break folks.
>
> stevemar
>
> From: John Dewey 
> 100% agree.
>
> We should look at uwsgi as the reference architecture. Nginx/Apache/etc
> should be interchangeable, and up to the operator which they choose to use.
> Hell, with tcp load balancing now in opensource Nginx, I could get rid of
> Apache and HAProxy by utilizing uwsgi.
>
> John
>
> On November 30, 2015 at 1:05:26 PM, Paul Czarkowski (
> *pczarkowski+openstack...@bluebox.net*
> ) wrote:
>
>   I don't have a problem with eventlet itself going away, but I do feel
>   that keystone should pick a python based web server capable of running
>   WSGI apps (such as uWSGI) for the reference implementation rather than
>   Apache, which can be declared appropriately in the requirements.txt of
>   the project. I feel it is important to allow the operator to make choices
>   based on their organization's skill sets (e.g. apache vs nginx) to help
>   keep complexity low.
>
>   I understand there are some newer features that rely on Apache
>   (federation, etc.) but we should let the need for those features inform
>   the operator's choice of web server rather than force it for everybody.
>
>   Having a default implementation using uWSGI is also more in line
>   with the 12-factor way of writing applications and will run a lot more
>   comfortably in [application] containers than Apache would, which is
>   probably an important consideration given how many people are focused on
>   being able to run openstack projects inside containers.
>
>   On Mon, Nov 30, 2015 at 2:36 PM, Jesse Keating <*j...@bluebox.net*
>   > wrote:
>  I have an objection to eventlet going away. We have problems
>  with running Apache and mod_wsgi with multiple python virtual environments.
>  In some of our stacks we're running both Horizon and Keystone. Each gets
>  its own virtual environment. Apache mod_wsgi doesn't really work that
>  way, so we'd have to do some ugly hacks to expose the python environments
>  of both to Apache at the same time.
>
>  I believe we spoke about this at Summit. Have you had time to
>  look into this scenario and have suggestions?
>
>
>  - jlk
>
>  On Mon, Nov 30, 2015 at 10:26 AM, Steve Martinelli <
>  *steve...@ca.ibm.com* > wrote:
>  This post is being sent again to the operators mailing list, and
>  I apologize if it's duplicated for some folks. The original thread is here:
>  http://lists.openstack.org/pipermail/openstack-dev/2015-November/080816.html
>
>
>  In the Mitaka release, the keystone team will be removing
>  functionality that was marked for deprecation in Kilo, and marking certain
>  functions as deprecated in Mitaka (which may be removed after at least
>  two cycles).
>
>  removing deprecated functionality
>  =================================
>
>  This is not a full list, but these are by and large the most
>  contentious topics.
>
>  * Eventlet support: This was marked as deprecated back in Kilo
>  and is currently scheduled to be removed in Mitaka in favor of running
>  keystone in a WSGI server. This is currently how we test keystone in the
>  gate, and based on the feedback we received at the summit, a lot of folks
>  have moved to running keystone under Apache since we've announced this
>  change. OpenStack's CI is configured to mainly test using this deployment

[openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-01 Thread Everett Toews
Hello Magnumites,

The API Working Group [1] is looking for a Cross-Project Liaison [2] from the 
Magnum project.

What does such a role entail?

The API Working Group seeks API subject matter experts for each project to 
communicate plans for API updates, review API guidelines with their project’s 
view in mind, and review the API Working Group guidelines as they are drafted. 
The Cross-Project Liaison (CPL) should be familiar with the project’s REST API 
design and future planning for changes to it.

Please let us know if you're interested and we'll bring you on board!

Cheers,
Everett

[1] http://specs.openstack.org/openstack/api-wg/
[2] http://specs.openstack.org/openstack/api-wg/liaisons.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-12-01 Thread Joshua Harlow

So what's needed to build out that 'framework'?

Is it just dispatching provision requests to nova, and then seeing when 
the quota becomes out of sync, and then backtracking through logs and 
notifications (+- other) to then figure out the root cause?


Or is that framework some kind of local functional testing built-in to 
nova itself, something that can work without probing via the REST 
endpoints...?


IMHO it'd be nice to have something like rally scenarios that probe the
quota (via REST or not) but with post-validation that can be done on the
post scenario run and/or post request run. The post-validation ensures
the quota is as expected, using some locally computed 'expected' quota
that nova should also have locally computed; if those two computed
'expected' quotas are different, then backtracking/log analysis... needs
to occur to figure out when the computed values started to diverge (and
to figure out why they diverged). Rinse and repeat that and you probably
have a pretty powerful validation framework...
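
To make that a bit more concrete, here is a minimal sketch of the
post-validation step described above. Everything in it is hypothetical
(the way usage is fetched, the function names and the resource names are
assumptions for illustration, not existing rally or nova code):

    # Hypothetical post-validation sketch: compare the quota usage nova
    # reports against an expected value computed locally from the scenario
    # that was just run. All names below are made up.

    def expected_usage(initial, boots, deletes, vcpus_per_instance):
        """Locally compute what usage *should* be after the scenario."""
        delta = boots - deletes
        return {
            'instances': initial['instances'] + delta,
            'cores': initial['cores'] + delta * vcpus_per_instance,
        }

    def post_validate(expected, actual):
        """Diff nova's reported usage (fetched via REST or the DB) against ours."""
        mismatches = {r: (expected[r], actual.get(r))
                      for r in expected if actual.get(r) != expected[r]}
        # A non-empty result is the trigger for the backtracking/log
        # analysis step, to find where the two computations diverged.
        return mismatches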


Vilobh Meshram wrote:

I am highly supportive of the idea of a Nova Quota sub-team; for
something as complex as quotas, it helps to move quickly on reviews
and changes.

Agree with John, a test framework to test quotas will be helpful and can
be one of the first tasks the Nova Quota sub-team can focus on, as that
will lay the foundation for deciding whether the bugs mentioned here
http://bit.ly/1Pbr8YL are valid or not.

Having worked in the area of quotas for a while now, introducing
features like the Cinder Nested Quota Driver [1][2], I strongly feel that
something like a Nova Quota sub-team will definitely help. I mention the
Cinder quota driver because it was agreed at the Mitaka design summit
that the Nova Nested Quota Driver [3] would pursue the route taken by
Cinder. Nested quotas are one part of the quota subsystem, and working
in a small team helped us iterate quickly on the nested quota
patches [4][5][6][7], so IMHO forming a Nova quota sub-team will help.

Melanie,

If you can share the details of the bug that Joe mentioned, to reproduce
quota bugs locally, it would be helpful.

-Vilobh (irc: vilobhmm)

[1] Code : https://review.openstack.org/#/c/205369/
[2] Blueprint :
https://blueprints.launchpad.net/cinder/+spec/cinder-nested-quota-driver
[3] Nova Nested Quota Spec : https://review.openstack.org/#/c/209969/
[4] https://review.openstack.org/#/c/242568/
[5] https://review.openstack.org/#/c/242500/
[6] https://review.openstack.org/#/c/242514/
[7] https://review.openstack.org/#/c/242626/


On Mon, Nov 30, 2015 at 10:59 AM, melanie witt wrote:

On Nov 26, 2015, at 9:36, John Garbutt wrote:

>  A suggestion in the past, that I like, is creating a nova functional
>  test that stress tests the quota code.
>
>  Hopefully that will be able to help reproduce the error.
>  That should help prove if any proposed fix actually works.

+1, I think it's wise to get some data on the current state of
quotas before choosing a redesign. IIRC, Joe Gordon described a test
scenario he used to use to reproduce quota bugs locally, in one of
the launchpad bugs. If we could automate something like that, we
could use it to demonstrate how quotas currently behave during
parallel requests and try things like disabling reservations. I also
like the idea of being able to verify the effects of proposed fixes.

-melanie (irc: melwitt)






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] consolidate IRC change with #openstack-sdk?

2015-12-01 Thread Everett Toews
On Nov 30, 2015, at 7:58 AM, michael mccune  wrote:
> 
> On 11/30/2015 08:45 AM, Sean Dague wrote:
>> Ok, I'm going to assume with no real disagreement we're agreed here. I'm
>> moving the api-wg notifications to #openstack-sdks now -
>> https://review.openstack.org/#/c/251357/1/gerritbot/channels.yaml,cm
> 
> ok, i think we've got a few spots in the documentation that refer to 
> openstack-api. we can start the process of adjusting all those to 
> openstack-sdks.

I updated the couple of places I could find on the wiki. I also sent out a 
personalized message to everyone in the now defunct #openstack-api channel on 
Freenode.

See you in #openstack-sdks (don't forget it's plural!)

Everett
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nfv][telco] Telco Working Group Meeting Reminder - Wednesday December 2nd at 1400 UTC in #openstack-meeting-alt

2015-12-01 Thread Steve Gordon
Hi all,

This week's Telco Working Group meeting [1] is on Wednesday December 2nd at 
1400 UTC in #openstack-meeting-alt [2]. The proposed agenda is here:

https://etherpad.openstack.org/p/nfv-meeting-agenda

Thanks,

Steve

[1] http://eavesdrop.openstack.org/#Telco_Working_Group_meeting
[2] 
https://webchat.freenode.net/?randomnick=1=%23openstack-meeting-alt=1=d4

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-12-01 Thread Steve Baker

On 02/12/15 03:18, Lennart Regebro wrote:

On Tue, Dec 1, 2015 at 3:39 AM, Steve Baker  wrote:

I mean _here_

https://review.openstack.org/#/c/251587/

OK, that's great! If you want any help implementing it, I can try.



Hey Lennart, help is always appreciated.

I can elaborate on the implementation approach for ``openstack overcloud 
failed list`` and you can take a crack at that if you like while I work 
on the other two commands.


I think before we start on the commands proper I will need to implement
some yaml printing utility functions, so let's coordinate on that.


cheers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight][ceilometer][qa][infra] Elasticsearch 2.0 client release

2015-12-01 Thread McLellan, Steven
Hi,

The elasticsearch 2.0 python client is not backwards compatible with either the
1.x client or the 1.x server. Currently global-requirements has no upper cap on
it; I've proposed https://review.openstack.org/252075 to cap it below version
2.0 on the basis that devstack installs elasticsearch 1.x and I imagine that's
currently the most commonly supported version. Deployers using version 2.0 would
need to change the client version they install with projects that use it (which,
as best I can tell, are currently ceilometer and searchlight), and those projects
will need to allow optional support for it.
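
For projects that want to offer that optional support, one low-tech option is
to detect the installed client version at runtime and branch on it. A rough
sketch only; nothing below is existing searchlight or ceilometer code:

    # Minimal sketch: detect the installed elasticsearch client version so a
    # project can branch between 1.x and 2.x code paths.
    import pkg_resources

    def es_client_major_version():
        version = pkg_resources.get_distribution('elasticsearch').version
        return int(version.split('.')[0])

    ES2 = es_client_major_version() >= 2
    # Callers can then guard any 2.x-only behaviour with "if ES2: ...".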

Reviews or alternatives would be very welcome.

Thanks,
Steve McLellan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan <smuruge...@vmware.com>

2015-12-01 Thread Sabari Murugesan
Thank you all for the support. It's a big responsibility and I hope to do
my best.

- Sabari

On Tue, Dec 1, 2015 at 11:01 AM, Flavio Percoco  wrote:

> On 23/11/15 17:20 -0300, Flavio Percoco wrote:
>
>> Greetings,
>>
>> I'd like to propose adding Sabari Kumar Murugesan to the glance-core
>> team. Sabari has been contributing for quite a bit to the project with
>> great reviews and he's also been providing great feedback in matters
>> related to the design of the service, libraries and other areas of the
>> team.
>>
>> I believe he'd be a great addition to the glance-core team as he has
>> demonstrated a good knowledge of the code, service and project's
>> priorities.
>>
>> If Sabari accepts to join and there are no objections from other
>> members of the community, I'll proceed to add Sabari to the team in a
>> week from now.
>>
>
> This has been done!
>
> Welcome to the team and thanks for being part of it.
>
> Flavio
>
>
>> Thanks,
>> Flavio
>>
>> --
>> @flaper87
>> Flavio Percoco
>>
>
>
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [ironic] Hardware composition

2015-12-01 Thread Andrew Laski

On 12/01/15 at 03:44pm, Vladyslav Drok wrote:

Hi list!

There is an idea of making use of hardware composition (e.g.
http://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-architecture/intel-rack-scale-architecture-resources.html)
to create nodes for ironic.

The current proposal is:

1. To create hardware-compositor service under ironic umbrella to manage
this composition process. Its initial implementation will support Intel
RSA, other technologies may be added in future. At the beginning, it will
contain the most basic CRUD logic for composed system.

2. Add logic to nova to compose a node using this new project and register
it in ironic if the scheduler is not able to find any ironic node matching
the flavor. An alternative (as pointed out by Devananda during yesterday's
meeting) could be using it in ironic by claims API when it's implemented (
https://review.openstack.org/204641).


I don't think the logic should be added to Nova to create these nodes.  
This hardware-compositor looks like a hypervisor that can manage the 
subdivision of a set of resources and exposes them as a "virtual 
machine" in a sense.  So I would expect that work to happen within the 
virt driver as it does for all of the others.


The key thing to work out is how to expose the resources for the Nova 
scheduler to use.  I may be simplifying the problem but it looks like 
the pooled system could be exposed as a compute host that can be 
scheduled to, and as systems are composed from the pool the consumed 
resources would be decremented the same as they're done today.




3. If implemented in nova, there will be no changes to ironic right now
(apart from needing the driver to manage these composed nodes, which is
Redfish, I believe), but there are cases when it may be useful to call this
service from ironic directly, e.g. to free the resources when a node is
deleted.

Thoughts?

Thanks,
Vlad



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Install Openstack-Ansible

2015-12-01 Thread Major Hayden
On Tue, 2015-12-01 at 20:35 +0530, Sharma Swati6 wrote:
> However, I just want to know if I have to implement this in
> openstack-ansible, or for that matter, I want to add any new
> component to it, are there any steps or guidelines to be followed?
> For example, first I create containers and add them to the config
> files, etc.
> I went through http://docs.openstack.org/developer/openstack-ansible/
> developer-docs/extending.html but it is not very self-explanatory.
> 
> If the steps provided by you are helpful I can begin with this and
> contribute soon.

Hello Sharma,

I haven't implemented a new service in openstack-ansible quite yet, but
I'll give you some tips.

First, you'll need to use the extra_container.yml.example[1] to make a
new container.

Next, you'll want to create a role that will configure the operating
system and the required services within the container. You can review
the roles within the openstack-ansible repository to see what is
typically configured in each one.  The keystone role[2] might be a good
place to start.

From there, you'll need to test the container build-out and
configuration to make sure the service works well with the other
services (like authentication with keystone).

[1] 
https://github.com/openstack/openstack-ansible/tree/master/etc/openstack_deploy/env.d
[2] 
https://github.com/openstack/openstack-ansible/tree/master/playbooks/roles/os_keystone

-- 
Major Hayden



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Feature Freeze Exception Request: Task Based Deployment in Astute

2015-12-01 Thread Vladimir Kuklin
Hi, Folks

** Intro **

During Iteration 3 our Enhancements Team, along with other folks, worked on
the feature called "Task Based Deployment with Astute". Here is a link to
its blueprint:
https://blueprints.launchpad.net/fuel/+spec/task-based-deployment-astute

The major implication of completing this feature is that our deployment process
will be drastically optimized, allowing us to decrease the deployment time of
typical clusters by at least 2.5 times (for BVT/CI cases) and by an order of
magnitude for 100-node clusters.

This is achieved by real parallelization of deployment task execution,
which assumes that we do not wait for the whole 'deployment group/role' to
deploy, but only wait for particular tasks to finish. For example, we
could deploy the 'database' task on secondary controllers as soon as the
'database' task is ready on the first controller. As our deployment workflow
contains only a small number of such synchronization points as the 'database'
task, we will be able to deploy the majority of deployment tasks in parallel,
shrinking deployment time to "time-of-deployment-of-the-longest-node". This
actually means that our standard deployment case for development and
testing will take 30 minutes on our CI servers, thus drastically improving
developers' and users' experience, as well as shrinking the time of overall
acceptance testing, the time for reproducing bugs, and so on. This feature also
allows one to use the 7.0 role-as-a-plugin feature in a much more effective way,
as the current split-services-with-plugins feature may lead to a very suboptimal
deployment flow which might take up to *6 hours* even for the simplest HA
cluster, while it would take again *30 minutes* with the *Task-Based* approach.
Also, when multi-roles were used we ran several tasks for each role each
time it was used, making deployment suboptimal again.


** Short List of Work Items **

As we started a little bit late, during iteration 3 we worked on the design
and specification of this feature in a way that its introduction brings
almost zero chance of regression, with the ability to disable it. Here
is the summary.

So far we introduce several pieces of code:
1. New version of tasks format introducing cross-node dependencies between
tasks
2. Changes to Nailgun
  a. deduplication of tasks for roles [In Progress]
  b. support for new tasks format [In Progress]
  c. new engine that generates an array of hashes of tasks info consumable
by new Astute engine [In Progress].
3. Changes to Astute
 a. Tasks dependencies parser and visualizer [Ready for review]
 b. Deployment engine capable of graph traversing and reporting [Ready for
Review]
 c. Async wrapper for shell-based tasks [Ready for review]
4. Changes to Fuel Library
 a. Add additional fields into existing Fuel Library deployment tasks for
cross-dependencies [In Progress].

** Ensuring Minimal Regression and Backward Compatibility **

As we worked on being backward-compatible from day one, this engine is
enabled ONLY when 2 requirements are met (see the sketch below):

1. It is globally enabled in Nailgun settings.yaml
2. ALL tasks scheduled for deployment execution have format version 2.0.0
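
In other words, the gate is roughly the following check. This is a sketch
only; the setting key and the task 'version' field are illustrative names,
not the actual Nailgun code:

    # Sketch of the two-condition gate described above.
    def task_based_deployment_allowed(settings, tasks):
        globally_enabled = settings.get('task_deploy', False)
        all_tasks_v2 = all(t.get('version') == '2.0.0' for t in tasks)
        return globally_enabled and all_tasks_v2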

This list seems a little bit huge, but these changes are isolated and
granular and only affect the sequence in which tasks are executed on
the nodes. This means that there will be actually no difference in the
resulting functioning of the cluster. This feature can be safely
disabled if the user does not want to use it.

But if a user wants to work with it, they can gain an enormous improvement in
speed and in their own engineering/development/testing velocity, as well as in
the Fuel user experience.

** Additional Benefits of the Feature **

Moreover, this feature also improves how the following use cases are
addressed:

*1. When a user deploys a specific set of nodes or tasks*
It will be possible to introduce an additional flag for the deploy/task run
handler in Nailgun to pick up dependencies of the specified tasks, even if
they are currently not in place in the current deployment graph. This means
that instead of running

*fuel nodes --node-id 2,3 --deploy  *

and watching it fail because node-1 contains some of the tasks that are required
by nodes 2 and 3, the user can stay calm, as they will be able to specify
an option to populate the deployment flow with the needed tasks. No more

*fuel nodes --node-id 2 --tasks netconfig*  -> Fail, because you forgot to
specify some of the required tasks, e.g. hiera, globals.

*2. Post-deployment plugin installation*

This feature also makes post-deployment plugin installation much easier, as
plugin installation will happen in a matter of minutes instead of
hours.

*3. Cluster re-deployment for some of LCM cases support*

Whenever a user changes settings on the nodes and triggers a full cluster
redeployment, or whenever they want a tainted cluster to converge back to
the previous state deployed by Fuel, they will get the cluster back into an
operational state in 30 minutes.

*4. Better capabilities for separated services plugins*

Task-based approach allows one to 

Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-01 Thread Fox, Kevin M
I just upgraded to keystone liberty for one of my production clouds, and went
with Apache since eventlet was listed as deprecated. It was pretty easy. I just
ran into one issue: RadosGW wouldn't work against it until I added
"WSGIChunkedRequest On" to the config. Otherwise, the config as shipped with
RDO worked fine. I am running the Giant release of radosgw, so future versions
may not require that.

Thanks,
Kevin

From: Sean Dague [s...@dague.net]
Sent: Tuesday, December 01, 2015 4:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] Removing 
functionality that was deprecated in Kilo and upcoming deprecated functionality 
in Mitaka

On 12/01/2015 01:57 AM, Steve Martinelli wrote:
> Trying to summarize here...
>
> - There isn't much interest in keeping eventlet around.
> - Folks are OK with running keystone in a WSGI server, but feel they are
> constrained by Apache.

From an interop perspective, this concerns me a bit. My understanding is
that Apache is specifically needed for Federation. Federation is the
norm that we want for environments in the future.

I'd hate to go down a path where the reference architecture we put out
there doesn't support this. It's going to be all the pain of Nova's
cells / non-cells split, or the nova-net / neutron bifurcation.

Whatever the reference architecture is, it should support Federation. A
non federation capable keystone should be the exception.

> - uWSGI could help to support multiple web servers.


--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS7 Merging Plan

2015-12-01 Thread Dmitry Borodaenko
With a bit more detail, I hope this now covers all the risks and decision
points.

First of all, current list of outstanding commits:
https://etherpad.openstack.org/p/fuel_on_centos7

The above list has two sections: backwards compatible changes that can
be merged one at a time even if the rest of CentOS7 support isn't
merged, and backwards incompatible changes that break support for
CentOS6 and must be merged (and, if needed, reverted) all at once.

Decision point 1: FFE for CentOS7

CentOS7 support cannot be fully merged on Dec 2, so it misses FF. Can it
be allowed a Feature Freeze Exception? So far, the disruption of the
Fuel development process implied by the proposed merge plan is
acceptable; if anything goes wrong and we are unable to have a stable
ISO with merged CentOS7 support by Monday, December 7, the FFE will be
revoked.

Wed, Dec 2: Merge party

This is the merge party before the 8.0 FF; we should do our best to merge all
remaining feature commits before end of day (including backwards compatible
CentOS7 support commits), without breaking the build too much.

At the end of the day we'll start a swarm test over the result of the
merge party, and we expect QA to analyze and summarize the results by
17:00 MSK (6:00 PST) on Thu Dec 3.

Risk 1: Merge party breaks the build

If there is a large regression in swarm pass percentage, we won't be
able to afford the merge freeze that is necessary to merge CentOS7
support; we'll have to keep merging bugfixes until the swarm test pass rate
is back around 70%.

Risk 2: More features get FFE

If some essential 8.0 features are not completely merged by end of day
Wed Dec 2 and are granted FFE, merging the remaining commits can
interfere with merging CentOS7 support, not just from merge conflicts
perspective, but also invalidating swarm results and making it
practically impossible to bisect and attribute potential regressions.

Thu, Dec 3: Start merge freeze for CentOS7

Decision point 2: Other FFEs

In the morning MSK time, we will assess Risk 2 and decide what to do
with the other FFEs. The options are: integrate remaining commits into
CentOS7 merge plan, block remaining commits until Monday, revoke CentOS7
FFE.

If the decision is to go ahead with CentOS7 merge, we announce merge
freeze for all git repositories that go into Fuel ISO, and spend the
rest of the day rebasing and cleaning up the rest of the CentOS7 commits
to make sure they're all in mergeable state by the end of the day. The
outcome of this work must be a custom ISO image with all remaining
commits, with additional requirement that it must not use Jenkins job
parameters (only patches to fuel-main that change default repository
paths) to specify all required package repositories. This will validate
the proposed fuel-main patches and ensure that no unmerged package
changes are used to produce the ISO.

Decision point 3: Swarm pass rate

After swarm results from Wed are available, we will assess the Risk 1.
If the pass rate regression is significant, CentOS7 FFE is revoked and
merge freeze is lifted. If regression is acceptable, we proceed with
merging remaining CentOS7 commmits through Thu Dec 3 and Fri Dec 4.

Fri, Dec 4: Merge and test CentOS7

The team will have until 17:00 MSK to produce a non-custom ISO that
passes BVT and can be run through swarm.

Sat, Dec 5: Assess CentOS7 swarm and bugfix

First of all, someone from CI and QA teams should commit to monitoring
the CentOS7 swarm run and report the results as soon as possible. Based
on the results (which once again must be available by 17:00 MSK), we can
decide on the final step of the plan.

Decision point 4: Keep or revert

If CentOS7 based swarm shows significant regression, we have to spend
the rest of the weekend including Sunday reverting all CentOS7 commits
that were merged during merge freeze. Once revert is completed, we will
lift the merge freeze.

If the regression is acceptable, we lift the merge freeze straight away
and proceed with bugfixing as usual. At this point CI team will need to
update the Fuel ISO used for deployment tests in our CI to this same
ISO.

One way or the other, we will be able to resume bugfixing on Monday
morning MSK time, and will have lost 2 business days (Thu-Fri) during
which we won't be able to merge bugfixes. In addition to that, someone
from QA and everyone from CentOS7 support team has to work on Saturday,
and someone from CI will have to work a few hours on Sunday.

-- 
Dmitry Borodaenko


On Tue, Dec 01, 2015 at 05:58:42PM +0300, Dmitry Teselkin wrote:
> Hello,
> 
> We've almost got a green BVT on the custom CentOS7 ISO and it seems that it's
> time to discuss the plan for how this feature could be merged.
> 
> This is not the only feature in the queue. Unfortunately, almost any
> other feature will be broken if merged after CentOS7, so it was decided
> to merge our changes last.
> 
> This is not an official announcement, rather a notification letter to
> start a discussion and find any objections.
> 
> So the 

Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-01 Thread Peter Lemenkov
Hello All!

Well, side-effects (or any other effects) are quite obvious and
predictable - this will decrease availability of RPC queues a bit.
That's for sure.

However, Dmitry's guess is that the overall increase in messaging backplane
stability (RabbitMQ won't fail as often in some cases) would
compensate for this change. This issue is very much real; speaking for
myself, I've seen awful cluster performance degradation when a failing
RabbitMQ node was killed by some watchdog application (or, even worse,
wasn't killed at all). One of these issues happened quite recently, and I'd
love to see them less frequently.

That said I'm uncertain about the stability impact of this change, yet
I see a reasoning worth discussing behind it.

2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk :
> Hi,
>
> -1 for FFE for disabling HA for RPC queue as we do not know all side effects
> in HA scenarios.
>
> On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov
>  wrote:
>>
>> Folks,
>>
>> I would like to request feature freeze exception for disabling HA for RPC
>> queues in RabbitMQ [1].
>>
>> As I already wrote in another thread [2], I've conducted tests which
>> clearly show benefit we will get from that change. The change itself is a
>> very small patch [3]. The only thing which I want to do before proposing to
>> merge this change is to conduct destructive tests against it in order to
>> make sure that we do not have a regression here. That should take just
>> several days, so if there will be no other objections, we will be able to
>> merge the change in a week or two timeframe.
>>
>> Thanks,
>>
>> Dmitry
>>
>> [1] https://review.openstack.org/247517
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081006.html
>> [3] https://review.openstack.org/249180
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
With best regards, Peter Lemenkov.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] 'NoMatchingFunctionException: No function "#operator_." matches supplied arguments' error when adding an application to an environment

2015-12-01 Thread Vahid S Hashemian
Thanks Stan. It makes sense now.
I was able to see the yaml in the database, but I agree that the cached 
file extension should change to avoid confusion.
I appreciate your thorough response.

Regards,
--Vahid




From:   Stan Lagun 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   11/27/2015 04:04 AM
Subject:Re: [openstack-dev] [Murano] 'NoMatchingFunctionException: 
No function "#operator_." matches supplied arguments' error when adding an 
application to an environment



Here is the full story:

The real YAML that is generated is stored in the murano database, in the
packages table. Murano Dashboard obtains form definitions in YAML from the
API. But to improve performance it also caches them locally, and when it does,
it stores them in Python pickle format [1] rather than the original YAML, but
doesn't change the extension (actually this is a bug). That's why when you
take yamls from the dashboard cache they don't look like valid YAML, though
they contain the same information and can be decoded back (that's what I
did to find the error you had). But it is much easier to take the real YAML
from the database.

[1]: https://docs.python.org/2/library/pickle.html
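
For anyone who wants to inspect one of those cached files, a quick way to
recover the YAML is along these lines (the cache path below is just an
example; the actual murano-dashboard cache location depends on your settings):

    # Quick sketch: load a dashboard-cached "yaml" file (really a pickle)
    # and dump it back out as readable YAML. The file path is an example.
    import pickle
    import yaml

    with open('/tmp/murano-cache/io.murano.apps.Example.yaml', 'rb') as f:
        form_definition = pickle.load(f)

    # safe_dump only works if the unpickled data is plain lists/dicts/strings;
    # otherwise just print() the object directly.
    print(yaml.safe_dump(form_definition, default_flow_style=False))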

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-12-01 Thread Zane Bitter

On 01/12/15 06:22, Steven Hardy wrote:

>  +1
>
>  I don't think it hurts to turn it on, but tbh I'm uncomfortable with the
>  mental overhead of a non-voting job that I have to manually treat as a
>  voting job. If it's stable enough to make it a voting job, I'd prefer we
>  just make it voting. And if it's not then I'd like to see it be made
>  stable enough to be a voting job and then make it voting.
>
>This is roughly where I sit as well -- if it's non-voting, experience
>tells me that it will largely be ignored, and as such, isn't a good use of
>resources.

I'm sure you can appreciate it's something of a chicken/egg problem though
- if everyone always ignores non-voting jobs, they never become voting.

That effect is magnified with TripleO though, because it consumes so many
OpenStack projects, any one of which has the capability to break our CI, so
in an ideal world we'd have voting feedback on all-the-things, but that's
not where we are right now due in large-part to the steady stream of
regressions (from Heat, Ironic and other projects).


Yeah, but when it does break the problem becomes that reviewers now have 
to debug TripleO tests on every single patchset to see if they broke it 
(or made it worse) or not. Pretty soon you never know whether you should 
be ignoring it or not, and... we all know how this story ends.


Maybe it would help if there was a periodic job running TripleO tests 
against master and posting the results somewhere. If I could go to 
aretripleocheckfailuresreal.com and it just says YES or NO in giant 
8-inch high letters, that would help a lot. The test could run let's say 
once an hour and if the last 3 all passed then we'd call that YES.
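
That rule is trivial to express; purely illustrative, since no such service
exists:

    # Toy sketch of the "are TripleO check failures real?" rule described
    # above: answer YES only if the last three periodic runs all passed.
    def failures_are_real(recent_results):
        """recent_results: booleans, newest last, True means the run passed."""
        last_three = recent_results[-3:]
        return len(last_three) == 3 and all(last_three)

    print("YES" if failures_are_real([True, True, True]) else "NO")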


Even better would be if we could tie this into the gate, so that it runs 
as a gating test when the tests are working and not when they aren't. I 
don't think infra will thank me for suggesting that though ;)



It also occurs to me that the trade-off may be different for stable 
branches. It might well be possible to start gating stable/liberty Heat 
against TripleO right away.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [ironic] Hardware composition

2015-12-01 Thread Devananda van der Veen
On Tue, Dec 1, 2015 at 12:07 PM, Andrew Laski  wrote:

> On 12/01/15 at 03:44pm, Vladyslav Drok wrote:
>
>> Hi list!
>>
>> There is an idea of making use of hardware composition (e.g.
>>
>> http://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-architecture/intel-rack-scale-architecture-resources.html
>> )
>> to create nodes for ironic.
>>
>> The current proposal is:
>>
>> 1. To create hardware-compositor service under ironic umbrella to manage
>> this composition process. Its initial implementation will support Intel
>> RSA, other technologies may be added in future. At the beginning, it will
>> contain the most basic CRUD logic for composed system.
>>
>> 2. Add logic to nova to compose a node using this new project and register
>> it in ironic if the scheduler is not able to find any ironic node matching
>> the flavor. An alternative (as pointed out by Devananda during yesterday's
>> meeting) could be using it in ironic by claims API when it's implemented (
>> https://review.openstack.org/204641).
>>
>
> I don't think the logic should be added to Nova to create these nodes.
> This hardware-compositor looks like a hypervisor that can manage the
> subdivision of a set of resources and exposes them as a "virtual machine"
> in a sense.  So I would expect that work to happen within the virt driver
> as it does for all of the others.
>
> The key thing to work out is how to expose the resources for the Nova
> scheduler to use.  I may be simplifying the problem but it looks like the
> pooled system could be exposed as a compute host that can be scheduled to,
> and as systems are composed from the pool the consumed resources would be
> decremented the same as they're done today.


I believe you've correctly described the ways such resources might be
modeled and scheduled to -- there is, as I understand it, a non-arbitrarily
subdivisible pool that adheres to some rules. This "compositing" moves
guest isolation deeper into the hardware than virtualization does today.

However, this isn't a virt driver any more than nova-baremetal was a virt
driver; these machines, once "composed", still need all the operational
tooling of Ironic to bring a guest OS up. Also, the interfaces to "compose"
these servers are hardware APIs. The service in question will communicate
with a BMC or similar out-of-band hardware management device -- probably
the same BMC that ironic-conductor will communicate with to boot the
servers.

The more I think about this, the more it sounds like a special type of
chassis manager -- a concept we added to Ironic in the beginning but never
used -- and some changes to the nova.virt.ironic driver to call out to
(ironic? a new service?) and compose the servers on-the-fly, in response to a
boot request, if there is available capacity.
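
As a very rough illustration of that flow (every name below is hypothetical;
none of this exists in the nova.virt.ironic driver today):

    # Hypothetical flow only: what "compose on the fly" inside the virt
    # driver could look like. None of these calls exist today.
    def spawn_node(compositor, ironic, flavor):
        node = ironic.find_matching_node(flavor)
        if node is None and compositor.has_capacity(flavor):
            # Ask the chassis manager / compositor to assemble hardware
            # matching the flavor, then register it with ironic.
            composed = compositor.compose(cpus=flavor['vcpus'],
                                          ram_mb=flavor['ram'],
                                          disk_gb=flavor['disk'])
            node = ironic.register_node(composed)
        if node is None:
            raise Exception('no capacity to compose a matching node')
        return node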

Also, this is pretty far afield from where we're focusing time right now.
We need to get the Ironic claims API and multiple compute host support done
first.

-deva



>
>
>
>> 3. If implemented in nova, there will be no changes to ironic right now
>> (apart from needing the driver to manage these composed nodes, which is
>> Redfish I believe), but there are cases when it may be useful to call this
>> service from ironic directly, e.g. to free the resources when a node is
>> deleted.
>>
>> Thoughts?
>>
>> Thanks,
>> Vlad
>>
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-01 Thread Adrian Otto
Everett,

Thanks for reaching out. Eli is a good choice for this role. We should also
identify an alternate.

Adrian

--
Adrian

> On Dec 1, 2015, at 6:15 PM, Qiao,Liyong  wrote:
> 
> hi Everett
> I'd like to take it.
> 
> thanks
> Eli.
> 
>> On 2015年12月02日 05:18, Everett Toews wrote:
>> Hello Magnumites,
>> 
>> The API Working Group [1] is looking for a Cross-Project Liaison [2] from 
>> the Magnum project.
>> 
>> What does such a role entail?
>> 
>> The API Working Group seeks API subject matter experts for each project to 
>> communicate plans for API updates, review API guidelines with their 
>> project’s view in mind, and review the API Working Group guidelines as they 
>> are drafted. The Cross-Project Liaison (CPL) should be familiar with the 
>> project’s REST API design and future planning for changes to it.
>> 
>> Please let us know if you're interested and we'll bring you on board!
>> 
>> Cheers,
>> Everett
>> 
>> [1] http://specs.openstack.org/openstack/api-wg/
>> [2] http://specs.openstack.org/openstack/api-wg/liaisons.html
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> -- 
> BR, Eli(Li Yong)Qiao
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit Upgrade 12/16

2015-12-01 Thread Spencer Krum
Hi All,

The infra team will be taking gerrit offline for an upgrade on December
16th. We
will start the operation at 17:00 UTC and will continue until about
21:00 UTC.

This outage is to upgrade Gerrit to version 2.11. The IP address of
Gerrit will not be changing.

There is a thread beginning here:
http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
which covers what to expect from the new software.

If you have questions about the Gerrit outage you are welcome to post a
reply to this thread or find the infra team in the #openstack-infra irc
channel on freenode. If you have questions about the version of Gerrit
we are upgrading to please post a reply to the email thread linked
above, or again you are welcome to ask in the #openstack-infra channel.

-- 
  Spencer Krum
  n...@spencerkrum.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-i18n] DocImpact Changes

2015-12-01 Thread Armando M.
On 24 November 2015 at 13:11, Lana Brindley 
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hi Akihiro,
>
> Judging by most of the bugs we see in the openstack-manuals queue, the
> bugs that will be raised in projects' own queues will usually be against
> their own developer documentation.


I am a bit confused. The developer documentation coexists with the code
that's being patched. Is this the documentation you're referring to? So for
instance in Neutron we have [1] rendered into [2].

I always assumed that DocImpact was needed to flag configuration changes
and, more generally, user-facing issues, and these are not captured in the
developer doc [2] (afaik, the *-aas services do have developer documentation
but no entry point in [4]), as developers are not the audience; they belong
in the guide [3] instead. On the other hand, if patches do require dev doc
changes (like design issues etc.), reviewers tend to ask for them to be taken
care of in the context of the same patch, without the burden of going through
an extra bug report.

So the fact that Neutron's DocImpact changes raise a bug report in LP
Neutron seems counterintuitive, and I am a bit at a loss here. Can you
elaborate on the rationale for making only certain projects target
their own queues? I wouldn't want bug reports to be reassigned to the
right queue only to cause more of the burden that this initiative originally
intended to relieve.

Having said that, let's assume we are stricter about what gets flagged with
DocImpact: would you be open to revisiting the doc group for Neutron as set
in [5]? In the meantime, I'll go ahead and look at the outstanding patches
and bug reports recently filed and sanitize them to the best of my abilities.

Thanks,
Armando

[1] https://github.com/openstack/neutron/tree/master/doc
[2] http://docs.openstack.org/developer/neutron/
[3] http://docs.openstack.org/networking-guide/
[4] http://docs.openstack.org/developer/openstack-projects.html
[5] https://review.openstack.org/#/c/248515/2/gerrit/projects.yaml


> In the rare instance that that isn't the case, and it needs to come into
> one of the openstack-manuals docs, then we are happy to discuss the issue
> with the person who raised the bug and, possibly, take the bug in our queue
> to be fixed.
>
> To be clear: just because DocImpact raises a bug in a projects' queue,
> doesn't mean docs aren't available to help fix it. We're always here for
> you to ask questions about docs bugs, even if those bugs are in your repo
> and your developer documentation.


> I hope that answers the question.
>
> Thanks,
> Lana
>
> On 24/11/15 16:07, Akihiro Motoki wrote:
> > it sounds a good idea in general.
> >
> > One question is how each project team should handle DocImpact bugs
> > filed in their own project.
> > It will be a part of "Plan and implement an education campaign"
> > mentioned in the docs spec.
> > I think some guidance will be needed. One usual question will be which
> > guide(s) needs to be updated?
> > openstack-manuals now have a lot of manuals and most developers are
> > not aware of all of them.
> >
> > Akihiro
> >
> >
> > 2015-11-24 7:33 GMT+09:00 Lana Brindley :
> > Hi everyone,
> >
> > The DocImpact flag was implemented back in Havana to make it easier for
> developers to notify the documentation team when a patch might cause a
> change in the docs. This has had an overwhelming response, resulting in a
> huge amount of technical debt for the docs team, so we've been working on a
> plan to change how DocImpact works. This way, instead of just creating a
> lot of noise, it will hopefully go back to being a useful tool. This is
> written in more detail in our spec[1], but this email attempts to hit the
> highlights for you.
> >
> > TL;DR: In the future, you will need to add a description whenever you
> add a DocImpact flag in your commit message.
> >
> > Right now, if you create a patch and include 'DocImpact' in the commit
> message, Infra raises a bug in either the openstack-manuals queue, or (for
> some projects) your own project queue. DocImpact is capable of handling a
> description field, but this is rarely used. Instead, docs writers are
> having to dig through your patch to determine what (if any) change is
> required.
> >
> > What we are now implementing has two main facets:
> >
> > * The DocImpact flag can only be used if it includes a description. A
> Jenkins job will test for this, and if you include DocImpact without a
> description, the job will fail. The description can be a short note about
> what needs changing in the docs, a link to a gist or etherpad with more
> information, or a note about the location of incorrect docs. There are
> specific examples in the spec[2].
> > * Only Nova, Swift, Glance, Keystone, and Cinder will have DocImpact
> bugs created in the openstack-manuals queue. All other projects will have
> DocImpact bugs raised in their own queue. This is handled in [3] and [4].
> >
> 

Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-12-01 Thread Devananda van der Veen
On Tue, Dec 1, 2015 at 3:22 AM, Steven Hardy  wrote:

> On Mon, Nov 30, 2015 at 03:35:13PM -0800, Devananda van der Veen wrote:
> >On Mon, Nov 30, 2015 at 3:07 PM, Zane Bitter 
> wrote:
> >
> >  On 30/11/15 12:51, Ruby Loo wrote:
> >
> >On 30 November 2015 at 10:19, Derek Higgins wrote:
> >
> >    Hi All,
> >
> >    A few months ago tripleo switched from its devtest based CI to one
> >    that was based on instack. Before doing this we anticipated
> >    disruption in the ci jobs and removed them from non tripleo projects.
> >
> >    We'd like to investigate adding it back to heat and ironic as
> >    these are the two projects where we find our ci provides the most
> >    value. But we can only do this if the results from the job are
> >    treated as voting.
> >
> >What does this mean? That the tripleo job could vote and do a -1 and
> >block ironic's gate?
> >
> >    In the past most of the non tripleo projects tended to ignore
> >    the results from the tripleo job as it wasn't unusual for the job to
> >    be broken for days at a time. The thing is, ignoring the results of
> >    the job is the reason (the majority of the time) it was broken in
> >    the first place.
> >    To decrease the number of breakages we are now no longer
> >    running master code for everything (for the non tripleo projects we
> >    bump the versions we use periodically if they are working). I
> >    believe with this model the CI jobs we run have become a lot more
> >    reliable, there are still breakages but far less frequently.
> >
> >    What I am proposing is that we add at least one of our tripleo jobs
> >    back to both heat and ironic (and other projects associated with
> >    them, e.g. clients, ironic-inspector etc.), tripleo will switch to
> >    running latest master of those repositories, and the cores approving
> >    on those projects should wait for a passing CI job before hitting
> >    approve.
> >    So how do people feel about doing this? Can we give it a go? A
> >    couple of people have already expressed an interest in doing this
> >    but I'd like to make sure we're all in agreement before switching it on.
> >
> >This seems to indicate that the tripleo jobs are non-voting, or at
> >least
> >won't block the gate -- so I'm fine with adding tripleo jobs to
> >ironic.
> >But if you want cores to wait/make sure they pass, then shouldn't
> they
> >be voting? (Guess I'm a bit confused.)
> >
> >  +1
> >
> >  I don't think it hurts to turn it on, but tbh I'm uncomfortable
> with the
> >  mental overhead of a non-voting job that I have to manually treat
> as a
> >  voting job. If it's stable enough to make it a voting job, I'd
> prefer we
> >  just make it voting. And if it's not then I'd like to see it be made
> >  stable enough to be a voting job and then make it voting.
> >
> >This is roughly where I sit as well -- if it's non-voting, experience
> >tells me that it will largely be ignored, and as such, isn't a good
> use of
> >resources.
>
> I'm sure you can appreciate it's something of a chicken/egg problem though
> - if everyone always ignores non-voting jobs, they never become voting.
>
> That effect is magnified with TripleO though, because it consumes so many
> OpenStack projects, any one of which has the capability to break our CI, so
> in an ideal world we'd have voting feedback on all-the-things, but that's
> not where we are right now due in large-part to the steady stream of
> regressions (from Heat, Ironic and other projects).
>
> >I haven't looked at tripleo or tripleoci in a while, so I won't assume that
> >my recollection of the CI jobs bears any resemblance to what exists today.
> >Could you explain what areas of ironic (or its subprojects) will be
> >covered by these tests?  If they are already covered by existing
> tests,
> >then I don't see the benefit of adding another job; conversely, if
> this is
> >testing areas we don't cover today, then there's probably value in
> running
> >tripleoci in a voting fashion for now and then moving that coverage
> into
> >ironic's project testing.
>
> I like to think of TripleO as a trunk-chasing "power user", and as such
> gives very valuable "user" feedback, including breaking things in exciting
> ways you hadn't anticipated in your 

[openstack-dev] [ironic]Create vm fails when using IronicDriver

2015-12-01 Thread Zhi Chang
Hi all,
    I installed Ironic in my devstack by following the document
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html. But I've hit
a problem: creating a VM through nova using the IronicDriver fails. My nova.conf
looks like this:
./nova.conf:scheduler_host_manager =
nova.scheduler.ironic_host_manager.IronicHostManager
./nova.conf:compute_driver = nova.virt.ironic.IronicDriver


The command to create the VM is: nova boot --flavor baremetal --image $image --key-name
default --nic net-id=xxx testing


And the error message is:
InstanceDeployFailure: RPC do_node_deploy failed to validate deploy or power 
info. Error: Node d71babdd-aa91-450d-b957-dc8c633c41f2 is configured to use the 
agent_ssh driver which currently does not support deploying partition images.


Could someone give me some advice?


Thx
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-12-01 Thread bharath thiruveedula
Hi,

Sorry, I was off for some days because of health issues.

I think the work items for this BP [1] are:

1) Add support to accept a json file in the container-create command
2) Handle JSON input in the docker conductor
3) Implement the mesos conductor for container create, delete and list
   (a rough sketch of 1 and 3 is below)

Correct me if I am wrong. And let me know the process for implementing a BP in
magnum. I think we need approval for this BP first and then the implementation?
[1] https://blueprints.launchpad.net/magnum/+spec/mesos-conductor
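
A rough sketch of what items 1 and 3 could look like on the dispatch side is
below. Every name here is hypothetical for illustration; none of it is
existing magnum code:

    # Hypothetical sketch of work items 1) and 3): read a JSON definition
    # passed to container-create and hand it to the conductor matching the
    # bay's COE. No function or class here is real magnum code.
    import json

    CONDUCTORS = {
        'swarm': lambda defn: ('docker conductor', defn),
        'mesos': lambda defn: ('mesos conductor', defn),
    }

    def container_create(bay_coe, definition_file):
        with open(definition_file) as f:
            definition = json.load(f)
        handler = CONDUCTORS.get(bay_coe)
        if handler is None:
            raise ValueError('unsupported COE: %s' % bay_coe)
        return handler(definition)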
Regards,
Bharath T (tbh)

Date: Fri, 20 Nov 2015 07:44:49 +0800
From: jay.lau@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

It's great that we come to some agreement on unifying the client call ;-)

As I proposed in the previous thread, I think that "magnum app-create" may be 
better than "magnum create"; I want to use "magnum app-create" to distinguish it 
from "magnum container-create". "app-create" may also not be a good name, as 
k8s also has the concept of a service, which is not actually an app. Comments?

I think we can file a bp for this and it will be a great feature in M release!

On Fri, Nov 20, 2015 at 4:59 AM, Egor Guz  wrote:
+1, I found that 'kubectl create -f FILENAME' 
(https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/kubectl/kubectl_create.md)
 works very well for different types of objects, and I think we should try to use 
it.



but I think we should support two use-cases (sketched just below):

 - 'magnum container-create', with a simple list of options that works for 
Swarm/Mesos/Kub. It will be a good option for users who just want to try 
containers.

 - 'magnum create ', with a file which has a Swarm/Mesos/Kub specific payload.
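
To make the two modes concrete, here is a rough sketch of what the calls might
look like (command names and flags are purely illustrative, neither form exists
yet, and 'mybay'/'web'/'app.json' are placeholders):

# COE-agnostic: the same simple options work for Swarm/Mesos/Kubernetes bays
magnum container-create --name web --image nginx --bay mybay

# COE-specific: hand the native payload (e.g. a Marathon app definition)
# straight through to the bay's own API
magnum create --bay mybay --file app.json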



―

Egor



From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, November 19, 2015 at 10:36
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Mesos Conductor



I’m open to allowing magnum to pass a blob of data (such as a lump of JSON or 
YAML) to the Bay's native API. That approach strikes a balance that’s 
appropriate.



Adrian



On Nov 19, 2015, at 10:01 AM, bharath thiruveedula wrote:



Hi,



In the present scenario, we can have a mesos conductor with the existing 
attributes [1]. Or we can add extra options like 'portMappings', 'instances', 
'uris' [2]. The other option is to take a JSON file as input to 'magnum 
container-create' and dispatch it to the corresponding conductor, which would then 
handle the JSON input. Let me know your opinions.
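
For context, a Marathon application definition of the kind referenced in [2]
looks roughly like this (all values are illustrative only):

{
  "id": "basic-app",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.25,
  "mem": 64,
  "instances": 2,
  "uris": ["https://example.com/app.tar.gz"],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "python:3",
      "network": "BRIDGE",
      "portMappings": [{"containerPort": 8080, "hostPort": 0}]
    }
  }
}

A 'magnum container-create' that accepted such a file would mostly just need to
pass it through to the bay's Marathon endpoint.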





Regards

Bharath T









[1]https://goo.gl/f46b4H

[2]https://mesosphere.github.io/marathon/docs/application-basics.html



To: openstack-dev@lists.openstack.org

From: wk...@cn.ibm.com

Date: Thu, 19 Nov 2015 10:47:33 +0800

Subject: Re: [openstack-dev] [magnum] Mesos Conductor



@bharath,



1) Actually, if you mean using container-create (and delete) on a mesos bay for 
apps: I am not sure how different the docker interface and the mesos interface 
would be. One point: when you introduce that feature, please do not make the 
docker container interface more complicated than it is now. I worry that it 
would confuse end-users more than the unified interface would benefit them. 
(Maybe pass one JSON file as an optional parameter to create containers in mesos.)



2) For the unified interface, I think it needs more thought. We should not make 
end-users learn new concepts or interfaces unless we can offer a clearly better 
interface, and different COEs vary a lot. It is very challenging.







Thanks



Best Wishes,



Kai Qiang Wu (吴开强 Kennan)

IBM China System and Technology Lab, Beijing



E-mail: wk...@cn.ibm.com

Tel: 86-10-82451647

Address: Building 28(Ring Building), ZhongGuanCun Software Park,

No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193



Follow your heart. You are miracle!



bharath thiruveedula wrote on 19/11/2015 10:31:58 am: @hongin, @adrian I agree 
with you. So can we go ahead with magnum container-create(delete) ... for



From: bharath thiruveedula
To: OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-01 Thread Qiao,Liyong

hi Everett
I'd like to take it.

thanks
Eli.

On 2015年12月02日 05:18, Everett Toews wrote:

Hello Magnumites,

The API Working Group [1] is looking for a Cross-Project Liaison [2] from the 
Magnum project.

What does such a role entail?

The API Working Group seeks API subject matter experts for each project to 
communicate plans for API updates, review API guidelines with their project’s 
view in mind, and review the API Working Group guidelines as they are drafted. 
The Cross-Project Liaison (CPL) should be familiar with the project’s REST API 
design and future planning for changes to it.

Please let us know if you're interested and we'll bring you on board!

Cheers,
Everett

[1] http://specs.openstack.org/openstack/api-wg/
[2] http://specs.openstack.org/openstack/api-wg/liaisons.html



--
BR, Eli(Li Yong)Qiao



Re: [openstack-dev] [OpenStack-docs] [Openstack-i18n] DocImpact Changes

2015-12-01 Thread Lana Brindley

On 02/12/15 12:53, Armando M. wrote:
> On 24 November 2015 at 13:11, Lana Brindley 
> wrote:
> 
> Hi Akihiro,
> 
> Judging by most of the bugs we see in the openstack-manuals queue, the
> bugs that will be raised in projects' own queues will usually be against
> their own developer documentation.
> 
> 
>> I am a bit confused. The developer documentation coexists with the code
>> that's being patched. Is this the documentation you're referring to? So for
>> instance in Neutron we have [1] rendered into [2].

That's correct.

> 
>> I always assumed that DocImpact was needed to flag configuration changes,
>> and more generally user facing issues, and these are not captured in the
>> developer doc [2] (afaik, *-aas services do have a developer documentation
>> but no entry point in [4]), as developers are not the audience, but rather
>> guide [3]. On the other hand, if patches do require dev doc changes (like
>> design issues etc), reviewers tend to ask to take care of them in the
>> context of the same patch, without the burden of going through an extra bug
>> report.

I agree, and that was the original intention, however it doesn't get used that 
way in the majority of cases we see. Feel free to use DocImpact (with a 
description), and then manually flip the bug to openstack-manuals in this case.

> 
>> So the fact that Neutron's DocImpact changes raises a bug report in LP
>> Neutron seems counterintuitive, and I am a bit at loss here. Can you
>> elaborate on what was the rationale for making only certain projects target
>> their own queues? I wouldn't want to have bug reports be reassigned to the
>> right queue only to cause more burden that this initiative originally
>> intended to relieve.

We chose the five 'defcore' projects here, as they are the ones that are 
formally (and most extensively) documented in openstack-manuals.

> 
>> Having said that, let's assume we are stricter about what gets flagged with
>> DocImpact, would you be open to revisiting the doc group for Neutron as set
>> in [5]? In the meantime, I'll go ahead and look at the outstanding patches
>> and bug reports recently filed to sanitize with the best of my abilities.

As we noted in the original spec, we appreciate that Neutron is on the cusp of 
being considered 'defcore', and we're happy to include it once that has 
happened formally.

Thanks for your help.

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-01 Thread Sean Dague
On 12/01/2015 08:08 AM, Duncan Thomas wrote:
> 
> 
> On 1 December 2015 at 13:40, Sean Dague  > wrote:
> 
> 
> The current approach means locks block on their own, are processed in
> the order they come in, but deletes aren't possible. The busy lock would
> mean deletes were normal. Some extra cpu spent on waiting, and lock
> order processing would be non deterministic. It's trade offs, but I
> don't know anywhere that we are using locks as queues, so order
> shouldn't matter. The cpu cost on the busy wait versus the lock file
> cleanliness might be worth making. It would also let you actually see
> what's locked from the outside pretty easily.
> 
> 
> The cinder locks are very much used as queues in places, e.g. making
> delete wait until after an image operation finishes. Given that cinder
> can already bring a node into resource issues while doing lots of image
> operations concurrently (such as creating lots of bootable volumes at
> once) I'd be resistant to anything that makes it worse to solve a
> cosmetic issue.

Is that really a queue? "Don't do X while Y" is a lock. "Do X, Y, Z, in
order after W is done" is a queue. And what you've explained above, about
"Don't DELETE while DOING OTHER ACTION", is really just the queue model.

What I mean by treating locks as queues was depending on X, Y, Z
happening in that order after W. With a busy wait approach they might
happen as Y, Z, X or X, Z, B, Y. They will all happen after W is done.
But relative to each other, or to new ops coming in, no real order is
enforced.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [cross-project] Cross-project Liaisons

2015-12-01 Thread Mike Perez
On 09:29 Dec 01, Doug Hellmann wrote:
> Excerpts from Mike Perez's message of 2015-11-30 17:30:52 -0800:
> > Hello all,
> > 
> > Currently for cross-project specs, the author of the spec spends the time to
> > explain why a certain feature makes sense to be across multiple projects. 
> > This
> > also includes giving technical solutions for it working with a variety of
> > services and making sure everyone is happy.
> > 
> > Today we have the following problems:
> > 
> > * Authors of specs can't progress forward with specs because of lack of
> >   attention. Eventually getting frustrated and giving up.
> > * Some projects could miss a cross-project spec being approved by the TC.
> > 
> > It has been expressed to me at the previous Cross-Project Communication 
> > Tokyo
> > summit session that PTLs don't have time for cross-project issues. I agree, 
> > as
> > being a previous PTL your time is thin. However, I do think someone from 
> > each
> > project needs to be aware and involved with cross-project initiatives.
> > 
> > I would like to propose cross-project liaisons which would have the 
> > following
> > duties:
> > 
> > * Watching the cross-project spec repo [1].
> >   - Comment on specs that involve your project. +1 to carry forward for TC
> > approval.
> > -- If you're not able to provide technical guidance on certain specs for
> >your project, it's up to you to get the right people involved.
> > -- Assuming you get someone else involved, it's up to you to make sure 
> > they
> >keep up with communication.
> >   - Communicate back to your project's meeting on certain cross-project 
> > specs
> > when necessary. This is also good for the previous bullet point of 
> > sourcing
> > who would have technical knowledge for certain specs.
> > * Attend the cross-project meeting when it's called for [2].
> > 
> > 
> > [1] - 
> > https://review.openstack.org/#/q/project:+openstack/openstack-specs+status:+open,n,z
> > [2] - https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting
> 
> I like the idea in general, but have some questions about how you see it
> being implemented.
> 
> Other liaisons typically have someone they're reporting to/working
> with directly. For example Oslo Liaisons and the Oslo PTL, release
> liaisons and the release team PTL, etc.  It doesn't seem fair to
> ask each spec author to keep up with all of the liaisons, so who
> will be in that role for this group of liaisons? With whom are they
> liaising?

I would like to volunteer myself for coordinating with this group. The
OpenStack Foundation has a good deal of my efforts spent here, so it's
something I can dedicate time to.

-- 
Mike Perez



Re: [openstack-dev] [Fuel] Remove nova-network as a deployment option in Fuel?

2015-12-01 Thread Alexander Arzhanov
Hi, we have plugins for neutron (DVS and NSX). We'll finish the work
on the component registry.
You could safely remove nova-network altogether.

On Tue, Dec 1, 2015 at 5:49 PM, Sheena Gregson 
wrote:

> We do support Neutron for vCenter, but – as I mentioned a few months ago –
> we do not yet have a fully vetted way to deploy the multi-hypervisor use
> case with both KVM/QEMU and vCenter.  This depends on our ability to select
> multiple networking options and align them to the correct hypervisors.
>
>
>
> Per a conversation with Andrian Noga and his team yesterday, they are
> finishing work on the component registry [1] which will enable the
> multi-hypervisor, multi-network use case.  This work is being completed now.
>
>
>
> My recommendation remains that we should avoid deprecating nova-network
> until we have at least tested the basic multi-hypervisor use case with the
> new functionality.  Can we remove nova-network after FF as a High priority
> bug?
>
>
>
> [1] https://review.openstack.org/#/c/229306/
>
>
>
> *From:* Mike Scherbakov [mailto:mscherba...@mirantis.com]
>
> *Sent:* Tuesday, December 01, 2015 1:37 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel] Remove nova-network as a deployment
> option in Fuel?
>
>
>
> Aleksey, can you clarify it? Why it can't be deployed? According to what I
> see at our fakeUI install [1], wizard allows to choose nova-network only in
> case if you choose vcenter.
>
>
>
> Do we support Neutron for vCenter already? If so - we could safely remove
> nova-network altogether.
>
>
>
> [1] http://demo.fuel-infra.org:8000/
>
>
>
> On Mon, Nov 30, 2015 at 4:27 AM Aleksey Kasatkin 
> wrote:
>
> This remains unclear.
>
> Now, for 8.0, the Environment with Nova-Network can be created but cannot
> be deployed (and its creation is tested in UI integration tests).
> AFAIC, we should either remove the ability of creation of environments
> with Nova-Network in 8.0 or return it back into working state.
>
>
> Aleksey Kasatkin
>
>
>
> On Fri, Oct 23, 2015 at 3:42 PM, Sheena Gregson 
> wrote:
>
> As a reminder: there are no individual networking options that can be used
> with both vCenter and KVM/QEMU hypervisors once we deprecate nova-network.
>
>
>
> The code for vCenter as a stand-alone deployment may be there, but the
> code for the component registry (
> https://blueprints.launchpad.net/fuel/+spec/component-registry) is still
> not complete.  The component registry is required for a multi-HV
> environment, because it provides compatibility information for Networking
> and HVs.  In theory, landing this feature will enable us to configure DVS +
> vCenter and Neutron with GRE/VxLAN + KVM/QEMU in the same environment.
>
>
>
> While Andriy Popyvich has made considerable progress on this story, I
> personally feel very strongly against deprecating nova-network until we
> have confirmed that we can support *all current use cases* with the
> available code base.
>
>
>
> Are we willing to lose the multi-HV functionality if something prevents
> the component registry work from landing in its entirety before the next
> release?
>
>
>
> *From:* Sergii Golovatiuk [mailto:sgolovat...@mirantis.com]
> *Sent:* Friday, October 23, 2015 6:30 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel] Remove nova-network as a deployment
> option in Fuel?
>
>
>
> Hi,
>
>
>
> As far as I know neutron code for VCenter is ready. Guys are still testing
> it. Keep patience... There will be announce soon.
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
>
>
> --
>
> Mike Scherbakov
> #mihgen
>

Re: [openstack-dev] [swift] [oslo.messaging] [fuel] [ha] Is Swift going to support oslo.messaging?

2015-12-01 Thread Denis Egorenko
>
> Denis, I actually don't think that Swift needs to use oslo.messaging at
> all. The middleware loads the rabbit configuration for the Notifier class
> from the CONF object here:
>
> https://github.com/openstack/ceilometermiddleware/blob/master/ceilometermiddleware/swift.py#L112
> and that conf object should use the config file sections that
> oslo_messaging relies on, right? So, this should really just require a
> change to the swift-proxy conf files to add the [oslo_messaging] sections,
> I think?


Well, yes, that makes sense. The current scheme supports only one RabbitMQ node
via the url parameter. If there is a possibility to use some kind of
oslo_messaging/rabbit_hosts setting, I'm OK with such an approach.
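
For reference, a rough sketch of what the swift-proxy side could look like once
the ceilometer filter is pointed at an oslo.messaging-style transport URL; the
filter also has to be added to the proxy pipeline, and the exact option names
should be double-checked against the ceilometermiddleware docs (hosts and
credentials below are placeholders):

[filter:ceilometer]
paste.filter_factory = ceilometermiddleware.swift:filter_factory
driver = messagingv2
topic = notifications
url = rabbit://user:pass@rabbit1:5672,user:pass@rabbit2:5672/openstack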

2015-12-01 19:30 GMT+03:00 Jay Pipes :

> On 12/01/2015 08:17 AM, Richard Hawkins wrote:
>
>> ​Is it possible to write the functionality you desire in your
>> own middleware for Swift that lives outside of the Swift code?  I would
>> favor that approach for the following reasons:
>>
>> * You would have more control over code/changes so your middleware could
>> stabilize and mature faster (don't have to wait for reviews from
>> community for minor tweaks).
>>
>> * Since you are writing it, you get exactly what you want.
>>
>> * Swift would not gain more dependencies that would have to be installed.
>>
>> There have been a few projects in the past that have been successful
>> middleware without being included (swauth, swift3, swift-informant).
>>
>> And in the end, if your middleware becomes wildly successful and
>> everybody uses it, there would be no reason it could not be merged into
>> the Swift code at a later time.​
>>
>
> It's not Denis' middleware. It's the Ceilometer community's middleware for
> Swift to emit notification payloads that Ceilometer understands:
>
> https://github.com/openstack/ceilometermiddleware
>
> Denis, I actually don't think that Swift needs to use oslo.messaging at
> all. The middleware loads the rabbit configuration for the Notifier class
> from the CONF object here:
>
>
> https://github.com/openstack/ceilometermiddleware/blob/master/ceilometermiddleware/swift.py#L112
>
> and that conf object should use the config file sections that
> oslo_messaging relies on, right? So, this should really just require a
> change to the swift-proxy conf files to add the [oslo_messaging] sections,
> I think?
>
> Best,
> -jay
>
> Thanks,
>> Richard Hawkins
>> Software Developer - Cloud Files (OpenStack Swift)
>> Rackspace
>>
>>
>>
>> 
>> *From:* Denis Egorenko 
>> *Sent:* Tuesday, December 1, 2015 3:47 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* [openstack-dev] [swift] [oslo.messaging] [fuel] [ha] Is Swift
>>
>> going to support oslo.messaging?
>> Hello folks,
>>
>> The issue I want to raise is related to Swift and oslo.messaging.
>> Currently Swift doesn't support the oslo.messaging middleware. There is no
>> way to set up RabbitMQ HA in the Swift configuration, so we faced
>> problem [1] in Fuel. If we want to use Ceilometer notifications for
>> Swift, we should use ceilometermiddleware. It provides the possibility to
>> configure the transport settings for notifications properly [2]. The main
>> problem is that Fuel uses an HA RabbitMQ setup (mirrored queues) with direct
>> connections from clients. The client uses oslo.messaging to establish the
>> connection with one of the RabbitMQ servers. oslo.messaging uses heartbeats
>> to switch to another RabbitMQ server if/when there are any network
>> issues. However, Swift doesn't use oslo.messaging at all. It's possible
>> to specify only one RabbitMQ server in the Swift configuration, hence there
>> can be problems if the specified server is down or has network flapping
>> issues. An alternative solution is to use a VIP for RabbitMQ [3]. This setup
>> is not perfect either, as the timeout and connection restore time are much worse.
>>
>> So, the question is:
>> Is Swift going to support oslo.messaging and particularly rabbit_hosts?
>>
>> [1] https://bugs.launchpad.net/fuel/+bug/1510064
>> [2] https://review.openstack.org/#/c/152273
>> [3] https://review.openstack.org/#/c/248147
>>
>> --
>> Best Regards,
>> Egorenko Denis,
>> Deployment Engineer
>> Mirantis
>>
>>



-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis

Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-01 Thread Joshua Harlow

Sean Dague wrote:

On 12/01/2015 08:08 AM, Duncan Thomas wrote:


On 1 December 2015 at 13:40, Sean Dague>  wrote:


 The current approach means locks block on their own, are processed in
 the order they come in, but deletes aren't possible. The busy lock would
 mean deletes were normal. Some extra cpu spent on waiting, and lock
 order processing would be non deterministic. It's trade offs, but I
 don't know anywhere that we are using locks as queues, so order
 shouldn't matter. The cpu cost on the busy wait versus the lock file
 cleanliness might be worth making. It would also let you actually see
 what's locked from the outside pretty easily.


The cinder locks are very much used as queues in places, e.g. making
delete wait until after an image operation finishes. Given that cinder
can already bring a node into resource issues while doing lots of image
operations concurrently (such as creating lots of bootable volumes at
once) I'd be resistant to anything that makes it worse to solve a
cosmetic issue.


Is that really a queue? "Don't do X while Y" is a lock. "Do X, Y, Z, in
order after W is done" is a queue. And what you've explained above, about
"Don't DELETE while DOING OTHER ACTION", is really just the queue model.

What I mean by treating locks as queues was depending on X, Y, Z
happening in that order after W. With a busy wait approach they might
happen as Y, Z, X or X, Z, B, Y. They will all happen after W is done.
But relative to each other, or to new ops coming in, no real order is
enforced.



So ummm, just so people know, the fasteners lock code (and the stuff that 
has existed for file locks in oslo.concurrency and, prior to that, 
oslo-incubator...) has never guaranteed the above sequencing.


How it works (and has always worked) is the following:

1. A lock object is created 
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L85)
2. That lock object's acquire() is called 
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L125)
3. At that point do_open is called to ensure the file exists (if it 
already exists it is opened in append mode, so no overwrite happens) and 
the lock object keeps a reference to that file's descriptor 
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L112)
4. A retry loop starts that repeats until either the provided timeout has 
elapsed or the lock is acquired; you can skip over the retry logic, but the 
code that the retry loop calls is 
https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L92


The retry loop (really this loop @ 
https://github.com/harlowja/fasteners/blob/master/fasteners/_utils.py#L87) 
idles for a given delay between attempts to lock the file. That means there is 
no queue-like sequencing: if, for example, entity A (who created its lock 
object at t0) sleeps for 50 seconds between attempts and entity B (who created 
its lock object at t1) sleeps for 5 seconds between attempts, entity B is more 
likely to get the lock (since entity B has a smaller retry delay).
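
A tiny sketch of that behaviour, assuming the fasteners API described above
(the path and delays are placeholders; delay/max_delay only tune how often a
waiter polls, they give no fairness guarantee between waiters):

import fasteners

lock_path = '/tmp/myapp-resource.lock'  # placeholder path
fast_waiter = fasteners.InterProcessLock(lock_path)

# Poll roughly every 0.5s (backing off to at most 5s) for up to 60s; another
# waiter polling less often is simply less likely to win, regardless of who
# started waiting first.
if fast_waiter.acquire(blocking=True, delay=0.5, max_delay=5, timeout=60):
    try:
        pass  # critical section
    finally:
        fast_waiter.release()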


So just fyi, I wouldn't be depending on these for queuing/ordering as is...

-Josh


-Sean





Re: [openstack-dev] [swift] [oslo.messaging] [fuel] [ha] Is Swift going to support oslo.messaging?

2015-12-01 Thread Mehdi Abaakouk

Hi,


Current scheme supports only one RabbitMQ node with url parameter.


That's not true, you can pass many hosts via the url like that: 
rabbit://user:pass@host1:port1,user:pass@host2:port2/vhost


http://docs.openstack.org/developer/oslo.messaging/transport.html#oslo_messaging.TransportURL

But this is perhaps not enough for your use-case.
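
As a small illustration of the multi-host form on the producer side (hosts,
credentials and the notification payload are placeholders; treat this as a
sketch rather than a recommended setup):

from oslo_config import cfg
import oslo_messaging

url = 'rabbit://user:pass@host1:5672,user:pass@host2:5672/vhost'
# oslo.messaging will fail over between the listed brokers.
transport = oslo_messaging.get_notification_transport(cfg.CONF, url=url)
notifier = oslo_messaging.Notifier(transport, driver='messagingv2',
                                   publisher_id='swift.localhost')
notifier.info({}, 'objectstore.http.request', {'path': '/v1/AUTH_test/c/o'})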

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-01 Thread Joshua Harlow

So my takeaway is we need each project to have something like:

https://gist.github.com/harlowja/b4f0ddadbda1f92cc1e2

That could possibly exist in oslo (I just threw it together), but the 
idea is that a thread/greenthread would run the 'run_forever' method in 
that code; it would periodically try to clean up locks by acquiring 
them (with a timeout on acquire) and then deleting the lock path that 
the lock is using (and then releasing the lock).


The problem with that is, as mentioned previously, that even when we acquire 
a lock (i.e. the cleaner gets the lock) and then delete the underlying 
file, that does *not* release other entities trying to acquire that same 
lock file (especially ones that blocked themselves in their acquire() 
method before the deletion started). So either we need to do something 
like Sean stated, or IMHO we need to get away from having a 
lock file that is deleted at all (and use byte-ranges inside a single 
lock file, where that single lock file is never deleted in the first 
place), or we need to get off file locks altogether (but ya, 
that's like umm a bigger issue...)


Such a single lock file would then use something like the following to 
get locks from it:


class LockSharder(object):
    """Maps lock names onto a fixed pool of pre-created locks."""

    def __init__(self, offset_locks):
        # offset_locks: pre-created lock objects (e.g. byte-range locks
        # carved out of one never-deleted lock file).
        self.offset_locks = offset_locks

    def get_lock(self, name):
        # Deterministically shard the name onto one of the pooled locks.
        return self.offset_locks[hash(name) % len(self.offset_locks)]
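
A possible usage sketch (approximating the byte-range idea with a pool of
separate, never-deleted lock files, since fasteners has no byte-range lock out
of the box; the paths and pool size are placeholders):

import fasteners

offset_locks = [fasteners.InterProcessLock('/tmp/myapp-shard-%d.lock' % i)
                for i in range(16)]
sharder = LockSharder(offset_locks)

with sharder.get_lock('volume-0001'):
    pass  # critical section for whatever 'volume-0001' protects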

So there are a few ideas...

Duncan Thomas wrote:



On 1 December 2015 at 13:40, Sean Dague > wrote:


The current approach means locks block on their own, are processed in
the order they come in, but deletes aren't possible. The busy lock would
mean deletes were normal. Some extra cpu spent on waiting, and lock
order processing would be non deterministic. It's trade offs, but I
don't know anywhere that we are using locks as queues, so order
shouldn't matter. The cpu cost on the busy wait versus the lock file
cleanliness might be worth making. It would also let you actually see
what's locked from the outside pretty easily.


The cinder locks are very much used as queues in places, e.g. making
delete wait until after an image operation finishes. Given that cinder
can already bring a node into resource issues while doing lots of image
operations concurrently (such as creating lots of bootable volumes at
once) I'd be resistant to anything that makes it worse to solve a
cosmetic issue.


--
Duncan Thomas



Re: [openstack-dev] [Fuel] Nominating Dmitry Burmistrov to core reviewers of fuel-mirror

2015-12-01 Thread Mike Scherbakov
Vladimir,
if you've been behind of this, could you please share further plans in
separate email thread or (better) provide plans in README in the repo, so
everyone can be aware of planned changes and can review them too? If you or
someone else propose a change, please post a link here...

Thanks,

On Tue, Dec 1, 2015 at 6:27 AM Vladimir Kozhukalov 
wrote:

> Thomas,
>
> You are right about two independent modules in the repo. That is because
> the former intention was to get rid of fuel-mirror (and fuel-createmirror)
> and perestroika and leave only packetary there. Packetary is to be
> developed so it is able to build not only repositories but  packages as
> well. So we'll be able to remove perestroika once it is ready. Two major
> capabilities of fuel-mirror are:
> 1) create mirror (and partial mirror) and packetary can be used for this
> instead
> 2) apply mirror to nailgun (which is rather a matter of python-fuelclient)
> So fuel-mirror also should be removed in the future to avoid functionality
> duplication.
>
> Those were the reasons not to put them separately. (C) "There can be only
> one".
>
>
>
>
>
> Vladimir Kozhukalov
>
> On Tue, Dec 1, 2015 at 1:25 PM, Thomas Goirand  wrote:
>
>> On 12/01/2015 09:25 AM, Mike Scherbakov wrote:
>> >  4. I don't quite understand how repo is organized. I see a lot of
>> > Python code regarding to fuel-mirror itself and packetary, which is
>> > used as fuel-mirrors core and being written and maintained mostly by
>> > Bulat [5]. There are seem to be bash scripts now related to
>> > Perestroika, and. I don't quite get how these things relate each to
>> > other, and if we expect core reviewers to be merging code into both
>> > Perestroika and Packetary? Unless mission of repo, code gets clear,
>> > I'd abstain from giving +1...
>>
>> Also, why isn't packetary living in its own repository? It seems wrong
>> to me to have 2 python modules living in the same source repo, unless
>> they share the same egg-info. It feels weird to have to call setup.py
>> install twice in the resulting Debian source package. That's not how
>> things are done elsewhere, and I'd like to avoid special cases, just
>> because it's fuel...
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>>
>
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [Fuel] Patch size limit

2015-12-01 Thread Michael Krotscheck
TL/DR: I think you're trying to solve the problem the wrong way. If you're
trying to reduce the burden of large patches, I feel the project in
question should distribute the burden of reviewing the big ones so one
person's not stuck doing them all. That also means you can keep that patch
in mental context.

Also, random concerns I have with using "Line of code" as a metric:

* What counts as a Line of Code? Does whitespace count? Comments? Docs?
Tests? Should comments/docs/tests be included in this heuristic?
* Taking a big patch, and splitting it into several, can easily lead to
dead code as the first patch may remain completely inactive on master until
the entire patch chain merges.
* git annotate becomes far less useful, as you cannot simply look at all
the applicable changes for a given patch - you have to dig for all related
patches.
* Reverting things becomes more difficult for the same reason. Would you
revert all-in-one? One revert per patch?

I've seen, and written, 1000+ line patches. Some of them were 200 lines of
logic, 1000+ lines of tests. Others included extensive documentation,
comments, etc, or perhaps-too-verbose parameter and method names that
clearly explain what they do. Others use method-parameter-per-line style
formatting in their code to assist in legibility.

While I totally understand how frustrating it is to have to review a large
patch, I'm not in favor of a hard limit for something which is governed
mostly by whitespace and formatting preferences.
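
For what it's worth, if a soft limit were adopted, a rough way to count "logic"
lines while ignoring pure renames and whitespace-only changes might be something
like the following (the refs and exact flags are illustrative and would need
tuning per repo):

git diff -M --ignore-all-space --numstat origin/master..HEAD \
  | awk '$1 != "-" {changed += $1 + $2} END {print changed}'

Pure renames show up as 0/0 with -M, and binary files (reported as "-") are
skipped by the awk filter.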

Michael

On Tue, Dec 1, 2015 at 6:36 AM Sylwester Brzeczkowski <
sbrzeczkow...@mirantis.com> wrote:

> Neil, just to clarify: moved/renamed files are marked as "R" so I think
> there may be some way to ignore such files when counting LOC.
>
> Maciej, I completely agree with you. It's pretty hard to review such a big
> change, and it takes a lot of time which could be saved by submitting smaller
> patches.
>
> +1
>
> On Tue, Dec 1, 2015 at 1:50 PM, Neil Jerram 
> wrote:
>
>> On 01/12/15 12:45, Maciej Kwiek wrote:
>> > Hi,
>> >
>> > I recently noticed the influx of big patches hitting Gerrit
>> > (especially in fuel-web, but I also heard that there was a couple of
>> > big ones in library). I think that patches that have 1000 LOC are
>> > simply too big to review thoroughly and reliably.
>> >
>> > I would argue that there should be a limit to patch size, either a
>> > soft one (i.e. written down in contributor guidelines) or a hard one
>> > (e.g. enforced by gate job).
>> >
>> > I think that we need a discussion whether we really need this limit,
>> > what should it be, and how to enforce it.
>> >
>> > I personally think that most patches that are over 400 LOC could be
>> > easily split into at least two smaller changes.
>> >
>> > Regarding the limit enforcement - I think we should go with the soft
>> > limit, with X LOC written as a guideline and giving the reviewers a
>> > possibility to give -1s to patches that are too big, but also giving
>> > the possibility to merge bigger changes if it's absolutely necessary
>> > (in really rare cases the changes simply cannot be split). We may mix
>> > in the hard limit for ridiculously large patches (twice the "soft
>> > limit" would be good in my opinion), so the gate would automatically
>> > reject such patches, forcing contributor to split his patch.
>> >
>> > Please share your thoughts on this.
>>
>> I think most of your principle is correct.  However I can imagine a file
>> renaming / moving patch that would appear in Gerrit to be >=1000 LOC,
>> but would actually just be file moves, with perhaps some trivial changes
>> to Python module paths; and I don't think it would be helpful to force a
>> patch like that to be split up.  So it may not be correct to enforce a
>> hard limit all the time.
>>
>> Neil
>>
>>
>>
>
>
>
> --
> *Sylwester Brzeczkowski*
> Junior Python Software Engineer
> Product Development-Core : Product Engineering


Re: [openstack-dev] [Fuel] Nominating Roman Prykhodchenko to python-fuelclient cores

2015-12-01 Thread Aleksey Kasatkin
+1.
No doubts. )


Aleksey Kasatkin


On Tue, Dec 1, 2015 at 5:49 PM, Dmitry Pyzhov  wrote:

> Guys,
>
> I propose to promote Roman Prykhodchenko to python-fuelclient cores. He is
> the main contributor and maintainer of this repo. And he did a great job
> making changes toward OpenStack recommendations. Cores, please reply with
> your +1/-1.
>


Re: [openstack-dev] [cross-project] Cross-project Liaisons

2015-12-01 Thread Doug Hellmann
Excerpts from Mike Perez's message of 2015-12-01 08:45:30 -0800:
> On 09:29 Dec 01, Doug Hellmann wrote:
> > Excerpts from Mike Perez's message of 2015-11-30 17:30:52 -0800:
> > > Hello all,
> > > 
> > > Currently for cross-project specs, the author of the spec spends the time 
> > > to
> > > explain why a certain feature makes sense to be across multiple projects. 
> > > This
> > > also includes giving technical solutions for it working with a variety of
> > > services and making sure everyone is happy.
> > > 
> > > Today we have the following problems:
> > > 
> > > * Authors of specs can't progress forward with specs because of lack of
> > >   attention. Eventually getting frustrated and giving up.
> > > * Some projects could miss a cross-project spec being approved by the TC.
> > > 
> > > It has been expressed to me at the previous Cross-Project Communication 
> > > Tokyo
> > > summit session that PTLs don't have time for cross-project issues. I 
> > > agree, as
> > > being a previous PTL your time is thin. However, I do think someone from 
> > > each
> > > project needs to be aware and involved with cross-project initiatives.
> > > 
> > > I would like to propose cross-project liaisons which would have the 
> > > following
> > > duties:
> > > 
> > > * Watching the cross-project spec repo [1].
> > >   - Comment on specs that involve your project. +1 to carry forward for TC
> > > approval.
> > > -- If you're not able to provide technical guidance on certain specs 
> > > for
> > >your project, it's up to you to get the right people involved.
> > > -- Assuming you get someone else involved, it's up to you to make 
> > > sure they
> > >keep up with communication.
> > >   - Communicate back to your project's meeting on certain cross-project 
> > > specs
> > > when necessary. This is also good for the previous bullet point of 
> > > sourcing
> > > who would have technical knowledge for certain specs.
> > > * Attend the cross-project meeting when it's called for [2].
> > > 
> > > 
> > > [1] - 
> > > https://review.openstack.org/#/q/project:+openstack/openstack-specs+status:+open,n,z
> > > [2] - https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting
> > 
> > I like the idea in general, but have some questions about how you see it
> > being implemented.
> > 
> > Other liaisons typically have someone they're reporting to/working
> > with directly. For example Oslo Liaisons and the Oslo PTL, release
> > liaisons and the release team PTL, etc.  It doesn't seem fair to
> > ask each spec author to keep up with all of the liaisons, so who
> > will be in that role for this group of liaisons? With whom are they
> > liaising?
> 
> I would like to volunteer myself for coordinating with this group. The
> OpenStack Foundation has a good deal of my efforts spent here, so it's
> something I can dedicate time to.
> 

Great, let's do it!

Doug



Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-01 Thread Armando M.
On 30 November 2015 at 23:43, Gal Sagie  wrote:

> To me, and I could have got it wrong, the stadium means two main things (at
> this point in time):
>
> 1) Remove/ease the burden of OpenStack governance and extra work for
> projects/drivers that implement Neutron and are "relatively small"
>
> This lets the projects that just want to implement Neutron be
> managed with the same infrastructure without dealing
> with a lot of extra stuff (that same extra stuff you are complaining
> about, and I totally understand where you are coming from...)
>

This is a two way street, everything has a cost, and cost should not be
borne by a single party.


>
> 2) Be able to set a standard of "quality" (and this needs to be better
> defined) for all the drivers that implement Neutron, and
> also set a standard for development process (specs, bugs, priorities,
> CI, testing)
>

That is somewhat of a sticking point, because right now we have anything
but standard quality. However the biggest problem is: ensuring standard
quality is an effort in itself.


>
> With this definition, it first means to me, as Russell suggested, that
> Kuryr should be an independent project.
> Regarding Dragonflow and Octavia i am not sure yet but lean to the same
> conclusion as Russell.
>
> In order to solve some of the problems you mention, I suggest the
> following:
>
> 1) Define a set of responsibilities/guidelines for the sub-projects
> lieutenants in order to comply with the "quality" standard
> If they fail to do it with no good explanation for X cycles, the
> project should be removed from the stadium.
>

Don't you see that we'd be creating work for ourselves...work that steers
important focus away from what really matters? I don't think that Neutron
needs to become a quality certification body. That's not who we are, and
never will be.


>
> 2) As suggested, delegate and increase the team size that is responsible
> to verify and help these projects with the extra work.
> I am sure there are people willing to volunteer and help with these
> tasks, and test periods could be applied for trust issues.
> I believe we all want to see Neutron and OpenStack succeed.
>

Delegating centralized tasks that are supposed to be distributed in the
first place sounds like nonsense to me.


>
> I don't see how just moving this work to the TC or any other centralized
> group in OpenStack is going to help; I think we
> want to strive to group common work under parent projects, especially in
> this case (in my opinion anyway).
>

I am not advocating moving anything to the TC or any other centralized
group. I am saying: you want a project hosted in OpenStack: fine, you are
in charge. No-one else. Help and assistance is always available, but it's
not a birthright.


>
> I think this can be very handy when we want our processes (at least
> in the Neutron world) to be similar and
> complementary.
>
> Just the way i see things right now..
>
> Gal.
>
>
>
>
> On Tue, Dec 1, 2015 at 9:10 AM, Armando M.  wrote:
>
>>
>>
>> On 30 November 2015 at 20:11, Russell Bryant  wrote:
>>
>>> Some additional context: there are a few proposals for additional git
>>> repositories for Neutron that have been put on hold while we sort this
>>> out.
>>>
>>> Add networking-bagpipe:
>>>   https://review.openstack.org/#/c/244736/
>>>
>>> Add the Astara driver:
>>>   https://review.openstack.org/#/c/230699/
>>>
>>> Add tap-as-a-service:
>>>   https://review.openstack.org/#/c/229869/
>>>
>>> On 11/30/2015 07:56 PM, Armando M. wrote:
>>> > I would like to suggest that we evolve the structure of the Neutron
>>> > governance, so that most of the deliverables that are now part of the
>>> > Neutron stadium become standalone projects that are entirely
>>> > self-governed (they have their own core/release teams, etc). In order
>>> to
>>> > denote the initiatives that are related to Neutron I would like to
>>> > present two new tags that projects can choose to label themselves with:
>>> >
>>> >   * 'is-neutron-subsystem': this means that the project provides
>>> > networking services by implementing an integral part (or parts) of
>>> > an end-to-end neutron system. Examples are: a service plugin, an
>>> ML2
>>> > mech driver, a monolithic plugin, an agent etc. It's something an
>>> > admin has to use in order to deploy Neutron in a certain
>>> configuration.
>>> >   * 'use-neutron-system': this means that the project provides
>>> > networking services by using a pre-deployed end-to-end neutron
>>> > system as is. No modifications whatsoever.
>>>
>>> I just want to clarify the proposal.  IIUC, you propose splitting most
>>> of what is currently separately deliverables of the Neutron team and
>>> making them separate projects in terms of OpenStack governance.  When I
>>> originally proposed including networking-ovn under Neutron (and more
>>> generally, making room for all drivers to be included), 

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-01 Thread Armando M.
On 1 December 2015 at 03:03, Neil Jerram  wrote:

> On 01/12/15 10:42, Thierry Carrez wrote:
> > Armando M. wrote:
> >> [...]
> >> So my question is: would revisiting/clarifying the concept be due after
> >> some time we have seen it in action? I would like to think so.
> > I also think it's time to revisit this experience now that it's been
> > around for some time. On one hand the Neutron stadium allowed to
> > increase the development bandwidth by tackling bottlenecks in reviews
> > using smaller core review teams. On the other it's been difficult for
> > Neutron leadership to follow up on all those initiatives and the results
> > in terms of QA and alignment with "the OpenStack way" have been... mixed.
>
> Agreed.  networking-calico has now existed as an OpenStack and Neutron
> stadium project since August, but we don't yet have an IRC meeting or
> formal subproject PTL (if such a thing should exist - I'm not sure), so
> there are some design discussions that we're not conducting in public;
> and in those senses we are not yet fully following the OpenStack way.
> (These things are planned!  But not there yet.)


If one of the projects does not follow the four opens, then IMO it does not
deserve to be in either the tent or the stadium; but then again, I don't think
we should put ourselves as an extra layer of judgement between projects and
the TC just because a project is related to Neutron. The TC is in charge
of making that call irrespective of the technology.


>
> If the Neutron stadium didn't exist and we were an OpenStack project
> like (say) magnum or sahara, would processes and oversight have forced
> us to address those points sooner?  If so, it might be said that the
> Neutron stadium is providing a grey area for projects that are dragging
> their feet a bit.  From a lazy vendor point of view, that might be
> convenient, but from an OpenStack community point of view it's not good.


> Regards,
> Neil
>
>


Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-01 Thread Ben Nemec
Sorry for not getting to this earlier.  Some thoughts inline.

On 11/09/2015 08:51 AM, Dmitry Tantsur wrote:
> Hi folks!
> 
> I spent some time thinking about bringing profile matching back in, so 
> I'd like to get your comments on the following near-future plan.
> 
> First, the scope of the problem. What we do is essentially kind of 
> capability discovery. We'll help nova scheduler with doing the right 
> thing by assigning a capability like "suits for compute", "suits for 
> controller", etc. The most obvious path is to use inspector to assign 
> capabilities like "profile=1" and then filter nodes by it.
> 
> A special care, however, is needed when some of the nodes match 2 or 
> more profiles. E.g. if we have all 4 nodes matching "compute" and then 
> only 1 matching "controller", nova can select this one node for 
> "compute" flavor, and then complain that it does not have enough hosts 
> for "controller".
> 
> We also want to conduct some sanity check before even calling to 
> heat/nova to avoid cryptic "no valid host found" errors.
> 
> (1) Inspector part
> 
> During the liberty cycle we've landed a whole bunch of API's to 
> inspector that allow us to define rules on introspection data. The plan 
> is to have rules saying, for example:
> 
>   rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
>   rule 2: if local_gb >= 100, add capability "controller_profile=1"
> 
> Note that these rules are defined via inspector API using a JSON-based 
> DSL [1].
> 
> As you see, one node can receive 0, 1 or many such capabilities. So we 
> need the next step to make a final decision, based on how many nodes we 
> need of every profile.

Is the intent that this will replace the standalone ahc-match call that
currently assigns profiles to nodes?  In general I'm +1 on simplifying
the process (which is why I'm finally revisiting this) so I think I'm
onboard with that idea.

> 
> (2) Modifications of `overcloud deploy` command: assigning profiles
> 
> New argument --assign-profiles will be added. If it's provided, 
> tripleoclient will fetch all ironic nodes, and try to ensure that we 
> have enough nodes with all profiles.
> 
> Nodes with existing "profile:xxx" capability are left as they are. For 
> nodes without a profile it will look at "xxx_profile" capabilities 
> discovered on the previous step. One of the possible profiles will be 
> chosen and assigned to "profile" capability. The assignment stops as 
> soon as we have enough nodes of a flavor as requested by a user.

And this assignment would follow the same rules as the existing AHC
version does?  So if I had a rules file that specified 3 controllers, 3
cephs, and an unlimited number of computes, it would first find and
assign 3 controllers, then 3 cephs, and finally assign all the other
matching nodes to compute.

I guess there's still a danger if ceph nodes also match the controller
profile definition but not the other way around, because a ceph node
might get chosen as a controller and then there won't be enough matching
ceph nodes when we get to that.  IIRC (it's been a while since I've done
automatic profile matching) that's how it would work today so it's an
existing problem, but it would be nice if we could fix that as part of
this work.  I'm not sure how complex the resolution code for such
conflicts would need to be.

> 
> (3) Modifications of `overcloud deploy` command: validation
> 
> To avoid 'no valid host found' errors from nova, the deploy command will 
> fetch all flavors involved and look at the "profile" capabilities. If 
> they are set for any flavors, it will check if we have enough ironic 
> nodes with a given "profile:xxx" capability. This check will happen 
> after profiles assigning, if --assign-profiles is used.
> 
> Please let me know what you think.
> 
> [1] https://github.com/openstack/ironic-inspector#introspection-rules
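
For illustration, one of the rules described above would look roughly like this
in the introspection-rule JSON DSL [1] (the operator, field path and action
names here are from memory and should be checked against the inspector docs):

{
  "description": "nodes with >= 8 GiB of RAM may serve as compute",
  "conditions": [
    {"op": "ge", "field": "memory_mb", "value": 8192}
  ],
  "actions": [
    {"action": "set-capability", "name": "compute_profile", "value": "1"}
  ]
}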


[openstack-dev] [Fuel] Disabling HA for RPC queues in RabbitMQ

2015-12-01 Thread Dmitry Mescheryakov
Hello guys,

I would like to propose disabling HA for OpenStack RPC queues. The
rationale is to reduce the load on RabbitMQ by removing the necessity for it to
replicate messages across the cluster. You can find more details about the
proposal in the spec [1].

To what is in the spec I can add that I've run a test at scale which
confirms that there is at least one case where our messaging stack is
currently the bottleneck. That is a Rally boot_and_delete_server_with_secgroups
run against a setup with Neutron VXLAN and DVR. Just removing the HA policy
cuts the test time in half and increases message throughput 2-3 times. I
think that is a very clear indication of the benefit we can get.
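
As a purely illustrative sketch of what "HA for notifications only, no HA for
RPC" could look like on the RabbitMQ side (the queue-name pattern below is a
placeholder and would have to be matched to the names oslo.messaging/Fuel
actually use):

# drop the blanket mirror-everything policy
rabbitmqctl clear_policy ha-all
# mirror only notification-style queues; RPC queues stay unmirrored
rabbitmqctl set_policy --apply-to queues ha-notif '^(notifications|ceilometer)\.' \
  '{"ha-mode":"all","ha-sync-mode":"automatic"}'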

I do understand that we are almost in Feature Freeze, so I will request a
feature freeze exception for that change in a separate thread with detailed
plan.

Thanks,

Dmitry

[1] https://review.openstack.org/#/c/247517/


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-01 Thread Armando M.
On 1 December 2015 at 02:40, Thierry Carrez  wrote:

> Armando M. wrote:
> > [...]
> > So my question is: would revisiting/clarifying the concept be due after
> > some time we have seen it in action? I would like to think so.
>
> I also think it's time to revisit this experience now that it's been
> around for some time. On one hand the Neutron stadium allowed to
> increase the development bandwidth by tackling bottlenecks in reviews
> using smaller core review teams. On the other it's been difficult for
> Neutron leadership to follow up on all those initiatives and the results
> in terms of QA and alignment with "the OpenStack way" have been... mixed.
>
> And this touches on the governance issue. By adding all those projects
> under your own project team, you bypass the Technical Committee approval
> that they behave like OpenStack projects and are produced by the
> OpenStack community. The Neutron team basically vouches for all of them
> to be on par. As far as the Technical Committee goes, they are all being
> produced by the same team we originally blessed (the Neutron project team).
>
> That is perfectly fine with me, as long as the Neutron team feels
> confident they can oversee every single one of them and vouch for every
> single one of them. If the Neutron PTL feels the core Neutron leadership
> just can't keep up, I think we have a problem we need to address, before
> it taints the Neutron project team itself.
>
> One solution is, like you mentioned, to make some (or all) of them
> full-fledged project teams. Be aware that this means the TC would judge
> those new project teams individually and might reject them if we feel
> the requirements are not met. We might want to clarify what happens then.
>

That's a good point. Do we have existing examples of this, or would we be
sailing in uncharted waters? That said, I didn't see you comment on the
possible introduction of Neutron-relevant tags; is that something the TC
would be open to?


>
> Thanks for raising this thread!
>

Thank you for chiming in!


>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-01 Thread Armando M.
On 1 December 2015 at 02:40, Neil Jerram  wrote:

> On 01/12/15 05:16, Doug Wiegley wrote:
> > Part of the issue is that in a year, we added all the repos above. And
> > all of said repos were heading over to infra with the same newbie
> > questions/mistakes. Not a bad thing in and of itself, but the sheer
> > volume was causing a lot of infra load. So the infra liaisons are
> > meant to buffer that; exactly the opposite of splitting again. Now add
> > that many repos again this year, and the problem doubles. The review
> > overhead for centralizing this is quite small. The mentor overhead to
> > avoid the repeated mistakes hitting infra is quite a bit higher, but
> > that has to land somewhere, and still isn't huge.
>
> On the other hand, it may also be that we're very unlikely to see so
> many new projects again, as we have in the first 'stadium' (half-)year.
> So if this is a significant aspect of the pain that Armando is
> describing, I suspect it would be overreaction to make a big governance
> change for it - as it's unlikely to happen again.
>

I don't think we've nearly reached the inflection point.


>
> But actually I guess most of the pain is elsewhere, i.e. in the
> cognitive aspect of the Neutron PTL logically needing to understand
> everything in all of the stadium projects.
>
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Responsibility for merging change requests to fuel-main

2015-12-01 Thread Roman Vyalov
Hi all,
According to numerous requests from Fuel engineers, the CI team updated the
ISO in Fuel CI on 30 Nov (from ISO #199 to ISO #230). The new ISO had the
latest code from fuel master that passed BVT tests, but it also had a critical
bug [0] which broke CI for the fuel-library repository.
There were 2 ways to resolve this problem:

   1. Revert the ISO in Fuel CI.
   2. Find the problem in the code and revert the offending change request.

For a number of reasons it was decided to go with option #2.
The change request which broke CI was merged to fuel-main 6 days ago [1], and
multiple requests were merged after it. So the revert [2] had to be reviewed
very carefully, which means votes from core reviewers of the fuel-build team
(Roman Vyalov, Vladimir Kozhukalov), because the fuel-build team is responsible
for the make system (ISO build) and the fuel-main repository.
But the revert [2] was merged without a +1 from the fuel-build team engineers.
Because of this, the ISO build was broken for 2 hours.
The build team prepared a fix for fuel-main [3] to solve the problem with the
broken ISO, plus a Fuel CI test [4]. After a custom ISO was successfully built
with the fix [3], request [3] was merged.
Please wait for a +1/+2 from the fuel-build team before merging change requests
to the fuel-main repository.

[0] https://bugs.launchpad.net/fuel/+bug/1521551
[1] https://review.openstack.org/#/c/241983/
[2] https://review.openstack.org/#/c/251776/
[3] https://review.openstack.org/251821
[4] https://bugs.launchpad.net/fuel/+bug/1521551/comments/2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit Upgrade 12/16

2015-12-01 Thread Stefano Maffulli
On 12/01/2015 06:38 PM, Spencer Krum wrote:
> There is a thread beginning here:
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
> which covers what to expect from the new software.

Nice! This is awesome: the new review panel lets you edit files in the
web interface. No more `git review -d` and a subsequent commit just to fix a
typo. I think this is huge for documentation and all sorts of nitpicking :)
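
For anyone who hasn't done it, the old dance to fix a typo went roughly like
this (the change number is made up):

    git review -d 123456              # download the change locally
    # edit the file, then:
    git commit -a --amend --no-edit   # keep the same Change-Id
    git review                        # push a new patch set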

And while I'm at it, I respectfully bow to the infra team: keeping pace
with frequent software upgrades at this size is no small feat and a rare
accomplishment. Good job.

/stef

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [midonet] What repos to migrate to OpenStack infrastructure

2015-12-01 Thread Sandro Mathys
Hi all,

Trying to prepare moving our repos to the OpenStack infrastructure, I
got reminded just how many repos we have in our GitHub organization
[1] - and how many of those seem to be deprecated or tangential.

Now, do we still want to migrate them? Pretty sure the answer is "no"
for the deprecated ones but I wonder about the tangential ones. Well,
to be honest, I also wonder which are actually deprecated/tangential
and which are considered "important".

First of all, I think repos that are not set up on GerritHub, don't
need to be migrated. Partially because they don't seem important
enough, and partially because migration would force Gerrit upon them
(i.e. Gerrit is the main repo for a project in OpenStack, the others
are just mirrors). But the list of midonet repos on GerritHub [2] is
still very long.

Of those, I figure these are deprecated and shouldn't be migrated:
* arrakis
* midostack

Also, I'm only looking at what repos to migrate under the "midonet"
umbrella. These repos should probably be migrated under a different
umbrella, if at all:
* ansible-* (-> ansible-openstack)
* puppet-* (-> puppet-openstack)
* zephyr (-> zephyr)

Furthermore, these seem to be forks that should (no longer) see
downstream commits, right?
* python-midonetclient
* python-neutronclient

Last but not least, we have repos that have only a single contributor
and should probably not be part of the midonet organization anyway -
and repos with (virtually) no commits:
* bees
* docker-openvpn-client
* docker-zk-web
* orizuru
* toscana
* ubuntu-integration

Alright, this leaves us with 11 repos - a much better number!

These, I think we should migrate:
* mdts
* midonet
* midonet-docs
* midonet-sandbox

These, we could migrate but they are low priority and might actually
go away so let's defer:
* midonethomepage
* planet-midonet

However, I have no idea about these - are they still needed? Do they
belong under the midonet umbrella?
* jmx_exporter
* midonet-charms
* midonet-dockerfile
* tempest-add
* zktimemachine

Please share your thoughts - did I exclude some that should be
included in the migration? Should some be included that are not
currently configured on GerritHub? And what about those 5 repos in the
last list?

Note that all repos we want to migrate under the midonet umbrella
probably need to have a "midonet-" prefix, so it would need to be
added to those that don't have it. (Well, planet-midonet will become
midonet-planet and midonethomepage will gain a dash.)

Thanks,
Sandro

PS. the "when" and "how" will be discussed later - let's focus on the
"what" in this thread, please.

[1] https://github.com/midonet
[2] https://review.gerrithub.io/#/admin/projects/?filter=midonet%252F

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2015-12-01 Thread Kekane, Abhishek
Hi Tan,



Most of the OpenStack RESTful APIs return `X-Openstack-Request-Id` in the API
response header, but this request id is not available to the caller from the
python client.

When you use the --debug option on the command line with a client, you can see
`X-Openstack-Request-Id` on the console, but it is not logged anywhere.
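
If you need the id programmatically today, the simplest workaround is to read
it straight off the HTTP response. A minimal sketch (the endpoint URL and the
token handling are placeholders, not a recommended client):

    import requests

    token = "..."  # a valid keystone token (placeholder)
    resp = requests.get("http://controller:9292/v2/images",
                        headers={"X-Auth-Token": token})
    # glance/cinder/neutron return x-openstack-request-id;
    # nova v2 uses x-compute-request-id
    print(resp.headers.get("x-openstack-request-id") or
          resp.headers.get("x-compute-request-id"))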



A cross-project spec [1] has been submitted and approved for returning
X-Openstack-Request-Id to the caller, and the implementation is in progress.

Please go through the spec for detailed information; it will help you
understand more about request ids and the current work on them.



Please feel free to get back to me anytime if you have any doubts.



[1] 
https://github.com/openstack/openstack-specs/blob/master/specs/return-request-id.rst



Thanks,



Abhishek Kekane









Hi guys

I recently played around with the 'x-openstack-request-id' header but have a
dumb question about how it works. At the beginning, I thought an action across
different services would use the same request id, but it looks like this is not
the case.



First I read the spec:
https://blueprints.launchpad.net/nova/+spec/cross-service-request-id which said
"This ID and the request ID of the other service will be logged at service
boundaries", and I see that cinder/neutron/glance attach their context's
request id as the value of the "x-openstack-request-id" header in their
responses, while nova uses X-Compute-Request-Id. This is easy to understand. So
it looks like each service generates its own request id and attaches it to its
response, that's all.



But then I saw that glance reads 'X-Openstack-Request-ID' to generate the
request id, while cinder/neutron/nova read 'openstack.request_id' when used
with keystone, i.e. they try to reuse the request id from keystone.



This totally confused me. It would be great if you could correct me or point me
to some reference. Thanks a lot



Best Regards,



Tan


__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] weekly meeting of Dec.2

2015-12-01 Thread joehuang
Hi,

Let's have our regular meeting today starting at UTC 1300 in #openstack-meeting.

The networking proposal has been updated in the document, and a proposal for a
stateless PoC was also updated in the doc.
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit?usp=sharing

Best Regards
Chaoyi Huang ( Joe Huang )

From: Zhipeng Huang [mailto:zhipengh...@gmail.com]
Sent: Wednesday, November 25, 2015 5:44 PM
To: OpenStack Development Mailing List (not for usage questions); joehuang; 
caizhiyuan (A); Irena Berezovsky; Orran Krieger; Mohammad Badruzzaman; 홍석찬
Subject: Re: [openstack-dev][tricircle]Tokyo Summit Summary

Hi All,

We will hold our regular meeting today starting at UTC 1300 in #openstack-meeting,
and we will chat about the new architecture discussed during the Tokyo Design
Summit.

On Sat, Nov 21, 2015 at 10:20 AM, Zhipeng Huang 
> wrote:
Hi Team,

We had very good sessions on Tricircle at the Tokyo Summit, both at the main
conference [1] and at the design summit [2].

After the summit the core team dived into the new architecture design discussed
in the design summit session [3], so it has been sorta radio silent, but rest
assured the work is continuing :)

We will still hold our weekly meeting in #openstack-meeting every Wed from UTC
1300 to UTC 1400, where we will discuss problems and ideas in development.
There will be no specific agenda assigned, except for the last meeting of every
month, where we will deal with major problems in a focused way.

Other than the everyday dev discussion in #openstack-meeting, we will have
architectural/functional/conceptual discussions in #openstack-tricircle at an
earlier time each week, where we will bash out ideas on how to move the project
forward.

I'm also contemplating a Google Hangout for the #openstack-meeting sessions so
people can communicate directly. I will send out detailed info about this later
on :)

Anyway, wish y'all a great weekend, and see you next week at the meeting. In
the meantime, check out the new arch proposal done by the core team [3].

[1] 
https://openstacksummitoctober2015tokyo.sched.org/event/49sw/multisite-openstack-deep-dive
[2] https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Tricircle
[3] https://wiki.openstack.org/wiki/Tricircle
--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-01 Thread Hou Ming Wang
Adrian,
If you need more than one alternate, I would like to be one.

Thanks & Best Regards
Hou Ming Wang

On Wed, Dec 2, 2015 at 2:32 PM, 王华  wrote:

> Adrian,
> I would like to be an alternate.
>
> Regards
> Wanghua
>
>
> On Wed, Dec 2, 2015 at 10:19 AM, Adrian Otto 
> wrote:
>
>> Everett,
>>
>> Thanks for reaching out. Eli is a good choice for this role. We should
>> also identify an alternate as well.
>>
>> Adrian
>>
>> --
>> Adrian
>>
>> > On Dec 1, 2015, at 6:15 PM, Qiao,Liyong  wrote:
>> >
>> > hi Everett
>> > I'd like to take it.
>> >
>> > thanks
>> > Eli.
>> >
>> >> On 2015年12月02日 05:18, Everett Toews wrote:
>> >> Hello Magnumites,
>> >>
>> >> The API Working Group [1] is looking for a Cross-Project Liaison [2]
>> from the Magnum project.
>> >>
>> >> What does such a role entail?
>> >>
>> >> The API Working Group seeks API subject matter experts for each
>> project to communicate plans for API updates, review API guidelines with
>> their project’s view in mind, and review the API Working Group guidelines
>> as they are drafted. The Cross-Project Liaison (CPL) should be familiar
>> with the project’s REST API design and future planning for changes to it.
>> >>
>> >> Please let us know if you're interested and we'll bring you on board!
>> >>
>> >> Cheers,
>> >> Everett
>> >>
>> >> [1] http://specs.openstack.org/openstack/api-wg/
>> >> [2] http://specs.openstack.org/openstack/api-wg/liaisons.html
>> >>
>> >>
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> > --
>> > BR, Eli(Li Yong)Qiao
>> >
>> > 
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-01 Thread 王华
Adrian,
I would like to be an alternate.

Regards
Wanghua


On Wed, Dec 2, 2015 at 10:19 AM, Adrian Otto 
wrote:

> Everett,
>
> Thanks for reaching out. Eli is a good choice for this role. We should
> also identify an alternate as well.
>
> Adrian
>
> --
> Adrian
>
> > On Dec 1, 2015, at 6:15 PM, Qiao,Liyong  wrote:
> >
> > hi Everett
> > I'd like to take it.
> >
> > thanks
> > Eli.
> >
> >> On 2015年12月02日 05:18, Everett Toews wrote:
> >> Hello Magnumites,
> >>
> >> The API Working Group [1] is looking for a Cross-Project Liaison [2]
> from the Magnum project.
> >>
> >> What does such a role entail?
> >>
> >> The API Working Group seeks API subject matter experts for each project
> to communicate plans for API updates, review API guidelines with their
> project’s view in mind, and review the API Working Group guidelines as they
> are drafted. The Cross-Project Liaison (CPL) should be familiar with the
> project’s REST API design and future planning for changes to it.
> >>
> >> Please let us know if you're interested and we'll bring you on board!
> >>
> >> Cheers,
> >> Everett
> >>
> >> [1] http://specs.openstack.org/openstack/api-wg/
> >> [2] http://specs.openstack.org/openstack/api-wg/liaisons.html
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> > BR, Eli(Li Yong)Qiao
> >
> > 
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-01 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-12-01 09:28:18 -0800:
> Sean Dague wrote:
> > On 12/01/2015 08:08 AM, Duncan Thomas wrote:
> >>
> >> On 1 December 2015 at 13:40, Sean Dague >> >  wrote:
> >>
> >>
> >>  The current approach means locks block on their own, are processed in
> >>  the order they come in, but deletes aren't possible. The busy lock 
> >> would
> >>  mean deletes were normal. Some extra cpu spent on waiting, and lock
> >>  order processing would be non deterministic. It's trade offs, but I
> >>  don't know anywhere that we are using locks as queues, so order
> >>  shouldn't matter. The cpu cost on the busy wait versus the lock file
> >>  cleanliness might be worth making. It would also let you actually see
> >>  what's locked from the outside pretty easily.
> >>
> >>
> >> The cinder locks are very much used as queues in places, e.g. making
> >> delete wait until after an image operation finishes. Given that cinder
> >> can already bring a node into resource issues while doing lots of image
> >> operations concurrently (such as creating lots of bootable volumes at
> >> once) I'd be resistant to anything that makes it worse to solve a
> >> cosmetic issue.
> >
> > Is that really a queue? Don't do X while Y is a lock. Do X, Y, Z, in
> > order after W is done is a queue. And what you've explains above about
> > Don't DELETE while DOING OTHER ACTION, is really just the queue model.
> >
> > What I mean by treating locks as queues was depending on X, Y, Z
> > happening in that order after W. With a busy wait approach they might
> > happen as Y, Z, X or X, Z, B, Y. They will all happen after W is done.
> > But relative to each other, or to new ops coming in, no real order is
> > enforced.
> >
> 
> So ummm, just so people know, the fasteners lock code (and the stuff that
> has existed for file locks in oslo.concurrency and prior to that
> oslo-incubator...) has never guaranteed the above sequencing.
> 
> How it works (and has always worked) is the following:
> 
> 1. A lock object is created 
> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L85)
> 2. That lock object acquire is performed 
> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L125)
> 3. At that point do_open is called to ensure the file exists (if it
> exists already it is opened in append mode, so no overwrite happens) and
> the lock object has a reference to the file descriptor of that file
> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L112)
> 4. A retry loop starts that repeats until either a provided timeout has
> elapsed or the lock is acquired; you can skip over the retry logic, but the
> code that the retry loop calls is
> https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L92
> 
> The retry loop (really this loop @
> https://github.com/harlowja/fasteners/blob/master/fasteners/_utils.py#L87)
> will idle for a given delay between attempts to lock the file, so there is
> no queue-like sequencing. That means that if, for example, entity A (who
> created its lock object at t0) sleeps for 50 seconds between attempts and
> entity B (who created its lock object at t1) sleeps for 5 seconds between
> attempts, entity B is favored to get the lock (since entity B has a smaller
> retry delay).
> 
> So just fyi, I wouldn't be depending on these for queuing/ordering as is...
> 

Agreed, this form of fcntl locking is basically equivalent to
O_CREAT|O_EXCL locks as Sean described, since we never use the blocking
form. I'm not sure why though. The main reason one uses fcntl/flock is
to go ahead and block so waiters queue up efficiently. I'd tend to agree
with Sean that if we're going to busy wait, just using creation locks
will be simpler.

That said, I think what is missing is the metadata for efficiently
cleaning up stale locks. That can be done with fcntl or creation locks,
but with fcntl you have the kernel telling you for sure if the locking
process is still alive when you want to clean up and take the lock. With
creation, you need to write that information into the lock, and remove
it, and then have a way to make sure the process is alive and knows it
has the lock, and that is not exactly simple. For this reason only, I
suggest staying with fcntl.

Beyond that, perhaps what is needed is a tool in oslo_concurrency or
fasteners which one can use to prune stale locks based on said metadata.
Once that exists, a cron job running that is the simplest answer. Or if
need be, let the daemons spawn processes periodically to do that (you
can't use a greenthread, since you may be cleaning up your own locks and
fcntl will gladly let a process re-lock something it already has locked).
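
To make the comparison concrete, here is a rough sketch (not oslo.concurrency
or fasteners code; the path, delay and critical-section function are made up)
of the two styles being discussed:

    import errno
    import os
    import time

    import fasteners

    def do_critical_work():
        pass  # hypothetical critical section

    # fcntl-style lock (what lockutils/fasteners give us today): the kernel
    # knows who holds it, so stale holders are detectable and waiters could
    # in principle block efficiently.
    lock = fasteners.InterProcessLock('/tmp/myservice-resource.lock')
    with lock:
        do_critical_work()

    # creation lock: the file's existence *is* the lock, so it must be removed
    # on release, and stale-holder metadata (e.g. the owner pid) has to be
    # written into it by hand.
    def acquire_creation_lock(path, delay=0.1):
        while True:
            try:
                fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.write(fd, str(os.getpid()).encode())
                os.close(fd)
                return
            except OSError as e:
                if e.errno != errno.EEXIST:
                    raise
                time.sleep(delay)  # busy wait until the holder unlinks the file

    def release_creation_lock(path):
        os.unlink(path)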

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-01 Thread Dmitry Mescheryakov
Folks,

I would like to request feature freeze exception for disabling HA for RPC
queues in RabbitMQ [1].

As I already wrote in another thread [2], I've conducted tests which
clearly show the benefit we will get from that change. The change itself is a
very small patch [3]. The only thing I want to do before proposing to
merge this change is to conduct destructive tests against it in order to
make sure that we do not introduce a regression here. That should take just
several days, so if there are no other objections, we will be able to
merge the change within a week or two.

Thanks,

Dmitry

[1] https://review.openstack.org/247517
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081006.html
[3] https://review.openstack.org/249180
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan <smuruge...@vmware.com>

2015-12-01 Thread Flavio Percoco

On 23/11/15 17:20 -0300, Flavio Percoco wrote:

Greetings,

I'd like to propose adding Sabari Kumar Murugesan to the glance-core
team. Sabari has been contributing for quite a bit to the project with
great reviews and he's also been providing great feedback in matters
related to the design of the service, libraries and other areas of the
team.

I believe he'd be a great addition to the glance-core team as he has
demonstrated a good knowledge of the code, service and project's
priorities.

If Sabari accepts to join and there are no objections from other
members of the community, I'll proceed to add Sabari to the team in a
week from now.


This has been done!

Welcome to the team and thanks for being part of it.

Flavio



Thanks,
Flavio

--
@flaper87
Flavio Percoco





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Roman Prykhodchenko to python-fuelclient cores

2015-12-01 Thread Sergii Golovatiuk
+1

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Dec 1, 2015 at 6:15 PM, Aleksey Kasatkin 
wrote:

> +1.
> No doubts. )
>
>
> Aleksey Kasatkin
>
>
> On Tue, Dec 1, 2015 at 5:49 PM, Dmitry Pyzhov 
> wrote:
>
>> Guys,
>>
>> I propose to promote Roman Prykhodchenko to python-fuelclient cores. He
>> is the main contributor and maintainer of this repo. And he did a great job
>> making changes toward OpenStack recommendations. Cores, please reply with
>> your +1/-1.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Different projects authentication strategy

2015-12-01 Thread Adam Young

On 12/01/2015 01:23 AM, 1021710773 wrote:

Dear developers,

Hello. I would like to ask some questions about policy rules.
Currently the policy rules of OpenStack, in keystone and the other projects,
are set in policy.json; in other words, the policy rules are the same for
every project. The common way to enforce them is through decorator functions
such as protected(). Keystone manages the users, projects, roles and other
resources. Now, some particular projects (tenants) may want their own
enforcement rules, different from the shared policy.json. In that case, could
we update the usual enforcement decorator to realize per-project
authorization? Also, a policy model now appears in the keystone project.
Could we use it to create an association between projects and policies?



That request has come up in the past. At this point, I don't think we
have a path to "tenant-specific policy", but we have a couple of features in
Mitaka that might be close: implied roles and domain-specific roles.


See the specs:

Implied roles has merged:

http://git.openstack.org/cgit/openstack/keystone-specs/tree/specs/mitaka/implied-roles.rst

Domain specific roles was just given the thumbs up and will likely merge 
soon.
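
For context, the rules being discussed live in each service's policy.json and
are shared by every project. An illustrative fragment (in the spirit of
keystone's sample policy, not copied from it) looks like:

    {
        "admin_required": "role:admin",
        "identity:list_projects": "rule:admin_required",
        "identity:update_project": "rule:admin_required"
    }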





Hope to hear from you. Thanks!


Weiwei Yang

yangwei...@cmss.chinamobile.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Networking Subteam Meeting

2015-12-01 Thread Daneyon Hansen (danehans)
All,

I will be on leave this Thursday and will be unable to chair the Networking 
Subteam meeting. Please respond if you would like to chair the meeting. 
Otherwise, we will not have a meeting this week and pick things back up on 
12/10.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-12-01 Thread Vilobh Meshram
I am highly supportive of the idea of a Nova Quota sub-team for something
as complex as quotas, as it helps to move quickly on reviews and changes.

I agree with John: a test framework to exercise quotas will be helpful and can
be one of the first tasks the Nova Quota sub-team focuses on, as that will lay
the foundation for determining whether the bugs mentioned here
(http://bit.ly/1Pbr8YL) are valid or not.

Having worked in the area of quotas for a while now, introducing features
like the Cinder Nested Quota Driver [1][2], I strongly feel that something like
a Nova Quota sub-team will definitely help. I mention the Cinder quota driver
because it was agreed at the Mitaka design summit that the Nova Nested Quota
Driver [3] would pursue the route taken by Cinder. Nested quotas are one part
of the quota subsystem, and working in a small team helped us iterate quickly
on the Nested Quota patches [4][5][6][7], so IMHO forming a Nova quota
sub-team will help.

Melanie,

If you can share the details of the bug that Joe mentioned for reproducing
quota bugs locally, it would be helpful.

-Vilobh (irc: vilobhmm)

[1] Code : https://review.openstack.org/#/c/205369/
[2] Blueprint :
https://blueprints.launchpad.net/cinder/+spec/cinder-nested-quota-driver
[3] Nova Nested Quota Spec : https://review.openstack.org/#/c/209969/
[4] https://review.openstack.org/#/c/242568/
[5] https://review.openstack.org/#/c/242500/
[6] https://review.openstack.org/#/c/242514/
[7] https://review.openstack.org/#/c/242626/


On Mon, Nov 30, 2015 at 10:59 AM, melanie witt  wrote:

> On Nov 26, 2015, at 9:36, John Garbutt  wrote:
>
> > A suggestion in the past, that I like, is creating a nova functional
> > test that stress tests the quota code.
> >
> > Hopefully that will be able to help reproduce the error.
> > That should help prove if any proposed fix actually works.
>
> +1, I think it's wise to get some data on the current state of quotas
> before choosing a redesign. IIRC, Joe Gordon described a test scenario he
> used to use to reproduce quota bugs locally, in one of the launchpad bugs.
> If we could automate something like that, we could use it to demonstrate
> how quotas currently behave during parallel requests and try things like
> disabling reservations. I also like the idea of being able to verify the
> effects of proposed fixes.
>
> -melanie (irc: melwitt)
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Patch size limit

2015-12-01 Thread Sergii Golovatiuk
Hi,


On Tue, Dec 1, 2015 at 5:41 PM, Michael Krotscheck 
wrote:

> TL/DR: I think you're trying to solve the problem the wrong way. If you're
> trying to reduce the burden of large patches, I feel the project in
> question should distribute the burden of reviewing the big ones so one
> person's not stuck doing them all. That also means you can keep that patch
> in mental context.
>
> Also, random concerns I have with using "Line of code" as a metric:
>
> * What counts as a Line of Code? Does whitespace count? Comments? Docs?
> Tests? Should comments/docs/tests be included in this heuristic?
>

Comments/tests can easily be excluded from the scope of the LOC calculation.
The number of lines may be negotiated with the community. Rally has a 500-line
limit in CI. The Fuel community may vote for 600 or 400 and change it later.
It's not static.
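
As a rough illustration (the paths and the threshold are assumptions, not a
proposed job), such a check could be as simple as:

    # count changed lines in the latest commit, ignoring tests and docs
    git diff --numstat HEAD~1 -- . ':(exclude)*/tests/*' ':(exclude)doc/*' \
        | awk '{n += $1 + $2} END {exit (n > 600)}'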

* Taking a big patch, and splitting it into several, can easily lead to
> dead code as the first patch may remain completely inactive on master until
> the entire patch chain merges.
>

Dependent patches on Gerrit are a good example of how a big patch should be
split. Some of them can break CI, though the chain gives a better view than one
massive patch.


> * git annotate becomes far less useful, as you cannot simply look at all
> the applicable changes for a given patch - you have to dig for all related
> patches.
> * Reverting things becomes more difficult for the same reason. Would you
> revert all-in-one? One revert per patch?
>

It's just a strategy. Usually reverting all-in-one works. However, I have seen
cases where a patch in the middle was reverted.


> I've seen, and written, 1000+ line patches. Some of them were 200 lines of
> logic, 1000+ lines of tests. Others included extensive documentation,
> comments, etc, or perhaps-too-verbose parameter and method names that
> clearly explain what they do. Others use method-parameter-per-line style
> formatting in their code to assist in legibility.
>

>
> While I totally understand how frustrating it is to have to review a large
> patch, I'm not in favor of a hard limit for something which is governed
> mostly by whitespace and formatting preferences.
>

There are some cases when a patch cannot be split. However, a core reviewer may
merge with a -1 from the CI LOC job in that case. Also, the author may specify
in the commit message why the patch cannot be split, though in that case it
will be the author's responsibility to prove the necessity.


> Michael
>
> On Tue, Dec 1, 2015 at 6:36 AM Sylwester Brzeczkowski <
> sbrzeczkow...@mirantis.com> wrote:
>
>> Neil, just to clarify: moved/renamed files are marked as "R" so I think
>> there may be some way to ignore such files when counting LOC.
>>
>> Maciej, I completely agree with you. It's pretty hard to review such a big
>> change, and it takes a lot of time which could be saved by submitting smaller
>> patches.
>>
>> +1
>>
>> On Tue, Dec 1, 2015 at 1:50 PM, Neil Jerram 
>> wrote:
>>
>>> On 01/12/15 12:45, Maciej Kwiek wrote:
>>> > Hi,
>>> >
>>> > I recently noticed the influx of big patches hitting Gerrit
>>> > (especially in fuel-web, but I also heard that there was a couple of
>>> > big ones in library). I think that patches that have 1000 LOC are
>>> > simply too big to review thoroughly and reliably.
>>> >
>>> > I would argue that there should be a limit to patch size, either a
>>> > soft one (i.e. written down in contributor guidelines) or a hard one
>>> > (e.g. enforced by gate job).
>>> >
>>> > I think that we need a discussion whether we really need this limit,
>>> > what should it be, and how to enforce it.
>>> >
>>> > I personally think that most patches that are over 400 LOC could be
>>> > easily split into at least two smaller changes.
>>> >
>>> > Regarding the limit enforcement - I think we should go with the soft
>>> > limit, with X LOC written as a guideline and giving the reviewers a
>>> > possibility to give -1s to patches that are too big, but also giving
>>> > the possibility to merge bigger changes if it's absolutely necessary
>>> > (in really rare cases the changes simply cannot be split). We may mix
>>> > in the hard limit for ridiculously large patches (twice the "soft
>>> > limit" would be good in my opinion), so the gate would automatically
>>> > reject such patches, forcing contributor to split his patch.
>>> >
>>> > Please share your thoughts on this.
>>>
>>> I think most of your principle is correct.  However I can imagine a file
>>> renaming / moving patch that would appear in Gerrit to be >=1000 LOC,
>>> but would actually just be file moves, with perhaps some trivial changes
>>> to Python module paths; and I don't think it would be helpful to force a
>>> patch like that to be split up.  So it may not be correct to enforce a
>>> hard limit all the time.
>>>
>>> Neil
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> 

Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-01 Thread Sergii Golovatiuk
Hi,

-1 for the FFE for disabling HA for RPC queues, as we do not know all the side
effects in HA scenarios.

On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov <
dmescherya...@mirantis.com> wrote:

> Folks,
>
> I would like to request feature freeze exception for disabling HA for RPC
> queues in RabbitMQ [1].
>
> As I already wrote in another thread [2], I've conducted tests which
> clearly show benefit we will get from that change. The change itself is a
> very small patch [3]. The only thing which I want to do before proposing to
> merge this change is to conduct destructive tests against it in order to
> make sure that we do not have a regression here. That should take just
> several days, so if there will be no other objections, we will be able to
> merge the change in a week or two timeframe.
>
> Thanks,
>
> Dmitry
>
> [1] https://review.openstack.org/247517
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081006.html
> [3] https://review.openstack.org/249180
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Install Time Too Long Ironic in devstack

2015-12-01 Thread Pavlo Shchelokovskyy
Hi Zhi,

it seems that Ironic is building a new deploy ramdisk for you with
diskimage-builder. You can skip it if you state

IRONIC_BUILD_DEPLOY_RAMDISK=False

in your local.conf, then the bootstrap image will be downloaded from
tarballs.o.o.
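
i.e. something like this in the usual [[local|localrc]] section (only the last
line is the relevant addition; the rest stands in for whatever you already
have):

    [[local|localrc]]
    # ... your existing Ironic/DevStack settings ...
    IRONIC_BUILD_DEPLOY_RAMDISK=False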

If you are interested, here is a sample Ironic section from the local.conf I
use to deploy Ironic with DevStack; it takes about 1000s to complete
with a full reclone:

https://github.com/pshchelo/stackdev/blob/master/local.conf.sample#L128-L145

(this is a sample config, so uncomment all options in this section).

Best regards,

On Tue, Dec 1, 2015 at 9:39 AM Zhi Chang  wrote:

> hi, all
> I want to install Ironic in my devstack following the document
> http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html.
> During the install process, my console displays:
> 2015-12-01 07:08:44.390 | + PACKAGES=
> 2015-12-01 07:08:44.391 | ++ find /tmp/in_target.d/install.d -maxdepth 1
> -name 'package-installs-*'
> 2015-12-01 07:08:44.393 | + '[' -n '' ']'
> 2015-12-01 07:08:44.393 | + package-installs-v2 --phase install.d
> /tmp/package-installs.json
> 2015-12-01 07:08:44.461 | Map file for ubuntu element does not exist.
> 2015-12-01 07:08:44.492 | Map file for ubuntu element does not exist.
> 2015-12-01 07:08:44.526 | Map file for deploy-ironic element does not
> exist.
> 2015-12-01 07:08:44.558 | Map file for deploy-ironic element does not
> exist.
> 2015-12-01 07:08:44.595 | Map file for deploy-ironic element does not
> exist.
> 2015-12-01 07:08:44.633 | Map file for deploy-ironic element does not
> exist.
> 2015-12-01 07:08:44.668 | Map file for deploy-ironic element does not
> exist.
> 2015-12-01 07:08:44.703 | Map file for deploy-ironic element does not
> exist.
> 2015-12-01 07:08:44.815 | Map file for deploy-tgtadm element does not
> exist.
> 2015-12-01 07:08:44.857 | Map file for deploy-tgtadm element does not
> exist.
>
> I have been waiting a very, very long time; is that expected? And my devstack's
> local.conf at: http://paste.openstack.org/show/480462/
>
> Could someone help me?
>
> Thx
> Zhi Chang
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2015-12-01 Thread Alex Xu
Hi,

We have the weekly Nova API meeting this week. The meeting is being held
on Tuesday at UTC 1200.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Liberty RPMs for RDO

2015-12-01 Thread Chandan kumar
Hello Mathieu,

On Mon, Nov 30, 2015 at 4:48 PM, Alan Pevec  wrote:

> Hi Mathieu,
>
> 2015-11-30 10:54 GMT+01:00 Mathieu Velten :
> > Hi,
> >
> > Let me first introduce myself : I am currently working at CERN to help
> > evaluate and deploy Magnum.
> >
> > In this regard Ricardo recently sends an email regarding Puppet
> > modules, this one is about RPMs of Magnum for CentOS with RDO.
>
> Nice, looking forward to review it!
>
> > You can find here a repository containing the source and binary RPMs
> > for magnum and python-magnumclient.
> > http://linuxsoft.cern.ch/internal/repos/magnum7-testing/
>
> This one is 403 ?
>
> > The version 1.0.0.0b2.dev4 is the Magnum Liberty release and the
> > 1.1.0.0-5 version is the Mitaka M1 release using Liberty dependencies
> > (one client commit regarding keystone auth and one server commit
> > regarding oslo.config have been reverted).
> >
> > Let me know how I can contribute the spec files to somewhere more
> > suitable.
>
> Let's discuss this on rdo-list (CCed)
>
>
I have also created an RDO package review for python-magnumclient:
https://bugzilla.redhat.com/show_bug.cgi?id=1286772
Where can I find your IRC channel? Or you can ping me on the #rdo channel (my
IRC nick is chandankumar), so that we can get Magnum packaged for RDO.

Thanks,

Chandan Kumar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-12-01 Thread yujie

Hi Sean,
 I noticed that the discussion above is about using OpenStack with DPDK only
in devstack.
 I already have a Kilo environment and want it to support DPDK. Would
reinstalling OVS with DPDK work?

Thanks.
Yu


On 2015/11/27 20:38, Mooney, Sean K wrote:

For kilo we provided a single-node all-in-one example config here:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/_downloads/local.conf_example

I have modified that to be a controller with the interfaces and IPs from your
controller local.conf.
I do not have any kilo compute local.conf to hand, but I modified an old compute
local.conf so that it should work, using the IP and interface settings from your
compute local.conf.


Regards
Sean.

From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
Sent: Friday, November 27, 2015 9:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi Sean,

I have changed the hostname on both machines
and tried again; I still have the same error.

I am trying to configure ovs-dpdk with VLAN now.
For the kilo version the getting started guide was missing in the repository,
but I have changed the repositories everywhere to kilo.

Please find the attached local.conf for compute and controller.

One change I have made is that I added the ml2 plugin as vlan for the compute
config as well, because if I use the local.confs exactly as in the example, the
controller uses vlan while the compute takes vxlan for the ml2 config.

And please find all the errors present on the compute and controller.

Thanks
Praveen

On Thu, Nov 26, 2015 at 5:58 PM, Mooney, Sean K 
> wrote:
OpenStack uses the hostname as a primary key in many of the projects.
Nova and neutron both do this.
If you had two nodes with the same hostname then it would cause undefined
behavior.

Based on the error Andreas highlighted, are you currently trying to configure
ovs-dpdk with vxlan/gre?

I also noticed that the getting started guide you linked to earlier was for the
master branch (mitaka), but
you mentioned you were deploying kilo.
The local.conf settings will be different in the two cases.





-Original Message-
From: Andreas Scheuring 
[mailto:scheu...@linux.vnet.ibm.com]
Sent: Thursday, November 26, 2015 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Praveen,
there are many errors in your q-svc log.
It says:

InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use with host 
%(host)s', {'ip': u'10.81.1.150', 'host':
u'localhost.localdomain'}).\n"]


Did you maybe specify duplicated IPs in your controllers' and compute nodes'
neutron tunnel config?

Or did you change the hostname after installation?

Or maybe the code has trouble with duplicated hostnames?

--
Andreas
(IRC: scheuran)



On Di, 2015-11-24 at 15:28 +0100, Praveen MANKARA RADHAKRISHNAN wrote:
> Hi Sean,
>
>
> Thanks for the reply.
>
>
> Please find the logs attached.
> ovs-dpdk is correctly running in compute.
>
>
> Thanks
> Praveen
>
> On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K
> > wrote:
>  Hi would you be able to attach the
>
>  n-cpu log form the computenode  and  the
>
>  n-sch and q-svc logs for the controller so we can see if there
>  is a stack trace relating to the
>
>  vm boot.
>
>
>
>  Also can you confirm ovs-dpdk is running correctly on the
>  compute node by running
>
>  sudo service ovs-dpdk status
>
>
>
>  the neutron and networking-ovs-dpdk commits are from their
>  respective stable/kilo branches so they should be compatible
>
>  provided no breaking changes have been merged to either
>  branch.
>
>
>
>  regards
>
>  sean.
>
>
>
>  From: Praveen MANKARA RADHAKRISHNAN
>  [mailto:praveen.mank...@6wind.com]
>  Sent: Tuesday, November 24, 2015 1:39 PM
>  To: OpenStack Development Mailing List (not for usage
>  questions)
>  Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation
>  fails with Unexpected vif_type=binding_failed
>
>
>
>  Hi Przemek,
>
>
>
>
>  Thanks For the response,
>
>
>
>
>
>  Here are the commit ids for Neutron and networking-ovs-dpdk
>
>
>
>
>
>  [stack@localhost neutron]$ git log --format="%H" -n 1
>
>
>  026bfc6421da796075f71a9ad4378674f619193d
>
>
>  [stack@localhost neutron]$ cd ..
>
>
>  [stack@localhost ~]$ cd networking-ovs-dpdk/
>
>
>  [stack@localhost networking-ovs-dpdk]$  git log --format="%H"
>  -n 1
>
>
>  

Re: [openstack-dev] [oslo][messaging] configurable ack-then-process (at least/most once) behavior

2015-12-01 Thread Bogdan Dobrelya
On 30.11.2015 14:28, Bogdan Dobrelya wrote:
> Hello.
> Please let's make this change [0] happen in oslo.messaging.
> This is a reasonable, straightforward and backwards-compatible change, and
> it is required for OpenStack applications - see [1] - to implement sane HA.
> The only thing left is to cover this change with unit tests.
> 
> [0] https://review.openstack.org/229186
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076217.html
> 

Here is the related bp [0]. I will submit the spec as well and capture there
all of the concerns Mehdi Abaakouk provided in the aforementioned patch review
process. I believe the ack-then-process pattern *has* use cases, which is why
this topic will be raised again and again unless addressed.

[0]
https://blueprints.launchpad.net/oslo.messaging/+spec/at-least-once-guarantee
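
For anyone new to the thread, the pattern in question boils down to when the
AMQP ack is sent relative to message processing. A tiny illustrative sketch
(kombu-style callbacks; handle() is a made-up application callback, not the
oslo.messaging code):

    def handle(body):
        pass  # application-specific processing (made up for the example)

    def on_message_ack_first(body, message):
        # "at most once": ack before processing; a crash inside handle()
        # loses the message
        message.ack()
        handle(body)

    def on_message_process_first(body, message):
        # "at least once": ack after processing; a crash before ack() causes
        # redelivery, so handle() must be idempotent
        handle(body)
        message.ack()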


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze is soon

2015-12-01 Thread Vladimir Kuklin
Mike, I think it is a rather good idea. I guess we can still have a couple of
requests - although everyone is shy, we might get a little storm of
FFEs. BTW, I will file at least one.

On Tue, Dec 1, 2015 at 10:28 AM, Mike Scherbakov 
wrote:

> Hi Fuelers,
> we are a couple of days away from FF [1]. I have not noticed any requests for
> feature freeze exceptions, so I assume that we have pretty much decided what is
> going into 8.0 and what is not.
>
> If there are items which we'd like to ask an exception for, I'd like us to
> have this requested now - so that we all can spend some time on analysis of
> what is done and what is left, and on risk assessment. I'd suggest not
> considering any exception requests on the day of FF, as it doesn't leave us
> time to spend on it.
>
> To make a formal checkpoint of what is in and what is out, I suggest to
> get together on FF day, Wednesday, and go over all the items we have been
> working on in 8.0. What do you think folks? For instance, in #fuel-dev IRC
> at 8am PST (4pm UTC)?
>
> [1] https://wiki.openstack.org/wiki/Fuel/8.0_Release_Schedule
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Encouraging first-time contributors through bug tags/reviews

2015-12-01 Thread Thierry Carrez
sean roberts wrote:
> Being successful at your first patch for most people means that their
> first effort is different than a regular patch.
> 
> Identifying abandoned work more quickly is good. It doesn't help the
> first timer. 
> 
> Tagging low hanging fruit for first timers I like. I'm recommending we
> add a mentor as part of the idea, so the project mentor is responsible
> for the work and the first timer learning. 

I also tend to prefer mentoring to posting detailed instructions to
follow to fix a given bug. The issues you can encounter while pushing a
patch are varied, so a bit more hand-holding is valuable. It also makes
for a better inter-personal experience to interact with a human on IRC
vs. interacting with step-by-step instructions posted on a bug.

So low-hanging-fruit tagging + available mentors sounds like a powerful
combination.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] Cross-project Liaisons

2015-12-01 Thread Thierry Carrez
Mike Perez wrote:
> [...] 
> I would like to propose cross-project liaisons which would have the following
> duties:
> 
> * Watching the cross-project spec repo [1].
>   - Comment on specs that involve your project. +1 to carry forward for TC
> approval.
> -- If you're not able to provide technical guidance on certain specs for
>your project, it's up to you to get the right people involved.
> -- Assuming you get someone else involved, it's up to you to make sure 
> they
>keep up with communication.
>   - Communicate back to your project's meeting on certain cross-project specs
> when necessary. This is also good for the previous bullet point of 
> sourcing
> who would have technical knowledge for certain specs.
> * Attend the cross-project meeting when it's called for [2].
> [...]

I like it. That sounds like a great way to ensure cross-project efforts
don't fall into the cracks. The same horizontal group (chapter? guild?)
could also ultimately end up tracking cross-project cycle goals.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] 'fixed_network' actually means 'fixed_subnet'. Which way is better to fix this?

2015-12-01 Thread Hou Ming Wang
Hi All,
I'm working on an API-related enhancement and encountered the following issue:
bay creation returns 400 Bad Request with a valid fixed-network in the baymodel
(https://bugs.launchpad.net/magnum/+bug/1519748)

The 'fixed_network' actually means 'fixed_network_cidr', or more precisely
'fixed_subnet'. There're 2 possible ways to fix this:

1. Rename the 'fixed_network' to 'fixed_subnet', in Baymodel DB, Baymodel
Object and MagnumClient.

2. Leave 'fixed_network' alone, add a new field 'fixed_subnet' to Baymodel,
and use the 'fixed_subnet' to take the place of current 'fixed_network'.


Please don't hesitate to give some input on which one is better, either by
commenting on the bug or by replying to this mail.


Thanks & Best regards,

HouMing Wang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-01 Thread Thierry Carrez
The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [midonet] IRC: ditch #midonet-dev?

2015-12-01 Thread Sandro Mathys
Hi,

Our IRC channels have been neglected for a long time, and as a result
we lost ownership of #midonet-dev, which is now owned by
freenode-staff. In theory, it should be very easy to get ownership
back, particularly since we still own #midonet. But in reality, it
seems like none of the freenode staff feel responsible for these
requests, so we still aren't owners despite having requested it 3 weeks
ago.

Therefore, Toni Segura suggested we just ditch it and move to
#openstack-midonet instead.

However, several people have also said we don't need two channels,
i.e. we should merge #midonet and #midonet-dev.

So, here's three proposals:

Proposal #1:
* keep #midonet
* replace #midonet-dev with #openstack-midonet

Proposal #2:
* keep #midonet
* merge #midonet-dev into #midonet

Proposal #3:
* replace both #midonet and #midonet-dev with #openstack-midonet

I don't have any strong feelings about any of the proposals, but I suggest
we go with proposal #2. Traffic in both #midonet and #midonet-dev is
rather low, so one channel should do - there are way busier OpenStack
channels out there. Furthermore, #midonet is shorter than
#openstack-midonet and already established. I also think people are more
likely to look in #midonet than #openstack-midonet if they're looking for
us.

Thoughts?

Cheers,
Sandro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] IRC: ditch #midonet-dev?

2015-12-01 Thread Ivan Kelly
+1 for #2

On Tue, Dec 1, 2015 at 10:57 AM, Sandro Mathys  wrote:
> Hi,
>
> Our IRC channels have been neglected for a long time, and as a result
> we lost ownership of #midonet-dev, which is now owner by
> freenode-staff. In theory, it should be very easy to get ownership
> back, particularly since we still own #midonet. But in reality, it
> seems like none of the freenode staff feel responsible for these
> requests, so we still aren't owners after requesting it for 3 weeks
> already.
>
> Therefore, Toni Segura suggested we just ditch it and move to
> #openstack-midonet instead.
>
> However, several people have also said we don't need two channels,
> i.e. we should merge #midonet and #midonet-dev.
>
> So, here's three proposals:
>
> Proposal #1:
> * keep #midonet
> * replace #midonet-dev with #openstack-midonet
>
> Proposal #2:
> * keep #midonet
> * merge #midonet-dev into #midonet
>
> Proposal #3:
> * replace both #midonet and #midonet-dev with #openstack-midonet
>
> I don't have any strong feelings for any of the proposals, but suggest
> we go with proposal #2. Traffic in both #midonet and #midonet-dev is
> rather low, so one channel should do - there's way busier OpenStack
> channels out there. Furthermore, #midonet is shorter than
> #openstack-midonet and already established. I also think people will
> rather look in #midonet than #openstack-midonet if they're looking for
> us.
>
> Thoughts?
>
> Cheers,
> Sandro
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] IRC: ditch #midonet-dev?

2015-12-01 Thread Antoni Segura Puimedon
On Tue, Dec 1, 2015 at 10:59 AM, Ivan Kelly  wrote:

> +1 for #2
>

PS: Beware of the top-posting! It makes vote counting harder ;-)


>
> On Tue, Dec 1, 2015 at 10:57 AM, Sandro Mathys 
> wrote:
> > Hi,
> >
> > Our IRC channels have been neglected for a long time, and as a result
> > we lost ownership of #midonet-dev, which is now owner by
> > freenode-staff. In theory, it should be very easy to get ownership
> > back, particularly since we still own #midonet. But in reality, it
> > seems like none of the freenode staff feel responsible for these
> > requests, so we still aren't owners after requesting it for 3 weeks
> > already.
> >
> > Therefore, Toni Segura suggested we just ditch it and move to
> > #openstack-midonet instead.
> >
> > However, several people have also said we don't need two channels,
> > i.e. we should merge #midonet and #midonet-dev.
> >
> > So, here's three proposals:
> >
> > Proposal #1:
> > * keep #midonet
> > * replace #midonet-dev with #openstack-midonet
> >
> > Proposal #2:
> > * keep #midonet
> > * merge #midonet-dev into #midonet
>

+1


> >
> > Proposal #3:
> > * replace both #midonet and #midonet-dev with #openstack-midonet
> >
> > I don't have any strong feelings for any of the proposals, but suggest
> > we go with proposal #2. Traffic in both #midonet and #midonet-dev is
> > rather low, so one channel should do - there's way busier OpenStack
> > channels out there. Furthermore, #midonet is shorter than
> > #openstack-midonet and already established. I also think people will
> > rather look in #midonet than #openstack-midonet if they're looking for
> > us.
> >
> > Thoughts?
> >
> > Cheers,
> > Sandro
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] IRC: ditch #midonet-dev?

2015-12-01 Thread Takashi Yamamoto
On Tue, Dec 1, 2015 at 7:08 PM, Antoni Segura Puimedon
 wrote:
>
>
> On Tue, Dec 1, 2015 at 10:59 AM, Ivan Kelly  wrote:
>>
>> +1 for #2
>
>
> PS: Beware of the top-posting! It makes vote counting harder ;-)
>
>>
>>
>> On Tue, Dec 1, 2015 at 10:57 AM, Sandro Mathys 
>> wrote:
>> > Hi,
>> >
>> > Our IRC channels have been neglected for a long time, and as a result
>> > we lost ownership of #midonet-dev, which is now owner by
>> > freenode-staff. In theory, it should be very easy to get ownership
>> > back, particularly since we still own #midonet. But in reality, it
>> > seems like none of the freenode staff feel responsible for these
>> > requests, so we still aren't owners after requesting it for 3 weeks
>> > already.
>> >
>> > Therefore, Toni Segura suggested we just ditch it and move to
>> > #openstack-midonet instead.
>> >
>> > However, several people have also said we don't need two channels,
>> > i.e. we should merge #midonet and #midonet-dev.
>> >
>> > So, here's three proposals:
>> >
>> > Proposal #1:
>> > * keep #midonet
>> > * replace #midonet-dev with #openstack-midonet
>> >
>> > Proposal #2:
>> > * keep #midonet
>> > * merge #midonet-dev into #midonet
>
>
> +1

+1

>
>>
>> >
>> > Proposal #3:
>> > * replace both #midonet and #midonet-dev with #openstack-midonet
>> >
>> > I don't have any strong feelings for any of the proposals, but suggest
>> > we go with proposal #2. Traffic in both #midonet and #midonet-dev is
>> > rather low, so one channel should do - there's way busier OpenStack
>> > channels out there. Furthermore, #midonet is shorter than
>> > #openstack-midonet and already established. I also think people will
>> > rather look in #midonet than #openstack-midonet if they're looking for
>> > us.
>> >
>> > Thoughts?
>> >
>> > Cheers,
>> > Sandro
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Install Time Too Long Ironic indevstack

2015-12-01 Thread Zhi Chang
Thanks for your reply.
But now I have hit a new problem. My console displays:
2015-12-01 10:06:20.591 | ++ timeout 15 sh -c 'while ! ssh -p 22 -o 
StrictHostKeyChecking=no -i /opt/stack/data/ironic/ssh_keys/ironic_key 
stack@10.250.11.127 echo success; do sleep 1; done'
2015-12-01 10:06:35.598 | ++ die 725 'server didn'\''t become ssh-able!'
2015-12-01 10:06:35.598 | ++ local exitcode=0
2015-12-01 10:06:35.599 | [Call Trace]
2015-12-01 10:06:35.600 | ./stack.sh:1325:run_phase
2015-12-01 10:06:35.600 | /opt/stack/devstack/functions-common:1746:source
2015-12-01 10:06:35.600 | 
/opt/stack/devstack/extras.d/50-ironic.sh:29:prepare_baremetal_basic_ops
2015-12-01 10:06:35.600 | 
/opt/stack/devstack/lib/ironic:816:configure_ironic_auxiliary
2015-12-01 10:06:35.600 | /opt/stack/devstack/lib/ironic:731:ironic_ssh_check
2015-12-01 10:06:35.600 | /opt/stack/devstack/lib/ironic:725:die
2015-12-01 10:06:35.604 | [ERROR] /opt/stack/devstack/lib/ironic:725 server 
didn't become ssh-able!
2015-12-01 10:06:36.607 | Error on exit





Could you give some advice?


Thx 
Zhi Chang
 
 
-- Original --
From:  "Pavlo Shchelokovskyy";
Date:  Tue, Dec 1, 2015 04:39 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [ironic] Install Time Too Long Ironic indevstack

 
Hi Zhi,

it seems that Ironic is building a new deploy ramdisk for you with
diskimage-builder. You can skip that step if you set


IRONIC_BUILD_DEPLOY_RAMDISK=False

in your local.conf; the bootstrap image will then be downloaded from
tarballs.o.o instead.


If you are interested, here is a sample Ironic section from the local.conf I use
to deploy Ironic with DevStack; it takes about 1000s to complete with a full
reclone:


https://github.com/pshchelo/stackdev/blob/master/local.conf.sample#L128-L145



(this is a sample config, so uncomment all options in this section).


Best regards,


On Tue, Dec 1, 2015 at 9:39 AM Zhi Chang  wrote:

Hi all,
I want to install Ironic in my devstack following the document
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html. During the
install process, my console displays:
2015-12-01 07:08:44.390 | + PACKAGES=
2015-12-01 07:08:44.391 | ++ find /tmp/in_target.d/install.d -maxdepth 1 -name 
'package-installs-*'
2015-12-01 07:08:44.393 | + '[' -n '' ']'
2015-12-01 07:08:44.393 | + package-installs-v2 --phase install.d 
/tmp/package-installs.json
2015-12-01 07:08:44.461 | Map file for ubuntu element does not exist.
2015-12-01 07:08:44.492 | Map file for ubuntu element does not exist.
2015-12-01 07:08:44.526 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.558 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.595 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.633 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.668 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.703 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.815 | Map file for deploy-tgtadm element does not exist.
2015-12-01 07:08:44.857 | Map file for deploy-tgtadm element does not exist.



I have been waiting for a very long time - is this expected? My devstack's local.conf
is at: http://paste.openstack.org/show/480462/


Could someone help me?


Thx
Zhi Chang

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-- 

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Install Openstack-Ansible

2015-12-01 Thread Sharma Swati6
 Hi All,

I have been following the link below for OpenStack-Ansible:
http://docs.rackspace.com/rpc/api/v11/bk-rpc-installation/content/ch-playbooks-openstack.html#sec-utility-container-overview

The ansible playbooks are running as of now and, as per my understanding, they
only install the basic (main) OpenStack components.

How do I install other OpenStack components like Designate, Ironic, etc.? Please
let me know the steps/links for installing any other customized components
through ansible playbooks; it would be of great help.

Thanks in advance,

Regards
 Swati Sharma
 System Engineer
 Tata Consultancy Services
 India
 Cell:- +91-9717238784
 Mailto: sharma.swa...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.  IT Services
Business Solutions
Consulting
 
 
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano]How to use Murano to transmit files to Mistral and execute scripts on Mistral

2015-12-01 Thread Stan Lagun
On Mon, Nov 30, 2015 at 5:07 AM, WANG, Ming Hao (Tony T) <
tony.a.w...@alcatel-lucent.com> wrote:

> For the object storage support, does Murano have any plan to support the
> auto-uploading function?


Murano has plans to support everything applications may need. It is just a
matter of priorities/schedule/time etc. It will happen much sooner with
your help/contributions.
Currently, AFAIK, there is no blueprint for this feature, so nobody is
developing it.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-12-01 Thread Sergey Kraynev
On 30 November 2015 at 18:19, Derek Higgins  wrote:

> Hi All,
>
> A few months tripleo switch from its devtest based CI to one that was
> based on instack. Before doing this we anticipated disruption in the ci
> jobs and removed them from non tripleo projects.
>
> We'd like to investigate adding it back to heat and ironic as these
> are the two projects where we find our ci provides the most value. But we
> can only do this if the results from the job are treated as voting.
>
> In the past most of the non tripleo projects tended to ignore the
> results from the tripleo job as it wasn't unusual for the job to broken for
> days at a time. The thing is, ignoring the results of the job is the reason
> (the majority of the time) it was broken in the first place.
> To decrease the number of breakages we are now no longer running
> master code for everything (for the non tripleo projects we bump the
> versions we use periodically if they are working). I believe with this
> model the CI jobs we run have become a lot more reliable, there are still
> breakages but far less frequently.
>
> What I proposing is we add at least one of our tripleo jobs back to both
> heat and ironic (and other projects associated with them e.g. clients,
> ironicinspector etc..), tripleo will switch to running latest master of
> those repositories and the cores approving on those projects should wait
> for a passing CI jobs before hitting approve. So how do people feel about
> doing this? can we give it a go? A couple of people have already expressed
> an interest in doing this but I'd like to make sure were all in agreement
> before switching it on.
>

+1 for doing it for Heat. I'd prefer the approach mentioned by Zane - make it
non-voting at first and enable voting later on, once it is stable enough.

Also, I have one more suggestion.
Could we set up this TripleO job in the following way:
 - with a separate score, independent from the score of the other Heat jobs

I prefer this approach because:
1. The other jobs will be executed faster (AFAIK, they take an hour or less),
while the TripleO job will take more time.
2. In the future it will allow rechecking only this particular job when we
want to test some fix.
3. I hope that the Sahara team will provide a similar job, and we will get two
really useful jobs that give one score, like a "third-party real
deployment test".



>
> thanks,
> Derek.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Dmitry Burmistrov to core reviewers of fuel-mirror

2015-12-01 Thread Mike Scherbakov
-1
I personally know Dmitry and respect his contributions to our package
management and the CI systems around it. Thanks to Dmitry, a lot of bad things
were prevented and many things were developed which work very well.

However, I don't particularly understand this proposal, and I'd like to get
some clarifications. Even though I'm not a core in the repo, and far from
being an expert in this area, I hope other active contributors will find my
questions and concerns reasonable.
I put -1 for the following reasons:

   1. The most important one - I'd suggest waiting until SCF [1], when we
   branch out stable/8.0 and reopen master for new features. Just risk
   management.
   2. Roman, are you part of the core team for fuel-mirror? Unless I'm missing
   some Fuel-specific policy, [2] states this clearly: "member of the existing
   core team can nominate".
   3. Even though the repo is fairly fresh, I'd question whether the numbers
   are good enough to become a core (17 reviews [3], 7 commits [4]).
   4. I don't quite understand how the repo is organized. I see a lot of Python
   code related to fuel-mirror itself and to packetary, which is used as
   fuel-mirror's core and is written and maintained mostly by Bulat [5].
   There also seem to be bash scripts related to Perestroika now, and I don't
   quite get how these things relate to each other, or whether we expect core
   reviewers to be merging code into both Perestroika and Packetary. Unless
   the mission of the repo and its code get clearer, I'd abstain from giving +1...


[1] https://wiki.openstack.org/wiki/Fuel/8.0_Release_Schedule
[2] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
[3] http://stackalytics.com/report/contribution/fuel-mirror/180
[4] git log --oneline --author="Dmitry Burmistrov" | wc -l
[5] https://github.com/openstack/fuel-mirror/blob/master/MAINTAINERS#L38

On Fri, Nov 27, 2015 at 6:29 AM Vitaly Parakhin 
wrote:

> +1
>
> пятница, 27 ноября 2015 г. пользователь Roman Vyalov написал:
>
>> Hi all,
>> Dmitry is doing great work and I hope our Perestroika build system will
>> become even better.
>> At the moment Dmitry is core developer in our Perestroika builds system.
>> But he not core reviewer in gerrit repository.
>> Fuelers, please vote for Dmitry
>>
>
>
> --
> Regards,
> Vitaly Parakhin.
> CI Engineer | Mirantis, Inc. | http://www.mirantis.com
> IRC: brain461 @ chat.freenode.net | Slack: vparakhin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Dmitry Burmistrov to core reviewers of fuel-mirror

2015-12-01 Thread Thomas Goirand
On 12/01/2015 09:25 AM, Mike Scherbakov wrote:
>  4. I don't quite understand how repo is organized. I see a lot of
> Python code regarding to fuel-mirror itself and packetary, which is
> used as fuel-mirrors core and being written and maintained mostly by
> Bulat [5]. There are seem to be bash scripts now related to
> Perestroika, and. I don't quite get how these things relate each to
> other, and if we expect core reviewers to be merging code into both
> Perestroika and Packetary? Unless mission of repo, code gets clear,
> I'd abstain from giving +1...

Also, why isn't packetary living in its own repository? It seems wrong
to me to have 2 python modules living in the same source repo, unless
they share the same egg-info. It feels weird to have to call setup.py
install twice in the resulting Debian source package. That's not how
things are done elsewhere, and I'd like to avoid special cases, just
because it's fuel...

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-01 Thread Bogdan Dobrelya
On 30.11.2015 13:03, Bogdan Dobrelya wrote:
> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
>>> Hi,
>>>
>>> let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
>>> or missing something.
>>>
>>> We have a set of top-scope manifests (called Fuel puppet tasks) that we use
>>> for OpenStack deployment. We execute those tasks with "puppet apply". Each
>>> task supposed to bring target system into some desired state, so puppet
>>> compiles a catalog and applies it. So basically, puppet catalog = desired
>>> system state.
>>>
>>> So we can compile* catalogs for all top-scope manifests in master branch
>>> and store those compiled* catalogs in fuel-library repo. Then for each
>>> proposed patch CI will compare new catalogs with stored ones and print out
>>> the difference if any. This will pretty much show what is going to be
>>> changed in system configuration by proposed patch.
>>>
>>> We were discussing such checks before several times, iirc, but we did not
>>> have right tools to implement such thing before. Well, now we do :) I think
>>> it could be quite useful even in non-voting mode.
>>>
>>> * By saying compiled catalogs I don't mean actual/real puppet catalogs, I
>>> mean sorted lists of all classes/resources with all parameters that we find
>>> during puppet-rspec tests in our noop test framework, something like
>>> standard puppet-rspec coverage. See example [0] for networks.pp task [1].
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] http://paste.openstack.org/show/477839/
>>> [1] 
>>> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp
>>
>> Thank you, Alex.
>> Yes, the composition layer is a top-scope manifests, known as a Fuel
>> library modular tasks [0].
>>
>> The "deployment data checks", is nothing more than comparing the
>> committed vs changed states of fixtures [1] of puppet catalogs for known
>> deployment paths under test with rspecs written for each modular task [2].
>>
>> And the *current status* is:
>> - the script for data layer checks now implemented [3]
>> - how-to is being documented here [4]
>> - a fix to make catalogs compilation idempotent submitted [5]
> 
> The status update:
> - the issue [0] is the data regression checks blocker and is only the
> Noop tests specific. It has been reworked to not use custom facts [1].
> New uuid will be still generated each time in the catalog, but the
> augeas ensures it will be processed in idempotent way. Let's make this
> change [2] to the upstream puppet-nova as well please.
> 
> [0] https://bugs.launchpad.net/fuel/+bug/1517915
> [1] https://review.openstack.org/251314
> [2] https://review.openstack.org/131710
> 
> - pregenerated catalogs for the Noop tests to become the very first
> committed state in the data regression process has to be put in the
> *separate repo*. Otherwise, the stackalytics would go mad as that would
> be a 600k-liner patch to an OpenStack project, which is the Fuel-library
> now :)
> 
> So, I'm planning to use the separate repo for the templates. Note, we
> could as well move the tests/noop/astute.yaml/ there. Thoughts?
> 
>> - and there is my WIP branch [6] with the initial committed state of
>> deploy data pre-generated. So, you can checkout, make any test changes
>> to manifests and run the data check (see the README [4]). It works for
>> me, there is no issues with idempotent re-checks of a clean committed
>> state or tests failing when unexpected.
>>
>> So the plan is to implement this noop tests extention as a non-voting CI
>> gate after I make an example workflow update for developers to the
>> Fuel wiki. Thoughts?

Folks, here is another example patch [0] with 4k lines of pure fixtures.
That is why we should not keep astute.yaml fixtures (or the pregenerated
catalogs for the data regression checks being discussed in this topic) in
the main fuel-library repo.

Instead, all such fixtures should be pulled in from an external repo, like
openstack/fuel-noop-fixtures (which would perhaps stay outside the list of
Big Tent projects), at the rake spec_prep stage of the noop tests.

[0] https://review.openstack.org/246358
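
Just to illustrate the idea (a rough sketch only; the file paths and helper
below are made up for the example and are not the actual Noop test code), the
data regression check essentially boils down to diffing the committed catalog
fixtures against freshly generated ones:

import difflib
import sys

def catalog_diff(committed_path, generated_path):
    # Both files are assumed to be sorted plain-text listings of
    # classes/resources with their parameters, one entry per line.
    with open(committed_path) as old, open(generated_path) as new:
        return list(difflib.unified_diff(old.readlines(), new.readlines(),
                                         fromfile=committed_path,
                                         tofile=generated_path))

diff = catalog_diff('fixtures/networks.pp.txt', 'build/networks.pp.txt')
if diff:
    sys.stdout.writelines(diff)
    sys.exit(1)  # a voting gate would fail here; a non-voting one just reports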

>>
>> [0]
>> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
>> [1]
>> https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
>> [2] https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
>> [3] https://review.openstack.org/240015
>> [4]
>> https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
>> [5] https://review.openstack.org/247989
>> [6] https://github.com/bogdando/fuel-library-1/commits/data_checks
>>
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Magnum] 'fixed_network' actually means 'fixed_subnet'. Which way is better to fix this?

2015-12-01 Thread Kai Qiang Wu
Hi HouMing,

I checked the heat templates again, and checked with the heat developers.
For heat stack creation right now, a stack cannot directly use an existing
network; it needs to create a new net and subnet.
So the name fixed_network is still needed - just like when you create a
network in neutron, you need both a network and a subnet.

I think that, from the magnum point of view, what can be controlled right now
is: 1. network-name-related properties, and 2. subnet-related properties,
like the CIDR.


So let fixed_network keep the role it should have,

and add a new fixed_subnet field for the subnet.


Of course, this can also be discussed in the weekly meeting if needed.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hou Ming Wang 
To: openstack-dev@lists.openstack.org
Date:   01/12/2015 05:49 pm
Subject:[openstack-dev] [Magnum] 'fixed_network' actually means
'fixed_subnet'. Which way is better to fix this?



Hi All,
I'm working on API related enhancement and encountered  the following
issue:
bay-creation return 400 Bad Request with a valid fixed-network in baymodel(
https://bugs.launchpad.net/magnum/+bug/1519748 )

The 'fixed_network' actually means 'fixed_network_cidr', or more precisely
'fixed_subnet'. There're 2 possible ways to fix this:
1. Rename the 'fixed_network' to 'fixed_subnet', in Baymodel DB, Baymodel
Object and MagnumClient.
2. Leave 'fixed_network' alone, add a new field 'fixed_subnet' to Baymodel,
and use the 'fixed_subnet' to take the place of current 'fixed_network'.

Please don't hesitate to give some inputs of which one is better, by
comment in the bug or reply this mail.

Thanks & Best regards,
HouMing Wang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-01 Thread Aleksandr Didenko
Hi,

> pregenerated catalogs for the Noop tests to become the very first
> committed state in the data regression process has to be put in the
> *separate repo*

+1 to that, we can put this new repo into .fixtures.yml

> note, we could as well move the tests/noop/astute.yaml/ there

+1 here too, astute.yaml files are basically configuration fixtures, we can
put them into .fixtures.yml as well

Regards,
Alex


On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya 
wrote:

> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
> >> Hi,
> >>
> >> let me try to rephrase this a bit and Bogdan will correct me if I'm
> wrong
> >> or missing something.
> >>
> >> We have a set of top-scope manifests (called Fuel puppet tasks) that we
> use
> >> for OpenStack deployment. We execute those tasks with "puppet apply".
> Each
> >> task supposed to bring target system into some desired state, so puppet
> >> compiles a catalog and applies it. So basically, puppet catalog =
> desired
> >> system state.
> >>
> >> So we can compile* catalogs for all top-scope manifests in master branch
> >> and store those compiled* catalogs in fuel-library repo. Then for each
> >> proposed patch CI will compare new catalogs with stored ones and print
> out
> >> the difference if any. This will pretty much show what is going to be
> >> changed in system configuration by proposed patch.
> >>
> >> We were discussing such checks before several times, iirc, but we did
> not
> >> have right tools to implement such thing before. Well, now we do :) I
> think
> >> it could be quite useful even in non-voting mode.
> >>
> >> * By saying compiled catalogs I don't mean actual/real puppet catalogs,
> I
> >> mean sorted lists of all classes/resources with all parameters that we
> find
> >> during puppet-rspec tests in our noop test framework, something like
> >> standard puppet-rspec coverage. See example [0] for networks.pp task
> [1].
> >>
> >> Regards,
> >> Alex
> >>
> >> [0] http://paste.openstack.org/show/477839/
> >> [1]
> >>
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp
> >
> > Thank you, Alex.
> > Yes, the composition layer is a top-scope manifests, known as a Fuel
> > library modular tasks [0].
> >
> > The "deployment data checks", is nothing more than comparing the
> > committed vs changed states of fixtures [1] of puppet catalogs for known
> > deployment paths under test with rspecs written for each modular task
> [2].
> >
> > And the *current status* is:
> > - the script for data layer checks now implemented [3]
> > - how-to is being documented here [4]
> > - a fix to make catalogs compilation idempotent submitted [5]
>
> The status update:
> - the issue [0] is the data regression checks blocker and is only the
> Noop tests specific. It has been reworked to not use custom facts [1].
> New uuid will be still generated each time in the catalog, but the
> augeas ensures it will be processed in idempotent way. Let's make this
> change [2] to the upstream puppet-nova as well please.
>
> [0] https://bugs.launchpad.net/fuel/+bug/1517915
> [1] https://review.openstack.org/251314
> [2] https://review.openstack.org/131710
>
> - pregenerated catalogs for the Noop tests to become the very first
> committed state in the data regression process has to be put in the
> *separate repo*. Otherwise, the stackalytics would go mad as that would
> be a 600k-liner patch to an OpenStack project, which is the Fuel-library
> now :)
>
> So, I'm planning to use the separate repo for the templates. Note, we
> could as well move the tests/noop/astute.yaml/ there. Thoughts?
>
> > - and there is my WIP branch [6] with the initial committed state of
> > deploy data pre-generated. So, you can checkout, make any test changes
> > to manifests and run the data check (see the README [4]). It works for
> > me, there is no issues with idempotent re-checks of a clean committed
> > state or tests failing when unexpected.
> >
> > So the plan is to implement this noop tests extention as a non-voting CI
> > gate after I make an example workflow update for developers to the
> > Fuel wiki. Thoughts?
> >
> > [0]
> >
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
> > [1]
> >
> https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
> > [2]
> https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
> > [3] https://review.openstack.org/240015
> > [4]
> >
> https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
> > [5] https://review.openstack.org/247989
> > [6] https://github.com/bogdando/fuel-library-1/commits/data_checks
> >
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

[openstack-dev] [all][oslo] On Python 3, request_id must by Unicode, not bytes

2015-12-01 Thread Victor Stinner

Hi,

The next oslo.context release including the following change (still 
under review) might break the voting Python 3 gate of your project:


   https://review.openstack.org/#/c/250731/

Please try to run Python 3 tests of your project with this change. I 
already ran the tests on Python 3 of the following projects:


- ceilometer
- cinder
- heat
- neutron
- nova


The type of the request_id attribute of oslo_context.RequestContext was
changed from Unicode (str) to bytes on Python 3 in April "to fix a unit
test". According to the author of the change, it was a mistake.


The request_id is a string that looks like 'req-83a6...': 'req-' followed by
a UUID. On Python 3, it's annoying to manipulate a bytes string. For
example, print(request_id) writes b'req-83a6...' instead of req-83a6...,
request_id.startswith('req-83a6...') raises a TypeError, etc.


I propose modifying the request_id type again to make it a Unicode string, to
fix oslo.log (so it no longer logs b'req-...', but req-...):


   https://review.openstack.org/#/c/250731/

It looks like it doesn't break services, only one specific unit test that is
duplicated in some projects. The unit test relies on the exact
request_id type; it looks like: request_id.startswith(b'req-'). I fixed
this unit test in Glance, Neutron and oslo.middleware to accept both bytes and
Unicode request_id.
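
As an illustration only (a sketch of the idea, not the actual patch), such a
test check can be made tolerant of both types like this:

def assert_is_request_id(request_id):
    # Accept both bytes (the current Python 3 behaviour) and Unicode.
    prefix = b'req-' if isinstance(request_id, bytes) else 'req-'
    assert request_id.startswith(prefix)

assert_is_request_id(b'req-83a6...')  # current bytes form on Python 3
assert_is_request_id(u'req-83a6...')  # proposed Unicode form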


I also searched for b'req-' in http://codesearch.openstack.org/ to find 
all projects relying on the exact request_id type.


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-01 Thread Neil Jerram
On 01/12/15 04:13, Russell Bryant wrote:
> On 11/30/2015 07:56 PM, Armando M. wrote:
>> As a result, there is quite an effort imposed on the PTL, the various
>> liaisons (release, infra, docs, testing, etc) and the core team to
>> help manage the existing relationships and to ensure that the picture
>> stays coherent over time. 
> For example, you mention "release" here, though IIUC, Kyle is handling
> releases for all of these sub-projects, right?  If so, Kyle, what do you
> think?  What's causing pain and how can we improve?
>
> "infra" - I take it this is about the Neutron infra liaisons having to
> ack every infra patch for all of these repos.  That does sound annoying.
>  It'd be nice if the lead for each driver or whatever could act as the
> infra liaison for jobs that only affect that repo.

I'd be happy to do that for networking-calico.  (Even though I'm not
sure quite how great the implications are, it sounds like the Right Thing.)

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Remove nova-network as a deployment option in Fuel?

2015-12-01 Thread Aleksey Kasatkin
It can be selected, but we have no deployment tests that are run on a
periodic basis, so we do not have any recent status.
Also, the Nailgun serializer for Nova-Network is currently not in use (it is
not tested how serialization will work in 8.0).


Aleksey Kasatkin


On Tue, Dec 1, 2015 at 9:36 AM, Mike Scherbakov 
wrote:

> Aleksey, can you clarify it? Why it can't be deployed? According to what I
> see at our fakeUI install [1], wizard allows to choose nova-network only in
> case if you choose vcenter.
>
> Do we support Neutron for vCenter already? If so - we could safely remove
> nova-network altogether.
>
> [1] http://demo.fuel-infra.org:8000/
>
> On Mon, Nov 30, 2015 at 4:27 AM Aleksey Kasatkin 
> wrote:
>
>> This remains unclear.
>> Now, for 8.0, the Environment with Nova-Network can be created but cannot
>> be deployed (and its creation is tested in UI integration tests).
>> AFAIC, we should either remove the ability of creation of environments
>> with Nova-Network in 8.0 or return it back into working state.
>>
>>
>> Aleksey Kasatkin
>>
>>
>> On Fri, Oct 23, 2015 at 3:42 PM, Sheena Gregson 
>> wrote:
>>
>>> As a reminder: there are no individual networking options that can be
>>> used with both vCenter and KVM/QEMU hypervisors once we deprecate
>>> nova-network.
>>>
>>>
>>>
>>> The code for vCenter as a stand-alone deployment may be there, but the
>>> code for the component registry (
>>> https://blueprints.launchpad.net/fuel/+spec/component-registry) is
>>> still not complete.  The component registry is required for a multi-HV
>>> environment, because it provides compatibility information for Networking
>>> and HVs.  In theory, landing this feature will enable us to configure DVS +
>>> vCenter and Neutron with GRE/VxLAN + KVM/QEMU in the same environment.
>>>
>>>
>>>
>>> While Andriy Popyvich has made considerable progress on this story, I
>>> personally feel very strongly against deprecating nova-network until we
>>> have confirmed that we can support *all current use cases* with the
>>> available code base.
>>>
>>>
>>>
>>> Are we willing to lose the multi-HV functionality if something prevents
>>> the component registry work from landing in its entirety before the next
>>> release?
>>>
>>>
>>>
>>> *From:* Sergii Golovatiuk [mailto:sgolovat...@mirantis.com]
>>> *Sent:* Friday, October 23, 2015 6:30 AM
>>> *To:* OpenStack Development Mailing List (not for usage questions) <
>>> openstack-dev@lists.openstack.org>
>>> *Subject:* Re: [openstack-dev] [Fuel] Remove nova-network as a
>>> deployment option in Fuel?
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> As far as I know neutron code for VCenter is ready. Guys are still
>>> testing it. Keep patience... There will be announce soon.
>>>
>>>
>>> --
>>> Best regards,
>>> Sergii Golovatiuk,
>>> Skype #golserge
>>> IRC #holser
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-01 Thread Neil Jerram
On 01/12/15 05:16, Doug Wiegley wrote:
> Part of the issue is that in a year, we added all the repos above. And
> all of said repos were all heading over to infra with the same newbie
> questions/mistakes. Not a bad thing in and of itself, but the sheer
> volume was causing a lot of infra load. So the infra liasions are
> meant to buffer that; exactly the opposite of splitting again. Now add
> that many repos again this year, and the problem doubles. The review
> overhead for centralizing this is quite small. The mentor overhead to
> avoid the repeated mistakes hitting infra is quite a bit higher, but
> that has to land somewhere, and still isn’t huge. 

On the other hand, it may also be that we're very unlikely to see so
many new projects again as we did in the first 'stadium' (half-)year.
So if this is a significant aspect of the pain that Armando is
describing, I suspect it would be an overreaction to make a big governance
change for it - as it's unlikely to happen again.

But actually I guess most of the pain is elsewhere, i.e. in the
cognitive aspect of the Neutron PTL logically needing to understand
everything in all of the stadium projects.

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-01 Thread Thierry Carrez
Armando M. wrote:
> [...]
> So my question is: would revisiting/clarifying the concept be due after
> some time we have seen it in action? I would like to think so.

I also think it's time to revisit this experience now that it's been
around for some time. On one hand the Neutron stadium allowed to
increase the development bandwidth by tackling bottlenecks in reviews
using smaller core review teams. On the other it's been difficult for
Neutron leadership to follow up on all those initiatives and the results
in terms of QA and alignment with "the OpenStack way" have been... mixed.

And this touches on the governance issue. By adding all those projects
under your own project team, you bypass the Technical Committee approval
that they behave like OpenStack projects and are produced by the
OpenStack community. The Neutron team basically vouches for all of them
to be on par. As far as the Technical Committee goes, they are all being
produced by the same team we originally blessed (the Neutron project team).

That is perfectly fine with me, as long as the Neutron team feels
confident they can oversee every single one of them and vouch for every
single one of them. If the Neutron PTL feels the core Neutron leadership
just can't keep up, I think we have a problem we need to address, before
it taints the Neutron project team itself.

One solution is, like you mentioned, to make some (or all) of them
full-fledged project teams. Be aware that this means the TC would judge
those new project teams individually and might reject them if we feel
the requirements are not met. We might want to clarify what happens then.

Thanks for raising this thread!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] ValueError: No JSON object could be decoded

2015-12-01 Thread OpenStack Mailing List Archive

Link: https://openstack.nimeyo.com/67158/?show=67158#q67158
From: kunciil 

TASK: [ceph | Fetching Ceph keyrings] * 
fatal: [gridppcl11 -> gridppcl13] => Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ansible/runner/init.py", line 586, in executor
execrc = self.executorinternal(host, newstdin)
  File "/usr/lib/python2.7/site-packages/ansible/runner/init.py", line 789, in _executorinternal
return self.executorinternalinner(host, self.modulename, self.moduleargs, inject, port, complexargs=complexargs)
  File "/usr/lib/python2.7/site-packages/ansible/runner/init.py", line 1101, in _executorinternalinner
data['changed'] = utils.checkconditional(changedwhen, self.basedir, inject, failonundefined=self.erroronundefinedvars)
  File "/usr/lib/python2.7/site-packages/ansible/utils/init.py", line 265, in checkconditional
conditional = template.template(basedir, conditional, inject, failonundefined=failonundefined)
  File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 124, in template
varname = templatefromstring(basedir, varname, templatevars, failonundefined)
  File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 382, in templatefromstring
res = jinja2.utils.concat(rf)
  File "", line 9, in root
  File "/usr/lib64/python2.7/json/init.py", line 338, in loads
return _defaultdecoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 365, in decode
obj, end = self.rawdecode(s, idx=w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 383, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

FATAL: all hosts have already failed -- aborting



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-12-01 Thread Steven Hardy
On Mon, Nov 30, 2015 at 06:07:44PM -0500, Zane Bitter wrote:
> On 30/11/15 12:51, Ruby Loo wrote:
> >
> >
> >On 30 November 2015 at 10:19, Derek Higgins  >> wrote:
> >
> >Hi All,
> >
> > A few months tripleo switch from its devtest based CI to one
> >that was based on instack. Before doing this we anticipated
> >disruption in the ci jobs and removed them from non tripleo projects.
> >
> > We'd like to investigate adding it back to heat and ironic as
> >these are the two projects where we find our ci provides the most
> >value. But we can only do this if the results from the job are
> >treated as voting.
> >
> >
> >What does this mean? That the tripleo job could vote and do a -1 and
> >block ironic's gate?
> >
> >
> > In the past most of the non tripleo projects tended to ignore
> >the results from the tripleo job as it wasn't unusual for the job to
> >broken for days at a time. The thing is, ignoring the results of the
> >job is the reason (the majority of the time) it was broken in the
> >first place.
> > To decrease the number of breakages we are now no longer
> >running master code for everything (for the non tripleo projects we
> >bump the versions we use periodically if they are working). I
> >believe with this model the CI jobs we run have become a lot more
> >reliable, there are still breakages but far less frequently.
> >
> >What I proposing is we add at least one of our tripleo jobs back to
> >both heat and ironic (and other projects associated with them e.g.
> >clients, ironicinspector etc..), tripleo will switch to running
> >latest master of those repositories and the cores approving on those
> >projects should wait for a passing CI jobs before hitting approve.
> >So how do people feel about doing this? can we give it a go? A
> >couple of people have already expressed an interest in doing this
> >but I'd like to make sure were all in agreement before switching it on.
> >
> >This seems to indicate that the tripleo jobs are non-voting, or at least
> >won't block the gate -- so I'm fine with adding tripleo jobs to ironic.
> >But if you want cores to wait/make sure they pass, then shouldn't they
> >be voting? (Guess I'm a bit confused.)
> 
> +1
> 
> I don't think it hurts to turn it on, but tbh I'm uncomfortable with the
> mental overhead of a non-voting job that I have to manually treat as a
> voting job. If it's stable enough to make it a voting job, I'd prefer we
> just make it voting. And if it's not then I'd like to see it be made stable
> enough to be a voting job and then make it voting.

I don't think anyone is saying it's never going to be voting; it's just
an up-front recognition that right now it sometimes isn't reliable enough
because there are significant outages, which TripleO will get blamed for,
even if the cause is a regression in another project (maybe one of those
helpfully ignoring non-voting jobs ;)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Install Openstack-Ansible

2015-12-01 Thread Sharma Swati6
 Hi Major,

Thanks for your prompt response.

I will plan to create a spec for Designate.

However, I just want to know: if I have to implement this in openstack-ansible,
or, for that matter, add any new component to it, are there any steps or
guidelines to be followed?
For example, do I first create containers and then mention/add them in the
config files, etc.? I went through
http://docs.openstack.org/developer/openstack-ansible/developer-docs/extending.html
but it is not very self-explanatory.

If you could share such steps, it would be very helpful and I can begin with
this and contribute soon.


Thanks & Regards
 Swati Sharma
 System Engineer
 Tata Consultancy Services
 Ground to 8th Floors, Building No. 1 & 2,
 Skyview Corporate Park, Sector 74A,NH 8
 Gurgaon - 122 004,Haryana
 India
 Cell:- +91-9717238784
 Mailto: sharma.swa...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.  IT Services
Business Solutions
Consulting
 
 

-Major Hayden  wrote: -
To: openstack-dev@lists.openstack.org
From: Major Hayden 
Date: 12/01/2015 06:37PM
Subject: Re: [openstack-dev] Install Openstack-Ansible

On Tue, 2015-12-01 at 13:41 +0530, Sharma Swati6 wrote:
> The ansible playbooks are running as of now and as per my
> understanding, it is only installing the basic (main) openstack
> components.
> 
> How to install other Openstack components like Designate, Ironic,
> etc. Please let me know the steps/links for installing any other
> customized components through ansible-playbooks, it will be of great
> help.

Hello Sharma,

Thanks for the question about openstack-ansible!  Designate and Ironic
aren't currently included in the standard openstack-ansible roles, but
we're always looking for help in getting things like this done.

There's already a spec open[1] for an Ironic role within openstack-
ansible and I've heard talk about Designate from time to time.  If
you're interested in doing this work, you can create a spec using the
template[2] from the openstack-ansible-specs repository.

If you have more questions, feel free to reply on this thread or hop
into #openstack-ansible on Freenode.

[1] 
http://specs.openstack.org/openstack/openstack-ansible-specs/specs/mitaka/role-ironic.html
[2] 
https://github.com/openstack/openstack-ansible-specs/blob/master/specs/template.rst

-- 
Major Hayden



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #61

2015-12-01 Thread Emilien Macchi


On 11/29/2015 03:35 PM, Emilien Macchi wrote:
> Hello!
> 
> Here's an initial agenda for our weekly meeting, Tuesday at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151201

We did our (short) meeting, you can read notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-12-01-15.00.html


See you next week!
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-01 Thread Boris Bobrov
On Tuesday 01 December 2015 08:50:14 Lance Bragstad wrote:
> On Tue, Dec 1, 2015 at 6:05 AM, Sean Dague  wrote:
> > 
> > From an interop perspective, this concerns me a bit. My
> > understanding is that Apache is specifically needed for
> > Federation. Federation is the norm that we want for environments
> > in the future.
> 
> (On a side note from removing eventlet, but related to what Sean
> said)
> 
> A spec has been proposed to make keystone a fully fledged saml2
> provider [0]. Depending on how we feel about implementing and
> maintaining something like this, we'd be able to use federation
> within uWSGI (we would no longer *require* Apache for federation).
> Only bringing this up because it would also solve the
> two-reference-architectures problem. A uWSGI reference architecture
> could be used for deploying keystone, regardless if you want
> federation or not.
> 
> We probably wouldn't get a uWSGI reference architecture until after
> that is all fleshed out. This is assuming the spec is accepted and
> implemented in Mitaka.
> 
> [0] https://review.openstack.org/#/c/244694/5

I don't get why we talk about uwsgi in the context of federation. uwsgi is
an application server; Apache is a web server. We can still use uwsgi
with apache - there are several modules for that:
https://uwsgi-docs.readthedocs.org/en/latest/Apache.html

Today we require apache for federation and support mod_wsgi (which is
tightly integrated with apache) as the app server. We can still require
Apache and support uwsgi as the app server, without any changes to
federation.

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] need help in translating sql query to sqlalchemy query

2015-12-01 Thread Venkata Anil

Thanks Sean. I will check that.

Meanwhile, I tried this and it is working:

port1 = orm.aliased(models_v2.Port, name="port1")
port2 = orm.aliased(models_v2.Port, name="port2")
router_intf_qry = context.session.query(RouterPort.router_id).join(
    (port1, port1.id == RouterPort.port_id),
    (port2, port2.device_id == RouterPort.router_id)
).filter(
    port1.network_id == int_net_id,
    port2.network_id == ext_net_id
).distinct()

for router in router_intf_qry:
    router_id = router.router_id

Thanks
Anil Venkata

On 12/01/2015 06:35 PM, Sean M. Collins wrote:

Consult the API:

http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.join

In fact, there is already one join happening a couple lines above your
change:

https://github.com/openstack/neutron/blob/stable/liberty/neutron/db/l3_db.py#L800

Most likely, you will also need to use the aliased() function - since
there are some examples that are similar to what you are trying to do -
check out the "Joins to a Target with an ON Clause" section in the first
link.

http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.aliased




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS7 Merging Plan

2015-12-01 Thread Aleksandr Didenko
Hi,

In your plan you need to have an item about updating the ISO on the
fuel-library CI gates.

Regards,
Alex

On Tue, Dec 1, 2015 at 4:11 PM, Sergii Golovatiuk 
wrote:

> Hi,
>
> On Tue, Dec 1, 2015 at 3:58 PM, Dmitry Teselkin 
> wrote:
>
>> Hello,
>>
>> We've almost got a green BVT on the custom CentOS7 ISO, and it seems it's
>> time to discuss the plan for how this feature could be merged.
>>
>> This is not the only feature in the queue. Unfortunately, almost any
>> other feature will be broken if merged after CentOS7, so it was decided
>> to merge our changes last.
>>
>> This is not an official announcement, rather a notification letter to
>> start a discussion and find any objections.
>>
>> So the plan is:
>>
>> * merge all features that are going to be merged before Thursday, Dec 3
>>
>
> As far as I know some features won't be ready. They will require FFE or
> should be moved to 9.0
>
>
>> * call for merge freeze starting at Dec 3, due Dec 7
>> * rebase all CentOS7-related patchsets and resolve any conflicts with
>>   merged code (Dec 3)
>> * build custom ISO, pass BVT (and other tests) (Dec 3)
>> * merge all CentOS7-related patchsets at once (Dec 4)
>>
>
> I guess the CI environment should be rebuilt
>
> * build an ISO and pass BVT again (Dec 4)
>> * run additional tests during the weekend (Dec 5, 6) to be sure that the
>>   ISO is good enough
>
>
>> According to this plan on Monday, Dec 7 we should either get CentOS7
>> based ISO, or revert all incompatible changes.
>>
>> --
>> Thanks,
>> Dmitry Teselkin
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-12-01 Thread Vladimir Kozhukalov
Fox, this is one of the reasons. There are others listed in my original
email.

Guys, I have prepared two patches [1] and [2] that can properly deploy the
master node. I still have not received any positive feedback on the spec [3].
My intention was to wait until the CentOS 7 feature was merged and then to
rebase my patches, but now it seems too late. FF is tomorrow, so my suggestion
is to make these two patches fully compatible with master, so that it is
possible to build the ISO both with and without Docker containers. Then we can
remove Docker after SCF.

[1] https://review.openstack.org/#/c/248649/
[2] https://review.openstack.org/#/c/248650/
[3] https://review.openstack.org/#/c/248814/

Vladimir Kozhukalov

On Mon, Nov 30, 2015 at 11:19 PM, Fox, Kevin M  wrote:

> Is that because of trying to shoehorn Docker containers into RPMs, though?
> I've never seen anyone else try to use them that way. Maybe they belong in
> a docker repo like the hub or something openstack.org hosted instead?
>
> Thanks,
> Kevin
> 
> From: Igor Kalnitsky [ikalnit...@mirantis.com]
> Sent: Friday, November 27, 2015 1:22 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Fuel] Getting rid of Docker containers on
> the Fuel master node
>
> Hey Vladimir,
>
> Thanks for your effort on this job. Unfortunately we don't have much
> time left and FF is coming, so I'm afraid it has become unrealistic to
> make it before FF, especially if it takes 2-3 days to fix the system
> tests.
>
>
> Andrew,
>
> I had the same opinion some time ago, but it changed because nobody has
> put effort into fixing our Docker experience. Moreover, Docker is still
> buggy and we have plenty of issues, such as stale mount points. Besides,
> I don't like our upgrade procedure:
>
> 1. Install fuel-docker-images.rpm
> 2. Load images from installed tarball to Docker
> 3. Re-create containers from new images
>
> Steps (2) and (3) are manual and break the idea of a "yum update"
> delivery approach.
>
> Thanks,
> Igor
>
> On Wed, Nov 25, 2015 at 9:43 PM, Andrew Woodward  wrote:
> > 
> > IMO, removing the docker containers is a mistake vs. fixing them and
> > using them properly. They provide an isolation that is necessary (and
> > that we mangle) to make services portable and scalable. We really should
> > sit down and document how we really want all of the services to interact
> > before we rip the containers out.
> >
> > I agree, the way we use containers now is still quite wrong, and brings
> > us some negative value, but I'm not sold on stripping them out now just
> > because they no longer bring the same upgrade value as before.
> > 
> >
> > My opinion aside, we are rushing into this far too late in the feature
> > cycle.
> > Prior to moving forward with this, we need a good QA plan; the spec is
> > quite light on that and must receive review and approval from QA. This
> > needs to include an actual testing plan.
> >
> > From the implementation side, we are pushing up against the FF deadline.
> > We need to document what our time objectives are for this and when we
> > will no longer consider this for 8.0.
> >
> > Lastly, for those that are +1 on the thread here, please review and
> > comment on the spec. It's received almost no attention for something
> > with such a large impact.
> >
> > On Tue, Nov 24, 2015 at 4:58 PM Vladimir Kozhukalov
> >  wrote:
> >>
> >> The status is as follows:
> >>
> >> 1) Fuel-main [1] and fuel-library [2] patches can deploy the master node
> >> w/o docker containers
> >> 2) I've not built experimental ISO yet (have been testing and debugging
> >> manually)
> >> 3) There are still some flaws (need better formatting, etc.)
> >> 4) Plan for tomorrow is to build experimental ISO and to begin fixing
> >> system tests and fix the spec.
> >>
> >> [1] https://review.openstack.org/#/c/248649
> >> [2] https://review.openstack.org/#/c/248650
> >>
> >> Vladimir Kozhukalov
> >>
> >> On Mon, Nov 23, 2015 at 7:51 PM, Vladimir Kozhukalov
> >>  wrote:
> >>>
> >>> Colleagues,
> >>>
> >>> I've started working on the change. Here are two patches (fuel-main [1]
> >>> and fuel-library [2]). They are not ready for review (they still don't
> >>> work and are under active development). Changes are not going to be
> >>> huge. Here is a spec [3]. I will keep the status up to date in this ML
> >>> thread.
> >>>
> >>>
> >>> [1] https://review.openstack.org/#/c/248649
> >>> [2] https://review.openstack.org/#/c/248650
> >>> [3] https://review.openstack.org/#/c/248814
> >>>
> >>>
> >>> Vladimir Kozhukalov
> >>>
> >>> On Mon, Nov 23, 2015 at 3:35 PM, Aleksandr Maretskiy
> >>>  wrote:
> 
> 
> 
>  On Mon, Nov 23, 2015 at 2:27 PM, Bogdan Dobrelya
>   wrote:
> >
> > On 23.11.2015 12:47, Aleksandr Maretskiy 

Re: [openstack-dev] [Fuel] CentOS7 Merging Plan

2015-12-01 Thread Sergii Golovatiuk
Hi,

On Tue, Dec 1, 2015 at 3:58 PM, Dmitry Teselkin 
wrote:

> Hello,
>
> We've almost got a green BVT on the custom CentOS7 ISO, and it seems it's
> time to discuss the plan for how this feature could be merged.
>
> This is not the only feature in the queue. Unfortunately, almost any
> other feature will be broken if merged after CentOS7, so it was decided
> to merge our changes last.
>
> This is not an official announcement, rather a notification letter to
> start a discussion and find any objections.
>
> So the plan is:
>
> * merge all features that are going to be merged before Thursday, Dec 3
>

As far as I know some features won't be ready. They will require FFE or
should be moved to 9.0


> * call for merge freeze starting at Dec 3, due Dec 7
> * rebase all CentOS7-related patchsets and resolve any conflicts with
>   merged code (Dec 3)
> * build custom ISO, pass BVT (and other tests) (Dec 3)
> * merge all CentOS7-related patchsets at once (Dec 4)
>

I guess the CI environment should be rebuilt

* build an ISO and pass BVT again (Dec 4)
> * run additional tests during the weekend (Dec 5, 6) to be sure that the
>   ISO is good enough


> According to this plan on Monday, Dec 7 we should either get CentOS7
> based ISO, or revert all incompatible changes.
>
> --
> Thanks,
> Dmitry Teselkin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Milestone 13.0.0.0b1 (aka. Mitaka-1) to be cut tomorrow

2015-12-01 Thread Sylvain Bauza

Hi,

As you probably know, the Mitaka Release schedule for Nova defines a 
first milestone between Dec 1-3 [1]


We had a hard dependency on translations and reno changes [2], but now 
everything has landed in tree.


As a consequence, the release patch for cutting the milestone is provided 
right after the reno changes [3], unless we identify a new blocker for the 
milestone that hasn't been raised at the moment I write this e-mail.


In case you identify a bugfix that sounds critical for the release (like 
a regression fix), please raise your hand on the #openstack-nova IRC channel 
before tomorrow, Dec 2nd, 1200 UTC.

After that time, we'll ask for [3] to be merged.

Thanks,
-Sylvain


[1] https://wiki.openstack.org/wiki/Nova/Mitaka_Release_Schedule

[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080692.html


[3] https://review.openstack.org/#/c/251805/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][neutron-*] Notice! pylint breakage

2015-12-01 Thread Gary Kotton
Should we not be updating this in the requirements project?

From: Paul Michali
Reply-To: OpenStack List
Date: Tuesday, December 1, 2015 at 4:50 PM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][neutron-*] Notice! pylint breakage

Some additional info...

astroid upstream folks are going to try to push for pylint 1.4.5 that pins to 
astroid 1.3.8. If that happens, we could just pin pylint at 1.4.5. Ref:  
https://bitbucket.org/logilab/astroid/issues/275/140-and-141-fail-to-work-with-pylint-144

It sounds like we don't need to pin versions for Juno. My attempt seems to be 
failing tests (badly) https://review.openstack.org/#/c/251865.

The neutron kilo patch seems ready to go, once the infra commit is upstreamed: 
https://review.openstack.org/#/c/251827/

Infra commits are: https://review.openstack.org/#/c/251599/ (juno) and 
https://review.openstack.org/#/c/251600/ (kilo). Need to (at least) get kilo 
upstreamed.

For master, LB and VPN repos are broken. I can see these options

  1.  implement pep8 constraints (takes time)
  2.  ignore pylint errors/warnings until do #1  (less coverage)
  3.  disable pylint until do #1 (no coverage)
  4.  Fix the pylint 1.5.0 errors (will be an issue when we go to pylint 1.4.4 as 
part of #1)

I'm thinking #2 is the least intrusive, and will try that for VPN repo.

BTW: FW repo does not do 'pylint' as part of pep8 (essentially #3 option) 
currently, so they don't see this carnage.


Regards,

Paul Michali (pc_m)

On Tue, Dec 1, 2015 at 6:44 AM Paul Michali wrote:
I found a problem yesterday running pep8 locally in neutron-lbaas. After 
discussing with the LBaaS team, we identified that there is a problem with pylint. 
The same issues were seen when checking the neutron and neutron-vpnaas repos 
(need to check neutron-fwaas). There are two issues.

First, the new version of pylint (1.5.0) is finding additional warnings/errors. 
This will require updates to code, to be compliant with the new pylint 
(assuming we move up to that version). I did see one case where the changes 
needed for pylint 1.5.0 are backward incompatible with pylint 1.4.4, which 
raises another concern (how to migrate to newer pylint).

Second, pylint uses the astroid package, which was recently updated from 1.3.8 
to 1.4.1. When used with pylint 1.4.4 (the version currently used in neutron), 
we get all sorts of false positive errors. For example, use of each imported 
module shows an "undefined variable" error.

In neutron-vpnaas, this issue is worse, as pylint and astroid are not pinned 
to any version, so they can update at unexpected times.

After talking to infra, here are the proposed solutions...

Neutron - In pretty good shape

The pep8-constraints job currently works, as global requirements pin pylint to 
1.4.4 and upper constraints pin astroid to 1.3.8 - both work together well. 
Locally, one needs to use pep8-constraints and not pep8, as the latter will 
pull in the latest astroid (1.4.1) and cause havoc.

Infra is pushing up two commits to global requirements to pin astroid to 1.3.8 
for kilo (251600) and juno (251599). We need to pin astroid to 1.3.8 in 
test-requirements.txt for those branches. I'll start on that in a few minutes.

LBaaS/VPNaaS/FWaas? - Needs constraints

For pep8 jobs, these repos need to use the new pep8-constraints style job, and 
tox.ini should also use the same target instead of pep8. Like neutron, the kilo 
and juno branches need to pin astroid to 1.3.8.

neutron-vpnaas should also pin pylint to 1.4.4 in test-requirements.txt, to 
prevent it from floating to 1.5.0.
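
To make that concrete, the pins being discussed would look roughly like the
following in test-requirements.txt (the exact comparators here are my own
illustration, not a tested change):

    astroid==1.3.8  # keep astroid off 1.4.x, which breaks pylint 1.4.4
    pylint==1.4.4   # keep pylint from floating up to 1.5.0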

All repos will need a plan for updating code to conform to pylint 1.5.0, 
if/when we upgrade to this.

Note: I have not looked at neutron-fwaas, so we need to confirm the above issue 
is present, but it likely is (even if there is not a current pylint failure).

One concern I have, which infra hasn't figured out how to address, is 
how we'll update to pylint 1.5.0. If we are using pep8-constraints, the 
constraints file is in a different repo. Updating to, say, pylint 1.5.0 and 
astroid 1.4.1 would cause breakage until both the neutron* repos and the 
requirements repo are updated. This is complicated by the backward incompatible 
changes needed.

Thanks to blogan, ZZelle, fungi, anteaya, lifeless, ajmiller and others for 
helping investigate and come up with the approach on this issue!

Regards,

Paul Michali (pc_m)



Re: [openstack-dev] [kolla] ValueError: No JSON object could be decoded

2015-12-01 Thread Michał Jastrzębski
Hello,

First of all, this is not a mailing list for debugging; I invite you,
sir/madam, to join us on #kolla and ask questions there.
Second, this doesn't help a whole lot. We can get to the source of the
problem, but we need your help with that, so let me invite you again to our
#kolla channel.
Third, it would be nice to at least say hello at the beginning of your
email, just common courtesy...

I do hope you will follow my advice about using openstack-dev, and we will
be able to provide you assistance during your adventure with our wonderful
project.

Kind regards,
Michal Jastrzebski


On 1 December 2015 at 04:53, OpenStack Mailing List Archive <
cor...@gmail.com> wrote:

> Link: https://openstack.nimeyo.com/67158/?show=67158#q67158
> From: kunciil 
>
> TASK: [ceph | Fetching Ceph keyrings]
> *
> fatal: [gridppcl11 -> gridppcl13] => Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 586, in _executor
>     exec_rc = self._executor_internal(host, new_stdin)
>   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 789, in _executor_internal
>     return self._executor_internal_inner(host, self.module_name, self.module_args, inject, port, complex_args=complex_args)
>   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 1101, in _executor_internal_inner
>     data['changed'] = utils.check_conditional(changed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars)
>   File "/usr/lib/python2.7/site-packages/ansible/utils/__init__.py", line 265, in check_conditional
>     conditional = template.template(basedir, conditional, inject, fail_on_undefined=fail_on_undefined)
>   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 124, in template
>     varname = template_from_string(basedir, varname, templatevars, fail_on_undefined)
>   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 382, in template_from_string
>     res = jinja2.utils.concat(rf)
>   File "<template>", line 9, in root
>   File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
>     return _default_decoder.decode(s)
>   File "/usr/lib64/python2.7/json/decoder.py", line 365, in decode
>     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>   File "/usr/lib64/python2.7/json/decoder.py", line 383, in raw_decode
>     raise ValueError("No JSON object could be decoded")
> ValueError: No JSON object could be decoded
>
> FATAL: all hosts have already failed -- aborting
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
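
For what it's worth, the last frames above are just Python 2.7's json module
being handed empty input: the changed_when/failed_when expression feeds the
command output to the JSON decoder (presumably via something like the
from_json filter), so if the keyring fetch produced no output it fails exactly
like this. A tiny illustration of the failure mode (my reconstruction, not
code from the playbook):

    import json

    stdout = ""         # hypothetical: the fetch command returned nothing
    json.loads(stdout)  # ValueError: No JSON object could be decoded (Python 2.7)
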
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] need help in translating sql query to sqlalchemy query

2015-12-01 Thread Sean M. Collins
On Tue, Dec 01, 2015 at 10:22:41AM EST, Venkata Anil wrote:
> Thanks Sean. I will check that.
> 
> Meanwhile I tried this and it is working
> 
> port1 = orm.aliased(models_v2.Port, name="port1")
> port2 = orm.aliased(models_v2.Port, name="port2")
> router_intf_qry =
> context.session.query(RouterPort.router_id).join((port1,
> port1.id==RouterPort.port_id), (port2,
> port2.device_id==RouterPort.router_id)).filter(port1.network_id==int_net_id,
> port2.network_id==ext_net_id).distinct()
> 
>for router in router_intf_qry:
> router_id =router.router_id
> 

That looks pretty close. My only suggestion would be to try and see if
you can just alias it once, instead of twice. Basically see if it is
possible to replace all the port1 references with "models_v2.Port"
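
Something along these lines (an untested sketch of the single-alias variant,
keeping the second alias only for the external-network join):

    port2 = orm.aliased(models_v2.Port, name="port2")
    router_intf_qry = context.session.query(RouterPort.router_id).join(
        (models_v2.Port, models_v2.Port.id == RouterPort.port_id),
        (port2, port2.device_id == RouterPort.router_id)
    ).filter(
        models_v2.Port.network_id == int_net_id,
        port2.network_id == ext_net_id
    ).distinct()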

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Configuration management for Fuel 7.0

2015-12-01 Thread Roman Sokolkov
Hello, folks.

We need some kind of CM for Fuel 7.0. Otherwise a new project with 800+ nodes
will be nearly impossible to support. Customers always want to change
something.

In our opinion, there are two major approaches for CM:

#1 Independent CM (Puppet master, Chef, Ansible, whatever)
#2 Fuel-based CM

Solution for #2
--

Fuel has all the info about the configuration, so we've tried to
unlock "Settings" [0] and push the "deploy" button.

Major findings:

* Task idempotency. Looks like most of the tasks are idempotent.
We've skipped 3 tasks on the controller and were able to get NO downtime
for Horizon and "nova list". BTW, deeper QA is required.

* Standard changes. The operator can change parameters via the WebUI, CLI or
API. For example, I was able to deploy Sahara. Unfortunately this is not
foolproof; I mean some changes can lead to a broken cloud...

* Non-standard changes. Any other changes can be done with plugins.
We can modify plugin tasks and scripts (all except UI flags) and then just
do "--update" + "--sync". BTW, we can change the UI for a particular env via
the API by modifying "clusters/X/attributes" (see the sketch below).
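
A rough illustration of that last point; the master node address, token
handling and payload layout here are assumptions for the sake of the example,
not a documented recipe:

    import requests

    FUEL_API = "http://10.20.0.2:8000/api"              # assumed master node
    headers = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}   # hypothetical token

    # Read, tweak and write back the editable attributes of cluster 1.
    attrs = requests.get(FUEL_API + "/clusters/1/attributes",
                         headers=headers).json()
    # ... modify attrs["editable"] as needed ...
    requests.put(FUEL_API + "/clusters/1/attributes",
                 headers=headers, json=attrs)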

Conclusion
--

- This works (we have a service under cron that runs the tasks) [1]
- NOT ready for production (in its current state)
- This requires much deeper testing


I want to hear your thoughts about the approach above.
What are the current status/plans for CM? I saw this discussion [2].

References
--

[0]
https://github.com/rsokolkov/fuel-web/commit/366daaa2eb874c8e54c2d39be475223937cd317d
[1]
https://docs.google.com/presentation/d/12kkh1hu4ZrY9S6XXsY_HWaesFwESfxbl5czUwde8isM/edit#slide=id.p
[2] https://etherpad.openstack.org/p/lcm-use-cases

-- 
Roman Sokolkov,
Deployment Engineer,
Mirantis, Inc.
Skype rsokolkov,
rsokol...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

