Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Vikram Choudhary
+1 for <=80 chars. It will be uniform with the existing coding style.

On Fri, Sep 25, 2015 at 8:22 PM, Dmitry Tantsur  wrote:

> On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:
>
>> Hi all,
>>
>> releases are approaching, so it’s the right time to start some bike
>> shedding on the mailing list.
>>
>> Recently I was called out several times [1][2] for violating our commit
>> message guideline [3], which says: "Subsequent lines should be wrapped
>> at 72 characters."
>>
>> I agree that very long commit message lines can be bad, e.g. if they are
>> 200+ chars. But <= 79 chars? I don’t think so. Especially since we have
>> a 79-char limit for the code.
>>
>> We had a check for the line lengths in openstack-dev/hacking before but
>> it was killed [4] as per openstack-dev@ discussion [5].
>>
>> I believe commit message lines of <=80 chars are absolutely fine and
>> should not get -1 treatment. I propose to raise the limit for the guideline
>> on wiki accordingly.
>>
>
> +1, I never understood it actually. I know some folks even question 80
> chars for the code, so having 72 chars for commit messages looks a bit
> weird to me.
>
>
>> Comments?
>>
>> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
>> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
>> [3]:
>> https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
>> [4]: https://review.openstack.org/#/c/142585/
>> [5]:
>> http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
>>
>> Ihar
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Doug Hellmann
git tools such as git log and git show indent the commit message in
their output, so you don't actually have the full 79/80 character width
to work with. That's where the 72 comes from.
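
To see the arithmetic in code form, a small illustration (the sample text
is made up) that wraps a body paragraph at 72 columns so it still fits an
80-column terminal after git log's four-space indent:

    import textwrap

    INDENT = 4   # "git log" prefixes message lines with four spaces
    body = ("This change reworks the scheduler retry logic so that failed "
            "hosts are skipped without re-querying the database.")
    for line in textwrap.wrap(body, width=72):
        # 4 + 72 = 76 columns, comfortably inside an 80-column terminal.
        print(" " * INDENT + line)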

Doug

Excerpts from Vikram Choudhary's message of 2015-09-25 20:25:41 +0530:
> +1 for <=80 chars. It will be uniform with the existing coding style.
> 
> On Fri, Sep 25, 2015 at 8:22 PM, Dmitry Tantsur  wrote:
> 
> > On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:
> >
> >> Hi all,
> >>
> >> releases are approaching, so it’s the right time to start some bike
> >> shedding on the mailing list.
> >>
> >> Recently I was called out several times [1][2] for violating our commit
> >> message guideline [3], which says: "Subsequent lines should be wrapped
> >> at 72 characters."
> >>
> >> I agree that very long commit message lines can be bad, e.g. if they are
> >> 200+ chars. But <= 79 chars? I don’t think so. Especially since we have
> >> a 79-char limit for the code.
> >>
> >> We had a check for the line lengths in openstack-dev/hacking before but
> >> it was killed [4] as per openstack-dev@ discussion [5].
> >>
> >> I believe commit message lines of <=80 chars are absolutely fine and
> >> should not get -1 treatment. I propose to raise the limit for the guideline
> >> on wiki accordingly.
> >>
> >
> > +1, I never understood it actually. I know some folks even question 80
> > chars for the code, so having 72 chars for commit messages looks a bit
> > weird to me.
> >
> >
> >> Comments?
> >>
> >> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> >> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> >> [3]:
> >> https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> >> [4]: https://review.openstack.org/#/c/142585/
> >> [5]:
> >> http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
> >>
> >> Ihar
> >>
> >>
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Servicegroup refactoring for the Control Plane - Mitaka

2015-09-25 Thread Dulko, Michal
On Wed, 2015-09-23 at 11:11 -0700, Vilobh Meshram wrote:

> Accepted in Liberty [1] [2]:
> [1] Service information is stored in the respective backend configured
> by CONF.servicegroup_driver, and all interfaces that access service
> information go through the servicegroup layer.
> [2] Add tooz-specific drivers, e.g. replace the existing nova
> servicegroup zookeeper driver with a new zookeeper driver backed by the
> Tooz zookeeper driver.
> 
> 
> Proposal for Mitaka [3][4]:
> [3] Service information is stored in nova.services (the nova database)
> and liveness information is managed by CONF.servicegroup_driver
> (DB/Zookeeper/Memcache).
> [4] Stick to what was accepted in #2, except that the scope will be
> decided based on whether we go with #1 (as accepted for Liberty) or #3
> (as proposed for Mitaka).
> 
I like the Mitaka (#3) proposal more. We still have all the data in the
persistent database, and the SG driver only reports whether a host is
alive. This would make transitions between SG drivers easier for
administrators, and after all this is why you want to use ZooKeeper: to
learn about failures early and avoid scheduling new VMs to a
non-responding host.
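
As a rough illustration of the liveness half of #3, a sketch against the
tooz API (the ZooKeeper URL, group name, and member id are placeholders,
and the zookeeper backend additionally needs kazoo installed):

    from tooz import coordination

    # Join a group and heartbeat so peers can detect this host's liveness;
    # persistent service records would stay in nova.services.
    coordinator = coordination.get_coordinator(
        "zookeeper://127.0.0.1:2181", b"compute-host-1")
    coordinator.start()
    try:
        coordinator.create_group(b"nova-compute").get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(b"nova-compute").get()
    coordinator.heartbeat()  # call periodically, e.g. from a looping timer

    # Liveness query: ask the backend who is currently a member.
    members = coordinator.get_members(b"nova-compute").get()
    print(b"compute-host-1" in members)
    coordinator.stop()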

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Jim Rollenhagen
On Fri, Sep 25, 2015 at 04:44:59PM +0200, Ihar Hrachyshka wrote:
> Hi all,
> 
> releases are approaching, so it’s the right time to start some bike shedding 
> on the mailing list.
> 
> Recently I was called out several times [1][2] for violating our commit
> message guideline [3], which says: "Subsequent lines should be wrapped
> at 72 characters."
> 
> I agree that very long commit message lines can be bad, e.g. if they are
> 200+ chars. But <= 79 chars? I don’t think so. Especially since we have
> a 79-char limit for the code.
> 
> We had a check for the line lengths in openstack-dev/hacking before but it 
> was killed [4] as per openstack-dev@ discussion [5].
> 
> I believe commit message lines of <=80 chars are absolutely fine and should 
> not get -1 treatment. I propose to raise the limit for the guideline on wiki 
> accordingly.
> 
> Comments?

It makes me really sad that we actually even spend time discussing
things like this. As a core reviewer, I would just totally ignore this
-1. I also ignore -1s for things like minor typos in a comment, etc.

Let's focus on building good software instead. :)

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Joshua Harlow

Dmitriy Ukhlov wrote:

Hello stackers,

I'm working on a new oslo.messaging RabbitMQ driver implementation which
uses the pika client library instead of kombu. It is related to
https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
In this letter I want to share current results and probably get first
feedback from you.
Now the code is available here:
https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py



This will end up on review.openstack.org, right, so that it can be
properly reviewed? (It will likely take a while since it looks to be
~1000+ lines of code.)


Also, a suggestion: before that merges, can docs be added? There seem to
be very few docstrings about what/why/how. For sustainability purposes
that would be appreciated, I think.
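
For readers who haven't used the library, a minimal pika round-trip,
separate from the driver itself (host, queue name, and payload are
placeholders):

    import pika

    # Open a blocking connection and declare a throwaway queue.
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="oslo-demo")

    # Publish via the default exchange, routed by queue name.
    channel.basic_publish(exchange="", routing_key="oslo-demo", body=b"ping")

    # Synchronous fetch (the flag is spelled no_ack in pika 0.x, auto_ack in 1.x).
    method, header, body = channel.basic_get(queue="oslo-demo", no_ack=True)
    print(body)
    conn.close()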



Current status of this code:
- pika driver passes functional tests
- pika driver passes tempest smoke tests
- pika driver passes almost all tempest full tests (except 5), but it
seems the reason is not related to oslo.messaging
Also I created small devstack patch to support pika driver testing on
gate (https://review.openstack.org/#/c/226348/)

Next steps:
- communicate with Manish (blueprint owner)
- write spec to this blueprint
- send a review with this patch when spec and devstack patch get merged.

Thank you.


--
Best regards,
Dmitriy Ukhlov
Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] should puppet-neutron manage third party software?

2015-09-25 Thread Emilien Macchi
In our last meeting [1], we discussed whether or not to manage external
packaging repositories for Neutron plugin dependencies.

Current situation:
puppet-neutron installs packages (like neutron-plugin-*) and configures
Neutron plugins (configuration files like /etc/neutron/plugins/*.ini).
Some plugins (Cisco) do more: they install third-party packages (not
part of OpenStack) from external repos.

The question is: should we continue that way and accept that kind of
patch [2]?

I vote for no: managing external packages & external repositories should
be up to an external module.
Example: my SDN tool is called "sdnmagic":
1/ patch puppet-neutron to manage the neutron-plugin-sdnmagic package
and configure the .ini file(s) to make it work in Neutron
2/ create puppet-sdnmagic that will take care of everything else:
installing sdnmagic, managing packaging (and specific dependencies),
repositories, etc.
I'm -1 on puppet-neutron handling it: we are not managing SDN solutions,
we are enabling puppet-neutron to work with them.

I would like to find a consensus here, that will be consistent across
*all plugins* without exception.


Thanks for your feedback,

[1] http://goo.gl/zehmN2
[2] https://review.openstack.org/#/c/209997/
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 12:00 PM, Chris Hoge  wrote:
> 
>> 
>> On Sep 25, 2015, at 6:59 AM, Doug Hellmann  wrote:
>> 
>> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:
 
>>>> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  wrote:
>>>>
>>>> Hi Melanie
>>>>
>>>> In general, images created by glance v1 API should be accessible using
>>>> v2 and vice-versa. We fixed some bugs [1] [2] [3] where metadata
>>>> associated with an image was causing incompatibility. These fixes were
>>>> back-ported to stable/kilo.
>>>>
>>>> Thanks
>>>> Sabari
>>>>
>>>> [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>>> [2] - https://bugs.launchpad.net/bugs/1419823
>>>> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>>>>
>>>> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
>>>> Hi All,
>>>>
>>>> I have been looking and haven't yet located documentation about how to
>>>> upgrade from glance v1 to glance v2.
>>>>
>>>> From what I understand, images and snapshots created with v1 can't be
>>>> listed/accessed through the v2 api. Are there instructions about how to
>>>> migrate images and snapshots from v1 to v2? Are there other
>>>> incompatibilities between v1 and v2?
>>>>
>>>> I'm asking because I have read that glance v1 isn't defcore compliant and
>>>> so we need all projects to move to v2, but the incompatibility from v1 to
>>>> v2 is preventing that in nova. Is there anything else preventing v2
>>>> adoption? Could we move to glance v2 if there's a migration path from v1
>>>> to v2 that operators can run through before upgrading to a version that
>>>> uses v2 as the default?
>>> 
>>> Just to clarify the DefCore situation a bit here: 
>>> 
>>> The DefCore Committee is considering adding some Glance v2
>> capabilities [1] as “advisory” (e.g. not required now but might be
>> in the future unless folks provide feedback as to why it shouldn’t
>> be) in its next Guideline, which is due to go to the Board of Directors
>> in January and will cover Juno, Kilo, and Liberty [2].   The Nova image
>> API’s are already required [3][4].  As discussion began about which
>> Glance capabilities to include and whether or not to keep the Nova
>> image API’s as required, it was pointed out that the many ways images
>> can currently be created in OpenStack is problematic from an
>> interoperability point of view in that some clouds use one and some use
>> others.  To be included in a DefCore Guideline, capabilities are scored
>> against twelve Criteria [5], and need to achieve a certain total to be
>> included.  Having a bunch of different ways to deal with images
>> actually hurts the chances of any one of them meeting the bar because
>> it makes it less likely that they’ll achieve several criteria.  For
>> example:
>>> 
>>> One of the criteria is “widely deployed” [6].  In the case of images,
>>> the Nova image-create API and Glance v2 are both pretty widely deployed
>>> [7]; Glance v1 isn’t, and at least one cloud uses none of those but
>>> instead uses the import task API.
>>> 
>>> Another criteria is “atomic” [8] which basically means the capability is 
>>> unique and can’t be built out of other required capabilities.  Since the 
>>> Nova image-create API is already required and effectively does the same 
>>> thing as glance v1 and v2’s image create API’s, the latter lose points.
>> 
>> This seems backwards. The Nova API doesn't "do the same thing" as
>> the Glance API, it is a *proxy* for the Glance API. We should not
>> be requiring proxy APIs for interop. DefCore should only be using
>> tests that talk directly to the service that owns the feature being
>> tested.
> 
> I agree in general, at the time the standard was approved the
> only api we had available to us (because only nova code was
> being considered for inclusion) was the proxy.
> 
> We’re looking at v2 as the required api going forward, but
> as has been mentioned before, the nova proxy requires that
> v1 be present as a non-public api. Not the best situation in
> the world, and I’m personally looking forward to Glance,
> Cinder, and Neutron becoming explicitly required APIs in
> DefCore.
> 

Also worth pointing out here: when we talk about “doing the same thing” from a 
DefCore perspective, we’re essentially talking about what’s exposed to the end 
user, not how that’s implemented in OpenStack’s source code.  So from an end 
user’s perspective:

If I call nova image-create, I get an image in my cloud.  If I call the Glance 
v2 API to create an image, I also get an image in my cloud.  I neither see nor 
care that Nova is actually talking to Glance in the background, because if I’m 
writing code that uses the OpenStack API’s, I need to pick which one of those 
two API’s to make my code call upon to put an image in my cloud.  Or, in the 
worst case, I have to write a bunch of if/else loops into my code 

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Chris Hoge

> On Sep 25, 2015, at 6:59 AM, Doug Hellmann  wrote:
> 
> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:
>>> 
>>> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  wrote:
>>> 
>>> Hi Melanie
>>> 
>>> In general, images created by glance v1 API should be accessible using v2 
>>> and
>>> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with 
>>> an image was
>>> causing incompatibility. These fixes were back-ported to stable/kilo.
>>> 
>>> Thanks
>>> Sabari
>>> 
>>> [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>> [2] - https://bugs.launchpad.net/bugs/1419823 
>>> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
>>> 
>>> 
>>> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
>>> Hi All,
>>> 
>>> I have been looking and haven't yet located documentation about how to 
>>> upgrade from glance v1 to glance v2.
>>> 
>>> From what I understand, images and snapshots created with v1 can't be 
>>> listed/accessed through the v2 api. Are there instructions about how to 
>>> migrate images and snapshots from v1 to v2? Are there other 
>>> incompatibilities between v1 and v2?
>>> 
>>> I'm asking because I have read that glance v1 isn't defcore compliant and 
>>> so we need all projects to move to v2, but the incompatibility from v1 to 
>>> v2 is preventing that in nova. Is there anything else preventing v2 
>>> adoption? Could we move to glance v2 if there's a migration path from v1 to 
>>> v2 that operators can run through before upgrading to a version that uses 
>>> v2 as the default?
>> 
>> Just to clarify the DefCore situation a bit here: 
>> 
>> The DefCore Committee is considering adding some Glance v2
> capabilities [1] as “advisory” (e.g. not required now but might be
> in the future unless folks provide feedback as to why it shouldn’t
> be) in its next Guideline, which is due to go to the Board of Directors
> in January and will cover Juno, Kilo, and Liberty [2]. The Nova image
> API’s are already required [3][4].  As discussion began about which
> Glance capabilities to include and whether or not to keep the Nova
> image API’s as required, it was pointed out that the many ways images
> can currently be created in OpenStack is problematic from an
> interoperability point of view in that some clouds use one and some use
> others.  To be included in a DefCore Guideline, capabilities are scored
> against twelve Criteria [5], and need to achieve a certain total to be
> included.  Having a bunch of different ways to deal with images
> actually hurts the chances of any one of them meeting the bar because
> it makes it less likely that they’ll achieve several criteria.  For
> example:
>> 
>> One of the criteria is “widely deployed” [6].  In the case of images,
>> the Nova image-create API and Glance v2 are both pretty widely deployed
>> [7]; Glance v1 isn’t, and at least one cloud uses none of those but
>> instead uses the import task API.
>> 
>> Another criteria is “atomic” [8] which basically means the capability is 
>> unique and can’t be built out of other required capabilities.  Since the 
>> Nova image-create API is already required and effectively does the same 
>> thing as glance v1 and v2’s image create API’s, the latter lose points.
> 
> This seems backwards. The Nova API doesn't "do the same thing" as
> the Glance API, it is a *proxy* for the Glance API. We should not
> be requiring proxy APIs for interop. DefCore should only be using
> tests that talk directly to the service that owns the feature being
> tested.

I agree in general, at the time the standard was approved the
only api we had available to us (because only nova code was
being considered for inclusion) was the proxy.

We’re looking at v2 as the required api going forward, but
as has been mentioned before, the nova proxy requires that
v1 be present as a non-public api. Not the best situation in
the world, and I’m personally looking forward to Glance,
Cinder, and Neutron becoming explicitly required APIs in
DefCore.


> Doug
> 
>> 
>> Another criteria is “future direction” [9].  Glance v1 gets no points here 
>> since v2 is the current API, has been for a while, and there’s even been 
>> some work on v3 already.
>> 
>> There are also criteria for “used by clients” [11].  Unfortunately both
>> Glance v1 and v2 fall down pretty hard here: of all the client libraries
>> users reported in the last user survey, it appears that only one other
>> than the OpenStack clients supports Glance v2 and one supports Glance v1,
>> while the rest all rely on the Nova APIs.  Even within OpenStack we don’t
>> necessarily have good adoption, since Nova still uses the v1 API to talk
>> to Glance and OpenStackClient didn’t support image creation with v2 until
>> this week’s 1.7.0 release. [13]
>> 
>> So, it’s a bit problematic that v1 is still being used even within the 
>> project (though it did get 

Re: [openstack-dev] Apache2 vs uWSGI vs ...

2015-09-25 Thread Adam Young

On 09/25/2015 07:09 AM, Sergii Golovatiuk wrote:

Hi,

Morgan gave a perfect example of why operators want to use uWSGI. Let's 
imagine a future where all OpenStack services run as mod_wsgi 
processes under Apache. It's like putting all your eggs in one basket. If 
you need to reconfigure one service on a controller, it may affect 
another service. For instance, sometimes operators need to increase the 
number of threads/processes for wsgi or add a new virtual host to 
Apache. That requires a graceful or cold restart of Apache, which 
affects other services. Another case: internal problems in mod_wsgi 
may lead to an Apache crash, affecting all services.


The uWSGI/gunicorn model is safer, as in this case Apache is a reverse 
proxy only. This model gives operators flexibility: they may use 
apache/nginx as a proxy or load balancer. A stop or crash of one service 
won't lead to downtime of other services, and the management of 
OpenStack becomes simpler and friendlier.


There are some fallacies here:

1. OpenStack services should all be on the same machine.
2. OpenStack web services should run on ports other than 443.

I think both of these are ideas whose time has come and gone.

If you have a single machine, run them out of separate containers. That 
allows different services to work with different versions of the 
libraries. It lets you mix a newer Keystone with older everything else.


If everything is on port 443, you need a single web server at the front 
end to multiplex it;  uWSGI or any other one does not obviate that.



There are no good ports left in /etc/services; stop trying to reserve 
new ones for the web.  If you run a web service, you need to be able to 
get through firewalls.  You need to run on standard ports. Run on 443.


Keystone again is a great example of this: it has two ports: 5000 and 35357.

port 5000 in /etc/services is

commplex-main   5000/tcp

and  port 35357 is smack dab in the middle of the ephemeral range.
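
That claim is easy to check against the local services database (results
vary by platform, so treat this as a sketch):

    import socket

    for port in (5000, 35357):
        try:
            # Raises socket.error/OSError when the port has no registration.
            print(port, socket.getservbyport(port, "tcp"))
        except socket.error:
            print(port, "no entry in the services database")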


Again, so long as the web server supports the secure cryptographic 
mechanisms, I don't care which one you choose.  But the idea of us going 
to Keystone and getting a bearer token as the basis for security is 
immature; we should be doing the following on every call:


1. TLS
2. Cryptographic authentication.


They can be together or split up.

So, let's get everything running inside Apache, and, at the same time, 
push our other favorite web servers to support the necessary pieces to 
make OpenStack and the Web secure.







--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Sep 18, 2015 at 3:44 PM, Morgan Fainberg wrote:


There is and has been desire to support uWSGI and other
alternatives to mod_wsgi. There are a variety of operational
reasons to consider uWSGI and/or gunicorn behind apache most
notably to facilitate easier management of the processes
independently of the webserver itself. With mod_wsgi the processes
are directly tied to the apache server where as with uWSGI and
gunicorn you can manage the various services independently and/or
with differing VENVs more easily.

There are potential other concerns that must be weighed when
considering which method of deployment to use. I hope we have
clear documentation within the next cycle (and possible choices
for the gate) for utilizing uWSGI and/or gunicorn.
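
For context, the thing any of these options ends up hosting is just a
WSGI callable; a minimal stand-in for a service entry point looks like
this (illustrative only):

    # app.py -- runnable unchanged under mod_wsgi, uWSGI, or gunicorn, e.g.:
    #   gunicorn app:application
    #   uwsgi --http :8080 --wsgi-file app.py
    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello from a WSGI app\n"]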

--Morgan

Sent via mobile

On Sep 18, 2015, at 06:12, Adam Young wrote:


On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:

On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:

In the fuel project, we recently ran into a couple of issues with Apache2 +
mod_wsgi as we switched Keystone to run . Please see [1] and [2].

Looking deep into Apache2 issues, specifically around "apache2ctl graceful"
and module loading/unloading and the hooks used by mod_wsgi [3], I started
wondering if Apache2 + mod_wsgi is the "right" solution and if there was
something else better that people are already using.

One data point that keeps coming up is: all the CI jobs use Apache2 +
mod_wsgi, so it must be the best solution. Is it? If not, what is?

Disclaimer: it's been a while since I've cared about performance with a
web server in front of a Python app.

IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
on again. In general, I seem to remember it being thought of as a bit
old and crusty, but mostly working.


I am not aware of that.  It has been the workhorse of the
Python/wsgi world for a while, and we use it heavily.


At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
and saw a significant performance increase. This was a Django app. uwsgi
is fairly straightforward to operate and comes loaded with a myriad of
options[1] to help folks 

Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Dmitry Tantsur

On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:

Hi all,

releases are approaching, so it’s the right time to start some bike shedding on 
the mailing list.

Recently I was called out several times [1][2] for violating our commit
message guideline [3], which says: "Subsequent lines should be wrapped at
72 characters."

I agree that very long commit message lines can be bad, e.g. if they are
200+ chars. But <= 79 chars? I don’t think so. Especially since we have a
79-char limit for the code.

We had a check for the line lengths in openstack-dev/hacking before but it was 
killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and should not 
get -1 treatment. I propose to raise the limit for the guideline on wiki 
accordingly.


+1, I never understood it actually. I know some folks even question 80 
chars for the code, so having 72 chars for commit messages looks a bit 
weird to me.




Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: 
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Ryan Brown

On 09/25/2015 10:44 AM, Ihar Hrachyshka wrote:

Hi all,

releases are approaching, so it’s the right time to start some bike
shedding on the mailing list.

Recently I was called out several times [1][2] for violating our
commit message guideline [3], which says: "Subsequent lines should be
wrapped at 72 characters."

I agree that very long commit message lines can be bad, e.g. if they
are 200+ chars. But <= 79 chars? I don’t think so. Especially since
we have a 79-char limit for the code.


The default "git log" display shows the commit message already indented, 
and the tab may display as 8 spaces I suppose. I believe the 72 limit is 
derived from 80-8 (terminal width - tab width)


I don't know how many folks use 80-char terminals (I use side-by-side 
110-column terms). Having some limit to prevent 200+ is reasonable, but 
I think it's pedantic to -1 a patch due to a 78-char commit message line.



We had a check for the line lengths in openstack-dev/hacking before
but it was killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and
should not get -1 treatment. I propose to raise the limit for the
guideline on wiki accordingly.

Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar



__



OpenStack Development Mailing List (not for usage questions)

Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] service default value functions

2015-09-25 Thread Yanis Guenane


On 09/23/2015 02:17 AM, Cody Herriges wrote:
> Alex Schultz wrote:
>> Hey puppet folks,
>>
>> Based on the meeting yesterday[0], I had proposed creating a parser
>> function called is_service_default[1] to validate if a variable matched
>> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
>> about how we can maybe not use the arbitrary string throughout the
>> puppet code, since it cannot easily be validated.  So I tested creating
>> another puppet function named service_default[2] to replace the use of
>> '<SERVICE DEFAULT>' throughout all the puppet modules.  My tests seemed
>> to indicate that you can use a parser function as a parameter default
>> for classes.
>>
>> I wanted to send a note to gather comments around the second function. 
>> When we originally discussed what to use to designate for a service's
>> default configuration, I really didn't like using an arbitrary string
>> since it's hard to parse and validate. I think leveraging a function
>> might be better since it is something that can be validated via tests
>> and a syntax checker.  Thoughts?
>>
>>
>> Thanks,
>> -Alex
>>
>> [0] 
>> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
>> [1] https://review.openstack.org/#/c/223672
>> [2] https://review.openstack.org/#/c/224187
>>
> I've been mulling this over the last several days and I just can't
> accept an entire ruby function which would be run for every parameter
> with the desired static value of "<SERVICE DEFAULT>" when the class is
> declared and parsed.  I am not generally against using functions as a
> parameter default, just not a fan in this case, because running ruby
> just to return a static string seems inappropriate and not optimal.
>
> In this specific case I think the params pattern and inheritance can
> get us to the same goals.  I also find this a valid use of inheritance
> across module namespaces, but...only because all our modules must depend
> on puppet-openstacklib.
>
> http://paste.openstack.org/show/473655

Hello,

I do like the params pattern. This is something we could probably apply
for other purposes later: centralizing a common parameter value for all
our modules in a single place, yet overridable for each module (if
necessary) in its own params.pp file.

--
Yanis Guenane

>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Jeremy Stanley
On 2015-09-25 16:44:59 +0200 (+0200), Ihar Hrachyshka wrote:
[...]
> I believe commit message lines of <=80 chars are absolutely fine
> and should not get -1 treatment. I propose to raise the limit for
> the guideline on wiki accordingly.
[...]

As one of those traditionalists who still does all his work in 80x24
terminals (well, 80x25 with a status line), I keep my commit message
titles at or under 50 characters and message contents to no more
than 68 characters to allow for cleaner indentation/quoting just
like for my MUA editor settings. After all, some (non-OpenStack)
projects take patch submissions by E-mail and so it's easier to just
follow conservative E-mail line length conventions when possible.

That said, while I appreciate when people keep their commit message
lines wrapped short enough that they render sanely on my terminal, I
make a point of not leaving negative reviews about commit message
formatting unless it's really egregious (and usually not even then).
We have plenty of real bugs to deal with, and it's not worth my time
to berate people for inconsistent commit message layout as long as
the requisite information is present--it's easier to just lead by
example and hope that others follow most of the time.

As for the underlying topic, people leaving -1 reviews about silly,
unimportant details: reviewers need to get used to the fact that
sometimes there will be a -1 on a perfectly good proposed change, so
it's fine to ignore negative votes from people who are wasting their
time on pointless trivia. Please don't set your review filters to
skip changes with a CR -1 on them; review and judge for yourself.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Daniel P. Berrange
On Fri, Sep 25, 2015 at 11:05:09AM -0400, Doug Hellmann wrote:
> git tools such as git log and git show indent the commit message in
> their output, so you don't actually have the full 79/80 character width
> to work with. That's where the 72 comes from.

It is also commonly done so that when you copy commits into email, the
commit message doesn't get further line breaks inserted. This isn't a
big deal with openstack, as we don't use an email workflow for patch
review.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] CephFS native driver

2015-09-25 Thread Fox, Kevin M
I think having a native cephfs driver without nfs in the cloud is a very 
compelling feature. nfs is nearly impossible to make both HA and scalable 
without adding really expensive dedicated hardware. Ceph on the other hand 
scales very nicely and is very fault tolerant out of the box.

Thanks,
Kevin

From: Shinobu Kinjo [ski...@redhat.com]
Sent: Friday, September 25, 2015 12:04 AM
To: OpenStack Development Mailing List (not for usage questions); John Spray
Cc: Ceph Development; openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Manila] CephFS native driver

So here are questions from my side.
Just questions.


 1. What is the biggest advantage compared to others such as RBD?
  We should be able to implement what you are going to do in an
  existing module, shouldn't we?

 2. What are you going to focus on with a new implementation?
  It seems to be using NFS in front of that implementation
  more transparently.

 3. What are you thinking about integration with OpenStack using
  a new implementation?
  Since it's going to be a new kind, there should be a different
  architecture.

 4. Is this implementation intended mainly for OpenStack
  integration?

Since the velocity of OpenStack feature expansion is much higher than
it used to be, it's much more important to think about performance.

Is a new implementation also going to improve Ceph integration
with OpenStack system?

Thank you so much for your explanation in advance.

Shinobu

- Original Message -
From: "John Spray" 
To: openstack-dev@lists.openstack.org, "Ceph Development" 

Sent: Thursday, September 24, 2015 10:49:17 PM
Subject: [openstack-dev] [Manila] CephFS native driver

Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writable snapshots).  Currently deletion is
just a move into a 'trash' directory, the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.

Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.

However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.

This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.

All the best,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [TripleO] tripleo.org theme

2015-09-25 Thread Derek Higgins



On 25/09/15 13:34, Dan Prince wrote:

It has come to my attention that we aren't making great use of our
tripleo.org domain. One thing that would be useful would be to have the
new tripleo-docs content displayed there. It would also be nice to have
quick links to some of our useful resources, perhaps Derek's CI report
[1], a custom Reviewday page for TripleO reviews (something like this
[2]), and perhaps other links too. I'm thinking these go in the header,
and not just on some random TripleO docs page. Or perhaps both places.


We could even host some of these things on tripleo.org (not just link to 
them)




I was thinking that instead of the normal OpenStack theme however we
could go a bit off the beaten path and do our own TripleO theme.
Basically a custom tripleosphinx project that we ninja in as a
replacement for oslosphinx.

Could get our own mascot... or do something silly with words. I'm
reaching out to graphics artists who could help with this sort of
thing... but before that decision is made I wanted to ask about
thoughts on the matter here first.


+1 from me, the more content, articles etc... we can get up there the 
better, as long as we keep at it and it doesn't go stale.
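
If the tripleosphinx idea goes ahead, the per-project switch in
doc/source/conf.py could be as small as this (a sketch; "tripleosphinx"
is the proposed, not yet existing, package name):

    # doc/source/conf.py
    extensions = [
        'sphinx.ext.autodoc',
        'tripleosphinx',   # hypothetical replacement for 'oslosphinx'
    ]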




Speak up... it would be nice to have this wrapped up before Tokyo.

[1] http://goodsquishy.com/downloads/tripleo-jobs.html
[2] http://status.openstack.org/reviews/

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Fox, Kevin M
Yeah, and worse, since I can never remember the exact number (72 I guess), I 
always just round down to 70-1 to be safe.

Its silly.

Thanks,
Kevin

From: Ihar Hrachyshka [ihrac...@redhat.com]
Sent: Friday, September 25, 2015 7:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: bap...@us.ibm.com
Subject: [openstack-dev] [all] -1 due to line length violation in commit
messages

Hi all,

releases are approaching, so it’s the right time to start some bike shedding on 
the mailing list.

Recently I was called out several times [1][2] for violating our commit
message guideline [3], which says: "Subsequent lines should be wrapped at
72 characters."

I agree that very long commit message lines can be bad, e.g. if they are
200+ chars. But <= 79 chars? I don’t think so. Especially since we have a
79-char limit for the code.

We had a check for the line lengths in openstack-dev/hacking before but it was 
killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and should not 
get -1 treatment. I propose to raise the limit for the guideline on wiki 
accordingly.

Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: 
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Julien Danjou
On Fri, Sep 25 2015, Ihar Hrachyshka wrote:

> I agree that very long commit message lines can be bad, e.g. if they
> are 200+ chars. But <= 79 chars? I don’t think so. Especially since we
> have a 79-char limit for the code.

Agreed. <= 80 chars should be a rule of thumb and applied in a smart
way. As you say, 200+ is not OK – but we're human, we can judge and be
smart.

If we wanted to enforce that, we would just have to write a bot setting
-1 automatically. I'm getting tired of seeing people doing bots' jobs in
Gerrit.
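
The core of such a bot would be tiny; a sketch, with limits taken from
the wiki guideline (50-char summary, 72-char body lines):

    def check_commit_message(message, title_limit=50, body_limit=72):
        """Return a list of line-length problems; empty means clean."""
        problems = []
        lines = message.splitlines()
        if lines and len(lines[0]) > title_limit:
            problems.append("summary exceeds %d chars" % title_limit)
        for num, line in enumerate(lines[1:], start=2):
            if len(line) > body_limit:
                problems.append("line %d exceeds %d chars" % (num, body_limit))
        return problems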

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Joshua Harlow

Also a side question, that someone might know,

Whatever happened to the folks from RabbitMQ (incorporated? Pivotal?) 
who were going to get involved in oslo.messaging? Did that ever happen, 
if anyone knows?


They might be a good bunch of people to review such a pika driver (since 
I think they as a corporation created pika?).


Dmitriy Ukhlov wrote:

Hello stackers,

I'm working on a new oslo.messaging RabbitMQ driver implementation which
uses the pika client library instead of kombu. It is related to
https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
In this letter I want to share current results and probably get first
feedback from you.
Now the code is available here:
https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py

Current status of this code:
- pika driver passes functional tests
- pika driver passes tempest smoke tests
- pika driver passes almost all tempest full tests (except 5), but it
seems the reason is not related to oslo.messaging
Also I created small devstack patch to support pika driver testing on
gate (https://review.openstack.org/#/c/226348/)

Next steps:
- communicate with Manish (blueprint owner)
- write spec to this blueprint
- send a review with this patch when spec and devstack patch get merged.

Thank you.


--
Best regards,
Dmitriy Ukhlov
Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Fox, Kevin M
Another option... why are we wasting time on something that a computer can 
handle? Why not just let the line length be infinite in the commit message and 
have gerrit wrap it to  length lines on merge?

Thanks,
Kevin

From: Jim Rollenhagen [j...@jimrollenhagen.com]
Sent: Friday, September 25, 2015 8:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] -1 due to line length violation in commit 
messages

On Fri, Sep 25, 2015 at 04:44:59PM +0200, Ihar Hrachyshka wrote:
> Hi all,
>
> releases are approaching, so it’s the right time to start some bike shedding 
> on the mailing list.
>
> Recently I was called out several times [1][2] for violating our commit
> message guideline [3], which says: "Subsequent lines should be wrapped at
> 72 characters."
>
> I agree that very long commit message lines can be bad, e.g. if they are
> 200+ chars. But <= 79 chars? I don’t think so. Especially since we have a
> 79-char limit for the code.
>
> We had a check for the line lengths in openstack-dev/hacking before but it 
> was killed [4] as per openstack-dev@ discussion [5].
>
> I believe commit message lines of <=80 chars are absolutely fine and should 
> not get -1 treatment. I propose to raise the limit for the guideline on wiki 
> accordingly.
>
> Comments?

It makes me really sad that we actually even spend time discussing
things like this. As a core reviewer, I would just totally ignore this
-1. I also ignore -1s for things like minor typos in a comment, etc.

Let's focus on building good software instead. :)

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Andrew Laski

On 09/25/15 at 09:59am, Doug Hellmann wrote:

Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:

>
> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  wrote:
>
> Hi Melanie
>
> In general, images created by glance v1 API should be accessible using v2 and
> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an 
image was
> causing incompatibility. These fixes were back-ported to stable/kilo.
>
> Thanks
> Sabari
>
> [1] - https://bugs.launchpad.net/glance/+bug/1447215
> [2] - https://bugs.launchpad.net/bugs/1419823
> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>
>
> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
> Hi All,
>
> I have been looking and haven't yet located documentation about how to 
upgrade from glance v1 to glance v2.
>
> From what I understand, images and snapshots created with v1 can't be 
listed/accessed through the v2 api. Are there instructions about how to migrate 
images and snapshots from v1 to v2? Are there other incompatibilities between v1 
and v2?
>
> I'm asking because I have read that glance v1 isn't defcore compliant and so 
we need all projects to move to v2, but the incompatibility from v1 to v2 is 
preventing that in nova. Is there anything else preventing v2 adoption? Could we 
move to glance v2 if there's a migration path from v1 to v2 that operators can run 
through before upgrading to a version that uses v2 as the default?

Just to clarify the DefCore situation a bit here:

The DefCore Committee is considering adding some Glance v2

capabilities [1] as “advisory” (e.g. not required now but might be
in the future unless folks provide feedback as to why it shouldn’t
be) in its next Guideline, which is due to go to the Board of Directors
in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
API’s are already required [3][4].  As discussion began about which
Glance capabilities to include and whether or not to keep the Nova
image API’s as required, it was pointed out that the many ways images
can currently be created in OpenStack is problematic from an
interoperability point of view in that some clouds use one and some use
others.  To be included in a DefCore Guideline, capabilities are scored
against twelve Criteria [5], and need to achieve a certain total to be
included.  Having a bunch of different ways to deal with images
actually hurts the chances of any one of them meeting the bar because
it makes it less likely that they’ll achieve several criteria.  For
example:


One of the criteria is “widely deployed” [6].  In the case of images, the
Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance
v1 isn’t, and at least one cloud uses none of those but instead uses the import
task API.

Another criteria is “atomic” [8] which basically means the capability is unique 
and can’t be built out of other required capabilities.  Since the Nova 
image-create API is already required and effectively does the same thing as 
glance v1 and v2’s image create API’s, the latter lose points.


This seems backwards. The Nova API doesn't "do the same thing" as
the Glance API, it is a *proxy* for the Glance API. We should not
be requiring proxy APIs for interop. DefCore should only be using
tests that talk directly to the service that owns the feature being
tested.


I completely agree with this.  I will admit to having some confusion as 
to why Glance capabilities have been tested through Nova and I know 
others have raised this same thought within the process.




Doug



Another criteria is “future direction” [9].  Glance v1 gets no points here 
since v2 is the current API, has been for a while, and there’s even been some 
work on v3 already.

There are also criteria for “used by clients” [11].  Unfortunately both Glance
v1 and v2 fall down pretty hard here: of all the client libraries users
reported in the last user survey, it appears that only one other than the
OpenStack clients supports Glance v2 and one supports Glance v1, while the
rest all rely on the Nova APIs.  Even within OpenStack we don’t necessarily
have good adoption, since Nova still uses the v1 API to talk to Glance and
OpenStackClient didn’t support image creation with v2 until this week’s 1.7.0
release. [13]

So, it’s a bit problematic that v1 is still being used even within the project 
(though it did get slightly better this week).  It’s highly unlikely at this 
point that it makes any sense for DefCore to require OpenStack Powered products 
to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to 
be exposed to end users, that doesn’t necessarily mean Nova couldn’t continue 
to use v1: OpenStack Powered products wouldn’t be required to expose v1 to end 
users, but if the nova image-create API remains required then they’d have to 
expose it at least internally to the cloud.  But….really?  That’s still sort of 
an ugly 

[openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Ihar Hrachyshka
Hi all,

releases are approaching, so it’s the right time to start some bike shedding on 
the mailing list.

Recently I was called out several times [1][2] for violating our commit
message guideline [3], which says: "Subsequent lines should be wrapped at
72 characters."

I agree that very long commit message lines can be bad, e.g. if they are
200+ chars. But <= 79 chars? I don’t think so. Especially since we have a
79-char limit for the code.

We had a check for the line lengths in openstack-dev/hacking before but it was 
killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and should not 
get -1 treatment. I propose to raise the limit for the guideline on wiki 
accordingly.

Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: 
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 10:42 AM, Andrew Laski  wrote:
> 
> On 09/25/15 at 09:59am, Doug Hellmann wrote:
>> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:
>>> >
>>> > On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  
>>> > wrote:
>>> >
>>> > Hi Melanie
>>> >
>>> > In general, images created by glance v1 API should be accessible using v2 
>>> > and
>>> > vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with 
>>> > an image was
>>> > causing incompatibility. These fixes were back-ported to stable/kilo.
>>> >
>>> > Thanks
>>> > Sabari
>>> >
>>> > [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>> > [2] - https://bugs.launchpad.net/bugs/1419823
>>> > [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>>> >
>>> >
>>> > On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
>>> > Hi All,
>>> >
>>> > I have been looking and haven't yet located documentation about how to 
>>> > upgrade from glance v1 to glance v2.
>>> >
>>> > From what I understand, images and snapshots created with v1 can't be 
>>> > listed/accessed through the v2 api. Are there instructions about how to 
>>> > migrate images and snapshots from v1 to v2? Are there other 
>>> > incompatibilities between v1 and v2?
>>> >
>>> > I'm asking because I have read that glance v1 isn't defcore compliant and 
>>> > so we need all projects to move to v2, but the incompatibility from v1 to 
>>> > v2 is preventing that in nova. Is there anything else preventing v2 
>>> > adoption? Could we move to glance v2 if there's a migration path from v1 
>>> > to v2 that operators can run through before upgrading to a version that 
>>> > uses v2 as the default?
>>> 
>>> Just to clarify the DefCore situation a bit here:
>>> 
>>> The DefCore Committee is considering adding some Glance v2
>>> capabilities [1] as “advisory” (e.g. not required now but might be
>>> in the future unless folks provide feedback as to why it shouldn’t
>>> be) in its next Guideline, which is due to go to the Board of Directors
>>> in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
>>> API’s are already required [3][4].  As discussion began about which
>>> Glance capabilities to include and whether or not to keep the Nova
>>> image API’s as required, it was pointed out that the many ways images
>>> can currently be created in OpenStack is problematic from an
>>> interoperability point of view in that some clouds use one and some use
>>> others.  To be included in a DefCore Guideline, capabilities are scored
>>> against twelve Criteria [5], and need to achieve a certain total to be
>>> included.  Having a bunch of different ways to deal with images
>>> actually hurts the chances of any one of them meeting the bar because
>>> it makes it less likely that they’ll achieve several criteria.  For
>>> example:
>>> 
>>> One of the criteria is “widely deployed” [6].  In the case of images, the 
>>> Nova image-create API and Glance v2 are both pretty widely deployed 
>>> [7]; Glance v1 isn’t, and at least one product uses none of those but 
>>> instead uses the import task API.
>>> 
>>> Another criterion is “atomic” [8], which basically means the capability is 
>>> unique and can’t be built out of other required capabilities.  Since the 
>>> Nova image-create API is already required and effectively does the same 
>>> thing as glance v1 and v2’s image create API’s, the latter lose points.
>> 
>> This seems backwards. The Nova API doesn't "do the same thing" as
>> the Glance API, it is a *proxy* for the Glance API. We should not
>> be requiring proxy APIs for interop. DefCore should only be using
>> tests that talk directly to the service that owns the feature being
>> tested.
> 
> I completely agree with this.  I will admit to having some confusion as to 
> why Glance capabilities have been tested through Nova and I know others have 
> raised this same thought within the process.

Because it turns out that’s how most of the world is dealing with images.

Generally speaking, the nova image API and glance v2 API’s have roughly equal 
adoption among public and private cloud products, but among the client SDK’s 
people are using to interact with OpenStack the nova image API’s have much 
better adoption (see notes in previous message for details).  So we gave the 
world lots of different ways to do the same thing and the world has strongly 
adopted two of them (with reasonable evidence that the Nova image API is 
actually the most-adopted of the lot).  If you’re looking for the most 
interoperable way to create an image across lots of different OpenStack clouds 
today, it’s actually through Nova.

At Your Service,

Mark T. Voelker

> 
>> 
>> Doug
>> 
>>> 
>>> Another criterion is “future direction” [9].  Glance v1 gets no points here 
>>> since v2 is the current API, has been for a while, and there’s even been 
>>> some work on v3 already.
>>> 
>>> There is also a criterion for “used by clients” 

Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Ivan Kolodyazhny
+1 for <=80 chars. 72 characters are sometimes not enough.

Regards,
Ivan Kolodyazhny

On Fri, Sep 25, 2015 at 6:05 PM, Doug Hellmann 
wrote:

> git tools such as git log and git show indent the commit message in
> their output, so you don't actually have the full 79/80 character width
> to work with. That's where the 72 comes from.
>
> Doug
>
> Excerpts from Vikram Choudhary's message of 2015-09-25 20:25:41 +0530:
> > +1 for <=80 chars. It will be uniform with the existing coding style.
> >
> > On Fri, Sep 25, 2015 at 8:22 PM, Dmitry Tantsur 
> wrote:
> >
> > > On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:
> > >
> > >> Hi all,
> > >>
> > >> releases are approaching, so it’s the right time to start some bike
> > >> shedding on the mailing list.
> > >>
> > >> Recently I got pointed out several times [1][2] that I violate our
> commit
> > >> message requirement [3] for the message lines that says: "Subsequent
> lines
> > >> should be wrapped at 72 characters.”
> > >>
> > >> I agree that very long commit message lines can be bad, f.e. if they
> are
> > >> 200+ chars. But <= 79 chars?.. Don’t think so. Especially since we
> have 79
> > >> chars limit for the code.
> > >>
> > >> We had a check for the line lengths in openstack-dev/hacking before
> but
> > >> it was killed [4] as per openstack-dev@ discussion [5].
> > >>
> > >> I believe commit message lines of <=80 chars are absolutely fine and
> > >> should not get -1 treatment. I propose to raise the limit for the
> guideline
> > >> on wiki accordingly.
> > >>
> > >
> > > +1, I never understood it actually. I know some folks even question 80
> > > chars for the code, so having 72 chars for commit messages looks a bit
> > > weird to me.
> > >
> > >
> > >> Comments?
> > >>
> > >> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> > >> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> > >> [3]:
> > >>
> https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> > >> [4]: https://review.openstack.org/#/c/142585/
> > >> [5]:
> > >>
> http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
> > >>
> > >> Ihar
> > >>
> > >>
> > >>
> > >>
> __
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> > >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>
> > >>
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] [app-catalog] versions for murano assets in the catalog

2015-09-25 Thread Fox, Kevin M
To clarify a bit: we're unsure that running Glance without Keystone as a 
distribution mechanism for apps.openstack.org assets is a good idea/fit.

Having assets stored in the local cloud all in one place (in Glance) seems like 
a very good idea to me.

We need closer communication between Murano, Glance, and App-Catalog going 
forward, since each project has valuable things to contribute to the overall 
big picture.

Thanks,
Kevin

From: Christopher Aedo [d...@aedo.net]
Sent: Thursday, September 24, 2015 2:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [murano] [app-catalog] versions for murano assets 
in the catalog

On Tue, Sep 22, 2015 at 12:06 PM, Serg Melikyan  wrote:
> Hi Chris,
>
> The concern regarding asset versioning in the Community App Catalog indeed
> affects Murano, because we are constantly improving our language and
> adding new features; e.g. we added the ability to select an existing Neutron
> network for a particular application in Liberty, and if a user wants to use
> this feature, his application will be incompatible with Kilo. I think
> this is also valid for Heat, because their HOT language is also improving
> with each release.
>
> Thank you for proposing a workaround; I think this is a good way to
> solve the immediate blocker while the Community App Catalog team works on
> handling versions elegantly on their side. Kirill proposed
> two changes in Murano to follow this approach that I've already +2'd:
>
> * https://review.openstack.org/225251 - openstack/murano-dashboard
> * https://review.openstack.org/225249 - openstack/python-muranoclient
>
> Looks like the corresponding commit to the Community App Catalog is already
> merged [0], and our next step is to prepare new versions of applications
> from openstack/murano-apps and then figure out how to publish them
> properly.

Yep, thanks, this looks like a step in the right direction to give us
some wiggle room to handle different versions of assets in the App
Catalog for the next few months.

Down the road we want to make sure that the App Catalog is not closely
tied to any other projects, or how those projects handle versions.  We
will clearly communicate our intentions around versions of assets (and
how to specify which version is desired when retrieving an asset) here
on the mailing list, during the weekly meetings, and during the weekly
cross-project meeting as well.

> P.S. I've also talked with Alexander and Kirill regarding better ways
> to handle versioning for assets in the Community App Catalog, and they
> shared that they are starting work on a PoC using the Glance Artifact
> Repository; perhaps they can share more details regarding this work
> here. We would be happy to work on this together given that in Liberty
> we implemented experimental support for package versioning inside
> Murano (e.g. having two versions of the same app working side-by-side)
> [1]
>
> References:
> [0] https://review.openstack.org/224869
> [1] 
> http://murano-specs.readthedocs.org/en/latest/specs/liberty/murano-versioning.html

Thanks, looking forward to the PoC.  We have discussed whether or not
using Glance Artifact Repository makes sense for the App Catalog and
so far the consensus has been that it is not a great fit for what we
need.  Though it brings a lot of great stuff to the table, all we
really need is a place to drop large (and small) binaries.  Swift as a
storage component is the obvious choice for that - the metadata around
the asset itself (when it was added, by whom, rating, version, etc.)
will have to live in a DB anyway.  Given that, it seems like Glance is
not an obviously great fit, but like I said I'm looking forward to
hearing/seeing more on this front :)

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-09-25 Thread Ben Nemec
Not sure if my vote still counts, but +1. :-)

On 09/24/2015 12:12 PM, Doug Hellmann wrote:
> Oslo team,
> 
> I am nominating Brant Knudson for Oslo core.
> 
> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
> 
> Please indicate your opinion by responding with +1/-1 as usual.
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] should puppet-neutron manage third party software?

2015-09-25 Thread Edgar Magana
Hi There,

I just added my comment on the review. I do agree with Emilien. There should be 
specific repos for plugins and drivers.

BTW. I love the sdnmagic name  ;-)

Edgar




On 9/25/15, 9:02 AM, "Emilien Macchi"  wrote:

>In our last meeting [1], we were discussing whether or not to manage
>external packaging repositories for Neutron plugin dependencies.
>
>Current situation:
>puppet-neutron installs packages (like neutron-plugin-*) &
>configures Neutron plugins (configuration files like
>/etc/neutron/plugins/*.ini).
>Some plugins (Cisco) are doing more: they install third party packages
>(not part of OpenStack), from external repos.
>
>The question is: should we continue that way and accept that kind of
>patch [2]?
>
>I vote for no: managing external packages & external repositories should
>be up to an external module.
>Example: my SDN tool is called "sdnmagic":
>1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
>configure the .ini file(s) to make it work in Neutron
>2/ create puppet-sdnmagic that will take care of everything else:
>install sdnmagic, manage packaging (and specific dependencies),
>repositories, etc.
>I'm -1 on puppet-neutron handling it. We are not managing SDN solutions:
>we are enabling puppet-neutron to work with them.
>
>I would like to find a consensus here, that will be consistent across
>*all plugins* without exception.
>
>
>Thanks for your feedback,
>
>[1] http://goo.gl/zehmN2
>[2] https://review.openstack.org/#/c/209997/
>-- 
>Emilien Macchi
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][ironic] Ironic 4.2.0 release

2015-09-25 Thread Jim Rollenhagen
Hi all,

I'm proud to announce the release of Ironic 4.2.0! This follows quickly on
our 4.1.0 release with 21 bug fixes, and marks the completion of four
blueprints.

It is also the basis for our stable/Liberty branch, and will be included in
the coordinated OpenStack Liberty release.

Major changes are listed below and at 
http://docs.openstack.org/developer/ironic/releasenotes/
and full release details are available on Launchpad: 
https://launchpad.net/ironic/liberty/4.2.0

* Deprecated the bash ramdisk

  The older bash ramdisk built by diskimage-builder is now deprecated and
  support will be removed at the beginning of the "N" development cycle. Users
  should migrate to a ramdisk running ironic-python-agent, which now also
  supports the pxe_* drivers that the bash ramdisk was responsible for.
  For more info on building an ironic-python-agent ramdisk, see:
  
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#building-or-downloading-a-deploy-ramdisk-image

* Raised API version to 1.14

  * 1.12 allows setting RAID properties for a node; however support for
putting this configuration on a node is not yet implemented for in-tree
drivers; this will be added in a future release.
  * 1.13 adds a new 'abort' verb to the provision state API. This may be used
to abort cleaning for nodes in the CLEANWAIT state.
  * 1.14 makes the following endpoints discoverable in the API:
* /v1/nodes//states
* /v1/drivers//properties
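
  For illustration, hitting one of the newly discoverable endpoints might look
  roughly like this (a sketch, not from the release notes; the URL, token, and
  driver name below are placeholders):

      import requests

      # the endpoints above became discoverable at microversion 1.14
      headers = {'X-Auth-Token': TOKEN,  # placeholder credential
                 'X-OpenStack-Ironic-API-Version': '1.14'}
      resp = requests.get(IRONIC_URL + '/v1/drivers/pxe_ipmitool/properties',
                          headers=headers)
      print(resp.json())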

* Implemented a new Boot interface for drivers

  This change enhances the driver interface for driver authors, and should not
  affect users of Ironic, by splitting control of booting a server from the
  DeployInterface. The BootInterface is responsible for booting an image on a
  server, while the DeployInterface is responsible for deploying a tenant image
  to a server.

  This has been implemented in most in-tree drivers, and is a
  backwards-compatible change for out-of-tree drivers. The following in-tree
  drivers will be updated in a forthcoming release:

  * agent_ilo
  * agent_irmc
  * iscsi_ilo
  * iscsi_irmc

* Implemented a new RAID interface for drivers

  This change enhances the driver interface for driver authors. Drivers may
  begin implementing this interface to support RAID configuration for nodes.
  This is not yet implemented for any in-tree drivers.

* Image size is now checked before deployment with agent drivers

  The agent must download the tenant image in full before writing it to disk.
  As such, the server being deployed must have enough RAM for running the
  agent and storing the image. This is now checked before Ironic tells the
  agent to deploy an image. An optional config [agent]memory_consumed_by_agent
  is provided. When Ironic does this check, this config option may be set to
  factor in the amount of RAM to reserve for running the agent.
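
  As a sketch, tuning that check in ironic.conf might look like the following
  (the value is illustrative; size it to your ramdisk's real footprint):

      [agent]
      # RAM the conductor assumes the agent itself consumes; Ironic factors
      # this in when checking whether a node has enough memory for the image
      memory_consumed_by_agent = 512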

* Added Cisco IMC driver

  This driver supports managing Cisco UCS C-series servers through the
  CIMC API, rather than IPMI. Documentation is available at:
  http://docs.openstack.org/developer/ironic/drivers/cimc.html

* iLO virtual media drivers can work without Swift

  iLO virtual media drivers (iscsi_ilo and agent_ilo) can work standalone
  without Swift, by configuring an HTTP(S) server for hosting the
  deploy/boot images. A web server needs to be running on every conductor
  node and needs to be configured in ironic.conf.

  iLO driver documentation is available at:
  http://docs.openstack.org/developer/ironic/drivers/ilo.html
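
  As a rough sketch of that standalone setup (the option names here are
  assumptions based on the iLO driver documentation of this era; verify them
  against the link above):

      [ilo]
      # serve deploy/boot images from a conductor-local web server, not Swift
      use_web_server_for_images = True

      [deploy]
      # the web server running on this conductor node
      http_url = http://192.0.2.10:8080
      http_root = /httpboot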

// jim + deva

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Large Deployments Team][Performance Team] New informal working group suggestion

2015-09-25 Thread Sandeep Raman
On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova  wrote:

> Hey, OpenStackers!
>
> I'm writing to propose organising a new informal team to work specifically
> on OpenStack performance issues. This will be a sub-team of the already
> existing Large Deployments Team, and I suppose it will be a good idea to
> gather people interested in OpenStack performance in one room, identify
> what issues are worrying contributors and what can be done, and share results
> of performance research :)
>

Dina, I'm focused on performance and scale testing [no coding
background]. How can I contribute, and what is the expectation from this
informal team?

>
> So please volunteer to take part in this initiative. I hope many people
> will be interested and we'll be able to use a cross-project session
> slot to meet in Tokyo and
> hold a kick-off meeting.
>

I'm not coming to Tokyo. How could I still be part of any discussions? I
also feel it would be good to have an IRC channel for perf-scale discussion. Let
me know your thoughts.


> I would like to apologise I'm writing to two mailing lists at the same
> time, but I want to make sure that all possibly interested people will
> notice the email.
>
> Thanks and see you in Tokyo :)
>
> Cheers,
> Dina
>
> --
>
> Best regards,
>
> Dina Belova
>
> Senior Software Engineer
>
> Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any point in using _() inpython-novaclient?

2015-09-25 Thread Davanum Srinivas
That sounds like the right solution, Andreas. Thanks!

-- Dims

On Fri, Sep 25, 2015 at 1:42 AM, Andreas Jaeger  wrote:

> On 09/20/2015 03:06 PM, Andreas Jaeger wrote:
>
>> On 09/20/2015 02:16 PM, Duncan Thomas wrote:
>>
>>> Certainly for cinder, and I suspect many other projects, the openstack
>>> client is a wrapper for python-cinderclient libraries, so if you want
>>> translated exceptions then you need to translate python-cinderclient
>>> too, unless I'm missing something?
>>>
>>
>> Ah - let's investigate some more here.
>>
>> Looking at python-cinderclient, I see translations only for the help
>> strings of the client like in cinderclient/shell.py. Are there strings
>> in the library of cinder that will be displayed to the user as well?
>>
>
> We discussed this on the i18n team list and will enable those repos if the
> teams send patches. The i18n team will prioritize which resources get
> translated.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> ___
> Openstack-i18n mailing list
> openstack-i...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Apache2 vs uWSGI vs ...

2015-09-25 Thread Sergii Golovatiuk
Hi,

Morgan gave a perfect example of why operators want to use uWSGI. Let's imagine
a future when all OpenStack services run as mod_wsgi processes under
Apache. That is like putting all your eggs in one basket: if you need to
reconfigure one service on a controller, it may affect another service. For
instance, sometimes operators need to increase the number of threads/processes
for WSGI or add a new virtual host to Apache. That requires a graceful or cold
restart of Apache, which affects the other services. Another case is internal
problems in mod_wsgi, which may lead to an Apache crash affecting all services.

The uWSGI/gunicorn model is safer, as in that case Apache is a reverse proxy
only. This model gives operators flexibility: they may use Apache/nginx as a
proxy or load balancer, and a stop or crash of one service won't lead to
downtime of the other services. Managing OpenStack becomes easier and
friendlier.
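
A minimal sketch of that split for a single service (the path and port are
illustrative, and mod_proxy/mod_proxy_http are assumed to be enabled):

    # Apache acts purely as a reverse proxy / TLS terminator; Keystone runs
    # under uWSGI (or gunicorn) as a separate process on a local port, so it
    # can be restarted or re-tuned without touching Apache itself
    <Location "/identity">
        ProxyPass "http://127.0.0.1:5000/"
        ProxyPassReverse "http://127.0.0.1:5000/"
    </Location>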



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Sep 18, 2015 at 3:44 PM, Morgan Fainberg 
wrote:

> There is and has been desire to support uWSGI and other alternatives to
> mod_wsgi. There are a variety of operational reasons to consider uWSGI
> and/or gunicorn behind apache, most notably to facilitate easier management
> of the processes independently of the webserver itself. With mod_wsgi the
> processes are directly tied to the apache server, whereas with uWSGI and
> gunicorn you can manage the various services independently and/or with
> differing VENVs more easily.
>
> There are potential other concerns that must be weighed when considering
> which method of deployment to use. I hope we have clear documentation
> within the next cycle (and possible choices for the gate) for utilizing
> uWSGI and/or gunicorn.
>
> --Morgan
>
> Sent via mobile
>
> On Sep 18, 2015, at 06:12, Adam Young  wrote:
>
> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>
> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>
> In the fuel project, we recently ran into a couple of issues with Apache2 +
> mod_wsgi as we switched Keystone to run . Please see [1] and [2].
>
> Looking deep into Apache2 issues specifically around "apache2ctl graceful"
> and module loading/unloading and the hooks used by mod_wsgi [3]. I started
> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> something else better that people are already using.
>
> One data point that keeps coming up is, all the CI jobs use Apache2 +
> mod_wsgi so it must be the best solution... Is it? If not, what is?
>
> Disclaimer: it's been a while since I've cared about performance with a
> web server in front of a Python app.
>
> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> on again. In general, I seem to remember it being thought of as a bit
> old and crusty, but mostly working.
>
>
> I am not aware of that.  It has been the workhorse of the Python/wsgi
> world for a while, and we use it heavily.
>
> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> and saw a significant performance increase. This was a Django app. uwsgi
> is fairly straightforward to operate and comes loaded with a myriad of
> options[1] to help folks make the most of it. I've played with Ironic
> behind uwsgi and it seemed to work fine, though I haven't done any sort
> of load testing. I'd encourage folks to give it a shot. :)
>
>
> Again, switching web servers is as likely to introduce as to solve
> problems.  If there are performance issues:
>
> 1.  Identify what causes them
> 2.  Change configuration settings to deal with them
> 3.  Fix upstream bugs in the underlying system.
>
>
> Keystone is not about performance.  Keystone is about security.  The cloud
> is designed to scale horizontally first.  Before advocating switching to a
> different web server, make sure it supports the technologies required.
>
>
> 1. TLS at the latest level
> 2. Kerberos/GSSAPI/SPNEGO
> 3. X509 Client cert validation
> 4. SAML
>
> OpenID connect would be a good one to add to the list;  Its been requested
> for a while.
>
> If Keystone is having performance issues, it is most likely at the
> database layer, not the web server.
>
>
>
> "Programmers waste enormous amounts of time thinking about, or worrying
> about, the speed of noncritical parts of their programs, and these attempts
> at efficiency actually have a strong negative impact when debugging and
> maintenance are considered. We *should* forget about small efficiencies,
> say about 97% of the time: *premature optimization is the root of all
> evil.* Yet we should not pass up our opportunities in that critical
> 3%."   --Donald Knuth
>
>
>
> Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>
> gunicorn[2] is another good option that may be worth investigating; I
> personally don't have any experience with it, but I seem to remember
> hearing it has good eventlet support.
>
> // jim
>
> [0] 

[openstack-dev] [Rally][Meeting][Agenda]

2015-09-25 Thread Roman Vasilets
Hi, this is a friendly reminder that if you want to discuss some topics at
Rally meetings, please add your topic to our meeting agenda
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic discussion, and add some information about the
topic (links, etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Apache2 vs uWSGI vs ...

2015-09-25 Thread Sergii Golovatiuk
Alexandr,

OAuth, Shibboleth & OpenID support are very Keystone-specific features.
Many other OpenStack projects don't need these modules at all, but they may
require a faster HTTP server (lighttpd/nginx).

For all projects we may use the "HTTP server -> uwsgi" model and leave Apache
for Keystone as "HTTP server -> apache -> uwsgi/mod_wsgi". However, I
would like to think about the whole OpenStack ecosystem in general. In that
case we'll minimize the number of programs an operator needs to know.




--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Sep 18, 2015 at 4:28 PM, Alexander Makarov 
wrote:

> Please consider that we use some apache mods - does
> nginx/uwsgi/gunicorn have oauth, shibboleth & openid support?
>
> On Fri, Sep 18, 2015 at 4:54 PM, Vladimir Kuklin 
> wrote:
> > Folks
> >
> > I think we do not need to switch to nginx-only or consider any kind of
> war
> > between nginx and apache adherents. Everyone should be able to use
> > web-server he or she needs without being pinned to the unwanted one. It
> is
> > like Postgres vs MySQL war. Why not support both?
> >
> > May be someone does not need something that apache supports and nginx not
> > and needs nginx features which apache does not support. Let's let our
> users
> > decide what they want.
> >
> > And the first step should be simple here - support for uwsgi. It will
> allow
> > for usage of any web-server that can work with uwsgi. It will allow also
> us
> > to check for the support of all apache-like bindings like SPNEGO or
> whatever
> > and provide our users with enough info on making decisions. I did not
> > personally test nginx modules for SAML and SPNEGO, but I am pretty
> confident
> > about TLS/SSL parts of nginx.
> >
> > Moreover, nginx will allow you to do things you cannot do with apache,
> e.g.
> > do smart load balancing, which may be crucial for high-loaded
> installations.
> >
> >
> > On Fri, Sep 18, 2015 at 4:12 PM, Adam Young  wrote:
> >>
> >> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
> >>
> >> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> >>
> >> In the fuel project, we recently ran into a couple of issues with
> Apache2
> >> +
> >> mod_wsgi as we switched Keystone to run . Please see [1] and [2].
> >>
> >> Looking deep into Apache2 issues specifically around "apache2ctl
> graceful"
> >> and module loading/unloading and the hooks used by mod_wsgi [3]. I
> started
> >> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> >> something else better that people are already using.
> >>
> >> One data point that keeps coming up is, all the CI jobs use Apache2 +
> >> mod_wsgi so it must be the best solution... Is it? If not, what is?
> >>
> >> Disclaimer: it's been a while since I've cared about performance with a
> >> web server in front of a Python app.
> >>
> >> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> >> on again. In general, I seem to remember it being thought of as a bit
> >> old and crusty, but mostly working.
> >>
> >>
> >> I am not aware of that.  It has been the workhorse of the Python/wsgi
> >> world for a while, and we use it heavily.
> >>
> >> At a previous job, we switched from Apache2 + mod_wsgi to nginx +
> uwsgi[0]
> >> and saw a significant performance increase. This was a Django app. uwsgi
> >> is fairly straightforward to operate and comes loaded with a myriad of
> >> options[1] to help folks make the most of it. I've played with Ironic
> >> behind uwsgi and it seemed to work fine, though I haven't done any sort
> >> of load testing. I'd encourage folks to give it a shot. :)
> >>
> >>
> >> Again, switching web servers is as likely to introduce as to solve
> >> problems.  If there are performance issues:
> >>
> >> 1.  Identify what causes them
> >> 2.  Change configuration settings to deal with them
> >> 3.  Fix upstream bugs in the underlying system.
> >>
> >>
> >> Keystone is not about performance.  Keystone is about security.  The
> cloud
> >> is designed to scale horizontally first.  Before advocating switching
> to a
> >> different web server, make sure it supports the technologies required.
> >>
> >>
> >> 1. TLS at the latest level
> >> 2. Kerberos/GSSAPI/SPNEGO
> >> 3. X509 Client cert validation
> >> 4. SAML
> >>
> >> OpenID connect would be a good one to add to the list;  Its been
> requested
> >> for a while.
> >>
> >> If Keystone is having performance issues, it is most likely at the
> >> database layer, not the web server.
> >>
> >>
> >>
> >> "Programmers waste enormous amounts of time thinking about, or worrying
> >> about, the speed of noncritical parts of their programs, and these
> attempts
> >> at efficiency actually have a strong negative impact when debugging and
> >> maintenance are considered. We should forget about small efficiencies,
> say
> >> about 97% of the time: premature optimization is the root of all evil.
> Yet
> 

Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-25 Thread Chris Friesen

On 09/24/2015 04:21 PM, Chris Friesen wrote:

On 09/24/2015 12:18 PM, Chris Friesen wrote:



I think what happened is that we took the SIGTERM after the open() call in
create_iscsi_target(), but before writing anything to the file.

 f = open(volume_path, 'w+')
 f.write(volume_conf)
 f.close()

The 'w+' causes the file to be immediately truncated on opening, leading to an
empty file.

To work around this, I think we need to do the classic "write to a temporary
file and then rename it to the desired filename" trick.  The atomicity of the
rename ensures that either the old contents or the new contents are present.


I'm pretty sure that upstream code is still susceptible to zeroing out the file
in the above scenario.  However, it doesn't hit an exception--that's due to a
local change on our part that attempted to fix the issue below.

The stable/kilo code *does* have a problem in that when it regenerates the file
it's missing the CHAP authentication line (beginning with "incominguser").


I've proposed a change at https://review.openstack.org/#/c/227943/

If anyone has suggestions on how to do this more robustly or more cleanly, 
please let me know.
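
For reference, the classic pattern reads roughly like this (a sketch only, not
the code proposed in the review above; error handling is trimmed):

    import os
    import tempfile

    def atomic_write(path, data):
        # create the temp file in the same directory so the final rename
        # cannot cross filesystems (rename is only atomic within one fs)
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
        try:
            with os.fdopen(fd, 'w') as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # data reaches disk before the rename
            os.rename(tmp, path)  # readers see old or new content, never empty
        except Exception:
            os.unlink(tmp)
            raise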


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Late Liberty patches should now be unblocked ...

2015-09-25 Thread Jay S. Bryant

All,

Now that Mitaka is open I have done my best to go through and remove all 
the -2's that I had given to block Liberty patches that needed to wait 
for Mitaka.


If you have a patch that I missed please ping me on IRC.

Happy Mitaka merging!

Thanks,
Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Russell Bryant
On 09/25/2015 12:15 PM, Fox, Kevin M wrote:
> Another option... why are we wasting time on something that a
> computer can handle? Why not just let the line length be infinite in
> the commit message and have gerrit wrap it to <insert length here> lines on merge?

I don't think gerrit should mess with the commit message at all.  Commit
message formatting is often very intentional.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][horizon] Nice new Network Topology panel in Horizon

2015-09-25 Thread Henry Gessau
It has been about three years in the making but now it is finally here.
A screenshot doesn't do it justice, so here is a short video overview:
https://youtu.be/PxFd-lJV0e4

Isn't that neat? I am sure you can see that it is a great improvement,
especially for larger topologies.

This new view will be part of the Liberty release of Horizon. I encourage you to
take a look at it with your own network topologies, play around with it, and
provide feedback. Please stop by the #openstack-horizon IRC channel if there are
issues you would like addressed.

Thanks to the folks who made this happen.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Andrew Laski

On 09/25/15 at 07:44pm, Rochelle Grober wrote:



Doug Hellmann wrote:
Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +:

On Sep 25, 2015, at 1:24 PM, Brian Rosmaita  
wrote:
>
> I'd like to clarify something.
>
> On 9/25/15, 12:16 PM, "Mark Voelker"  wrote:
> [big snip]
>> Also worth pointing out here: when we talk about “doing the same thing”
>> from a DefCore perspective, we’re essentially talking about what’s
>> exposed to the end user, not how that’s implemented in OpenStack’s source
>> code.  So from an end user’s perspective:
>>
>> If I call nova image-create, I get an image in my cloud.  If I call the
>> Glance v2 API to create an image, I also get an image in my cloud.  I
>> neither see nor care that Nova is actually talking to Glance in the
>> background, because if I’m writing code that uses the OpenStack API’s, I
>> need to pick which one of those two API’s to make my code call upon to
>> put an image in my cloud.  Or, in the worst case, I have to write a bunch
>> of if/else loops into my code because some clouds I want to use only
>> allow one way and some allow only the other.
>
> The above is a bit inaccurate.
>
> The nova image-create command does give you an image in your cloud.  The
> image you get, however, is a snapshot of an instance that has been
> previously created in Nova.  If you don't have an instance, you cannot
> create an image via that command.  There is no provision in the Compute
> (Nova) API to allow you to create an image out of bits that you supply.
>
> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> register them as an image which you can then use to boot instances from by
> using the Compute API.  But note that if all you have available is the
> Images API, you cannot create an image of one of your instances.
>
>> So from that end-user perspective, the Nova image-create API indeed does
>> “do the same thing” as the Glance API.
>
> They don't "do the same thing".  Even if you have full access to the
> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> create an image of an instance, which is by far the largest use-case for
> image creation.  You can't do it through Glance, because Glance doesn't
> know anything about instances.  Nova has to know about Glance, because it
> needs to fetch images for instance creation, and store images for
> on-demand images of instances.

Yup, that’s fair: this was a bad example to pick (need moar coffee I guess).  
Let’s use image-list instead. =)


From a "technical direction" perspective, I still think it's a bad
situation for us to be relying on any proxy APIs like this. Yes,
they are widely deployed, but we want to be using glance for image
features, neutron for networking, etc. Having the nova proxy is
fine, but while we have DefCore using tests to enforce the presence
of the proxy we can't deprecate those APIs.

What do we need to do to make that change happen over the next cycle
or so?

[Rocky]
This is likely the first case DefCore will have of deprecating a requirement 
;-)  The committee wasn't thrilled with the original requirement, but really, 
can you have OpenStack without some way of creating an instance?  And Glance V1 
had no user facing APIs, so the committee was kind of stuck.

But, going forward, what needs to happen in Dev is for Glance V2 to become *the way* to 
create images, and for Glance V1 to be deprecated *and removed*.  Then we've got two more 
cycles before we can require V2 only.  Yes, DefCore is a trailing requirement.  We have 
to give our user community time to migrate to versions of OpenStack that don't have the 
"old" capability.


I still feel that there's a misunderstanding here.  The Nova API is a 
proxy for listing images and getting details on a particular image but 
otherwise does not expose the capabilities of Glance that the Glance API 
does.  Nova does not allow users to create images in Glance in the 
manner that seems to be under discussion here.  You can boot an instance 
from a preexisting image, modify it, and then have Nova upload a 
snapshot of that instance to Glance.  You cannot take a user-provided 
image and get it into Glance via Nova.  And if there are no images in 
Glance you cannot bootstrap one in via Nova.
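
To make the distinction concrete, the two operations look roughly like this (a
sketch assuming the 2015-era python-glanceclient v2 and python-novaclient; the
endpoint, token, session, and server objects are placeholders):

    from glanceclient import Client as GlanceClient
    from novaclient import client as novaclient

    glance = GlanceClient('2', GLANCE_ENDPOINT, token=TOKEN)  # placeholders

    # Images API: register bits you supply -- no instance required
    image = glance.images.create(name='cirros', disk_format='qcow2',
                                 container_format='bare')
    with open('cirros.qcow2', 'rb') as f:
        glance.images.upload(image.id, f)

    nova = novaclient.Client('2', session=keystone_session)  # placeholder

    # Compute API: snapshot an *existing* server; there is no way to push
    # raw image bits through this call
    nova.servers.create_image(my_server, 'my-server-snap')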





But now comes the tricky part... How do you allow both V1 and V2 capabilities 
and still be interoperable?  This will definitely be the first test for DefCore 
on migration from obsolete capabilities to current capabilities.  We could use 
some help figuring out how to make that work.

--Rocky

Doug



At Your Service,

Mark T. Voelker

>
>
>> At Your Service,
>>
>> Mark T. Voelker
>
> Glad to be of service, too,
> brian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Dmitriy Ukhlov
Hello Joshua, thank you for your feedback.

This will end up on review.openstack.org right so that it can be properly
> reviewed (it will likely take a while since it looks to be ~1000+ lines of
> code)?


Yes, sure, I will send this patch to review.openstack.org, but first of all
I need to get the devstack patch merged (
https://review.openstack.org/#/c/226348/).
Then I will add gate jobs testing the new driver using devstack, and then I
will send the pika driver patch for review.

Also suggestion, before that merges, can docs be added, seems like very
> little docstrings about what/why/how. For sustainability purposes that
> would be appreciated I think.


Ok. Will add.

On Fri, Sep 25, 2015 at 6:58 PM, Joshua Harlow  wrote:

> Also a side question, that someone might know,
>
> Whatever happened to the folks from rabbitmq (incorporated? pivotal?) who
> were going to get involved in oslo.messaging, did that ever happen; if
> anyone knows?
>
> They might be a good bunch of people to review such a pika driver (since I
> think they as a corporation created pika?).
>
> Dmitriy Ukhlov wrote:
>
>> Hello stackers,
>>
>> I'm working on a new oslo.messaging RabbitMQ driver implementation which
>> uses the pika client library instead of kombu. It is related to
>> https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
>> In this letter I want to share current results and probably get first
>> feedack from you.
>> Now code is availabe here:
>>
>> https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py
>>
>> Current status of this code:
>> - pika driver passes functional tests
>> - pika driver passes tempest smoke tests
>> - pika driver passes almost all tempest full tests (except 5), but it
>> seems the reason is not related to oslo.messaging
>> Also I created small devstack patch to support pika driver testing on
>> gate (https://review.openstack.org/#/c/226348/)
>>
>> Next steps:
>> - communicate with Manish (blueprint owner)
>> - write spec to this blueprint
>> - send a review with this patch when spec and devstack patch get merged.
>>
>> Thank you.
>>
>>
>> --
>> Best regards,
>> Dmitriy Ukhlov
>> Mirantis Inc.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 1:56 PM, Doug Hellmann  wrote:
> 
> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +:
>> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita  
>> wrote:
>>> 
>>> I'd like to clarify something.
>>> 
>>> On 9/25/15, 12:16 PM, "Mark Voelker"  wrote:
>>> [big snip]
>>>> Also worth pointing out here: when we talk about “doing the same thing”
>>>> from a DefCore perspective, we’re essentially talking about what’s
>>>> exposed to the end user, not how that’s implemented in OpenStack’s source
>>>> code.  So from an end user’s perspective:
>>>>
>>>> If I call nova image-create, I get an image in my cloud.  If I call the
>>>> Glance v2 API to create an image, I also get an image in my cloud.  I
>>>> neither see nor care that Nova is actually talking to Glance in the
>>>> background, because if I’m writing code that uses the OpenStack API’s, I
>>>> need to pick which one of those two API’s to make my code call upon to
>>>> put an image in my cloud.  Or, in the worst case, I have to write a bunch
>>>> of if/else loops into my code because some clouds I want to use only
>>>> allow one way and some allow only the other.
>>> 
>>> The above is a bit inaccurate.
>>> 
>>> The nova image-create command does give you an image in your cloud.  The
>>> image you get, however, is a snapshot of an instance that has been
>>> previously created in Nova.  If you don't have an instance, you cannot
>>> create an image via that command.  There is no provision in the Compute
>>> (Nova) API to allow you to create an image out of bits that you supply.
>>> 
>>> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
>>> register them as an image which you can then use to boot instances from by
>>> using the Compute API.  But note that if all you have available is the
>>> Images API, you cannot create an image of one of your instances.
>>> 
>>>> So from that end-user perspective, the Nova image-create API indeed does
>>>> “do the same thing” as the Glance API.
>>> 
>>> They don't "do the same thing".  Even if you have full access to the
>>> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
>>> create an image of an instance, which is by far the largest use-case for
>>> image creation.  You can't do it through Glance, because Glance doesn't
>>> know anything about instances.  Nova has to know about Glance, because it
>>> needs to fetch images for instance creation, and store images for
>>> on-demand images of instances.
>> 
>> Yup, that’s fair: this was a bad example to pick (need moar coffee I guess). 
>>  Let’s use image-list instead. =)
> 
> From a "technical direction" perspective, I still think it's a bad

Ah.  Thanks for bringing that up, because I think this may be an area where 
there’s some misconception about what DefCore is set up to do today.  In its 
present form, the Board of Directors has structured DefCore to look much more 
at trailing indicators of market acceptance rather than future technical 
direction.  More on that over here. [1] 



> situation for us to be relying on any proxy APIs like this. Yes,
> they are widely deployed, but we want to be using glance for image
> features, neutron for networking, etc. Having the nova proxy is
> fine, but while we have DefCore using tests to enforce the presence
> of the proxy we can't deprecate those APIs.


Actually that’s not true: DefCore can totally deprecate things too, and can do 
so in response to the technical community deprecating things.  See my comments 
in this review [2].  Maybe I need to write another post about that...

/me envisions the title being “Who’s on First?”


> 
> What do we need to do to make that change happen over the next cycle
> or so?

There are several things that can be done:

First, if you don’t like the Criteria or the weights the various Criteria have 
today, we can suggest changes to them.  The Board of Directors will 
ultimately have to approve that change, but we can certainly ask (I think 
there’s plenty of evidence that our Directors listen to the community’s 
concerns).  There’s actually already some early discussion about that now, 
though most of the energy is going into other things at the moment (because 
deadlines).  See the post above for links.

Second, we certainly could consider changes to the Capabilities that are 
currently required.  That happens every six months according to a 
Board-approved schedule. [3]  The window is just about to close for the next 
Guideline, but that might be ok given that a lot of stuff is 
likely to be advisory in the next Guideline anyway, and advisory cycles are 
explicitly meant to generate feedback like this.  Making changes to Guidelines 
is basically submitting a patch. [4]

Third, as a technical community we can make the capabilities we want score 
better.  So for example: we could make nova image use glance v2, or we could 
deprecate 

Re: [openstack-dev] Apache2 vs uWSGI vs ...

2015-09-25 Thread Morgan Fainberg
There is no reason why the wsgi app container matters. This is simply a "we 
should document use of uwsgi and/or gunicorn as an alternative to mod_wsgi". If 
one solution is better for the gate it will be used there, and each deployment 
will make the determination of what it wants to use. Adam's point remains 
regardless of what wsgi solution is used. 

> On Sep 25, 2015, at 09:23, Adam Young  wrote:
> 
>> On 09/25/2015 07:09 AM, Sergii Golovatiuk wrote:
>> Hi,
>> 
>> Morgan gave a perfect example of why operators want to use uWSGI. Let's imagine 
>> a future when all OpenStack services run as mod_wsgi processes under 
>> Apache. That is like putting all your eggs in one basket: if you need to 
>> reconfigure one service on a controller, it may affect another service. For 
>> instance, sometimes operators need to increase the number of threads/processes 
>> for WSGI or add a new virtual host to Apache. That requires a graceful or cold 
>> restart of Apache, which affects the other services. Another case is internal 
>> problems in mod_wsgi, which may lead to an Apache crash affecting all services. 
>> 
>> The uWSGI/gunicorn model is safer, as in that case Apache is a reverse proxy 
>> only. This model gives operators flexibility: they may use Apache/nginx as a 
>> proxy or load balancer, and a stop or crash of one service won't lead to 
>> downtime of the other services. Managing OpenStack becomes easier and 
>> friendlier.
> 
> There are some fallacies here:
> 
> 1. OpenStack services should all be on the same machine.
> 2. OpenStack web services should run on ports other than 443.
> 
> I think both of these are ideas whose time has come and gone.
> 
> If you have a single machine, run them out of separate containers.  That 
> allows different services to work with different versions of the libraries. 
> It lets you mix a newer Keystone with older everything else.
> 

Often the APIs are deployed on a common set of nodes. 

> If everything is on port 443, you need a single web server at the front end 
> to multiplex it;  uWSGI or any other one does not obviate that.
> 

++

> 
> There are no good ports left in /etc/services;  stop trying to reserve new 
> ones for the web.  If you need to run on a web service, you need to be able 
> to get through firewalls.  You need to run on standard ports. Run on 443.
> 
> Keystone again is a great example of this: it has two ports: 5000 and 35357.
> 
> port 5000 in /etc/services is
> 
> commplex-main   5000/tcp
> 
> and  port 35357 is smack dab in the middle of the ephemeral range.
> 

This is a disconnect between Linux and IANA. IANA has said 35357 is not 
ephemeral; Linux defaults say it is. 

> 
> Again, so long as the web server supports the cryptographically secure 
> mechanisms, I don't care which one you choose.  But the idea of us going to 
> Keystone and getting a bearer token as the basis for security is immature;  
> we should be doing the following on every call:
> 
> 1. TLS
> 2. Cryptographic authentication.
> 
> 
> They can be together or split up.
> 
> So, let's get everything running inside Apache, and, at the same time, push 
> our other favorite web servers to support the necessary pieces to make 
> OpenStack and the Web secure.
> 

++. We should do this and also document alternatives for wsgi, which has no 
impact on this goal. Let's try and keep focused on the different initiatives and 
not cross the reasons for them. 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Andrew Laski

On 09/25/15 at 03:09pm, Mark Voelker wrote:

On Sep 25, 2015, at 10:42 AM, Andrew Laski  wrote:


On 09/25/15 at 09:59am, Doug Hellmann wrote:

Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:

>
> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  wrote:
>
> Hi Melanie
>
> In general, images created by glance v1 API should be accessible using v2 and
> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an 
image was
> causing incompatibility. These fixes were back-ported to stable/kilo.
>
> Thanks
> Sabari
>
> [1] - https://bugs.launchpad.net/glance/+bug/1447215
> [2] - https://bugs.launchpad.net/bugs/1419823
> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>
>
> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
> Hi All,
>
> I have been looking and haven't yet located documentation about how to 
upgrade from glance v1 to glance v2.
>
> From what I understand, images and snapshots created with v1 can't be 
listed/accessed through the v2 api. Are there instructions about how to migrate 
images and snapshots from v1 to v2? Are there other incompatibilities between v1 
and v2?
>
> I'm asking because I have read that glance v1 isn't defcore compliant and so 
we need all projects to move to v2, but the incompatibility from v1 to v2 is 
preventing that in nova. Is there anything else preventing v2 adoption? Could we 
move to glance v2 if there's a migration path from v1 to v2 that operators can run 
through before upgrading to a version that uses v2 as the default?

Just to clarify the DefCore situation a bit here:

The DefCore Committee is considering adding some Glance v2
capabilities [1] as “advisory” (e.g. not required now but might be
in the future unless folks provide feedback as to why it shouldn’t
be) in its next Guideline, which is due to go to the Board of Directors
in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
API’s are already required [3][4].  As discussion began about which
Glance capabilities to include and whether or not to keep the Nova
image API’s as required, it was pointed out that the many ways images
can currently be created in OpenStack is problematic from an
interoperability point of view in that some clouds use one and some use
others.  To be included in a DefCore Guideline, capabilities are scored
against twelve Criteria [5], and need to achieve a certain total to be
included.  Having a bunch of different ways to deal with images
actually hurts the chances of any one of them meeting the bar because
it makes it less likely that they’ll achieve several criteria.  For
example:


One of the criteria is “widely deployed” [6].  In the case of images, the 
Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance 
v1 isn’t, and at least one product uses none of those but instead uses the 
import task API.

Another criterion is “atomic” [8], which basically means the capability is unique 
and can’t be built out of other required capabilities.  Since the Nova 
image-create API is already required and effectively does the same thing as 
glance v1 and v2’s image create API’s, the latter lose points.


This seems backwards. The Nova API doesn't "do the same thing" as
the Glance API, it is a *proxy* for the Glance API. We should not
be requiring proxy APIs for interop. DefCore should only be using
tests that talk directly to the service that owns the feature being
tested.


I completely agree with this.  I will admit to having some confusion as to why
Glance capabilities have been tested through Nova, and I know others have raised
this same thought within the process.


Because it turns out that’s how most of the world is dealing with images.

Generally speaking, the nova image API and glance v2 API’s have roughly equal 
adoption among public and private cloud products, but among the client SDK’s 
people are using to interact with OpenStack the nova image API’s have much 
better adoption (see notes in previous message for details).  So we gave the 
world lots of different ways to do the same thing and the world has strongly 
adopted two of them (with reasonable evidence that the Nova image API is 
actually the most-adopted of the lot).  If you’re looking for the most 
interoperable way to create an image across lots of different OpenStack clouds 
today, it’s actually through Nova.


I understand that reasoning, but still am unsure on a few things.

The direction seems to be moving towards requiring that the same
functionality be offered in two places, the Nova API and the Glance v2 API.
That seems like it would fragment adoption rather than unify it.


Also, after digging in on image-create, I feel that there may be a mixup.
The image-create in Glance and image-create in Nova are two different
things.  In Glance you create an image and send the disk image data in
the request; in Nova an image-create takes a snapshot of an existing instance.

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 1:24 PM, Brian Rosmaita  
wrote:
> 
> I'd like to clarify something.
> 
> On 9/25/15, 12:16 PM, "Mark Voelker"  wrote:
> [big snip]
> >> Also worth pointing out here: when we talk about “doing the same thing”
> >> from a DefCore perspective, we’re essentially talking about what’s
> >> exposed to the end user, not how that’s implemented in OpenStack’s source
> >> code.  So from an end user’s perspective:
> >> 
> >> If I call nova image-create, I get an image in my cloud.  If I call the
> >> Glance v2 API to create an image, I also get an image in my cloud.  I
> >> neither see nor care that Nova is actually talking to Glance in the
> >> background, because if I’m writing code that uses the OpenStack API’s, I
> >> need to pick which one of those two API’s to make my code call upon to
> >> put an image in my cloud.  Or, in the worst case, I have to write a bunch
> >> of if/else loops into my code because some clouds I want to use only
> >> allow one way and some allow only the other.
> 
> The above is a bit inaccurate.
> 
> The nova image-create command does give you an image in your cloud.  The
> image you get, however, is a snapshot of an instance that has been
> previously created in Nova.  If you don't have an instance, you cannot
> create an image via that command.  There is no provision in the Compute
> (Nova) API to allow you to create an image out of bits that you supply.
> 
> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> register them as an image which you can then use to boot instances from by
> using the Compute API.  But note that if all you have available is the
> Images API, you cannot create an image of one of your instances.
> 
> >> So from that end-user perspective, the Nova image-create API indeed does
> >> “do the same thing” as the Glance API.
> 
> They don't "do the same thing".  Even if you have full access to the
> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> create an image of an instance, which is by far the largest use-case for
> image creation.  You can't do it through Glance, because Glance doesn't
> know anything about instances.  Nova has to know about Glance, because it
> needs to fetch images for instance creation, and store images for
> on-demand images of instances.

Yup, that’s fair: this was a bad example to pick (need moar coffee I guess).  
Let’s use image-list instead. =)

At Your Service,

Mark T. Voelker


> 
> 
>> At Your Service,
>> 
>> Mark T. Voelker
> 
> Glad to be of service, too,
> brian
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Late Liberty patches should now be unblocked ...

2015-09-25 Thread Ivan Kolodyazhny
Thanks, Jay.

I've removed my -2's last night. If I missed something, please ping me via
e-mail or IRC (e0ne).

Regards,
Ivan Kolodyazhny


On Fri, Sep 25, 2015 at 7:59 PM, Jay S. Bryant <
jsbry...@electronicjungle.net> wrote:

> All,
>
> Now that Mitaka is open I have done my best to go through and remove all
> the -2's that I had given to block Liberty patches that needed to wait for
> Mitaka.
>
> If you have a patch that I missed please ping me on IRC.
>
> Happy Mitaka merging!
>
> Thanks,
> Jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-25 Thread Mitsuhiro Tanino
> -Original Message-
> From: Eric Harney [mailto:ehar...@redhat.com]
> Sent: Friday, September 25, 2015 2:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
> config file?
> 
> On 09/25/2015 02:30 PM, Mitsuhiro Tanino wrote:
> > On 09/22/2015 06:43 PM, Robert Collins wrote:
> >> On 23 September 2015 at 09:52, Chris Friesen
> >>  wrote:
> >>> Hi,
> >>>
> >>> I recently had an issue with one file out of a dozen or so in
> >>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.
> >>> I'm running stable/kilo if it makes a difference.
> >>>
> >>> Looking at the code in
> >>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we
> >>> should do a fsync() before the close().  The way it stands now, it
> >>> seems like it might be possible to write the file, start making use
> >>> of it, and then take a power outage before it actually gets written
> >>> to persistent storage.  When we come back up we could have an
> >>> instance expecting to make use of it, but no target information in the on-
> disk copy of the file.
> >
> > I think even if there is no target information in the configuration file
> > dir, c-vol started successfully and iSCSI targets were created automatically
> and volumes were exported, right?
> >
> > There is a problem in this case: the iSCSI target was created
> > without authentication because we can't get the previous authentication from the
> configuration file.
> >
> > I'm curious what kind of problem you met?
> >
> >> If its being kept in sync with DB records, and won't self-heal from
> >> this situation, then yes. e.g. if the overall workflow is something
> >> like
> >
> > In my understanding, the provider_auth in the database has the user name and
> > password
> for the iSCSI target.
> > Therefore if we get authentication from the DB, I think we can self-heal
> > from this situation correctly after the c-vol service is restarted.
> >
> 
> Is this not already done as-needed by ensure_export()?

Yes. This logic is in ensure_export, but only the lio target uses the DB and
the other targets use the file.
 
> > The lio target obtains authentication from provider_auth in database,
> > but tgtd, iet, cxt obtain authentication from file to recreate iSCSI target
> when c-vol is restarted.
> > If the file is missing, these volumes are exported without
> > authentication and configuration file is recreated as I mentioned above.
> >
> > tgtd: Get target chap auth from file
> > iet:  Get target chap auth from file
> > cxt:  Get target chap auth from file
> > lio:  Get target chap auth from Database(in provider_auth)
> > scst: Get target chap auth by using original command
> >
> > If we get authentication from DB for tgtd, iet and cxt same as lio, we
> > can recreate iSCSI target with proper authentication when c-vol is 
> > restarted.
> > I think this is a solution for this situation.
> >
> 
> This may be possible, but fixing the target config file to be written more
> safely to work as currently intended is still a win.

I think it is better to fix both of them:
(1) Add logic to write the configuration file using fsync (see the sketch below).
(2) Read authentication from the database during ensure_export(), the same as the lio target.
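
For what it's worth, a minimal sketch of what (1) could look like in plain
Python (not an actual Cinder patch; the function name and arguments are
invented to mirror the create_iscsi_target() snippet quoted later in this
message):

    import os
    import tempfile

    def write_config_atomically(volume_path, volume_conf):
        # Write to a temp file in the same directory, fsync it, then
        # rename over the target: a crash leaves either the old or the
        # new contents, never a truncated file.
        dirname = os.path.dirname(volume_path)
        fd, tmp_path = tempfile.mkstemp(dir=dirname)
        try:
            os.write(fd, volume_conf.encode('utf-8'))
            os.fsync(fd)              # data reaches persistent storage
        finally:
            os.close(fd)
        os.rename(tmp_path, volume_path)  # atomic replacement on POSIX
        dir_fd = os.open(dirname, os.O_RDONLY)
        try:
            os.fsync(dir_fd)          # persist the rename itself
        finally:
            os.close(dir_fd)

(Note mkstemp() creates the file with 0600 permissions, so real code may
need to restore whatever permissions the target service expects.)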

Thanks,
Mitsuhiro Tanino

> > Any thought?
> >
> > Thanks,
> > Mitsuhiro Tanino
> >
> >> -Original Message-
> >> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> >> Sent: Friday, September 25, 2015 12:48 PM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [cinder] should we use fsync when
> >> writing iscsi config file?
> >>
> >> On 09/24/2015 04:21 PM, Chris Friesen wrote:
> >>> On 09/24/2015 12:18 PM, Chris Friesen wrote:
> >>>
> 
>  I think what happened is that we took the SIGTERM after the open()
>  call in create_iscsi_target(), but before writing anything to the file.
> 
>   f = open(volume_path, 'w+')
>   f.write(volume_conf)
>   f.close()
> 
>  The 'w+' causes the file to be immediately truncated on opening,
>  leading to an empty file.
> 
>  To work around this, I think we need to do the classic "write to a
>  temporary file and then rename it to the desired filename" trick.
>  The atomicity of the rename ensures that either the old contents or
>  the new
> >> contents are present.
> >>>
> >>> I'm pretty sure that upstream code is still susceptible to zeroing
> >>> out the file in the above scenario.  However, it doesn't take an
> >>> exception--that's due to a local change on our part that attempted
> >>> to fix the
> >> below issue.
> >>>
> >>> The stable/kilo code *does* have a problem in that when it
> >>> regenerates the file it's missing the CHAP authentication line
> >>> (beginning with
> >> "incominguser").
> >>
> >> I've proposed a change at
> >> https://review.openstack.org/#/c/227943/

Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-25 Thread Mitsuhiro Tanino
> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: Friday, September 25, 2015 3:04 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
> config file?
> 
> On 09/25/2015 12:30 PM, Mitsuhiro Tanino wrote:
> > On 09/22/2015 06:43 PM, Robert Collins wrote:
> >> On 23 September 2015 at 09:52, Chris Friesen
> >>  wrote:
> >>> Hi,
> >>>
> >>> I recently had an issue with one file out of a dozen or so in
> >>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.
> >>> I'm running stable/kilo if it makes a difference.
> >>>
> >>> Looking at the code in
> >>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we
> >>> should do a fsync() before the close().  The way it stands now, it
> >>> seems like it might be possible to write the file, start making use
> >>> of it, and then take a power outage before it actually gets written
> >>> to persistent storage.  When we come back up we could have an
> >>> instance expecting to make use of it, but no target information in the on-
> disk copy of the file.
> >
> > I think even if there is no target information in the configuration file
> > dir, c-vol started successfully and iSCSI targets were created automatically
> and volumes were exported, right?
> >
> > There is a problem in this case: the iSCSI target was created
> > without authentication because we can't get the previous authentication from the
> configuration file.
> >
> > I'm curious what kind of problem you met?
> 
> We had an issue in a private patch that was ported to Kilo without realizing
> that the data type of chap_auth had changed.

I understand. Thank you for your explanation.
 
> > In my understanding, the provider_auth in the database has the user name and
> > password
> for the iSCSI target.
> > Therefore if we get authentication from the DB, I think we can self-heal
> > from this situation correctly after the c-vol service is restarted.
> >
> > The lio target obtains authentication from provider_auth in database,
> > but tgtd, iet, cxt obtain authentication from file to recreate iSCSI target
> when c-vol is restarted.
> > If the file is missing, these volumes are exported without
> > authentication and configuration file is recreated as I mentioned above.
> >
> > tgtd: Get target chap auth from file
> > iet:  Get target chap auth from file
> > cxt:  Get target chap auth from file
> > lio:  Get target chap auth from Database(in provider_auth)
> > scst: Get target chap auth by using original command
> >
> > If we get authentication from DB for tgtd, iet and cxt same as lio, we
> > can recreate iSCSI target with proper authentication when c-vol is 
> > restarted.
> > I think this is a solution for this situation.
> 
> If we fixed the chap auth info then we could live with a zero-size file.
> However, with the current code if we take a kernel panic or power outage it's
> theoretically possible to end up with a corrupt file of nonzero size (due to
> metadata hitting the persistent storage before the data).  I'm not confident
> that the current code would deal properly with that.
> 
> That said, if we always regenerate every file from the DB on cinder-volume
> startup (regardless of whether or not it existed, and without reading in the
> existing file), then we'd be okay without the robustness improvements.

This file is referred to when the SCSI target service is restarted.
Therefore, adding robustness for this file is also a good approach, IMO.

Thanks,
Mitsuhiro Tanino

> Chris
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Rochelle Grober


Doug Hellmann wrote:
Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita  
> wrote:
> > 
> > I'd like to clarify something.
> > 
> > On 9/25/15, 12:16 PM, "Mark Voelker"  wrote:
> > [big snip]
> >> Also worth pointing out here: when we talk about “doing the same thing”
> >> from a DefCore perspective, we’re essentially talking about what’s
> >> exposed to the end user, not how that’s implemented in OpenStack’s source
> >> code.  So from an end user’s perspective:
> >> 
> >> If I call nova image-create, I get an image in my cloud.  If I call the
> >> Glance v2 API to create an image, I also get an image in my cloud.  I
> >> neither see nor care that Nova is actually talking to Glance in the
> >> background, because if I’m writing code that uses the OpenStack API’s, I
> >> need to pick which one of those two API’s to make my code call upon to
> >> put an image in my cloud.  Or, in the worst case, I have to write a bunch
> >> of if/else loops into my code because some clouds I want to use only
> >> allow one way and some allow only the other.
> > 
> > The above is a bit inaccurate.
> > 
> > The nova image-create command does give you an image in your cloud.  The
> > image you get, however, is a snapshot of an instance that has been
> > previously created in Nova.  If you don't have an instance, you cannot
> > create an image via that command.  There is no provision in the Compute
> > (Nova) API to allow you to create an image out of bits that you supply.
> > 
> > The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> > register them as an image which you can then use to boot instances from by
> > using the Compute API.  But note that if all you have available is the
> > Images API, you cannot create an image of one of your instances.
> > 
> >> So from that end-user perspective, the Nova image-create API indeed does
> >> “do the same thing” as the Glance API.
> > 
> > They don't "do the same thing".  Even if you have full access to the
> > Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> > create an image of an instance, which is by far the largest use-case for
> > image creation.  You can't do it through Glance, because Glance doesn't
> > know anything about instances.  Nova has to know about Glance, because it
> > needs to fetch images for instance creation, and store images for
> > on-demand images of instances.
> 
> Yup, that’s fair: this was a bad example to pick (need moar coffee I guess).  
> Let’s use image-list instead. =)

From a "technical direction" perspective, I still think it's a bad
situation for us to be relying on any proxy APIs like this. Yes,
they are widely deployed, but we want to be using glance for image
features, neutron for networking, etc. Having the nova proxy is
fine, but while we have DefCore using tests to enforce the presence
of the proxy we can't deprecate those APIs.

What do we need to do to make that change happen over the next cycle
or so?

[Rocky]
This is likely the first case DefCore will have of deprecating a requirement
;-)  The committee wasn't thrilled with the original requirement, but really,
can you have OpenStack without some way of creating an instance?  And Glance V1
had no user-facing APIs, so the committee was kind of stuck.

But, going forward, what needs to happen in Dev is for Glance V2 to become *the 
way* to create images, and for Glance V1 to be deprecated *and removed*.  Then 
we've got two more cycles before we can require V2 only.  Yes, DefCore is a 
trailing requirement.  We have to give our user community time to migrate to 
versions of OpenStack that don't have the "old" capability.

But now comes the tricky part... How do you allow both V1 and V2 capabilities
and still be interoperable?  This will definitely be the first test for DefCore 
on migration from obsolete capabilities to current capabilities.  We could use 
some help figuring out how to make that work.

--Rocky

Doug

> 
> At Your Service,
> 
> Mark T. Voelker
> 
> > 
> > 
> >> At Your Service,
> >> 
> >> Mark T. Voelker
> > 
> > Glad to be of service, too,
> > brian
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Heat] Assumptions regarding extensions to OpenStack api's

2015-09-25 Thread Monty Taylor

On 09/25/2015 02:32 PM, Pratik Mallya wrote:

Hello Heat Team,

I was wondering if OpenStack Heat assumes that the Nova extensions api
would always exist in a cloud? My impression was that since these
features are extensions, they may or may not be implemented by the cloud
provider and hence Heat must not rely on them being present.

My question is prompted by this code change: [0] where it is assumed
that the os-interfaces extension [1] is implemented.

If we cannot rely on that assumption, then that code would need to be
changed with a 404 guard since that endpoint may not exist and the nova
client may thus raise a 404.


Correct. Extensions are not everywhere and so you must either query the 
extensions API to find out what extensions the cloud has, or you must 
404 guard.


Of course, you can't ONLY 404 guard, because the cloud may also throw 
unauthorized - so querying the nova extension API is the more correct 
way to deal with it.
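
To make that concrete, a rough sketch of the belt-and-suspenders approach
(illustrative only: it assumes a python-novaclient-style client, and the
'os-attach-interfaces' alias check is my assumption, not taken from the
Heat patch):

    from novaclient import exceptions as nova_exc

    def interface_list_if_supported(nova, server_id):
        # Prefer discovery, but still guard the actual call.
        try:
            # python-novaclient exposes the extension listing via
            # list_extensions.show_all(); manager names vary by version.
            aliases = set(e.alias for e in nova.list_extensions.show_all())
        except Exception:
            aliases = None  # discovery unavailable; fall back to guarding

        if aliases is not None and 'os-attach-interfaces' not in aliases:
            return None  # cloud does not advertise the extension

        try:
            return nova.servers.interface_list(server_id)
        except (nova_exc.NotFound, nova_exc.Forbidden,
                nova_exc.Unauthorized):
            return None  # extension missing, or blocked by policy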



Thanks,
Pratik Mallya
Software Developer
Rackspace, Inc.

[0]:
https://github.com/openstack/heat/commit/54c26453a0a8e8cb574858c7e1d362d0abea3822#diff-b3857cb91556a2a83f40842658589e4fR163
[1]: http://developer.openstack.org/api-ref-compute-v2-ext.html#os-interface


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-25 Thread Chris Friesen

On 09/25/2015 12:30 PM, Mitsuhiro Tanino wrote:

On 09/22/2015 06:43 PM, Robert Collins wrote:

On 23 September 2015 at 09:52, Chris Friesen
 wrote:

Hi,

I recently had an issue with one file out of a dozen or so in
"/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm
running stable/kilo if it makes a difference.

Looking at the code in
volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we
should do a fsync() before the close().  The way it stands now, it
seems like it might be possible to write the file, start making use
of it, and then take a power outage before it actually gets written
to persistent storage.  When we come back up we could have an
instance expecting to make use of it, but no target information in the on-disk 
copy of the file.


I think even if there is no target information in the configuration file dir,
c-vol started successfully and iSCSI targets were created automatically and
volumes were exported, right?

There is a problem in this case: the iSCSI target was created without
authentication because we can't get the previous authentication from the
configuration file.

I'm curious what kind of problem you met?


We had an issue in a private patch that was ported to Kilo without realizing 
that the data type of chap_auth had changed.



In my understanding, the provider_auth in the database has the user name and
password for the iSCSI target.
Therefore if we get authentication from the DB, I think we can self-heal from
this situation correctly after the c-vol service is restarted.

The lio target obtains authentication from provider_auth in database, but tgtd, 
iet, cxt obtain
authentication from file to recreate iSCSI target when c-vol is restarted.
If the file is missing, these volumes are exported without authentication and 
configuration
file is recreated as I mentioned above.

tgtd: Get target chap auth from file
iet:  Get target chap auth from file
cxt:  Get target chap auth from file
lio:  Get target chap auth from Database(in provider_auth)
scst: Get target chap auth by using original command

If we get authentication from DB for tgtd, iet and cxt same as lio, we can 
recreate iSCSI target
with proper authentication when c-vol is restarted.
I think this is a solution for this situation.


If we fixed the chap auth info then we could live with a zero-size file. 
However, with the current code if we take a kernel panic or power outage it's 
theoretically possible to end up with a corrupt file of nonzero size (due to 
metadata hitting the persistent storage before the data).  I'm not confident 
that the current code would deal properly with that.


That said, if we always regenerate every file from the DB on cinder-volume 
startup (regardless of whether or not it existed, and without reading in the 
existing file), then we'd be okay without the robustness improvements.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Jeremy Stanley
On 2015-09-25 16:15:15 +0000 (+0000), Fox, Kevin M wrote:
> Another option... why are we wasting time on something that a
> computer can handle? Why not just let the line length be infinite
> in the commit message and have gerrit wrap it to <insert number here> length lines on merge?

The commit message content (including whitespace/formatting) is part
of the data fed into the hash algorithm to generate the commit
identifier. If Gerrit changed the commit message at upload, that
would alter the Git SHA compared to your local copy of the same
commit. This quickly goes down a Git madness rabbit hole (not the
least of which is that it would completely break signed commits).
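
To see why, note that a commit object is just sha1 over a "commit <size>\0"
header plus the raw body, which includes the full message. A tiny
illustration (not Gerrit code; the bodies are simplified, and real ones
also carry author/committer lines):

    import hashlib

    def git_commit_sha(body):
        # Git object IDs: sha1 of b"commit <size>\0" + raw commit body.
        data = b'commit ' + str(len(body)).encode('ascii') + b'\x00' + body
        return hashlib.sha1(data).hexdigest()

    # The tree below is Git's well-known empty tree.
    a = b'tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n\na message\n'
    b = b'tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n\na  message\n'

    # Rewrapping whitespace in the message alone changes the commit ID:
    print(git_commit_sha(a))
    print(git_commit_sha(b))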
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] Recent proposal to remove NetApp drivers

2015-09-25 Thread John Griffith
Hey Everyone,

So I've been kinda busy on IRC today with conversations about some patches
I proposed yesterday [1], [2].

I thought maybe I should try and explain my actions a bit, because it seems
that some think I just arbitrarily flew off the handle and proposed
something drastic, or that I had some malicious intent here.

This all started when this bug [3] was submitted against Cinder and
Manila.  So I took a look at the review that merged this ([4]), and sure
enough that should not have merged due to the licensing issue.

At that point I discussed the issue publicly on IRC in the openstack-dev
channel and asked for some guidance/input from others [5].  Note as the log
continues there were violation discoveries on top of the fact that we don't
usually do proprietary libs in OpenStack. I reached out to a few NetApp
folks on the Cinder channel in IRC but was unable to get any real response
other than "I can't really talk about that", so I attempted to revert the
library patch myself.  This however proved to be difficult due to the high
volume of changes that have merged since the original patch landed.

I took it upon myself to attempt to fix the merge conflicts myself, however
this proved to be a rather large task, and frankly I am not familiar enough
with the NetApp code to be making such a large change and "hoping" that I
got it correct.  I again stated this via IRC to a number of people.  After
spending well over an hour working on merge conflicts in the NetApp code,
and having the only response from NetApp developers be "I can't say
anything about that", I then decided that the alternative was to propose
removal of the NetApp drivers altogether which I proposed here [7].

It seems that there are folks that have taken quite a bit of offense to
this, and are more than mildly upset with me.  To them I apologize if this
upset you.  I will say however that given the same situation and timing, I
would do the same thing again.  I'm a bit offended that there are
accusations that I'm intentionally doing something against NetApp (or any
Vendor here).  I won't even dignify the comments by responding, I'll just
let my contributions and involvement in the community speak for itself.

The good news is that as of this morning a NetApp developer has in fact
worked on the revert patch and fixed the merge conflicts (which I've now
spent a fair amount of time this afternoon reviewing), and as soon as that
merges I will propose a backport to stable/liberty.

Thanks,
John

[1]: https://review.openstack.org/#/c/227427/
[2]: https://review.openstack.org/#/c/227524/
[3]: https://bugs.launchpad.net/cinder/+bug/1499334
[4]: https://review.openstack.org/#/c/215700/
[5]:
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T16:56:50
[6]:
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T19:28:33
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Assumptions regarding extensions to OpenStack api's

2015-09-25 Thread Pratik Mallya
Hello Heat Team,

I was wondering if OpenStack Heat assumes that the Nova extensions api would 
always exist in a cloud? My impression was that since these features are 
extensions, they may or may not be implemented by the cloud provider and hence 
Heat must not rely on them being present.

My question is prompted by this code change: [0] where it is assumed that the 
os-interfaces extension [1] is implemented.

If we cannot rely on that assumption, then that code would need to be changed 
with a 404 guard since that endpoint may not exist and the nova client may thus 
raise a 404.

Thanks,
Pratik Mallya
Software Developer
Rackspace, Inc.

[0]: 
https://github.com/openstack/heat/commit/54c26453a0a8e8cb574858c7e1d362d0abea3822#diff-b3857cb91556a2a83f40842658589e4fR163
[1]: http://developer.openstack.org/api-ref-compute-v2-ext.html#os-interface
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon] Nice new Network Topology panel in Horizon

2015-09-25 Thread Daniel Comnea
Great job Henry !

On Fri, Sep 25, 2015 at 6:47 PM, Henry Gessau  wrote:

> It has been about three years in the making but now it is finally here.
> A screenshot doesn't do it justice, so here is a short video overview:
> https://youtu.be/PxFd-lJV0e4
>
> Isn't that neat? I am sure you can see that it is a great improvement,
> especially for larger topologies.
>
> This new view will be part of the Liberty release of Horizon. I encourage
> you to
> take a look at it with your own network topologies, play around with it,
> and
> provide feedback. Please stop by the #openstack-horizon IRC channel if
> there are
> issues you would like addressed.
>
> Thanks to the folks who made this happen.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-25 Thread Eric Harney
On 09/25/2015 02:30 PM, Mitsuhiro Tanino wrote:
> On 09/22/2015 06:43 PM, Robert Collins wrote:
>> On 23 September 2015 at 09:52, Chris Friesen 
>>  wrote:
>>> Hi,
>>>
>>> I recently had an issue with one file out of a dozen or so in 
>>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm 
>>> running stable/kilo if it makes a difference.
>>>
>>> Looking at the code in 
>>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we 
>>> should do a fsync() before the close().  The way it stands now, it 
>>> seems like it might be possible to write the file, start making use 
>>> of it, and then take a power outage before it actually gets written 
>>> to persistent storage.  When we come back up we could have an 
>>> instance expecting to make use of it, but no target information in the 
>>> on-disk copy of the file.
> 
> I think even if there is no target information in the configuration file dir,
> c-vol started successfully and iSCSI targets were created automatically and
> volumes were exported, right?
> 
> There is a problem in this case: the iSCSI target was created without
> authentication because we can't get the previous authentication from the
> configuration file.
> 
> I'm curious what kind of problem you met?
>   
>> If its being kept in sync with DB records, and won't self-heal from 
>> this situation, then yes. e.g. if the overall workflow is something 
>> like
> 
> In my understanding, the provider_auth in the database has the user name and
> password for the iSCSI target.
> Therefore if we get authentication from the DB, I think we can self-heal from
> this situation correctly after the c-vol service is restarted.
> 

Is this not already done as-needed by ensure_export()?

> The lio target obtains authentication from provider_auth in database, but 
> tgtd, iet, cxt obtain
> authentication from file to recreate iSCSI target when c-vol is restarted.
> If the file is missing, these volumes are exported without authentication and 
> configuration
> file is recreated as I mentioned above.
> 
> tgtd: Get target chap auth from file
> iet:  Get target chap auth from file
> cxt:  Get target chap auth from file
> lio:  Get target chap auth from Database(in provider_auth)
> scst: Get target chap auth by using original command
> 
> If we get authentication from DB for tgtd, iet and cxt same as lio, we can 
> recreate iSCSI target
> with proper authentication when c-vol is restarted.
> I think this is a solution for this situation.
> 

This may be possible, but fixing the target config file to be written
more safely to work as currently intended is still a win.

> Any thought?
> 
> Thanks,
> Mitsuhiro Tanino
> 
>> -Original Message-
>> From: Chris Friesen [mailto:chris.frie...@windriver.com]
>> Sent: Friday, September 25, 2015 12:48 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
>> config file?
>>
>> On 09/24/2015 04:21 PM, Chris Friesen wrote:
>>> On 09/24/2015 12:18 PM, Chris Friesen wrote:
>>>

 I think what happened is that we took the SIGTERM after the open()
 call in create_iscsi_target(), but before writing anything to the file.

  f = open(volume_path, 'w+')
  f.write(volume_conf)
  f.close()

 The 'w+' causes the file to be immediately truncated on opening,
 leading to an empty file.

 To work around this, I think we need to do the classic "write to a
 temporary file and then rename it to the desired filename" trick.
 The atomicity of the rename ensures that either the old contents or the new
>> contents are present.
>>>
>>> I'm pretty sure that upstream code is still susceptible to zeroing out
>>> the file in the above scenario.  However, it doesn't take an
>>> exception--that's due to a local change on our part that attempted to fix 
>>> the
>> below issue.
>>>
>>> The stable/kilo code *does* have a problem in that when it regenerates
>>> the file it's missing the CHAP authentication line (beginning with
>> "incominguser").
>>
>> I've proposed a change at https://review.openstack.org/#/c/227943/
>>
>> If anyone has suggestions on how to do this more robustly or more cleanly,
>> please let me know.
>>
>> Chris
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Recent proposal to remove NetApp drivers

2015-09-25 Thread Anita Kuno
On 09/25/2015 03:15 PM, John Griffith wrote:
> Hey Everyone,
> 
> So I've been kinda busy on IRC today with conversations about some patches
> I proposed yesterday [1], [2].
> 
> I thought maybe I should try and explain my actions a bit, because it seems
> that some think I just arbitrarily flew off the handle and proposed
> something drastic, or that I had some malicious intent here.
> 
> This all started when this bug [3] was submitted against Cinder and
> Manilla.  So I took at look at the review that merged this ([4]), and sure
> enough that should not have merged due to the licensing issue.
> 
> At that point I discussed the issue publicly on IRC in the openstack-dev
> channel and asked for some guidance/input from others [5].  Note as the log
> continues there were violation discoveries on top of the fact that we don't
> usually do proprietary libs in OpenStack. I reached out to a few NetApp
> folks on the Cinder channel in IRC but was unable to get any real response
> other than "I can't really talk about that", so I attempted to revert the
> library patch myself.  This however proved to be difficult due to the high
> volume of changes that have merged since the original patch landed.
> 
> I took it upon myself to attempt to fix the merge conflicts myself, however
> this proved to be a rather large task, and frankly I am not familiar enough
> with the NetApp code to be making such a large change and "hoping" that I
> got it correct.  I again stated this via IRC to a number of people.  After
> spending well over an hour working on merge conflicts in the NetApp code,
> and having the only response from NetApp developers be "I can't say
> anything about that", I then decided that the alternative was to propose
> removal of the NetApp drivers altogether which I proposed here [7].
> 
> It seems that there are folks that have taken quite a bit of offense to
> this, and are more than mildly upset with me.  To them I apologize if this
> upset you.  I will say however that given the same situation and timing, I
> would do the same thing again.  I'm a bit offended that there are
> accusations that I'm intentionally doing something against NetApp (or any
> Vendor here).  I won't even dignify the comments by responding, I'll just
> let my contributions and involvement in the community speak for itself.
> 
> The good news is that as of this morning a NetApp developer has in fact
> worked on the revert patch and fixed the merge conflicts (which I've now
> spent a fair amount of time this afternoon reviewing), and as soon as that
> merges I will propose a backport to stable/liberty.
> 
> Thanks,
> John
> 
> [1]: https://review.openstack.org/#/c/227427/
> [2]: https://review.openstack.org/#/c/227524/
> [3]: https://bugs.launchpad.net/cinder/+bug/1499334
> [4]: https://review.openstack.org/#/c/215700/
> [5]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T16:56:50
> [6]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T19:28:33
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thank you, John, for a thorough description of your actions with links for
reference.

Open source software only remains open if we all work hard to understand
what that means and act accordingly.

I value your actions in this regard to bring something questionable to
light, to ask advice of others, to speak publicly about it and to step
forward so the greater community can see the facts for themselves.

Thank you, John, I support this kind of behaviour.

I am additionally grateful for those working hard to rectify the matter.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Recent proposal to remove NetApp drivers

2015-09-25 Thread Jeremy Stanley
On 2015-09-25 13:15:24 -0600 (-0600), John Griffith wrote:
[...]
> It seems that there are folks that have taken quite a bit of offense to
> this, and are more than mildly upset with me.  To them I apologize if this
> upset you.  I will say however that given the same situation and timing, I
> would do the same thing again.  I'm a bit offended that there are
> accusations that I'm intentionally doing something against NetApp (or any
> Vendor here).  I won't even dignify the comments by responding, I'll just
> let my contributions and involvement in the community speak for itself.
[...]

For what it's worth, I was personally shocked to see developers
connected to our community copy Apache-licensed software contributed
by others into a proprietary derivative and redistribute it without
attribution in clear violation of the Apache license. I understand
that free software licenses are a bit of an enigma for traditional
enterprises, but I hold our community to a higher standard than
that. Contributing to free software means, among other things, that
you actually ought to understand the licenses under which those
contributions are made.

I was however pleased to see today that a new upload of the
offending library, while still not distributed under a free license,
now at least seems to me (in my non-lawyer opinion) to be abiding by
the terms of the Apache license for the parts of OpenStack it
includes. Thank you for taking our software's licenses seriously!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Recent proposal to remove NetApp drivers

2015-09-25 Thread John Griffith
On Fri, Sep 25, 2015 at 1:15 PM, John Griffith 
wrote:

> Hey Everyone,
>
> So I've been kinda busy on IRC today with conversations about some patches
> I proposed yesterday [1], [2].
>
> I thought maybe I should try and explain my actions a bit, because it
> seems that some think I just arbitrarily flew off the handle and proposed
> something drastic, or that I had some malicious intent here.
>
> This all started when this bug [3] was submitted against Cinder and
> Manila.  So I took a look at the review that merged this ([4]), and sure
> enough that should not have merged due to the licensing issue.
>
> At that point I discussed the issue publicly on IRC in the openstack-dev
> channel and asked for some guidance/input from others [5].  Note as the log
> continues there were violation discoveries on top of the fact that we don't
> usually do proprietary libs in OpenStack. I reached out to a few NetApp
> folks on the Cinder channel in IRC but was unable to get any real response
> other than "I can't really talk about that", so I attempted to revert the
> library patch myself.  This however proved to be difficult due to the high
> volume of changes that have merged since the original patch landed.
>
> I took it upon myself to attempt to fix the merge conflicts myself,
> however this proved to be a rather large task, and frankly I am not
> familiar enough with the NetApp code to be making such a large change and
> "hoping" that I got it correct.  I again stated this via IRC to a number of
> people.  After spending well over an hour working on merge conflicts in the
> NetApp code, and having the only response from NetApp developers be "I
> can't say anything about that", I then decided that the alternative was to
> propose removal of the NetApp drivers altogether which I proposed here [7].
>
​Oops... that's link [2]  (s/[7]/[2]/)​


>
> It seems that there are folks that have taken quite a bit of offense to
> this, and are more than mildly upset with me.  To them I apologize if this
> upset you.  I will say however that given the same situation and timing, I
> would do the same thing again.  I'm a bit offended that there are
> accusations that I'm intentionally doing something against NetApp (or any
> Vendor here).  I won't even dignify the comments by responding, I'll just
> let my contributions and involvement in the community speak for itself.
>
> The good news is that as of this morning a NetApp developer has in fact
> worked on the revert patch and fixed the merge conflicts (which I've now
> spent a fair amount of time this afternoon reviewing), and as soon as that
> merges I will propose a backport to stable/liberty.
>
> Thanks,
> John
>
> [1]: https://review.openstack.org/#/c/227427/
> [2]: https://review.openstack.org/#/c/227524/
> [3]: https://bugs.launchpad.net/cinder/+bug/1499334
> [4]: https://review.openstack.org/#/c/215700/
> [5]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T16:56:50
> [6]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T19:28:33
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon] Nice new Network Topology panel in Horizon

2015-09-25 Thread Henry Gessau
On Fri, Sep 25, 2015, Daniel Comnea  wrote:
> Great job Henry !

I had nothing to do with it! (See below.)

> On Fri, Sep 25, 2015 at 6:47 PM, Henry Gessau wrote:
> 
> It has been about three years in the making but now it is finally here.
> A screenshot doesn't do it justice, so here is a short video overview:
> https://youtu.be/PxFd-lJV0e4
> 
> Isn't that neat? I am sure you can see that it is a great improvement,
> especially for larger topologies.
> 
> This new view will be part of the Liberty release of Horizon. I encourage 
> you to
> take a look at it with your own network topologies, play around with it, 
> and
> provide feedback. Please stop by the #openstack-horizon IRC channel if 
> there are
> issues you would like addressed.
> 
> Thanks to the folks who made this happen.

I forgot to include the list of folks:

Curvature was started by Sam Betts, John Davidge, Jack Fletcher and Bradley
Jones as an intern project under Debo Dutta. It was first implemented for
"quantum" on the Grizzly release of OpenStack [1]. Sam, John and Brad are now
regular upstream contributors to OpenStack. In the Horizon project Rob Cresswell
has been instrumental in getting the panel view integrated.

[1] https://youtu.be/pmpRhcwyJIo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [puppet] feedback request about puppet-keystone

2015-09-25 Thread Emilien Macchi


On 09/21/2015 12:20 PM, Emilien Macchi wrote:
> Hi,
> 
> Puppet OpenStack group would like to know your feedback about using
> puppet-keystone module.
> 
> Please take two minutes and fill in the form [1] that contains a few
> questions. The answers will help us to define our roadmap for the next
> cycle and make Keystone deployment stronger for our users.
> 
> The result of the forms should be visible online, otherwise I'll make
> sure the results are 100% public and transparent.
> 
> Thank you for your time,
> 
> [1] http://goo.gl/forms/eiGWFkkXLZ
> 

So after 5 days, here is a bit of feedback (13 people did the poll [1]):

1/ Providers
Except for one, most people are managing a small number of Keystone
users/tenants.
I would like to know if it's because the current implementation (using
openstackclient) is too slow or just because they don't need to do that
(they use bash, sdk, ansible, etc).

2/ Features you want

* "Configuration of federation via shibboleth":
WIP on https://review.openstack.org/#/c/216821/

* "Configuration of federation via mod_mellon":
Will come after shibboleth I guess.

* "Allow to configure websso"":
See
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/enabling-federation.html

* "Management of fernet keys":
nothing *yet* in our roadmap AFAIK; adding it to our backlog [2]

* "Support for hybrid domain configurations (e.g. using both LDAP and
built in database backend)":
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/support-for-keystone-domain-configuration.html

* "Full v3 API support (depends on other modules beyond just
puppet-keystone)":
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html

* "the ability to upgrade modules independently of one another, like we
do in production - currently the puppet dependencies dictate the order
in which we do upgrades more than the OpenStack dependencies":

During the last Summit, we decided [3] as a community that our modules
branches will only support the OpenStack release of the branch.
ie: stable/kilo supports OpenStack 2015.1 (Kilo). Maybe you can deploy
Juno or Liberty with it, but our community does not support it.
To give a little background, we already discussed about it [4] on the ML.
Our interface is 100% (or should be) backward compatible for at least
one full cycle, so you should not have issues when using a new version of
the module with the same parameters. Though (and indeed), you need to
keep your modules synchronized, especially because we have libraries and
common providers (in puppet-keystone).
AFAIK, OpenStack also works like this with openstack/requirements.
I'm not sure you can run Glance Kilo with Oslo Juno (maybe I'm wrong).
What you're asking would be technically hard because we would have to
support old versions of our providers & libraries, with a lot of
backward compatible & legacy code in place, while we already do a good
job in the parameters (interface).
If you have any serious proposal, we would be happy to discuss design
and find a solution.

3/ What we could improve in Puppet Keystone (and in general, regarding
the answers)

* "(...) but it would be nice to be able to deploy master and the most
recent version immediately rather than wait. Happy to get involved with
that as our maturity improves and we actually start to use the current
version earlier. Contribution is hard when you folk are ahead of the
game, any fixes and additions we have are ancient already":

I would like to understand the issues here:
do you have problems contributing?
is your issue "a feature is in master and not in stable/*"? If that's
the case, that means we can do a better job in our backport policy.
Something we have already talked about with each other, and I hope our
group is aware of that.

* "We were using keystone_user_role until we had huge compilation times
due to the matrix (tenant x role x user) that is not scalable. With
every single user and tenant on the environment, the catalog compilation
increased. An improvement on that area will be useful."

I understand the frustration and we are working on it [5].

* "Currently does not handle deployment of hybrid domain configurations."

Ditto:
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/support-for-keystone-domain-configuration.html


I liked running a poll like this; if you don't mind, I'll take the time to
prepare a bigger poll so we can gather more feedback, because
it's really useful. Thanks for that.


Discussion is open on this thread about features/concerns mentioned in
the poll.


[1]
https://docs.google.com/forms/d/1Z6IGeJRNmX7xx0Ggmr5Pmpzq7BudphDkZE-3t4Q5G1k/viewanalytics
[2] https://trello.com/c/HjiWUng3/65-puppet-keystone-manage-fernet-keys
[3]
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/master-policy.html
[4] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069147.html
[5] 

Re: [openstack-dev] [TripleO] tripleo.org theme

2015-09-25 Thread Ben Nemec
On 09/25/2015 07:34 AM, Dan Prince wrote:
> It has come to my attention that we aren't making great use of our
> tripleo.org domain. One thing that would be useful would be to have the
> new tripleo-docs content displayed there. It would also be nice to have
> quick links to some of our useful resources, perhaps Derek's CI report
> [1], a custom Reviewday page for TripleO reviews (something like this
> [2]), and perhaps other links too. I'm thinking these go in the header,
> and not just on some random TripleO docs page. Or perhaps both places.

Note that there's a TripleO Inbox Dashboard linked toward the bottom of
https://wiki.openstack.org/wiki/TripleO#Review_team (It should probably
be higher up than that, since it's incredibly useful).  This is actually
what I use for tracking TripleO reviews, and would be a simple thing to
start with for this.

+1 to everything else.

> 
> I was thinking that instead of the normal OpenStack theme however we
> could go a bit off the beaten path and do our own TripleO theme.
> Basically a custom tripleosphinx project that we ninja in as a
> replacement for oslosphinx.
> 
> Could get our own mascot... or do something silly with words. I'm
> reaching out to graphics artists who could help with this sort of
> thing... but before that decision is made I wanted to ask about
> thoughts on the matter here first.

I like the mascot/logo idea.  Not sure why we would want to deviate from
the standard OpenStack docs theme though.  What is your motivation for
suggesting that?

Also, if we get a mascot I want t-shirts. ;-)
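
For what it's worth, the Sphinx side of a theme swap is small. A minimal
sketch of a project's doc/source/conf.py, assuming a hypothetical
tripleosphinx package that ships the theme (both the package and its
helper below are invented here):

    # doc/source/conf.py -- sketch only; "tripleosphinx" does not exist yet.
    import tripleosphinx  # hypothetical package shipping the TripleO theme

    extensions = ['sphinx.ext.autodoc']

    html_theme = 'tripleo'  # hypothetical theme name
    html_theme_path = [tripleosphinx.get_html_theme_path()]  # hypothetical

Everything else about the docs build would stay as it is today.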

> 
> Speak up... it would be nice to have this wrapped up before Tokyo.
> 
> [1] http://goodsquishy.com/downloads/tripleo-jobs.html
> [2] http://status.openstack.org/reviews/
> 
> Dan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Doug Hellmann
Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita  
> wrote:
> > 
> > I'd like to clarify something.
> > 
> > On 9/25/15, 12:16 PM, "Mark Voelker"  wrote:
> > [big snip]
> >> Also worth pointing out here: when we talk about “doing the same thing”
> >> from a DefCore perspective, we’re essentially talking about what’s
> >> exposed to the end user, not how that’s implemented in OpenStack’s source
> >> code.  So from an end user’s perspective:
> >> 
> >> If I call nova image-create, I get an image in my cloud.  If I call the
> >> Glance v2 API to create an image, I also get an image in my cloud.  I
> >> neither see nor care that Nova is actually talking to Glance in the
> >> background, because if I’m writing code that uses the OpenStack API’s, I
> >> need to pick which one of those two API’s to make my code call upon to
> >> put an image in my cloud.  Or, in the worst case, I have to write a bunch
> >> of if/else loops into my code because some clouds I want to use only
> >> allow one way and some allow only the other.
> > 
> > The above is a bit inaccurate.
> > 
> > The nova image-create command does give you an image in your cloud.  The
> > image you get, however, is a snapshot of an instance that has been
> > previously created in Nova.  If you don't have an instance, you cannot
> > create an image via that command.  There is no provision in the Compute
> > (Nova) API to allow you to create an image out of bits that you supply.
> > 
> > The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> > register them as an image which you can then use to boot instances from by
> > using the Compute API.  But note that if all you have available is the
> > Images API, you cannot create an image of one of your instances.
> > 
> >> So from that end-user perspective, the Nova image-create API indeed does
> >> “do the same thing” as the Glance API.
> > 
> > They don't "do the same thing".  Even if you have full access to the
> > Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> > create an image of an instance, which is by far the largest use-case for
> > image creation.  You can't do it through Glance, because Glance doesn't
> > know anything about instances.  Nova has to know about Glance, because it
> > needs to fetch images for instance creation, and store images for
> > on-demand images of instances.
> 
> Yup, that’s fair: this was a bad example to pick (need moar coffee I guess).  
> Let’s use image-list instead. =)

From a "technical direction" perspective, I still think it's a bad
situation for us to be relying on any proxy APIs like this. Yes,
they are widely deployed, but we want to be using glance for image
features, neutron for networking, etc. Having the nova proxy is
fine, but while we have DefCore using tests to enforce the presence
of the proxy we can't deprecate those APIs.

What do we need to do to make that change happen over the next cycle
or so?

Doug

> 
> At Your Service,
> 
> Mark T. Voelker
> 
> > 
> > 
> >> At Your Service,
> >> 
> >> Mark T. Voelker
> > 
> > Glad to be of service, too,
> > brian
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Brian Rosmaita
I'd like to clarify something.

On 9/25/15, 12:16 PM, "Mark Voelker"  wrote:
[big snip]
>Also worth pointing out here: when we talk about "doing the same thing"
>from a DefCore perspective, we're essentially talking about what's
>exposed to the end user, not how that's implemented in OpenStack's source
>code.  So from an end user's perspective:
>
>If I call nova image-create, I get an image in my cloud.  If I call the
>Glance v2 API to create an image, I also get an image in my cloud.  I
>neither see nor care that Nova is actually talking to Glance in the
>background, because if I'm writing code that uses the OpenStack APIs, I
>need to pick which one of those two APIs to make my code call upon to
>put an image in my cloud.  Or, in the worst case, I have to write a bunch
>of if/else loops into my code because some clouds I want to use only
>allow one way and some allow only the other.

The above is a bit inaccurate.

The nova image-create command does give you an image in your cloud.  The
image you get, however, is a snapshot of an instance that has been
previously created in Nova.  If you don't have an instance, you cannot
create an image via that command.  There is no provision in the Compute
(Nova) API to allow you to create an image out of bits that you supply.

The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
register them as an image which you can then use to boot instances from by
using the Compute API.  But note that if all you have available is the
Images API, you cannot create an image of one of your instances.

>So from that end-user perspective, the Nova image-create API indeed does
>"do the same thing" as the Glance API.

They don't "do the same thing".  Even if you have full access to the
Images v1 or v2 API, you will still have to use the Compute (Nova) API to
create an image of an instance, which is by far the largest use-case for
image creation.  You can't do it through Glance, because Glance doesn't
know anything about instances.  Nova has to know about Glance, because it
needs to fetch images for instance creation, and store images for
on-demand images of instances.
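
To make the contrast concrete, here is a rough sketch using the python
clients (the uppercase values, the server name, and the image file are
placeholders, not anything from this thread):

    from novaclient import client as nova_client
    from glanceclient import Client as glance_client

    nova = nova_client.Client('2', USERNAME, PASSWORD, PROJECT, AUTH_URL)
    glance = glance_client('2', endpoint=IMAGE_URL, token=TOKEN)

    # Nova's image-create: snapshot an *existing* instance into an image.
    server = nova.servers.find(name='my-instance')
    nova.servers.create_image(server, 'my-instance-snapshot')

    # Glance's image-create: register a new image from bits you supply.
    image = glance.images.create(name='my-image',
                                 disk_format='qcow2',
                                 container_format='bare')
    glance.images.upload(image.id, open('disk.img', 'rb'))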


>At Your Service,
>
>Mark T. Voelker

Glad to be of service, too,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-25 Thread Mitsuhiro Tanino
On 09/22/2015 06:43 PM, Robert Collins wrote:
> On 23 September 2015 at 09:52, Chris Friesen 
>  wrote:
>> Hi,
>>
>> I recently had an issue with one file out of a dozen or so in 
>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm 
>> running stable/kilo if it makes a difference.
>>
>> Looking at the code in 
>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we 
>> should do a fsync() before the close().  The way it stands now, it 
>> seems like it might be possible to write the file, start making use 
>> of it, and then take a power outage before it actually gets written 
>> to persistent storage.  When we come back up we could have an 
>> instance expecting to make use of it, but no target information in the 
>> on-disk copy of the file.

I think even if there is no target information in the configuration file
directory, c-vol started successfully, iSCSI targets were created
automatically, and volumes were exported, right?

There is a problem in this case: the iSCSI target was created without
authentication, because we can't get the previous authentication from the
configuration file.

I'm curious what kind of problem you met.
  
> If its being kept in sync with DB records, and won't self-heal from 
> this situation, then yes. e.g. if the overall workflow is something 
> like

In my understanding, the provider_auth column in the database has the user
name and password for the iSCSI target.
Therefore, if we get authentication from the DB, I think we can self-heal
from this situation correctly after the c-vol service is restarted.

The lio target obtains authentication from provider_auth in the database,
but tgtd, iet, and cxt obtain authentication from a file in order to
recreate the iSCSI target when c-vol is restarted.
If the file is missing, these volumes are exported without authentication
and the configuration file is recreated, as I mentioned above.

tgtd: Get target chap auth from file
iet:  Get target chap auth from file
cxt:  Get target chap auth from file
lio:  Get target chap auth from Database(in provider_auth)
scst: Get target chap auth by using original command

If we get authentication from the DB for tgtd, iet, and cxt, the same as
lio, we can recreate the iSCSI target with proper authentication when
c-vol is restarted.
I think this is a solution for this situation.
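
A rough sketch of the idea (illustrative only, not actual Cinder code; it
assumes provider_auth is stored as "CHAP <username> <password>", which is
the format the lio path uses):

    def get_target_chap_auth(volume):
        # provider_auth in the DB looks like: "CHAP <username> <password>"
        if not volume['provider_auth']:
            return None
        (auth_method, auth_user, auth_pass) = \
            volume['provider_auth'].split(' ', 2)
        return auth_user, auth_pass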

Any thought?

Thanks,
Mitsuhiro Tanino

> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: Friday, September 25, 2015 12:48 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
> config file?
> 
> On 09/24/2015 04:21 PM, Chris Friesen wrote:
> > On 09/24/2015 12:18 PM, Chris Friesen wrote:
> >
> >>
> >> I think what happened is that we took the SIGTERM after the open()
> >> call in create_iscsi_target(), but before writing anything to the file.
> >>
> >>  f = open(volume_path, 'w+')
> >>  f.write(volume_conf)
> >>  f.close()
> >>
> >> The 'w+' causes the file to be immediately truncated on opening,
> >> leading to an empty file.
> >>
> >> To work around this, I think we need to do the classic "write to a
> >> temporary file and then rename it to the desired filename" trick.
> >> The atomicity of the rename ensures that either the old contents or the new
> contents are present.
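
(For illustration, a minimal sketch of the write-temp-then-rename pattern
described above, with an fsync added; it assumes the temp file is created
on the same filesystem as the destination so the rename is atomic:)

    import os
    import tempfile

    def atomic_write(path, data):
        # Create the temp file in the destination directory so the
        # rename below cannot cross filesystems.
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path))
        try:
            os.write(fd, data)
            os.fsync(fd)  # push the contents to persistent storage
        finally:
            os.close(fd)
        # Atomic: readers see either the old or the new contents,
        # never a truncated/empty file.
        os.rename(tmp_path, path)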
> >
> > I'm pretty sure that upstream code is still susceptible to zeroing out
> > the file in the above scenario.  However, it doesn't take an
> > exception--that's due to a local change on our part that attempted to fix 
> > the
> below issue.
> >
> > The stable/kilo code *does* have a problem in that when it regenerates
> > the file it's missing the CHAP authentication line (beginning with
> "incominguser").
> 
> I've proposed a change at https://review.openstack.org/#/c/227943/
> 
> If anyone has suggestions on how to do this more robustly or more cleanly,
> please let me know.
> 
> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] should puppet-neutron manage third party software?

2015-09-25 Thread Anita Kuno
On 09/25/2015 12:14 PM, Edgar Magana wrote:
> Hi There,
> 
> I just added my comment on the review. I do agree with Emilien. There should 
> be specific repos for plugins and drivers.
> 
> BTW. I love the sdnmagic name  ;-)
> 
> Edgar
> 
> 
> 
> 
> On 9/25/15, 9:02 AM, "Emilien Macchi"  wrote:
> 
>> In our last meeting [1], we were discussing about whether managing or
>> not external packaging repositories for Neutron plugin dependencies.
>>
>> Current situation:
>> puppet-neutron is installing (packages like neutron-plugin-*) &
>> configure Neutron plugins (configuration files like
>> /etc/neutron/plugins/*.ini
>> Some plugins (Cisco) are doing more: they install third party packages
>> (not part of OpenStack), from external repos.
>>
>> The question is: should we continue that way and accept that kind of
>> patch [2]?
>>
>> I vote for no: managing external packages & external repositories should
>> be up to an external module.
>> Example: my SDN tool is called "sdnmagic":
>> 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
>> configure the .ini file(s) to make it work in Neutron
>> 2/ create puppet-sdnmagic that will take care of everything else:
>> install sdnmagic, manage packaging (and specific dependencies),
>> repositories, etc.
>> I -1 the idea that puppet-neutron should handle it. We are not managing
>> SDN solutions: we are enabling puppet-neutron to work with them.
>>
>> I would like to find a consensus here, that will be consistent across
>> *all plugins* without exception.
>>
>>
>> Thanks for your feedback,
>>
>> [1] http://goo.gl/zehmN2
>> [2] https://review.openstack.org/#/c/209997/
>> -- 
>> Emilien Macchi
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

I think the data point provided by the Cinder situation needs to be
considered in this decision: https://bugs.launchpad.net/manila/+bug/1499334

The bug report outlines the issue, but the tl;dr is that one Cinder
driver changed the licensing on a library required to run in-tree code.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Solum] Using os-auth-token and os-image-url with glance client

2015-09-25 Thread Devdatta Kulkarni
Steve,


Similar to other OpenStack services, the Solum client uses the
provided/configured username and password of a user to get a token, and sends
it to the Solum API service in an HTTP header. On the API side, we use
keystonemiddleware to validate the token. Upon successful authentication, we
store the information we get back from keystone (project-id, username, and
token) and use it to instantiate other services' python clients to interact
with them (glance, swift, neutron, heat).
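
A rough sketch of that flow on the API side (illustrative only; the X-*
headers are the ones keystonemiddleware populates after validating a token,
while the endpoint here is a placeholder that would normally come from the
service catalog):

    from glanceclient import Client as GlanceClient

    def glance_client_from_request(environ):
        # keystonemiddleware has already validated the incoming token
        # and recorded the caller's identity in the WSGI environment.
        token = environ['HTTP_X_AUTH_TOKEN']
        project_id = environ['HTTP_X_PROJECT_ID']  # available if needed
        # Re-use the caller's token to talk to Glance on their behalf.
        return GlanceClient('2', endpoint='http://127.0.0.1:9292',
                            token=token)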


Let us know if there is a better approach for enabling inter-service 
interactions.


Thanks,

Devdatta



From: Steve Martinelli 
Sent: Thursday, September 24, 2015 9:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][Solum] Using os-auth-token and 
os-image-url with glance client


I can't speak to the glance client changes, but this seems like an awkward 
design.

If you don't know the end user's name and password, then how are you getting 
the token? Is it the admin token? Why not create a service account and use 
keystonemiddleware?

Thanks,

Steve Martinelli
OpenStack Keystone Core


From: Devdatta Kulkarni 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date: 2015/09/24 06:44 PM
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and os-image-url 
with glance client





Hi, Glance team,

In Solum, we use Glance to store Docker images that we create for applications.
We use the Glance client internally to upload these images. Until recently,
'glance image-create' with only a token has been working for us (in devstack).
Today, I started noticing that glance image-create with just a token is not
working anymore. It is also not working when os-auth-token and os-image-url
are passed in. According to the documentation
(http://docs.openstack.org/developer/python-glanceclient/), passing a token
and image-url should work. The client, which I have installed from master, is
asking for a username (and a password, if a username is specified).

Solum does not have access to the end-user's password. So we need the ability
to interact with Glance without providing a password, as has been working
until recently.
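
For reference, the token-only flow that used to work looks roughly like this
(a sketch; OS_IMAGE_URL and OS_AUTH_TOKEN are placeholders for the values we
pass in, and the image name/file are made up):

    from glanceclient import Client

    # No username/password: just the image endpoint and a pre-obtained token.
    glance = Client('1', endpoint=OS_IMAGE_URL, token=OS_AUTH_TOKEN)
    glance.images.create(name='app-docker-image',
                         disk_format='raw',
                         container_format='docker',
                         data=open('image.tar', 'rb'))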

I investigated the issue a bit and have filed a bug with my findings.
https://bugs.launchpad.net/python-glanceclient/+bug/1499540

Can someone help with resolving this issue?

Regards,
Devdatta__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Solum] Using os-auth-token and os-image-url with glance client

2015-09-25 Thread Devdatta Kulkarni
Nikhil, Flavio,

Thank you for giving immediate attention to this issue.

Regards,
Devdatta


From: Flavio Percoco 
Sent: Friday, September 25, 2015 4:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][Solum] Using os-auth-token and 
os-image-url with glance client

On 24/09/15 22:44 +, Devdatta Kulkarni wrote:
>Hi, Glance team,
>
>
>In Solum, we use Glance to store Docker images that we create for applications.
>We use Glance client internally to upload these images. Till recently, 'glance
>image-create' with only token has been
>
>working for us (in devstack). Today, I started noticing that glance
>image-create with just token is not working anymore. It is also not working
>when os-auth-token and os-image-url are passed in. According to documentation (
>http://docs.openstack.org/developer/python-glanceclient/), passing token and
>image-url should work. The client, which I have installed from master, is
>asking username (and password, if username is specified).
>
>
>Solum does not have access to end-user's password. So we need the ability to
>interact with Glance without providing password, as it has been working till
>recently.
>
>
>I investigated the issue a bit and have filed a bug with my findings.
>
>https://bugs.launchpad.net/python-glanceclient/+bug/1499540
>
>
>Can someone help with resolving this issue.
>

This should fix your issue and we'll backport it to Liberty.

https://review.openstack.org/#/c/227723/

Thanks for reporting,
Flavio


--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)

2015-09-25 Thread Duncan Thomas
I think there's a place for yet another service breakout from nova - some
sort of light-weight platform orchestration piece, nothing as complicated or
complete as heat, nothing that touches the inside of a VM, just something
that can talk to cinder, nova and neutron (plus I guess ironic and whatever
the container thing is called) and work through long running /
cross-project tasks. I'd probably expect it to provide a task style
interface, e.g. a boot-from-new-volume call returns a request-id that can
then be polled for detailed status.
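
A minimal sketch of the kind of interaction I mean (entirely hypothetical;
the service, endpoint, and field names are invented for illustration, since
no such service exists today):

    import time
    import requests

    TOKEN = 'a-keystone-token'  # placeholder

    # Kick off a long-running, cross-project task...
    resp = requests.post('http://orchestrator:8999/v1/tasks',
                         headers={'X-Auth-Token': TOKEN},
                         json={'action': 'boot-from-new-volume',
                               'volume_size_gb': 10,
                               'image': 'cirros',
                               'flavor': 'm1.small'})
    request_id = resp.json()['request_id']

    # ...then poll the request-id for detailed status.
    while True:
        task = requests.get('http://orchestrator:8999/v1/tasks/%s'
                            % request_id,
                            headers={'X-Auth-Token': TOKEN}).json()
        if task['state'] in ('SUCCESS', 'ERROR'):
            break
        time.sleep(2)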

The existing nova API for this (and any other nova APIs where this makes
sense) can then become a proxy for the new service, so that tenants are not
affected. The nova apis can then be deprecated in slow time.

Anybody else think this could be useful?

On 25 September 2015 at 17:12, Andrew Laski  wrote:

> On 09/24/15 at 03:13pm, James Penick wrote:
>
>>
>>>
>>> At risk of getting too offtopic I think there's an alternate solution to
>>> doing this in Nova or on the client side.  I think we're missing some
>>> sort
>>> of OpenStack API and service that can handle this.  Nova is a low level
>>> infrastructure API and service, it is not designed to handle these
>>> orchestrations.  I haven't checked in on Heat in a while but perhaps this
>>> is a role that it could fill.
>>>
>>> I think that too many people consider Nova to be *the* OpenStack API when
>>> considering instances/volumes/networking/images and that's not something
>>> I
>>> would like to see continue.  Or at the very least I would like to see a
>>> split between the orchestration/proxy pieces and the "manage my
>>> VM/container/baremetal" bits
>>>
>>
>>
>> (new thread)
>> You've hit on one of my biggest issues right now: As far as many deployers
>> and consumers are concerned (and definitely what I tell my users within
>> Yahoo): The value of an OpenStack value-stream (compute, network, storage)
>> is to provide a single consistent API for abstracting and managing those
>> infrastructure resources.
>>
>> Take networking: I can manage Firewalls, switches, IP selection, SDN, etc
>> through Neutron. But for compute, If I want VM I go through Nova, for
>> Baremetal I can -mostly- go through Nova, and for containers I would talk
>> to Magnum or use something like the nova docker driver.
>>
>> This means that, by default, Nova -is- the closest thing to a top level
>> abstraction layer for compute. But if that is explicitly against Nova's
>> charter, and Nova isn't going to be the top level abstraction for all
>> things Compute, then something else needs to fill that space. When that
>> happens, all things common to compute provisioning should come out of Nova
>> and move into that new API. Availability zones, Quota, etc.
>>
>
> I do think Nova is the top level abstraction layer for compute.  My issue
> is when Nova is asked to manage other resources.  There's no API call to
> tell Cinder "create a volume and attach it to this instance, and create
> that instance if it doesn't exist."  And I'm not sure why the reverse isn't
> true.
>
> I want Nova to be the absolute best API for managing compute resources.
> It's when someone is managing compute and volumes and networks together
> that I don't feel that Nova is the best place for that.  Most importantly
> right now it seems that not everyone is on the same page on this and I
> think it would be beneficial to come together and figure out what sort of
> workloads the Nova API is intending to provide.
>
>
>
>> -James
>>
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-25 Thread Shiv Haris
Thanks Alex, Zhou,

I get errors from congress when I do a re-join. These errors seem to be due to
the order in which the services are coming up. Hence I still depend on running
stack.sh after the VM is up and running. Please try out the new VM - also
advise if you need to add any of your use cases. Also, re-join starts "screen" -
do we expect the end user to know how to use "screen"?

I do understand that running "stack.sh" takes time - but it does not do
things that appear to be any kind of magic, which is something we want to
avoid in order to get the user excited.

I have uploaded a new version of the VM please experiment with this and let me 
know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant password: vagrant)

-Shiv



From: Alex Yip [mailto:a...@vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling 
tempest.  So, I think it uses the loopback IP address, and that does not 
change, so rejoin-stack.sh works without a network at all.



- Alex






From: Zhou, Zhenzan >
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if the VM's IP has not changed, so using a NAT
network and a fixed IP inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:a...@vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM which takes a 
minute or so.  Then I run rejoin-stack.sh which takes just another minute or 
so.  It's really not that bad, and rejoin-stack.sh restores vms and openstack 
state that was running before.



- Alex






From: Shiv Haris >
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user
instantiates the Usecase-VM. However, creating an OVA file is possible only
when the VM is halted, which means OpenStack is not running and the user will
have to run devstack again (which is time consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM
and using it in another setup is not very straightforward. It involves
modifying the .vbox file and seems to be prone to user errors. I am leaning
towards halting the machine and generating an OVA file.

I am looking for suggestions.

Thanks,

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could
not cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you
posed; however, I am still working on some of the subtle issues raised. Once I
have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's 
going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this 
big?  I think we should finish this as a VM but then look into doing it with 
containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB -
but the OVA compresses the image and disk to 3 GB. I will be looking at other
options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup 
time is substantial, and if there's a problem, it's good to assume the user 
won't know how to fix it.  Is it possible to have devstack up and running when 
we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack 
will be down when you bring up  the VM. I agree a snapshot will be a better 
choice.

- It'd be good to have a README to explain how to use the use-case 

[openstack-dev] [cinder][neutron][all] New third-party-ci testing requirements for OpenStack Compatible mark

2015-09-25 Thread Chris Hoge
In November, the OpenStack Foundation will start requiring vendors requesting
new "OpenStack Compatible" storage driver licenses to start passing the Cinder
third-party integration tests. The new program was approved by the Board at
the July meeting in Austin and follows the improvement of the testing standards
and technical requirements for the "OpenStack Powered" program. This is all
part of the effort of the Foundation to use the OpenStack brand to guarantee a
base-level of interoperability and consistency for OpenStack users and to
protect the work of our community of developers by applying a trademark backed
by their technical efforts.

The Cinder driver testing is the first step of a larger effort to apply
community determined standards to the Foundation marketing programs. We're
starting with Cinder because it has a successful testing program in place, and
we have plans to extend the program to network drivers and OpenStack
applications. We're going to require CI testing for new "OpenStack Compatible"
storage licenses starting on November 1, and plan to roll out network and
application testing in 2016.

One of our goals is to work with project leaders and developers to help us
define and implement these test programs. The standards for third-party
drivers and applications should be determined by the developers and users
in our community, who are experts in how to maintain the quality of the
ecosystem.

We welcome any feedback on this program, and are also happy to answer any
questions you might have.

Thanks!

Chris Hoge
Interop Engineer
OpenStack Foundation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Murano code flow for custom development and combining murano with horizon in devstack

2015-09-25 Thread Sumanth Sathyanarayana
Hello,

Could anyone let me know whether the changes in the Murano dashboard and
Horizon's openstackdashboard can both be combined and shown under one tab,
i.e., say, under a Murano tab on the left side panel, all the changes done
in both Horizon and Murano appear?

If anyone could point me to a link explaining custom development of Murano
and the code flow, that would be very helpful...

Thanks & Best Regards
Sumanth
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Chris Hoge

> On Sep 25, 2015, at 10:12 AM, Andrew Laski  wrote:
> 
> I understand that reasoning, but still am unsure on a few things.
> 
> The direction seems to be moving towards having a requirement that the same 
> functionality is offered in two places, Nova API and Glance V2 API. That 
> seems like it would fragment adoption rather than unify it.

My hope would be that proxies would be deprecated as new capabilities
moved in. Some of this will be driven by application developers too,
though. We’re looking at an interoperability standard, which has a
natural tension between backwards compatibility and new features.

> 
> Also after digging in on image-create I feel that there may be a mixup.  The 
> image-create in Glance and image-create in Nova are two different things. In 
> Glance you create an image and send the disk image data in the request, in 
> Nova an image-create takes a snapshot of the instance provided in the 
> request.  But it seems like DefCore is treating them as equivalent unless I'm 
> misunderstanding.
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Monty Taylor

On 09/25/2015 06:37 PM, Chris Hoge wrote:



On Sep 25, 2015, at 10:12 AM, Andrew Laski > wrote:

I understand that reasoning, but still am unsure on a few things.

The direction seems to be moving towards having a requirement that the
same functionality is offered in two places, Nova API and Glance V2
API. That seems like it would fragment adoption rather than unify it.


My hope would be that proxies would be deprecated as new capabilities
moved in. Some of this will be driven by application developers too,
though. We’re looking at an interoperability standard, which has a
natural tension between backwards compatibility and new features.


Yeah. The proxies are also less efficient, because they have to bounce 
through two places.




Also after digging in on image-create I feel that there may be a
mixup.  The image-create in Glance and image-create in Nova are two
different things. In Glance you create an image and send the disk
image data in the request, in Nova an image-create takes a snapshot of
the instance provided in the request.  But it seems like DefCore is
treating them as equivalent unless I'm misunderstanding.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-09-25 Thread Ben Swartzlander

On 09/24/2015 09:49 AM, John Spray wrote:

Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph


Awesome! This is something that's been talked about for quite some time 
and I'm pleased to see progress on making it a reality.



It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.


This makes sense, but have you given thought to the optimal way to 
provide NFS semantics for those who prefer that? Obviously you can pair 
the existing Manila Generic driver with Cinder running on Ceph, but I 
wonder how that would compare to some kind of Ganesha bridge that 
translates between NFS and CephFS. Is that something you've looked into?



The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writeable snapshots).  Currently deletion is
just a move into a 'trash' directory, the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!


All snapshots are read-only... The question is whether you can take a 
snapshot and clone it into something that's writable. We're looking at 
allowing for different kinds of snapshot semantics in Manila for Mitaka. 
Even if there's no create-share-from-snapshot functionality a readable 
snapshot is still useful and something we'd like to enable.


The deletion issue sounds like a common one, although if you don't have 
the thing that cleans them up in the background yet I hope someone is 
working on that.



A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.


So quotas aren't enforced yet? That seems like a serious issue for any 
operator except those that want to support "infinite" size shares. I 
hope that gets fixed soon as well.



Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.


I think it will be important to document all of these limitations. I 
wouldn't let them stop you from getting the driver done, but if I was a 
deployer I'd want to know about these details.



However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.


Okay, this answers part of my above question, but how do you expect the 
NFS gateway to work? Ganesha has been used successfully in the past.



This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.


Welcome, and I look forward to meeting you in Tokyo!

-Ben



All the best,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not 

Re: [openstack-dev] CephFS native driver

2015-09-25 Thread Shinobu Kinjo
> nfs is nearly impossible to make both HA and Scalable without adding really 
> expensive dedicated hardware.

I don't think we need such expensive hardware for this purpose.

What I'm thinking now is:

[Controller]  [Compute1] ... [ComputeN]
[               RADOS                ]

The controller becomes a Ceph native client using RBD, CephFS, or whatever
Ceph provides.

[Controller]  [Compute1] ... [ComputeN]
[ Driver   ]
[               RADOS                ]

The controller provides share space to VMs through NFS.

[Controller]    [Compute1] ... [ComputeN]
     |            [ VM1 ]
[ NFS    ]------[ Share ]
[ Driver ]
     |
[ RADOS  ]

Pacemaker or pacemaker_remote (and STONITH) provide HA between RADOS, the
controller, and the computes.

Here, what we really need to think about is which one is better for
realizing this concept, CephFS or RBD.

If we use CephFS, the Ceph client (the controller) always accesses the MONs,
MDSes, and OSDs to get the latest map and to access data; in this scenario,
the cost of rebalancing could be high when a failover happens.

Anyway, we need to think about which architecture is more reasonable in the
face of any kind of disaster scenario.

Shinobu

- Original Message -
From: "Kevin M Fox" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, "John Spray" 
Cc: "Ceph Development" 
Sent: Saturday, September 26, 2015 1:05:38 AM
Subject: Re: [openstack-dev] CephFS native driver

I think having a native cephfs driver without nfs in the cloud is a very 
compelling feature. nfs is nearly impossible to make both HA and Scalable 
without adding really expensive dedicated hardware. Ceph on the other hand 
scales very nicely and its very fault tollerent out of the box.

Thanks,
Kevin

From: Shinobu Kinjo [ski...@redhat.com]
Sent: Friday, September 25, 2015 12:04 AM
To: OpenStack Development Mailing List (not for usage questions); John Spray
Cc: Ceph Development; openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Manila] CephFS native driver

So here are questions from my side.
Just questions.


 1. What is the biggest advantage compared to others such as RBD?
    We should be able to implement what you are going to do in an
    existing module, shouldn't we?

 2. What are you going to focus on with a new implementation?
    It seems to be about using NFS in front of that implementation
    more transparently.

 3. What are you thinking of for integration with OpenStack using
    a new implementation?
    Since it's going to be a new kind of implementation, there
    should be a different architecture.

 4. Is this implementation intended mainly for OpenStack
    integration?

Since the velocity of OpenStack feature expansion is much higher than
it used to be, it's much more important to think about performance.

Is a new implementation also going to improve Ceph integration
with the OpenStack system?

Thank you so much for your explanation in advance.

Shinobu

- Original Message -
From: "John Spray" 
To: openstack-dev@lists.openstack.org, "Ceph Development" 

Sent: Thursday, September 24, 2015 10:49:17 PM
Subject: [openstack-dev] [Manila] CephFS native driver

Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writeable snapshots).  Currently deletion is
just a move into a 'trash' directory, the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients 

Re: [openstack-dev] [lbaas] [octavia] Proposing new meeting time Wednesday 16:00 UTC

2015-09-25 Thread Eichberger, German
All,

In our last meeting [1] we discussed moving the meeting earlier to
accommodate participants from the EMEA region. I am therefore proposing to
move the meeting to 16:00 UTC on Wednesday. Please respond to this e-mail
if you have alternate suggestions. I will send out another e-mail
announcing the new time and the date we will start with that.

Thanks,
German

[1] 
http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-09-23-20.
00.log.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] congrats to armax!

2015-09-25 Thread Ryan Moats

First, congratulations to armax on being elected PTL for Mitaka.  Looking
forward to Neutron improving over the next six months.

Second, thanks to everybody who voted in the election. Hopefully we had
something close to 100% turnout, because voting is an important
responsibility of the electorate.

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] PTL Election Conclusion and Results

2015-09-25 Thread Tristan Cacqueray
Thank you to the electorate, to all those who voted and to all
candidates who put their name forward for PTL for this election. A
healthy, open process breeds trust in our decision making capability
thank you to all those who make this process possible.

Now for the results of the PTL election process, please join me in
extending congratulations to the following PTLs:

* Barbican
** Douglas Mendizabal
* Ceilometer
** Gordon Chung
* ChefOpenstack
** Jan Klare
* Cinder
** Sean Mcginnis
* Community App Catalog
** Christopher Aedo
* Congress
** Tim Hinrichs
* Cue
** Vipul Sabhaya
* Designate
** Graham Hayes
* Documentation
** Lana Brindley
* Glance
** Flavio Percoco
* Heat
** Sergey Kraynev
* Horizon
** David Lyle
* I18n
** Ying Chun Guo
* Infrastructure
** Jeremy Stanley
* Ironic
** Jim Rollenhagen
* Keystone
** Steve Martinelli
* Kolla
** Steven Dake
* Magnum PTL will be elected in another round.
* Manila
** Ben Swartzlander
* Mistral
** Renat Akhmerov
* Murano
** Serg Melikyan
* Neutron
** Armando Migliaccio
* Nova
** John Garbutt
* OpenStack UX
** Piet Kruithof
* OpenStackAnsible
** Jesse Pretorius
* OpenStackClient
** Dean Troyer
* Oslo
** Davanum Srinivas
* Packaging-deb
** Thomas Goirand
* PuppetOpenStack
** Emilien Macchi
* Quality Assurance
** Matthew Treinish
* Rally
** Boris Pavlovic
* RefStack
** Catherine Diep
* Release cycle management
** Doug Hellmann
* RpmPackaging
** Dirk Mueller
* Sahara
** Sergey Lukjanov
* Searchlight
** Travis Tripp
* Security
** Robert Clark
* Solum
** Devdatta Kulkarni
* Swift
** John Dickinson
* TripleO
** Dan Prince
* Trove
** Craig Vyvial
* Zaqar
** Fei Long Wang


Cinder results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_bbc6b6675115d3cd

Glance results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_03e00971a7e1fad8

Ironic results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff53995355fda506

Keystone results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7f18159b9ba89ad1

Mistral results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c448863622ee81e0

Neutron results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_844e671ae72d37dd

Oslo results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff0ab5e43b8f44e4


Thank you to all involved in the PTL election process,
Tristan



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] PTL Election Conclusion and Results

2015-09-25 Thread Edgar Magana
Congratulations to our PTLs!

Let's make Mitaka a great release.

As a Neutron core, I want to congratulate Rosella, Ryan, and Armando for 
volunteering. It is a huge responsibility and we are all going to help. 

Armando,

Next round of beers on you! 

Cheers,

Edgar




Sent from my iPhone
> On Sep 25, 2015, at 5:10 PM, Tristan Cacqueray  wrote:
> 
> Thank you to the electorate, to all those who voted and to all
> candidates who put their name forward for PTL for this election. A
> healthy, open process breeds trust in our decision making capability
> thank you to all those who make this process possible.
> 
> Now for the results of the PTL election process, please join me in
> extending congratulations to the following PTLs:
> 
> * Barbican
> ** Douglas Mendizabal
> * Ceilometer
> ** Gordon Chung
> * ChefOpenstack
> ** Jan Klare
> * Cinder
> ** Sean Mcginnis
> * Community App Catalog
> ** Christopher Aedo
> * Congress
> ** Tim Hinrichs
> * Cue
> ** Vipul Sabhaya
> * Designate
> ** Graham Hayes
> * Documentation
> ** Lana Brindley
> * Glance
> ** Flavio Percoco
> * Heat
> ** Sergey Kraynev
> * Horizon
> ** David Lyle
> * I18n
> ** Ying Chun Guo
> * Infrastructure
> ** Jeremy Stanley
> * Ironic
> ** Jim Rollenhagen
> * Keystone
> ** Steve Martinelli
> * Kolla
> ** Steven Dake
> * Magnum PTL will be elected in another round.
> * Manila
> ** Ben Swartzlander
> * Mistral
> ** Renat Akhmerov
> * Murano
> ** Serg Melikyan
> * Neutron
> ** Armando Migliaccio
> * Nova
> ** John Garbutt
> * OpenStack UX
> ** Piet Kruithof
> * OpenStackAnsible
> ** Jesse Pretorius
> * OpenStackClient
> ** Dean Troyer
> * Oslo
> ** Davanum Srinivas
> * Packaging-deb
> ** Thomas Goirand
> * PuppetOpenStack
> ** Emilien Macchi
> * Quality Assurance
> ** Matthew Treinish
> * Rally
> ** Boris Pavlovic
> * RefStack
> ** Catherine Diep
> * Release cycle management
> ** Doug Hellmann
> * RpmPackaging
> ** Dirk Mueller
> * Sahara
> ** Sergey Lukjanov
> * Searchlight
> ** Travis Tripp
> * Security
> ** Robert Clark
> * Solum
> ** Devdatta Kulkarni
> * Swift
> ** John Dickinson
> * TripleO
> ** Dan Prince
> * Trove
> ** Craig Vyvial
> * Zaqar
> ** Fei Long Wang
> 
> 
> Cinder results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_bbc6b6675115d3cd
> 
> Glance results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_03e00971a7e1fad8
> 
> Ironic results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff53995355fda506
> 
> Keystone results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7f18159b9ba89ad1
> 
> Mistral results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c448863622ee81e0
> 
> Neutron results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_844e671ae72d37dd
> 
> Oslo results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff0ab5e43b8f44e4
> 
> 
> Thank you to all involved in the PTL election process,
> Tristan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] Proposing new meeting time Wednesday 16:00 UTC

2015-09-25 Thread Doug Wiegley
Works for me. 

Doug


> On Sep 25, 2015, at 5:58 PM, Eichberger, German  
> wrote:
> 
> All,
> 
> In our last meeting [1] we discussed moving the meeting earlier to
> accommodate participants from the EMEA region. I am therefore proposing to
> move the meeting to 16:00 UTC on Wednesday. Please respond to this e-mail
> if you have alternate suggestions. I will send out another e-mail
> announcing the new time and the date we will start with that.
> 
> Thanks,
> German
> 
> [1] 
> http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-09-23-20.
> 00.log.html
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] PTL Election Conclusion and Results

2015-09-25 Thread Vikram Choudhary
Congrats Armando!
On Sep 26, 2015 5:57 AM, "Edgar Magana"  wrote:

> Congratulations to our PTLs!
>
> Let's make Mitaka a great release.
>
> As Neutron core I want to congratulate to Rosella, Ryan and Armando for
> volunteering. It is a huge responsibility and we all going to help.
>
> Armando,
>
> Next round of beers on you!
>
> Cheers,
>
> Edgar
>
>
>
>
> Sent from my iPhone
> > On Sep 25, 2015, at 5:10 PM, Tristan Cacqueray 
> wrote:
> >
> > Thank you to the electorate, to all those who voted and to all
> > candidates who put their name forward for PTL for this election. A
> > healthy, open process breeds trust in our decision making capability
> > thank you to all those who make this process possible.
> >
> > Now for the results of the PTL election process, please join me in
> > extending congratulations to the following PTLs:
> >
> > * Barbican
> > ** Douglas Mendizabal
> > * Ceilometer
> > ** Gordon Chung
> > * ChefOpenstack
> > ** Jan Klare
> > * Cinder
> > ** Sean Mcginnis
> > * Community App Catalog
> > ** Christopher Aedo
> > * Congress
> > ** Tim Hinrichs
> > * Cue
> > ** Vipul Sabhaya
> > * Designate
> > ** Graham Hayes
> > * Documentation
> > ** Lana Brindley
> > * Glance
> > ** Flavio Percoco
> > * Heat
> > ** Sergey Kraynev
> > * Horizon
> > ** David Lyle
> > * I18n
> > ** Ying Chun Guo
> > * Infrastructure
> > ** Jeremy Stanley
> > * Ironic
> > ** Jim Rollenhagen
> > * Keystone
> > ** Steve Martinelli
> > * Kolla
> > ** Steven Dake
> > * Magnum PTL will be elected in another round.
> > * Manila
> > ** Ben Swartzlander
> > * Mistral
> > ** Renat Akhmerov
> > * Murano
> > ** Serg Melikyan
> > * Neutron
> > ** Armando Migliaccio
> > * Nova
> > ** John Garbutt
> > * OpenStack UX
> > ** Piet Kruithof
> > * OpenStackAnsible
> > ** Jesse Pretorius
> > * OpenStackClient
> > ** Dean Troyer
> > * Oslo
> > ** Davanum Srinivas
> > * Packaging-deb
> > ** Thomas Goirand
> > * PuppetOpenStack
> > ** Emilien Macchi
> > * Quality Assurance
> > ** Matthew Treinish
> > * Rally
> > ** Boris Pavlovic
> > * RefStack
> > ** Catherine Diep
> > * Release cycle management
> > ** Doug Hellmann
> > * RpmPackaging
> > ** Dirk Mueller
> > * Sahara
> > ** Sergey Lukjanov
> > * Searchlight
> > ** Travis Tripp
> > * Security
> > ** Robert Clark
> > * Solum
> > ** Devdatta Kulkarni
> > * Swift
> > ** John Dickinson
> > * TripleO
> > ** Dan Prince
> > * Trove
> > ** Craig Vyvial
> > * Zaqar
> > ** Fei Long Wang
> >
> >
> > Cinder results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_bbc6b6675115d3cd
> >
> > Glance results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_03e00971a7e1fad8
> >
> > Ironic results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff53995355fda506
> >
> > Keystone results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7f18159b9ba89ad1
> >
> > Mistral results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c448863622ee81e0
> >
> > Neutron results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_844e671ae72d37dd
> >
> > Oslo results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff0ab5e43b8f44e4
> >
> >
> > Thank you to all involved in the PTL election process,
> > Tristan
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)

2015-09-25 Thread Joshua Harlow

+1 from me, although I thought heat was supposed to be this thing?

Maybe there should be a 'warm' project or something ;)

Or we can call it 'bbs' for 'building block service' (obviously not 
bulletin board system); ask said service to build a set of blocks into 
well defined structures and let it figure out how to make that happen...


This most definitely requires cross-project agreement, though, so I'd hope 
we can reach that somehow (before creating a halfway-done new 
orchestration thing that is halfway integrated with a bunch of other 
APIs that do one quarter of the work in ten different ways).


Duncan Thomas wrote:

I think there's a place for yet another service breakout from nova -
some sort of light-weight platform orchestration piece, nothing as
complicated or complete as heat, nothing that touches the inside of a
VM, just something that can talk to cinder, nova and neutron (plus I
guess ironic and whatever the container thing is called) and work
through long running / cross-project tasks. I'd probably expect it to
provide a task style interface, e.g. a boot-from-new-volume call returns
a request-id that can then be polled for detailed status.

The existing nova API for this (and any other nova APIs where this makes
sense) can then become a proxy for the new service, so that tenants are
not affected. The nova apis can then be deprecated in slow time.

Anybody else think this could be useful?

On 25 September 2015 at 17:12, Andrew Laski > wrote:

On 09/24/15 at 03:13pm, James Penick wrote:



At risk of getting too offtopic I think there's an alternate
solution to
doing this in Nova or on the client side.  I think we're
missing some sort
of OpenStack API and service that can handle this.  Nova is
a low level
infrastructure API and service, it is not designed to handle
these
orchestrations.  I haven't checked in on Heat in a while but
perhaps this
is a role that it could fill.

I think that too many people consider Nova to be *the*
OpenStack API when
considering instances/volumes/networking/images and that's
not something I
would like to see continue.  Or at the very least I would
like to see a
split between the orchestration/proxy pieces and the "manage my
VM/container/baremetal" bits



(new thread)
You've hit on one of my biggest issues right now: As far as many
deployers
and consumers are concerned (and definitely what I tell my users
within
Yahoo): The value of an OpenStack value-stream (compute,
network, storage)
is to provide a single consistent API for abstracting and
managing those
infrastructure resources.

Take networking: I can manage Firewalls, switches, IP selection,
SDN, etc
through Neutron. But for compute, If I want VM I go through
Nova, for
Baremetal I can -mostly- go through Nova, and for containers I
would talk
to Magnum or use something like the nova docker driver.

This means that, by default, Nova -is- the closest thing to a
top level
abstraction layer for compute. But if that is explicitly against
Nova's
charter, and Nova isn't going to be the top level abstraction
for all
things Compute, then something else needs to fill that space.
When that
happens, all things common to compute provisioning should come
out of Nova
and move into that new API. Availability zones, Quota, etc.


I do think Nova is the top level abstraction layer for compute.  My
issue is when Nova is asked to manage other resources.  There's no API
call to tell Cinder "create a volume and attach it to this instance,
and create that instance if it doesn't exist."  And I'm not sure why
the reverse isn't true.

I want Nova to be the absolute best API for managing compute resources.
It's when someone is managing compute and volumes and networks together
that I don't feel that Nova is the best place for that.  Most
importantly right now it seems that not everyone is on the same page on
this and I think it would be beneficial to come together and figure out
what sort of workloads the Nova API is intending to provide.
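
For comparison, this is roughly the client-side orchestration required
today with the 2015-era python-cinderclient and python-novaclient (a
minimal sketch; session support in both clients is assumed, image_id
and flavor_id are assumed to be known, and the polling loops are
elided):

from cinderclient import client as cinder_client
from keystoneclient.auth.identity import v3
from keystoneclient import session
from novaclient import client as nova_client

auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)
cinder = cinder_client.Client('2', session=sess)
nova = nova_client.Client('2', session=sess)

# Step 1: create the volume, then poll until its status is 'available'.
volume = cinder.volumes.create(size=10, name='data')

# Step 2: boot the instance, then poll until it is ACTIVE.
server = nova.servers.create(name='vm1', image=image_id, flavor=flavor_id)

# Step 3: only now can the volume be attached to the instance.
nova.volumes.create_server_volume(server.id, volume.id)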



-James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [OPNFV] [Functest] Tempest & Rally

2015-09-25 Thread Boris Pavlovic
Morgan,


You should add at least:

sla:
  failure_rate:
    max: 0

Otherwise rally will pass 100% no matter what is happening.
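
For context, a complete Rally task with that SLA block would look
roughly like this (illustrative only, based on the Rally task format at
the time; the scenario name and args are just examples):

---
NovaServers.boot_and_delete_server:
  -
    args:
      flavor:
        name: "m1.tiny"
      image:
        name: "cirros"
    runner:
      type: "constant"
      times: 10
      concurrency: 2
    sla:
      failure_rate:
        max: 0    # any failed iteration fails the whole task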


Best regards,
Boris Pavlovic

On Thu, Sep 24, 2015 at 10:29 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
viktor.tikka...@nokia.com> wrote:

> Hi Morgan
>
> and thank you for the overview.
>
> I'm now waiting for the POD#2 VPN profile (will be ready soon). We will
> try then to figure out what OpenStack/tempest/rally configuration changes
> are needed in order to get rid of those test failures.
>
> I suppose that most of the problems (like "Multiple possible networks
> found" etc.) are relatively easy to solve.
>
> BTW, since tempest is being currently developed in "branchless" mode
> (without release specific stable versions), do we have some common
> understanding/requirements for how "dynamically" Functest should use its code?
>
> For example, config_functest.py seems to contain routines for
> cloning/installing rally (and indirectly tempest) code, does it mean that
> the code will be cloned/installed at the time when the test set is executed
> for the first time? (I'm just wondering if it is necessary or not to
> "freeze" somehow used code for each OPNFV release to make sure that it will
> remain compatible and that test results will be comparable between
> different OPNFV setups).
>
> -Viktor
>
> > -Original Message-
> > From: EXT morgan.richo...@orange.com [mailto:morgan.richo...@orange.com]
> > Sent: Thursday, September 24, 2015 4:56 PM
> > To: Kosonen, Juha (Nokia - FI/Espoo); Tikkanen, Viktor (Nokia - FI/Espoo)
> > Cc: Jose Lausuch
> > Subject: [OPNFV] [Functest] Tempest & Rally
> >
> > Hi,
> >
> > I was wondering whether you could have a look at Rally/Tempest tests we
> > automatically launch in Functest.
> > We still have some errors and I assume most of them are due to
> > misconfiguration and/or quota ...
> > With Jose, we planned to have a look after SR0 but we do not have much
> > time and we are not fully skilled (even if we progressed a little bit:))
> >
> > If you could have a look and give your feedback, it would be very
> > helpful, we could discuss it during an IRC weekly meeting
> > In Arno we did not use the SLA criteria, that is also something we could
> > do for the B Release
> >
> > for instance if you look at
> > https://build.opnfv.org/ci/view/functest/job/functest-foreman-
> > master/19/consoleText
> >
> > you will see rally and Tempest log
> >
> > Rally scenarios are a compilation of default Rally scenarios played one
> > after the other and can be found in
> >
> https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/suites
> >
> > the Rally artifacts are also pushed into the artifact server
> > http://artifacts.opnfv.org/
> > e.g.
> > http://artifacts.opnfv.org/functest/lf_pod2/2015-09-23_17-36-
> > 07/results/rally/opnfv-authenticate.html
> > look for 09-23 to get Rally json/html files and tempest.conf
> >
> > thanks
> >
> > Morgan
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] Candidate proposals for TC (Technical Committee) positions are now open

2015-09-25 Thread Tony Breeds
Candidate proposals for the Technical Committee positions (6 positions) are now
open and will remain open until October 1, 05:59 UTC.

All candidacies must be submitted as a text file to the openstack/election
repository as explained on the wiki[0].

Candidates for the Technical Committee Positions: Any Foundation individual
member can propose their candidacy for an available, directly-elected TC seat
(except the seven TC members who were elected for a one-year seat in April[1]).

The election will be held from October 2nd through to 23:59 October 09, 2015
UTC. The electorate are the Foundation individual members that are also
committers for one of the official programs' projects[2] over the Kilo-Liberty
timeframe (September 18, 2014 06:00 UTC to September 18, 2015 05:59 UTC), as
well as the extra-ATCs who are acknowledged by the TC.[3]

Please see the wikipage[0] for additional details about this election. Please
find below the timeline:

Nominations open  @ now
Nominations close @ 2015-10-01 05:59:00 UTC
Election open     @ 2015-10-02 ~16:00:00 UTC
Election close    @ 2015-10-09 23:59:59 UTC


If you have any questions please be sure to either voice them on the mailing
list or raise them with the election officials[4].

Thank you, and we look forward to reading your candidate proposals,

[0] https://wiki.openstack.org/wiki/TC_Elections_September/October_2015
[1] https://wiki.openstack.org/wiki/TC_Elections_April_2015#Results
[2] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections
Note the tag for this repo, sept-2015-elections.
[3] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs?id=sept-2015-elections
[4] Tony's email: tony at bakeyournoodle dot com
Tristan's email: tdecacqu at redhat dot com

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Define better terms for WAITING and DELAYED states

2015-09-25 Thread Nikolay Makhotkin
Thank you, Robert, for the detailed explanation!



> - RUNNING_DELAYED - a substate of RUNNING and it has exactly this
>   meaning: it’s generally running but delayed till some later time.
> - WAITING - it is not a substate of RUNNING and hence it means a task
>   has not started yet
>
>
Yes, I agree, we need to introduce a RUNNING_DELAYED state. It reflects that
the task is already running but delayed for a certain amount of time.

So we can proceed with this right in the Liberty cycle.
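
A minimal sketch of what the substate relationship means in code; the
constant values below are illustrative, not necessarily what
mistral.workflow.states will end up using:

RUNNING = 'RUNNING'
RUNNING_DELAYED = 'RUNNING_DELAYED'  # substate: running, but postponed
WAITING = 'WAITING'                  # not a substate: not started yet


def is_running(state):
    # Anything in the RUNNING family counts as running.
    return state in (RUNNING, RUNNING_DELAYED)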

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][Rally] Rally plugins reference is available

2015-09-25 Thread Aleksandr Maretskiy
Cool!

On Fri, Sep 25, 2015 at 3:05 AM, Boris Pavlovic  wrote:

> Hi stackers,
>
> As you know, Rally test cases are created as a mix of plugins.
>
> At this point of time we have more than 200 plugins for almost all
> OpenStack projects.
> Before, you had to analyze plugin code or use the "rally plugin find/list"
> commands to find the plugins you need, which was a pain in the neck.
>
> So finally we have auto generated plugin reference:
> https://rally.readthedocs.org/en/latest/plugin/plugin_reference.html
>
>
> Best regards,
> Boris Pavlovic
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-25 Thread thomas.morin

Hi everyone,

(TL;DR: we would like an L2 agent extension to be able to call methods 
on the agent class, e.g. OVSAgent)


In the networking-bgpvpn project, we need the reference driver to 
interact with the ML2 openvswitch agent with new RPCs to allow 
exchanging information with the BGP VPN implementation running on the 
compute nodes. We also need the OVS agent to setup specific things on 
the OVS bridges for MPLS traffic.


To extend the agent behavior, we currently create a new agent by
mimicking the main() in ovs_neutron_agent.py, but instead of
instantiating OVSAgent, we instantiate a class that overloads the
OVSAgent class with the additional behavior we need [1].


This is really not the ideal way of extending the agent, and we would 
prefer using the L2 agent extension framework [2].


Using the L2 agent extension framework would work, but only partially:
it would easily allow us to register our RPC consumers, but it would
not let us access the data structures/methods of the agent that we need
to use: setup_entry_for_arp_reply and local_vlan_map, plus access to
the OVSBridge objects to manipulate OVS ports.


I've filed an RFE bug to track this issue [5].

We would like something like one of the following:
1) augment the L2 agent extension interface (AgentCoreResourceExtension) 
to give access to the agent object (and thus let the extension call 
methods of the agent) by giving the agent as a parameter of the 
initialize method [4]
2) augment the L2 agent extension interface (AgentCoreResourceExtension) 
to give access to the agent object (and thus let the extension call 
methods of the agent) by giving the agent as a parameter of a new 
setAgent method
3) augment the L2 agent extension interface (AgentCoreResourceExtension) 
to give access only to specific/chosen methods on the agent object, for 
instance by giving a dict as a parameter of the initialize method [4], 
whose keys would be method names, and values would be pointers to these 
methods on the agent object
4) define a new interface with methods to access things inside the
agent; this interface would be implemented by an object instantiated by
the agent, and the agent would pass it to the extension manager, thus
allowing the extension manager to pass the object to an extension
through the initialize method of AgentCoreResourceExtension [4] (a
rough sketch of this option follows below)
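
To illustrate option 4, here is a rough, unmerged sketch; the class and
method names are invented here, though the exposed attributes mirror
the data structures/methods mentioned above (the OVS bridges,
local_vlan_map and setup_entry_for_arp_reply):

class OVSAgentExtensionAPI(object):
    """Facade the agent hands to extensions instead of itself."""

    def __init__(self, agent):
        self._agent = agent

    def request_int_br(self):
        # The integration bridge; read-only use expected.
        return self._agent.int_br

    def request_tun_br(self):
        # The tunnel bridge, e.g. for adding patch ports and flows.
        return self._agent.tun_br

    def get_local_vlan_map(self):
        # Return a copy so extensions cannot mutate agent state.
        return dict(self._agent.local_vlan_map)

    def setup_entry_for_arp_reply(self, *args, **kwargs):
        # One of the few deliberately re-exported mutating operations.
        return self._agent.setup_entry_for_arp_reply(*args, **kwargs)

The agent would build one of these and pass it to the extension
manager, which would in turn pass it to each extension's initialize();
extensions would then never see the agent object itself.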


Any feedback on these ideas...?
Of course any other idea is welcome...

For the sake of triggering reaction, the question could be rephrased as: 
if we submit a change doing (1) above, would it have a reasonable chance 
of merging?


-Thomas

[1] 
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py

[2] https://review.openstack.org/#/c/195439/
[3] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
[4] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28

[5] https://bugs.launchpad.net/neutron/+bug/1499637


_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Ironic is open for Mitaka development

2015-09-25 Thread Thierry Carrez
Jim Rollenhagen wrote:
> I've proposed a patch to release Ironic 4.2.0, and we will be cutting
> the stable/liberty branch from the same SHA:
> https://review.openstack.org/#/c/227582/

It's now released at:
https://launchpad.net/ironic/liberty/4.2.0

and the Liberty release branch was cut at:
http://git.openstack.org/cgit/openstack/ironic/log/?h=stable/liberty

> This means Ironic is now open for Mitaka development; commit away!
> [...]

Yay!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-09-25 Thread John Spray
On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo  wrote:
> So here are questions from my side.
> Just question.
>
>
>  1.What is the biggest advantage compared to others such as RBD?
>   We should be able to implement what you are going to do in
>   existing modules, shouldn't we?

I guess you mean compared to using a local filesystem on top of RBD,
and exporting it over NFS?  The main distinction here is that for
native CephFS clients, they get a shared filesystem where all the
clients can talk to all the Ceph OSDs directly, and avoid the
potential bottleneck of an NFS->local fs->RBD server.

Workloads requiring a local filesystem would probably continue to map
a cinder block device and use that.  The Manila driver is intended for
use cases that require a shared filesystem.

>  2.What are you going to focus on with a new implementation?
>   It seems to be to use NFS in front of that implementation
>   more transparently.

The goal here is to make cephfs accessible to people by making it easy
to provision it for their applications, just like Manila in general.
The motivation for putting an NFS layer in front of CephFS is to make
it easier for people to adopt, because they won't need to install any
ceph-specific code in their guests.  It will also be easier to
support, because any ceph client bugfixes would not need to be
installed within guests (if we assume existing nfs clients are bug
free :-))

>  3.What are you thinking of integration with OpenStack using
>   a new implementation?
>   Since it's going to be new kind of, there should be differ-
>   ent architecture.

Not sure I understand this question?

>  4.Is this implementation intended for OneStack integration
>   mainly?

Nope (I had not heard of onestack before).

> Since velocity of OpenStack feature expansion is much more than
> it used to be, it's much more important to think of performance.

> Is a new implementation also going to improve Ceph integration
> with OpenStack system?

This piece of work is specifically about Manila; general improvements
in Ceph integration would be a different topic.

Thanks,
John

>
> Thank you so much for your explanation in advance.
>
> Shinobu
>
> - Original Message -
> From: "John Spray" 
> To: openstack-dev@lists.openstack.org, "Ceph Development" 
> 
> Sent: Thursday, September 24, 2015 10:49:17 PM
> Subject: [openstack-dev] [Manila] CephFS native driver
>
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.
>
> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable snapshots).  Currently deletion is
> just a move into a 'trash' directory, the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!
>
> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.
>
> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The OSD access control
> operates on a per-pool basis, and creating a separate pool for each
> share is inefficient.  In the future it is expected that CephFS will
> be extended to support file layouts that use RADOS namespaces, which
> are cheap, such that we can issue a new namespace to each share and
> 

Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-25 Thread Kevin Benton
I think the 4th of the options you proposed would be the best. We don't
want to give agents direct access to the agent object or else we will run
the risk of breaking extensions all of the time during any kind of
reorganization or refactoring. Having a well defined API in between will
give us flexibility to move things around.

On Fri, Sep 25, 2015 at 1:32 AM,  wrote:

> Hi everyone,
>
> (TL;DR: we would like an L2 agent extension to be able to call methods on
> the agent class, e.g. OVSAgent)
>
> In the networking-bgpvpn project, we need the reference driver to interact
> with the ML2 openvswitch agent with new RPCs to allow exchanging
> information with the BGP VPN implementation running on the compute nodes.
> We also need the OVS agent to setup specific things on the OVS bridges for
> MPLS traffic.
>
> To extend the agent behavior, we currently create a new agent by mimicking
> the main() in ovs_neutron_agent.py, but instead of instantiating OVSAgent,
> we instantiate a class that overloads the OVSAgent class with the
> additional behavior we need [1].
>
> This is really not the ideal way of extending the agent, and we would
> prefer using the L2 agent extension framework [2].
>
> Using the L2 agent extension framework would work, but only partially: it
> would easily allow us to register our RPC consumers, but it would not let us
> access the data structures/methods of the agent that we need to use:
> setup_entry_for_arp_reply and local_vlan_map, plus access to the OVSBridge
> objects to manipulate OVS ports.
>
> I've filed an RFE bug to track this issue [5].
>
> We would like something like one of the following:
> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
> to give access to the agent object (and thus let the extension call methods
> of the agent) by giving the agent as a parameter of the initialize method
> [4]
> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
> to give access to the agent object (and thus let the extension call methods
> of the agent) by giving the agent as a parameter of a new setAgent method
> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
> to give access only to specific/chosen methods on the agent object, for
> instance by giving a dict as a parameter of the initialize method [4],
> whose keys would be method names, and values would be pointers to these
> methods on the agent object
> 4) define a new interface with methods to access things inside the agent,
> this interface would be implemented by an object instantiated by the agent,
> and that the agent would pass to the extension manager, thus allowing the
> extension manager to pass the object to an extension through the
> initialize method of AgentCoreResourceExtension [4]
>
> Any feedback on these ideas...?
> Of course any other idea is welcome...
>
> For the sake of triggering reaction, the question could be rephrased as:
> if we submit a change doing (1) above, would it have a reasonable chance of
> merging?
>
> -Thomas
>
> [1]
> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
> [2] https://review.openstack.org/#/c/195439/
> [3]
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
> [4]
> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton

[openstack-dev] [Neutron] [Ceilometer] Liberty RC1 available

2015-09-25 Thread Thierry Carrez
Hello everyone,

Ceilometer and Neutron just produced their first release candidate for
the end of the Liberty cycle. The RC1 tarballs, as well as a list of
last-minute features and fixed bugs since liberty-1, are available at:

https://launchpad.net/ceilometer/liberty/liberty-rc1
https://launchpad.net/neutron/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/ceilometer/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/ceilometer/+filebug
or
https://bugs.launchpad.net/neutron/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Ceilometer and Neutron are now
officially open for Mitaka development, so feature freeze restrictions
no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-25 Thread Kevin Benton
Sorry, that should have said, "We don't want to give extensions direct
access to the agent object..."

On Fri, Sep 25, 2015 at 1:57 AM, Kevin Benton  wrote:

> I think the 4th of the options you proposed would be the best. We don't
> want to give agents direct access to the agent object or else we will run
> the risk of breaking extensions all of the time during any kind of
> reorganization or refactoring. Having a well defined API in between will
> give us flexibility to move things around.
>
> On Fri, Sep 25, 2015 at 1:32 AM,  wrote:
>
>> Hi everyone,
>>
>> (TL;DR: we would like an L2 agent extension to be able to call methods on
>> the agent class, e.g. OVSAgent)
>>
>> In the networking-bgpvpn project, we need the reference driver to
>> interact with the ML2 openvswitch agent with new RPCs to allow exchanging
>> information with the BGP VPN implementation running on the compute nodes.
>> We also need the OVS agent to setup specific things on the OVS bridges for
>> MPLS traffic.
>>
>> To extend the agent behavior, we currently create a new agent by
>> mimicking the main() in ovs_neutron_agent.py, but instead of
>> instantiating OVSAgent, we instantiate a class that overloads the
>> OVSAgent class with the additional behavior we need [1].
>>
>> This is really not the ideal way of extending the agent, and we would
>> prefer using the L2 agent extension framework [2].
>>
>> Using the L2 agent extension framework would work, but only partially: it
>> would easily allow us to register our RPC consumers, but it would not let
>> us access the data structures/methods of the agent that we need to use:
>> setup_entry_for_arp_reply and local_vlan_map, plus access to the OVSBridge
>> objects to manipulate OVS ports.
>>
>> I've filed an RFE bug to track this issue [5].
>>
>> We would like something like one of the following:
>> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of the initialize method
>> [4]
>> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of a new setAgent method
>> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access only to specific/chosen methods on the agent object, for
>> instance by giving a dict as a parameter of the initialize method [4],
>> whose keys would be method names, and values would be pointers to these
>> methods on the agent object
>> 4) define a new interface with methods to access things inside the agent,
>> this interface would be implemented by an object instantiated by the agent,
>> and that the agent would pass to the extension manager, thus allowing the
>> extension manager to pass the object to an extension through the
>> initialize method of AgentCoreResourceExtension [4]
>>
>> Any feedback on these ideas...?
>> Of course any other idea is welcome...
>>
>> For the sake of triggering reaction, the question could be rephrased as:
>> if we submit a change doing (1) above, would it have a reasonable chance of
>> merging?
>>
>> -Thomas
>>
>> [1]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
>> [2] https://review.openstack.org/#/c/195439/
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
>> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>>

Re: [openstack-dev] [OPNFV] [Functest] Tempest & Rally

2015-09-25 Thread Boris Pavlovic
Jose,


Rally community provides official docker images here:
https://hub.docker.com/r/rallyforge/rally/
So I would suggest to use them.


Best regards,
Boris Pavlovic



On Fri, Sep 25, 2015 at 5:07 AM, Jose Lausuch 
wrote:

> Hi,
>
>
>
> Thanks for the hint Boris.
>
>
>
> Regarding what we do at Functest with Rally, yes, we clone the latest from
> the Rally repo. We thought about that before, and about the possible errors
> it can bring: compatibility and so on.
>
>
>
> As I am working on a Docker image where the whole Functest environment will
> be pre-installed, we might get rid of such potential problems. But that
> image will need constant updates if there are major patches/bugfixes in the
> Rally repo.
>
>
>
> What is your opinion on this? What do you think makes more sense?
>
>
>
> /Jose
>
>
>
>
>
>
>
> *From:* bo...@pavlovic.ru [mailto:bo...@pavlovic.ru] *On Behalf Of *Boris
> Pavlovic
> *Sent:* Friday, September 25, 2015 7:56 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* EXT morgan.richo...@orange.com; Kosonen, Juha (Nokia - FI/Espoo);
> Jose Lausuch
> *Subject:* Re: [openstack-dev] [OPNFV] [Functest] Tempest & Rally
>
>
>
> Morgan,
>
>
>
>
>
> You should add at least:
>
>
>
> sla:
>   failure_rate:
>     max: 0
>
>
>
> Otherwise rally will pass 100% no matter what is happening.
>
>
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Thu, Sep 24, 2015 at 10:29 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
> viktor.tikka...@nokia.com> wrote:
>
> Hi Morgan
>
> and thank you for the overview.
>
> I'm now waiting for the POD#2 VPN profile (will be ready soon). We will
> try then to figure out what OpenStack/tempest/rally configuration changes
> are needed in order to get rid of those test failures.
>
> I suppose that most of the problems (like "Multiple possible networks
> found" etc.) are relatively easy to solve.
>
> BTW, since tempest is being currently developed in "branchless" mode
> (without release specific stable versions), do we have some common
> understanding/requirements for how "dynamically" Functest should use its code?
>
> For example, config_functest.py seems to contain routines for
> cloning/installing rally (and indirectly tempest) code, does it mean that
> the code will be cloned/installed at the time when the test set is executed
> for the first time? (I'm just wondering if it is necessary or not to
> "freeze" somehow used code for each OPNFV release to make sure that it will
> remain compatible and that test results will be comparable between
> different OPNFV setups).
>
> -Viktor
>
> > -Original Message-
> > From: EXT morgan.richo...@orange.com [mailto:morgan.richo...@orange.com]
> > Sent: Thursday, September 24, 2015 4:56 PM
> > To: Kosonen, Juha (Nokia - FI/Espoo); Tikkanen, Viktor (Nokia - FI/Espoo)
> > Cc: Jose Lausuch
> > Subject: [OPNFV] [Functest] Tempest & Rally
> >
> > Hi,
> >
> > I was wondering whether you could have a look at Rally/Tempest tests we
> > automatically launch in Functest.
> > We still have some errors and I assume most of them are due to
> > misconfiguration and/or quota ...
> > With Jose, we planned to have a look after SR0 but we do not have much
> > time and we are not fully skilled (even if we progressed a little bit:))
> >
> > If you could have a look and give your feedback, it would be very
> > helpful, we could discuss it during an IRC weekly meeting
> > In Arno we did not use the SLA criteria, that is also something we could
> > do for the B Release
> >
> > for instance if you look at
> > https://build.opnfv.org/ci/view/functest/job/functest-foreman-
> > master/19/consoleText
> >
> > you will see rally and Tempest log
> >
> > Rally scenarios are a compilation of default Rally scenarios played one
> > after the other and can be found in
> >
> https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/suites
> >
> > the Rally artifacts are also pushed into the artifact server
> > http://artifacts.opnfv.org/
> > e.g.
> > http://artifacts.opnfv.org/functest/lf_pod2/2015-09-23_17-36-
> > 07/results/rally/opnfv-authenticate.html
> > look for 09-23 to get Rally json/html files and tempest.conf
> >
> > thanks
> >
> > Morgan
> >

[openstack-dev] VDI questions

2015-09-25 Thread John Hunter
Hi guys,
I am new to OpenStack, so I want to ask: is there a project that
works on VDI (Virtual Desktop Infrastructure) based on OpenStack?
I want to dig into it, so please help.

Sincerely,
Zhao

-- 
Best regards
Junwang Zhao
Department of Computer Science 
Peking University
Beijing, 100871, PRC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] tripleo.org theme

2015-09-25 Thread Dan Prince
It has come to my attention that we aren't making great use of our
tripleo.org domain. One thing that would be useful would be to have the
new tripleo-docs content displayed there. It would also be nice to have
quick links to some of our useful resources, perhaps Derek's CI report
[1], a custom Reviewday page for TripleO reviews (something like this
[2]), and perhaps other links too. I'm thinking these go in the header,
and not just on some random TripleO docs page. Or perhaps both places.

I was thinking that instead of the normal OpenStack theme however we
could go a bit off the beaten path and do our own TripleO theme.
Basically a custom tripleosphinx project that we ninja in as a
replacement for oslosphinx.

We could get our own mascot... or do something silly with words. I'm
reaching out to graphics artists who could help with this sort of
thing... but before that decision is made I wanted to ask for
thoughts on the matter here first.
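
For the curious, the swap described above would presumably be a small
change in each project's docs conf.py; the 'tripleosphinx' package and
theme name below are hypothetical, pending the actual project:

# docs/source/conf.py (sketch)
extensions = [
    'sphinx.ext.autodoc',
    'tripleosphinx',   # hypothetical drop-in replacement for 'oslosphinx'
]
html_theme = 'tripleo'  # theme name TBD, pending the graphics work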

Speak up... it would be nice to have this wrapped up before Tokyo.

[1] http://goodsquishy.com/downloads/tripleo-jobs.html
[2] http://status.openstack.org/reviews/

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Apache2 vs uWSGI vs ...

2015-09-25 Thread Adam Heczko
Are we discussing mod_wsgi and Keystone, or OpenStack in general?
If it's the Keystone-specific use case, then Apache probably provides the
broadest choice of tested external authenticators.
I'm not against uwsgi at all, but to be honest, the expectation that nginx
could substitute for Apache in terms of authentication providers is simply
unrealistic.

A.


On Fri, Sep 25, 2015 at 1:24 PM, Sergii Golovatiuk  wrote:

> Alexandr,
>
> OAuth, Shibboleth & OpenID support are very Keystone-specific features.
> Many other OpenStack projects don't need these modules at all, but they may
> require a faster HTTP server (lighttpd/nginx).
>
> For all projects we may use the "HTTP server -> uwsgi" model and leave
> Apache for Keystone as "HTTP server -> apache -> uwsgi/mod_wsgi". However, I
> would like to think about the whole OpenStack ecosystem in general. In that
> case we'll minimize the number of programs an operator needs to know.
>
>
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Fri, Sep 18, 2015 at 4:28 PM, Alexander Makarov 
> wrote:
>
>> Please consider that we use some apache mods - does
>> nginx/uwsgi/gunicorn have oauth, shibboleth & openid support?
>>
>> On Fri, Sep 18, 2015 at 4:54 PM, Vladimir Kuklin 
>> wrote:
>> > Folks
>> >
>> > I think we do not need to switch to nginx-only or consider any kind of
>> > war between nginx and apache adherents. Everyone should be able to use
>> > the web server he or she needs without being pinned to an unwanted one.
>> > It is like the Postgres vs MySQL war. Why not support both?
>> >
>> > Maybe someone does not need something that apache supports and nginx
>> > does not, but needs nginx features which apache does not support. Let's
>> > let our users decide what they want.
>> >
>> > And the first step should be simple here - support for uwsgi. It will
>> > allow for usage of any web server that can work with uwsgi. It will
>> > also allow us to check for the support of all apache-like bindings like
>> > SPNEGO or whatever and provide our users with enough info for making
>> > decisions. I did not personally test nginx modules for SAML and SPNEGO,
>> > but I am pretty confident about the TLS/SSL parts of nginx.
>> >
>> > Moreover, nginx will allow you to do things you cannot do with apache,
>> > e.g. do smart load balancing, which may be crucial for high-loaded
>> > installations.
>> >
>> >
>> > On Fri, Sep 18, 2015 at 4:12 PM, Adam Young  wrote:
>> >>
>> >> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>> >>
>> >> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>> >>
>> >> In the fuel project, we recently ran into a couple of issues with
>> >> Apache2 + mod_wsgi as we switched Keystone to run. Please see [1]
>> >> and [2].
>> >>
>> >> Looking deeper into Apache2 issues, specifically around "apache2ctl
>> >> graceful", module loading/unloading, and the hooks used by mod_wsgi
>> >> [3], I started wondering if Apache2 + mod_wsgi is the "right"
>> >> solution and if there was something better that people are already
>> >> using.
>> >>
>> >> One data point that keeps coming up is: all the CI jobs use Apache2 +
>> >> mod_wsgi, so it must be the best solution... Is it? If not, what is?
>> >>
>> >> Disclaimer: it's been a while since I've cared about performance with a
>> >> web server in front of a Python app.
>> >>
>> >> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
>> >> on again. In general, I seem to remember it being thought of as a bit
>> >> old and crusty, but mostly working.
>> >>
>> >>
>> >> I am not aware of that.  It has been the workhorse of the Python/wsgi
>> >> world for a while, and we use it heavily.
>> >>
>> >> At a previous job, we switched from Apache2 + mod_wsgi to nginx +
>> uwsgi[0]
>> >> and saw a significant performance increase. This was a Django app.
>> uwsgi
>> >> is fairly straightforward to operate and comes loaded with a myriad of
>> >> options[1] to help folks make the most of it. I've played with Ironic
>> >> behind uwsgi and it seemed to work fine, though I haven't done any sort
>> >> of load testing. I'd encourage folks to give it a shot. :)
>> >>
>> >>
>> >> Again, switching web servers is as likely to introduce as to solve
>> >> problems.  If there are performance issues:
>> >>
>> >> 1.  Identify what causes them
>> >> 2.  Change configuration settings to deal with them
>> >> 3.  Fix upstream bugs in the underlying system.
>> >>
>> >>
>> >> Keystone is not about performance.  Keystone is about security.  The
>> cloud
>> >> is designed to scale horizontally first.  Before advocating switching
>> >> to a different web server, make sure it supports the technologies
>> >> required.
>> >>
>> >>
>> >> 1. TLS at the latest level
>> >> 2. Kerberos/GSSAPI/SPNEGO
>> >> 3. X509 Client cert validation
>> >> 4. SAML
>> >>
>> >> OpenID connect would be a good one to add to the list;  Its 

[openstack-dev] [mistral] Mistral Liberty RC1 available

2015-09-25 Thread Renat Akhmerov
Hi,

Mistral Liberty RC1 release is available. The exact version name is 1.0.0.0rc1.

Look at the release page to see the list of bugs fixed during the RC1 cycle:
https://launchpad.net/mistral/liberty/liberty-rc1 


Thanks!

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Dmitriy Ukhlov
Hello stackers,

I'm working on a new oslo.messaging RabbitMQ driver implementation which uses
the pika client library instead of kombu. It is related to
https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
In this letter I want to share current results and hopefully get first
feedback from you.
The code is now available here:
https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py

Current status of this code:
- pika driver passes functional tests
- pika driver passes tempest smoke tests
- pika driver passes almost all tempest full tests (except 5), but it seems
the reason is not related to oslo.messaging
Also I created small devstack patch to support pika driver testing on gate (
https://review.openstack.org/#/c/226348/)
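
For anyone who wants to try it, here is an untested sketch of pointing
oslo.messaging at the new driver, assuming it registers under the
'pika' transport scheme as the blueprint implies:

from oslo_config import cfg
import oslo_messaging

# Only the transport URL scheme changes; the rest of the RPC code
# (targets, clients, servers) stays driver-agnostic.
transport = oslo_messaging.get_transport(
    cfg.CONF, url='pika://guest:guest@localhost:5672/')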

Next steps:
- communicate with Manish (blueprint owner)
- write spec to this blueprint
- send a review with this patch when spec and devstack patch get merged.

Thank you.


-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-09-25 Thread Shinobu Kinjo
Thanks!
Keep me in the loop.

Shinobu

- Original Message -
From: "John Spray" 
To: "Shinobu Kinjo" 
Cc: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Friday, September 25, 2015 6:54:09 PM
Subject: Re: [openstack-dev] [Manila] CephFS native driver

On Fri, Sep 25, 2015 at 10:16 AM, Shinobu Kinjo  wrote:
> Thank you for your reply.
>
>> The main distinction here is that for
>> native CephFS clients, they get a shared filesystem where all the
>> clients can talk to all the Ceph OSDs directly, and avoid the
>> potential bottleneck of an NFS->local fs->RBD server.
>
> As you know each pass from clients to rados is:
>
>  1) CephFS
>   [Apps] -> [VFS] -> [Kernel Driver] -> [Ceph-Kernel Client]
>-> [MON], [MDS], [OSD]
>
>  2) RBD
>   [Apps] -> [VFS] -> [librbd] -> [librados] -> [MON], [OSD]
>
> Considering the above, there could be more of a bottleneck in 1) than
> in 2), I think.
>
> What do you think?

The bottleneck I'm talking about is when you share the filesystem
between many guests.  In the RBD image case, you would have a single
NFS server, through which all the data and metadata would have to
flow: that becomes a limiting factor.  In the CephFS case, the clients
can talk to the MDS and OSD daemons individually, without having to
flow through one NFS server.

The preference depends on the use case: the benefits of a shared
filesystem like CephFS don't become apparent until you have lots of
guests using the same shared filesystem.  I'd expect people to keep
using Cinder+RBD for cases where a filesystem is just exposed to one
guest at a time.

>>  3.What are you thinking of integration with OpenStack using
>>   a new implementation?
>>   Since it's going to be new kind of, there should be differ-
>>   ent architecture.
>
> Sorry, it's just too ambiguous. Frankly, how are you going to
> implement such a new feature, was my question.
>
> Make sense?

Right now this is just about building Manila drivers to enable use of
Ceph, rather than re-architecting anything.  A user would create a
conventional Ceph cluster and a conventional OpenStack cluster, this
is just about enabling the use of the two together via Manila (i.e. to
do for CephFS/Manila what is already done for RBD/Cinder).

I expect there will be more discussion later about exactly what the
NFS layer will look like, though we can start with the simple case of
creating a guest VM that acts as a gateway.

>>  4.Is this implementation intended for OneStack integration
>>   mainly?
>
> Yes, that's just my typo -;
>
>  OneStack -> OpenStack

Naturally the Manila part is just for openstack.  However, some of the
utility parts (e.g. the "VolumeClient" class) might get re-used in
other systems that require a similar concept (like containers, other
clouds).

John

>
>
>> This piece of work is specifically about Manila; general improvements
>> in Ceph integration would be a different topic.
>
> That's interesting to me.
>
> Shinobu
>
> - Original Message -
> From: "John Spray" 
> To: "Shinobu Kinjo" 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Friday, September 25, 2015 5:51:36 PM
> Subject: Re: [openstack-dev] [Manila] CephFS native driver
>
> On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo  wrote:
>> So here are questions from my side.
>> Just question.
>>
>>
>>  1.What is the biggest advantage compared to others such as RBD?
>>   We should be able to implement what you are going to do in
>>   existing modules, shouldn't we?
>
> I guess you mean compared to using a local filesystem on top of RBD,
> and exporting it over NFS?  The main distinction here is that for
> native CephFS clients, they get a shared filesystem where all the
> clients can talk to all the Ceph OSDs directly, and avoid the
> potential bottleneck of an NFS->local fs->RBD server.
>
> Workloads requiring a local filesystem would probably continue to map
> a cinder block device and use that.  The Manila driver is intended for
> use cases that require a shared filesystem.
>
>>  2.What are you going to focus on with a new implementation?
>>   It seems to be to use NFS in front of that implementation
>>   more transparently.
>
> The goal here is to make cephfs accessible to people by making it easy
> to provision it for their applications, just like Manila in general.
> The motivation for putting an NFS layer in front of CephFS is to make
> it easier for people to adopt, because they won't need to install any
> ceph-specific code in their guests.  It will also be easier to
> support, because any ceph client bugfixes would not need to be
> installed within guests (if we assume existing nfs clients are bug
> free :-))
>
>>  3.What are you thinking of integration with OpenStack using
>>   a new implementation?
>>   Since it's going to be new kind of, 

Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-25 Thread thomas.morin

Kevin, Miguel,

I agree that (4) is what makes most sense.
(more below)

Miguel Angel Ajo wrote:

Do you have a rough idea of what operations you may need to do?


Right now, what bagpipe driver for networking-bgpvpn needs to interact 
with is:

- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP entries)

Please bear in mind, the extension interface will be available from
different agent types (OVS, SR-IOV, [eventually LB]), so this interface
you're talking about could also serve as a translation driver for the
agents (where the translation is possible). I totally understand that
most extensions are bound to a specific agent, and we must be able to
identify exactly which agent we're serving.


Yes, I do have this in mind, but what we've identified for now seems to 
be OVS specific.


-Thomas




Kevin Benton wrote:

I think the 4th of the options you proposed would be the best. We don't
want to give agents direct access to the agent object or else we will
run the risk of breaking extensions all of the time during any kind of
reorganization or refactoring. Having a well defined API in between will
give us flexibility to move things around.

On Fri, Sep 25, 2015 at 1:32 AM, wrote:


Hi everyone,

(TL;DR: we would like an L2 agent extension to be able to call methods
on the agent class, e.g. OVSAgent)

In the networking-bgpvpn project, we need the reference driver to
interact with the ML2 openvswitch agent with new RPCs to allow
exchanging information with the BGP VPN implementation running on the
compute nodes. We also need the OVS agent to setup specific things on
the OVS bridges for MPLS traffic.

To extend the agent behavior, we currently create a new agent by
mimicking the main() in ovs_neutron_agent.py, but instead of
instantiating OVSAgent, we instantiate a class that overloads the
OVSAgent class with the additional behavior we need [1].

This is really not the ideal way of extending the agent, and we would
prefer using the L2 agent extension framework [2].

Using the L2 agent extension framework would work, but only partially:
it would easily allow us to register our RPC consumers, but it would
not let us access the data structures/methods of the agent that we need
to use: setup_entry_for_arp_reply and local_vlan_map, plus access to
the OVSBridge objects to manipulate OVS ports.

I've filed an RFE bug to track this issue [5].

We would like something like one of the following:
1) augment the L2 agent extension interface 
(AgentCoreResourceExtension)
to give access to the agent object (and thus let the extension call 
methods
of the agent) by giving the agent as a parameter of the initialize 
method

[4]
2) augment the L2 agent extension interface 
(AgentCoreResourceExtension)
to give access to the agent object (and thus let the extension call 
methods
of the agent) by giving the agent as a parameter of a new setAgent 
method
3) augment the L2 agent extension interface 
(AgentCoreResourceExtension)

to give access only to specific/chosen methods on the agent object, for
instance by giving a dict as a parameter of the initialize method [4],
whose keys would be method names, and values would be pointer to these
methods on the agent object
4) define a new interface with methods to access things inside the 
agent,
this interface would be implemented by an object instantiated by the 
agent,
and that the agent would pass to the extension manager, thus 
allowing the

extension manager to passe the object to an extension through the
initialize method of AgentCoreResourceExtension [4]

Any feedback on these ideas...?
Of course any other idea is welcome...

For the sake of triggering reaction, the question could be rephrased as:
if we submit a change doing (1) above, would it have a reasonable chance
of merging?

-Thomas

[1]
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py 


[2] https://review.openstack.org/#/c/195439/
[3]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30 


[4]
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28 


[5] https://bugs.launchpad.net/neutron/+bug/1499637


Re: [openstack-dev] Apache2 vs uWSGI vs ...

2015-09-25 Thread David Stanek
On Fri, Sep 25, 2015 at 8:25 AM Adam Heczko  wrote:

> Are we discussing mod_wsgi and Keystone or OpenStack as a general?
> If Keystone specific use case, then probably Apache provides broadest
> choice of tested external authenticators.
> I'm not against uwsgi at all, but to be honest expectation that nginx
> could substitute Apache in terms of authentication providers is simply
> unrealistic.
>

uwsgi isn't a replacement for Apache. It's a replacement for mod_wsgi. It
just so happens that it also lets you use other web servers if that's what
your use case dictates.

As a Keystone developer I don't want to tell deployers that they have to
use Apache. It should be their choice. Since Apache is the most common web
server in our community I think we should continue to provide example
configurations and guidance for it.


-- David
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

